Bahri, A.; Bendersky, M.; Cohen, F. R.; Gitler, S.
2009-01-01
This article gives a natural decomposition of the suspension of a generalized moment-angle complex or partial product space which arises as the polyhedral product functor described below. The introduction and application of the smash product moment-angle complex provides a precise identification of the stable homotopy type of the values of the polyhedral product functor. One direct consequence is an analysis of the associated cohomology. For the special case of the complements of certain subspace arrangements, the geometrical decomposition implies the homological decomposition in earlier work of others as described below. Because the splitting is geometric, an analogous homological decomposition for a generalized moment-angle complex applies for any homology theory. Implied, therefore, is a decomposition for the Stanley–Reisner ring of a finite simplicial complex, and natural generalizations. PMID:19620727
NASA Astrophysics Data System (ADS)
Ladriere, J.
1992-04-01
The thermal decompositions of K3Fe(ox)3·3H2O and K2Fe(ox)2·2H2O in nitrogen have been studied using Mössbauer spectroscopy, X-ray diffraction and thermal analysis methods in order to determine the nature of the solid residues obtained after each stage of decomposition. In particular, after dehydration at 113 °C, the ferric complex is reduced to a ferrous compound, with a quadrupole splitting of 3.89 mm/s, which corresponds to the anhydrous form of K2Fe(ox)2·2H2O.
Water-splitting using photocatalytic porphyrin-nanotube composite devices
Shelnutt, John A [Tijeras, NM]; Miller, James E [Albuquerque, NM]; Wang, Zhongchun [Albuquerque, NM]; Medforth, Craig J [Winters, CA]
2008-03-04
A method for generating hydrogen by photocatalytic decomposition of water using porphyrin nanotube composites. In some embodiments, both hydrogen and oxygen are generated by photocatalytic decomposition of water.
CrossTalk: The Journal of Defense Software Engineering. Volume 27, Number 1, January/February 2014
2014-02-01
deficit in trustworthiness and will permit analysis on how this deficit needs to be overcome. This analysis will help identify adaptations that are... approaches to trustworthy analysis split into two categories: product-based and process-based. Product-based techniques [9] identify factors that... Criticalities may also be assigned to decompositions and contributions. 5. Evaluation and analysis: in this task the propagation rules of the NFR
NASA Astrophysics Data System (ADS)
Sronsri, Chuchai; Boonchom, Banjong
2018-04-01
A simple precipitation method was used to effectively synthesize a partially metal-doped phosphate hydrate (Mn0.9Mg0.1HPO4·3H2O), and the thermal decomposition of this hydrate precursor was then used to obtain Mn1.8Mg0.2P2O7 and LiMn0.9Mg0.1PO4 compounds under different conditions. To separate the overlapping thermal decomposition peaks, a deconvolution technique was used, and the separated peaks were applied to calculate the water content. Factor group splitting analysis was used to assign the vibrational spectra arising from the normal vibrations of the HPO42-, H2O, P2O74- and PO43- functional groups. Further, the deconvoluted bending mode of water was clearly observed. Mn0.9Mg0.1HPO4·3H2O was found to crystallize in the orthorhombic system with space group Pbca (D2h15). The number of formula units per unit cell is eight (Z = 8), and the site symmetry of HPO42- is Cs. For the HPO42- unit, the correlation field splitting analysis of type C3v - Cs - D2h15 was carried out and gave 96 internal modes, whereas H2O in the hydrate was described by C2v - Cs - D2h15 with 24 modes. The scheme C2v - Cs - C2h3 was used for the correlation field splitting analysis of P2O74- in Mn1.8Mg0.2P2O7 (monoclinic, C2/m (C2h3), Z = 2, 42 modes). Finally, the scheme Td - Cs - D2h16 was used for the correlation field splitting analysis of PO43- in LiMn0.9Mg0.1PO4 (orthorhombic, Pnma (D2h16), Z = 4, 36 modes).
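The internal-mode counts quoted above can be checked by multiplying the 3N - 6 internal coordinates of each molecular unit by the number of formula units per cell (a back-of-the-envelope verification added here for clarity, not part of the original analysis):

```latex
\begin{aligned}
\mathrm{HPO_4^{2-}}\ (N=6):&\quad (3\cdot 6-6)\times Z = 12\times 8 = 96 \text{ modes},\\
\mathrm{H_2O}\ (N=3):&\quad (3\cdot 3-6)\times Z = 3\times 8 = 24 \text{ modes},\\
\mathrm{P_2O_7^{4-}}\ (N=9):&\quad (3\cdot 9-6)\times Z = 21\times 2 = 42 \text{ modes},\\
\mathrm{PO_4^{3-}}\ (N=5):&\quad (3\cdot 5-6)\times Z = 9\times 4 = 36 \text{ modes}.
\end{aligned}
```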
Error analysis of multipoint flux domain decomposition methods for evolutionary diffusion problems
NASA Astrophysics Data System (ADS)
Arrarás, A.; Portero, L.; Yotov, I.
2014-01-01
We study space and time discretizations for mixed formulations of parabolic problems. The spatial approximation is based on the multipoint flux mixed finite element method, which reduces to an efficient cell-centered pressure system on general grids, including triangles, quadrilaterals, tetrahedra, and hexahedra. The time integration is performed by using a domain decomposition time-splitting technique combined with multiterm fractional step diagonally implicit Runge-Kutta methods. The resulting scheme is unconditionally stable and computationally efficient, as it reduces the global system to a collection of uncoupled subdomain problems that can be solved in parallel without the need for Schwarz-type iteration. Convergence analysis for both the semidiscrete and fully discrete schemes is presented.
NASA Technical Reports Server (NTRS)
Cooke, C. H.
1978-01-01
The paper describes the split-Cholesky strategy for banded matrices arising from the large systems of equations in certain fluid mechanics problems. The basic idea is that for a banded matrix the computation can be carried out in pieces, with only a small portion of the matrix residing in core. Mesh considerations are discussed by demonstrating the manner in which the assembly of finite element equations proceeds for linear trial functions on a triangular mesh. The FORTRAN code which implements the out-of-core decomposition strategy for banded symmetric positive definite matrices (mass matrices) of a coupled initial value problem is given.
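The out-of-core FORTRAN implementation referenced above is not reproduced here, but the core idea, factoring a banded symmetric positive definite system while storing only the band, can be illustrated with SciPy's banded Cholesky routines (a minimal in-core sketch; the tridiagonal matrix below is a made-up example, not the finite element mass matrix from the paper):

```python
import numpy as np
from scipy.linalg import cholesky_banded, cho_solve_banded

# Symmetric positive definite banded matrix in upper "banded" storage:
# row 0 holds the super-diagonal (first entry is padding), row 1 the diagonal.
# This is the classic tridiagonal matrix with 2 on the diagonal and -1 off it.
n = 8
ab = np.zeros((2, n))
ab[0, 1:] = -1.0   # super-diagonal
ab[1, :] = 2.0     # main diagonal

# Factor once; only the band is stored, which is what makes piecewise
# (out-of-core) processing of large banded systems feasible.
c = cholesky_banded(ab, lower=False)

# Solve A x = b by reusing the banded factor.
b = np.ones(n)
x = cho_solve_banded((c, False), b)
print(x)
```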
Domain decomposition: A bridge between nature and parallel computers
NASA Technical Reports Server (NTRS)
Keyes, David E.
1992-01-01
Domain decomposition is an intuitive organizing principle for a partial differential equation (PDE) computation, both physically and architecturally. However, its significance extends beyond the readily apparent issues of geometry and discretization, on one hand, and of modular software and distributed hardware, on the other. Engineering and computer science aspects are bridged by an old but recently enriched mathematical theory that offers the subject not only unity, but also tools for analysis and generalization. Domain decomposition induces function-space and operator decompositions with valuable properties. Function-space bases and operator splittings that are not derived from domain decompositions generally lack one or more of these properties. The evolution of domain decomposition methods for elliptically dominated problems has linked two major algorithmic developments of the last 15 years: multilevel and Krylov methods. Domain decomposition methods may be considered descendants of both classes with an inheritance from each: they are nearly optimal and at the same time efficiently parallelizable. Many computationally driven application areas are ripe for these developments. A progression is made from a mathematically informal motivation for domain decomposition methods to a specific focus on fluid dynamics applications. To be introductory rather than comprehensive, simple examples are provided while convergence proofs and algorithmic details are left to the original references; however, an attempt is made to convey their most salient features, especially where this leads to algorithmic insight.
Field by field hybrid upwind splitting methods
NASA Technical Reports Server (NTRS)
Coquel, Frederic; Liou, Meng-Sing
1993-01-01
A new and general approach to upwind splitting is presented. The design principle combines the robustness of flux vector splitting schemes in the capture of nonlinear waves and the accuracy of some flux difference splitting schemes in the resolution of linear waves. The new schemes are derived following a general hybridization technique performed directly at the basic level of the field-by-field decomposition involved in FDS methods. The scheme does not use a spatial switch that must be tuned according to the local smoothness of the approximate solution.
Isotopic determination of uranium in soil by laser induced breakdown spectroscopy
Chan, George C. -Y.; Choi, Inhee; Mao, Xianglei; ...
2016-03-26
Laser-induced breakdown spectroscopy (LIBS) operated under ambient pressure has been evaluated for isotopic analysis of uranium in real-world samples such as soil, with U concentrations at single-digit percentage levels. The study addresses the requirements for spectral decomposition of 235U and 238U atomic emission peaks that are only partially resolved. Although non-linear least-squares fitting algorithms are typically able to locate the optimal combination of fitting parameters that best describes the experimental spectrum even when all fitting parameters are treated as free independent variables, the analytical results of such an unconstrained free-parameter approach are ambiguous. In this work, five spectral decomposition algorithms were examined, with different known physical properties (e.g., isotopic splitting, hyperfine structure) of the spectral lines sequentially incorporated into the candidate algorithms as constraints. It was found that incorporation of such spectral-line constraints into the decomposition algorithm is essential for the best isotopic analysis. The isotopic abundance of 235U was determined from a simple two-component Lorentzian fit on the U II 424.437 nm spectral profile. For six replicate measurements, each with only fifteen laser shots, on a soil sample with U concentration at 1.1% w/w, the determined 235U isotopic abundance was (64.6 ± 4.8)%, which agreed well with the certified value of 64.4%. Another studied U line, U I 682.691 nm, possesses hyperfine structure that is comparatively broad, spanning a significant fraction of the isotopic shift. Thus, 235U isotopic analysis with this U I line was performed with spectral decomposition involving individual hyperfine components. For the soil sample with 1.1% w/w U, the determined 235U isotopic abundance was (60.9 ± 2.0)%, which exhibited a relative bias of about 6% from the certified value. The bias was attributed to the spectral resolution of our measurement system: the measured line width for this U I line was larger than its isotopic splitting. In conclusion, although not the best emission line for isotopic analysis, this U I emission line is sensitive for elemental analysis, with a detection limit of 500 ppm U in the soil matrix; the detection limit for the U II 424.437 nm line was 2000 ppm.
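A constrained fit of the kind described, two Lorentzian components whose centers are tied together by the known isotopic splitting and which share a width, can be sketched as follows (an illustrative example on synthetic data; the splitting value and all spectral parameters here are placeholders, not values from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical isotopic splitting (nm) between the 235U and 238U components;
# a placeholder value, not taken from the paper.
SPLIT = 0.025

def lorentzian(x, x0, w, a):
    """Single Lorentzian with center x0, half-width w and area a."""
    return (a / np.pi) * w / ((x - x0) ** 2 + w ** 2)

def two_component(x, x238, w, a238, a235):
    """Two Lorentzians sharing one width, centers locked by the known splitting."""
    return lorentzian(x, x238, w, a238) + lorentzian(x, x238 - SPLIT, w, a235)

# Synthetic "measured" profile: 64% 235U / 36% 238U plus noise.
x = np.linspace(424.35, 424.55, 400)
rng = np.random.default_rng(0)
y = two_component(x, 424.437, 0.012, 0.36, 0.64) + 0.01 * rng.normal(size=x.size)

# Fit with the physical constraints (shared width, fixed splitting) built in.
p0 = [424.44, 0.01, 0.5, 0.5]
popt, _ = curve_fit(two_component, x, y, p0=p0)
a238, a235 = popt[2], popt[3]
print(f"estimated 235U abundance: {a235 / (a235 + a238):.1%}")
```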
Hybrid Upwind Splitting (HUS) by a Field-by-Field Decomposition
NASA Technical Reports Server (NTRS)
Coquel, Frederic; Liou, Meng-Sing
1995-01-01
We introduce and develop a new approach for upwind biasing: the hybrid upwind splitting (HUS) method. This original procedure is based on a suitable hybridization of current prominent flux vector splitting (FVS) and flux difference splitting (FDS) methods. The HUS method is designed to naturally combine the respective strengths of the above methods while excluding their main deficiencies. Specifically, the HUS strategy yields a family of upwind methods that exhibit the robustness of FVS schemes in the capture of nonlinear waves and the accuracy of some FDS schemes in the resolution of linear waves. We give a detailed construction of the HUS methods following a general and systematic procedure performed directly at the basic level of the field-by-field (i.e., wave-by-wave) decomposition involved in FDS methods. For a given decomposition, each field is endowed either with FVS or FDS numerical fluxes, depending on the nonlinear nature of the field under consideration. Such a design principle is made possible by the introduction of a convenient formalism that provides a unified framework for upwind methods. The HUS methods we propose bring significant improvements over current methods in terms of accuracy and robustness. They yield entropy-satisfying approximate solutions, as is strongly supported by numerical experiments. Field-by-field hybrid numerical fluxes also achieve fairly simple and explicit expressions and hence require a computational effort between that of FVS and FDS schemes. Several numerical experiments, ranging from stiff 1D shock-tube problems to high-speed viscous flows, are displayed to illustrate the benefits of the present approach. We assess in particular the relevance of our HUS schemes to viscous flow calculations.
Time-frequency characterisation of paediatric heart sounds
NASA Astrophysics Data System (ADS)
Leung, Terence Sze-Tat
1998-08-01
The operation of the heart can be monitored by the sounds it emits. Structural defects or malfunction of the heart valves will cause additional abnormal sounds such as murmurs and ejection clicks. This thesis aims to characterise the heart sounds of three groups of children who either have an Atrial Septal Defect (ASD), a Ventricular Septal Defect (VSD), or are normal. Two aspects of heart sounds have been specifically investigated: the time-frequency analysis of systolic murmurs and the identification of splitting patterns in the second heart sound. The analysis is based on 42 paediatric heart sound recordings. Murmurs are sounds generated by turbulent flow of blood in the heart. They can be found in patients with both pathological and non-pathological conditions. The acoustic quality of the murmurs generated in each heart condition is different. The first aspect of this work is to characterise the three types of murmurs in the time-frequency domain. Modern time-frequency methods, including the Wigner-Ville Distribution, Smoothed Pseudo Wigner-Ville Distribution, Choi-Williams Distribution and spectrogram, have been applied to characterise the murmurs. It was found that the three classes of murmurs exhibited different signatures in their time-frequency representations. By performing Discriminant Analysis, it was shown that spectral features extracted from the time-frequency representations can be used to distinguish between the three classes. The second aspect of the research is to identify splitting patterns in the second heart sound, which consists of two acoustic components due to the closure of the aortic valve and pulmonary valve. The aortic valve usually closes before the pulmonary valve, introducing a time delay known as the 'split'. The split normally varies in duration over the respiratory cycle. In certain pathologies such as the ASD, the split becomes fixed over the respiratory cycle. A technique based on adaptive signal decomposition is developed to measure the split and hence to identify the splitting pattern as either 'variable' or 'fixed'. This work has successfully characterised the murmurs and splitting patterns in the three groups of patients. The features extracted can be used for diagnostic purposes.
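As one concrete example of the time-frequency representations mentioned above, a spectrogram of a heart sound recording can be computed in a few lines (a generic sketch on synthetic data; the sampling rate and window length are illustrative choices, not those of the thesis):

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic stand-in for a phonocardiogram segment: a short decaying tone
# burst (a crude "heart sound") embedded in low-level noise.
fs = 4000  # Hz, illustrative sampling rate
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(1)
signal = np.exp(-30 * t) * np.sin(2 * np.pi * 120 * t) + 0.01 * rng.normal(size=t.size)

# Short-time Fourier magnitude (spectrogram): the time/frequency resolution
# trade-off is set by the window length nperseg.
f, tau, Sxx = spectrogram(signal, fs=fs, nperseg=256, noverlap=192)
print(Sxx.shape)  # (frequency bins, time frames)
```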
Du, Shichao; Ren, Zhiyu; Zhang, Jun; Wu, Jun; Xi, Wang; Zhu, Jiaqing; Fu, Honggang
2015-05-11
A large-area, self-supported Co3O4 nanocrystal/carbon fiber electrode for the oxygen and hydrogen evolution reactions was fabricated via thermal decomposition of the [Co(NH3)n](2+)-oleic acid complex and subsequent spray deposition. Due to the exposed active sites and good electrical conductivity, its operating voltage for overall water splitting is nearly the same as that of commercial Pt/C.
Thermochemical generation of hydrogen and carbon dioxide
NASA Technical Reports Server (NTRS)
Lawson, Daniel D. (Inventor); England, Christopher (Inventor)
1984-01-01
Mixing of carbon in the form of high-sulfur coal with sulfuric acid reduces the temperature of sulfuric acid decomposition from 830 °C to between 300 °C and 400 °C. The low-temperature sulfuric acid decomposition is particularly useful in thermochemical cycles for splitting water to produce hydrogen. Carbon dioxide is produced as a commercially desirable byproduct. Lowering the temperature of the sulfuric acid decomposition, or oxygen release, step simplifies equipment requirements, lowers thermal energy input and reduces the corrosion problems presented by sulfuric acid at conventional cracking temperatures. Use of high-sulfur coal as the source of carbon for the sulfuric acid decomposition provides an environmentally safe and energy-efficient utilization of this normally polluting fuel.
Parallel processing for pitch splitting decomposition
NASA Astrophysics Data System (ADS)
Barnes, Levi; Li, Yong; Wadkins, David; Biederman, Steve; Miloslavsky, Alex; Cork, Chris
2009-10-01
Decomposition of an input pattern in preparation for a double patterning process is an inherently global problem in which the influence of a local decomposition decision can be felt across an entire pattern. In spite of this, a large portion of the work can be massively distributed. Here, we discuss the advantages of geometric distribution for polygon operations with limited range of influence. Further, we have found that even the naturally global "coloring" step can, in large part, be handled in a geometrically local manner. In some practical cases, up to 70% of the work can be distributed geometrically. We also describe the methods for partitioning the problem into local pieces and present scaling data up to 100 CPUs. These techniques reduce DPT decomposition runtime by orders of magnitude.
Kinematics of reflections in subsurface offset and angle-domain image gathers
NASA Astrophysics Data System (ADS)
Dafni, Raanan; Symes, William W.
2018-05-01
Seismic migration in the angle-domain generates multiple images of the earth's interior in which reflection takes place at different scattering angles. Mechanically, the angle-dependent reflection is restricted to happen instantaneously and at a fixed point in space: an incident wave hits a discontinuity in the subsurface media and instantly generates a scattered wave at the same common point of interaction. Alternatively, the angle-domain image may be associated with space-shift (regarded as subsurface-offset) extended migration that artificially splits the reflection geometry, meaning that incident and scattered waves interact at some offset distance. The geometric differences between the two approaches amount to contradictory angle-domain behaviour and unlike kinematic descriptions. We present a phase-space depiction of migration methods extended by this peculiar subsurface-offset split and stress its profound dissimilarity. In spite of being in radical contradiction with the general physics, the subsurface offset reveals a link to some valuable angle-domain quantities via post-migration transformations. The angle quantities are indicated by the direction normal to the subsurface-offset extended image. They specifically define the local dip and scattering angles if the velocity at the split reflection coordinates is the same for the incident and scattered wave pairs. Otherwise, the reflector normal is not a bisector of the opening angle, but of the corresponding slowness vectors. This evidence, together with the distinguished geometry configuration, fundamentally differentiates the angle-domain decomposition based on the subsurface-offset split from the conventional decomposition at a common reflection point. An asymptotic simulation of angle-domain moveout curves in layered media exposes the notion of split versus common-reflection-point geometry. Traveltime inversion methods that involve subsurface-offset extended migration must accommodate the split geometry in the inversion scheme for robust and successful convergence to the optimal velocity model.
Parallel processing in finite element structural analysis
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1987-01-01
A brief review is made of the fundamental concepts and basic issues of parallel processing. Discussion focuses on parallel numerical algorithms, performance evaluation of machines and algorithms, and parallelism in finite element computations. A computational strategy is proposed for maximizing the degree of parallelism at different levels of the finite element analysis process including: 1) formulation level (through the use of mixed finite element models); 2) analysis level (through additive decomposition of the different arrays in the governing equations into the contributions to a symmetrized response plus correction terms); 3) numerical algorithm level (through the use of operator splitting techniques and application of iterative processes); and 4) implementation level (through the effective combination of vectorization, multitasking and microtasking, whenever available).
Yan, Jianjun; Shen, Xiaojing; Wang, Yiqin; Li, Fufeng; Xia, Chunming; Guo, Rui; Chen, Chunfeng; Shen, Qingwei
2010-01-01
This study aims at utilising the Wavelet Packet Transform (WPT) and the Support Vector Machine (SVM) algorithm to provide an objective, quantitative analysis of auscultation in Traditional Chinese Medicine (TCM) diagnosis. First, Wavelet Packet Decomposition (WPD) at level 6 was employed to split the auscultation signals into finer frequency bands. Then statistical analysis was performed on the Wavelet Packet Energy (WPE) features extracted from the WPD coefficients. Furthermore, pattern recognition with SVM was used to classify the statistical feature values of the mixed subject sample groups. Finally, the experimental results showed that the classification accuracies were at a high level.
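A minimal sketch of the pipeline described above, level-6 wavelet packet decomposition, band energies as features, and SVM classification, might look as follows (illustrative only; the wavelet, labels, and data are placeholders rather than the study's actual signals):

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_packet_energies(signal, wavelet="db4", level=6):
    """Energy of each terminal node of a level-6 wavelet packet tree."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    return np.array([np.sum(np.square(node.data)) for node in nodes])

# Placeholder dataset: random "auscultation" signals with arbitrary labels.
rng = np.random.default_rng(0)
signals = rng.normal(size=(40, 4096))
labels = rng.integers(0, 2, size=40)

features = np.vstack([wavelet_packet_energies(s) for s in signals])
clf = SVC(kernel="rbf").fit(features, labels)
print(clf.score(features, labels))  # training accuracy on the toy data
```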
Canonical Structure and Orthogonality of Forces and Currents in Irreversible Markov Chains
NASA Astrophysics Data System (ADS)
Kaiser, Marcus; Jack, Robert L.; Zimmer, Johannes
2018-03-01
We discuss a canonical structure that provides a unifying description of dynamical large deviations for irreversible finite state Markov chains (continuous time), Onsager theory, and Macroscopic Fluctuation Theory (MFT). For Markov chains, this theory involves a non-linear relation between probability currents and their conjugate forces. Within this framework, we show how the forces can be split into two components, which are orthogonal to each other, in a generalised sense. This splitting allows a decomposition of the pathwise rate function into three terms, which have physical interpretations in terms of dissipation and convergence to equilibrium. Similar decompositions hold for rate functions at level 2 and level 2.5. These results clarify how bounds on entropy production and fluctuation theorems emerge from the underlying dynamical rules. We discuss how these results for Markov chains are related to similar structures within MFT, which describes hydrodynamic limits of such microscopic models.
Method for increasing steam decomposition in a coal gasification process
Wilson, Marvin W.
1988-01-01
The gasification of coal in the presence of steam and oxygen is significantly enhanced by introducing a thermochemical water-splitting agent, such as sulfuric acid, into the gasifier to decompose the steam, providing additional oxygen and hydrogen usable in the gasification process for the combustion of the coal and enrichment of the gaseous gasification products. The addition of the water-splitting agent into the gasifier also allows for operation of the reactor at a lower temperature.
The relationship between two fast/slow analysis techniques for bursting oscillations
Teka, Wondimu; Tabak, Joël; Bertram, Richard
2012-01-01
Bursting oscillations in excitable systems reflect multi-timescale dynamics. These oscillations have often been studied in mathematical models by splitting the equations into fast and slow subsystems. Typically, one treats the slow variables as parameters of the fast subsystem and studies the bifurcation structure of this subsystem. This has key features such as a z-curve (stationary branch) and a Hopf bifurcation that gives rise to a branch of periodic spiking solutions. In models of bursting in pituitary cells, we have recently used a different approach that focuses on the dynamics of the slow subsystem. Characteristic features of this approach are folded node singularities and a critical manifold. In this article, we investigate the relationships between the key structures of the two analysis techniques. We find that the z-curve and Hopf bifurcation of the two-fast/one-slow decomposition are closely related to the voltage nullcline and folded node singularity of the one-fast/two-slow decomposition, respectively. They become identical in the double singular limit in which voltage is infinitely fast and calcium is infinitely slow. PMID:23278052
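For readers unfamiliar with the two decompositions, the structure of such models can be written schematically as a three-variable system with one explicitly slow variable (a generic notational sketch, not the specific pituitary cell model analysed in the paper):

```latex
\begin{aligned}
\dot v &= f(v, n, c), \\
\dot n &= g(v, n), \\
\dot c &= \varepsilon\, h(v, c), \qquad \varepsilon \ll 1.
\end{aligned}
```

The two-fast/one-slow analysis treats (v, n) as fast with c frozen as a bifurcation parameter; the one-fast/two-slow analysis instead treats v as the only fast variable and studies the (n, c) dynamics on the critical manifold f(v, n, c) = 0.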
Efficient Delaunay Tessellation through K-D Tree Decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozov, Dmitriy; Peterka, Tom
Delaunay tessellations are fundamental data structures in computational geometry. They are important in data analysis, where they can represent the geometry of a point set or approximate its density. The algorithms for computing these tessellations at scale perform poorly when the input data is unbalanced. We investigate the use of k-d trees to evenly distribute points among processes and compare two strategies for picking split points between domain regions. Because resulting point distributions no longer satisfy the assumptions of existing parallel Delaunay algorithms, we develop a new parallel algorithm that adapts to its input and prove its correctness. We evaluate the new algorithm using two late-stage cosmology datasets. The new running times are up to 50 times faster using k-d tree compared with regular grid decomposition. Moreover, in the unbalanced data sets, decomposing the domain into a k-d tree is up to five times faster than decomposing it into a regular grid.
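The load-balancing idea, recursively splitting the point set at a chosen coordinate so each process receives a comparable share, can be illustrated with a few lines of Python (a toy median-split sketch; it shows only the domain decomposition step, not the parallel Delaunay construction or the paper's alternative split-point strategies):

```python
import numpy as np
from scipy.spatial import Delaunay

def kd_partition(points, depth, axis=0):
    """Recursively split points at the median along alternating axes,
    returning one roughly equal-sized block per leaf of the k-d tree."""
    if depth == 0:
        return [points]
    median = np.median(points[:, axis])
    left = points[points[:, axis] <= median]
    right = points[points[:, axis] > median]
    next_axis = (axis + 1) % points.shape[1]
    return (kd_partition(left, depth - 1, next_axis)
            + kd_partition(right, depth - 1, next_axis))

# Unbalanced synthetic point cloud (clustered, as in late-stage cosmology data).
rng = np.random.default_rng(0)
points = np.vstack([rng.normal(0.2, 0.05, size=(900, 2)),
                    rng.uniform(0, 1, size=(100, 2))])

blocks = kd_partition(points, depth=3)               # 8 blocks of similar size
print([len(b) for b in blocks])
local_tessellations = [Delaunay(b) for b in blocks]  # independent local work
```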
Forcing function modeling for flow induced vibration
NASA Technical Reports Server (NTRS)
Fleeter, Sanford
1993-01-01
The fundamental forcing-function unsteady aerodynamics for application to turbomachine blade-row forced response are considered, accomplished through a series of experiments performed in a rotating annular cascade and a research axial flow turbine. In particular, the unsteady periodic flowfields downstream of rotating rows of perforated plates, airfoils and turbine blade rows are measured with a cross hot-wire and an unsteady total pressure probe. The unsteady velocity and static pressure fields were then analyzed harmonically and split into vortical and potential gusts, accomplished by developing a gust splitting analysis which includes both gust unsteady static pressure and velocity data. The perforated plate gusts were found to closely match linear-theory vortical gusts, satisfying the vortical gust constraints. The airfoil and turbine blade row generated velocity perturbations did not satisfy the vortical gust constraints. However, the decomposition of the unsteady flow field separated the data into a propagating vortical component which satisfied these vortical gust constraints and a decaying potential component.
Adaptive Decomposition of Highly Resolved Time Series into Local and Non‐local Components
Highly time-resolved air monitoring data are widely being collected over long time horizons in order to characterize ambient and near-source air quality trends. In many applications, it is desirable to split the time-resolved data into two or more components (e.g., local and region...
McIntosh, Craig S; Dadour, Ian R; Voss, Sasha C
2017-05-01
The rate of decomposition and insect succession onto decomposing pig carcasses were investigated following burning of the carcasses. Ten pig carcasses (40-45 kg) were exposed to insect activity during autumn (March-April) in Western Australia. Five replicates were burnt to a degree described by the Crow-Glassman Scale (CGS) level #2, while five carcasses were left unburnt as controls. Burning greatly accelerated decomposition in contrast to the unburnt carcasses. Physical modifications following burning, such as skin discolouration, splitting of abdominal tissue and leathery consolidation of skin, eliminated evidence of bloat and altered the microambient temperatures associated with the carcasses throughout decomposition. Insect species identified on carcasses were consistent between treatment groups; however, a statistically significant difference in insect succession onto remains was evident between treatments (PERMANOVA F(1, 224) = 14.23, p < 0.01) during an 8-day period that corresponds with the wet stage of decomposition. Differences were noted in the arrival time of late colonisers (Coleoptera) and the development of colonising insects between treatment groups. Differences in the duration of decomposition stages and insect assemblages indicate that burning has an effect on both the rate of decomposition and insect succession. The findings presented here provide baseline data for entomological casework involving burnt remains in criminal investigations.
Breast tissue decomposition with spectral distortion correction: A postmortem study
Ding, Huanjun; Zhao, Bo; Baturin, Pavlo; Behroozi, Farnaz; Molloi, Sabee
2014-01-01
Purpose: To investigate the feasibility of an accurate measurement of water, lipid, and protein composition of breast tissue using a photon-counting spectral computed tomography (CT) with spectral distortion corrections. Methods: Thirty-eight postmortem breasts were imaged with a cadmium-zinc-telluride-based photon-counting spectral CT system at 100 kV. The energy-resolving capability of the photon-counting detector was used to separate photons into low and high energy bins with a splitting energy of 42 keV. The estimated mean glandular dose for each breast ranged from 1.8 to 2.2 mGy. Two spectral distortion correction techniques were implemented, respectively, on the raw images to correct the nonlinear detector response due to pulse pileup and charge-sharing artifacts. Dual energy decomposition was then used to characterize each breast in terms of water, lipid, and protein content. In the meantime, the breasts were chemically decomposed into their respective water, lipid, and protein components to provide a gold standard for comparison with dual energy decomposition results. Results: The accuracy of the tissue compositional measurement with spectral CT was determined by comparing to the reference standard from chemical analysis. The averaged root-mean-square error in percentage composition was reduced from 15.5% to 2.8% after spectral distortion corrections. Conclusions: The results indicate that spectral CT can be used to quantify the water, lipid, and protein content in breast tissue. The accuracy of the compositional analysis depends on the applied spectral distortion correction technique. PMID:25281953
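The dual energy decomposition step can be pictured as solving, pixel by pixel, a small calibration-based system that maps low- and high-energy measurements to material thicknesses. The sketch below uses a simple linearized two-measurement model with a volume-conservation constraint supplying the third equation (all attenuation coefficients are made-up illustrative numbers, not the calibration of the paper):

```python
import numpy as np

# Made-up effective linear attenuation coefficients (1/cm) of the three basis
# materials in the low- and high-energy bins; illustrative values only.
mu_low  = np.array([0.26, 0.18, 0.30])   # water, lipid, protein
mu_high = np.array([0.17, 0.14, 0.19])

def decompose(m_low, m_high, thickness_cm):
    """Solve the per-pixel 3x3 system:
       low/high-energy line integrals plus volume conservation."""
    A = np.vstack([mu_low, mu_high, np.ones(3)])
    b = np.array([m_low, m_high, thickness_cm])
    return np.linalg.solve(A, b)   # water, lipid, protein thicknesses (cm)

# Simulated pixel: 4 cm of tissue that is 50% water, 35% lipid, 15% protein.
true_t = np.array([2.0, 1.4, 0.6])
m_low, m_high = mu_low @ true_t, mu_high @ true_t
print(decompose(m_low, m_high, thickness_cm=true_t.sum()))  # recovers true_t
```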
Image splitting and remapping method for radiological image compression
NASA Astrophysics Data System (ADS)
Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.
1990-07-01
A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.
The GA sulfur-iodine water-splitting process - A status report
NASA Technical Reports Server (NTRS)
Besenbruch, G. E.; Chiger, H. D.; Mccorkle, K. H.; Norman, J. H.; Rode, J. S.; Schuster, J. R.; Trester, P. W.
1981-01-01
The development of a sulfur-iodine thermal water-splitting cycle is described. The process offers a thermal efficiency near 50% and involves only liquid- and gas-phase handling. Basic chemical investigations comprised the development of multitemperature and multistage sulfuric acid boost reactors, definition of the phase behavior of HI/I2/H2O/H3PO4 mixtures, and development of a decomposition process for hydrogen iodide in the liquid phase. Initial process engineering studies have led to a 47% efficiency, with improvements of 2% projected, and to studies of coupling high-temperature solar concentrators to the splitting process to reduce power requirements. Conceptual flowsheets developed from bench models are provided; materials investigations have concentrated on candidates which can withstand the corrosive mixtures at temperatures up to 400 K, with Hastelloy C-276 exhibiting the best properties for containment and heat exchange with I2.
Geometric decompositions of collective motion
NASA Astrophysics Data System (ADS)
Mischiati, Matteo; Krishnaprasad, P. S.
2017-04-01
Collective motion in nature is a captivating phenomenon. Revealing the underlying mechanisms, which are of biological and theoretical interest, will require empirical data, modelling and analysis techniques. Here, we contribute a geometric viewpoint, yielding a novel method of analysing movement. Snapshots of collective motion are portrayed as tangent vectors on configuration space, with length determined by the total kinetic energy. Using the geometry of fibre bundles and connections, this portrait is split into orthogonal components, each tangential to a lower dimensional manifold derived from configuration space. The resulting decomposition, when interleaved with classical shape space construction, is categorized into a family of kinematic modes, including rigid translations, rigid rotations, inertia tensor transformations, expansions and compressions. Snapshots of empirical data from natural collectives can be allocated to these modes and weighted by fractions of the total kinetic energy. Such quantitative measures can provide insight into the variation of the driving goals of a collective, as illustrated by applying these methods to a publicly available dataset of pigeon flocking. The geometric framework may also be profitably employed in the control of artificial systems of interacting agents such as robots.
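As a small illustration of allocating kinetic energy to one of these modes, the fraction carried by rigid translation is just the centre-of-mass component of the velocities (a toy calculation on synthetic data; the full fibre-bundle decomposition into rotation, shape and inertia-tensor modes is considerably more involved):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                                   # number of agents in the "flock"
masses = np.ones(n)
velocities = rng.normal(size=(n, 3)) + np.array([2.0, 0.0, 0.0])  # drift + jitter

total_ke = 0.5 * np.sum(masses[:, None] * velocities**2)

# Rigid-translation mode: every agent moving with the centre-of-mass velocity.
v_com = np.average(velocities, axis=0, weights=masses)
translation_ke = 0.5 * masses.sum() * np.dot(v_com, v_com)

print(f"fraction of kinetic energy in rigid translation: {translation_ke / total_ke:.2f}")
```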
Measurements With a Split-Fiber Probe in Complex Unsteady Flows
NASA Technical Reports Server (NTRS)
Lepicovsky, Jan
2004-01-01
A split-fiber probe was used to acquire unsteady data in a research compressor. A calibration method was devised for the split-fiber probe, and a new algorithm was developed to decompose split-fiber probe signals into velocity magnitude and direction. The algorithm is based on the minimum value of a merit function that is built over the entire range of flow velocities for which the probe was calibrated. The split-fiber probe performance and signal decomposition were first verified in a free-jet facility by comparing the data from three thermo-anemometric probes, namely a single-wire, a single-fiber, and the split-fiber probe. All three probes performed extremely well as far as the velocity magnitude was concerned. However, there are differences in the peak values of measured velocity unsteadiness in the jet shear layer. The single-wire probe indicates the highest unsteadiness level, followed closely by the split-fiber probe. The single-fiber probe indicates a noticeably lower level of velocity unsteadiness. Experiments in the NASA Low Speed Axial Compressor facility revealed similar results. The mean velocities agreed well, and the differences in velocity unsteadiness are similar to those in the free-jet case. A reason for these discrepancies lies in the different frequency response characteristics of the probes used. It follows that the single-fiber probe has the slowest frequency response. In summary, the split-fiber probe worked reliably during the entire program. The acquired data, averaged in time, closely followed data acquired by conventional pneumatic probes.
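The signal-decomposition idea, choosing the velocity magnitude and flow angle whose calibrated probe response best matches the two measured film voltages, can be sketched as a brute-force merit-function search (a schematic illustration with a made-up calibration model; the probe calibration and merit function actually used in the report are different):

```python
import numpy as np

# Hypothetical calibration: voltages of the two films as functions of velocity
# magnitude U (m/s) and flow angle alpha (deg). A King's-law-like response
# with an angular sensitivity split between the two films.
def film_voltages(U, alpha_deg):
    a = np.radians(alpha_deg)
    e1 = np.sqrt(1.5 + 0.8 * np.sqrt(U) * (1 + 0.5 * np.sin(a)))
    e2 = np.sqrt(1.5 + 0.8 * np.sqrt(U) * (1 - 0.5 * np.sin(a)))
    return e1, e2

# Calibration grid covering the expected range.
U_grid = np.linspace(5, 100, 200)
alpha_grid = np.linspace(-40, 40, 161)
UU, AA = np.meshgrid(U_grid, alpha_grid, indexing="ij")
E1, E2 = film_voltages(UU, AA)

def decompose(e1_meas, e2_meas):
    """Return (U, alpha) minimizing the merit function over the whole grid."""
    merit = (E1 - e1_meas) ** 2 + (E2 - e2_meas) ** 2
    i, j = np.unravel_index(np.argmin(merit), merit.shape)
    return U_grid[i], alpha_grid[j]

# Check: recover a known operating point.
print(decompose(*film_voltages(42.0, 12.0)))   # approximately (42, 12)
```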
High order parallel numerical schemes for solving incompressible flows
NASA Technical Reports Server (NTRS)
Lin, Avi; Milner, Edward J.; Liou, May-Fun; Belch, Richard A.
1992-01-01
The use of parallel computers for numerically solving flow fields has gained much importance in recent years. This paper introduces a new high order numerical scheme for computational fluid dynamics (CFD) specifically designed for parallel computational environments. A distributed MIMD system gives the flexibility of treating different elements of the governing equations with totally different numerical schemes in different regions of the flow field. The parallel decomposition of the governing operator to be solved is the primary parallel split. The primary parallel split was studied using a hypercube-like architecture having clusters of shared memory processors at each node. The approach is demonstrated using examples of simple steady state incompressible flows. Future studies should investigate the secondary split because, depending on the numerical scheme that each of the processors applies and the nature of the flow in the specific subdomain, it may be possible for a processor to seek better, or higher order, schemes for its particular subcase.
Identification and differentiation of methcathinone analogs by gas chromatography-mass spectrometry.
Tsujikawa, Kenji; Mikuma, Toshiyasu; Kuwayama, Kenji; Miyaguchi, Hajime; Kanamori, Tatsuyuki; Iwata, Yuko T; Inoue, Hiroyuki
2013-08-01
To overcome a number of challenges involved in analyzing methcathinone (MC) analogues, we performed gas chromatography-mass spectrometry (GC-MS) analysis, including sample preparation, of nine MC analogues - 4-methylmethcathinone, three positional isomers of fluoromethcathinones, 4-methoxymethcathinone, N-ethylcathinone, N,N-dimethylcathinone, buphedrone, and pentedrone. The MC analogues underwent dehydrogenation when the free bases were analyzed using splitless injection. Most of this thermal degradation was prevented using split injection. This indicated that a shorter residence time in the hot injector prevented decomposition. Uniquely, 2-fluoromethcathinone degraded to another product in a process that could not be prevented by the split injection. Replacing the liner with a new, clean one was also effective in preventing thermal degradation. Most of the analytes showed a substantial loss (>30%) when the free base solution in ethyl acetate was evaporated under a nitrogen stream. Adding a small amount of dimethylformamide as a solvent keeper had a noticeable effect, but it did not completely prevent the loss. Three positional isomers of fluoromethcathinones were separated with baseline resolution by heptafluorobutyrylation with a slow column heating rate (8 °C/min) using a non-polar DB-5 ms capillary column. These results will be useful for the forensic analysis of MC analogues in confiscated materials.
Triple/quadruple patterning layout decomposition via linear programming and iterative rounding
NASA Astrophysics Data System (ADS)
Lin, Yibo; Xu, Xiaoqing; Yu, Bei; Baldick, Ross; Pan, David Z.
2017-04-01
As the feature size of the semiconductor technology scales down to 10 nm and beyond, multiple patterning lithography (MPL) has become one of the most practical candidates for lithography, along with other emerging technologies, such as extreme ultraviolet lithography (EUVL), e-beam lithography (EBL), and directed self-assembly. Due to the delay of EUVL and EBL, triple and even quadruple patterning is considered to be used for lower metal and contact layers with tight pitches. In the process of MPL, layout decomposition is the key design stage, where a layout is split into various parts and each part is manufactured through a separate mask. For metal layers, stitching may be allowed to resolve conflicts, whereas it is forbidden for contact and via layers. We focus on the application of layout decomposition where stitching is not allowed, such as for contact and via layers. We propose a linear programming (LP) and iterative rounding solving technique to reduce the number of nonintegers in the LP relaxation problem. Experimental results show that the proposed algorithms can provide high quality decomposition solutions efficiently while introducing as few conflicts as possible.
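A compact sketch of the "LP relaxation plus iterative rounding" idea for stitch-free coloring is shown below (a simplified toy formulation built with scipy.optimize.linprog on a tiny conflict graph; the actual formulation, cost terms, and rounding rules in the paper are more elaborate):

```python
import numpy as np
from scipy.optimize import linprog

K = 3                                  # number of masks (triple patterning)
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 0)]  # toy conflict graph
n, m = 5, len(edges)                   # layout features (nodes) and conflict edges

nx, ns = n * K, m * K                  # x[v,c] assignment vars, s[e,c] conflict slacks
def xid(v, c): return v * K + c
def sid(e, c): return nx + e * K + c

# Objective: minimise total conflict slack; tiny random costs on x break ties.
rng = np.random.default_rng(0)
cost = np.concatenate([rng.uniform(0, 1e-3, nx), np.ones(ns)])

# Each feature is assigned to exactly one mask.
A_eq = np.zeros((n, nx + ns)); b_eq = np.ones(n)
for v in range(n):
    for c in range(K):
        A_eq[v, xid(v, c)] = 1.0

# Conflict edges: x[u,c] + x[v,c] - s[e,c] <= 1 for every colour c.
A_ub = np.zeros((m * K, nx + ns)); b_ub = np.ones(m * K)
for e, (u, v) in enumerate(edges):
    for c in range(K):
        row = e * K + c
        A_ub[row, xid(u, c)] = A_ub[row, xid(v, c)] = 1.0
        A_ub[row, sid(e, c)] = -1.0

bounds = [(0, 1)] * nx + [(0, None)] * ns

# Iterative rounding: solve the LP relaxation, pin near-integral assignment
# variables via their bounds, and re-solve until everything is integral.
for _ in range(5):
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    x = res.x[:nx]
    for i, xi in enumerate(x):
        if xi > 0.99:
            bounds[i] = (1, 1)
        elif xi < 0.01:
            bounds[i] = (0, 0)
    if all(xi < 0.01 or xi > 0.99 for xi in x):
        break

# Final rounding by largest fractional value, then count the real conflicts.
coloring = [int(np.argmax(res.x[v * K:(v + 1) * K])) for v in range(n)]
conflicts = sum(coloring[u] == coloring[v] for u, v in edges)
print("mask assignment:", coloring, "| conflicting edges:", conflicts)
```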
Dong, Jianwu; Chen, Feng; Zhou, Dong; Liu, Tian; Yu, Zhaofei; Wang, Yi
2017-03-01
Existence of low-SNR regions and rapid phase variations pose challenges to spatial phase unwrapping algorithms. Global optimization-based phase unwrapping methods are widely used, but are significantly slower than greedy methods. In this paper, dual decomposition acceleration is introduced to speed up a three-dimensional graph cut-based phase unwrapping algorithm. The phase unwrapping problem is formulated as a global discrete energy minimization problem, whereas the technique of dual decomposition is used to increase the computational efficiency by splitting the full problem into overlapping subproblems and enforcing the congruence of overlapping variables. Using three-dimensional (3D) multiecho gradient echo images from an agarose phantom and five brain hemorrhage patients, we compared the proposed method with an unaccelerated graph cut-based method. Experimental results show up to 18-fold acceleration in computation time. Dual decomposition significantly improves the computational efficiency of 3D graph cut-based phase unwrapping algorithms. Magn Reson Med 77:1353-1358, 2017.
Potential gains from hospital mergers in Denmark.
Kristensen, Troels; Bogetoft, Peter; Pedersen, Kjeld Moeller
2010-12-01
The Danish hospital sector faces a major rebuilding program to centralize activity in fewer and larger hospitals. We aim to conduct an efficiency analysis of hospitals and to estimate the potential cost savings from the planned hospital mergers. We use Data Envelopment Analysis (DEA) to estimate a cost frontier. Based on this analysis, we calculate an efficiency score for each hospital and estimate the potential gains from the proposed mergers by comparing individual efficiencies with the efficiency of the combined hospitals. Furthermore, we apply a decomposition algorithm to split merger gains into technical efficiency, size (scale) and harmony (mix) gains. The motivation for this decomposition is that some of the apparent merger gains may actually be available with less than a full-scale merger, e.g., by sharing best practices and reallocating certain resources and tasks. Our results suggest that many hospitals are technically inefficient, and the expected "best practice" hospitals are quite efficient. Also, some mergers do not seem to lower costs. This finding indicates that some merged hospitals become too large and therefore experience diseconomies of scale. Other mergers lead to considerable cost reductions; we find potential gains resulting from learning better practices and the exploitation of economies of scope. To ensure robustness, we conduct a sensitivity analysis using two alternative returns-to-scale assumptions and two alternative estimation approaches. We consistently find potential gains from improving the technical efficiency and the exploitation of economies of scope from mergers.
Spectral Diffusion: An Algorithm for Robust Material Decomposition of Spectral CT Data
Clark, Darin P.; Badea, Cristian T.
2014-01-01
Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piece-wise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg/mL), gold (0.9 mg/mL), and gadolinium (2.9 mg/mL) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen. PMID:25296173
Newmark-Beta-FDTD method for super-resolution analysis of time reversal waves
NASA Astrophysics Data System (ADS)
Shi, Sheng-Bing; Shao, Wei; Ma, Jing; Jin, Congjun; Wang, Xiao-Hua
2017-09-01
In this work, a new unconditionally stable finite-difference time-domain (FDTD) method with the split-field perfectly matched layer (PML) is proposed for the analysis of time reversal (TR) waves. The proposed method is very suitable for multiscale problems involving microstructures. The spatial and temporal derivatives in this method are discretized by the central difference technique and the Newmark-Beta algorithm, respectively, and the derivation results in a banded-sparse matrix equation. Since the coefficient matrix remains unchanged during the whole simulation, the lower-upper (LU) decomposition of the matrix needs to be performed only once at the beginning of the calculation. Moreover, the reverse Cuthill-McKee (RCM) technique, an effective preprocessing technique for bandwidth compression of sparse matrices, is used to improve computational efficiency. Super-resolution focusing of TR wave propagation in two- and three-dimensional spaces is included to validate the accuracy and efficiency of the proposed method.
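The "factor once, reuse every time step" structure described above can be sketched with SciPy's sparse tools (a generic illustration in which a 2D Laplacian-like system stands in for the Newmark-Beta-FDTD matrix; the RCM reordering and one-time LU factorization are the points being demonstrated):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee
from scipy.sparse.linalg import splu

# Stand-in banded-sparse system matrix (the real one would come from the
# Newmark-Beta discretization); here a 2D Laplacian-like operator is used.
n = 50
lap1d = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kronsum(lap1d, lap1d) + 0.1 * sp.identity(n * n)).tocsr()

# Reverse Cuthill-McKee reordering to shrink the bandwidth before factoring.
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
A_perm = A[perm, :][:, perm].tocsc()

# LU decomposition performed only once, before the time-marching loop.
lu = splu(A_perm)

# Time stepping: every step reduces to cheap triangular solves with the
# stored factors instead of a fresh factorization.
for step in range(100):
    rhs = np.sin(0.1 * step) * np.ones(n * n)   # placeholder excitation
    y = lu.solve(rhs[perm])                      # solution in permuted ordering
field = np.empty(n * n)
field[perm] = y                                  # undo the reordering
```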
Triple/quadruple patterning layout decomposition via novel linear programming and iterative rounding
NASA Astrophysics Data System (ADS)
Lin, Yibo; Xu, Xiaoqing; Yu, Bei; Baldick, Ross; Pan, David Z.
2016-03-01
As feature size of the semiconductor technology scales down to 10nm and beyond, multiple patterning lithography (MPL) has become one of the most practical candidates for lithography, along with other emerging technologies such as extreme ultraviolet lithography (EUVL), e-beam lithography (EBL) and directed self assembly (DSA). Due to the delay of EUVL and EBL, triple and even quadruple patterning are considered to be used for lower metal and contact layers with tight pitches. In the process of MPL, layout decomposition is the key design stage, where a layout is split into various parts and each part is manufactured through a separate mask. For metal layers, stitching may be allowed to resolve conflicts, while it is forbidden for contact and via layers. In this paper, we focus on the application of layout decomposition where stitching is not allowed such as for contact and via layers. We propose a linear programming and iterative rounding (LPIR) solving technique to reduce the number of non-integers in the LP relaxation problem. Experimental results show that the proposed algorithms can provide high quality decomposition solutions efficiently while introducing as few conflicts as possible.
Dubnikova, Faina; Tamburu, Carmen; Lifshitz, Assa
2016-09-29
The isomerization of o-quinolyl ↔ o-isoquinolyl radicals and their thermal decomposition were studied by quantum chemical methods, where the potential energy surfaces of the reaction channels and their kinetic rate parameters were determined. A detailed kinetics scheme containing 40 elementary steps was constructed. Computer simulations were carried out to determine the isomerization mechanism and the distribution of reaction products in the decomposition. The calculated mole percents of the stable products were compared to the experimental values obtained in this laboratory in the past using a single-pulse shock tube. The agreement between the experimental and calculated mole percents was very good. A set of figures containing the mole percents of eight stable decomposition products plotted vs. T is presented. The fast isomerization of o-quinolyl → o-isoquinolyl radicals via the intermediate indene imine radical, and the attainment of a fast equilibrium between these two radicals, is the reason for the identical product distribution regardless of whether the reactant radical is o-quinolyl or o-isoquinolyl. Three of the main decomposition products of the o-quinolyl radical are those containing the benzene ring, namely phenyl, benzonitrile, and phenylacetylene radicals. They undergo further decomposition, mainly at high temperatures, via two types of reactions: (1) opening of the benzene ring in the radicals, followed by splitting into fragments; (2) dissociative attachment of benzonitrile and phenylacetylene by hydrogen atoms to form hydrogen cyanide and acetylene.
Finding Imaging Patterns of Structural Covariance via Non-Negative Matrix Factorization
Sotiras, Aristeidis; Resnick, Susan M.; Davatzikos, Christos
2015-01-01
In this paper, we investigate the use of Non-Negative Matrix Factorization (NNMF) for the analysis of structural neuroimaging data. The goal is to identify the brain regions that co-vary across individuals in a consistent way, hence potentially being part of underlying brain networks or otherwise influenced by underlying common mechanisms such as genetics and pathologies. NNMF offers a directly data-driven way of extracting relatively localized co-varying structural regions, thereby transcending limitations of Principal Component Analysis (PCA), Independent Component Analysis (ICA) and other related methods that tend to produce dispersed components of positive and negative loadings. In particular, leveraging upon the well known ability of NNMF to produce parts-based representations of image data, we derive decompositions that partition the brain into regions that vary in consistent ways across individuals. Importantly, these decompositions achieve dimensionality reduction via highly interpretable ways and generalize well to new data as shown via split-sample experiments. We empirically validate NNMF in two data sets: i) a Diffusion Tensor (DT) mouse brain development study, and ii) a structural Magnetic Resonance (sMR) study of human brain aging. We demonstrate the ability of NNMF to produce sparse parts-based representations of the data at various resolutions. These representations seem to follow what we know about the underlying functional organization of the brain and also capture some pathological processes. Moreover, we show that these low dimensional representations favorably compare to descriptions obtained with more commonly used matrix factorization methods like PCA and ICA. PMID:25497684
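As a rough, hedged sketch of this kind of analysis, the snippet below factorizes a subjects-by-voxels matrix with scikit-learn's NMF and checks component reproducibility across a split sample; the random data and the cosine-matching step are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.decomposition import NMF

# Stand-in data: rows are subjects, columns are voxel-wise structural measures
# (e.g., gray matter density maps); values must be non-negative for NMF.
rng = np.random.default_rng(0)
X = rng.random((80, 5000))

k = 10                 # number of structural covariance components
half = X.shape[0] // 2

# Split-sample experiment: factorize each half independently.
W1 = NMF(n_components=k, init="nndsvd", max_iter=500).fit(X[:half]).components_
W2 = NMF(n_components=k, init="nndsvd", max_iter=500).fit(X[half:]).components_

# Match components across the two halves by maximum cosine similarity;
# high values indicate that the parts-based decomposition generalizes.
W1n = W1 / np.linalg.norm(W1, axis=1, keepdims=True)
W2n = W2 / np.linalg.norm(W2, axis=1, keepdims=True)
similarity = (W1n @ W2n.T).max(axis=1)
print("best-match cosine similarity per component:", similarity.round(2))
```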
On simulating flow with multiple time scales using a method of averages
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margolin, L.G.
1997-12-31
The author presents a new computational method based on averaging to efficiently simulate certain systems with multiple time scales. He first develops the method in a simple one-dimensional setting and employs linear stability analysis to demonstrate numerical stability. He then extends the method to multidimensional fluid flow. His method of averages does not depend on explicit splitting of the equations nor on modal decomposition. Rather he combines low order and high order algorithms in a generalized predictor-corrector framework. He illustrates the methodology in the context of a shallow fluid approximation to an ocean basin circulation. He finds that his new method reproduces the accuracy of a fully explicit second-order accurate scheme, while costing less than a first-order accurate scheme.
Mathew, B; Schmitz, A; Muñoz-Descalzo, S; Ansari, N; Pampaloni, F; Stelzer, E H K; Fischer, S C
2015-06-08
Due to the large amount of data produced by advanced microscopy, automated image analysis is crucial in modern biology. Most applications require reliable cell nuclei segmentation. However, in many biological specimens cell nuclei are densely packed and appear to touch one another in the images. Therefore, a major difficulty of three-dimensional cell nuclei segmentation is the decomposition of cell nuclei that apparently touch each other. Current methods are highly adapted to a certain biological specimen or a specific microscope. They do not ensure similarly accurate segmentation performance, i.e. their robustness for different datasets is not guaranteed. Hence, these methods require elaborate adjustments to each dataset. We present an advanced three-dimensional cell nuclei segmentation algorithm that is accurate and robust. Our approach combines local adaptive pre-processing with decomposition based on Lines-of-Sight (LoS) to separate apparently touching cell nuclei into approximately convex parts. We demonstrate the superior performance of our algorithm using data from different specimens recorded with different microscopes. The three-dimensional images were recorded with confocal and light sheet-based fluorescence microscopes. The specimens are an early mouse embryo and two different cellular spheroids. We compared the segmentation accuracy of our algorithm with ground truth data for the test images and results from state-of-the-art methods. The analysis shows that our method is accurate throughout all test datasets (mean F-measure: 91%) whereas the other methods each failed for at least one dataset (F-measure≤69%). Furthermore, nuclei volume measurements are improved for LoS decomposition. The state-of-the-art methods required laborious adjustments of parameter values to achieve these results. Our LoS algorithm did not require parameter value adjustments. The accurate performance was achieved with one fixed set of parameter values. We developed a novel and fully automated three-dimensional cell nuclei segmentation method incorporating LoS decomposition. LoS are easily accessible features that ensure correct splitting of apparently touching cell nuclei independent of their shape, size or intensity. Our method showed superior performance compared to state-of-the-art methods, performing accurately for a variety of test images. Hence, our LoS approach can be readily applied to quantitative evaluation in drug testing, developmental and cell biology.
Ding, Huanjun; Molloi, Sabee
2012-08-07
A simple and accurate measurement of breast density is crucial for the understanding of its impact in breast cancer risk models. The feasibility of quantifying volumetric breast density with a photon-counting spectral mammography system has been investigated using both computer simulations and physical phantom studies. A computer simulation model involving polyenergetic spectra from a tungsten anode x-ray tube and a Si-based photon-counting detector has been evaluated for breast density quantification. The figure-of-merit (FOM), which was defined as the signal-to-noise ratio of the dual energy image with respect to the square root of mean glandular dose, was chosen to optimize the imaging protocols, in terms of tube voltage and splitting energy. A scanning multi-slit photon-counting spectral mammography system has been employed in the experimental study to quantitatively measure breast density using dual energy decomposition with glandular and adipose equivalent phantoms of uniform thickness. Four different phantom studies were designed to evaluate the accuracy of the technique, each of which addressed one specific variable in the phantom configurations, including thickness, density, area and shape. In addition to the standard calibration fitting function used for dual energy decomposition, a modified fitting function has been proposed, which introduced the tube voltage used in the imaging tasks as a third variable in dual energy decomposition. For an average-sized 4.5 cm thick breast, the FOM was maximized with a tube voltage of 46 kVp and a splitting energy of 24 keV. To be consistent with the tube voltage used in the current clinical screening exam (∼32 kVp), the optimal splitting energy was proposed to be 22 keV, which offered a FOM greater than 90% of the optimal value. In the experimental investigation, the root-mean-square (RMS) error in breast density quantification for all four phantom studies was estimated to be approximately 1.54% using the standard calibration function. The results from the modified fitting function, which integrated the tube voltage as a variable in the calibration, indicated an RMS error of approximately 1.35% for all four studies. The results of the current study suggest that photon-counting spectral mammography systems may potentially be implemented for an accurate quantification of volumetric breast density, with an RMS error of less than 2%, using the proposed dual energy imaging technique.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Futatani, S.; Bos, W.J.T.; Del-Castillo-Negrete, Diego B
2011-01-01
We assess two techniques for extracting coherent vortices out of turbulent flows: the wavelet based Coherent Vorticity Extraction (CVE) and the Proper Orthogonal Decomposition (POD). The former decomposes the flow field into an orthogonal wavelet representation and subsequent thresholding of the coefficients allows one to split the flow into organized coherent vortices with non-Gaussian statistics and an incoherent random part which is structureless. POD is based on the singular value decomposition and decomposes the flow into basis functions which are optimal with respect to the retained energy for the ensemble average. Both techniques are applied to direct numerical simulation data of two-dimensional drift-wave turbulence governed by the Hasegawa-Wakatani equation, considering two limit cases: the quasi-hydrodynamic and the quasi-adiabatic regimes. The results are compared in terms of compression rate, retained energy, retained enstrophy and retained radial flux, together with the enstrophy spectrum and higher order statistics. (c) 2010 Published by Elsevier Masson SAS on behalf of Academie des sciences.
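The POD step described here amounts to a singular value decomposition of a snapshot matrix followed by truncation at a chosen retained-energy level; a short, self-contained sketch with synthetic data is given below (the wavelet-based CVE step is not shown).

```python
import numpy as np

# Snapshot matrix: each column is a vorticity field at one time, flattened.
rng = np.random.default_rng(1)
n_points, n_snapshots = 4096, 200
snapshots = rng.standard_normal((n_points, n_snapshots))

# POD via the singular value decomposition of the (mean-removed) snapshots.
fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(fluct, full_matrices=False)

# Retained energy: keep the leading modes that capture, say, 99% of the energy.
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1
coherent = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]   # organized part
incoherent = fluct - coherent                      # structureless remainder
print(f"{r} POD modes retain {energy[r-1]:.3f} of the fluctuation energy")
```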
Zhang, Wensheng; Edwards, Andrea; Fan, Wei; Zhu, Dongxiao; Zhang, Kun
2010-06-22
Comparative analysis of gene expression profiling of multiple biological categories, such as different species of organisms or different kinds of tissue, promises to enhance the fundamental understanding of the universality as well as the specialization of mechanisms and related biological themes. Grouping genes with a similar expression pattern or exhibiting co-expression together is a starting point in understanding and analyzing gene expression data. In recent literature, gene module level analysis is advocated in order to understand biological network design and system behaviors in disease and life processes; however, practical difficulties often lie in the implementation of existing methods. Using the singular value decomposition (SVD) technique, we developed a new computational tool, named svdPPCS (SVD-based Pattern Pairing and Chart Splitting), to identify conserved and divergent co-expression modules of two sets of microarray experiments. In the proposed methods, gene modules are identified by splitting the two-way chart coordinated with a pair of left singular vectors factorized from the gene expression matrices of the two biological categories. Importantly, the cutoffs are determined by a data-driven algorithm using the well-defined statistic, SVD-p. The implementation was illustrated on two time series microarray data sets generated from the samples of accessory gland (ACG) and malpighian tubule (MT) tissues of the line W118 of M. drosophila. Two conserved modules and six divergent modules, each of which has a unique characteristic profile across tissue kinds and aging processes, were identified. The number of genes contained in these models ranged from five to a few hundred. Three to over a hundred GO terms were over-represented in individual modules with FDR < 0.1. One divergent module suggested the tissue-specific relationship between the expressions of mitochondrion-related genes and the aging process. This finding, together with others, may be of biological significance. The validity of the proposed SVD-based method was further verified by a simulation study, as well as the comparisons with regression analysis and cubic spline regression analysis plus PAM based clustering. svdPPCS is a novel computational tool for the comparative analysis of transcriptional profiling. It especially fits the comparison of time series data of related organisms or different tissues of the same organism under equivalent or similar experimental conditions. The general scheme can be directly extended to the comparisons of multiple data sets. It also can be applied to the integration of data sets from different platforms and of different sources.
Sensitivity analysis for aeroacoustic and aeroelastic design of turbomachinery blades
NASA Technical Reports Server (NTRS)
Lorence, Christopher B.; Hall, Kenneth C.
1995-01-01
A new method for computing the effect that small changes in the airfoil shape and cascade geometry have on the aeroacoustic and aeroelastic behavior of turbomachinery cascades is presented. The nonlinear unsteady flow is assumed to be composed of a nonlinear steady flow plus a small perturbation unsteady flow that is harmonic in time. First, the full potential equation is used to describe the behavior of the nonlinear mean (steady) flow through a two-dimensional cascade. The small disturbance unsteady flow through the cascade is described by the linearized Euler equations. Using rapid distortion theory, the unsteady velocity is split into a rotational part that contains the vorticity and an irrotational part described by a scalar potential. The unsteady vorticity transport is described analytically in terms of the drift and stream functions computed from the steady flow. Hence, the solution of the linearized Euler equations may be reduced to a single inhomogeneous equation for the unsteady potential. The steady flow and small disturbance unsteady flow equations are discretized using bilinear quadrilateral isoparametric finite elements. The nonlinear mean flow solution and streamline computational grid are computed simultaneously using Newton iteration. At each step of the Newton iteration, LU decomposition is used to solve the resulting set of linear equations. The unsteady flow problem is linear, and is also solved using LU decomposition. Next, a sensitivity analysis is performed to determine the effect small changes in cascade and airfoil geometry have on the mean and unsteady flow fields. The sensitivity analysis makes use of the nominal steady and unsteady flow LU decompositions so that no additional matrices need to be factored. Hence, the present method is computationally very efficient. To demonstrate how the sensitivity analysis may be used to redesign cascades, a compressor is redesigned for improved aeroelastic stability and two different fan exit guide vanes are redesigned for reduced downstream radiated noise. In addition, a framework detailing how the two-dimensional version of the method may be used to redesign three-dimensional geometries is presented.
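The computational point about reusing the nominal LU factors can be illustrated in a few lines: once the flow Jacobian is factored, each design perturbation only requires a new right-hand side and a back-substitution. The matrix below is a generic stand-in, not the potential/linearized-Euler operator of the paper.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Stand-in for the discretized (steady or unsteady) flow Jacobian.
rng = np.random.default_rng(2)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)   # kept well conditioned

# Factor once during the nominal Newton solve ...
lu, piv = lu_factor(A)
nominal = lu_solve((lu, piv), rng.standard_normal(n))

# ... then each design perturbation only needs a new right-hand side,
# so no additional matrices are factored.
n_design_vars = 20
sens_rhs = rng.standard_normal((n, n_design_vars))
sensitivities = lu_solve((lu, piv), sens_rhs)      # one back-substitution per column
print(sensitivities.shape)
```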
Finding imaging patterns of structural covariance via Non-Negative Matrix Factorization.
Sotiras, Aristeidis; Resnick, Susan M; Davatzikos, Christos
2015-03-01
In this paper, we investigate the use of Non-Negative Matrix Factorization (NNMF) for the analysis of structural neuroimaging data. The goal is to identify the brain regions that co-vary across individuals in a consistent way, hence potentially being part of underlying brain networks or otherwise influenced by underlying common mechanisms such as genetics and pathologies. NNMF offers a directly data-driven way of extracting relatively localized co-varying structural regions, thereby transcending limitations of Principal Component Analysis (PCA), Independent Component Analysis (ICA) and other related methods that tend to produce dispersed components of positive and negative loadings. In particular, leveraging upon the well known ability of NNMF to produce parts-based representations of image data, we derive decompositions that partition the brain into regions that vary in consistent ways across individuals. Importantly, these decompositions achieve dimensionality reduction via highly interpretable ways and generalize well to new data as shown via split-sample experiments. We empirically validate NNMF in two data sets: i) a Diffusion Tensor (DT) mouse brain development study, and ii) a structural Magnetic Resonance (sMR) study of human brain aging. We demonstrate the ability of NNMF to produce sparse parts-based representations of the data at various resolutions. These representations seem to follow what we know about the underlying functional organization of the brain and also capture some pathological processes. Moreover, we show that these low dimensional representations favorably compare to descriptions obtained with more commonly used matrix factorization methods like PCA and ICA. Copyright © 2014 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kimmel, Anna V.; Sushko, Peter V.; Shluger, Alexander L.
The authors have calculated the electronic structure of individual 1,1-diamino-2,2-dinitroethylene molecules (FOX-7) in the gas phase by means of density functional theory with the hybrid B3LYP functional and 6-31+G(d,p) basis set and considered their dissociation pathways. Positively and negatively charged states as well as the lowest excited states of the molecule were simulated. They found that charging and excitation can not only reduce the activation barriers for decomposition reactions but also change the dominating chemistry from endo- to exothermic type. In particular, they found that there are two competing primary initiation mechanisms of FOX-7 decomposition: C-NO2 bond fission and C-NO2 to CONO isomerization. Electronic excitation or charging of FOX-7 disfavors CONO formation and, thus, terminates this channel of decomposition. However, if CONO is formed from the neutral FOX-7 molecule, charge trapping and/or excitation results in spontaneous splitting of an NO group accompanied by the energy release. Intramolecular hydrogen transfer is found to be a rare event in FOX-7 unless free electrons are available in the vicinity of the molecule, in which case HONO formation is a feasible exothermic reaction with a relatively low energy barrier. The effect of charged and excited states on other possible reactions is also studied. Implications of the obtained results to FOX-7 decomposition in condensed state are discussed.
Adaptive multigrid domain decomposition solutions for viscous interacting flows
NASA Technical Reports Server (NTRS)
Rubin, Stanley G.; Srinivasan, Kumar
1992-01-01
Several viscous incompressible flows with strong pressure interaction and/or axial flow reversal are considered with an adaptive multigrid domain decomposition procedure. Specific examples include the triple deck structure surrounding the trailing edge of a flat plate, the flow recirculation in a trough geometry, and the flow in a rearward facing step channel. For the latter case, there are multiple recirculation zones, of different character, for laminar and turbulent flow conditions. A pressure-based form of flux-vector splitting is applied to the Navier-Stokes equations, which are represented by an implicit lowest-order reduced Navier-Stokes (RNS) system and a purely diffusive, higher-order, deferred-corrector. A trapezoidal or box-like form of discretization insures that all mass conservation properties are satisfied at interfacial and outflow boundaries, even for this primitive-variable, non-staggered grid computation.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Kutler, Paul (Technical Monitor)
1998-01-01
Several stabilized discretization procedures for conservation law equations on triangulated domains will be considered. Specifically, numerical schemes based on upwind finite volume, fluctuation splitting, Galerkin least-squares, and space discontinuous Galerkin discretization will be considered in detail. A standard energy analysis for several of these methods will be given via entropy symmetrization. Next, we will present some relatively new theoretical results concerning congruence relationships for left or right symmetrized equations. These results suggest new variants of existing FV, DG, GLS, and FS methods which are computationally more efficient while retaining the pleasant theoretical properties achieved by entropy symmetrization. In addition, the task of Jacobian linearization of these schemes for use in Newton's method is greatly simplified owing to exploitation of exact symmetries which exist in the system. The FV, FS and DG schemes also permit discrete maximum principle analysis and enforcement which greatly adds to the robustness of the methods. Discrete maximum principle theory will be presented for general finite volume approximations on unstructured meshes. Next, we consider embedding these nonlinear space discretizations into exact and inexact Newton solvers which are preconditioned using a nonoverlapping (Schur complement) domain decomposition technique. Elements of nonoverlapping domain decomposition for elliptic problems will be reviewed followed by the present extension to hyperbolic and elliptic-hyperbolic problems. Other issues of practical relevance such as the meshing of geometries, code implementation, turbulence modeling, global convergence, etc., will be addressed as needed.
NASA Technical Reports Server (NTRS)
Barth, Timothy; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
Several stabilized discretization procedures for conservation law equations on triangulated domains will be considered. Specifically, numerical schemes based on upwind finite volume, fluctuation splitting, Galerkin least-squares, and space discontinuous Galerkin discretization will be considered in detail. A standard energy analysis for several of these methods will be given via entropy symmetrization. Next, we will present some relatively new theoretical results concerning congruence relationships for left or right symmetrized equations. These results suggest new variants of existing FV, DG, GLS and FS methods which are computationally more efficient while retaining the pleasant theoretical properties achieved by entropy symmetrization. In addition, the task of Jacobian linearization of these schemes for use in Newton's method is greatly simplified owing to exploitation of exact symmetries which exist in the system. These variants have been implemented in the "ELF" library for which example calculations will be shown. The FV, FS and DG schemes also permit discrete maximum principle analysis and enforcement which greatly adds to the robustness of the methods. Some prevalent limiting strategies will be reviewed. Next, we consider embedding these nonlinear space discretizations into exact and inexact Newton solvers which are preconditioned using a nonoverlapping (Schur complement) domain decomposition technique. Elements of nonoverlapping domain decomposition for elliptic problems will be reviewed followed by the present extension to hyperbolic and elliptic-hyperbolic problems. Other issues of practical relevance such as the meshing of geometries, code implementation, turbulence modeling, global convergence, etc., will be addressed as needed.
Chern-Simons-Antoniadis-Savvidy forms and standard supergravity
NASA Astrophysics Data System (ADS)
Izaurieta, F.; Salgado, P.; Salgado, S.
2017-04-01
In the context of the so-called Chern-Simons-Antoniadis-Savvidy (ChSAS) forms, we use the methods for FDA decomposition in 1-forms to construct a four-dimensional ChSAS supergravity action for the Maxwell superalgebra. On the other hand, we use the Extended Cartan Homotopy Formula to find a method that allows the separation of the ChSAS action into bulk and boundary contributions and permits the splitting of the bulk Lagrangian into pieces that reflect the particular subspace structure of the gauge algebra.
Characteristic-based algorithms for flows in thermo-chemical nonequilibrium
NASA Technical Reports Server (NTRS)
Walters, Robert W.; Cinnella, Pasquale; Slack, David C.; Halt, David
1990-01-01
A generalized finite-rate chemistry algorithm with Steger-Warming, Van Leer, and Roe characteristic-based flux splittings is presented in three-dimensional generalized coordinates for the Navier-Stokes equations. Attention is placed on convergence to steady-state solutions with fully coupled chemistry. Time integration schemes including explicit m-stage Runge-Kutta, implicit approximate-factorization, relaxation and LU decomposition are investigated and compared in terms of residual reduction per unit of CPU time. Practical issues such as code vectorization and memory usage on modern supercomputers are discussed.
Moulisová, Vladimíra; Luer, Larry; Hoseinkhani, Sajjad; Brotosudarmo, Tatas H P; Collins, Aaron M; Lanzani, Guglielmo; Blankenship, Robert E; Cogdell, Richard J
2009-12-02
Energy transfer processes in photosynthetic light harvesting 2 (LH2) complexes isolated from the purple bacterium Rhodopseudomonas palustris grown at different light intensities were studied by ground state and transient absorption spectroscopy. The decomposition of ground state absorption spectra shows contributions from B800 and B850 bacteriochlorophyll (BChl) a rings, the latter component splitting into a low energy and a high energy band in samples grown under low light (LL) conditions. A spectral analysis reveals strong inhomogeneity of the B850 excitons in the LL samples that is well reproduced by an exponential-type distribution. Transient spectra show a bleach of both the low energy and high energy bands, together with the respective blue-shifted exciton-to-biexciton transitions. The different spectral evolutions were analyzed by a global fitting procedure. Energy transfer from B800 to B850 occurs in a mono-exponential process and the rate of this process is only slightly reduced in LL compared to high light samples. In LL samples, spectral relaxation of the B850 exciton follows strongly nonexponential kinetics that can be described by a reduction of the bleach of the high energy excitonic component and a red-shift of the low energetic one. We explain these spectral changes by picosecond exciton relaxation caused by a small coupling parameter of the excitonic splitting of the BChl a molecules to the surrounding bath. The splitting of the exciton energy into two excitonic bands in the LL complex is most probably caused by the heterogeneous composition of LH2 apoproteins, which gives some of the BChls in the B850 ring B820-like site energies and causes disorder in the LH2 structure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agostini, Federica; Abedi, Ali; Suzuki, Yasumitsu
The decomposition of electronic and nuclear motion presented in Abedi et al. [Phys. Rev. Lett. 105, 123002 (2010)] yields a time-dependent potential that drives the nuclear motion and fully accounts for the coupling to the electronic subsystem. Here, we show that propagation of an ensemble of independent classical nuclear trajectories on this exact potential yields dynamics that are essentially indistinguishable from the exact quantum dynamics for a model non-adiabatic charge transfer problem. We point out the importance of step and bump features in the exact potential that are critical in obtaining the correct splitting of the quasiclassical nuclear wave packet in space after it passes through an avoided crossing between two Born-Oppenheimer surfaces and analyze their structure. Finally, an analysis of the exact potentials in the context of trajectory surface hopping is presented, including preliminary investigations of velocity-adjustment and the force-induced decoherence effect.
Entanglement branching operator
NASA Astrophysics Data System (ADS)
Harada, Kenji
2018-01-01
We introduce an entanglement branching operator to split a composite entanglement flow in a tensor network, which is a promising theoretical tool for many-body systems. We can optimize an entanglement branching operator by solving a minimization problem based on squeezing operators. Entanglement branching is a new and useful operation for manipulating a tensor network. For example, by finding a particular entanglement structure with an entanglement branching operator, we can improve a higher-order tensor renormalization group method to capture a proper renormalization flow in a tensor network space. This new method yields a new type of tensor network state. The second example is a many-body decomposition of a tensor by using an entanglement branching operator. We can use it for a perfect disentangling among tensors. Applying a many-body decomposition recursively, we conceptually derive projected entangled pair states from quantum states that satisfy the area law of entanglement entropy.
NASA Astrophysics Data System (ADS)
Schneider, Jens; Holzer, Frank; Kraus, Markus; Kopinke, Frank-Dieter; Roland, Ulf
2016-10-01
The application of radio waves with a frequency of 13.56 MHz to electrolyte solutions in a capillary reactor led to the formation of reactive hydrogen and oxygen species and finally to molecular oxygen and hydrogen. This process of water splitting can in principle be used for the elimination of hazardous chemicals in water. Two compounds, namely perfluorooctanoic acid (PFOA) and tetrahydrofuran, were converted using this process. Their main decomposition products were highly volatile and therefore transferred to a gas phase, where they could be identified by GC-MS analyses. It is remarkable that the chemical reactions could benefit from both the oxidizing and reducing species formed in the plasma process, which takes place in gas bubbles saturated with water vapor. The breaking of C-C and C-F bonds was proven in the case of PFOA, probably initiated by electron impacts and radical reactions.
Time-frequency analysis : mathematical analysis of the empirical mode decomposition.
DOT National Transportation Integrated Search
2009-01-01
Invented over 10 years ago, empirical mode decomposition (EMD) provides a nonlinear time-frequency analysis with the ability to successfully analyze nonstationary signals. Mathematical Analysis of the Empirical Mode Decomposition is a...
Nanostructured hematite for photoelectrochemical water splitting
NASA Astrophysics Data System (ADS)
Ling, Yichuan
Solar water splitting is an environmentally friendly reaction for producing hydrogen gas. Since Honda and Fujishima first demonstrated solar water splitting in 1972 by using semiconductor titanium dioxide (TiO2) as the photoanode in a photoelectrochemical (PEC) cell, extensive efforts have been invested into improving the solar-to-hydrogen (STH) conversion efficiency and lowering the production cost of photoelectrochemical devices. In the last few years, hematite (alpha-Fe2O3) nanostructures have been extensively studied as photoanodes for PEC water splitting. Although nanostructured hematite can improve its photoelectrochemical water splitting performance to some extent, by increasing active sites for water oxidation and shortening the photogenerated hole path length to the semiconductor/electrolyte interface, the photoactivity of pristine hematite nanostructures is still limited by a number of factors, such as poor electrical conductivity and slow oxygen evolution reaction kinetics. Previous studies have shown that tin (Sn) as an n-type dopant can substantially enhance the photoactivity of hematite photoanodes by modifying their optical and electrical properties. In this thesis, I will first demonstrate an unintentional Sn-doping method via high temperature annealing of hematite nanowires grown on fluorine-doped tin oxide (FTO) substrates to enhance the donor density. In addition to introducing extrinsic dopants into semiconductors, the carrier densities of hematite can also be enhanced by creating intrinsic defects. Oxygen vacancies can function as shallow donors in hematite. In this regard, I have investigated the influence of oxygen content on the thermal decomposition of FeOOH to induce oxygen vacancies in hematite. In the end, I have studied low temperature activation of hematite nanostructures.
Overlapping Community Detection based on Network Decomposition
NASA Astrophysics Data System (ADS)
Ding, Zhuanlian; Zhang, Xingyi; Sun, Dengdi; Luo, Bin
2016-04-01
Community detection in complex networks has become a vital step to understand the structure and dynamics of networks in various fields. However, traditional node clustering and the relatively new link clustering methods have inherent drawbacks in discovering overlapping communities. Node clustering is inadequate to capture the pervasive overlaps, while link clustering is often criticized due to its high computational cost and ambiguous definition of communities. So, overlapping community detection is still a formidable challenge. In this work, we propose a new overlapping community detection algorithm based on network decomposition, called NDOCD. Specifically, NDOCD iteratively splits the network by removing all links in derived link communities, which are identified by utilizing a node clustering technique. The network decomposition contributes to reducing the computation time, and the elimination of noise links helps improve the quality of the obtained communities. Besides, we employ a node clustering technique rather than a link similarity measure to discover link communities, thus NDOCD avoids an ambiguous definition of community and becomes less time-consuming. We test our approach on both synthetic and real-world networks. Results demonstrate the superior performance of our approach both in computation time and accuracy compared to state-of-the-art algorithms.
Sharma, Anshul; Kaur, Jasmine; Lee, Sulhee; Park, Young-Seo
2018-06-01
In the present study, 35 Leuconostoc mesenteroides strains isolated from vegetables and food products from South Korea were studied by multilocus sequence typing (MLST) of seven housekeeping genes (atpA, groEL, gyrB, pheS, pyrG, rpoA, and uvrC). The fragment sizes of the seven amplified housekeeping genes ranged in length from 366 to 1414 bp. Sequence analysis indicated 27 different sequence types (STs), 25 of which were represented by a single strain, indicating high genetic diversity, whereas the remaining 2 were represented by five strains each. In total, 220 polymorphic nucleotide sites were detected among the seven housekeeping genes. The phylogenetic analysis based on the STs of the seven loci indicated that the 35 strains belonged to two major groups, A (28 strains) and B (7 strains). Split decomposition analysis showed that intraspecies recombination played a role in generating diversity among strains. The minimum spanning tree showed that the evolution of the STs was not correlated with food source. This study signifies that multilocus sequence typing is a valuable tool to assess the genetic diversity among L. mesenteroides strains from South Korea and can be used further to monitor evolutionary changes.
Ding, Huanjun; Molloi, Sabee
2012-01-01
Purpose A simple and accurate measurement of breast density is crucial for the understanding of its impact in breast cancer risk models. The feasibility of quantifying volumetric breast density with a photon-counting spectral mammography system has been investigated using both computer simulations and physical phantom studies. Methods A computer simulation model involving polyenergetic spectra from a tungsten anode x-ray tube and a Si-based photon-counting detector has been evaluated for breast density quantification. The figure-of-merit (FOM), which was defined as the signal-to-noise ratio (SNR) of the dual energy image with respect to the square root of mean glandular dose (MGD), was chosen to optimize the imaging protocols, in terms of tube voltage and splitting energy. A scanning multi-slit photon-counting spectral mammography system has been employed in the experimental study to quantitatively measure breast density using dual energy decomposition with glandular and adipose equivalent phantoms of uniform thickness. Four different phantom studies were designed to evaluate the accuracy of the technique, each of which addressed one specific variable in the phantom configurations, including thickness, density, area and shape. In addition to the standard calibration fitting function used for dual energy decomposition, a modified fitting function has been proposed, which introduced the tube voltage used in the imaging tasks as a third variable in dual energy decomposition. Results For an average-sized 4.5 cm thick breast, the FOM was maximized with a tube voltage of 46 kVp and a splitting energy of 24 keV. To be consistent with the tube voltage used in the current clinical screening exam (~ 32 kVp), the optimal splitting energy was proposed to be 22 keV, which offered a FOM greater than 90% of the optimal value. In the experimental investigation, the root-mean-square (RMS) error in breast density quantification for all four phantom studies was estimated to be approximately 1.54% using the standard calibration function. The results from the modified fitting function, which integrated the tube voltage as a variable in the calibration, indicated an RMS error of approximately 1.35% for all four studies. Conclusions The results of the current study suggest that photon-counting spectral mammography systems may potentially be implemented for an accurate quantification of volumetric breast density, with an RMS error of less than 2%, using the proposed dual energy imaging technique. PMID:22771941
Ding, Huanjun; Ducote, Justin L.; Molloi, Sabee
2012-01-01
Purpose: To investigate the feasibility of breast tissue composition in terms of water, lipid, and protein with a cadmium-zinc-telluride (CZT) based computed tomography (CT) system to help better characterize suspicious lesions. Methods: Simulations and experimental studies were performed using a spectral CT system equipped with a CZT-based photon-counting detector with energy resolution. Simulations of the figure-of-merit (FOM), the signal-to-noise ratio (SNR) of the dual energy image with respect to the square root of mean glandular dose (MGD), were performed to find the optimal configuration of the experimental acquisition parameters. A calibration phantom 3.175 cm in diameter was constructed from polyoxymethylene plastic with cylindrical holes that were filled with water and oil. Similarly sized samples of pure adipose and pure lean bovine tissues were used for the three-material decomposition. Tissue composition results computed from the images were compared to the chemical analysis data of the tissue samples. Results: The beam energy was selected to be 100 kVp with a splitting energy of 40 keV. The tissue samples were successfully decomposed into water, lipid, and protein contents. The RMS error of the volumetric percentage for the three-material decomposition, as compared to data from the chemical analysis, was estimated to be approximately 5.7%. Conclusions: The results of this study suggest that the CZT-based photon-counting detector may be employed in the CT system to quantify the water, lipid, and protein mass densities in tissue with a relatively good agreement. PMID:22380361
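In idealized form, the three-material decomposition reduces to a small linear system per voxel: two energy-bin attenuation measurements plus the constraint that the water, lipid and protein volume fractions sum to one. The sketch below uses placeholder attenuation coefficients, not calibrated values from the CZT system.

```python
import numpy as np

# Placeholder linear attenuation coefficients (1/cm) of water, lipid, protein
# in the low and high energy bins; a real system would use calibrated values.
mu_low  = np.array([0.30, 0.20, 0.35])   # [water, lipid, protein], bin below 40 keV
mu_high = np.array([0.18, 0.16, 0.20])   # [water, lipid, protein], bin above 40 keV

def decompose(mu_meas_low, mu_meas_high):
    """Solve for the volume fractions (water, lipid, protein) in one voxel."""
    A = np.vstack([mu_low, mu_high, np.ones(3)])   # 2 energy bins + volume constraint
    b = np.array([mu_meas_low, mu_meas_high, 1.0])
    return np.linalg.solve(A, b)

# Example: a voxel that is truly 50% water, 30% lipid, 20% protein.
truth = np.array([0.5, 0.3, 0.2])
fractions = decompose(mu_low @ truth, mu_high @ truth)
print(fractions.round(3))                          # recovers [0.5, 0.3, 0.2]
```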
NASA Astrophysics Data System (ADS)
Feng, Zhipeng; Chu, Fulei; Zuo, Ming J.
2011-03-01
The energy separation algorithm is good at tracking instantaneous changes in the frequency and amplitude of modulated signals, but it is subject to the constraints of mono-component and narrow band signals. In most cases, time-varying modulated vibration signals of machinery consist of multiple components, and have such complicated instantaneous frequency trajectories on the time-frequency plane that they overlap in the frequency domain. For such signals, conventional filters fail to obtain mono-components of narrow band, and their rectangular decomposition of the time-frequency plane may split instantaneous frequency trajectories, thus resulting in information loss. Given the advantage of the generalized demodulation method in decomposing multi-component signals into mono-components, an iterative generalized demodulation method is used as a preprocessing tool to separate signals into mono-components, so as to satisfy the requirements of the energy separation algorithm. By this improvement, the energy separation algorithm can be generalized to a broad range of signals, as long as the instantaneous frequency trajectories of the signal components do not intersect on the time-frequency plane. Due to the good adaptability of the energy separation algorithm to instantaneous changes in signals and the mono-component decomposition nature of generalized demodulation, the derived time-frequency energy distribution has fine resolution and is free from cross term interferences. The good performance of the proposed time-frequency analysis is illustrated by analyses of a simulated signal and the on-site recorded nonstationary vibration signal of a hydroturbine rotor during a shut-down transient process, showing that it has potential to analyze time-varying modulated signals of multiple components.
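For reference, the discrete energy separation step itself (the Teager-Kaiser energy operator with DESA-2 frequency and amplitude estimates) can be written compactly. The generalized-demodulation pre-processing that produces the mono-components is not reproduced here, and the test signal below is a synthetic AM-FM component.

```python
import numpy as np

def tkeo(x):
    """Discrete Teager-Kaiser energy operator: Psi[x](n) = x(n)^2 - x(n-1)*x(n+1)."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def desa2(x, fs):
    """DESA-2 energy separation: instantaneous frequency (Hz) and amplitude envelope."""
    z = x[2:] - x[:-2]                     # symmetric difference z(n) = x(n+1) - x(n-1)
    psi_x = tkeo(x)[1:-1]                  # Psi[x](n), aligned with Psi[z](n) below
    psi_z = tkeo(z)                        # Psi[z](n)
    omega = 0.5 * np.arccos(np.clip(1.0 - psi_z / (2.0 * psi_x), -1.0, 1.0))
    amp = 2.0 * psi_x / np.sqrt(psi_z)
    return omega * fs / (2.0 * np.pi), amp

# Synthetic mono-component AM-FM signal (what generalized demodulation would deliver):
# a carrier sweeping from 100 Hz at 40 Hz/s with mild 3 Hz amplitude modulation.
fs = 2000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = (1.0 + 0.3 * np.cos(2 * np.pi * 3 * t)) * np.cos(2 * np.pi * (100 * t + 20 * t ** 2))

f_inst, a_inst = desa2(x, fs)
print(f_inst[500:505].round(1))            # approximately tracks the 100 + 40 t Hz sweep
```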
Solving periodic block tridiagonal systems using the Sherman-Morrison-Woodbury formula
NASA Technical Reports Server (NTRS)
Yarrow, Maurice
1989-01-01
Many algorithms for solving the Navier-Stokes equations require the solution of periodic block tridiagonal systems of equations. By applying a splitting to the matrix representing this system of equations, it may first be reduced to a block tridiagonal matrix plus an outer product of two block vectors. The Sherman-Morrison-Woodbury formula is then applied. The algorithm thus reduces a periodic banded system to a non-periodic banded system with additional right-hand sides and is of higher efficiency than standard Thomas algorithm/LU decompositions.
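A scalar (non-block) version of this idea is easy to write down: split the periodic corner entries off as a rank-one update, solve two banded systems with the same non-periodic matrix, and combine them with the Sherman-Morrison formula. The sketch below follows the standard cyclic-tridiagonal reduction and checks itself against a dense solve.

```python
import numpy as np
from scipy.linalg import solve_banded

def solve_cyclic_tridiag(a, b, c, r):
    """Solve a periodic (cyclic) tridiagonal system.

    a: sub-diagonal (a[0] is the corner entry A[0, n-1]),
    b: main diagonal, c: super-diagonal (c[-1] is the corner entry A[n-1, 0]),
    r: right-hand side.
    """
    n = len(b)
    gamma = -b[0]                               # any convenient nonzero choice
    bb = b.copy()
    bb[0] -= gamma                              # modified diagonal of the split matrix
    bb[-1] -= a[0] * c[-1] / gamma

    # Banded storage for scipy.linalg.solve_banded: rows = (upper, main, lower).
    ab = np.zeros((3, n))
    ab[0, 1:] = c[:-1]
    ab[1, :] = bb
    ab[2, :-1] = a[1:]

    u = np.zeros(n); u[0] = gamma; u[-1] = c[-1]
    v = np.zeros(n); v[0] = 1.0;   v[-1] = a[0] / gamma

    y = solve_banded((1, 1), ab, r)             # A' y = r
    z = solve_banded((1, 1), ab, u)             # A' z = u
    # Sherman-Morrison: x = y - z (v.y) / (1 + v.z)
    return y - z * (v @ y) / (1.0 + v @ z)

# Small check against a dense solve.
n = 8
rng = np.random.default_rng(3)
a = rng.random(n); b = rng.random(n) + 4.0; c = rng.random(n)
A = np.diag(b) + np.diag(c[:-1], 1) + np.diag(a[1:], -1)
A[0, -1], A[-1, 0] = a[0], c[-1]
r = rng.random(n)
print(np.allclose(solve_cyclic_tridiag(a, b, c, r), np.linalg.solve(A, r)))  # True
```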
Direct and Indirect Effects of UV-B Exposure on Litter Decomposition: A Meta-Analysis
Song, Xinzhang; Peng, Changhui; Jiang, Hong; Zhu, Qiuan; Wang, Weifeng
2013-01-01
Ultraviolet-B (UV-B) exposure in the course of litter decomposition may have a direct effect on decomposition rates via changing states of photodegradation or decomposer constitution in litter while UV-B exposure during growth periods may alter chemical compositions and physical properties of plants. Consequently, these changes will indirectly affect subsequent litter decomposition processes in soil. Although studies are available on both the positive and negative effects (including no observable effects) of UV-B exposure on litter decomposition, a comprehensive analysis leading to an adequate understanding remains unresolved. Using data from 93 studies across six biomes, this introductory meta-analysis found that elevated UV-B directly increased litter decomposition rates by 7% and indirectly by 12% while attenuated UV-B directly decreased litter decomposition rates by 23% and indirectly increased litter decomposition rates by 7%. However, neither positive nor negative effects were statistically significant. Woody plant litter decomposition seemed more sensitive to UV-B than herbaceous plant litter except under conditions of indirect effects of elevated UV-B. Furthermore, levels of UV-B intensity significantly affected litter decomposition response to UV-B (P<0.05). UV-B effects on litter decomposition were to a large degree compounded by climatic factors (e.g., MAP and MAT) (P<0.05) and litter chemistry (e.g., lignin content) (P<0.01). Results suggest these factors likely have a bearing on masking the important role of UV-B on litter decomposition. No significant differences in UV-B effects on litter decomposition were found between study types (field experiment vs. laboratory incubation), litter forms (leaf vs. needle), and decay duration. Indirect effects of elevated UV-B on litter decomposition significantly increased with decay duration (P<0.001). Additionally, relatively small changes in UV-B exposure intensity (30%) had significant direct effects on litter decomposition (P<0.05). The intent of this meta-analysis was to improve our understanding of the overall effects of UV-B on litter decomposition. PMID:23818993
Yao, Shengnan; Zeng, Weiming; Wang, Nizhuan; Chen, Lei
2013-07-01
Independent component analysis (ICA) has been proven to be effective for functional magnetic resonance imaging (fMRI) data analysis. However, ICA decomposition requires optimizing the unmixing matrix iteratively, with initial values generated randomly. Thus the randomness of the initialization leads to different ICA decomposition results. Therefore, just one-time decomposition for fMRI data analysis is not usually reliable. Under this circumstance, several methods involving repeated decompositions with ICA (RDICA) were proposed to reveal the stability of ICA decomposition. Although utilizing RDICA has achieved satisfying results in validating the performance of ICA decomposition, RDICA costs considerable computing time. To mitigate the problem, in this paper, we propose a method, named ATGP-ICA, to do the fMRI data analysis. This method generates fixed initial values with an automatic target generation process (ATGP) instead of producing them randomly. We performed experimental tests on both hybrid data and fMRI data to indicate the effectiveness of the new method and made a performance comparison of the traditional one-time decomposition with ICA (ODICA), RDICA and ATGP-ICA. The proposed method demonstrated that it not only could eliminate the randomness of ICA decomposition, but also could save much computing time compared to RDICA. Furthermore, the ROC (Receiver Operating Characteristic) power analysis also indicated the better signal reconstruction performance of ATGP-ICA compared to RDICA. Copyright © 2013 Elsevier Inc. All rights reserved.
Adimpong, David B; Nielsen, Dennis S; Sørensen, Kim I; Vogensen, Finn K; Sawadogo-Lingani, Hagrétou; Derkx, Patrick M F; Jespersen, Lene
2013-10-01
Lactobacillus delbrueckii is divided into five subspecies based on phenotypic and genotypic differences. A novel isolate, designated ZN7a-9(T), was isolated from malted sorghum wort used for making an alcoholic beverage (dolo) in Burkina Faso. The results of 16S rRNA gene sequencing, DNA-DNA hybridization and peptidoglycan cell-wall structure type analyses indicated that it belongs to the species L. delbrueckii. The genome sequence of isolate ZN7a-9(T) was determined by Illumina-based sequencing. Multilocus sequence typing (MLST) and split-decomposition analyses were performed on seven concatenated housekeeping genes obtained from the genome sequence of strain ZN7a-9(T) together with 41 additional L. delbrueckii strains. The results of the MLST and split-decomposition analyses could not establish the exact subspecies of L. delbrueckii represented by strain ZN7a-9(T) as it clustered with L. delbrueckii strains unassigned to any of the recognized subspecies of L. delbrueckii. Strain ZN7a-9(T) additionally differed from the recognized type strains of the subspecies of L. delbrueckii with respect to its carbohydrate fermentation profile. In conclusion, the cumulative results indicate that strain ZN7a-9(T) represents a novel subspecies of L. delbrueckii closely related to Lactobacillus delbrueckii subsp. lactis and Lactobacillus delbrueckii subsp. delbrueckii for which the name Lactobacillus delbrueckii subsp. jakobsenii subsp. nov. is proposed. The type strain is ZN7a-9(T) = DSM 26046(T) = LMG 27067(T).
StackSplit - a plugin for multi-event shear wave splitting analyses in SplitLab
NASA Astrophysics Data System (ADS)
Grund, Michael
2017-04-01
The SplitLab package (Wüstefeld et al., Computers and Geosciences, 2008), written in MATLAB, is a powerful and widely used tool for analysing seismological shear wave splitting of single event measurements. However, in many cases, especially for temporary station deployments close to the seaside or for recordings affected by strong anthropogenic noise, only multi-event approaches provide stable and reliable splitting results. In order to extend the original SplitLab environment for such analyses, I present the StackSplit plugin that can easily be implemented within the well accepted main program. StackSplit grants easy access to several different analysis approaches within SplitLab, including a new multiple waveform based inversion method as well as the most established standard stacking procedures. The possibility to switch between different analysis approaches at any time allows the user the most flexible processing of individual multi-event splitting measurements for a single recording station. Besides the provided functions of the plugin, no other external program is needed for the multi-event analyses since StackSplit performs within the available SplitLab structure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tavakoli, Rouhollah, E-mail: rtavakoli@sharif.ir
An unconditionally energy stable time stepping scheme is introduced to solve Cahn–Morral-like equations in the present study. It is constructed based on the combination of David Eyre's time stepping scheme and the Schur complement approach. Although the presented method is general and independent of the choice of the homogeneous free energy density function term, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study spinodal decomposition in multi-component systems and optimal space tiling problems. A penalization strategy is developed, in the case of the latter problem, to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time stepsize. Its MATLAB implementation is included in the appendix for the numerical evaluation of the algorithm and reproduction of the presented results. -- Highlights: •Extension of Eyre's convex–concave splitting scheme to multiphase systems. •Efficient solution of spinodal decomposition in multi-component systems. •Efficient solution of least perimeter periodic space partitioning problem. •Developing a penalization strategy to avoid trivial solutions. •Presentation of MATLAB implementation of the introduced algorithm.
Approximation methods for stochastic petri nets
NASA Technical Reports Server (NTRS)
Jungnitz, Hauke Joerg
1992-01-01
Stochastic Marked Graphs are a concurrent decision free formalism provided with a powerful synchronization mechanism generalizing conventional Fork Join Queueing Networks. In some particular cases the analysis of the throughput can be done analytically. Otherwise the analysis suffers from the classical state explosion problem. Embedded in the divide and conquer paradigm, approximation techniques are introduced for the analysis of stochastic marked graphs and Macroplace/Macrotransition-nets (MPMT-nets), a new subclass introduced herein. MPMT-nets are a subclass of Petri nets that allow limited choice, concurrency and sharing of resources. The modeling power of MPMT-nets is much larger than that of marked graphs, e.g., MPMT-nets can model manufacturing flow lines with unreliable machines and dataflow graphs where choice and synchronization occur. The basic idea leads to the notion of a cut to split the original net system into two subnets. The cuts lead to two aggregated net systems where one of the subnets is reduced to a single transition. A further reduction leads to a basic skeleton. The generalization of the idea leads to multiple cuts, where single cuts can be applied recursively leading to a hierarchical decomposition. Based on the decomposition, a response time approximation technique for the performance analysis is introduced. Also, delay equivalence, which has previously been introduced in the context of marked graphs by Woodside et al., Marie's method and flow equivalent aggregation are applied to the aggregated net systems. The experimental results show that response time approximation converges quickly and shows reasonable accuracy in most cases. The convergence of Marie's method is slower, but the accuracy is generally better. Delay equivalence often fails to converge, while flow equivalent aggregation can lead to potentially bad results if a strong dependence of the mean completion time on the interarrival process exists.
MARS-MD: rejection based image domain material decomposition
NASA Astrophysics Data System (ADS)
Bateman, C. J.; Knight, D.; Brandwacht, B.; McMahon, J.; Healy, J.; Panta, R.; Aamir, R.; Rajendran, K.; Moghiseh, M.; Ramyar, M.; Rundle, D.; Bennett, J.; de Ruiter, N.; Smithies, D.; Bell, S. T.; Doesburg, R.; Chernoglazov, A.; Mandalika, V. B. H.; Walsh, M.; Shamshad, M.; Anjomrouz, M.; Atharifard, A.; Vanden Broeke, L.; Bheesette, S.; Kirkbride, T.; Anderson, N. G.; Gieseg, S. P.; Woodfield, T.; Renaud, P. F.; Butler, A. P. H.; Butler, P. H.
2018-05-01
This paper outlines image domain material decomposition algorithms that have been routinely used in MARS spectral CT systems. These algorithms (known collectively as MARS-MD) are based on a pragmatic heuristic for solving the under-determined problem where there are more materials than energy bins. This heuristic contains three parts: (1) splitting the problem into a number of possible sub-problems, each containing fewer materials; (2) solving each sub-problem; and (3) applying rejection criteria to eliminate all but one sub-problem's solution. An advantage of this process is that different constraints can be applied to each sub-problem if necessary. In addition, the result of this process is that solutions will be sparse in the material domain, which reduces crossover of signal between material images. Two algorithms based on this process are presented: the Segmentation variant, which uses segmented material classes to define each sub-problem; and the Angular Rejection variant, which defines the rejection criteria using the angle between reconstructed attenuation vectors.
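A schematic, per-voxel illustration of the three-part heuristic is given below: enumerate sub-problems containing fewer materials than energy bins, solve each with a non-negativity constraint, and keep the single solution that survives rejection. The basis spectra and the residual-based rejection rule are placeholders, not the constraints used in the Segmentation or Angular Rejection variants.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import nnls

# Placeholder per-energy-bin attenuation basis (rows: energy bins, cols: materials).
materials = ["water", "calcium", "iodine", "gold"]
basis = np.array([[0.25, 0.90, 1.40, 2.10],
                  [0.21, 0.55, 1.05, 1.60],
                  [0.18, 0.35, 0.60, 1.10]])      # 3 energy bins < 4 materials

def mars_md(measured, max_materials=2):
    """Rejection-based decomposition of one voxel's multi-energy measurement."""
    best = None
    # (1) split the under-determined problem into sub-problems with fewer materials
    for subset in combinations(range(len(materials)), max_materials):
        A = basis[:, subset]
        # (2) solve each sub-problem with a non-negativity constraint
        x, residual = nnls(A, measured)
        # (3) rejection: keep only the sub-problem whose non-negative solution fits
        #     best (a placeholder criterion standing in for the physical rules)
        if best is None or residual < best[2]:
            best = (subset, x, residual)
    subset, x, _ = best
    full = np.zeros(len(materials))
    full[list(subset)] = x
    return full                                   # sparse in the material domain

# Voxel that truly contains water and iodine only.
truth = np.array([1.0, 0.0, 0.4, 0.0])
print(mars_md(basis @ truth).round(3))            # ~[1.0, 0.0, 0.4, 0.0]
```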
NASA Astrophysics Data System (ADS)
Fang, Yiqi; Lu, Qinghong; Wang, Xiaolei; Zhang, Wuhong; Chen, Lixiang
2017-02-01
The study of vortex dynamics is of fundamental importance in understanding the structured light's propagation behavior in the realm of singular optics. Here, combining with the large-angle holographic lithography in photoresist, a simple experiment to trace and visualize the vortex birth and splitting of light fields induced by various fractional topological charges is reported. For a topological charge M =1.76 , the recorded microstructures reveal that although it finally leads to the formation of a pair of fork gratings, these two vortices evolve asynchronously. More interestingly, it is observed on the submicron scale that high-order topological charges M =3.48 and 3.52, respectively, give rise to three and four characteristic forks embedded in the samples with one-wavelength resolution of about 450 nm. Numerical simulations based on orbital angular momentum eigenmode decomposition support well the experimental observations. Our method could be applied effectively to study other structured matter waves, such as the electron and neutron beams.
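The orbital angular momentum eigenmode decomposition used in the simulations can be sketched numerically by projecting an azimuthal phase profile with fractional charge M onto integer-l harmonics; for M = 1.76 the spectrum is dominated by l = 2 with a secondary l = 1 contribution, consistent with the asynchronous evolution of the two vortices. The ring-sampled field below is a simplified stand-in for the full transverse beam profile.

```python
import numpy as np

# Azimuthal samples of a field carrying a fractional topological charge M.
M = 1.76
phi = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
field = np.exp(1j * M * phi)             # azimuthal phase only, on a ring of fixed radius

# Project onto integer OAM eigenmodes exp(i l phi): c_l = <exp(i l phi), field>.
ls = np.arange(-5, 10)
c = np.array([np.mean(field * np.exp(-1j * l * phi)) for l in ls])
power = np.abs(c) ** 2
power /= power.sum()

for l, p in zip(ls, power):
    if p > 0.01:
        print(f"l = {l:+d}: {p:.3f}")    # dominated by l = 2, secondary peak at l = 1
```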
Enhancement of lung sounds based on empirical mode decomposition and Fourier transform algorithm.
Mondal, Ashok; Banerjee, Poulami; Somkuwar, Ajay
2017-02-01
There is always a heart sound (HS) signal interfering during the recording of lung sound (LS) signals. This obscures the features of LS signals and creates confusion about pathological states, if any, of the lungs. In this work, a new method is proposed for the reduction of heart sound interference which is based on the empirical mode decomposition (EMD) technique and a prediction algorithm. In this approach, first the mixed signal is split into several components in terms of intrinsic mode functions (IMFs). Thereafter, HS-included segments are localized and removed from them. The missing values of the gap thus produced are predicted by a new Fast Fourier Transform (FFT) based prediction algorithm and the time domain LS signal is reconstructed by taking an inverse FFT of the estimated missing values. The experiments have been conducted on simulated and recorded HS corrupted LS signals at three different flow rates and various SNR levels. The performance of the proposed method is evaluated by qualitative and quantitative analysis of the results. It is found that the proposed method is superior to the baseline method in terms of quantitative and qualitative measurement. The developed method gives better results compared to the baseline method for different SNR levels. Our method gives a cross correlation index (CCI) of 0.9488, a signal to deviation ratio (SDR) of 9.8262, and a normalized maximum amplitude error (NMAE) of 26.94 for a 0 dB SNR value. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
An equivalent domain integral method for three-dimensional mixed-mode fracture problems
NASA Technical Reports Server (NTRS)
Shivakumar, K. N.; Raju, I. S.
1991-01-01
A general formulation of the equivalent domain integral (EDI) method for mixed mode fracture problems in cracked solids is presented. The method is discussed in the context of a 3-D finite element analysis. The J integral consists of two parts: the volume integral of the crack front potential over a torus enclosing the crack front and the crack surface integral due to the crack front potential plus the crack face loading. In mixed mode crack problems the total J integral is split into J sub I, J sub II, and J sub III, representing the severity of the crack front in the three modes of deformation. The direct and decomposition methods are used to separate the modes. These two methods were applied to several mixed mode fracture problems, and the results were found to agree well with those available in the literature. The method lends itself to be used as a post-processing subroutine in a general purpose finite element program.
Inlet Guide Vane Wakes Including Rotor Effects
NASA Astrophysics Data System (ADS)
Johnston, R. T.; Fleeter, S.
2001-02-01
Fundamental experiments are described that investigate the forcing functions generated by an inlet guide vane (IGV) row, including interactions with the downstream rotor, for application to turbomachine forced-response design systems. The experiments are performed in a high-speed research fan facility comprising an IGV row upstream of a rotor. The IGV-rotor axial spacing is variable, and the IGV row can be indexed circumferentially, allowing measurements to be made across several IGV wakes. With an IGV relative Mach number of 0.29, measurements include the IGV wake pressure and velocity fields for three IGV-rotor axial spacings. The decay characteristics of the IGV wakes are compared to the Majjigi and Gliebe empirical correlations. After Fourier decomposition, a vortical-potential gust splitting analysis is used to determine the vortical and potential harmonic wake gust forcing functions both upstream and downstream of the rotor. Higher harmonics of the vortical gust component of the IGV wakes are found to decay at a uniform rate due to viscous diffusion.
StackSplit - a plugin for multi-event shear wave splitting analyses in SplitLab
NASA Astrophysics Data System (ADS)
Grund, Michael
2017-08-01
SplitLab is a powerful and widely used tool for analysing seismological shear-wave splitting from single-event measurements. However, in many cases, especially for temporary station deployments close to the noisy seaside or on the ocean bottom, or for recordings affected by strong anthropogenic noise, only multi-event approaches provide stable and reliable splitting results. In order to extend the original SplitLab environment for such analyses, I present the StackSplit plugin, which can easily be implemented within the well-established main program. StackSplit grants easy access to several different analysis approaches within SplitLab, including a new multiple-waveform-based inversion method as well as the most established standard stacking procedures. The possibility to switch between different analysis approaches at any time allows the user to process individual multi-event splitting measurements for a single recording station with great flexibility. Beyond the functions provided by the plugin, no external program is needed for the multi-event analyses, since StackSplit operates within the existing MATLAB-based SplitLab structure. The effectiveness and use of this plugin are demonstrated with data examples from a long-running seismological recording station in Finland.
Model-size reduction for the buckling and vibration analyses of anisotropic panels
NASA Technical Reports Server (NTRS)
Noor, A. K.; Whitworth, S. L.
1986-01-01
A computational procedure is presented for reducing the size of the model used in the buckling and vibration analyses of symmetric anisotropic panels to that of the corresponding orthotropic model. The key elements of the procedure are the application of an operator splitting technique, through the decomposition of the material stiffness matrix of the panel into the sum of orthotropic and nonorthotropic (anisotropic) parts, and the use of a reduction method through successive application of the finite element method and the classical Rayleigh-Ritz technique. The effectiveness of the procedure is demonstrated by numerical examples.
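The decomposition of the material stiffness matrix at the heart of the operator splitting can be pictured with a small numpy sketch (the matrix values are made up, and the 3x3 in-plane form is used only for brevity): the orthotropic part keeps the entries that survive in an orthotropic laminate, and the anisotropic remainder collects the shear-extension coupling terms.

```python
import numpy as np

def split_stiffness(D):
    """Split a symmetric in-plane stiffness matrix D (3x3, Voigt order
    11, 22, 12) into orthotropic and anisotropic (coupling) parts.
    Sketch of the operator-splitting idea; values below are made up."""
    D = np.asarray(D, dtype=float)
    D_ortho = D.copy()
    # the normal-shear coupling terms (D16, D26 in laminate notation)
    # vanish for an orthotropic panel
    D_ortho[0, 2] = D_ortho[2, 0] = 0.0
    D_ortho[1, 2] = D_ortho[2, 1] = 0.0
    D_aniso = D - D_ortho
    return D_ortho, D_aniso

# Hypothetical anisotropic bending stiffness matrix of a laminated panel.
D = np.array([[120.0, 30.0,  8.0],
              [ 30.0, 60.0,  5.0],
              [  8.0,  5.0, 25.0]])
D_o, D_a = split_stiffness(D)
assert np.allclose(D_o + D_a, D)
print(D_o, D_a, sep="\n")
```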
Thermochemical generation of hydrogen
NASA Technical Reports Server (NTRS)
Lawson, D. D.; Petersen, G. R. (Inventor)
1982-01-01
Direct fluid-contact heat exchange with H2SO4 at about 330 °C, prior to high-temperature decomposition at about 830 °C in the oxygen-release step of several thermochemical cycles for splitting water into hydrogen and oxygen, provides higher heat transfer rates and savings in energy, and permits the use of cast vessels rather than expensive forged-alloy indirect heat exchangers. Among several candidate perfluorocarbon liquids tested, only perfluoropropylene oxide polymers having a degree of polymerization from about 10 to 60 were chemically stable and had low miscibility and vapor pressure when tested with sulfuric acid at temperatures from 300 °C to 400 °C.
SplitRacer - a new Semi-Automatic Tool to Quantify And Interpret Teleseismic Shear-Wave Splitting
NASA Astrophysics Data System (ADS)
Reiss, M. C.; Rumpker, G.
2017-12-01
We have developed SplitRacer, a semi-automatic, MATLAB-based GUI that combines standard seismological tasks for the analysis and interpretation of teleseismic shear-wave splitting. Shear-wave splitting analysis is widely used to infer seismic anisotropy, which can be interpreted in terms of lattice-preferred orientation of mantle minerals, or shape-preferred orientation caused by fluid-filled cracks or alternating layers. Seismic anisotropy provides a unique link between directly observable surface structures and the more elusive dynamic processes in the mantle below. Thus, resolving the seismic anisotropy of the lithosphere/asthenosphere is of particular importance for geodynamic modeling and interpretation. The increasing number of seismic stations from temporary experiments and permanent installations creates a new basis for comprehensive studies of seismic anisotropy worldwide. However, the increasingly large data sets pose new challenges for the rapid and reliable analysis of teleseismic waveforms and for the interpretation of the measurements. Well-established routines and programs are available but are often impractical for analyzing large data sets from hundreds of stations. Additionally, shear-wave splitting results are seldom evaluated using the same well-defined quality criteria, which complicates comparison between studies. SplitRacer has been designed to overcome these challenges by incorporating the following processing steps: i) downloading of waveform data from multiple stations in mseed format using FDSNWS tools; ii) automated initial screening and categorizing of XKS waveforms using a pre-set SNR threshold; iii) particle-motion analysis of selected phases at longer periods to detect and correct for sensor misalignment; iv) splitting analysis of selected phases based on transverse-energy minimization for multiple, randomly selected, relevant time windows; v) one- and two-layer joint-splitting analysis for all phases at one station by simultaneously minimizing their transverse energy, including the analysis of null measurements; and vi) comparison of results with theoretical splitting parameters determined for one, two, or continuously varying anisotropic layer(s). Examples of the application of SplitRacer will be presented.
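The transverse-energy minimization used in the splitting analysis (step iv above) can be sketched as a plain grid search in the spirit of the classical Silver and Chan approach; this is a bare-bones illustration, not the SplitRacer implementation, and the synthetic test at the end uses made-up parameters.

```python
import numpy as np

def splitting_grid_search(north, east, baz_deg, dt, max_delay=4.0, phi_step=2.0):
    """Grid search for splitting parameters (fast direction phi in degrees,
    delay time in s) by minimizing the energy on the transverse component
    after the trial splitting correction. Bare-bones sketch only."""
    baz = np.deg2rad(baz_deg)
    best = (None, None, np.inf)
    for phi_deg in np.arange(0.0, 180.0, phi_step):
        phi = np.deg2rad(phi_deg)
        # rotate N/E into the trial fast/slow frame
        fast = np.cos(phi) * north + np.sin(phi) * east
        slow = -np.sin(phi) * north + np.cos(phi) * east
        for k in range(int(max_delay / dt)):
            slow_c = np.roll(slow, -k)                 # advance the slow trace
            # rotate the corrected traces back to N/E, then project on transverse
            n_c = np.cos(phi) * fast - np.sin(phi) * slow_c
            e_c = np.sin(phi) * fast + np.cos(phi) * slow_c
            transverse = -np.sin(baz) * n_c + np.cos(baz) * e_c
            energy = np.sum(transverse ** 2)
            if energy < best[2]:
                best = (phi_deg, k * dt, energy)
    return best[0], best[1]

# Synthetic test: a radial pulse split with phi = 40 deg and a 1.2 s delay.
dt, baz0 = 0.05, 60.0
time = np.arange(0.0, 60.0, dt)
wavelet = np.exp(-((time - 30.0) / 2.0) ** 2)
n0 = wavelet * np.cos(np.deg2rad(baz0))
e0 = wavelet * np.sin(np.deg2rad(baz0))
phi0 = np.deg2rad(40.0)
fast0 = np.cos(phi0) * n0 + np.sin(phi0) * e0
slow0 = np.roll(-np.sin(phi0) * n0 + np.cos(phi0) * e0, int(round(1.2 / dt)))
north = np.cos(phi0) * fast0 - np.sin(phi0) * slow0
east = np.sin(phi0) * fast0 + np.cos(phi0) * slow0
print(splitting_grid_search(north, east, baz0, dt))    # about (40.0, 1.2)
```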
Differential Decomposition Among Pig, Rabbit, and Human Remains.
Dautartas, Angela; Kenyhercz, Michael W; Vidoli, Giovanna M; Meadows Jantz, Lee; Mundorff, Amy; Steadman, Dawnie Wolfe
2018-03-30
While nonhuman animal remains are often utilized in forensic research to develop methods to estimate the postmortem interval, systematic studies that directly validate animals as proxies for human decomposition are lacking. The current project compared decomposition rates among pigs, rabbits, and humans at the University of Tennessee's Anthropology Research Facility across three seasonal trials that spanned nearly 2 years. The Total Body Score (TBS) method was applied to quantify decomposition changes and calculate the postmortem interval (PMI) in accumulated degree days (ADD). Decomposition trajectories were analyzed by comparing the estimated and actual ADD for each seasonal trial and by fuzzy cluster analysis. The cluster analysis demonstrated that the rabbits formed one group while pigs and humans, although more similar to each other than either to rabbits, still showed important differences in decomposition patterns. The decomposition trends show that neither nonhuman model captured the pattern, rate, and variability of human decomposition. © 2018 American Academy of Forensic Sciences.
Cerebrospinal fluid PCR analysis and biochemistry in bodies with severe decomposition.
Palmiere, Cristian; Vanhaebost, Jessica; Ventura, Francesco; Bonsignore, Alessandro; Bonetti, Luca Reggiani
2015-02-01
The aim of this study was to assess whether Neisseria meningitidis, Listeria monocytogenes, Streptococcus pneumoniae and Haemophilus influenzae can be identified using the polymerase chain reaction technique in the cerebrospinal fluid of severely decomposed bodies with known, noninfectious causes of death or whether postmortem changes can lead to false positive results and thus erroneous diagnostic information. Biochemical investigations, postmortem bacteriology and real-time polymerase chain reaction analysis in cerebrospinal fluid were performed in a series of medico-legal autopsies that included noninfectious causes of death with decomposition, bacterial meningitis without decomposition, bacterial meningitis with decomposition, low respiratory tract infections with decomposition and abdominal infections with decomposition. In noninfectious causes of death with decomposition, postmortem investigations failed to reveal results consistent with generalized inflammation or bacterial infections at the time of death. Real-time polymerase chain reaction analysis in cerebrospinal fluid did not identify the studied bacteria in any of these cases. The results of this study highlight the usefulness of molecular approaches in bacteriology as well as the use of alternative biological samples in postmortem biochemistry in order to obtain suitable information even in corpses with severe decompositional changes. Copyright © 2014 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Steenen, S A; van Wijk, A J; Becking, A G
2016-08-01
An unfavourable and unanticipated pattern of the bilateral sagittal split osteotomy (BSSO) is generally referred to as a 'bad split'. Patient factors predictive of a bad split reported in the literature are controversial. Suggested risk factors are reviewed in this article. A systematic review was undertaken, yielding a total of 30 studies published between 1971 and 2015 reporting the incidence of bad split and patient age, and/or surgical technique employed, and/or the presence of third molars. These included 22 retrospective cohort studies, six prospective cohort studies, one matched-pair analysis, and one case series. Spearman's rank correlation showed a statistically significant but weak correlation between increasing average age and increasing occurrence of bad splits in 18 studies (ρ=0.229; P<0.01). No comparative studies were found that assessed the incidence of bad split among the different splitting techniques. A meta-analysis pooling the effect sizes of seven cohort studies showed no significant difference in the incidence of bad split between cohorts of patients with third molars present and concomitantly removed during surgery, and patients in whom third molars were removed at least 6 months preoperatively (odds ratio 1.16, 95% confidence interval 0.73-1.85, Z=0.64, P=0.52). In summary, there is no robust evidence to date to show that any risk factor influences the incidence of bad split. Copyright © 2016 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Guenet, B.; Eglin, T.; Vasilyeva, N.; Peylin, P.; Ciais, P.; Chenu, C.
2013-04-01
Soil is the major terrestrial reservoir of carbon, and a substantial part of this carbon is stored in deep layers, typically deeper than 50 cm below the surface. Several studies have underlined the quantitative importance of this deep soil organic carbon (SOC) pool, and models are needed to better understand this stock and its evolution under climate and land-use changes. In this study, we tested and compared three simple theoretical models of vertical SOC transport against SOC profile measurements from a long-term bare fallow experiment carried out by the Central-Chernozem State Natural Biosphere Reserve in the Kursk Region of Russia. The transport schemes tested are diffusion, advection, and combined diffusion and advection. They are coupled to three different formulations of soil carbon decomposition kinetics. The first formulation is first-order kinetics, widely used in global SOC decomposition models; the second, the so-called "priming" model, links the SOC decomposition rate to the amount of fresh organic matter, representing substrate interactions. The last one also uses first-order kinetics, but SOC is split into two pools. Field data are from a set of three bare fallow plots where the soil received no input during the past 20, 26 and 58 yr, respectively. Parameters of the models were optimised using a Bayesian method. The best results are obtained when SOC decomposition is assumed to be controlled by fresh organic matter (i.e., the priming model). In comparison to the first-order kinetic model, the priming model reduces the overestimation in the deep layers. We also observed that the transport scheme that improved the fit to the data depended on the soil carbon mineralisation formulation chosen. When soil carbon decomposition was modelled to depend on the amount of fresh organic matter, the transport mechanism that best improved the fit to the SOC profile data was the one representing both advection and diffusion. Interestingly, the older the bare fallow, the smaller the need for diffusion, suggesting that stabilised carbon may not be transported within the profile by the same mechanisms as more labile carbon.
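A compact sketch of the kind of transport model compared in the study is given below (explicit time stepping, first-order decay, and placeholder parameter values rather than the optimised ones); switching D or v to zero recovers the pure-diffusion or pure-advection variants.

```python
import numpy as np

def soc_profile(C0, input_top, k, D, v, dz, dt, n_steps):
    """Explicit 1-D advection-diffusion model with first-order SOC decay.
    C0: initial profile, input_top: litter input to the surface layer per
    step, k: decay rate, D: diffusivity, v: downward advection velocity.
    Placeholder parameters; illustrative only."""
    C = np.array(C0, dtype=float)
    for _ in range(n_steps):
        # second-order diffusion term with zero-gradient boundaries
        Cp = np.pad(C, 1, mode="edge")
        diff = D * (Cp[2:] - 2 * Cp[1:-1] + Cp[:-2]) / dz**2
        # upwind advection (downward transport, nothing entering from above)
        adv = -v * (C - np.concatenate(([0.0], C[:-1]))) / dz
        C += dt * (diff + adv - k * C)
        C[0] += input_top
    return C

# 1 m profile in 2 cm layers; bare fallow means no surface litter input.
z = np.arange(0.0, 1.0, 0.02)
C0 = 50.0 * np.exp(-3.0 * z)        # hypothetical initial SOC profile
print(soc_profile(C0, input_top=0.0, k=1e-4, D=5e-4, v=1e-4,
                  dz=0.02, dt=0.1, n_steps=1000)[:5])
```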
[Glossary of terms used by radiologists in image processing].
Rolland, Y; Collorec, R; Bruno, A; Ramée, A; Morcet, N; Haigron, P
1995-01-01
We give the definition of 166 words used in image processing. Adaptivity, aliasing, analog-digital converter, analysis, approximation, arc, artifact, artificial intelligence, attribute, autocorrelation, bandwidth, boundary, brightness, calibration, class, classification, classify, centre, cluster, coding, color, compression, contrast, connectivity, convolution, correlation, data base, decision, decomposition, deconvolution, deduction, descriptor, detection, digitization, dilation, discontinuity, discretization, discrimination, disparity, display, distance, distortion, distribution, dynamic, edge, energy, enhancement, entropy, erosion, estimation, event, extrapolation, feature, file, filter, filter floaters, fitting, Fourier transform, frequency, fusion, fuzzy, Gaussian, gradient, graph, gray level, group, growing, histogram, Hough transform, Hounsfield, image, impulse response, inertia, intensity, interpolation, interpretation, invariance, isotropy, iterative, JPEG, knowledge base, label, Laplacian, learning, least squares, likelihood, matching, Markov field, mask, matching, mathematical morphology, merge (to), MIP, median, minimization, model, moiré, moment, MPEG, neural network, neuron, node, noise, norm, normal, operator, optical system, optimization, orthogonal, parametric, pattern recognition, periodicity, photometry, pixel, polygon, polynomial, prediction, pulsation, pyramidal, quantization, raster, reconstruction, recursive, region, rendering, representation space, resolution, restoration, robustness, ROC, thinning, transform, sampling, saturation, scene analysis, segmentation, separable function, sequential, smoothing, spline, split (to), shape, threshold, tree, signal, speckle, spectrum, spline, stationarity, statistical, stochastic, structuring element, support, syntactic, synthesis, texture, truncation, variance, vision, voxel, windowing.
NASA Astrophysics Data System (ADS)
Salehi, Aliyeh; Fallah, Seyfollah; Sourki, Ali Abasi
2017-01-01
Cattle manure has a high carbon/nitrogen ratio and may not decompose; therefore, full-dose application of urea fertilizer might improve biological properties by increasing manure decomposition. This study aimed to investigate the effect of combining cattle manure and urea fertilizer on soil CO2 flux, microbial biomass carbon, and dry matter accumulation during Nigella sativa L. (black cumin) growth under field conditions. The treatments were control, cattle manure, urea, different levels of split and full-dose integrated fertilizer. The results showed that integrated application of cattle manure and chemical fertilizer significantly increased microbial biomass carbon by 10%, soil organic carbon by 2.45%, total N by 3.27%, mineral N at the flowering stage by 7.57%, and CO2 flux by 9% over solitary urea application. Integrated application increased microbial biomass carbon by 10% over the solitary application and the full-dose application by 5% over the split application. The soil properties and growth parameters of N. sativa L. benefited more from the full-dose application than the split application of urea. Cattle manure combined with chemical fertilizer and the full-dose application of urea increased fertilizer efficiency and improved biological soil parameters and plant growth. This method decreased the cost of top dressing urea fertilizer and proved beneficial for the environment and medicinal plant health.
Xingyan Huang; Cornelis F. De Hoop; Jiulong Xie; Chung-Yun Hse; Jinqiu Qi; Yuzhu Chen; Feng Li
2017-01-01
The thermal decomposition characteristics of microwave-liquefied rape straw residues with respect to liquefaction condition and pyrolysis conversion were investigated using a thermogravimetric (TG) analyzer at heating rates of 5, 20, and 50 °C min(-1). The hemicellulose decomposition peak was absent in the derivative thermogravimetric analysis (DTG...
2014-01-01
Background Split-mouth randomized controlled trials (RCTs) are popular in oral health research. Meta-analyses frequently include trials of both split-mouth and parallel-arm designs to derive combined intervention effects. However, carry-over effects may induce bias in split-mouth RCTs. We aimed to assess whether intervention effect estimates differ between split-mouth and parallel-arm RCTs investigating the same questions. Methods We performed a meta-epidemiological study. We systematically reviewed meta-analyses including both split-mouth and parallel-arm RCTs with binary or continuous outcomes published up to February 2013. Two independent authors selected studies and extracted data. We used a two-step approach to quantify the differences between split-mouth and parallel-arm RCTs: for each meta-analysis, we first derived ratios of odds ratios (ROR) for dichotomous data and differences in standardized mean differences (∆SMD) for continuous data; second, we pooled RORs or ∆SMDs across meta-analyses by random-effects meta-analysis models. Results We selected 18 systematic reviews, for 15 meta-analyses with binary outcomes (28 split-mouth and 28 parallel-arm RCTs) and 19 meta-analyses with continuous outcomes (28 split-mouth and 28 parallel-arm RCTs). Effect estimates did not differ between split-mouth and parallel-arm RCTs (mean ROR, 0.96, 95% confidence interval 0.52–1.80; mean ∆SMD, 0.08, -0.14–0.30). Conclusions Our study did not provide sufficient evidence for a difference in intervention effect estimates derived from split-mouth and parallel-arm RCTs. Authors should consider including split-mouth RCTs in their meta-analyses with suitable and appropriate analysis methods. PMID:24886043
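The two-step approach for binary outcomes can be sketched as follows. The numbers are entirely invented, and a fixed-effect inverse-variance pooling is used for brevity where the study used random-effects models; the structure, however, mirrors the described procedure: pool log odds ratios within each design, take their difference as a log ROR per meta-analysis, then pool the log RORs.

```python
import numpy as np

def pool_fixed(logors, variances):
    """Inverse-variance (fixed-effect) pooling of log odds ratios."""
    w = 1.0 / np.asarray(variances)
    est = np.sum(w * logors) / np.sum(w)
    return est, 1.0 / np.sum(w)

def log_ror(split_mouth, parallel):
    """Step 1: log ratio of odds ratios for one meta-analysis.
    Each argument is a list of (logOR, variance) tuples."""
    sm, v_sm = pool_fixed(*zip(*split_mouth))
    pa, v_pa = pool_fixed(*zip(*parallel))
    return sm - pa, v_sm + v_pa

# Step 2: pool log RORs across meta-analyses. Data below are invented.
meta_analyses = [
    ([(0.10, 0.04), (0.25, 0.09)], [(0.20, 0.05)]),
    ([(-0.30, 0.06)], [(-0.10, 0.07), (0.05, 0.08)]),
]
rors = [log_ror(sm, pa) for sm, pa in meta_analyses]
est, var = pool_fixed(*zip(*rors))
lo, hi = est - 1.96 * np.sqrt(var), est + 1.96 * np.sqrt(var)
print(f"pooled ROR = {np.exp(est):.2f} (95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")
```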
Variability of the western Pacific warm pool structure associated with El Niño
NASA Astrophysics Data System (ADS)
Hu, Shijian; Hu, Dunxin; Guan, Cong; Xing, Nan; Li, Jianping; Feng, Junqiao
2017-10-01
Sea surface temperature (SST) structure inside the western Pacific warm pool (WPWP) is usually overlooked because of its distinct homogeneity, but in fact it possesses a clear meridional high-low-high pattern. Here we show that the SST low in the WPWP is significantly intensified in July-October of El Niño years (especially extreme El Niño years) and splits the 28.5 °C-isotherm-defined WPWP (hereafter, the WPWP split). Composite analysis and heat budget analysis indicate that enhanced upwelling, due to a positive wind stress curl anomaly and westward-propagating upwelling Rossby waves, accounts for the WPWP split. Zonal advection at the eastern edge of the split region plays a secondary role in its formation. Composite analysis and results from a Matsuno-Gill model with an asymmetric cooling forcing imply that the WPWP split appears to give rise to significant anomalous westerly winds and to intensify the following El Niño event. Lead-lag correlation shows that the WPWP split slightly leads the Niño 3.4 index.
Domain decomposition for aerodynamic and aeroacoustic analyses, and optimization
NASA Technical Reports Server (NTRS)
Baysal, Oktay
1995-01-01
The overarching theme was domain decomposition, which was intended to improve the numerical solution technique for the partial differential equations at hand; in the present study, those governing the fluid flow, the aeroacoustic wave propagation, or the sensitivity analysis for gradient-based optimization. The role of domain decomposition extended beyond the original impetus of discretizing geometrically complex regions or writing modular software for distributed-hardware computers. It induced function-space decompositions and operator decompositions that offered the valuable property of near independence of the operator evaluation tasks. The objectives centered on the extension and implementation of methodologies that were either previously developed or being developed concurrently: (1) aerodynamic sensitivity analysis with domain decomposition (SADD); (2) computational aeroacoustics of cavities; and (3) dynamic, multibody computational fluid dynamics using unstructured meshes.
Precision spectral manipulation of optical pulses using a coherent photon echo memory.
Buchler, B C; Hosseini, M; Hétet, G; Sparkes, B M; Lam, P K
2010-04-01
Photon echo schemes are excellent candidates for high efficiency coherent optical memory. They are capable of high-bandwidth multipulse storage, pulse resequencing and have been shown theoretically to be compatible with quantum information applications. One particular photon echo scheme is the gradient echo memory (GEM). In this system, an atomic frequency gradient is induced in the direction of light propagation leading to a Fourier decomposition of the optical spectrum along the length of the storage medium. This Fourier encoding allows precision spectral manipulation of the stored light. In this Letter, we show frequency shifting, spectral compression, spectral splitting, and fine dispersion control of optical pulses using GEM.
Raznikova, M O; Raznikov, V V
2015-01-01
In this work, information on the charge states of biomolecule ions in solution, obtained by electrospray ionization mass spectrometry of different biopolymers, is analyzed. The data analyses were mainly carried out by solving an inverse problem: calculating the probabilities of retention of protons and other charge carriers by the ionogenic groups of biomolecules with known primary structures. The approach is new and has no analogues known to us. A program titled "Decomposition" was developed and used to analyze the charge distributions of ions in native and denatured cytochrome c mass spectra. The possibility of splitting the charge-state distribution of albumin into normal components, which likely correspond to various conformational states of the biomolecule, has been demonstrated. An applicability criterion has been formulated for using the previously described method of decomposition of multidimensional charge-state distributions with two charge carriers, e.g., a proton and a sodium ion, to characterize the spatial structure of biopolymers in solution. In contrast to known mass-spectrometric approaches, this method does not require enzymatic hydrolysis or collision-induced dissociation of the biopolymers.
Interactions of double patterning technology with wafer processing, OPC and design flows
NASA Astrophysics Data System (ADS)
Lucas, Kevin; Cork, Chris; Miloslavsky, Alex; Luk-Pat, Gerry; Barnes, Levi; Hapli, John; Lewellen, John; Rollins, Greg; Wiaux, Vincent; Verhaegen, Staf
2008-03-01
Double patterning technology (DPT) is one of the main options for printing logic devices with half-pitch less than 45 nm, and flash and DRAM memory devices with half-pitch less than 40 nm. DPT methods decompose the original design intent into two individual masking layers, each of which is patterned using a single exposure and existing 193 nm lithography tools. The results of the individual patterning layers combine to re-create the design intent pattern on the wafer. In this paper we study interactions of DPT with lithography, mask synthesis, and physical design flows. Double exposure and etch patterning steps create complexity for both process and design flows. DPT decomposition is a critical software step that will be performed in physical design and also in mask synthesis. Decomposition includes cutting (splitting) of original design intent polygons into multiple polygons where required, and coloring of the resulting polygons. We evaluate the ability to meet key physical design goals such as reducing circuit area, minimizing rework, ensuring DPT compliance, guaranteeing patterning robustness on individual layer targets, ensuring symmetric wafer results, and creating uniform wafer density for the individual patterning layers.
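At its core, the coloring part of DPT decomposition is a two-coloring of a conflict graph: features spaced below the single-exposure limit must land on different masks, and a layout (after any required polygon cutting) is decomposable only if that graph is bipartite. A minimal sketch with a made-up conflict graph, not real layout data:

```python
from collections import deque

def dpt_color(conflicts):
    """Two-color a conflict graph {feature: set(neighbors)} by BFS.
    Returns {feature: 0 or 1} (mask assignment) or None if an odd cycle
    makes the layout non-compliant without further polygon cutting."""
    color = {}
    for start in conflicts:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in conflicts.get(u, ()):
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return None          # odd conflict cycle: coloring fails
    return color

# Hypothetical features A-E whose spacings violate the single-exposure pitch.
conflicts = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"},
             "D": {"C", "E"}, "E": {"D"}}
print(dpt_color(conflicts))   # {'A': 0, 'B': 1, 'C': 0, 'D': 1, 'E': 0}
```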
Seismic receiver function interpretation: Ps splitting or anisotropic underplating?
NASA Astrophysics Data System (ADS)
Liu, Z.; Park, J. J.
2016-12-01
Crustal anisotropy is crucial to understanding the evolutionary history of Earth's lithosphere. Shear-wave splitting of Moho P-to-s converted phases in receiver functions has often been used to infer crustal anisotropy. In addition to estimating birefringence directly, the harmonic variations of Moho Ps phases in delay times can be used to infer splitting parameters of averaged anisotropy in the crust. However, crustal anisotropy may localize at various levels within the crust due to complex deformational processes. Layered anisotropy requires careful investigation of the distribution of anisotropy before interpreting Moho Ps splitting. In this study, we show results from stations ARU in Russia, KIP in the Hawaiian Islands and LSA on the Tibetan Plateau, where layered anisotropy is well constrained by intra-crustal Ps conversions at high frequencies using harmonic decomposition of multiple-taper correlation receiver functions. Anisotropic velocity models are inferred by forward-modeling the decomposed RF waveforms. Our results for ARU and KIP show that the harmonic behavior of Moho Ps phases can be explained by a uniformly anisotropic crust model at lower cut-off frequencies, but higher-resolution RF signals reveal a thin, highly anisotropic layer at the base of the crust. Station LSA tells a similar story with a twist: a modest Ps birefringence revealed at high frequencies stems from multiple thin (5-10 km) layers of localized anisotropy within the middle crust, and no strongly sheared basal layer is inferred. We suggest that the harmonic variation of Moho Ps phases should always be investigated as a result of anisotropic layering using RFs with frequency content above 1 Hz, rather than simply reporting averaged anisotropy of the whole crust.
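A generic version of the harmonic decomposition step can be written as a least-squares fit over back-azimuth (this is a simplified sketch, not the authors' multiple-taper correlation code): at each time sample the receiver-function amplitude is modeled as a constant plus cos/sin terms in the back-azimuth θ and in 2θ.

```python
import numpy as np

def harmonic_decomposition(rf, baz_deg):
    """Decompose receiver functions rf (n_events x n_samples) recorded at
    back-azimuths baz_deg into harmonic terms
    A0 + A1*cos(baz) + B1*sin(baz) + A2*cos(2*baz) + B2*sin(2*baz).
    Returns the 5 x n_samples coefficient array. Generic sketch."""
    baz = np.deg2rad(np.asarray(baz_deg))
    G = np.column_stack([np.ones_like(baz),
                         np.cos(baz), np.sin(baz),
                         np.cos(2 * baz), np.sin(2 * baz)])
    coeffs, *_ = np.linalg.lstsq(G, rf, rcond=None)
    return coeffs

# Synthetic example: a pure 2-theta (azimuthal anisotropy) pulse.
baz = np.arange(0, 360, 15)
t = np.linspace(0, 10, 200)
rf = 0.3 * np.cos(2 * np.deg2rad(baz))[:, None] * np.exp(-(t - 4.0) ** 2)[None, :]
A0, A1, B1, A2, B2 = harmonic_decomposition(rf, baz)
print(np.max(np.abs(A2)))   # recovers the 0.3-amplitude 2-theta pulse
```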
Atomic Layer Deposition of Bismuth Vanadates for Solar Energy Materials.
Stefik, Morgan
2016-07-07
The fabrication of porous nanocomposites is key to the advancement of energy conversion and storage devices that interface with electrolytes. Bismuth vanadate, BiVO4, is a promising oxide for solar water splitting where the controlled fabrication of BiVO4 layers within porous, conducting scaffolds has remained a challenge. Here, the atomic layer deposition of bismuth vanadates is reported from BiPh3, vanadium(V) oxytriisopropoxide, and water. The resulting films have tunable stoichiometry and may be crystallized to form the photoactive scheelite structure of BiVO4. A selective etching process was used with vanadium-rich depositions to enable the synthesis of phase-pure BiVO4 after spinodal decomposition. BiVO4 thin films were measured for photoelectrochemical performance under AM 1.5 illumination. The average photocurrents were 1.17 mA cm(-2) at 1.23 V versus the reversible hydrogen electrode using a hole-scavenging sulfite electrolyte. The capability to deposit conformal bismuth vanadates will enable a new generation of nanocomposite architectures for solar water splitting. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Analysis and Prediction of Sea Ice Evolution using Koopman Mode Decomposition Techniques
2018-04-30
Title: Analysis and Prediction of Sea Ice Evolution using Koopman Mode Decomposition Techniques. Subject: Monthly Progress Report. Abstract: The program goal is analysis of sea ice dynamical behavior using Koopman Mode Decomposition (KMD) techniques. The work in the program's first month consisted of improvements to data processing code, inclusion of additional arctic sea ice
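Koopman mode decomposition is commonly approximated from snapshot data with the exact dynamic mode decomposition (DMD) algorithm; the sketch below shows that standard algorithm on a toy oscillatory field and is not the project's own code.

```python
import numpy as np

def dmd(snapshots, rank):
    """Exact dynamic mode decomposition (DMD): a data-driven approximation
    to Koopman eigenvalues and modes from a matrix whose columns are
    successive snapshots x_0 ... x_m."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s        # reduced propagator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = (Y @ Vh.conj().T / s) @ W / eigvals       # exact DMD modes
    return eigvals, modes

# Toy field standing in for gridded sea-ice data: two traveling waves.
x = np.linspace(0.0, 1.0, 64)
t = np.arange(200) * 0.1
data = (np.sin(2 * np.pi * (x[:, None] + 0.5 * t[None, :]))
        + 0.5 * np.sin(2 * np.pi * (3 * x[:, None] + 1.3 * t[None, :])))
eigvals, _ = dmd(data, rank=4)
print(np.round(np.abs(eigvals), 3))   # all ~1.0: neutrally stable oscillatory modes
```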
NASA Technical Reports Server (NTRS)
Thompson, James M.; Daniel, Janice D.
1989-01-01
The development of a mass spectrometer/thermal analyzer/computer (MS/TA/Computer) system capable of providing simultaneous thermogravimetry (TG), differential thermal analysis (DTA), derivative thermogravimetry (DTG) and evolved gas detection and analysis (EGD and EGA) under both atmospheric and high pressure conditions is described. The combined system was used to study the thermal decomposition of the nozzle material that constitutes the throat of the solid rocket boosters (SRB).
Sharp phase variations from the plasmon mode causing the Rabi-analogue splitting
NASA Astrophysics Data System (ADS)
Wang, Yujia; Sun, Chengwei; Gan, Fengyuan; Li, Hongyun; Gong, Qihuang; Chen, Jianjun
2017-06-01
The Rabi-analogue splitting in nanostructures resulting from the strong coupling of different resonant modes is of importance for lasing, sensing, switching, modulating, and quantum information processing. To give a clearer physical picture, a phase analysis, rather than a strong-coupling description, is provided to explain the Rabi-analogue splitting in a Fabry-Pérot (FP) cavity whose one end mirror is a metallic nanohole array and whose other is a thin metal film. The phase analysis is based on an analytic model of the FP cavity in which the reflectance and the reflection phase of the end mirrors depend on the wavelength. It is found that the Rabi-analogue splitting originates from the sharp phase variation introduced by the plasmon mode in the FP cavity. In the experiment, the Rabi-analogue splitting is realized in the plasmonic-photonic coupling system, and this splitting can be continuously tuned by changing the length of the FP cavity. The experimental results agree well with the analytic and simulation data, strongly supporting the phase analysis based on the analytic model. The phase analysis presents a clear picture of the working mechanism of the Rabi-analogue splitting; thus, it may facilitate the design of plasmonic-photonic and plasmonic-plasmonic coupling systems.
Analysis of operator splitting errors for near-limit flame simulations
NASA Astrophysics Data System (ADS)
Lu, Zhen; Zhou, Hua; Li, Shan; Ren, Zhuyin; Lu, Tianfeng; Law, Chung K.
2017-04-01
High-fidelity simulations of ignition, extinction and oscillatory combustion processes are of practical interest in a broad range of combustion applications. Splitting schemes, widely employed in reactive flow simulations, could fail for stiff reaction-diffusion systems exhibiting near-limit flame phenomena. The present work first employs a model perfectly stirred reactor (PSR) problem with an Arrhenius reaction term and a linear mixing term to study the effects of splitting errors on the near-limit combustion phenomena. Analysis shows that the errors induced by decoupling of the fractional steps may result in unphysical extinction or ignition. The analysis is then extended to the prediction of ignition, extinction and oscillatory combustion in unsteady PSRs of various fuel/air mixtures with a 9-species detailed mechanism for hydrogen oxidation and an 88-species skeletal mechanism for n-heptane oxidation, together with a Jacobian-based analysis for the time scales. The tested schemes include the Strang splitting, the balanced splitting, and a newly developed semi-implicit midpoint method. Results show that the semi-implicit midpoint method can accurately reproduce the dynamics of the near-limit flame phenomena and it is second-order accurate over a wide range of time step size. For the extinction and ignition processes, both the balanced splitting and midpoint method can yield accurate predictions, whereas the Strang splitting can lead to significant shifts on the ignition/extinction processes or even unphysical results. With an enriched H radical source in the inflow stream, a delay of the ignition process and the deviation on the equilibrium temperature are observed for the Strang splitting. On the contrary, the midpoint method that solves reaction and diffusion together matches the fully implicit accurate solution. The balanced splitting predicts the temperature rise correctly but with an over-predicted peak. For the sustainable and decaying oscillatory combustion from cool flames, both the Strang splitting and the midpoint method can successfully capture the dynamic behavior, whereas the balanced splitting scheme results in significant errors.
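The model problem described above lends itself to a compact illustration. The sketch below uses placeholder rate parameters and a simple RK4 sub-integrator (not the specific schemes or chemistry of the paper) to contrast Strang splitting of the reaction and mixing terms with a fully coupled reference solve of the same unsteady PSR equation.

```python
import numpy as np

# Model unsteady PSR: a single normalized temperature y with an Arrhenius
# source term and linear mixing toward the inflow state. All parameter
# values are placeholders chosen to sit near an ignition limit.
A, TA, T0, Y_IN, TAU = 2000.0, 5.0, 0.2, 0.0, 1.0
reaction = lambda y: A * (1.0 - y) * np.exp(-TA / (T0 + y))
mixing = lambda y: (Y_IN - y) / TAU
rhs = lambda y: reaction(y) + mixing(y)

def integrate(f, y, t_end, dt):
    """Classical RK4 with a fixed step."""
    for _ in range(int(round(t_end / dt))):
        k1 = f(y); k2 = f(y + 0.5*dt*k1); k3 = f(y + 0.5*dt*k2); k4 = f(y + dt*k3)
        y = y + dt / 6.0 * (k1 + 2*k2 + 2*k3 + k4)
    return y

def strang_step(y, dt):
    """One Strang step: half mixing, full reaction, half mixing,
    each substep sub-cycled with RK4 for clarity."""
    y = integrate(mixing, y, 0.5 * dt, 0.05 * dt)
    y = integrate(reaction, y, dt, 0.05 * dt)
    return integrate(mixing, y, 0.5 * dt, 0.05 * dt)

y0, t_end = 0.5, 2.0
reference = integrate(rhs, y0, t_end, 1e-4)        # fully coupled solve
for dt in (0.2, 0.05, 0.0125):
    y = y0
    for _ in range(int(round(t_end / dt))):
        y = strang_step(y, dt)
    print(f"dt = {dt:7.4f}  split y(t_end) = {y:.5f}  error = {abs(y - reference):.2e}")
```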
Duemichen, E; Braun, U; Senz, R; Fabian, G; Sturm, H
2014-08-08
For analysis of the gaseous thermal decomposition products of polymers, the common techniques are thermogravimetry combined with Fourier transform infrared spectroscopy (TGA-FTIR) and with mass spectrometry (TGA-MS). These methods offer a simple approach to the decomposition mechanism, especially for small decomposition molecules. Complex spectra of gaseous mixtures are very often hard to identify because of overlapping signals. In this paper a new method is described in which the decomposition products are adsorbed under controlled conditions in TGA onto solid-phase extraction (SPE) material: twisters. Subsequently, the twisters were analysed by thermal desorption gas chromatography mass spectrometry (TDS-GC-MS), which allows the decomposition products to be separated and identified using an MS library. The thermoplastics polyamide 66 (PA 66) and polybutylene terephthalate (PBT) were used as example polymers. The influence of the sample mass and of the purge gas flow during the decomposition process was investigated in TGA. The advantages and limitations of the method are presented in comparison to the common analysis techniques, TGA-FTIR and TGA-MS. Copyright © 2014 Elsevier B.V. All rights reserved.
Premkumar, Thathan; Govindarajan, Subbiah; Coles, Andrew E; Wight, Charles A
2005-04-07
The thermal decomposition kinetics of N(2)H(5)[Ce(pyrazine-2,3-dicarboxylate)(2)(H(2)O)] (Ce-P) have been studied by thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC), for the first time; TGA analysis reveals an oxidative decomposition process yielding CeO(2) as the final product with an activation energy of approximately 160 kJ mol(-1). This complex may be used as a precursor to fine particle cerium oxides due to its low temperature of decomposition.
NASA Astrophysics Data System (ADS)
Vaks, V. L.; Domracheva, E. G.; Chernyaeva, M. B.; Pripolzin, S. I.; Revin, L. S.; Tretyakov, I. V.; Anfertyev, V. A.; Yablokov, A. A.; Lukyanenko, I. A.; Sheikov, Yu. V.
2018-02-01
We show prospects for using the method of high-resolution terahertz spectroscopy for a continuous analysis of the decomposition products of energy substances in the gas phase (including short-lived ones) in a wide temperature range. The experimental setup, which includes a terahertz spectrometer for studying the thermal decomposition reactions, is described. The results of analysis of the gaseous decomposition products of energy substances by the example of ammonium nitrate heated from room temperature to 167°C are presented.
Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy
NASA Astrophysics Data System (ADS)
Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng
2018-06-01
To analyse the component of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition of fuzzy output entropy, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropy contributed by fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only tell the importance of fuzzy inputs but also simultaneously reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.
NASA Astrophysics Data System (ADS)
Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu
2018-04-01
A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method addresses two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) it avoids a double-loop iteration algorithm, which generally has large computational complexity, and (2) it accounts for the local concentration of nonlinear deformation observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Several numerical tests, including weak scaling tests, were performed using the conventional and proposed domain decomposition methods. The convergence performance of the proposed method is comparable to that of the conventional method; in particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance.
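The flavor of replacing Newton-Raphson with a quasi-Newton iteration can be conveyed on a toy nonlinear system; the sketch below uses a Broyden rank-one update and stands in for the subdomain-condensed equilibrium equations only conceptually (the paper's method additionally applies a balancing domain decomposition preconditioner, which is not reproduced here).

```python
import numpy as np

def broyden_solve(residual, u0, J0, tol=1e-10, max_iter=50):
    """Quasi-Newton ("good" Broyden) solve of residual(u) = 0, starting
    from an initial Jacobian J0 that is never re-assembled. Toy sketch of
    avoiding repeated tangent assembly; not the paper's algorithm."""
    u = np.array(u0, dtype=float)
    B = np.array(J0, dtype=float)          # approximate Jacobian
    r = residual(u)
    for _ in range(max_iter):
        du = np.linalg.solve(B, -r)
        u_new = u + du
        r_new = residual(u_new)
        if np.linalg.norm(r_new) < tol:
            return u_new
        # Broyden rank-one update of the approximate Jacobian
        B += np.outer(r_new, du) / np.dot(du, du)
        u, r = u_new, r_new
    return u

# Toy nonlinear system: linear stiffness plus a cubic hardening term.
K = np.array([[4.0, -1.0], [-1.0, 3.0]])
f = np.array([1.0, 2.0])
residual = lambda u: K @ u + 0.5 * u**3 - f
u = broyden_solve(residual, u0=[0.0, 0.0], J0=K)
print(u, residual(u))
```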
The Efficiency of Split Panel Designs in an Analysis of Variance Model
Wang, Wei-Guo; Liu, Hai-Jun
2016-01-01
We consider split panel design efficiency in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in the total sample so as to minimize the variances of best linear unbiased estimators of linear combinations of parameters. An orthogonal matrix is constructed to obtain manageable expressions for the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of the interest and budget parameters. Additionally, the efficiency of an estimator based on the split panel relative to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from a split panel can be quite substantial. We further consider the efficiency of the split panel design given a budget, and transform the problem into a constrained nonlinear integer program. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer program. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985 in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447
Transportation Network Analysis and Decomposition Methods
DOT National Transportation Integrated Search
1978-03-01
The report outlines research in transportation network analysis using decomposition techniques as a basis for problem solutions. Two transportation network problems were considered in detail: a freight network flow problem and a scheduling problem fo...
Black hole perturbation under a 2+2 decomposition in the action
NASA Astrophysics Data System (ADS)
Ripley, Justin L.; Yagi, Kent
2018-01-01
Black hole perturbation theory is useful for studying the stability of black holes and calculating ringdown gravitational waves after the collision of two black holes. Most previous calculations were carried out at the level of the field equations instead of the action. In this work, we compute the Einstein-Hilbert action to quadratic order in linear metric perturbations about a spherically symmetric vacuum background in Regge-Wheeler gauge. Using a 2+2 splitting of spacetime, we expand the metric perturbations into a sum over scalar, vector, and tensor spherical harmonics, and dimensionally reduce the action to two dimensions by integrating over the two-sphere. We find that the axial perturbation degree of freedom is described by a two-dimensional massive vector action, and that the polar perturbation degree of freedom is described by a two-dimensional dilaton massive gravity action. Varying the dimensionally reduced actions, we rederive covariant and gauge-invariant master equations for the axial and polar degrees of freedom. Thus, the two-dimensional massive vector and massive gravity actions we derive by dimensionally reducing the perturbed Einstein-Hilbert action describe the dynamics of a well-studied physical system: the metric perturbations of a static black hole. The 2+2 formalism we present can be generalized to (m+n)-dimensional spacetime splittings, which may be useful in more generic situations, such as expanding metric perturbations in higher dimensional gravity. We provide a self-contained presentation of the m+n formalism for vacuum spacetime splittings.
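For context, the axial master equation that such reductions recover is the Regge-Wheeler equation; its standard Schwarzschild form (a textbook result quoted for reference, not re-derived from this paper) is

```latex
\frac{d^{2}\psi}{dr_{*}^{2}} + \left[\omega^{2} - V_{\mathrm{RW}}(r)\right]\psi = 0,
\qquad
V_{\mathrm{RW}}(r) = \left(1-\frac{2M}{r}\right)\left[\frac{\ell(\ell+1)}{r^{2}} - \frac{6M}{r^{3}}\right],
```

where r_* is the tortoise coordinate, M the black hole mass, and ℓ the spherical-harmonic index.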
Electrochemical and Infrared Absorption Spectroscopy Detection of SF₆ Decomposition Products.
Dong, Ming; Zhang, Chongxing; Ren, Ming; Albarracín, Ricardo; Ye, Rixin
2017-11-15
Sulfur hexafluoride (SF₆) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF₆ decomposition and ultimately generates several types of decomposition products. These SF₆ decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF₆ decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF₆ gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. A SF₆ decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF₆ gas decomposition and is verified to reliably and accurately detect the gas components and concentrations.
NASA Technical Reports Server (NTRS)
Nairn, John A.
1992-01-01
A combined analytical and experimental study was conducted to analyze microcracking, microcrack-induced delamination, and longitudinal splitting in polymer matrix composites. Strain energy release rates, calculated by a variational analysis, were used in a failure criterion to predict microcracking. Predictions and test results were compared for static, fatigue, and cyclic thermal loading. The longitudinal splitting analysis accounted for the effects of fiber bridging. Test data are analyzed and compared for longitudinal splitting and delamination under mixed-mode loading. This study emphasizes the importance of using fracture mechanics analyses to understand the complex failure processes that govern composite strength and life.
NASA Technical Reports Server (NTRS)
Schroeder, M. A.
1980-01-01
A summary of a literature review on the thermal decomposition of HMX and RDX is presented. The decomposition apparently follows first-order kinetics. Recommended values of the Arrhenius parameters are given for HMX and RDX decomposition in the gaseous and liquid phases and for decomposition of RDX in solution in TNT. The apparent importance of autocatalysis is pointed out, as are some possible complications that may be encountered in interpreting, extending, or extrapolating kinetic data for these compounds from measurements carried out below their melting points to the higher temperatures and pressures characteristic of combustion.
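A first-order Arrhenius description of the kind recommended in the review can be sketched in a few lines; the pre-exponential factor and activation energy below are placeholders, not the recommended HMX or RDX values.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def isothermal_alpha(t, T, A, E):
    """Extent of first-order decomposition at time t (s) and temperature
    T (K): alpha(t) = 1 - exp(-k t) with k = A * exp(-E / (R * T)).
    A (s^-1) and E (J mol^-1) are placeholders, not recommended values."""
    k = A * np.exp(-E / (R * T))
    return 1.0 - np.exp(-k * t)

t = np.linspace(0, 600, 7)                      # a 10-minute isothermal hold
for T in (500.0, 520.0, 540.0):                 # temperatures in kelvin
    alpha = isothermal_alpha(t, T, A=1e16, E=1.9e5)
    print(T, np.round(alpha, 3))
```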
Chao, T.T.; Sanzolone, R.F.
1992-01-01
Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor to sample throughput, especially with the recent application of fast and modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose the sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, elements to be determined, precision and accuracy requirements, sample throughput, technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition along with examples of their application to geochemical analysis. The chemical properties of reagents as to their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire assaying for noble metals or decomposition techniques for X-ray fluorescence or nuclear methods. © 1992.
Using Microwave Sample Decomposition in Undergraduate Analytical Chemistry
NASA Astrophysics Data System (ADS)
Griff Freeman, R.; McCurdy, David L.
1998-08-01
A shortcoming of many undergraduate classes in analytical chemistry is that students receive little exposure to sample preparation in chemical analysis. This paper reports the progress made in introducing microwave sample decomposition into several quantitative analysis experiments at Truman State University. Two experiments being performed in our current laboratory rotation include closed vessel microwave decomposition applied to the classical gravimetric determination of nickel and the determination of sodium in snack foods by flame atomic emission spectrometry. A third lab, using open-vessel microwave decomposition for the Kjeldahl nitrogen determination is now ready for student trial. Microwave decomposition reduces the time needed to complete these experiments and significantly increases the student awareness of the importance of sample preparation in quantitative chemical analyses, providing greater breadth and realism in the experiments.
Nosworthy, Matthew G; Franczyk, Adam J; Medina, Gerardo; Neufeld, Jason; Appah, Paulyn; Utioh, Alphonsus; Frohlich, Peter; House, James D
2017-09-06
In order to determine the effect of extrusion, baking, and cooking on the protein quality of yellow and green split peas, a rodent bioassay was conducted and compared to an in vitro method of protein quality determination. The Protein Digestibility-Corrected Amino Acid Score (PDCAAS) of green split peas (71.4%) was higher than that of yellow split peas (67.8%), on average. Similarly, the average Digestible Indispensable Amino Acid Score (DIAAS) of green split peas (69%) was higher than that of yellow split peas (67%). Cooked green pea flour had lower PDCAAS and DIAAS values (69.19% and 67%) than either extruded (73.61%, 70%) or baked (75.22%, 70%) flour. Conversely, cooked yellow split peas had the highest PDCAAS value (69.19%), while extruded yellow split peas had the highest DIAAS value (67%). Interestingly, a strong correlation was found between the in vivo and in vitro analyses of protein quality (R² = 0.9745). This work highlights the effects of the different processing methods on pea protein quality and suggests that in vitro measurements of protein digestibility could be used as a surrogate for in vivo analysis.
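The PDCAAS calculation itself is compact enough to show; the amino acid profile, reference pattern, and digestibility below are made-up illustrative numbers, not the measured pea values.

```python
def pdcaas(aa_mg_per_g_protein, reference_pattern, digestibility):
    """PDCAAS = (lowest amino-acid ratio versus the reference pattern)
    x true fecal protein digestibility, truncated at 1.0 (100%).
    All inputs below are illustrative, not measured pea values."""
    scores = {aa: aa_mg_per_g_protein[aa] / reference_pattern[aa]
              for aa in reference_pattern}
    limiting_aa = min(scores, key=scores.get)
    return min(scores[limiting_aa] * digestibility, 1.0), limiting_aa

# Hypothetical amino acid profile (mg per g protein) and reference pattern.
profile = {"lysine": 72, "methionine+cysteine": 21, "threonine": 38, "tryptophan": 9}
reference = {"lysine": 58, "methionine+cysteine": 25, "threonine": 34, "tryptophan": 11}
print(pdcaas(profile, reference, digestibility=0.85))
```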
NASA Astrophysics Data System (ADS)
Sridhar, J.
2015-12-01
The focus of this work is to examine polarimetric decomposition techniques, primarily the Pauli decomposition and the Sphere Di-Plane Helix (SDH) decomposition, for forest resource assessment. The data processing steps adopted are pre-processing (geometric correction and radiometric calibration), speckle reduction, image decomposition, and image classification. Initially, to classify forest regions, unsupervised classification was applied to determine the unknown classes; the K-means clustering method was observed to give better results than the ISODATA method. Using the algorithms developed for Radar Tools, the code for the decomposition and classification techniques was implemented in Interactive Data Language (IDL) and applied to a RISAT-1 image of the Mysore-Mandya region of Karnataka, India. This region was chosen for studying forest vegetation and consists of agricultural lands, water, and hilly regions. Polarimetric SAR data possess a high potential for classification of the Earth's surface. After applying the decomposition techniques, classification was done by selecting regions of interest, and post-classification the overall accuracy was observed to be higher in the SDH-decomposed image, as SDH operates on individual pixels on a coherent basis and utilises the complete intrinsic coherent nature of polarimetric SAR data, making it particularly suited to the analysis of high-resolution SAR data. The Pauli decomposition represents all the polarimetric information in a single SAR image; however, interpretation of the resulting image is difficult. The SDH decomposition technique appears to produce better results and interpretation than the Pauli decomposition, although more quantification and further analysis are being done in this area of research. The comparison of polarimetric decomposition techniques and evolutionary classification techniques will be the scope of this work.
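The Pauli decomposition applied in this work has a simple closed form per pixel; the sketch below uses the generic formula with a made-up scattering matrix rather than RISAT-1 data.

```python
import numpy as np

def pauli_decomposition(S):
    """Pauli decomposition of a 2x2 complex scattering matrix
    [[S_hh, S_hv], [S_vh, S_vv]] (reciprocal medium assumed, S_hv = S_vh).
    Returns (a, b, c): odd-bounce, even-bounce, and cross-pol components."""
    a = (S[0, 0] + S[1, 1]) / np.sqrt(2)   # surface / odd-bounce
    b = (S[0, 0] - S[1, 1]) / np.sqrt(2)   # double-bounce
    c = np.sqrt(2) * S[0, 1]               # cross-pol / volume-like scattering
    return a, b, c

# Made-up scattering matrix for a single pixel.
S = np.array([[0.80 + 0.10j, 0.05 + 0.02j],
              [0.05 + 0.02j, 0.60 - 0.20j]])
a, b, c = pauli_decomposition(S)
# |a|^2, |b|^2, |c|^2 are typically mapped to blue, red, green in a Pauli RGB image.
print(np.abs(a)**2, np.abs(b)**2, np.abs(c)**2)
```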
Yu-Kang, Tu
2016-12-01
Network meta-analysis for multiple treatment comparisons has been a major development in evidence synthesis methodology. The validity of a network meta-analysis, however, can be threatened by inconsistency in evidence within the network. One particular issue of inconsistency is how to directly evaluate the inconsistency between direct and indirect evidence with regard to the effects difference between two treatments. A Bayesian node-splitting model was first proposed and a similar frequentist side-splitting model has been put forward recently. Yet, assigning the inconsistency parameter to one or the other of the two treatments or splitting the parameter symmetrically between the two treatments can yield different results when multi-arm trials are involved in the evaluation. We aimed to show that a side-splitting model can be viewed as a special case of design-by-treatment interaction model, and different parameterizations correspond to different design-by-treatment interactions. We demonstrated how to evaluate the side-splitting model using the arm-based generalized linear mixed model, and an example data set was used to compare results from the arm-based models with those from the contrast-based models. The three parameterizations of side-splitting make slightly different assumptions: the symmetrical method assumes that both treatments in a treatment contrast contribute to inconsistency between direct and indirect evidence, whereas the other two parameterizations assume that only one of the two treatments contributes to this inconsistency. With this understanding in mind, meta-analysts can then make a choice about how to implement the side-splitting method for their analysis. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Ferreira, Verónica; Koricheva, Julia; Duarte, Sofia; Niyogi, Dev K; Guérold, François
2016-03-01
Many streams worldwide are affected by heavy metal contamination, mostly due to past and present mining activities. Here we present a meta-analysis of 38 studies (reporting 133 cases) published between 1978 and 2014 that reported the effects of heavy metal contamination on the decomposition of terrestrial litter in running waters. Overall, heavy metal contamination significantly inhibited litter decomposition. The effect was stronger for laboratory than for field studies, likely due to better control of confounding variables in the former, antagonistic interactions between metals and other environmental variables in the latter or differences in metal identity and concentration between studies. For laboratory studies, only copper + zinc mixtures significantly inhibited litter decomposition, while no significant effects were found for silver, aluminum, cadmium or zinc considered individually. For field studies, coal and metal mine drainage strongly inhibited litter decomposition, while drainage from motorways had no significant effects. The effect of coal mine drainage did not depend on drainage pH. Coal mine drainage negatively affected leaf litter decomposition independently of leaf litter identity; no significant effect was found for wood decomposition, but sample size was low. Considering metal mine drainage, arsenic mines had a stronger negative effect on leaf litter decomposition than gold or pyrite mines. Metal mine drainage significantly inhibited leaf litter decomposition driven by both microbes and invertebrates, independently of leaf litter identity; no significant effect was found for microbially driven decomposition, but sample size was low. Overall, mine drainage negatively affects leaf litter decomposition, likely through negative effects on invertebrates. Copyright © 2015 Elsevier Ltd. All rights reserved.
Muravyev, Nikita V; Koga, Nobuyoshi; Meerov, Dmitry B; Pivkina, Alla N
2017-01-25
This study focused on kinetic modeling of a specific type of multistep heterogeneous reaction comprising exothermic and endothermic reaction steps, as exemplified by the practical kinetic analysis of the experimental kinetic curves for the thermal decomposition of molten ammonium dinitramide (ADN). It is known that the thermal decomposition of ADN occurs as a consecutive two-step mass-loss process comprising the decomposition of ADN and subsequent evaporation/decomposition of in situ generated ammonium nitrate. These reaction steps provide exothermic and endothermic contributions, respectively, to the overall thermal effect. The overall reaction process was deconvoluted into two reaction steps using simultaneously recorded thermogravimetry and differential scanning calorimetry (TG-DSC) curves by considering the different physical meanings of the kinetic data derived from TG and DSC by P value analysis. The kinetic data thus separated into exothermic and endothermic reaction steps were kinetically characterized using kinetic computation methods including the isoconversional method, combined kinetic analysis, and the master plot method. The overall kinetic behavior was reproduced as the sum of the kinetic equations for each reaction step considering the contributions to the rate data derived from TG and DSC. During reproduction of the kinetic behavior, the kinetic parameters and contributions of each reaction step were optimized using kinetic deconvolution analysis. As a result, the thermal decomposition of ADN was successfully modeled as partially overlapping exothermic and endothermic reaction steps. The logic of the kinetic modeling was critically examined, and the practical usefulness of phenomenological modeling for the thermal decomposition of ADN was illustrated to demonstrate the validity of the methodology and its applicability to similar complex reaction processes.
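A heavily simplified sketch of the kinetic deconvolution idea follows: each reaction step is represented by a first-order Arrhenius rate under linear heating, the overall rate is modelled as a weighted sum of the steps, and the contributions are recovered by least squares from a synthetic "measured" curve. The Arrhenius parameters, heating rate and contributions are invented, and the paper's simultaneous optimization of all kinetic parameters and its separate treatment of TG- and DSC-derived rates are not reproduced.

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314          # gas constant, J mol-1 K-1
beta = 5.0 / 60.0  # heating rate, K s-1
T0 = 420.0         # start temperature, K

def step_rate(t, A, E):
    """Conversion rate of one first-order step under linear heating."""
    rhs = lambda t, a: [A * np.exp(-E / (R * (T0 + beta * t))) * (1 - a[0])]
    sol = solve_ivp(rhs, (0, t[-1]), [0.0], t_eval=t, rtol=1e-8)
    T = T0 + beta * t
    return A * np.exp(-E / (R * T)) * (1 - sol.y[0])

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4000.0, 500)
r_exo = step_rate(t, A=1e10, E=120e3)   # stand-in for the exothermic decomposition step
r_endo = step_rate(t, A=1e12, E=160e3)  # stand-in for the endothermic evaporation step

# Synthetic "measured" overall rate = 0.6*step1 + 0.4*step2 plus noise
overall = 0.6 * r_exo + 0.4 * r_endo
overall = overall + rng.normal(0.0, 0.02 * overall.max(), overall.size)

# Kinetic deconvolution (simplified): given the per-step kinetics, recover the
# contributions c1, c2 by linear least squares.  In practice the Arrhenius
# parameters would be refined together with the contributions.
G = np.column_stack([r_exo, r_endo])
c, *_ = np.linalg.lstsq(G, overall, rcond=None)
print("fitted contributions:", c)
```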
Visible light water splitting using dye-sensitized oxide semiconductors.
Youngblood, W Justin; Lee, Seung-Hyun Anna; Maeda, Kazuhiko; Mallouk, Thomas E
2009-12-21
Researchers are intensively investigating photochemical water splitting as a means of converting solar to chemical energy in the form of fuels. Hydrogen is a key solar fuel because it can be used directly in combustion engines or fuel cells, or combined catalytically with CO2 to make carbon-containing fuels. Different approaches to solar water splitting include semiconductor particles as photocatalysts and photoelectrodes, molecular donor-acceptor systems linked to catalysts for hydrogen and oxygen evolution, and photovoltaic cells coupled directly or indirectly to electrocatalysts. Despite several decades of research, solar hydrogen generation is efficient only in systems that use expensive photovoltaic cells to power water electrolysis. Direct photocatalytic water splitting is a challenging problem because the reaction is thermodynamically uphill. Light absorption results in the formation of energetic charge-separated states in both molecular donor-acceptor systems and semiconductor particles. Unfortunately, energetically favorable charge recombination reactions tend to be much faster than the slow multielectron processes of water oxidation and reduction. Consequently, visible light water splitting has only recently been achieved in semiconductor-based photocatalytic systems and remains an inefficient process. This Account describes our approach to two problems in solar water splitting: the organization of molecules into assemblies that promote long-lived charge separation, and catalysis of the electrolysis reactions, in particular the four-electron oxidation of water. The building blocks of our artificial photosynthetic systems are wide band gap semiconductor particles, photosensitizer and electron relay molecules, and nanoparticle catalysts. We intercalate layered metal oxide semiconductors with metal nanoparticles. These intercalation compounds, when sensitized with [Ru(bpy)3]2+ derivatives, catalyze the photoproduction of hydrogen from sacrificial electron donors (EDTA2-) or non-sacrificial donors (I-). Through exfoliation of layered metal oxide semiconductors, we construct multilayer electron donor-acceptor thin films or sensitized colloids in which individual nanosheets mediate light-driven electron transfer reactions. When sensitizer molecules are "wired" to IrO2·nH2O nanoparticles, a dye-sensitized TiO2 electrode becomes the photoanode of a water-splitting photoelectrochemical cell. Although this system is an interesting proof-of-concept, the performance of these cells is still poor (approximately 1% quantum yield) and the dye photodegrades rapidly. We can understand the quantum efficiency and degradation in terms of competing kinetic pathways for water oxidation, back electron transfer, and decomposition of the oxidized dye molecules. Laser flash photolysis experiments allow us to measure these competing rates and, in principle, to improve the performance of the cell by changing the architecture of the electron transfer chain.
Beyond Principal Component Analysis: A Trilinear Decomposition Model and Least Squares Estimation.
ERIC Educational Resources Information Center
Pham, Tuan Dinh; Mocks, Joachim
1992-01-01
Sufficient conditions are derived for the consistency and asymptotic normality of the least squares estimator of a trilinear decomposition model for multiway data analysis. The limiting covariance matrix is computed. (Author/SLD)
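As a concrete, hedged illustration of least-squares estimation for a trilinear model (a from-scratch alternating least squares routine, not the estimator analyzed in the abstract above), the following Python sketch fits x_ijk ≈ Σ_r a_ir b_jr c_kr to synthetic three-way data:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product: rows index (i, j) pairs of A and B."""
    r = A.shape[1]
    return (A[:, None, :] * B[None, :, :]).reshape(-1, r)

def unfold(X, mode):
    """Mode-n unfolding of a 3-way array (row-major convention)."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def parafac_als(X, rank, n_iter=200, seed=0):
    """Alternating least squares for the trilinear (PARAFAC) decomposition."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iter):
        A = unfold(X, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(X, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(X, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Synthetic rank-2 three-way data with a little noise
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (10, 8, 6))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0) + 0.01 * rng.standard_normal((10, 8, 6))

A, B, C = parafac_als(X, rank=2)
Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
print("relative fit error:", np.linalg.norm(X - Xhat) / np.linalg.norm(X))
```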
Automatic network coupling analysis for dynamical systems based on detailed kinetic models.
Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich
2005-10-01
We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
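A toy version of the core computation, sensitivities along the state trajectory followed by singular value decomposition to count locally active modes, is sketched below for a mass-action Michaelis-Menten system; the paper's error-controlled model-reduction machinery is not reproduced, and the finite-difference sensitivities, window length and threshold are ad hoc choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def michaelis_menten(t, y, k1=1.0, km1=0.5, k2=0.3):
    """Mass-action Michaelis-Menten kinetics: S + E <-> C -> P + E."""
    s, e, c, p = y
    v1 = k1 * s * e - km1 * c
    v2 = k2 * c
    return [-v1, -v1 + v2, v1 - v2, v2]

def local_sensitivity(f, y, t, dt=10.0, eps=1e-6):
    """Finite-difference sensitivity of the state at t+dt w.r.t. the state at t."""
    n = len(y)
    base = solve_ivp(f, (t, t + dt), y, rtol=1e-9, atol=1e-12).y[:, -1]
    S = np.empty((n, n))
    for j in range(n):
        yp = np.array(y, float)
        yp[j] += eps
        pert = solve_ivp(f, (t, t + dt), yp, rtol=1e-9, atol=1e-12).y[:, -1]
        S[:, j] = (pert - base) / eps
    return S

y0 = [1.0, 0.1, 0.0, 0.0]
traj = solve_ivp(michaelis_menten, (0, 50), y0, t_eval=np.linspace(0, 50, 6))
for t, y in zip(traj.t, traj.y.T):
    sv = np.linalg.svd(local_sensitivity(michaelis_menten, y, t), compute_uv=False)
    active = int(np.sum(sv > 1e-3 * sv[0]))   # arbitrary relative threshold
    print(f"t = {t:5.1f}  active modes = {active}  singular values: {np.round(sv, 4)}")
```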
Dan, Tong; Liu, Wenjun; Song, Yuqin; Xu, Haiyan; Menghe, Bilige; Zhang, Heping; Sun, Zhihong
2015-05-20
Lactobacillus fermentum is economically important in the production and preservation of fermented foods. A repeatable and discriminative typing method was devised to characterize L. fermentum at the molecular level. The multilocus sequence typing (MLST) scheme developed was based on analysis of the internal sequence of 11 housekeeping gene fragments (clpX, dnaA, dnaK, groEL, murC, murE, pepX, pyrG, recA, rpoB, and uvrC). MLST analysis of 203 isolates of L. fermentum from Mongolia and seven provinces/autonomous regions in China identified 57 sequence types (STs), 27 of which were represented by only a single isolate, indicating high genetic diversity. Phylogenetic analyses based on the sequence of the 11 housekeeping gene fragments indicated that the L. fermentum isolates analyzed belonged to two major groups. A standardized index of association (I_A^S) indicated a weak clonal population structure in L. fermentum. Split decomposition analysis indicated that recombination played an important role in generating the genetic diversity observed in L. fermentum. The results from the minimum spanning tree strongly suggested that evolution of L. fermentum STs was not correlated with geography or food type. The MLST scheme developed will be valuable for further studies on the evolution and population structure of L. fermentum isolates used in food products.
Dan, Tong; Liu, Wenjun; Sun, Zhihong; Lv, Qiang; Xu, Haiyan; Song, Yuqin; Zhang, Heping
2014-06-09
Economically, Leuconostoc lactis is one of the most important species in the genus Leuconostoc. It plays an important role in the food industry including the production of dextrans and bacteriocins. Currently, traditional molecular typing approaches for characterisation of this species at the isolate level are either unavailable or are not sufficiently reliable for practical use. Multilocus sequence typing (MLST) is a robust and reliable method for characterising bacterial and fungal species at the molecular level. In this study, a novel MLST protocol was developed for 50 L. lactis isolates from Mongolia and China. Sequences from eight targeted genes (groEL, carB, recA, pheS, murC, pyrG, rpoB and uvrC) were obtained. Sequence analysis indicated 20 different sequence types (STs), with 13 of them being represented by a single isolate. Phylogenetic analysis based on the sequences of eight MLST loci indicated that the isolates belonged to two major groups, A (34 isolates) and B (16 isolates). Linkage disequilibrium analyses indicated that recombination occurred at a low frequency in L. lactis, indicating a clonal population structure. Split-decomposition analysis indicated that intraspecies recombination played a role in generating genotypic diversity amongst isolates. Our results indicated that MLST is a valuable tool for typing L. lactis isolates that can be used for further monitoring of evolutionary changes and population genetics.
ERIC Educational Resources Information Center
Koga, Nobuyoshi; Goshi, Yuri; Yoshikawa, Masahiro; Tatsuoka, Tomoyuki
2014-01-01
An undergraduate kinetic experiment of the thermal decomposition of solids by microscopic observation and thermal analysis was developed by investigating a suitable reaction, applicable techniques of thermal analysis and microscopic observation, and a reliable kinetic calculation method. The thermal decomposition of sodium hydrogen carbonate is…
The Thermal Decomposition of Basic Copper(II) Sulfate.
ERIC Educational Resources Information Center
Tanaka, Haruhiko; Koga, Nobuyoshi
1990-01-01
Discussed is the preparation of synthetic brochantite from solution and a thermogravimetric-differential thermal analysis study of the thermal decomposition of this compound. Other analyses included are chemical analysis and IR spectroscopy. Experimental procedures and results are presented. (CW)
Electrochemical and Infrared Absorption Spectroscopy Detection of SF6 Decomposition Products
Dong, Ming; Ren, Ming; Ye, Rixin
2017-01-01
Sulfur hexafluoride (SF6) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF6 decomposition and ultimately generates several types of decomposition products. These SF6 decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF6 decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF6 gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. A SF6 decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF6 gas decomposition and is verified to reliably and accurately detect the gas components and concentrations. PMID:29140268
Solar activity and oscillation frequency splittings
NASA Technical Reports Server (NTRS)
Woodard, M. F.; Libbrecht, K. G.
1993-01-01
Solar p-mode frequency splittings, parameterized by the coefficients through order N = 12 of a Legendre polynomial expansion of the mode frequencies as a function of m/L, were obtained from an analysis of helioseismology data taken at Big Bear Solar Observatory during the 4 years 1986 and 1988-1990 (approximately solar minimum to maximum). Inversion of the even-index splitting coefficients confirms that there is a significant contribution to the frequency splittings originating near the solar poles. The strength of the polar contribution is anticorrelated with the overall level of solar activity in the active latitudes, suggesting a relation to polar faculae. From an analysis of the odd-index splitting coefficients we infer an upper limit to changes in the solar equatorial near-surface rotational velocity of less than 1.9 m/s (3 sigma limit) between solar minimum and maximum.
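As a minimal, hedged sketch of the kind of expansion referred to above, the Python code below generates hypothetical splittings for a single mode of degree L, expands them in Legendre polynomials of m/L up to order N = 12, and recovers the coefficients by least squares; the coefficient normalization conventions of the original analysis are not reproduced.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)

# Hypothetical frequency splittings (nHz) for a single mode of degree L
L = 20
m = np.arange(-L, L + 1)
x = m / L
true_a = np.zeros(13)
true_a[1], true_a[3] = 400.0, 20.0    # odd coefficients: sensitive to rotation
true_a[2] = -5.0                      # even coefficients: sensitive to asphericity/activity
splittings = legendre.legval(x, true_a) + rng.normal(0.0, 3.0, x.size)

# Least-squares fit of the Legendre coefficients a_0 ... a_12 (order N = 12)
a_fit = legendre.legfit(x, splittings, deg=12)
print("odd-index coefficients: ", np.round(a_fit[1::2], 1))
print("even-index coefficients:", np.round(a_fit[0::2], 1))
```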
Baldrian, Petr; López-Mondéjar, Rubén
2014-02-01
Molecular methods for the analysis of biomolecules have undergone rapid technological development in the last decade. The advent of next-generation sequencing methods and improvements in instrumental resolution enabled the analysis of complex transcriptome, proteome and metabolome data, as well as a detailed annotation of microbial genomes. The mechanisms of decomposition by model fungi have been described in unprecedented detail by the combination of genome sequencing, transcriptomics and proteomics. The increasing number of available genomes for fungi and bacteria shows that the genetic potential for decomposition of organic matter is widespread among taxonomically diverse microbial taxa, while expression studies document the importance of the regulation of expression in decomposition efficiency. Importantly, high-throughput methods of nucleic acid analysis used for the analysis of metagenomes and metatranscriptomes indicate the high diversity of decomposer communities in natural habitats and their taxonomic composition. Today, the metaproteomics of natural habitats is of interest. In combination with advanced analytical techniques to explore the products of decomposition and the accumulation of information on the genomes of environmentally relevant microorganisms, advanced methods in microbial ecophysiology should increase our understanding of the complex processes of organic matter transformation.
NASA Technical Reports Server (NTRS)
Korzennik, Sylvain G.
1997-01-01
We have carried out the data reduction and analysis of Mt. Wilson 60-foot solar tower high-spatial-resolution observations. The reduction of the 100-day-long summer of 1990 observation campaign in terms of rotational splittings was completed, yielding in excess of 600,000 splittings. The analysis of these splittings led to a new inference of the solar internal rotation rate as a function of depth and latitude.
Analysis of the Pre-stack Split-Step Migration Operator Using Ritz Values
NASA Astrophysics Data System (ADS)
Kaplan, S. T.; Sacchi, M. D.
2009-05-01
The Born approximation for the acoustic wave-field is often used as a basis for developing algorithms in seismic imaging (migration). The approximation is linear, and, as such, can be written as a matrix-vector multiplication (Am = d). In the seismic imaging problem, d is seismic data (the recorded wave-field), and we aim to find the seismic reflectivity m (a representation of earth structure and properties) so that Am = d is satisfied. This is the often-studied inverse problem of seismic migration, where given A and d, we solve for m. This can be done in a least-squares sense, so that the equation of interest is A^H A m = A^H d. Hence, the solution m is largely dependent on the properties of A^H A. The imaging Jacobian J provides an approximation to A^H A, so that J^-1 A^H A is, in a broad sense, better behaved than A^H A. We attempt to quantify this last statement by providing an analysis of A^H A and J^-1 A^H A using their Ritz values, for the particular case where A is built using a pre-stack split-step migration algorithm. Typically, one might try to analyze the behaviour of these matrices using their eigenvalue spectra. The difficulty in the analysis of A^H A and J^-1 A^H A lies in their size. For example, a subset of the relatively small Marmousi data set makes A^H A a complex-valued matrix with, roughly, dimensions of 45 million by 45 million (requiring, in single precision, about 16 petabytes of computer memory). In short, the size of the matrix makes its eigenvalues difficult to compute. Instead, we compute the leading principal minors of similar tridiagonal matrices, B_k = V_k^-1 A^H A V_k and C_k = U_k^-1 J^-1 A^H A U_k. These can be constructed using, for example, the Lanczos decomposition. Up to some value of k it is feasible to compute the eigenvalues of B_k and C_k which, in turn, are the Ritz values of, respectively, A^H A and J^-1 A^H A, and may allow us to make quantitative statements about their behaviours.
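A small self-contained Python sketch of the Lanczos/Ritz-value computation described above follows; since the actual migration operator is far too large to form explicitly, a modest random complex matrix A stands in for it, and the Lanczos recursion is run without reorthogonalization, which is adequate here because only the extreme Ritz values are inspected.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def lanczos_ritz(matvec, n, k, seed=0):
    """k-step Lanczos on a Hermitian operator given by matvec; returns Ritz values."""
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    q /= np.linalg.norm(q)
    q_prev = np.zeros(n, dtype=complex)
    alpha, beta = [], []
    for j in range(k):
        w = matvec(q)
        a = np.real(np.vdot(q, w))                       # q^H w (real for Hermitian operators)
        w = w - a * q - (beta[-1] if beta else 0.0) * q_prev
        b = np.linalg.norm(w)
        alpha.append(a)
        if j < k - 1:
            beta.append(b)
            q_prev, q = q, w / b
    return eigh_tridiagonal(np.array(alpha), np.array(beta), eigvals_only=True)

# Small stand-in for the (huge) migration operator A: Ritz values of A^H A
rng = np.random.default_rng(1)
A = rng.standard_normal((300, 200)) + 1j * rng.standard_normal((300, 200))
matvec = lambda x: A.conj().T @ (A @ x)
ritz = lanczos_ritz(matvec, n=200, k=30)
print("largest Ritz values:      ", np.round(ritz[-5:], 2))
print("largest true eigenvalues: ", np.round(np.linalg.eigvalsh(A.conj().T @ A)[-5:], 2))
```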
1987-10-01
...Proceedings of the 16th JANNAF Combustion Meeting, Sept. 1979, Vol. II, pp. 13-34. 44. Schroeder, M. A., "Critical Analysis of Nitramine Decomposition..." ...Proceedings of the 19th JANNAF Combustion Meeting, Oct. 1982. 47. Schroeder, M. A., "Critical Analysis of Nitramine Decomposition Data: Activation..." "...the surface of the propellant. This is consistent with the decomposition mechanism considered by Boggs [48] and Schroeder [43]. They concluded that the..."
Multi-Centrality Graph Spectral Decompositions and Their Application to Cyber Intrusion Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Pin-Yu; Choudhury, Sutanay; Hero, Alfred
Many modern datasets can be represented as graphs and hence spectral decompositions such as graph principal component analysis (PCA) can be useful. Distinct from previous graph decomposition approaches based on subspace projection of a single topological feature, e.g., the centered graph adjacency matrix (graph Laplacian), we propose spectral decomposition approaches to graph PCA and graph dictionary learning that integrate multiple features, including graph walk statistics, centrality measures and graph distances to reference nodes. In this paper we propose a new PCA method for single graph analysis, called multi-centrality graph PCA (MC-GPCA), and a new dictionary learning method for ensembles of graphs, called multi-centrality graph dictionary learning (MC-GDL), both based on spectral decomposition of multi-centrality matrices. As an application to cyber intrusion detection, MC-GPCA can be an effective indicator of anomalous connectivity pattern and MC-GDL can provide discriminative basis for attack classification.
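As a hedged sketch of the general idea (not the MC-GPCA or MC-GDL algorithms themselves), the Python code below assembles a per-node feature matrix from several centrality measures and shortest-path distances to reference nodes and then applies PCA via an SVD; the choice of centralities, the standardization and the anomaly criterion are illustrative assumptions.

```python
import numpy as np
import networkx as nx

def multi_centrality_matrix(G, reference_nodes):
    """Stack per-node features: centralities and distances to reference nodes."""
    nodes = list(G.nodes())
    deg = nx.degree_centrality(G)
    btw = nx.betweenness_centrality(G)
    clo = nx.closeness_centrality(G)
    cols = [[deg[v] for v in nodes], [btw[v] for v in nodes], [clo[v] for v in nodes]]
    for ref in reference_nodes:
        d = nx.single_source_shortest_path_length(G, ref)
        cols.append([d.get(v, len(nodes)) for v in nodes])  # unreachable -> large value
    return np.array(cols, dtype=float).T                    # shape: n_nodes x n_features

def graph_pca(F, n_components=2):
    """PCA of the multi-centrality feature matrix via SVD of standardized columns."""
    Fc = (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-12)
    U, s, Vt = np.linalg.svd(Fc, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]
    return scores, Vt[:n_components]

G = nx.karate_club_graph()
F = multi_centrality_matrix(G, reference_nodes=[0, 33])
scores, components = graph_pca(F)
# Nodes with extreme scores on the first component are candidate anomalies
print("most extreme nodes:", np.argsort(np.abs(scores[:, 0]))[-5:])
```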
Nonlinear mode decomposition: A noise-robust, adaptive decomposition method
NASA Astrophysics Data System (ADS)
Iatsenko, Dmytro; McClintock, Peter V. E.; Stefanovska, Aneta
2015-09-01
The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool—nonlinear mode decomposition (NMD)—which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques—which, together with the adaptive choice of their parameters, make it extremely noise robust—and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download.
Soil Decomposition of Added Organic C in an Organic Farming System
NASA Astrophysics Data System (ADS)
Kpomblekou-A, Kokoasse; Sissoko, Alassane; McElhenney, Wendell
2015-04-01
In the United States, large quantities of poultry waste are added every year to soil under organic management. Decomposition of the added organic C releases plant nutrients, promotes soil structure, and plays a vital role in the soil food web. In organic agriculture the added C serves as the only source of nutrients for plant growth. Thus understanding the decomposition rates of such C in organic farming systems is critical in making recommendations of organic inputs to organic producers. We investigated and compared relative accumulation and decomposition of organic C in an organic farming system trial at the George Washington Carver Agricultural Experiment Station at Tuskegee, Alabama on a Marvyn sandy loam (fine-loamy, kaolinitic, thermic, Typic Kanhapludults) soil. The experimental design was a randomized complete block with four replicates and four treatments. The main plot (54' × 20') was split into three equal subplots to plant three sweet potato cultivars. The treatments included a weed (control with no cover crop, no fertilizer), crimson clover alone (CC), crimson clover plus broiler litter (BL), and crimson clover plus NPK mineral fertilizers (NPK). For five years, late in fall, the field was planted with crimson clover (Trifolium incarnatum L) that was cut with a mower and incorporated into soil the following spring. Moreover, broiler litter (4.65 Mg ha-1) or ammonium nitrate (150 kg N ha-1), triple super phosphate (120 kg P2O5 ha-1), and potassium chloride (160 kg K2O ha-1) were applied to the BL or the NPK plot and planted with sweet potato. Just before harvest, six soil samples were collected within the two middle rows of each sweet potato plot with an auger at incremental depths of 0-1, 1-2, 2-3, 3-5, 5-10, and 10-15 cm. Samples from each subplot and depth were composited and mixed in a plastic bag. The samples were sieved moist through a
Chen, Jing-Yin; Kim, Minseob; Yoo, Choong-Shik; Dattelbaum, Dana M; Sheffield, Stephen
2010-06-07
We have studied the pressure-induced phase transition and chemical decomposition of hydrogen peroxide and its mixtures with water to 50 GPa, using confocal micro-Raman spectroscopy and synchrotron X-ray diffraction. The X-ray results indicate that pure hydrogen peroxide crystallizes into a tetragonal structure (P41212), the same structure previously found in 82.7% H2O2 at high pressures and in pure H2O2 at low temperatures. The tetragonal phase (H2O2-I) is stable to 15 GPa, above which it transforms into an orthorhombic structure (H2O2-II) over a relatively large pressure range between 13 and 18 GPa. Inferring from the splitting of the νs(O-O) stretching mode, the phase I-to-II transition pressure decreases in diluted H2O2 to around 7 GPa for the 41.7% H2O2 and 3 GPa for the 9.5%. Above 18 GPa H2O2-II gradually decomposes to a mixture of H2O and O2, which completes at around 40 GPa for pure and 45 GPa for the 9.5% H2O2. Upon pressure unloading, H2O2 also decomposes to H2O and O2 mixtures across the melts, occurring at 2.5 GPa for pure and 1.5 GPa for the 9.5% mixture. At H2O2 concentrations below 20%, decomposed mixtures form oxygen hydrate clathrates at around 0.8 GPa, just after H2O melts. The compression data of pure H2O2 and the stability data of the mixtures seem to indicate that the high-pressure decomposition is likely due to the pressure-induced densification, whereas the low-pressure decomposition is related to the heterogeneous nucleation process associated with H2O2 melting.
A new index for the wintertime southern hemispheric split jet
NASA Astrophysics Data System (ADS)
Babian, Stella; Grieger, Jens; Cubasch, Ulrich
2018-05-01
One of the most prominent asymmetric features of the southern hemispheric (SH) circulation is the split jet over Australia and New Zealand in austral winter. Previous studies have developed indices to detect the degree to which the upper-level midlatitude westerlies are split and investigated the relationship between split events and the low-frequency teleconnection patterns, viz. the Antarctic Oscillation (AAO) and the El Niño-Southern Oscillation (ENSO). As the results were inconsistent, the relationship between the wintertime SH split jet and the climate variability indices remains unresolved and is the focus of this study. Until now, all split indices' definitions were based on the specific region where the split jet is recognizable. We consider the split jet as hemispheric rather than a regional feature and propose a new, hemispherical index that is based on the principal components (PCs) of the zonal wind field for the SH winter. A linear combination of PC2 and PC3 of the anomalous monthly (JAS) zonal wind is used to identify split-jet conditions. In a subsequent correlation analysis, our newly defined PC-based split index (PSI) indicates a strong coherence with the AAO. However, this significant relationship is unstable over the analysis period; during the 1980s, the AAO amplitude was higher than the PSI, and vice versa in the 1990s. It is probable that the PSI, as well as the AAO, underlie low-frequency variability on the decadal to centennial timescales, but the analyzed period is too short to draw these conclusions. A regression analysis with the Multivariate ENSO Index points to a nonlinear relationship between PSI and ENSO; i.e., split jets occur during both strong positive and negative phases of ENSO but rarely under normal
conditions. The Pacific South American (PSA) patterns, defined as the second and third modes of the geopotential height variability at 500 hPa, correlate poorly with the PSI (r_PSA-1 ≈ 0.2 and r_PSA-2 = 0.06), but significantly with the individual components (PCs) of the PSI, revealing an indirect influence on the SH split-jet variability. Our study suggests that the wintertime SH split jet is strongly associated with the AAO, while ENSO is to a lesser extent connected to the PSI. We conclude that a positive AAO phase, as well as both flavors of ENSO and the PSA-1 pattern, produce favorable conditions for a SH split event.
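A schematic illustration of how such a PC-based index might be constructed is sketched below in Python on synthetic zonal-wind anomalies; the equal weighting of PC2 and PC3, the absence of area weighting, and the one-standard-deviation event threshold are assumptions made for brevity and do not reproduce the paper's definition of the PSI.

```python
import numpy as np

# Synthetic monthly (JAS) zonal-wind anomaly fields: (n_months, n_lat * n_lon)
rng = np.random.default_rng(0)
n_months, n_lat, n_lon = 120, 30, 72
u_anom = rng.standard_normal((n_months, n_lat * n_lon))

# EOF analysis via SVD of the (area-unweighted, for brevity) anomaly matrix
U, s, Vt = np.linalg.svd(u_anom - u_anom.mean(axis=0), full_matrices=False)
pcs = U * s                      # principal component time series
pcs /= pcs.std(axis=0)           # standardize each PC

# PC-based split index: a linear combination of PC2 and PC3 (equal weights assumed here)
psi = (pcs[:, 1] + pcs[:, 2]) / np.sqrt(2.0)
split_months = np.where(psi > 1.0)[0]   # flag strong split-jet months (arbitrary threshold)
print("months flagged as split-jet events:", split_months[:10])
```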
Yang, Lin; Deng, Chang-chun; Chen Ya-mei; He, Run-lian; Zhang, Jian; Liu, Yang
2015-12-01
The relationships between litter decomposition rates and the initial litter quality of 14 representative plants in the alpine forest ecotone of western Sichuan were investigated in this paper. The decomposition rate k of the litter ranged from 0.16 to 1.70. Woody leaf litter and moss litter decomposed much more slowly, shrubby litter decomposed somewhat faster, and herbaceous litter decomposed fastest among all plant forms. There were significant linear regression relationships between the litter decomposition rate and the N content, lignin content, phenolics content, C/N, C/P and lignin/N. In path analysis, lignin/N and hemicellulose content together explained 78.4% of the variation in the litter decomposition rate (k); lignin/N alone explained 69.5% of the variation in k, and the direct path coefficient of lignin/N on k was -0.913. Principal component analysis (PCA) showed that the contribution rate of the first ordination axis to k and the decomposition time (t) reached 99.2%. Significant positive correlations existed between lignin/N, lignin content, C/N, C/P and the first ordination axis, with the closest relationship between lignin/N and the first axis (r = 0.923). Lignin/N was the key quality factor affecting plant litter decomposition rates across the alpine timberline ecotone: the higher the initial lignin/N, the lower the leaf litter decomposition rate.
NASA Astrophysics Data System (ADS)
Zhao, Liang; Li, Mingzhe; Wang, Liyan; Qu, Erhu; Yi, Zhuo
2018-03-01
A novel high-pressure belt-type die with a split-type cylinder is investigated with respect to extending its lifetime and improving its pressure-bearing capacity. Specifically, a tungsten carbide cylinder is split into several parts along the radial direction, with a prism-type cavity. In this paper, cylinders with different split numbers are chosen to study the stress distribution and are compared with the traditional belt-type die. The simulation results indicate that the stresses in the split cylinder are much smaller than those in the belt-type cylinder, and the statistical analysis reveals that the split cylinder is able to bear higher pressure. Experimental tests also show that the high-pressure die with a split cylinder and prism cavity has a stronger pressure-bearing capacity than a belt-type die. The split cylinder has the advantages of easy manufacturing, high pressure-bearing capacity, and easy replacement.
A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis
NASA Astrophysics Data System (ADS)
Jokhio, G. A.; Izzuddin, B. A.
2015-05-01
This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.
PCTDSE: A parallel Cartesian-grid-based TDSE solver for modeling laser-atom interactions
NASA Astrophysics Data System (ADS)
Fu, Yongsheng; Zeng, Jiaolong; Yuan, Jianmin
2017-01-01
We present a parallel Cartesian-grid-based time-dependent Schrödinger equation (TDSE) solver for modeling laser-atom interactions. It can simulate the single-electron dynamics of atoms in arbitrary time-dependent vector potentials. We use a split-operator method combined with fast Fourier transforms (FFT), on a three-dimensional (3D) Cartesian grid. Parallelization is realized using a 2D decomposition strategy based on the Message Passing Interface (MPI) library, which results in a good parallel scaling on modern supercomputers. We give simple applications for the hydrogen atom using the benchmark problems coming from the references and obtain repeatable results. The extensions to other laser-atom systems are straightforward with minimal modifications of the source code.
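To make the split-operator idea concrete, here is a minimal one-dimensional Python illustration of the propagation step (potential half-step, kinetic full step in Fourier space, potential half-step); the actual solver works on a 3D Cartesian grid with a time-dependent vector potential and MPI domain decomposition, none of which is reproduced here, and the soft-core potential and initial wave packet are arbitrary choices.

```python
import numpy as np

# 1D split-operator step: psi(t+dt) = exp(-iV dt/2) IFFT exp(-i k^2 dt/2) FFT exp(-iV dt/2) psi
nx, box = 1024, 100.0
x = np.linspace(-box / 2, box / 2, nx, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
dt = 0.01

V = -1.0 / np.sqrt(x**2 + 2.0)         # soft-core Coulomb potential (atomic units)
psi = np.exp(-(x**2))                  # initial wave packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

half_V = np.exp(-0.5j * V * dt)
kinetic = np.exp(-0.5j * k**2 * dt)

for _ in range(1000):
    psi = half_V * psi
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))
    psi = half_V * psi

print("norm after propagation:", np.sum(np.abs(psi)**2) * dx)  # should stay close to 1
```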
Yang, Cheng-Quan; Liu, Yong-Zhong; An, Ji-Cui; Li, Shuang; Jin, Long-Fei; Zhou, Gao-Feng; Wei, Qing-Jiang; Yan, Hui-Qing; Wang, Nan-Nan; Fu, Li-Na; Liu, Xiao; Hu, Xiao-Mei; Yan, Ting-Shuai; Peng, Shu-Ang
2013-01-01
Corky split vein caused by boron (B) deficiency in 'Newhall' Navel Orange was studied in the present research. The boron-deficient citrus exhibited a symptom of corky split vein in mature leaves. Morphologic and anatomical surveys at four representative phases of corky split veins showed that the symptom was the result of vascular hypertrophy. Digital gene expression (DGE) analysis, based on the Illumina HiSeq™ 2000 platform, was applied to analyze the gene expression profiles of corky split veins at the four morphologic phases. Over 5.3 million clean reads per library were successfully mapped to the reference database and more than 22897 mapped genes per library were simultaneously obtained. Analysis of the differentially expressed genes (DEGs) revealed that the expression of genes associated with cytokinin signal transduction, cell division, vascular development, lignin biosynthesis and photosynthesis in corky split veins was affected in all cases. The expression of WOL and ARR12, which are involved in the cytokinin signal transduction pathway, was up-regulated at the 1st phase of corky split vein development. Furthermore, the expression of some cell cycle genes, CYCs and CDKB, and vascular development genes, WOX4 and VND7, was up-regulated at the following 2nd and 3rd phases. These findings indicated that the cytokinin signal transduction pathway may play a role in initiating the symptoms observed in our study.
Yamashita, Satoshi; Masuya, Hayato; Abe, Shin; Masaki, Takashi; Okabe, Kimiko
2015-01-01
We examined the relationship between the community structure of wood-decaying fungi, detected by high-throughput sequencing, and the decomposition rate using 13 years of data from a forest dynamics plot. For molecular analysis and wood density measurements, drill dust samples were collected from logs and stumps of Fagus and Quercus in the plot. Regression using a negative exponential model between wood density and time since death revealed that the decomposition rate of Fagus was greater than that of Quercus. The residual between the expected value obtained from the regression curve and the observed wood density was used as a decomposition rate index. Principal component analysis showed that the fungal community compositions of both Fagus and Quercus changed with time since death. Principal component analysis axis scores were used as an index of fungal community composition. A structural equation model for each wood genus was used to assess the effect of fungal community structure traits on the decomposition rate and how the fungal community structure was determined by the traits of coarse woody debris. Results of the structural equation model suggested that the decomposition rate of Fagus was affected by two fungal community composition components: one that was affected by time since death and another that was not affected by the traits of coarse woody debris. In contrast, the decomposition rate of Quercus was not affected by coarse woody debris traits or fungal community structure. These findings suggest that, in the case of Fagus coarse woody debris, the fungal community structure is related to the decomposition process of its host substrate. Because fungal community structure is affected partly by the decay stage and wood density of its substrate, these factors influence each other. Further research on interactive effects is needed to improve our understanding of the relationship between fungal community structure and the woody debris decomposition process. PMID:26110605
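A minimal sketch of the density-based part of the analysis follows: hypothetical wood densities are regressed on time since death with a negative exponential model, and the residuals from the fitted curve serve as a decomposition rate index, as described above; the numbers are invented and the PCA and structural equation model steps are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def neg_exp(t, rho0, k):
    """Single negative exponential decay of wood density with time since death."""
    return rho0 * np.exp(-k * t)

# Hypothetical (time since death in years, wood density in g cm-3) for one wood genus
t = np.array([1, 2, 4, 6, 8, 10, 13], dtype=float)
rho = np.array([0.55, 0.50, 0.41, 0.33, 0.28, 0.22, 0.17])

(rho0, k), _ = curve_fit(neg_exp, t, rho, p0=[0.6, 0.1])
print(f"decomposition rate k = {k:.3f} per year")

# Decomposition rate index: residual between observed density and the regression curve
# (a negative residual indicates faster-than-expected decomposition)
index = rho - neg_exp(t, rho0, k)
print(np.round(index, 3))
```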
Parallel processing methods for space based power systems
NASA Technical Reports Server (NTRS)
Berry, F. C.
1993-01-01
This report presents a method for doing load-flow analysis of a power system by using a decomposition approach. The power system for the Space Shuttle is used as a basis to build a model for the load-flow analysis. To test the decomposition method for doing load-flow analysis, simulations were performed on power systems of 16, 25, 34, 43, 52, 61, 70, and 79 nodes. Each of the power systems was divided into subsystems and simulated under steady-state conditions. The results from these tests have been found to be as accurate as tests performed using a standard serial simulator. The division of the power systems into different subsystems was done by assigning a processor to each area. There were 13 transputers available; therefore, up to 13 different subsystems could be simulated at the same time. This report has preliminary results for a load-flow analysis using a decomposition principle. The report shows that the decomposition algorithm for load-flow analysis is well suited for parallel processing and provides increases in the speed of execution.
NASA Astrophysics Data System (ADS)
Spencer, Todd J.; Chen, Yu-Chun; Saha, Rajarshi; Kohl, Paul A.
2011-06-01
Incorporation of copper ions into poly(propylene carbonate) (PPC) films cast from γ-butyrolactone (GBL), trichloroethylene (TCE) or methylene chloride (MeCl) solutions containing a photo-acid generator is shown to stabilize the PPC against thermal decomposition. Copper ions were introduced into the PPC mixtures by bringing the polymer mixture into contact with copper metal. The metal was oxidized and dissolved into the PPC mixture. The dissolved copper interferes with the decomposition mechanism of PPC, raising its decomposition temperature. Thermogravimetric analysis shows that copper ions raise the decomposition temperature of PPC by up to 50°C. Spectroscopic analysis indicates that copper ions may stabilize terminal carboxylic acid groups, inhibiting PPC decomposition. The change in thermal stability based on PPC exposure to patterned copper substrates was used to provide a self-aligned patterning method for PPC on copper traces without the need for an additional photopatterning registration step. Thermal decomposition of PPC is then used to create air isolation regions around the copper traces. The spatial resolution of the self-patterning PPC process is limited by the lateral diffusion of the copper ions within the PPC. The concentration profiles of copper within the PPC, patterning resolution, and temperature effects on the PPC decomposition have been studied.
NASA Astrophysics Data System (ADS)
Boniecki, P.; Nowakowski, K.; Slosarz, P.; Dach, J.; Pilarski, K.
2012-04-01
The purpose of the project was to identify the degree of organic matter decomposition by means of a neural model based on graphical information derived from image analysis. Empirical data (photographs of compost content at various stages of maturation) were used to generate an optimal neural classifier (Boniecki et al. 2009, Nowakowski et al. 2009). The best classification properties were found in an RBF (Radial Basis Function) artificial neural network, which demonstrates that the process is non-linear.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salo, Heikki; Laurikainen, Eija; Laine, Jarkko
The Spitzer Survey of Stellar Structure in Galaxies (S4G) is a deep 3.6 and 4.5 μm imaging survey of 2352 nearby (<40 Mpc) galaxies. We describe the S4G data analysis pipeline 4, which is dedicated to two-dimensional structural surface brightness decompositions of 3.6 μm images, using GALFIT3.0. Besides automatic 1-component Sérsic fits, and 2-component Sérsic bulge + exponential disk fits, we present human-supervised multi-component decompositions, which include, when judged appropriate, a central point source, bulge, disk, and bar components. Comparison of the fitted parameters indicates that multi-component models are needed to obtain reliable estimates for the bulge Sérsic index and bulge-to-total light ratio (B/T), confirming earlier results. Here, we describe the preparations of input data done for decompositions, give examples of our decomposition strategy, and describe the data products released via IRSA and via our web page (www.oulu.fi/astronomy/S4G-PIPELINE4/MAIN). These products include all the input data and decomposition files in electronic form, making it easy to extend the decompositions to suit specific science purposes. We also provide our IDL-based visualization tools (GALFIDL) developed for displaying/running GALFIT decompositions, as well as our mask editing procedure (MASK-EDIT) used in data preparation. A detailed analysis of the bulge, disk, and bar parameters derived from multi-component decompositions will be published separately.
Multidisciplinary Optimization Methods for Aircraft Preliminary Design
NASA Technical Reports Server (NTRS)
Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian
1994-01-01
This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.
Decomposition of Copper (II) Sulfate Pentahydrate: A Sequential Gravimetric Analysis.
ERIC Educational Resources Information Center
Harris, Arlo D.; Kalbus, Lee H.
1979-01-01
Describes an improved experiment of the thermal dehydration of copper (II) sulfate pentahydrate. The improvements described here are control of the temperature environment and a quantitative study of the decomposition reaction to a thermally stable oxide. Data will suffice to show sequential gravimetric analysis. (Author/SA)
Generalized decompositions of dynamic systems and vector Lyapunov functions
NASA Astrophysics Data System (ADS)
Ikeda, M.; Siljak, D. D.
1981-10-01
The notion of decomposition is generalized to provide more freedom in constructing vector Lyapunov functions for stability analysis of nonlinear dynamic systems. A generalized decomposition is defined as a disjoint decomposition of a system which is obtained by expanding the state-space of a given system. An inclusion principle is formulated for the solutions of the expansion to include the solutions of the original system, so that stability of the expansion implies stability of the original system. Stability of the expansion can then be established by standard disjoint decompositions and vector Lyapunov functions. The applicability of the new approach is demonstrated using the Lotka-Volterra equations.
Velasquez, Alejandra E; Castro, Fidel O; Veraguas, Daniel; Cox, Jose F; Lara, Evelyn; Briones, Mario; Rodriguez-Alvarez, Lleretny
2016-02-01
Embryo splitting might be used to increase offspring yield and for molecular analysis of embryo competence. How splitting affects the developmental potential of embryos is unknown. This research aimed to study the effect of bovine blastocyst splitting on the morphological and gene expression homogeneity of demi-embryos and on embryo competence during elongation. Grade I bovine blastocysts produced in vitro were split into halves and distributed in nine groups (3 × 3 setting according to age and stage before splitting; age: days 7-9; stage: early, expanded and hatched blastocysts). Homogeneity and survival rate in vitro after splitting (12 h, days 10 and 13) and the effect of splitting on embryo development at elongation after embryo transfer (day 17) were assessed morphologically and by RT-qPCR. The genes analysed were OCT4, SOX2, NANOG, CDX2, TP1, TKDP1, EOMES, and BAX. Approximately 90% of split embryos had a well-conserved, defined inner cell mass (ICM), and 70% of the halves had similar size, with no differences in gene expression 12 h after splitting. Split embryos cultured further conserved normal and comparable morphology at day 10 of development; this situation changed at day 13, when embryo morphology and gene expression differed markedly among demi-embryos. Split and non-split blastocysts were transferred to recipient cows and were recovered at day 17. Fifty per cent of non-split embryos were larger than 100 mm (33% for split embryos). OCT4, SOX2, TP1 and EOMES levels were down-regulated in elongated embryos derived from split blastocysts. In conclusion, splitting day-8 blastocysts yields homogeneous demi-embryos in terms of developmental capability and gene expression, but the initiation of the filamentous stage seems to be affected by the splitting.
Ellmauthaler, Andreas; Pagliari, Carla L; da Silva, Eduardo A B
2013-03-01
Multiscale transforms are among the most popular techniques in the field of pixel-level image fusion. However, the fusion performance of these methods often deteriorates for images derived from different sensor modalities. In this paper, we demonstrate that for such images, results can be improved using a novel undecimated wavelet transform (UWT)-based fusion scheme, which splits the image decomposition process into two successive filtering operations using spectral factorization of the analysis filters. The actual fusion takes place after convolution with the first filter pair. Its significantly smaller support size leads to the minimization of the unwanted spreading of coefficient values around overlapping image singularities. This usually complicates the feature selection process and may lead to the introduction of reconstruction errors in the fused image. Moreover, we will show that the nonsubsampled nature of the UWT allows the design of nonorthogonal filter banks, which are more robust to artifacts introduced during fusion, additionally improving the obtained results. The combination of these techniques leads to a fusion framework, which provides clear advantages over traditional multiscale fusion approaches, independent of the underlying fusion rule, and reduces unwanted side effects such as ringing artifacts in the fused reconstruction.
Looking at and looking away. Etiology of preoedipal splitting in a deaf girl.
Abrams, D M
1991-01-01
This paper presents the analysis of a profoundly deaf 12-year-old girl. The main referring symptom was a pattern of intense libidinal looking at and aggressive looking away from others, which functioned as a preoedipal splitting to keep apart opposite "all good," life-enhancing and "all bad," deathlike self and object representations. The preoedipal splitting seemed to have its etiology in traumatic experiences in the symbiotic and rapprochement phases. The analysis followed the sequence of separation-individuation, as blocks to development were progressively removed.
An Intuitive Graphical Approach to Understanding the Split-Plot Experiment
ERIC Educational Resources Information Center
Robinson, Timothy J.; Brenneman, William A.; Myers, William R.
2009-01-01
While split-plot designs have received considerable attention in the literature over the past decade, there seems to be a general lack of intuitive understanding of the error structure of these designs and the resulting statistical analysis. Typically, students learn the proper error terms for testing factors of a split-plot design via "expected…
Augmenting the decomposition of EMG signals using supervised feature extraction techniques.
Parsaei, Hossein; Gangeh, Mehrdad J; Stashuk, Daniel W; Kamel, Mohamed S
2012-01-01
Electromyographic (EMG) signal decomposition is the process of resolving an EMG signal into its constituent motor unit potential trains (MUPTs). In this work, the possibility of improving the decomposing results using two supervised feature extraction methods, i.e., Fisher discriminant analysis (FDA) and supervised principal component analysis (SPCA), is explored. Using the MUP labels provided by a decomposition-based quantitative EMG system as a training data for FDA and SPCA, the MUPs are transformed into a new feature space such that the MUPs of a single MU become as close as possible to each other while those created by different MUs become as far as possible. The MUPs are then reclassified using a certainty-based classification algorithm. Evaluation results using 10 simulated EMG signals comprised of 3-11 MUPTs demonstrate that FDA and SPCA on average improve the decomposition accuracy by 6%. The improvement for the most difficult-to-decompose signal is about 12%, which shows the proposed approach is most beneficial in the decomposition of more complex signals.
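The hedged sketch below illustrates the supervised feature extraction step using scikit-learn's linear discriminant analysis as the Fisher discriminant and a nearest-centroid rule standing in for the certainty-based classifier; the simulated MUP features are invented and the EMG decomposition system itself is not reproduced.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(0)

# Hypothetical MUP shape features (e.g. waveform samples) for 3 motor units
n_per_mu, n_feat = 80, 40
templates = rng.standard_normal((3, n_feat)) * 2.0
X = np.vstack([tpl + rng.standard_normal((n_per_mu, n_feat)) for tpl in templates])
labels = np.repeat([0, 1, 2], n_per_mu)   # provisional labels from the initial decomposition

# Fisher discriminant analysis: project MUPs so that same-MU potentials cluster tightly
lda = LinearDiscriminantAnalysis(n_components=2)
Z = lda.fit_transform(X, labels)

# Re-classify MUPs in the discriminant space (stand-in for the certainty-based classifier)
clf = NearestCentroid().fit(Z, labels)
reassigned = clf.predict(Z)
print("fraction of MUPs whose assignment changed:", np.mean(reassigned != labels))
```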
TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS
Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.
2017-01-01
Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971
NASA Astrophysics Data System (ADS)
Bonan, G. B.; Wieder, W. R.
2012-12-01
Decomposition is a large term in the global carbon budget, but models of the earth system that simulate carbon cycle-climate feedbacks are largely untested with respect to litter decomposition. Here, we demonstrate a protocol to document model performance with respect to both long-term (10 year) litter decomposition and steady-state soil carbon stocks. First, we test the soil organic matter parameterization of the Community Land Model version 4 (CLM4), the terrestrial component of the Community Earth System Model, with data from the Long-term Intersite Decomposition Experiment Team (LIDET). The LIDET dataset is a 10-year study of litter decomposition at multiple sites across North America and Central America. We show results for 10-year litter decomposition simulations compared with LIDET for 9 litter types and 20 sites in tundra, grassland, and boreal, conifer, deciduous, and tropical forest biomes. We show additional simulations with DAYCENT, a version of the CENTURY model, to ask how well an established ecosystem model matches the observations. The results reveal a large discrepancy between the laboratory microcosm studies used to parameterize the CLM4 litter decomposition and the LIDET field study. Simulated carbon loss is more rapid than the observations across all sites, despite using the LIDET-provided climatic decomposition index to constrain temperature and moisture effects on decomposition. Nitrogen immobilization is similarly biased high. Closer agreement with the observations requires much lower decomposition rates, obtained with the assumption that nitrogen severely limits decomposition. DAYCENT better replicates the observations, for both carbon mass remaining and nitrogen, without requirement for nitrogen limitation of decomposition. Second, we compare global observationally-based datasets of soil carbon with simulated steady-state soil carbon stocks for both models. The model simulations were forced with observationally-based estimates of annual litterfall and model-derived climatic decomposition index. While comparison with the LIDET 10-year litterbag study reveals sharp contrasts between CLM4 and DAYCENT, simulations of steady-state soil carbon show less difference between models. Both CLM4 and DAYCENT significantly underestimate soil carbon. Sensitivity analyses highlight causes of the low soil carbon bias. The terrestrial biogeochemistry of earth system models must be critically tested with observations, and the consequences of particular model choices must be documented. Long-term litter decomposition experiments such as LIDET provide a real-world process-oriented benchmark to evaluate models and can critically inform model development. Analysis of steady-state soil carbon estimates reveals additional, but here different, inferences about model performance.
ERIC Educational Resources Information Center
Schizas, Dimitrios; Katrana, Evagelia; Stamou, George
2013-01-01
In the present study we used the technique of word association tests to assess students' cognitive structures during the learning period. In particular, we tried to investigate what students living near a protected area in Greece (Dadia forest) knew about the phenomenon of decomposition. Decomposition was chosen as a stimulus word because it…
NASA Astrophysics Data System (ADS)
Latifi, Koorosh; Kaviani, Ayoub; Rümpker, Georg; Mahmoodabadi, Meysam; Ghassemi, Mohammad R.; Sadidkhouy, Ahmad
2018-05-01
The contribution of crustal anisotropy to the observation of SKS splitting parameters is often assumed to be negligible. Based on synthetic models, we show that the impact of crustal anisotropy on the SKS splitting parameters can be significant even in the case of moderate to weak anisotropy within the crust. In addition, real-data examples reveal that significant azimuthal variations in SKS splitting parameters can be caused by crustal anisotropy. Ps-splitting analysis of receiver functions (RF) can be used to infer the anisotropic parameters of the crust. These crustal splitting parameters may then be used to constrain the inversion of SKS apparent splitting parameters to infer the anisotropy of the mantle. The observation of SKS splitting for different azimuths is indispensable to verify the presence or absence of multiple layers of anisotropy beneath a seismic station. By combining SKS and RF observations in different azimuths at a station, we are able to uniquely decipher the anisotropic parameters of crust and upper mantle.
Microbial ecological succession during municipal solid waste decomposition.
Staley, Bryan F; de Los Reyes, Francis L; Wang, Ling; Barlaz, Morton A
2018-04-28
The decomposition of landfilled refuse proceeds through distinct phases, each defined by varying environmental factors such as volatile fatty acid concentration, pH, and substrate quality. The succession of microbial communities in response to these changing conditions was monitored in a laboratory-scale simulated landfill to minimize measurement difficulties experienced at field scale. 16S rRNA gene sequences retrieved at separate stages of decomposition showed significant succession in both Bacteria and methanogenic Archaea. A majority of Bacteria sequences in landfilled refuse belonged to members of the phylum Firmicutes, while Proteobacteria levels fluctuated and Bacteroidetes levels increased as decomposition proceeded. Roughly 44% of archaeal sequences retrieved under conditions of low pH and high acetate were strictly hydrogenotrophic (Methanomicrobiales, Methanobacteriales). Methanosarcina was present at all stages of decomposition. Correspondence analysis showed bacterial population shifts were attributed to carboxylic acid concentration and solids hydrolysis, while archaeal populations were affected to a higher degree by pH. T-RFLP analysis showed that specific taxonomic groups exhibited distinct responses during decomposition, suggesting that species composition and abundance within Bacteria and Archaea are highly dynamic. This study shows landfill microbial demographics are highly variable across both spatial and temporal transects.
2014-04-01
"Barrier methods for critical exponent problems in geometric analysis and mathematical physics," J. Erway and M. Holst, submitted for publication. ... TR-14-33: A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems; approved for public release, distribution is unlimited; April 2014; HDTRA1-09-1-0036; Donald Estep and Michael ...
2014-01-01
Background Economically, Leuconostoc lactis is one of the most important species in the genus Leuconostoc. It plays an important role in the food industry including the production of dextrans and bacteriocins. Currently, traditional molecular typing approaches for characterisation of this species at the isolate level are either unavailable or are not sufficiently reliable for practical use. Multilocus sequence typing (MLST) is a robust and reliable method for characterising bacterial and fungal species at the molecular level. In this study, a novel MLST protocol was developed for 50 L. lactis isolates from Mongolia and China. Results Sequences from eight targeted genes (groEL, carB, recA, pheS, murC, pyrG, rpoB and uvrC) were obtained. Sequence analysis indicated 20 different sequence types (STs), with 13 of them being represented by a single isolate. Phylogenetic analysis based on the sequences of eight MLST loci indicated that the isolates belonged to two major groups, A (34 isolates) and B (16 isolates). Linkage disequilibrium analyses indicated that recombination occurred at a low frequency in L. lactis, indicating a clonal population structure. Split-decomposition analysis indicated that intraspecies recombination played a role in generating genotypic diversity amongst isolates. Conclusions Our results indicated that MLST is a valuable tool for typing L. lactis isolates that can be used for further monitoring of evolutionary changes and population genetics. PMID:24912963
Raut, Savita V; Yadav, Dinkar M
2018-03-28
This paper presents an fMRI signal analysis methodology using geometric mean curve decomposition (GMCD) and a mutual information-based voxel selection framework. Previously, fMRI signal analysis has been conducted using the empirical mean curve decomposition (EMCD) model and voxel selection on the raw fMRI signal. The former methodology loses frequency content, while the latter suffers from signal redundancy. Both challenges are addressed by our methodology, in which the frequency content is retained by decomposing the raw fMRI signal using the geometric mean rather than the arithmetic mean, and the voxels are selected using GMCD components rather than the raw fMRI signal. The proposed methodologies are adopted for predicting the neural response. Experiments are conducted on openly available fMRI data from six subjects, and comparisons are made with existing decomposition models and voxel selection frameworks. Subsequently, the effects of the number of selected voxels and of the selection constraints are analyzed. The comparative results and the analysis demonstrate the superiority and reliability of the proposed methodology.
Split-plot microarray experiments: issues of design, power and sample size.
Tsai, Pi-Wen; Lee, Mei-Ling Ting
2005-01-01
This article focuses on microarray experiments with two or more factors in which treatment combinations of the factors corresponding to the samples paired together onto arrays are not completely random. A main effect of one (or more) factor(s) is confounded with arrays (the experimental blocks). This is called a split-plot microarray experiment. We utilise an analysis of variance (ANOVA) model to assess differentially expressed genes for between-array and within-array comparisons that are generic under a split-plot microarray experiment. Instead of standard t- or F-test statistics that rely on mean square errors of the ANOVA model, we use a robust method, referred to as 'a pooled percentile estimator', to identify genes that are differentially expressed across different treatment conditions. We illustrate the design and analysis of split-plot microarray experiments based on a case application described by Jin et al. A brief discussion of power and sample size for split-plot microarray experiments is also presented.
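The between-array/within-array structure described above can be written down directly as a mixed model in which arrays act as random blocks. The snippet below is a minimal sketch for a single gene, using hypothetical column and gene names (log_expr, variety, time, array) and statsmodels' mixed-effects API; the paper's own inference uses ANOVA mean squares and a pooled percentile estimator rather than the default tests shown here.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per spot measurement, with columns
#   gene, array, variety (whole-plot factor confounded with arrays),
#   time (sub-plot factor varying within arrays), log_expr (normalized log-intensity).
df = pd.read_csv("expression_long.csv")          # assumed file layout

gene_df = df[df["gene"] == "AT1G01010"]          # hypothetical gene id; analyse one gene at a time

# Arrays are the experimental blocks (whole plots); treating them as random
# effects reproduces the split-plot error structure: between-array comparisons
# are judged against array-to-array variation, within-array comparisons against
# the residual.
model = smf.mixedlm("log_expr ~ variety * time", data=gene_df,
                    groups=gene_df["array"])
fit = model.fit()
print(fit.summary())
```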
1985-09-01
Activation Energies and Frequency Factors for HMX and RDX Decomposition. Michael A. Schroeder, September 1985. Approved for public release; distribution unlimited. [Report-form text; only a fragment of the abstract is recoverable: values larger than the net energies of reaction for the same transitions represent energy needed for "freeing-up" of HMX or RDX molecules.]
About decomposition approach for solving the classification problem
NASA Astrophysics Data System (ADS)
Andrianova, A. A.
2016-11-01
This article describes the application of an algorithm that uses decomposition methods to solve the binary classification problem of constructing a linear classifier based on the Support Vector Machine method. Applying decomposition reduces the volume of calculations, in particular because it opens the possibility of building parallel versions of the algorithm, which is a very important advantage for solving problems with big data. The results of computational experiments conducted using the decomposition approach are analysed. The experiments use a known data set for the binary classification problem.
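As a rough illustration of why decomposition helps with large data sets, the sketch below trains a linear SVM-type classifier on chunks of the training set with scikit-learn's incremental (partial_fit) interface. This chunked scheme is only a stand-in for the working-set decomposition the article discusses, and the data set is synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

# Synthetic stand-in for a large binary classification problem.
X, y = make_classification(n_samples=100_000, n_features=50, random_state=0)

# hinge loss => a linear SVM objective, fitted incrementally chunk by chunk.
clf = SGDClassifier(loss="hinge", alpha=1e-4, random_state=0)

chunk = 10_000
for start in range(0, X.shape[0], chunk):
    sl = slice(start, start + chunk)
    # Each chunk is a small subproblem; chunks could also be dispatched to
    # different workers and the models combined, mimicking a parallel
    # decomposition of the full optimization problem.
    clf.partial_fit(X[sl], y[sl], classes=np.array([0, 1]))

print("training accuracy:", clf.score(X, y))
```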
1980-12-01
Keywords: bulk cargo, market demand analysis, commodity resource inventory. The study included a Commodity Resource Inventory, a Modal Split Analysis and a Market Demand Analysis. The work included investigation and analyses of the production, transportation, and …
Exposing the QCD Splitting Function with CMS Open Data.
Larkoski, Andrew; Marzani, Simone; Thaler, Jesse; Tripathee, Aashish; Xue, Wei
2017-09-29
The splitting function is a universal property of quantum chromodynamics (QCD) which describes how energy is shared between partons. Despite its ubiquitous appearance in many QCD calculations, the splitting function cannot be measured directly, since it always appears multiplied by a collinear singularity factor. Recently, however, a new jet substructure observable was introduced which asymptotes to the splitting function for sufficiently high jet energies. This provides a way to expose the splitting function through jet substructure measurements at the Large Hadron Collider. In this Letter, we use public data released by the CMS experiment to study the two-prong substructure of jets and test the 1→2 splitting function of QCD. To our knowledge, this is the first ever physics analysis based on the CMS Open Data.
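For orientation, the jet substructure observable referred to above is usually the groomed momentum fraction z_g = min(pT1, pT2)/(pT1 + pT2) of the two subjets surviving soft drop. The toy function below only evaluates that formula and the soft-drop condition; it is in no way the CMS Open Data analysis chain, which requires full jet reclustering (e.g., with FastJet).

```python
def z_g(pt1, pt2, z_cut=0.1):
    """Groomed momentum-sharing fraction of two subjet transverse momenta.

    Returns None when the softer prong fails the soft-drop condition
    z > z_cut (beta = 0), i.e. when the splitting would be groomed away.
    """
    z = min(pt1, pt2) / (pt1 + pt2)
    return z if z > z_cut else None

# Example: a 180 GeV / 60 GeV splitting passes the cut with z_g = 0.25.
print(z_g(180.0, 60.0))
```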
Liu, Zhengyan; Mao, Xianqiang; Song, Peng
2017-01-01
Temporal index decomposition analysis and spatial index decomposition analysis were applied to understand the driving forces of the emissions embodied in China's exports and net exports during 2002-2011, respectively. The accumulated emissions embodied in exports accounted for approximately 30% of the total emissions in China; although the contribution of the sectoral total emissions intensity (technique effect) declined, the scale effect was largely responsible for the mounting emissions associated with export, and the composition effect played a largely insignificant role. Calculations of the emissions embodied in net exports suggest that China is generally in an environmentally inferior position compared with its major trade partners. The differences in the economy-wide emission intensities between China and its major trade partners were the biggest contribution to this reality, and the trade balance effect played a less important role. However, a lower degree of specialization in pollution intensive products in exports than in imports helped to reduce slightly the emissions embodied in net exports. The temporal index decomposition analysis results suggest that China should take effective measures to optimize export and supply-side structure and reduce the total emissions intensity. According to spatial index decomposition analysis, it is suggested that a more aggressive import policy was useful for curbing domestic and global emissions, and the transfer of advanced production technologies and emission control technologies from developed to developing countries should be a compulsory global environmental policy option to mitigate the possible leakage of pollution emissions caused by international trade.
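A common way to carry out such an index decomposition is the additive log-mean Divisia index (LMDI), which splits the change in embodied emissions into scale, composition, and technique effects. The sketch below is a generic LMDI-I calculation on made-up two-sector numbers; it is not the authors' data set, and the exact IDA variant used in the paper may differ.

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean L(a, b) used as the LMDI weight."""
    return a if np.isclose(a, b) else (a - b) / (np.log(a) - np.log(b))

def lmdi_effects(Q0, s0, I0, QT, sT, IT):
    """Additive LMDI-I for C = Q * s_i * I_i summed over sectors i.

    Q : total export volume (scale)
    s : sectoral shares of exports (composition), arrays summing to 1
    I : sectoral emission intensities (technique), arrays
    Returns the scale, composition and technique contributions to C_T - C_0.
    """
    C0, CT = Q0 * s0 * I0, QT * sT * IT
    w = np.array([logmean(cT, c0) for cT, c0 in zip(CT, C0)])
    scale = np.sum(w * np.log(QT / Q0))
    composition = np.sum(w * np.log(sT / s0))
    technique = np.sum(w * np.log(IT / I0))
    return scale, composition, technique

# Two illustrative sectors: exports grow, shares shift, intensities fall.
s0, sT = np.array([0.6, 0.4]), np.array([0.5, 0.5])
I0, IT = np.array([2.0, 5.0]), np.array([1.5, 4.0])
scale, comp, tech = lmdi_effects(100.0, s0, I0, 150.0, sT, IT)
print(scale, comp, tech)
# The three effects sum exactly to the total change in embodied emissions.
print(scale + comp + tech, (150 * sT * IT).sum() - (100 * s0 * I0).sum())
```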
Borkotoky, Shasanka Sekhar; Dhar, Prodyut; Katiyar, Vimal
2018-01-01
This article addresses an elegant and green approach for the fabrication of bio-based poly(lactic acid) (PLA)/cellulose nanocrystal (CNC) bionanocomposite foam (PLA/CNC) with cellular morphology and hydrophobic surface behavior. A highly porous (porosity >80%) structure with interconnected pores is obtained, and the effect of CNCs on the cell density (N_f) and cell size of the foams is thoroughly investigated by morphological analysis. Thermo-mechanical investigations of the foam samples show increases of about ∼1.7- and ∼2.2-fold in storage modulus for the compressive and tensile modes, respectively. PLA/CNC-based bionanocomposite foams displayed thermal stability similar to that of the base PLA foam. The decomposition behavior is studied in detail using a hyphenated thermogravimetric analysis-Fourier transform infrared spectroscopy (TGA-FTIR) system. An increase of ∼13% in crystallinity is observed at the highest CNC loading compared with the neat counterpart. To investigate the splitting and spreading phenomena governing the wettability of the samples, a linear model is used to find the Young's contact angle and contact angle hysteresis (CAH). In addition, the ∼6.1-fold reduction in density of the PLA and nanocomposite foams compared with PLA carries much significance in specialized application areas where weight is an important concern. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Rama Krishna, K.; Ramachandran, K. I.
2018-02-01
Crack propagation is a major cause of failure in rotating machines. It adversely affects productivity, safety, and machining quality. Hence, accurately detecting the severity of a crack is imperative for the predictive maintenance of such machines. Fault diagnosis is an established approach to identifying faults by observing the non-linear behaviour of vibration signals at various operating conditions. In this work, we determine the classification efficiencies for both the original and the reconstructed vibration signals. The reconstructed signals are obtained using Variational Mode Decomposition (VMD), by splitting the original signal into three intrinsic mode function components and reassembling them accordingly. Feature extraction, feature selection and feature classification are the three phases in obtaining the classification efficiencies. In the feature extraction phase, statistical features are computed individually from the original and the reconstructed signals. In the feature selection phase, a few statistical parameters are selected and then classified using the SVM classifier. The results identify the best parameters and the appropriate kernel for the SVM classifier in detecting bearing faults. We conclude that the combined VMD and SVM process outperforms the standard SVM process applied to the raw signals, owing to the denoising and filtering of the raw vibration signals.
Efficient anisotropic quasi-P wavefield extrapolation using an isotropic low-rank approximation
NASA Astrophysics Data System (ADS)
Zhang, Zhen-dong; Liu, Yike; Alkhalifah, Tariq; Wu, Zedong
2018-04-01
The computational cost of quasi-P wave extrapolation depends on the complexity of the medium, and specifically the anisotropy. Our effective-model method splits the anisotropic dispersion relation into an isotropic background and a correction factor to handle this dependency. The correction term depends on the slope (measured using the gradient) of current wavefields and the anisotropy. As a result, the computational cost is independent of the nature of anisotropy, which makes the extrapolation efficient. A dynamic implementation of this approach decomposes the original pseudo-differential operator into a Laplacian, handled using the low-rank approximation of the spectral operator, plus an angular dependent correction factor applied in the space domain to correct for anisotropy. We analyse the role played by the correction factor and propose a new spherical decomposition of the dispersion relation. The proposed method provides accurate wavefields in phase and more balanced amplitudes than a previous spherical decomposition. Also, it is free of SV-wave artefacts. Applications to a simple homogeneous transverse isotropic medium with a vertical symmetry axis (VTI) and a modified Hess VTI model demonstrate the effectiveness of the approach. The Reverse Time Migration applied to a modified BP VTI model reveals that the anisotropic migration using the proposed modelling engine performs better than an isotropic migration.
Somogyi, O; Meskó, A; Csorba, L; Szabó, P; Zelkó, R
2017-08-30
The division of tablets and adequate methods of splitting them are a complex problem in all sectors of health care. Although tablet-splitting is often required, this procedure can be difficult for patients. Four tablets with different external features (shape, score-line, film-coat and size) were investigated. The influence of these features and of the splitting methods was investigated in terms of the precision and "weight loss" of the splitting techniques. All four types of tablets were halved by four methods: by hand, with a kitchen knife, with an original manufactured splitting device and with a modified tablet splitter based on a self-developed mechanical model. The mechanical parameters (hardness and friability) of the products were measured during the study. The "weight loss" and precision of the splitting methods were determined and compared by statistical analysis. On the basis of the results, the external features (geometry), the mechanical parameters of tablets and the mechanical structure of splitting devices can influence the "weight loss" and precision of tablet-splitting. Accordingly, a new decision-making scheme was developed for the selection of splitting methods. In addition, the skills of patients and the specifics of the therapy should be considered so that pharmaceutical counselling on tablet-splitting can be more effective. Copyright © 2017 Elsevier B.V. All rights reserved.
Entropy Analysis of Kinetic Flux Vector Splitting Schemes for the Compressible Euler Equations
NASA Technical Reports Server (NTRS)
Shiuhong, Lui; Xu, Jun
1999-01-01
The Flux Vector Splitting (FVS) scheme is one group of approximate Riemann solvers for the compressible Euler equations. In this paper, the discretized entropy condition of the Kinetic Flux Vector Splitting (KFVS) scheme based on gas-kinetic theory is proved. The proof of the entropy condition involves the difference in the entropy definition between distinguishable and indistinguishable particles.
ERIC Educational Resources Information Center
Bressmann, Tim
2006-01-01
In the cosmetic tongue split operation, the anterior tongue blade is split along the midline of the tongue. The goal of this case study was to obtain preliminary data on speech and tongue motility in a participant who had performed this operation on himself. The participant underwent an articulation test and a tongue motility assessment, as well…
NASA Astrophysics Data System (ADS)
Penenko, Vladimir; Tsvetova, Elena; Penenko, Aleksey
2014-05-01
A modeling technology based on coupled models of atmospheric dynamics and chemistry is presented [1-3]. It is the result of applying variational methods in combination with methods of decomposition and splitting. The idea of Euler's integrating factors combined with the technique of adjoint problems is also used. In online technologies, a significant part of the algorithmic and computational work consists of solving problems of convection-diffusion-reaction type and of organizing data assimilation techniques based on them. For equations of convection-diffusion, the methodology gives us unconditionally stable and monotone discrete-analytical schemes in the frames of the methods of decomposition and splitting. These schemes are exact for locally one-dimensional problems with respect to the spatial variables. For stiff systems of equations describing the transformation of gas and aerosol substances, monotone and stable schemes are also obtained. They are implemented by non-iterative algorithms. By construction, all schemes for the different components of the state functions are structurally uniform. They are coordinated among themselves in the sense of forward and inverse modeling. Variational principles are constructed taking into account the fact that the behavior of the different dynamic and chemical components of the state function is characterized by high variability and uncertainty. Information on the parameters of models, sources and emission impacts is also not determined precisely. Therefore, to obtain consistent solutions, we construct methods of sensitivity theory taking into account the influence of uncertainty. For this purpose, new methods of data assimilation of hydrodynamic fields and gas-aerosol substances measured by different observing systems are proposed. Optimization criteria for the data assimilation problems are defined so that they include a set of functionals evaluating the total measure of uncertainties. The latter are explicitly introduced into the equations of the process model as desired deterministic control functions. This method of data assimilation with control functions is implemented by direct algorithms. The modeling technology presented here focuses on various scientific and applied problems of environmental prediction and design, including risk assessment in relation to existing and potential sources of natural and anthropogenic influences. The work is partially supported by Program No 4 of the Presidium of RAS and Program No 3 of the Mathematical Department of RAS; by RFBR projects NN 11-01-00187 and 14-01-31482; and by Integrating projects of SD RAS No 8 and 35. Our studies are in line with the goals of COST Action ES1004. References 1. V. Penenko, A. Baklanov, E. Tsvetova and A. Mahura. Direct and Inverse Problems in a Variational Concept of Environmental Modeling, Pure and Applied Geophysics, 2012, V. 169: 447-465. 2. A.V. Penenko. Discrete-analytic schemes for solving an inverse coefficient heat conduction problem in a layered medium with gradient methods, Numerical Analysis and Applications, 2012, V. 5: 326-341. 3. V. Penenko, E. Tsvetova. Variational methods for constructing the monotone approximations for atmospheric chemistry models, Numerical Analysis and Applications, 2013, V. 6: 210-220.
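For readers unfamiliar with the splitting machinery referred to above, the sketch below shows plain Strang (symmetric) operator splitting for a 1D convection-diffusion-reaction equation with periodic boundaries. It is a generic textbook scheme, not the authors' discrete-analytical or variational construction, and all parameter values are invented.

```python
import numpy as np

# u_t + a u_x = D u_xx - k u on a periodic 1D domain, advanced by Strang splitting:
# half-step reaction (exact), full step advection-diffusion (explicit), half-step reaction.
N, L = 200, 1.0
dx = L / N
x = np.arange(N) * dx
a, D, k = 1.0, 0.005, 1.0
dt = 0.4 * min(dx / a, dx**2 / (2 * D))     # respect CFL and diffusion limits

u = np.exp(-200 * (x - 0.3) ** 2)           # initial blob

def reaction_half_step(u):
    return u * np.exp(-k * dt / 2)          # exact solution of u_t = -k u

def advection_diffusion_step(u):
    upwind = (u - np.roll(u, 1)) / dx       # first-order upwind for a > 0
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return u + dt * (-a * upwind + D * lap)

for _ in range(500):
    u = reaction_half_step(u)
    u = advection_diffusion_step(u)
    u = reaction_half_step(u)

print("mass after 500 steps:", u.sum() * dx)
```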
Variational methods for direct/inverse problems of atmospheric dynamics and chemistry
NASA Astrophysics Data System (ADS)
Penenko, Vladimir; Penenko, Alexey; Tsvetova, Elena
2013-04-01
We present a variational approach for solving direct and inverse problems of atmospheric hydrodynamics and chemistry. It is important that accurate matching of the numerical schemes is provided in the chain of objects: direct/adjoint problems - sensitivity relations - inverse problems, including assimilation of all available measurement data. To solve the problems we have developed a new enhanced set of cost-effective algorithms. The matched description of the multi-scale processes is provided by a specific choice of the functionals of the variational principle for the whole set of integrated models. All functionals of the variational principle are then approximated in space and time by splitting and decomposition methods. Such an approach allows us to consider separately, for example, the space-time problems of atmospheric chemistry in the frames of decomposition schemes for the integral identity sum analogs of the variational principle at each time step and in each 3D finite volume. To enhance the efficiency of the implementation, the set of chemical reactions is divided into subsets related to the operators of production and destruction. Then the idea of Euler's integrating factors is applied in the frames of the local adjoint problem technique [1]-[3]. The analytical solutions of such adjoint problems play the role of integrating factors for the differential equations describing atmospheric chemistry. With their help, the system of differential equations is transformed into an equivalent system of integral equations. As a result, we avoid the construction and inversion of preconditioning operators containing the Jacobi matrices which arise in traditional implicit schemes for ODE solution. This is the main advantage of our schemes. At the same time step, but at different stages of the "global" splitting scheme, the system of atmospheric dynamics equations is solved. For the convection-diffusion equations for all state functions in the integrated models we have developed monotone and stable discrete-analytical numerical schemes [1]-[3] conserving the positivity of the chemical substance concentrations and possessing the properties of energy and mass balance that are postulated in the general variational principle for integrated models. All algorithms for the solution of transport, diffusion and transformation problems are direct (without iterations). The work is partially supported by Program No 4 of the Presidium of RAS and Program No 3 of the Mathematical Department of RAS, by RFBR project 11-01-00187 and by Integrating projects of SD RAS No 8 and 35. Our studies are in line with the goals of COST Action ES1004. References 1. Penenko V., Tsvetova E. Discrete-analytical methods for the implementation of variational principles in environmental applications, Journal of Computational and Applied Mathematics, 2009, V. 226: 319-330. 2. Penenko A.V. Discrete-analytic schemes for solving an inverse coefficient heat conduction problem in a layered medium with gradient methods, Numerical Analysis and Applications, 2012, V. 5: 326-341. 3. V. Penenko, E. Tsvetova. Variational methods for constructing the monotone approximations for atmospheric chemistry models, Numerical Analysis and Applications, 2013 (in press).
Three geographic decomposition approaches in transportation network analysis
DOT National Transportation Integrated Search
1980-03-01
This document describes the results of research into the application of geographic decomposition techniques to practical transportation network problems. Three approaches are described for the solution of the traffic assignment problem. One approach ...
Application of Decomposition to Transportation Network Analysis
DOT National Transportation Integrated Search
1976-10-01
This document reports preliminary results of five potential applications of the decomposition techniques from mathematical programming to transportation network problems. The five application areas are (1) the traffic assignment problem with fixed de...
Stable, non-dissipative, and conservative flux-reconstruction schemes in split forms
NASA Astrophysics Data System (ADS)
Abe, Yoshiaki; Morinaka, Issei; Haga, Takanori; Nonomura, Taku; Shibata, Hisaichi; Miyaji, Koji
2018-01-01
A stable, non-dissipative, and conservative flux-reconstruction (FR) scheme is constructed and demonstrated for the compressible Euler and Navier-Stokes equations. The proposed FR framework adopts a split form (also known as the skew-symmetric form) for the convective terms. Sufficient conditions to satisfy both the primary conservation (PC) and kinetic energy preservation (KEP) properties are rigorously derived by polynomial-based analysis for a general FR framework. It is found that the split form needs to be expressed in the PC split form or the KEP split form to satisfy each property in a discrete sense. The PC split form is retrieved from existing general forms (Kennedy and Gruber [33]); in contrast, we newly introduce the KEP split form as a comprehensive form constituting a KEP scheme in the FR framework. Furthermore, Gauss-Lobatto (GL) solution points and the g2 correction function are required to satisfy the KEP property, while any correction function can be used for the PC property. The resulting split-form FR framework satisfying the KEP property is similar to the split-form DGSEM-GL method proposed by Gassner [23], but in this study it is derived solely by polynomial-based analysis without explicitly using the diagonal-norm SBP property. Based on a series of numerical tests (e.g., the Sod shock tube), both the PC and KEP properties have been verified. We have also demonstrated that, using a non-dissipative KEP flux, a sixteenth-order (p15) simulation of the viscous Taylor-Green vortex (Re = 1,600) is stable and its results are free of unphysical oscillations on a relatively coarse mesh (total number of degrees of freedom (DoFs) of 128³).
Average structure and M2 site configurations in C2/c clinopyroxenes along the Di-En join
NASA Astrophysics Data System (ADS)
Tribaudino, M.; Benna, P.; Bruno, E.
1989-12-01
In order to clarify the structural configurations observed in Di_ss in the Ca-rich region of the Di-En join (in which TEM observations show neither exsolution microstructures nor evidence of spinodal decomposition), single crystals large enough for X-ray diffraction analyses, with composition (Ca0.66Mg0.34)MgSi2O6, have been equilibrated close to the solvus at T = 1350 °C for 317 h and quenched to room temperature. The refinement in the C2/c space group shows that in the M2 site Ca and Mg are fully 'ordered' in two split positions (M2occ: 0.66 Ca; M2'occ: 0.34 Mg). Since the average structure shows a marked elongation of the anisotropic thermal ellipsoids of the O2 and O3 oxygen atoms, the refinement has been carried out according to a split model for the O2 and O3 atoms: Ca appears 8-coordinated (as in diopside) and Mg shows a sixfold coordination similar to that of high-pigeonite. This coordination for Mg is significantly different from the fourfold coordination (Zn-like, as in Zn-cpx) proposed previously, and it is a more probable coordination for Mg from a crystal-chemical point of view. The same results were obtained by refining a Di80En20 cpx, equilibrated at T = 1230 °C, according to the same O-split model. The data support the coexistence of a Di-like configuration for Ca and a high-pigeonite-like configuration for Mg also away from the solvus. At temperatures very near the solidus the different configurations observed at room temperature in the quenched samples should converge, and Ca and Mg should retain a single disordered configuration in the M2 site.
A hybrid method with deviational particles for spatial inhomogeneous plasma
NASA Astrophysics Data System (ADS)
Yan, Bokai
2016-03-01
In this work we propose a Hybrid method with Deviational Particles (HDP) for a plasma modeled by the inhomogeneous Vlasov-Poisson-Landau system. We split the distribution into a Maxwellian part evolved by a grid based fluid solver and a deviation part simulated by numerical particles. These particles, named deviational particles, could be both positive and negative. We combine the Monte Carlo method proposed in [31], a Particle in Cell method and a Macro-Micro decomposition method [3] to design an efficient hybrid method. Furthermore, coarse particles are employed to accelerate the simulation. A particle resampling technique on both deviational particles and coarse particles is also investigated and improved. This method is applicable in all regimes and significantly more efficient compared to a PIC-DSMC method near the fluid regime.
Multi-partitioning for ADI-schemes on message passing architectures
NASA Technical Reports Server (NTRS)
Vanderwijngaart, Rob F.
1994-01-01
A kind of discrete-operator splitting called Alternating Direction Implicit (ADI) has been found to be useful in simulating fluid flow problems. In particular, it is being used to study the effects of hot exhaust jets from high performance aircraft on landing surfaces. Decomposition techniques that minimize load imbalance and message-passing frequency are described. Three strategies that are investigated for implementing the NAS Scalar Penta-diagonal Parallel Benchmark (SP) are transposition, pipelined Gaussian elimination, and multipartitioning. The multipartitioning strategy, which was used on Ethernet, was found to be the most efficient, although it was considered only a moderate success because of Ethernet's limited communication properties. The efficiency derived largely from the coarse granularity of the strategy, which reduced latencies and allowed overlap of communication and computation.
Mohamed, Hala Sh; Dahy, AbdelRahman A; Mahfouz, Refaat M
2017-10-25
Kinetic analysis of the non-isothermal decomposition of un-irradiated and photon-beam-irradiated 5-fluorouracil (5-FU), an anti-cancer drug, was carried out in static air. Thermal decomposition of 5-FU proceeds in two steps: one minor step in the temperature range 270-283 °C followed by the major step in the range 285-360 °C. The non-isothermal data for un-irradiated and photon-irradiated 5-FU were analyzed using linear (Tang) and non-linear (Vyazovkin) isoconversional methods. Applying these model-free methods to the present kinetic data showed a clear dependence of the activation energy on the extent of conversion. For un-irradiated 5-FU, the non-isothermal data analysis indicates that the decomposition is generally described by the A3 and A4 models for the minor and major decomposition steps, respectively. For a photon-irradiated sample of 5-FU with a total absorbed dose of 10 Gy, the decomposition is controlled by the A2 model throughout the conversion range. The activation energies calculated for photon-irradiated 5-FU were found to be lower than the values obtained from the thermal decomposition of the un-irradiated sample, probably due to the formation of additional nucleation sites created by photon irradiation. The decomposition path was investigated by intrinsic reaction coordinate (IRC) calculations at the B3LYP/6-311++G(d,p) level of DFT. Two transition states were involved in the process, corresponding to homolytic rupture of the N-H bond and to ring scission, respectively. Published by Elsevier B.V.
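To illustrate the isoconversional idea in code, the sketch below applies the differential Friedman method to synthetic non-isothermal data generated from a simple first-order model at three heating rates. The paper itself uses the Tang (linear) and Vyazovkin (non-linear) procedures, so this is only a generic demonstration of how an activation energy is extracted at each conversion level; all numbers are invented.

```python
import numpy as np

R = 8.314                      # J mol-1 K-1
E_true, A = 120e3, 1e12        # synthetic "true" kinetics, first-order model f(a) = 1 - a

def simulate(beta, T0=400.0, T1=800.0, n=20000):
    """Integrate da/dT = (A/beta) exp(-E/RT) (1-a) with a simple Euler scheme."""
    T = np.linspace(T0, T1, n)
    dT = T[1] - T[0]
    alpha = np.zeros(n)
    for i in range(1, n):
        rate = (A / beta) * np.exp(-E_true / (R * T[i-1])) * (1 - alpha[i-1])
        alpha[i] = min(alpha[i-1] + rate * dT, 0.999999)
    dadt = np.gradient(alpha, T) * beta      # da/dt = beta * da/dT
    return T, alpha, dadt

betas = [5/60, 10/60, 20/60]                 # heating rates in K/s (5, 10, 20 K/min)
runs = [simulate(b) for b in betas]

for a_level in (0.2, 0.5, 0.8):
    x, y = [], []
    for T, alpha, dadt in runs:
        i = np.searchsorted(alpha, a_level)  # first index where alpha >= level
        x.append(1.0 / T[i])
        y.append(np.log(dadt[i]))
    slope = np.polyfit(x, y, 1)[0]           # Friedman: ln(da/dt) = const - E/(R T)
    print(f"alpha={a_level:.1f}  Ea ~ {-slope * R / 1000:.1f} kJ/mol")
```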
Geist, Barbara Katharina; Dobrozemsky, Georg; Samal, Martin; Schaffarich, Michael P; Sinzinger, Helmut; Staudenherz, Anton
2015-12-01
The split or differential renal function is the most widely accepted quantitative parameter derived from radionuclide renography. To examine the intercenter variance of this parameter, we designed a worldwide round robin test. Five selected dynamic renal studies have been distributed all over the world by e-mail. Three of these studies are anonymized patient data acquired using the EANM standardized protocol and two studies are phantom studies. In a simple form, individual participants were asked to measure renal split function as well as to provide additional information such as data analysis software, positioning of background region of interest, or the method of calculation. We received the evaluation forms from 34 centers located in 21 countries. The analysis of the round robin test yielded an overall z-score of 0.3 (a z-score below 1 reflecting a good result). However, the z-scores from several centers were unacceptably high, with values greater than 3. In particular, the studies with impaired renal function showed a wide variance. A wide variance in the split renal function was found in patients with impaired kidney function. This study indicates the ultimate importance of quality control and standardization of the measurement of the split renal function. It is especially important with respect to the commonly accepted threshold for significant change in split renal function by 10%.
Tu, Jun-Ling; Yuan, Jiao-Jiao
2018-02-13
The thermal decomposition behavior of olive hydroxytyrosol (HT) was first studied using thermogravimetry (TG). Chemical bond cleavage and the evolved gases during the thermal decomposition of HT were also investigated using thermogravimetry coupled with infrared spectroscopy (TG-FTIR). Thermogravimetry-differential thermogravimetry (TG-DTG) curves revealed that the thermal decomposition of HT began at 262.8 °C and ended at 409.7 °C with a main mass loss. It was demonstrated that a high heating rate (over 20 K·min⁻¹) restrained the thermal decomposition of HT, resulting in an obvious thermal hysteresis. Furthermore, a thermal decomposition kinetics investigation of HT indicated that the non-isothermal decomposition mechanism was one-dimensional diffusion (D1), with integral form g(x) = x² and differential form f(x) = 1/(2x). Four combined approaches were employed to calculate the activation energy (E = 128.50 kJ·mol⁻¹) and the Arrhenius pre-exponential factor (ln A = 24.39 min⁻¹). In addition, a tentative mechanism of HT thermal decomposition was further developed. The results provide a theoretical reference for the potential thermal stability of HT.
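As a hedged illustration of how a D1 (one-dimensional diffusion) fit of this kind can be set up, the snippet below applies the classical Coats-Redfern linearization ln[g(α)/T²] = ln(AR/(βE)) − E/(RT) with g(α) = α² to synthetic data. It is not the authors' four-method procedure, and the data are generated from the same approximation purely to show the fitting step.

```python
import numpy as np

R = 8.314                      # J mol-1 K-1
beta = 10.0 / 60.0             # heating rate, K/s (10 K/min)
E_true, A_true = 128.5e3, np.exp(24.39) / 60.0   # invented "true" values, A in s-1

# Synthetic conversion curve built from the Coats-Redfern approximation itself,
# g(alpha) = alpha**2  =>  alpha(T) = sqrt(A R T^2 / (beta E) * exp(-E/(R T)))
T = np.linspace(520.0, 600.0, 50)                # K
alpha = np.sqrt(A_true * R * T**2 / (beta * E_true) * np.exp(-E_true / (R * T)))

# Coats-Redfern fit for the D1 model: y = ln(g(alpha)/T^2) is linear in 1/T.
y = np.log(alpha**2 / T**2)
slope, intercept = np.polyfit(1.0 / T, y, 1)
E_fit = -slope * R                               # J/mol
A_fit = np.exp(intercept) * beta * E_fit / R     # s-1
print(f"E ~ {E_fit/1e3:.1f} kJ/mol, ln A ~ {np.log(A_fit*60):.2f} (A in min-1)")
```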
Polarimetric Decomposition Analysis of the Deepwater Horizon Oil Slick Using L-Band UAVSAR Data
NASA Technical Reports Server (NTRS)
Jones, Cathleen; Minchew, Brent; Holt, Benjamin
2011-01-01
We report here an analysis of the polarization dependence of L-band radar backscatter from the main slick of the Deepwater Horizon oil spill, with specific attention to the utility of polarimetric decomposition analysis for discrimination of oil from clean water and identification of variations in the oil characteristics. For this study we used data collected with the UAVSAR instrument from opposing look directions directly over the main oil slick. We find that both the Cloude-Pottier and Shannon entropy polarimetric decomposition methods offer promise for oil discrimination, with the Shannon entropy method yielding the same information as contained in the Cloude-Pottier entropy and averaged intensity parameters, but with significantly less computational complexity.
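For readers who want to see what the Cloude-Pottier quantities are numerically, the sketch below computes the entropy H and mean alpha angle from a single 3x3 coherency matrix. Real processing would average the coherency matrix over a window for every pixel of the UAVSAR scene; the example matrix here is arbitrary.

```python
import numpy as np

def cloude_pottier(T):
    """Entropy H and mean alpha angle (degrees) of a 3x3 Hermitian coherency matrix."""
    eigval, eigvec = np.linalg.eigh(T)          # real eigenvalues, ascending order
    eigval = np.clip(eigval, 0.0, None)
    p = eigval / eigval.sum()                   # pseudo-probabilities
    nz = p > 0
    H = -np.sum(p[nz] * np.log(p[nz])) / np.log(3.0)
    # alpha_i is the arccos of the magnitude of the first component of each eigenvector
    alpha_i = np.degrees(np.arccos(np.abs(eigvec[0, :])))
    mean_alpha = np.sum(p * alpha_i)
    return H, mean_alpha

# Arbitrary example coherency matrix (would come from multi-looked PolSAR data).
T = np.array([[2.0, 0.3 + 0.1j, 0.0],
              [0.3 - 0.1j, 1.0, 0.05j],
              [0.0, -0.05j, 0.5]])
print(cloude_pottier(T))
```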
1980-12-01
Keywords: bulk cargo, market demand analysis, iron, commodity resource inventory. The study included a Commodity Resource Inventory, a Modal Split Analysis and a Market Demand Analysis. The work included investigation and analyses of the production …
A data-driven method to enhance vibration signal decomposition for rolling bearing fault analysis
NASA Astrophysics Data System (ADS)
Grasso, M.; Chatterton, S.; Pennacchi, P.; Colosimo, B. M.
2016-12-01
Health condition analysis and diagnostics of rotating machinery requires the capability of properly characterizing the information content of sensor signals in order to detect and identify possible fault features. Time-frequency analysis plays a fundamental role, as it allows determining both the existence and the causes of a fault. The separation of components belonging to different time-frequency scales, associated either with healthy or with faulty conditions, represents a challenge that motivates the development of effective methodologies for multi-scale signal decomposition. In this framework, the Empirical Mode Decomposition (EMD) is a flexible tool, thanks to its data-driven and adaptive nature. However, the EMD usually yields an over-decomposition of the original signals into a large number of intrinsic mode functions (IMFs). The selection of the most relevant IMFs is a challenging task, and the reference literature lacks automated methods to achieve a synthetic decomposition into a few physically meaningful modes while avoiding the generation of spurious or meaningless modes. The paper proposes a novel automated approach aimed at generating a decomposition into a minimal number of relevant modes, called Combined Mode Functions (CMFs), each consisting of a sum of adjacent IMFs that share similar properties. The final number of CMFs is selected in a fully data-driven way, leading to an enhanced characterization of the signal content without any information loss. A novel criterion to assess the dissimilarity between adjacent CMFs is proposed, based on probability density functions of frequency spectra. The method is suitable for analyzing vibration signals that may be periodically acquired within the operating life of rotating machinery. A rolling element bearing fault analysis based on experimental data is presented to demonstrate the performance of the method and the benefits it provides.
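A rough sketch of the overall idea follows, using the third-party PyEMD package for the sifting step and a Jensen-Shannon distance between normalized spectra as a stand-in for the paper's PDF-based dissimilarity criterion. The threshold and the greedy merging rule are placeholders, not the authors' automated selection.

```python
import numpy as np
from PyEMD import EMD                       # pip install EMD-signal (assumed available)
from scipy.spatial.distance import jensenshannon

# Synthetic "vibration" signal: two tones plus noise.
fs = 2000.0
t = np.arange(0, 2.0, 1.0 / fs)
sig = (np.sin(2 * np.pi * 35 * t) + 0.5 * np.sin(2 * np.pi * 240 * t)
       + 0.2 * np.random.default_rng(0).standard_normal(t.size))

imfs = EMD().emd(sig)                       # usually more IMFs than physical modes

def spectrum_pdf(x):
    mag = np.abs(np.fft.rfft(x))
    return mag / mag.sum()                  # normalized spectrum as a discrete PDF

# Greedily merge adjacent IMFs whose spectral PDFs are similar into Combined
# Mode Functions (CMFs); 0.45 is an arbitrary illustrative threshold.
cmfs = [imfs[0].copy()]
for imf in imfs[1:]:
    if jensenshannon(spectrum_pdf(cmfs[-1]), spectrum_pdf(imf)) < 0.45:
        cmfs[-1] += imf                     # similar scale -> same CMF
    else:
        cmfs.append(imf.copy())
print(f"{len(imfs)} IMFs reduced to {len(cmfs)} CMFs")
```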
NASA Astrophysics Data System (ADS)
Hu, Shujuan; Chou, Jifan; Cheng, Jianbo
2018-04-01
In order to study the interactions between the atmospheric circulations at middle-high and low latitudes from a global perspective, the authors proposed a mathematical definition of three-pattern circulations, i.e., horizontal, meridional and zonal circulations, in terms of which the actual atmospheric circulation is expanded. This novel decomposition method is shown to accurately describe the actual atmospheric circulation dynamics. The authors used NCEP/NCAR reanalysis data to calculate the climate characteristics of these three-pattern circulations and found that the decomposition model agreed with the observed results. Further dynamical analysis indicates that the decomposition model captures the major features of global three-dimensional atmospheric motions more accurately than the traditional definitions of the Rossby wave, Hadley circulation and Walker circulation. The decomposition model for the first time realizes the decomposition of the global atmospheric circulation into three orthogonal circulations within the horizontal, meridional and zonal planes, offering new opportunities to study the large-scale interactions between the middle-high latitude and low latitude circulations.
Batakliev, Todor; Georgiev, Vladimir; Anachkov, Metody; Rakovsky, Slavcho
2014-01-01
Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work on ozone decomposition has been reported in the literature. This review provides a comprehensive summary of that literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review of kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals has stimulated the use of metal oxide catalysts, particularly catalysts based on manganese oxide. The kinetics of ozone decomposition has been determined to be first order. A mechanism of the catalytic ozone decomposition reaction is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates. PMID:26109880
Pressure-dependent decomposition kinetics of the energetic material HMX up to 3.6 GPa.
Glascoe, Elizabeth A; Zaug, Joseph M; Burnham, Alan K
2009-12-03
The effect of pressure on the global thermal decomposition rate of the energetic material HMX was studied. HMX was precompressed in a diamond anvil cell (DAC) and heated at various rates. The parent species population was monitored as a function of time and temperature using Fourier transform infrared (FTIR) spectroscopy. Global decomposition rates were determined by fitting the fraction reacted to the extended-Prout-Tompkins nucleation-growth model and the Friedman isoconversional method. The results of these experiments and analysis indicate that pressure accelerates the decomposition at low-to-moderate pressures (i.e., between ambient pressure and 0.1 GPa) and decelerates the decomposition at higher pressures. The decomposition acceleration is attributed to pressure-enhanced autocatalysis, whereas the deceleration at high pressures is attributed to pressure-inhibiting bond homolysis step(s), which would result in an increase in volume. These results indicate that both the beta- and delta-polymorphs of HMX are sensitive to pressure in the thermally induced decomposition kinetics.
Dual energy CT: How well can pseudo-monochromatic imaging reduce metal artifacts?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuchenbecker, Stefan, E-mail: stefan.kuchenbecker@dkfz.de; Faby, Sebastian; Sawall, Stefan
2015-02-15
Purpose: Dual Energy CT (DECT) provides so-called monoenergetic images based on a linear combination of the original polychromatic images. At certain patient-specific energy levels, corresponding to certain patient- and slice-dependent linear combination weights, e.g., E = 160 keV corresponds to α = 1.57, a significant reduction of metal artifacts may be observed. The authors aimed at analyzing the method for its artifact reduction capabilities to identify its limitations. The results are compared with raw data-based processing. Methods: Clinical DECT uses a simplified version of monochromatic imaging by linearly combining the low and the high kV images and by assigning an energy to that linear combination. Those pseudo-monochromatic images can be used by radiologists to obtain images with reduced metal artifacts. The authors analyzed the underlying physics and carried out a series expansion of the polychromatic attenuation equations. The resulting nonlinear terms are responsible for the artifacts, but they are not linearly related between the low and the high kV scan: A linear combination of both images cannot eliminate the nonlinearities, it can only reduce their impact. Scattered radiation yields additional noncanceling nonlinearities. This method is compared to raw data-based artifact correction methods. To quantify the artifact reduction potential of pseudo-monochromatic images, they simulated the FORBILD abdomen phantom with metal implants, and they assessed patient data sets of a clinical dual source CT system (100, 140 kV Sn) containing artifacts induced by a highly concentrated contrast agent bolus and by metal. In each case, they manually selected an optimal α and compared it to a raw data-based material decomposition in case of simulation, to raw data-based material decomposition of inconsistent rays in case of the patient data set containing contrast agent, and to the frequency split normalized metal artifact reduction in case of the metal implant. For each case, the contrast-to-noise ratio (CNR) was assessed. Results: In the simulation, the pseudo-monochromatic images yielded acceptable artifact reduction results. However, the CNR in the artifact-reduced images was more than 60% lower than in the original polychromatic images. In contrast, the raw data-based material decomposition did not significantly reduce the CNR in the virtual monochromatic images. Regarding the patient data with beam hardening artifacts and with metal artifacts from small implants, the pseudo-monochromatic method was able to reduce the artifacts, again with the downside of a significant CNR reduction. More intense metal artifacts, e.g., as those caused by an artificial hip joint, could not be suppressed. Conclusions: Pseudo-monochromatic imaging is able to reduce beam hardening, scatter, and metal artifacts in some cases but it cannot remove them. In all cases, the CNR is significantly reduced, thereby rendering the method questionable, unless special post processing algorithms are implemented to restore the high CNR from the original images (e.g., by using a frequency split technique). Raw data-based dual energy decomposition methods should be preferred, in particular, because the CNR penalty is almost negligible.
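To make the linear-combination step concrete, here is a minimal sketch of how a pseudo-monochromatic image can be formed and how a weight α might be picked to flatten an artifact-dominated region. The weighting convention (α applied to the high-kV image, with extrapolation beyond [0, 1] allowed) and the variance criterion are assumptions, not the clinical vendor implementation evaluated in the paper.

```python
import numpy as np

def pseudo_mono(img_low, img_high, alpha):
    """Pixelwise linear combination of the low- and high-kV images."""
    return alpha * img_high + (1.0 - alpha) * img_low

def pick_alpha(img_low, img_high, artifact_mask, alphas=np.linspace(-0.5, 2.0, 126)):
    """Return the weight that minimizes the standard deviation inside a region
    that should be homogeneous but is dominated by streaks/beam hardening."""
    stds = [pseudo_mono(img_low, img_high, a)[artifact_mask].std() for a in alphas]
    return alphas[int(np.argmin(stds))]

# Toy example with random "images" and a rectangular artifact ROI.
rng = np.random.default_rng(1)
low, high = rng.normal(100, 5, (256, 256)), rng.normal(60, 5, (256, 256))
mask = np.zeros((256, 256), dtype=bool)
mask[100:140, 100:140] = True
print("alpha* =", pick_alpha(low, high, mask))
```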
SciSpark: In-Memory Map-Reduce for Earth Science Algorithms
NASA Astrophysics Data System (ADS)
Ramirez, P.; Wilson, B. D.; Whitehall, K. D.; Palamuttam, R. S.; Mattmann, C. A.; Shah, S.; Goodman, A.; Burke, W.
2016-12-01
We are developing a lightning fast Big Data technology called SciSpark based on Apache Spark under a NASA AIST grant (PI Mattmann). Spark implements the map-reduce paradigm for parallel computing on a cluster, but emphasizes in-memory computation, "spilling" to disk only as needed, and so outperforms the disk-based Apache Hadoop by 100x in memory and by 10x on disk. SciSpark extends Spark to support Earth Science use in three ways: Efficient ingest of N-dimensional geo-located arrays (physical variables) from netCDF3/4, HDF4/5, and/or OPeNDAP URLs; Array operations for dense arrays in scala and Java using the ND4S/ND4J or Breeze libraries; Operations to "split" datasets across a Spark cluster by time or space or both. For example, a decade-long time-series of geo-variables can be split across time to enable parallel "speedups" of analysis by day, month, or season. Similarly, very high-resolution climate grids can be partitioned into spatial tiles for parallel operations across rows, columns, or blocks. In addition, using Spark's gateway into python, PySpark, one can utilize the entire ecosystem of numpy, scipy, etc. Finally, SciSpark Notebooks provide a modern eNotebook technology in which scala, python, or spark-sql codes are entered into cells in the Notebook and executed on the cluster, with results, plots, or graph visualizations displayed in "live widgets". We have exercised SciSpark by implementing three complex Use Cases: discovery and evolution of Mesoscale Convective Complexes (MCCs) in storms, yielding a graph of connected components; PDF Clustering of atmospheric state using parallel K-Means; and statistical "rollups" of geo-variables or model-to-obs. differences (i.e. mean, stddev, skewness, & kurtosis) by day, month, season, year, and multi-year. Geo-variables are ingested and split across the cluster using methods on the sciSparkContext object including netCDFVariables() for spatial decomposition and wholeNetCDFVariables() for time-series. The presentation will cover the architecture of SciSpark, the design of the scientific RDD (sRDD) data structures for N-dim. arrays, results from the three science Use Cases, example Notebooks, lessons learned from the algorithm implementations, and parallel performance metrics.
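The statistical "rollup" use case can be sketched in plain PySpark without the sciSparkContext helpers named above. The snippet below groups per-timestep 2D arrays by month and computes the four moments, with fabricated random data standing in for the netCDF variables that SciSpark would ingest.

```python
import numpy as np
from pyspark import SparkContext
from scipy.stats import skew, kurtosis

sc = SparkContext(appName="monthly-rollup-sketch")

# Fabricated stand-in for a time series of gridded fields: (month, 2D array).
rng = np.random.default_rng(0)
records = [(month, rng.normal(15.0 + month, 2.0, size=(90, 180)))
           for month in range(1, 13) for _ in range(10)]

def moments(arrays):
    values = np.concatenate([a.ravel() for a in arrays])
    return (float(values.mean()), float(values.std()),
            float(skew(values)), float(kurtosis(values)))

rollup = (sc.parallelize(records)          # split records across the cluster
            .groupByKey()                  # gather all fields for each month
            .mapValues(lambda arrays: moments(list(arrays)))
            .collect())

for month, stats in sorted(rollup):
    print(month, stats)
sc.stop()
```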
Herden, Uta; Fischer, Lutz; Koch, Martina; Li, Jun; Achilles, Eike-Gert; Nashan, Björn
2018-05-20
When a sufficiently high-quality liver is available, classic liver graft splitting is performed. In such cases, a small child receives the left-lateral split graft, with subsequent transplantation of the right-extended graft in an adult. We analysed 64 patients who received right-extended liver grafts from 2007-2015, and compared outcomes between cases of external versus in-house graft splitting. We found excellent donor data and comparable recipient characteristics. Cold ischemic time was significantly longer for external (14±2 h; n=38) versus internal (12±2 h; n=26) liver graft splitting. Compared to the internal splitting group, the external liver graft splitting group showed significantly reduced 1- and 5-year patient survival (100% versus 84%; P=.035) and higher rates of biliary (24% versus 12%) and vascular (8% versus 0%) complications. The outcomes following right-extended split LTX are disappointing given the excellent organ quality. External liver graft splitting was associated with worse outcomes and higher surgical complication rates. This may be related to the prolonged cold ischemic time due to two-fold transportation, as well as to unfamiliarity with the details of the splitting procedure and its pitfalls. This article is protected by copyright. All rights reserved.
NASA Technical Reports Server (NTRS)
Rhodes, Edward J., Jr.; Cacciani, Alessandro; Korzennik, Sylvain G.
1988-01-01
The initial frequency splitting results of solar p-mode oscillations obtained from the 1988 helioseismology program at the Mt. Wilson Observatory are presented. The frequency splittings correspond to the rotational splittings of sectoral harmonics which range in degree between 10 and 598. They were obtained from a cross-correlation analysis of the prograde and retrograde portions of a two-dimensional (t - v) power spectrum. This power spectrum was computed from an eight-hour sequence of full-disk Dopplergrams obtained on July 2, 1988, at the 60-foot tower telescope with a Na magneto-optical filter and a 1024x1024 pixel CCD camera. These frequency splittings have an inherently larger scatter than did the splittings obtained from earlier 16-day power spectra. These splittings are consistent with an internal solar rotational velocity which is independent of radius along the equatorial plane. The normalized frequency splittings averaged 449 ± 3 nHz, a value which is very close to the observed equatorial rotation rate of the photospheric gas of 451.7 nHz.
An analysis of scatter decomposition
NASA Technical Reports Server (NTRS)
Nicol, David M.; Saltz, Joel H.
1990-01-01
A formal analysis of a powerful mapping technique known as scatter decomposition is presented. Scatter decomposition divides an irregular computational domain into a large number of equal sized pieces, and distributes them modularly among processors. A probabilistic model of workload in one dimension is used to formally explain why, and when scatter decomposition works. The first result is that if correlation in workload is a convex function of distance, then scattering a more finely decomposed domain yields a lower average processor workload variance. The second result shows that if the workload process is stationary Gaussian and the correlation function decreases linearly in distance until becoming zero and then remains zero, scattering a more finely decomposed domain yields a lower expected maximum processor workload. Finally it is shown that if the correlation function decreases linearly across the entire domain, then among all mappings that assign an equal number of domain pieces to each processor, scatter decomposition minimizes the average processor workload variance. The dependence of these results on the assumption of decreasing correlation is illustrated with situations where a coarser granularity actually achieves better load balance.
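The intuition behind the results above, that scattering finely decomposed pieces decorrelates the work each processor receives, can be reproduced with a few lines of NumPy. The correlated "workload" below is synthetic, and the experiment only illustrates the mapping, not the paper's probabilistic proofs.

```python
import numpy as np

rng = np.random.default_rng(42)
P = 8                                   # processors
n_pieces = 512                          # finely decomposed 1D domain

# Spatially correlated workload: smooth a white-noise sequence with a moving average.
noise = rng.normal(1.0, 0.3, n_pieces + 64)
work = np.convolve(noise, np.ones(64) / 64, mode="valid")[:n_pieces]

def per_processor_load(assignment):
    return np.array([work[assignment == p].sum() for p in range(P)])

blocks = np.repeat(np.arange(P), n_pieces // P)     # contiguous block mapping
scatter = np.arange(n_pieces) % P                   # modular "scatter" mapping

print("block mapping   load variance:", per_processor_load(blocks).var())
print("scatter mapping load variance:", per_processor_load(scatter).var())
```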
An optimization approach for fitting canonical tensor decompositions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson
Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
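To show what the "typical ALS approach" mentioned above looks like in practice, here is a bare-bones CP-ALS for a third-order tensor written directly in NumPy. It is a pedagogical sketch (random initialization, fixed iteration count, no normalization or convergence checks), not the gradient-based method proposed in the report.

```python
import numpy as np

def unfold(X, mode):
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def khatri_rao(U, V):
    # column-wise Kronecker product, rows ordered consistently with unfold()
    return np.einsum('ir,jr->ijr', U, V).reshape(U.shape[0] * V.shape[0], -1)

def cp_als(X, rank, n_iter=50, seed=0):
    """Fit X ~ sum_r a_r o b_r o c_r by alternating least squares."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((dim, rank)) for dim in X.shape)
    for _ in range(n_iter):
        A = unfold(X, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(X, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(X, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Build a rank-3 tensor plus noise and check the reconstruction error.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((d, 3)) for d in (20, 25, 30))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0) + 0.01 * rng.standard_normal((20, 25, 30))
A, B, C = cp_als(X, rank=3)
Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
print("relative error:", np.linalg.norm(X - Xhat) / np.linalg.norm(X))
```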
Perfluoropolyalkylether decomposition on catalytic aluminas
NASA Technical Reports Server (NTRS)
Morales, Wilfredo
1994-01-01
The decomposition of Fomblin Z25, a commercial perfluoropolyalkylether liquid lubricant, was studied using the Penn State Micro-oxidation Test, and a thermal gravimetric/differential scanning calorimetry unit. The micro-oxidation test was conducted using 440C stainless steel and pure iron metal catalyst specimens, whereas the thermal gravimetric/differential scanning calorimetry tests were conducted using catalytic alumina pellets. Analysis of the thermal data, high pressure liquid chromatography data, and x-ray photoelectron spectroscopy data support evidence that there are two different decomposition mechanisms for Fomblin Z25, and that reductive sites on the catalytic surfaces are responsible for the decomposition of Fomblin Z25.
Analysis of Self-Excited Combustion Instabilities Using Decomposition Techniques
2016-07-05
Proper orthogonal decomposition and dynamic mode decomposition are evaluated for the study of self-excited longitudinal combustion instabilities in laboratory-scaled single-element gas turbine and rocket … (DOI: 10.2514/1.J054557). In addition, we also evaluate the capabilities of the methods to deal with data sets of different spatial extents and temporal resolution.
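As a generic illustration of the two techniques named in this record (not the study's combustor data or processing chain), the snippet below computes POD modes via an SVD of mean-subtracted snapshots and a basic exact-DMD eigendecomposition from the same synthetic data.

```python
import numpy as np

# Synthetic snapshots: two travelling waves sampled on 200 points at 150 instants.
x = np.linspace(0, 2 * np.pi, 200)
t = np.linspace(0, 10, 150)
X_grid, T_grid = np.meshgrid(x, t, indexing="ij")
data = np.sin(X_grid - 2.3 * T_grid) + 0.5 * np.sin(3 * X_grid - 5.1 * T_grid)

# --- Proper Orthogonal Decomposition: SVD of the mean-subtracted snapshot matrix ---
fluct = data - data.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(fluct, full_matrices=False)
energy = S**2 / np.sum(S**2)
print("POD energy of first 4 modes:", energy[:4])

# --- Dynamic Mode Decomposition (exact DMD, rank-truncated) ---
r = 4
X, Y = fluct[:, :-1], fluct[:, 1:]
Ur, Sr, Vtr = np.linalg.svd(X, full_matrices=False)
Ur, Sr, Vtr = Ur[:, :r], Sr[:r], Vtr[:r, :]
Atilde = Ur.conj().T @ Y @ Vtr.conj().T / Sr           # low-rank linear operator
eigvals, W = np.linalg.eig(Atilde)
modes = Y @ Vtr.conj().T / Sr @ W                      # exact DMD modes
dt = t[1] - t[0]
print("DMD frequencies (rad/s):", np.log(eigvals).imag / dt)
```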
Baskaran, Preetisri; Hyvönen, Riitta; Berglund, S Linnea; Clemmensen, Karina E; Ågren, Göran I; Lindahl, Björn D; Manzoni, Stefano
2017-02-01
Tree growth in boreal forests is limited by nitrogen (N) availability. Most boreal forest trees form symbiotic associations with ectomycorrhizal (ECM) fungi, which improve the uptake of inorganic N and also have the capacity to decompose soil organic matter (SOM) and to mobilize organic N ('ECM decomposition'). To study the effects of 'ECM decomposition' on ecosystem carbon (C) and N balances, we performed a sensitivity analysis on a model of C and N flows between plants, SOM, saprotrophs, ECM fungi, and inorganic N stores. The analysis indicates that C and N balances were sensitive to model parameters regulating ECM biomass and decomposition. Under low N availability, the optimal C allocation to ECM fungi, above which the symbiosis switches from mutualism to parasitism, increases with increasing relative involvement of ECM fungi in SOM decomposition. Under low N conditions, increased ECM organic N mining promotes tree growth but decreases soil C storage, leading to a negative correlation between C stores above- and below-ground. The interplay between plant production and soil C storage is sensitive to the partitioning of decomposition between ECM fungi and saprotrophs. Better understanding of interactions between functional guilds of soil fungi may significantly improve predictions of ecosystem responses to environmental change. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.
NASA Astrophysics Data System (ADS)
Liddell, Mitch; Unsworth, Martyn; Pek, Josef
2016-06-01
Viability for the development of an engineered geothermal system (EGS) in the oilsands region near Fort McMurray, Alberta, is investigated by studying the structure of the Precambrian basement rocks with magnetotellurics (MT). MT data were collected at 94 broad-band stations on two east-west profiles. Apparent resistivity and phase data showed little variation along each profile. The short period MT data detected a 1-D resistivity structure that could be identified as the shallow sedimentary basin underlain by crystalline basement rocks to a depth of 4-5 km. At lower frequencies a strong directional dependence, large phase splits, and regions of out-of-quadrant (OOQ) phase were detected. 2-D isotropic inversions of these data failed to produce a realistic resistivity model. A detailed dimensionality analysis found links between large phase tensor skews (˜15°), azimuths, OOQ phases and tensor decomposition strike angles at periods greater than 1 s. Low magnitude induction vectors, as well as uniformity of phase splits and phase tensor character between the northern and southern profiles imply that a 3-D analysis is not necessary or appropriate. Therefore, 2-D anisotropic forward modelling was used to generate a resistivity model to interpret the MT data. The preferred model was based on geological observations of outcropping anisotropic mylonitic basement rocks of the Charles Lake shear zone, 150 km to the north, linked to the study area by aeromagnetic and core sample data. This model fits all four impedance tensor elements with an rms misfit of 2.82 on the southern profile, and 3.3 on the northern. The conductive phase causing the anisotropy is interpreted to be interconnected graphite films within the metamorphic basement rocks. Characterizing the anisotropy is important for understanding how artificial fractures, necessary for EGS development, would form. Features of MT data commonly interpreted to be 3-D (e.g. out of OOQ phase and large phase tensor skew) are shown to be interpretable with this 2-D anisotropic model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glascoe, E A; Zaug, J M; Burnham, A K
The effect of pressure on the thermal decomposition rate of the energetic material HMX was studied. HMX was precompressed in a diamond anvil cell (DAC) and heated at various rates. The parent species population was monitored as a function of time and temperature using Fourier transform infrared (FTIR) spectroscopy. Decomposition rates were determined by fitting the fraction reacted to the extended-Prout-Tompkins nucleation-growth model and the Friedman isoconversional method. The results of these experiments and analysis indicate that pressure accelerates the decomposition at low to moderate pressures (i.e. between ambient pressure and 1 GPa) and decelerates the decomposition at higher pressures. The decomposition acceleration is attributed to pressure-enhanced autocatalysis, whereas the deceleration at high pressures is attributed to pressure-inhibiting bond homolysis step(s), which would result in an increase in volume. These results indicate that both β- and δ-phase HMX are sensitive to pressure in the thermally induced decomposition kinetics.
Quantitative analysis on electric dipole energy in Rashba band splitting.
Hong, Jisook; Rhim, Jun-Won; Kim, Changyoung; Ryong Park, Seung; Hoon Shim, Ji
2015-09-01
We report on quantitative comparison between the electric dipole energy and the Rashba band splitting in model systems of Bi and Sb triangular monolayers under a perpendicular electric field. We used both first-principles and tight binding calculations on p-orbitals with spin-orbit coupling. First-principles calculation shows Rashba band splitting in both systems. It also shows asymmetric charge distributions in the Rashba split bands which are induced by the orbital angular momentum. We calculated the electric dipole energies from coupling of the asymmetric charge distribution and external electric field, and compared it to the Rashba splitting. Remarkably, the total split energy is found to come mostly from the difference in the electric dipole energy for both Bi and Sb systems. A perturbative approach for long wave length limit starting from tight binding calculation also supports that the Rashba band splitting originates mostly from the electric dipole energy difference in the strong atomic spin-orbit coupling regime.
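For reference, the standard two-band Rashba dispersion that underlies the splitting discussed here is E±(k) = ħ²k²/(2m*) ± α_R|k|. The short sketch below evaluates that textbook expression for an assumed effective mass and Rashba parameter; it has no connection to the first-principles or tight-binding calculations of the paper.

```python
import numpy as np

hbar = 1.054571817e-34        # J s
m_e = 9.1093837015e-31        # kg
eV = 1.602176634e-19          # J

m_eff = 0.3 * m_e             # assumed effective mass
alpha_R = 1.0e-10 * eV        # assumed Rashba parameter: 1.0 eV*Angstrom, in J*m

k = np.linspace(-0.5e9, 0.5e9, 1001)           # wavevector in 1/m
E_plus = hbar**2 * k**2 / (2 * m_eff) + alpha_R * np.abs(k)
E_minus = hbar**2 * k**2 / (2 * m_eff) - alpha_R * np.abs(k)

splitting_meV = (E_plus - E_minus) / eV * 1e3  # 2*alpha_R*|k|, in meV
k_R = m_eff * alpha_R / hbar**2                # momentum offset of the band minima
print("max splitting (meV):", splitting_meV.max())
print("band-minimum offset k_R (1/m):", k_R)
```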
Quantitative analysis on electric dipole energy in Rashba band splitting
Hong, Jisook; Rhim, Jun-Won; Kim, Changyoung; Ryong Park, Seung; Hoon Shim, Ji
2015-01-01
We report on quantitative comparison between the electric dipole energy and the Rashba band splitting in model systems of Bi and Sb triangular monolayers under a perpendicular electric field. We used both first-principles and tight binding calculations on p-orbitals with spin-orbit coupling. First-principles calculation shows Rashba band splitting in both systems. It also shows asymmetric charge distributions in the Rashba split bands which are induced by the orbital angular momentum. We calculated the electric dipole energies from coupling of the asymmetric charge distribution and external electric field, and compared it to the Rashba splitting. Remarkably, the total split energy is found to come mostly from the difference in the electric dipole energy for both Bi and Sb systems. A perturbative approach for long wave length limit starting from tight binding calculation also supports that the Rashba band splitting originates mostly from the electric dipole energy difference in the strong atomic spin-orbit coupling regime. PMID:26323493
Liu, Zhengyan; Mao, Xianqiang; Song, Peng
2017-01-01
Temporal index decomposition analysis and spatial index decomposition analysis were applied to understand the driving forces of the emissions embodied in China’s exports and net exports during 2002–2011, respectively. The accumulated emissions embodied in exports accounted for approximately 30% of the total emissions in China; although the contribution of the sectoral total emissions intensity (technique effect) declined, the scale effect was largely responsible for the mounting emissions associated with export, and the composition effect played a largely insignificant role. Calculations of the emissions embodied in net exports suggest that China is generally in an environmentally inferior position compared with its major trade partners. The differences in the economy-wide emission intensities between China and its major trade partners were the biggest contribution to this reality, and the trade balance effect played a less important role. However, a lower degree of specialization in pollution intensive products in exports than in imports helped to reduce slightly the emissions embodied in net exports. The temporal index decomposition analysis results suggest that China should take effective measures to optimize export and supply-side structure and reduce the total emissions intensity. According to spatial index decomposition analysis, it is suggested that a more aggressive import policy was useful for curbing domestic and global emissions, and the transfer of advanced production technologies and emission control technologies from developed to developing countries should be a compulsory global environmental policy option to mitigate the possible leakage of pollution emissions caused by international trade. PMID:28441399
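The scale, composition and technique effects referred to here are typically obtained with a logarithmic mean Divisia index (LMDI) decomposition. The Python sketch below is a generic additive LMDI form with hypothetical variable names, not necessarily the exact formulation of this study; it decomposes the change in emissions E = Σ_i Q·s_i·e_i, where Q is the export scale, s_i the sectoral share and e_i the sectoral emission intensity:

import numpy as np

def logmean(a, b):
    # logarithmic mean, the weighting function used in LMDI
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.where(np.isclose(a, b), a, (a - b) / (np.log(a) - np.log(b)))

def lmdi_effects(Q0, Q1, s0, s1, e0, e1):
    # Returns the additive scale, composition and technique effects,
    # which sum to the total change in sectoral emissions.
    E0, E1 = Q0 * s0 * e0, Q1 * s1 * e1
    w = logmean(E1, E0)
    scale = np.sum(w * np.log(Q1 / Q0))
    composition = np.sum(w * np.log(s1 / s0))
    technique = np.sum(w * np.log(e1 / e0))
    return scale, composition, technique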
Iterative filtering decomposition based on local spectral evolution kernel
Wang, Yang; Wei, Guo-Wei; Yang, Siyang
2011-01-01
Synthesizing information, achieving understanding, and deriving insight from increasingly massive, time-varying, noisy and possibly conflicting data sets are some of the most challenging tasks in the present information age. Traditional technologies, such as the Fourier transform and wavelet multi-resolution analysis, are inadequate to handle all of the above-mentioned tasks. The empirical mode decomposition (EMD) has emerged as a new powerful tool for resolving many challenging problems in data processing and analysis. Recently, an iterative filtering decomposition (IFD) has been introduced to address the stability and efficiency problems of the EMD. Another data analysis technique is the local spectral evolution kernel (LSEK), which provides a near perfect low pass filter with desirable time-frequency localizations. The present work utilizes the LSEK to further stabilize the IFD, and offers an efficient, flexible and robust scheme for information extraction, complexity reduction, and signal and image understanding. The performance of the present LSEK based IFD is intensively validated over a wide range of data processing tasks, including mode decomposition, analysis of time-varying data, information extraction from nonlinear dynamic systems, etc. The utility, robustness and usefulness of the proposed LSEK based IFD are demonstrated via a large number of applications, such as the analysis of stock market data, the decomposition of ocean wave magnitudes, the understanding of physiologic signals and information recovery from noisy images. The performance of the proposed method is compared with that of existing methods in the literature. Our results indicate that the LSEK based IFD improves both the efficiency and the stability of conventional EMD algorithms. PMID:22350559
NASA Astrophysics Data System (ADS)
Lesage, A. A. J.; Smith, L. W.; Al-Taie, H.; See, P.; Griffiths, J. P.; Farrer, I.; Jones, G. A. C.; Ritchie, D. A.; Kelly, M. J.; Smith, C. G.
2015-01-01
A multiplexer technique is used to individually measure an array of 256 split gates on a single GaAs/AlGaAs heterostructure. This results in the generation of large volumes of data, which requires the development of automated data analysis routines. An algorithm is developed to find the spacing between discrete energy levels, which form due to transverse confinement from the split gate. The lever arm, which relates split gate voltage to energy, is also found from the measured data. This reduces the time spent on the analysis. Comparison with estimates obtained visually shows that the algorithm returns reliable results for subband spacing of split gates measured at 1.4 K. The routine is also used to assess direct current bias spectroscopy measurements at lower temperatures (50 mK). This technique is versatile and can be extended to other types of measurements. For example, it is used to extract the magnetic field at which Zeeman-split 1D subbands cross one another.
Cunefare, David; Cooper, Robert F; Higgins, Brian; Katz, David F; Dubra, Alfredo; Carroll, Joseph; Farsiu, Sina
2016-05-01
Quantitative analysis of the cone photoreceptor mosaic in the living retina is potentially useful for early diagnosis and prognosis of many ocular diseases. Non-confocal split detector based adaptive optics scanning light ophthalmoscope (AOSLO) imaging reveals the cone photoreceptor inner segment mosaics often not visualized on confocal AOSLO imaging. Despite recent advances in automated cone segmentation algorithms for confocal AOSLO imagery, quantitative analysis of split detector AOSLO images is currently a time-consuming manual process. In this paper, we present the fully automatic adaptive filtering and local detection (AFLD) method for detecting cones in split detector AOSLO images. We validated our algorithm on 80 images from 10 subjects, showing an overall mean Dice's coefficient of 0.95 (standard deviation 0.03), when comparing our AFLD algorithm to an expert grader. This is comparable to the inter-observer Dice's coefficient of 0.94 (standard deviation 0.04). To the best of our knowledge, this is the first validated, fully-automated segmentation method which has been applied to split detector AOSLO images.
Effects on Text Simplification: Evaluation of Splitting up Noun Phrases
Leroy, Gondy; Kauchak, David; Hogue, Alan
2016-01-01
To help increase health literacy, we are developing a text simplification tool that creates more accessible patient education materials. Tool development is guided by data-driven feature analysis comparing simple and difficult text. In the present study, we focus on the common advice to split long noun phrases. Our previous corpus analysis showed that easier texts contained shorter noun phrases. Subsequently, we conducted a user study to measure the difficulty of sentences containing noun phrases of different lengths (2-gram, 3-gram and 4-gram), conditions (split or not) and, to simulate unknown terms, use of pseudowords (present or not). We gathered 35 evaluations for 30 sentences in each condition (3×2×2 conditions) on Amazon's Mechanical Turk (N=12,600). We conducted a three-way ANOVA for perceived and actual difficulty. Splitting noun phrases had a positive effect on perceived difficulty but a negative effect on actual difficulty. The presence of pseudowords increased perceived and actual difficulty. Without pseudowords, longer noun phrases led to increased perceived and actual difficulty. A follow-up study using the phrases (N = 1,350) showed that measuring awkwardness may indicate when to split noun phrases. We conclude that splitting noun phrases benefits perceived difficulty, but hurts actual difficulty when the phrasing becomes less natural. PMID:27043754
Decomposition and particle release of a carbon nanotube/epoxy nanocomposite at elevated temperatures
NASA Astrophysics Data System (ADS)
Schlagenhauf, Lukas; Kuo, Yu-Ying; Bahk, Yeon Kyoung; Nüesch, Frank; Wang, Jing
2015-11-01
Carbon nanotubes (CNTs) as fillers in nanocomposites have attracted significant attention, and one of the applications is to use the CNTs as flame retardants. For such nanocomposites, possible release of CNTs at elevated temperatures after decomposition of the polymer matrix poses potential health threats. We investigated the airborne particle release from a decomposing multi-walled carbon nanotube (MWCNT)/epoxy nanocomposite in order to measure a possible release of MWCNTs. An experimental set-up was established that allows the samples to be decomposed in a furnace by exposure to increasing temperatures at a constant heating rate, under either ambient air or a nitrogen atmosphere. The particle analysis was performed by aerosol measurement devices and by transmission electron microscopy (TEM) of collected particles. Further, by the application of a thermal denuder, it was also possible to measure non-volatile particles only. The tested samples were characterized and their decomposition kinetics determined using thermogravimetric analysis (TGA). Particle release was investigated for different samples: a neat epoxy, nanocomposites with 0.1 and 1 wt% MWCNTs, and nanocomposites with functionalized MWCNTs. The results showed that the added MWCNTs had little effect on the decomposition kinetics of the investigated samples, but the weight of the remaining residues after decomposition was influenced significantly. The measurements with decomposition in different atmospheres showed a release of a higher number of particles at temperatures below 300 °C when air was used. Analysis of collected particles by TEM revealed that no detectable amount of MWCNTs was released, but micrometer-sized fibrous particles were collected.
s-core network decomposition: A generalization of k-core analysis to weighted networks
NASA Astrophysics Data System (ADS)
Eidsaa, Marius; Almaas, Eivind
2013-12-01
A broad range of systems spanning biology, technology, and social phenomena may be represented and analyzed as complex networks. Recent studies of such networks using k-core decomposition have uncovered groups of nodes that play important roles. Here, we present s-core analysis, a generalization of k-core (or k-shell) analysis to complex networks where the links have different strengths or weights. We demonstrate the s-core decomposition approach on two random networks (ER and configuration model with scale-free degree distribution) where the link weights are (i) random, (ii) correlated, and (iii) anticorrelated with the node degrees. Finally, we apply the s-core decomposition approach to the protein-interaction network of the yeast Saccharomyces cerevisiae in the context of two gene-expression experiments: oxidative stress in response to cumene hydroperoxide (CHP), and fermentation stress response (FSR). We find that the innermost s-cores are (i) different from innermost k-cores, (ii) different for the two stress conditions CHP and FSR, and (iii) enriched with proteins whose biological functions give insight into how yeast manages these specific stresses.
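A minimal Python sketch of the s-core idea is given below, using networkx with a fixed strength threshold s. The published method selects the thresholds from the data, so this fixed-threshold version is a simplification for illustration:

import networkx as nx

def s_core(G, s):
    # maximal subgraph in which every node has strength
    # (sum of incident edge weights) of at least s
    H = G.copy()
    while True:
        weak = [n for n, strength in H.degree(weight="weight") if strength < s]
        if not weak:
            return H
        H.remove_nodes_from(weak)

# toy weighted network; nested s-cores at increasing strength thresholds
G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 1.0), ("b", "c", 2.0),
                           ("c", "d", 0.5), ("a", "c", 1.5)])
for s in (1.0, 2.0, 3.0):
    print(s, sorted(s_core(G, s).nodes()))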
Catalytic and inhibiting effects of lithium peroxide and hydroxide on sodium chlorate decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cannon, J.C.; Zhang, Y.
1995-09-01
Chemical oxygen generators based on sodium chlorate and lithium perchlorate are used in airplanes, submarines, diving, and mine rescue. Catalytic decomposition of sodium chlorate in the presence of cobalt oxide, lithium peroxide, and lithium hydroxide is studied using thermal gravimetric analysis. Lithium peroxide and hydroxide are both moderately active catalysts for the decomposition of sodium chlorate when used alone, and inhibitors when used with the more active catalyst cobalt oxide.
Perrault, Katelynn A; Stefanuto, Pierre-Hugues; Stuart, Barbara H; Rai, Tapan; Focant, Jean-François; Forbes, Shari L
2015-09-01
Cadaver-detection dogs use volatile organic compounds (VOCs) to search for human remains including those deposited on or beneath soil. Soil can act as a sink for VOCs, causing loading of decomposition VOCs in the soil following soft tissue decomposition. The objective of this study was to chemically profile decomposition VOCs from surface decomposition sites after remains were removed from their primary location. Pig carcasses were used as human analogues and were deposited on a soil surface to decompose for 3 months. The remains were then removed from each site and VOCs were collected from the soil for 7 months thereafter and analyzed by comprehensive two-dimensional gas chromatography-time-of-flight mass spectrometry (GC×GC-TOFMS). Decomposition VOCs diminished within 6 weeks and hydrocarbons were the most persistent compound class. Decomposition VOCs could still be detected in the soil after 7 months using Principal Component Analysis. This study demonstrated that the decomposition VOC profile, while detectable by GC×GC-TOFMS in the soil, was considerably reduced and altered in composition upon removal of remains. Chemical reference data is provided by this study for future investigations of canine alert behavior in scenarios involving scattered or scavenged remains.
On the combinatorics of sparsification.
Huang, Fenix Wd; Reidys, Christian M
2012-10-22
We study the sparsification of dynamic programming based on folding algorithms of RNA structures. Sparsification is a method that significantly improves the computation of minimum free energy (mfe) RNA structures. We provide a quantitative analysis of the sparsification of a particular decomposition rule, Λ∗. This rule splits an interval of RNA secondary and pseudoknot structures of fixed topological genus. Key for quantifying sparsifications is the size of the so-called candidate sets. Here we assume mfe-structures to be specifically distributed (see Assumption 1) within arbitrary and irreducible RNA secondary and pseudoknot structures of fixed topological genus. We then present a combinatorial framework which allows us, by means of probabilities of irreducible sub-structures, to obtain the expectation of the Λ∗-candidate set w.r.t. a uniformly random input sequence. We compute these expectations for arc-based energy models via energy-filtered generating functions (GF) in the case of RNA secondary structures as well as RNA pseudoknot structures. Furthermore, for RNA secondary structures we also analyze a simplified loop-based energy model. Our combinatorial analysis is then compared to the expected number of Λ∗-candidates obtained from folding mfe-structures. In the case of the mfe-folding of RNA secondary structures with a simplified loop-based energy model, our results imply that sparsification provides a significant, constant improvement of 91% (theory) compared to a 96% (experimental, simplified arc-based model) reduction. However, we do not observe a linear factor improvement. Finally, in the case of the "full" loop-energy model we can report a reduction of 98% (experiment). Sparsification was initially credited with a linear-factor improvement. This conclusion was based on the so-called polymer-zeta property, which stems from interpreting polymer chains as self-avoiding walks. Subsequent findings, however, reveal that the O(n) improvement is not correct. The combinatorial analysis presented here shows that, assuming a specific distribution (see Assumption 1) of mfe-structures within irreducible and arbitrary structures, the expected number of Λ∗-candidates is Θ(n²). However, the constant reduction is quite significant, being in the range of 96%. We furthermore show an analogous result for the sparsification of the Λ∗-decomposition rule for RNA pseudoknotted structures of genus one. Finally we observe that the effect of sparsification is sensitive to the employed energy model.
Non-linear analytic and coanalytic problems ( L_p-theory, Clifford analysis, examples)
NASA Astrophysics Data System (ADS)
Dubinskii, Yu A.; Osipenko, A. S.
2000-02-01
Two kinds of new mathematical model of variational type are put forward: non-linear analytic and coanalytic problems. The formulation of these non-linear boundary-value problems is based on a decomposition of the complete scale of Sobolev spaces into the "orthogonal" sum of analytic and coanalytic subspaces. A similar decomposition is considered in the framework of Clifford analysis. Explicit examples are presented.
Thermal decomposition behavior of nano/micro bimodal feedstock with different solids loading
NASA Astrophysics Data System (ADS)
Oh, Joo Won; Lee, Won Sik; Park, Seong Jin
2018-01-01
Debinding is one of the most critical processes in powder injection molding. Parts are vulnerable to defect formation during debinding, and the long processing time of debinding decreases the production rate of the whole process. In order to determine the optimal conditions for the debinding process, the decomposition behavior of the feedstock should be understood. Since nano powder affects the decomposition behavior of the feedstock, its effect needs to be investigated for nano/micro bimodal feedstocks. In this research, the effect of nano powder on the decomposition behavior of nano/micro bimodal feedstock has been studied. Bimodal powders were fabricated with different ratios of nano powder, and the critical solids loading of each powder was measured by torque rheometer. Three feedstocks were fabricated for each powder, depending on the solids loading condition. Thermogravimetric analysis (TGA) was carried out to analyze the thermal decomposition behavior of the feedstocks, and the decomposition activation energy was calculated. The results indicated that nano powder had a limited effect on feedstocks with solids loadings below the optimal range, whereas it strongly influenced the decomposition behavior at the optimal solids loading by causing polymer chain scission through the resulting high viscosity.
Buchanan, Piers; Soper, Alan K; Thompson, Helen; Westacott, Robin E; Creek, Jefferson L; Hobson, Greg; Koh, Carolyn A
2005-10-22
Neutron diffraction with HD isotope substitution has been used to study the formation and decomposition of the methane clathrate hydrate. Using this atomistic technique coupled with simultaneous gas consumption measurements, we have successfully tracked the formation of the sI methane hydrate from a water/gas mixture and then the subsequent decomposition of the hydrate from initiation to completion. These studies demonstrate that the application of neutron diffraction with simultaneous gas consumption measurements provides a powerful method for studying the clathrate hydrate crystal growth and decomposition. We have also used neutron diffraction to examine the water structure before the hydrate growth and after the hydrate decomposition. From the neutron-scattering curves and the empirical potential structure refinement analysis of the data, we find that there is no significant difference between the structure of water before the hydrate formation and the structure of water after the hydrate decomposition. Nor is there any significant change to the methane hydration shell. These results are discussed in the context of widely held views on the existence of memory effects after the hydrate decomposition.
Computer-Assisted Traffic Engineering Using Assignment, Optimal Signal Setting, and Modal Split
DOT National Transportation Integrated Search
1978-05-01
Methods of traffic assignment, traffic signal setting, and modal split analysis are combined in a set of computer-assisted traffic engineering programs. The system optimization and user optimization traffic assignments are described. Travel time func...
NASA Astrophysics Data System (ADS)
Sugiura, Shinji; Ikeda, Hiroshi
2014-03-01
The decomposition of vertebrate carcasses is an important ecosystem function. Soft tissues of dead vertebrates are rapidly decomposed by diverse animals. However, decomposition of hard tissues such as hairs and feathers is much slower because only a few animals can digest keratin, a protein that is concentrated in hairs and feathers. Although beetles of the family Trogidae are considered keratin feeders, their ecological function has rarely been explored. Here, we investigated the keratin-decomposition function of trogid beetles in heron-breeding colonies where keratin was frequently supplied as feathers. Three trogid species were collected from the colonies and observed feeding on heron feathers under laboratory conditions. We also measured the nitrogen (δ15N) and carbon (δ13C) stable isotope ratios of two trogid species that were maintained on a constant diet (feathers from one heron individual) during 70 days under laboratory conditions. We compared the isotopic signatures of the trogids with the feathers to investigate isotopic shifts from the feathers to the consumers for δ15N and δ13C. We used mixing models (MixSIR and SIAR) to estimate the main diets of individual field-collected trogid beetles. The analysis indicated that heron feathers were more important as food for trogid beetles than were soft tissues under field conditions. Together, the feeding experiment and stable isotope analysis provided strong evidence of keratin decomposition by trogid beetles.
Melching, C.S.; Coupe, R.H.
1995-01-01
During water years 1985-91, the U.S. Geological Survey (USGS) and the Illinois Environmental Protection Agency (IEPA) cooperated in the collection and analysis of concurrent and split stream-water samples from selected sites in Illinois. Concurrent samples were collected independently by field personnel from each agency at the same time and sent to the IEPA laboratory, whereas the split samples were collected by USGS field personnel and divided into aliquots that were sent to each agency's laboratory for analysis. The water-quality data from these programs were examined by means of the Wilcoxon signed ranks test to identify statistically significant differences between results of the USGS and IEPA analyses. The data sets for constituents and properties identified by the Wilcoxon test as having significant differences were further examined by use of the paired t-test, mean relative percentage difference, and scattergrams to determine if the differences were important. Of the 63 constituents and properties in the concurrent-sample analysis, differences in only 2 (pH and ammonia) were statistically significant and large enough to concern water-quality engineers and planners. Of the 27 constituents and properties in the split-sample analysis, differences in 9 (turbidity, dissolved potassium, ammonia, total phosphorus, dissolved aluminum, dissolved barium, dissolved iron, dissolved manganese, and dissolved nickel) were statistically significant and large enough to concern water-quality engineers and planners. The differences in concentration between pairs of concurrent samples were compared to the precision of the laboratory or field method used, and the differences in concentration between pairs of split samples were compared to the precision of the laboratory method used and the interlaboratory precision of measuring a given concentration or property. Consideration of method precision indicated that differences between concurrent samples were insignificant for all concentrations and properties except pH, and that differences between split samples were significant for all concentrations and properties. Consideration of interlaboratory precision indicated that the differences between the split samples were not unusually large. The results for the split samples illustrate the difficulty in obtaining comparable and accurate water-quality data.
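The paired comparison described here can be reproduced in outline with standard statistical routines. The Python sketch below applies the Wilcoxon signed ranks test, the paired t-test and the mean relative percentage difference to one constituent; the concentrations are invented for illustration and are not the USGS/IEPA data:

import numpy as np
from scipy import stats

# hypothetical paired results for one constituent, analyzed by two laboratories
lab_a = np.array([0.12, 0.45, 0.30, 0.22, 0.18, 0.40, 0.27, 0.33])
lab_b = np.array([0.15, 0.44, 0.35, 0.25, 0.19, 0.47, 0.30, 0.36])

w_stat, w_p = stats.wilcoxon(lab_a, lab_b)        # Wilcoxon signed ranks test
t_stat, t_p = stats.ttest_rel(lab_a, lab_b)       # paired t-test as a follow-up
mrpd = 100.0 * np.mean((lab_a - lab_b) / ((lab_a + lab_b) / 2.0))  # mean relative % difference
print(w_p, t_p, mrpd)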
Application of particle splitting method for both hydrostatic and hydrodynamic cases in SPH
NASA Astrophysics Data System (ADS)
Liu, W. T.; Sun, P. N.; Ming, F. R.; Zhang, A. M.
2018-01-01
The smoothed particle hydrodynamics (SPH) method with numerical diffusive terms shows satisfactory stability and accuracy in some violent fluid-solid interaction problems. However, in most simulations, uniform particle distributions are used, and multi-resolution, which can markedly improve the local accuracy and the overall computational efficiency, has seldom been applied. In this paper, a dynamic particle splitting method is applied that allows for the simulation of both hydrostatic and hydrodynamic problems. In the splitting algorithm, when a coarse (mother) particle enters the splitting region, it is split into four daughter particles, which inherit the physical parameters of the mother particle. In the particle splitting process, conservation of mass, momentum and energy is ensured. Based on the error analysis, the splitting technique is designed to allow optimal accuracy at the interface between the coarse and refined particles, which is particularly important in the simulation of hydrostatic cases. Finally, the scheme is validated by five basic cases, which demonstrate that the present SPH model with the particle splitting technique is of high accuracy and efficiency and is capable of simulating a wide range of hydrodynamic problems.
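The four-daughter splitting step can be sketched as follows. The Python function below conserves mass and momentum exactly by construction; the daughter spacing factor and smoothing-length ratio are illustrative assumptions, not values taken from the paper:

import numpy as np

def split_particle(pos, vel, mass, h, eps=0.35):
    # Split one 2-D mother particle into four daughters placed on a square of
    # half-width eps*h around the mother position. Each daughter carries a
    # quarter of the mass and the mother's velocity, so total mass and
    # momentum are conserved; eps and the 0.6 smoothing-length ratio below
    # are illustrative choices.
    offsets = eps * h * np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
    d_pos = pos + offsets
    d_vel = np.tile(vel, (4, 1))
    d_mass = np.full(4, mass / 4.0)
    d_h = np.full(4, 0.6 * h)
    return d_pos, d_vel, d_mass, d_h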
1974-06-17
Report excerpts (contents and body fragments): Burning Rate Modifiers, D.R. Dillehay; Spectroscopic Analysis of Azide Decomposition Products for use... Body fragments: "...that they ignite a short distance from the surface. Furthermore, decomposition of sodium nitrate, which produces the gas to blow the..."; "...decreasing the thermal conductivity of the basic binary. Class 2 compounds, consisting of manganese oxides, catalyze the normal decomposition of..."
Circular Mixture Modeling of Color Distribution for Blind Stain Separation in Pathology Images.
Li, Xingyu; Plataniotis, Konstantinos N
2017-01-01
In digital pathology, to address color variation and histological component colocalization in pathology images, stain decomposition is usually performed preceding spectral normalization and tissue component segmentation. This paper examines the problem of stain decomposition, which is naturally a nonnegative matrix factorization (NMF) problem in algebra, and introduces a systematic and analytical solution consisting of a circular color analysis module and an NMF-based computation module. Unlike the paradigm of existing stain decomposition algorithms, where stain proportions are computed from estimated stain spectra using a matrix inverse operation directly, the introduced solution estimates stain spectra and stain depths individually via probabilistic reasoning. Since the proposed method pays extra attention to achromatic pixels in color analysis and to stain co-occurrence in pixel clustering, it achieves consistent and reliable stain decomposition with minimal decomposition residue. In particular, aware of the periodic and angular nature of hue, we propose the use of a circular von Mises mixture model to analyze the hue distribution, and provide a complete color-based pixel soft-clustering solution to address color mixing introduced by stain overlap. This innovation, combined with saturation-weighted computation, makes our study effective for weak stains and broad-spectrum stains. Extensive experimentation on multiple public pathology datasets suggests that our approach outperforms state-of-the-art blind stain separation methods in terms of decomposition effectiveness.
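A minimal sketch of the circular color-analysis step is given below: an EM fit of a k-component von Mises mixture to pixel hue angles, with per-pixel saturation weights so that achromatic pixels contribute little. This is a generic Python illustration, not the authors' algorithm; the initialization, component count, iteration count and concentration estimator are assumptions:

import numpy as np
from scipy.special import i0

def vonmises_pdf(theta, mu, kappa):
    return np.exp(kappa * np.cos(theta - mu)) / (2.0 * np.pi * i0(kappa))

def fit_vonmises_mixture(theta, weights, k=3, n_iter=100):
    # theta: hue angles in radians, weights: per-pixel saturation weights
    rng = np.random.default_rng(0)
    mu = rng.uniform(-np.pi, np.pi, k)
    kappa = np.full(k, 2.0)
    pi_k = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: component responsibilities for every pixel
        dens = np.stack([pi_k[j] * vonmises_pdf(theta, mu[j], kappa[j]) for j in range(k)])
        resp = dens / (dens.sum(axis=0, keepdims=True) + 1e-300)
        # M-step: saturation-weighted circular mean and concentration
        for j in range(k):
            w = resp[j] * weights
            C, S = np.sum(w * np.cos(theta)), np.sum(w * np.sin(theta))
            mu[j] = np.arctan2(S, C)
            Rbar = np.sqrt(C ** 2 + S ** 2) / (np.sum(w) + 1e-300)
            kappa[j] = Rbar * (2.0 - Rbar ** 2) / (1.0 - Rbar ** 2 + 1e-12)
            pi_k[j] = np.sum(resp[j] * weights) / np.sum(weights)
    return mu, kappa, pi_k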
Data analysis using a combination of independent component analysis and empirical mode decomposition
NASA Astrophysics Data System (ADS)
Lin, Shih-Lin; Tung, Pi-Cheng; Huang, Norden E.
2009-06-01
A combination of independent component analysis and empirical mode decomposition (ICA-EMD) is proposed in this paper to analyze low signal-to-noise ratio data. The advantages of the ICA-EMD combination are that ICA needs only a few sensory clues to separate the original source from unwanted noise, and that EMD can effectively separate the data into its constituent parts. The case studies reported here involve original sources contaminated by white Gaussian noise. The simulation results show that the ICA-EMD combination is an effective data analysis tool.
A New View of Earthquake Ground Motion Data: The Hilbert Spectral Analysis
NASA Technical Reports Server (NTRS)
Huang, Norden; Busalacchi, Antonio J. (Technical Monitor)
2000-01-01
A brief description of the newly developed Empirical Mode Decomposition (EMD) and Hilbert Spectral Analysis (HSA) method will be given. The decomposition is adaptive and can be applied to both nonlinear and nonstationary data. An example of the method applied to a sample earthquake record will be given. The results indicate that low frequency components, totally missed by the Fourier analysis, are clearly identified by the new method. Comparisons with wavelet and windowed Fourier analysis show that the new method offers much better temporal and frequency resolutions.
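The HSA step can be sketched as follows, assuming the intrinsic mode functions (IMFs) have already been obtained from an EMD implementation. The Python code below computes instantaneous amplitude and frequency from the analytic signal and is an illustration of the idea only:

import numpy as np
from scipy.signal import hilbert

def hilbert_spectrum(imfs, dt):
    # imfs: array of shape (n_imf, n_samples) from an EMD step
    # dt: sampling interval in seconds
    analytic = hilbert(imfs, axis=-1)
    amplitude = np.abs(analytic)                               # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic), axis=-1)
    inst_freq = np.gradient(phase, dt, axis=-1) / (2.0 * np.pi)  # instantaneous frequency, Hz
    return amplitude, inst_freq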
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheng, Qin, E-mail: Qin_Sheng@baylor.edu; Sun, Hai-wei, E-mail: hsun@umac.mo
This study concerns the asymptotic stability of an eikonal, or ray, transformation based Peaceman–Rachford splitting method for solving the paraxial Helmholtz equation with high wave numbers. Arbitrary nonuniform grids are considered in transverse and beam propagation directions. The differential equation targeted has been used for modeling propagations of high intensity laser pulses over a long distance without diffractions. Self-focusing of high intensity beams may be balanced with the de-focusing effect of the created ionized plasma channel in this situation, and applications of grid adaptations are frequently essential. It is shown rigorously that the fully discretized oscillation-free decomposition method on arbitrary adaptive grids is asymptotically stable with a stability index of one. Simulation experiments are carried out to illustrate our concern and conclusions.
Factor levels for density comparisons in the split-block spacing design
Kurt H. Riitters; Brian J. Stanton; Robbert H. Walkup
1989-01-01
The split-block spacing design is a compact test of the effects of within-row and between-row spacings. But the sometimes awkward analysis of density (i.e., trees/ha) effects may deter use of the design. The analysis is simpler if the row spacings are chosen to obtain a balanced set of equally spaced density and rectangularity treatments. A spacing study in poplar (...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Jin-jian; Yancheng Teachers College, Yancheng 224002; Liu, Zu-Liang, E-mail: liuzl@mail.njust.edu.cn
2013-04-15
An energetic lead(II) coordination polymer based on the ligand ANPyO has been synthesized and its crystal structure determined. The polymer was characterized by FT-IR spectroscopy, elemental analysis, DSC and TG-DTG techniques. Thermal analysis shows that there are one endothermic process and two exothermic decomposition stages in the temperature range of 50–600 °C, with final residues of 57.09%. The non-isothermal kinetics of the main exothermic decomposition have also been studied using the Kissinger and Ozawa–Doyle methods; the apparent activation energy is calculated as 195.2 kJ/mol. Furthermore, DSC measurements show that the polymer has a significant catalytic effect on the thermal decomposition of ammonium perchlorate. - Graphical abstract: An energetic lead(II) coordination polymer of ANPyO has been synthesized, structurally characterized and its properties tested. Highlights: ► We have synthesized and characterized an energetic lead(II) coordination polymer. ► We have measured its molecular structure and thermal decomposition. ► It has a significant catalytic effect on the thermal decomposition of AP.
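The Kissinger step mentioned above amounts to a linear fit of ln(β/Tp²) against 1/Tp over several heating rates, with slope -Ea/R. A short Python sketch with invented heating rates and peak temperatures (not the data of this study) is:

import numpy as np

R = 8.314  # J/(mol K)

def kissinger_activation_energy(beta, T_peak):
    # beta: heating rates; T_peak: DSC exothermic peak temperatures (K)
    y = np.log(np.asarray(beta) / np.asarray(T_peak) ** 2)
    x = 1.0 / np.asarray(T_peak)
    slope, _ = np.polyfit(x, y, 1)       # ln(beta/Tp^2) = const - Ea/(R*Tp)
    return -slope * R                    # apparent activation energy, J/mol

print(kissinger_activation_energy([5, 10, 15, 20], [503.0, 511.0, 516.0, 520.0]))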
Veit, M J; Arras, R; Ramshaw, B J; Pentcheva, R; Suzuki, Y
2018-04-13
The manipulation of the spin degrees of freedom in a solid has been of fundamental and technological interest recently for developing high-speed, low-power computational devices. There has been much work focused on developing highly spin-polarized materials and understanding their behavior when incorporated into so-called spintronic devices. These devices usually require spin splitting with magnetic fields. However, there is another promising strategy to achieve spin splitting using spatial symmetry breaking without the use of a magnetic field, known as Rashba-type splitting. Here we report evidence for a giant Rashba-type splitting at the interface of LaTiO3 and SrTiO3. Analysis of the magnetotransport reveals anisotropic magnetoresistance, weak anti-localization and quantum oscillation behavior consistent with a large Rashba-type splitting. It is surprising to find a large Rashba-type splitting in 3d transition metal oxide-based systems such as the LaTiO3/SrTiO3 interface, but it is promising for the development of a new kind of oxide-based spintronics.
NASA Astrophysics Data System (ADS)
Fujii, Hidemichi; Okamoto, Shunsuke; Kagawa, Shigemi; Managi, Shunsuke
2017-12-01
This study investigated the changes in the toxicity of chemical emissions from the US industrial sector over the 1998-2009 period. Specifically, we employed a multiregional input-output analysis framework and integrated a supply-side index decomposition analysis (IDA) with a demand-side structural decomposition analysis (SDA) to clarify the main drivers of changes in the toxicity of production- and consumption-based chemical emissions. The results showed that toxic emissions from the US industrial sector decreased by 83% over the studied period because of pollution abatement efforts adopted by US industries. A variety of pollution abatement efforts were used by different industries, and cleaner production in the mining sector and the use of alternative materials in the manufacture of transportation equipment represented the most important efforts.
A Three-way Decomposition of a Total Effect into Direct, Indirect, and Interactive Effects
VanderWeele, Tyler J.
2013-01-01
Recent theory in causal inference has provided concepts for mediation analysis and effect decomposition that allow one to decompose a total effect into a direct and an indirect effect. Here, it is shown that what is often taken as an indirect effect can in fact be further decomposed into a “pure” indirect effect and a mediated interactive effect, thus yielding a three-way decomposition of a total effect (direct, indirect, and interactive). This three-way decomposition applies to difference scales and also to additive ratio scales and additive hazard scales. Assumptions needed for the identification of each of these three effects are discussed and simple formulae are given for each when regression models allowing for interaction are used. The three-way decomposition is illustrated by examples from genetic and perinatal epidemiology, and discussion is given to what is gained over the traditional two-way decomposition into simply a direct and an indirect effect. PMID:23354283
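For linear outcome and mediator regressions allowing an exposure-mediator interaction, the simple formulae referred to in the abstract take the following form (a sketch for exposure levels a = 1 versus a* = 0, conditional on covariates c; the notation is assumed here rather than quoted from the article):

E[Y \mid a, m, c] = \theta_0 + \theta_1 a + \theta_2 m + \theta_3 a m + \theta_4' c, \qquad E[M \mid a, c] = \beta_0 + \beta_1 a + \beta_2' c

\text{pure direct effect} = \theta_1 + \theta_3(\beta_0 + \beta_2' c), \qquad \text{pure indirect effect} = \theta_2 \beta_1, \qquad \text{mediated interactive effect} = \theta_3 \beta_1

and the total effect is the sum of these three components.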
Stokes, Kathryn L; Forbes, Shari L; Tibbett, Mark
2013-05-01
Taphonomic studies regularly employ animal analogues for human decomposition due to ethical restrictions relating to the use of human tissue. However, the validity of using animal analogues in soil decomposition studies is still questioned. This study compared the decomposition of skeletal muscle tissues (SMTs) from human (Homo sapiens), pork (Sus scrofa), beef (Bos taurus), and lamb (Ovis aries) interred in soil microcosms. Fixed interval samples were collected from the SMT for microbial activity and mass tissue loss determination; samples were also taken from the underlying soil for pH, electrical conductivity, and nutrient (potassium, phosphate, ammonium, and nitrate) analysis. The overall patterns of nutrient fluxes and chemical changes in nonhuman SMT and the underlying soil followed that of human SMT. Ovine tissue was the most similar to human tissue in many of the measured parameters. Although no single analogue was a precise predictor of human decomposition in soil, all models offered close approximations in decomposition dynamics. © 2013 American Academy of Forensic Sciences.
NASA Astrophysics Data System (ADS)
Pradeep, Krishna; Poiroux, Thierry; Scheer, Patrick; Juge, André; Gouget, Gilles; Ghibaudo, Gérard
2018-07-01
This work details the analysis of wafer level global process variability in 28 nm FD-SOI using split C-V measurements. The proposed approach initially evaluates the native on wafer process variability using efficient extraction methods on split C-V measurements. The on-wafer threshold voltage (VT) variability is first studied and modeled using a simple analytical model. Then, a statistical model based on the Leti-UTSOI compact model is proposed to describe the total C-V variability in different bias conditions. This statistical model is finally used to study the contribution of each process parameter to the total C-V variability.
Gas Evolution Dynamics in Godunov-Type Schemes and Analysis of Numerical Shock Instability
NASA Technical Reports Server (NTRS)
Xu, Kun
1999-01-01
In this paper we study the gas evolution dynamics of the exact and approximate Riemann solvers, e.g., the Flux Vector Splitting (FVS) and the Flux Difference Splitting (FDS) schemes. Since the FVS scheme and the Kinetic Flux Vector Splitting (KFVS) scheme have the same physical mechanism and similar flux functions, the weaknesses and advantages of the FVS scheme are examined closely through an analysis of the discretized KFVS scheme. The subtle dissipative mechanism of the Godunov method in the 2D case is also analyzed, and the physical reason for shock instability, i.e., carbuncle phenomena and odd-even decoupling, is presented.
Data-driven process decomposition and robust online distributed modelling for large-scale processes
NASA Astrophysics Data System (ADS)
Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou
2018-02-01
With the increasing attention to networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned by the affinity propagation clustering algorithm into several clusters, each of which can be regarded as a subsystem. Then the inputs of each subsystem are selected by offline canonical correlation analysis between all process variables and its controlled variables. Process decomposition is thus realised after the screening of input and output variables. Once the system decomposition is finished, online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
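The two-stage decomposition described here, clustering of controlled variables followed by input selection, can be sketched with scikit-learn. The data, the affinity choice for clustering and the input cutoff below are illustrative assumptions and not the paper's settings:

import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.cross_decomposition import CCA

# hypothetical plant data: X process (input) variables, Y controlled variables
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))
Y = X[:, :4] @ rng.normal(size=(4, 6)) + 0.1 * rng.normal(size=(500, 6))

# 1) partition controlled variables into subsystems by clustering their correlation pattern
ap = AffinityPropagation(random_state=0).fit(np.corrcoef(Y.T))
subsystems = {k: np.where(ap.labels_ == k)[0] for k in set(ap.labels_)}

# 2) rank candidate inputs for each subsystem by canonical correlation with its outputs
for k, cols in subsystems.items():
    cca = CCA(n_components=1).fit(X, Y[:, cols])
    loadings = np.abs(cca.x_weights_[:, 0])        # contribution of each input variable
    selected = np.argsort(loadings)[::-1][:4]      # keep the strongest inputs (cutoff assumed)
    print("subsystem", k, "outputs", cols.tolist(), "inputs", selected.tolist())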
NASA Technical Reports Server (NTRS)
Shuen, Jian-Shun; Liou, Meng-Sing; Van Leer, Bram
1989-01-01
The extension of the known flux-vector and flux-difference splittings to real gases via rigorous mathematical procedures is demonstrated. Formulations of both equilibrium and finite-rate chemistry for real-gas flows are described, with emphasis on derivations of finite-rate chemistry. Split-flux formulas from other authors are examined. A second-order upwind-based TVD scheme is adopted to eliminate oscillations and to obtain a sharp representation of discontinuities.
2018-06-01
Decomposition products from bis-(2-chloroethyl) sulfide (HD). These data were measured using an ASTM International method that is based on differential... The source and purity of the materials studied are listed in Table 1 (Sample Information for Title Compounds).
On the time-splitting scheme used in the Princeton Ocean Model
NASA Astrophysics Data System (ADS)
Kamenkovich, V. M.; Nechaev, D. A.
2009-05-01
The analysis of the time-splitting procedure implemented in the Princeton Ocean Model (POM) is presented. The time-splitting procedure uses different time steps to describe the evolution of interacting fast and slow propagating modes. In the general case the exact separation of the fast and slow modes is not possible. The main idea of the analyzed procedure is to split the system of primitive equations into two systems of equations for interacting external and internal modes. By definition, the internal mode varies slowly and the crux of the problem is to determine the proper filter, which excludes the fast component of the external mode variables in the relevant equations. The objective of this paper is to examine properties of the POM time-splitting procedure applied to equations governing the simplest linear non-rotating two-layer model of constant depth. The simplicity of the model makes it possible to study these properties analytically. First, the time-split system of differential equations is examined for two types of the determination of the slow component based on an asymptotic approach or time-averaging. Second, the differential-difference scheme is developed and some criteria of its stability are discussed for centered, forward, or backward time-averaging of the external mode variables. Finally, the stability of the POM time-splitting schemes with centered and forward time-averaging is analyzed. The effect of the Asselin filter on solutions of the considered schemes is studied. It is assumed that questions arising in the analysis of the simplest model are inherent in the general model as well.
NASA Astrophysics Data System (ADS)
Dana, Saumik; Ganis, Benjamin; Wheeler, Mary F.
2018-01-01
In coupled flow and poromechanics phenomena representing hydrocarbon production or CO2 sequestration in deep subsurface reservoirs, the spatial domain in which fluid flow occurs is usually much smaller than the spatial domain over which significant deformation occurs. The typical approach is to either impose an overburden pressure directly on the reservoir thus treating it as a coupled problem domain or to model flow on a huge domain with zero permeability cells to mimic the no flow boundary condition on the interface of the reservoir and the surrounding rock. The former approach precludes a study of land subsidence or uplift and further does not mimic the true effect of the overburden on stress sensitive reservoirs whereas the latter approach has huge computational costs. In order to address these challenges, we augment the fixed-stress split iterative scheme with upscaling and downscaling operators to enable modeling flow and mechanics on overlapping nonmatching hexahedral grids. Flow is solved on a finer mesh using a multipoint flux mixed finite element method and mechanics is solved on a coarse mesh using a conforming Galerkin method. The multiscale operators are constructed using a procedure that involves singular value decompositions, a surface intersections algorithm and Delaunay triangulations. We numerically demonstrate the convergence of the augmented scheme using the classical Mandel's problem solution.
NASA Astrophysics Data System (ADS)
Bonnin, Mickaël; Chevrot, Sébastien; Gaudot, Ianis; Haugmard, Méric
2017-08-01
We performed shear wave splitting analysis on 203 permanent (French RLPB, CEA and Catalonian networks) and temporary (PyrOPE and IberArray experiments) broad-band stations around the Pyrenees. These measurements considerably enhance the spatial resolution and coverage of seismic anisotropy in that region. In particular, we characterize with different shear wave splitting analysis methods the small-scale variations of splitting parameters ϕ and δt along three dense transects crossing the western and central Pyrenees with an interstation spacing of about 7 km. While we find a relatively coherent seismic anisotropy pattern in the Pyrenean domain, we observe abrupt changes of splitting parameters in the Aquitaine Basin and delay times along the Pyrenees. We moreover observe coherent fast directions despite complex lithospheric structures in Iberia and the Massif Central. This suggests that two main sources of anisotropy are required to interpret seismic anisotropy in this region: (i) lithospheric fabrics in the Aquitaine Basin (probably frozen-in Hercynian anisotropy) and in the Pyrenees (early and late Pyrenean dynamics); (ii) asthenospheric mantle flow beneath the entire region (imprint of the western Mediterranean dynamics since the Oligocene).
NASA Astrophysics Data System (ADS)
Bonnin, M. J. A.; Chevrot, S.; Gaudot, I.; Haugmard, M.
2017-12-01
We performed shear wave splitting analysis on 203 permanent (French RLPB, CEA and Catalonian networks) and temporary (PYROPE and IberArray experiments) broad-band stations around the Pyrenees. These measurements considerably enhance the spatial resolution and coverage of seismic anisotropy in that region. In particular, we characterize with different shear wave splitting analysis methods the small-scale variations of splitting parameters φ and δt along three dense transects crossing the western and central Pyrenees with an interstation spacing of about 7 km. While we find a relatively coherent seismic anisotropy pattern in the Pyrenean domain, we observe abrupt changes of splitting parameters in the Aquitaine Basin and delay times along the Pyrenees. We moreover observe coherent fast directions despite complex lithospheric structures in Iberia and the Massif Central. This suggests that two main sources of anisotropy are required to interpret seismic anisotropy in this region: (i) lithospheric fabrics in the Aquitaine Basin (probably frozen-in Hercynian anisotropy) and in the Pyrenees (early and late Pyrenean dynamics); (ii) asthenospheric mantle flow beneath the entire region (imprint of the western Mediterranean dynamics since the Oligocene).
Shear-wave splitting and moonquakes
NASA Astrophysics Data System (ADS)
Dimech, J. L.; Weber, R. C.; Savage, M. K.
2017-12-01
Shear-wave splitting is a powerful tool for measuring anisotropy in the Earth's crust and mantle, and is sensitive to geological features such as fluid filled cracks, thin alternating layers of rock with different elastic properties, and preferred mineral orientations caused by strain. Since a shear wave splitting measurement requires only a single 3-component seismic station, it has potential applications for future single-station planetary seismic missions, such as the InSight geophysical mission to Mars, as well as possible future missions to Europa and the Moon. Here we present a preliminary shear-wave splitting analysis of moonquakes detected by the Apollo Passive Seismic Experiment. Lunar seismic data suffers from several drawbacks compared to modern terrestrial data, including severe seismic scattering, low intrinsic attenuation, 10-bit data resolution, thermal spikes, and timing errors. Despite these drawbacks, we show that it is in principle possible to make a shear wave splitting measurement using the S-phase arrival of a relatively high-quality moonquake, as determined by several agreeing measurement criteria. Encouraged by this finding, we further extend our analysis to clusters of "deep moonquake" events by stacking multiple events from the same cluster together to further enhance the quality of the S-phase arrivals that the measurement is based on.
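A single-station splitting measurement of the kind described here is commonly obtained by a grid search over trial fast directions and delay times (Silver & Chan style). The Python sketch below uses a simplified eigenvalue-minimization variant with illustrative parameter ranges; it is not the workflow applied to the Apollo data:

import numpy as np

def splitting_grid_search(north, east, dt_sample, max_delay=4.0):
    # Search for the fast azimuth (degrees) and delay time (s) that minimize
    # the smaller eigenvalue of the covariance of the corrected horizontal
    # particle motion; np.roll is a crude stand-in for a windowed time shift.
    best_phi, best_dt, best_lam2 = None, None, np.inf
    for phi_deg in np.arange(0.0, 180.0, 1.0):
        phi = np.radians(phi_deg)
        fast = north * np.cos(phi) + east * np.sin(phi)
        slow = -north * np.sin(phi) + east * np.cos(phi)
        for n in range(1, int(max_delay / dt_sample)):
            corrected = np.vstack([fast, np.roll(slow, -n)])    # undo the trial delay
            lam2 = np.linalg.eigvalsh(np.cov(corrected))[0]     # smaller eigenvalue
            if lam2 < best_lam2:
                best_phi, best_dt, best_lam2 = phi_deg, n * dt_sample, lam2
    return best_phi, best_dt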
NASA Astrophysics Data System (ADS)
Voit, E. I.; Didenko, N. A.; Gaivoronskaya, K. A.
2018-03-01
Thermal decomposition of (NH4)2ZrF6 resulting in ZrO2 formation within the temperature range of 20°-750°C has been investigated by means of thermal and X-ray diffraction analysis and IR and Raman spectroscopy. It has been established that thermolysis proceeds in six stages. The vibrational-spectroscopy data for the intermediate products of thermal decomposition have been obtained, systematized, and summarized.
Arvand, Mardjan; Feil, Edward J.; Giladi, Michael; Boulouis, Henri-Jean; Viezens, Juliane
2007-01-01
Bartonella henselae is a zoonotic pathogen and the causative agent of cat scratch disease and a variety of other disease manifestations in humans. Previous investigations have suggested that a limited subset of B. henselae isolates may be associated with human disease. In the present study, 182 human and feline B. henselae isolates from Europe, North America and Australia were analysed by multi-locus sequence typing (MLST) to detect any associations between sequence type (ST), host species and geographical distribution of the isolates. A total of 14 sequence types were detected, but over 66% (16/24) of the isolates recovered from human disease corresponded to a single genotype, ST1, and this type was detected in all three continents. In contrast, 27.2% (43/158) of the feline isolates corresponded to ST7, but this ST was not recovered from humans and was restricted to Europe. The difference in host association of STs 1 (human) and 7 (feline) was statistically significant (P≤0.001). eBURST analysis assigned the 14 STs to three clonal lineages, which contained two or more STs, and a singleton comprising ST7. These groups were broadly consistent with a neighbour-joining tree, although splits decomposition analysis was indicative of a history of recombination. These data indicate that B. henselae lineages differ in their virulence properties for humans and contribute to a better understanding of the population structure of B. henselae. PMID:18094753
Wavelet decomposition based principal component analysis for face recognition using MATLAB
NASA Astrophysics Data System (ADS)
Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish
2016-03-01
For the realization of face recognition systems in the static as well as the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. Face recognition here means identifying a person from his or her facial gestures, and it resembles factor analysis in the sense of extracting the principal components of an image. Principal component analysis is subject to some drawbacks, mainly its poor discriminatory power and the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial images in both the spatial and frequency domains. From the experimental results, it is envisaged that this face recognition method gives a significant percentage improvement in recognition rate as well as better computational efficiency.
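The study's implementation is in MATLAB; an equivalent Python sketch of the wavelet-then-PCA pipeline (using pywt and scikit-learn, with an assumed nearest-neighbour classifier, an assumed Haar wavelet and illustrative parameters) is:

import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def wavelet_features(img, wavelet="haar", level=2):
    # keep only the low-frequency (approximation) subband of a 2-D wavelet
    # decomposition as a reduced representation of the face image
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    return coeffs[0].ravel()

def train_and_classify(X_train, y_train, X_test, n_components=40):
    # X_train/X_test: iterables of grayscale face images (hypothetical data)
    F_train = np.array([wavelet_features(im) for im in X_train])
    F_test = np.array([wavelet_features(im) for im in X_test])
    pca = PCA(n_components=n_components).fit(F_train)
    clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(F_train), y_train)
    return clf.predict(pca.transform(F_test))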
Castada, Hardy Z; Wick, Cheryl; Taylor, Kaitlyn; Harper, W James
2014-04-01
Splits/cracks are recurring product defects that negatively affect the Swiss cheese industry. Investigations to understand the biophysicochemical aspects of these defects, and thus determine preventive measures against their occurrence, are underway. In this study, selected-ion, flow tube mass spectrometry was employed to determine the volatile organic compound (VOC) profiles present in the headspace of split compared with nonsplit cheeses. Two sampling methodologies were employed: split compared with nonsplit cheese vat pair blocks; and comparison of blind, eye, and split segments within cheese blocks. The variability in VOC profiles was examined to evaluate the potential biochemical pathway chemistry differences within and between cheese samples. VOC profile inhomogeneity was most evident in cheeses between factories. Evaluation of biochemical pathways leading to the formation of key VOCs differentiating the split from the blind and eye segments within factories indicated release of additional carbon dioxide by-product. These results suggest a factory-dependent cause of split formation that could develop from varied fermentation pathways in the blind, eye, and split areas within a cheese block. The variability of VOC profiles within and between factories exhibit varied biochemical fermentation pathways that could conceivably be traced back in the making process to identify parameters responsible for split defect. © 2014 Institute of Food Technologists®
NASA Astrophysics Data System (ADS)
Liberatore, Raffaele; Lanchi, Michela; Turchetti, Luca
2016-05-01
The Hybrid Sulfur (HyS) cycle is a water splitting process for hydrogen production powered by high temperature nuclear heat and electric power; among the numerous thermo-chemical and thermo-electro-chemical cycles proposed in the literature, this cycle is considered to have a particularly high potential also when powered by renewable energy. SOL2HY2 (Solar to Hydrogen Hybrid Cycles) is a 3-year research project co-funded by the Fuel Cells and Hydrogen Joint Undertaking (FCH JU). A significant part of the project activities is devoted to the analysis and optimization of the integration of the solar power plant with the chemical hydrogen production plant. This work reports part of the results obtained in this research activity. The analysis presented in this work builds on previous process simulations used to determine the energy requirements of the hydrogen production plant in terms of electric power, medium (<550°C) and high (>550°C) temperature heat. For the supply of medium temperature (MT) heat, a parabolic trough CSP (Concentrated Solar Power) plant using molten salts as heat transfer and storage medium is considered. A central receiver CSP plant is considered to provide high temperature (HT) heat, which is only needed for sulfuric acid decomposition. Finally, electric power is provided by a power block included in the MT solar plant and/or drawn from the grid, depending on the scenario considered. In particular, the analysis presented here focuses on the medium temperature CSP plant, possibly combined with a power block.
Three-dimensional multigrid algorithms for the flux-split Euler equations
NASA Technical Reports Server (NTRS)
Anderson, W. Kyle; Thomas, James L.; Whitfield, David L.
1988-01-01
The Full Approximation Scheme (FAS) multigrid method is applied to several implicit flux-split algorithms for solving the three-dimensional Euler equations in a body fitted coordinate system. Each of the splitting algorithms uses a variation of approximate factorization and is implemented in a finite volume formulation. The algorithms are all vectorizable with little or no scalar computation required. The flux vectors are split into upwind components using both the splittings of Steger-Warming and Van Leer. The stability and smoothing rate of each of the schemes are examined using a Fourier analysis of the complete system of equations. Results are presented for three-dimensional subsonic, transonic, and supersonic flows which demonstrate substantially improved convergence rates with the multigrid algorithm. The influence of using both a V-cycle and a W-cycle on the convergence is examined.
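As a concrete example of one of the splittings named above, the textbook Van Leer flux-vector splitting for the 1-D perfect-gas Euler equations can be sketched in Python as follows (an illustration only, not the paper's 3-D finite-volume implementation):

import numpy as np

def van_leer_split(rho, u, p, gamma=1.4):
    # Returns (F_plus, F_minus); their sum equals the physical flux vector.
    a = np.sqrt(gamma * p / rho)                         # sound speed
    M = u / a                                            # Mach number
    E = p / (gamma - 1.0) + 0.5 * rho * u ** 2           # total energy per unit volume
    F = np.array([rho * u, rho * u ** 2 + p, u * (E + p)])
    if M >= 1.0:                                         # fully supersonic to the right
        return F, np.zeros(3)
    if M <= -1.0:                                        # fully supersonic to the left
        return np.zeros(3), F
    def half(sign):                                      # subsonic split fluxes
        f_mass = sign * 0.25 * rho * a * (M + sign) ** 2
        vel = (gamma - 1.0) * u + sign * 2.0 * a
        return np.array([f_mass,
                         f_mass * vel / gamma,
                         f_mass * vel ** 2 / (2.0 * (gamma ** 2 - 1.0))])
    return half(1.0), half(-1.0)

print(van_leer_split(1.0, 100.0, 101325.0))              # illustrative state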
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Hara, Matthew J.; Kellogg, Cyndi M.; Parker, Cyrena M.
Ammonium bifluoride (ABF, NH4F·HF) is a well-known reagent for converting metal oxides to fluorides and for its applications in breaking down minerals and ores in order to extract useful components. It has been more recently applied to the decomposition of inorganic matrices prior to elemental analysis. Herein, a sample decomposition method that employs molten ABF sample treatment in the initial step is systematically evaluated across a range of inorganic sample types: glass, quartz, zircon, soil, and pitchblende ore. Method performance is evaluated across two variables: duration of molten ABF treatment and the ratio of ABF reagent mass to sample mass. The degree of solubilization of these sample classes is compared to the fluoride stoichiometry that is theoretically necessary to enact complete fluorination of the sample types. Finally, the sample decomposition method is performed on several soil and pitchblende ore standard reference materials, after which elemental constituent analysis is performed by ICP-OES and ICP-MS. Elemental recoveries are compared to the certified values; results indicate good to excellent recoveries across a range of alkaline earth, rare earth, transition metal, and actinide elements.
NASA Astrophysics Data System (ADS)
Chu, J. G.; Zhang, C.; Fu, G. T.; Li, Y.; Zhou, H. C.
2015-04-01
This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce the computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed problem decomposition dramatically reduces the computational demands required for attaining high quality approximations of optimal tradeoff relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed problem decomposition and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform problem decomposition when solving the complex multi-objective reservoir operation problems.
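A minimal sketch of the screening step, using the Saltelli pick-freeze estimator of first-order Sobol indices on a toy surrogate model rather than the reservoir operation models of the study; the function names, sample size, toy model, and the 0.05 screening threshold are illustrative assumptions.

```python
import numpy as np

def sobol_first_order(model, n_vars, n_samples=4096, seed=0):
    """First-order Sobol indices via the Saltelli pick-freeze estimator.
    `model` maps an (n, n_vars) array of inputs in [0, 1] to n scalar outputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_vars))
    B = rng.random((n_samples, n_vars))
    fA, fB = model(A), model(B)
    total_var = np.var(np.concatenate([fA, fB]), ddof=1)
    S = np.empty(n_vars)
    for i in range(n_vars):
        AB_i = A.copy()
        AB_i[:, i] = B[:, i]                      # freeze all columns except the i-th
        S[i] = np.mean(fB * (model(AB_i) - fA)) / total_var
    return S

# Toy surrogate of an operating rule: only 3 of 10 decision variables matter,
# so the remaining ones can be screened out before the expensive optimization.
def toy_model(x):
    return 5*x[:, 0] + 3*x[:, 1]**2 + 2*np.sin(2*np.pi*x[:, 2]) + 0.01*x[:, 3:].sum(axis=1)

S = sobol_first_order(toy_model, n_vars=10)
keep = np.flatnonzero(S > 0.05)                   # arbitrary screening threshold
print("first-order indices:", np.round(S, 3))
print("decision variables kept for the reduced problem:", keep)
```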
NASA Astrophysics Data System (ADS)
Sekiguchi, K.; Shirakawa, H.; Yamamoto, Y.; Araidai, M.; Kangawa, Y.; Kakimoto, K.; Shiraishi, K.
2017-06-01
We analyzed the decomposition mechanisms of trimethylgallium (TMG), used as the gallium source in GaN fabrication, based on first-principles calculations and thermodynamic analysis. Two conditions were considered: a total pressure of 1 atm, and the conditions of metal organic vapor phase epitaxy (MOVPE) growth of GaN. Our calculated results show that H2 is indispensable for TMG decomposition under both conditions. Under GaN MOVPE conditions, TMG reacts with H2 and spontaneously decomposes into Ga(CH3), which in turn decomposes into atomic Ga gas at temperatures above 440 K. From these calculations, we conclude that TMG is fully converted to atomic Ga gas near the GaN substrate surface.
NASA Astrophysics Data System (ADS)
Gu, Rongbao; Shao, Yanmin
2016-07-01
In this paper, a new concept of multi-scale singular value decomposition entropy based on DCCA cross-correlation analysis is proposed and its predictive power for the Dow Jones Industrial Average Index is studied. Using Granger causality analysis at different time scales, it is found that singular value decomposition entropy has predictive power for the Dow Jones Industrial Average Index over horizons of less than one month, but not beyond one month. This establishes how far ahead singular value decomposition entropy can predict the stock market, extending the result obtained by Caraiani (2014). The result also reflects an essential characteristic of the stock market as a chaotic dynamic system.
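A minimal sketch of how a singular value decomposition entropy of the kind discussed above can be computed from a univariate series via delay embedding; the embedding dimension, window length, and synthetic returns are illustrative assumptions, and the DCCA-based multi-scale extension of the paper is not reproduced here.

```python
import numpy as np

def svd_entropy(x, m=10, tau=1):
    """SVD entropy of a 1-D series: Shannon entropy of the normalized singular
    spectrum of its delay-embedded trajectory matrix, scaled to [0, 1]."""
    n = len(x) - (m - 1) * tau
    X = np.column_stack([x[i*tau : i*tau + n] for i in range(m)])
    s = np.linalg.svd(X, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p)) / np.log(m)

# rolling entropy over synthetic "returns" standing in for index data
rng = np.random.default_rng(0)
returns = rng.standard_normal(2000) * 0.01
window = 250
H = np.array([svd_entropy(returns[t - window:t]) for t in range(window, len(returns) + 1)])
print(H[:5], H.mean())
```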
The Use of Decompositions in International Trade Textbooks.
ERIC Educational Resources Information Center
Highfill, Jannett K.; Weber, William V.
1994-01-01
Asserts that international trade, as compared with international finance or even international economics, is primarily an applied microeconomics field. Discusses decomposition analysis in relation to international trade and tariffs. Reports on an evaluation of the treatment of this topic in eight college-level economics textbooks. (CFR)
Effect of pre-heating on the thermal decomposition kinetics of cotton
USDA-ARS?s Scientific Manuscript database
The effect of pre-heating at low temperatures (160-280°C) on the thermal decomposition kinetics of scoured cotton fabrics was investigated by thermogravimetric analysis under nonisothermal conditions. Isoconversional methods were used to calculate the activation energies for the pyrolysis after one-...
Split torque transmission load sharing
NASA Technical Reports Server (NTRS)
Krantz, T. L.; Rashidi, M.; Kish, J. G.
1992-01-01
Split torque transmissions are attractive alternatives to conventional planetary designs for helicopter transmissions. The split torque designs can offer lighter weight and fewer parts but have not been used extensively for lack of experience, especially with obtaining proper load sharing. Two split torque designs that use different load sharing methods have been studied. Precise indexing and alignment of the geartrain to produce acceptable load sharing has been demonstrated. An elastomeric torque splitter that has large torsional compliance and damping produces even better load sharing while reducing dynamic transmission error and noise. However, the elastomeric torque splitter as now configured cannot operate over the full range of operating conditions of a fielded system. A thrust balancing load sharing device was evaluated. Friction forces that oppose the motion of the balance mechanism are significant. A static analysis suggests increasing the helix angle of the input pinion of the thrust balancing design. Also, dynamic analysis of this design predicts good load sharing and significant torsional response to accumulative pitch errors of the gears.
NASA Astrophysics Data System (ADS)
Benner, Ronald; Hatcher, Patrick G.; Hedges, John I.
1990-07-01
Changes in the chemical composition of mangrove ( Rhizophora mangle) leaves during decomposition in tropical estuarine waters were characterized using solid-state 13C nuclear magnetic resonance (NMR) and elemental (CHNO) analysis. Carbohydrates were the most abundant components of the leaves accounting for about 50 wt% of senescent tissues. Tannins were estimated to account for about 20 wt% of leaf tissues, and lipid components, cutin, and possibly other aliphatic biopolymers in leaf cuticles accounted for about 15 wt%. Carbohydrates were generally less resistant to decomposition than the other constituents and decreased in relative concentration during decomposition. Tannins were of intermediate resistance to decomposition and remained in fairly constant proportion during decomposition. Paraffinic components were very resistant to decomposition and increased in relative concentration as decomposition progressed. Lignin was a minor component of all leaf tissues. Standard methods for the colorimetric determination of tannins (Folin-Dennis reagent) and the gravimetric determination of lignin (Klason lignin) were highly inaccurate when applied to mangrove leaves. The N content of the leaves was particularly dynamic with values ranging from 1.27 wt% in green leaves to 0.65 wt% in senescent yellow leaves attached to trees. During decomposition in the water the N content initially decreased to 0.51 wt% due to leaching, but values steadily increased thereafter to 1.07 wt% in the most degraded leaf samples. The absolute mass of N in the leaves increased during decomposition indicating that N immobilization was occurring as decomposition progressed.
Benner, R.; Hatcher, P.G.; Hedges, J.I.
1990-01-01
Changes in the chemical composition of mangrove (Rhizophora mangle) leaves during decomposition in tropical estuarine waters were characterized using solid-state 13C nuclear magnetic resonance (NMR) and elemental (CHNO) analysis. Carbohydrates were the most abundant components of the leaves accounting for about 50 wt% of senescent tissues. Tannins were estimated to account for about 20 wt% of leaf tissues, and lipid components, cutin, and possibly other aliphatic biopolymers in leaf cuticles accounted for about 15 wt%. Carbohydrates were generally less resistant to decomposition than the other constituents and decreased in relative concentration during decomposition. Tannins were of intermediate resistance to decomposition and remained in fairly constant proportion during decomposition. Paraffinic components were very resistant to decomposition and increased in relative concentration as decomposition progressed. Lignin was a minor component of all leaf tissues. Standard methods for the colorimetric determination of tannins (Folin-Dennis reagent) and the gravimetric determination of lignin (Klason lignin) were highly inaccurate when applied to mangrove leaves. The N content of the leaves was particularly dynamic with values ranging from 1.27 wt% in green leaves to 0.65 wt% in senescent yellow leaves attached to trees. During decomposition in the water the N content initially decreased to 0.51 wt% due to leaching, but values steadily increased thereafter to 1.07 wt% in the most degraded leaf samples. The absolute mass of N in the leaves increased during decomposition indicating that N immobilization was occurring as decomposition progressed. © 1990.
Muravyev, Nikita V; Monogarov, Konstantin A; Asachenko, Andrey F; Nechaev, Mikhail S; Ananyev, Ivan V; Fomenkov, Igor V; Kiselev, Vitaly G; Pivkina, Alla N
2016-12-21
Thermal decomposition of a novel promising high-performance explosive dihydroxylammonium 5,5'-bistetrazole-1,1'-diolate (TKX-50) was studied using a number of thermal analysis techniques (thermogravimetry, differential scanning calorimetry, and accelerating rate calorimetry, ARC). To obtain more comprehensive insight into the kinetics and mechanism of TKX-50 decomposition, a variety of complementary thermoanalytical experiments were performed under various conditions. Non-isothermal and isothermal kinetics were obtained at both atmospheric and low (up to 0.3 Torr) pressures. The gas products of thermolysis were detected in situ using IR spectroscopy, and the structure of solid-state decomposition products was determined by X-ray diffraction and scanning electron microscopy. Diammonium 5,5'-bistetrazole-1,1'-diolate (ABTOX) was directly identified to be the most important intermediate of the decomposition process. The important role of bistetrazole diol (BTO) in the mechanism of TKX-50 decomposition was also rationalized by thermolysis experiments with mixtures of TKX-50 and BTO. Several widely used thermoanalytical data processing techniques (Kissinger, isoconversional, formal kinetic approaches, etc.) were independently benchmarked against the ARC data, which are more germane to the real storage and application conditions of energetic materials. Our study revealed that none of the Arrhenius parameters reported before can properly describe the complex two-stage decomposition process of TKX-50. In contrast, we showed the superior performance of the isoconversional methods combined with isothermal measurements, which yielded the most reliable kinetic parameters of TKX-50 thermolysis. In contrast with the existing reports, the thermal stability of TKX-50 was determined in the ARC experiments to be lower than that of hexogen, but close to that of hexanitrohexaazaisowurtzitane (CL-20).
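For reference, the Kissinger method mentioned above extracts an apparent activation energy from the shift of the DSC peak temperature with heating rate via ln(beta/Tp^2) = -Ea/(R*Tp) + const. The sketch below shows the calculation on made-up peak temperatures; the numbers are placeholders and are not data from the TKX-50 study.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# heating rates (K/min) and DSC peak temperatures (K); illustrative numbers only
beta = np.array([2.0, 5.0, 10.0, 20.0])
Tp   = np.array([483.0, 493.0, 501.0, 510.0])

# Kissinger: ln(beta / Tp^2) = -Ea/(R*Tp) + ln(A*R/Ea)
slope, intercept = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
Ea = -slope * R                      # apparent activation energy, J/mol
A  = np.exp(intercept) * Ea / R      # pre-exponential factor, 1/min
print(f"Ea = {Ea/1000:.0f} kJ/mol, log10(A/min^-1) = {np.log10(A):.1f}")
```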
He, Y.; Zhuang, Q.; Harden, Jennifer W.; McGuire, A. David; Fan, Z.; Liu, Y.; Wickland, Kimberly P.
2014-01-01
The large amount of soil carbon in boreal forest ecosystems has the potential to influence the climate system if released in large quantities in response to warming. Thus, there is a need to better understand and represent the environmental sensitivity of soil carbon decomposition. Most soil carbon decomposition models rely on empirical relationships omitting key biogeochemical mechanisms and their response to climate change is highly uncertain. In this study, we developed a multi-layer microbial explicit soil decomposition model framework for boreal forest ecosystems. A thorough sensitivity analysis was conducted to identify dominating biogeochemical processes and to highlight structural limitations. Our results indicate that substrate availability (limited by soil water diffusion and substrate quality) is likely to be a major constraint on soil decomposition in the fibrous horizon (40–60% of soil organic carbon (SOC) pool size variation), while energy limited microbial activity in the amorphous horizon exerts a predominant control on soil decomposition (>70% of SOC pool size variation). Elevated temperature alleviated the energy constraint of microbial activity most notably in amorphous soils, whereas moisture only exhibited a marginal effect on dissolved substrate supply and microbial activity. Our study highlights the different decomposition properties and underlying mechanisms of soil dynamics between fibrous and amorphous soil horizons. Soil decomposition models should consider explicitly representing different boreal soil horizons and soil–microbial interactions to better characterize biogeochemical processes in boreal forest ecosystems. A more comprehensive representation of critical biogeochemical mechanisms of soil moisture effects may be required to improve the performance of the soil model we analyzed in this study.
Lossless and Sufficient - Invariant Decomposition of Deterministic Target
NASA Astrophysics Data System (ADS)
Paladini, Riccardo; Ferro Famil, Laurent; Pottier, Eric; Martorella, Marco; Berizzi, Fabrizio
2011-03-01
The symmetric radar scattering matrix of a reciprocal target is projected onto the circular polarization basis and decomposed into four orientation-invariant parameters, a relative phase and a relative orientation. The physical interpretation of these results lies in the wave-particle nature of radar scattering, due to the circular polarization nature of elemental packets of energy. The proposed decomposition is based on a left-orthogonal-to-left Special Unitary basis, providing the target description in terms of a unitary vector. A comparison between the proposed CTD and the Cameron, Kennaugh and Krogager decompositions is also presented. A validation using both anechoic chamber data and airborne EMISAR data from DTU shows the effectiveness of this decomposition for the analysis of coherent targets. In a second paper we will show the application of the rotation group U(3) to the decomposition of distributed targets into nine meaningful parameters.
Nitrated graphene oxide and its catalytic activity in thermal decomposition of ammonium perchlorate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Wenwen; Luo, Qingping; Duan, Xiaohui
2014-02-01
Highlights: • The NGO was synthesized by nitrifying homemade GO. • The N content of the resulting NGO is up to 1.45 wt.%. • The NGO can facilitate the decomposition of AP and release much heat. - Abstract: Nitrated graphene oxide (NGO) was synthesized by nitrifying homemade GO with nitro-sulfuric acid. Fourier transform infrared spectroscopy (FTIR), laser Raman spectroscopy, CP/MAS 13C NMR spectra and X-ray photoelectron spectroscopy (XPS) were used to characterize the structure of NGO. The thickness and the compositions of GO and NGO were analyzed by atomic force microscopy (AFM) and elemental analysis (EA), respectively. The catalytic effect of the NGO on the thermal decomposition of ammonium perchlorate (AP) was investigated by differential scanning calorimetry (DSC). Adding 10% of NGO to AP decreases the decomposition temperature by 106 °C and increases the apparent decomposition heat from 875 to 3236 J/g.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Browning, Katie L; Baggetto, Loic; Unocic, Raymond R
This work reports a method to explore the catalytic reactivity of electrode surfaces towards the decomposition of carbonate solvents [ethylene carbonate (EC), dimethyl carbonate (DMC), and EC/DMC]. We show that the decomposition of a 1:1 wt% EC/DMC mixture is accelerated over certain commercially available LiCoO2 materials resulting in the formation of CO2 while over pure EC or DMC the reaction is much slower or negligible. The solubility of the produced CO2 in carbonate solvents is high (0.025 g/mL) which masks the effect of electrolyte decomposition during storage or use. The origin of this decomposition is not clear but it is expected to be present on other cathode materials and may affect the analysis of SEI products as well as the safety of Li-ion batteries.
Optimal cost design of water distribution networks using a decomposition approach
NASA Astrophysics Data System (ADS)
Lee, Ho Min; Yoo, Do Guen; Sadollah, Ali; Kim, Joong Hoon
2016-12-01
Water distribution network decomposition, which is an engineering approach, is adopted to increase the efficiency of obtaining the optimal cost design of a water distribution network using an optimization algorithm. This study applied the source tracing tool in EPANET, which is a hydraulic and water quality analysis model, to the decomposition of a network to improve the efficiency of the optimal design process. The proposed approach was tested by carrying out the optimal cost design of two water distribution networks, and the results were compared with other optimal cost designs derived from previously proposed optimization algorithms. The proposed decomposition approach using the source tracing technique enables the efficient decomposition of an actual large-scale network, and the results can be combined with the optimal cost design process using an optimization algorithm. This proves that the final design in this study is better than those obtained with other previously proposed optimization algorithms.
Analysis of Decomposition for Structure I Methane Hydrate by Molecular Dynamics Simulation
NASA Astrophysics Data System (ADS)
Wei, Na; Sun, Wan-Tong; Meng, Ying-Feng; Liu, An-Qi; Zhou, Shou-Wei; Guo, Ping; Fu, Qiang; Lv, Xin
2018-05-01
Microscopic decomposition mechanisms of structure I methane hydrate in contact with bulk water molecules have been studied at multiple temperature and pressure conditions by molecular dynamics simulation using the LAMMPS software. The simulation system consists of 482 methane molecules in hydrate and 3027 randomly distributed bulk water molecules. From the simulation results, the number of decomposed hydrate cages, the density of methane molecules, the radial distribution function of oxygen atoms, and the mean square displacement and diffusion coefficient of methane molecules were analyzed. A key result is that structure I methane hydrate decomposes from the hydrate-bulk water interface towards the hydrate interior. As temperature rises and pressure drops, the hydrate becomes less stable, decomposition proceeds further, and the mean square displacement and diffusion coefficient of methane molecules increase. These results provide important insight into the microscopic decomposition mechanisms of methane hydrate.
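The mean square displacement and diffusion coefficient mentioned above are typically obtained from the Einstein relation MSD(t) ≈ 6Dt. A minimal sketch, with a synthetic random-walk trajectory standing in for coordinates dumped by LAMMPS; the array shapes, time step, and fitting range are illustrative.

```python
import numpy as np

def mean_square_displacement(pos):
    """pos: (n_frames, n_molecules, 3) unwrapped methane coordinates."""
    disp = pos - pos[0]
    return (disp**2).sum(axis=2).mean(axis=1)

def diffusion_coefficient(msd, dt, fit_from=0.2):
    """Einstein relation in 3D: MSD(t) ~ 6 D t; fit only the later, linear part."""
    t = np.arange(len(msd)) * dt
    i0 = int(fit_from * len(msd))
    slope, _ = np.polyfit(t[i0:], msd[i0:], 1)
    return slope / 6.0

# synthetic random walk standing in for a LAMMPS trajectory of 482 methane molecules
rng = np.random.default_rng(1)
pos = np.cumsum(rng.standard_normal((2000, 482, 3)) * 0.05, axis=0)
msd = mean_square_displacement(pos)
print("D =", diffusion_coefficient(msd, dt=0.001), "(length^2 per time unit of the input)")
```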
Ozerov, Ivan V; Lezhnina, Ksenia V; Izumchenko, Evgeny; Artemov, Artem V; Medintsev, Sergey; Vanhaelen, Quentin; Aliper, Alexander; Vijg, Jan; Osipov, Andreyan N; Labat, Ivan; West, Michael D; Buzdin, Anton; Cantor, Charles R; Nikolsky, Yuri; Borisov, Nikolay; Irincheeva, Irina; Khokhlovich, Edward; Sidransky, David; Camargo, Miguel Luiz; Zhavoronkov, Alex
2016-11-16
Signalling pathway activation analysis is a powerful approach for extracting biologically relevant features from large-scale transcriptomic and proteomic data. However, modern pathway-based methods often fail to provide stable pathway signatures of a specific phenotype or reliable disease biomarkers. In the present study, we introduce the in silico Pathway Activation Network Decomposition Analysis (iPANDA) as a scalable robust method for biomarker identification using gene expression data. The iPANDA method combines precalculated gene coexpression data with gene importance factors based on the degree of differential gene expression and pathway topology decomposition for obtaining pathway activation scores. Using Microarray Analysis Quality Control (MAQC) data sets and pretreatment data on Taxol-based neoadjuvant breast cancer therapy from multiple sources, we demonstrate that iPANDA provides significant noise reduction in transcriptomic data and identifies highly robust sets of biologically relevant pathway signatures. We successfully apply iPANDA for stratifying breast cancer patients according to their sensitivity to neoadjuvant therapy.
Ozerov, Ivan V.; Lezhnina, Ksenia V.; Izumchenko, Evgeny; Artemov, Artem V.; Medintsev, Sergey; Vanhaelen, Quentin; Aliper, Alexander; Vijg, Jan; Osipov, Andreyan N.; Labat, Ivan; West, Michael D.; Buzdin, Anton; Cantor, Charles R.; Nikolsky, Yuri; Borisov, Nikolay; Irincheeva, Irina; Khokhlovich, Edward; Sidransky, David; Camargo, Miguel Luiz; Zhavoronkov, Alex
2016-01-01
Signalling pathway activation analysis is a powerful approach for extracting biologically relevant features from large-scale transcriptomic and proteomic data. However, modern pathway-based methods often fail to provide stable pathway signatures of a specific phenotype or reliable disease biomarkers. In the present study, we introduce the in silico Pathway Activation Network Decomposition Analysis (iPANDA) as a scalable robust method for biomarker identification using gene expression data. The iPANDA method combines precalculated gene coexpression data with gene importance factors based on the degree of differential gene expression and pathway topology decomposition for obtaining pathway activation scores. Using Microarray Analysis Quality Control (MAQC) data sets and pretreatment data on Taxol-based neoadjuvant breast cancer therapy from multiple sources, we demonstrate that iPANDA provides significant noise reduction in transcriptomic data and identifies highly robust sets of biologically relevant pathway signatures. We successfully apply iPANDA for stratifying breast cancer patients according to their sensitivity to neoadjuvant therapy. PMID:27848968
Metagenomic analysis of antibiotic resistance genes (ARGs) during refuse decomposition.
Liu, Xi; Yang, Shu; Wang, Yangqing; Zhao, He-Ping; Song, Liyan
2018-04-12
Landfills are important reservoirs of residual antibiotics and antibiotic resistance genes (ARGs), but how landfilling influences antibiotic resistance remains unclear. Although refuse decomposition plays a crucial role in landfill stabilization, its impact on antibiotic resistance has not been well characterized. To better understand this impact, we studied the dynamics of ARGs and the bacterial community composition during refuse decomposition in a bench-scale bioreactor after long-term operation (265 d) based on metagenomic analysis. The total abundance of ARGs increased from 431.0 ppm in the initial aerobic phase (AP) to 643.9 ppm in the later methanogenic phase (MP) during refuse decomposition, suggesting that the use of landfills for municipal solid waste (MSW) treatment may elevate the level of ARGs. A shift from drug-specific (bacitracin, tetracycline and sulfonamide) resistance to multidrug resistance was observed during refuse decomposition and was driven by a shift in potential bacterial hosts. The elevated abundance of Pseudomonas mainly contributed to the increasing abundance of multidrug ARGs (mexF and mexW). Accordingly, the percentage of ARGs encoding an efflux pump increased during refuse decomposition, suggesting that potential bacterial hosts developed this mechanism to adapt to the carbon and energy shortage when biodegradable substances were depleted. Overall, our findings indicate that the use of landfills for MSW treatment increased antibiotic resistance, and demonstrate the need for a comprehensive investigation of antibiotic resistance in landfills. Copyright © 2018. Published by Elsevier B.V.
Keough, N; L'Abbé, E N; Steyn, M; Pretorius, S
2015-01-01
Forensic anthropologists are tasked with interpreting the sequence of events from death to the discovery of a body. Burned bone often evokes questions as to the timing of burning events. The purpose of this study was to assess the progression of thermal damage on bones with advancement in decomposition. Twenty-five pigs in various stages of decomposition (fresh, early, advanced, early and late skeletonisation) were exposed to fire for 30 min. The scored heat-related features on bone included colour change (unaltered, charred, calcined), brown and heat borders, heat lines, delineation, greasy bone, joint shielding, predictable and minimal cracking, delamination and heat-induced fractures. Colour changes were scored according to a ranked percentage scale (0-3) and the remaining traits as absent or present (0/1). Kappa statistics was used to evaluate intra- and inter-observer error. Transition analysis was used to formulate probability mass functions [P(X=j|i)] to predict decomposition stage from the scored features of thermal destruction. Nine traits displayed potential to predict decomposition stage from burned remains. An increase in calcined and charred bone occurred synchronously with advancement of decomposition with subsequent decrease in unaltered surfaces. Greasy bone appeared more often in the early/fresh stages (fleshed bone). Heat borders, heat lines, delineation, joint shielding, predictable and minimal cracking are associated with advanced decomposition, when bone remains wet but lacks extensive soft tissue protection. Brown burn/borders, delamination and other heat-induced fractures are associated with early and late skeletonisation, showing that organic composition of bone and percentage of flesh present affect the manner in which it burns. No statistically significant difference was noted among observers for the majority of the traits, indicating that they can be scored reliably. Based on the data analysis, the pattern of heat-induced changes may assist in estimating decomposition stage from unknown, burned remains. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Xuerun, E-mail: xuerunli@163.com; Zhang, Yu; Shen, Xiaodong, E-mail: xdshen@njut.edu.cn
The formation kinetics of tricalcium aluminate (C3A) and calcium sulfate yielding calcium sulfoaluminate (C4A3$) and the decomposition kinetics of calcium sulfoaluminate were investigated by sintering a mixture of synthetic C3A and gypsum. The quantitative analysis of the phase composition was performed by X-ray powder diffraction analysis using the Rietveld method. The results showed that the formation reaction 3Ca3Al2O6 + CaSO4 → Ca4Al6O12(SO4) + 6CaO was the primary reaction below 1350 °C, with an activation energy of 231 ± 42 kJ/mol, while the decomposition reaction 2Ca4Al6O12(SO4) + 10CaO → 6Ca3Al2O6 + 2SO2 ↑ + O2 ↑ primarily occurred beyond 1350 °C, with an activation energy of 792 ± 64 kJ/mol. The optimal formation region for C4A3$ was from 1150 °C to 1350 °C and from 6 h to 1 h, which could provide useful information on the formation of C4A3$-containing clinkers. The Jander diffusion model was feasible for both the formation and decomposition of calcium sulfoaluminate. Ca2+ and SO42- were the diffusive species in both the formation and decomposition reactions. -- Highlights: • Formation and decomposition of calcium sulphoaluminate were studied. • Decomposition of calcium sulphoaluminate combined with CaO and yielded C3A. • Activation energy for formation was 231 ± 42 kJ/mol. • Activation energy for decomposition was 792 ± 64 kJ/mol. • Both the formation and decomposition were controlled by diffusion.
Cockle, Diane Lyn; Bell, Lynne S
2017-03-01
Little is known about the nature and trajectory of human decomposition in Canada. This study involved the examination of 96 retrospective police death investigation cases selected using the Canadian ViCLAS (Violent Crime Linkage Analysis System) and sudden death police databases. A classification system was designed and applied based on the latest visible stages of autolysis (stages 1-2), putrefaction (3-5) and skeletonisation (6-8) observed. The analysis of the progression of decomposition using time (Post Mortem Interval (PMI) in days) and temperature accumulated-degree-days (ADD) score found considerable variability during the putrefaction and skeletonisation phases, with poor predictability noted after stage 5 (post bloat). The visible progression of decomposition outdoors was characterized by a brown to black discolouration at stage 5 and remnant desiccated black tissue at stage 7. No bodies were totally skeletonised in under one year. Mummification of tissue was rare with earlier onset in winter as opposed to summer, considered likely due to lower seasonal humidity. It was found that neither ADD nor the PMI were significant dependent variables for the decomposition score with correlations of 53% for temperature and 41% for time. It took almost twice as much time and 1.5 times more temperature (ADD) for the set of cases exposed to cold and freezing temperatures (4°C or less) to reach putrefaction compared to the warm group. The amount of precipitation and/or clothing had a negligible impact on the advancement of decomposition, whereas the lack of sun exposure (full shade) had a small positive effect. This study found that the poor predictability of onset and the duration of late stage decomposition, combined with our limited understanding of the full range of variables which influence the speed of decomposition, makes PMI estimations for exposed terrestrial cases in Canada unreliable, but also calls in question PMI estimations elsewhere. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.
Ma, Hong-Wu; Zhao, Xue-Ming; Yuan, Ying-Jin; Zeng, An-Ping
2004-08-12
Metabolic networks are organized in a modular, hierarchical manner. Methods for a rational decomposition of the metabolic network into relatively independent functional subsets are essential to better understand the modularity and organization principle of a large-scale, genome-wide network. Network decomposition is also necessary for functional analysis of metabolism by pathway analysis methods that are often hampered by the problem of combinatorial explosion due to the complexity of metabolic network. Decomposition methods proposed in literature are mainly based on the connection degree of metabolites. To obtain a more reasonable decomposition, the global connectivity structure of metabolic networks should be taken into account. In this work, we use a reaction graph representation of a metabolic network for the identification of its global connectivity structure and for decomposition. A bow-tie connectivity structure similar to that previously discovered for metabolite graph is found also to exist in the reaction graph. Based on this bow-tie structure, a new decomposition method is proposed, which uses a distance definition derived from the path length between two reactions. An hierarchical classification tree is first constructed from the distance matrix among the reactions in the giant strong component of the bow-tie structure. These reactions are then grouped into different subsets based on the hierarchical tree. Reactions in the IN and OUT subsets of the bow-tie structure are subsequently placed in the corresponding subsets according to a 'majority rule'. Compared with the decomposition methods proposed in literature, ours is based on combined properties of the global network structure and local reaction connectivity rather than, primarily, on the connection degree of metabolites. The method is applied to decompose the metabolic network of Escherichia coli. Eleven subsets are obtained. More detailed investigations of the subsets show that reactions in the same subset are really functionally related. The rational decomposition of metabolic networks, and subsequent studies of the subsets, make it more amenable to understand the inherent organization and functionality of metabolic networks at the modular level. http://genome.gbf.de/bioinformatics/
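A minimal sketch of the workflow described above (reaction graph, giant strong component of the bow-tie, path-length-based distance matrix, hierarchical classification tree) on a toy set of reactions rather than the E. coli network; the averaging of the two directed path lengths into a symmetric distance and the cut into two subsets are illustrative assumptions.

```python
import networkx as nx
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# toy directed reaction graph: an edge R1 -> R2 means a product of R1 is a substrate of R2
edges = [("R1", "R2"), ("R2", "R3"), ("R3", "R1"), ("R3", "R4"),
         ("R4", "R5"), ("R5", "R3"), ("R0", "R1"), ("R5", "R6")]
G = nx.DiGraph(edges)

# giant strong component: the knot of the bow-tie structure
gsc = max(nx.strongly_connected_components(G), key=len)
H = G.subgraph(gsc)
nodes = sorted(H)

# symmetric distance between reactions from the two directed shortest-path lengths
sp = dict(nx.shortest_path_length(H))
n = len(nodes)
D = np.zeros((n, n))
for i, u in enumerate(nodes):
    for j, v in enumerate(nodes):
        if i != j:
            D[i, j] = 0.5 * (sp[u][v] + sp[v][u])

# hierarchical classification tree, then cut into subsets
Z = linkage(squareform(D, checks=False), method="average")
subsets = fcluster(Z, t=2, criterion="maxclust")
print(dict(zip(nodes, subsets)))
```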
Robust-mode analysis of hydrodynamic flows
NASA Astrophysics Data System (ADS)
Roy, Sukesh; Gord, James R.; Hua, Jia-Chen; Gunaratne, Gemunu H.
2017-04-01
The emergence of techniques to extract high-frequency high-resolution data introduces a new avenue for modal decomposition to assess the underlying dynamics, especially of complex flows. However, this task requires the differentiation of robust, repeatable flow constituents from noise and other irregular features of a flow. Traditional approaches involving low-pass filtering and principal components analysis have shortcomings. The approach outlined here, referred to as robust-mode analysis, is based on Koopman decomposition. Three applications to (a) a counter-rotating cellular flame state, (b) variations in financial markets, and (c) turbulent injector flows are provided.
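Koopman-based mode extraction of the kind underlying robust-mode analysis is commonly approximated by dynamic mode decomposition (DMD). The sketch below is a generic exact-DMD implementation on synthetic snapshot data; it does not include the robustness and repeatability selection step that distinguishes the authors' method.

```python
import numpy as np

def dmd(X, r=None):
    """Exact dynamic mode decomposition of a snapshot matrix X (features x snapshots)."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    if r is not None:                                   # optional rank truncation
        U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ X2 @ Vh.conj().T / s          # low-rank propagator approximation
    eigvals, W = np.linalg.eig(Atilde)
    modes = X2 @ Vh.conj().T / s @ W                    # exact DMD modes
    return eigvals, modes

# synthetic snapshots: two traveling waves plus noise
x = np.linspace(0, 10, 200)[:, None]
t = np.linspace(0, 4*np.pi, 150)[None, :]
rng = np.random.default_rng(2)
X = np.sin(2*x - 3*t) + 0.5*np.cos(5*x + 1.5*t) + 0.05*rng.standard_normal((200, 150))
eigvals, modes = dmd(X, r=10)
dt = t[0, 1] - t[0, 0]
print("mode frequencies (rad per time unit):", np.sort(np.angle(eigvals) / dt))
```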
DOT National Transportation Integrated Search
2017-04-04
This paper employs the finite element (FE) modeling method to investigate the contributing factors to the horizontal splitting cracks observed in the upper strand plane in some concrete crossties made with seven-wire strands. The concrete...
Tracking Hierarchical Processing in Morphological Decomposition with Brain Potentials
ERIC Educational Resources Information Center
Lavric, Aureliu; Elchlepp, Heike; Rastle, Kathleen
2012-01-01
One important debate in psycholinguistics concerns the nature of morphological decomposition processes in visual word recognition (e.g., darkness = {dark} + {-ness}). One theory claims that these processes arise during orthographic analysis and prior to accessing meaning (Rastle & Davis, 2008), and another argues that these processes arise through…
Liu, Limei; Sanchez-Lopez, Hector; Poole, Michael; Liu, Feng; Crozier, Stuart
2012-09-01
Splitting a magnetic resonance imaging (MRI) magnet into two halves can provide a central region to accommodate other modalities, such as positron emission tomography (PET). This approach, however, produces challenges in the design of the gradient coils in terms of gradient performance and fabrication. In this paper, the impact of a central gap in a split MRI system was theoretically studied by analysing the performance of split, actively-shielded transverse gradient coils. In addition, the effects of the eddy currents induced in the cryostat on power loss, mechanical vibration and magnetic field harmonics were also investigated. It was found, as expected, that the gradient performance tended to decrease as the central gap increased. Furthermore, the effects of the eddy currents were heightened as a consequence of splitting the gradient assembly into two halves. An optimal central gap size was found, such that the split gradient coils designed with this central gap size could produce an engineering solution with an acceptable trade-off between gradient performance and eddy current effects. These investigations provide useful information on the inherent trade-offs in hybrid MRI imaging systems. Copyright © 2012 Elsevier Inc. All rights reserved.
Dernotte, Jeremie; Dec, John E.; Ji, Chunsheng
2015-04-14
A detailed analysis of the various factors affecting the trends in gross-indicated thermal efficiency with changes in key operating parameters has been carried out, applied to a one-liter displacement single-cylinder boosted Low-Temperature Gasoline Combustion (LTGC) engine. This work systematically investigates how the supplied fuel energy splits into the following four energy pathways: gross-indicated thermal efficiency, combustion inefficiency, heat transfer, and exhaust losses, and how this split changes with operating conditions. Additional analysis is performed to determine the influence of variations in the ratio of specific heat capacities (γ) and the effective expansion ratio, related to the combustion-phasing retard (CA50), on the energy split. Heat transfer and exhaust losses are computed using multiple standard cycle analysis techniques. Furthermore, the various methods are evaluated in order to validate the trends.
Lee, Gileung; Lee, Kang-Ie; Lee, Yunjoo; Kim, Backki; Lee, Dongryung; Seo, Jeonghwan; Jang, Su; Chin, Joong Hyoun; Koh, Hee-Jong
2018-07-01
The split-hull phenotype caused by reduced lemma width and low lignin content is under control of SPH encoding a type-2 13-lipoxygenase and contributes to high dehulling efficiency. Rice hulls consist of two bract-like structures, the lemma and palea. The hull is an important organ that helps to protect seeds from environmental stress, determines seed shape, and ensures grain filling. Achieving optimal hull size and morphology is beneficial for seed development. We characterized the split-hull (sph) mutant in rice, which exhibits hull splitting in the interlocking part between lemma and palea and/or the folded part of the lemma during the grain filling stage. Morphological and chemical analysis revealed that reduction in the width of the lemma and lignin content of the hull in the sph mutant might be the cause of hull splitting. Genetic analysis indicated that the mutant phenotype was controlled by a single recessive gene, sph (Os04g0447100), which encodes a type-2 13-lipoxygenase. SPH knockout and knockdown transgenic plants displayed the same split-hull phenotype as in the mutant. The sph mutant showed significantly higher linoleic and linolenic acid (substrates of lipoxygenase) contents in spikelets compared to the wild type. It is probably due to the genetic defect of SPH and subsequent decrease in lipoxygenase activity. In dehulling experiment, the sph mutant showed high dehulling efficiency even by a weak tearing force in a dehulling machine. Collectively, the results provide a basis for understanding of the functional role of lipoxygenase in structure and maintenance of hulls, and would facilitate breeding of easy-dehulling rice.
Extraction of decision rules via imprecise probabilities
NASA Astrophysics Data System (ADS)
Abellán, Joaquín; López, Griselda; Garach, Laura; Castellano, Javier G.
2017-05-01
Data analysis techniques can be applied to discover important relations among features. This is the main objective of the Information Root Node Variation (IRNV) technique, a new method to extract knowledge from data via decision trees. The decision trees used by the original method were built using classic split criteria. The performance of new split criteria based on imprecise probabilities and uncertainty measures, called credal split criteria, differs significantly from the performance obtained using the classic criteria. This paper extends the IRNV method using two credal split criteria: one based on a mathematical parametric model, and other one based on a non-parametric model. The performance of the method is analyzed using a case study of traffic accident data to identify patterns related to the severity of an accident. We found that a larger number of rules is generated, significantly supplementing the information obtained using the classic split criteria.
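One example of a credal split criterion is the imprecise information gain built on the imprecise Dirichlet model (IDM), where the entropy of a node is replaced by the maximum entropy over the IDM credal set. The sketch below uses a level-filling construction of that maximum-entropy distribution; the class counts, the choice s = 1, and the helper names are illustrative assumptions, not the exact criteria evaluated in the paper.

```python
import numpy as np

def max_entropy_idm(counts, s=1.0):
    """Maximum-entropy distribution of the IDM credal set, built by level filling:
    start from the lower probabilities n_i/(N+s) and pour the free mass s/(N+s)
    onto the currently smallest values until it is exhausted."""
    counts = np.asarray(counts, dtype=float)
    N = counts.sum()
    p = counts / (N + s)
    mass = s / (N + s)
    while mass > 1e-12:
        lo = p.min()
        idx = np.flatnonzero(np.isclose(p, lo))
        higher = p[p > lo + 1e-12]
        gap = higher.min() - lo if higher.size else np.inf
        add = min(mass / len(idx), gap)
        p[idx] += add
        mass -= add * len(idx)
    return p

def credal_entropy(counts, s=1.0):
    p = max_entropy_idm(counts, s)
    return -np.sum(p * np.log2(p))

def imprecise_info_gain(parent_counts, children_counts, s=1.0):
    n = float(sum(sum(c) for c in children_counts))
    return credal_entropy(parent_counts, s) - sum(
        (sum(c) / n) * credal_entropy(c, s) for c in children_counts)

# tiny example: accident-severity counts at a node and after a candidate split
parent = [30, 15, 5]                      # e.g. slight / serious / fatal
children = [[25, 5, 1], [5, 10, 4]]
print("imprecise info gain:", round(imprecise_info_gain(parent, children), 4))
```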
Sorci, Mirco; Dassa, Bareket; Liu, Hongwei; Anand, Gaurav; Dutta, Amit K; Pietrokovski, Shmuel; Belfort, Marlene; Belfort, Georges
2013-06-18
In order to measure the intermolecular binding forces between two halves (or partners) of naturally split protein splicing elements called inteins, a novel thiol-hydrazide linker was designed and used to orient immobilized antibodies specific for each partner. Activation of the surfaces was achieved in one step, allowing direct intermolecular force measurement of the binding of the two partners of the split intein (called protein trans-splicing). Through this binding process, a whole functional intein is formed resulting in subsequent splicing. Atomic force microscopy (AFM) was used to directly measure the split intein partner binding at 1 μm/s between native (wild-type) and mixed pairs of C- and N-terminal partners of naturally occurring split inteins from three cyanobacteria. Native and mixed pairs exhibit similar binding forces within the error of the measurement technique (~52 pN). Bioinformatic sequence analysis and computational structural analysis discovered a zipper-like contact between the two partners with electrostatic and nonpolar attraction between multiple aligned ion pairs and hydrophobic residues. Also, we tested the Jarzynski's equality and demonstrated, as expected, that nonequilibrium dissipative measurements obtained here gave larger energies of interaction as compared with those for equilibrium. Hence, AFM coupled with our immobilization strategy and computational studies provides a useful analytical tool for the direct measurement of intermolecular association of split inteins and could be extended to any interacting protein pair.
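Jarzynski's equality, tested in the work above, relates repeated nonequilibrium work measurements to the equilibrium free-energy difference through ΔF = -kT ln⟨exp(-W/kT)⟩, so that dissipative pulls give ⟨W⟩ ≥ ΔF. A minimal sketch with synthetic work values (placeholders, not the AFM data of the study):

```python
import numpy as np

kT = 4.11e-21  # thermal energy at ~298 K, in joules

def jarzynski_free_energy(work, kT):
    """dF = -kT * ln< exp(-W/kT) >, in a log-sum-exp form that avoids underflow."""
    w = np.asarray(work)
    return -kT * (np.logaddexp.reduce(-w / kT) - np.log(len(w)))

# synthetic work values from repeated pulls (illustrative magnitudes only)
rng = np.random.default_rng(3)
W = rng.normal(60e-21, 15e-21, size=200)   # joules
dF = jarzynski_free_energy(W, kT)
print("mean work          :", W.mean() / kT, "kT")
print("Jarzynski estimate :", dF / kT, "kT")
print("dissipated work    :", (W.mean() - dF) / kT, "kT")
```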
Isothermal Decomposition of Hydrogen Peroxide Dihydrate
NASA Technical Reports Server (NTRS)
Loeffler, M. J.; Baragiola, R. A.
2011-01-01
We present a new method of growing pure solid hydrogen peroxide in an ultra high vacuum environment and apply it to determine thermal stability of the dihydrate compound that forms when water and hydrogen peroxide are mixed at low temperatures. Using infrared spectroscopy and thermogravimetric analysis, we quantified the isothermal decomposition of the metastable dihydrate at 151.6 K. This decomposition occurs by fractional distillation through the preferential sublimation of water, which leads to the formation of pure hydrogen peroxide. The results imply that in an astronomical environment where condensed mixtures of H2O2 and H2O are shielded from radiolytic decomposition and warmed to temperatures where sublimation is significant, highly concentrated or even pure hydrogen peroxide may form.
HCOOH decomposition on Pt(111): A DFT study
Scaranto, Jessica; Mavrikakis, Manos
2015-10-13
Formic acid (HCOOH) decomposition on transition metal surfaces is important for hydrogen production and for its electro-oxidation in direct HCOOH fuel cells. HCOOH can decompose through dehydrogenation leading to formation of CO2 and H2 or dehydration leading to CO and H2O; because CO can poison metal surfaces, dehydrogenation is typically the desirable decomposition path. Here we report a mechanistic analysis of HCOOH decomposition on Pt(111), obtained from a plane wave density functional theory (DFT-PW91) study. We analyzed the dehydrogenation mechanism by considering the two possible pathways involving the formate (HCOO) or the carboxyl (COOH) intermediate. We also considered several possible dehydration paths leading to CO formation. We studied HCOO and COOH decomposition both on the clean surface and in the presence of other relevant co-adsorbates. The results suggest that COOH formation is energetically more difficult than HCOO formation. In contrast, COOH dehydrogenation is easier than HCOO decomposition. Here, we found that CO2 is the main product through both pathways and that CO is produced mainly through the dehydroxylation of the COOH intermediate.
HCOOH decomposition on Pt(111): A DFT study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scaranto, Jessica; Mavrikakis, Manos
Formic acid (HCOOH) decomposition on transition metal surfaces is important for hydrogen production and for its electro-oxidation in direct HCOOH fuel cells. HCOOH can decompose through dehydrogenation leading to formation of CO2 and H2 or dehydration leading to CO and H2O; because CO can poison metal surfaces, dehydrogenation is typically the desirable decomposition path. Here we report a mechanistic analysis of HCOOH decomposition on Pt(111), obtained from a plane wave density functional theory (DFT-PW91) study. We analyzed the dehydrogenation mechanism by considering the two possible pathways involving the formate (HCOO) or the carboxyl (COOH) intermediate. We also considered several possible dehydration paths leading to CO formation. We studied HCOO and COOH decomposition both on the clean surface and in the presence of other relevant co-adsorbates. The results suggest that COOH formation is energetically more difficult than HCOO formation. In contrast, COOH dehydrogenation is easier than HCOO decomposition. Here, we found that CO2 is the main product through both pathways and that CO is produced mainly through the dehydroxylation of the COOH intermediate.
Corrected confidence bands for functional data using principal components.
Goldsmith, J; Greven, S; Crainiceanu, C
2013-03-01
Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. Copyright © 2013, The International Biometric Society.
Corrected Confidence Bands for Functional Data Using Principal Components
Goldsmith, J.; Greven, S.; Crainiceanu, C.
2014-01-01
Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. PMID:23003003
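A minimal numpy sketch of the FPC decomposition and truncated curve reconstruction for densely observed curves; it deliberately omits the paper's contribution (propagating the uncertainty of the decomposition into the confidence bands) and the smoothing used by the refund implementation, and the synthetic curves are illustrative.

```python
import numpy as np

def fpca(Y, n_pc=2):
    """Functional PCA for curves observed on a common dense grid.
    Y: (n_subjects, n_gridpoints) matrix of observed curves."""
    mu = Y.mean(axis=0)
    Yc = Y - mu
    C = Yc.T @ Yc / (Y.shape[0] - 1)                 # sample covariance surface
    evals, evecs = np.linalg.eigh(C)
    order = np.argsort(evals)[::-1][:n_pc]
    phi = evecs[:, order]                            # estimated eigenfunctions
    scores = Yc @ phi                                # FPC scores
    fitted = mu + scores @ phi.T                     # truncated Karhunen-Loeve reconstruction
    return mu, phi, scores, fitted

# synthetic curves: a sine mean with random cosine-shaped deviations plus noise
t = np.linspace(0, 1, 101)
rng = np.random.default_rng(4)
Y = (np.sin(2*np.pi*t)
     + rng.normal(0, 0.5, (50, 1)) * np.cos(2*np.pi*t)
     + rng.normal(0, 0.1, (50, 101)))
mu, phi, scores, fitted = fpca(Y, n_pc=2)
print("fitted curves:", fitted.shape, "score variances:", scores.var(axis=0).round(3))
```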
Challenges of including nitrogen effects on decomposition in earth system models
NASA Astrophysics Data System (ADS)
Hobbie, S. E.
2011-12-01
Despite the importance of litter decomposition for ecosystem fertility and carbon balance, key uncertainties remain about how this fundamental process is affected by nitrogen (N) availability. Nevertheless, resolving such uncertainties is critical for mechanistic inclusion of such processes in earth system models, towards predicting the ecosystem consequences of increased anthropogenic reactive N. Towards that end, we have conducted a series of experiments examining nitrogen effects on litter decomposition. We found that both substrate N and externally supplied N (regardless of form) accelerated the initial decomposition rate. Faster initial decomposition rates were linked to the higher activity of carbohydrate-degrading enzymes associated with externally supplied N and the greater relative abundances of Gram negative and Gram positive bacteria associated with green leaves and externally supplied organic N (assessed using phospholipid fatty acid analysis, PLFA). By contrast, later in decomposition, externally supplied N slowed decomposition, increasing the fraction of slowly decomposing litter and reducing lignin-degrading enzyme activity and relative abundances of Gram negative and Gram positive bacteria. Our results suggest that elevated atmospheric N deposition may have contrasting effects on the dynamics of different soil carbon pools, decreasing mean residence times of active fractions comprising very fresh litter, while increasing those of more slowly decomposing fractions including more processed litter. Incorporating these contrasting effects of N on decomposition processes into models is complicated by lingering uncertainties about how these effects generalize across ecosystems and substrates.
Self-similar pyramidal structures and signal reconstruction
NASA Astrophysics Data System (ADS)
Benedetto, John J.; Leon, Manuel; Saliani, Sandra
1998-03-01
Pyramidal structures are defined which are locally a combination of low and highpass filtering. The structures are analogous to but different from wavelet packet structures. In particular, new frequency decompositions are obtained; and these decompositions can be parameterized to establish a correspondence with a large class of Cantor sets. Further correspondences are then established to relate such frequency decompositions with more general self- similarities. The role of the filters in defining these pyramidal structures gives rise to signal reconstruction algorithms, and these, in turn, are used in the analysis of speech data.
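A minimal sketch of such a pyramidal structure: a recursive low-pass/high-pass split with orthonormal Haar filters in which a rule chooses which branch to refine at each level, yielding frequency decompositions other than the standard wavelet or wavelet-packet trees. The always-split-the-low-pass rule used here is only an illustrative default.

```python
import numpy as np

def haar_split(x):
    """One low-pass / high-pass analysis step with orthonormal Haar filters."""
    x = x[:len(x) - len(x) % 2]
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)
    return lo, hi

def pyramid(x, depth, split_rule=lambda lo, hi, level: "lo"):
    """Recursive pyramid: at each level, refine only the branch chosen by split_rule."""
    leaves = {}
    node, label = np.asarray(x, dtype=float), ""
    for level in range(depth):
        lo, hi = haar_split(node)
        if split_rule(lo, hi, level) == "lo":
            leaves[label + "H"] = hi        # keep the high-pass band as a leaf
            node, label = lo, label + "L"
        else:
            leaves[label + "L"] = lo        # keep the low-pass band as a leaf
            node, label = hi, label + "H"
    leaves[label] = node
    return leaves

# decompose a chirp, always refining the low-frequency branch (octave-band pyramid)
t = np.linspace(0, 1, 1024)
signal = np.sin(2*np.pi*(5 + 40*t)*t)
bands = pyramid(signal, depth=4)
print({k: len(v) for k, v in bands.items()})
```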
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caballero, F.G.; Yen, Hung-Wei; Australian Centre for Microscopy and Microanalysis, The University of Sydney, NSW 2006
2014-02-15
Interphase carbide precipitation due to austenite decomposition was investigated by high resolution transmission electron microscopy and atom probe tomography in tempered nanostructured bainitic steels. Results showed that cementite (θ) forms by a paraequilibrium transformation mechanism at the bainitic ferrite–austenite interface with a simultaneous three phase crystallographic orientation relationship. - Highlights: • Interphase carbide precipitation due to austenite decomposition • Tempered nanostructured bainitic steels • High resolution transmission electron microscopy and atom probe tomography • Paraequilibrium θ with three phase crystallographic orientation relationship.
A Quantitative Analysis of Children's Splitting Operations and Fraction Schemes
ERIC Educational Resources Information Center
Norton, Anderson; Wilkins, Jesse L. M.
2009-01-01
Teaching experiments with pairs of children have generated several hypotheses about students' construction of fractions. For example, Steffe (2004) hypothesized that robust conceptions of improper fractions depends on the development of a splitting operation. Results from teaching experiments that rely on scheme theory and Steffe's hierarchy of…
GASOLINE/DIESEL PM SPLIT STUDY: LIGHT-DUTY VEHICLE TESTING, DATA, AND ANALYSIS
During June 2001, the EPA participated in DOE's Gasoline/Diesel PM Split Study in Riverside, California. The purpose of the study was to determine the contribution of diesel versus gasoline-powered exhaust to the particulate matter (PM) inventory in the South Coast Air Basin. T...
Light distribution in diffractive multifocal optics and its optimization.
Portney, Valdemar
2011-11-01
To expand a geometrical model of diffraction efficiency and its interpretation to the multifocal optic and to introduce formulas for analysis of far and near light distribution and their application to multifocal intraocular lenses (IOLs) and to diffraction efficiency optimization. Medical device consulting firm, Newport Coast, California, USA. Experimental study. Application of a geometrical model to the kinoform (single focus diffractive optical element) was expanded to a multifocal optic to produce analytical definitions of light split between far and near images and light loss to other diffraction orders. The geometrical model gave a simple interpretation of light split in a diffractive multifocal IOL. An analytical definition of light split between far, near, and light loss was introduced as curve fitting formulas. Several examples of application to common multifocal diffractive IOLs were developed; for example, to light-split change with wavelength. The analytical definition of diffraction efficiency may assist in optimization of multifocal diffractive optics that minimize light loss. Formulas for analysis of light split between different foci of multifocal diffractive IOLs are useful in interpreting diffraction efficiency dependence on physical characteristics, such as blaze heights of the diffractive grooves and wavelength of light, as well as for optimizing multifocal diffractive optics. Disclosure is found in the footnotes. Copyright © 2011 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
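Under scalar diffraction theory, the efficiency of a kinoform in order m is eta_m = sinc^2(alpha - m), with alpha the blaze phase depth expressed in waves at the wavelength of interest; for a bifocal diffractive IOL the far and near images correspond to orders 0 and 1 and the remainder is lost light. The sketch below evaluates this light split for a half-wave design blaze, ignoring material dispersion; it is the generic scalar formula, not the paper's geometrical model.

```python
import numpy as np

def order_efficiency(alpha, m):
    """Scalar-theory efficiency of diffraction order m for a full-period blazed
    (kinoform) profile; alpha = blaze phase depth in waves, np.sinc(x) = sin(pi x)/(pi x)."""
    return np.sinc(alpha - m) ** 2

def light_split(alpha, orders=range(-3, 5)):
    eff = {m: order_efficiency(alpha, m) for m in orders}
    far, near = eff[0], eff[1]              # order 0 -> far focus, order 1 -> near focus
    return far, near, 1.0 - far - near      # remainder goes to other (lost) orders

# half-wave blaze at a 550 nm design wavelength, material dispersion ignored
design_wavelength = 550e-9
for wavelength in (450e-9, 550e-9, 650e-9):
    alpha = 0.5 * design_wavelength / wavelength   # phase depth scales as 1/wavelength
    far, near, loss = light_split(alpha)
    print(f"{wavelength*1e9:.0f} nm: far {far:.1%}, near {near:.1%}, lost {loss:.1%}")
```

At the design wavelength this reproduces the classic result for a half-wave kinoform bifocal: about 41% of the light in each of the far and near foci and roughly 18% lost to higher orders.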
Morais, Helena; Ramos, Cristina; Forgács, Esther; Cserháti, Tibor; Oliviera, José
2002-04-25
The effect of light, storage time and temperature on the decomposition rate of monomeric anthocyanin pigments extracted from skins of grape (Vitis vinifera var. Red globe) was determined by reversed-phase high-performance liquid chromatography (RP-HPLC). The impact of various storage conditions on the pigment stability was assessed by stepwise regression analysis. RP-HPLC separated well the five anthocyanins identified and proved the presence of other unidentified pigments at lower concentrations. Stepwise regression analysis confirmed that the overall decomposition rate of monomeric anthocyanins, peonidin-3-glucoside and malvidin-3-glucoside significantly depended on the time and temperature of storage, the effect of storage time being the most important. The presence or absence of light exerted a negligible impact on the decomposition rate.
NASA Astrophysics Data System (ADS)
Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar
2017-09-01
The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix.
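The two routes can be made concrete as principal component analysis (eigen-decomposition of the covariance matrix) versus classical multidimensional scaling (eigen-decomposition of the double-centred squared dissimilarity matrix). A minimal sketch on synthetic two-group spectra; note that with plain Euclidean dissimilarities classical MDS reproduces PCA, so the gain reported above comes from the choice of dissimilarity measure, which is left generic here.

```python
import numpy as np

def pca_scores(X, k=2):
    """Scores from the eigen-decomposition of the covariance matrix."""
    Xc = X - X.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    return Xc @ evecs[:, np.argsort(evals)[::-1][:k]]

def mds_scores(D, k=2):
    """Classical MDS: eigen-decomposition of the double-centred squared dissimilarity matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    evals, evecs = np.linalg.eigh(B)
    idx = np.argsort(evals)[::-1][:k]
    return evecs[:, idx] * np.sqrt(np.maximum(evals[idx], 0))

# synthetic "spectra": a shared broad band, plus a narrow extra band in the first 20 samples
rng = np.random.default_rng(5)
grid = np.arange(100)
base = np.exp(-((grid - 50) / 20.0) ** 2)
bump = np.exp(-((grid - 70) / 3.0) ** 2)
X = np.vstack([base + (0.05 * bump if i < 20 else 0.0) + 0.02 * rng.standard_normal(100)
               for i in range(40)])
D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))   # pairwise dissimilarities
print("PCA scores:", pca_scores(X)[:2].round(3))
print("MDS scores:", mds_scores(D)[:2].round(3))
```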
NASA Astrophysics Data System (ADS)
Lin, Yinwei
2018-06-01
A three-dimensional model of a fish school based on a modified Adomian decomposition method (ADM), discretized by the finite difference method, is proposed. To our knowledge, few studies of fish schools are documented, owing to the expense of numerical computation and the tedium of three-dimensional data analysis. Here, we propose a simple model based on the Adomian decomposition method to estimate the energy-saving efficiency of the flow motion of the fish school. First, analytic solutions of the Navier-Stokes equations are used for numerical validation. The influence of the distance between two side-by-side fish on the energy efficiency of the fish school is then studied. In addition, a complete error analysis for the method is presented.
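A minimal illustration of the Adomian decomposition idea on a scalar nonlinear ODE (u' = u^2, u(0) = 1, exact solution 1/(1 - t)), with the Adomian polynomials generated symbolically; this is a stand-in for, not a reproduction of, the modified ADM applied to the three-dimensional Navier-Stokes setting of the paper. The sympy package is assumed available.

```python
import sympy as sp

t, lam = sp.symbols("t lambda")

def adomian_polynomials(N, u_syms, n):
    """A_k = (1/k!) d^k/d lambda^k N(sum_j lambda^j u_j) at lambda = 0, for k = 0..n."""
    u_series = sum(lam**j * u_syms[j] for j in range(n + 1))
    expr = N(u_series)
    return [sp.diff(expr, lam, k).subs(lam, 0) / sp.factorial(k) for k in range(n + 1)]

def adm_ode(N, u0, order):
    """Adomian series for u'(t) = N(u), u(0) = u0: u_{k+1} = integral_0^t A_k dt'."""
    u_syms = sp.symbols(f"u0:{order + 1}")
    terms = [sp.Integer(u0)]
    for k in range(order):
        A_k = adomian_polynomials(N, u_syms, k)[k]
        A_k = A_k.subs({u_syms[j]: terms[j] for j in range(k + 1)})
        terms.append(sp.integrate(A_k, t))      # the antiderivative vanishes at t = 0 here
    return sp.expand(sum(terms))

# u' = u^2, u(0) = 1 has exact solution 1/(1 - t); the ADM series is its Taylor expansion
series = adm_ode(lambda u: u**2, u0=1, order=5)
print(series)   # 1 + t + t**2 + ... + t**5
```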
Ruan, Chuanfen; Bai, Xuelian; Zhang, Miao; Zhu, Shuangshuang; Jiang, Yingying
2016-01-01
Endophytic microbe has been proved to be one of rich sources of bioactive natural products with potential application for new drug and pesticide discovery. One cyclodepsipeptide, beauvericin, was firstly isolated from the fermentation broth of Fusarium oxysporum 5-19 endophytic on Edgeworthia chrysantha Linn. Its chemical structure was unambiguously identified by a combination of spectroscopic methods, such as HRESI-MS and 1H and 13C NMR. ESI-MS/MS was successfully used to elucidate the splitting decomposition route of the positive molecule ion of beauvericin. Antimicrobial results showed that this cyclodepsipeptide had inhibitory effect on three human pathogenic microbes, Candida albicans, Escherichia coli, and Staphylococcus aureus. In particular, beauvericin exhibited the strongest antimicrobial activity against S. aureus with MIC values of 3.91 μM, which had similar effect with that of the positive control amoxicillin. PMID:27413733
Inverse transport calculations in optical imaging with subspace optimization algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Tian, E-mail: tding@math.utexas.edu; Ren, Kui, E-mail: ren@math.utexas.edu
2014-09-15
Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of the low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.
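The subspace idea described above can be sketched on a generic linear inverse problem: the dominant singular triplets give the "low-frequency" part of the unknown in closed form, and a small minimization recovers the remainder. The forward map, subspace dimension and optimizer below are illustrative assumptions, not the actual transport operator or algorithm of the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 60
A = rng.normal(size=(n, n)) / np.sqrt(n)      # stand-in for a discretised forward operator
x_true = rng.normal(size=n)
b = A @ x_true + 1e-3 * rng.normal(size=n)    # synthetic data

U, s, Vt = np.linalg.svd(A)
k = 20                                        # dimension of the "low-frequency" subspace

# Part of the solution recovered analytically from the dominant singular triplets
x_low = Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

# Remaining ("high-frequency") components obtained by minimisation over the complement
def misfit(c):
    return np.linalg.norm(A @ (x_low + Vt[k:].T @ c) - b) ** 2

res = minimize(misfit, np.zeros(n - k), method="L-BFGS-B")
x_rec = x_low + Vt[k:].T @ res.x
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```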
Robust image watermarking using DWT and SVD for copyright protection
NASA Astrophysics Data System (ADS)
Harjito, Bambang; Suryani, Esti
2017-02-01
The objective of this paper is to propose a robust watermarking scheme combining the Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD). The RGB image is used as the cover medium, and the watermark image is converted into gray scale. Both are then transformed using the DWT so that they can be split into several sub-bands, namely LL2, LH2 and HL2. The watermark image is embedded into the cover medium in the LL2 sub-band. This scheme aims to obtain a higher robustness level than the previous method, which performs SVD matrix factorization of the image for copyright protection. The experimental results show that the proposed method is robust against several image-processing attacks such as Gaussian, Poisson, and salt-and-pepper noise, with average Normalized Correlation (NC) values of 0.574863, 0.889784 and 0.889782, respectively. The watermark image can be detected and extracted.
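A minimal sketch of one common DWT-SVD embedding variant consistent with the description above (embedding in the singular values of the LL2 sub-band). The 'haar' wavelet, the strength alpha, the random images and the helper name are assumptions for illustration, not necessarily the exact scheme of the paper.

```python
import numpy as np
import pywt  # PyWavelets

def embed_watermark(cover_gray, watermark_gray, alpha=0.05):
    """Hypothetical DWT+SVD embedding: watermark must match the LL2 sub-band size."""
    coeffs = pywt.wavedec2(cover_gray, 'haar', level=2)   # approximation LL2 first, then detail sub-bands
    LL2 = coeffs[0]
    Uc, Sc, Vct = np.linalg.svd(LL2, full_matrices=False)
    Uw, Sw, Vwt = np.linalg.svd(watermark_gray, full_matrices=False)
    coeffs[0] = Uc @ np.diag(Sc + alpha * Sw) @ Vct       # embed in the singular values of LL2
    marked = pywt.waverec2(coeffs, 'haar')
    return marked, (Uw, Sc, Vwt)                          # side information kept for extraction

cover = np.random.rand(256, 256)          # gray-scale cover image (stand-in)
mark = np.random.rand(64, 64)             # watermark resized to the LL2 size: 256 / 2**2 = 64
marked, keys = embed_watermark(cover, mark)
```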
Surface-peaked medium effects in the interaction of nucleons with finite nuclei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguayo, F. J.; Arellano, H. F.
We investigate the asymptotic separation of the optical model potential for nucleon-nucleus scattering in momentum space, where the potential is split into a medium-independent term and another depending exclusively on the gradient of the density-dependent g matrix. This decomposition confines the medium sensitivity of the nucleon-nucleus coupling to the surface of the nucleus. We examine this feature in the context of proton-nucleus scattering at beam energies between 30 and 100 MeV and find that the pn coupling accounts for most of this sensitivity. Additionally, based on this general structure of the optical potential we are able to treat both the medium dependence of the effective interaction and the full mixed density as described by single-particle shell models. The calculated scattering observables agree within 10% with those obtained by Arellano, Brieva, and Love in their momentum-space g-folding approach.
Baust, Maximilian; Weinmann, Andreas; Wieczorek, Matthias; Lasser, Tobias; Storath, Martin; Navab, Nassir
2016-08-01
In this paper, we consider combined TV denoising and diffusion tensor fitting in DTI using the affine-invariant Riemannian metric on the space of diffusion tensors. Instead of first fitting the diffusion tensors, and then denoising them, we define a suitable TV type energy functional which incorporates the measured DWIs (using an inverse problem setup) and which measures the nearness of neighboring tensors in the manifold. To approach this functional, we propose generalized forward-backward splitting algorithms which combine an explicit and several implicit steps performed on a decomposition of the functional. We validate the performance of the derived algorithms on synthetic and real DTI data. In particular, we work on real 3D data. To our knowledge, the present paper describes the first approach to TV regularization in a combined manifold and inverse problem setup.
A Neutral Silicon/Phosphorus Frustrated Lewis Pair.
Waerder, Benedikt; Pieper, Martin; Körte, Leif A; Kinder, Timo A; Mix, Andreas; Neumann, Beate; Stammler, Hans-Georg; Mitzel, Norbert W
2015-11-02
Frustrated Lewis pairs (FLPs) have a great potential for activation of small molecules. Most known FLP systems are based on boron or aluminum atoms as acid functions, few on zinc, and only two on boron-isoelectronic silicenium cation systems. The first FLP system based on a neutral silane, (C2F5)3SiCH2P(tBu)2 (1), was prepared from (C2F5)3SiCl with C2F5 groups of very high electronegativity and LiCH2P(tBu)2. 1 is capable of cleaving hydrogen, and adds CO2 and SO2. Hydrogen splitting was confirmed by H/D scrambling reactions. The structures of 1, its CO2 and SO2 adducts, and a decomposition product with CO2 were elucidated by X-ray diffraction. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Tsujimoto, Naoki; Saraya, Takeshi; Light, Richard W.; Tsukahara, Yayoi; Koide, Takashi; Kurai, Daisuke; Ishii, Haruyuki; Kimura, Hirokazu; Goto, Hajime; Takizawa, Hajime
2015-01-01
Background Pleural separation, the “split pleura” sign, has been reported in patients with empyema. However, the diagnostic yield of the split pleura sign for complicated parapneumonic effusion (CPPE)/empyema and its utility for differentiating CPPE/empyema from parapneumonic effusion (PPE) remains unclear. This differentiation is important because CPPE/empyema patients need thoracic drainage. In this regard, the aim of this study was to develop a simple method to distinguish CPPE/empyema from PPE using computed tomography (CT) focusing on the split pleura sign, fluid attenuation values (HU: Hounsfield units), and amount of fluid collection measured on thoracic CT prior to diagnostic thoracentesis. Methods A total of 83 consecutive patients who underwent chest CT and were diagnosed with CPPE (n=18)/empyema (n=18) or PPE (n=47) based on the diagnostic thoracentesis were retrospectively analyzed. Results On univariate analysis, the split pleura sign (odds ratio (OR), 12.1; p<0.001), total amount of pleural effusion (≥30 mm) (OR, 6.13; p<0.001), HU value≥10 (OR, 5.94; p=0.001), and the presence of septum (OR, 6.43; p=0.018), atelectasis (OR, 6.83; p=0.002), or air (OR, 9.90; p=0.002) in pleural fluid were significantly higher in the CPPE/empyema group than in the PPE group. On multivariate analysis, only the split pleura sign (hazard ratio (HR), 6.70; 95% confidence interval (CI), 1.91-23.5; p=0.003) and total amount of pleural effusion (≥30 mm) on thoracic CT (HR, 7.48; 95%CI, 1.76-31.8; p=0.006) were risk factors for empyema. Sensitivity, specificity, positive predictive value, and negative predictive value of the presence of both split pleura sign and total amount of pleural effusion (≥30 mm) on thoracic CT for CPPE/empyema were 79.4%, 80.9%, 75%, and 84.4%, respectively, with an area under the curve of 0.801 on receiver operating characteristic curve analysis. Conclusion This study showed a high diagnostic yield of the split pleura sign and total amount of pleural fluid (≥30 mm) on thoracic CT that is useful and simple for discriminating between CPPE/empyema and PPE prior to diagnostic thoracentesis. PMID:26076488
The initial value problem as it relates to numerical relativity.
Tichy, Wolfgang
2017-02-01
Spacetime is foliated by spatial hypersurfaces in the 3+1 split of general relativity. The initial value problem then consists of specifying initial data for all fields on one such a spatial hypersurface, such that the subsequent evolution forward in time is fully determined. On each hypersurface the 3-metric and extrinsic curvature describe the geometry. Together with matter fields such as fluid velocity, energy density and rest mass density, the 3-metric and extrinsic curvature then constitute the initial data. There is a lot of freedom in choosing such initial data. This freedom corresponds to the physical state of the system at the initial time. At the same time the initial data have to satisfy the Hamiltonian and momentum constraint equations of general relativity and can thus not be chosen completely freely. We discuss the conformal transverse traceless and conformal thin sandwich decompositions that are commonly used in the construction of constraint satisfying initial data. These decompositions allow us to specify certain free data that describe the physical nature of the system. The remaining metric fields are then determined by solving elliptic equations derived from the constraint equations. We describe initial data for single black holes and single neutron stars, and how we can use conformal decompositions to construct initial data for binaries made up of black holes or neutron stars. Orbiting binaries will emit gravitational radiation and thus lose energy. Since the emitted radiation tends to circularize the orbits over time, one can thus expect that the objects in a typical binary move on almost circular orbits with slowly shrinking radii. This leads us to the concept of quasi-equilibrium, which essentially assumes that time derivatives are negligible in corotating coordinates for binaries on almost circular orbits. We review how quasi-equilibrium assumptions can be used to make physically well motivated approximations that simplify the elliptic equations we have to solve.
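For reference, the Hamiltonian and momentum constraints mentioned above, together with the conformal rescaling of the 3-metric, are commonly written as

$$
{}^{(3)}R + K^{2} - K_{ij}K^{ij} = 16\pi\rho, \qquad
D_{j}\!\left(K^{ij} - \gamma^{ij}K\right) = 8\pi S^{i}, \qquad
\gamma_{ij} = \psi^{4}\,\bar{\gamma}_{ij},
$$

where ρ and S^i are the energy and momentum densities of the matter, D_j is the covariant derivative compatible with the 3-metric γ_ij, K_ij is the extrinsic curvature with trace K, and ψ is the conformal factor determined by solving the elliptic equations derived from these constraints.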
Miyamoto, Kenji; Kuwano, Shigeru; Terada, Jun; Otaka, Akihiro
2016-01-25
We analyze the mobile fronthaul (MFH) bandwidth and the wireless transmission performance in the split-PHY processing (SPP) architecture, which redefines the functional split of centralized/cloud RAN (C-RAN) while preserving high wireless coordinated multi-point (CoMP) transmission/reception performance. The SPP architecture splits the base station (BS) functions between wireless channel coding/decoding and wireless modulation/demodulation, and employs its own CoMP joint transmission and reception schemes. Simulation results show that the SPP architecture reduces the MFH bandwidth by up to 97% compared with conventional C-RAN while matching the wireless bit error rate (BER) performance of conventional C-RAN in uplink joint reception with only a 2-dB signal-to-noise ratio (SNR) penalty.
Band splitting in Cd3As2 measured by magnetotransport
NASA Astrophysics Data System (ADS)
Desrat, W.; Krishtopenko, S. S.; Piot, B. A.; Orlita, M.; Consejo, C.; Ruffenach, S.; Knap, W.; Nateprov, A.; Arushanov, E.; Teppe, F.
2018-06-01
Magnetotransport measurements have been performed on (112)-oriented bulk Cd3As2 samples with in situ rotation at low temperature. The frequency analysis of the Shubnikov-de Haas oscillations reveals two weakly separated frequencies arising from two Fermi ellipsoids. The angle dependence of these frequencies is fitted by an analytical expression that we derived for any magnetic field orientation. It is based on an 8×8 k·p model which includes the spin-orbit coupling, the crystal field splitting due to tetragonal distortion, and the additional band splitting occurring in noncentrosymmetric crystals. This band splitting is evaluated to be 30 meV, demonstrating the absence of inversion symmetry in our Cd3As2 crystal.
NASA Astrophysics Data System (ADS)
Cheng, Way Lee; Han, Arum; Sadr, Reza
2016-11-01
Droplet splitting is the breakup of a parent droplet into two or more daughter droplets of desired sizes; it is performed to improve production efficiency and investigational capacity in microfluidic devices. Passive splitting is the breakup of droplets into precise volume ratios at predetermined locations without external power sources. In this study, a 3-D simulation using the Volume-of-Fluid method was conducted to analyze the breakup process of a droplet in asymmetric T-junctions with different outlet arm lengths. The arrangement allows a droplet to be split into two smaller droplets of different sizes, where the volumetric ratio of the daughter droplets depends on the length ratio of the outlet arms. The study identified different breakup regimes, such as primary, transition, bubble and non-breakup, under different flow conditions and channel configurations. Furthermore, a closer analysis of the primary breakup regime was performed to determine the breakup mechanisms at various flow conditions. The analysis shows that the breakup mechanism in asymmetric T-junctions differs from that of a regular split. A pseudo-phenomenological model for the breakup criteria is presented at the end; the model is an expanded version of a theoretically derived model for symmetric droplet breakup. The Qatar National Research Fund (a member of the Qatar Foundation), under Grant NPRP 5-671-2-278, supported this work.
NASA Astrophysics Data System (ADS)
Bachmann, M.; Besse, P. A.; Melchior, H.
1995-10-01
Overlapping-image multimode interference (MMI) couplers, a new class of devices, permit uniform and nonuniform power splitting. A theoretical description directly relates coupler geometry to image intensities, positions, and phases. Among many possibilities of nonuniform power splitting, examples of 1 × 2 couplers with ratios of 15:85 and 28:72 are given. An analysis of uniform power splitters includes the well-known 2 × N and 1 × N MMI couplers. Applications of MMI couplers include mode filters, mode splitters-combiners, and mode converters.
Thermal Decomposition Model Development of EN-7 and EN-8 Polyurethane Elastomers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keedy, Ryan Michael; Harrison, Kale Warren; Cordaro, Joseph Gabriel
Thermogravimetric analysis - gas chromatography/mass spectrometry (TGA-GC/MS) experiments were performed on EN-7 and EN-8, analyzed, and reported in [1]. This SAND report derives and describes pyrolytic thermal decomposition models for use in predicting the responses of EN-7 and EN-8 in an abnormal thermal environment.
Perrault, Katelynn A; Stefanuto, Pierre-Hugues; Stuart, Barbara H; Rai, Tapan; Focant, Jean-François; Forbes, Shari L
2015-01-01
Challenges in decomposition odour profiling have led to variation in the documented odour profile by different research groups worldwide. Background subtraction and use of controls are important considerations given the variation introduced by decomposition studies conducted in different geographical environments. The collection of volatile organic compounds (VOCs) from soil beneath decomposing remains is challenging due to the high levels of inherent soil VOCs, further confounded by the use of highly sensitive instrumentation. This study presents a method that provides suitable chromatographic resolution for profiling decomposition odour in soil by comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry using appropriate controls and field blanks. Logarithmic transformation and t-testing of compounds permitted the generation of a compound list of decomposition VOCs in soil. Principal component analysis demonstrated the improved discrimination between experimental and control soil, verifying the value of the data handling method. Data handling procedures have not been well documented in this field and standardisation would thereby reduce misidentification of VOCs present in the surrounding environment as decomposition byproducts. Uniformity of data handling and instrumental procedures will reduce analytical variation, increasing confidence in the future when investigating the effect of taphonomic variables on the decomposition VOC profile. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Odor analysis of decomposing buried human remains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vass, Arpad Alexander; Smith, Rob R; Thompson, Cyril V
2008-01-01
This study, conducted at the University of Tennessee's Anthropological Research Facility (ARF), lists and ranks the primary chemical constituents which define the odor of decomposition of human remains as detected at the soil surface of shallow burial sites. Triple sorbent traps were used to collect air samples in the field and revealed eight major classes of chemicals which now contain 478 specific volatile compounds associated with burial decomposition. Samples were analyzed using gas chromatography-mass spectrometry (GC-MS) and were collected below and above the body, and at the soil surface of 1.5-3.5 ft. (0.46-1.07 m) deep burial sites of four individuals over a 4-year time span. New data were incorporated into the previously established Decompositional Odor Analysis (DOA) Database providing identification, chemical trends, and semi-quantitation of chemicals for evaluation. This research identifies the 'odor signatures' unique to the decomposition of buried human remains with projected ramifications on human remains detection canine training procedures and in the development of field portable analytical instruments which can be used to locate human remains in shallow burial sites.
Analysis of Microstrip Line Fed Patch Antenna for Wireless Communications
NASA Astrophysics Data System (ADS)
Singh, Ashish; Aneesh, Mohammad; Kamakshi; Ansari, J. A.
2017-11-01
In this paper, a theoretical analysis of a microstrip-line-fed rectangular patch antenna loaded with a parasitic element and a split-ring resonator is presented. For the proposed antenna, dual-band operation depends on the gap between the parasitic element and the split-ring resonator and on the length and width of the microstrip line. It is found that the antenna resonates at two distinct modes, i.e., 0.9 GHz and 1.8 GHz for the lower and upper resonance frequencies, respectively, giving a dual-frequency response with a frequency ratio of 2.0. The characteristics of the microstrip-line-fed rectangular patch antenna loaded with a parasitic element and split-ring resonator are compared with those of other prototype microstrip-line-fed antennas. Further, the theoretical results are compared with simulated and reported experimental results, and they are in close agreement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dernotte, Jeremie; Dec, John E.; Ji, Chunsheng
A detailed understanding of the various factors affecting the trends in gross-indicated thermal efficiency with changes in key operating parameters has been carried out, applied to a one-liter displacement single-cylinder boosted Low-Temperature Gasoline Combustion (LTGC) engine. This work systematically investigates how the supplied fuel energy splits into the following four energy pathways: gross-indicated thermal efficiency, combustion inefficiency, heat transfer and exhaust losses, and how this split changes with operating conditions. Additional analysis is performed to determine the influence of variations in the ratio of specific heat capacities (γ) and the effective expansion ratio, related to the combustion-phasing retard (CA50), on the energy split. Heat transfer and exhaust losses are computed using multiple standard cycle analysis techniques. Furthermore, the various methods are evaluated in order to validate the trends.
NASA Technical Reports Server (NTRS)
Consoli, Robert David; Sobieszczanski-Sobieski, Jaroslaw
1990-01-01
Advanced multidisciplinary analysis and optimization methods, namely system sensitivity analysis and non-hierarchical system decomposition, are applied to reduce the cost and improve the visibility of an automated vehicle design synthesis process. This process is inherently complex due to the large number of functional disciplines and associated interdisciplinary couplings. Recent developments in system sensitivity analysis as applied to complex non-hierarchic multidisciplinary design optimization problems enable the decomposition of these complex interactions into sub-processes that can be evaluated in parallel. The application of these techniques results in significant cost, accuracy, and visibility benefits for the entire design synthesis process.
Harmonic analysis of traction power supply system based on wavelet decomposition
NASA Astrophysics Data System (ADS)
Dun, Xiaohong
2018-05-01
With the rapid development of high-speed rail and heavy-haul transport, AC-drive electric locomotives and EMUs operate on a large scale across the country, and the electrified railway has become the main harmonic source in China's power grid. This makes timely monitoring, assessment and mitigation of the power quality problems of electrified railways necessary. The wavelet transform was developed on the basis of Fourier analysis; its basic idea comes from harmonic analysis and it rests on a rigorous theoretical model. It has inherited and developed the localization idea of the Gabor transform while overcoming disadvantages such as the fixed window and the lack of discrete orthogonality, and has therefore become a widely studied spectral analysis tool. Wavelet analysis uses progressively finer time-domain steps in the high-frequency part so as to focus on any detail of the signal being analyzed, allowing a comprehensive analysis of the harmonics of the traction power supply system, while the pyramid algorithm is used to increase the speed of the wavelet decomposition. Matlab simulation shows that using wavelet decomposition for harmonic spectrum analysis of the traction power supply system is effective.
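As a minimal illustration of the pyramid (multilevel DWT) idea described above, the sketch below decomposes a synthetic 50 Hz signal with 3rd and 5th harmonics into wavelet sub-bands and reports the relative energy in each detail level; the sampling rate, the 'db4' wavelet and the decomposition depth are assumptions, not the settings of the cited simulation.

```python
import numpy as np
import pywt  # PyWavelets

fs = 6400.0                                   # assumed sampling rate, Hz
t = np.arange(0, 0.2, 1 / fs)
# Synthetic traction-current-like signal: 50 Hz fundamental plus 3rd and 5th harmonics
i_t = (np.sin(2 * np.pi * 50 * t)
       + 0.3 * np.sin(2 * np.pi * 150 * t)
       + 0.1 * np.sin(2 * np.pi * 250 * t))

# Mallat pyramid algorithm: multilevel DWT splits the signal into frequency sub-bands
coeffs = pywt.wavedec(i_t, 'db4', level=5)    # [cA5, cD5, cD4, cD3, cD2, cD1]
total = np.sum(i_t ** 2)
for k, d in zip(range(5, 0, -1), coeffs[1:]):
    print(f"detail level {k}: relative energy {np.sum(d ** 2) / total:.3f}")
```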
Fast flux module detection using matroid theory.
Reimers, Arne C; Bruggeman, Frank J; Olivier, Brett G; Stougie, Leen
2015-05-01
Flux balance analysis (FBA) is one of the most often applied methods for genome-scale metabolic networks. Although FBA uniquely determines the optimal yield, the pathway that achieves it is usually not unique. The analysis of the optimal-yield flux space has been an open challenge: flux variability analysis captures only some properties of the flux space, while elementary mode analysis is intractable due to the enormous number of elementary modes. However, Kelk et al. (2012) found that the space of optimal-yield fluxes decomposes into flux modules. These decompositions allow a much easier but still comprehensive analysis of the optimal-yield flux space. Using the mathematical definition of module introduced by Müller and Bockmayr (2013b), we discovered useful connections to matroid theory, through which efficient algorithms enable us to compute the decomposition into modules in a few seconds for genome-scale networks. Since every module can be represented by one reaction that captures its function, we also present, in this article, a method that uses this decomposition to visualize the interplay of modules. We expect the new method to replace flux variability analysis in the pipelines for metabolic networks.
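For orientation, the FBA step that the module analysis builds on is simply a linear program, max c·v subject to S v = 0 and flux bounds. The toy stoichiometric matrix below is a made-up two-metabolite network for illustration; the module/matroid computation itself is not shown.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: metabolites A, B; reactions r1 (uptake of A), r2 (A -> B), r3 and r4 (drains of B)
S = np.array([[ 1, -1,  0,  0],     # mass balance of A
              [ 0,  1, -1, -1]])    # mass balance of B
c = np.array([0, 0, 1, 0])          # objective: maximise flux through r3
bounds = [(0, 10)] * 4              # flux bounds

# FBA: maximise c.v subject to S v = 0 (steady state) and the bounds
res = linprog(-c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
print("optimal yield:", -res.fun, "optimal flux distribution:", res.x)
```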
NASA Astrophysics Data System (ADS)
Hu, Xiaogang; Rymer, William Z.; Suresh, Nina L.
2014-04-01
Objective. The aim of this study is to assess the accuracy of a surface electromyogram (sEMG) motor unit (MU) decomposition algorithm during low levels of muscle contraction. Approach. A two-source method was used to verify the accuracy of the sEMG decomposition system, by utilizing simultaneous intramuscular and surface EMG recordings from the human first dorsal interosseous muscle recorded during isometric trapezoidal force contractions. Spike trains from each recording type were decomposed independently utilizing two different algorithms, EMGlab and dEMG decomposition algorithms. The degree of agreement of the decomposed spike timings was assessed for three different segments of the EMG signals, corresponding to specified regions in the force task. A regression analysis was performed to examine whether certain properties of the sEMG and force signal can predict the decomposition accuracy. Main results. The average accuracy of successful decomposition among the 119 MUs that were common to both intramuscular and surface records was approximately 95%, and the accuracy was comparable between the different segments of the sEMG signals (i.e., force ramp-up versus steady state force versus combined). The regression function between the accuracy and properties of sEMG and force signals revealed that the signal-to-noise ratio of the action potential and stability in the action potential records were significant predictors of the surface decomposition accuracy. Significance. The outcomes of our study confirm the accuracy of the sEMG decomposition algorithm during low muscle contraction levels and provide confidence in the overall validity of the surface dEMG decomposition algorithm.
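The two-source agreement described above can be sketched as a one-to-one matching of discharge times within a small tolerance. The 5 ms tolerance and the example firing times below are illustrative assumptions, not the study's actual criterion or data.

```python
import numpy as np

def decomposition_agreement(ref_times, test_times, tol=0.005):
    """Fraction of reference discharges matched one-to-one within +/- tol seconds (assumed tolerance)."""
    remaining = list(test_times)
    matched = 0
    for t_ref in ref_times:
        if not remaining:
            break
        j = int(np.argmin(np.abs(np.asarray(remaining) - t_ref)))
        if abs(remaining[j] - t_ref) <= tol:
            matched += 1
            remaining.pop(j)                  # enforce one-to-one matching
    return matched / len(ref_times)

# Hypothetical firing times (s) of one motor unit from intramuscular and surface decompositions
intra = np.array([0.100, 0.190, 0.280, 0.372, 0.465])
surface = np.array([0.1002, 0.1915, 0.2810, 0.3740, 0.5200])
print(f"agreement: {decomposition_agreement(intra, surface):.0%}")   # 4 of 5 matched -> 80%
```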
Thermal decomposition of ammonium perchlorate in the presence of Al(OH)3·Cr(OH)3 nanoparticles.
Zhang, WenJing; Li, Ping; Xu, HongBin; Sun, Randi; Qing, Penghui; Zhang, Yi
2014-03-15
An Al(OH)3·Cr(OH)3 nanoparticle preparation procedure and its catalytic effect and mechanism on the thermal decomposition of ammonium perchlorate (AP) were investigated using transmission electron microscopy (TEM), X-ray diffraction (XRD), thermogravimetric analysis and differential scanning calorimetry (TG-DSC), X-ray photoelectron spectroscopy (XPS), and thermogravimetric analysis coupled with mass spectrometry (TG-MS). In the preparation procedure, TEM, SAED, and FT-IR showed that the Al(OH)3·Cr(OH)3 particles were amorphous, with dimensions in the nanometer size regime, and contained a large amount of surface hydroxyl under the controllable preparation conditions. When the Al(OH)3·Cr(OH)3 nanoparticles were used as additives for the thermal decomposition of AP, the TG-DSC results showed that their addition remarkably decreased the onset temperature of AP decomposition from approximately 450°C to 245°C. The FT-IR, RS and XPS results confirmed that the surface hydroxyl content of the Al(OH)3·Cr(OH)3 nanoparticles decreased from 67.94% to 63.65%, and that the nanoparticles were only partially transformed from amorphous to crystalline after being used as additives for the thermal decomposition of AP. This behavior of the Al(OH)3·Cr(OH)3 nanoparticles promoted the oxidation of the NH3 released from AP to N2O first, as indicated by the TG-MS results, thus accelerating the AP thermal decomposition. Copyright © 2014 Elsevier B.V. All rights reserved.
Sponge-like silver obtained by decomposition of silver nitrate hexamethylenetetramine complex
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afanasiev, Pavel, E-mail: pavel.afanasiev@ircelyon.univ-lyon.fr
2016-07-15
Silver nitrate hexamethylenetetramine [Ag(NO3)·N4(CH2)6] coordination compound has been prepared via an aqueous route and characterized by chemical analysis, XRD and electron microscopy. Decomposition of [Ag(NO3)·N4(CH2)6] under hydrogen and under an inert atmosphere has been studied by thermal analysis and mass spectrometry. Thermal decomposition of [Ag(NO3)·N4(CH2)6] proceeds in the range 200-250 °C as a self-propagating rapid redox process accompanied by the release of multiple gases. The decomposition leads to the formation of sponge-like silver having a hierarchical open pore system with pore sizes spanning from 10 µm to 10 nm. The as-obtained silver sponges exhibited favorable activity toward H2O2 electrochemical reduction, making them potentially interesting as non-enzymatic hydrogen peroxide sensors. Graphical abstract: thermal decomposition of the silver nitrate hexamethylenetetramine coordination compound [Ag(NO3)·N4(CH2)6] leads to sponge-like silver that possesses an open porous structure and demonstrates interesting properties as an electrochemical hydrogen peroxide sensor. Highlights: • The [Ag(NO3)·N4(CH2)6] orthorhombic phase was prepared and characterized. • Decomposition of [Ag(NO3)·N4(CH2)6] leads to a metallic silver sponge with open porosity. • The Ag sponge showed promising properties as a material for hydrogen peroxide sensors.
NASA Astrophysics Data System (ADS)
Alkemade, R.; Van Rijswijk, P.
Large amounts of seaweed are deposited along the coast of Admiralty Bay, King George Island, Antarctica. The stranded seaweed partly decomposes on the beach and supports populations of meiofauna species, mostly nematodes. The factors determining the number of nematodes found in the seaweed packages were studied. Seaweed/sediment samples were collected from different locations along the coast near Arctowski station, covering gradients of salinity, elevation and proximity to penguin rookeries. At the same locations, the decomposition rate was determined by means of permeable containers filled with seaweed material. Models including the relations between location, seaweed and sediment characteristics, number of nematodes and decomposition rates were postulated and verified using path analysis. The most plausible and significant models are presented. The number of nematodes was directly correlated with the height of the location, the carbon-to-nitrogen ratio, and the salinity of the sample. Nematode numbers were apparently indirectly dependent on sediment composition and water content. We hypothesize that the different influences of melt water and tidal water, which affect both salinity and water content of the deposits, are important phenomena underlying these results. Analysis of the relation between decomposition rate and abiotic, location-related characteristics showed that the decomposition rate depended on the water content of the stranded seaweed and on sediment composition. Decomposition rates were high at locations where the water content of the deposits was high; there, running water from melt water run-off or from the surf probably increased weight losses of seaweed.
Experimental Modal Analysis and Dynamic Component Synthesis. Volume 3. Modal Parameter Estimation
1987-12-01
… residues as well as poles is achieved. A singular value decomposition method has been used to develop a complex mode indicator function (CMIF) [70], which can be used to help determine the number of poles before the analysis. The CMIF is formed by performing a singular value decomposition of all of … servo systems which can include both low and high damping modes. • CMIF can be used to indicate close or repeated eigenvalues before the parameter …
Lott, Michael J; Howa, John D; Chesson, Lesley A; Ehleringer, James R
2015-08-15
Elemental analyzer systems generate N2 and CO2 for elemental composition and isotope ratio measurements. As quantitative conversion of nitrogen in some materials (i.e., nitrate salts and nitro-organic compounds) is difficult, this study tests a recently published method - thermal decomposition without the addition of O2 - for the analysis of these materials. Elemental analyzer/isotope ratio mass spectrometry (EA/IRMS) was used to compare the traditional combustion method (CM) and the thermal decomposition method (TDM), where additional O2 is eliminated from the reaction. The comparisons used organic and inorganic materials with oxidized and/or reduced nitrogen and included ureas, nitrate salts, ammonium sulfate, nitro esters, and nitramines. Previous TDM applications were limited to nitrate salts and ammonium sulfate. The measurement precision and accuracy were compared to determine the effectiveness of converting materials containing different fractions of oxidized nitrogen into N2. The δ13C(VPDB) values were not meaningfully different when measured via CM or TDM, allowing for the analysis of multiple elements in one sample. For materials containing oxidized nitrogen, 15N measurements made using thermal decomposition were more precise than those made using combustion. The precision was similar between the methods for materials containing reduced nitrogen. The %N values were closer to theoretical when measured by TDM than by CM. The δ15N(AIR) values of purchased nitrate salts and ureas were nearer to the known values when analyzed using thermal decomposition than using combustion. The thermal decomposition method addresses insufficient recovery of nitrogen during elemental analysis in a variety of organic and inorganic materials. Its implementation requires relatively few changes to the elemental analyzer. Using TDM, it is possible to directly calibrate certain organic materials to international nitrate isotope reference materials without off-line preparation. Copyright © 2015 John Wiley & Sons, Ltd.
An operational modal analysis method in frequency and spatial domain
NASA Astrophysics Data System (ADS)
Wang, Tong; Zhang, Lingmi; Tamura, Yukio
2005-12-01
A frequency and spatial domain decomposition method (FSDD) for operational modal analysis (OMA) is presented in this paper, which is an extension of the complex mode indicator function (CMIF) method for experimental modal analysis (EMA). The theoretical background of the FSDD method is clarified. Singular value decomposition is adopted to separate the signal space from the noise space. Finally, an enhanced power spectrum density (PSD) is proposed to obtain more accurate modal parameters by curve fitting in the frequency domain. Moreover, a simulation case and an application case are used to validate this method.
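A minimal output-only sketch of the underlying indicator function: build the cross-power spectral density matrix between response channels and take its singular values at each frequency line, whose peaks indicate modes. The synthetic three-channel data, Welch settings and 12.5 Hz test mode are assumptions, not the cases analysed in the paper, and the enhanced-PSD curve-fitting step is omitted.

```python
import numpy as np
from scipy.signal import csd

def cmif(responses, fs, nperseg=1024):
    """Singular values of the cross-PSD matrix at each frequency (CMIF-style indicator)."""
    n_ch = responses.shape[0]
    f, _ = csd(responses[0], responses[0], fs=fs, nperseg=nperseg)
    G = np.zeros((len(f), n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, G[:, i, j] = csd(responses[i], responses[j], fs=fs, nperseg=nperseg)
    return f, np.linalg.svd(G, compute_uv=False)    # shape (n_freq, n_ch)

rng = np.random.default_rng(2)
fs = 256.0
t = np.arange(0, 20, 1 / fs)
mode = np.sin(2 * np.pi * 12.5 * t)                 # hypothetical 12.5 Hz mode
y = np.vstack([a * mode + 0.5 * rng.normal(size=t.size) for a in (1.0, 0.6, -0.8)])
f, s = cmif(y, fs)
print("first singular value peaks near", f[np.argmax(s[:, 0])], "Hz")
```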
Linear stability analysis of detonations via numerical computation and dynamic mode decomposition
NASA Astrophysics Data System (ADS)
Kabanov, Dmitry I.; Kasimov, Aslan R.
2018-03-01
We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.
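A generic sketch of the dynamic mode decomposition step used for the spectral analysis (standard "exact DMD" of a snapshot matrix); the synthetic decaying oscillation, truncation rank and time step are assumptions, not the detonation solution data of the paper.

```python
import numpy as np

def dmd(snapshots, dt, rank):
    """Exact DMD: continuous-time exponents (growth rate + i*frequency) of the fitted linear operator."""
    X1, X2 = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vt = np.linalg.svd(X1, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    A_tilde = U.conj().T @ X2 @ Vt.conj().T @ np.diag(1.0 / s)
    mu, W = np.linalg.eig(A_tilde)
    omega = np.log(mu) / dt                   # Re: growth/decay rate, Im: angular frequency
    modes = X2 @ Vt.conj().T @ np.diag(1.0 / s) @ W
    return omega, modes

# Synthetic example: a single decaying oscillation on 64 spatial points
dt = 0.01
t = np.arange(0, 2, dt)
x = np.linspace(0, 1, 64)[:, None]
data = np.sin(np.pi * x) * np.real(np.exp((-0.5 + 2j * np.pi * 3) * t))[None, :]
omega, _ = dmd(data, dt, rank=2)
print("identified exponents:", omega)         # expect approximately -0.5 +/- i*2*pi*3
```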
Olejarczyk, Elzbieta; Bogucki, Piotr; Sobieszek, Aleksander
2017-01-01
Electroencephalographic (EEG) patterns were analyzed in a group of ambulatory patients of varying age and sex using spectral analysis as well as the Directed Transfer Function, a method used to evaluate functional brain connectivity. We tested the impact of window size and choice of reference electrode on the identification of two or more peaks with close frequencies in the spectral power distribution, the so-called "split alpha." Together with the connectivity analysis, examination of spatiotemporal maps showing the distribution of amplitudes of EEG patterns allowed for a better explanation of the mechanisms underlying the generation of split alpha peaks. It was demonstrated that the split alpha spectrum can be generated by two or more independent and interconnected alpha wave generators located in different regions of the cerebral cortex, but not necessarily in the occipital cortex. We also demonstrated the importance of appropriate reference electrode choice during signal recording. In addition, results obtained using the original data were compared with results obtained using re-referenced data, using average reference electrode and reference electrode standardization techniques.
NASA Astrophysics Data System (ADS)
Tiwari, Ashwani Kant; Bhushan, Kirti; Eken, Tuna; Singh, Arun
2018-06-01
New shear wave splitting measurements are obtained from the Bengal Basin using core-mantle refracted SKS, PKS, and SKKS phases. The splitting parameters, namely time delays (δt) and fast polarization directions (ϕ), were estimated through analysis of 54 high-quality waveforms (signal-to-noise ratio ⩾ 2.5) from 30 earthquakes with magnitude ⩾ 5.5 recorded at ten seismic stations deployed over Bangladesh. No evidence of splitting was found, which indicates azimuthal isotropy beneath the region. These null measurements can be explained by either vertically dipping anisotropic fast axes or by the presence of multiple horizontal anisotropic layers with different fast polarization directions, whose combined effect results in a null characterization. The anisotropic fabric preserved from the rifting episodes of Antarctica and India, subduction-related dynamics of the Indo-Burmese convergence zone, and the northward movement of the Indian plate creating shear at the base of the lithosphere can explain the observed null measurements. The combined effect of all these most likely results in strong vertical anisotropic heterogeneity, creating the observed null results.
Multi-reader ROC studies with split-plot designs: a comparison of statistical methods.
Obuchowski, Nancy A; Gallas, Brandon D; Hillis, Stephen L
2012-12-01
Multireader imaging trials often use a factorial design, in which study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of this design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper, the authors compare three methods of analysis for the split-plot design. Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean analysis-of-variance approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% confidence intervals falls close to the nominal coverage for small and large sample sizes. The split-plot multireader, multicase study design can be statistically efficient compared to the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rates, similar power, and nominal confidence interval coverage, are available for this study design. Copyright © 2012 AUR. All rights reserved.
Gaikwad, Ruchi; Ghorai, Soumajit; Amin, Sk Abdul; Adhikari, Nilanjan; Patel, Tarun; Das, Kalpataru; Jha, Tarun; Gayen, Shovanlal
2018-06-01
Breast cancer is one of the leading cancers among women worldwide. Compounds with a phenylindole scaffold were found to exhibit promising cytotoxicity against the breast cancer cell line MCF7. In the present study, a Monte Carlo based QSAR analysis was performed on a dataset of 102 phenylindoles in order to accelerate efforts to find better cytotoxic phenylindoles against the MCF7 cell line. The statistical quality of the generated models was good with respect to both internal and external validation. The best models from each split (Split 1: R² = 0.6944, Q² = 0.6495; Split 2: R² = 0.8202, Q² = 0.7998; Split 3: R² = 0.8603, Q² = 0.8357) for the test set were selected, and a Y-scrambling test and applicability domain analysis were also performed to ensure the robustness of these models. Among these models, the model from split 3, obtained using hybrid descriptors (a combination of SMILES and HSG with 0 ECk connectivity), was used to identify and classify the structural attributes acting as promoters or hinderers of cytotoxicity for these 2-phenylindole derivatives. Results from the analysis were further used to design and predict some probable new 2-phenylindole derivatives with promising cytotoxicity (IC50 < 55 nM) against MCF7. Copyright © 2018 Elsevier Ltd. All rights reserved.
Zhao, Henan; Bryant, Garnett W.; Griffin, Wesley; Terrill, Judith E.; Chen, Jian
2017-01-01
We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches - direct linear representation, logarithmic, and text display commonly used in scientific visualizations. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to linear mapping and by four times to logarithmic in discrimination tasks; (2) SplitVectors have no significant differences from the textual display approach, but reduce cluttering in the scene; (3) SplitVectors and textual display are less sensitive to data scale than linear and logarithmic approaches; (4) using logarithmic can be problematic as participants' confidence was as high as directly reading from the textual display, but their accuracy was poor; and (5) Stereoscopy improved performance, especially in more challenging discrimination tasks. PMID:28113469
Mode decomposition and Lagrangian structures of the flow dynamics in orbitally shaken bioreactors
NASA Astrophysics Data System (ADS)
Weheliye, Weheliye Hashi; Cagney, Neil; Rodriguez, Gregorio; Micheletti, Martina; Ducci, Andrea
2018-03-01
In this study, two mode decomposition techniques were applied and compared to assess the flow dynamics in an orbital shaken bioreactor (OSB) of cylindrical geometry and flat bottom: proper orthogonal decomposition and dynamic mode decomposition. Particle Image Velocimetry (PIV) experiments were carried out for different operating conditions including fluid height, h, and shaker rotational speed, N. A detailed flow analysis is provided for conditions when the fluid and vessel motions are in-phase (Fr = 0.23) and out-of-phase (Fr = 0.47). PIV measurements in vertical and horizontal planes were combined to reconstruct low order models of the full 3D flow and to determine its Finite-Time Lyapunov Exponent (FTLE) within OSBs. The combined results from the mode decomposition and the FTLE fields provide a useful insight into the flow dynamics and Lagrangian coherent structures in OSBs and offer a valuable tool to optimise bioprocess design in terms of mixing and cell suspension.
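For reference, the proper orthogonal decomposition used above reduces, for snapshot data, to an SVD of the mean-subtracted snapshot matrix; the random stand-in data below are an assumption for illustration, not the PIV fields of the study.

```python
import numpy as np

def pod(snapshots):
    """POD of a snapshot matrix (space x time): spatial modes, singular values, temporal coefficients."""
    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)   # subtract the mean flow
    U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
    energy = s ** 2 / np.sum(s ** 2)          # relative energy content of each mode
    return U, s, Vt, energy

rng = np.random.default_rng(3)
snapshots = rng.normal(size=(500, 120))       # e.g. 500 stacked velocity components, 120 PIV frames
modes, s, temporal, energy = pod(snapshots)
print("energy captured by the first 3 modes:", energy[:3].sum())
```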
Further insights into the kinetics of thermal decomposition during continuous cooling.
Liavitskaya, Tatsiana; Guigo, Nathanaël; Sbirrazzuoli, Nicolas; Vyazovkin, Sergey
2017-07-26
Following the previous work (Phys. Chem. Chem. Phys., 2016, 18, 32021), this study continues to investigate the intriguing phenomenon of thermal decomposition during continuous cooling. The phenomenon can be detected and its kinetics can be measured by means of thermogravimetric analysis (TGA). The kinetics of the thermal decomposition of ammonium nitrate (NH4NO3), nickel oxalate (NiC2O4), and lithium sulfate monohydrate (Li2SO4·H2O) have been measured upon heating and cooling and analyzed by means of the isoconversional methodology. The results have confirmed the hypothesis that the respective kinetics should be similar for single-step processes (NH4NO3 decomposition) but different for multi-step ones (NiC2O4 decomposition and Li2SO4·H2O dehydration). It has been discovered that the differences in the kinetics can be either quantitative or qualitative. Physical insights into the nature of the differences have been proposed.
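The isoconversional step can be sketched as follows: at a fixed conversion α, ln(dα/dt) plotted against 1/T over the different temperature programs is a straight line of slope −Ea/R (Friedman's method). The numerical values below are hypothetical, not the measured NH4NO3, NiC2O4 or Li2SO4·H2O data.

```python
import numpy as np

R = 8.314  # J mol^-1 K^-1

def friedman_Ea(T_at_alpha, dadt_at_alpha):
    """Friedman isoconversional estimate at one conversion: slope of ln(da/dt) vs 1/T gives -Ea/R."""
    slope, _ = np.polyfit(1.0 / np.asarray(T_at_alpha), np.log(np.asarray(dadt_at_alpha)), 1)
    return -slope * R

# Hypothetical values read off TGA curves at alpha = 0.3 for three temperature programs
T_runs = [553.0, 561.0, 570.0]        # temperatures at alpha = 0.3, K
dadt_runs = [2.1e-4, 3.4e-4, 5.6e-4]  # conversion rates at alpha = 0.3, 1/s
print(f"Ea(alpha = 0.3) ~ {friedman_Ea(T_runs, dadt_runs) / 1000:.0f} kJ/mol")
```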
3D quantitative analysis of early decomposition changes of the human face.
Caplova, Zuzana; Gibelli, Daniele Maria; Poppa, Pasquale; Cummaudo, Marco; Obertova, Zuzana; Sforza, Chiarella; Cattaneo, Cristina
2018-03-01
Decomposition of the human body and human face is influenced, among other things, by environmental conditions. The early decomposition changes that modify the appearance of the face may hamper the recognition and identification of the deceased. Quantitative assessment of those changes may provide important information for forensic identification. This report presents a pilot 3D quantitative approach of tracking early decomposition changes of a single cadaver in controlled environmental conditions by summarizing the change with weekly morphological descriptions. The root mean square (RMS) value was used to evaluate the changes of the face after death. The results showed a high correlation (r = 0.863) between the measured RMS and the time since death. RMS values of each scan are presented, as well as the average weekly RMS values. The quantification of decomposition changes could improve the accuracy of antemortem facial approximation and potentially could allow the direct comparisons of antemortem and postmortem 3D scans.
Kinetics of Thermal Decomposition of Ammonium Perchlorate by TG/DSC-MS-FTIR
NASA Astrophysics Data System (ADS)
Zhu, Yan-Li; Huang, Hao; Ren, Hui; Jiao, Qing-Jie
2014-01-01
The method of thermogravimetry/differential scanning calorimetry-mass spectrometry-Fourier transform infrared (TG/DSC-MS-FTIR) simultaneous analysis has been used to study thermal decomposition of ammonium perchlorate (AP). The processing of nonisothermal data at various heating rates was performed using NETZSCH Thermokinetics. The MS-FTIR spectra showed that N2O and NO2 were the main gaseous products of the thermal decomposition of AP, and there was a competition between the formation reaction of N2O and that of NO2 during the process with an iso-concentration point of N2O and NO2. The dependence of the activation energy calculated by Friedman's iso-conversional method on the degree of conversion indicated that the AP decomposition process can be divided into three stages, which are autocatalytic, low-temperature diffusion and high-temperature, stable-phase reaction. The corresponding kinetic parameters were determined by multivariate nonlinear regression and the mechanism of the AP decomposition process was proposed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glascoe, E A; Hsu, P C; Springer, H K
PBXN-9, an HMX-formulation, is thermally damaged and thermally decomposed in order to determine the morphological changes and decomposition kinetics that occur in the material after mild to moderate heating. The material and its constituents were decomposed using standard thermal analysis techniques (DSC and TGA) and the decomposition kinetics are reported using different kinetic models. Pressed parts and prill were thermally damaged, i.e. heated to temperatures that resulted in material changes but did not result in significant decomposition or explosion, and analyzed. In general, the thermally damaged samples showed a significant increase in porosity, a decrease in density and a small amount of weight loss. These PBXN-9 samples appear to sustain more thermal damage than similar HMX-Viton A formulations and the most likely reasons are the decomposition/evaporation of a volatile plasticizer and a polymorphic transition of the HMX from β to δ phase.
Measurement of fracture toughness by nanoindentation methods: Recent advances and future challenges
Sebastiani, Marco; Johanns, K. E.; Herbert, Erik G.; ...
2015-04-30
In this study, we describe recent advances and developments for the measurement of fracture toughness at small scales by the use of nanoindentation-based methods, including techniques based on micro-cantilever beam bending and micro-pillar splitting. A critical comparison of the techniques is made by testing a selected group of bulk and thin film materials. For pillar splitting, cohesive zone finite element simulations are used to validate a simple relationship between the critical load at failure, the pillar radius, and the fracture toughness for a range of material properties and coating/substrate combinations. The minimum pillar diameter required for nucleation and growth of a crack during indentation is also estimated. An analysis of pillar splitting for a film on a dissimilar substrate material shows that the critical load for splitting is relatively insensitive to the substrate compliance for a large range of material properties. Experimental results from a selected group of materials show good agreement between single cantilever and pillar splitting methods, while a discrepancy of ~25% is found between the pillar splitting technique and double-cantilever testing. It is concluded that both the micro-cantilever and pillar splitting techniques are valuable methods for micro-scale assessment of fracture toughness of brittle ceramics, provided the underlying assumptions can be validated. Although the pillar splitting method has some advantages because of the simplicity of sample preparation and testing, it is not applicable to most metals because their higher toughness prevents splitting; in this case, micro-cantilever bend testing is preferred.
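For orientation, the pillar-splitting relationship validated by the cohesive-zone simulations is usually quoted in the form

$$ K_{c} = \gamma\,\frac{P_{c}}{R^{3/2}}, $$

where P_c is the critical load at splitting, R the pillar radius, and γ a dimensionless coefficient obtained from the finite element calibration as a function of the material's elastic-modulus-to-hardness ratio; the calibrated γ values are not reproduced here, and the exact form should be checked against the original paper.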
Destruction and guilt: splitting and reintegration in the analysis of a traumatised patient.
Henningsen, Franziska
2005-04-01
The author traces in detail how, in the analytic relationship, she was slowly able to read aspects of the trauma as 'quotations' and gradually, through transference, transform them into a symbolic language. Split-off aggression and guilt feelings became progressively accessible to interpretation through transferential projective identifications. During his analysis, the patient discovered he was the child of Nazi criminals: on his mother's side they were the third generation; on his father's side, the second.
Pi2 detection using Empirical Mode Decomposition (EMD)
NASA Astrophysics Data System (ADS)
Mieth, Johannes Z. D.; Frühauff, Dennis; Glassmeier, Karl-Heinz
2017-04-01
Empirical Mode Decomposition has been used as an alternative to wavelet transformation to identify onset times of Pi2 pulsations in data sets of the Scandinavian Magnetometer Array (SMA). Pi2 pulsations are magnetohydrodynamic waves occurring during magnetospheric substorms. Pi2 are almost always observed at substorm onset at mid to low latitudes on Earth's nightside. They are fed by magnetic energy release caused by dipolarization processes, and their periods lie between 40 and 150 seconds. Usually, Pi2 are detected using wavelet transformation. Here, Empirical Mode Decomposition (EMD) is presented as an alternative to the traditional procedure. EMD is a relatively young signal decomposition method designed for nonlinear and non-stationary time series. It provides an adaptive, data-driven, and complete decomposition of time series into slow and fast oscillations. An optimized version using Monte-Carlo-type noise assistance is used here. By displaying the results in a time-frequency space, a characteristic frequency modulation is observed, which can be correlated with the onset of Pi2 pulsations. A basic algorithm to find the onset is presented. Finally, the results are compared to classical wavelet-based analysis. The use of different SMA stations furthermore allows a spatial analysis of Pi2 onset times. EMD mostly finds application in the fields of engineering and medicine; this work demonstrates the applicability of the method to geomagnetic time series.
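A minimal sketch of the EMD-plus-instantaneous-frequency idea described above, assuming the third-party PyEMD package for the sifting step and a synthetic magnetogram-like signal with an 80 s burst; the package choice, cadence and period estimate are assumptions for illustration, not the SMA processing chain.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD        # assumed third-party package (EMD-signal / PyEMD)

fs = 1.0                     # 1 Hz cadence, typical for ground magnetometer data
t = np.arange(0, 3600, 1 / fs)
rng = np.random.default_rng(4)
# Synthetic signal: slow background variation plus a Pi2-like burst (80 s period) after t = 1800 s
signal = (0.2 * np.sin(2 * np.pi * t / 600)
          + np.where((t > 1800) & (t < 2200), np.sin(2 * np.pi * t / 80), 0.0)
          + 0.05 * rng.normal(size=t.size))

imfs = EMD().emd(signal)     # adaptive decomposition into intrinsic mode functions
for k, imf in enumerate(imfs):
    phase = np.unwrap(np.angle(hilbert(imf)))
    inst_freq = np.diff(phase) * fs / (2 * np.pi)
    period = 1.0 / (np.median(np.abs(inst_freq)) + 1e-12)
    print(f"IMF {k}: median period ~ {period:.0f} s")   # Pi2 band: roughly 40-150 s
```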
Stoichiometry, microbial community composition and decomposition: a modelling analysis
NASA Astrophysics Data System (ADS)
Berninger, Frank; Zhou, Xuan; Aaltonen, Heidi; Köster, Kajar; Heinonsalo, Jussi; Pumpanen, Jukka
2017-04-01
Enzyme activity based litter decomposition models describe the decomposition of soil organic matter as a function of microbial biomass and its activity. In these models, decomposition depends largely on microbial and litter stoichiometry. We used the model of Schimel and Weintraub (Soil Biology & Biochemistry 35 (2003) 549-563), largely relying on the modification of Waring et al. (Ecology Letters (2013) 16: 887-894), and modified it to include bacteria, fungi and mycorrhizal fungi as decomposer groups with different assumed stoichiometries. The model was tested against previously published data from a fire chronosequence in northern Finland. It reconstructed well the development of soil organic matter, microbial biomasses and enzyme activities with time after fire. In a theoretical model analysis we examined how the exchange of carbon and nitrogen between mycorrhiza and the plant interacts with different litter stoichiometries. The results indicate that if a high percentage of fungal N uptake is transferred to the plant, mycorrhizal biomass decreases drastically and, because of the low mycorrhizal biomass, so does the N uptake of the plants. If a lower proportion of the fungal N uptake is transferred to the plant, plant N uptake is reasonably stable while the proportion of mycorrhiza in the total fungal biomass varies. The model is also able to simulate priming of soil organic matter decomposition.
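To make the class of model concrete, the sketch below implements a heavily simplified enzyme-driven decomposition scheme in the spirit of Schimel and Weintraub (2003): depolymerisation is proportional to enzyme activity, enzymes are produced by microbes, and microbial growth is limited by substrate stoichiometry. All pools, rates and C:N ratios are hypothetical placeholders, not the calibrated model used in this study.

```python
# Illustrative enzyme-driven decomposition model with a simple C:N limitation.
import numpy as np
from scipy.integrate import solve_ivp

def model(t, y, cn_litter=40.0, cn_microbe=8.0):
    soc, mic, enz = y                               # soil organic C, microbial C, enzyme C
    decomp = 0.05 * enz * soc / (300.0 + soc)       # enzyme-catalysed depolymerisation
    uptake = 0.6 * decomp                           # fraction of released C taken up
    # N limitation: growth cannot exceed what the substrate N supply supports
    growth = min(0.5 * uptake, uptake * cn_microbe / cn_litter)
    d_soc = -decomp + 0.2 * mic                     # inputs from microbial turnover
    d_mic = growth - 0.2 * mic - 0.05 * mic         # growth - turnover - enzyme production
    d_enz = 0.05 * mic - 0.1 * enz                  # enzyme production - enzyme decay
    return [d_soc, d_mic, d_enz]

sol = solve_ivp(model, (0.0, 365.0), [1000.0, 20.0, 1.0])
print("SOC after one year:", round(sol.y[0, -1], 1), "g C m^-2 (illustrative units)")
```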
Wang, Liqiong; Chen, Hongyan; Zhang, Tonglai; Zhang, Jianguo; Yang, Li
2007-08-17
Three different substituted potassium salts of trinitrophloroglucinol (H3TNPG) were prepared and characterized. The salts are all hydrates; thermogravimetric analysis (TG) and elemental analysis confirmed that they contain water of crystallization: one water of crystallization for the mono-substituted salt [K(H2TNPG)] and the di-substituted salt [K2(HTNPG)], and two for the tri-substituted salt [K3(TNPG)]. Their thermal decomposition mechanisms and kinetic parameters from 50 to 500 degrees C were studied under a linear heating rate by differential scanning calorimetry (DSC). Thermal decomposition proceeds through a dehydration stage followed by an intensive exothermic decomposition stage. FT-IR and TG studies verify that the final decomposition residues are potassium cyanide or potassium carbonate. According to the onset temperature of the first exothermic decomposition process of the dehydrated salts, the order of thermal stability, from low to high, is K(H2TNPG), K2(HTNPG), K3(TNPG), which is consistent with the apparent activation energies calculated by Kissinger's and Ozawa-Doyle's methods. Sensitivity tests showed that the potassium salts of H3TNPG are highly sensitive and have a high probability of explosion.
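The Kissinger method mentioned above extracts an apparent activation energy from the shift of the DSC exothermic peak with heating rate: ln(β/Tp²) is linear in 1/Tp with slope −Ea/R. A minimal sketch follows; the heating rates and peak temperatures are hypothetical numbers for illustration, not the measured data of this study.

```python
# Kissinger plot: ln(beta/Tp^2) = ln(A*R/Ea) - Ea/(R*Tp); slope gives Ea.
import numpy as np

R = 8.314                                      # J mol^-1 K^-1
beta = np.array([5.0, 10.0, 15.0, 20.0])       # heating rates, K/min (hypothetical)
Tp = np.array([540.0, 548.0, 553.0, 557.0])    # exothermic peak temperatures, K (hypothetical)

slope, intercept = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
Ea = -slope * R                                # apparent activation energy
A = np.exp(intercept) * Ea / R                 # pre-exponential factor from the intercept
print(f"Ea ~ {Ea / 1e3:.0f} kJ/mol, A ~ {A:.2e} min^-1")
```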
NASA Astrophysics Data System (ADS)
Engelbrecht, Nicolaas; Chiuta, Steven; Bessarabov, Dmitri G.
2018-05-01
The experimental evaluation of an autothermal microchannel reactor for H2 production from NH3 decomposition is described. The reactor design incorporates an autothermal approach, with added NH3 oxidation, for coupled heat supply to the endothermic decomposition reaction. An alternating catalytic plate arrangement is used to accomplish this thermal coupling in a cocurrent flow strategy. Detailed analysis of the transient operating regime associated with reactor start-up and steady-state results is presented. The effects of operating parameters on reactor performance are investigated, specifically, the NH3 decomposition flow rate, NH3 oxidation flow rate, and fuel-oxygen equivalence ratio. Overall, the reactor exhibits rapid response time during start-up; within 60 min, H2 production is approximately 95% of steady-state values. The recommended operating point for steady-state H2 production corresponds to an NH3 decomposition flow rate of 6 NL min-1, NH3 oxidation flow rate of 4 NL min-1, and fuel-oxygen equivalence ratio of 1.4. Under these flows, an NH3 conversion of 99.8% and an H2 equivalent fuel cell power output of 0.71 kWe are achieved. The reactor shows good heat utilization with a thermal efficiency of 75.9%. An efficient autothermal reactor design is therefore demonstrated, which may be upscaled to a multi-kW H2 production system for commercial implementation.
Effect of Isomorphous Substitution on the Thermal Decomposition Mechanism of Hydrotalcites
Crosby, Sergio; Tran, Doanh; Cocke, David; Duraia, El-Shazly M.; Beall, Gary W.
2014-01-01
Hydrotalcites have many important applications in catalysis, wastewater treatment, gene delivery and polymer stabilization, all depending on preparation history and treatment scenarios. In catalysis and polymer stabilization, thermal decomposition is of great importance. Hydrotalcites form easily with atmospheric carbon dioxide and often interfere with the study of other anion-containing systems, particularly if formed at room temperature. The dehydroxylation and decomposition of carbonate occur simultaneously, making it difficult to distinguish the dehydroxylation mechanisms directly. To date, the majority of work on understanding the decomposition mechanism has utilized hydrotalcite precipitated at room temperature, and this has led to some dispute as to the nature of the dehydroxylation mechanism. In this study, evolved gas analysis combined with thermal analysis has been used to show that CO2 contamination is problematic in poorly crystalline materials formed at room temperature. In this paper, data for the thermal decomposition of the chloride form of hydrotalcite are reported. In addition, carbonate-free hydrotalcites have been synthesized with different charge densities and at different growth temperatures. This combination of parameters has allowed a better understanding of the mechanism of dehydroxylation and the role that isomorphous substitution plays in these mechanisms to be delineated. In addition, the effect of anion type on thermal stability is also reported. A stepwise dehydroxylation model is proposed that is mediated by the level of aluminum substitution. PMID:28788231
NASA Astrophysics Data System (ADS)
Haris, A.; Pradana, G. S.; Riyanto, A.
2017-07-01
The tectonic setting of the Bird's Head of Papua Island is an important model for petroleum systems in the eastern part of Indonesia. Exploration in the area began with the oil seepage findings in the Bintuni and Salawati Basins. Biogenic gas in shallow layers has become an interesting issue in hydrocarbon exploration: the appearance of hydrocarbon accumulations in a shallow layer with a dry gas type makes biogenic gas appealing for further research. This paper aims to delineate the sweet spot of hydrocarbon potential in the shallow layer by applying the spectral decomposition technique. Spectral decomposition breaks the seismic signal into individual frequency components, which have significant geological meaning. One spectral decomposition method is the Continuous Wavelet Transform (CWT), which transforms the seismic signal into time and frequency simultaneously and thereby simplifies time-frequency map analysis. When time resolution increases, frequency resolution decreases, and vice versa. In this study, we perform low-frequency shadow zone analysis in which the amplitude anomaly at a low frequency of 15 Hz was observed and then compared to the amplitude at mid (20 Hz) and high frequency (30 Hz). The amplitude anomaly present at low frequency disappears at high frequency. Spectral decomposition using the CWT algorithm has been successfully applied to delineate the sweet spot zone.
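The sketch below illustrates the general CWT spectral-decomposition step on a synthetic trace, assuming the PyWavelets package and a Morlet wavelet; the 15/20/30 Hz slices mimic the low-, mid- and high-frequency maps discussed above but use made-up data, not the Bird's Head seismic volume.

```python
# Continuous wavelet transform of a synthetic seismic trace at selected frequencies.
import numpy as np
import pywt

dt = 0.002                                           # 2 ms sample interval
t = np.arange(0, 2, dt)
trace = np.sin(2 * np.pi * 15 * t) * np.exp(-((t - 1.0) / 0.1) ** 2)  # synthetic 15 Hz event

frequencies = np.array([15.0, 20.0, 30.0])           # Hz slices of interest
# For pywt, scale = wavelet centre frequency / (target frequency * dt)
scales = pywt.central_frequency('morl') / (frequencies * dt)
coefs, freqs = pywt.cwt(trace, scales, 'morl', sampling_period=dt)

for f, row in zip(freqs, np.abs(coefs)):
    print(f"{f:5.1f} Hz  peak amplitude {row.max():.2f} at t = {t[row.argmax()]:.2f} s")
```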
Quantifying the relative contribution of climate and human impacts on streamflow at seasonal scale
NASA Astrophysics Data System (ADS)
Xin, Z.; Zhang, L.; Li, Y.; Zhang, C.
2017-12-01
Both climate change and human activities have induced changes in hydrology. Quantifying their impacts on streamflow is a challenge, especially at the seasonal scale, because of the seasonality of climate and of human impacts such as water use for irrigation and water storage and release due to reservoir operation. In this study, the decomposition method based on the Budyko hypothesis is extended to the seasonal scale and used to quantify the climate and human impacts on annual and seasonal streamflow changes. The results are further compared with, and verified against, those simulated by the hydrological abcd model. Data are split into two periods (1953-1974 and 1975-2005) to quantify the change. Three seasons (wet, dry and irrigation) are defined by introducing the monthly aridity index. In general, results showed satisfactory agreement between the Budyko decomposition method and the abcd model. Both climate change and human activities were found to induce a decrease in streamflow at the annual scale, with 67% of the change contributed by human activities. At the seasonal scale, the human-induced contribution to the reduced streamflow was 64% and 73% for the dry and wet seasons, respectively, whereas in the irrigation season the impact of human activities on reducing streamflow was more pronounced (180%) because climate contributed to increased streamflow. In addition, the quantification results were analyzed for each month in the wet season to reveal the effects of intense precipitation and reservoir operation rules during the flood season.
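A minimal sketch of a Budyko-curve decomposition in the general spirit described above is given below; it uses Fu's form of the Budyko curve calibrated on the first period, with the departure from that curve attributed to human activities. All inputs are hypothetical annual values, and the sketch is not the paper's seasonal implementation.

```python
# Budyko-based split of a streamflow change into climate- and human-induced parts.
import numpy as np
from scipy.optimize import brentq

def fu_evaporation(P, PET, w):
    """Actual evaporation from Fu's form of the Budyko curve."""
    return P * (1 + PET / P - (1 + (PET / P) ** w) ** (1 / w))

# Period 1 (pre-change) and period 2 (post-change) observations in mm/yr (hypothetical)
P1, PET1, Q1_obs = 560.0, 1050.0, 95.0
P2, PET2, Q2_obs = 520.0, 1100.0, 60.0

# Calibrate Fu's parameter w so the curve reproduces period-1 streamflow (Q = P - E)
w = brentq(lambda w: P1 - fu_evaporation(P1, PET1, w) - Q1_obs, 1.01, 5.0)

# Streamflow predicted by the period-1 curve under period-2 climate
Q2_clim = P2 - fu_evaporation(P2, PET2, w)

dQ_total = Q2_obs - Q1_obs
dQ_climate = Q2_clim - Q1_obs           # shift along the calibrated curve
dQ_human = dQ_total - dQ_climate        # departure from the curve
print(f"total {dQ_total:.1f}, climate {dQ_climate:.1f}, human {dQ_human:.1f} mm/yr")
print(f"human share: {100 * dQ_human / dQ_total:.0f}%")
```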
Anisotropic Developments for Homogeneous Shear Flows
NASA Technical Reports Server (NTRS)
Cambon, Claude; Rubinstein, Robert
2006-01-01
The general decomposition of the spectral correlation tensor R_ij(k) by Cambon et al. (J. Fluid Mech., 202, 295; J. Fluid Mech., 337, 303) into directional and polarization components is applied to the representation of R_ij(k) by spherically averaged quantities. The decomposition splits the deviatoric part H_ij(k) of the spherical average of R_ij(k) into directional and polarization components H_ij^(e)(k) and H_ij^(z)(k). A self-consistent representation of the spectral tensor in the limit of weak anisotropy is constructed in terms of these spherically averaged quantities. The directional and polarization components must be treated independently: models that attempt the same representation of the spectral tensor using the spherical average H_ij(k) alone prove to be inconsistent with Navier-Stokes dynamics. In particular, a spectral tensor consistent with a prescribed Reynolds stress is not unique. The degree of anisotropy permitted by this theory is restricted by realizability requirements. Since these requirements will be less severe in a more accurate theory, a preliminary account is given of how to generalize the formalism of spherical averages to a higher-order expansion of the spectral tensor. Directionality is described by a conventional expansion in spherical harmonics, but polarization requires an expansion in tensorial spherical harmonics generated by irreducible representations of the spatial rotation group SO(3). These expansions are considered in more detail in the special case of axial symmetry.
Modal decomposition of turbulent supersonic cavity
NASA Astrophysics Data System (ADS)
Soni, R. K.; Arya, N.; De, A.
2018-06-01
Self-sustained oscillations in a Mach 3 supersonic cavity with a length-to-depth ratio of three are investigated using wall-modeled large eddy simulation methodology for Re_D = 3.39 × 10^5. The unsteady data obtained through computation are utilized to investigate the spatial and temporal evolution of the flow field, especially the second invariant of the velocity tensor, while the phase-averaged data are analyzed over a feedback cycle to study the spatial structures. This analysis is accompanied by the proper orthogonal decomposition (POD) data, which reveals the presence of discrete vortices along the shear layer. The POD analysis is performed in both the spanwise and streamwise planes to extract the coherence in flow structures. Finally, dynamic mode decomposition is performed on the data sequence to obtain the dynamic information and deeper insight into the self-sustained mechanism.
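Snapshot POD of the kind used above reduces to a singular value decomposition of the mean-subtracted snapshot matrix. The sketch below shows that basic machinery on random stand-in data (columns play the role of instantaneous velocity fields on a plane); it is not the flow data analysed in this study.

```python
# Snapshot POD via the SVD: spatial modes, modal energies, temporal coefficients.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_snapshots = 4096, 200
X = rng.standard_normal((n_points, n_snapshots))     # columns = flattened snapshots

X_mean = X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(X - X_mean, full_matrices=False)

energy = s**2 / np.sum(s**2)                         # modal energy fractions
temporal_coeffs = s[:, None] * Vt                    # temporal coefficients of each mode
print("energy captured by first 5 POD modes:", energy[:5].round(3))
# Spatial POD modes are the columns of U; low-order reconstructions use U[:, :r].
```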
Three dimensional empirical mode decomposition analysis apparatus, method and article of manufacture
NASA Technical Reports Server (NTRS)
Gloersen, Per (Inventor)
2004-01-01
An apparatus and method of analysis for three-dimensional (3D) physical phenomena. The physical phenomena may include any varying 3D phenomena such as time-varying polar ice flows. A representation of the 3D phenomena is passed through a Hilbert transform to convert the data into complex form. A spatial variable is separated from the complex representation by producing a time-based covariance matrix. The temporal parts of the principal components are produced by applying Singular Value Decomposition (SVD). Based on the rapidity with which the eigenvalues decay, the first 3-10 complex principal components (CPC) are selected for Empirical Mode Decomposition into intrinsic modes. The intrinsic modes produced are filtered in order to reconstruct the spatial part of the CPC. Finally, a filtered time series may be reconstructed from the first 3-10 filtered complex principal components.
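The sketch below illustrates the first stages of that pipeline on synthetic data: the Hilbert transform produces a complex (analytic) representation, a time-based covariance matrix is formed, and an SVD yields the temporal parts of complex principal components. The EMD filtering of the leading components is omitted for brevity, and the data are random placeholders.

```python
# Hilbert transform -> time covariance -> SVD to obtain complex principal components.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(1)
n_time, n_space = 500, 64                       # e.g. monthly maps flattened in space
data = rng.standard_normal((n_time, n_space))

analytic = hilbert(data, axis=0)                # complex representation along time
analytic -= analytic.mean(axis=0)
cov_time = analytic @ analytic.conj().T / n_space   # time x time covariance matrix

U, s, Vh = np.linalg.svd(cov_time, hermitian=True)
n_keep = 5                                      # first few CPCs (3-10 in the method above)
cpc_temporal = U[:, :n_keep]                    # temporal parts of the leading CPCs
cpc_spatial = cpc_temporal.conj().T @ analytic  # spatial patterns by projection
print("retained CPCs:", n_keep, "explained fraction:", float(s[:n_keep].sum() / s.sum()))
```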
Decomposition of Proteins into Dynamic Units from Atomic Cross-Correlation Functions.
Calligari, Paolo; Gerolin, Marco; Abergel, Daniel; Polimeno, Antonino
2017-01-10
In this article, we present a clustering method of atoms in proteins based on the analysis of the correlation times of interatomic distance correlation functions computed from MD simulations. The goal is to provide a coarse-grained description of the protein in terms of fewer elements that can be treated as dynamically independent subunits. Importantly, this domain decomposition method does not take into account structural properties of the protein. Instead, the clustering of protein residues in terms of networks of dynamically correlated domains is defined on the basis of the effective correlation times of the pair distance correlation functions. For these properties, our method stands as a complementary analysis to the customary protein decomposition in terms of quasi-rigid, structure-based domains. Results obtained for a prototypal protein structure illustrate the approach proposed.
NASA Astrophysics Data System (ADS)
Sekiguchi, Kazuki; Shirakawa, Hiroki; Chokawa, Kenta; Araidai, Masaaki; Kangawa, Yoshihiro; Kakimoto, Koichi; Shiraishi, Kenji
2018-04-01
We analyzed the decomposition of Ga(CH3)3 (TMG) during the metal organic vapor phase epitaxy (MOVPE) of GaN on the basis of first-principles calculations and thermodynamic analysis. We performed activation energy calculations of TMG decomposition and determined the main reaction processes of TMG during GaN MOVPE. We found that TMG reacts with the H2 carrier gas and that (CH3)2GaH is generated after the desorption of the methyl group. Next, (CH3)2GaH decomposes into (CH3)GaH2 and this decomposes into GaH3. Finally, GaH3 becomes GaH. In the MOVPE growth of GaN, TMG decomposes into GaH by the successive desorption of its methyl groups. The results presented here concur with recent high-resolution mass spectroscopy results.
Tarai, Madhumita; Kumar, Keshav; Divya, O; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar
2017-09-05
The present work compares dissimilarity- and covariance-based unsupervised chemometric classification approaches using total synchronous fluorescence spectroscopy data sets acquired for cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups, and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix. Copyright © 2017 Elsevier B.V. All rights reserved.
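The contrast between the two decompositions can be illustrated on toy data: eigen-decomposition of the covariance matrix (conventional PCA) versus eigen-decomposition of a double-centred pairwise dissimilarity matrix (classical multidimensional scaling). The sketch below uses synthetic "spectra" and squared Euclidean distances for simplicity; with other dissimilarity measures the two approaches can group samples quite differently.

```python
# Covariance-based (PCA) versus dissimilarity-based (classical MDS) decomposition.
import numpy as np

rng = np.random.default_rng(2)
group_a = rng.normal(0.0, 1.0, size=(20, 50))      # e.g. cumin-based samples (synthetic)
group_b = rng.normal(0.8, 1.0, size=(20, 50))      # e.g. non-cumin samples (synthetic)
X = np.vstack([group_a, group_b])

# (1) covariance-based decomposition: scores on the first two principal components
Xc = X - X.mean(axis=0)
evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
pca_scores = Xc @ evecs[:, ::-1][:, :2]

# (2) dissimilarity-based decomposition: double-centred squared distances, then eigenvectors
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
n = D2.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2 @ J
evals_d, evecs_d = np.linalg.eigh(B)
mds_scores = evecs_d[:, ::-1][:, :2] * np.sqrt(np.maximum(evals_d[::-1][:2], 0.0))

print("PCA score spread along PC1:", pca_scores[:, 0].std().round(2))
print("MDS score spread along axis 1:", mds_scores[:, 0].std().round(2))
```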
Artifact removal from EEG data with empirical mode decomposition
NASA Astrophysics Data System (ADS)
Grubov, Vadim V.; Runnova, Anastasiya E.; Efremova, Tatyana Yu.; Hramov, Alexander E.
2017-03-01
In this paper we propose a novel method for dealing with physiological artifacts caused by intensive activity of facial and neck muscles and other movements in experimental human EEG recordings. The method is based on analysis of EEG signals with empirical mode decomposition (Hilbert-Huang transform). We introduce the mathematical algorithm of the method with the following steps: empirical mode decomposition of the EEG signal, identification of the empirical modes containing artifacts, removal of those modes, and reconstruction of the initial EEG signal. We test the method by filtering movement artifacts from experimental human EEG signals and show its high efficiency.
NASA Astrophysics Data System (ADS)
Yamamoto, Takanori; Bannai, Hideo; Nagasaki, Masao; Miyano, Satoru
We present new decomposition heuristics for finding the optimal solution for the maximum-weight connected graph problem, which is known to be NP-hard. Previous optimal algorithms for solving the problem decompose the input graph into subgraphs using heuristics based on node degree. We propose new heuristics based on betweenness centrality measures, and show through computational experiments that our new heuristics tend to reduce the number of subgraphs in the decomposition, and therefore could lead to the reduction in computational time for finding the optimal solution. The method is further applied to analysis of biological pathway data.
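The sketch below illustrates the betweenness-based decomposition idea on a toy network, assuming the networkx package: repeatedly remove the highest-betweenness node so the graph falls apart into smaller connected subgraphs that can be handled separately. It is only an illustration of the heuristic; in a full solver the removed bottleneck nodes would have to be reconciled when the subproblems are recombined, and this is not the authors' exact algorithm.

```python
# Betweenness-centrality-driven decomposition of a graph into small subgraphs.
import networkx as nx

def decompose_by_betweenness(G, max_subgraph_size=10):
    """Split G into connected subgraphs no larger than max_subgraph_size."""
    pieces, queue = [], [G.copy()]
    while queue:
        H = queue.pop()
        if H.number_of_nodes() <= max_subgraph_size:
            pieces.append(H)
            continue
        bc = nx.betweenness_centrality(H)
        cut_node = max(bc, key=bc.get)            # the most "central" bottleneck node
        H.remove_node(cut_node)                   # removing it disconnects the graph
        for comp in nx.connected_components(H):
            queue.append(H.subgraph(comp).copy())
    return pieces

G = nx.barabasi_albert_graph(60, 2, seed=3)       # toy pathway-like network
parts = decompose_by_betweenness(G)
print(len(parts), "subgraphs with sizes", sorted([len(p) for p in parts], reverse=True))
```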
Mostafanezhad, Isar; Boric-Lubecke, Olga; Lubecke, Victor; Mandic, Danilo P
2009-01-01
Empirical Mode Decomposition has been shown to be effective in the analysis of non-stationary and non-linear signals. As an application to wireless life-signs monitoring, in this paper we use this method to condition the signals obtained from the Doppler device. Random physical movements (fidgeting) of the human subject during a measurement can fall at the same frequency as the heart or respiration rate and interfere with the measurement. It will be shown how Empirical Mode Decomposition can break the radar signal down into its components and help separate and remove the fidgeting interference.
Using dynamic mode decomposition for real-time background/foreground separation in video
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kutz, Jose Nathan; Grosek, Jacob; Brunton, Steven
The technique of dynamic mode decomposition (DMD) is disclosed herein for the purpose of robustly separating video frames into background (low-rank) and foreground (sparse) components in real-time. Foreground/background separation is achieved at the computational cost of just one singular value decomposition (SVD) and one linear equation solve, thus producing results orders of magnitude faster than robust principal component analysis (RPCA). Additional techniques, including techniques for analyzing the video for multi-resolution time-scale components, and techniques for reusing computations to allow processing of streaming video in real time, are also described herein.
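A compact sketch of the DMD-based separation idea follows: the background is reconstructed from the DMD mode(s) whose continuous-time frequency is (near) zero, i.e. whose eigenvalue sits at 1, and the foreground is the residual. The "video" here is a synthetic static background plus noise, for illustration only; it is not the disclosed real-time streaming implementation.

```python
# Exact DMD of a flattened frame sequence, then background = near-zero-frequency modes.
import numpy as np

rng = np.random.default_rng(4)
n_pixels, n_frames, dt = 900, 120, 1 / 30
background = rng.random((n_pixels, 1))
frames = background + 0.05 * rng.standard_normal((n_pixels, n_frames))  # flattened frames

X1, X2 = frames[:, :-1], frames[:, 1:]
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
r = 10                                                # truncation rank
U, s, Vh = U[:, :r], s[:r], Vh[:r]
A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1 / s)
eigvals, W = np.linalg.eig(A_tilde)
Phi = X2 @ Vh.conj().T @ np.diag(1 / s) @ W           # DMD modes
omega = np.log(eigvals) / dt                          # continuous-time frequencies

bg_idx = np.abs(omega) < 1e-2                         # near-zero frequency = background
b = np.linalg.lstsq(Phi, X1[:, 0], rcond=None)[0]     # mode amplitudes at the first frame
time_dyn = np.exp(np.outer(omega[bg_idx], np.arange(n_frames) * dt)) * b[bg_idx, None]
low_rank = (Phi[:, bg_idx] @ time_dyn).real           # background video
sparse = frames - low_rank                            # foreground residual
print("max |foreground| =", np.abs(sparse).max().round(3))
```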
NASA Technical Reports Server (NTRS)
Worstell, J. H.; Daniel, S. R.
1981-01-01
A method for the separation and analysis of tetralin hydroperoxide and its decomposition products by high pressure liquid chromatography has been developed. Elution with a single, mixed solvent from a μ-Porasil column was employed. Constant response factors (internal standard method) over large concentration ranges and reproducible retention parameters are reported.
Educational Outcomes and Socioeconomic Status: A Decomposition Analysis for Middle-Income Countries
ERIC Educational Resources Information Center
Nieto, Sandra; Ramos, Raúl
2015-01-01
This article analyzes the factors that explain the gap in educational outcomes between the top and bottom quartile of students in different countries, according to their socioeconomic status. To do so, it uses PISA microdata for 10 middle-income and 2 high-income countries, and applies the Oaxaca-Blinder decomposition method. Its results show that…
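A minimal sketch of the two-fold Oaxaca-Blinder decomposition follows: the gap in mean outcomes between two groups is split into an "explained" part (differences in characteristics) and an "unexplained" part (differences in the returns to those characteristics). The variables and coefficients below are hypothetical synthetic data, not the PISA microdata used in the article.

```python
# Two-fold Oaxaca-Blinder decomposition on synthetic top/bottom-quartile groups.
import numpy as np

rng = np.random.default_rng(5)

def simulate(n, beta, x_mean):
    X = np.column_stack([np.ones(n), rng.normal(loc=x_mean, size=n), rng.binomial(1, 0.5, n)])
    y = X @ beta + rng.normal(scale=5.0, size=n)
    return X, y

X_top, y_top = simulate(1000, np.array([520.0, 30.0, 15.0]), x_mean=0.5)   # high-SES group
X_bot, y_bot = simulate(1000, np.array([480.0, 25.0, 10.0]), x_mean=0.0)   # low-SES group

b_top, *_ = np.linalg.lstsq(X_top, y_top, rcond=None)
b_bot, *_ = np.linalg.lstsq(X_bot, y_bot, rcond=None)

gap = y_top.mean() - y_bot.mean()
explained = (X_top.mean(0) - X_bot.mean(0)) @ b_bot      # endowment (characteristics) effect
unexplained = X_top.mean(0) @ (b_top - b_bot)            # coefficient (returns) effect
print(f"gap {gap:.1f} = explained {explained:.1f} + unexplained {unexplained:.1f}")
```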
Absolute continuity for operator valued completely positive maps on C∗-algebras
NASA Astrophysics Data System (ADS)
Gheondea, Aurelian; Kavruk, Ali Şamil
2009-02-01
Motivated by applicability to quantum operations, quantum information, and quantum probability, we investigate the notion of absolute continuity for operator valued completely positive maps on C∗-algebras, previously introduced by Parthasarathy [in Athens Conference on Applied Probability and Time Series Analysis I (Springer-Verlag, Berlin, 1996), pp. 34-54]. We obtain an intrinsic definition of absolute continuity, we show that the Lebesgue decomposition defined by Parthasarathy is the maximal one among all other Lebesgue-type decompositions and that this maximal Lebesgue decomposition does not depend on the jointly dominating completely positive map, we obtain more flexible formulas for calculating the maximal Lebesgue decomposition, and we point out the nonuniqueness of the Lebesgue decomposition as well as a sufficient condition for uniqueness. In addition, we consider Radon-Nikodym derivatives for absolutely continuous completely positive maps that, in general, are unbounded positive self-adjoint operators affiliated to a certain von Neumann algebra, and we obtain a spectral approximation by bounded Radon-Nikodym derivatives. An application to the existence of the infimum of two completely positive maps is indicated, and formulas in terms of Choi's matrices for the Lebesgue decomposition of completely positive maps in matrix algebras are obtained.
Watterson, James H; Donohue, Joseph P
2011-09-01
Skeletal tissues (rat) were analyzed for ketamine (KET) and norketamine (NKET) following acute ketamine exposure (75 mg/kg i.p.) to examine the influence of bone type and decomposition period on drug levels. Following euthanasia, drug-free (n = 6) and drug-positive (n = 20) animals decomposed outdoors in rural Ontario for 0, 1, or 2 weeks. Skeletal remains were recovered and ground samples of various bones underwent passive methanolic extraction and analysis by GC-MS after solid-phase extraction. Drug levels, expressed as mass normalized response ratios, were compared across tissue types and decomposition periods. Bone type was a main effect (p < 0.05) for drug level and drug/metabolite level ratio (DMLR) for all decomposition times, except for DMLR after 2 weeks of decomposition. Mean drug level (KET and NKET) and DMLR varied by up to 23-fold, 18-fold, and 5-fold, respectively, between tissue types. Decomposition time was significantly related to DMLR, KET level, and NKET level in 3/7, 4/7, and 1/7 tissue types, respectively. Although substantial site dependence may exist in measured bone drug levels, ratios of drug and metabolite levels should be investigated for utility in discrimination of drug administration patterns in forensic work.
Yuan, Jie; Zheng, Xiaofeng; Cheng, Fei; Zhu, Xian; Hou, Lin; Li, Jingxia; Zhang, Shuoxin
2017-10-24
Historically, intense forest hazards have resulted in an increase in the quantity of fallen wood in the Qinling Mountains. Fallen wood has a decisive influence on the nutrient cycling, carbon budget and ecosystem biodiversity of forests, and fungi are essential for the decomposition of fallen wood. Moreover, decaying dead wood alters fungal communities. The development of high-throughput sequencing methods has facilitated the ongoing investigation of relevant molecular forest ecosystems with a focus on fungal communities. In this study, fallen wood and its associated fungal communities were compared at different stages of decomposition to evaluate relative species abundance and species diversity. The physical and chemical factors that alter fungal communities were also compared by performing correspondence analysis according to host tree species across all stages of decomposition. Tree species were the major source of differences in fungal community diversity at all decomposition stages, and fungal communities achieved the highest levels of diversity at the intermediate and late decomposition stages. Interactions between various physical and chemical factors and fungal communities shared the same regulatory mechanisms, and there was no tree species-specific influence. Improving our knowledge of wood-inhabiting fungal communities is crucial for forest ecosystem conservation.
Nunes, F P; Garcia, Q S
2015-05-01
The study of litter decomposition and nutrient cycling is essential for understanding the structure and functioning of native forests, and mathematical models can help to explain local and temporal variations in litter fall and their relationships with environmental variables. The objective of this study was to test the adequacy of mathematical models for leaf litter decomposition in the Atlantic Forest in southeastern Brazil. We studied four native forest sites in Parque Estadual do Rio Doce, a Biosphere Reserve of the Atlantic Forest, where 200 decomposition litterbags (20 × 20 cm, 2 mm nylon mesh) containing 10 grams of litter were installed. Monthly from 09/2007 to 04/2009, 10 litterbags were removed for determination of mass loss. We compared three nonlinear models: (1) the Olson exponential model (1963), which assumes a constant K; (2) the model proposed by Fountain and Schowalter (2004); and (3) the model proposed by Coelho and Borges (2005), which allows a variable K, evaluated through QMR, SQR, SQTC, DMA and the F test. The Fountain and Schowalter (2004) model was inappropriate for this study because it overestimated the decomposition rate. The decay curve analysis showed that the model with variable K was more appropriate, although the values of QMR and DMA revealed no significant difference (p > 0.05) between the models. The analysis showed a better adjustment of DMA using variable K, reinforced by the values of the adjustment coefficient (R2). However, convergence problems were observed for this model when estimating outliers in the study areas, which did not occur with the constant-K model; this problem may be related to the non-linear fit of the mass/time values generated for variable K. The constant-K model was shown to be adequate to describe the decomposition curve for the areas separately, with good adjustability and without convergence problems. The results demonstrated the adequacy of the Olson model for estimating tropical forest litter decomposition. Despite using a reduced number of parameters to represent the steps of the decomposition process, the Olson model showed no convergence difficulties, so it can be used to describe decomposition curves in different types of environments, estimating K appropriately.
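The constant-K Olson model referred to above is the single exponential X(t) = X0·exp(−K·t). The sketch below fits it to hypothetical monthly mass-remaining data with nonlinear least squares; the numbers are illustrative, not the Rio Doce measurements.

```python
# Fit Olson's (1963) single-exponential litter decay model to mass-remaining data.
import numpy as np
from scipy.optimize import curve_fit

t_months = np.arange(0, 20)                                   # collection times
mass_g = 10.0 * np.exp(-0.08 * t_months) + np.random.default_rng(6).normal(0, 0.3, 20)

def olson(t, x0, k):
    return x0 * np.exp(-k * t)

(x0_hat, k_hat), cov = curve_fit(olson, t_months, mass_g, p0=(10.0, 0.05))
half_life = np.log(2) / k_hat
print(f"K = {k_hat:.3f} month^-1, litter half-life ~ {half_life:.1f} months")
```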
COMPADRE: an R and web resource for pathway activity analysis by component decompositions.
Ramos-Rodriguez, Roberto-Rafael; Cuevas-Diaz-Duran, Raquel; Falciani, Francesco; Tamez-Peña, Jose-Gerardo; Trevino, Victor
2012-10-15
The analysis of biological networks has become essential to study functional genomic data. Compadre is a tool to estimate pathway/gene-set activity indexes using sub-matrix decompositions for biological network analyses. The Compadre pipeline also includes one of the direct uses of activity indexes: detecting altered gene sets. For this, the gene expression sub-matrix of a gene set is decomposed into components, which are used to test differences between groups of samples. This procedure is performed with and without differentially expressed genes to decrease false calls. During this process, Compadre also performs an over-representation test. Compadre already implements four decomposition methods [principal component analysis (PCA), Isomaps, independent component analysis (ICA) and non-negative matrix factorization (NMF)], six statistical tests (t- and f-test, SAM, Kruskal-Wallis, Welch and Brown-Forsythe), several gene sets (KEGG, BioCarta, Reactome, GO and MsigDB) and can be easily expanded. Our simulation results, shown in Supplementary Information, suggest that Compadre detects more pathways than over-representation tools like David, Babelomics and Webgestalt and fewer false positives than PLAGE. The output is composed of results from decomposition and over-representation analyses, providing a more complete biological picture. Examples provided in Supplementary Information show the utility, versatility and simplicity of Compadre for analyses of biological networks. Compadre is freely available at http://bioinformatica.mty.itesm.mx:8080/compadre. The R package is also available at https://sourceforge.net/p/compadre.
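The general idea of a component-based pathway activity index can be sketched as follows (in Python rather than R, and not the COMPADRE implementation): take the expression sub-matrix of one gene set, decompose it, use the first component's sample scores as the activity index, and test the index between two groups. The data and the gene set below are synthetic.

```python
# PCA-style pathway activity index for one gene set, tested between two groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_genes, n_samples = 2000, 40
expr = rng.standard_normal((n_genes, n_samples))
groups = np.array([0] * 20 + [1] * 20)
gene_set = np.arange(50)                              # indices of a hypothetical pathway
expr[np.ix_(gene_set, np.where(groups == 1)[0])] += 1.0   # pathway up-regulated in group 1

sub = expr[gene_set]                                  # genes x samples sub-matrix
sub = sub - sub.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(sub, full_matrices=False)
activity = Vt[0] * s[0]                               # per-sample activity index (PC1 scores)

t_stat, p_val = stats.ttest_ind(activity[groups == 0], activity[groups == 1])
print(f"PC1 explains {100 * s[0]**2 / np.sum(s**2):.1f}% of variance, p = {p_val:.2e}")
```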
NASA Astrophysics Data System (ADS)
Zhi, Y.; Yang, Z. F.; Yin, X. A.
2014-05-01
Decomposition analysis of water footprint (WF) changes, i.e., assessing the changes in WF and identifying the contributions of the factors leading to those changes, is important for water resource management. Instead of focusing on WF from the perspective of administrative regions, we built a framework in which the input-output (IO) model, the structural decomposition analysis (SDA) model and the generating regional IO tables (GRIT) method are combined to implement decomposition analysis of WF in a river basin. The framework is illustrated with the WF of the Haihe River basin (HRB), a typical water-limited basin, from 2002 to 2007. It shows that the total WF in the HRB increased from 4.3 × 10^10 m^3 in 2002 to 5.6 × 10^10 m^3 in 2007, and the agriculture sector makes the dominant contribution to the increase. Both the WF of domestic products (internal) and the WF of imported products (external) increased, and the proportion of external WF rose from 29.1 to 34.4%. The technological effect was the dominant contributor to offsetting the increase of WF. However, the growth of WF caused by the economic structural effect and the scale effect was greater, so the total WF increased. This study provides insights into water challenges in the HRB, proposes possible strategies for the future, and serves as a reference for WF management and policy-making in other water-limited river basins.
Decomposition rates and termite assemblage composition in semiarid Africa
Schuurman, G.
2005-01-01
Outside of the humid tropics, abiotic factors are generally considered the dominant regulators of decomposition, and biotic influences are frequently not considered in predicting decomposition rates. In this study, I examined the effect of termite assemblage composition and abundance on decomposition of wood litter of an indigenous species (Croton megalobotrys) in five terrestrial habitats of the highly seasonal semiarid Okavango Delta region of northern Botswana, to determine whether natural variation in decomposer community composition and abundance influences decomposition rates. I conducted the study in two areas, Xudum and Santawani, with the Xudum study preceding the Santawani study. I assessed termite assemblage composition and abundance using a grid of survey baits (rolls of toilet paper) placed on the soil surface and checked 2-4 times/month. I placed a billet (a section of wood litter) next to each survey bait and measured decomposition in a plot by averaging the mass loss of its billets. Decomposition rates varied up to sixfold among plots within the same habitat and locality, despite the fact that these plots experienced the same climate. In addition, billets decomposed significantly faster during the cooler and drier Santawani study, contradicting climate-based predictions. Because termite incidence was generally higher in Santawani plots, termite abundance initially seemed a likely determinant of decomposition in this system. However, no significant effect of termite incidence on billet mass loss rates was observed among the Xudum plots, where decomposition rates remained low even though termite incidence varied considerably. Considering the incidences of fungus-growing termites and non-fungus-growing termites separately resolves this apparent contradiction: in both Santawani and Xudum, only fungus-growing termites play a significant role in decomposition. This result is mirrored in an analysis of the full data set of combined Xudum and Santawani data. The determination that natural variation in the abundance of a single taxonomic group of soil fauna, a termite subfamily, determines almost all observed variation in decomposition rates supports the emerging view that biotic influences may be important in many biomes and that consideration of decomposer community composition and abundance may be critical for accurate prediction of decomposition rates. © 2005 by the Ecological Society of America.
NASA Astrophysics Data System (ADS)
L'vov, Boris V.
2008-02-01
This paper sums up the evolution of the thermochemical approach to the interpretation of solid decompositions over the past 25 years. This period includes two stages related to decomposition studies by different techniques: ET AAS and QMS in 1981-2001, and TG in 2002-2007. As a result of the ET AAS and QMS investigations, the method for determination of absolute rates of solid decompositions was developed and the mechanism of decomposition through congruent dissociative vaporization was discovered. On this basis, in the period from 1997 to 2001, the decomposition mechanisms of several classes of reactants were interpreted and some unusual effects observed in TA were explained. However, the thermochemical approach has not received any support from other TA researchers. One potential reason for this distrust is the unreliability of the E values measured by the traditional Arrhenius plot method. Theoretical analysis and comparison of the metrological features of the different methods used to determine thermochemical quantities led to the conclusion that the third-law method is much to be preferred over the Arrhenius plot and second-law methods. However, the third-law method cannot be used in kinetic studies within the Arrhenius approach because its use requires measuring the equilibrium pressures of the decomposition products. On the contrary, the method of absolute rates is ideally suited for this purpose. As a result of the much higher precision of the third-law method, some quantitative conclusions that follow from the theory were confirmed, and several new effects, which were invisible in the framework of the Arrhenius approach, have been revealed. In spite of the great progress reached in the development of a reliable methodology based on the third-law method, the thermochemical approach remains unclaimed as before.
Splitting of turbulent spot in transitional pipe flow
NASA Astrophysics Data System (ADS)
Wu, Xiaohua; Moin, Parviz; Adrian, Ronald J.
2017-11-01
A recent study (Wu et al., PNAS, 1509451112, 2015) demonstrated the feasibility and accuracy of direct computation of the Osborne Reynolds pipe transition problem without the unphysical, axially periodic boundary condition. Here we use this approach to study the splitting of turbulent spots in transitional pipe flow, a feature first discovered by E.R. Lindgren (Arkiv Fysik 15, 1959). It has been widely believed that spot splitting is a mysterious stochastic process that has general implications for the lifetime and sustainability of wall turbulence. We address the following two questions: (1) What is the dynamics of turbulent spot splitting in pipe transition? Specifically, we look into any possible connection between the instantaneous strain-rate field and the spot splitting. (2) How does the passive scalar field behave during the process of pipe spot splitting? In this study, the turbulent spot is introduced at the inlet plane through a sixty-degree-wide numerical wedge within which fully developed turbulent profiles are assigned over a short time interval; the simulation Reynolds numbers are 2400 for a 500-radii-long pipe and 2300 for a 1000-radii-long pipe, respectively. Numerical dye is tagged on the imposed turbulent spot at the inlet. Splitting of the imposed turbulent spot is detected very easily. Preliminary analysis of the DNS results suggests that turbulent spot splitting can be readily understood from the instantaneous strain-rate field, and that such spot splitting may not be relevant in external flows such as the flat-plate boundary layer.
Vitek, Wendy S.; Galárraga, Omar; Klatsky, Peter C.; Robins, Jared C.; Carson, Sandra A.; Blazar, Andrew S.
2015-01-01
Objective To determine the cost-effectiveness of split IVF-intracytoplasmic sperm injection (ICSI) for the treatment of couples with unexplained infertility. Design Adaptive decision model. Setting Academic infertility clinic. Patient(s) A total of 154 couples undergoing a split IVF-ICSI cycle and a computer-simulated cohort of women <35 years old with unexplained infertility undergoing IVF. Intervention(s) Modeling insemination method in the first IVF cycle as all IVF, split IVF-ICSI, or all ICSI, and adapting treatment based on fertilization outcomes. Main Outcome Measure(s) Live birth rate, incremental cost-effectiveness ratio (ICER). Result(s) In a single cycle, all IVF is preferred as the ICER of split IVF-ICSI or all ICSI ($58,766) does not justify the increased live birth rate (3%). If two cycles are needed, split IVF/ICSI is preferred as the increased cumulative live birth rate (3.3%) is gained at an ICER of $29,666. Conclusion(s) In a single cycle, all IVF was preferred as the increased live birth rate with split IVF-ICSI and all ICSI was not justified by the increased cost per live birth. If two IVF cycles are needed, however, split IVF/ICSI becomes the preferred approach, as a result of the higher cumulative live birth rate compared with all IVF and the lesser cost per live birth compared with all ICSI. PMID:23876534
Vitek, Wendy S; Galárraga, Omar; Klatsky, Peter C; Robins, Jared C; Carson, Sandra A; Blazar, Andrew S
2013-11-01
To determine the cost-effectiveness of split IVF-intracytoplasmic sperm injection (ICSI) for the treatment of couples with unexplained infertility. Adaptive decision model. Academic infertility clinic. A total of 154 couples undergoing a split IVF-ICSI cycle and a computer-simulated cohort of women <35 years old with unexplained infertility undergoing IVF. Modeling insemination method in the first IVF cycle as all IVF, split IVF-ICSI, or all ICSI, and adapting treatment based on fertilization outcomes. Live birth rate, incremental cost-effectiveness ratio (ICER). In a single cycle, all IVF is preferred as the ICER of split IVF-ICSI or all ICSI ($58,766) does not justify the increased live birth rate (3%). If two cycles are needed, split IVF/ICSI is preferred as the increased cumulative live birth rate (3.3%) is gained at an ICER of $29,666. In a single cycle, all IVF was preferred as the increased live birth rate with split IVF-ICSI and all ICSI was not justified by the increased cost per live birth. If two IVF cycles are needed, however, split IVF/ICSI becomes the preferred approach, as a result of the higher cumulative live birth rate compared with all IVF and the lesser cost per live birth compared with all ICSI. Copyright © 2013 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
Seismic anisotropy across the east African plateau from shear wave splitting analysis
NASA Astrophysics Data System (ADS)
Bagley, B. C.; Nyblade, A.; Mulibo, G.; Tugume, F.
2011-12-01
Previous studies of the east African plateau reveal complicated patterns of seismic anisotropy that are not easily explained by a single mechanism. The pattern is defined by rift-parallel fast directions for stations within or near Cenozoic rift valleys, and near-null results in Precambrian terrains away from the rift. Data from 65 temporary Africa Array stations deployed between 2007 and 2011 are being used to make new shear wave splitting measurements. The stations span the east African plateau and cover both the eastern and western branches of the east African rift system, as well as unrifted Proterozoic and Archean terrains in Uganda, Kenya, Tanzania, and Zambia. Through analysis of shear wave splitting we will better constrain the distribution of seismic anisotropy, and from it gain new insight into the tectonic evolution of east Africa.
Decomposition Odour Profiling in the Air and Soil Surrounding Vertebrate Carrion
2014-01-01
Chemical profiling of decomposition odour is conducted in the environmental sciences to detect malodourous target sources in air, water or soil. More recently decomposition odour profiling has been employed in the forensic sciences to generate a profile of the volatile organic compounds (VOCs) produced by decomposed remains. The chemical profile of decomposition odour is still being debated with variations in the VOC profile attributed to the sample collection technique, method of chemical analysis, and environment in which decomposition occurred. To date, little consideration has been given to the partitioning of odour between different matrices and the impact this has on developing an accurate VOC profile. The purpose of this research was to investigate the decomposition odour profile surrounding vertebrate carrion to determine how VOCs partition between soil and air. Four pig carcasses (Sus scrofa domesticus L.) were placed on a soil surface to decompose naturally and their odour profile monitored over a period of two months. Corresponding control sites were also monitored to determine the VOC profile of the surrounding environment. Samples were collected from the soil below and the air (headspace) above the decomposed remains using sorbent tubes and analysed using gas chromatography-mass spectrometry. A total of 249 compounds were identified but only 58 compounds were common to both air and soil samples. This study has demonstrated that soil and air samples produce distinct subsets of VOCs that contribute to the overall decomposition odour. Sample collection from only one matrix will reduce the likelihood of detecting the complete spectrum of VOCs, which further confounds the issue of determining a complete and accurate decomposition odour profile. Confirmation of this profile will enhance the performance of cadaver-detection dogs that are tasked with detecting decomposition odour in both soil and air to locate victim remains. PMID:24740412
Mao, Lingai; Chen, Zhizong; Wu, Xinyue; Tang, Xiujuan; Yao, Shuiliang; Zhang, Xuming; Jiang, Boqiong; Han, Jingyi; Wu, Zuliang; Lu, Hao; Nozaki, Tomohiro
2018-04-05
A dielectric barrier discharge (DBD) catalyst hybrid reactor with CeO2/γ-Al2O3 catalyst balls was investigated for benzene decomposition at atmospheric pressure and 30 °C. At an energy density of 37-40 J/L, benzene decomposition was as high as 92.5% when using the hybrid reactor with 5.0 wt% CeO2/γ-Al2O3, while it was 10%-20% when using a normal DBD reactor without a catalyst. Benzene decomposition using the hybrid reactor was almost the same as that using an O3 catalyst reactor with the same CeO2/γ-Al2O3 catalyst, indicating that O3 plays a key role in the benzene decomposition. Fourier transform infrared spectroscopy analysis showed that O3 adsorption on CeO2/γ-Al2O3 promotes the production of adsorbed O2− and O22−, which contribute to benzene decomposition over heterogeneous catalysts. Nanoparticle by-products (phenol and 1,4-benzoquinone) from benzene decomposition can be significantly reduced using the CeO2/γ-Al2O3 catalyst. H2O inhibits benzene decomposition; however, it improves CO2 selectivity. The deactivated CeO2/γ-Al2O3 catalyst can be regenerated by performing discharges at 100 °C and 192-204 J/L. A decomposition mechanism of benzene over the CeO2/γ-Al2O3 catalyst is proposed. Copyright © 2017 Elsevier B.V. All rights reserved.
Decomposition odour profiling in the air and soil surrounding vertebrate carrion.
Forbes, Shari L; Perrault, Katelynn A
2014-01-01
Chemical profiling of decomposition odour is conducted in the environmental sciences to detect malodourous target sources in air, water or soil. More recently decomposition odour profiling has been employed in the forensic sciences to generate a profile of the volatile organic compounds (VOCs) produced by decomposed remains. The chemical profile of decomposition odour is still being debated with variations in the VOC profile attributed to the sample collection technique, method of chemical analysis, and environment in which decomposition occurred. To date, little consideration has been given to the partitioning of odour between different matrices and the impact this has on developing an accurate VOC profile. The purpose of this research was to investigate the decomposition odour profile surrounding vertebrate carrion to determine how VOCs partition between soil and air. Four pig carcasses (Sus scrofa domesticus L.) were placed on a soil surface to decompose naturally and their odour profile monitored over a period of two months. Corresponding control sites were also monitored to determine the VOC profile of the surrounding environment. Samples were collected from the soil below and the air (headspace) above the decomposed remains using sorbent tubes and analysed using gas chromatography-mass spectrometry. A total of 249 compounds were identified but only 58 compounds were common to both air and soil samples. This study has demonstrated that soil and air samples produce distinct subsets of VOCs that contribute to the overall decomposition odour. Sample collection from only one matrix will reduce the likelihood of detecting the complete spectrum of VOCs, which further confounds the issue of determining a complete and accurate decomposition odour profile. Confirmation of this profile will enhance the performance of cadaver-detection dogs that are tasked with detecting decomposition odour in both soil and air to locate victim remains.
NASA Astrophysics Data System (ADS)
Williams, E. K.; Rosenheim, B. E.
2011-12-01
Ramped pyrolysis methodology, such as that used in the programmed-temperature pyrolysis/combustion system (PTP/CS), improves radiocarbon analysis of geologic materials devoid of authigenic carbonate compounds and with low concentrations of extractable autochthonous organic molecules. The approach has improved sediment chronology in organic-rich sediments proximal to Antarctic ice shelves (Rosenheim et al., 2008) and constrained the carbon sequestration potential of suspended sediments in the lower Mississippi River (Roe et al., in review). Although ramped pyrolysis allows for separation of sedimentary organic material based upon relative reactivity, chemical information (i.e. the chemical composition of pyrolysis products) is lost during the in-line combustion of pyrolysis products. A first order approximation of ramped pyrolysis/combustion system CO2 evolution, employing a simple Gaussian decomposition routine, has been useful (Rosenheim et al., 2008), but improvements may be possible. First, without prior compound-specific extractions, the molecular composition of sedimentary organic matter is unknown and/or unidentifiable. Second, even if determined as constituents of sedimentary organic material, many organic compounds have unknown or variable decomposition temperatures. Third, mixtures of organic compounds may result in significant chemistry within the pyrolysis reactor, prior to introduction of oxygen along the flow path. Gaussian decomposition of the reaction rate may be too simple to fully explain the combination of these factors. To relate both the radiocarbon age over different temperature intervals and the pyrolysis reaction thermograph (temperature (°C) vs. CO2 evolved (μmol)) obtained from PTP/CS to the chemical composition of sedimentary organic material, we present a modeling framework based upon the ramped pyrolysis decomposition of simple mixtures of organic compounds (i.e. cellulose, lignin, plant fatty acids, etc.) often found in sedimentary organic material to account for changes in thermograph shape. The decompositions will be compositionally verified by 13C NMR analysis of pyrolysis residues from interrupted reactions. This will allow for constraint of decomposition temperatures of individual compounds as well as chemical reactions between volatilized moieties in mixtures of these compounds. We will apply this framework with 13C NMR analysis of interrupted pyrolysis residues and radiocarbon data from PTP/CS analysis of sedimentary organic material from a freshwater marsh wetland in Barataria Bay, Louisiana. We expect to characterize the bulk chemical composition during pyrolysis as well as diagenetic changes with depth. Most importantly, we expect to constrain the potential and the limitations of this modeling framework for application to other depositional environments.
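The "simple Gaussian decomposition routine" referred to above amounts to fitting the CO2 evolution thermograph with a sum of Gaussian components, each loosely associated with a pool of differing thermal stability. A minimal sketch follows; the synthetic thermograph and the two-component choice are illustrative only.

```python
# Fit a ramped-pyrolysis thermograph with a sum of Gaussian components.
import numpy as np
from scipy.optimize import curve_fit

def gaussians(T, *params):                      # params = a1, mu1, sig1, a2, mu2, sig2, ...
    y = np.zeros_like(T)
    for a, mu, sig in zip(params[0::3], params[1::3], params[2::3]):
        y += a * np.exp(-(T - mu) ** 2 / (2 * sig**2))
    return y

T = np.linspace(150, 850, 300)                  # ramp temperature, deg C
co2 = gaussians(T, 4.0, 330, 45, 2.5, 520, 70)  # synthetic CO2 evolution (umol)
co2 += np.random.default_rng(8).normal(0, 0.05, T.size)

p0 = [3, 300, 50, 3, 500, 50]                   # initial guesses for two components
popt, _ = curve_fit(gaussians, T, co2, p0=p0)
for i in range(0, len(popt), 3):
    a, mu, sig = popt[i:i + 3]
    print(f"component: centre {mu:.0f} C, width {sig:.0f} C, area ~ {a * sig * np.sqrt(2 * np.pi):.0f}")
```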
Reversible perspective and splitting in time.
Hart, Helen Schoenhals
2012-01-01
The element of time--the experience of it and the defensive use of it--is explored in conjunction with the use of reversible perspective as a psychotic defense. Clinical material from a long analysis illustrates how a psychotic patient used the reversible perspective, with its static splitting, to abolish the experience of time. When he improved and the reversible perspective became less effective for him, he replaced it with a more dynamic splitting mechanism using time gaps. With further improvement, the patient began to experience the passage of time, and along with it the excruciating pain of separation, envy, and loss.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Feng; Tominaga, Keisuke; Hayashi, Michitoshi
2014-05-07
The phonon modes of molecular crystals in the terahertz frequency region often feature delicately coupled inter- and intra-molecular vibrations. Recent advances in density functional theory such as DFT-D* have enabled accurate frequency calculation. However, the nature of normal modes has not been quantitatively discussed against experimental criteria such as isotope shift (IS) and correlation field splitting (CFS). Here, we report an analytical mode-decoupling method that allows for the decomposition of a normal mode of interest into intermolecular translation, libration, and intramolecular vibrational motions. We show an application of this method using the crystalline anthracene system as an example. The relationship between the experimentally obtained IS and the IS obtained by PBE-D* simulation indicates that two distinctive regions exist. Region I is associated with a pure intermolecular translation, whereas region II features coupled intramolecular vibrations that are further coupled by a weak intermolecular translation. We find that the PBE-D* data show excellent agreement with the experimental data in terms of IS and CFS in region II; however, PBE-D* produces significant deviations in IS in region I where strong coupling between inter- and intra-molecular vibrations contributes to normal modes. The result of this analysis is expected to facilitate future improvement of DFT-D*.
Fogel, Paul; Gaston-Mathé, Yann; Hawkins, Douglas; Fogel, Fajwel; Luta, George; Young, S. Stanley
2016-01-01
Often data can be represented as a matrix, e.g., observations as rows and variables as columns, or as a doubly classified contingency table. Researchers may be interested in clustering the observations, the variables, or both. If the data is non-negative, then Non-negative Matrix Factorization (NMF) can be used to perform the clustering. By its nature, NMF-based clustering is focused on the large values. If the data is normalized by subtracting the row/column means, it becomes of mixed signs and the original NMF cannot be used. Our idea is to split and then concatenate the positive and negative parts of the matrix, after taking the absolute value of the negative elements. NMF applied to the concatenated data, which we call PosNegNMF, offers the advantages of the original NMF approach, while giving equal weight to large and small values. We use two public health datasets to illustrate the new method and compare it with alternative clustering methods, such as K-means and clustering methods based on the Singular Value Decomposition (SVD) or Principal Component Analysis (PCA). With the exception of situations where a reasonably accurate factorization can be achieved using the first SVD component, we recommend that the epidemiologists and environmental scientists use the new method to obtain clusters with improved quality and interpretability. PMID:27213413
Fogel, Paul; Gaston-Mathé, Yann; Hawkins, Douglas; Fogel, Fajwel; Luta, George; Young, S Stanley
2016-05-18
Often data can be represented as a matrix, e.g., observations as rows and variables as columns, or as a doubly classified contingency table. Researchers may be interested in clustering the observations, the variables, or both. If the data is non-negative, then Non-negative Matrix Factorization (NMF) can be used to perform the clustering. By its nature, NMF-based clustering is focused on the large values. If the data is normalized by subtracting the row/column means, it becomes of mixed signs and the original NMF cannot be used. Our idea is to split and then concatenate the positive and negative parts of the matrix, after taking the absolute value of the negative elements. NMF applied to the concatenated data, which we call PosNegNMF, offers the advantages of the original NMF approach, while giving equal weight to large and small values. We use two public health datasets to illustrate the new method and compare it with alternative clustering methods, such as K-means and clustering methods based on the Singular Value Decomposition (SVD) or Principal Component Analysis (PCA). With the exception of situations where a reasonably accurate factorization can be achieved using the first SVD component, we recommend that the epidemiologists and environmental scientists use the new method to obtain clusters with improved quality and interpretability.
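The split-and-concatenate idea described above can be sketched directly: center the data, stack the positive part and the absolute value of the negative part, and factor the resulting non-negative matrix with NMF. The sketch below uses scikit-learn on synthetic data and illustrates the general approach, not the authors' exact PosNegNMF procedure.

```python
# Split a centered (mixed-sign) matrix into positive and negative parts, then apply NMF.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(9)
X = rng.standard_normal((100, 12))               # observations x variables (synthetic)
Xc = X - X.mean(axis=0)                          # mixed-sign after removing column means

X_pos = np.clip(Xc, 0, None)                     # positive part
X_neg = np.clip(-Xc, 0, None)                    # absolute value of the negative part
X_stack = np.hstack([X_pos, X_neg])              # non-negative matrix, twice the columns

model = NMF(n_components=4, init='nndsvda', max_iter=500, random_state=0)
W = model.fit_transform(X_stack)                 # row (observation) loadings
H = model.components_                            # column loadings (positive and negative halves)
clusters = W.argmax(axis=1)                      # assign each observation to a cluster
print("cluster sizes:", np.bincount(clusters))
```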
Double Bounce Component in Cross-Polarimetric SAR from a New Scattering Target Decomposition
NASA Astrophysics Data System (ADS)
Hong, Sang-Hoon; Wdowinski, Shimon
2013-08-01
Common vegetation scattering theories assume that the Synthetic Aperture Radar (SAR) cross-polarization (cross-pol) signal represents solely volume scattering. We found this assumption incorrect based on SAR phase measurements acquired over the south Florida Everglades wetlands indicating that the cross-pol radar signal often samples the water surface beneath the vegetation. Based on these new observations, we propose that the cross-pol measurement consists of both volume scattering and double bounce components. The simplest multi-bounce scattering mechanism that generates a cross-pol signal occurs by rotated dihedrals. Thus, we use the rotated dihedral mechanism with a probability density function to revise some of the vegetation scattering theories and develop a three-component decomposition algorithm with single bounce, double bounce from both co-pol and cross-pol, and volume scattering components. We applied the new decomposition analysis to both urban and rural environments using Radarsat-2 quad-pol datasets. The decomposition of the San Francisco urban area shows higher double bounce scattering and reduced volume scattering compared to other common three-component decompositions. The decomposition of the rural Everglades area shows that the relations between volume and cross-pol double bounce depend on the vegetation density. The new decomposition can be useful for better understanding vegetation scattering behavior over various surfaces and for the estimation of above-ground biomass using SAR observations.
Vega, Daniel R; Baggio, Ricardo; Roca, Mariana; Tombari, Dora
2011-04-01
The "aging-driven" decomposition of zolpidem hemitartrate hemihydrate (form A) has been followed by X-ray powder diffraction (XRPD), and the crystal and molecular structures of the decomposition products studied by single-crystal methods. The process is very similar to the "thermally driven" one, recently described in the literature for form E (Halasz and Dinnebier. 2010. J Pharm Sci 99(2): 871-874), resulting in a two-phase system: the neutral free base (common to both decomposition processes) and, in the present case, a novel zolpidem tartrate monohydrate, unique to the "aging-driven" decomposition. Our room-temperature single-crystal analysis gives for the free base comparable results as the high-temperature XRPD ones already reported by Halasz and Dinnebier: orthorhombic, Pcba, a = 9.6360(10) Å, b = 18.2690(5) Å, c = 18.4980(11) Å, and V = 3256.4(4) Å(3) . The unreported zolpidem tartrate monohydrate instead crystallizes in monoclinic P21 , which, for comparison purposes, we treated in the nonstandard setting P1121 with a = 20.7582(9) Å, b = 15.2331(5) Å, c = 7.2420(2) Å, γ = 90.826(2)°, and V = 2289.73(14) Å(3) . The structure presents two complete moieties in the asymmetric unit (z = 4, z' = 2). The different phases obtained in both decompositions are readily explained, considering the diverse genesis of both processes. Copyright © 2010 Wiley-Liss, Inc.
NASA Technical Reports Server (NTRS)
Lombard, C. K.
1982-01-01
A conservative flux difference splitting is presented for the hyperbolic systems of gasdynamics. The stable, robust method is suitable for wide application in a variety of schemes, explicit or implicit, iterative or direct, for marching in either time or space. The splitting is modeled on the local quasi one dimensional characteristics system for multi-dimensional flow, similar to Chakravarthy's nonconservative split coefficient matrix method; but, as the result of maintaining global conservation, the method is able to capture sharp shocks correctly. The embedded characteristics formulation is cast in a primitive variable, the volumetric internal energy (rather than the pressure), that is effective for treating real as well as perfect gases. Finally, the relationship of the splitting to characteristics boundary conditions is discussed and the associated conservative matrix formulation for a computed blown wall boundary condition is developed as an example. The theoretical development employs and extends the notion of Roe of constructing stable upwind difference formulae by sending split simple one sided flux difference pieces to appropriate mesh sites. The developments are also believed to have the potential for aiding in the analysis of both existing and new conservative difference schemes.
Wang, Xiaoyue; Wang, Feng; Jiang, Yuji
2013-01-01
Decomposition of plant residues is largely mediated by soil-dwelling microorganisms whose activities are influenced by both climate conditions and properties of the soil. However, a comprehensive understanding of their relative importance remains elusive, mainly because traditional methods, such as soil incubation and environmental surveys, have a limited ability to differentiate between the combined effects of climate and soil. Here, we performed a large-scale reciprocal soil transplantation experiment, whereby microbial communities associated with straw decomposition were examined in three initially identical soils placed in parallel in three climate regions of China (red soil, Chao soil, and black soil, located in midsubtropical, warm-temperate, and cold-temperate zones). Maize straws buried in mesh bags were sampled at 0.5, 1, and 2 years after the burial and subjected to chemical, physical, and microbiological analyses, e.g., phospholipid fatty acid analysis for microbial abundance, community-level physiological profiling, and 16S rRNA gene denaturing gradient gel electrophoresis for functional and phylogenetic diversity, respectively. Results of aggregated boosted tree analysis show that location rather than soil is the primary determining factor for the rate of straw decomposition and the structures of the associated microbial communities. Principal component analysis indicates that the straw communities are primarily grouped by location at any of the three time points. In contrast, microbial communities in bulk soil remained closely related to one another for each soil. Together, our data suggest that climate (specifically, geographic location) has stronger effects than soil on straw decomposition; moreover, the succession of microbial communities in soils is slower than that in straw residues in response to climate changes. PMID:23524671
NASA Astrophysics Data System (ADS)
Perrin, Agnes; Kwabia Tchana, F.; Flaud, Jean-Marie; Manceron, Laurent; Demaison, Jean; Vogt, Natalja; Groner, Peter; Lafferty, Walter
2015-06-01
A high resolution (0.0015 cm-1) IR spectrum of propane, C_3H_8, has been recorded with synchrotron radiation at the French light source facility SOLEIL coupled to a Bruker IFS-125 Fourier transform spectrometer. A preliminary analysis of the ν21 fundamental band (B1, CH3 rock) near 921.4 cm-1 reveals that the rotational energy levels of the ν21 = 1 state are split by interactions with the internal rotations of the methyl groups. Conventional analysis of this A-type band yielded band centers at 921.3724(38), 921.3821(33) and 921.3913(44) cm-1 for the AA, EE and AE+EA tunneling splitting components, respectively. These torsional splittings most probably are due to anharmonic and/or Coriolis resonance coupling with nearby highly excited states of both internal rotations of the methyl groups. In addition, several vibrational-rotational resonances were observed that affect the torsional components in different ways. The analysis of the B-type band near 870 cm-1 (ν8, sym. C-C stretch), which also contains split rovibrational transitions due to internal rotation, is in progress. It is performed using the effective rotational Hamiltonian method ERHAM with a code that allows prediction and least-squares fitting of such vibration-rotation spectra. A. Perrin et al., submitted to J. Mol. Spectrosc.; P. Groner, J. Chem. Phys. 107 (1997) 4483; J. Mol. Spectrosc. 278 (2012) 52.
System Modeling of kJ-class Petawatt Lasers at LLNL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shverdin, M Y; Rushford, M; Henesian, M A
2010-04-14
The Advanced Radiographic Capability (ARC) project at the National Ignition Facility (NIF) is designed to produce energetic, ultrafast x-rays in the range of 70-100 keV for backlighting NIF targets. The chirped pulse amplification (CPA) laser system will deliver kilojoule pulses at an adjustable pulse duration from 1 ps to 50 ps. System complexity requires sophisticated simulation and modeling tools for design, performance prediction, and comprehension of experimental results. We provide a brief overview of ARC, present our main modeling tools, and describe important performance predictions. The laser system (Fig. 1) consists of an all-fiber front end, including chirped-fiber Bragg grating (CFBG) stretchers. The beam after the final fiber amplifier is split into two apertures and spatially shaped. The split beam first seeds a regenerative amplifier and is then amplified in a multi-pass Nd:glass amplifier. Next, the preamplified chirped pulse is split in time into four identical replicas and injected into one NIF Quad. At the output of the NIF beamline, each of the eight amplified pulses is compressed in an individual, folded, four-grating compressor. Compressor grating pairs have slightly different groove densities to enable a compact folding geometry and eliminate adjacent-beam cross-talk. Pulse duration is adjustable with a small, rack-mounted compressor in the front end. We use non-sequential ray-tracing software, FRED, for design and layout of the optical system. Currently, our FRED model includes all of the optical components from the output of the fiber front end to the target center (Fig. 2). CAD-designed opto-mechanical components are imported into our FRED model to provide a complete system description. In addition to incoherent ray tracing and scattering analysis, FRED uses Gaussian beam decomposition to model coherent beam propagation. Neglecting nonlinear effects, we can obtain a nearly complete frequency-domain description of the ARC beam at different stages in the system. We employ 3D Fourier-based propagation codes: MIRO, Virtual Beamline (VBL), and PROP for time-domain pulse analysis. These codes simulate nonlinear effects, calculate near- and far-field beam profiles, and account for amplifier gain. Verification of correct system set-up is a major difficulty in using these codes. VBL and PROP predictions have been extensively benchmarked to NIF experiments, and the verified descriptions of specific NIF beamlines are used for ARC. MIRO has the added capability of treating bandwidth-specific effects of CPA. A sample MIRO model of the NIF beamline is shown in Fig. 3. MIRO models are benchmarked to VBL and PROP in the narrow-bandwidth mode. Developing a variety of simulation tools allows us to cross-check predictions of different models and gain confidence in their fidelity. Preliminary experiments, currently in progress, are allowing us to validate and refine our models, and help guide future experimental campaigns.
Thermal decomposition of ammonium hexachloroosmate.
Asanova, T I; Kantor, I; Asanov, I P; Korenev, S V; Yusenko, K V
2016-12-07
Structural changes of (NH4)2[OsCl6] occurring during thermal decomposition in a reducing atmosphere have been studied in situ using combined energy-dispersive X-ray absorption spectroscopy (ED-XAFS) and powder X-ray diffraction (PXRD). According to PXRD, (NH4)2[OsCl6] transforms directly to metallic Os without the formation of any crystalline intermediates, but through a plateau where no reactions occur. XANES and EXAFS data analyzed by means of Multivariate Curve Resolution (MCR) show that thermal decomposition occurs with the formation of an amorphous intermediate {OsCl4}x with a possible polymeric structure. The intermediate, revealed here for the first time, was examined to determine the local atomic structure around osmium. The thermal decomposition of hexachloroosmate is thus much more complex and occurs through at least a two-step process, which has never been observed before.
Applications of singular value analysis and partial-step algorithm for nonlinear orbit determination
NASA Technical Reports Server (NTRS)
Ryne, Mark S.; Wang, Tseng-Chan
1991-01-01
An adaptive method in which cruise and nonlinear orbit determination problems can be solved using a single program is presented. It involves singular value decomposition augmented with an extended partial step algorithm. The extended partial step algorithm constrains the size of the correction to the spacecraft state and other solve-for parameters. The correction is controlled by an a priori covariance and a user-supplied bounds parameter. The extended partial step method is an extension of the update portion of the singular value decomposition algorithm. It thus preserves the numerical stability of the singular value decomposition method, while extending the region over which it converges. In linear cases, this method reduces to the singular value decomposition algorithm with the full rank solution. Two examples are presented to illustrate the method's utility.
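As a rough illustration of the general idea (an SVD-based least-squares correction whose size is limited by a user-supplied bound), the following Python sketch may help; the function name, the scaling and the simple norm-based bounding rule are assumptions for illustration, not the implementation described above:

```python
# Schematic sketch of an SVD solution with a bounded (partial) correction step.
import numpy as np

def svd_partial_step(A, residual, bound, rcond=1e-12):
    """Least-squares correction dx solving A dx ~ residual, with its size limited."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > rcond * s.max(), 1.0 / s, 0.0)   # drop tiny singular values
    dx = Vt.T @ (s_inv * (U.T @ residual))                # SVD (pseudo-inverse) solution
    norm = np.linalg.norm(dx)
    if norm > bound:                                      # partial step: shrink the correction
        dx *= bound / norm
    return dx

# Example: bound the state correction in a mock nonlinear iteration
A = np.random.randn(20, 6)
r = np.random.randn(20)
dx = svd_partial_step(A, r, bound=0.5)
```

In the linear, unbounded case the sketch reduces to the plain full-rank SVD solution, mirroring the behaviour noted above.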
May, I.; Rowe, J.J.
1965-01-01
A modified Morey bomb was designed which contains a removable nichrome-cased 3.5-ml platinum crucible. This bomb is particularly useful for decompositions of refractory samples for micro- and semimicro-analysis. Temperatures of 400-450 °C and pressures estimated as great as 6000 p.s.i. were maintained in the bomb for periods as long as 24 h. Complete decompositions of rocks, garnet, beryl, chrysoberyl, phenacite, sapphirine, and kyanite were obtained with hydrofluoric acid or a mixture of hydrofluoric and sulfuric acids; the decomposition of chrome refractory was made with hydrochloric acid. Aluminum-rich samples formed difficultly soluble aluminum fluoride precipitates. Because no volatilization losses occur, silica can be determined on sample solutions by a molybdenum-blue procedure using aluminum(III) to complex interfering fluoride. © 1965.
Vortmann, Britta; Nowak, Sascha; Engelhard, Carsten
2013-03-19
Lithium ion batteries (LIBs) are key components for portable electronic devices that are used around the world. However, thermal decomposition products in the battery reduce its lifetime, and decomposition processes are still not understood. In this study, a rapid method for in situ analysis and reaction monitoring in LIB electrolytes is presented based on high-resolution mass spectrometry (HR-MS) with a low-temperature plasma probe (LTP) for ambient desorption/ionization for the first time. This proof-of-principle study demonstrates the capabilities of ambient mass spectrometry in battery research. LTP-HR-MS is ideally suited for qualitative analysis in the ambient environment because it allows direct sample analysis independent of the sample size, geometry, and structure. Further, it is environmentally friendly because it eliminates the need for organic solvents that are typically used in separation techniques coupled to mass spectrometry. Accurate mass measurements were used to identify the time-/condition-dependent formation of electrolyte decomposition compounds. A LIB model electrolyte containing ethylene carbonate and dimethyl carbonate was analyzed before and after controlled thermal stress and over the course of several weeks. Major decomposition products identified include difluorophosphoric acid, monofluorophosphoric acid methyl ester, monofluorophosphoric acid dimethyl ester, and hexafluorophosphate. Solvents (i.e., dimethyl carbonate) were partly consumed via an esterification pathway. LTP-HR-MS is considered to be an attractive method for fundamental LIB studies.
Carbon dioxide emissions from the electricity sector in major countries: a decomposition analysis.
Li, Xiangzheng; Liao, Hua; Du, Yun-Fei; Wang, Ce; Wang, Jin-Wei; Liu, Yanan
2018-03-01
The electric power sector is one of the primary sources of CO2 emissions. Analyzing the influential factors that result in CO2 emissions from the power sector would provide valuable information to reduce the world's CO2 emissions. Herein, we applied the Divisia decomposition method to analyze the influential factors for CO2 emissions from the power sector in 11 countries, which account for 67% of the world's emissions, from 1990 to 2013. We decompose the influential factors for CO2 emissions into seven areas: the emission coefficient, energy intensity, the share of electricity generation, the share of thermal power generation, electricity intensity, economic activity, and population. The decomposition analysis results show that economic activity, population, and the emission coefficient have positive roles in increasing CO2 emissions, and their contribution rates are 119, 23.9, and 0.5%, respectively. Energy intensity, electricity intensity, the share of electricity generation, and the share of thermal power generation curb CO2 emissions, and their contribution rates are 17.2, 15.7, 7.7, and 2.8%, respectively. Through decomposition analysis for each country, economic activity and population are the major factors responsible for increasing CO2 emissions from the power sector. However, the other factors in developed countries can offset the growth in CO2 emissions due to economic activities.
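For reference, index decompositions of this kind are often carried out with the logarithmic-mean Divisia index (LMDI); a generic additive form, written here only to illustrate the method (the paper's exact factor definitions and Divisia variant may differ), is

\[
\Delta C = C^{T} - C^{0} = \sum_{k} \Delta C_{k}, \qquad
\Delta C_{k} = \sum_{i} L\!\left(C_i^{T}, C_i^{0}\right)
\ln\frac{x_{k,i}^{T}}{x_{k,i}^{0}}, \qquad
L(a,b) = \frac{a-b}{\ln a - \ln b},
\]

where $C_i$ is the emission of sub-category $i$ (e.g., a country or fuel), $x_{k,i}$ is the $k$-th factor (emission coefficient, energy intensity, generation shares, electricity intensity, economic activity, population), and the superscripts $0$ and $T$ denote the base and target years.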
Flexible Mediation Analysis With Multiple Mediators.
Steen, Johan; Loeys, Tom; Moerkerke, Beatrijs; Vansteelandt, Stijn
2017-07-15
The advent of counterfactual-based mediation analysis has triggered enormous progress on how, and under what assumptions, one may disentangle path-specific effects upon combining arbitrary (possibly nonlinear) models for mediator and outcome. However, current developments have largely focused on single mediators because required identification assumptions prohibit simple extensions to settings with multiple mediators that may depend on one another. In this article, we propose a procedure for obtaining fine-grained decompositions that may still be recovered from observed data in such complex settings. We first show that existing analytical approaches target specific instances of a more general set of decompositions and may therefore fail to provide a comprehensive assessment of the processes that underpin cause-effect relationships between exposure and outcome. We then outline conditions for obtaining the remaining set of decompositions. Because the number of targeted decompositions increases rapidly with the number of mediators, we introduce natural effects models along with estimation methods that allow for flexible and parsimonious modeling. Our procedure can easily be implemented using off-the-shelf software and is illustrated using a reanalysis of the World Health Organization's Large Analysis and Review of European Housing and Health Status (WHO-LARES) study on the effect of mold exposure on mental health (2002-2003). © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
A two-stage linear discriminant analysis via QR-decomposition.
Ye, Jieping; Li, Qi
2005-06-01
Linear Discriminant Analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problems; that is, it fails when all scatter matrices are singular. Many LDA extensions were proposed in the past to overcome the singularity problems. Among these extensions, PCA+LDA, a two-stage method, received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using Principal Component Analysis (PCA). Most previous LDA extensions are computationally expensive, and not scalable, due to the use of Singular Value Decomposition or Generalized Singular Value Decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problems of classical LDA, while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship among LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.
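The two-stage structure described above is compact enough to sketch. The following Python outline, written from the description in the abstract (variable names, the eigen-solver and the random example are illustrative, not the authors' code), applies QR decomposition to the class-centroid matrix and then classical LDA in the reduced space:

```python
# Minimal sketch of a two-stage LDA via QR decomposition (LDA/QR-style).
import numpy as np

def lda_qr(X, y):
    """X: (n_samples, n_features); y: integer class labels. Returns discriminant directions."""
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes], axis=1)  # (d, k)
    Q, _ = np.linalg.qr(centroids)              # stage 1: orthonormal basis of the centroids
    Z = X @ Q                                   # data in the reduced k-dimensional space
    mu = Z.mean(axis=0)
    Sw = np.zeros((Q.shape[1], Q.shape[1]))     # within-class scatter (reduced space)
    Sb = np.zeros_like(Sw)                      # between-class scatter (reduced space)
    for c in classes:                           # stage 2: classical LDA in the reduced space
        Zc = Z[y == c]
        d = Zc - Zc.mean(axis=0)
        Sw += d.T @ d
        m = (Zc.mean(axis=0) - mu)[:, None]
        Sb += Zc.shape[0] * (m @ m.T)
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(evals.real)[::-1]
    return Q @ evecs[:, order].real             # map directions back to the original space

# Example: 3 classes in 50 dimensions; project new data with X_new @ G
X = np.random.randn(90, 50)
y = np.repeat([0, 1, 2], 30)
G = lda_qr(X, y)
```

Because the QR step involves only the small centroid matrix, the within-class scatter in the reduced space is small and generally nonsingular, which is what lets a sketch like this avoid the SVD/GSVD machinery mentioned above.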
Yokoyama, Hikaru; Sato, Koji; Ogawa, Tetsuya; Yamamoto, Shin-Ichiro; Nakazawa, Kimitaka; Kawashima, Noritaka
2018-01-01
The adaptability of human bipedal locomotion has been studied using split-belt treadmill walking. Most previous studies utilized experimental protocols with remarkably different split ratios (e.g., 1:2, 1:3, or 1:4), while there is limited research on the adaptive process under small speed ratios. It is important to know the nature of the adaptive process under ratios smaller than 1:2, because systematic evaluation of gait adaptation under small to moderate split ratios would enable us to examine the relative contribution of two forms of adaptation (reactive feedback and predictive feedforward control) to gait adaptation. We therefore examined gait behavior during split-belt treadmill adaptation under five belt speed difference conditions (from 1:1.2 to 1:2). Gait parameters related to reactive control (stance time) showed quick adjustments immediately after imposing the split-belt walking in all five speed ratios. Meanwhile, parameters related to predictive control (step length and anterior force) showed a clear pattern of adaptation and subsequent aftereffects except for the 1:1.2 adaptation. Additionally, the 1:1.2 ratio was distinguished from the other ratios by cluster analysis based on the relationship between the size of adaptation and the aftereffect. Our findings indicate that reactive feedback control was involved at all the speed ratios tested and that the extent of reaction was proportionally dependent on the speed ratio of the split-belt. On the contrary, predictive feedforward control was necessary when the ratio of the split-belt was greater. These results enable us to consider how a given split-belt training condition would affect the relative contribution of the two strategies to gait adaptation, which must be considered when developing rehabilitation interventions for stroke patients.
Modular analysis of biological networks.
Kaltenbach, Hans-Michael; Stelling, Jörg
2012-01-01
The analysis of complex biological networks has traditionally relied on decomposition into smaller, semi-autonomous units such as individual signaling pathways. With the increased scope of systems biology (models), rational approaches to modularization have become an important topic. With increasing acceptance of de facto modularity in biology, widely different definitions of what constitutes a module have sparked controversies. Here, we therefore review prominent classes of modular approaches based on formal network representations. Despite some promising research directions, several important theoretical challenges remain open on the way to formal, function-centered modular decompositions for dynamic biological networks.
NASA Astrophysics Data System (ADS)
Tobler, M.; White, D. A.; Abbene, M. L.; Burst, S. L.; McCulley, R. L.; Barnes, P. W.
2016-02-01
Decomposition is a crucial component of global biogeochemical cycles that influences the fate and residence time of carbon and nutrients in organic matter pools, yet the processes controlling litter decomposition in coastal marshes are not fully understood. We conducted a series of field studies to examine what role photodegradation, a process driven in part by solar UV radiation (280-400 nm), plays in the decomposition of the standing dead litter of Sagittaria lancifolia and Spartina patens, two common species in marshes of intermediate salinity in southern Louisiana, USA. Results indicate that the exclusion of solar UV significantly altered litter mass loss, but the magnitude and direction of these effects varied depending on species, height of the litter above the water surface and the stage of decomposition. Over one growing season, S. lancifolia litter exposed to ambient solar UV had significantly less mass loss compared to litter exposed to attenuated UV over the initial phase of decomposition (0-5 months; ANOVA P=0.004); treatment effects then reversed in the latter phase of the study (5-7 months; ANOVA P<0.001). Similar results were found in S. patens over an 11-month period. UV exposure reduced total C, N and lignin by 24-33% in the remaining tissue, with treatment differences most pronounced in S. patens. Phospholipid fatty-acid analysis (PLFA) indicated that UV also significantly altered microbial (bacterial) biomass and bacteria:fungi ratios of decomposing litter. These findings, and others, indicate that solar UV can have positive and negative net effects on litter decomposition in marsh plants, with inhibition of biotic (microbial) processes early in decomposition shifting to enhancement of decomposition via abiotic (photodegradation) processes later on. Photodegradation of standing litter represents a potentially significant pathway of C and N loss from these coastal wetland ecosystems.
Decomposition of the Total Effect in the Presence of Multiple Mediators and Interactions.
Bellavia, Andrea; Valeri, Linda
2018-06-01
Mediation analysis allows decomposing a total effect into a direct effect of the exposure on the outcome and an indirect effect operating through a number of possible hypothesized pathways. Recent studies have provided formal definitions of direct and indirect effects when multiple mediators are of interest and have described parametric and semiparametric methods for their estimation. Investigating direct and indirect effects with multiple mediators, however, can be challenging in the presence of multiple exposure-mediator and mediator-mediator interactions. In this paper we derive a decomposition of the total effect that unifies mediation and interaction when multiple mediators are present. We illustrate the properties of the proposed framework in a secondary analysis of a pragmatic trial for the treatment of schizophrenia. The decomposition is employed to investigate the interplay of side effects and psychiatric symptoms in explaining the effect of antipsychotic medication on quality of life in schizophrenia patients. Our result offers a valuable tool to identify the proportions of total effect due to mediation and interaction when more than one mediator is present, providing the finest decomposition of the total effect that unifies multiple mediators and interactions.
Zhang, Lisha; Zhang, Songhe; Lv, Xiaoyang; Qiu, Zheng; Zhang, Ziqiu; Yan, Liying
2018-08-15
This study investigated the alterations in biomass, nutrients and dissolved organic matter concentration in overlying water and determined the bacterial 16S rRNA gene in biofilms attached to plant residues during the decomposition of Myriophyllum verticillatum. The 55-day decomposition experimental results show that the plant decay process can be well described by the exponential model, with an average decomposition rate of 0.037 d-1. Total organic carbon, total nitrogen, and organic nitrogen concentrations increased significantly in overlying water during decomposition compared to the control within 35 d. Results from excitation emission matrix-parallel factor analysis showed that humic acid-like and tyrosine-like substances might originate from plant degradation processes. Tyrosine-like substances had an obvious correlation with organic nitrogen and total nitrogen (p<0.01). Decomposition rates were positively related to pH, total organic carbon, oxidation-reduction potential and dissolved oxygen but negatively related to temperature in overlying water. The density of microbes attached to plant residues increased as decomposition proceeded. The most dominant phylum was Bacteroidetes (>46%) at 7 d, Chlorobi (20%-44%) or Proteobacteria (25%-34%) at 21 d, and Chlorobi (>40%) at 55 d. Among microbes attached to plant residues, sugar- and polysaccharide-degrading genera including Bacteroides, Blvii28, Fibrobacter, and Treponema dominated at 7 d, while Chlorobaculum, Rhodobacter, Methanobacterium, Thiobaca, Methanospirillum and Methanosarcina dominated at 21 d and 55 d. These results provide insight into dissolved organic matter release and bacterial community shifts during the decomposition of submerged macrophytes. Copyright © 2018 Elsevier B.V. All rights reserved.
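The exponential model referred to above is the standard single-pool litter decay model; with the reported average rate constant it reads (notation assumed here):

\[
M_t = M_0\, e^{-kt}, \qquad k \approx 0.037\ \mathrm{d^{-1}},
\]

where $M_0$ is the initial residue mass and $M_t$ the mass remaining after $t$ days, so roughly half of the plant material is lost in about $\ln 2 / k \approx 19$ days.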
Study on the decomposition of trace benzene over V2O5-WO3 ...
Commercial and laboratory-prepared V2O5–WO3/TiO2-based catalysts with different compositions were tested for catalytic decomposition of chlorobenzene (ClBz) in simulated flue gas. Resonance enhanced multiphoton ionization-time of flight mass spectrometry (REMPI-TOFMS) was employed to measure real-time, trace concentrations of ClBz contained in the flue gas before and after the catalyst. The effects of various parameters, including vanadium content of the catalyst, the catalyst support, as well as the reaction temperature on decomposition of ClBz were investigated. The results showed that the ClBz decomposition efficiency was significantly enhanced when nano-TiO2 instead of conventional TiO2 was used as the catalyst support. No promotion effects were found in the ClBz decomposition process when the catalysts were wet-impregnated with CuO and CeO2. Tests with different concentrations (1,000, 500, and 100 ppb) of ClBz showed that ClBz-decomposition efficiency decreased with increasing concentration, unless active sites were plentiful. A comparison between ClBz and benzene decomposition on the V2O5–WO3/TiO2-based catalyst and the relative kinetics analysis showed that two different active sites were likely involved in the decomposition mechanism and the V=O and V-O-Ti groups may only work for the degradation of the phenyl group and the benzene ring rather than the C-Cl bond. V2O5-WO3/TiO2 based catalysts, that have been used for destruction of a wide variet
The Split Virus Influenza Vaccine rapidly activates immune cells through Fcγ receptors.
O'Gorman, William E; Huang, Huang; Wei, Yu-Ling; Davis, Kara L; Leipold, Michael D; Bendall, Sean C; Kidd, Brian A; Dekker, Cornelia L; Maecker, Holden T; Chien, Yueh-Hsiu; Davis, Mark M
2014-10-14
Seasonal influenza vaccination is one of the most common medical procedures, and yet the extent to which it activates the immune system beyond inducing antibody production is not well understood. In the United States, the most prevalent formulations of the vaccine consist of degraded or "split" viral particles distributed without any adjuvants. Based on previous reports, we sought to determine whether the split influenza vaccine activates innate immune receptors, specifically Toll-like receptors. High-dimensional proteomic profiling of human whole-blood using Cytometry by Time-of-Flight (CyTOF) was used to compare signaling pathway activation and cytokine production between the split influenza vaccine and a prototypical TLR response ex vivo. This analysis revealed that the split vaccine rapidly and potently activates multiple immune cell types but yields a proteomic signature quite distinct from TLR activation. Importantly, vaccine-induced activity was dependent upon the presence of human sera, indicating that a serum factor was necessary for vaccine-dependent immune activation. We found this serum factor to be human antibodies specific for influenza proteins, and therefore immediate immune activation by the split vaccine is immune-complex dependent. These studies demonstrate that influenza virus "splitting" inactivates any potential adjuvants endogenous to influenza, such as RNA, but in previously exposed individuals can elicit a potent immune response by facilitating the rapid formation of immune complexes. Copyright © 2014 Elsevier Ltd. All rights reserved.
Thermal properties of Bentonite Modified with 3-aminopropyltrimethoxysilane
NASA Astrophysics Data System (ADS)
Pramono, E.; Pratiwi, W.; Wahyuningrum, D.; Radiman, C. L.
2018-03-01
Chemical modifications of Bentonite (BNT) clay have been carried out using 3-aminopropyltrimethoxysilane (APS) in various solvent media. The degradation properties of the products (BNTAPS) were characterized by thermogravimetric analysis (TGA). Samples were heated from 30 to 700 °C at a heating rate of 10 °C/min, and the total silane-grafted amount was determined by calculating the weight loss at 200-600 °C. The TGA thermograms showed three main decomposition regions, which are attributed to the elimination of physically adsorbed water, decomposition of silane, and dehydroxylation of Bentonite. A high weight loss attributed to the thermal decomposition of silane was observed between 200 and 550 °C. Quantitative analysis of the grafted silane shows higher silane loading when a solvent with high surface energy is used, which indicates that the type of solvent affects the interaction and adsorption of APS on the BNT platelets.
Kleijn, Roelco J.; van Winden, Wouter A.; Ras, Cor; van Gulik, Walter M.; Schipper, Dick; Heijnen, Joseph J.
2006-01-01
In this study we developed a new method for accurately determining the pentose phosphate pathway (PPP) split ratio, an important metabolic parameter in the primary metabolism of a cell. This method is based on simultaneous feeding of unlabeled glucose and trace amounts of [U-13C]gluconate, followed by measurement of the mass isotopomers of the intracellular metabolites surrounding the 6-phosphogluconate node. The gluconate tracer method was used with a penicillin G-producing chemostat culture of the filamentous fungus Penicillium chrysogenum. For comparison, a 13C-labeling-based metabolic flux analysis (MFA) was performed for glycolysis and the PPP of P. chrysogenum. For the first time mass isotopomer measurements of 13C-labeled primary metabolites are reported for P. chrysogenum and used for a 13C-based MFA. Estimation of the PPP split ratio of P. chrysogenum at a growth rate of 0.02 h−1 yielded comparable values for the gluconate tracer method and the 13C-based MFA method, 51.8% and 51.1%, respectively. A sensitivity analysis of the estimated PPP split ratios showed that the 95% confidence interval was almost threefold smaller for the gluconate tracer method than for the 13C-based MFA method (40.0 to 63.5% and 46.0 to 56.5%, respectively). From these results we concluded that the gluconate tracer method permits accurate determination of the PPP split ratio but provides no information about the remaining cellular metabolism, while the 13C-based MFA method permits estimation of multiple fluxes but provides a less accurate estimate of the PPP split ratio. PMID:16820467
NASA Astrophysics Data System (ADS)
Ideue, T.; Checkelsky, J. G.; Bahramy, M. S.; Murakawa, H.; Kaneko, Y.; Nagaosa, N.; Tokura, Y.
2014-10-01
BiTeI is a polar semiconductor with gigantic Rashba spin-split bands in the bulk. We have investigated the effect of pressure on the electronic structure of this material via magnetotransport. Periods of Shubnikov-de Haas (SdH) oscillations originating from the spin-split outer Fermi surface and inner Fermi surface show disparate responses to pressure, while the carrier number derived from the Hall effect is unchanged with pressure. The associated parameters which characterize the spin-split band structure are strongly dependent on pressure, reflecting the pressure-induced band deformation. We find the SdH oscillations and transport response are consistent with the theoretically proposed pressure-induced band deformation leading to a topological phase transition. Our analysis suggests the critical pressure for the quantum phase transition is near Pc=3.5 GPa.
Use of DAGMan in CRAB3 to Improve the Splitting of CMS User Jobs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolf, M.; Mascheroni, M.; Woodard, A.
CRAB3 is a workload management tool used by CMS physicists to analyze data acquired by the Compact Muon Solenoid (CMS) detector at the CERN Large Hadron Collider (LHC). Research in high energy physics often requires the analysis of large collections of files, referred to as datasets. The task is divided into jobs that are distributed among a large collection of worker nodes throughout the Worldwide LHC Computing Grid (WLCG). Splitting a large analysis task into optimally sized jobs is critical to efficient use of distributed computing resources. Jobs that are too big will have excessive runtimes and will not distribute the work across all of the available nodes. However, splitting the project into a large number of very small jobs is also inefficient, as each job creates additional overhead which increases load on infrastructure resources. Currently this splitting is done manually, using parameters provided by the user. However the resources needed for each job are difficult to predict because of frequent variations in the performance of the user code and the content of the input dataset. As a result, dividing a task into jobs by hand is difficult and often suboptimal. In this work we present a new feature called “automatic splitting” which removes the need for users to manually specify job splitting parameters. We discuss how HTCondor DAGMan can be used to build dynamic Directed Acyclic Graphs (DAGs) to optimize the performance of large CMS analysis jobs on the Grid. We use DAGMan to dynamically generate interconnected DAGs that estimate the processing time the user code will require to analyze each event. This is used to calculate an estimate of the total processing time per job, and a set of analysis jobs are run using this estimate as a specified time limit. Some jobs may not finish within the allotted time; they are terminated at the time limit, and the unfinished data is regrouped into smaller jobs and resubmitted.
Use of DAGMan in CRAB3 to improve the splitting of CMS user jobs
NASA Astrophysics Data System (ADS)
Wolf, M.; Mascheroni, M.; Woodard, A.; Belforte, S.; Bockelman, B.; Hernandez, J. M.; Vaandering, E.
2017-10-01
CRAB3 is a workload management tool used by CMS physicists to analyze data acquired by the Compact Muon Solenoid (CMS) detector at the CERN Large Hadron Collider (LHC). Research in high energy physics often requires the analysis of large collections of files, referred to as datasets. The task is divided into jobs that are distributed among a large collection of worker nodes throughout the Worldwide LHC Computing Grid (WLCG). Splitting a large analysis task into optimally sized jobs is critical to efficient use of distributed computing resources. Jobs that are too big will have excessive runtimes and will not distribute the work across all of the available nodes. However, splitting the project into a large number of very small jobs is also inefficient, as each job creates additional overhead which increases load on infrastructure resources. Currently this splitting is done manually, using parameters provided by the user. However the resources needed for each job are difficult to predict because of frequent variations in the performance of the user code and the content of the input dataset. As a result, dividing a task into jobs by hand is difficult and often suboptimal. In this work we present a new feature called “automatic splitting” which removes the need for users to manually specify job splitting parameters. We discuss how HTCondor DAGMan can be used to build dynamic Directed Acyclic Graphs (DAGs) to optimize the performance of large CMS analysis jobs on the Grid. We use DAGMan to dynamically generate interconnected DAGs that estimate the processing time the user code will require to analyze each event. This is used to calculate an estimate of the total processing time per job, and a set of analysis jobs are run using this estimate as a specified time limit. Some jobs may not finish within the allotted time; they are terminated at the time limit, and the unfinished data is regrouped into smaller jobs and resubmitted.
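The splitting logic can be pictured with a short sketch: a per-event time estimate (from probe jobs) is used to size jobs against a target runtime, and work terminated at the time limit is regrouped into smaller jobs. This is only a schematic of the idea in Python; it is not CRAB3 or DAGMan code, and all names and numbers are illustrative:

```python
# Schematic illustration of "automatic splitting": group events into jobs using an
# estimated per-event processing time and a target job runtime.

def split_into_jobs(n_events, est_seconds_per_event, target_job_seconds):
    """Return (first_event, n_events_in_job) chunks sized to the target runtime."""
    events_per_job = max(1, int(target_job_seconds // est_seconds_per_event))
    jobs, first = [], 0
    while first < n_events:
        n = min(events_per_job, n_events - first)
        jobs.append((first, n))
        first += n
    return jobs

# Probe jobs suggest ~0.02 s/event; aim for ~8-hour jobs.
jobs = split_into_jobs(n_events=1_000_000, est_seconds_per_event=0.02,
                       target_job_seconds=8 * 3600)
# Jobs cut off at the time limit would have their unfinished events fed back into
# split_into_jobs with a smaller target runtime (the resubmission step above).
```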
NASA Astrophysics Data System (ADS)
Makovetskii, A. N.; Tabatchikova, T. I.; Yakovleva, I. L.; Tereshchenko, N. A.; Mirzaev, D. A.
2013-06-01
The decomposition kinetics of austenite that appears in the 13KhFA low-alloyed pipe steel upon heating samples into the intercritical temperature interval (ICI) and holding for 5 or 30 min has been studied by high-speed dilatometry. The results of dilatometry are supplemented by microstructure analysis. Thermokinetic diagrams of the decomposition of the γ phase are presented. The conclusion has been drawn that an increase in the duration of exposure in the intercritical interval leads to a significant increase in the stability of the γ phase.
Relation between SM-covers and SM-decompositions of Petri nets
NASA Astrophysics Data System (ADS)
Karatkevich, Andrei; Wiśniewski, Remigiusz
2015-12-01
The task of finding, for a given Petri net, a set of sequential components able to represent together the behavior of the net arises often in formal analysis of Petri nets and in applications of Petri nets to logical control. Such a task can be met in two different variants: obtaining a Petri net cover or a decomposition. A Petri net cover supposes that a set of subnets of the given net is selected, while the sequential nets forming a decomposition may have additional places, which do not belong to the decomposed net. The paper discusses the differences and relations between the two tasks and their results.
NASA Technical Reports Server (NTRS)
Kemp, Richard H; Moseson, Merland L
1952-01-01
A full-scale J33 air-cooled split turbine rotor was designed and spin-pit tested to destruction. Stress analysis and spin-pit results indicated that the design was structurally adequate. Temperature measurements of the rotor in a J33 turbojet engine, however, showed that the rear disk of the rotor operated at temperatures substantially higher than the forward disk. An extension of the stress analysis to include the temperature difference between the two disks indicated that engine modifications are required to permit operation of the two disks at more nearly the same temperature level.
Tan, Linghua; Xu, Jianhua; Li, Shiying; Li, Dongnan; Dai, Yuming; Kou, Bo; Chen, Yu
2017-05-02
A novel graphitic carbon nitride/CuO (g-C₃N₄/CuO) nanocomposite was synthesized through a facile precipitation method. Due to the strong ion-dipole interaction between copper ions and the nitrogen atoms of g-C₃N₄, CuO nanorods (length 200-300 nm, diameter 5-10 nm) were directly grown on g-C₃N₄, forming a g-C₃N₄/CuO nanocomposite, which was confirmed via X-ray diffraction (XRD), transmission electron microscopy (TEM), field emission scanning electron microscopy (FESEM), and X-ray photoelectron spectroscopy (XPS). Finally, the thermal decomposition of ammonium perchlorate (AP) in the absence and presence of the prepared g-C₃N₄/CuO nanocomposite was examined by differential thermal analysis (DTA) and thermogravimetric analysis (TGA). The g-C₃N₄/CuO nanocomposite showed promising catalytic effects for the thermal decomposition of AP. Upon addition of 2 wt % of the nanocomposite with the best catalytic performance (g-C₃N₄/20 wt % CuO), the decomposition temperature of AP was decreased by up to 105.5 °C and only one decomposition step was found instead of the two steps commonly reported elsewhere, demonstrating the synergistic catalytic activity of the as-synthesized nanocomposite. This study demonstrates a successful example of the direct growth of a metal oxide on g-C₃N₄ through the ion-dipole interaction between metal ions and the lone-pair electrons on nitrogen atoms, which could provide a novel strategy for the preparation of g-C₃N₄-based nanocomposites.
Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques
2012-09-01
The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework, which makes it difficult to characterize and evaluate the approach. In this paper, we propose, in the 2-D case, the use of an alternative implementation to the algorithmic definition of the so-called "sifting process" used in the original Huang EMD method. This approach, based on partial differential equations (PDEs), was presented by Niang in previous works (2005 and 2007) and relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method, compared to the original EMD algorithmic version, was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed. Despite some effort, 2-D versions of EMD perform poorly and are very time-consuming. In this paper, an extension of the PDE-based approach to the 2-D case is therefore described in detail. This approach has been applied to both signal and image decomposition. The obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data. Some results are provided for the case of image decomposition. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.
Equivalence of Fluctuation Splitting and Finite Volume for One-Dimensional Gas Dynamics
NASA Technical Reports Server (NTRS)
Wood, William A.
1997-01-01
The equivalence of the discretized equations resulting from both fluctuation splitting and finite volume schemes is demonstrated in one dimension. Scalar equations are considered for advection, diffusion, and combined advection/diffusion. Analysis of systems is performed for the Euler and Navier-Stokes equations of gas dynamics. Non-uniform mesh-point distributions are included in the analyses.
Drivelos, Spiros A; Higgins, Kevin; Kalivas, John H; Haroutounian, Serkos A; Georgiou, Constantinos A
2014-12-15
"Fava Santorinis", is a protected designation of origin (PDO) yellow split pea species growing only in the island of Santorini in Greece. Due to its nutritional quality and taste, it has gained a high monetary value. Thus, it is prone to adulteration with other yellow split peas. In order to discriminate "Fava Santorinis" from other yellow split peas, four classification methods utilising rare earth elements (REEs) measured through inductively coupled plasma-mass spectrometry (ICP-MS) are studied. The four classification processes are orthogonal projection analysis (OPA), Mahalanobis distance (MD), partial least squares discriminant analysis (PLS-DA) and k nearest neighbours (KNN). Since it is known that trace elements are often useful to determine geographical origin of food products, we further quantitated for trace elements using ICP-MS. Presented in this paper are results using the four classification processes based on the fusion of the REEs data with the trace element data. Overall, the OPA method was found to perform best with up to 100% accuracy using the fused data. Copyright © 2014 Elsevier Ltd. All rights reserved.
PROFILING GLYCOL-SPLIT HEPARINS BY HPLC/MS ANALYSIS OF THEIR HEPARINASE-GENERATED OLIGOSACCHARIDES
Alekseeva, Anna; Casu, Benito; Torri, Giangiacomo; Pierro, Sabrina; Naggi, Annamaria
2012-01-01
Glycol-split (gs) heparins, obtained by periodate oxidation/borohydride reduction of heparin (currently used as an anticoagulant and antithrombotic drug), are arousing increasing interest in anti-cancer and anti-inflammation therapies. These new medical uses are favored by the loss of anticoagulant activity associated with glycol-splitting-induced inactivation of the antithrombin III (AT) binding site. The structure of gs-heparins has not yet been studied in detail. In this work, an ion-pair reversed-phase HPLC (IPRP-HPLC) method coupled with electrospray ionization mass spectrometry (ESI-MS), widely used for unmodified heparin, has been adapted to the analysis of oligosaccharides generated by heparinase digestion of gs-heparins, usually prepared from porcine mucosal heparin. The method has also been found very effective in analyzing glycol-split derivatives obtained from heparins of different animal and tissue origin. Besides the major 2-O-sulfated disaccharides, heparinase digests of gs-heparins mainly contain tetra- and hexasaccharides incorporating one or two gs residues, with distribution patterns typical for individual gs-heparins. A heptasulfated, mono-N-acetylated hexasaccharide with two gs residues has been shown to be a marker of the gs-modified AT binding site within heparin chains. PMID:23201389
NASA Astrophysics Data System (ADS)
Bassett, D.; Watts, A. B.; Sandwell, D. T.; Fialko, Y. A.
2016-12-01
We performed shear wave splitting analysis on 203 permanent (French RLPB, CEA and Catalonian networks) and temporary (PYROPE and IberArray experiments) broad-band stations around the Pyrenees. These measurements considerably enhance the spatial resolution and coverage of seismic anisotropy in that region. In particular, we characterize with different shear wave splitting analysis methods the small-scale variations of splitting parameters φ and δt along three dense transects crossing the western and central Pyrenees with an interstation spacing of about 7 km. While we find a relatively coherent seismic anisotropy pattern in the Pyrenean domain, we observe abrupt changes of splitting parameters in the Aquitaine Basin and delay times along the Pyrenees. We moreover observe coherent fast directions despite complex lithospheric structures in Iberia and the Massif Central. This suggests that two main sources of anisotropy are required to interpret seismic anisotropy in this region: (i) lithospheric fabrics in the Aquitaine Basin (probably frozen-in Hercynian anisotropy) and in the Pyrenees (early and late Pyrenean dynamics); (ii) asthenospheric mantle flow beneath the entire region (imprint of the western Mediterranean dynamics since the Oligocene).
NASA Astrophysics Data System (ADS)
He, Y. F.; Zhu, W.; Zhang, Q.; Zhang, W. T.
2018-04-01
The InSAR technique can measure surface deformation with centimeter- or even millimeter-level accuracy and has therefore been widely used for monitoring deformation associated with earthquakes, volcanoes, and other geologic processes. However, ionospheric irregularities can lead to wavy fringes in low-frequency SAR interferograms, which disturb the actual information on geophysical processes and thus severely limit ground deformation measurements. In this paper, two common methods, the range split-spectrum and azimuth offset methods, are applied to estimate the ionospheric contribution, with the aim of correcting ionospheric effects in interferograms. Based on theoretical analysis and experiment, a performance analysis is conducted to evaluate the efficiency of these two methods. The result indicates that both methods can mitigate the ionospheric effect in SAR interferograms and that the range split-spectrum method is more precise than the other. However, it is also found that the range split-spectrum method is easily contaminated by noise, and the achievable accuracy of the azimuth offset method is limited by the ambiguity of the integration constant, especially under the strong azimuth variations induced by ionospheric disturbance.
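For reference, the range split-spectrum method separates the dispersive (ionospheric) and non-dispersive phase from two sub-band interferograms; in the form usually quoted in the split-spectrum literature (the paper may use slightly different notation),

\[
\varphi_{\mathrm{iono}} = \frac{f_L f_H}{f_0\,(f_H^2 - f_L^2)}\,
\bigl(\varphi_L f_H - \varphi_H f_L\bigr), \qquad
\varphi_{\mathrm{non\text{-}disp}} = \frac{f_0}{f_H^2 - f_L^2}\,
\bigl(\varphi_H f_H - \varphi_L f_L\bigr),
\]

where $\varphi_L$ and $\varphi_H$ are the unwrapped interferometric phases of the lower and upper range sub-bands centered at $f_L$ and $f_H$, and $f_0$ is the full-band center frequency. The division by the small spectral separation $f_H^2 - f_L^2$ amplifies phase noise, which is consistent with the noise sensitivity noted above.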
Grouping individual independent BOLD effects: a new way to ICA group analysis
NASA Astrophysics Data System (ADS)
Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott
2009-04-01
A new group analysis method to summarize task-related BOLD responses based on independent component analysis (ICA) is presented. In contrast to the previously proposed group ICA (gICA) method, which first combines multi-subject fMRI data in either the temporal or spatial domain and applies ICA decomposition only once to the combined fMRI data to extract the task-related BOLD effects, the method presented here applies ICA decomposition to each individual subject's fMRI data to first find the independent BOLD effects specific to that subject. The task-related independent BOLD component is then selected among the resulting independent components from the single-subject ICA decomposition and grouped across subjects to derive the group inference. In this new ICA group analysis (ICAga) method, one does not need to assume that the task-related BOLD time courses are identical across brain areas and subjects, as is done in the grand ICA decomposition of spatially concatenated fMRI data. Neither does one need to assume that, after spatial normalization, the voxels at the same coordinates represent exactly the same functional or structural brain anatomy across different subjects. These two assumptions have been problematic given recent BOLD activation evidence. Further, since the independent BOLD effects are obtained from each individual subject, the ICAga method can better account for individual differences in the task-related BOLD effects, unlike the gICA approach, in which the task-related BOLD effects can only be accounted for by a single unified BOLD model across multiple subjects. As a result, the newly proposed ICAga method is able to better fit the task-related BOLD effects at the individual level and thus allows grouping more appropriate multi-subject BOLD effects in the group analysis.
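A minimal sketch of the per-subject decomposition and component selection, assuming spatial ICA with scikit-learn's FastICA and a simple correlation-based selection rule (function names, shapes and the selection criterion are illustrative assumptions, not the authors' pipeline):

```python
# Sketch of the single-subject ICA step: decompose one subject's fMRI data, then
# pick the component whose time course best matches a task regressor.
import numpy as np
from sklearn.decomposition import FastICA

def task_component(data_vox_by_time, task_regressor, n_components=20, seed=0):
    """Spatial ICA of one subject's data, shape (n_voxels, n_timepoints)."""
    ica = FastICA(n_components=n_components, random_state=seed, max_iter=500)
    maps = ica.fit_transform(data_vox_by_time)     # (n_voxels, n_components) spatial maps
    tcs = ica.mixing_                              # (n_timepoints, n_components) time courses
    r = [abs(np.corrcoef(tc, task_regressor)[0, 1]) for tc in tcs.T]
    best = int(np.argmax(r))                       # most task-related component
    return maps[:, best], tcs[:, best]

# Group step: collect each subject's selected map and test voxel-wise across subjects
# (e.g., a one-sample t-test), instead of one grand ICA on concatenated data.
```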
Optimal energy-splitting method for an open-loop liquid crystal adaptive optics system.
Cao, Zhaoliang; Mu, Quanquan; Hu, Lifa; Liu, Yonggang; Peng, Zenghui; Yang, Qingyun; Meng, Haoran; Yao, Lishuang; Xuan, Li
2012-08-13
A waveband-splitting method is proposed for open-loop liquid crystal adaptive optics systems (LC AOSs). The proposed method extends the working waveband, splits energy flexibly, and improves detection capability. Simulated analysis is performed for a waveband in the range of 350 nm to 950 nm. The results show that the optimal energy split is 7:3 for the wavefront sensor (WFS) and for the imaging camera with the waveband split into 350 nm to 700 nm and 700 nm to 950 nm, respectively. A validation experiment is conducted by measuring the signal-to-noise ratio (SNR) of the WFS and the imaging camera. The results indicate that for the waveband-splitting method, the SNR of WFS is approximately equal to that of the imaging camera with a variation in the intensity. On the other hand, the SNR of the WFS is significantly different from that of the imaging camera for the polarized beam splitter energy splitting scheme. Therefore, the waveband-splitting method is more suitable for an open-loop LC AOS. An adaptive correction experiment is also performed on a 1.2-meter telescope. A star with a visual magnitude of 4.45 is observed and corrected and an angular resolution ability of 0.31″ is achieved. A double star with a combined visual magnitude of 4.3 is observed as well, and its two components are resolved after correction. The results indicate that the proposed method can significantly improve the detection capability of an open-loop LC AOS.
NASA Astrophysics Data System (ADS)
Barbini, L.; Eltabach, M.; Hillis, A. J.; du Bois, J. L.
2018-03-01
In rotating machine diagnosis, different spectral tools are used to analyse vibration signals. Despite their good diagnostic performance, such tools are usually highly refined, computationally complex to implement, and require oversight by an expert user. This paper introduces an intuitive and easy to implement method for vibration analysis: amplitude cyclic frequency decomposition. This method first separates vibration signals according to their spectral amplitudes and then uses the squared envelope spectrum to reveal the presence of cyclostationarity in each amplitude level. The intuitive idea is that in a rotating machine different components contribute vibrations at different amplitudes; for instance, defective bearings contribute a very weak signal in contrast to gears. This paper also introduces a new quantity, the decomposition squared envelope spectrum, which enables separation between the components of a rotating machine. The amplitude cyclic frequency decomposition and the decomposition squared envelope spectrum are tested on real-world signals, both at stationary and varying speeds, using data from a wind turbine gearbox and an aircraft engine. In addition, a benchmark comparison to the spectral correlation method is presented.
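The squared envelope spectrum at the core of the method is straightforward to compute; a minimal Python sketch (parameters and the synthetic test signal are illustrative only):

```python
# Squared envelope spectrum: Hilbert envelope, squared, then Fourier transformed.
import numpy as np
from scipy.signal import hilbert

def squared_envelope_spectrum(x, fs):
    """Return cyclic frequencies and the squared-envelope spectrum magnitude of x."""
    analytic = hilbert(x)                       # analytic signal
    se = np.abs(analytic) ** 2                  # squared envelope
    se = se - se.mean()                         # remove the DC component
    spec = np.abs(np.fft.rfft(se)) / len(se)
    freqs = np.fft.rfftfreq(len(se), d=1.0 / fs)
    return freqs, spec

# Example: a 100 Hz carrier amplitude-modulated at 7 Hz shows a line near 7 Hz,
# the kind of cyclostationary signature a weak bearing fault would leave.
fs = 5000
t = np.arange(0, 2, 1 / fs)
x = (1 + 0.5 * np.sin(2 * np.pi * 7 * t)) * np.sin(2 * np.pi * 100 * t)
f, S = squared_envelope_spectrum(x, fs)
```

In the decomposition described above, this spectrum would be computed separately for each amplitude level rather than for the raw signal.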
NASA Astrophysics Data System (ADS)
Williams, E. K.; Plante, A. F.
2017-12-01
The stability and cycling of natural organic matter depend on the input of energy needed to decompose it and the net energy gained from its decomposition. In soils, this relationship is complicated by microbial enzymatic activity, which decreases the activation energies associated with soil organic matter (SOM) decomposition, and by chemical and physical protection mechanisms, which decrease the concentration of the available organic matter substrate and require additional energy to overcome before decomposition can proceed. In this study, we utilize differential scanning calorimetry and evolved CO2 gas analysis to characterize differences in the energetics (activation energy and energy density) of soils that have undergone degradation under natural (bare fallow), field (changes in land use), chemical (acid hydrolysis), and laboratory (high-temperature incubation) experimental conditions. We will present these data in a novel conceptual framework relating these energy dynamics to organic matter inputs, decomposition, and molecular complexity.
NASA Astrophysics Data System (ADS)
Chen, Dongyue; Lin, Jianhui; Li, Yanping
2018-06-01
Complementary ensemble empirical mode decomposition (CEEMD) was developed to address the mode-mixing problem in the empirical mode decomposition (EMD) method. Compared to ensemble empirical mode decomposition (EEMD), the CEEMD method reduces residue noise in the signal reconstruction. Both CEEMD and EEMD need a large enough ensemble number to reduce the residue noise, which leads to a high computational cost. Moreover, the selection of intrinsic mode functions (IMFs) for further analysis usually depends on experience. A modified CEEMD method and an IMF evaluation index are proposed with the aim of reducing the computational cost and selecting IMFs automatically. A simulated signal and in-service high-speed train gearbox vibration signals are employed to validate the proposed method. The results demonstrate that the modified CEEMD can decompose the signal efficiently with less computational cost, and the IMF evaluation index can select the meaningful IMFs automatically.
Kinetics of non-isothermal decomposition of cinnamic acid
NASA Astrophysics Data System (ADS)
Zhao, Ming-rui; Qi, Zhen-li; Chen, Fei-xiong; Yue, Xia-xin
2014-07-01
The thermal stability and decomposition kinetics of cinnamic acid were investigated by thermogravimetry and differential scanning calorimetry at four heating rates. The activation energies of this process were calculated from analysis of the TG curves by the methods of Flynn-Wall-Ozawa, Doyle, the Distributed Activation Energy Model, Šatava-Šesták and Kissinger, respectively. There is only one stage of thermal decomposition in the TG curves and two endothermic peaks in the DSC curves. For this decomposition process of cinnamic acid, E and log A [s-1] were determined to be 81.74 kJ mol-1 and 8.67, respectively. The mechanism was the Mampel power law (reaction order n = 1), with integral form G(α) = α (α = 0.1-0.9). Moreover, the thermodynamic properties ΔH≠, ΔS≠ and ΔG≠ were 77.96 kJ mol-1, -90.71 J mol-1 K-1 and 119.41 kJ mol-1, respectively.
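For reference, two of the methods named above are commonly written in the following standard forms (textbook expressions, not reproduced from the paper): the Kissinger equation

\[
\ln\frac{\beta}{T_p^{2}} = \ln\frac{AR}{E} - \frac{E}{R T_p},
\]

and the Flynn-Wall-Ozawa equation with Doyle's approximation,

\[
\log \beta = \log\frac{A E}{R\, g(\alpha)} - 2.315 - 0.4567\,\frac{E}{R T},
\]

where $\beta$ is the heating rate, $T_p$ the peak temperature, $g(\alpha)$ the integral form of the mechanism function, and $R$ the gas constant; plotting the left-hand sides against $1/T_p$ or $1/T$ at the four heating rates yields $E$ from the slope.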
NASA Astrophysics Data System (ADS)
Yong, Yingqiong; Nguyen, Mai Thanh; Tsukamoto, Hiroki; Matsubara, Masaki; Liao, Ying-Chih; Yonezawa, Tetsu
2017-03-01
Mixtures of a copper complex and copper fine particles used as copper-based metal-organic decomposition (MOD) dispersions have been demonstrated to be effective for low-temperature sintering of conductive copper films. However, the effect of copper particle size on the decomposition process of the dispersion during heating and the effect of organic residues on the resistivity have not been studied. In this study, the decomposition process of dispersions containing mixtures of a copper complex and copper particles of various sizes was studied. The effect of organic residues on the resistivity was also examined using thermogravimetric analysis. In addition, the choice of copper salt in the copper complex is discussed. In this work, a low-resistivity sintered copper film (7 × 10-6 Ω·m) was achieved at a temperature as low as 100 °C without using any reductive gas.
Investigation of automated task learning, decomposition and scheduling
NASA Technical Reports Server (NTRS)
Livingston, David L.; Serpen, Gursel; Masti, Chandrashekar L.
1990-01-01
The details and results of research conducted in the application of neural networks to task planning and decomposition are presented. Task planning and decomposition are operations that humans perform in a reasonably efficient manner. Without the use of good heuristics and usually much human interaction, automatic planners and decomposers generally do not perform well due to the intractable nature of the problems under consideration. The human-like performance of neural networks has shown promise for generating acceptable solutions to intractable problems such as planning and decomposition, which was the primary motivation for this study. The basis for the work is the use of state machines to model tasks. State machine models provide a useful means for examining the structure of tasks, since many formal techniques have been developed for their analysis and synthesis. The approach taken is to integrate the strong algebraic foundations of state machines with the heretofore trial-and-error approach to neural network synthesis.
Optimization of data analysis for the in vivo neutron activation analysis of aluminum in bone.
Mohseni, H K; Matysiak, W; Chettle, D R; Byun, S H; Priest, N; Atanackovic, J; Prestwich, W V
2016-10-01
An existing system at McMaster University has been used for the in vivo measurement of aluminum in human bone. Precise and detailed analysis approaches are necessary to determine the aluminum concentration because of the low levels of aluminum found in the bone and the challenges associated with its detection. Phantoms resembling the composition of the human hand with varying concentrations of aluminum were made for testing the system prior to the application to human studies. A spectral decomposition model and a photopeak fitting model involving the inverse-variance weighted mean and a time-dependent analysis were explored to analyze the results and determine the model with the best performance and lowest minimum detection limit. The results showed that the spectral decomposition and the photopeak fitting model with the inverse-variance weighted mean both provided better results compared to the other methods tested. The spectral decomposition method resulted in a marginally lower detection limit (5 μg Al/g Ca) compared to the inverse-variance weighted mean (5.2 μg Al/g Ca), rendering both equally applicable to human measurements. Copyright © 2016 Elsevier Ltd. All rights reserved.
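A short sketch of the inverse-variance weighted mean used in the photopeak fitting model: each repeat estimate is weighted by the reciprocal of its variance. The example values are invented, not measurements from the McMaster system.

```python
# Inverse-variance weighted mean of repeated estimates; values and sigmas are assumed.
import numpy as np

def ivw_mean(values, sigmas):
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    mean = np.sum(w * np.asarray(values, dtype=float)) / np.sum(w)
    sigma = np.sqrt(1.0 / np.sum(w))        # uncertainty of the combined estimate
    return mean, sigma

vals = [4.8, 5.6, 5.1]        # hypothetical Al/Ca estimates from repeated fits
sigs = [1.2, 0.9, 1.5]        # their individual fit uncertainties
print(ivw_mean(vals, sigs))
```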
Measuring Glial Metabolism in Repetitive Brain Trauma and Alzheimer’s Disease
2016-09-01
Six methods: singular value decomposition (SVD), wavelet, sliding window, sliding window with Gaussian weighting, spline and spectral improvements ... comparison of a range of different denoising methods for dynamic MRS. Six denoising methods were considered: singular value decomposition (SVD), wavelet ... project by improving the software required for the data analysis by developing six different denoising methods. He also assisted with the testing
Decomposition of the Inequality of Income Distribution by Income Types—Application for Romania
NASA Astrophysics Data System (ADS)
Andrei, Tudorel; Oancea, Bogdan; Richmond, Peter; Dhesi, Gurjeet; Herteliu, Claudiu
2017-09-01
This paper identifies the salient factors that characterize the inequality of the income distribution for Romania. Data analysis is rigorously carried out using sophisticated techniques borrowed from classical statistics (the Theil index). A decomposition of the inequality measured by the Theil index is also performed. This study relies on an exhaustive data set (11.1 million records for 2014) of the total personal gross income of Romanian citizens.
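A sketch of the Theil index and its standard between/within-group decomposition, the kind of decomposition described above. The income arrays are synthetic; the two groups stand in for income types (e.g., wages and pensions), not for the Romanian data set.

```python
# Theil T index and its exact decomposition: T = sum_g s_g*T_g + sum_g s_g*ln(mu_g/mu),
# where s_g is group g's share of total income. Data below are synthetic.
import numpy as np

def theil(x):
    x = np.asarray(x, dtype=float)
    r = x / x.mean()
    return np.mean(r * np.log(r))

def theil_decomposition(groups):
    """groups: list of 1-D income arrays. Returns (total, within, between)."""
    all_x = np.concatenate(groups)
    mu = all_x.mean()
    within = between = 0.0
    for g in groups:
        share = g.sum() / all_x.sum()           # group share of total income
        within += share * theil(g)
        between += share * np.log(g.mean() / mu)
    return theil(all_x), within, between

rng = np.random.default_rng(0)
wages = rng.lognormal(mean=10.0, sigma=0.5, size=5000)
pensions = rng.lognormal(mean=9.3, sigma=0.3, size=3000)
total, within, between = theil_decomposition([wages, pensions])
print(round(total, 4), round(within + between, 4))   # the two numbers should match
```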
ERIC Educational Resources Information Center
Man, Yiu-Kwong; Leung, Allen
2012-01-01
In this paper, we introduce a new approach to compute the partial fraction decompositions of rational functions and describe the results of its trials at three secondary schools in Hong Kong. The data were collected via quizzes, questionnaire and interviews. In general, according to the responses from the teachers and students concerned, this new…
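For reference, a standard symbolic partial fraction decomposition computed with SymPy's apart(); this is a generic illustration of the mathematical operation, not the new approach described in the article, whose details are not given in the abstract.

```python
# Partial fraction decomposition of a rational function with SymPy (illustrative only).
from sympy import symbols, apart, together, simplify

x = symbols('x')
expr = (3*x + 5) / (x**2 - 3*x + 2)      # denominator factors as (x - 1)(x - 2)
decomposed = apart(expr, x)
print(decomposed)                         # -> 11/(x - 2) - 8/(x - 1)
assert simplify(together(decomposed) - expr) == 0   # recombining recovers the original
```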
Long-term litter decomposition controlled by manganese redox cycling
Keiluweit, Marco; Nico, Peter; Harmon, Mark E.; Mao, Jingdong; Pett-Ridge, Jennifer; Kleber, Markus
2015-01-01
Litter decomposition is a keystone ecosystem process impacting nutrient cycling and productivity, soil properties, and the terrestrial carbon (C) balance, but the factors regulating decomposition rate are still poorly understood. Traditional models assume that the rate is controlled by litter quality, relying on parameters such as lignin content as predictors. However, a strong correlation has been observed between the manganese (Mn) content of litter and decomposition rates across a variety of forest ecosystems. Here, we show that long-term litter decomposition in forest ecosystems is tightly coupled to Mn redox cycling. Over 7 years of litter decomposition, microbial transformation of litter was paralleled by variations in Mn oxidation state and concentration. A detailed chemical imaging analysis of the litter revealed that fungi recruit and redistribute unreactive Mn2+ provided by fresh plant litter to produce oxidative Mn3+ species at sites of active decay, with Mn eventually accumulating as insoluble Mn3+/4+ oxides. Formation of reactive Mn3+ species coincided with the generation of aromatic oxidation products, providing direct proof of the previously posited role of Mn3+-based oxidizers in the breakdown of litter. Our results suggest that the litter-decomposing machinery at our coniferous forest site depends on the ability of plants and microbes to supply, accumulate, and regenerate short-lived Mn3+ species in the litter layer. This observation indicates that biogeochemical constraints on bioavailability, mobility, and reactivity of Mn in the plant–soil system may have a profound impact on litter decomposition rates. PMID:26372954
[Progress in Raman spectroscopic measurement of methane hydrate].
Xu, Feng; Zhu, Li-hua; Wu, Qiang; Xu, Long-jun
2009-09-01
Complex thermodynamic and kinetic problems are involved in methane hydrate formation and decomposition, and these problems are crucial to understanding the mechanisms of hydrate formation and decomposition. However, it has been difficult to obtain such information accurately because methane hydrate is stable only under low-temperature and high-pressure conditions; only in recent years has methane hydrate been measured in situ using Raman spectroscopy. Raman spectroscopy, a non-destructive and non-invasive technique, is used to study the vibrational modes of molecules. Studies of methane hydrate using Raman spectroscopy have developed over the last decade. The Raman spectra of CH4 in the vapor phase and in the hydrate phase are presented in this paper. The progress in research on methane hydrate formation thermodynamics, formation kinetics, decomposition kinetics and decomposition mechanisms based on Raman spectroscopic measurements in the laboratory and the deep sea is reviewed. Formation thermodynamic studies, including in situ observation of the formation conditions of methane hydrate, analysis of structure, and determination of hydrate cage occupancy and hydration numbers using Raman spectroscopy, are emphasized. Regarding formation kinetics, research on the variation in hydrate cage abundance and methane concentration in water during hydrate growth using Raman spectroscopy is also introduced. For methane hydrate decomposition, investigations of the decomposition mechanism, the variation of the cage occupancy ratio and the formulation of the decomposition rate in porous media are described. Important aspects for future hydrate research based on Raman spectroscopy are discussed.
NASA Astrophysics Data System (ADS)
Sierra, Carlos A.; Trumbore, Susan E.; Davidson, Eric A.; Vicca, Sara; Janssens, I.
2015-03-01
The sensitivity of soil organic matter decomposition to global environmental change is a topic of prominent relevance for the global carbon cycle. Decomposition depends on multiple factors that are being altered simultaneously as a result of global environmental change; therefore, it is important to study the sensitivity of the rates of soil organic matter decomposition with respect to multiple and interacting drivers. In this manuscript, we present an analysis of the potential response of decomposition rates to simultaneous changes in temperature and moisture. To address this problem, we first present a theoretical framework to study the sensitivity of soil organic matter decomposition when multiple driving factors change simultaneously. We then apply this framework to models and data at different levels of abstraction: (1) to a mechanistic model that addresses the limitation of enzyme activity by simultaneous effects of temperature and soil water content, the latter controlling substrate supply and oxygen concentration for microbial activity; (2) to different mathematical functions used to represent temperature and moisture effects on decomposition in biogeochemical models. To contrast model predictions at these two levels of organization, we compiled different data sets of observed responses in field and laboratory studies. Then we applied our conceptual framework to: (3) observations of heterotrophic respiration at the ecosystem level; (4) laboratory experiments looking at the response of heterotrophic respiration to independent changes in moisture and temperature; and (5) ecosystem-level experiments manipulating soil temperature and water content simultaneously.
Long-term litter decomposition controlled by manganese redox cycling.
Keiluweit, Marco; Nico, Peter; Harmon, Mark E; Mao, Jingdong; Pett-Ridge, Jennifer; Kleber, Markus
2015-09-22
Litter decomposition is a keystone ecosystem process impacting nutrient cycling and productivity, soil properties, and the terrestrial carbon (C) balance, but the factors regulating decomposition rate are still poorly understood. Traditional models assume that the rate is controlled by litter quality, relying on parameters such as lignin content as predictors. However, a strong correlation has been observed between the manganese (Mn) content of litter and decomposition rates across a variety of forest ecosystems. Here, we show that long-term litter decomposition in forest ecosystems is tightly coupled to Mn redox cycling. Over 7 years of litter decomposition, microbial transformation of litter was paralleled by variations in Mn oxidation state and concentration. A detailed chemical imaging analysis of the litter revealed that fungi recruit and redistribute unreactive Mn(2+) provided by fresh plant litter to produce oxidative Mn(3+) species at sites of active decay, with Mn eventually accumulating as insoluble Mn(3+/4+) oxides. Formation of reactive Mn(3+) species coincided with the generation of aromatic oxidation products, providing direct proof of the previously posited role of Mn(3+)-based oxidizers in the breakdown of litter. Our results suggest that the litter-decomposing machinery at our coniferous forest site depends on the ability of plants and microbes to supply, accumulate, and regenerate short-lived Mn(3+) species in the litter layer. This observation indicates that biogeochemical constraints on bioavailability, mobility, and reactivity of Mn in the plant-soil system may have a profound impact on litter decomposition rates.
Gardiner, Stuart K; Demirel, Shaban; De Moraes, Carlos Gustavo; Liebmann, Jeffrey M; Cioffi, George A; Ritch, Robert; Gordon, Mae O; Kass, Michael A
2013-02-15
Trend analysis techniques to detect glaucomatous progression typically assume a constant rate of change. This study uses data from the Ocular Hypertension Treatment Study to assess whether this assumption decreases sensitivity to changes in progression rate, by including earlier periods of stability. Series of visual fields (mean 24 per eye) completed at 6-month intervals from participants randomized initially to observation were split into subseries before and after the initiation of treatment (the "split-point"). The mean deviation rate of change (MDR) was derived using the entire subseries, and using only the W tests nearest the split-point, for several window lengths W. A generalized estimating equation model was used to detect changes in MDR occurring at the split-point. Using shortened subseries with W = 7 tests, the MDR slowed by 0.142 dB/y upon initiation of treatment (P < 0.001), and the proportion of eyes showing "rapid deterioration" (MDR < -0.5 dB/y with P < 5%) decreased from 11.8% to 6.5% (P < 0.001). Using the entire sequence, no significant change in MDR was detected (P = 0.796), and there was no change in the proportion of eyes progressing (P = 0.084). Window lengths 6 ≤ W ≤ 9 produced similar benefits. Event analysis revealed a beneficial treatment effect in this dataset. This effect was not detected by linear trend analysis applied to entire series, but was detected when using shorter subseries of length between six and nine fields. Using linear trend analysis on the entire field sequence may not be optimal for detecting and monitoring progression. Nonlinear analyses may be needed for long series of fields. (ClinicalTrials.gov number, NCT00000125.).
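A toy illustration of the windowing idea above: a linear fit to the whole series averages over a change in slope, while a fit restricted to the W tests nearest the split-point isolates the local rate. The mean-deviation series is synthetic (not OHTS data), and ordinary least squares stands in for the paper's generalized estimating equation model.

```python
# Synthetic MD series: faster decline before treatment, slower after; compare slopes.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(24) * 0.5                      # 24 visual fields at 6-month intervals (years)
split = 12                                   # treatment assumed to start after the 12th test
pre, post = -0.6, -0.2                       # dB/y before and after treatment (assumed)
md = np.where(np.arange(24) < split,
              pre * t,
              pre * t[split - 1] + post * (t - t[split - 1]))
md = md + rng.normal(0, 0.15, size=md.size)  # test-retest noise

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

W = 7
print("whole series:", round(slope(t, md), 3), "dB/y")                       # blurs the change
print("last", W, "tests before split:", round(slope(t[split - W:split], md[split - W:split]), 3), "dB/y")
print("tests after split:", round(slope(t[split:], md[split:]), 3), "dB/y")  # slowing is visible
```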
NASA Astrophysics Data System (ADS)
Shao, Feng; Evanschitzky, Peter; Fühner, Tim; Erdmann, Andreas
2009-10-01
This paper employs the Waveguide decomposition method as an efficient rigorous electromagnetic field (EMF) solver to investigate three-dimensional mask-induced imaging artifacts in EUV lithography. The major mask-diffraction-induced imaging artifacts are first identified by applying the Zernike analysis to the mask nearfield spectrum of 2D lines/spaces. Three-dimensional mask features like 22nm semidense/dense contacts/posts, isolated elbows and line-ends are then investigated in terms of lithographic results. After that, the 3D mask-induced imaging artifacts such as feature-orientation-dependent best focus shift, process window asymmetries, and other aberration-like phenomena are explored for the studied mask features. The simulation results can help lithographers to understand the reasons for EUV-specific imaging artifacts and to devise illumination- and feature-dependent strategies for their compensation in the optical proximity correction (OPC) for EUV masks. Finally, an efficient approach using the Zernike analysis together with the Waveguide decomposition technique is proposed to characterize the impact of mask properties for the future OPC process.
Zhang, Yong; Li, Yuan; Rong, Zhi-Guo
2010-06-01
A remote sensor's channel spectral response function (SRF) is one of the key factors influencing the inversion algorithms and accuracy of quantitative products as well as the derived geophysical characteristics. To assess the adjustments of FY-2E's split-window channel SRFs, detailed comparisons of the SRF differences between the corresponding FY-2E and FY-2C channels were carried out based on three data collections: the calibration look-up tables of the corresponding NOAA AVHRR channels, field-measured water-surface radiance and atmospheric profiles at Lake Qinghai, and radiance calculated from the Planck function over the full dynamic range of FY-2E/C. The results showed that the adjustments of FY-2E's split-window channel SRFs shift the spectral ranges and influence the inversion algorithms of some ground quantitative products. On the other hand, these adjustments of the FY-2E SRFs increase the brightness temperature differences between FY-2E's two split-window channels over the full dynamic range relative to FY-2C's, which should improve the inversion ability of FY-2E's split-window channels.
Feng, Yunzi; Cai, Yu; Sun-Waterhouse, Dongxiao; Cui, Chun; Su, Guowan; Lin, Lianzhu; Zhao, Mouming
2015-11-15
Aroma extract dilution analysis (AEDA) is widely used for the screening of aroma-active compounds in gas chromatography-olfactometry (GC-O). In this study, three aroma dilution methods, (I) using different test sample volumes, (II) diluting samples, and (III) adjusting the GC injector split ratio, were compared for the analysis of volatiles by using HS-SPME-AEDA. Results showed that adjusting the GC injector split ratio (III) was the most desirable approach, based on the linearity relationships between Ln (normalised peak area) and Ln (normalised flavour dilution factors). Thereafter this dilution method was applied in the analysis of aroma-active compounds in Japanese soy sauce and 36 key odorants were found in this study. The most intense aroma-active components in Japanese soy sauce were: ethyl 2-methylpropanoate, ethyl 2-methylbutanoate, ethyl 3-methylbutanoate, ethyl 4-methylpentanoate, 3-(methylthio)propanal, 1-octen-3-ol, 2-methoxyphenol, 4-ethyl-2-methoxyphenol, 2-methoxy-4-vinylphenol, 2-phenylethanol, and 4-hydroxy-5-ethyl-2-methyl-3(2H)-furanone. Copyright © 2015. Published by Elsevier Ltd.
A Four-Stage Hybrid Model for Hydrological Time Series Forecasting
Di, Chongli; Yang, Xiaohua; Wang, Xiaochao
2014-01-01
Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of ‘denoising, decomposition and ensemble’. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models. PMID:25111782
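A sketch of the "components prediction and ensemble" stages described above, assuming the IMFs and residual have already been produced by some EEMD implementation and stacked into a `components` array. Kernel ridge regression with an RBF kernel stands in for the RBFNN and ordinary linear regression for the LNN; these are illustrative substitutes, not the authors' exact models.

```python
# Stages 3-4 of the hybrid pipeline: per-component prediction, then linear recombination.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import LinearRegression

def lagged(series, lags):
    """Build (X, y) pairs where each row of X holds the previous `lags` values."""
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    return X, series[lags:]

def hybrid_forecast(components, lags=6, split=0.8):
    n = components.shape[1]
    cut = int(n * split)
    ntr = cut - lags
    train_preds, test_preds = [], []
    for comp in components:                       # stage 3: one model per IMF/residual
        X, y = lagged(comp, lags)
        model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1)
        model.fit(X[:ntr], y[:ntr])
        train_preds.append(model.predict(X[:ntr]))
        test_preds.append(model.predict(X[ntr:]))
    total = components.sum(axis=0)
    ens = LinearRegression().fit(np.column_stack(train_preds), total[lags:cut])  # stage 4
    return ens.predict(np.column_stack(test_preds)), total[cut:]

# Toy "decomposition": two oscillatory components plus a slow trend.
t = np.linspace(0, 20, 1000)
components = np.vstack([np.sin(2 * np.pi * 0.8 * t),
                        0.5 * np.sin(2 * np.pi * 0.2 * t),
                        0.02 * t])
forecast, target = hybrid_forecast(components)
print("RMSE:", np.sqrt(np.mean((forecast - target) ** 2)))
```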
Yang, Caiqin; Guo, Wei; Lin, Yulong; Lin, Qianqian; Wang, Jiaojiao; Wang, Jing; Zeng, Yanli
2018-05-30
In this study, a new cocrystal of felodipine (Fel) and glutaric acid (Glu) with a high dissolution rate was developed using the solvent ultrasonic method. The prepared cocrystal was characterized using X-ray powder diffraction, differential scanning calorimetry, thermogravimetric (TG) analysis, and infrared (IR) spectroscopy. To provide basic information about the optimization of pharmaceutical preparations of Fel-based cocrystals, this work investigated the thermal decomposition kinetics of the Fel-Glu cocrystal through non-isothermal thermogravimetry. Density functional theory (DFT) simulations were also performed on the Fel monomer and the trimolecular cocrystal compound for exploring the mechanisms underlying hydrogen bonding formation and thermal decomposition. Combined results of IR spectroscopy and DFT simulation verified that the Fel-Glu cocrystal formed via the NH⋯OC and CO⋯HO hydrogen bonds between Fel and Glu at the ratio of 1:2. The TG/derivative TG curves indicated that the thermal decomposition of the Fel-Glu cocrystal underwent a two-step process. The apparent activation energy (Ea) and pre-exponential factor (A) of the thermal decomposition for the first stage were 84.90 kJ mol-1 and 7.03 × 10^7 min-1, respectively. The mechanism underlying thermal decomposition possibly involved nucleation and growth, with the integral mechanism function G(α) of α^(3/2). DFT calculation revealed that the hydrogen bonding between Fel and Glu weakened the terminal methoxyl, methyl, and ethyl groups in the Fel molecule. As a result, these groups were lost along with the Glu molecule in the first thermal decomposition. In conclusion, the formed cocrystal exhibited different thermal decomposition kinetics and showed different Ea, A, and shelf life from the intact active pharmaceutical ingredient. Copyright © 2018 Elsevier B.V. All rights reserved.
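A back-of-the-envelope use of the kinetic triplet reported above: under the usual isothermal assumption, the time to reach a conversion α is t = G(α)/k(T) with k(T) = A·exp(-Ea/RT). The storage temperature and target conversion below are illustrative choices, not values from the paper.

```python
# Isothermal time-to-conversion from the reported Ea, A and G(alpha) = alpha^(3/2).
import math

R = 8.314            # J mol^-1 K^-1
Ea = 84.90e3         # J mol^-1, first decomposition stage (from the abstract)
A = 7.03e7           # min^-1 (from the abstract)

def time_to_conversion(alpha, T_kelvin):
    k = A * math.exp(-Ea / (R * T_kelvin))   # Arrhenius rate constant, min^-1
    g = alpha ** 1.5                          # integral model G(alpha) = alpha^(3/2)
    return g / k                              # minutes

T = 298.15                                    # assumed storage temperature, 25 °C
t_min = time_to_conversion(0.05, T)           # time to 5% decomposition (assumed target)
print(f"{t_min / (60 * 24):.1f} days at {T - 273.15:.0f} °C")
```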
Dynamics of Potassium Release and Adsorption on Rice Straw Residue
Li, Jifu; Lu, Jianwei; Li, Xiaokun; Ren, Tao; Cong, Rihuan; Zhou, Li
2014-01-01
Straw application can not only increase crop yields, improve soil structure and enrich soil fertility, but can also enhance water and nutrient retention. The aim of this study was to ascertain the relationships between straw decomposition and the release-adsorption processes of K+. This study increases the understanding of the roles played by agricultural crop residues in the soil environment, informs more effective straw recycling and provides a method for reducing potassium loss. The influence of straw decomposition on the K+ release rate in paddy soil under flooded conditions was studied using incubation experiments, which indicated the decomposition process of rice straw could be divided into two main stages: (a) a rapid decomposition stage from 0 to 60 d and (b) a slow decomposition stage from 60 to 110 d. However, the characteristics of the straw potassium release were different from those of the overall straw decomposition, as 90% of total K was released by the third day of the study. The batch K sorption experiments showed that crop residues could adsorb K+ from the ambient environment, a capacity that depended on the decomposition period and the external K+ concentration. In addition, a number of materials or binding sites were observed on straw residues using IR analysis, indicating possible coupling sites for K+ ions. The aqueous solution experiments indicated that raw straw could absorb water at 3.88 g g−1, and this capacity rose to its maximum 15 d after incubation. All of the experiments demonstrated that crop residues could absorb large amounts of aqueous solution to preserve K+ indirectly during the initial decomposition period. These crop residues could also directly adsorb K+ via physical and chemical adsorption in the later period, allowing part of this K+ to be absorbed by plants for the next growing season. PMID:24587364
Dynamics of potassium release and adsorption on rice straw residue.
Li, Jifu; Lu, Jianwei; Li, Xiaokun; Ren, Tao; Cong, Rihuan; Zhou, Li
2014-01-01
Straw application can not only increase crop yields, improve soil structure and enrich soil fertility, but can also enhance water and nutrient retention. The aim of this study was to ascertain the relationships between straw decomposition and the release-adsorption processes of K(+). This study increases the understanding of the roles played by agricultural crop residues in the soil environment, informs more effective straw recycling and provides a method for reducing potassium loss. The influence of straw decomposition on the K(+) release rate in paddy soil under flooded conditions was studied using incubation experiments, which indicated the decomposition process of rice straw could be divided into two main stages: (a) a rapid decomposition stage from 0 to 60 d and (b) a slow decomposition stage from 60 to 110 d. However, the characteristics of the straw potassium release were different from those of the overall straw decomposition, as 90% of total K was released by the third day of the study. The batch K sorption experiments showed that crop residues could adsorb K(+) from the ambient environment, a capacity that depended on the decomposition period and the external K(+) concentration. In addition, a number of materials or binding sites were observed on straw residues using IR analysis, indicating possible coupling sites for K(+) ions. The aqueous solution experiments indicated that raw straw could absorb water at 3.88 g g(-1), and this capacity rose to its maximum 15 d after incubation. All of the experiments demonstrated that crop residues could absorb large amounts of aqueous solution to preserve K(+) indirectly during the initial decomposition period. These crop residues could also directly adsorb K(+) via physical and chemical adsorption in the later period, allowing part of this K(+) to be absorbed by plants for the next growing season.
A four-stage hybrid model for hydrological time series forecasting.
Di, Chongli; Yang, Xiaohua; Wang, Xiaochao
2014-01-01
Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of 'denoising, decomposition and ensemble'. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models.
NASA Astrophysics Data System (ADS)
Qiu, Weiqia; Zhou, Junjie; Yu, Jianhui; Xiao, Yi; Lu, Huihui; Guan, Heyuan; Zhong, Yongchun; Zhang, Jun; Chen, Zhe
2016-06-01
We established a theoretical model for a single knot-ring resonator and investigated its transmission spectrum using the Jones matrix formalism. The numerical results show that the two orthogonal polarization modes of the knot-ring, which originally resonate at the same wavelength, split into two resonant modes with different wavelengths. The mode splitting is due to the coupling between the two orthogonal polarization modes in the knot-ring when the twist angle of the twist coupler is not exactly equal to 2mπ (m is an integer). It is also found that the separation of the mode splitting is linearly proportional to the deviation angle δθ, with a high correlation coefficient of 99.6% and a slope of 3.17 nm/rad. Furthermore, a transparency phenomenon analogous to coupled-resonator-induced transparency was also predicted by the model. These findings may have potential applications in lasers and sensors.
NASA Astrophysics Data System (ADS)
Green, J. A.; Gray, M. D.; Robishaw, T.; Caswell, J. L.; McClure-Griffiths, N. M.
2014-06-01
Recent comparisons of magnetic field directions derived from maser Zeeman splitting with those derived from continuum source rotation measures have prompted new analysis of the propagation of the Zeeman split components, and the inferred field orientation. In order to do this, we first review differing electric field polarization conventions used in past studies. With these clearly and consistently defined, we then show that for a given Zeeman splitting spectrum, the magnetic field direction is fully determined and predictable on theoretical grounds: when a magnetic field is oriented away from the observer, the left-hand circular polarization is observed at higher frequency and the right-hand polarization at lower frequency. This is consistent with classical Lorentzian derivations. The consequent interpretation of recent measurements then raises the possibility of a reversal between the large-scale field (traced by rotation measures) and the small-scale field (traced by maser Zeeman splitting).
Resolving Some Paradoxes in the Thermal Decomposition Mechanism of Acetaldehyde
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sivaramakrishnan, Raghu; Michael, Joe V.; Harding, Lawrence B.
2015-07-16
The mechanism for the thermal decomposition of acetaldehyde has been revisited with an analysis of literature kinetics experiments using theoretical kinetics. The present modeling study was motivated by recent observations, with very sensitive diagnostics, of some unexpected products in high temperature micro-tubular reactor experiments on the thermal decomposition of CH3CHO and its deuterated analogs, CH3CDO, CD3CHO, and CD3CDO. The observations of these products prompted the authors of these studies to suggest that the enol tautomer, CH2CHOH (vinyl alcohol), is a primary intermediate in the thermal decomposition of acetaldehyde. The present modeling efforts on acetaldehyde decomposition incorporate a master equation re-analysis of the CH3CHO potential energy surface (PES). The lowest energy process on this PES is an isomerization of CH3CHO to CH2CHOH. However, the subsequent product channels for CH2CHOH are substantially higher in energy, and the only unimolecular process that can be thermally accessed is a re-isomerization to CH3CHO. The incorporation of these new theoretical kinetics predictions into models for selected literature experiments on CH3CHO thermal decomposition confirms our earlier experiment and theory based conclusions that the dominant decomposition process in CH3CHO at high temperatures is C-C bond fission with a minor contribution (~10-20%) from the roaming mechanism to form CH4 and CO. The present modeling efforts also incorporate a master-equation analysis of the H + CH2CHOH potential energy surface. This bimolecular reaction is the primary mechanism for removal of CH2CHOH, which can accumulate to minor amounts at high temperatures, T > 1000 K, in most lab-scale experiments that use large initial concentrations of CH3CHO. Our modeling efforts indicate that the observation of ketene, water and acetylene in the recent micro-tubular experiments is primarily due to bimolecular reactions of CH3CHO and CH2CHOH with H-atoms, and has no bearing on the unimolecular decomposition mechanism of CH3CHO. The present simulations also indicate that experiments using these micro-tubular reactors, when interpreted with the aid of high-level theoretical calculations and kinetics modeling, can offer insights into the chemistry of elusive intermediates in high temperature pyrolysis of organic molecules.
Organic and inorganic decomposition products from the thermal desorption of atmospheric particles
NASA Astrophysics Data System (ADS)
Williams, B. J.; Zhang, Y.; Zuo, X.; Martinez, R. E.; Walker, M. J.; Kreisberg, N. M.; Goldstein, A. H.; Docherty, K. S.; Jimenez, J. L.
2015-12-01
Atmospheric aerosol composition is often analyzed using thermal desorption techniques to evaporate samples and deliver organic or inorganic molecules to various designs of detectors for identification and quantification. The organic aerosol (OA) fraction is composed of thousands of individual compounds, some with nitrogen- and sulfur-containing functionality, and often contains oligomeric material, much of which may be susceptible to decomposition upon heating. Here we analyze thermal decomposition products as measured by a thermal desorption aerosol gas chromatograph (TAG) capable of separating thermal decomposition products from thermally stable molecules. The TAG impacts particles onto a collection and thermal desorption (CTD) cell, and upon completion of sample collection, heats and transfers the sample in a helium flow up to 310 °C. Desorbed molecules are refocused at the head of a GC column that is held at 45 °C and any volatile decomposition products pass directly through the column and into an electron impact quadrupole mass spectrometer (MS). Analysis of the sample introduction (thermal decomposition) period reveals contributions of NO+ (m/z 30), NO2+ (m/z 46), SO+ (m/z 48), and SO2+ (m/z 64), derived from either inorganic or organic particle-phase nitrate and sulfate. CO2+ (m/z 44) makes up a major component of the decomposition signal, along with smaller contributions from other organic components that vary with the type of aerosol contributing to the signal (e.g., m/z 53, 82 observed here for isoprene-derived secondary OA). All of these ions are important for ambient aerosol analyzed with the aerosol mass spectrometer (AMS), suggesting similarity of the thermal desorption processes in both instruments. Ambient observations of these decomposition products compared to organic, nitrate, and sulfate mass concentrations measured by an AMS reveal good correlation, with improved correlations for OA when compared to the AMS oxygenated OA (OOA) component. TAG signal found in the traditional compound elution time period reveals higher correlations with AMS hydrocarbon-like OA (HOA) combined with the fraction of OOA that is less oxygenated. Potential to quantify nitrate and sulfate aerosol mass concentrations using the TAG system is explored through analysis of ammonium sulfate and ammonium nitrate standards. While chemical standards display a linear response in the TAG system, re-desorptions of the CTD cell following ambient sample analysis show some signal carryover on sulfate and organics, and new desorption methods should be developed to improve throughput. Future standards should be composed of complex organic/inorganic mixtures, similar to what is found in the atmosphere, and perhaps will more accurately account for any aerosol mixture effects on compositional quantification.
Organic and inorganic decomposition products from the thermal desorption of atmospheric particles
NASA Astrophysics Data System (ADS)
Williams, Brent J.; Zhang, Yaping; Zuo, Xiaochen; Martinez, Raul E.; Walker, Michael J.; Kreisberg, Nathan M.; Goldstein, Allen H.; Docherty, Kenneth S.; Jimenez, Jose L.
2016-04-01
Atmospheric aerosol composition is often analyzed using thermal desorption techniques to evaporate samples and deliver organic or inorganic molecules to various designs of detectors for identification and quantification. The organic aerosol (OA) fraction is composed of thousands of individual compounds, some with nitrogen- and sulfur-containing functionality, and often contains oligomeric material, much of which may be susceptible to decomposition upon heating. Here we analyze thermal decomposition products as measured by a thermal desorption aerosol gas chromatograph (TAG) capable of separating thermal decomposition products from thermally stable molecules. The TAG impacts particles onto a collection and thermal desorption (CTD) cell, and upon completion of sample collection, heats and transfers the sample in a helium flow up to 310 °C. Desorbed molecules are refocused at the head of a gas chromatography column that is held at 45 °C and any volatile decomposition products pass directly through the column and into an electron impact quadrupole mass spectrometer. Analysis of the sample introduction (thermal decomposition) period reveals contributions of NO+ (m/z 30), NO2+ (m/z 46), SO+ (m/z 48), and SO2+ (m/z 64), derived from either inorganic or organic particle-phase nitrate and sulfate. CO2+ (m/z 44) makes up a major component of the decomposition signal, along with smaller contributions from other organic components that vary with the type of aerosol contributing to the signal (e.g., m/z 53, 82 observed here for isoprene-derived secondary OA). All of these ions are important for ambient aerosol analyzed with the aerosol mass spectrometer (AMS), suggesting similarity of the thermal desorption processes in both instruments. Ambient observations of these decomposition products compared to organic, nitrate, and sulfate mass concentrations measured by an AMS reveal good correlation, with improved correlations for OA when compared to the AMS oxygenated OA (OOA) component. TAG signal found in the traditional compound elution time period reveals higher correlations with AMS hydrocarbon-like OA (HOA) combined with the fraction of OOA that is less oxygenated. Potential to quantify nitrate and sulfate aerosol mass concentrations using the TAG system is explored through analysis of ammonium sulfate and ammonium nitrate standards. While chemical standards display a linear response in the TAG system, redesorptions of the CTD cell following ambient sample analysis show some signal carryover on sulfate and organics, and new desorption methods should be developed to improve throughput. Future standards should be composed of complex organic/inorganic mixtures, similar to what is found in the atmosphere, and perhaps will more accurately account for any aerosol mixture effects on compositional quantification.
Organic and inorganic decomposition products from the thermal desorption of atmospheric particles
Williams, Brent J.; Zhang, Yaping; Zuo, Xiaochen; ...
2016-04-11
Here, atmospheric aerosol composition is often analyzed using thermal desorption techniques to evaporate samples and deliver organic or inorganic molecules to various designs of detectors for identification and quantification. The organic aerosol (OA) fraction is composed of thousands of individual compounds, some with nitrogen- and sulfur-containing functionality, and often contains oligomeric material, much of which may be susceptible to decomposition upon heating. Here we analyze thermal decomposition products as measured by a thermal desorption aerosol gas chromatograph (TAG) capable of separating thermal decomposition products from thermally stable molecules. The TAG impacts particles onto a collection and thermal desorption (CTD) cell, and upon completion of sample collection, heats and transfers the sample in a helium flow up to 310 °C. Desorbed molecules are refocused at the head of a gas chromatography column that is held at 45 °C and any volatile decomposition products pass directly through the column and into an electron impact quadrupole mass spectrometer. Analysis of the sample introduction (thermal decomposition) period reveals contributions of NO+ (m/z 30), NO2+ (m/z 46), SO+ (m/z 48), and SO2+ (m/z 64), derived from either inorganic or organic particle-phase nitrate and sulfate. CO2+ (m/z 44) makes up a major component of the decomposition signal, along with smaller contributions from other organic components that vary with the type of aerosol contributing to the signal (e.g., m/z 53, 82 observed here for isoprene-derived secondary OA). All of these ions are important for ambient aerosol analyzed with the aerosol mass spectrometer (AMS), suggesting similarity of the thermal desorption processes in both instruments. Ambient observations of these decomposition products compared to organic, nitrate, and sulfate mass concentrations measured by an AMS reveal good correlation, with improved correlations for OA when compared to the AMS oxygenated OA (OOA) component. TAG signal found in the traditional compound elution time period reveals higher correlations with AMS hydrocarbon-like OA (HOA) combined with the fraction of OOA that is less oxygenated. Potential to quantify nitrate and sulfate aerosol mass concentrations using the TAG system is explored through analysis of ammonium sulfate and ammonium nitrate standards. While chemical standards display a linear response in the TAG system, redesorptions of the CTD cell following ambient sample analysis show some signal carryover on sulfate and organics, and new desorption methods should be developed to improve throughput. Future standards should be composed of complex organic/inorganic mixtures, similar to what is found in the atmosphere, and perhaps will more accurately account for any aerosol mixture effects on compositional quantification.
Wang, An; Cao, Yang; Shi, Quan
2018-01-01
In this paper, we demonstrate a complete version of the convergence theory of the modulus-based matrix splitting iteration methods for solving a class of implicit complementarity problems proposed by Hong and Li (Numer. Linear Algebra Appl. 23:629-641, 2016). New convergence conditions are presented when the system matrix is a positive-definite matrix and an [Formula: see text]-matrix, respectively.
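A minimal sketch of a modulus-based matrix splitting iteration, shown here for a standard linear complementarity problem rather than the implicit variant analysed in the paper. It assumes the usual formulation (M + Ω)x_{k+1} = N x_k + (Ω - A)|x_k| - γq with z = (|x| + x)/γ, and uses a Jacobi-type splitting A = M - N with M = D, Ω = D, γ = 1; the test matrix is an illustrative M-matrix.

```python
# Modulus-based splitting iteration for LCP(q, A): find z >= 0 with w = A z + q >= 0, z^T w = 0.
import numpy as np

def modulus_lcp(A, q, iters=500, tol=1e-10):
    D = np.diag(np.diag(A))
    N = D - A                        # splitting A = M - N with M = D (Jacobi-type)
    Omega = D                        # positive diagonal parameter matrix (assumed choice)
    x = np.zeros_like(q)
    z = np.abs(x) + x
    for _ in range(iters):
        rhs = N @ x + (Omega - A) @ np.abs(x) - q          # gamma = 1
        x = np.linalg.solve(D + Omega, rhs)
        z = np.abs(x) + x
        w = A @ z + q
        if np.linalg.norm(np.minimum(z, w)) < tol:         # natural complementarity residual
            break
    return z

# Tridiagonal M-matrix test problem (hence an H-matrix with positive diagonal).
n = 8
A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
q = np.array([(-1.0) ** i for i in range(n)])
z = modulus_lcp(A, q)
print(np.round(z, 4))
print("complementarity residual:", np.linalg.norm(np.minimum(z, A @ z + q)))
```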
Chemical (knight) shift distortions of quadrupole-split deuteron powder spectra in solids
NASA Astrophysics Data System (ADS)
Torgeson, D. R.; Schoenberger, R. J.; Barnes, R. G.
In strong magnetic fields (e.g., 8 Tesla) anisotropy of the shift tensor (chemical or Knight shift) can alter the spacings of the features of quadrupole-split deuteron spectra of polycrystalline samples. Analysis of powder spectra yields both correct quadrupole coupling and symmetry parameters and all the components of the shift tensor. Synthetic and experimental examples are given to illustrate such behavior.
An asymptotic induced numerical method for the convection-diffusion-reaction equation
NASA Technical Reports Server (NTRS)
Scroggs, Jeffrey S.; Sorensen, Danny C.
1988-01-01
A parallel algorithm for the efficient solution of a time-dependent reaction-convection-diffusion equation with a small parameter on the diffusion term is presented. The method is based on a domain decomposition that is dictated by singular perturbation analysis. The analysis is used to determine regions where certain reduced equations may be solved in place of the full equation. Parallelism is evident at two levels. Domain decomposition provides parallelism at the highest level, and within each domain there is ample opportunity to exploit parallelism. Run-time results demonstrate the viability of the method.
1985-06-01
12. It was stated that analysis of the gaseous products showed that they consisted of N2O, NO, N2, CO, CO2, F2CO and traces of N* ... The products of ... IR, UV and mass spectrometry. These were (yields summarized in Table 1) as follows: No. 1: N2O, NO, CO2, CO, HCN, CH2O, and H2O; NO2 and a trace ... Ramirez, "Reaction of Gem-Nitronitroso Compounds with Triethyl Phosphite," Tetrahedron, Vol. 29, p. 4195, 1973. J. Jappy and P.N. Preston
Raman analysis of non stoichiometric Ni1-δO
NASA Astrophysics Data System (ADS)
Dubey, Paras; Choudhary, K. K.; Kaurav, Netram
2018-04-01
A thermal decomposition method was used to synthesize non-stoichiometric nickel oxide at different sintering temperatures up to 1100 °C. The structures of the synthesized compounds were analyzed by X-ray diffraction (XRD), and the magnetic ordering of the samples sintered at different temperatures was studied with the help of Raman scattering spectroscopy. It was found that the stoichiometry of the sample changes with sintering temperature, and hence the intensity of the two-magnon band changes. These results were interpreted as follows: increasing the decomposition temperature heals the defects present in the non-stoichiometric nickel oxide, and the antiferromagnetic spin correlation changes accordingly.
Empirical mode decomposition for analyzing acoustical signals
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2005-01-01
The present invention discloses a computer implemented signal analysis method through the Hilbert-Huang Transformation (HHT) for analyzing acoustical signals, which are assumed to be nonlinear and nonstationary. The Empirical Mode Decomposition (EMD) method and the Hilbert Spectral Analysis (HSA) are used to obtain the HHT. Essentially, the acoustical signal will be decomposed into the Intrinsic Mode Function components (IMFs). Once the invention decomposes the acoustic signal into its constituent components, all operations such as analyzing, identifying, and removing unwanted signals can be performed on these components. Upon transforming the IMFs into the Hilbert spectrum, the acoustical signal may be compared with other acoustical signals.
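A sketch of the Hilbert Spectral Analysis step described above: for each IMF, the analytic signal gives an instantaneous amplitude and frequency. The IMF here is a single synthetic chirp-like mode assumed to come from some EMD implementation; this is not the patented implementation.

```python
# Instantaneous amplitude and frequency of an IMF via the analytic signal.
import numpy as np
from scipy.signal import hilbert

def hilbert_spectrum(imf, fs):
    analytic = hilbert(imf)
    amplitude = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) * fs / (2.0 * np.pi)   # Hz, one sample shorter than imf
    return amplitude[:-1], inst_freq

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
imf = np.sin(2 * np.pi * (10 * t + 15 * t**2))   # instantaneous frequency sweeps 10 -> 40 Hz
amp, f_inst = hilbert_spectrum(imf, fs)
print(f_inst[50], f_inst[-50])                   # roughly 11-12 Hz early, ~38 Hz late
```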
Basu, Sanjay; Hong, Anthony; Siddiqi, Arjumand
2015-08-15
To lower the prevalence of hypertension and racial disparities in hypertension, public health agencies have attempted to reduce modifiable risk factors for high blood pressure, such as excess sodium intake or high body mass index. In the present study, we used decomposition methods to identify how population-level reductions in key risk factors for hypertension could reshape entire population distributions of blood pressure and associated disparities among racial/ethnic groups. We compared blood pressure distributions among non-Hispanic white, non-Hispanic black, and Mexican-American persons using data from the US National Health and Nutrition Examination Survey (2003-2010). When using standard adjusted logistic regression analysis, we found that differences in body mass index were the only significant explanatory correlate to racial disparities in blood pressure. By contrast, our decomposition approach provided more nuanced revelations; we found that disparities in hypertension related to tobacco use might be masked by differences in body mass index that significantly increase the disparities between black and white participants. Analysis of disparities between white and Mexican-American participants also reveals hidden relationships between tobacco use, body mass index, and blood pressure. Decomposition offers an approach to understand how modifying risk factors might alter population-level health disparities in overall outcome distributions that can be obscured by standard regression analyses. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
NASA Astrophysics Data System (ADS)
Emge, Darren K.; Adalı, Tülay
2014-06-01
As the availability and use of imaging methodologies continue to increase, there is a fundamental need to jointly analyze data collected from multiple modalities. This analysis is further complicated when the size or resolution of the images differs, implying that the observation lengths of the modalities can vary widely. To address this expanding landscape, we introduce the multiset singular value decomposition (MSVD), which can perform a joint analysis on any number of modalities regardless of their individual observation lengths. Simulations show the inter-modal relationships across the different modalities that are revealed by the MSVD. We apply the MSVD to forensic fingerprint analysis, showing that MSVD joint analysis successfully identifies relevant similarities for further analysis, significantly reducing the processing time required. This reduction takes the technique from a laboratory method to a useful forensic tool with applications across the law enforcement and security domains.
Controllable pneumatic generator based on the catalytic decomposition of hydrogen peroxide
NASA Astrophysics Data System (ADS)
Kim, Kyung-Rok; Kim, Kyung-Soo; Kim, Soohyun
2014-07-01
This paper presents a novel compact and controllable pneumatic generator that uses hydrogen peroxide decomposition. A fuel micro-injector using a piston-pump mechanism is devised and tested to control the chemical decomposition rate. By controlling the injection rate, the feedback controller maintains the pressure of the gas reservoir at a desired pressure level. Thermodynamic analysis and experiments are performed to demonstrate the feasibility of the proposed pneumatic generator. Using a prototype of the pneumatic generator, it takes 6 s to reach 3.5 bars with a reservoir volume of 200 ml at the room temperature, which is sufficiently rapid and effective to maintain the repetitive lifting of a 1 kg mass.
Controllable pneumatic generator based on the catalytic decomposition of hydrogen peroxide.
Kim, Kyung-Rok; Kim, Kyung-Soo; Kim, Soohyun
2014-07-01
This paper presents a novel compact and controllable pneumatic generator that uses hydrogen peroxide decomposition. A fuel micro-injector using a piston-pump mechanism is devised and tested to control the chemical decomposition rate. By controlling the injection rate, the feedback controller maintains the pressure of the gas reservoir at a desired pressure level. Thermodynamic analysis and experiments are performed to demonstrate the feasibility of the proposed pneumatic generator. Using a prototype of the pneumatic generator, it takes 6 s to reach 3.5 bars with a reservoir volume of 200 ml at the room temperature, which is sufficiently rapid and effective to maintain the repetitive lifting of a 1 kg mass.
New spectrophotometric assay for pilocarpine.
El-Masry, S; Soliman, R
1980-07-01
A quick method for the determination of pilocarpine in eye drops in the presence of decomposition products is described. The method involves complexation of the alkaloid with bromocresol purple at pH 6. After treatment with 0.1N NaOH, the liberated dye is measured at 580 nm. The method has a relative standard deviation of 1.99%, and has been successfully applied to the analysis of 2 batches of pilocarpine eye drops. The recommended method was also used to monitor the stability of a pilocarpine nitrate solution in 0.05N NaOH at 65 degrees C. The BPC method failed to detect any significant decomposition after 2 h incubation, but the recommended method revealed 87.5% decomposition.
Solid-state reaction kinetics of neodymium doped magnesium hydrogen phosphate system
NASA Astrophysics Data System (ADS)
Gupta, Rashmi; Slathia, Goldy; Bamzai, K. K.
2018-05-01
Neodymium doped magnesium hydrogen phosphate (NdMHP) crystals were grown by using a gel encapsulation technique. Structural characterization of the grown crystals was carried out by single-crystal X-ray diffraction (XRD), which revealed that NdMHP crystals crystallize in the orthorhombic crystal system with space group Pbca. The kinetics of decomposition of the grown crystals have been studied by non-isothermal analysis. Decomposition temperatures and weight losses were estimated from thermogravimetric/differential thermal analysis (TG/DTA) in conjunction with DSC studies. The various steps involved in the thermal decomposition of the material have been analysed using the Horowitz-Metzger, Coats-Redfern and Piloyan-Novikova equations to evaluate the kinetic parameters.
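A sketch of the Coats-Redfern analysis named above: for a chosen model g(α), ln[g(α)/T²] is fitted linearly against 1/T and the slope gives -Ea/R. The TG data below are synthetic, generated from an assumed Ea and A purely to exercise the fit; they are not measurements on the NdMHP crystals.

```python
# Coats-Redfern fit: ln[g(alpha)/T^2] = ln(A*R/(beta*Ea)) - Ea/(R*T), first-order model F1.
import numpy as np

R = 8.314
beta = 10.0 / 60.0                     # heating rate: 10 K/min expressed in K/s

# Synthetic single-step, first-order TG data from assumed kinetic parameters.
Ea_true, A_true = 120e3, 1e9           # J/mol, 1/s (assumed)
T = np.linspace(500.0, 650.0, 80)      # K
g = (A_true * R * T**2) / (beta * Ea_true) * np.exp(-Ea_true / (R * T))
alpha = 1.0 - np.exp(-g)               # invert g(alpha) = -ln(1 - alpha)
mask = (alpha > 0.05) & (alpha < 0.95) # keep the usual conversion window

g_alpha = -np.log(1.0 - alpha[mask])   # first-order integral model
y = np.log(g_alpha / T[mask] ** 2)
slope, intercept = np.polyfit(1.0 / T[mask], y, 1)
print(f"recovered Ea = {-slope * R / 1000:.1f} kJ/mol (true value 120.0)")
```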
NASA Astrophysics Data System (ADS)
Jones, Adam; Utyuzhnikov, Sergey
2017-08-01
Turbulent flow in a ribbed channel is studied using an efficient near-wall domain decomposition (NDD) method. The NDD approach is formulated by splitting the computational domain into an inner and outer region, with an interface boundary between the two. The computational mesh covers the outer region, and the flow in this region is solved using the open-source CFD code Code_Saturne with special boundary conditions on the interface boundary, called interface boundary conditions (IBCs). The IBCs are of Robin type and incorporate the effect of the inner region on the flow in the outer region. IBCs are formulated in terms of the distance from the interface boundary to the wall in the inner region. It is demonstrated that up to 90% of the region between the ribs in the ribbed passage can be removed from the computational mesh with an error on the friction factor within 2.5%. In addition, computations with NDD are faster than computations based on low Reynolds number (LRN) models by a factor of five. Different rib heights can be studied with the same mesh in the outer region without affecting the accuracy of the friction factor. This is tested with six different rib heights in an example of a design optimisation study. It is found that the friction factors computed with NDD are almost identical to the fully-resolved results. When used for inverse problems, NDD is considerably more efficient than LRN computations because only one computation needs to be performed and only one mesh needs to be generated.
How Sommerfeld extended Bohr's model of the atom (1913-1916)
NASA Astrophysics Data System (ADS)
Eckert, Michael
2014-04-01
Sommerfeld's extension of Bohr's atomic model was motivated by the quest for a theory of the Zeeman and Stark effects. The crucial idea was that a spectral line is made up of coinciding frequencies which are decomposed in an applied field. In October 1914 Johannes Stark had published the results of his experimental investigation on the splitting of spectral lines in hydrogen (Balmer lines) in electric fields, which showed that the frequency of each Balmer line becomes decomposed into a multiplet of frequencies. The number of lines in such a decomposition grows with the index of the line in the Balmer series. Sommerfeld concluded from this observation that the quantization in Bohr's model had to be altered in order to allow for such decompositions. He outlined this idea in a lecture in winter 1914/15, but did not publish it. The First World War further delayed its elaboration. When Bohr published new results in autumn 1915, Sommerfeld finally developed his theory in a provisional form in two memoirs which he presented in December 1915 and January 1916 to the Bavarian Academy of Science. In July 1916 he published the refined version in the Annalen der Physik. The focus here is on the preliminary Academy memoirs whose rudimentary form is better suited for a historical approach to Sommerfeld's atomic theory than the finished Annalen-paper. This introductory essay reconstructs the historical context (mainly based on Sommerfeld's correspondence). It will become clear that the extension of Bohr's model did not emerge in a singular stroke of genius but resulted from an evolving process.
NASA Astrophysics Data System (ADS)
Hsiao, Y. R.; Tsai, C.
2017-12-01
As the WHO Air Quality Guideline indicates, ambient air pollution puts world populations under threat of fatal illnesses (e.g. heart disease, lung cancer, asthma, etc.), raising concerns about air pollution sources and related factors. This study presents a novel approach to investigating the multiscale variations of PM2.5 in southern Taiwan over the past decade, together with four meteorological influencing factors (temperature, relative humidity, precipitation and wind speed), based on the Noise-Assisted Multivariate Empirical Mode Decomposition (NAMEMD) algorithm, Hilbert Spectral Analysis (HSA) and the Time-Dependent Intrinsic Correlation (TDIC) method. The NAMEMD algorithm is a fully data-driven approach designed for nonlinear and nonstationary multivariate signals, and is used to decompose multivariate signals into a collection of channels of Intrinsic Mode Functions (IMFs). The TDIC method is an EMD-based method that uses a set of sliding window sizes to quantify localized correlation coefficients for multiscale signals. With the alignment property and quasi-dyadic filter bank of the NAMEMD algorithm, one is able to produce the same number of IMFs for all variables and estimate the cross-correlation in a more accurate way. The performance of the spectral representation of the NAMEMD-HSA method is compared with Complementary Ensemble Empirical Mode Decomposition/Hilbert Spectral Analysis (CEEMD-HSA) and wavelet analysis. The nature of NAMEMD-based TDIC analysis is then compared with CEEMD-based TDIC analysis and traditional correlation analysis.
Multiplex detection of protein-protein interactions using a next generation luciferase reporter.
Verhoef, Lisette G G C; Mattioli, Michela; Ricci, Fernanda; Li, Yao-Cheng; Wade, Mark
2016-02-01
Cell-based assays of protein-protein interactions (PPIs) using split reporter proteins can be used to identify PPI agonists and antagonists. Generally, such assays measure one PPI at a time, and thus counterscreens for on-target activity must be run in parallel or at a subsequent stage; this increases both the cost and time during screening. Split luciferase systems offer advantages over those that use split fluorescent proteins (FPs), because split luciferase offers a greater signal:noise ratio and, unlike with split FPs, the PPI readout can be reversed upon small-molecule treatment. While multiplexed PPI assays using luciferase have been reported, they suffer from low signal:noise and require fairly complex spectral deconvolution during analysis. Furthermore, the luciferase enzymes used are large, which limits the range of PPIs that can be interrogated due to steric hindrance from the split luciferase fragments. Here, we report a multiplexed PPI assay based on split luciferases from Photinus pyralis (firefly luciferase, FLUC) and the deep-sea shrimp, Oplophorus gracilirostris (NanoLuc, NLUC). Specifically, we show that the binding of the p53 tumor suppressor to its two major negative regulators, MDM2 and MDM4, can be simultaneously measured within the same sample, without the requirement for complex filters or deconvolution. We provide chemical and genetic validation of this system using MDM2-targeted small molecules and mutagenesis, respectively. Combined with the superior signal:noise and smaller size of split NanoLuc, this multiplexed PPI assay format can be exploited to study the induction or disruption of pairwise interactions that are prominent in many cell signaling pathways. Copyright © 2015 Elsevier B.V. All rights reserved.
2014-10-01
… nonlinear and non-stationary signals. It aims at decomposing a signal, via an iterative sifting procedure, into several intrinsic mode functions (IMFs), and each of the … It is well known that nonlinear and non-stationary signal analysis is important and difficult. Historically …
Analysis of HEMCL Railgun Insulator Damage
2006-06-01
… pyrolytic epoxy degradation and glass fiber softening and liquefaction in the insulator, it is determined that rail-to-rail plasmas are present behind … produces epoxy decomposition products in the form of gases, oils, waxes, and solid chars (heavily cross-linked residues) [4]. The nature of the … pyrolytic decomposition product (wax) of the epoxy as in the fired specimens. Figures 6 and 7 are typical examples of glass fiber softening and …
NASA Astrophysics Data System (ADS)
Yang, Hee-Chul; Kim, Hyung-Ju; Lee, Si-Young; Yang, In-Hwan; Chung, Dong-Yong
2017-06-01
The thermochemical properties of uranium compounds have attracted much interest in relation to thermochemical treatments and the safe disposal of radioactive waste bearing uranium compounds. The characteristics of the thermal decomposition of uranium metaphosphate, U(PO3)4, into uranium pyrophosphate, UP2O7, have been studied from the viewpoint of reaction kinetics and acting mechanisms. A mixture of U(PO3)4 and UP2O7 was prepared from the pyrolysis residue of uranium-bearing spent TBP. A kinetic analysis of the conversion of U(PO3)4 into UP2O7 was conducted using an isoconversional method and a master plot method on the basis of data from a non-isothermal thermogravimetric analysis. The thermal decomposition of U(PO3)4 into UP2O7 followed a single-step reaction with an activation energy of 175.29 ± 1.58 kJ mol-1. The most probable kinetic model was determined to be a nucleation and nuclei-growth model, the Avrami-Erofeev model (A3), which indicates that there are certain restrictions on the nuclei growth of UP2O7 during the solid-state decomposition of U(PO3)4.
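As an illustration of the isoconversional step, the sketch below generates synthetic non-isothermal conversion curves from the Avrami-Erofeev A3 model, g(α) = [-ln(1-α)]^(1/3), at three heating rates and recovers the activation energy with the Friedman method. The Arrhenius parameters and the data are assumed for illustration only; they are not the paper's measurements or fits.

```python
import numpy as np

R = 8.314                      # J mol^-1 K^-1
E_true, A = 175e3, 1e10        # illustrative Arrhenius parameters (not the paper's fit)

def alpha_A3(T, beta_K_per_min):
    """Closed-form conversion curve for the Avrami-Erofeev A3 model,
    g(alpha) = [-ln(1-alpha)]**(1/3), under a linear heating ramp,
    using the standard temperature-integral approximation."""
    beta = beta_K_per_min / 60.0                      # K/s
    I = (A * R * T**2) / (beta * E_true) * np.exp(-E_true / (R * T))
    return 1.0 - np.exp(-I**3)

T = np.linspace(600.0, 900.0, 3001)
betas = [5.0, 10.0, 20.0]                             # heating rates, K/min
curves = {b: alpha_A3(T, b) for b in betas}

# Friedman isoconversional method: at a fixed conversion, the slope of
# ln(dalpha/dt) versus 1/T across the heating rates equals -E/R.
for a_star in (0.2, 0.5, 0.8):
    inv_T, ln_rate = [], []
    for b, alpha in curves.items():
        i = np.searchsorted(alpha, a_star)
        dadt = np.gradient(alpha, T)[i] * (b / 60.0)  # dalpha/dt = beta * dalpha/dT
        inv_T.append(1.0 / T[i]); ln_rate.append(np.log(dadt))
    E_est = -np.polyfit(inv_T, ln_rate, 1)[0] * R
    print(f"alpha = {a_star}:  E ~ {E_est / 1e3:.0f} kJ/mol")
```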
FACETS: multi-faceted functional decomposition of protein interaction networks.
Seah, Boon-Siew; Bhowmick, Sourav S; Dewey, C Forbes
2012-10-15
The availability of large-scale curated protein interaction datasets has given rise to the opportunity to investigate higher level organization and modularity within the protein-protein interaction (PPI) network using graph theoretic analysis. Despite the recent progress, systems level analysis of high-throughput PPIs remains a daunting task because of the amount of data they present. In this article, we propose a novel PPI network decomposition algorithm called FACETS in order to make sense of the deluge of interaction data using Gene Ontology (GO) annotations. FACETS finds not just a single functional decomposition of the PPI network, but a multi-faceted atlas of functional decompositions that portray alternative perspectives of the functional landscape of the underlying PPI network. Each facet in the atlas represents a distinct interpretation of how the network can be functionally decomposed and organized. Our algorithm maximizes interpretative value of the atlas by optimizing inter-facet orthogonality and intra-facet cluster modularity. We tested our algorithm on the global networks from IntAct, and compared it with gold standard datasets from MIPS and KEGG. We demonstrated the performance of FACETS. We also performed a case study that illustrates the utility of our approach. Supplementary data are available at the Bioinformatics online. Our software is available freely for non-commercial purposes from: http://www.cais.ntu.edu.sg/~assourav/Facets/
Fluid dynamic propagation of initial baryon number perturbations on a Bjorken flow background
Floerchinger, Stefan; Martinez, Mauricio
2015-12-11
Baryon number density perturbations offer a possible route to experimentally measure baryon number susceptibilities and heat conductivity of the quark gluon plasma. We study the fluid dynamical evolution of local and event-by-event fluctuations of baryon number density, flow velocity, and energy density on top of a (generalized) Bjorken expansion. To that end we use a background-fluctuation splitting and a Bessel-Fourier decomposition for the fluctuating part of the fluid dynamical fields with respect to the azimuthal angle, the radius in the transverse plane, and rapidity. Here, we examine how the time evolution of linear perturbations depends on the equation of state as well as on shear viscosity, bulk viscosity, and heat conductivity for modes with different azimuthal, radial, and rapidity wave numbers. Finally we discuss how this information is accessible to experiments in terms of the transverse and rapidity dependence of correlation functions for baryonic particles in high energy nuclear collisions.
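A schematic form of the Bessel-Fourier decomposition referred to above (generic notation, not necessarily the authors' normalization or mode labeling) reads:

```latex
% Fluctuating part of a fluid-dynamical field on top of the Bjorken background,
% expanded in azimuthal harmonics (m), transverse Bessel modes (l) and a
% rapidity wave number (k); c^{(m,l)}(k,\tau) are the mode amplitudes.
\delta n(\tau, r, \phi, \eta) \;=\;
\sum_{m=-\infty}^{\infty} \sum_{l=1}^{\infty} \int \frac{dk}{2\pi}\;
c^{(m,l)}(k,\tau)\, J_{m}\!\left(k^{(m)}_{l} r\right) e^{i m \phi}\, e^{i k \eta}
```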
NASA Astrophysics Data System (ADS)
Conway, John T.; Cohl, Howard S.
2010-06-01
A new method is presented for Fourier decomposition of the Helmholtz Green function in cylindrical coordinates, which is equivalent to obtaining the solution of the Helmholtz equation for a general ring source. The Fourier coefficients of the Green function are split into their half advanced + half retarded and half advanced-half retarded components, and closed form solutions for these components are then obtained in terms of a Horn function and a Kampé de Fériet function respectively. Series solutions for the Fourier coefficients are given in terms of associated Legendre functions, Bessel and Hankel functions and a hypergeometric function. These series are derived either from the closed form 2-dimensional hypergeometric solutions or from an integral representation, or from both. A simple closed form far-field solution for the general Fourier coefficient is derived from the Hankel series. Numerical calculations comparing different methods of calculating the Fourier coefficients are presented. Fourth order ordinary differential equations for the Fourier coefficients are also given and discussed briefly.
Method of generating hydrogen by catalytic decomposition of water
Balachandran, Uthamalingam; Dorris, Stephen E.; Bose, Arun C.; Stiegel, Gary J.; Lee, Tae-Hyun
2002-01-01
A method for producing hydrogen includes providing a feed stream comprising water; contacting at least one proton conducting membrane adapted to interact with the feed stream; splitting the water into hydrogen and oxygen at a predetermined temperature; and separating the hydrogen from the oxygen. Preferably the proton conducting membrane comprises a proton conductor and a second phase material. Preferable proton conductors suitable for use in a proton conducting membrane include a lanthanide element, a Group VIA element and a Group IA or Group IIA element such as barium, strontium, or combinations of these elements. More preferred proton conductors include yttrium. Preferable second phase materials include platinum, palladium, nickel, cobalt, chromium, manganese, vanadium, silver, gold, copper, rhodium, ruthenium, niobium, zirconium, tantalum, and combinations of these. More preferred second phase materials suitable for use in a proton conducting membrane include nickel, palladium, and combinations of these. The method for generating hydrogen is preferably performed in the range between about 600 °C and 1,700 °C.
Combined Molecular and Spin Dynamics Simulation of Lattice Vacancies in BCC Iron
NASA Astrophysics Data System (ADS)
Mudrick, Mark; Perera, Dilina; Eisenbach, Markus; Landau, David P.
Using an atomistic model that treats translational and spin degrees of freedom equally, combined molecular and spin dynamics simulations have been performed to study dynamic properties of BCC iron at varying levels of defect impurity. Atomic interactions are described by an empirical many-body potential, and spin interactions with a Heisenberg-like Hamiltonian with a coordinate dependent exchange interaction. Equations of motion are solved numerically using the second-order Suzuki-Trotter decomposition for the time evolution operator. We analyze the spatial and temporal correlation functions for atomic displacements and magnetic order to obtain the effect of vacancy defects on the phonon and magnon excitations. We show that vacancy clusters in the material cause splitting of the characteristic transverse spin-wave excitations, indicating the production of additional excitation modes. Additionally, we investigate the coupling of the atomic and magnetic modes. These modes become more distinct with increasing vacancy cluster size. This material is based upon work supported by the U.S. Department of Energy Office of Science Graduate Student Research (SCGSR) program.
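The second-order Suzuki-Trotter step can be illustrated on a single classical spin precessing in a field split into two non-commuting parts. This toy sketch (not the actual spin-lattice dynamics code or potential) composes two exact sub-rotations per step and compares the result with precession about the total field.

```python
import numpy as np

def rotate(s, axis, angle):
    """Rotate classical spin s about unit vector `axis` by `angle` (Rodrigues formula)."""
    axis = axis / np.linalg.norm(axis)
    return (s * np.cos(angle) + np.cross(axis, s) * np.sin(angle)
            + axis * np.dot(axis, s) * (1 - np.cos(angle)))

# Second-order Suzuki-Trotter (Strang) splitting for ds/dt = (B1 + B2) x s:
# exp(dt (L1+L2)) ~ exp(dt/2 L1) exp(dt L2) exp(dt/2 L1); each factor is an
# exact precession about one field component (toy single-spin stand-in for
# the coupled lattice/spin update used in combined molecular and spin dynamics).
B1, B2 = np.array([0.0, 0.0, 1.0]), np.array([0.7, 0.0, 0.0])
dt, nsteps = 0.01, 1000
s = np.array([0.0, 1.0, 0.0])
for _ in range(nsteps):
    s = rotate(s, B1, 0.5 * dt * np.linalg.norm(B1))
    s = rotate(s, B2, dt * np.linalg.norm(B2))
    s = rotate(s, B1, 0.5 * dt * np.linalg.norm(B1))

# Reference: exact precession about the total field for the same elapsed time.
B = B1 + B2
s_exact = rotate(np.array([0.0, 1.0, 0.0]), B, nsteps * dt * np.linalg.norm(B))
print(np.linalg.norm(s - s_exact))   # O(dt^2) global error; |s| is preserved exactly
```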
Baldrian, Petr; Kolařík, Miroslav; Stursová, Martina; Kopecký, Jan; Valášková, Vendula; Větrovský, Tomáš; Zifčáková, Lucia; Snajdr, Jaroslav; Rídl, Jakub; Vlček, Cestmír; Voříšková, Jana
2012-02-01
Soils of coniferous forest ecosystems are important for the global carbon cycle, and the identification of active microbial decomposers is essential for understanding organic matter transformation in these ecosystems. By the independent analysis of DNA and RNA, whole communities of bacteria and fungi and their active members were compared in the topsoil of a Picea abies forest during a period of organic matter decomposition. Fungi quantitatively dominate the microbial community in the litter horizon, while the organic horizon shows comparable amounts of fungal and bacterial biomass. Active microbial populations obtained by RNA analysis exhibit diversity similar to that of DNA-derived populations, but significantly differ in the composition of microbial taxa. Several highly active taxa, especially fungal ones, show low abundance or even absence in the DNA pool. Bacteria and especially fungi are often distinctly associated with a particular soil horizon. Fungal communities are less even than bacterial ones and show higher relative abundances of dominant species. While dominant bacterial species are distributed across the studied ecosystem, the distribution of dominant fungi is often spatially restricted, as they are only recovered at some locations. The sequences of the cbhI gene encoding cellobiohydrolase (exocellulase), an essential enzyme for cellulose decomposition, were compared in the soil metagenome and metatranscriptome and assigned to their producers. The litter horizon exhibits higher diversity and a higher proportion of expressed sequences than the organic horizon. Cellulose decomposition is mediated by highly diverse fungal populations largely distinct between soil horizons. The results indicate that low-abundance species make an important contribution to decomposition processes in soils.
En face spectral domain optical coherence tomography analysis of lamellar macular holes.
Clamp, Michael F; Wilkes, Geoff; Leis, Laura S; McDonald, H Richard; Johnson, Robert N; Jumper, J Michael; Fu, Arthur D; Cunningham, Emmett T; Stewart, Paul J; Haug, Sara J; Lujan, Brandon J
2014-07-01
To analyze the anatomical characteristics of lamellar macular holes using cross-sectional and en face spectral domain optical coherence tomography. Forty-two lamellar macular holes were retrospectively identified for analysis. The location, cross-sectional length, and area of lamellar holes were measured using B-scans and en face imaging. The presence of photoreceptor inner segment/outer segment disruption and the presence or absence of epiretinal membrane formation were recorded. Forty-two lamellar macular holes were identified. Intraretinal splitting occurred within the outer plexiform layer in 97.6% of eyes. The area of intraretinal splitting in lamellar holes did not correlate with visual acuity. Eyes with inner segment/outer segment disruption had significantly worse mean logMAR visual acuity (0.363 ± 0.169; Snellen = 20/46) than eyes without inner segment/outer segment disruption (0.203 ± 0.124; Snellen = 20/32) (analysis of variance, P = 0.004). Epiretinal membrane was present in 34 of 42 eyes (81.0%). En face imaging allowed for consistent detection and quantification of intraretinal splitting within the outer plexiform layer in patients with lamellar macular holes, supporting the notion that an area of anatomical weakness exists within Henle's fiber layer, presumably at the synaptic connection of these fibers within the outer plexiform layer. However, the en face area of intraretinal splitting did not correlate with visual acuity, whereas disruption of the inner segment/outer segment junction was associated with significantly worse visual acuity in patients with lamellar macular holes.
NASA Astrophysics Data System (ADS)
Fouquet, Thierry N. J.; Cody, Robert B.; Ozeki, Yuka; Kitagawa, Shinya; Ohtani, Hajime; Sato, Hiroaki
2018-05-01
The Kendrick mass defect (KMD) analysis of multiply charged polymeric distributions has recently revealed a surprising isotopic split in their KMD plots—namely a 1/z difference between KMDs of isotopes of an oligomer at charge state z. Relying on the KMD analysis of actual and simulated distributions of poly(ethylene oxide) (PEO), the isotopic split is mathematically accounted for and found to go with an isotopic misalignment in certain cases. It is demonstrated that the divisibility (resp. indivisibility) of the nominal mass of the repeating unit (R) by z is the condition for homolog ions to line up horizontally (resp. misaligned obliquely) in a KMD plot. Computing KMDs using a fractional base unit R/z eventually corrects the misalignments for the associated charge state while using the least common multiple of all the charge states as the divisor realigns all the points at once. The isotopic split itself can be removed by using either a new charge-dependent KMD plot compatible with any fractional base unit or the remainders of KM (RKM) recently developed for low-resolution data all found to be linked in a unified theory. These original applications of the fractional base units and the RKM plots are of importance theoretically to satisfy the basics of a mass defect analysis and practically for a correct data handling of single stage and tandem mass spectra of multiply charged homo- and copolymers.
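A minimal sketch of the arithmetic involved, assuming the usual Kendrick conventions (Kendrick mass KM = m × round(M_R/X)/(M_R/X) for a base unit R/X and KMD = round(KM) - KM); the exact sign and rounding conventions of the paper may differ, and the peak list below is purely illustrative, not PEO data from the study.

```python
import numpy as np

# Kendrick mass defect (KMD) with a classical and a fractional base unit.
# X = 1 recovers the classical KMD; X = z (or the least common multiple of all
# charge states) is the realignment trick described in the abstract.
M_R = 44.02621            # exact mass of the PEO repeat unit C2H4O, Da

def kmd(m, X=1):
    base = M_R / X                       # fractional base unit R/X
    km = m * round(base) / base          # Kendrick mass on the R/X scale (assumed form)
    return round(km) - km                # Kendrick mass defect (assumed sign convention)

# Toy peak list: singly and triply charged PEO-like ions (illustrative numbers only).
mz_values = np.array([459.28, 503.31, 547.34,        # z = 1 series, spacing ~ M_R
                      465.61, 480.28, 494.96])       # z = 3 series, spacing ~ M_R/3
for mz in mz_values:
    print(f"m/z {mz:8.3f}  KMD(X=1) = {kmd(mz):+.4f}  KMD(X=3) = {kmd(mz, X=3):+.4f}")
```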
Ogawa, Tetsuya; Yamamoto, Shin-Ichiro; Nakazawa, Kimitaka
2018-01-01
The adaptability of human bipedal locomotion has been studied using split-belt treadmill walking. Most previous studies utilized experimental protocols with remarkably different split ratios (e.g., 1:2, 1:3, or 1:4), whereas there is limited research regarding the adaptive process under small speed ratios. It is important to know the nature of the adaptive process at ratios smaller than 1:2, because a systematic evaluation of gait adaptation under small to moderate split ratios would enable us to examine the relative contribution of two forms of adaptation (reactive feedback and predictive feedforward control) to gait adaptation. We therefore examined gait behavior during split-belt treadmill adaptation under five belt-speed-difference conditions (from 1:1.2 to 1:2). Gait parameters related to reactive control (stance time) showed quick adjustments immediately after imposing split-belt walking in all five speed ratios. Meanwhile, parameters related to predictive control (step length and anterior force) showed a clear pattern of adaptation and subsequent aftereffects except for the 1:1.2 adaptation. Additionally, the 1:1.2 ratio was distinguished from the other ratios by cluster analysis based on the relationship between the size of the adaptation and the aftereffect. Our findings indicate that reactive feedback control was involved in all the speed ratios tested and that the extent of the reaction was proportionally dependent on the speed ratio of the split-belt. In contrast, predictive feedforward control was necessary when the ratio of the split-belt was greater. These results enable us to consider how a given split-belt training condition would affect the relative contribution of the two strategies to gait adaptation, which must be considered when developing rehabilitation interventions for stroke patients. PMID:29694404
NASA Astrophysics Data System (ADS)
Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei
2018-03-01
A time varying filtering based empirical mode decomposition (EMD) (TVF-EMD) method was proposed recently to solve the mode mixing problem of EMD method. Compared with the classical EMD, TVF-EMD was proven to improve the frequency separation performance and be robust to noise interference. However, the decomposition parameters (i.e., bandwidth threshold and B-spline order) significantly affect the decomposition results of this method. In original TVF-EMD method, the parameter values are assigned in advance, which makes it difficult to achieve satisfactory analysis results. To solve this problem, this paper develops an optimized TVF-EMD method based on grey wolf optimizer (GWO) algorithm for fault diagnosis of rotating machinery. Firstly, a measurement index termed weighted kurtosis index is constructed by using kurtosis index and correlation coefficient. Subsequently, the optimal TVF-EMD parameters that match with the input signal can be obtained by GWO algorithm using the maximum weighted kurtosis index as objective function. Finally, fault features can be extracted by analyzing the sensitive intrinsic mode function (IMF) owning the maximum weighted kurtosis index. Simulations and comparisons highlight the performance of TVF-EMD method for signal decomposition, and meanwhile verify the fact that bandwidth threshold and B-spline order are critical to the decomposition results. Two case studies on rotating machinery fault diagnosis demonstrate the effectiveness and advantages of the proposed method.
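The selection criterion can be sketched as follows, assuming the weighted kurtosis index is the kurtosis of an IMF weighted by the absolute correlation between that IMF and the raw signal; the TVF-EMD decomposition and the grey wolf optimizer themselves are not reproduced, and the two "IMFs" below are synthetic stand-ins.

```python
import numpy as np

def weighted_kurtosis_index(imf, raw):
    """Assumed form of the index: (Pearson) kurtosis of the IMF weighted by the
    absolute correlation between the IMF and the raw signal."""
    x = imf - imf.mean()
    kurt = np.mean(x**4) / (np.mean(x**2) ** 2)
    rho = np.corrcoef(imf, raw)[0, 1]
    return kurt * abs(rho)

def select_sensitive_imf(imfs, raw):
    """Pick the IMF with the largest weighted kurtosis index.  In the proposed
    method the same score, maximized over candidate (bandwidth threshold,
    B-spline order) pairs, serves as the objective of the optimizer."""
    scores = [weighted_kurtosis_index(imf, raw) for imf in imfs]
    return int(np.argmax(scores)), scores

# Toy usage with stand-in "IMFs": an impulsive band and a smooth band.
t = np.linspace(0.0, 1.0, 4000)
impulsive = np.sin(2 * np.pi * 120 * t) * (np.random.rand(t.size) > 0.995)
smooth = np.sin(2 * np.pi * 7 * t)
raw = impulsive + smooth + 0.05 * np.random.randn(t.size)
best, scores = select_sensitive_imf([impulsive, smooth], raw)
```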
Long-term litter decomposition controlled by manganese redox cycling
Keiluweit, Marco; Nico, Peter S.; Harmon, Mark; ...
2015-09-08
Litter decomposition is a keystone ecosystem process impacting nutrient cycling and productivity, soil properties, and the terrestrial carbon (C) balance, but the factors regulating decomposition rate are still poorly understood. Traditional models assume that the rate is controlled by litter quality, relying on parameters such as lignin content as predictors. However, a strong correlation has been observed between the manganese (Mn) content of litter and decomposition rates across a variety of forest ecosystems. Here, we show that long-term litter decomposition in forest ecosystems is tightly coupled to Mn redox cycling. Over 7 years of litter decomposition, microbial transformation of litter was paralleled by variations in Mn oxidation state and concentration. A detailed chemical imaging analysis of the litter revealed that fungi recruit and redistribute unreactive Mn2+ provided by fresh plant litter to produce oxidative Mn3+ species at sites of active decay, with Mn eventually accumulating as insoluble Mn3+/4+ oxides. Formation of reactive Mn3+ species coincided with the generation of aromatic oxidation products, providing direct proof of the previously posited role of Mn3+-based oxidizers in the breakdown of litter. Our results suggest that the litter-decomposing machinery at our coniferous forest site depends on the ability of plants and microbes to supply, accumulate, and regenerate short-lived Mn3+ species in the litter layer. As a result, this observation indicates that biogeochemical constraints on bioavailability, mobility, and reactivity of Mn in the plant–soil system may have a profound impact on litter decomposition rates.
Mode Analyses of Gyrokinetic Simulations of Plasma Microturbulence
NASA Astrophysics Data System (ADS)
Hatch, David R.
This thesis presents analysis of the excitation and role of damped modes in gyrokinetic simulations of plasma microturbulence. In order to address this question, mode decompositions are used to analyze gyrokinetic simulation data. A mode decomposition can be constructed by projecting a nonlinearly evolved gyrokinetic distribution function onto a set of linear eigenmodes, or alternatively by constructing a proper orthogonal decomposition of the distribution function. POD decompositions are used to examine the role of damped modes in saturating ion temperature gradient driven turbulence. In order to identify the contribution of different modes to the energy sources and sinks, numerical diagnostics for a gyrokinetic energy quantity were developed for the GENE code. The use of these energy diagnostics in conjunction with POD mode decompositions demonstrates that ITG turbulence saturates largely through dissipation by damped modes at the same perpendicular spatial scales as those of the driving instabilities. This defines a picture of turbulent saturation that is very different from both traditional hydrodynamic scenarios and also many common theories for the saturation of plasma turbulence. POD mode decompositions are also used to examine the role of subdominant modes in causing magnetic stochasticity in electromagnetic gyrokinetic simulations. It is shown that the magnetic stochasticity, which appears to be ubiquitous in electromagnetic microturbulence, is caused largely by subdominant modes with tearing parity. The application of higher-order singular value decomposition (HOSVD) to the full distribution function from gyrokinetic simulations is presented. This is an effort to demonstrate the ability to characterize and extract insight from a very large, complex, and high-dimensional data-set - the 5-D (plus time) gyrokinetic distribution function.
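For reference, a proper orthogonal decomposition of simulation snapshots reduces to a singular value decomposition of the fluctuation matrix. The sketch below uses random toy data in place of GENE output and only illustrates the generic POD machinery, not the gyrokinetic energy diagnostics.

```python
import numpy as np

# Proper orthogonal decomposition (POD) of a snapshot matrix via SVD: a minimal
# stand-in for the mode decompositions applied to the gyrokinetic distribution
# function (here the "field" is random toy data, not GENE output).
n_space, n_time = 512, 200
snapshots = np.random.randn(n_space, n_time)          # columns = time snapshots

mean_field = snapshots.mean(axis=1, keepdims=True)
fluct = snapshots - mean_field
U, s, Vt = np.linalg.svd(fluct, full_matrices=False)  # U: spatial POD modes
energy_fraction = s**2 / np.sum(s**2)                 # modal energy content
amplitudes = np.diag(s) @ Vt                          # time traces of each mode

# Rank-k reconstruction retaining the most energetic modes.
k = 10
reconstruction = mean_field + U[:, :k] @ amplitudes[:k, :]
```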
Reznick, Julia; Friedmann, Naama
2015-01-01
This study examined whether and how the morphological structure of written words affects reading in word-based neglect dyslexia (neglexia), and what can be learned about morphological decomposition in reading from the effect of morphology on neglexia. The oral reading of 7 Hebrew-speaking participants with acquired neglexia at the word level—6 with left neglexia and 1 with right neglexia—was evaluated. The main finding was that the morphological role of the letters on the neglected side of the word affected neglect errors: When an affix appeared on the neglected side, it was neglected significantly more often than when the neglected side was part of the root; root letters on the neglected side were never omitted, whereas affixes were. Perceptual effects of length and final letter form were found for words with an affix on the neglected side, but not for words in which a root letter appeared in the neglected side. Semantic and lexical factors did not affect the participants' reading and error pattern, and neglect errors did not preserve the morpho-lexical characteristics of the target words. These findings indicate that an early morphological decomposition of words to their root and affixes occurs before access to the lexicon and to semantics, at the orthographic-visual analysis stage, and that the effects did not result from lexical feedback. The same effects of morphological structure on reading were manifested by the participants with left- and right-sided neglexia. Since neglexia is a deficit at the orthographic-visual analysis level, the effect of morphology on reading patterns in neglexia further supports that morphological decomposition occurs in the orthographic-visual analysis stage, prelexically, and that the search for the three letters of the root in Hebrew is a trigger for attention shift in neglexia. PMID:26528159
Yin, Xiao-Li; Gu, Hui-Wen; Liu, Xiao-Lu; Zhang, Shan-Hui; Wu, Hai-Long
2018-03-05
Multiway calibration in combination with spectroscopic technique is an attractive tool for online or real-time monitoring of target analyte(s) in complex samples. However, how to choose a suitable multiway calibration method for the resolution of spectroscopic-kinetic data is a troubling problem in practical application. In this work, for the first time, three-way and four-way fluorescence-kinetic data arrays were generated during the real-time monitoring of the hydrolysis of irinotecan (CPT-11) in human plasma by excitation-emission matrix fluorescence. Alternating normalization-weighted error (ANWE) and alternating penalty trilinear decomposition (APTLD) were used as three-way calibration for the decomposition of the three-way kinetic data array, whereas alternating weighted residual constraint quadrilinear decomposition (AWRCQLD) and alternating penalty quadrilinear decomposition (APQLD) were applied as four-way calibration to the four-way kinetic data array. The quantitative results of the two kinds of calibration models were fully compared from the perspective of predicted real-time concentrations, spiked recoveries of initial concentration, and analytical figures of merit. The comparison study demonstrated that both three-way and four-way calibration models could achieve real-time quantitative analysis of the hydrolysis of CPT-11 in human plasma under certain conditions. However, it was also found that both of them possess some critical advantages and shortcomings during the process of dynamic analysis. The conclusions obtained in this paper can provide some helpful guidance for the reasonable selection of multiway calibration models to achieve the real-time quantitative analysis of target analyte(s) in complex dynamic systems. Copyright © 2017 Elsevier B.V. All rights reserved.
Adaptive Harmonic Balance Method for Unsteady, Nonlinear, One-Dimensional Periodic Flows
2002-09-01
… Design and Implementation. May 1999. … Toro, Eleuterio F. Riemann Solvers and Numerical Methods for Fluid Dynamics, chapter 15. New York … prominent for high-frequency unsteady flows. Experimental Analysis of Splitting-Induced Error: to assess the actual effect of splitting error on a … Experimental pressure data on inlet guide vane upstream of transonic rotating …
Singer, S S
1985-08-01
(Hydroxyalkyl)nitrosoureas and the related cyclic carbamates N-nitrosooxazolidones are potent carcinogens. The decompositions of four such compounds, 1-nitroso-1-(2-hydroxyethyl)urea (I), 3-nitrosooxazolid-2-one (II), 1-nitroso-1-(2-hydroxypropyl)urea (III), and 5-methyl-3-nitrosooxazolid-2-one (IV), in aqueous buffers at physiological pH were studied to determine if any obvious differences in decomposition pathways could account for the variety of tumors obtained from these four compounds. The products predicted by the literature mechanisms for nitrosourea and nitrosooxazolidone decompositions (which were derived from experiments at pH 10-12) were indeed the products formed, including glycols, active carbonyl compounds, epoxides, and, from the oxazolidones, cyclic carbonates. Furthermore, it was shown that in pH 6.4-7.4 buffer epoxides were stable reaction products. However, in the presence of hepatocytes, most of the epoxide was converted to glycol. The analytical methods developed were then applied to the analysis of the decomposition products of some related dialkylnitrosoureas, and similar results were obtained. The formation of chemically reactive secondary products and the possible relevance of these results to carcinogenesis studies are discussed.
3D tensor-based blind multispectral image decomposition for tumor demarcation
NASA Astrophysics Data System (ADS)
Kopriva, Ivica; Peršin, Antun
2010-03-01
Blind decomposition of a multi-spectral fluorescent image for tumor demarcation is formulated by exploiting the tensorial structure of the image. The first contribution of the paper is the identification of the matrix of spectral responses and the 3D tensor of spatial distributions of the materials present in the image from Tucker3 or PARAFAC models of the 3D image tensor. The second contribution of the paper is the clustering-based estimation of the number of materials present in the image as well as the matrix of their spectral profiles. The 3D tensor of the spatial distributions of the materials is recovered through 3-mode multiplication of the multi-spectral image tensor and the inverse of the matrix of spectral profiles. The tensor representation of the multi-spectral image preserves its local spatial structure, which is lost, due to the vectorization process, when matrix factorization-based decomposition methods (such as non-negative matrix factorization and independent component analysis) are used. Superior performance of the tensor-based image decomposition over matrix factorization-based decompositions is demonstrated on an experimental red-green-blue (RGB) image with known ground truth as well as on RGB fluorescent images of a skin tumor (basal cell carcinoma).
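The recovery step described above (3-mode multiplication of the image tensor with the inverse of the spectral-profile matrix) can be sketched with plain array operations; the tensors below are random stand-ins, not fluorescence data.

```python
import numpy as np

# Minimal sketch: given a multi-spectral image tensor X (rows x cols x bands)
# and an estimated matrix A of spectral profiles (bands x materials), the 3-D
# tensor of spatial distributions is obtained by 3-mode multiplication of X
# with the (pseudo)inverse of A.
rows, cols, bands, materials = 128, 128, 3, 2          # e.g. an RGB image, 2 materials
X = np.random.rand(rows, cols, bands)                  # stand-in image tensor
A = np.random.rand(bands, materials)                   # stand-in spectral profiles

A_pinv = np.linalg.pinv(A)                             # materials x bands
S = np.einsum('ijb,mb->ijm', X, A_pinv)                # spatial distributions tensor
assert S.shape == (rows, cols, materials)
```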
Gui, Heng; Hyde, Kevin; Xu, Jianchu; Mortimer, Peter
2017-01-01
Although there is a growing amount of evidence that arbuscular mycorrhizal fungi (AMF) influence the decomposition process, the extent of their involvement remains unclear. Therefore, given this knowledge gap, our aim was to test how AMF influence the soil decomposer communities. Dual compartment microcosms, where AMF (Glomus mosseae) were either allowed access (AM+) to or excluded (AM−) from forest soil compartments containing litterbags (leaf litter from Calophyllum polyanthum) were used. The experiment ran for six months, with destructive harvests at 0, 90, 120, 150, and 180 days. For each harvest we measured AMF colonization, soil nutrients, litter mass loss, and microbial biomass (using phospholipid fatty acid analysis (PLFA)). AMF significantly enhanced litter decomposition in the first 5 months, whilst delaying the development of total microbial biomass (represented by total PLFA) from T150 to T180. A significant decline in soil available N was observed through the course of the experiment for both treatments. This study shows that AMF have the capacity to interact with soil microbial communities and inhibit the development of fungal and bacterial groups in the soil at the later stage of the litter decomposition (180 days), whilst enhancing the rates of decomposition. PMID:28176855
Liu, Zhichao; Wu, Qiong; Zhu, Weihua; Xiao, Heming
2015-04-28
Density functional theory with dispersion-correction (DFT-D) was employed to study the effects of vacancy and pressure on the structure and initial decomposition of crystalline 5-nitro-2,4-dihydro-3H-1,2,4-triazol-3-one (β-NTO), a high-energy insensitive explosive. A comparative analysis of the chemical behaviors of NTO in the ideal bulk crystal and vacancy-containing crystals under applied hydrostatic compression was considered. Our calculated formation energy, vacancy interaction energy, electron density difference, and frontier orbitals reveal that the stability of NTO can be effectively manipulated by changing the molecular environment. Bimolecular hydrogen transfer is suggested to be a potential initial chemical reaction in the vacancy-containing NTO solid at 50 GPa, which is prior to the C-NO2 bond dissociation as its initiation decomposition in the gas phase. The vacancy defects introduced into the ideal bulk NTO crystal can produce a localized site, where the initiation decomposition is preferentially accelerated and then promotes further decompositions. Our results may shed some light on the influence of the molecular environments on the initial pathways in molecular explosives.
Navarro, Pedro J; Fernández-Isla, Carlos; Alcover, Pedro María; Suardíaz, Juan
2016-07-27
This paper presents a robust method for defect detection in textures, entropy-based automatic selection of the wavelet decomposition level (EADL), based on a wavelet reconstruction scheme, for detecting defects in a wide variety of structural and statistical textures. Two main features are presented. One of the new features is an original use of the normalized absolute function value (NABS) calculated from the wavelet coefficients derived at various different decomposition levels in order to identify textures where the defect can be isolated by eliminating the texture pattern in the first decomposition level. The second is the use of Shannon's entropy, calculated over detail subimages, for automatic selection of the band for image reconstruction, which, unlike other techniques, such as those based on the co-occurrence matrix or on energy calculation, provides a lower decomposition level, thus avoiding excessive degradation of the image, allowing a more accurate defect segmentation. A metric analysis of the results of the proposed method with nine different thresholding algorithms determined that selecting the appropriate thresholding method is important to achieve optimum performance in defect detection. As a consequence, several different thresholding algorithms depending on the type of texture are proposed.
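A simplified sketch of the entropy-guided level selection, assuming the PyWavelets package and a minimum-entropy rule as a stand-in for the published EADL criterion (the paper's exact selection rule and the NABS feature are not reproduced):

```python
import numpy as np
import pywt

def shannon_entropy(coeffs):
    """Shannon entropy of a set of wavelet detail coefficients, computed from
    the normalized coefficient energies."""
    energy = coeffs.ravel() ** 2
    p = energy / (energy.sum() + 1e-12)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def select_level(image, wavelet="db2", max_level=4):
    """Entropy-guided choice of decomposition level: score each level by the
    total entropy of its detail subimages and keep the minimum-entropy level
    (a simplified stand-in for the published selection rule)."""
    coeffs = pywt.wavedec2(image, wavelet, level=max_level)
    # coeffs = [cA_max, (cH_max, cV_max, cD_max), ..., (cH_1, cV_1, cD_1)]
    entropies = {}
    for k, details in enumerate(coeffs[1:], start=1):
        level = max_level - k + 1
        entropies[level] = sum(shannon_entropy(d) for d in details)
    return min(entropies, key=entropies.get), entropies

image = np.random.rand(256, 256)          # stand-in texture image
best_level, per_level = select_level(image)
```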
Energy decomposition analysis for exciplexes using absolutely localized molecular orbitals
NASA Astrophysics Data System (ADS)
Ge, Qinghui; Mao, Yuezhi; Head-Gordon, Martin
2018-02-01
An energy decomposition analysis (EDA) scheme is developed for understanding the intermolecular interaction involving molecules in their excited states. The EDA utilizes absolutely localized molecular orbitals to define intermediate states and is compatible with excited state methods based on linear response theory such as configuration interaction singles and time-dependent density functional theory. The shift in excitation energy when an excited molecule interacts with the environment is decomposed into frozen, polarization, and charge transfer contributions, and the frozen term can be further separated into Pauli repulsion and electrostatics. These terms can be added to their counterparts obtained from the ground state EDA to form a decomposition of the total interaction energy. The EDA scheme is applied to study a variety of systems, including some model systems to demonstrate the correct behavior of all the proposed energy components as well as more realistic systems such as hydrogen-bonding complexes (e.g., formamide-water, pyridine/pyrimidine-water) and halide (F-, Cl-)-water clusters that involve charge-transfer-to-solvent excitations.
Theoretical investigation of HNgNH3+ ions (Ng = He, Ne, Ar, Kr, and Xe)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Kunqi; Sheng, Li, E-mail: shengli@hit.edu.cn
2015-04-14
The equilibrium geometries, harmonic frequencies, and dissociation energies of HNgNH3+ ions (Ng = He, Ne, Ar, Kr, and Xe) were investigated using the following methods: Becke-3-parameter-Lee-Yang-Parr (B3LYP), Boese-Martin for Kinetics (BMK), second-order Møller-Plesset perturbation theory (MP2), and coupled-cluster with single and double excitations as well as perturbative inclusion of triples (CCSD(T)). The results indicate that HHeNH3+, HArNH3+, HKrNH3+, and HXeNH3+ ions are metastable species that are protected from decomposition by high energy barriers, whereas the HNeNH3+ ion is unstable because of its relatively small energy barrier for decomposition. The bonding nature of noble-gas atoms in HNgNH3+ was also analyzed using the atoms in molecules approach, natural energy decomposition analysis, and natural bond orbital analysis.
Palm vein recognition based on directional empirical mode decomposition
NASA Astrophysics Data System (ADS)
Lee, Jen-Chun; Chang, Chien-Ping; Chen, Wei-Kuei
2014-04-01
Directional empirical mode decomposition (DEMD) has recently been proposed to make empirical mode decomposition suitable for texture analysis. Using DEMD, samples are decomposed into a series of images, referred to as two-dimensional intrinsic mode functions (2-D IMFs), from fine to large scales. A DEMD-based two-directional linear discriminant analysis (2LDA) scheme for palm vein recognition is proposed. The proposed method progresses through three steps: (i) a set of 2-D IMF features of various scales and orientations is extracted using DEMD, (ii) the 2LDA method is then applied to reduce the dimensionality of the feature space in both the row and column directions, and (iii) the nearest neighbor classifier is used for classification. We also propose two strategies for using the set of 2-D IMF features: ensemble DEMD vein representation (EDVR) and multichannel DEMD vein representation (MDVR). In experiments using palm vein databases, the proposed MDVR-based 2LDA method achieved recognition accuracy of 99.73%, thereby demonstrating its feasibility for palm vein recognition.
An intelligent decomposition approach for efficient design of non-hierarchic systems
NASA Technical Reports Server (NTRS)
Bloebaum, Christina L.
1992-01-01
The design process associated with large engineering systems requires an initial decomposition of the complex systems into subsystem modules which are coupled through transference of output data. The implementation of such a decomposition approach assumes the ability exists to determine what subsystems and interactions exist and what order of execution will be imposed during the analysis process. Unfortunately, this is quite often an extremely complex task which may be beyond human ability to efficiently achieve. Further, in optimizing such a coupled system, it is essential to be able to determine which interactions figure prominently enough to significantly affect the accuracy of the optimal solution. The ability to determine 'weak' versus 'strong' coupling strengths would aid the designer in deciding which couplings could be permanently removed from consideration or which could be temporarily suspended so as to achieve computational savings with minimal loss in solution accuracy. An approach that uses normalized sensitivities to quantify coupling strengths is presented. The approach is applied to a coupled system composed of analysis equations for verification purposes.
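A minimal sketch of the normalized-sensitivity measure of coupling strength, using forward finite differences on a toy two-output analysis function (the function, step size, and threshold interpretation are illustrative assumptions, not the paper's formulation):

```python
import numpy as np

def coupling_strengths(analysis, x0, eps=1e-6):
    """Normalized sensitivities (x_j / y_i) * dy_i/dx_j, estimated by forward
    finite differences, as a rough measure of coupling strength between input
    variables x and subsystem outputs y (illustrative sketch only)."""
    y0 = np.asarray(analysis(x0), dtype=float)
    S = np.zeros((y0.size, len(x0)))
    for j, xj in enumerate(x0):
        x_pert = np.array(x0, dtype=float)
        h = eps * max(abs(xj), 1.0)
        x_pert[j] += h
        dy = (np.asarray(analysis(x_pert), dtype=float) - y0) / h
        S[:, j] = dy * xj / np.where(np.abs(y0) > 1e-12, y0, 1.0)
    return S

# Toy coupled "subsystems": y1 depends strongly on x1, weakly on x2; y2 on both.
def analysis(x):
    x1, x2 = x
    return [x1**2 + 0.01 * x2, x1 * x2]

S = coupling_strengths(analysis, np.array([2.0, 3.0]))
# Small |S[i, j]| suggests a coupling that might be suspended or removed with
# little loss of accuracy in the system-level optimization.
```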
Yang, Xuewei; Ma, Fuying; Yu, Hongbo; Zhang, Xiaoyu; Chen, Shulin
2011-02-01
The thermal decomposition of biopretreated corn stover at low temperature has been studied using Py-GC/MS analysis and thermogravimetric analysis with the distributed activation energy model (DAEM). Results showed that biopretreatment with the white-rot fungus Echinodontium taxodii 2538 can improve the low-temperature pyrolysis of biomass by increasing the pyrolysis products of cellulose and hemicellulose (furfural and sucrose increased up to 4.68-fold and 2.94-fold, respectively) and lignin (biphenyl and 3,7,11,15-tetramethyl-2-hexadecen-1-ol increased 2.45-fold and 4.22-fold, respectively). DAEM calculations showed that biopretreatment can decrease the activation energy over the low-temperature range, accelerate the reaction rate, and start the thermal decomposition at a lower temperature. ATR-FTIR results showed that the deconstruction of lignin and the decomposition of the main linkages between hemicellulose and lignin could contribute to the improvement of the pyrolysis at low temperature. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Alias, R.; Hamid, N. H.; Jaapar, J.; Musa, M.; Alwi, H.; Halim, K. H. Ku
2018-03-01
Thermal behavior and decomposition kinetics of shredded oil palm empty fruit bunches (SOPEFB) were investigated in this study using thermogravimetric analysis (TGA). The SOPEFB were analyzed at temperatures from 30 °C to 900 °C with a nitrogen gas flow of 50 ml/min. The SOPEFB were embedded with cobalt(II) nitrate solution at concentrations of 5%, 10%, 15% and 20%. The TG/DTG curves show the degradation behavior of SOPEFB followed by char production for each heating rate and each cobalt catalyst concentration. Thermal degradation occurred in three phases: a water-drying phase, a hemicellulose and cellulose decomposition phase, and a lignin decomposition phase. The kinetic equation with the relevant parameters described the activation energy required for thermal degradation in the temperature region of 200 °C to 350 °C. The activation energies (E) for different heating rates and different cobalt catalyst concentrations showed that the lowest E was required for SOPEFB with a 20% cobalt catalyst concentration.
Multi-Reader ROC studies with Split-Plot Designs: A Comparison of Statistical Methods
Obuchowski, Nancy A.; Gallas, Brandon D.; Hillis, Stephen L.
2012-01-01
Rationale and Objectives Multi-reader imaging trials often use a factorial design, where study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of the design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper we compare three methods of analysis for the split-plot design. Materials and Methods Three statistical methods are presented: Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean ANOVA approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power and confidence interval coverage of the three test statistics. Results The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% CIs fall close to the nominal coverage for small and large sample sizes. Conclusions The split-plot MRMC study design can be statistically efficient compared with the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rate, similar power, and nominal CI coverage, are available for this study design. PMID:23122570
Sun, Hongyan; Law, Chung K
2007-05-17
The reaction kinetics for the thermal decomposition of monomethylhydrazine (MMH) was studied with quantum Rice-Ramsperger-Kassel (QRRK) theory and a master equation analysis for pressure falloff. Thermochemical properties were determined by ab initio and density functional calculations. The entropies, S degrees (298.15 K), and heat capacities, Cp degrees (T) (0 < or = T/K < or = 1500), from vibrational, translational, and external rotational contributions were calculated using statistical mechanics based on the vibrational frequencies and structures obtained from the density functional study. Potential barriers for internal rotations were calculated at the B3LYP/6-311G(d,p) level, and hindered rotational contributions to S degrees (298.15 K) and Cp degrees (T) were calculated by solving the Schrödinger equation with free rotor wave functions, and the partition coefficients were treated by direct integration over energy levels of the internal rotation potentials. Enthalpies of formation, DeltafH degrees (298.15 K), for the parent MMH (CH3NHNH2) and its corresponding radicals CH3N*NH2, CH3NHN*H, and C*H2NHNH2 were determined to be 21.6, 48.5, 51.1, and 62.8 kcal mol(-1) by use of isodesmic reaction analysis and various ab initio methods. The kinetic analysis of the thermal decomposition, abstraction, and substitution reactions of MMH was performed at the CBS-QB3 level, with those of N-N and C-N bond scissions determined by high level CCSD(T)/6-311++G(3df,2p)//MPWB1K/6-31+G(d,p) calculations. Rate constants of thermally activated MMH to dissociation products were calculated as functions of pressure and temperature. An elementary reaction mechanism based on the calculated rate constants, thermochemical properties, and literature data was developed to model the experimental data on the overall MMH thermal decomposition rate. The reactions of N-N and C-N bond scission were found to be the major reaction paths for the modeling of MMH homogeneous decomposition at atmospheric conditions.
Tensorial extensions of independent component analysis for multisubject FMRI analysis.
Beckmann, C F; Smith, S M
2005-03-01
We discuss model-free analysis of multisubject or multisession FMRI data by extending the single-session probabilistic independent component analysis model (PICA; Beckmann and Smith, 2004. IEEE Trans. on Medical Imaging, 23 (2) 137-152) to higher dimensions. This results in a three-way decomposition that represents the different signals and artefacts present in the data in terms of their temporal, spatial, and subject-dependent variations. The technique is derived from and compared with parallel factor analysis (PARAFAC; Harshman and Lundy, 1984. In Research methods for multimode data analysis, chapter 5, pages 122-215. Praeger, New York). Using simulated data as well as data from multisession and multisubject FMRI studies we demonstrate that the tensor PICA approach is able to efficiently and accurately extract signals of interest in the spatial, temporal, and subject/session domain. The final decompositions improve upon PARAFAC results in terms of greater accuracy, reduced interference between the different estimated sources (reduced cross-talk), robustness (against deviations of the data from modeling assumptions and against overfitting), and computational speed. On real FMRI 'activation' data, the tensor PICA approach is able to extract plausible activation maps, time courses, and session/subject modes as well as provide a rich description of additional processes of interest such as image artefacts or secondary activation patterns. The resulting data decomposition gives simple and useful representations of multisubject/multisession FMRI data that can aid the interpretation and optimization of group FMRI studies beyond what can be achieved using model-based analysis techniques.
NASA Astrophysics Data System (ADS)
Zahedi, Ehsan; Hojamberdiev, Mirabbos
2017-08-01
The crystal structures, electro-optical properties, and charge carrier effective masses of Cs2TeW3O12 and Cs2TeMo3O12 with hexagonal, polar and non-centrosymmetric crystal structure were investigated based on density functional theory. Cs2TeW3O12 and Cs2TeMo3O12 are found to be indirect K (1/3, 1/3, 0) → G (0, 0, 0) band gap semiconductors (Eg > 3 eV) with small effective masses of photogenerated charge carriers. The mixing of octahedrally coordinated d° transition metal cations (W6+ and Mo6+) with the filled p orbitals of the oxygen ligands leads to the formation of some W5+/Mo5+ sites and splitting of d orbitals into the partially filled t2g (dxy, dyz, and dzx) orbitals and empty eg (dz2 and dx2-y2) orbitals. The top of the valence band is mainly contributed by the O 2p orbitals of the oxygen ligands mixed with the partially filled t2g orbitals of W 5d/Mo 4d, while the conduction band mainly consists of empty eg orbitals of W 5d/Mo 4d with a small contribution from O 2p orbitals. The dielectric functions exhibit slightly anisotropic behavior, and the optical absorption peaks of Cs2TeW3O12 and Cs2TeMo3O12 are attributed to the strong electronic transition O 2p → W 5d/Mo 4d within the octahedral units. According to the estimated valence band and conduction band edges, Cs2TeW3O12 and Cs2TeMo3O12 can be applied as visible-light-responsive photocatalysts for the decomposition of organic pollutants and dye molecules. Also, Cs2TeMo3O12 can be used in water splitting for hydrogen generation, but Cs2TeW3O12 requires further experimental studies to confirm its ability for water splitting.
NASA Astrophysics Data System (ADS)
Granovskii, Mikhail; Dincer, Ibrahim; Rosen, Marc A.; Pioro, Igor
Increases in the power generation efficiency of nuclear power plants (NPPs) are mainly limited by the permissible temperatures in nuclear reactors and the corresponding temperatures and pressures of the coolants in reactors. Coolant parameters are limited by the corrosion rates of materials and nuclear-reactor safety constraints. The advanced construction materials for the next generation of CANDU reactors, which employ supercritical water (SCW) as a coolant and heat carrier, permit improved “steam” parameters (outlet temperatures up to 625°C and pressures of about 25 MPa). An increase in the temperature of steam allows it to be utilized in thermochemical water splitting cycles to produce hydrogen. These methods are considered by many to be among the most efficient ways to produce hydrogen from water and to have advantages over traditional low-temperature water electrolysis. However, even lower temperature water splitting cycles (Cu-Cl, UT-3, etc.) require an intensive heat supply at temperatures higher than 550-600°C. A sufficient increase in the heat transfer from the nuclear reactor to a thermochemical water splitting cycle, without jeopardizing nuclear reactor safety, might be effectively achieved by application of a heat pump, which increases the temperature of the heat supplied by virtue of a cyclic process driven by mechanical or electrical work. Here, a high-temperature chemical heat pump, which employs the reversible catalytic methane conversion reaction, is proposed. The reaction shift from exothermic to endothermic and back is achieved by a change of the steam concentration in the reaction mixture. This heat pump, coupled with the second steam cycle of a SCW nuclear power generation plant on one side and a thermochemical water splitting cycle on the other, increases the temperature of the “nuclear” heat and, consequently, the intensity of heat transfer into the water splitting cycle. A comparative preliminary thermodynamic analysis is conducted of the combined system comprising a SCW nuclear power generation plant and a chemical heat pump, which provides high-temperature heat to a thermochemical water splitting cycle for hydrogen production. It is concluded that the proposed chemical heat pump permits the utilization efficiency of nuclear energy to be improved by at least 2% without jeopardizing nuclear reactor safety. Based on this analysis, further research appears to be merited on the proposed advanced design of a nuclear power generation plant combined with a chemical heat pump, and implementation in appropriate applications seems worthwhile.
Rodrigues, Anderson Messias; de Melo Teixeira, Marcus; de Hoog, G Sybren; Schubach, Tânia Maria Pacheco; Pereira, Sandro Antonio; Fernandes, Geisa Ferreira; Bezerra, Leila Maria Lopes; Felipe, Maria Sueli; de Camargo, Zoilo Pires
2013-01-01
Sporothrix schenckii, previously assumed to be the sole agent of human and animal sporotrichosis, is in fact a species complex. Recently recognized taxa include S. brasiliensis, S. globosa, S. mexicana, and S. luriei, in addition to S. schenckii sensu stricto. Over the last decades, large epidemics of sporotrichosis occurred in Brazil due to zoonotic transmission, and cats were pointed out as key susceptible hosts. In order to understand the eco-epidemiology of feline sporotrichosis and its role in human sporotrichosis a survey was conducted among symptomatic cats. Prevalence and phylogenetic relationships among feline Sporothrix species were investigated by reconstructing their phylogenetic origin using the calmodulin (CAL) and the translation elongation factor-1 alpha (EF1α) loci in strains originated from Rio de Janeiro (RJ, n = 15), Rio Grande do Sul (RS, n = 10), Paraná (PR, n = 4), São Paulo (SP, n =3) and Minas Gerais (MG, n = 1). Our results showed that S. brasiliensis is highly prevalent among cats (96.9%) with sporotrichosis, while S. schenckii was identified only once. The genotype of Sporothrix from cats was found identical to S. brasiliensis from human sources confirming that the disease is transmitted by cats. Sporothrix brasiliensis presented low genetic diversity compared to its sister taxon S. schenckii. No evidence of recombination in S. brasiliensis was found by split decomposition or PHI-test analysis, suggesting that S. brasiliensis is a clonal species. Strains recovered in states SP, MG and PR share the genotype of the RJ outbreak, different from the RS clone. The occurrence of separate genotypes among strains indicated that the Brazilian S. brasiliensis epidemic has at least two distinct sources. We suggest that cats represent a major host and the main source of cat and human S. brasiliensis infections in Brazil.
The Fourier decomposition method for nonlinear and non-stationary time series analysis.
Singh, Pushpendra; Joshi, Shiv Dutt; Patney, Rakesh Kumar; Saha, Kaushik
2017-03-01
For many decades, there has been a general perception in the literature that Fourier methods are not suitable for the analysis of nonlinear and non-stationary data. In this paper, we propose a novel and adaptive Fourier decomposition method (FDM), based on Fourier theory, and demonstrate its efficacy for the analysis of nonlinear and non-stationary time series. The proposed FDM decomposes any data into a small number of 'Fourier intrinsic band functions' (FIBFs). The FDM presents a generalized Fourier expansion with variable amplitudes and variable frequencies of a time series by the Fourier method itself. We also propose a zero-phase filter bank-based multivariate FDM (MFDM), built on the FDM, for the analysis of multivariate nonlinear and non-stationary time series, together with an algorithm to obtain the cut-off frequencies for the MFDM. The proposed MFDM generates a finite number of band-limited multivariate FIBFs (MFIBFs). The MFDM preserves some intrinsic physical properties of the multivariate data, such as scale alignment, trend and instantaneous frequency. The proposed methods provide a time-frequency-energy (TFE) distribution that reveals the intrinsic structure of the data. Numerical computations and simulations have been carried out and comparison is made with empirical mode decomposition algorithms.
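The flavor of a zero-phase Fourier filter bank can be sketched with an FFT-based band split whose band-limited components sum exactly back to the signal; the fixed dyadic cut-offs below are an illustrative simplification and not the adaptive cut-off selection of the FDM/MFDM.

```python
import numpy as np

def fourier_band_decomposition(x, n_bands=6):
    """Zero-phase Fourier filter bank: split the one-sided spectrum into
    contiguous (here dyadic) bands and invert each band separately.  The
    band-limited components sum back to the original signal, loosely in the
    spirit of the FIBFs; the cut-off choice here is fixed, not adaptive."""
    X = np.fft.rfft(x)
    n_bins = X.size
    # Dyadic band edges: the DC bin plus bins [1, 2), [2, 4), [4, 8), ...
    edges = [0] + [min(2**k, n_bins) for k in range(1, n_bands)] + [n_bins]
    components = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        Xk = np.zeros_like(X)
        Xk[lo:hi] = X[lo:hi]
        components.append(np.fft.irfft(Xk, n=x.size))
    return components

t = np.linspace(0.0, 1.0, 1024, endpoint=False)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
fibfs = fourier_band_decomposition(x)
assert np.allclose(np.sum(fibfs, axis=0), x)          # exact reconstruction
```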