Real-time simulation of biological soft tissues: a PGD approach.
Niroomandi, S; González, D; Alfaro, I; Bordeu, F; Leygue, A; Cueto, E; Chinesta, F
2013-05-01
We introduce here a novel approach for the numerical simulation of nonlinear, hyperelastic soft tissues at the kilohertz feedback rates necessary for haptic rendering. This approach is based upon the use of proper generalized decomposition (PGD) techniques, a generalization of proper orthogonal decomposition. Proper generalized decomposition can be considered a means of a priori model order reduction and provides a physics-based meta-model without the need for prior computer experiments. The suggested strategy is thus composed of an offline phase, in which a general meta-model is computed, and an online evaluation phase, in which results are obtained in real time. Results are provided that show the potential of the proposed technique, together with benchmark tests that show the accuracy of the method. Copyright © 2013 John Wiley & Sons, Ltd.
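The offline/online split described above can be illustrated with a minimal separated-representation sketch, not the authors' implementation: a 2D field is greedily approximated as a sum of products of one-dimensional functions via an alternating fixed point (the PGD enrichment idea), so that online evaluation reduces to a short sum of products. The sample field and all names here are illustrative.

```python
import numpy as np

# Minimal sketch of a PGD-style separated representation (illustrative only):
# greedily approximate a 2D field U[x, y] as a sum of rank-1 products
# F_i(x) * G_i(y), mimicking the offline enrichment / cheap online split.

def separated_approximation(U, n_modes=3, n_fixed_point=50):
    F_list, G_list = [], []
    R = U.copy()                       # residual to be enriched
    for _ in range(n_modes):
        F = np.ones(U.shape[0])        # initial guess for the new pair
        for _ in range(n_fixed_point): # alternating (fixed-point) updates
            G = R.T @ F / (F @ F)      # best G for frozen F (least squares)
            F = R @ G / (G @ G)        # best F for frozen G
        F_list.append(F); G_list.append(G)
        R = R - np.outer(F, G)         # enrich: remove captured content
    return F_list, G_list

# "Offline": build the separated representation of a sample field.
x = np.linspace(0.0, 1.0, 60)
y = np.linspace(0.0, 1.0, 80)
U = np.outer(np.sin(np.pi * x), np.cos(np.pi * y)) + 0.3 * np.outer(x, y**2)
F_list, G_list = separated_approximation(U, n_modes=2)

# "Online": evaluating the meta-model is just a short sum of products.
U_approx = sum(np.outer(F, G) for F, G in zip(F_list, G_list))
print(np.linalg.norm(U - U_approx) / np.linalg.norm(U))  # small residual
```

Since the sample field is an exact sum of two rank-1 products, two enrichment steps recover it almost exactly; real PGD solvers enrich against a weak form of the governing equations rather than against stored data.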
NASA Technical Reports Server (NTRS)
Ukeiley, L.; Varghese, M.; Glauser, M.; Valentine, D.
1991-01-01
A 'lobed mixer' device that enhances mixing through secondary flows and streamwise vorticity is presently studied within the framework of multifractal-measures theory, in order to deepen understanding of velocity time trace data gathered on its operation. Proper orthogonal decomposition-based knowledge of coherent structures has been applied to obtain the generalized fractal dimensions and multifractal spectrum of several proper eigenmodes for data samples of the velocity time traces; this constitutes a marked departure from previous multifractal theory applications to self-similar cascades. In certain cases, a single dimension may suffice to capture the entire spectrum of scaling exponents for the velocity time trace.
NASA Astrophysics Data System (ADS)
Sibileau, Alberto; Auricchio, Ferdinando; Morganti, Simone; Díez, Pedro
2018-01-01
Architectured materials (or metamaterials) are constituted by a unit cell with a complex structural design repeated periodically, forming a bulk material with emergent mechanical properties. One may obtain specific macro-scale (or bulk) properties in the resulting architectured material by properly designing the unit cell. Typically, this is stated as an optimal design problem in which the parameters describing the shape and mechanical properties of the unit cell are selected in order to produce the desired bulk characteristics. This is especially pertinent given the ease of manufacturing these complex structures with 3D printers. The proper generalized decomposition provides explicit parametric solutions of parametric PDEs. Here, the same ideas are used to obtain parametric solutions of the algebraic equations arising from lattice structural models. Once the explicit parametric solution is available, the optimal design problem reduces to a simple post-process. The same strategy is applied in the numerical illustrations, first to a unit cell (then homogenized with periodicity conditions) and, in a second phase, to the complete structure of a lattice material specimen.
Characteristic-eddy decomposition of turbulence in a channel
NASA Technical Reports Server (NTRS)
Moin, Parviz; Moser, Robert D.
1989-01-01
Lumley's proper orthogonal decomposition technique is applied to the turbulent flow in a channel. Coherent structures are extracted by decomposing the velocity field into characteristic eddies with random coefficients. A generalization of the shot-noise expansion is used to determine the characteristic eddies in homogeneous spatial directions. Three different techniques are used to determine the phases of the Fourier coefficients in the expansion: (1) one based on the bispectrum, (2) a spatial compactness requirement, and (3) a functional continuity argument. Similar results are found from each of these techniques.
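The decomposition step underlying this kind of analysis can be sketched with the standard snapshot-POD recipe; the synthetic data below are illustrative, not the channel-flow computation described above.

```python
import numpy as np

# Illustrative snapshot POD: columns of X are velocity snapshots
# (each snapshot flattened to a vector); modes are left singular vectors
# and the random (temporal) coefficients come from S and Vt.

rng = np.random.default_rng(0)
n_points, n_snapshots = 500, 40

# Synthetic data: two coherent "structures" plus small-scale noise.
s1 = np.sin(np.linspace(0, 2 * np.pi, n_points))
s2 = np.cos(np.linspace(0, 4 * np.pi, n_points))
a1 = rng.standard_normal(n_snapshots)
a2 = rng.standard_normal(n_snapshots)
X = (np.outer(s1, 3.0 * a1) + np.outer(s2, 1.0 * a2)
     + 0.01 * rng.standard_normal((n_points, n_snapshots)))

X_mean = X.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(X - X_mean, full_matrices=False)

energy = S**2 / np.sum(S**2)   # fraction of fluctuation energy per mode
coeffs = np.diag(S) @ Vt       # random coefficients of the characteristic eddies
print(energy[:4])              # first two modes dominate
```

The two planted structures are recovered as the two leading modes; in homogeneous directions the paper's shot-noise machinery replaces this plain SVD.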
Kim, Il Kwang; Lee, Soo Il
2016-05-01
The modal decomposition of tapping mode atomic force microscopy microcantilevers in liquid environments was studied experimentally. Microcantilevers with different lengths and stiffnesses and two sample surfaces with different elastic moduli were used in the experiment. The response modes of the microcantilevers were extracted as proper orthogonal modes through proper orthogonal decomposition. Smooth orthogonal decomposition was used to estimate the resonance frequency directly. The effects of the tapping setpoint and the elastic modulus of the sample under test were examined in terms of their multi-mode responses with proper orthogonal modes, proper orthogonal values, smooth orthogonal modes and smooth orthogonal values. Regardless of the stiffness of the microcantilever under test, the first mode was dominant in tapping mode atomic force microscopy under normal operating conditions. However, at lower tapping setpoints, the flexible microcantilever showed modal distortion and noise near the tip when tapping on a hard sample. The stiff microcantilever had a higher mode effect on a soft sample at lower tapping setpoints. Modal decomposition for tapping mode atomic force microscopy can thus be used to estimate the characteristics of samples in liquid environments.
Subgrid-scale physical parameterization in atmospheric modeling: How can we make it consistent?
NASA Astrophysics Data System (ADS)
Yano, Jun-Ichi
2016-07-01
Approaches to subgrid-scale physical parameterization in atmospheric modeling are reviewed by taking turbulent combustion flow research as a point of reference. Three major general approaches are considered for its consistent development: moment, distribution density function (DDF), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows both in geophysics and engineering. The DDF (commonly called PDF) approach is intuitively appealing, as it deals with a distribution of variables in the subgrid scale in a more direct manner. Mode decomposition was originally applied by Aubry et al. (1988 J. Fluid Mech. 192 115-73) in the context of wall boundary-layer turbulence. It is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (empirical orthogonal functions) as the mode-decomposition basis. However, the methodology can easily be generalized to any decomposition basis. Among those, the wavelet is a particularly attractive alternative. The mass-flux formulation currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally constant modes for the expansion basis. This perspective further identifies a very basic but also general geometrical constraint imposed on the mass-flux formulation: the segmentally constant approximation. Mode decomposition can, furthermore, be understood by analogy with a Galerkin method in numerical modeling. This analogy suggests that subgrid parameterization may be re-interpreted as a type of mesh refinement in numerical modeling. A link between the subgrid parameterization and downscaling problems is also pointed out.
Limited-memory adaptive snapshot selection for proper orthogonal decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill
2015-04-02
Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers' test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
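The single-pass, limited-memory idea can be sketched as follows; this is a generic textbook-style incremental SVD update, not the paper's algorithm or its error estimator, and all names are illustrative.

```python
import numpy as np

# Hedged sketch of a single-pass, limited-memory incremental SVD:
# snapshots arrive one at a time and only a rank-r factorization is kept,
# so the full snapshot matrix is never stored.

def incremental_svd(snapshots, rank):
    U = S = None
    for x in snapshots:
        x = x.reshape(-1, 1)
        if U is None:
            s = np.linalg.norm(x)
            U, S = x / s, np.array([s])
            continue
        proj = U.T @ x                 # component inside current subspace
        resid = x - U @ proj
        r_norm = np.linalg.norm(resid)
        # Build the small "broken arrowhead" matrix and re-diagonalize it.
        K = np.block([[np.diag(S), proj],
                      [np.zeros((1, S.size)), np.array([[r_norm]])]])
        Uk, Sk, _ = np.linalg.svd(K)
        J = np.hstack([U, (resid / r_norm) if r_norm > 1e-12 else resid])
        U, S = (J @ Uk)[:, :rank], Sk[:rank]   # truncate to the memory budget
    return U, S

rng = np.random.default_rng(1)
base = rng.standard_normal((200, 3))           # three true directions
snaps = [base @ rng.standard_normal(3) for _ in range(50)]
U, S = incremental_svd(snaps, rank=3)

X = np.stack(snaps, axis=1)
print(np.linalg.norm(X - U @ (U.T @ X)) / np.linalg.norm(X))  # tiny error
```

Because the data are exactly rank 3, the rank-3 running factorization reconstructs every snapshot; in practice the truncation rank (or a tolerance) is the knob that trades accuracy for memory.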
NASA Astrophysics Data System (ADS)
Sancarlos-González, Abel; Pineda-Sanchez, Manuel; Puche-Panadero, Ruben; Sapena-Bano, Angel; Riera-Guasp, Martin; Martinez-Roman, Javier; Perez-Cruz, Juan; Roger-Folch, Jose
2017-12-01
AC lines of industrial busbar systems are usually built using conductors with rectangular cross sections, where each phase can have several parallel conductors to carry high currents. The current density in a rectangular conductor under sinusoidal conditions is not uniform: owing to the skin and proximity effects, it depends on the frequency, on the conductor shape, and on the distance between conductors. Unlike the case of circular conductors, there are no closed-form analytical formulas for the frequency-dependent impedance of conductors with rectangular cross section; one must resort to numerical simulations to obtain the resistance and the inductance of the phases, one simulation for each desired frequency and for each distance between the phases' conductors. In contrast, the use of the parametric proper generalized decomposition (PGD) allows one to obtain the frequency-dependent impedance of an AC line for a wide range of frequencies and distances between the phases' conductors by solving a single simulation in a 4D domain (spatial coordinates x and y, the frequency, and the separation between conductors). In this way, a general "virtual chart" solution is obtained, which contains the solution for any frequency and any separation of the conductors and stores it in a compact separated-representation form that can easily be embedded in more general software for the design of electrical installations. The approach presented in this work for rectangular conductors can easily be extended to conductors with an arbitrary shape.
Wind Farm Flow Modeling using an Input-Output Reduced-Order Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Annoni, Jennifer; Gebraad, Pieter; Seiler, Peter
Wind turbines in a wind farm operate individually to maximize their own power, regardless of the impact of aerodynamic interactions on neighboring turbines. There is the potential to increase power and reduce overall structural loads by properly coordinating the turbines. To perform control design and analysis, a model needs to be of low computational cost yet retain the necessary dynamics seen in high-fidelity models. The objective of this work is to obtain a reduced-order model that represents the full-order flow computed using a high-fidelity model. A variety of methods, including proper orthogonal decomposition and dynamic mode decomposition, can be used to extract the dominant flow structures and obtain a reduced-order model. In this paper, we combine proper orthogonal decomposition with a system identification technique to produce an input-output reduced-order model. This technique is used to construct a reduced-order model of the flow within a two-turbine array computed using a large-eddy simulation.
Differential Decomposition of Bacterial and Viral Fecal Indicators in Common Human Pollution Types
Understanding the decomposition of microorganisms associated with different human fecal pollution types is necessary for proper implementation of many water quality management practices, as well as for predicting associated public health risks. Here, the decomposition of select cultiva...
Blumthaler, Ingrid; Oberst, Ulrich
2012-03-01
Control design belongs to the most important and difficult tasks of control engineering and has therefore been treated by many prominent researchers and in many textbooks, the systems being generally described by their transfer matrices or by Rosenbrock equations and more recently also as behaviors. Our approach to controller design uses, in addition to the ideas of our predecessors on coprime factorizations of transfer matrices and on the parametrization of stabilizing compensators, a new mathematical technique which enables simpler design and also new theorems in spite of the many outstanding results of the literature: (1) We use an injective cogenerator signal module ℱ over the polynomial algebra [Formula: see text] (F an infinite field), a saturated multiplicatively closed set T of stable polynomials and its quotient ring [Formula: see text] of stable rational functions. This enables the simultaneous treatment of continuous and discrete systems and of all notions of stability, called T-stability. We investigate stabilizing control design by output feedback of input/output (IO) behaviors and study the full feedback IO behavior, especially its autonomous part and not only its transfer matrix. (2) The new technique is characterized by the permanent application of the injective cogenerator quotient signal module [Formula: see text] and of quotient behaviors [Formula: see text] of [Formula: see text]-behaviors B. (3) For the control tasks of tracking, disturbance rejection, model matching, and decoupling and not necessarily proper plants we derive necessary and sufficient conditions for the existence of proper stabilizing compensators with proper and stable closed loop behaviors, parametrize all such compensators as IO behaviors and not only their transfer matrices and give new algorithms for their construction. Moreover we solve the problem of pole placement or spectral assignability for the complete feedback behavior. 
The properness of the full feedback behavior ensures the absence of impulsive solutions in the continuous case, and that of the compensator enables its realization by Kalman state space equations or elementary building blocks. We note that every behavior admits an IO decomposition with proper transfer matrix, but that most of these decompositions do not have this property, and therefore we do not assume the properness of the plant. (4) The new technique can also be applied to more general control interconnections according to Willems, in particular to two-parameter feedback compensators and to the recent tracking framework of Fiaz/Takaba/Trentelman. In contrast to these authors, however, we pay special attention to the properness of all constructed transfer matrices which requires more subtle algorithms.
Analysis of Self-Excited Combustion Instabilities Using Decomposition Techniques
2016-07-05
DOI: 10.2514/1.J054557
Proper orthogonal decomposition and dynamic mode decomposition are evaluated for the study of self-excited longitudinal combustion instabilities in laboratory-scaled single-element gas turbine and rocket... In addition, we also evaluate the capabilities of the methods to deal with data sets of different spatial extents and temporal resolution.
NASA Astrophysics Data System (ADS)
Cafiero, M.; Lloberas-Valls, O.; Cante, J.; Oliver, J.
2016-04-01
A domain decomposition technique is proposed which is capable of properly connecting arbitrary non-conforming interfaces. The strategy essentially consists in considering a fictitious zero-width interface between the non-matching meshes which is discretized using a Delaunay triangulation. Continuity is satisfied across domains through normal and tangential stresses provided by the discretized interface and inserted in the formulation in the form of Lagrange multipliers. The final structure of the global system of equations resembles the dual assembly of substructures where the Lagrange multipliers are employed to nullify the gap between domains. A new approach to handle floating subdomains is outlined which can be implemented without significantly altering the structure of standard industrial finite element codes. The effectiveness of the developed algorithm is demonstrated through a patch test example and a number of tests that highlight the accuracy of the methodology and independence of the results with respect to the framework parameters. Considering its high degree of flexibility and non-intrusive character, the proposed domain decomposition framework is regarded as an attractive alternative to other established techniques such as the mortar approach.
Observations on the Proper Orthogonal Decomposition
NASA Technical Reports Server (NTRS)
Berkooz, Gal
1992-01-01
The Proper Orthogonal Decomposition (P.O.D.), also known as the Karhunen-Loeve expansion, is a procedure for decomposing a stochastic field in an L(2) optimal sense. It is used in diverse disciplines, from image processing to turbulence. Recently the P.O.D. has been receiving much attention as a tool for studying the dynamics of systems in infinite dimensional space. This paper reviews the mathematical fundamentals of this theory. Also included are results on the span of the eigenfunction basis, a geometric corollary due to Chebyshev's inequality, and a relation between P.O.D. symmetry and ergodicity.
Constrained reduced-order models based on proper orthogonal decomposition
Reddy, Sohail R.; Freno, Brian Andrew; Cizmas, Paul G. A.; ...
2017-04-09
A novel approach is presented to constrain reduced-order models (ROM) based on proper orthogonal decomposition (POD). The Karush–Kuhn–Tucker (KKT) conditions were applied to the traditional reduced-order model to constrain the solution to user-defined bounds. The constrained reduced-order model (C-ROM) was applied and validated against the analytical solution to the first-order wave equation. C-ROM was also applied to the analysis of fluidized beds. Lastly, it was shown that the ROM and C-ROM produced accurate results and that C-ROM was less sensitive to error propagation through time than the ROM.
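The constraint idea can be sketched with the classical KKT treatment of an equality-constrained least-squares problem; this is a generic illustration of KKT conditions, not the C-ROM formulation itself (which handles bound constraints on a POD-based ROM), and all names are illustrative.

```python
import numpy as np

# Hedged sketch of the KKT idea used to constrain a ROM solution: solve
# min ||A x - b||^2 subject to C x = d by forming the KKT saddle-point
# system; the Lagrange multipliers enforce the user-defined constraint.

def kkt_constrained_lstsq(A, b, C, d):
    n, m = A.shape[1], C.shape[0]
    # Stationarity: 2 A^T A x + C^T lam = 2 A^T b;  feasibility: C x = d.
    KKT = np.block([[2 * A.T @ A, C.T],
                    [C, np.zeros((m, m))]])
    rhs = np.concatenate([2 * A.T @ b, d])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n], sol[n:]            # primal solution, multipliers

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 4))
b = rng.standard_normal(20)
C = np.array([[1.0, 1.0, 1.0, 1.0]])  # e.g. fix the sum of modal amplitudes
d = np.array([1.0])

x, lam = kkt_constrained_lstsq(A, b, C, d)
print(C @ x)   # constraint satisfied
```

Inequality (bound) constraints add complementarity conditions on the multipliers, which is why active-set or interior-point machinery is needed in the general case.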
A Generalized Framework for Reduced-Order Modeling of a Wind Turbine Wake
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamilton, Nicholas; Viggiano, Bianca; Calaf, Marc
A reduced-order model for a wind turbine wake is sought from large eddy simulation data. Fluctuating velocity fields are combined in the correlation tensor to form the kernel of the proper orthogonal decomposition (POD). The POD modes resulting from the decomposition represent the spatially coherent turbulence structures in the wind turbine wake; the eigenvalues delineate the relative amount of turbulent kinetic energy associated with each mode. Back-projecting the POD modes onto the velocity snapshots produces dynamic coefficients that express the amplitude of each mode in time. A reduced-order model of the wind turbine wake (wakeROM) is defined through a series of polynomial parameters that quantify mode interaction and the evolution of each POD mode coefficient. The resulting system of ordinary differential equations models the wind turbine wake composed only of the large-scale turbulent dynamics identified by the POD. Tikhonov regularization is used to recalibrate the dynamical system by adding constraints to the minimization that seeks the polynomial parameters, reducing error in the modeled mode coefficients. The wakeROM is periodically reinitialized with new initial conditions found by relating the incoming turbulent velocity to the POD mode coefficients through a series of open-loop transfer functions. The wakeROM reproduces mode coefficients to within 25.2%, quantified through the normalized root-mean-square error. A high-level view of the modeling approach is provided as a platform to discuss promising research directions, alternative processes that could benefit stability and efficiency, and desired extensions of the wakeROM.
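The Tikhonov recalibration step can be sketched on a toy regression; the scalar ODE, the parameter values, and the noise level below are illustrative stand-ins for the wakeROM's polynomial parameter fit, not the paper's actual system.

```python
import numpy as np

# Hedged sketch of Tikhonov-regularized calibration: fit the parameters
# theta of a polynomial mode-coefficient model
#   da/dt = theta_0 + theta_1 * a + theta_2 * a**2
# with a penalty alpha * ||theta||^2 that damps ill-conditioned directions
# of the regression (the role regularization plays in the wakeROM fit).

def tikhonov_fit(X, y, alpha):
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n), X.T @ y)

rng = np.random.default_rng(3)
a = rng.standard_normal(200)                   # a POD mode coefficient in time
dadt = 0.5 - 0.2 * a + 0.1 * a**2 + 0.01 * rng.standard_normal(200)

X = np.column_stack([np.ones_like(a), a, a**2])
theta = tikhonov_fit(X, dadt, alpha=1e-3)
print(theta)    # close to the generating values (0.5, -0.2, 0.1)
```

With a well-conditioned design matrix the penalty barely biases the fit; its value shows when mode interactions make the regression nearly collinear, where the unpenalized solution blows up.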
NASA Astrophysics Data System (ADS)
Chong, Song-Ho; Ham, Sihyun
2011-07-01
We report the development of an atomic decomposition method for the protein solvation free energy in water, which ascribes global change in the solvation free energy to local changes in protein conformation as well as in hydration structure. So far, empirical decomposition analyses based on simple continuum solvation models have prevailed in the study of protein-protein interactions, protein-ligand interactions, and the development of scoring functions for computer-aided drug design. However, the use of a continuum solvation model suffers from serious drawbacks, since it yields a protein free energy landscape quite different from that of the explicit solvent model and does not properly account for the non-polar hydrophobic effects that play a crucial role in biological processes in water. Herein, we develop an exact and general decomposition method for the solvation free energy that overcomes these hindrances. We then apply this method to elucidate the molecular origin of the solvation free energy change upon the conformational transitions of the 42-residue amyloid-beta protein (Aβ42) in water, whose aggregation has been implicated as a primary cause of Alzheimer's disease. We address why Aβ42 protein exhibits a great propensity to aggregate when transferred from an organic phase to an aqueous phase.
Galerkin Method for Nonlinear Dynamics
NASA Astrophysics Data System (ADS)
Noack, Bernd R.; Schlegel, Michael; Morzynski, Marek; Tadmor, Gilead
A Galerkin method is presented for control-oriented reduced-order models (ROM). This method generalizes linear approaches elaborated by M. Morzyński et al. for the nonlinear Navier-Stokes equation. These ROM are used as plants for control design in the chapters by G. Tadmor et al., S. Siegel, and R. King in this volume. Focus is placed on empirical ROM which compress flow data in the proper orthogonal decomposition (POD). The chapter provides a complete description of the construction of straightforward ROM as well as the physical understanding and tests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riess, R.
Chosen for this description of the selected Kraftwerk Union (KWU) pressurized water reactor units were Obrigheim (KWO, 345 MW(e)), Stade (KKS, 662 MW(e)), Borselle (KCB, 477 MW(e)), and Biblis (KWB-A, 1204 MW(e)). The experience at these plants shows that, with a special startup procedure and proper chemical control of the primary heat transport system (which influences general corrosion, selective types of corrosion, corrosion product activity transport and the resulting contamination, and radiation-induced decomposition), KWU units have no basic problems.
Emergent causality and the N-photon scattering matrix in waveguide QED
NASA Astrophysics Data System (ADS)
Sánchez-Burillo, E.; Cadarso, A.; Martín-Moreno, L.; García-Ripoll, J. J.; Zueco, D.
2018-01-01
In this work we discuss the emergence of approximate causality in a general setup from waveguide QED—i.e. a one-dimensional propagating field interacting with a scatterer. We prove that this emergent causality translates into a structure for the N-photon scattering matrix. Our work builds on the derivation of a Lieb-Robinson-type bound for continuous models and for all coupling strengths, as well as on several intermediate results, of which we highlight: (i) the asymptotic independence of space-like separated wave packets, (ii) the proper definition of input and output scattering states, and (iii) the characterization of the ground state and correlations in the model. We illustrate our formal results by analyzing the two-photon scattering from a quantum impurity in the ultrastrong coupling regime, verifying the cluster decomposition and ground-state nature. Besides, we generalize the cluster decomposition if inelastic or Raman scattering occurs, finding the structure of the S-matrix in momentum space for linear dispersion relations. In this case, we compute the decay of the fluorescence (photon-photon correlations) caused by this S-matrix.
NASA Astrophysics Data System (ADS)
Chen, Zhen; Chan, Tommy H. T.
2017-08-01
This paper proposes a new methodology for moving force identification (MFI) from the responses of a bridge deck. Based on the existing time domain method (TDM), the MFI problem eventually becomes one of solving the linear algebraic equation Ax = b. The vector b is usually contaminated by an unknown error e arising from measurement error; this error vector e is often called "noise". Because of the ill-posedness inherent in the inverse problem, the identified force is sensitive to the noise e. The proposed truncated generalized singular value decomposition (TGSVD) method aims at obtaining an acceptable solution that is less sensitive to noise perturbations in the ill-posed problem. The illustrated results show that the TGSVD has many advantages over the TDM, such as higher precision, better adaptability, and noise immunity. In addition, choosing a proper regularization matrix L and a truncation parameter k is very useful for improving the identification accuracy and solving the ill-posed problems that arise when identifying moving forces on a bridge.
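The role of the truncation parameter k can be sketched in the special case L = I, where the TGSVD reduces to the ordinary truncated SVD; the constructed test problem below is illustrative, not the bridge model.

```python
import numpy as np

# Hedged sketch of truncation as regularization (with L = I the TGSVD
# reduces to the truncated SVD): small singular values amplify the noise
# e in b = A x_true + e, so only the first k singular triplets are kept.

def tsvd_solve(A, b, k):
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / S[:k])

rng = np.random.default_rng(4)
n = 20
# Controlled ill-posed problem with a known, rapidly decaying spectrum.
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.arange(n)              # singular values 1, 1e-1, ..., 1e-19
A = Q1 @ np.diag(s) @ Q2.T
x_true = Q2[:, 0] + 0.5 * Q2[:, 1]     # lies in the well-resolved subspace
b = A @ x_true + 1e-6 * rng.standard_normal(n)

x_naive = np.linalg.solve(A, b)        # noise-dominated solution
x_tsvd = tsvd_solve(A, b, k=5)         # truncation parameter k
print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_tsvd - x_true))
```

The naive solve divides noise components by singular values as small as machine precision, so its error is many orders of magnitude larger; the general TGSVD plays the same game in the generalized singular basis of the pair (A, L).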
NASA Astrophysics Data System (ADS)
Vu, Trung-Thanh; Guibert, Philippe
2012-06-01
This paper investigates cycle-to-cycle variations of the non-reacting flow inside a motored single-cylinder transparent engine in order to judge the effect of the insertion amplitude of a control device able to displace linearly inside the inlet pipe. Three positions, corresponding to three insertion amplitudes, are implemented to modify the main aerodynamic properties from one cycle to the next. A large cycle-resolved database of two-dimensional particle image velocimetry (PIV) velocity fields is post-processed to discriminate specific contributions of the fluctuating flow. We performed a multiple-snapshot proper orthogonal decomposition (POD) in the tumble plane of a pent-roof SI engine. The analytical process consists of a triple decomposition of each instantaneous velocity field into three distinct parts: a mean part, a coherent part, and a turbulent part. The 3rd- and 4th-order centered statistical moments of the POD-filtered velocity field, as well as the probability density function of the PIV realizations, proved that the POD extracts different behaviors of the flow. In particular, the cyclic variability is assumed to be contained essentially in the coherent part. Thus, the cycle-to-cycle variations of the engine flows might be obtained from the corresponding POD temporal coefficients. It has been shown that the in-cylinder aerodynamic dispersions can be adapted and monitored by controlling the insertion depth of the control instrument inside the inlet pipe.
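The triple decomposition can be sketched on synthetic one-component data; the fields, the cycle-variation model, and the single-mode cut-off below are illustrative choices, not the paper's engine data or its mode-selection criterion.

```python
import numpy as np

# Hedged sketch of the triple decomposition: each snapshot is split into
# ensemble mean + coherent part (leading POD modes) + turbulent remainder.

rng = np.random.default_rng(5)
n_pts, n_cycles = 300, 120
phase = rng.uniform(0, 2 * np.pi, n_cycles)           # cycle-to-cycle variation
coherent_shape = np.sin(np.linspace(0, 2 * np.pi, n_pts))
U = (1.0                                              # mean flow
     + np.outer(coherent_shape, np.cos(phase))        # coherent, cycle-varying
     + 0.05 * rng.standard_normal((n_pts, n_cycles))) # small-scale turbulence

u_mean = U.mean(axis=1, keepdims=True)
Phi, S, Vt = np.linalg.svd(U - u_mean, full_matrices=False)

n_coh = 1                                             # illustrative cut-off
u_coherent = Phi[:, :n_coh] @ np.diag(S[:n_coh]) @ Vt[:n_coh]
u_turb = U - u_mean - u_coherent
print(np.std(u_turb))   # close to the imposed 0.05 turbulence level
```

The POD temporal coefficients (rows of `Vt` scaled by `S`) carry the cycle-to-cycle variability of the coherent part, which is what the paper proposes to monitor.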
Simulation of the microwave heating of a thin multilayered composite material: A parameter analysis
NASA Astrophysics Data System (ADS)
Tertrais, Hermine; Barasinski, Anaïs; Chinesta, Francisco
2018-05-01
Microwave (MW) technology relies on volumetric heating. Thermal energy is transferred to the material, which absorbs it at specific frequencies. The complex physics involved in this process is far from being understood, which is why a simulation tool has been developed to solve the electromagnetic and thermal equations in a material as complex as a multilayered composite part. The code is based on the in-plane-out-of-plane separated representation within the Proper Generalized Decomposition framework. To improve knowledge of the process, a parameter study is carried out in this paper.
Analyzing Transient Turbulence in a Stenosed Carotid Artery by Proper Orthogonal Decomposition
NASA Astrophysics Data System (ADS)
Grinberg, Leopold; Yakhot, Alexander; Karniadakis, George
2009-11-01
High-resolution 3D simulations (involving 100M degrees of freedom) were employed to study transient turbulent flow in a carotid arterial bifurcation with a stenosed internal carotid artery (ICA). In the performed simulations, an intermittent (in space and time) laminar-turbulent-laminar regime was observed. The simulations reveal the mechanism of the onset of turbulent flow in the stenosed ICA, where the narrowing in the artery generates a strong jet flow. Time- and space-window Proper Orthogonal Decomposition (POD) was applied to quantify the different flow regimes in the occluded artery. A simplified version of the POD analysis that utilizes 2D slices only, more appropriate in the clinical setting, was also investigated.
Interior sound field control using generalized singular value decomposition in the frequency domain.
Pasco, Yann; Gauthier, Philippe-Aubert; Berry, Alain; Moreau, Stéphane
2017-01-01
The problem of controlling a sound field inside a region surrounded by acoustic control sources is considered. Inspired by the Kirchhoff-Helmholtz integral, the use of double-layer source arrays allows such control and, by approximating the sources as monopole and radial dipole transducers, avoids modification of the external sound field by the control sources. However, the practical implementation of the Kirchhoff-Helmholtz integral in physical space leads to large numbers of control sources and error sensors, along with excessive controller complexity in three dimensions. The present study investigates the potential of the Generalized Singular Value Decomposition (GSVD) to reduce the controller complexity and separate the effects of the control sources on the interior and exterior sound fields, respectively. A proper truncation of the singular basis provided by the GSVD factorization is shown to lead to effective cancellation of the interior sound field at frequencies below the spatial Nyquist frequency of the control source array while leaving the exterior sound field almost unchanged. Proofs of concept are provided for interior problems by simulations in a free-field scenario with circular arrays and in a reflective environment with square arrays.
Fast model updating coupling Bayesian inference and PGD model reduction
NASA Astrophysics Data System (ADS)
Rubio, Paul-Baptiste; Louf, François; Chamoin, Ludovic
2018-04-01
The paper focuses on a coupled Bayesian-Proper Generalized Decomposition (PGD) approach for the real-time identification and updating of numerical models. The purpose is to use the most general case of Bayesian inference theory in order to address inverse problems and to deal with different sources of uncertainty (measurement and model errors, stochastic parameters). To do so at a reasonable CPU cost, the idea is to replace the direct model invoked in Monte-Carlo sampling with a PGD reduced model and, in some cases, to compute the probability density functions directly from the resulting analytical formulation. This procedure is first applied to a welding control example with the updating of a deterministic parameter. In the second application, the identification of a stochastic parameter is studied through a glued assembly example.
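The offline-surrogate / online-inference pattern can be sketched with a toy scalar problem; the one-parameter model, the polynomial surrogate (standing in for the PGD reduced model), and the flat prior are all illustrative assumptions.

```python
import numpy as np

# Hedged sketch of the Bayesian-plus-surrogate idea: the "direct model"
# g(p) is expensive, so it is sampled offline at a few parameter values
# and replaced online by a cheap interpolant (standing in for the PGD
# reduced model). The posterior of p is then evaluated densely at
# negligible cost.

def direct_model(p):                     # pretend each call is costly
    return p + 0.3 * np.sin(p)           # monotone toy observable

# Offline: sample the model and build the surrogate once.
p_nodes = np.linspace(0.0, 3.0, 12)
g_nodes = np.array([direct_model(p) for p in p_nodes])
surrogate = np.polynomial.Polynomial.fit(p_nodes, g_nodes, deg=6)

# Online: Bayesian update for a measurement y = g(p_true) + noise.
rng = np.random.default_rng(6)
p_true, sigma = 1.3, 0.05
y = direct_model(p_true) + sigma * rng.standard_normal()

p_grid = np.linspace(0.0, 3.0, 2000)
log_post = -0.5 * ((y - surrogate(p_grid)) / sigma) ** 2   # flat prior
post = np.exp(log_post - log_post.max())
post /= post.sum()                       # discrete normalization
p_map = p_grid[np.argmax(post)]
print(p_map)   # near p_true = 1.3
```

Here 12 expensive model calls suffice; every posterior evaluation afterwards touches only the surrogate, which is the source of the real-time capability claimed above.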
NASA Astrophysics Data System (ADS)
Peng, Di; Wang, Shaofei; Liu, Yingzheng
2016-04-01
Fast pressure-sensitive paint (PSP) is very useful in flow diagnostics due to its fast response and high spatial resolution, but its application to low-speed flows is usually challenging due to limitations of the paint's pressure sensitivity and the capability of high-speed imagers. The poor signal-to-noise ratio in low-speed cases makes it very difficult to extract useful information from the PSP data. In this study, unsteady PSP measurements were made on a flat plate behind a cylinder in a low-speed wind tunnel (flow speed from 10 to 17 m/s). Pressure fluctuations (ΔP) on the plate caused by vortex-plate interaction were recorded continuously by fast PSP (using a high-speed camera) and a microphone array. The power spectra of the pressure fluctuations and the phase-averaged ΔP obtained from PSP and microphone were compared, showing good agreement in general. Proper orthogonal decomposition (POD) was used to reduce noise in the PSP data and extract the dominant pressure features. The PSP results reconstructed from selected POD modes were then compared to the pressure data obtained simultaneously with the microphone sensors. Based on the comparison of both the instantaneous ΔP and the root-mean-square of ΔP, it was confirmed that POD analysis could effectively remove noise while preserving the instantaneous pressure information with good fidelity, especially for flows with strong periodicity. This technique extends the application range of fast PSP and can be a powerful tool for fundamental fluid mechanics research at low speed.
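The POD denoising step can be sketched as low-rank reconstruction of the snapshot matrix; the synthetic "5 Hz" signal and noise level below are illustrative, not the wind-tunnel data.

```python
import numpy as np

# Hedged sketch of POD-based denoising of unsteady PSP-like data: the
# snapshot matrix is rebuilt from its first few modes, which hold the
# periodic pressure signal, while uncorrelated camera noise spreads over
# all modes and is largely discarded.

rng = np.random.default_rng(7)
n_pix, n_frames = 400, 600
t = np.arange(n_frames) / 100.0                     # illustrative time base
signal = np.outer(np.hanning(n_pix), np.sin(2 * np.pi * 5.0 * t))  # 5 Hz mode
noisy = signal + 0.5 * rng.standard_normal((n_pix, n_frames))      # poor SNR

U, S, Vt = np.linalg.svd(noisy, full_matrices=False)
k = 1                                               # retained POD modes
denoised = U[:, :k] @ np.diag(S[:k]) @ Vt[:k]

err_before = np.linalg.norm(noisy - signal) / np.linalg.norm(signal)
err_after = np.linalg.norm(denoised - signal) / np.linalg.norm(signal)
print(err_before, err_after)    # reconstruction error drops markedly
```

This works precisely because the periodic signal concentrates in a few large singular values while white noise does not, matching the paper's observation that strongly periodic flows benefit most.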
Proper Orthogonal Decomposition on Experimental Multi-phase Flow in a Pipe
NASA Astrophysics Data System (ADS)
Viggiano, Bianca; Tutkun, Murat; Cal, Raúl Bayoán
2016-11-01
Multi-phase flow in a 10 cm diameter pipe is analyzed using proper orthogonal decomposition. The data were obtained using X-ray computed tomography in the Well Flow Loop at the Institute for Energy Technology in Kjeller, Norway. The system consists of two sources and two detectors; one camera records the vertical beams and one camera records the horizontal beams. The X-ray system allows measurement of phase holdup, cross-sectional phase distributions and gas-liquid interface characteristics within the pipe. The mathematical framework in the context of multi-phase flows is developed. Phase fractions of a two-phase (gas-liquid) flow are analyzed and a reduced order description of the flow is generated. Experimental data adds complexity to the analysis, since only limited quantities are known for the reconstruction. Comparison between the reconstructed fields and the full data set allows observation of the important features. The mathematical description obtained from the decomposition will deepen the understanding of multi-phase flow characteristics and is applicable to fluidized beds, hydroelectric power and nuclear processes, to name a few.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Futatani, S.; Bos, W.J.T.; Del-Castillo-Negrete, Diego B
2011-01-01
We assess two techniques for extracting coherent vortices out of turbulent flows: the wavelet based Coherent Vorticity Extraction (CVE) and the Proper Orthogonal Decomposition (POD). The former decomposes the flow field into an orthogonal wavelet representation, and subsequent thresholding of the coefficients allows one to split the flow into organized coherent vortices with non-Gaussian statistics and an incoherent random part which is structureless. POD is based on the singular value decomposition and decomposes the flow into basis functions which are optimal with respect to the retained energy for the ensemble average. Both techniques are applied to direct numerical simulation data of two-dimensional drift-wave turbulence governed by the Hasegawa-Wakatani equation, considering two limit cases: the quasi-hydrodynamic and the quasi-adiabatic regimes. The results are compared in terms of compression rate, retained energy, retained enstrophy and retained radial flux, together with the enstrophy spectrum and higher order statistics. (c) 2010 Published by Elsevier Masson SAS on behalf of Academie des sciences.
NASA Astrophysics Data System (ADS)
Liu, Peng; Wang, Yanfei
2018-04-01
We study problems associated with seismic data decomposition and migration imaging. We first represent the seismic data utilizing Gaussian beam basis functions, which have nonzero curvature, and then consider the sparse decomposition technique. The sparse decomposition problem is an l0-norm constrained minimization problem. In solving the l0-norm minimization, a polynomial Radon transform is performed to achieve sparsity, and a fast gradient descent method is used to calculate the waveform functions. The waveform functions can subsequently be used for sparse Gaussian beam migration. Compared with traditional sparse Gaussian beam methods, the seismic data can be properly reconstructed employing fewer Gaussian beams with nonzero initial curvature. The migration approach described in this paper is more efficient than the traditional sparse Gaussian beam migration.
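The abstract above solves an l0-constrained decomposition with a polynomial Radon transform and gradient descent; as a generic illustration of sparse decomposition over a redundant dictionary, here is a greedy orthogonal matching pursuit sketch instead (a standard substitute technique, not the authors' algorithm; dictionary and coefficients are synthetic):

```python
import numpy as np

def omp(D, y, max_atoms, tol=1e-10):
    """Greedy orthogonal matching pursuit over a unit-norm dictionary D."""
    resid, support = y.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(max_atoms):
        j = int(np.argmax(np.abs(D.T @ resid)))   # atom most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        resid = y - D[:, support] @ coef
        if np.linalg.norm(resid) < tol:
            break
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n, m = 64, 256
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
x_true = np.zeros(m)
x_true[[10, 50, 200]] = [2.0, -1.5, 1.0]  # 3-sparse ground truth
y = D @ x_true
x_hat = omp(D, y, 8)                      # a few extra greedy steps for safety
res = np.linalg.norm(y - D @ x_hat)
```

With an incoherent random dictionary and a sufficiently sparse signal, the greedy support recovery is exact and the residual vanishes.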
Model reconstruction using POD method for gray-box fault detection
NASA Technical Reports Server (NTRS)
Park, H. G.; Zak, M.
2003-01-01
This paper describes the use of the Proper Orthogonal Decomposition (POD) method to create low-order dynamical models for the Model Filter component of Beacon-based Exception Analysis for Multi-missions (BEAM).
Niegowski, Maciej; Zivanovic, Miroslav
2016-03-01
We present a novel approach aimed at removing electrocardiogram (ECG) perturbation from single-channel surface electromyogram (EMG) recordings by means of unsupervised learning of wavelet-based intensity images. The general idea is to combine the suitability of certain wavelet decomposition bases which provide sparse electrocardiogram time-frequency representations, with the capacity of non-negative matrix factorization (NMF) for extracting patterns from images. In order to overcome convergence problems which often arise in NMF-related applications, we design a novel robust initialization strategy which ensures proper signal decomposition in a wide range of ECG contamination levels. Moreover, the method can be readily used because no a priori knowledge or parameter adjustment is needed. The proposed method was evaluated on real surface EMG signals against two state-of-the-art unsupervised learning algorithms and a singular spectrum analysis based method. The results, expressed in terms of high-to-low energy ratio, normalized median frequency, spectral power difference and normalized average rectified value, suggest that the proposed method enables better ECG-EMG separation quality than the reference methods. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
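The pipeline above combines wavelet intensity images with NMF; as a sketch of the NMF building block itself, here are the standard Lee-Seung multiplicative updates (a generic implementation, not the authors' wavelet front-end or their robust initialization; the data matrix is synthetic):

```python
import numpy as np

def nmf(V, r, n_iter=500, eps=1e-9, seed=0):
    """Multiplicative-update NMF (Frobenius loss): V ~= W @ H, all entries nonnegative."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update preserves nonnegativity
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(1)
V = rng.random((40, 3)) @ rng.random((3, 60))   # exactly rank-3, nonnegative
W, H = nmf(V, 3)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

On exactly factorizable nonnegative data, the updates drive the relative reconstruction error close to zero; on real EMG spectrograms a careful initialization (as the authors stress) matters for convergence.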
On the hadron mass decomposition
NASA Astrophysics Data System (ADS)
Lorcé, Cédric
2018-02-01
We argue that the standard decompositions of the hadron mass overlook pressure effects and hence should be interpreted with great care. Based on the semiclassical picture, we propose a new decomposition that properly accounts for these pressure effects. Because of Lorentz covariance, we stress that the hadron mass decomposition automatically comes along with a stability constraint, which we discuss for the first time. We also show that if a hadron is seen as made of quarks and gluons, one cannot decompose its mass into more than two contributions without running into trouble with the consistency of the physical interpretation. In particular, the so-called quark mass and trace anomaly contributions appear to be purely conventional. Based on current phenomenological values, we find that, on average, quarks exert a repulsive force inside nucleons, balanced exactly by the attractive force of the gluons.
Koopman Mode Decomposition Methods in Dynamic Stall: Reduced Order Modeling and Control
2015-11-10
the flow phenomena by separating them into individual modes. The technique of Proper Orthogonal Decomposition (POD), see [Holmes, 1998], is a popular... sampled values h(k), k = 0, ..., 2M-1, of the exponential sum: 1. Solve the following linear system ... 2. Compute all zeros z_j in D, j = 1, ..., M, of the Prony polynomial, i.e., calculate all eigenvalues of the associated companion matrix, and form f_j = log z_j for j = 1, ..., M, where log is the principal branch of the complex logarithm.
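The two-step Prony procedure referred to above (solve a linear system for the Prony polynomial, then take the logarithms of its zeros) can be made concrete in a minimal numpy sketch; np.roots performs the companion-matrix eigenvalue step, and the example data are illustrative:

```python
import numpy as np

def prony_exponents(h, M):
    """Recover f_j from 2M samples h[k] = sum_j c_j * exp(f_j * k).

    Step 1: solve the Hankel linear system for the Prony polynomial
    coefficients p_0..p_{M-1}. Step 2: its zeros z_j (eigenvalues of the
    companion matrix, computed here via np.roots) give f_j = log z_j.
    """
    H = np.array([[h[k + l] for l in range(M)] for k in range(M)])
    rhs = -np.array([h[k + M] for k in range(M)])
    p = np.linalg.solve(H, rhs)                      # step 1
    z = np.roots(np.concatenate(([1.0], p[::-1])))   # step 2
    return np.log(z.astype(complex))

# Two decaying exponentials; 2M = 4 samples suffice for M = 2.
f_true = np.array([-0.5, -0.1])
k = np.arange(4)
h = (np.exp(np.outer(k, f_true)) * np.array([1.0, 2.0])).sum(axis=1)
f_est = prony_exponents(h, 2)
```

The recurrence behind step 1 is that h satisfies sum_l p_l h(k+l) = -h(k+M) whenever the z_j are roots of the Prony polynomial.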
Generalized decompositions of dynamic systems and vector Lyapunov functions
NASA Astrophysics Data System (ADS)
Ikeda, M.; Siljak, D. D.
1981-10-01
The notion of decomposition is generalized to provide more freedom in constructing vector Lyapunov functions for stability analysis of nonlinear dynamic systems. A generalized decomposition is defined as a disjoint decomposition of a system which is obtained by expanding the state-space of a given system. An inclusion principle is formulated for the solutions of the expansion to include the solutions of the original system, so that stability of the expansion implies stability of the original system. Stability of the expansion can then be established by standard disjoint decompositions and vector Lyapunov functions. The applicability of the new approach is demonstrated using the Lotka-Volterra equations.
Mode decomposition and Lagrangian structures of the flow dynamics in orbitally shaken bioreactors
NASA Astrophysics Data System (ADS)
Weheliye, Weheliye Hashi; Cagney, Neil; Rodriguez, Gregorio; Micheletti, Martina; Ducci, Andrea
2018-03-01
In this study, two mode decomposition techniques were applied and compared to assess the flow dynamics in an orbital shaken bioreactor (OSB) of cylindrical geometry and flat bottom: proper orthogonal decomposition and dynamic mode decomposition. Particle Image Velocimetry (PIV) experiments were carried out for different operating conditions including fluid height, h, and shaker rotational speed, N. A detailed flow analysis is provided for conditions when the fluid and vessel motions are in-phase (Fr = 0.23) and out-of-phase (Fr = 0.47). PIV measurements in vertical and horizontal planes were combined to reconstruct low order models of the full 3D flow and to determine its Finite-Time Lyapunov Exponent (FTLE) within OSBs. The combined results from the mode decomposition and the FTLE fields provide a useful insight into the flow dynamics and Lagrangian coherent structures in OSBs and offer a valuable tool to optimise bioprocess design in terms of mixing and cell suspension.
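Of the two techniques compared above, dynamic mode decomposition admits a compact sketch; here is a minimal "exact DMD" on a synthetic linear system with known eigenvalues (illustrative only, not the PIV pipeline of the paper):

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD of snapshot pairs Y ~= A X, with A projected to rank r."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vt[:r].conj().T
    Atilde = U.conj().T @ Y @ V / s            # r x r reduced operator
    evals, W = np.linalg.eig(Atilde)
    modes = Y @ V / s @ W                      # exact DMD modes
    return evals, modes

rng = np.random.default_rng(0)
theta = 0.3
A = np.zeros((5, 5))
A[:2, :2] = 0.95 * np.array([[np.cos(theta), -np.sin(theta)],
                             [np.sin(theta),  np.cos(theta)]])   # decaying rotation
A[2:, 2:] = 0.5 * np.eye(3)                                      # fast-decaying block
snaps = [rng.standard_normal(5)]
for _ in range(10):
    snaps.append(A @ snaps[-1])
X = np.column_stack(snaps[:-1])
Y = np.column_stack(snaps[1:])
evals, _ = dmd(X, Y, r=3)      # the trajectory spans a 3-dimensional subspace
```

DMD recovers the dynamic eigenvalues (here 0.95 e^{±0.3i} and 0.5) directly from snapshot data, which is what gives access to frequencies and growth rates that POD alone does not provide.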
Pueyo Bellafont, Noèlia; Bagus, Paul S; Illas, Francesc
2015-06-07
A systematic study of the N(1s) core level binding energies (BE's) in a broad series of molecules is presented employing Hartree-Fock (HF) and the B3LYP, PBE0, and LC-BPBE density functional theory (DFT) based methods with a near HF basis set. The results show that all these methods give reasonably accurate BE's with B3LYP being slightly better than HF but with both PBE0 and LC-BPBE being poorer than HF. A rigorous and general decomposition of core level binding energy values into initial and final state contributions to the BE's is proposed that can be used within either HF or DFT methods. The results show that Koopmans' theorem does not hold for the Kohn-Sham eigenvalues. Consequently, Kohn-Sham orbital energies of core orbitals do not provide estimates of the initial state contribution to core level BE's; hence, they cannot be used to decompose initial and final state contributions to BE's. However, when the initial state contribution to DFT BE's is properly defined, the decompositions of initial and final state contributions given by DFT, with several different functionals, are very similar to those obtained with HF. Furthermore, it is shown that the differences of Kohn-Sham orbital energies taken with respect to a common reference do follow the trend of the properly calculated initial state contributions. These conclusions are especially important for condensed phase systems where our results validate the use of band structure calculations to determine initial state contributions to BE shifts.
Hemodynamics of a Patient-Specific Aneurysm Model with Proper Orthogonal Decomposition
NASA Astrophysics Data System (ADS)
Han, Suyue; Chang, Gary Han; Modarres-Sadeghi, Yahya
2017-11-01
Wall shear stress (WSS) and oscillatory shear index (OSI) are two of the most widely studied hemodynamic quantities in cardiovascular systems; they have been shown to elicit biological responses of the arterial wall and could be used to predict aneurysm development and rupture. In this study, a reduced-order model (ROM) of the hemodynamics of a patient-specific cerebral aneurysm is studied. The snapshot Proper Orthogonal Decomposition (POD) is utilized to construct the reduced-order bases of the flow using a CFD training set with known inflow parameters. It was shown that the area of low WSS and high OSI is correlated with higher POD modes. The resulting ROM can reproduce both WSS and OSI computationally for future parametric studies at significantly lower computational cost. Agreement was observed between the WSS and OSI values obtained using direct CFD results and ROM results.
High-speed imaging of submerged jet: visualization analysis using proper orthogonal decomposition
NASA Astrophysics Data System (ADS)
Liu, Yingzheng; He, Chuangxin
2016-11-01
In the present study, a submerged jet at low Reynolds numbers was visualized using laser-induced fluorescence and high-speed imaging in a water tank. A well-controlled calibration was made to determine the region of linear dependence of the fluorescence intensity on its concentration. Subsequently, the jet fluid issuing from a circular pipe was visualized using a high-speed camera. The animation sequence of the visualized jet flow field was supplied to the snapshot proper orthogonal decomposition (POD) analysis. Spatio-temporally varying structures superimposed in the unsteady fluid flow were identified, e.g., the axisymmetric mode and the helical mode, which were reflected in the dominant POD modes. The coefficients of the POD modes give a strong indication of the temporal and spectral features of the corresponding unsteady events. A reconstruction using the time-mean visualization and the selected POD modes was conducted to reveal the convective motion of the buried vortical structures. National Natural Science Foundation of China.
Bahri, A.; Bendersky, M.; Cohen, F. R.; Gitler, S.
2009-01-01
This article gives a natural decomposition of the suspension of a generalized moment-angle complex or partial product space which arises as the polyhedral product functor described below. The introduction and application of the smash product moment-angle complex provides a precise identification of the stable homotopy type of the values of the polyhedral product functor. One direct consequence is an analysis of the associated cohomology. For the special case of the complements of certain subspace arrangements, the geometrical decomposition implies the homological decomposition in earlier work of others as described below. Because the splitting is geometric, an analogous homological decomposition for a generalized moment-angle complex applies for any homology theory. Implied, therefore, is a decomposition for the Stanley–Reisner ring of a finite simplicial complex, and natural generalizations. PMID:19620727
Decomposition of Multi-player Games
NASA Astrophysics Data System (ADS)
Zhao, Dengji; Schiffel, Stephan; Thielscher, Michael
Research in General Game Playing aims at building systems that learn to play unknown games without human intervention. We contribute to this endeavour by generalising the established technique of decomposition from AI Planning to multi-player games. To this end, we present a method for the automatic decomposition of previously unknown games into independent subgames, and we show how a general game player can exploit a successful decomposition for game tree search.
Appropriate IMFs associated with cepstrum and envelope analysis for ball-bearing fault diagnosis
NASA Astrophysics Data System (ADS)
Tsao, Wen-Chang; Pan, Min-Chun
2014-03-01
The traditional envelope analysis is an effective method for the fault detection of rolling bearings. However, all the resonant frequency bands must be examined during the bearing-fault detection process. To address this deficiency, this paper proposes using empirical mode decomposition (EMD) to select a proper intrinsic mode function (IMF) for the subsequent detection tools; here both envelope analysis and cepstrum analysis are employed and compared. By virtue of the band-pass filtering nature of EMD, the resonant frequency bands of the structure being measured are captured in the IMFs. As the impulses arising from rolling elements striking bearing faults modulate with the structure resonance, proper IMFs can potentially characterize fault signatures. In the study, faulty ball bearings are used to validate the proposed method, and comparisons with the traditional envelope analysis are made. After the IMFs highlighting faulty-bearing features are selected, the performance of envelope analysis and cepstrum analysis in singling out bearing faults is objectively compared and addressed; it is noted that envelope analysis generally offers better performance.
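The envelope-analysis stage described above can be sketched with numpy alone (an FFT-based analytic signal stands in for a Hilbert transformer; the EMD sifting step is omitted, and the carrier and fault frequencies are purely illustrative):

```python
import numpy as np

def envelope_spectrum(x, fs):
    """Envelope spectrum via the analytic signal (FFT-based Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0          # keep positive frequencies, doubled
    if n % 2 == 0:
        h[n // 2] = 1.0
    env = np.abs(np.fft.ifft(X * h))  # instantaneous amplitude
    env -= env.mean()
    spec = np.abs(np.fft.rfft(env)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spec

# Synthetic bearing-like signal: a 2 kHz "resonance" carrier, amplitude-
# modulated at a 37 Hz "fault" rate (both numbers are hypothetical).
fs = 20000.0
t = np.arange(int(fs)) / fs                      # 1 s of data
x = (1.0 + 0.8 * np.cos(2 * np.pi * 37 * t)) * np.sin(2 * np.pi * 2000 * t)
freqs, spec = envelope_spectrum(x, fs)
f_peak = freqs[1:][np.argmax(spec[1:])]          # skip the DC bin
```

The peak of the envelope spectrum lands at the modulation (fault) frequency rather than at the carrier, which is exactly why envelope analysis isolates bearing defect rates.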
Investigation on an ammonia supply system for flue gas denitrification of low-speed marine diesel
Yuan, Han; Zhao, Jian; Mei, Ning
2017-01-01
Low-speed marine diesel flue gas denitrification is in great demand in the ship transport industry. This research proposes an ammonia supply system which can be used for flue gas denitrification of low-speed marine diesel. In this proposed ammonia supply system, ammonium bicarbonate is selected as the ammonia carrier to produce ammonia and carbon dioxide by thermal decomposition. The diesel engine exhaust heat is used as the heating source for ammonium bicarbonate decomposition and ammonia gas desorption. As the decomposition of ammonium bicarbonate is critical to the proper operation of this system, efforts have been made in this paper to characterize the performance of the thermal decomposition chamber. A visualization experiment to determine the single-tube heat transfer coefficient is conducted, together with simulations of flow and heat transfer in two structures; the decomposition of ammonium bicarbonate is simulated with ASPEN PLUS. The results show that the single-tube heat transfer coefficient is 1052 W m−2 °C−1, and that the fork-type thermal decomposition chamber structure achieves higher heat transfer than the row-type structure. With regard to the simulation of ammonium bicarbonate thermal decomposition, the ammonia production is significantly affected by the reaction temperature and the mass flow rate of the ammonium bicarbonate input. PMID:29308269
Henze Bancroft, Leah C; Strigel, Roberta M; Hernando, Diego; Johnson, Kevin M; Kelcz, Frederick; Kijowski, Richard; Block, Walter F
2016-03-01
Chemical shift based fat/water decomposition methods such as IDEAL are frequently used in challenging imaging environments with large B0 inhomogeneity. However, they do not account for the signal modulations introduced by a balanced steady state free precession (bSSFP) acquisition. Here we demonstrate improved performance when the bSSFP frequency response is properly incorporated into the multipeak spectral fat model used in the decomposition process. Balanced SSFP allows for rapid imaging but also introduces a characteristic frequency response featuring periodic nulls and pass bands. Fat spectral components in adjacent pass bands will experience bulk phase offsets and magnitude modulations that change the expected constructive and destructive interference between the fat spectral components. A bSSFP signal model was incorporated into the fat/water decomposition process and used to generate images of a fat phantom, and bilateral breast and knee images in four normal volunteers at 1.5 Tesla. Incorporation of the bSSFP signal model into the decomposition process improved the performance of the fat/water decomposition. Incorporation of this model allows rapid bSSFP imaging sequences to use robust fat/water decomposition methods such as IDEAL. While only one set of imaging parameters were presented, the method is compatible with any field strength or repetition time. © 2015 Wiley Periodicals, Inc.
Investigation on an ammonia supply system for flue gas denitrification of low-speed marine diesel
NASA Astrophysics Data System (ADS)
Huang, Xiankun; Yuan, Han; Zhao, Jian; Mei, Ning
2017-12-01
Low-speed marine diesel flue gas denitrification is in great demand in the ship transport industry. This research proposes an ammonia supply system which can be used for flue gas denitrification of low-speed marine diesel. In this proposed ammonia supply system, ammonium bicarbonate is selected as the ammonia carrier to produce ammonia and carbon dioxide by thermal decomposition. The diesel engine exhaust heat is used as the heating source for ammonium bicarbonate decomposition and ammonia gas desorption. As the decomposition of ammonium bicarbonate is critical to the proper operation of this system, efforts have been made in this paper to characterize the performance of the thermal decomposition chamber. A visualization experiment to determine the single-tube heat transfer coefficient is conducted, together with simulations of flow and heat transfer in two structures; the decomposition of ammonium bicarbonate is simulated with ASPEN PLUS. The results show that the single-tube heat transfer coefficient is 1052 W m-2 °C-1, and that the fork-type thermal decomposition chamber structure achieves higher heat transfer than the row-type structure. With regard to the simulation of ammonium bicarbonate thermal decomposition, the ammonia production is significantly affected by the reaction temperature and the mass flow rate of the ammonium bicarbonate input.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.
In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds on the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth (for M2) derivatives; (ii) the first neglected singular value; and (iii) the spectral properties of the projection of the system's Jacobian in the reduced space. Because of the interplay of these factors, neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.
Hydrazine decomposition and other reactions
NASA Technical Reports Server (NTRS)
Armstrong, Warren E. (Inventor); La France, Donald S. (Inventor); Voge, Hervey H. (Inventor)
1978-01-01
This invention relates to the catalytic decomposition of hydrazine, catalysts useful for this decomposition and other reactions, and to reactions in hydrogen atmospheres generally using carbon-containing catalysts.
Differential Decay of Bacterial and Viral Fecal Indicators in Common Human Pollution Sources
Understanding the decomposition of different human fecal pollution sources is necessary for proper implementation of many water quality management practices, as well as predicting associated public health risks. Here, the decay of select cultivated and molecular indicators of fe...
NASA Astrophysics Data System (ADS)
Raghupathy, Arun; Ghia, Karman; Ghia, Urmila
2008-11-01
Compact Thermal Models (CTMs) representing IC packages have traditionally been developed using the DELPHI-based (DEvelopment of Libraries of PHysical models for an Integrated design) methodology. The drawbacks of this method are presented, and an alternative method is proposed. A reduced-order model that provides the complete thermal information accurately with fewer computational resources can be used effectively in system-level simulations. Proper Orthogonal Decomposition (POD), a statistical method, can be used to reduce the number of degrees of freedom, or variables, in the computations for such a problem. POD along with the Galerkin projection allows us to create reduced-order models that reproduce the characteristics of the system with a considerable reduction in computational resources while maintaining a high level of accuracy. The goal of this work is to show that this method can be applied to obtain a boundary-condition-independent reduced-order thermal model for complex components. The methodology is applied to the 1D transient heat equation.
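The closing step named above, POD with Galerkin projection applied to the 1D transient heat equation, can be sketched end to end (discretization, mode count and parameter values are illustrative, not the paper's):

```python
import numpy as np

# Offline: snapshots of u_t = alpha * u_xx with u = 0 at both ends,
# solved by finite differences and explicit Euler.
nx, nt, alpha = 100, 400, 1.0e-2
x = np.linspace(0.0, 1.0, nx)
dx, dt = x[1] - x[0], 2.0e-3              # dt below the stability limit dx^2/(2*alpha)
A = (np.diag(-2.0 * np.ones(nx)) + np.diag(np.ones(nx - 1), 1)
     + np.diag(np.ones(nx - 1), -1)) * alpha / dx**2
A[0, :] = A[-1, :] = 0.0                  # Dirichlet: boundary values stay fixed
u = np.sin(np.pi * x) + 0.3 * np.sin(3.0 * np.pi * x)
snaps = [u.copy()]
for _ in range(nt):
    u = u + dt * (A @ u)
    snaps.append(u.copy())
S = np.column_stack(snaps)
Phi = np.linalg.svd(S, full_matrices=False)[0][:, :2]   # 2 POD modes

# Online: Galerkin projection a' = (Phi^T A Phi) a, integrated in the
# 2-dimensional reduced space, then lifted back to the full grid.
Ar = Phi.T @ A @ Phi
a = Phi.T @ snaps[0]
for _ in range(nt):
    a = a + dt * (Ar @ a)
u_rom = Phi @ a
err = np.linalg.norm(u_rom - snaps[-1]) / np.linalg.norm(snaps[-1])
```

Because the initial condition excites only two diffusion modes, two POD modes reproduce the full-order trajectory essentially exactly; for realistic packages the same projection trades a small accuracy loss for a large reduction in state dimension.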
NASA Technical Reports Server (NTRS)
Payne, Fred R.
1992-01-01
Lumley's 1967 Moscow paper provided, for the first time, a completely rational definition of the physically useful term 'large eddy', popular for a half-century. The numerical procedures based upon his results are: (1) PODT (Proper Orthogonal Decomposition Theorem), which extracts the large-eddy structure of stochastic processes from physical or computer-simulation two-point covariances, and (2) LEIM (Large-Eddy Interaction Model), a predictive scheme for the dynamical large eddies based upon higher order turbulence modeling. Lumley's earlier work (1964) forms the basis for the final member of the triad of numerical procedures: this predicts the global neutral modes of turbulence, which show surprising agreement with both structural eigenmodes and those obtained from the dynamical equations. The ultimate goal of improved engineering design tools for turbulence may be near at hand, partly because the power and storage of 'supermicrocomputer' workstations are finally becoming adequate for the demanding numerics of these procedures.
NASA Astrophysics Data System (ADS)
Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo
2018-03-01
In view of the Fourier-Stieltjes integral formula of multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing the multivariate stationary stochastic process with a few elementary random variables, bypassing the challenge of high-dimensional random variables inherent in conventional Monte Carlo methods. In order to accelerate the numerical simulation, the technique of the Fast Fourier Transform (FFT) is integrated with the proposed schemes. For illustrative purposes, the simulation of the horizontal wind velocity field along the deck of a large-span bridge is carried out using the proposed methods with 2 and 3 elementary random variables. Numerical simulations reveal the usefulness of the dimension-reduction representation methods.
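For orientation, here is the classical univariate spectral representation underlying the methods above (a minimal sketch with an illustrative spectrum; the paper's dimension-reduction variants and the multivariate case are not reproduced):

```python
import numpy as np

# Spectral representation of a zero-mean stationary process:
# u(t) = sum_k sqrt(2 * S(w_k) * dw) * cos(w_k * t + phi_k), phi_k ~ U(0, 2*pi).
rng = np.random.default_rng(0)
K, dw = 128, 0.05
w = dw * np.arange(1, K + 1)
S = 1.0 / (1.0 + w**2)                   # illustrative one-sided spectrum
amps = np.sqrt(2.0 * S * dw)
phi = rng.uniform(0.0, 2.0 * np.pi, K)

N = 1024                                 # samples over one fundamental period
t = (2.0 * np.pi / dw) * np.arange(N) / N
u = (amps[:, None] * np.cos(np.outer(w, t) + phi[:, None])).sum(axis=0)
var_time = np.mean(u**2)                 # time-average variance of one sample path
var_target = np.sum(S * dw)              # discrete integral of the spectrum
```

Sampling over one full fundamental period makes the discrete cosines orthogonal, so a single sample path reproduces the target variance exactly; the FFT acceleration mentioned in the abstract evaluates the same harmonic sum faster.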
Integration of PGD-virtual charts into an engineering design process
NASA Astrophysics Data System (ADS)
Courard, Amaury; Néron, David; Ladevèze, Pierre; Ballere, Ludovic
2016-04-01
This article deals with the efficient construction of approximations of fields and quantities of interest used in the geometric optimisation of complex shapes that can be encountered in engineering structures. The strategy developed herein is based on the construction of virtual charts that, once computed offline, allow the structure to be optimised at negligible online CPU cost. These virtual charts can be used as a powerful numerical decision-support tool during the design of industrial structures. They are built using the proper generalized decomposition (PGD), which offers a very convenient framework for solving parametrised problems. In this paper, particular attention has been paid to the integration of the procedure into a genuine engineering design process. In particular, a dedicated methodology is proposed to interface the PGD approach with commercial software.
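The offline/online split behind a virtual chart can be illustrated without PGD machinery: tabulate a quantity of interest over the design parameter offline, then treat online design as a cheap lookup. The spring-chain model, parameter ranges and target below are hypothetical stand-ins for a PGD-separated structural solution:

```python
import numpy as np

# Offline phase: one direct solve per grid value of the design parameter;
# the stored results form the "virtual chart".
def tip_displacement(k_design, n=20, k0=1.0, load=1.0):
    k = np.full(n, k0)
    k[n // 2] = k_design                  # one parametrised member
    K = np.zeros((n, n))                  # stiffness of a fixed-free spring chain
    for i in range(n):
        K[i, i] += k[i]
        if i + 1 < n:
            K[i, i] += k[i + 1]
            K[i, i + 1] -= k[i + 1]
            K[i + 1, i] -= k[i + 1]
    f = np.zeros(n)
    f[-1] = load                          # unit load at the free tip
    return np.linalg.solve(K, f)[-1]

p_grid = np.linspace(0.2, 5.0, 481)
chart = np.array([tip_displacement(p) for p in p_grid])   # all cost is offline

# Online phase: hitting a target displacement is a chart lookup.
target = 20.0
p_best = p_grid[np.argmin(np.abs(chart - target))]
```

For springs in series the tip displacement is the sum of the compliances, so the target of 20.0 is met at unit stiffness of the parametrised member; a PGD chart plays the same role but stores the solution in separated (parameter-by-space) form rather than on a brute-force grid.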
Extreme learning machine for reduced order modeling of turbulent geophysical flows.
San, Omer; Maulik, Romit
2018-04-01
We investigate the application of artificial neural networks to stabilize proper orthogonal decomposition-based reduced order models for quasistationary geophysical turbulent flows. An extreme learning machine concept is introduced for computing an eddy-viscosity closure dynamically to incorporate the effects of the truncated modes. We consider a four-gyre wind-driven ocean circulation problem as our prototype setting to assess the performance of the proposed data-driven approach. Our framework provides a significant reduction in computational time and effectively retains the dynamics of the full-order model during the forward simulation period beyond the training data set. Furthermore, we show that the method is robust for larger choices of time steps and can be used as an efficient and reliable tool for long time integration of general circulation models.
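The extreme learning machine ingredient above has a very small core: a random, untrained hidden layer followed by a linear least-squares output fit. A generic regression sketch on a toy 1-D function follows (illustrative only, not the paper's eddy-viscosity closure or its ocean model):

```python
import numpy as np

def elm_fit(X, y, n_hidden=200, seed=0):
    """Extreme learning machine: random hidden layer, least-squares output layer."""
    rng = np.random.default_rng(seed)
    Win = rng.standard_normal((X.shape[1], n_hidden))   # fixed random input weights
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ Win + b)                            # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)        # only this layer is "trained"
    return Win, b, beta

def elm_predict(X, Win, b, beta):
    return np.tanh(X @ Win + b) @ beta

# Toy stand-in for a closure map: learn y = sin(3x) + 0.5x from samples.
rng = np.random.default_rng(1)
Xtr = rng.uniform(-1.0, 1.0, (300, 1))
ytr = np.sin(3.0 * Xtr[:, 0]) + 0.5 * Xtr[:, 0]
Win, b, beta = elm_fit(Xtr, ytr)
Xte = np.linspace(-1.0, 1.0, 101)[:, None]
yte = np.sin(3.0 * Xte[:, 0]) + 0.5 * Xte[:, 0]
err = np.max(np.abs(elm_predict(Xte, Win, b, beta) - yte))
```

Because only the output layer is solved for, training reduces to one linear least-squares problem, which is what makes the closure cheap enough to evaluate dynamically inside a forward simulation.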
Inviscid criterion for decomposing scales
NASA Astrophysics Data System (ADS)
Zhao, Dongxiao; Aluie, Hussein
2018-05-01
The proper scale decomposition in flows with significant density variations is not as straightforward as in incompressible flows, with many possible ways to define a "length scale." A choice can be made according to the so-called inviscid criterion [Aluie, Physica D 247, 54 (2013), 10.1016/j.physd.2012.12.009]. It is a kinematic requirement that a scale decomposition yield negligible viscous effects at large enough length scales. It has recently been proved [Aluie, Physica D 247, 54 (2013), 10.1016/j.physd.2012.12.009] that a Favre decomposition satisfies the inviscid criterion, which is necessary to unravel inertial-range dynamics and the cascade. Here we present numerical demonstrations of those results. We also show that two other commonly used decompositions can violate the inviscid criterion and, therefore, are not suitable for studying inertial-range dynamics in variable-density and compressible turbulence. Our results have a practical modeling implication in showing that viscous terms in Large Eddy Simulations do not need to be modeled and can be neglected.
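The distinction between Favre (density-weighted) filtering and plain filtering discussed above can be sketched with a simple periodic box filter (the fields and filter width are illustrative; any low-pass kernel would do):

```python
import numpy as np

def box_filter(f, width):
    """Periodic top-hat filter of odd width, a minimal stand-in for coarse-graining."""
    pad = np.concatenate([f[-(width // 2):], f, f[:width // 2]])
    return np.convolve(pad, np.ones(width) / width, mode="valid")

rng = np.random.default_rng(0)
n, width = 256, 17
x = 2.0 * np.pi * np.arange(n) / n
rho = 1.5 + 0.5 * np.sin(x)                          # variable density
u = np.cos(3.0 * x) + 0.1 * rng.standard_normal(n)   # velocity field

u_reynolds = box_filter(u, width)                              # plain filtering
u_favre = box_filter(rho * u, width) / box_filter(rho, width)  # density-weighted
```

With uniform density the two coincide; with variable density they differ wherever density and velocity co-vary inside the filter window, which is the term the inviscid criterion is sensitive to.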
Use of Proper Orthogonal Decomposition Towards Time-resolved Image Analysis of Sprays
2011-03-15
High-speed movies of optically dense sprays exiting a Gas-Centered Swirl Coaxial (GCSC) injector are subjected to image analysis to determine spray...sequence prior to image analysis. Results of spray morphology, including spray boundary, widths, angles, and boundary oscillation frequencies, are
Modal decomposition of turbulent supersonic cavity
NASA Astrophysics Data System (ADS)
Soni, R. K.; Arya, N.; De, A.
2018-06-01
Self-sustained oscillations in a Mach 3 supersonic cavity with a length-to-depth ratio of three are investigated using wall-modeled large eddy simulation methodology for Re_D = 3.39 × 10^5. The unsteady data obtained through computation are utilized to investigate the spatial and temporal evolution of the flow field, especially the second invariant of the velocity gradient tensor, while the phase-averaged data are analyzed over a feedback cycle to study the spatial structures. This analysis is accompanied by the proper orthogonal decomposition (POD) data, which reveal the presence of discrete vortices along the shear layer. The POD analysis is performed in both the spanwise and streamwise planes to extract the coherence in flow structures. Finally, dynamic mode decomposition is performed on the data sequence to obtain the dynamic information and deeper insight into the self-sustained mechanism.
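Exact dynamic mode decomposition, used above alongside POD, is a short linear-algebra recipe: split the snapshots into shifted pairs, project onto POD modes, and eigen-decompose the small reduced operator. The hedged sketch below runs it on a synthetic linear system with known eigenvalues rather than cavity-flow data:

```python
import numpy as np

# Synthetic snapshot sequence from a known linear system, lifted into a
# higher-dimensional observable space; all names and values are illustrative.
rng = np.random.default_rng(2)
theta = 0.1
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation: eigenvalues e^{±i*theta}
P = rng.standard_normal((10, 2))                  # lift the 2-D state into 10-D
x = np.zeros((2, 60))
x[:, 0] = [1.0, 0.0]
for k in range(59):
    x[:, k + 1] = A @ x[:, k]
X = P @ x                                         # 10 x 60 snapshot matrix

# Exact DMD: shifted snapshot pairs, POD projection, small eigenproblem.
X1, X2 = X[:, :-1], X[:, 1:]
U, s, Vt = np.linalg.svd(X1, full_matrices=False)
r = int(np.sum(s > 1e-10 * s[0]))                 # numerical rank
U, s, Vt = U[:, :r], s[:r], Vt[:r, :]
Atilde = U.T @ X2 @ Vt.T @ np.diag(1.0 / s)       # reduced linear operator
eigvals, W = np.linalg.eig(Atilde)
modes = X2 @ Vt.T @ np.diag(1.0 / s) @ W          # exact DMD modes
```

The DMD eigenvalues recover the rotation's e^{±iθ} exactly here; in the cavity data the same eigenvalues encode the frequencies of the feedback cycle.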
High performance computation of radiative transfer equation using the finite element method
NASA Astrophysics Data System (ADS)
Badri, M. A.; Jolivet, P.; Rousseau, B.; Favennec, Y.
2018-05-01
This article deals with an efficient strategy for numerically simulating radiative transfer phenomena using distributed computing. The finite element method alongside the discrete ordinate method is used for spatio-angular discretization of the monochromatic steady-state radiative transfer equation in an anisotropically scattering medium. Two very different parallelization methods, based on angular and spatial decomposition, are presented. To do so, the finite element method is used in a vectorial way. A detailed comparison of scalability, performance, and efficiency on thousands of processors is established for two- and three-dimensional heterogeneous test cases. Timings show that both algorithms scale well when using proper preconditioners. It is also observed that our angular decomposition scheme outperforms our domain decomposition method. Overall, we perform numerical simulations at scales that were previously unattainable by standard radiative transfer equation solvers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kempka, S.N.; Strickland, J.H.; Glass, M.W.
1995-04-01
A formulation to satisfy velocity boundary conditions for the vorticity form of the incompressible, viscous fluid momentum equations is presented. The tangential and normal components of the velocity boundary condition are satisfied simultaneously by creating vorticity adjacent to boundaries. The newly created vorticity is determined using a kinematical formulation which is a generalization of Helmholtz's decomposition of a vector field. Though it has not been generally recognized, these formulations resolve the over-specification issue associated with creating vorticity to satisfy velocity boundary conditions. The generalized decomposition has not been widely used, apparently due to a lack of a useful physical interpretation. An analysis is presented which shows that the generalized decomposition has a relatively simple physical interpretation which facilitates its numerical implementation. The implementation of the generalized decomposition is discussed in detail. As an example, the flow in a two-dimensional lid-driven cavity is simulated. The solution technique is based on a Lagrangian transport algorithm in the hydrocode ALEGRA. ALEGRA's Lagrangian transport algorithm has been modified to solve the vorticity transport equation and the generalized decomposition, thus providing a new, accurate method to simulate incompressible flows. This numerical implementation and the new boundary condition formulation allow vorticity-based formulations to be used in a wider range of engineering problems.
Considerations for Storage of High Test Hydrogen Peroxide (HTP) Utilizing Non-Metal Containers
NASA Technical Reports Server (NTRS)
Moore, Robin E.; Scott, Joseph P.; Wise, Harry
2005-01-01
When working with high concentrations of hydrogen peroxide, it is critical that the storage container be constructed of the proper materials, those which will not degrade to the extent that container breakdown or dangerous decomposition occurs. It has been suggested that the only materials that will safely contain the peroxide for a significant period of time are stainless steel or aluminum, as used in High Test Hydrogen Peroxide (HTP) containers. The stability and decomposition of HTP will also be discussed, as well as various means suggested in the literature to minimize these problems. The dangers of excess oxygen generation are also touched upon.
Reactivity continuum modeling of leaf, root, and wood decomposition across biomes
NASA Astrophysics Data System (ADS)
Koehler, Birgit; Tranvik, Lars J.
2015-07-01
Large carbon dioxide amounts are released to the atmosphere during organic matter decomposition. Yet the large-scale and long-term regulation of this critical process in global carbon cycling by litter chemistry and climate remains poorly understood. We used reactivity continuum (RC) modeling to analyze the decadal data set of the "Long-term Intersite Decomposition Experiment," in which fine litter and wood decomposition was studied in eight biome types (224 time series). In 32% and 46% of all sites, respectively, the litter content of acid-unhydrolyzable residue (AUR, formerly referred to as lignin) and the AUR/nitrogen ratio retarded initial decomposition rates. This initial rate-retarding effect generally disappeared within the first year of decomposition, and rate-stimulating effects of nutrients and a rate-retarding effect of the carbon/nitrogen ratio became more prevalent. For needles and leaves/grasses, the influence of climate on decomposition decreased over time. For fine roots, the climatic influence was initially smaller but increased toward later-stage decomposition. The climate decomposition index was the strongest climatic predictor of decomposition. The similar variability in initial decomposition rates across litter categories as across biome types suggested that future changes in decomposition may be dominated by warming-induced changes in plant community composition. In general, the RC model parameters successfully predicted independent decomposition data for the different litter-biome combinations (196 time series). We argue that parameterization of large-scale decomposition models with RC model parameters, as opposed to the currently common discrete multiexponential models, could significantly improve their mechanistic foundation and predictive accuracy across climate zones and litter categories.
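The reactivity continuum model used above treats litter as a continuum of compounds whose first-order decay rates k follow a gamma distribution; integrating exp(-kt) over that distribution gives the closed form m(t)/m0 = (β/(β+t))^α. A sketch with illustrative parameter values (not values fitted to the LIDET data), cross-checked by direct numerical integration:

```python
import numpy as np
from math import gamma as gamma_fn

# Illustrative RC-model parameters (NOT values fitted to the LIDET data):
alpha, beta = 1.5, 120.0   # gamma shape (dimensionless) and scale (days)

def mass_remaining(t):
    """Closed-form fraction of initial litter mass remaining at time t."""
    return (beta / (beta + t)) ** alpha

def mass_remaining_numeric(t, nk=200001):
    """Same quantity by integrating exp(-k t) over the gamma rate density."""
    k = np.linspace(0.0, 0.5, nk)          # decay-rate grid (1/days)
    p = beta**alpha * k ** (alpha - 1.0) * np.exp(-beta * k) / gamma_fn(alpha)
    f = p * np.exp(-k * t)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(k)))  # trapezoid rule
```

Unlike a discrete multiexponential fit, the two parameters α and β summarize the whole reactivity distribution, which is what makes cross-site parameterization tractable.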
Automatic single-image-based rain streaks removal via image decomposition.
Kang, Li-Wei; Lin, Chia-Wen; Fu, Yu-Hsiang
2012-04-01
Rain removal from a video is a challenging problem and has been recently investigated extensively. Nevertheless, the problem of rain removal from a single image was rarely studied in the literature, where no temporal information among successive images can be exploited, making the problem very challenging. In this paper, we propose a single-image-based rain removal framework via properly formulating rain removal as an image decomposition problem based on morphological component analysis. Instead of directly applying a conventional image decomposition technique, the proposed method first decomposes an image into the low- and high-frequency (HF) parts using a bilateral filter. The HF part is then decomposed into a "rain component" and a "nonrain component" by performing dictionary learning and sparse coding. As a result, the rain component can be successfully removed from the image while preserving most original image details. Experimental results demonstrate the efficacy of the proposed algorithm.
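The first stage of this pipeline, splitting an image into low- and high-frequency parts with a bilateral filter, can be sketched directly. The brute-force filter below and its parameters (`radius`, `sigma_s`, `sigma_r`) are illustrative, and the later dictionary-learning stage is not reproduced:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter for a 2-D grayscale image in [0, 1]."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rngw = np.exp(-(patch - img[i, j]) ** 2 / (2.0 * sigma_r**2))
            wgt = spatial * rngw         # spatial x range weights
            out[i, j] = np.sum(wgt * patch) / np.sum(wgt)
    return out

# Step-edge test image with fine-grained noise standing in for rain streaks.
rng = np.random.default_rng(4)
img = np.zeros((32, 32))
img[:, 16:] = 1.0
noisy = np.clip(img + 0.05 * rng.standard_normal(img.shape), 0.0, 1.0)

low = bilateral_filter(noisy)   # low-frequency part; the edge survives
high = noisy - low              # high-frequency part for dictionary learning
```

The edge-preserving property is the point: large structures stay in `low`, while rain-like high-frequency content is isolated in `high` for the sparse-coding step.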
NASA Astrophysics Data System (ADS)
Kadum, Hawwa; Ali, Naseem; Cal, Raúl
2016-11-01
Hot-wire anemometry measurements have been performed on a 3 x 3 wind turbine array to study the multifractality of the turbulent kinetic energy dissipation. A multifractal spectrum and Hurst exponents are determined at nine locations downstream of the hub height and the bottom and top tips. Higher multifractality is found at 0.5D and 1D downstream of the bottom tip and hub height. The second order of the Hurst exponent and the combination factor show an ability to predict the flow state in terms of its development. Snapshot proper orthogonal decomposition is used to identify the coherent and incoherent structures and to reconstruct the stochastic velocity using a specific number of the POD eigenfunctions. The accumulation of the turbulent kinetic energy at the top tip location exhibits fast convergence compared to the bottom tip and hub height locations. The dissipation of the large and small scales is determined using the reconstructed stochastic velocities. Higher multifractality is shown in the dissipation of the large scales compared to the small-scale dissipation, consistent with the behavior of the original signals.
NASA Astrophysics Data System (ADS)
Vignola, Joseph F.; Bucaro, Joseph A.; Tressler, James F.; Ellingston, Damon; Kurdila, Andrew J.; Adams, George; Marchetti, Barbara; Agnani, Alexia; Esposito, Enrico; Tomasini, Enrico P.
2004-06-01
A large-scale survey (~700 m2) of frescos and wall paintings was undertaken in the U.S. Capitol Building in Washington, D.C. to identify regions that may need structural repair due to detachment, delamination, or other defects. The survey encompassed eight pre-selected spaces including: Brumidi's first work at the Capitol building in the House Appropriations Committee room; the Parliamentarian's office; the House Speaker's office; the Senate Reception room; the President's Room; and three areas of the Brumidi Corridors. Roughly 60% of the area surveyed was domed or vaulted ceilings, the rest being walls. Approximately 250 scans were done ranging in size from 1 to 4 m2. The typical mesh density was 400 scan points per square meter. A common approach for post-processing time series called Proper Orthogonal Decomposition, or POD, was adapted to frequency-domain data in order to extract the essential features of the structure. We present a POD analysis for one of these panels, pinpointing regions that have experienced severe substructural degradation.
NASA Astrophysics Data System (ADS)
Placidi, M.; Ganapathisubramani, B.
2018-04-01
Wind-tunnel experiments were carried out on fully-rough boundary layers with large roughness (δ/h ≈ 10, where h is the height of the roughness elements and δ is the boundary-layer thickness). Twelve different surface conditions were created by using LEGO™ bricks of uniform height. Six cases are tested for a fixed plan solidity (λ_P) with variations in frontal density (λ_F), while the other six cases have varying λ_P for fixed λ_F. Particle image velocimetry and floating-element drag-balance measurements were performed. The current results complement those contained in Placidi and Ganapathisubramani (J Fluid Mech 782:541-566, 2015), extending the previous analysis to the turbulence statistics and spatial structure. Results indicate that mean velocity profiles in defect form agree with Townsend's similarity hypothesis with varying λ_F; however, the agreement is worse for cases with varying λ_P. The streamwise and wall-normal turbulent stresses, as well as the Reynolds shear stresses, show a lack of similarity across most examined cases. This suggests that the critical height of the roughness for which outer-layer similarity holds depends not only on the height of the roughness, but also on the local wall morphology. A new criterion based on shelter solidity, defined as the sheltered plan area per unit wall-parallel area, which is similar to the 'effective shelter area' in Raupach and Shaw (Boundary-Layer Meteorol 22:79-90, 1982), is found to capture the departure of the turbulence statistics from outer-layer similarity. Despite this lack of similarity reported in the turbulence statistics, proper orthogonal decomposition analysis, as well as two-point spatial correlations, show that some form of universal flow structure is present, as all cases exhibit virtually identical proper orthogonal decomposition mode shapes and correlation fields.
Finally, reduced models based on proper orthogonal decomposition reveal that the small scales of the turbulence play a significant role in assessing outer-layer similarity.
A general framework of noise suppression in material decomposition for dual-energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrongolo, Michael; Dong, Xue; Zhu, Lei, E-mail: leizhu@gatech.edu
Purpose: As a general problem of dual-energy CT (DECT), noise amplification in material decomposition severely reduces the signal-to-noise ratio on the decomposed images compared to that on the original CT images. In this work, the authors propose a general framework of noise suppression in material decomposition for DECT. The method is based on an iterative algorithm recently developed in their group for image-domain decomposition of DECT, with an extension to include nonlinear decomposition models. The generalized framework of iterative DECT decomposition enables beam-hardening correction with simultaneous noise suppression, which improves the clinical benefits of DECT. Methods: The authors propose to suppress noise on the decomposed images of DECT using convex optimization, which is formulated in the form of least-squares estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance–covariance matrix of the decomposed images as the penalty weight in the least-squares term. Analytical formulas are derived to compute the variance–covariance matrix for decomposed images with general-form numerical or analytical decomposition. As a demonstration, the authors implement the proposed algorithm on phantom data using an empirical polynomial function of decomposition measured on a calibration scan. The polynomial coefficients are determined from the projection data acquired on a wedge phantom, and the signal decomposition is performed in the projection domain. Results: On the Catphan®600 phantom, the proposed noise suppression method reduces the average noise standard deviation of basis material images by one to two orders of magnitude, with a superior performance on spatial resolution as shown in comparisons of line-pair images and modulation transfer function measurements.
On the synthesized monoenergetic CT images, the noise standard deviation is reduced by a factor of 2–3. By using nonlinear decomposition on projections, the authors’ method effectively suppresses the streaking artifacts of beam hardening and obtains more uniform images than their previous approach based on a linear model. Similar performance of noise suppression is observed in the results of an anthropomorphic head phantom and a pediatric chest phantom generated by the proposed method. With beam-hardening correction enabled by their approach, the image spatial nonuniformity on the head phantom is reduced from around 10% on the original CT images to 4.9% on the synthesized monoenergetic CT image. On the pediatric chest phantom, their method suppresses image noise standard deviation by a factor of around 7.5, and compared with linear decomposition, it reduces the estimation error of electron densities from 33.3% to 8.6%. Conclusions: The authors propose a general framework of noise suppression in material decomposition for DECT. Phantom studies have shown the proposed method improves the image uniformity and the accuracy of electron density measurements by effective beam-hardening correction and reduces noise level without noticeable resolution loss.
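The core estimator, least squares with an inverse-variance penalty weight plus smoothness regularization, has a simple 1-D analogue with a closed-form solution. The sketch below is illustrative (synthetic signal, heuristic regularization weight), not the authors' DECT implementation:

```python
import numpy as np

# 1-D analogue of penalized weighted least squares: inverse-variance data
# weights plus a second-difference smoothness penalty. Values illustrative.
rng = np.random.default_rng(5)
n = 200
t = np.linspace(0.0, 1.0, n)
truth = np.sin(2.0 * np.pi * t)
sigma = 0.05 + 0.25 * t                  # noise level grows along the signal
y = truth + sigma * rng.standard_normal(n)

W = np.diag(1.0 / sigma**2)              # inverse-variance weights
D = np.diff(np.eye(n), 2, axis=0)        # second-difference operator
lam = 1e3                                # heuristic regularization weight

# Closed-form minimizer of (x - y)' W (x - y) + lam * ||D x||^2.
x_hat = np.linalg.solve(W + lam * D.T @ D, W @ y)

err_raw = np.sqrt(np.mean((y - truth) ** 2))
err_fit = np.sqrt(np.mean((x_hat - truth) ** 2))
```

Weighting by the inverse (co)variance, as the paper does with the full variance–covariance matrix of the decomposed images, smooths hardest where the data are noisiest, which is what preserves resolution where the data are reliable.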
Unraveling the physical meaning of the Jaffe-Manohar decomposition of the nucleon spin
NASA Astrophysics Data System (ADS)
Wakamatsu, M.
2016-09-01
A general consensus now is that there are two physically inequivalent complete decompositions of the nucleon spin, i.e., the decomposition of canonical type and that of mechanical type. The well-known Jaffe-Manohar decomposition is of the former type. Unfortunately, there is a widespread misbelief that this decomposition matches the partonic picture, which states that the motion of quarks in the nucleon is approximately free. In the present monograph, we reveal that this understanding is not necessarily correct and that the Jaffe-Manohar decomposition is not a decomposition that naturally reflects the intrinsic (or static) orbital angular momentum structure of the nucleon.
Mode Analyses of Gyrokinetic Simulations of Plasma Microturbulence
NASA Astrophysics Data System (ADS)
Hatch, David R.
This thesis presents analysis of the excitation and role of damped modes in gyrokinetic simulations of plasma microturbulence. In order to address this question, mode decompositions are used to analyze gyrokinetic simulation data. A mode decomposition can be constructed by projecting a nonlinearly evolved gyrokinetic distribution function onto a set of linear eigenmodes, or alternatively by constructing a proper orthogonal decomposition (POD) of the distribution function. POD decompositions are used to examine the role of damped modes in saturating ion temperature gradient (ITG) driven turbulence. In order to identify the contribution of different modes to the energy sources and sinks, numerical diagnostics for a gyrokinetic energy quantity were developed for the GENE code. The use of these energy diagnostics in conjunction with POD mode decompositions demonstrates that ITG turbulence saturates largely through dissipation by damped modes at the same perpendicular spatial scales as those of the driving instabilities. This defines a picture of turbulent saturation that is very different from both traditional hydrodynamic scenarios and many common theories for the saturation of plasma turbulence. POD mode decompositions are also used to examine the role of subdominant modes in causing magnetic stochasticity in electromagnetic gyrokinetic simulations. It is shown that the magnetic stochasticity, which appears to be ubiquitous in electromagnetic microturbulence, is caused largely by subdominant modes with tearing parity. The application of higher-order singular value decomposition (HOSVD) to the full distribution function from gyrokinetic simulations is also presented, demonstrating the ability to characterize and extract insight from a very large, complex, and high-dimensional data set: the 5-D (plus time) gyrokinetic distribution function.
Koda, Shin-ichi
2015-05-28
Existing studies have shown that some linear dynamical systems defined on a dendritic network are equivalent, in special cases, to systems defined on a set of one-dimensional networks; this transformation to the simpler picture, which we call linear chain (LC) decomposition, has a significant advantage in understanding properties of dendrimers. In this paper, we expand the class of LC decomposable systems with some generalizations. In addition, we propose two general sufficient conditions for LC decomposability, with a procedure to systematically realize the LC decomposition. Some examples of LC decomposable linear dynamical systems are also presented with their graphs. The generalization of the LC decomposition is implemented in the following three aspects: (i) the type of linear operators; (ii) the shape of dendritic networks on which linear operators are defined; and (iii) the type of symmetry operations representing the symmetry of the systems. In generalization (iii), symmetry groups that represent the symmetry of dendritic systems are defined. The LC decomposition is realized by changing the basis of a linear operator defined on a dendritic network into bases of irreducible representations of the symmetry group. These achievements make it easier to utilize the LC decomposition in various cases, which may lead to a further understanding of the relation between structure and function of dendrimers in future studies.
Mark E. Harmon; Whendee L. Silver; Becky Fasth; Hua Chen; Ingrid C. Burke; William J. Parton; Stephen C. Hart; William S. Currie; Ariel E. Lugo
2009-01-01
Decomposition is a critical process in global carbon cycling. During decomposition, leaf and fine root litter may undergo a later, relatively slow phase; past long-term experiments indicate this phase occurs, but whether it is a general phenomenon has not been examined. Data from Long-term Intersite Decomposition Experiment Team, representing 27 sites and nine litter...
Thermophysics Characterization of Kerosene Combustion
NASA Technical Reports Server (NTRS)
Wang, Ten-See
2000-01-01
A one-formula surrogate fuel formulation and its quasi-global combustion kinetics model are developed to support the design of injectors and thrust chambers of kerosene-fueled rocket engines. This surrogate fuel model depicts a fuel blend that properly represents the general physical and chemical properties of kerosene. The accompanying gaseous-phase thermodynamics of the surrogate fuel is anchored with the heat of formation of kerosene and verified by comparing a series of one-dimensional rocket thrust chamber calculations. The quasi-global combustion kinetics model consists of several global steps for parent fuel decomposition, soot formation, and soot oxidation, and a detailed wet-CO mechanism. The final thermophysics formulations are incorporated with a computational fluid dynamics model for prediction of the combustor efficiency of an uni-element, tri-propellant combustor and the radiation of a kerosene-fueled thruster plume. The model predictions agreed reasonably well with those of the tests.
A multi-domain spectral method for time-fractional differential equations
NASA Astrophysics Data System (ADS)
Chen, Feng; Xu, Qinwu; Hesthaven, Jan S.
2015-07-01
This paper proposes an approach for high-order time integration within a multi-domain setting for time-fractional differential equations. Since the kernel is singular or nearly singular, two main difficulties arise after the domain decomposition: how to properly account for the history/memory part and how to perform the integration accurately. To address these issues, we propose a novel hybrid approach for the numerical integration based on the combination of three-term-recurrence relations of Jacobi polynomials and high-order Gauss quadrature. The different approximations used in the hybrid approach are justified theoretically and through numerical examples. Based on this, we propose a new multi-domain spectral method for high-order accurate time integrations and study its stability properties by identifying the method as a generalized linear method. Numerical experiments confirm hp-convergence for both time-fractional differential equations and time-fractional partial differential equations.
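The three-term recurrence of Jacobi polynomials mentioned above can be evaluated stably in a few lines. A hedged numpy sketch (the Gauss quadrature construction and the multi-domain solver itself are not reproduced), checked against the Legendre special case a = b = 0:

```python
import numpy as np

def jacobi(n, a, b, x):
    """Evaluate the Jacobi polynomial P_n^{(a,b)} at x by the standard
    three-term recurrence (stable forward recursion)."""
    x = np.asarray(x, dtype=float)
    if n == 0:
        return np.ones_like(x)
    p_prev = np.ones_like(x)                            # P_0
    p_curr = 0.5 * (a - b) + 0.5 * (a + b + 2.0) * x    # P_1
    for k in range(2, n + 1):
        c = 2.0 * k + a + b
        a1 = 2.0 * k * (k + a + b) * (c - 2.0)
        a2 = (c - 1.0) * (a * a - b * b)
        a3 = (c - 1.0) * c * (c - 2.0)
        a4 = 2.0 * (k + a - 1.0) * (k + b - 1.0) * c
        p_prev, p_curr = p_curr, ((a2 + a3 * x) * p_curr - a4 * p_prev) / a1
    return p_curr
```

For a = b = 0 the coefficients collapse to the Legendre recurrence n P_n = (2n-1) x P_{n-1} - (n-1) P_{n-2}, and at x = 1 the values reduce to the binomial C(n+a, n), which makes the recursion easy to sanity-check.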
Thermophysics Characterization of Kerosene Combustion
NASA Technical Reports Server (NTRS)
Wang, Ten-See
2001-01-01
A one-formula surrogate fuel formulation and its quasi-global combustion kinetics model are developed to support the design of injectors and thrust chambers of kerosene-fueled rocket engines. This surrogate fuel model depicts a fuel blend that properly represents the general physical and chemical properties of kerosene. The accompanying gaseous-phase thermodynamics of the surrogate fuel is anchored with the heat of formation of kerosene and verified by comparing a series of one-dimensional rocket thrust chamber calculations. The quasi-global combustion kinetics model consists of several global steps for parent fuel decomposition, soot formation, and soot oxidation and a detailed wet-CO mechanism to complete the combustion process. The final thermophysics formulations are incorporated with a computational fluid dynamics model for prediction of the combustion efficiency of an unielement, tripropellant combustor and the radiation of a kerosene-fueled thruster plume. The model predictions agreed reasonably well with those of the tests.
Partition functions of thermally dissociating diatomic molecules and related momentum problem
NASA Astrophysics Data System (ADS)
Buchowiecki, Marcin
2017-11-01
The anharmonicity and ro-vibrational coupling in ro-vibrational partition functions of diatomic molecules are analyzed for the high temperatures of the thermal dissociation regime. The numerically exact partition functions and thermal energies are calculated. At high temperatures, proper integration over momenta is important if the partition function of the molecule, understood as a bounded system, is to be obtained. The proper treatment of momentum is crucial for the correctness of high-temperature molecular simulations, since decomposition of the simulated molecule has to be avoided; the analysis of the O2, H2+, and NH3 molecules shows the importance of the βDe value.
Low-dimensional modelling of a transient cylinder wake using double proper orthogonal decomposition
NASA Astrophysics Data System (ADS)
Siegel, Stefan G.; Seidel, Jürgen; Fagley, Casey; Luchtenburg, D. M.; Cohen, Kelly; McLaughlin, Thomas
For the systematic development of feedback flow controllers, a numerical model that captures the dynamic behaviour of the flow field to be controlled is required. This poses a particular challenge for flow fields where the dynamic behaviour is nonlinear, and the governing equations cannot easily be solved in closed form. This has led to many versions of low-dimensional modelling techniques, which we extend in this work to represent better the impact of actuation on the flow. For the benchmark problem of a circular cylinder wake in the laminar regime, we introduce a novel extension to the proper orthogonal decomposition (POD) procedure that facilitates mode construction from transient data sets. We demonstrate the performance of this new decomposition by applying it to a data set from the development of the limit cycle oscillation of a circular cylinder wake simulation as well as an ensemble of transient forced simulation results. The modes obtained from this decomposition, which we refer to as the double POD (DPOD) method, correctly track the changes of the spatial modes both during the evolution of the limit cycle and when forcing is applied by transverse translation of the cylinder. The mode amplitudes, which are obtained by projecting the original data sets onto the truncated DPOD modes, can be used to construct a dynamic mathematical model of the wake that accurately predicts the wake flow dynamics within the lock-in region at low forcing amplitudes. This low-dimensional model, derived using nonlinear artificial neural network based system identification methods, is robust and accurate and can be used to simulate the dynamic behaviour of the wake flow. We demonstrate this ability not just for unforced and open-loop forced data, but also for a feedback-controlled simulation that leads to a 90% reduction in lift fluctuations. 
This indicates the possibility of constructing accurate dynamic low-dimensional models for feedback control by using unforced and transient forced data only.
NASA Astrophysics Data System (ADS)
Debnath, M.; Santoni, C.; Leonardi, S.; Iungo, G. V.
2017-03-01
The dynamics of the velocity field resulting from the interaction between the atmospheric boundary layer and a wind turbine array can significantly affect the performance of a wind power plant and the durability of wind turbines. In this work, dynamics in wind turbine wakes and instabilities of helicoidal tip vortices are detected and characterized through modal decomposition techniques. The dataset under examination consists of snapshots of the velocity field obtained from large-eddy simulations (LES) of an isolated wind turbine, for which the aerodynamic forcing exerted by the turbine blades on the atmospheric boundary layer is mimicked through the actuator line model. Particular attention is paid to the interaction between the downstream evolution of the helicoidal tip vortices and the alternate vortex shedding from the turbine tower. The LES dataset is interrogated through different modal decomposition techniques, such as proper orthogonal decomposition and dynamic mode decomposition. The dominant wake dynamics are selected for the formulation of a reduced order model, which consists of a linear time-marching algorithm where the temporal evolution of the flow dynamics is obtained from the previous temporal realization multiplied by a time-invariant operator. This article is part of the themed issue 'Wind energy in complex terrains'.
Characteristic eddy decomposition of turbulence in a channel
NASA Technical Reports Server (NTRS)
Moin, Parviz; Moser, Robert D.
1991-01-01
The proper orthogonal decomposition technique (Lumley's decomposition) is applied to the turbulent flow in a channel to extract coherent structures by decomposing the velocity field into characteristic eddies with random coefficients. In the homogeneous spatial directions, a generalization of the shot-noise expansion is used to determine the characteristic eddies. In this expansion, the Fourier coefficients of the characteristic eddy cannot be obtained from the second-order statistics. Three different techniques are used to determine the phases of these coefficients. They are based on: (1) the bispectrum, (2) a spatial compactness requirement, and (3) a functional continuity argument. Results from these three techniques are found to be similar in most respects. The implications of these techniques and the shot-noise expansion are discussed. The dominant eddy is found to contribute as much as 76 percent to the turbulent kinetic energy. In both 2D and 3D, the characteristic eddies consist of an ejection region straddled by streamwise vortices that leave the wall in the very short streamwise distance of about 100 wall units.
2012-09-01
on transformation field analysis [19], proper orthogonal decomposition [63], eigenstrains [23], and others [1, 29, 39] have brought significant...commercial finite element software (Abaqus) along with the user material subroutine utility (UMAT) is employed to solve these problems. In this section...Symmetric Coefficients TFA: Transformation Field Analysis UMAT: User Material Subroutine
Velocimetry system was then used to acquire flow field data across a series of three horizontal planes spanning from 0.25 to 1.5 times the ship hangar height...included six separate data points at gust-frequency referenced Strouhal numbers ranging from 0.430 to 1.474. A 725-Hertz time-resolved Particle Image
Nonlinear dimensionality reduction of data lying on the multicluster manifold.
Meng, Deyu; Leung, Yee; Fung, Tung; Xu, Zongben
2008-08-01
A new method, which is called decomposition-composition (D-C) method, is proposed for the nonlinear dimensionality reduction (NLDR) of data lying on the multicluster manifold. The main idea is first to decompose a given data set into clusters and independently calculate the low-dimensional embeddings of each cluster by the decomposition procedure. Based on the intercluster connections, the embeddings of all clusters are then composed into their proper positions and orientations by the composition procedure. Different from other NLDR methods for multicluster data, which consider associatively the intracluster and intercluster information, the D-C method capitalizes on the separate employment of the intracluster neighborhood structures and the intercluster topologies for effective dimensionality reduction. This, on one hand, isometrically preserves the rigid-body shapes of the clusters in the embedding process and, on the other hand, guarantees the proper locations and orientations of all clusters. The theoretical arguments are supported by a series of experiments performed on the synthetic and real-life data sets. In addition, the computational complexity of the proposed method is analyzed, and its efficiency is theoretically analyzed and experimentally demonstrated. Related strategies for automatic parameter selection are also examined.
Proper Orthogonal Decomposition in Optimal Control of Fluids
NASA Technical Reports Server (NTRS)
Ravindran, S. S.
1999-01-01
In this article, we present a reduced-order modeling approach, based on proper orthogonal decomposition (POD), suitable for active control of fluid dynamical systems. The rationale behind reduced-order modeling is that numerical simulation of the Navier-Stokes equations is still too costly for the purpose of optimization and control of unsteady flows. We examine the possibility of obtaining reduced-order models that lower the computational complexity associated with the Navier-Stokes equations while capturing the essential dynamics by using the POD. The POD allows the extraction of an optimal set of basis functions, perhaps few, from a computational or experimental database through an eigenvalue analysis. The solution is then obtained as a linear combination of this optimal set of basis functions by means of Galerkin projection. This makes it attractive for optimal control and estimation of systems governed by partial differential equations. Here we use it in the active control of fluid flows governed by the Navier-Stokes equations. We show that the resulting reduced-order model can be very efficient for the computations in optimization and control problems for unsteady flows. Finally, implementation issues and numerical experiments are presented for simulation and optimal control of fluid flow through channels.
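The POD/Galerkin recipe described above — extract an optimal basis from a snapshot database via a singular value (eigenvalue) analysis, then project the dynamics onto that basis — can be sketched in a few lines. This is a generic illustration, not the article's implementation; the linear operator A stands in for the projected dynamics.

```python
import numpy as np

def pod_basis(snapshots, r):
    """POD basis from a snapshot matrix (columns = flow snapshots).

    The leading left singular vectors form the energetically optimal
    basis; squared singular values measure the captured 'energy'.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    return U[:, :r], energy[r - 1]

def reduced_operator(A, Phi):
    """Galerkin projection of linear dynamics x' = A x onto the basis:
    a' = (Phi^T A Phi) a, with x ~ Phi a."""
    return Phi.T @ A @ Phi
```

With snapshots of effective rank r, the first r modes capture essentially all of the energy, which is the property exploited for control-oriented model reduction.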
Characterization of Flow Dynamics and Reduced-Order Description of Experimental Two-Phase Pipe Flow
NASA Astrophysics Data System (ADS)
Viggiano, Bianca; SkjæRaasen, Olaf; Tutkun, Murat; Cal, Raul Bayoan
2017-11-01
Multiphase pipe flow is investigated using proper orthogonal decomposition of tomographic X-ray data, from which holdup, cross-sectional phase distributions and phase-interface characteristics are obtained. Instantaneous phase fractions of dispersed flow and slug flow are analyzed and a reduced-order dynamical description is generated. The dispersed flow displays coherent structures in the first few modes near the horizontal center of the pipe, representing the liquid-liquid interface location, while the slug flow case shows coherent structures in the first 10 modes that correspond to the cyclical formation and breakup of the slug. The reconstruction of the fields indicates that the main features are captured by low-order dynamical descriptions utilizing less than 1% of the full-order model. The POD temporal coefficients a1, a2 and a3 show interdependence for the slug flow case. The coefficients also describe the phase-fraction holdup as a function of time for both dispersed and slug flow. These flows are highly relevant to petroleum transport pipelines, hydroelectric power and heat exchanger tubes, to name a few. The mathematical representations obtained via proper orthogonal decomposition will deepen the understanding of fundamental multiphase flow characteristics.
Modeling of a pitching and plunging airfoil using experimental flow field and load measurements
NASA Astrophysics Data System (ADS)
Troshin, Victor; Seifert, Avraham
2018-01-01
The main goal of the current paper is to outline a low-order modeling procedure of a heaving airfoil in a still fluid using experimental measurements. Due to its relative simplicity, the proposed procedure is applicable for the analysis of flow fields within complex and unsteady geometries and it is suitable for analyzing the data obtained by experimentation. Currently, this procedure is used to model and predict the flow field evolution using a small number of low profile load sensors and flow field measurements. A time delay neural network is used to estimate the flow field. The neural network estimates the amplitudes of the most energetic modes using four sensory inputs. The modes are calculated using proper orthogonal decomposition of the flow field data obtained experimentally by time-resolved, phase-locked particle imaging velocimetry. To permit the use of proper orthogonal decomposition, the measured flow field is mapped onto a stationary domain using volume preserving transformation. The analysis performed by the model showed good estimation quality within the parameter range used in the training procedure. However, the performance deteriorates for cases out of this range. This situation indicates that, to improve the robustness of the model, both the decomposition and the training data sets must be diverse in terms of input parameter space. In addition, the results suggest that the property of volume preservation of the mapping does not affect the model quality as long as the model is not based on the Galerkin approximation. Thus, it may be relaxed for cases with more complex geometry and kinematics.
Thermal Decomposition of Nd3(+), Sr2(+) and Pb2(+) Exchanged Beta″ Aluminas
1987-07-01
reconstructive recrystallization process is responsible for the formation of the MP phase; this perhaps is a surprising result. The decomposition processes of Nd3... eutectics may be present. A general trend for all decompositions of metastable substituted beta″ aluminas would therefore seem to be that when occurring
How to Compute the Partial Fraction Decomposition without Really Trying
ERIC Educational Resources Information Center
Brazier, Richard; Boman, Eugene
2007-01-01
For various reasons there has been a recent trend in college and high school calculus courses to de-emphasize teaching the Partial Fraction Decomposition (PFD) as an integration technique. This is regrettable because the Partial Fraction Decomposition is considerably more than an integration technique. It is, in fact, a general purpose tool which…
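As a small computational aside illustrating that the PFD is more than an integration trick: for a proper rational function with distinct roots, the coefficients follow directly from the "cover-up" rule A_i = P(r_i)/Q'(r_i). A minimal sketch (the function name is our own):

```python
import numpy as np

def partial_fractions(num, roots):
    """Residues A_i in P(x) / prod(x - r_i) = sum_i A_i / (x - r_i).

    Uses the 'cover-up' rule A_i = P(r_i) / Q'(r_i), valid for a proper
    rational function whose denominator has distinct roots r_i.
    """
    Q = np.poly(roots)      # coefficients of prod (x - r_i)
    dQ = np.polyder(Q)
    return [np.polyval(num, r) / np.polyval(dQ, r) for r in roots]
```

For example, (3x + 5) / ((x + 1)(x + 2)) decomposes as 2/(x + 1) + 1/(x + 2), which the routine reproduces with `num=[3, 5]` and `roots=[-1, -2]`.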
Muravyev, Nikita V; Monogarov, Konstantin A; Asachenko, Andrey F; Nechaev, Mikhail S; Ananyev, Ivan V; Fomenkov, Igor V; Kiselev, Vitaly G; Pivkina, Alla N
2016-12-21
Thermal decomposition of a novel promising high-performance explosive dihydroxylammonium 5,5'-bistetrazole-1,1'-diolate (TKX-50) was studied using a number of thermal analysis techniques (thermogravimetry, differential scanning calorimetry, and accelerating rate calorimetry, ARC). To obtain more comprehensive insight into the kinetics and mechanism of TKX-50 decomposition, a variety of complementary thermoanalytical experiments were performed under various conditions. Non-isothermal and isothermal kinetics were obtained at both atmospheric and low (up to 0.3 Torr) pressures. The gas products of thermolysis were detected in situ using IR spectroscopy, and the structure of solid-state decomposition products was determined by X-ray diffraction and scanning electron microscopy. Diammonium 5,5'-bistetrazole-1,1'-diolate (ABTOX) was directly identified to be the most important intermediate of the decomposition process. The important role of bistetrazole diol (BTO) in the mechanism of TKX-50 decomposition was also rationalized by thermolysis experiments with mixtures of TKX-50 and BTO. Several widely used thermoanalytical data processing techniques (Kissinger, isoconversional, formal kinetic approaches, etc.) were independently benchmarked against the ARC data, which are more germane to the real storage and application conditions of energetic materials. Our study revealed that none of the Arrhenius parameters reported before can properly describe the complex two-stage decomposition process of TKX-50. In contrast, we showed the superior performance of the isoconversional methods combined with isothermal measurements, which yielded the most reliable kinetic parameters of TKX-50 thermolysis. In contrast with the existing reports, the thermal stability of TKX-50 was determined in the ARC experiments to be lower than that of hexogen, but close to that of hexanitrohexaazaisowurtzitane (CL-20).
The nexus between geopolitical uncertainty and crude oil markets: An entropy-based wavelet analysis
NASA Astrophysics Data System (ADS)
Uddin, Gazi Salah; Bekiros, Stelios; Ahmed, Ali
2018-04-01
The global financial crisis and the subsequent geopolitical turbulence in energy markets have brought increased attention to proper statistical modeling, especially of the crude oil markets. In particular, we utilize a time-frequency decomposition approach based on wavelet analysis to explore the inherent dynamics and the causal interrelationships between various types of geopolitical, economic and financial uncertainty indices and oil markets. Via the introduction of a mixed discrete-continuous multiresolution analysis, we employ the entropic criterion for the selection of the optimal decomposition level of a MODWT, as well as the continuous-time coherency and phase measures for the detection of business cycle (a)synchronization. Overall, a strong heterogeneity in the revealed interrelationships is detected over time and across scales.
Domain decomposition: A bridge between nature and parallel computers
NASA Technical Reports Server (NTRS)
Keyes, David E.
1992-01-01
Domain decomposition is an intuitive organizing principle for a partial differential equation (PDE) computation, both physically and architecturally. However, its significance extends beyond the readily apparent issues of geometry and discretization, on one hand, and of modular software and distributed hardware, on the other. Engineering and computer science aspects are bridged by an old but recently enriched mathematical theory that offers the subject not only unity, but also tools for analysis and generalization. Domain decomposition induces function-space and operator decompositions with valuable properties. Function-space bases and operator splittings that are not derived from domain decompositions generally lack one or more of these properties. The evolution of domain decomposition methods for elliptically dominated problems has linked two major algorithmic developments of the last 15 years: multilevel and Krylov methods. Domain decomposition methods may be considered descendants of both classes with an inheritance from each: they are nearly optimal and at the same time efficiently parallelizable. Many computationally driven application areas are ripe for these developments. A progression is made from a mathematically informal motivation for domain decomposition methods to a specific focus on fluid dynamics applications. To be introductory rather than comprehensive, simple examples are provided while convergence proofs and algorithmic details are left to the original references; however, an attempt is made to convey their most salient features, especially where this leads to algorithmic insight.
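A concrete instance of the subject is the classical alternating Schwarz method on a 1-D Poisson problem, which exhibits the domain-induced operator splitting the passage alludes to. This is a minimal sketch with names of our own choosing; each overlapping subdomain is solved with dense linear algebra for clarity rather than an efficient tridiagonal solver.

```python
import numpy as np

def schwarz_1d_poisson(f, n=49, overlap=8, iters=60):
    """Alternating (multiplicative) Schwarz for -u'' = f, u(0) = u(1) = 0.

    The unit interval is split into two overlapping subdomains; each
    sweep solves a small local problem using the latest values from the
    other subdomain as Dirichlet boundary data.
    """
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    b = f(x) * h * h
    u = np.zeros(n)
    mid = n // 2
    doms = [(0, mid + overlap), (mid - overlap, n)]  # overlapping index ranges
    for _ in range(iters):
        for lo, hi in doms:
            m = hi - lo
            A = (np.diag(2.0 * np.ones(m))
                 - np.diag(np.ones(m - 1), 1)
                 - np.diag(np.ones(m - 1), -1))
            rhs = b[lo:hi].copy()
            if lo > 0:
                rhs[0] += u[lo - 1]   # Dirichlet data from the neighbor
            if hi < n:
                rhs[-1] += u[hi]
            u[lo:hi] = np.linalg.solve(A, rhs)
    return x, u
```

The geometric convergence with respect to overlap, and the ease of assigning each subdomain solve to a different processor, are exactly the properties the abstract attributes to domain decomposition.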
An Optimal Strategy for Accurate Bulge-to-disk Decomposition of Disk Galaxies
NASA Astrophysics Data System (ADS)
Gao, Hua; Ho, Luis C.
2017-08-01
The development of two-dimensional (2D) bulge-to-disk decomposition techniques has shown their advantages over traditional one-dimensional (1D) techniques, especially for galaxies with non-axisymmetric features. However, the full potential of 2D techniques has yet to be fully exploited. Secondary morphological features in nearby disk galaxies, such as bars, lenses, rings, disk breaks, and spiral arms, are seldom accounted for in 2D image decompositions, even though some image-fitting codes, such as GALFIT, are capable of handling them. We present detailed, 2D multi-model and multi-component decomposition of high-quality R-band images of a representative sample of nearby disk galaxies selected from the Carnegie-Irvine Galaxy Survey, using the latest version of GALFIT. The sample consists of five barred and five unbarred galaxies, spanning Hubble types from S0 to Sc. Traditional 1D decomposition is also presented for comparison. In detailed case studies of the 10 galaxies, we successfully model the secondary morphological features. Through a comparison of best-fit parameters obtained from different input surface brightness models, we identify morphological features that significantly impact bulge measurements. We show that nuclear and inner lenses/rings and disk breaks must be properly taken into account to obtain accurate bulge parameters, whereas outer lenses/rings and spiral arms have a negligible effect. We provide an optimal strategy to measure bulge parameters of typical disk galaxies, as well as prescriptions for estimating realistic uncertainties in them, which will benefit subsequent decomposition of a larger galaxy sample.
Empirical dual energy calibration (EDEC) for cone-beam computed tomography.
Stenner, Philip; Berkus, Timo; Kachelriess, Marc
2007-09-01
Material-selective imaging using dual energy CT (DECT) relies heavily on well-calibrated material decomposition functions. These require precise knowledge of the detected x-ray spectra, and even if these are exactly known, the reliability of DECT will suffer from scattered radiation. We propose an empirical method to determine the proper decomposition function. In contrast to other decomposition algorithms, our empirical dual energy calibration (EDEC) technique requires knowledge of neither the spectra nor the attenuation coefficients. The desired material-selective raw data p1 and p2 are obtained as functions of the measured attenuation data q1 and q2 (one DECT scan = two raw data sets) by passing them through a polynomial function. The polynomial's coefficients are determined using a general least-squares fit based on thresholded images of a calibration phantom. The calibration phantom's dimension should be of the same order of magnitude as the test object, but other than that no assumptions on its exact size or positioning are made. Once the decomposition coefficients are determined, DECT raw data can be decomposed by simply passing them through the polynomial. To demonstrate EDEC, simulations of an oval CTDI phantom, a lung phantom, a thorax phantom and a mouse phantom were carried out. The method was further verified by measuring a physical mouse phantom, a half-and-half-cylinder phantom and a Yin-Yang phantom with a dedicated in vivo dual source micro-CT scanner. The raw data were decomposed into their components, reconstructed, and the pixel values obtained were compared to the theoretical values. The determination of the calibration coefficients with EDEC is very robust and depends only slightly on the type of calibration phantom used. The images of the test phantoms (simulations and measurements) show a nearly perfect agreement with the theoretical μ values and density values.
Since EDEC is an empirical technique it inherently compensates for scatter components. The empirical dual energy calibration technique is a pragmatic, simple, and reliable calibration approach that produces highly quantitative DECT images.
NASA Astrophysics Data System (ADS)
Filippova, Nina V.; Glagolev, Mikhail V.
2018-03-01
The method of standard litter (tea) decomposition was used to compare decomposition rate constants (k) between different peatland ecosystems and coniferous forests in the middle taiga zone of West Siberia (near Khanty-Mansiysk). The standard protocol of the TeaComposition initiative was followed to make the data comparable among different sites and zonobiomes worldwide. This article sums up the results of short-term decomposition (3 months) on the local scale. The decomposition rate constants differed significantly among the three ecosystem types: they were higher in forest than in bogs, and treed bogs had lower decomposition constants than Sphagnum lawns. In general, the decomposition rate constants were close to those reported earlier for similar climatic conditions and habitats.
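For a single-exponential litter decay model m(t) = m0·exp(-kt), the rate constant follows directly from the measured mass loss over the incubation period. A minimal sketch; the tea-bag masses in the usage below are made-up numbers for illustration, not the study's data:

```python
import math

def decay_constant(m0, mt, t):
    """First-order decomposition rate constant k from mass loss.

    Assumes single-exponential litter decay m(t) = m0 * exp(-k t),
    so k = -ln(mt / m0) / t (with t in whatever unit 1/k is wanted in).
    """
    return -math.log(mt / m0) / t

# Hypothetical example: 2.0 g of litter reduced to 1.4 g after 90 days.
k = decay_constant(2.0, 1.4, 90.0)   # per-day rate constant
```

Comparing such k values across sites is exactly what the standardized protocol enables.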
Approaches for Subgrid Parameterization: Does Scaling Help?
NASA Astrophysics Data System (ADS)
Yano, Jun-Ichi
2016-04-01
Arguably, scaling behavior is a well-established fact in many geophysical systems, and there are already many theoretical studies elucidating this issue. However, the scaling law has been slow to enter "operational" geophysical modelling, notably weather forecast and climate projection models. The main purpose of this presentation is to ask why, and to try to answer this question. As a reference point, the presentation reviews the three major approaches to traditional subgrid parameterization: moment, PDF (probability density function), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows both in the atmosphere and in the oceans. The PDF approach is intuitively appealing, as it deals with the distribution of subgrid-scale variables in a more direct manner. The third category, originally proposed by Aubry et al. (1988) in the context of wall boundary-layer turbulence, is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (POD, or empirical orthogonal functions, EOF) as the mode-decomposition basis, but the methodology can easily be generalized to any decomposition basis. The mass-flux formulation currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally-constant modes as the expansion basis. The mode decomposition can, furthermore, be re-interpreted as a type of Galerkin approach to numerically modelling the subgrid-scale processes. Simple extrapolation of this re-interpretation further suggests that the subgrid parameterization problem may be re-interpreted as a type of mesh-refinement problem in numerical modelling. We furthermore see a link between the subgrid parameterization and downscaling problems along this line.
The mode decomposition approach would also be the best framework for linking the traditional parameterizations and the scaling perspectives. However, by seeing the link more clearly, we also see the strengths and weaknesses of introducing the scaling perspectives into parameterizations. Any diagnosis under a mode decomposition would immediately reveal the power-law nature of the spectrum. However, exploiting this knowledge in operational parameterization would be a different story. It is symbolic that POD studies have focused on representing the largest-scale coherency within a grid box under a high truncation; this problem is already hard enough. Looked at differently, the scaling law is a very concise way of characterizing many subgrid-scale variabilities in systems. We may even argue that the scaling law can provide almost complete subgrid-scale information for constructing a parameterization, but with a major missing link: its amplitude must be specified by an additional condition. This condition is called the "closure" in the parameterization problem, and it is known to be a tough one. We should also realize that studies of scaling behavior tend to be statistical, in the sense that they hardly provide complete information for constructing a parameterization: can we specify the coefficients of all the decomposition modes by a scaling law perfectly when the first few leading modes are specified? Arguably, the renormalization group (RNG) is a very powerful tool for reducing a system with scaling behavior to a low dimension, say, under an appropriate mode-decomposition procedure. However, the RNG is an analytical tool: it is extremely hard to apply to real, complex geophysical systems. It appears that we still have a long way to go before we can begin to exploit the scaling law to construct operational subgrid parameterizations in an effective manner.
Spatial, temporal, and hybrid decompositions for large-scale vehicle routing with time windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, Russell W
This paper studies the use of decomposition techniques to quickly find high-quality solutions to large-scale vehicle routing problems with time windows. It considers an adaptive decomposition scheme which iteratively decouples a routing problem based on the current solution. Earlier work considered vehicle-based decompositions that partition the vehicles across the subproblems. The subproblems can then be optimized independently and merged easily. This paper argues that vehicle-based decompositions, although very effective on various problem classes, also have limitations. In particular, they do not accommodate temporal decompositions and may produce spatial decompositions that are not focused enough. This paper then proposes customer-based decompositions, which generalize vehicle-based decouplings and allow for focused spatial and temporal decompositions. Experimental results on class R2 of the extended Solomon benchmarks demonstrate the benefits of the customer-based adaptive decomposition scheme and its spatial, temporal, and hybrid instantiations. In particular, they show that customer-based decompositions bring significant benefits over large neighborhood search, in contrast to vehicle-based decompositions.
Shanableh, A
2005-01-01
The main objective of this study was to develop generalized first-order kinetic models to represent hydrothermal decomposition and oxidation of biosolids within a wide range of temperatures (200-450 degrees C). A lumping approach was used in which oxidation of the various organic ingredients was characterized by the chemical oxygen demand (COD), and decomposition was characterized by the particulate (i.e., nonfilterable) chemical oxygen demand (PCOD). Using the Arrhenius equation (k = k0·exp(-Ea/RT)), activation energy (Ea) levels were derived from 42 continuous-flow hydrothermal treatment experiments conducted at temperatures in the range of 200-450 degrees C. Using predetermined values for k0 in the Arrhenius equation, the activation energies of the various organic ingredients were separated into 42 values for oxidation and a similar number for decomposition. The activation energy values were then classified into levels representing the relative ease with which the organic ingredients of the biosolids were oxidized or decomposed. The resulting simple first-order kinetic models adequately represented, within the experimental data range, hydrothermal decomposition of the organic particles as measured by PCOD and oxidation of the organic content as measured by COD. The modeling approach presented in the paper provides a simple and general framework suitable for assessing the relative reaction rates of the various organic ingredients of biosolids.
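The lumped first-order model can be written down directly: the Arrhenius equation k = k0·exp(-Ea/RT) gives the rate constant at temperature T, and COD (or PCOD) then decays exponentially. A minimal sketch; the k0 and Ea values in the usage below are arbitrary placeholders, not the paper's fitted values:

```python
import math

R_GAS = 8.314462618  # J/(mol K)

def arrhenius_k(k0, Ea, T):
    """Rate constant k = k0 * exp(-Ea / (R T)); Ea in J/mol, T in K."""
    return k0 * math.exp(-Ea / (R_GAS * T))

def cod_remaining(C0, k0, Ea, T, t):
    """Lumped first-order model: C(t) = C0 * exp(-k t)."""
    return C0 * math.exp(-arrhenius_k(k0, Ea, T) * t)

# Placeholder parameters: k0 = 1e8 (1/time), Ea = 100 kJ/mol.
k_200C = arrhenius_k(1e8, 1.0e5, 473.15)  # 200 degrees C
k_450C = arrhenius_k(1e8, 1.0e5, 723.15)  # 450 degrees C
```

The model reproduces the qualitative behavior the study exploits: a strongly temperature-dependent rate constant and exponential loss of COD/PCOD with treatment time.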
More on boundary holographic Witten diagrams
NASA Astrophysics Data System (ADS)
Sato, Yoshiki
2018-01-01
In this paper we discuss geodesic Witten diagrams in general holographic conformal field theories with a boundary or defect. In boundary or defect conformal field theory, two-point functions are nontrivial and can be decomposed into conformal blocks in two distinct ways: the ambient channel decomposition and the boundary channel decomposition. In our previous work [A. Karch and Y. Sato, J. High Energy Phys. 09 (2017) 121, 10.1007/JHEP09(2017)121] we considered only two-point functions of identical operators. Here we generalize that work to the situation where the operators in the two-point function are different, and we obtain the two distinct decompositions of two-point functions of different operators.
An optimized ensemble local mean decomposition method for fault detection of mechanical components
NASA Astrophysics Data System (ADS)
Zhang, Chao; Li, Zhixiong; Hu, Chao; Chen, Shuai; Wang, Jianguo; Zhang, Xiaogang
2017-03-01
Mechanical transmission systems have been widely adopted in most industrial applications, and issues related to the maintenance of these systems have attracted considerable attention in the past few decades. The recently developed ensemble local mean decomposition (ELMD) method shows satisfactory performance in fault detection of mechanical components for preventing catastrophic failures and reducing maintenance costs. However, the performance of ELMD often depends heavily on the proper selection of its model parameters. To this end, this paper proposes an optimized ensemble local mean decomposition (OELMD) method to determine an optimal set of ELMD parameters for vibration signal analysis. In OELMD, an error index termed the relative root-mean-square error (Relative RMSE) is used to evaluate the decomposition performance of ELMD for a given amplitude of the added white noise. Once a maximum Relative RMSE, corresponding to an optimal noise amplitude, is determined, OELMD then identifies the optimal noise bandwidth and ensemble number based on the Relative RMSE and the signal-to-noise ratio (SNR), respectively. Thus, all three critical parameters of ELMD (i.e., noise amplitude, noise bandwidth, and ensemble number) are optimized by OELMD. The effectiveness of OELMD was evaluated using experimental vibration signals measured from three different mechanical components (a rolling bearing, a gear, and a diesel engine) under faulty operation conditions.
Debnath, M; Santoni, C; Leonardi, S; Iungo, G V
2017-04-13
The dynamics of the velocity field resulting from the interaction between the atmospheric boundary layer and a wind turbine array can significantly affect the performance of a wind power plant and the durability of wind turbines. In this work, dynamics in wind turbine wakes and instabilities of helicoidal tip vortices are detected and characterized through modal decomposition techniques. The dataset under examination consists of snapshots of the velocity field obtained from large-eddy simulations (LES) of an isolated wind turbine, for which the aerodynamic forcing exerted by the turbine blades on the atmospheric boundary layer is mimicked through the actuator line model. Particular attention is paid to the interaction between the downstream evolution of the helicoidal tip vortices and the alternate vortex shedding from the turbine tower. The LES dataset is interrogated through different modal decomposition techniques, such as proper orthogonal decomposition and dynamic mode decomposition. The dominant wake dynamics are selected for the formulation of a reduced-order model, which consists of a linear time-marching algorithm in which the temporal evolution of the flow dynamics is obtained from the previous temporal realization multiplied by a time-invariant operator. This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Author(s).
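The linear time-marching reduced-order model described above is essentially the standard dynamic mode decomposition construction: a rank-r time-invariant operator fitted from consecutive snapshot pairs. A generic sketch, not the authors' code:

```python
import numpy as np

def dmd_operator(X, r):
    """Reduced DMD operator from a snapshot sequence.

    X holds snapshots as columns; X1 = X[:, :-1], X2 = X[:, 1:].
    The rank-r operator A_r advances POD coefficients one time step:
    a_{k+1} = A_r a_k, with x_k ~ U a_k.
    """
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vt = np.linalg.svd(X1, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r]
    A_r = U.T @ X2 @ Vt.T @ np.diag(1.0 / s)
    return U, A_r
```

Given the operator, the flow at the next instant is obtained from the current one by a single matrix-vector multiplication in the reduced coordinates, which is exactly the time-invariant marching step described in the abstract.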
Cyclic Mario worlds — color-decomposition for one-loop QCD
NASA Astrophysics Data System (ADS)
Kälin, Gregor
2018-04-01
We present a new color decomposition for QCD amplitudes at one-loop level as a generalization of the Del Duca-Dixon-Maltoni and Johansson-Ochirov decomposition at tree level. Starting from a minimal basis of planar primitive amplitudes we write down a color decomposition that is free of linear dependencies among appearing primitive amplitudes or color factors. The conjectured decomposition applies to any number of quark flavors and is independent of the choice of gauge group and matter representation. The results also hold for higher-dimensional or supersymmetric extensions of QCD. We provide expressions for any number of external quark-antiquark pairs and gluons.
SNCR De-NOx within a moderate temperature range using urea-spiked hydrazine hydrate as reductant.
Chen, H; Chen, D Z; Fan, S; Hong, L; Wang, D
2016-10-01
In this research, urea-spiked hydrazine hydrate solutions are used as reductants for the Selective Non-Catalytic Reduction (SNCR) De-NOx process below 650 °C. The urea concentration in the urea/hydrazine hydrate solutions is chosen through experimental and theoretical studies. To determine the mechanism of the De-NOx process, thermogravimetric analysis (TGA) of the urea/hydrazine hydrate solutions and their thermal decomposition in air and nitrogen atmospheres were studied to understand their decomposition behaviours and redox characteristics. A plug flow reactor (PFR) model was then adopted to simulate the De-NOx processes in a pilot-scale tubular reactor, and the calculated De-NOx efficiency vs. temperature profiles were compared with experimental results to support the mechanism and to choose the proper reductant and its reaction temperature. Both the experimental and calculated results show that when urea is spiked into the hydrazine hydrate solution such that the urea-N content is approximately 16.7%-25% of the total N content in the solution, better De-NOx efficiencies are obtained in the temperature range of 550-650 °C, in which NH3 is inactive in reducing NOx. It is also shown that for these urea-spiked hydrazine hydrate solutions, hydrazine decomposition through the pathway N2H4 + M = N2H3 + H + M is enhanced to provide radical H, which is active in reducing NO. Finally, reaction routes for the SNCR De-NOx process based on urea-spiked hydrazine hydrate at the proper temperature are proposed. Copyright © 2016 Elsevier Ltd. All rights reserved.
Alternative Modal Basis Selection Procedures for Nonlinear Random Response Simulation
NASA Technical Reports Server (NTRS)
Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.
2010-01-01
Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of the three reduced-order analyses are compared with the results of the computationally taxing simulation in the physical degrees of freedom. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.
NASA Astrophysics Data System (ADS)
Yang, Yi-Bo; Chen, Ying; Draper, Terrence; Liang, Jian; Liu, Keh-Fei
2018-03-01
We report results on the proton mass decomposition and on the related quark and glue momentum fractions. The results are based on overlap valence fermions on four ensembles of Nf = 2 + 1 DWF configurations with three lattice spacings and volumes, and several pion masses including the physical pion mass. With a 1-loop perturbative calculation and proper normalization of the glue operator, we find that the u, d, and s quark masses contribute 9(2)% to the proton mass. The quark energy and glue field energy contribute 31(5)% and 37(5)%, respectively, in the MS scheme at µ = 2 GeV. The trace anomaly gives the remaining 23(1)% contribution. The u, d, s and glue momentum fractions in the MS scheme are consistent with the global analysis at µ = 2 GeV.
NASA Astrophysics Data System (ADS)
Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.
2018-04-01
We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
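The core idea behind the randomized decomposition can be sketched with a plain randomized SVD (a Halko-style range finder). This is a simplified, hypothetical illustration, not the paper's randomized *generalized* SVD, and the function name is ours:

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=10, n_iter=2, seed=0):
    """Approximate the top-`rank` SVD of A using a randomized range finder."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = min(rank + n_oversample, min(m, n))
    # Sample the range of A with a Gaussian test matrix.
    Q = np.linalg.qr(A @ rng.standard_normal((n, k)))[0]
    # Power iterations sharpen the captured subspace for slowly decaying spectra.
    for _ in range(n_iter):
        Q = np.linalg.qr(A.T @ Q)[0]
        Q = np.linalg.qr(A @ Q)[0]
    # Solve the small SVD in the reduced space, then lift back.
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]
```

The dimension reduction happens in `Q.T @ A`: the expensive factorization is performed on a k-by-n matrix rather than the full system, which is the source of the reduced computational and memory demands described above.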
Lu, Zhipeng; Zeng, Qun; Xue, Xianggui; Zhang, Zengming; Nie, Fude; Zhang, Chaoyang
2017-08-30
Performance and behavior under high-temperature, high-pressure conditions are fundamental for many materials. In the present work we study the pressure effect on the thermal decomposition of a new energetic ionic salt (EIS), TKX-50, by confining samples in a diamond anvil cell, using Raman spectroscopy measurements and ab initio simulations. We find a quadratic increase in the decomposition temperature (T_d) of TKX-50 with increasing pressure P (T_d = 6.28P^2 + 12.94P + 493.33, with T_d in K, P in GPa, and R^2 = 0.995), and that the decomposition under various pressures is initiated by an intermolecular H-transfer reaction (a bimolecular reaction). Surprisingly, this finding is contrary to a general observation about the pressure effect on the decomposition of common energetic materials (EMs) composed of neutral molecules: increasing pressure will impede the decomposition if it starts from a bimolecular reaction. Our results also demonstrate that increasing pressure impedes the H-transfer via the enhanced long-range electrostatic repulsion between the H^(+δ) atoms of neighboring NH3OH+ cations, with blue shifts of the intermolecular H-bonds. The subsequent decomposition of the H-transferred intermediates is also suppressed, because it proceeds from a bimolecular reaction to a unimolecular one, which is generally hindered by compression. These two factors are the root cause of the retarded decomposition of TKX-50 with increasing pressure. Our finding thus revises the previously proposed concept that, for condensed materials, increasing pressure will accelerate thermal decomposition initiated by bimolecular reactions, and reveals a distinct mechanism of the pressure effect on thermal decomposition: increasing pressure does not always promote the decay of condensed materials initiated through bimolecular reactions.
Moreover, such a mechanism may apply to other EISs with similar intermolecular interactions.
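As a quick plausibility check, the reported quadratic fit can be evaluated directly. The coefficients are taken from the abstract; the function name is ours. At ambient pressure (P = 0) it recovers T_d = 493.33 K:

```python
def decomposition_temperature(p_gpa):
    """Reported quadratic fit for TKX-50: T_d = 6.28 P^2 + 12.94 P + 493.33,
    with the decomposition temperature T_d in K and pressure P in GPa."""
    return 6.28 * p_gpa**2 + 12.94 * p_gpa + 493.33
```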
Reduced-order model for underwater target identification using proper orthogonal decomposition
NASA Astrophysics Data System (ADS)
Ramesh, Sai Sudha; Lim, Kian Meng
2017-03-01
Research on underwater acoustics has seen major development over the past decade due to its widespread applications in domains such as underwater communication/navigation (SONAR), seismic exploration and oceanography. In particular, acoustic signatures from partially or fully buried targets can be used in the identification of buried mines for mine countermeasures (MCM). Although there exist several techniques to identify target properties based on SONAR images and acoustic signatures, these methods first employ a feature extraction method to represent the dominant characteristics of a data set, followed by the use of an appropriate classifier based on neural networks or the relevance vector machine. The aim of the present study is to demonstrate the application of the proper orthogonal decomposition (POD) technique in capturing dominant features of a set of scattered pressure signals, and the subsequent use of the POD modes and coefficients in the identification of partially buried underwater target parameters such as location, size and material density. Several numerical examples are presented to demonstrate the performance of the system identification method based on POD. Although the present study is based on a 2D acoustic model, the method can be easily extended to 3D models, thereby enabling cost-effective representations of large-scale data.
Algebraic approach to characterizing paraxial optical systems.
Wittig, K; Giesen, A; Hügel, H
1994-06-20
The paraxial propagation formalism for ABCD systems is reviewed and written in terms of quantum mechanics. This formalism shows that the propagation based on the Collins integral can be generalized so that, in addition, the problem of beam quality degradation due to aberrations can be treated in a natural way. Moreover, because this formalism is well elaborated and reduces the problem of propagation to simple algebraic calculations, it seems to be less complicated than other approaches. This can be demonstrated with an easy and unified derivation of several results that were previously obtained with different approaches, each matched to a specific problem. It is first shown how the canonical decomposition of arbitrary (also complex) ABCD matrices introduced by Siegman [Lasers, 2nd ed. (Oxford U. Press, London, 1986)] can be used to establish the group structure of geometric optics on the space of optical wave functions. This result is then used to derive the propagation law for arbitrary moments in general ABCD systems. Finally a proper generalization to nonparaxial propagation operators that allows us to treat arbitrary aberration effects with respect to their influence on beam quality degradation is presented.
A data-driven method to enhance vibration signal decomposition for rolling bearing fault analysis
NASA Astrophysics Data System (ADS)
Grasso, M.; Chatterton, S.; Pennacchi, P.; Colosimo, B. M.
2016-12-01
Health condition analysis and diagnostics of rotating machinery requires the capability of properly characterizing the information content of sensor signals in order to detect and identify possible fault features. Time-frequency analysis plays a fundamental role, as it allows determining both the existence and the causes of a fault. The separation of components belonging to different time-frequency scales, associated with either healthy or faulty conditions, represents a challenge that motivates the development of effective methodologies for multi-scale signal decomposition. In this framework, the Empirical Mode Decomposition (EMD) is a flexible tool, thanks to its data-driven and adaptive nature. However, the EMD usually yields an over-decomposition of the original signals into a large number of intrinsic mode functions (IMFs). The selection of the most relevant IMFs is a challenging task, and the reference literature lacks automated methods to achieve a synthetic decomposition into few physically meaningful modes while avoiding the generation of spurious or meaningless modes. The paper proposes a novel automated approach aimed at generating a decomposition into a minimal number of relevant modes, called Combined Mode Functions (CMFs), each consisting of a sum of adjacent IMFs that share similar properties. The final number of CMFs is selected in a fully data-driven way, leading to an enhanced characterization of the signal content without any information loss. A novel criterion to assess the dissimilarity between adjacent CMFs is proposed, based on probability density functions of frequency spectra. The method is suitable for analyzing vibration signals that may be periodically acquired within the operating life of rotating machinery. A rolling element bearing fault analysis based on experimental data is presented to demonstrate the performance of the method and the benefits it provides.
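The IMF-combination step can be sketched as follows. This is a simplified stand-in, not the paper's exact criterion: it treats each mode's normalized amplitude spectrum as a probability density and greedily sums adjacent modes whose spectral densities are close in L1 distance, assuming the IMFs have already been computed by some EMD routine. Function names and the threshold are ours:

```python
import numpy as np

def spectral_pdf(x):
    """Normalized amplitude spectrum, treated as a probability density."""
    mag = np.abs(np.fft.rfft(x))
    return mag / mag.sum()

def merge_adjacent_modes(imfs, threshold=0.5):
    """Greedily sum adjacent IMFs whose spectral PDFs are similar,
    producing a small set of combined mode functions (CMFs)."""
    groups = [imfs[0].copy()]
    for imf in imfs[1:]:
        d = np.abs(spectral_pdf(groups[-1]) - spectral_pdf(imf)).sum()
        if d < threshold:
            groups[-1] = groups[-1] + imf   # similar scale: same CMF
        else:
            groups.append(imf.copy())       # new combined mode function
    return groups
```

Two modes concentrated at the same frequency are merged into one CMF, while a mode at a distant frequency starts a new one, mimicking the reduction from many IMFs to a few physically meaningful modes.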
Lumley decomposition of turbulent boundary layer at high Reynolds numbers
NASA Astrophysics Data System (ADS)
Tutkun, Murat; George, William K.
2017-02-01
The decomposition proposed by Lumley in 1966 is applied to a high Reynolds number turbulent boundary layer. The experimental database was created by a hot-wire rake of 143 probes in the Laboratoire de Mécanique de Lille wind tunnel. The Reynolds numbers based on momentum thickness (Reθ) are 9800 and 19 100. Three-dimensional decomposition is performed, namely, proper orthogonal decomposition (POD) in the inhomogeneous and bounded wall-normal direction, Fourier decomposition in the homogeneous spanwise direction, and Fourier decomposition in time. The first POD modes in both cases carry nearly 50% of turbulence kinetic energy when the energy is integrated over Fourier dimensions. The eigenspectra always peak near zero frequency and most of the large scale, energy carrying features are found at the low end of the spectra. The spanwise Fourier mode which has the largest amount of energy is the first spanwise mode and its symmetrical pair. Pre-multiplied eigenspectra have only one distinct peak and it matches the secondary peak observed in the log-layer of pre-multiplied velocity spectra. Energy carrying modes obtained from the POD scale with outer scaling parameters. Full or partial reconstruction of the turbulent velocity signal based only on energetic modes or non-energetic modes revealed the behaviour of u_rms in distinct regions across the boundary layer. When u_rms is based on energetic reconstruction, there exists (a) an exponential decay from near wall to log-layer, (b) a constant layer through the log-layer, and (c) another exponential decay in the outer region. The non-energetic reconstruction reveals that u_rms has (a) an exponential decay from the near-wall to the end of log-layer and (b) a constant layer in the outer region. Scaling of u_rms using the outer parameters is best when both energetic and non-energetic profiles are combined.
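The POD step common to several records above can be sketched generically via an SVD of the snapshot matrix. This is a hedged, minimal illustration of the technique, not the rake-data processing chain of the paper (which combines POD with spanwise and temporal Fourier decompositions); names are ours:

```python
import numpy as np

def pod(snapshots):
    """POD via SVD of the snapshot matrix (columns = field snapshots in time).

    Returns spatial modes, the fraction of fluctuation energy per mode,
    and the temporal coefficients.
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    X = snapshots - mean                      # fluctuating field
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    energy = s**2 / np.sum(s**2)              # energy fraction per mode
    return U, energy, np.diag(s) @ Vt
```

For a field dominated by a single coherent structure, the first mode captures essentially all of the fluctuation energy, which is the sense in which "the first POD modes carry nearly 50% of turbulence kinetic energy" is a strong statement about real turbulence data.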
Ordering Design Tasks Based on Coupling Strengths
NASA Technical Reports Server (NTRS)
Rogers, J. L.; Bloebaum, C. L.
1994-01-01
The design process associated with large engineering systems requires an initial decomposition of the complex system into modules of design tasks which are coupled through the transference of output data. In analyzing or optimizing such a coupled system, it is essential to be able to determine which interactions figure prominently enough to significantly affect the accuracy of the system solution. Many decomposition approaches assume the capability is available to determine what design tasks and interactions exist and what order of execution will be imposed during the analysis process. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature for DeMAID (Design Manager's Aid for Intelligent Decomposition) will allow the design manager to use coupling strength information to find a proper sequence for ordering the design tasks. In addition, these coupling strengths aid in deciding if certain tasks or couplings could be removed (or temporarily suspended) from consideration to achieve computational savings without a significant loss of system accuracy. New rules are presented and two small test cases are used to show the effects of using coupling strengths in this manner.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anton, Luis; MartI, Jose M; Ibanez, Jose M
2010-05-01
We obtain renormalized sets of right and left eigenvectors of the flux vector Jacobians of the relativistic MHD equations, which are regular and span a complete basis in any physical state including degenerate ones. The renormalization procedure relies on the characterization of the degeneracy types in terms of the normal and tangential components of the magnetic field to the wave front in the fluid rest frame. Proper expressions of the renormalized eigenvectors in conserved variables are obtained through the corresponding matrix transformations. Our work completes previous analyses that present different sets of right eigenvectors for non-degenerate and degenerate states, and can be seen as a relativistic generalization of earlier work performed in classical MHD. Based on the full wave decomposition (FWD) provided by the renormalized set of eigenvectors in conserved variables, we have also developed a linearized (Roe-type) Riemann solver. Extensive testing against one- and two-dimensional standard numerical problems allows us to conclude that our solver is very robust. When compared with a family of simpler solvers that avoid the knowledge of the full characteristic structure of the equations in the computation of the numerical fluxes, our solver turns out to be less diffusive than HLL and HLLC, and comparable in accuracy to the HLLD solver. The amount of operations needed by the FWD solver makes it less efficient computationally than those of the HLL family in one-dimensional problems. However, its relative efficiency increases in multidimensional simulations.
Seasonal necrophagous insect community assembly during vertebrate carrion decomposition.
Benbow, M E; Lewis, A J; Tomberlin, J K; Pechal, J L
2013-03-01
Necrophagous invertebrates have been documented to be a predominant driver of vertebrate carrion decomposition; however, very little is understood about the assembly of these communities both within and among seasons. The objective of this study was to evaluate the seasonal differences in insect taxa composition, richness, and diversity on carrion over decomposition with the intention that such data will be useful for refining error estimates in forensic entomology. Sus scrofa (L.) carcasses (n = 3-6, depending on season) were placed in a forested habitat near Xenia, OH, during spring, summer, autumn, and winter. Taxon richness varied substantially among seasons but was generally lower (1-2 taxa) during early decomposition and increased (3-8 taxa) through intermediate stages of decomposition. Autumn and winter showed the highest richness during late decomposition. Overall, taxon richness was higher during active decay for all seasons. While invertebrate community composition was generally consistent among seasons, the relative abundance of five taxa significantly differed across seasons, demonstrating different source communities for colonization depending on the time of year. There were significantly distinct necrophagous insect communities for each stage of decomposition, and between summer and autumn and summer and winter, but the communities were similar between autumn and winter. Calliphoridae represented significant indicator taxa for summer and autumn but were replaced by Coleoptera during winter. Here we demonstrated substantial variability in necrophagous communities and their assembly on carrion over decomposition and among seasons. Recognizing this variation has important consequences for forensic entomology and future efforts to provide error rates for estimates of the postmortem interval using arthropod succession data as evidence during criminal investigations.
NASA Astrophysics Data System (ADS)
Fosas de Pando, Miguel; Schmid, Peter J.; Sipp, Denis
2016-11-01
Nonlinear model reduction for large-scale flows is an essential component in many fluid applications such as flow control, optimization, parameter space exploration and statistical analysis. In this article, we generalize the POD-DEIM method, introduced by Chaturantabut & Sorensen [1], to address nonlocal nonlinearities in the equations without loss of performance or efficiency. The nonlinear terms are represented by nested DEIM-approximations using multiple expansion bases based on the Proper Orthogonal Decomposition. These extensions are imperative, for example, for applications of the POD-DEIM method to large-scale compressible flows. The efficient implementation of the presented model-reduction technique follows our earlier work [2] on linearized and adjoint analyses and takes advantage of the modular structure of our compressible flow solver. The efficacy of the nonlinear model-reduction technique is demonstrated on the flow around an airfoil and its acoustic footprint. We obtain an accurate and robust low-dimensional model that captures the main features of the full flow.
Does oxygen exposure time control the extent of organic matter decomposition in peatlands?
NASA Astrophysics Data System (ADS)
Philben, Michael; Kaiser, Karl; Benner, Ronald
2014-05-01
The extent of peat decomposition was investigated in four cores collected along a latitudinal gradient from 56°N to 66°N in the West Siberian Lowland. The acid:aldehyde ratios of lignin phenols were significantly higher in the two northern cores compared with the two southern cores, indicating peats at the northern sites were more highly decomposed. Yields of hydroxyproline, an amino acid found in plant structural glycoproteins, were also significantly higher in northern cores compared with southern cores. Hydroxyproline-rich glycoproteins are not synthesized by microbes and are generally less reactive than bulk plant carbon, so elevated yields indicated that northern cores were more extensively decomposed than the southern cores. The southern cores experienced warmer temperatures, but were less decomposed, indicating that temperature was not the primary control of peat decomposition. The plant community oscillated between Sphagnum and vascular plant dominance in the southern cores, but vegetation type did not appear to affect the extent of decomposition. Oxygen exposure time appeared to be the strongest control of the extent of peat decomposition. The northern cores had lower accumulation rates and drier conditions, so these peats were exposed to oxic conditions for a longer time before burial in the catotelm, where anoxic conditions prevail and rates of decomposition are generally lower by an order of magnitude.
Decomposition rates and termite assemblage composition in semiarid Africa
Schuurman, G.
2005-01-01
Outside of the humid tropics, abiotic factors are generally considered the dominant regulators of decomposition, and biotic influences are frequently not considered in predicting decomposition rates. In this study, I examined the effect of termite assemblage composition and abundance on decomposition of wood litter of an indigenous species (Croton megalobotrys) in five terrestrial habitats of the highly seasonal semiarid Okavango Delta region of northern Botswana, to determine whether natural variation in decomposer community composition and abundance influences decomposition rates. I conducted the study in two areas, Xudum and Santawani, with the Xudum study preceding the Santawani study. I assessed termite assemblage composition and abundance using a grid of survey baits (rolls of toilet paper) placed on the soil surface and checked 2-4 times/month. I placed a billet (a section of wood litter) next to each survey bait and measured decomposition in a plot by averaging the mass loss of its billets. Decomposition rates varied up to sixfold among plots within the same habitat and locality, despite the fact that these plots experienced the same climate. In addition, billets decomposed significantly faster during the cooler and drier Santawani study, contradicting climate-based predictions. Because termite incidence was generally higher in Santawani plots, termite abundance initially seemed a likely determinant of decomposition in this system. However, no significant effect of termite incidence on billet mass loss rates was observed among the Xudum plots, where decomposition rates remained low even though termite incidence varied considerably. Considering the incidences of fungus-growing termites and non-fungus-growing termites separately resolves this apparent contradiction: in both Santawani and Xudum, only fungus-growing termites play a significant role in decomposition.
This result is mirrored in an analysis of the full data set of combined Xudum and Santawani data. The determination that natural variation in the abundance of a single taxonomic group of soil fauna, a termite subfamily, determines almost all observed variation in decomposition rates supports the emerging view that biotic influences may be important in many biomes and that consideration of decomposer community composition and abundance may be critical for accurate prediction of decomposition rates. © 2005 by the Ecological Society of America.
TE/TM decomposition of electromagnetic sources
NASA Technical Reports Server (NTRS)
Lindell, Ismo V.
1988-01-01
Three methods are given by which bounded EM sources can be decomposed into two parts radiating transverse electric (TE) and transverse magnetic (TM) fields with respect to a given constant direction in space. The theory applies source equivalence and nonradiating source concepts, which lead to decomposition methods based on a recursive formula or two differential equations for the determination of the TE and TM components of the original source. Decompositions for a dipole in terms of point, line, and plane sources are studied in detail. The planar decomposition is seen to match an earlier result given by Clemmow (1963). As an application of the point decomposition method, it is demonstrated that the general exact image expression for the Sommerfeld half-space problem, previously derived through heuristic reasoning, can be more straightforwardly obtained through the present decomposition method.
On the decomposition of a dynamical system into non-interacting subsystems.
NASA Technical Reports Server (NTRS)
Rosen, R.
1972-01-01
It is shown that, under rather general conditions, it is possible to formally decompose the dynamics of an n-dimensional dynamical system into a number of non-interacting subsystems. It is shown that these decompositions are in general not simply related to the kinds of observational procedures in terms of which the original state variables of the system are defined. Some consequences of this construction for reductionism in biology are discussed.
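In the linear special case, the formal decomposition into non-interacting subsystems is ordinary modal decoupling. The following minimal sketch (our own example, not from the paper) diagonalizes a 2-D linear system so each modal coordinate evolves independently of the other:

```python
import numpy as np

# A linear system x' = A x decouples into non-interacting scalar modes
# z' = Lambda z under the change of coordinates z = V^{-1} x, where
# A = V Lambda V^{-1} is an eigendecomposition.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])           # eigenvalues -1 and -2
lam, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)

x0 = np.array([1.0, 0.0])
z0 = Vinv @ x0                          # modal (non-interacting) coordinates

t = 0.7
z_t = np.exp(lam * t) * z0              # each subsystem evolves on its own
x_t = (V @ z_t).real                    # map back to original state variables
```

The paper's point is that such decompositions exist far more generally, yet the decoupled variables z need not correspond to anything directly observable: here z is a linear mixture of the physically measured states x.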
2010-08-31
Wall interaction of sprays emanating from Gas-Centered Swirl Coaxial (GCSC) injectors was experimentally studied as part of this ten-week project. ...American Society of Engineering Education (ASEE), dated August 31st 2010. Abstract: Wall interaction of sprays emanating from Gas-Centered...Edwards Air Force Base (AFRL/EAFB) have documented atomization characteristics of a Gas-Centered Swirl Coaxial (GCSC) injector [1-2], in which the
Radar Measurements of Ocean Surface Waves using Proper Orthogonal Decomposition
2017-03-30
rely on use of Fourier transforms (FFT) and filtering spectra on the linear dispersion relationship for ocean surface waves. This report discusses...the measured signal (e.g., Young et al., 1985). In addition, the methods often rely on filtering the FFT of radar backscatter or Doppler velocities...to those obtained with conventional FFT and dispersion curve filtering techniques (iv) Compare both results of (iii) to ground truth sensors (i.e.
Simulation of municipal solid waste degradation in aerobic and anaerobic bioreactor landfills.
Patil, Bhagwan Shamrao; C, Agnes Anto; Singh, Devendra Narain
2017-03-01
Municipal solid waste generation is huge in growing cities of developing nations such as India, owing to rapid industrial and population growth. In addition to various methods for treatment and disposal of municipal solid waste (landfills, composting, bio-methanation, incineration and pyrolysis), aerobic/anaerobic bioreactor landfills are gaining popularity for economical and effective disposal of municipal solid waste. However, the efficiency of municipal solid waste bioreactor landfills depends primarily on the municipal solid waste decomposition rate, which can be accelerated by monitoring moisture content and temperature using frequency domain reflectometry probes and thermocouples, respectively. The present study demonstrates that these physical properties of the heterogeneous municipal solid waste mass can be monitored using these instruments, which facilitates proper scheduling of leachate recirculation to accelerate the decomposition rate of municipal solid waste.
Alternative Modal Basis Selection Procedures For Reduced-Order Nonlinear Random Response Simulation
NASA Technical Reports Server (NTRS)
Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.
2012-01-01
Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of a computationally taxing full-order analysis in physical degrees of freedom are taken as the benchmark for comparison with the results from the three reduced-order analyses. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.
Persistent model order reduction for complex dynamical systems using smooth orthogonal decomposition
NASA Astrophysics Data System (ADS)
Ilbeigi, Shahab; Chelidze, David
2017-11-01
Full-scale complex dynamic models are not effective for parametric studies due to the inherent constraints on available computational power and storage resources. A persistent reduced order model (ROM) that is robust, stable, and provides high-fidelity simulations for a relatively wide range of parameters and operating conditions can provide a solution to this problem. The fidelity of a new framework for persistent model order reduction of large and complex dynamical systems is investigated. The framework is validated using several numerical examples including a large linear system and two complex nonlinear systems with material and geometrical nonlinearities. While the framework is used for identifying the robust subspaces obtained from both proper and smooth orthogonal decompositions (POD and SOD, respectively), the results show that SOD outperforms POD in terms of stability, accuracy, and robustness.
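One common formulation of the smooth orthogonal decomposition used here is a generalized eigenproblem between the covariance of the state trajectory and the covariance of its time derivative. The sketch below is a hedged, minimal illustration of that formulation (function name and test signal are ours, and it is not the paper's full persistent-ROM framework):

```python
import numpy as np

def smooth_orthogonal_decomposition(X, dt=1.0):
    """SOD as a generalized eigenproblem between state and velocity covariances.

    X has shape (n_samples, n_states); the largest eigenvalue corresponds
    to the smoothest (slowest) mode in the data.
    """
    V = np.gradient(X, dt, axis=0)              # finite-difference velocities
    Sx = np.cov(X, rowvar=False)                # state covariance
    Sv = np.cov(V, rowvar=False)                # velocity covariance
    vals, vecs = np.linalg.eig(np.linalg.solve(Sv, Sx))
    order = np.argsort(vals.real)[::-1]         # smoothest mode first
    return vals.real[order], vecs.real[:, order]
```

Unlike POD, which ranks modes purely by energy, this ranking penalizes rapid temporal variation, which is one intuition for why SOD-based subspaces can remain valid over a wider range of operating conditions.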
Particle image and acoustic Doppler velocimetry analysis of a cross-flow turbine wake
NASA Astrophysics Data System (ADS)
Strom, Benjamin; Brunton, Steven; Polagye, Brian
2017-11-01
Cross-flow turbines have advantageous properties for converting kinetic energy in wind and water currents to rotational mechanical energy and subsequently electrical power. A thorough understanding of cross-flow turbine wakes aids understanding of rotor flow physics, assists geometric array design, and informs control strategies for individual turbines in arrays. In this work, the wake physics of a scale model cross-flow turbine are investigated experimentally. Three-component velocity measurements are taken downstream of a two-bladed turbine in a recirculating water channel. Time-resolved stereoscopic particle image and acoustic Doppler velocimetry are compared for planes normal to and distributed along the turbine rotational axis. Wake features are described using proper orthogonal decomposition, dynamic mode decomposition, and the finite-time Lyapunov exponent. Consequences for downstream turbine placement are discussed in conjunction with two-turbine array experiments.
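The dynamic mode decomposition mentioned above can be sketched in its basic "exact DMD" form: fit a best-fit linear operator between successive snapshots and examine its eigenvalues. This is a generic, hedged illustration (names ours), not the specific processing applied to the turbine-wake velocimetry data:

```python
import numpy as np

def dmd_eigenvalues(X, rank):
    """Exact DMD: fit a best-fit linear map X2 ≈ A X1 between successive
    snapshot columns and return the eigenvalues of its low-rank
    (POD-projected) representation."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vt = np.linalg.svd(X1, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    A_tilde = U.conj().T @ X2 @ Vt.conj().T @ np.diag(1.0 / s)
    return np.linalg.eigvals(A_tilde)
```

Each eigenvalue encodes the growth/decay rate and oscillation frequency of one coherent wake structure, which is the information used when comparing wake features across measurement planes.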
Primary decomposition of zero-dimensional ideals over finite fields
NASA Astrophysics Data System (ADS)
Gao, Shuhong; Wan, Daqing; Wang, Mingsheng
2009-03-01
A new algorithm is presented for computing primary decomposition of zero-dimensional ideals over finite fields. Like Berlekamp's algorithm for univariate polynomials, the new method is based on the invariant subspace of the Frobenius map acting on the quotient algebra. The dimension of the invariant subspace equals the number of primary components, and a basis of the invariant subspace yields a complete decomposition. Unlike previous approaches for decomposing multivariate polynomial systems, the new method needs neither primality testing nor any generic projection; instead, it reduces the general decomposition problem directly to root finding of univariate polynomials over the ground field. It is also shown how Groebner basis structure can be used to obtain a partial primary decomposition without any root finding.
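The underlying principle is easiest to see in the univariate case, where it reduces to Berlekamp's construction: the nullity of (Q - I), with Q the matrix of the Frobenius map on GF(p)[x]/(f), counts the irreducible factors of a squarefree f. The sketch below illustrates that univariate analogue only (the paper's algorithm is multivariate); names are ours:

```python
def xpow_mod(e, f, p):
    """Compute x^e modulo the monic polynomial f over GF(p).

    Polynomials are coefficient lists, lowest degree first; f includes
    its leading coefficient 1 and has degree n = len(f) - 1.
    """
    n = len(f) - 1
    r = [1] + [0] * (n - 1)                       # the polynomial 1
    for _ in range(e):
        r = [0] + r                               # multiply by x
        c = r.pop()                               # overflow coeff of x^n
        r = [(ri - c * fi) % p for ri, fi in zip(r, f[:n])]
    return r

def gf_rank(M, p):
    """Rank of a matrix over GF(p) by Gauss-Jordan elimination."""
    M = [row[:] for row in M]
    rank = 0
    for col in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][col] % p), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][col], p - 2, p)         # inverse modulo a prime
        M[rank] = [(v * inv) % p for v in M[rank]]
        for r in range(len(M)):
            if r != rank and M[r][col] % p:
                c = M[r][col]
                M[r] = [(a - c * b) % p for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def count_irreducible_factors(f, p):
    """Number of irreducible factors of a squarefree monic f over GF(p):
    the nullity of (Q - I), where Q represents the Frobenius map a -> a^p
    on the quotient algebra GF(p)[x]/(f)."""
    n = len(f) - 1
    Q = [xpow_mod(i * p, f, p) for i in range(n)]     # rows: x^(ip) mod f
    QmI = [[(Q[i][j] - (1 if i == j else 0)) % p for j in range(n)]
           for i in range(n)]
    return n - gf_rank(QmI, p)
```

The multivariate method generalizes exactly this dimension count: the Frobenius-invariant subspace of the quotient algebra has dimension equal to the number of primary components.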
An adaptive model order reduction by proper snapshot selection for nonlinear dynamical problems
NASA Astrophysics Data System (ADS)
Nigro, P. S. B.; Anndif, M.; Teixeira, Y.; Pimenta, P. M.; Wriggers, P.
2016-04-01
Model Order Reduction (MOR) methods are employed in many fields of Engineering in order to reduce the processing time of complex computational simulations. A usual approach to achieve this is the application of Galerkin projection to generate representative subspaces (reduced spaces). However, when strong nonlinearities in a dynamical system are present and this technique is employed several times along the simulation, it can be very inefficient. This work proposes a new adaptive strategy, which ensures low computational cost and small error to deal with this problem. This work also presents a new method to select snapshots, named Proper Snapshot Selection (PSS). The objective of the PSS is to obtain a good balance between accuracy and computational cost by improving the adaptive strategy through a better snapshot selection in real time (online analysis). With this method, a substantial reduction of the subspace is possible while keeping the quality of the model, without the use of the Proper Orthogonal Decomposition (POD).
Implementation of the force decomposition machine for molecular dynamics simulations.
Borštnik, Urban; Miller, Benjamin T; Brooks, Bernard R; Janežič, Dušanka
2012-09-01
We present the design and implementation of the force decomposition machine (FDM), a cluster of personal computers (PCs) that is tailored to running molecular dynamics (MD) simulations using the distributed diagonal force decomposition (DDFD) parallelization method. The cluster interconnect architecture is optimized for the communication pattern of the DDFD method. Our implementation of the FDM relies on standard commodity components even for networking. Although the cluster is meant for DDFD MD simulations, it remains general enough for other parallel computations. An analysis of several MD simulation runs on both the FDM and a standard PC cluster demonstrates that the FDM's interconnect architecture provides better performance than a more general cluster interconnect. Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu
2018-04-01
A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.
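The quasi-Newton idea the abstract relies on, reusing and cheaply updating an approximate Jacobian instead of re-assembling and refactorizing it at every step, can be illustrated with Broyden's method on a toy nonlinear system. The system and tolerances below are illustrative only; the paper's solver additionally applies a balancing domain decomposition preconditioner, which is not sketched here:

```python
import numpy as np

def broyden(F, x0, J0, tol=1e-10, maxit=50):
    """Broyden's quasi-Newton iteration: keep an approximate Jacobian and
    apply a rank-one secant update each step instead of re-assembling it
    (the motivation for quasi-Newton methods in large nonlinear analyses)."""
    x, J = x0.astype(float), J0.astype(float)
    for _ in range(maxit):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        dx = np.linalg.solve(J, -f)
        x_new = x + dx
        df = F(x_new) - f
        J += np.outer(df - J @ dx, dx) / (dx @ dx)  # secant condition: J_new dx = df
        x = x_new
    return x

# Toy nonlinear system F(x) = 0 with a mild "material" nonlinearity
F = lambda x: np.array([x[0] + 0.1 * x[0]**3 - 1.0,
                        x[1] + 0.1 * x[0] * x[1] - 0.5])
x = broyden(F, np.zeros(2), np.eye(2))
print(x, F(x))   # residual is driven to ~0
```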
Walling, Cheves; Partch, Richard E.; Weil, Tomas
1975-01-01
Added substrates, acetone and t-butyl alcohol, strongly retard the decomposition of H2O2 brought about by ferric ethylenediaminetetraacetate (EDTA) at pH 8-9.5. Their relative effectiveness and the kinetic form of the retardation are consistent with their interruption of a hydroxyl radical chain that is propagated by HO· attack both upon H2O2 and on complexed and uncomplexed EDTA. Similar retardation is observed with decompositions catalyzed by ferric nitrilotriacetate and hemin, and it is proposed that such redox chains may be quite a general path for transition metal ion catalysis of H2O2 decomposition. PMID:16592209
Implementing Linear Algebra Related Algorithms on the TI-92+ Calculator.
ERIC Educational Resources Information Center
Alexopoulos, John; Abraham, Paul
2001-01-01
Demonstrates a less utilized feature of the TI-92+: its natural and powerful programming language. Shows how to implement several linear algebra related algorithms including the Gram-Schmidt process, Least Squares Approximations, Wronskians, Cholesky Decompositions, and Generalized Linear Least Square Approximations with QR Decompositions.…
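As a cross-check of the first algorithm mentioned, here is the Gram-Schmidt process sketched in Python rather than the TI-92+ programming language the article uses; the input matrix is an arbitrary example:

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: orthonormalize the rows of `vectors`."""
    basis = []
    for v in vectors:
        w = v - sum((v @ q) * q for q in basis)   # subtract projections onto basis
        n = np.linalg.norm(w)
        if n > 1e-12:                             # skip linearly dependent vectors
            basis.append(w / n)
    return np.array(basis)

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Q = gram_schmidt(A)
print(Q @ Q.T)   # ~ identity matrix: the rows are orthonormal
```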
On the Composition of Risk Preference and Belief
ERIC Educational Resources Information Center
Wakker, Peter P.
2004-01-01
Prospect theory assumes nonadditive decision weights for preferences over risky gambles. Such decision weights generalize additive probabilities. This article proposes a decomposition of decision weights into a component reflecting risk attitude and a new component depending on belief. The decomposition is based on an observable preference…
LES of flow in the street canyon
NASA Astrophysics Data System (ADS)
Fuka, Vladimír; Brechler, Josef
2012-04-01
Results of computer simulation of flow over a series of street canyons are presented in this paper. The setup is adapted from an experimental study by [4] with two different shapes of buildings. The problem is simulated by an LES model CLMM (Charles University Large Eddy Microscale Model) and results are analysed using proper orthogonal decomposition and spectral analysis. The results in the channel (layout from the experiment) are compared with results with a free top boundary.
NASA Astrophysics Data System (ADS)
Jin, Yulin; Lu, Kuan; Hou, Lei; Chen, Yushu
2017-12-01
The proper orthogonal decomposition (POD) method is a principal and efficient tool for order reduction of high-dimensional complex systems in many research fields. However, the robustness problem of this method remains unsolved, although several modified POD methods have been proposed to address it. In this paper, a new adaptive POD method called the interpolation Grassmann manifold (IGM) method is proposed to address the weakness of the local character of the interpolation tangent-space of Grassmann manifold (ITGM) method in a wider parametric region. The method is demonstrated on a nonlinear rotor system of 33 degrees of freedom (DOFs) with a pair of liquid-film bearings and a pedestal looseness fault. The motion region of the rotor system is divided into two parts: a simple motion region and a complex motion region. The adaptive POD method is compared with the ITGM method for large and small parameter spans in the two parametric regions to present the advantage of this method and the disadvantage of the ITGM method. Comparisons of the responses are used to verify the accuracy and robustness of the adaptive POD method, and the computational efficiency is also analyzed. As a result, the new adaptive POD method shows strong robustness and high computational efficiency and accuracy over a wide parameter range.
NASA Astrophysics Data System (ADS)
Zokagoa, Jean-Marie; Soulaïmani, Azzeddine
2012-06-01
This article presents a reduced-order model (ROM) of the shallow water equations (SWEs) for use in sensitivity analyses and Monte-Carlo type applications. Since, in the real world, some of the physical parameters and initial conditions embedded in free-surface flow problems are difficult to calibrate accurately in practice, the results from numerical hydraulic models are almost always corrupted with uncertainties. The main objective of this work is to derive a ROM that ensures appreciable accuracy and a considerable acceleration in the calculations so that it can be used as a surrogate model for stochastic and sensitivity analyses in real free-surface flow problems. The ROM is derived using the proper orthogonal decomposition (POD) method coupled with Galerkin projections of the SWEs, which are discretised through a finite-volume method. The main difficulty of deriving an efficient ROM is the treatment of the nonlinearities involved in SWEs. Suitable approximations that provide rapid online computations of the nonlinear terms are proposed. The proposed ROM is applied to the simulation of hypothetical flood flows in the Bordeaux breakwater, a portion of the 'Rivière des Prairies' located near Laval (a suburb of Montreal, Quebec). A series of sensitivity analyses are performed by varying the Manning roughness coefficient and the inflow discharge. The results are satisfactorily compared to those obtained by the full-order finite volume model.
Proper orthogonal decomposition-based spectral higher-order stochastic estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baars, Woutijn J., E-mail: wbaars@unimelb.edu.au; Tinney, Charles E.
A unique routine, capable of identifying both linear and higher-order coherence in multiple-input/output systems, is presented. The technique combines two well-established methods: Proper Orthogonal Decomposition (POD) and Higher-Order Spectra Analysis. The latter of these is based on known methods for characterizing nonlinear systems by way of Volterra series. In that, both linear and higher-order kernels are formed to quantify the spectral (nonlinear) transfer of energy between the system's input and output. This reduces essentially to spectral Linear Stochastic Estimation when only first-order terms are considered, and is therefore presented in the context of stochastic estimation as spectral Higher-Order Stochastic Estimation (HOSE). The trade-off to seeking higher-order transfer kernels is that the increased complexity restricts the analysis to single-input/output systems. Low-dimensional (POD-based) analysis techniques are inserted to alleviate this void as POD coefficients represent the dynamics of the spatial structures (modes) of a multi-degree-of-freedom system. The mathematical framework behind this POD-based HOSE method is first described. The method is then tested in the context of jet aeroacoustics by modeling acoustically efficient large-scale instabilities as combinations of wave packets. The growth, saturation, and decay of these spatially convecting wave packets are shown to couple both linearly and nonlinearly in the near-field to produce waveforms that propagate acoustically to the far-field for different frequency combinations.
Aeroelastic System Development Using Proper Orthogonal Decomposition and Volterra Theory
NASA Technical Reports Server (NTRS)
Lucia, David J.; Beran, Philip S.; Silva, Walter A.
2003-01-01
This research combines Volterra theory and proper orthogonal decomposition (POD) into a hybrid methodology for reduced-order modeling of aeroelastic systems. The outcome of the method is a set of linear ordinary differential equations (ODEs) describing the modal amplitudes associated with both the structural modes and the POD basis functions for the fluid. For this research, the structural modes are sine waves of varying frequency, and the Volterra-POD approach is applied to the fluid dynamics equations. The structural modes are treated as forcing terms which are impulsed as part of the fluid model realization. Using this approach, structural and fluid operators are coupled into a single aeroelastic operator. This coupling converts a free boundary fluid problem into an initial value problem, while preserving the parameter (or parameters) of interest for sensitivity analysis. The approach is applied to an elastic panel in supersonic crossflow. The hybrid Volterra-POD approach provides a low-order fluid model in state-space form. The linear fluid model is tightly coupled with a nonlinear panel model using an implicit integration scheme. The resulting aeroelastic model provides correct limit-cycle oscillation prediction over a wide range of panel dynamic pressure values. Time integration of the reduced-order aeroelastic model is four orders of magnitude faster than the high-order solution procedure developed for this research using traditional fluid and structural solvers.
NASA Astrophysics Data System (ADS)
Fan, Hong-Yi; Fan, Yue
2002-01-01
By virtue of the technique of integration within an ordered product of operators and the Schmidt decomposition of the entangled state |η〉, we reduce the general projection calculation in the theory of quantum teleportation to as simple a form as possible and present a general formalism for teleporting quantum states of continuous variables. The project is supported by the National Natural Science Foundation of China and the Educational Ministry Foundation of China.
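For finite-dimensional states, the Schmidt decomposition invoked here reduces to a singular value decomposition of the state's coefficient matrix. A minimal two-qubit sketch follows; the paper itself works with continuous variables, so this is only the finite-dimensional analogue:

```python
import numpy as np

def schmidt_coefficients(state, dA, dB):
    """Schmidt coefficients of a bipartite pure state: reshape the state
    vector into its dA x dB coefficient matrix and take its singular values."""
    C = state.reshape(dA, dB)
    return np.linalg.svd(C, compute_uv=False)

# Bell state (|00> + |11>)/sqrt(2): maximally entangled two-qubit state
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
coeffs = schmidt_coefficients(bell, 2, 2)
print(coeffs)   # two equal Schmidt coefficients, 1/sqrt(2) each
```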
Learning inverse kinematics: reduced sampling through decomposition into virtual robots.
de Angulo, Vicente Ruiz; Torras, Carme
2008-12-01
We propose a technique to speed up the learning of the inverse kinematics of a robot manipulator by decomposing it into two or more virtual robot arms. Unlike previous decomposition approaches, this one does not place any requirement on the robot architecture and is thus completely general. Parametrized self-organizing maps are particularly adequate for this type of learning, and permit comparing results obtained directly and through the decomposition. Experimentation shows that time reductions of up to two orders of magnitude are easily attained.
Data-driven sensor placement from coherent fluid structures
NASA Astrophysics Data System (ADS)
Manohar, Krithika; Kaiser, Eurika; Brunton, Bingni W.; Kutz, J. Nathan; Brunton, Steven L.
2017-11-01
Optimal sensor placement is a central challenge in the prediction, estimation and control of fluid flows. We reinterpret sensor placement as optimizing discrete samples of coherent fluid structures for full state reconstruction. This permits a drastic reduction in the number of sensors required for faithful reconstruction, since complex fluid interactions can often be described by a small number of coherent structures. Our work optimizes point sensors using the pivoted matrix QR factorization to sample coherent structures directly computed from flow data. We apply this sampling technique in conjunction with various data-driven modal identification methods, including the proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD). In contrast to POD-based sensors, DMD demonstrably enables the optimization of sensors for prediction in systems exhibiting multiple scales of dynamics. Finally, reconstruction accuracy from pivot sensors is shown to be competitive with sensors obtained using traditional computationally prohibitive optimization methods.
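The pivoted-QR sampling step can be sketched directly with SciPy: the column pivots of the transposed mode basis select the rows (spatial points) used as sensors. The toy basis below is an assumption for the sketch; in the paper the modes come from POD or DMD of flow data:

```python
import numpy as np
from scipy.linalg import qr

def qr_sensors(Psi, num_sensors):
    """Greedy sensor locations from pivoted QR on the mode basis Psi (n x r):
    the leading pivots index rows that best condition reconstruction
    from point measurements."""
    _, _, piv = qr(Psi.T, pivoting=True)
    return piv[:num_sensors]

# Toy basis: two spatial modes on a 1-D grid
x = np.linspace(0, 1, 100)
Psi = np.column_stack([np.sin(np.pi * x), np.sin(2 * np.pi * x)])
sensors = qr_sensors(Psi, 2)

# Reconstruct a state lying in span(Psi) from just those two point samples
a_true = np.array([1.0, -0.5])
y = (Psi @ a_true)[sensors]                  # sparse point measurements
a_hat = np.linalg.solve(Psi[sensors, :], y)  # invert the 2x2 sampled basis
print(np.allclose(a_hat, a_true))            # True
```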
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Watson, Willie R. (Technical Monitor)
2005-01-01
The overall objectives of this research are to formulate and validate efficient parallel algorithms, and to efficiently design and implement computer software, for solving large-scale acoustic problems arising from the unified frameworks of finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should fully take advantage of the multiple processing capabilities offered by most modern high-performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective interprocessor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving a series of structural and acoustic (symmetric and unsymmetric) problems on different computing platforms. Comparisons with existing commercial and/or public-domain software are also included, whenever possible.
Entanglement branching operator
NASA Astrophysics Data System (ADS)
Harada, Kenji
2018-01-01
We introduce an entanglement branching operator to split a composite entanglement flow in a tensor network, which is a promising theoretical tool for many-body systems. We can optimize an entanglement branching operator by solving a minimization problem based on squeezing operators. Entanglement branching is a new and useful operation for manipulating a tensor network. For example, by finding a particular entanglement structure with an entanglement branching operator, we can improve a higher-order tensor renormalization group method so that it captures a proper renormalization flow in a tensor network space. This new method yields a new type of tensor network state. The second example is a many-body decomposition of a tensor by using an entanglement branching operator. We can use it for perfect disentangling among tensors. Applying the many-body decomposition recursively, we conceptually derive projected entangled pair states from quantum states that satisfy the area law of entanglement entropy.
NASA Astrophysics Data System (ADS)
Lombard, Jean-Eloi; Xu, Hui; Moxey, Dave; Sherwin, Spencer
2016-11-01
For open-wheel race cars, such as Formula One or IndyCar, the wheels are responsible for 40% of the total drag. For road cars, drag associated with the wheels and under-carriage can represent 20-60% of total drag at highway cruise speeds. Here we consider the near wake of an unsteady rotating wheel. A numerical investigation by means of direct numerical simulation at ReD = 400-1000 is presented to further the understanding of the bifurcations the flow undergoes as the Reynolds number is increased. Direct numerical simulation is performed using Nektar++, the results of which are compared to those of Pirozzoli et al. (2012). Both proper orthogonal decomposition and dynamic mode decomposition, as well as spectral analysis, are leveraged to gain unprecedented insight into the bifurcations and subsequent topological differences of the wake as the Reynolds number is increased.
Thermogravimetric characterization and gasification of pecan nut shells.
Aldana, Hugo; Lozano, Francisco J; Acevedo, Joaquín; Mendoza, Alberto
2015-12-01
This study focuses on the evaluation of pecan nut shells as an alternative source of energy through pyrolysis and gasification. The physicochemical characteristics of the selected biomass that can influence the process efficiency, consumption rates, and the product yield, as well as create operational problems, were determined. In addition, the thermal decomposition kinetics necessary for prediction of consumption rates and yields were determined. Finally, the performance of a downdraft gasifier fed with pecan nut shells was analyzed in terms of process efficiency and exit gas characteristics. It was found that the pyrolytic decomposition of the nut shells can be modeled adequately using a single equation considering two independent parallel reactions. The performance of the gasification process can be influenced by the particle size and air flow rate, requiring a proper combination of these parameters for reliable operation and production of a valuable syngas. Copyright © 2015 Elsevier Ltd. All rights reserved.
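The "two independent parallel reactions" model can be sketched as two first-order Arrhenius decays under linear heating, with the residual mass a weighted sum of the two unreacted fractions. All kinetic parameters below are illustrative placeholders, not the fitted values from the study:

```python
import numpy as np

R_GAS = 8.314  # J/(mol K)

def tga_two_reactions(A1, E1, A2, E2, w1, beta=10.0 / 60.0, T0=300.0, T1=900.0, n=20000):
    """Residual mass fraction under linear heating (beta in K/s) for two
    independent first-order parallel reactions, the model form used for
    the pyrolytic decomposition. Each component is advanced with the exact
    exponential first-order step for stability."""
    T = np.linspace(T0, T1, n)
    dt = (T[1] - T[0]) / beta                  # time per temperature step
    a1 = a2 = 1.0                              # unreacted fraction of each component
    mass = np.empty(n)
    for i, Ti in enumerate(T):
        mass[i] = w1 * a1 + (1.0 - w1) * a2
        a1 *= np.exp(-dt * A1 * np.exp(-E1 / (R_GAS * Ti)))
        a2 *= np.exp(-dt * A2 * np.exp(-E2 / (R_GAS * Ti)))
    return T, mass

T, m = tga_two_reactions(A1=1e9, E1=120e3, A2=1e6, E2=110e3, w1=0.6)
print(round(m[0], 3), round(m[-1], 3))   # mass decays monotonically from 1.0 toward 0.0
```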
Construction of Optimally Reduced Empirical Model by Spatially Distributed Climate Data
NASA Astrophysics Data System (ADS)
Gavrilov, A.; Mukhin, D.; Loskutov, E.; Feigin, A.
2016-12-01
We present an approach to empirical reconstruction of the evolution operator, in stochastic form, from space-distributed time series. The main problem in empirical modeling consists in choosing appropriate phase variables which can efficiently reduce the dimension of the model at minimal loss of information about the system's dynamics, which consequently leads to a more robust model and better quality of the reconstruction. For this purpose we incorporate two key steps in the model. The first step is standard preliminary reduction of the observed time series dimension by decomposition via a certain empirical basis (e.g. an empirical orthogonal function basis or its nonlinear or spatio-temporal generalizations). The second step is construction of an evolution operator from principal components (PCs), the time series obtained by the decomposition. In this step we introduce a new way of reducing the dimension of the embedding in which the evolution operator is constructed. It is based on choosing proper combinations of delayed PCs to take into account the most significant spatio-temporal couplings. The evolution operator is sought as a nonlinear random mapping parameterized using artificial neural networks (ANNs). A Bayesian approach is used to learn the model and to find optimal hyperparameters: the number of PCs, the dimension of the embedding, and the degree of nonlinearity of the ANN. The results of applying the method to climate data (sea surface temperature, sea level pressure), and a comparison with the same method based on a non-reduced embedding, are presented. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS).
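The preliminary reduction via an empirical orthogonal function basis corresponds to a standard EOF/PC decomposition of the space-time data matrix, sketched here on a toy field (the data and mode count are assumptions for the sketch):

```python
import numpy as np

def eof_pcs(field, n_modes):
    """Empirical orthogonal functions (EOFs) and principal components (PCs)
    of a space-time data matrix `field` (rows: time, columns: grid points)."""
    anomalies = field - field.mean(axis=0)       # remove the time mean
    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
    eofs = Vt[:n_modes]                          # spatial patterns
    pcs = U[:, :n_modes] * s[:n_modes]           # time series of each pattern
    return eofs, pcs

# Toy "climate" field: one standing spatial pattern oscillating in time
t = np.linspace(0, 10, 300)[:, None]
x = np.linspace(0, 1, 80)[None, :]
field = np.sin(2 * np.pi * t) * np.cos(np.pi * x)
eofs, pcs = eof_pcs(field, 1)
recon = pcs @ eofs
print(np.allclose(recon, field - field.mean(axis=0)))   # rank-1 field: True
```

The evolution operator is then fitted to the low-dimensional PC time series rather than the full field.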
DFT investigations of hydrogen storage materials
NASA Astrophysics Data System (ADS)
Wang, Gang
Hydrogen is a promising new energy source: it causes no pollution and is abundant on Earth. The most difficult problem in applying hydrogen, however, is storing it effectively and safely, which can be addressed by keeping hydrogen in metal hydrides that reach a high hydrogen density in a safe way. There are several promising metal hydrides, the thermodynamic and chemical properties of which are investigated in this dissertation. Sodium alanate (NaAlH4) is one of the promising metal hydrides, with a high hydrogen storage capacity of around 7.4 wt.% and a relatively low decomposition temperature of around 100 °C with a proper catalyst. Sodium hydride is a product of the decomposition of NaAlH4 that may affect the dynamics of NaAlH4. In both materials, oxygen contamination such as OH- may influence the kinetics of the dehydriding/rehydriding processes. Thus the solid solubility of OH- groups (NaOH) in NaAlH4 and NaH is studied theoretically by DFT calculations. Magnesium borohydride [Mg(BH4)2] has a higher hydrogen capacity of about 14.9 wt.% and a decomposition temperature of around 250 °C. However, one flaw restraining its application is that polyboron compounds like MgB12H12 form and prevent further release of hydrogen. Adding transition metals that form a magnesium transition-metal ternary borohydride [MgaTMb(BH4)c] may simplify the decomposition process so that hydrogen is released together with ternary borides (MgaTMbBc). The search for probable ternary borides, the corresponding pseudo phase diagrams, and the decomposition thermodynamics is performed using DFT calculations and the GCLP method to present some possible candidates.
NASA Astrophysics Data System (ADS)
Li, Yongfu; Chen, Na; Harmon, Mark E.; Li, Yuan; Cao, Xiaoyan; Chappell, Mark A.; Mao, Jingdong
2015-10-01
A feedback between decomposition and litter chemical composition occurs with decomposition altering composition that in turn influences the decomposition rate. Elucidating the temporal pattern of chemical composition is vital to understand this feedback, but the effects of plant species and climate on chemical changes remain poorly understood, especially over multiple years. In a 10-year decomposition experiment with litter of four species (Acer saccharum, Drypetes glauca, Pinus resinosa, and Thuja plicata) from four sites that range from the arctic to tropics, we determined the abundance of 11 litter chemical constituents that were grouped into waxes, carbohydrates, lignin/tannins, and proteins/peptides using advanced 13C solid-state NMR techniques. Decomposition generally led to an enrichment of waxes and a depletion of carbohydrates, whereas the changes of other chemical constituents were inconsistent. Inconsistent convergence in chemical compositions during decomposition was observed among different litter species across a range of site conditions, whereas one litter species converged under different climate conditions. Our data clearly demonstrate that plant species rather than climate greatly alters the temporal pattern of litter chemical composition, suggesting the decomposition-chemistry feedback varies among different plant species.
Cerebrospinal fluid PCR analysis and biochemistry in bodies with severe decomposition.
Palmiere, Cristian; Vanhaebost, Jessica; Ventura, Francesco; Bonsignore, Alessandro; Bonetti, Luca Reggiani
2015-02-01
The aim of this study was to assess whether Neisseria meningitidis, Listeria monocytogenes, Streptococcus pneumoniae and Haemophilus influenzae can be identified using the polymerase chain reaction technique in the cerebrospinal fluid of severely decomposed bodies with known, noninfectious causes of death or whether postmortem changes can lead to false positive results and thus erroneous diagnostic information. Biochemical investigations, postmortem bacteriology and real-time polymerase chain reaction analysis in cerebrospinal fluid were performed in a series of medico-legal autopsies that included noninfectious causes of death with decomposition, bacterial meningitis without decomposition, bacterial meningitis with decomposition, low respiratory tract infections with decomposition and abdominal infections with decomposition. In noninfectious causes of death with decomposition, postmortem investigations failed to reveal results consistent with generalized inflammation or bacterial infections at the time of death. Real-time polymerase chain reaction analysis in cerebrospinal fluid did not identify the studied bacteria in any of these cases. The results of this study highlight the usefulness of molecular approaches in bacteriology as well as the use of alternative biological samples in postmortem biochemistry in order to obtain suitable information even in corpses with severe decompositional changes. Copyright © 2014 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Jo, Insu; Fridley, Jason D; Frank, Douglas A
2016-01-01
Invaders often have greater rates of production and produce more labile litter than natives. The increased litter quantity and quality of invaders should increase nutrient cycling through faster litter decomposition. However, the limited number of invasive species that have been included in decomposition studies has hindered the ability to generalize their impacts on decomposition rates. Further, previous decomposition studies have neglected roots. We measured litter traits and decomposition rates of leaves for 42 native and 36 nonnative woody species, and those of fine roots for 23 native and 25 nonnative species that occur in temperate deciduous forests throughout the Eastern USA. Among the leaf and root traits that differed between native and invasive species, only leaf nitrogen was significantly associated with decomposition rate. However, native and nonnative species did not differ systematically in leaf and root decomposition rates. We found that among the parameters measured, litter decomposer activity was driven by litter chemical quality rather than tissue density and structure. Our results indicate that litter decomposition rate per se is not a pathway by which forest woody invasive species affect North American temperate forest soil carbon and nutrient processes. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
Progressive Precision Surface Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duchaineau, M; Joy, KJ
2002-01-11
We introduce a novel wavelet decomposition algorithm that makes a number of powerful new surface design operations practical. Wavelets, and hierarchical representations generally, have held promise to facilitate a variety of design tasks in a unified way by approximating results very precisely, thus avoiding a proliferation of undergirding mathematical representations. However, traditional wavelet decomposition is defined from fine to coarse resolution, thus limiting its efficiency for highly precise surface manipulation when attempting to create new non-local editing methods. Our key contribution is the progressive wavelet decomposition algorithm, a general-purpose coarse-to-fine method for hierarchical fitting, based in this paper on an underlying multiresolution representation called dyadic splines. The algorithm requests input via a generic interval query mechanism, allowing a wide variety of non-local operations to be quickly implemented. The algorithm performs work proportionate to the tiny compressed output size, rather than to some arbitrarily high resolution that would otherwise be required, thus increasing performance by several orders of magnitude. We describe several design operations that are made tractable because of the progressive decomposition. Free-form pasting is a generalization of the traditional control-mesh edit, but for which the shape of the change is completely general and where the shape can be placed using a free-form deformation within the surface domain. Smoothing and roughening operations are enhanced so that an arbitrary loop in the domain specifies the area of effect. Finally, the sculpting effect of moving a tool shape along a path is simulated.
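The paper's progressive coarse-to-fine dyadic-spline algorithm is not reproduced here, but the traditional fine-to-coarse wavelet decomposition it contrasts against can be illustrated with the Haar wavelet, the simplest hierarchical split into coarse averages and detail coefficients:

```python
import numpy as np

def haar_decompose(signal):
    """Full fine-to-coarse Haar wavelet decomposition: repeatedly split the
    signal into coarse averages and detail coefficients. This is the
    *traditional* direction that the paper's progressive algorithm reverses."""
    coeffs, approx = [], np.asarray(signal, dtype=float)
    while len(approx) > 1:
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)   # coarse averages
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)   # detail coefficients
        coeffs.append(d)
        approx = a
    return approx, coeffs[::-1]   # coarsest approximation, details coarse-to-fine

def haar_reconstruct(approx, coeffs):
    for d in coeffs:
        a = np.empty(2 * len(d))
        a[0::2] = (approx + d) / np.sqrt(2)
        a[1::2] = (approx - d) / np.sqrt(2)
        approx = a
    return approx

sig = np.array([4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 0.0, 2.0])
approx, details = haar_decompose(sig)
print(np.allclose(haar_reconstruct(approx, details), sig))   # perfect reconstruction: True
```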
The Parallel of Decomposition of Linear Programs
1989-11-01
The message length is 16*(3+86) = 1424 bytes for all the test problems. Sending a message involves loading it into a buffer and copying the buffer into the proper ... Table 4.2 (message sizes) lists, among others: Primal Point and Ray, 16*(3+r); Dual Point or Ray, 8*(4+r). ... Subproblems have one mailbox for ... the model, i.e., to disaggregate. For instance, "dairy products" becomes milk, cheese, yogurt and ice cream. Adding complexity allows a model to give a more ...
Stability region maximization by decomposition-aggregation method. [Skylab stability
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Cuk, S. M.
1974-01-01
The aim of this work is to improve the estimates of the stability regions by formulating and resolving a proper maximization problem. The solution of the problem provides the best estimate of the maximal value of the structural parameter and at the same time yields the optimum comparison system, which can be used to determine the degree of stability of the Skylab. The analysis procedure is completely computerized, resulting in a flexible and powerful tool for stability considerations of large-scale linear as well as nonlinear systems.
Xia, Hong; Luo, Zhendong
2017-01-01
In this study, we establish a stabilized mixed finite element (MFE) reduced-order extrapolation (SMFEROE) model holding few unknowns for the two-dimensional (2D) unsteady conduction-convection problem via the proper orthogonal decomposition (POD) technique, analyze the existence, uniqueness, stability, and convergence of the SMFEROE solutions, and validate the correctness and reliability of the SMFEROE model by means of numerical simulations.
Errors from approximation of ODE systems with reduced order models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vassilevska, Tanya
2016-12-30
This code calculates the error from approximating systems of ordinary differential equations (ODEs) using Proper Orthogonal Decomposition (POD) Reduced Order Model (ROM) methods, and compares and analyzes the errors for two POD ROM variants. The first variant is the standard POD ROM; the second is a modification of the method using the values of the time derivatives (a.k.a. time-derivative snapshots). The code compares the errors from the two variants under different conditions.
Structure disordering and thermal decomposition of manganese oxalate dihydrate, MnC2O4·2H2O
NASA Astrophysics Data System (ADS)
Puzan, Anna N.; Baumer, Vyacheslav N.; Lisovytskiy, Dmytro V.; Mateychenko, Pavel V.
2018-04-01
It is found that the known regular structures of MnC2O4·2H2O (I) do not allow one to refine the powder X-ray pattern of (I) properly using the Rietveld method. Implementation of an order-disorder scheme [28] via inclusion of an appropriate displacement vector improves the refinement results. It is also found that, in the case of (I), a similar improvement may be achieved using data on the two phases of (I) obtained as a result of decomposition of a MnC2O4·3H2O single crystal in the mother solution after growth. Thermal decomposition of (I) produces anhydrous γ-MnC2O4 (II), the structure of which differs from the known α- and β-modifications of VIIIb transition metal oxalates. The structure of (II), solved ab initio from the powder pattern (space group Pmna, a = 7.1333 (1), b = 5.8787 (1), c = 9.0186 (2) Å, V = 378.19 (1) Å3, Z = 4 and Dx = 2.511 Mg m-3), contains seven-coordinated Mn atoms with Mn-O distances of 2.110-2.358 Å, and is not close-packed. Thermal decomposition of (II) in air proceeds via formation of amorphous MnO; heating up to 723 K is accompanied by oxidation of MnO to Mn2O3 and further recrystallization of the latter.
NASA Astrophysics Data System (ADS)
Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas
2017-04-01
Physically-based modeling is a widespread tool for understanding and managing natural systems. Given the high complexity of many such models and the huge number of model runs necessary for parameter estimation and uncertainty analysis, overall run times can be prohibitively long even on modern computer systems. An encouraging strategy to tackle this problem is model reduction. In this contribution, we compare different proper orthogonal decomposition (POD, Siade et al. (2010)) methods and their potential applications to groundwater models. The POD method performs a singular value decomposition on system states as simulated by the complex (e.g., PDE-based) groundwater model taken at several time steps, so-called snapshots. The singular vectors with the highest information content resulting from this decomposition are then used as a basis for projecting the system of model equations onto a subspace of much lower dimensionality than the original complex model, thereby greatly reducing complexity and accelerating run times. In its original form, this method is only applicable to linear problems. Many real-world groundwater models are non-linear, though. These non-linearities are introduced either through model structure (unconfined aquifers) or boundary conditions (certain Cauchy boundaries, like rivers with variable connection to the groundwater table). To date, applications of POD have focused on groundwater models simulating pumping tests in confined aquifers with constant-head boundaries. In contrast, POD model reduction either greatly loses accuracy or does not significantly reduce model run time when the above-mentioned non-linearities are introduced. We have also found that variable Dirichlet boundaries are problematic for POD model reduction. An extension to the POD method, called POD-DEIM, has been developed for non-linear groundwater models by Stanko et al. (2016). 
This method uses spatial interpolation points to build the equation system in the reduced model space, allowing the system matrices to be recalculated at every time step, as non-linear models require, while retaining the speed of the reduced model. This makes POD-DEIM applicable to groundwater models simulating unconfined aquifers. However, in our analysis, the method struggled to reproduce variable river boundaries accurately and gave no advantage over the original POD method for variable Dirichlet boundaries. We have developed another extension to POD that aims to address these remaining problems by performing a second POD operation on the model matrix on the left-hand side of the equation. The method aims to at least match the accuracy of the other methods where they are applicable, while outperforming them for setups with changing river boundaries or variable Dirichlet boundaries. We compared the new extension with original POD and POD-DEIM for different combinations of model structures and boundary conditions. The new method shows the potential of POD extensions for applications to non-linear groundwater systems and complex boundary conditions that go beyond the current, relatively limited range of applications. References: Siade, A. J., Putti, M., and Yeh, W. W.-G. (2010). Snapshot selection for groundwater model reduction using proper orthogonal decomposition. Water Resour. Res., 46(8):W08539. Stanko, Z. P., Boyce, S. E., and Yeh, W. W.-G. (2016). Nonlinear model reduction of unconfined groundwater flow using POD and DEIM. Advances in Water Resources, 97:130-143.
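The interpolation-point selection at the heart of DEIM can be sketched as the standard greedy algorithm. This is an illustrative reimplementation applied to a random basis; the groundwater model matrices themselves are not reproduced here.

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM selection of interpolation rows for a basis U (n x m).

    At step j, the next index is the row where the residual of
    interpolating U[:, j] through the already-chosen rows is largest.
    """
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        c = np.linalg.solve(U[idx, :j], U[idx, j])  # interpolation coefficients
        r = U[:, j] - U[:, :j] @ c                  # residual on full grid
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

# Basis built from snapshots of a stand-in nonlinear term
rng = np.random.default_rng(1)
S = rng.standard_normal((100, 30))
U, _, _ = np.linalg.svd(S, full_matrices=False)
p = deim_indices(U[:, :8])                          # 8 interpolation points
```

In a POD-DEIM reduced model, the nonlinear term is then evaluated only at these rows at each time step, which is what preserves the speedup for non-linear problems.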
Newton–Hooke-type symmetry of anisotropic oscillators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, P.M., E-mail: zhpm@impcas.ac.cn; Horvathy, P.A., E-mail: horvathy@lmpt.univ-tours.fr; Laboratoire de Mathématiques et de Physique Théorique, Université de Tours
2013-06-15
Rotation-less Newton–Hooke-type symmetry, found recently in the Hill problem and instrumental in explaining the center-of-mass decomposition, is generalized to an arbitrary anisotropic oscillator in the plane. Conversely, the latter system is shown, by the orbit method, to be the most general one with such a symmetry. Full Newton–Hooke symmetry is recovered in the isotropic case. Star escape from a galaxy is studied as an application. -- Highlights: ► Rotation-less Newton–Hooke (NH) symmetry is generalized to an arbitrary anisotropic oscillator. ► The orbit method is used to find the most general case for rotation-less NH symmetry. ► The NH symmetry is decomposed into Heisenberg algebras based on chiral decomposition.
D. A. Perala; D.H. Alban
1982-01-01
Compares rates of forest floor decomposition and nutrient turnover in aspen and conifers. These rates were generally most rapid under aspen, slowest under spruce, and more rapid on a loamy fine sand than on a very fine sandy loam. Compares results with literature values.
s-core network decomposition: A generalization of k-core analysis to weighted networks
NASA Astrophysics Data System (ADS)
Eidsaa, Marius; Almaas, Eivind
2013-12-01
A broad range of systems spanning biology, technology, and social phenomena may be represented and analyzed as complex networks. Recent studies of such networks using k-core decomposition have uncovered groups of nodes that play important roles. Here, we present s-core analysis, a generalization of k-core (or k-shell) analysis to complex networks where the links have different strengths or weights. We demonstrate the s-core decomposition approach on two random networks (ER and configuration model with scale-free degree distribution) where the link weights are (i) random, (ii) correlated, and (iii) anticorrelated with the node degrees. Finally, we apply the s-core decomposition approach to the protein-interaction network of the yeast Saccharomyces cerevisiae in the context of two gene-expression experiments: oxidative stress in response to cumene hydroperoxide (CHP), and fermentation stress response (FSR). We find that the innermost s-cores are (i) different from innermost k-cores, (ii) different for the two stress conditions CHP and FSR, and (iii) enriched with proteins whose biological functions give insight into how yeast manages these specific stresses.
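The s-core pruning just described can be sketched in a few lines. This is an illustrative reimplementation on a toy weighted graph; the adjacency structure and threshold are assumptions, and the full method of Eidsaa and Almaas additionally chooses the s values from the sorted node strengths.

```python
def s_core(weights, s):
    """Return the s-core of a weighted graph.

    `weights` maps node -> {neighbor: weight}. Nodes whose strength
    (summed incident weight among surviving nodes) falls below s are
    pruned repeatedly, generalizing the k-core pruning of unweighted
    networks.
    """
    nodes = set(weights)
    changed = True
    while changed:
        changed = False
        for u in list(nodes):
            strength = sum(w for v, w in weights[u].items() if v in nodes)
            if strength < s:
                nodes.remove(u)
                changed = True
    return nodes

# Toy graph: a heavy triangle a-b-c with a weakly attached node d
g = {
    "a": {"b": 2.0, "c": 2.0},
    "b": {"a": 2.0, "c": 2.0},
    "c": {"a": 2.0, "b": 2.0, "d": 0.5},
    "d": {"c": 0.5},
}
core = s_core(g, s=3.0)   # d (strength 0.5) is pruned; the triangle survives
```

With unit weights and integer s this reduces exactly to k-core decomposition.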
High-purity Cu nanocrystal synthesis by a dynamic decomposition method.
Jian, Xian; Cao, Yu; Chen, Guozhang; Wang, Chao; Tang, Hui; Yin, Liangjun; Luan, Chunhong; Liang, Yinglin; Jiang, Jing; Wu, Sixin; Zeng, Qing; Wang, Fei; Zhang, Chengui
2014-12-01
Cu nanocrystals are applied extensively in several fields, particularly microelectronics, sensors, and catalysis. The catalytic behavior of Cu nanocrystals depends mainly on their structure and particle size. In this work, the formation of high-purity Cu nanocrystals is studied using a common chemical vapor deposition precursor, cupric tartrate. The process is investigated through a combined experimental and computational approach. The decomposition kinetics are studied via differential scanning calorimetry and thermogravimetric analysis using the Flynn-Wall-Ozawa, Kissinger, and Starink methods. The growth was found to be influenced by reaction temperature, protective gas, and time. Microstructural and thermal characterizations were performed by X-ray diffraction, scanning electron microscopy, transmission electron microscopy, and differential scanning calorimetry. The decomposition of cupric tartrate at different temperatures was simulated by density functional theory calculations under the generalized gradient approximation. Highly crystalline Cu nanocrystals without floccules were obtained from the thermal decomposition of cupric tartrate at 271°C for 8 h under Ar. This general approach paves the way to the controllable synthesis of Cu nanocrystals with high purity.
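Of the kinetic methods named above, the Kissinger method is the simplest to illustrate: the activation energy follows from a linear fit of ln(β/Tp²) against 1/Tp over several heating rates β and peak temperatures Tp. The sketch below uses synthetic data generated from an assumed activation energy and pre-exponential factor, not the paper's measurements.

```python
import numpy as np

# Kissinger relation: ln(beta / Tp^2) = ln(A R / Ea) - Ea / (R Tp),
# so a linear fit in 1/Tp has slope -Ea/R.
R = 8.314           # gas constant, J/(mol K)
Ea_true = 120e3     # assumed activation energy, J/mol
A_pre = 1e12        # assumed pre-exponential factor, 1/s

Tp = np.array([500.0, 510.0, 520.0, 530.0])   # synthetic peak temperatures, K
beta = (A_pre * R / Ea_true) * Tp**2 * np.exp(-Ea_true / (R * Tp))  # heating rates

slope, _ = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
Ea_est = -slope * R                            # recovered activation energy, J/mol
```

With real DSC/TGA data, `Tp` is the exothermic peak temperature measured at each heating rate `beta`, and the fit proceeds identically.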
DOE Office of Scientific and Technical Information (OSTI.GOV)
Staschus, K.
1985-01-01
In this dissertation, efficient algorithms for electric-utility capacity expansion planning with renewable energy are developed. The algorithms include a deterministic phase that quickly finds a near-optimal expansion plan using derating and a linearized approximation to the time-dependent availability of nondispatchable energy sources. A probabilistic second phase needs comparatively few time-consuming probabilistic simulation iterations to modify this solution towards the optimal expansion plan. For the deterministic first phase, two algorithms are developed, based on a Lagrangian dual decomposition and a Generalized Benders Decomposition. The probabilistic second phase uses a Generalized Benders Decomposition approach. Extensive computational tests of the algorithms are reported. Among the deterministic algorithms, the one based on Lagrangian duality proves fastest. The two-phase approach is shown to save up to 80% in computing time compared to a purely probabilistic algorithm. The algorithms are applied to determine the optimal expansion plan for the Tijuana-Mexicali subsystem of the Mexican electric utility system. A strong recommendation to push conservation programs in the desert city of Mexicali results from this implementation.
NASA Astrophysics Data System (ADS)
Jodin, Gurvan; Scheller, Johannes; Rouchon, Jean-François; Braza, Marianna; Mit Collaboration; Imft Collaboration; Laplace Collaboration
2016-11-01
A quantitative characterization of the effects obtained by high-frequency, low-amplitude trailing-edge actuation is performed. Particle image velocimetry, as well as pressure and aerodynamic force measurements, are carried out on an airfoil model. This hybrid morphing wing model is equipped with both trailing-edge piezoelectric actuators and camber-control shape memory alloy actuators. It is shown that this actuation allows for an effective manipulation of the wake turbulent structures. Frequency-domain analysis and proper orthogonal decomposition show that proper actuation reduces the energy dissipation by favoring more coherent vortical structures. This modification of the airflow dynamics eventually allows for a tapering of the wake thickness compared to the baseline configuration. Hence, drag reductions relative to the non-actuated trailing-edge configuration are observed. Massachusetts Institute of Technology.
Self-similar pyramidal structures and signal reconstruction
NASA Astrophysics Data System (ADS)
Benedetto, John J.; Leon, Manuel; Saliani, Sandra
1998-03-01
Pyramidal structures are defined which are locally a combination of low and highpass filtering. The structures are analogous to but different from wavelet packet structures. In particular, new frequency decompositions are obtained; and these decompositions can be parameterized to establish a correspondence with a large class of Cantor sets. Further correspondences are then established to relate such frequency decompositions with more general self- similarities. The role of the filters in defining these pyramidal structures gives rise to signal reconstruction algorithms, and these, in turn, are used in the analysis of speech data.
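A single level of the low/highpass split that these pyramidal structures are built from, together with exact reconstruction, can be illustrated with the simplest two-channel pair, the Haar filters. This is a generic sketch, not the paper's filter banks.

```python
import numpy as np

def haar_analysis(x):
    """One level of Haar low/highpass decomposition (even-length input)."""
    x = np.asarray(x, dtype=float)
    low = (x[0::2] + x[1::2]) / np.sqrt(2.0)    # local averages
    high = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # local differences
    return low, high

def haar_synthesis(low, high):
    """Perfectly reconstruct the signal from the two channels."""
    x = np.empty(2 * len(low))
    x[0::2] = (low + high) / np.sqrt(2.0)
    x[1::2] = (low - high) / np.sqrt(2.0)
    return x

x = np.array([4.0, 2.0, 5.0, 7.0, 1.0, 0.0, 3.0, 3.0])
low, high = haar_analysis(x)
x_rec = haar_synthesis(low, high)   # recovers x exactly (up to rounding)
```

A pyramidal or wavelet-packet structure is obtained by recursively re-splitting one or both of the `low`/`high` channels; the choice of which channels to re-split is what parameterizes the frequency decompositions discussed in the abstract.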
Decomposition of aquatic plants in lakes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Godshalk, G.L.
1977-01-01
This study was carried out to systematically determine the effects of temperature and oxygen concentration, two environmental parameters crucial to lake metabolism in general, on decomposition of five species of aquatic vascular plants of three growth forms in a Michigan lake. Samples of dried plant material were decomposed in flasks in the laboratory under three different oxygen regimes, aerobic-to-anaerobic, strict anaerobic, and aerated, each at 10°C and 25°C. In addition, in situ decomposition of the same species was monitored using the litter bag technique under four conditions.
NASA Astrophysics Data System (ADS)
Nasri, Mohamed Aziz; Robert, Camille; Ammar, Amine; El Arem, Saber; Morel, Franck
2018-02-01
The numerical modelling of the behaviour of materials at the microstructural scale has developed greatly over the last two decades. Unfortunately, conventional resolution methods cannot simulate polycrystalline aggregates beyond tens of loading cycles, and they do not remain quantitatively accurate because of the plastic behaviour. This work presents the development of a numerical solver for Finite Element modelling of polycrystalline aggregates subjected to cyclic mechanical loading. The method is based on two concepts. The first consists of maintaining a constant stiffness matrix. The second uses a time/space model reduction method. In order to analyse the applicability and performance of a space-time separated representation, simulations are carried out on a three-dimensional polycrystalline aggregate under cyclic loading. Different numbers of elements per grain and two numbers of time increments per cycle are investigated. The results show a significant CPU time saving while maintaining good precision. Moreover, as the number of elements and the number of time increments per cycle increase, the model reduction method becomes faster than the standard solver.
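The time/space separated representation can be illustrated on a precomputed space-time field: each mode w_i(x)·λ_i(t) is found by an alternating fixed point and subtracted from the residual, mimicking the enrichment step of PGD-type solvers. This is a sketch on synthetic low-rank data, not the paper's finite element solver.

```python
import numpy as np

def separated_representation(U, rank, iters=100):
    """Greedy space-time separation U(x, t) ~ sum_i w_i(x) * l_i(t).

    Each mode is computed by alternating between the best time function
    for a fixed space function and vice versa, then subtracted
    (the enrichment step); modes and the final residual are returned.
    """
    R = U.copy()
    modes = []
    for _ in range(rank):
        w = R[:, 0].copy() + 1e-12           # initial space function
        for _ in range(iters):
            l = R.T @ w / (w @ w)            # best time function given w
            w = R @ l / (l @ l)              # best space function given l
        modes.append((w, l))
        R = R - np.outer(w, l)               # deflate the converged mode
    return modes, R

rng = np.random.default_rng(2)
# Synthetic space-time field with exact rank-3 structure (40 space dofs, 60 steps)
U = sum(np.outer(rng.standard_normal(40), rng.standard_normal(60)) for _ in range(3))
modes, residual = separated_representation(U, rank=3)
```

Three modes recover this rank-3 field almost exactly; in the solver described above, the same alternating structure is applied to the space and time weak forms of the mechanical problem rather than to a stored field.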
Domain Decomposition By the Advancing-Partition Method for Parallel Unstructured Grid Generation
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.; Zagaris, George
2009-01-01
A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.
Domain Decomposition By the Advancing-Partition Method
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2008-01-01
A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.
L.R. O' Halloran; E.T. Borer; E.W. Seabloom; A.S. MacDougall; E.E. Cleland; R.L. McCulley; S. Hobbie; S. Harpole; N.M. DeCrappeo; C.-J. Chu; J.D. Bakker; K.F. Davies; G. Du; J. Firn; N. Hagenah; K.S. Hofmockel; J.M.H. Knops; W. Li; B.A. Melbourne; J.W. Morgan; J.L. Orrock; S.M. Prober; C.J. Stevens
2013-01-01
Based on regional-scale studies, aboveground production and litter decomposition are thought to positively covary, because they are driven by shared biotic and climatic factors. Until now we have been unable to test whether production and decomposition are generally coupled across climatically dissimilar regions, because we lacked replicated data collected within a...
NASA Astrophysics Data System (ADS)
Dang, Van Tuan; Lafon, Pascal; Labergere, Carl
2017-10-01
In this work, a combination of Proper Orthogonal Decomposition (POD) and Radial Basis Functions (RBF) is proposed to build a surrogate model based on the Springback 3D bending benchmark from the Numisheet 2011 congress. The influence of two design parameters, the geometrical parameter of the die radius and the process parameter of the blank holder force, on the springback of the sheet after a stamping operation is analyzed. A classical design of experiments (DoE) uses a full factorial design to sample the parameter space with points that serve as input data for finite element method (FEM) numerical simulations of the sheet metal stamping process. The basic idea is to consider the design parameters as additional dimensions for the solution of the displacement fields. The order of the resulting high-fidelity model is reduced through the POD method, which performs model space reduction and yields the basis functions of the low-order model. Specifically, the snapshot method is used in our work, in which the basis functions are derived from the snapshot deviations of the matrix of final displacement fields of the FEM numerical simulations. The obtained basis functions are then used to determine the POD coefficients, and RBF is used for the interpolation of these POD coefficients over the parameter space. Finally, the presented POD-RBF approach can be used for shape optimization with high accuracy.
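The POD-RBF pipeline can be sketched end to end on stand-in data: build a snapshot matrix over the parameter samples, extract a truncated POD basis from the snapshot deviations, then interpolate the POD coefficients over the parameters with RBFs. The snapshots, single design parameter, and Gaussian kernel below are assumptions, not the benchmark's FEM fields.

```python
import numpy as np

rng = np.random.default_rng(3)
params = np.linspace(0.0, 1.0, 9)[:, None]        # 9 design points, 1 parameter
snaps = np.column_stack([                          # stand-in displacement fields
    np.sin(2 * np.pi * p * np.linspace(0, 1, 80)) for p in params[:, 0]
])

mean = snaps.mean(axis=1, keepdims=True)
Phi, S, _ = np.linalg.svd(snaps - mean, full_matrices=False)
Phi = Phi[:, :3]                                   # truncated POD basis
coeffs = Phi.T @ (snaps - mean)                    # POD coefficients, (3, 9)

def rbf_fit(x, y, eps=20.0):
    """Gaussian RBF interpolation weights for samples x -> values y."""
    K = np.exp(-eps * (x - x.T) ** 2)
    return np.linalg.solve(K, y.T)

def rbf_eval(x_new, x, W, eps=20.0):
    k = np.exp(-eps * (x_new - x.T) ** 2)
    return (k @ W).T

W = rbf_fit(params, coeffs)
c_new = rbf_eval(np.array([[0.35]]), params, W)    # coefficients at a new design
u_new = mean[:, 0] + Phi @ c_new[:, 0]             # surrogate displacement field
```

Evaluating `u_new` costs only a small kernel product and a basis expansion, which is why the surrogate is cheap enough to drive shape optimization.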
NASA Astrophysics Data System (ADS)
Jia, Zhongxiao; Yang, Yanfei
2018-05-01
In this paper, we propose new randomization-based algorithms for large scale linear discrete ill-posed problems with general-form regularization: min ‖Lx‖ subject to ‖Ax − b‖ ≤ τ‖e‖, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which suits only small to medium scale problems, and by randomized SVD (RSVD) algorithms that generate good low-rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating rank-(k + q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases, so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as those by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
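The TRSVD building block described above, a rank-(k + q) randomized SVD truncated to rank k, follows the basic randomized range-finder scheme. This is an illustrative sketch of that standard scheme, not the authors' MTRSVD implementation.

```python
import numpy as np

def trsvd(A, k, q=5, seed=0):
    """Rank-k truncated randomized SVD: compute a rank-(k+q) RSVD with
    oversampling q, then keep only the leading k triplets."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + q))   # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)            # orthonormal range sketch
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k, :]         # truncate rank-(k+q) to rank k

# Test matrix with rapidly decaying singular values
rng = np.random.default_rng(4)
A = (rng.standard_normal((200, 12)) * 2.0**-np.arange(12)) @ rng.standard_normal((12, 100))
U, s, Vt = trsvd(A, k=8)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

For ill-posed problems the decay of the singular values is exactly what makes such a sketch accurate; the MTRSVD methods then combine this factorization with the regularization matrix L and LSQR inner solves.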
A TV-constrained decomposition method for spectral CT
NASA Astrophysics Data System (ADS)
Guo, Xiaoyue; Zhang, Li; Xing, Yuxiang
2017-03-01
Spectral CT is attracting more and more attention in medicine, industrial nondestructive testing, and security inspection. Material decomposition, by which materials are discriminated, is an important issue in spectral CT. Because of the spectral overlap of energy channels, as well as the correlation of basis functions, it is well acknowledged that the decomposition step in spectral CT imaging causes noise amplification and artifacts in the component coefficient images. In this work, we propose material decomposition via an optimization method to improve the quality of the decomposed coefficient images. On top of the general optimization problem, total variation (TV) minimization constraints with adjustable weights are imposed on the coefficient images in our overall objective function. We solve this constrained optimization problem within the ADMM framework. Validation is performed on both a numerical dental phantom in simulation and a real phantom of a pig leg on a practical CT system using dual-energy imaging. Both numerical and physical experiments give visibly better reconstructions than a general direct inversion method. SNR and SSIM are adopted to quantitatively evaluate the image quality of the decomposed component coefficients. All results demonstrate that the TV-constrained decomposition method performs well in reducing noise without losing spatial resolution, thereby improving image quality. The method can easily be incorporated into different types of spectral imaging modalities, as well as cases with more than two energy channels.
A Graph Based Backtracking Algorithm for Solving General CSPs
NASA Technical Reports Server (NTRS)
Pang, Wanlin; Goodwin, Scott D.
2003-01-01
Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts contributed to development of a class of CSP solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph based backtracking algorithm called omega-CDBT, which shares merits and overcomes the weaknesses of both decomposition and search approaches.
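The plain chronological backtracking that omega-CDBT refines can be sketched for a binary CSP as follows. This is an illustrative search core only; the constraint-graph decomposition machinery of omega-CDBT is not reproduced here, and the coloring instance is a made-up example.

```python
def consistent(var, val, assignment, constraints):
    """Check `var = val` against all binary constraints touching assigned vars."""
    for (u, v), pred in constraints.items():
        if u == var and v in assignment and not pred(val, assignment[v]):
            return False
        if v == var and u in assignment and not pred(assignment[u], val):
            return False
    return True

def backtrack(domains, constraints, assignment=None):
    """Chronological backtracking search; returns a solution dict or None."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(domains):
        return dict(assignment)
    var = next(v for v in domains if v not in assignment)
    for val in domains[var]:
        if consistent(var, val, assignment, constraints):
            assignment[var] = val
            result = backtrack(domains, constraints, assignment)
            if result is not None:
                return result
            del assignment[var]          # undo and try the next value
    return None

# 3-colour a 4-cycle: adjacent variables must take different colours
domains = {v: ["r", "g", "b"] for v in "wxyz"}
ne = lambda a, b: a != b
edges = [("w", "x"), ("x", "y"), ("y", "z"), ("z", "w")]
constraints = {e: ne for e in edges}
sol = backtrack(domains, constraints)
```

Decomposition algorithms exploit the structure of the constraint graph (here, a simple cycle) to bound the cost of exactly this kind of search.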
DISCO-SCA and Properly Applied GSVD as Swinging Methods to Find Common and Distinctive Processes
Van Deun, Katrijn; Van Mechelen, Iven; Thorrez, Lieven; Schouteden, Martijn; De Moor, Bart; van der Werf, Mariët J.; De Lathauwer, Lieven; Smilde, Age K.; Kiers, Henk A. L.
2012-01-01
Background In systems biology it is common to obtain for the same set of biological entities information from multiple sources. Examples include expression data for the same set of orthologous genes screened in different organisms and data on the same set of culture samples obtained with different high-throughput techniques. A major challenge is to find the important biological processes underlying the data and to disentangle therein processes common to all data sources and processes distinctive for a specific source. Recently, two promising simultaneous data integration methods have been proposed to attain this goal, namely generalized singular value decomposition (GSVD) and simultaneous component analysis with rotation to common and distinctive components (DISCO-SCA). Results Both theoretical analyses and applications to biologically relevant data show that: (1) straightforward applications of GSVD yield unsatisfactory results, (2) DISCO-SCA performs well, (3) provided proper pre-processing and algorithmic adaptations, GSVD reaches a performance level similar to that of DISCO-SCA, and (4) DISCO-SCA is directly generalizable to more than two data sources. The biological relevance of DISCO-SCA is illustrated with two applications. First, in a setting of comparative genomics, it is shown that DISCO-SCA recovers a common theme of cell cycle progression and a yeast-specific response to pheromones. The biological annotation was obtained by applying Gene Set Enrichment Analysis in an appropriate way. Second, in an application of DISCO-SCA to metabolomics data for Escherichia coli obtained with two different chemical analysis platforms, it is illustrated that the metabolites involved in some of the biological processes underlying the data are detected by one of the two platforms only; therefore, platforms for microbial metabolomics should be tailored to the biological question. 
Conclusions Both DISCO-SCA and properly applied GSVD are promising integrative methods for finding common and distinctive processes in multisource data. Open source code for both methods is provided. PMID:22693578
Some Remarks on Space-Time Decompositions, and Degenerate Metrics, in General Relativity
NASA Astrophysics Data System (ADS)
Bengtsson, Ingemar
Space-time decomposition of the Hilbert-Palatini action, written in a form which admits degenerate metrics, is considered. Simple numerology shows why D = 3 and 4 are singled out as admitting a simple phase space. The canonical structure of the degenerate sector turns out to be awkward. However, the real degenerate metrics obtained as solutions are the same as those that occur in Ashtekar's formulation of complex general relativity. An exact solution of Ashtekar's equations, with degenerate metric, shows that the manifestly four-dimensional form of the action, and its 3 + 1 form, are not quite equivalent.
Determination of the thermal stability of perfluoropolyalkyl ethers by tensimetry
NASA Technical Reports Server (NTRS)
Helmick, Larry A.; Jones, William R., Jr.
1992-01-01
The thermal decomposition temperatures of several perfluoropolyalkyl ether fluids were determined with a computerized tensimeter. In general, the decomposition temperatures of the commercial fluids were all similar and significantly higher than those of the noncommercial fluids. Correlation of the decomposition temperatures with the molecular structures of the primary components of the commercial fluids revealed that the stability of the fluids was not affected by carbon chain length, branching, or adjacent difluoroformal groups. Instead, stability was limited by the presence of small quantities of thermally unstable material and/or chlorine-containing material arising from the use of chlorine-containing solvents during synthesis. Finally, correlation of decomposition temperatures with molecular weights for two fluids supports a chain-cleavage reaction mechanism for one and an unzipping reaction mechanism for the other.
NASA Astrophysics Data System (ADS)
Benner, Ronald; Hatcher, Patrick G.; Hedges, John I.
1990-07-01
Changes in the chemical composition of mangrove ( Rhizophora mangle) leaves during decomposition in tropical estuarine waters were characterized using solid-state 13C nuclear magnetic resonance (NMR) and elemental (CHNO) analysis. Carbohydrates were the most abundant components of the leaves accounting for about 50 wt% of senescent tissues. Tannins were estimated to account for about 20 wt% of leaf tissues, and lipid components, cutin, and possibly other aliphatic biopolymers in leaf cuticles accounted for about 15 wt%. Carbohydrates were generally less resistant to decomposition than the other constituents and decreased in relative concentration during decomposition. Tannins were of intermediate resistance to decomposition and remained in fairly constant proportion during decomposition. Paraffinic components were very resistant to decomposition and increased in relative concentration as decomposition progressed. Lignin was a minor component of all leaf tissues. Standard methods for the colorimetric determination of tannins (Folin-Dennis reagent) and the gravimetric determination of lignin (Klason lignin) were highly inaccurate when applied to mangrove leaves. The N content of the leaves was particularly dynamic with values ranging from 1.27 wt% in green leaves to 0.65 wt% in senescent yellow leaves attached to trees. During decomposition in the water the N content initially decreased to 0.51 wt% due to leaching, but values steadily increased thereafter to 1.07 wt% in the most degraded leaf samples. The absolute mass of N in the leaves increased during decomposition indicating that N immobilization was occurring as decomposition progressed.
Feng, Wenting; Liang, Junyi; Hale, Lauren E.; ...
2017-06-09
Quantifying soil organic carbon (SOC) decomposition under warming is critical to predicting carbon-climate feedbacks. According to the substrate regulating principle, SOC decomposition should decrease as labile SOC declines under field warming, but observations of SOC decomposition under warming do not always support this prediction. This discrepancy could result from varying changes in SOC components and soil microbial communities under warming. This study aimed to determine the decomposition of SOC components with different turnover times after being subjected to long-term field warming and/or root exclusion to limit C input, and to test whether SOC decomposition is driven by substrate lability under warming. Taking advantage of a 12-year field warming experiment in a prairie, we assessed the decomposition of SOC components by incubating soils from control and warmed plots, with and without root exclusion, for 3 years. We assayed SOC decomposition from these incubations by combining inverse modeling and microbial functional genes during decomposition with a metagenomic technique (GeoChip). The decomposition of SOC components with turnover times of years and decades, which contributed 95% of the total cumulative CO2 respiration, was greater in soils from warmed plots. But the decomposition of labile SOC was similar in warmed plots compared to the control. The diversity of C-degradation microbial genes generally declined with time during the incubation in all treatments, suggesting shifts of microbial functional groups as the substrate composition changed. Compared to the control, soils from warmed plots showed significant increases in the signal intensities of microbial genes involved in degrading complex organic compounds, implying enhanced potential abilities of microbial catabolism. These are likely responsible for the accelerated decomposition of SOC components with slow turnover rates. Overall, the shifted microbial community induced by long-term warming accelerates the decomposition of SOC components with slow turnover rates and thus amplifies the positive feedback to climate change.
Feng, Wenting; Liang, Junyi; Hale, Lauren E; Jung, Chang Gyo; Chen, Ji; Zhou, Jizhong; Xu, Minggang; Yuan, Mengting; Wu, Liyou; Bracho, Rosvel; Pegoraro, Elaine; Schuur, Edward A G; Luo, Yiqi
2017-11-01
Quantifying soil organic carbon (SOC) decomposition under warming is critical to predict carbon-climate feedbacks. According to the substrate regulating principle, SOC decomposition would decrease as labile SOC declines under field warming, but observations of SOC decomposition under warming do not always support this prediction. This discrepancy could result from varying changes in SOC components and soil microbial communities under warming. This study aimed to determine the decomposition of SOC components with different turnover times after being subjected to long-term field warming and/or root exclusion to limit C input, and to test whether SOC decomposition is driven by substrate lability under warming. Taking advantage of a 12-year field warming experiment in a prairie, we assessed the decomposition of SOC components by incubating soils from control and warmed plots, with and without root exclusion, for 3 years. We assayed SOC decomposition from these incubations by combining inverse modeling with microbial functional genes profiled during decomposition using a metagenomic technique (GeoChip). The decomposition of SOC components with turnover times of years and decades, which contributed 95% of total cumulative CO2 respiration, was greater in soils from warmed plots, but the decomposition of labile SOC was similar in warmed and control plots. The diversity of C-degradation microbial genes generally declined with time during the incubation in all treatments, suggesting shifts of microbial functional groups as substrate composition changed. Compared to the control, soils from warmed plots showed a significant increase in the signal intensities of microbial genes involved in degrading complex organic compounds, implying enhanced potential for microbial catabolism. These changes are likely responsible for the accelerated decomposition of SOC components with slow turnover rates.
Overall, the shifted microbial community induced by long-term warming accelerates the decomposition of SOC components with slow turnover rates and thus amplifies the positive feedback to climate change. © 2017 John Wiley & Sons Ltd.
Multi-scale statistical analysis of coronal solar activity
Gamborino, Diana; del-Castillo-Negrete, Diego; Martinell, Julio J.
2016-07-08
Multi-filter images from the solar corona are used to obtain temperature maps that are analyzed using techniques based on proper orthogonal decomposition (POD) in order to extract dynamical and structural information at various scales. Exploring active regions before and after a solar flare and comparing them with quiet regions, we show that the multi-scale behavior presents distinct statistical properties for each case that can be used to characterize the level of activity in a region. Information about the nature of heat transport can also be extracted from the analysis.
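The snapshot form of POD used in analyses like this reduces to an SVD of the mean-subtracted data matrix. The sketch below is a generic illustration, not the authors' code; it assumes each temperature map has been flattened into one column of the snapshot matrix:

```python
import numpy as np

def pod(snapshots):
    """Snapshot POD via SVD.

    snapshots: (n_space, n_time) array, one flattened field per column.
    Returns the spatial modes (columns of U) and the fraction of
    fluctuation energy captured by each mode.
    """
    # Subtract the temporal mean so modes describe fluctuations
    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(fluct, full_matrices=False)
    energy = s**2 / np.sum(s**2)  # normalized modal energies
    return U, energy
```

The normalized squared singular values give the energy content per mode, which is the quantity multi-scale statistics of this kind are typically built from.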
Vector autoregressive models: A Gini approach
NASA Astrophysics Data System (ADS)
Mussard, Stéphane; Ndiaye, Oumar Hamady
2018-02-01
In this paper, it is proven that the usual VAR models may be estimated in the Gini sense, that is, on an ℓ1 metric space. The Gini regression is robust to outliers. As a consequence, when data are contaminated by extreme values, we show that semi-parametric VAR-Gini regressions may be used to obtain robust estimators. Inference about the estimators is based on the ℓ1 norm. Also, impulse response functions and Gini decompositions for forecast errors are introduced. Finally, Granger's causality tests are properly derived based on U-statistics.
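The robustness motivation can be illustrated on the simplest autoregression by replacing the squared-error criterion with an absolute-deviation (ℓ1) one. This is only a sketch in the spirit of ℓ1-type estimators: it uses a plain least-absolute-deviations grid search as a stand-in, not the paper's Gini estimator:

```python
import numpy as np

def fit_ar1(y, loss="l2"):
    """Fit y_t = a * y_{t-1}, either by least squares ("l2") or, as a
    robust l1 alternative, by least absolute deviations ("l1", via a
    crude grid search over the coefficient a)."""
    x, t = y[:-1], y[1:]
    if loss == "l2":
        # Closed-form OLS slope through the origin
        return np.dot(x, t) / np.dot(x, x)
    grid = np.linspace(-1.0, 1.0, 2001)
    # Sum of absolute residuals for every candidate coefficient
    cost = np.abs(t[None, :] - grid[:, None] * x[None, :]).sum(axis=1)
    return grid[np.argmin(cost)]
```

With heavy-tailed or contaminated data the ℓ1 fit is far less sensitive to single extreme observations; that is the property the VAR-Gini construction carries over to the multivariate setting.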
The Characteristics of Turbulence in Curved Pipes under Highly Pulsatile Flow Conditions
NASA Astrophysics Data System (ADS)
Kalpakli, A.; Örlü, R.; Tillmark, N.; Alfredsson, P. Henrik
High speed stereoscopic particle image velocimetry has been employed to provide unique data from a steady and highly pulsatile turbulent flow at the exit of a 90 degree pipe bend. Both the unsteady behaviour of the Dean cells under steady conditions (the so-called "swirl switching" phenomenon) and the secondary flow under pulsations have been reconstructed through proper orthogonal decomposition. The present data set constitutes - to the authors' knowledge - the first detailed investigation of a turbulent, pulsatile flow through a pipe bend.
Dispersive analysis of ω/Φ → 3π, πγ*
Danilkin, Igor V.; Fernandez Ramirez, Cesar; Guo, Peng; ...
2015-05-01
The decays ω/Φ → 3π are considered in the dispersive framework that is based on the isobar decomposition and subenergy unitarity. The inelastic contributions are parametrized by the power series in a suitably chosen conformal variable that properly accounts for the analytic properties of the amplitude. The Dalitz plot distributions and integrated decay widths are presented. Our results indicate that the final-state interactions may be sizable. As a further application of the formalism we also compute the electromagnetic transition form factors of ω/Φ → π⁰γ*.
Cascaded systems analysis of noise and detectability in dual-energy cone-beam CT
Gang, Grace J.; Zbijewski, Wojciech; Webster Stayman, J.; Siewerdsen, Jeffrey H.
2012-01-01
Purpose: Dual-energy computed tomography and dual-energy cone-beam computed tomography (DE-CBCT) are promising modalities for applications ranging from vascular to breast, renal, hepatic, and musculoskeletal imaging. Accordingly, the optimization of imaging techniques for such applications would benefit significantly from a general theoretical description of image quality that properly incorporates factors of acquisition, reconstruction, and tissue decomposition in DE tomography. This work reports a cascaded systems analysis model that includes the Poisson statistics of x rays (quantum noise), detector model (flat-panel detectors), anatomical background, image reconstruction (filtered backprojection), DE decomposition (weighted subtraction), and simple observer models to yield a task-based framework for DE technique optimization. Methods: The theoretical framework extends previous modeling of DE projection radiography and CBCT. Signal and noise transfer characteristics are propagated through physical and mathematical stages of image formation and reconstruction. Dual-energy decomposition was modeled according to weighted subtraction of low- and high-energy images to yield the 3D DE noise-power spectrum (NPS) and noise-equivalent quanta (NEQ), which, in combination with observer models and the imaging task, yields the dual-energy detectability index (d′). Model calculations were validated with NPS and NEQ measurements from an experimental imaging bench simulating the geometry of a dedicated musculoskeletal extremities scanner. Imaging techniques, including kVp pair and dose allocation, were optimized using d′ as an objective function for three example imaging tasks: (1) kidney stone discrimination; (2) iodine vs bone in a uniform, soft-tissue background; and (3) soft tissue tumor detection on power-law anatomical background. 
Results: Theoretical calculations of DE NPS and NEQ demonstrated good agreement with experimental measurements over a broad range of imaging conditions. Optimization results suggest a lower fraction of total dose imparted by the low-energy acquisition, a finding consistent with previous literature. The selection of optimal kVp pair reveals the combined effect of both quantum noise and contrast in the kidney stone discrimination and soft-tissue tumor detection tasks, whereas the K-edge effect of iodine was the dominant factor in determining kVp pairs in the iodine vs bone task. The soft-tissue tumor task illustrated the benefit of dual-energy imaging in eliminating anatomical background noise and improving detectability beyond that achievable by single-energy scans. Conclusions: This work established a task-based theoretical framework that is predictive of DE image quality. The model can be utilized in optimizing a broad range of parameters in image acquisition, reconstruction, and decomposition, providing a useful tool for maximizing DE-CBCT image quality and reducing dose. PMID:22894440
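The ensemble NPS estimate underlying such measurements is conventionally computed as the scaled, ensemble-averaged squared DFT of zero-mean noise-only realizations. The following minimal sketch assumes that convention (detrending, ROI selection, and the dual-energy weighted subtraction itself are omitted); it is an illustration, not the authors' cascaded-systems model:

```python
import numpy as np

def nps_2d(noise_images, pixel_size):
    """Ensemble-averaged 2D noise-power spectrum.

    noise_images: (n_realizations, N, N) stack of zero-mean,
    noise-only images; pixel_size: pixel pitch (e.g. mm).
    Uses the standard normalization NPS = (dx*dy / (Nx*Ny)) <|DFT|^2>.
    """
    n, N, _ = noise_images.shape
    dft = np.fft.fft2(noise_images)  # 2D DFT of each realization
    nps = (np.abs(dft) ** 2).mean(axis=0) * pixel_size**2 / (N * N)
    return np.fft.fftshift(nps)  # zero frequency at the center
```

A useful sanity check on the normalization is Parseval's relation: integrating the NPS over frequency recovers the pixel variance of the noise.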
A Wavelet-Based Methodology for Grinding Wheel Condition Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, T. W.; Ting, C.F.; Qu, Jun
2007-01-01
Grinding wheel surface condition changes as more material is removed. This paper presents a wavelet-based methodology for grinding wheel condition monitoring based on acoustic emission (AE) signals. Grinding experiments in creep feed mode were conducted to grind alumina specimens with a resinoid-bonded diamond wheel using two different conditions. During the experiments, AE signals were collected when the wheel was 'sharp' and when the wheel was 'dull'. Discriminant features were then extracted from each raw AE signal segment using the discrete wavelet decomposition procedure. An adaptive genetic clustering algorithm was finally applied to the extracted features in order to distinguish different states of grinding wheel condition. The test results indicate that the proposed methodology can achieve 97% clustering accuracy for the high material removal rate condition, 86.7% for the low material removal rate condition, and 76.7% for the combined grinding conditions if the base wavelet, the decomposition level, and the GA parameters are properly selected.
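The feature-extraction step can be sketched with the simplest wavelet. Since the paper's base wavelet and AE preprocessing are selected empirically and not specified here, the Haar transform below is only an illustrative stand-in:

```python
import numpy as np

def haar_dwt(signal, levels):
    """Multi-level Haar wavelet decomposition of a 1-D signal whose
    length is divisible by 2**levels.  Returns the final approximation
    and the list of detail coefficients, coarsest last."""
    a = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        # Orthonormal Haar analysis step on pairs of samples
        approx = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        detail = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        details.append(detail)
        a = approx
    return a, details

def band_energies(signal, levels):
    """Feature vector: relative energy per wavelet band, a common
    discriminant feature for AE-signal classification."""
    a, details = haar_dwt(signal, levels)
    e = np.array([np.sum(d**2) for d in details] + [np.sum(a**2)])
    return e / e.sum()
```

Because the Haar transform is orthonormal, the coefficient energies sum to the signal energy, so the relative band energies form a well-normalized feature vector for a downstream clustering algorithm.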
Material Compatibility with Space Storable Propellants. Design Guidebook
NASA Technical Reports Server (NTRS)
Uney, P. E.; Fester, D. A.
1972-01-01
An important consideration in the design of spacecraft for interplanetary missions is the compatibility of storage materials with the propellants. Serious problems can arise because many propellants are either extremely reactive or subject to catalytic decomposition, making the selection of proper materials of construction for propellant containment and control a critical requirement for the long-life applications. To aid in selecting materials and designing and evaluating various propulsion subsystems, available information on the compatibility of spacecraft materials with propellants of interest was compiled from literature searches and personal contacts. The compatibility of both metals and nonmetals with hydrazine, monomethyl hydrazine, nitrated hydrazine, and diborane fuels and nitrogen tetroxide, fluorine, oxygen difluoride, and Flox oxidizers was surveyed. These fuels and oxidizers encompass the wide variety of problems encountered in propellant storage. As such, they present worst case situations of the propellant affecting the material and the material affecting the propellant. This includes material attack, propellant decomposition, and the formation of clogging materials.
Low-dimensional and Data Fusion Techniques Applied to a Rectangular Supersonic Multi-stream Jet
NASA Astrophysics Data System (ADS)
Berry, Matthew; Stack, Cory; Magstadt, Andrew; Ali, Mohd; Gaitonde, Datta; Glauser, Mark
2017-11-01
Low-dimensional models of experimental and simulation data for a complex supersonic jet were fused to reconstruct time-dependent proper orthogonal decomposition (POD) coefficients. The jet consists of a multi-stream rectangular single expansion ramp nozzle, containing a core stream operating at Mj,1 = 1.6, and bypass stream at Mj,3 = 1.0 with an underlying deck. POD was applied to schlieren and PIV data to acquire the spatial basis functions. These eigenfunctions were projected onto their corresponding time-dependent large eddy simulation (LES) fields to reconstruct the temporal POD coefficients. This reconstruction was able to resolve spectral peaks that were previously aliased due to the slower sampling rates of the experiments. Additionally, dynamic mode decomposition (DMD) was applied to the experimental and LES datasets, and the spatio-temporal characteristics were compared to POD. The authors would like to acknowledge AFOSR, program manager Dr. Doug Smith, for funding this research, Grant No. FA9550-15-1-0435.
Emissions of volatile organic compounds during the decomposition of plant litter
NASA Astrophysics Data System (ADS)
Gray, Christopher M.; Monson, Russell K.; Fierer, Noah
2010-09-01
Volatile organic compounds (VOCs) are emitted during plant litter decomposition, and such VOCs can have wide-ranging impacts on atmospheric chemistry, terrestrial biogeochemistry, and soil ecology. However, we currently have a limited understanding of the relative importance of biotic versus abiotic sources of these VOCs and whether distinct types of litter emit different types and quantities of VOCs during decomposition. We analyzed VOCs emitted by microbes or by abiotic mechanisms during the decomposition of litter from 12 plant species in a laboratory experiment using proton transfer reaction mass spectrometry (PTR-MS). Net emissions from litter with active microbial populations (non-sterile litters) were between 0 and 11 times higher than emissions from sterile controls over a 20-d incubation period, suggesting that abiotic sources of VOCs are generally less important than biotic sources. In all cases, the sterile and non-sterile litter treatments emitted different types of VOCs, with methanol being the dominant VOC emitted from litters during microbial decomposition, accounting for 78 to 99% of the net emissions. We also found that the types of VOCs released during biotic decomposition differed in a predictable manner among litter types with VOC profiles also changing as decomposition progressed over time. These results show the importance of incorporating both the biotic decomposition of litter and the species-dependent differences in terrestrial vegetation into global VOC emission models.
NASA Astrophysics Data System (ADS)
Djukic, Ika; Kappel Schmidt, Inger; Steenberg Larsen, Klaus; Beier, Claus
2017-04-01
Litter decomposition represents one of the largest fluxes in the global terrestrial carbon cycle and a number of large-scale decomposition experiments have been conducted focusing on this fundamental soil process. However, previous studies were most often based on site-specific litters and methodologies. The contrasting litter and soil types used and the general lack of common protocols still pose a major challenge, as they add major uncertainty to meta-analyses across different experiments and sites. In the TeaComposition initiative, we aim to investigate potential litter decomposition by using standardized substrates (tea) for comparison of temporal litter decomposition rates across different ecosystems worldwide. To this end, Lipton tea bags (Rooibos and Green Tea) have been buried in the H-A or Ah horizon and incubated over a period of 36 months at 400 sites covering diverse ecosystems in 9 zonobiomes. We measured initial litter chemistry and litter mass loss 3 months after the start of decomposition and linked the decomposition rates to site and climatic conditions as well as to the existing decomposition rates of the local litter. We will present and discuss the outcomes of this study. Acknowledgment: We are thankful to colleagues from more than 300 sites who were participating in the implementation of this initiative and who are not mentioned individually as co-authors yet.
Challenges of including nitrogen effects on decomposition in earth system models
NASA Astrophysics Data System (ADS)
Hobbie, S. E.
2011-12-01
Despite the importance of litter decomposition for ecosystem fertility and carbon balance, key uncertainties remain about how this fundamental process is affected by nitrogen (N) availability. Nevertheless, resolving such uncertainties is critical for mechanistic inclusion of such processes in earth system models, towards predicting the ecosystem consequences of increased anthropogenic reactive N. Towards that end, we have conducted a series of experiments examining nitrogen effects on litter decomposition. We found that both substrate N and externally supplied N (regardless of form) accelerated the initial decomposition rate. Faster initial decomposition rates were linked to the higher activity of carbohydrate-degrading enzymes associated with externally supplied N and the greater relative abundances of Gram negative and Gram positive bacteria associated with green leaves and externally supplied organic N (assessed using phospholipid fatty acid analysis, PLFA). By contrast, later in decomposition, externally supplied N slowed decomposition, increasing the fraction of slowly decomposing litter and reducing lignin-degrading enzyme activity and relative abundances of Gram negative and Gram positive bacteria. Our results suggest that elevated atmospheric N deposition may have contrasting effects on the dynamics of different soil carbon pools, decreasing mean residence times of active fractions comprising very fresh litter, while increasing those of more slowly decomposing fractions including more processed litter. Incorporating these contrasting effects of N on decomposition processes into models is complicated by lingering uncertainties about how these effects generalize across ecosystems and substrates.
O’Halloran, Lydia R.; Borer, Elizabeth T.; Seabloom, Eric W.; MacDougall, Andrew S.; Cleland, Elsa E.; McCulley, Rebecca L.; Hobbie, Sarah; Harpole, W. Stan; DeCrappeo, Nicole M.; Chu, Chengjin; Bakker, Jonathan D.; Davies, Kendi F.; Du, Guozhen; Firn, Jennifer; Hagenah, Nicole; Hofmockel, Kirsten S.; Knops, Johannes M. H.; Li, Wei; Melbourne, Brett A.; Morgan, John W.; Orrock, John L.; Prober, Suzanne M.; Stevens, Carly J.
2013-01-01
Based on regional-scale studies, aboveground production and litter decomposition are thought to positively covary, because they are driven by shared biotic and climatic factors. Until now we have been unable to test whether production and decomposition are generally coupled across climatically dissimilar regions, because we lacked replicated data collected within a single vegetation type across multiple regions, obfuscating the drivers and generality of the association between production and decomposition. Furthermore, our understanding of the relationships between production and decomposition rests heavily on separate meta-analyses of each response, because no studies have simultaneously measured production and the accumulation or decomposition of litter using consistent methods at globally relevant scales. Here, we use a multi-country grassland dataset collected using a standardized protocol to show that live plant biomass (an estimate of aboveground net primary production) and litter disappearance (represented by mass loss of aboveground litter) do not strongly covary. Live biomass and litter disappearance varied at different spatial scales. There was substantial variation in live biomass among continents, sites and plots whereas among continent differences accounted for most of the variation in litter disappearance rates. Although there were strong associations among aboveground biomass, litter disappearance and climatic factors in some regions (e.g. U.S. Great Plains), these relationships were inconsistent within and among the regions represented by this study. These results highlight the importance of replication among regions and continents when characterizing the correlations between ecosystem processes and interpreting their global-scale implications for carbon flux. We must exercise caution in parameterizing litter decomposition and aboveground production in future regional and global carbon models as their relationship is complex. PMID:23405103
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glascoe, E A; Hsu, P C; Springer, H K
PBXN-9, an HMX formulation, is thermally damaged and thermally decomposed in order to determine the morphological changes and decomposition kinetics that occur in the material after mild to moderate heating. The material and its constituents were decomposed using standard thermal analysis techniques (DSC and TGA), and the decomposition kinetics are reported using different kinetic models. Pressed parts and prill were thermally damaged, i.e. heated to temperatures that resulted in material changes but did not result in significant decomposition or explosion, and analyzed. In general, the thermally damaged samples showed a significant increase in porosity, a decrease in density, and a small amount of weight loss. These PBXN-9 samples appear to sustain more thermal damage than similar HMX-Viton A formulations, and the most likely reasons are the decomposition/evaporation of a volatile plasticizer and a polymorphic transition of the HMX from the β to the δ phase.
Decomposition and arthropod succession in Whitehorse, Yukon Territory, Canada.
Bygarski, Katherine; LeBlanc, Helene N
2013-03-01
Forensic arthropod succession patterns are known to vary between regions. However, the northern habitats of the globe have been largely left unstudied. Three pig carcasses were studied outdoors in Whitehorse, Yukon Territory. Adult and immature insects were collected for identification and comparison. The dominant Diptera and Coleoptera species at all carcasses were Protophormia terraenovae (R-D) (Fam: Calliphoridae) and Thanatophilus lapponicus (Herbst) (Fam: Silphidae), respectively. Rate of decomposition, patterns of Diptera and Coleoptera succession, and species dominance were shown to differ from previous studies in temperate regions, particularly as P. terraenovae showed complete dominance among blowfly species. Rate of decomposition through the first four stages was generally slow, and the last stage of decomposition was not observed at any carcass due to time constraints. It is concluded that biogeoclimatic range has a significant effect on insect presence and rate of decomposition, making it an important factor to consider when calculating a postmortem interval. © 2012 American Academy of Forensic Sciences.
NASA Astrophysics Data System (ADS)
Skitka, J.; Marston, B.; Fox-Kemper, B.
2016-02-01
Sub-grid turbulence models for planetary boundary layers are typically constructed additively, starting with local flow properties and including non-local (KPP) or higher order (Mellor-Yamada) parameters until a desired level of predictive capacity is achieved or a manageable threshold of complexity is surpassed. Such approaches are necessarily limited in general circumstances, like global circulation models, by their being optimized for particular flow phenomena. By building a model reductively, starting with the infinite hierarchy of turbulence statistics, truncating at a given order, and stripping degrees of freedom from the flow, we offer the prospect of a turbulence model and investigative tool that is equally applicable to all flow types and able to take full advantage of the wealth of nonlocal information in any flow. Direct statistical simulation (DSS) that is based upon expansion in equal-time cumulants can be used to compute flow statistics of arbitrary order. We investigate the feasibility of a second-order closure (CE2) by performing simulations of the ocean boundary layer in a quasi-linear approximation for which CE2 is exact. As oceanographic examples, wind-driven Langmuir turbulence and thermal convection are studied by comparison of the quasi-linear and fully nonlinear statistics. We also characterize the computational advantages and physical uncertainties of CE2 defined on a reduced basis determined via proper orthogonal decomposition (POD) of the flow fields.
ERIC Educational Resources Information Center
Man, Yiu-Kwong; Leung, Allen
2012-01-01
In this paper, we introduce a new approach to compute the partial fraction decompositions of rational functions and describe the results of its trials at three secondary schools in Hong Kong. The data were collected via quizzes, questionnaire and interviews. In general, according to the responses from the teachers and students concerned, this new…
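For a proper rational function with distinct linear factors, the classical cover-up rule gives the decomposition directly. The exact-arithmetic sketch below is illustrative background only; it is not the new approach described in the paper, and the example function is a made-up one:

```python
from fractions import Fraction

def coverup(num, roots):
    """Partial-fraction coefficients for num(x) / prod(x - r_i),
    with distinct roots r_i, via the cover-up rule:
    A_i = num(r_i) / prod_{j != i} (r_i - r_j)."""
    def horner(coeffs, x):
        # Evaluate a polynomial given coefficients, highest power first
        acc = Fraction(0)
        for c in coeffs:
            acc = acc * x + c
        return acc
    out = []
    for r in roots:
        denom = Fraction(1)
        for s in roots:
            if s != r:
                denom *= (r - s)
        out.append(horner(num, Fraction(r)) / denom)
    return out
```

For example, (3x + 5)/((x - 1)(x + 2)) splits into (8/3)/(x - 1) + (1/3)/(x + 2), which can be verified by recombining over the common denominator.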
Deconvolution of reacting-flow dynamics using proper orthogonal and dynamic mode decompositions
NASA Astrophysics Data System (ADS)
Roy, Sukesh; Hua, Jia-Chen; Barnhill, Will; Gunaratne, Gemunu H.; Gord, James R.
2015-01-01
Analytical and computational studies of reacting flows are extremely challenging due in part to nonlinearities of the underlying system of equations and long-range coupling mediated by heat and pressure fluctuations. However, many dynamical features of the flow can be inferred through low-order models if the flow constituents (e.g., eddies or vortices) and their symmetries, as well as the interactions among constituents, are established. Modal decompositions of high-frequency, high-resolution imaging, such as measurements of species-concentration fields through planar laser-induced fluorescence and of velocity fields through particle-image velocimetry, are the first step in the process. A methodology is introduced for deducing the flow constituents and their dynamics following modal decomposition. Proper orthogonal (POD) and dynamic mode (DMD) decompositions of two classes of problems are performed and their strengths compared. The first problem involves a cellular state generated in a flat circular flame front through symmetry breaking. The state contains two rings of cells that rotate clockwise at different rates. Both POD and DMD can be used to deconvolve the state into the two rings. In POD the contribution of each mode to the flow is quantified using the energy. Each DMD mode can be associated with an energy as well as a unique complex growth rate. Dynamic modes with the same spatial symmetry but different growth rates are found to be combined into a single POD mode. Thus, a flow can be approximated by a smaller number of POD modes. On the other hand, DMD provides a more detailed resolution of the dynamics. Two classes of reacting flows behind symmetric bluff bodies are also analyzed. In the first, symmetric pairs of vortices are released periodically from the two ends of the bluff body. The second flow contains von Karman vortices also, with a vortex being shed from one end of the bluff body followed by a second shedding from the opposite end.
The way in which DMD can be used to deconvolve the second flow into symmetric and von Karman vortices is demonstrated. The analyses performed illustrate two distinct advantages of DMD: (1) Unlike proper orthogonal modes, each dynamic mode is associated with a unique complex growth rate. By comparing DMD spectra from multiple nominally identical experiments, it is possible to identify "reproducible" modes in a flow. We also find that although most high-energy modes are reproducible, some are not common between experimental realizations; in the examples considered, energy fails to differentiate between reproducible and nonreproducible modes. Consequently, it may not be possible to differentiate reproducible and nonreproducible modes in POD. (2) Time-dependent coefficients of dynamic modes are complex. Even in noisy experimental data, the dynamics of the phase of these coefficients (but not their magnitude) are highly regular. The phase represents the angular position of a rotating ring of cells and quantifies the downstream displacement of vortices in reacting flows. Thus, it is suggested that the dynamical characterizations of complex flows are best made through the phase dynamics of reproducible DMD modes.
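The DMD construction this comparison relies on (an SVD of the first snapshot matrix, an eigendecomposition of the projected one-step operator, and complex growth rates from the eigenvalues) can be sketched generically; this is a standard exact-DMD outline, not the authors' implementation:

```python
import numpy as np

def dmd(snapshots, r):
    """Exact dynamic mode decomposition of a snapshot matrix.

    snapshots: (n_space, n_time) array sampled at a fixed time step;
    r: truncation rank.  Each returned eigenvalue lam_j yields a
    unique complex growth rate log(lam_j)/dt for its mode.
    """
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh[:r].conj().T
    # One-step operator projected onto the leading r POD directions
    A_tilde = U.conj().T @ Y @ V / s
    eigvals, W = np.linalg.eig(A_tilde)
    modes = (Y @ V / s) @ W  # exact DMD modes
    return modes, eigvals
```

Applied to data generated by a linear map, the DMD eigenvalues recover the map's spectrum, which is the sense in which each dynamic mode carries a unique growth rate.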
Spagnoli, Laura; Amadasi, Alberto; Frustaci, Michela; Mazzarelli, Debora; Porta, Davide; Cattaneo, Cristina
2016-03-01
The distinction between cut marks and blunt force injuries on costal cartilages is a crucial issue in the forensic field. Moreover, a correct distinction may further be complicated by decomposition, so the need arises to investigate the distinctive features of lesions on cartilage and their changes over time. This study aimed to assess the stereomicroscopic features of cut marks (performed with six different knives) and blunt fractures (performed with a hammer and by means of manual bending) on 48 fragments of human costal cartilages. Moreover, in order to simulate decomposition, the cut and fractured surfaces were checked with stereomicroscopy and through casts after 1 and 2 days, 1 week, and 1, 2 and 4 months of drying in ambient air. In fresh samples, for single and unique cuts, striations were observed in between 44 and 88% of cases when non-serrated blades were used, and between 77 and 88% for serrated blades; in the case of "repeated" (back and forth movement) cuts, striations were detected in between 56 and 89% of cases for non-serrated blades, and between 66 and 100% for serrated blades. After only 1 week of decomposition the detection rates fell to percentages of between 28 and 39% for serrated blades and between 17 and 33% for non-serrated blades. Blunt force injuries showed non-specific characteristics which, if properly assessed, may nonetheless allow a reliable distinction from cut marks in fresh samples. The most evident alterations of the structure of the cartilage occurred in the first week of decomposition in ambient air. After one week of drying, the characteristics of cut marks were almost undetectable, thereby making it extremely challenging to distinguish between cut marks, blunt force fractures and taphonomic effects. The study represents a contribution to the correct assessment and distinction of cut marks and blunt force injuries on cartilages, providing a glimpse of the modifications such lesions may undergo with decomposition.
ON THE DECOMPOSITION OF STRESS AND STRAIN TENSORS INTO SPHERICAL AND DEVIATORIC PARTS
Augusti, G.; Martin, J. B.; Prager, W.
1969-01-01
It is well known that Hooke's law for a linearly elastic, isotropic solid may be written in the form of two relations that involve only the spherical or only the deviatoric parts of the tensors of stress and strain. The example of the linearly elastic, transversely isotropic solid is used to show that this decomposition is not, in general, feasible for linearly elastic, anisotropic solids. The discussion is extended to a large class of work-hardening rigid, plastic solids, and it is shown that the considered decomposition can only be achieved for the incompressible solids of this class. PMID:16591754
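The decomposition discussed above is elementary to state numerically. The sketch below (illustrative stress values, not from the paper) splits a stress tensor into its spherical (hydrostatic) part and its trace-free deviatoric part:

```python
import numpy as np

def spherical_deviatoric(sigma):
    """Split a stress tensor: sigma = p*I + s, with p = tr(sigma)/3 and tr(s) = 0.
    For isotropic linear elasticity, Hooke's law decouples on these two parts."""
    p = np.trace(sigma) / 3.0
    sph = p * np.eye(3)     # spherical (hydrostatic) part
    dev = sigma - sph       # deviatoric part, trace-free by construction
    return sph, dev

# Illustrative symmetric stress tensor (arbitrary units).
sigma = np.array([[10.0, 2.0, 0.0],
                  [2.0, 4.0, 1.0],
                  [0.0, 1.0, -2.0]])
sph, dev = spherical_deviatoric(sigma)
```

For an isotropic solid the two parts respond independently (bulk modulus vs. shear modulus); the paper's point is that this independence generally fails for anisotropic solids.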
NASA Astrophysics Data System (ADS)
Wood, J. H.; Natali, S.
2014-12-01
The Global Decomposition Project (GDP) is a program designed to introduce and educate students and the general public about soil organic matter and decomposition through a standardized protocol for collecting, reporting, and sharing data. This easy-to-use hands-on activity focuses on questions such as "How do environmental conditions control decomposition of organic matter in soil?" and "Why do some areas accumulate organic matter and others do not?" Soil organic matter is important to local ecosystems because it affects soil structure, regulates soil moisture and temperature, and provides energy and nutrients to soil organisms. It is also important globally because it stores a large amount of carbon, and when microbes "eat", or decompose organic matter they release greenhouse gasses such as carbon dioxide and methane into the atmosphere, which affects the earth's climate. The protocol describes a commonly used method to measure decomposition using a paper made of cellulose, a component of plant cell walls. Participants can receive pre-made cellulose decomposition bags, or make decomposition bags using instructions in the protocol and easily obtained materials (e.g., window screen and lignin-free paper). Individual results will be shared with all participants and the broader public through an online database. We will present decomposition bag results from a research site in Alaskan tundra, as well as from a middle-school-student led experiment in California. The GDP demonstrates how scientific methods can be extended to educate broader audiences, while at the same time, data collected by students and the public can provide new insight into global patterns of soil decomposition. The GDP provides a pathway for scientists and educators to interact and reach meaningful education and research goals.
Monroy, Silvia; Menéndez, Margarita; Basaguren, Ana; Pérez, Javier; Elosegi, Arturo; Pozo, Jesús
2016-12-15
Drought, an important environmental factor affecting the functioning of stream ecosystems, is likely to become more prevalent in the Mediterranean region as a consequence of climate change and enhanced water demand. Drought can have profound impacts on leaf litter decomposition, a key ecosystem process in headwater streams, but there is still limited information on its effects at the regional scale. We measured leaf litter decomposition across a gradient of aridity in the Ebro River basin. We deployed coarse- and fine-mesh bags with alder and oak leaves in 11 Mediterranean calcareous streams spanning a range of over 400 km, and determined changes in discharge, water quality, leaf-associated macroinvertebrates, leaf quality and decomposition rates. The study streams were subject to different degrees of drought, with specific discharge (L s⁻¹ km⁻²) ranging from 0.62 to 9.99. One of the streams dried out during the experiment, another one reached residual flow, whereas the rest registered uninterrupted flow but with different degrees of flow variability. Decomposition rates differed among sites, being lowest in the 2 most water-stressed sites, but showed no general correlation with specific discharge. Microbial decomposition rates were correlated neither with the final nutrient content of the litter nor with fungal biomass. Total decomposition rate of alder was positively correlated to the density and biomass of shredders; that of oak was not. Shredder density in alder bags showed a positive relationship with specific discharge during the decomposition experiment. Overall, the results point to a complex pattern of litter decomposition at the regional scale, as drought affects decomposition directly by emersion of bags and indirectly by affecting the functional composition and density of detritivores. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
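For context, litter-bag studies such as this one typically summarize mass loss with a first-order decay rate k. A minimal sketch with hypothetical mass-remaining data (not the study's measurements):

```python
import numpy as np

# Hypothetical litter-bag series: fraction of initial mass remaining over time.
t_days = np.array([0, 14, 28, 56, 84])
frac = np.array([1.00, 0.82, 0.66, 0.45, 0.30])

# First-order model m(t) = m0 * exp(-k t). Linearized least-squares fit of
# ln(frac) = -k * t, forcing the intercept through 0 (frac[0] = 1).
k = -np.sum(t_days * np.log(frac)) / np.sum(t_days**2)  # per day
halflife = np.log(2) / k
```

Comparing k between coarse- and fine-mesh bags is the usual way to separate microbial from invertebrate-driven decomposition.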
Hong, X; Harris, C J
2000-01-01
This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bézier-Bernstein polynomial functions. The approach is general in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. The construction algorithm also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline-expansion-based neurofuzzy systems, Bézier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and a Delaunay input-space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. The modeling network is based on an additive decomposition approach together with two separate basis-function formation approaches for the univariate and bivariate Bézier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least-squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.
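The basis properties the abstract lists (nonnegativity, unity of support) are easy to check for univariate Bernstein polynomials; a small sketch, not tied to the paper's network construction:

```python
import numpy as np
from math import comb

def bernstein_basis(n, x):
    """Degree-n Bernstein basis B_{i,n}(x) = C(n,i) x^i (1-x)^(n-i) on [0,1].
    Returns an (n+1, len(x)) array, one row per basis function."""
    x = np.asarray(x, dtype=float)
    return np.array([comb(n, i) * x**i * (1 - x)**(n - i) for i in range(n + 1)])

x = np.linspace(0, 1, 101)
B = bernstein_basis(3, x)  # cubic basis: 4 functions
```

Nonnegativity and the partition-of-unity property are exactly what allow these functions to be read as fuzzy membership functions.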
NASA Astrophysics Data System (ADS)
Minh, Nghia Pham; Zou, Bin; Cai, Hongjun; Wang, Chengyi
2014-01-01
The estimation of forest parameters over mountain forest areas using polarimetric interferometric synthetic aperture radar (PolInSAR) images is of great interest in remote sensing applications. For mountain forest areas, scattering mechanisms are strongly affected by ground topography variations. Most previous studies modeling the microwave backscattering signatures of forest areas have been carried out over relatively flat terrain. Therefore, a new algorithm for forest height estimation over mountain forest areas using a general model-based decomposition (GMBD) of the PolInSAR image is proposed. This algorithm enables the retrieval not only of the forest parameters, but also of the magnitude associated with each mechanism. In addition, general double- and single-bounce scattering models are proposed to fit the cross-polarization and off-diagonal terms by separating their independent orientation angles, which remained unachieved in previous model-based decompositions. The efficiency of the proposed approach is demonstrated with simulated data from PolSARProSim software and spaceborne ALOS-PALSAR PolInSAR datasets over the Kalimantan area, Indonesia. Experimental results indicate that forest height can be effectively estimated by GMBD.
Vanegas, Juan M; Torres-Sánchez, Alejandro; Arroyo, Marino
2014-02-11
Local stress fields are routinely computed from molecular dynamics trajectories to understand the structure and mechanical properties of lipid bilayers. These calculations can be systematically understood with the Irving-Kirkwood-Noll theory. In identifying the stress tensor, a crucial step is the decomposition of the forces on the particles into pairwise contributions. However, such a decomposition is not unique in general, leading to an ambiguity in the definition of the stress tensor, particularly for multibody potentials. Furthermore, a theoretical treatment of constraints in local stress calculations has been lacking. Here, we present a new implementation of local stress calculations that systematically treats constraints and considers a privileged decomposition, the central force decomposition, that leads to a symmetric stress tensor by construction. We focus on biomembranes, although the methodology presented here is widely applicable. Our results show that some unphysical behavior obtained with previous implementations (e.g. nonconstant normal stress profiles along an isotropic bilayer in equilibrium) is a consequence of an improper treatment of constraints. Furthermore, other valid force decompositions produce significantly different stress profiles, particularly in the presence of dihedral potentials. Our methodology reveals the striking effect of unsaturations on the bilayer mechanics, missed by previous stress calculation implementations.
Absolute continuity for operator valued completely positive maps on C∗-algebras
NASA Astrophysics Data System (ADS)
Gheondea, Aurelian; Kavruk, Ali Şamil
2009-02-01
Motivated by applicability to quantum operations, quantum information, and quantum probability, we investigate the notion of absolute continuity for operator valued completely positive maps on C∗-algebras, previously introduced by Parthasarathy [in Athens Conference on Applied Probability and Time Series Analysis I (Springer-Verlag, Berlin, 1996), pp. 34-54]. We obtain an intrinsic definition of absolute continuity, we show that the Lebesgue decomposition defined by Parthasarathy is the maximal one among all other Lebesgue-type decompositions and that this maximal Lebesgue decomposition does not depend on the jointly dominating completely positive map, we obtain more flexible formulas for calculating the maximal Lebesgue decomposition, and we point out the nonuniqueness of the Lebesgue decomposition as well as a sufficient condition for uniqueness. In addition, we consider Radon-Nikodym derivatives for absolutely continuous completely positive maps that, in general, are unbounded positive self-adjoint operators affiliated to a certain von Neumann algebra, and we obtain a spectral approximation by bounded Radon-Nikodym derivatives. An application to the existence of the infimum of two completely positive maps is indicated, and formulas in terms of Choi's matrices for the Lebesgue decomposition of completely positive maps in matrix algebras are obtained.
Beyond Low Rank + Sparse: Multi-scale Low Rank Matrix Decomposition
Ong, Frank; Lustig, Michael
2016-01-01
We present a natural generalization of the recent low rank + sparse matrix decomposition and consider the decomposition of matrices into components of multiple scales. Such decomposition is well motivated in practice as data matrices often exhibit local correlations in multiple scales. Concretely, we propose a multi-scale low rank modeling that represents a data matrix as a sum of block-wise low rank matrices with increasing scales of block sizes. We then consider the inverse problem of decomposing the data matrix into its multi-scale low rank components and approach the problem via a convex formulation. Theoretically, we show that under various incoherence conditions, the convex program recovers the multi-scale low rank components either exactly or approximately. Practically, we provide guidance on selecting the regularization parameters and incorporate cycle spinning to reduce blocking artifacts. Experimentally, we show that the multi-scale low rank decomposition provides a more intuitive decomposition than conventional low rank methods and demonstrate its effectiveness in four applications, including illumination normalization for face images, motion separation for surveillance videos, multi-scale modeling of dynamic contrast-enhanced magnetic resonance imaging and collaborative filtering exploiting age information. PMID:28450978
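A single scale of the block-wise low rank model can be sketched with plain truncated SVDs per tile. Note this is only the forward model at one scale; the paper's actual recovery of all scales jointly uses a convex program, which is omitted here:

```python
import numpy as np

def blockwise_lowrank(X, block, rank):
    """Approximate X by rank-`rank` SVD truncations of non-overlapping
    block x block tiles -- one scale of a multi-scale low rank model."""
    Y = np.zeros_like(X, dtype=float)
    n, m = X.shape
    for i in range(0, n, block):
        for j in range(0, m, block):
            U, s, Vh = np.linalg.svd(X[i:i+block, j:j+block], full_matrices=False)
            Y[i:i+block, j:j+block] = (U[:, :rank] * s[:rank]) @ Vh[:rank]
    return Y

# A matrix built from 8x8 rank-1 tiles is captured exactly at that scale.
rng = np.random.default_rng(1)
X = np.zeros((32, 32))
for i in range(0, 32, 8):
    for j in range(0, 32, 8):
        X[i:i+8, j:j+8] = np.outer(rng.standard_normal(8), rng.standard_normal(8))
approx = blockwise_lowrank(X, block=8, rank=1)
```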
Soil organic matter decomposition follows plant productivity response to sea-level rise
NASA Astrophysics Data System (ADS)
Mueller, Peter; Jensen, Kai; Megonigal, James Patrick
2015-04-01
The accumulation of soil organic matter (SOM) is an important mechanism for many tidal wetlands to keep pace with sea-level rise. SOM accumulation is governed by the rates of production and decomposition of organic matter. While plant productivity responses to sea-level rise are well understood, far less is known about the response of SOM decomposition to accelerated sea-level rise. Here we quantified the effects of sea-level rise on SOM decomposition by exposing planted and unplanted tidal marsh monoliths to experimentally manipulated flood duration. The study was performed in a field-based mesocosm facility at the Smithsonian Global Change Research Wetland, a microtidal brackish marsh in Maryland, US. SOM decomposition was quantified as CO2 efflux, with plant- and SOM-derived CO2 separated using a stable carbon isotope approach. Despite the dogma that decomposition rates are inversely related to flooding, SOM mineralization was not sensitive to varying flood duration over a 35 cm range in surface elevation in unplanted mesocosms. In the presence of plants, decomposition rates were strongly and positively related to aboveground biomass (p ≤ 0.01, R² ≥ 0.59). We conclude that rates of soil carbon loss through decomposition are driven by plant responses to sea level in this intensively studied tidal marsh. If our result applies more generally to tidal wetlands, it has important implications for modeling carbon sequestration and marsh accretion in response to accelerated sea-level rise.
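The stable-carbon-isotope separation of plant- and SOM-derived CO2 mentioned above is commonly a two-end-member mixing calculation. A sketch with illustrative δ13C values (assumed, not the study's data):

```python
# Two-source stable-isotope mixing: partition total CO2 efflux into
# plant-derived and SOM-derived fractions from δ13C values (per mil).
# End-member values below are illustrative only.
delta_total = -22.0   # δ13C of the measured bulk efflux
delta_plant = -12.5   # plant-derived end member (e.g., a C4 signal)
delta_som = -26.0     # SOM-derived end member (e.g., C3-derived soil carbon)

# Mass balance: delta_total = f_plant*delta_plant + (1 - f_plant)*delta_som
f_plant = (delta_total - delta_som) / (delta_plant - delta_som)
f_som = 1.0 - f_plant
```

The two fractions then scale the measured total efflux into plant- and SOM-derived CO2 fluxes.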
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hobbs, Michael L.; Kaneshige, Michael J.; Erikson, William W.
In this study, we have made reasonable cookoff predictions of large-scale explosive systems by using pressure-dependent kinetics determined from small-scale experiments. Scale-up is determined by properly accounting for pressure generated from gaseous decomposition products and the volume that these reactive gases occupy, e.g. trapped within the explosive, the system, or vented. The pressure effect on the decomposition rates has been determined for different explosives by using both vented and sealed experiments at low densities. Low-density explosives are usually permeable to decomposition gases and can be used in both vented and sealed configurations to determine pressure-dependent reaction rates. In contrast, explosives that are near the theoretical maximum density (TMD) are not as permeable to decomposition gases, and pressure-dependent kinetics are difficult to determine. Ignition in explosives at high densities can be predicted by using pressure-dependent rates determined from the low-density experiments as long as gas volume changes associated with bulk thermal expansion are also considered. In the current work, cookoff of the plastic-bonded explosives PBX 9501 and PBX 9502 is reviewed and new experimental work on LX-14 is presented. Reactive gases are formed inside these heated explosives causing large internal pressures. The pressure is released differently for each of these explosives. For PBX 9501, permeability is increased and internal pressure is relieved as the nitroplasticizer melts and decomposes. Internal pressure in PBX 9502 is relieved as the material is damaged by cracks and spalling. For LX-14, internal pressure is not relieved until the explosive thermally ignites. The current paper is an extension of work presented at the 26th ICDERS symposium [1].
Differential decomposition of bacterial and viral fecal indicators in common human pollution types.
Wanjugi, Pauline; Sivaganesan, Mano; Korajkic, Asja; Kelty, Catherine A; McMinn, Brian; Ulrich, Robert; Harwood, Valerie J; Shanks, Orin C
2016-11-15
Understanding the decomposition of microorganisms associated with different human fecal pollution types is necessary for proper implementation of many water quality management practices, as well as predicting associated public health risks. Here, the decomposition of select cultivated and molecular indicators of fecal pollution originating from fresh human feces, septage, and primary effluent sewage in a subtropical marine environment was assessed over a six-day period with an emphasis on the influence of ambient sunlight and indigenous microbiota. Ambient water mixed with each fecal pollution type was placed in dialysis bags and incubated in situ in a submersible aquatic mesocosm. Genetic and cultivated fecal indicators including fecal indicator bacteria (enterococci, E. coli, and Bacteroidales), coliphage (somatic and F+), Bacteroides fragilis phage (GB-124), and human-associated genetic indicators (HF183/BacR287 and HumM2) were measured in each sample. Simple linear regression assessing treatment trends in each pollution type over time showed significant decay (p ≤ 0.05) in most treatments for feces and sewage (27/28 and 32/40, respectively), compared to septage (6/26). A two-way analysis of variance of log10 reduction values for sewage and feces experiments indicated that treatments differentially impact survival of cultivated bacteria, cultivated phage, and genetic indicators. Findings suggest that sunlight is critical for phage decay, and indigenous microbiota play a lesser role. For bacterial cultivated and genetic indicators, the influence of indigenous microbiota varied by pollution type. This study offers new insights on the decomposition of common human fecal pollution types in a subtropical marine environment with important implications for water quality management applications. Published by Elsevier Ltd.
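The log10 reduction values used in the analysis can be computed directly from paired counts; a sketch with hypothetical replicate counts (not the study's data):

```python
import numpy as np

# Log10 reduction value (LRV) of an indicator over an incubation period.
# Counts below are illustrative (e.g., CFU or gene copies per 100 mL).
c0 = np.array([1.0e5, 8.0e4, 1.2e5])  # day-0 replicates
ct = np.array([2.0e3, 1.5e3, 3.0e3])  # day-6 replicates

# LRV = log10(initial mean) - log10(final mean); larger values mean faster decay.
lrv = np.log10(c0.mean()) - np.log10(ct.mean())
```

LRVs from different treatments (e.g., sunlight vs. shaded, ambient vs. filtered water) are then compared with ANOVA, as in the abstract.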
Characteristics of root decomposition in a tropical rainforest in Sarawak, Malaysia
NASA Astrophysics Data System (ADS)
Ohashi, Mizue; Makita, Naoki; Katayam, Ayumi; Kume, Tomonori; Matsumoto, Kazuho; Khoon Kho, L.
2016-04-01
Woody roots play a significant role in forest carbon cycling, as up to 60 percent of tree photosynthetic production can be allocated belowground. Root decay is one of the main processes of soil C dynamics and potentially relates to soil C sequestration. However, much less attention has been paid to root litter decomposition than to leaf litter, because roots are hidden from view. Previous studies have revealed that the physico-chemical quality of roots, climate, and soil organisms affect root decomposition significantly. However, patterns and mechanisms of root decomposition are still poorly understood because of the high variability of root properties, field environments and potential decomposers. For example, root size would be a factor controlling decomposition rates, but a general understanding of the difference between coarse and fine root decomposition is still lacking. Also, it is known that root decomposition is performed by soil animals, fungi and bacteria, but their relative importance is poorly understood. In this study, therefore, we aimed to characterize root decomposition in a tropical rainforest in Sarawak, Malaysia, and to clarify the impact of soil living organisms and root size on root litter decomposition. We buried soil cores with fine and coarse root litter bags in soil in Lambir Hills National Park. Three different types of soil cores were prepared, covered by 1.5 cm plastic mesh, a root-impermeable sheet (50 µm), or a fungus-impermeable sheet (1 µm). The soil cores were buried in February 2013 and collected four times: 134, 226, 786 and 1151 days after installation. We found that nearly 80 percent of the coarse root litter was decomposed after two years, whereas only 60 percent of the fine root litter was decomposed. Our results also showed significantly different degrees of decomposition among core types, suggesting differing contributions of soil living organisms to the decomposition process.
NASA Astrophysics Data System (ADS)
Petrishcheva, E.; Abart, R.
2012-04-01
We address mathematical modeling and computer simulations of phase decomposition in a multicomponent system. As opposed to binary alloys with one common diffusion parameter, our main concern is phase decomposition in real geological systems under the influence of strongly different interdiffusion coefficients, as is frequently encountered in mineral solid solutions with coupled diffusion on different sub-lattices. Our goal is to explain deviations from equilibrium element partitioning which are often observed in nature, e.g., in a cooled ternary feldspar. To this end we first adapt the standard Cahn-Hilliard model to the multicomponent diffusion problem and account for arbitrary diffusion coefficients. This is done by using Onsager's approach, such that the flux of each component results from the combined action of the chemical potentials of all components. In a second step the generalized Cahn-Hilliard equation is solved numerically using a finite-element approach. We introduce and investigate several decomposition scenarios that may produce systematic deviations from the equilibrium element partitioning. Both ideal solutions and ternary feldspar are considered. Typically, the slowest component is initially "frozen" and the decomposition effectively takes place only for the two "fast" components. At this stage the deviations from the equilibrium element partitioning are indeed observed. These deviations may become "frozen in" under conditions of cooling. The final equilibration of the system occurs on a considerably slower time scale, so the system may indeed remain out of equilibrium at the time of observation. Our approach reveals the intrinsic reasons for the specific phase separation path and rigorously describes it by direct numerical solution of the generalized Cahn-Hilliard equation.
New Aspects of Probabilistic Forecast Verification Using Information Theory
NASA Astrophysics Data System (ADS)
Tödter, Julian; Ahrens, Bodo
2013-04-01
This work deals with information-theoretical methods in probabilistic forecast verification, particularly concerning ensemble forecasts. Recent findings concerning the "Ignorance Score" are briefly reviewed, and a consistent generalization to continuous forecasts is motivated. For ensemble-generated forecasts, the presented measures can be calculated exactly. The Brier Score (BS) and its generalizations to the multi-categorical Ranked Probability Score (RPS) and to the Continuous Ranked Probability Score (CRPS) are prominent verification measures for probabilistic forecasts. Particularly attractive are their decompositions into measures quantifying the reliability, resolution and uncertainty of the forecasts. Information theory sets up a natural framework for forecast verification. Recently, it has been shown that the BS is a second-order approximation of the information-based Ignorance Score (IGN), which also contains easily interpretable components and can be generalized to a ranked version (RIGN). Here, the IGN, its generalizations and decompositions are systematically discussed in analogy to the variants of the BS. Additionally, a Continuous Ranked IGN (CRIGN) is introduced in analogy to the CRPS. The useful properties of the conceptually appealing CRIGN are illustrated, together with an algorithm to evaluate its components (reliability, resolution, and uncertainty) for ensemble-generated forecasts. This algorithm can also be used to calculate the decomposition of the more traditional CRPS exactly. The applicability of the "new" measures is demonstrated in a small evaluation study of ensemble-based precipitation forecasts.
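The Murphy-style decomposition of the Brier score into reliability, resolution and uncertainty can be sketched for binned probability forecasts. The data below are synthetic and perfectly calibrated, so the reliability term comes out near zero; with binned continuous forecasts the identity BS = REL - RES + UNC holds only up to a small within-bin term:

```python
import numpy as np

def brier_decomposition(p, o, bins=10):
    """Decompose the Brier score of probability forecasts p against binary
    outcomes o into reliability, resolution and uncertainty via binning."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    n = len(p)
    obar = o.mean()
    unc = obar * (1 - obar)                       # uncertainty of the climatology
    idx = np.minimum((p * bins).astype(int), bins - 1)
    rel = res = 0.0
    for k in range(bins):
        m = idx == k
        nk = m.sum()
        if nk == 0:
            continue
        pk, ok = p[m].mean(), o[m].mean()
        rel += nk * (pk - ok) ** 2 / n            # reliability (miscalibration)
        res += nk * (ok - obar) ** 2 / n          # resolution (discrimination)
    return rel, res, unc

rng = np.random.default_rng(0)
p = rng.uniform(size=5000)
o = (rng.uniform(size=5000) < p).astype(float)    # perfectly calibrated forecasts
rel, res, unc = brier_decomposition(p, o)
bs = np.mean((p - o) ** 2)
```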
Catalytic Decomposition of Hydroxylammonium Nitrate Ionic Liquid: Enhancement of NO Formation.
Chambreau, Steven D; Popolan-Vaida, Denisia M; Vaghjiani, Ghanshyam L; Leone, Stephen R
2017-05-18
Hydroxylammonium nitrate (HAN) is a promising candidate to replace highly toxic hydrazine in monopropellant thruster space applications. The reactivity of HAN aerosols on heated copper and iridium targets was investigated using tunable vacuum ultraviolet photoionization time-of-flight aerosol mass spectrometry. The reaction products were identified by their mass-to-charge ratios and their ionization energies. Products include NH3, H2O, NO, hydroxylamine (HA), HNO3, and a small amount of NO2 at high temperature. No N2O was detected under these experimental conditions, despite the fact that N2O is one of the expected products according to the generally accepted thermal decomposition mechanism of HAN. Upon introduction of iridium catalyst, a significant enhancement of the NO/HA ratio was observed. This observation indicates that the formation of NO via decomposition of HA is an important pathway in the catalytic decomposition of HAN.
Determination of the thermal stability of perfluoroalkylethers
NASA Technical Reports Server (NTRS)
Helmick, Larry S.; Jones, William R., Jr.
1990-01-01
The thermal decomposition temperatures of several commercial and custom synthesized perfluoroalkylether fluids were determined with a computerized tensimeter. In general, the decomposition temperatures of the commercial fluids were all similar and significantly higher than those for custom synthesized fluids. Correlation of the decomposition temperatures with the molecular structures of the primary components of the commercial fluids revealed that the stability of the fluids is not affected by intrinsic factors such as carbon chain length, branching, or cumulated difluoroformal groups. Instead, correlation with extrinsic factors revealed that the stability may be limited by the presence of small quantities of thermally unstable material and/or chlorine-containing material arising from the use of chlorine-containing solvents during synthesis. Finally, correlation of decomposition temperatures with molecular weights for Demnum and Krytox fluids supports a chain cleavage reaction mechanism for Demnum fluids and an unzipping reaction mechanism for Krytox fluids.
Investigation of automated task learning, decomposition and scheduling
NASA Technical Reports Server (NTRS)
Livingston, David L.; Serpen, Gursel; Masti, Chandrashekar L.
1990-01-01
The details and results of research conducted in the application of neural networks to task planning and decomposition are presented. Task planning and decomposition are operations that humans perform in a reasonably efficient manner. Without the use of good heuristics and usually much human interaction, automatic planners and decomposers generally do not perform well, due to the intractable nature of the problems under consideration. The human-like performance of neural networks has shown promise for generating acceptable solutions to intractable problems such as planning and decomposition; this was the primary reasoning behind the study. The basis for the work is the use of state machines to model tasks. State machine models provide a useful means for examining the structure of tasks, since many formal techniques have been developed for their analysis and synthesis. The approach taken is to integrate the strong algebraic foundations of state machines with the heretofore trial-and-error approach to neural network synthesis.
Automatic network coupling analysis for dynamical systems based on detailed kinetic models.
Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich
2005-10-01
We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
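The core step described above, counting (locally) active dynamical modes via singular values of a sensitivity matrix, amounts to a tolerance-based numerical rank computation. A sketch on a toy matrix (not an actual kinetic model):

```python
import numpy as np

def active_modes(S, tol=1e-6):
    """Number of (locally) active dynamical modes: count singular values of
    the sensitivity matrix S above tol relative to the largest one."""
    sv = np.linalg.svd(S, compute_uv=False)
    return int(np.sum(sv > tol * sv[0]))

# Toy sensitivity matrix of a 3-species system in which two species are
# slaved to the third: numerically rank 1, hence one active mode.
S = np.outer([1.0, 0.5, -0.2], [2.0, 1.0, 3.0]) + 1e-9 * np.eye(3)
n_modes = active_modes(S)
```

Repeating this along a trajectory gives the time-dependent minimal model dimension the abstract refers to.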
Non-intrusive reduced order modeling of nonlinear problems using neural networks
NASA Astrophysics Data System (ADS)
Hesthaven, J. S.; Ubbiali, S.
2018-06-01
We develop a non-intrusive reduced basis (RB) method for parametrized steady-state partial differential equations (PDEs). The method extracts a reduced basis from a collection of high-fidelity solutions via a proper orthogonal decomposition (POD) and employs artificial neural networks (ANNs), particularly multi-layer perceptrons (MLPs), to accurately approximate the coefficients of the reduced model. The search for the optimal number of neurons and the minimum amount of training samples to avoid overfitting is carried out in the offline phase through an automatic routine, relying upon a joint use of the Latin hypercube sampling (LHS) and the Levenberg-Marquardt (LM) training algorithm. This guarantees a complete offline-online decoupling, leading to an efficient RB method - referred to as POD-NN - suitable also for general nonlinear problems with a non-affine parametric dependence. Numerical studies are presented for the nonlinear Poisson equation and for driven cavity viscous flows, modeled through the steady incompressible Navier-Stokes equations. Both physical and geometrical parametrizations are considered. Several results confirm the accuracy of the POD-NN method and show the substantial speed-up enabled at the online stage as compared to a traditional RB strategy.
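The offline POD step of such an RB method reduces to an SVD of the snapshot matrix; the ANN regression stage is omitted here. A sketch on a toy parametrized field (all data synthetic):

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """POD / reduced basis from a snapshot matrix (one high-fidelity solution
    per column): keep the fewest left singular vectors whose squared singular
    values capture the requested energy fraction."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r], s

# Toy parametrized field u(x; mu) = mu*sin(pi x) + mu^2*sin(2 pi x):
# the snapshot set spans exactly a 2-dimensional space.
x = np.linspace(0, 1, 200)
mus = np.linspace(0.5, 2.0, 30)
snaps = np.column_stack([m * np.sin(np.pi * x) + m**2 * np.sin(2 * np.pi * x)
                         for m in mus])
V, s = pod_basis(snaps)
```

In the POD-NN setting, a network would then be trained to map the parameter mu to the r reduced coefficients V.T @ u(mu).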
The Local Stellar Velocity Field via Vector Spherical Harmonics
NASA Technical Reports Server (NTRS)
Makarov, V. V.; Murphy, D. W.
2007-01-01
We analyze the local field of stellar tangential velocities for a sample of 42,339 nonbinary Hipparcos stars with accurate parallaxes, using a vector spherical harmonic formalism. We derive simple relations between the parameters of the classical linear model (Ogorodnikov-Milne) of the local systemic field and low-degree terms of the general vector harmonic decomposition. Taking advantage of these relationships, we determine the solar velocity with respect to the local stars of (V_X, V_Y, V_Z) = (10.5, 18.5, 7.3) +/- 0.1 km s^-1, not corrected for the asymmetric drift with respect to the local standard of rest. If only stars more distant than 100 pc are considered, the peculiar solar motion is (V_X, V_Y, V_Z) = (9.9, 15.6, 6.9) +/- 0.2 km s^-1. The adverse effects of harmonic leakage, which occurs between the reflex solar motion represented by the three electric vector harmonics in the velocity space and higher degree harmonics in the proper-motion space, are eliminated in our analysis by direct subtraction of the reflex solar velocity in its tangential components for each star...
Information and complexity measures in the interface of a metal and a superconductor
NASA Astrophysics Data System (ADS)
Moustakidis, Ch. C.; Panos, C. P.
2018-06-01
Fisher information, Shannon information entropy, and statistical complexity are calculated for the interface of a normal metal and a superconductor, as functions of the temperature for several materials. The order parameter Ψ(r) derived from the Ginzburg-Landau theory is used as an input, together with experimental values of the critical transition temperature Tc and the superconducting coherence length ξ0. Analytical expressions are obtained for the information and complexity measures; thus Tc is directly related in a simple way to disorder and complexity. An analytical relation is also found between the Fisher information and the energy profile of superconductivity, i.e., the ratio of the surface free energy to the bulk free energy. We verify that a simple relation holds between Shannon and Fisher information, i.e., a decomposition of a global information quantity (Shannon) in terms of two local ones (Fisher information), previously derived and verified for atoms and molecules by Liu et al. Finally, we find analytical expressions for generalized information measures such as the Tsallis entropy and Fisher information. We conclude that the proper value of the non-extensivity parameter is q ≃ 1, in agreement with previous work using a different model, where q ≃ 1.005.
Efficient material decomposition method for dual-energy X-ray cargo inspection system
NASA Astrophysics Data System (ADS)
Lee, Donghyeon; Lee, Jiseoc; Min, Jonghwan; Lee, Byungcheol; Lee, Byeongno; Oh, Kyungmin; Kim, Jaehyun; Cho, Seungryong
2018-03-01
Dual-energy X-ray inspection systems are widely used today because they provide both the X-ray attenuation contrast of the imaged object and its material information. Material decomposition capability allows higher detection sensitivity for potential targets, for example purposely loaded impurities in agricultural product inspections and threats in security scans. Dual-energy X-ray transmission data can be transformed into two basis-material thickness data sets, and the accuracy of this transformation relies heavily on the calibration of the material decomposition process. The calibration process in general can be laborious and time consuming. Moreover, a conventional calibration method is often challenged by the nonuniform spectral characteristics of the X-ray beam across the entire field-of-view (FOV). In this work, we developed an efficient material decomposition calibration process for a linear accelerator (LINAC) based high-energy X-ray cargo inspection system. We also proposed a multi-spot calibration method to improve the decomposition performance throughout the entire FOV. Experimental validation of the proposed method has been demonstrated by use of a cargo inspection system that supports 6 MV and 9 MV dual-energy imaging.
Plane waves and structures in turbulent channel flow
NASA Technical Reports Server (NTRS)
Sirovich, L.; Ball, K. S.; Keefe, L. R.
1990-01-01
A direct simulation of turbulent flow in a channel is analyzed by the method of empirical eigenfunctions (Karhunen-Loeve procedure, proper orthogonal decomposition). This analysis reveals the presence of propagating plane waves in the turbulent flow. The velocity of propagation is determined by the flow velocity at the location of maximal Reynolds stress. The analysis further suggests that the interaction of these waves appears to be essential to the local production of turbulence via bursting or sweeping events in the turbulent boundary layer, with the additional suggestion that the fast acting plane waves act as triggers.
NASA Technical Reports Server (NTRS)
Gatski, Thomas B. (Editor); Sarkar, Sutanu (Editor); Speziale, Charles G. (Editor)
1992-01-01
Various papers on turbulence are presented. Individual topics addressed include: modeling the dissipation rate in rotating turbulent flows, mapping closures for turbulent mixing and reaction, understanding turbulence in vortex dynamics, models for the structure and dynamics of near-wall turbulence, complexity of turbulence near a wall, proper orthogonal decomposition, and propagating structures in wall-bounded turbulent flows. Also discussed are: constitutive relations in compressible turbulence, compressible turbulence and shock waves, direct simulation of compressible turbulence in a shear flow, structural genesis in wall-bounded turbulent flows, the vortex lattice structure of turbulent shear flows, the etiology of shear layer vortices, and trilinear coordinates in fluid mechanics.
Zhang, Weidong; Chao, Lin; Yang, Qingpeng; Wang, Qingkui; Fang, Yunting; Wang, Silong
2016-10-01
Nitrogen addition has been shown to affect plant litter decomposition in terrestrial ecosystems. How nitrogen deposition alters the relationship between plant litter decomposition and soil nitrogen availability is unclear, however. This study examined 18 co-occurring litter types in a subtropical forest in China in terms of their decomposition (1 yr of exposure in the field) under nitrogen addition treatments (0, 0.4, 1.6, and 4.0 mol N·m^-2·yr^-1) and soil fauna exclusion (litter bags with 0.1 and 2 cm mesh size). Results showed that the plant litter decomposition rate is significantly reduced by nitrogen addition; the strength of the nitrogen addition effect is closely related to the nitrogen addition level. Plant litters of differing quality responded to nitrogen addition differently. When soil fauna was present, the nitrogen addition effect on medium-quality and high-quality plant litter decomposition rates was -26% ± 5% and -29% ± 4%, respectively; these values are significantly higher than that for low-quality plant litter decomposition. The pattern is similar when soil fauna is absent. In general, the plant litter decomposition rate is decreased by soil fauna exclusion; an average inhibition of -17% ± 1.5% was exhibited across nitrogen addition treatments and litter quality groups. However, this effect is weakly related to nitrogen addition treatment and plant litter quality. We conclude that variations in plant litter quality, nitrogen deposition, and soil fauna are important factors in decomposition and nutrient cycling in a subtropical forest ecosystem. © 2016 by the Ecological Society of America.
ERIC Educational Resources Information Center
Nyasulu, Frazier; Barlag, Rebecca
2010-01-01
The reaction kinetics of the iodide-catalyzed decomposition of H2O2 using the integrated-rate method is described. The method is based on the measurement of the total gas pressure using a datalogger and pressure sensor. This is a modification of a previously reported experiment based on the initial-rate approach. (Contains 2…
Zernike expansion of derivatives and Laplacians of the Zernike circle polynomials.
Janssen, A J E M
2014-07-01
The partial derivatives and Laplacians of the Zernike circle polynomials occur in various places in the literature on computational optics. In a number of cases, the expansion of these derivatives and Laplacians in the circle polynomials is required. For the first-order partial derivatives, analytic results are scattered in the literature. Results start as early as 1942 in Nijboer's thesis and continue until the present day, with some emphasis on recursive computation schemes. A brief historical account of these results is given in the present paper. By choosing the unnormalized version of the circle polynomials, with exponential rather than trigonometric azimuthal dependence, and by a proper combination of the two partial derivatives, a concise form of the expressions emerges. This form is appropriate for the formulation and solution of a model wavefront sensing problem of reconstructing a wavefront on the level of its expansion coefficients from (measurements of the expansion coefficients of) the partial derivatives. It turns out that the least-squares estimation problem arising here decouples per azimuthal order m, and per m the generalized inverse solution assumes a concise analytic form, so that singular value decompositions are avoided. The preferred version of the circle polynomials, with proper combination of the partial derivatives, also leads to a concise analytic result for the Zernike expansion of the Laplacian of the circle polynomials. From these expansions, the properties of the Laplacian as a mapping from the space of circle polynomials of maximal degree N, as required in the study of the Neumann problem associated with the transport-of-intensity equation, can be read off at a single glance. Furthermore, the inverse of the Laplacian on this space is shown to have a concise analytic form.
Sparse Solution of Fiber Orientation Distribution Function by Diffusion Decomposition
Yeh, Fang-Cheng; Tseng, Wen-Yih Isaac
2013-01-01
Fiber orientation is the key information in diffusion tractography. Several deconvolution methods have been proposed to obtain fiber orientations by estimating a fiber orientation distribution function (ODF). However, the L2 regularization used in deconvolution often leads to false fibers that compromise the specificity of the results. To address this problem, we propose a method called diffusion decomposition, which obtains a sparse solution of the fiber ODF by decomposing the diffusion ODF obtained from q-ball imaging (QBI), diffusion spectrum imaging (DSI), or generalized q-sampling imaging (GQI). A simulation study, a phantom study, and an in-vivo study were conducted to examine the performance of diffusion decomposition. The simulation study showed that diffusion decomposition was more accurate than both constrained spherical deconvolution and the ball-and-sticks model. The phantom study showed that the angular error of diffusion decomposition was significantly lower than those of constrained spherical deconvolution at 30° crossings and of the ball-and-sticks model at 60° crossings. The in-vivo study showed that diffusion decomposition can be applied to QBI, DSI, or GQI, and the resolved fiber orientations were consistent regardless of the diffusion sampling schemes and diffusion reconstruction methods. The performance of diffusion decomposition was further demonstrated by resolving crossing fibers on a 30-direction QBI dataset and a 40-direction DSI dataset. In conclusion, diffusion decomposition can improve angular resolution and resolve crossing fibers in datasets with low SNR and a substantially reduced number of diffusion encoding directions. These advantages may be valuable for human connectome studies and clinical research. PMID:24146772
Mohamed, Hala Sh; Dahy, AbdelRahman A; Mahfouz, Refaat M
2017-10-25
Kinetic analysis of the non-isothermal decomposition of un-irradiated and photon-beam-irradiated 5-fluorouracil (5-FU), an anti-cancer drug, was carried out in static air. Thermal decomposition of 5-FU proceeds in two steps: one minor step in the temperature range 270-283°C, followed by the major step in the temperature range 285-360°C. The non-isothermal data for un-irradiated and photon-irradiated 5-FU were analyzed using linear (Tang) and non-linear (Vyazovkin) isoconversional methods. Application of these model-free methods to the present kinetic data showed a marked dependence of the activation energy on the extent of conversion. For un-irradiated 5-FU, the non-isothermal data analysis indicates that the decomposition is generally described by the A3 and A4 models for the minor and major decomposition steps, respectively. For a photon-irradiated sample of 5-FU with a total absorbed dose of 10 Gy, the decomposition is controlled by the A2 model throughout the conversion range. The activation energies calculated for photon-irradiated 5-FU were found to be lower than the values obtained from the thermal decomposition of the un-irradiated sample, probably due to the formation of additional nucleation sites created by photon irradiation. The decomposition path was investigated by intrinsic reaction coordinate (IRC) calculations at the B3LYP/6-311++G(d,p) level of DFT. Two transition states were involved in the process, corresponding to homolytic rupture of the N-H bond and ring scission, respectively. Published by Elsevier B.V.
A practical material decomposition method for x-ray dual spectral computed tomography.
Hu, Jingjing; Zhao, Xing
2016-03-17
X-ray dual spectral CT (DSCT) scans the measured object with two different x-ray spectra, and the acquired rawdata can be used to perform the material decomposition of the object. Direct calibration methods allow a faster material decomposition for DSCT and can be separated into two groups: image-based and rawdata-based. The image-based method is approximative, and beam-hardening artifacts remain in the resulting material-selective images. The rawdata-based method generally obtains better image quality than the image-based method, but it requires geometrically consistent rawdata. However, today's clinical dual-energy CT scanners usually measure different rays for the different energy spectra and acquire geometrically inconsistent rawdata sets, and thus cannot meet this requirement. This paper proposes a practical material decomposition method to perform rawdata-based material decomposition in the case of inconsistent measurements. The method first generates the desired consistent rawdata sets from the measured inconsistent rawdata sets, and then employs a rawdata-based technique to perform material decomposition and reconstruct material-selective images. The proposed method was evaluated using simulated FORBILD thorax phantom rawdata and dental CT rawdata, and simulation results indicate that it can produce highly quantitative DSCT images in the case of inconsistent DSCT measurements.
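The basis-material transformation at the heart of the method can be illustrated in the idealized monoenergetic case, where the log-attenuation pair is linear in the two thicknesses. The attenuation matrix below is a made-up stand-in; real polychromatic spectra make the relation nonlinear, which is exactly why calibrated decomposition is needed.

```python
import numpy as np

# Monoenergetic toy model: log-attenuations at two energies are linear in
# the basis-material thicknesses t = (t1, t2):  p = M @ t.
# (Polychromatic beams make p nonlinear in t; M here is illustrative.)
M = np.array([[0.20, 0.50],    # mu at low energy for materials 1 and 2
              [0.15, 0.25]])   # mu at high energy

def decompose(p_low, p_high):
    """Recover basis-material thicknesses from one dual-energy ray pair."""
    return np.linalg.solve(M, np.array([p_low, p_high]))

t_true = np.array([3.0, 1.5])          # cm of material 1 and material 2
p = M @ t_true                         # simulated consistent rawdata pair
t_est = decompose(*p)
print(t_est)  # recovers [3.0, 1.5]
```

Note that the inversion needs both measurements for the *same* ray — the geometric-consistency requirement the paper relaxes.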
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, X; Petrongolo, M; Wang, T
Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded quality of the decomposed images. We have previously proposed an iterative denoising method for DECT. Using a linear decomposition function, that method does not gain the full benefits of DECT for beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filtered-backprojection reconstruction. Noise in the decomposed images is then suppressed by an iterative method, formulated as a least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam-hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of the two basis materials by one order of magnitude without sacrificing spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition in DECT. Preliminary phantom studies have shown that the proposed method improves image uniformity and reduces the noise level without resolution loss.
In the future, we will perform more phantom studies to further validate the performance of the proposed method. This work is supported by a Varian MRA grant.
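The variance-weighted least-squares-with-smoothness formulation in the Methods can be sketched in closed form on a 1-D toy signal. The weights, regularization strength, and piecewise-flat phantom below are illustrative assumptions; the paper's method is iterative, two-dimensional, and uses a full variance-covariance penalty.

```python
import numpy as np

# Penalized least squares: minimize (x - y)^T W (x - y) + lam * ||D x||^2,
# with W an inverse-variance weight. A 1-D, single-material stand-in for
# the paper's variance-covariance-weighted formulation.
n = 200
x_true = np.where(np.arange(n) < n // 2, 1.0, 3.0)  # piecewise-flat "material"
rng = np.random.default_rng(2)
var = 0.04 * np.ones(n)                             # per-pixel noise variance
y = x_true + rng.normal(scale=np.sqrt(var))

W = np.diag(1.0 / var)                              # inverse-variance weight
D = np.diff(np.eye(n), axis=0)                      # first-difference operator
lam = 5.0
x_hat = np.linalg.solve(W + lam * D.T @ D, W @ y)   # closed-form minimizer

print(np.mean((y - x_true)**2), np.mean((x_hat - x_true)**2))  # MSE drops
```

The normal-equations matrix is banded, so large images can use sparse or iterative solvers instead of the dense solve shown here.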
Adaptive matching of the iota ring linear optics for space charge compensation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romanov, A.; Bruhwiler, D. L.; Cook, N.
Many present and future accelerators must operate with high-intensity beams, where distortions induced by space charge forces are among the major limiting factors. A betatron tune depression above approximately 0.1 per cell leads to significant distortions of the linear optics. Many aspects of machine operation depend on proper relations between lattice functions and phase advances, and can be improved with proper treatment of space charge effects. We implement an adaptive algorithm for linear lattice re-matching with full account of space charge, in the linear approximation, for the case of Fermilab's IOTA ring. The method is based on a search for initial second moments that give a closed solution and, at the same time, satisfy a predefined set of goals for emittances, beta functions, dispersions, and phase advances at and between points of interest. An iterative singular value decomposition based technique is used to search for the optimum by varying a wide array of model parameters.
Dynamics of flow control in an emulated boundary layer-ingesting offset diffuser
NASA Astrophysics Data System (ADS)
Gissen, A. N.; Vukasinovic, B.; Glezer, A.
2014-08-01
The dynamics of flow control comprising arrays of active (synthetic-jet) and passive (vane) control elements, and its effectiveness for suppression of total-pressure distortion, is investigated experimentally in an offset diffuser in the absence of internal flow separation. The experiments are conducted in a wind tunnel inlet model at speeds up to M = 0.55, using approach flow conditioning that mimics boundary layer ingestion on a Blended-Wing-Body platform. Time-dependent distortion of the dynamic total-pressure field at the `engine face' is measured using an array of forty total-pressure probes, and the control-induced distortion changes are analyzed using triple decomposition and proper orthogonal decomposition (POD). These data indicate that the small-scale synthetic jet vortices of the flow control array merge into two large-scale, counter-rotating streamwise vortices that effect significant changes in the flow distortion. The two most energetic POD modes appear to govern the distortion dynamics in either the active or the hybrid flow control approach. Finally, it is shown that the present control approach is sufficiently robust to reduce distortion under different baseline inlet flow conditions.
Kazemi, Khoshrooz; Zhang, Baiyu; Lye, Leonard M; Cai, Qinghong; Cao, Tong
2016-12-01
A design of experiment (DOE) based methodology was adopted in this study to investigate the effects of multiple factors and their interactions on the performance of a municipal solid waste (MSW) composting process. The impact of four factors, carbon/nitrogen ratio (C/N), moisture content (MC), type of bulking agent (BA) and aeration rate (AR) on the maturity, stability and toxicity of compost product was investigated. The statistically significant factors were identified using final C/N, germination index (GI) and especially the enzyme activities as responses. Experimental results validated the use of enzyme activities as proper indices during the course of composting. Maximum enzyme activities occurred during the active phase of decomposition. MC has a significant effect on dehydrogenase activity (DGH), β-glucosidase activity (BGH), phosphodiesterase activity (PDE) and the final moisture content of the compost. C/N is statistically significant for final C/N, DGH, BGH, and GI. The results provided guidance to optimize a MSW composting system that will lead to increased decomposition rate and the production of more stable and mature compost. Copyright © 2016 Elsevier Ltd. All rights reserved.
Thermal stability of epitaxial SrRuO3 films as a function of oxygen pressure
NASA Astrophysics Data System (ADS)
Lee, Ho Nyung; Christen, Hans M.; Chisholm, Matthew F.; Rouleau, Christopher M.; Lowndes, Douglas H.
2004-05-01
The thermal stability of electrically conducting SrRuO3 thin films grown by pulsed-laser deposition on (001) SrTiO3 substrates has been investigated by atomic force microscopy and reflection high-energy electron diffraction (RHEED) under reducing conditions (25-800 °C in 10^-7 to 10^-2 Torr O2). The as-grown SrRuO3 epitaxial films exhibit atomically flat surfaces with single unit-cell steps, even after exposure to air at room temperature. The films remain stable at temperatures as high as 720 °C in moderate oxygen ambients (>1 mTorr), but higher temperature anneals at lower pressures result in the formation of islands and pits due to the decomposition of SrRuO3. Using in situ RHEED, a temperature and oxygen pressure stability map was determined, consistent with a thermally activated decomposition process having an activation energy of 88 kJ/mol. The results can be used to determine the proper conditions for growth of additional epitaxial oxide layers on high quality electrically conducting SrRuO3.
Shpotyuk, O; Bujňáková, Z; Baláž, P; Ingram, A; Shpotyuk, Y
2016-01-05
Positron annihilation lifetime spectroscopy was applied to characterize free-volume structure of polyvinylpyrrolidone used as nonionic stabilizer in the production of many nanocomposite pharmaceuticals. The polymer samples with an average molecular weight of 40,000 g mol(-1) were pelletized in a single-punch tableting machine under an applied pressure of 0.7 GPa. Strong mixing in channels of positron and positronium trapping were revealed in the polyvinylpyrrolidone pellets. The positron lifetime spectra accumulated under normal measuring statistics were analysed in terms of unconstrained three- and four-term decomposition, the latter being also realized under fixed 0.125 ns lifetime proper to para-positronium self-annihilation in a vacuum. It was shown that average positron lifetime extracted from each decomposition was primary defined by long-lived ortho-positronium component. The positron lifetime spectra treated within unconstrained three-term fitting were in obvious preference, giving third positron lifetime dominated by ortho-positronium pick-off annihilation in a polymer matrix. This fitting procedure was most meaningful, when analysing expected positron trapping sites in polyvinylpyrrolidone-stabilized nanocomposite pharmaceuticals. Copyright © 2015 Elsevier B.V. All rights reserved.
Assessment of swirl spray interaction in lab scale combustor using time-resolved measurements
NASA Astrophysics Data System (ADS)
Rajamanickam, Kuppuraj; Jain, Manish; Basu, Saptarshi
2017-11-01
Liquid fuel injection into highly turbulent swirling flows is common practice in gas turbine combustors to improve flame stabilization. It is well known that the vortex bubble breakdown (VBB) phenomenon in strongly swirling jets exhibits complicated flow structures in the spatial domain. In this study, the interaction of a hollow-cone liquid sheet with such a coaxial swirling flow field has been studied experimentally using time-resolved measurements. In particular, much attention is focused on the near-field breakup mechanism (i.e. primary atomization) of the liquid sheet. Detailed characterization of the swirling gas flow field is carried out using time-resolved PIV (3.5 kHz). Furthermore, the complicated breakup mechanisms and interaction of the liquid sheet are imaged with the help of a high-speed shadow imaging system. Subsequently, proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD) are implemented over the instantaneous data sets to retrieve the modal information associated with the interaction dynamics. This helps to delineate the quantitative nature of the interaction process between the liquid sheet and the swirling gas-phase flow field.
A low dimensional dynamical system for the wall layer
NASA Technical Reports Server (NTRS)
Aubry, N.; Keefe, L. R.
1987-01-01
Low dimensional dynamical systems which model a fully developed turbulent wall layer were derived. The model is based on the optimally fast convergent proper orthogonal decomposition, or Karhunen-Loeve expansion. This decomposition provides a set of eigenfunctions which are derived from the autocorrelation tensor at zero time lag. Via Galerkin projection, low dimensional sets of ordinary differential equations in time for the coefficients of the expansion were derived from the Navier-Stokes equations. The energy loss to the unresolved modes was modeled by an eddy viscosity representation, analogous to Heisenberg's spectral model. A set of eigenfunctions and eigenvalues was obtained from direct numerical simulation of a plane channel at a Reynolds number of 6600, based on the mean centerline velocity and the channel width, and compared with previous work by Herzog. Using the new eigenvalues and eigenfunctions, a new ten-dimensional set of ordinary differential equations was derived using five non-zero cross-stream Fourier modes with a periodic length of 377 wall units. The dynamical system was integrated for a range of the eddy viscosity parameter alpha. The results are encouraging.
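The first step above — extracting empirical eigenfunctions from the autocorrelation tensor at zero time lag — can be sketched with a snapshot estimate. The toy two-structure 1-D field below is a hypothetical stand-in for channel-flow data, and the subsequent Galerkin projection is not shown.

```python
import numpy as np

# Karhunen-Loeve / POD sketch: eigenfunctions of the two-point correlation
# at zero time lag, estimated from snapshots of a toy 1-D "velocity" field.
rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 64)
# two coherent structures with random amplitudes (hypothetical data)
snaps = np.array([a * np.sin(x) + b * np.sin(2 * x)
                  for a, b in rng.normal(size=(500, 2))])

R = snaps.T @ snaps / len(snaps)       # autocorrelation matrix R(x, x')
lam, phi = np.linalg.eigh(R)
lam, phi = lam[::-1], phi[:, ::-1]     # sort by decreasing energy

# Two eigenvalues carry essentially all the energy: two empirical modes
# suffice, and the governing equations would be Galerkin-projected onto them.
print(lam[:3])
```

The eigenvalues measure the mean energy captured by each mode, which is what makes the expansion "optimally fast convergent" in the energy norm.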
Modal Structures in flow past a cylinder
NASA Astrophysics Data System (ADS)
Murshed, Mohammad
2017-11-01
With the advent of data, there have been opportunities to apply formal methods to detect patterns or simple relations. For instance, a phenomenon can be defined through a partial differential equation, which may not be very useful right away, whereas a formula for the evolution of a primary variable may be interpreted quite easily. Having access to data is not enough, however, since advanced linear algebra can put strain on the way computations are done. A canonical problem in the field of aerodynamics is transient flow past a cylinder, where the viscosity can be adjusted to set the Reynolds number (Re). We observe the effect of the critical Re on certain modes of behavior in time. A 2D velocity field serves as input for analyzing the modal structure of the flow using proper orthogonal decomposition and Koopman mode/dynamic mode decomposition. This enables prediction of the solution further in time (taking into account the dependence on Re) and helps us evaluate and discuss the associated error in the mechanism.
Civil Engineering Corrosion Control. Volume 1. Corrosion Control - General
1975-01-01
is generated in the boiler by the decomposition of carbonates and bicarbonates of sodium, calcium, and magnesium. (c) The pH Range. Natural waters... and products of decomposition acting as either anodic or cathodic depolarizers. 4.4.1 Forms of Microorganisms. In almost any soil or water, there are... 1945. Based on field tests of the Iron and Steel Institute Corrosion Committee reported by J.C. Hudson (J. Iron Steel Inst., 11, 209, 1943), with
Error reduction in EMG signal decomposition
Kline, Joshua C.
2014-01-01
Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization. PMID:25210159
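The idea of combining multiple decomposition estimates into a more probable set of firing instances can be sketched as a tolerance-based consensus vote. The tolerance, vote threshold, and firing times below are illustrative assumptions, not the paper's actual error-reduction algorithm.

```python
import numpy as np

def consensus_firings(estimates, tol, min_votes):
    """Combine several decomposition estimates of one motor unit's firing
    times: firings reproduced (within `tol` seconds) by at least
    `min_votes` estimates are kept; isolated detections are discarded.
    A toy stand-in for the paper's error-reduction algorithm."""
    all_times = np.sort(np.concatenate(estimates))
    kept, used = [], np.zeros(len(all_times), dtype=bool)
    for i, t in enumerate(all_times):
        if used[i]:
            continue
        group = np.abs(all_times - t) <= tol   # nearby detections agree
        if group.sum() >= min_votes:
            kept.append(all_times[group].mean())
        used |= group
    return np.array(kept)

# three noisy estimates of true firings at 0.10 s and 0.50 s; the second
# estimate contains a false detection at 0.30 s
est = [np.array([0.100, 0.500]),
       np.array([0.101, 0.300, 0.499]),
       np.array([0.099, 0.501])]
detected = consensus_firings(est, tol=0.005, min_votes=2)
print(detected)  # ≈ [0.1, 0.5]: the false detection is voted out
```

Averaging the agreeing detections also reduces the temporal localization error, mirroring the paper's two error categories.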
Ocean Models and Proper Orthogonal Decomposition
NASA Astrophysics Data System (ADS)
Salas-de-Leon, D. A.
2007-05-01
The increase in computational power and the better understanding of mathematical and physical systems have resulted in an increasing number of ocean models. Long ago, modelers were like a secret organization and recognized each other by using secret codes and languages that only a select group of people was able to recognize and understand. Access to computational systems was limited: on one hand, equipment and computer time were expensive and restricted; on the other hand, they required advanced programming languages that not everybody wanted to learn. Nowadays most college freshmen own a personal computer (PC or laptop) and/or have access to more sophisticated computational systems than those available for research in the early 80's. This availability of resources has resulted in much broader access to all kinds of models. Today computer speed and time and the algorithms do not seem to be a problem, even though some models take days to run on small computational systems. Almost every oceanographic institution has its own model; what is more, in the same institution from one office to the next there are different models for the same phenomena, developed by different research members. The results do not differ substantially, since the equations are the same and the solving algorithms are similar. The algorithms, and the grids constructed with them, can be found in textbooks and/or on the internet. Every year more sophisticated models are constructed. The proper orthogonal decomposition is a technique that reduces the number of variables to solve while keeping the model properties, so it can be a very useful tool for diminishing the processing that has to be done on "small" computational systems, making sophisticated models available to a greater community.
Balasubramanian, Madhusudhanan; Žabić, Stanislav; Bowd, Christopher; Thompson, Hilary W.; Wolenski, Peter; Iyengar, S. Sitharama; Karki, Bijaya B.; Zangwill, Linda M.
2009-01-01
Glaucoma is the second leading cause of blindness worldwide. Often the optic nerve head (ONH) glaucomatous damage and ONH changes occur prior to visual field loss and are observable in vivo. Thus, digital image analysis is a promising choice for detecting the onset and/or progression of glaucoma. In this work, we present a new framework for detecting glaucomatous changes in the ONH of an eye using the method of proper orthogonal decomposition (POD). A baseline topograph subspace was constructed for each eye to describe the structure of the ONH of the eye at a reference/baseline condition using POD. Any glaucomatous changes in the ONH of the eye present during a follow-up exam were estimated by comparing the follow-up ONH topography with its baseline topograph subspace representation. Image correspondence measures of L1 and L2 norms, correlation, and image Euclidean distance (IMED) were used to quantify the ONH changes. An ONH topographic library built from the Louisiana State University Experimental Glaucoma study was used to evaluate the performance of the proposed method. The area under the receiver operating characteristic curve (AUC) was used to compare the diagnostic performance of the POD-induced parameters with the parameters of the Topographic Change Analysis (TCA) method. The IMED and L2 norm parameters in the POD framework provided the highest AUCs of 0.94 at 10° field of imaging and 0.91 at 15° field of imaging, compared to the TCA parameters with AUCs of 0.86 and 0.88, respectively. The proposed POD framework captures the instrument measurement variability and inherent structure variability and shows promise for improving our ability to detect glaucomatous change over time in glaucoma management. PMID:19369163
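The baseline-subspace idea lends itself to a compact sketch. The snippet below is a minimal illustration, not the authors' implementation: it builds a POD baseline subspace from a stack of topographies via the SVD and scores a follow-up exam by the L2 norm of its residual outside that subspace. The function names and the synthetic data are hypothetical, and the real framework additionally uses L1, correlation, and IMED measures.

```python
import numpy as np

def baseline_subspace(baseline_maps, rank):
    """Stack baseline topographies as columns and keep the leading left
    singular vectors of the centered stack as an orthonormal POD basis."""
    X = np.column_stack([m.ravel() for m in baseline_maps])
    mean = X.mean(axis=1)
    U, s, Vt = np.linalg.svd(X - mean[:, None], full_matrices=False)
    return U[:, :rank], mean

def change_measure(followup_map, basis, mean):
    """L2 norm of the follow-up topography's residual outside the
    baseline subspace; a large value suggests structural change."""
    x = followup_map.ravel() - mean
    residual = x - basis @ (basis.T @ x)
    return np.linalg.norm(residual)
```

A follow-up map that lies in the span of the baseline variability yields a near-zero residual, while a structurally changed map does not; thresholding this score (or an IMED-style variant) gives the change detector.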
Polarimetric purity and the concept of degree of polarization
NASA Astrophysics Data System (ADS)
Gil, José J.; Norrman, Andreas; Friberg, Ari T.; Setälä, Tero
2018-02-01
The concept of degree of polarization for electromagnetic waves, in its general three-dimensional version, is revisited in the light of the implications of the recent findings on the structure of polarimetric purity and of the existence of nonregular states of polarization [J. J. Gil et al., Phys. Rev. A 95, 053856 (2017), 10.1103/PhysRevA.95.053856]. From the analysis of the characteristic decomposition of a polarization matrix R into an incoherent convex combination of (1) a pure state Rp, (2) a middle state Rm given by an equiprobable mixture of two eigenstates of R, and (3) a fully unpolarized state Ru-3D, it is found that, in general, Rm exhibits nonzero circular and linear degrees of polarization. Therefore, the degrees of linear and circular polarization of R cannot always be assigned to the single totally polarized component Rp. It is shown that the parameter P3D proposed formerly by Samson [J. C. Samson, Geophys. J. R. Astron. Soc. 34, 403 (1973), 10.1111/j.1365-246X.1973.tb02404.x] takes into account, in a proper and objective form, all the contributions to polarimetric purity, namely, the contributions to the linear and circular degrees of polarization of R as well as to the stability of the plane containing its polarization ellipse. Consequently, P3D constitutes a natural representative of the degree of polarimetric purity. Some implications for the common convention for the concept of two-dimensional degree of polarization are also analyzed and discussed.
Generalized Bootstrap Method for Assessment of Uncertainty in Semivariogram Inference
Olea, R.A.; Pardo-Iguzquiza, E.
2011-01-01
The semivariogram and its related function, the covariance, play a central role in classical geostatistics for modeling the average continuity of spatially correlated attributes. Whereas all methods are formulated in terms of the true semivariogram, in practice what can be used are semivariograms and models estimated from samples. A generalized form of the bootstrap method to properly model spatially correlated data is used to advance knowledge about the reliability of empirical semivariograms and semivariogram models based on a single sample. Among the several methods available to generate spatially correlated resamples, we selected one based on the LU decomposition and used several examples to illustrate the approach. The first is a synthetic, isotropic, exhaustive sample following a normal distribution; the second is also synthetic but follows a non-Gaussian random field; and the third is an empirical sample consisting of actual rain-gauge measurements. Results show wider confidence intervals than those found previously by others with inadequate application of the bootstrap. Also, even for the Gaussian example, distributions for estimated semivariogram values and model parameters are positively skewed. In this sense, bootstrap percentile confidence intervals, which are not centered around the empirical semivariogram and do not require distributional assumptions for their construction, provide an achieved coverage similar to the nominal coverage. The latter cannot be achieved by symmetrical confidence intervals based on the standard error, regardless of whether the standard error is estimated from a parametric equation or from the bootstrap. © 2010 International Association for Mathematical Geosciences.
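The LU (Cholesky) resampling step can be sketched compactly. The code below is a hedged illustration, not the authors' implementation: it decorrelates the sample with the lower-triangular factor of a modeled covariance, bootstraps the approximately independent residuals, and re-imposes the spatial correlation. The exponential covariance model and all names are illustrative assumptions.

```python
import numpy as np

def exp_covariance(x, sill=1.0, corr_range=10.0):
    """Exponential covariance model among 1-D sample locations x."""
    d = np.abs(x[:, None] - x[None, :])
    return sill * np.exp(-d / corr_range)

def correlated_resample(z, C, rng):
    """One generalized-bootstrap resample of spatially correlated data z.

    Decorrelate the sample with the lower-triangular Cholesky factor L of
    the modeled covariance C (the 'LU decomposition' method), resample the
    approximately iid residuals with replacement, then restore the spatial
    correlation with L."""
    L = np.linalg.cholesky(C)
    w = np.linalg.solve(L, z)                  # decorrelated residuals
    w_star = rng.choice(w, size=w.size, replace=True)
    return L @ w_star
```

Repeating `correlated_resample` many times and re-estimating the empirical semivariogram on each resample yields the bootstrap distribution from which percentile confidence intervals are read off.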
Djukic, Ika; Zehetner, Franz; Watzinger, Andrea; Horacek, Micha; Gerzabek, Martin H
2013-01-01
Litter decomposition represents one of the largest fluxes in the global terrestrial carbon cycle. The aim of this study was to improve our understanding of the factors governing decomposition in alpine ecosystems and how their responses to changing environmental conditions change over time. Our study area stretches over an elevation gradient of 1000 m on the Hochschwab massif in the Northern Limestone Alps of Austria. We used high-to-low elevation soil translocation to simulate the combined effects of changing climatic conditions, shifting vegetation zones, and altered snow cover regimes. In original and translocated soils, we conducted in situ decomposition experiments with maize litter and studied carbon turnover dynamics as well as temporal response patterns of the pathways of carbon during microbial decomposition over a 2-year incubation period. A simulated mean annual soil warming (through down-slope translocation) of 1.5 and 2.7 °C, respectively, resulted in a significantly accelerated turnover of added maize carbon. Changes in substrate quantity and quality in the course of the decomposition appeared to have less influence on the microbial community composition and its substrate utilization than the prevailing environmental/site conditions, to which the microbial community adapted quickly upon change. In general, microbial community composition and function significantly affected substrate decomposition rates only in the later stage of decomposition when the differentiation in substrate use among the microbial groups became more evident. Our study demonstrated that rising temperatures in alpine ecosystems may accelerate decomposition of litter carbon and also lead to a rapid adaptation of the microbial communities to the new environmental conditions. © 2012 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, Bo; Kowalski, Karol
2017-09-12
The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and efficient storage strategies for integral tensors can significantly reduce the numerical overhead and consequently the time-to-solution of these methods. In this work, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular value decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates the high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For the size of the atomic basis set, N_b, ranging from ~100 up to ~2,000, the observed numerical scaling of our implementation is O(N_b^{2.5-3}), versus the O(N_b^{3-4}) cost of performing a single CD on the two-electron integral tensor in most of the other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic-orbital (AO) two-electron integral tensor from O(N_b^4) to O(N_b^2 log10(N_b)) with moderate decomposition thresholds. The accuracy tests have been performed using ground- and excited-state formulations of the coupled-cluster formalism employing single and double excitations (CCSD) on several benchmark systems, including the C60 molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can generally be set to 10^{-4} to 10^{-3} to give an acceptable compromise between efficiency and accuracy.
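The two-step compression can be illustrated on a generic symmetric positive semidefinite matrix (the reshaped AO integral tensor plays this role in the paper). The sketch below is an assumption-laden toy version, not the authors' code: a fully pivoted incomplete Cholesky factorization truncated at a diagonal threshold, followed by an SVD truncation of the resulting factor.

```python
import numpy as np

def pivoted_cholesky(A, tol=1e-6):
    """Incomplete Cholesky with diagonal pivoting: A ≈ L @ L.T, stopping
    when the largest remaining diagonal element drops below tol."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    d = np.diag(A).copy()          # residual diagonal
    perm = np.arange(n)
    L = np.zeros((n, n))
    k = 0
    while k < n and d[perm[k:]].max() > tol:
        # pivot: bring the largest remaining diagonal entry to position k
        j = k + np.argmax(d[perm[k:]])
        perm[[k, j]] = perm[[j, k]]
        p = perm[k]
        L[p, k] = np.sqrt(d[p])
        rest = perm[k + 1:]
        L[rest, k] = (A[rest, p] - L[rest, :k] @ L[p, :k]) / L[p, k]
        d[rest] -= L[rest, k] ** 2
        k += 1
    return L[:, :k]

def svd_compress(L, svd_tol=1e-4):
    """Follow-up truncated SVD of the Cholesky factor: keep singular
    values above svd_tol, giving A ≈ B @ B.T with fewer columns."""
    U, s, _ = np.linalg.svd(L, full_matrices=False)
    r = int((s > svd_tol).sum())
    return U[:, :r] * s[:r]
```

On a numerically low-rank matrix the Cholesky stage already truncates near the true rank, and the SVD stage further compacts (and orthogonalizes) the factor, mirroring the paper's CD-then-SVD pipeline.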
Gutmann, Bernhard; Glasnov, Toma N; Razzaq, Tahseen; Goessler, Walter; Roberge, Dominique M
2011-01-01
Summary The decomposition of 5-benzhydryl-1H-tetrazole in an N-methyl-2-pyrrolidone/acetic acid/water mixture was investigated under a variety of high-temperature reaction conditions. Employing a sealed Pyrex glass vial and batch microwave conditions at 240 °C, the tetrazole is comparatively stable and complete decomposition to diphenylmethane requires more than 8 h. Similar kinetic data were obtained in conductively heated flow devices with either stainless steel or Hastelloy coils in the same temperature region. In contrast, in a flow instrument that utilizes direct electric resistance heating of the reactor coil, tetrazole decomposition was dramatically accelerated with rate constants increased by two orders of magnitude. When 5-benzhydryl-1H-tetrazole was exposed to 220 °C in this type of flow reactor, decomposition to diphenylmethane was complete within 10 min. The mechanism and kinetic parameters of tetrazole decomposition under a variety of reaction conditions were investigated. A number of possible explanations for these highly unusual rate accelerations are presented. In addition, general aspects of reactor degradation, corrosion and contamination effects of importance to continuous flow chemistry are discussed. PMID:21647324
Adamopoulou, Theodora; Papadaki, Maria I; Kounalakis, Manolis; Vazquez-Carreto, Victor; Pineda-Solano, Alba; Wang, Qingsheng; Mannan, M Sam
2013-06-15
Thermal decomposition of hydroxylamine, NH2OH, was responsible for two serious accidents. However, its reactive behavior and the synergy of the factors affecting its decomposition are still not well understood. In this work, the global enthalpy of hydroxylamine decomposition has been measured in the temperature range of 130-150 °C employing isoperibolic calorimetry. Measurements were performed in a metal reactor, employing 30-80 ml solutions containing 1.4-20 g of pure hydroxylamine (2.8-40 g of the supplied reagent). The measurements showed that increased concentration or temperature results in higher global enthalpies of reaction per unit mass of reactant. At 150 °C, specific enthalpies as high as 8 kJ per gram of hydroxylamine were measured, although in general they were in the range of 3-5 kJ g(-1). The accurate measurement of the generated heat proved to be a cumbersome task because (a) it is difficult to identify the end of decomposition, which, after a fast initial stage, proceeds very slowly, especially at lower temperatures, and (b) the gaseous environment affects the reaction rate. Copyright © 2013 Elsevier B.V. All rights reserved.
Delay decomposition at a single server queue with constant service time and multiple inputs
NASA Technical Reports Server (NTRS)
Ziegler, C.; Schilling, D. L.
1978-01-01
Two networks consisting of single-server queues, each with a constant service time, are considered. The external inputs to each network are assumed to follow some general probability distribution. Several interesting equivalences that exist between the two networks are derived. This leads to the introduction of an important concept in delay decomposition: it is shown that the waiting time experienced by a customer can be decomposed into two basic components, called self-delay and interference delay.
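Waiting times in a FIFO single-server queue with constant service time satisfy the Lindley recursion, which makes the setting easy to simulate. The sketch below is a generic illustration under that assumption, not the paper's derivation: running the recursion on one input stream alone gives that stream's delay in isolation (informally, the self-delay), while running it on the superposition of streams gives the total delay, the excess being attributable to interference.

```python
import numpy as np

def waiting_times(arrivals, service):
    """FIFO waiting times for a single-server queue with a constant
    service time, via the Lindley recursion:
        W[n] = max(0, W[n-1] + service - (A[n] - A[n-1]))."""
    a = np.sort(np.asarray(arrivals, dtype=float))
    W = np.zeros(a.size)
    for n in range(1, a.size):
        W[n] = max(0.0, W[n - 1] + service - (a[n] - a[n - 1]))
    return W
```

For example, a single lightly loaded stream may incur no delay at all, yet the superposition of two such streams does; that extra waiting is what the interference-delay component captures.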
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brezov, D. S.; Mladenova, C. D.; Mladenov, I. M., E-mail: mladenov@bio21.bas.bg
In this paper we obtain the Lie derivatives of the scalar parameters in the generalized Euler decomposition with respect to arbitrary axes under left and right deck transformations. This problem can be directly related to the representation of the angular momentum in quantum mechanics. As a particular example, we calculate the angular momentum and the corresponding quantum Hamiltonian in the standard Euler and Bryan representations. Similarly, in the hyperbolic case, the Laplace-Beltrami operator is retrieved for the Iwasawa decomposition. The case of two axes is considered as well.
33 CFR 17.05-5 - Acceptance and disbursement of gifts.
Code of Federal Regulations, 2010 CFR
2010-07-01
... gifts. 17.05-5 Section 17.05-5 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY GENERAL UNITED STATES COAST GUARD GENERAL GIFT FUND Administration § 17.05-5 Acceptance and disbursement of gifts. (a) The immediate receiving person shall give a proper receipt on the proper form used...
NASA Astrophysics Data System (ADS)
Nigmatullin, R.; Rakhmatullin, R.
2014-12-01
Many experimentalists are accustomed to thinking that independent measurements are uncorrelated and depend only weakly on one another. We reconsider this conventional point of view and show that similar measurements form a strongly correlated sequence of random functions with memory; in other words, successive measurements "remember" each other, at least their nearest neighbors. This observation, justified on real data, makes it possible to fit a wide set of data with the Prony function. The Prony decomposition follows from the quasi-periodic (QP) properties of the measured functions and includes the Fourier transform as a partial case. This new type of decomposition yields a specific amplitude-frequency response (AFR) of the analyzed (random) functions, and each random function is described by fewer fitting parameters than its number of initial data points. The calculated AFR can be considered a generalized Prony spectrum (GPS), which is extremely useful in cases where no simple model of the measured data is available but a quantitative description remains vital. These possibilities open a new way to cluster the initial data, and the new information contained in the data offers a chance for their detailed analysis. Electron paramagnetic resonance (EPR) measurements performed on an empty resonator (pure noise data) and on a resonator containing a sample (CeO2 in our case) confirmed that QP processes exist in reality. We believe that the detection of QP processes is a common feature of many repeated measurements, and this new property of successive measurements should attract the attention of many experimentalists. The aims of this paper are: (1) to formulate general conditions that help to identify and then detect the presence of a QP process in repeated experimental measurements; (2) to find a functional equation, and its solution, that yields the description of the identified QP process; and (3) to suggest a computing algorithm for fitting the QP data to the analytical function that follows from the solution of the corresponding functional equation.
The content of this paper is organized as follows. In Section 2 we address the problems posed in this introductory section; that section also contains the mathematical description of the QP process and an interpretation of the meaning of the generalized Prony spectrum (GPS), which includes the conventional Fourier decomposition as a partial case. Section 3 contains the experimental details associated with acquiring the desired data. Section 4 explains specific features of the application of the general algorithm to concrete data. In Section 5 we summarize the results and outline the perspectives of this approach for the quantitative description of time-dependent random data registered in different complex systems and experimental devices. Here we should note that by a complex system we mean a system for which a conventional model is absent [6]. By simplicity of an acceptable model we mean the proper hypothesis (the "best fit" model), containing a minimal number of fitting parameters, that describes the behavior of the considered system quantitatively. The different approaches that exist nowadays for the description of such systems are collected in the recent review [7].
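A minimal sketch of the classical Prony fit underlying such a decomposition may help fix ideas. This is the textbook method, not the authors' generalized algorithm: least-squares linear prediction estimates the characteristic polynomial, its roots give the (possibly complex, damped-oscillatory) modes, and a final linear least-squares solve recovers the amplitudes.

```python
import numpy as np

def prony(y, p):
    """Classical Prony fit of y[n] ≈ sum_k c_k * z_k**n with p modes.

    Step 1: least-squares linear prediction for the coefficients of the
            characteristic polynomial x**p + a1*x**(p-1) + ... + ap.
    Step 2: the polynomial roots are the modes z_k.
    Step 3: linear least squares for the amplitudes c_k."""
    y = np.asarray(y, dtype=float)
    N = y.size
    # rows n = p..N-1 of: y[n-1]*a1 + ... + y[n-p]*ap = -y[n]
    A = np.column_stack([y[p - 1 - j: N - 1 - j] for j in range(p)])
    a, *_ = np.linalg.lstsq(A, -y[p:], rcond=None)
    z = np.roots(np.concatenate(([1.0], a)))
    V = np.vander(z, N, increasing=True).T      # V[n, k] = z_k**n
    c, *_ = np.linalg.lstsq(V, y, rcond=None)
    return z, c
```

The moduli and arguments of the recovered modes z_k give the damping factors and frequencies whose collection plays the role of an amplitude-frequency response; a pure Fourier decomposition corresponds to modes constrained to the unit circle.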
Proper Conformal Killing Vectors in Kantowski-Sachs Metric
NASA Astrophysics Data System (ADS)
Hussain, Tahir; Farhan, Muhammad
2018-04-01
This paper deals with the existence of proper conformal Killing vectors (CKVs) in the Kantowski-Sachs metric. Subject to some integrability conditions, the general form of the vector field generating CKVs and of the conformal factor is presented. The integrability conditions are solved in general as well as in some particular cases to show that the non-conformally flat Kantowski-Sachs metric admits two proper CKVs, while it admits a 15-dimensional Lie algebra of CKVs when it becomes conformally flat. The inheriting conformal Killing vectors (ICKVs), which map fluid lines conformally, are also investigated.
Massively Parallel Dantzig-Wolfe Decomposition Applied to Traffic Flow Scheduling
NASA Technical Reports Server (NTRS)
Rios, Joseph Lucio; Ross, Kevin
2009-01-01
Optimal scheduling of air traffic over the entire National Airspace System is a computationally difficult task. To speed computation, Dantzig-Wolfe decomposition is applied to a known linear integer programming approach for assigning delays to flights. The optimization model is proven to have the block-angular structure necessary for Dantzig-Wolfe decomposition. The subproblems for this decomposition are solved in parallel via independent computation threads. Experimental evidence suggests that as the number of subproblems/threads increases (and their respective sizes decrease), the solution quality, convergence, and runtime improve. A demonstration of this is provided by using one flight per subproblem, which is the finest possible decomposition. This results in thousands of subproblems and associated computation threads. This massively parallel approach is compared to one with few threads and to standard (non-decomposed) approaches in terms of solution quality and runtime. Since this method generally provides a non-integral (relaxed) solution to the original optimization problem, two heuristics are developed to generate an integral solution. Dantzig-Wolfe followed by these heuristics can provide a near-optimal (sometimes optimal) solution to the original problem hundreds of times faster than standard (non-decomposed) approaches. In addition, when massive decomposition is employed, the solution is shown to be more likely integral, which obviates the need for an integerization step. These results indicate that nationwide, real-time, high fidelity, optimal traffic flow scheduling is achievable for (at least) 3 hour planning horizons.
Steganography based on pixel intensity value decomposition
NASA Astrophysics Data System (ADS)
Abdulla, Alan Anwar; Sellahewa, Harin; Jassim, Sabah A.
2014-05-01
This paper focuses on steganography based on pixel intensity value decomposition. A number of existing schemes such as binary, Fibonacci, Prime, Natural, Lucas, and Catalan-Fibonacci (CF) are evaluated in terms of payload capacity and stego quality. A new technique based on a specific representation is proposed to decompose pixel intensity values into 16 (virtual) bit-planes suitable for embedding purposes. The proposed decomposition has a desirable property whereby the sum of all bit-planes does not exceed the maximum pixel intensity value, i.e. 255. Experimental results demonstrate that the proposed technique offers an effective compromise between payload capacity and stego quality of existing embedding techniques based on pixel intensity value decomposition. Its capacity is equal to that of binary and Lucas, while it offers a higher capacity than Fibonacci, Prime, Natural, and CF when the secret bits are embedded in 1st Least Significant Bit (LSB). When the secret bits are embedded in higher bit-planes, i.e., 2nd LSB to 8th Most Significant Bit (MSB), the proposed scheme has more capacity than Natural numbers based embedding. However, from the 6th bit-plane onwards, the proposed scheme offers better stego quality. In general, the proposed decomposition scheme has less effect in terms of quality on pixel value when compared to most existing pixel intensity value decomposition techniques when embedding messages in higher bit-planes.
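One of the existing schemes evaluated above, Fibonacci decomposition, is easy to sketch. The code below is an illustrative Zeckendorf-style greedy decomposition of a pixel intensity into Fibonacci-weighted bit-planes; it is not the paper's proposed 16-plane representation, whose exact weights are not specified here. The greedy rule guarantees one 0/1 plane per weight with no two adjacent ones, which is what makes higher-plane embedding distortions bounded.

```python
def fib_weights(max_value=255):
    """Fibonacci weights 1, 2, 3, 5, ... not exceeding max_value."""
    w = [1, 2]
    while w[-1] + w[-2] <= max_value:
        w.append(w[-1] + w[-2])
    return w

def fib_decompose(value, weights=None):
    """Zeckendorf decomposition: greedy from the largest weight down,
    producing a 0/1 digit per weight with no two adjacent 1s."""
    if weights is None:
        weights = fib_weights()
    bits = [0] * len(weights)
    for i in range(len(weights) - 1, -1, -1):
        if weights[i] <= value:
            bits[i] = 1
            value -= weights[i]
    return bits

def fib_compose(bits, weights=None):
    """Recombine the bit-planes into a pixel intensity value."""
    if weights is None:
        weights = fib_weights()
    return sum(b * w for b, w in zip(bits, weights))
```

Embedding then amounts to overwriting the digit of a chosen plane (with a validity check on the recomposed value), so the stego distortion at plane k is at most the k-th Fibonacci weight rather than the corresponding power of two.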
Unsteady features of the flow on a bump in transonic environment
NASA Astrophysics Data System (ADS)
Budovsky, A. D.; Sidorenko, A. A.; Polivanov, P. A.; Vishnyakov, O. I.; Maslov, A. A.
2016-10-01
The study deals with an experimental investigation of the unsteady features of separated flow over a profiled bump in a transonic environment. The experiments were conducted in the T-325 wind tunnel of ITAM for the following flow conditions: P0 = 1 bar, T0 = 291 K. The base flow around the model was studied by schlieren visualization, steady and unsteady wall pressure measurements, and PIV. The experimental data obtained using PIV are analyzed with the Proper Orthogonal Decomposition (POD) technique to investigate the underlying unsteady flow organization, as revealed by the POD eigenmodes. The data obtained show that the flow pulsations revealed upstream and downstream of the shock wave are correlated and interconnected.
Recursive time-varying filter banks for subband image coding
NASA Technical Reports Server (NTRS)
Smith, Mark J. T.; Chung, Wilson C.
1992-01-01
Filter banks and wavelet decompositions that employ recursive filters have been considered previously and are recognized for their efficiency in partitioning the frequency spectrum. This paper presents an analysis of a new infinite impulse response (IIR) filter bank in which these computationally efficient filters may be changed adaptively in response to the input. The filter bank is presented and discussed in the context of finite-support signals with the intended application in subband image coding. In the absence of quantization errors, exact reconstruction can be achieved and by the proper choice of an adaptation scheme, it is shown that IIR time-varying filter banks can yield improvement over conventional ones.
Fingerprint recognition of wavelet-based compressed images by neuro-fuzzy clustering
NASA Astrophysics Data System (ADS)
Liu, Ti C.; Mitra, Sunanda
1996-06-01
Image compression plays a crucial role in many important and diverse applications requiring efficient storage and transmission. This work mainly focuses on wavelet transform (WT) based compression of fingerprint images and the subsequent classification of the reconstructed images. The algorithm developed involves multiresolution wavelet decomposition, uniform scalar quantization, entropy and run-length encoding/decoding, and K-means clustering of invariant moments as fingerprint features. The performance of the WT-based compression algorithm has been compared with the current JPEG image compression standard. Simulation results show that WT outperforms JPEG in the high compression ratio region and that the reconstructed fingerprint images yield proper classification.
Plants mediate soil organic matter decomposition in response to sea level rise.
Mueller, Peter; Jensen, Kai; Megonigal, James Patrick
2016-01-01
Tidal marshes have a large capacity for producing and storing organic matter, making their role in the global carbon budget disproportionate to land area. Most of the organic matter stored in these systems is in soils where it contributes 2-5 times more to surface accretion than an equal mass of minerals. Soil organic matter (SOM) sequestration is the primary process by which tidal marshes become perched high in the tidal frame, decreasing their vulnerability to accelerated relative sea level rise (RSLR). Plant growth responses to RSLR are well understood and represented in century-scale forecast models of soil surface elevation change. We understand far less about the response of SOM decomposition to accelerated RSLR. Here we quantified the effects of flooding depth and duration on SOM decomposition by exposing planted and unplanted field-based mesocosms to experimentally manipulated relative sea level over two consecutive growing seasons. SOM decomposition was quantified as CO2 efflux, with plant- and SOM-derived CO2 separated via the δ13C of CO2. Despite the dominant paradigm that decomposition rates are inversely related to flooding, SOM decomposition in the absence of plants was not sensitive to flooding depth and duration. The presence of plants had a dramatic effect on SOM decomposition, increasing SOM-derived CO2 flux by up to 267% and 125% (in 2012 and 2013, respectively) compared to unplanted controls in the two growing seasons. Furthermore, plant stimulation of SOM decomposition was strongly and positively related to plant biomass and in particular aboveground biomass. We conclude that SOM decomposition rates are not directly driven by relative sea level and its effect on oxygen diffusion through soil, but indirectly by plant responses to relative sea level. If this result applies more generally to tidal wetlands, it has important implications for models of SOM accumulation and surface elevation change in response to accelerated RSLR.
© 2015 John Wiley & Sons Ltd.
Peng, Yan; Yang, Wanqin; Yue, Kai; Tan, Bo; Huang, Chunping; Xu, Zhenfeng; Ni, Xiangyin; Zhang, Li; Wu, Fuzhong
2018-06-17
Plant litter decomposition in forested soils and watersheds is an important source of phosphorus (P) for plants in forest ecosystems. Understanding P dynamics during litter decomposition in forested aquatic and terrestrial ecosystems will be of great importance for a better understanding of nutrient cycling across forest landscapes. However, although many studies addressing litter decomposition have been carried out, generalizations across aquatic and terrestrial ecosystems regarding the temporal dynamics of P loss during litter decomposition remain elusive. We conducted a two-year field experiment using the litterbag method in both aquatic (streams and riparian zones) and terrestrial (forest floors) ecosystems in an alpine forest on the eastern Tibetan Plateau. By using multigroup comparisons of structural equation models (SEM) over different litter mass-loss intervals, we explicitly assessed the direct and indirect effects of several biotic and abiotic drivers on P loss across different decomposition stages. The results suggested that (1) P concentration in decomposing litter showed similar patterns of early increase and later decrease across different species and ecosystem types; (2) P loss shared a common hierarchy of drivers across different ecosystem types, with litter chemical dynamics mainly having direct effects but environment and initial litter quality having both direct and indirect effects; (3) when assessed at the temporal scale, the effects of initial litter quality appeared to increase in late decomposition stages, while litter chemical dynamics showed consistent significant effects in almost all decomposition stages across aquatic and terrestrial ecosystems; and (4) microbial diversity showed significant effects on P loss, but its effects were lower compared with other drivers.
Our results highlight the importance of including spatiotemporal variations and indicate the possibility of integrating aquatic and terrestrial decomposition into a common framework for future construction of models that account for the temporal dynamics of P in decomposing litter. Copyright © 2018 Elsevier B.V. All rights reserved.
Order reduction, identification and localization studies of dynamical systems
NASA Astrophysics Data System (ADS)
Ma, Xianghong
In this thesis methods are developed for performing order reduction, system identification and induction of nonlinear localization in complex mechanical dynamic systems. General techniques are proposed for constructing low-order models of linear and nonlinear mechanical systems; in addition, novel mechanical designs are considered for inducing nonlinear localization phenomena for the purpose of enhancing their dynamical performance. The thesis is in three major parts. In the first part, the transient dynamics of an impulsively loaded multi-bay truss is numerically computed by employing the Direct Global Matrix (DGM) approach. The approach is applicable to large-scale flexible structures with periodicity. Karhunen-Loève (K-L) decomposition is used to discretize the dynamics of the truss and to create low-order models of the truss. The leading K-L modes are recovered by an experiment, which shows the feasibility of the K-L based order reduction technique. In the second part of the thesis, nonlinear localization in dynamical systems is studied through two applications. In the seismic base isolation study, it is shown that the dynamics are sensitive to the presence of nonlinear elements and that passive motion confinement can be induced under proper design. In the coupled rod system, numerical simulation of the transient dynamics shows that a nonlinear backlash spring can induce either nonlinear localization or delocalization in the form of beat phenomena. K-L decomposition and Poincaré maps are utilized to study the nonlinear effects. The study shows that nonlinear localization can be induced in complex structures through backlash. In the third and final part of the thesis, a new technique based on Green's function methods is proposed to identify the dynamics of practical bolted joints. By modeling the difference between the dynamics of the bolted structure and the corresponding unbolted one, one constructs a nonparametric model for the joint dynamics.
Two applications are given with a bolted beam and a truss joint in order to show the applicability of the technique.
21 CFR 610.62 - Proper name; package label; legible type.
Code of Federal Regulations, 2010 CFR
2010-04-01
... (CONTINUED) BIOLOGICS GENERAL BIOLOGICAL PRODUCTS STANDARDS Labeling Standards § 610.62 Proper name; package... contrast in color value between the proper name and the background shall be at least as great as the color value between the trademark and trade name and the background. Typography, layout, contrast, and other...
NASA Astrophysics Data System (ADS)
Opsahl, Stephen; Benner, Ronald
1995-12-01
Long-term subaqueous decomposition patterns of five different vascular plant tissues, including mangrove leaves and wood (Avicennia germinans), cypress needles and wood (Taxodium distichum) and smooth cordgrass (Spartina alterniflora), were followed for a period of 4.0 years, representing the longest litter bag decomposition study to date. All tissues decomposed under identical conditions and final mass losses were 97, 68, 86, 39, and 93%, respectively. Analysis of the lignin component of herbaceous tissues using alkaline CuO oxidation was complicated by the presence of a substantial ester-bound phenol component composed primarily of cinnamyl phenols. To overcome this problem, we introduce a new parameter to represent lignin, Λ6. Λ6 comprises only the six syringyl and vanillyl phenols and was found to be much less sensitive to diagenetic variation than the commonly used parameter Λ, which includes the cinnamyl phenols. Patterns of change in lignin content were strongly dependent on tissue type, ranging from 77% enrichment in smooth cordgrass to 6% depletion in cypress needles. In contrast, depletion of cutin was extensive (65-99%) in all herbaceous tissues. Despite these differences in the overall reactivity of lignin and cutin, both macromolecules were extensively degraded during the decomposition period. The long-term decomposition series also provided very useful information about the compositional parameters which are derived from the specific oxidation products of both lignin and cutin. The relative lability of ester-bound cinnamyl phenols compromised their use in parameters to distinguish woody from herbaceous plant debris. The dimer to monomer ratios of lignin-derived phenols indicated that most intermonomeric linkages in lignin degraded at similar rates.
Acid to aldehyde ratios of vanillyl and syringyl phenols became elevated, particularly during the latter stages of decomposition supporting the use of these parameters as indicators of diagenetic alteration. Given the observation that cutin-derived source indicator parameters were generally more sensitive to diagenetic alteration than those of lignin, we suggest the distributional patterns of cutin-derived acids and their associated positional isomers may be most useful for tissue-specific distinctions complementing the general categorical information obtained from lignin phenol analysis alone.
NASA Astrophysics Data System (ADS)
Goebes, Philipp; Seitz, Steffen; Kühn, Peter; Scholten, Thomas
2016-04-01
Soil erosion is an important pathway for the loss of carbon (C) from its pools in the soil. If the C in eroded sediment and runoff is not only related to soil pools but also results, additively, from decomposition of the litter cover, the system becomes more complex. The role of these amounts for C cycling in a forest environment is not yet properly understood, and thus the aim of this study was to investigate the role of leaf litter diversity, litter cover and soil fauna in C redistribution during interrill erosion. We established 96 runoff plots that were covered with seven native leaf litter species in no-species (bare ground), 1-species, 2-species and 4-species mixtures. Every second runoff plot was equipped with a fauna extinction feature to investigate the role of soil meso- and macrofauna. Erosion processes were initiated using a rainfall simulator at two time steps (summer 2012 and autumn 2012) to investigate the role of leaf litter decomposition in C redistribution. C fluxes during 20 min of rainfall simulation were 99.13 ± 94.98 g/m². C fluxes and C contents were both affected by soil fauna. C fluxes were higher in the presence of soil fauna, due to loosening and slackening of the soil surface rather than to faster decomposition of leaves. In contrast, C contents were higher in the absence of soil fauna, possibly resulting from a missing dilution effect in the top soil layer. Leaf litter diversity did not affect C fluxes, but indirectly affected C contents, as it increased the soil fauna effect at higher leaf litter diversity due to superior food supply. Initial C contents in the soil mainly determined those of the eroded sediment. For future research, it will be essential to introduce a long-term decomposition experiment to gain further insight into the processes of C redistribution.
Formation and characterization of mullite fibers produced by inviscid melt-spinning
NASA Astrophysics Data System (ADS)
Xiao, Zhijun
IMS is a technique used to form fibers from low viscosity melts by means of stream stabilization in a reactant gas, in this case propane. Mullite (3Al2O3·2SiO2) was selected as the material to be fiberized. A stable mullite melt was obtained at 2000 °C. Some short fibers and shot were formed in the fiber forming experiments. Crucible material selection is a prerequisite for proper application of the IMS technique. The effects of two crucible materials, graphite and boron nitride, were studied. A carbothermal reaction occurred between the mullite melt and the graphite crucible. Boron nitride was selected as the crucible material because a relatively stable melt could be obtained. The operating environment is another factor that affects IMS mullite fiber formation. The effects of vacuum, nitrogen and argon on mullite melting behavior were studied. Argon gas was selected as the operating environment. A 2³ factorial design was developed to study the effects of temperature, holding time at temperature, and heating rate on mullite melting behavior. The effects of the variables and their interactions were calculated. Temperature had the largest positive effect, followed by holding time; heating rate had only a small negative effect. A detailed investigation of the mullite decomposition mechanism and kinetics was conducted in this work. A solid-state reaction mechanism was proposed. The kinetic results and IR analysis support the proposed mechanism. The carbon source inside the furnace led to the decomposition of mullite. A feasible experimental technique was developed to prevent the decomposition of mullite, and experiments with this design completely controlled the mullite decomposition. The short fibers, shot and some side products formed in the fiber forming experiments were characterized using XRD, XRF and SEM-EDS. The composition of the short fibers and shot was in the range of mullite composition.
XRD showed that the diffraction pattern of shot is that of mullite.
Integrand reduction for two-loop scattering amplitudes through multivariate polynomial division
NASA Astrophysics Data System (ADS)
Mastrolia, Pierpaolo; Mirabella, Edoardo; Ossola, Giovanni; Peraro, Tiziano
2013-04-01
We describe the application of a novel approach for the reduction of scattering amplitudes, based on multivariate polynomial division, which we have recently presented. This technique yields the complete integrand decomposition for arbitrary amplitudes, regardless of the number of loops. It allows for the determination of the residue at any multiparticle cut, whose knowledge is a mandatory prerequisite for applying the integrand-reduction procedure. By using the division modulo Gröbner basis, we can derive a simple integrand recurrence relation that generates the multiparticle pole decomposition for integrands of arbitrary multiloop amplitudes. We apply the new reduction algorithm to the two-loop planar and nonplanar diagrams contributing to the five-point scattering amplitudes in N=4 super Yang-Mills and N=8 supergravity in four dimensions, whose numerator functions contain up to rank-two terms in the integration momenta. We determine all polynomial residues parametrizing the cuts of the corresponding topologies and subtopologies. We obtain the integral basis for the decomposition of each diagram from the polynomial form of the residues. Our approach is well suited for a seminumerical implementation, and its general mathematical properties provide an effective algorithm for the generalization of the integrand-reduction method to all orders in perturbation theory.
Spectral decomposition of nonlinear systems with memory
NASA Astrophysics Data System (ADS)
Svenkeson, Adam; Glaz, Bryan; Stanton, Samuel; West, Bruce J.
2016-02-01
We present an alternative approach to the analysis of nonlinear systems with long-term memory that is based on the Koopman operator and a Lévy transformation in time. Memory effects are considered to be the result of interactions between a system and its surrounding environment. The analysis leads to the decomposition of a nonlinear system with memory into modes whose temporal behavior is anomalous and lacks a characteristic scale. On average, the time evolution of a mode follows a Mittag-Leffler function, and the system can be described using the fractional calculus. The general theory is demonstrated on the fractional linear harmonic oscillator and the fractional nonlinear logistic equation. When analyzing data from an ill-defined (black-box) system, the spectral decomposition in terms of Mittag-Leffler functions that we propose may uncover inherent memory effects through identification of a small set of dynamically relevant structures that would otherwise be obscured by conventional spectral methods. Consequently, the theoretical concepts we present may be useful for developing more general methods for numerical modeling that are able to determine whether observables of a dynamical system are better represented by memoryless operators, or operators with long-term memory in time, when model details are unknown.
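The Mittag-Leffler behavior named in the abstract above can be illustrated directly. A minimal sketch (not from the paper) of the one-parameter Mittag-Leffler function via its power series; it reduces to the exponential for α = 1 and to a cosine for α = 2 with negative argument, which makes it easy to sanity-check:

```python
from math import gamma

def mittag_leffler(z, alpha, n_terms=60):
    """One-parameter Mittag-Leffler function E_alpha(z) = sum z^k / Gamma(alpha*k + 1).
    Series evaluation; adequate for moderate |z|, not for large arguments."""
    return sum(z**k / gamma(alpha * k + 1) for k in range(n_terms))
```

For α = 1 this recovers exp(z), and E_2(-t²) = cos(t), the two limiting cases that bracket the anomalous relaxation regimes 0 < α < 1 discussed in the text.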
Feedback stabilization of an oscillating vertical cylinder by POD Reduced-Order Model
NASA Astrophysics Data System (ADS)
Tissot, Gilles; Cordier, Laurent; Noack, Bernd R.
2015-01-01
The objective is to demonstrate the use of reduced-order models (ROM) based on proper orthogonal decomposition (POD) to stabilize the flow over a vertically oscillating circular cylinder in the laminar regime (Reynolds number equal to 60). The 2D Navier-Stokes equations are first solved with a finite element method, in which the moving cylinder is introduced via an ALE method. Since in fluid-structure interaction, the POD algorithm cannot be applied directly, we implemented the fictitious domain method of Glowinski et al. [1] where the solid domain is treated as a fluid undergoing an additional constraint. The POD-ROM is classically obtained by projecting the Navier-Stokes equations onto the first POD modes. At this level, the cylinder displacement is enforced in the POD-ROM through the introduction of Lagrange multipliers. For determining the optimal vertical velocity of the cylinder, a linear quadratic regulator framework is employed. After linearization of the POD-ROM around the steady flow state, the optimal linear feedback gain is obtained as solution of a generalized algebraic Riccati equation. Finally, when the optimal feedback control is applied, it is shown that the flow converges rapidly to the steady state. In addition, a vanishing control is obtained proving the efficiency of the control approach.
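The linearize-then-LQR step in the abstract above can be sketched compactly. The following toy (numpy only, with the Riccati equation solved via the stable invariant subspace of the Hamiltonian matrix) is an illustration of the generic machinery, not the authors' POD-ROM; the 2x2 system standing in for the linearized ROM is invented for the example:

```python
import numpy as np

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR gain K = R^-1 B^T P, with P obtained from the
    algebraic Riccati equation via the stable eigenvectors of the
    Hamiltonian matrix (a numpy-only alternative to scipy.linalg.solve_continuous_are)."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]             # eigenvectors of the n stable eigenvalues
    X1, X2 = stable[:n, :], stable[n:, :]
    P = np.real(X2 @ np.linalg.inv(X1))   # Riccati solution
    return Rinv @ B.T @ P

# Toy reduced-order system: unstable open loop, one control input
# (playing the role of the cylinder's vertical velocity).
A = np.array([[0.0, 1.0],
              [2.0, -0.1]])
B = np.array([[0.0],
              [1.0]])
K = lqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))
closed_loop = A - B @ K                   # feedback u = -K x stabilizes the ROM
```

With the optimal feedback applied, all closed-loop eigenvalues move into the left half-plane, mirroring the convergence to the steady state reported in the abstract.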
General Relativity without paradigm of space-time covariance, and resolution of the problem of time
NASA Astrophysics Data System (ADS)
Soo, Chopin; Yu, Hoi-Lai
2014-01-01
The framework of a theory of gravity from the quantum to the classical regime is presented. The paradigm shift from full space-time covariance to spatial diffeomorphism invariance, together with clean decomposition of the canonical structure, yield transparent physical dynamics and a resolution of the problem of time. The deep divide between quantum mechanics and conventional canonical formulations of quantum gravity is overcome with a Schrödinger equation for quantum geometrodynamics that describes evolution in intrinsic time. Unitary time development with gauge-invariant temporal ordering is also viable. All Kuchar observables become physical; and classical space-time, with direct correlation between its proper times and intrinsic time intervals, emerges from constructive interference. The framework not only yields a physical Hamiltonian for Einstein's theory, but also prompts natural extensions and improvements towards a well behaved quantum theory of gravity. It is a consistent canonical scheme to discuss Horava-Lifshitz theories with intrinsic time evolution, and of the many possible alternatives that respect 3-covariance (rather than the more restrictive 4-covariance of Einstein's theory), Horava's "detailed balance" form of the Hamiltonian constraint is essentially pinned down by this framework. Issues in quantum gravity that depend on radiative corrections and the rigorous definition and regularization of the Hamiltonian operator are not addressed in this work.
Cappella, A; Castoldi, E; Sforza, C; Cattaneo, C
2014-11-01
Forensic anthropologists and pathologists are increasingly asked to answer questions on bone trauma. However, limitations still exist concerning the proper interpretation of bone fractures and bone lesions in general. Known skeletal collections deriving from cadavers (victims of violent deaths) that underwent autopsy, and for which the autopsy reports are available, are obvious sources of information on what happens to bone trauma when subjected to taphonomic variables such as burial, decomposition, and postmortem chemical and mechanical insults; such skeletal collections are, however, still quite rare. This study presents the results of a comparative analysis between the autopsy findings on seven cadavers (six of which were victims of blunt, sharp or gunshot wounds) and those of the anthropological assessment performed 20 years later on the exhumed dry bones (part of the Milano skeletal collection). The investigation allowed us to verify how perimortem sharp, blunt and gunshot lesions appear after a long inhumation period, whether they are still recognizable, and how many lesions are no longer detectable or were not detectable at all compared to the autopsy report. It also underlines the importance of creating skeletal collections with known information on cause of death and trauma. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Large Eddy Simulations of the Vortex-Flame Interaction in a Turbulent Swirl Burner
NASA Astrophysics Data System (ADS)
Lu, Zhen; Elbaz, Ayman M.; Hernandez Perez, Francisco E.; Roberts, William L.; Im, Hong G.
2017-11-01
A series of swirl-stabilized partially premixed flames are simulated using large eddy simulation (LES) along with the flamelet/progress variable (FPV) model for combustion. The target burner has separate and concentric methane and air streams, with methane in the center and the air flow swirled through the tangential inlets. The flame is lifted in a straight quarl, leading to a partially premixed state. By fixing the swirl number and air flow rate, the fuel jet velocity is reduced to study flame stability as the flame approaches the lean blow-off limit. Simulation results are compared against measured data, yielding a generally good agreement on the velocity, temperature, and species mass fraction distributions. The proper orthogonal decomposition (POD) method is applied to the velocity and progress variable fields to analyze the dominant unsteady flow structure, indicating a coupling between the precessing vortex core (PVC) and the flame. The effects of vortex-flame interactions on the stabilization of the lifted swirling flame are also investigated: the contributions of convection, enhanced mixing, and flame stretching introduced by the PVC are assessed based on the numerical results. This research work was sponsored by King Abdullah University of Science and Technology (KAUST) and used computational resources at the KAUST Supercomputing Laboratory.
Educating Children to Proper Eating Habits in the Classroom.
ERIC Educational Resources Information Center
King, Marian
A brief discussion of proper nutrition in general precedes an examination of proper nutrition for school children and the specification of nutrition education objectives for kindergarten or first grade students. The remainder of the paper delineates food projects by which objectives can be realized (for example, snack necklace, jack-o-lantern…
1 CFR 51.9 - What is the proper language of incorporation?
Code of Federal Regulations, 2010 CFR
2010-01-01
... 1 General Provisions 1 2010-01-01 2010-01-01 false What is the proper language of incorporation... What is the proper language of incorporation? (a) The language incorporating a publication by reference... is intended and completed by the final rule document in which it appears. (b) The language...
1 CFR 51.9 - What is the proper language of incorporation?
Code of Federal Regulations, 2011 CFR
2011-01-01
... 1 General Provisions 1 2011-01-01 2011-01-01 false What is the proper language of incorporation... What is the proper language of incorporation? (a) The language incorporating a publication by reference... is intended and completed by the final rule document in which it appears. (b) The language...
NASA Technical Reports Server (NTRS)
Ziegler, C.; Schilling, D. L.
1977-01-01
Two networks consisting of single server queues, each with a constant service time, are considered. The external inputs to each network are assumed to follow some general probability distribution. Several interesting equivalencies that exist between the two networks considered are derived. This leads to the introduction of an important concept in delay decomposition. It is shown that the waiting time experienced by a customer can be decomposed into two basic components called self delay and interference delay.
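The setting above, single-server FIFO queues with constant service time and generally distributed interarrivals, can be simulated with the Lindley recursion. This sketch computes successive customer waiting times; the self-delay/interference-delay decomposition itself is the paper's contribution and is not reproduced here:

```python
import random

def waiting_times(n, service, interarrival, seed=0):
    """Waiting time of each of n customers in a FIFO single-server queue with
    constant service time, via the Lindley recursion
    W[k+1] = max(0, W[k] + service - T[k]), T[k] the k-th interarrival time."""
    rng = random.Random(seed)
    w, out = 0.0, []
    for _ in range(n):
        out.append(w)
        w = max(0.0, w + service - interarrival(rng))
    return out

# Deterministic check: arrivals every 1.0 with service 2.0 -> waits grow linearly.
ws = waiting_times(4, service=2.0, interarrival=lambda r: 1.0)
```

Replacing the lambda with e.g. `lambda r: r.expovariate(0.4)` gives the general-input case the abstract considers.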
Microbial Signatures of Cadaver Gravesoil During Decomposition.
Finley, Sheree J; Pechal, Jennifer L; Benbow, M Eric; Robertson, B K; Javan, Gulnaz T
2016-04-01
Genomic studies have estimated there are approximately 10³-10⁶ bacterial species per gram of soil. The microbial species found in soil associated with decomposing human remains (gravesoil) have been investigated and recognized as potential molecular determinants for estimates of time since death. The nascent era of high-throughput amplicon sequencing of the conserved 16S ribosomal RNA (rRNA) gene region of gravesoil microbes is allowing research to expand beyond more subjective empirical methods used in forensic microbiology. The goal of the present study was to evaluate microbial communities and identify taxonomic signatures associated with the gravesoil of human cadavers. Using 16S rRNA gene amplicon-based sequencing, soil microbial communities were surveyed from 18 cadavers, placed on the surface or buried, that were allowed to decompose over a range of decomposition time periods (3-303 days). Surface soil microbial communities showed a decreasing trend in taxon richness, diversity, and evenness over decomposition, while buried cadaver-soil microbial communities demonstrated increasing taxon richness, consistent diversity, and decreasing evenness. The results show that ubiquitous Proteobacteria was confirmed as the most abundant phylum in all gravesoil samples. Surface cadaver-soil communities demonstrated a decrease in Acidobacteria and an increase in Firmicutes relative abundance over decomposition, while buried soil communities were consistent in their community composition throughout decomposition. A better understanding of microbial community structure and its shifts over time may be important for advancing general knowledge of decomposition soil ecology and its potential use during forensic investigations.
Generalization of information-based concepts in forecast verification
NASA Astrophysics Data System (ADS)
Tödter, J.; Ahrens, B.
2012-04-01
This work deals with information-theoretical methods in probabilistic forecast verification. Recent findings concerning the Ignorance Score are briefly reviewed, and the generalization to continuous forecasts is then shown. For ensemble forecasts, the presented measures can be calculated exactly. The Brier Score (BS) and its generalizations to the multi-categorical Ranked Probability Score (RPS) and to the Continuous Ranked Probability Score (CRPS) are the prominent verification measures for probabilistic forecasts. In particular, their decompositions into measures quantifying the reliability, resolution and uncertainty of the forecasts are attractive. Information theory sets up the natural framework for forecast verification. Recently, it has been shown that the BS is a second-order approximation of the information-based Ignorance Score (IGN), which also contains easily interpretable components and can likewise be generalized to a ranked version (RIGN). Here, the IGN, its generalizations and decompositions are systematically discussed in analogy to the variants of the BS. Additionally, a Continuous Ranked IGN (CRIGN) is introduced in analogy to the CRPS. The applicability and usefulness of the conceptually appealing CRIGN is illustrated, together with an algorithm to evaluate its components reliability, resolution, and uncertainty for ensemble-generated forecasts. This is also directly applicable to the more traditional CRPS.
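The reliability/resolution/uncertainty decomposition of the BS referred to above (Murphy's decomposition) and the Ignorance Score can both be sketched for binary forecasts. This is a generic textbook illustration with fixed-width probability bins, not the paper's CRIGN algorithm:

```python
import numpy as np

def brier_decomposition(p, y, n_bins=10):
    """Murphy decomposition BS = REL - RES + UNC for forecast probabilities p
    of binary outcomes y; exact when forecasts are constant within each bin."""
    p, y = np.asarray(p, float), np.asarray(y, float)
    n, ybar = len(p), y.mean()
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(p, edges) - 1, 0, n_bins - 1)
    rel = res = 0.0
    for k in range(n_bins):
        m = idx == k
        if m.any():
            nk, pk, ok = m.sum(), p[m].mean(), y[m].mean()
            rel += nk * (pk - ok) ** 2      # calibration error within the bin
            res += nk * (ok - ybar) ** 2    # deviation from climatology
    return rel / n, res / n, ybar * (1.0 - ybar)

def ignorance(p, y):
    """Ignorance score: mean negative log2-likelihood assigned to the binary outcome."""
    p, y = np.asarray(p, float), np.asarray(y, float)
    return float(np.mean(-(y * np.log2(p) + (1 - y) * np.log2(1 - p))))
```

Summing the three components recovers the plain Brier score mean((p - y)²), which is the identity the IGN decomposition mirrors in information-theoretic terms.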
Decomposition of forest products buried in landfills.
Wang, Xiaoming; Padgett, Jennifer M; Powell, John S; Barlaz, Morton A
2013-11-01
The objective of this study was to investigate the decomposition of selected wood and paper products in landfills. The decomposition of these products under anaerobic landfill conditions results in the generation of biogenic carbon dioxide and methane, while the un-decomposed portion represents a biogenic carbon sink. Information on the decomposition of these municipal waste components is used to estimate national methane emissions inventories, for attribution of carbon storage credits, and to assess the life-cycle greenhouse gas impacts of wood and paper products. Hardwood (HW), softwood (SW), plywood (PW), oriented strand board (OSB), particleboard (PB), medium-density fiberboard (MDF), newsprint (NP), corrugated container (CC) and copy paper (CP) were buried in landfills operated with leachate recirculation, and were excavated after approximately 1.5 and 2.5 yr. Samples were analyzed for cellulose (C), hemicellulose (H), lignin (L), volatile solids (VS), and organic carbon (OC). A holocellulose decomposition index (HOD) and carbon storage factor (CSF) were calculated to evaluate the extent of solids decomposition and carbon storage. Samples of OSB made from HW exhibited cellulose plus hemicellulose (C+H) loss of up to 38%, while loss for the other wood types was 0-10% in most samples. The C+H loss was up to 81%, 95% and 96% for NP, CP and CC, respectively. The CSFs for wood and paper samples ranged from 0.34 to 0.47 and 0.02 to 0.27 g OC g⁻¹ dry material, respectively. These results, in general, correlated well with an earlier laboratory-scale study, though NP and CC decomposition measured in this study were higher than previously reported. Copyright © 2013 Elsevier Ltd. All rights reserved.
Plants Regulate Soil Organic Matter Decomposition in Response to Sea Level Rise
NASA Astrophysics Data System (ADS)
Megonigal, P.; Mueller, P.; Jensen, K.
2014-12-01
Tidal wetlands have a large capacity for producing and storing organic matter, making their role in the global carbon budget disproportionate to their land area. Most of the organic matter stored in these systems is in soils where it contributes 2-5 times more to surface accretion than an equal mass of minerals. Soil organic matter (SOM) sequestration is the primary process by which tidal wetlands become perched high in the tidal frame, decreasing their vulnerability to accelerated sea level rise. Plant growth responses to sea level rise are well understood and represented in century-scale forecast models of soil surface elevation change. We understand far less about the response of soil organic matter decomposition to rapid sea level rise. Here we quantified the effects of sea level on SOM decomposition rates by exposing planted and unplanted tidal marsh monoliths to experimentally manipulated flood duration. The study was performed in a field-based mesocosm facility at the Smithsonian's Global Change Research Wetland. SOM decomposition rate was quantified as CO2 efflux, with plant- and SOM-derived CO2 separated with a two end-member δ13C-CO2 model. Despite the dogma that decomposition rates are inversely related to flooding, SOM mineralization was not sensitive to flood duration over a 35 cm range in soil surface elevation. However, decomposition rates were strongly and positively related to aboveground biomass (R2≥0.59, p≤0.01). We conclude that soil carbon loss through decomposition is driven by plant responses to sea level in this intensively studied tidal marsh. If this result applies more generally to tidal wetlands, it has important implications for modeling soil organic matter and surface elevation change in response to accelerated sea level rise.
Efficient model reduction of parametrized systems by matrix discrete empirical interpolation
NASA Astrophysics Data System (ADS)
Negri, Federico; Manzoni, Andrea; Amsallem, David
2015-12-01
In this work, we apply a Matrix version of the so-called Discrete Empirical Interpolation (MDEIM) for the efficient reduction of nonaffine parametrized systems arising from the discretization of linear partial differential equations. Dealing with affinely parametrized operators is crucial in order to enhance the online solution of reduced-order models (ROMs). However, in many cases such an affine decomposition is not readily available, and must be recovered through (often) intrusive procedures, such as the empirical interpolation method (EIM) and its discrete variant DEIM. In this paper we show that MDEIM represents a very efficient approach to deal with complex physical and geometrical parametrizations in a non-intrusive, efficient and purely algebraic way. We propose different strategies to combine MDEIM with a state approximation resulting either from a reduced basis greedy approach or Proper Orthogonal Decomposition. A posteriori error estimates accounting for the MDEIM error are also developed in the case of parametrized elliptic and parabolic equations. Finally, the capability of MDEIM to generate accurate and efficient ROMs is demonstrated on the solution of two computationally-intensive classes of problems occurring in engineering contexts, namely PDE-constrained shape optimization and parametrized coupled problems.
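The greedy interpolation-point selection at the heart of DEIM, on which the matrix variant MDEIM builds, can be sketched in a few lines. This is an illustrative numpy implementation of the standard algorithm with an invented exponential snapshot family, not the authors' code:

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM point selection: one index per basis column, chosen where
    the interpolation residual of the next basis vector is largest."""
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, U.shape[1]):
        P = np.array(idx)
        c = np.linalg.solve(U[P, :j], U[P, j])  # interpolate the new column ...
        r = U[:, j] - U[:, :j] @ c              # ... and take its residual
        idx.append(int(np.argmax(np.abs(r))))
    return idx

# Example: POD basis from the thin SVD of a (hypothetical) nonaffine snapshot family.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
S = np.column_stack([np.exp(-mu * x) for mu in rng.uniform(1, 10, 20)])
U = np.linalg.svd(S, full_matrices=False)[0][:, :5]
points = deim_indices(U)   # 5 distinct sample rows for empirical interpolation
```

Because the residual vanishes at previously selected rows, the indices are guaranteed distinct; evaluating the nonaffine operator only at these rows is what makes the online stage cheap.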
NASA Technical Reports Server (NTRS)
Yabuta, H.; Williams, L.; Cody, G.; Pizzarello, S.
2005-01-01
The larger portion of the organic carbon in carbonaceous chondrites (CC) is present as a complex and heterogeneous macromolecular material that is insoluble in acids and most solvents (IOM). So far, it has been analyzed only as a whole by microscopy (TEM) and spectroscopy (IR, NMR, EPR), which have offered an overview of its chemical nature, bonding, and functional group composition. Chemical or pyrolytic decomposition has also been used in combination with GC-MS to identify individual compounds released by these processes. Their value in the recognition of the original IOM structure resides in the ability to properly interpret the decomposition pathways for any given process. We report here a preliminary study of IOM from the Murray meteorite that combines both the analytical approaches described above, under conditions that would realistically model the IOM's hydrothermal exposure in the meteorite parent body. The aim is to document the possible release of water- and solvent-soluble organics, determine possible changes in NMR spectral features, and ascertain, by extension, the effect of this loss on the framework of the IOM residue. Additional information is included in the original extended abstract.
Developing an Accurate CFD Based Gust Model for the Truss Braced Wing Aircraft
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2013-01-01
The increased flexibility of long endurance aircraft having high aspect ratio wings necessitates attention to gust response and perhaps the incorporation of gust load alleviation. The design of civil transport aircraft with a strut or truss-braced high aspect ratio wing furthermore requires gust response analysis in the transonic cruise range. This requirement motivates the use of high fidelity nonlinear computational fluid dynamics (CFD) for gust response analysis. This paper presents the development of a CFD based gust model for the truss braced wing aircraft. A sharp-edged gust provides the gust system identification. The result of the system identification is several thousand time steps of instantaneous pressure coefficients over the entire vehicle. This data is filtered and downsampled to provide the snapshot data set from which a reduced order model is developed. A stochastic singular value decomposition algorithm is used to obtain a proper orthogonal decomposition (POD). The POD model is combined with a convolution integral to predict the time varying pressure coefficient distribution due to a novel gust profile. Finally the unsteady surface pressure response of the truss braced wing vehicle to a one-minus-cosine gust, simulated using the reduced order model, is compared with the full CFD.
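The snapshot-to-POD step described above can be sketched generically as a thin SVD with energy-based truncation. This illustration (with an invented rank-one snapshot family) shows the mechanics, not the stochastic SVD algorithm the paper uses:

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """POD modes of a snapshot matrix (one snapshot per column) via thin SVD,
    truncated at the smallest rank capturing the given fraction of
    fluctuation energy (sum of squared singular values)."""
    X = snapshots - snapshots.mean(axis=1, keepdims=True)  # fluctuation field
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r], s[:r]

# Example: snapshots that are one spatial shape times a varying amplitude
# compress to a single POD mode.
x = np.linspace(0.0, 1.0, 64)
shape = np.sin(np.pi * x)
snaps = np.column_stack([a * shape for a in (1.0, 2.0, 3.0, 5.0)])
modes, sv = pod_basis(snaps)
```

In the gust-model setting the columns would be the downsampled instantaneous pressure-coefficient snapshots, and the retained modes feed the convolution-based prediction.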
Developing Chemistry and Kinetic Modeling Tools for Low-Temperature Plasma Simulations
NASA Astrophysics Data System (ADS)
Jenkins, Thomas; Beckwith, Kris; Davidson, Bradley; Kruger, Scott; Pankin, Alexei; Roark, Christine; Stoltz, Peter
2015-09-01
We discuss the use of proper orthogonal decomposition (POD) methods in VSim, an FDTD plasma simulation code capable of both PIC/MCC and fluid modeling. POD methods efficiently generate smooth representations of noisy self-consistent or test-particle PIC data, and are thus advantageous in computing macroscopic fluid quantities from large PIC datasets (e.g. for particle-based closure computations) and in constructing optimal visual representations of the underlying physics. They may also confer performance advantages for massively parallel simulations, due to the significant reduction in dataset sizes conferred by truncated singular-value decompositions of the PIC data. We also demonstrate how complex LTP chemistry scenarios can be modeled in VSim via an interface with MUNCHKIN, a developing standalone python/C++/SQL code that identifies reaction paths for given input species, solves 1D rate equations for the time-dependent chemical evolution of the system, and generates corresponding VSim input blocks with appropriate cross-sections/reaction rates. MUNCHKIN also computes reaction rates from user-specified distribution functions, and conducts principal path analyses to reduce the number of simulated chemical reactions. Supported by U.S. Department of Energy SBIR program, Award DE-SC0009501.
Development of a New Methodology for Computing Surface Sensible Heat Fluxes using Thermal Imagery
NASA Astrophysics Data System (ADS)
Morrison, T. J.; Calaf, M.; Fernando, H. J.; Price, T. A.; Pardyjak, E.
2017-12-01
Current numerical weather prediction models utilize similarity theory to characterize momentum, moisture, and heat fluxes. Such formulations are only valid under the ideal assumptions of spatial homogeneity, statistical stationarity, and zero subsidence. However, recent surface temperature measurements from the Mountain Terrain Atmospheric Modeling and Observations (MATERHORN) Program on the Salt Flats of Utah's West Desert show that, even under the most a priori ideal conditions, heterogeneity of the aforementioned variables exists. We present a new method to extract spatially-distributed measurements of surface sensible heat flux from thermal imagery. The approach consists of using a surface energy budget, where the ground heat flux is easily computed from limited measurements using a force-restore-type methodology, the latent heat fluxes are neglected, and the energy storage is computed using a lumped capacitance model. Preliminary validation of the method is presented using experimental data acquired from a nearby sonic anemometer during the MATERHORN campaign. Additional evaluation is required to confirm the method's validity. Further decomposition analysis of on-site instrumentation (thermal camera, cold-hotwire probes, and sonic anemometers) using Proper Orthogonal Decomposition (POD) and wavelet analysis reveals time-scale similarity between the flow and surface fluctuations.
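A minimal sketch of the budget arithmetic described above, with latent heat neglected and storage from a lumped-capacitance surface layer. The layer depth and volumetric heat capacity here are illustrative assumptions, not MATERHORN values:

```python
def sensible_heat_flux(net_radiation, ground_flux, dT_dt,
                       heat_capacity=1.2e6, depth=0.05):
    """Sensible heat flux H (W m^-2) as the surface energy budget residual
    H = Rn - G - S, with storage S = (rho*c) * depth * dT/dt for a thin
    lumped-capacitance layer (latent heat neglected, as in the abstract).
    heat_capacity and depth are placeholder values for illustration."""
    storage = heat_capacity * depth * dT_dt  # J m^-3 K^-1 * m * K s^-1 = W m^-2
    return net_radiation - ground_flux - storage

# 400 W/m^2 net radiation, 50 W/m^2 into the ground, surface warming 1 mK/s:
H = sensible_heat_flux(400.0, 50.0, 1.0e-3)
```

The thermal imagery supplies dT/dt per pixel, which is what turns this scalar residual into a spatially distributed flux map.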
Fault Detection of a Roller-Bearing System through the EMD of a Wavelet Denoised Signal
Ahn, Jong-Hyo; Kwak, Dae-Ho; Koh, Bong-Hwan
2014-01-01
This paper investigates fault detection of a roller bearing system using a wavelet denoising scheme and proper orthogonal value (POV) of an intrinsic mode function (IMF) covariance matrix. The IMF of the bearing vibration signal is obtained through empirical mode decomposition (EMD). The signal screening process in the wavelet domain eliminates noise-corrupted portions that may lead to inaccurate prognosis of bearing conditions. We segmented the denoised bearing signal into several intervals, and decomposed each of them into IMFs. The first IMF of each segment is collected to become a covariance matrix for calculating the POV. We show that covariance matrices from healthy and damaged bearings exhibit different POV profiles, which can be a damage-sensitive feature. We also illustrate the conventional approach of feature extraction, of observing the kurtosis value of the measured signal, to compare the functionality of the proposed technique. The study demonstrates the feasibility of wavelet-based denoising, and shows through laboratory experiments that tracking the proper orthogonal values of the covariance matrix of the IMF can be an effective and reliable measure for monitoring bearing faults. PMID:25196008
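The POV feature itself is just the eigenvalue spectrum of a covariance matrix built from the first IMFs. A minimal sketch, with random data standing in for the EMD output (the wavelet denoising and EMD steps are not reproduced here):

```python
import numpy as np

# Sketch of the proper-orthogonal-value (POV) damage feature: eigenvalues of
# the covariance matrix assembled from the first IMF of each signal segment.
# `first_imfs` is an assumed stand-in for actual EMD output.

def pov_profile(first_imfs):
    """first_imfs: (n_segments, n_samples) array of first IMFs."""
    C = np.cov(first_imfs)                 # (n_segments, n_segments) covariance
    vals = np.linalg.eigvalsh(C)[::-1]     # POVs, descending
    return vals / vals.sum()               # normalized profile for comparison

rng = np.random.default_rng(2)
first_imfs = rng.standard_normal((8, 1024))
vals = pov_profile(first_imfs)
print(vals.shape)   # (8,)
```

Comparing normalized POV profiles from healthy and damaged runs (rather than raw eigenvalues) removes overall amplitude effects, which is one plausible reading of the "POV profile" comparison in the abstract.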
Mechanical Characterization of Polysilicon MEMS: A Hybrid TMCMC/POD-Kriging Approach.
Mirzazadeh, Ramin; Eftekhar Azam, Saeed; Mariani, Stefano
2018-04-17
Microscale uncertainties related to the geometry and morphology of polycrystalline silicon films, constituting the movable structures of micro electro-mechanical systems (MEMS), were investigated through a joint numerical/experimental approach. An on-chip testing device was designed and fabricated to deform a compliant polysilicon beam. In previous studies, we showed that the scattering in the input–output characteristics of the device can be properly described only if statistical features related to the morphology of the columnar polysilicon film and to the etching process adopted to release the movable structure are taken into account. In this work, a high fidelity finite element model of the device was used to feed a transitional Markov chain Monte Carlo (TMCMC) algorithm for the estimation of the unknown parameters governing the aforementioned statistical features. To reduce the computational cost of the stochastic analysis, a synergy of proper orthogonal decomposition (POD) and kriging interpolation was adopted. Results are reported for a batch of nominally identical tested devices, in terms of measurement error-affected probability distributions of the overall Young’s modulus of the polysilicon film and of the overetch depth.
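The POD/kriging synergy above amounts to compressing full-model snapshots with POD and regressing the retained coefficients on the input parameters with a Gaussian process (kriging). A minimal sketch, with a toy analytic "full model" standing in for the finite element solve and sklearn's GP as the kriging surrogate:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Sketch of a POD + kriging surrogate: train a Gaussian process to map
# parameters to POD coefficients, then reconstruct fields at new parameters.
# The toy snapshot generator below is an assumption, not the study's model.

params = np.linspace(0.5, 2.0, 12)[:, None]    # training parameter samples
snapshots = np.stack(
    [np.sin(p * np.linspace(0, np.pi, 100)) for p in params[:, 0]]
)                                              # (12 samples, 100 dofs)

U, s, Vt = np.linalg.svd(snapshots.T, full_matrices=False)
r = 4                                          # retained POD modes
coeffs = snapshots @ U[:, :r]                  # POD coefficients per sample

gp = GaussianProcessRegressor().fit(params, coeffs)   # kriging surrogate

p_new = np.array([[1.3]])
u_new = gp.predict(p_new) @ U[:, :r].T         # cheap field prediction
print(u_new.shape)   # (1, 100)
```

In the TMCMC setting, each likelihood evaluation would call the cheap `gp.predict` reconstruction instead of the high-fidelity finite element model.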
Analysis, compensation, and correction of temperature effects on FBG strain sensors
NASA Astrophysics Data System (ADS)
Haber, T. C.; Ferguson, S.; Guthrie, D.; Graver, T. W.; Soller, B. J.; Mendez, Alexis
2013-05-01
One of the most common fiber optic sensor (FOS) types in use is the fiber Bragg grating (FBG), and the most frequently measured parameter is strain. Hence, FBG strain sensors are among the most prevalent FOS devices used today for structural sensing and monitoring in civil engineering, aerospace, marine, oil and gas, composites, and smart-structure applications. However, since FBGs are simultaneously sensitive to both temperature and strain, it becomes essential to utilize sensors that are either fully temperature insensitive or, alternatively, properly temperature compensated to avoid erroneous measurements. In this paper, we introduce the concept of measured "total strain", which is inherent and unique to optical strain sensors. We review and analyze the temperature and strain sensitivities of FBG strain sensors and decompose the total measured strain into thermal and non-thermal components. We explore the differences between substrate CTE and System Thermal Response Coefficients, which govern the type and quality of thermal strain decomposition analysis. Finally, we present specific guidelines to achieve proper temperature-insensitive strain measurements by combining adequate installation, sensor packaging and data correction techniques.
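The total-strain decomposition can be sketched from the standard FBG response, dλ/λ₀ = (1 − p_e)·ε_total + ξ·ΔT: a co-located temperature reading removes the thermo-optic term, and the substrate's thermal expansion is then subtracted from the recovered total strain. The coefficient values below are typical textbook numbers for silica fiber on steel, assumed purely for illustration:

```python
# Sketch of thermal compensation of an FBG "total strain" measurement.
# P_E (photoelastic constant) and XI (thermo-optic coefficient) are assumed
# typical values, not calibration data from the paper.

P_E = 0.22      # effective photoelastic constant of silica fiber
XI = 6.7e-6     # thermo-optic response [1/degC]

def strain_components(dlam, lam0, dT, alpha_substrate=12e-6):
    """Return (total strain, mechanical strain) from wavelength shift dlam,
    reference wavelength lam0, and temperature change dT."""
    eps_total = (dlam / lam0 - XI * dT) / (1 - P_E)   # thermal + mechanical
    eps_thermal = alpha_substrate * dT                # substrate expansion
    return eps_total, eps_total - eps_thermal

# 100 microstrain mechanical load plus a 10 degC rise on a steel substrate:
dT = 10.0
eps_tot_true = 100e-6 + 12e-6 * dT
dlam = 1550e-9 * ((1 - P_E) * eps_tot_true + XI * dT)
total, mech = strain_components(dlam, 1550e-9, dT)
print(round(mech * 1e6, 3))   # 100.0 (microstrain)
```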
Andrade, Ricardo; Pascoal, Cláudia; Cássio, Fernanda
2016-07-01
Freshwater fungi play a key role in plant litter decomposition and have been used to investigate the relationships between biodiversity and ecosystem functioning in streams. Although there is evidence of positive effects of biodiversity on ecosystem processes, particularly on biomass produced, some studies have shown that neutral or negative effects may occur. We manipulated the composition and the number of species and genotypes in aquatic fungal assemblages creating different levels of genetic divergence to assess effects of fungal diversity on biomass produced and leaf decomposition. Generally, diversity effects on fungal biomass produced were positive, suggesting complementarity between species, but in assemblages with more species positive diversity effects were reduced. Genotype diversity and genetic divergence had net positive effects on leaf mass loss, but in assemblages with higher diversity leaf decomposition decreased. Our results highlight the importance of considering multiple biodiversity measures when investigating the relationship between biodiversity and ecosystem functioning. © FEMS 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Cicone, A; Liu, J; Zhou, H
2016-04-13
Chemicals released in the air can be extremely dangerous for human beings and the environment. Hyperspectral images can be used to identify chemical plumes; however, the task can be extremely challenging. Assuming we know a priori that some chemical plume, with a known frequency spectrum, has been photographed using a hyperspectral sensor, we can use standard techniques such as the so-called matched filter or adaptive cosine estimator, plus a properly chosen threshold value, to identify the position of the chemical plume. However, due to noise and inadequate sensing, the accurate identification of chemical pixels is not easy even in this apparently simple situation. In this paper, we present a post-processing tool that, in a completely adaptive and data-driven fashion, allows us to improve the performance of any classification method in identifying the boundaries of a plume. This is done using the multidimensional iterative filtering (MIF) algorithm (Cicone et al. 2014 (http://arxiv.org/abs/1411.6051); Cicone & Zhou 2015 (http://arxiv.org/abs/1507.07173)), which is a non-stationary signal decomposition method like the pioneering empirical mode decomposition method (Huang et al. 1998 Proc. R. Soc. Lond. A 454, 903. (doi:10.1098/rspa.1998.0193)). Moreover, based on the MIF technique, we also propose a pre-processing method that allows us to decorrelate and mean-centre a hyperspectral dataset. The cosine similarity measure, which often fails in practice, appears to become a successful and outperforming classifier when equipped with such a pre-processing method. We show some examples of the proposed methods when applied to real-life problems. © 2016 The Author(s).
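The classification step itself is straightforward once the cube is pre-processed: score each pixel's spectrum against the known target spectrum by cosine similarity and threshold. The sketch below uses plain mean-centring as a stand-in for the MIF-based decorrelation (which is not reproduced here), and synthetic spectra in place of real hyperspectral data:

```python
import numpy as np

# Sketch of cosine-similarity plume detection after a simple mean-centring
# pre-processing step. Plain mean removal stands in for the paper's
# MIF-based decorrelation; all spectra are synthetic.

def cosine_scores(cube, target):
    """cube: (n_pixels, n_bands); target: (n_bands,) known plume spectrum."""
    X = cube - cube.mean(axis=0)          # remove the average background spectrum
    num = X @ target
    den = np.linalg.norm(X, axis=1) * np.linalg.norm(target) + 1e-12
    return num / den                      # in [-1, 1]; threshold to classify

rng = np.random.default_rng(3)
bands = 64
target = np.sin(np.linspace(0, 3, bands))
background = rng.standard_normal((100, bands))
cube = np.vstack([background[:10] + 2.0 * target,   # plume-contaminated pixels
                  background[10:]])
scores = cosine_scores(cube, target)
print(scores[:10].mean() > scores[10:].mean())      # plume pixels score higher
```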
A general range-separated double-hybrid density-functional theory
NASA Astrophysics Data System (ADS)
Kalai, Cairedine; Toulouse, Julien
2018-04-01
A range-separated double-hybrid (RSDH) scheme which generalizes the usual range-separated hybrids and double hybrids is developed. This scheme consistently uses a two-parameter Coulomb-attenuating-method (CAM)-like decomposition of the electron-electron interaction for both exchange and correlation in order to combine Hartree-Fock exchange and second-order Møller-Plesset (MP2) correlation with a density functional. The RSDH scheme relies on an exact theory which is presented in some detail. Several semi-local approximations are developed for the short-range exchange-correlation density functional involved in this scheme. After finding optimal values for the two parameters of the CAM-like decomposition, the RSDH scheme is shown to have a relatively small basis dependence and to provide atomization energies, reaction barrier heights, and weak intermolecular interactions globally more accurate or comparable to range-separated MP2 or standard MP2. The RSDH scheme represents a new family of double hybrids with minimal empiricism which could be useful for general chemical applications.
Decomposition of forest products buried in landfills
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Xiaoming, E-mail: xwang25@ncsu.edu; Padgett, Jennifer M.; Powell, John S.
Highlights: • This study tracked chemical changes of wood and paper in landfills. • A decomposition index was developed to quantify carbohydrate biodegradation. • Newsprint biodegradation as measured here is greater than previous reports. • The field results correlate well with previous laboratory measurements. - Abstract: The objective of this study was to investigate the decomposition of selected wood and paper products in landfills. The decomposition of these products under anaerobic landfill conditions results in the generation of biogenic carbon dioxide and methane, while the un-decomposed portion represents a biogenic carbon sink. Information on the decomposition of these municipal waste components is used to estimate national methane emissions inventories, for attribution of carbon storage credits, and to assess the life-cycle greenhouse gas impacts of wood and paper products. Hardwood (HW), softwood (SW), plywood (PW), oriented strand board (OSB), particleboard (PB), medium-density fiberboard (MDF), newsprint (NP), corrugated container (CC) and copy paper (CP) were buried in landfills operated with leachate recirculation, and were excavated after approximately 1.5 and 2.5 yr. Samples were analyzed for cellulose (C), hemicellulose (H), lignin (L), volatile solids (VS), and organic carbon (OC). A holocellulose decomposition index (HOD) and carbon storage factor (CSF) were calculated to evaluate the extent of solids decomposition and carbon storage. Samples of OSB made from HW exhibited cellulose plus hemicellulose (C + H) loss of up to 38%, while loss for the other wood types was 0–10% in most samples. The C + H loss was up to 81%, 95% and 96% for NP, CP and CC, respectively. The CSFs for wood and paper samples ranged from 0.34 to 0.47 and 0.02 to 0.27 g OC g⁻¹ dry material, respectively.
These results, in general, correlated well with an earlier laboratory-scale study, though NP and CC decomposition measured in this study were higher than previously reported.
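One common way to define such a holocellulose decomposition index treats lignin as effectively recalcitrant and tracks the loss of cellulose + hemicellulose relative to it; the exact formula used in this study may differ, so the form below is an assumption for illustration only:

```python
# Sketch of a lignin-normalized holocellulose decomposition index (HOD).
# The formula and the sample compositions are illustrative assumptions.

def hod(c0, h0, l0, ct, ht, lt):
    """Initial (0) and excavated (t) cellulose, hemicellulose, and lignin
    contents (% of dry mass); returns percent holocellulose decomposed."""
    ratio0 = (c0 + h0) / l0     # (C + H)/L before burial
    ratio_t = (ct + ht) / lt    # (C + H)/L after excavation
    return 100.0 * (1.0 - ratio_t / ratio0)

# Fresh vs. heavily decomposed newsprint-like sample (illustrative numbers):
print(round(hod(48.0, 18.0, 22.0, 10.0, 4.0, 25.0), 1))   # 81.3
```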
ADVANCED OXIDATION: OXALATE DECOMPOSITION TESTING WITH OZONE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ketusky, E.; Subramanian, K.
At the Savannah River Site (SRS), oxalic acid is currently considered the preferred agent for chemically cleaning the large underground Liquid Radioactive Waste Tanks. It is applied only in the final stages of emptying a tank when generally less than 5,000 kg of waste solids remain, and slurrying based removal methods are no longer effective. The use of oxalic acid is preferred because of its combined dissolution and chelating properties, as well as the fact that corrosion to the carbon steel tank walls can be controlled. Although oxalic acid is the preferred agent, there are significant potential downstream impacts. Impacts include: (1) Degraded evaporator operation; (2) Resultant oxalate precipitates taking away critically needed operating volume; and (3) Eventual creation of significant volumes of additional feed to salt processing. As an alternative to dealing with the downstream impacts, oxalate decomposition using variations of ozone based Advanced Oxidation Process (AOP) were investigated. In general AOPs use ozone or peroxide and a catalyst to create hydroxyl radicals. Hydroxyl radicals have among the highest oxidation potentials, and are commonly used to decompose organics. Although oxalate is considered among the most difficult organic to decompose, the ability of hydroxyl radicals to decompose oxalate is considered to be well demonstrated. In addition, as AOPs are considered to be 'green' their use enables any net chemical additions to the waste to be minimized. In order to test the ability to decompose the oxalate and determine the decomposition rates, a test rig was designed, where 10 vol% ozone would be educted into a spent oxalic acid decomposition loop, with the loop maintained at 70 C and recirculated at 40 L/min.
Each of the spent oxalic acid streams would be created from three oxalic acid strikes of an F-area simulant (i.e., Purex = high Fe/Al concentration) and H-area simulant (i.e., H area modified Purex = high Al/Fe concentration) after nearing dissolution equilibrium, and then decomposed to ≤ 100 parts per million (ppm) oxalate. Since AOP technology largely originated on using ultraviolet (UV) light as a primary catalyst, decomposition of the spent oxalic acid, well exposed to a medium pressure mercury vapor light was considered the benchmark. However, with multi-valent metals already contained in the feed, and maintenance of the UV light a concern; testing was conducted to evaluate the impact from removing the UV light. Using current AOP terminology, the test without the UV light would likely be considered an ozone based, dark, ferrioxalate type, decomposition process. Specifically, as part of the testing, the impacts from the following were investigated: (1) Importance of the UV light on the decomposition rates when decomposing 1 wt% spent oxalic acid; (2) Impact of increasing the oxalic acid strength from 1 to 2.5 wt% on the decomposition rates; and (3) For F-area testing, the advantage of increasing the spent oxalic acid flowrate from 40 L/min (liters/minute) to 50 L/min during decomposition of the 2.5 wt% spent oxalic acid. The results showed that removal of the UV light (from 1 wt% testing) slowed the decomposition rates in both the F and H testing. Specifically, for F-Area Strike 1, the time increased from about 6 hours to 8 hours. In H-Area, the impact was not as significant, with the time required for Strike 1 to be decomposed to less than 100 ppm increasing slightly, from 5.4 to 6.4 hours. For the spent 2.5 wt% oxalic acid decomposition tests (all) without the UV light, the F-area decompositions required approx. 10 to 13 hours, while the corresponding required H-Area decompositions times ranged from 10 to 21 hours.
For the 2.5 wt% F-Area sludge, the increased availability of iron likely caused the increased decomposition rates compared to the 1 wt% oxalic acid based tests. In addition, for the F-testing, increasing the recirculation flow rates from 40 liters/minute to 50 liters/minute resulted in an increased decomposition rate, suggesting a better use of ozone.
Optimally growing boundary layer disturbances in a convergent nozzle preceded by a circular pipe
NASA Astrophysics Data System (ADS)
Uzun, Ali; Davis, Timothy B.; Alvi, Farrukh S.; Hussaini, M. Yousuff
2017-06-01
We report the findings from a theoretical analysis of optimally growing disturbances in an initially turbulent boundary layer. The motivation behind this study originates from the desire to generate organized structures in an initially turbulent boundary layer via excitation by disturbances that are tailored to be preferentially amplified. Such optimally growing disturbances are of interest for implementation in an active flow control strategy that is investigated for effective jet noise control. Details of the optimal perturbation theory implemented in this study are discussed. The relevant stability equations are derived using both the standard decomposition and the triple decomposition. The chosen test case geometry contains a convergent nozzle, which generates a Mach 0.9 round jet, preceded by a circular pipe. Optimally growing disturbances are introduced at various stations within the circular pipe section to facilitate disturbance energy amplification upstream of the favorable pressure gradient zone within the convergent nozzle, which has a stabilizing effect on disturbance growth. Effects of temporal frequency, disturbance input and output plane locations as well as separation distance between output and input planes are investigated. The results indicate that optimally growing disturbances appear in the form of longitudinal counter-rotating vortex pairs, whose size can be on the order of several times the input plane mean boundary layer thickness. The azimuthal wavenumber, which represents the number of counter-rotating vortex pairs, is found to generally decrease with increasing separation distance. Compared to the standard decomposition, the triple decomposition analysis generally predicts relatively lower azimuthal wavenumbers and significantly reduced energy amplification ratios for the optimal disturbances.
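For reference, the two averaging frameworks contrasted above split the velocity field differently; in the usual notation (a standard identity, not specific to this paper):

```latex
% Standard (Reynolds) decomposition: time mean plus fluctuation
u(\mathbf{x},t) = \overline{u}(\mathbf{x}) + u'(\mathbf{x},t)

% Triple decomposition: time mean, organized (phase-averaged) coherent
% component, and residual random turbulence
u(\mathbf{x},t) = \overline{u}(\mathbf{x}) + \tilde{u}(\mathbf{x},t) + u''(\mathbf{x},t)
```

In the triple decomposition the organized disturbance \(\tilde{u}\) evolves on a base state that includes the random turbulence statistics, which is why the two analyses predict different amplification ratios and azimuthal wavenumbers.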
NASA Astrophysics Data System (ADS)
Jiang, Jiaqi; Gu, Rongbao
2016-04-01
This paper generalizes the method of traditional singular value decomposition entropy by incorporating orders q of Rényi entropy. We analyze the predictive power of the entropy based on the trajectory matrix using Shanghai Composite Index and Dow Jones Index data in both static and dynamic tests. In the static test on SCI, results of global Granger causality tests all turn out to be significant regardless of the orders selected. But this entropy fails to show much predictability in the American stock market. In the dynamic test, we find that the predictive power can be significantly improved in SCI by our generalized method but not in DJI. This suggests that noises and errors affect SCI more frequently than DJI. In the end, results obtained using different lengths of the sliding window also corroborate this finding.
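The generalized entropy can be sketched directly from its ingredients: embed the series in a trajectory (Hankel) matrix, take singular values, normalize them to a distribution, and evaluate the order-q Rényi entropy. Window and embedding sizes below are illustrative choices, not the paper's settings:

```python
import numpy as np

# Sketch of singular-value-decomposition entropy generalized with the
# Renyi order q. Embedding dimension and test series are assumptions.

def svd_renyi_entropy(x, dim=10, q=2.0):
    """x: 1-D series; dim: embedding dimension; q: Renyi order."""
    traj = np.lib.stride_tricks.sliding_window_view(x, dim)   # Hankel-type matrix
    s = np.linalg.svd(traj, compute_uv=False)
    p = s / s.sum()                                           # normalized spectrum
    if np.isclose(q, 1.0):
        return float(-np.sum(p * np.log(p + 1e-300)))         # Shannon limit
    return float(np.log(np.sum(p ** q)) / (1.0 - q))

rng = np.random.default_rng(4)
noise = rng.standard_normal(500)
trend = np.sin(np.linspace(0, 20, 500))
print(svd_renyi_entropy(trend) < svd_renyi_entropy(noise))   # structure lowers entropy
```

In the dynamic test, one would evaluate this entropy on a sliding window of returns and feed it to the Granger-causality regression.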
Definition of a parametric form of nonsingular Mueller matrices.
Devlaminck, Vincent; Terrier, Patrick
2008-11-01
The goal of this paper is to propose a mathematical framework to define and analyze a general parametric form of an arbitrary nonsingular Mueller matrix. Starting from previous results about nondepolarizing matrices, we generalize the method to any nonsingular Mueller matrix. We address this problem in a six-dimensional space in order to introduce a transformation group with the same number of degrees of freedom, and explain why a subset of O(5,1), the orthogonal group associated with six-dimensional Minkowski space, is a physically admissible solution to this question. Generators of this group are used to define possible expressions of an arbitrary nonsingular Mueller matrix. Ultimately, the problem of decomposition of these matrices is addressed, and we point out that the "reverse" and "forward" decomposition concepts recently introduced may be inferred from the formalism we propose.
NASA Astrophysics Data System (ADS)
Reina, Celia; Conti, Sergio
2017-10-01
The multiplicative decomposition of the total deformation F = FeFi between an elastic (Fe) and an inelastic component (Fi) is standard in the modeling of many irreversible processes such as plasticity, growth, thermoelasticity, viscoelasticity or phase transformations. The heuristic argument for such a kinematic assumption is based on the chain rule for the compatible scenario (Curl Fi = 0), where the individual deformation tensors are gradients of deformation mappings, i.e. F = Dφ = D(φe ∘ φi) = ((Dφe) ∘ φi)(Dφi) = FeFi. Yet, the conditions for its validity in the general incompatible case (Curl Fi ≠ 0) have so far remained uncertain. We show in this paper that det Fi = 1 and Curl Fi bounded are necessary and sufficient conditions for the validity of F = FeFi for a wide range of inelastic processes. In particular, in the context of crystal plasticity, we demonstrate via rigorous homogenization from discrete dislocations to the continuum level in two dimensions, that the volume-preserving property of the mechanics of dislocation glide, combined with a finite dislocation density, is sufficient to deliver F = FeFp at the continuum scale. We then generalize this result to general two-dimensional inelastic processes that may be described at a lower dimensional scale via a multiplicative decomposition while exhibiting a finite density of incompatibilities. The necessity of the conditions det Fi = 1 and Curl Fi bounded for such systems is demonstrated via suitable counterexamples.
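The compatible-case chain-rule identity quoted above, F = D(φe ∘ φi) = ((Dφe) ∘ φi)(Dφi), can be verified numerically for any smooth pair of mappings; the 2-D shear-plus-stretch pair below is chosen purely for illustration:

```python
import numpy as np

# Numerical check of F = D(phi_e o phi_i) = (D phi_e o phi_i)(D phi_i)
# for an illustrative 2-D pair of mappings (compatible case, Curl Fi = 0).

def phi_i(x):                      # "inelastic" map: simple shear
    return np.array([x[0] + 0.3 * x[1], x[1]])

def Dphi_i(x):
    return np.array([[1.0, 0.3], [0.0, 1.0]])

def phi_e(y):                      # "elastic" map: nonlinear stretch
    return np.array([y[0] + 0.1 * y[0] ** 2, 1.2 * y[1]])

def Dphi_e(y):
    return np.array([[1.0 + 0.2 * y[0], 0.0], [0.0, 1.2]])

def num_jacobian(f, x, h=1e-6):    # central finite differences
    J = np.zeros((2, 2))
    for j in range(2):
        e = np.zeros(2); e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J

x = np.array([0.7, -0.4])
F_direct = num_jacobian(lambda z: phi_e(phi_i(z)), x)   # D(phi_e o phi_i)
F_product = Dphi_e(phi_i(x)) @ Dphi_i(x)                # Fe evaluated at phi_i(x), times Fi
print(np.allclose(F_direct, F_product, atol=1e-6))      # True
```

The paper's contribution concerns precisely the case where no such mappings exist (Curl Fi ≠ 0), so this check only illustrates the heuristic starting point.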
John Leask Lumley: Whither Turbulence?
NASA Astrophysics Data System (ADS)
Leibovich, Sidney; Warhaft, Zellman
2018-01-01
John Lumley's contributions to the theory, modeling, and experiments on turbulent flows played a seminal role in the advancement of our understanding of this subject in the second half of the twentieth century. We discuss John's career and his personal style, including his love and deep knowledge of vintage wine and vintage cars. His intellectual contributions range from abstract theory to applied engineering. Here we discuss some of his major advances, focusing on second-order modeling, proper orthogonal decomposition, path-breaking experiments, research on geophysical turbulence, and important contributions to the understanding of drag reduction. John Lumley was also an influential teacher whose books and films have molded generations of students. These and other aspects of his professional career are described.
Fuzzy scalar and vector median filters based on fuzzy distances.
Chatzis, V; Pitas, I
1999-01-01
In this paper, the fuzzy scalar median (FSM) is proposed, defined by using ordering of fuzzy numbers based on fuzzy minimum and maximum operations defined by using the extension principle. Alternatively, the FSM is defined from the minimization of a fuzzy distance measure, and the equivalence of the two definitions is proven. Then, the fuzzy vector median (FVM) is proposed as an extension of the vector median, based on a novel distance definition of fuzzy vectors, which satisfies the property of angle decomposition. By properly defining the fuzziness of a value, the combination of the basic properties of the classical scalar and vector median (VM) filter with other desirable characteristics can be achieved.
Shah, A A; Xing, W W; Triantafyllidis, V
2017-04-01
In this paper, we develop reduced-order models for dynamic, parameter-dependent, linear and nonlinear partial differential equations using proper orthogonal decomposition (POD). The main challenges are to accurately and efficiently approximate the POD bases for new parameter values and, in the case of nonlinear problems, to efficiently handle the nonlinear terms. We use a Bayesian nonlinear regression approach to learn the snapshots of the solutions and the nonlinearities for new parameter values. Computational efficiency is ensured by using manifold learning to perform the emulation in a low-dimensional space. The accuracy of the method is demonstrated on a linear and a nonlinear example, with comparisons with a global basis approach.
Assessment of the biological activity of soils in the subtropical zone of Azerbaijan
NASA Astrophysics Data System (ADS)
Babaev, M. P.; Orujova, N. I.
2009-10-01
The enzymatic activity; the microbial population; and the intensities of the nitrification, ammonification, CO2 emission, and cellulose decomposition were studied in gray-brown, meadow-sierozemic, meadow-forest alluvial, and yellow (zheltozem) gley soils in the subtropical zone of Azerbaijan under natural vegetation, crop rotation systems with vegetables, and permanent vegetable crops. On this basis, the biological diagnostics of these soils were suggested and the soil ecological health was evaluated. It was shown that properly chosen crop rotation systems on irrigated lands make it possible to preserve the fertility of the meadow-forest alluvial and zheltozem-gley soils and to improve the fertility of the gray-brown and meadow-sierozemic soils.
Reduced-Order Modeling: New Approaches for Computational Physics
NASA Technical Reports Server (NTRS)
Beran, Philip S.; Silva, Walter A.
2001-01-01
In this paper, we review the development of new reduced-order modeling techniques and discuss their applicability to various problems in computational physics. Emphasis is given to methods based on Volterra series representations and the proper orthogonal decomposition. Results are reported for different nonlinear systems to provide clear examples of the construction and use of reduced-order models, particularly in the multi-disciplinary field of computational aeroelasticity. Unsteady aerodynamic and aeroelastic behaviors of two-dimensional and three-dimensional geometries are described. Large increases in computational efficiency are obtained through the use of reduced-order models, thereby justifying the initial computational expense of constructing these models and motivating their use for multi-disciplinary design analysis.
Spatially coupled catalytic ignition of CO oxidation on Pt: mesoscopic versus nano-scale
Spiel, C.; Vogel, D.; Schlögl, R.; Rupprechter, G.; Suchorski, Y.
2015-01-01
Spatial coupling during catalytic ignition of CO oxidation on μm-sized Pt(hkl) domains of a polycrystalline Pt foil has been studied in situ by PEEM (photoemission electron microscopy) in the 10−5 mbar pressure range. The same reaction has been examined under similar conditions by FIM (field ion microscopy) on nm-sized Pt(hkl) facets of a Pt nanotip. Proper orthogonal decomposition (POD) of the digitized FIM images has been employed to analyze spatiotemporal dynamics of catalytic ignition. The results show the essential role of the sample size and of the morphology of the domain (facet) boundary in the spatial coupling in CO oxidation. PMID:26021411
O'Donnell, J. A.; Turetsky, M.R.; Harden, J.W.; Manies, K.L.; Pruett, L.E.; Shetler, G.; Neff, J.C.
2009-01-01
Fire is an important control on the carbon (C) balance of the boreal forest region. Here, we present findings from two complementary studies that examine how fire modifies soil organic matter properties, and how these modifications influence rates of decomposition and C exchange in black spruce (Picea mariana) ecosystems of interior Alaska. First, we used laboratory incubations to explore soil temperature, moisture, and vegetation effects on CO2 and DOC production rates in burned and unburned soils from three study regions in interior Alaska. Second, at one of the study regions used in the incubation experiments, we conducted intensive field measurements of net ecosystem exchange (NEE) and ecosystem respiration (ER) across an unreplicated factorial design of burning (2 year post-fire versus unburned sites) and drainage class (upland forest versus peatland sites). Our laboratory study showed that burning reduced the sensitivity of decomposition to increased temperature, most likely by inducing moisture or substrate quality limitations on decomposition rates. Burning also reduced the decomposability of Sphagnum-derived organic matter, increased the hydrophobicity of feather moss-derived organic matter, and increased the ratio of dissolved organic carbon (DOC) to total dissolved nitrogen (TDN) in both the upland and peatland sites. At the ecosystem scale, our field measurements indicate that the surface organic soil was generally wetter in burned than in unburned sites, whereas soil temperature was not different between the burned and unburned sites. Analysis of variance results showed that ER varied with soil drainage class but not by burn status, averaging 0.9 ± 0.1 and 1.4 ± 0.1 g C m−2 d−1 in the upland and peatland sites, respectively. However, a more complex general linear model showed that ER was controlled by an interaction between soil temperature, moisture, and burn status, and in general was less variable over time in the burned than in the unburned sites.
Together, findings from these studies across different spatial scales suggest that although fire can create some soil climate conditions more conducive to rapid decomposition, rates of C release from soils may be constrained following fire by changes in moisture and/or substrate quality that impede rates of decomposition. © 2008 Springer Science+Business Media, LLC.
Evaluation of a Nonlinear Finite Element Program - ABAQUS.
1983-03-15
anisotropic properties. * MATEXP - Linearly elastic thermal expansions with isotropic, orthotropic and anisotropic properties. * MATELG - Linearly elastic materials for general sections (options available for beam and shell elements). • MATEXG - Linearly elastic thermal expansions for general sections. … decomposition of a matrix • Q-R algorithm • Vector normalization, etc. Obviously, by consolidating all the utility subroutines in a library, ABAQUS has
Economic Inequality in Presenting Vision in Shahroud, Iran: Two Decomposition Methods.
Mansouri, Asieh; Emamian, Mohammad Hassan; Zeraati, Hojjat; Hashemi, Hasan; Fotouhi, Akbar
2017-04-22
Visual acuity, like many other health-related problems, does not have an equal distribution in terms of socio-economic factors. We conducted this study to estimate and decompose economic inequality in presenting visual acuity using two methods and to compare their results in a population aged 40-64 years in Shahroud, Iran. The data of 5188 participants in the first phase of the Shahroud Cohort Eye Study, performed in 2009, were used for this study. Our outcome variable was presenting vision acuity (PVA), which was measured using LogMAR (logarithm of the minimum angle of resolution). The living standard variable used for estimation of inequality was the economic status and was constructed by principal component analysis on home assets. Inequality indices were concentration index and the gap between low and high economic groups. We decomposed these indices by the concentration index and Blinder-Oaxaca decomposition approaches, respectively, and compared the results. The concentration index of PVA was -0.245 (95% CI: -0.278, -0.212). The PVA gap between groups with a high and low economic status was 0.0705 and was in favor of the high economic group. Education, economic status, and age were the most important contributors to inequality in both concentration index and Blinder-Oaxaca decomposition. Percent contribution of these three factors in the concentration index and Blinder-Oaxaca decomposition was 41.1% vs. 43.4%, 25.4% vs. 19.1% and 15.2% vs. 16.2%, respectively. Other factors including gender, marital status, employment status and diabetes had minor contributions. This study showed that individuals with poorer visual acuity were more concentrated among people with a lower economic status. The main contributors to this inequality were similar in concentration index and Blinder-Oaxaca decomposition.
So, it can be concluded that setting appropriate interventions to promote the literacy and income level in people with low economic status, formulating policies to address economic problems in the elderly, and paying more attention to their vision problems can help to alleviate economic inequality in visual acuity. © 2018 The Author(s); Published by Kerman University of Medical Sciences. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Sulfur species behavior in soil organic matter during decomposition
Schroth, A.W.; Bostick, B.C.; Graham, M.; Kaste, J.M.; Mitchell, M.J.; Friedland, A.J.
2007-01-01
Soil organic matter (SOM) is a primary reservoir of terrestrial sulfur (S), but its role in the global S cycle remains poorly understood. We examine S speciation by X-ray absorption near-edge structure (XANES) spectroscopy to describe S species behavior during SOM decomposition. Sulfur species in SOM were best represented by organic sulfide, sulfoxide, sulfonate, and sulfate. The highest fraction of S in litter was organic sulfide, but as decomposition progressed, relative fractions of sulfonate and sulfate generally increased. Over 6-month laboratory incubations, organic sulfide was most reactive, suggesting that a fraction of this species was associated with a highly labile pool of SOM. During humification, relative concentrations of sulfoxide consistently decreased, demonstrating the importance of sulfoxide as a reactive S phase in soil. Sulfonate fractional abundance increased during humification irrespective of litter type, illustrating its relative stability in soils. The proportion of S species did not differ systematically by litter type, but organic sulfide became less abundant in conifer SOM during decomposition, while sulfate fractional abundance increased. Conversely, deciduous SOM exhibited lesser or nonexistent shifts in organic sulfide and sulfate fractions during decomposition, possibly suggesting that S reactivity in deciduous litter is coupled to rapid C mineralization and independent of S speciation. All trends were consistent in soils across study sites. We conclude that S reactivity is related to speciation in SOM, particularly in conifer forests, and S species fractions in SOM change during decomposition. Our data highlight the importance of intermediate valence species (sulfoxide and sulfonate) in the pedochemical cycling of organic bound S. Copyright 2007 by the American Geophysical Union.
The Spatial Variability of Organic Matter and Decomposition Processes at the Marsh Scale
NASA Astrophysics Data System (ADS)
Yousefi Lalimi, Fateme; Silvestri, Sonia; D'Alpaos, Andrea; Roner, Marcella; Marani, Marco
2017-04-01
Coastal salt marshes sequester carbon as they respond to the local Rate of Relative Sea Level Rise (RRSLR) and their accretion rate is governed by inorganic soil deposition, organic soil production, and soil organic matter (SOM) decomposition. It is generally recognized that SOM plays a central role in marsh vertical dynamics, but while existing limited observations and modelling results suggest that SOM varies widely at the marsh scale, we lack systematic observations aimed at understanding how SOM production is modulated spatially as a result of biomass productivity and decomposition rate. Marsh topography and distance to the creek can affect biomass and SOM production, while a higher topographic elevation increases drainage, evapotranspiration, and aeration, thereby likely inducing higher SOM decomposition rates. Data collected in salt marshes in the northern Venice Lagoon (Italy) show that, even though plant productivity decreases in the lower areas of a marsh located farther away from channel edges, the relative contribution of organic soil production to the overall vertical soil accretion tends to remain constant as the distance from the channel increases. These observations suggest that the competing effects between biomass production and aeration/decomposition determine a contribution of organic soil to total accretion which remains approximately constant with distance from the creek, in spite of the declining plant productivity. Here we test this hypothesis using new observations of SOM and decomposition rates from marshes in North Carolina. The objective is to fill the gap in our understanding of the spatial distribution, at the marsh scale, of the organic and inorganic contributions to marsh accretion in response to RRSLR.
Tomographic reconstruction of tokamak plasma light emission using wavelet-vaguelette decomposition
NASA Astrophysics Data System (ADS)
Schneider, Kai; Nguyen van Yen, Romain; Fedorczak, Nicolas; Brochard, Frederic; Bonhomme, Gerard; Farge, Marie; Monier-Garbet, Pascale
2012-10-01
Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we proposed in Nguyen van yen et al., Nucl. Fus., 52 (2012) 013005, an inversion technique based on the wavelet-vaguelette decomposition. After validation of the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess the efficiency. The superiority of the wavelet-vaguelette technique is reflected in preserving local features, such as blobs and fronts, in the denoised emissivity map.
NASA Astrophysics Data System (ADS)
Nguyen van yen, R.; Fedorczak, N.; Brochard, F.; Bonhomme, G.; Schneider, K.; Farge, M.; Monier-Garbet, P.
2012-01-01
Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we propose an inversion technique based on the wavelet-vaguelette decomposition. After validation of the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess the efficiency. The superiority of the wavelet-vaguelette technique is reflected in preserving local features, such as blobs and fronts, in the denoised emissivity map.
Parallel computing of a climate model on the dawn 1000 by domain decomposition method
NASA Astrophysics Data System (ADS)
Bi, Xunqiang
1997-12-01
In this paper, the parallel computing of a grid-point nine-level atmospheric general circulation model on the Dawn 1000 is introduced. The model was developed by the Institute of Atmospheric Physics (IAP), Chinese Academy of Sciences (CAS). The Dawn 1000 is a MIMD massively parallel computer made by the National Research Center for Intelligent Computer (NCIC), CAS. A two-dimensional domain decomposition method is adopted to perform the parallel computing. Potential ways to increase the speed-up ratio and exploit more resources of future massively parallel supercomputers are also discussed.
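The two-dimensional domain decomposition idea can be sketched in a few lines. This is a minimal illustration only; the grid dimensions and process counts below are hypothetical, not those of the IAP model or the Dawn 1000: each horizontal dimension is split into near-equal index ranges, and each process owns one rectangular tile.

```python
# Hedged sketch of 2-D domain decomposition for a lat-lon grid.
# Grid sizes and process counts here are illustrative only.

def decompose_2d(nlon, nlat, px, py):
    """Split an nlon x nlat grid among px*py processes.

    Returns {rank: (lon_start, lon_end, lat_start, lat_end)} with
    exclusive end indices. Remainder points go to low-rank tiles,
    so the load imbalance is at most one row or column."""
    def split(n, p):
        base, rem = divmod(n, p)
        bounds, start = [], 0
        for i in range(p):
            size = base + (1 if i < rem else 0)
            bounds.append((start, start + size))
            start += size
        return bounds

    lon_bounds = split(nlon, px)
    lat_bounds = split(nlat, py)
    return {
        j * px + i: lon_bounds[i] + lat_bounds[j]
        for j in range(py) for i in range(px)
    }

domains = decompose_2d(64, 32, 4, 2)   # 8 processes, 16x16 tiles
```

In a real model each process would also exchange halo rows and columns with its neighbors every time step; that communication layer is omitted here.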
Smallwood, D. O.
1996-01-01
It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as an SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
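The connection between the cross-spectral density (CSD) matrix, ordinary coherence, and the SVD can be illustrated with a small numpy sketch. The synthetic records and segment counts below are assumptions for illustration, not Smallwood's formulation verbatim: two fully coherent channels give a nearly rank-one CSD matrix, so one singular value dominates.

```python
import numpy as np

rng = np.random.default_rng(0)
n_seg, seg_len = 64, 256
# Two records: y is a noisy scaled copy of x, so their coherence is high.
x = rng.standard_normal((n_seg, seg_len))
y = 2.0 * x + 0.1 * rng.standard_normal((n_seg, seg_len))

# Segment-averaged cross-spectral density matrix at one frequency bin k.
k = 10
X = np.fft.rfft(x, axis=1)[:, k]
Y = np.fft.rfft(y, axis=1)[:, k]
Z = np.stack([X, Y])                  # channels x segments
G = (Z @ Z.conj().T) / n_seg          # 2x2 CSD matrix

# Ordinary coherence between the two channels at bin k.
coh = abs(G[0, 1]) ** 2 / (G[0, 0].real * G[1, 1].real)

# SVD view: for fully coherent inputs G is near rank-one, so the
# first singular value carries almost all of the spectral energy.
s = np.linalg.svd(G, compute_uv=False)
```

Here `coh` is close to 1 and `s[0]` dominates `s[1]`, which is the intuition behind reading fractional coherence off the singular values.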
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hobbs, Michael L.
We previously developed a PETN thermal decomposition model that accurately predicts thermal ignition and detonator failure [1]. This model was originally developed for CALORE [2] and required several complex user subroutines. Recently, a simplified version of the PETN decomposition model was implemented into ARIA [3] using a general chemistry framework without need for user subroutines. Detonator failure was also predicted with this new model using ENCORE. The model was simplified by 1) basing the model on moles rather than mass, 2) simplifying the thermal conductivity model, and 3) implementing ARIA’s new phase change model. This memo briefly describes the model, implementation, and validation.
Aridity and decomposition processes in complex landscapes
NASA Astrophysics Data System (ADS)
Ossola, Alessandro; Nyman, Petter
2015-04-01
Decomposition of organic matter is a key biogeochemical process contributing to nutrient cycles, carbon fluxes and soil development. The activity of decomposers depends on microclimate, with temperature and rainfall being major drivers. In complex terrain the fine-scale variation in microclimate (and hence water availability) as a result of slope orientation is caused by differences in incoming radiation and surface temperature. Aridity, measured as the long-term balance between net radiation and rainfall, is a metric that can be used to represent variations in water availability within the landscape. Since aridity metrics can be obtained at fine spatial scales, they could theoretically be used to investigate how decomposition processes vary across complex landscapes. In this study, four research sites were selected in tall open sclerophyll forest along an aridity gradient (Budyko dryness index ranging from 1.56 to 2.22) where microclimate, litter moisture and soil moisture were monitored continuously for one year. Litter bags were packed to estimate decomposition rates (k) using leaves of a tree species not present in the study area (Eucalyptus globulus) in order to avoid home-field advantage effects. Litter mass loss was measured to assess the activity of macro-decomposers (6 mm litter bag mesh size), meso-decomposers (1 mm mesh), microbes above-ground (0.2 mm mesh) and microbes below-ground (2 cm depth, 0.2 mm mesh). Four replicates for each set of bags were installed at each site and bags were collected at 1, 2, 4, 7 and 12 months after installation. We first tested whether differences in microclimate due to slope orientation have significant effects on decomposition processes. Then the dryness index was related to decomposition rates to evaluate if small-scale variation in decomposition can be predicted using readily available information on rainfall and radiation.
Decomposition rates (k), calculated by fitting single-pool negative exponential models, generally decreased with increasing aridity, with k going from 0.0025 day-1 on equatorial-facing (dry) slopes to 0.0040 day-1 on polar-facing (wet) slopes. However, differences in temperature as a result of morning vs afternoon sun on east and west aspects, respectively (not captured in the aridity metric), resulted in poor prediction of decomposition for the sites located in the intermediate aridity range. Overall, the results highlight that relatively small differences in microclimate due to slope orientation can have large effects on decomposition. Future research will aim to refine the aridity metric to better resolve small-scale variation in surface temperature, which is important when up-scaling decomposition processes to landscapes.
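The fitting step behind the reported k values can be sketched as follows. The mass-remaining values below are synthetic placeholders, not the field data: for the single-pool model m(t) = m0·exp(-kt) with m0 = 1, a log-linear least-squares slope through the origin recovers k.

```python
import math

# Single-pool negative exponential decay: m(t) = exp(-k t).
# Estimate k by least squares on ln(m) vs t through the origin.
# Times and mass fractions below are illustrative, not measured data.
times = [30, 60, 120, 210, 365]             # days since installation
mass_frac = [0.93, 0.86, 0.74, 0.59, 0.40]  # fraction of litter mass remaining

# Minimizing sum_t (ln m_t + k t)^2 gives k = -sum(t ln m) / sum(t^2).
num = sum(t * math.log(m) for t, m in zip(times, mass_frac))
den = sum(t * t for t in times)
k = -num / den
```

With these illustrative values k comes out near 0.0025 day-1, the same order as the rates reported for the dry slopes.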
Quantum secret sharing for a general quantum access structure
NASA Astrophysics Data System (ADS)
Bai, Chen-Ming; Li, Zhi-Hui; Si, Meng-Meng; Li, Yong-Ming
2017-10-01
Quantum secret sharing is a procedure for sharing a secret among a number of participants such that only certain subsets of participants, called authorized sets, can collaboratively reconstruct it. The quantum access structure of a secret sharing is the family of all authorized sets. Firstly, in this paper, we propose the concept of decomposition of a quantum access structure to design a quantum secret sharing scheme. Secondly, based on a maximal quantum access structure (MQAS) [D. Gottesman, Phys. Rev. A 61, 042311 (2000)], we propose an algorithm to improve a MQAS and obtain an improved maximal quantum access structure (IMQAS). Then, we present a sufficient and necessary condition about IMQAS, which shows the relationship between the minimal authorized sets and the players. In accordance with these properties, we construct an efficient quantum secret sharing scheme with a decomposition and IMQAS. A major advantage of these techniques is that they allow us to construct a method to realize a general quantum access structure. Finally, we present two kinds of quantum secret sharing schemes via the thought of concatenation or a decomposition of quantum access structure. As a consequence, we find that the application of these techniques allows us to save more quantum shares and reduce costs compared with the existing scheme.
The Fourier decomposition method for nonlinear and non-stationary time series analysis.
Singh, Pushpendra; Joshi, Shiv Dutt; Patney, Rakesh Kumar; Saha, Kaushik
2017-03-01
For many decades, there has been a general perception in the literature that Fourier methods are not suitable for the analysis of nonlinear and non-stationary data. In this paper, we propose a novel and adaptive Fourier decomposition method (FDM), based on the Fourier theory, and demonstrate its efficacy for the analysis of nonlinear and non-stationary time series. The proposed FDM decomposes any data into a small number of 'Fourier intrinsic band functions' (FIBFs). The FDM presents a generalized Fourier expansion with variable amplitudes and variable frequencies of a time series by the Fourier method itself. We propose an idea of zero-phase filter bank-based multivariate FDM (MFDM), for the analysis of multivariate nonlinear and non-stationary time series, using the FDM. We also present an algorithm to obtain cut-off frequencies for MFDM. The proposed MFDM generates a finite number of band-limited multivariate FIBFs (MFIBFs). The MFDM preserves some intrinsic physical properties of the multivariate data, such as scale alignment, trend and instantaneous frequency. The proposed methods provide a time-frequency-energy (TFE) distribution that reveals the intrinsic structure of the data. Numerical computations and simulations have been carried out and comparison is made with the empirical mode decomposition algorithms.
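The core idea of decomposing a signal into band-limited components that sum exactly back to the original can be sketched with a fixed band split. This is only an illustration of the principle: FDM chooses its band edges adaptively, whereas the edges below are arbitrary assumptions.

```python
import numpy as np

# Sketch: partition the rfft spectrum into disjoint bands and invert
# each band, yielding band-limited components that sum back to the
# signal exactly. Band edges here are fixed and illustrative; FDM
# itself selects them adaptively.
def fourier_bands(x, edges):
    X = np.fft.rfft(x)
    comps = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        Xb = np.zeros_like(X)
        Xb[lo:hi] = X[lo:hi]
        comps.append(np.fft.irfft(Xb, n=len(x)))
    return comps

t = np.linspace(0, 1, 512, endpoint=False)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
comps = fourier_bands(x, [0, 30, 257])   # low band (<30), high band
```

Because the bands partition the spectrum, `sum(comps)` reconstructs `x` to machine precision, and each component isolates one of the two tones.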
The Fourier decomposition method for nonlinear and non-stationary time series analysis
Joshi, Shiv Dutt; Patney, Rakesh Kumar; Saha, Kaushik
2017-01-01
For many decades, there has been a general perception in the literature that Fourier methods are not suitable for the analysis of nonlinear and non-stationary data. In this paper, we propose a novel and adaptive Fourier decomposition method (FDM), based on the Fourier theory, and demonstrate its efficacy for the analysis of nonlinear and non-stationary time series. The proposed FDM decomposes any data into a small number of ‘Fourier intrinsic band functions’ (FIBFs). The FDM presents a generalized Fourier expansion with variable amplitudes and variable frequencies of a time series by the Fourier method itself. We propose an idea of zero-phase filter bank-based multivariate FDM (MFDM), for the analysis of multivariate nonlinear and non-stationary time series, using the FDM. We also present an algorithm to obtain cut-off frequencies for MFDM. The proposed MFDM generates a finite number of band-limited multivariate FIBFs (MFIBFs). The MFDM preserves some intrinsic physical properties of the multivariate data, such as scale alignment, trend and instantaneous frequency. The proposed methods provide a time–frequency–energy (TFE) distribution that reveals the intrinsic structure of the data. Numerical computations and simulations have been carried out and comparison is made with the empirical mode decomposition algorithms. PMID:28413352
The generalized Hill model: A kinematic approach towards active muscle contraction
NASA Astrophysics Data System (ADS)
Göktepe, Serdar; Menzel, Andreas; Kuhl, Ellen
2014-12-01
Excitation-contraction coupling is the physiological process of converting an electrical stimulus into a mechanical response. In muscle, the electrical stimulus is an action potential and the mechanical response is active contraction. The classical Hill model characterizes muscle contraction through one contractile element, activated by electrical excitation, and two non-linear springs, one in series and one in parallel. This rheology translates into an additive decomposition of the total stress into a passive and an active part. Here we supplement this additive decomposition of the stress by a multiplicative decomposition of the deformation gradient into a passive and an active part. We generalize the one-dimensional Hill model to the three-dimensional setting and constitutively define the passive stress as a function of the total deformation gradient and the active stress as a function of both the total deformation gradient and its active part. We show that this novel approach combines the features of both the classical stress-based Hill model and the recent active-strain models. While the notion of active stress is rather phenomenological in nature, active strain is micro-structurally motivated, physically measurable, and straightforward to calibrate. We demonstrate that our model is capable of simulating excitation-contraction coupling in cardiac muscle with its characteristic features of wall thickening, apical lift, and ventricular torsion.
Localized Glaucomatous Change Detection within the Proper Orthogonal Decomposition Framework
Balasubramanian, Madhusudhanan; Kriegman, David J.; Bowd, Christopher; Holst, Michael; Weinreb, Robert N.; Sample, Pamela A.; Zangwill, Linda M.
2012-01-01
Purpose. To detect localized glaucomatous structural changes using proper orthogonal decomposition (POD) framework with false-positive control that minimizes confirmatory follow-ups, and to compare the results to topographic change analysis (TCA). Methods. We included 167 participants (246 eyes) with ≥4 Heidelberg Retina Tomograph (HRT)-II exams from the Diagnostic Innovations in Glaucoma Study; 36 eyes progressed by stereo-photographs or visual fields. All other patient eyes (n = 210) were non-progressing. Specificities were evaluated using 21 normal eyes. Significance of change at each HRT superpixel between each follow-up and its nearest baseline (obtained using POD) was estimated using mixed-effects ANOVA. Locations with significant reduction in retinal height (red pixels) were determined using Bonferroni, Lehmann-Romano k-family-wise error rate (k-FWER), and Benjamini-Hochberg false discovery rate (FDR) type I error control procedures. Observed positive rate (OPR) in each follow-up was calculated as a ratio of number of red pixels within disk to disk size. Progression by POD was defined as one or more follow-ups with OPR greater than the anticipated false-positive rate. TCA was evaluated using the recently proposed liberal, moderate, and conservative progression criteria. Results. Sensitivity in progressors, specificity in normals, and specificity in non-progressors, respectively, were POD-Bonferroni = 100%, 0%, and 0%; POD k-FWER = 78%, 86%, and 43%; POD-FDR = 78%, 86%, and 43%; POD k-FWER with retinal height change ≥50 μm = 61%, 95%, and 60%; TCA-liberal = 86%, 62%, and 21%; TCA-moderate = 53%, 100%, and 70%; and TCA-conservative = 17%, 100%, and 84%. Conclusions. With a stronger control of type I errors, k-FWER in POD framework minimized confirmatory follow-ups while providing diagnostic accuracy comparable to TCA. 
Thus, POD with k-FWER shows promise to reduce the number of confirmatory follow-ups required for clinical care and studies evaluating new glaucoma treatments. (ClinicalTrials.gov number, NCT00221897.) PMID:22491406
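The Benjamini-Hochberg false discovery rate control used for the superpixel significance maps follows the standard step-up rule. A minimal sketch, with illustrative p-values rather than HRT data:

```python
# Benjamini-Hochberg step-up FDR procedure (standard textbook form).
# The p-values passed in below are illustrative, not HRT superpixel data.
def benjamini_hochberg(pvals, q=0.05):
    """Return indices of hypotheses rejected at FDR level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    # Find the largest rank r with p_(r) <= q * r / m.
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k = rank
    # Reject the k hypotheses with the smallest p-values.
    return sorted(order[:k])

rejected = benjamini_hochberg(
    [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205])
```

Note how the step-up rule rejects only the two smallest p-values here, even though several others fall below 0.05 individually; this is the weaker type I control that FDR trades for sensitivity relative to Bonferroni or k-FWER.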
The kinematic component of the cosmological redshift
NASA Astrophysics Data System (ADS)
Chodorowski, Michał J.
2011-05-01
It is widely believed that the cosmological redshift is not a Doppler shift. However, Bunn & Hogg have recently pointed out that to solve this problem properly, one has to transport parallelly the velocity four-vector of a distant galaxy to the observer's position. Performing such a transport along the null geodesic of photons arriving from the galaxy, they found that the cosmological redshift is purely kinematic. Here we argue that one should rather transport the velocity four-vector along the geodesic connecting the points of intersection of the world-lines of the galaxy and the observer with the hypersurface of constant cosmic time. We find that the resulting relation between the transported velocity and the redshift of arriving photons is not given by a relativistic Doppler formula. Instead, for small redshifts it coincides with the well-known non-relativistic decomposition of the redshift into a Doppler (kinematic) component and a gravitational one. We perform such a decomposition for arbitrarily large redshifts and derive a formula for the kinematic component of the cosmological redshift, valid for any Friedmann-Lemaître-Robertson-Walker (FLRW) cosmology. In particular, in a universe with Ωm = 0.24 and ΩΛ = 0.76, a quasar at a redshift 6, at the time of emission of photons reaching us today had the recession velocity v = 0.997c. This can be contrasted with v = 0.96c, had the redshift been entirely kinematic. Thus, for recession velocities of such high-redshift sources, the effect of deceleration of the early Universe clearly prevails over the effect of its relatively recent acceleration. Last but not least, we show that the so-called proper recession velocities of galaxies, commonly used in cosmology, are in fact radial components of the galaxies' four-velocity vectors. As such, they can indeed attain superluminal values, but should not be regarded as real velocities.
NASA Astrophysics Data System (ADS)
Sengupta, Tapan K.; Gullapalli, Atchyut
2016-11-01
A spinning cylinder rotating about its axis experiences a transverse force (lift); an account of this basic aerodynamic phenomenon is known as the Robins-Magnus effect in textbooks. Prandtl studied this flow with an inviscid, irrotational model and postulated an upper limit on the lift experienced by the cylinder at a critical rotation rate. This non-dimensional rate is the ratio of the surface speed due to rotation to the oncoming free stream speed. Prandtl predicted a maximum lift coefficient of CLmax = 4π at the critical rotation rate of two. In recent times, evidence shows violations of this upper limit, as in the experiments of Tokumaru and Dimotakis ["The lift of a cylinder executing rotary motions in a uniform flow," J. Fluid Mech. 255, 1-10 (1993)] and in the computed solution in Sengupta et al. ["Temporal flow instability for Magnus-Robins effect at high rotation rates," J. Fluids Struct. 17, 941-953 (2003)]. In the latter reference, this was explained as a temporal instability affecting the flow at higher Reynolds numbers and rotation rates (>2). Here, we analyze the flow past a rotating cylinder at a super-critical rotation rate (=2.5) by enstrophy-based proper orthogonal decomposition (POD) of direct simulation results. POD identifies the most energetic modes and enables flow field reconstruction with a reduced number of modes. One of the motivations for the present study is to explain the shedding of puffs of vortices at low Reynolds number (Re = 60), for the high rotation rate, due to an instability originating in the vicinity of the cylinder, using the Navier-Stokes equation (NSE) computed from t = 0 to t = 300 following an impulsive start. This instability is also explained through the disturbance mechanical energy equation, which was established earlier in Sengupta et al. ["Temporal flow instability for Magnus-Robins effect at high rotation rates," J. Fluids Struct. 17, 941-953 (2003)].
Catalyst for Decomposition of Nitrogen Oxides
NASA Technical Reports Server (NTRS)
Schryer, David R. (Inventor); Akyurtlu, Ates (Inventor); Jordan, Jeffrey D. (Inventor); Akyurtlu, Jale (Inventor)
2015-01-01
This invention relates generally to a platinized tin oxide-based catalyst. It relates particularly to an improved platinized tin oxide-based catalyst able to decompose nitric oxide to nitrogen and oxygen without the necessity of a reducing gas.
Highly Scalable Matching Pursuit Signal Decomposition Algorithm
NASA Technical Reports Server (NTRS)
Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.
2009-01-01
Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal and this procedure is repeated on the residual in the subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis on accuracy or computational efficiency. The prominence of the key signal features required for the proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition.
The full potential of MPD++ may be utilized to produce considerable performance gains while extracting only slightly less energy than the standard algorithm. When the utmost accuracy must be achieved, the modified algorithm extracts atoms more conservatively but still exhibits computational gains over classical MPD. The MPD++ algorithm was demonstrated using an over-complete dictionary on real life data. Computational times were reduced by factors of 1.9 and 44 for the emphases of accuracy and performance, respectively. The modified algorithm extracted similar amounts of energy compared to classical MPD. The degree of the improvement in computational time depends on the complexity of the data, the initialization parameters, and the breadth of the dictionary. The results of the research confirm that the three modifications successfully improved the scalability and computational efficiency of the MPD algorithm. Correlation Thresholding decreased the time complexity by reducing the dictionary size. Multiple Atom Extraction also reduced the time complexity by decreasing the number of iterations required for a stopping criterion to be reached. The Coarse-Fine Grids technique enabled complicated atoms with numerous variable parameters to be effectively represented in the dictionary. Due to the nature of the three proposed modifications, they are capable of being stacked and have cumulative effects on the reduction of the time complexity.
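Classical MPD, the baseline that MPD++ improves on, can be sketched in a few lines. The cosine dictionary and test signal below are illustrative assumptions, not the over-complete dictionary or real-life data of the study:

```python
import numpy as np

# Minimal sketch of classical matching pursuit: repeatedly pick the
# atom with the largest correlation to the residual, subtract its
# projection, and stop when the residual energy falls below a threshold.
def matching_pursuit(signal, atoms, max_iter=50, tol=1e-10):
    """atoms: (n_atoms, n_samples) array of unit-norm dictionary atoms."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(len(atoms))
    for _ in range(max_iter):
        corr = atoms @ residual               # cross-correlation with atoms
        best = int(np.argmax(np.abs(corr)))   # best-fit atom
        coeffs[best] += corr[best]
        residual -= corr[best] * atoms[best]  # subtract its projection
        if residual @ residual < tol:         # stopping criterion
            break
    return coeffs, residual

# Illustrative unit-norm cosine dictionary and a two-atom test signal.
n = 128
freqs = np.arange(1, 20)
atoms = np.array([np.cos(2 * np.pi * f * np.arange(n) / n) for f in freqs])
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)
signal = 3.0 * atoms[4] + 1.5 * atoms[10]
coeffs, residual = matching_pursuit(signal, atoms)
```

Because these cosine atoms are orthogonal, two iterations recover the exact coefficients; with an over-complete, non-orthogonal dictionary many more iterations are needed, which is exactly the cost that correlation thresholding and multiple-atom extraction target.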
Rajeswaran, Jeevanantham; Blackstone, Eugene H; Ehrlinger, John; Li, Liang; Ishwaran, Hemant; Parides, Michael K
2018-01-01
Atrial fibrillation is an arrhythmic disorder where the electrical signals of the heart become irregular. The probability of atrial fibrillation (binary response) is often time varying in a structured fashion, as is the influence of associated risk factors. A generalized nonlinear mixed effects model is presented to estimate the time-related probability of atrial fibrillation using a temporal decomposition approach to reveal the pattern of the probability of atrial fibrillation and their determinants. This methodology generalizes to patient-specific analysis of longitudinal binary data with possibly time-varying effects of covariates and with different patient-specific random effects influencing different temporal phases. The motivation and application of this model is illustrated using longitudinally measured atrial fibrillation data obtained through weekly trans-telephonic monitoring from an NIH sponsored clinical trial being conducted by the Cardiothoracic Surgery Clinical Trials Network.
Data decomposition method for parallel polygon rasterization considering load balancing
NASA Astrophysics Data System (ADS)
Zhou, Chen; Chen, Zhenjie; Liu, Yongxue; Li, Feixue; Cheng, Liang; Zhu, A.-xing; Li, Manchun
2015-12-01
It is essential to adopt parallel computing technology to rapidly rasterize massive polygon data. In parallel rasterization, it is difficult to design an effective data decomposition method. Conventional methods ignore load balancing of polygon complexity in parallel rasterization and thus fail to achieve high parallel efficiency. In this paper, a novel data decomposition method based on polygon complexity (DMPC) is proposed. First, four factors that possibly affect the rasterization efficiency were investigated. Then, a metric represented by the boundary number and raster pixel number in the minimum bounding rectangle was developed to calculate the complexity of each polygon. Using this metric, polygons were rationally allocated according to the polygon complexity, and each process could achieve balanced loads of polygon complexity. To validate the efficiency of DMPC, it was used to parallelize different polygon rasterization algorithms and tested on different datasets. Experimental results showed that DMPC could effectively parallelize polygon rasterization algorithms. Furthermore, the implemented parallel algorithms with DMPC could achieve good speedup ratios of at least 15.69 and generally outperformed conventional decomposition methods in terms of parallel efficiency and load balancing. In addition, the results showed that DMPC exhibited consistently better performance for different spatial distributions of polygons.
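The allocation idea behind DMPC can be sketched as follows. The sketch assumes an equal weighting of the two metric terms named in the abstract (boundary number and MBR pixel count) and a greedy least-loaded assignment; the paper's exact weighting and scheduling are not reproduced here.

```python
import heapq

# Hedged sketch of polygon-complexity load balancing: score each polygon
# by vertex count plus the pixel count of its minimum bounding rectangle
# (equal weighting assumed), then assign polygons, largest first, to the
# currently least-loaded process.
def assign_polygons(polygons, n_procs, cell=1.0):
    """polygons: list of vertex lists [(x, y), ...].
    Returns a list of n_procs lists of polygon indices."""
    def complexity(verts):
        xs, ys = zip(*verts)
        mbr_pixels = max(1, round((max(xs) - min(xs)) / cell)) * \
                     max(1, round((max(ys) - min(ys)) / cell))
        return len(verts) + mbr_pixels

    # Largest-first greedy assignment onto a min-heap of process loads.
    scored = sorted(enumerate(polygons), key=lambda p: -complexity(p[1]))
    heap = [(0.0, rank, []) for rank in range(n_procs)]
    heapq.heapify(heap)
    for idx, verts in scored:
        load, rank, bucket = heapq.heappop(heap)
        bucket.append(idx)
        heapq.heappush(heap, (load + complexity(verts), rank, bucket))
    return [bucket for _, rank, bucket in sorted(heap, key=lambda h: h[1])]
```

Largest-first greedy bin packing like this keeps per-process complexity totals within one polygon's score of each other, which is the load-balancing property the conventional equal-count decompositions lack.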
Zhu, Yizhou; He, Xingfeng; Mo, Yifei
2015-10-06
First-principles calculations were performed to investigate the electrochemical stability of lithium solid electrolyte materials in all-solid-state Li-ion batteries. The common solid electrolytes were found to have a limited electrochemical window. Our results suggest that the outstanding stability of the solid electrolyte materials is not thermodynamically intrinsic but originates from kinetic stabilization. The sluggish kinetics of the decomposition reactions cause a high overpotential, leading to the nominally wide electrochemical window observed in many experiments. The decomposition products, similar to solid-electrolyte interphases, mitigate the extreme chemical potential from the electrodes and protect the solid electrolyte from further decomposition. With the aid of the first-principles calculations, we revealed the passivation mechanism of these decomposition interphases and quantified the extension of the electrochemical window from the interphases. We also found that the artificial coating layers applied at the solid electrolyte-electrode interfaces have a similar effect of passivating the solid electrolyte. Our newly gained understanding provides general principles for developing solid electrolyte materials with enhanced stability and for engineering interfaces in all-solid-state Li-ion batteries.
Tensor gauge condition and tensor field decomposition
NASA Astrophysics Data System (ADS)
Zhu, Ben-Chao; Chen, Xiang-Song
2015-10-01
We discuss various proposals of separating a tensor field into pure-gauge and gauge-invariant components. Such tensor field decomposition is intimately related to the effort of identifying the real gravitational degrees of freedom out of the metric tensor in Einstein’s general relativity. We show that as for a vector field, the tensor field decomposition has exact correspondence to and can be derived from the gauge-fixing approach. The complication for the tensor field, however, is that there are infinitely many complete gauge conditions in contrast to the uniqueness of Coulomb gauge for a vector field. The cause of such complication, as we reveal, is the emergence of a peculiar gauge-invariant pure-gauge construction for any gauge field of spin ≥ 2. We make an extensive exploration of the complete tensor gauge conditions and their corresponding tensor field decompositions, regarding mathematical structures, equations of motion for the fields and nonlinear properties. Apparently, no single choice is superior in all aspects, due to an awkward fact that no gauge-fixing can reduce a tensor field to be purely dynamical (i.e. transverse and traceless), as can the Coulomb gauge in a vector case.
The decomposition of deformation: New metrics to enhance shape analysis in medical imaging.
Varano, Valerio; Piras, Paolo; Gabriele, Stefano; Teresi, Luciano; Nardinocchi, Paola; Dryden, Ian L; Torromeo, Concetta; Puddu, Paolo E
2018-05-01
In landmark-based Shape Analysis, size is measured in most cases with Centroid Size. Changes in shape are decomposed into affine and non-affine components. Furthermore, the non-affine component can in turn be decomposed into a series of local deformations (partial warps). If the extent of deformation between two shapes is small, the difference between Centroid Size and m-Volume increment is barely appreciable. In medical imaging applied to soft tissues, bodies can undergo very large deformations, involving large changes in size. The cardiac example analyzed in the present paper shows changes in m-Volume that can reach 60%. We show here that standard Geometric Morphometrics tools (landmarks, Thin Plate Spline, and the related decomposition of the deformation) can be generalized to better describe the very large deformations of biological tissues, without losing a synthetic description. In particular, the classical decomposition of the space tangent to the shape space into affine and non-affine components is enriched to also include the change in size, in order to give a complete description of the tangent space to the size-and-shape space. The proposed generalization is formulated by means of a new Riemannian metric describing the change in size as change in m-Volume rather than change in Centroid Size. This leads to a redefinition of some aspects of Kendall's size-and-shape space without losing Kendall's original formulation. This new formulation is discussed by means of simulated examples using 2D and 3D platonic shapes as well as a real example from clinical 3D echocardiographic data. We demonstrate that our decomposition-based approaches discriminate very effectively healthy subjects from patients affected by Hypertrophic Cardiomyopathy. Copyright © 2018 Elsevier B.V. All rights reserved.
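For readers unfamiliar with the size measure being generalized, Centroid Size is the root of the summed squared distances of landmarks from their centroid; a minimal sketch on an assumed toy configuration:

```python
# Centroid Size of a landmark configuration: root summed squared distance of
# the landmarks from their centroid. The unit square below is an assumed toy.
import numpy as np

def centroid_size(landmarks):
    """landmarks: (k, m) array of k landmarks in m dimensions."""
    centered = landmarks - landmarks.mean(axis=0)
    return float(np.sqrt((centered ** 2).sum()))

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
cs = centroid_size(square)  # each landmark lies sqrt(0.5) from the centroid
```

Note that Centroid Size scales linearly with the configuration, whereas m-Volume scales with the m-th power, which is why the two size measures diverge under the large deformations discussed above.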
Diatomite filtration of water for injection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olmsted, B.C. Jr.; Bell, G.R.
1966-01-01
A discussion is presented of the capabilities, problems, and solutions associated with the performance of diatomite filters. The discussion includes a description of diatomite filtration, new developments, design criteria, and some case histories. Diatomite filters, when properly designed, installed, and applied, can provide effective clarification of waters for injection at low capital and operating costs. Design, installation, proper application, effectiveness, and capital and operating costs can be placed in the proper perspective in the light of general experience and recent pilot plant tests in the southern California area. (30 refs.)
Double-Resonance Facilitated Decomposition of Emission Spectra
NASA Astrophysics Data System (ADS)
Kato, Ryota; Ishikawa, Haruki
2016-06-01
Emission spectra provide us with rich information about excited-state processes such as proton transfer and charge transfer. When more than one excited state is involved, emission spectra from the different excited states sometimes overlap, and a decomposition of the overlapped spectra is desired. One method of performing such a decomposition is time-resolved fluorescence, which exploits differences in the time evolutions of the components involved. However, in the gas phase, the sample concentration is frequently too low to carry out this method. On the other hand, the double-resonance technique is a very powerful tool for discriminating or identifying a common species in gas-phase spectra. Thus, in the present study, we applied the double-resonance technique to resolve the overlapped emission spectra. When transient IR absorption spectra of the excited state are available, we can label the population of a certain species by IR excitation with a proper selection of the IR wavenumbers. We can then obtain the emission spectrum of the labeled species by subtracting the emission spectrum with IR labeling from that without IR. In the present study, we chose the charge-transfer emission spectra of cyanophenyldisilane (CPDS) as a test system. One of us previously reported that two charge-transfer (CT) states are involved in the intramolecular charge-transfer (ICT) process of the CPDS-water cluster and recorded the transient IR spectra. As expected, we have succeeded in resolving the CT emission spectra of the CPDS-water cluster by the double-resonance facilitated decomposition technique. In the present paper, we report the details of the experimental scheme and the results of the decomposition of the emission spectra. H. Ishikawa, et al., Phys. Chem. Chem. Phys., 9, 117 (2007).
Application of generalized singular value decomposition to ionospheric tomography
NASA Astrophysics Data System (ADS)
Bhuyan, K.; Singh, S.; Bhuyan, P.
2004-10-01
The electron density distribution of the low- and mid-latitude ionosphere has been investigated by the computerized tomography technique using a Generalized Singular Value Decomposition (GSVD) based algorithm. Model ionospheric total electron content (TEC) data obtained from the International Reference Ionosphere 2001 and slant relative TEC data measured at a chain of three stations receiving transit satellite transmissions in Alaska, USA are used in this analysis. The issue of the optimum efficiency of the GSVD algorithm in the reconstruction of ionospheric structures is addressed through simulation of the equatorial ionization anomaly (EIA), in addition to its application to the investigation of complicated ionospheric density irregularities. Results show that the Generalized Cross Validation approach to finding the regularization parameter and the corresponding solution gives a very good reconstructed image of the low-latitude ionosphere and the EIA within it. Provided that some minimum norm is fulfilled, the GSVD solution is found to be least affected by considerations such as pixel size and number of ray paths. The method has also been used to investigate the behaviour of the mid-latitude ionosphere under magnetically quiet and disturbed conditions.
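NumPy/SciPy expose no GSVD routine directly, but the regularized problem that a GSVD of the pair (A, L) analyzes, min ||Ax - b||^2 + lam^2 ||Lx||^2, can be sketched by stacking the system and solving ordinary least squares. The tiny ray-path matrix and smoothing operator below are assumptions for illustration, not the paper's geometry:

```python
# Tikhonov-regularized least squares, the problem a GSVD of (A, L) analyzes:
# min ||A x - b||^2 + lam^2 ||L x||^2, solved here by stacking. A is a toy
# "ray path" matrix over three pixels; L is a first-difference smoother.
import numpy as np

def tikhonov(A, b, L, lam):
    A_aug = np.vstack([A, lam * L])
    b_aug = np.concatenate([b, np.zeros(L.shape[0])])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])    # two ray paths, three pixels
x_true = np.array([2.0, 3.0, 1.0])
b = A @ x_true                                       # slant-TEC-like line integrals
L = np.array([[1.0, -1.0, 0.0], [0.0, 1.0, -1.0]])   # first-difference operator
x = tikhonov(A, b, L, lam=0.1)                       # smooth solution fitting the data
```

In a real tomography problem A holds ray-path lengths per pixel and the regularization parameter lam would be chosen by Generalized Cross Validation, as the abstract describes.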
Gautam, Mukesh Kumar; Lee, Kwang-Sik; Song, Byeong-Yeol; Lee, Dongho; Bong, Yeon-Sik
2016-05-01
Decomposition, nutrient, and isotopic (δ(13)C and δ(15)N) dynamics over 1 year were studied for leaf and twig litters of Pinus densiflora, Castanea crenata, Erigeron annuus, and Miscanthus sinensis growing on a highly weathered soil with a constrained nutrient supply, using litterbags in a cool temperate region of South Korea. The decay constant (k/year) ranged from 0.58 to 1.29/year, and mass loss ranged from 22.36 to 58.43 % among litter types. The results demonstrate that mass loss and nutrient dynamics of decomposing litter were influenced by the seasonality of mineralization and immobilization processes. In general, most nutrients exhibited alternate phases of rapid mineralization followed by gradual immobilization, except K, which was released throughout the field incubation. At the end of the study, among all the nutrients only N and P showed net immobilization. Mobility of different nutrients from decomposing litter, as a percentage of initial litter nutrient concentration, was in the order of K > Mg > Ca > N ≈ P. The δ(13)C (0.32-6.70 ‰) and δ(15)N (0.74-3.90 ‰) values of residual litters showed nonlinear increases and decreases, respectively, compared to initial isotopic values during decomposition. Litters of different functional types and chemical quality converged toward a conservative nutrient use strategy through mechanisms of slow decomposition and slow nutrient mobilization. Our results indicate that litter quality and season are the most important regulators of litter decomposition in these forests. The results revealed significant relationships between litter decomposition rates and N, C:N ratio, P, and seasonality (temperature). These results, and the convergence of different litters toward conservative nutrient use in these nutrient-constrained ecosystems, imply that litter management should be optimized, because litter removal can have cascading effects on litter decomposition and nutrient availability in these systems.
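The decay constants quoted above come from the standard single-exponential litter decay model m(t) = m0 e^(-kt), so k can be recovered from the mass remaining at time t. The 50% remaining value below is an assumed illustration chosen to fall inside the reported ranges, not a figure from the study:

```python
# Single-exponential litter decay: m(t) = m0 * exp(-k t), so k = -ln(m_t/m0)/t.
# The 50% remaining after 1 year is an assumed value, not the study's data.
import math

def decay_constant(mass_remaining_fraction, t_years):
    return -math.log(mass_remaining_fraction) / t_years

k = decay_constant(0.50, 1.0)  # ln(2) = 0.693 per year, within the reported 0.58-1.29 span
```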
Iterative methods for elliptic finite element equations on general meshes
NASA Technical Reports Server (NTRS)
Nicolaides, R. A.; Choudhury, Shenaz
1986-01-01
Iterative methods for arbitrary mesh discretizations of elliptic partial differential equations are surveyed. The methods discussed are preconditioned conjugate gradients, algebraic multigrid, deflated conjugate gradients, element-by-element techniques, and domain decomposition. Computational results are included.
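One of the surveyed methods, preconditioned conjugate gradients with a simple Jacobi (diagonal) preconditioner, can be sketched on a small symmetric positive definite system of the kind finite element discretizations yield (the matrix values and tolerance are illustrative):

```python
# Preconditioned conjugate gradients with a Jacobi (diagonal) preconditioner
# on a small SPD system; matrix values are illustrative.
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=200):
    M_inv = 1.0 / np.diag(A)            # Jacobi preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])  # SPD
b = np.array([1.0, 2.0, 3.0])
x = pcg(A, b)
```

In exact arithmetic CG converges in at most n iterations for an n-by-n system; the preconditioner matters on the large, ill-conditioned systems arbitrary meshes produce.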
Absolute Value Boundedness, Operator Decomposition, and Stochastic Media and Equations
NASA Technical Reports Server (NTRS)
Adomian, G.; Miao, C. C.
1973-01-01
The research accomplished during this period is reported. Published abstracts and technical reports are listed. Articles presented include: boundedness of absolute values of generalized Fourier coefficients, propagation in stochastic media, and stationary conditions for stochastic differential equations.
[Analysis of women nutritional status during pregnancy--a survey].
Selwet, Monika; Machura, Mariola; Sipiński, Adam; Kuna, Anna; Kazimierczak, Małgorzata
2004-01-01
A proper diet is one of the most important factors during pregnancy. General knowledge about proper nourishment during pregnancy allows women to avoid quantitative and qualitative dietary mistakes. Health education in this area is therefore very important. The aim of the study is to analyze nourishment during pregnancy, particularly in professionally active women and in those who do not work during pregnancy.
Wood decomposition as influenced by invertebrates.
Ulyshen, Michael D
2016-02-01
The diversity and habitat requirements of invertebrates associated with dead wood have been the subjects of hundreds of studies in recent years but we still know very little about the ecological or economic importance of these organisms. The purpose of this review is to examine whether, how and to what extent invertebrates affect wood decomposition in terrestrial ecosystems. Three broad conclusions can be reached from the available literature. First, wood decomposition is largely driven by microbial activity but invertebrates also play a significant role in both temperate and tropical environments. Primary mechanisms include enzymatic digestion (involving both endogenous enzymes and those produced by endo- and ectosymbionts), substrate alteration (tunnelling and fragmentation), biotic interactions and nitrogen fertilization (i.e. promoting nitrogen fixation by endosymbiotic and free-living bacteria). Second, the effects of individual invertebrate taxa or functional groups can be accelerative or inhibitory but the cumulative effect of the entire community is generally to accelerate wood decomposition, at least during the early stages of the process (most studies are limited to the first 2-3 years). Although methodological differences and design limitations preclude meta-analysis, studies aimed at quantifying the contributions of invertebrates to wood decomposition commonly attribute 10-20% of wood loss to these organisms. Finally, some taxa appear to be particularly influential with respect to promoting wood decomposition. These include large wood-boring beetles (Coleoptera) and termites (Termitoidae), especially fungus-farming macrotermitines. The presence or absence of these species may be more consequential than species richness and the influence of invertebrates is likely to vary biogeographically. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
Temperature Responses of Soil Organic Matter Components With Varying Recalcitrance
NASA Astrophysics Data System (ADS)
Simpson, M. J.; Feng, X.
2007-12-01
The response of soil organic matter (SOM) to global warming remains unclear, partly due to the chemical heterogeneity of SOM composition. In this study, the decomposition of SOM from two grassland soils was investigated in a one-year laboratory incubation at six different temperatures. SOM was separated into solvent-extractable compounds, suberin- and cutin-derived compounds, and lignin monomers by solvent extraction, base hydrolysis, and CuO oxidation, respectively. These SOM components had distinct chemical structures and recalcitrance, and their decomposition was fitted by a two-pool exponential decay model. The stability of SOM components was assessed using geochemical parameters and kinetic parameters derived from model fitting. Lignin monomers exhibited much lower decay rates than solvent-extractable compounds, and a relatively low percentage of lignin monomers partitioned into the labile SOM pool, which confirmed the generally accepted recalcitrance of lignin compounds. Suberin- and cutin-derived compounds had a poor fit to the exponential decay model, and their recalcitrance was shown by the geochemical degradation parameter, which stabilized during the incubation. The aliphatic components of suberin degraded faster than cutin-derived compounds, suggesting that cutin-derived compounds in the soil may be at a higher stage of degradation than suberin-derived compounds. The temperature sensitivity of decomposition, expressed as Q10, was derived from the relationship between temperature and SOM decay rates. SOM components exhibited varying temperature responses, and the decomposition of the recalcitrant lignin monomers had much higher Q10 values than soil respiration or the decomposition of solvent-extractable compounds. Our study shows that the decomposition of recalcitrant SOM is highly sensitive to temperature, more so than bulk soil mineralization.
This observation suggests a potential acceleration in the degradation of the recalcitrant SOM pool with global warming.
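The two-pool exponential decay model and the Q10 temperature-sensitivity measure used above can be sketched as follows; all parameter values are assumed for illustration, not the study's fitted values:

```python
# Two-pool exponential decay of SOM mass and the Q10 temperature-sensitivity
# measure. All parameter values are assumed for illustration.
import math

def two_pool(t, f_labile, k_labile, k_stable):
    """Fraction of initial mass remaining at time t (years)."""
    return f_labile * math.exp(-k_labile * t) + (1 - f_labile) * math.exp(-k_stable * t)

def q10(k_warm, k_cool, t_warm, t_cool):
    """Q10 from decay rates measured at two incubation temperatures (deg C)."""
    return (k_warm / k_cool) ** (10.0 / (t_warm - t_cool))

remaining = two_pool(1.0, f_labile=0.3, k_labile=2.0, k_stable=0.05)
sensitivity = q10(k_warm=0.40, k_cool=0.20, t_warm=25.0, t_cool=15.0)  # rate doubles per 10 C
```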
NASA Astrophysics Data System (ADS)
Mao, J.; Chen, N.; Harmon, M. E.; Li, Y.; Cao, X.; Chappell, M.
2012-12-01
Advanced 13C solid-state NMR techniques were employed to study the chemical structural changes of litter during decomposition across broad spatial and long time scales. Fresh and decomposed litter samples of four species (Acer saccharum (ACSA), Drypetes glauca (DRGL), Pinus resinosa (PIRE), and Thuja plicata (THPL)), incubated for up to 10 years at four sites under different climatic conditions (from Arctic to tropical forest), were examined. Decomposition generally led to an enrichment of cutin and surface wax materials and a depletion of carbohydrates, causing the overall compositions to become more similar than those of the original litters. However, the changes of the main constituents in the four litters were inconsistent, with the four litters following different pathways of decomposition at the same site. As decomposition proceeded, waxy materials decreased at the early stage and then gradually increased in PIRE; DRGL showed a significant depletion of lignin and tannin, while the changes of lignin and tannin were relatively small and inconsistent for ACSA and THPL. In addition, the NCH groups, which could be associated with either fungal cell wall chitin or bacterial cell wall peptidoglycan, were enriched in all litters except THPL. Contrary to the classic lignin-enrichment hypothesis, DRGL, with a low-quality C substrate, had the highest degree of compositional change. Furthermore, some samples had more "advanced" compositional changes in the intermediate stage of decomposition than in the highly-decomposed stage. This pattern might be attributed to the formation of new cross-linking structures that rendered substrates more complex and difficult for enzymes to attack. Finally, litter quality overrode climate and time factors as a control of long-term changes in chemical composition.
Reactive Goal Decomposition Hierarchies for On-Board Autonomy
NASA Astrophysics Data System (ADS)
Hartmann, L.
2002-01-01
As our experience grows, space missions and systems are expected to address ever more complex and demanding requirements with fewer resources (e.g., mass, power, budget). One approach to accommodating these higher expectations is to increase the level of autonomy to improve the capabilities and robustness of on-board systems and to simplify operations. The goal decomposition hierarchies described here provide a simple but powerful form of goal-directed behavior that is relatively easy to implement for space systems. A goal corresponds to a state or condition that an operator of the space system would like to bring about. In the system described here, goals are decomposed into simpler subgoals until the subgoals are simple enough to execute directly. For each goal there is an activation condition and a set of decompositions. The decompositions correspond to different ways of achieving the higher level goal. Each decomposition contains a gating condition and a set of subgoals to be "executed" sequentially or in parallel. The gating conditions are evaluated in order, and for the first one that is true, the corresponding decomposition is executed in order to achieve the higher level goal. The activation condition specifies global conditions (i.e., for all decompositions of the goal) that need to hold in order for the goal to be achieved. In real time, parameters and state information are passed between goals and subgoals in the decomposition; a termination indication (success, failure, degree) is passed up when a decomposition finishes executing. The lowest level decompositions include servo control loops and finite state machines for generating control signals and sequencing I/O. Semaphores and shared memory are used to synchronize and coordinate decompositions that execute in parallel. The goal decomposition hierarchy is reactive in that the generated behavior is sensitive to the real-time state of the system and the environment.
That is, the system is able to react to state and environment and in general can terminate the execution of a decomposition and attempt a new decomposition at any level in the hierarchy. This goal decomposition system is suitable for workstation, microprocessor and FPGA implementation and thus is able to support the full range of prototyping activities, from mission design in the laboratory to development of the FPGA firmware for the flight system. This approach is based on previous artificial intelligence work including (1) Brooks' subsumption architecture for robot control, (2) Firby's Reactive Action Package System (RAPS) for mediating between high level automated planning and low level execution and (3) hierarchical task networks for automated planning. Reactive goal decomposition hierarchies can be used for a wide variety of on-board autonomy applications including automating low level operation sequences (such as scheduling prerequisite operations, e.g., heaters, warm-up periods, monitoring power constraints), coordinating multiple spacecraft as in formation flying and constellations, robot manipulator operations, rendezvous, docking, servicing, assembly, on-orbit maintenance, planetary rover operations, solar system and interstellar probes, intelligent science data gathering and disaster early warning. Goal decomposition hierarchies can support high level fault tolerance. Given models of on-board resources and goals to accomplish, the decomposition hierarchy could allocate resources to goals, taking into account existing faults and in real time reallocating resources as new faults arise. Resources to be modeled include memory (e.g., ROM, FPGA configuration memory, processor memory, payload instrument memory), processors, on-board and inter-spacecraft network nodes and links, sensors, actuators (e.g., attitude determination and control, guidance and navigation) and payload instruments.
A goal decomposition hierarchy could be defined to map mission goals and tasks to available on-board resources. As faults occur and are detected, the resource allocation is modified to avoid using the faulty resource. Goal decomposition hierarchies can implement variable autonomy (in which the operator chooses to command the system at a high or low level), mixed-initiative planning (in which the system is able to interact with the operator, e.g., to request operator intervention when a working envelope is exceeded) and distributed control (in which, for example, multiple spacecraft cooperate to accomplish a task without a fixed master). The full paper will describe in greater detail how goal decompositions work, how they can be implemented, techniques for implementing a candidate application and the current state of the FPGA implementation.
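A toy, hypothetical rendering of the scheme described above: each goal carries an activation condition and an ordered list of decompositions, each with a gating condition and subgoals; the first decomposition whose gate holds is executed, and leaf goals run a primitive action. All names and conditions are illustrative, not from the paper.

```python
# Toy goal decomposition hierarchy: activation conditions, ordered gated
# decompositions, primitive actions at the leaves. Names are illustrative.
class Goal:
    def __init__(self, name, activation=lambda s: True,
                 decompositions=None, action=None):
        self.name = name
        self.activation = activation
        self.decompositions = decompositions or []  # list of (gate, [subgoals])
        self.action = action                        # leaf-level primitive

    def execute(self, state):
        if not self.activation(state):
            return False                            # global condition violated
        if self.action is not None:
            return self.action(state)               # leaf: run the primitive
        for gate, subgoals in self.decompositions:
            if gate(state):                         # first true gate wins
                return all(g.execute(state) for g in subgoals)
        return False                                # no decomposition applicable

log = []
warm_up = Goal("warm_up", action=lambda s: log.append("heater on") or True)
measure = Goal("measure", action=lambda s: log.append("instrument read") or True)
observe = Goal("observe", decompositions=[
    (lambda s: s["power_ok"], [warm_up, measure]),  # nominal decomposition
    (lambda s: True, [measure]),                    # degraded mode: skip warm-up
])
ok = observe.execute({"power_ok": True})
```

The success/failure return value propagating upward plays the role of the paper's termination indication; a real system would add parallel execution and parameter passing.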
Optimization of design parameters of low-energy buildings
NASA Astrophysics Data System (ADS)
Vala, Jiří; Jarošová, Petra
2017-07-01
Evaluation of temperature development and of the related consumption of energy required for heating, air-conditioning, etc. in low-energy buildings requires a proper physical analysis covering heat conduction, convection and radiation, including the beam and diffusive components of solar radiation, on all building parts and interfaces. The system approach and the Fourier multiplicative decomposition, together with the finite element technique, offer the possibility of inexpensive and robust numerical and computational analysis of the corresponding direct problems, as well as of optimization problems with several design variables, using the Nelder-Mead simplex method. A practical example demonstrates the correlation between such numerical simulations and the time series of measurements of energy consumption for a small family house in Ostrov u Macochy (35 km north of Brno).
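The optimization step can be sketched with the Nelder-Mead simplex method applied to a toy surrogate for annual energy consumption in two design variables; the quadratic objective and its optimum are stand-in assumptions, not the paper's thermal model:

```python
# Nelder-Mead simplex minimization of a toy surrogate for annual energy use in
# two design variables (insulation thickness, glazing area). The quadratic
# surrogate and its optimum are assumptions, not the paper's thermal model.
import numpy as np
from scipy.optimize import minimize

def energy_cost(d):
    insulation, glazing = d
    return 5.0 * (insulation - 0.3) ** 2 + 2.0 * (glazing - 0.2) ** 2 + 40.0

result = minimize(energy_cost, x0=np.array([0.1, 0.5]), method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-8})
# result.x approaches (0.3, 0.2); result.fun approaches the baseline cost 40.0
```

Nelder-Mead is derivative-free, which is the point here: each objective evaluation is itself a finite element simulation, so no gradients are available.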
Coercivity enhancement of anisotropic Dy-free Nd-Fe-B powders by conventional HDDR process
NASA Astrophysics Data System (ADS)
Morimoto, K.; Katayama, N.; Akamine, H.; Itakura, M.
2012-11-01
Coercivity enhancement of Dy-free Nd-Fe-Co-B-Ga-Zr powders was studied using the conventional hydrogenation-decomposition-desorption-recombination (HDDR) process. It was found that the addition of Al, together with a proper Nd content and slow hydrogen desorption during the HDDR treatment, can induce high coercivity in the powder. For example, the 14.0 at% Nd-2.0 at% Al powder exhibits HcJ of 1560 kA/m, Br of 1.22 T, and (BH)max of 257 kJ/m3. The high coercivity of the powder is thought to be attributable to the formation of a Nd-rich phase, which continuously surrounds the fine Nd2Fe14B grains.
NASA Astrophysics Data System (ADS)
Shao, Zhiqiang
2018-04-01
The relativistic full Euler system with a generalized Chaplygin proper energy density-pressure relation is studied. The Riemann problem is solved constructively. The delta shock wave arises in the Riemann solutions provided that the initial data satisfy certain conditions, although the system is strictly hyperbolic and the first and third characteristic fields are genuinely nonlinear, while the second one is linearly degenerate. There are five kinds of Riemann solutions: four consist only of a shock wave and a centered rarefaction wave, or two shock waves, or two centered rarefaction waves, together with a contact discontinuity between the constant states (precisely speaking, these solutions consist in general of three waves), while the fifth involves delta shocks on which both the rest mass density and the proper energy density simultaneously contain the Dirac delta function. This is quite different from previous cases, in which only one state variable contains the Dirac delta function. The formation mechanism, generalized Rankine-Hugoniot relation and entropy condition are clarified for this type of delta shock wave. Under the generalized Rankine-Hugoniot relation and entropy condition, we establish the existence and uniqueness of solutions involving delta shocks for the Riemann problem.
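For context, the generalized Chaplygin relation usually takes the pressure to be a negative power of the proper energy density; a commonly used form (the paper's exact constants and exponent range are not reproduced here) is:

```latex
p = -\frac{A}{\rho^{\alpha}}, \qquad A > 0, \quad 0 < \alpha \leq 1,
```

where \(\rho\) denotes the proper energy density and \(p\) the pressure.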
Bird mortality associated with wind turbines at the Buffalo Ridge wind resource area, Minnesota
Osborn, R.G.; Higgins, K.F.; Usgaard, R.E.; Dieter, C.D.; Neiger, R.D.
2000-01-01
Recent technological advances have made wind power a viable source of alternative energy production, and the number of windplant facilities has increased in the United States. Construction was completed on a 73-turbine, 25-megawatt windplant on Buffalo Ridge near Lake Benton, Minnesota in spring 1994. The number of birds killed at existing windplants in California caused concern about the potential impacts of the Buffalo Ridge facility on the avian community. From April 1994 through December 1995 we searched the Buffalo Ridge windplant site for dead birds. Additionally, we evaluated search efficiency, predator scavenging rates and the rate of carcass decomposition. During 20 mo of monitoring we found 12 dead birds. Collisions with wind turbines were suspected for 8 of the 12 birds. During observer efficiency trials searchers found 78.8% of carcasses. Scavengers removed 39.5% of carcasses during scavenging trials. All carcasses remained recognizable during 7 d decomposition trials. After correction for biases we estimated that approximately 36 ± 12 birds (<1 dead bird per turbine) were killed at the Buffalo Ridge windplant in 1 y. Although windplants do not appear to be more detrimental to birds than other man-made structures, proper facility siting is an important first consideration in order to avoid unnecessary fatalities.
NASA Astrophysics Data System (ADS)
Feng, Mingbao; Qu, Ruijuan; Wei, Zhongbo; Wang, Liansheng; Sun, Ping; Wang, Zunyao
2015-05-01
The thermal decomposition of Nafion N117 membrane, a typical perfluorosulfonic acid membrane that is widely used in various chemical technologies, was investigated in this study. Structural identification of thermolysis products in water and methanol was performed using liquid chromatography-electrospray ionization-tandem mass spectrometry (LC/ESI-MS/MS). The fluoride release was studied using an ion-chromatography system, and the membrane thermal stability was characterized by thermogravimetric analysis. Notably, several types of perfluorinated compounds (PFCs), including perfluorocarboxylic acids, were detected and identified. Based on these data, a thermolysis mechanism was proposed involving cleavage of both the polymer backbone and its side chains by attack of radical species. This is the first systematic report on the thermolysis products of Nafion obtained by simulating its high-temperature operation and disposal process via incineration. The results of this study indicate that Nafion is a potential environmental source of PFCs, which have attracted growing interest and concern in recent years. Additionally, this study provides an analytical justification of the LC/ESI-MS/MS method for characterizing the degradation products of polymer electrolyte membranes. These identifications can substantially facilitate an understanding of their decomposition mechanisms and offer insight into the proper utilization and effective management of these membranes.
Reactive power planning under high penetration of wind energy using Benders decomposition
Xu, Yan; Wei, Yanli; Fang, Xin; ...
2015-11-05
This study addresses the optimal allocation of reactive power volt-ampere reactive (VAR) sources under the paradigm of high penetration of wind energy. Reactive power planning (RPP) in this particular condition involves a high level of uncertainty because of wind power characteristics. To properly model wind generation uncertainty, a multi-scenario framework optimal power flow that considers the voltage stability constraint under the worst wind scenario and transmission N-1 contingency is developed. The objective of RPP in this study is to minimise the total cost, including the VAR investment cost and the expected generation cost. Therefore, RPP under this condition is modelled as a two-stage stochastic programming problem to optimise the VAR location and size in one stage, then to minimise the fuel cost in the other stage, and eventually to find the global optimal RPP results iteratively. Benders decomposition is used to solve this model with an upper level problem (master problem) for VAR allocation optimisation and a lower problem (sub-problem) for generation cost minimisation. The impact of the potential reactive power support from doubly-fed induction generators (DFIGs) is also analysed. Lastly, case studies on the IEEE 14-bus and 118-bus systems are provided to verify the proposed method.
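The two-stage Benders structure can be sketched on a toy problem: a master chooses the first-stage variable x and a cost-to-go theta, while a subproblem (solved in closed form here) prices the recourse and returns optimality cuts. All coefficients are invented for illustration; a real RPP model would use a power-flow subproblem:

```python
# Toy Benders decomposition mirroring the two-stage RPP structure: the master
# picks first-stage x and cost-to-go theta; the subproblem (closed form here)
# returns an optimality cut theta >= lam*(h - x). All coefficients are invented.
#   min c*x + d*y   s.t.  y >= h - x,  y >= 0,  0 <= x <= U
from scipy.optimize import linprog

c, d, h, U = 1.0, 3.0, 4.0, 10.0
cuts = []  # dual multipliers lam, each encoding the cut theta >= lam*(h - x)

for _ in range(50):
    A_ub = [[-lam, -1.0] for lam in cuts] or None   # -lam*x - theta <= -lam*h
    b_ub = [-lam * h for lam in cuts] or None
    res = linprog([c, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0.0, U), (0.0, None)])    # variables: x, theta
    x, theta = res.x
    lower = res.fun                                  # master value: lower bound
    y = max(0.0, h - x)                              # closed-form recourse
    upper = c * x + d * y                            # feasible cost: upper bound
    if upper - lower < 1e-6:
        break                                        # bounds have met: optimal
    cuts.append(d if y > 1e-9 else 0.0)              # subproblem dual multiplier
```

On this instance the bounds close after two iterations at x = 4, the point where further first-stage investment stops paying for itself.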
Differential Decomposition of Bacterial and Viral Fecal ...
Understanding the decomposition of microorganisms associated with different human fecal pollution types is necessary for proper implementation of many water quality management practices, as well as for predicting associated public health risks. Here, the decomposition of select cultivated and molecular indicators of fecal pollution originating from fresh human feces, septage, and primary effluent sewage in a subtropical marine environment was assessed over a six-day period, with an emphasis on the influence of ambient sunlight and indigenous microbiota. Ambient water mixed with each fecal pollution type was placed in dialysis bags and incubated in situ in a submersible aquatic mesocosm. Genetic and cultivated fecal indicators including fecal indicator bacteria (enterococci, E. coli, and Bacteroidales), coliphage (somatic and F+), Bacteroides fragilis phage (GB-124), and human-associated genetic indicators (HF183/BacR287 and HumM2) were measured in each sample. Simple linear regression assessing treatment trends in each pollution type over time showed significant decay (p ≤ 0.05) in most treatments for feces and sewage (27/28 and 32/40, respectively), compared to septage (6/26). A two-way analysis of variance of log10 reduction values for sewage and feces experiments indicated that treatments differentially impact the survival of cultivated bacteria, cultivated phage, and genetic indicators. Findings suggest that sunlight is critical for phage decay, and indigenous microbiota ...
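The log10 reduction values and decay trends referred to above can be sketched as follows; the concentrations are assumed example numbers, not the study's measurements:

```python
# Log10 reduction value (LRV) and a first-order decay slope from simple linear
# regression of ln(C) on time, as used to assess treatment trends. All
# concentrations below are assumed example numbers.
import math

def log10_reduction(c0, ct):
    return math.log10(c0 / ct)

def decay_rate(times, concentrations):
    """OLS slope of ln(C) vs t; a negative slope indicates first-order decay."""
    n = len(times)
    y = [math.log(c) for c in concentrations]
    t_mean = sum(times) / n
    y_mean = sum(y) / n
    num = sum((t - t_mean) * (yi - y_mean) for t, yi in zip(times, y))
    den = sum((t - t_mean) ** 2 for t in times)
    return num / den

lrv = log10_reduction(1.0e6, 1.0e3)                   # a 3-log reduction
k = decay_rate([0, 2, 4, 6], [1e6, 1e5, 1e4, 1e3])    # one log10 lost every 2 days
```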
An improved algorithm for balanced POD through an analytic treatment of impulse response tails
NASA Astrophysics Data System (ADS)
Tu, Jonathan H.; Rowley, Clarence W.
2012-06-01
We present a modification of the balanced proper orthogonal decomposition (balanced POD) algorithm for systems with simple impulse response tails. In this new method, we use dynamic mode decomposition (DMD) to estimate the slowly decaying eigenvectors that dominate the long-time behavior of the direct and adjoint impulse responses. This is done using a new, low-memory variant of the DMD algorithm, appropriate for large datasets. We then formulate analytic expressions for the contribution of these eigenvectors to the controllability and observability Gramians. These contributions can be accounted for in the balanced POD algorithm by simply appending the impulse response snapshot matrices (direct and adjoint, respectively) with particular linear combinations of the slow eigenvectors. Aside from these additions to the snapshot matrices, the algorithm remains unchanged. By treating the tails analytically, we eliminate the need to run long impulse response simulations, lowering storage requirements and speeding up ensuing computations. To demonstrate its effectiveness, we apply this method to two examples: the linearized, complex Ginzburg-Landau equation, and the two-dimensional fluid flow past a cylinder. As expected, reduced-order models computed using an analytic tail match or exceed the accuracy of those computed using the standard balanced POD procedure, at a fraction of the cost.
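The eigenvalue-estimation step that this method builds on, exact DMD, can be sketched as follows. The toy linear system below is an assumption for illustration, not the paper's low-memory variant:

```python
import numpy as np

# Minimal (exact) DMD sketch: recover the eigenvalues of a linear map from
# snapshot pairs x_{k+1} = A x_k.
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1],
              [0.0, 0.5]])          # true eigenvalues: 0.9 and 0.5
x = rng.standard_normal(2)
snaps = [x]
for _ in range(20):
    x = A @ x
    snaps.append(x)
X = np.column_stack(snaps[:-1])     # snapshots 0..m-1
Y = np.column_stack(snaps[1:])      # snapshots 1..m (one step later)

U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = int(np.sum(s > 1e-10 * s[0]))   # numerical rank
U, s, Vh = U[:, :r], s[:r], Vh[:r]
Atilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)   # projected operator
eigvals = np.linalg.eigvals(Atilde)
print(np.sort(eigvals.real))        # recovers the slow/fast decay rates
```

In the paper's setting, the slowly decaying eigenvectors estimated this way are what get appended (via their analytic Gramian contributions) to the balanced POD snapshot matrices.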
Dry Chemical Development - A Model for the Extinction of Hydrocarbon Flames.
1984-02-08
...and predicts the suppression effectiveness of a wide variety of gaseous, liquid, and solid agents. The flame extinguishment model is based on the ... generalized by consideration of all endothermic reaction sinks, e.g., vaporization, dissociation, and decomposition. The general equation correlates ... Various fire-extinguishing agents are carried on board Navy ships to control
3D modeling of squeeze flow of unidirectionally thermoplastic composite inserts
NASA Astrophysics Data System (ADS)
Ghnatios, Chady; Abisset-Chavanne, Emmanuelle; Binetruy, Christophe; Chinesta, Francisco; Advani, Suresh
2016-10-01
Thermoplastic composites are attractive because they can be recycled and exhibit superior mechanical properties. The ability of thermoplastic resin to melt and solidify allows for fast and cost-effective manufacturing processes, which is a crucial property for high-volume production. Thermoplastic composite parts are usually obtained by stacking several prepreg plies to create a laminate with a particular orientation sequence to meet design requirements. During the consolidation and forming process, the thermoplastic laminate is subjected to complex deformation which can include intraply and/or interply shear, ply reorientation and squeeze flow. In the case of unidirectional prepregs, the ply constitutive equation, when elastic effects are neglected, can be modeled as a transversally isotropic fluid that must satisfy fiber inextensibility as well as fluid incompressibility. The high-fidelity solution of the squeeze flow in laminates composed of unidirectional prepregs was addressed in our former works by making use of an in-plane-out-of-plane separated representation allowing a very detailed resolution of the involved fields throughout the laminate thickness. In the present work, prepreg plies are assumed to have limited dimensions compared with the in-plane dimensions of the part and are referred to as inserts. Again, within the Proper Generalized Decomposition framework, high-resolution simulation of the squeeze flow occurring during consolidation is addressed using a fully 3D in-plane-out-of-plane separated representation.
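The in-plane-out-of-plane separated representation writes a field as a sum of products of in-plane and thickness functions, u(x, z) ≈ Σᵢ Xᵢ(x)·Zᵢ(z). A minimal discrete sketch of the greedy rank-one enrichment is below; it mimics only the enrichment structure on a stored sample field (the field `U` is invented), whereas the actual squeeze-flow solver enriches against the weak form of the flow equations:

```python
import numpy as np

# Greedy separated representation via alternating (fixed-point) rank-1 enrichment.
x = np.linspace(0, 1, 60)            # "in-plane" coordinate samples
z = np.linspace(0, 1, 25)            # "thickness" coordinate samples
U = np.outer(np.sin(np.pi * x), np.cos(np.pi * z)) + 0.3 * np.outer(x**2, z)

R = np.copy(U)                       # residual field
modes = []
for _ in range(3):                   # enrichment loop: add one product mode each pass
    Z = np.ones(z.size)              # initial guess for the thickness function
    for _ in range(50):              # alternating fixed point on X(x) and Z(z)
        X = R @ Z / (Z @ Z)
        Z = R.T @ X / (X @ X)
    modes.append((X, Z))
    R = R - np.outer(X, Z)           # deflate: remove the captured mode

rel_err = np.linalg.norm(R) / np.linalg.norm(U)
print(len(modes), rel_err)           # a few separated modes capture the field
```

Because each mode stores one x-vector and one z-vector instead of a full 2-D (or 3-D) array, resolution through the thickness stays cheap, which is the point of the separated representation.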
A Possible Regenerative, Molten-Salt, Thermoelectric Fuel Cell
NASA Technical Reports Server (NTRS)
Greenberg, Jacob; Thaller, Lawrence H.; Weber, Donald E.
1964-01-01
Molten or fused salts have been evaluated as possible thermoelectric materials because of the relatively good values of their figures of merit, their chemical stability, their long liquid range, and their ability to operate in conjunction with a nuclear reactor to produce heat. In general, molten salts are electrolytic conductors; therefore, there will be a transport of materials and subsequent decomposition with the passage of an electric current. It is possible nonetheless to overcome this disadvantage by using the decomposition products of the molten-salt electrolyte in a fuel cell. The combination of a thermoelectric converter and a fuel cell would lead to a regenerative system that may be useful.
Decomposability and scalability in space-based observatory scheduling
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Smith, Stephen F.
1992-01-01
In this paper, we discuss issues of problem and model decomposition within the HSTS scheduling framework. HSTS was developed and originally applied in the context of the Hubble Space Telescope (HST) scheduling problem, motivated by the limitations of the current solution and, more generally, the insufficiency of classical planning and scheduling approaches in this problem context. We first summarize the salient architectural characteristics of HSTS and their relationship to previous scheduling and AI planning research. Then, we describe some key problem decomposition techniques supported by HSTS and underlying our integrated planning and scheduling approach, and we discuss the leverage they provide in solving space-based observatory scheduling problems.
Hilbert complexes of nonlinear elasticity
NASA Astrophysics Data System (ADS)
Angoshtari, Arzhang; Yavari, Arash
2016-12-01
We introduce some Hilbert complexes involving second-order tensors on flat compact manifolds with boundary that describe the kinematics and the kinetics of motion in nonlinear elasticity. We then use the general framework of Hilbert complexes to write Hodge-type and Helmholtz-type orthogonal decompositions for second-order tensors. As some applications of these decompositions in nonlinear elasticity, we study the strain compatibility equations of linear and nonlinear elasticity in the presence of Dirichlet boundary conditions and the existence of stress functions on non-contractible bodies. As an application of these Hilbert complexes in computational mechanics, we briefly discuss the derivation of a new class of mixed finite element methods for nonlinear elasticity.
Stock, Christoph; Heureux, Nicolas; Browne, Wesley R; Feringa, Ben L
2008-01-01
A general approach for the easy functionalization of bare silica and glass surfaces with a synthetic manganese catalyst is reported. Decomposition of H(2)O(2) by this dinuclear metallic center into H(2)O and O(2) induced autonomous movement of silica microparticles and glass micro-sized fibers. Although several mechanisms have been proposed to rationalise the movement of particles driven by H(2)O(2) decomposition to O(2) and water (recoil from O(2) bubbles [36,45], interfacial tension gradients [37-42]), it is apparent in the present system that ballistic movement is due to the growth of O(2) bubbles.
New non-naturally reductive Einstein metrics on exceptional simple Lie groups
NASA Astrophysics Data System (ADS)
Chen, Huibin; Chen, Zhiqi; Deng, Shaoqiang
2018-01-01
In this article, we construct several non-naturally reductive Einstein metrics on exceptional simple Lie groups, which are found through the decomposition arising from generalized Wallach spaces. Using the decomposition corresponding to the two involutions, we calculate the non-zero coefficients in the formulas of the components of Ricci tensor with respect to the given metrics. The Einstein metrics are obtained as solutions of a system of polynomial equations, which we manipulate by symbolic computations using Gröbner bases. In particular, we discuss the concrete numbers of non-naturally reductive Einstein metrics for each case up to isometry and homothety.
Genten: Software for Generalized Tensor Decompositions v. 1.0.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phipps, Eric T.; Kolda, Tamara G.; Dunlavy, Daniel
Tensors, or multidimensional arrays, are a powerful mathematical means of describing multiway data. This software provides computational means for decomposing or approximating a given tensor in terms of smaller tensors of lower dimension, focusing on decomposition of large, sparse tensors. These techniques have applications in many scientific areas, including signal processing, linear algebra, computer vision, numerical analysis, data mining, graph analysis, neuroscience and more. The software is designed to take advantage of the parallelism present in emerging computer architectures such as multi-core CPUs, many-core accelerators such as the Intel Xeon Phi, and computation-oriented GPUs to enable efficient processing of large tensors.
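The classical dense analogue of the decompositions Genten targets is the CP (CANDECOMP/PARAFAC) model, computable by alternating least squares. The sketch below is illustrative only (Genten itself handles large sparse tensors in parallel C++); the warm start near the true factors is an assumption to keep the toy deterministic:

```python
import numpy as np

# CP-ALS on a small exact rank-2 tensor X[i,j,k] = sum_r A[i,r]*B[j,r]*C[k,r].
rng = np.random.default_rng(1)
I, J, K, R = 4, 5, 6, 2
A0, B0, C0 = (rng.standard_normal((n, R)) for n in (I, J, K))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)

def khatri_rao(P, Q):
    # Column-wise Kronecker product; row ordering matches the order='F' unfoldings below.
    return (P[:, None, :] * Q[None, :, :]).reshape(-1, P.shape[1])

# Warm start near the true factors (illustrative; random init usually works too).
A = A0 + 0.1 * rng.standard_normal((I, R))
B = B0 + 0.1 * rng.standard_normal((J, R))
C = C0 + 0.1 * rng.standard_normal((K, R))
for _ in range(500):                 # alternating least-squares sweeps
    A = X.reshape(I, -1, order='F') @ khatri_rao(C, B) @ np.linalg.pinv((C.T @ C) * (B.T @ B))
    B = np.moveaxis(X, 1, 0).reshape(J, -1, order='F') @ khatri_rao(C, A) @ np.linalg.pinv((C.T @ C) * (A.T @ A))
    C = np.moveaxis(X, 2, 0).reshape(K, -1, order='F') @ khatri_rao(B, A) @ np.linalg.pinv((B.T @ B) * (A.T @ A))

rel_err = np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(X)
print(rel_err)                       # near zero for an exact low-rank tensor
```

Each sweep solves a linear least-squares problem for one factor matrix with the others fixed, which is exactly the structure a sparse/parallel implementation exploits.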
The moments of inertia of Mars
NASA Technical Reports Server (NTRS)
Bills, Bruce G.
1989-01-01
The mean moment of inertia of Mars is, at present, very poorly constrained. The generally accepted value of 0.365 M(R-squared) is obtained by assuming that the observed second degree gravity field can be decomposed into a hydrostatic oblate spheroid and a nonhydrostatic prolate spheroid with an equatorial axis of symmetry. An alternative decomposition is advocated in the present analysis. If the nonhydrostatic component is a maximally triaxial ellipsoid (intermediate moment exactly midway between greatest and least), the hydrostatic component is consistent with a mean moment of 0.345 M(R-squared). The plausibility of this decomposition is supported by statistical arguments and comparison with the earth, moon and Venus.
Liu, Zhengyan; Mao, Xianqiang; Song, Peng
2017-01-01
Temporal index decomposition analysis and spatial index decomposition analysis were applied to understand the driving forces of the emissions embodied in China's exports and net exports during 2002-2011, respectively. The accumulated emissions embodied in exports accounted for approximately 30% of the total emissions in China; although the contribution of the sectoral total emissions intensity (technique effect) declined, the scale effect was largely responsible for the mounting emissions associated with export, and the composition effect played a largely insignificant role. Calculations of the emissions embodied in net exports suggest that China is generally in an environmentally inferior position compared with its major trade partners. The differences in the economy-wide emission intensities between China and its major trade partners were the biggest contributor to this situation, and the trade balance effect played a less important role. However, a lower degree of specialization in pollution-intensive products in exports than in imports helped to slightly reduce the emissions embodied in net exports. The temporal index decomposition analysis results suggest that China should take effective measures to optimize export and supply-side structure and reduce the total emissions intensity. According to spatial index decomposition analysis, it is suggested that a more aggressive import policy was useful for curbing domestic and global emissions, and the transfer of advanced production technologies and emission control technologies from developed to developing countries should be a compulsory global environmental policy option to mitigate the possible leakage of pollution emissions caused by international trade.
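The scale/composition/technique split used in such studies can be sketched with the additive LMDI-I formula, where sectoral emissions are modelled as E_i = Q·s_i·e_i (total activity, sectoral share, sectoral intensity). All numbers below are invented for illustration; the defining property, that the three effects sum exactly to the total change, holds by construction:

```python
import numpy as np

# Additive LMDI-I decomposition of an emissions change between periods 0 and 1.
def lmean(a, b):
    # Logarithmic mean, with the a == b limit handled explicitly.
    return a if np.isclose(a, b) else (a - b) / (np.log(a) - np.log(b))

Q0, Q1 = 100.0, 160.0                                    # total activity (e.g. exports)
s0, s1 = np.array([0.6, 0.4]), np.array([0.5, 0.5])      # sector shares
e0, e1 = np.array([2.0, 0.5]), np.array([1.5, 0.4])      # emission intensities

E0, E1 = Q0 * s0 * e0, Q1 * s1 * e1                      # sectoral emissions
w = np.array([lmean(a, b) for a, b in zip(E1, E0)])      # log-mean weights
scale = np.sum(w * np.log(Q1 / Q0))                      # scale (activity) effect
comp = np.sum(w * np.log(s1 / s0))                       # composition effect
tech = np.sum(w * np.log(e1 / e0))                       # technique (intensity) effect
print(scale + comp + tech, E1.sum() - E0.sum())          # perfect decomposition: equal
```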
On squares of representations of compact Lie algebras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeier, Robert, E-mail: robert.zeier@ch.tum.de; Zimborás, Zoltán, E-mail: zimboras@gmail.com
We study how tensor products of representations decompose when restricted from a compact Lie algebra to one of its subalgebras. In particular, we are interested in tensor squares, which are tensor products of a representation with itself. We show in a classification-free manner that the sum of multiplicities and the sum of squares of multiplicities in the corresponding decomposition of a tensor square into irreducible representations have to strictly grow when restricted from a compact semisimple Lie algebra to a proper subalgebra. For this purpose, relevant details on tensor products of representations are compiled from the literature. Since the sum of squares of multiplicities is equal to the dimension of the commutant of the tensor-square representation, it can be determined by linear-algebra computations in a scenario where an a priori unknown Lie algebra is given by a set of generators which might not be a linear basis. Hence, our results offer a test to decide if a subalgebra of a compact semisimple Lie algebra is a proper one without calculating the relevant Lie closures, which can be naturally applied in the field of controlled quantum systems.
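The linear-algebra computation mentioned in the abstract can be sketched on the smallest example. For su(2) in the spin-1/2 representation, the tensor square splits into spin-1 ⊕ spin-0 (multiplicities 1 and 1), so the commutant dimension, the sum of squared multiplicities, is 2:

```python
import numpy as np

# Commutant dimension of a tensor-square representation from its generators.
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2)

rows = []
for g in (sx, sy, sz):
    L = np.kron(g, I2) + np.kron(I2, g)          # generator acting on the tensor square
    # [L, X] = 0  <=>  (I ⊗ L - L^T ⊗ I) vec(X) = 0  (column-stacking vec convention)
    rows.append(np.kron(np.eye(4), L) - np.kron(L.T, np.eye(4)))
M = np.vstack(rows)
s = np.linalg.svd(M, compute_uv=False)
commutant_dim = int(np.sum(s < 1e-10))           # nullspace dimension = commutant dimension
print(commutant_dim)                             # 2 = 1^2 + 1^2 for spin-1 ⊕ spin-0
```

The same nullspace computation works when the generators span an unknown subalgebra, which is the proper-subalgebra test the abstract describes.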
The Roadmaker's algorithm for the discrete pulse transform.
Laurie, Dirk P
2011-02-01
The discrete pulse transform (DPT) is a decomposition of an observed signal into a sum of pulses, i.e., signals that are constant on a connected set and zero elsewhere. Originally developed for 1-D signal processing, the DPT has recently been generalized to more dimensions. Applications in image processing are currently being investigated. The time required to compute the DPT as originally defined via the successive application of LULU operators (members of a class of minimax filters studied by Rohwer) has been a severe drawback to its applicability. This paper introduces a fast method for obtaining such a decomposition, called the Roadmaker's algorithm because it involves filling pits and razing bumps. It acts selectively only on those features actually present in the signal, flattening them in order of increasing size by subtracting an appropriate positive or negative pulse, which is then appended to the decomposition. The implementation described here covers 1-D signal processing as well as 2-D and 3-D image processing in a single framework. This is achieved by considering the signal or image as a function defined on a graph, with the geometry specified by the edges of the graph. Whenever a feature is flattened, nodes in the graph are merged, until eventually only one node remains. At that stage, a new set of edges for the same nodes as the graph, forming a tree structure, defines the obtained decomposition. The Roadmaker's algorithm is shown to be equivalent to the DPT in the sense of obtaining the same decomposition. However, its simpler operators are not in general equivalent to the LULU operators in situations where those operators are not applied successively. A by-product of the Roadmaker's algorithm is that it yields a proof of the so-called Highlight Conjecture, stated as an open problem in 2006.
We pay particular attention to algorithmic details and complexity, including a demonstration that in the 1-D case, and also in the case of a complete graph, the Roadmaker's algorithm has optimal complexity: it runs in time O(m), where m is the number of arcs in the graph.
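A much-simplified 1-D version of such a pulse decomposition can be sketched as follows: repeatedly flatten the smallest current peak ("bump") or pit to its nearest neighbour level, recording each removed pulse. This is only in the spirit of the Roadmaker's algorithm; the paper's version works on general graphs with node merging and careful complexity accounting:

```python
import numpy as np

def pulse_decompose(signal):
    runs = []                                   # maximal constant runs: [value, start, stop)
    for i, v in enumerate(signal):
        if runs and runs[-1][0] == v:
            runs[-1][2] = i + 1
        else:
            runs.append([v, i, i + 1])
    pulses = []                                 # recorded pulses: (start, stop, height)
    while len(runs) > 1:
        best = None                             # smallest run that is a peak or a pit
        for idx, (v, a, b) in enumerate(runs):
            left = runs[idx - 1][0] if idx > 0 else None
            right = runs[idx + 1][0] if idx + 1 < len(runs) else None
            peak = all(n is None or v > n for n in (left, right))
            pit = all(n is None or v < n for n in (left, right))
            if peak or pit:
                size = b - a
                if best is None or size < best[0]:
                    nbrs = [n for n in (left, right) if n is not None]
                    best = (size, idx, max(nbrs) if peak else min(nbrs))
        _, idx, new = best
        v, a, b = runs[idx]
        pulses.append((a, b, v - new))          # subtract this pulse from the signal
        runs[idx][0] = new
        merged = []                             # merge equal-valued neighbouring runs
        for r in runs:
            if merged and merged[-1][0] == r[0]:
                merged[-1][2] = r[2]
            else:
                merged.append(r)
        runs = merged
    return pulses, runs[0][0]                   # pulses plus constant background

sig = [3, 1, 4, 1, 5, 9, 2, 6]
pulses, base = pulse_decompose(sig)
rec = np.full(len(sig), float(base))
for a, b, h in pulses:
    rec[a:b] += h                               # each pulse is constant on its support
print(np.allclose(rec, sig))                    # True: the pulses sum back to the signal
```

Each flattening merges the feature with a neighbouring run, so the run count strictly decreases and the loop terminates; by construction the background plus all recorded pulses reconstructs the input exactly.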
[The life cycle of general practitioners' professional motivations: the case of prevention].
Videau, Y; Batifoulier, P; Arrighi, Y; Gadreau, M; Ventelou, B
2010-10-01
The analysis of "professional motivations", mainly through the possible crowding-out effects between extrinsic and intrinsic motivations, has become an issue of great concern in the economic literature. This paper aims at applying this topic to the healthcare professions, where the proper scaling up of pay-for-performance (P4P) policies by public authorities is at stake. We used a panel of 528 self-employed general practitioners in the "Provence-Alpes-Côte d'Azur" region in France to provide an interpersonal statistical decomposition between extrinsic and intrinsic motivations with regard to preventive actions. Then, we applied a Tobit model in order to specify the main explanatory variables of the share of intrinsic motivations entering into physicians' total motivations. The relative share of intrinsic motivations was quite high among physicians paid with fixed fees. We found a significant effect of age on intrinsic motivations describing a U-shaped curve, which can be interpreted as being the result of a "life cycle of medical motivations" or a generational effect. The cross-sectional nature of the data does not allow us to draw any conclusions concerning the predominance of the generational effect or the "life cycle effect" on the evolution of the relative share of physicians' intrinsic motivations. Nevertheless, the U-shaped relation between intrinsic motivations and age questions the suitability of applying P4P mechanisms uniformly. The generations or age groups of self-employed physicians who seem to be less responsive to extrinsic motivations are more likely to favour the introduction of other types of payment schemes (capitation or salary systems) or regulation tools such as clinical practice guidelines. Copyright © 2010 Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Feigin, Alexander; Gavrilov, Andrey; Loskutov, Evgeny; Mukhin, Dmitry
2015-04-01
Proper decomposition of a complex system into well-separated "modes" is a way to reveal and understand the mechanisms governing the system's behaviour, as well as to discover essential feedbacks and nonlinearities. Decomposition is also a natural procedure for constructing models of the corresponding sub-systems, and of the system as a whole, that are adequate and at the same time as simple as possible. In recent works, two new methods for decomposing the Earth's climate system into well-separated modes were discussed. The first method [1-3] is based on MSSA (Multichannel Singular Spectrum Analysis) [4], a linear expansion of vector (space-distributed) time series, and takes into account delayed correlations between processes recorded at spatially separated points. The second one [5-7] allows the construction of nonlinear dynamic modes, but neglects delays in correlations. It was demonstrated [1-3] that the first method provides effective separation of different time scales but prevents correct reduction of the data dimension: the slope of the variance spectrum of the spatio-temporal empirical orthogonal functions that are the "structural material" for linear spatio-temporal modes is too flat. The second method overcomes this problem: the variance spectrum of nonlinear modes falls much more sharply [5-7]. However, neglecting time-lag correlations introduces a mode-selection error that is uncontrolled and grows with the mode's time scale. In this report we combine the two methods so that the resulting algorithm allows the construction of nonlinear spatio-temporal modes. The algorithm is applied to the decomposition of (i) several hundred years of globally distributed data generated by the INM RAS Coupled Climate Model [8], and (ii) a 156-year time series of SST anomalies distributed over the globe [9].
We compare the efficiency of the different decomposition methods and discuss the ability of nonlinear spatio-temporal modes to support the construction of adequate and at the same time simplest ("optimal") models of climate systems. 1. Feigin A.M., Mukhin D., Gavrilov A., Volodin E.M., and Loskutov E.M. (2013) "Separation of spatial-temporal patterns ("climatic modes") by combined analysis of really measured and generated numerically vector time series", AGU 2013 Fall Meeting, Abstract NG33A-1574. 2. Alexander Feigin, Dmitry Mukhin, Andrey Gavrilov, Evgeny Volodin, and Evgeny Loskutov (2014) "Approach to analysis of multiscale space-distributed time series: separation of spatio-temporal modes with essentially different time scales", Geophysical Research Abstracts, Vol. 16, EGU2014-6877. 3. Dmitry Mukhin, Dmitri Kondrashov, Evgeny Loskutov, Andrey Gavrilov, Alexander Feigin, and Michael Ghil (2014) "Predicting critical transitions in ENSO models, Part II: Spatially dependent models", Journal of Climate (accepted, doi: 10.1175/JCLI-D-14-00240.1). 4. Ghil, M., R. M. Allen, M. D. Dettinger, K. Ide, D. Kondrashov, et al. (2002) "Advanced spectral methods for climatic time series", Rev. Geophys. 40(1), 3.1-3.41. 5. Dmitry Mukhin, Andrey Gavrilov, Evgeny M Loskutov and Alexander M Feigin (2014) "Nonlinear Decomposition of Climate Data: a New Method for Reconstruction of Dynamical Modes", AGU 2014 Fall Meeting, Abstract NG43A-3752. 6. Andrey Gavrilov, Dmitry Mukhin, Evgeny Loskutov, and Alexander Feigin (2015) "Empirical decomposition of climate data into nonlinear dynamic modes", Geophysical Research Abstracts, Vol. 17, EGU2015-627. 7. Dmitry Mukhin, Andrey Gavrilov, Evgeny Loskutov, Alexander Feigin, and Juergen Kurths (2015) "Reconstruction of principal dynamical modes from climatic variability: nonlinear approach", Geophysical Research Abstracts, Vol. 17, EGU2015-5729. 8. http://83.149.207.89/GCM_DATA_PLOTTING/GCM_INM_DATA_XY_en.htm. 9.
http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/.
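The single-channel building block behind the MSSA-based linear decomposition discussed above can be sketched in a few lines: lag-embed the series into a Hankel trajectory matrix, take an SVD, and reconstruct a mode by diagonal averaging. The pure-sinusoid test series is an assumption for illustration:

```python
import numpy as np

# Minimal single-channel SSA sketch.
t = np.arange(200)
series = np.sin(2 * np.pi * t / 20)             # one oscillatory "mode"

L = 40                                          # window (embedding) length
K = series.size - L + 1
X = np.column_stack([series[i:i + L] for i in range(K)])   # Hankel trajectory matrix

U, s, Vh = np.linalg.svd(X, full_matrices=False)
# A sinusoid occupies exactly two singular directions; keep that rank-2 part.
X2 = s[0] * np.outer(U[:, 0], Vh[0]) + s[1] * np.outer(U[:, 1], Vh[1])

rec = np.zeros_like(series)
cnt = np.zeros_like(series)
for j in range(K):                              # diagonal (Hankel) averaging
    rec[j:j + L] += X2[:, j]
    cnt[j:j + L] += 1
rec /= cnt
print(np.max(np.abs(rec - series)))             # ~0: the two leading modes suffice
```

MSSA stacks several channels into the trajectory matrix, which is how the delayed spatial correlations enter; the nonlinear-mode construction replaces the linear SVD expansion.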
By Hand or Not By-Hand: A Case Study of Alternative Approaches to Parallelize CFD Applications
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Bailey, David (Technical Monitor)
1997-01-01
While parallel processing promises to speed up applications by several orders of magnitude, the performance achieved still depends upon several factors, including the multiprocessor architecture, system software, data distribution and alignment, as well as the methods used for partitioning the application and mapping its components onto the architecture. The existence of the Gordon Bell Prize given out at Supercomputing every year suggests that while good performance can be attained for real applications on general-purpose multiprocessors, the large investment in manpower and time still has to be repeated for each application-machine combination. As applications and machine architectures become more complex, the cost and time-delays for obtaining performance by hand will become prohibitive. Computer users today can turn to three possible avenues for help: parallel libraries; parallel languages and compilers; and interactive parallelization tools. The success of these methodologies, in turn, depends on proper application of data dependency analysis, program structure recognition and transformation, performance prediction as well as exploitation of user-supplied knowledge. NASA has been developing multidisciplinary applications on highly parallel architectures under the High Performance Computing and Communications Program. Over the past six years, transitions of the underlying hardware and system software have forced scientists to spend a large effort migrating and recoding their applications. Various attempts to exploit software tools to automate the parallelization process have not produced favorable results. In this paper, we report our most recent experience with CAPTOOL, a package developed at Greenwich University. We have chosen CAPTOOL for three reasons: 1. CAPTOOL accepts a FORTRAN 77 program as input. This suggests its potential applicability to a large collection of legacy codes currently in use. 2.
CAPTOOL employs domain decomposition to obtain parallelism. Although the fact that not all kinds of parallelism are handled may seem unappealing, many NASA applications in computational aerosciences as well as earth and space sciences are amenable to domain decomposition. 3. CAPTOOL generates code for a large variety of environments employed across NASA centers: MPI/PVM on networks of workstations to the IBM SP2 and Cray T3D.
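The pattern that such domain-decomposition tools generate can be sketched serially: split a 1-D grid between two "processes", exchange one ghost (halo) cell per boundary, and verify that a Jacobi-type update matches the single-domain result. This is a hypothetical stand-in, not CAPTOOL's actual output; real generated code would use MPI sends/receives at the marked exchange:

```python
import numpy as np

n = 16
u = np.sin(np.linspace(0, np.pi, n))

def step(v):                                    # interior 3-point averaging update
    w = v.copy()
    w[1:-1] = 0.25 * v[:-2] + 0.5 * v[1:-1] + 0.25 * v[2:]
    return w

ref = step(u)                                   # single-domain (serial) update

half = n // 2
left = np.empty(half + 1);  left[:-1] = u[:half]    # owned cells + 1 right ghost
right = np.empty(half + 1); right[1:] = u[half:]    # 1 left ghost + owned cells
left[-1] = right[1]                             # halo exchange ("receive" neighbour's edge)
right[0] = left[-2]
left, right = step(left), step(right)           # each "process" updates its owned cells
par = np.concatenate([left[:-1], right[1:]])    # drop ghosts and reassemble
print(np.allclose(par, ref))                    # True: decomposition is transparent
```

Recognizing that a loop nest only touches nearest neighbours, so a one-cell halo suffices, is exactly the data-dependency analysis the paper says these tools must get right.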
Universality of Schmidt decomposition and particle identity
NASA Astrophysics Data System (ADS)
Sciara, Stefania; Lo Franco, Rosario; Compagno, Giuseppe
2017-03-01
Schmidt decomposition is a widely employed tool of quantum theory which plays a key role for distinguishable particles in scenarios such as entanglement characterization, theory of measurement and state purification. Yet, its formulation for identical particles remains controversial, jeopardizing its application to analyze general many-body quantum systems. Here we prove, using a newly developed approach, a universal Schmidt decomposition which allows faithful quantification of the physical entanglement due to the identity of particles. We find that it is affected by single-particle measurement localization and state overlap. We study paradigmatic two-particle systems where identical qubits and qutrits are located in the same place or in separated places. For the case of two qutrits in the same place, we show that their entanglement behavior, whose physical interpretation is given, differs from that obtained before by different methods. Our results are generalizable to multiparticle systems and open the way for further developments in quantum information processing exploiting particle identity as a resource.
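For distinguishable particles, the standard Schmidt decomposition is simply the SVD of the state's coefficient matrix; the sketch below shows that baseline case (the point of the paper is precisely that identical particles need a more careful construction):

```python
import numpy as np

# Schmidt decomposition of a bipartite pure state via SVD.
d1, d2 = 2, 2
psi = np.zeros((d1, d2), dtype=complex)
psi[0, 0] = psi[1, 1] = 1 / np.sqrt(2)          # Bell state (|00> + |11>)/sqrt(2)

u, schmidt, vh = np.linalg.svd(psi)             # singular values = Schmidt coefficients
print(schmidt)                                  # [0.7071..., 0.7071...]
entropy = -np.sum(schmidt**2 * np.log2(schmidt**2))
print(entropy)                                  # 1.0 ebit: maximally entangled qubits
```

The columns of `u` and rows of `vh` are the local Schmidt bases, and the entanglement entropy follows directly from the squared coefficients.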
Host-induced bacterial cell wall decomposition mediates pattern-triggered immunity in Arabidopsis
Liu, Xiaokun; Grabherr, Heini M; Willmann, Roland; Kolb, Dagmar; Brunner, Frédéric; Bertsche, Ute; Kühner, Daniel; Franz-Wachtel, Mirita; Amin, Bushra; Felix, Georg; Ongena, Marc; Nürnberger, Thorsten; Gust, Andrea A
2014-01-01
Peptidoglycans (PGNs) are immunogenic bacterial surface patterns that trigger immune activation in metazoans and plants. It is generally unknown how complex bacterial structures such as PGNs are perceived by plant pattern recognition receptors (PRRs) and whether host hydrolytic activities facilitate decomposition of bacterial matrices and generation of soluble PRR ligands. Here we show that Arabidopsis thaliana, upon bacterial infection or exposure to microbial patterns, produces a metazoan lysozyme-like hydrolase (lysozyme 1, LYS1). LYS1 activity releases soluble PGN fragments from insoluble bacterial cell walls and cleavage products are able to trigger responses typically associated with plant immunity. Importantly, LYS1 mutant genotypes exhibit super-susceptibility to bacterial infections similar to that observed on PGN receptor mutants. We propose that plants employ hydrolytic activities for the decomposition of complex bacterial structures, and that soluble pattern generation might aid PRR-mediated immune activation in cell layers adjacent to infection sites. DOI: http://dx.doi.org/10.7554/eLife.01990.001
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexandrov, Boian S.; Lliev, Filip L.; Stanev, Valentin G.
This code is a toy (short) version of CODE-2016-83. From a general perspective, the code represents an unsupervised adaptive machine learning algorithm that allows efficient and high-performance de-mixing and feature extraction of a multitude of non-negative signals mixed and recorded by a network of uncorrelated sensor arrays. The code identifies the number of the mixed original signals and their locations. Further, the code also allows deciphering of signals that have been delayed with respect to the mixing process in each sensor. This code is highly customizable and can be used efficiently for fast macro-analyses of data. The code is applicable to a plethora of distinct problems: chemical decomposition, pressure transient decomposition, unknown source/signal allocation, EM signal decomposition. An additional procedure for allocation of the unknown sources is incorporated in the code.
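The kind of non-negative de-mixing described above can be illustrated with classical NMF under multiplicative updates, V ≈ W·H with sources in the rows of H and mixing weights in W. This is a toy stand-in only; the described software adds model selection, delay handling, and source localization on top of the basic non-negative factorization idea:

```python
import numpy as np

# Lee-Seung multiplicative-update NMF on noiseless synthetic mixtures.
rng = np.random.default_rng(2)
m, n, k = 8, 40, 3                              # sensors, samples, sources
W0 = rng.random((m, k))                         # true mixing weights
H0 = rng.random((k, n))                         # true non-negative sources
V = W0 @ H0                                     # recorded mixtures

W = rng.random((m, k))
H = rng.random((k, n))
eps = 1e-12                                     # guards against division by zero
err0 = np.linalg.norm(V - W @ H)
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + eps)        # updates preserve non-negativity
    W *= (V @ H.T) / (W @ H @ H.T + eps)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(err)                                      # small: the mixtures are re-fit well
```

The multiplicative form never changes the sign of an entry, which is what makes the factorization respect the physical non-negativity of the mixed signals.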
Biogeochemistry of Decomposition and Detrital Processing
NASA Astrophysics Data System (ADS)
Sanderman, J.; Amundson, R.
2003-12-01
Decomposition is a key ecological process that roughly balances net primary production in terrestrial ecosystems and is an essential process in resupplying nutrients to the plant community. Decomposition consists of three concurrent processes: comminution or fragmentation, leaching of water-soluble compounds, and microbial catabolism. Decomposition can also be viewed as a sequential process, what Eijsackers and Zehnder (1990) compare to a Russian matriochka doll. Soil macrofauna fragment and partially solubilize plant residues, facilitating establishment of a community of decomposer microorganisms. This decomposer community will gradually shift as the most easily degraded plant compounds are utilized and the more recalcitrant materials begin to accumulate. Given enough time and the proper environmental conditions, most naturally occurring compounds can be completely mineralized to inorganic forms. Simultaneously with mineralization, the process of humification acts to transform a fraction of the plant residues into stable soil organic matter (SOM) or humus. For reference, Schlesinger (1990) estimated that only ~0.7% of detritus eventually becomes stabilized into humus. Decomposition plays a key role in the cycling of most plant macro- and micronutrients and in the formation of humus. Figure 1 places the roles of detrital processing and mineralization within the context of the biogeochemical cycling of essential plant nutrients. Chapin (1991) found that while the atmosphere supplied 4% and mineral weathering supplied no nitrogen and <1% of phosphorus, internal nutrient recycling is the source for >95% of all the nitrogen and phosphorus uptake by tundra species in Barrow, Alaska. In a cool temperate forest, nutrient recycling accounted for 93%, 89%, 88%, and 65% of total sources for nitrogen, phosphorus, potassium, and calcium, respectively (Chapin, 1991). Figure 1. A decomposition-centric biogeochemical model of nutrient cycling.
Although there is significant external input (1) and output (2) from neighboring ecosystems (such as erosion), weathering of primary minerals (3), loss of secondary minerals (4), atmospheric deposition and N-fixation (5) and volatilization (6), the majority of plant-available nutrients are supplied by internal recycling through decomposition. Nutrients that are taken up by plants (7) are either consumed by fauna (8) and returned to the soil through defecation and mortality (10) or returned to the soil through litterfall and mortality (9). Detritus and humus can be immobilized into microbial biomass (11 and 13). Humus is formed by the transformation and stabilization of detrital (12) and microbial (14) compounds. During these transformations, SOM is being continually mineralized by the microorganisms (15) replenishing the inorganic nutrient pool (after Swift et al., 1979). The second major ecosystem role of decomposition is in the formation and stabilization of humus. The cycling and stabilization of SOM in the litter-soil system is presented in a conceptual model in Figure 2. In parallel with litterfall and most root turnover, detrital processing is concentrated at or near the soil surface. As labile SOM is preferentially degraded, there is a progressive shift from labile to passive SOM with increasing depth. There are three basic mechanisms for SOM accumulation in the mineral soil: bioturbation or physical mixing of the soil by burrowing animals (e.g., earthworms, gophers, etc.), in situ decomposition of roots and root exudates, and the leaching of soluble organic compounds. In the absence of bioturbation, distinct litter layers often accumulate above the mineral soil. In grasslands where the majority of net primary productivity (NPP) is allocated belowground, root inputs will dominate. In sandy soils with ample rainfall, leaching may be the major process incorporating carbon into the soil. Figure 2. Conceptual model of carbon cycling in the litter-soil system. 
In each horizon or depth increment, SOM is represented by three pools: labile SOM, slow SOM, and passive SOM. Inputs include aboveground litterfall and belowground root turnover and exudates, which will be distributed among the pools based on the biochemical nature of the material. Outputs from each pool include mineralization to CO2 (dashed lines), humification (labile→slow→passive), and downward transport due to leaching and physical mixing. Comminution by soil fauna will accelerate the decomposition process and reveal previously inaccessible materials. Soil mixing and other disturbances can also make physically protected passive SOM available to microbial attack (passive→slow). There is an extensive body of literature on the subject of decomposition, drawing from many disciplines, including ecology, soil science, microbiology, plant physiology, biochemistry, and zoology. In this chapter, we have attempted to draw information from all of these fields to present an integrated analysis of decomposition in a biogeochemical context. We begin by reviewing the composition of detrital resources and SOM (Section 8.07.2), the organisms responsible for decomposition (Section 8.07.3), and some methods for quantifying decomposition rates (Section 8.07.4). This is followed by a discussion of the mechanisms behind decomposition (Section 8.07.5), humification (Section 8.07.6), and the controls on these processes (Section 8.07.7). We conclude the chapter with a brief discussion on how current biogeochemical models incorporate this information (Section 8.07.8).
The Variability of Internal Tides in the Northern South China Sea
2013-08-27
mean N(z) profile from the climatology dataset provided by the Generalized Digital Environmental Model (GDEM) (Teague et al. 1990) (Fig. 2). Eigenmode...decomposed eigenmodes have similar magnitude. The GDEM profiles for the eigenmode decomposition are used for this analysis because the profiles from...provided by the Generalized Digital Environmental Model (GDEM) and the shading represents one standard deviation. b Vertical structures of the first
ERIC Educational Resources Information Center
Fernandez-Balboa, Juan-Miguel
1993-01-01
Secondary level physical educators must be sure to instruct their weight lifters in proper spotting and lifting procedures, because weight training carries a high risk of injury. The article explains how to check the equipment, spot properly for specific exercises, and take general safety precautions in the weight room. (SM)
NASA Astrophysics Data System (ADS)
Clayton, J. D.
2017-02-01
A theory of deformation of continuous media based on concepts from Finsler differential geometry is presented. The general theory accounts for finite deformations, nonlinear elasticity, and changes in internal state of the material, the latter represented by elements of a state vector of generalized Finsler space whose entries consist of one or more order parameter(s). Two descriptive representations of the deformation gradient are considered. The first invokes an additive decomposition and is applied to problems involving localized inelastic deformation mechanisms such as fracture. The second invokes a multiplicative decomposition and is applied to problems involving distributed deformation mechanisms such as phase transformations or twinning. Appropriate free energy functions are posited for each case, and Euler-Lagrange equations of equilibrium are derived. Solutions are obtained for specific problems of tensile fracture of an elastic cylinder and for amorphization of a crystal under spherical and uniaxial compression. The Finsler-based approach is demonstrated to be more general and potentially more physically descriptive than existing hyperelasticity models couched in Riemannian geometry or Euclidean space, without incorporation of supplementary ad hoc equations or spurious fitting parameters. Predictions for single crystals of boron carbide ceramic agree qualitatively, and in many instances quantitatively, with results from physical experiments and atomic simulations involving structural collapse and failure of the crystal along its c-axis.
Biological decomposition efficiency in different woodland soils.
Herlitzius, H
1983-03-01
The decomposition (meaning disappearance) of different leaf types and artificial leaves made from cellulose hydrate foil was studied in three forests - an alluvial forest (Ulmetum), a beech forest on limestone soil (Melico-Fagetum), and a spruce forest on soil overlying limestone bedrock. Fine, medium, and coarse mesh litter bags of special design were used to investigate the roles of abiotic factors, microorganisms, and meso- and macrofauna in effecting decomposition in the three habitats. Additionally, the experimental design was carefully arranged so as to provide information about the effects on decomposition processes of the duration of exposure and the date or moment of exposure. 1. Exposure of litter samples for 12 months showed: a) Litter enclosed in fine mesh bags was reduced by some 40-44% of the initial amount placed in each of the three forests. Most of this decomposition can be attributed to abiotic factors and microorganisms. b) Litter placed in medium mesh litter bags was reduced by ca. 60% in alluvial forest, ca. 50% in beech forest and ca. 44% in spruce forest. c) Litter enclosed in coarse mesh litter bags was reduced by 71% of the initial weights exposed in alluvial and beech forests; in the spruce forest decomposition was no greater than observed with fine and medium mesh litter bags. Clearly, in spruce forest the macrofauna has little or no part to play in effecting decomposition. 2. Sequential month by month exposure of hazel leaves and cellulose hydrate foil in coarse mesh litter bags in all three forests showed that one month of exposure led to only slight material losses; losses were smallest between March and May and largest between June and October/November. 3. Coarse mesh litter bags containing either hazel or artificial leaves of cellulose hydrate foil were exposed to natural decomposition processes in December 1977 and subsampled monthly over a period of one year; this series constituted the From-sequence of experiments. 
Each of the From-sequence samples removed was immediately replaced by a fresh litter bag which was left in place until December 1978; this series constituted the To-sequence of experiments. The results arising from the designated From- and To-sequences showed: a) During the course of one year hazel leaves decomposed completely in alluvial forest, almost completely in beech forest, but to only 50% of the initial value in spruce forest. b) Duration of exposure and not the date of exposure is the major controlling influence on decomposition in alluvial forest, a characteristic reflected in the mirror-image courses of the From- and To-sequence curves with respect to the abscissa or time axis. Conversely, the date of exposure and not the duration of exposure is the major controlling influence on decomposition in the spruce forest, a characteristic reflected in the mirror-image courses of the From- and To-sequences with respect to the ordinate or axis of percentage decomposition. c) Leaf powder amendment increased the decomposition rate of the hazel and cellulose hydrate leaves in the spruce forest but had no significant effect on their decomposition rate in alluvial and beech forests. It is concluded from this, and other evidence, that litter amendment by leaf fragments or phytophage frass in sites of low biological decomposition activity (e.g., spruce) enhances decomposition processes. d) The time course of hazel leaf decomposition in both alluvial and beech forest is sigmoidal. Three s-phases are distinguished and correspond to the activity of microflora/microfauna, mesofauna/macrofauna, and then microflora/microfauna again. In general, the sigmoidal pattern of the curve can be considered valid for all decomposition processes occurring in terrestrial situations. It is contended that no decomposition (=disappearance) curve actually follows an e-type exponential function. 
A logarithmic linear regression can be constructed from the sigmoid curve data, and although this facilitates inter-system comparisons it does not clearly express the dynamics of decomposition. 4. The course of the curve constructed from information about the standard deviations of means derived from the From- and To-sequence data does reflect the dynamics of litter decomposition. The three s-phases can be recognised, and by comparing the actual From-sequence deviation curve with a mirror inversion representation of the To-sequence curve it is possible to determine whether decomposition is primarily controlled by the duration of exposure or the date of exposure. As is the case for hazel leaf decomposition in beech forest, intermediate conditions can be readily recognised.
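The contrast the abstract draws — sigmoidal mass loss versus a single exponential — can be sketched with two minimal stand-in curves. The parameter values below are invented for illustration and are not fitted to the study's data:

```python
import math

# Two minimal litter mass-loss models: remaining fraction m(t)/m0.
def exponential_remaining(t, k=0.15):
    """Classic single-pool model: fastest loss at t = 0, then a long tail."""
    return math.exp(-k * t)

def sigmoid_remaining(t, k=0.8, t_half=6.0):
    """Logistic-type curve: slow start (microflora/microfauna), fast middle
    phase (meso-/macrofauna), slow tail - the three s-phases, qualitatively."""
    return 1.0 / (1.0 + math.exp(k * (t - t_half)))
```

The key qualitative difference: the exponential loses mass fastest immediately, while the sigmoid loses it fastest near t_half, which is what the three observed s-phases suggest.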
Oil shale combustor model developed by Greek researchers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1986-09-01
Work carried out in the Department of Chemical Engineering at the University of Thessaloniki, Thessaloniki, Greece has resulted in a model for the combustion of retorted oil shale in a fluidized bed combustor. The model is generally applicable to any hot-solids retorting process, whereby raw oil shale is retorted by mixing with a hot solids stream (usually combusted spent shale), and then the residual carbon is burned off the spent shale in a fluidized bed. Based on their modelling work, the following conclusions were drawn by the researchers. (1) For the retorted particle size distribution selected (average particle diameter 1600 microns) complete carbon conversion is feasible at high pressures (2.7 atmospheres) and over the entire temperature range studied (894 to 978 K). (2) Bubble size was found to have an important effect, especially at conditions where reaction rates are high (high temperature and pressure). (3) Carbonate decomposition increases with combustor temperature and residence time. Complete carbon conversion is feasible at high pressures (2.7 atmospheres) with less than 20 percent carbonate decomposition. (4) At the preferred combustor operating conditions (high pressure, low temperature) the main reaction is dolomite decomposition while calcite decomposition is negligible. (5) Recombination of CO2 with MgO occurs at low temperatures, high pressures, and long particle residence times.
Atomic-batched tensor decomposed two-electron repulsion integrals
NASA Astrophysics Data System (ADS)
Schmitz, Gunnar; Madsen, Niels Kristian; Christiansen, Ove
2017-04-01
We present a new integral format for 4-index electron repulsion integrals, in which several strategies like the Resolution-of-the-Identity (RI) approximation and other more general tensor-decomposition techniques are combined with an atomic batching scheme. The 3-index RI integral tensor is divided into sub-tensors defined by atom pairs on which we perform an accelerated decomposition to the canonical product (CP) format. In a first step, the RI integrals are decomposed to a high-rank CP-like format by repeated singular value decompositions followed by a rank reduction, which uses a Tucker decomposition as an intermediate step to lower the prefactor of the algorithm. After decomposing the RI sub-tensors (within the Coulomb metric), they can be reassembled to the full decomposed tensor (RC approach) or the atomic batched format can be maintained (ABC approach). In the first case, the integrals are very similar to the well-known tensor hypercontraction integral format, which gained some attraction in recent years since it allows for quartic scaling implementations of MP2 and some coupled cluster methods. On the MP2 level, the RC and ABC approaches are compared concerning efficiency and storage requirements. Furthermore, the overall accuracy of this approach is assessed. Initial test calculations show a good accuracy and that it is not limited to small systems.
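The core idea of the format — a 4-index tensor reassembled from 3-index factors, (pq|rs) ≈ Σ_P B_pq^P B_rs^P — can be illustrated with a toy numpy sketch. The eigendecomposition below is only a stand-in for an actual RI fit to an auxiliary basis, and all sizes are arbitrary:

```python
import numpy as np

n = 6                                    # number of orbitals (toy size)
rng = np.random.default_rng(0)

# Build a symmetric positive semidefinite 4-index tensor, viewed as a
# matrix over the compound indices (pq) and (rs).
A = rng.standard_normal((n * n, n * n))
M = A @ A.T
M = 0.5 * (M + M.T)

# 3-index factorization M = B^T B, here via eigendecomposition (a stand-in
# for fitting against an auxiliary basis within the Coulomb metric).
w, U = np.linalg.eigh(M)
keep = w > 1e-10
B = (U[:, keep] * np.sqrt(w[keep])).T    # shape (naux, n*n)
B3 = B.reshape(-1, n, n)                 # shape (naux, p, q)

# Reassemble the full 4-index tensor from the 3-index factors.
V = np.einsum('Ppq,Prs->pqrs', B3, B3)
err = np.abs(V.reshape(n * n, n * n) - M).max()  # tiny: factorization is exact here
```

Storage drops from O(n^4) for V to O(naux·n^2) for B3, which is the practical motivation for working with the 3-index factors (or their further CP decomposition) instead of the assembled integrals.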
NASA Astrophysics Data System (ADS)
Rai, Prashant; Sargsyan, Khachik; Najm, Habib; Hermes, Matthew R.; Hirata, So
2017-09-01
A new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss-Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm-1 or higher and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.
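A generic alternating-least-squares CP loop (not the accelerated scheme of either paper above) can be sketched as follows; shapes and names are illustrative:

```python
import numpy as np

def als_cp(T, rank, n_iter=500, seed=0):
    """Rank-`rank` canonical (CP) decomposition of a 3-way tensor by ALS:
    T[i,j,k] ≈ sum_r A[i,r] * B[j,r] * C[k,r]."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    # Mode-wise unfoldings of T (row index = the mode being solved for).
    T1 = T.reshape(I, J * K)
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)
    # Khatri-Rao (column-wise Kronecker) product of two factor matrices.
    kr = lambda X, Y: (X[:, None, :] * Y[None, :, :]).reshape(-1, rank)
    for _ in range(n_iter):
        A = T1 @ np.linalg.pinv(kr(B, C).T)   # least-squares update of A
        B = T2 @ np.linalg.pinv(kr(A, C).T)
        C = T3 @ np.linalg.pinv(kr(A, B).T)
    return A, B, C
```

Each update is a linear least-squares solve, which is why ALS needs only tensor entries (here the full tensor; in the PES application, a modest number of single-point energies) rather than derivatives.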
NASA Astrophysics Data System (ADS)
Jones, S.; Zwart, J. A.; Solomon, C.; Kelly, P. T.
2017-12-01
Current efforts to scale lake carbon biogeochemistry rely heavily on empirical observations and rarely consider physical or biological inter-lake heterogeneity that is likely to regulate terrestrial dissolved organic carbon (tDOC) decomposition in lakes. This may in part result from a traditional focus of lake ecologists on in-lake biological processes OR physical-chemical pattern across lake regions, rather than on process AND pattern across scales. To explore the relative importance of local biological processes and physical processes driven by lake hydrologic setting, we created a simple, analytical model of tDOC decomposition in lakes that focuses on the regulating roles of lake size and catchment hydrologic export. Our simplistic model can generally recreate patterns consistent with both local- and regional-scale patterns in tDOC concentration and decomposition. We also see that variation in lake hydrologic setting, including the importance of evaporation as a hydrologic export, generates significant, emergent variation in tDOC decomposition at a given hydrologic residence time, and creates patterns that have been historically attributed to variation in tDOC quality. Comparing predictions of this `biologically null model' to field observations and more biologically complex models could indicate when and where biology is likely to matter most.
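The abstract does not give the model equations, so the following is only a minimal chemostat-style stand-in (all names and the functional form are assumptions) showing how residence time and evaporative water loss could jointly set tDOC concentration in a well-mixed lake:

```python
def steady_state_tdoc(c_in, q_in, f_evap, volume, k):
    """Steady-state tDOC concentration and fraction decomposed in a
    well-mixed lake with first-order decay (rate k, 1/time).
    Water leaves by outflow and by evaporation; evaporation removes water
    but not carbon, so it raises concentration at a given residence time.
    c_in: inflow tDOC concentration; q_in: inflow rate (volume/time);
    f_evap: fraction of inflow lost to evaporation."""
    q_out = (1.0 - f_evap) * q_in              # hydrologic carbon export
    # Mass balance at steady state: q_in*c_in = q_out*c + k*volume*c
    c = c_in * q_in / (q_out + k * volume)
    frac_decomposed = k * volume * c / (q_in * c_in)
    return c, frac_decomposed
```

Even this null model reproduces the qualitative point in the abstract: two lakes with the same residence time volume/q_in but different f_evap decompose different fractions of their tDOC load, with no difference in tDOC "quality" required.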
NASA Technical Reports Server (NTRS)
Cowen, Jonathan E.; Hepp, Aloysius F.; Duffy, Norman V.; Jose, Melanie J.; Choi, D. B.; Brothers, Scott M.; Baird, Michael F.; Tomsik, Thomas M.; Duraj, Stan A.; Williams, Jennifer N.;
2009-01-01
We describe several related studies where simple iron, nickel, and cobalt complexes were prepared, decomposed, and characterized for aeronautics (Fischer-Tropsch catalysts) and space (high-fidelity lunar regolith simulant additives) applications. We describe the synthesis and decomposition of several new nickel dithiocarbamate complexes. Decomposition resulted in a somewhat complicated product mix with NiS predominating. The thermogravimetric analysis of fifteen tris(diorganodithiocarbamato)iron(III) complexes has been investigated. Each undergoes substantial mass loss upon pyrolysis in a nitrogen atmosphere between 195 and 370 °C, with major mass losses occurring between 279 and 324 °C. Steric repulsion between organic substituents generally decreased the decomposition temperature. The product of the pyrolysis was not well defined, but usually consistent with being either FeS or Fe2S3 or a combination of these. Iron nanoparticles were grown in a silica matrix with a long-term goal of introducing native iron into a commercial lunar dust simulant in order to more closely simulate actual lunar regolith. This was also one goal of the iron and nickel sulfide studies. Finally, cobalt nanoparticle synthesis is being studied in order to develop alternatives to crude processing of cobalt salts with ceramic supports for Fischer-Tropsch synthesis.
OH maser proper motions in Cepheus A
NASA Astrophysics Data System (ADS)
Migenes, V.; Cohen, R. J.; Brebner, G. C.
1992-02-01
MERLIN measurements made between 1982 and 1989 reveal proper motions of OH masers in the source Cepheus A. The proper motions are typically a few milliarcsec per year, and are mainly directed away from the central H II regions. Statistical analysis of the data suggests an expansion time-scale of some 300 yr. The distance of the source implied by the proper motions is 320+140/-80 pc, assuming that the expansion is isotropic. The proper motions can be reconciled with the larger distance of 730 pc which is generally accepted, provided that the masers are moving at large angles to the line of sight. The expansion time-scale agrees with that of the magnetic field decay recently reported by Cohen et al. (1990).
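The distance estimate rests on the standard expansion-parallax relation, d [pc] = v_t [km/s] / (4.74 × μ [arcsec/yr]). A quick sketch (the velocity and proper motion below are illustrative values, not numbers from the paper):

```python
def expansion_parallax_pc(v_kms, mu_mas_per_yr):
    """Kinematic (expansion-parallax) distance: a transverse velocity
    v_t [km/s] observed as a proper motion mu [mas/yr] implies
    d [pc] = v_t / (4.74 * mu[arcsec/yr])."""
    return v_kms / (4.74 * mu_mas_per_yr * 1e-3)

# Illustrative: an expansion speed of ~10 km/s at ~6.6 mas/yr gives ~320 pc.
d = expansion_parallax_pc(10.0, 6.6)
```

This also shows why motion at large angles to the line of sight reconciles the larger 730 pc distance: for a fixed measured μ, a larger true space velocity (hence larger v_t) implies a proportionally larger d.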
26 CFR 1.1016-2 - Items properly chargeable to capital account.
Code of Federal Regulations, 2010 CFR
2010-04-01
....1016-2 Section 1.1016-2 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Basis Rules of General Application § 1.1016-2 Items properly... for $10,000. He subsequently expended $6,000 for improvements. Disregarding, for the purpose of this...
Global Existence Results for Viscoplasticity at Finite Strain
NASA Astrophysics Data System (ADS)
Mielke, Alexander; Rossi, Riccarda; Savaré, Giuseppe
2018-01-01
We study a model for rate-dependent gradient plasticity at finite strain based on the multiplicative decomposition of the strain tensor, and investigate the existence of global-in-time solutions to the related PDE system. We reveal its underlying structure as a generalized gradient system, where the driving energy functional is highly nonconvex and features the geometric nonlinearities related to finite-strain elasticity as well as the multiplicative decomposition of finite-strain plasticity. Moreover, the dissipation potential depends on the left-invariant plastic rate, and thus depends on the plastic state variable. The existence theory is developed for a class of abstract, nonsmooth, and nonconvex gradient systems, for which we introduce suitable notions of solutions, namely energy-dissipation-balance and energy-dissipation-inequality solutions. Hence, we resort to the toolbox of the direct method of the calculus of variations to check that the specific energy and dissipation functionals for our viscoplastic models comply with the conditions of the general theory.
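The multiplicative split and the structure of the driving energy can be written schematically as follows (this is the generic form of finite-strain gradient plasticity, not the authors' exact functional; W_el, W_hard and the exponent r are placeholders):

```latex
% Multiplicative decomposition of the deformation gradient into
% elastic and plastic parts, and a schematic driving energy:
F = \nabla\varphi = F_{\mathrm{el}}\, F_{\mathrm{pl}},
\qquad
\mathcal{E}(\varphi, F_{\mathrm{pl}})
  = \int_{\Omega} W_{\mathrm{el}}\!\bigl(\nabla\varphi\, F_{\mathrm{pl}}^{-1}\bigr)
  + W_{\mathrm{hard}}(F_{\mathrm{pl}})
  + \tfrac{\mu}{r}\,\lvert \nabla F_{\mathrm{pl}} \rvert^{r}\,\mathrm{d}x .
```

The nonconvexity mentioned in the abstract enters through W_el evaluated on the product ∇φ F_pl^{-1}, while the gradient term in F_pl is what makes the model a *gradient* plasticity theory; the dissipation potential then acts on a plastic rate of the form Ḟ_pl F_pl^{-1}, giving the state-dependence noted above.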
Complex Chern-Simons Theory at Level k via the 3d-3d Correspondence
NASA Astrophysics Data System (ADS)
Dimofte, Tudor
2015-10-01
We use the 3d-3d correspondence together with the DGG construction of theories T n [ M] labelled by 3-manifolds M to define a non-perturbative state-integral model for Chern-Simons theory at any level k, based on ideal triangulations. The resulting partition functions generalize a widely studied k = 1 state-integral, as well as the 3d index, which is k = 0. The Chern-Simons partition functions correspond to partition functions of T n [ M] on squashed lens spaces L( k, 1). At any k, they admit a holomorphic-antiholomorphic factorization, corresponding to the decomposition of L( k, 1) into two solid tori, and the associated holomorphic block decomposition of the partition functions of T n [ M]. A generalization to L( k, p) is also presented. Convergence of the state integrals, for any k, requires triangulations to admit a positive angle structure; we propose that this is also necessary for the DGG gauge theory T n [ M] to flow to a desired IR SCFT.
A general melt-injection-decomposition route to oriented metal oxide nanowire arrays
NASA Astrophysics Data System (ADS)
Han, Dongqiang; Zhang, Xinwei; Hua, Zhenghe; Yang, Shaoguang
2016-12-01
In this manuscript, a general melt-injection-decomposition (MID) route has been proposed and realized for the fabrication of oriented metal oxide nanowire arrays. Nitrate was used as the starting material, which was injected into the nanopores of the anodic aluminum oxide (AAO) membrane through capillary action in its liquid state. At higher temperature, the nitrate decomposed into the corresponding metal oxide within the nanopores of the AAO membrane. Oriented metal oxide nanowire arrays were formed within the AAO membrane as a result of the confinement of the nanopores. Four kinds of metal oxide (CuO, Mn2O3, Co3O4 and Cr2O3) nanowire arrays are presented here as examples fabricated by this newly developed process. X-ray diffraction, scanning electron microscopy and transmission electron microscopy studies showed clear evidence of the formation of the oriented metal oxide nanowire arrays. The formation mechanism of the metal oxide nanowire arrays is discussed based on thermogravimetry and differential thermal analysis measurements.
NASA Technical Reports Server (NTRS)
Wu, M. F.; Geller, M. A.; Olson, J. G.; Gelman, M. E.
1984-01-01
This report presents four-year averages of monthly mean Northern Hemisphere general circulation statistics for the period from 1 December 1978 through 30 November 1982. Computations start with daily maps of temperature for 18 pressure levels between 1000 and 0.4 mb that were supplied by NOAA/NMC. Geopotential height and geostrophic wind are constructed using the hydrostatic and geostrophic formulae. Fields presented in this report are zonally averaged temperature, mean zonal wind, and amplitude and phase of the planetary waves in geopotential height with zonal wavenumbers 1-3. The northward fluxes of heat and eastward momentum by the standing and transient eddies along with their wavenumber decomposition and Eliassen-Palm flux propagation vectors and divergences by the standing and transient eddies along with their wavenumber decomposition are also given. Large annual and interannual variations are found in each quantity especially in the stratosphere in accordance with the changes in the planetary wave activity. The results are shown both in graphic and tabular form.
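The "geostrophic formulae" step — wind diagnosed from geopotential height — is standard: u_g = -(g/f) ∂Z/∂y, v_g = (g/f) ∂Z/∂x. A minimal sketch on a latitude-longitude grid (the grid and field are synthetic, not the report's data):

```python
import numpy as np

g = 9.80665        # gravitational acceleration, m/s^2
omega = 7.292e-5   # Earth's rotation rate, 1/s
a_earth = 6.371e6  # Earth radius, m

def geostrophic_wind(Z, lat_deg, lon_deg):
    """Geostrophic wind (u_g, v_g) from geopotential height Z[lat, lon] (m).
    u_g = -(g/f) dZ/dy, v_g = (g/f) dZ/dx, with f = 2*omega*sin(lat)."""
    lat = np.deg2rad(lat_deg)
    lon = np.deg2rad(lon_deg)
    f = 2.0 * omega * np.sin(lat)[:, None]               # Coriolis parameter
    dZdy = np.gradient(Z, a_earth * lat, axis=0)         # meridional derivative
    dZdx = np.gradient(Z, lon, axis=1) / (a_earth * np.cos(lat)[:, None])
    return -g / f * dZdy, g / f * dZdx
```

Note that f vanishes at the equator, so this diagnostic is only meaningful away from low latitudes, which is consistent with a Northern Hemisphere extratropical analysis.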
NASA Astrophysics Data System (ADS)
Günther, Uwe; Kuzhel, Sergii
2010-10-01
Gauged PT quantum mechanics (PTQM) and corresponding Krein space setups are studied. For models with constant non-Abelian gauge potentials and extended parity inversions, compact and noncompact Lie group components are analyzed via Cartan decompositions. A Lie-triple structure is found, and an interpretation as a PT-symmetrically generalized Jaynes-Cummings model is possible, with close relation to recently studied cavity QED setups with transmon states in multilevel artificial atoms. For models with Abelian gauge potentials a hidden Clifford algebra structure is found and used to obtain the fundamental symmetry of Krein space-related J-self-adjoint extensions for PTQM setups with ultra-localized potentials.
Development of a two-wavelength IR laser absorption diagnostic for propene and ethylene
NASA Astrophysics Data System (ADS)
Parise, T. C.; Davidson, D. F.; Hanson, R. K.
2018-05-01
A two-wavelength infrared laser absorption diagnostic for non-intrusive, simultaneous quantitative measurement of propene and ethylene was developed. To this end, measurements of absorption cross sections of propene and potential interfering species at 10.958 µm were acquired at high temperatures. When used in conjunction with existing absorption cross-section measurements of ethylene and other species at 10.532 µm, a two-wavelength diagnostic was developed to simultaneously measure propene and ethylene, the two small alkenes found to generally dominate the final decomposition products of many fuel hydrocarbon pyrolysis systems. Measurements of these two species are demonstrated using this two-wavelength diagnostic scheme for propene decomposition between 1360 and 1710 K.
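The two-wavelength scheme reduces to inverting Beer-Lambert absorbances at two wavelengths for two species, α_i = L Σ_j σ_ij n_j. A minimal sketch (the cross-section values below are invented placeholders, not the measured data):

```python
import numpy as np

def two_species_concentrations(alpha, sigma, path_length):
    """Solve the 2x2 Beer-Lambert system alpha_i = L * sum_j sigma[i,j] * n[j]
    for the number densities n. alpha: absorbances at the two wavelengths;
    sigma: cross sections, rows = wavelengths, columns = species;
    path_length: optical path L (units must be consistent with sigma)."""
    return np.linalg.solve(path_length * np.asarray(sigma), np.asarray(alpha))

# Hypothetical demo: construct absorbances from known n, then recover n.
sigma = [[2.0, 0.5],    # wavelength 1: (propene-like, ethylene-like)
         [0.4, 1.5]]    # wavelength 2
L = 10.0
n_true = np.array([0.3, 0.7])
alpha = L * np.array(sigma) @ n_true
n_est = two_species_concentrations(alpha, sigma, L)   # recovers n_true
```

The inversion is well conditioned only when the cross-section matrix is far from singular, which is why one wavelength is chosen where propene dominates and the other where ethylene dominates.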
Error analysis of multipoint flux domain decomposition methods for evolutionary diffusion problems
NASA Astrophysics Data System (ADS)
Arrarás, A.; Portero, L.; Yotov, I.
2014-01-01
We study space and time discretizations for mixed formulations of parabolic problems. The spatial approximation is based on the multipoint flux mixed finite element method, which reduces to an efficient cell-centered pressure system on general grids, including triangles, quadrilaterals, tetrahedra, and hexahedra. The time integration is performed by using a domain decomposition time-splitting technique combined with multiterm fractional step diagonally implicit Runge-Kutta methods. The resulting scheme is unconditionally stable and computationally efficient, as it reduces the global system to a collection of uncoupled subdomain problems that can be solved in parallel without the need for Schwarz-type iteration. Convergence analysis for both the semidiscrete and fully discrete schemes is presented.
NASA Astrophysics Data System (ADS)
Cortés–Vega, Luis A.
2017-12-01
In this paper, we consider modular multiplicative inverse operators (MMIOs) of the form J_{m+n} : (ℤ/(m+n)ℤ)* → ℤ/(m+n)ℤ, J_{m+n}(a) = a^{-1}. A general method to decompose J_{m+n}(·) over the group of units (ℤ/(m+n)ℤ)* is derived. As a result, an interesting decomposition law for these operators over (ℤ/(m+n)ℤ)* is established. Numerical examples illustrating the new results are given. This complements some recent results obtained by the author for MMIOs defined over groups of units of the form (ℤ/ϱℤ)* with ϱ = m × n > 2.
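The operator itself, a ↦ a^{-1} mod m, is classically computed with the extended Euclidean algorithm; the sketch below shows that baseline computation, not the paper's decomposition method:

```python
def mod_inverse(a, m):
    """Modular multiplicative inverse a^{-1} mod m via the extended
    Euclidean algorithm. Defined only for units, i.e. gcd(a, m) == 1."""
    r0, r1 = m, a % m        # remainder sequence
    s0, s1 = 0, 1            # coefficients of a: r_k = s_k*a (mod m)
    while r1:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        s0, s1 = s1, s0 - q * s1
    if r0 != 1:
        raise ValueError("a is not a unit mod m")
    return s0 % m
```

For example, mod_inverse(7, 40) returns 23, since 7 × 23 = 161 ≡ 1 (mod 40). (Python 3.8+ also exposes this as pow(a, -1, m).)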
On the solutions of fractional order of evolution equations
NASA Astrophysics Data System (ADS)
Morales-Delgado, V. F.; Taneco-Hernández, M. A.; Gómez-Aguilar, J. F.
2017-01-01
In this paper we present a discussion of generalized Cauchy problems in a diffusion wave process, we consider bi-fractional-order evolution equations in the Riemann-Liouville, Liouville-Caputo, and Caputo-Fabrizio sense. Through Fourier transforms and Laplace transform we derive closed-form solutions to the Cauchy problems mentioned above. Similarly, we establish fundamental solutions. Finally, we give an application of the above results to the determination of decompositions of Dirac type for bi-fractional-order equations and write a formula for the moments for the fractional vibration of a beam equation. This type of decomposition allows us to speak of internal degrees of freedom in the vibration of a beam equation.
Liu, Zhengyan; Mao, Xianqiang; Song, Peng
2017-01-01
Temporal index decomposition analysis and spatial index decomposition analysis were applied to understand the driving forces of the emissions embodied in China’s exports and net exports during 2002–2011, respectively. The accumulated emissions embodied in exports accounted for approximately 30% of the total emissions in China; although the contribution of the sectoral total emissions intensity (technique effect) declined, the scale effect was largely responsible for the mounting emissions associated with export, and the composition effect played a largely insignificant role. Calculations of the emissions embodied in net exports suggest that China is generally in an environmentally inferior position compared with its major trade partners. The differences in the economy-wide emission intensities between China and its major trade partners were the biggest contribution to this reality, and the trade balance effect played a less important role. However, a lower degree of specialization in pollution intensive products in exports than in imports helped to reduce slightly the emissions embodied in net exports. The temporal index decomposition analysis results suggest that China should take effective measures to optimize export and supply-side structure and reduce the total emissions intensity. According to spatial index decomposition analysis, it is suggested that a more aggressive import policy was useful for curbing domestic and global emissions, and the transfer of advanced production technologies and emission control technologies from developed to developing countries should be a compulsory global environmental policy option to mitigate the possible leakage of pollution emissions caused by international trade. PMID:28441399
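Index decomposition analyses of this kind are commonly carried out with the logarithmic mean Divisia index (LMDI); the abstract does not name the exact variant used, so the following is a generic additive LMDI-I sketch separating the scale, composition, and technique effects for emissions E = Σ_i Q · S_i · I_i:

```python
import math

def logmean(a, b):
    """Logarithmic mean; the LMDI weight. Requires a, b > 0."""
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

def lmdi_effects(q0, s0, i0, q1, s1, i1):
    """Additive LMDI-I decomposition of the change in E = sum_i Q*S_i*I_i
    between period 0 and period 1 into scale, composition (structure),
    and intensity (technique) effects.
    q: total activity; s: sectoral shares; i: sectoral emission intensities."""
    e0 = [q0 * a * b for a, b in zip(s0, i0)]
    e1 = [q1 * a * b for a, b in zip(s1, i1)]
    w = [logmean(x, y) for x, y in zip(e1, e0)]          # per-sector weights
    scale = sum(wi * math.log(q1 / q0) for wi in w)
    comp = sum(wi * math.log(a / b) for wi, a, b in zip(w, s1, s0))
    tech = sum(wi * math.log(a / b) for wi, a, b in zip(w, i1, i0))
    return scale, comp, tech
```

A useful property of LMDI-I is that the three effects sum exactly to the total emission change, with no residual, which is what makes statements like "the scale effect was largely responsible" well defined.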
Empirical projection-based basis-component decomposition method
NASA Astrophysics Data System (ADS)
Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland
2009-02-01
Advances in the development of semiconductor based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which in addition to the conventional approach of Alvarez and Macovski a third basis component is introduced, e.g., a gadolinium based CT contrast material. After the decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood-function of the measurements. This procedure is time consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector. Therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al. to multi-energy, photon-counting CT raw data. Instead of working in the image-domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line-integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach considering image noise and image bias (artifacts) and see that only moderate noise increase is to be expected for small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.
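The basis-component idea can be illustrated with a deliberately simplified sketch: assume a linearized model in which the log-attenuation measured in each energy bin is a weighted sum of the basis-material line integrals, and invert it per ray. The coefficient matrix below is hypothetical, and the empirical method in the abstract uses a more general spectral parameterization rather than this linear model.

```python
import numpy as np

# Hypothetical effective basis attenuation coefficients for three energy
# bins (rows) x three basis materials (columns: photoelectric, Compton,
# gadolinium contrast). Values are illustrative only.
M = np.array([[0.40, 0.18, 2.1],
              [0.22, 0.17, 1.3],
              [0.12, 0.15, 0.6]])

# True basis line integrals along one ray (illustrative).
a_true = np.array([1.0, 8.0, 0.05])

# Noiseless measured log-attenuation per energy bin under the linear model.
logatt = M @ a_true

# Projection-domain decomposition: solve the per-ray linear system.
a_est, *_ = np.linalg.lstsq(M, logatt, rcond=None)
print(np.allclose(a_est, a_true))
```

With three energy windows and three basis materials the system is square, which is why at least three bins are needed for the extended three-component decomposition.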
Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition
NASA Technical Reports Server (NTRS)
Zheng, Jason Xin; Nguyen, Kayla; He, Yutao
2010-01-01
Multirate (decimation/interpolation) filters are among the essential signal processing components in spaceborne instruments where Finite Impulse Response (FIR) filters are often used to minimize nonlinear group delay and finite-precision effects. Cascaded (multi-stage) designs of Multi-Rate FIR (MRFIR) filters are further used for large rate change ratio, in order to lower the required throughput while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this paper, an alternative representation and implementation technique, called TD-MRFIR (Thread Decomposition MRFIR), is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. Each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. The technical details of TD-MRFIR will be explained, first showing its applicability to the implementation of downsampling, upsampling, and resampling FIR filters, and then describing a general strategy to optimally allocate the number of filter taps. A particular FPGA design of multi-stage TD-MRFIR for the L-band radar of NASA's SMAP (Soil Moisture Active Passive) instrument is demonstrated; and its implementation results in several targeted FPGA devices are summarized in terms of the functional (bit width, fixed-point error) and performance (time closure, resource usage, and power estimation) parameters.
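The thread view can be sketched in a few lines: each decimated output is an independent finite convolution (one "thread"), so only the needed outputs are ever computed, and the result matches filtering at the full rate followed by downsampling. This is a generic illustration, not the TD-MRFIR FPGA mapping itself.

```python
import numpy as np

def decimating_fir_threads(x, h, M):
    """Compute a decimate-by-M FIR output one 'thread' at a time: each
    output sample is an independent dot product over a window of the
    input, so only the needed outputs are computed."""
    L = len(h)
    y = []
    for n in range(L - 1, len(x), M):          # output positions, step M
        window = x[n - L + 1:n + 1][::-1]      # most recent L samples
        y.append(np.dot(h, window))            # one thread = one convolution
    return np.array(y)

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
h = np.ones(4) / 4.0                           # toy 4-tap averaging filter
M = 4

full = np.convolve(x, h, mode="full")[:len(x)]  # filter at the full rate...
ref = full[3::M]                                # ...then keep every Mth output
print(np.allclose(decimating_fir_threads(x, h, M), ref))
```

The thread form makes the concurrency explicit: every iteration of the loop is independent, which is what allows a hardware implementation to allocate taps across parallel computation units.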
NASA Astrophysics Data System (ADS)
Al Roumi, Fosca; Buchert, Thomas; Wiegand, Alexander
2017-12-01
The relativistic generalization of the Newtonian Lagrangian perturbation theory is investigated. In previous works, the perturbation and solution schemes that are generated by the spatially projected gravitoelectric part of the Weyl tensor were given to any order of the perturbations, together with extensions and applications for accessing the nonperturbative regime. Here we discuss the general first-order scheme within the Cartan formalism in more detail, concentrating on gravitational wave propagation in matter. We provide master equations for all parts of Lagrangian-linearized perturbations propagating in the perturbed spacetime, and we outline the solution procedure that allows one to find general solutions. Particular emphasis is given to global properties of the Lagrangian perturbation fields by employing results of Hodge-de Rham theory. We discuss how the Hodge decomposition relates to the standard scalar-vector-tensor decomposition. Finally, we demonstrate that we obtain the known linear perturbation solutions of the standard relativistic perturbation scheme by performing two steps: first, by restricting our solutions to perturbations that propagate on a flat unperturbed background spacetime and, second, by transforming to Eulerian background coordinates with truncation of nonlinear terms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Xue; Niu, Tianye; Zhu, Lei, E-mail: leizhu@gatech.edu
2014-05-15
Purpose: Dual-energy CT (DECT) is being increasingly used for its capability of material decomposition and energy-selective imaging. A generic problem of DECT, however, is that the decomposition process is unstable in the sense that the relative magnitude of decomposed signals is reduced due to signal cancellation while the image noise is accumulating from the two CT images of independent scans. Direct image decomposition, therefore, leads to severe degradation of signal-to-noise ratio on the resultant images. Existing noise suppression techniques are typically implemented in DECT with the procedures of reconstruction and decomposition performed independently, which do not explore the statistical properties of decomposed images during the reconstruction for noise reduction. In this work, the authors propose an iterative approach that combines the reconstruction and the signal decomposition procedures to minimize the DECT image noise without noticeable loss of resolution. Methods: The proposed algorithm is formulated as an optimization problem, which balances the data fidelity and total variation of decomposed images in one framework, and the decomposition step is carried out iteratively together with reconstruction. The noise in the CT images from the proposed algorithm becomes well correlated even though the noise of the raw projections is independent on the two CT scans. Due to this feature, the proposed algorithm avoids noise accumulation during the decomposition process. The authors evaluate the method performance on noise suppression and spatial resolution using phantom studies and compare the algorithm with conventional denoising approaches as well as combined iterative reconstruction methods with different forms of regularization.
Results: On the Catphan©600 phantom, the proposed method outperforms the existing denoising methods on preserving spatial resolution at the same level of noise suppression, i.e., a reduction of noise standard deviation by one order of magnitude. This improvement is mainly attributed to the high noise correlation in the CT images reconstructed by the proposed algorithm. Iterative reconstruction using different regularization, including quadratic or q-generalized Gaussian Markov random field regularization, achieves similar noise suppression from high noise correlation. However, the proposed TV regularization obtains a better edge-preserving performance. Studies of electron density measurement also show that our method reduces the average estimation error from 9.5% to 7.1%. On the anthropomorphic head phantom, the proposed method suppresses the noise standard deviation of the decomposed images by a factor of ∼14 without blurring the fine structures in the sinus area. Conclusions: The authors propose a practical method for DECT imaging reconstruction, which combines the image reconstruction and material decomposition into one optimization framework. Compared to the existing approaches, our method achieves a superior performance on DECT imaging with respect to decomposition accuracy, noise reduction, and spatial resolution.
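The combined framework can be written schematically (symbols assumed here, not the paper's exact notation) as one optimization over the two basis images x_1, x_2:

```latex
\min_{x_1,\,x_2}\ \sum_{k=1}^{2} \bigl\| A x_k - p_k \bigr\|_2^2
\;+\; \lambda \bigl( \mathrm{TV}(x_1) + \mathrm{TV}(x_2) \bigr),
```

where A is the forward projection operator, p_k the energy-specific projection data, and λ weights the total-variation penalty against data fidelity; carrying out the decomposition inside this single objective, rather than after reconstruction, is what keeps the noise in the two reconstructed images correlated.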
Mathematical model of compact type evaporator
NASA Astrophysics Data System (ADS)
Borovička, Martin; Hyhlík, Tomáš
2018-06-01
In this paper, the development of a mathematical model for an evaporator used in heat pump circuits is covered, with focus on the air dehumidification application. The main target of this ad-hoc numerical model is to simulate heat and mass transfer in the evaporator for prescribed inlet conditions and different geometrical parameters. A simplified 2D mathematical model is developed in MATLAB. Solvers for multiple heat and mass transfer problems - plate surface temperature, condensate film temperature, local heat and mass transfer coefficients, refrigerant temperature distribution, humid air enthalpy change - are included as subprocedures of this model. An automatic data transfer procedure is developed in order to use the results of the MATLAB model in a more complex simulation within a commercial CFD code. In the end, the Proper Orthogonal Decomposition (POD) method is introduced and implemented into the MATLAB model.
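POD of snapshot data reduces to an SVD of the snapshot matrix; a minimal numpy sketch (with synthetic rank-2 data standing in for the evaporator model output) is:

```python
import numpy as np

# Synthetic snapshot matrix: each column is one sampled spatial field,
# built from two spatial "modes" (a stand-in for evaporator model output).
x = np.linspace(0.0, 1.0, 50)
t = np.linspace(0.0, 1.0, 20)
snapshots = (np.outer(np.sin(np.pi * x), np.cos(2 * np.pi * t))
             + 0.1 * np.outer(np.sin(3 * np.pi * x), np.sin(2 * np.pi * t)))

# POD: the left singular vectors are the energy-ranked spatial modes.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = s**2 / np.sum(s**2)

# Two modes capture essentially all the energy of this rank-2 data.
print(energy[:2].sum() > 0.999)
```

Truncating to the first few columns of U gives the reduced basis onto which the full model can be projected.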
Method and apparatus for automatically detecting patterns in digital point-ordered signals
Brudnoy, David M.
1998-01-01
The present invention is a method and system for detecting a physical feature of a test piece by detecting a pattern in a signal representing data from inspection of the test piece. The pattern is detected by automated additive decomposition of a digital point-ordered signal which represents the data. The present invention can properly handle a non-periodic signal. A physical parameter of the test piece is measured. A digital point-ordered signal representative of the measured physical parameter is generated. The digital point-ordered signal is decomposed into a baseline signal, a background noise signal, and a peaks/troughs signal. The peaks/troughs from the peaks/troughs signal are located and peaks/troughs information indicating the physical feature of the test piece is output.
Method and apparatus for automatically detecting patterns in digital point-ordered signals
Brudnoy, D.M.
1998-10-20
The present invention is a method and system for detecting a physical feature of a test piece by detecting a pattern in a signal representing data from inspection of the test piece. The pattern is detected by automated additive decomposition of a digital point-ordered signal which represents the data. The present invention can properly handle a non-periodic signal. A physical parameter of the test piece is measured. A digital point-ordered signal representative of the measured physical parameter is generated. The digital point-ordered signal is decomposed into a baseline signal, a background noise signal, and a peaks/troughs signal. The peaks/troughs from the peaks/troughs signal are located and peaks/troughs information indicating the physical feature of the test piece is output. 14 figs.
Coherent field propagation between tilted planes.
Stock, Johannes; Worku, Norman Girma; Gross, Herbert
2017-10-01
Propagating electromagnetic light fields between nonparallel planes is of special importance, e.g., within the design of novel computer-generated holograms or the simulation of optical systems. In contrast to the extensively discussed evaluation between parallel planes, the diffraction-based propagation of light onto a tilted plane is more burdensome, since discrete fast Fourier transforms cannot be applied directly. In this work, we propose a quasi-fast algorithm (O(N³ log N)) that deals with this problem. Based on a proper decomposition into three rotations, the vectorial field distribution is calculated on a tilted plane using the spectrum of plane waves. The algorithm works on equidistant grids, so neither nonuniform Fourier transforms nor an explicit complex interpolation is necessary. The proposed algorithm is discussed in detail and applied to several examples of practical interest.
State estimation of spatio-temporal phenomena
NASA Astrophysics Data System (ADS)
Yu, Dan
This dissertation addresses the state estimation problem of spatio-temporal phenomena which can be modeled by partial differential equations (PDEs), such as pollutant dispersion in the atmosphere. After discretizing the PDE, the dynamical system has a large number of degrees of freedom (DOF). State estimation using a Kalman Filter (KF) is computationally intractable, and hence, a reduced order model (ROM) needs to be constructed first. Moreover, the nonlinear terms, external disturbances or unknown boundary conditions can be modeled as unknown inputs, which leads to an unknown input filtering problem. Furthermore, the performance of the KF could be improved by placing sensors at feasible locations. Therefore, the sensor scheduling problem of placing multiple mobile sensors is of interest. The first part of the dissertation focuses on model reduction for large scale systems with a large number of inputs/outputs. A commonly used model reduction algorithm, the balanced proper orthogonal decomposition (BPOD) algorithm, is not computationally tractable for large systems with a large number of inputs/outputs. Inspired by the BPOD and randomized algorithms, we propose a randomized proper orthogonal decomposition (RPOD) algorithm and a computationally optimal RPOD (RPOD*) algorithm, which construct an ROM to capture the input-output behaviour of the full order model while reducing the computational cost of BPOD by orders of magnitude. It is demonstrated that the proposed RPOD* algorithm can construct the ROM in real time, and the performance of the proposed algorithms is demonstrated on different advection-diffusion equations. Next, we consider the state estimation problem of linear discrete-time systems with unknown inputs which can be treated as a wide-sense stationary process with rational power spectral density, while no other prior information needs to be known.
We propose an autoregressive (AR) model based unknown input realization technique which allows us to recover the input statistics from the output data by solving an appropriate least squares problem, fit an AR model to the recovered input statistics, and construct an innovations model of the unknown inputs using the eigensystem realization algorithm. The proposed algorithm is shown to outperform the augmented two-stage Kalman Filter (ASKF) and the unbiased minimum-variance (UMV) algorithm in several examples. Finally, we propose a framework to place multiple mobile sensors to optimize the long-term performance of the KF in the estimation of the state of a PDE. The major challenges are that placing multiple sensors is an NP-hard problem, and the optimization problem is non-convex in general. In this dissertation, we first construct an ROM using the RPOD* algorithm, and then reduce the feasible sensor locations to a subset using the ROM. The Information Space Receding Horizon Control (I-RHC) approach and a modified Monte Carlo Tree Search (MCTS) approach are applied to solve the sensor scheduling problem using the subset. Various applications have been provided to demonstrate the performance of the proposed approach.
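The AR-fitting step can be sketched with the Yule-Walker equations; the generic estimator below (not the dissertation's full realization pipeline) recovers AR coefficients from sample autocovariances of a series.

```python
import numpy as np

def yule_walker(x, order):
    """Fit AR(order) coefficients via the Yule-Walker equations:
    solve R a = r using sample autocovariances of the series."""
    x = np.asarray(x) - np.mean(x)
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

# Simulate a known stationary AR(2) process and recover its coefficients.
rng = np.random.default_rng(1)
a_true = np.array([0.6, -0.3])
x = np.zeros(20000)
for i in range(2, len(x)):
    x[i] = a_true[0] * x[i - 1] + a_true[1] * x[i - 2] + rng.standard_normal()

a_hat = yule_walker(x, 2)
print(np.allclose(a_hat, a_true, atol=0.05))
```

With a long enough record the Yule-Walker estimates converge to the true coefficients, which is what makes fitting an AR model to recovered input statistics a well-posed step.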
Fuels and Lubricants. Selecting and Storing.
ERIC Educational Resources Information Center
Parady, W. Harold; Colvin, Thomas S.
The manual presents basic information for the person who plans to operate or service tractors, trucks, industrial engines, and automobiles. It tells how to select the proper fuels and lubricants and how to store them properly. Although there are no prerequisites to the study of the text, a general knowledge of engines and mobile-type vehicles is…
45 CFR 73.735-401 - General provisions.
Code of Federal Regulations, 2011 CFR
2011-10-01
... each just financial obligation in a proper and timely manner. A “just financial obligation” is one... employees to be a matter of their own concern. However, employees shall not by failure to meet their just..., or local taxes. “In a proper and timely manner” is a manner which the Department determines does not...
19 CFR 177.11 - Requests for advice by field offices.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Requests for advice by field offices. 177.11... advice by field offices. (a) Generally. Advice or guidance as to the interpretation or proper application... prospective, current, or completed. Advice as to the proper application of the Customs and related laws to a...
19 CFR 177.11 - Requests for advice by field offices.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 19 Customs Duties 2 2011-04-01 2011-04-01 false Requests for advice by field offices. 177.11... advice by field offices. (a) Generally. Advice or guidance as to the interpretation or proper application... prospective, current, or completed. Advice as to the proper application of the Customs and related laws to a...
45 CFR 153.350 - Risk adjustment data validation standards.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 45 Public Welfare 1 2013-10-01 2013-10-01 false Risk adjustment data validation standards. 153.350... validation standards. (a) General requirement. The State, or HHS on behalf of the State, must ensure proper implementation of any risk adjustment software and ensure proper validation of a statistically valid sample of...
45 CFR 153.350 - Risk adjustment data validation standards.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 45 Public Welfare 1 2014-10-01 2014-10-01 false Risk adjustment data validation standards. 153.350... validation standards. (a) General requirement. The State, or HHS on behalf of the State, must ensure proper implementation of any risk adjustment software and ensure proper validation of a statistically valid sample of...
NASA Astrophysics Data System (ADS)
Delon, C.; Mougin, E.; Serça, D.; Grippa, M.; Hiernaux, P.; Diawara, M.; Galy-Lacaux, C.; Kergoat, L.
2014-08-01
This work is an attempt to provide the seasonal variation of biogenic NO emission fluxes in a Sahelian rangeland in Mali (Agoufou, 15.34° N, 1.48° W) for the years 2004, 2005, 2006, 2007 and 2008. Indeed, NO is one of the most important precursors of tropospheric ozone, and the contribution of the Sahel region to NO emissions is no longer considered negligible. The link between NO production in the soil and NO release to the atmosphere is investigated in this study by taking into account vegetation litter production and degradation, microbial processes in the soil, emission fluxes, and the environmental variables influencing these processes, using a coupled vegetation-litter decomposition-emission model. This model includes the Sahelian-Transpiration-Evaporation-Productivity (STEP) model for the simulation of herbaceous, tree leaf and fecal masses, the GENDEC model (GENeral DEComposition) for the simulation of buried litter decomposition, and the NO emission model for the simulation of the NO flux to the atmosphere. Physical parameters (soil moisture and temperature, wind speed, sand percentage), which affect substrate diffusion and oxygen supply in the soil and influence microbial activity, and biogeochemical parameters (pH and fertilization rate related to N content) are necessary to simulate the NO flux. The reliability of the simulated parameters is checked in order to assess the robustness of the simulated NO flux. The simulated yearly average NO flux ranges from 0.69 to 1.09 kg(N) ha-1 yr-1, and the wet season average ranges from 1.16 to 2.08 kg(N) ha-1 yr-1. These results are of the same order as previous measurements made at several sites where the vegetation and the soil are comparable to those in Agoufou. This coupled vegetation-litter decomposition-emission model could be generalized to the scale of the Sahel region and provide information where little data is available.
NASA Astrophysics Data System (ADS)
Delon, C.; Mougin, E.; Serça, D.; Grippa, M.; Hiernaux, P.; Diawara, M.; Galy-Lacaux, C.; Kergoat, L.
2015-01-01
This work is an attempt to provide the seasonal variation of biogenic NO emission fluxes in a Sahelian rangeland in Mali (Agoufou, 15.34° N, 1.48° W) for the years 2004-2008. Indeed, NO is one of the most important precursors of tropospheric ozone, and the contribution of the Sahel region to NO emissions is no longer considered negligible. The link between NO production in the soil and NO release to the atmosphere is investigated in this study by taking into account vegetation litter production and degradation, microbial processes in the soil, emission fluxes, and the environmental variables influencing these processes, using a coupled vegetation-litter decomposition-emission model. This model includes the Sahelian-Transpiration-Evaporation-Productivity (STEP) model for the simulation of herbaceous, tree leaf and fecal masses, the GENDEC model (GENeral DEComposition) for the simulation of buried litter decomposition and microbial dynamics, and the NO emission model (NOFlux) for the simulation of the NO release to the atmosphere. Physical parameters (soil moisture and temperature, wind speed, sand percentage), which affect substrate diffusion and oxygen supply in the soil and influence microbial activity, and biogeochemical parameters (pH and fertilization rate related to N content) are necessary to simulate the NO flux. The reliability of the simulated parameters is checked in order to assess the robustness of the simulated NO flux. The simulated yearly average NO flux ranges from 0.66 to 0.96 kg(N) ha-1 yr-1, and the wet season average ranges from 1.06 to 1.73 kg(N) ha-1 yr-1. These results are of the same order as previous measurements made at several sites where the vegetation and the soil are comparable to those in Agoufou. This coupled vegetation-litter decomposition-emission model could be generalized to the scale of the Sahel region and provide information where little data is available.
Shao, Cenyi; Meng, Xuehui; Cui, Shichen; Wang, Jingru; Li, Chengcheng
2016-10-01
Although migrant workers are a vulnerable group in China, they demonstrably contribute to the country's economic growth and prosperity. This study aimed to describe and assess the inequality of migrant worker health in China and its association with socioeconomic determinants. The data utilized in this study were obtained from the 2012 China Labor-force Dynamics Survey conducted in 29 Chinese provinces. This study converted the self-rated health of these migrant workers into a general cardinal ill-health score. Determinants associated with migrant worker health included age, marital status, income, and education, among other factors. The concentration index, concentration curve, and decomposition of the concentration index were employed to measure socioeconomic inequality in migrant workers' health. Pro-rich inequality was found in the health of migrant workers: the concentration index was -0.0866 for the ill-health indicator. Decomposition of the concentration index revealed that the factor contributing most to the observed inequality was income, followed by gender, age, marital status, and smoking history. There is thus an unequal socioeconomic distribution of migrant worker health in China. In order to reduce this health inequality, the government should make a substantial effort to strengthen policy implementation in improving the income distribution for vulnerable groups. The findings warrant further investigation. Copyright © 2016. Published by Elsevier Taiwan LLC.
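The concentration index can be computed with the convenient covariance formula C = 2 cov(h, r) / mean(h), where r is the fractional income rank; a minimal sketch with toy data (illustrative values, not the survey data) is:

```python
import numpy as np

def concentration_index(health, income):
    """Concentration index via C = 2 * cov(h, r) / mean(h), with r the
    fractional income rank. For an ill-health variable, negative C means
    ill health is concentrated among the poor (pro-rich health inequality)."""
    order = np.argsort(income)
    h = np.asarray(health, dtype=float)[order]
    n = len(h)
    r = (np.arange(1, n + 1) - 0.5) / n           # fractional income rank
    return 2.0 * np.mean((h - h.mean()) * (r - r.mean())) / h.mean()

# Toy data: ill-health scores falling with income -> C < 0, as in the study.
income = np.array([10, 20, 30, 40, 50])
ill = np.array([5.0, 4.0, 3.0, 2.0, 1.0])
print(concentration_index(ill, income) < 0)
```

Decomposition methods then express C as a weighted sum of the concentration indices of its determinants, which is how the contributions of income, gender, and age are separated.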
NASA Astrophysics Data System (ADS)
Austin, A.; Ballare, C. L.; Méndez, M. S.
2015-12-01
Plant litter decomposition is an essential process in the first stages of carbon and nutrient turnover in terrestrial ecosystems and, together with soil microbial biomass, provides the principal inputs of carbon for the formation of soil organic matter. Photodegradation, the photochemical mineralization of organic matter, has recently been identified as a mechanism for previously unexplained high rates of litter mass loss in low rainfall ecosystems; however, the generality of this process as a control on carbon cycling in terrestrial ecosystems is not known, and the indirect effects of photodegradation on biotic stimulation of carbon turnover have been debated in recent studies. We demonstrate that in a wide range of plant species, previous exposure to solar radiation, and visible light in particular, enhanced subsequent biotic degradation of leaf litter. Moreover, we demonstrate that the mechanism for this enhancement involves increased accessibility of plant litter carbohydrates to microbial enzymes due to a reduction in lignin content. Photodegradation of plant litter reduces the structural and chemical bottleneck imposed by lignin in secondary cell walls. In litter from woody plant species, specific interactions with ultraviolet radiation obscured the facilitative effects of solar radiation on biotic decomposition. The generalized positive effect of solar radiation exposure on subsequent microbial activity is mediated by increased accessibility to cell wall polysaccharides, which suggests that photodegradation is quantitatively important in determining rates of mass loss, nutrient release and the carbon balance in a broad range of terrestrial ecosystems.
On bipartite pure-state entanglement structure in terms of disentanglement
NASA Astrophysics Data System (ADS)
Herbut, Fedor
2006-12-01
Schrödinger's disentanglement [E. Schrödinger, Proc. Cambridge Philos. Soc. 31, 555 (1935)], i.e., remote state decomposition, as a physical way to study entanglement, is carried one step further with respect to previous work in investigating the qualitative side of entanglement in any bipartite state vector. Remote measurement (or, equivalently, remote orthogonal state decomposition) from previous work is generalized to remote linearly independent complete state decomposition both in the nonselective and the selective versions. The results are displayed in terms of commutative square diagrams, which show the power and beauty of the physical meaning of the (antiunitary) correlation operator inherent in the given bipartite state vector. This operator, together with the subsystem states (reduced density operators), constitutes the so-called correlated subsystem picture. It is the central part of the antilinear representation of a bipartite state vector, and it is a kind of core of its entanglement structure. The generalization of previously elaborated disentanglement expounded in this article is a synthesis of the antilinear representation of bipartite state vectors, which is reviewed, and the relevant results of [Cassinelli et al., J. Math. Anal. Appl. 210, 472 (1997)] in mathematical analysis, which are summed up. Linearly independent bases (finite or infinite) are shown to be almost as useful in some quantum mechanical studies as orthonormal ones. Finally, it is shown that linearly independent remote pure-state preparation carries the highest probability of occurrence. This singles out linearly independent remote influence from all possible ones.
Sampling considerations for modal analysis with damping
NASA Astrophysics Data System (ADS)
Park, Jae Young; Wakin, Michael B.; Gilbert, Anna C.
2015-03-01
Structural health monitoring (SHM) systems are critical for monitoring aging infrastructure (such as buildings or bridges) in a cost-effective manner. Wireless sensor networks that sample vibration data over time are particularly appealing for SHM applications due to their flexibility and low cost. However, in order to extend the battery life of wireless sensor nodes, it is essential to minimize the amount of vibration data these sensors must collect and transmit. In recent work, we have studied the performance of the Singular Value Decomposition (SVD) applied to the collected data and provided new finite sample analysis characterizing conditions under which this simple technique, also known as the Proper Orthogonal Decomposition (POD), can correctly estimate the mode shapes of the structure. Specifically, we provided theoretical guarantees on the number and duration of samples required in order to estimate a structure's mode shapes to a desired level of accuracy. In that previous work, however, we considered simplified Multiple-Degree-Of-Freedom (MDOF) systems with no damping. In this paper we consider MDOF systems with proportional damping and show that, with sufficiently light damping, the POD can continue to provide accurate estimates of a structure's mode shapes. We support our discussion with new analytical insight and experimental demonstrations. In particular, we study the tradeoffs between the level of damping, the sampling rate and duration, and the accuracy to which the structure's mode shapes can be estimated.
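The POD/SVD mode-shape estimate can be sketched for an undamped toy system (the paper's contribution concerns the lightly damped case): with nearly orthogonal modal responses of distinct amplitude, the left singular vectors of the sampled displacement data recover the mode shapes up to sign. The 4-DOF shapes and modal frequencies below are assumed for illustration.

```python
import numpy as np

# Two orthonormal "mode shapes" of a toy 4-DOF structure (assumed, not
# from the paper), each oscillating at its own modal frequency.
phi = np.array([[0.5, 0.5],
                [0.5, -0.5],
                [0.5, 0.5],
                [0.5, -0.5]])
t = np.linspace(0.0, 10.0, 2000)
q = np.vstack([3.0 * np.cos(2 * np.pi * 1.0 * t),   # modal coordinates with
               1.0 * np.cos(2 * np.pi * 2.3 * t)])  # distinct amplitudes
X = phi @ q                                          # sampled displacements

# SVD of the data matrix: left singular vectors estimate the mode shapes.
U, s, _ = np.linalg.svd(X, full_matrices=False)
est = U[:, :2]

# Up to sign, the dominant singular vectors recover the true shapes.
match = [min(np.linalg.norm(est[:, k] - phi[:, k]),
             np.linalg.norm(est[:, k] + phi[:, k])) for k in range(2)]
print(max(match) < 0.05)
```

The accuracy of this recovery degrades as the modal responses become correlated, which is exactly the regime the finite-sample analysis and damping study characterize.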
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yishen; Zhou, Zhi; Liu, Cong
2016-08-01
As more wind power and other renewable resources are being integrated into the electric power grid, forecast uncertainty brings operational challenges for power system operators. In this report, different operational strategies for uncertainty management are presented and evaluated. A comprehensive and consistent simulation framework is developed to analyze the performance of different reserve policies and scheduling techniques under uncertainty in wind power. Numerical simulations are conducted on a modified version of the IEEE 118-bus system with a 20% wind penetration level, comparing deterministic, interval, and stochastic unit commitment strategies. The results show that stochastic unit commitment provides a reliable schedule without large increases in operational costs. Moreover, decomposition techniques, such as load shift factors and Benders decomposition, can help in overcoming the computational obstacles to stochastic unit commitment and enable the use of a larger scenario set to represent forecast uncertainty. In contrast, deterministic and interval unit commitment tend to give higher system costs as more reserves are scheduled to address forecast uncertainty. However, these approaches require a much lower computational effort. Choosing a proper lower bound for the forecast uncertainty is important for balancing reliability and system operational cost in deterministic and interval unit commitment. Finally, we find that the introduction of zonal reserve requirements improves reliability, but at the expense of higher operational costs.
Performance evaluation of integrated solid-liquid wastes treatment technology in palm oil industry
NASA Astrophysics Data System (ADS)
Amelia, J. R.; Suprihatin, S.; Indrasti, N. S.; Hasanudin, U.; Fujie, K.
2017-05-01
The oil palm industry significantly contributes to environmental degradation if its wastes are not managed properly. The newest alternative waste management approach that might be developed is to utilize the effluent of POME anaerobic digestion together with EFB through an integrated anaerobic decomposition process. The aim of this research was to examine and evaluate this integrated solid-liquid waste treatment technology from the viewpoint of greenhouse gas emissions, compost, and biogas production. POME was treated in an anaerobic digester with a loading rate of about 1.65 gCOD/L/day. Treated POME at doses of 15 and 20 L/day was sprayed onto the anaerobic digester, which was filled with 25 kg of EFB. The results showed that after 60 days, the C/N ratio of EFB decreased to 12.67 and 10.96 for treated-POME doses of 15 and 20 L/day, respectively. For the 60-day decomposition, the integrated waste treatment technology could produce 51.01 and 34.34 m3/ton FFB of biogas, equivalent to 636.44 and 466.58 kgCO2e/ton FFB, for treated-POME doses of 15 and 20 L/day, respectively. The results also showed that the integrated solid-liquid waste treatment technology could reduce GHG emissions by about 421.20 and 251.34 kgCO2e/ton FFB for treated-POME doses of 15 and 20 L/day, respectively.
Process R&D for Particle Size Control of Molybdenum Oxide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Sujat; Dzwiniel, Trevor; Pupek, Krzysztof
The primary goal of this study was to produce MoO3 powder with a particle size range of 50 to 200 μm for use in targets for production of the medical isotope 99Mo. Molybdenum metal powder is commercially produced by thermal reduction of oxides in a hydrogen atmosphere. The most common source material is MoO3, which is derived by the thermal decomposition of ammonium heptamolybdate (AHM). However, the particle size of the currently produced MoO3 is too small, resulting in Mo powder that is too fine to properly sinter and press into the desired target. In this study, the effects of heating rate, heating temperature, gas type, gas flow rate, and isothermal heating were investigated for the decomposition of AHM. The main conclusions were as follows: a lower heating rate (2-10°C/min) minimizes breakdown of aggregates; recrystallized samples with millimeter-sized aggregates are resistant to various heat treatments; extended isothermal heating at >600°C leads to significant sintering; and inert gas and high gas flow rate (up to 2000 ml/min) did not significantly affect particle size distribution or composition. In addition, attempts to recover AHM from aqueous solution by several methods (spray drying, precipitation, and low-temperature crystallization) failed to achieve the desired particle size range of 50 to 200 μm. Further studies are planned.
Energy-Based Wavelet De-Noising of Hydrologic Time Series
Sang, Yan-Fang; Liu, Changming; Wang, Zhonggen; Wen, Jun; Shang, Lunyu
2014-01-01
De-noising is a substantial issue in hydrologic time series analysis, but it is a difficult task given the shortcomings of existing methods. In this paper an energy-based wavelet de-noising method is proposed. It removes noise by comparing the energy distribution of the series with a background energy distribution established from a Monte-Carlo test. Unlike the wavelet threshold de-noising (WTD) method, which is based on thresholding wavelet coefficients, the proposed method is based on the energy distribution of the series. It can distinguish noise from deterministic components in a series, and the uncertainty of the de-noising result can be quantitatively estimated using a proper confidence interval, which WTD cannot do. Analysis of both synthetic and observed series verified the comparable power of the proposed method and WTD, but the de-noising process of the former is more easily operated. The results also indicate the influence of three key factors (wavelet choice, decomposition level choice and noise content) on wavelet de-noising. The wavelet should be chosen carefully when using the proposed method, and the suitable decomposition level should correspond to the series' deterministic sub-signal with the smallest temporal scale. If too much noise is included in a series, an accurate de-noising result cannot be obtained by either the proposed method or WTD; however, such a series shows purely random rather than autocorrelated character, so de-noising is no longer needed. PMID:25360533
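The comparison of a series' wavelet energy distribution against a Monte-Carlo noise background can be sketched as follows. This is a minimal illustration using a Haar wavelet; the function names and the confidence-band construction are our own assumptions, not the authors' implementation:

```python
import numpy as np

def haar_detail_energies(x, levels):
    """Energy of Haar wavelet detail coefficients at each decomposition level."""
    energies = []
    approx = np.asarray(x, dtype=float)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        detail = (even - odd) / np.sqrt(2)   # detail coefficients at this level
        approx = (even + odd) / np.sqrt(2)   # approximation passed to next level
        energies.append(np.sum(detail ** 2))
    return np.array(energies)

def noise_energy_band(n, levels, trials=200, alpha=0.95, rng=None):
    """Monte-Carlo confidence band for the detail energies of unit white noise."""
    rng = np.random.default_rng(rng)
    samples = np.array([haar_detail_energies(rng.standard_normal(n), levels)
                        for _ in range(trials)])
    lo = np.quantile(samples, (1 - alpha) / 2, axis=0)
    hi = np.quantile(samples, 1 - (1 - alpha) / 2, axis=0)
    return lo, hi
```

Levels whose energies fall inside the noise band would be treated as noise-dominated; levels that exceed the band carry deterministic signal.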
40 CFR 267.111 - What general standards must I meet when I stop operating the unit?
Code of Federal Regulations, 2011 CFR
2011-07-01
... to protect human health and the environment, post-closure escape of hazardous waste, hazardous constituents, leachate, contaminated run-off, or hazardous waste decomposition products to the ground or... PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) STANDARDS FOR OWNERS AND OPERATORS OF HAZARDOUS WASTE...
40 CFR 267.111 - What general standards must I meet when I stop operating the unit?
Code of Federal Regulations, 2010 CFR
2010-07-01
... to protect human health and the environment, post-closure escape of hazardous waste, hazardous constituents, leachate, contaminated run-off, or hazardous waste decomposition products to the ground or... PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) STANDARDS FOR OWNERS AND OPERATORS OF HAZARDOUS WASTE...
Modeling Ability Differentiation in the Second-Order Factor Model
ERIC Educational Resources Information Center
Molenaar, Dylan; Dolan, Conor V.; van der Maas, Han L. J.
2011-01-01
In this article we present factor models to test for ability differentiation. Ability differentiation predicts that the size of IQ subtest correlations decreases as a function of the general intelligence factor. In the Schmid-Leiman decomposition of the second-order factor model, we model differentiation by introducing heteroscedastic residuals,…
Oxygen from Hydrogen Peroxide. A Safe Molar Volume-Molar Mass Experiment.
ERIC Educational Resources Information Center
Bedenbaugh, John H.; And Others
1988-01-01
Describes a molar volume-molar mass experiment for use in general chemistry laboratories. Gives background technical information, procedures for the titration of aqueous hydrogen peroxide with standard potassium permanganate and catalytic decomposition of hydrogen peroxide to produce oxygen, and a discussion of the results obtained in three…
A Decomposition Approach for Shipboard Manpower Scheduling
2009-01-01
generalizes the bin-packing problem with no conflicts (BPP), which is known to be NP-hard (Garey and Johnson 1979). Hence our focus is to obtain a lower...to the BPP; while the so-called constrained packing lower bound also takes conflict constraints into account. Their computational study indicates
Spreadsheets and Bulgarian Goats
ERIC Educational Resources Information Center
Sugden, Steve
2012-01-01
We consider a problem appearing in an Australian Mathematics Challenge in 2003. This article considers whether a spreadsheet might be used to model this problem, thus allowing students to explore its structure within the spreadsheet environment. It then goes on to reflect on some general principles of problem decomposition when the final goal is a…
de Jonge, J; van Trijp, J C M; van der Lans, I A; Renes, R J; Frewer, L J
2008-09-01
This paper investigates the relationship between general consumer confidence in the safety of food and consumer trust in institutions and organizations. More specifically, using a decompositional regression analysis approach, the extent to which the strength of the relationship between trust and general confidence depends on a particular food chain actor (for example, food manufacturers) is assessed. In addition, the impact of specific subdimensions of trust, such as openness, on consumer confidence is analyzed, along with interaction effects of actors and subdimensions of trust. The results confirm previous findings, which indicate that a higher level of trust is associated with a higher level of confidence. However, the results from the current study extend previous findings by disentangling the effects that determine the strength of this relationship into specific components associated with the different actors, the different trust dimensions, and specific combinations of actors and trust dimensions. The results show that trust in food manufacturers influences general confidence more than trust in other food chain actors, and that care is the most important trust dimension. However, the contribution of a particular trust dimension to enhancing general confidence is actor-specific, suggesting that different actors should focus on different trust dimensions when the purpose is to enhance consumer confidence in food safety. Implications for the development of communication strategies that are designed to regain or maintain consumer confidence in the safety of food are discussed.
Ceberio, Josu; Calvo, Borja; Mendiburu, Alexander; Lozano, Jose A
2018-02-15
In the last decade, many works in combinatorial optimisation have shown that, due to the advances in multi-objective optimisation, the algorithms from this field could be used for solving single-objective problems as well. In this sense, a number of papers have proposed multi-objectivising single-objective problems in order to use multi-objective algorithms in their optimisation. In this article, we follow up this idea by presenting a methodology for multi-objectivising combinatorial optimisation problems based on elementary landscape decompositions of their objective function. Under this framework, each of the elementary landscapes obtained from the decomposition is considered as an independent objective function to optimise. In order to illustrate this general methodology, we consider four problems from different domains: the quadratic assignment problem and the linear ordering problem (permutation domain), the 0-1 unconstrained quadratic optimisation problem (binary domain), and the frequency assignment problem (integer domain). We implemented two widely known multi-objective algorithms, NSGA-II and SPEA2, and compared their performance with that of a single-objective GA. The experiments conducted on a large benchmark of instances of the four problems show that the multi-objective algorithms clearly outperform the single-objective approaches. Furthermore, a discussion on the results suggests that the multi-objective space generated by this decomposition enhances the exploration ability, thus permitting NSGA-II and SPEA2 to obtain better results in the majority of the tested instances.
Michaud, Jean-Philippe; Moreau, Gaétan
2013-07-01
Experimental protocols in forensic entomology successional field studies generally involve daily sampling of insects to document temporal changes in species composition on animal carcasses. One challenge with that method has been to adjust the sampling intensity to obtain the best representation of the community present without affecting the said community. To date, little is known about how such investigator perturbations affect decomposition-related processes. Here, we investigated how different levels of daily sampling of fly eggs and fly larvae affected, over time, carcass decomposition rate and the carrion insect community. Results indicated that a daily sampling of <5% of the egg and larvae volumes present on a carcass, a sampling intensity believed to be consistent with currently accepted practices in successional field studies, had little effect overall. Higher sampling intensities, however, slowed down carcass decomposition, affected the abundance of certain carrion insects, and caused an increase in the volume of eggs laid by dipterans. This study suggests that the carrion insect community not only has a limited resilience to recurrent perturbations but that a daily sampling intensity equal to or <5% of the egg and larvae volumes appears adequate to ensure that the system is representative of unsampled conditions. Hence we propose that this threshold be accepted as best practice in future forensic entomology successional field studies.
Rai, Prashant; Sargsyan, Khachik; Najm, Habib; ...
2017-03-07
Here, a new method is proposed for fast evaluation of the high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eliminates force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, the high-dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using Gauss–Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm-1 or better and nearly an order of magnitude speedup compared with the original algorithm using force constants, for water and formaldehyde.
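The core reduction, a canonical (CP) low-rank decomposition that turns a multidimensional integral into a short sum of products of one-dimensional integrals, can be sketched for a 3-way tensor. This is a generic ALS illustration with our own function names and initialization, not the paper's code:

```python
import numpy as np

def cp_als(T, rank, iters=100, rng=0):
    """Canonical (CP) decomposition of a 3-way tensor by alternating least squares."""
    rng = np.random.default_rng(rng)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    T0 = T.reshape(I, J * K)                    # mode-0 unfolding
    T1 = np.moveaxis(T, 1, 0).reshape(J, I * K) # mode-1 unfolding
    T2 = np.moveaxis(T, 2, 0).reshape(K, I * J) # mode-2 unfolding

    def khatri_rao(X, Y):
        """Column-wise Kronecker product."""
        return (X[:, None, :] * Y[None, :, :]).reshape(-1, X.shape[1])

    for _ in range(iters):
        # each update is a linear least-squares solve for one factor
        A = T0 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = T1 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = T2 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C
```

Once T ≈ Σ_r a_r ⊗ b_r ⊗ c_r, a tensor-product quadrature Σ_ijk w_i w_j w_k T[i,j,k] collapses to Σ_r (w·a_r)(w·b_r)(w·c_r), i.e. products of one-dimensional sums.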
Flexible Mediation Analysis With Multiple Mediators.
Steen, Johan; Loeys, Tom; Moerkerke, Beatrijs; Vansteelandt, Stijn
2017-07-15
The advent of counterfactual-based mediation analysis has triggered enormous progress on how, and under what assumptions, one may disentangle path-specific effects upon combining arbitrary (possibly nonlinear) models for mediator and outcome. However, current developments have largely focused on single mediators because required identification assumptions prohibit simple extensions to settings with multiple mediators that may depend on one another. In this article, we propose a procedure for obtaining fine-grained decompositions that may still be recovered from observed data in such complex settings. We first show that existing analytical approaches target specific instances of a more general set of decompositions and may therefore fail to provide a comprehensive assessment of the processes that underpin cause-effect relationships between exposure and outcome. We then outline conditions for obtaining the remaining set of decompositions. Because the number of targeted decompositions increases rapidly with the number of mediators, we introduce natural effects models along with estimation methods that allow for flexible and parsimonious modeling. Our procedure can easily be implemented using off-the-shelf software and is illustrated using a reanalysis of the World Health Organization's Large Analysis and Review of European Housing and Health Status (WHO-LARES) study on the effect of mold exposure on mental health (2002-2003). © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved.
A two-stage linear discriminant analysis via QR-decomposition.
Ye, Jieping; Li, Qi
2005-06-01
Linear Discriminant Analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problems; that is, it fails when all scatter matrices are singular. Many LDA extensions were proposed in the past to overcome the singularity problems. Among these extensions, PCA+LDA, a two-stage method, received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using Principal Component Analysis (PCA). Most previous LDA extensions are computationally expensive, and not scalable, due to the use of Singular Value Decomposition or Generalized Singular Value Decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problems of classical LDA, while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship among LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.
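A minimal sketch of the two-stage idea, a QR decomposition of the class-centroid matrix followed by classical LDA in the reduced space, is given below. The function name and implementation details are our own assumptions for a small dense setting, not the paper's algorithm:

```python
import numpy as np

def lda_qr(X, y):
    """Two-stage LDA/QR sketch: (1) QR of the d x k class-centroid matrix,
    (2) classical LDA in the k-dimensional reduced space."""
    classes = np.unique(y)
    # d x k matrix whose columns are the class centroids
    C = np.stack([X[y == c].mean(axis=0) for c in classes], axis=1)
    Q, _ = np.linalg.qr(C)             # stage 1: orthonormal basis (d x k)
    Z = X @ Q                          # data projected into the reduced space
    # stage 2: LDA on the reduced data
    mean = Z.mean(axis=0)
    Sw = np.zeros((Q.shape[1],) * 2)   # within-class scatter
    Sb = np.zeros_like(Sw)             # between-class scatter
    for c in classes:
        Zc = Z[y == c]
        mc = Zc.mean(axis=0)
        Sw += (Zc - mc).T @ (Zc - mc)
        Sb += len(Zc) * np.outer(mc - mean, mc - mean)
    # leading eigenvectors of pinv(Sw) @ Sb give the discriminant directions
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(evals.real)[::-1]
    G = evecs.real[:, order[:len(classes) - 1]]
    return Q @ G                       # overall d x (k-1) transform
```

The QR factorization in stage 1 involves only a small d x k matrix, which is the source of the efficiency gain over PCA on the full total-scatter matrix.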
Organic Matter Quality and its Influence on Carbon Turnover and Stabilization in Northern Peatlands
NASA Astrophysics Data System (ADS)
Turetsky, M. R.; Wieder, R. K.
2002-12-01
Peatlands cover 3-5 % of the world's ice-free land area, but store about 33 % of global terrestrial soil carbon. Peat accumulation in northern regions generally is controlled by slow decomposition, which may be limited by cold temperatures and water-logging. Poor organic matter quality also may limit decay, and microbial activity in peatlands likely is regulated by the availability of labile carbon and/or nutrients. Conversely, carbon in recalcitrant soil structures may be chemically protected from microbial decay, particularly in peatlands where carbon can be buried in anaerobic soils. Soil organic matter quality is controlled by plant litter chemical composition and the susceptibility of organic compounds to decomposition through time. There are a number of techniques available for characterizing organic quality, ranging from chemical proximate or elemental analysis to more qualitative methods such as nuclear magnetic resonance, pyrolysis/mass spectroscopy, and Fourier transform infrared spectroscopy. We generally have relied on proximate analysis for quantitative determination of several organic fractions (i.e., water-soluble carbohydrates, soluble nonpolars, water-soluble phenolics, holocellulose, and acid insoluble material). Our approaches to studying organic matter quality in relation to C turnover in peatlands include 1) 14C labelling of peatland vegetation along a latitudinal gradient in North America, allowing us to follow the fate of 14C tracer in belowground organic fractions under varying climates, 2) litter bag studies focusing on the role of individual moss species in litter quality and organic matter decomposition, and 3) laboratory incubations of peat to explore relationships between organic matter quality and decay. These studies suggest that proximate organic fractions vary in lability, but that turnover of organic matter is influenced both by plant species and climate.
Across boreal peatlands, measures of soil recalcitrance such as acid insoluble material (AIM) and AIM/N were significant predictors of decomposition. However, when limited to individual peatland features or bryophyte species, soluble proximate fractions were better predictors of organic matter decay. This suggests that decomposition within single litter or peat types is controlled by the size of relatively small, labile carbon pools. As peatlands store the majority of soil carbon in the boreal forest, the influences of peat quality on carbon storage and turnover should be considered in understanding the fate of carbon in northern ecosystems.
Proton spin structure from measurable parton distributions.
Ji, Xiangdong; Xiong, Xiaonu; Yuan, Feng
2012-10-12
We present a systematic study of the proton spin structure in terms of measurable parton distributions. For a transversely polarized proton, we derive a polarization sum rule from the leading generalized parton distributions appearing in hard exclusive processes. For a longitudinally polarized proton, we obtain a helicity decomposition from well-known quark and gluon helicity distributions and orbital angular-momentum contributions. The latter are shown to be related to measurable subleading generalized parton distributions and quantum-phase space Wigner distributions.
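The helicity decomposition referred to for a longitudinally polarized proton is conventionally written (our rendering of the standard Jaffe-Manohar form, with ΔΣ the quark helicity contribution, ΔG the gluon helicity contribution, and L_q, L_g the quark and gluon orbital angular momenta):

```latex
\frac{1}{2} \;=\; \frac{1}{2}\,\Delta\Sigma \;+\; \Delta G \;+\; L_q \;+\; L_g
```

The abstract's point is that each term on the right can be tied to measurable parton distributions, with the orbital pieces related to subleading generalized parton distributions and Wigner distributions.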
Invariant object recognition based on the generalized discrete radon transform
NASA Astrophysics Data System (ADS)
Easley, Glenn R.; Colonna, Flavia
2004-04-01
We introduce a method for classifying objects based on special cases of the generalized discrete Radon transform. We adjust the transform and the corresponding ridgelet transform by means of circular shifting and a singular value decomposition (SVD) to obtain a translation, rotation and scaling invariant set of feature vectors. We then use a back-propagation neural network to classify the input feature vectors. We conclude with experimental results and compare these with other invariant recognition methods.
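The use of circular shifting together with an SVD to obtain shift-invariant features can be illustrated in stripped-down form: the singular values of the matrix of all circular shifts of a 1-D signature are unchanged when the signature itself is circularly shifted, because shifting only permutes the rows. This is a generic illustration of the invariance mechanism, not the authors' exact pipeline:

```python
import numpy as np

def shift_invariant_features(signature):
    """Singular values of the circulant matrix built from all circular shifts
    of a 1-D signature; invariant under circular shifts of the input."""
    s = np.asarray(signature, dtype=float)
    M = np.array([np.roll(s, k) for k in range(len(s))])  # circulant matrix
    return np.linalg.svd(M, compute_uv=False)             # sorted descending
```

Rotating an object corresponds (in a Radon-domain signature) to a circular shift of the projection angle, so such features are rotation invariant by construction.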
Kinematics of our Galaxy from the PMA and TGAS catalogues
NASA Astrophysics Data System (ADS)
Velichko, Anna B.; Akhmetov, Volodymyr S.; Fedorov, Peter N.
2018-04-01
We derive and compare kinematic parameters of the Galaxy using the PMA and Gaia TGAS data. Two methods are used in the calculations: evaluation of the Ogorodnikov-Milne model (OMM) parameters by the least-squares method (LSM), and a decomposition on a set of vector spherical harmonics (VSH). We trace the dependence on distance of the derived parameters, including the Oort constants A and B and the rotational velocity of the Galaxy V rot at the Solar distance, for the common sample of stars of mixed spectral composition from the PMA and TGAS catalogues. The distances were obtained from the TGAS parallaxes or from reduced proper motions for fainter stars. The A, B and V rot parameters derived from the proper motions of the two catalogues show identical behaviour, but the values are systematically shifted by about 0.5 mas/yr. The Oort B parameter derived from the PMA sample of red giants decreases gradually with increasing distance, while the Oort A has a minimum at about 2 kpc and then gradually increases. As for the models chosen for the calculations, we first confirm the conclusions of other authors about the existence of extra-model harmonics in the stellar velocity field. Secondly, not all parameters of the OMM are statistically significant, and the set of significant parameters depends on the stellar sample used.
NASA Astrophysics Data System (ADS)
Roehl, Jan Hendrik; Oberrath, Jens
2016-09-01
``Active plasma resonance spectroscopy'' (APRS) is a widely used diagnostic method to measure plasma parameters such as the electron density. Measurements with APRS probes in plasmas of a few Pa typically show a broadening of the spectrum due to kinetic effects. To analyze the broadening, a general kinetic model in electrostatic approximation based on functional analytic methods has been presented [1]. One of the main results is that the system response function Y(ω) is given in terms of the matrix elements of the resolvent of the dynamic operator evaluated for values on the imaginary axis. To determine the response function of a specific probe, the resolvent has to be approximated by a huge matrix with a banded block structure. Due to this structure, a block-based LU decomposition can be implemented. It leads to a solution for Y(ω) that involves only products of matrices of the inner block size. This LU decomposition allows the influence of kinetic effects on the broadening to be analyzed while saving memory and computation time. Gratitude is expressed to the internal funding of Leuphana University.
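The block-based elimination alluded to can be sketched for the simplest banded block structure, a block-tridiagonal matrix, where the LU factorization only ever touches inner-block-sized matrices. This is our generic sketch of the block Thomas algorithm, not the specific probe model:

```python
import numpy as np

def block_tridiag_solve(D, L, U, b):
    """Solve a block-tridiagonal system via block LU (block Thomas algorithm).
    D: list of n diagonal blocks (m x m); L: n-1 sub-diagonal blocks;
    U: n-1 super-diagonal blocks; b: right-hand side of length n*m."""
    n, m = len(D), D[0].shape[0]
    b = b.reshape(n, m).astype(float)
    Dp = [np.array(Di, dtype=float) for Di in D]
    # forward elimination: only m x m factorizations, never the full matrix
    for i in range(1, n):
        W = L[i - 1] @ np.linalg.inv(Dp[i - 1])
        Dp[i] = Dp[i] - W @ U[i - 1]
        b[i] = b[i] - W @ b[i - 1]
    # back substitution, again block by block
    x = np.empty_like(b)
    x[-1] = np.linalg.solve(Dp[-1], b[-1])
    for i in range(n - 2, -1, -1):
        x[i] = np.linalg.solve(Dp[i], b[i] - U[i] @ x[i + 1])
    return x.reshape(-1)
```

Memory and work scale with the block size m rather than the full matrix dimension n·m, which is the saving the abstract refers to.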
Production of furfural from palm oil empty fruit bunches: kinetic model comparation
NASA Astrophysics Data System (ADS)
Panjaitan, J. R. H.; Monica, S.; Gozan, M.
2017-05-01
Furfural is a chemical compound used in pharmaceuticals, cosmetics, resins and cleaning compounds, and it can be produced by acid hydrolysis of biomass. Indonesia's demand for furfural reached 790 tons in 2010, about 72% of which was still imported, mostly from China. In this study, reaction kinetic models for furfural production from oil palm empty fruit bunches, with the acid catalyst added at the beginning of the experiment, were determined. Kinetic data were obtained from hydrolysis of empty oil palm bunches using a 3% sulfuric acid catalyst at temperatures of 170°C, 180°C and 190°C for 20 minutes. From this study, the kinetic model that best describes furfural production is one in which the acid-catalyzed hydrolysis of hemicellulose and the degradation of furfural produce the same decomposition product, formic acid, through different reaction pathways. The activation energies obtained for the formation of furfural, the formation of decomposition products from furfural, and the formation of decomposition products from hemicellulose are 8.240 kJ/mol, 19.912 kJ/mol and -39.267 kJ/mol, respectively.
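The kinetic scheme described, hemicellulose degrading both to furfural and directly to the decomposition product, with furfural itself decomposing further, corresponds to first-order rate equations of the form below (our notation: H is hemicellulose, F furfural, and the k_i Arrhenius rate constants; the specific rate-law assignment is an assumption):

```latex
\frac{dH}{dt} = -(k_1 + k_3)\,H, \qquad
\frac{dF}{dt} = k_1 H - k_2 F, \qquad
k_i = A_i \exp\!\left(-\frac{E_{a,i}}{RT}\right)
```

Fitting these equations to the measured concentration profiles at the three temperatures yields the activation energies E_{a,i} reported in the abstract.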
Ferroelectric based catalysis: Switchable surface chemistry
NASA Astrophysics Data System (ADS)
Kakekhani, Arvin; Ismail-Beigi, Sohrab
2015-03-01
We describe a new class of catalysts that uses an epitaxial monolayer of a transition metal oxide on a ferroelectric substrate. The ferroelectric polarization switches the surface chemistry between strongly adsorptive and strongly desorptive regimes, circumventing difficulties encountered on non-switchable catalytic surfaces, where the Sabatier principle dictates a moderate surface-molecule interaction strength. This method is general and can, in principle, be applied to many reactions, and for each case the choice of the transition metal oxide monolayer can be optimized. Here, as a specific example, we show how simultaneous NOx direct decomposition (into N2 and O2) and CO oxidation can be achieved efficiently on CrO2-terminated PbTiO3, while circumventing oxygen (and sulfur) poisoning issues. One should note that NOx direct decomposition has been an open challenge in the automotive emission-control industry. Our method can expand the range of catalytically active elements to those which are not conventionally considered for catalysis and which are more economical, e.g., Cr (for NOx direct decomposition and CO oxidation) instead of canonical precious metal catalysts. Primary support from Toyota Motor Engineering and Manufacturing, North America, Inc.
Solventless synthesis, morphology, structure and magnetic properties of iron oxide nanoparticles
NASA Astrophysics Data System (ADS)
Das, Bratati; Kusz, Joachim; Reddy, V. Raghavendra; Zubko, Maciej; Bhattacharjee, Ashis
2017-12-01
In this study we report the solventless synthesis of iron oxide through thermal decomposition of acetyl ferrocene, as well as of its mixtures with maleic anhydride, and the characterization of the synthesized product by various comprehensive physical techniques. The morphology, size and structure of the reaction products were investigated by scanning electron microscopy, transmission electron microscopy and X-ray powder diffraction, respectively. Physical characterization techniques such as FT-IR spectroscopy, dc magnetization measurements and 57Fe Mössbauer spectroscopy were employed to characterize the magnetic properties of the product. The results of these studies unequivocally established that the synthesized materials are hematite. The thermal decomposition has been studied with the help of thermogravimetry, and a reaction pathway for the synthesis of hematite has been proposed. It is noted that maleic anhydride in the solid reaction environment, as well as the gaseous reaction atmosphere, strongly affects the reaction yield and the particle size. In general, a method of preparing hematite nanoparticles through a solventless thermal decomposition technique using organometallic compounds, and the possible use of a reaction promoter, have been discussed in detail.
Coarse-to-fine markerless gait analysis based on PCA and Gauss-Laguerre decomposition
NASA Astrophysics Data System (ADS)
Goffredo, Michela; Schmid, Maurizio; Conforto, Silvia; Carli, Marco; Neri, Alessandro; D'Alessio, Tommaso
2005-04-01
Human movement analysis is generally performed with marker-based systems, which allow reconstruction, with high levels of accuracy, of the trajectories of markers placed on specific points of the human body. Marker-based systems, however, show some drawbacks that can be overcome by the use of video systems applying markerless techniques. In this paper, a computer vision technique specifically designed for the detection and tracking of relevant body points is presented. It is based on the Gauss-Laguerre decomposition, and a Principal Component Analysis (PCA) technique is used to circumscribe the region of interest. Results obtained on both synthetic and experimental tests provide a significant reduction of the computational costs, with no significant reduction of the tracking accuracy.
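The role of PCA in circumscribing the region of interest can be illustrated by extracting the principal axes of a cloud of foreground pixel coordinates. This is a generic sketch of the PCA step under our own naming, not the paper's implementation:

```python
import numpy as np

def principal_axes(points):
    """PCA of 2-D point coordinates (e.g. foreground pixels): returns the
    centroid, orthonormal principal directions (as columns, strongest first),
    and the standard deviation of the cloud along each direction."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)          # 2 x 2 covariance matrix
    evals, evecs = np.linalg.eigh(cov)        # ascending eigenvalues
    order = np.argsort(evals)[::-1]           # reorder strongest first
    stds = np.sqrt(np.maximum(evals[order], 0.0))  # guard tiny negative round-off
    return centroid, evecs[:, order], stds
```

The centroid and scaled principal axes define an oriented bounding region around the body silhouette, inside which the finer Gauss-Laguerre tracking can then operate.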
Buzatu, Andrei; Dill, Harald G; Buzgar, Nicolae; Damian, Gheorghe; Maftei, Andreea Elena; Apopei, Andrei Ionuț
2016-01-15
The Baia Sprie epithermal system, a deposit well known for its impressive mineralogical associations, shows the proper conditions for acid mine drainage and can be considered a general example for affected mining areas around the globe. Efflorescent samples from the abandoned open pit Minei Hill have been analyzed by X-ray diffraction (XRD), scanning electron microscopy (SEM), Raman and near-infrared (NIR) spectrometry. The identified phases are mostly iron sulfates with different hydration degrees (szomolnokite, rozenite, melanterite, coquimbite, ferricopiapite), plus Zn and Al sulfates (gunningite, alunogen, halotrichite). The samples were heated at different temperatures in order to establish the phase transformations among the studied sulfates. The dehydration temperatures and intermediate phases upon decomposition were successfully identified for each mineral phase. Gunningite was the only sulfate that showed no transformation during the heating experiment; all the other sulfates started to dehydrate within the 30-90 °C temperature range. Acid mine drainage, triggered by pyrite oxidation, is the main cause of sulfate formation and the major source of the abundant iron sulfates. Based on the dehydration temperatures, the climatological interpretation indicates that melanterite formation and long-term presence are related to continental and temperate climates. Coquimbite and rozenite are also attributed to dry arid/semi-arid areas, in addition to the climates mentioned above. The more stable sulfates, alunogen, halotrichite, szomolnokite, ferricopiapite and gunningite, can form and persist in all climate regimes, from dry continental to even tropical humid. Copyright © 2015 Elsevier B.V. All rights reserved.
15 CFR 904.211 - Failure to appear.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 15 Commerce and Foreign Trade 3 2013-01-01 2013-01-01 false Failure to appear. 904.211 Section 904... Hearing and Appeal Procedures General § 904.211 Failure to appear. (a) If, after proper service of notice... hearing. (d) The Judge may deem a failure of a party to appear after proper notice a waiver of any right...
15 CFR 904.211 - Failure to appear.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 15 Commerce and Foreign Trade 3 2012-01-01 2012-01-01 false Failure to appear. 904.211 Section 904... Hearing and Appeal Procedures General § 904.211 Failure to appear. (a) If, after proper service of notice... hearing. (d) The Judge may deem a failure of a party to appear after proper notice a waiver of any right...
15 CFR 904.211 - Failure to appear.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 15 Commerce and Foreign Trade 3 2011-01-01 2011-01-01 false Failure to appear. 904.211 Section 904... Hearing and Appeal Procedures General § 904.211 Failure to appear. (a) If, after proper service of notice... hearing. (d) The Judge may deem a failure of a party to appear after proper notice a waiver of any right...
Gao, Bo-Cai; Chen, Wei
2012-06-20
The visible/infrared imaging radiometer suite (VIIRS) is now onboard the first satellite platform managed by the Joint Polar Satellite System of the National Oceanic and Atmospheric Administration and NASA. It collects scientific data from an altitude of approximately 830 km in 22 narrow bands located in the 0.4-12.5 μm range. The seven visible and near-infrared (VisNIR) bands in the 0.4-0.9 μm wavelength interval are known to suffer from out-of-band (OOB) responses: a small amount of radiance far from the center of a given band can pass through the filter and reach detectors in the focal plane. A proper treatment of the OOB effects is necessary to obtain calibrated at-sensor radiance data [referred to as Sensor Data Records (SDRs)] from measurements with these bands and subsequently to derive higher-level data products [referred to as Environmental Data Records (EDRs)]. We have recently developed a new technique, called the multispectral decomposition transform (MDT), which can be used to correct for the OOB effects of the VIIRS VisNIR bands and to recover the true narrow-band radiances from the measured radiances containing OOB effects. An MDT matrix is derived from the laboratory-measured filter transmittance functions. The recovery of the narrow-band signals is performed through a matrix multiplication: the product of the MDT matrix and a multispectral vector. Hyperspectral imaging data measured from high-altitude aircraft and satellite platforms, the complete VIIRS filter functions, and the VIIRS filter functions truncated to narrower spectral intervals are used to simulate VIIRS data with and without OOB effects. Our experimental results using the proposed MDT method demonstrate that the average errors after decomposition are reduced by more than an order of magnitude.
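The recovery step described in the abstract (a single matrix multiplication between the MDT matrix and a multispectral vector) can be sketched as a linear unmixing problem. The mixing matrix below is illustrative only; in practice it would be derived from the laboratory-measured filter transmittance functions.

```python
import numpy as np

# Hypothetical sketch of the MDT idea: measured band radiances are modeled as
# a linear mixture of the true narrow-band radiances, with off-diagonal terms
# representing out-of-band (OOB) leakage into each band.
A = np.array([
    [0.96, 0.03, 0.01],   # band 1: mostly in-band response, small OOB leakage
    [0.02, 0.95, 0.03],   # band 2
    [0.01, 0.04, 0.95],   # band 3
])

true_radiance = np.array([10.0, 20.0, 5.0])   # hypothetical narrow-band signal
measured = A @ true_radiance                   # radiances contaminated by OOB

# The MDT matrix inverts the mixing; recovery is one matrix-vector product.
mdt = np.linalg.inv(A)
recovered = mdt @ measured

print(np.allclose(recovered, true_radiance))   # True
```

The real MDT operates on many more bands and is built from measured filter functions, but the algebraic structure (invert a known response matrix, then apply it per pixel) is the same.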
Effect of proper oral rehabilitation on general health of mandibulectomy patients
Mustafa, Ammar A; Raad, Kais; Mustafa, Nazih S
2015-01-01
Key Clinical Message: Here, we aimed to assess whether postoperative oral rehabilitation of mandibulectomy patients is necessary to improve their general health in terms of health-related quality of life. PMID:26576270
45 CFR 671.7 - General issuance criteria.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 45 Public Welfare 3 2011-10-01 2011-10-01 false General issuance criteria. 671.7 Section 671.7 Public Welfare Regulations Relating to Public Welfare (Continued) NATIONAL SCIENCE FOUNDATION WASTE REGULATION Permits § 671.7 General issuance criteria. (a) Upon receipt of a complete and properly executed...
Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition
NASA Technical Reports Server (NTRS)
Kobayashi, Kayla N.; He, Yutao; Zheng, Jason X.
2011-01-01
Multi-rate finite impulse response (MRFIR) filters are among the essential signal-processing components in spaceborne instruments, where finite impulse response filters are often used to minimize nonlinear group delay and finite-precision effects. Cascaded (multistage) designs of MRFIR filters are further used for large rate-change ratios in order to lower the required throughput while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed outputs at the lowest possible sampling rate. In this innovation, an alternative representation and implementation technique called TD-MRFIR (Thread Decomposition MRFIR) is presented. The basic idea is to decompose the MRFIR into output computational threads, in contrast to the structural decomposition of the original filter performed by polyphase decomposition. A naive implementation of a decimation filter, consisting of a full FIR followed by a downsampling stage, is very inefficient, as most of the computations performed by the FIR stage are discarded through downsampling. In fact, only 1/M of the total computations are useful (M being the decimation factor). Polyphase decomposition provides an alternative view of decimation filters, where the downsampling occurs before the FIR stage and the outputs are viewed as the sum of M sub-filters, each with N/M taps. Although this approach leads to more efficient filter designs, in general the implementation is not straightforward if the number of multipliers needs to be minimized. In TD-MRFIR, each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads.
Each thread completes when a convolution result (filter output value) is computed, and is activated when the first input of the convolution becomes available. Thus, new threads are spawned at exactly the rate of N/M, where N is the total number of taps and M is the decimation factor, and existing threads retire at the same rate. The implementation of an MRFIR is thereby transformed into the problem of statically scheduling the minimum number of multipliers such that all threads can be completed on time. Solving the static scheduling problem is rather straightforward if one examines the Thread Decomposition Diagram, a table-like diagram whose rows represent computation threads and whose columns represent time. The control logic of the MRFIR can be implemented using simple counters. Instead of decomposing MRFIRs into sub-filters as suggested by polyphase decomposition, the thread decomposition diagram transforms the problem into a familiar one of static scheduling, which can be easily solved since the input rate is constant.
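The polyphase identity the abstract contrasts against (only every Mth output of the full-rate FIR survives downsampling) can be sketched as follows. This is a generic illustration of polyphase decimation, not the TD-MRFIR scheduler itself:

```python
import numpy as np

def naive_decimate(x, h, M):
    """Full-rate FIR followed by downsampling: most outputs are discarded."""
    return np.convolve(x, h)[::M]

def polyphase_decimate(x, h, M):
    """Equivalent result computed branch-wise; roughly 1/M of the naive work."""
    N = len(h)
    out_len = (len(x) + N - 1 + M - 1) // M       # ceil(full-conv length / M)
    h = np.concatenate([h, np.zeros((-N) % M)])   # pad so h splits into M branches
    y = np.zeros(out_len)
    for p in range(M):
        hp = h[p::M]                                 # p-th polyphase sub-filter
        xp = np.concatenate([np.zeros(p), x])[::M]   # branch input x[n*M - p]
        yb = np.convolve(xp, hp)[:out_len]
        y[:len(yb)] += yb
    return y

x = np.arange(1.0, 9.0)          # 8 input samples
h = np.ones(4) / 4               # simple averaging FIR, N = 4 taps
print(np.allclose(naive_decimate(x, h, 3), polyphase_decimate(x, h, 3)))  # True
```

Each polyphase branch runs at the decimated rate, which is exactly the inefficiency TD-MRFIR also avoids, but by scheduling per-output convolution threads instead of restructuring the filter.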
Turbulence and entrainment length scales in large wind farms.
Andersen, Søren J; Sørensen, Jens N; Mikkelsen, Robert F
2017-04-13
A number of large wind farms are modelled using large eddy simulations to elucidate the entrainment process. A reference simulation without turbines and three farm simulations with different degrees of imposed atmospheric turbulence are presented. The entrainment process is assessed using proper orthogonal decomposition, which is employed to detect the largest and most energetic coherent turbulent structures. The dominant length scales responsible for the entrainment process are shown to grow further into the wind farm, but to be limited in extent by the streamwise turbine spacing, which could be taken into account when developing farm layouts. The self-organized motions, or large coherent structures, also yield high correlations between the power productions of consecutive turbines, which can be exploited through dynamic farm control. This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Author(s).
Turbulence and entrainment length scales in large wind farms
2017-01-01
A number of large wind farms are modelled using large eddy simulations to elucidate the entrainment process. A reference simulation without turbines and three farm simulations with different degrees of imposed atmospheric turbulence are presented. The entrainment process is assessed using proper orthogonal decomposition, which is employed to detect the largest and most energetic coherent turbulent structures. The dominant length scales responsible for the entrainment process are shown to grow further into the wind farm, but to be limited in extent by the streamwise turbine spacing, which could be taken into account when developing farm layouts. The self-organized motions, or large coherent structures, also yield high correlations between the power productions of consecutive turbines, which can be exploited through dynamic farm control. This article is part of the themed issue ‘Wind energy in complex terrains’. PMID:28265028
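The POD step used in both wind-farm studies above is, computationally, a singular value decomposition of a mean-subtracted snapshot matrix. A minimal sketch on synthetic data (one planted coherent mode plus noise; details are illustrative, not the authors' simulation):

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_snapshots = 200, 50

# Synthetic snapshot matrix: a single dominant coherent structure plus noise.
grid = np.linspace(0, 2*np.pi, n_points)
mode = np.sin(grid)
snapshots = (np.outer(mode, rng.normal(size=n_snapshots))
             + 0.05*rng.normal(size=(n_points, n_snapshots)))

# POD: subtract the temporal mean, then SVD. Columns of U are the POD modes;
# squared singular values measure each mode's share of fluctuation energy.
fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
energy = s**2 / np.sum(s**2)
```

In the entrainment analysis, the leading columns of `U` play the role of the "largest and most energetic coherent turbulent structures", and the growth of their dominant length scales can be tracked downstream through the farm.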
Zhang, Zhongqiang; Yang, Xiu; Lin, Guang
2016-04-14
Sensor placement at the extrema of Proper Orthogonal Decomposition (POD) modes is efficient and leads to accurate reconstruction of the wind field from a limited number of measurements. In this paper we extend this approach to sensor placement by taking measurement errors into account and detecting possibly malfunctioning sensors. We use 48 hourly spatial wind field data sets, simulated using the Weather Research and Forecasting (WRF) model applied to the Maine Bay, to evaluate the performance of our methods. Specifically, we use an exclusion-disk strategy to distribute sensors when the extrema of POD modes are close. It turns out that this strategy can also reduce the reconstruction error from noisy measurements. In addition, using a cross-validation technique, we successfully locate the malfunctioning sensors.
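The exclusion-disk idea can be sketched as a greedy selection: candidate sites are POD-mode extrema, and a site is accepted only if it lies farther than a given radius from every sensor already chosen. The function and data below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def place_sensors(candidates, scores, radius):
    """Greedy exclusion-disk placement.

    candidates: (n, 2) array of site coordinates (e.g. POD-mode extrema);
    scores: extremum magnitude per site; radius: exclusion-disk radius.
    Returns indices of the chosen sites, strongest extrema first.
    """
    chosen = []
    for i in np.argsort(scores)[::-1]:            # strongest extrema first
        p = candidates[i]
        if all(np.linalg.norm(p - candidates[j]) > radius for j in chosen):
            chosen.append(int(i))
    return chosen

pts = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [2.0, 0.0]])
scores = np.array([1.0, 0.9, 0.8, 0.5])
print(place_sensors(pts, scores, radius=0.5))     # [0, 2, 3]
```

The near-duplicate site at (0.1, 0) is rejected: spreading sensors apart is what makes the layout robust to noisy or malfunctioning individual measurements.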
Real-Time XRD Studies of Li-O2 Electrochemical Reaction in Nonaqueous Lithium-Oxygen Battery.
Lim, Hyunseob; Yilmaz, Eda; Byon, Hye Ryung
2012-11-01
Understanding of the electrochemical process in the rechargeable Li-O2 battery has suffered from the lack of proper analytical tools, especially for identifying the chemical species and the number of electrons involved in the discharge/recharge process. Here we present a simple and straightforward analytical method for simultaneously attaining chemical and quantitative information on Li2O2 (the discharge product) and byproducts using in situ XRD measurements. By real-time monitoring of the solid-state Li2O2 peak area, the efficiency of Li2O2 formation and the number of electrons involved can be accurately evaluated during full discharge. Furthermore, by observing the sequential change of the Li2O2 peak area during recharge, we found, for the first time in an ether-based electrolyte, that the Li2O2 decomposition rate is nonlinear.
Fault Detection of Bearing Systems through EEMD and Optimization Algorithm
Lee, Dong-Han; Ahn, Jong-Hyo; Koh, Bong-Hwan
2017-01-01
This study proposes a fault detection and diagnosis method for bearing systems using ensemble empirical mode decomposition (EEMD) based feature extraction, in conjunction with particle swarm optimization (PSO), principal component analysis (PCA), and Isomap. First, a mathematical model is assumed to generate vibration signals from damaged bearing components, such as the inner race, outer race, and rolling elements. The process of decomposing vibration signals into intrinsic mode functions (IMFs) and extracting statistical features is introduced to develop a damage-sensitive parameter vector. Finally, PCA and the Isomap algorithm are used to classify and visualize this parameter vector, separating damage characteristics from those of healthy bearing components. Moreover, the PSO-based optimization algorithm improves classification performance by selecting proper weightings for the parameter vector, maximizing the separation and grouping of parameter vectors in three-dimensional space. PMID:29143772
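The feature-extraction step (statistical features per IMF, stacked into a damage-sensitive parameter vector) can be sketched as below. Since EEMD itself is not in the standard scientific stack, the "IMFs" here are stand-in components; the feature set (RMS, kurtosis, crest factor) is a common choice and an assumption, not necessarily the paper's exact list:

```python
import numpy as np

def imf_features(imfs):
    """Build a damage-sensitive parameter vector from a list of IMFs."""
    feats = []
    for imf in imfs:
        rms = np.sqrt(np.mean(imf**2))                          # energy level
        kurt = np.mean((imf - imf.mean())**4) / np.var(imf)**2  # impulsiveness
        crest = np.max(np.abs(imf)) / rms                       # peakiness
        feats.extend([rms, kurt, crest])
    return np.array(feats)

t = np.linspace(0, 1, 1000)
# Stand-in "IMFs": a bearing-tone component and a broadband noise component.
imfs = [np.sin(2*np.pi*50*t),
        0.1*np.random.default_rng(1).normal(size=t.size)]
vec = imf_features(imfs)
print(vec.shape)   # (6,)
```

Vectors like `vec` are what PCA/Isomap would then project to low dimension, with PSO tuning per-feature weights to maximize class separation.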
Toward a More Robust Pruning Procedure for MLP Networks
NASA Technical Reports Server (NTRS)
Stepniewski, Slawomir W.; Jorgensen, Charles C.
1998-01-01
Choosing a proper neural network architecture is a problem of great practical importance. Smaller models mean not only simpler designs but also lower variance in parameter estimation and network prediction. The widespread use of neural networks in modeling highlights a human-factors issue: the procedure for building neural models should find an appropriate level of model complexity in a more or less automatic fashion, making it less prone to human subjectivity. In this paper we present a Singular Value Decomposition-based node elimination technique and an enhanced implementation of the Optimal Brain Surgeon algorithm. Combining both methods creates a powerful pruning engine that can be used for tuning feedforward connectionist models. The performance of the proposed method is demonstrated by adjusting the structure of a multi-input, multi-output model used to calibrate a six-component wind tunnel strain gage.
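The core of SVD-based node elimination can be sketched in a few lines: small singular values of a layer's weight matrix flag redundant directions that can be removed with little effect on the layer's output. This is a generic illustration of the principle, not the paper's full procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
# A 10x8 "layer" that is effectively rank 3: only 3 directions carry signal.
W = rng.normal(size=(10, 3)) @ rng.normal(size=(3, 8))

U, s, Vt = np.linalg.svd(W, full_matrices=False)
keep = s > 1e-8 * s[0]            # drop numerically negligible directions
W_pruned = (U[:, keep] * s[keep]) @ Vt[keep]

print(int(keep.sum()))            # 3 significant directions survive
print(np.allclose(W, W_pruned))   # True: pruning loses nothing here
```

In a real network the threshold trades accuracy against size, and methods like Optimal Brain Surgeon additionally use second-order (Hessian) information to decide which weights to remove.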
Modular neural networks: a survey.
Auda, G; Kamel, M
1999-04-01
Modular Neural Networks (MNNs) are a rapidly growing field in artificial Neural Network (NN) research. This paper surveys the different motivations for creating MNNs: biological, psychological, hardware-related, and computational. The general stages of MNN design are then outlined and surveyed as well, viz., task decomposition techniques, learning schemes, and multi-module decision-making strategies. Advantages and disadvantages of the surveyed methods are pointed out, and an assessment of their practical potential is provided. Finally, some general recommendations for future designs are presented.
Coupled fluid-structure interaction. Part 1: Theory. Part 2: Application
NASA Technical Reports Server (NTRS)
Felippa, Carlos A.; Ohayon, Roger
1991-01-01
A general three dimensional variational principle is obtained for the motion of an acoustic field enclosed in a rigid or flexible container by the method of canonical decomposition applied to a modified form of the wave equation in the displacement potential. The general principle is specialized to a mixed two-field principle that contains the fluid displacement potential and pressure as independent fields. Semidiscrete finite element equations of motion based on this principle are derived and sample cases are given.
Characteristic-based algorithms for flows in thermo-chemical nonequilibrium
NASA Technical Reports Server (NTRS)
Walters, Robert W.; Cinnella, Pasquale; Slack, David C.; Halt, David
1990-01-01
A generalized finite-rate chemistry algorithm with Steger-Warming, Van Leer, and Roe characteristic-based flux splittings is presented in three-dimensional generalized coordinates for the Navier-Stokes equations. Attention is placed on convergence to steady-state solutions with fully coupled chemistry. Time integration schemes including explicit m-stage Runge-Kutta, implicit approximate-factorization, relaxation and LU decomposition are investigated and compared in terms of residual reduction per unit of CPU time. Practical issues such as code vectorization and memory usage on modern supercomputers are discussed.
28 CFR 16.71 - Exemption of the Office of the Deputy Attorney General System-limited access.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) and (4); (d); (e)(1), (2), (3) and (5); and (g). (d) The exemptions for the General Files System apply... disclosures from the General Files System may reveal information that is properly classified pursuant to... a certain investigation. In addition, release of records from the General Files System may reveal...
Solving Cubic Equations by Polynomial Decomposition
ERIC Educational Resources Information Center
Kulkarni, Raghavendra G.
2011-01-01
Several mathematicians struggled to solve cubic equations, and in 1515 Scipione del Ferro reportedly solved the cubic while participating in a local mathematical contest, but did not bother to publish his method. Then it was Cardano (1539) who first published the solution to the general cubic equation in his book "The Great Art, or, The Rules of…
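The general cubic the article's history leads up to can be solved in closed form. The sketch below is the classical Cardano route via the depressed cubic, offered for context rather than as the article's polynomial-decomposition method:

```python
import cmath

def solve_cubic(a, b, c, d):
    """Roots of a*x^3 + b*x^2 + c*x + d = 0 (a != 0), via the depressed cubic."""
    b, c, d = b/a, c/a, d/a
    p = c - b*b/3.0                       # substitution x = t - b/3 ...
    q = 2*b**3/27 - b*c/3 + d             # ... yields t^3 + p*t + q = 0
    disc = (q/2)**2 + (p/3)**3
    u3 = -q/2 + cmath.sqrt(disc)
    if abs(u3) < 1e-14:                   # pick the nonzero cube root to avoid u = 0
        u3 = -q/2 - cmath.sqrt(disc)
    if abs(u3) < 1e-14:                   # p = q = 0: triple root at t = 0
        return [-b/3]*3
    u = u3**(1/3)
    omega = complex(-0.5, 3**0.5/2)       # primitive cube root of unity
    roots = []
    for k in range(3):
        uk = u*omega**k
        t = uk - p/(3*uk)                 # v = -p/(3u), since u*v = -p/3
        roots.append(t - b/3)
    return roots

r = solve_cubic(1, -6, 11, -6)            # (x-1)(x-2)(x-3)
print(sorted(round(x.real) for x in r))   # [1, 2, 3]
```

Cardano's trick is the substitution t = u + v with u*v = -p/3, which collapses the depressed cubic into a quadratic in u^3; the three roots come from the three cube roots of u^3.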
On the Relation between the Linear Factor Model and the Latent Profile Model
ERIC Educational Resources Information Center
Halpin, Peter F.; Dolan, Conor V.; Grasman, Raoul P. P. P.; De Boeck, Paul
2011-01-01
The relationship between linear factor models and latent profile models is addressed within the context of maximum likelihood estimation based on the joint distribution of the manifest variables. Although the two models are well known to imply equivalent covariance decompositions, in general they do not yield equivalent estimates of the…
Domain decomposition methods for the parallel computation of reacting flows
NASA Technical Reports Server (NTRS)
Keyes, David E.
1988-01-01
Domain decomposition is a natural route to parallel computing for partial differential equation solvers. Subdomains of which the original domain of definition is comprised are assigned to independent processors at the price of periodic coordination between processors to compute global parameters and maintain the requisite degree of continuity of the solution at the subdomain interfaces. In the domain-decomposed solution of steady multidimensional systems of PDEs by finite difference methods using a pseudo-transient version of Newton iteration, the only portion of the computation which generally stands in the way of efficient parallelization is the solution of the large, sparse linear systems arising at each Newton step. For some Jacobian matrices drawn from an actual two-dimensional reacting flow problem, comparisons are made between relaxation-based linear solvers and also preconditioned iterative methods of Conjugate Gradient and Chebyshev type, focusing attention on both iteration count and global inner product count. The generalized minimum residual method with block-ILU preconditioning is judged the best serial method among those considered, and parallel numerical experiments on the Encore Multimax demonstrate for it approximately 10-fold speedup on 16 processors.
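The serial method the abstract favors, generalized minimum residual (GMRES) with ILU preconditioning, can be sketched with SciPy. The system here is a small 1-D Poisson-like matrix standing in for the reacting-flow Jacobians of the study:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100
# Tridiagonal stand-in for a sparse Newton-step Jacobian.
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A)                                  # incomplete LU factorization
M = spla.LinearOperator((n, n), matvec=ilu.solve)    # preconditioner M ~ A^-1

x, info = spla.gmres(A, b, M=M)                      # info == 0 means converged
print(info)
```

On a tridiagonal matrix the ILU factors are essentially exact, so preconditioned GMRES converges almost immediately; on real reacting-flow Jacobians the block-ILU variant cited in the abstract trades fill-in against iteration count.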
33 CFR 150.607 - What are the general safe working requirements?
Code of Federal Regulations, 2011 CFR
2011-07-01
... subchapter. (b) All machinery and equipment must be maintained in proper working order or removed. Personal Protective Equipment ... Workplace Conditions § 150.607 What are the general safe working requirements? (a) All equipment, including...
33 CFR 150.607 - What are the general safe working requirements?
Code of Federal Regulations, 2010 CFR
2010-07-01
... subchapter. (b) All machinery and equipment must be maintained in proper working order or removed. Personal Protective Equipment ... Workplace Conditions § 150.607 What are the general safe working requirements? (a) All equipment, including...
Guidelines for maintaining and managing the vaccine cold chain.
2003-10-24
In February 2002, the Advisory Committee on Immunization Practices (ACIP) and American Academy of Family Physicians (AAFP) released their revised General Recommendations on Immunization, which included recommendations on the storage and handling of immunobiologics. Because of increased concern over the potential for errors with the vaccine cold chain (i.e., maintaining proper vaccine temperatures during storage and handling to preserve potency), this notice advises vaccine providers of the importance of proper cold chain management practices. This report describes proper storage units and storage temperatures, outlines appropriate temperature-monitoring practices, and recommends steps for evaluating a temperature-monitoring program. The success of efforts against vaccine-preventable diseases is attributable in part to proper storage and handling of vaccines. Exposure of vaccines to temperatures outside the recommended ranges can affect potency adversely, thereby reducing protection from vaccine-preventable diseases. Good practices to maintain proper vaccine storage and handling can ensure that the full benefit of immunization is realized.
Gustafsson, Per E; Linander, Ida; Mosquera, Paola A
2017-01-21
Studies from Sweden and abroad have established health inequalities between heterosexual and non-heterosexual people. Few studies have examined the underpinnings of such sexual orientation inequalities in health. To expand this literature, the present study aimed to employ decomposition analysis to explain health inequalities between people with heterosexual and non-heterosexual orientation in Sweden, a country with an international reputation for heeding the human rights of non-heterosexual people. Participants (N = 23,446) came from a population-based cross-sectional survey in the four northernmost counties in Sweden in 2014. Participants completed self-administered questionnaires, covering sexual orientation, mental and general physical health, social conditions and unmet health care needs, and sociodemographic data was retrieved from total population registers. Sexual orientation inequalities in health were decomposed by Blinder-Oaxaca decomposition analysis. Results showed noticeable mental and general health inequalities between heterosexual and non-heterosexual orientation groups. Health inequalities were partly explained (total explained fraction 64-74%) by inequalities in degrading treatment (24-26% of the explained fraction), but to a considerable degree also by material conditions (38-45%) and unmet care needs (25-43%). Psychosocial experiences may be insufficient to explain and understand health inequalities by sexual orientation in a reputedly 'gay-friendly' setting. Less overt forms of structural discrimination may need to be considered to capture the pervasive material discrimination that seems to underpin the embodiment of sexual minority inequalities. This ought to be taken into consideration in research, policy-making and monitoring aiming to work towards equity in health across sexual orientations.
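The Blinder-Oaxaca method used above splits a mean outcome gap between two groups into an "explained" part (differences in characteristics) and an "unexplained" part (differences in coefficients). A minimal two-fold sketch on synthetic data, with group B's coefficients as the reference:

```python
import numpy as np

rng = np.random.default_rng(0)

def ols(X, y):
    """Least-squares coefficients, intercept included in X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def design(n, x_mean):
    x = rng.normal(loc=x_mean, size=n)
    return np.column_stack([np.ones(n), x])

# Groups differ both in covariate level (endowments) and in intercept (coefficients).
Xa, Xb = design(5000, 1.0), design(5000, 0.0)
ya = Xa @ np.array([1.0, 2.0]) + rng.normal(size=5000)
yb = Xb @ np.array([0.5, 2.0]) + rng.normal(size=5000)

ba, bb = ols(Xa, ya), ols(Xb, yb)
gap = ya.mean() - yb.mean()
explained = (Xa.mean(axis=0) - Xb.mean(axis=0)) @ bb   # endowment differences
unexplained = Xa.mean(axis=0) @ (ba - bb)              # coefficient differences
print(np.isclose(gap, explained + unexplained))        # True: exact identity
```

The decomposition is an algebraic identity of OLS with an intercept; the study's "explained fractions" (e.g. degrading treatment, material conditions, unmet care needs) correspond to the per-covariate terms of `explained`.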
Merriman, L S; Moore, T L C; Wang, J W; Osmond, D L; Al-Rubaei, A M; Smolek, A P; Blecken, G T; Viklander, M; Hunt, W F
2017-04-01
The carbon sequestration services of stormwater wet retention ponds were investigated in four different climates: the U.S., Northern Sweden, Southern Sweden, and Singapore, representing a range of annual mean temperatures, growing season lengths, and rainfall depths: geographic factors that were not statistically compared but have a great effect on carbon (C) accumulation. A chronosequence was used to estimate C accumulation rates; C accumulation and decomposition rates were not directly measured. C accumulated significantly over time in vegetated shallow water areas (0-30 cm) in the USA (78.4 g C m⁻² yr⁻¹), in vegetated temporary inundation zones in Sweden (75.8 g C m⁻² yr⁻¹), and in all ponds in Singapore (135 g C m⁻² yr⁻¹). Vegetative production appeared to exert a stronger influence on relative C accumulation rates than decomposition. Comparing the four climatic zones, the effects of increasing rainfall and growing season length (vegetative production) outweighed the effect of higher temperature on decomposition rates. Littoral vegetation was a significant source to the soil C pool relative to C sources draining from watersheds. Establishment of vegetation in the shallow water zones of retention ponds is vital to providing a C source to the soil; thus, the width of the littoral shelves containing this vegetation along the perimeter may be increased if C sequestration is a design goal. This assessment establishes that stormwater wet retention ponds can sequester C across different climate zones, with annual rainfall and growing season length generally being the important factors for C accumulation. Copyright © 2017 Elsevier B.V. All rights reserved.
Kim, Ki-Tae
2017-06-01
When analysing the relationships between income inequality, welfare regimes and aggregate health at the cross-national level, previous primary articles and systematic reviews reach inconsistent conclusions. Contrary to theoretical expectations, equal societies or the Social Democratic welfare regime do not always have the best aggregate health when compared with other relatively unequal societies or other welfare regimes. This article sheds light on these controversial subjects with a new decomposition systematic review method. The decomposition systematic review method breaks down an individual empirical article, if necessary, into multiple findings based on an article's use of the following four components: independent variable, dependent variable, method and dataset. This decomposition method extracts 107 findings from the 48 selected articles, demonstrating the dynamics between the four components. An 'age threshold effect' is recognized, above which the hypothesized relations between income inequality, welfare regimes and aggregate health reverse. The hypothesis is supported mainly for younger infant and child health indicators, but not for adult health or general health indicators such as life expectancy. Three further threshold effects (income, gender and period) have also been put forward. The negative relationship between income inequality and aggregate health, often termed the Wilkinson Hypothesis, was not generally observed for any health indicators except infant and child mortality. The Scandinavian welfare regime reveals worse-than-expected outcomes on all health indicators except infant and child mortality. © The Author 2017. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.
NASA Astrophysics Data System (ADS)
Agethen, Svenja; Knorr, Klaus-Holger
2017-04-01
More than 90% of peatlands in Europe are degraded by drainage and subsequent land use. However, the beneficial effects of functioning peatlands, above all carbon storage, have long been recognized but remain difficult to recover. Fragmentation and a surrounding of intensively used agricultural catchments with excess nutrients in air and waters further affect the recovery of sites. Under such conditions, highly competitive species such as Juncus effusus colonize restored peatlands instead of peat-forming Sphagnum. While the specific stoichiometry and chemical composition makes Sphagnum litter recalcitrant to decomposition and hence effective in carbon sequestration, we know little about dynamics involving Juncus, although this species provides organic matter in high quantity and of rather labile quality. To better understand decomposition in the context of litter quality and nutrient availability, we incubated three peat types for 70 days: I) recent, II) weakly degraded fossil, and III) earthified nutrient-rich fossil peat, each amended with one of two 13C pulse-labelled Juncus litter types (excessively fertilized "F", and nutrient-poor "NF" plants grown for three years and watered with MilliQ water only). We determined anaerobic decomposition rates, compared potential rates extrapolated from pure materials with measured rates of the mixtures, and tracked the 13C in the solid, liquid, and gaseous phases. To characterize the biogeochemical conditions, inorganic and organic electron acceptors, hydrogen and organic acids, and total enzyme activity were monitored. For characterization of dissolved organic matter we used UV-Vis and fluorescence spectroscopy (parallel factor analysis), and for solid organic matter elemental analysis and FTIR spectroscopy. There were two main structural differences between litter types: "F" litter and its leachates contained more proteinaceous components, and its C/N ratio was 20, in contrast to 60 for the "NF" litter.
However, humic components and aromaticity were higher in "F" litter. Generally, decomposition rates of litter were 5-30 times higher than those of peat. Rates in batches amended with "F" were lower than with "NF" for the respective peat, contrary to typically reported observations. Nevertheless, the 13C label suggested that in the case of peats I and III it was preferentially the litter that was decomposed, whereas decomposition of peat II was apparently stimulated when "NF" was added, even though this litter was poor in nutrients. Multiple linear regression identified specific absorption at 254 nm (SUVA), a measure of aromaticity representative of an array of inter-correlating spectroscopic features, and enzyme activity as the most important predictors of C-mineralization rates. These two parameters explained 88% of the variance. Although enzyme activity and SUVA did not correlate in the mixed assays, they did for the pure materials (R² = 0.95), suggesting an inhibitory effect of aromatic components on enzyme activity. This study confirms that litter quality is generally a major control on mineralization and hence carbon storage in peatlands. Interestingly, in the case of Juncus effusus, high nutrient availability in peat and litter did not lead to enhanced degradation of the litter itself or priming of decomposition of the surrounding peat. Furthermore, the results underline the substantial contribution of Juncus biomass to C-cycling and potentially high C-emissions in restored peatlands.
NASA Astrophysics Data System (ADS)
Parker, Edward
2017-08-01
A nonrelativistic particle released from rest at the edge of a ball of uniform charge density or mass density oscillates with simple harmonic motion. We consider the relativistic generalizations of these situations where the particle can attain speeds arbitrarily close to the speed of light; generalizing the electrostatic and gravitational cases requires special and general relativity, respectively. We find exact closed-form relations between the position, proper time, and coordinate time in both cases, and find that they are no longer harmonic, with oscillation periods that depend on the amplitude. In the highly relativistic limit of both cases, the particle spends almost all of its proper time near the turning points, but almost all of the coordinate time moving through the bulk of the ball. Buchdahl's theorem imposes nontrivial constraints on the general-relativistic case, as a ball of given density can only attain a finite maximum radius before collapsing into a black hole. This article is intended to be pedagogical, and should be accessible to those who have taken an undergraduate course in general relativity.
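The nonrelativistic baseline the abstract generalizes can be written out explicitly; inside a ball of uniform density only the interior mass attracts, giving a linear restoring force (the symbols below follow the standard textbook treatment):

```latex
% For a particle of mass m at radius r inside a ball of uniform mass density
% \rho, only the enclosed mass M(r) = \tfrac{4}{3}\pi\rho r^{3} attracts:
\begin{equation}
  m\ddot{r} \;=\; -\frac{G\,m\,M(r)}{r^{2}}
            \;=\; -\frac{G m}{r^{2}}\cdot\frac{4}{3}\pi\rho r^{3}
            \;=\; -\frac{4\pi G\rho\, m}{3}\, r ,
\end{equation}
% a linear restoring force, hence simple harmonic motion with
\begin{equation}
  \omega = \sqrt{\frac{4\pi G\rho}{3}}, \qquad
  T = \frac{2\pi}{\omega} = \sqrt{\frac{3\pi}{G\rho}} ,
\end{equation}
% independent of amplitude. The article shows that the relativistic
% generalizations lose exactly this amplitude independence.
```

The electrostatic case is formally identical with charge density replacing mass density; relativity then breaks the harmonicity, making the period amplitude-dependent as described above.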
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, F.; Banks, J. W.; Henshaw, W. D.
We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e., for a wide range of material properties, grid spacings, and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e., a single implicit solve of the temperature equation in each domain). In extreme cases (e.g., very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially, and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme.
For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. Lastly, the CHAMP scheme is also developed for general curvilinear grids, and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
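A schematic form of the generalized Robin (mixed) interface coupling described above may help fix ideas. The weights and notation here are ours, written generically; the actual CHAMP weights come from the stability optimization in the paper:

```latex
% Generic Robin coupling at the interface between domains 1 and 2, with
% temperatures T_1, T_2, conductivities k_1, k_2, normal derivative
% \partial_n, and tunable weights \alpha, \beta (notation is illustrative):
\begin{align}
  \alpha\, T_1 + \beta\, k_1 \partial_n T_1
    &= \alpha\, T_2 + \beta\, k_2 \partial_n T_2 ,\\
  \alpha\, T_2 - \beta\, k_2 \partial_n T_2
    &= \alpha\, T_1 - \beta\, k_1 \partial_n T_1 .
\end{align}
% Together these enforce continuity of temperature and of heat flux; the
% classical Dirichlet-Neumann pairing T_1 = T_2,
% k_1 \partial_n T_1 = k_2 \partial_n T_2 is recovered in limiting cases
% of the weights.
```

The point of the mixed form is that, unlike the pure Dirichlet-Neumann exchange, the weights give a free parameter that can be tuned for stability across mismatched conductivities and diffusivities.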
Code of Federal Regulations, 2010 CFR
2010-10-01
..., DEPARTMENT OF HOMELAND SECURITY MERCHANT MARINE OFFICERS AND SEAMEN MANNING REQUIREMENTS Computations § 15.801 General. The OCMI will determine the specific manning levels for vessels required to have... properly manning vessels in accordance with the applicable laws, regulations, and international conventions...
NASA Astrophysics Data System (ADS)
Feng, Zhipeng; Chu, Fulei; Zuo, Ming J.
2011-03-01
The energy separation algorithm is good at tracking instantaneous changes in the frequency and amplitude of modulated signals, but it is subject to the constraints of mono-component, narrow-band inputs. In most cases, the time-varying modulated vibration signals of machinery consist of multiple components whose instantaneous frequency trajectories on the time-frequency plane are so complicated that they overlap in the frequency domain. For such signals, conventional filters fail to extract narrow-band mono-components, and their rectangular decomposition of the time-frequency plane may split instantaneous frequency trajectories, resulting in information loss. Exploiting the ability of the generalized demodulation method to decompose multi-component signals into mono-components, an iterative generalized demodulation method is used as a preprocessing tool to separate signals into mono-components, so as to satisfy the requirements of the energy separation algorithm. With this improvement, the energy separation algorithm can be generalized to a broad range of signals, as long as the instantaneous frequency trajectories of the signal components do not intersect on the time-frequency plane. Owing to the good adaptability of the energy separation algorithm to instantaneous changes in signals and the mono-component decomposition nature of generalized demodulation, the derived time-frequency energy distribution has fine resolution and is free from cross-term interference. The good performance of the proposed time-frequency analysis is illustrated by analyses of a simulated signal and an on-site recording of the nonstationary vibration of a hydroturbine rotor during a shut-down transient, showing its potential for analyzing multi-component, time-varying modulated signals.
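The energy separation algorithm referred to above is commonly realized with the discrete Teager-Kaiser energy operator. The following sketch is a standard textbook form of the DESA-2 estimator, not code from the paper; it recovers the frequency and amplitude of a mono-component tone, which is exactly the kind of input the iterative generalized demodulation preprocessing is designed to produce.

```python
# Discrete Teager-Kaiser energy operator and DESA-2 energy separation,
# in their standard textbook forms (mono-component signals only).
import math

def teager(x):
    """Psi[x](n) = x(n)^2 - x(n-1)*x(n+1), for interior samples."""
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

def desa2(x, n):
    """Instantaneous frequency (rad/sample) and amplitude at interior index n."""
    psi_x = x[n] ** 2 - x[n - 1] * x[n + 1]
    y = [x[k + 1] - x[k - 1] for k in range(1, len(x) - 1)]  # central difference
    m = n - 1                                                # index shift into y
    psi_y = y[m] ** 2 - y[m - 1] * y[m + 1]
    omega = 0.5 * math.acos(1.0 - psi_y / (2.0 * psi_x))
    amp = 2.0 * psi_x / math.sqrt(psi_y)
    return omega, amp

# Pure tone: the estimates recover A and Omega almost exactly.
A, Omega = 1.5, 0.3
x = [A * math.cos(Omega * n) for n in range(64)]
w, a = desa2(x, 10)
print(w, a)   # ≈ 0.3, 1.5
```

For a noiseless tone the identities are exact; the operator's locality (three samples) is what gives the method the fine time resolution the abstract emphasizes.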
A decomposition theory for phylogenetic networks and incompatible characters.
Gusfield, Dan; Bansal, Vikas; Bafna, Vineet; Song, Yun S
2007-12-01
Phylogenetic networks are models of evolution that go beyond trees, incorporating non-tree-like biological events such as recombination (or more generally reticulation), which occur either in a single species (meiotic recombination) or between species (reticulation due to lateral gene transfer and hybrid speciation). The central algorithmic problems are to reconstruct a plausible history of mutations and non-tree-like events, or to determine the minimum number of such events needed to derive a given set of binary sequences, allowing one mutation per site. Meiotic recombination, reticulation and recurrent mutation can cause conflict or incompatibility between pairs of sites (or characters) of the input. Previously, we used "conflict graphs" and "incompatibility graphs" to compute lower bounds on the minimum number of recombination nodes needed, and to efficiently solve constrained cases of the minimization problem. Those results exposed the structural and algorithmic importance of the non-trivial connected components of those two graphs. In this paper, we more fully develop the structural importance of non-trivial connected components of the incompatibility and conflict graphs, proving a general decomposition theorem (Gusfield and Bansal, 2005) for phylogenetic networks. The decomposition theorem depends only on the incompatibilities in the input sequences, and hence applies to many types of phylogenetic networks, and to any biological phenomenon that causes pairwise incompatibilities. More generally, the proof of the decomposition theorem exposes a maximal embedded tree structure that exists in the network when the sequences cannot be derived on a perfect phylogenetic tree. This extends the theory of perfect phylogeny in a natural and important way. The proof is constructive and leads to a polynomial-time algorithm to find the unique underlying maximal tree structure.
We next examine and fully solve the major open question from Gusfield and Bansal (2005): is it true that for every input there must be a fully decomposed phylogenetic network that minimizes the number of recombination nodes used, over all phylogenetic networks for the input? We previously conjectured that the answer is yes. In this paper, we show that the answer is no, both for the case that only single-crossover recombination is allowed and for the case that unbounded multiple-crossover recombination is allowed. The latter case also resolves a conjecture recently stated by Huson and Klopper (2007) in the context of reticulation networks. Although the conjecture from Gusfield and Bansal (2005) is disproved in general, we show that the answer to the conjecture is yes in several natural special cases, and we establish necessary combinatorial structure that counterexamples to the conjecture must possess. We also show that counterexamples to the conjecture are rare (for the case of single-crossover recombination) in simulated data.
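The pairwise incompatibility that drives the conflict and incompatibility graphs is the classical four-gamete test on binary sequences: two sites are incompatible exactly when all four gametes 00, 01, 10, 11 occur. A minimal sketch of the graph construction (an illustrative implementation, assuming the input is a list of 0/1 strings):

```python
# Four-gamete test and incompatibility graph for binary character data.
from itertools import combinations

def incompatible(seqs, i, j):
    """Sites i and j fail the four-gamete test iff all four gametes appear."""
    gametes = {(row[i], row[j]) for row in seqs}
    return len(gametes) == 4

def incompatibility_graph(seqs):
    """Edges between every incompatible pair of sites."""
    m = len(seqs[0])
    return [(i, j) for i, j in combinations(range(m), 2)
            if incompatible(seqs, i, j)]

seqs = ["00", "01", "10", "11"]      # sites 0 and 1 show all four gametes
print(incompatibility_graph(seqs))   # [(0, 1)]
```

The decomposition theorem in the abstract concerns the non-trivial connected components of exactly this kind of graph; finding those components is a standard graph traversal once the edges are built.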
Soil fungal community shift evaluation as a potential cadaver decomposition indicator.
Chimutsa, Monica; Olakanye, Ayodeji O; Thompson, Tim J U; Ralebitso-Senior, T Komang
2015-12-01
Fungi metabolise organic matter in situ and so alter both the bio-/physico-chemical properties and microbial community structure of the ecosystem. In particular, they are reportedly responsible for specific stages of decomposition. Therefore, this study aimed to extend previous bacteria-based forensic ecogenomics research by investigating soil fungal community and cadaver decomposition interactions in microcosms with garden soil (20 kg, fresh weight) and domestic pig (Sus scrofa domesticus) carcass (5 kg, leg). Soil samples were collected at depths of 0-10 cm, 10-20 cm and 20-30 cm on days 3, 28 and 77 in the absence (control, -Pg) and presence (experimental, +Pg) of Sus scrofa domesticus and used for total DNA extraction and nested polymerase chain reaction and denaturing gradient gel electrophoresis (PCR-DGGE) profiling of the 18S rRNA gene. The Shannon-Wiener (H') community diversity indices were 1.25±0.21 and 1.49±0.30 for the control and experimental microcosms, respectively, while comparable Simpson species dominance (S) values were 0.65±0.109 and 0.75±0.015. Generally, and in contrast to parallel studies of the bacterial 16S rRNA and 16S rDNA profiles, statistical analysis (t-test) of the 18S dynamics showed no statistically significant shifts in fungal community diversity (H'; p=0.142) and dominance (S; p=0.392) during carcass decomposition, necessitating further investigations. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
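The diversity statistics quoted above follow standard formulas; a minimal sketch, assuming relative abundances derived from DGGE band intensities and the natural-log Shannon convention (the study's exact index conventions may differ):

```python
# Shannon-Wiener diversity H' and Simpson's index from relative abundances.
import math

def shannon(p):
    """H' = -sum(p_i * ln p_i) over taxa with nonzero abundance."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def simpson(p):
    """Simpson's index 1 - sum(p_i^2); higher means more even communities."""
    return 1.0 - sum(pi * pi for pi in p)

p = [0.4, 0.3, 0.2, 0.1]             # relative abundances, summing to 1
print(round(shannon(p), 3), round(simpson(p), 3))
```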
Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners
Li, Ruipeng; Saad, Yousef
2017-08-01
This study presents a parallel preconditioning method for distributed sparse linear systems, based on an approximate inverse of the original matrix, that adopts a general framework of distributed sparse matrices and exploits domain decomposition (DD) and low-rank corrections. The DD approach decouples the matrix and, once inverted, a low-rank approximation is applied by exploiting the Sherman-Morrison-Woodbury formula, which yields two variants of the preconditioner. The low-rank expansion is computed by the Lanczos procedure with reorthogonalization. Numerical experiments indicate that, when combined with Krylov subspace accelerators, this preconditioner can be efficient and robust for solving symmetric sparse linear systems. Comparisons with pARMS, a DD-based parallel incomplete LU (ILU) preconditioning method, are presented for solving Poisson's equation and linear elasticity problems.
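The low-rank correction rests on the Sherman-Morrison-Woodbury identity: if solves with a base matrix A are cheap, a rank-k correction U V^T can be folded in through a small k x k system without refactoring A. A dense NumPy sketch of that identity (a toy diagonal base matrix, not the paper's distributed implementation):

```python
# Sherman-Morrison-Woodbury solve for (A + U V^T) x = b.
import numpy as np

def smw_solve(solve_A, U, V, b):
    """Solve (A + U V^T) x = b given a fast solver for A alone."""
    AinvU = solve_A(U)                        # A^{-1} U   (n x k)
    Ainvb = solve_A(b)                        # A^{-1} b
    S = np.eye(U.shape[1]) + V.T @ AinvU      # small k x k capacitance matrix
    return Ainvb - AinvU @ np.linalg.solve(S, V.T @ Ainvb)

rng = np.random.default_rng(0)
d = rng.uniform(1.0, 2.0, 50)                 # A = diag(d): trivial to solve
A = np.diag(d)
solve_A = lambda r: r / d if r.ndim == 1 else r / d[:, None]
U = 0.1 * rng.standard_normal((50, 3))        # rank-3 correction
V = 0.1 * rng.standard_normal((50, 3))
b = rng.standard_normal(50)
x = smw_solve(solve_A, U, V, b)
print(np.allclose((A + U @ V.T) @ x, b))      # True
```

Only k extra solves with A and one k x k dense solve are needed, which is why the low-rank expansion (here random, in the paper computed by the Lanczos procedure) pays off for small k.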
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Young, Katherine C.; Pritchard, Jocelyn I.; Adelman, Howard M.; Mantay, Wayne R.
1995-01-01
This paper describes an integrated aerodynamic/dynamic/structural (IADS) optimization procedure for helicopter rotor blades. The procedure combines performance, dynamics, and structural analyses with a general-purpose optimizer using multilevel decomposition techniques. At the upper level, the structure is defined in terms of global quantities (stiffness, mass, and average strains). At the lower level, the structure is defined in terms of local quantities (detailed dimensions of the blade structure and stresses). The IADS procedure provides an optimization technique that is compatible with industrial design practices in which the aerodynamic and dynamic designs are performed at a global level and the structural design is carried out at a detailed level with considerable dialog and compromise among the aerodynamic, dynamic, and structural groups. The IADS procedure is demonstrated for several examples.
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Young, Katherine C.; Pritchard, Jocelyn I.; Adelman, Howard M.; Mantay, Wayne R.
1994-01-01
This paper describes an integrated aerodynamic, dynamic, and structural (IADS) optimization procedure for helicopter rotor blades. The procedure combines performance, dynamics, and structural analyses with a general-purpose optimizer using multilevel decomposition techniques. At the upper level, the structure is defined in terms of global quantities (stiffnesses, mass, and average strains). At the lower level, the structure is defined in terms of local quantities (detailed dimensions of the blade structure and stresses). The IADS procedure provides an optimization technique that is compatible with industrial design practices in which the aerodynamic and dynamic design is performed at a global level and the structural design is carried out at a detailed level with considerable dialogue and compromise among the aerodynamic, dynamic, and structural groups. The IADS procedure is demonstrated for several cases.
A Wavelet Polarization Decomposition Net Model for Polarimetric SAR Image Classification
NASA Astrophysics Data System (ADS)
He, Chu; Ou, Dan; Yang, Teng; Wu, Kun; Liao, Mingsheng; Chen, Erxue
2014-11-01
In this paper, a deep model based on wavelet texture is proposed for Polarimetric Synthetic Aperture Radar (PolSAR) image classification, inspired by recent successful deep learning methods. Our model is intended to learn powerful and informative representations that improve generalization for complex scene classification tasks. Given the influence of speckle noise in Polarimetric SAR images, wavelet polarization decomposition is applied first to obtain basic, discriminative texture features, which are then fed into a Deep Neural Network (DNN) to compose multi-layer, higher-level representations. We demonstrate that the model produces a powerful representation that captures information difficult to extract directly from Polarimetric SAR images, and it shows promising results in comparison with traditional SAR image classification methods on the SAR image dataset.
Efficient scheme for parametric fitting of data in arbitrary dimensions.
Pang, Ning-Ning; Tzeng, Wen-Jer; Kao, Hisen-Ching
2008-07-01
We propose an efficient scheme for parametric fitting expressed in terms of the Legendre polynomials. For continuous systems, our scheme is exact, and the derived explicit expression is very helpful for further analytical studies. For discrete systems, our scheme is almost as accurate as the method of singular value decomposition. Through a few numerical examples, we show that our algorithm costs much less CPU time and memory space than the method of singular value decomposition. Thus, our algorithm is very suitable for fitting large amounts of data. In addition, the proposed scheme can also be used to extract the global structure of fluctuating systems. We then derive the exact relation between the correlation function and the detrended variance function of fluctuating systems in arbitrary dimensions and give a general scaling analysis.
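For discrete data, least-squares fitting in a Legendre basis can be sketched with NumPy's legfit routine; this stands in for, and is not, the paper's explicit scheme:

```python
# Least-squares fit in the Legendre basis P0, P1, P2 on [-1, 1].
import numpy as np
from numpy.polynomial import legendre as L

x = np.linspace(-1, 1, 200)
y = 1.0 + 2.0 * x + 0.5 * (1.5 * x**2 - 0.5)   # 1*P0 + 2*P1 + 0.5*P2
coef = L.legfit(x, y, deg=2)                    # recover the coefficients
print(np.round(coef, 6))                        # ≈ [1.  2.  0.5]
```

Because the Legendre polynomials are orthogonal on [-1, 1], the continuous-case coefficients are simple projection integrals, which is the structure the paper's exact explicit expressions exploit.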
Fourier decomposition of payoff matrix for symmetric three-strategy games.
Szabó, György; Bodó, Kinga S; Allen, Benjamin; Nowak, Martin A
2014-10-01
In spatial evolutionary games, payoff matrices describe pair interactions among neighboring players located on a lattice. Here we introduce a way in which payoff matrices can be built up as a sum of payoff components reflecting basic symmetries. For two-strategy games this decomposition reproduces interactions characteristic of the Ising model. For three-strategy symmetric games the Fourier components can be classified into four types, representing games with self-dependent and cross-dependent payoffs, variants of three-strategy coordination, and the rock-scissors-paper (RSP) game. In the absence of the RSP component the game is a potential game. The resulting potential matrix is evaluated. The general features of these systems are analyzed when the game is expressed as a linear combination of these components.
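The paper's Fourier classification is finer than plain symmetry splitting, but as a minimal illustration, any payoff matrix separates into a symmetric part and an antisymmetric (cyclic, RSP-type) part; for rock-scissors-paper the antisymmetric part carries the whole game:

```python
# Symmetric/antisymmetric split of a three-strategy payoff matrix.
import numpy as np

A = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])          # rock-scissors-paper payoffs
sym = (A + A.T) / 2                     # symmetric part (potential-compatible)
antisym = (A - A.T) / 2                 # cyclic, RSP-type component
print(np.allclose(sym, 0), np.allclose(antisym, A))   # True True
```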
Optical systolic solutions of linear algebraic equations
NASA Technical Reports Server (NTRS)
Neuman, C. P.; Casasent, D.
1984-01-01
The philosophy and the data encodings possible in the systolic array optical processor (SAOP) are reviewed. The multitude of linear algebraic operations achievable on this architecture is examined. These operations include linear algebraic algorithms such as matrix decomposition, direct and indirect solutions, implicit and explicit methods for partial differential equations, eigenvalue and eigenvector calculations, and singular value decomposition. This architecture can be utilized to realize general techniques for solving matrix linear and nonlinear algebraic equations, least mean square error solutions, FIR filters, and nested-loop algorithms for control engineering applications. Emphasis is placed on the data flow and pipelining of operations, the design of parallel algorithms and flexible architectures, the application of these architectures to computationally intensive physical problems, error-source modeling of optical processors, and matching the computational needs of practical engineering problems to the capabilities of optical processors.
A look at scalable dense linear algebra libraries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dongarra, J.J.; Van de Geijn, R.A.; Walker, D.W.
1992-01-01
We discuss the essential design features of a library of scalable software for performing dense linear algebra computations on distributed-memory concurrent computers. The square block scattered decomposition is proposed as a flexible and general-purpose way of decomposing most, if not all, dense matrix problems. An object-oriented interface to the library permits more portable applications to be written and is easy to learn and use, since details of the parallel implementation are hidden from the user. Experiments on the Intel Touchstone Delta system with a prototype code that uses the square block scattered decomposition to perform LU factorization are presented and analyzed. It was found that the code was both scalable and efficient, performing at about 14 GFLOPS (double precision) for the largest problem considered.
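The square block scattered (block-cyclic) decomposition can be summarized by its owner map: global block (I, J) is assigned to process (I mod P, J mod Q) on a P x Q process grid, so each process holds a representative sample of the whole matrix. A minimal sketch of that mapping (illustrative, not the library's actual code):

```python
# Block-cyclic owner map for a square block scattered decomposition.
def owner(I, J, P, Q):
    """Process-grid coordinates owning global block (I, J)."""
    return (I % P, J % Q)

# 4x4 blocks scattered over a 2x2 process grid:
grid = [[owner(I, J, 2, 2) for J in range(4)] for I in range(4)]
for row in grid:
    print(row)
```

Scattering blocks cyclically rather than contiguously is what keeps the load balanced as an LU factorization proceeds and the active submatrix shrinks.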
Generalized Clifford Algebras as Algebras in Suitable Symmetric Linear Gr-Categories
NASA Astrophysics Data System (ADS)
Cheng, Tao; Huang, Hua-Lin; Yang, Yuping
2016-01-01
By viewing Clifford algebras as algebras in some suitable symmetric Gr-categories, Albuquerque and Majid were able to give a new derivation of some well-known results about Clifford algebras and to generalize them. Along the same lines, Bulacu observed that Clifford algebras are weak Hopf algebras in the aforementioned categories and obtained other interesting properties. The aim of this paper is to study generalized Clifford algebras in a similar manner and extend the results of Albuquerque, Majid and Bulacu to the generalized setting. In particular, by taking full advantage of the gauge transformations in symmetric linear Gr-categories, we derive the decomposition theorem and provide categorical weak Hopf structures for generalized Clifford algebras in a simpler, more conceptual manner.
General interference law for nonstationary, separable optical fields.
Manea, Vladimir
2009-09-01
An approach to the theory of partial coherence for nonstationary optical fields is presented. Starting with a spectral representation, a favorable decomposition of the optical signals is discussed that supports a natural extension of the mathematical formalism. The coherence functions are redefined, but still as temporal correlation functions, allowing a more general form of the interference law for partially coherent optical signals to be obtained. The general theory is applied to some relevant particular cases of nonstationary interference, namely, quasi-monochromatic beams of different frequencies and phase-modulated quasi-monochromatic beams with similar frequency spectra. All the results of the general treatment reduce to those given in the literature for the case of stationary interference.
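For reference, the stationary two-beam interference law that the paper generalizes can be sketched numerically in its standard textbook form, with |gamma| the modulus of the complex degree of coherence:

```python
# Stationary two-beam interference: I = I1 + I2 + 2*sqrt(I1*I2)*|gamma|*cos(phase).
import math

def two_beam_intensity(I1, I2, gamma_abs, phase):
    """Total intensity of two partially coherent beams."""
    return I1 + I2 + 2.0 * math.sqrt(I1 * I2) * gamma_abs * math.cos(phase)

print(two_beam_intensity(1.0, 1.0, 1.0, 0.0))        # 4.0 (constructive)
print(two_beam_intensity(1.0, 1.0, 1.0, math.pi))    # ≈ 0.0 (destructive)
```

The nonstationary generalization in the abstract replaces the stationary correlation term with redefined temporal correlation functions, but reduces to this law in the stationary limit.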
45 CFR 671.7 - General issuance criteria.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 45 Public Welfare 3 2012-10-01 2012-10-01 false General issuance criteria. 671.7 Section 671.7 Public Welfare Regulations Relating to Public Welfare (Continued) NATIONAL SCIENCE FOUNDATION WASTE REGULATION Permits § 671.7 General issuance criteria. (a) Upon receipt of a complete and properly executed application for a permit, the Director will...
Small bodies and the outer planets and Appendices 1 and 2
NASA Technical Reports Server (NTRS)
Davis, D. R.
1974-01-01
Correlations of asteroid spectral reflectivity characteristics with orbital parameters have been sought. Asteroid proper elements and extreme heliocentric distances were examined. Only general trends were noted: primarily, red asteroids and asteroids with IR (0.95 micron) absorption bands are concentrated toward the inner part of the belt. Also, asteroids with the pyroxene band tend to have larger proper eccentricities than non-banded asteroids.
Code of Federal Regulations, 2013 CFR
2013-01-01
... STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Design and Construction Control Systems § 23.671 General. (a) Each control must operate easily, smoothly, and positively enough to allow proper performance of its functions. (b) Controls must be arranged and identified to provide for...