Biney, Paul O; Gyamerah, Michael; Shen, Jiacheng; Menezes, Bruna
2015-03-01
A new multi-stage kinetic model has been developed for TGA pyrolysis of arundo, corn stover, sawdust and switch grass that accounts for the initial biomass weight (W0). The biomass samples were decomposed in a nitrogen atmosphere from 23°C to 900°C in a TGA at a single ramp rate of 20°C/min, in contrast with the isoconversional technique. The decomposition was divided into multiple stages based on the absolute local minimum values of the conversion derivative, dx/dT, obtained from the DTG curves. This resulted in three decomposition stages for arundo, corn stover and sawdust, and four stages for switch grass. A linearized multi-stage model was applied to the TGA data for each stage to determine the pre-exponential factor, activation energy, and reaction order. The activation energies ranged from 54.7 to 60.9 kJ/mol, 62.9 to 108.7 kJ/mol, and 18.4 to 257.9 kJ/mol for the first, second and third decomposition stages, respectively. Copyright © 2014 Elsevier Ltd. All rights reserved.
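The abstract does not reproduce the linearized rate expression; a common single-ramp choice consistent with an nth-order Arrhenius model is the Coats-Redfern form. The sketch below is a minimal illustration on synthetic data, not the authors' code; the helper name coats_redfern_fit and the synthetic single-stage data are our own assumptions.

```python
import numpy as np

R = 8.314           # gas constant, J/(mol K)
BETA = 20.0 / 60.0  # heating rate, K/s (20 degC/min)

def coats_redfern_fit(T, x, n):
    """Fit E (J/mol) and A (1/s) for one decomposition stage.

    T : absolute temperatures (K) within the stage
    x : conversion (W0 - W)/(W0 - Wf) within the stage, 0 < x < 1
    n : assumed reaction order
    """
    if abs(n - 1.0) < 1e-8:
        g = -np.log(1.0 - x)
    else:
        g = (1.0 - (1.0 - x) ** (1.0 - n)) / (1.0 - n)
    y = np.log(g / T ** 2)                   # Coats-Redfern linearization
    slope, intercept = np.polyfit(1.0 / T, y, 1)
    E = -slope * R                           # slope = -E/R
    A = np.exp(intercept) * BETA * E / R     # intercept = ln(A R / (beta E))
    return E, A

# toy single-stage data; real stages would be cut at local minima of dx/dT
T = np.linspace(500.0, 650.0, 60)
E_true, A_true = 60e3, 9.0e2
x = 1.0 - np.exp(-(A_true / BETA) * (R * T ** 2 / E_true)
                 * np.exp(-E_true / (R * T)))   # approximate integral solution, n = 1
E, A = coats_redfern_fit(T, np.clip(x, 1e-6, 1 - 1e-6), n=1.0)
print(f"E = {E/1e3:.1f} kJ/mol, A = {A:.2e} 1/s")
```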
A Four-Stage Hybrid Model for Hydrological Time Series Forecasting
Di, Chongli; Yang, Xiaohua; Wang, Xiaochao
2014-01-01
Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of ‘denoising, decomposition and ensemble’. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models. PMID:25111782
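A minimal sketch of the four-stage pipeline (denoise, decompose, predict components, ensemble), not the authors' implementation. It assumes the third-party PyEMD package for EMD/EEMD and uses scikit-learn's KernelRidge and LinearRegression as stand-ins for the RBFNN and LNN; the one-step-ahead lag embedding in make_lagged is our simplification.

```python
import numpy as np
from PyEMD import EMD, EEMD                      # assumed: PyEMD (EMD-signal) package
from sklearn.kernel_ridge import KernelRidge     # RBF-kernel stand-in for the RBFNN
from sklearn.linear_model import LinearRegression

def make_lagged(series, lags=4):
    """Turn a 1-D series into (X, y) pairs: `lags` past values -> next value."""
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    return X, series[lags:]

rng = np.random.default_rng(0)
t = np.linspace(0, 20, 600)
flow = np.sin(t) + 0.5 * np.sin(3.1 * t) + 0.02 * t + 0.3 * rng.standard_normal(t.size)

# Stage 1: denoise by removing the highest-frequency IMF found by plain EMD.
imfs = EMD().emd(flow)
denoised = flow - imfs[0]

# Stage 2: EEMD decomposition of the denoised series into IMFs + residual.
components = EEMD(trials=50).eemd(denoised)

# Stage 3: one RBF regressor per component, one-step-ahead prediction.
preds, actuals = [], []
for comp in components:
    X, y = make_lagged(comp)
    model = KernelRidge(kernel="rbf", alpha=1e-3).fit(X[:-50], y[:-50])
    preds.append(model.predict(X[-50:]))
    actuals.append(y[-50:])

# Stage 4: linear ensemble of the component forecasts.
P = np.column_stack(preds)
target = np.sum(actuals, axis=0)                 # reconstruction of the denoised series
ensemble = LinearRegression().fit(P[:-10], target[:-10])
rmse = np.sqrt(np.mean((ensemble.predict(P[-10:]) - target[-10:]) ** 2))
print("test RMSE:", rmse)
```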
NASA Astrophysics Data System (ADS)
Yahyaei, Mohsen; Bashiri, Mahdi
2017-12-01
The hub location problem arises in a variety of domains such as transportation and telecommunication systems. In many real-world situations, hub facilities are subject to disruption. This paper deals with the multiple allocation hub location problem in the presence of facility failures. To model the problem, a two-stage stochastic formulation is developed. In the proposed model, the number of scenarios grows exponentially with the number of facilities. To alleviate this issue, two approaches are applied simultaneously. The first approach is to apply sample average approximation (SAA) to approximate the two-stage stochastic problem via sampling. Then, by applying a multi-cut Benders decomposition approach, computational performance is enhanced. Numerical studies show the effective performance of the SAA in terms of optimality gap for small problem instances with numerous scenarios. Moreover, the performance of multi-cut Benders decomposition is assessed through comparison with the classic version, and the computational results reveal the superiority of the multi-cut approach regarding computational time and number of iterations.
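A generic illustration of the sample average approximation (SAA) idea on a toy two-stage capacity problem, not the paper's hub-location formulation; the cost figures and demand distribution are assumptions, and scipy's linprog stands in for whichever solver the authors used.

```python
import numpy as np
from scipy.optimize import linprog

def saa_capacity(n_scenarios, build_cost=1.0, shortfall_cost=5.0, seed=0):
    """SAA of: min build_cost*x + E[shortfall_cost*u(d)], with recourse u >= d - x, u >= 0."""
    rng = np.random.default_rng(seed)
    d = rng.uniform(50.0, 150.0, n_scenarios)        # sampled demand scenarios

    # decision vector z = [x, u_1, ..., u_S]
    c = np.concatenate(([build_cost], np.full(n_scenarios, shortfall_cost / n_scenarios)))
    # constraints  -x - u_s <= -d_s   (i.e.  u_s >= d_s - x)
    A = np.zeros((n_scenarios, 1 + n_scenarios))
    A[:, 0] = -1.0
    A[np.arange(n_scenarios), 1 + np.arange(n_scenarios)] = -1.0
    res = linprog(c, A_ub=A, b_ub=-d, bounds=[(0, None)] * (1 + n_scenarios))
    return res.x[0], res.fun

# the sampled approximation tightens as the number of scenarios grows
for S in (10, 100, 1000):
    x_opt, cost = saa_capacity(S)
    print(f"S={S:5d}  first-stage capacity={x_opt:7.2f}  approx. cost={cost:8.2f}")
```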
Zhang, Jinzhi; Chen, Tianju; Wu, Jingli; Wu, Jinhu
2015-09-01
Thermal decomposition of six representative components of municipal solid waste (MSW, including lignin, printing paper, cotton, rubber, polyvinyl chloride (PVC) and cabbage) was investigated by thermogravimetric-mass spectroscopy (TG-MS) under a steam atmosphere. Compared with TG and derivative thermogravimetric (DTG) curves under an N2 atmosphere, thermal decomposition of MSW components under a steam atmosphere was divided into pyrolysis and gasification stages. In the pyrolysis stage, the shapes of the TG and DTG curves under steam were almost the same as those under N2. In the gasification stage, the presence of steam led to a greater mass loss because of steam partial oxidation of the char residue. The evolution profiles of H2, CH4, CO and CO2 were consistent with the DTG curves in terms of the appearance of peaks and the relevant stages over the whole temperature range, and steam partial oxidation of the char residue promoted the generation of more gas products in the high-temperature range. The multi-Gaussian distributed activation energy model (DAEM) was shown to plausibly describe the thermal decomposition behaviours of MSW components under a steam atmosphere. Copyright © 2015 Elsevier Ltd. All rights reserved.
Optimal Multi-scale Demand-side Management for Continuous Power-Intensive Processes
NASA Astrophysics Data System (ADS)
Mitra, Sumit
With the advent of deregulation in electricity markets and an increasing share of intermittent power generation sources, the profitability of industrial consumers that operate power-intensive processes has become directly linked to the variability in energy prices. Thus, for industrial consumers that are able to adjust to the fluctuations, time-sensitive electricity prices (as part of so-called Demand-Side Management (DSM) in the smart grid) offer potential economic incentives. In this thesis, we introduce optimization models and decomposition strategies for the multi-scale Demand-Side Management of continuous power-intensive processes. On an operational level, we derive a mode formulation for scheduling under time-sensitive electricity prices. The formulation is applied to air separation plants and cement plants to minimize the operating cost. We also describe how a mode formulation can be used for industrial combined heat and power plants that are co-located at integrated chemical sites to increase operating profit by adjusting their steam and electricity production according to their inherent flexibility. Furthermore, a robust optimization formulation is developed to address the uncertainty in electricity prices by accounting for correlations and multiple ranges in the realization of the random variables. On a strategic level, we introduce a multi-scale model that provides an understanding of the value of flexibility of the current plant configuration and the value of additional flexibility in terms of retrofits for Demand-Side Management under product demand uncertainty. The integration of multiple time scales leads to large-scale two-stage stochastic programming problems, for which we need to apply decomposition strategies in order to obtain a good solution within a reasonable amount of time. Hence, we describe two decomposition schemes that can be applied to solve two-stage stochastic programming problems: first, a hybrid bi-level decomposition scheme with novel Lagrangean-type and subset-type cuts to strengthen the relaxation; and second, an enhanced cross-decomposition scheme that integrates Benders decomposition and Lagrangean decomposition on a scenario basis. To demonstrate the effectiveness of our developed methodology, we provide several industrial case studies throughout the thesis.
Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition
NASA Technical Reports Server (NTRS)
Zheng, Jason Xin; Nguyen, Kayla; He, Yutao
2010-01-01
Multirate (decimation/interpolation) filters are among the essential signal processing components in spaceborne instruments where Finite Impulse Response (FIR) filters are often used to minimize nonlinear group delay and finite-precision effects. Cascaded (multi-stage) designs of Multi-Rate FIR (MRFIR) filters are further used for large rate-change ratios, in order to lower the required throughput while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this paper, an alternative representation and implementation technique, called TD-MRFIR (Thread Decomposition MRFIR), is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. Each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. The technical details of TD-MRFIR will be explained, first showing its applicability to the implementation of downsampling, upsampling, and resampling FIR filters, and then describing a general strategy to optimally allocate the number of filter taps. A particular FPGA design of multi-stage TD-MRFIR for the L-band radar of NASA's SMAP (Soil Moisture Active Passive) instrument is demonstrated, and its implementation results in several targeted FPGA devices are summarized in terms of the functional (bit width, fixed-point error) and performance (timing closure, resource usage, and power estimation) parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Fei; Huang, Yongxi
2018-02-04
Here, we develop a multistage stochastic mixed-integer model to support biofuel supply chain expansion under evolving uncertainties. By utilizing the block-separable recourse property, we reformulate the multistage program as an equivalent two-stage program and solve it using an enhanced nested decomposition method with maximal non-dominated cuts. We conduct extensive numerical experiments and demonstrate the application of the model and algorithm in a case study based on South Carolina settings. The value of the multistage stochastic programming approach is also explored by comparing the model solution with the counterparts of an expected-value-based deterministic model and a two-stage stochastic model.
Beyond Low Rank + Sparse: Multi-scale Low Rank Matrix Decomposition
Ong, Frank; Lustig, Michael
2016-01-01
We present a natural generalization of the recent low rank + sparse matrix decomposition and consider the decomposition of matrices into components of multiple scales. Such decomposition is well motivated in practice as data matrices often exhibit local correlations in multiple scales. Concretely, we propose a multi-scale low rank modeling that represents a data matrix as a sum of block-wise low rank matrices with increasing scales of block sizes. We then consider the inverse problem of decomposing the data matrix into its multi-scale low rank components and approach the problem via a convex formulation. Theoretically, we show that under various incoherence conditions, the convex program recovers the multi-scale low rank components either exactly or approximately. Practically, we provide guidance on selecting the regularization parameters and incorporate cycle spinning to reduce blocking artifacts. Experimentally, we show that the multi-scale low rank decomposition provides a more intuitive decomposition than conventional low rank methods and demonstrate its effectiveness in four applications, including illumination normalization for face images, motion separation for surveillance videos, multi-scale modeling of the dynamic contrast enhanced magnetic resonance imaging and collaborative filtering exploiting age information. PMID:28450978
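A greedy, non-convex illustration of the block-wise low-rank building block used by the multi-scale model, not the convex program analyzed in the paper; the block sizes, ranks, and coarse-to-fine extraction order are our choices.

```python
import numpy as np

def blockwise_lowrank(X, block, rank):
    """Approximate X by keeping `rank` singular values inside each block x block tile."""
    out = np.zeros_like(X)
    for i in range(0, X.shape[0], block):
        for j in range(0, X.shape[1], block):
            tile = X[i:i + block, j:j + block]
            U, s, Vt = np.linalg.svd(tile, full_matrices=False)
            s[rank:] = 0.0
            out[i:i + block, j:j + block] = (U * s) @ Vt
    return out

rng = np.random.default_rng(1)
n = 64
# globally low-rank background + locally correlated 8x8 blocks + noise
global_part = np.outer(rng.standard_normal(n), rng.standard_normal(n))
local_part = blockwise_lowrank(rng.standard_normal((n, n)), block=8, rank=1)
X = global_part + local_part + 0.01 * rng.standard_normal((n, n))

# greedy coarse-to-fine extraction: global scale first, then the 8x8 scale
scale_global = blockwise_lowrank(X, block=n, rank=1)
scale_local = blockwise_lowrank(X - scale_global, block=8, rank=1)
residual = X - scale_global - scale_local
print("relative residual:", np.linalg.norm(residual) / np.linalg.norm(X))
```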
Guo, Qiang; Qi, Liangang
2017-04-10
When multiple types of interfering signals coexist, the performance of interference suppression methods based on the time and frequency domains degrades seriously, while antenna-array techniques require a sufficiently large array and incur high hardware costs. To better combat multi-type interference in GNSS receivers, this paper proposes a cascaded multi-type interference mitigation method combining improved double chain quantum genetic matching pursuit (DCQGMP)-based sparse decomposition and an MPDR beamformer. The key idea behind the proposed method is that the multiple types of interfering signals can be excised by taking advantage of their sparse features in different domains. In the first stage, the single-tone (multi-tone) and linear chirp interfering signals are canceled by sparse decomposition according to their sparsity in an over-complete dictionary. In order to improve the timeliness of matching pursuit (MP)-based sparse decomposition, a DCQGMP is introduced by combining an improved double chain quantum genetic algorithm (DCQGA) and the MP algorithm, and the DCQGMP algorithm is extended to handle multi-channel signals according to the correlation among the signals in different channels. In the second stage, the minimum power distortionless response (MPDR) beamformer is utilized to nullify the residual interferences (e.g., wideband Gaussian noise interferences). Several simulation results show that the proposed method not only improves the interference-mitigation degrees of freedom (DoF) of the antenna array, but also effectively deals with interference arriving from the same direction as the GNSS signal, provided it can be sparsely represented in the over-complete dictionary. Moreover, it does not introduce serious distortion into the navigation signal. PMID:28394290
Reactive power planning under high penetration of wind energy using Benders decomposition
Xu, Yan; Wei, Yanli; Fang, Xin; ...
2015-11-05
This study addresses the optimal allocation of reactive power (volt-ampere reactive, VAR) sources under the paradigm of high penetration of wind energy. Reactive power planning (RPP) in this particular condition involves a high level of uncertainty because of wind power characteristics. To properly model wind generation uncertainty, a multi-scenario framework optimal power flow that considers the voltage stability constraint under the worst wind scenario and transmission N-1 contingency is developed. The objective of RPP in this study is to minimise the total cost, including the VAR investment cost and the expected generation cost. Therefore, RPP under this condition is modelled as a two-stage stochastic programming problem to optimise the VAR location and size in one stage, then to minimise the fuel cost in the other stage, and eventually to find the globally optimal RPP results iteratively. Benders decomposition is used to solve this model, with an upper-level problem (master problem) for VAR allocation optimisation and a lower-level problem (sub-problem) for generation cost minimisation. The impact of the potential reactive power support from doubly-fed induction generators (DFIG) is also analysed. Lastly, case studies on the IEEE 14-bus and 118-bus systems are provided to verify the proposed method.
NASA Astrophysics Data System (ADS)
Han, Charles
Institute for Advanced Study, Shenzhen University, Shenzhen, China
In memory of Professor John Kohn at this symposium, a time-resolved SANS study of the early stage of spinodal decomposition kinetics of a deuterated polycarbonate/poly(methylmethacrylate) blend will be reviewed, which gives a clear proof of the Cahn-Hilliard-Cook theory. This early stage of spinodal decomposition kinetics has been observed starting from dimensions (q^-1) comparable to the single-chain radius of gyration, Rg, for a binary polymer mixture. The results provide an unequivocal quantitative measure of the virtual structure factor, S(q, ∞); the relationship of qm and qc through the rate of growth, Cahn-plot analysis, and the singularity in S(q, ∞); and the growth of fluctuations for qRg < 1 and intra-chain relaxation for qRg > 1. A more recent study using mixed suspensions of polystyrene microspheres and poly(N-isopropylacrylamide) microgels as a molecular model system, which has a long-range repulsive interaction potential and a short-range attractive potential, will also be discussed. In this model system, dynamic gelation, the transition to a soft glass state, and the cross-over to a hard glass state will be demonstrated and compared with available theories for the glass transition in structural materials. Acknowledgements go to the Polymers Division and NCNR of NIST, and to ICCAS, Beijing, China. Also to my colleagues: M. Motowoka, H. Jinnai, T. Hashimoto, G.C. Yuan and H. Cheng.
LP and NLP decomposition without a master problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuller, D.; Lan, B.
We describe a new algorithm for decomposition of linear programs and a class of convex nonlinear programs, together with theoretical properties and some test results. Its most striking feature is the absence of a master problem; the subproblems pass primal and dual proposals directly to one another. The algorithm is defined for multi-stage LPs or NLPs, in which the constraints link the current stage's variables to earlier stages' variables. This problem class is general enough to include many problem structures that do not immediately suggest stages, such as block diagonal problems. The basic algorithm is derived for two-stage problems and extended to more than two stages through nested decomposition. The main theoretical result assures convergence, to within any preset tolerance of the optimal value, in a finite number of iterations. This asymptotic convergence result contrasts with the results of limited tests on LPs, in which the optimal solution is apparently found exactly, i.e., to machine accuracy, in a small number of iterations. The tests further suggest that for LPs, the new algorithm is faster than the simplex method applied to the whole problem, as long as the stages are linked loosely; that the speedup over the simplex method improves as the number of stages increases; and that the algorithm is more reliable than nested Dantzig-Wolfe or Benders' methods in its improvement over the simplex method.
NASA Astrophysics Data System (ADS)
Hao, Zhenhua; Cui, Ziqiang; Yue, Shihong; Wang, Huaxiang
2018-06-01
As an important means in electrical impedance tomography (EIT), multi-frequency phase-sensitive demodulation (PSD) can be viewed as a matched filter for measurement signals and as an optimal linear filter in the case of Gaussian-type noise. However, the additive noise usually possesses impulsive noise characteristics, so it is a challenging task to reduce the impulsive noise in multi-frequency PSD effectively. In this paper, an approach for impulsive noise reduction in multi-frequency PSD of EIT is presented. Instead of linear filters, a singular value decomposition filter is employed as the pre-stage filtering module prior to PSD, which has advantages of zero phase shift, little distortion, and a high signal-to-noise ratio (SNR) in digital signal processing. Simulation and experimental results demonstrated that the proposed method can effectively eliminate the influence of impulsive noise in multi-frequency PSD, and it was capable of achieving a higher SNR and smaller demodulation error.
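A minimal sketch of the two-stage idea (SVD-based pre-filtering, then phase-sensitive demodulation), not the authors' EIT implementation; the Hankel-style trajectory matrix, the rank-2 truncation, and the single-carrier test signal are assumptions.

```python
import numpy as np

fs, f0, n = 50_000.0, 10_000.0, 2000       # sampling rate (Hz), carrier (Hz), samples
t = np.arange(n) / fs
rng = np.random.default_rng(0)
clean = 0.8 * np.sin(2 * np.pi * f0 * t + 0.3)       # amplitude 0.8, phase 0.3 rad
impulses = np.zeros(n)
impulses[rng.integers(0, n, 20)] = rng.normal(0.0, 5.0, 20)   # impulsive noise
signal = clean + impulses + 0.05 * rng.standard_normal(n)

# SVD pre-filter: rank-2 truncation of a Hankel-style trajectory matrix
L = 64
H = np.lib.stride_tricks.sliding_window_view(signal, L)   # rows = length-L windows
U, s, Vt = np.linalg.svd(H, full_matrices=False)
H_f = (U[:, :2] * s[:2]) @ Vt[:2]                          # a single sinusoid spans rank 2
filtered = np.zeros(n)
counts = np.zeros(n)
for i in range(H_f.shape[0]):                              # diagonal averaging back to 1-D
    filtered[i:i + L] += H_f[i]
    counts[i:i + L] += 1
filtered /= counts

def psd_demod(x):
    """Phase-sensitive demodulation against quadrature references at f0."""
    I = 2.0 * np.mean(x * np.cos(2 * np.pi * f0 * t))
    Q = 2.0 * np.mean(x * np.sin(2 * np.pi * f0 * t))
    return np.hypot(I, Q), np.arctan2(I, Q)               # (amplitude, phase)

print("raw      amplitude/phase:", psd_demod(signal))
print("filtered amplitude/phase:", psd_demod(filtered))
```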
Michaud, Jean-Philippe; Moreau, Gaétan
2011-01-01
Using pig carcasses exposed over 3 years in rural fields during spring, summer, and fall, we studied the relationship between decomposition stages and degree-day accumulation (i) to verify the predictability of the decomposition stages used in forensic entomology to document carcass decomposition and (ii) to build a degree-day accumulation model applicable to various decomposition-related processes. Results indicate that the decomposition stages can be predicted with accuracy from temperature records and that a reliable degree-day index can be developed to study decomposition-related processes. The development of degree-day indices opens new doors for researchers and allows for the application of inferential tools unaffected by climatic variability, as well as for the inclusion of statistics in a science that is primarily descriptive and in need of validation methods in courtroom proceedings. © 2010 American Academy of Forensic Sciences.
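An illustrative accumulated-degree-day (ADD) calculation with hypothetical stage thresholds; the paper's fitted thresholds and indices are not reproduced here.

```python
import numpy as np

def accumulated_degree_days(daily_mean_temps, base=0.0):
    """Sum daily mean temperatures above a base threshold (degree-days)."""
    t = np.asarray(daily_mean_temps, dtype=float)
    return np.cumsum(np.clip(t - base, 0.0, None))

# hypothetical ADD thresholds separating decomposition stages (illustrative only)
STAGE_THRESHOLDS = [(0, "fresh"), (50, "bloat"), (180, "active decay"),
                    (400, "advanced decay"), (800, "dry/remains")]

def stage_from_add(add):
    label = STAGE_THRESHOLDS[0][1]
    for threshold, name in STAGE_THRESHOLDS:
        if add >= threshold:
            label = name
    return label

rng = np.random.default_rng(3)
temps = 15 + 8 * np.sin(np.linspace(0, 3, 60)) + rng.normal(0, 2, 60)  # 60 days
add = accumulated_degree_days(temps, base=0.0)
for day in (1, 10, 30, 60):
    print(f"day {day:2d}: ADD = {add[day - 1]:6.1f}  ->  {stage_from_add(add[day - 1])}")
```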
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salo, Heikki; Laurikainen, Eija; Laine, Jarkko
The Spitzer Survey of Stellar Structure in Galaxies (S4G) is a deep 3.6 and 4.5 μm imaging survey of 2352 nearby (<40 Mpc) galaxies. We describe the S4G data analysis pipeline 4, which is dedicated to two-dimensional structural surface brightness decompositions of 3.6 μm images, using GALFIT3.0. Besides automatic 1-component Sérsic fits, and 2-component Sérsic bulge + exponential disk fits, we present human-supervised multi-component decompositions, which include, when judged appropriate, a central point source, bulge, disk, and bar components. Comparison of the fitted parameters indicates that multi-component models are needed to obtain reliable estimates for the bulge Sérsic index and bulge-to-total light ratio (B/T), confirming earlier results. Here, we describe the preparation of the input data for the decompositions, give examples of our decomposition strategy, and describe the data products released via IRSA and via our web page (www.oulu.fi/astronomy/S4G-PIPELINE4/MAIN). These products include all the input data and decomposition files in electronic form, making it easy to extend the decompositions to suit specific science purposes. We also provide our IDL-based visualization tools (GALFIDL) developed for displaying/running GALFIT-decompositions, as well as our mask editing procedure (MASK-EDIT) used in data preparation. A detailed analysis of the bulge, disk, and bar parameters derived from multi-component decompositions will be published separately.
Hébert-Dufresne, Laurent; Grochow, Joshua A; Allard, Antoine
2016-08-18
We introduce a network statistic that measures structural properties at the micro-, meso-, and macroscopic scales, while still being easy to compute and interpretable at a glance. Our statistic, the onion spectrum, is based on the onion decomposition, which refines the k-core decomposition, a standard network fingerprinting method. The onion spectrum is exactly as easy to compute as the k-cores: It is based on the stages at which each vertex gets removed from a graph in the standard algorithm for computing the k-cores. Yet, the onion spectrum reveals much more information about a network, and at multiple scales; for example, it can be used to quantify node heterogeneity, degree correlations, centrality, and tree- or lattice-likeness. Furthermore, unlike the k-core decomposition, the combined degree-onion spectrum immediately gives a clear local picture of the network around each node which allows the detection of interesting subgraphs whose topological structure differs from the global network organization. This local description can also be leveraged to easily generate samples from the ensemble of networks with a given joint degree-onion distribution. We demonstrate the utility of the onion spectrum for understanding both static and dynamic properties on several standard graph models and on many real-world networks.
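A plain-Python sketch of the peeling procedure behind the onion spectrum (the layer index is recorded as vertices are removed in the standard k-core algorithm); the data-structure choices are ours, not the authors' code.

```python
from collections import defaultdict

def onion_decomposition(adj):
    """Return (coreness, layer) for an undirected graph given as {node: set_of_neighbors}."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    coreness, layer = {}, {}
    core, lay = 0, 0
    while degree:
        core = max(core, min(degree.values()))      # core value never decreases
        lay += 1
        shell = [v for v, d in degree.items() if d <= core]   # one onion layer
        for v in shell:
            coreness[v], layer[v] = core, lay
            for u in adj[v]:
                if u in degree:
                    degree[u] -= 1
                if u in adj:
                    adj[u].discard(v)
            del degree[v], adj[v]
    return coreness, layer

# small example: a triangle (2-core) attached to a two-edge path
graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}
coreness, layer = onion_decomposition(graph)
spectrum = defaultdict(int)
for v, lv in layer.items():
    spectrum[lv] += 1                  # onion spectrum: vertices removed per layer
print("coreness:", coreness)
print("onion spectrum (layer -> count):", dict(spectrum))
```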
Simulating the Thermal Response of High Explosives on Time Scales of Days to Microseconds
NASA Astrophysics Data System (ADS)
Yoh, Jack J.; McClelland, Matthew A.
2004-07-01
We present an overview of computational techniques for simulating the thermal cookoff of high explosives using a multi-physics hydrodynamics code, ALE3D. Recent improvements to the code have aided our computational capability in modeling the response of energetic materials systems exposed to extreme thermal environments, such as fires. We consider an idealized model process for a confined explosive involving the transition from slow heating to rapid deflagration in which the time scale changes from days to hundreds of microseconds. The heating stage involves thermal expansion and decomposition according to an Arrhenius kinetics model while a pressure-dependent burn model is employed during the explosive phase. We describe and demonstrate the numerical strategies employed to make the transition from slow to fast dynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iliopoulos, AS; Sun, X; Pitsianis, N
Purpose: To address and lift the limited degree of freedom (DoF) of globally bilinear motion components such as those based on principal components analysis (PCA), for encoding and modeling volumetric deformation motion. Methods: We provide a systematic approach to obtaining a multi-linear decomposition (MLD) and associated motion model from deformation vector field (DVF) data. We had previously introduced MLD for capturing multi-way relationships between DVF variables, without being restricted by the bilinear component format of PCA-based models. PCA-based modeling is commonly used for encoding patient-specific deformation from planning 4D-CT images, and aiding on-board motion estimation during radiotherapy. However, the bilinear space-time decomposition inherently limits the DoF of such models by the small number of respiratory phases. While this limit is not reached in model studies using analytical or digital phantoms with low-rank motion, it compromises modeling power in the presence of relative motion, asymmetries and hysteresis, etc., which are often observed in patient data. Specifically, a low-DoF model will spuriously couple incoherent motion components, compromising its adaptability to on-board deformation changes. By the multi-linear format of extracted motion components, MLD-based models can encode higher-DoF deformation structure. Results: We conduct mathematical and experimental comparisons between PCA- and MLD-based models. A set of temporally-sampled analytical trajectories provides a synthetic, high-rank DVF; trajectories correspond to respiratory and cardiac motion factors, including different relative frequencies and spatial variations. Additionally, a digital XCAT phantom is used to simulate a lung lesion deforming incoherently with respect to the body, which adheres to a simple respiratory trend. In both cases, coupling of incoherent motion components due to a low model DoF is clearly demonstrated. Conclusion: Multi-linear decomposition can enable decoupling of distinct motion factors in high-rank DVF measurements. This may improve motion model expressiveness and adaptability to on-board deformation, aiding model-based image reconstruction for target verification. NIH Grant No. R01-184173.
Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition
NASA Technical Reports Server (NTRS)
Kobayashi, Kayla N.; He, Yutao; Zheng, Jason X.
2011-01-01
Multi-rate finite impulse response (MRFIR) filters are among the essential signal-processing components in spaceborne instruments where finite impulse response filters are often used to minimize nonlinear group delay and finite precision effects. Cascaded (multistage) designs of MRFIR filters are further used for large rate-change ratios in order to lower the required throughput, while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this innovation, an alternative representation and implementation technique called TD-MRFIR (Thread Decomposition MRFIR) is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. A naive implementation of a decimation filter consisting of a full FIR followed by a downsampling stage is very inefficient, as most of the computations performed by the FIR stage are discarded through downsampling. In fact, only 1/M of the total computations are useful (M being the decimation factor). Polyphase decomposition provides an alternative view of decimation filters, where the downsampling occurs before the FIR stage, and the outputs are viewed as the sum of M sub-filters with length of N/M taps. Although this approach leads to more efficient filter designs, in general the implementation is not straightforward if the number of multipliers needs to be minimized. In TD-MRFIR, each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. Each of the threads completes when a convolution result (filter output value) is computed, and is activated when the first input of the convolution becomes available. Thus, the new threads get spawned at exactly the rate of N/M, where N is the total number of taps, and M is the decimation factor. Existing threads retire at the same rate of N/M. The implementation of an MRFIR is thus transformed into a problem to statically schedule the minimum number of multipliers such that all threads can be completed on time. Solving the static scheduling problem is rather straightforward if one examines the Thread Decomposition Diagram, which is a table-like diagram that has rows representing computation threads and columns representing time. The control logic of the MRFIR can be implemented using simple counters. Instead of decomposing MRFIRs into subfilters as suggested by polyphase decomposition, the thread decomposition diagrams transform the problem into a familiar one of static scheduling, which can be easily solved as the input rate is constant.
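A numerical check of the central idea (compute only the outputs that survive decimation, one finite convolution per kept output), written in NumPy; it is not the FPGA thread-scheduling implementation, and the filter prototype is an arbitrary choice.

```python
import numpy as np

def naive_decimator(x, h, M):
    """Full FIR then keep every M-th sample (most results are thrown away)."""
    full = np.convolve(x, h, mode="full")[: len(x)]
    return full[::M]

def thread_decimator(x, h, M):
    """Compute only the kept outputs: each output is one finite convolution ('thread')."""
    N = len(h)
    y = []
    for n in range(0, len(x), M):           # kept output corresponds to input sample n
        lo = max(0, n - N + 1)
        seg = x[lo:n + 1][::-1]             # x[n], x[n-1], ... newest first
        y.append(np.dot(seg, h[: len(seg)]))
    return np.array(y)

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
h = np.hanning(32); h /= h.sum()            # simple 32-tap low-pass prototype
M = 4
assert np.allclose(naive_decimator(x, h, M), thread_decimator(x, h, M))
print("identical outputs; the thread view performs about 1/M of the multiply-accumulates")
```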
A posteriori error estimation for multi-stage Runge–Kutta IMEX schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaudhry, Jehanzeb H.; Collins, J. B.; Shadid, John N.
2017-02-05
Implicit–Explicit (IMEX) schemes are widely used time integration methods for approximating solutions to a large class of problems. In this work, we develop accurate a posteriori error estimates of a quantity-of-interest for approximations obtained from multi-stage IMEX schemes. This is done by first defining a finite element method that is nodally equivalent to an IMEX scheme, then using typical methods for adjoint-based error estimation. Furthermore, the use of a nodally equivalent finite element method allows a decomposition of the error into multiple components, each describing the effect of a different portion of the method on the total error in a quantity-of-interest.
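For context, such adjoint-based estimates take the generic weighted-residual form below (notation ours, a hedged sketch rather than the paper's exact estimator); the nodally equivalent finite element construction is what justifies splitting the integral into per-stage contributions.

```latex
% u_h: IMEX approximation of u'(t) = f(u,t);  \varphi: adjoint solution associated
% with the quantity of interest Q (generic form, not the paper's exact estimator)
\[
  Q(u) - Q(u_h) \;\approx\; \int_0^T \bigl( f(u_h,t) - \dot{u}_h(t) \bigr)\cdot \varphi(t)\,dt
  \;=\; \sum_j \int_{I_j} \bigl( f(u_h,t) - \dot{u}_h(t) \bigr)\cdot \varphi(t)\,dt ,
\]
% where each interval/stage contribution over I_j attributes part of the total error
% to a different portion of the multi-stage IMEX method.
```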
Keough, Natalie; Myburgh, Jolandie; Steyn, Maryna
2017-07-01
Decomposition studies often use pigs as proxies for human cadavers. However, differences in decomposition sequences/rates relative to humans have not been scientifically examined. Descriptions of five main decomposition stages (humans) were developed and refined by Galloway and later by Megyesi. However, whether these changes/processes are alike in pigs is unclear. Any differences can have significant effects when pig models are used for human PMI estimation. This study compared human decomposition models to the changes observed in pigs. Twenty pigs (50-90 kg) were decomposed over five months and decompositional features recorded. Total body scores (TBS) were calculated. Significant differences were observed during early decomposition between pigs and humans. An amended scoring system to be used in future studies was developed. Standards for PMI estimation derived from porcine models may not directly apply to humans and may need adjustment. Porcine models, however, remain valuable to study variables influencing decomposition. © 2016 American Academy of Forensic Sciences.
Young Children's Thinking about Decomposition: Early Modeling Entrees to Complex Ideas in Science
ERIC Educational Resources Information Center
Ero-Tolliver, Isi; Lucas, Deborah; Schauble, Leona
2013-01-01
This study was part of a multi-year project on the development of elementary students' modeling approaches to understanding the life sciences. Twenty-three first grade students conducted a series of coordinated observations and investigations on decomposition, a topic that is rarely addressed in the early grades. The instruction included…
Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui
2014-01-01
This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm is feasible and effective. PMID:25207870
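A generic single-atom matching pursuit over a small sinusoid dictionary with an attenuation-coefficient stopping rule, in the spirit of CD-SaMP; the dictionary, threshold, and test signal are our assumptions, not the paper's gearbox dictionaries.

```python
import numpy as np

def matching_pursuit(signal, dictionary, attenuation_stop=0.05, max_iter=50):
    """Greedy single-atom matching pursuit with unit-norm atoms as dictionary rows.

    Stops when the best atom's coefficient falls below `attenuation_stop`
    times the first (largest) coefficient -- a simple attenuation criterion.
    """
    residual = signal.astype(float).copy()
    recon = np.zeros_like(residual)
    first_coeff = None
    for _ in range(max_iter):
        coeffs = dictionary @ residual
        k = int(np.argmax(np.abs(coeffs)))
        c = coeffs[k]
        if first_coeff is None:
            first_coeff = abs(c)
        if abs(c) < attenuation_stop * first_coeff:
            break
        residual -= c * dictionary[k]
        recon += c * dictionary[k]
    return recon, residual

n = 512
t = np.arange(n)
freqs = np.arange(1, 64)
atoms = np.concatenate([np.cos(2 * np.pi * f * t / n)[None] for f in freqs] +
                       [np.sin(2 * np.pi * f * t / n)[None] for f in freqs])
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)   # crude Fourier-like dictionary

rng = np.random.default_rng(0)
clean = 3.0 * atoms[5] + 1.5 * atoms[70]
noisy = clean + 0.05 * rng.standard_normal(n)
recon, resid = matching_pursuit(noisy, atoms)
print("SNR after MP (dB):", 10 * np.log10(np.sum(clean**2) / np.sum((recon - clean)**2)))
```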
Marais-Werner, A; Myburgh, J; Meyer, A; Nienaber, W C; Steyn, M
2017-07-01
Burial of remains is an important factor when one attempts to establish the post-mortem interval, as it reduces, and in extreme cases excludes, oviposition by Diptera species. This in turn leads to modification of the decomposition process. The aim of this study was to record decomposition patterns of buried remains using a pig model. The pattern of decomposition was evaluated at different intervals and recorded according to existing guidelines. In order to contribute to our knowledge of decomposition in different settings, a quantifiable approach was followed. Results indicated that the early stages of decomposition occurred rapidly for buried remains, within 7-33 days. Between 14 and 33 days, buried pigs displayed common features associated with the early to middle stages of decomposition, such as discoloration and bloating. From 33 to 90 days, advanced decomposition manifested on the remains, after which little further change was observed from approximately 90 to 183 days after interment. Throughout this study, total body scores remained higher for surface remains. Overall, buried pigs followed a similar pattern of decomposition to that of surface remains, although at a much slower rate when compared with similar post-mortem intervals in surface remains. In this study, the decomposition patterns and rates of buried remains were mostly influenced by limited insect activity and adipocere formation, which reduces the rate of decay in a conducive environment (i.e., burial in soil).
3D tensor-based blind multispectral image decomposition for tumor demarcation
NASA Astrophysics Data System (ADS)
Kopriva, Ivica; Peršin, Antun
2010-03-01
Blind decomposition of a multi-spectral fluorescent image for tumor demarcation is formulated by exploiting the tensorial structure of the image. The first contribution of the paper is the identification of the matrix of spectral responses and the 3D tensor of spatial distributions of the materials present in the image from Tucker3 or PARAFAC models of the 3D image tensor. The second contribution of the paper is a clustering-based estimation of the number of materials present in the image as well as the matrix of their spectral profiles. The 3D tensor of the spatial distributions of the materials is recovered through 3-mode multiplication of the multi-spectral image tensor and the inverse of the matrix of spectral profiles. The tensor representation of the multi-spectral image preserves its local spatial structure, which is lost, due to the vectorization process, when matrix factorization-based decomposition methods (such as non-negative matrix factorization and independent component analysis) are used. Superior performance of the tensor-based image decomposition over matrix factorization-based decompositions is demonstrated on an experimental red-green-blue (RGB) image with known ground truth as well as on RGB fluorescent images of a skin tumor (basal cell carcinoma).
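The recovery step described above (3-mode multiplication of the image tensor by the inverse of the spectral-profile matrix) on synthetic data; here the spectral matrix is assumed known rather than estimated by clustering, and the helper name mode3_multiply is ours.

```python
import numpy as np

def mode3_multiply(tensor, matrix):
    """Multiply a (H, W, C) tensor along its 3rd mode by `matrix` of shape (K, C)."""
    H, W, C = tensor.shape
    unfolded = tensor.reshape(H * W, C)           # mode-3 unfolding (pixels x channels)
    return (unfolded @ matrix.T).reshape(H, W, matrix.shape[0])

rng = np.random.default_rng(0)
H, W, n_materials, n_bands = 32, 32, 3, 8
spatial = rng.random((H, W, n_materials))          # true spatial distributions
spectra = rng.random((n_bands, n_materials))       # spectral profile of each material

image = mode3_multiply(spatial, spectra)           # multi-spectral image: (H, W, n_bands)

# recover spatial distributions via the mode-3 product with the pseudo-inverse of spectra
recovered = mode3_multiply(image, np.linalg.pinv(spectra))
print("max abs error:", np.max(np.abs(recovered - spatial)))
```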
Multi-period natural gas market modeling: Applications, stochastic extensions and solution approaches
NASA Astrophysics Data System (ADS)
Egging, Rudolf Gerardus
This dissertation develops deterministic and stochastic multi-period mixed complementarity problems (MCP) for the global natural gas market, as well as solution approaches for large-scale stochastic MCP. The deterministic model is unique in the combination of the level of detail of the actors in the natural gas markets and the transport options, the detailed regional and global coverage, the multi-period approach with endogenous capacity expansions for transportation and storage infrastructure, the seasonal variation in demand and the representation of market power according to Nash-Cournot theory. The model is applied to several scenarios for the natural gas market that cover the formation of a cartel by the members of the Gas Exporting Countries Forum [1], a low availability of unconventional gas in the United States, and cost reductions in long-distance gas transportation. The results provide insights into how different regions are affected by various developments, in terms of production, consumption, traded volumes, prices and profits of market participants. The stochastic MCP is developed and applied to a global natural gas market problem with four scenarios for a time horizon until 2050, with nineteen regions and containing 78,768 variables. The scenarios vary in the possibility of a gas market cartel formation and varying depletion rates of gas reserves in the major gas importing regions. Outcomes for hedging decisions of market participants show some significant shifts in the timing and location of infrastructure investments, thereby affecting local market situations. A first application of Benders decomposition (BD) is presented to solve a large-scale stochastic MCP for the global gas market with many hundreds of first-stage capacity expansion variables and market players exerting various levels of market power. The largest problem solved successfully using BD contained 47,373 variables, of which 763 were first-stage variables; however, using BD did not result in shorter solution times relative to solving the extensive forms. Larger problems, up to 117,481 variables, were solved in extensive form, but not when applying BD, due to numerical issues. It is discussed how BD could significantly reduce the solution time of large-scale stochastic models, but various challenges remain and more research is needed to assess the potential of Benders decomposition for solving large-scale stochastic MCP.
[1] www.gecforum.org
Model reconstruction using POD method for gray-box fault detection
NASA Technical Reports Server (NTRS)
Park, H. G.; Zak, M.
2003-01-01
This paper describes using the Proper Orthogonal Decomposition (POD) method to create low-order dynamical models for the Model Filter component of Beacon-based Exception Analysis for Multi-missions (BEAM).
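A generic POD reduction of a snapshot matrix via the SVD, not the BEAM Model Filter itself; the snapshot dimensions and energy cut-off are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_snapshots, r = 200, 400, 5

# synthetic snapshot matrix driven by a few latent modes (columns = time snapshots)
modes_true = rng.standard_normal((n_states, r))
amplitudes = rng.standard_normal((r, n_snapshots))
snapshots = modes_true @ amplitudes + 0.01 * rng.standard_normal((n_states, n_snapshots))

# POD: left singular vectors of the mean-centered snapshot matrix
mean_state = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_state, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.999) + 1)        # smallest basis capturing 99.9% energy
basis = U[:, :k]

# low-order model state: project full states onto the POD basis and reconstruct
reduced = basis.T @ (snapshots - mean_state)        # k x n_snapshots
reconstructed = basis @ reduced + mean_state
rel_err = np.linalg.norm(reconstructed - snapshots) / np.linalg.norm(snapshots)
print(f"retained modes: {k}, relative reconstruction error: {rel_err:.2e}")
```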
NASA Astrophysics Data System (ADS)
Chang Chien, Kuang-Che; Fetita, Catalin; Brillet, Pierre-Yves; Prêteux, Françoise; Chang, Ruey-Feng
2009-02-01
Multi-detector computed tomography (MDCT) has high accuracy and specificity in volumetrically capturing serial images of the lung. It increases the capability of computerized classification of lung tissue in medical research. This paper proposes a three-dimensional (3D) automated approach based on mathematical morphology and fuzzy logic for quantifying and classifying interstitial lung diseases (ILDs) and emphysema. The proposed methodology is composed of several stages: (1) an image multi-resolution decomposition scheme based on a 3D morphological filter is used to detect and analyze the different density patterns of the lung texture. Then, (2) for each pattern in the multi-resolution decomposition, six features are computed, for which fuzzy membership functions define a probability of association with a pathology class. Finally, (3) for each pathology class, the probabilities are combined according to the weight assigned to each membership function, and two threshold values are used to decide the final class of the pattern. The proposed approach was tested on 10 MDCT cases, and the classification accuracy was: emphysema, 95%; fibrosis/honeycombing, 84%; and ground glass, 97%.
Reactivity continuum modeling of leaf, root, and wood decomposition across biomes
NASA Astrophysics Data System (ADS)
Koehler, Birgit; Tranvik, Lars J.
2015-07-01
Large amounts of carbon dioxide are released to the atmosphere during organic matter decomposition. Yet the large-scale and long-term regulation of this critical process in global carbon cycling by litter chemistry and climate remains poorly understood. We used reactivity continuum (RC) modeling to analyze the decadal data set of the "Long-term Intersite Decomposition Experiment," in which fine litter and wood decomposition was studied in eight biome types (224 time series). In 32 and 46% of all sites, the litter content of acid-unhydrolyzable residue (AUR, formerly referred to as lignin) and the AUR/nitrogen ratio, respectively, retarded initial decomposition rates. This initial rate-retarding effect generally disappeared within the first year of decomposition, and rate-stimulating effects of nutrients and a rate-retarding effect of the carbon/nitrogen ratio became more prevalent. For needles and leaves/grasses, the influence of climate on decomposition decreased over time. For fine roots, the climatic influence was initially smaller but increased toward later-stage decomposition. The climate decomposition index was the strongest climatic predictor of decomposition. Initial decomposition rates varied as much across litter categories as across biome types, suggesting that future changes in decomposition may be dominated by warming-induced changes in plant community composition. In general, the RC model parameters successfully predicted independent decomposition data for the different litter-biome combinations (196 time series). We argue that parameterization of large-scale decomposition models with RC model parameters, as opposed to the currently common discrete multiexponential models, could significantly improve their mechanistic foundation and predictive accuracy across climate zones and litter categories.
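The standard reactivity continuum decay form (gamma-distributed reactivities) fitted with scipy; the litter-bag numbers below are illustrative, not LIDET estimates.

```python
import numpy as np
from scipy.optimize import curve_fit

def rc_model(t, a, v):
    """Reactivity continuum model: mass fraction remaining g(t) = (a / (a + t))**v."""
    return (a / (a + t)) ** v

# illustrative litter-bag data: time (years) vs. fraction of initial mass remaining
t_obs = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 5.0, 7.0, 10.0])
m_obs = np.array([1.00, 0.82, 0.71, 0.58, 0.50, 0.41, 0.36, 0.31])

(a_hat, v_hat), _ = curve_fit(rc_model, t_obs, m_obs, p0=(1.0, 1.0))
print(f"a = {a_hat:.2f} yr, v = {v_hat:.2f}")

# apparent decay rate k(t) = v / (a + t) decreases with time: fast early, slow late
for t in (0.5, 2.0, 10.0):
    print(f"k({t:4.1f} yr) = {v_hat / (a_hat + t):.3f} per yr")
```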
The initial changes of fat deposits during the decomposition of human and pig remains.
Notter, Stephanie J; Stuart, Barbara H; Rowe, Rebecca; Langlois, Neil
2009-01-01
The early stages of adipocere formation in both pig and human adipose tissue in aqueous environments have been investigated. The aims were to determine the short-term changes occurring to fat deposits during decomposition and to ascertain the suitability of pigs as models for human decomposition. Subcutaneous adipose tissue from both species after immersion in distilled water for up to six months was compared using Fourier transform infrared spectroscopy, gas chromatography-mass spectrometry and inductively coupled plasma-mass spectrometry. Changes associated with decomposition were observed, but no adipocere was formed during the initial month of decomposition for either tissue type. Early-stage adipocere formation in pig samples during later months was detected. The variable time courses for adipose tissue decomposition were attributed to differences in the distribution of total fatty acids between species. Variations in the amount of sodium, potassium, calcium, and magnesium were also detected between species. The study shows that differences in total fatty acid composition between species need to be considered when interpreting results from experimental decomposition studies using pigs as human body analogs.
Peng, Yan; Yang, Wanqin; Yue, Kai; Tan, Bo; Huang, Chunping; Xu, Zhenfeng; Ni, Xiangyin; Zhang, Li; Wu, Fuzhong
2018-06-17
Plant litter decomposition in forest soils and watersheds is an important source of phosphorus (P) for plants in forest ecosystems. Understanding P dynamics during litter decomposition in forested aquatic and terrestrial ecosystems is therefore of great importance for better understanding nutrient cycling across forest landscapes. However, despite the large number of studies addressing litter decomposition, generalizations across aquatic and terrestrial ecosystems regarding the temporal dynamics of P loss during litter decomposition remain elusive. We conducted a two-year field experiment using the litterbag method in both aquatic (streams and riparian zones) and terrestrial (forest floor) ecosystems in an alpine forest on the eastern Tibetan Plateau. By using multigroup comparisons within a structural equation modeling (SEM) framework with different litter mass-loss intervals, we explicitly assessed the direct and indirect effects of several biotic and abiotic drivers on P loss across different decomposition stages. The results suggested that (1) P concentration in decomposing litter showed similar patterns of early increase and later decrease across different species and ecosystem types; (2) P loss shared a common hierarchy of drivers across different ecosystem types, with litter chemical dynamics mainly having direct effects but environment and initial litter quality having both direct and indirect effects; (3) when assessed at the temporal scale, the effects of initial litter quality appeared to increase in late decomposition stages, while litter chemical dynamics showed consistent, significant effects in almost all decomposition stages across aquatic and terrestrial ecosystems; and (4) microbial diversity showed significant effects on P loss, but its effects were smaller compared with other drivers. Our results highlight the importance of including spatiotemporal variation and indicate the possibility of integrating aquatic and terrestrial decomposition into a common framework for the future construction of models that account for the temporal dynamics of P in decomposing litter. Copyright © 2018 Elsevier B.V. All rights reserved.
An Optimal Strategy for Accurate Bulge-to-disk Decomposition of Disk Galaxies
NASA Astrophysics Data System (ADS)
Gao, Hua; Ho, Luis C.
2017-08-01
The development of two-dimensional (2D) bulge-to-disk decomposition techniques has shown their advantages over traditional one-dimensional (1D) techniques, especially for galaxies with non-axisymmetric features. However, the full potential of 2D techniques has yet to be fully exploited. Secondary morphological features in nearby disk galaxies, such as bars, lenses, rings, disk breaks, and spiral arms, are seldom accounted for in 2D image decompositions, even though some image-fitting codes, such as GALFIT, are capable of handling them. We present detailed, 2D multi-model and multi-component decomposition of high-quality R-band images of a representative sample of nearby disk galaxies selected from the Carnegie-Irvine Galaxy Survey, using the latest version of GALFIT. The sample consists of five barred and five unbarred galaxies, spanning Hubble types from S0 to Sc. Traditional 1D decomposition is also presented for comparison. In detailed case studies of the 10 galaxies, we successfully model the secondary morphological features. Through a comparison of best-fit parameters obtained from different input surface brightness models, we identify morphological features that significantly impact bulge measurements. We show that nuclear and inner lenses/rings and disk breaks must be properly taken into account to obtain accurate bulge parameters, whereas outer lenses/rings and spiral arms have a negligible effect. We provide an optimal strategy to measure bulge parameters of typical disk galaxies, as well as prescriptions to estimate realistic uncertainties of them, which will benefit subsequent decomposition of a larger galaxy sample.
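A minimal 1D bulge (Sersic) + disk (exponential) surface-brightness fit of the kind referred to as "traditional 1D decomposition" above, not the 2D GALFIT setup; the profile parameters are synthetic and b_n uses the common approximation b_n ≈ 1.9992 n - 0.3271.

```python
import numpy as np
from scipy.optimize import curve_fit

def sersic(r, I_e, r_e, n):
    """Sersic bulge profile; b_n uses the common approximation 1.9992*n - 0.3271."""
    b_n = 1.9992 * n - 0.3271
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

def bulge_disk(r, I_e, r_e, n, I_0, h):
    return sersic(r, I_e, r_e, n) + I_0 * np.exp(-r / h)   # Sersic bulge + exponential disk

rng = np.random.default_rng(0)
r = np.linspace(0.5, 60.0, 120)                            # radius (arbitrary units)
truth = bulge_disk(r, 50.0, 2.0, 2.5, 20.0, 12.0)          # synthetic profile
observed = truth * (1.0 + 0.02 * rng.standard_normal(r.size))

p0 = (30.0, 3.0, 2.0, 10.0, 10.0)
popt, _ = curve_fit(bulge_disk, r, observed, p0=p0, bounds=(0.0, np.inf))
print(dict(zip(("I_e", "r_e", "n", "I_0", "h"), np.round(popt, 2))))

# bulge-to-total light ratio from the fitted components (simple 2*pi*r weighted sum)
dr = r[1] - r[0]
bulge_flux = np.sum(2 * np.pi * r * sersic(r, *popt[:3])) * dr
total_flux = np.sum(2 * np.pi * r * bulge_disk(r, *popt)) * dr
print("B/T ~", round(bulge_flux / total_flux, 2))
```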
An Optimal Strategy for Accurate Bulge-to-disk Decomposition of Disk Galaxies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao Hua; Ho, Luis C.
The development of two-dimensional (2D) bulge-to-disk decomposition techniques has shown their advantages over traditional one-dimensional (1D) techniques, especially for galaxies with non-axisymmetric features. However, the full potential of 2D techniques has yet to be fully exploited. Secondary morphological features in nearby disk galaxies, such as bars, lenses, rings, disk breaks, and spiral arms, are seldom accounted for in 2D image decompositions, even though some image-fitting codes, such as GALFIT, are capable of handling them. We present detailed, 2D multi-model and multi-component decomposition of high-quality R-band images of a representative sample of nearby disk galaxies selected from the Carnegie-Irvine Galaxy Survey, using the latest version of GALFIT. The sample consists of five barred and five unbarred galaxies, spanning Hubble types from S0 to Sc. Traditional 1D decomposition is also presented for comparison. In detailed case studies of the 10 galaxies, we successfully model the secondary morphological features. Through a comparison of best-fit parameters obtained from different input surface brightness models, we identify morphological features that significantly impact bulge measurements. We show that nuclear and inner lenses/rings and disk breaks must be properly taken into account to obtain accurate bulge parameters, whereas outer lenses/rings and spiral arms have a negligible effect. We provide an optimal strategy to measure bulge parameters of typical disk galaxies, as well as prescriptions to estimate realistic uncertainties of them, which will benefit subsequent decomposition of a larger galaxy sample.
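To make the decomposition idea above concrete, the following is a minimal, illustrative 1D sketch (not GALFIT, and not the authors' 2D pipeline): it fits a Sersic bulge plus an exponential disk to a mock surface-brightness profile with scipy and reports the bulge-to-total ratio. All profile parameters and the mock data are assumptions made for illustration.

# Minimal 1D bulge-to-disk decomposition sketch (not GALFIT): fit a Sersic bulge
# plus an exponential disk to a mock surface-brightness profile with scipy.
import numpy as np
from scipy.optimize import curve_fit

def sersic(r, I_e, r_e, n):
    # Sersic profile; b_n uses the common approximation b_n ~ 2n - 1/3.
    b_n = 2.0 * n - 1.0 / 3.0
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

def exp_disk(r, I_0, h):
    return I_0 * np.exp(-r / h)

def bulge_plus_disk(r, I_e, r_e, n, I_0, h):
    return sersic(r, I_e, r_e, n) + exp_disk(r, I_0, h)

# Mock data: a concentrated bulge embedded in an extended disk, with noise.
r = np.linspace(0.5, 60.0, 120)                     # radius in arbitrary pixel units
truth = bulge_plus_disk(r, 200.0, 3.0, 2.5, 50.0, 15.0)
rng = np.random.default_rng(0)
obs = truth + rng.normal(0.0, 2.0, r.size)

p0 = [150.0, 2.0, 2.0, 40.0, 10.0]                  # initial guesses
popt, pcov = curve_fit(bulge_plus_disk, r, obs, p0=p0,
                       bounds=([0, 0.1, 0.3, 0, 1], [1e4, 30, 8, 1e4, 60]))
I_e, r_e, n, I_0, h = popt
bulge_flux = np.trapz(2 * np.pi * r * sersic(r, I_e, r_e, n), r)
total_flux = np.trapz(2 * np.pi * r * bulge_plus_disk(r, *popt), r)
print("Sersic index n = %.2f, B/T = %.2f" % (n, bulge_flux / total_flux))

In the 2D case treated in the paper, the same idea is extended with additional components (bars, lenses, rings, disk breaks) and fitted to images rather than radial profiles.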
NASA Astrophysics Data System (ADS)
Xiao, Zhili; Tan, Chao; Dong, Feng
2017-08-01
Magnetic induction tomography (MIT) is a promising technique for continuous monitoring of intracranial hemorrhage due to its contactless nature, low cost and capacity to penetrate the high-resistivity skull. The inter-tissue inductive coupling increases with frequency, which may lead to errors in multi-frequency imaging at high frequency. The effect of inter-tissue inductive coupling was investigated to improve the multi-frequency imaging of hemorrhage. An analytical model of inter-tissue inductive coupling based on the equivalent circuit was established. A set of new multi-frequency decomposition equations separating the phase shift of hemorrhage from other brain tissues was derived by employing the coupling information to improve the multi-frequency imaging of intracranial hemorrhage. The decomposition error and imaging error are both decreased after considering the inter-tissue inductive coupling information. The study reveals that the introduction of inter-tissue inductive coupling can reduce the errors of multi-frequency imaging, promoting the development of intracranial hemorrhage monitoring by multi-frequency MIT.
Adaptive multi-step Full Waveform Inversion based on Waveform Mode Decomposition
NASA Astrophysics Data System (ADS)
Hu, Yong; Han, Liguo; Xu, Zhuo; Zhang, Fengjiao; Zeng, Jingwen
2017-04-01
Full Waveform Inversion (FWI) can be used to build high-resolution velocity models, but there are still many challenges in processing seismic field data. The most difficult problem is how to recover the long-wavelength components of subsurface velocity models when the seismic data lack low-frequency information and long offsets. To solve this problem, we propose to use the Waveform Mode Decomposition (WMD) method to reconstruct low-frequency information for FWI and obtain a smooth model, so that the initial-model dependence of FWI can be reduced. In this paper, we use the adjoint-state method to calculate the gradient for Waveform Mode Decomposition Full Waveform Inversion (WMDFWI). Through illustrative numerical examples, we show that the low-frequency information reconstructed by the WMD method is very reliable. WMDFWI, in combination with the adaptive multi-step inversion strategy, obtains more faithful and accurate final inversion results. Numerical examples show that even if the initial velocity model is far from the true model and lacks low-frequency information, we can still obtain good inversion results with the WMD method. Numerical examples of an anti-noise test show that the adaptive multi-step inversion strategy for WMDFWI has a strong ability to resist Gaussian noise. The WMD method is promising for land seismic FWI, because it can reconstruct the low-frequency information, lower the dominant frequency in the adjoint source, and has a strong ability to resist noise.
NASA Astrophysics Data System (ADS)
Fu, Yao; Song, Jeong-Hoon
2014-08-01
Hardy stress definition has been restricted to pair potentials and embedded-atom method potentials due to the basic assumptions in the derivation of a symmetric microscopic stress tensor. Force decomposition required in the Hardy stress expression becomes obscure for multi-body potentials. In this work, we demonstrate the invariance of the Hardy stress expression for a polymer system modeled with multi-body interatomic potentials including up to four-atom interactions, by applying central force decomposition of the atomic force. The balance of momentum has been demonstrated to be valid theoretically and tested under various numerical simulation conditions. The validity of momentum conservation justifies the extension of the Hardy stress expression to multi-body potential systems. The computed Hardy stress has been observed to converge to the virial stress of the system with increasing spatial averaging volume. This work provides a feasible and reliable linkage between the atomistic and continuum scales for multi-body potential systems.
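As a point of reference for the convergence result mentioned above, the following is a minimal sketch of the configurational (potential) part of the virial stress for a small Lennard-Jones cluster, i.e., the quantity the Hardy stress is reported to approach with increasing averaging volume; the pairwise forces here are central by construction. The Lennard-Jones parameters, lattice spacing, and sign convention are illustrative assumptions, and the kinetic contribution is omitted.

# Minimal sketch: configurational part of the virial stress for a small
# Lennard-Jones cluster. Sign and normalization conventions vary in the
# literature; the kinetic term is omitted. Parameters are illustrative.
import numpy as np

def lj_force(rij, epsilon=1.0, sigma=1.0):
    # Force on atom i due to atom j for a 12-6 Lennard-Jones pair potential;
    # the force is central, i.e. directed along rij = ri - rj.
    r = np.linalg.norm(rij)
    mag = 24.0 * epsilon * (2.0 * (sigma / r) ** 12 - (sigma / r) ** 6) / r
    return mag * rij / r

def virial_stress(positions, volume):
    stress = np.zeros((3, 3))
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            rij = positions[i] - positions[j]
            stress += np.outer(rij, lj_force(rij))   # pairwise contribution r_ij (x) f_ij
    return stress / volume

# 27 atoms on a simple cubic lattice with spacing 1.2*sigma.
pos = 1.2 * np.array([[x, y, z] for x in range(3)
                      for y in range(3) for z in range(3)], dtype=float)
print(np.round(virial_stress(pos, volume=(3 * 1.2) ** 3), 4))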
Multi-focus image fusion and robust encryption algorithm based on compressive sensing
NASA Astrophysics Data System (ADS)
Xiao, Di; Wang, Lan; Xiang, Tao; Wang, Yong
2017-06-01
Multi-focus image fusion schemes have been studied in recent years. However, little work has been done in multi-focus image transmission security. This paper proposes a scheme that can reduce data transmission volume and resist various attacks. First, multi-focus image fusion based on wavelet decomposition can generate complete scene images and optimize the perception of the human eye. The fused images are sparsely represented with DCT and sampled with structurally random matrix (SRM), which reduces the data volume and realizes the initial encryption. Then the obtained measurements are further encrypted to resist noise and crop attack through combining permutation and diffusion stages. At the receiver, the cipher images can be jointly decrypted and reconstructed. Simulation results demonstrate the security and robustness of the proposed scheme.
Research on Multi-Temporal PolInSAR Modeling and Applications
NASA Astrophysics Data System (ADS)
Hong, Wen; Pottier, Eric; Chen, Erxue
2014-11-01
In the study of theory and processing methodology, we apply accurate topographic phase to the Freeman-Durden decomposition for PolInSAR data. On the other hand, we present a TomoSAR imaging method based on convex optimization regularization theory. The target decomposition and reconstruction performance will be evaluated by multi-temporal L- and P-band fully polarimetric images acquired in BioSAR campaigns. In the study of hybrid Quad-Pol system performance, we analyse the expression of the range ambiguity to signal ratio (RASR) in this architecture. Simulations are used to verify its advantage in the improvement of range ambiguities.
Research on Multi-Temporal PolInSAR Modeling and Applications
NASA Astrophysics Data System (ADS)
Hong, Wen; Pottier, Eric; Chen, Erxue
2014-11-01
In the study of theory and processing methodology, we apply accurate topographic phase to the Freeman-Durden decomposition for PolInSAR data. On the other hand, we present a TomoSAR imaging method based on convex optimization regularization theory. The target decomposition and reconstruction performance will be evaluated by multi-temporal L- and P-band fully polarimetric images acquired in BioSAR campaigns. In the study of hybrid Quad-Pol system performance, we analyse the expression of the range ambiguity to signal ratio (RASR) in this architecture. Simulations are used to verify its advantage in the improvement of range ambiguities.
Ogawa, Mitsuteru; Yoshida, Naohiro
2005-11-01
The intramolecular distribution of stable isotopes in nitrous oxide that is emitted during coal combustion was analyzed using an isotopic ratio mass spectrometer equipped with a modified ion collector system (IRMS). The coal was combusted in a test furnace fitted with a single burner and the flue gases were collected at the furnace exit following removal of SO(x), NO(x), and H2O in order to avoid the formation of artifact nitrous oxide. The nitrous oxide in the flue gases proved to be enriched in 15N relative to the fuel coal. In air-staged combustion experiments, the staged air ratio was controlled over a range of 0 (unstaged combustion), 20%, and 30%. As the staged air ratio increased, the delta15N and delta18O of the nitrous oxide in the flue gases became depleted. The central nitrogen of the nitrous oxide molecule, N(alpha), was enriched in 15N relative to that occupying the end position of the molecule, N(beta), but this preference, expressed as delta15N(alpha)-delta15N(beta), decreased with the increase in the staged air ratio. Thermal decomposition and hydrogen reduction experiments carried out using a tube reactor allowed qualitative estimates of the kinetic isotope effects that occurred during the decomposition of the nitrous oxide and quantitative estimates of the extent to which the nitrous oxide had decomposed. The site preference of nitrous oxide increased with the extent of the decomposition reactions. Assuming that no site preference exists in nitrous oxide before decomposition, the behavior of nitrous oxide in the test combustion furnace was analyzed using the Rayleigh equation based on a single distillation model. As a result, the extent of decomposition of nitrous oxide was estimated as 0.24-0.26 during the decomposition reaction governed by the thermal decomposition and as 0.35-0.38 during the decomposition reaction governed by the hydrogen reduction in staged combustion. The intramolecular distribution of nitrous oxide can be a valuable parameter to estimate the extent of decomposition reaction and to understand the reaction pathway of nitrous oxide at the high temperature.
Wolfe, Benjamin E.; Tulloss, Rodham E.; Pringle, Anne
2012-01-01
Microbial symbioses have evolved repeatedly across the tree of life, but the genetic changes underlying transitions to symbiosis are largely unknown, especially for eukaryotic microbial symbionts. We used the genus Amanita, an iconic group of mushroom-forming fungi engaged in ectomycorrhizal symbioses with plants, to identify both the origins and potential genetic changes maintaining the stability of this mutualism. A multi-gene phylogeny reveals one origin of the symbiosis within Amanita, with a single transition from saprotrophic decomposition of dead organic matter to biotrophic dependence on host plants for carbon. Associated with this transition are the losses of two cellulase genes, each of which plays a critical role in extracellular decomposition of organic matter. However a third gene, which acts at later stages in cellulose decomposition, is retained by many, but not all, ectomycorrhizal species. Experiments confirm that symbiotic Amanita species have lost the ability to grow on complex organic matter and have therefore lost the capacity to live in forest soils without carbon supplied by a host plant. Irreversible losses of decomposition pathways are likely to play key roles in the evolutionary stability of these ubiquitous mutualisms. PMID:22815710
MacAulay, Lauren E; Barr, Darryl G; Strongman, Doug B
2009-03-01
Previous studies document characteristics of gunshot wounds shortly after they were inflicted. This study was conducted to determine if the early stages of decomposition obscure or alter the physical surface characteristics of gunshot wounds, thereby affecting the quantity and quality of information retrievable from such evidence. The study was conducted in August and September, 2005 in Nova Scotia, Canada in forested and exposed environments. Recently killed pigs were used as research models and were shot six times each at three different ranges (contact, 2.5 cm, and 1.5 m). Under these test conditions, the gunshot wounds maintained the characteristics unique to each gunshot range and changes that occurred during decomposition were not critical to the interpretation of the evidence. It was concluded that changes due to decomposition under the conditions tested would not affect the collection and interpretation of gunshot wound evidence until the skin was degraded in the late active or advanced decay stage of decomposition.
Multi-Parameter Linear Least-Squares Fitting to Poisson Data One Count at a Time
NASA Technical Reports Server (NTRS)
Wheaton, W.; Dunklee, A.; Jacobson, A.; Ling, J.; Mahoney, W.; Radocinski, R.
1993-01-01
A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multi-component linear model, with underlying physical count rates or fluxes which are to be estimated from the data.
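A hedged sketch of the estimation problem described above: given a known response matrix and observed Poisson counts, the component rates can be recovered by maximizing the Poisson log-likelihood under non-negativity constraints. The response matrix, synthetic counts, and the use of scipy's L-BFGS-B optimizer are assumptions for illustration, not the instrument model used by the authors.

# Minimal sketch: maximum-likelihood estimate of component fluxes for Poisson
# counts described by a linear model  mu = R @ f, with R a known response
# matrix and f >= 0 the unknown component rates. Data here are synthetic.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
n_bins, n_comp = 50, 3
R = rng.uniform(0.1, 1.0, size=(n_bins, n_comp))   # response of each bin to each component
f_true = np.array([5.0, 2.0, 8.0])
counts = rng.poisson(R @ f_true)                   # observed Poisson counts

def neg_log_like(f):
    mu = R @ f
    # Poisson log-likelihood up to a constant: sum(counts*log(mu) - mu)
    return -(counts * np.log(mu) - mu).sum()

res = minimize(neg_log_like, x0=np.ones(n_comp),
               bounds=[(1e-9, None)] * n_comp, method="L-BFGS-B")
print("true:", f_true, "estimated:", np.round(res.x, 2))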
Stefanuto, Pierre-Hugues; Perrault, Katelynn A; Stadler, Sonja; Pesesse, Romain; LeBlanc, Helene N; Forbes, Shari L; Focant, Jean-François
2015-06-01
In forensic thanato-chemistry, the understanding of the process of soft tissue decomposition is still limited. A better understanding of the decomposition process and the characterization of the associated volatile organic compounds (VOC) can help to improve the training of victim recovery (VR) canines, which are used to search for trapped victims in natural disasters or to locate corpses during criminal investigations. The complexity of matrices and the dynamic nature of this process require the use of comprehensive analytical methods for investigation. Moreover, the variability of the environment and between individuals creates additional difficulties in terms of normalization. The resolution of the complex mixture of VOCs emitted by a decaying corpse can be improved using comprehensive two-dimensional gas chromatography (GC × GC), compared to classical single-dimensional gas chromatography (1DGC). This study combines the analytical advantages of GC × GC coupled to time-of-flight mass spectrometry (TOFMS) with the data handling robustness of supervised multivariate statistics to investigate the VOC profile of human remains during early stages of decomposition. Various supervised multivariate approaches are compared to interpret the large data set. Moreover, early decomposition stages of pig carcasses (typically used as human surrogates in field studies) are also monitored to obtain a direct comparison of the two VOC profiles and estimate the robustness of this human decomposition analog model. In this research, we demonstrate that pig and human decomposition processes can be described by the same trends for the major compounds produced during the early stages of soft tissue decomposition.
38th JANNAF Combustion Subcommittee Meeting. Volume 1
NASA Technical Reports Server (NTRS)
Fry, Ronald S. (Editor); Eggleston, Debra S. (Editor); Gannaway, Mary T. (Editor)
2002-01-01
This volume, the first of two volumes, is a collection of 55 unclassified/unlimited-distribution papers which were presented at the Joint Army-Navy-NASA-Air Force (JANNAF) 38th Combustion Subcommittee (CS), 26th Airbreathing Propulsion Subcommittee (APS), 20th Propulsion Systems Hazards Subcommittee (PSHS), and 21st Modeling and Simulation Subcommittee. The meeting was held 8-12 April 2002 at the Bayside Inn at The Sandestin Golf & Beach Resort and Eglin Air Force Base, Destin, Florida. Topics cover five major technology areas including: 1) Combustion - Propellant Combustion, Ingredient Kinetics, Metal Combustion, Decomposition Processes and Material Characterization, Rocket Motor Combustion, and Liquid & Hybrid Combustion; 2) Liquid Rocket Engines - Low Cost Hydrocarbon Liquid Rocket Engines, Liquid Propulsion Turbines, Liquid Propulsion Pumps, and Staged Combustion Injector Technology; 3) Modeling & Simulation - Development of Multi-Disciplinary RBCC Modeling, Gun Modeling, and Computational Modeling for Liquid Propellant Combustion; 4) Guns - Gun Propelling Charge Design and ETC Gun Propulsion; and 5) Airbreathing - Scramjet and Ramjet S&T Program Overviews.
Object-oriented crop mapping and monitoring using multi-temporal polarimetric RADARSAT-2 data
NASA Astrophysics Data System (ADS)
Jiao, Xianfeng; Kovacs, John M.; Shang, Jiali; McNairn, Heather; Walters, Dan; Ma, Baoluo; Geng, Xiaoyuan
2014-10-01
The aim of this paper is to assess the accuracy of an object-oriented classification of polarimetric Synthetic Aperture Radar (PolSAR) data to map and monitor crops using 19 RADARSAT-2 fine beam polarimetric (FQ) images of an agricultural area in North-eastern Ontario, Canada. Polarimetric images and field data were acquired during the 2011 and 2012 growing seasons. The classification and field data collection focused on the main crop types grown in the region, which include: wheat, oat, soybean, canola and forage. The polarimetric parameters were extracted with PolSAR analysis using both the Cloude-Pottier and Freeman-Durden decompositions. The object-oriented classification, with a single date of PolSAR data, was able to classify all five crop types with an accuracy of 95% and Kappa of 0.93; a 6% improvement in comparison with linear-polarization only classification. However, the time of acquisition is crucial. The larger biomass crops of canola and soybean were most accurately mapped, whereas the identification of oat and wheat were more variable. The multi-temporal data using the Cloude-Pottier decomposition parameters provided the best classification accuracy compared to the linear polarizations and the Freeman-Durden decomposition parameters. In general, the object-oriented classifications were able to accurately map crop types by reducing the noise inherent in the SAR data. Furthermore, using the crop classification maps we were able to monitor crop growth stage based on a trend analysis of the radar response. Based on field data from canola crops, there was a strong relationship between the phenological growth stage based on the BBCH scale, and the HV backscatter and entropy.
Multi-Centrality Graph Spectral Decompositions and Their Application to Cyber Intrusion Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Pin-Yu; Choudhury, Sutanay; Hero, Alfred
Many modern datasets can be represented as graphs and hence spectral decompositions such as graph principal component analysis (PCA) can be useful. Distinct from previous graph decomposition approaches based on subspace projection of a single topological feature, e.g., the centered graph adjacency matrix (graph Laplacian), we propose spectral decomposition approaches to graph PCA and graph dictionary learning that integrate multiple features, including graph walk statistics, centrality measures and graph distances to reference nodes. In this paper we propose a new PCA method for single graph analysis, called multi-centrality graph PCA (MC-GPCA), and a new dictionary learning method for ensembles of graphs, called multi-centrality graph dictionary learning (MC-GDL), both based on spectral decomposition of multi-centrality matrices. As an application to cyber intrusion detection, MC-GPCA can be an effective indicator of anomalous connectivity pattern and MC-GDL can provide discriminative basis for attack classification.
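The following is a minimal sketch of the core idea (not the authors' MC-GPCA code): assemble a node-by-feature matrix from several centrality measures of a graph and take its principal components via an SVD. The example graph (networkx's karate club) and the particular centrality measures chosen are assumptions for illustration.

# Minimal sketch of the idea behind multi-centrality graph PCA: build a
# node-by-feature matrix from several centrality measures and inspect its
# principal components. This is an illustration, not the authors' MC-GPCA code.
import numpy as np
import networkx as nx

G = nx.karate_club_graph()                       # small example graph
features = np.column_stack([
    list(nx.degree_centrality(G).values()),
    list(nx.betweenness_centrality(G).values()),
    list(nx.closeness_centrality(G).values()),
    list(nx.eigenvector_centrality_numpy(G).values()),
])

X = features - features.mean(axis=0)             # center each centrality column
U, S, Vt = np.linalg.svd(X, full_matrices=False) # PCA via SVD
scores = X @ Vt.T                                # node scores on principal components
recon1 = np.outer(scores[:, 0], Vt[0])           # rank-1 reconstruction from PC1
resid = np.linalg.norm(X - recon1, axis=1)       # large residual -> unusual centrality mix
print("explained variance ratio:", np.round(S**2 / (S**2).sum(), 3))
print("nodes with most unusual centrality profile:", np.argsort(resid)[-3:][::-1])

Nodes with unusually large residuals relative to the leading components could then be flagged as candidates for anomalous connectivity, in the spirit of the intrusion-detection application.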
Decomposition of Multi-player Games
NASA Astrophysics Data System (ADS)
Zhao, Dengji; Schiffel, Stephan; Thielscher, Michael
Research in General Game Playing aims at building systems that learn to play unknown games without human intervention. We contribute to this endeavour by generalising the established technique of decomposition from AI Planning to multi-player games. To this end, we present a method for the automatic decomposition of previously unknown games into independent subgames, and we show how a general game player can exploit a successful decomposition for game tree search.
Riley, Richard D; Ensor, Joie; Jackson, Dan; Burke, Danielle L
2017-01-01
Many meta-analysis models contain multiple parameters, for example due to multiple outcomes, multiple treatments or multiple regression coefficients. In particular, meta-regression models may contain multiple study-level covariates, and one-stage individual participant data meta-analysis models may contain multiple patient-level covariates and interactions. Here, we propose how to derive percentage study weights for such situations, in order to reveal the (otherwise hidden) contribution of each study toward the parameter estimates of interest. We assume that studies are independent, and utilise a decomposition of Fisher's information matrix to decompose the total variance matrix of parameter estimates into study-specific contributions, from which percentage weights are derived. This approach generalises how percentage weights are calculated in a traditional, single parameter meta-analysis model. Application is made to one- and two-stage individual participant data meta-analyses, meta-regression and network (multivariate) meta-analysis of multiple treatments. These reveal percentage study weights toward clinically important estimates, such as summary treatment effects and treatment-covariate interactions, and are especially useful when some studies are potential outliers or at high risk of bias. We also derive percentage study weights toward methodologically interesting measures, such as the magnitude of ecological bias (difference between within-study and across-study associations) and the amount of inconsistency (difference between direct and indirect evidence in a network meta-analysis).
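A minimal numerical sketch of the variance decomposition described above, under the assumption of independent studies with known study-specific information matrices: because V = (sum_i I_i)^-1 and V = sum_i V I_i V, the diagonal of V I_i V gives study i's contribution to each parameter's variance, and percentage weights follow by normalizing with diag(V). The synthetic design matrices and residual variances are assumptions for illustration.

# Sketch of percentage study weights for a multi-parameter meta-regression:
# with independent studies, total Fisher information is the sum of per-study
# blocks I_i, V = inv(sum_i I_i), and V = sum_i V @ I_i @ V, so the diagonal of
# V @ I_i @ V gives study i's contribution to each parameter's variance.
# Study data below are synthetic.
import numpy as np

rng = np.random.default_rng(7)
n_studies, n_params = 6, 3

infos = []
for _ in range(n_studies):
    X = np.column_stack([np.ones(30), rng.normal(size=(30, n_params - 1))])
    sigma2 = rng.uniform(0.5, 2.0)
    infos.append(X.T @ X / sigma2)         # study-specific information matrix

I_total = sum(infos)
V = np.linalg.inv(I_total)                 # covariance of the pooled estimates

weights = np.array([np.diag(V @ I_i @ V) / np.diag(V) for I_i in infos]) * 100
print("percentage weight of each study toward each parameter:")
print(np.round(weights, 1))
print(np.round(weights.sum(axis=0), 1))    # check: columns sum to 100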
A Multi-Stage Maturity Model for Long-Term IT Outsourcing Relationship Success
ERIC Educational Resources Information Center
Luong, Ming; Stevens, Jeff
2015-01-01
The Multi-Stage Maturity Model for Long-Term IT Outsourcing Relationship Success, a theoretical stages-of-growth model, explains long-term success in IT outsourcing relationships. Research showed the IT outsourcing relationship life cycle consists of four distinct, sequential stages: contract, transition, support, and partnership. The model was…
NASA Astrophysics Data System (ADS)
Li, Qian; Di, Bangrang; Wei, Jianxin; Yuan, Sanyi; Si, Wenpeng
2016-12-01
Sparsity constraint inverse spectral decomposition (SCISD) is a time-frequency analysis method based on the convolution model, in which minimizing the l1 norm of the time-frequency spectrum of the seismic signal is adopted as a sparsity constraint term. The SCISD method has higher time-frequency resolution and more concentrated time-frequency distribution than the conventional spectral decomposition methods, such as short-time Fourier transformation (STFT), continuous-wavelet transform (CWT) and S-transform. Due to these good features, the SCISD method has gradually been used in low-frequency anomaly detection, horizon identification and random noise reduction for sandstone and shale reservoirs. However, it has not yet been used in carbonate reservoir prediction. The carbonate fractured-vuggy reservoir is the major hydrocarbon reservoir in the Halahatang area of the Tarim Basin, north-west China. If reasonable predictions for the type of multi-cave combinations are not made, it may lead to an incorrect explanation for seismic responses of the multi-cave combinations. Furthermore, it will result in large errors in reserves estimation of the carbonate reservoir. In this paper, the energy and phase spectra of the SCISD are applied to identify the multi-cave combinations in carbonate reservoirs. The examples of physical model data and real seismic data illustrate that the SCISD method can detect the combination types and the number of caves of multi-cave combinations and can provide a favourable basis for the subsequent reservoir prediction and quantitative estimation of the cave-type carbonate reservoir volume.
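The following is a hedged sketch of the kind of sparsity-constrained decomposition described above, not the authors' SCISD implementation: the seismic trace is modeled as a dictionary of time-shifted Ricker wavelets acting on a sparse coefficient vector, and the l1-regularized least-squares problem is solved with plain ISTA (iterative soft thresholding). The wavelet, regularization weight, and synthetic trace are assumptions.

# Minimal sketch of a sparsity-constrained decomposition: solve
#   min_x  0.5*||D x - s||^2 + lam*||x||_1
# by ISTA, where D holds time-shifted Ricker wavelets (a stand-in for the
# convolution model used in SCISD). Signal and parameters are synthetic.
import numpy as np

def ricker(t, f0):
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

dt, n = 0.002, 256
t = np.arange(n) * dt
D = np.column_stack([ricker(t - tau, 30.0) for tau in t])   # shifted-wavelet dictionary

x_true = np.zeros(n)
x_true[[60, 100, 105, 180]] = [1.0, -0.8, 0.6, 0.5]          # sparse coefficient vector
s = D @ x_true + 0.02 * np.random.default_rng(3).normal(size=n)

lam = 0.05
L = np.linalg.norm(D, 2) ** 2                                # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):                                         # ISTA iterations
    grad = D.T @ (D @ x - s)
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)    # soft threshold
print("recovered support:", np.nonzero(np.abs(x) > 0.1)[0])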
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Yao (E-mail: fu5@mailbox.sc.edu); Song, Jeong-Hoon (E-mail: jhsong@cec.sc.edu)
2014-08-07
Hardy stress definition has been restricted to pair potentials and embedded-atom method potentials due to the basic assumptions in the derivation of a symmetric microscopic stress tensor. Force decomposition required in the Hardy stress expression becomes obscure for multi-body potentials. In this work, we demonstrate the invariance of the Hardy stress expression for a polymer system modeled with multi-body interatomic potentials including up to four-atom interactions, by applying central force decomposition of the atomic force. The balance of momentum has been demonstrated to be valid theoretically and tested under various numerical simulation conditions. The validity of momentum conservation justifies the extension of the Hardy stress expression to multi-body potential systems. The computed Hardy stress has been observed to converge to the virial stress of the system with increasing spatial averaging volume. This work provides a feasible and reliable linkage between the atomistic and continuum scales for multi-body potential systems.
Materials Degradation in the Jovian Radiation Environment
NASA Technical Reports Server (NTRS)
Miloshevsky, Gennady; Caffrey, Jarvis A.; Jones, Jonathan E.; Zoladz, Thomas F.
2017-01-01
The radiation environment of Jupiter represents a significant hazard for Europa Lander deorbit stage components, and presents a significant potential mission risk. The radiolytic degradation of ammonium perchlorate (AP) oxidizer in solid propellants may affect its properties and performance. The Monte Carlo code MONSOL was used for modeling of laboratory experiments on the electron irradiation of propellant samples. An approach for flattening dose profiles along the depth of irradiated samples is proposed. Depth-dose distributions produced by Jovian electrons in multi-layer slabs of materials are calculated. It is found that the absorbed dose in a particular slab is significantly affected by backscattered electrons and photons from neighboring slabs. The dose and radiolytic decomposition of AP crystals are investigated and radiation-induced chemical yields and weight percent of radical products are reported.
Young Children's Thinking About Decomposition: Early Modeling Entrees to Complex Ideas in Science
NASA Astrophysics Data System (ADS)
Ero-Tolliver, Isi; Lucas, Deborah; Schauble, Leona
2013-10-01
This study was part of a multi-year project on the development of elementary students' modeling approaches to understanding the life sciences. Twenty-three first grade students conducted a series of coordinated observations and investigations on decomposition, a topic that is rarely addressed in the early grades. The instruction included in-class observations of different types of soil and soil profiling, visits to the school's compost bin, structured observations of decaying organic matter of various kinds, study of organisms that live in the soil, and models of environmental conditions that affect rates of decomposition. Both before and after instruction, students completed a written performance assessment that asked them to reason about the process of decomposition. Additional information was gathered through one-on-one interviews with six focus students who represented variability of performance across the class. During instruction, researchers collected video of classroom activity, student science journal entries, and charts and illustrations produced by the teacher. After instruction, the first-grade students showed a more nuanced understanding of the composition and variability of soils, the role of visible organisms in decomposition, and environmental factors that influence rates of decomposition. Through a variety of representational devices, including drawings, narrative records, and physical models, students came to regard decomposition as a process, rather than simply as an end state that does not require explanation.
NASA Astrophysics Data System (ADS)
Yang, Yang; Peng, Zhike; Dong, Xingjian; Zhang, Wenming; Clifton, David A.
2018-03-01
A challenge in analysing non-stationary multi-component signals is to isolate nonlinearly time-varying signals, especially when they overlap in the time-frequency plane. In this paper, a framework integrating time-frequency analysis-based demodulation and a non-parametric Gaussian latent feature model is proposed to isolate and recover the components of such signals. The former aims to remove high-order frequency modulation (FM) so that the latter is able to infer the demodulated components while simultaneously discovering the number of target components. The proposed method is effective in isolating multiple components that have the same FM behavior. In addition, the results show that the proposed method is superior to the generalised demodulation with singular-value decomposition based method, the parametric time-frequency analysis with filtering based method, and the empirical mode decomposition based method in recovering the amplitude and phase of superimposed components.
NASA Astrophysics Data System (ADS)
Lv, Gangming; Zhu, Shihua; Hui, Hui
Multi-cell resource allocation under minimum rate request for each user in OFDMA networks is addressed in this paper. Based on Lagrange dual decomposition theory, the joint multi-cell resource allocation problem is decomposed and modeled as a limited-cooperative game, and a distributed multi-cell resource allocation algorithm is thus proposed. Analysis and simulation results show that, compared with non-cooperative iterative water-filling algorithm, the proposed algorithm can remarkably reduce the ICI level and improve overall system performances.
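For context on the baseline mentioned above, here is a minimal sketch of single-user water-filling power allocation across subcarriers, the building block that the non-cooperative iterative water-filling algorithm repeats per cell; the channel gains and power budget are illustrative assumptions, and no inter-cell interference is modeled.

# Minimal sketch of water-filling power allocation across OFDMA subcarriers,
# the building block of the iterative water-filling baseline mentioned above.
# Channel gains, noise and power budget are illustrative.
import numpy as np

def water_filling(gains, p_total):
    # Allocate p_total across channels with gains g_k to maximize sum log(1 + g_k p_k):
    # p_k = max(mu - 1/g_k, 0), with mu chosen so the powers sum to p_total.
    inv_g = 1.0 / gains
    lo, hi = 0.0, p_total + inv_g.max()
    for _ in range(100):                    # bisection on the water level mu
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - inv_g, 0.0)
        if p.sum() > p_total:
            hi = mu
        else:
            lo = mu
    return p

rng = np.random.default_rng(0)
gains = rng.exponential(1.0, size=16)       # per-subcarrier SNR gains
p = water_filling(gains, p_total=10.0)
print(np.round(p, 3), "sum =", round(p.sum(), 3))
print("rate =", round(np.log2(1.0 + gains * p).sum(), 2), "bit/s/Hz")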
Zhao, Bing; Xu, Xinyang; Li, Haibo; Chen, Xi; Zeng, Fanqiang
2018-01-01
Hazelnut shell, as a novel biomass, has a lower ash content and abundant hydrocarbon, and can be utilized resourcefully together with municipal sewage sludge (MSS) by co-pyrolysis to decrease the total content of pollutants. The co-pyrolysis of MSS and hazelnut shell blends was analyzed at multiple heating rates and different blend ratios with TG-DTG-MS under an N2 atmosphere. The apparent activation energy of co-pyrolysis was calculated by three iso-conversional methods. The Satava-Sestak method was used to determine the mechanism function G(α) of co-pyrolysis, and a Lorentzian function was used to fit the multi-peak curves. The results showed four thermal decomposition stages, with the biomass components cracking and evolving over different temperature ranges. The apparent activation energy increased from 123.99 to 608.15 kJ/mol. The reaction mechanism of co-pyrolysis is random nucleation and nuclei growth. The apparent activation energy and mechanism function provide a theoretical basis for co-pyrolysis technology. Copyright © 2017 Elsevier Ltd. All rights reserved.
Interface conditions for domain decomposition with radical grid refinement
NASA Technical Reports Server (NTRS)
Scroggs, Jeffrey S.
1991-01-01
Interface conditions for coupling the domains in a physically motivated domain decomposition method are discussed. The domain decomposition is based on an asymptotic-induced method for the numerical solution of hyperbolic conservation laws with small viscosity. The method consists of multiple stages. The first stage is to obtain a first approximation using a first-order method, such as the Godunov scheme. Subsequent stages of the method involve solving internal-layer problems via a domain decomposition. The method is derived and justified via singular perturbation techniques.
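A minimal sketch of the kind of first stage described above (a first approximation from a first-order method such as the Godunov scheme), applied to the inviscid Burgers equation with step initial data; the grid, CFL number, and boundary treatment are illustrative assumptions, and the later interface-coupled stages are not reproduced.

# Minimal sketch: one first-order Godunov sweep for the inviscid Burgers
# equation u_t + (u^2/2)_x = 0, giving the rough first approximation that
# later stages would refine. Grid and initial data are illustrative.
import numpy as np

def godunov_flux(uL, uR):
    # Exact Riemann flux for f(u) = u^2 / 2.
    if uL > uR:                                   # shock
        s = 0.5 * (uL + uR)
        return 0.5 * uL**2 if s > 0.0 else 0.5 * uR**2
    if uL > 0.0:                                  # rarefaction, all waves to the right
        return 0.5 * uL**2
    if uR < 0.0:                                  # all waves to the left
        return 0.5 * uR**2
    return 0.0                                    # sonic point inside the fan

nx, cfl = 200, 0.4
dx = 1.0 / nx
x = (np.arange(nx) + 0.5) * dx
u = np.where(x < 0.5, 1.0, 0.0)                   # step initial data -> moving shock

t, t_end = 0.0, 0.3
while t < t_end:
    dt = min(cfl * dx / max(np.abs(u).max(), 1e-12), t_end - t)
    F = np.array([godunov_flux(u[i], u[i + 1]) for i in range(nx - 1)])
    u[1:-1] -= dt / dx * (F[1:] - F[:-1])         # interior update; ends held fixed
    t += dt
print("shock position ~", x[np.argmin(np.abs(u - 0.5))])   # expected near 0.5 + 0.5*t_end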
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behrens, R.; Minier, L.; Bulusu, S.
1998-12-31
The time-dependent, solid-phase thermal decomposition behavior of 2,4-dinitroimidazole (2,4-DNI) has been measured utilizing simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) methods. The decomposition products consist of gaseous and non-volatile polymeric products. The temporal behavior of the gas formation rates of the identified products indicates that the overall thermal decomposition process is complex. In isothermal experiments with 2,4-DNI in the solid phase, four distinguishing features are observed: (1) elevated rates of gas formation are observed during the early stages of the decomposition, which appear to be correlated to the presence of exogenous water in the sample; (2) this is followed by a period of relatively constant rates of gas formation; (3) next, the rates of gas formation accelerate, characteristic of an autocatalytic reaction; (4) finally, the 2,4-DNI is depleted and gaseous decomposition products continue to evolve at a decreasing rate. A physicochemical and mathematical model of the decomposition of 2,4-DNI has been developed and applied to the experimental results. The first generation of this model is described in this paper. Differences between the first generation of the model and the experimental data collected under different conditions suggest refinements for the next generation of the model.
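The acceleration noted in feature (3) is characteristic of autocatalysis; the following is a generic sketch of an extended Prout-Tompkins rate law that produces such behavior, integrated with scipy. The rate constants are illustrative assumptions and this is not the authors' physicochemical STMBMS-based model.

# Minimal sketch of an extended Prout-Tompkins rate law often used for
# autocatalytic solid-phase decomposition:
#   d(alpha)/dt = k1*(1 - alpha) + k2*alpha*(1 - alpha)
# This is a generic illustration, not the authors' STMBMS-based model.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 1e-4, 5e-3          # s^-1, illustrative first-order and autocatalytic constants

def rate(t, y):
    alpha = y[0]
    return [k1 * (1.0 - alpha) + k2 * alpha * (1.0 - alpha)]

sol = solve_ivp(rate, (0.0, 3600.0), [0.0], dense_output=True)
for ti in np.linspace(0.0, 3600.0, 7):
    print("t = %5.0f s   extent of decomposition = %.3f" % (ti, sol.sol(ti)[0]))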
Modular neural networks: a survey.
Auda, G; Kamel, M
1999-04-01
Modular Neural Networks (MNNs) are a rapidly growing field in artificial Neural Network (NN) research. This paper surveys the different motivations for creating MNNs: biological, psychological, hardware, and computational. Then, the general stages of MNN design are outlined and surveyed as well, viz., task decomposition techniques, learning schemes and multi-module decision-making strategies. Advantages and disadvantages of the surveyed methods are pointed out, and an assessment with respect to practical potential is provided. Finally, some general recommendations for future designs are presented.
Influence of gamma-irradiation on the non-isothermal decomposition of calcium-gadolinium oxalate
NASA Astrophysics Data System (ADS)
Moharana, S. C.; Praharaj, J.; Bhatta, D.
The thermal decomposition of co-precipitated unirradiated and irradiated Ca-Gd oxalate has been studied using differential thermal analysis (DTA) and thermogravimetric (TG) techniques. The reaction occurs through two stages corresponding to the decomposition of gadolinium oxalate (Gd-Ox) followed by that of calcium oxalate (Ca-Ox). The kinetic parameters for both stages are calculated by using solid-state reaction models and the Coats-Redfern equation. Co-precipitation as well as irradiation alter the DTA peak temperatures and the kinetic parameters of Ca-Ox. The decomposition of Gd-Ox follows the two-dimensional contracting-area (R2) mechanism, while that of Ca-Ox follows the Avrami-Erofeev (A2) mechanism (n = 2); these mechanisms are also exhibited by the co-precipitated and irradiated samples. Co-precipitation decreases the energy of activation and the pre-exponential factor of the individual components, but the reverse phenomenon takes place upon irradiation of the co-precipitate. The mechanisms underlying these phenomena are explored.
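To illustrate the fitting procedure named above, here is a hedged sketch of a Coats-Redfern analysis for the Avrami-Erofeev A2 model, g(alpha) = [-ln(1 - alpha)]^(1/2): under the usual approximation, ln[g(alpha)/T^2] is linear in 1/T with slope -E/R, so a straight-line fit returns the activation energy and pre-exponential factor. The heating rate, temperature window, and "true" kinetic parameters used to generate the synthetic TG data are assumptions.

# Sketch of a Coats-Redfern fit: ln[g(alpha)/T^2] is approximately linear in 1/T
# with slope -E/R for a chosen solid-state model g(alpha). Here the Avrami-
# Erofeev A2 model is fitted to synthetic TG data generated from known parameters.
import numpy as np

R = 8.314            # gas constant, J mol^-1 K^-1
beta = 10.0 / 60.0   # heating rate, K s^-1 (10 K/min), illustrative
E_true, A_true = 120e3, 1e10   # illustrative kinetic parameters

# Synthetic conversion data consistent with the A2 model under the
# Coats-Redfern approximation: g(alpha) = A*R*T^2/(beta*E) * exp(-E/(R*T)).
T = np.linspace(490.0, 535.0, 40)                       # K
g = (A_true * R * T**2) / (beta * E_true) * np.exp(-E_true / (R * T))
alpha = 1.0 - np.exp(-g**2)                             # invert A2: g = [-ln(1-alpha)]^0.5

y = np.log(np.sqrt(-np.log(1.0 - alpha)) / T**2)        # ln[g(alpha)/T^2]
slope, intercept = np.polyfit(1.0 / T, y, 1)
E_fit = -slope * R
A_fit = beta * E_fit / R * np.exp(intercept)
print("E = %.1f kJ/mol,  A = %.2e s^-1" % (E_fit / 1e3, A_fit))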
Perakis, Steven S.; Matkins, Joselin J.; Hibbs, David E.
2012-01-01
High tissue nitrogen (N) accelerates decomposition of high-quality leaf litter in the early phases of mass loss, but the influence of initial tissue N variation on the decomposition of lignin-rich litter is less resolved. Because environmental changes such as atmospheric N deposition and elevated CO2 can alter tissue N levels within species more rapidly than they alter the species composition of ecosystems, it is important to consider how within-species variation in tissue N may shape litter decomposition and associated N dynamics. Douglas-fir (Pseudotsuga menziesii ) is a widespread lignin-rich conifer that dominates forests of high carbon (C) storage across western North America, and displays wide variation in tissue and litter N that reflects landscape variation in soil N. We collected eight unique Douglas-fir litter sources that spanned a two-fold range in initial N concentrations (0.67–1.31%) with a narrow range of lignin (29–35%), and examined relationships between initial litter chemistry, decomposition, and N dynamics in both ambient and N fertilized plots at four sites over 3 yr. High initial litter N slowed decomposition rates in both early (0.67 yr) and late (3 yr) stages in unfertilized plots. Applications of N fertilizer to litters accelerated early-stage decomposition, but slowed late-stage decomposition, and most strongly affected low-N litters, which equalized decomposition rates across litters regardless of initial N concentrations. Decomposition of N-fertilized litters correlated positively with initial litter manganese (Mn) concentrations, with litter Mn variation reflecting faster turnover of canopy foliage in high N sites, producing younger litterfall with high N and low Mn. Although both internal and external N inhibited decomposition at 3 yr, most litters exhibited net N immobilization, with strongest immobilization in low-N litter and in N-fertilized plots. Our observation for lignin-rich litter that high initial N can slow decomposition yet accelerate N release differs from findings where litter quality variation across species promotes coupled C and N release during decomposition. We suggest reevaluation of ecosystem models and projected global change effects to account for a potential decoupling of ecosystem C and N feedbacks through litter decomposition in lignin-rich conifer forests.
Multi-focus image fusion based on window empirical mode decomposition
NASA Astrophysics Data System (ADS)
Qin, Xinqiang; Zheng, Jiaoyue; Hu, Gang; Wang, Jiao
2017-09-01
In order to improve multi-focus image fusion quality, a novel fusion algorithm based on window empirical mode decomposition (WEMD) is proposed. This WEMD is an improved form of bidimensional empirical mode decomposition (BEMD), due to its decomposition process using the adding window principle, effectively resolving the signal concealment problem. We used WEMD for multi-focus image fusion, and formulated different fusion rules for bidimensional intrinsic mode function (BIMF) components and the residue component. For fusion of the BIMF components, the concept of the Sum-modified-Laplacian was used and a scheme based on the visual feature contrast adopted; when choosing the residue coefficients, a pixel value based on the local visibility was selected. We carried out four groups of multi-focus image fusion experiments and compared objective evaluation criteria with other three fusion methods. The experimental results show that the proposed fusion approach is effective and performs better at fusing multi-focus images than some traditional methods.
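A minimal sketch of the Sum-modified-Laplacian selection rule mentioned above for the BIMF (detail) components: at each pixel, keep the coefficient from whichever source is locally sharper according to its SML focus measure. The WEMD decomposition itself is not reproduced; the window size and the synthetic input arrays are assumptions.

# Sketch of the sum-modified-Laplacian (SML) selection rule used to fuse the
# detail components of two source images: at each pixel, keep the coefficient
# from the image whose local SML (a focus measure) is larger.
import numpy as np
from scipy.ndimage import uniform_filter

def modified_laplacian(img):
    # |2*I - I(shifted up) - I(shifted down)| + |2*I - I(shifted left) - I(shifted right)|
    return (np.abs(2 * img - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0)) +
            np.abs(2 * img - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1)))

def sml(img, window=5):
    # Sum the modified Laplacian over a local window.
    return uniform_filter(modified_laplacian(img), size=window) * window**2

rng = np.random.default_rng(2)
a = rng.normal(size=(64, 64))           # stand-ins for two detail (BIMF) components
b = rng.normal(size=(64, 64))

mask = sml(a) >= sml(b)                 # True where component a is locally sharper
fused = np.where(mask, a, b)
print("fraction of pixels taken from component a:", mask.mean())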
NASA Astrophysics Data System (ADS)
Boniecki, P.; Nowakowski, K.; Slosarz, P.; Dach, J.; Pilarski, K.
2012-04-01
The purpose of the project was to identify the degree of organic matter decomposition by means of a neural model based on graphical information derived from image analysis. Empirical data (photographs of compost content at various stages of maturation) were used to generate an optimal neural classifier (Boniecki et al. 2009, Nowakowski et al. 2009). The best classification properties were found in an RBF (Radial Basis Function) artificial neural network, which demonstrates that the process is non-linear.
Storage assignment optimization in a multi-tier shuttle warehousing system
NASA Astrophysics Data System (ADS)
Wang, Yanyan; Mou, Shandong; Wu, Yaohua
2016-03-01
The current mathematical models for the storage assignment problem are generally established based on the traveling salesman problem (TSP), which has been widely applied in the conventional automated storage and retrieval system (AS/RS). However, the previous mathematical models for conventional AS/RS do not match multi-tier shuttle warehousing systems (MSWS) because the characteristics of parallel retrieval in multiple tiers and progressive vertical movement destroy the foundation of the TSP. In this study, a two-stage open queuing network model, in which shuttles and a lift are regarded as servers at different stages, is proposed to analyze system performance in terms of the shuttle waiting period (SWP) and lift idle period (LIP) during the transaction cycle time. A mean arrival time difference matrix for pairwise stock keeping units (SKUs) is presented to determine the mean waiting time and queue length to optimize the storage assignment problem on the basis of SKU correlation. The decomposition method is applied to analyze the interactions among outbound task time, SWP, and LIP. The ant colony clustering algorithm is designed to determine storage partitions using clustering items. In addition, goods are assigned for storage according to the rearranging permutation and the combination of storage partitions in a 2D plane. This combination is derived based on the analysis results of the queuing network model and on three basic principles. The storage assignment method and its entire optimization algorithm as applied in an MSWS are verified through a practical engineering project conducted in the tobacco industry. The results show that the total SWP and LIP can be reduced effectively to improve the utilization rates of all devices and to increase the throughput of the distribution center.
The trait contribution to wood decomposition rates of 15 Neotropical tree species.
van Geffen, Koert G; Poorter, Lourens; Sass-Klaassen, Ute; van Logtestijn, Richard S P; Cornelissen, Johannes H C
2010-12-01
The decomposition of dead wood is a critical uncertainty in models of the global carbon cycle. Despite this, relatively few studies have focused on dead wood decomposition, with a strong bias to higher latitudes. Especially the effect of interspecific variation in species traits on differences in wood decomposition rates remains unknown. In order to fill these gaps, we applied a novel method to study long-term wood decomposition of 15 tree species in a Bolivian semi-evergreen tropical moist forest. We hypothesized that interspecific differences in species traits are important drivers of variation in wood decomposition rates. Wood decomposition rates (fractional mass loss) varied between 0.01 and 0.31 yr(-1). We measured 10 different chemical, anatomical, and morphological traits for all species. The species' average traits were useful predictors of wood decomposition rates, particularly the average diameter (dbh) of the tree species (R2 = 0.41). Lignin concentration further increased the proportion of explained inter-specific variation in wood decomposition (both negative relations, cumulative R2 = 0.55), although it did not significantly explain variation in wood decomposition rates if considered alone. When dbh values of the actual dead trees sampled for decomposition rate determination were used as a predictor variable, the final model (including dead tree dbh and lignin concentration) explained even more variation in wood decomposition rates (R2 = 0.71), underlining the importance of dbh in wood decomposition. Other traits, including wood density, wood anatomical traits, macronutrient concentrations, and the amount of phenolic extractives could not significantly explain the variation in wood decomposition rates. The surprising results of this multi-species study, in which for the first time a large set of traits is explicitly linked to wood decomposition rates, merits further testing in other forest ecosystems.
Bienvenu, François; Akçay, Erol; Legendre, Stéphane; McCandlish, David M
2017-06-01
Matrix projection models are a central tool in many areas of population biology. In most applications, one starts from the projection matrix to quantify the asymptotic growth rate of the population (the dominant eigenvalue), the stable stage distribution, and the reproductive values (the dominant right and left eigenvectors, respectively). Any primitive projection matrix also has an associated ergodic Markov chain that contains information about the genealogy of the population. In this paper, we show that these facts can be used to specify any matrix population model as a triple consisting of the ergodic Markov matrix, the dominant eigenvalue and one of the corresponding eigenvectors. This decomposition of the projection matrix separates properties associated with lineages from those associated with individuals. It also clarifies the relationships between many quantities commonly used to describe such models, including the relationship between eigenvalue sensitivities and elasticities. We illustrate the utility of such a decomposition by introducing a new method for aggregating classes in a matrix population model to produce a simpler model with a smaller number of classes. Unlike the standard method, our method has the advantage of preserving reproductive values and elasticities. It also has conceptually satisfying properties such as commuting with changes of units. Copyright © 2017 Elsevier Inc. All rights reserved.
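A hedged numerical sketch of the triple described above for a small, illustrative stage-structured projection matrix: the dominant eigenvalue, the stable stage distribution (right eigenvector), the reproductive values (left eigenvector), and one common construction of the associated ergodic Markov matrix, P = diag(w)^-1 A diag(w) / lambda, whose rows sum to one. The matrix entries are assumptions, and the exact normalization conventions in the paper may differ.

# Sketch: decompose a small stage-structured projection matrix A into its
# dominant eigenvalue, stable stage distribution, reproductive values, and an
# associated ergodic Markov matrix. Values of A are illustrative.
import numpy as np

A = np.array([[0.0, 1.5, 3.0],      # fecundities
              [0.5, 0.0, 0.0],      # survival/transition probabilities
              [0.0, 0.7, 0.8]])

eigvals, right = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam = eigvals.real[k]
w = np.abs(right[:, k].real); w /= w.sum()          # stable stage distribution

eigvals_l, left = np.linalg.eig(A.T)
kl = np.argmax(eigvals_l.real)
v = np.abs(left[:, kl].real); v /= (v @ w)          # reproductive values, scaled so v.w = 1

P = np.diag(1.0 / w) @ A @ np.diag(w) / lam         # one common Markov-matrix construction
print("lambda =", round(lam, 4))
print("stable stage distribution w:", np.round(w, 3))
print("reproductive values v:", np.round(v, 3))
print("row sums of P:", np.round(P.sum(axis=1), 6)) # all 1: P is a Markov matrix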
NASA Astrophysics Data System (ADS)
Huang, Yan; Wang, Zhihui
2015-12-01
With the development of FPGAs, DSP Builder is widely applied to design system-level algorithms. The CL multi-wavelet algorithm is more advanced and effective than scalar wavelets in signal decomposition. Thus, a CL multi-wavelet system based on DSP Builder is designed for the first time in this paper. The system mainly contains three parts: a pre-filtering subsystem, a one-level decomposition subsystem and a two-level decomposition subsystem. It can be converted into the hardware language VHDL by the Signal Compiler block that can be used in Quartus II. Analysis of the energy indicator shows that this system outperforms the Daubechies wavelet in signal decomposition. Furthermore, it has proved to be suitable for the implementation of signal fusion based on SoPC hardware, and it will become a solid foundation in this new field.
Theoretical studies of the decomposition mechanisms of 1,2,4-butanetriol trinitrate.
Pei, Liguan; Dong, Kehai; Tang, Yanhui; Zhang, Bo; Yu, Chang; Li, Wenzuo
2017-12-06
Density functional theory (DFT) and canonical variational transition-state theory combined with a small-curvature tunneling correction (CVT/SCT) were used to explore the decomposition mechanisms of 1,2,4-butanetriol trinitrate (BTTN) in detail. The results showed that the γ-H abstraction reaction is the initial pathway for autocatalytic BTTN decomposition. The three possible hydrogen atom abstraction reactions are all exothermic. The rate constants for autocatalytic BTTN decomposition are 3 to 10^40 times greater than the rate constants for the two unimolecular decomposition reactions (O-NO2 cleavage and HONO elimination). The process of BTTN decomposition can be divided into two stages according to whether the NO2 concentration is above a threshold value. HONO elimination is the main reaction channel during the first stage because autocatalytic decomposition requires NO2 and the concentration of NO2 is initially low. As the reaction proceeds, the concentration of NO2 gradually increases; when it exceeds the threshold value, the second stage begins, with autocatalytic decomposition becoming the main reaction channel.
Associational Patterns of Scavenger Beetles to Decomposition Stages.
Zanetti, Noelia I; Visciarelli, Elena C; Centeno, Nestor D
2015-07-01
Beetles associated with carrion play an important role in recycling organic matter in an ecosystem. Four decomposition experiments, one per season, were conducted in a semirural area in Bahía Blanca, Argentina. Melyridae are reported to be of forensic interest for the first time. Apart from adults and larvae of Scarabaeidae, thirteen species and two genera of other coleopteran families are new forensic records for Argentina. Diversity, abundance, and species composition of beetles showed differences between stages and seasons. Our results differed from other studies conducted in temperate regions. Four guilds and succession patterns were established in relation to decomposition stages and seasons. Dermestidae (necrophages) predominated in winter during the decomposition process; Staphylinidae (necrophiles) in the Fresh and Bloat stages during spring, summer, and autumn; and Histeridae (necrophiles) and Cleridae (omnivores) in the following stages during those seasons. Finally, coleopteran activity, diversity and abundance, and the decomposition rate change with biogeoclimatic characteristics, which is of significance in forensics. © 2015 American Academy of Forensic Sciences.
Study of CFB Simulation Model with Coincidence at Multi-Working Condition
NASA Astrophysics Data System (ADS)
Wang, Z.; He, F.; Yang, Z. W.; Li, Z.; Ni, W. D.
A circulating fluidized bed (CFB) two-stage simulation model was developed. To make the model results coincide with the design values or real operation values at specified multi-working conditions while retaining real-time calculation capability, only the main key processes were taken into account, and the dominant factors were further abstracted from these key processes. The simulation results showed sound agreement at multi-working conditions, and confirmed the advantage of the two-stage model over the original single-stage simulation model. The combustion-support effect of secondary air was investigated using the two-stage model. This model provides a solid platform for investigating the pant-leg structured CFB furnace, which is now under design for a supercritical power plant.
NASA Astrophysics Data System (ADS)
Tang, J.; Riley, W. J.
2017-12-01
Most existing soil carbon cycle models have modeled the moisture and temperature dependence of soil respiration using deterministic response functions. However, empirical data suggest abundant variability in both of these dependencies. We here use the recently developed SUPECA (Synthesizing Unit and Equilibrium Chemistry Approximation) theory and a published dynamic energy budget based microbial model to investigate how soil carbon decomposition responds to changes in soil moisture and temperature under the influence of organo-mineral interactions. We found that both the temperature and moisture responses are hysteretic and cannot be represented by deterministic functions. We then evaluate how the multi-scale variability in temperature and moisture forcing affect soil carbon decomposition. Our results indicate that when the model is run in scenarios mimicking laboratory incubation experiments, the often-observed temperature and moisture response functions can be well reproduced. However, when such response functions are used for model extrapolation involving more transient variability in temperature and moisture forcing (as found in real ecosystems), the dynamic model that explicitly accounts for hysteresis in temperature and moisture dependency produces significantly different estimations of soil carbon decomposition, suggesting there are large biases in models that do not resolve such hysteresis. We call for more studies on organo-mineral interactions to improve modeling of such hysteresis.
USDA-ARS?s Scientific Manuscript database
Two Source Model (TSM) calculates the heat and water exchange and interaction between soil-atmosphere and vegetation-atmosphere separately. This is achieved through decomposition of radiometric surface temperature to soil and vegetation component temperatures either from multi-angular remotely sense...
Yuan, Jie; Zheng, Xiaofeng; Cheng, Fei; Zhu, Xian; Hou, Lin; Li, Jingxia; Zhang, Shuoxin
2017-10-24
Historically, intense forest hazards have resulted in an increase in the quantity of fallen wood in the Qinling Mountains. Fallen wood has a decisive influence on the nutrient cycling, carbon budget and ecosystem biodiversity of forests, and fungi are essential for the decomposition of fallen wood. Moreover, decaying dead wood alters fungal communities. The development of high-throughput sequencing methods has facilitated the ongoing investigation of relevant molecular forest ecosystems with a focus on fungal communities. In this study, fallen wood and its associated fungal communities were compared at different stages of decomposition to evaluate relative species abundance and species diversity. The physical and chemical factors that alter fungal communities were also compared by performing correspondence analysis according to host tree species across all stages of decomposition. Tree species were the major source of differences in fungal community diversity at all decomposition stages, and fungal communities achieved the highest levels of diversity at the intermediate and late decomposition stages. Interactions between various physical and chemical factors and fungal communities shared the same regulatory mechanisms, and there was no tree species-specific influence. Improving our knowledge of wood-inhabiting fungal communities is crucial for forest ecosystem conservation.
NASA Astrophysics Data System (ADS)
Chu, J. G.; Zhang, C.; Fu, G. T.; Li, Y.; Zhou, H. C.
2015-04-01
This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce the computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed problem decomposition dramatically reduces the computational demands required for attaining high quality approximations of optimal tradeoff relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed problem decomposition and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform problem decomposition when solving the complex multi-objective reservoir operation problems.
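A minimal sketch of the screening step described above, assuming the SALib package's Saltelli sampling and Sobol analysis interface: estimate total-order Sobol indices for a toy objective and retain only the decision variables above a sensitivity threshold for the reduced optimization problem. The objective function, sample size, and threshold are illustrative assumptions, not the reservoir operation model.

# Sketch of the screening step: estimate Sobol indices for a toy objective and
# retain only the most sensitive decision variables. Assumes the SALib package
# (Saltelli sampling + Sobol analysis); the objective is illustrative.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 6,
    "names": ["x%d" % i for i in range(6)],
    "bounds": [[0.0, 1.0]] * 6,
}

def toy_objective(x):
    # Only x0, x1 and their interaction matter; x2..x5 are nearly irrelevant.
    return 4.0 * x[0] + 2.0 * x[1] + 3.0 * x[0] * x[1] + 0.01 * x[2:].sum()

X = saltelli.sample(problem, 1024)
Y = np.array([toy_objective(x) for x in X])
Si = sobol.analyze(problem, Y)

keep = [n for n, st in zip(problem["names"], Si["ST"]) if st > 0.05]
print("total-order indices:", np.round(Si["ST"], 3))
print("variables kept for the reduced problem:", keep)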
Ceberio, Josu; Calvo, Borja; Mendiburu, Alexander; Lozano, Jose A
2018-02-15
In the last decade, many works in combinatorial optimisation have shown that, due to the advances in multi-objective optimisation, the algorithms from this field could be used for solving single-objective problems as well. In this sense, a number of papers have proposed multi-objectivising single-objective problems in order to use multi-objective algorithms in their optimisation. In this article, we follow up this idea by presenting a methodology for multi-objectivising combinatorial optimisation problems based on elementary landscape decompositions of their objective function. Under this framework, each of the elementary landscapes obtained from the decomposition is considered as an independent objective function to optimise. In order to illustrate this general methodology, we consider four problems from different domains: the quadratic assignment problem and the linear ordering problem (permutation domain), the 0-1 unconstrained quadratic optimisation problem (binary domain), and the frequency assignment problem (integer domain). We implemented two widely known multi-objective algorithms, NSGA-II and SPEA2, and compared their performance with that of a single-objective GA. The experiments conducted on a large benchmark of instances of the four problems show that the multi-objective algorithms clearly outperform the single-objective approaches. Furthermore, a discussion on the results suggests that the multi-objective space generated by this decomposition enhances the exploration ability, thus permitting NSGA-II and SPEA2 to obtain better results in the majority of the tested instances.
The effect of body size on the rate of decomposition in a temperate region of South Africa.
Sutherland, A; Myburgh, J; Steyn, M; Becker, P J
2013-09-10
Forensic anthropologists rely on the state of decomposition of a body to estimate the post-mortem-interval (PMI) which provides information about the natural events and environmental forces that could have affected the remains after death. Various factors are known to influence the rate of decomposition, among them temperature, rainfall and exposure of the body. However, conflicting reports appear in the literature on the effect of body size on the rate of decay. The aim of this project was to compare the decomposition rates of large pigs (Sus scrofa; 60-90 kg) with those of small pigs (<35 kg), to assess the influence of body size on decomposition rates. For the decomposition rates of small pigs, 15 piglets were assessed three times per week over a period of three months during spring and early summer. Data collection was conducted until complete skeletonization occurred. Stages of decomposition were scored according to separate categories for each anatomical region, and the point values for each region were added to determine the total body score (TBS), which represents the overall stage of decomposition for each pig. For the large pigs, data of 15 pigs were used. Scatter plots illustrating the relationships between TBS and PMI as well as TBS and accumulated degree days (ADD) were used to assess the pattern of decomposition and to compare decomposition rates between small and large pigs. Results indicated that rapid decomposition occurs during the early stages of decomposition for both samples. Large pigs showed a plateau phase in the course of advanced stages of decomposition, during which decomposition was minimal. A similar, but much shorter plateau was reached by small pigs of >20 kg at a PMI of 20-25 days, after which decomposition commenced swiftly. This was in contrast to the small pigs of <20 kg, which showed no plateau phase and their decomposition rates were swift throughout the duration of the study. Overall, small pigs decomposed 2.82 times faster than large pigs, indicating that body size does have an effect on the rate of decomposition. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
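For readers unfamiliar with ADD, the sketch below shows one common way a TBS-ADD relationship is modelled; the daily temperatures and scores are invented, and the log-linear form is an assumption for illustration, not the regression actually fitted in this study.

```python
# Minimal sketch of relating total body score (TBS) to accumulated degree days
# (ADD). All numbers are hypothetical; the log-linear form is only one commonly
# assumed relationship, not the one fitted in the paper.
import numpy as np

daily_mean_temp = np.array([18.2, 20.1, 22.5, 19.8, 21.0, 23.4, 24.1])  # degC
add = np.cumsum(np.clip(daily_mean_temp, 0.0, None))  # degree days above 0 degC

tbs_observed = np.array([3.0, 5.0, 8.0, 10.0, 13.0, 16.0, 18.0])

# Least-squares fit of TBS against log10(ADD)
slope, intercept = np.polyfit(np.log10(add), tbs_observed, 1)
print(f"TBS ~= {slope:.2f} * log10(ADD) + {intercept:.2f}")
```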
He, Y.; Zhuang, Q.; Harden, Jennifer W.; McGuire, A. David; Fan, Z.; Liu, Y.; Wickland, Kimberly P.
2014-01-01
The large amount of soil carbon in boreal forest ecosystems has the potential to influence the climate system if released in large quantities in response to warming. Thus, there is a need to better understand and represent the environmental sensitivity of soil carbon decomposition. Most soil carbon decomposition models rely on empirical relationships omitting key biogeochemical mechanisms and their response to climate change is highly uncertain. In this study, we developed a multi-layer microbial explicit soil decomposition model framework for boreal forest ecosystems. A thorough sensitivity analysis was conducted to identify dominating biogeochemical processes and to highlight structural limitations. Our results indicate that substrate availability (limited by soil water diffusion and substrate quality) is likely to be a major constraint on soil decomposition in the fibrous horizon (40–60% of soil organic carbon (SOC) pool size variation), while energy limited microbial activity in the amorphous horizon exerts a predominant control on soil decomposition (>70% of SOC pool size variation). Elevated temperature alleviated the energy constraint of microbial activity most notably in amorphous soils, whereas moisture only exhibited a marginal effect on dissolved substrate supply and microbial activity. Our study highlights the different decomposition properties and underlying mechanisms of soil dynamics between fibrous and amorphous soil horizons. Soil decomposition models should consider explicitly representing different boreal soil horizons and soil–microbial interactions to better characterize biogeochemical processes in boreal forest ecosystems. A more comprehensive representation of critical biogeochemical mechanisms of soil moisture effects may be required to improve the performance of the soil model we analyzed in this study.
NASA Astrophysics Data System (ADS)
Akhtar, Taimoor; Shoemaker, Christine
2016-04-01
Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating the non-existence of a single optimal parameterization. Hence, many experts prefer a manual approach to calibration where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive and complex decision making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi-objective optimization in the parameter estimation process which include: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selection of one from numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address the above-mentioned challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates the use of an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3 where an interactive visual and metric based analytics framework is available as a decision support tool to choose a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides a goodness-of-fit measure / metric based interactive framework for identification of a small subset (typically fewer than 10) of meaningful and diverse calibration alternatives from the numerous alternatives obtained in Stage 1. Stage 3 incorporates the use of an interactive visual analytics framework for decision support in selection of one parameter combination from the alternatives identified in Stage 2. HAMS is applied for calibration of flow parameters of a SWAT (Soil and Water Assessment Tool) model designed to simulate flow in the Cannonsville watershed in upstate New York. Results from the application of HAMS to Cannonsville indicate that efficient multi-objective optimization and interactive visual and metric based analytics can bridge the gap between the effective use of both automatic and manual strategies for parameter estimation of computationally expensive watershed models.
A benders decomposition approach to multiarea stochastic distributed utility planning
NASA Astrophysics Data System (ADS)
McCusker, Susan Ann
Until recently, small, modular generation and storage options---distributed resources (DRs)---have been installed principally in areas too remote for economic power grid connection and sensitive applications requiring backup capacity. Recent regulatory changes and DR advances, however, have led utilities to reconsider the role of DRs. To a utility facing distribution capacity bottlenecks or uncertain load growth, DRs can be particularly valuable since they can be dispersed throughout the system and constructed relatively quickly. DR value is determined by comparing its costs to avoided central generation expenses (i.e., marginal costs) and distribution investments. This requires a comprehensive central and local planning and production model, since central system marginal costs result from system interactions over space and time. This dissertation develops and applies an iterative generalized Benders decomposition approach to coordinate models for optimal DR evaluation. Three coordinated models exchange investment, net power demand, and avoided cost information to minimize overall expansion costs. Local investment and production decisions are made by a local mixed integer linear program. Central system investment decisions are made by an LP, and production costs are estimated by a stochastic multi-area production costing model with Kirchhoff's Voltage and Current Law constraints. The nested decomposition is a new and unique method for distributed utility planning that partitions the variables twice to separate local and central investment and production variables, and provides upper and lower bounds on expected expansion costs. Kirchhoff's Voltage Law imposes nonlinear, nonconvex constraints that preclude use of LP if transmission capacity is available in a looped transmission system. This dissertation develops KVL constraint approximations that permit the nested decomposition to consider new transmission resources, while maintaining linearity in the three individual models. These constraints are presented as a heuristic for the given examples; future research will investigate conditions for convergence. A ten-year multi-area example demonstrates the decomposition approach and suggests the ability of DRs and new transmission to modify capacity additions and production costs by changing demand and power flows. Results demonstrate that DR and new transmission options may lead to greater capacity additions, but resulting production cost savings more than offset extra capacity costs.
Chen, Jing; Tang, Yuan Yan; Chen, C L Philip; Fang, Bin; Lin, Yuewei; Shang, Zhaowei
2014-12-01
Protein subcellular location prediction aims to predict the location where a protein resides within a cell using computational methods. Considering the main limitations of the existing methods, we propose a hierarchical multi-label learning model FHML for both single-location proteins and multi-location proteins. The latent concepts are extracted through feature space decomposition and label space decomposition under the nonnegative data factorization framework. The extracted latent concepts are used as the codebook to indirectly connect the protein features to their annotations. We construct dual fuzzy hypergraphs to capture the intrinsic high-order relations embedded in not only feature space, but also label space. Finally, the subcellular location annotation information is propagated from the labeled proteins to the unlabeled proteins by performing dual fuzzy hypergraph Laplacian regularization. The experimental results on the six protein benchmark datasets demonstrate the superiority of our proposed method by comparing it with the state-of-the-art methods, and illustrate the benefit of exploiting both feature correlations and label correlations.
The influence of preburial insect access on the decomposition rate.
Bachmann, Jutta; Simmons, Tal
2010-07-01
This study compared total body score (TBS) in buried remains (35 cm depth) with and without insect access prior to burial. Sixty rabbit carcasses were exhumed at 50 accumulated degree day (ADD) intervals. Weight loss, TBS, intra-abdominal decomposition, carcass/soil interface temperature, and below-carcass soil pH were recorded and analyzed. Results showed significant differences (p < 0.001) in decomposition rates between carcasses with and without insect access prior to burial. An approximately 30% enhanced decomposition rate with insects was observed. TBS was the most valid tool in postmortem interval (PMI) estimation. All other variables showed only weak relationships to decomposition stages, adding little value to PMI estimation. Although progress in estimating the PMI for surface remains has been made, no previous studies have accomplished this for buried remains. This study builds a framework to which further comparable studies can contribute, to produce predictive models for PMI estimation in buried human remains.
X-Ray Thomson Scattering Without the Chihara Decomposition
NASA Astrophysics Data System (ADS)
Magyar, Rudolph; Baczewski, Andrew; Shulenburger, Luke; Hansen, Stephanie B.; Desjarlais, Michael P.; Sandia National Laboratories Collaboration
X-Ray Thomson Scattering is an important experimental technique used in dynamic compression experiments to measure the properties of warm dense matter. The fundamental property probed in these experiments is the electronic dynamic structure factor that is typically modeled using an empirical three-term decomposition (Chihara, J. Phys. F, 1987). One of the crucial assumptions of this decomposition is that the system's electrons can be either classified as bound to ions or free. This decomposition may not be accurate for materials in the warm dense regime. We present unambiguous first principles calculations of the dynamic structure factor independent of the Chihara decomposition that can be used to benchmark these assumptions. Results are generated using a finite-temperature real-time time-dependent density functional theory applied for the first time in these conditions. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Security Administration under contract DE-AC04-94AL85000.
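For orientation, the three-term Chihara form being questioned is usually written as below (standard notation from the XRTS literature, not quoted from this abstract): f_I(k) is the ionic form factor, q(k) the screening cloud, S_ii the ion-ion structure factor, S_ee^0 the free-electron contribution, and the last term describes bound-free transitions of the Z_b core electrons modulated by the ion self-motion S_s.

```latex
S_{ee}^{\mathrm{tot}}(k,\omega) =
\bigl|f_I(k) + q(k)\bigr|^{2}\, S_{ii}(k,\omega)
+ Z_f\, S_{ee}^{0}(k,\omega)
+ Z_b \int \tilde{S}_{ce}(k,\omega-\omega')\, S_{s}(k,\omega')\,\mathrm{d}\omega'
```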
Determination of Kinetic Parameters for the Thermal Decomposition of Parthenium hysterophorus
NASA Astrophysics Data System (ADS)
Dhaundiyal, Alok; Singh, Suraj B.; Hanon, Muammel M.; Rawat, Rekha
2018-02-01
A kinetic study of the pyrolysis of Parthenium hysterophorus is carried out using thermogravimetric analysis (TGA) equipment. The present study investigates the thermal degradation and the determination of kinetic parameters such as the activation energy E and the frequency factor A using the model-free methods of Flynn-Wall-Ozawa (FWO), Kissinger-Akahira-Sunose (KAS) and Kissinger, and the model-fitting Coats-Redfern method. The results of the thermal decomposition process divide the decomposition of Parthenium hysterophorus into three main stages: dehydration, active pyrolysis and passive pyrolysis. DTG thermograms show that an increase in the heating rate causes the temperature peaks at the maximum weight-loss rate to shift towards a higher temperature regime. The results are compared with the Coats-Redfern (integral) method, and the experiments show that the values of the kinetic parameters obtained from the model-free methods are in good agreement. The results obtained through the Coats-Redfern model at different heating rates are not promising; however, the diffusion models provided a good fit to the experimental data.
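For context, the linearized expressions conventionally used for these methods are reproduced below (standard textbook forms, not taken from the paper itself), where β is the heating rate, α the conversion, g(α) the integral reaction model, T_p the DTG peak temperature, and R the gas constant.

```latex
\begin{align*}
\text{FWO:}\quad & \ln\beta = \ln\frac{A E_\alpha}{R\,g(\alpha)} - 5.331 - 1.052\,\frac{E_\alpha}{R T_\alpha},\\
\text{KAS:}\quad & \ln\frac{\beta}{T_\alpha^{2}} = \ln\frac{A R}{E_\alpha\,g(\alpha)} - \frac{E_\alpha}{R T_\alpha},\\
\text{Kissinger:}\quad & \ln\frac{\beta}{T_p^{2}} = \ln\frac{A R}{E} - \frac{E}{R T_p},\\
\text{Coats-Redfern:}\quad & \ln\frac{g(\alpha)}{T^{2}} = \ln\!\left[\frac{A R}{\beta E}\left(1 - \frac{2RT}{E}\right)\right] - \frac{E}{R T}.
\end{align*}
```

In the isoconversional (FWO, KAS) forms, plotting the left-hand side against 1/T at fixed conversion across heating rates yields E_α from the slope without assuming a reaction model.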
Keough, N; L'Abbé, E N; Steyn, M; Pretorius, S
2015-01-01
Forensic anthropologists are tasked with interpreting the sequence of events from death to the discovery of a body. Burned bone often evokes questions as to the timing of burning events. The purpose of this study was to assess the progression of thermal damage on bones with advancement in decomposition. Twenty-five pigs in various stages of decomposition (fresh, early, advanced, early and late skeletonisation) were exposed to fire for 30 min. The scored heat-related features on bone included colour change (unaltered, charred, calcined), brown and heat borders, heat lines, delineation, greasy bone, joint shielding, predictable and minimal cracking, delamination and heat-induced fractures. Colour changes were scored according to a ranked percentage scale (0-3) and the remaining traits as absent or present (0/1). Kappa statistics was used to evaluate intra- and inter-observer error. Transition analysis was used to formulate probability mass functions [P(X=j|i)] to predict decomposition stage from the scored features of thermal destruction. Nine traits displayed potential to predict decomposition stage from burned remains. An increase in calcined and charred bone occurred synchronously with advancement of decomposition with subsequent decrease in unaltered surfaces. Greasy bone appeared more often in the early/fresh stages (fleshed bone). Heat borders, heat lines, delineation, joint shielding, predictable and minimal cracking are associated with advanced decomposition, when bone remains wet but lacks extensive soft tissue protection. Brown burn/borders, delamination and other heat-induced fractures are associated with early and late skeletonisation, showing that organic composition of bone and percentage of flesh present affect the manner in which it burns. No statistically significant difference was noted among observers for the majority of the traits, indicating that they can be scored reliably. Based on the data analysis, the pattern of heat-induced changes may assist in estimating decomposition stage from unknown, burned remains. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Cockle, Diane Lyn; Bell, Lynne S
2017-03-01
Little is known about the nature and trajectory of human decomposition in Canada. This study involved the examination of 96 retrospective police death investigation cases selected using the Canadian ViCLAS (Violent Crime Linkage Analysis System) and sudden death police databases. A classification system was designed and applied based on the latest visible stages of autolysis (stages 1-2), putrefaction (3-5) and skeletonisation (6-8) observed. The analysis of the progression of decomposition using time (post mortem interval (PMI) in days) and temperature (accumulated degree days (ADD)) scores found considerable variability during the putrefaction and skeletonisation phases, with poor predictability noted after stage 5 (post bloat). The visible progression of decomposition outdoors was characterized by a brown to black discolouration at stage 5 and remnant desiccated black tissue at stage 7. No bodies were totally skeletonised in under one year. Mummification of tissue was rare with earlier onset in winter as opposed to summer, considered likely due to lower seasonal humidity. It was found that neither ADD nor the PMI was a significant variable for the decomposition score, with correlations of 53% for temperature and 41% for time. It took almost twice as much time and 1.5 times more temperature (ADD) for the set of cases exposed to cold and freezing temperatures (4°C or less) to reach putrefaction compared to the warm group. The amount of precipitation and/or clothing had a negligible impact on the advancement of decomposition, whereas the lack of sun exposure (full shade) had a small positive effect. This study found that the poor predictability of onset and the duration of late stage decomposition, combined with our limited understanding of the full range of variables which influence the speed of decomposition, makes PMI estimations for exposed terrestrial cases in Canada unreliable, but also calls into question PMI estimations elsewhere. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.
Direct Iterative Nonlinear Inversion by Multi-frequency T-matrix Completion
NASA Astrophysics Data System (ADS)
Jakobsen, M.; Wu, R. S.
2016-12-01
Researchers in the mathematical physics community have recently proposed a conceptually new method for solving nonlinear inverse scattering problems (like FWI) which is inspired by the theory of nonlocality of physical interactions. The conceptually new method, which may be referred to as the T-matrix completion method, is very interesting since it is not based on linearization at any stage. Also, there are no gradient vectors or (inverse) Hessian matrices to calculate. However, the convergence radius of this promising T-matrix completion method is seriously restricted by its use of single-frequency scattering data only. In this study, we have developed a modified version of the T-matrix completion method which we believe is more suitable for applications to nonlinear inverse scattering problems in (exploration) seismology, because it makes use of multi-frequency data. Essentially, we have simplified the single-frequency T-matrix completion method of Levinson and Markel and combined it with the standard sequential frequency inversion (multi-scale regularization) method. For each frequency, we first estimate the experimental T-matrix by using the Moore-Penrose pseudo inverse concept. Then this experimental T-matrix is used to initiate an iterative procedure for successive estimation of the scattering potential and the T-matrix, using the Lippmann-Schwinger equation for the nonlinear relation between these two quantities. The main physical requirements in the basic iterative cycle are that the T-matrix should be data-compatible and the scattering potential operator should be dominantly local; although a non-local scattering potential operator is allowed in the intermediate iterations. In our simplified T-matrix completion strategy, we ensure that the T-matrix updates are always data compatible simply by adding a suitable correction term in the real space coordinate representation. The use of singular-value decomposition representations is not required in our formulation since we have developed an efficient domain decomposition method. The results of several numerical experiments for the SEG/EAGE salt model illustrate the importance of using multi-frequency data when performing frequency domain full waveform inversion in strongly scattering media via the new concept of T-matrix completion.
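The operator relations underlying such T-matrix schemes are standard and worth recalling; the expressions below use textbook notation (V the scattering potential, G_0 the background Green's operator, G_r and G_s receiver- and source-side Green's operators) and are a schematic sketch, not the authors' exact formulation.

```latex
\begin{align*}
T &= V + V\,G_0\,T \quad\Longleftrightarrow\quad V = T\,\bigl(I + G_0 T\bigr)^{-1},\\
d_{\mathrm{sc}} &\approx G_r\, T\, G_s \qquad \text{(scattered data expressed through the T-matrix)}.
\end{align*}
```

Completion then amounts to alternating between enforcing data compatibility on T (second relation) and locality on V (first relation), without ever linearizing the map from V to the data.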
Hyde, Embriette R.; Haarmann, Daniel P.; Lynne, Aaron M.; Bucheli, Sibyl R.; Petrosino, Joseph F.
2013-01-01
Human decomposition is a mosaic system with an intimate association between biotic and abiotic factors. Despite the integral role of bacteria in the decomposition process, few studies have catalogued bacterial biodiversity for terrestrial scenarios. To explore the microbiome of decomposition, two cadavers were placed at the Southeast Texas Applied Forensic Science facility and allowed to decompose under natural conditions. The bloat stage of decomposition, a stage easily identified in taphonomy and readily attributed to microbial physiology, was targeted. Each cadaver was sampled at two time points, at the onset and end of the bloat stage, from various body sites including internal locations. Bacterial samples were analyzed by pyrosequencing of the 16S rRNA gene. Our data show a shift from aerobic bacteria to anaerobic bacteria in all body sites sampled and demonstrate variation in community structure between bodies, between sample sites within a body, and between initial and end points of the bloat stage within a sample site. These data are best not viewed as points of comparison but rather additive data sets. While some species recovered are the same as those observed in culture-based studies, many are novel. Our results are preliminary and add to a larger emerging data set; a more comprehensive study is needed to further dissect the role of bacteria in human decomposition. PMID:24204941
Hyde, Embriette R; Haarmann, Daniel P; Lynne, Aaron M; Bucheli, Sibyl R; Petrosino, Joseph F
2013-01-01
Human decomposition is a mosaic system with an intimate association between biotic and abiotic factors. Despite the integral role of bacteria in the decomposition process, few studies have catalogued bacterial biodiversity for terrestrial scenarios. To explore the microbiome of decomposition, two cadavers were placed at the Southeast Texas Applied Forensic Science facility and allowed to decompose under natural conditions. The bloat stage of decomposition, a stage easily identified in taphonomy and readily attributed to microbial physiology, was targeted. Each cadaver was sampled at two time points, at the onset and end of the bloat stage, from various body sites including internal locations. Bacterial samples were analyzed by pyrosequencing of the 16S rRNA gene. Our data show a shift from aerobic bacteria to anaerobic bacteria in all body sites sampled and demonstrate variation in community structure between bodies, between sample sites within a body, and between initial and end points of the bloat stage within a sample site. These data are best not viewed as points of comparison but rather additive data sets. While some species recovered are the same as those observed in culture-based studies, many are novel. Our results are preliminary and add to a larger emerging data set; a more comprehensive study is needed to further dissect the role of bacteria in human decomposition.
NASA Astrophysics Data System (ADS)
Elbeih, Ahmed; Abd-Elghany, Mohamed; Elshenawy, Tamer
2017-03-01
The vacuum stability test (VST) is mainly used to study the compatibility and stability of energetic materials. In this work, VST has been investigated to study the thermal decomposition kinetics of four cyclic nitramines, 1,3,5-trinitro-1,3,5-triazinane (RDX) and 1,3,5,7-tetranitro-1,3,5,7-tetrazocane (HMX), cis-1,3,4,6-tetranitrooctahydroimidazo-[4,5-d]imidazole (BCHMX), and 2,4,6,8,10,12-hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane (ε-HNIW, CL-20), bonded by a polyurethane matrix based on hydroxyl-terminated polybutadiene (HTPB). Model-fitting and model-free (isoconversional) methods have been applied to determine the decomposition kinetics from VST results. For comparison, the decomposition kinetics were determined isothermally by the ignition delay technique and non-isothermally by Advanced Kinetics and Technology Solution (AKTS) software. The activation energies for thermolysis obtained by the isoconversional method based on the VST technique for RDX/HTPB, HMX/HTPB, BCHMX/HTPB and CL20/HTPB were 157.1, 203.1, 190.0 and 176.8 kJ mol-1, respectively. The model-fitting method showed that the mechanism of thermal decomposition of BCHMX/HTPB is controlled by the nucleation model while all the other studied PBXs are controlled by the diffusion models. A linear relationship between the ignition temperatures and the activation energies was observed. BCHMX/HTPB is an interesting new PBX that is still in the research stage.
Exploring Galaxy Formation and Evolution via Structural Decomposition
NASA Astrophysics Data System (ADS)
Kelvin, Lee; Driver, Simon; Robotham, Aaron; Hill, David; Cameron, Ewan
2010-06-01
The Galaxy And Mass Assembly (GAMA) structural decomposition pipeline (GAMA-SIGMA Structural Investigation of Galaxies via Model Analysis) will provide multi-component information for a sample of ~12,000 galaxies across 9 bands ranging from near-UV to near-IR. This will allow the relationship between structural properties and broadband, optical-to-near-IR, spectral energy distributions of bulge, bar, and disk components to be explored, revealing clues as to the history of baryonic mass assembly within a hierarchical clustering framework. Data is initially taken from the SDSS & UKIDSS-LAS surveys to test the robustness of our automated decomposition pipeline. This will eventually be replaced with the forthcoming higher-resolution VST & VISTA surveys data, expanding the sample to ~30,000 galaxies.
A Wavelet Polarization Decomposition Net Model for Polarimetric SAR Image Classification
NASA Astrophysics Data System (ADS)
He, Chu; Ou, Dan; Yang, Teng; Wu, Kun; Liao, Mingsheng; Chen, Erxue
2014-11-01
In this paper, a deep model based on wavelet texture has been proposed for Polarimetric Synthetic Aperture Radar (PolSAR) image classification, inspired by recent successful deep learning methods. Our model is designed to learn powerful and informative representations to improve the generalization ability for complex scene classification tasks. Given the influence of speckle noise in Polarimetric SAR images, wavelet polarization decomposition is applied first to obtain basic and discriminative texture features, which are then embedded into a Deep Neural Network (DNN) in order to compose multi-layer, higher-level representations. We demonstrate that the model can produce a powerful representation that captures some otherwise untraceable information from Polarimetric SAR images and shows promising performance in comparison with traditional SAR image classification methods on the SAR image dataset.
Modeling sustainability in renewable energy supply chain systems
NASA Astrophysics Data System (ADS)
Xie, Fei
This dissertation aims at modeling sustainability of renewable fuel supply chain systems against emerging challenges. In particular, the dissertation focuses on the biofuel supply chain system design, and develops advanced modeling frameworks and corresponding solution methods for tackling challenges in sustaining biofuel supply chain systems. These challenges include: (1) to integrate "environmental thinking" into the long-term biofuel supply chain planning; (2) to adopt multimodal transportation to mitigate seasonality in biofuel supply chain operations; (3) to provide strategies in hedging against uncertainty from conversion technology; and (4) to develop methodologies in long-term sequential planning of the biofuel supply chain under uncertainties. All models are mixed integer programs, which also involve multi-objective programming and two-stage/multistage stochastic programming methods. In particular, for the long-term sequential planning under uncertainties, to reduce the computational challenges due to the exponential expansion of the scenario tree, I also developed an efficient ND-Max method that is more efficient than CPLEX and the Nested Decomposition method. Through result analysis of four independent studies, it is found that the proposed modeling frameworks can effectively improve the economic performance, enhance environmental benefits and reduce risks due to systems uncertainties for the biofuel supply chain systems.
Caffo, Brian S.; Crainiceanu, Ciprian M.; Verduzco, Guillermo; Joel, Suresh; Mostofsky, Stewart H.; Bassett, Susan Spear; Pekar, James J.
2010-01-01
Functional connectivity is the study of correlations in measured neurophysiological signals. Altered functional connectivity has been shown to be associated with a variety of cognitive and memory impairments and dysfunction, including Alzheimer’s disease. In this manuscript we use a two-stage application of the singular value decomposition to obtain data driven population-level measures of functional connectivity in functional magnetic resonance imaging (fMRI). The method is computationally simple and amenable to high dimensional fMRI data with large numbers of subjects. Simulation studies suggest the ability of the decomposition methods to recover population brain networks and their associated loadings. We further demonstrate the utility of these decompositions in a functional logistic regression model. The method is applied to a novel fMRI study of Alzheimer’s disease risk under a verbal paired associates task. We found an indication of alternative connectivity in clinically asymptomatic at-risk subjects when compared to controls, which was not significant in the light of multiple comparisons adjustment. The relevant brain network loads primarily on the temporal lobe and overlaps significantly with the olfactory areas and temporal poles. PMID:20227508
Caffo, Brian S; Crainiceanu, Ciprian M; Verduzco, Guillermo; Joel, Suresh; Mostofsky, Stewart H; Bassett, Susan Spear; Pekar, James J
2010-07-01
Functional connectivity is the study of correlations in measured neurophysiological signals. Altered functional connectivity has been shown to be associated with a variety of cognitive and memory impairments and dysfunction, including Alzheimer's disease. In this manuscript we use a two-stage application of the singular value decomposition to obtain data driven population-level measures of functional connectivity in functional magnetic resonance imaging (fMRI). The method is computationally simple and amenable to high dimensional fMRI data with large numbers of subjects. Simulation studies suggest the ability of the decomposition methods to recover population brain networks and their associated loadings. We further demonstrate the utility of these decompositions in a functional logistic regression model. The method is applied to a novel fMRI study of Alzheimer's disease risk under a verbal paired associates task. We found an indication of alternative connectivity in clinically asymptomatic at-risk subjects when compared to controls, which was not significant in the light of multiple comparisons adjustment. The relevant brain network loads primarily on the temporal lobe and overlaps significantly with the olfactory areas and temporal poles. Copyright (c) 2010 Elsevier Inc. All rights reserved.
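One plausible reading of the two-stage SVD described above is sketched below with synthetic data; the per-subject matrix shape, the number of retained components, and the loading construction are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch of a two-stage SVD for population-level connectivity, assuming each
# subject's data is a (time x voxel) matrix; synthetic data stands in for real
# fMRI, and k components per stage are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_time, n_voxels, k = 8, 120, 500, 5

# Stage 1: per-subject SVD, keep the top-k right singular vectors (spatial maps)
subject_maps = []
for _ in range(n_subjects):
    Y = rng.standard_normal((n_time, n_voxels))
    _, _, Vt = np.linalg.svd(Y, full_matrices=False)
    subject_maps.append(Vt[:k])              # (k x voxel)

# Stage 2: SVD of the stacked subject-level maps gives population components
stacked = np.vstack(subject_maps)            # (n_subjects*k x voxel)
_, s, Vt_pop = np.linalg.svd(stacked, full_matrices=False)
population_networks = Vt_pop[:k]             # population-level spatial networks

# Subject loadings on the population networks (usable in a regression model)
loadings = np.array([m @ population_networks.T for m in subject_maps])
print(population_networks.shape, loadings.shape)
```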
Lv, Yong; Song, Gangbing
2018-01-01
Rolling bearings are important components in rotary machinery systems. In the field of multi-fault diagnosis of rolling bearings, the vibration signal collected from single channels tends to miss some fault characteristic information. Using multiple sensors to collect signals at different locations on the machine to obtain multivariate signals can remedy this problem. The adverse effect of a power imbalance between the various channels is inevitable, and unfavorable for multivariate signal processing. As a useful multivariate signal processing method, adaptive-projection intrinsically transformed multivariate empirical mode decomposition (APIT-MEMD) exhibits better performance than MEMD by adopting an adaptive projection strategy to alleviate power imbalances. The filter bank properties of APIT-MEMD are also adopted to enable more accurate and stable intrinsic mode functions (IMFs), and to ease mode mixing problems in multi-fault frequency extractions. By aligning IMF sets into a third order tensor, high order singular value decomposition (HOSVD) can be employed to estimate the fault number. The fault correlation factor (FCF) analysis is used to conduct correlation analysis, in order to determine effective IMFs; the characteristic frequencies of multi-faults can then be extracted. Numerical simulations and the application to a multi-fault situation demonstrate that the proposed method is promising for multi-fault diagnosis of multivariate rolling bearing signals. PMID:29659510
Yuan, Rui; Lv, Yong; Song, Gangbing
2018-04-16
Rolling bearings are important components in rotary machinery systems. In the field of multi-fault diagnosis of rolling bearings, the vibration signal collected from single channels tends to miss some fault characteristic information. Using multiple sensors to collect signals at different locations on the machine to obtain multivariate signals can remedy this problem. The adverse effect of a power imbalance between the various channels is inevitable, and unfavorable for multivariate signal processing. As a useful multivariate signal processing method, adaptive-projection intrinsically transformed multivariate empirical mode decomposition (APIT-MEMD) exhibits better performance than MEMD by adopting an adaptive projection strategy to alleviate power imbalances. The filter bank properties of APIT-MEMD are also adopted to enable more accurate and stable intrinsic mode functions (IMFs), and to ease mode mixing problems in multi-fault frequency extractions. By aligning IMF sets into a third order tensor, high order singular value decomposition (HOSVD) can be employed to estimate the fault number. The fault correlation factor (FCF) analysis is used to conduct correlation analysis, in order to determine effective IMFs; the characteristic frequencies of multi-faults can then be extracted. Numerical simulations and the application to a multi-fault situation demonstrate that the proposed method is promising for multi-fault diagnosis of multivariate rolling bearing signals.
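As an illustration of the HOSVD step used to estimate the fault number, the sketch below unfolds a synthetic channels-by-IMFs-by-samples tensor along each mode; counting dominant singular values of the IMF-mode unfolding is an illustrative criterion, not necessarily the exact one used in the paper.

```python
# HOSVD-style mode unfoldings of a third-order IMF tensor. The tensor is
# synthetic, and the dominance threshold below is an illustrative assumption.
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 6, 1024))        # channels x IMFs x samples

# HOSVD factor matrices: left singular vectors of each mode unfolding
factors = [np.linalg.svd(unfold(X, m), full_matrices=False)[0] for m in range(X.ndim)]
print("factor matrix shapes:", [f.shape for f in factors])

# Singular values of the IMF-mode unfolding; count the dominant ones
s = np.linalg.svd(unfold(X, 1), compute_uv=False)
fault_number = int(np.sum(s > 0.1 * s[0]))
print("estimated number of dominant components:", fault_number)
```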
Zhou, Quancheng; Sheng, Guihua
2012-01-01
The thermal decomposition of Perilla frutescens polysaccharide was examined by thermogravimetry, differential thermogravimetry, and differential thermal analysis. The results showed that the mass loss of the substance proceeded in three steps. The first stage can be attributed to the expulsion of water from ambient temperature to 182°C. The second stage corresponded to devolatilization from 182°C to 439°C. The residue slowly degraded in the third stage. The weight loss in air is faster than that in nitrogen, because the oxygen in air accelerated the pyrolysis reaction. The heating rate significantly affected the pyrolysis of the sample. Similar activation energies of the degradation process (210–211 kJ mol−1) were obtained by the FWO, KAS, and Popescu techniques. According to the Popescu mechanism functions, the possible kinetic model was estimated to be the Avrami–Erofeev model, g(α) = [−ln(1−α)]^4. PMID:23300715
A density functional theory study of the decomposition mechanism of nitroglycerin.
Pei, Liguan; Dong, Kehai; Tang, Yanhui; Zhang, Bo; Yu, Chang; Li, Wenzuo
2017-08-21
The detailed decomposition mechanism of nitroglycerin (NG) in the gas phase was studied by examining reaction pathways using density functional theory (DFT) and canonical variational transition state theory combined with a small-curvature tunneling correction (CVT/SCT). The mechanism of NG autocatalytic decomposition was investigated at the B3LYP/6-31G(d,p) level of theory. Five possible decomposition pathways involving NG were identified and the rate constants for the pathways at temperatures ranging from 200 to 1000 K were calculated using CVT/SCT. There was found to be a lower energy barrier to the β-H abstraction reaction than to the α-H abstraction reaction during the initial step in the autocatalytic decomposition of NG. The decomposition pathways for CHOCOCHONO2 (a product obtained following the abstraction of three H atoms from NG by NO2) include O-NO2 cleavage or isomer production, meaning that the autocatalytic decomposition of NG has two reaction pathways, both of which are exothermic. The rate constants for these two reaction pathways are greater than the rate constants for the three pathways corresponding to unimolecular NG decomposition. The overall process of NG decomposition can be divided into two stages based on the NO2 concentration, which affects the decomposition products and reactions. In the first stage, the reaction pathway corresponding to O-NO2 cleavage is the main pathway, but the rates of the two autocatalytic decomposition pathways increase with increasing NO2 concentration. However, when a threshold NO2 concentration is reached, the NG decomposition process enters its second stage, with the two pathways for NG autocatalytic decomposition becoming the main and secondary reaction pathways.
Reduced-Order Modeling: New Approaches for Computational Physics
NASA Technical Reports Server (NTRS)
Beran, Philip S.; Silva, Walter A.
2001-01-01
In this paper, we review the development of new reduced-order modeling techniques and discuss their applicability to various problems in computational physics. Emphasis is given to methods based on Volterra series representations and the proper orthogonal decomposition. Results are reported for different nonlinear systems to provide clear examples of the construction and use of reduced-order models, particularly in the multi-disciplinary field of computational aeroelasticity. Unsteady aerodynamic and aeroelastic behaviors of two-dimensional and three-dimensional geometries are described. Large increases in computational efficiency are obtained through the use of reduced-order models, thereby justifying the initial computational expense of constructing these models and motivating their use for multi-disciplinary design analysis.
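A compact sketch of the proper orthogonal decomposition step mentioned above: POD modes are obtained from the SVD of a snapshot matrix and a new state is projected onto the reduced basis. The snapshot data and truncation rank are arbitrary placeholders, and the Volterra-series part of the paper is not shown.

```python
# Minimal POD sketch: reduced basis from the SVD of a snapshot matrix, then
# projection/reconstruction of a new full-order state. Data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_dof, n_snapshots, r = 2000, 80, 10

snapshots = rng.standard_normal((n_dof, n_snapshots))   # columns = flow states
mean_state = snapshots.mean(axis=1, keepdims=True)

# POD modes are the leading left singular vectors of the centered snapshots
U, s, _ = np.linalg.svd(snapshots - mean_state, full_matrices=False)
Phi = U[:, :r]                                           # reduced basis (n_dof x r)

# Reduced-order representation of a new full-order state
x_new = rng.standard_normal((n_dof, 1))
a = Phi.T @ (x_new - mean_state)                         # r generalized coordinates
x_rec = mean_state + Phi @ a                             # reconstruction
print("relative reconstruction error:",
      float(np.linalg.norm(x_new - x_rec) / np.linalg.norm(x_new)))
```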
Design and testing of a novel multi-stroke micropositioning system with variable resolutions.
Xu, Qingsong
2014-02-01
Multi-stroke stages are demanded in micro-/nanopositioning applications which require smaller and larger motion strokes with fine and coarse resolutions, respectively. This paper presents the conceptual design of a novel multi-stroke, multi-resolution micropositioning stage driven by a single actuator for each working axis. It eliminates the issue of the interference among different drives, which resides in conventional multi-actuation stages. The stage is devised based on a fully compliant variable stiffness mechanism, which exhibits unequal stiffnesses in different strokes. Resistive strain sensors are employed to offer variable position resolutions in the different strokes. To quantify the design of the motion strokes and coarse/fine resolution ratio, analytical models are established. These models are verified through finite-element analysis simulations. A proof-of-concept prototype XY stage is designed, fabricated, and tested to demonstrate the feasibility of the presented ideas. Experimental results of static and dynamic testing validate the effectiveness of the proposed design.
Pharmacokinetic analysis of multi PEG-theophylline conjugates.
Grassi, Mario; Bonora, Gian Maria; Drioli, Sara; Cateni, Francesca; Zacchigna, Marina
2012-10-01
In an attempt to prolong the effect of drugs, a new branched, high-molecular weight multimeric poly(ethylene glycol) (MultiPEG), synthesized with a simple assembling procedure that allowed the introduction of functional groups with divergent and selective reactivity, was employed as a drug carrier. In particular, attention was focused on the pharmacokinetics of theophylline (THEO) and THEO-MultiPEG conjugates after oral administration in rabbits. Pharmacokinetic behavior was studied according to an ad hoc developed mathematical model accounting for THEO-MultiPEG in vivo absorption and decomposition into drug (THEO) and carrier (MultiPEG). The branched high-molecular weight MultiPEG proved to be a reliable drug delivery system able to prolong the residence of theophylline in the blood after oral administration of a THEO-MultiPEG solution. The analysis of experimental data by means of the developed mathematical model revealed that the prolongation of the THEO effect was essentially due to the low THEO-MultiPEG permeability in comparison to that of pure THEO. Copyright © 2012 Elsevier Ltd. All rights reserved.
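A toy compartment model in the spirit of the description (slow absorption of the conjugate followed by decomposition into free THEO and elimination) is sketched below; the three-compartment structure and all rate constants are hypothetical, not the authors' fitted model.

```python
# Illustrative (hypothetical) compartment model: oral THEO-MultiPEG is absorbed
# slowly and then decomposes in plasma into free THEO, which is eliminated.
# All rate constants are made-up values for demonstration only.
import numpy as np
from scipy.integrate import solve_ivp

ka, kdec, kel = 0.08, 0.15, 0.35   # 1/h: absorption, decomposition, elimination

def rhs(t, y):
    gut_conj, plasma_conj, plasma_theo = y
    return [
        -ka * gut_conj,                          # conjugate leaving the gut
        ka * gut_conj - kdec * plasma_conj,      # conjugate in plasma
        kdec * plasma_conj - kel * plasma_theo,  # free theophylline in plasma
    ]

sol = solve_ivp(rhs, (0.0, 48.0), [1.0, 0.0, 0.0], t_eval=np.linspace(0, 48, 97))
t_peak = sol.t[np.argmax(sol.y[2])]
print(f"peak free-THEO time (hypothetical parameters): {t_peak:.1f} h")
```

With the slow absorption and decomposition steps in series, the free-drug peak is both delayed and flattened, which is the qualitative behaviour the conjugate is intended to produce.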
NASA Astrophysics Data System (ADS)
Zhu, Ming; Liu, Tingting; Wang, Shu; Zhang, Kesheng
2017-08-01
Existing two-frequency reconstructive methods can only capture primary (single) molecular relaxation processes in excitable gases. In this paper, we present a reconstructive method based on the novel decomposition of frequency-dependent acoustic relaxation spectra to capture the entire molecular multimode relaxation process. This decomposition of acoustic relaxation spectra is developed from the frequency-dependent effective specific heat, indicating that a multi-relaxation process is the sum of the interior single-relaxation processes. Based on this decomposition, we can reconstruct the entire multi-relaxation process by capturing the relaxation times and relaxation strengths of N interior single-relaxation processes, using the measurements of acoustic absorption and sound speed at 2N frequencies. Experimental data for the gas mixtures CO2-N2 and CO2-O2 validate our decomposition and reconstruction approach.
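The decomposition the authors describe can be written, in standard notation, as a sum of single-relaxation (Debye-type) contributions; the expressions below are textbook forms with τ_i the relaxation times, C_i the relaxational heat capacities and μ_m,i the peak absorption per wavelength of the i-th process, not equations quoted from the paper.

```latex
\begin{align*}
C_V^{\mathrm{eff}}(\omega) &= C_V^{\infty} + \sum_{i=1}^{N} \frac{C_i}{1 + j\omega\tau_i},\\
\mu(\omega) &= \sum_{i=1}^{N} \mu_{m,i}\,\frac{2\,\omega\tau_i}{1 + (\omega\tau_i)^{2}},
\end{align*}
```

where μ(ω) is the acoustic absorption per wavelength; measurements of absorption and sound speed at 2N frequencies then suffice, in principle, to fit the N pairs (μ_{m,i}, τ_i) and reconstruct the full multi-relaxation spectrum.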
Mora-Gómez, Juanita; Elosegi, Arturo; Duarte, Sofia; Cássio, Fernanda; Pascoal, Cláudia; Romaní, Anna M
2016-08-01
Microorganisms are key drivers of leaf litter decomposition; however, the mechanisms underlying the dynamics of different microbial groups are poorly understood. We investigated the effects of seasonal variation and invertebrates on fungal and bacterial dynamics, and on leaf litter decomposition. We followed the decomposition of Populus nigra litter in a Mediterranean stream through an annual cycle, using fine and coarse mesh bags. Irrespective of the season, microbial decomposition followed two stages. Initially, bacterial contribution to total microbial biomass was higher compared to later stages, and it was related to disaccharide and lignin degradation; in a later stage, bacteria were less important and were associated with hemicellulose and cellulose degradation, while fungi were related to lignin decomposition. The relevance of microbial groups in decomposition differed among seasons: fungi were more important in spring, whereas in summer, water quality changes seemed to favour bacteria and slowed down lignin and hemicellulose degradation. Invertebrates influenced litter-associated microbial assemblages (especially bacteria), stimulated enzyme efficiencies and reduced fungal biomass. We conclude that bacterial and fungal assemblages play distinctive roles in microbial decomposition and differ in their sensitivity to environmental changes, ultimately affecting litter decomposition, which might be particularly relevant in highly seasonal ecosystems, such as intermittent streams. © FEMS 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Chemical Kinetics of the TPS and Base Bleeding During Flight Test
NASA Technical Reports Server (NTRS)
Osipov, Viatcheslav; Ponizhovskaya, Ekaterina; Hafiychuck, Halyna; Luchinsky, Dmitry; Smelyanskiy, Vadim; Dagostino, Mark; Canabal, Francisco; Mobley, Brandon L.
2012-01-01
The present research deals with the thermal degradation of polyurethane foam (PUF) during flight test. A model of thermal decomposition was developed that accounts for polyurethane kinetics parameters extracted from thermogravimetric analyses and radial heat losses to the surrounding environment. The model predicts the mass loss of foam and the temperature and kinetics of release of the exhaust gases and char as a function of heat and radiation loads. When PUF is heated, urethane bonds break into polyol and isocyanate. In the first stage, isocyanate pyrolyses and oxidizes. As a result, the thermo-char and oil droplets (yellow smoke) are released. In the second decomposition stage, pyrolysis and oxidization of liquid polyol occur. Next, the kinetics of chemical compound release and the information about the reactions occurring in the base area are coupled to the CFD simulations of the base flow in a single first stage motor vertically stacked vehicle configuration. The CFD simulations are performed to estimate the contribution of the hot out-gassing, chemical reactions, and char oxidation to the temperature rise of the base flow. The results of the simulations are compared with the flight test data.
Towards a predictive thermal explosion model for energetic materials
NASA Astrophysics Data System (ADS)
Yoh, Jack J.; McClelland, Matthew A.; Maienschein, Jon L.; Wardell, Jeffrey F.
2005-01-01
We present an overview of models and computational strategies for simulating the thermal response of high explosives using a multi-physics hydrodynamics code, ALE3D. Recent improvements to the code have aided our computational capability in modeling the behavior of energetic materials systems exposed to strong thermal environments such as fires. We apply these models and computational techniques to a thermal explosion experiment involving the slow heating of a confined explosive. The model includes the transition from slow heating to rapid deflagration in which the time scale decreases from days to hundreds of microseconds. Thermal, mechanical, and chemical effects are modeled during all phases of this process. The heating stage involves thermal expansion and decomposition according to an Arrhenius kinetics model while a pressure-dependent burn model is employed during the explosive phase. We describe and demonstrate the numerical strategies employed to make the transition from slow to fast dynamics. In addition, we investigate the sensitivity of wall expansion rates to numerical strategies and parameters. Results from a one-dimensional model show that violence is influenced by the presence of a gap between the explosive and container. In addition, a comparison is made between 2D model and measured results for the explosion temperature and tube wall expansion profiles.
Derivation of optimal joint operating rules for multi-purpose multi-reservoir water-supply system
NASA Astrophysics Data System (ADS)
Tan, Qiao-feng; Wang, Xu; Wang, Hao; Wang, Chao; Lei, Xiao-hui; Xiong, Yi-song; Zhang, Wei
2017-08-01
The derivation of joint operating policy is a challenging task for a multi-purpose multi-reservoir system. This study proposed an aggregation-decomposition model to guide the joint operation of multi-purpose multi-reservoir system, including: (1) an aggregated model based on the improved hedging rule to ensure the long-term water-supply operating benefit; (2) a decomposed model to allocate the limited release to individual reservoirs for the purpose of maximizing the total profit of the facing period; and (3) a double-layer simulation-based optimization model to obtain the optimal time-varying hedging rules using the non-dominated sorting genetic algorithm II, whose objectives were to minimize maximum water deficit and maximize water supply reliability. The water-supply system of Li River in Guangxi Province, China, was selected for the case study. The results show that the operating policy proposed in this study is better than conventional operating rules and aggregated standard operating policy for both water supply and hydropower generation due to the use of hedging mechanism and effective coordination among multiple objectives.
Decomposition Rate and Pattern in Hanging Pigs.
Lynch-Aird, Jeanne; Moffatt, Colin; Simmons, Tal
2015-09-01
Accurate prediction of the postmortem interval requires an understanding of the decomposition process and the factors acting upon it. A controlled experiment, over 60 days at an outdoor site in the northwest of England, used 20 freshly killed pigs (Sus scrofa) as human analogues to study decomposition rate and pattern. Ten pigs were hung off the ground and ten placed on the surface. Observed differences in the decomposition pattern required a new decomposition scoring scale to be produced for the hanging pigs to enable comparisons with the surface pigs. The difference in the rate of decomposition between hanging and surface pigs was statistically significant (p=0.001). Hanging pigs reached advanced decomposition stages sooner, but lagged behind during the early stages. This delay is believed to result from lower variety and quantity of insects, due to restricted beetle access to the aerial carcass, and/or writhing maggots falling from the carcass. © 2015 American Academy of Forensic Sciences.
Evaluation of RISAT-1 SAR data for tropical forestry applications
NASA Astrophysics Data System (ADS)
Padalia, Hitendra; Yadav, Sadhana
2017-01-01
India launched the C band (5.35 GHz) RISAT-1 (Radar Imaging Satellite-1) on 26th April, 2012, equipped with the capability to image the Earth at multiple resolutions and polarizations. In this study, the potential of the Fine Resolution Strip (FRS) modes of RISAT-1 was evaluated for the characterization and classification of forests and the estimation of biomass at early growth stages. The study was carried out at two sites located in the foothills of the western Himalaya, India. The pre-processing and classification of FRS-1 SAR data were performed using PolSAR Pro ver. 5.0 software. The scattering mechanisms derived from m-chi decomposition of FRS-1 RH/RV data were found physically meaningful for the characterization of various surface feature types. The forest and land use type classification of the study area was developed by applying a Support Vector Machine (SVM) algorithm to appropriate FRS-1 derived polarimetric features. The biomass of early growth stages of Eucalyptus (up to 60 ton/ha) was estimated by developing a multi-linear regression model using C-band σ0 HV and σ0 HH backscatter information. The study outcomes hold promise for the wider application of RISAT-1 data for forest cover monitoring, especially in tropical regions.
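A minimal sketch of the kind of multi-linear biomass model mentioned above, regressing biomass on σ0 HH and σ0 HV backscatter; the backscatter and biomass values are synthetic placeholders, not values from the study, and scikit-learn is assumed only for convenience.

```python
# Hedged sketch of a multi-linear biomass regression on C-band backscatter (dB).
# All numbers are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

sigma0_hh = np.array([-9.1, -8.4, -7.9, -7.2, -6.8, -6.1])    # dB
sigma0_hv = np.array([-16.5, -15.8, -15.0, -14.1, -13.6, -12.9])
biomass   = np.array([12.0, 19.0, 27.0, 35.0, 44.0, 55.0])     # ton/ha

X = np.column_stack([sigma0_hh, sigma0_hv])
model = LinearRegression().fit(X, biomass)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("predicted biomass at (-7.5, -14.5) dB:",
      float(model.predict([[-7.5, -14.5]])[0]))
```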
NASA Astrophysics Data System (ADS)
Fetita, C.; Chang-Chien, K. C.; Brillet, P. Y.; Prêteux, F.; Chang, R. F.
2012-03-01
Our study aims at developing a computer-aided diagnosis (CAD) system for fully automatic detection and classification of pathological lung parenchyma patterns in idiopathic interstitial pneumonias (IIP) and emphysema using multi-detector computed tomography (MDCT). The proposed CAD system is based on three-dimensional (3-D) mathematical morphology, texture and fuzzy logic analysis, and can be divided into four stages: (1) a multi-resolution decomposition scheme based on a 3-D morphological filter was exploited to discriminate the lung region patterns at different analysis scales. (2) An additional spatial lung partitioning based on the lung tissue texture was introduced to reinforce the spatial separation between patterns extracted at the same resolution level in the decomposition pyramid. Then, (3) a hierarchic tree structure was exploited to describe the relationship between patterns at different resolution levels, and for each pattern, six fuzzy membership functions were established for assigning a probability of association with a normal tissue or a pathological target. Finally, (4) a decision step exploiting the fuzzy-logic assignments selects the target class of each lung pattern among the following categories: normal (N), emphysema (EM), fibrosis/honeycombing (FHC), and ground glass (GDG). According to a preliminary evaluation on an extended database, the proposed method can overcome the drawbacks of a previously developed approach and achieve higher sensitivity and specificity.
A Flexible Method for Multi-Material Decomposition of Dual-Energy CT Images.
Mendonca, Paulo R S; Lamb, Peter; Sahani, Dushyant V
2014-01-01
The ability of dual-energy computed-tomographic (CT) systems to determine the concentration of constituent materials in a mixture, known as material decomposition, is the basis for many of dual-energy CT's clinical applications. However, the complex composition of tissues and organs in the human body poses a challenge for many material decomposition methods, which assume the presence of only two, or at most three, materials in the mixture. We developed a flexible, model-based method that extends dual-energy CT's core material decomposition capability to handle more complex situations, in which it is necessary to disambiguate among and quantify the concentration of a larger number of materials. The proposed method, named multi-material decomposition (MMD), was used to develop two image analysis algorithms. The first was virtual unenhancement (VUE), which digitally removes the effect of contrast agents from contrast-enhanced dual-energy CT exams. VUE has the ability to reduce patient dose and improve clinical workflow, and can be used in a number of clinical applications such as CT urography and CT angiography. The second algorithm developed was liver-fat quantification (LFQ), which accurately quantifies the fat concentration in the liver from dual-energy CT exams. LFQ can form the basis of a clinical application targeting the diagnosis and treatment of fatty liver disease. Using image data collected from a cohort consisting of 50 patients and from phantoms, the application of MMD to VUE and LFQ yielded quantitatively accurate results when compared against gold standards. Furthermore, consistent results were obtained across all phases of imaging (contrast-free and contrast-enhanced). This is of particular importance since most clinical protocols for abdominal imaging with CT call for multi-phase imaging. We conclude that MMD can successfully form the basis of a number of dual-energy CT image analysis algorithms, and has the potential to improve the clinical utility of dual-energy CT in disease management.
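To make the core idea concrete, the sketch below solves a single-voxel three-material decomposition by combining the two energy measurements with a volume-conservation constraint; the basis-material attenuation values are illustrative placeholders and the formulation is a simplification of the full MMD method described above.

```python
# Simplified three-material decomposition for one dual-energy voxel: measured
# attenuation at low/high kVp is modeled as a volume-fraction-weighted mix of
# basis materials whose fractions sum to one. Values are illustrative only.
import numpy as np

# Columns: fat, soft tissue, iodine-enhanced blood (hypothetical HU-like values)
mu = np.array([
    [-90.0,  50.0, 300.0],   # low-kVp attenuation of each basis material
    [-95.0,  45.0, 150.0],   # high-kVp attenuation of each basis material
])

measured = np.array([30.0, 20.0])            # voxel attenuation at the two energies

# Augment with the volume-conservation constraint sum(f) = 1 and solve
A = np.vstack([mu, np.ones(3)])
b = np.append(measured, 1.0)
fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated volume fractions (fat, soft tissue, iodine-enhanced):", fractions)
```

In practice the full method also handles mixtures of more than three materials and enforces non-negativity, but the two-measurements-plus-conservation structure is the same.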
High-resolution time-frequency representation of EEG data using multi-scale wavelets
NASA Astrophysics Data System (ADS)
Li, Yang; Cui, Wei-Gang; Luo, Mei-Lin; Li, Ke; Wang, Lina
2017-09-01
An efficient time-varying autoregressive (TVAR) modelling scheme that expands the time-varying parameters onto multi-scale wavelet basis functions is presented for modelling nonstationary signals, with applications to time-frequency analysis (TFA) of electroencephalogram (EEG) signals. In the new parametric modelling framework, the time-dependent parameters of the TVAR model are locally represented by using a novel multi-scale wavelet decomposition scheme, which can capture smooth trends as well as track abrupt changes of the time-varying parameters simultaneously. A forward orthogonal least square (FOLS) algorithm aided by mutual information criteria is then applied for sparse model term selection and parameter estimation. Two simulation examples illustrate that the proposed multi-scale wavelet basis functions outperform single-scale wavelet basis functions and the Kalman filter algorithm for many nonstationary processes. Furthermore, an application of the proposed method to a real EEG signal demonstrates that the new approach can provide high time-dependent spectral resolution.
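In equation form, the modelling scheme amounts to expanding each time-varying AR coefficient on a wavelet basis (standard formulation, with ψ_{j,k} the multi-scale wavelet functions; not copied from the paper):

```latex
\begin{align*}
y(t) &= \sum_{i=1}^{p} a_i(t)\, y(t-i) + e(t), \qquad e(t)\sim\mathcal{N}(0,\sigma^{2}),\\
a_i(t) &= \sum_{j,k} c_{i,j,k}\, \psi_{j,k}(t),
\end{align*}
```

so that estimating the time-varying coefficients a_i(t) reduces to selecting and estimating a sparse set of time-invariant expansion coefficients c_{i,j,k}, here via the FOLS algorithm.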
Ferreira, M Teresa; Cunha, Eugénia
2013-03-10
Post mortem interval estimation is crucial in forensic sciences for both positive identification and reconstruction of perimortem events. However, reliable dating of skeletonized remains poses a scientific challenge since human remains decomposition involves a set of complex and highly variable processes. Many of the difficulties in determining the post mortem interval and/or the permanence of a body in a specific environment relate to the lack of systematic observations and research on human body decomposition modalities in different environments. In March 2006, in order to solve a problem of misidentification, a team of the South Branch of the Portuguese National Institute of Legal Medicine carried out the exhumation of 25 identified individuals buried for almost five years in the same cemetery plot. Even though all individuals shared similar post mortem intervals, they presented different stages of decomposition. In order to analyze the post mortem factors associated with the different stages of decomposition displayed by the 25 exhumed individuals, the stages of decomposition were scored. Information regarding the age at death and sex of the individuals was gathered and recorded, as well as data on the cause of death and on grave and coffin characteristics. Although the observed distinct decay stages may be explained by the burial conditions, namely by the micro-taphonomic environments, individual endogenous factors also play an important role in differential decomposition, as witnessed by the present case. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Application of composite dictionary multi-atom matching in gear fault diagnosis.
Cui, Lingli; Kang, Chenhui; Wang, Huaqing; Chen, Peng
2011-01-01
The sparse decomposition based on matching pursuit is an adaptive sparse expression method for signals. This paper proposes a composite dictionary multi-atom matching decomposition and reconstruction algorithm and introduces threshold de-noising into the reconstruction step. Based on the structural characteristics of gear fault signals, a composite dictionary combining the impulse time-frequency dictionary and the Fourier dictionary was constructed, and a genetic algorithm was applied to search for the best matching atom. The analysis results of gear fault simulation signals indicated the effectiveness of the hard threshold, and the impulse or harmonic characteristic components could be separately extracted. Meanwhile, the robustness of the composite dictionary multi-atom matching algorithm at different noise levels was investigated. To address the effect of data length on the computational efficiency of the algorithm, an improved segmented decomposition and reconstruction algorithm was proposed, which significantly enhanced the computational efficiency of the decomposition algorithm. In addition, the multi-atom matching algorithm was shown to be superior to the single-atom matching algorithm in both computational efficiency and robustness. Finally, the above algorithm was applied to gear fault engineering signals, and achieved good results.
Jung, Jaewoon; Mori, Takaharu; Kobayashi, Chigusa; Matsunaga, Yasuhiro; Yoda, Takao; Feig, Michael; Sugita, Yuji
2015-07-01
GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310-323. doi: 10.1002/wcms.1220.
Zhang, Xiao; Glennie, Craig L; Bucheli, Sibyl R; Lindgren, Natalie K; Lynne, Aaron M
2014-08-01
Decomposition can be a highly variable process with stages that are difficult to quantify. Using high-accuracy terrestrial laser scanning, repeated three-dimensional (3D) documentation of the volumetric changes of a human body during early decomposition was recorded. To determine temporal volumetric variations as well as the 3D distribution of the changed locations in the body over time, this paper introduces the use of multiple degenerated cylinder models to provide a reasonable approximation of body parts against which 3D change can be measured and visualized. An iterative closest point algorithm is used for 3D registration, and a method for determining volumetric change is presented. Comparison of the laser scanning estimates of volumetric change shows good agreement with repeated in-situ measurements of abdomen and limb circumference that were taken diurnally. The 3D visualizations of volumetric changes demonstrate that bloat is a process with a beginning, middle, and end rather than a state of presence or absence. Additionally, the 3D visualizations show conclusively that cadaver bloat is not isolated to the abdominal cavity, but also occurs in the limbs. Detailed quantification of the bloat stage of decay has the potential to alter how the beginning and end of bloat are determined by researchers and can provide further insight into the effects of the ecosystem on decomposition. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Distributed Damage Estimation for Prognostics based on Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Bregon, Anibal; Roychoudhury, Indranil
2011-01-01
Model-based prognostics approaches capture system knowledge in the form of physics-based models of components, and how they fail. These methods consist of a damage estimation phase, in which the health state of a component is estimated, and a prediction phase, in which the health state is projected forward in time to determine end of life. However, the damage estimation problem is often multi-dimensional and computationally intensive. We propose a model decomposition approach adapted from the diagnosis community, called possible conflicts, in order to both improve the computational efficiency of damage estimation, and formulate a damage estimation approach that is inherently distributed. Local state estimates are combined into a global state estimate from which prediction is performed. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the approach.
Comparison of decomposition rates between autopsied and non-autopsied human remains.
Bates, Lennon N; Wescott, Daniel J
2016-04-01
Penetrating trauma has been cited as a significant factor in the rate of decomposition. Therefore, penetrating trauma may have an effect on estimations of time-since-death in medicolegal investigations and on research examining decomposition rates and processes when autopsied human bodies are used. The goal of this study was to determine if there are differences in the rate of decomposition between autopsied and non-autopsied human remains in the same environment. The purpose is to shed light on how large incisions, such as those from a thoracoabdominal autopsy, affect time-since-death estimations and research on the rate of decomposition that uses both autopsied and non-autopsied human remains. In this study, 59 non-autopsied and 24 autopsied bodies were studied. The number of accumulated degree days required to reach each decomposition stage was then compared between autopsied and non-autopsied remains. Additionally, both types of bodies were examined for seasonal differences in decomposition rates. As temperature affects the rate of decomposition, this study also compared the internal body temperatures of autopsied and non-autopsied remains to see if differences between the two may be leading to differential decomposition. For this portion of the study, eight non-autopsied and five autopsied bodies were investigated. Internal temperature was collected once a day for two weeks. The results showed that differences in the decomposition rate between autopsied and non-autopsied remains were not statistically significant, though the average ADD needed to reach each stage of decomposition was slightly lower for autopsied bodies than non-autopsied bodies. There was also no significant difference between autopsied and non-autopsied bodies in the rate of decomposition by season or in internal temperature. Therefore, this study suggests that it is unnecessary to separate autopsied and non-autopsied remains when studying gross stages of human decomposition in Central Texas and that penetrating trauma may not be a significant factor in the overall rate of decomposition. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chen, Jing; Qiu, Xiaojie; Yin, Cunyi; Jiang, Hao
2018-02-01
An efficient method to design the broadband gain-flattened Raman fiber amplifier with multiple pumps is proposed based on least squares support vector regression (LS-SVR). A multi-input multi-output LS-SVR model is introduced to replace the complicated solving process of the nonlinear coupled Raman amplification equation. The proposed approach contains two stages: offline training stage and online optimization stage. During the offline stage, the LS-SVR model is trained. Owing to the good generalization capability of LS-SVR, the net gain spectrum can be directly and accurately obtained when inputting any combination of the pump wavelength and power to the well-trained model. During the online stage, we incorporate the LS-SVR model into the particle swarm optimization algorithm to find the optimal pump configuration. The design results demonstrate that the proposed method greatly shortens the computation time and enhances the efficiency of the pump parameter optimization for Raman fiber amplifier design.
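A minimal sketch of the offline LS-SVR training step is given below, assuming an RBF kernel and synthetic pump/gain data; the paper's multi-input multi-output model would correspond to training one such scalar-output model per gain-spectrum sample (or stacking outputs), and the particle swarm stage is not shown.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvr_fit(X, y, gamma=100.0, sigma=1.0):
    """Solve the LS-SVR dual linear system for one scalar output."""
    n = X.shape[0]
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]          # alpha, bias

def lssvr_predict(Xtrain, alpha, b, Xnew, sigma=1.0):
    return rbf_kernel(Xnew, Xtrain, sigma) @ alpha + b

# Synthetic stand-in: map 4 pump powers to the gain at one wavelength.
rng = np.random.default_rng(0)
X = rng.uniform(0.1, 0.5, size=(200, 4))                       # pump powers (W), assumed
y = 10.0 * np.tanh(X.sum(axis=1)) + rng.normal(0, 0.05, 200)   # toy "gain" (dB)
alpha, b = lssvr_fit(X, y)
print(lssvr_predict(X, alpha, b, X[:3]))
```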
Forensically significant scavenging guilds in the southwest of Western Australia.
O'Brien, R Christopher; Forbes, Shari L; Meyer, Jan; Dadour, Ian
2010-05-20
Estimation of time since death is an important factor in forensic investigations and the state of decomposition of a body is a prime basis for such estimations. The rate of decomposition is, however, affected by many environmental factors such as temperature, rainfall, and solar radiation as well as by indoor or outdoor location, covering and the type of surface the body is resting upon. Scavenging has the potential for major impact upon the rate of decomposition of a body, but there is little direct research upon its effect. The information that is available relates almost exclusively to North American and European contexts. The Australian faunal assemblage is unique in that it includes no native large predators or large detritivorous avians. This research investigates the animals that scavenge carcasses in natural outdoor settings in southern Western Australia and the factors which can affect each scavenger's activity. The research was conducted at four locations around Perth, Western Australia with different environmental conditions. Pig carcasses, acting as models for the human body, were positioned in an outdoor environment with no protection from scavengers or other environmental conditions. Twenty-four hour continuous time-lapse video capture was used to observe the pattern of visits of all animals to the carcasses. The time of day, length of feeding, material fed upon, area of feeding, and any movement of the carcass were recorded for each feeding event. Some species were observed to scavenge almost continually throughout the day and night. Insectivores visited the carcasses mostly during bloat and putrefaction; omnivores fed during all stages of decomposition; and scavenging by carnivores, rare at any time, was most likely to occur during the early stages of decomposition. Avian species, which were the most prolific visitors to the carcasses at all locations, fed only during daylight hours, as did reptiles. Only mammals and amphibians, which were seldom seen during diurnal hours, were nocturnal feeders. The combined effects of the whole guild of scavengers significantly accelerated the later stages of decomposition, especially in the cooler months of the year when natural decomposition was slowest.
Camerlingo, Carlo; Zenone, Flora; Perna, Giuseppe; Capozzi, Vito; Cirillo, Nicola; Gaeta, Giovanni Maria; Lepore, Maria
2008-06-01
A wavelet multi-component decomposition algorithm has been used for data analysis of micro-Raman spectra of blood serum samples from patients affected by pemphigus vulgaris at different stages. Pemphigus is a chronic, autoimmune, blistering disease of the skin and mucous membranes with a potentially fatal outcome. Spectra were measured by means of a Raman confocal microspectrometer apparatus using the 632.8 nm line of a He-Ne laser source. A discrete wavelet transform decomposition method has been applied to the recorded Raman spectra in order to overcome problems related to low-level signals and the presence of noise and background components due to light scattering and fluorescence. This numerical data treatment can automatically extract quantitative information from the Raman spectra and makes the data comparison more reliable. Although an exhaustive investigation has not been performed in this work, the feasibility of follow-up monitoring of pemphigus vulgaris pathology has been clearly demonstrated, with useful implications for clinical applications.
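The paper does not specify its implementation, but the discrete-wavelet treatment of noisy, fluorescence-contaminated spectra can be illustrated with the PyWavelets package (an implementation choice of this sketch, not the authors'); the wavelet name, decomposition level, and universal soft threshold are assumptions.

```python
import numpy as np
import pywt

def wavelet_denoise(spectrum, wavelet="sym8", level=5):
    """Soft-threshold the detail coefficients of a spectrum. The smooth
    background stays in the approximation coefficients and could be removed
    separately by suppressing the coarsest levels."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    # Universal threshold estimated from the finest-scale details.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(spectrum)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(spectrum)]

# Toy spectrum: two Raman-like bands on a fluorescence-like slope plus noise.
x = np.linspace(0, 1, 1024)
clean = np.exp(-((x - 0.3) / 0.01) ** 2) + 0.6 * np.exp(-((x - 0.7) / 0.015) ** 2)
noisy = clean + 0.5 * x + 0.05 * np.random.randn(x.size)
denoised = wavelet_denoise(noisy)
```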
Camerlingo, Carlo; Zenone, Flora; Perna, Giuseppe; Capozzi, Vito; Cirillo, Nicola; Gaeta, Giovanni Maria; Lepore, Maria
2008-01-01
A wavelet multi-component decomposition algorithm has been used for data analysis of micro-Raman spectra of blood serum samples from patients affected by pemphigus vulgaris at different stages. Pemphigus is a chronic, autoimmune, blistering disease of the skin and mucous membranes with a potentially fatal outcome. Spectra were measured by means of a Raman confocal microspectrometer apparatus using the 632.8 nm line of a He-Ne laser source. A discrete wavelet transform decomposition method has been applied to the recorded Raman spectra in order to overcome problems related to low-level signals and the presence of noise and background components due to light scattering and fluorescence. This numerical data treatment can automatically extract quantitative information from the Raman spectra and makes the data comparison more reliable. Although an exhaustive investigation has not been performed in this work, the feasibility of follow-up monitoring of pemphigus vulgaris pathology has been clearly demonstrated, with useful implications for clinical applications. PMID:27879899
Shi, Jun; Liu, Xiao; Li, Yan; Zhang, Qi; Li, Yingjie; Ying, Shihui
2015-10-30
Electroencephalography (EEG) based sleep staging is commonly used in clinical routine. Feature extraction and representation plays a crucial role in EEG-based automatic classification of sleep stages. Sparse representation (SR) is a state-of-the-art unsupervised feature learning method suitable for EEG feature representation. Collaborative representation (CR) is an effective data coding method used as a classifier. Here we use CR as a data representation method to learn features from the EEG signal. A joint collaboration model is established to develop a multi-view learning algorithm, and generate joint CR (JCR) codes to fuse and represent multi-channel EEG signals. A two-stage multi-view learning-based sleep staging framework is then constructed, in which the JCR and joint sparse representation (JSR) algorithms first fuse and learn the feature representations from multi-channel EEG signals, respectively. Multi-view JCR and JSR features are then integrated and sleep stages recognized by a multiple kernel extreme learning machine (MK-ELM) algorithm with grid search. The proposed two-stage multi-view learning algorithm achieves superior performance for sleep staging. With a K-means clustering based dictionary, the mean classification accuracy, sensitivity and specificity are 81.10 ± 0.15%, 71.42 ± 0.66% and 94.57 ± 0.07%, respectively; while with the dictionary learned using the submodular optimization method, they are 80.29 ± 0.22%, 71.26 ± 0.78% and 94.38 ± 0.10%, respectively. The two-stage multi-view learning based sleep staging framework outperforms all other classification methods compared in this work, while JCR is superior to JSR. The proposed multi-view learning framework has the potential for sleep staging based on multi-channel or multi-modality polysomnography signals. Copyright © 2015 Elsevier B.V. All rights reserved.
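Collaborative representation coding of a signal over a dictionary reduces to a ridge-regularized least-squares problem; a minimal sketch under assumed synthetic data and a hypothetical K-means-style dictionary follows (the joint multi-channel fusion and the MK-ELM classifier are not shown).

```python
import numpy as np

def cr_encode(D, x, lam=0.1):
    """Collaborative-representation code: argmin_a ||x - D a||^2 + lam ||a||^2."""
    k = D.shape[1]
    return np.linalg.solve(D.T @ D + lam * np.eye(k), D.T @ x)

# Toy example: encode a 30-dimensional EEG epoch feature vector over a
# 20-atom dictionary (e.g. K-means cluster centres); all values are synthetic.
rng = np.random.default_rng(1)
D = rng.standard_normal((30, 20))
x = rng.standard_normal(30)
code = cr_encode(D, x)
reconstruction = D @ code
```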
Jaber, Abobaker M; Ismail, Mohd Tahir; Altaher, Alsaidi M
2014-01-01
This paper forecasts the daily closing prices of stock markets. We propose a two-stage technique that combines empirical mode decomposition (EMD) with nonparametric local linear quantile (LLQ) methods. We use the proposed technique, EMD-LLQ, to forecast two stock index time series. Detailed experiments are implemented for the proposed method, in which the EMD-LLQ, EMD, and Holt-Winters methods are compared. The proposed EMD-LLQ model is determined to be superior to the EMD and Holt-Winters methods in predicting the stock closing prices.
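A minimal sketch of the first (EMD) stage is shown below, using the PyEMD package as one possible implementation (an assumption, not the authors' code); the second-stage LLQ forecasts of the individual components are not shown.

```python
import numpy as np
from PyEMD import EMD  # pip install EMD-signal; an implementation choice, not the authors'

# Toy "closing price" series: trend + cycle + noise.
t = np.arange(500)
price = 100 + 0.05 * t + 5 * np.sin(2 * np.pi * t / 60) + np.random.randn(500)

imfs = EMD().emd(price)   # rows: IMF_1 ... IMF_n, with the last row close to the residual trend
print(imfs.shape)

# In the two-stage scheme, each IMF (and the residual) would be forecast
# separately; those per-component forecasts come from the LLQ regression step.
```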
Investigation of Large Scale Cortical Models on Clustered Multi-Core Processors
2013-02-01
[Fragmentary report text: a figure caption describing the first-layer weights of an RBF network (the bias-node weights, denoted ww, and the remaining first-layer weights, denoted W), followed by a section on the GPU implementation of the RBF network and the Cholesky decomposition algorithm used to invert the matrix product GᵀG.]
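The fragment above refers to solving the RBF network's output-layer normal equations by inverting GᵀG with a Cholesky decomposition; a minimal sketch of that step follows, with the design matrix, targets, and ridge term as illustrative placeholders.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 3))                     # inputs
y = np.sin(X).sum(axis=1)                              # targets
centers = X[rng.choice(len(X), 50, replace=False)]     # RBF centres (a choice)

# RBF design matrix G: one column per hidden unit, plus a bias column.
d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
G = np.hstack([np.exp(-d2 / 2.0), np.ones((len(X), 1))])

# Output weights solve (G^T G + eps*I) w = G^T y; Cholesky applies because the
# (regularized) Gram matrix is symmetric positive definite.
A = G.T @ G + 1e-8 * np.eye(G.shape[1])
w = cho_solve(cho_factor(A), G.T @ y)
print(np.abs(G @ w - y).mean())
```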
Kinetics of non-isothermal decomposition of cinnamic acid
NASA Astrophysics Data System (ADS)
Zhao, Ming-rui; Qi, Zhen-li; Chen, Fei-xiong; Yue, Xia-xin
2014-07-01
The thermal stability and kinetics of decomposition of cinnamic acid were investigated by thermogravimetry and differential scanning calorimetry at four heating rates. The activation energies of this process were calculated from analysis of the TG curves by the methods of Flynn-Wall-Ozawa, Doyle, the Distributed Activation Energy Model, Šatava-Šesták and Kissinger, respectively. There is only one stage in the thermal decomposition process in TG and two endothermic peaks in DSC. For this decomposition process of cinnamic acid, E and log A [s-1] were determined to be 81.74 kJ mol-1 and 8.67, respectively. The mechanism was the Mampel power law (reaction order n = 1), with integral form G(α) = α (α = 0.1-0.9). Moreover, the activation thermodynamic functions ΔH≠, ΔS≠ and ΔG≠ were 77.96 kJ mol-1, -90.71 J mol-1 K-1 and 119.41 kJ mol-1, respectively.
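As a worked illustration of the Kissinger step used in such multi-heating-rate analyses, the sketch below fits ln(β/Tp²) against 1/Tp; the heating rates and peak temperatures are synthetic placeholders, not data from this study.

```python
import numpy as np

R = 8.314  # J mol^-1 K^-1

# Heating rates (K/min) and peak temperatures (K): synthetic placeholders.
beta = np.array([5.0, 10.0, 20.0, 40.0])
Tp = np.array([468.0, 479.0, 491.0, 503.0])

# Kissinger: ln(beta / Tp^2) = ln(A R / E) - E / (R Tp)
x = 1.0 / Tp
yk = np.log(beta / Tp**2)
slope, intercept = np.polyfit(x, yk, 1)

E = -slope * R                      # activation energy, J/mol
A = (E / R) * np.exp(intercept)     # pre-exponential factor, 1/min for these units
print(E / 1000.0, np.log10(A))
```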
NASA Astrophysics Data System (ADS)
Chen, Maosi
Solar radiation impacts many aspects of the Earth's atmosphere and biosphere. The total solar radiation impacts the atmospheric temperature profile and the Earth's surface radiative energy budget. The solar visible (VIS) radiation is the energy source of photosynthesis. The solar ultraviolet (UV) radiation impacts plant physiology, microbial activities, and human and animal health. Recent studies found that solar UV significantly shifts the mass loss and nitrogen patterns of plant litter decomposition in semi-arid and arid ecosystems. The potential mechanisms include the production of labile materials from direct and indirect photolysis of complex organic matter, the facilitation of microbial decomposition with more labile materials, and UV inhibition of microbial populations. However, the mechanisms behind UV decomposition and its ecological impacts are still uncertain. Accurate and reliable ground solar radiation measurements help us better retrieve the atmospheric composition, validate satellite radiation products, and simulate ecosystem processes. Incorporating UV decomposition into the DayCent biogeochemical model helps to better understand long-term ecological impacts. Improving the accuracy of UV irradiance data is the goal of the first part of this research, and examining the importance of UV radiation in the biogeochemical model DayCent is the goal of the second part of the work. Thus, although the dissertation is separated into two parts, accurate UV irradiance measurement links them in what follows. In part one of this work the accuracy and reliability of the current operational calibration method for the (UV-) Multi-Filter Rotating Shadowband Radiometer (MFRSR), which is used by the U.S. Department of Agriculture UV-B Monitoring and Research Program (UVMRP), are improved. The UVMRP has monitored solar radiation in 14 narrowband UV and VIS spectral channels at 37 sites across the U.S. since 1992. The improvements in the quality of the data result from an improved cloud screening algorithm that utilizes an iterative rejection of cloudy points based on a decreasing tolerance of unstable optical depth behavior when calibration information is unknown. A MODTRAN radiative transfer model simulation showed the new cloud screening algorithm was capable of screening cloudy points while retaining clear-sky points. The comparison results showed that the cloud-free points determined by the new cloud screening algorithm generated significantly (56%) more and unbiased Langley offset voltages (VLOs) for both partly cloudy days and sunny days at two testing sites, Hawaii and Florida. The VLOs are proportional to the radiometric sensitivity. The stability of the calibration is also improved by the development of a two-stage reference channel calibration method for collocated UV-MFRSR and MFRSR instruments. Special channels where aerosol is the only contributor to total optical depth (TOD) variation (e.g. the 368-nm channel) were selected, and the radiative transfer model (MODTRAN) was used to calculate direct normal and diffuse horizontal ratios which were used to evaluate the stability of TOD at cloud-free points. The spectral dependence of atmospheric constituents' optical properties and previously calibrated channels were used to find stable TOD points and perform Langley calibration at spectrally adjacent channels.
The test of this method on the UV-B program site at Homestead, Florida (FL02) showed that the new method generated more clustered and abundant VLOs at all (UV-) MFRSR channels and potentially improved the accuracy by 2-4% at most channels and over 10% at the 300-nm and 305-nm channels. In the second major part of this work, I calibrated the DayCent-UV model with ecosystem variables (e.g. soil water, live biomass), allowed the maximum photodecay rate to vary with the litter's initial lignin fraction in the model, and validated the optimized model with LIDET observations of remaining carbon and nitrogen at three semi-arid sites. I also explored the ecological impacts of UV decomposition with the optimized DayCent-UV model. The DayCent-UV model showed significantly better performance than models without UV decomposition in simulating the observed linear carbon loss pattern and the persistent net nitrogen mineralization in the 10-year LIDET experiment at the three sites. The DayCent-UV equilibrium model runs showed that UV decomposition increased aboveground and belowground plant production, surface net nitrogen mineralization, and the surface litter nitrogen pool, while decreasing surface litter carbon, soil net nitrogen mineralization, and mineral soil carbon and nitrogen. In addition, UV decomposition showed minimal impacts (i.e. less than 1% change) on trace gas emissions and biotic decomposition rates. Overall, my dissertation provided a comprehensive solution to improve the calibration accuracy and reliability of the MFRSR and therefore the quality of radiation products. My dissertation also improved the understanding of UV decomposition and its long-term ecological impacts.
Fakoorziba, M R; Assareh, M; Keshavarzi, D; Soltani, A; Moemenbellah-Fard, M D; Zarenezhad, M
2017-01-01
Medicolegal forensic entomology is the science and study of cadaveric arthropods related to criminal investigations. The study of beetles is particularly important in forensic cases. This can be important in determining the time of death and also in obtaining qualitative information about the location of the crime. The aim of this study was to introduce Saprinus planiusculus, observed on rat carrion, as a beetle species of forensic importance in Khuzestan province. This study was carried out using a laboratory-bred rat (Wistar rat) as a model for human decomposition. The rat was killed by contusion and placed in a location adjacent to the Karun River. Observations and collections of beetles were made daily during May to July 2015. Decomposition of the rat carrion lasted 38 days; S. planiusculus was seen from the fresh to the post-decay stages of body decomposition, and the largest number of this species was caught in the decay stage. The species of beetle found in this case could be used in future forensic investigations, particularly during the warm season.
Yamashita, Satoshi; Masuya, Hayato; Abe, Shin; Masaki, Takashi; Okabe, Kimiko
2015-01-01
We examined the relationship between the community structure of wood-decaying fungi, detected by high-throughput sequencing, and the decomposition rate using 13 years of data from a forest dynamics plot. For molecular analysis and wood density measurements, drill dust samples were collected from logs and stumps of Fagus and Quercus in the plot. Regression using a negative exponential model between wood density and time since death revealed that the decomposition rate of Fagus was greater than that of Quercus. The residual between the expected value obtained from the regression curve and the observed wood density was used as a decomposition rate index. Principal component analysis showed that the fungal community compositions of both Fagus and Quercus changed with time since death. Principal component analysis axis scores were used as an index of fungal community composition. A structural equation model for each wood genus was used to assess the effect of fungal community structure traits on the decomposition rate and how the fungal community structure was determined by the traits of coarse woody debris. Results of the structural equation model suggested that the decomposition rate of Fagus was affected by two fungal community composition components: one that was affected by time since death and another that was not affected by the traits of coarse woody debris. In contrast, the decomposition rate of Quercus was not affected by coarse woody debris traits or fungal community structure. These findings suggest that, in the case of Fagus coarse woody debris, the fungal community structure is related to the decomposition process of its host substrate. Because fungal community structure is affected partly by the decay stage and wood density of its substrate, these factors influence each other. Further research on interactive effects is needed to improve our understanding of the relationship between fungal community structure and the woody debris decomposition process. PMID:26110605
Non-intrusive reduced order modeling of nonlinear problems using neural networks
NASA Astrophysics Data System (ADS)
Hesthaven, J. S.; Ubbiali, S.
2018-06-01
We develop a non-intrusive reduced basis (RB) method for parametrized steady-state partial differential equations (PDEs). The method extracts a reduced basis from a collection of high-fidelity solutions via a proper orthogonal decomposition (POD) and employs artificial neural networks (ANNs), particularly multi-layer perceptrons (MLPs), to accurately approximate the coefficients of the reduced model. The search for the optimal number of neurons and the minimum amount of training samples to avoid overfitting is carried out in the offline phase through an automatic routine, relying upon a joint use of the Latin hypercube sampling (LHS) and the Levenberg-Marquardt (LM) training algorithm. This guarantees a complete offline-online decoupling, leading to an efficient RB method - referred to as POD-NN - suitable also for general nonlinear problems with a non-affine parametric dependence. Numerical studies are presented for the nonlinear Poisson equation and for driven cavity viscous flows, modeled through the steady incompressible Navier-Stokes equations. Both physical and geometrical parametrizations are considered. Several results confirm the accuracy of the POD-NN method and show the substantial speed-up enabled at the online stage as compared to a traditional RB strategy.
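A compact sketch of the offline/online POD-NN idea follows, using a NumPy SVD for the POD basis and a scikit-learn MLP as a stand-in for the paper's neural network; the snapshot data, basis size, and network size are assumptions of this example.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic snapshot matrix: each column is a "high-fidelity" solution u(mu).
mu = rng.uniform(0.5, 2.0, size=(200, 2))                         # parameter samples
x = np.linspace(0, 1, 400)
S = np.stack([np.sin(m0 * np.pi * x) * np.exp(-m1 * x) for m0, m1 in mu], axis=1)

# Offline stage 1: POD basis from a truncated SVD of the snapshots.
U, s, _ = np.linalg.svd(S, full_matrices=False)
V = U[:, :8]                       # reduced basis (8 modes, a choice)

# Offline stage 2: train an MLP mapping parameters -> reduced coefficients.
coeffs = V.T @ S                   # shape (8, n_snapshots)
net = MLPRegressor(hidden_layer_sizes=(30, 30), max_iter=5000).fit(mu, coeffs.T)

# Online stage: evaluate the surrogate at a new parameter, with no PDE solve.
u_new = V @ net.predict([[1.3, 0.9]]).ravel()
```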
Bacterial communities in the fruit bodies of ground basidiomycetes
NASA Astrophysics Data System (ADS)
Zagryadskaya, Yu. A.; Lysak, L. V.; Chernov, I. Yu.
2015-06-01
Fruit bodies of basidiomycetes at different stages of decomposition serve as specific habitats in forest biocenoses for bacteria and differ significantly with respect to the total bacterial population and the abundance of particular bacterial genera. A significant increase in the total bacterial population, estimated by the direct microscopic method with acridine orange staining, and in the population of saprotrophic bacteria (inoculation of glucose peptone yeast agar) in fruit bodies of the basidiomycetes Armillaria mellea and Coprinus comatus was recorded at the final stage of their decomposition in comparison with the initial stage. Gram-negative bacteria predominated in the tissues of fruit bodies at all stages of decomposition and were represented at the final stage by the Aeromonas, Vibrio, and Pseudomonas genera (for fruit bodies of A. mellea) and by the Pseudomonas genus (for fruit bodies of C. comatus). The potential influence of bacterial communities in the fruit bodies of soil basidiomycetes on the formation of bacterial communities in the upper soil horizons in forest biocenoses is discussed. The loci connected with the development and decomposition of fruit bodies of basidiomycetes on the soil surface are promising for the targeted search for Gram-negative bacteria, which are important objects of biotechnology.
Multi-stage complex contagions.
Melnik, Sergey; Ward, Jonathan A; Gleeson, James P; Porter, Mason A
2013-03-01
The spread of ideas across a social network can be studied using complex contagion models, in which agents are activated by contact with multiple activated neighbors. The investigation of complex contagions can provide crucial insights into social influence and behavior-adoption cascades on networks. In this paper, we introduce a model of a multi-stage complex contagion on networks. Agents at different stages-which could, for example, represent differing levels of support for a social movement or differing levels of commitment to a certain product or idea-exert different amounts of influence on their neighbors. We demonstrate that the presence of even one additional stage introduces novel dynamical behavior, including interplay between multiple cascades, which cannot occur in single-stage contagion models. We find that cascades-and hence collective action-can be driven not only by high-stage influencers but also by low-stage influencers.
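A minimal simulation of a multi-stage threshold contagion on a random graph is sketched below; the thresholds, influence weights, seeding, and synchronous update rule are illustrative assumptions rather than the paper's exact model.

```python
import networkx as nx
import random

random.seed(3)
G = nx.erdos_renyi_graph(n=2000, p=0.005)

# Stage 0 = inactive, stage 1 = low commitment, stage 2 = high commitment.
influence = {0: 0.0, 1: 1.0, 2: 2.0}    # higher-stage nodes exert more influence
theta1, theta2 = 2.0, 4.0                # influence needed to reach stage 1 / stage 2

stage = {v: 0 for v in G}
for v in random.sample(list(G.nodes), 20):   # seed a few high-stage influencers
    stage[v] = 2

for _ in range(30):                          # synchronous updates
    new = dict(stage)
    for v in G:
        pressure = sum(influence[stage[u]] for u in G.neighbors(v))
        if pressure >= theta2:
            new[v] = max(new[v], 2)
        elif pressure >= theta1:
            new[v] = max(new[v], 1)
    stage = new

print(sum(s > 0 for s in stage.values()), "activated of", G.number_of_nodes())
```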
Wang, H; Chen, D; Yuan, G; Ma, X; Dai, X
2013-02-01
In this work, the morphological characteristics of waste polyethylene (PE)/polypropylene (PP) plastics during their pyrolysis process were investigated, and based on their basic image-changing patterns, representative morphological signals describing the pyrolysis stages were obtained. PE and PP granules and films were used as typical plastics for testing, and the influence of impurities was also investigated. During the pyrolysis experiments, photographs of the testing samples were taken sequentially with a high-speed infrared camera, and the quantitative parameters that describe the morphological characteristics of these photographs were explored using the "Image Pro Plus (v6.3)" digital image processing software. The experimental results showed that plastics pyrolysis involved four stages: melting, two stages of decomposition characterized by bubble formation caused by volatile evaporation, and ash deposition; each stage was characterized by its own phase-changing behaviors and morphological features. The two stages of decomposition are the key step of pyrolysis since they took up half or more of the reaction time; the melting step consumed the other half of the reaction time in experiments where the raw materials were heated up from ambient temperature; and coke-like deposits appeared as a result of decomposition completion. Two morphological signals defined from digital image processing, namely the pixel area of the reaction region of interest and the bubble ratio (BR) caused by volatile evaporation, were found to change regularly with the pyrolysis stages. In particular, for all experimental scenarios with plastic films and granules, the BR curves always exhibited a slow drop as melting started, then a sharp increase followed by a deep decrease corresponding to the first stage of intense decomposition, after which a second increase-drop section corresponding to the second stage of decomposition appeared. As ash deposition occurred, the BR dropped to zero or very low values. When impurities were involved, the shape of the BR curves showed that intense decomposition started earlier but the morphological characteristics remained the same. In addition, compared to parameters such as pressure, the BR reflects the reaction stages better and its variation over the pyrolysis process of PE/PP plastics, with or without impurities, was more intrinsically correlated with the process; therefore it can be adopted as a signal for pyrolysis process characterization, as well as offering guidance for process improvement and reactor design. Copyright © 2012 Elsevier Ltd. All rights reserved.
Jung, Jaewoon; Mori, Takaharu; Kobayashi, Chigusa; Matsunaga, Yasuhiro; Yoda, Takao; Feig, Michael; Sugita, Yuji
2015-01-01
GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310–323. doi: 10.1002/wcms.1220 PMID:26753008
Li, Liangliang; Wang, Jiangfeng; Wang, Yu
2016-08-01
Analysis of the process of decomposition is essential in establishing the postmortem interval. However, despite the fact that insects are important players in body decomposition, their exact function within the decay process is still unclear. There is also limited knowledge as to how the decomposition process occurs in the absence of insects. In the present study, we compared the decomposition of a pig carcass in open air with that of one placed in a methyl methacrylate box to prevent insect contact. The pig carcass in the methyl methacrylate box was in the fresh stage for 1 day, the bloated stage from 2 d to 11 d, and underwent deflated decay from 12 d. In contrast, the pig carcass in open air went through the fresh, bloated, active decay and post-decay stages at 22.3 h (0.93 d), 62.47 h (2.60 d), 123.63 h (5.15 d) and 246.5 h (10.27 d) following the start of the experiment, respectively, prior to entering the skeletonization stage. A large amount of soft tissue remained on the pig carcass in the methyl methacrylate box at 26 d, while only scattered bones remained of the pig carcass in open air. The results indicate that insects greatly accelerate the decomposition process. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Research on Multi-Person Parallel Modeling Method Based on Integrated Model Persistent Storage
NASA Astrophysics Data System (ADS)
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying
2018-03-01
This paper studies a multi-person parallel modeling method based on integrated model persistent storage. The integrated model refers to the MDDT modeling graphics system, which can carry out multi-angle, multi-level and multi-stage descriptions of general aerospace embedded software. Persistent storage refers to converting the data model in memory into a storage model and converting the storage model back into a data model in memory, where the data model refers to the object model and the storage model is a binary stream. Multi-person parallel modeling refers to the need for multi-person collaboration, separation of roles, and even real-time remote synchronized modeling.
NASA Astrophysics Data System (ADS)
Nikonova, L. G.; Golovatskaya, E. A.; Terechshenko, N. N.
2018-03-01
The research presents quantitative estimates of the decomposition rate of plant residues at the initial stages of decay for two plant species (Eriophorum vaginatum and Sphagnum fuscum) in a peat deposit of an oligotrophic bog in the southern taiga subzone of Western Siberia. We also studied the change in the content of total carbon and nitrogen in the plant residues and the activity of the microflora at the initial stages of decomposition. At the initial stage of the transformation of the peat-forming plants, the mass loss of Sph. fuscum is 2.5 times lower than that of E. vaginatum. The most active mass loss, as well as the decrease in total carbon content, is observed after four months of the experiment. The most active carbon removal is characteristic of E. vaginatum. During the decomposition of the plant residues, the nitrogen content decreases, and the most intense nitrogen losses were characteristic of Sph. fuscum. The microorganisms assimilating organic and mineral nitrogen are more active in August, while the oligotrophic and cellulolytic microorganisms are more active in July.
Machine Learning Techniques for Global Sensitivity Analysis in Climate Models
NASA Astrophysics Data System (ADS)
Safta, C.; Sargsyan, K.; Ricciuto, D. M.
2017-12-01
Climate model studies are challenged not only by the compute-intensive nature of these models but also by the high dimensionality of the input parameter space. In our previous work with the land model components (Sargsyan et al., 2014) we identified subsets of 10 to 20 parameters relevant for each QoI via Bayesian compressive sensing and variance-based decomposition. Nevertheless, the algorithms were challenged by the nonlinear input-output dependencies for some of the relevant QoIs. In this work we will explore a combination of techniques to extract relevant parameters for each QoI and subsequently construct surrogate models with quantified uncertainty, necessary for future developments, e.g. model calibration and prediction studies. In the first step, we will compare the skill of machine-learning models (e.g. neural networks, support vector machines) to identify the optimal number of classes in selected QoIs and construct robust multi-class classifiers that will partition the parameter space into regions with smooth input-output dependencies. These classifiers will be coupled with techniques aimed at building sparse and/or low-rank surrogate models tailored to each class. Specifically we will explore and compare sparse learning techniques with low-rank tensor decompositions. These models will be used to identify parameters that are important for each QoI. Surrogate accuracy requirements are higher for subsequent model calibration studies and we will ascertain the performance of this workflow for multi-site ALM simulation ensembles.
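A hedged sketch of the variance-based decomposition step is given below, using the SALib package as one implementation choice (the classic saltelli/sobol interface is assumed) with a toy analytic stand-in for the land-model QoI.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Three stand-in parameters; a real study would list the 10-20 relevant ones.
problem = {
    "num_vars": 3,
    "names": ["q10", "leaf_cn", "froot_leaf"],
    "bounds": [[1.2, 3.0], [20.0, 60.0], [0.3, 1.5]],
}

X = saltelli.sample(problem, 1024)              # Saltelli sampling design

def toy_qoi(x):                                 # placeholder for the land-model QoI
    return np.sin(x[0]) + 0.5 * x[1] / 60.0 + 0.1 * x[0] * x[2]

Y = np.apply_along_axis(toy_qoi, 1, X)
Si = sobol.analyze(problem, Y)                  # first-, second-, and total-order indices
print(Si["S1"], Si["ST"])
```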
NASA Astrophysics Data System (ADS)
Afonso, J. C.; Zlotnik, S.; Diez, P.
2015-12-01
We present a flexible, general and efficient approach for implementing thermodynamic phase equilibria information (in the form of sets of physical parameters) into geophysical and geodynamic studies. The approach is based on multi-dimensional decomposition methods, which transform the original multi-dimensional discrete information into a dimensional-separated representation. This representation has the property of increasing the number of coefficients to be stored linearly with the number of dimensions (opposite to a full multi-dimensional cube requiring exponential storage depending on the number of dimensions). Thus, the amount of information to be stored in memory during a numerical simulation or geophysical inversion is drastically reduced. Accordingly, the amount and resolution of the thermodynamic information that can be used in a simulation or inversion increases substantially. In addition, the method is independent of the actual software used to obtain the primary thermodynamic information, and therefore it can be used in conjunction with any thermodynamic modeling program and/or database. Also, the errors associated with the decomposition procedure are readily controlled by the user, depending on her/his actual needs (e.g. preliminary runs vs full resolution runs). We illustrate the benefits, generality and applicability of our approach with several examples of practical interest for both geodynamic modeling and geophysical inversion/modeling. Our results demonstrate that the proposed method is a competitive and attractive candidate for implementing thermodynamic constraints into a broad range of geophysical and geodynamic studies.
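The storage saving of a dimension-separated representation can be illustrated with a canonical polyadic (CP) decomposition via TensorLy (a tool choice of this sketch; a version providing parafac and cp_to_tensor is assumed); the "thermodynamic table" below is a synthetic smooth function and the rank is arbitrary.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Synthetic 3-D lookup table rho(P, T, X) on a 60 x 80 x 40 grid.
P = np.linspace(1, 10, 60)[:, None, None]
T = np.linspace(500, 2000, 80)[None, :, None]
X = np.linspace(0, 1, 40)[None, None, :]
table = 3300.0 * (1 + 0.01 * P) * (1 - 2e-5 * (T - 500)) * (1 - 0.05 * X)

# Separated representation: store only rank-r factor vectors per dimension.
cp = parafac(tl.tensor(table), rank=4)
approx = tl.cp_to_tensor(cp)

full_size = table.size
separated_size = sum(f.size for f in cp.factors)
print(full_size, separated_size, np.max(np.abs(approx - table) / table))
```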
NASA Astrophysics Data System (ADS)
Ford, Neville J.; Connolly, Joseph A.
2009-07-01
We give a comparison of the efficiency of three alternative decomposition schemes for the approximate solution of multi-term fractional differential equations using the Caputo form of the fractional derivative. The schemes we compare are based on conversion of the original problem into a system of equations. We review alternative approaches and consider how the most appropriate numerical scheme may be chosen to solve a particular equation.
Adali, Tülay; Levin-Schwartz, Yuri; Calhoun, Vince D.
2015-01-01
Fusion of information from multiple sets of data in order to extract a set of features that are most useful and relevant for the given task is inherent to many problems we deal with today. Since, usually, very little is known about the actual interaction among the datasets, it is highly desirable to minimize the underlying assumptions. This has been the main reason for the growing importance of data-driven methods, and in particular of independent component analysis (ICA), as it provides useful decompositions with a simple generative model and using only the assumption of statistical independence. A recent extension of ICA, independent vector analysis (IVA), generalizes ICA to multiple datasets by exploiting the statistical dependence across the datasets, and hence, as we discuss in this paper, provides an attractive solution to fusion of data from multiple datasets along with ICA. In this paper, we focus on two multivariate solutions for multi-modal data fusion that let multiple modalities fully interact for the estimation of underlying features that jointly report on all modalities. One solution is the Joint ICA model that has found wide application in medical imaging, and the second one is the Transposed IVA model introduced here as a generalization of an approach based on multi-set canonical correlation analysis. In the discussion, we emphasize the role of diversity in the decompositions achieved by these two models, and present their properties and implementation details to enable the user to make informed decisions on the selection of a model along with its associated parameters. Discussions are supported by simulation results to help highlight the main issues in the implementation of these methods. PMID:26525830
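A minimal sketch of the Joint ICA idea follows, concatenating two modalities along the feature dimension and running a single ICA with scikit-learn's FastICA (an implementation choice; the subject data are synthetic).

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_subjects = 40

# Two modalities per subject (e.g. fMRI and EEG feature vectors), synthetic here.
shared = rng.standard_normal((n_subjects, 3))          # hidden subject loadings
fmri = shared @ rng.standard_normal((3, 500)) + 0.1 * rng.standard_normal((n_subjects, 500))
eeg  = shared @ rng.standard_normal((3, 200)) + 0.1 * rng.standard_normal((n_subjects, 200))

# Joint ICA: concatenate the modalities along the feature dimension so each
# estimated component has an fMRI part and an EEG part with shared subject loadings.
X = np.hstack([fmri, eeg])                # subjects x (fmri + eeg features)
ica = FastICA(n_components=3, random_state=0)
maps = ica.fit_transform(X.T)             # features x components: joint feature maps
loadings = ica.mixing_                    # subjects x components, shared across modalities
fmri_maps, eeg_maps = maps[:500], maps[500:]
```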
Characterization of agricultural land using singular value decomposition
NASA Astrophysics Data System (ADS)
Herries, Graham M.; Danaher, Sean; Selige, Thomas
1995-11-01
A method is defined and tested for the characterization of agricultural land from multi-spectral imagery, based on singular value decomposition (SVD) and key vector analysis. The SVD technique, which bears a close resemblance to multivariate statistical techniques, has previously been successfully applied to problems of signal extraction for marine data and forestry species classification. In this study the SVD technique is used as a classifier for agricultural regions, using airborne Daedalus ATM data with 1 m resolution. The specific region chosen is an experimental research farm in Bavaria, Germany. This farm has a large number of crops within a very small region and hence is not amenable to existing techniques. There are a number of other significant factors which render existing techniques, such as the maximum likelihood algorithm, less suitable for this area. These include a very dynamic terrain and a tessellated pattern of soil differences, which together cause large variations in the growth characteristics of the crops. The SVD technique is applied to this data set using a multi-stage classification approach, removing unwanted land-cover classes one step at a time. Typical classification accuracies for SVD are of the order of 85-100%. Preliminary results indicate that it is a fast and efficient classifier with the ability to differentiate between crop types such as wheat, rye, potatoes and clover. The results of characterizing three sub-classes of winter wheat are also shown.
Computer simulations of austenite decomposition of microalloyed 700 MPa steel during cooling
NASA Astrophysics Data System (ADS)
Pohjonen, Aarne; Paananen, Joni; Mourujärvi, Juho; Manninen, Timo; Larkiola, Jari; Porter, David
2018-05-01
We present computer simulations of austenite decomposition to ferrite and bainite during cooling. The phase transformation model is based on Johnson-Mehl-Avrami-Kolmogorov type equations. The model is parameterized by numerical fitting to continuous cooling data obtained with Gleeble thermo-mechanical simulator and it can be used for calculation of the transformation behavior occurring during cooling along any cooling path. The phase transformation model has been coupled with heat conduction simulations. The model includes separate parameters to account for the incubation stage and for the kinetics after the transformation has started. The incubation time is calculated with inversion of the CCT transformation start time. For heat conduction simulations we employed our own parallelized 2-dimensional finite difference code. In addition, the transformation model was also implemented as a subroutine in commercial finite-element software Abaqus which allows for the use of the model in various engineering applications.
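A hedged sketch of JMAK-type transformation kinetics evaluated along a cooling path is given below, using a Scheil additivity rule for incubation and an isokinetic JMAK update; the rate constant, incubation time, and exponent are illustrative assumptions, not the fitted Gleeble parameters.

```python
import numpy as np

# Assumed JMAK parameters for one product phase (illustrative only).
n = 2.5

def k_of_T(T):                        # temperature-dependent rate constant, 1/s
    return 0.4 * np.exp(-((T - 873.0) / 80.0) ** 2)   # peaks near an assumed "nose"

def tau_incubation(T):                # CCT-derived incubation time, s (assumed)
    return 2.0 + 0.05 * np.abs(T - 873.0)

def transform_on_cooling(T0=1100.0, rate=10.0, dt=0.01):
    """Scheil additivity for incubation, then isokinetic JMAK integration."""
    T, scheil, x = T0, 0.0, 0.0
    started = False
    while T > 500.0 and x < 0.99:
        T -= rate * dt
        if not started:
            scheil += dt / tau_incubation(T)      # transformation starts when the sum reaches 1
            started = scheil >= 1.0
        else:
            # Advance a fictitious time consistent with the current x, then step JMAK.
            k = k_of_T(T)
            t_fic = (-np.log(1.0 - x)) ** (1.0 / n) / max(k, 1e-12)
            x = 1.0 - np.exp(-(k * (t_fic + dt)) ** n)
    return x

print(transform_on_cooling(rate=10.0), transform_on_cooling(rate=60.0))
```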
Throop, Heather L; Archer, Steven R
2007-09-01
Encroachment of woody plants into grasslands, and subsequent brush management, are among the most prominent changes to occur in arid and semiarid systems over the past century. Despite the resulting widespread changes in landcover, substantial uncertainty about the biogeochemical impacts of woody proliferation and brush management exists. We explored the role of shrub encroachment and brush management on leaf litter decomposition in a semidesert grassland where velvet mesquite (Prosopis velutina) abundance has increased over the past 100 years. This change in physiognomy may affect decomposition directly, through altered litter quality or quantity, and indirectly through altered canopy structure. To assess the direct and indirect impacts of shrubs on decomposition, we quantified changes in mass, nitrogen, and carbon in litterbags deployed under mesquite canopies and in intercanopy zones. Litterbags contained foliage from mesquite and Lehmann lovegrass (Eragrostis lehmanniana), a widespread, nonnative grass in southern Arizona. To explore short- and long-term influences of brush management on the initial stages of decomposition, litterbags were deployed at sites where mesquite canopies were removed three weeks, 45 years, or 70 years prior to study initiation. Mesquite litter decomposed more rapidly than lovegrass, but negative indirect influences of mesquite canopies counteracted positive direct effects. Decomposition was positively correlated with soil infiltration into litterbags, which varied with microsite placement, and was lowest under canopies. Low under-canopy decomposition was ostensibly due to decreased soil movement associated with high under-canopy herbaceous biomass. Decomposition rates where canopies were removed three weeks prior to study initiation were comparable to those beneath intact canopies, suggesting that decomposition was driven by mesquite legacy effects on herbaceous cover-soil movement linkages. Decomposition rates where shrubs were removed 45 and 70 years prior to study initiation were comparable to intercanopy rates, suggesting that legacy effects persist less than 45 years. Accurate decomposition modeling has proved challenging in arid and semiarid systems but is critical to understanding biogeochemical responses to woody encroachment and brush management. Predicting brush-management effects on decomposition will require information on shrub-grass interactions and herbaceous biomass influences on soil movement at decadal timescales. Inclusion of microsite factors controlling soil accumulation on litter would improve the predictive capability of decomposition models.
Multi-stage complex contagions
NASA Astrophysics Data System (ADS)
Melnik, Sergey; Ward, Jonathan A.; Gleeson, James P.; Porter, Mason A.
2013-03-01
The spread of ideas across a social network can be studied using complex contagion models, in which agents are activated by contact with multiple activated neighbors. The investigation of complex contagions can provide crucial insights into social influence and behavior-adoption cascades on networks. In this paper, we introduce a model of a multi-stage complex contagion on networks. Agents at different stages—which could, for example, represent differing levels of support for a social movement or differing levels of commitment to a certain product or idea—exert different amounts of influence on their neighbors. We demonstrate that the presence of even one additional stage introduces novel dynamical behavior, including interplay between multiple cascades, which cannot occur in single-stage contagion models. We find that cascades—and hence collective action—can be driven not only by high-stage influencers but also by low-stage influencers.
Caballero, Ubaldo; León-Cortés, Jorge L
2014-12-01
Over a 31-day period, the decomposition process, beetle diversity and succession on clothed pig (Sus scrofa L.) carcasses were studied in open (agricultural land) and shaded (secondary forest) habitats in Southern Mexico. The decomposition process was categorised into five stages: fresh, bloated, active decay, advanced decay and remains. Except for the bloated stage, the elapsed time for each decomposition stage was similar between open and shaded habitats; all carcasses reached the advanced decay stage in seven days, and the fifth stage (remains) was not recorded in any carcass during the time of this study. A total of 6344 beetles, belonging to 130 species and 21 families, were collected during the entire decomposition process, and abundances increased from the fresh to the advanced decay stages. Staphylinidae, Scarabaeidae and Histeridae were taxonomically and numerically dominant, accounting for 61% of the species richness and 87% of the total abundance. Similar numbers of species (87 and 88 species for open and shaded habitats, respectively), levels of diversity and proportions (open 49%; shaded 48%) of exclusive species were recorded at each habitat. There were significantly distinct beetle communities between habitats and for each stage of decomposition. An indicator species analysis ("IndVal") identified six species associated with open habitats, 10 species with shaded habitats and eight species with advanced decay stages. In addition, 23 beetle species are cited for the first time in the forensic literature. These results showed that open and shaded habitats both provide suitable conditions for carrion beetle diversity, with significant differences in community structure and in the identity of the species associated with each habitat. This research provides the first empirical evidence of beetle ecological succession and diversity on carrion in Mexican agro-pastoral landscapes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
An ab initio molecular dynamics study of thermal decomposition of 3,6-di(azido)-1,2,4,5-tetrazine.
Wu, Qiong; Zhu, Weihua; Xiao, Heming
2014-10-21
Ab initio molecular dynamics simulations were performed to study the thermal decomposition of isolated and crystalline 3,6-di(azido)-1,2,4,5-tetrazine (DiAT). During unimolecular decomposition, three different initiation mechanisms were observed: N-N2 cleavage, ring opening, and isomerization. The preferential initial decomposition step is the homolysis of the N-N2 bond in the azido group. The release mechanisms of nitrogen gas are found to be very different in the early and later decomposition stages of crystalline DiAT. In the early decomposition, DiAT decomposes very fast and drastically without forming any stable long chains or heterocyclic clusters, and most of the nitrogen gas is released through rapid rupture of nitrogen-nitrogen and carbon-nitrogen bonds. But in the later decomposition stage, the release of nitrogen gas is inhibited due to the low mobility of the fragments, their long distances from each other, and strong carbon-nitrogen bonds. To overcome these obstacles, the nitrogen gases are released through slow formation and disintegration of polycyclic networks. Our simulations suggest a new decomposition mechanism for the organic polyazido initial explosive at the atomistic level.
A two-stage linear discriminant analysis via QR-decomposition.
Ye, Jieping; Li, Qi
2005-06-01
Linear Discriminant Analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problems; that is, it fails when all scatter matrices are singular. Many LDA extensions were proposed in the past to overcome the singularity problems. Among these extensions, PCA+LDA, a two-stage method, received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using Principal Component Analysis (PCA). Most previous LDA extensions are computationally expensive, and not scalable, due to the use of Singular Value Decomposition or Generalized Singular Value Decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problems of classical LDA, while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship among LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.
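A minimal sketch of the two stages described above follows, assuming synthetic data and a small regularization term; the exact scatter definitions and eigen-solution details may differ from the paper's formulation.

```python
import numpy as np
from scipy.linalg import qr, eigh

def lda_qr(X, y, reg=1e-6):
    """Two-stage LDA/QR sketch: QR of the class-centroid matrix, then LDA
    in the k-dimensional subspace spanned by Q."""
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes], axis=1)  # d x k
    Q, _ = qr(centroids, mode="economic")          # d x k orthonormal basis

    Z = X @ Q                                      # project data to k dimensions
    mean_all = Z.mean(axis=0)
    Sb = np.zeros((Q.shape[1],) * 2)
    Sw = np.zeros_like(Sb)
    for c in classes:
        Zc = Z[y == c]
        diff = (Zc.mean(axis=0) - mean_all)[:, None]
        Sb += len(Zc) * diff @ diff.T
        Sw += np.cov(Zc, rowvar=False) * (len(Zc) - 1)
    # Generalized symmetric eigenproblem Sb v = lambda Sw v in the reduced space.
    w, V = eigh(Sb, Sw + reg * np.eye(len(Sw)))
    return Q @ V[:, ::-1][:, : len(classes) - 1]   # final d x (k-1) transform

# Toy high-dimensional data: 3 classes in 1000 dimensions, 30 samples each.
rng = np.random.default_rng(0)
X = np.vstack([rng.standard_normal((30, 1000)) + 3 * rng.standard_normal(1000)
               for _ in range(3)])
y = np.repeat([0, 1, 2], 30)
print(lda_qr(X, y).shape)
```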
NASA Technical Reports Server (NTRS)
Longuski, James M.; Mcronald, Angus D.
1988-01-01
In previous work the problem of injecting the Galileo and Ulysses spacecraft from low earth orbit into their respective interplanetary trajectories has been discussed for the single stage (Centaur) vehicle. The central issue, in the event of spherically distributed injection errors, is what happens to the vehicle? The difficulties addressed in this paper involve the multi-stage problem since both Galileo and Ulysses will be utilizing the two-stage IUS system. Ulysses will also include a third stage: the PAM-S. The solution is expressed in terms of probabilities for total percentage of escape, orbit decay and reentry trajectories. Analytic solutions are found for Hill's Equations of Relative Motion (more recently called Clohessy-Wiltshire Equations) for multi-stage injections. These solutions are interpreted geometrically on the injection sphere. The analytic-geometric models compare well with numerical solutions, provide insight into the behavior of trajectories mapped on the injection sphere and simplify the numerical two-dimensional search for trajectory families.
Thermal Stability of Fluorinated Polydienes Synthesized by Addition of Difluorocarbene
2012-01-01
The thermal decomposition of the fluorinated polydienes proceeds through a two-stage decomposition involving chain scission, crosslinking, dehydrogenation, and dehalogenation. The pyrolysis leads to graphite-like residues, whereas their polydiene precursors decompose completely under the same conditions.
An investigation on the modelling of kinetics of thermal decomposition of hazardous mercury wastes.
Busto, Yailen; M G Tack, Filip; Peralta, Luis M; Cabrera, Xiomara; Arteaga-Pérez, Luis E
2013-09-15
The kinetics of mercury removal from solid wastes generated by chlor-alkali plants were studied. The reaction order and model-free methods with an isoconversional approach were used to estimate the kinetic parameters and reaction mechanism that apply to the thermal decomposition of hazardous mercury wastes. As a first approach to understanding thermal decomposition for this type of system (poly-disperse and multi-component), a novel scheme of six reactions was proposed to represent the behaviour of mercury compounds in the solid matrix during the treatment. An integration-optimization algorithm was used in the screening of nine mechanistic models to develop kinetic expressions that best describe the process. The kinetic parameters were calculated by fitting each of these models to the experimental data. It was demonstrated that the D₁ diffusion mechanism appeared to govern the process at 250°C and high residence times, whereas at 450°C a combination of the diffusion mechanism (D₁) and the third-order reaction mechanism (F3) fitted the kinetics of the conversions. The developed models can be applied in engineering calculations to dimension installations and determine the optimal conditions for treating mercury-containing sludge. Copyright © 2013 Elsevier B.V. All rights reserved.
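For readers unfamiliar with the mechanism labels, the D₁ and F3 models mentioned above are usually tabulated with the following integral forms g(α) in solid-state kinetics; these are the standard textbook expressions, not equations reproduced from the paper:

\[
\mathrm{D1\ (one\text{-}dimensional\ diffusion):}\quad g(\alpha)=\alpha^{2},
\qquad
\mathrm{F3\ (third\text{-}order\ reaction):}\quad g(\alpha)=\tfrac{1}{2}\left[(1-\alpha)^{-2}-1\right],
\]

where α is the conversion and the isothermal fit at each temperature proceeds through g(α) = k(T) t.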
The biology and ecology of Necrodes littoralis, a species of forensic interest in Europe.
Charabidze, Damien; Vincent, Benoît; Pasquerault, Thierry; Hedouin, Valéry
2016-01-01
Necrodes littoralis (Linnaeus, 1758) (Coleoptera: Silphidae), also known as the "shore sexton beetle," is a common silphid beetle that visits and breeds on large vertebrate cadavers. This study describes, for the first time, the involvement of N. littoralis on human corpses based on a large dataset of 154 French forensic cases. Various parameters regarding corpse location, decomposition stages, and entomofauna were extracted from each file. Compared to all of the forensic entomology cases analyzed between 1990 and 2013 (1028), N. littoralis was observed, on average, in one case out of eight; most of these cases occurred during spring and summer (73.5%). More than 90% of the cases were located outdoors, especially in woodlands, bushes, and fields. The decomposition stage of the corpse varied among cases, with more than 50% in the advanced decomposition stage, 36% in the early decomposition stage, and less than 10% in the fresh, mummified, or skeletonized stages. Regarding other necrophagous species sampled with N. littoralis, Calliphorid flies were found in 94% of the cases and Fanniidae/Muscidae in 65% of the cases. Chrysomya albiceps, a heliophilic species mostly located in the Mediterranean area, was present in 34% of the cases (only 20% in the whole dataset). The most common coleopteran species were Necrobia spp. (Coleoptera: Cleridae) and Creophilus maxillosus (Coleoptera: Staphylinidae); these beetles were observed in 27% of the cases. The over-representation of these species is likely due to similar requirements regarding the climate and decomposition stage. As N. littoralis is frequently observed and tends to become more common, we conclude that the developmental data for this species would be a precious tool for forensic entomologists in Europe.
Toward a More Robust Pruning Procedure for MLP Networks
NASA Technical Reports Server (NTRS)
Stepniewski, Slawomir W.; Jorgensen, Charles C.
1998-01-01
Choosing a proper neural network architecture is a problem of great practical importance. Smaller models mean not only simpler designs but also lower variance for parameter estimation and network prediction. The widespread utilization of neural networks in modeling highlights an issue in human factors. The procedure of building neural models should find an appropriate level of model complexity in a more or less automatic fashion to make it less prone to human subjectivity. In this paper we present a Singular Value Decomposition based node elimination technique and enhanced implementation of the Optimal Brain Surgeon algorithm. Combining both methods creates a powerful pruning engine that can be used for tuning feedforward connectionist models. The performance of the proposed method is demonstrated by adjusting the structure of a multi-input multi-output model used to calibrate a six-component wind tunnel strain gage.
Proper Orthogonal Decomposition on Experimental Multi-phase Flow in a Pipe
NASA Astrophysics Data System (ADS)
Viggiano, Bianca; Tutkun, Murat; Cal, Raúl Bayoán
2016-11-01
Multi-phase flow in a 10 cm diameter pipe is analyzed using proper orthogonal decomposition. The data were obtained using X-ray computed tomography in the Well Flow Loop at the Institute for Energy Technology in Kjeller, Norway. The system consists of two sources and two detectors; one camera records the vertical beams and one camera records the horizontal beams. The X-ray system allows measurement of phase holdup, cross-sectional phase distributions and gas-liquid interface characteristics within the pipe. The mathematical framework in the context of multi-phase flows is developed. Phase fractions of a two-phase (gas-liquid) flow are analyzed and a reduced order description of the flow is generated. Experimental data deepens the complexity of the analysis with limited known quantities for reconstruction. Comparison between the reconstructed fields and the full data set allows observation of the important features. The mathematical description obtained from the decomposition will deepen the understanding of multi-phase flow characteristics and is applicable to fluidized beds, hydroelectric power and nuclear processes to name a few.
Multi-scale Methods in Quantum Field Theory
NASA Astrophysics Data System (ADS)
Polyzou, W. N.; Michlin, Tracie; Bulut, Fatih
2018-05-01
Daubechies wavelets are used to make an exact multi-scale decomposition of quantum fields. For reactions that involve a finite energy that take place in a finite volume, the number of relevant quantum mechanical degrees of freedom is finite. The wavelet decomposition has natural resolution and volume truncations that can be used to isolate the relevant degrees of freedom. The application of flow equation methods to construct effective theories that decouple coarse and fine scale degrees of freedom is examined.
Multi-water-bag models of ion temperature gradient instability in cylindrical geometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coulette, David; Besse, Nicolas
2013-05-15
Ion temperature gradient instabilities play a major role in the understanding of anomalous transport in core fusion plasmas. In the considered cylindrical geometry, ion dynamics is described using a drift-kinetic multi-water-bag model for the parallel velocity dependency of the ion distribution function. In a first stage, global linear stability analysis is performed. From the obtained normal modes, parametric dependencies of the main spectral characteristics of the instability are then examined. Comparison of the multi-water-bag results with a reference continuous Maxwellian case allows us to evaluate the effects of discrete parallel velocity sampling induced by the multi-water-bag model. Differences between the global model and local models considered in previous works are discussed. Using results from linear, quasilinear, and nonlinear numerical simulations, an analysis of the first stage saturation dynamics of the instability is proposed, where the divergence between the three models is examined.
NASA Astrophysics Data System (ADS)
Han, Zhen; Cui, Baoshan; Zhang, Yongtao
2015-09-01
Rhizomes are essential organs for growth and expansion of Phragmites australis. They function as an important source of organic matter and as a nutrient source, especially in the artificial land-water transitional zones (ALWTZs) of shallow lakes. In this study, decomposition experiments on 1- to 6-year-old P. australis rhizomes were conducted in the ALWTZ of Lake Baiyangdian to evaluate the contribution of the rhizomes to organic matter accumulation and nutrient release. Mass loss and changes in nutrient content were measured after 3, 7, 15, 30, 60, 90, 120, and 180 days. The decomposition process was modeled with a composite exponential model. The Pearson correlation analysis was used to analyze the relationships between mass loss and litter quality factors. A multiple stepwise regression model was utilized to determine the dominant factors that affect mass loss. Results showed that the decomposition rates in water were significantly higher than those in soil for 1- to 6-year-old rhizomes. However, the sequence of decomposition rates was identical in both water and soil. Significant relationships between mass loss and litter quality factors were observed at a later stage, and P-related factors proved to have a more significant impact than N-related factors on mass loss. According to multiple stepwise models, the C/P ratio was found to be the dominant factor affecting the mass loss in water, and the C/N and C/P ratios were the main factors affecting the mass loss in soil. The combined effects of harvesting, ditch broadening, and control of water depth should be considered for lake administrators.
Multidimensional k-nearest neighbor model based on EEMD for financial time series forecasting
NASA Astrophysics Data System (ADS)
Zhang, Ningning; Lin, Aijing; Shang, Pengjian
2017-07-01
In this paper, we propose a new two-stage methodology that combines ensemble empirical mode decomposition (EEMD) with a multidimensional k-nearest neighbor model (MKNN) to forecast the closing price and high price of stocks simultaneously. Modified k-nearest neighbor (KNN) algorithms are increasingly widely applied to prediction problems in many fields. Empirical mode decomposition (EMD) decomposes a nonlinear and non-stationary signal into a series of intrinsic mode functions (IMFs); however, it cannot reveal characteristic information of the signal with much accuracy as a result of mode mixing. Ensemble empirical mode decomposition (EEMD), an improved version of EMD, resolves this weakness by adding white noise to the original data. With EEMD, components with true physical meaning can be extracted from the time series. Utilizing the advantages of EEMD and MKNN, the proposed EEMD-MKNN model achieves high predictive precision for short-term forecasting. Moreover, we extend the methodology to two dimensions to forecast the closing price and high price of four stock indices (NAS, S&P500, DJI and STI) at the same time. The results indicate that the proposed EEMD-MKNN model has higher forecast precision than the EMD-KNN and KNN methods and ARIMA.
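A minimal sketch of the two-stage "decompose, predict per component, recombine" scheme is given below. It assumes the PyEMD package (pip install EMD-signal) for EEMD and scikit-learn for KNN regression, uses plain univariate KNN on lagged features rather than the paper's multidimensional MKNN, and the function name eemd_knn_forecast is a placeholder.

```python
import numpy as np
from PyEMD import EEMD                        # assumed dependency: pip install EMD-signal
from sklearn.neighbors import KNeighborsRegressor

def eemd_knn_forecast(series, n_lags=5, k=5):
    """One-step-ahead forecast: EEMD decomposition + per-component KNN regression."""
    imfs = EEMD().eemd(np.asarray(series, dtype=float))   # IMF components, shape (m, n)
    forecast = 0.0
    for comp in imfs:
        # Build lagged feature vectors for this component.
        X = np.array([comp[i:i + n_lags] for i in range(len(comp) - n_lags)])
        y = comp[n_lags:]
        model = KNeighborsRegressor(n_neighbors=k).fit(X, y)
        forecast += model.predict(comp[-n_lags:].reshape(1, -1))[0]
    return forecast
```

Because EEMD reconstruction is only approximate, the component forecasts sum to an estimate of the next value rather than an exact decomposition of it.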
Low-dimensional and Data Fusion Techniques Applied to a Rectangular Supersonic Multi-stream Jet
NASA Astrophysics Data System (ADS)
Berry, Matthew; Stack, Cory; Magstadt, Andrew; Ali, Mohd; Gaitonde, Datta; Glauser, Mark
2017-11-01
Low-dimensional models of experimental and simulation data for a complex supersonic jet were fused to reconstruct time-dependent proper orthogonal decomposition (POD) coefficients. The jet consists of a multi-stream rectangular single expansion ramp nozzle, containing a core stream operating at Mj,1 = 1.6 and a bypass stream at Mj,3 = 1.0 with an underlying deck. POD was applied to schlieren and PIV data to acquire the spatial basis functions. These eigenfunctions were projected onto their corresponding time-dependent large eddy simulation (LES) fields to reconstruct the temporal POD coefficients. This reconstruction was able to resolve spectral peaks that were previously aliased due to the slower sampling rates of the experiments. Additionally, dynamic mode decomposition (DMD) was applied to the experimental and LES datasets, and the spatio-temporal characteristics were compared to POD. The authors would like to acknowledge AFOSR, program manager Dr. Doug Smith, for funding this research, Grant No. FA9550-15-1-0435.
Tsyshevsky, Roman V; Sharia, Onise; Kuklja, Maija M
2016-02-19
This review presents a concept, which assumes that thermal decomposition processes play a major role in defining the sensitivity of organic energetic materials to detonation initiation. As a science and engineering community we are still far away from having a comprehensive molecular detonation initiation theory in a widely agreed upon form. However, recent advances in experimental and theoretical methods allow for a constructive and rigorous approach to design and test the theory or at least some of its fundamental building blocks. In this review, we analyzed a set of select experimental and theoretical articles, which were augmented by our own first principles modeling and simulations, to reveal new trends in energetic materials and to refine known existing correlations between their structures, properties, and functions. Our consideration is intentionally limited to the processes of thermally stimulated chemical reactions at the earliest stage of decomposition of molecules and materials containing defects.
Tsyshevsky, Roman; Sharia, Onise; Kuklja, Maija
2016-02-19
Our review presents a concept, which assumes that thermal decomposition processes play a major role in defining the sensitivity of organic energetic materials to detonation initiation. As a science and engineering community we are still far away from having a comprehensive molecular detonation initiation theory in a widely agreed upon form. However, recent advances in experimental and theoretical methods allow for a constructive and rigorous approach to design and test the theory or at least some of its fundamental building blocks. In this review, we analyzed a set of select experimental and theoretical articles, which were augmented by our own first principles modeling and simulations, to reveal new trends in energetic materials and to refine known existing correlations between their structures, properties, and functions. Lastly, our consideration is intentionally limited to the processes of thermally stimulated chemical reactions at the earliest stage of decomposition of molecules and materials containing defects.
Evidence for morphological composition in compound words using MEG.
Brooks, Teon L; Cid de Garcia, Daniela
2015-01-01
Psycholinguistic and electrophysiological studies of lexical processing show convergent evidence for morpheme-based lexical access for morphologically complex words that involves early decomposition into their constituent morphemes followed by some combinatorial operation. Considering that both semantically transparent (e.g., sailboat) and semantically opaque (e.g., bootleg) compounds undergo morphological decomposition during the earlier stages of lexical processing, subsequent combinatorial operations should account for the difference in the contribution of the constituent morphemes to the meaning of these different word types. In this study we use magnetoencephalography (MEG) to pinpoint the neural bases of this combinatorial stage in English compound word recognition. MEG data were acquired while participants performed a word naming task in which three word types, transparent compounds (e.g., roadside), opaque compounds (e.g., butterfly), and morphologically simple words (e.g., brothel) were contrasted in a partial-repetition priming paradigm where the word of interest was primed by one of its constituent morphemes. Analysis of onset latency revealed shorter latencies to name compound words than simplex words when primed, further supporting a stage of morphological decomposition in lexical access. An analysis of the associated MEG activity uncovered a region of interest implicated in morphological composition, the Left Anterior Temporal Lobe (LATL). Only transparent compounds showed increased activity in this area from 250 to 470 ms. Previous studies using sentences and phrases have highlighted the role of LATL in performing computations for basic combinatorial operations. Results are in tune with decomposition models for morpheme accessibility early in processing and suggest that semantics play a role in combining the meanings of morphemes when their composition is transparent to the overall word meaning.
Scalable domain decomposition solvers for stochastic PDEs in high performance computing
Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...
2017-09-21
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
Scalable domain decomposition solvers for stochastic PDEs in high performance computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desai, Ajit; Khalil, Mohammad; Pettit, Chris
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
NASA Astrophysics Data System (ADS)
Ding, Zhongan; Gao, Chen; Yan, Shengteng; Yang, Canrong
2017-10-01
The power user electric energy data acquire system (PUEEDAS) is an important part of smart grid. This paper builds a multi-objective optimization model for the performance of the PUEEADS from the point of view of the combination of the comprehensive benefits and cost. Meanwhile, the Chebyshev decomposition approach is used to decompose the multi-objective optimization problem. We design a MOEA/D evolutionary algorithm to solve the problem. By analyzing the Pareto optimal solution set of multi-objective optimization problem and comparing it with the monitoring value to grasp the direction of optimizing the performance of the PUEEDAS. Finally, an example is designed for specific analysis.
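The Chebyshev (Tchebycheff) decomposition mentioned above turns the multi-objective problem into a family of scalar subproblems, one per weight vector; a minimal sketch of the scalarizing function is shown below (NumPy assumed, names illustrative):

```python
import numpy as np

def tchebycheff(f, weights, z_ideal):
    """Tchebycheff scalarization g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|.
    MOEA/D minimizes this value for each weight vector within its neighborhood."""
    f, weights, z_ideal = map(np.asarray, (f, weights, z_ideal))
    return float(np.max(weights * np.abs(f - z_ideal)))
```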
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frolov, S A; Trunov, V I; Pestryakov, Efim V
2013-05-31
We have developed a technique for investigating the evolution of spatial inhomogeneities in high-power laser systems based on multi-stage parametric amplification. A linearised model of the inhomogeneity development is first devised for parametric amplification with small-scale self-focusing taken into account. It is shown that the application of this model gives results consistent (with high accuracy and in a wide range of inhomogeneity parameters) with the calculation without approximations. Using the linearised model, we have analysed the development of spatial inhomogeneities in a petawatt laser system based on multi-stage parametric amplification, developed at the Institute of Laser Physics, Siberian Branch of the Russian Academy of Sciences (ILP SB RAS).
Early stage litter decomposition across biomes
Ika Djukic; Sebastian Kepfer-Rojas; Inger Kappel Schmidt; Klaus Steenberg Larsen; Claus Beier; Björn Berg; Kris Verheyen; Adriano Caliman; Alain Paquette; Alba Gutiérrez-Girón; Alberto Humber; Alejandro Valdecantos; Alessandro Petraglia; Heather Alexander; Algirdas Augustaitis; Amélie Saillard; Ana Carolina Ruiz Fernández; Ana I. Sousa; Ana I. Lillebø; Anderson da Rocha Gripp; André-Jean Francez; Andrea Fischer; Andreas Bohner; Andrey Malyshev; Andrijana Andrić; Andy Smith; Angela Stanisci; Anikó Seres; Anja Schmidt; Anna Avila; Anne Probst; Annie Ouin; Anzar A. Khuroo; Arne Verstraeten; Arely N. Palabral-Aguilera; Artur Stefanski; Aurora Gaxiola; Bart Muys; Bernard Bosman; Bernd Ahrends; Bill Parker; Birgit Sattler; Bo Yang; Bohdan Juráni; Brigitta Erschbamer; Carmen Eugenia Rodriguez Ortiz; Casper T. Christiansen; E. Carol Adair; Céline Meredieu; Cendrine Mony; Charles A. Nock; Chi-Ling Chen; Chiao-Ping Wang; Christel Baum; Christian Rixen; Christine Delire; Christophe Piscart; Christopher Andrews; Corinna Rebmann; Cristina Branquinho; Dana Polyanskaya; David Fuentes Delgado; Dirk Wundram; Diyaa Radeideh; Eduardo Ordóñez-Regil; Edward Crawford; Elena Preda; Elena Tropina; Elli Groner; Eric Lucot; Erzsébet Hornung; Esperança Gacia; Esther Lévesque; Evanilde Benedito; Evgeny A. Davydov; Evy Ampoorter; Fabio Padilha Bolzan; Felipe Varela; Ferdinand Kristöfel; Fernando T. Maestre; Florence Maunoury-Danger; Florian Hofhansl; Florian Kitz; Flurin Sutter; Francisco Cuesta; Francisco de Almeida Lobo; Franco Leandro de Souza; Frank Berninger; Franz Zehetner; Georg Wohlfahrt; George Vourlitis; Geovana Carreño-Rocabado; Gina Arena; Gisele Daiane Pinha; Grizelle González; Guylaine Canut; Hanna Lee; Hans Verbeeck; Harald Auge; Harald Pauli; Hassan Bismarck Nacro; Héctor A. Bahamonde; Heike Feldhaar; Heinke Jäger; Helena C. Serrano; Hélène Verheyden; Helge Bruelheide; Henning Meesenburg; Hermann Jungkunst; Hervé Jactel; Hideaki Shibata; Hiroko Kurokawa; Hugo López Rosas; Hugo L. Rojas Villalobos; Ian Yesilonis; Inara Melece; Inge Van Halder; Inmaculada García Quirós; Isaac Makelele; Issaka Senou; István Fekete; Ivan Mihal; Ivika Ostonen; Jana Borovská; Javier Roales; Jawad Shoqeir; Jean-Christophe Lata; Jean-Paul Theurillat; Jean-Luc Probst; Jess Zimmerman; Jeyanny Vijayanathan; Jianwu Tang; Jill Thompson; Jiří Doležal; Joan-Albert Sanchez-Cabeza; Joël Merlet; Joh Henschel; Johan Neirynck; Johannes Knops; John Loehr; Jonathan von Oppen; Jónína Sigríður Þorláksdóttir; Jörg Löffler; José-Gilberto Cardoso-Mohedano; José-Luis Benito-Alonso; Jose Marcelo Torezan; Joseph C. Morina; Juan J. Jiménez; Juan Dario Quinde; Juha Alatalo; Julia Seeber; Jutta Stadler; Kaie Kriiska; Kalifa Coulibaly; Karibu Fukuzawa; Katalin Szlavecz; Katarína Gerhátová; Kate Lajtha; Kathrin Käppeler; Katie A. Jennings; Katja Tielbörger; Kazuhiko Hoshizaki; Ken Green; Lambiénou Yé; Laryssa Helena Ribeiro Pazianoto; Laura Dienstbach; Laura Williams; Laura Yahdjian; Laurel M. Brigham; Liesbeth van den Brink; Lindsey Rustad; al. et
2018-01-01
Through litter decomposition enormous amounts of carbon is emitted to the atmosphere. Numerous large-scale decomposition experiments have been conducted focusing on this fundamental soil process in order to understand the controls on the terrestrial carbon transfer to the atmosphere. However, previous studies were mostly based on site-specific litter and methodologies...
Billard, L; Dayananda, P W A
2014-03-01
Stochastic population processes have received a lot of attention over the years. One approach focuses on compartmental modeling. Billard and Dayananda (2012) developed one such multi-stage model for epidemic processes in which the possibility that individuals can die at any stage from non-disease related causes was also included. This extra feature is of particular interest to the insurance and health-care industries among others especially when the epidemic is HIV/AIDS. Rather than working with numbers of individuals in each stage, they obtained distributional results dealing with the waiting time any one individual spent in each stage given the initial stage. In this work, the impact of the HIV/AIDS epidemic on several functions relevant to these industries (such as adjustments to premiums) is investigated. Theoretical results are derived, followed by a numerical study. Copyright © 2014 Elsevier Inc. All rights reserved.
Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan
2017-12-20
A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike other similar localization algorithms, the proposed algorithm overcomes the shortcoming of the traditional algorithms which are only applicable to an isotropic network, therefore has a strong adaptability to the complex deployment environment. The proposed algorithm is composed of three stages: data acquisition, modeling and location estimation. In data acquisition stage, the training information between nodes of the given network is collected. In modeling stage, the model among the hop-counts and the physical distances between nodes is constructed using regularized extreme learning. In location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to the different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
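The modeling stage described above (learning a map from hop-count vectors to physical positions with regularized extreme learning) can be sketched as a random hidden layer followed by a ridge-regularized linear readout. The snippet assumes NumPy; the function names and the choice of tanh activation are illustrative, not taken from the paper.

```python
import numpy as np

def train_relm(hop_vectors, positions, n_hidden=200, lam=1e-2, seed=0):
    """Regularized ELM: hop_vectors (n, m) hop counts to anchors,
    positions (n, 2) known node coordinates."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((hop_vectors.shape[1], n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                           # random biases
    H = np.tanh(hop_vectors @ W + b)                            # hidden-layer outputs
    # Ridge-regularized least squares for the output weights.
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ positions)
    return W, b, beta

def predict_relm(model, hop_vectors):
    W, b, beta = model
    return np.tanh(hop_vectors @ W + b) @ beta                  # estimated coordinates
```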
Studies on thermal decomposition behaviors of polypropylene using molecular dynamics simulation
NASA Astrophysics Data System (ADS)
Huang, Jinbao; He, Chao; Tong, Hong; Pan, Guiying
2017-11-01
Polypropylene (PP) is one of the main components of waste plastics. In order to understand the mechanism of PP thermal decomposition, the pyrolysis behaviour of PP has been simulated from 300 to 1000 K under periodic boundary conditions by the molecular dynamics method, based on the AMBER force field. The simulation results show that the pyrolysis process of PP can be divided into three stages: a low-temperature pyrolysis stage, an intermediate-temperature stage and a high-temperature pyrolysis stage. PP pyrolysis is typical of random main-chain scission, and the possible formation mechanisms of the major pyrolysis products were analyzed.
Temperature Responses of Soil Organic Matter Components With Varying Recalcitrance
NASA Astrophysics Data System (ADS)
Simpson, M. J.; Feng, X.
2007-12-01
The response of soil organic matter (SOM) to global warming remains unclear, partly due to the chemical heterogeneity of SOM composition. In this study, the decomposition of SOM from two grassland soils was investigated in a one-year laboratory incubation at six different temperatures. SOM was separated into solvent-extractable compounds, suberin- and cutin-derived compounds, and lignin monomers by solvent extraction, base hydrolysis, and CuO oxidation, respectively. These SOM components had distinct chemical structures and recalcitrance, and their decomposition was fitted by a two-pool exponential decay model. The stability of SOM components was assessed using geochemical parameters and kinetic parameters derived from model fitting. Lignin monomers exhibited much lower decay rates than solvent-extractable compounds, and a relatively low percentage of lignin monomers partitioned into the labile SOM pool, which confirmed the generally accepted recalcitrance of lignin compounds. Suberin- and cutin-derived compounds were poorly fitted by the exponential decay model, and their recalcitrance was shown by the geochemical degradation parameter, which stabilized during the incubation. The aliphatic components of suberin degraded faster than cutin-derived compounds, suggesting that cutin-derived compounds in the soil may be at a higher stage of degradation than suberin-derived compounds. The temperature sensitivity of decomposition, expressed as Q10, was derived from the relationship between temperature and SOM decay rates. SOM components exhibited varying temperature responses, and the decomposition of the recalcitrant lignin monomers had much higher Q10 values than soil respiration or the decomposition of solvent-extractable compounds. Our study shows that the decomposition of recalcitrant SOM is highly sensitive to temperature, more so than bulk soil mineralization. This observation suggests a potential acceleration in the degradation of the recalcitrant SOM pool with global warming.
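For reference, a generic form of the two-pool exponential decay model and the Q10 temperature-sensitivity expression used in studies of this kind is (the exact parameterization in this study may differ):

\[
\frac{M(t)}{M_{0}} = f_{l}\,e^{-k_{l}t} + (1 - f_{l})\,e^{-k_{r}t},
\qquad
Q_{10} = \left(\frac{k_{T_{2}}}{k_{T_{1}}}\right)^{10/(T_{2}-T_{1})},
\]

where f_l is the labile fraction, k_l and k_r are the decay constants of the labile and recalcitrant pools, and k_{T_1}, k_{T_2} are decay rates measured at temperatures T_1 and T_2.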
NASA Astrophysics Data System (ADS)
Liu, Q.; Jing, L.; Li, Y.; Tang, Y.; Li, H.; Lin, Q.
2016-04-01
For the purpose of forest management, high-resolution LiDAR and optical remote sensing imagery are used for treetop detection, tree crown delineation, and classification. The purpose of this study is to develop a self-adjusted dominant-scale calculation method and a new crown horizontal cutting method for the tree canopy height model (CHM) to detect and delineate tree crowns from LiDAR, under the hypothesis that a treetop is a radiometric or altitudinal maximum and that tree crowns consist of multi-scale branches. The major concept of the method is to develop an automatic selection strategy for feature scales on the CHM, and a multi-scale morphological reconstruction-open crown decomposition (MRCD) to obtain morphological multi-scale features of the CHM by: cutting the CHM from treetop to the ground; analysing and refining the dominant multiple scales with differential horizontal profiles to obtain treetops; and segmenting the LiDAR CHM using a watershed segmentation approach marked with MRCD treetops. This method solves the problem of false detection of the CHM side-surface extracted by the traditional morphological opening canopy segment (MOCS) method. The novel MRCD delineates more accurate and quantitative multi-scale features of the CHM, and enables more accurate detection and segmentation of treetops and crowns. The MRCD method can also be extended to tree crown extraction from high-resolution optical remote sensing imagery. In an experiment on an aerial LiDAR CHM of a forest with multi-scale tree crowns, the proposed method yielded high-quality tree crown maps.
Dynamics of Potassium Release and Adsorption on Rice Straw Residue
Li, Jifu; Lu, Jianwei; Li, Xiaokun; Ren, Tao; Cong, Rihuan; Zhou, Li
2014-01-01
Straw application can not only increase crop yields, improve soil structure and enrich soil fertility, but can also enhance water and nutrient retention. The aim of this study was to ascertain the relationships between straw decomposition and the release-adsorption processes of K+. This study increases the understanding of the roles played by agricultural crop residues in the soil environment, informs more effective straw recycling and provides a method for reducing potassium loss. The influence of straw decomposition on the K+ release rate in paddy soil under flooded condition was studied using incubation experiments, which indicated the decomposition process of rice straw could be divided into two main stages: (a) a rapid decomposition stage from 0 to 60 d and (b) a slow decomposition stage from 60 to 110 d. However, the characteristics of the straw potassium release were different from those of the overall straw decomposition, as 90% of total K was released by the third day of the study. The batches of the K sorption experiments showed that crop residues could adsorb K+ from the ambient environment, which was subject to decomposition periods and extra K+ concentration. In addition, a number of materials or binding sites were observed on straw residues using IR analysis, indicating possible coupling sites for K+ ions. The aqueous solution experiments indicated that raw straw could absorb water at 3.88 g g−1, and this rate rose to its maximum 15 d after incubation. All of the experiments demonstrated that crop residues could absorb large amount of aqueous solution to preserve K+ indirectly during the initial decomposition period. These crop residues could also directly adsorb K+ via physical and chemical adsorption in the later period, allowing part of this K+ to be absorbed by plants for the next growing season. PMID:24587364
Dynamics of potassium release and adsorption on rice straw residue.
Li, Jifu; Lu, Jianwei; Li, Xiaokun; Ren, Tao; Cong, Rihuan; Zhou, Li
2014-01-01
Straw application can not only increase crop yields, improve soil structure and enrich soil fertility, but can also enhance water and nutrient retention. The aim of this study was to ascertain the relationships between straw decomposition and the release-adsorption processes of K(+). This study increases the understanding of the roles played by agricultural crop residues in the soil environment, informs more effective straw recycling and provides a method for reducing potassium loss. The influence of straw decomposition on the K(+) release rate in paddy soil under flooded condition was studied using incubation experiments, which indicated the decomposition process of rice straw could be divided into two main stages: (a) a rapid decomposition stage from 0 to 60 d and (b) a slow decomposition stage from 60 to 110 d. However, the characteristics of the straw potassium release were different from those of the overall straw decomposition, as 90% of total K was released by the third day of the study. The batches of the K sorption experiments showed that crop residues could adsorb K(+) from the ambient environment, which was subject to decomposition periods and extra K(+) concentration. In addition, a number of materials or binding sites were observed on straw residues using IR analysis, indicating possible coupling sites for K(+) ions. The aqueous solution experiments indicated that raw straw could absorb water at 3.88 g g(-1), and this rate rose to its maximum 15 d after incubation. All of the experiments demonstrated that crop residues could absorb large amount of aqueous solution to preserve K(+) indirectly during the initial decomposition period. These crop residues could also directly adsorb K(+) via physical and chemical adsorption in the later period, allowing part of this K(+) to be absorbed by plants for the next growing season.
Zheng, Dalong; Ma, Liping; Wang, Rongmou; Yang, Jie; Dai, Quxiu
2018-02-01
Phosphogypsum is a solid industry by-product generated when sulphuric acid is used to process phosphate ore into fertiliser. Phosphogypsum stacks without pretreatment are often piled on the land surface or dumped in the sea, causing significant environmental damage. This study examined the reaction characteristics of phosphogypsum, when decomposed in a multi-atmosphere fluidised bed. Phosphogypsum was first dried, sieved and mixed proportionally with lignite at the mass ratio of 10:1, it was then immersed in 0.8 [Formula: see text] with a solid-liquid ratio of 8:25. The study included a two-step cycle of multi-atmosphere control. First, a reducing atmosphere was provided to allow phosphogypsum decomposition through partial lignite combustion. After the reduction stage reaction was completed, the reducing atmosphere was changed into an air-support oxidising atmosphere at the constant temperature. Each atmosphere cycle had a conversion time of 30 min to ensure a sufficient reaction. The decomposing properties of phosphogypsum were obtained in different atmosphere cycles, at different reaction temperatures, different heating rates and different fluidised gas velocities, using experimental results combined with a theoretical analysis using FactSage 7.0 Reaction module. The study revealed that the optimum reaction condition was to circulate the atmosphere twice at a temperature of 1100 °C. The heating rate above 800 °C was 5 [Formula: see text], and the fluidised gas velocity was 0.40 [Formula: see text]. The procedure proposed in this article can serve as a phosphogypsum decomposition solution, and can support the future management of this by-product, resulting in more sustainable production.
Carlton, Connor D; Mitchell, Samantha; Lewis, Patrick
2018-01-01
Over the past decade, Structure from Motion (SfM) has increasingly been used as a means of digital preservation and for documenting archaeological excavations, architecture, and cultural material. However, few studies have tapped the potential of using SfM to document and analyze taphonomic processes affecting burials for forensic sciences purposes. This project utilizes SfM models to elucidate specific post-depositional events that affected a series of three human cadavers deposited at the South East Texas Applied Forensic Science Facility (STAFS). The aim of this research was to test the ability for untrained researchers to employ spatial software and photogrammetry for data collection purposes. For a series of three months a single lens reflex (SLR) camera was used to capture a series of overlapping images at periodic stages in the decomposition process of each cadaver. These images are processed through photogrammetric software that creates a 3D model that can be measured, manipulated, and viewed. This project used photogrammetric and geospatial software to map changes in decomposition and movement of the body from original deposition points. Project results indicate SfM and GIS as a useful tool for documenting decomposition and taphonomic processes. Results indicate photogrammetry is an efficient, relatively simple, and affordable tool for the documentation of decomposition. Copyright © 2017 Elsevier B.V. All rights reserved.
Wavelet data analysis of micro-Raman spectra for follow-up monitoring in oral pathologies
NASA Astrophysics Data System (ADS)
Camerlingo, C.; Zenone, F.; Perna, G.; Capozzi, V.; Cirillo, N.; Gaeta, G. M.; Lepore, M.
2008-02-01
A wavelet multi-component decomposition algorithm has been used for data analysis of micro-Raman spectra from human biological samples. In particular, measurements have been performed on samples of oral tissue and blood serum from patients affected by pemphigus vulgaris at different stages. Pemphigus is a chronic, autoimmune, blistering disease of the skin and mucous membranes with a potentially fatal outcome. The disease is characterized histologically by intradermal blisters and immunopathologically by the finding of tissue-bound and circulating immunoglobulin G (IgG) antibody directed against the cell surface of keratinocytes. More than 150 spectra were measured by means of a Raman confocal microspectrometer apparatus using the 632.8 nm line of a He-Ne laser source. A discrete wavelet transform decomposition method has been applied to the recorded Raman spectra in order to overcome problems related to low-level signals and the presence of noise and background components due to light scattering and fluorescence. The results indicate that appropriate data processing can contribute to enlarging the medical applications of micro-Raman spectroscopy.
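A minimal sketch of the kind of discrete-wavelet preprocessing described above is given below: the slowly varying approximation (fluorescence-like background) is suppressed and the detail coefficients are soft-thresholded against noise. It assumes the PyWavelets package; the wavelet choice, decomposition level and threshold rule are illustrative, not those of the paper.

```python
import numpy as np
import pywt   # assumed dependency: PyWavelets

def clean_raman_spectrum(spectrum, wavelet="db6", level=6):
    """Illustrative DWT-based cleaning of a 1-D Raman spectrum."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])                 # drop fluorescence-like baseline
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # robust noise estimate
    thresh = sigma * np.sqrt(2 * np.log(len(spectrum)))  # universal threshold
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(spectrum)]
```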
Chao, T.T.; Sanzolone, R.F.
1992-01-01
Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor for sample throughput, especially with the recent application of fast and modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose a sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, the elements to be determined, precision and accuracy requirements, sample throughput, the technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition, along with examples of their application to geochemical analysis. The chemical properties of reagents in their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire-assaying for noble metals or decomposition techniques for X-ray fluorescence or nuclear methods. © 1992.
Simulated dynamic response of a multi-stage compressor with variable molecular weight flow medium
NASA Technical Reports Server (NTRS)
Babcock, Dale A.
1995-01-01
A mathematical model of a multi-stage compressor with variable molecular weight flow medium is derived. The modeled system consists of a five stage, six cylinder, double acting, piston type compressor. Each stage is followed by a water cooled heat exchanger which serves to transfer the heat of compression from the gas. A high molecular weight gas (CFC-12) mixed with air in varying proportions is introduced to the suction of the compressor. Condensation of the heavy gas may occur in the upper stage heat exchangers. The state equations for the system are integrated using the Advanced Continuous Simulation Language (ACSL) for determining the system's dynamic and steady state characteristics under varying operating conditions.
NASA Astrophysics Data System (ADS)
Smith, Nathan; Provatas, Nikolas
Recent experimental work has shown that gold nanoparticles can precipitate from an aqueous solution through a non-classical, multi-step nucleation process. This multi-step process begins with spinodal decomposition into solute-rich and solute-poor liquid domains, followed by nucleation from within the solute-rich domains. We present a binary phase-field crystal theory that shows the same phenomenology and examine various cross-over regimes in the growth and coarsening of liquid and solid domains. We would like to thank the Canada Research Chairs (CRC) program for funding this work.
Pomerantsev, Alexey L; Kutsenova, Alla V; Rodionova, Oxana Ye
2017-02-01
A novel non-linear regression method for modeling non-isothermal thermogravimetric data is proposed. Experiments for several heating rates are analyzed simultaneously. The method is applicable to complex multi-stage processes when the number of stages is unknown. Prior knowledge of the type of kinetics is not required. The main idea is a consequent estimation of parameters when the overall model is successively changed from one level of modeling to another. At the first level, the Avrami-Erofeev functions are used. At the second level, the Sestak-Berggren functions are employed with the goal to broaden the overall model. The method is tested using both simulated and real-world data. A comparison of the proposed method with a recently published 'model-free' deconvolution method is presented.
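For orientation, commonly used differential forms of the two function families named above are (general textbook expressions; the paper's exact parameterization may differ):

\[
f_{\mathrm{AE}}(\alpha) = n\,(1-\alpha)\left[-\ln(1-\alpha)\right]^{1-1/n},
\qquad
f_{\mathrm{SB}}(\alpha) = \alpha^{m}\,(1-\alpha)^{n},
\]

where α is the extent of conversion and m, n are adjustable exponents; the truncated Šesták-Berggren form reduces to many classical kinetic models for particular choices of (m, n), which is why it is used to broaden the overall model at the second level.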
a Multi Objective Model for Optimization of a Green Supply Chain Network
NASA Astrophysics Data System (ADS)
Paksoy, Turan; Özceylan, Eren; Weber, Gerhard-Wilhelm
2010-06-01
This study develops a model of a closed-loop supply chain (CLSC) network which starts with the suppliers and recycles through the decomposition centers. As in traditional network design, we consider minimizing all transportation costs and raw material purchasing costs. To account for green impacts, different transportation choices are presented between echelons according to their CO2 emissions. The plants can purchase different raw materials with respect to their recyclable ratios. The focus of this paper is on minimizing total CO2 emissions. We also try to encourage customers to use recyclable materials, as an environmental performance objective alongside minimizing total costs. A multi-objective linear programming model is developed and illustrated with a numerical example. We close the paper with recommendations for future research.
Hierarchical competitions subserving multi-attribute choice
Hunt, Laurence T; Dolan, Raymond J; Behrens, Timothy EJ
2015-01-01
Valuation is a key tenet of decision neuroscience, where it is generally assumed that different attributes of competing options are assimilated into unitary values. Such values are central to current neural models of choice. By contrast, psychological studies emphasize complex interactions between choice and valuation. Principles of neuronal selection also suggest competitive inhibition may occur in early valuation stages, before option selection. Here, we show behavior in multi-attribute choice is best explained by a model involving competition at multiple levels of representation. This hierarchical model also explains neural signals in human brain regions previously linked to valuation, including striatum, parietal and prefrontal cortex, where activity represents competition within-attribute, competition between attributes, and option selection. This multi-layered inhibition framework challenges the assumption that option values are computed before choice. Instead our results indicate a canonical competition mechanism throughout all stages of a processing hierarchy, not simply at a final choice stage. PMID:25306549
Measures of Microbial Biomass for Soil Carbon Decomposition Models
NASA Astrophysics Data System (ADS)
Mayes, M. A.; Dabbs, J.; Steinweg, J. M.; Schadt, C. W.; Kluber, L. A.; Wang, G.; Jagadamma, S.
2014-12-01
Explicit parameterization of the decomposition of plant inputs and soil organic matter by microbes is becoming more widely accepted in models of various complexity, ranging from detailed process models to global-scale earth system models. While there are multiple ways to measure microbial biomass, chloroform fumigation-extraction (CFE) is commonly used to parameterize models. However, CFE is labor- and time-intensive, requires toxic chemicals, and provides no specific information about the composition or function of the microbial community. We investigated correlations between measures of CFE, DNA extraction yield, qPCR base-gene copy numbers for Bacteria, Fungi and Archaea, phospholipid fatty acid analysis, and direct cell counts to determine their potential for use as proxies for microbial biomass. As our ultimate goal is to develop reliable, more informative, and faster methods to predict microbial biomass for use in models, we also examined basic soil physiochemical characteristics including texture, organic matter content, pH, etc. to identify multi-factor predictive correlations with one or more measures of the microbial community. Our work will have application to both microbial ecology studies and the next generation of process and earth system models.
Ehrhardt, Fiona; Soussana, Jean-François; Bellocchi, Gianni; Grace, Peter; McAuliffe, Russel; Recous, Sylvie; Sándor, Renáta; Smith, Pete; Snow, Val; de Antoni Migliorati, Massimiliano; Basso, Bruno; Bhatia, Arti; Brilli, Lorenzo; Doltra, Jordi; Dorich, Christopher D; Doro, Luca; Fitton, Nuala; Giacomini, Sandro J; Grant, Brian; Harrison, Matthew T; Jones, Stephanie K; Kirschbaum, Miko U F; Klumpp, Katja; Laville, Patricia; Léonard, Joël; Liebig, Mark; Lieffering, Mark; Martin, Raphaël; Massad, Raia S; Meier, Elizabeth; Merbold, Lutz; Moore, Andrew D; Myrgiotis, Vasileios; Newton, Paul; Pattey, Elizabeth; Rolinski, Susanne; Sharp, Joanna; Smith, Ward N; Wu, Lianhai; Zhang, Qing
2018-02-01
Simulation models are extensively used to predict agricultural productivity and greenhouse gas emissions. However, the uncertainties of (reduced) model ensemble simulations have not been assessed systematically for variables affecting food security and climate change mitigation, within multi-species agricultural contexts. We report an international model comparison and benchmarking exercise, showing the potential of multi-model ensembles to predict productivity and nitrous oxide (N2O) emissions for wheat, maize, rice and temperate grasslands. Using a multi-stage modelling protocol, from blind simulations (stage 1) to partial (stages 2-4) and full calibration (stage 5), 24 process-based biogeochemical models were assessed individually or as an ensemble against long-term experimental data from four temperate grassland and five arable crop rotation sites spanning four continents. Comparisons were performed by reference to the experimental uncertainties of observed yields and N2O emissions. Results showed that across sites and crop/grassland types, 23%-40% of the uncalibrated individual models were within two standard deviations (SD) of observed yields, while 42% (rice) to 96% (grasslands) of the models were within 1 SD of observed N2O emissions. At stage 1, ensembles formed by the three lowest prediction model errors predicted both yields and N2O emissions within experimental uncertainties for 44% and 33% of the crop and grassland growth cycles, respectively. Partial model calibration (stages 2-4) markedly reduced prediction errors of the full model ensemble E-median for crop grain yields (from 36% at stage 1 down to 4% on average) and grassland productivity (from 44% to 27%) and to a lesser and more variable extent for N2O emissions. Yield-scaled N2O emissions (N2O emissions divided by crop yields) were ranked accurately by three-model ensembles across crop species and field sites. The potential of using process-based model ensembles to predict jointly productivity and N2O emissions at field scale is discussed. © 2017 John Wiley & Sons Ltd.
Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D
2015-05-08
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
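A highly simplified sketch of the pixel-level fusion rule enabled by aligned IMFs is shown below. The memd function imported here is hypothetical (a stand-in for any MEMD implementation returning IMFs aligned across input images); the winner-take-all rule on IMF magnitude is one simple choice and not necessarily the rule used by the authors.

```python
import numpy as np
# Hypothetical dependency: `memd` is assumed to return IMFs aligned across the
# input images, shaped (n_scales, n_images, H, W), for co-registered images.
from memd import memd

def fuse_images(images):
    """Fuse co-registered images scale by scale: at each aligned IMF scale keep,
    per pixel, the channel whose IMF magnitude is largest, then sum the scales."""
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    imfs = memd(stack)                                   # (scales, channels, H, W)
    fused = np.zeros(stack.shape[1:])
    for scale in imfs:                                   # scale: (channels, H, W)
        winner = np.argmax(np.abs(scale), axis=0)        # per-pixel best channel
        fused += np.take_along_axis(scale, winner[None], axis=0)[0]
    return fused
```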
Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P.; McDonald-Maier, Klaus D.
2015-01-01
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences. PMID:26007714
Fast multi-scale feature fusion for ECG heartbeat classification
NASA Astrophysics Data System (ADS)
Ai, Danni; Yang, Jian; Wang, Zeyu; Fan, Jingfan; Ai, Changbin; Wang, Yongtian
2015-12-01
Electrocardiography (ECG) monitors the electrical activity of the heart through signals of small amplitude and duration; as a result, hidden information present in ECG data is difficult to determine. However, this concealed information can be used to detect abnormalities. In our study, a fast feature-fusion method for ECG heartbeat classification based on multi-linear subspace learning is proposed. The method consists of four stages. First, the baseline and high frequencies are removed and heartbeats are segmented. Second, as an extension of wavelets, wavelet-packet decomposition is conducted to extract features. With wavelet-packet decomposition, good time and frequency resolutions can be provided simultaneously. Third, the decomposed coefficients are arranged as a two-way tensor, in which feature fusion is directly implemented with generalized N-dimensional ICA (GND-ICA). In this method, the co-relationships among different data are considered, the disadvantages of high dimensionality are avoided, and computation is reduced compared with linear subspace-learning methods such as PCA. Finally, a support vector machine (SVM) is used as the classifier for heartbeat classification. In this study, ECG records are obtained from the MIT-BIH arrhythmia database. Four main heartbeat classes are used to examine the proposed algorithm. Based on the results of five measurements (sensitivity, positive predictivity, accuracy, average accuracy, and the t-test), our conclusion is that a GND-ICA-based strategy can provide enhanced ECG heartbeat classification. Furthermore, largely redundant features are eliminated, and classification time is reduced.
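The wavelet-packet feature extraction and SVM classification stages described above can be sketched as follows; the GND-ICA fusion step is omitted here. The snippet assumes the PyWavelets and scikit-learn packages, and the wavelet, decomposition level and energy features are illustrative choices rather than the paper's exact settings.

```python
import numpy as np
import pywt                                   # assumed dependency: PyWavelets
from sklearn.svm import SVC

def wavelet_packet_features(beat, wavelet="db4", level=4):
    """Energy of each terminal wavelet-packet node as the feature vector
    for one segmented heartbeat (1-D array)."""
    wp = pywt.WaveletPacket(data=beat, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    return np.array([np.sum(np.square(n.data)) for n in nodes])

def train_classifier(beats, labels):
    """beats: list of 1-D heartbeat segments; labels: class label per beat."""
    X = np.vstack([wavelet_packet_features(b) for b in beats])
    return SVC(kernel="rbf").fit(X, labels)
```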
Sequential or parallel decomposed processing of two-digit numbers? Evidence from eye-tracking.
Moeller, Korbinian; Fischer, Martin H; Nuerk, Hans-Christoph; Willmes, Klaus
2009-02-01
While reaction time data have shown that decomposed processing of two-digit numbers occurs, there is little evidence about how decomposed processing functions. Poltrock and Schwartz (1984) argued that multi-digit numbers are compared in a sequential digit-by-digit fashion starting at the leftmost digit pair. In contrast, Nuerk and Willmes (2005) favoured parallel processing of the digits constituting a number. These models (i.e., sequential decomposition, parallel decomposition) make different predictions regarding the fixation pattern in a two-digit number magnitude comparison task and can therefore be differentiated by eye fixation data. We tested these models by evaluating participants' eye fixation behaviour while selecting the larger of two numbers. The stimulus set consisted of within-decade comparisons (e.g., 53_57) and between-decade comparisons (e.g., 42_57). The between-decade comparisons were further divided into compatible and incompatible trials (cf. Nuerk, Weger, & Willmes, 2001) and trials with different decade and unit distances. The observed fixation pattern implies that the comparison of two-digit numbers is not executed by sequentially comparing decade and unit digits as proposed by Poltrock and Schwartz (1984) but rather in a decomposed but parallel fashion. Moreover, the present fixation data provide first evidence that digit processing in multi-digit numbers is not a pure bottom-up effect, but is also influenced by top-down factors. Finally, implications for multi-digit number processing beyond the range of two-digit numbers are discussed.
Decomposition and arthropod succession in Whitehorse, Yukon Territory, Canada.
Bygarski, Katherine; LeBlanc, Helene N
2013-03-01
Forensic arthropod succession patterns are known to vary between regions. However, the northern habitats of the globe have been largely left unstudied. Three pig carcasses were studied outdoors in Whitehorse, Yukon Territory. Adult and immature insects were collected for identification and comparison. The dominant Diptera and Coleoptera species at all carcasses were Protophormia terraenovae (R-D) (Fam: Calliphoridae) and Thanatophilus lapponicus (Herbst) (Fam: Silphidae), respectively. Rate of decomposition, patterns of Diptera and Coleoptera succession, and species dominance were shown to differ from previous studies in temperate regions, particularly as P. terraenovae showed complete dominance among blowfly species. Rate of decomposition through the first four stages was generally slow, and the last stage of decomposition was not observed at any carcass due to time constraints. It is concluded that biogeoclimatic range has a significant effect on insect presence and rate of decomposition, making it an important factor to consider when calculating a postmortem interval. © 2012 American Academy of Forensic Sciences.
How quantitative measures unravel design principles in multi-stage phosphorylation cascades.
Frey, Simone; Millat, Thomas; Hohmann, Stefan; Wolkenhauer, Olaf
2008-09-07
We investigate design principles of linear multi-stage phosphorylation cascades by using quantitative measures for signaling time, signal duration and signal amplitude. We compare alternative pathway structures by varying the number of phosphorylations and the length of the cascade. We show that a model for a weakly activated pathway does not reflect the biological context well, unless it is restricted to certain parameter combinations. Focusing therefore on a more general model, we compare alternative structures with respect to a multivariate optimization criterion. We test the hypothesis that the structure of a linear multi-stage phosphorylation cascade is the result of an optimization process aiming for a fast response, defined by the minimum of the product of signaling time and signal duration. It is then shown that certain pathway structures minimize this criterion. Several popular models of MAPK cascades form the basis of our study. These models represent different levels of approximation, which we compare and discuss with respect to the quantitative measures.
Macro-actor execution on multilevel data-driven architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaudiot, J.L.; Najjar, W.
1988-12-31
The data-flow model of computation brings to multiprocessors high programmability at the expense of increased overhead. Applying the model at a higher level leads to better performance but also introduces a loss of parallelism. We demonstrate here syntax-directed program decomposition methods for the creation of large macro-actors in numerical algorithms. In order to alleviate some of the problems introduced by the lower-resolution interpretation, we describe a multi-level resolution approach and analyze the requirements for its actual hardware and software integration.
Li, Huiyan; Wei, Zishang; Huangfu, Chaohe; Chen, Xinwei; Yang, Dianlin
2017-01-01
In natural ecosystems, invasive plant litter is often mixed with that of native species, yet few studies have examined the decomposition dynamics of such mixtures, especially across different degrees of invasion. We conducted a 1-year litterbag experiment using leaf litters from the invasive species Flaveria bidentis (L.) and the dominant co-occurring native species, Setaria viridis (L.). Litters were allowed to decompose either separately or together at different ratios in a mothproof screen house. The mass loss of all litter mixtures was non-additive, and the direction and strength of effects varied with species ratio and decomposition stage. During the initial stages of decomposition, all mixtures had a neutral effect on the mass loss; however, at later stages of decomposition, mixtures containing more invasive litter had synergistic effects on mass loss. Importantly, an increase in F. bidentis litter with a lower C:N ratio in mixtures led to greater net release of N over time. These results highlight the importance of trait dissimilarity in determining the decomposition rates of litter mixtures and suggest that F. bidentis could further synchronize N release from litter as an invasion proceeds, potentially creating a positive feedback linked through invasion as the invader outcompetes the natives for nutrients. Our findings also demonstrate the importance of species composition as well as the identity of dominant species when considering how changes in plant community structure influence plant invasion.
Tilt to horizontal global solar irradiance conversion: application to PV systems data
NASA Astrophysics Data System (ADS)
Housmans, Caroline; Leloux, Jonathan; Bertrand, Cédric
2017-04-01
Many transposition models have been proposed in the literature to convert solar irradiance on the horizontal plane to that on a tilted plane, requiring that at least two of the three solar components (i.e. global, direct and diffuse) are known. When only global irradiance measurements are available, the conversion from horizontal to tilted planes is still possible, but in this case transposition models have to be coupled with decomposition models (i.e. models that predict the direct and diffuse components from the global one). Here, two different approaches have been considered to solve the reverse process, i.e. the conversion from tilted to horizontal: (i) a one-sensor approach and (ii) a multi-sensor approach. Because only one tilted plane is involved in the one-sensor approach, a decomposition model needs to be coupled with a transposition model to solve the problem. By contrast, since at least two tilted planes are considered in the multi-sensor approach, only a transposition model is required to perform the conversion. First, global solar irradiance measurements recorded on the roof of the Royal Meteorological Institute of Belgium's radiation tower in Uccle were used to evaluate the performance of both approaches. Four pyranometers (one mounted in the horizontal plane and three on inclined surfaces with different tilts and orientations) were involved in the validation exercise. Second, the inverse transposition was applied to tilted global solar irradiance values retrieved from the energy production registered at residential PV systems located in the vicinity of Belgian radiometric stations operated by RMI (for validation purposes).
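For orientation, the forward (horizontal-to-tilt) chain that the paper inverts can be sketched as below; the Erbs correlation and the isotropic sky model are generic stand-ins for the decomposition and transposition models, not necessarily the ones used in the study, and the albedo value is an illustrative assumption.

    # Minimal sketch: Erbs decomposition (diffuse fraction from the clearness index)
    # followed by an isotropic-sky transposition onto a tilted plane.
    import numpy as np

    def erbs_diffuse_fraction(kt):
        """Diffuse fraction of global horizontal irradiance from the clearness index kt."""
        if kt <= 0.22:
            return 1.0 - 0.09 * kt
        if kt <= 0.80:
            return (0.9511 - 0.1604 * kt + 4.388 * kt**2
                    - 16.638 * kt**3 + 12.336 * kt**4)
        return 0.165

    def isotropic_tilt(ghi, kt, zenith_deg, incidence_deg, tilt_deg, albedo=0.2):
        kd = erbs_diffuse_fraction(kt)
        dhi = kd * ghi                                   # diffuse horizontal
        bhi = ghi - dhi                                  # beam horizontal
        rb = max(np.cos(np.radians(incidence_deg)), 0.0) / max(np.cos(np.radians(zenith_deg)), 1e-3)
        beta = np.radians(tilt_deg)
        return (bhi * rb                                           # beam on tilt
                + dhi * (1.0 + np.cos(beta)) / 2.0                 # isotropic sky diffuse
                + ghi * albedo * (1.0 - np.cos(beta)) / 2.0)       # ground-reflected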
NASA Astrophysics Data System (ADS)
Rahman, P. A.
2018-05-01
This scientific paper deals with a model of the knapsack optimization problem and a method for solving it based on a directed combinatorial search in Boolean space. The author's specialized mathematical model for decomposing the search zone into separate search spheres, and the algorithm for distributing the search spheres across the cores of a multi-core processor, are also discussed. The paper also provides an example of decomposing the search zone into several search spheres and distributing them across the cores of a quad-core processor. Finally, a formula proposed by the author for estimating the theoretical maximum computational acceleration achievable by parallelizing the search zone into search spheres over an unlimited number of processor cores is also given.
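A minimal Python sketch of the general idea is shown below: the Boolean search space is split across processor cores by fixing the leading bits of the solution vector, so that each fixed prefix plays the role of one search sphere. The item data, the exhaustive inner search and the two-bit prefix are illustrative assumptions, not the author's algorithm.

    # Minimal sketch: partition a 0/1 knapsack search space by fixed prefixes and
    # evaluate each partition on a separate worker process.
    from itertools import product
    from multiprocessing import Pool

    VALUES  = [10, 7, 4, 9, 3, 6]
    WEIGHTS = [ 4, 3, 2, 5, 1, 3]
    CAPACITY = 10
    PREFIX_BITS = 2   # 2 fixed bits -> 4 independent subproblems ("search spheres")

    def best_in_sphere(prefix):
        n_free = len(VALUES) - len(prefix)
        best = 0
        for tail in product((0, 1), repeat=n_free):
            x = prefix + tail
            w = sum(wi for wi, xi in zip(WEIGHTS, x) if xi)
            if w <= CAPACITY:
                best = max(best, sum(vi for vi, xi in zip(VALUES, x) if xi))
        return best

    if __name__ == "__main__":
        spheres = list(product((0, 1), repeat=PREFIX_BITS))
        with Pool() as pool:
            print(max(pool.map(best_in_sphere, spheres)))   # optimal knapsack value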
A study of the parallel algorithm for large-scale DC simulation of nonlinear systems
NASA Astrophysics Data System (ADS)
Cortés Udave, Diego Ernesto; Ogrodzki, Jan; Gutiérrez de Anda, Miguel Angel
Newton-Raphson DC analysis of large-scale nonlinear circuits may be an extremely time-consuming process even if sparse matrix techniques and bypassing of nonlinear model calculations are used. A slight decrease in the time required for this task may be achieved on multi-core, multithreaded computers if the calculation of the mathematical models of the nonlinear elements, as well as the stamp management of the sparse matrix entries, is handled by concurrent processes. This numerical complexity can be further reduced via circuit decomposition and parallel solution of blocks, taking the BBD matrix structure as a departure point. This block-parallel approach may yield a considerable gain, though it is strongly dependent on the system topology and, of course, on the processor type. This contribution presents an easily parallelizable decomposition-based algorithm for DC simulation and provides a detailed study of its effectiveness.
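As a reminder of the inner iteration being parallelized, here is a minimal Newton-Raphson DC solve for a one-node example circuit (voltage source, series resistor and diode); the BBD decomposition and the block-parallel solution are not shown, and the component values are illustrative assumptions.

    # Minimal sketch: damped Newton-Raphson iteration on a single nonlinear node.
    import math

    VS, R = 5.0, 1e3          # source voltage [V], series resistance [ohm]
    IS, VT = 1e-12, 0.02585   # diode saturation current [A], thermal voltage [V]

    def f(v):   # KCL residual at the diode node
        return (VS - v) / R - IS * (math.exp(v / VT) - 1.0)

    def df(v):  # Jacobian (1x1)
        return -1.0 / R - (IS / VT) * math.exp(v / VT)

    v = 0.6                    # initial guess
    for _ in range(50):
        step = -f(v) / df(v)
        v += max(min(step, 0.1), -0.1)   # simple step limiting for robustness
        if abs(step) < 1e-12:
            break
    print(f"diode node voltage: {v:.6f} V")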
3D shape decomposition and comparison for gallbladder modeling
NASA Astrophysics Data System (ADS)
Huang, Weimin; Zhou, Jiayin; Liu, Jiang; Zhang, Jing; Yang, Tao; Su, Yi; Law, Gim Han; Chui, Chee Kong; Chang, Stephen
2011-03-01
This paper presents an approach to gallbladder shape comparison by using 3D shape modeling and decomposition. The gallbladder models can be used for shape anomaly analysis and model comparison and selection in image guided robotic surgical training, especially for laparoscopic cholecystectomy simulation. The 3D shape of a gallbladder is first represented as a surface model, reconstructed from the contours segmented in CT data by a scheme of propagation based voxel learning and classification. To better extract the shape feature, the surface mesh is further down-sampled by a decimation filter and smoothed by a Taubin algorithm, followed by applying an advancing front algorithm to further enhance the regularity of the mesh. Multi-scale curvatures are then computed on the regularized mesh for the robust saliency landmark localization on the surface. The shape decomposition is proposed based on the saliency landmarks and the concavity, measured by the distance from the surface point to the convex hull. With a given tolerance the 3D shape can be decomposed and represented as 3D ellipsoids, which reveal the shape topology and anomaly of a gallbladder. The features based on the decomposed shape model are proposed for gallbladder shape comparison, which can be used for new model selection. We have collected 19 sets of abdominal CT scan data with gallbladders, some shown in normal shape and some in abnormal shapes. The experiments have shown that the decomposed shapes reveal important topology features.
Latent feature decompositions for integrative analysis of multi-platform genomic data
Gregory, Karl B.; Momin, Amin A.; Coombes, Kevin R.; Baladandayuthapani, Veerabhadran
2015-01-01
Increased availability of multi-platform genomics data on matched samples has sparked research efforts to discover how diverse molecular features interact both within and between platforms. In addition, simultaneous measurements of genetic and epigenetic characteristics illuminate the roles their complex relationships play in disease progression and outcomes. However, integrative methods for diverse genomics data are faced with the challenges of ultra-high dimensionality and the existence of complex interactions both within and between platforms. We propose a novel modeling framework for integrative analysis based on decompositions of the large number of platform-specific features into a smaller number of latent features. Subsequently we build a predictive model for clinical outcomes accounting for both within- and between-platform interactions based on Bayesian model averaging procedures. Principal components, partial least squares and non-negative matrix factorization as well as sparse counterparts of each are used to define the latent features, and the performance of these decompositions is compared both on real and simulated data. The latent feature interactions are shown to preserve interactions between the original features and not only aid prediction but also allow explicit selection of outcome-related features. The methods are motivated by, and applied to, a glioblastoma multiforme dataset from The Cancer Genome Atlas to predict patient survival times integrating gene expression, microRNA, copy number and methylation data. For the glioblastoma data, we find a high concordance between our selected prognostic genes and genes with known associations with glioblastoma. In addition, our model discovers several relevant cross-platform interactions such as copy number variation associated gene dosing and epigenetic regulation through promoter methylation. On simulated data, we show that our proposed method successfully incorporates interactions within and between genomic platforms to aid accurate prediction and variable selection. Our methods perform best when principal components are used to define the latent features. PMID:26146492
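The latent-feature idea can be sketched as follows: each platform is compressed with PCA and a linear model with pairwise interaction terms predicts the outcome. This is a simplified stand-in; the Bayesian model averaging and the sparse decompositions described in the paper are not reproduced.

    # Minimal sketch: PCA latent features per platform, then a linear model with
    # within- and between-platform pairwise interactions.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import PolynomialFeatures

    def latent_feature_model(platforms, y, n_components=3):
        """platforms: list of (samples x features) arrays measured on matched samples."""
        latents = [PCA(n_components=n_components).fit_transform(X) for X in platforms]
        Z = np.hstack(latents)                              # platform-specific latent features
        ZI = PolynomialFeatures(degree=2, interaction_only=True,
                                include_bias=False).fit_transform(Z)
        return LinearRegression().fit(ZI, y)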
Yang, Song; Du, Wenguang; Shi, Pengzheng; Shangguan, Ju; Liu, Shoujun; Zhou, Changhai; Chen, Peng; Zhang, Qian; Fan, Huiling
2016-01-01
Nickel laterites cannot be effectively used in physical methods because of their poor crystallinity and fine grain size. Na2SO4 is the most efficient additive for grade enrichment and Ni recovery. However, how Na2SO4 affects the selective reduction of laterite ores has not been clearly investigated. This study investigated the decomposition of laterite with and without the addition of Na2SO4 in an argon atmosphere using thermogravimetry coupled with mass spectrometry (TG-MS). Approximately 25 mg of samples with 20 wt% Na2SO4 was pyrolyzed under a 100 ml/min Ar flow at a heating rate of 10°C/min from room temperature to 1300°C. The kinetic study was based on derivative thermogravimetric (DTG) curves. The evolution of the pyrolysis gas composition was detected by mass spectrometry, and the decomposition products were analyzed by X-ray diffraction (XRD). The decomposition behavior of laterite with the addition of Na2SO4 was similar to that of pure laterite below 800°C during the first three stages. However, in the fourth stage, the dolomite decomposed at 897°C, which is approximately 200°C lower than the decomposition of pure laterite. In the last stage, the laterite decomposed and emitted SO2 in the presence of Na2SO4 with an activation energy of 91.37 kJ/mol. The decomposition of laterite with and without the addition of Na2SO4 can be described by one first-order reaction. Moreover, the use of Na2SO4 as the modification agent can reduce the activation energy of laterite decomposition; thus, the reaction rate can be accelerated, and the reaction temperature can be markedly reduced. PMID:27333072
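For readers wanting to reproduce a first-order kinetic fit of this kind, the sketch below extracts an activation energy and a pre-exponential factor from TGA conversion data with the Coats-Redfern linearization; it is a generic stand-in, not the DTG-based analysis used in the study, and the heating-rate argument is an assumption of the sketch.

    # Minimal sketch: Coats-Redfern fit for a first-order (n = 1) decomposition.
    import numpy as np

    R_GAS = 8.314  # J/(mol K)

    def coats_redfern_first_order(T_kelvin, alpha, beta_K_per_s):
        """Return (E [J/mol], A [1/s]) from temperature and conversion arrays."""
        T = np.asarray(T_kelvin, dtype=float)
        a = np.clip(np.asarray(alpha, dtype=float), 1e-6, 1 - 1e-6)
        y = np.log(-np.log(1.0 - a) / T**2)          # left-hand side of Coats-Redfern
        slope, intercept = np.polyfit(1.0 / T, y, 1)
        E = -slope * R_GAS                            # slope = -E/R
        A = np.exp(intercept) * beta_K_per_s * E / R_GAS
        return E, A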
McIntosh, Craig S; Dadour, Ian R; Voss, Sasha C
2017-05-01
The rate of decomposition and insect succession onto decomposing pig carcasses were investigated following burning of the carcasses. Ten pig carcasses (40-45 kg) were exposed to insect activity during autumn (March-April) in Western Australia. Five replicates were burnt to a degree described by the Crow-Glassman Scale (CGS) level #2, while five carcasses were left unburnt as controls. Burning greatly accelerated decomposition in contrast to the unburnt carcasses. Physical modifications following burning, such as skin discolouration, splitting of abdominal tissue and leathery consolidation of skin, eliminated evidence of bloat and altered the microambient temperatures associated with carcasses throughout decomposition. Insect species identified on carcasses were consistent between treatment groups; however, a statistically significant difference in insect succession onto remains was evident between treatments (PERMANOVA F (1, 224) = 14.23, p < 0.01) during an 8-day period that corresponds with the wet stage of decomposition. Differences were noted in the arrival time of late colonisers (Coleoptera) and the development of colonising insects between treatment groups. Differences in the duration of decomposition stages and insect assemblages indicate that burning has an effect on both rate of decomposition and insect succession. The findings presented here provide baseline data for entomological casework in criminal investigations involving burnt remains.
Substrate quality alters the microbial mineralization of added substrate and soil organic carbon
NASA Astrophysics Data System (ADS)
Jagadamma, S.; Mayes, M. A.; Steinweg, J. M.; Schaeffer, S. M.
2014-09-01
The rate and extent of decomposition of soil organic carbon (SOC) is dependent, among other factors, on substrate chemistry and microbial dynamics. Our objectives were to understand the influence of substrate chemistry on microbial decomposition of carbon (C), and to use model fitting to quantify differences in pool sizes and mineralization rates. We conducted an incubation experiment for 270 days using four uniformly labeled 14C substrates (glucose, starch, cinnamic acid and stearic acid) on four different soils (a temperate Mollisol, a tropical Ultisol, a sub-arctic Andisol, and an arctic Gelisol). The 14C labeling enabled us to separate CO2 respired from added substrates and from native SOC. Microbial gene copy numbers were quantified at days 4, 30 and 270 using quantitative polymerase chain reaction (qPCR). Substrate C respiration was always higher for glucose than other substrates. Soils with cinnamic and stearic acid lost more native SOC than glucose- and starch-amended soils. Cinnamic and stearic acid amendments also exhibited higher fungal gene copy numbers at the end of incubation compared to unamended soils. We found that 270 days were sufficient to model the decomposition of simple substrates (glucose and starch) with three pools, but were insufficient for more complex substrates (cinnamic and stearic acid) and native SOC. This study reveals that substrate quality exerts considerable control on the microbial decomposition of newly added and native SOC, and demonstrates the need for multi-year incubation experiments to constrain decomposition parameters for the most recalcitrant fractions of SOC and complex substrates.
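A minimal sketch of fitting a three-pool first-order decay model to the fraction of added carbon remaining over an incubation is shown below; the data points and starting values are synthetic placeholders, not results from the study.

    # Minimal sketch: three-pool exponential decay fit with scipy's curve_fit.
    import numpy as np
    from scipy.optimize import curve_fit

    def three_pool(t, f1, f2, k1, k2, k3):
        f3 = 1.0 - f1 - f2                  # pool fractions sum to one
        return f1 * np.exp(-k1 * t) + f2 * np.exp(-k2 * t) + f3 * np.exp(-k3 * t)

    # days of incubation and (synthetic) fraction of added 14C remaining
    t_obs = np.array([0, 4, 15, 30, 60, 120, 270], dtype=float)
    c_obs = np.array([1.00, 0.72, 0.55, 0.47, 0.42, 0.38, 0.35])

    p0 = [0.3, 0.4, 0.5, 0.05, 0.001]       # initial guesses: f1, f2, k1, k2, k3
    popt, _ = curve_fit(three_pool, t_obs, c_obs, p0=p0)
    print(dict(zip(["f1", "f2", "k1", "k2", "k3"], popt)))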
Xu, Li-Ya; Yang, Wan-Qin; Li, Han; Ni, Xiang-Yin; He, Jie; Wu, Fu-Zhong
2014-11-01
Seasonal snow cover may change the characteristics of freezing, leaching and freeze-thaw cycles under the scenario of climate change, and thus play important roles in the dynamics of water soluble and organic solvent soluble components during foliar litter decomposition in the alpine forest. Therefore, a field litterbag experiment was conducted in an alpine forest in western Sichuan, China. Foliar litterbags of typical tree species (birch, cypress, larch and fir) and shrub species (willow and azalea) were placed on the forest floor under different snow cover thicknesses (deep snow, medium snow, thin snow and no snow). The litterbags were sampled at the snow formation stage, snow cover stage and snow melting stage in winter. The results showed that the content of water soluble components of the six foliar litters decreased at the snow formation and snow melting stages, but increased at the snow cover stage as litter decomposition proceeded in the winter. Except that the content of organic solvent soluble components of azalea foliar litter increased at the snow cover stage, the content of organic solvent soluble components of the other five foliar litters kept a continuously decreasing tendency over the winter. Compared with the content of organic solvent soluble components, the content of water soluble components was affected more strongly by snow cover thickness, especially at the snow formation and snow cover stages. Compared with the thicker snow covers, the thin snow cover promoted the decrease of the water soluble component contents of willow and azalea foliar litter and restrained the decrease of the water soluble component content of cypress foliar litter. Few changes in the content of water soluble components of birch, fir and larch foliar litter were observed under the different thicknesses of snow cover. The results suggested that the effects of snow cover on the contents of water soluble and organic solvent soluble components during litter decomposition would be controlled by litter quality.
Wood, Camila Timm; Schlindwein, Carolina Casco Duarte; Soares, Geraldo Luiz Gonçalves; Araujo, Paula Beatriz
2012-01-01
The goal of this study was to compare the feeding rates of Balloniscus sellowii on leaves of different decomposition stages according to their phenolic and flavonoid content. Leaves from the visually most abundant plants were offered to isopods collected from the same source site. Schinus terebinthifolius, the plant species consumed at the highest rate, was used to verify feeding rates at different decomposition stages. Green leaves were left to decompose for one, two, or three months, and then were offered to isopods. The total phenolic and flavonoid contents were determined for all decomposition stages. Consumption and egestion rates increased throughout decomposition, were highest for two-month-old leaves, and decreased again in the third month. The assimilation rate was highest for green leaves. The mode time of passage through the gut was two hours for all treatments. Ingestion of leaves occurred after two or three days for green leaves, and on the same day for one-, two- and three-month-old leaves. The speed of passage of leaves with different decomposition stages through the gut does not differ significantly when animals are fed continuously. However, it is possible that the amount retained in the gut during starvation differs depending on food quality. The digestibility value was corrected using a second food source to empty the gut of previously ingested food, so that all of the food from the experiment was egested. The digestibility value was highest for green leaves, whereas it was approximately 20% for all other stages. This was expected given that digestibility declines during decomposition as the metabolite content of the leaves decreases. The phenolic content was highest in the green leaves and lowest in three-month-old leaves. The flavonoid content was highest in green leaves and lowest after two months of decomposition. Animals ingested more phenolics when consumption was highest. The estimated amount of ingested flavonoids followed the same trend as assimilation rate. Flavonoids accounted for a large portion of total phenolics, and the estimated amount of flavonoids consumed was similar for one-, two- and three-month-old leaves. Our results suggest that the high phenolic and flavonoid concentrations in green leaves are feeding deterrents. Isopods may discriminate among concentrations of flavonoids and modify their consumption rates to maintain their intake of flavonoids when ingesting leaves with lower flavonoid content. PMID:22536111
Xia, Lei; Wu, Fu-Zhong; Yang, Wan-Qin; Tan, Bo
2012-02-01
In order to quantify the contribution of soil fauna to the decomposition of birch (Betula albosinensis) leaf litter in subalpine forests in western Sichuan of Southwest China during the freeze-thaw season, a field experiment with different mesh sizes (0.02, 0.125, 1 and 3 mm) of litterbags was conducted in a representative birch-fir (Abies faxoniana) forest to investigate the mass loss rate of the birch leaf litter from 26 October, 2010 to 18 April, 2011, and the contributions of micro-, meso- and macro-fauna to the decomposition of the leaf litter. Over the freeze-thaw season, 11.8%, 13.2%, 15.4% and 19.5% of the mass loss were detected in the litterbags with 0.02, 0.125, 1 and 3 mm mesh sizes, respectively. The total contribution of soil fauna to the litter decomposition accounted for 39.5% of the mass loss, and the taxa and individual relative density of the soil fauna in the litterbags showed a similar variation trend to that of the mass loss rate. The contribution rate of soil fauna to the leaf litter mass loss showed the order micro- < meso- < macro-fauna, with the highest contributions of micro-fauna (7.9%), meso-fauna (11.9%), and macro-fauna (22.7%) at the onset-of-freezing stage, the deeply frozen stage, and the thawing stage, respectively. The results demonstrated that soil fauna played an important role in litter decomposition in the subalpine forests of western Sichuan during the freeze-thaw season.
Process for remediation of plastic waste
Pol, Vilas G [Westmont, IL; Thiyagarajan, Pappannan [Germantown, MD
2012-04-10
A single-step process for degrading plastic waste by converting the plastic waste into carbonaceous products via thermal decomposition of the plastic waste by placing the plastic waste into a reactor, heating the plastic waste under an inert or air atmosphere until a temperature of 700 °C is achieved, allowing the reactor to cool down, and recovering the resulting decomposition products therefrom. The decomposition products that this process yields are carbonaceous materials, and more specifically egg-shaped and spherical-shaped solid carbons. Additionally, in the presence of a transition metal compound, this thermal decomposition process produces multi-walled carbon nanotubes.
Optimizing LX-17 Thermal Decomposition Model Parameters with Evolutionary Algorithms
NASA Astrophysics Data System (ADS)
Moore, Jason; McClelland, Matthew; Tarver, Craig; Hsu, Peter; Springer, H. Keo
2017-06-01
We investigate and model the cook-off behavior of LX-17 because this knowledge is critical to understanding system response in abnormal thermal environments. Thermal decomposition of LX-17 has been explored in conventional ODTX (One-Dimensional Time-to-eXplosion), PODTX (ODTX with pressure-measurement), TGA (thermogravimetric analysis), and DSC (differential scanning calorimetry) experiments using varied temperature profiles. These experimental data are the basis for developing multiple reaction schemes with coupled mechanics in LLNL's multi-physics hydrocode, ALE3D (Arbitrary Lagrangian-Eulerian code in 2D and 3D). We employ evolutionary algorithms to optimize reaction rate parameters on high performance computing clusters. Once experimentally validated, this model will be scalable to a number of applications involving LX-17 and can be used to develop more sophisticated experimental methods. Furthermore, the optimization methodology developed herein should be applicable to other high explosive materials. This work was performed under the auspices of the U.S. DOE by LLNL under contract DE-AC52-07NA27344. LLNS, LLC.
NASA Astrophysics Data System (ADS)
van Buren, Simon; Hertle, Ellen; Figueiredo, Patric; Kneer, Reinhold; Rohlfs, Wilko
2017-11-01
Frost formation is a common, often undesired phenomenon in heat exchangers such as air coolers. Thus, air coolers have to be defrosted periodically, causing significant energy consumption. For design and optimization, prediction of defrosting by a CFD tool is desired. This paper presents a one-dimensional transient model approach suitable to be used as a zero-dimensional wall function in CFD for modeling the defrost process at the fin and tube interfaces. In accordance with previous work, a multi-stage defrost model is introduced (e.g. [1, 2]). In the first instance, the multi-stage model is implemented and validated using MATLAB. The defrost process of a one-dimensional frost segment is investigated. Fixed boundary conditions are provided at the frost interfaces. The simulation results verify the plausibility of the designed model. The evaluation of the simulated defrost process shows the expected convergent behavior of the three-stage sequence.
Florentino B. De la Cruz; Daniel J. Yelle; Hanna S. Gracz; Morton A. Barlaz
2014-01-01
The anaerobic decomposition of plant biomass is an important aspect of global organic carbon cycling. While the anaerobic metabolism of cellulose and hemicelluloses to methane and carbon dioxide are well-understood, evidence for the initial stages of lignin decomposition is fragmentary. The objective of this study was to look for evidence of chemical transformations of...
Microbial ecological succession during municipal solid waste decomposition.
Staley, Bryan F; de Los Reyes, Francis L; Wang, Ling; Barlaz, Morton A
2018-04-28
The decomposition of landfilled refuse proceeds through distinct phases, each defined by varying environmental factors such as volatile fatty acid concentration, pH, and substrate quality. The succession of microbial communities in response to these changing conditions was monitored in a laboratory-scale simulated landfill to minimize measurement difficulties experienced at field scale. 16S rRNA gene sequences retrieved at separate stages of decomposition showed significant succession in both Bacteria and methanogenic Archaea. A majority of Bacteria sequences in landfilled refuse belong to members of the phylum Firmicutes, while Proteobacteria levels fluctuated and Bacteroidetes levels increased as decomposition proceeded. Roughly 44% of archaeal sequences retrieved under conditions of low pH and high acetate were strictly hydrogenotrophic (Methanomicrobiales, Methanobacteriales). Methanosarcina was present at all stages of decomposition. Correspondence analysis showed bacterial population shifts were attributed to carboxylic acid concentration and solids hydrolysis, while archaeal populations were affected to a higher degree by pH. T-RFLP analysis showed specific taxonomic groups responded differently and exhibited unique responses during decomposition, suggesting that species composition and abundance within Bacteria and Archaea are highly dynamic. This study shows landfill microbial demographics are highly variable across both spatial and temporal transects.
Controlled-source seismic interferometry with one way wave fields
NASA Astrophysics Data System (ADS)
van der Neut, J.; Wapenaar, K.; Thorbecke, J. W.
2008-12-01
In Seismic Interferometry we generally cross-correlate registrations at two receiver locations and sum over an array of sources to retrieve a Green's function as if one of the receiver locations hosts a (virtual) source and the other receiver location hosts an actual receiver. One application of this concept is to redatum an area of surface sources to a downhole receiver location, without requiring information about the medium between the sources and receivers, thus providing an effective tool for imaging below complex overburden, which is also known as the Virtual Source method. We demonstrate how elastic wavefield decomposition can be effectively combined with controlled-source Seismic Interferometry to generate virtual sources in a downhole receiver array that radiate only down- or upgoing P- or S-waves with receivers sensing only down- or upgoing P- or S-waves. For this purpose we derive exact Green's matrix representations from a reciprocity theorem for decomposed wavefields. Required is the deployment of multi-component sources at the surface and multi-component receivers in a horizontal borehole. The theory is supported with a synthetic elastic model, where redatumed traces are compared with those of a directly modeled reflection response, generated by placing active sources at the virtual source locations and applying elastic wavefield decomposition on both source and receiver side.
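The basic interferometric step, cross-correlating two receiver recordings and stacking over sources to approximate the inter-receiver (virtual-source) response, can be sketched as follows; the elastic wavefield decomposition central to the paper is not shown, and the array shapes are assumptions of the sketch.

    # Minimal sketch: cross-correlate receiver B with receiver A per source and stack.
    import numpy as np

    def virtual_source_trace(rec_a, rec_b):
        """rec_a, rec_b: arrays of shape (n_sources, n_samples) at receivers A and B."""
        n = rec_a.shape[1]
        stack = np.zeros(2 * n - 1)
        for a, b in zip(rec_a, rec_b):
            stack += np.correlate(b, a, mode="full")   # correlation for one source
        return stack / rec_a.shape[0]                  # source-averaged correlation gather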
Kuesten, Carla; Bi, Jian
2018-06-03
Conventional drivers of liking analysis was extended with a time dimension into temporal drivers of liking (TDOL) based on functional data analysis methodology and non-additive models for multiple-attribute time-intensity (MATI) data. The non-additive models, which consider both direct effects and interaction effects of attributes to consumer overall liking, include Choquet integral and fuzzy measure in the multi-criteria decision-making, and linear regression based on variance decomposition. Dynamics of TDOL, i.e., the derivatives of the relative importance functional curves were also explored. Well-established R packages 'fda', 'kappalab' and 'relaimpo' were used in the paper for developing TDOL. Applied use of these methods shows that the relative importance of MATI curves offers insights for understanding the temporal aspects of consumer liking for fruit chews.
Zhou, Xuhui; Xu, Xia; Zhou, Guiyao; Luo, Yiqi
2018-02-01
Temperature sensitivity of soil organic carbon (SOC) decomposition is one of the major uncertainties in predicting climate-carbon (C) cycle feedback. Results from previous studies are highly contradictory with old soil C decomposition being more, similarly, or less sensitive to temperature than decomposition of young fractions. The contradictory results are partly from difficulties in distinguishing old from young SOC and their changes over time in the experiments with or without isotopic techniques. In this study, we have conducted a long-term field incubation experiment with deep soil collars (0-70 cm in depth, 10 cm in diameter of PVC tubes) for excluding root C input to examine apparent temperature sensitivity of SOC decomposition under ambient and warming treatments from 2002 to 2008. The data from the experiment were infused into a multi-pool soil C model to estimate intrinsic temperature sensitivity of SOC decomposition and C residence times of three SOC fractions (i.e., active, slow, and passive) using a data assimilation (DA) technique. As active SOC with the short C residence time was progressively depleted in the deep soil collars under both ambient and warming treatments, the residences times of the whole SOC became longer over time. Concomitantly, the estimated apparent and intrinsic temperature sensitivity of SOC decomposition also became gradually higher over time as more than 50% of active SOC was depleted. Thus, the temperature sensitivity of soil C decomposition in deep soil collars was positively correlated with the mean C residence times. However, the regression slope of the temperature sensitivity against the residence time was lower under the warming treatment than under ambient temperature, indicating that other processes also regulated temperature sensitivity of SOC decomposition. These results indicate that old SOC decomposition is more sensitive to temperature than young components, making the old C more vulnerable to future warmer climate. © 2017 John Wiley & Sons Ltd.
Optimal domain decomposition strategies
NASA Technical Reports Server (NTRS)
Yoon, Yonghyun; Soni, Bharat K.
1995-01-01
The primary interest of the authors is in the area of grid generation, in particular, optimal domain decomposition about realistic configurations. A grid generation procedure with optimal blocking strategies has been developed to generate multi-block grids for a circular-to-rectangular transition duct. The focus of this study is the domain decomposition which optimizes solution algorithm/block compatibility based on geometrical complexities as well as the physical characteristics of flow field. The progress realized in this study is summarized in this paper.
Multidisciplinary Multiobjective Optimal Design for Turbomachinery Using Evolutionary Algorithm
NASA Technical Reports Server (NTRS)
2005-01-01
This report summarizes Dr. Lian's efforts toward developing a robust and efficient tool for multidisciplinary and multi-objective optimal design for turbomachinery using evolutionary algorithms. This work consisted of two stages. In the first stage (from July 2003 to June 2004), Dr. Lian focused on building essential capabilities required for the project. More specifically, Dr. Lian worked on two subjects: an enhanced genetic algorithm (GA) and an integrated optimization system with a GA and a surrogate model. In the second stage (from July 2004 to February 2005), Dr. Lian formulated aerodynamic optimization and structural optimization into a multi-objective optimization problem and performed multidisciplinary and multi-objective optimizations on a transonic compressor blade based on the proposed model. Dr. Lian's numerical results showed that the proposed approach can effectively reduce the blade weight and increase the stage pressure ratio in an efficient manner. In addition, the new design was structurally safer than the original design. Five conference papers and three journal papers were published on this topic by Dr. Lian.
NASA Astrophysics Data System (ADS)
Bérubé, Charles L.; Chouteau, Michel; Shamsipour, Pejman; Enkin, Randolph J.; Olivo, Gema R.
2017-08-01
Spectral induced polarization (SIP) measurements are now widely used to infer mineralogical or hydrogeological properties from the low-frequency electrical properties of the subsurface in both mineral exploration and environmental sciences. We present an open-source program that performs fast multi-model inversion of laboratory complex resistivity measurements using Markov-chain Monte Carlo simulation. Using this stochastic method, SIP parameters and their uncertainties may be obtained from the Cole-Cole and Dias models, or from the Debye and Warburg decomposition approaches. The program is tested on synthetic and laboratory data to show that the posterior distribution of a multiple Cole-Cole model is multimodal in particular cases. The Warburg and Debye decomposition approaches yield unique solutions in all cases. It is shown that an adaptive Metropolis algorithm performs faster and is less dependent on the initial parameter values than the Metropolis-Hastings step method when inverting SIP data through the decomposition schemes. There are no advantages in using an adaptive step method for well-defined Cole-Cole inversion. Finally, the influence of measurement noise on the recovered relaxation time distribution is explored. We provide the geophysics community with an open-source platform that can serve as a base for further developments in stochastic SIP data inversion and that may be used to perform parameter analysis with various SIP models.
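A minimal sketch of a single Cole-Cole model and a plain Metropolis-Hastings sampler over its parameters is given below; the adaptive step tuning and the Warburg/Debye decomposition schemes available in the program are not reproduced, and priors and parameter bounds are omitted for brevity.

    # Minimal sketch: Cole-Cole complex resistivity and a basic Metropolis sampler.
    import numpy as np

    def cole_cole(omega, rho0, m, tau, c):
        return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * omega * tau) ** c)))

    def log_likelihood(theta, omega, data, sigma):
        pred = cole_cole(omega, *theta)
        resid = np.concatenate([data.real - pred.real, data.imag - pred.imag])
        return -0.5 * np.sum((resid / sigma) ** 2)

    def metropolis(omega, data, theta0, steps=20000, prop_scale=None, sigma=1.0, seed=0):
        rng = np.random.default_rng(seed)
        theta = np.array(theta0, dtype=float)
        prop_scale = prop_scale if prop_scale is not None else 0.01 * np.abs(theta)
        ll = log_likelihood(theta, omega, data, sigma)
        chain = []
        for _ in range(steps):
            cand = theta + rng.normal(0.0, prop_scale)
            ll_cand = log_likelihood(cand, omega, data, sigma)
            if np.log(rng.uniform()) < ll_cand - ll:   # accept/reject step
                theta, ll = cand, ll_cand
            chain.append(theta.copy())
        return np.array(chain)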
Pedersen, Kristine S. K.; Aanen, Duur K.
2017-01-01
Fungus-growing termites rely on mutualistic fungi of the genus Termitomyces and gut microbes for plant biomass degradation. Due to a certain degree of symbiont complementarity, this tripartite symbiosis has evolved as a complex bioreactor, enabling decomposition of nearly any plant polymer, likely contributing to the success of the termites as one of the main plant decomposers in the Old World. In this study, we evaluated which plant polymers are decomposed and which enzymes are active during the decomposition process in two major genera of fungus-growing termites. We found a diversity of active enzymes at different stages of decomposition and a consistent decrease in plant components during the decomposition process. Furthermore, our findings are consistent with the hypothesis that termites transport enzymes from the older mature parts of the fungus comb through young worker guts to freshly inoculated plant substrate. However, preliminary fungal RNA sequencing (RNA-seq) analyses suggest that this likely transport is supplemented with enzymes produced in situ. Our findings support that the maintenance of an external fungus comb, inoculated with an optimal mixture of plant material, fungal spores, and enzymes, is likely the key to the extraordinarily efficient plant decomposition in fungus-growing termites. IMPORTANCE Fungus-growing termites have a substantial ecological footprint in the Old World (sub)tropics due to their ability to decompose dead plant material. Through the establishment of an elaborate plant biomass inoculation strategy and through fungal and bacterial enzyme contributions, this farming symbiosis has become an efficient and versatile aerobic bioreactor for plant substrate conversion. Since little is known about what enzymes are expressed and where they are active at different stages of the decomposition process, we used enzyme assays, transcriptomics, and plant content measurements to shed light on how this decomposition of plant substrate is so effectively accomplished. PMID:29269491
da Costa, Rafael R; Hu, Haofu; Pilgaard, Bo; Vreeburg, Sabine M E; Schückel, Julia; Pedersen, Kristine S K; Kračun, Stjepan K; Busk, Peter K; Harholt, Jesper; Sapountzis, Panagiotis; Lange, Lene; Aanen, Duur K; Poulsen, Michael
2018-03-01
Fungus-growing termites rely on mutualistic fungi of the genus Termitomyces and gut microbes for plant biomass degradation. Due to a certain degree of symbiont complementarity, this tripartite symbiosis has evolved as a complex bioreactor, enabling decomposition of nearly any plant polymer, likely contributing to the success of the termites as one of the main plant decomposers in the Old World. In this study, we evaluated which plant polymers are decomposed and which enzymes are active during the decomposition process in two major genera of fungus-growing termites. We found a diversity of active enzymes at different stages of decomposition and a consistent decrease in plant components during the decomposition process. Furthermore, our findings are consistent with the hypothesis that termites transport enzymes from the older mature parts of the fungus comb through young worker guts to freshly inoculated plant substrate. However, preliminary fungal RNA sequencing (RNA-seq) analyses suggest that this likely transport is supplemented with enzymes produced in situ. Our findings support that the maintenance of an external fungus comb, inoculated with an optimal mixture of plant material, fungal spores, and enzymes, is likely the key to the extraordinarily efficient plant decomposition in fungus-growing termites. IMPORTANCE Fungus-growing termites have a substantial ecological footprint in the Old World (sub)tropics due to their ability to decompose dead plant material. Through the establishment of an elaborate plant biomass inoculation strategy and through fungal and bacterial enzyme contributions, this farming symbiosis has become an efficient and versatile aerobic bioreactor for plant substrate conversion. Since little is known about what enzymes are expressed and where they are active at different stages of the decomposition process, we used enzyme assays, transcriptomics, and plant content measurements to shed light on how this decomposition of plant substrate is so effectively accomplished. Copyright © 2018 da Costa et al.
Mancuso, Katherine; Mauck, Matthew C; Kuchenbecker, James A; Neitz, Maureen; Neitz, Jay
2010-01-01
In 1993, DeValois and DeValois proposed a 'multi-stage color model' to explain how the cortex is ultimately able to deconfound the responses of neurons receiving input from three cone types in order to produce separate red-green and blue-yellow systems, as well as segregate luminance percepts (black-white) from color. This model extended the biological implementation of Hurvich and Jameson's Opponent-Process Theory of color vision, a two-stage model encompassing the three cone types combined in a later opponent organization, which has been the accepted dogma in color vision. DeValois' model attempts to satisfy the long-remaining question of how the visual system separates luminance information from color, but what are the cellular mechanisms that establish the complicated neural wiring and higher-order operations required by the Multi-stage Model? During the last decade and a half, results from molecular biology have shed new light on the evolution of primate color vision, thus constraining the possibilities for the visual circuits. The evolutionary constraints allow for an extension of DeValois' model that is more explicit about the biology of color vision circuitry, and it predicts that human red-green colorblindness can be cured using a retinal gene therapy approach to add the missing photopigment, without any additional changes to the post-synaptic circuitry.
Hu, Mian; Chen, Zhihua; Guo, Dabin; Liu, Cuixia; Xiao, Bo; Hu, Zhiquan; Liu, Shiming
2015-02-01
The pyrolysis process of two microalgae, Chlorella pyrenoidosa (CP) and bloom-forming cyanobacteria (CB), was examined by thermo-gravimetry to investigate their thermal decomposition behavior under non-isothermal conditions. It was found that the pyrolysis of both microalgae consists of three stages and that stage II is the major mass-reduction stage, with mass losses of 70.69% for CP and 64.43% for CB, respectively. The pyrolysis kinetics of both microalgae was further studied using a single-step global model (SSGM) and a distributed activation energy model (DAEM). The mean apparent activation energy of CP and CB in the SSGM was calculated as 143.71 and 173.46 kJ/mol, respectively. However, the SSGM was not suitable for modeling the pyrolysis kinetics of both microalgae due to the mechanism change during conversion. The DAEM with 200 first-order reactions showed an excellent fit between simulated data and experimental results. Copyright © 2014 Elsevier Ltd. All rights reserved.
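The distributed activation energy model mentioned above can be evaluated numerically as sketched below, using a Gaussian distribution of activation energies; the pre-exponential factor, the mean activation energy and its spread are illustrative values, not the parameters reported for the microalgae.

    # Minimal sketch: DAEM conversion alpha(T) for first-order reactions with a
    # Gaussian distribution of activation energies, under a constant heating rate.
    import numpy as np

    R_GAS = 8.314            # J/(mol K)
    A = 1e13                 # pre-exponential factor [1/s] (illustrative)
    E0, SIGMA = 180e3, 25e3  # mean and std of activation energy [J/mol] (illustrative)
    BETA = 10.0 / 60.0       # heating rate: 10 K/min expressed in K/s

    def daem_conversion(T_kelvin, n_e=200):
        T = np.asarray(T_kelvin, dtype=float)
        E = np.linspace(E0 - 4 * SIGMA, E0 + 4 * SIGMA, n_e)
        fE = np.exp(-0.5 * ((E - E0) / SIGMA) ** 2) / (SIGMA * np.sqrt(2 * np.pi))
        alpha = np.empty_like(T)
        for i, Ti in enumerate(T):
            Tgrid = np.linspace(300.0, Ti, 400)
            # inner temperature integral of exp(-E/RT') for every E on the grid
            inner = np.trapz(np.exp(-E[:, None] / (R_GAS * Tgrid[None, :])), Tgrid, axis=1)
            surviving = np.exp(-(A / BETA) * inner)          # unreacted fraction per E
            alpha[i] = 1.0 - np.trapz(surviving * fE, E)     # integrate over f(E)
        return alpha

    print(daem_conversion([500.0, 700.0, 900.0]))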
Decomposition of toluene in a steady-state atmospheric-pressure glow discharge
NASA Astrophysics Data System (ADS)
Trushkin, A. N.; Grushin, M. E.; Kochetov, I. V.; Trushkin, N. I.; Akishev, Yu. S.
2013-02-01
Results are presented from experimental studies of the decomposition of toluene (C6H5CH3) in a polluted air flow by means of a steady-state atmospheric-pressure glow discharge at different water vapor contents in the working gas. The experimental results on the degree of C6H5CH3 removal are compared with the results of computer simulations conducted in the framework of the developed kinetic model of plasma chemical decomposition of toluene in the N2:O2:H2O gas mixture. A substantial influence of the gas flow humidity on toluene decomposition in the atmospheric-pressure glow discharge is demonstrated. The main mechanisms of the influence of humidity on C6H5CH3 decomposition are determined. The existence of two stages in the process of toluene removal, which differ in their duration and in the intensity of plasma chemical decomposition of C6H5CH3, is established. Based on the results of computer simulations, the composition of the products of plasma chemical reactions at the output of the reactor is analyzed as a function of the specific energy deposition and gas flow humidity. The existence of a catalytic cycle, in which the hydroxyl radical OH acts as a catalyst and which substantially accelerates the recombination of oxygen atoms and suppresses ozone generation when the plasma-forming gas contains water vapor, is established.
NASA Astrophysics Data System (ADS)
Jain, Shobhit; Tiso, Paolo; Haller, George
2018-06-01
We apply two recently formulated mathematical techniques, Slow-Fast Decomposition (SFD) and Spectral Submanifold (SSM) reduction, to a von Kármán beam with geometric nonlinearities and viscoelastic damping. SFD identifies a global slow manifold in the full system which attracts solutions at rates faster than typical rates within the manifold. An SSM, the smoothest nonlinear continuation of a linear modal subspace, is then used to further reduce the beam equations within the slow manifold. This two-stage, mathematically exact procedure results in a drastic reduction of the finite-element beam model to a one-degree-of-freedom nonlinear oscillator. We also introduce the technique of spectral quotient analysis, which gives the number of modes relevant for reduction as output rather than input to the reduction process.
ERIC Educational Resources Information Center
Chadli, Abdelhafid; Bendella, Fatima; Tranvouez, Erwan
2015-01-01
In this paper we present an Agent-based evaluation approach in the context of Multi-agent simulation learning systems. Our evaluation model is based on a two-stage assessment approach: (1) a Distributed skill evaluation combining agents and fuzzy sets theory; and (2) a Negotiation based evaluation of students' performance during a training…
Influence of growth conditions on subsequent submonolayer oxide decomposition on Si(111)
NASA Astrophysics Data System (ADS)
Shklyaev, A. A.; Aono, Masakazu; Suzuki, Takanori
1996-10-01
The decomposition kinetics of oxide with a coverage between 0.1 and 0.5 ML, grown by oxidation of the Si(111)-7×7 surface at temperatures between 550 and 800 °C for oxygen pressures (Pox) between 3×10⁻⁸ and 2×10⁻⁶ Torr, is investigated with optical second-harmonic generation. Through the analysis of the pressure dependence of the initial oxide-growth rate, we separate the conditions for slow oxide growth at Pox near Ptr(T) and for rapid oxide growth at Pox>3Ptr(T), where Ptr(T) is the transition pressure to the Si-etching regime without oxide growth. For the rapidly grown oxide, the oxide decomposition rate decreases with increasing oxide coverage, whereas the activation energy of about 3 eV does not change significantly. In the case when the oxide is desorbed at the same temperature as was used for oxide growth, the oxide decomposition is described by an apparent activation energy of 1.5 eV. For the slowly grown oxide of 0.1 ML coverage, the oxide desorption kinetics shows a rapid decomposition stage followed by a slow stage. For the slowly grown oxide of 0.3 ML coverage, the slow stage with a large activation energy of 4.1 eV becomes dominant in the latter part of decomposition. The dependence of the desorption kinetics on the oxide-growth conditions described here could be a reason for the scattering of the kinetic parameters in the literature for O2 interaction with silicon at elevated temperatures.
Vasconcelos, Simao D.; Cruz, Tadeu M.; Salgado, Roberta L.; Thyssen, Patricia J.
2013-01-01
This study aimed to provide the first checklist of forensically-important dipteran species in a rainforest environment in Northeastern Brazil, a region exposed to high rates of homicides. Using a decomposing pig, Sus scrofa L. (Artiodactyla: Suidae), carcass as a model, adult flies were collected immediately after death and in the early stages of carcass decomposition. To confirm actual colonization of the carcass, insects that completed their larval development on the resource were also collected and reared until adult stage. A diverse assemblage of dipterans composed of at least 28 species from seven families with necrophagous habits was observed within minutes after death. Besides Calliphoridae and Sarcophagidae, species from forensically-important families such as Phoridae, Anthomyiidae, and Fanniidae were also registered. Eleven species were shown to complete their development on the carcass. The majority of individuals emerged from larvae collected at the dry stage of decomposition. Hemilucilia segmentaria Fabricius (Diptera: Calliphoridae), H. semidiaphana (Rondani), and Ophyra chalcogaster (Wiedemann) (Muscidae) were the dominant species among the colonizers, which supports their importance as forensic evidence in Brazil. PMID:24787899
NASA Astrophysics Data System (ADS)
Li, Hongguang; Li, Ming; Li, Cheng; Li, Fucai; Meng, Guang
2017-09-01
This paper focuses on the multi-fault decoupling of a turbo-expander rotor system using Differential-based Ensemble Empirical Mode Decomposition (DEEMD). DEEMD is an improved version of DEMD that resolves the problem of mode mixing. The nonlinear behaviors of the turbo-expander, considering the temperature gradient, with crack, rub-impact and pedestal looseness faults are investigated respectively, so that a baseline for multi-fault decoupling can be established. DEEMD is then applied to the vibration signals of the rotor system with coupling faults acquired by numerical simulation, and the results indicate that DEEMD can successfully decouple the coupling faults and is more efficient than EEMD. DEEMD is also applied to the vibration signal of the misalignment fault coupled with a rub-impact fault obtained during the adjustment of the experimental system. The conclusion shows that DEEMD can decompose the practical multi-fault signal, and the industrial prospect of DEEMD is verified as well.
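Since DEEMD itself is not publicly packaged, the sketch below uses the EEMD implementation from the open-source PyEMD package as a stand-in, applied to a synthetic signal that loosely mimics a rotor-speed component plus an intermittent impact-like burst; it illustrates only the decomposition step, not the fault baselines or the decoupling analysis.

    # Minimal sketch: EEMD (via PyEMD) as a stand-in for DEEMD on a synthetic signal.
    import numpy as np
    from PyEMD import EEMD

    t = np.linspace(0.0, 1.0, 2048)
    signal = (np.sin(2 * np.pi * 30 * t)                        # rotor-speed component
              + 0.4 * np.sin(2 * np.pi * 180 * t)               # higher-frequency content
              + 0.3 * (t > 0.5) * np.sin(2 * np.pi * 600 * t))  # intermittent burst

    eemd = EEMD(trials=100)
    imfs = eemd.eemd(signal, t)       # rows are IMFs, ordered from fast to slow
    print("number of IMFs:", imfs.shape[0])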
NASA Astrophysics Data System (ADS)
Li, D.; Fang, N. Z.
2017-12-01
The Dallas-Fort Worth Metroplex (DFW) has a population of over 7 million that depends on many water supply reservoirs. Reservoir inflow plays a vital role in the water supply decision-making process and in long-term strategic planning for the region. This paper demonstrates a method of utilizing deep learning algorithms and a multi-general circulation model (GCM) platform to forecast reservoir inflow for three reservoirs within the DFW: Eagle Mountain Lake, Lake Benbrook and Lake Arlington. Ensemble empirical mode decomposition was first employed to extract the features, which were then represented by deep belief networks (DBNs). The first 75 years of the historical data (1940-2015) were used to train the model, while the last 2 years of the data (2016-2017) were used for model validation. The weights of each DBN gained from the training process were then applied to establish a neural network (NN) that was able to forecast reservoir inflow. Feature predictors used for the forecasting model were generated from weather forecast results of the downscaled multi-GCM platform for the North Texas region. By comparing the root mean square error (RMSE) and mean bias error (MBE) with the observed data, the authors found that deep learning with a downscaled multi-GCM platform is an effective approach to reservoir inflow forecasting.
Analyzing gene expression time-courses based on multi-resolution shape mixture model.
Li, Ying; He, Ye; Zhang, Yu
2016-11-01
Biological processes are dynamic molecular processes that unfold over time. Time-course gene expression experiments provide opportunities to explore patterns of gene expression change over time and to understand the dynamic behavior of gene expression, which is crucial for studying the development and progression of biology and disease. Analysis of gene expression time-course profiles has not been fully exploited so far and remains a challenging problem. We propose a novel shape-based mixture model clustering method for gene expression time-course profiles to explore significant gene groups. Based on multi-resolution fractal features and a mixture clustering model, we propose a multi-resolution shape mixture model algorithm. The multi-resolution fractal features are computed by wavelet decomposition, which explores patterns of change in gene expression over time at different resolutions. Our proposed multi-resolution shape mixture model algorithm is a probabilistic framework which offers a more natural and robust way of clustering time-course gene expression. We assessed the performance of our proposed algorithm using yeast time-course gene expression profiles and compared it with several popular clustering methods for gene expression profiles. The grouped genes identified by the different methods are evaluated by enrichment analysis of biological pathways and known protein-protein interactions from experimental evidence. The grouped genes identified by our proposed algorithm have stronger biological significance. A novel multi-resolution shape mixture model algorithm based on multi-resolution fractal features is proposed. Our proposed model provides new horizons and an alternative tool for visualization and analysis of time-course gene expression profiles. The R and Matlab programs are available upon request. Copyright © 2016 Elsevier Inc. All rights reserved.
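The general recipe, multi-resolution features from a wavelet decomposition of each profile followed by mixture-model clustering, can be sketched as below; plain subband energies and a Gaussian mixture are used as stand-ins for the fractal shape features and the authors' mixture model.

    # Minimal sketch: wavelet multi-resolution features per profile, then GMM clustering.
    import numpy as np
    import pywt
    from sklearn.mixture import GaussianMixture

    def multires_features(profile, wavelet="db2", level=3):
        coeffs = pywt.wavedec(profile, wavelet, level=level)
        # one energy value per resolution level (approximation + detail bands)
        return np.array([np.sum(np.square(c)) for c in coeffs])

    def cluster_profiles(profiles, n_clusters=5, seed=0):
        X = np.vstack([multires_features(p) for p in profiles])
        gmm = GaussianMixture(n_components=n_clusters, random_state=seed)
        return gmm.fit_predict(X)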
Marais-Werner, Anátulie; Myburgh, J; Becker, P J; Steyn, M
2018-01-01
Several studies have been conducted on decomposition patterns and rates of surface remains; however, much less is known about this process for buried remains. Understanding the process of decomposition in buried remains is extremely important and aids in criminal investigations, especially when attempting to estimate the post mortem interval (PMI). The aim of this study was to compare the rates of decomposition between buried and surface remains. For this purpose, 25 pigs (Sus scrofa; 45-80 kg) were buried and excavated at different post mortem intervals (7, 14, 33, 92, and 183 days). The observed total body scores were then compared to those of surface remains decomposing at the same location. Stages of decomposition were scored according to separate categories for different anatomical regions based on standardised methods. Variation in the degree of decomposition was considerable, especially in the pigs buried for 7 days, which displayed different degrees of discolouration in the lower abdomen and trunk. At 14 and 33 days, buried pigs displayed features commonly associated with the early stages of decomposition, but with less variation. A state of advanced decomposition was then reached, after which little change was observed from roughly 90 to 183 days after interment. Although the patterns of decomposition for buried and surface remains were very similar, the rates differed considerably. Based on the observations made in this study, guidelines for the estimation of PMI are proposed. These pertain to buried remains found at a depth of approximately 0.75 m in the Central Highveld of South Africa.
Kang, Guangliang; Du, Li; Zhang, Hong
2016-06-22
The growing complexity of biological experiment design based on high-throughput RNA sequencing (RNA-seq) calls for more accommodating statistical tools. We focus on differential expression (DE) analysis using RNA-seq data in the presence of multiple treatment conditions. We propose a novel method, multiDE, for facilitating DE analysis using RNA-seq read count data with multiple treatment conditions. The read count is assumed to follow a log-linear model incorporating two factors (i.e., condition and gene), where an interaction term quantifies the association between gene and condition. The number of degrees of freedom is reduced to one through a first-order decomposition of the interaction, leading to a dramatic improvement in power for testing DE genes when the number of conditions is greater than two. In our simulations, multiDE outperformed the benchmark methods (i.e., edgeR and DESeq2) even when the underlying model was severely misspecified, and the power gain increased with the number of conditions. In the application to two real datasets, multiDE identified more biologically meaningful DE genes than the benchmark methods. An R package implementing multiDE is available publicly at http://homepage.fudan.edu.cn/zhangh/softwares/multiDE . When the number of conditions is two, multiDE performs comparably with the benchmark methods; when the number of conditions is greater than two, multiDE outperforms them.
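For orientation only, the sketch below fits a generic two-factor Poisson log-linear model with a gene-by-condition interaction and compares it against the additive model with a likelihood-ratio statistic. It illustrates the kind of model described above but does not implement multiDE's one-degree-of-freedom decomposition of the interaction; the toy data frame is hypothetical.

```python
# Generic two-factor log-linear fit for read counts (illustrative only;
# not the multiDE method itself).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per (gene, condition) with a count.
df = pd.DataFrame({
    "gene":      ["g1", "g1", "g1", "g2", "g2", "g2"],
    "condition": ["c1", "c2", "c3", "c1", "c2", "c3"],
    "count":     [12, 30, 25, 40, 38, 41],
})

full = smf.glm("count ~ C(gene) * C(condition)",
               data=df, family=sm.families.Poisson()).fit()
null = smf.glm("count ~ C(gene) + C(condition)",
               data=df, family=sm.families.Poisson()).fit()
lr_stat = 2 * (full.llf - null.llf)  # likelihood-ratio statistic for the interaction
```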
Bacterial Community Succession in Pine-Wood Decomposition.
Kielak, Anna M; Scheublin, Tanja R; Mendes, Lucas W; van Veen, Johannes A; Kuramae, Eiko E
2016-01-01
Though bacteria and fungi are common inhabitants of decaying wood, little is known about the relationship between bacterial and fungal community dynamics during natural wood decay. Based on previous studies involving inoculated wood blocks, strong fungal selection on bacteria abundance and community composition was expected to occur during natural wood decay. Here, we focused on bacterial and fungal community compositions in pine wood samples collected from dead trees in different stages of decomposition. We showed that bacterial communities undergo less drastic changes than fungal communities during wood decay. Furthermore, we found that bacterial community assembly was a stochastic process at initial stage of wood decay and became more deterministic in later stages, likely due to environmental factors. Moreover, composition of bacterial communities did not respond to the changes in the major fungal species present in the wood but rather to the stage of decay reflected by the wood density. We concluded that the shifts in the bacterial communities were a result of the changes in wood properties during decomposition and largely independent of the composition of the wood-decaying fungal communities.
Chapman, Samantha K.; Newman, Gregory S.; Hart, Stephen C.; Schweitzer, Jennifer A.; Koch, George W.
2013-01-01
To what extent microbial community composition can explain variability in ecosystem processes remains an open question in ecology. Microbial decomposer communities can change during litter decomposition due to biotic interactions and shifting substrate availability. Although the relative abundance of decomposers may change when leaf litters are mixed, linking these shifts to the non-additive patterns often recorded in mixed-species litter decomposition rates has been elusive; establishing such links would connect community composition to ecosystem function. We extracted phospholipid fatty acids (PLFAs) from single-species and mixed-species leaf litterbags after 10 and 27 months of decomposition in a mixed conifer forest. Total PLFA concentrations were 70% higher on litter mixtures than single litter types after 10 months, but were only 20% higher after 27 months. Similarly, fungal-to-bacterial ratios differed between mixed and single litter types after 10 months of decomposition, but equalized over time. Microbial community composition, as indicated by principal components analyses, differed due to both litter mixing and stage of litter decomposition. The PLFA biomarkers a15:0 and cy17:0, which indicate gram-positive and gram-negative bacteria respectively, in particular drove these shifts. Total PLFA correlated significantly with single-litter mass loss early in decomposition but not at later stages. We conclude that litter mixing alters microbial community development, which can contribute to synergisms in litter decomposition. These findings advance our understanding of how changing forest biodiversity can alter microbial communities and the ecosystem processes they mediate. PMID:23658639
NASA Astrophysics Data System (ADS)
Bernard, Jairus Daniel
Lightweight structural components are important to the automotive and aerospace industries so that better fuel economy can be realized. Magnesium alloys in particular are being examined to fulfill this need due to their attractive stiffness- and strength-to-weight ratios when compared to other materials. However, when introducing a material into new roles, one needs to properly characterize its mechanical properties. Fatigue behavior is especially important for aerospace and automotive component applications. Therefore, quantifying the structure-property relationships and accurately predicting the fatigue behavior of these materials are vital. This study has two purposes. The first is to quantify the structure-property relationships for the fatigue behavior of an AM30 magnesium alloy. The second is to use the microstructure-based MultiStage Fatigue (MSF) model to accurately predict the fatigue behavior of three magnesium alloys: AM30, Elektron 21, and AZ61. While some studies have previously quantified the MSF material constants for several magnesium alloys, detailed research into the fatigue regimes, notably the microstructurally small crack (MSC) region, is lacking. Hence, this work is the first of its kind to experimentally quantify the fatigue crack incubation and MSC regimes used in the MultiStage Fatigue model. Using a multi-faceted experimental approach, these regimes were explored with a replica method that used a dual-stage silicone-based compound along with previously published in situ fatigue tests. These observations were used in calibrating the MultiStage Fatigue model.
Management intensity alters decomposition via biological pathways
Wickings, Kyle; Grandy, A. Stuart; Reed, Sasha; Cleveland, Cory
2011-01-01
Current conceptual models predict that changes in plant litter chemistry during decomposition are primarily regulated by both initial litter chemistry and the stage (or extent) of mass loss. Far less is known about how variations in decomposer community structure (e.g., resulting from different ecosystem management types) could influence litter chemistry during decomposition. Given the recent agricultural intensification occurring globally and the importance of litter chemistry in regulating soil organic matter storage, our objectives were to determine the potential effects of agricultural management on plant litter chemistry and decomposition rates, and to investigate possible links between ecosystem management, litter chemistry and decomposition, and decomposer community composition and activity. We measured decomposition rates, changes in litter chemistry, extracellular enzyme activity, microarthropod communities, and bacterial versus fungal relative abundance in replicated conventional-till, no-till, and old-field agricultural sites for both corn and grass litter. After one growing season, litter decomposition under conventional till was 20% greater than in old-field communities. However, decomposition rates in no-till were not significantly different from those in old-field or conventional-till sites. After decomposition, grass residue in both conventional- and no-till systems was enriched in total polysaccharides relative to initial litter, while grass litter decomposed in old fields was enriched in nitrogen-bearing compounds and lipids. These differences corresponded with differences in decomposer communities, which also exhibited strong responses to both litter and management type. Overall, our results indicate that agricultural intensification can increase litter decomposition rates, alter decomposer communities, and influence litter chemistry in ways that could have important and long-term effects on soil organic matter dynamics. We suggest that future efforts to more accurately predict soil carbon dynamics under different management regimes may need to explicitly consider how changes in litter chemistry during decomposition are influenced by the specific metabolic capabilities of the extant decomposer communities.
Large scale cardiac modeling on the Blue Gene supercomputer.
Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U; Weiss, Daniel L; Seemann, Gunnar; Dössel, Olaf; Pitman, Michael C; Rice, John J
2008-01-01
Multi-scale, multi-physics heart models have not yet been able to include a high degree of accuracy and resolution with respect to model detail and spatial resolution due to computational limitations of current systems. We propose a framework to compute large-scale cardiac models. Decomposition of anatomical data into segments to be distributed on a parallel computer is carried out by optimal recursive bisection (ORB). The algorithm takes into account a computational load parameter which has to be adjusted according to the cell models used. The diffusion term is realized by the monodomain equations. The anatomical data set was given by both ventricles of the Visible Female data set at 0.2 mm resolution. Heterogeneous anisotropy was included in the computation. Model weights used as input for the decomposition and load balancing were set to (a) 1 for tissue and 0 for non-tissue elements; (b) 10 for tissue and 1 for non-tissue elements. Scaling results for 512, 1024, 2048, 4096 and 8192 computational nodes were obtained for 10 ms of simulation time. The simulations were carried out on an IBM Blue Gene/L parallel computer. A 1 s simulation was then carried out on 2048 nodes for the optimal model load. Load balance did not differ significantly across computational nodes even though the number of data elements distributed to each node differed greatly. Since the ORB algorithm did not take into account the computational load due to communication cycles, the speedup is close to optimal for the computation time but not optimal overall due to the communication overhead. However, the simulation times were reduced from 87 minutes on 512 nodes to 11 minutes on 8192 nodes. This work demonstrates that it is possible to run simulations of the presented detailed cardiac model within hours for the simulation of a heart beat.
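The sketch below illustrates the weight-aware recursive bisection idea behind ORB under simplifying assumptions (element coordinates and per-element load weights as inputs); it is not the Blue Gene implementation described in the paper.

```python
# Minimal sketch of weight-aware recursive bisection (ORB-style partitioning).
import numpy as np

def orb(points, weights, n_parts, axis=0):
    """Recursively split points into n_parts pieces of roughly equal weight."""
    if n_parts == 1:
        return [np.arange(len(points))]
    order = np.argsort(points[:, axis])        # sort along the current cut axis
    csum = np.cumsum(weights[order])
    left_parts = n_parts // 2
    target = csum[-1] * left_parts / n_parts   # weight that should fall left of the cut
    cut = int(np.searchsorted(csum, target))   # index of the bisection plane
    left, right = order[:cut], order[cut:]
    next_axis = (axis + 1) % points.shape[1]   # alternate cut direction
    sub_l = orb(points[left], weights[left], left_parts, next_axis)
    sub_r = orb(points[right], weights[right], n_parts - left_parts, next_axis)
    return [left[idx] for idx in sub_l] + [right[idx] for idx in sub_r]
```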
NASA Astrophysics Data System (ADS)
Kafka, Orion L.; Yu, Cheng; Shakoor, Modesar; Liu, Zeliang; Wagner, Gregory J.; Liu, Wing Kam
2018-04-01
A data-driven mechanistic modeling technique is applied to a system representative of a broken-up inclusion ("stringer") within drawn nickel-titanium wire or tube, e.g., as used for arterial stents. The approach uses a decomposition of the problem into a training stage and a prediction stage. It is applied to compute the fatigue crack incubation life of a microstructure of interest under high-cycle fatigue. A parametric study of a matrix-inclusion-void microstructure is conducted. The results indicate that, within the range studied, a larger void between halves of the inclusion increases fatigue life, while larger inclusion diameter reduces fatigue life.
Proof of a new colour decomposition for QCD amplitudes
Melia, Tom
2015-12-16
Recently, Johansson and Ochirov conjectured the form of a new colour decomposition for QCD tree-level amplitudes. This note provides a proof of that conjecture. The proof is based on 'Mario World' Feynman diagrams, which exhibit the hierarchical Dyck structure previously found to be very useful when dealing with multi-quark amplitudes.
NASA Astrophysics Data System (ADS)
Larionova, A. A.; Maltseva, A. N.; Lopes de Gerenyu, V. O.; Kvitkina, A. K.; Bykhovets, S. S.; Zolotareva, B. N.; Kudeyarov, V. N.
2017-04-01
The mineralization and humification of leaf litter collected in a mixed forest of the Prioksko-Terrasny Reserve were studied in long-term incubation experiments as functions of temperature (2, 12, and 22°C) and moisture (15, 30, 70, 100, and 150% of water holding capacity (WHC)). Mineralization is most sensitive to temperature changes at the early stage of decomposition; the Q10 value at the beginning of the experiment (1.5-2.7) is higher than at the later decomposition stages (0.3-1.3). Carbon losses usually exceed nitrogen losses during decomposition. Intensive nitrogen losses are observed only at high temperature and litter moisture (22°C and 100% WHC). Humification, determined from the accumulation of humic substances at the end of incubation, decreases from 34 to 9% with increasing moisture and temperature. The degree of humification (CHA/CFA) is at its maximum (1.14) at 12°C and 15% WHC; these temperature and moisture conditions are therefore considered optimal for humification. Humification calculated from the limit value of litter mineralization is almost independent of temperature, but it decreases significantly from 70 to 3% with increasing moisture. A possible reason for the difference between the humification values measured by the two methods is the conservation of a significant part of the hemicelluloses, cellulose, and lignin during the transformation of litter and the formation of a complex of humic substances with plant residues, in which the humic substances play a protective role and decrease the decomposition rate of plant biopolymers.
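For reference, the temperature sensitivity reported above follows the standard Q10 relation; a minimal sketch with hypothetical rate values is shown below.

```python
# Standard Q10 temperature-sensitivity relation: rates r1, r2 measured at
# temperatures t1, t2 (in °C). The example rates are hypothetical.
def q10(r1, t1, r2, t2):
    return (r2 / r1) ** (10.0 / (t2 - t1))

sensitivity = q10(r1=1.0, t1=12.0, r2=2.1, t2=22.0)  # = 2.1 for these example rates
```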
Scare Tactics: Evaluating Problem Decompositions Using Failure Scenarios
NASA Technical Reports Server (NTRS)
Helm, B. Robert; Fickas, Stephen
1992-01-01
Our interest is in the design of multi-agent problem-solving systems, which we refer to as composite systems. We have proposed an approach to composite system design by decomposition of problem statements. An automated assistant called Critter provides a library of reusable design transformations which allow a human analyst to search the space of decompositions for a problem. In this paper we describe a method for evaluating and critiquing problem decompositions generated by this search process. The method uses knowledge stored in the form of failure decompositions attached to design transformations. We suggest the benefits of our critiquing method by showing how it could re-derive steps of a published development example. We then identify several open issues for the method.
Pascual, Javier; von Hoermann, Christian; Rottler-Hoermann, Ann-Marie; Nevo, Omer; Geppert, Alicia; Sikorski, Johannes; Huber, Katharina J; Steiger, Sandra; Ayasse, Manfred; Overmann, Jörg
2017-08-01
The decomposition of dead mammalian tissue involves a complex temporal succession of epinecrotic bacteria. Microbial activity may release different cadaveric volatile organic compounds which in turn attract other key players of carcass decomposition such as scavenger insects. To elucidate the dynamics and potential functions of epinecrotic bacteria on carcasses, we monitored bacterial communities developing on still-born piglets incubated in different forest ecosystems by combining high-throughput Illumina 16S rRNA sequencing with gas chromatography-mass spectrometry of volatiles. Our results show that the community structure of epinecrotic bacteria and the types of cadaveric volatile compounds released over the time course of decomposition are driven by deterministic rather than stochastic processes. Individual cadaveric volatile organic compounds were correlated with specific taxa during the first stages of decomposition which are dominated by bacteria. Through best-fitting multiple linear regression models, the synthesis of acetic acid, indole and phenol could be linked to the activity of Enterobacteriaceae, Tissierellaceae and Xanthomonadaceae, respectively. These conclusions are also commensurate with the metabolism described for the dominant taxa identified for these families. The predictable nature of in situ synthesis of cadaveric volatile organic compounds by epinecrotic bacteria provides a new basis for future chemical ecology and forensic studies. © 2017 Society for Applied Microbiology and John Wiley & Sons Ltd.
Multi-site precipitation downscaling using a stochastic weather generator
NASA Astrophysics Data System (ADS)
Chen, Jie; Chen, Hua; Guo, Shenglian
2018-03-01
Statistical downscaling is an efficient way to resolve the spatiotemporal mismatch between climate model outputs and the data requirements of hydrological models. However, the most commonly used downscaling methods only produce climate change scenarios for a specific site or a watershed average, which cannot drive distributed hydrological models to study the spatial variability of climate change impacts. By coupling a single-site downscaling method with a multi-site weather generator, this study proposes a multi-site downscaling approach for hydrological climate change impact studies. Multi-site downscaling is done in two stages. The first stage spatially downscales climate model-simulated monthly precipitation from the grid scale to a specific site using a quantile mapping method, and the second stage temporally disaggregates monthly precipitation to daily values by adjusting the parameters of a multi-site weather generator. The inter-station correlation is specifically considered using a distribution-free approach along with an iterative algorithm. The performance of the downscaling approach is illustrated using a 10-station watershed as an example. The precipitation time series derived from the National Centers for Environmental Prediction (NCEP) reanalysis dataset is used as the climate model simulation. The precipitation time series of each station is divided into 30 odd-numbered years for calibration and 29 even-numbered years for validation. Several metrics, including the frequencies of wet and dry spells and statistics of the daily, monthly and annual precipitation, are used as criteria to evaluate the multi-site downscaling approach. The results show that the frequencies of wet and dry spells are well reproduced for all stations. In addition, the multi-site downscaling approach performs well with respect to reproducing precipitation statistics, especially at monthly and annual timescales. The remaining biases mainly result from the non-stationarity of NCEP precipitation. Overall, the proposed approach is efficient for generating multi-site climate change scenarios that can be used to investigate the spatial variability of climate change impacts on hydrology.
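The first (quantile mapping) stage can be sketched as follows, assuming empirical climatologies for the model grid cell and the target station are available; the array names are illustrative and this is not the authors' implementation.

```python
# Hedged sketch of empirical quantile mapping: model values are mapped onto
# the observed distribution via matching empirical quantiles.
import numpy as np

def quantile_map(model_values, model_clim, obs_clim):
    # Empirical CDF position of each model value within the model climatology...
    probs = np.searchsorted(np.sort(model_clim), model_values) / len(model_clim)
    # ...mapped to the same quantile of the observed climatology.
    return np.quantile(obs_clim, np.clip(probs, 0.0, 1.0))
```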
NASA Astrophysics Data System (ADS)
Lu, C.
2017-12-01
This study utilized field outcrops, thin sections, geochemical data, and GR logging curves to investigate the development model of paleokarst within the Longwangmiao Formation in the Lower Cambrian, western Central Yangtze Block, SW China. The Longwangmiao Formation, which belongs to a third-order sequence, consists of four fourth-order sequences and is located in the uppermost part of the Lower Cambrian. The vertical variations of the δ13C and δ18O values indicate the existence of multi-stage eogenetic karst events. The eogenetic karst event in the uppermost part of the Longwangmiao Formation is recognized by dripstones developed within paleocaves, a vertical paleoweathering crust with four zones (bedrock, a weak weathering zone, an intense weathering zone and a solution-collapsed zone), two generations of calcsparite cement showing bright luminescence and a zonation from nonluminescent to bright to nonluminescent, two types of breccias (matrix-rich clast-supported chaotic breccia and matrix-supported chaotic breccia), and rundkarren. The episodic vertical variations of stratiform dissolution vugs and breccias, together with the facies-controlled dissolution and filling features, indicate the development of multi-stage eogenetic karst. The paleokarst development is controlled by multi-level sea-level changes: the long eccentricity cycle dictates the fluctuations of the fourth-order sea level, generating multi-stage eogenetic karst events. The paleokarst model is an important step towards better understanding the link between the probably orbitally forced sea-level oscillations and eogenetic karst in the Lower Cambrian. According to this paleokarst model, hydrocarbon exploration should focus on both the karst highlands and the karst transitional zone.
Forbes, Shari L.; Perrault, Katelynn A.; Stefanuto, Pierre-Hugues; Nizio, Katie D.; Focant, Jean-François
2014-01-01
The investigation of volatile organic compounds (VOCs) associated with decomposition is an emerging field in forensic taphonomy due to their importance in locating human remains using biological detectors such as insects and canines. A consistent decomposition VOC profile has not yet been elucidated due to the intrinsic impact of the environment on the decomposition process in different climatic zones. The study of decomposition VOCs has typically occurred during the warmer months to enable chemical profiling of all decomposition stages. The present study investigated the decomposition VOC profile in air during both warmer and cooler months in a moist, mid-latitude (Cfb) climate as decomposition occurs year-round in this environment. Pig carcasses (Sus scrofa domesticus L.) were placed on a soil surface to decompose naturally and their VOC profile was monitored during the winter and summer months. Corresponding control sites were also monitored to determine the natural VOC profile of the surrounding soil and vegetation. VOC samples were collected onto sorbent tubes and analyzed using comprehensive two-dimensional gas chromatography – time-of-flight mass spectrometry (GC×GC-TOFMS). The summer months were characterized by higher temperatures and solar radiation, greater rainfall accumulation, and comparable humidity when compared to the winter months. The rate of decomposition was faster and the number and abundance of VOCs was proportionally higher in summer. However, a similar trend was observed in winter and summer demonstrating a rapid increase in VOC abundance during active decay with a second increase in abundance occurring later in the decomposition process. Sulfur-containing compounds, alcohols and ketones represented the most abundant classes of compounds in both seasons, although almost all 10 compound classes identified contributed to discriminating the stages of decomposition throughout both seasons. The advantages of GC×GC-TOFMS were demonstrated for detecting and identifying trace levels of VOCs, particularly ethers, which are rarely reported as decomposition VOCs. PMID:25412504
Process for remediation of plastic waste
Pol, Vilas G; Thiyagarajan, Pappannan
2013-11-12
A single-step process for degrading plastic waste converts it into carbonaceous products via thermal decomposition: the plastic waste is placed in a reactor, heated under an inert or air atmosphere until a temperature of about 700°C is reached, the reactor is allowed to cool, and the resulting decomposition products are recovered. The decomposition products that this process yields are carbonaceous materials, and more specifically carbon nanotubes having a partially filled (encapsulated) core adjacent to one end of the nanotube. Additionally, in the presence of a transition metal compound, this thermal decomposition process produces multi-walled carbon nanotubes.
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-08-01
The main purpose of this work is to explore the usefulness of fractal descriptors estimated in multi-resolution domains to characterize biomedical digital image texture. In this regard, three multi-resolution techniques are considered: the well-known discrete wavelet transform (DWT), the empirical mode decomposition (EMD), and the newly introduced variational mode decomposition (VMD). The original image is decomposed by the DWT, EMD, and VMD into different scales. Then, Fourier-spectrum-based fractal descriptors are estimated at specific scales and directions to characterize the image. The support vector machine (SVM) is used to perform supervised classification. The empirical study is applied to the problem of distinguishing between normal brain magnetic resonance images (MRI) and abnormal images affected by Alzheimer's disease (AD). Our results demonstrate that fractal descriptors estimated in the VMD domain outperform those estimated in the DWT and EMD domains, and also those estimated directly from the original image.
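A minimal sketch of a Fourier-spectrum fractal descriptor of the sort described above, followed by SVM classification, is given below; the slope-of-the-log-spectrum descriptor and the commented pipeline are illustrative assumptions (`decompose` is a hypothetical multi-resolution front end, not a library function).

```python
# Illustrative Fourier-spectrum fractal descriptor plus SVM classification.
import numpy as np
from sklearn.svm import SVC

def spectral_slope(signal_1d):
    """Slope of log power spectrum vs log frequency (a simple fractal descriptor)."""
    spectrum = np.abs(np.fft.rfft(signal_1d)) ** 2
    freqs = np.fft.rfftfreq(signal_1d.size)
    keep = freqs > 0
    slope, _ = np.polyfit(np.log(freqs[keep]), np.log(spectrum[keep] + 1e-12), 1)
    return slope

# Hypothetical pipeline: one descriptor per decomposition sub-band per image.
# X = np.array([[spectral_slope(band) for band in decompose(img)] for img in images])
# clf = SVC(kernel="rbf").fit(X, labels)
```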
Constraint Force Equation Methodology for Modeling Multi-Body Stage Separation Dynamics
NASA Technical Reports Server (NTRS)
Toniolo, Matthew D.; Tartabini, Paul V.; Pamadi, Bandu N.; Hotchko, Nathaniel
2008-01-01
This paper discusses a generalized approach to the multi-body separation problems in a launch vehicle staging environment based on constraint force methodology and its implementation into the Program to Optimize Simulated Trajectories II (POST2), a widely used trajectory design and optimization tool. This development facilitates the inclusion of stage separation analysis into POST2 for seamless end-to-end simulations of launch vehicle trajectories, thus simplifying the overall implementation and providing a range of modeling and optimization capabilities that are standard features in POST2. Analysis and results are presented for two test cases that validate the constraint force equation methodology in a stand-alone mode and its implementation in POST2.
NASA Astrophysics Data System (ADS)
Cheng, Boyang; Jin, Longxu; Li, Guoning
2018-06-01
The fusion of visible light and infrared images has been a significant subject in imaging science. As a new contribution to this field, a novel fusion framework for visible light and infrared images based on adaptive dual-channel unit-linking pulse coupled neural networks with singular value decomposition (ADS-PCNN) in the non-subsampled shearlet transform (NSST) domain is presented in this paper. First, the source images are decomposed into multi-direction and multi-scale sub-images by NSST. Then, an improved novel sum-modified Laplacian (INSML) of the low-pass sub-image and an improved average gradient (IAVG) of the high-pass sub-images are used to stimulate the ADS-PCNN, respectively. To address the large spectral difference between infrared and visible light and the occurrence of black artifacts in fused images, a local structure information operator (LSI), derived from local-area singular value decomposition of each source image, is used as the adaptive linking strength to enhance fusion accuracy. Compared with the PCNN models in other studies, the proposed method simplifies certain peripheral parameters, and a time matrix is utilized to decide the iteration number adaptively. A series of images from diverse scenes is used for fusion experiments, and the fusion results are evaluated subjectively and objectively. The evaluations show that our algorithm exhibits superior fusion performance and is more effective than existing typical fusion techniques.
The persistence of human DNA in soil following surface decomposition.
Emmons, Alexandra L; DeBruyn, Jennifer M; Mundorff, Amy Z; Cobaugh, Kelly L; Cabana, Graciela S
2017-09-01
Though recent decades have seen a marked increase in research concerning the impact of human decomposition on the grave soil environment, the fate of human DNA in grave soil has been relatively understudied. With the purpose of supplementing the growing body of literature in forensic soil taphonomy, this study assessed the relative persistence of human DNA in soil over the course of decomposition. Endpoint PCR was used to assess the presence or absence of human nuclear and mitochondrial DNA, while qPCR was used to evaluate the quantity of human DNA recovered from the soil beneath four cadavers at the University of Tennessee's Anthropology Research Facility (ARF). Human nuclear DNA from the soil was largely unrecoverable, while human mitochondrial DNA was detectable in the soil throughout all decomposition stages. Mitochondrial DNA copy abundances were not significantly different between decomposition stages and were not significantly correlated to soil edaphic parameters tested. There was, however, a significant positive correlation between mitochondrial DNA copy abundances and the human associated bacteria, Bacteroides, as estimated by 16S rRNA gene abundances. These results show that human mitochondrial DNA can persist in grave soil and be consistently detected throughout decomposition. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.
Long-Term Simulated Atmospheric Nitrogen Deposition Alters ...
Atmospheric nitrogen deposition has been suggested to increase forest carbon sequestration across much of the Northern Hemisphere; slower organic matter decomposition could contribute to this increase. At four sugar maple (Acer saccharum)-dominated northern hardwood forests, we previously observed that 10 years of chronic simulated nitrogen deposition (30 kg N ha-1 yr-1) increased soil organic carbon. Over three years at these sites, we investigated the effects of nitrogen additions on decomposition of two substrates with documented differences in biochemistry: leaf litter (more labile) and fine roots (more recalcitrant). Further, we combined decomposition rates with annual leaf and fine root litter production to estimate how nitrogen additions altered the accumulation of soil organic matter. Nitrogen additions marginally stimulated early-stage decomposition of leaf litter, a substrate with little acid-insoluble material (e.g., lignin). In contrast, nitrogen additions inhibited the late stage decomposition of fine roots, a substrate with high amount of acid insoluble material and a change consistent with observed decreases in lignin-degrading enzyme activities with nitrogen additions at these sites. At the ecosystem scale, the slower fine root decomposition led to additional root mass retention (g m-2), which explained 5, 48, and 52 % of previously-documented soil carbon accumulation due to nitrogen additions. Our results demonstrated that nitrogen deposition ha
Oliveira, Diego L; Soares, Thiago F; Vasconcelos, Simão D
2016-01-01
Insects associated with carrion can have parasitological importance as vectors of several pathogens and as causal agents of myiasis in humans and in domestic and wild animals. We tested the attractiveness of animal baits (chicken liver) at different stages of decomposition to necrophagous species of Diptera (Calliphoridae, Fanniidae, Muscidae, Phoridae and Sarcophagidae) in a rainforest fragment in Brazil. Five types of bait were used: fresh, and decomposed at room temperature (26 °C) for 24, 48, 72 and 96 h. A positive correlation was detected between the time of decomposition and the abundance of Calliphoridae and Muscidae, whilst the abundance of adult Phoridae decreased with the time of decomposition. Ten species of calliphorids were recorded, of which Chrysomya albiceps, Chrysomya megacephala and Chloroprocta idioidea showed a significant positive correlation between abundance and decomposition time. Specimens of Sarcophagidae and Fanniidae did not discriminate between fresh and highly decomposed baits. A strong female bias was recorded for all species of Calliphoridae irrespective of the type of bait. The results reinforce the feasibility of using animal tissues as attractants for a wide diversity of dipterans of medical, parasitological and forensic importance in short-term surveys, especially using baits at intermediate stages of decomposition.
Dong, Fan; Zhao, Weirong; Wu, Zhongbiao; Guo, Sen
2009-03-15
Multi-type nitrogen-doped TiO2 nanoparticles were prepared by thermal decomposition of a mixture of titanium hydroxide and urea at 400°C for 2 h. The as-prepared photocatalysts were characterized by X-ray diffraction (XRD), high-resolution transmission electron microscopy (HRTEM), X-ray photoelectron spectroscopy (XPS), UV-vis diffuse reflectance spectroscopy (UV-vis DRS), and photoluminescence (PL). The results showed that the as-prepared samples exhibited strong visible light absorption due to multi-type nitrogen doping in the form of substitutional (N-Ti-O and Ti-O-N) and interstitial (π*-character NO) states, which lie 0.14 and 0.73 eV above the top of the valence band, respectively. A physical model of the band structure was established to clarify the visible-light photocatalytic process over the as-prepared samples. The photocatalytic activity was evaluated for the photodegradation of gaseous toluene under visible light irradiation. The activity of the sample prepared from wet titanium hydroxide and urea (TiO2-Nw, apparent reaction rate constant k = 0.045 min⁻¹) was much higher than that of the other samples, including P25 (k = 0.0013 min⁻¹). The high activity can be attributed to the synergetic effects of strong visible light absorption, good crystallization, abundant surface hydroxyl groups, and enhanced separation of photoinduced carriers.
Multiphase flow models for hydraulic fracturing technology
NASA Astrophysics Data System (ADS)
Osiptsov, Andrei A.
2017-10-01
The technology of hydraulic fracturing of a hydrocarbon-bearing formation is based on pumping a fluid with particles into a well to create fractures in the porous medium. After the end of pumping, the fractures filled with closely packed proppant particles create highly conductive channels for hydrocarbon flow from the far-field reservoir to the well and then to the surface. The design of the hydraulic fracturing treatment is carried out with a simulator. Such simulators are based on mathematical models, which need to be accurate and close to physical reality. The entire process of fracture placement and flowback/cleanup can be conventionally split into the following four stages: (i) quasi-steady-state, effectively single-phase suspension flow down the wellbore, (ii) particle transport in an open vertical fracture, (iii) displacement of fracturing fluid by hydrocarbons from the closed fracture filled with a random close pack of proppant particles, and, finally, (iv) highly transient gas-liquid flow in the well during cleanup. Stage (i) is relatively well described by existing hydraulics models, while the models for the other three stages of the process need revisiting and considerable improvement, which was the focus of the author's research presented in this review paper. For stage (ii), we consider the derivation of a multi-fluid model for suspension flow in a narrow vertical hydraulic fracture at moderate Re on the scale of fracture height and length, and also the migration of particles across the flow on the scale of fracture width. For the stage of fracture cleanup (iii), a novel multi-continua model for suspension filtration is developed. To provide closure relationships for the permeability of proppant packings to be used in this model, a 3D direct numerical simulation of single-phase flow is carried out using the lattice-Boltzmann method. For wellbore cleanup (iv), we present a combined 1D model for highly transient gas-liquid flow based on the combination of multi-fluid and drift-flux approaches. The derivation of the drift-flux model from conservation laws is critically revisited in order to define the list of underlying assumptions and to mark the applicability margins of the model. All these fundamental problems share the same technological application (hydraulic fracturing) and the same method of research, namely, the multi-fluid approach to multiphase flow modeling and the consistent use of asymptotic methods. Multi-fluid models are then discussed in comparison with the semi-empirical (often postulated) models widely used in the industry.
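For reference, the drift-flux closure mentioned for stage (iv) is conventionally written in the following textbook form (shown as a standard relation, not necessarily the exact closure adopted in the review): the gas velocity is expressed through the mixture volumetric flux, a distribution parameter and a drift velocity.

```latex
% Standard drift-flux closure (textbook form; symbols: gas velocity u_g,
% liquid velocity u_l, gas volume fraction \alpha_g, mixture volumetric flux j,
% distribution parameter C_0, drift velocity u_{gj}).
\[
  u_g = C_0\, j + u_{gj},
  \qquad
  j = \alpha_g u_g + (1 - \alpha_g)\, u_l .
\]
```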
NASA Astrophysics Data System (ADS)
Mao, J.; Chen, N.; Harmon, M. E.; Li, Y.; Cao, X.; Chappell, M.
2012-12-01
Advanced 13C solid-state NMR techniques were employed to study the chemical structural changes of litter during decomposition across broad spatial and long temporal scales. Fresh and decomposed litter samples of four species (Acer saccharum (ACSA), Drypetes glauca (DRGL), Pinus resinosa (PIRE), and Thuja plicata (THPL)), incubated for up to 10 years at four sites under different climatic conditions (from Arctic to tropical forest), were examined. Decomposition generally led to an enrichment of cutin and surface wax materials and a depletion of carbohydrates, causing the overall compositions to become more similar to one another than those of the original litters. However, the changes in the main constituents were inconsistent among the four litters, which followed different decomposition pathways at the same site. As decomposition proceeded, waxy materials decreased at the early stage and then gradually increased in PIRE; DRGL showed a significant depletion of lignin and tannin, while the changes in lignin and tannin were relatively small and inconsistent for ACSA and THPL. In addition, the NCH groups, which could be associated with either fungal cell wall chitin or bacterial cell wall peptidoglycan, were enriched in all litters except THPL. Contrary to the classic lignin-enrichment hypothesis, DRGL, with its low-quality C substrate, had the highest degree of compositional change. Furthermore, some samples had more "advanced" compositional changes in the intermediate stage of decomposition than in the highly decomposed stage. This pattern might be attributed to the formation of new cross-linking structures that rendered substrates more complex and difficult for enzymes to attack. Finally, litter quality overrode climate and time as a control on long-term changes in chemical composition.
Herms, Daniel A
2017-01-01
Emerald ash borer (EAB; Agrilus planipennis Fairmaire) is an invasive wood-borer causing rapid, widespread ash tree mortality, formation of canopy gaps, and accumulation of coarse woody debris (CWD) in forest ecosystems. The objective of this study was to quantify the effects of canopy gaps and ash CWD on forest floor invertebrate communities during the late stages of EAB-induced ash mortality, when the effects of gaps are predicted to be smallest and the effects of CWD are predicted to be greatest, according to the model proposed by Perry and Herms (2016a). A 2-year study was conducted in forest stands that had experienced nearly 100% ash mortality in southeastern Michigan, USA, near where EAB first established in North America. In contrast to patterns documented during early stages of the EAB invasion, the effects of gaps were minimal during the late stages of ash mortality, but invertebrate communities were affected by the accumulation and decomposition of CWD. Invertebrate activity-abundance, evenness, and diversity were highest near minimally decayed logs (decay class 1), but diverse taxon-specific responses to CWD affected community composition. Soil moisture class emerged as an important factor structuring invertebrate communities, often mediating the strength and direction of their responses to CWD and stages of decomposition. The results of this study were consistent with the predictions that the effects of CWD on invertebrate communities would be greater than those of canopy gaps during the late stages of EAB-induced ash mortality. This research contributes to understanding of the cascading and long-term ecological impacts of invasive species on native forest ecosystems.
Distributed Cooperation Solution Method of Complex System Based on MAS
NASA Astrophysics Data System (ADS)
Weijin, Jiang; Yuhui, Xu
To adapt reconfigurable fault-diagnosis models to dynamic environments and to fully meet the needs of solving the tasks of complex systems, this paper introduces multi-agent technology into complex fault diagnosis and studies an integrated intelligent control system. Based on the structure of diagnostic decision-making and hierarchical modeling, and on a multi-layer decomposition strategy for the diagnosis task, a multi-agent synchronous diagnosis federation integrating different knowledge representation modes and inference mechanisms is presented. The functions of the management agent, diagnosis agent and decision agent are analyzed, the organization and evolution of agents in the system are proposed, and the corresponding conflict-resolution algorithm is given. A layered structure of abstract agents with public attributes is built, and the system architecture is realized on a MAS-based distributed layered blackboard. A real-world application shows that the proposed control structure successfully solves the fault diagnosis problem of a complex plant and offers particular advantages in distributed domains.
Multiscale synchrony behaviors of paired financial time series by 3D multi-continuum percolation
NASA Astrophysics Data System (ADS)
Wang, M.; Wang, J.; Wang, B. T.
2018-02-01
Multiscale synchrony behaviors and nonlinear dynamics of paired financial time series are investigated in an attempt to study the cross-correlation relationships between two stock markets. A random stock price model is developed based on a new system, the three-dimensional (3D) multi-continuum percolation system, which is utilized to imitate the formation mechanism of price dynamics and explain the nonlinear behaviors found in financial time series. We assume that price fluctuations are caused by the spread of investment information, with a cluster of the 3D multi-continuum percolation representing a cluster of investors who share the same investment attitude. In this paper, we focus on the paired return series, the paired volatility series, and the paired intrinsic mode functions obtained by empirical mode decomposition. A new cross recurrence quantification analysis is put forward, combined with multiscale cross-sample entropy, to investigate the multiscale synchrony of these paired series from the proposed model. The corresponding analysis is also carried out for two China stock markets for comparison.
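A hedged sketch of the multiscale cross-sample entropy computation referred to above is given below: each series is coarse-grained at a given scale and a cross-sample entropy is computed between the pair. The parameter choices (m, r) follow the usual sample-entropy convention; the code is illustrative rather than the authors' implementation.

```python
# Illustrative multiscale cross-sample entropy for a pair of series x, y.
import numpy as np

def coarse_grain(x, scale):
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def cross_sample_entropy(x, y, m=2, r=0.2):
    r *= np.std(np.concatenate([x, y]))        # tolerance relative to variability
    n = min(len(x), len(y))
    def count(length):
        xs = np.array([x[i:i + length] for i in range(n - m)])
        ys = np.array([y[i:i + length] for i in range(n - m)])
        d = np.max(np.abs(xs[:, None, :] - ys[None, :, :]), axis=2)  # Chebyshev distance
        return np.sum(d <= r)
    b, a = count(m), count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_cross_sampen(x, y, scales=range(1, 6)):
    return [cross_sample_entropy(coarse_grain(x, s), coarse_grain(y, s)) for s in scales]
```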
NASA Astrophysics Data System (ADS)
Bozoev, A. M.; Demidova, E. A.
2016-03-01
Many fields in Western Siberia are currently in the late stages of development. As a result, multilayer fields increasingly rely on well interventions to develop hard-to-recover reserves. However, most of these assets may not be economically profitable without horizontal drilling and multi-stage hydraulic fracturing treatments. Moreover, the placement of frac ports relative to each other, the number of stages, and the volume of proppant per stage are the main issues, because interference effects can lead to a loss of oil production. The optimal arrangement of horizontal wells with multi-stage hydraulic fractures was defined in this paper. Several analytical approaches were used to predict the initial oil flow rate, and the most appropriate one was chosen for reservoir J1 of field C. However, none of the analytical equations could take the interference effect into account or determine the optimum number of fractures. Therefore, simulation modelling was used. Finally, a universal equation is derived for reservoir J1 of field C. This tool can be used to predict, at a qualitative level, the flow rate of a horizontal well with hydraulic fracturing treatment without a simulation model.
Thermal analysis applied to irradiated propolis
NASA Astrophysics Data System (ADS)
Matsuda, Andrea Harumi; Machado, Luci Brocardo; del Mastro, Nélida Lucia
2002-03-01
Propolis is a resinous hive product collected by bees. Raw propolis requires a decontamination procedure, and irradiation appears to be a promising technique for this purpose. The valuable properties of propolis for the food and pharmaceutical industries have led to increasing interest in its technological behavior. Thermal analysis is a chemical analysis that gives information about changes on heating, which is of great importance for technological applications. Ground propolis samples were gamma irradiated with 60Co at 0 and 10 kGy. Thermogravimetry curves showed a similar multi-stage decomposition pattern for both irradiated and unirradiated samples up to 600°C. Similarly, differential scanning calorimetry showed that the melting points of the irradiated and unirradiated samples coincided. The results suggest that the irradiation process does not interfere with the thermal properties of propolis when irradiated at doses up to 10 kGy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Procassini, R.J.
1997-12-31
The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.
NASA Astrophysics Data System (ADS)
Links, Jon
2017-03-01
Solutions of the classical Yang-Baxter equation provide a systematic method to construct integrable quantum systems in an algebraic manner. A Lie algebra can be associated with any solution of the classical Yang-Baxter equation, from which commuting transfer matrices may be constructed. This procedure is reviewed, specifically for solutions without skew-symmetry. A particular solution with an exotic symmetry is identified, which is not obtained as a limiting expansion of the usual Yang-Baxter equation. This solution facilitates the construction of commuting transfer matrices which will be used to establish the integrability of a multi-species boson tunnelling model. The model generalises the well-known two-site Bose-Hubbard model, to which it reduces in the one-species limit. Due to the lack of an apparent reference state, application of the algebraic Bethe Ansatz to solve the model is prohibitive. Instead, the Bethe Ansatz solution is obtained by the use of operator identities and tensor product decompositions.
Untangling climatic and autogenic signals in peat records
NASA Astrophysics Data System (ADS)
Morris, Paul J.; Baird, Andrew J.; Young, Dylan M.; Swindles, Graeme T.
2016-04-01
Raised bogs contain potentially valuable information about Holocene climate change. However, autogenic processes may disconnect peatland hydrological behaviour from climate, and overwrite and degrade climatic signals in peat records. How can genuine climate signals be separated from autogenic changes? What level of detail of climatic information should we expect to be able to recover from peat-based reconstructions? We used an updated version of the DigiBog model to simulate peatland development and response to reconstructed Holocene rainfall and temperature reconstructions. The model represents key processes that are influential in peatland development and climate signal preservation, and includes a network of feedbacks between peat accumulation, decomposition, hydraulic structure and hydrological processes. It also incorporates the effects of temperature upon evapotranspiration, plant (litter) productivity and peat decomposition. Negative feedbacks in the model cause simulated water-table depths and peat humification records to exhibit homeostatic recovery from prescribed changes in rainfall, chiefly through changes in drainage. However, the simulated bogs show less resilience to changes in temperature, which cause lasting alterations to peatland structure and function and may therefore be more readily detectable in peat records. The network of feedbacks represented in DigiBog also provide both high- and low-pass filters for climatic information, meaning that the fidelity with which climate signals are preserved in simulated peatlands is determined by both the magnitude and the rate of climate change. Large-magnitude climatic events of an intermediate frequency (i.e., multi-decadal to centennial) are best preserved in the simulated bogs. We found that simulated humification records are further degraded by a phenomenon known as secondary decomposition. Decomposition signals are consistently offset from the climatic events that generate them, and decomposition records of dry-wet-dry climate sequences appear to be particularly vulnerable to overwriting. Our findings have direct implications not only for the interpretation of peat-based records of past climates, but also for understanding the likely vulnerability of peatland ecosystems and carbon stocks to future climate change.
Isayev, Olexandr; Gorb, Leonid; Qasim, Mo; Leszczynski, Jerzy
2008-09-04
CL-20 (2,4,6,8,10,12-hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane or HNIW) is a high-energy nitramine explosive. To improve the atomistic understanding of the thermal decomposition of CL-20 in the gas and solid phases, we performed a series of ab initio molecular dynamics simulations. We found that during unimolecular decomposition, unlike other nitramines (e.g., RDX, HMX), CL-20 has only one distinct initial reaction channel: homolysis of the N-NO2 bond. We did not observe any HONO elimination reaction during unimolecular decomposition, whereas the ring-breaking reaction was followed by NO2 fission. Therefore, in spite of the limited sampling, which provides a mostly qualitative picture, we propose here a scheme for the unimolecular decomposition of CL-20. The averaged product population over all trajectories was estimated at four HCN, two to four NO2, two to four NO, one CO, and one OH molecule per CL-20 molecule. Our simulations provide a detailed description of the chemical processes in the initial stages of thermal decomposition of condensed CL-20, allowing elucidation of key features of such processes as the composition of primary reaction products, reaction timing, and the Arrhenius behavior of the system. The primary reactions leading to NO2, NO, N2O, and N2 occur at very early stages. We also estimated potential activation barriers for the formation of NO2, which essentially determines the overall decomposition kinetics, as well as effective rate constants for NO2 and N2. The calculated solid-phase decomposition pathways correlate with available condensed-phase experimental data.
Seasonal necrophagous insect community assembly during vertebrate carrion decomposition.
Benbow, M E; Lewis, A J; Tomberlin, J K; Pechal, J L
2013-03-01
Necrophagous invertebrates have been documented to be a predominant driver of vertebrate carrion decomposition; however, very little is understood about the assembly of these communities both within and among seasons. The objective of this study was to evaluate the seasonal differences in insect taxon composition, richness, and diversity on carrion over decomposition, with the intention that such data will be useful for refining error estimates in forensic entomology. Sus scrofa (L.) carcasses (n = 3-6, depending on season) were placed in a forested habitat near Xenia, OH, during spring, summer, autumn, and winter. Taxon richness varied substantially among seasons but was generally lower (1-2 taxa) during early decomposition and increased (3-8 taxa) through intermediate stages of decomposition. Autumn and winter showed the highest richness during late decomposition. Overall, taxon richness was higher during active decay for all seasons. While invertebrate community composition was generally consistent among seasons, the relative abundance of five taxa significantly differed across seasons, demonstrating different source communities for colonization depending on the time of year. Necrophagous insect communities were significantly distinct for each stage of decomposition and differed between summer and autumn and between summer and winter, but were similar between autumn and winter. Calliphoridae represented significant indicator taxa for summer and autumn but were replaced by Coleoptera during winter. Here we demonstrated substantial variability in necrophagous communities and their assembly on carrion over decomposition and among seasons. Recognizing this variation has important consequences for forensic entomology and future efforts to provide error rates for estimates of the postmortem interval using arthropod succession data as evidence during criminal investigations.
Convolution of large 3D images on GPU and its decomposition
NASA Astrophysics Data System (ADS)
Karas, Pavel; Svoboda, David
2011-12-01
In this article, we propose a method for computing the convolution of large 3D images. The convolution is performed in the frequency domain using the convolution theorem. The algorithm is accelerated on a graphics card by means of the CUDA parallel computing model. The convolution is decomposed in the frequency domain using the decimation-in-frequency algorithm. We pay attention to keeping our approach efficient in terms of both time and memory consumption and also in terms of memory transfers between CPU and GPU, which have a significant influence on the overall computational time. We also study the implementation on multiple GPUs and compare the results between the multi-GPU and multi-CPU implementations.
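The record above describes frequency-domain convolution only at a high level. The following minimal NumPy sketch (not the authors' CUDA/multi-GPU code) illustrates the convolution-theorem step for a 3D volume, with zero-padding to the full linear-convolution size; array sizes and names are illustrative.

```python
import numpy as np

def fft_convolve3d(image, kernel):
    """Linear 3D convolution via the convolution theorem (illustrative only).

    Both arrays are zero-padded to the full linear-convolution size
    before the element-wise product in the frequency domain.
    """
    full_shape = [i + k - 1 for i, k in zip(image.shape, kernel.shape)]
    f_image = np.fft.rfftn(image, s=full_shape)
    f_kernel = np.fft.rfftn(kernel, s=full_shape)
    return np.fft.irfftn(f_image * f_kernel, s=full_shape)

# Toy example: a 64^3 volume convolved with a 9^3 kernel.
rng = np.random.default_rng(0)
vol = rng.random((64, 64, 64))
psf = rng.random((9, 9, 9))
out = fft_convolve3d(vol, psf)
print(out.shape)  # (72, 72, 72)
```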
Interobserver Reliability of the Total Body Score System for Quantifying Human Decomposition.
Dabbs, Gretchen R; Connor, Melissa; Bytheway, Joan A
2016-03-01
Several authors have tested the accuracy of the Total Body Score (TBS) method for quantifying decomposition, but none have examined the reliability of the method as a scoring system by testing interobserver error rates. Sixteen participants used the TBS system to score 59 observation packets including photographs and written descriptions of 13 human cadavers in different stages of decomposition (postmortem interval: 2-186 days). Data analysis used a two-way random model intraclass correlation in SPSS (v. 17.0). The TBS method showed "almost perfect" agreement between observers, with average absolute correlation coefficients of 0.990 and average consistency correlation coefficients of 0.991. While the TBS method may have sources of error, scoring reliability is not one of them. Individual component scores were examined, and the influences of education and experience levels were investigated. Overall, the trunk component scores were the least concordant. Suggestions are made to improve the reliability of the TBS method. © 2016 American Academy of Forensic Sciences.
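As a rough illustration of the statistic reported above, the sketch below re-derives Shrout-Fleiss two-way random ICC estimates (single-rater and average-measures, absolute agreement) from a subjects-by-raters score matrix in NumPy rather than SPSS; the TBS values in the example are invented, not the study's data.

```python
import numpy as np

def icc_two_way_random(scores):
    """Shrout-Fleiss ICC(2,1) and ICC(2,k) for an (n subjects x k raters) matrix."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    col_means = scores.mean(axis=0)

    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between-subject
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between-rater
    resid = scores - row_means[:, None] - col_means[None, :] + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))

    icc_single = (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
    icc_average = (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)
    return icc_single, icc_average

# Hypothetical TBS scores: 5 observation packets scored by 4 participants.
tbs = np.array([[ 8,  9,  8,  9],
                [15, 14, 15, 16],
                [22, 23, 22, 21],
                [30, 29, 31, 30],
                [12, 12, 13, 12]], dtype=float)
print(icc_two_way_random(tbs))
```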
A Multi-Band Uncertainty Set Based Robust SCUC With Spatial and Temporal Budget Constraints
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Chenxi; Wu, Lei; Wu, Hongyu
2016-11-01
The dramatic increase of renewable energy resources in recent years, together with the long-existing load forecast errors and increasingly involved price sensitive demands, has introduced significant uncertainties into power systems operation. In order to guarantee the operational security of power systems with such uncertainties, robust optimization has been extensively studied in security-constrained unit commitment (SCUC) problems, for immunizing the system against worst uncertainty realizations. However, traditional robust SCUC models with single-band uncertainty sets may yield over-conservative solutions in most cases. This paper proposes a multi-band robust model to accurately formulate various uncertainties with higher resolution. By properly tuning band intervals and weight coefficients of individual bands, the proposed multi-band robust model can rigorously and realistically reflect spatial/temporal relationships and asymmetric characteristics of various uncertainties, and in turn could effectively leverage the tradeoff between robustness and economics of robust SCUC solutions. The proposed multi-band robust SCUC model is solved by Benders decomposition (BD) and outer approximation (OA), while taking advantage of the integral property of the proposed multi-band uncertainty set. In addition, several accelerating techniques are developed for enhancing the computational performance and the convergence speed. Numerical studies on a 6-bus system and the modified IEEE 118-bus system verify the effectiveness of the proposed robust SCUC approach for enhancing uncertainty modeling capabilities and mitigating conservativeness of the robust SCUC solution.
Decomposition of toluene in a steady-state atmospheric-pressure glow discharge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trushkin, A. N.; Grushin, M. E.; Kochetov, I. V.
Results are presented from experimental studies of decomposition of toluene (C6H5CH3) in a polluted air flow by means of a steady-state atmospheric-pressure glow discharge at different water vapor contents in the working gas. The experimental results on the degree of C6H5CH3 removal are compared with the results of computer simulations conducted in the framework of the developed kinetic model of plasma chemical decomposition of toluene in the N2:O2:H2O gas mixture. A substantial influence of the gas flow humidity on toluene decomposition in the atmospheric-pressure glow discharge is demonstrated. The main mechanisms of the influence of humidity on C6H5CH3 decomposition are determined. The existence of two stages in the process of toluene removal, which differ in their duration and in the intensity of plasma chemical decomposition of C6H5CH3, is established. Based on the results of computer simulations, the composition of the products of plasma chemical reactions at the output of the reactor is analyzed as a function of the specific energy deposition and gas flow humidity. The existence of a catalytic cycle, in which the hydroxyl radical OH acts as a catalyst and which substantially accelerates the recombination of oxygen atoms and suppresses ozone generation when the plasma-forming gas contains water vapor, is established.
Resolution of singularities for multi-loop integrals
NASA Astrophysics Data System (ADS)
Bogner, Christian; Weinzierl, Stefan
2008-04-01
We report on a program for the numerical evaluation of divergent multi-loop integrals. The program is based on iterated sector decomposition. We improve the original algorithm of Binoth and Heinrich such that the program is guaranteed to terminate. The program can be used to compute numerically the Laurent expansion of divergent multi-loop integrals regulated by dimensional regularisation. The symbolic and the numerical steps of the algorithm are combined into one program. Program summary: Program title: sector_decomposition; Catalogue identifier: AEAG_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAG_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 47 506; No. of bytes in distributed program, including test data, etc.: 328 485; Distribution format: tar.gz; Programming language: C++; Computer: all; Operating system: Unix; RAM: depending on the complexity of the problem; Classification: 4.4; External routines: GiNaC, available from http://www.ginac.de, and the GNU scientific library, available from http://www.gnu.org/software/gsl; Nature of problem: computation of divergent multi-loop integrals; Solution method: sector decomposition; Restrictions: only limited by the available memory and CPU time; Running time: depending on the complexity of the problem.
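To make the technique concrete, here is a standard textbook-style illustration (not taken from the program's documentation) of how sector decomposition factorizes an overlapping singularity and exposes a pole in the regulator ε. Consider

```latex
I(\epsilon) = \int_0^1\! dx \int_0^1\! dy \,(x+y)^{-2+\epsilon}.
```

Splitting the unit square into the sectors x > y and y > x, and substituting y = x t in the first (x = y t in the second), gives

```latex
I(\epsilon) = 2\int_0^1\! dx\, x^{-1+\epsilon} \int_0^1\! dt\,(1+t)^{-2+\epsilon}
            = \frac{2}{\epsilon}\,\frac{2^{\epsilon-1}-1}{\epsilon-1}
            = \frac{1}{\epsilon} + \bigl(1-\ln 2\bigr) + \mathcal{O}(\epsilon),
```

so the divergence is factorized into a single variable and the Laurent coefficients can be evaluated numerically, which is what the program automates for multi-loop integrands.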
Decomposition and extraction: a new framework for visual classification.
Fang, Yuqiang; Chen, Qiang; Sun, Lin; Dai, Bin; Yan, Shuicheng
2014-08-01
In this paper, we present a novel framework for visual classification based on hierarchical image decomposition and hybrid midlevel feature extraction. Unlike most midlevel feature learning methods, which focus on the process of coding or pooling, we emphasize that the mechanism of image composition also strongly influences the feature extraction. To effectively explore the image content for the feature extraction, we model a multiplicity feature representation mechanism through meaningful hierarchical image decomposition followed by a fusion step. In particular, we first propose a new hierarchical image decomposition approach in which each image is decomposed into a series of hierarchical semantic components, i.e., the structure and texture images. Then, different feature extraction schemes can be adopted to match the decomposed structure and texture processes in a dissociative manner. Here, two schemes are explored to produce property-related feature representations. One is based on a single-stage network over hand-crafted features and the other is based on a multistage network, which can learn features from raw pixels automatically. Finally, those multiple midlevel features are incorporated by solving a multiple kernel learning task. Extensive experiments are conducted on several challenging data sets for visual classification, and experimental results demonstrate the effectiveness of the proposed method.
[Research progress of multi-model medical image fusion and recognition].
Zhou, Tao; Lu, Huiling; Chen, Zhiqiang; Ma, Jingxian
2013-10-01
Medical image fusion and recognition has a wide range of applications, such as focal location, cancer staging and treatment effect assessment. Multi-model medical image fusion and recognition are analyzed and summarized in this paper. Firstly, the question of multi-model medical image fusion and recognition is discussed, and its advantages and key steps are outlined. Secondly, three fusion strategies are reviewed from the algorithmic point of view, and four fusion recognition structures are discussed. Thirdly, difficulties, challenges and possible future research directions are discussed.
Application of Direct Parallel Methods to Reconstruction and Forecasting Problems
NASA Astrophysics Data System (ADS)
Song, Changgeun
Many important physical processes in nature are represented by partial differential equations. Numerical weather prediction, in particular, requires vast computational resources. We investigate the significance of parallel processing technology to the real-world problem of atmospheric prediction. In this paper we consider the classic problem of decomposing the observed wind field into the irrotational and nondivergent components. Recognizing the fact that on a limited domain this problem has a non-unique solution, Lynch (1989) described eight different ways to accomplish the decomposition. One set of elliptic equations is associated with the decomposition; this determines the initial nondivergent state for the forecast model. It is shown that the entire decomposition problem can be solved in a fraction of a second using a multi-vector processor such as the ALLIANT FX/8. Secondly, the barotropic model is used to track hurricanes. Also, one set of elliptic equations is solved to recover the streamfunction from the forecasted vorticity. A 72 h prediction of Elena is made while it is in the Gulf of Mexico. During this time the hurricane executes a dramatic re-curvature that is captured by the model. Furthermore, an improvement in the track prediction results when a simple assimilation strategy is used. This technique makes use of the wind fields in the 24 h period immediately preceding the initial time for the prediction. In this particular application, solutions to systems of elliptic equations are at the center of the computational mechanics. We demonstrate that direct, parallel methods based on accelerated block cyclic reduction (BCR) significantly reduce the computational time required to solve the elliptic equations germane to the decomposition, the forecast and adjoint assimilation.
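The elliptic step described above (recovering a streamfunction from vorticity) can be made concrete with a small sketch. The dissertation solves the limited-domain problem with accelerated block cyclic reduction; the version below instead assumes a doubly periodic grid so that a plain FFT Poisson solver suffices, purely to illustrate inverting ∇²ψ = ζ.

```python
import numpy as np

def streamfunction_from_vorticity(zeta, dx, dy):
    """Solve del^2 psi = zeta on a doubly periodic grid with FFTs."""
    ny, nx = zeta.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                      # avoid division by zero for the mean mode
    psi_hat = -np.fft.fft2(zeta) / k2
    psi_hat[0, 0] = 0.0                 # fix the arbitrary constant (zero mean)
    return np.real(np.fft.ifft2(psi_hat))

# Check against an analytic case: psi = sin(x)cos(y) gives zeta = -2 sin(x)cos(y).
nx = 128
x = np.linspace(0, 2 * np.pi, nx, endpoint=False)
X, Y = np.meshgrid(x, x)
psi_true = np.sin(X) * np.cos(Y)
zeta = -2.0 * psi_true
psi = streamfunction_from_vorticity(zeta, x[1] - x[0], x[1] - x[0])
print(np.max(np.abs(psi - psi_true)))   # ~1e-15
```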
Stochastic Multi-Commodity Facility Location Based on a New Scenario Generation Technique
NASA Astrophysics Data System (ADS)
Mahootchi, M.; Fattahi, M.; Khakbazan, E.
2011-11-01
This paper extends two models for the stochastic multi-commodity facility location problem. The problem is formulated as a two-stage stochastic program. As a main point of this study, a new algorithm is applied to efficiently generate scenarios for uncertain, correlated customer demands. This algorithm uses Latin Hypercube Sampling (LHS) and a scenario reduction approach. The relation between customer satisfaction level and cost is considered in model I. A risk measure using Conditional Value-at-Risk (CVaR) is embedded into optimization model II. Here, the structure of the network contains three facility layers, including plants, distribution centers, and retailers. The first-stage decisions are the number, locations, and capacities of the distribution centers. In the second stage, the decisions are the amounts of production and the volumes of transportation between plants and customers.
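The scenario-generation step above can be sketched in a few lines. The snippet below pairs SciPy's Latin Hypercube sampler with a Cholesky transform to produce correlated demand scenarios; it is a simplification of the paper's algorithm (the scenario reduction step is omitted), and the means, standard deviations and correlation matrix are invented.

```python
import numpy as np
from scipy.stats import norm, qmc

n_scenarios, n_customers = 200, 3

# Target marginal means/stds and correlation of customer demands (hypothetical).
mean = np.array([100.0, 80.0, 120.0])
std = np.array([15.0, 10.0, 20.0])
corr = np.array([[1.0, 0.6, 0.3],
                 [0.6, 1.0, 0.4],
                 [0.3, 0.4, 1.0]])

# 1) Latin Hypercube sample on the unit cube, mapped to independent normals.
lhs = qmc.LatinHypercube(d=n_customers, seed=42).random(n_scenarios)
z = norm.ppf(lhs)

# 2) Impose correlation with a Cholesky factor, then rescale to the marginals.
L = np.linalg.cholesky(corr)
demands = mean + (z @ L.T) * std

print(demands.shape)                       # (200, 3)
print(np.corrcoef(demands, rowvar=False))  # approximately the target correlation
```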
Agile Multi-Scale Decompositions for Automatic Image Registration
NASA Technical Reports Server (NTRS)
Murphy, James M.; Leija, Omar Navarro; Le Moigne, Jacqueline
2016-01-01
In recent works, the first and third authors developed an automatic image registration algorithm based on a multiscale hybrid image decomposition with anisotropic shearlets and isotropic wavelets. This prototype showed strong performance, improving robustness over registration with wavelets alone. However, this method imposed a strict hierarchy on the order in which shearlet and wavelet features were used in the registration process, and also involved an unintegrated mixture of MATLAB and C code. In this paper, we introduce a more agile model for generating features, in which a flexible and user-guided mix of shearlet and wavelet features are computed. Compared to the previous prototype, this method introduces a flexibility to the order in which shearlet and wavelet features are used in the registration process. Moreover, the present algorithm is now fully coded in C, making it more efficient and portable than the MATLAB and C prototype. We demonstrate the versatility and computational efficiency of this approach by performing registration experiments with the fully-integrated C algorithm. In particular, meaningful timing studies can now be performed, to give a concrete analysis of the computational costs of the flexible feature extraction. Examples of synthetically warped and real multi-modal images are analyzed.
USDA-ARS?s Scientific Manuscript database
Although permafrost soils contain vast stores of carbon, we know relatively little about the chemical composition of their constituent organic matter. Soil organic matter chemistry is an important predictor of decomposition rates, especially in the initial stages of decomposition. Permafrost, organi...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khawli, Toufik Al; Eppelt, Urs; Hermanns, Torsten
2016-06-08
In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
NASA Astrophysics Data System (ADS)
Khawli, Toufik Al; Gebhardt, Sascha; Eppelt, Urs; Hermanns, Torsten; Kuhlen, Torsten; Schulz, Wolfgang
2016-06-01
In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
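As a pointer to how the first of the two sensitivity measures works, the sketch below computes elementary effects with a simple radial one-at-a-time design (a simplification of the full Morris trajectory design) on a toy function standing in for the laser-drilling metamodel; parameter names and bounds are invented, and the Sobol/variance-decomposition part is not shown.

```python
import numpy as np

def elementary_effects(model, bounds, n_base=50, delta=0.1, seed=1):
    """Radial one-at-a-time elementary effects: returns mu* and sigma per input."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    d = len(bounds)
    ee = np.zeros((n_base, d))
    for r in range(n_base):
        u = rng.uniform(0.0, 1.0 - delta, size=d)      # base point in the unit cube
        x0 = bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])
        y0 = model(x0)
        for i in range(d):
            x1 = x0.copy()
            x1[i] += delta * (bounds[i, 1] - bounds[i, 0])
            ee[r, i] = (model(x1) - y0) / delta
    return np.abs(ee).mean(axis=0), ee.std(axis=0)     # mu*, sigma

# Hypothetical stand-in for the laser-drilling metamodel.
def toy_metamodel(x):
    power, speed, focus = x
    return 2.0 * power + 0.5 * speed + power * focus

mu_star, sigma = elementary_effects(toy_metamodel, [(0, 1), (0, 1), (0, 1)])
print(mu_star, sigma)
```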
Identifying parasitic and saprotrophic interactions of freshwater chytrids with a microalga
NASA Astrophysics Data System (ADS)
Ward, C.; Longcore, J. E.; Carney, L. T.; Mayali, X.; Pett-Ridge, J.; Thelen, M. P.; Stuart, R.
2016-12-01
Despite having long been regarded as ecologically insignificant, aquatic fungi may be key regulators of carbon cycling in phytoplankton-dominated freshwater ecosystems. For several decades, it has been known that through infection chytrids and other parasitic fungi can cause major declines in natural algal populations and the release of large quantities of organic matter into the water column. Additionally, as in other environments, fungi may be critically important in the decomposition of refractory organic matter, although to our knowledge this has never been investigated in pelagic freshwater ecosystems. We have a limited understanding of how fungi can interact with phytoplankton or phytoplankton-derived organic matter, and logistical difficulties complicate their study in the environment. Here, we have developed a model green alga-chytrid system to characterize the interactions under varying host physiologies and to investigate how these interactions influence the physiological and metabolic outcomes of both members. Chytrid infection was clearly linked to algal growth stage: the fungal isolate belonging to Rhizophydiales was infective only in the late cyst stage, while the isolate belonging to Paraphysoderma could infect in both early and late cyst stages. To test whether freshwater chytrids can metabolize algal-derived organic matter, fungal isolates were grown axenically in algal spent media from different growth stages. The Rhizophydiales isolate grew on algal exudate from the early cyst stage, while the Paraphysoderma isolate grew on exudates from both growth stages. Ongoing work has focused on using biochemical and multi-omic approaches to study the mechanistic underpinnings of algal-fungal interactions and to better understand the factors contributing to growth stage- and strain-specific differences. Together, these findings suggest that fungi may play a dual role in regulating carbon cycling in freshwater ecosystems via parasitic and saprotrophic strategies. This research was supported by the U.S. DOE Office of Science through the Office of Biological and Environmental Research under FWP SCW1039 and the Office of Energy Efficiency and Renewable Energy under FWP 29886. Work was performed under the auspices of the U.S. Department of Energy under Contract DE-AC52-07NA27344.
Zhan, Liang; Liu, Yashu; Wang, Yalin; Zhou, Jiayu; Jahanshad, Neda; Ye, Jieping; Thompson, Paul M.
2015-01-01
Alzheimer's disease (AD) is a progressive brain disease. Accurate detection of AD and its prodromal stage, mild cognitive impairment (MCI), are crucial. There is also a growing interest in identifying brain imaging biomarkers that help to automatically differentiate stages of Alzheimer's disease. Here, we focused on brain structural networks computed from diffusion MRI and proposed a new feature extraction and classification framework based on higher order singular value decomposition and sparse logistic regression. In tests on publicly available data from the Alzheimer's Disease Neuroimaging Initiative, our proposed framework showed promise in detecting brain network differences that help in classifying different stages of Alzheimer's disease. PMID:26257601
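As a rough sketch of the pipeline summarized above (higher-order SVD for feature compression followed by sparse logistic regression), the snippet below applies an HOSVD-style projection to a stack of toy connectivity matrices and feeds the compressed cores to an L1-penalized classifier. The data and labels are random placeholders, not ADNI data, and the exact decomposition details of the paper may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def unfold(T, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# Toy data: 40 subjects, each with a 30x30 brain connectivity matrix (random here).
rng = np.random.default_rng(0)
X = rng.random((40, 30, 30))
y = rng.integers(0, 2, size=40)          # placeholder AD vs. control labels

# HOSVD-style factors of the node modes of the (subject x node x node) tensor.
U1 = np.linalg.svd(unfold(X, 1), full_matrices=False)[0]
U2 = np.linalg.svd(unfold(X, 2), full_matrices=False)[0]

# Project every subject onto the leading r factors; the small core matrices are features.
r = 5
features = np.array([(U1[:, :r].T @ A @ U2[:, :r]).ravel() for A in X])

# Sparse (L1-penalized) logistic regression on the compressed features.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
clf.fit(features, y)
print(clf.score(features, y))
```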
Linear and nonlinear variable selection in competing risks data.
Ren, Xiaowei; Li, Shanshan; Shen, Changyu; Yu, Zhangsheng
2018-06-15
The subdistribution hazard model for competing risks data has been applied extensively in clinical research. Variable selection methods for linear effects in competing risks data have been studied in the past decade. There is no existing work on the selection of potential nonlinear effects for the subdistribution hazard model. We propose a two-stage procedure to select the linear and nonlinear covariate(s) simultaneously and estimate the selected covariate effect(s). We use a spectral decomposition approach to distinguish the linear and nonlinear parts of each covariate and adaptive LASSO to select each of the two components. Extensive numerical studies are conducted to demonstrate that the proposed procedure can achieve good selection accuracy in the first stage and small estimation biases in the second stage. The proposed method is applied to analyze a cardiovascular disease data set with competing death causes. Copyright © 2018 John Wiley & Sons, Ltd.
Ge, Ni-Na; Wei, Yong-Kai; Zhao, Feng; Chen, Xiang-Rong; Ji, Guang-Fu
2014-07-01
The electronic structure and initial decomposition of the high explosive HMX under conditions of shock loading are examined. The simulation is performed using quantum molecular dynamics in conjunction with the multi-scale shock technique (MSST). A self-consistent charge density-functional tight-binding (SCC-DFTB) method is adopted. The results show that the N-N-C angle changes drastically under shock wave compression along lattice vector b at a shock velocity of 11 km/s, which is the main reason for the insulator-to-metal transition of the HMX system. The metallization pressure (about 130 GPa) of condensed-phase HMX is predicted for the first time. We also detect the formation of several key products of condensed-phase HMX decomposition, such as NO2, NO, N2, N2O, H2O, CO, and CO2, all of which have been observed in previous experimental studies. Moreover, the initial decomposition products include H2, due to C-H bond breaking as a primary reaction pathway under extreme conditions, which presents a new insight into the initial decomposition mechanism of HMX under shock loading at the atomistic level.
Tracing nitrogen accumulation in decaying wood and examining its impact on wood decomposition rate
NASA Astrophysics Data System (ADS)
Rinne, Katja T.; Rajala, Tiina; Peltoniemi, Krista; Chen, Janet; Smolander, Aino; Mäkipää, Raisa
2016-04-01
Decomposition of dead wood, which is controlled primarily by fungi, is important for the ecosystem carbon cycle and potentially has a significant role in nitrogen fixation via diazotrophs. Nitrogen content has been found to increase with advancing wood decay in several studies; however, the importance of this increase to decay rate and the sources of external nitrogen remain unclear. Improved knowledge of the temporal dynamics of wood decomposition rate and nitrogen accumulation in wood, as well as the drivers of the two processes, would be important for carbon and nitrogen models dealing with ecosystem responses to climate change. To tackle these questions we applied several analytical methods to Norway spruce logs from Lapinjärvi, Finland. We incubated wood samples (density classes from I to V, n=49) at different temperatures (from 8.5°C to 41°C, n=7). After a common seven-day pre-incubation period at 14.5°C, the bottles were incubated for six days at their designated temperature prior to CO2 flux measurements with GC to determine the decomposition rate. N2 fixation was measured with an acetylene reduction assay after a further 48-hour incubation. In addition, fungal DNA (MiSeq Illumina), δ15N and N% composition of wood were determined for samples incubated at 14.5°C. The radiocarbon method was applied to obtain an age distribution for the density classes. The asymbiotic N2 fixation rate was clearly dependent on the stage of wood decay and increased from stage I to stage IV but was substantially reduced in stage V. CO2 production was highest in the intermediate decay stages (classes II-IV). Both N2 fixation and CO2 production were highly temperature sensitive, with optima at 25°C and 31°C, respectively. We calculated the variation of annual levels of respiration and N2 fixation per hectare for the study site, and used the latter data together with the 14C results to determine the amount of N2 accumulated in wood over time. The proportion of total nitrogen in wood originating from N2 increased from 0.4% (class I) to 22% (class V). Despite significant N inputs, N2 fixation explained only 34%-57% of the increase in wood N content of classes III-V. The DNA results indicated that mycorrhizal colonization of wood could only partially explain the remaining increase in N content. However, the majority of the samples contained one or more wood-decomposing fungal species that have been reported to have the capability to produce rhizomorphs or mycelial cords used for scavenging nutrients from outside sources. Assuming that the remaining increase in N content was due to fungal activity, we modelled the δ15N variation of wood from class I to V and compared the modelled and measured δ15N values (r = 0.95, p<0.05). The increase in wood nitrogen content over time was observed to have a significant, positive impact on the respiration rate (I-IV: r = 0.57, p<0.01).
A Multi-Stage Reverse Logistics Network Problem by Using Hybrid Priority-Based Genetic Algorithm
NASA Astrophysics Data System (ADS)
Lee, Jeong-Eun; Gen, Mitsuo; Rhee, Kyong-Gu
Today, the remanufacturing problem is one of the most important problems regarding the environmental aspects of the recovery of used products and materials. Therefore, reverse logistics is gaining power and shows great potential for winning consumers in a more competitive context in the future. This paper considers the multi-stage reverse Logistics Network Problem (m-rLNP) while minimizing the total cost, which involves reverse logistics shipping cost and the fixed cost of opening the disassembly centers and processing centers. In this study, we first formulate the m-rLNP model as a three-stage logistics network model. Then, for solving this problem, we propose a Genetic Algorithm (priGA) with a priority-based encoding method consisting of two stages, and introduce a new crossover operator called Weight Mapping Crossover (WMX). Additionally, a heuristic approach is applied in the third stage to ship materials from processing centers to manufacturers. Finally, numerical experiments with various scales of m-rLNP models demonstrate the effectiveness and efficiency of our approach by comparison with recent research.
NASA Astrophysics Data System (ADS)
Li, Zhenhai; Li, Na; Li, Zhenhong; Wang, Jianwen; Liu, Chang
2017-10-01
Rapid real-time monitoring of wheat nitrogen (N) status is crucial for precision N management during wheat growth. In this study, a Multi Lookup Table (Multi-LUT) approach based on the N-PROSAIL model parameter settings at different growth stages was constructed to estimate canopy N density (CND) in winter wheat. The results showed that the estimated CND was in line with the measured CND, with a determination coefficient (R2) and corresponding root mean square error (RMSE) of 0.80 and 1.16 g m-2, respectively. The time required for one sample estimation was only 6 ms on a test machine with a quad-core Intel(R) Core(TM) i5-2430 CPU @ 2.40 GHz. These results confirm the potential of using the Multi-LUT approach for CND retrieval in winter wheat at different growth stages and under variable climatic conditions.
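The lookup-table retrieval described above can be illustrated with a minimal sketch. Since the N-PROSAIL radiative transfer model is not reproduced here, a toy forward model stands in for it; the sketch only demonstrates the LUT inversion step (pick the table entry whose simulated spectrum minimizes the RMSE against the measurement). Parameter grids, wavelengths and values are invented.

```python
import numpy as np

def toy_forward_model(cnd, lai, wavelengths):
    """Hypothetical stand-in for N-PROSAIL: maps (CND, LAI) to canopy reflectance."""
    return 0.3 * np.exp(-0.1 * cnd * wavelengths / 800.0) + 0.05 * lai

# 1) Build the lookup table for one growth stage (parameter grids are illustrative).
wavelengths = np.linspace(400, 900, 50)
cnd_grid = np.linspace(0.5, 8.0, 60)       # canopy N density, g m^-2
lai_grid = np.linspace(0.5, 6.0, 40)
params = np.array([(c, l) for c in cnd_grid for l in lai_grid])
lut = np.array([toy_forward_model(c, l, wavelengths) for c, l in params])

# 2) Invert a "measured" spectrum: pick the LUT entry with minimum RMSE.
true_cnd, true_lai = 3.2, 2.5
measured = toy_forward_model(true_cnd, true_lai, wavelengths)
measured += np.random.default_rng(0).normal(0, 0.002, measured.shape)

rmse = np.sqrt(np.mean((lut - measured) ** 2, axis=1))
best_cnd, best_lai = params[np.argmin(rmse)]
print("retrieved CND = %.2f g m-2, LAI = %.2f" % (best_cnd, best_lai))
```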
NASA Astrophysics Data System (ADS)
Schleuss, Per-Marten; Heitkamp, Felix; Seeber, Elke; Spielvogel, Sandra; Miehe, Georg; Guggenberger, Georg; Kuzyakov, Yakov
2015-04-01
Kobresia grasslands of the Tibetan Plateau cover an area of ca. 450,000 km2. They are of high global and regional importance as they store large amounts of carbon (C) and nitrogen (N) and provide food for grazing animals. However, intensive grassland degradation in recent decades has destroyed mainly the upper root-mat/soil horizon. This has dramatic consequences for SOC storage against the background of climate change and further grazing pressure. We investigated the impact of pasture degradation on SOC storage and hypothesized that SOC stocks strongly decreased due to a reduction of C input by roots as a consequence of vegetation cover loss by overgrazing, SOM decomposition and soil erosion. We selected a sequence of six degradation stages (DS1-6). As the initial trigger of grassland degradation, the high grazing pressure reduces the ability of Kobresia pastures to recover from disturbances (e.g. by freezing and drying events, herbivory, trampling). Once the root mats are destroyed, the occurring root-mat cracks increase due to soil erosion, SOC decomposition and trampling activities of livestock. The SOC stocks and contents decreased along the degradation sequence from intact to highly disturbed stages. Carbon stocks declined from intact Kobresia root mats (DS1) to bare soil patches (DS6) by about 70%. The thickness of the upper soil horizons strongly declined from DS1 to DS6. Considering the bare soil patches (DS6), on average 10 cm of the most fertile topsoil were removed. This clearly suggests that soil erosion strongly contributed to SOC losses, especially from the topsoil with the highest SOC contents. A strong decrease of the vegetation cover (mainly K. pygmaea) demonstrated that soil degradation also resulted in die-back of K. pygmaea. Consequently, root biomass decreased along the degradation sequence (DS1-2 > DS3-4 > DS5-6), indicating lower belowground C input from roots. We found decreasing δ13C values with increasing degradation stages within the upper 20 cm of soil. Higher δ13C values were found for intact root mats (DS1), whereas the lowest δ13C signatures occurred for the highly degraded stages (DS5-6). This observation seems to be unusual, because δ13C values are supposed to increase with increasing decomposition. However, the δ13C signatures agreed well with lignin contents, which increased along the degradation sequence. Since lignin is 13C depleted, the δ13C shift clearly indicates SOM decomposition and relative enrichment of lignin components. Using root biomass as an indicator for C input and δ13C values for SOM decomposition, we could explain 70% of the decrease in SOC contents using a multiple linear regression model. We conclude that grassland and soil degradation led to large SOC losses due to an absence of root C input, SOM decomposition and soil erosion.
Leung, Kevin; Budzien, Joanne L
2010-07-07
The decomposition of ethylene carbonate (EC) during the initial growth of solid-electrolyte interphase (SEI) films at the solvent-graphitic anode interface is critical to lithium ion battery operations. Ab initio molecular dynamics simulations of explicit liquid EC/graphite interfaces are conducted to study these electrochemical reactions. We show that carbon edge terminations are crucial at this stage, and that achievable experimental conditions can lead to surprisingly fast EC breakdown mechanisms, yielding decomposition products seen in experiments but not previously predicted.
Three essays on multi-level optimization models and applications
NASA Astrophysics Data System (ADS)
Rahdar, Mohammad
The general form of a multi-level mathematical programming problem is a set of nested optimization problems, in which each level controls a series of decision variables independently. However, the value of the decision variables may also impact the objective functions of other levels. A two-level model is called a bilevel model and can be considered as a Stackelberg game with a leader and a follower. The leader anticipates the response of the follower and optimizes its objective function, and then the follower reacts to the leader's action. The multi-level decision-making model has many real-world applications such as government decisions, energy policies, market economy, network design, etc. However, there is a lack of capable algorithms for solving medium- and large-scale problems of these types. The dissertation is devoted to both theoretical research and applications of multi-level mathematical programming models, and consists of three parts, each in a paper format. The first part studies the renewable energy portfolio under two major renewable energy policies. The potential competition for biomass for the growth of the renewable energy portfolio in the United States and other interactions between the two policies over the next twenty years are investigated. This problem mainly has two levels of decision makers: the government/policy makers and biofuel producers/electricity generators/farmers. We focus on the lower-level problem to predict the amount of capacity expansions, fuel production, and power generation. In the second part, we address uncertainty over demand and lead time in a multi-stage mathematical programming problem. We propose a two-stage tri-level optimization model based on the concept of a rolling-horizon approach to reduce the dimensionality of the multi-stage problem. In the third part of the dissertation, we introduce a new branch-and-bound algorithm to solve bilevel linear programming problems. The total time is reduced by solving a smaller relaxation problem at each node and decreasing the number of iterations. Computational experiments show that the proposed algorithm is faster than existing ones.
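For readers unfamiliar with the nesting described above, a generic bilevel linear program (the two-level special case) can be written as follows; the notation is generic and not taken from the dissertation:

```latex
\begin{aligned}
\min_{x \ge 0}\quad & c_1^{\top} x + d_1^{\top} y \\
\text{s.t.}\quad & A_1 x + B_1 y \le b_1, \\
& y \in \arg\min_{y' \ge 0}\ \bigl\{\, d_2^{\top} y' \;:\; A_2 x + B_2 y' \le b_2 \,\bigr\},
\end{aligned}
```

where the leader chooses x while anticipating that the follower will respond with a y that is optimal for the fixed x.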
Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen
2018-01-05
With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Furthermore, the SVDLP approach is tested and compared with MultiPlan on three clinical cases of varying complexities. In general, the plans generated by the SVDLP achieve steeper dose gradient, better conformity and less damage to normal tissues. In conclusion, the SVDLP approach effectively improves the quality of treatment plan due to the use of the complete beam search space. This challenging optimization problem with the complete beam search space is effectively handled by the proposed SVD acceleration.
NASA Astrophysics Data System (ADS)
Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen
2018-01-01
With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Furthermore, the SVDLP approach is tested and compared with MultiPlan on three clinical cases of varying complexities. In general, the plans generated by the SVDLP achieve steeper dose gradient, better conformity and less damage to normal tissues. In conclusion, the SVDLP approach effectively improves the quality of treatment plan due to the use of the complete beam search space. This challenging optimization problem with the complete beam search space is effectively handled by the proposed SVD acceleration.
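To make the SVD compression idea above concrete, here is a minimal sketch on a toy influence matrix: truncate the SVD, solve the optimization in the reduced row space, prune low-weight beams, and re-solve. A nonnegative least-squares fit stands in for the paper's full LP with hard/soft constraints and the l1 term, so this only illustrates the compression/back-projection step, not the SVDLP algorithm itself.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_voxels, n_beams = 2000, 300

# Toy, approximately low-rank influence (dose) matrix and a prescribed dose vector.
A = rng.random((n_voxels, 20)) @ rng.random((20, n_beams))
d = rng.random(n_voxels) * 60.0

# 1) Compress the rows of A with a truncated SVD of rank k.
k = 20
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_red = s[:k, None] * Vt[:k]          # k x n_beams reduced system
d_red = U[:, :k].T @ d                # k-dimensional target

# 2) Solve the reduced problem (NNLS here, in place of the full LP model).
w, _ = nnls(A_red, d_red)

# 3) Prune low-weight beams and re-solve with the remaining columns only.
keep = w > 1e-3 * w.max()
w2, _ = nnls(A_red[:, keep], d_red)
print(keep.sum(), "beams kept; relative dose error:",
      np.linalg.norm(A[:, keep] @ w2 - d) / np.linalg.norm(d))
```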
Boutkhoum, Omar; Hanine, Mohamed; Agouti, Tarik; Tikniouine, Abdessadek
2015-01-01
In this paper, we examine the issue of strategic industrial location selection in uncertain decision-making environments for implanting a new industrial corporation. In fact, the industrial location issue is typically considered a crucial factor in the business research field, which is related to many considerations about natural resources, distributors, suppliers, customers, and other factors. Based on the integration of environmental, economic and social decisive elements of sustainable development, this paper presents a hybrid decision-making model combining fuzzy multi-criteria analysis with the analytical capabilities that OLAP systems can provide for successful and optimal industrial location selection. The proposed model mainly consists of three stages. In the first stage, a decision-making committee is established to identify the evaluation criteria impacting the location selection process. In the second stage, we develop fuzzy AHP software based on the extent analysis method to assign importance weights to the selected criteria, which allows us to model linguistic vagueness, ambiguity, and incomplete knowledge. In the last stage, OLAP analysis integrated with multi-criteria analysis employs these weighted criteria as inputs to evaluate, rank and select the strategic industrial location for implanting a new business corporation in the region of Casablanca, Morocco. Finally, a sensitivity analysis is performed to evaluate the impact of the criteria weights and the preferences given by decision makers on the final rankings of strategic industrial locations.
Human decomposition and the reliability of a 'Universal' model for post mortem interval estimations.
Cockle, Diane L; Bell, Lynne S
2015-08-01
Human decomposition is a complex biological process driven by an array of variables which are not clearly understood. The medico-legal community have long been searching for a reliable method to establish the post-mortem interval (PMI) for those whose deaths have either been hidden, or gone un-noticed. To date, attempts to develop a PMI estimation method based on the state of the body either at the scene or at autopsy have been unsuccessful. One recent study has proposed that two simple formulae, based on the level of decomposition, humidity and temperature, could be used to accurately calculate the PMI for bodies outside, on or under the surface worldwide. This study attempted to validate 'Formula I' [1] (for bodies on the surface) using 42 Canadian cases with known PMIs. The results indicated that Formula I consistently overestimated the known PMI by a large and inconsistent margin for bodies exposed to warm temperatures, and for bodies exposed to cold and freezing temperatures (less than 4°C) the PMI was dramatically underestimated. The ability of 'Formula II' to estimate the PMI for buried bodies was also examined using a set of 22 known Canadian burial cases. As the cases used in this study are retrospective, some of the data needed for Formula II were not available. The 4.6 value used in Formula II to represent the standard ratio by which burial decelerates the rate of decomposition was examined. The average time taken to achieve each stage of decomposition both on and under the surface was compared for the 118 known cases. It was found that the rate of decomposition was not consistent throughout all stages of decomposition. The rates of autolysis above and below the ground were equivalent, with the buried cases staying in a state of putrefaction for a prolonged period of time. It is suggested that differences in temperature extremes and humidity levels between geographic regions may make it impractical to apply formulas developed in one region to any other region. These results also suggest that there are other variables, apart from temperature and humidity, that may impact the rate of human decomposition. These variables, or complex of variables, are considered regionally specific. Neither of the Universal Formulae performed well, and our results do not support the proposition of universality for PMI estimation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Le Pogam, Adrien; Hatt, Mathieu; Descourt, Patrice; Boussion, Nicolas; Tsoumpas, Charalampos; Turkheimer, Federico E.; Prunier-Aesch, Caroline; Baulieu, Jean-Louis; Guilloteau, Denis; Visvikis, Dimitris
2011-01-01
Purpose: Partial volume effects (PVE) are consequences of the limited spatial resolution in emission tomography, leading to under-estimation of uptake in tissues of size similar to the point spread function (PSF) of the scanner as well as activity spillover between adjacent structures. Among PVE correction methodologies, a voxel-wise mutual multi-resolution analysis (MMA) was recently introduced. MMA is based on the extraction and transformation of high resolution details from an anatomical image (MR/CT) and their subsequent incorporation into a low resolution PET image using wavelet decompositions. Although this method allows creating PVE-corrected images, it is based on a 2D global correlation model, which may introduce artefacts in regions where no significant correlation exists between anatomical and functional details. Methods: A new model was designed to overcome these two issues (2D only and global correlation) using a 3D wavelet decomposition process combined with a local analysis. The algorithm was evaluated on synthetic, simulated and patient images, and its performance was compared to the original approach as well as the geometric transfer matrix (GTM) method. Results: Quantitative performance was similar to the 2D global model and GTM in correlated cases. In cases where mismatches between anatomical and functional information were present, the new model outperformed the 2D global approach, avoiding artefacts and significantly improving the quality of the corrected images and their quantitative accuracy. Conclusions: A new 3D local model was proposed for voxel-wise PVE correction based on the original mutual multi-resolution analysis approach. Its evaluation demonstrated an improved and more robust qualitative and quantitative accuracy compared to the original MMA methodology, particularly in the absence of full correlation between anatomical and functional information. PMID:21978037
Corsini, Chiara; Baker, Catriona; Kung, Ethan; Schievano, Silvia; Arbia, Gregory; Baretta, Alessia; Biglino, Giovanni; Migliavacca, Francesco; Dubini, Gabriele; Pennati, Giancarlo; Marsden, Alison; Vignon-Clementel, Irene; Taylor, Andrew; Hsia, Tain-Yen; Dorfman, Adam
2014-01-01
In patients with congenital heart disease and a single ventricle (SV), ventricular support of the circulation is inadequate, and staged palliative surgery (usually 3 stages) is needed for treatment. In the various palliative surgical stages individual differences in the circulation are important and patient-specific surgical planning is ideal. In this study, an integrated approach between clinicians and engineers has been developed, based on patient-specific multi-scale models, and is here applied to predict stage 2 surgical outcomes. This approach involves four distinct steps: (1) collection of pre-operative clinical data from a patient presenting for SV palliation, (2) construction of the pre-operative model, (3) creation of feasible virtual surgical options which couple a three-dimensional model of the surgical anatomy with a lumped parameter model (LPM) of the remainder of the circulation and (4) performance of post-operative simulations to aid clinical decision making. The pre-operative model is described, agreeing well with clinical flow tracings and mean pressures. Two surgical options (bi-directional Glenn and hemi-Fontan operations) are virtually performed and coupled to the pre-operative LPM, with the hemodynamics of both options reported. Results are validated against postoperative clinical data. Ultimately, this work represents the first patient-specific predictive modeling of stage 2 palliation using virtual surgery and closed-loop multi-scale modeling.
NASA Astrophysics Data System (ADS)
Wu, Binlin
New near-infrared (NIR) diffuse optical tomography (DOT) approaches were developed to detect, locate, and image small targets embedded in highly scattering turbid media. The first approach, referred to as time reversal optical tomography (TROT), is based on time reversal (TR) imaging and multiple signal classification (MUSIC). The second approach uses the decomposition methods of non-negative matrix factorization (NMF) and principal component analysis (PCA) commonly used in blind source separation (BSS) problems, and compares the outcomes with those of optical imaging using independent component analysis (OPTICA). The goal is to develop a safe, affordable, noninvasive imaging modality for detection and characterization of breast tumors in early growth stages when they are more amenable to treatment. The efficacy of the approaches was tested using simulated data, and experiments involving model media and absorptive, scattering, and fluorescent targets, as well as a "realistic human breast model" composed of ex vivo breast tissues with embedded tumors. The experimental arrangements realized continuous wave (CW) multi-source probing of samples and multi-detector acquisition of the diffusely transmitted signal in a rectangular slab geometry. A data matrix was generated using the perturbation in the transmitted light intensity distribution due to the presence of absorptive or scattering targets. For fluorescent targets the data matrix was generated using the diffusely transmitted fluorescence signal distribution from the targets. The data matrix was analyzed using the different approaches to detect and characterize the targets. The salient features of the approaches include the ability to: (a) detect small targets; (b) provide three-dimensional locations of the targets with high accuracy (to within a millimeter or two); and (c) assess the optical strength of the targets. The approaches are less computation intensive and consequently faster than other inverse image reconstruction methods that attempt to reconstruct the optical properties of every voxel of the sample volume. The location of a target was estimated as the weighted center of the optical property of the target. Consequently, the locations of small targets were better specified than those of extended targets. It was more difficult to retrieve the size and shape of a target. The fluorescent measurements seemed to provide better accuracy than the transillumination measurements. In the case of ex vivo detection of tumors embedded in human breast tissue, measurements using multiple wavelengths provided more robust results and helped suppress artifacts (false positives) compared with single-wavelength measurements. The ability to detect and locate small targets, the speedier reconstruction, and fluorophore-specific multi-wavelength probing together have the potential to make these approaches suitable for breast cancer detection and diagnosis.
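The decomposition step in the second approach can be sketched with library routines. The snippet below builds a toy source-by-detector perturbation matrix containing two rank-one target contributions and factorizes it with scikit-learn's NMF and PCA; it only illustrates how the data matrix is decomposed into target-related components, not the full TROT/MUSIC or OPTICA pipelines, and all data are synthetic.

```python
import numpy as np
from sklearn.decomposition import NMF, PCA

rng = np.random.default_rng(0)
n_sources, n_detectors = 25, 36

# Toy perturbation data matrix: two embedded absorbers, each contributing a
# rank-one (source-profile x detector-profile) term, plus a little noise.
s1, d1 = rng.random(n_sources), rng.random(n_detectors)
s2, d2 = rng.random(n_sources), rng.random(n_detectors)
data = np.outer(s1, d1) + 0.7 * np.outer(s2, d2)
data += 0.01 * rng.random((n_sources, n_detectors))

# Non-negative matrix factorization: data ~ W H with W, H >= 0.
nmf = NMF(n_components=2, init="nndsvda", max_iter=1000, random_state=0)
W = nmf.fit_transform(data)     # source-side profiles of the two targets
H = nmf.components_             # detector-side profiles of the two targets

# PCA of the same matrix for comparison (components are orthogonal, may be signed).
pca = PCA(n_components=2)
scores = pca.fit_transform(data)
print(W.shape, H.shape, pca.explained_variance_ratio_)
```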
Ramping up to the Biology Workbench: A Multi-Stage Approach to Bioinformatics Education
ERIC Educational Resources Information Center
Greene, Kathleen; Donovan, Sam
2005-01-01
In the process of designing and field-testing bioinformatics curriculum materials, we have adopted a three-stage, progressive model that emphasizes collaborative scientific inquiry. The elements of the model include: (1) context setting, (2) introduction to concepts, processes, and tools, and (3) development of competent use of technologically…
Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale
NASA Astrophysics Data System (ADS)
Kreibich, Heidi; Schröter, Kai; Merz, Bruno
2016-05-01
Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, even more so in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities, which were affected during the 2002 flood by the River Mulde in Saxony, Germany, by comparison to official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling also on the meso-scale. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like BT-FLEMO used in this study, which inherently provide uncertainty information, is the way forward.
Kharmanda, G
2016-11-01
A new strategy of multi-objective structural optimization is integrated into the Austin-Moore prosthesis in order to improve its performance. The resulting model is the so-called Improved Austin-Moore prosthesis. The topology optimization is considered as a conceptual design stage to sketch several kinds of hollow stems according to the daily loading cases. The shape optimization presents the detailed design stage considering several objectives. Here, a new multiplicative formulation is proposed as a performance scale in order to define the best compromise between several requirements. Numerical applications on 2D and 3D problems are carried out to show the advantages of the proposed model.
Substrate quality alters microbial mineralization of added substrate and soil organic carbon
NASA Astrophysics Data System (ADS)
Jagadamma, S.; Mayes, M. A.; Steinweg, J. M.; Schaeffer, S. M.
2014-03-01
The rate and extent of decomposition of soil organic carbon (SOC) is dependent on substrate chemistry and microbial dynamics. Our objectives were to understand the influence of substrate chemistry on microbial processing of carbon (C), and to use model fitting to quantify differences in pool sizes and mineralization rates. We conducted an incubation experiment for 270 days using four uniformly-labeled 14C substrates (glucose, starch, cinnamic acid and stearic acid) on four different soils (a temperate Mollisol, a tropical Ultisol, a sub-arctic Andisol, and an arctic Gelisol). The 14C labeling enabled us to separate CO2 respired from added substrates and from native SOC. Microbial gene copy numbers were quantified at days 4, 30 and 270 using quantitative polymerase chain reaction (qPCR). Substrate C respiration was always higher for glucose than other substrates. Soils with cinnamic and stearic acid lost more native SOC than glucose- and starch-amended soils, despite an initial delay in respiration. Cinnamic and stearic acid amendments also exhibited higher fungal gene copy numbers at the end of incubation compared to unamended soils. We found that 270 days was sufficient to model decomposition of simple substrates (glucose and starch) with three pools, but was insufficient for more complex substrates (cinnamic and stearic acid) and native SOC. This study reveals that substrate quality imparts considerable control on microbial decomposition of newly added and native SOC, and demonstrates the need for multi-year incubation experiments to constrain decomposition parameters for the most recalcitrant fractions of SOC and added substrates.
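The pool-model fitting mentioned above (three pools for the simple substrates) can be sketched with a standard nonlinear fit. The snippet below fits a three-pool first-order model to synthetic cumulative respiration data with scipy.optimize.curve_fit; the functional form, parameter names and data are illustrative assumptions, not the study's actual model code or measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def three_pool(t, f1, k1, f2, k2, k3):
    """Cumulative fraction of substrate C respired from three first-order pools.
    Pool fractions sum to 1, so the third pool is 1 - f1 - f2."""
    f3 = 1.0 - f1 - f2
    return (f1 * (1 - np.exp(-k1 * t)) +
            f2 * (1 - np.exp(-k2 * t)) +
            f3 * (1 - np.exp(-k3 * t)))

# Synthetic "observed" cumulative respiration over a 270-day incubation.
t = np.linspace(0, 270, 28)
rng = np.random.default_rng(1)
obs = three_pool(t, 0.35, 0.25, 0.40, 0.02, 1e-4) + rng.normal(0, 0.01, t.size)

p0 = [0.3, 0.1, 0.3, 0.01, 1e-4]
bounds = ([0, 0, 0, 0, 0], [1, 5, 1, 1, 0.01])
popt, pcov = curve_fit(three_pool, t, obs, p0=p0, bounds=bounds)
print(dict(zip(["f1", "k1", "f2", "k2", "k3"], popt.round(4))))
```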
Analytical separations of mammalian decomposition products for forensic science: a review.
Swann, L M; Forbes, S L; Lewis, S W
2010-12-03
The study of mammalian soft tissue decomposition is an emerging area in forensic science, with a major focus of the research being the use of various chemical and biological methods to study the fate of human remains in the environment. Decomposition of mammalian soft tissue is a postmortem process that, depending on environmental conditions and physiological factors, will proceed until complete disintegration of the tissue. The major stages of decomposition involve complex reactions which result in the chemical breakdown of the body's main constituents; lipids, proteins, and carbohydrates. The first step to understanding this chemistry is identifying the compounds present in decomposition fluids and determining when they are produced. This paper provides an overview of decomposition chemistry and reviews recent advances in this area utilising analytical separation science. Copyright © 2010 Elsevier B.V. All rights reserved.
Studies on seasonal arthropod succession on carrion in the southeastern Iberian Peninsula.
Arnaldos, M I; Romera, E; Presa, J J; Luna, A; García, M D
2004-08-01
A global study of the sarcosaprophagous community that occurs in the southeastern Iberian Peninsula during all four seasons is made for the first time, and its diversity is described with reference to biological indices. A total of 18,179 adults and, additionally, a number of preimaginal states were collected. The results for the main arthropod groups, and their diversity are discussed in relation to the season and decompositional stages. The results provide an extensive inventory of carrion-associated arthropods. An association between decomposition stages and more representative arthropod groups is established. With respect to the biological indices applied, Margalef's index shows that the diversity of the community increases as the state of decomposition advances, while Sorenson's quantitative index shows that the greatest similarities are between spring and summer on the one hand, and fall and winter, on the other.
The Speech multi features fusion perceptual hash algorithm based on tensor decomposition
NASA Astrophysics Data System (ADS)
Huang, Y. B.; Fan, M. H.; Zhang, Q. Y.
2018-03-01
With constant progress in modern speech communication technologies, speech data are prone to being attacked by noise or maliciously tampered with. In order to give the speech perceptual hash algorithm strong robustness and high efficiency, this paper proposes a speech perceptual hash algorithm based on tensor decomposition and multiple features. The algorithm applies wavelet packet decomposition to obtain the speech components and extracts the LPCC, LSP and ISP features of each component to constitute a speech feature tensor. Speech authentication is performed by generating hash values through quantification of the feature matrix against its median value. Experimental results show that the proposed algorithm is robust to content-preserving operations compared with similar algorithms and is able to resist attacks from common background noise. The algorithm is also computationally efficient, so it is able to meet the real-time requirements of speech communication and to complete speech authentication quickly.
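As a rough illustration of the final quantification step described above, the sketch below binarizes a hypothetical speech feature matrix against its median value to form a perceptual hash and compares two hashes by bit error rate; the wavelet-packet decomposition, LPCC/LSP/ISP extraction and tensor fusion that would produce the feature matrix are assumed to happen elsewhere and are not the paper's actual implementation.

```python
import numpy as np

def perceptual_hash(feature_matrix: np.ndarray) -> np.ndarray:
    """Binarize a speech feature matrix against its median value.

    `feature_matrix` is assumed to hold per-frame features obtained after
    wavelet-packet decomposition and tensor-based fusion (not shown here).
    """
    threshold = np.median(feature_matrix)
    return (feature_matrix.ravel() > threshold).astype(np.uint8)

def bit_error_rate(h1: np.ndarray, h2: np.ndarray) -> float:
    """Fraction of differing hash bits; small values indicate a match."""
    return float(np.mean(h1 != h2))

# Toy usage: two slightly perturbed feature matrices should yield
# hashes with a low bit error rate.
rng = np.random.default_rng(0)
features = rng.normal(size=(40, 12))          # e.g. 40 frames x 12 coefficients
noisy = features + rng.normal(scale=0.05, size=features.shape)
print(bit_error_rate(perceptual_hash(features), perceptual_hash(noisy)))
```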
Multi-level basis selection of wavelet packet decomposition tree for heart sound classification.
Safara, Fatemeh; Doraisamy, Shyamala; Azman, Azreen; Jantan, Azrul; Abdullah Ramaiah, Asri Ranga
2013-10-01
Wavelet packet transform decomposes a signal into a set of orthonormal bases (nodes) and provides opportunities to select an appropriate set of these bases for feature extraction. In this paper, multi-level basis selection (MLBS) is proposed to preserve the most informative bases of a wavelet packet decomposition tree through removing less informative bases by applying three exclusion criteria: frequency range, noise frequency, and energy threshold. MLBS achieved an accuracy of 97.56% for classifying normal heart sound, aortic stenosis, mitral regurgitation, and aortic regurgitation. MLBS is a promising basis selection to be suggested for signals with a small range of frequencies. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
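A minimal sketch of the basis-selection idea is given below, assuming PyWavelets for the wavelet packet tree; the band-limit and energy criteria and their thresholds are illustrative placeholders, not the exact exclusion rules of MLBS.

```python
import numpy as np
import pywt

def select_bases(signal, fs, wavelet="db4", level=4,
                 f_low=20.0, f_high=500.0, energy_frac=0.01):
    """Keep wavelet-packet nodes whose frequency band lies within
    [f_low, f_high] Hz and whose energy exceeds a fraction of the total.

    Simplified sketch; the paper's noise-frequency criterion and full
    multi-level traversal are not reproduced here.
    """
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="freq")         # nodes ordered by band
    band = fs / 2.0 / len(nodes)                       # width of one node's band
    energies = np.array([np.sum(np.square(n.data)) for n in nodes])
    total = energies.sum()
    kept = []
    for i, (node, e) in enumerate(zip(nodes, energies)):
        lo, hi = i * band, (i + 1) * band
        if hi >= f_low and lo <= f_high and e >= energy_frac * total:
            kept.append((node.path, e))                # informative bases
    return kept
```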
Automatic image enhancement based on multi-scale image decomposition
NASA Astrophysics Data System (ADS)
Feng, Lu; Wu, Zhuangzhi; Pei, Luo; Long, Xiong
2014-01-01
In image processing and computational photography, automatic image enhancement is one of the long-standing objectives. Recent automatic image enhancement methods take account not only of the global semantics, such as correcting color hue and brightness imbalances, but also of the local content of the image, such as a human face or the sky of a landscape. In this paper we describe a new scheme for automatic image enhancement that considers both the global semantics and the local content of the image. Our automatic image enhancement method employs a multi-scale edge-aware image decomposition approach to detect underexposed regions and enhance the detail of the salient content. The experimental results demonstrate the effectiveness of our approach compared to existing automatic enhancement methods.
Applying a particle filtering technique for canola crop growth stage estimation in Canada
NASA Astrophysics Data System (ADS)
Sinha, Abhijit; Tan, Weikai; Li, Yifeng; McNairn, Heather; Jiao, Xianfeng; Hosseini, Mehdi
2017-10-01
Accurate crop growth stage estimation is important in precision agriculture as it facilitates improved crop management, pest and disease mitigation and resource planning. Earth observation imagery, specifically Synthetic Aperture Radar (SAR) data, can provide field level growth estimates while covering regional scales. In this paper, RADARSAT-2 quad polarization and TerraSAR-X dual polarization SAR data and ground truth growth stage data are used to model the influence of canola growth stages on SAR imagery extracted parameters. The details of the growth stage modeling work are provided, including a) the development of a new crop growth stage indicator that is continuous and suitable as the state variable in the dynamic estimation procedure; b) a selection procedure for SAR polarimetric parameters that is sensitive to both linear and nonlinear dependency between variables; and c) procedures for compensation of SAR polarimetric parameters for different beam modes. The data was collected over three crop growth seasons in Manitoba, Canada, and the growth model provides the foundation of a novel dynamic filtering framework for real-time estimation of canola growth stages using the multi-sensor and multi-mode SAR data. A description of the dynamic filtering framework that uses particle filter as the estimator is also provided in this paper.
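The record describes the estimator only at a high level; a generic bootstrap particle filter for a scalar growth-stage indicator might look like the sketch below, where the transition and observation models are illustrative stand-ins rather than the calibrated SAR-based models of the study.

```python
import numpy as np

def particle_filter(observations, n_particles=1000,
                    process_std=0.1, obs_std=0.5, seed=0):
    """Bootstrap particle filter for a scalar crop growth-stage indicator.

    The state transition (slow monotone growth) and the observation model
    (SAR parameter = state + noise) are placeholders, not the paper's models.
    """
    rng = np.random.default_rng(seed)
    particles = rng.uniform(0.0, 1.0, n_particles)     # initial stage guesses
    estimates = []
    for z in observations:
        # Predict: growth drifts forward with some randomness.
        particles = particles + np.abs(rng.normal(0.05, process_std, n_particles))
        # Update: weight particles by the likelihood of the observation.
        weights = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        weights /= weights.sum()
        # Resample (systematic resampling would also work here).
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)
```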
NASA Astrophysics Data System (ADS)
Isidorov, Valery; Tyszkiewicz, Zofia; Pirożnikow, Ewa
2016-04-01
Leaf litter fungi are partly responsible for the decomposition of dead material, nutrient mobilization and gas fluxes in forest ecosystems. It can be assumed that microbial destruction of dead plant material is an important source of volatile organic compounds (VOCs) emitted into the atmosphere from terrestrial ecosystems. However, little information is available on both the composition of fungal VOCs and their producers, whose community can change at different stages of litter decomposition. The fungal community succession was investigated in a litter bag experiment with Scots pine (Pinus sylvestris) and Norway spruce (Picea abies) needle litter. The succession process can be divided into several stages controlled mostly by changes in litter quality. At the very first stages of decomposition the needle litter was colonized by ascomycetes which can use readily available carbohydrates. At the later stages, the predominance of Trichoderma sp., known producers of cellulolytic enzymes, was documented. To investigate the fungi-derived VOCs, eight fungal species were isolated. As a result of gas chromatographic analyses, as many as 75 C2-C15 fungal volatile compounds were identified. Most components detected in the emissions were very reactive substances: the principal groups of VOCs were formed by monoterpenes, carbonyl compounds and aliphatic alcohols. It was found that the production of VOCs by fungi is species specific: only 10 metabolites were emitted into the gas phase by all eight species. The reported data confirm that leaf litter decomposition is an important source of reactive organic compounds under the forest canopy.
Prado e Castro, Catarina; García, María Dolores; Martins da Silva, Pedro; Faria e Silva, Israel; Serrano, Artur
2013-10-10
Some Coleoptera are recognised as being forensically important as post-mortem interval (PMI) indicators, especially in the later stages of cadaver decomposition. Because insect species and their timings of appearance in cadavers vary according to geographic location, it is important to know their succession patterns, as well as seasonality at a regional level. In this study, we aimed to contribute to broaden this knowledge by surveying beetle communities from the Lisbon area during the four seasons of the year, using piglet carcasses as animal models. Five stages were recognised during the decomposition process and they could be separated taking into account the occurrence and abundance of the specific groups of Coleoptera collected. Decay stages in general recorded higher abundance and richness of beetle species. A total of 82 species were identified, belonging to 28 families, in a total of 1968 adult Coleoptera collected. Autumn yielded the highest values of species abundance and richness, while the lowest values were recorded during winter. Staphylinidae was the most abundant family in all seasons, although in spring and summer Dermestidae was also quite dominant. In general, most species were related to the decay stages, particularly Margarinotus brunneus (Histeridae) and Creophilus maxillosus (Staphylinidae), and also Saprinus detersus (Histeridae) and Thanatophilus sinuatus (Silphidae), while only few were related to the dry stage, namely Oligota pusillima (Staphylinidae) and Dermestidae spp. larvae. On the other hand, Anotylus complanatus and Atheta pertyi (Staphylinidae) were apparently more associated with the fresh and bloated stages, respectively. The presence of some species was markedly seasonal, allowing a season characterisation based on the occurrence of certain taxa, which can be useful for forensic purposes. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Yasir, Muhammad Naveed; Koh, Bong-Hwan
2018-01-01
This paper presents the local mean decomposition (LMD) integrated with multi-scale permutation entropy (MPE), also known as LMD-MPE, to investigate the rolling element bearing (REB) fault diagnosis from measured vibration signals. First, the LMD decomposed the vibration data or acceleration measurement into separate product functions that are composed of both amplitude and frequency modulation. MPE then calculated the statistical permutation entropy from the product functions to extract the nonlinear features to assess and classify the condition of the healthy and damaged REB system. The comparative experimental results of the conventional LMD-based multi-scale entropy and MPE were presented to verify the authenticity of the proposed technique. The study found that LMD-MPE’s integrated approach provides reliable, damage-sensitive features when analyzing the bearing condition. The results of REB experimental datasets show that the proposed approach yields more vigorous outcomes than existing methods. PMID:29690526
Yasir, Muhammad Naveed; Koh, Bong-Hwan
2018-04-21
This paper presents the local mean decomposition (LMD) integrated with multi-scale permutation entropy (MPE), also known as LMD-MPE, to investigate the rolling element bearing (REB) fault diagnosis from measured vibration signals. First, the LMD decomposed the vibration data or acceleration measurement into separate product functions that are composed of both amplitude and frequency modulation. MPE then calculated the statistical permutation entropy from the product functions to extract the nonlinear features to assess and classify the condition of the healthy and damaged REB system. The comparative experimental results of the conventional LMD-based multi-scale entropy and MPE were presented to verify the authenticity of the proposed technique. The study found that LMD-MPE’s integrated approach provides reliable, damage-sensitive features when analyzing the bearing condition. The results of REB experimental datasets show that the proposed approach yields more vigorous outcomes than existing methods.
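A minimal sketch of the multi-scale permutation entropy feature used after the LMD step might look as follows; the embedding order, the number of scales and the product functions fed into it are assumptions, not the authors' settings.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, m=3, delay=1):
    """Normalized permutation entropy of a 1-D signal (embedding order m)."""
    n = len(x) - (m - 1) * delay
    patterns = {}
    for i in range(n):
        pattern = tuple(np.argsort(x[i:i + m * delay:delay]))
        patterns[pattern] = patterns.get(pattern, 0) + 1
    p = np.array(list(patterns.values()), dtype=float) / n
    return -np.sum(p * np.log(p)) / np.log(factorial(m))

def multiscale_permutation_entropy(x, m=3, max_scale=10):
    """Coarse-grain the signal at scales 1..max_scale and compute PE at each.

    The resulting vector would serve as the damage-sensitive feature for one
    LMD product function (the LMD step itself is not shown)."""
    x = np.asarray(x, dtype=float)
    mpe = []
    for s in range(1, max_scale + 1):
        n_win = len(x) // s
        coarse = x[:n_win * s].reshape(n_win, s).mean(axis=1)
        mpe.append(permutation_entropy(coarse, m=m))
    return np.array(mpe)
```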
Shedden, Kerby; Taylor, Jeremy M.G.; Enkemann, Steve A.; Tsao, Ming S.; Yeatman, Timothy J.; Gerald, William L.; Eschrich, Steve; Jurisica, Igor; Venkatraman, Seshan E.; Meyerson, Matthew; Kuick, Rork; Dobbin, Kevin K.; Lively, Tracy; Jacobson, James W.; Beer, David G.; Giordano, Thomas J.; Misek, David E.; Chang, Andrew C.; Zhu, Chang Qi; Strumpf, Dan; Hanash, Samir; Shepherd, Francis A.; Ding, Kuyue; Seymour, Lesley; Naoki, Katsuhiko; Pennell, Nathan; Weir, Barbara; Verhaak, Roel; Ladd-Acosta, Christine; Golub, Todd; Gruidl, Mike; Szoke, Janos; Zakowski, Maureen; Rusch, Valerie; Kris, Mark; Viale, Agnes; Motoi, Noriko; Travis, William; Sharma, Anupama
2009-01-01
Although prognostic gene expression signatures for survival in early stage lung cancer have been proposed, for clinical application it is critical to establish their performance across different subject populations and in different laboratories. Here we report a large, training-testing, multi-site blinded validation study to characterize the performance of several prognostic models based on gene expression for 442 lung adenocarcinomas. The hypotheses proposed examined whether microarray measurements of gene expression either alone or combined with basic clinical covariates (stage, age, sex) can be used to predict overall survival in lung cancer subjects. Several models examined produced risk scores that substantially correlated with actual subject outcome. Most methods performed better with clinical data, supporting the combined use of clinical and molecular information when building prognostic models for early stage lung cancer. This study also provides the largest available set of microarray data with extensive pathological and clinical annotation for lung adenocarcinomas. PMID:18641660
NASA Astrophysics Data System (ADS)
Afzal, Peyman; Mirzaei, Misagh; Yousefi, Mahyar; Adib, Ahmad; Khalajmasoumi, Masoumeh; Zarifi, Afshar Zia; Foster, Patrick; Yasrebi, Amir Bijan
2016-07-01
Recognition of significant geochemical signatures and separation of geochemical anomalies from background are critical issues in interpretation of stream sediment data to define exploration targets. In this paper, we used staged factor analysis in conjunction with the concentration-number (C-N) fractal model to generate exploration targets for prospecting Cr and Fe mineralization in Balvard area, SE Iran. The results show coexistence of derived multi-element geochemical signatures of the deposit-type sought and ultramafic-mafic rocks in the NE and northern parts of the study area indicating significant chromite and iron ore prospects. In this regard, application of staged factor analysis and fractal modeling resulted in recognition of significant multi-element signatures that have a high spatial association with host lithological units of the deposit-type sought, and therefore, the generated targets are reliable for further prospecting of the deposit in the study area.
O’Halloran, Lydia R.; Borer, Elizabeth T.; Seabloom, Eric W.; MacDougall, Andrew S.; Cleland, Elsa E.; McCulley, Rebecca L.; Hobbie, Sarah; Harpole, W. Stan; DeCrappeo, Nicole M.; Chu, Chengjin; Bakker, Jonathan D.; Davies, Kendi F.; Du, Guozhen; Firn, Jennifer; Hagenah, Nicole; Hofmockel, Kirsten S.; Knops, Johannes M. H.; Li, Wei; Melbourne, Brett A.; Morgan, John W.; Orrock, John L.; Prober, Suzanne M.; Stevens, Carly J.
2013-01-01
Based on regional-scale studies, aboveground production and litter decomposition are thought to positively covary, because they are driven by shared biotic and climatic factors. Until now we have been unable to test whether production and decomposition are generally coupled across climatically dissimilar regions, because we lacked replicated data collected within a single vegetation type across multiple regions, obfuscating the drivers and generality of the association between production and decomposition. Furthermore, our understanding of the relationships between production and decomposition rests heavily on separate meta-analyses of each response, because no studies have simultaneously measured production and the accumulation or decomposition of litter using consistent methods at globally relevant scales. Here, we use a multi-country grassland dataset collected using a standardized protocol to show that live plant biomass (an estimate of aboveground net primary production) and litter disappearance (represented by mass loss of aboveground litter) do not strongly covary. Live biomass and litter disappearance varied at different spatial scales. There was substantial variation in live biomass among continents, sites and plots whereas among continent differences accounted for most of the variation in litter disappearance rates. Although there were strong associations among aboveground biomass, litter disappearance and climatic factors in some regions (e.g. U.S. Great Plains), these relationships were inconsistent within and among the regions represented by this study. These results highlight the importance of replication among regions and continents when characterizing the correlations between ecosystem processes and interpreting their global-scale implications for carbon flux. We must exercise caution in parameterizing litter decomposition and aboveground production in future regional and global carbon models as their relationship is complex. PMID:23405103
O’Halloran, Lydia R.; Borer, Elizabeth T.; Seabloom, Eric W.; MacDougall, Andrew S.; Cleland, Elsa E.; McCulley, Rebecca L.; Hobbie, Sarah; Harpole, W. Stan; DeCrappeo, Nicole M.; Chu, Cheng-Jin; Bakker, Jonathan D.; Davies, Kendi F.; Du, Guozhen; Firn, Jennifer; Hagenah, Nicole; Hofmockel, Kirsten S.; Knops, Johannes M.H.; Li, Wei; Melbourne, Brett A.; Morgan, John W.; Orrock, John L.; Prober, Suzanne M.; Stevens, Carly J.
2013-01-01
Based on regional-scale studies, aboveground production and litter decomposition are thought to positively covary, because they are driven by shared biotic and climatic factors. Until now we have been unable to test whether production and decomposition are generally coupled across climatically dissimilar regions, because we lacked replicated data collected within a single vegetation type across multiple regions, obfuscating the drivers and generality of the association between production and decomposition. Furthermore, our understanding of the relationships between production and decomposition rests heavily on separate meta-analyses of each response, because no studies have simultaneously measured production and the accumulation or decomposition of litter using consistent methods at globally relevant scales. Here, we use a multi-country grassland dataset collected using a standardized protocol to show that live plant biomass (an estimate of aboveground net primary production) and litter disappearance (represented by mass loss of aboveground litter) do not strongly covary. Live biomass and litter disappearance varied at different spatial scales. There was substantial variation in live biomass among continents, sites and plots whereas among continent differences accounted for most of the variation in litter disappearance rates. Although there were strong associations among aboveground biomass, litter disappearance and climatic factors in some regions (e.g. U.S. Great Plains), these relationships were inconsistent within and among the regions represented by this study. These results highlight the importance of replication among regions and continents when characterizing the correlations between ecosystem processes and interpreting their global-scale implications for carbon flux. We must exercise caution in parameterizing litter decomposition and aboveground production in future regional and global carbon models as their relationship is complex.
Assessing the effect of different treatments on decomposition rate of dairy manure.
Khalil, Tariq M; Higgins, Stewart S; Ndegwa, Pius M; Frear, Craig S; Stöckle, Claudio O
2016-11-01
Confined animal feeding operations (CAFOs) contribute to greenhouse gas emission, but the magnitude of these emissions as a function of operation size, infrastructure, and manure management is difficult to assess. Modeling is a viable option to estimate gaseous emission and nutrient flows from CAFOs. These models use a decomposition rate constant for carbon mineralization. However, this constant is usually determined assuming a homogenous mix of manure, ignoring the effects of emerging manure treatments. The aim of this study was to measure and compare the decomposition rate constants of dairy manure in single and three-pool decomposition models, and to develop an empirical model based on chemical composition of manure for prediction of a decomposition rate constant. Decomposition rate constants of manure before and after an anaerobic digester (AD), following coarse fiber separation, and fine solids removal were determined under anaerobic conditions for single and three-pool decomposition models. The decomposition rates of treated manure effluents differed significantly from untreated manure for both single and three-pool decomposition models. In the single-pool decomposition model, AD effluent containing only suspended solids had a relatively high decomposition rate of 0.060 d⁻¹, while liquid with coarse fiber and fine solids removed had the lowest rate of 0.013 d⁻¹. In the three-pool decomposition model, fast and slow decomposition rate constants (0.25 d⁻¹ and 0.016 d⁻¹, respectively) of untreated AD influent were also significantly different from treated manure fractions. A regression model to predict the decomposition rate of treated dairy manure fitted well (R² = 0.83) to observed data. Copyright © 2016 Elsevier Ltd. All rights reserved.
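The single- and three-pool models referred to above are sums of first-order decay terms; a hedged sketch of fitting them to hypothetical remaining-carbon data with SciPy is shown below (the data and starting values are made up for illustration).

```python
import numpy as np
from scipy.optimize import curve_fit

def single_pool(t, c0, k):
    """Fraction of substrate C remaining under first-order decay."""
    return c0 * np.exp(-k * t)

def three_pool(t, f_fast, f_slow, k_fast, k_slow, k_resist):
    """Three-pool first-order model: fast, slow and resistant fractions."""
    f_resist = 1.0 - f_fast - f_slow
    return (f_fast * np.exp(-k_fast * t)
            + f_slow * np.exp(-k_slow * t)
            + f_resist * np.exp(-k_resist * t))

# Hypothetical remaining-C measurements (fraction of initial C) over 90 days.
t_obs = np.array([0, 3, 7, 14, 30, 60, 90], dtype=float)
c_obs = np.array([1.00, 0.83, 0.72, 0.63, 0.55, 0.50, 0.47])

(k1,), _ = curve_fit(lambda t, k: single_pool(t, 1.0, k), t_obs, c_obs, p0=[0.05])
p3, _ = curve_fit(three_pool, t_obs, c_obs,
                  p0=[0.3, 0.4, 0.25, 0.016, 0.001],
                  bounds=(0, [1, 1, 5, 1, 0.1]))
print(f"single-pool k = {k1:.3f} per day, three-pool k_fast = {p3[2]:.3f} per day")
```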
USDA-ARS?s Scientific Manuscript database
Integrated multi-trophic aquaculture is a promising direction for the sustainable development of aquaculture. Instead of releasing nutrition-rich waste to the environment or decomposition of nutrients via the biofilter, the ‘waste’ from fish can be recycled to produce byproducts (e.g., algae, plants...
NASA Astrophysics Data System (ADS)
Voit, E. I.; Didenko, N. A.; Gaivoronskaya, K. A.
2018-03-01
Thermal decomposition of (NH4)2ZrF6 resulting in ZrO2 formation within the temperature range of 20°-750°C has been investigated by means of thermal and X-ray diffraction analysis and IR and Raman spectroscopy. It has been established that thermolysis proceeds in six stages. The vibrational-spectroscopy data for the intermediate products of thermal decomposition have been obtained, systematized, and summarized.
Using dynamic mode decomposition for real-time background/foreground separation in video
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kutz, Jose Nathan; Grosek, Jacob; Brunton, Steven
The technique of dynamic mode decomposition (DMD) is disclosed herein for the purpose of robustly separating video frames into background (low-rank) and foreground (sparse) components in real-time. Foreground/background separation is achieved at the computational cost of just one singular value decomposition (SVD) and one linear equation solve, thus producing results orders of magnitude faster than robust principal component analysis (RPCA). Additional techniques, including techniques for analyzing the video for multi-resolution time-scale components, and techniques for reusing computations to allow processing of streaming video in real time, are also described herein.
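A compact in-memory sketch of the DMD-based separation, assuming the video frames are stored as columns of a matrix, is given below; the streaming, multi-resolution and computation-reuse techniques of the disclosure are not reproduced.

```python
import numpy as np

def dmd_background(frames, dt=1.0, rank=10, tol=1e-2):
    """Split frames into low-rank background and sparse foreground via DMD.

    `frames` is a (n_pixels, n_frames) array; this sketch uses one truncated
    SVD and one least-squares solve, as in the basic DMD formulation.
    """
    X, Xp = frames[:, :-1], frames[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :rank], s[:rank], Vh[:rank].conj().T
    A_tilde = U.conj().T @ Xp @ V / s              # reduced linear operator
    lam, W = np.linalg.eig(A_tilde)
    Phi = Xp @ V / s @ W                           # DMD modes
    omega = np.log(lam) / dt                       # continuous-time frequencies
    b = np.linalg.lstsq(Phi, frames[:, 0], rcond=None)[0]
    bg_idx = np.abs(omega) < tol                   # near-zero frequency = background
    t = np.arange(frames.shape[1]) * dt
    time_dynamics = b[bg_idx, None] * np.exp(np.outer(omega[bg_idx], t))
    background = np.real(Phi[:, bg_idx] @ time_dynamics)
    foreground = frames - background               # sparse moving objects
    return background, foreground
```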
A conflict model for the international hazardous waste disposal dispute.
Hu, Kaixian; Hipel, Keith W; Fang, Liping
2009-12-15
A multi-stage conflict model is developed to analyze international hazardous waste disposal disputes. More specifically, the ongoing toxic waste conflicts are divided into two stages consisting of the dumping prevention and dispute resolution stages. The modeling and analyses, based on the methodology of graph model for conflict resolution (GMCR), are used in both stages in order to grasp the structure and implications of a given conflict from a strategic viewpoint. Furthermore, a specific case study is investigated for the Ivory Coast hazardous waste conflict. In addition to the stability analysis, sensitivity and attitude analyses are conducted to capture various strategic features of this type of complicated dispute.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Liang, E-mail: liang.wang@unh.edu; Germaschewski, K.; Hakim, Ammar H.
2015-01-15
We introduce an extensible multi-fluid moment model in the context of collisionless magnetic reconnection. This model evolves the full Maxwell equations and simultaneously moments of the Vlasov-Maxwell equation for each species in the plasma. Effects like electron inertia and pressure gradient are self-consistently embedded in the resulting multi-fluid moment equations, without the need to explicitly solve a generalized Ohm's law. Two limits of the multi-fluid moment model are discussed, namely, the five-moment limit that evolves a scalar pressure for each species and the ten-moment limit that evolves the full anisotropic, non-gyrotropic pressure tensor for each species. We first demonstrate analytically and numerically that the five-moment model reduces to the widely used Hall magnetohydrodynamics (Hall MHD) model under the assumptions of vanishing electron inertia, infinite speed of light, and quasi-neutrality. Then, we compare ten-moment and fully kinetic particle-in-cell (PIC) simulations of a large scale Harris sheet reconnection problem, where the ten-moment equations are closed with a local linear collisionless approximation for the heat flux. The ten-moment simulation gives reasonable agreement with the PIC results regarding the structures and magnitudes of the electron flows, the polarities and magnitudes of elements of the electron pressure tensor, and the decomposition of the generalized Ohm's law. Possible ways to improve the simple local closure towards a nonlocal fully three-dimensional closure are also discussed.
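For orientation, a commonly quoted textbook form of the five-moment (scalar-pressure) equations for each species s, coupled to the full Maxwell equations, is sketched below; the paper's exact notation and its ten-moment extension with the full pressure tensor are not reproduced here.

```latex
\begin{aligned}
&\partial_t n_s + \nabla\!\cdot\!\left(n_s \mathbf{u}_s\right) = 0,\\
&m_s n_s\left(\partial_t \mathbf{u}_s + \mathbf{u}_s\!\cdot\!\nabla\mathbf{u}_s\right)
   = q_s n_s\left(\mathbf{E} + \mathbf{u}_s\times\mathbf{B}\right) - \nabla p_s,\\
&\partial_t p_s + \nabla\!\cdot\!\left(p_s\mathbf{u}_s\right)
   = -(\gamma-1)\,p_s\,\nabla\!\cdot\!\mathbf{u}_s .
\end{aligned}
```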
NASA Astrophysics Data System (ADS)
Jeloaica, L.; Estève, A.; Djafari Rouhani, M.; Estève, D.
2003-07-01
The initial stage of atomic layer deposition of HfO2, ZrO2, and Al2O3 high-k films, i.e., the decomposition of HfCl4, ZrCl4, and Al(CH3)3 precursor molecules on an OH-terminated SiO2 surface, is investigated within density functional theory. The energy barriers are determined using artificial activation of vibrational normal modes. For all precursors, reaction proceeds through the formation of intermediate complexes that have equivalent formation energies (˜-0.45 eV), and results in HCl and CH4 formation with activation energies of 0.88, 0.91, and 1.04 eV for Hf, Zr, and Al based precursors, respectively. The reaction product of Al(CH3)3 decomposition is found to be more stable (by -1.45 eV) than the chemisorbed intermediate complex compared to the endothermic decomposition of HfCl4 and ZrCl4 chemisorbed precursors (0.26 and 0.29 eV, respectively).
Path planning of decentralized multi-quadrotor based on fuzzy-cell decomposition algorithm
NASA Astrophysics Data System (ADS)
Iswanto, Wahyunggoro, Oyas; Cahyadi, Adha Imam
2017-04-01
The paper presents a path-planning algorithm for multiple quadrotors so that they move towards the goal quickly and avoid obstacles in an area containing obstacles. There are several problems in path planning, including how to reach the goal position quickly and how to avoid static and dynamic obstacles. To overcome these problems, the paper presents the fuzzy logic and cell decomposition algorithms. The fuzzy logic algorithm is an artificial intelligence technique that can be applied to robot path planning and is able to detect static and dynamic obstacles. The cell decomposition algorithm is a graph-theoretic method used to build a robot path map. Using these two algorithms, the robot is able to reach the goal position and avoid obstacles, but it takes considerable time because they cannot find the shortest path. Therefore, this paper describes a modification of the algorithms that adds a potential field algorithm to assign weight values to the map of each quadrotor under decentralized control, so that each quadrotor is able to move to the goal position quickly by finding the shortest path. The simulations conducted show that the multi-quadrotor system can avoid various obstacles and find the shortest path using the proposed algorithms.
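A generic potential-field weighting and greedy descent over a grid, standing in for the paper's fuzzy-cell-decomposition map, might be sketched as follows; the gains, obstacle positions and grid are made up for illustration.

```python
import numpy as np

def potential_field(grid_shape, goal, obstacles, k_att=1.0, k_rep=50.0, d0=3.0):
    """Attractive (goal) plus repulsive (obstacle) potential map over a 2-D grid."""
    ys, xs = np.indices(grid_shape)
    field = 0.5 * k_att * ((ys - goal[0]) ** 2 + (xs - goal[1]) ** 2)
    for oy, ox in obstacles:
        d = np.hypot(ys - oy, xs - ox) + 1e-6
        field += np.where(d < d0, 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2, 0.0)
    return field

def descend(field, start, goal, max_steps=500):
    """Greedy steepest-descent path from start to goal over the potential map."""
    path, pos = [start], start
    for _ in range(max_steps):
        if pos == goal:
            break
        y, x = pos
        neighbours = [(y + dy, x + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if (dy, dx) != (0, 0)
                      and 0 <= y + dy < field.shape[0] and 0 <= x + dx < field.shape[1]]
        pos = min(neighbours, key=lambda p: field[p])
        path.append(pos)
    return path

field = potential_field((40, 40), goal=(35, 35), obstacles=[(20, 20), (20, 21), (25, 15)])
print(descend(field, start=(2, 2), goal=(35, 35))[-1])   # ends at the goal cell
```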
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tavakoli, Rouhollah, E-mail: rtavakoli@sharif.ir
An unconditionally energy stable time stepping scheme is introduced to solve Cahn–Morral-like equations in the present study. It is constructed based on the combination of David Eyre's time stepping scheme and the Schur complement approach. Although the presented method is general and independent of the choice of the homogeneous free energy density function term, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study spinodal decomposition in multi-component systems and optimal space tiling problems. A penalization strategy is developed, in the case of the latter problem, to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time stepsize. Its MATLAB implementation is included in the appendix for the numerical evaluation of the algorithm and reproduction of the presented results. -- Highlights: •Extension of Eyre's convex–concave splitting scheme to multiphase systems. •Efficient solution of spinodal decomposition in multi-component systems. •Efficient solution of least perimeter periodic space partitioning problem. •Developing a penalization strategy to avoid trivial solutions. •Presentation of MATLAB implementation of the introduced algorithm.
NASA Astrophysics Data System (ADS)
Hipp, J. R.; Encarnacao, A.; Ballard, S.; Young, C. J.; Phillips, W. S.; Begnaud, M. L.
2011-12-01
Recently our combined SNL-LANL research team has succeeded in developing a global, seamless 3D tomographic P-velocity model (SALSA3D) that provides superior first P travel time predictions at both regional and teleseismic distances. However, given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we show a methodology for accomplishing this by exploiting the full model covariance matrix. Our model has on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for 1/2 of a symmetric matrix, necessitating an Out-Of-Core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G which includes Tikhonov regularization terms) is multiplied by its transpose (GTG) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (GTG)-1 by assigning blocks to individual processing nodes for matrix decomposition update and scaling operations. We first find the Cholesky decomposition of GTG which is subsequently inverted. Next, we employ OOC matrix multiply methods to calculate the model covariance matrix from (GTG)-1 and an assumed data covariance matrix. Given the model covariance matrix we solve for the travel-time covariance associated with arbitrary ray-paths by integrating the model covariance along both ray paths. Setting the paths equal gives variance for that path. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
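On a toy in-memory scale (not the out-of-core blocked solution described above), the covariance propagation can be sketched as follows; the matrices, damping value and ray geometry are hypothetical.

```python
import numpy as np

def model_covariance(G, data_var=0.01**2, damping=1e-3):
    """Posterior model covariance for a damped least-squares tomography problem.

    Small in-memory sketch of the covariance propagation described above; the
    paper's distributed, blocked Cholesky machinery is not reproduced here.
    """
    n_params = G.shape[1]
    GtG = G.T @ G + damping * np.eye(n_params)     # includes a Tikhonov term
    L = np.linalg.cholesky(GtG)                    # Cholesky factor of GtG
    GtG_inv = np.linalg.inv(L).T @ np.linalg.inv(L)
    Cd = data_var * np.eye(G.shape[0])             # assumed data covariance
    G_dagger = GtG_inv @ G.T                       # generalized inverse
    return G_dagger @ Cd @ G_dagger.T              # model covariance matrix

def traveltime_covariance(Cm, path_a, path_b):
    """Covariance of two predicted travel times, given each ray's vector of
    path-length contributions in every model cell (a discrete 'integration')."""
    return float(path_a @ Cm @ path_b)

# Toy example: 200 rays through 50 slowness cells.
rng = np.random.default_rng(1)
G = rng.random((200, 50)) * 0.5                    # ray path lengths per cell (km)
Cm = model_covariance(G)
ray = G[0]                                         # reuse one ray as the prediction path
print("travel-time variance:", traveltime_covariance(Cm, ray, ray))
```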
Short-term wind speed prediction based on the wavelet transformation and Adaboost neural network
NASA Astrophysics Data System (ADS)
Hai, Zhou; Xiang, Zhu; Haijian, Shao; Ji, Wu
2018-03-01
The operation of the power grid is inevitably affected by the increasing scale of wind farms, owing to the inherent randomness and uncertainty of wind, so accurate wind speed forecasting is critical for stable grid operation. Traditional forecasting methods typically do not take the frequency characteristics of wind speed into account and therefore cannot reflect the nature of wind speed signal changes, a limitation resulting from the low generalization ability of their model structure. An AdaBoost neural network in combination with multi-resolution, multi-scale decomposition of the wind speed is proposed to design the model structure in order to improve forecasting accuracy and generalization ability. An experimental evaluation using data from a real wind farm in Jiangsu province demonstrates that the proposed strategy can improve the robustness and accuracy of the forecasts.
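A hedged sketch of the general "decompose, predict per component, recombine" strategy is shown below, using PyWavelets for the decomposition and scikit-learn's AdaBoostRegressor (with its default tree base learner) as a stand-in for the paper's AdaBoost neural network; the wavelet, lags and data are illustrative.

```python
import numpy as np
import pywt
from sklearn.ensemble import AdaBoostRegressor

def wavelet_components(x, wavelet="db4", level=3):
    """Split a series into additive multi-scale components via the DWT."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    comps = []
    for i in range(len(coeffs)):
        masked = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        comps.append(pywt.waverec(masked, wavelet)[: len(x)])
    return comps                                  # components sum back to x

def forecast_next(x, lags=12, **kwargs):
    """One-step-ahead forecast: fit one regressor per wavelet component on
    lagged values and sum the component forecasts."""
    total = 0.0
    for comp in wavelet_components(x, **kwargs):
        X = np.array([comp[i - lags:i] for i in range(lags, len(comp))])
        y = comp[lags:]
        model = AdaBoostRegressor(random_state=0).fit(X, y)
        total += model.predict(comp[-lags:].reshape(1, -1))[0]
    return total

rng = np.random.default_rng(2)
speeds = 8 + 2 * np.sin(np.arange(300) / 10.0) + rng.normal(0, 0.5, 300)
print("next wind speed forecast:", forecast_next(speeds))
```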
Estimation of Soil Moisture with L-band Multi-polarization Radar
NASA Technical Reports Server (NTRS)
Shi, J.; Chen, K. S.; Kim, Chung-Li Y.; Van Zyl, J. J.; Njoku, E.; Sun, G.; O'Neill, P.; Jackson, T.; Entekhabi, D.
2004-01-01
Through analyses of the model-simulated database, we developed a technique to estimate surface soil moisture under the HYDROS radar sensor configuration (L-band, multi-polarization, 40° incidence). This technique includes two steps. First, it decomposes the total backscattering signals into two components - the surface scattering components (the bare surface backscattering signals attenuated by the overlying vegetation layer) and the sum of the direct volume scattering components and surface-volume interaction components at different polarizations. On the model-simulated database, our decomposition technique works quite well in estimating the surface scattering components, with RMSEs of 0.12, 0.25, and 0.55 dB for VV, HH, and VH polarizations, respectively. Then, we use the decomposed surface backscattering signals to estimate the soil moisture and the combined surface roughness and vegetation attenuation correction factors with all three polarizations.
Analysis of Decomposition for Structure I Methane Hydrate by Molecular Dynamics Simulation
NASA Astrophysics Data System (ADS)
Wei, Na; Sun, Wan-Tong; Meng, Ying-Feng; Liu, An-Qi; Zhou, Shou-Wei; Guo, Ping; Fu, Qiang; Lv, Xin
2018-05-01
At multiple temperature and pressure conditions, the microscopic decomposition mechanisms of structure I methane hydrate in contact with bulk water molecules have been studied by molecular dynamics simulation using the LAMMPS software. The simulation system consists of 482 methane molecules in hydrate and 3027 randomly distributed bulk water molecules. From the simulation results, the number of decomposed hydrate cages, the density of methane molecules, the radial distribution function of oxygen atoms, and the mean square displacement and diffusion coefficient of methane molecules have been analysed. A significant result is that structure I methane hydrate decomposes from the hydrate-bulk water interface towards the hydrate interior. As temperature rises and pressure drops, the stability of the hydrate weakens, decomposition proceeds further, and the mean square displacement and diffusion coefficient of the methane molecules increase. These results provide important insight into the microscopic decomposition mechanisms of methane hydrate.
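The mean square displacement and diffusion coefficient mentioned above follow from the Einstein relation MSD = 6Dt in three dimensions; a minimal sketch, assuming the methane positions have already been parsed from the LAMMPS trajectory, is given below.

```python
import numpy as np

def mean_square_displacement(positions):
    """MSD(t) for a trajectory array of shape (n_frames, n_molecules, 3),
    averaged over molecules and time origins."""
    n_frames = positions.shape[0]
    msd = np.zeros(n_frames)
    for lag in range(1, n_frames):
        disp = positions[lag:] - positions[:-lag]
        msd[lag] = np.mean(np.sum(disp ** 2, axis=-1))
    return msd

def diffusion_coefficient(msd, dt, fit_range=slice(10, None)):
    """Einstein relation in 3-D: MSD = 6 D t, so D is the fitted slope / 6."""
    t = np.arange(len(msd)) * dt
    slope = np.polyfit(t[fit_range], msd[fit_range], 1)[0]
    return slope / 6.0
```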
Embedding EfS in Teacher Education through a Multi-Level Systems Approach: Lessons from Queensland
ERIC Educational Resources Information Center
Evans, Neus; Ferreira, Jo-Anne; Davis, Julie; Stevenson, Robert B.
2016-01-01
This article reports on the fourth stage of an evolving study to develop a systems model for embedding education for sustainability (EfS) into preservice teacher education. The fourth stage trialled the extension of the model to a comprehensive state-wide systems approach involving representatives from all eight Queensland teacher education…
Kinetic Analysis of the Main Temperature Stage of Fast Pyrolysis
NASA Astrophysics Data System (ADS)
Yang, Xiaoxiao; Zhao, Yuying; Xu, Lanshu; Li, Rui
2017-10-01
Kinetics of the thermal decomposition of eucalyptus chips was evaluated using a high-rate thermogravimetric analyzer (BL-TGA) designed by our research group. The experiments were carried out under non-isothermal conditions in order to determine the fast pyrolysis behavior of the main temperature stage (350-540°C) at heating rates of 60, 120, 180, and 360°C min-1. The Coats-Redfern integral method and four different reaction mechanism models were adopted to calculate the kinetic parameters, including the apparent activation energy and pre-exponential factor, and the Flynn-Wall-Ozawa method was employed to verify the apparent activation energy. The results showed that the estimated values were consistent with the values obtained from the linear fitting equations, and the best-fit model for fast pyrolysis was identified.
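For a first-order mechanism, the Coats-Redfern linearization plots ln[g(α)/T²] against 1/T, with slope -E/R and intercept ln[AR/(βE)] (neglecting the small 2RT/E correction); a sketch of this fit on made-up conversion data is shown below.

```python
import numpy as np

R = 8.314  # gas constant, J mol-1 K-1

def coats_redfern_first_order(T_kelvin, alpha, beta):
    """Coats-Redfern fit for a first-order mechanism, g(a) = -ln(1 - a).

    Linear regression of ln[g(a)/T^2] on 1/T gives -E/R as the slope and
    ln(A R / (beta E)) as the intercept; data below are illustrative only.
    """
    y = np.log(-np.log(1.0 - alpha) / T_kelvin ** 2)
    slope, intercept = np.polyfit(1.0 / T_kelvin, y, 1)
    E = -slope * R                                    # J/mol
    A = beta * E / R * np.exp(intercept)              # 1/s (beta in K/s)
    return E, A

T = np.linspace(623.0, 813.0, 20)                     # 350-540 C expressed in kelvin
alpha = np.linspace(0.05, 0.90, 20)                   # hypothetical conversions
E, A = coats_redfern_first_order(T, alpha, beta=1.0)  # beta = 60 C/min = 1 K/s
print(f"E = {E/1000:.1f} kJ/mol, A = {A:.3e} 1/s")
```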
N-heptane decomposition in multi-needle to plate electrical discharge
NASA Astrophysics Data System (ADS)
Pekarek, Stanislav; Pospisil, Milan
2003-10-01
Plasma-based technologies are becoming more and more important for the destruction of volatile organic compounds (VOCs) in air streams. The electrical discharges most frequently tested for VOC decomposition are the corona and dielectric barrier discharges. We proposed [1] a multi-hollow-needles-to-plate atmospheric pressure discharge enhanced by the flow of the mixture of air with VOC through the needles. In this case all of the reactive mixture passes through the active zone of the discharge. The high-speed gas flow near the exit of the needles also efficiently cools the electrodes. Hence higher values of the discharge current can be obtained without the danger of the discharge transitioning to a spark. The chemical reactions leading to VOC decomposition can therefore be enhanced [2]. We performed an experimental study of the dependence of the n-heptane decomposition efficiency on its concentration in the air at the input of the discharge. We chose n-heptane, an important constituent of organic solvents and automotive fuels, as a representative of saturated alkanes. We found that with decreasing n-heptane concentration the decomposition efficiency increases. Acknowledgement: This work was supported by the research program No: J04/98:212300016 "Pollution control and monitoring of the Environment" of the Czech Technical University in Prague. References [1] S. Pekárek, V. Kříha, M. Pospíšil - J. Physics D, Appl. Physics, 34, 117 (2001). [2] O. Goosens, T. Callebaut, Y. Akishev, C. Leys - IEEE Trans. Plasma Sc. 30, 176 (2002).
Multi-scale clustering of functional data with application to hydraulic gradients in wetlands
Greenwood, Mark C.; Sojda, Richard S.; Sharp, Julia L.; Peck, Rory G.; Rosenberry, Donald O.
2011-01-01
A new set of methods is developed to perform cluster analysis of functions, motivated by a data set consisting of hydraulic gradients at several locations distributed across a wetland complex. The methods build on previous work on the clustering of functions, such as Tarpey and Kinateder (2003) and Hitchcock et al. (2007), but explore functions generated from an additive model decomposition (Wood, 2006) of the original time series. Our decomposition targets two aspects of the series, using an adaptive smoother for the trend and a circular spline for the diurnal variation in the series. Different measures for comparing locations are discussed, including a method for efficiently clustering time series that are of different lengths using a functional data approach. The complicated nature of these wetlands is highlighted by the shifting group memberships depending on which scale of variation and year of the study are considered.
Advances in understanding, models and parameterisations of biosphere-atmosphere ammonia exchange
NASA Astrophysics Data System (ADS)
Flechard, C. R.; Massad, R.-S.; Loubet, B.; Personne, E.; Simpson, D.; Bash, J. O.; Cooter, E. J.; Nemitz, E.; Sutton, M. A.
2013-03-01
Atmospheric ammonia (NH3) dominates global emissions of total reactive nitrogen (Nr), while emissions from agricultural production systems contribute about two thirds of global NH3 emissions; the remaining third emanates from oceans, natural vegetation, humans, wild animals and biomass burning. On land, NH3 emitted from the various sources eventually returns to the biosphere by dry deposition to sink areas, predominantly semi-natural vegetation, and by wet and dry deposition as ammonium (NH4+) to all surfaces. However, the land/atmosphere exchange of gaseous NH3 is in fact bi-directional over unfertilized as well as fertilized ecosystems, with periods and areas of emission and deposition alternating in time (diurnal, seasonal) and space (patchwork landscapes). The exchange is controlled by a range of environmental factors, including meteorology, surface layer turbulence, thermodynamics, air and surface heterogeneous-phase chemistry, canopy geometry, plant development stage, leaf age, organic matter decomposition, soil microbial turnover, and, in agricultural systems, by fertilizer application rate, fertilizer type, soil type, crop type, and agricultural management practices. We review the range of processes controlling NH3 emission and uptake in the different parts of the soil-canopy-atmosphere continuum, with NH3 emission potentials defined at the substrate and leaf levels by different [NH4+] / [H+] ratios (Γ). Surface/atmosphere exchange models for NH3 are necessary to compute the temporal and spatial patterns of emissions and deposition at the soil, plant, field, landscape, regional and global scales, in order to assess the multiple environmental impacts of air-borne and deposited NH3 and NH4+. Models of soil/vegetation/atmosphere NH3 exchange are reviewed from the substrate and leaf scales to the global scale. They range from simple steady-state, "big leaf" canopy resistance models, to dynamic, multi-layer, multi-process, multi-chemical species schemes. Their level of complexity depends on their purpose, the spatial scale at which they are applied, the current level of parameterisation, and the availability of the input data they require. State-of-the-art solutions for determining the emission/sink Γ potentials through the soil/canopy system include coupled, interactive chemical transport models (CTM) and soil/ecosystem modelling at the regional scale. However, it remains a matter for debate to what extent realistic options for future regional and global models should be based on process-based mechanistic versus empirical and regression-type models. Further discussion is needed on the extent and timescale by which new approaches can be used, such as integration with ecosystem models and satellite observations.
Advances in understanding, models and parameterizations of biosphere-atmosphere ammonia exchange
NASA Astrophysics Data System (ADS)
Flechard, C. R.; Massad, R.-S.; Loubet, B.; Personne, E.; Simpson, D.; Bash, J. O.; Cooter, E. J.; Nemitz, E.; Sutton, M. A.
2013-07-01
Atmospheric ammonia (NH3) dominates global emissions of total reactive nitrogen (Nr), while emissions from agricultural production systems contribute about two-thirds of global NH3 emissions; the remaining third emanates from oceans, natural vegetation, humans, wild animals and biomass burning. On land, NH3 emitted from the various sources eventually returns to the biosphere by dry deposition to sink areas, predominantly semi-natural vegetation, and by wet and dry deposition as ammonium (NH4+) to all surfaces. However, the land/atmosphere exchange of gaseous NH3 is in fact bi-directional over unfertilized as well as fertilized ecosystems, with periods and areas of emission and deposition alternating in time (diurnal, seasonal) and space (patchwork landscapes). The exchange is controlled by a range of environmental factors, including meteorology, surface layer turbulence, thermodynamics, air and surface heterogeneous-phase chemistry, canopy geometry, plant development stage, leaf age, organic matter decomposition, soil microbial turnover, and, in agricultural systems, by fertilizer application rate, fertilizer type, soil type, crop type, and agricultural management practices. We review the range of processes controlling NH3 emission and uptake in the different parts of the soil-canopy-atmosphere continuum, with NH3 emission potentials defined at the substrate and leaf levels by different [NH4+] / [H+] ratios (Γ). Surface/atmosphere exchange models for NH3 are necessary to compute the temporal and spatial patterns of emissions and deposition at the soil, plant, field, landscape, regional and global scales, in order to assess the multiple environmental impacts of airborne and deposited NH3 and NH4+. Models of soil/vegetation/atmosphere NH3 exchange are reviewed from the substrate and leaf scales to the global scale. They range from simple steady-state, "big leaf" canopy resistance models, to dynamic, multi-layer, multi-process, multi-chemical species schemes. Their level of complexity depends on their purpose, the spatial scale at which they are applied, the current level of parameterization, and the availability of the input data they require. State-of-the-art solutions for determining the emission/sink Γ potentials through the soil/canopy system include coupled, interactive chemical transport models (CTM) and soil/ecosystem modelling at the regional scale. However, it remains a matter for debate to what extent realistic options for future regional and global models should be based on process-based mechanistic versus empirical and regression-type models. Further discussion is needed on the extent and timescale by which new approaches can be used, such as integration with ecosystem models and satellite observations.
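As a simplified illustration of the bidirectional, compensation-point-based exchange discussed above, the sketch below uses a single-layer resistance network and approximate 25°C equilibrium constants to relate the emission potential Γ to a stomatal compensation point; the full parameterizations reviewed in the paper add explicit temperature dependence as well as cuticular and soil pathways, and the constants and resistances here are assumed illustrative values.

```python
import numpy as np

# Approximate 25 C equilibrium constants (hedged values, no T dependence here):
KA_NH4 = 10 ** -9.25          # NH4+ acid dissociation constant [mol/L]
KH_NH3 = 60.0                 # NH3 Henry's law constant [mol L-1 atm-1]
R_ATM = 8.2057e-5             # gas constant [m3 atm mol-1 K-1]
M_NH3 = 17.03                 # molar mass of NH3 [g/mol]

def compensation_point(gamma, T_kelvin=298.15):
    """Stomatal NH3 compensation point (ug m-3) from Gamma = [NH4+]/[H+]."""
    p_nh3 = gamma * KA_NH4 / KH_NH3                 # equilibrium partial pressure [atm]
    return p_nh3 / (R_ATM * T_kelvin) * M_NH3 * 1e6

def net_nh3_flux(chi_air, gamma, r_a, r_b, r_s, T_kelvin=298.15):
    """Single-layer ('big leaf') bidirectional flux sketch (ug m-2 s-1);
    positive values denote emission, negative values deposition."""
    chi_s = compensation_point(gamma, T_kelvin)
    return (chi_s - chi_air) / (r_a + r_b + r_s)

# A fertilized canopy (Gamma ~ 3000) emits under 2 ug m-3 ambient NH3
# but becomes a sink under 25 ug m-3 ambient NH3.
for chi_a in (2.0, 25.0):
    print(chi_a, net_nh3_flux(chi_air=chi_a, gamma=3000.0, r_a=30.0, r_b=20.0, r_s=60.0))
```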
NASA Astrophysics Data System (ADS)
Bonan, G. B.; Wieder, W. R.
2012-12-01
Decomposition is a large term in the global carbon budget, but models of the earth system that simulate carbon cycle-climate feedbacks are largely untested with respect to litter decomposition. Here, we demonstrate a protocol to document model performance with respect to both long-term (10 year) litter decomposition and steady-state soil carbon stocks. First, we test the soil organic matter parameterization of the Community Land Model version 4 (CLM4), the terrestrial component of the Community Earth System Model, with data from the Long-term Intersite Decomposition Experiment Team (LIDET). The LIDET dataset is a 10-year study of litter decomposition at multiple sites across North America and Central America. We show results for 10-year litter decomposition simulations compared with LIDET for 9 litter types and 20 sites in tundra, grassland, and boreal, conifer, deciduous, and tropical forest biomes. We show additional simulations with DAYCENT, a version of the CENTURY model, to ask how well an established ecosystem model matches the observations. The results reveal a large discrepancy between the laboratory microcosm studies used to parameterize the CLM4 litter decomposition and the LIDET field study. Simulated carbon loss is more rapid than the observations across all sites, despite using the LIDET-provided climatic decomposition index to constrain temperature and moisture effects on decomposition. Nitrogen immobilization is similarly biased high. Closer agreement with the observations requires much lower decomposition rates, obtained with the assumption that nitrogen severely limits decomposition. DAYCENT better replicates the observations, for both carbon mass remaining and nitrogen, without requirement for nitrogen limitation of decomposition. Second, we compare global observationally-based datasets of soil carbon with simulated steady-state soil carbon stocks for both models. The model simulations were forced with observationally-based estimates of annual litterfall and model-derived climatic decomposition index. While comparison with the LIDET 10-year litterbag study reveals sharp contrasts between CLM4 and DAYCENT, simulations of steady-state soil carbon show less difference between models. Both CLM4 and DAYCENT significantly underestimate soil carbon. Sensitivity analyses highlight causes of the low soil carbon bias. The terrestrial biogeochemistry of earth system models must be critically tested with observations, and the consequences of particular model choices must be documented. Long-term litter decomposition experiments such as LIDET provide a real-world process-oriented benchmark to evaluate models and can critically inform model development. Analysis of steady-state soil carbon estimates reveals additional, but here different, inferences about model performance.
Kemp, Mark A
2015-11-03
A high power RF device has an electron beam cavity, a modulator, and a circuit for feed-forward energy recovery from a multi-stage depressed collector to the modulator. The electron beam cavity includes a cathode, an anode, and the multi-stage depressed collector, and the modulator is configured to provide pulses to the cathode. Voltages of the electrode stages of the multi-stage depressed collector are allowed to float as determined by fixed impedances seen by the electrode stages. The energy recovery circuit includes a storage capacitor that dynamically biases potentials of the electrode stages of the multi-stage depressed collector and provides recovered energy from the electrode stages of the multi-stage depressed collector to the modulator. The circuit may also include a step-down transformer, where the electrode stages of the multi-stage depressed collector are electrically connected to separate taps on the step-down transformer.
FACETS: multi-faceted functional decomposition of protein interaction networks.
Seah, Boon-Siew; Bhowmick, Sourav S; Dewey, C Forbes
2012-10-15
The availability of large-scale curated protein interaction datasets has given rise to the opportunity to investigate higher level organization and modularity within the protein-protein interaction (PPI) network using graph theoretic analysis. Despite the recent progress, systems level analysis of high-throughput PPIs remains a daunting task because of the amount of data they present. In this article, we propose a novel PPI network decomposition algorithm called FACETS in order to make sense of the deluge of interaction data using Gene Ontology (GO) annotations. FACETS finds not just a single functional decomposition of the PPI network, but a multi-faceted atlas of functional decompositions that portray alternative perspectives of the functional landscape of the underlying PPI network. Each facet in the atlas represents a distinct interpretation of how the network can be functionally decomposed and organized. Our algorithm maximizes the interpretative value of the atlas by optimizing inter-facet orthogonality and intra-facet cluster modularity. We tested our algorithm on the global networks from IntAct, and compared it with gold standard datasets from MIPS and KEGG. We demonstrated the performance of FACETS. We also performed a case study that illustrates the utility of our approach. Supplementary data are available at Bioinformatics online. Our software is available freely for non-commercial purposes from: http://www.cais.ntu.edu.sg/~assourav/Facets/
The Effect of Body Mass on Outdoor Adult Human Decomposition.
Roberts, Lindsey G; Spencer, Jessica R; Dabbs, Gretchen R
2017-09-01
Forensic taphonomy explores factors impacting human decomposition. This study investigated the effect of body mass on the rate and pattern of adult human decomposition. Nine males and three females aged 49-95 years ranging in mass from 73 to 159 kg who were donated to the Complex for Forensic Anthropology Research between December 2012 and September 2015 were included in this study. Kelvin accumulated degree days (KADD) were used to assess the thermal energy required for subjects to reach several total body score (TBS) thresholds: early decomposition (TBS ≥6.0), TBS ≥12.5, advanced decomposition (TBS ≥19.0), TBS ≥23.0, and skeletonization (TBS ≥27.0). Results indicate no significant correlation between body mass and KADD at any TBS threshold. Body mass accounted for up to 24.0% of variation in decomposition rate depending on stage, and minor differences in decomposition pattern were observed. Body mass likely has a minimal impact on postmortem interval estimation. © 2017 American Academy of Forensic Sciences.
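The accumulated degree day bookkeeping behind KADD is simple arithmetic; a sketch on made-up daily data is shown below, noting that the study's exact accumulation conventions (start day, handling of sub-zero temperatures) may differ.

```python
import numpy as np

def accumulated_degree_days_kelvin(daily_mean_celsius):
    """Kelvin accumulated degree days: sum of daily mean temperatures in K."""
    return float(np.sum(np.asarray(daily_mean_celsius) + 273.15))

def kadd_to_reach(daily_mean_celsius, daily_tbs, threshold):
    """KADD accumulated by the first day the total body score (TBS)
    reaches a given threshold, or None if it never does."""
    kadd = 0.0
    for t_c, tbs in zip(daily_mean_celsius, daily_tbs):
        kadd += t_c + 273.15
        if tbs >= threshold:
            return kadd
    return None

# Hypothetical 10-day record for one subject.
temps = [18, 20, 22, 25, 24, 23, 21, 19, 22, 26]
tbs   = [3, 5, 6, 9, 12, 14, 17, 19, 21, 23]
print(kadd_to_reach(temps, tbs, threshold=19.0))   # KADD at advanced decomposition
```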
Inter-Rater Reliability of Total Body Score-A Scale for Quantification of Corpse Decomposition.
Nawrocka, Marta; Frątczak, Katarzyna; Matuszewski, Szymon
2016-05-01
The degree of body decomposition can be quantified using Total Body Score (TBS), a scale frequently used in taphonomic or entomological studies of decomposition. Here, the inter-rater reliability of the scale is analyzed. The study was conducted with 120 laypersons, who were trained in the use of the scale. Participants scored the decomposition of pig carcasses from photographs. It was found that the scale, when used by different people, gives homogeneous results irrespective of user qualifications (Krippendorff's alpha for all participants was 0.818). The study also indicated that carcasses in advanced decomposition receive significantly less accurate scores. Moreover, it was found that scores for cadavers in mosaic decomposition (i.e., showing signs of at least two stages of decomposition) are less accurate. These results demonstrate that the scale may be regarded as inter-rater reliable. Some propositions for refinement of the scale were also discussed. © 2016 American Academy of Forensic Sciences.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krogh, B.; Chow, J.H.; Javid, H.S.
1983-05-01
A multi-stage formulation of the problem of scheduling generation, load shedding and short term transmission capacity for the alleviation of a viability emergency is presented. The formulation includes generation rate of change constraints, a linear network solution, and a model of the short term thermal overload capacity of transmission lines. The concept of rotating transmission line overloads for emergency state control is developed. The ideas are illustrated by a numerical example.
NASA Astrophysics Data System (ADS)
Goyal, Abheeti; Toschi, Federico; van der Schoot, Paul
2017-11-01
We study the morphological evolution and dynamics of phase separation of a multi-component mixture in a thin film constrained by a substrate. Specifically, we have explored the surface-directed spinodal decomposition of a multi-component mixture numerically by free-energy lattice Boltzmann (LB) simulations. The distinguishing feature of this model over the Shan-Chen (SC) model is that we have explicit and independent control over the free energy functional and the equation of state (EoS) of the system. This vastly expands the ambit of physical systems that can be realistically simulated by LB simulations. We investigate the effect of composition, film thickness and substrate wetting on the phase morphology and the mechanism of growth in the vicinity of the substrate. The phase morphology and averaged size in the vicinity of the substrate fluctuate greatly due to the wetting of the substrate, in both the parallel and perpendicular directions. Additionally, we also describe how the model presented here can be extended to include an arbitrary number of fluid components.
Analytic model of a multi-electron atom
NASA Astrophysics Data System (ADS)
Skoromnik, O. D.; Feranchuk, I. D.; Leonau, A. U.; Keitel, C. H.
2017-12-01
A fully analytical approximation for the observable characteristics of many-electron atoms is developed via a complete and orthonormal hydrogen-like basis with a single-effective charge parameter for all electrons of a given atom. The basis completeness allows us to employ the secondary-quantized representation for the construction of regular perturbation theory, which includes in a natural way correlation effects, converges fast and enables an effective calculation of the subsequent corrections. The hydrogen-like basis set provides a possibility to perform all summations over intermediate states in closed form, including both the discrete and continuous spectra. This is achieved with the help of the decomposition of the multi-particle Green function in a convolution of single-electronic Coulomb Green functions. We demonstrate that our fully analytical zeroth-order approximation describes the whole spectrum of the system, provides accuracy, which is independent of the number of electrons and is important for applications where the Thomas-Fermi model is still utilized. In addition already in second-order perturbation theory our results become comparable with those via a multi-configuration Hartree-Fock approach.
Artificial neural network cardiopulmonary modeling and diagnosis
Kangas, L.J.; Keller, P.E.
1997-10-28
The present invention is a method of diagnosing a cardiopulmonary condition in an individual by comparing data from a progressive multi-stage test for the individual to a non-linear multi-variate model, preferably a recurrent artificial neural network having sensor fusion. The present invention relies on a cardiovascular model developed from physiological measurements of an individual. Any differences between the modeled parameters and the parameters of an individual at a given time are used for diagnosis. 12 figs.
Artificial neural network cardiopulmonary modeling and diagnosis
Kangas, Lars J.; Keller, Paul E.
1997-01-01
The present invention is a method of diagnosing a cardiopulmonary condition in an individual by comparing data from a progressive multi-stage test for the individual to a non-linear multi-variate model, preferably a recurrent artificial neural network having sensor fusion. The present invention relies on a cardiovascular model developed from physiological measurements of an individual. Any differences between the modeled parameters and the parameters of an individual at a given time are used for diagnosis.
Iterative filtering decomposition based on local spectral evolution kernel
Wang, Yang; Wei, Guo-Wei; Yang, Siyang
2011-01-01
Synthesizing information, achieving understanding, and deriving insight from increasingly massive, time-varying, noisy and possibly conflicting data sets are some of the most challenging tasks in the present information age. Traditional technologies, such as the Fourier transform and wavelet multi-resolution analysis, are inadequate to handle all of the above-mentioned tasks. The empirical mode decomposition (EMD) has emerged as a new powerful tool for resolving many challenging problems in data processing and analysis. Recently, an iterative filtering decomposition (IFD) has been introduced to address the stability and efficiency problems of the EMD. Another data analysis technique is the local spectral evolution kernel (LSEK), which provides a near perfect low-pass filter with desirable time-frequency localizations. The present work utilizes the LSEK to further stabilize the IFD, and offers an efficient, flexible and robust scheme for information extraction, complexity reduction, and signal and image understanding. The performance of the present LSEK-based IFD is intensively validated over a wide range of data processing tasks, including mode decomposition, analysis of time-varying data, information extraction from nonlinear dynamic systems, etc. The utility, robustness and usefulness of the proposed LSEK-based IFD are demonstrated via a large number of applications, such as the analysis of stock market data, the decomposition of ocean wave magnitudes, the understanding of physiologic signals and information recovery from noisy images. The performance of the proposed method is compared with that of existing methods in the literature. Our results indicate that the LSEK-based IFD improves both the efficiency and the stability of conventional EMD algorithms. PMID:22350559
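As a rough illustration of the iterative-filtering idea (not the LSEK-based filter described above), the sketch below extracts one mode by repeatedly subtracting a moving-average estimate of the local mean; the window length and stopping tolerance are arbitrary illustrative choices.

```python
# Minimal iterative filtering sketch: extract one mode by repeatedly
# removing a moving-average estimate of the local mean.
import numpy as np

def moving_average(x, win):
    kernel = np.ones(win) / win
    return np.convolve(x, kernel, mode="same")

def extract_mode(x, win=21, tol=1e-6, max_iter=200):
    h = x.copy()
    for _ in range(max_iter):
        h_new = h - moving_average(h, win)
        if np.linalg.norm(h_new - h) < tol * np.linalg.norm(h):
            return h_new
        h = h_new
    return h

t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 5 * t)
mode1 = extract_mode(signal, win=15)      # fast oscillation
residual = signal - mode1                 # slower trend-like component
```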
The importance of structural softening for the evolution and architecture of passive margins
Duretz, T.; Petri, B.; Mohn, G.; Schmalholz, S. M.; Schenker, F. L.; Müntener, O.
2016-01-01
Lithospheric extension can generate passive margins that bound oceans worldwide. Detailed geological and geophysical studies in present and fossil passive margins have highlighted the complexity of their architecture and their multi-stage deformation history. Previous modeling studies have shown the significant impact of coarse mechanical layering of the lithosphere (2 to 4 layer crust and mantle) on passive margin formation. We built upon these studies and design high-resolution (~100–300 m) thermo-mechanical numerical models that incorporate finer mechanical layering (kilometer scale) mimicking tectonically inherited heterogeneities. During lithospheric extension a variety of extensional structures arises naturally due to (1) structural softening caused by necking of mechanically strong layers and (2) the establishment of a network of weak layers across the deforming multi-layered lithosphere. We argue that structural softening in a multi-layered lithosphere is the main cause for the observed multi-stage evolution and architecture of magma-poor passive margins. PMID:27929057
Critical oxide cluster size on Si(111)
NASA Astrophysics Data System (ADS)
Shklyaev, A. A.; Aono, M.; Suzuki, T.
1999-03-01
The initial stage of oxide growth and subsequent oxide decomposition on Si(111)-7×7 at temperatures between 350 and 720 °C are studied with optical second harmonic generation for O2 pressures (Pox) between 5×10⁻⁹ and 4×10⁻⁶ Torr. The obtained pressure dependencies of the initial oxide growth rate (Rgr) and the subsequent oxide decomposition rate are associated with the cluster-forming nature of the oxidation process. For the model of oxide cluster nucleation and growth, a scaling relationship is derived among the critical oxide cluster size, i, and the experimentally measurable values of Rgr and Pox. The critical oxide cluster size, i, thus obtained from the kinetic data increases with temperature. This correlates with an increase of desorption channels and their rates, in that the competition between growth and decomposition requires more stable oxide clusters, i.e. clusters with a larger critical size, for oxide to grow at higher temperatures. The increase of i with decreasing Pox is related to a decrease of Rgr: a decreased Rgr requires critical clusters with a longer lifetime, i.e. clusters with a larger size.
NASA Technical Reports Server (NTRS)
Juang, Hann-Ming Henry; Tao, Wei-Kuo; Zeng, Xi-Ping; Shie, Chung-Lin; Simpson, Joanne; Lang, Steve
2004-01-01
The capability for massively parallel programming (MPP) using a message passing interface (MPI) has been implemented into a three-dimensional version of the Goddard Cumulus Ensemble (GCE) model. The design for the MPP with MPI uses the concept of maintaining a similar code structure between the whole domain and the portions after decomposition. Hence the model follows the same integration for single and multiple tasks (CPUs). It also requires minimal changes to the original code, so it is easily modified and/or managed by the model developers and users who have little knowledge of MPP. The entire model domain can be sliced into a one- or two-dimensional decomposition with a halo regime, which is overlaid on the partial domains. The halo regime requires that no data be fetched across tasks during the computational stage, but it must be updated before the next computational stage through data exchange via MPI. For reproducibility, transposing data among tasks is required for the spectral transform (Fast Fourier Transform, FFT), which is used in the anelastic version of the model for solving the pressure equation. The performance of the MPI-implemented codes (i.e., the compressible and anelastic versions) was tested on three different computing platforms. The major results are: 1) both versions have speedups of about 99% up to 256 tasks but not for 512 tasks; 2) the anelastic version has better speedup and efficiency because it requires more computations than the compressible version; 3) equal or approximately equal numbers of slices between the x- and y-directions provide the fastest integration due to fewer data exchanges; and 4) one-dimensional slices in the x-direction result in the slowest integration due to the need for more memory relocation during computation.
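The halo-exchange step described above can be sketched with a one-dimensional decomposition in mpi4py; the array sizes, tags and neighbor handling are a generic illustration under stated assumptions, not the GCE model's actual code.

```python
# Sketch of a 1-D domain decomposition with a one-cell halo exchanged via MPI.
# Run with e.g. "mpiexec -n 4 python halo_demo.py". Sizes are illustrative.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 8                                  # interior cells per task
u = np.full(n_local + 2, float(rank))        # +2 halo cells at the ends

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Send my rightmost interior cell to the right neighbor and receive its
# leftmost cell into my right halo (and vice versa) before the next stage.
comm.Sendrecv(sendbuf=u[-2:-1], dest=right, sendtag=0,
              recvbuf=u[0:1], source=left, recvtag=0)
comm.Sendrecv(sendbuf=u[1:2], dest=left, sendtag=1,
              recvbuf=u[-1:], source=right, recvtag=1)
```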
Ecotoxicogenomics is research that identifies patterns of gene expression in wildlife and predicts effects of environmental stressors. We are developing a multiple stressor, multiple life stage exposure model using the fathead minnow (Pimephales promelas), initially studying fou...
Bilgic, Abdulbaki; Florkowski, Wojciech J
2007-06-01
This paper identifies factors that influence the demand for a bass fishing trip taken in the southeastern United States using a hurdle negative binomial count data model. The probability of fishing for a bass is estimated in the first stage and the fishing trip frequency is estimated in the second stage for individuals reporting bass fishing trips in the Southeast. The applied approach allows the decomposition of the effects of factors responsible for the decision to take a trip and the trip number. Calculated partial and total elasticities indicate a highly inelastic demand for the number of fishing trips as trip costs increase. However, the demand can be expected to increase if anglers experience a success measured by the number of caught fish or their size. Benefit estimates based on alternative estimation methods differ substantially, suggesting the need for testing each modeling approach applied in empirical studies.
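The two-stage logic can be sketched as a participation model followed by a count model on the positive observations. For brevity the sketch below uses an ordinary negative binomial rather than a zero-truncated one, so it only approximates the hurdle specification; the data and covariates are hypothetical.

```python
# Two-stage hurdle-style sketch: participation (logit) then trip counts
# (negative binomial on positive observations only). The zero-truncation
# correction of a proper hurdle model is omitted for brevity.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "trip_cost": rng.uniform(20, 200, n),
    "catch_rate": rng.uniform(0, 5, n),
})
# Hypothetical trip counts with many zeros.
lam = np.exp(1.0 - 0.01 * df["trip_cost"] + 0.2 * df["catch_rate"])
df["trips"] = rng.poisson(lam) * rng.binomial(1, 0.6, n)

X = sm.add_constant(df[["trip_cost", "catch_rate"]])

# Stage 1: probability of taking any trip.
stage1 = sm.GLM(np.asarray(df["trips"] > 0, dtype=float), X,
                family=sm.families.Binomial()).fit()

# Stage 2: trip frequency among those who fished.
pos = df["trips"] > 0
stage2 = sm.NegativeBinomial(df.loc[pos, "trips"], X.loc[pos]).fit(disp=0)
print(stage1.params, stage2.params, sep="\n")
```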
Carbon-nitrogen interactions in idealized simulations with JSBACH (version 3.10)
NASA Astrophysics Data System (ADS)
Goll, Daniel S.; Winkler, Alexander J.; Raddatz, Thomas; Dong, Ning; Prentice, Ian Colin; Ciais, Philippe; Brovkin, Victor
2017-05-01
Recent advances in the representation of soil carbon decomposition and carbon-nitrogen interactions implemented previously into separate versions of the land surface scheme JSBACH are here combined in a single version, which is set to be used in the upcoming 6th phase of the Coupled Model Intercomparison Project (CMIP6). Here we demonstrate that the new version of JSBACH is able to reproduce the spatial variability in the reactive nitrogen-loss pathways as derived from a compilation of δ15N data (R = 0.76, root mean square error (RMSE) = 0.2, Taylor score = 0.83). The inclusion of carbon-nitrogen interactions leads to a moderate reduction (-10 %) of the carbon-concentration feedback (βL) and has a negligible effect on the sensitivity of the land carbon cycle to warming (γL) compared to the same version of the model without carbon-nitrogen interactions in idealized simulations (1 % increase in atmospheric carbon dioxide per year). In line with evidence from elevated carbon dioxide manipulation experiments, pronounced nitrogen scarcity is alleviated by the accumulation of nitrogen due to enhanced nitrogen inputs by biological nitrogen fixation and reduced losses by leaching and volatilization. Warming-stimulated turnover of organic nitrogen further counteracts scarcity. The strengths of the land carbon feedbacks of the recent version of JSBACH, with βL = 0.61 Pg ppm⁻¹ and γL = -27.5 Pg °C⁻¹, are 34 and 53 % less than the averages of CMIP5 models, although the CMIP5 version of JSBACH simulated βL and γL values that are 59 and 42 % higher than the multi-model average. These changes are primarily due to the new decomposition model, indicating the importance of soil organic matter decomposition for land carbon feedbacks.
Reactive decomposition of low density PMDI foam subject to shock compression
NASA Astrophysics Data System (ADS)
Alexander, Scott; Reinhart, William; Brundage, Aaron; Peterson, David
Low density polymethylene diisocyanate (PMDI) foam with a density of 5.4 pounds per cubic foot (0.087 g/cc) was tested to determine the equation of state properties under shock compression over the pressure range of 0.58 - 3.4 GPa. This pressure range encompasses a region approximately 1.0-1.2 GPa within which the foam undergoes reactive decomposition resulting in significant volume expansion of approximately three times the volume prior to reaction. This volume expansion has a significant effect on the high pressure equation of state. Previous work on similar foam was conducted only up to the region where volume expansion occurs and extrapolation of that data to higher pressure results in a significant error. It is now clear that new models are required to account for the reactive decomposition of this class of foam. The results of plate impact tests will be presented and discussed including details of the unique challenges associated with shock compression of low density foams. Sandia National Labs is a multi-program lab managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corp., for the U.S. Dept. of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
Sulfur capture under periodically changing oxidizing and reducing conditions in PFBC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zevenhoven, R.; Yrjas, P.; Hupa, M.
1999-07-01
During in situ sulfur capture with a calcium-based sorbent in fluidized bed combustion (FBC), a temperature optimum is found, at atmospheric pressure, at approximately 850 °C. The repeated decomposition of sulfated limestone during stages where the gas atmosphere surrounding the sorbent particle is not oxidizing but reducing has been identified to explain this maximum. Under pressurized (PFBC) conditions, an additional aspect is the direct conversion of calcium carbonate (CaCO3) without the intermediate calcium oxide (CaO) due to the partial pressure of carbon dioxide (CO2). In this work it was evaluated how stable calcium sulfate (CaSO4) is in a gas atmosphere that periodically changes from oxidizing to reducing and vice versa. Atmospheric as well as elevated pressures are considered. CaO or CaCO3, and/or calcium sulfide (CaS) are formed during the reducing stage. Using a pressurized thermogravimetric reactor (PTGR), a limestone was periodically sulfated under oxidizing conditions and decomposed under reducing conditions with carbon monoxide (CO), or with CO + H2 (hydrogen). Experiments at 1 bar and 15 bar were carried out, at temperatures from 850 °C to 950 °C, and at CO and CO + H2 concentrations up to 4 vol%. The experimental data were modeled using simple first-order (parallel) reaction schemes that allowed for sorbent structure changes. This gave rate parameters for the sulfation and decomposition reactions, and identified the decomposition products. It was found that at 1 bar, CO + H2 gives a higher reduction of CaSO4 than CO at the same total concentration. The rate of decomposition increases faster with temperature than the sulfation, explaining the sulfation efficiency maximum mentioned above. At 15 bar, a different picture is seen. The reductive decomposition rate as well as the sulfation rate are slower, with CO as well as CO with small amounts of H2 as the reducing species. There is a significant effect of the water, which is present in the gas at higher concentrations than H2. Thermodynamics indicate that this leads to the decomposition of CaS, releasing H2S.
Zhan, L.; Liu, Y.; Zhou, J.; Ye, J.; Thompson, P.M.
2015-01-01
Mild cognitive impairment (MCI) is an intermediate stage between normal aging and Alzheimer's disease (AD), and around 10-15% of people with MCI develop AD each year. More recently, MCI has been further subdivided into early and late stages, and there is interest in identifying sensitive brain imaging biomarkers that help to differentiate stages of MCI. Here, we focused on anatomical brain networks computed from diffusion MRI and proposed a new feature extraction and classification framework based on higher order singular value decomposition and sparse logistic regression. In tests on publicly available data from the Alzheimer's Disease Neuroimaging Initiative, our proposed framework showed promise in detecting brain network differences that help in classifying early versus late MCI. PMID:26413202
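A rough sketch of the feature-extraction-plus-classification pipeline, assuming the tensorly and scikit-learn packages, is shown below: a Tucker (higher-order SVD) decomposition supplies compressed per-subject features that feed an L1-penalized logistic regression. The tensor shape, ranks and labels are synthetic placeholders, not the ADNI data.

```python
# Sketch: Tucker/HOSVD feature extraction on a stack of connectivity matrices,
# followed by sparse (L1) logistic regression. Data are random placeholders.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_subjects, n_rois = 60, 30
networks = rng.random((n_subjects, n_rois, n_rois))   # subject x ROI x ROI
labels = rng.integers(0, 2, n_subjects)               # early vs late MCI (synthetic)

# Tucker decomposition; only the ROI-mode factor matrices are reused below.
core, factors = tucker(tl.tensor(networks), rank=[20, 8, 8])
U1, U2 = factors[1], factors[2]

# Project each subject's network onto the ROI factor spaces to get features.
features = np.stack([U1.T @ networks[i] @ U2 for i in range(n_subjects)])
features = features.reshape(n_subjects, -1)

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
clf.fit(features, labels)
print("nonzero coefficients:", np.count_nonzero(clf.coef_))
```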
Qu, Chang-feng; Song, Jin-ming; Li, Ning; Li, Xue-gang; Yuan, Hua-mao; Duan, Li-qin
2016-01-01
Jellyfish blooms have been increasing in Chinese seas, and decomposition after a bloom has great influence on the marine ecological environment. We incubated decomposing Nemopilema nomurai in simulated experiments to evaluate its effect on carbon, nitrogen and phosphorus recycling in the water column. The results showed that jellyfish decomposition is characterized by a fast release of biogenic elements, with the release of carbon, nitrogen and phosphorus reaching its maximum at the beginning of decomposition. The release of biogenic elements from jellyfish decomposition was dominated by dissolved matter, which was present at a much higher level than particulate matter. The highest net release rates of dissolved organic carbon and particulate organic carbon reached (103.77 ± 12.60) and (1.52 ± 0.37) mg · kg⁻¹ · h⁻¹, respectively. The dissolved nitrogen was dominated by NH₄⁺-N during the whole incubation time, accounting for 69.6%-91.6% of total dissolved nitrogen, whereas the dissolved phosphorus was dominated by dissolved organic phosphorus during the initial stage of decomposition, at 63.9%-86.7% of total dissolved phosphorus, and by PO₄³⁻-P during the late stage of decomposition, at 50.4%-60.2%. On the contrary, the particulate nitrogen was mainly particulate organic nitrogen, accounting for (88.6 ± 6.9)% of total particulate nitrogen, whereas the particulate phosphorus was mainly particulate inorganic phosphorus, accounting for (73.9 ± 10.5)% of total particulate phosphorus. In addition, jellyfish decomposition decreased the C/N and increased the N/P of the water column. These results indicate that jellyfish decomposition could result in relatively high carbon and nitrogen loads.
Quadratic Blind Linear Unmixing: A Graphical User Interface for Tissue Characterization
Gutierrez-Navarro, O.; Campos-Delgado, D.U.; Arce-Santana, E. R.; Jo, Javier A.
2016-01-01
Spectral unmixing is the process of breaking down data from a sample into its basic components and their abundances. Previous work has been focused on blind unmixing of multi-spectral fluorescence lifetime imaging microscopy (m-FLIM) datasets under a linear mixture model and quadratic approximations. This method provides a fast linear decomposition and can work without a limitation in the maximum number of components or end-members. Hence this work presents an interactive software which implements our blind end-member and abundance extraction (BEAE) and quadratic blind linear unmixing (QBLU) algorithms in Matlab. The options and capabilities of our proposed software are described in detail. When the number of components is known, our software can estimate the constitutive end-members and their abundances. When no prior knowledge is available, the software can provide a completely blind solution to estimate the number of components, the end-members and their abundances. The characterization of three case studies validates the performance of the new software: ex-vivo human coronary arteries, human breast cancer cell samples, and in-vivo hamster oral mucosa. The software is freely available in a hosted webpage by one of the developing institutions, and allows the user a quick, easy-to-use and efficient tool for multi/hyper-spectral data decomposition. PMID:26589467
Quadratic blind linear unmixing: A graphical user interface for tissue characterization.
Gutierrez-Navarro, O; Campos-Delgado, D U; Arce-Santana, E R; Jo, Javier A
2016-02-01
Spectral unmixing is the process of breaking down data from a sample into its basic components and their abundances. Previous work has been focused on blind unmixing of multi-spectral fluorescence lifetime imaging microscopy (m-FLIM) datasets under a linear mixture model and quadratic approximations. This method provides a fast linear decomposition and can work without a limitation in the maximum number of components or end-members. Hence this work presents an interactive software which implements our blind end-member and abundance extraction (BEAE) and quadratic blind linear unmixing (QBLU) algorithms in Matlab. The options and capabilities of our proposed software are described in detail. When the number of components is known, our software can estimate the constitutive end-members and their abundances. When no prior knowledge is available, the software can provide a completely blind solution to estimate the number of components, the end-members and their abundances. The characterization of three case studies validates the performance of the new software: ex-vivo human coronary arteries, human breast cancer cell samples, and in-vivo hamster oral mucosa. The software is freely available in a hosted webpage by one of the developing institutions, and allows the user a quick, easy-to-use and efficient tool for multi/hyper-spectral data decomposition. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
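The abundance-estimation step under a linear mixture model can be illustrated with a non-negative least-squares solve, as in the sketch below; the end-member signatures and the pixel are synthetic, and this is not the BEAE/QBLU algorithm itself.

```python
# Sketch: abundance estimation for one pixel under a linear mixture model,
# using non-negative least squares. End-members and measurements are synthetic.
import numpy as np
from scipy.optimize import nnls

n_channels, n_endmembers = 16, 3
rng = np.random.default_rng(1)
E = rng.random((n_channels, n_endmembers))        # columns = end-member signatures

true_abund = np.array([0.6, 0.3, 0.1])
pixel = E @ true_abund + 0.01 * rng.standard_normal(n_channels)

abund, _ = nnls(E, pixel)
abund /= abund.sum()                              # normalize to fractional abundances
print(abund)
```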
The relationship between two fast/slow analysis techniques for bursting oscillations
Teka, Wondimu; Tabak, Joël; Bertram, Richard
2012-01-01
Bursting oscillations in excitable systems reflect multi-timescale dynamics. These oscillations have often been studied in mathematical models by splitting the equations into fast and slow subsystems. Typically, one treats the slow variables as parameters of the fast subsystem and studies the bifurcation structure of this subsystem. This has key features such as a z-curve (stationary branch) and a Hopf bifurcation that gives rise to a branch of periodic spiking solutions. In models of bursting in pituitary cells, we have recently used a different approach that focuses on the dynamics of the slow subsystem. Characteristic features of this approach are folded node singularities and a critical manifold. In this article, we investigate the relationships between the key structures of the two analysis techniques. We find that the z-curve and Hopf bifurcation of the two-fast/one-slow decomposition are closely related to the voltage nullcline and folded node singularity of the one-fast/two-slow decomposition, respectively. They become identical in the double singular limit in which voltage is infinitely fast and calcium is infinitely slow. PMID:23278052
Development and Evaluation of a Casualty Evacuation Model for a European Conflict.
1987-08-18
Design, Fabrication, Characterization and Modeling of Integrated Functional Materials
2012-10-01
films. The successful optimization of the PZT thin film growth parameters allowed us to set the stage for doping PZT with rare earth elements... Lanthanum (La)-doped PZT (PLZT), where Pb2+ is substituted by La3+, shows a dramatic enhancement in the piezoelectric properties. There have been only a... few recent reports on the growth of La-PZT films. However, most of the reports were based on chemical routes such as metal organic decomposition
Multi-material decomposition of spectral CT images
NASA Astrophysics Data System (ADS)
Mendonça, Paulo R. S.; Bhotika, Rahul; Maddah, Mahnaz; Thomsen, Brian; Dutta, Sandeep; Licato, Paul E.; Joshi, Mukta C.
2010-04-01
Spectral Computed Tomography (Spectral CT), and in particular fast kVp switching dual-energy computed tomography, is an imaging modality that extends the capabilities of conventional computed tomography (CT). Spectral CT enables the estimation of the full linear attenuation curve of the imaged subject at each voxel in the CT volume, instead of a scalar image in Hounsfield units. Because the space of linear attenuation curves in the energy ranges of medical applications can be accurately described through a two-dimensional manifold, this decomposition procedure would be, in principle, limited to two materials. This paper describes an algorithm that overcomes this limitation, allowing for the estimation of N-tuples of material-decomposed images. The algorithm works by assuming that the mixing of substances and tissue types in the human body has the physicochemical properties of an ideal solution, which yields a model for the density of the imaged material mix. Under this model the mass attenuation curve of each voxel in the image can be estimated, immediately resulting in a material-decomposed image triplet. Decomposition into an arbitrary number of pre-selected materials can be achieved by automatically selecting adequate triplets from an application-specific material library. The decomposition is expressed in terms of the volume fractions of each constituent material in the mix; this provides for a straightforward, physically meaningful interpretation of the data. One important application of this technique is in the digital removal of contrast agent from a dual-energy exam, producing a virtual nonenhanced image, as well as in the quantification of the concentration of contrast observed in a targeted region, thus providing an accurate measure of tissue perfusion.
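A per-voxel version of the volume-fraction estimation can be sketched as a constrained least-squares problem: given effective attenuation values in each energy bin and a small material library, solve for non-negative fractions that sum to one. The attenuation numbers below are invented, and the sum-to-one constraint is imposed softly by row augmentation.

```python
# Sketch: per-voxel three-material decomposition into volume fractions.
# Attenuation values are invented; the sum-to-one constraint is applied as a
# heavily weighted extra equation in a non-negative least-squares solve.
import numpy as np
from scipy.optimize import nnls

# Rows = energy bins, columns = materials (e.g. water, iodine, calcium).
M = np.array([[0.20, 4.90, 1.80],
              [0.19, 3.10, 1.20],
              [0.18, 1.70, 0.80]])
mu_voxel = np.array([0.45, 0.33, 0.26])           # measured attenuation per bin

w = 1e3                                           # weight for the sum-to-one row
A = np.vstack([M, w * np.ones((1, 3))])
b = np.concatenate([mu_voxel, [w]])

fractions, _ = nnls(A, b)
print("volume fractions:", fractions)             # non-negative, ~sum to 1
```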
Hyperspherical von Mises-Fisher mixture (HvMF) modelling of high angular resolution diffusion MRI.
Bhalerao, Abhir; Westin, Carl-Fredrik
2007-01-01
A mapping of unit vectors onto a 5D hypersphere is used to model and partition ODFs from HARDI data. This mapping has a number of useful and interesting properties and we make a link to interpretation of the second order spherical harmonic decompositions of HARDI data. The paper presents the working theory and experiments of using a von Mises-Fisher mixture model for directional samples. The MLE of the second moment of the HvMF pdf can also be related to fractional anisotropy. We perform error analysis of the estimation scheme in single and multi-fibre regions and then show how a penalised-likelihood model selection method can be employed to differentiate single and multiple fibre regions.
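A minimal sketch of fitting a single von Mises-Fisher component (mean direction plus the standard closed-form concentration approximation) is given below; the 5-D samples are synthetic, and the mixture/EM and model-selection machinery of the paper is omitted.

```python
# Sketch: approximate MLE for a single von Mises-Fisher distribution on the
# unit sphere in R^p, with the closed-form concentration estimate
# kappa ~= Rbar * (p - Rbar^2) / (1 - Rbar^2). Samples are synthetic.
import numpy as np

def vmf_mle(X):
    n, p = X.shape
    resultant = X.sum(axis=0)
    Rbar = np.linalg.norm(resultant) / n
    mu = resultant / np.linalg.norm(resultant)          # mean direction
    kappa = Rbar * (p - Rbar**2) / (1.0 - Rbar**2)      # concentration (approx.)
    return mu, kappa

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5)) + np.array([3, 0, 0, 0, 0])  # clustered directions
X /= np.linalg.norm(X, axis=1, keepdims=True)                  # project to unit sphere
mu_hat, kappa_hat = vmf_mle(X)
print(mu_hat, kappa_hat)
```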
NASA Astrophysics Data System (ADS)
Petrishcheva, E.; Abart, R.
2012-04-01
We address mathematical modeling and computer simulations of phase decomposition in a multicomponent system. As opposed to binary alloys with one common diffusion parameter, our main concern is phase decomposition in real geological systems under the influence of strongly different interdiffusion coefficients, as is frequently encountered in mineral solid solutions with coupled diffusion on different sub-lattices. Our goal is to explain deviations from equilibrium element partitioning which are often observed in nature, e.g., in a cooled ternary feldspar. To this end we first adapt the standard Cahn-Hilliard model to the multicomponent diffusion problem and account for arbitrary diffusion coefficients. This is done by using Onsager's approach, such that the flux of each component results from the combined action of the chemical potentials of all components. In a second step the generalized Cahn-Hilliard equation is solved numerically using a finite-element approach. We introduce and investigate several decomposition scenarios that may produce systematic deviations from the equilibrium element partitioning. Both ideal solutions and ternary feldspar are considered. Typically, the slowest component is initially "frozen" and the decomposition effectively takes place only for the two "fast" components. At this stage the deviations from the equilibrium element partitioning are indeed observed. These deviations may become "frozen" under conditions of cooling. The final equilibration of the system occurs on a considerably slower time scale. Therefore the system may indeed remain incompletely equilibrated at the time of observation. Our approach reveals the intrinsic reasons for the specific phase separation path and rigorously describes it by direct numerical solution of the generalized Cahn-Hilliard equation.
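As a toy counterpart to the simulations described above, the sketch below integrates the standard binary Cahn-Hilliard equation in one dimension with an explicit finite-difference scheme; the mobility, gradient-energy coefficient and time step are arbitrary illustrative choices, and no Onsager cross-coupling between components is included.

```python
# Toy 1-D binary Cahn-Hilliard solver (explicit finite differences, periodic
# boundaries). Parameters are illustrative; no multicomponent cross-diffusion.
import numpy as np

N, dx = 256, 1.0
M, kappa = 1.0, 1.0        # mobility and gradient-energy coefficient
dt, steps = 0.01, 20000    # small explicit time step for stability

rng = np.random.default_rng(0)
c = 0.05 * rng.standard_normal(N)           # small fluctuation about c = 0

def lap(f):
    return (np.roll(f, 1) - 2.0 * f + np.roll(f, -1)) / dx**2

for _ in range(steps):
    mu = c**3 - c - kappa * lap(c)          # chemical potential, f(c) = (c^2 - 1)^2 / 4
    c += dt * M * lap(mu)                   # dc/dt = M * Laplacian(mu)

print("composition range after coarsening:", c.min(), c.max())
```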
White-nose syndrome initiates a cascade of physiologic disturbances in the hibernating bat host
Verant, Michelle L.; Meteyer, Carol U.; Speakman, John R.; Cryan, Paul M.; Lorch, Jeffrey M.; Blehert, David S.
2014-01-01
Integrating these novel findings on the physiological changes that occur in early-stage WNS with those previously documented in late-stage infections, we propose a multi-stage disease progression model that mechanistically describes the pathologic and physiologic effects underlying mortality of WNS in hibernating bats. This model identifies testable hypotheses for better understanding this disease, knowledge that will be critical for defining effective disease mitigation strategies aimed at reducing morbidity and mortality that results from WNS.
A multi-stage drop-the-losers design for multi-arm clinical trials.
Wason, James; Stallard, Nigel; Bowden, Jack; Jennison, Christopher
2017-02-01
Multi-arm multi-stage trials can improve the efficiency of the drug development process when multiple new treatments are available for testing. A group-sequential approach can be used in order to design multi-arm multi-stage trials, using an extension to Dunnett's multiple-testing procedure. The actual sample size used in such a trial is a random variable that has high variability. This can cause problems when applying for funding as the cost will also be generally highly variable. This motivates a type of design that provides the efficiency advantages of a group-sequential multi-arm multi-stage design, but has a fixed sample size. One such design is the two-stage drop-the-losers design, in which a number of experimental treatments, and a control treatment, are assessed at a prescheduled interim analysis. The best-performing experimental treatment and the control treatment then continue to a second stage. In this paper, we discuss extending this design to have more than two stages, which is shown to considerably reduce the sample size required. We also compare the resulting sample size requirements to the sample size distribution of analogous group-sequential multi-arm multi-stage designs. The sample size required for a multi-stage drop-the-losers design is usually higher than, but close to, the median sample size of a group-sequential multi-arm multi-stage trial. In many practical scenarios, the disadvantage of a slight loss in average efficiency would be overcome by the huge advantage of a fixed sample size. We assess the impact of delay between recruitment and assessment as well as unknown variance on the drop-the-losers designs.
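The fixed-sample-size logic of a two-stage drop-the-losers design can be illustrated with a small simulation: all arms run to a planned interim, only the best experimental arm and the control continue, and the final comparison pools both stages. Stage sizes, effect sizes, the normal-outcome assumption and the naive final test are all illustrative simplifications.

```python
# Sketch: simulate one two-stage drop-the-losers trial with normal outcomes.
# Stage sizes, effects and the naive final z-test are illustrative choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
effects = [0.0, 0.1, 0.3, 0.5]     # control followed by three experimental arms
n1, n2, sigma = 50, 100, 1.0       # per-arm stage-1 and stage-2 sample sizes

# Stage 1: all arms recruited; pick the best experimental arm at interim.
stage1 = [rng.normal(mu, sigma, n1) for mu in effects]
best = 1 + int(np.argmax([x.mean() for x in stage1[1:]]))

# Stage 2: only control (index 0) and the selected arm continue.
ctrl = np.concatenate([stage1[0], rng.normal(effects[0], sigma, n2)])
sel = np.concatenate([stage1[best], rng.normal(effects[best], sigma, n2)])

# Naive z-test (ignores selection bias; a real analysis would adjust for it).
z = (sel.mean() - ctrl.mean()) / np.sqrt(sigma**2 / len(sel) + sigma**2 / len(ctrl))
print(f"selected arm {best}, z = {z:.2f}, naive p = {1 - stats.norm.cdf(z):.4f}")
```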
Barba, Lida; Rodríguez, Nibaldo; Montt, Cecilia
2014-01-01
Two smoothing strategies combined with autoregressive integrated moving average (ARIMA) and autoregressive neural network (ANN) models to improve the forecasting of time series are presented. The forecasting strategy is implemented in two stages. In the first stage the time series is smoothed using either 3-point moving average smoothing or singular value decomposition of the Hankel matrix (HSVD). In the second stage, an ARIMA model and two ANNs for one-step-ahead time series forecasting are used. The coefficients of the first ANN are estimated through the particle swarm optimization (PSO) learning algorithm, while the coefficients of the second ANN are estimated with the resilient backpropagation (RPROP) learning algorithm. The proposed models are evaluated using a weekly time series of traffic accidents from the Valparaíso region, Chile, from 2003 to 2012. The best result is given by the combination HSVD-ARIMA, with a MAPE of 0.26%, followed by MA-ARIMA with a MAPE of 1.12%; the worst result is given by the MA-ANN based on PSO with a MAPE of 15.51%.
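The HSVD smoothing stage can be sketched as follows: embed the series in a Hankel matrix, keep the leading singular components, and reconstruct by anti-diagonal averaging; the window length and the number of retained components below are illustrative choices.

```python
# Sketch of Hankel-SVD (HSVD) smoothing: embed, truncate the SVD, and
# reconstruct by averaging anti-diagonals. Window and rank are illustrative.
import numpy as np

def hsvd_smooth(x, window=52, rank=2):
    n = len(x)
    k = n - window + 1
    H = np.column_stack([x[i:i + window] for i in range(k)])  # Hankel matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]              # low-rank approximation
    # Anti-diagonal averaging back to a series of length n.
    smooth = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        smooth[j:j + window] += H_low[:, j]
        counts[j:j + window] += 1.0
    return smooth / counts

rng = np.random.default_rng(0)
series = np.sin(np.arange(520) * 2 * np.pi / 52) + 0.3 * rng.standard_normal(520)
trend = hsvd_smooth(series, window=52, rank=2)
```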
NASA Astrophysics Data System (ADS)
Wang, Lei; Liu, Zhiwen; Miao, Qiang; Zhang, Xin
2018-06-01
Mode mixing resulting from intermittent signals is an annoying problem associated with the local mean decomposition (LMD) method. Based on noise-assisted approach, ensemble local mean decomposition (ELMD) method alleviates the mode mixing issue of LMD to some degree. However, the product functions (PFs) produced by ELMD often contain considerable residual noise, and thus a relatively large number of ensemble trials are required to eliminate the residual noise. Furthermore, since different realizations of Gaussian white noise are added to the original signal, different trials may generate different number of PFs, making it difficult to take ensemble mean. In this paper, a novel method is proposed called complete ensemble local mean decomposition with adaptive noise (CELMDAN) to solve these two problems. The method adds a particular and adaptive noise at every decomposition stage for each trial. Moreover, a unique residue is obtained after separating each PF, and the obtained residue is used as input for the next stage. Two simulated signals are analyzed to illustrate the advantages of CELMDAN in comparison to ELMD and CEEMDAN. To further demonstrate the efficiency of CELMDAN, the method is applied to diagnose faults for rolling bearings in an experimental case and an engineering case. The diagnosis results indicate that CELMDAN can extract more fault characteristic information with less interference than ELMD.
Adserias-Garriga, Joe; Hernández, Marta; Quijada, Narciso M; Rodríguez Lázaro, David; Steadman, Dawnie; Garcia-Gil, Jesús
2017-09-01
Understanding human decomposition is critical for its use in postmortem interval (PMI) estimation, having a significant impact on forensic investigations. In recognition of the need to establish the scientific basis for PMI estimation, several studies on decomposition have been carried out in recent years. The aims of the present study were: (i) to identify soil microbiota communities involved in human decomposition through high-throughput sequencing (HTS) of DNA sequences from the different bacteria, (ii) to monitor quantitatively and qualitatively the decay of such signature species, and (iii) to describe successional changes in bacterial populations from the early putrefaction state until skeletonization. Three individuals donated to the University of Tennessee FAC were studied. Soil samples around the body were taken from the placement of the donor until the advanced decay/dry remains stage. Bacterial DNA extracts were obtained from the samples, HTS techniques were applied and bioinformatic data analysis was performed. The three cadavers showed similar overall successional changes. At the beginning of the decomposition process the soil microbiome consisted of diverse indigenous soil bacterial communities. As decomposition advanced, Firmicutes community abundance increased in the soil during the bloat stage. The growth curve of Firmicutes from human remains can be used to estimate time since death under Tennessee summer conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
Wang, Liqiong; Chen, Hongyan; Zhang, Tonglai; Zhang, Jianguo; Yang, Li
2007-08-17
Three differently substituted potassium salts of trinitrophloroglucinol (H3TNPG) were prepared and characterized. The salts are all hydrates; thermogravimetric analysis (TG) and elemental analysis confirmed that they contain crystal H2O, amounting to 1.0 hydrate for the mono-substituted salt K(H2TNPG) and the di-substituted salt K2(HTNPG), and 2.0 hydrate for the tri-substituted salt K3(TNPG). Their thermal decomposition mechanisms and kinetic parameters from 50 to 500 degrees C were studied under a linear heating rate by differential scanning calorimetry (DSC). Their thermal decomposition proceeds through a dehydration stage and an intensive exothermic decomposition stage. FT-IR and TG studies verify that the final decomposition residues are potassium cyanide or potassium carbonate. According to the onset temperature of the first exothermic decomposition process of the dehydrated salts, the order of thermal stability from low to high is K(H2TNPG), K2(HTNPG), K3(TNPG), which conforms to the apparent activation energies calculated by Kissinger's and Ozawa-Doyle's methods. Sensitivity test results showed that the potassium salts of H3TNPG demonstrated higher sensitivities and greater explosion probabilities.
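Kissinger's method, cited above, estimates the apparent activation energy from the shift of the DSC peak temperature Tp with heating rate β via ln(β/Tp²) = const − Ea/(R·Tp). The sketch below fits this line to hypothetical peak temperatures, not the measured values of the study.

```python
# Sketch of Kissinger's method: Ea from the slope of ln(beta/Tp^2) vs 1/Tp.
# The heating rates and peak temperatures below are hypothetical.
import numpy as np

R = 8.314                                   # J mol^-1 K^-1
beta = np.array([2.5, 5.0, 10.0, 20.0])     # heating rates, K/min
Tp = np.array([468.0, 476.0, 485.0, 494.0]) # exothermic peak temperatures, K

y = np.log(beta / Tp**2)
x = 1.0 / Tp
slope, intercept = np.polyfit(x, y, 1)
Ea = -slope * R / 1000.0                    # kJ/mol
print(f"apparent activation energy ~ {Ea:.1f} kJ/mol")
```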
Multi-variants synthesis of Petri nets for FPGA devices
NASA Astrophysics Data System (ADS)
Bukowiec, Arkadiusz; Doligalski, Michał
2015-09-01
A new method for the synthesis of application-specific logic controllers for FPGA devices is presented. The control algorithm is specified using a control-interpreted Petri net (P/T type), which allows parallel processes to be specified in an easy way. The Petri net is decomposed into state-machine-type subnets, each subnet representing one parallel process. For this purpose, Petri net coloring algorithms are applied. Two approaches to such decomposition are presented: with doublers of macroplaces, or with one global wait place. Next, the subnets are implemented into a two-level logic circuit of the controller. The levels of the logic circuit are obtained as a result of its architectural decomposition. The first-level combinational circuit is responsible for the generation of next places, and the second-level decoder is responsible for generating output symbols. Two variants of such circuits are worked out: with one shared operational memory, or with many flexible distributed memories as a decoder. Variants of Petri net decomposition and structures of logic circuits can be combined together without any restrictions. This leads to four variants of the multi-variant synthesis.
NASA Astrophysics Data System (ADS)
Bastola, S.; Dialynas, Y. G.; Bras, R. L.; Arnone, E.; Noto, L. V.
2015-12-01
The dynamics of carbon and nitrogen cycles, increasingly influenced by human activities, are key to the functioning of ecosystems. These cycles are influenced by the composition of the substrate, the availability of nitrogen, the population of microorganisms, and by environmental factors. Therefore, land management and use, climate change, and nitrogen deposition patterns influence the dynamics of these macronutrients at the landscape scale. In this work a physically based distributed hydrological model, the tRIBS model, is coupled with a process-based multi-compartment model of the biogeochemical cycle to simulate the dynamics of carbon and nitrogen (CN) in the Mameyes River basin, Puerto Rico. The model includes a wide range of processes that influence the movement, production, and alteration of nutrients in the landscape and factors that affect CN cycling. The tRIBS model integrates geomorphological and climatic factors that influence the cycling of CN in soil. Implementing the decomposition module into tRIBS makes the model a powerful complement to a biogeochemical observation system and a forecast tool able to analyze the influences of future changes on ecosystem services. The soil hydrologic parameters of the model were obtained using ranges of published parameters and observed streamflow data at the outlet. The parameters of the decomposition module are based on previously published data from studies conducted in the Luquillo CZO (budgets of soil organic matter and CN ratio for each of the dominant vegetation types across the landscape). Hydrological fluxes, wet deposition of nitrogen, litter fall and its corresponding CN ratio drive the decomposition model. The simulation results demonstrate a strong influence of soil moisture dynamics on the spatiotemporal distribution of nutrients at the landscape level. The carbon in the litter pool and the nitrate and ammonia pools respond quickly to soil moisture content. Moreover, the CN ratios of the plant litter have a significant influence on the dynamics of CN cycling.
Kim, Hyun Young; Seo, Jiyoung; Kim, Tae-Hun; Shim, Bomi; Cha, Seok Mun; Yu, Seungho
2017-06-01
This study examined the use of microbial community structure as a bio-indicator of decomposition levels. High-throughput pyrosequencing technology was used to assess the shift in the microbial community of leachate from an animal carcass lysimeter. The leachate samples were collected monthly for one year, and a total of 164,639 pyrosequencing reads were obtained and used in the taxonomic classification and operational taxonomic unit (OTU) distribution analysis based on sequence similarity. Our results show considerable changes in the phylum-level bacterial composition, suggesting that the microbial community is a sensitive parameter affected by the burial environment. The phylum classification results showed that Proteobacteria (Pseudomonas) were the most influential taxa in the earlier decomposition stage, whereas Firmicutes (Clostridium, Sporanaerobacter, and Peptostreptococcus) were dominant in the later stage under anaerobic conditions. The results of this study can provide useful information on a time series of leachate profiles of microbial community structures and suggest patterns of microbial diversity in livestock burial sites. In addition, these results can be applied to predict decomposition stages under clay-loam soil conditions at livestock burial sites. Copyright © 2017 Elsevier B.V. All rights reserved.
Theoretical Studies of Chemical Reactions following Electronic Excitation
NASA Technical Reports Server (NTRS)
Chaban, Galina M.
2003-01-01
The use of multi-configurational wave functions is demonstrated for several processes: tautomerization reactions in the ground and excited states of the DNA base adenine, dissociation of the glycine molecule after electronic excitation, and decomposition/deformation of the novel rare gas molecules HRgF. These processes involve bond breaking/formation and require multi-configurational approaches that include dynamic correlation.
Guo, Bin; Chen, Zhongsheng; Guo, Jinyun; Liu, Feng; Chen, Chuanfa; Liu, Kangli
2016-01-01
Changes in precipitation could have crucial influences on the regional water resources in arid regions such as Xinjiang. It is necessary to understand the intrinsic multi-scale variations of precipitation in different parts of Xinjiang in the context of climate change. In this study, based on precipitation data from 53 meteorological stations in Xinjiang during 1960–2012, we investigated the intrinsic multi-scale characteristics of precipitation variability using an adaptive method named ensemble empirical mode decomposition (EEMD). Obvious non-linear upward trends in precipitation were found in the north, south, east and the entire Xinjiang. Changes in precipitation in Xinjiang exhibited significant inter-annual scale (quasi-2 and quasi-6 years) and inter-decadal scale (quasi-12 and quasi-23 years). Moreover, the 2–3-year quasi-periodic fluctuation was dominant in regional precipitation and the inter-annual variation had a considerable effect on the regional-scale precipitation variation in Xinjiang. We also found that there were distinctive spatial differences in variation trends and turning points of precipitation in Xinjiang. The results of this study indicated that compared to traditional decomposition methods, the EEMD method, without using any a priori determined basis functions, could effectively extract the reliable multi-scale fluctuations and reveal the intrinsic oscillation properties of climate elements. PMID:27007388
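A minimal EEMD call, assuming the open-source PyEMD package, could look like the sketch below; the synthetic signal, noise width and number of trials are illustrative and unrelated to the station data.

```python
# Sketch: ensemble empirical mode decomposition of a synthetic "annual +
# interannual" signal with the PyEMD package. Settings are illustrative.
import numpy as np
from PyEMD import EEMD

t = np.arange(0, 53 * 12) / 12.0                        # ~53 years, monthly
signal = (np.sin(2 * np.pi * t)                          # annual cycle
          + 0.5 * np.sin(2 * np.pi * t / 2.5)            # quasi-2-3 year mode
          + 0.02 * t                                      # slow upward trend
          + 0.3 * np.random.default_rng(0).standard_normal(t.size))

eemd = EEMD(trials=100, noise_width=0.2)
imfs = eemd.eemd(signal, t)
print("number of IMFs (plus residue):", imfs.shape[0])
```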
FAST TRACK COMMUNICATION: A closer look at arrested spinodal decomposition in protein solutions
NASA Astrophysics Data System (ADS)
Gibaud, Thomas; Schurtenberger, Peter
2009-08-01
Concentrated aqueous solutions of the protein lysozyme undergo a liquid-solid transition upon a temperature quench into the unstable spinodal region below a characteristic arrest temperature of Tf = 15 °C. We use video microscopy and ultra-small angle light scattering in order to investigate the arrested structures as a function of initial concentration, quench temperature and rate of the temperature quench. We find that the solid-like samples show all the features of a bicontinuous network that is formed through an arrested spinodal decomposition process. We determine the correlation length ξ and demonstrate that ξ exhibits a temperature dependence that closely follows the critical scaling expected for density fluctuations during the early stages of spinodal decomposition. These findings are in agreement with an arrest scenario based on a state diagram where the arrest or gel line extends far into the unstable region below the spinodal line. Arrest then occurs when, during the early stage of spinodal decomposition, the volume fraction φ2 of the dense phase intersects the dynamical arrest threshold φ2,glass, upon which phase separation gets pinned into a space-spanning gel network with a characteristic length ξ.
Multi-stage decoding for multi-level block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1991-01-01
In this paper, we investigate various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded-distance. The error performance of the codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. Based on our study and computation results, we find that, if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, we find that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multi-stage decoding of multi-level modulation codes really offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.
A test of the hierarchical model of litter decomposition.
Bradford, Mark A; Veen, G F Ciska; Bonis, Anne; Bradford, Ella M; Classen, Aimee T; Cornelissen, J Hans C; Crowther, Thomas W; De Long, Jonathan R; Freschet, Gregoire T; Kardol, Paul; Manrubia-Freixa, Marta; Maynard, Daniel S; Newman, Gregory S; Logtestijn, Richard S P; Viketoft, Maria; Wardle, David A; Wieder, William R; Wood, Stephen A; van der Putten, Wim H
2017-12-01
Our basic understanding of plant litter decomposition informs the assumptions underlying widely applied soil biogeochemical models, including those embedded in Earth system models. Confidence in projected carbon cycle-climate feedbacks therefore depends on accurate knowledge about the controls regulating the rate at which plant biomass is decomposed into products such as CO2. Here we test underlying assumptions of the dominant conceptual model of litter decomposition. The model posits that a primary control on the rate of decomposition at regional to global scales is climate (temperature and moisture), with the controlling effects of decomposers negligible at such broad spatial scales. Using a regional-scale litter decomposition experiment at six sites spanning from northern Sweden to southern France, and capturing both within- and among-site variation in putative controls, we find that, contrary to predictions from the hierarchical model, decomposer (microbial) biomass strongly regulates decomposition at regional scales. Furthermore, the size of the microbial biomass dictates the absolute change in decomposition rates with changing climate variables. Our findings suggest the need for revision of the hierarchical model, with decomposers acting as both local- and broad-scale controls on litter decomposition rates, necessitating their explicit consideration in global biogeochemical models.
Goli, Srinivas; Doshi, Riddhi; Perianayagam, Arokiasamy
2013-01-01
Children and women comprise vulnerable populations in terms of health and are gravely affected by the impact of economic inequalities through multi-dimensional channels. Urban areas are believed to have better socioeconomic and maternal and child health indicators than rural areas. This perception leads to the implementation of health policies ignorant of intra-urban health inequalities. Therefore, the objective of this study is to explain the pathways of economic inequalities in maternal and child health indicators among the urban population of India. Using data from the third wave of the National Family Health Survey (NFHS, 2005-06), this study calculated the relative contribution of socioeconomic factors to inequalities in key maternal and child health indicators such as antenatal check-ups (ANCs), institutional deliveries, the proportion of children with complete immunization, the proportion of underweight children, and the Infant Mortality Rate (IMR). Along with regular CI estimates, this study applied the widely used regression-based inequality decomposition model proposed by Wagstaff and colleagues. The CI estimates show considerable economic inequalities in women with fewer than 3 ANCs (CI = -0.3501), institutional delivery (CI = -0.3214), children without full immunization (CI = -0.18340), underweight children (CI = -0.19420), and infant deaths (CI = -0.15596). Results of the decomposition model reveal that illiteracy among women and their partners, poor economic status, and mass media exposure are the critical factors contributing to economic inequalities in maternal and child health indicators. The residuals in all the decomposition models are very small, implying that the above-mentioned factors explain most of the inequality in the maternal and child health of the urban population in India. Findings suggest that illiteracy among women and their partners, poor economic status, and mass media exposure are the critical pathways through which economic factors operate on inequalities in maternal and child health outcomes in urban India.
Goli, Srinivas; Doshi, Riddhi; Perianayagam, Arokiasamy
2013-01-01
Background/Objective Children and women comprise vulnerable populations in terms of health and are gravely affected by the impact of economic inequalities through multi-dimensional channels. Urban areas are believed to have better socioeconomic and maternal and child health indicators than rural areas. This perception leads to the implementation of health policies ignorant of intra-urban health inequalities. Therefore, the objective of this study is to explain the pathways of economic inequalities in maternal and child health indicators among the urban population of India. Methods Using data from the third wave of the National Family Health Survey (NFHS, 2005–06), this study calculated the relative contribution of socioeconomic factors to inequalities in key maternal and child health indicators such as antenatal check-ups (ANCs), institutional deliveries, the proportion of children with complete immunization, the proportion of underweight children, and the Infant Mortality Rate (IMR). Along with regular CI estimates, this study applied the widely used regression-based inequality decomposition model proposed by Wagstaff and colleagues. Results The CI estimates show considerable economic inequalities in women with fewer than 3 ANCs (CI = −0.3501), institutional delivery (CI = −0.3214), children without full immunization (CI = −0.18340), underweight children (CI = −0.19420), and infant deaths (CI = −0.15596). Results of the decomposition model reveal that illiteracy among women and their partners, poor economic status, and mass media exposure are the critical factors contributing to economic inequalities in maternal and child health indicators. The residuals in all the decomposition models are very small, implying that the above-mentioned factors explain most of the inequality in the maternal and child health of the urban population in India. Conclusion Findings suggest that illiteracy among women and their partners, poor economic status, and mass media exposure are the critical pathways through which economic factors operate on inequalities in maternal and child health outcomes in urban India. PMID:23555587
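The concentration index underlying these estimates can be computed in its convenient-covariance form, CI = 2·cov(h, r)/mean(h), where h is the health variable and r the fractional rank in the wealth distribution; the sketch below uses made-up data, not the NFHS survey.

```python
# Sketch: concentration index via the covariance formula
# CI = 2 * cov(h, r) / mean(h), with r the fractional wealth rank.
# The data below are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
wealth = rng.lognormal(mean=0.0, sigma=1.0, size=n)
# Hypothetical binary outcome (e.g. institutional delivery), more likely for the rich.
p = 1.0 / (1.0 + np.exp(-np.log(wealth)))
outcome = rng.binomial(1, p)

rank = (np.argsort(np.argsort(wealth)) + 0.5) / n        # fractional rank in [0, 1]
ci = 2.0 * np.cov(outcome, rank, bias=True)[0, 1] / outcome.mean()
print(f"concentration index: {ci:.3f}")                  # > 0: concentrated among the rich
```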
Multi-loop Integrand Reduction with Computational Algebraic Geometry
NASA Astrophysics Data System (ADS)
Badger, Simon; Frellesvig, Hjalte; Zhang, Yang
2014-06-01
We discuss recent progress in multi-loop integrand reduction methods. Motivated by the possibility of an automated construction of multi-loop amplitudes via generalized unitarity cuts we describe a procedure to obtain a general parameterisation of any multi-loop integrand in a renormalizable gauge theory. The method relies on computational algebraic geometry techniques such as Gröbner bases and primary decomposition of ideals. We present some results for two and three loop amplitudes obtained with the help of the MACAULAY2 computer algebra system and the Mathematica package BASISDET.
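A tiny example of the kind of computation mentioned above, a lexicographic Gröbner basis computed with SymPy, is sketched below; the polynomial system is generic and unrelated to any particular integrand parameterisation.

```python
# Sketch: a lexicographic Groebner basis for a small polynomial system in SymPy.
# The system is a generic example, not a physical integrand parameterisation.
from sympy import symbols, groebner

x, y, z = symbols("x y z")
system = [x**2 + y**2 + z**2 - 1, x*y - z, x - y]
G = groebner(system, x, y, z, order="lex")
for g in G:
    print(g)
```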
NASA Astrophysics Data System (ADS)
Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris
2018-02-01
We investigate the predictability of monthly temperature and precipitation by applying automatic univariate time series forecasting methods to a sample of 985 40-year-long monthly temperature and 1552 40-year-long monthly precipitation time series. The methods include a naïve one based on the monthly values of the last year, as well as the random walk (with drift), AutoRegressive Fractionally Integrated Moving Average (ARFIMA), exponential smoothing state-space model with Box-Cox transformation, ARMA errors, Trend and Seasonal components (BATS), simple exponential smoothing, Theta and Prophet methods. Prophet is a recently introduced model inspired by the nature of time series forecasted at Facebook and has not been applied to hydrometeorological time series before, while the use of random walk, BATS, simple exponential smoothing and Theta is rare in hydrology. The methods are tested in performing multi-step ahead forecasts for the last 48 months of the data. We further investigate how different choices of handling the seasonality and non-normality affect the performance of the models. The results indicate that: (a) all the examined methods apart from the naïve and random walk ones are accurate enough to be used in long-term applications; (b) monthly temperature and precipitation can be forecasted to a level of accuracy which can barely be improved using other methods; (c) the externally applied classical seasonal decomposition results mostly in better forecasts compared to the automatic seasonal decomposition used by the BATS and Prophet methods; and (d) Prophet is competitive, especially when it is combined with externally applied classical seasonal decomposition.
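The externally applied classical seasonal decomposition strategy can be sketched with statsmodels: remove the seasonal component, forecast the deseasonalised series with simple exponential smoothing, then add the seasonal pattern back for the 48-month horizon. The monthly series below is synthetic and the function names assume a recent statsmodels version.

```python
# Sketch: classical seasonal decomposition + simple exponential smoothing,
# then re-seasonalise the forecast. Data are synthetic monthly values.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

rng = np.random.default_rng(0)
idx = pd.date_range("1978-01", periods=480, freq="MS")           # 40 years, monthly
season = 10 * np.sin(2 * np.pi * idx.month / 12)
y = pd.Series(15 + season + rng.normal(0, 2, 480), index=idx)

train, horizon = y[:-48], 48
decomp = seasonal_decompose(train, model="additive", period=12)
deseasonalised = train - decomp.seasonal

fit = SimpleExpSmoothing(deseasonalised).fit()
base_forecast = fit.forecast(horizon)

seasonal_pattern = decomp.seasonal.iloc[-12:].values              # last seasonal year
forecast = base_forecast + np.tile(seasonal_pattern, horizon // 12)
```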
Further insights into the kinetics of thermal decomposition during continuous cooling.
Liavitskaya, Tatsiana; Guigo, Nathanaël; Sbirrazzuoli, Nicolas; Vyazovkin, Sergey
2017-07-26
Following the previous work (Phys. Chem. Chem. Phys., 2016, 18, 32021), this study continues to investigate the intriguing phenomenon of thermal decomposition during continuous cooling. The phenomenon can be detected and its kinetics can be measured by means of thermogravimetric analysis (TGA). The kinetics of the thermal decomposition of ammonium nitrate (NH 4 NO 3 ), nickel oxalate (NiC 2 O 4 ), and lithium sulfate monohydrate (Li 2 SO 4 ·H 2 O) have been measured upon heating and cooling and analyzed by means of the isoconversional methodology. The results have confirmed the hypothesis that the respective kinetics should be similar for single-step processes (NH 4 NO 3 decomposition) but different for multi-step ones (NiC 2 O 4 decomposition and Li 2 SO 4 ·H 2 O dehydration). It has been discovered that the differences in the kinetics can be either quantitative or qualitative. Physical insights into the nature of the differences have been proposed.
NASA Astrophysics Data System (ADS)
Asaumi, Hiroyoshi; Fujimoto, Hiroshi
Ball screw driven stages are used in industrial equipment such as machine tools and semiconductor equipment. Fast and precise positioning is necessary to enhance the productivity and microfabrication capability of such systems. The rolling friction of the ball screw driven stage deteriorates the positioning performance; therefore, a control system based on a friction model is necessary. In this paper, we propose the variable natural length spring model (VNLS model) as the friction model. The VNLS model is simple and easy to implement as a friction controller. Next, we propose the multi variable natural length spring model (MVNLS model) as the friction model. The MVNLS model can represent the friction characteristics of the stage precisely. Moreover, a control system based on the MVNLS model and a disturbance observer is proposed. Finally, simulation and experimental results show the advantages of the proposed method.
Silicon Nitride Equation of State
NASA Astrophysics Data System (ADS)
Swaminathan, Pazhayannur; Brown, Robert
2015-06-01
This report presents the development of a global, multi-phase equation of state (EOS) for the ceramic silicon nitride (Si3N4). Structural forms include amorphous silicon nitride, normally used as a thin film, and three crystalline polymorphs. Crystalline phases include hexagonal α-Si3N4, hexagonal β-Si3N4, and the cubic spinel c-Si3N4. Decomposition at about 1900 °C results in a liquid silicon phase and gas phase products such as molecular nitrogen, atomic nitrogen, and atomic silicon. The silicon nitride EOS was developed using EOSPro, which is a new and extended version of the PANDA II code. Both codes are valuable tools and have been used successfully for a variety of material classes. Both PANDA II and EOSPro can generate a tabular EOS that can be used in conjunction with hydrocodes. The paper describes the development efforts for the component solid phases and presents results obtained using the EOSPro phase transition model to investigate the solid-solid phase transitions in relation to the available shock data. Furthermore, the EOSPro mixture model is used to develop a model for the decomposition products and is then combined with the single-component solid models to study the global phase diagram. Sponsored by the NASA Goddard Space Flight Center Living With a Star program office.
Curtis, Tyler E; Roeder, Ryan K
2017-10-01
Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in magnitude by comparison. The material basis matrix calibration was more sensitive to changes in the calibration methods than the scaling factor calibration. The material basis matrix calibration significantly influenced both the quantitative and spatial accuracy of material decomposition, while the scaling factor calibration influenced quantitative but not spatial accuracy. Importantly, the median RMSE of material decomposition was as low as ~1.5 mM (~0.24 mg/mL gadolinium), which was similar in magnitude to that measured by optical spectroscopy on the same samples. The accuracy of quantitative material decomposition in photon-counting spectral CT was significantly influenced by calibration methods which must therefore be carefully considered for the intended diagnostic imaging application. © 2017 American Association of Physicists in Medicine.
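The image-domain decomposition step described above can be illustrated with a simple least-squares sketch: each voxel's attenuation vector across energy bins is fitted against a calibrated basis matrix. The basis values and the nonnegative least-squares solver below are illustrative assumptions, not the paper's maximum a posteriori estimator.

```python
# Illustrative image-domain material decomposition (not the authors' MAP estimator).
# A voxel's attenuation across energy bins, mu (n_bins,), is modeled as A @ c, where the
# basis matrix A (n_bins x n_materials) holds calibrated attenuation per unit concentration.
import numpy as np
from scipy.optimize import nnls

# Hypothetical calibration: 5 energy bins x 3 materials (gadolinium, calcium, water)
A = np.array([[0.90, 0.55, 0.20],
              [0.75, 0.48, 0.19],
              [1.40, 0.40, 0.18],   # bin straddling the Gd k-edge
              [1.10, 0.35, 0.17],
              [0.85, 0.30, 0.16]])

def decompose_voxel(mu, basis=A):
    """Return nonnegative material concentrations for one voxel."""
    c, _residual = nnls(basis, mu)
    return c

mu_measured = A @ np.array([0.03, 0.5, 1.0]) + 0.01 * np.random.randn(5)
print(decompose_voxel(mu_measured))
```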
Ma, JiaLi; Zhang, TanTan; Dong, MingChui
2015-05-01
This paper presents a novel electrocardiogram (ECG) compression method for e-health applications by adapting an adaptive Fourier decomposition (AFD) algorithm hybridized with a symbol substitution (SS) technique. The compression consists of two stages: first stage AFD executes efficient lossy compression with high fidelity; second stage SS performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated with 48 ECG records from MIT-BIH arrhythmia benchmark database, the proposed method achieves averaged compression ratio (CR) of 17.6-44.5 and percentage root mean square difference (PRD) of 0.8-2.0% with a highly linear and robust PRD-CR relationship, pushing forward the compression performance to an unexploited region. As such, this paper provides an attractive candidate of ECG compression method for pervasive e-health applications.
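The two figures of merit quoted here, compression ratio (CR) and percentage root-mean-square difference (PRD), can be computed as in the short sketch below; it is illustrative only and does not implement the AFD or symbol-substitution stages.

```python
# CR and PRD as commonly reported in ECG compression studies; a small sketch only.
import numpy as np

def compression_ratio(original_bits, compressed_bits):
    return original_bits / compressed_bits

def prd_percent(x, x_rec):
    x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

# Hypothetical 11-bit signal segment reconstructed with small error
x = np.sin(np.linspace(0, 6 * np.pi, 1000))
x_rec = x + 0.01 * np.random.randn(x.size)
print("CR  =", compression_ratio(1000 * 11, 300))
print("PRD = %.2f %%" % prd_percent(x, x_rec))
```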
Acidic attack of perfluorinated alkyl ether lubricant molecules by metal oxide surfaces
NASA Technical Reports Server (NTRS)
Zehe, Michael J.; Faut, Owen D.
1990-01-01
The reactions of linear perfluoropolyalkylether (PFAE) lubricants with alpha-Fe2O3 and Fe2O3-based solid superacids were studied. The reaction with alpha-Fe2O3 proceeds in two stages. The first stage is an initial slow catalytic decomposition of the fluid. This reaction releases reactive gaseous products which attack the metal oxide and convert it to FeF3. The second stage is a more rapid decomposition of the fluid, effected by the surface FeF3. A study of the initial breakdown step was performed using alpha-Fe2O3, alpha-Fe2O3 preconverted to FeF3, and sulfate-promoted alpha-Fe2O3 superacids. The results indicate that the breakdown reaction involves acidic attack at fluorine atoms on acetal carbons in the linear PFAE. Possible approaches to combat the problem are outlined.
NASA Astrophysics Data System (ADS)
Li, Jiqing; Duan, Zhipeng; Huang, Jing
2018-06-01
With the aggravation of global climate change, the shortage of water resources in China is becoming increasingly serious. Using reasonable methods to study changes in precipitation is very important for the planning and management of water resources. Based on the time series of precipitation in Beijing from 1951 to 2015, the multi-scale features of precipitation are analyzed by the Extreme-point Symmetric Mode Decomposition (ESMD) method to forecast the precipitation shift. The results show that the precipitation series has periodic changes of 2.6, 4.3, 14 and 21.7 years, and the variance contribution rate of each modal component shows that inter-annual variation dominates the precipitation in Beijing. It is predicted that precipitation in Beijing will continue to decrease in the near future.
Hidden discriminative features extraction for supervised high-order time series modeling.
Nguyen, Ngoc Anh Thi; Yang, Hyung-Jeong; Kim, Sunhee
2016-11-01
In this paper, an orthogonal Tucker-decomposition-based extraction of high-order discriminative subspaces from a tensor-based time series data structure is presented, named as Tensor Discriminative Feature Extraction (TDFE). TDFE relies on the employment of category information for the maximization of the between-class scatter and the minimization of the within-class scatter to extract optimal hidden discriminative feature subspaces that are simultaneously spanned by every modality for supervised tensor modeling. In this context, the proposed tensor-decomposition method provides the following benefits: i) reduces dimensionality while robustly mining the underlying discriminative features, ii) results in effective interpretable features that lead to an improved classification and visualization, and iii) reduces the processing time during the training stage and the filtering of the projection by solving the generalized eigenvalue issue at each alternation step. Two real third-order tensor-structures of time series datasets (an epilepsy electroencephalogram (EEG) that is modeled as channel×frequency bin×time frame and a microarray data that is modeled as gene×sample×time) were used for the evaluation of the TDFE. The experiment results corroborate the advantages of the proposed method with averages of 98.26% and 89.63% for the classification accuracies of the epilepsy dataset and the microarray dataset, respectively. These performance averages represent an improvement on those of the matrix-based algorithms and recent tensor-based, discriminant-decomposition approaches; this is especially the case considering the small number of samples that are used in practice. Copyright © 2016 Elsevier Ltd. All rights reserved.
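A bare-bones Tucker decomposition of a third-order tensor, of the kind underlying TDFE, can be obtained with the tensorly library as sketched below; the supervised scatter-based optimization of the paper is not reproduced, and the tensor sizes are hypothetical.

```python
# Sketch of an orthogonal Tucker decomposition of a third-order time-series tensor
# (e.g. channel x frequency bin x time frame), using tensorly as a stand-in for TDFE.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

X = np.random.rand(22, 64, 128)            # hypothetical channel x freq-bin x frame tensor
core, factors = tucker(tl.tensor(X), rank=[8, 16, 20])

# factors[k] spans the reduced subspace of mode k; features can be taken from the core
features = tl.tensor_to_vec(core)
print(core.shape, [f.shape for f in factors])
```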
Multi-stage decoding of multi-level modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Costello, Daniel J., Jr.
1991-01-01
Various types of multi-stage decoding for multi-level modulation codes are investigated. It is shown that if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, it is shown that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum soft-decision decoding of the code is very small, only a fraction of a dB loss in signal-to-noise ratio at a bit error rate (BER) of 10^-6.
Incorporating DSA in multipatterning semiconductor manufacturing technologies
NASA Astrophysics Data System (ADS)
Badr, Yasmine; Torres, J. A.; Ma, Yuansheng; Mitra, Joydeep; Gupta, Puneet
2015-03-01
Multi-patterning (MP) is the process of record for many sub-10 nm process technologies. The drive to higher densities has required the use of double and triple patterning for several layers, but this increases the cost of the new processes, especially for low-volume products in which the mask set is a large percentage of the total cost. For that reason there has been a strong incentive to develop technologies like Directed Self Assembly (DSA), EUV or E-beam direct write to reduce the total number of masks needed in a new technology node. Because of the nature of the technology, DSA cylinder graphoepitaxy only allows single-size holes in a single patterning approach. However, by integrating DSA and MP into a hybrid DSA-MP process, it is possible to come up with decomposition approaches that increase design flexibility, allowing different hole sizes or bar structures by independently changing the process for every patterning step. A simple approach to integrate multi-patterning with DSA is to perform DSA grouping and MP decomposition in sequence, whether grouping-then-decomposition or decomposition-then-grouping; each of the two sequences has its pros and cons. However, this paper describes why these intuitive approaches do not produce results of acceptable quality from the point of view of design compliance, and we highlight the need for custom DSA-aware MP algorithms.
Mixing effects on litter decomposition rates in a young tree diversity experiment
NASA Astrophysics Data System (ADS)
Setiawan, Nuri Nurlaila; Vanhellemont, Margot; De Schrijver, An; Schelfhout, Stephanie; Baeten, Lander; Verheyen, Kris
2016-01-01
Litter decomposition is an essential process for biogeochemical cycling and for the formation of new soil organic matter. Mixing litter from different tree species has been reported to increase litter decomposition rates through synergistic effects. We assessed the decomposition rates of leaf litter from five tree species in a recently established tree diversity experiment on a post-agriculture site in Belgium. We used 20 different leaf litter compositions with diversity levels ranging from 1 up to 4 species. Litter mass loss in litterbags was assessed 10, 20, 25, 35, and 60 weeks after installation in the field. We found that litter decomposition rates were higher for high-quality litters, i.e., with high nitrogen content and low lignin content. The decomposition rates of mixed litter were more affected by the identity of the litter species within the mixture than by the diversity of the litter per se, but the variability in litter decomposition rates decreased as the litter diversity increased. Among the 15 different mixed litter compositions in our study, only three litter combinations showed synergistic effects. Our study suggests that admixing tree species with high-quality litter in post-agricultural plantations helps in increasing the mixture's early-stage litter decomposition rate.
NASA Astrophysics Data System (ADS)
Steiner, S. M.; Wood, J. H.
2015-12-01
As decomposition rates are affected by climate change, understanding the crucial soil interactions that affect plant growth and decomposition becomes a vital part of the students' knowledge base. The Global Decomposition Project (GDP) is designed to introduce and educate students about soil organic matter and decomposition through a standardized protocol for collecting, reporting, and sharing data. The Interactive Model of Leaf Decomposition (IMOLD) utilizes animations and modeling to teach about the carbon cycle, leaf anatomy, and the role of microbes in decomposition. Paired together, IMOLD teaches the background information and allows simulation of numerous scenarios, and the GDP is a data collection protocol that allows students to gather usable measurements of decomposition in the field. Our presentation will detail how the GDP protocol works, how to obtain or make the materials needed, and how results will be shared. We will also highlight learning objectives from the three animations of IMOLD, and demonstrate how students can experiment with different climates and litter types using the interactive model to explore a variety of decomposition scenarios. The GDP demonstrates how scientific methods can be extended to educate broader audiences, and data collected by students can provide new insight into global patterns of soil decomposition. Using IMOLD, students will gain a better understanding of carbon cycling in the context of litter decomposition, as well as learn to pose questions they can answer with an authentic computer model. Using the GDP protocols and IMOLD together provides a pathway for scientists and educators to interact and reach meaningful education and research goals.
NASA Astrophysics Data System (ADS)
Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu
2016-06-01
To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, this proposed model has improved the prediction accuracy and hit rates of directional prediction. The proposed model involves three main steps, i.e., decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) for simplifying the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; integrating all predicted IMFs for the ensemble result as the final prediction by another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models for its higher prediction accuracy and hit rates of directional prediction.
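The decomposition-ensemble structure described above can be sketched as follows, using PyEMD's CEEMDAN and scikit-learn's SVR; the grey wolf optimizer tuning is replaced by default hyperparameters, so this is only a structural illustration with a synthetic series.

```python
# Decomposition-ensemble sketch: decompose the series with CEEMDAN (PyEMD, pip name
# "EMD-signal"), predict each component one step ahead with an SVR, then sum the
# component forecasts. GWO tuning is omitted; everything here is a simplified stand-in.
import numpy as np
from PyEMD import CEEMDAN
from sklearn.svm import SVR

def lagged(x, n_lags=5):
    X = np.array([x[i:i + n_lags] for i in range(len(x) - n_lags)])
    return X, x[n_lags:]

rng = np.random.default_rng(1)
series = np.sin(np.linspace(0, 20, 400)) + 0.3 * rng.standard_normal(400)  # synthetic stand-in series

imfs = CEEMDAN(trials=20)(series)                          # intrinsic mode functions
components = np.vstack([imfs, series - imfs.sum(axis=0)])  # append the residue explicitly

forecast = 0.0
for comp in components:
    X, y = lagged(comp)
    model = SVR().fit(X[:-1], y[:-1])                      # hold out the last step as a test point
    forecast += model.predict(X[-1:])[0]

print("one-step-ahead ensemble forecast: %.3f, actual: %.3f" % (forecast, series[-1]))
```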
Multi-stage mixing in subduction zone: Application to Merapi volcano, Indonesia
NASA Astrophysics Data System (ADS)
Debaille, V.; Doucelance, R.; Weis, D.; Schiano, P.
2003-04-01
Basalts sampling subduction zone volcanism (IAB) often show a binary mixing relationship in classical Sr-Nd, Pb-Pb, and Sr-Pb isotopic diagrams, generally interpreted as reflecting the involvement of two components in their source. However, several authors have highlighted the presence of at least three components in such a geodynamical context: the mantle wedge, the subducted and altered oceanic crust, and subducted sediments. The overlying continental crust can also contribute by contamination and assimilation in magma chambers and/or during magma ascent. Here we present a multi-stage model to obtain two-end-member mixing from three components (mantle wedge, altered oceanic crust and sediments). The first stage of the model considers the metasomatism of the mantle wedge by fluids and/or melts released by the subducted materials (altered oceanic crust and associated sediments), considering the mobility and partition coefficients of trace elements in hydrated fluids and silicate melts. This results in the generation of two distinct end-members, reducing the number of components (mantle wedge, oceanic crust, sediments) from three to two. The second stage of the model concerns the binary mixing of the two end-members thus defined: mantle wedge metasomatized by slab-derived fluids and mantle wedge metasomatized by sediment-derived fluids. This model has been applied to a new isotopic data set (Sr, Nd and Pb, analyzed by TIMS and MC-ICP-MS) for Merapi volcano (Java island, Indonesia). Previous studies have suggested three distinct components in the source of Indonesian lavas: mantle wedge, subducted sediments and altered oceanic crust. Moreover, it has been shown that crustal contamination does not significantly affect the isotopic ratios of the lavas. The multi-stage model proposed here is able to reproduce the binary mixing observed in the lavas of Merapi, and a set of numerical values of the bulk partition coefficients is given that accounts for the genesis of the lavas.
Capitanescu, F; Rege, S; Marvuglia, A; Benetto, E; Ahmadi, A; Gutiérrez, T Navarrete; Tiruta-Barna, L
2016-07-15
Empowering decision makers with cost-effective solutions for reducing the environmental burden of industrial processes, at both the design and operation stages, is nowadays a major worldwide concern. The paper addresses this issue for the sector of drinking water production plants (DWPPs), seeking optimal solutions trading off operation cost and life cycle assessment (LCA)-based environmental impact while satisfying outlet water quality criteria. This leads to a challenging bi-objective constrained optimization problem, which relies on a computationally expensive, intricate process-modelling simulator of the DWPP and has to be solved with a limited computational budget. Since mathematical programming methods are unusable in this case, the paper examines the performance in tackling these challenges of six off-the-shelf state-of-the-art global meta-heuristic optimization algorithms suitable for such simulation-based optimization, namely the Strength Pareto Evolutionary Algorithm (SPEA2), Non-dominated Sorting Genetic Algorithm (NSGA-II), Indicator-based Evolutionary Algorithm (IBEA), Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), Differential Evolution (DE), and Particle Swarm Optimization (PSO). The results of the optimization reveal that good reductions in both the operating cost and the environmental impact of the DWPP can be obtained. Furthermore, NSGA-II outperforms the other competing algorithms while MOEA/D and DE perform unexpectedly poorly. Copyright © 2016 Elsevier Ltd. All rights reserved.
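All of the cited algorithms ultimately return a set of non-dominated (Pareto-optimal) trade-offs between cost and impact. The helper below extracts such a set from a batch of evaluated solutions; it is a generic illustration, not an implementation of SPEA2, NSGA-II or the other methods.

```python
# Generic Pareto-front extraction for a bi- (or multi-) objective minimization problem.
import numpy as np

def pareto_front(costs):
    """costs: (n_solutions, n_objectives), all objectives to be minimized.
    Returns a boolean mask marking the non-dominated solutions."""
    n = costs.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # j dominates i if it is no worse in every objective and strictly better in one
        dominated = np.any(np.all(costs <= costs[i], axis=1) & np.any(costs < costs[i], axis=1))
        mask[i] = not dominated
    return mask

rng = np.random.default_rng(0)
points = rng.random((50, 2))                # hypothetical (operation cost, LCA impact) pairs
front = points[pareto_front(points)]
print(front[np.argsort(front[:, 0])])
```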
Self-consistent asset pricing models
NASA Astrophysics Data System (ADS)
Malevergne, Y.; Sornette, D.
2007-08-01
We discuss the foundations of factor or regression models in the light of the self-consistency condition that the market portfolio (and more generally the risk factors) is (are) constituted of the assets whose returns it is (they are) supposed to explain. As already reported in several articles, self-consistency implies correlations between the return disturbances. As a consequence, the alphas and betas of the factor model are unobservable. Self-consistency leads to renormalized betas with zero effective alphas, which are observable with standard OLS regressions. When the conditions derived from internal consistency are not met, the model is necessarily incomplete, which means that some sources of risk cannot be replicated (or hedged) by a portfolio of stocks traded on the market, even for infinite economies. Analytical derivations and numerical simulations show that, for arbitrary choices of the proxy which are different from the true market portfolio, a modified linear regression holds with a non-zero value αi at the origin between an asset i's return and the proxy's return. Self-consistency also introduces “orthogonality” and “normality” conditions linking the betas, alphas (as well as the residuals) and the weights of the proxy portfolio. Two diagnostics based on these orthogonality and normality conditions are implemented on a basket of 323 assets which have been components of the S&P500 in the period from January 1990 to February 2005. These two diagnostics show interesting departures from dynamical self-consistency starting about 2 years before the end of the Internet bubble. Assuming that the CAPM holds with the self-consistency condition, the OLS method automatically obeys the resulting orthogonality and normality conditions and therefore provides a simple way to self-consistently assess the parameters of the model by using proxy portfolios made only of the assets which are used in the CAPM regressions. Finally, the factor decomposition with the self-consistency condition derives a risk-factor decomposition in the multi-factor case which is identical to the principal component analysis (PCA), thus providing a direct link between model-driven and data-driven constructions of risk factors. This correspondence shows that PCA will therefore suffer from the same limitations as the CAPM and its multi-factor generalization, namely lack of out-of-sample explanatory power and predictability. In the multi-period context, the self-consistency conditions force the betas to be time-dependent with specific constraints.
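The link drawn here between self-consistent factor decomposition and PCA can be illustrated with a minimal SVD-based factor extraction from a return matrix; the data below are random placeholders rather than the S&P500 constituents analysed in the paper.

```python
# Bare-bones PCA factor extraction from an asset-return matrix via SVD (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
returns = 0.01 * rng.standard_normal((250, 50))     # hypothetical T x N daily return matrix
X = returns - returns.mean(axis=0)                  # demean each asset

U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)

factors = U[:, :3] * s[:3]      # time series of the first three risk factors
loadings = Vt[:3].T             # per-asset exposures ("betas") to those factors
print("variance explained by 3 factors: %.1f%%" % (100 * explained[:3].sum()))
```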
Decomposition of algal lipids in clay-enriched marine sediment under oxic and anoxic conditions
NASA Astrophysics Data System (ADS)
Lü, Dongwei; Song, Qian; Wang, Xuchen
2010-01-01
A series of laboratory incubation experiments were conducted to examine the decomposition of algal organic matter in clay-enriched marine sediment under oxic and anoxic conditions. During the 245-day incubation period, changes in the concentrations of TOC, major algal fatty acid components (14:0, 16:0, 16:1, 18:1 and 20:5), and n-alkanes (C16-C23) were quantified in the samples. Our results indicate that the organic matter was degraded more rapidly under oxic than anoxic conditions. Adsorption of fatty acids onto clay minerals was a rapid and reversible process. Using a simple G model, we calculated the decomposition rate constants for TOC, n-alkanes and fatty acids, which ranged from 0.017 to 0.024 d-1, 0.049 to 0.103 d-1, and 0.011 to 0.069 d-1, respectively. The algal organic matter degraded in two stages characterized by a fast and a slow degradation process. The addition of the clay minerals montmorillonite and kaolinite to the sediments significantly influenced the decomposition of algal TOC and fatty acids through adsorption and incorporation of the compounds with clay particles. Adsorption/association of fatty acids with clay minerals was rapid but appeared to be only slowly reversible. In addition to the sediment redox and clay influence, the structure of the compounds also played an important role in their degradation dynamics in the sediments.
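The single-pool G model referred to above is a first-order decay; a sketch of estimating the rate constant from a measured time course is given below, with hypothetical concentrations in place of the incubation data.

```python
# Single-pool G model: C(t) = C0 * exp(-k t). Sketch of fitting k from a time course;
# the numbers are hypothetical placeholders, not the published incubation data.
import numpy as np
from scipy.optimize import curve_fit

def g_model(t, c0, k):
    return c0 * np.exp(-k * t)

t_days = np.array([0, 15, 30, 60, 120, 245], dtype=float)
toc = np.array([10.0, 7.4, 5.6, 3.2, 1.1, 0.3])      # hypothetical TOC, mg/g sediment

(c0_hat, k_hat), _ = curve_fit(g_model, t_days, toc, p0=(10.0, 0.02))
print("k = %.3f d-1, half-life = %.1f d" % (k_hat, np.log(2) / k_hat))
```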
Miyake, Yuichi; Tokumura, Masahiro; Wang, Qi; Amagai, Takashi; Horii, Yuichi
2017-11-01
Here, we examined the incineration of extruded polystyrene containing hexabromocyclododecane (HBCD) in a pilot-scale incinerator under various combustion temperatures (800-950°C) and flue gas residence times (2-8 sec). Rates of HBCD decomposition ranged from 99.996% (800°C, 2 sec) to 99.9999% (950°C, 8 sec); the decomposition of HBCD, except during the initial stage of combustion (flue gas residence time < 2 sec), followed a pseudo-first-order kinetics model. An Arrhenius plot revealed that the activation energy and frequency factor of the decomposition of HBCD by combustion were 14.2 kJ/mol and 1.69 sec-1, respectively. During combustion, 11 brominated polycyclic aromatic hydrocarbons (BrPAHs) were detected as unintentional by-products. Of the 11 BrPAHs detected, 2-bromoanthracene and 1-bromopyrene were detected at the highest concentrations. The mutagenic and carcinogenic BrPAHs 1,5-dibromoanthracene and 1-bromopyrene were most frequently detected in the flue gases analyzed. The total concentration of BrPAHs exponentially increased (range, 87.8-2,040,000 ng/m3) with increasing flue gas residence time. Results from a qualitative analysis using gas chromatography/high-resolution mass spectrometry suggest that bromofluorene and bromopyrene (or fluoranthene) congeners were also produced during the combustion. Copyright © 2017. Published by Elsevier B.V.
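The Arrhenius analysis mentioned above amounts to a linear fit of ln k against 1/T; a sketch with hypothetical rate constants (not the measured ones) is given below.

```python
# Arrhenius fit: ln k = ln A - Ea/(R T), so Ea and the frequency factor A follow from a
# straight-line fit of ln k vs 1/T. The k values here are hypothetical placeholders.
import numpy as np

R = 8.314  # J/(mol K)
T = np.array([800.0, 850.0, 900.0, 950.0]) + 273.15   # combustion temperatures, K
k = np.array([1.05, 1.15, 1.25, 1.35])                # hypothetical pseudo-first-order rate constants, 1/s

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R            # activation energy, J/mol
A = np.exp(intercept)      # frequency factor, 1/s
print("Ea = %.1f kJ/mol, A = %.2f s-1" % (Ea / 1e3, A))
```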
NASA Astrophysics Data System (ADS)
Mleczko, M.
2014-12-01
Polarimetric SAR data are not yet widely used in practice because they are not available operationally from satellites. Currently two approaches can be distinguished in POL-InSAR technology: alternating polarization imaging (Alt-POL) and fully polarimetric (QuadPol) imaging. The first is a subset of the second and is more operational, while the second is experimental because classification of these data requires a polarimetric decomposition of the scattering matrix in the first stage. In the literature the decomposition process is divided into two types: coherent and incoherent decomposition. In this paper the decomposition methods have been tested using data from the high resolution airborne F-SAR system. The classification results have been interpreted in the context of land cover mapping capabilities.
Multi-Group Maximum Entropy Model for Translational Non-Equilibrium
NASA Technical Reports Server (NTRS)
Jayaraman, Vegnesh; Liu, Yen; Panesi, Marco
2017-01-01
The aim of the current work is to describe a new model for flows in translational non-equilibrium. Starting from the statistical description of a gas proposed by Boltzmann, the model relies on a domain decomposition technique in velocity space. Using the maximum entropy principle, the logarithm of the distribution function in each velocity sub-domain (group) is expressed with a power series in molecular velocity. New governing equations are obtained using the method of weighted residuals by taking the velocity moments of the Boltzmann equation. The model is applied to a spatially homogeneous Boltzmann equation with a Bhatnagar-Gross-Krook (BGK) model collision operator, and the relaxation of an initial non-equilibrium distribution to a Maxwellian is studied using the model. In addition, numerical results obtained using the model for a 1D shock tube problem are also reported.
A Multi-Level Parallelization Concept for High-Fidelity Multi-Block Solvers
NASA Technical Reports Server (NTRS)
Hatay, Ferhat F.; Jespersen, Dennis C.; Guruswamy, Guru P.; Rizk, Yehia M.; Byun, Chansup; Gee, Ken; VanDalsem, William R. (Technical Monitor)
1997-01-01
The integration of high-fidelity Computational Fluid Dynamics (CFD) analysis tools with the industrial design process benefits greatly from robust implementations that are transportable across a wide range of computer architectures. In the present work, a hybrid domain-decomposition and parallelization concept was developed and implemented into the widely-used NASA multi-block CFD packages ENSAERO and OVERFLOW. The new parallel solver concept, PENS (Parallel Euler Navier-Stokes Solver), employs both fine and coarse granularity in data partitioning as well as data coalescing to obtain the desired load-balance characteristics on the available computer platforms. This multi-level parallelism implementation itself introduces no changes to the numerical results, hence the original fidelity of the packages is identically preserved. The present implementation uses the Message Passing Interface (MPI) library for interprocessor message passing and memory accessing. By choosing an appropriate combination of the available partitioning and coalescing capabilities only during the execution stage, the PENS solver becomes adaptable to different computer architectures, from shared-memory to distributed-memory platforms with varying degrees of parallelism. The PENS implementation on the IBM SP2 distributed memory environment at the NASA Ames Research Center obtains 85 percent scalable parallel performance using fine-grain partitioning of single-block CFD domains on up to 128 wide computational nodes. Multi-block CFD simulations of complete aircraft achieve 75 percent of perfectly load-balanced execution using data coalescing and the two levels of parallelism. The SGI PowerChallenge, SGI Origin 2000, and a cluster of workstations are the other platforms where the robustness of the implementation is tested. The performance behavior on the other computer platforms with a variety of realistic problems will be included as this ongoing study progresses.
NASA Astrophysics Data System (ADS)
Yokozawa, M.; Sakurai, G.; Ono, K.; Mano, M.; Miyata, A.
2011-12-01
Agricultural activities (cultivating crops, managing soil, harvesting and post-harvest treatments) are not only affected by the surrounding environment but also change that environment in turn. Changes in the environment, in temperature, radiation and precipitation, bring changes in crop productivity. On the other hand, the status of the crops, i.e. their growth and phenological stage, changes the exchange of energy, H2O and CO2 between the crop vegetation surface and the atmosphere. Achieving stable agricultural harvests while reducing greenhouse gas (GHG) emissions and enhancing carbon sequestration in soil is a preferable win-win activity. We conducted a model-data fusion analysis to examine the response of cropland-atmosphere carbon exchange to environmental variation. The model consists of two sub-models, a paddy rice growth sub-model and a soil decomposition sub-model. The crop growth sub-model mimics the rice plant growth processes, including the formation of reproductive organs as well as leaf expansion. The soil decomposition sub-model simulates the decomposition of soil organic carbon. Assimilating data on the time changes in CO2 flux measured by the eddy covariance method, rice plant biomass, LAI and the final yield into the model, the parameters were calibrated using a stochastic optimization algorithm with a particle filter. The particle filter, one of the Monte Carlo filters, enables us to evaluate time changes in parameters based on the data observed up to a given time and to make predictions of the system. Iterative filtering and prediction with changing parameters and/or boundary conditions allow us to obtain the time changes in the parameters governing crop production as well as carbon exchange. In this paper, we applied the model-data fusion analysis to two datasets from paddy rice field sites in Japan: one with only a single rice cultivation, and one with a single rice and wheat cultivation. We focused on the parameters related to crop production as well as soil carbon storage. As a result, the calibrated model with estimated parameters could accurately predict the NEE flux in the subsequent years (Fig. 1). The temperature sensitivities, Q10, of the decomposition rate of soil organic carbon (SOC) were estimated as 1.4 for the no-cultivation period and 2.9 for the cultivation period (submerged soil condition).
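The particle filter used for this model-data fusion can be illustrated in miniature: the bootstrap filter below tracks a toy scalar parameter from noisy observations and is not the crop or soil sub-model itself.

```python
# Minimal bootstrap particle filter on a toy scalar state (e.g. a slowly varying
# decomposition-rate parameter) observed with Gaussian noise; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps = 1000, 50

truth = 0.5 + 0.3 * np.sin(np.linspace(0, 3, n_steps))       # hidden parameter trajectory
obs = truth + 0.1 * rng.standard_normal(n_steps)              # noisy observations

particles = rng.uniform(0.0, 1.5, n_particles)
estimates = []
for y in obs:
    particles += 0.05 * rng.standard_normal(n_particles)      # random-walk state evolution
    weights = np.exp(-0.5 * ((y - particles) / 0.1) ** 2)     # Gaussian likelihood
    weights /= weights.sum()
    idx = rng.choice(n_particles, size=n_particles, p=weights)  # resampling step
    particles = particles[idx]
    estimates.append(particles.mean())

print("final estimate %.3f vs truth %.3f" % (estimates[-1], truth[-1]))
```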
Mathew, Boby; Holand, Anna Marie; Koistinen, Petri; Léon, Jens; Sillanpää, Mikko J
2016-02-01
A novel reparametrization-based INLA approach is presented as a fast alternative to MCMC for the Bayesian estimation of genetic parameters in a multivariate animal model. Multi-trait genetic parameter estimation is a relevant topic in animal and plant breeding programs because multi-trait analysis can take into account the genetic correlations between different traits, which significantly improves the accuracy of the genetic parameter estimates. Generally, multi-trait analysis is computationally demanding and requires initial estimates of the genetic and residual correlations among the traits, which are difficult to obtain. In this study, we illustrate how to reparametrize the covariance matrices of multivariate animal models using modified Cholesky decompositions. This reparametrization-based approach is used within the Integrated Nested Laplace Approximation (INLA) methodology to estimate the genetic parameters of a multivariate animal model. Immediate benefits are: (1) avoiding the difficulty of finding good starting values for the analysis, which can be a problem, for example, in Restricted Maximum Likelihood (REML); (2) Bayesian estimation of (co)variance components using INLA is faster to execute than using Markov Chain Monte Carlo (MCMC), especially when realized relationship matrices are dense. The slight drawback is that priors for covariance matrices are assigned to elements of the Cholesky factor rather than directly to the covariance matrix elements as in MCMC. Additionally, we illustrate the concordance of the INLA results with traditional methods such as the MCMC and REML approaches. We also present results obtained from simulated data sets with replicates and from field data in rice.
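The modified Cholesky reparametrization at the heart of this approach can be sketched as follows: an unconstrained parameter vector is mapped to a positive-definite covariance matrix through a log-diagonal Cholesky factor. The code is a generic numpy illustration, not the INLA implementation.

```python
# Log-Cholesky reparametrization of a covariance matrix: optimize over the unconstrained
# vector theta, map back with G = L L'. Illustrative sketch only.
import numpy as np

def theta_to_cov(theta, dim):
    """theta: unconstrained vector of length dim*(dim+1)/2 -> SPD covariance matrix."""
    L = np.zeros((dim, dim))
    L[np.tril_indices(dim)] = theta
    L[np.diag_indices(dim)] = np.exp(np.diag(L))   # log-diagonal keeps L non-singular
    return L @ L.T

def cov_to_theta(G):
    L = np.linalg.cholesky(G)
    L[np.diag_indices(G.shape[0])] = np.log(np.diag(L))
    return L[np.tril_indices(G.shape[0])]

G = np.array([[1.0, 0.3], [0.3, 0.5]])             # hypothetical 2-trait genetic covariance
theta = cov_to_theta(G)
print(np.allclose(theta_to_cov(theta, 2), G))      # round-trip check -> True
```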
NASA Astrophysics Data System (ADS)
Filipchuk, D. V.; Litvinov, A. V.; Etrekova, M. O.; Nozdrya, D. A.
2017-12-01
The sensitivity of the MIS-sensor to products of the thermal decomposition of the insulation and jacket of the most common types of cables is investigated. It is shown that hydrogen is evolved when the insulation is heated to temperatures not exceeding 250 °C. Registration of the evolved hydrogen by the MIS-sensor can therefore be used for the detection of fires at an early stage.
Kojima, A; Hanada, M; Tobari, H; Nishikiori, R; Hiratsuka, J; Kashiwagi, M; Umeda, N; Yoshida, M; Ichikawa, M; Watanabe, K; Yamano, Y; Grisham, L R
2016-02-01
Design techniques for the vacuum insulation have been developed in order to realize a reliable voltage holding capability of multi-aperture multi-grid (MAMuG) accelerators for fusion application. In this method, the nested multi-stage configuration of the MAMuG accelerator can be uniquely designed to satisfy the target voltage within given boundary conditions. The evaluation of the voltage holding capability of each acceleration stage was based on previous experimental results on the area effect and the multi-aperture effect. Since the multi-grid effect was found to be an extension of the area effect through the total facing area, the total voltage holding capability of the multi-stage accelerator can be estimated from that of a single stage by assuming the stage with the highest electric field, the total facing area, and the total number of apertures. By applying these considerations, the analysis of the 3-stage MAMuG accelerator for JT-60SA agreed with past gap-scan experiments to within 10% variation, which demonstrates the high reliability of this approach for designing MAMuG accelerators and also multi-stage high voltage bushings.
A Longitudinal Study on Human Outdoor Decomposition in Central Texas.
Suckling, Joanna K; Spradley, M Katherine; Godde, Kanya
2016-01-01
The development of a methodology that estimates the postmortem interval (PMI) from stages of decomposition is a goal for which forensic practitioners strive. A proposed equation (Megyesi et al. 2005) that utilizes total body score (TBS) and accumulated degree days (ADD) was tested using longitudinal data collected from human remains donated to the Forensic Anthropology Research Facility (FARF) at Texas State University-San Marcos. Exact binomial tests examined how often the equation successfully predicted ADD. Statistically significant differences were found between the ADD estimated by the equation and the observed value for each decomposition stage. Differences remained significant after carnivore-scavenged donations were removed from the analysis. The low success rates of the equation in predicting ADD from TBS and the wide standard errors demonstrate the need to re-evaluate the use of this equation and methodology for PMI estimation in different environments; rather, multivariate methods and equations should be derived that are environmentally specific. © 2015 American Academy of Forensic Sciences.
Hallock, Michael J.; Stone, John E.; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida
2014-01-01
Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems. PMID:24882911
Computational Plume Modeling of Conceptual ARES Vehicle Stage Tests
NASA Technical Reports Server (NTRS)
Allgood, Daniel C.; Ahuja, Vineet
2007-01-01
The plume-induced environment of a conceptual ARES V vehicle stage test at the NASA Stennis Space Center (NASA-SSC) was modeled using computational fluid dynamics (CFD). A full-scale multi-element grid was generated for the NASA-SSC B-2 test stand with the ARES V stage located in a proposed off-center forward position. The plume produced by the ARES V main power plant (a cluster of five RS-68 LOX/LH2 engines) was simulated using a multi-element flow solver, CRUNCH. The primary objective of this work was to obtain a fundamental understanding of the ARES V plume and its impingement characteristics on the B-2 flame deflector. The location, size and shape of the impingement region were quantified along with the uncooled deflector wall pressures, temperatures and incident heating rates. Issues with the proposed tests were identified and several of these were addressed using the CFD methodology. The final results of this modeling effort will provide useful data and boundary conditions in upcoming engineering studies that are directed towards determining the required facility modifications for ensuring safe and reliable stage testing in support of the Constellation Program.
Yang, Ying; Wang, Yunlong; Zhu, Manzhou; Chen, Yan; Xiao, Yazhong; Shen, Yuhua; Xie, Anjian
2017-05-02
A reduced graphene oxide (RGO)/gold nanorod (AuNR)/hydroxyapatite (HA) nanocomposite was designed and successfully synthesized for the first time. An anticancer drug, 5-fluorouracil (5FU), was chosen as a model drug to be loaded in RGO/AuNR/HA. The fabricated RGO/AuNR/HA-5FU showed robust, selective targeting and penetrating efficiency against HeLa cells due to the good compatibility and nontoxicity of HA, and showed excellent synergetic antitumor effects through combined chemotherapy (CT) by 5FU and photothermal therapy (PTT) by both RGO and AuNRs under near-infrared (NIR) laser irradiation. More importantly, this synergistic dual therapy based on RGO/AuNR/HA can also minimize side effects in normal cells and exhibits greater antitumor activity because of a multi-stage drug release ability triggered by the pH sensitivity of HA in the first stage and the combined photothermal conversion capabilities of RGO and AuNRs by means of the NIR laser irradiation in the second stage. This study suggests that the novel RGO/AuNR/HA multi-stage drug delivery system may represent a promising potential application of multifunctional composite materials in the biomedical field.
Are litter decomposition and fire linked through plant species traits?
Cornelissen, Johannes H C; Grootemaat, Saskia; Verheijen, Lieneke M; Cornwell, William K; van Bodegom, Peter M; van der Wal, René; Aerts, Rien
2017-11-01
Biological decomposition and wildfire are connected carbon release pathways for dead plant material: slower litter decomposition leads to fuel accumulation. Are decomposition and surface fires also connected through plant community composition, via the species' traits? Our central concept involves two axes of trait variation related to decomposition and fire. The 'plant economics spectrum' (PES) links biochemistry traits to the litter decomposability of different fine organs. The 'size and shape spectrum' (SSS) includes litter particle size and shape and their consequent effect on fuel bed structure, ventilation and flammability. Our literature synthesis revealed that PES-driven decomposability is largely decoupled from predominantly SSS-driven surface litter flammability across species; this finding needs empirical testing in various environmental settings. Under certain conditions, carbon release will be dominated by decomposition, while under other conditions litter fuel will accumulate and fire may dominate carbon release. Ecosystem-level feedbacks between decomposition and fire, for example via litter amounts, litter decomposition stage, community-level biotic interactions and altered environment, will influence the trait-driven effects on decomposition and fire. Yet, our conceptual framework, explicitly comparing the effects of two plant trait spectra on litter decomposition vs fire, provides a promising new research direction for better understanding and predicting Earth surface carbon dynamics. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.
Modeling Humans as Reinforcement Learners: How to Predict Human Behavior in Multi-Stage Games
NASA Technical Reports Server (NTRS)
Lee, Ritchie; Wolpert, David H.; Backhaus, Scott; Bent, Russell; Bono, James; Tracey, Brendan
2011-01-01
This paper introduces a novel framework for modeling interacting humans in a multi-stage game environment by combining concepts from game theory and reinforcement learning. The proposed model has the following desirable characteristics: (1) bounded rational players, (2) strategic players (i.e., players account for one another's reward functions), and (3) computational feasibility even on moderately large real-world systems. To do this we extend level-K reasoning to policy space to, for the first time, be able to handle multiple time steps. This allows us to decompose the problem into a series of smaller ones where we can apply standard reinforcement learning algorithms. We investigate these ideas in a cyber-battle scenario over a smart power grid and discuss the relationship between the behavior predicted by our model and what one might expect of real human defenders and attackers.
NASA Astrophysics Data System (ADS)
Weiss, Chester J.
2013-08-01
An essential element for computational hypothesis testing, data inversion and experiment design for electromagnetic geophysics is a robust forward solver, capable of easily and quickly evaluating the electromagnetic response of arbitrary geologic structure. The usefulness of such a solver hinges on the balance among competing desires like ease of use, speed of forward calculation, scalability to large problems or compute clusters, parsimonious use of memory access, accuracy and by necessity, the ability to faithfully accommodate a broad range of geologic scenarios over extremes in length scale and frequency content. This is indeed a tall order. The present study addresses recent progress toward the development of a forward solver with these properties. Based on the Lorenz-gauged Helmholtz decomposition, a new finite volume solution over Cartesian model domains endowed with complex-valued electrical properties is shown to be stable over the frequency range 10^-2 to 10^10 Hz and length scales of 10^-3 to 10^5 m. Benchmark examples are drawn from magnetotellurics, exploration geophysics, geotechnical mapping and laboratory-scale analysis, showing excellent agreement with reference analytic solutions. Computational efficiency is achieved through use of a matrix-free implementation of the quasi-minimum-residual (QMR) iterative solver, which eliminates explicit storage of finite volume matrix elements in favor of "on the fly" computation as needed by the iterative Krylov sequence. Further efficiency is achieved through sparse coupling matrices between the vector and scalar potentials whose non-zero elements arise only in those parts of the model domain where the conductivity gradient is non-zero. Multi-thread parallelization in the QMR solver through OpenMP pragmas is used to reduce the computational cost of its most expensive step: the single matrix-vector product at each iteration. High-level MPI communicators farm independent processes to available compute nodes for simultaneous computation of multi-frequency or multi-transmitter responses.
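The matrix-free idea described here, supplying only a matvec to the Krylov solver instead of a stored matrix, can be sketched with SciPy's QMR and a LinearOperator; the toy symmetric stencil below merely stands in for the finite-volume Helmholtz system.

```python
# Matrix-free QMR: the operator is defined by its matvec alone, so no matrix is stored.
# The (-1, 2, -1) stencil is only a toy stand-in for the actual finite-volume system.
import numpy as np
from scipy.sparse.linalg import LinearOperator, qmr

n = 200

def matvec(x):
    # tridiagonal (-1, 2, -1) stencil applied without ever forming the matrix
    y = 2.0 * x
    y[:-1] -= x[1:]
    y[1:] -= x[:-1]
    return y

# the stencil is symmetric, so the transpose product (rmatvec) is the same routine
A = LinearOperator((n, n), matvec=matvec, rmatvec=matvec, dtype=float)
b = np.ones(n)

x, info = qmr(A, b, maxiter=5000)
print("info =", info, " residual norm =", np.linalg.norm(matvec(x) - b))
```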
NASA Astrophysics Data System (ADS)
Barbetta, Silvia; Coccia, Gabriele; Moramarco, Tommaso; Brocca, Luca; Todini, Ezio
2017-08-01
This work extends the multi-temporal approach of the Model Conditional Processor (MCP-MT) to the multi-model case and to the four Truncated Normal Distributions (TNDs) approach, demonstrating the improvement over the single-temporal one. The study is framed in the context of probabilistic Bayesian decision-making, which is appropriate for taking rational decisions on uncertain future outcomes. As opposed to the direct use of deterministic forecasts, the probabilistic forecast identifies a predictive probability density function that represents fundamental knowledge about future occurrences. The added value of MCP-MT is the identification of the probability that a critical situation will happen within the forecast lead-time and when, more likely, it will occur. MCP-MT is thoroughly tested for both single-model and multi-model configurations at a gauged site on the Tiber River, central Italy. The stages forecasted by two operational deterministic models, STAFOM-RCM and MISDc, are considered for the study. The dataset used for the analysis consists of hourly data from 34 flood events selected from a time series of six years. MCP-MT improves over the original models' forecasts: the peak overestimation and the delayed forecast of the rising limb, characterizing MISDc and STAFOM-RCM respectively, are significantly mitigated, with the mean error on peak stage reduced from 45 to 5 cm and the coefficient of persistence increased from 0.53 up to 0.75. The results show that MCP-MT outperforms the single-temporal approach and is potentially useful for supporting decision-making because the exceedance probability of hydrometric thresholds within a forecast horizon and the most probable flooding time can be estimated.
NASA Astrophysics Data System (ADS)
Ernakovich, J. G.; Baldock, J.; Carter, T.; Davis, R. A.; Kalbitz, K.; Sanderman, J.; Farrell, M.
2017-12-01
Microbial degradation of plant detritus is now accepted as a major stabilizing process of organic matter in soils. Most of our understanding of the dynamics of decomposition comes from laboratory litter decay studies in the absence of plants, despite the fact that litter decays in the presence of plants in many native and managed systems. There is growing evidence that living plants significantly impact the degradation and stabilization of litter carbon (C) due to changes in the chemical and physical nature of soils in the rhizosphere. For example, mechanistic studies have observed stimulatory effects of root exudates on litter decomposition, and greenhouse studies have shown that living plants accelerate detrital decay. Despite this, we lack a quantitative understanding of the contribution of living plants to litter decomposition and how interactions of these two sources of C build soil organic matter (SOM). We used a novel triple-isotope approach to determine the effect of living plants on litter decomposition and C cycling. In the first stage of the experiment, we grew a temperate grass commonly used for forage, Poa labillardieri, in a continuously-labelled atmosphere of 14CO2 fertilized with K15NO3, such that the grass biomass was uniformly labelled with 14C and 15N. In the second stage, we constructed litter decomposition mesocosms with and without a living plant to test for the effect of a growing plant on litter decomposition. The 14C/15N litter was decomposed in a sandy clay loam while a temperate forage grass, Lolium perenne, grew in an atmosphere of enriched 13CO2. The fate of the litter-14C/15N and plant-13C was traced into soil mineral fractions and dissolved organic matter (DOM) over the course of nine weeks using four destructive harvests of the mesocosms. Our preliminary results suggest that living plants play a major role in the degradation of plant litter, as litter decomposition was greater, both in rate and absolute amount, for soil mesocosms with a growing plant. Our observations during the decomposition experiment suggest that plant roots physically disrupted litter to increase decomposition. Isotopic analyses are currently underway, and transformations of litter-14C will be presented. Refining our understanding of in situ litter decay will add to our growing knowledge of the C cycle.
Dynamics of multiple elements in fast decomposing vegetable residues.
Cao, Chun; Liu, Si-Qi; Ma, Zhen-Bang; Lin, Yun; Su, Qiong; Chen, Huan; Wang, Jun-Jian
2018-03-01
Litter decomposition regulates the cycling of nutrients and toxicants but is poorly studied in farmlands. To understand the unavoidable in-situ decomposition process, we quantified the dynamics of C, H, N, As, Ca, Cd, Cr, Cu, Fe, Hg, K, Mg, Mn, Na, Ni, Pb, and Zn during a 180-d decomposition study of leafy lettuce (Lactuca sativa var. longifolia) and rape (Brassica chinensis) residues in a wastewater-irrigated farmland in northwestern China. Different from most studied natural ecosystems, the managed vegetable farmland had a much faster litter decomposition rate (half-life of 18-60 d), and interestingly, faster decomposition of roots relative to leaves for both vegetables. Faster root decomposition can be explained by the initial biochemical composition (more O-alkyl C and less alkyl and aromatic C) but not the C/N stoichiometry. Multi-element dynamics varied greatly, with C, H, N, K, and Na being highly released (remaining proportion < 20%), Ca, Cd, Cr, Mg, Ni, and Zn released, and As, Cu, Fe, Hg, Mn, and Pb possibly accumulated. Although vegetable residues serve as temporary sinks of some metal(loid)s, their fast decomposition, particularly for the O-alkyl-C-rich leafy-lettuce roots, suggests that toxic metal(loid)s can be released from residues, which therefore become secondary pollution sources. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
OKeefe, Matthew (Editor); Kerr, Christopher L. (Editor)
1998-01-01
This report contains the abstracts and technical papers from the Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications, held June 15-18, 1998, in Scottsdale, Arizona. The purpose of the workshop is to bring together software developers in meteorology and oceanography to discuss software engineering and code design issues for parallel architectures, including Massively Parallel Processors (MPP's), Parallel Vector Processors (PVP's), Symmetric Multi-Processors (SMP's), Distributed Shared Memory (DSM) multi-processors, and clusters. Issues to be discussed include: (1) code architectures for current parallel models, including basic data structures, storage allocation, variable naming conventions, coding rules and styles, i/o and pre/post-processing of data; (2) designing modular code; (3) load balancing and domain decomposition; (4) techniques that exploit parallelism efficiently yet hide the machine-related details from the programmer; (5) tools for making the programmer more productive; and (6) the proliferation of programming models (F--, OpenMP, MPI, and HPF).
Change Detection of Remote Sensing Images by DT-CWT and MRF
NASA Astrophysics Data System (ADS)
Ouyang, S.; Fan, K.; Wang, H.; Wang, Z.
2017-05-01
Aiming at the significant loss of high-frequency information during noise reduction and the assumption of pixel independence in change detection of multi-scale remote sensing images, an unsupervised algorithm is proposed based on the combination of the Dual-Tree Complex Wavelet Transform (DT-CWT) and a Markov Random Field (MRF) model. The method first performs a multi-scale decomposition of the difference image with the DT-CWT and extracts the change characteristics in the high-frequency regions using an MRF-based segmentation algorithm. It then estimates the final maximum a posteriori (MAP) solution using an iterated conditional modes (ICM) segmentation algorithm based on fuzzy c-means (FCM), after reconstructing the high-frequency and low-frequency sub-bands of each layer respectively. Finally, the method fuses the segmentation results of each layer using the proposed fusion rule to obtain the mask of the final change detection result. The experimental results show that the proposed method achieves higher precision and predominant robustness.
Approximate analytical modeling of leptospirosis infection
NASA Astrophysics Data System (ADS)
Ismail, Nur Atikah; Azmi, Amirah; Yusof, Fauzi Mohamed; Ismail, Ahmad Izani
2017-11-01
Leptospirosis is an infectious disease carried by rodents which can cause death in humans. The disease spreads directly through contact with the feces or urine of infected rodents or through their bites, and indirectly via water contaminated with their urine and droppings. A significant increase in the number of leptospirosis cases in Malaysia, caused by the recent severe floods, was recorded during the heavy rainfall season. Therefore, to understand the dynamics of leptospirosis infection, a mathematical model based on fractional differential equations has been developed and analyzed. In this paper an approximate analytical method, the multi-step Laplace Adomian decomposition method, has been used to conduct numerical simulations so as to gain insight into the spread of leptospirosis infection.
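The fractional-order model itself is not reproduced here; as a stand-in, the sketch below integrates a simple integer-order human-rodent SIR-type system with SciPy, which is the kind of system the multi-step Laplace Adomian decomposition method is applied to. All parameter values are hypothetical.

```python
# Toy human-rodent transmission model integrated numerically (not the fractional model
# or the LADM solution); all rates and initial conditions are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

beta_h, beta_r = 0.0005, 0.001   # transmission rates (human <- rodent, rodent <- rodent)
gamma_h, mu_r = 0.1, 0.05        # human recovery rate, rodent turnover rate

def rhs(t, y):
    S_h, I_h, S_r, I_r = y
    return [-beta_h * S_h * I_r,                                  # susceptible humans
            beta_h * S_h * I_r - gamma_h * I_h,                   # infected humans
            mu_r * (S_r + I_r) - beta_r * S_r * I_r - mu_r * S_r, # susceptible rodents
            beta_r * S_r * I_r - mu_r * I_r]                      # infected rodents

sol = solve_ivp(rhs, (0, 120), [1000, 1, 500, 10])
print("infected humans after 120 days: %.1f" % sol.y[1, -1])
```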
Improving Multiple Fault Diagnosability using Possible Conflicts
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Bregon, Anibal; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino
2012-01-01
Multiple fault diagnosis is a difficult problem for dynamic systems. Due to fault masking, compensation, and relative time of fault occurrence, multiple faults can manifest in many different ways as observable fault signature sequences. This decreases diagnosability of multiple faults, and therefore leads to a loss in effectiveness of the fault isolation step. We develop a qualitative, event-based, multiple fault isolation framework, and derive several notions of multiple fault diagnosability. We show that using Possible Conflicts, a model decomposition technique that decouples faults from residuals, we can significantly improve the diagnosability of multiple faults compared to an approach using a single global model. We demonstrate these concepts and provide results using a multi-tank system as a case study.
Hassan, Ahnaf Rashik; Bhuiyan, Mohammed Imamul Hassan
2017-03-01
Automatic sleep staging is essential for alleviating the burden on physicians of analyzing a large volume of data by visual inspection. It is also a precondition for a practical automated sleep monitoring system, and computerized sleep scoring will expedite large-scale data analysis in sleep research. Nevertheless, most existing work on sleep staging is based on either multichannel recordings or multiple physiological signals, which are uncomfortable for the user and hinder the feasibility of an in-home sleep monitoring device, so a successful and reliable computer-assisted sleep staging scheme is yet to emerge. In this work, we propose a single-channel EEG based algorithm for computerized sleep scoring. In the proposed algorithm, we decompose EEG signal segments using Ensemble Empirical Mode Decomposition (EEMD) and extract various statistical moment based features. The effectiveness of EEMD and the statistical features is investigated, and statistical analysis is performed for feature selection. A recently proposed classification technique, random undersampling boosting (RUSBoost), is introduced for sleep stage classification; to the best of the authors' knowledge, this is the first use of EEMD in conjunction with RUSBoost. The performance of the proposed feature extraction scheme is investigated for various choices of classification model, and the algorithm is evaluated against contemporary works in the literature. The performance of the proposed method is comparable to or better than that of the state-of-the-art methods: the algorithm gives accuracies of 88.07%, 83.49%, 92.66%, 94.23%, and 98.15% for 6-state to 2-state classification of sleep stages on the Sleep-EDF database. Our experimental outcomes reveal that RUSBoost outperforms other classification models for the feature extraction framework presented in this work, and the algorithm demonstrates high detection accuracy for the sleep states S1 and REM. Statistical moment based features in the EEMD domain distinguish the sleep states successfully and effectively. The automated sleep scoring scheme proposed herein can reduce the burden on clinicians, contribute to the device implementation of a sleep monitoring system, and benefit sleep research. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
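A minimal Python sketch of the feature extraction and classification pipeline described above, assuming the PyEMD (EMD-signal), SciPy and imbalanced-learn packages; the epoch length, number of IMFs retained, moment set and classifier settings are illustrative choices, not the paper's exact configuration.

import numpy as np
from PyEMD import EEMD                              # assumed: PyEMD (EMD-signal) package
from scipy.stats import skew, kurtosis
from imblearn.ensemble import RUSBoostClassifier    # assumed: imbalanced-learn package

def eemd_moment_features(epoch, max_imf=6):
    # Decompose one single-channel EEG epoch with EEMD and collect statistical
    # moments of each IMF; pad to a fixed length if fewer IMFs are returned.
    imfs = EEMD().eemd(np.asarray(epoch, dtype=float), max_imf=max_imf)[:max_imf]
    feats = []
    for imf in imfs:
        feats += [imf.mean(), imf.var(), skew(imf), kurtosis(imf)]
    feats += [0.0] * (4 * max_imf - len(feats))
    return np.array(feats)

# epochs: (n_epochs, n_samples) array of EEG segments; stages: integer stage labels
# X = np.vstack([eemd_moment_features(e) for e in epochs])
# clf = RUSBoostClassifier(n_estimators=100, random_state=0).fit(X, stages)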
De-biasing the dynamic mode decomposition for applied Koopman spectral analysis of noisy datasets
NASA Astrophysics Data System (ADS)
Hemati, Maziar S.; Rowley, Clarence W.; Deem, Eric A.; Cattafesta, Louis N.
2017-08-01
The dynamic mode decomposition (DMD)—a popular method for performing data-driven Koopman spectral analysis—has gained increased popularity for extracting dynamically meaningful spatiotemporal descriptions of fluid flows from snapshot measurements. Often times, DMD descriptions can be used for predictive purposes as well, which enables informed decision-making based on DMD model forecasts. Despite its widespread use and utility, DMD can fail to yield accurate dynamical descriptions when the measured snapshot data are imprecise due to, e.g., sensor noise. Here, we express DMD as a two-stage algorithm in order to isolate a source of systematic error. We show that DMD's first stage, a subspace projection step, systematically introduces bias errors by processing snapshots asymmetrically. To remove this systematic error, we propose utilizing an augmented snapshot matrix in a subspace projection step, as in problems of total least-squares, in order to account for the error present in all snapshots. The resulting unbiased and noise-aware total DMD (TDMD) formulation reduces to standard DMD in the absence of snapshot errors, while the two-stage perspective generalizes the de-biasing framework to other related methods as well. TDMD's performance is demonstrated in numerical and experimental fluids examples. In particular, in the analysis of time-resolved particle image velocimetry data for a separated flow, TDMD outperforms standard DMD by providing dynamical interpretations that are consistent with alternative analysis techniques. Further, TDMD extracts modes that reveal detailed spatial structures missed by standard DMD.
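A minimal numpy sketch of the de-biasing step described above: the snapshot pairs are projected onto the leading right-singular subspace of the augmented matrix formed by stacking X and Y (the total-least-squares treatment), and exact DMD is then applied to the projected pairs. The truncation rank r and the snapshot matrices are user inputs; this illustrates the idea rather than reproducing the authors' implementation.

import numpy as np

def total_dmd(X, Y, r):
    # X, Y: (n_states, m) snapshot pairs, Y being the time-shifted copy of X.
    Z = np.vstack([X, Y])                        # augmented snapshot matrix
    _, _, Vh = np.linalg.svd(Z, full_matrices=False)
    P = Vh[:r].conj().T @ Vh[:r]                 # projector onto its leading row space
    Xb, Yb = X @ P, Y @ P                        # de-biased snapshot pairs
    U, S, Wh = np.linalg.svd(Xb, full_matrices=False)
    U, S, Wh = U[:, :r], S[:r], Wh[:r]
    Atilde = U.conj().T @ Yb @ Wh.conj().T @ np.diag(1.0 / S)
    lam, V = np.linalg.eig(Atilde)               # DMD eigenvalues
    modes = Yb @ Wh.conj().T @ np.diag(1.0 / S) @ V
    return lam, modes

Skipping the projection (Xb = X, Yb = Y) recovers standard DMD, consistent with the statement that TDMD reduces to DMD in the absence of snapshot errors.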
Reconstructing photorealistic 3D models from image sequence using domain decomposition method
NASA Astrophysics Data System (ADS)
Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei
2009-11-01
In the fields of industrial design, artistic design and heritage conservation, physical objects are usually digitized by reverse engineering through 3D scanning. Structured light and photogrammetry are the two main methods for acquiring 3D information, and both are expensive; even when these expensive instruments are used, photorealistic 3D models are seldom obtained. In this paper, a new method to reconstruct photorealistic 3D models using a single camera is proposed. A square plate with coded marks glued to it is used to place the object, and a sequence of about 20 images is taken. From the coded marks, the images are calibrated, and a snake algorithm is used to segment the object from the background. A rough 3D model is obtained using a shape-from-silhouettes algorithm. The silhouettes are decomposed into combinations of convex curves, which are used to partition the rough 3D model into convex mesh patches. For each patch, the multi-view photo-consistency constraints and smoothness regularizations are expressed as a finite element formulation, which can be solved locally, with information exchanged along the patch boundaries. The rough model is deformed into a fine 3D model through this domain decomposition finite element method. Textures are assigned to each element mesh, and a photorealistic 3D model is finally obtained. A toy pig is used to verify the algorithm, and the results are encouraging.
Acidic attack of perfluorinated alkyl ether lubricant molecules by metal oxide surfaces
NASA Technical Reports Server (NTRS)
Zehe, Michael J.; Faut, Owen D.
1989-01-01
The reactions of linear perfluoropolyalkylether (PFAE) lubricants with alpha-Fe2O3 and Fe2O3-based solid superacids were studied. The reaction with alpha-Fe2O3 proceeds in two stages. The first stage is an initial slow catalytic decomposition of the fluid; this reaction releases reactive gaseous products which attack the metal oxide and convert it to FeF3. The second stage is a more rapid decomposition of the fluid, effected by the surface FeF3. A study of the initial breakdown step was performed using alpha-Fe2O3, alpha-Fe2O3 preconverted to FeF3, and sulfate-promoted alpha-Fe2O3 superacids. The results indicate that the breakdown reaction involves acidic attack at fluorine atoms on acetal carbons in the linear PFAE. Possible approaches to combat the problem are outlined.
A novel partitioning method for block-structured adaptive meshes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Lin, E-mail: lin.fu@tum.de; Litvinov, Sergej, E-mail: sergej.litvinov@aer.mw.tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de
We propose a novel partitioning method for block-structured adaptive meshes utilizing the meshless Lagrangian particle concept. With the observation that an optimum partitioning has high analogy to the relaxation of a multi-phase fluid to steady state, physically motivated model equations are developed to characterize the background mesh topology and are solved by multi-phase smoothed-particle hydrodynamics. In contrast to well established partitioning approaches, all optimization objectives are implicitly incorporated and achieved during the particle relaxation to stationary state. Distinct partitioning sub-domains are represented by colored particles and separated by a sharp interface with a surface tension model. In order to obtain the particle relaxation, special viscous and skin friction models, coupled with a tailored time integration algorithm are proposed. Numerical experiments show that the present method has several important properties: generation of approximately equal-sized partitions without dependence on the mesh-element type, optimized interface communication between distinct partitioning sub-domains, continuous domain decomposition which is physically localized and implicitly incremental. Therefore it is particularly suitable for load-balancing of high-performance CFD simulations.
A novel partitioning method for block-structured adaptive meshes
NASA Astrophysics Data System (ADS)
Fu, Lin; Litvinov, Sergej; Hu, Xiangyu Y.; Adams, Nikolaus A.
2017-07-01
We propose a novel partitioning method for block-structured adaptive meshes utilizing the meshless Lagrangian particle concept. With the observation that an optimum partitioning has high analogy to the relaxation of a multi-phase fluid to steady state, physically motivated model equations are developed to characterize the background mesh topology and are solved by multi-phase smoothed-particle hydrodynamics. In contrast to well established partitioning approaches, all optimization objectives are implicitly incorporated and achieved during the particle relaxation to stationary state. Distinct partitioning sub-domains are represented by colored particles and separated by a sharp interface with a surface tension model. In order to obtain the particle relaxation, special viscous and skin friction models, coupled with a tailored time integration algorithm are proposed. Numerical experiments show that the present method has several important properties: generation of approximately equal-sized partitions without dependence on the mesh-element type, optimized interface communication between distinct partitioning sub-domains, continuous domain decomposition which is physically localized and implicitly incremental. Therefore it is particularly suitable for load-balancing of high-performance CFD simulations.
Microbial community assembly and metabolic function during mammalian corpse decomposition
Metcalf, Jessica L; Xu, Zhenjiang Zech; Weiss, Sophie; Lax, Simon; Van Treuren, Will; Hyde, Embriette R.; Song, Se Jin; Amir, Amnon; Larsen, Peter; Sangwan, Naseer; Haarmann, Daniel; Humphrey, Greg C; Ackermann, Gail; Thompson, Luke R; Lauber, Christian; Bibat, Alexander; Nicholas, Catherine; Gebert, Matthew J; Petrosino, Joseph F; Reed, Sasha C.; Gilbert, Jack A; Lynne, Aaron M; Bucheli, Sibyl R; Carter, David O; Knight, Rob
2016-01-01
Vertebrate corpse decomposition provides an important stage in nutrient cycling in most terrestrial habitats, yet microbially mediated processes are poorly understood. Here we combine deep microbial community characterization, community-level metabolic reconstruction, and soil biogeochemical assessment to understand the principles governing microbial community assembly during decomposition of mouse and human corpses on different soil substrates. We find a suite of bacterial and fungal groups that contribute to nitrogen cycling and a reproducible network of decomposers that emerge on predictable time scales. Our results show that this decomposer community is derived primarily from bulk soil, but key decomposers are ubiquitous in low abundance. Soil type was not a dominant factor driving community development, and the process of decomposition is sufficiently reproducible to offer new opportunities for forensic investigations.
The impact of shallow burial on differential decomposition to the body: a temperate case study.
Schotsmans, Eline M J; Van de Voorde, Wim; De Winne, Joan; Wilson, Andrew S
2011-03-20
Extant literature contains a number of specific case studies on differential decomposition involving adipocere formation or desiccation, but few describe the co-occurrence of these features within a temperate climate. The case of a 65-year-old male, partially buried in a shallow grave for 7 months, is presented in which the soft tissues of the body were outwardly well preserved. The right leg was desiccated, some parts of the body were covered with adipocere (head, neck, right shoulder, upper torso and left leg) and other parts could be classified as in the early stages of decomposition. In this study the taphonomic variables resulting in differential decomposition with desiccation and adipocere formation are discussed. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Microbial community assembly and metabolic function during mammalian corpse decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Metcalf, J. L.; Xu, Z. Z.; Weiss, S.
2015-12-10
Vertebrate corpse decomposition provides an important stage in nutrient cycling in most terrestrial habitats, yet microbially mediated processes are poorly understood. Here we combine deep microbial community characterization, community-level metabolic reconstruction, and soil biogeochemical assessment to understand the principles governing microbial community assembly during decomposition of mouse and human corpses on different soil substrates. We find a suite of bacterial and fungal groups that contribute to nitrogen cycling and a reproducible network of decomposers that emerge on predictable time scales. Our results show that this decomposer community is derived primarily from bulk soil, but key decomposers are ubiquitous in low abundance. Soil type was not a dominant factor driving community development, and the process of decomposition is sufficiently reproducible to offer new opportunities for forensic investigations.
NASA Astrophysics Data System (ADS)
Hebert, Philippe; Saint-Amans, Charles
2013-06-01
A detailed description of the reaction rates and mechanisms occurring in shock-induced decomposition of condensed explosives is very important for improving the predictive capabilities of shock-to-detonation transition models. However, such experimental data are difficult to measure directly during detonation experiments. By coupling pulsed laser ignition of an explosive in a diamond anvil cell (DAC) with time-resolved streak camera recording of transmitted light, it is possible to make direct observations of deflagration phenomena at detonation pressure. We have developed an experimental set-up that allows measurements of combustion front propagation rates and time-resolved absorption spectra. The decomposition reactions are initiated using a nanosecond YAG laser and their kinetics are followed by time-resolved absorption spectroscopy. The results obtained for two explosives, nitromethane (NM) and HMX, are presented in this paper. For NM, a change in reactivity is clearly seen around 25 GPa. Below this pressure, the reaction products are essentially carbon residues, whereas at higher pressure a transient absorption feature is first observed, followed by the formation of a white amorphous product. For HMX, the evolution of the absorption as a function of time indicates a multi-step reaction mechanism which is found to depend on both the initial pressure and the laser fluence.
Independent component analysis decomposition of hospital emergency department throughput measures
NASA Astrophysics Data System (ADS)
He, Qiang; Chu, Henry
2016-05-01
We present a method adapted from medical sensor data analysis, viz. independent component analysis of electroencephalography data, to health system analysis. Timely and effective care in a hospital emergency department is measured by throughput measures such as the median times patients spent before they were admitted as inpatients, before they were sent home, and before they were seen by a healthcare professional. We consider a set of five such measures collected at 3,086 hospitals distributed across the U.S. One model of the performance of an emergency department is that these correlated throughput measures are linear combinations of some underlying sources. The independent component analysis decomposition of the data set can thus be viewed as transforming the set of performance measures collected at a site into the outputs of spatial filters applied to the whole multi-measure data. We compare the independent component sources with the output of conventional principal component analysis to show that the independent components are more suitable for understanding the data set through visualizations.
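A small Python sketch of the decomposition being compared above, using scikit-learn's FastICA and PCA on a synthetic hospital-by-measure matrix; the number of latent sources, the mixing matrix and the data are all hypothetical.

import numpy as np
from sklearn.decomposition import FastICA, PCA

rng = np.random.default_rng(0)
S = rng.laplace(size=(3086, 3))                 # hypothetical latent "source" behaviors
A = rng.normal(size=(3, 5))                     # hypothetical mixing into 5 throughput measures
X = S @ A + 0.1 * rng.normal(size=(3086, 5))    # observed measures per hospital

sources_ica = FastICA(n_components=3, random_state=0).fit_transform(X)
sources_pca = PCA(n_components=3).fit_transform(X)   # PCA baseline for comparison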
Dynamics of the oral microbiota as a tool to estimate time since death.
Adserias-Garriga, J; Quijada, N M; Hernandez, M; Rodríguez Lázaro, D; Steadman, D; Garcia-Gil, L J
2017-06-27
The oral cavity harbors one of the most diverse microbiomes in the human body; it has been shown to be the second most complex after that of the gastrointestinal tract. Upon death, the indigenous microorganisms lead to the decomposition of the body, so the oral cavity and gastrointestinal tract microbiomes play a key role in human decomposition. The aim of the present study is to monitor the microbiome of decaying bodies on a daily basis and to identify signature bacterial taxa that can improve postmortem interval estimation. Three individuals (one male and two females) donated to the University of Tennessee Forensic Anthropology Center for the W.M. Bass Donated Skeletal Collection were studied. Oral swab samples were taken daily throughout the different stages of cadaveric putrefaction. DNA was extracted and analyzed by next-generation sequencing techniques. The three cadavers showed similar overall successional changes during the decomposition process. Firmicutes and Actinobacteria are the predominant phyla in the fresh stage. The presence of Tenericutes corresponds to the bloat stage. Firmicutes is the predominant phylum in advanced decay, but this Firmicutes community differs from the predominant Firmicutes of the fresh stage. This study depicts the successional changes of the thanatomicrobiome in the oral cavity and highlights its potential use in forensic cases as a quantitative and objective approach to estimating the postmortem interval from an ecological rationale. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Zeeman, Jacqueline M; McLaughlin, Jacqueline E; Cox, Wendy C
2017-11-01
With increased emphasis placed on non-academic skills in the workplace, a need exists to identify an admissions process that evaluates these skills. This study assessed the validity and reliability of an application review process involving three dedicated application reviewers in a multi-stage admissions model. A multi-stage admissions model was utilized during the 2014-2015 admissions cycle. After advancing through the academic review, each application was independently reviewed by two dedicated application reviewers utilizing a six-construct rubric (written communication, extracurricular and community service activities, leadership experience, pharmacy career appreciation, research experience, and resiliency). Rubric scores were extrapolated to a three-tier ranking to select candidates for on-site interviews. Kappa statistics were used to assess interrater reliability. A three-facet Many-Facet Rasch Model (MFRM) determined reviewer severity, candidate suitability, and rubric construct difficulty. The kappa statistic for candidates' tier rank score (n = 388 candidates) was 0.692 with a perfect agreement frequency of 84.3%. There was substantial interrater reliability between reviewers for the tier ranking (kappa: 0.654-0.710). Highest construct agreement occurred in written communication (kappa: 0.924-0.984). A three-facet MFRM analysis explained 36.9% of variance in the ratings, with 0.06% reflecting application reviewer scoring patterns (i.e., severity or leniency), 22.8% reflecting candidate suitability, and 14.1% reflecting construct difficulty. Utilization of dedicated application reviewers and a defined tiered rubric provided a valid and reliable method to effectively evaluate candidates during the application review process. These analyses provide insight into opportunities for improving the application review process among schools and colleges of pharmacy. Copyright © 2017 Elsevier Inc. All rights reserved.
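The interrater statistic reported above is Cohen's kappa computed on the paired tier ranks; a minimal illustration with hypothetical ranks (the tier values and their meanings are invented, not the study's data) is:

from sklearn.metrics import cohen_kappa_score

# Hypothetical three-tier ranks assigned by two dedicated reviewers to ten candidates.
reviewer_a = [1, 2, 2, 3, 1, 2, 3, 3, 1, 2]
reviewer_b = [1, 2, 3, 3, 1, 2, 3, 2, 1, 2]
kappa = cohen_kappa_score(reviewer_a, reviewer_b)     # chance-corrected agreement
perfect_agreement = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / len(reviewer_a)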
Fuzzy Neural Network-Based Interacting Multiple Model for Multi-Node Target Tracking Algorithm
Sun, Baoliang; Jiang, Chunlan; Li, Ming
2016-01-01
An interacting multiple model for a multi-node target tracking algorithm was proposed based on a fuzzy neural network (FNN) to solve the multi-node target tracking problem of wireless sensor networks (WSNs). The measured error variance was adaptively adjusted during the multiple-model interacting output stage using the difference between the theoretical and estimated values of the measured error covariance matrix. The FNN fusion system was established during multi-node fusion to integrate the target state estimates from different nodes and consequently obtain the network-level target state estimate. The feasibility of the algorithm was verified on a network of nine detection nodes. Experimental results indicated that the proposed algorithm could track the maneuvering target effectively under sensor failure and unknown system measurement errors. The proposed algorithm exhibited great practicability for multi-node target tracking in WSNs. PMID:27809271
Felo, Michael; Christensen, Brandon; Higgins, John
2013-01-01
The bioreactor volume delineating the selection of primary clarification technology is not always easily defined. Development of a commercial scale process for the manufacture of therapeutic proteins requires scale-up from a few liters to thousands of liters. While the separation techniques used for protein purification are largely conserved across scales, the separation techniques for primary cell culture clarification vary with scale. Process models were developed to compare monoclonal antibody production costs using two cell culture clarification technologies. One process model was created for cell culture clarification by disc stack centrifugation with depth filtration. A second process model was created for clarification by multi-stage depth filtration. Analyses were performed to examine the influence of bioreactor volume, product titer, depth filter capacity, and facility utilization on overall operating costs. At bioreactor volumes <1,000 L, clarification using multi-stage depth filtration offers cost savings compared to clarification using centrifugation. For bioreactor volumes >5,000 L, clarification using centrifugation followed by depth filtration offers significant cost savings. For bioreactor volumes of ∼ 2,000 L, clarification costs are similar between depth filtration and centrifugation. At this scale, factors including facility utilization, available capital, ease of process development, implementation timelines, and process performance characterization play an important role in clarification technology selection. In the case study presented, a multi-product facility selected multi-stage depth filtration for cell culture clarification at the 500 and 2,000 L scales of operation. Facility implementation timelines, process development activities, equipment commissioning and validation, scale-up effects, and process robustness are examined. © 2013 American Institute of Chemical Engineers.
Stoyanova, Alexandrina; Teale, Andrew M; Toulouse, Julien; Helgaker, Trygve; Fromager, Emmanuel
2013-10-07
The alternative separation of exchange and correlation energies proposed by Toulouse et al. [Theor. Chem. Acc. 114, 305 (2005)] is explored in the context of multi-configuration range-separated density-functional theory. The new decomposition of the short-range exchange-correlation energy relies on the auxiliary long-range interacting wavefunction rather than the Kohn-Sham (KS) determinant. The advantage, relative to the traditional KS decomposition, is that the wavefunction part of the energy is now computed with the regular (fully interacting) Hamiltonian. One potential drawback is that, because of double counting, the wavefunction used to compute the energy cannot be obtained by minimizing the energy expression with respect to the wavefunction parameters. The problem is overcome by using short-range optimized effective potentials (OEPs). The resulting combination of OEP techniques with wavefunction theory has been investigated in this work, at the Hartree-Fock (HF) and multi-configuration self-consistent-field (MCSCF) levels. In the HF case, an analytical expression for the energy gradient has been derived and implemented. Calculations have been performed within the short-range local density approximation on H2, N2, Li2, and H2O. Significant improvements in binding energies are obtained with the new decomposition of the short-range energy. The importance of optimizing the short-range OEP at the MCSCF level when static correlation becomes significant has also been demonstrated for H2, using a finite-difference gradient. The implementation of the analytical gradient for MCSCF wavefunctions is currently in progress.
Potential contributions of root decomposition to the nitrogen cycle in arctic forest and tundra.
Träger, Sabrina; Milbau, Ann; Wilson, Scott D
2017-12-01
Plant contributions to the nitrogen (N) cycle from decomposition are likely to be altered by vegetation shifts associated with climate change. Roots account for the majority of soil organic matter input from vegetation, but little is known about differences between vegetation types in their root contributions to nutrient cycling. Here, we examine the potential contribution of fine roots to the N cycle in forest and tundra to gain insight into belowground consequences of the widely observed increase in woody vegetation that accompanies climate change in the Arctic. We combined measurements of root production from minirhizotron images with tissue analysis of roots from differing root diameter and color classes to obtain potential N input following decomposition. In addition, we tested for changes in N concentration of roots during early stages of decomposition, and investigated whether vegetation type (forest or tundra) affected changes in tissue N concentration during decomposition. For completeness, we also present respective measurements of leaves. The potential N input from roots was twofold greater in forest than in tundra, mainly due to greater root production in forest. Potential N input varied with root diameter and color, but this variation tended to be similar in forest and tundra. As for roots, the potential N input from leaves was significantly greater in forest than in tundra. Vegetation type had no effect on changes in root or leaf N concentration after 1 year of decomposition. Our results suggest that shifts in vegetation that accompany climate change in the Arctic will likely increase plant-associated potential N input both belowground and aboveground. In contrast, shifts in vegetation might not alter changes in tissue N concentration during early stages of decomposition. Overall, differences between forest and tundra in potential contribution of decomposing roots to the N cycle reinforce differences between habitats that occur for leaves.
Ren, Wei-ling; Guo, Jian-fen; Wu, Bo-bo; Wan, Jing-juan; Ji, Shu-rong; Liu, Xiao-fei
2015-04-01
A field experiment was conducted to examine the decomposition rates and chemical composition changes of leaf litter in logging residues of a 35-year-old secondary Castanopsis carlesii plantation over a period of one year. The mass loss rate of the leaf litter showed an exponential decrease with time from May 2012 to April 2013, with a total loss of 80% of the initial dry mass. Net potassium (K) release was observed during this period, with only 5% of the initial K remaining. Nitrogen (N) featured a pattern of accumulation at the early stage and release later, while phosphorus (P) exhibited a sequence of release, accumulation, and release. The remaining N and P were 19% and 16% of their initial masses, respectively. The release rate was highest for K and lowest for N. Decomposition of lignin showed a trend of release-accumulation-release from May 2012 to October 2012, with no further significant change from November 2012 to the end of the experiment. The concentration of cellulose remained nearly unchanged during the experiment. The N/P ratio increased with decomposition, ranging from 18.6 to 21.1. The lignin/N ratio fluctuated greatly at the early stage and was almost stable thereafter.
Han, Xu; Cheng, Zhihui; Meng, Huanwen
2017-07-01
Garlic (Allium sativum L.) stalk is a byproduct of garlic production that is normally regarded as waste but is now considered a useful biological resource. It is necessary to utilize this resource efficiently and reasonably to reduce environmental pollution and achieve sustainable agricultural development. The effect of garlic stalk decomposed for different durations was investigated in this study using wheat (Triticum aestivum L.) and lettuce (Lactuca sativa var. crispa L.) as test plants. Garlic stalk in the early stages of decomposition inhibited the shoot and root lengths of wheat and lettuce, but promoted them in later stages; longer durations of garlic stalk decomposition significantly increased the shoot and root fresh weights of wheat and lettuce, whereas shorter decomposition durations significantly decreased them; and garlic stalk at different decomposition durations increased the activities of urease, sucrase and alkaline phosphatase in soil where wheat or lettuce was planted. Garlic stalk decomposed for 30 or 40 days promoted the growth of wheat and lettuce plants as well as soil enzyme activities. These results may provide a scientific basis for the study and application of garlic stalk. © 2016 Society of Chemical Industry.
Towards multi-field D-brane inflation in a warped throat
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Heng-Yu; Gong, Jinn-Ouk; Koyama, Kazuya
2010-11-01
We study the inflationary dynamics in a model of slow-roll inflation in warped throat. Inflation is realized by the motion of a D-brane along the radial direction of the throat, and at later stages instabilities develop in the angular directions. We closely investigate both the single field potential relevant for the slow-roll phase, and the full multi-field one including the angular modes which becomes important at later stages. We study the main features of the instability process, discussing its possible consequences and identifying the vacua towards which the angular modes are driven.
Multi-stage decoding for multi-level block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao
1991-01-01
Various types of multistage decoding for multilevel block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum-likelihood or bounded-distance, are discussed. The error performance of the codes is analyzed for a memoryless additive channel under the various types of multi-stage decoding, and upper bounds on the probability of incorrect decoding are derived. It was found that, if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. It was also found that the difference in performance between suboptimum multi-stage soft-decision maximum-likelihood decoding of a modulation code and single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multi-stage decoding of multi-level modulation codes thus offers a way to achieve bandwidth efficiency, coding gain, and low decoding complexity simultaneously.
Wang, Lu; Xu, Lisheng; Feng, Shuting; Meng, Max Q-H; Wang, Kuanquan
2013-11-01
Analysis of the pulse waveform is a low-cost, non-invasive method for obtaining vital information related to the condition of the cardiovascular system. In recent years, different Pulse Decomposition Analysis (PDA) methods have been applied to disclose the pathological mechanisms of the pulse waveform. All of these methods decompose a single-period pulse waveform into a fixed number (such as 3, 4 or 5) of individual waves, and they do not pay much attention to the estimation error of the key points in the pulse waveform, even though the estimation of human vascular conditions depends on the positions of these key points. In this paper, we propose a Multi-Gaussian (MG) model to fit real pulse waveforms using an adaptive number (4 or 5 in our study) of Gaussian waves. The unknown parameters in the MG model are estimated by the Weighted Least Squares (WLS) method, and the optimized weight values corresponding to different sampling points are selected using the Multi-Criteria Decision Making (MCDM) method. The performance of the MG model and the WLS method has been evaluated by fitting 150 real pulse waveforms of five different types. The resulting Normalized Root Mean Square Error (NRMSE) was less than 2.0% and the estimation accuracy for the key points was satisfactory, demonstrating that the proposed method is effective in compressing, synthesizing and analyzing pulse waveforms. Copyright © 2013 Elsevier Ltd. All rights reserved.
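A minimal Python sketch of a multi-Gaussian fit with weighted least squares via scipy.optimize.curve_fit; the synthetic pulse, the number of Gaussians and the per-sample weighting are placeholders and do not implement the paper's MCDM-based weight selection.

import numpy as np
from scipy.optimize import curve_fit

def multi_gauss(t, *params):
    # Sum of Gaussians; params = (a1, mu1, s1, a2, mu2, s2, ...).
    y = np.zeros_like(t)
    for a, mu, s in zip(params[0::3], params[1::3], params[2::3]):
        y += a * np.exp(-(t - mu) ** 2 / (2.0 * s ** 2))
    return y

t = np.linspace(0.0, 1.0, 200)                  # one normalized pulse period
true_p = [1.0, 0.20, 0.06, 0.5, 0.40, 0.07, 0.35, 0.60, 0.08, 0.2, 0.80, 0.09]
pulse = multi_gauss(t, *true_p) + 0.01 * np.random.default_rng(0).normal(size=t.size)

p0 = [0.9, 0.22, 0.05, 0.4, 0.42, 0.08, 0.3, 0.62, 0.07, 0.15, 0.78, 0.10]
weights = 1.0 + 4.0 * np.abs(np.gradient(pulse))          # placeholder per-sample weights
popt, _ = curve_fit(multi_gauss, t, pulse, p0=p0, sigma=1.0 / weights)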
NASA Technical Reports Server (NTRS)
Ortega, J. M.
1986-01-01
Various graduate research activities in the field of computer science are reported. Among the topics discussed are: (1) failure probabilities in multi-version software; (2) Gaussian Elimination on parallel computers; (3) three dimensional Poisson solvers on parallel/vector computers; (4) automated task decomposition for multiple robot arms; (5) multi-color incomplete cholesky conjugate gradient methods on the Cyber 205; and (6) parallel implementation of iterative methods for solving linear equations.
NASA Astrophysics Data System (ADS)
Gulis, V.; Ferreira, V. J.; Graca, M. A.
2005-05-01
Traditional approaches to assess stream ecosystem health rely on structural parameters, e.g. a variety of biotic indices. The goal of the Europe-wide RivFunction project is to develop methodology that uses functional parameters (e.g. plant litter decomposition) to this end. Here we report on decomposition experiments carried out in Portugal in five pairs of streams that differed in dissolved inorganic nutrients. On average, decomposition rates of alder and oak leaves were 2.8 and 1.4 times higher in high nutrient streams in coarse and fine mesh bags, respectively, than in corresponding reference streams. Breakdown rate correlated better with stream water SRP concentration than with TIN. Fungal biomass and sporulation rates of aquatic hyphomycetes associated with decomposing leaves were stimulated by higher nutrient levels. Both fungal parameters measured at very early stages of decomposition (e.g. days 7-13) correlated well with overall decomposition rates. Eutrophication had no significant effect on shredder abundances in leaf bags but species richness was higher in disturbed streams. Decomposition is a key functional parameter in streams integrating many other variables and can be useful in assessing stream ecosystem health. We also argue that because decomposition is often controlled by fungal activity, microbial parameters can also be useful in bioassessment.
van Witteloostuijn, Arjen
2018-01-01
In this paper, we develop an ecological, multi-level model that can be used to study the evolution of emerging technology. More specifically, by defining technology as a system composed of a set of interacting components, we can build upon the argument of multi-level density dependence from organizational ecology to develop a distribution-independent model of technological evolution. This allows us to distinguish between different stages of component development, which provides more insight into the emergence of stable component configurations, or dominant designs. We validate our hypotheses in the biotechnology industry by using patent data from the USPTO from 1976 to 2003. PMID:29795575
NASA Astrophysics Data System (ADS)
Feigin, Alexander; Gavrilov, Andrey; Loskutov, Evgeny; Mukhin, Dmitry
2015-04-01
Proper decomposition of a complex system into well separated "modes" is a way to reveal and understand the mechanisms governing the system's behaviour, as well as to discover essential feedbacks and nonlinearities. Decomposition is also a natural procedure for constructing adequate yet maximally simple models of both the corresponding sub-systems and the system as a whole. In recent works, two new methods for decomposing the Earth's climate system into well separated modes were discussed. The first method [1-3] is based on MSSA (Multichannel Singular Spectrum Analysis) [4] for linear expansion of vector (space-distributed) time series and makes allowance for delayed correlations between processes recorded at spatially separated points. The second method [5-7] allows the construction of nonlinear dynamic modes, but neglects delayed correlations. It was demonstrated [1-3] that the first method provides effective separation of different time scales but prevents correct reduction of the data dimension: the slope of the variance spectrum of the spatio-temporal empirical orthogonal functions that serve as "structural material" for the linear spatio-temporal modes is too flat. The second method overcomes this problem: the variance spectrum of the nonlinear modes decays much more sharply [5-7]. However, neglecting time-lag correlations introduces a mode-selection error that is uncontrolled and grows with the mode time scale. In this report we combine the two methods in such a way that the resulting algorithm constructs nonlinear spatio-temporal modes. The algorithm is applied to the decomposition of (i) several hundred years of globally distributed data generated by the INM RAS Coupled Climate Model [8], and (ii) a 156-year time series of SST anomalies distributed over the globe [9]. We compare the efficiency of the different decomposition methods and discuss the ability of nonlinear spatio-temporal modes to provide adequate yet maximally simple ("optimal") models of climate systems. 1. Feigin A.M., Mukhin D., Gavrilov A., Volodin E.M., and Loskutov E.M. (2013) "Separation of spatial-temporal patterns ("climatic modes") by combined analysis of really measured and generated numerically vector time series", AGU 2013 Fall Meeting, Abstract NG33A-1574. 2. Alexander Feigin, Dmitry Mukhin, Andrey Gavrilov, Evgeny Volodin, and Evgeny Loskutov (2014) "Approach to analysis of multiscale space-distributed time series: separation of spatio-temporal modes with essentially different time scales", Geophysical Research Abstracts, Vol. 16, EGU2014-6877. 3. Dmitry Mukhin, Dmitri Kondrashov, Evgeny Loskutov, Andrey Gavrilov, Alexander Feigin, and Michael Ghil (2014) "Predicting critical transitions in ENSO models, Part II: Spatially dependent models", Journal of Climate (accepted, doi: 10.1175/JCLI-D-14-00240.1). 4. Ghil, M., R. M. Allen, M. D. Dettinger, K. Ide, D. Kondrashov, et al. (2002) "Advanced spectral methods for climatic time series", Rev. Geophys. 40(1), 3.1-3.41. 5. Dmitry Mukhin, Andrey Gavrilov, Evgeny M Loskutov and Alexander M Feigin (2014) "Nonlinear Decomposition of Climate Data: a New Method for Reconstruction of Dynamical Modes", AGU 2014 Fall Meeting, Abstract NG43A-3752. 6. Andrey Gavrilov, Dmitry Mukhin, Evgeny Loskutov, and Alexander Feigin (2015) "Empirical decomposition of climate data into nonlinear dynamic modes", Geophysical Research Abstracts, Vol. 17, EGU2015-627. 7. 
Dmitry Mukhin, Andrey Gavrilov, Evgeny Loskutov, Alexander Feigin, and Juergen Kurths (2015) "Reconstruction of principal dynamical modes from climatic variability: nonlinear approach", Geophysical Research Abstracts, Vol. 17, EGU2015-5729. 8. http://83.149.207.89/GCM_DATA_PLOTTING/GCM_INM_DATA_XY_en.htm. 9. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/.
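A minimal numpy sketch of the MSSA building block used by the first (linear) method [1-4]: the multichannel series is embedded into a block-Hankel trajectory matrix whose SVD yields the spatio-temporal EOFs and the variance spectrum discussed above. The nonlinear-mode construction of [5-7] and the combined algorithm are not reproduced here.

import numpy as np

def mssa_eofs(X, M):
    # X: (channels, time) multichannel series; M: lag (embedding) window.
    L, N = X.shape
    K = N - M + 1
    Xc = X - X.mean(axis=1, keepdims=True)
    # Block-Hankel trajectory matrix: one row per (channel, lag) pair.
    traj = np.vstack([Xc[:, k:k + M].ravel() for k in range(K)]).T   # (L*M, K)
    U, s, _ = np.linalg.svd(traj, full_matrices=False)
    st_eofs = U.reshape(L, M, -1)            # spatio-temporal EOFs
    variance = s ** 2 / np.sum(s ** 2)       # variance spectrum
    return st_eofs, variance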
Luck, Margaux; Bertho, Gildas; Bateson, Mathilde; Karras, Alexandre; Yartseva, Anastasia; Thervet, Eric
2016-01-01
1H Nuclear Magnetic Resonance (NMR)-based metabolic profiling is very promising for diagnosing the stages of chronic kidney disease (CKD). Because of the high dimensionality of NMR spectral datasets and the complex mixture of metabolites in biological samples, the identification of discriminant biomarkers of a disease is challenging. None of the widely used chemometric methods in NMR metabolomics performs a locally exhaustive exploration of the data. We developed a descriptive and easily understandable approach that searches for discriminant local phenomena using an original exhaustive rule-mining algorithm in order to predict two groups of patients: 1) patients with low to mild CKD stages and no renal failure, and 2) patients with moderate to established CKD stages and renal failure. Our predictive algorithm explores the m-dimensional variable space to capture local overdensities of the two groups of patients in the form of easily interpretable rules. Afterwards, an L2-penalized logistic regression on the discriminant rules was used to build predictive models of the CKD stages. We explored a complex multi-source dataset that included the clinical, demographic, clinical chemistry, renal pathology and urine metabolomic data of a cohort of 110 patients. Given this multi-source dataset and the complex nature of metabolomic data, we analyzed 1- and 2-dimensional rules in order to integrate the information carried by interactions between variables. The results indicated that our local algorithm is a valuable analytical method for the precise characterization of multivariate CKD stage profiles and is as efficient as the classical global model using chi2 variable selection, with approximately 70% correct classification. The resulting predictive models predominantly identify urinary metabolites (such as 3-hydroxyisovalerate, carnitine, citrate, dimethylsulfone, creatinine and N-methylnicotinamide) as relevant variables, indicating that CKD significantly affects the urinary metabolome. In addition, knowledge of the concentrations of urinary metabolites alone classifies the CKD stage of the patients correctly. PMID:27861591
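A small Python sketch of the second modeling step described above, an L2-penalized logistic regression on binary rule indicators, using scikit-learn; the rule matrix, labels and settings are synthetic placeholders, not the study's data.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
rules = rng.integers(0, 2, size=(110, 40))          # rules[i, j] = 1 if patient i satisfies rule j
y = (rules[:, :5].sum(axis=1) + rng.normal(0, 1, 110) > 2.5).astype(int)   # synthetic CKD groups

clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
cv_accuracy = cross_val_score(clf, rules, y, cv=5).mean()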
Ficken, Cari D; Wright, Justin P
2017-01-01
Litter quality and soil environmental conditions are well-studied drivers influencing decomposition rates, but the role played by disturbance legacy, such as fire history, in mediating these drivers is not well understood. Fire history may impact decomposition directly, through changes in soil conditions that impact microbial function, or indirectly, through shifts in plant community composition and litter chemistry. Here, we compared early-stage decomposition rates across longleaf pine forest blocks managed with varying fire frequencies (annual burns, triennial burns, fire-suppression). Using a reciprocal transplant design, we examined how litter chemistry and soil characteristics independently and jointly influenced litter decomposition. We found that both litter chemistry and soil environmental conditions influenced decomposition rates, but only the former was affected by historical fire frequency. Litter from annually burned sites had higher nitrogen content than litter from triennially burned and fire suppression sites, but this was correlated with only a modest increase in decomposition rates. Soil environmental conditions had a larger impact on decomposition than litter chemistry. Across the landscape, decomposition differed more along soil moisture gradients than across fire management regimes. These findings suggest that fire frequency has a limited effect on litter decomposition in this ecosystem, and encourage extending current decomposition frameworks into disturbed systems. However, litter from different species lost different masses due to fire, suggesting that fire may impact decomposition through the preferential combustion of some litter types. Overall, our findings also emphasize the important role of spatial variability in soil environmental conditions, which may be tied to fire frequency across large spatial scales, in driving decomposition rates in this system.
Wright, Justin P.
2017-01-01
Litter quality and soil environmental conditions are well-studied drivers influencing decomposition rates, but the role played by disturbance legacy, such as fire history, in mediating these drivers is not well understood. Fire history may impact decomposition directly, through changes in soil conditions that impact microbial function, or indirectly, through shifts in plant community composition and litter chemistry. Here, we compared early-stage decomposition rates across longleaf pine forest blocks managed with varying fire frequencies (annual burns, triennial burns, fire-suppression). Using a reciprocal transplant design, we examined how litter chemistry and soil characteristics independently and jointly influenced litter decomposition. We found that both litter chemistry and soil environmental conditions influenced decomposition rates, but only the former was affected by historical fire frequency. Litter from annually burned sites had higher nitrogen content than litter from triennially burned and fire suppression sites, but this was correlated with only a modest increase in decomposition rates. Soil environmental conditions had a larger impact on decomposition than litter chemistry. Across the landscape, decomposition differed more along soil moisture gradients than across fire management regimes. These findings suggest that fire frequency has a limited effect on litter decomposition in this ecosystem, and encourage extending current decomposition frameworks into disturbed systems. However, litter from different species lost different masses due to fire, suggesting that fire may impact decomposition through the preferential combustion of some litter types. Overall, our findings also emphasize the important role of spatial variability in soil environmental conditions, which may be tied to fire frequency across large spatial scales, in driving decomposition rates in this system. PMID:29023560
NASA Astrophysics Data System (ADS)
Gu, Rongbao; Shao, Yanmin
2016-07-01
In this paper, a new concept of multi-scale singular value decomposition entropy based on DCCA cross-correlation analysis is proposed and its predictive power for the Dow Jones Industrial Average Index is studied. Using Granger causality analysis at different time scales, it is found that the singular value decomposition entropy has predictive power for the Dow Jones Industrial Average Index for periods of less than one month, but not for longer periods. This establishes the horizon over which singular value decomposition entropy predicts the stock market, extending the result of Caraiani (2014). On the other hand, the result also reveals an essential characteristic of the stock market as a chaotic dynamical system.
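One common definition of singular value decomposition entropy for a single series, sketched below in Python, embeds the series with a delay window, takes the SVD of the embedded matrix and computes the Shannon entropy of the normalized singular-value spectrum; the paper's multi-scale, DCCA-based variant is more involved, and the embedding parameters and data here are illustrative only.

import numpy as np

def svd_entropy(x, m=10, tau=1):
    # Delay-embed the series, then take the entropy of its normalized singular values.
    n = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau: i * tau + n] for i in range(m)])
    s = np.linalg.svd(emb, compute_uv=False)
    p = s / s.sum()
    return -np.sum(p * np.log(p + 1e-12))

# Rolling-window entropy of synthetic daily returns (illustrative data only).
returns = np.random.default_rng(0).normal(0.0, 0.01, 2000)
window = 250
entropy = np.array([svd_entropy(returns[t - window:t]) for t in range(window, len(returns))])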
Sihi, Debjani; Inglett, Patrick W; Gerber, Stefan; Inglett, Kanika S
2018-01-01
Temperature sensitivity of anaerobic carbon mineralization in wetlands remains poorly represented in most climate models and is especially unconstrained for warmer subtropical and tropical systems, which account for a large proportion of global methane emissions. Several studies of experimental warming have documented thermal acclimation of soil respiration involving adjustments in microbial physiology or carbon use efficiency (CUE), with an initial decline in CUE with warming followed by a partial recovery in CUE at a later stage. The variable CUE implies that the rate of warming may impact microbial acclimation and the rate of carbon dioxide (CO2) and methane (CH4) production. Here, we assessed the effects of warming rate on the decomposition of subtropical peats by applying either a large single-step (10°C within a day) or a slow ramping (0.1°C/day for 100 days) temperature increase. The extent of thermal acclimation was tested by monitoring CO2 and CH4 production, CUE, and microbial biomass. Total gaseous C loss, CUE, and MBC were greater in the slow (ramp) warming treatment. However, greater CH4-C:CO2-C ratios led to a greater global warming potential in the fast (step) warming treatment. The effect of gradual warming on decomposition was more pronounced in recalcitrant and nutrient-limited soils. Stable carbon isotopes of CH4 and CO2 further indicated the possibility of different carbon processing pathways under the contrasting warming rates. The different responses in the fast vs. slow warming treatments, combined with different endpoints, may indicate alternate pathways with long-term consequences. Incorporation of the experimental results into organic matter decomposition models suggests that parameter uncertainties in CUE and CH4-C:CO2-C ratios have a larger impact on long-term soil organic carbon and global warming potential than uncertainty in model structure, and shows that the rate of warming is central to understanding the response of wetland soils to global climate change. © 2017 John Wiley & Sons Ltd.
Bounds on Block Error Probability for Multilevel Concatenated Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Moorthy, Hari T.; Stojanovic, Diana
1996-01-01
Maximum likelihood decoding of long block codes is not feasible due to its large complexity. Some classes of codes are shown to be decomposable into multilevel concatenated codes (MLCC); for these codes, multistage decoding provides a good trade-off between performance and complexity. In this paper, we derive an upper bound on the probability of block error for MLCC and use this bound to evaluate differences in performance among different decompositions of some codes. The examples given show that a significant reduction in complexity can be achieved when the number of decoding stages is increased, and that the resulting performance degradation varies for different decompositions. A guideline is given for finding good m-level decompositions.
Polymethacrylic acid as a new precursor of CuO nanoparticles
NASA Astrophysics Data System (ADS)
Hosny, Nasser Mohammed; Zoromba, Mohamed Shafick
2012-11-01
Polymethacrylic acid and its copper complexes have been synthesized and characterized. These complexes have been used as precursors to produce CuO nanoparticles by thermal decomposition in air. The stages of decomposition and the calcination temperature of the precursors have been determined from thermal analysis (TGA). The obtained CuO nanoparticles have been characterized by X-ray diffraction (XRD), scanning tunneling microscopy (STM) and transmission electron microscopy (TEM). XRD showed a monoclinic structure with a particle size of 8-20 nm for the synthesized copper oxide nanoparticles. These nanoparticles are catalytically active in decomposing hydrogen peroxide, and a mechanism of decomposition has been suggested.
NASA Astrophysics Data System (ADS)
Ladriere, J.
1992-04-01
The thermal decompositions of K3Fe(ox)3·3H2O and K2Fe(ox)2·2H2O in nitrogen have been studied using Mössbauer spectroscopy, X-ray diffraction and thermal analysis methods in order to determine the nature of the solid residues obtained after each stage of decomposition. Particularly, after dehydration at 113°C, the ferric complex is reduced into a ferrous compound, with a quadrupole splitting of 3.89 mm/s, which corresponds to the anhydrous form of K2Fe(ox)2·2H2O.
Zielinska, Magdalena; Markowski, Marek
2016-04-01
The aim of this study was to determine the effect of: (a) different drying methods, (b) hot air temperature in a convection oven, and (c) the moisture content of fruits dehydrated by multi-stage drying, which involves a transition between different stages of drying, on the rehydration kinetics of dry blueberries. Models describing rehydration kinetics were also studied. Blueberries dehydrated by multi-stage microwave-assisted drying, which involved a hot air pre-drying step at 80 °C until the achievement of a moisture content of 1.95 kg H2O kg^-1 DM, were characterized by significantly higher rates of initial and successive rehydration as well as smaller initial loss of soluble solids in comparison with the samples dried by other methods. The highest initial rehydration rate and the smallest loss of soluble solids after 30 min of soaking were determined at 0.46 min^-1 and 0.29 kg DM kg^-1 DM, respectively. The Peleg model and the first-order kinetic model fit the experimental data well. Copyright © 2015 Elsevier Ltd. All rights reserved.
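The two rehydration models named above have standard forms; written generically (the authors' fitted parameter values are not reproduced here), with M(t) the moisture content at soaking time t:

\[ \text{Peleg:}\quad M(t) = M_0 + \frac{t}{k_1 + k_2\,t}, \qquad \text{first-order kinetics:}\quad M(t) = M_e - (M_e - M_0)\,e^{-k t}, \]

where M_0 is the initial moisture content, M_e the equilibrium moisture content, 1/k_1 the initial rehydration rate and 1/k_2 the asymptotic moisture gain.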
[Effects of tree species fine root decomposition on soil active organic carbon].
Liu, Yan; Wang, Si-Long; Wang, Xiao-Wei; Yu, Xiao-Jun; Yang, Yue-Jun
2007-03-01
With an incubation test, this paper studied the effects of the fine root decomposition of Alnus cremastogyne, Cunninghamia lanceolata and Michelia macclurei on the content of soil active organic carbon at 9 °C, 14 °C, 24 °C and 28 °C. The results showed that the decomposition rate of fine roots differed significantly among the test tree species, decreasing in the order M. macclurei > A. cremastogyne > C. lanceolata. The decomposition rate increased with increasing temperature, but declined with prolonged incubation time. Fine root source, incubation temperature, and incubation time all affected the contents of soil microbial biomass carbon and water-soluble organic carbon. The decomposition of fine roots increased soil microbial biomass carbon and water-soluble organic carbon significantly, and the effect decreased in the order M. macclurei > A. cremastogyne > C. lanceolata. Higher contents of soil microbial biomass carbon and water-soluble organic carbon were observed at medium temperature and the middle incubation stage. Fine root decomposition had less effect on the content of soil readily oxidized organic carbon.
NASA Astrophysics Data System (ADS)
Azarnova, T. V.; Titova, I. A.; Barkalov, S. A.
2018-03-01
The article presents an algorithm for obtaining an integral assessment of the quality of an organization from the perspective of its customers, based on a method of aggregating linguistic information over a multilevel hierarchical system of quality assessment. The algorithm is constructive: it not only yields an integral evaluation but also supports the development of a quality improvement strategy based on linguistic decomposition, which identifies the minimum set of areas of work with clients whose quality must change in order to reach the required level of the integrated quality assessment.
Fusion of infrared and visible images based on BEMD and NSDFB
NASA Astrophysics Data System (ADS)
Zhu, Pan; Huang, Zhanhua; Lei, Hai
2016-07-01
This paper presents a new fusion method for visible-infrared images based on the adaptive multi-scale decomposition of bidimensional empirical mode decomposition (BEMD) and the flexible directional expansion of nonsubsampled directional filter banks (NSDFB). Compared with conventional multi-scale fusion methods, BEMD is non-parametric and completely data-driven, which makes it more suitable for decomposing and fusing non-linear signals, while NSDFB provides directional filtering on the decomposition levels to capture more of the geometrical structure of the source images. In our fusion framework, the entropies of the two source images are first calculated, and the residue of the image with the larger entropy is extracted so that it becomes highly correlated with the other source image. Then, the residue and the other source image are decomposed into low-frequency sub-bands and a sequence of high-frequency directional sub-bands at different scales using BEMD and NSDFB. In this fusion scheme, two fusion rules are used for the low-frequency sub-bands and the high-frequency directional sub-bands, respectively. Finally, the fused image is obtained by applying the corresponding inverse transform. Experimental results indicate that the proposed fusion algorithm achieves state-of-the-art performance for visible-infrared image fusion in terms of both objective assessment and subjective visual quality, even for source images obtained under different conditions. Furthermore, the fused results have high contrast, salient target information and rich detail, making them well suited to human visual perception and machine perception.
FACETS: multi-faceted functional decomposition of protein interaction networks
Seah, Boon-Siew; Bhowmick, Sourav S.; Forbes Dewey, C.
2012-01-01
Motivation: The availability of large-scale curated protein interaction datasets has given rise to the opportunity to investigate higher level organization and modularity within the protein–protein interaction (PPI) network using graph theoretic analysis. Despite recent progress, systems level analysis of high-throughput PPIs remains a daunting task because of the amount of data they present. In this article, we propose a novel PPI network decomposition algorithm called FACETS in order to make sense of the deluge of interaction data using Gene Ontology (GO) annotations. FACETS finds not just a single functional decomposition of the PPI network, but a multi-faceted atlas of functional decompositions that portray alternative perspectives of the functional landscape of the underlying PPI network. Each facet in the atlas represents a distinct interpretation of how the network can be functionally decomposed and organized. Our algorithm maximizes the interpretative value of the atlas by optimizing inter-facet orthogonality and intra-facet cluster modularity. Results: We tested our algorithm on the global networks from IntAct and compared it with gold standard datasets from MIPS and KEGG, demonstrating the performance of FACETS. We also performed a case study that illustrates the utility of our approach. Contact: seah0097@ntu.edu.sg or assourav@ntu.edu.sg Supplementary information: Supplementary data are available at Bioinformatics online. Availability: Our software is available freely for non-commercial purposes from: http://www.cais.ntu.edu.sg/∼assourav/Facets/ PMID:22908217
Efficient material decomposition method for dual-energy X-ray cargo inspection system
NASA Astrophysics Data System (ADS)
Lee, Donghyeon; Lee, Jiseoc; Min, Jonghwan; Lee, Byungcheol; Lee, Byeongno; Oh, Kyungmin; Kim, Jaehyun; Cho, Seungryong
2018-03-01
Dual-energy X-ray inspection systems are widely used today because they provide both X-ray attenuation contrast of the imaged object and information about its material composition. Material decomposition capability allows higher detection sensitivity for potential targets, for example purposely loaded impurities in agricultural product inspections and threats in security scans. Dual-energy X-ray transmission data can be transformed into two basis material thickness data sets, and the accuracy of this transformation relies heavily on the calibration of the material decomposition process. The calibration process in general can be laborious and time consuming. Moreover, a conventional calibration method is often challenged by the nonuniform spectral characteristics of the X-ray beam across the entire field-of-view (FOV). In this work, we developed an efficient material decomposition calibration process for a linear accelerator (LINAC) based high-energy X-ray cargo inspection system. We also proposed a multi-spot calibration method to improve the decomposition performance throughout the entire FOV. Experimental validation of the proposed method has been demonstrated by use of a cargo inspection system that supports 6 MV and 9 MV dual-energy imaging.
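A minimal sketch of the underlying two-basis-material decomposition in the log-attenuation domain, assuming the two energy channels and the basis-material coefficients are known; the coefficient values and the simple linear (uncalibrated) model below are illustrative placeholders, not the multi-spot calibration of this work.

```python
import numpy as np

# Assumed effective attenuation coefficients (1/cm) of two basis materials at
# the low- and high-energy beams; a real system would obtain these (or a
# polynomial surrogate) from a step-wedge calibration.
A = np.array([[0.45, 0.20],   # low-energy beam:  [material 1, material 2]
              [0.30, 0.18]])  # high-energy beam: [material 1, material 2]

def decompose(log_att_low, log_att_high):
    """Solve A @ t = p for the equivalent basis-material thicknesses t (cm),
    where p holds the measured -ln(I/I0) values at the two energies."""
    p = np.array([log_att_low, log_att_high])
    return np.linalg.solve(A, p)

t1, t2 = decompose(1.20, 0.85)
print(f"equivalent thicknesses: {t1:.2f} cm of material 1, {t2:.2f} cm of material 2")
```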
Kim, Il Kwang; Lee, Soo Il
2016-05-01
The modal decomposition of tapping mode atomic force microscopy microcantilevers in liquid environments was studied experimentally. Microcantilevers with different lengths and stiffnesses and two sample surfaces with different elastic moduli were used in the experiment. The response modes of the microcantilevers were extracted as proper orthogonal modes through proper orthogonal decomposition. Smooth orthogonal decomposition was used to estimate the resonance frequency directly. The effects of the tapping setpoint and the elastic modulus of the sample under test were examined in terms of their multi-mode responses with proper orthogonal modes, proper orthogonal values, smooth orthogonal modes and smooth orthogonal values. Regardless of the stiffness of the microcantilever under test, the first mode was dominant in tapping mode atomic force microscopy under normal operating conditions. However, at lower tapping setpoints, the flexible microcantilever showed modal distortion and noise near the tip when tapping on a hard sample. The stiff microcantilever had a higher mode effect on a soft sample at lower tapping setpoints. Modal decomposition for tapping mode atomic force microscopy can thus be used to estimate the characteristics of samples in liquid environments.
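A minimal sketch of extracting proper orthogonal modes and values from a snapshot matrix of cantilever deflections via the SVD, using synthetic data in place of the measured tapping-mode responses; the construction of the snapshots is an assumption for illustration only.

```python
import numpy as np

# Synthetic snapshot matrix: rows = measurement points along the cantilever,
# columns = time samples of the deflection signal.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
t = np.linspace(0.0, 1.0, 400)
snapshots = (np.outer(np.sin(np.pi * x), np.cos(60.0 * t))
             + 0.2 * np.outer(np.sin(2.0 * np.pi * x), np.cos(150.0 * t))
             + 0.01 * rng.standard_normal((x.size, t.size)))

# Proper orthogonal decomposition of the mean-subtracted snapshots.
fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
pov = s**2 / np.sum(s**2)     # proper orthogonal values (energy fractions)
modes = U[:, :2]              # first two proper orthogonal modes
print("energy captured by the first two modes:", pov[:2].sum())
```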
Online Low-Rank Representation Learning for Joint Multi-subspace Recovery and Clustering.
Li, Bo; Liu, Risheng; Cao, Junjie; Zhang, Jie; Lai, Yu-Kun; Liua, Xiuping
2017-10-06
Benefiting from global rank constraints, the low-rank representation (LRR) method has been shown to be an effective solution to subspace learning. However, the global mechanism also means that the LRR model is not suitable for handling large-scale or dynamic data. For large-scale data, the LRR method suffers from high time complexity, and for dynamic data, it has to recompute a complex rank minimization for the entire data set whenever new samples are added, making it prohibitively expensive. Existing attempts at online LRR either take a stochastic approach or build the representation purely from a small sample set and treat new input as out-of-sample data. The former often requires multiple runs for good performance and thus takes longer to run, while the latter formulates online LRR as an out-of-sample classification problem and is less robust to noise. In this paper, a novel online low-rank representation subspace learning method is proposed for both large-scale and dynamic data. The proposed algorithm is composed of two stages: static learning and dynamic updating. In the first stage, the subspace structure is learned from a small number of data samples. In the second stage, the intrinsic principal components of the entire data set are computed incrementally by utilizing the learned subspace structure, and the low-rank representation matrix can also be solved incrementally by an efficient online singular value decomposition (SVD) algorithm. The time complexity is reduced dramatically for large-scale data, and repeated computation is avoided for dynamic problems. We further perform theoretical analysis comparing the proposed online algorithm with the batch LRR method. Finally, experimental results on typical tasks of subspace recovery and subspace clustering show that the proposed algorithm performs comparably to or better than batch methods, including batch LRR, and significantly outperforms state-of-the-art online methods.
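A minimal sketch of a generic column-incremental (Brand-style) SVD update of the kind such online algorithms rely on, assuming a thin SVD of the data seen so far is available; this is a standard textbook update, not the specific online SVD routine proposed in the paper.

```python
import numpy as np

def incremental_svd(U, s, Vt, C, rank):
    """Update a thin SVD U @ diag(s) @ Vt of X when new columns C are appended,
    without recomputing the SVD of [X, C] from scratch; truncated to `rank`."""
    L = U.T @ C                        # projection of the new data on the current basis
    H = C - U @ L                      # component orthogonal to the current basis
    Q, R = np.linalg.qr(H)
    k, c = s.size, C.shape[1]
    K = np.block([[np.diag(s), L],
                  [np.zeros((c, k)), R]])
    Uk, sk, Vtk = np.linalg.svd(K, full_matrices=False)
    U_new = np.hstack([U, Q]) @ Uk
    Vt_aug = np.block([[Vt, np.zeros((k, c))],
                       [np.zeros((c, Vt.shape[1])), np.eye(c)]])
    Vt_new = Vtk @ Vt_aug
    return U_new[:, :rank], sk[:rank], Vt_new[:rank, :]

# Usage: start from a small static batch, then fold in new samples as they arrive.
rng = np.random.default_rng(1)
X0, X1 = rng.standard_normal((100, 40)), rng.standard_normal((100, 10))
U, s, Vt = np.linalg.svd(X0, full_matrices=False)
U, s, Vt = incremental_svd(U, s, Vt, X1, rank=20)
print(U.shape, s.shape, Vt.shape)      # (100, 20) (20,) (20, 50)
```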
Rodríguez, Nibaldo
2014-01-01
Two smoothing strategies, combined with autoregressive integrated moving average (ARIMA) and autoregressive neural network (ANN) models to improve the forecasting of time series, are presented. The forecasting strategy is implemented in two stages. In the first stage the time series is smoothed using either 3-point moving average smoothing or singular value decomposition of the Hankel matrix (HSVD). In the second stage, an ARIMA model and two ANNs for one-step-ahead time series forecasting are used. The coefficients of the first ANN are estimated through the particle swarm optimization (PSO) learning algorithm, while the coefficients of the second ANN are estimated with the resilient backpropagation (RPROP) learning algorithm. The proposed models are evaluated using a weekly time series of traffic accidents of the Valparaíso region, Chile, from 2003 to 2012. The best result is given by the combination HSVD-ARIMA, with a MAPE of 0.26%, followed by MA-ARIMA with a MAPE of 1.12%; the worst result is given by the MA-ANN based on PSO with a MAPE of 15.51%. PMID:25243200
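A minimal sketch of the HSVD smoothing step, assuming a rank-1 truncation of the Hankel matrix is used to extract the smooth component; the window length and the toy series are illustrative, not the settings used for the accident series.

```python
import numpy as np

def hsvd_smooth(series, window, rank=1):
    """Smooth a 1-D series by embedding it in a Hankel matrix, truncating its
    SVD to `rank` components, and averaging the anti-diagonals back into a series."""
    n = len(series)
    k = n - window + 1
    H = np.column_stack([series[i:i + window] for i in range(k)])  # window x k Hankel matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]                # rank-truncated reconstruction
    # Anti-diagonal (Hankel) averaging to map the matrix back to a series.
    smoothed = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        smoothed[j:j + window] += H_low[:, j]
        counts[j:j + window] += 1
    return smoothed / counts

t = np.arange(520)
noisy = np.sin(2 * np.pi * t / 52) + 0.3 * np.random.randn(t.size)
trend = hsvd_smooth(noisy, window=52, rank=1)   # smooth component passed on to ARIMA/ANN
```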
Guo, Chaohua; Wei, Mingzhen; Liu, Hong
2018-01-01
Development of unconventional shale gas reservoirs (SGRs) has been boosted by advancements in two key technologies: horizontal drilling and multi-stage hydraulic fracturing. A large number of multi-stage fractured horizontal wells (MsFHW) have been drilled to enhance reservoir production performance. Gas flow in SGRs is a multi-mechanism process, including desorption, diffusion, and non-Darcy flow. The productivity of SGRs with MsFHW is influenced by both reservoir conditions and hydraulic fracture properties. However, little simulation work has been conducted for multi-stage hydraulically fractured SGRs. Most existing studies use well testing methods, which involve too many unrealistic simplifications and assumptions. No systematic work has considered all relevant transport mechanisms, and there are very few sensitivity studies of uncertain parameters over realistic parameter ranges. Hence, a detailed and systematic study of reservoir simulation with MsFHW is still necessary. In this paper, a dual porosity model was constructed to estimate the effect of parameters on shale gas production with MsFHW. The simulation model was verified with available field data from the Barnett Shale. The following mechanisms have been considered in this model: viscous flow, slip flow, Knudsen diffusion, and gas desorption. The Langmuir isotherm was used to simulate the gas desorption process. Sensitivity analysis of SGRs' production performance with MsFHW has been conducted. Parameters influencing shale gas production were classified into two categories: reservoir parameters, including matrix permeability and matrix porosity; and hydraulic fracture parameters, including hydraulic fracture spacing and fracture half-length. Typical ranges of matrix parameters have been reviewed, and sensitivity analyses were conducted to analyze the effect of the above factors on the production performance of SGRs. The comparison shows that production is more sensitive to the hydraulic fracture parameters than to the reservoir parameters. Reservoir parameters mainly affect the later production period, whereas the hydraulic fracture parameters have a significant effect on gas production from the early period onward. The results of this study can be used to improve the efficiency of the history matching process and can contribute to the design and optimization of hydraulic fracture treatments in unconventional SGRs.
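A minimal sketch of the Langmuir isotherm used for the desorption mechanism, with placeholder Langmuir volume and pressure rather than Barnett Shale values; it only illustrates how the adsorbed gas content responds to a pressure drawdown.

```python
def langmuir_gas_content(p, v_l=100.0, p_l=4.0):
    """Adsorbed gas content V = V_L * p / (p_L + p).
    v_l (volume per unit rock mass) and p_l (Langmuir pressure, same units as p)
    are illustrative placeholders, not fitted reservoir parameters."""
    return v_l * p / (p_l + p)

# Gas released by desorption as reservoir pressure draws down from 20 to 8 (MPa):
released = langmuir_gas_content(20.0) - langmuir_gas_content(8.0)
print(f"desorbed gas per unit rock mass: {released:.1f} volume units")
```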
NASA Astrophysics Data System (ADS)
Zhong, X. Y.; Gao, J. X.; Ren, H.; Cai, W. G.
2018-04-01
The acceleration of the urbanization process has brought new opportunities for China's development. With rapid economic development and rising living standards, building energy consumption has also shown a rigid growth trend. As the level of industrialization continues to improve, the energy-saving potential of industry declines, so the construction industry, which bears much of the task of energy saving and emission reduction, will face more severe challenges. As three of China's municipalities, Beijing, Shanghai and Chongqing have significant spillover effects on the economy, urbanization level and construction industry development of their surrounding regions. It is therefore of great significance to study how building energy consumption in the three regions evolves with the urbanization level and to identify the key driving factors. Based on data for Beijing, Shanghai and Chongqing from 2001 to 2015, this paper attempts to determine whether an environmental Kuznets curve (EKC) for building energy consumption exists. Based on the results of this model, the data for the three regions are then divided into three periods, and the logarithmic mean Divisia index (LMDI) decomposition method is used to identify the factors that have the greatest impact on building energy consumption at each stage. Finally, the paper analyzes the policy background of each stage and puts forward policy suggestions on this basis.
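A minimal sketch of an additive LMDI decomposition, assuming building energy consumption is written as a product of driving factors (a hypothetical Kaya-style identity with illustrative numbers); the factor contributions sum exactly to the total change.

```python
import numpy as np

def log_mean(a, b):
    # Logarithmic mean L(a, b), used as the LMDI weight.
    return a if np.isclose(a, b) else (a - b) / (np.log(a) - np.log(b))

def lmdi_additive(factors_0, factors_T):
    """Additive LMDI-I decomposition of the change in C = product of factors.
    Returns one contribution per factor; the contributions sum to C_T - C_0."""
    c0, cT = np.prod(factors_0), np.prod(factors_T)
    w = log_mean(cT, c0)
    return [w * np.log(fT / f0) for f0, fT in zip(factors_0, factors_T)]

# Hypothetical identity: energy = population * GDP per capita * energy intensity
# (values are illustrative only, not data for Beijing, Shanghai or Chongqing).
base, final = [21.0, 5.0, 0.8], [24.0, 9.0, 0.6]
effects = lmdi_additive(base, final)
print(effects, sum(effects), np.prod(final) - np.prod(base))
```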
Longitudinal analysis on the development of hospital quality management systems in the Netherlands.
Dückers, Michel; Makai, Peter; Vos, Leti; Groenewegen, Peter; Wagner, Cordula
2009-10-01
Many changes have been initiated in the Dutch hospital sector to optimize health-care delivery: national agenda-setting, increased competition and transparency, a new system of hospital reimbursement based on diagnosis-treatment combinations, intensified monitoring of quality and a multi-layered organizational development programme based on quality improvement collaboratives. The objective is to determine whether these changes were accompanied by a further development of hospital quality management systems and to what extent the development within the multi-layered programme hospitals differed from that in other hospitals. Longitudinal data were collected in 1995, 2000, 2005 and 2007 using a validated questionnaire. Descriptive analyses and multi-level modelling were applied to test whether: (1) quality management system development stages in hospitals differ over time, (2) development stages and trends differ between hospitals participating or not participating in the multi-layered programme and (3) hospital size has an effect on development stage. Dutch hospital sector between 1995 and 2007. Hospital organizations. Changes through time. Quality management system development stage. Since 1995, hospital quality management systems have reached higher development levels. Programme participants have developed their quality management systems more rapidly than have non-participants. However, this effect is confounded by hospital size. Study results suggest that the combination of policy measures at the macro level was accompanied by an increase in hospital size and the further development of quality management systems. Hospitals are entering the stage of systematic quality improvement.
Luo, Yantao; Zhang, Long; Teng, Zhidong; DeAngelis, Donald L.
2018-01-01
In this paper, a parasitism-mutualism-predation model is proposed to investigate the dynamics of the multi-interactions among cuckoos, crows and cats, with stage structure and maturation time delays for the cuckoos and crows. The crows permit the cuckoos to place their parasitic nestlings (eggs) among the crow chicks (eggs). In return, the cuckoo nestlings produce a malodorous cloacal secretion that protects the crow chicks from predation by the cats, which is apparently beneficial to both the crow and cuckoo populations. The multi-interactions, i.e., parasitism and mutualism between the cuckoos (nestlings) and crows (chicks) and predation between the cats and crow chicks, are modeled by both Holling type II and Beddington-DeAngelis-type functional responses. The existence of positive equilibria of three subsystems of the model is discussed. The criteria for the global stability of the trivial equilibrium are established by the Krein-Rutman theorem and other analytical methods. Moreover, the threshold dynamics for the coexistence and weak persistence of the model are obtained, and we show, both analytically and numerically, that the stabilities of the interior equilibria may change as the maturation time delays increase. We find an evident difference in the dynamical properties of the parasitism-mutualism-predation model depending on whether or not we consider the effects of stage structure and maturation time delays on the cuckoos and crows. Inclusion of stage structure results in many varied dynamical complexities which are difficult to encompass without this inclusion.
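For concreteness, a minimal sketch contrasting the two functional response forms named above; the parameter values are illustrative, not those of the model.

```python
def holling_type_ii(prey, a=1.0, h=0.1):
    # Holling type II: the intake rate saturates with prey density.
    return a * prey / (1.0 + a * h * prey)

def beddington_deangelis(prey, predators, a=1.0, h=0.1, c=0.5):
    # Beddington-DeAngelis: like Holling II, but the intake rate also
    # decreases with predator density (mutual interference term c * predators).
    return a * prey / (1.0 + a * h * prey + c * predators)

print(holling_type_ii(50.0))
print(beddington_deangelis(50.0, predators=4.0))
```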
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faby, Sebastian, E-mail: sebastian.faby@dkfz.de; Kuchenbecker, Stefan; Sawall, Stefan
2015-07-15
Purpose: To study the performance of different dual energy computed tomography (DECT) techniques, which are available today, and future multi energy CT (MECT) employing novel photon counting detectors in an image-based material decomposition task. Methods: The material decomposition performance of different energy-resolved CT acquisition techniques is assessed and compared in a simulation study of virtual non-contrast imaging and iodine quantification. The material-specific images are obtained via a statistically optimal image-based material decomposition. A projection-based maximum likelihood approach was used for comparison with the authors' image-based method. The different dedicated dual energy CT techniques are simulated employing realistic noise models and x-ray spectra. The authors compare dual source DECT with fast kV switching DECT and the dual layer sandwich detector DECT approach. Subsequent scanning and a subtraction method are studied as well. Further, the authors benchmark future MECT with novel photon counting detectors in a dedicated DECT application against the performance of today's DECT using a realistic model. Additionally, possible dual source concepts employing photon counting detectors are studied. Results: The DECT comparison study shows that dual source DECT has the best performance, followed by the fast kV switching technique and the sandwich detector approach. Comparing DECT with future MECT, the authors found noticeable material image quality improvements for an ideal photon counting detector; however, a realistic detector model with multiple energy bins predicts a performance on the level of dual source DECT at 100 kV/Sn 140 kV. Employing photon counting detectors in dual source concepts can improve the performance again above the level of a single realistic photon counting detector and also above the level of dual source DECT. Conclusions: Substantial differences in the performance of today's DECT approaches were found for the application of virtual non-contrast and iodine imaging. Future MECT with realistic photon counting detectors currently can only perform comparably to dual source DECT at 100 kV/Sn 140 kV. Dual source concepts with photon counting detectors could be a solution to this problem, promising a better performance.
NASA Astrophysics Data System (ADS)
Goodwin, I. D.; Mortlock, T.
2016-02-01
Geohistorical archives of shoreline and foredune planform geometry provide a unique evidence-based record of the time-integrated response to coupled directional wave climate and sediment supply variability on annual to multi-decadal time scales. We develop conceptual shoreline modelling from the geohistorical shoreline archive using a novel combination of methods, including LIDAR DEM and field mapping of coastal geology; a decadal-scale climate reconstruction of sea-level pressure, marine windfields, and paleo-storm synoptic type and frequency; and historical bathymetry. The conceptual modelling allows for the discrimination of directional wave climate shifts and the relative contributions of cross-shore and along-shore sand supply rates at multi-decadal resolution. We present regional examples from south-eastern Australia over a large latitudinal gradient, from subtropical Queensland (S 25°) to mid-latitude Bass Strait (S 40°), that illustrate the morphodynamic evolution and reorganization in response to wave climate change. We then use the conceptual modelling to inform a two-dimensional coupled spectral wave-hydrodynamic-morphodynamic model to investigate the shoreface response to paleo-directional wind and wave climates. Unlike one-line shoreline modelling, this fully dynamical approach allows for the investigation of cumulative and spatial bathymetric change due to wave-induced currents, as well as proxy-shoreline change. The fusion of the two modelling approaches allows for: (i) the identification of the natural range of coastal planform geometries in response to wave climate shifts; and (ii) the decomposition of the multi-decadal coastal change into its cross-shore and along-shore sand supply drivers, according to the best-matching planforms.
Multi-annual modes in the 20th century temperature variability in reanalyses and CMIP5 models
NASA Astrophysics Data System (ADS)
Järvinen, Heikki; Seitola, Teija; Silén, Johan; Räisänen, Jouni
2016-11-01
A performance expectation is that Earth system models simulate well both the climate mean state and the climate variability. To test this expectation, we decompose two 20th century reanalysis data sets and 12 CMIP5 model simulations of the monthly mean near-surface air temperature for the years 1901-2005 using randomised multi-channel singular spectrum analysis (RMSSA). Due to the relatively short time span, we concentrate on the representation of multi-annual variability, which the RMSSA method effectively captures as separate and mutually orthogonal spatio-temporal components. This decomposition is a unique way to separate statistically significant quasi-periodic oscillations from one another in high-dimensional data sets. The main results are as follows. First, the total spectra for the two reanalysis data sets are remarkably similar on all timescales, except that the spectral power in ERA-20C is systematically slightly higher than in 20CR. Apart from the slow components related to multi-decadal periodicities, ENSO oscillations with approximately 3.5- and 5-year periods are the most prominent forms of variability in both reanalyses; in 20CR these are slightly more pronounced than in ERA-20C. Since about the 1970s, the amplitudes of the 3.5- and 5-year oscillations have increased, presumably due to some combination of forced climate change, intrinsic low-frequency climate variability, or changes in the global observing network. Second, none of the 12 coupled climate models closely reproduces all aspects of the reanalysis spectra, although some models represent many aspects well. For instance, the GFDL-ESM2M model has two nicely separated ENSO periods, although they are somewhat too prominent compared with the reanalyses. An extensive Supplement and YouTube videos illustrate the multi-annual variability of the data sets.
NASA Astrophysics Data System (ADS)
Hu, Shujuan; Chou, Jifan; Cheng, Jianbo
2018-04-01
In order to study the interactions between the atmospheric circulations at middle-high and low latitudes from a global perspective, the authors proposed a mathematical definition of three-pattern circulations, i.e., horizontal, meridional and zonal circulations, in terms of which the actual atmospheric circulation is expanded. This novel decomposition method is proved to describe the actual atmospheric circulation dynamics accurately. The authors used the NCEP/NCAR reanalysis data to calculate the climate characteristics of these three-pattern circulations and found that the decomposition model agreed with the observed results. Further dynamical analysis indicates that the decomposition model captures the major features of global three-dimensional atmospheric motions more accurately than the traditional definitions of the Rossby wave, Hadley circulation and Walker circulation. The decomposition model realizes, for the first time, the decomposition of the global atmospheric circulation into three orthogonal circulations within the horizontal, meridional and zonal planes, offering new opportunities to study the large-scale interactions between the middle-high latitude and low latitude circulations.
NASA Astrophysics Data System (ADS)
Tourret, D.; Mertens, J. C. E.; Lieberman, E.; Imhoff, S. D.; Gibbs, J. W.; Henderson, K.; Fezzaa, K.; Deriy, A. L.; Sun, T.; Lebensohn, R. A.; Patterson, B. M.; Clarke, A. J.
2017-11-01
We follow an Al-12 at. pct Cu alloy sample from the liquid state to mechanical failure, using in situ X-ray radiography during directional solidification and tensile testing, as well as three-dimensional computed tomography of the microstructure before and after mechanical testing. The solidification processing stage is simulated with a multi-scale dendritic needle network model, and the micromechanical behavior of the solidified microstructure is simulated using voxelized tomography data and an elasto-viscoplastic fast Fourier transform model. This study demonstrates the feasibility of direct in situ monitoring of a metal alloy microstructure from the liquid processing stage up to its mechanical failure, supported by quantitative simulations of microstructure formation and its mechanical behavior.
Multi-color incomplete Cholesky conjugate gradient methods for vector computers. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Poole, E. L.
1986-01-01
In this research, we are concerned with the solution on vector computers of linear systems of equations, Ax = b, where A is a large, sparse, symmetric positive definite matrix. We solve the system using an iterative method, the incomplete Cholesky conjugate gradient method (ICCG). We apply a multi-color strategy to obtain p-color matrices for which a block-oriented ICCG method is implemented on the CYBER 205. (A p-colored matrix is a matrix which can be partitioned into a p×p block matrix where the diagonal blocks are diagonal matrices.) This algorithm, which is based on a no-fill strategy, achieves vector operations of length O(N/p) in both the decomposition of A and in the forward and back solves required at each iteration of the method. We discuss the natural ordering of the unknowns as an ordering that minimizes the number of diagonals in the matrix and define multi-color orderings in terms of disjoint sets of the unknowns. We give necessary and sufficient conditions to determine which multi-color orderings of the unknowns correspond to p-color matrices. A performance model is given which is used both to predict execution time for ICCG methods and to compare an ICCG method against conjugate gradient without preconditioning or against another ICCG method. Results are given from runs on the CYBER 205 at NASA's Langley Research Center for four model problems.
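A minimal sketch of the p-color idea for the simplest case, p = 2 (red-black ordering) on a 5-point Laplacian: after the color permutation the two diagonal blocks are themselves diagonal, which is what enables the long-vector forward and back solves. This illustrates the ordering property only, not the CYBER 205 ICCG implementation.

```python
import numpy as np
import scipy.sparse as sp

# 5-point Laplacian on an n x n grid: a matrix whose adjacency graph is 2-colorable.
n = 6
I = sp.identity(n, format="csr")
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()           # (n*n) x (n*n)

# Red-black (2-color) ordering: the color of grid point (i, j) is (i + j) mod 2.
colors = np.add.outer(np.arange(n), np.arange(n)).ravel() % 2
perm = np.argsort(colors, kind="stable")              # reds first, then blacks
P = A[perm][:, perm].toarray()

# In the permuted matrix the two diagonal blocks are themselves diagonal.
n_red = int(np.sum(colors == 0))
assert np.allclose(P[:n_red, :n_red], np.diag(np.diag(P[:n_red, :n_red])))
assert np.allclose(P[n_red:, n_red:], np.diag(np.diag(P[n_red:, n_red:])))
```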
Multivariate meta-analysis for non-linear and other multi-parameter associations
Gasparrini, A; Armstrong, B; Kenward, M G
2012-01-01
In this paper, we formalize the application of multivariate meta-analysis and meta-regression to synthesize estimates of multi-parameter associations obtained from different studies. This modelling approach extends the standard two-stage analysis used to combine results across different sub-groups or populations. The most straightforward application is for the meta-analysis of non-linear relationships, described for example by regression coefficients of splines or other functions, but the methodology easily generalizes to any setting where complex associations are described by multiple correlated parameters. The modelling framework of multivariate meta-analysis is implemented in the package mvmeta within the statistical environment R. As an illustrative example, we propose a two-stage analysis for investigating the non-linear exposure–response relationship between temperature and non-accidental mortality using time-series data from multiple cities. Multivariate meta-analysis represents a useful analytical tool for studying complex associations through a two-stage procedure. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22807043
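A minimal sketch of the second-stage pooling step under a fixed-effect assumption (inverse-variance weighted multivariate averaging), written in Python rather than with the R package mvmeta mentioned above; the per-city coefficient vectors and covariance matrices are hypothetical stand-ins for first-stage spline fits.

```python
import numpy as np

def fixed_effect_mvmeta(coefs, covs):
    """Second-stage fixed-effect multivariate pooling: inverse-variance weighted
    average of per-study coefficient vectors. Unlike mvmeta's random-effects
    models, this ignores between-study heterogeneity."""
    W = [np.linalg.inv(S) for S in covs]              # study weights
    V = np.linalg.inv(sum(W))                         # pooled covariance
    beta = V @ sum(Wi @ bi for Wi, bi in zip(W, coefs))
    return beta, V

# Hypothetical first-stage results: spline coefficients of a temperature-mortality
# curve estimated separately in three cities, with their covariance matrices.
coefs = [np.array([0.10, 0.30]), np.array([0.12, 0.25]), np.array([0.08, 0.35])]
covs = [0.010 * np.eye(2), 0.020 * np.eye(2), 0.015 * np.eye(2)]
beta, V = fixed_effect_mvmeta(coefs, covs)
print(beta, np.sqrt(np.diag(V)))
```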
Fashion sketch design by interactive genetic algorithms
NASA Astrophysics Data System (ADS)
Mok, P. Y.; Wang, X. X.; Xu, J.; Kwok, Y. L.
2012-11-01
Computer-aided design is vitally important for modern industry, particularly the creative industries. The fashion industry faces intense pressure to shorten the product development process. In this paper, a methodology is proposed for sketch design based on interactive genetic algorithms. The sketch design system consists of a sketch design model, a database and a multi-stage sketch design engine. First, a sketch design model is developed based on fashion design knowledge to describe fashion product characteristics using parameters. Second, a database is built based on the proposed sketch design model to define general style elements. Third, a multi-stage sketch design engine is used to construct the design. Moreover, an interactive genetic algorithm (IGA) is used to accelerate the sketch design process. The experimental results demonstrate that the proposed method is effective in helping laypersons achieve satisfactory fashion design sketches.
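A minimal sketch of the interactive part of an IGA: the fitness of each candidate is supplied by a human rather than computed. The encoding (a short vector of style parameters) and all rates are illustrative assumptions, not the paper's sketch design model.

```python
import random

def interactive_ga(pop_size=6, genes=5, generations=3):
    """Tiny interactive GA loop: each candidate is a vector of style-parameter
    indices, and its 'fitness' is a score typed in by the designer."""
    pop = [[random.randint(0, 9) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [float(input(f"score 0-10 for design {c}: ")) for c in pop]
        ranked = [c for _, c in sorted(zip(scores, pop),
                                       key=lambda sc: sc[0], reverse=True)]
        parents = ranked[: pop_size // 2]          # keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genes)       # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:              # occasional mutation
                child[random.randrange(genes)] = random.randint(0, 9)
            children.append(child)
        pop = parents + children
    return pop[0]                                  # top design of the last scored round
```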
ERIC Educational Resources Information Center
Dubinsky, Ed; Arnon, Ilana; Weller, Kirk
2013-01-01
In this article, we obtain a genetic decomposition of students' progress in developing an understanding of the decimal 0.9 and its relation to 1. The genetic decomposition appears to be valid for a high percentage of the study participants and suggests the possibility of a new stage in APOS Theory that would be the first substantial change in…
Krisman, Alex; Hawkes, Evatt R.; Talei, Mohsen; ...
2016-08-30
With the goal of providing a more detailed fundamental understanding of ignition processes in diesel engines, this study reports analysis of a direct numerical simulation (DNS) database. In the DNS, a pseudo turbulent mixing layer of dimethyl ether (DME) at 400 K and air at 900 K is simulated at a pressure of 40 atmospheres. At these conditions, DME exhibits a two-stage ignition and resides within the negative temperature coefficient (NTC) regime of ignition delay times, similar to diesel fuel. The analysis reveals a complex ignition process with several novel features. Autoignition occurs as a distributed, two-stage event. The high-temperature stage of ignition establishes edge flames that have a hybrid premixed/autoignition flame structure similar to that previously observed for lifted laminar flames at similar thermochemical conditions. In conclusion, a combustion mode analysis based on key radical species illustrates the multi-stage and multi-mode nature of the ignition process and highlights the substantial modelling challenge presented by diesel combustion.
Preliminary Exploration of Encounter During Transit Across Southern Africa
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stroud, Phillip David; Cuellar-Hengartner, Leticia; Kubicek, Deborah Ann
Los Alamos National Laboratory (LANL) is utilizing the Probability Effectiveness Methodology (PEM) tools, particularly the Pathway Analysis, Threat Response and Interdiction Options Tool (PATRIOT), to support the DNDO Architecture and Planning Directorate's (APD) development of a multi-region terrorist risk assessment tool. The effort is divided into three stages. The first stage is an exploration of what can be done with PATRIOT essentially as is, to characterize encounter rate during transit across a single selected region. The second stage is to develop, condition, and implement required modifications to the data and conduct analysis to generate a well-founded assessment of the transit reliability across that selected region, and to identify any issues in the process. The final stage is to extend the work to a full multi-region global model. This document provides the results of the first stage, namely preliminary explorations with PATRIOT to assess the transit reliability across the region of southern Africa.
Silicon nitride equation of state
NASA Astrophysics Data System (ADS)
Brown, Robert C.; Swaminathan, Pazhayannur K.
2017-01-01
This report presents the development of a global, multi-phase equation of state (EOS) for the ceramic silicon nitride (Si3N4). Structural forms include amorphous silicon nitride, normally used as a thin film, and three crystalline polymorphs. Crystalline phases include hexagonal α-Si3N4, hexagonal β-Si3N4, and the cubic spinel c-Si3N4. Decomposition at about 1900 °C results in a liquid silicon phase and gas-phase products such as molecular nitrogen, atomic nitrogen, and atomic silicon. The silicon nitride EOS was developed using EOSPro, which is a new and extended version of the PANDA II code. Both codes are valuable tools and have been used successfully for a variety of material classes, and both can generate a tabular EOS that can be used in conjunction with hydrocodes. The paper describes the development efforts for the component solid phases and presents results obtained using the EOSPro phase transition model to investigate the solid-solid phase transitions in relation to the available shock data, which indicate a complex and slow, time-dependent phase change to the c-Si3N4 phase. Furthermore, the EOSPro mixture model is used to develop a model for the decomposition products; however, a kinetic approach is suggested, to be combined with the single-component solid models, to simulate and further investigate the global phase coexistences.
Dabbs, Gretchen R; Bytheway, Joan A; Connor, Melissa
2017-09-01
When in-person assessment of human decomposition is not possible in forensic casework or empirical research, the sensible substitute is color photographic images. To date, no research has confirmed the utility of color photographic images as a proxy for in situ observation of the level of decomposition. Sixteen observers scored photographs of 13 human cadavers in varying decomposition stages (PMI 2-186 days) using the Total Body Score (TBS) system (total n = 929 observations). The on-site TBS was compared with observations recorded from the digital color images using a paired samples t-test. The average difference between on-site and photographic observations was -0.20 (t = -1.679, df = 928, p = 0.094). Individually, only two observers, both students with <1 year of experience, produced TBS values statistically significantly different from the on-site values, suggesting that, with experience, observations of human decomposition based on digital images can be substituted for assessments based on observation of the corpse in situ when necessary. © 2017 American Academy of Forensic Sciences.
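A minimal sketch of the paired comparison reported above, using hypothetical paired scores in place of the study's 929 observation pairs; the reported statistics (t = -1.679, df = 928, p = 0.094) come from the real data, not from this toy example.

```python
import numpy as np
from scipy import stats

# Hypothetical paired scores: on-site Total Body Score vs. the score assigned
# later from photographs of the same cadaver observation.
on_site = np.array([5, 9, 14, 18, 22, 25, 27, 30])
photo = np.array([5, 8, 14, 19, 21, 25, 28, 29])

t_stat, p_value = stats.ttest_rel(on_site, photo)
mean_diff = np.mean(photo - on_site)
print(f"mean difference = {mean_diff:.2f}, t = {t_stat:.3f}, p = {p_value:.3f}")
```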
Kinetics of Thermal Decomposition of Ammonium Perchlorate by TG/DSC-MS-FTIR
NASA Astrophysics Data System (ADS)
Zhu, Yan-Li; Huang, Hao; Ren, Hui; Jiao, Qing-Jie
2014-01-01
The method of thermogravimetry/differential scanning calorimetry-mass spectrometry-Fourier transform infrared (TG/DSC-MS-FTIR) simultaneous analysis has been used to study thermal decomposition of ammonium perchlorate (AP). The processing of nonisothermal data at various heating rates was performed using NETZSCH Thermokinetics. The MS-FTIR spectra showed that N2O and NO2 were the main gaseous products of the thermal decomposition of AP, and there was a competition between the formation reaction of N2O and that of NO2 during the process with an iso-concentration point of N2O and NO2. The dependence of the activation energy calculated by Friedman's iso-conversional method on the degree of conversion indicated that the AP decomposition process can be divided into three stages, which are autocatalytic, low-temperature diffusion and high-temperature, stable-phase reaction. The corresponding kinetic parameters were determined by multivariate nonlinear regression and the mechanism of the AP decomposition process was proposed.
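A minimal sketch of Friedman's isoconversional estimate at a single conversion level, assuming the temperature and rate pairs at that conversion have already been read off the TG curves for each heating rate; the numbers below are illustrative, not the AP data.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def friedman_ea(temps_K, rates):
    """Friedman isoconversional estimate of the activation energy at one fixed
    conversion level: regress ln(d(alpha)/dt) on 1/T across heating rates;
    the slope of the fit is -Ea/R."""
    slope, _ = np.polyfit(1.0 / np.asarray(temps_K), np.log(np.asarray(rates)), 1)
    return -slope * R

# Hypothetical (T, d(alpha)/dt) pairs at alpha = 0.3 for three heating rates;
# real values would come from the NETZSCH Thermokinetics data export.
temps = [573.0, 585.0, 596.0]
rates = [2.1e-3, 3.9e-3, 6.8e-3]
print(f"Ea at alpha = 0.3: {friedman_ea(temps, rates) / 1000:.0f} kJ/mol")
```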
Microbial community assembly and metabolic function during mammalian corpse decomposition.
Metcalf, Jessica L; Xu, Zhenjiang Zech; Weiss, Sophie; Lax, Simon; Van Treuren, Will; Hyde, Embriette R; Song, Se Jin; Amir, Amnon; Larsen, Peter; Sangwan, Naseer; Haarmann, Daniel; Humphrey, Greg C; Ackermann, Gail; Thompson, Luke R; Lauber, Christian; Bibat, Alexander; Nicholas, Catherine; Gebert, Matthew J; Petrosino, Joseph F; Reed, Sasha C; Gilbert, Jack A; Lynne, Aaron M; Bucheli, Sibyl R; Carter, David O; Knight, Rob
2016-01-08
Vertebrate corpse decomposition provides an important stage in nutrient cycling in most terrestrial habitats, yet microbially mediated processes are poorly understood. Here we combine deep microbial community characterization, community-level metabolic reconstruction, and soil biogeochemical assessment to understand the principles governing microbial community assembly during decomposition of mouse and human corpses on different soil substrates. We find a suite of bacterial and fungal groups that contribute to nitrogen cycling and a reproducible network of decomposers that emerge on predictable time scales. Our results show that this decomposer community is derived primarily from bulk soil, but key decomposers are ubiquitous in low abundance. Soil type was not a dominant factor driving community development, and the process of decomposition is sufficiently reproducible to offer new opportunities for forensic investigations. Copyright © 2016, American Association for the Advancement of Science.
Azwandi, A; Abu Hassan, A
2009-04-01
This study was carried out in an oil palm plantation in Bandar Baharu, Kedah, using monkey carcasses, and focuses on documenting the decomposition and dipteran colonization sequences over 50 days. This is the first study of Diptera associated with the exploitation of carcasses conducted in northern peninsular Malaysia during both the dry and wet seasons. During the process of decomposition in both seasons, five stages of decay were recognized, namely fresh, bloated, active decay, advanced decay and dry remains. In this decomposition study, carcass biomass loss occurred rapidly from the fresh through the active decay stages due to the colonization and feeding activity of Diptera larvae. The durations of the fresh and bloated stages of decay were the same in the wet and dry seasons, but the later stages of decay were markedly shorter during the wet season. Twenty-one species of adult Diptera were identified colonizing carcasses during the study period. Among the flies of the family Calliphoridae, Chrysomya megacephala Fabricius and Chrysomya nigripes Aubertin were recognized as the earliest arrivals, on the first day of exposure. Adult Ch. nigripes were abundant for approximately two weeks after placement of the carcasses. Comparing the percentages of adults collected during the study period, calliphorid abundance was 50.83% in the wet season but only about 35.2% in the dry season. In contrast, the percentage of Sphaeroceridae was only 3.33% in the wet season but 20.8% in the dry season. Dipterans in the families Phoridae, Piophilidae, Sepsidae, Drosophilidae and Dolichopodidae colonized the carcasses for a long period of time and were categorized as long-term colonizers.
Reznick, Julia; Friedmann, Naama
2015-01-01
This study examined whether and how the morphological structure of written words affects reading in word-based neglect dyslexia (neglexia), and what can be learned about morphological decomposition in reading from the effect of morphology on neglexia. The oral reading of 7 Hebrew-speaking participants with acquired neglexia at the word level—6 with left neglexia and 1 with right neglexia—was evaluated. The main finding was that the morphological role of the letters on the neglected side of the word affected neglect errors: When an affix appeared on the neglected side, it was neglected significantly more often than when the neglected side was part of the root; root letters on the neglected side were never omitted, whereas affixes were. Perceptual effects of length and final letter form were found for words with an affix on the neglected side, but not for words in which a root letter appeared in the neglected side. Semantic and lexical factors did not affect the participants' reading and error pattern, and neglect errors did not preserve the morpho-lexical characteristics of the target words. These findings indicate that an early morphological decomposition of words to their root and affixes occurs before access to the lexicon and to semantics, at the orthographic-visual analysis stage, and that the effects did not result from lexical feedback. The same effects of morphological structure on reading were manifested by the participants with left- and right-sided neglexia. Since neglexia is a deficit at the orthographic-visual analysis level, the effect of morphology on reading patterns in neglexia further supports that morphological decomposition occurs in the orthographic-visual analysis stage, prelexically, and that the search for the three letters of the root in Hebrew is a trigger for attention shift in neglexia. PMID:26528159
Lee, Jung-Gil; Kim, Woo-Seung; Choi, June-Seok; Ghaffour, Noreddine; Kim, Young-Deuk
2016-12-15
An economical desalination system with a small scale and footprint is in strong demand in the desalination market for remote areas, which have a limited and inadequate water supply, insufficient water treatment and little infrastructure. Among the possible membrane distillation (MD) processes, direct contact membrane distillation (DCMD) has the simplest configuration and potentially the highest permeate flux. This process can also easily be implemented in a multi-stage arrangement for enhanced compactness, productivity, versatility and cost-effectiveness. In this study, an innovative multi-stage DCMD module in a countercurrent-flow configuration is first designed and then investigated both theoretically and experimentally to establish its feasibility and operability for desalination applications. Model predictions and measured data for the mean permeate flux are compared and shown to be in good agreement. The effect of the number of module stages on the mean permeate flux, performance ratio and daily water production of the multi-stage DCMD system has been identified theoretically at inlet feed and permeate flow rates of 1.5 l/min and inlet feed and permeate temperatures of 70 °C and 25 °C, respectively. The daily water production of a three-stage DCMD module with a membrane area of 0.01 m2 at each stage is found to be 21.5 kg. Copyright © 2016 Elsevier Ltd. All rights reserved.
Stylianopoulos, Triantafyllos; Economides, Eva-Athena; Baish, James W; Fukumura, Dai; Jain, Rakesh K
2015-09-01
Conventional drug delivery systems for solid tumors are composed of a nano-carrier that releases its therapeutic load. These two-stage nanoparticles exploit the enhanced permeability and retention (EPR) effect to enable preferential delivery to tumor tissue. However, the size dependence of the EPR effect, the limited penetration of nanoparticles into the tumor, and the rapid binding of the particles or the released cytotoxic agents to cancer cells and stromal components inhibit the uniform distribution of the drug and the efficacy of the treatment. Here, we employ mathematical modeling to study the effect of particle size, drug release rate and binding affinity on the distribution and efficacy of nanoparticles and to derive optimal design rules. Furthermore, we introduce a new multi-stage delivery system. The system consists of a 20-nm primary nanoparticle, which releases 5-nm secondary particles, which in turn release the chemotherapeutic drug. We found that tuning the drug release kinetics and binding affinities leads to improved delivery of the drug. Our results also indicate that multi-stage nanoparticles are superior to two-stage nano-carriers provided they have a faster drug release rate and the drug has a high binding affinity. Furthermore, our results suggest that smaller nanoparticles achieve better treatment outcomes.
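A minimal sketch of the release cascade at the core of such a multi-stage carrier, written as a linear compartment model (primary particle releasing secondary particles, which release free drug); the rate constants and the absence of transport, binding and clearance terms are simplifying assumptions, not the paper's full spatio-temporal model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def cascade(t, y, k1=0.5, k2=2.0):
    """Release cascade: primary 20-nm particles release 5-nm secondary
    particles at rate k1, which release free drug at rate k2.
    Rates (1/h) are illustrative, not fitted model parameters."""
    primary, secondary, drug = y
    return [-k1 * primary,
            k1 * primary - k2 * secondary,
            k2 * secondary]

sol = solve_ivp(cascade, t_span=(0.0, 24.0), y0=[1.0, 0.0, 0.0],
                t_eval=np.linspace(0.0, 24.0, 97))
print("free drug fraction after 24 h:", sol.y[2, -1])
```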
NASA Astrophysics Data System (ADS)
Sakakibara, Kazutoshi; Tian, Yajie; Nishikawa, Ikuko
We discuss the planning of transportation by trucks over a multi-day period. Each truck collects loads from suppliers and delivers them to assembly plants or a truck terminal. By exploiting the truck terminal as a temporal storage, we aim to increase the load ratio of each truck and to minimize the lead time for transportation. In this paper, we show a mixed integer programming model which represents each product explicitly, and discuss the decomposition of the problem into a problem of delivery and storage, and a problem of vehicle routing. Based on this model, we propose a relax-and-fix type heuristic in which decision variables are fixed one by one by mathematical programming techniques such as branch-and-bound methods.
NASA Astrophysics Data System (ADS)
León, Madeleine; Escalante-Ramirez, Boris
2013-11-01
Knee osteoarthritis (OA) is characterized by the morphological degeneration of cartilage. Efficient segmentation of cartilage is important for diagnosing cartilage damage and for assessing therapeutic response. We present a method for knee cartilage segmentation in magnetic resonance images (MRI). Our method incorporates the Hermite transform to obtain a hierarchical decomposition of the contours that describe knee cartilage shapes. We then compute a statistical model of the contour of interest from a set of training images. Thereby, our Hierarchical Active Shape Model (HASM) captures a large range of shape variability even from a small group of training samples, improving segmentation accuracy. The method was trained on a set of 16 knee MRI scans and tested with the leave-one-out method.
McLaren, Jennie R; Buckeridge, Kate M; van de Weg, Martine J; Shaver, Gaius R; Schimel, Joshua P; Gough, Laura
2017-05-01
Rapid arctic vegetation change as a result of global warming includes an increase in the cover and biomass of deciduous shrubs. Increases in shrub abundance will result in a proportional increase of shrub litter in the litter community, potentially affecting carbon turnover rates in arctic ecosystems. We investigated the effects of leaf and root litter of a deciduous shrub, Betula nana, on decomposition, by examining species-specific decomposition patterns, as well as effects of Betula litter on the decomposition of other species. We conducted a 2-yr decomposition experiment in moist acidic tundra in northern Alaska, where we decomposed three tundra species (Vaccinium vitis-idaea, Rhododendron palustre, and Eriophorum vaginatum) alone and in combination with Betula litter. Decomposition patterns for leaf and root litter were determined using three different measures of decomposition (mass loss, respiration, extracellular enzyme activity). We report faster decomposition of Betula leaf litter compared to other species, with support for species differences coming from all three measures of decomposition. Mixing effects were less consistent among the measures, with negative mixing effects shown only for mass loss. In contrast, there were few species differences or mixing effects for root decomposition. Overall, we attribute longer-term litter mass loss patterns to patterns created by early decomposition processes in the first winter. We note numerous differences for species patterns between leaf and root decomposition, indicating that conclusions from leaf litter experiments should not be extrapolated to below-ground decomposition. The high decomposition rates of Betula leaf litter aboveground, and relatively similar decomposition rates of multiple species below, suggest a potential for increases in turnover in the fast-decomposing carbon pool of leaves and fine roots as the dominance of deciduous shrubs in the Arctic increases, but this outcome may be tempered by negative litter mixing effects during the early stages of encroachment. © 2017 by the Ecological Society of America.
Energy Technology Allocation for Distributed Energy Resources: A Technology-Policy Framework
NASA Astrophysics Data System (ADS)
Mallikarjun, Sreekanth
Distributed energy resources (DER) are emerging rapidly. New engineering technologies, materials, and designs improve the performance and extend the range of locations for DER. In contrast, constructing new, or modernizing existing, high voltage transmission lines for centralized generation is expensive and challenging. In addition, customer demand for reliability has increased, and concerns about climate change have created a pull for swift renewable energy penetration. In this context, DER policy makers, developers, and users are interested in determining which energy technologies to use to accommodate different end-use energy demands. We present a two-stage multi-objective strategic technology-policy framework for determining the optimal energy technology allocation for DER. The framework simultaneously considers economic, technical, and environmental objectives. The first stage utilizes a Data Envelopment Analysis model for each end-use to evaluate the performance of each energy technology based on the three objectives. The second stage incorporates the factor efficiencies determined in the first stage, capacity limitations, dispatchability, and renewable penetration for each technology, and the demand for each end-use into a bottleneck multi-criteria decision model which provides the Pareto-optimal energy resource allocation. We conduct several case studies to understand the roles of various distributed energy technologies in different scenarios and construct policy implications based on the model results of the set of case studies.
Double Bounce Component in Cross-Polarimetric SAR from a New Scattering Target Decomposition
NASA Astrophysics Data System (ADS)
Hong, Sang-Hoon; Wdowinski, Shimon
2013-08-01
Common vegetation scattering theories assume that the Synthetic Aperture Radar (SAR) cross-polarization (cross-pol) signal represents solely volume scattering. We found this assumption to be incorrect based on SAR phase measurements acquired over the south Florida Everglades wetlands, which indicate that the cross-pol radar signal often samples the water surface beneath the vegetation. Based on these new observations, we propose that the cross-pol measurement consists of both volume scattering and double bounce components. The simplest multi-bounce scattering mechanism that generates a cross-pol signal is scattering from rotated dihedrals. Thus, we use the rotated dihedral mechanism with a probability density function to revise some of the vegetation scattering theories and develop a three-component decomposition algorithm with single bounce, double bounce from both co-pol and cross-pol, and volume scattering components. We applied the new decomposition analysis to both urban and rural environments using Radarsat-2 quad-pol datasets. The decomposition of San Francisco's urban area shows higher double bounce scattering and reduced volume scattering compared with other common three-component decompositions. The decomposition of the rural Everglades area shows that the relation between volume scattering and cross-pol double bounce depends on the vegetation density. The new decomposition can be useful for better understanding vegetation scattering behavior over various surfaces and for estimating above-ground biomass using SAR observations.
Multi-scale statistical analysis of coronal solar activity
Gamborino, Diana; del-Castillo-Negrete, Diego; Martinell, Julio J.
2016-07-08
Multi-filter images of the solar corona are used to obtain temperature maps that are analyzed using techniques based on proper orthogonal decomposition (POD) in order to extract dynamical and structural information at various scales. Exploring active regions before and after a solar flare and comparing them with quiet regions, we show that the multi-scale behavior presents distinct statistical properties for each case that can be used to characterize the level of activity in a region. Information about the nature of heat transport can also be extracted from the analysis.