Sample records for decomposition approach based

  1. A compositional approach to building applications in a computational environment

    NASA Astrophysics Data System (ADS)

    Roslovtsev, V. V.; Shumsky, L. D.; Wolfengagen, V. E.

    2014-04-01

    The paper presents an approach to creating an applicative computational environment that features decomposition of computational processes and data, together with a compositional approach to application building. The approach is based on the notion of a combinator, both in systems with variable binding (such as λ-calculi) and in those that allow programming without variables (in the style of combinatory logic). We present a computation decomposition technique based on the structural decomposition of objects, with the focus on decomposing the computation itself. The computational environment's architecture is based on a network whose nodes play several roles simultaneously.

  2. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    NASA Astrophysics Data System (ADS)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use a so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational cost of directly decomposing the local correlation matrix C is still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach avoids direct decomposition of the correlation matrix: we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decompositions at low resolution. This procedure is followed by 1-D spline interpolation to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the Kronecker product of these 1-D decompositions, which closely approximates the full matrix. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach, and of its efficient localization implementation in the nonlinear least-squares four-dimensional variational assimilation, is further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
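    Conceptually, the separable construction described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' code: the grid sizes, the Gaussian correlation shapes, and the truncation rank are assumptions made for the example.

    ```python
    # Sketch: approximate a separable 3-D correlation matrix as the
    # Kronecker product of 1-D correlation matrices, each truncated to
    # its leading EOFs (eigenvectors). Illustrative parameters only.
    import numpy as np

    def corr_1d(n, length_scale):
        """Gaussian 1-D correlation matrix on a unit grid."""
        x = np.arange(n)
        return np.exp(-0.5 * ((x[:, None] - x[None, :]) / length_scale) ** 2)

    def truncated_eof(C, k):
        """Leading-k eigenpairs (EOFs) of a symmetric correlation matrix."""
        w, v = np.linalg.eigh(C)
        idx = np.argsort(w)[::-1][:k]
        return w[idx], v[:, idx]

    nx, ny, nz = 16, 16, 8
    Cx, Cy, Cz = corr_1d(nx, 3.0), corr_1d(ny, 3.0), corr_1d(nz, 2.0)

    # Low-rank reconstruction of each 1-D matrix; their Kronecker product
    # then approximates the full (nx*ny*nz)-dimensional correlation matrix.
    approx = [v @ np.diag(w) @ v.T
              for w, v in (truncated_eof(C, 5) for C in (Cx, Cy, Cz))]
    C_full_approx = np.kron(np.kron(approx[0], approx[1]), approx[2])
    print(C_full_approx.shape)  # (2048, 2048)
    ```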

  3. Partial differential equation-based approach for empirical mode decomposition: application on image analysis.

    PubMed

    Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques

    2012-09-01

    The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework, which makes the approach difficult to characterize and evaluate. In this paper, we propose, in the 2-D case, an alternative implementation of the algorithmic definition of the so-called "sifting process" used in the original Huang EMD method. This approach, based on partial differential equations (PDEs), was presented by Niang in previous works in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean-envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method compared to the original EMD algorithmic version was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed. Despite some effort, 2-D versions of EMD perform poorly and are very time consuming. In this paper, therefore, an extension of the PDE-based approach to the 2-D case is described in detail. The approach has been applied to both signal and image decomposition, and the results obtained confirm the usefulness of the new PDE-based sifting process for decomposing various kinds of data. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.
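    As a rough illustration of the diffusion idea, a deliberately simplified caricature (linear diffusion, not Niang's nonlinear PDE), one can estimate a slowly varying mean envelope by running an explicit heat-equation filter over the signal and subtracting it, which amounts to one sifting step:

    ```python
    # Caricature only: a linear-diffusion stand-in for the PDE-based
    # mean-envelope estimation, followed by one sifting subtraction.
    import numpy as np

    def diffuse(s, steps=200, nu=0.25):
        """Explicit 1-D diffusion; nu <= 0.5 for numerical stability."""
        u = s.astype(float).copy()
        for _ in range(steps):
            u[1:-1] += nu * (u[2:] - 2 * u[1:-1] + u[:-2])
        return u

    t = np.linspace(0, 1, 512)
    s = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
    mean_env = diffuse(s)         # slow trend plays the mean-envelope role
    imf_candidate = s - mean_env  # fast oscillation, one sifting iteration
    ```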

  4. Model-based multiple patterning layout decomposition

    NASA Astrophysics Data System (ADS)

    Guo, Daifeng; Tian, Haitong; Du, Yuelin; Wong, Martin D. F.

    2015-10-01

    As one of the most promising next-generation lithography technologies, multiple patterning lithography (MPL) plays an important role in keeping pace with the 10 nm technology node and beyond. As feature sizes keep shrinking, it has become impossible to print dense layouts within a single exposure. As a result, MPL techniques such as double patterning lithography (DPL) and triple patterning lithography (TPL) have been widely adopted. There is a large volume of literature on DPL/TPL layout decomposition, and the current approach is to formulate the problem as a classical graph-coloring problem: layout features (polygons) are represented by vertices in a graph G, and there is an edge between two vertices if and only if the distance between the two corresponding features is less than a minimum distance threshold dmin. The problem is to color the vertices of G using k colors (k = 2 for DPL, k = 3 for TPL) such that no two vertices connected by an edge are given the same color. This is a rule-based approach, which imposes a geometric distance as a minimum constraint and simply assigns polygons within that distance to different masks. It is not desirable in practice because this criterion cannot completely capture the behavior of the optics. For example, it lacks information such as the optical source characteristics and the effects between polygons outside the minimum distance. To remedy this deficiency, a model-based layout decomposition approach that bases the decomposition criteria on simulation results was first introduced at SPIE 2013 [1]. However, that algorithm relies on simplified assumptions about the optical simulation model, and its usefulness on real layouts is therefore limited. Recently, AMSL [2] also proposed a model-based approach to layout decomposition that iteratively simulates the layout, which requires excessive computational resources and may lead to suboptimal solutions; it also potentially generates too many stitches. In this paper, we propose a model-based MPL layout decomposition method using a pre-simulated library of frequent layout patterns. Instead of using the graph G of the standard graph-coloring formulation, we build an expanded graph H in which each vertex represents a group of adjacent features together with a coloring solution. By utilizing the library and running sophisticated graph algorithms on H, our approach obtains optimal decomposition results efficiently. Our model-based solution achieves a practical mask design that significantly improves the lithography quality on the wafer compared to rule-based decomposition.
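    The rule-based baseline described in the abstract reduces to k-coloring a conflict graph. The sketch below illustrates only that baseline formulation; the feature coordinates, the dmin value, and the backtracking colorer are illustrative stand-ins, not the paper's model-based method.

    ```python
    # Rule-based decomposition baseline: build a conflict graph from
    # feature distances, then k-color it (k = 2 for DPL, k = 3 for TPL).
    import itertools
    import math

    def conflict_graph(centers, dmin):
        adj = {i: set() for i in range(len(centers))}
        for i, j in itertools.combinations(range(len(centers)), 2):
            if math.dist(centers[i], centers[j]) < dmin:
                adj[i].add(j)
                adj[j].add(i)
        return adj

    def k_color(adj, k, colors=None, v=0):
        colors = colors or {}
        if v == len(adj):
            return colors
        for c in range(k):
            if all(colors.get(u) != c for u in adj[v]):
                colors[v] = c
                result = k_color(adj, k, colors, v + 1)
                if result is not None:
                    return result
                del colors[v]
        return None  # no legal assignment: stitching or redesign needed

    adj = conflict_graph([(0, 0), (1, 0), (0.5, 0.8)], dmin=1.2)
    print(k_color(adj, 2))  # None: a triangle is not 2-colorable (DPL fails)
    print(k_color(adj, 3))  # {0: 0, 1: 1, 2: 2} (TPL succeeds)
    ```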

  5. Distributed Prognostics based on Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.

    2014-01-01

    Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare its performance with a centralized approach, and establish its scalability. Index terms: model-based prognostics, distributed prognostics, structural model decomposition.

  6. About decomposition approach for solving the classification problem

    NASA Astrophysics Data System (ADS)

    Andrianova, A. A.

    2016-11-01

    This article describes the application of an algorithm that uses decomposition methods for solving the binary classification problem of constructing a linear classifier based on the support vector machine method. Applying decomposition reduces the volume of calculations, in particular through the emerging possibility of building parallel versions of the algorithm, which is a very important advantage for solving problems with big data. The results of computational experiments conducted using the decomposition approach are analysed. The experiments use well-known data sets for the binary classification problem.

  7. Multi-Centrality Graph Spectral Decompositions and Their Application to Cyber Intrusion Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Pin-Yu; Choudhury, Sutanay; Hero, Alfred

    Many modern datasets can be represented as graphs, and hence spectral decompositions such as graph principal component analysis (PCA) can be useful. Distinct from previous graph decomposition approaches based on subspace projection of a single topological feature, e.g., the centered graph adjacency matrix (graph Laplacian), we propose spectral decomposition approaches to graph PCA and graph dictionary learning that integrate multiple features, including graph walk statistics, centrality measures, and graph distances to reference nodes. In this paper we propose a new PCA method for single-graph analysis, called multi-centrality graph PCA (MC-GPCA), and a new dictionary learning method for ensembles of graphs, called multi-centrality graph dictionary learning (MC-GDL), both based on spectral decomposition of multi-centrality matrices. As an application to cyber intrusion detection, MC-GPCA can be an effective indicator of anomalous connectivity patterns, and MC-GDL can provide a discriminative basis for attack classification.

  8. Empirical projection-based basis-component decomposition method

    NASA Astrophysics Data System (ADS)

    Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland

    2009-02-01

    Advances in the development of semiconductor-based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which, in addition to the conventional approach of Alvarez and Macovski, a third basis component is introduced, e.g., a gadolinium-based CT contrast material. After the decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood function of the measurements. This procedure is time consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector. Therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al. to multi-energy, photon-counting CT raw data. Instead of working in the image domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach in terms of image noise and image bias (artifacts) and find that only a moderate noise increase is to be expected for small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.

  9. An optimization approach for fitting canonical tensor decompositions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
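    For orientation, the ALS baseline that the abstract compares against can be written compactly. This is a generic textbook-style sketch (random initialization, fixed iteration count, pseudoinverse solves), not the authors' implementation:

    ```python
    # Plain ALS for a rank-R CP decomposition of a 3-way tensor.
    import numpy as np

    def unfold(T, mode):
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def khatri_rao(A, B):
        # Column-wise Kronecker product, matching the row-major unfolding.
        return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

    def cp_als(T, R, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        A, B, C = (rng.standard_normal((n, R)) for n in T.shape)
        for _ in range(iters):
            A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
            B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
            C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
        return A, B, C

    # Build an exact rank-2 tensor and check that ALS recovers a good fit.
    rng = np.random.default_rng(1)
    A0, B0, C0 = (rng.standard_normal((d, 2)) for d in (5, 6, 7))
    T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
    A, B, C = cp_als(T, R=2)
    T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
    print(np.linalg.norm(T - T_hat) / np.linalg.norm(T))  # near zero
    ```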

  10. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach

    NASA Astrophysics Data System (ADS)

    Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-01

    The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix.
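    A minimal sketch of the proposed variant, assuming Euclidean pair-wise dissimilarities and synthetic two-class data (the TSFS spectra and preprocessing of the paper are not reproduced here):

    ```python
    # Eigenvalue-eigenvector decomposition (EED) applied to a pair-wise
    # dissimilarity matrix rather than the covariance matrix.
    import numpy as np

    def eed_scores(X, n_factors=2):
        sq = (X ** 2).sum(axis=1)
        D = np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0))
        w, v = np.linalg.eigh(D)
        order = np.argsort(np.abs(w))[::-1][:n_factors]
        return v[:, order] * w[order]  # sample scores for class separation

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (20, 50)),   # synthetic class 1
                   rng.normal(3, 1, (20, 50))])  # synthetic class 2
    print(eed_scores(X).shape)  # (40, 2): two score axes per sample
    ```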

  11. Decomposition of heterogeneous organic matter and its long-term stabilization in soils

    USGS Publications Warehouse

    Sierra, Carlos A.; Harmon, Mark E.; Perakis, Steven S.

    2011-01-01

    Soil organic matter is a complex mixture of material with heterogeneous biological, physical, and chemical properties. Decomposition models represent this heterogeneity either as a set of discrete pools with different residence times or as a continuum of qualities. It is unclear, though, whether these two different approaches yield comparable predictions of organic matter dynamics. Here, we compare predictions from these two different approaches and propose an intermediate approach to study organic matter decomposition based on concepts from continuous models implemented numerically. We found that the disagreement between discrete and continuous approaches can be considerable depending on the degree of nonlinearity of the model and simulation time. The two approaches can diverge substantially for predicting long-term processes in soils. Based on our alternative approach, which is a modification of the continuous quality theory, we explored the temporal patterns that emerge by treating substrate heterogeneity explicitly. The analysis suggests that the pattern of carbon mineralization over time is highly dependent on the degree and form of nonlinearity in the model, mostly expressed as differences in microbial growth and efficiency for different substrates. Moreover, short-term stabilization and destabilization mechanisms operating simultaneously result in long-term accumulation of carbon characterized by low decomposition rates, independent of the characteristics of the incoming litter. We show that representation of heterogeneity in the decomposition process can lead to substantial improvements in our understanding of carbon mineralization and its long-term stability in soils.

  12. Multilevel decomposition of complete vehicle configuration in a parallel computing environment

    NASA Technical Reports Server (NTRS)

    Bhatt, Vinay; Ragsdell, K. M.

    1989-01-01

    This research summarizes various approaches to multilevel decomposition to solve large structural problems. A linear decomposition scheme based on the Sobieski algorithm is selected as a vehicle for automated synthesis of a complete vehicle configuration in a parallel processing environment. The research is in a developmental state. Preliminary numerical results are presented for several example problems.

  13. An efficient computational approach to model statistical correlations in photon counting x-ray detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faby, Sebastian; Maier, Joscha; Sawall, Stefan

    2016-07-15

    Purpose: To introduce and evaluate an increment matrix approach (IMA) describing the signal statistics of energy-selective photon counting detectors, including spatial–spectral correlations between energy bins of neighboring detector pixels. The importance of the occurring correlations for image-based material decomposition is studied. Methods: An IMA describing the counter increase patterns in a photon counting detector is proposed. This IMA has the potential to decrease the number of required random numbers compared to Monte Carlo simulations by pursuing an approach based on convolutions. To validate and demonstrate the IMA, an approximate semirealistic detector model is provided, simulating a photon counting detector in a simplified manner, e.g., by neglecting count rate-dependent effects. In this way, the spatial–spectral correlations on the detector level are obtained and fed into the IMA. The importance of these correlations in reconstructed energy bin images and the corresponding detector performance in image-based material decomposition is evaluated using a statistically optimal decomposition algorithm. Results: The results of IMA together with the semirealistic detector model were compared to other models and measurements using the spectral response and the energy bin sensitivity, finding a good agreement. Correlations between the different reconstructed energy bin images could be observed, and turned out to be of weak nature. These correlations were found to be not relevant in image-based material decomposition. An even simpler simulation procedure based on the energy bin sensitivity was tested instead and yielded similar results for the image-based material decomposition task, as long as the fact that one incident photon can increase multiple counters across neighboring detector pixels is taken into account. Conclusions: The IMA is computationally efficient as it required about 10^2 random numbers per ray incident on a detector pixel instead of an estimated 10^8 random numbers per ray as Monte Carlo approaches would need. The spatial–spectral correlations as described by IMA are not important for the studied image-based material decomposition task. Respecting the absolute photon counts and thus the multiple counter increases by a single x-ray photon, the same material decomposition performance could be obtained with a simpler detector description using the energy bin sensitivity.

  14. Water/cortical bone decomposition: A new approach in dual energy CT imaging for bone marrow oedema detection. A feasibility study.

    PubMed

    Biondi, M; Vanzi, E; De Otto, G; Banci Buonamici, F; Belmonte, G M; Mazzoni, L N; Guasti, A; Carbone, S F; Mazzei, M A; La Penna, A; Foderà, E; Guerreri, D; Maiolino, A; Volterrani, L

    2016-12-01

    Many studies have aimed at validating the application of dual-energy computed tomography (DECT) in clinical practice where conventional CT is not exhaustive. An example is given by bone marrow oedema detection, in which DECT based on water/calcium (W/Ca) decomposition has been applied. In this paper a new DECT approach, based on water/cortical bone (W/CB) decomposition, was investigated. Eight patients suffering from marrow oedema were scanned with MRI and DECT. Two-material density decomposition was performed in ROIs corresponding to normal bone marrow and oedema. These regions were drawn on DECT images using MRI information. Both W/Ca and W/CB were considered as material bases. Scatter plots of W/Ca and W/CB concentrations were made for each ROI in order to evaluate whether oedema could be distinguished from normal bone marrow. Thresholds were defined on the scatter plots in order to produce DECT images in which oedema regions were highlighted through color maps. The agreement between these images and MR was scored by two expert radiologists. For all the patients, the best scores were obtained using W/CB density decomposition. In all cases, DECT color map images based on W/CB decomposition showed better agreement with MR in bone marrow oedema identification than those based on W/Ca decomposition. This result encourages further studies to evaluate whether DECT based on W/CB decomposition could be an alternative technique to MR, which would be important when a short scanning duration is relevant, as in the case of aged or traumatic patients. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  15. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach.

    PubMed

    Tarai, Madhumita; Kumar, Keshav; Divya, O; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-05

    The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Capturing molecular multimode relaxation processes in excitable gases based on decomposition of acoustic relaxation spectra

    NASA Astrophysics Data System (ADS)

    Zhu, Ming; Liu, Tingting; Wang, Shu; Zhang, Kesheng

    2017-08-01

    Existing two-frequency reconstructive methods can only capture primary (single) molecular relaxation processes in excitable gases. In this paper, we present a reconstructive method based on the novel decomposition of frequency-dependent acoustic relaxation spectra to capture the entire molecular multimode relaxation process. This decomposition of acoustic relaxation spectra is developed from the frequency-dependent effective specific heat, indicating that a multi-relaxation process is the sum of the interior single-relaxation processes. Based on this decomposition, we can reconstruct the entire multi-relaxation process by capturing the relaxation times and relaxation strengths of N interior single-relaxation processes, using the measurements of acoustic absorption and sound speed at 2N frequencies. Experimental data for the gas mixtures CO2-N2 and CO2-O2 validate our decomposition and reconstruction approach.
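    The decomposition the abstract relies on, namely that a multi-relaxation spectrum is the sum of single-relaxation spectra, can be written out directly. The relaxation frequencies and strengths below are illustrative placeholders, not measured values for CO2-N2 or CO2-O2:

    ```python
    # Sum-of-single-relaxations model for acoustic absorption per wavelength.
    import numpy as np

    def single_relaxation(f, f_relax, strength):
        """Absorption-per-wavelength contribution of one relaxation process."""
        r = f / f_relax
        return 2 * strength * r / (1 + r ** 2)

    f = np.logspace(2, 7, 500)                      # frequency grid, Hz
    spectrum = (single_relaxation(f, 2.0e4, 0.6) +  # interior process 1
                single_relaxation(f, 8.0e5, 0.2))   # interior process 2
    # With absorption and sound-speed measurements at 2N frequencies one
    # can, in principle, invert for the N (f_relax, strength) pairs
    # behind this summed spectrum.
    ```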

  17. An innovative approach for characteristic analysis and state-of-health diagnosis for a Li-ion cell based on the discrete wavelet transform

    NASA Astrophysics Data System (ADS)

    Kim, Jonghoon; Cho, B. H.

    2014-08-01

    This paper introduces an innovative approach to analyzing the electrochemical characteristics and diagnosing the state of health (SOH) of a Li-ion cell based on the discrete wavelet transform (DWT). In this approach, the DWT is applied as a powerful tool for analyzing the discharging/charging voltage signal (DCVS) of a Li-ion cell, with its non-stationary and transient phenomena. Specifically, DWT-based multi-resolution analysis (MRA) is used to extract information on the electrochemical characteristics in the time and frequency domains simultaneously. By using MRA with wavelet decomposition, information on the electrochemical characteristics of a Li-ion cell can be extracted from the DCVS over a wide frequency range. Wavelet decomposition is implemented with the order-3 Daubechies wavelet (db3) and scale 5, selected as the best wavelet function and the optimal decomposition scale. In particular, the present approach takes these investigations one step further by examining the low- and high-frequency components (approximation component An and detail component Dn, respectively) extracted from Li-ion cells with different electrochemical characteristics caused by aging effects. Experimental results demonstrate the effectiveness of the DWT-based approach for reliable diagnosis of the SOH of a Li-ion cell.
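    A minimal sketch of the MRA step with the stated db3/level-5 choice, assuming the PyWavelets package and a synthetic stand-in for the DCVS:

    ```python
    # DWT-based multi-resolution analysis of a (synthetic) voltage signal.
    import numpy as np
    import pywt

    t = np.linspace(0, 1, 4096)
    voltage = 4.0 - 0.5 * t + 0.01 * np.random.randn(t.size)  # stand-in DCVS

    coeffs = pywt.wavedec(voltage, 'db3', level=5)  # [A5, D5, D4, D3, D2, D1]
    A5 = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]],
                      'db3')
    # A5 carries the low-frequency trend; the detail bands D1..D5 carry the
    # high-frequency content compared across cells of different ages.
    ```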

  18. Renewable energy in electric utility capacity planning: a decomposition approach with application to a Mexican utility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Staschus, K.

    1985-01-01

    In this dissertation, efficient algorithms for electric-utility capacity expansion planning with renewable energy are developed. The algorithms include a deterministic phase that quickly finds a near-optimal expansion plan using derating and a linearized approximation to the time-dependent availability of nondispatchable energy sources. A probabilistic second phase needs comparatively few computationally expensive probabilistic simulation iterations to modify this solution towards the optimal expansion plan. For the deterministic first phase, two algorithms, based on Lagrangian dual decomposition and Generalized Benders Decomposition, are developed. The probabilistic second phase uses a Generalized Benders Decomposition approach. Extensive computational tests of the algorithms are reported. Among the deterministic algorithms, the one based on Lagrangian duality proves fastest. The two-phase approach is shown to save up to 80% in computing time compared to a purely probabilistic algorithm. The algorithms are applied to determine the optimal expansion plan for the Tijuana-Mexicali subsystem of the Mexican electric utility system. A strong recommendation to push conservation programs in the desert city of Mexicali results from this implementation.

  19. Aligning observed and modelled behaviour based on workflow decomposition

    NASA Astrophysics Data System (ADS)

    Wang, Lu; Du, YuYue; Liu, Wei

    2017-09-01

    When business processes are mostly supported by information systems, the availability of event logs generated from these systems, as well as the requirement for appropriate process models, is increasing. Business processes can be discovered, monitored and enhanced by extracting process-related information. However, some events cannot be correctly identified because of the explosion in the volume of event logs. Therefore, a new process mining technique based on a workflow decomposition method is proposed in this paper. Petri nets (PNs) are used to describe business processes, and conformance checking between event logs and process models is then investigated. A decomposition approach is proposed to divide large process models and event logs into several separate parts that can be analysed independently, while an alignment approach based on a state equation method from PN theory enhances the performance of conformance checking. Both approaches are implemented in the process mining framework ProM. The correctness and effectiveness of the proposed methods are illustrated through experiments.

  20. A New Approach of evaluating the damage in simply-supported reinforced concrete beam by Local mean decomposition (LMD)

    NASA Astrophysics Data System (ADS)

    Zhang, Xuebing; Liu, Ning; Xi, Jiaxin; Zhang, Yunqi; Zhang, Wenchun; Yang, Peipei

    2017-08-01

    How to analyze nonstationary response signals and obtain vibration characteristics is extremely important in vibration-based structural diagnosis methods. In this work, we introduce a more suitable time-frequency decomposition method, termed local mean decomposition (LMD), to replace the widely used empirical mode decomposition (EMD). By employing the LMD method, one can derive a group of component signals, each of which is more stationary, and then analyze the vibration state and assess the structural damage of a construction or building. We illustrate the effectiveness of LMD on synthetic data and on experimental data recorded from a simply-supported reinforced concrete beam. Based on the decomposition results, an elementary method of damage diagnosis is then proposed.

  1. Geometric decomposition of the conformation tensor in viscoelastic turbulence

    NASA Astrophysics Data System (ADS)

    Hameduddin, Ismail; Meneveau, Charles; Zaki, Tamer A.; Gayme, Dennice F.

    2018-05-01

    This work introduces a mathematical approach to analysing the polymer dynamics in turbulent viscoelastic flows that uses a new geometric decomposition of the conformation tensor, along with associated scalar measures of the polymer fluctuations. The approach circumvents an inherent difficulty in traditional Reynolds decompositions of the conformation tensor: the fluctuating tensor fields are not positive-definite and so do not retain the physical meaning of the tensor. The geometric decomposition of the conformation tensor yields both mean and fluctuating tensor fields that are positive-definite. The fluctuating tensor in the present decomposition has a clear physical interpretation as a polymer deformation relative to the mean configuration. Scalar measures of this fluctuating conformation tensor are developed based on the non-Euclidean geometry of the set of positive-definite tensors. Drag-reduced viscoelastic turbulent channel flow is then used as an example case study. The conformation tensor field, obtained using direct numerical simulations, is analysed using the proposed framework.
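    One natural construction consistent with the abstract measures fluctuations relative to the mean tensor so that positive-definiteness is preserved. The symmetric-square-root variant below is an assumption for illustration, not necessarily the paper's exact map:

    ```python
    # Relative (geometric) fluctuation of a conformation tensor: G stays
    # SPD, unlike the Reynolds fluctuation C' = C - C_mean, and G = I
    # when the instantaneous tensor equals the mean.
    import numpy as np

    def inv_spd_sqrt(M):
        """Inverse symmetric square root of an SPD matrix."""
        w, v = np.linalg.eigh(M)
        return v @ np.diag(1.0 / np.sqrt(w)) @ v.T

    def relative_fluctuation(C, C_mean):
        Li = inv_spd_sqrt(C_mean)
        return Li @ C @ Li  # SPD whenever C is SPD

    C_mean = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 0.5], [0.0, 0.5, 2.0]])
    C_inst = C_mean + np.array([[0.5, 0.2, 0.0],
                                [0.2, 0.3, 0.1],
                                [0.0, 0.1, 0.2]])
    G = relative_fluctuation(C_inst, C_mean)
    print(np.linalg.eigvalsh(G))  # all positive: physically meaningful
    ```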

  2. The suitability of visual taphonomic methods for digital photographs: An experimental approach with pig carcasses in a tropical climate.

    PubMed

    Ribéreau-Gayon, Agathe; Rando, Carolyn; Morgan, Ruth M; Carter, David O

    2018-05-01

    In the context of increased scrutiny of the methods in forensic sciences, it is essential to ensure that the approaches used in forensic taphonomy to measure decomposition and estimate the postmortem interval are underpinned by robust evidence-based data. Digital photographs are an important source of documentation in forensic taphonomic investigations, but the suitability of the current approaches for photographs, rather than real-time remains, is poorly studied, which can undermine accurate forensic conclusions. The present study aimed to investigate the suitability of 2D colour digital photographs for evaluating decomposition of exposed human analogues (Sus scrofa domesticus) in a tropical savanna environment (Hawaii), using two published scoring methods: Megyesi et al. (2005) and Keough et al. (2017). It was found that there were significant differences between the real-time and photograph decomposition scores when the Megyesi et al. method was used. However, the Keough et al. method applied to photographs reflected real-time decomposition more closely and thus appears more suitable for evaluating pig decomposition from 2D photographs. The findings indicate that the type of scoring method used has a significant impact on the ability to accurately evaluate the decomposition of exposed pig carcasses from photographs. It was further identified that photographic taphonomic analysis can reach high inter-observer reproducibility. These novel findings are of significant importance for the forensic sciences as they highlight the potential for high-quality photograph coverage to provide useful complementary information for the forensic taphonomic investigation. New recommendations to develop robust, transparent approaches adapted to photographs in forensic taphonomy are suggested based on these findings. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  3. Distributed Damage Estimation for Prognostics based on Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Bregon, Anibal; Roychoudhury, Indranil

    2011-01-01

    Model-based prognostics approaches capture system knowledge in the form of physics-based models of components, and how they fail. These methods consist of a damage estimation phase, in which the health state of a component is estimated, and a prediction phase, in which the health state is projected forward in time to determine end of life. However, the damage estimation problem is often multi-dimensional and computationally intensive. We propose a model decomposition approach adapted from the diagnosis community, called possible conflicts, in order to both improve the computational efficiency of damage estimation, and formulate a damage estimation approach that is inherently distributed. Local state estimates are combined into a global state estimate from which prediction is performed. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the approach.

  4. Improving Distributed Diagnosis Through Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Bregon, Anibal; Daigle, Matthew John; Roychoudhury, Indranil; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino

    2011-01-01

    Complex engineering systems require efficient fault diagnosis methodologies, but centralized approaches do not scale well, and this motivates the development of distributed solutions. This work presents an event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, by using the structural model decomposition capabilities provided by Possible Conflicts. We develop a distributed diagnosis algorithm that uses residuals computed by extending Possible Conflicts to build local event-based diagnosers based on global diagnosability analysis. The proposed approach is applied to a multitank system, and results demonstrate an improvement in the design of local diagnosers. Since local diagnosers use only a subset of the residuals, and use subsystem models to compute residuals (instead of the global system model), the local diagnosers are more efficient than previously developed distributed approaches.

  5. Task decomposition for a multilimbed robot to work in reachable but unorientable space

    NASA Technical Reports Server (NTRS)

    Su, Chau; Zheng, Yuan F.

    1991-01-01

    Robot manipulators installed on legged mobile platforms are suggested for enlarging robot workspace. To plan the motion of such a system, the arm-platform motion coordination problem is raised, and a task decomposition is proposed to solve the problem. A given task described by the destination position and orientation of the end effector is decomposed into subtasks for arm manipulation and for platform configuration, respectively. The former is defined as the end-effector position and orientation with respect to the platform, and the latter as the platform position and orientation in the base coordinates. Three approaches are proposed for the task decomposition. The approaches are also evaluated in terms of the displacements, from which an optimal approach can be selected.

  6. Generalized neurofuzzy network modeling algorithms using Bézier-Bernstein polynomial functions and additive decomposition.

    PubMed

    Hong, X; Harris, C J

    2000-01-01

    This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bézier-Bernstein polynomial functions. The approach is general in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. The construction algorithm also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline-expansion-based neurofuzzy systems, Bézier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and a Delaunay input-space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. The new modeling network is based on the additive decomposition approach together with two separate basis function formation approaches for the univariate and bivariate Bézier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.
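    For reference, the Bernstein basis underlying the model has the nonnegativity and partition-of-unity properties the abstract appeals to; a minimal evaluation sketch:

    ```python
    # Degree-n Bernstein polynomial basis on [0, 1]: nonnegative and
    # summing to one, so each basis function reads as a fuzzy membership.
    import numpy as np
    from math import comb

    def bernstein_basis(n, x):
        """All n+1 degree-n Bernstein polynomials evaluated at points x."""
        x = np.asarray(x)
        return np.stack([comb(n, k) * x**k * (1 - x)**(n - k)
                         for k in range(n + 1)])

    B = bernstein_basis(3, np.linspace(0, 1, 5))
    print(B.sum(axis=0))  # all ones: partition of unity
    ```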

  7. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography.

    PubMed

    Cai, C; Rodet, T; Legoupil, S; Mohammad-Djafari, A

    2013-11-01

    Dual-energy computed tomography (DECT) makes it possible to obtain two fractions of basis materials without segmentation: a soft-tissue-equivalent water fraction and a hard-matter-equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Following Bayesian inference, the decomposition fractions and observation variance are estimated using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is simplified into a single estimation problem, transforming the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also necessary to have accurate spectrum information about the source-detector system; when dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For materials between water and bone, separation errors of less than 5% are observed on the estimated decomposition fractions. The proposed approach is a statistical reconstruction approach based on a nonlinear forward model accounting for the full beam polychromaticity and applied directly to the projections without taking the negative log. Compared to approaches based on linear forward models and to BHA correction approaches, it has advantages in noise robustness and reconstruction accuracy.

  8. Domain Decomposition By the Advancing-Partition Method for Parallel Unstructured Grid Generation

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.; Zagaris, George

    2009-01-01

    A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.

  9. Domain Decomposition By the Advancing-Partition Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2008-01-01

    A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.

  10. Near-lossless multichannel EEG compression based on matrix and tensor decompositions.

    PubMed

    Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej

    2013-05-01

    A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
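    The "lossy plus residual" principle is easy to demonstrate in isolation. The sketch below uses a low-rank SVD stand-in for the lossy layer (the paper's matrix/tensor coders and arithmetic coding are not reproduced) and shows how uniform residual quantization enforces a specifiable maximum absolute error:

    ```python
    # Lossy-plus-residual coding with a guaranteed max absolute error.
    import numpy as np

    def near_lossless(X, rank=4, max_err=0.01):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        lossy = U[:, :rank] * s[:rank] @ Vt[:rank]  # lossy layer (low rank)
        q = np.round((X - lossy) / (2 * max_err))   # residual layer
        return lossy, q.astype(np.int32)            # q would be entropy-coded

    def reconstruct(lossy, q, max_err=0.01):
        return lossy + q * (2 * max_err)

    X = np.random.default_rng(0).standard_normal((16, 1024))  # EEG stand-in
    lossy, q = near_lossless(X)
    assert np.abs(X - reconstruct(lossy, q)).max() <= 0.01  # bound holds
    ```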

  11. A novel ECG data compression method based on adaptive Fourier decomposition

    NASA Astrophysics Data System (ADS)

    Tan, Chunyu; Zhang, Liming

    2017-12-01

    This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach that can decompose a signal with fast convergence and hence reconstruct ECG signals with high fidelity. Unlike most high-performance algorithms, our method does not use any preprocessing operation before compression. Huffman coding is employed for further compression. Validated on 48 ECG recordings of the MIT-BIH arrhythmia database, the proposed method achieves a compression ratio (CR) of 35.53 and a percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition times, with a robust PRD-CR relationship. The results demonstrate that the proposed method performs well compared with state-of-the-art ECG compressors.

  12. A statistical approach based on accumulated degree-days to predict decomposition-related processes in forensic studies.

    PubMed

    Michaud, Jean-Philippe; Moreau, Gaétan

    2011-01-01

    Using pig carcasses exposed over 3 years in rural fields during spring, summer, and fall, we studied the relationship between decomposition stages and degree-day accumulation (i) to verify the predictability of the decomposition stages used in forensic entomology to document carcass decomposition and (ii) to build a degree-day accumulation model applicable to various decomposition-related processes. Results indicate that the decomposition stages can be predicted with accuracy from temperature records and that a reliable degree-day index can be developed to study decomposition-related processes. The development of degree-day indices opens new doors for researchers and allows for the application of inferential tools unaffected by climatic variability, as well as for the inclusion of statistics in a science that is primarily descriptive and in need of validation methods in courtroom proceedings. © 2010 American Academy of Forensic Sciences.
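    The bookkeeping behind an ADD index is a thresholded cumulative sum of daily mean temperatures. In the sketch below, the base temperature and the stage threshold are hypothetical placeholders, not the paper's fitted values:

    ```python
    # Accumulated degree-days (ADD) and a stage-threshold lookup.
    def accumulated_degree_days(daily_mean_temps, base=0.0):
        add, series = 0.0, []
        for temp in daily_mean_temps:
            add += max(temp - base, 0.0)  # days below base add nothing
            series.append(add)
        return series

    temps = [12.0, 15.5, 18.0, 20.0, 19.0, 16.5, 14.0]  # deg C, one week
    series = accumulated_degree_days(temps)
    stage_threshold = 80.0  # hypothetical ADD at which a stage is reached
    day = next((i + 1 for i, a in enumerate(series) if a >= stage_threshold),
               None)
    print(series[-1], day)  # 115.0 ADD in total; threshold crossed on day 5
    ```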

  13. Scenario-based modeling for multiple allocation hub location problem under disruption risk: multiple cuts Benders decomposition approach

    NASA Astrophysics Data System (ADS)

    Yahyaei, Mohsen; Bashiri, Mahdi

    2017-12-01

    The hub location problem arises in a variety of domains such as transportation and telecommunication systems. In many real-world situations, hub facilities are subject to disruption. This paper deals with the multiple allocation hub location problem in the presence of facility failures. To model the problem, a two-stage stochastic formulation is developed. In the proposed model, the number of scenarios grows exponentially with the number of facilities. To alleviate this issue, two approaches are applied simultaneously. The first is to apply sample average approximation (SAA) to approximate the two-stage stochastic problem via sampling. Then, by applying the multiple-cuts Benders decomposition approach, computational performance is enhanced. Numerical studies show the effective performance of the SAA in terms of optimality gap for small problem instances with numerous scenarios. Moreover, the performance of multi-cut Benders decomposition is assessed through comparison with the classic version, and the computational results reveal the superiority of the multi-cut approach regarding computational time and number of iterations.

  14. Variance decomposition in stochastic simulators.

    PubMed

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  15. Variance decomposition in stochastic simulators

    NASA Astrophysics Data System (ADS)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
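    The Sobol-Hoeffding machinery both records describe can be demonstrated on a toy model. The pick-freeze estimator below is a generic Monte Carlo sketch for first-order variance-based sensitivities, with a linear stand-in in place of a reaction network:

    ```python
    # First-order Sobol indices by the pick-freeze (Saltelli) estimator.
    import numpy as np

    def first_order_sobol(f, d, n=200_000, seed=0):
        rng = np.random.default_rng(seed)
        A, B = rng.random((n, d)), rng.random((n, d))
        fA, fB = f(A), f(B)
        var = np.var(fA)
        S = []
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]          # freeze input i, resample the rest
            S.append(np.mean(fB * (f(ABi) - fA)) / var)
        return S

    f = lambda X: X[:, 0] + 2 * X[:, 1] + 0.1 * X[:, 2]
    print(first_order_sobol(f, 3))  # approx [0.20, 0.80, 0.00]
    ```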

  16. GW calculations using the spectral decomposition of the dielectric matrix: Verification, validation, and comparison of methods

    DOE PAGES

    Pham, T. Anh; Nguyen, Huy-Viet; Rocca, Dario; ...

    2013-04-26

    In a recent paper we presented an approach to evaluate quasiparticle energies based on the spectral decomposition of the static dielectric matrix. This method does not require the calculation of unoccupied electronic states or the direct diagonalization of large dielectric matrices, and it avoids the use of plasmon-pole models. The numerical accuracy of the approach is controlled by a single parameter, i.e., the number of eigenvectors used in the spectral decomposition of the dielectric matrix. Here we present a comprehensive validation of the method, encompassing calculations of ionization potentials and electron affinities of various molecules and of band gaps for several crystalline and disordered semiconductors. Lastly, we demonstrate the efficiency of our approach by carrying out GW calculations for systems with several hundred valence electrons.

  17. Primary decomposition of zero-dimensional ideals over finite fields

    NASA Astrophysics Data System (ADS)

    Gao, Shuhong; Wan, Daqing; Wang, Mingsheng

    2009-03-01

    A new algorithm is presented for computing the primary decomposition of zero-dimensional ideals over finite fields. Like Berlekamp's algorithm for univariate polynomials, the new method is based on the invariant subspace of the Frobenius map acting on the quotient algebra. The dimension of the invariant subspace equals the number of primary components, and a basis of the invariant subspace yields a complete decomposition. Unlike previous approaches for decomposing multivariate polynomial systems, the new method needs neither primality testing nor any generic projection; instead, it reduces the general decomposition problem directly to root finding of univariate polynomials over the ground field. It is also shown how Groebner basis structure can be used to obtain a partial primary decomposition without any root finding.

  18. New bounding and decomposition approaches for MILP investment problems: Multi-area transmission and generation planning under policy constraints

    DOE PAGES

    Munoz, F. D.; Hobbs, B. F.; Watson, J. -P.

    2016-02-01

    A novel two-phase bounding and decomposition approach is proposed to compute optimal and near-optimal solutions to large-scale mixed-integer investment planning problems with a large number of operating subproblems, each of which is a convex optimization problem. Our motivating application is the planning of power transmission and generation in which policy constraints are designed to incentivize high amounts of intermittent generation in electric power systems. The bounding phase exploits Jensen's inequality to define a lower bound, which we extend to stochastic programs that use expected-value constraints to enforce policy objectives. The decomposition phase, in which the bounds are tightened, improves upon the standard Benders' algorithm by accelerating the convergence of the bounds. The lower bound is tightened by using a Jensen's-inequality-based approach to introduce an auxiliary lower bound into the Benders master problem. Upper bounds for both phases are computed using a sub-sampling approach executed on a parallel computer system. Numerical results show that only the bounding phase is necessary if loose optimality gaps are acceptable, but the decomposition phase is required to attain tight optimality gaps. Moreover, using both phases performs better, in terms of convergence speed, than attempting to solve the problem with just the bounding phase or regular Benders decomposition alone.

  19. New bounding and decomposition approaches for MILP investment problems: Multi-area transmission and generation planning under policy constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munoz, F. D.; Hobbs, B. F.; Watson, J. -P.

    A novel two-phase bounding and decomposition approach is proposed to compute optimal and near-optimal solutions to large-scale mixed-integer investment planning problems with a large number of operating subproblems, each of which is a convex optimization problem. Our motivating application is the planning of power transmission and generation in which policy constraints are designed to incentivize high amounts of intermittent generation in electric power systems. The bounding phase exploits Jensen's inequality to define a lower bound, which we extend to stochastic programs that use expected-value constraints to enforce policy objectives. The decomposition phase, in which the bounds are tightened, improves upon the standard Benders' algorithm by accelerating the convergence of the bounds. The lower bound is tightened by using a Jensen's-inequality-based approach to introduce an auxiliary lower bound into the Benders master problem. Upper bounds for both phases are computed using a sub-sampling approach executed on a parallel computer system. Numerical results show that only the bounding phase is necessary if loose optimality gaps are acceptable, but the decomposition phase is required to attain tight optimality gaps. Moreover, using both phases performs better, in terms of convergence speed, than attempting to solve the problem with just the bounding phase or regular Benders decomposition alone.

  20. Gas Pressure Monitored Iodide-Catalyzed Decomposition Kinetics of H₂O₂: Initial-Rate and Integrated-Rate Methods in the General Chemistry Lab

    ERIC Educational Resources Information Center

    Nyasulu, Frazier; Barlag, Rebecca

    2010-01-01

    The reaction kinetics of the iodide-catalyzed decomposition of H₂O₂ using the integrated-rate method is described. The method is based on the measurement of the total gas pressure using a datalogger and pressure sensor. This is a modification of a previously reported experiment based on the initial-rate approach.
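
    The integrated-rate analysis reduces to a linear fit: for first-order kinetics the evolved-gas pressure follows P(t) = P_inf (1 - e^(-kt)), so ln(P_inf - P) versus t is a straight line with slope -k. A sketch on synthetic pressure data (all numbers illustrative):

    ```python
    import numpy as np

    k_true, p_inf = 0.012, 95.0                  # 1/s and torr, made-up values
    t = np.arange(0.0, 300.0, 10.0)
    rng = np.random.default_rng(1)
    p = p_inf * (1 - np.exp(-k_true * t)) + rng.normal(0, 0.3, t.size)

    y = np.log(np.clip(p_inf - p, 1e-9, None))   # linearized integrated rate law
    slope, _ = np.polyfit(t, y, 1)
    print(f"fitted k = {-slope:.4f} 1/s (true {k_true})")
    ```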

  1. Systems-based decomposition schemes for the approximate solution of multi-term fractional differential equations

    NASA Astrophysics Data System (ADS)

    Ford, Neville J.; Connolly, Joseph A.

    2009-07-01

    We give a comparison of the efficiency of three alternative decomposition schemes for the approximate solution of multi-term fractional differential equations using the Caputo form of the fractional derivative. The schemes we compare are based on conversion of the original problem into a system of equations. We review alternative approaches and consider how the most appropriate numerical scheme may be chosen to solve a particular equation.

  2. Kinetics and mechanism of solid decompositions — From basic discoveries by atomic absorption spectrometry and quadrupole mass spectroscopy to thorough thermogravimetric analysis

    NASA Astrophysics Data System (ADS)

    L'vov, Boris V.

    2008-02-01

    This paper sums up the evolution of the thermochemical approach to the interpretation of solid decompositions over the past 25 years. This period includes two stages related to decomposition studies by different techniques: by ET AAS and QMS in 1981-2001 and by TG in 2002-2007. As a result of the ET AAS and QMS investigations, a method for determining the absolute rates of solid decompositions was developed and the mechanism of decomposition through congruent dissociative vaporization was discovered. On this basis, in the period from 1997 to 2001, the decomposition mechanisms of several classes of reactants were interpreted and some unusual effects observed in TA were explained. However, the thermochemical approach has not received any support from other TA researchers. One potential reason for this distrust was the unreliability of the E values measured by the traditional Arrhenius plot method. A theoretical analysis and comparison of the metrological features of the different methods used to determine thermochemical quantities led to the conclusion that, in comparison with the Arrhenius plot and second-law methods, the third-law method is very much to be preferred. However, the third-law method cannot be used in kinetic studies within the Arrhenius approach because it requires measuring the equilibrium pressures of the decomposition products. The method of absolute rates, by contrast, is ideally suited for this purpose. As a result of the much higher precision of the third-law method, some quantitative conclusions that follow from the theory were confirmed, and several new effects, invisible in the framework of the Arrhenius approach, have been revealed. In spite of the great progress made in developing a reliable methodology based on the third-law method, the thermochemical approach remains unclaimed as before.
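
    For reference, the Arrhenius-plot method criticized above fits ln k = ln A - E/(RT) as a straight line in 1/T; the fitted E is highly sensitive to scatter in the rate constants, which is the metrological point at issue. A minimal fit on synthetic data (values illustrative):

    ```python
    import numpy as np

    R = 8.314                                # J/(mol K)
    E_true, lnA_true = 150e3, 25.0           # illustrative Arrhenius parameters
    T = np.linspace(500, 650, 8)             # K
    rng = np.random.default_rng(2)
    lnk = lnA_true - E_true / (R * T) + rng.normal(0, 0.05, T.size)

    slope, lnA_fit = np.polyfit(1.0 / T, lnk, 1)   # slope = -E/R
    print(f"E = {-slope * R / 1e3:.1f} kJ/mol, ln A = {lnA_fit:.2f}")
    ```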

  3. Multi-focus image fusion based on window empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Qin, Xinqiang; Zheng, Jiaoyue; Hu, Gang; Wang, Jiao

    2017-09-01

    In order to improve multi-focus image fusion quality, a novel fusion algorithm based on window empirical mode decomposition (WEMD) is proposed. WEMD is an improved form of bidimensional empirical mode decomposition (BEMD) whose decomposition process uses an adding-window principle, effectively resolving the signal concealment problem. We used WEMD for multi-focus image fusion and formulated different fusion rules for the bidimensional intrinsic mode function (BIMF) components and the residue component. For fusion of the BIMF components, the concept of the sum-modified-Laplacian was used and a scheme based on visual feature contrast was adopted; for the residue coefficients, pixel values were selected based on local visibility. We carried out four groups of multi-focus image fusion experiments and compared objective evaluation criteria with three other fusion methods. The experimental results show that the proposed fusion approach is effective and fuses multi-focus images better than some traditional methods.

  4. A time domain frequency-selective multivariate Granger causality approach.

    PubMed

    Leistritz, Lutz; Witte, Herbert

    2016-08-01

    The investigation of effective connectivity is one of the major topics in computational neuroscience to understand the interaction between spatially distributed neuronal units of the brain. Thus, a wide variety of methods has been developed during the last decades to investigate functional and effective connectivity in multivariate systems. Their spectrum ranges from model-based to model-free approaches with a clear separation into time and frequency range methods. We present in this simulation study a novel time domain approach based on Granger's principle of predictability, which allows frequency-selective considerations of directed interactions. It is based on a comparison of prediction errors of multivariate autoregressive models fitted to systematically modified time series. These modifications are based on signal decompositions, which enable a targeted cancellation of specific signal components with specific spectral properties. Depending on the embedded signal decomposition method, a frequency-selective or data-driven signal-adaptive Granger Causality Index may be derived.
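
    The underlying Granger construction, comparing residual variances of restricted and full autoregressive models, can be sketched generically (this is the classical time-domain index, not the authors' signal-decomposition variant):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n, order = 2000, 2
    x = rng.normal(size=n)
    y = np.zeros(n)
    for t in range(order, n):                 # x drives y with one lag
        y[t] = 0.4 * y[t - 1] + 0.5 * x[t - 1] + 0.1 * rng.normal()

    def ar_residual_var(target, regressors, p=order):
        rows = [np.concatenate([r[t - p:t] for r in regressors])
                for t in range(p, len(target))]
        X, z = np.asarray(rows), target[p:]
        beta, *_ = np.linalg.lstsq(X, z, rcond=None)
        return np.var(z - X @ beta)

    gci = np.log(ar_residual_var(y, [y]) / ar_residual_var(y, [y, x]))
    print(f"Granger causality index x->y = {gci:.3f} (positive: x helps predict y)")
    ```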

  5. Decomposability and scalability in space-based observatory scheduling

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Smith, Stephen F.

    1992-01-01

    In this paper, we discuss issues of problem and model decomposition within the HSTS scheduling framework. HSTS was developed and originally applied in the context of the Hubble Space Telescope (HST) scheduling problem, motivated by the limitations of the current solution and, more generally, the insufficiency of classical planning and scheduling approaches in this problem context. We first summarize the salient architectural characteristics of HSTS and their relationship to previous scheduling and AI planning research. Then, we describe some key problem decomposition techniques supported by HSTS and underlying our integrated planning and scheduling approach, and we discuss the leverage they provide in solving space-based observatory scheduling problems.

  6. Least squares QR-based decomposition provides an efficient way of computing optimal regularization parameter in photoacoustic tomography.

    PubMed

    Shaw, Calvin B; Prakash, Jaya; Pramanik, Manojit; Yalavarthy, Phaneendra K

    2013-08-01

    A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov-minimization scheme is developed for photoacoustic imaging. This approach is based on the least squares QR decomposition, a well-known dimensionality reduction technique for large systems of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstructions of the initial pressure distribution, enabled via finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of a numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison.
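
    The LSQR ingredient itself is readily available: SciPy's lsqr solves the damped least-squares problem min ||Ax - b||^2 + damp^2 ||x||^2 by bidiagonalization, without ever forming A^T A, which is what makes scanning candidate regularization parameters cheap. A hedged sketch (the authors' specific parameter-selection rule is not reproduced):

    ```python
    import numpy as np
    from scipy.sparse.linalg import lsqr

    rng = np.random.default_rng(4)
    A = rng.normal(size=(500, 200))               # stand-in system matrix
    x_true = np.zeros(200); x_true[::20] = 1.0    # sparse "pressure" map
    b = A @ x_true + rng.normal(0, 0.5, 500)      # noisy measurements

    for damp in (0.1, 1.0, 10.0):                 # candidate Tikhonov weights
        x = lsqr(A, b, damp=damp)[0]
        print(f"damp={damp:5.1f}  residual={np.linalg.norm(A @ x - b):7.2f}  "
              f"error={np.linalg.norm(x - x_true):5.2f}")
    ```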

  7. S-matrix decomposition, natural reaction channels, and the quantum transition state approach to reactive scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manthe, Uwe, E-mail: uwe.manthe@uni-bielefeld.de; Ellerbrock, Roman, E-mail: roman.ellerbrock@uni-bielefeld.de

    2016-05-28

    A new approach for the quantum-state resolved analysis of polyatomic reactions is introduced. Based on the singular value decomposition of the S-matrix, energy-dependent natural reaction channels and natural reaction probabilities are defined. It is shown that the natural reaction probabilities are equal to the eigenvalues of the reaction probability operator [U. Manthe and W. H. Miller, J. Chem. Phys. 99, 3411 (1993)]. Consequently, the natural reaction channels can be interpreted as uniquely defined pathways through the transition state of the reaction. The analysis can efficiently be combined with reactive scattering calculations based on the propagation of thermal flux eigenstates. In contrast to a decomposition based straightforwardly on thermal flux eigenstates, it does not depend on the choice of the dividing surface separating reactants from products. The new approach is illustrated by studying a prototypical example, the H + CH₄ → H₂ + CH₃ reaction. The natural reaction probabilities and the contributions of the different vibrational states of the methyl product to the natural reaction channels are calculated and discussed. The relation between the thermal flux eigenstates and the natural reaction channels is studied in detail.

  8. Corrected confidence bands for functional data using principal components.

    PubMed

    Goldsmith, J; Greven, S; Crainiceanu, C

    2013-03-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however, these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. Copyright © 2013, The International Biometric Society.

  10. Computationally Efficient Adaptive Beamformer for Ultrasound Imaging Based on QR Decomposition.

    PubMed

    Park, Jongin; Wi, Seok-Min; Lee, Jin S

    2016-02-01

    Adaptive beamforming methods for ultrasound imaging have been studied to improve image resolution and contrast. The most common approach is the minimum variance (MV) beamformer, which minimizes the power of the beamformed output while keeping the response from the direction of interest constant. The method achieves higher resolution and better contrast than the delay-and-sum (DAS) beamformer, but it suffers from high computational cost. This cost is mainly due to the computation of the spatial covariance matrix and its inverse, which requires O(L³) computations, where L denotes the subarray size. In this study, we propose a computationally efficient MV beamformer based on QR decomposition. The idea behind our approach is to transform the spatial covariance matrix into a scalar matrix σI, from which we obtain the apodization weights and the beamformed output without computing the matrix inverse. The QR decomposition algorithm used to do this can be executed at low cost, and the computational complexity is therefore reduced to O(L²). In addition, our approach is mathematically equivalent to the conventional MV beamformer and thus shows equivalent performance. The simulation and experimental results support the validity of our approach.
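
    The linear-algebra shortcut can be sketched directly: for a snapshot matrix X, the covariance estimate is R = X^H X / N = Rq^H Rq / N with X = Qx Rq, so R^{-1} a requires only two triangular solves and no explicit inverse. This shows the QR route to the MV weights, not the paper's scalar-matrix transformation:

    ```python
    import numpy as np
    from scipy.linalg import solve_triangular

    rng = np.random.default_rng(5)
    L, N = 16, 200
    X = rng.normal(size=(N, L)) + 1j * rng.normal(size=(N, L))  # subarray snapshots
    a = np.ones(L, dtype=complex)                               # steering vector

    _, Rq = np.linalg.qr(X)                      # R = Rq^H Rq / N, Rq triangular
    u = solve_triangular(Rq.conj().T, a, lower=True)
    Rinv_a = N * solve_triangular(Rq, u, lower=False)
    w = Rinv_a / (a.conj() @ Rinv_a)             # minimum-variance apodization

    ref = np.linalg.solve(X.conj().T @ X / N, a)
    print(np.allclose(w, ref / (a.conj() @ ref)))  # matches inverse-based weights
    ```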

  11. Structural system identification based on variational mode decomposition

    NASA Astrophysics Data System (ADS)

    Bagheri, Abdollah; Ozbulut, Osman E.; Harris, Devin K.

    2018-03-01

    In this paper, a new structural identification method is proposed to identify the modal properties of engineering structures based on dynamic response decomposition using the variational mode decomposition (VMD). The VMD approach is a decomposition algorithm that was developed to overcome some of the drawbacks and limitations of the empirical mode decomposition method. The VMD-based modal identification algorithm decomposes the acceleration signal into a series of distinct modal responses and their respective center frequencies, such that when combined their cumulative modal responses reproduce the original acceleration response. The decaying amplitude of the extracted modal responses is then used to identify the modal damping ratios using a linear fitting function on the modal response data. Finally, after extracting modal responses from available sensors, the mode shape vector for each of the decomposed modes in the system is identified from all obtained modal response data. To demonstrate the efficiency of the algorithm, a series of numerical, laboratory, and field case studies were evaluated. The laboratory case study utilized the vibration response of a three-story shear frame, whereas the field study leveraged the ambient vibration response of a pedestrian bridge to characterize the modal properties of the structure. The modal properties of the shear frame were computed using an analytical approach for comparison with the experimental modal frequencies. Results from these case studies demonstrated that the proposed method is efficient and accurate in identifying modal data of the structures.
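
    The damping-identification step is a standard log-decrement fit: once a decomposed modal response u(t) ≈ A e^(-ζωnt) cos(ωdt) has been isolated, a linear fit to the logarithm of its successive peak amplitudes recovers ζ. A sketch on a synthetic mode (the VMD step itself is not reproduced):

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    fn, zeta_true = 4.0, 0.02                  # Hz and damping ratio, illustrative
    t = np.linspace(0, 5, 5000)
    wn = 2 * np.pi * fn
    wd = wn * np.sqrt(1 - zeta_true**2)
    u = np.exp(-zeta_true * wn * t) * np.cos(wd * t)   # decomposed modal response

    peaks, _ = find_peaks(u)                   # successive positive peaks
    slope, _ = np.polyfit(t[peaks], np.log(u[peaks]), 1)
    print(f"estimated zeta = {-slope / wn:.4f} (true {zeta_true})")
    ```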

  12. Augmenting the decomposition of EMG signals using supervised feature extraction techniques.

    PubMed

    Parsaei, Hossein; Gangeh, Mehrdad J; Stashuk, Daniel W; Kamel, Mohamed S

    2012-01-01

    Electromyographic (EMG) signal decomposition is the process of resolving an EMG signal into its constituent motor unit potential trains (MUPTs). In this work, the possibility of improving the decomposition results using two supervised feature extraction methods, i.e., Fisher discriminant analysis (FDA) and supervised principal component analysis (SPCA), is explored. Using the MUP labels provided by a decomposition-based quantitative EMG system as training data for FDA and SPCA, the MUPs are transformed into a new feature space such that the MUPs of a single MU become as close as possible to each other while those created by different MUs become as far apart as possible. The MUPs are then reclassified using a certainty-based classification algorithm. Evaluation results using 10 simulated EMG signals comprised of 3-11 MUPTs demonstrate that FDA and SPCA on average improve the decomposition accuracy by 6%. The improvement for the most difficult-to-decompose signal is about 12%, which shows the proposed approach is most beneficial in the decomposition of more complex signals.
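
    The FDA step can be sketched with scikit-learn's LDA: provisional MUP labels from a first decomposition pass train a projection that pulls MUPs of the same motor unit together, after which the MUPs are reclassified in the new space. The data below are synthetic stand-ins for MUP feature vectors:

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neighbors import NearestCentroid

    rng = np.random.default_rng(6)
    n_mu, n_per, dim = 4, 50, 20
    centers = rng.normal(0, 3, size=(n_mu, dim))           # one template per MU
    X = np.vstack([c + rng.normal(0, 1.5, (n_per, dim)) for c in centers])
    labels = np.repeat(np.arange(n_mu), n_per)             # provisional MUP labels

    Z = LinearDiscriminantAnalysis(n_components=n_mu - 1).fit_transform(X, labels)
    reclassified = NearestCentroid().fit(Z, labels).predict(Z)
    print(f"agreement after reclassification: {(reclassified == labels).mean():.1%}")
    ```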

  13. Time-dependent density functional theory for open systems with a positivity-preserving decomposition scheme for environment spectral functions

    NASA Astrophysics Data System (ADS)

    Wang, RuLin; Zheng, Xiao; Kwok, YanHo; Xie, Hang; Chen, GuanHua; Yam, ChiYung

    2015-04-01

    Understanding electronic dynamics on material surfaces is fundamentally important for applications including nanoelectronics, inhomogeneous catalysis, and photovoltaics. Practical approaches based on time-dependent density functional theory for open systems have been developed to characterize the dissipative dynamics of electrons in bulk materials. The accuracy and reliability of such approaches depend critically on how the electronic structure and memory effects of surrounding material environment are accounted for. In this work, we develop a novel squared-Lorentzian decomposition scheme, which preserves the positive semi-definiteness of the environment spectral matrix. The resulting electronic dynamics is guaranteed to be both accurate and convergent even in the long-time limit. The long-time stability of electronic dynamics simulation is thus greatly improved within the current decomposition scheme. The validity and usefulness of our new approach are exemplified via two prototypical model systems: quasi-one-dimensional atomic chains and two-dimensional bilayer graphene.

  14. Patient-Specific Seizure Detection in Long-Term EEG Using Signal-Derived Empirical Mode Decomposition (EMD)-based Dictionary Approach.

    PubMed

    Kaleem, Muhammad; Gurve, Dharmendra; Guergachi, Aziz; Krishnan, Sridhar

    2018-06-25

    The objective of the work described in this paper is the development of a computationally efficient methodology for patient-specific automatic seizure detection in long-term multi-channel EEG recordings. Approach: A novel patient-specific seizure detection approach based on a signal-derived Empirical Mode Decomposition (EMD) dictionary is proposed. For this purpose, we use an empirical framework for EMD-based dictionary creation and learning, inspired by traditional dictionary learning methods, in which the EMD-based dictionary is learned from the multi-channel EEG data being analyzed for automatic seizure detection. We present the algorithm for dictionary creation and learning, whose purpose is to learn dictionaries with a small number of atoms. Using training signals belonging to seizure and non-seizure classes, an initial dictionary, termed the raw dictionary, is formed. The atoms of the raw dictionary are composed of intrinsic mode functions obtained after decomposition of the training signals using the empirical mode decomposition algorithm. The raw dictionary is then trained using a learning algorithm, resulting in a substantial decrease in the number of atoms in the trained dictionary. The trained dictionary is then used for automatic seizure detection, such that coefficients of orthogonal projections of test signals against the trained dictionary form the features used for classification of test signals into seizure and non-seizure classes. Thus, no hand-engineered features have to be extracted from the data as in traditional seizure detection approaches. Main results: The performance of the proposed approach is validated using the CHB-MIT benchmark database, and averaged accuracy, sensitivity and specificity values of 92.9%, 94.3% and 91.5%, respectively, are obtained using a support vector machine classifier and five-fold cross-validation. These results are compared with other approaches using the same database, and the suitability of the approach for seizure detection in long-term multi-channel EEG recordings is discussed. Significance: The proposed approach describes a computationally efficient method for automatic seizure detection in long-term multi-channel EEG recordings. The method does not rely on hand-engineered features, as are required in traditional approaches. Furthermore, once formed and trained, the dictionary can be used for automatic seizure detection on newly recorded data, making the approach suitable for long-term multi-channel EEG recordings. © 2018 IOP Publishing Ltd.

  15. Real-time simulation of biological soft tissues: a PGD approach.

    PubMed

    Niroomandi, S; González, D; Alfaro, I; Bordeu, F; Leygue, A; Cueto, E; Chinesta, F

    2013-05-01

    We introduce here a novel approach for the numerical simulation of nonlinear, hyperelastic soft tissues at the kilohertz feedback rates necessary for haptic rendering. This approach is based upon the use of proper generalized decomposition techniques, a generalization of proper orthogonal decomposition (POD). Proper generalized decomposition techniques can be considered a means of a priori model order reduction and provide a physics-based meta-model without the need for prior computer experiments. The suggested strategy is thus composed of an offline phase, in which a general meta-model is computed, and an online evaluation phase in which the results are obtained in real time. Results are provided that show the potential of the proposed technique, together with benchmark tests that show the accuracy of the method. Copyright © 2013 John Wiley & Sons, Ltd.

  16. CP decomposition approach to blind separation for DS-CDMA system using a new performance index

    NASA Astrophysics Data System (ADS)

    Rouijel, Awatif; Minaoui, Khalid; Comon, Pierre; Aboutajdine, Driss

    2014-12-01

    In this paper, we present a canonical polyadic (CP) tensor decomposition isolating the scaling matrix. This has two major implications: (i) the problem conditioning shows up explicitly and could be controlled through a constraint on the so-called coherences and (ii) a performance criterion concerning the factor matrices can be exactly calculated and is more realistic than performance metrics used in the literature. Two new algorithms optimizing the CP decomposition based on gradient descent are proposed. This decomposition is illustrated by an application to direct-sequence code division multiple access (DS-CDMA) systems; computer simulations are provided and demonstrate the good behavior of these algorithms, compared to others in the literature.
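
    A plain CP fit takes a few lines with the TensorLy library (assuming its parafac API; the paper's coherence constraint and performance index are not implemented here). A rank-3 synthetic tensor, shaped like a users x symbols x antennas DS-CDMA array, is decomposed and reconstructed:

    ```python
    import numpy as np
    import tensorly as tl
    from tensorly.cp_tensor import cp_to_tensor
    from tensorly.decomposition import parafac

    rng = np.random.default_rng(7)
    rank = 3
    factors_true = [rng.normal(size=(d, rank)) for d in (8, 50, 4)]
    T = cp_to_tensor((np.ones(rank), factors_true))        # noiseless CP tensor

    weights, factors = parafac(tl.tensor(T), rank=rank, normalize_factors=True)
    T_hat = cp_to_tensor((weights, factors))
    print(f"relative fit error: {np.linalg.norm(T - T_hat) / np.linalg.norm(T):.2e}")
    ```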

  17. Robust-mode analysis of hydrodynamic flows

    NASA Astrophysics Data System (ADS)

    Roy, Sukesh; Gord, James R.; Hua, Jia-Chen; Gunaratne, Gemunu H.

    2017-04-01

    The emergence of techniques to extract high-frequency high-resolution data introduces a new avenue for modal decomposition to assess the underlying dynamics, especially of complex flows. However, this task requires the differentiation of robust, repeatable flow constituents from noise and other irregular features of a flow. Traditional approaches involving low-pass filtering and principal component analysis have shortcomings. The approach outlined here, referred to as robust-mode analysis, is based on Koopman decomposition. Three applications to (a) a counter-rotating cellular flame state, (b) variations in financial markets, and (c) turbulent injector flows are provided.
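
    In practice, Koopman-based decompositions are typically computed from snapshot data via dynamic mode decomposition (DMD); robust-mode analysis adds a repeatability screening on top of such a decomposition. A minimal exact-DMD sketch (our own illustrative code):

    ```python
    import numpy as np

    def dmd(X, r):
        """Exact DMD of a snapshot matrix X (columns are time samples)."""
        X1, X2 = X[:, :-1], X[:, 1:]
        U, s, Vh = np.linalg.svd(X1, full_matrices=False)
        U, s, Vh = U[:, :r], s[:r], Vh[:r]
        A_tilde = U.conj().T @ X2 @ Vh.conj().T / s   # projected propagator
        eigvals, W = np.linalg.eig(A_tilde)
        modes = X2 @ Vh.conj().T / s @ W / eigvals    # exact DMD modes
        return eigvals, modes

    # toy data: one decaying traveling oscillation on 64 spatial points
    t = np.linspace(0, 10, 201)
    x = np.linspace(0, 1, 64)
    field = np.outer(np.exp(2j * np.pi * x), np.exp((-0.1 + 2j) * t)).real
    eigvals, _ = dmd(field, r=2)
    print("continuous-time eigenvalues:", np.log(eigvals) / (t[1] - t[0]))
    ```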

  18. Modular analysis of biological networks.

    PubMed

    Kaltenbach, Hans-Michael; Stelling, Jörg

    2012-01-01

    The analysis of complex biological networks has traditionally relied on decomposition into smaller, semi-autonomous units such as individual signaling pathways. With the increased scope of systems biology (models), rational approaches to modularization have become an important topic. With increasing acceptance of de facto modularity in biology, widely different definitions of what constitutes a module have sparked controversies. Here, we therefore review prominent classes of modular approaches based on formal network representations. Despite some promising research directions, several important theoretical challenges remain open on the way to formal, function-centered modular decompositions for dynamic biological networks.

  19. The predictive power of singular value decomposition entropy for stock market dynamics

    NASA Astrophysics Data System (ADS)

    Caraiani, Petre

    2014-01-01

    We use a correlation-based approach to analyze financial data from the US stock market, both daily and monthly observations from the Dow Jones. We compute the entropy based on the singular value decomposition of the correlation matrix for the components of the Dow Jones Industrial Index. Based on a moving window, we derive time varying measures of entropy for both daily and monthly data. We find that the entropy has a predictive ability with respect to stock market dynamics as indicated by the Granger causality tests.
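
    The quantity itself is compact: normalize the singular values of the correlation matrix into a distribution p and take H = -sum_i p_i ln p_i (the paper's exact normalization convention may differ). A sketch on synthetic return data:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    returns = rng.normal(size=(250, 30))          # days x stocks, synthetic
    C = np.corrcoef(returns, rowvar=False)        # component correlation matrix

    s = np.linalg.svd(C, compute_uv=False)
    p = s / s.sum()
    H = -np.sum(p * np.log(p))
    print(f"SVD entropy = {H:.3f} (maximum ln(30) = {np.log(30):.3f})")
    ```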

  20. Numeric Modified Adomian Decomposition Method for Power System Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dimitrovski, Aleksandar D; Simunovic, Srdjan; Pannala, Sreekanth

    This paper investigates the applicability of the numeric Wazwaz El Sayed modified Adomian Decomposition Method (WES-ADM) for time domain simulation of power systems. WES-ADM is a numerical method based on a modified Adomian decomposition (ADM) technique and provides a numerical approximation for the solution of nonlinear ordinary differential equations. The nonlinear terms in the differential equations are approximated using Adomian polynomials. In this paper WES-ADM is applied to time domain simulations of multimachine power systems. The WECC 3-generator, 9-bus system and the IEEE 10-generator, 39-bus system have been used to test the applicability of the approach, and several fault scenarios have been tested. It has been found that the proposed approach is faster than the trapezoidal method with comparable accuracy.

  1. Thermal Decomposition Behavior of Hydroxytyrosol (HT) in Nitrogen Atmosphere Based on TG-FTIR Methods.

    PubMed

    Tu, Jun-Ling; Yuan, Jiao-Jiao

    2018-02-13

    The thermal decomposition behavior of olive hydroxytyrosol (HT) was first studied using thermogravimetry (TG). Cracked chemical bonds and evolved gases during the thermal decomposition process of HT were also investigated using thermogravimetry coupled with infrared spectroscopy (TG-FTIR). Thermogravimetry-differential thermogravimetry (TG-DTG) curves revealed that the thermal decomposition of HT began at 262.8 °C and ended at 409.7 °C with a main mass loss. It was demonstrated that a high heating rate (over 20 K·min⁻¹) restrained the thermal decomposition of HT, resulting in an obvious thermal hysteresis. Furthermore, a thermal decomposition kinetics investigation of HT indicated that the non-isothermal decomposition mechanism was one-dimensional diffusion (D1), with integral form g(x) = x² and differential form f(x) = 1/(2x). Four combined approaches were employed to calculate the activation energy (E = 128.50 kJ·mol⁻¹) and the Arrhenius preexponential factor (ln A = 24.39 min⁻¹). In addition, a tentative mechanism of HT thermal decomposition was further developed. The results provide a theoretical reference for the potential thermal stability of HT.

  2. Low-rank matrix decomposition and spatio-temporal sparse recovery for STAP radar

    DOE PAGES

    Sen, Satyabrata

    2015-08-04

    We develop space-time adaptive processing (STAP) methods that leverage the advantages of sparse signal processing techniques in order to detect a slowly moving target. We observe that the inherent sparse characteristics of a STAP problem can be formulated as the low-rankness of the clutter covariance matrix when compared to the total adaptive degrees-of-freedom, and also as the sparse interference spectrum on the spatio-temporal domain. By exploiting these sparse properties, we propose two approaches for estimating the interference covariance matrix. In the first approach, we consider a constrained matrix rank minimization problem (RMP) to decompose the sample covariance matrix into a low-rank positive semidefinite matrix and a diagonal matrix. The solution of the RMP is obtained by applying the trace minimization technique and the singular value decomposition with a matrix shrinkage operator. Our second approach deals with the atomic norm minimization problem to recover the clutter response-vector that has a sparse support on the spatio-temporal plane. We use convex relaxation based standard sparse-recovery techniques to find the solutions. With extensive numerical examples, we demonstrate the performance of the proposed STAP approaches with respect to both ideal and practical scenarios, involving Doppler-ambiguous clutter ridges and spatial and temporal decorrelation effects. As a result, the low-rank matrix decomposition based solution requires as many secondary measurements as twice the clutter rank to attain near-ideal STAP performance, whereas the spatio-temporal sparsity based approach needs a considerably smaller number of secondary data.

  3. Variance decomposition in stochastic simulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le Maître, O. P., E-mail: olm@limsi.fr; Knio, O. M., E-mail: knio@duke.edu; Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  4. Set-Based Discrete Particle Swarm Optimization Based on Decomposition for Permutation-Based Multiobjective Combinatorial Optimization Problems.

    PubMed

    Yu, Xue; Chen, Wei-Neng; Gu, Tianlong; Zhang, Huaxiang; Yuan, Huaqiang; Kwong, Sam; Zhang, Jun

    2018-07-01

    This paper studies a specific class of multiobjective combinatorial optimization problems (MOCOPs), namely the permutation-based MOCOPs. Many common MOCOPs, e.g., the multiobjective traveling salesman problem (MOTSP) and the multiobjective project scheduling problem (MOPSP), belong to this class, and they can be very different. However, as permutation-based MOCOPs share the inherent similarity that the structure of their search space is usually in the shape of a permutation tree, this paper proposes a generic multiobjective set-based particle swarm optimization methodology based on decomposition, termed MS-PSO/D. To match the properties of permutation-based MOCOPs, MS-PSO/D utilizes an element-based representation and a constructive approach, through which feasible solutions under constraints can be generated step by step following the permutation-tree-shaped structure; problem-related heuristic information is introduced into the constructive approach for efficiency. To address the multiobjective optimization issues, a decomposition strategy is employed, in which the problem is converted into multiple single-objective subproblems according to a set of weight vectors. In addition, a flexible mechanism for diversity control is provided in MS-PSO/D. Extensive experiments have been conducted to study MS-PSO/D on two permutation-based MOCOPs, namely the MOTSP and the MOPSP. Experimental results validate that the proposed methodology is promising.

  5. Linear decomposition approach for a class of nonconvex programming problems.

    PubMed

    Shen, Peiping; Wang, Chunfeng

    2017-01-01

    This paper presents a linear decomposition approach for a class of nonconvex programming problems by dividing the input space into polynomially many grids. It shows that under certain assumptions the original problem can be transformed and decomposed into a polynomial number of equivalent linear programming subproblems. By solving a series of linear programming subproblems corresponding to those grid points, we can obtain a near-optimal solution of the original problem. Compared to existing results in the literature, the proposed algorithm does not require the assumptions of quasi-concavity and differentiability of the objective function, and it differs significantly from them, giving an interesting approach to solving the problem with a reduced running time.

  6. Tensor-based classification of an auditory mobile BCI without a subject-specific calibration phase

    NASA Astrophysics Data System (ADS)

    Zink, Rob; Hunyadi, Borbála; Van Huffel, Sabine; De Vos, Maarten

    2016-04-01

    Objective. One of the major drawbacks in EEG brain-computer interfaces (BCI) is the need for subject-specific training of the classifier. By removing the need for a supervised calibration phase, new users could potentially explore a BCI faster. In this work we aim to remove this subject-specific calibration phase and allow direct classification. Approach. We explore canonical polyadic decompositions and block term decompositions of the EEG. These methods exploit structure in higher dimensional data arrays called tensors. The BCI tensors are constructed by concatenating ERP templates from other subjects to a target and non-target trial and the inherent structure guides a decomposition that allows accurate classification. We illustrate the new method on data from a three-class auditory oddball paradigm. Main results. The presented approach leads to a fast and intuitive classification with accuracies competitive with a supervised and cross-validated LDA approach. Significance. The described methods are a promising new way of classifying BCI data with a forthright link to the original P300 ERP signal over the conventional and widely used supervised approaches.

  7. Augmented neural networks and problem structure-based heuristics for the bin-packing problem

    NASA Astrophysics Data System (ADS)

    Kasap, Nihat; Agarwal, Anurag

    2012-08-01

    In this article, we report on a research project where we applied augmented-neural-networks (AugNNs) approach for solving the classical bin-packing problem (BPP). AugNN is a metaheuristic that combines a priority rule heuristic with the iterative search approach of neural networks to generate good solutions fast. This is the first time this approach has been applied to the BPP. We also propose a decomposition approach for solving harder BPP, in which subproblems are solved using a combination of AugNN approach and heuristics that exploit the problem structure. We discuss the characteristics of problems on which such problem structure-based heuristics could be applied. We empirically show the effectiveness of the AugNN and the decomposition approach on many benchmark problems in the literature. For the 1210 benchmark problems tested, 917 problems were solved to optimality and the average gap between the obtained solution and the upper bound for all the problems was reduced to under 0.66% and computation time averaged below 33 s per problem. We also discuss the computational complexity of our approach.

  8. Determination of knock characteristics in spark ignition engines: an approach based on ensemble empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Li, Ning; Yang, Jianguo; Zhou, Rui; Liang, Caiping

    2016-04-01

    Knock is one of the major constraints to improving the performance and thermal efficiency of spark ignition (SI) engines. It can also result in severe permanent engine damage under certain operating conditions. Based on the ensemble empirical mode decomposition (EEMD), this paper proposes a new approach to determine the knock characteristics in SI engines. By adding uniformly distributed, finite-amplitude white Gaussian noise, the EEMD can preserve signal continuity across scales and therefore alleviates the mode-mixing problem occurring in the classic empirical mode decomposition (EMD). The feasibility of applying the EEMD to detect the knock signatures of a test SI engine via the pressure signal measured from the combustion chamber and the vibration signal measured from the cylinder head is investigated. Experimental results show that the EEMD-based method is able to detect the knock signatures from both the pressure signal and the vibration signal, even in the initial stage of knock. Finally, by comparing the application results with those obtained by the short-time Fourier transform (STFT), the Wigner-Ville distribution (WVD) and the discrete wavelet transform (DWT), the superiority of the EEMD method in determining knock characteristics is demonstrated.
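
    A hedged sketch of the EEMD step using the third-party PyEMD package (its availability and exact API are assumptions; the paper's knock-feature extraction is not reproduced): finite-amplitude white noise is added over an ensemble of trials and the resulting IMFs are averaged, which is what mitigates mode mixing.

    ```python
    import numpy as np
    from PyEMD import EEMD   # assumes the PyEMD (EMD-signal) package is installed

    fs = 20_000                                   # Hz, illustrative sampling rate
    t = np.arange(0, 0.05, 1 / fs)
    signal = np.sin(2 * np.pi * 300 * t)          # stand-in for cylinder pressure
    signal += 0.3 * np.sin(2 * np.pi * 6000 * t) * (t > 0.02)   # "knock" burst

    eemd = EEMD(trials=50, noise_width=0.05)      # noise-assisted ensemble EMD
    imfs = eemd.eemd(signal, t)
    print(f"{imfs.shape[0]} IMFs extracted; the high-frequency IMF carries the burst")
    ```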

  9. Rotational-path decomposition based recursive planning for spacecraft attitude reorientation

    NASA Astrophysics Data System (ADS)

    Xu, Rui; Wang, Hui; Xu, Wenming; Cui, Pingyuan; Zhu, Shengying

    2018-02-01

    Spacecraft reorientation is a common task in many space missions. With multiple pointing constraints, the constrained spacecraft reorientation planning problem is very difficult to solve. To deal with this problem, an efficient rotational-path decomposition based recursive planning (RDRP) method is proposed in this paper. A uniform pointing-constraint-ignored attitude rotation planning process is designed to solve all rotations without considering pointing constraints. The whole path is then checked node by node. If any pointing constraint is violated, the nearest critical increment approach is used to generate feasible alternative nodes in the process of rotational-path decomposition. As the planning path of each subdivision may still violate pointing constraints, multiple decompositions may be needed, and the reorientation planning is designed in a recursive manner. Simulation results demonstrate the effectiveness of the proposed method, which has been successfully applied to solve the onboard constrained attitude reorientation planning problem in two SPARK microsatellites, developed by the Shanghai Engineering Center for Microsatellites and launched on 22 December 2016.

  10. A Graph Based Backtracking Algorithm for Solving General CSPs

    NASA Technical Reports Server (NTRS)

    Pang, Wanlin; Goodwin, Scott D.

    2003-01-01

    Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts contributed to development of a class of CSP solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph based backtracking algorithm called omega-CDBT, which shares merits and overcomes the weaknesses of both decomposition and search approaches.

  11. Nonlinear viscoelastic characterization of human vocal fold tissues under large-amplitude oscillatory shear (LAOS)

    PubMed Central

    Chan, Roger W.

    2018-01-01

    Viscoelastic shear properties of human vocal fold tissues were previously quantified by the shear moduli (G′ and G″). Yet these small-strain linear measures were unable to describe any nonlinear tissue behavior. This study attempted to characterize the nonlinear viscoelastic response of the vocal fold lamina propria under large-amplitude oscillatory shear (LAOS) with a stress decomposition approach. Human vocal fold cover and vocal ligament specimens from eight subjects were subjected to LAOS rheometric testing with a simple-shear rheometer. The empirical total stress response was decomposed into elastic and viscous stress components, based on odd-integer harmonic decomposition approach with Fourier transform. Nonlinear viscoelastic measures derived from the decomposition were plotted in Pipkin space and as rheological fingerprints to observe the onset of nonlinearity and the type of nonlinear behavior. Results showed that both the vocal fold cover and the vocal ligament experienced intercycle strain softening, intracycle strain stiffening, as well as shear thinning both intercycle and intracycle. The vocal ligament appeared to demonstrate an earlier onset of nonlinearity at phonatory frequencies, and higher sensitivity to changes in frequency and strain. In summary, the stress decomposition approach provided much better insights into the nonlinear viscoelastic behavior of the vocal fold lamina propria than the traditional linear measures. PMID:29780189

  13. Impact of joint statistical dual-energy CT reconstruction of proton stopping power images: Comparison to image- and sinogram-domain material decomposition approaches.

    PubMed

    Zhang, Shuangyue; Han, Dong; Politte, David G; Williamson, Jeffrey F; O'Sullivan, Joseph A

    2018-05-01

    The purpose of this study was to assess the performance of a novel dual-energy CT (DECT) approach for proton stopping power ratio (SPR) mapping that integrates image reconstruction and material characterization using a joint statistical image reconstruction (JSIR) method based on a linear basis vector model (BVM). A systematic comparison between the JSIR-BVM method and previously described DECT image- and sinogram-domain decomposition approaches is also carried out on synthetic data. The JSIR-BVM method was implemented to estimate the electron densities and mean excitation energies (I-values) required by the Bethe equation for SPR mapping. In addition, image- and sinogram-domain DECT methods based on three available SPR models, including BVM, were implemented for comparison. The intrinsic SPR modeling accuracy of the three models was first validated. Synthetic DECT transmission sinograms of two 330 mm diameter phantoms, each containing 17 soft and bony tissues (for a total of 34) of known composition, were then generated with spectra of 90 and 140 kVp. The estimation accuracy of the reconstructed SPR images was evaluated for the seven investigated methods. The impact of phantom size and insert location on SPR estimation accuracy was also investigated. All three selected DECT-SPR models predict the SPR of all tissue types with RMS errors of less than 0.2% under idealized conditions with no reconstruction uncertainties. When applied to synthetic sinograms, the JSIR-BVM method achieves the best performance, with mean and RMS-average errors of less than 0.05% and 0.3%, respectively, for all noise levels, while the image- and sinogram-domain decomposition methods show increasing mean and RMS-average errors with increasing noise level. The JSIR-BVM method also reduces statistical SPR variation by sixfold compared to the other methods. A 25% change in phantom diameter causes SPR differences of up to 4% for the image-domain decomposition approach, while the JSIR-BVM and sinogram-domain decomposition methods are insensitive to the size change. Among all the investigated methods, the JSIR-BVM method achieves the best performance for SPR estimation in our simulation phantom study. This novel method is robust with respect to sinogram noise and residual beam-hardening effects, yielding SPR estimation errors comparable to the intrinsic BVM modeling error. In contrast, the achievable SPR estimation accuracy of the image- and sinogram-domain decomposition methods is dominated by the CT image intensity uncertainties introduced by the reconstruction and decomposition processes. © 2018 American Association of Physicists in Medicine.

  14. A general framework of noise suppression in material decomposition for dual-energy CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petrongolo, Michael; Dong, Xue; Zhu, Lei, E-mail: leizhu@gatech.edu

    Purpose: As a general problem of dual-energy CT (DECT), noise amplification in material decomposition severely reduces the signal-to-noise ratio on the decomposed images compared to that on the original CT images. In this work, the authors propose a general framework of noise suppression in material decomposition for DECT. The method is based on an iterative algorithm recently developed in their group for image-domain decomposition of DECT, with an extension to include nonlinear decomposition models. The generalized framework of iterative DECT decomposition enables beam-hardening correction with simultaneous noise suppression, which improves the clinical benefits of DECT. Methods: The authors propose to suppress noise on the decomposed images of DECT using convex optimization, which is formulated in the form of least-squares estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance–covariance matrix of the decomposed images as the penalty weight in the least-squares term. Analytical formulas are derived to compute the variance–covariance matrix for decomposed images with general-form numerical or analytical decomposition. As a demonstration, the authors implement the proposed algorithm on phantom data using an empirical polynomial function of decomposition measured on a calibration scan. The polynomial coefficients are determined from the projection data acquired on a wedge phantom, and the signal decomposition is performed in the projection domain. Results: On the Catphan®600 phantom, the proposed noise suppression method reduces the average noise standard deviation of basis material images by one to two orders of magnitude, with a superior performance on spatial resolution as shown in comparisons of line-pair images and modulation transfer function measurements. On the synthesized monoenergetic CT images, the noise standard deviation is reduced by a factor of 2–3. By using nonlinear decomposition on projections, the authors’ method effectively suppresses the streaking artifacts of beam hardening and obtains more uniform images than their previous approach based on a linear model. Similar performance of noise suppression is observed in the results of an anthropomorphic head phantom and a pediatric chest phantom generated by the proposed method. With beam-hardening correction enabled by their approach, the image spatial nonuniformity on the head phantom is reduced from around 10% on the original CT images to 4.9% on the synthesized monoenergetic CT image. On the pediatric chest phantom, their method suppresses image noise standard deviation by a factor of around 7.5, and compared with linear decomposition, it reduces the estimation error of electron densities from 33.3% to 8.6%. Conclusions: The authors propose a general framework of noise suppression in material decomposition for DECT. Phantom studies have shown the proposed method improves the image uniformity and the accuracy of electron density measurements by effective beam-hardening correction and reduces noise level without noticeable resolution loss.

  15. Power System Decomposition for Practical Implementation of Bulk-Grid Voltage Control Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vallem, Mallikarjuna R.; Vyakaranam, Bharat GNVSR; Holzer, Jesse T.

    Power system algorithms such as AC optimal power flow and coordinated volt/var control of the bulk power system are computationally intensive and become difficult to solve in operational time frames. The computational time required to run these algorithms increases exponentially as the size of the power system increases. The solution time for multiple subsystems is less than that for solving the entire system simultaneously, and the local nature of the voltage problem lends itself to such decomposition. This paper describes an algorithm that can be used to perform power system decomposition from the point of view of the voltage control problem. Our approach takes advantage of the dominant localized effect of voltage control and is based on clustering buses according to the electrical distances between them. One contribution of the paper is the use of multidimensional scaling to compute n-dimensional Euclidean coordinates for each bus from electrical distances, so that algorithms such as K-means clustering can be applied. A simple coordinated reactive power control of photovoltaic inverters for voltage regulation is used to demonstrate the effectiveness of the proposed decomposition algorithm and its components. The proposed decomposition method is demonstrated on the IEEE 118-bus system.
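
    The clustering ingredient can be sketched with scikit-learn: embed the buses into Euclidean coordinates from a symmetric electrical-distance matrix via multidimensional scaling, then run K-means on the coordinates. The distance matrix below is random stand-in data, not an actual power-flow computation:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.manifold import MDS

    rng = np.random.default_rng(9)
    pts = rng.normal(size=(40, 2))                            # hidden "bus" geometry
    D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)  # electrical distances

    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(D)             # Euclidean embedding
    zones = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(coords)
    print("bus-to-zone assignment:", zones)
    ```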

  16. Three geographic decomposition approaches in transportation network analysis

    DOT National Transportation Integrated Search

    1980-03-01

    This document describes the results of research into the application of geographic decomposition techniques to practical transportation network problems. Three approaches are described for the solution of the traffic assignment problem. One approach ...

  17. Optimal cost design of water distribution networks using a decomposition approach

    NASA Astrophysics Data System (ADS)

    Lee, Ho Min; Yoo, Do Guen; Sadollah, Ali; Kim, Joong Hoon

    2016-12-01

    Water distribution network decomposition, which is an engineering approach, is adopted to increase the efficiency of obtaining the optimal cost design of a water distribution network using an optimization algorithm. This study applied the source tracing tool in EPANET, a hydraulic and water quality analysis model, to the decomposition of a network to improve the efficiency of the optimal design process. The proposed approach was tested by carrying out the optimal cost design of two water distribution networks, and the results were compared with other optimal cost designs derived from previously proposed optimization algorithms. The proposed decomposition approach using the source tracing technique enables the efficient decomposition of an actual large-scale network, and the results can be combined with the optimal cost design process using an optimization algorithm. The results show that the final design in this study is better than those obtained with other previously proposed optimization algorithms.

  18. A Thermodynamically Consistent Approach to Phase-Separating Viscous Fluids

    NASA Astrophysics Data System (ADS)

    Anders, Denis; Weinberg, Kerstin

    2018-04-01

    The de-mixing properties of heterogeneous viscous fluids are determined by an interplay of diffusion, surface tension and a superposed velocity field. In this contribution a variational model of the decomposition, based on the Navier-Stokes equations for incompressible laminar flow and the extended Korteweg-Cahn-Hilliard equations, is formulated. An exemplary numerical simulation using C¹-continuous finite elements demonstrates the capability of this model to compute phase decomposition and coarsening of the moving fluid.

  19. Direct water decomposition on transition metal surfaces: Structural dependence and catalytic screening

    DOE PAGES

    Tsai, Charlie; Lee, Kyoungjin; Yoo, Jong Suk; ...

    2016-02-16

    Density functional theory calculations are used to investigate thermal water decomposition over the close-packed (111), stepped (211), and open (100) facets of transition metal surfaces. A descriptor-based approach is used to determine that the (211) facet leads to the highest possible rates. On this basis, a set of 96 binary alloys was screened for potential activity, and a rate control analysis was performed to assess how the overall rate could be improved.

  20. Measuring Prices in Health Care Markets Using Commercial Claims Data.

    PubMed

    Neprash, Hannah T; Wallace, Jacob; Chernew, Michael E; McWilliams, J Michael

    2015-12-01

    To compare methods of price measurement in health care markets. Truven Health Analytics MarketScan commercial claims. We constructed medical prices indices using three approaches: (1) a "sentinel" service approach based on a single common service in a specific clinical domain, (2) a market basket approach, and (3) a spending decomposition approach. We constructed indices at the Metropolitan Statistical Area level and estimated correlations between and within them. Price indices using a spending decomposition approach were strongly and positively correlated with indices constructed from broad market baskets of common services (r > 0.95). Prices of single common services exhibited weak to moderate correlations with each other and other measures. Market-level price measures that reflect broad sets of services are likely to rank markets similarly. Price indices relying on individual sentinel services may be more appropriate for examining specialty- or service-specific drivers of prices. © Health Research and Educational Trust.

  1. Decomposition of Proteins into Dynamic Units from Atomic Cross-Correlation Functions.

    PubMed

    Calligari, Paolo; Gerolin, Marco; Abergel, Daniel; Polimeno, Antonino

    2017-01-10

    In this article, we present a method for clustering atoms in proteins based on the analysis of the correlation times of interatomic distance correlation functions computed from MD simulations. The goal is to provide a coarse-grained description of the protein in terms of fewer elements that can be treated as dynamically independent subunits. Importantly, this domain decomposition method does not take structural properties of the protein into account. Instead, the clustering of protein residues into networks of dynamically correlated domains is defined on the basis of the effective correlation times of the pair distance correlation functions. In this respect, our method provides a complementary analysis to the customary protein decomposition into quasi-rigid, structure-based domains. Results obtained for a prototypical protein structure illustrate the proposed approach.

  2. Constrained reduced-order models based on proper orthogonal decomposition

    DOE PAGES

    Reddy, Sohail R.; Freno, Brian Andrew; Cizmas, Paul G. A.; ...

    2017-04-09

    A novel approach is presented to constrain reduced-order models (ROM) based on proper orthogonal decomposition (POD). The Karush–Kuhn–Tucker (KKT) conditions were applied to the traditional reduced-order model to constrain the solution to user-defined bounds. The constrained reduced-order model (C-ROM) was applied and validated against the analytical solution to the first-order wave equation. C-ROM was also applied to the analysis of fluidized beds. Lastly, it was shown that the ROM and C-ROM produced accurate results and that C-ROM was less sensitive to error propagation through time than the ROM.
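
    A rough, minimal sketch of the POD step that underlies such reduced-order models (the KKT-based constraining itself is not reproduced): the modes are the leading left singular vectors of a snapshot matrix. The function name and the traveling-wave snapshots below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def pod_basis(snapshots, n_modes):
    """Compute a POD basis from a snapshot matrix.

    snapshots: (n_dof, n_snapshots) array whose columns are solution states.
    Returns the leading n_modes left singular vectors (POD modes) and the
    corresponding singular values (mode energies).
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    return U[:, :n_modes], s[:n_modes]

# Illustrative snapshots of a traveling wave (cf. the first-order wave equation)
x = np.linspace(0.0, 1.0, 200)
snaps = np.stack([np.sin(2 * np.pi * (x - 0.01 * t)) for t in range(50)], axis=1)
modes, energies = pod_basis(snaps, n_modes=4)
print(energies / energies.sum())  # relative energy captured by each mode
```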

  3. Linear stability analysis of detonations via numerical computation and dynamic mode decomposition

    NASA Astrophysics Data System (ADS)

    Kabanov, Dmitry I.; Kasimov, Aslan R.

    2018-03-01

    We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.
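
    For readers unfamiliar with the dynamic mode decomposition step, below is a minimal exact-DMD sketch assuming the computed solution snapshots are stacked column-wise; eigenvalues with positive real part then indicate growing (unstable) modes. The function name, rank truncation, and data layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dmd_eigenvalues(X, r, dt):
    """Exact DMD: estimate the eigenvalues of the linear operator that maps
    snapshot X[:, k] to X[:, k+1], truncated to rank r."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh.conj().T[:, :r]
    A_tilde = U.conj().T @ X2 @ V / s   # r x r reduced operator (S^-1 applied column-wise)
    mu = np.linalg.eigvals(A_tilde)     # discrete-time eigenvalues
    return np.log(mu) / dt              # continuous-time: Re = growth rate, Im = frequency
```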

  4. Multiscale Characterization of PM2.5 in Southern Taiwan based on Noise-assisted Multivariate Empirical Mode Decomposition and Time-dependent Intrinsic Correlation

    NASA Astrophysics Data System (ADS)

    Hsiao, Y. R.; Tsai, C.

    2017-12-01

    As the WHO Air Quality Guideline indicates, ambient air pollution exposes world populations to the threat of fatal diseases (e.g., heart disease, lung cancer, asthma), raising concerns about air pollution sources and related factors. This study presents a novel approach to investigating the multiscale variations of PM2.5 in southern Taiwan over the past decade, together with four influencing meteorological factors (temperature, relative humidity, precipitation and wind speed), based on the Noise-assisted Multivariate Empirical Mode Decomposition (NAMEMD) algorithm, Hilbert Spectral Analysis (HSA) and the Time-dependent Intrinsic Correlation (TDIC) method. The NAMEMD algorithm is a fully data-driven approach designed for nonlinear and nonstationary multivariate signals, and is used to decompose multivariate signals into a collection of channels of Intrinsic Mode Functions (IMFs). The TDIC method is an EMD-based method that uses a set of sliding window sizes to quantify localized correlation coefficients for multiscale signals. With the alignment property and quasi-dyadic filter bank of the NAMEMD algorithm, one is able to produce the same number of IMFs for all variables and estimate the cross-correlation more accurately. The spectral representation obtained with the NAMEMD-HSA method is compared with Complementary Ensemble Empirical Mode Decomposition/Hilbert Spectral Analysis (CEEMD-HSA) and wavelet analysis, and the NAMEMD-based TDIC analysis is compared with CEEMD-based TDIC analysis and traditional correlation analysis.
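
    The TDIC idea can be pictured as a windowed correlation between a pair of aligned IMFs. The sketch below uses a single fixed window for simplicity; the actual TDIC method adapts a set of window sizes to the instantaneous periods of the IMFs.

```python
import numpy as np

def sliding_correlation(imf_a, imf_b, window):
    """Pearson correlation between two aligned IMFs inside a sliding
    window centered at each sample (a simplified TDIC-style measure)."""
    half = window // 2
    n = len(imf_a)
    corr = np.full(n, np.nan)
    for t in range(half, n - half):
        a = imf_a[t - half:t + half + 1]
        b = imf_b[t - half:t + half + 1]
        corr[t] = np.corrcoef(a, b)[0, 1]
    return corr
```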

  5. Quantifying polymer deformation in viscoelastic turbulence: the geometric decomposition and a Riemannian approach to scalar measures

    NASA Astrophysics Data System (ADS)

    Hameduddin, Ismail; Meneveau, Charles; Zaki, Tamer; Gayme, Dennice

    2017-11-01

    We develop a new framework to quantify the fluctuating behaviour of the conformation tensor in viscoelastic turbulent flows. This framework addresses two shortcomings of the classical approach based on Reynolds decomposition: the fluctuating part of the conformation tensor is not guaranteed to be positive definite and it does not consistently represent polymer expansions and contractions about the mean. Our approach employs a geometric decomposition that yields a positive-definite fluctuating conformation tensor with a clear physical interpretation as a deformation to the mean conformation. We propose three scalar measures of this fluctuating conformation tensor, which respect the non-Euclidean Riemannian geometry of the manifold of positive-definite tensors: fluctuating polymer volume, geodesic distance from the mean, and an anisotropy measure. We use these scalar quantities to investigate drag-reduced viscoelastic turbulent channel flow. Our approach establishes a systematic method to study viscoelastic turbulence. It also uncovers interesting phenomena that are not apparent using traditional analysis tools, including a logarithmic decrease in anisotropy of the mean conformation tensor away from the wall and polymer fluctuations peaking beyond the buffer layer. This work has been partially funded by the following NSF Grants: CBET-1652244, OCE-1633124, CBET-1511937.
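
    As a hedged illustration of the Riemannian viewpoint, the affine-invariant geodesic distance between the mean conformation tensor and an instantaneous one reduces to a log-eigenvalue formula; the helper name and example tensors below are ours, not the authors' code.

```python
import numpy as np
from scipy.linalg import eigh

def geodesic_distance(C_mean, C):
    """Affine-invariant distance ||log(C_mean^{-1/2} C C_mean^{-1/2})||_F,
    computed from the generalized eigenvalues of the pair (C, C_mean)."""
    lam = eigh(C, C_mean, eigvals_only=True)  # all positive for SPD inputs
    return np.sqrt(np.sum(np.log(lam) ** 2))

# Illustrative 3x3 SPD conformation tensors
A = np.random.randn(3, 3)
B = np.random.randn(3, 3)
C_mean = A @ A.T + 3.0 * np.eye(3)
C_inst = B @ B.T + 3.0 * np.eye(3)
print(geodesic_distance(C_mean, C_inst))
```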

  6. Tensor-based classification of an auditory mobile BCI without a subject-specific calibration phase.

    PubMed

    Zink, Rob; Hunyadi, Borbála; Huffel, Sabine Van; Vos, Maarten De

    2016-04-01

    One of the major drawbacks in EEG brain-computer interfaces (BCI) is the need for subject-specific training of the classifier. By removing the need for a supervised calibration phase, new users could potentially explore a BCI faster. In this work we aim to remove this subject-specific calibration phase and allow direct classification. We explore canonical polyadic decompositions and block term decompositions of the EEG. These methods exploit structure in higher dimensional data arrays called tensors. The BCI tensors are constructed by concatenating ERP templates from other subjects to a target and non-target trial and the inherent structure guides a decomposition that allows accurate classification. We illustrate the new method on data from a three-class auditory oddball paradigm. The presented approach leads to a fast and intuitive classification with accuracies competitive with a supervised and cross-validated LDA approach. The described methods are a promising new way of classifying BCI data with a forthright link to the original P300 ERP signal over the conventional and widely used supervised approaches.
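
    A minimal sketch of the tensor idea using the tensorly library's CP (canonical polyadic) routine on a hypothetical (slices x channels x time) array in which ERP templates are concatenated with the unlabeled trial; the shapes, rank, and scoring rule are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Hypothetical tensor: 5 subject templates + 1 unlabeled trial,
# each a (channels x time) slice.
tensor = tl.tensor(np.random.randn(6, 32, 200))
weights, factors = parafac(tensor, rank=3)

slice_loadings = factors[0]  # loading of each slice on each component
# Comparing the trial's loadings (last row) with the template loadings
# yields a calibration-free decision score, e.g. similarity to template 0:
score = slice_loadings[-1] @ slice_loadings[0]
```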

  7. A fast new algorithm for a robot neurocontroller using inverse QR decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, A.S.; Khemaissia, S.

    2000-01-01

    A new adaptive neural network controller for robots is presented. The controller is based on direct adaptive techniques. Unlike many neural network controllers in the literature, inverse dynamical model evaluation is not required. A numerically robust, computationally efficient processing scheme for neural network weight estimation is described, namely, the inverse QR decomposition (INVQR). The inverse QR decomposition and a weighted recursive least-squares (WRLS) method for neural network weight estimation are derived using Cholesky factorization of the data matrix. The algorithm that performs the efficient INVQR of the underlying space-time data matrix may be implemented in parallel on a triangular array, and its systolic architecture is well suited for VLSI implementation. Another important benefit of the INVQR decomposition is that it solves directly for the time-recursive least-squares filter vector, while avoiding the sequential back-substitution step required by QR decomposition approaches.

  8. Circular Mixture Modeling of Color Distribution for Blind Stain Separation in Pathology Images.

    PubMed

    Li, Xingyu; Plataniotis, Konstantinos N

    2017-01-01

    In digital pathology, to address color variation and histological component colocalization in pathology images, stain decomposition is usually performed preceding spectral normalization and tissue component segmentation. This paper examines the problem of stain decomposition, which is naturally a nonnegative matrix factorization (NMF) problem in algebra, and introduces a systematic and analytical solution consisting of a circular color analysis module and an NMF-based computation module. Unlike the paradigm of existing stain decomposition algorithms, where stain proportions are computed from estimated stain spectra using a matrix inverse operation directly, the introduced solution estimates stain spectra and stain depths via probabilistic reasoning individually. Since the proposed method pays extra attention to achromatic pixels in color analysis and stain co-occurrence in pixel clustering, it achieves consistent and reliable stain decomposition with minimum decomposition residue. In particular, aware of the periodic and angular nature of hue, we propose the use of a circular von Mises mixture model to analyze the hue distribution, and provide a complete color-based pixel soft-clustering solution to address the color mixing introduced by stain overlap. This innovation, combined with saturation-weighted computation, makes our study effective for weak stains and broad-spectrum stains. Extensive experimentation on multiple public pathology datasets suggests that our approach outperforms state-of-the-art blind stain separation methods in terms of decomposition effectiveness.
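
    As a baseline illustration of the NMF view of stain separation (not the paper's circular von Mises solution), pixels can be factorized in optical-density space, where the Beer-Lambert law makes absorbance approximately linear and nonnegative in stain depth; the parameter choices below are assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

def separate_stains(rgb_image, n_stains=2):
    """Blind stain separation as NMF in optical-density (OD) space:
    OD = -log(I / I0) ~ depths x spectra, both nonnegative."""
    I = rgb_image.reshape(-1, 3).astype(float)
    od = -np.log((I + 1.0) / 256.0)  # +1 avoids log(0)
    model = NMF(n_components=n_stains, init='nndsvd', max_iter=500)
    depths = model.fit_transform(od)    # per-pixel stain depths
    spectra = model.components_         # per-stain OD color vectors
    return depths.reshape(rgb_image.shape[:2] + (n_stains,)), spectra
```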

  9. Decomposition and extraction: a new framework for visual classification.

    PubMed

    Fang, Yuqiang; Chen, Qiang; Sun, Lin; Dai, Bin; Yan, Shuicheng

    2014-08-01

    In this paper, we present a novel framework for visual classification based on hierarchical image decomposition and hybrid midlevel feature extraction. Unlike most midlevel feature learning methods, which focus on the process of coding or pooling, we emphasize that the mechanism of image composition also strongly influences feature extraction. To effectively explore the image content for feature extraction, we model a multiplicity feature representation mechanism through meaningful hierarchical image decomposition followed by a fusion step. In particular, we first propose a new hierarchical image decomposition approach in which each image is decomposed into a series of hierarchical semantic components, i.e., the structure and texture images. Then, different feature extraction schemes can be adopted to match the decomposed structure and texture processes in a dissociative manner. Here, two schemes are explored to produce property-related feature representations. One is based on a single-stage network over hand-crafted features and the other is based on a multistage network, which can learn features from raw pixels automatically. Finally, those multiple midlevel features are incorporated by solving a multiple kernel learning task. Extensive experiments are conducted on several challenging data sets for visual classification, and experimental results demonstrate the effectiveness of the proposed method.

  10. Introducing the Improved Heaviside Approach to Partial Fraction Decomposition to Undergraduate Students: Results and Implications from a Pilot Study

    ERIC Educational Resources Information Center

    Man, Yiu-Kwong

    2012-01-01

    Partial fraction decomposition is a useful technique often taught at senior secondary or undergraduate levels to handle integrations, inverse Laplace transforms or linear ordinary differential equations, etc. In recent years, an improved Heaviside's approach to partial fraction decomposition was introduced and developed by the author. An important…

  11. Interacting Microbe and Litter Quality Controls on Litter Decomposition: A Modeling Analysis

    PubMed Central

    Moorhead, Daryl; Lashermes, Gwenaëlle; Recous, Sylvie; Bertrand, Isabelle

    2014-01-01

    The decomposition of plant litter in soil is a dynamic process during which substrate chemistry and microbial controls interact. We more clearly quantify these controls with a revised version of the Guild-based Decomposition Model (GDM) in which we used a reverse Michaelis-Menten approach to simulate short-term (112 days) decomposition of roots from four genotypes of Zea mays that differed primarily in lignin chemistry. A co-metabolic relationship between the degradation of lignin and holocellulose (cellulose+hemicellulose) fractions of litter showed that the reduction in decay rate with increasing lignin concentration (LCI) was related to the level of arabinan substitutions in arabinoxylan chains (i.e., arabinan to xylan or A:X ratio) and the extent to which hemicellulose chains are cross-linked with lignin in plant cell walls. This pattern was consistent between genotypes and during progressive decomposition within each genotype. Moreover, decay rates were controlled by these cross-linkages from the start of decomposition. We also discovered it necessary to divide the Van Soest soluble (labile) fraction of litter C into two pools: one that rapidly decomposed and a second that was more persistent. Simulated microbial production was consistent with recent studies suggesting that more rapidly decomposing materials can generate greater amounts of potentially recalcitrant microbial products despite the rapid loss of litter mass. Sensitivity analyses failed to identify any model parameter that consistently explained a large proportion of model variation, suggesting that feedback controls between litter quality and microbial activity in the reverse Michaelis-Menten approach resulted in stable model behavior. Model extrapolations to an independent set of data, derived from the decomposition of 12 different genotypes of maize roots, averaged within <3% of observed respiration rates and total CO2 efflux over 112 days. PMID:25264895
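
    The reverse Michaelis-Menten kinetics at the core of the revised GDM can be sketched in a few lines: the decay rate saturates with microbial biomass rather than with substrate. The parameter values below are placeholders, not those calibrated in the study.

```python
def reverse_mm_step(C, M, v_max, k_m, dt):
    """One explicit Euler step of reverse Michaelis-Menten decay:
    dC/dt = -v_max * C * M / (k_m + M), saturating in biomass M."""
    return C + (-v_max * C * M / (k_m + M)) * dt

# Illustrative 112-day run with constant biomass (placeholder values)
C, M = 100.0, 5.0
for day in range(112):
    C = reverse_mm_step(C, M, v_max=0.02, k_m=10.0, dt=1.0)
print(f"substrate remaining after 112 days: {C:.1f}")
```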

  12. A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis

    NASA Astrophysics Data System (ADS)

    Jokhio, G. A.; Izzuddin, B. A.

    2015-05-01

    This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.

  13. SOI layout decomposition for double patterning lithography on high-performance computer platforms

    NASA Astrophysics Data System (ADS)

    Verstov, Vladimir; Zinchenko, Lyudmila; Makarchuk, Vladimir

    2014-12-01

    In this paper, silicon-on-insulator layout decomposition algorithms for double patterning lithography on high-performance computing platforms are discussed. Our approach is based on the use of a contradiction graph and a modified concurrent breadth-first search algorithm. We evaluate our technique on the 45 nm Nangate Open Cell Library, including non-Manhattan geometry. Experimental results show that our soft computing algorithms decompose the layout successfully and increase the minimal distance between polygons in the layout.
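
    At its core, double-patterning decomposition is a two-coloring of the conflict (contradiction) graph, which a breadth-first search handles directly, as sketched below under simplifying assumptions; the paper's concurrent, non-Manhattan-aware algorithm is more involved.

```python
from collections import deque

def decompose_layout(n_polygons, conflict_edges):
    """Assign each polygon to one of two masks so that no two polygons
    closer than the minimum pitch (a conflict edge) share a mask:
    BFS 2-coloring of the conflict graph. Returns None when an odd
    conflict cycle makes the layout undecomposable without stitching."""
    adj = [[] for _ in range(n_polygons)]
    for u, v in conflict_edges:
        adj[u].append(v)
        adj[v].append(u)
    mask = [None] * n_polygons
    for start in range(n_polygons):
        if mask[start] is not None:
            continue
        mask[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if mask[v] is None:
                    mask[v] = 1 - mask[u]
                    queue.append(v)
                elif mask[v] == mask[u]:
                    return None  # conflict cannot be resolved with two masks
    return mask

print(decompose_layout(4, [(0, 1), (1, 2), (2, 3)]))  # -> [0, 1, 0, 1]
```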

  14. Basic research in evolution and ecology enhances forensics.

    PubMed

    Tomberlin, Jeffery K; Benbow, M Eric; Tarone, Aaron M; Mohr, Rachel M

    2011-02-01

    In 2009, the National Research Council recommended that the forensic sciences strengthen their grounding in basic empirical research to mitigate against criticism and improve accuracy and reliability. For DNA-based identification, this goal was achieved under the guidance of the population genetics community. This effort resulted in DNA analysis becoming the 'gold standard' of the forensic sciences. Elsewhere, we proposed a framework for streamlining research in decomposition ecology, which promotes quantitative approaches to collecting and applying data to forensic investigations involving decomposing human remains. To extend the ecological aspects of this approach, this review focuses on forensic entomology, although the framework can be extended to other areas of decomposition. Published by Elsevier Ltd.

  15. Automatic image enhancement based on multi-scale image decomposition

    NASA Astrophysics Data System (ADS)

    Feng, Lu; Wu, Zhuangzhi; Pei, Luo; Long, Xiong

    2014-01-01

    In image processing and computational photography, automatic image enhancement is one of the long-range objectives. Recent automatic image enhancement methods take into account not only global semantics, such as correcting color hue and brightness imbalances, but also the local content of the image, such as human faces or the sky in a landscape. In this paper we describe a new scheme for automatic image enhancement that considers both the global semantics and the local content of the image. Our method employs a multi-scale edge-aware image decomposition approach to detect underexposed regions and enhance the detail of the salient content. The experimental results demonstrate the effectiveness of our approach compared to existing automatic enhancement methods.

  16. Spatial, temporal, and hybrid decompositions for large-scale vehicle routing with time windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, Russell W

    This paper studies the use of decomposition techniques to quickly find high-quality solutions to large-scale vehicle routing problems with time windows. It considers an adaptive decomposition scheme which iteratively decouples a routing problem based on the current solution. Earlier work considered vehicle-based decompositions that partition the vehicles across the subproblems. The subproblems can then be optimized independently and merged easily. This paper argues that vehicle-based decompositions, although very effective on various problem classes, also have limitations. In particular, they do not accommodate temporal decompositions and may produce spatial decompositions that are not focused enough. This paper then proposes customer-based decompositions, which generalize vehicle-based decouplings and allow for focused spatial and temporal decompositions. Experimental results on class R2 of the extended Solomon benchmarks demonstrate the benefits of the customer-based adaptive decomposition scheme and its spatial, temporal, and hybrid instantiations. In particular, they show that customer-based decompositions bring significant benefits over large neighborhood search, in contrast to vehicle-based decompositions.

  17. Fault identification of rotor-bearing system based on ensemble empirical mode decomposition and self-zero space projection analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Fan; Zhu, Zhencai; Li, Wei; Zhou, Gongbo; Chen, Guoan

    2014-07-01

    Accurately identifying faults in rotor-bearing systems by analyzing vibration signals, which are nonlinear and nonstationary, is challenging. To address this issue, a new approach based on ensemble empirical mode decomposition (EEMD) and self-zero space projection analysis is proposed in this paper. This method seeks to identify faults appearing in a rotor-bearing system using simple algebraic calculations and projection analyses. First, EEMD is applied to decompose the collected vibration signals into a set of intrinsic mode functions (IMFs) for feature extraction. Second, these extracted features under various mechanical health conditions are used to design a self-zero space matrix according to space projection analysis. Finally, the so-called projection indicators are calculated to identify the rotor-bearing system's faults with simple decision logic. Experiments are implemented to test the reliability and effectiveness of the proposed approach. The results show that this approach can accurately identify faults in rotor-bearing systems.
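
    A minimal sketch of the EEMD feature-extraction stage, assuming the PyEMD package (installed as EMD-signal); the synthetic vibration signal, ensemble settings, and energy-share feature are illustrative stand-ins for the measured signals and feature design used in the paper.

```python
import numpy as np
from PyEMD import EEMD  # pip install EMD-signal

t = np.linspace(0.0, 1.0, 2048)
vibration = (np.sin(2 * np.pi * 35 * t)
             + 0.5 * np.sin(2 * np.pi * 160 * t)
             + 0.2 * np.random.randn(t.size))

eemd = EEMD(trials=100, noise_width=0.2)
imfs = eemd.eemd(vibration, t)

# One simple feature per IMF: its share of the total signal energy
energy = np.array([np.sum(imf ** 2) for imf in imfs])
features = energy / energy.sum()
```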

  18. Use of the Morlet mother wavelet in the frequency-scale domain decomposition technique for the modal identification of ambient vibration responses

    NASA Astrophysics Data System (ADS)

    Le, Thien-Phu

    2017-10-01

    The frequency-scale domain decomposition technique has recently been proposed for operational modal analysis. The technique is based on the Cauchy mother wavelet. In this paper, the approach is extended to the Morlet mother wavelet, which is very popular in signal processing due to its superior time-frequency localization. Based on the regressive form and an appropriate norm of the Morlet mother wavelet, the continuous wavelet transform of the power spectral density of ambient responses enables modes in the frequency-scale domain to be highlighted. Analytical developments first demonstrate the link between modal parameters and the local maxima of the continuous wavelet transform modulus. The link formula is then used as the foundation of the proposed modal identification method. Its practical procedure, combined with the singular value decomposition algorithm, is presented step by step. The proposed method is finally verified using numerical examples and a laboratory test.
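
    For illustration, a complex Morlet continuous wavelet transform can be computed with PyWavelets as sketched below. Note that the paper applies the transform to the power spectral density of ambient responses, whereas this toy example transforms a two-mode free-decay signal directly; all parameter values are assumptions.

```python
import numpy as np
import pywt

fs = 256.0
t = np.arange(0.0, 8.0, 1.0 / fs)
# Free decay of two modes, mimicking an ambient vibration response
x = (np.exp(-0.1 * t) * np.sin(2 * np.pi * 3.0 * t)
     + 0.6 * np.exp(-0.2 * t) * np.sin(2 * np.pi * 11.0 * t))

scales = np.arange(2, 128)
coeffs, freqs = pywt.cwt(x, scales, 'cmor1.5-1.0', sampling_period=1.0 / fs)
# Ridges (local maxima of |coeffs| across scales) localize the modes
peak = np.abs(coeffs).mean(axis=1).argmax()
print(f"dominant frequency ~ {freqs[peak]:.2f} Hz")
```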

  19. Knowledge-based approach to system integration

    NASA Technical Reports Server (NTRS)

    Blokland, W.; Krishnamurthy, C.; Biegl, C.; Sztipanovits, J.

    1988-01-01

    To solve complex problems one can often use the decomposition principle. However, a problem is seldom decomposable into completely independent subproblems. System integration deals with the problem of resolving the interdependencies and integrating the subsolutions. A natural method of decomposition is the hierarchical one: high-level specifications are broken down into lower level specifications until they can be transformed into solutions relatively easily. By automating the hierarchical decomposition and solution generation, an integrated system is obtained in which the declaration of high-level specifications is enough to solve the problem. We offer a knowledge-based approach to integrating the development and building of control systems. The process modeling is supported by graphical editors. The user selects and connects icons that represent subprocesses and may refer to prewritten programs. The graphical editor assists the user in selecting parameters for each subprocess and allows the testing of a specific configuration. Next, from the definitions created by the graphical editor, the actual control program is built. Fault-diagnosis routines are generated automatically as well. Since the user is not required to write program code and knowledge about the process is present in the development system, the user is not required to have expertise in many fields.

  20. Overlapping Community Detection based on Network Decomposition

    NASA Astrophysics Data System (ADS)

    Ding, Zhuanlian; Zhang, Xingyi; Sun, Dengdi; Luo, Bin

    2016-04-01

    Community detection in complex networks has become a vital step to understand the structure and dynamics of networks in various fields. However, traditional node clustering and the more recently proposed link clustering methods have inherent drawbacks in discovering overlapping communities. Node clustering is inadequate to capture the pervasive overlaps, while link clustering is often criticized for its high computational cost and ambiguous definition of communities. Overlapping community detection thus remains a formidable challenge. In this work, we propose a new overlapping community detection algorithm based on network decomposition, called NDOCD. Specifically, NDOCD iteratively splits the network by removing all links in derived link communities, which are identified by utilizing a node clustering technique. The network decomposition contributes to reducing the computation time, and the elimination of noise links improves the quality of the obtained communities. Besides, we employ a node clustering technique rather than a link similarity measure to discover link communities; thus NDOCD avoids an ambiguous definition of community and becomes less time-consuming. We test our approach on both synthetic and real-world networks. Results demonstrate the superior performance of our approach both in computation time and in accuracy compared to state-of-the-art algorithms.

  1. Bayesian Hierarchical Grouping: perceptual grouping as mixture estimation

    PubMed Central

    Froyen, Vicky; Feldman, Jacob; Singh, Manish

    2015-01-01

    We propose a novel framework for perceptual grouping based on the idea of mixture models, called Bayesian Hierarchical Grouping (BHG). In BHG we assume that the configuration of image elements is generated by a mixture of distinct objects, each of which generates image elements according to some generative assumptions. Grouping, in this framework, means estimating the number and the parameters of the mixture components that generated the image, including estimating which image elements are “owned” by which objects. We present a tractable implementation of the framework, based on the hierarchical clustering approach of Heller and Ghahramani (2005). We illustrate it with examples drawn from a number of classical perceptual grouping problems, including dot clustering, contour integration, and part decomposition. Our approach yields an intuitive hierarchical representation of image elements, giving an explicit decomposition of the image into mixture components, along with estimates of the probability of various candidate decompositions. We show that BHG accounts well for a diverse range of empirical data drawn from the literature. Because BHG provides a principled quantification of the plausibility of grouping interpretations over a wide range of grouping problems, we argue that it provides an appealing unifying account of the elusive Gestalt notion of Prägnanz. PMID:26322548

  2. In silico Pathway Activation Network Decomposition Analysis (iPANDA) as a method for biomarker development.

    PubMed

    Ozerov, Ivan V; Lezhnina, Ksenia V; Izumchenko, Evgeny; Artemov, Artem V; Medintsev, Sergey; Vanhaelen, Quentin; Aliper, Alexander; Vijg, Jan; Osipov, Andreyan N; Labat, Ivan; West, Michael D; Buzdin, Anton; Cantor, Charles R; Nikolsky, Yuri; Borisov, Nikolay; Irincheeva, Irina; Khokhlovich, Edward; Sidransky, David; Camargo, Miguel Luiz; Zhavoronkov, Alex

    2016-11-16

    Signalling pathway activation analysis is a powerful approach for extracting biologically relevant features from large-scale transcriptomic and proteomic data. However, modern pathway-based methods often fail to provide stable pathway signatures of a specific phenotype or reliable disease biomarkers. In the present study, we introduce the in silico Pathway Activation Network Decomposition Analysis (iPANDA) as a scalable robust method for biomarker identification using gene expression data. The iPANDA method combines precalculated gene coexpression data with gene importance factors based on the degree of differential gene expression and pathway topology decomposition for obtaining pathway activation scores. Using Microarray Analysis Quality Control (MAQC) data sets and pretreatment data on Taxol-based neoadjuvant breast cancer therapy from multiple sources, we demonstrate that iPANDA provides significant noise reduction in transcriptomic data and identifies highly robust sets of biologically relevant pathway signatures. We successfully apply iPANDA for stratifying breast cancer patients according to their sensitivity to neoadjuvant therapy.

  3. In silico Pathway Activation Network Decomposition Analysis (iPANDA) as a method for biomarker development

    PubMed Central

    Ozerov, Ivan V.; Lezhnina, Ksenia V.; Izumchenko, Evgeny; Artemov, Artem V.; Medintsev, Sergey; Vanhaelen, Quentin; Aliper, Alexander; Vijg, Jan; Osipov, Andreyan N.; Labat, Ivan; West, Michael D.; Buzdin, Anton; Cantor, Charles R.; Nikolsky, Yuri; Borisov, Nikolay; Irincheeva, Irina; Khokhlovich, Edward; Sidransky, David; Camargo, Miguel Luiz; Zhavoronkov, Alex

    2016-01-01

    Signalling pathway activation analysis is a powerful approach for extracting biologically relevant features from large-scale transcriptomic and proteomic data. However, modern pathway-based methods often fail to provide stable pathway signatures of a specific phenotype or reliable disease biomarkers. In the present study, we introduce the in silico Pathway Activation Network Decomposition Analysis (iPANDA) as a scalable robust method for biomarker identification using gene expression data. The iPANDA method combines precalculated gene coexpression data with gene importance factors based on the degree of differential gene expression and pathway topology decomposition for obtaining pathway activation scores. Using Microarray Analysis Quality Control (MAQC) data sets and pretreatment data on Taxol-based neoadjuvant breast cancer therapy from multiple sources, we demonstrate that iPANDA provides significant noise reduction in transcriptomic data and identifies highly robust sets of biologically relevant pathway signatures. We successfully apply iPANDA for stratifying breast cancer patients according to their sensitivity to neoadjuvant therapy. PMID:27848968

  4. A parallel domain decomposition-based implicit method for the Cahn–Hilliard–Cook phase-field equation in 3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Xiang; Yang, Chao; State Key Laboratory of Computer Science, Chinese Academy of Sciences, Beijing 100190

    2015-03-15

    We present a numerical algorithm for simulating the spinodal decomposition described by the three-dimensional Cahn–Hilliard–Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton–Krylov–Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors.

  5. Time-frequency analysis of neuronal populations with instantaneous resolution based on noise-assisted multivariate empirical mode decomposition.

    PubMed

    Alegre-Cortés, J; Soto-Sánchez, C; Pizá, Á G; Albarracín, A L; Farfán, F D; Felice, C J; Fernández, E

    2016-07-15

    Linear analysis has classically provided powerful tools for understanding the behavior of neural populations, but neuron responses to real-world stimulation are nonlinear under some conditions, and many neuronal components demonstrate strong nonlinear behavior. In spite of this, the temporal and frequency dynamics of neural populations in response to sensory stimulation have usually been analyzed with linear approaches. In this paper, we propose the use of Noise-Assisted Multivariate Empirical Mode Decomposition (NA-MEMD), a data-driven template-free algorithm, plus the Hilbert transform as a suitable tool for analyzing population oscillatory dynamics in a multi-dimensional space with instantaneous frequency (IF) resolution. The proposed approach was able to extract oscillatory information from neurophysiological data of deep vibrissal nerve and visual cortex multiunit recordings that was not evidenced using linear approaches with fixed bases such as Fourier analysis. Texture discrimination performance was increased when NA-MEMD plus the Hilbert transform was implemented, compared to linear techniques, and cortical oscillatory population activity was analyzed with precise time-frequency resolution. Noise-Assisted Multivariate Empirical Mode Decomposition plus the Hilbert transform is thus an improved method to analyze neuronal population oscillatory dynamics, overcoming the linear and stationary assumptions of classical methods. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Modeling Anaerobic Soil Organic Carbon Decomposition in Arctic Polygon Tundra: Insights into Soil Geochemical Influences on Carbon Mineralization: Modeling Archive

    DOE Data Explorer

    Zheng, Jianqiu; Thornton, Peter; Painter, Scott; Gu, Baohua; Wullschleger, Stan; Graham, David

    2018-06-13

    This anaerobic carbon decomposition model is developed with explicit representation of fermentation, methanogenesis and iron reduction by combining three well-known modeling approaches developed in different disciplines: a pool-based model to represent upstream carbon transformations and replenishment of the dissolved organic carbon (DOC) pool, a thermodynamically based model to calculate rate kinetics and biomass growth for methanogenesis and Fe(III) reduction, and a humic ion-binding model for aqueous phase speciation and pH calculation. All three are implemented in the open source geochemical model PHREEQC (V3.0). Installation of PHREEQC is required to run this model.

  7. Bromamine Decomposition Revisited: A Holistic Approach for Analyzing Acid and Base Catalysis Kinetics.

    PubMed

    Wahman, David G; Speitel, Gerald E; Katz, Lynn E

    2017-11-21

    Chloramine chemistry is complex, with a variety of reactions occurring in series and parallel and many that are acid or base catalyzed, resulting in numerous rate constants. Bromide presence increases system complexity even further with possible bromamine and bromochloramine formation. Therefore, techniques for parameter estimation must address this complexity through thoughtful experimental design and robust data analysis approaches. The current research outlines a rational basis for constrained data fitting using Brønsted theory, application of the microscopic reversibility principle to reversible acid or base catalyzed reactions, and characterization of the relative significance of parallel reactions using fictive product tracking. This holistic approach was used on a comprehensive and well-documented data set for bromamine decomposition, allowing new interpretations of existing data by revealing that a previously published reaction scheme was not robust; it was not able to describe monobromamine or dibromamine decay outside of the conditions for which it was calibrated. The current research's simplified model (3 reactions, 17 constants) represented the experimental data better than the previously published model (4 reactions, 28 constants). A final model evaluation was conducted based on representative drinking water conditions to determine a minimal model (3 reactions, 8 constants) applicable for drinking water conditions.

  8. Crossing Fibers Detection with an Analytical High Order Tensor Decomposition

    PubMed Central

    Megherbi, T.; Kachouane, M.; Oulebsir-Boumghar, F.; Deriche, R.

    2014-01-01

    Diffusion magnetic resonance imaging (dMRI) is the only technique to probe in vivo and noninvasively the fiber structure of human brain white matter. Detecting the crossing of neuronal fibers remains an exciting challenge with an important impact in tractography. In this work, we tackle this challenging problem and propose an original and efficient technique to extract all crossing fibers from diffusion signals. To this end, we start by estimating, from the dMRI signal, the so-called Cartesian tensor fiber orientation distribution (CT-FOD) function, whose maxima correspond exactly to the orientations of the fibers. The fourth order symmetric positive definite tensor that represents the CT-FOD is then analytically decomposed via the application of a new theoretical approach and this decomposition is used to accurately extract all the fibers orientations. Our proposed high order tensor decomposition based approach is minimal and allows recovering the whole crossing fibers without any a priori information on the total number of fibers. Various experiments performed on noisy synthetic data, on phantom diffusion data, and on human brain data validate our approach and clearly demonstrate that it is efficient, robust to noise and performs favorably in terms of angular resolution and accuracy when compared to some classical and state-of-the-art approaches. PMID:25246940

  9. Compression of hyper-spectral images using an accelerated nonnegative tensor decomposition

    NASA Astrophysics Data System (ADS)

    Li, Jin; Liu, Zilong

    2017-12-01

    Nonnegative tensor Tucker decomposition (NTD) in a transform domain (e.g., 2D-DWT) has been used in the compression of hyper-spectral images because it can remove redundancies between spectral bands and also exploit spatial correlations of each band. However, the NTD has a very high computational cost. In this paper, we propose a low-complexity NTD-based compression method for hyper-spectral images. The method is based on a pair-wise multilevel grouping approach for the NTD that overcomes its high computational cost, at the price of a slight decrease in coding performance compared to the conventional NTD. Experiments confirm that the proposed method requires less processing time while keeping better coding performance than compression without the NTD. The proposed approach has a potential application in the lossy compression of hyper-spectral or multi-spectral images.
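
    A minimal sketch of the nonnegative Tucker step using tensorly's non_negative_tucker on a hypothetical (rows x cols x bands) cube; the ranks and the crude storage-ratio estimate are illustrative assumptions, and the paper's pair-wise multilevel grouping and transform-domain stages are omitted.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_tucker

cube = tl.tensor(np.random.rand(64, 64, 32))  # hypothetical hyper-spectral cube
core, factors = non_negative_tucker(cube, rank=[16, 16, 8], n_iter_max=200)

# Storage ratio: original entries vs. core + factor entries
stored = int(np.prod(core.shape)) + sum(int(np.prod(f.shape)) for f in factors)
print(f"storage ratio ~ {cube.size / stored:.1f}x")

reconstructed = tl.tucker_to_tensor((core, factors))
```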

  10. Density-dependent liquid nitromethane decomposition: molecular dynamics simulations based on ReaxFF.

    PubMed

    Rom, Naomi; Zybin, Sergey V; van Duin, Adri C T; Goddard, William A; Zeiri, Yehuda; Katz, Gil; Kosloff, Ronnie

    2011-09-15

    The decomposition mechanism of hot liquid nitromethane at various compressions was studied using reactive force field (ReaxFF) molecular dynamics simulations. A competition between two different initial thermal decomposition schemes is observed, depending on compression. At low densities, unimolecular C-N bond cleavage is the dominant route, producing CH3 and NO2 fragments. As density and pressure rise, approaching the Chapman-Jouguet detonation conditions (∼30% compression, >2500 K), the dominant mechanism switches to the formation of the CH3NO fragment via H-transfer and/or N-O bond rupture. The change in the decomposition mechanism of hot liquid nitromethane leads to different kinetic and energetic behavior, as well as a different product distribution. The calculated density dependence of the enthalpy change correlates with the change in the initial decomposition reaction mechanism and can be used as a convenient and useful global parameter for the detection of reaction dynamics. Atomic averaged local diffusion coefficients are shown to be sensitive to the reaction dynamics, and can be used to distinguish between time periods where chemical reactions occur and diffusion-dominated, nonreactive time periods. © 2011 American Chemical Society

  11. Nonlinear mode decomposition: A noise-robust, adaptive decomposition method

    NASA Astrophysics Data System (ADS)

    Iatsenko, Dmytro; McClintock, Peter V. E.; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool—nonlinear mode decomposition (NMD)—which decomposes a given signal into a set of physically meaningful oscillations of any waveform, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques—which, together with the adaptive choice of their parameters, make it extremely noise robust—and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary MATLAB codes for running NMD are freely available for download.

  12. Quantitative evaluation of muscle synergy models: a single-trial task decoding approach

    PubMed Central

    Delis, Ioannis; Berret, Bastien; Pozzo, Thierry; Panzeri, Stefano

    2013-01-01

    Muscle synergies, i.e., invariant coordinated activations of groups of muscles, have been proposed as building blocks that the central nervous system (CNS) uses to construct the patterns of muscle activity utilized for executing movements. Several efficient dimensionality reduction algorithms that extract putative synergies from electromyographic (EMG) signals have been developed. Typically, the quality of synergy decompositions is assessed by computing the Variance Accounted For (VAF). Yet, little is known about the extent to which the combination of those synergies encodes task-discriminating variations of muscle activity in individual trials. To address this question, here we conceive and develop a novel computational framework to evaluate muscle synergy decompositions in task space. Unlike previous methods, which consider the total variance of muscle patterns (VAF-based metrics), our approach focuses on the variance that discriminates between the execution of different tasks. The procedure is based on single-trial task decoding from muscle synergy activation features. The task-decoding metric evaluates quantitatively the mapping between synergy recruitment and task identification and automatically determines the minimal number of synergies that captures all the task-discriminating variability in the synergy activations. In this paper, we first validate the method on plausibly simulated EMG datasets. We then show that it can be applied to different types of muscle synergy decomposition and illustrate its applicability to real data by using it for the analysis of EMG recordings during an arm pointing task. We find that time-varying and synchronous synergies with similar numbers of parameters are equally efficient in task decoding, suggesting that in this experimental paradigm they are equally valid representations of muscle synergies. Overall, these findings stress the effectiveness of the decoding metric in systematically assessing muscle synergy decompositions in task space. PMID:23471195

  13. Dynamic laser speckle angiography achieved by eigen-decomposition filtering.

    PubMed

    Li, Chenxi; Wang, Ruikang

    2017-06-01

    A new approach is proposed for the statistical analysis of laser speckle signals emerging from living biological tissue, based on eigen-decomposition, to separate the dynamic speckle signals due to moving blood cells from the static speckle signals due to static tissue components, and thereby achieve angiography of the interrogated tissue in vivo. The proposed approach is tested by imaging mouse ear pinna in vivo, demonstrating its capability of providing detailed microvascular networks with high contrast and high temporal and spatial resolution. It is expected to provide further opportunities for laser speckle imaging in biomedical and clinical applications where the microvascular response to a certain stimulus or tissue injury is of interest. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
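
    One common way to realize such eigen-decomposition filtering is a singular-value clutter filter applied to the space-time (Casorati) matrix of the frame stack, as sketched below; the number of suppressed components and the variance-based angiogram are illustrative choices rather than the authors' exact algorithm.

```python
import numpy as np

def svd_clutter_filter(frames, n_static=2):
    """Separate dynamic speckle from static tissue: stack frames into a
    (pixels x time) Casorati matrix and zero the leading singular
    components, which capture the quasi-static (clutter) signal."""
    n_frames = frames.shape[0]
    casorati = frames.reshape(n_frames, -1).T.astype(float)
    U, s, Vh = np.linalg.svd(casorati, full_matrices=False)
    s_dyn = s.copy()
    s_dyn[:n_static] = 0.0               # suppress static/clutter modes
    dynamic = (U * s_dyn) @ Vh
    # Temporal variance of the dynamic residue as a simple angiography map
    return dynamic.var(axis=1).reshape(frames.shape[1:])
```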

  14. Electrostatic similarity of proteins: Application of three dimensional spherical harmonic decomposition

    PubMed Central

    Długosz, Maciej; Trylska, Joanna

    2008-01-01

    We present a method for describing and comparing the global electrostatic properties of biomolecules based on the spherical harmonic decomposition of electrostatic potential data. Unlike other approaches, our method does not require any prior three-dimensional structural alignment. The electrostatic potential, given as a volumetric data set from a numerical solution of the Poisson or Poisson–Boltzmann equation, is represented with descriptors that are rotation invariant. The method can be applied to large and structurally diverse sets of biomolecules, enabling them to be clustered according to their electrostatic features. PMID:18624502

  15. Ultrasonic technique for imaging tissue vibrations: preliminary results.

    PubMed

    Sikdar, Siddhartha; Beach, Kirk W; Vaezy, Shahram; Kim, Yongmin

    2005-02-01

    We propose an ultrasound (US)-based technique for imaging vibrations in the blood vessel walls and surrounding tissue caused by eddies produced during flow through narrowed or punctured arteries. Our approach is to utilize the clutter signal, normally suppressed in conventional color flow imaging, to detect and characterize local tissue vibrations. We demonstrate the feasibility of visualizing the origin and extent of vibrations relative to the underlying anatomy and blood flow in real-time and their quantitative assessment, including measurements of the amplitude, frequency and spatial distribution. We present two signal-processing algorithms, one based on phase decomposition and the other based on spectral estimation using eigen decomposition, for isolating vibrations from clutter, blood flow and noise using an ensemble of US echoes. In simulation studies, the computationally efficient phase-decomposition method achieved 96% sensitivity and 98% specificity for vibration detection and was robust to broadband vibrations. Somewhat higher sensitivity (98%) and specificity (99%) could be achieved using the more computationally intensive eigen decomposition-based algorithm. Vibration amplitudes as low as 1 μm were measured accurately in phantom experiments. Real-time tissue vibration imaging at typical color-flow frame rates was implemented on a software-programmable US system. Vibrations were studied in vivo in a stenosed femoral bypass vein graft in a human subject and in a punctured femoral artery and incised spleen in an animal model.

  16. A new solar power output prediction based on hybrid forecast engine and decomposition model.

    PubMed

    Zhang, Weijiang; Dang, Hongshe; Simoes, Rolando

    2018-06-12

    Given the growing role of photovoltaic (PV) energy as a clean energy source in electrical networks and its uncertain nature, PV energy prediction has been studied by researchers in recent decades. The prediction problem directly affects the operation of the power network and, due to the high volatility of this signal, an accurate prediction model is needed. A new prediction model based on the Hilbert-Huang transform (HHT) and the integration of improved empirical mode decomposition (IEMD) with feature selection and a forecast engine is presented in this paper. The proposed approach is divided into three main sections. In the first section, the signal is decomposed by the proposed IEMD as an accurate decomposition tool. To increase the accuracy of the proposed method, a new interpolation method is used instead of cubic spline curve (CSC) fitting in the EMD. The obtained output is then fed into the new feature selection procedure to choose the best candidate inputs. Finally, the signal is predicted by a hybrid forecast engine composed of support vector regression (SVR) based on an intelligent algorithm. The effectiveness of the proposed approach has been verified on a number of real-world engineering test cases in comparison with other well-known models. The obtained results prove the validity of the proposed method. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  17. A system decomposition approach to the design of functional observers

    NASA Astrophysics Data System (ADS)

    Fernando, Tyrone; Trinh, Hieu

    2014-09-01

    This paper reports a system decomposition that allows the construction of a minimum-order functional observer using a state observer design approach. The system decomposition translates the functional observer design problem to that of a state observer for a smaller decomposed subsystem. Functional observability indices are introduced, and a closed-form expression for the minimum order required for a functional observer is derived in terms of those functional observability indices.

  18. Soil organic matter decomposition follows plant productivity response to sea-level rise

    NASA Astrophysics Data System (ADS)

    Mueller, Peter; Jensen, Kai; Megonigal, James Patrick

    2015-04-01

    The accumulation of soil organic matter (SOM) is an important mechanism that allows many tidal wetlands to keep pace with sea-level rise. SOM accumulation is governed by the rates of production and decomposition of organic matter. While plant productivity responses to sea-level rise are well understood, far less is known about the response of SOM decomposition to accelerated sea-level rise. Here we quantified the effects of sea-level rise on SOM decomposition by exposing planted and unplanted tidal marsh monoliths to experimentally manipulated flood durations. The study was performed in a field-based mesocosm facility at the Smithsonian Global Change Research Wetland, a microtidal brackish marsh in Maryland, US. SOM decomposition was quantified as CO2 efflux, with plant- and SOM-derived CO2 separated using a stable carbon isotope approach. Despite the dogma that decomposition rates are inversely related to flooding, SOM mineralization was not sensitive to varying flood duration over a 35 cm range in surface elevation in unplanted mesocosms. In the presence of plants, decomposition rates were strongly and positively related to aboveground biomass (p≤0.01, R2≥0.59). We conclude that rates of soil carbon loss through decomposition are driven by plant responses to sea level in this intensively studied tidal marsh. If our result applies more generally to tidal wetlands, it has important implications for modeling carbon sequestration and marsh accretion in response to accelerated sea-level rise.

  19. SU-E-QI-14: Quantitative Variogram Detection of Mild, Unilateral Disease in Elastase-Treated Rats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacob, R; Carson, J

    2014-06-15

    Purpose: Determining the presence of mild or early disease in the lungs can be challenging and subjective. We present a rapid and objective method for evaluating lung damage in a rat model of unilateral mild emphysema based on a new approach to heterogeneity assessment. We combined octree decomposition (used in three-dimensional (3D) computer graphics) with variograms (used in geostatistics to assess spatial relationships) to evaluate 3D computed tomography (CT) lung images for disease. Methods: Male Sprague-Dawley rats (232 ± 7 g) were intratracheally dosed with 50 U/kg of elastase dissolved in 200 μL of saline to a single lobe (n=6) or with saline only (n=5). After four weeks, 3D micro-CT images were acquired at end expiration on mechanically ventilated rats using prospective gating. Images were masked, and lungs were decomposed to homogeneous blocks of 2×2×2, 4×4×4, and 8×8×8 voxels using octree decomposition. The spatial variance – the square of the difference of signal intensity – between all pairs of the 8×8×8 blocks was calculated. Variograms – graphs of distance vs. variance – were made, and the data were fit to a power law to determine the exponent. The mean HU values, coefficient of variation (CoV), and emphysema index (EI) were calculated and compared to the variograms. Results: The variogram analysis showed that significant differences between groups existed (p<0.01), whereas the mean HU (p=0.07), CoV (p=0.24), and EI (p=0.08) did not. Calculation time for the variogram of a typical 1000-block decomposition was ∼6 seconds, and octree decomposition took ∼2 minutes. Decomposing the images prior to variogram calculation resulted in a ∼700x decrease in time compared to other published approaches. Conclusions: Our results suggest that the approach combining octree decomposition and variogram analysis may be a rapid, non-subjective, and sensitive imaging-based biomarker for quantitative characterization of lung disease.
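
    A minimal sketch of the variogram calculation on the octree output, assuming each homogeneous 8×8×8 block has been reduced to a center coordinate and a mean intensity; the binning choices and the log-log power-law fit are illustrative assumptions.

```python
import numpy as np

def variogram_exponent(centers, values, n_bins=20):
    """Empirical variogram of block mean intensities: mean squared
    difference versus block separation, fitted to a power law.
    The fitted exponent serves as the heterogeneity measure."""
    diff2 = (values[:, None] - values[None, :]) ** 2
    dist = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    iu = np.triu_indices(len(values), k=1)
    dist, diff2 = dist[iu], diff2[iu]

    edges = np.linspace(dist.min(), dist.max(), n_bins + 1)
    idx = np.digitize(dist, edges) - 1
    d_bin, g_bin = [], []
    for i in range(n_bins):
        sel = idx == i
        if sel.any():
            d_bin.append(dist[sel].mean())
            g_bin.append(diff2[sel].mean())
    # Power-law fit gamma(d) ~ d^k by linear regression in log-log space
    k, _ = np.polyfit(np.log(d_bin), np.log(g_bin), 1)
    return k
```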

  20. Signal enhancement based on complex curvelet transform and complementary ensemble empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Dong, Lieqian; Wang, Deying; Zhang, Yimeng; Zhou, Datong

    2017-09-01

    Signal enhancement is a necessary step in seismic data processing. In this paper we utilize the complementary ensemble empirical mode decomposition (CEEMD) and complex curvelet transform (CCT) methods to separate signal from random noise and thereby improve the signal-to-noise (S/N) ratio. First, the original noisy data are decomposed into a series of intrinsic mode function (IMF) profiles with the aid of CEEMD. The IMF profiles containing noise are then transformed into the CCT domain. By choosing a different threshold for each IMF profile, based on its noise level, the noise in the original data can be suppressed. Finally, we illustrate the effectiveness of the approach on simulated and field datasets.

  1. Dynamic Load Balancing Based on Constrained K-D Tree Decomposition for Parallel Particle Tracing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiang; Guo, Hanqi; Yuan, Xiaoru

    Particle tracing is a fundamental technique in flow field data visualization. In this work, we present a novel dynamic load balancing method for parallel particle tracing. Specifically, we employ a constrained k-d tree decomposition approach to dynamically redistribute tasks among processes. Each process is initially assigned a regularly partitioned block along with a duplicated ghost layer, subject to the memory limit. During particle tracing, the k-d tree decomposition is performed dynamically by constraining the cutting planes to the overlap range of the duplicated data. This ensures that particles are reassigned to processes as evenly as possible, while the newly assigned particles for a process always lie within its block. Results show good load balance and high efficiency for our method.

  2. The Excursion set approach: Stratonovich approximation and Cholesky decomposition

    NASA Astrophysics Data System (ADS)

    Nikakhtar, Farnik; Ayromlou, Mohammadreza; Baghram, Shant; Rahvar, Sohrab; Tabar, M. Reza Rahimi; Sheth, Ravi K.

    2018-05-01

    The excursion set approach is a framework for estimating how the number density of nonlinear structures in the cosmic web depends on the expansion history of the universe and the nature of gravity. A key part of the approach is the estimation of the first crossing distribution of a suitably chosen barrier by random walks having correlated steps: the shape of the barrier is determined by the physics of nonlinear collapse, and the correlations between steps by the nature of the initial density fluctuation field. We describe analytic and numerical methods for calculating such first up-crossing distributions. While the exact solution can be written formally as an infinite series, we show how to approximate it efficiently using the Stratonovich approximation. We demonstrate its accuracy using Monte-Carlo realizations of the walks, which we generate using a novel Cholesky-decomposition based algorithm that is significantly faster than the algorithm currently in the literature.
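
    The Cholesky idea can be sketched directly: draw independent Gaussians and color them with the Cholesky factor of the walk covariance, then record the first up-crossings of the barrier. The Brownian covariance and the constant spherical-collapse barrier (delta_c ~ 1.686) below are standard illustrative choices, not the paper's correlated-step walks.

```python
import numpy as np

def correlated_walks(cov, n_walks, seed=0):
    """Gaussian walks whose heights have covariance `cov`: if z ~ N(0, I)
    then L z ~ N(0, cov) for the Cholesky factor L of cov."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(cov)
    return (L @ rng.standard_normal((cov.shape[0], n_walks))).T

def first_crossing(walks, barrier):
    """Index of the first up-crossing of a constant barrier (-1 if none)."""
    hits = walks >= barrier
    first = hits.argmax(axis=1)
    first[~hits.any(axis=1)] = -1
    return first

s = np.arange(1, 101, dtype=float)   # pseudo 'variance' steps S
cov = np.minimum.outer(s, s)         # Brownian (uncorrelated-step) baseline
walks = correlated_walks(cov, n_walks=10000)
crossings = first_crossing(walks, barrier=1.686)
```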

  3. Detection and classification of interstitial lung diseases and emphysema using a joint morphological-fuzzy approach

    NASA Astrophysics Data System (ADS)

    Chang Chien, Kuang-Che; Fetita, Catalin; Brillet, Pierre-Yves; Prêteux, Françoise; Chang, Ruey-Feng

    2009-02-01

    Multi-detector computed tomography (MDCT) has high accuracy and specificity for volumetrically capturing serial images of the lung. It increases the capability of computerized classification of lung tissue in medical research. This paper proposes a three-dimensional (3D) automated approach based on mathematical morphology and fuzzy logic for quantifying and classifying interstitial lung diseases (ILDs) and emphysema. The proposed methodology is composed of several stages: (1) an image multi-resolution decomposition scheme based on a 3D morphological filter is used to detect and analyze the different density patterns of the lung texture. Then, (2) for each pattern in the multi-resolution decomposition, six features are computed, for which fuzzy membership functions define a probability of association with a pathology class. Finally, (3) for each pathology class, the probabilities are combined according to the weight assigned to each membership function, and two threshold values are used to decide the final class of the pattern. The proposed approach was tested on 10 MDCT cases and the classification accuracy was: emphysema, 95%; fibrosis/honeycombing, 84%; and ground glass, 97%.

  4. Massively Parallel Dantzig-Wolfe Decomposition Applied to Traffic Flow Scheduling

    NASA Technical Reports Server (NTRS)

    Rios, Joseph Lucio; Ross, Kevin

    2009-01-01

    Optimal scheduling of air traffic over the entire National Airspace System is a computationally difficult task. To speed computation, Dantzig-Wolfe decomposition is applied to a known linear integer programming approach for assigning delays to flights. The optimization model is proven to have the block-angular structure necessary for Dantzig-Wolfe decomposition. The subproblems for this decomposition are solved in parallel via independent computation threads. Experimental evidence suggests that as the number of subproblems/threads increases (and their respective sizes decrease), the solution quality, convergence, and runtime improve. A demonstration of this is provided by using one flight per subproblem, which is the finest possible decomposition. This results in thousands of subproblems and associated computation threads. This massively parallel approach is compared to one with few threads and to standard (non-decomposed) approaches in terms of solution quality and runtime. Since this method generally provides a non-integral (relaxed) solution to the original optimization problem, two heuristics are developed to generate an integral solution. Dantzig-Wolfe followed by these heuristics can provide a near-optimal (sometimes optimal) solution to the original problem hundreds of times faster than standard (non-decomposed) approaches. In addition, when massive decomposition is employed, the solution is shown to be more likely integral, which obviates the need for an integerization step. These results indicate that nationwide, real-time, high fidelity, optimal traffic flow scheduling is achievable for (at least) 3 hour planning horizons.

  5. Watermarking scheme based on singular value decomposition and homomorphic transform

    NASA Astrophysics Data System (ADS)

    Verma, Deval; Aggarwal, A. K.; Agarwal, Himanshu

    2017-10-01

    A semi-blind watermarking scheme based on singular value decomposition (SVD) and homomorphic transform is proposed. This scheme ensures the digital security of an eight-bit gray-scale image by inserting an invisible eight-bit gray-scale watermark into it. The key approach of the scheme is to apply the homomorphic transform on the host image to obtain its reflectance component. The watermark is embedded into the singular values that are obtained by applying the singular value decomposition to the reflectance component. Peak signal-to-noise ratio (PSNR), normalized correlation coefficient (NCC) and mean structural similarity index measure (MSSIM) are used to evaluate the performance of the scheme. Invisibility of the watermark is ensured by visual inspection and the high PSNR values of the watermarked images. Presence of the watermark is ensured by visual inspection and high values of NCC and MSSIM of the extracted watermarks. Robustness of the scheme is verified by high values of NCC and MSSIM for attacked watermarked images.
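
    The singular-value embedding at the core of such schemes fits in a few lines. The sketch below omits the homomorphic transform and reflectance extraction and works directly on an image array of the same size as the watermark; `alpha` and the stored side information are assumptions, not the paper's exact protocol.

      import numpy as np

      def embed(host, watermark, alpha=0.05):
          """host, watermark: equally sized 2-D gray-scale arrays."""
          U, S, Vt = np.linalg.svd(host.astype(float), full_matrices=False)
          Uw, Sw, Vtw = np.linalg.svd(watermark.astype(float), full_matrices=False)
          marked = (U * (S + alpha * Sw)) @ Vt       # perturb host singular values
          keys = (S, Uw, Vtw)                        # side information (semi-blind)
          return marked, keys

      def extract(marked, keys, alpha=0.05):
          S_host, Uw, Vtw = keys
          S_marked = np.linalg.svd(marked.astype(float), compute_uv=False)
          Sw = (S_marked - S_host) / alpha           # recover watermark singular values
          return (Uw * Sw) @ Vtw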

  6. FCDECOMP: decomposition of metabolic networks based on flux coupling relations.

    PubMed

    Rezvan, Abolfazl; Marashi, Sayed-Amir; Eslahchi, Changiz

    2014-10-01

    A metabolic network model provides a computational framework to study the metabolism of a cell at the system level. Due to their large sizes and complexity, rational decomposition of these networks into subsystems is a strategy to obtain better insight into the metabolic functions. Additionally, decomposing metabolic networks paves the way to use computational methods that will be otherwise very slow when run on the original genome-scale network. In the present study, we propose FCDECOMP decomposition method based on flux coupling relations (FCRs) between pairs of reaction fluxes. This approach utilizes a genetic algorithm (GA) to obtain subsystems that can be analyzed in isolation, i.e. without considering the reactions of the original network in the analysis. Therefore, we propose that our method is useful for discovering biologically meaningful modules in metabolic networks. As a case study, we show that when this method is applied to the metabolic networks of barley seeds and yeast, the modules are in good agreement with the biological compartments of these networks.

  7. Source and listener directivity for interactive wave-based sound propagation.

    PubMed

    Mehra, Ravish; Antani, Lakulish; Kim, Sujeong; Manocha, Dinesh

    2014-04-01

    We present an approach to model dynamic, data-driven source and listener directivity for interactive wave-based sound propagation in virtual environments and computer games. Our directional source representation is expressed as a linear combination of elementary spherical harmonic (SH) sources. In the preprocessing stage, we precompute and encode the propagated sound fields due to each SH source. At runtime, we perform the SH decomposition of the varying source directivity interactively and compute the total sound field at the listener position as a weighted sum of precomputed SH sound fields. We propose a novel plane-wave decomposition approach based on higher-order derivatives of the sound field that enables dynamic HRTF-based listener directivity at runtime. We provide a generic framework to incorporate our source and listener directivity in any offline or online frequency-domain wave-based sound propagation algorithm. We have integrated our sound propagation system in Valve's Source game engine and use it to demonstrate realistic acoustic effects such as sound amplification, diffraction low-passing, scattering, localization, externalization, and spatial sound, generated by wave-based propagation of directional sources and listener in complex scenarios. We also present results from our preliminary user study.

  8. A new multivariate empirical mode decomposition method for improving the performance of SSVEP-based brain-computer interface

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Feng; Atal, Kiran; Xie, Sheng-Quan; Liu, Quan

    2017-08-01

    Objective. Accurate and efficient detection of steady-state visual evoked potentials (SSVEP) in the electroencephalogram (EEG) is essential for the related brain-computer interface (BCI) applications. Approach. Although canonical correlation analysis (CCA) has been applied extensively and successfully to SSVEP recognition, the spontaneous EEG activities and artifacts that often occur during data recording can deteriorate the recognition performance. Therefore, it is meaningful to extract a few frequency sub-bands of interest to avoid or reduce the influence of unrelated brain activity and artifacts. This paper presents an improved method to detect the frequency component associated with SSVEP using multivariate empirical mode decomposition (MEMD) and CCA (MEMD-CCA). EEG signals from nine healthy volunteers were recorded to evaluate the performance of the proposed method for SSVEP recognition. Main results. We compared our method with CCA and the temporally local multivariate synchronization index (TMSI). The results suggest that MEMD-CCA achieved significantly higher accuracy than standard CCA and TMSI. It gave improvements of 1.34%, 3.11%, 3.33%, 10.45%, 15.78%, 18.45%, 15.00% and 14.22% on average over CCA at time windows from 0.5 s to 5 s, and 0.55%, 1.56%, 7.78%, 14.67%, 13.67%, 7.33% and 7.78% over TMSI from 0.75 s to 5 s. The method also outperformed filter-based (FB), empirical mode decomposition (EMD) and wavelet decomposition (WT) based CCA for SSVEP recognition. Significance. The results demonstrate the ability of the proposed MEMD-CCA to improve the performance of SSVEP-based BCI.
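
    For reference, the standard CCA stage of such detectors (without the MEMD front end) can be sketched as below, correlating the multichannel EEG against sine/cosine reference sets at each candidate stimulus frequency; shapes and parameter choices are assumptions.

      import numpy as np
      from sklearn.cross_decomposition import CCA

      def cca_score(eeg, freq, fs, n_harmonics=3):
          """eeg: (n_samples, n_channels); returns the max canonical correlation."""
          t = np.arange(eeg.shape[0]) / fs
          ref = np.column_stack([f(2 * np.pi * h * freq * t)
                                 for h in range(1, n_harmonics + 1)
                                 for f in (np.sin, np.cos)])
          u, v = CCA(n_components=1).fit(eeg, ref).transform(eeg, ref)
          return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

      def classify(eeg, candidate_freqs, fs):
          """Pick the stimulus frequency with the highest canonical correlation."""
          scores = [cca_score(eeg, f, fs) for f in candidate_freqs]
          return candidate_freqs[int(np.argmax(scores))]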

  9. Parallel text rendering by a PostScript interpreter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kritskii, S.P.; Zastavnoi, B.A.

    1994-11-01

    The most radical method of increasing the performance of devices controlled by PostScript interpreters may be the use of multiprocessor controllers. This paper presents a method for parallelizing the operation of a PostScript interpreter for rendering text. The proposed method is based on decomposition of the outlines of letters into horizontal strips covering equal areas. The subroutines thus obtained are distributed to the processors in a network and then filled in by conventional sequential algorithms. A special algorithm has been developed for dividing the outlines of characters into subroutines so that each may be colored independently of the others. The algorithm uses special estimates to determine a correct partition of the corresponding outlines into horizontal strips, and a method is presented for finding such estimates. Two different processing approaches are presented. In the first, one of the processors performs the decomposition of the outlines and distributes the strips to the remaining processors, which are responsible for the rendering. In the second approach, the decomposition process is itself distributed among the processors in the network.

  10. Signal decomposition for surrogate modeling of a constrained ultrasonic design space

    NASA Astrophysics Data System (ADS)

    Homa, Laura; Sparkman, Daniel; Wertz, John; Welter, John; Aldrin, John C.

    2018-04-01

    The U.S. Air Force seeks to improve the methods and measures by which the lifecycle of composite structures are managed. Nondestructive evaluation of damage - particularly internal damage resulting from impact - represents a significant input to that improvement. Conventional ultrasound can detect this damage; however, full 3D characterization has not been demonstrated. A proposed approach for robust characterization uses model-based inversion through fitting of simulated results to experimental data. One challenge with this approach is the high computational expense of the forward model to simulate the ultrasonic B-scans for each damage scenario. A potential solution is to construct a surrogate model using a subset of simulated ultrasonic scans built using a highly accurate, computationally expensive forward model. However, the dimensionality of these simulated B-scans makes interpolating between them a difficult and potentially infeasible problem. Thus, we propose using the chirplet decomposition to reduce the dimensionality of the data, and allow for interpolation in the chirplet parameter space. By applying the chirplet decomposition, we are able to extract the salient features in the data and construct a surrogate forward model.

  11. Retrieval of the non-depolarizing components of depolarizing Mueller matrices by using symmetry conditions and least squares minimization

    NASA Astrophysics Data System (ADS)

    Kuntman, Ertan; Canillas, Adolf; Arteaga, Oriol

    2017-11-01

    Experimental Mueller matrices contain a certain amount of uncertainty in their elements, and these uncertainties can create difficulties for decomposition methods based on analytic solutions. In an earlier paper [1], we proposed a decomposition method for depolarizing Mueller matrices that uses certain symmetry conditions. However, because of experimental error, that method creates over-determined systems with non-unique solutions. Here we propose a least squares minimization approach to improve the accuracy of our results. In this method, we take into account the number of independent parameters of the corresponding symmetry and the rank constraints on the component matrices to decide on our fitting model. This approach is illustrated with experimental Mueller matrices that include material media with different Mueller symmetries.

  12. U.S. ENVIRONMENTAL PROTECTION AGENCY'S LANDFILL GAS EMISSION MODEL (LANDGEM)

    EPA Science Inventory

    The paper discusses EPA's available software for estimating landfill gas emissions. This software is based on a first-order decomposition rate equation using empirical data from U.S. landfills. The software provides a relatively simple approach to estimating landfill gas emissi...
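
    The underlying first-order decay model is simple to illustrate. The sketch below is a single-cohort-per-year simplification in the spirit of such models, not EPA's software; the parameter values (decay rate k per year, generation potential L0 per Mg of waste) are assumed, default-magnitude placeholders.

      import numpy as np

      def landfill_gas(mass_by_year, k=0.05, L0=170.0, horizon=50):
          """mass_by_year: waste accepted each year (Mg); returns annual generation estimates."""
          assert len(mass_by_year) <= horizon
          t = np.arange(horizon)
          q = np.zeros(horizon)
          for i, m in enumerate(mass_by_year):     # each yearly waste cohort decays
              age = t[i:] - i
              q[i:] += k * L0 * m * np.exp(-k * age)
          return q                                 # gas volume per year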

  13. Model-Based Speech Signal Coding Using Optimized Temporal Decomposition for Storage and Broadcasting Applications

    NASA Astrophysics Data System (ADS)

    Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret

    2003-12-01

    A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.

  14. A partitioned model order reduction approach to rationalise computational expenses in nonlinear fracture mechanics

    PubMed Central

    Kerfriden, P.; Goury, O.; Rabczuk, T.; Bordas, S.P.A.

    2013-01-01

    We propose in this paper a reduced order modelling technique based on domain partitioning for parametric problems of fracture. We show that coupling domain decomposition and projection-based model order reduction makes it possible to focus the numerical effort where it is most needed: around the zones where damage propagates. No a priori knowledge of the damage pattern is required, the extraction of the corresponding spatial regions being based solely on algebra. The efficiency of the proposed approach is demonstrated numerically with an example relevant to engineering fracture. PMID:23750055

  15. Intrinsic Decomposition of The Stretch Tensor for Fibrous Media

    NASA Astrophysics Data System (ADS)

    Kellermann, David C.

    2010-05-01

    This paper presents a novel mechanism for the description of fibre reorientation based on the decomposition of the stretch tensor according to a given material's intrinsic constitutive properties. This approach avoids the need for fibre directors, structural tensors or specialised models such as the ideal fibre-reinforced model, which are commonly applied to the analysis of fibre kinematics in the finite deformation of fibrous media for biomechanical problems. The proposed approach uses Intrinsic-Field Tensors (IFTs) that build upon the linear orthotropic theory presented in a previous paper entitled Strongly orthotropic continuum mechanics and finite element treatment. The intrinsic decomposition of the stretch tensor therein provides superior capacity to represent the intermediary kinematics driven by finite orthotropic ratios, where the benefits are predominantly expressed in cases of large deformation, as is typical in biomechanical studies. Satisfaction of requirements such as Material Frame-Indifference (MFI) and Euclidean objectivity is demonstrated here—these factors being necessary for the proposed IFTs to be valid tensorial quantities. The resultant tensors, initially for the simplest case of linear elasticity, are able to describe the same fibre reorientation as contemporary approaches such as the use of structural tensors, while additionally being capable of showing results intermediary between classical isotropy and infinitely orthotropic representations. This intermediary case has not previously been reported.

  16. Input-decomposition balance of heterotrophic processes in a warm-temperate mixed forest in Japan

    NASA Astrophysics Data System (ADS)

    Jomura, M.; Kominami, Y.; Ataka, M.; Makita, N.; Dannoura, M.; Miyama, T.; Tamai, K.; Goto, Y.; Sakurai, S.

    2010-12-01

    Carbon accumulation in forest ecosystems has been evaluated using three approaches. The first is net ecosystem exchange (NEE) estimated by tower flux measurement. The second is net ecosystem production (NEP) estimated by biometric measurements; NEP can be expressed as the difference between net primary production and heterotrophic respiration, or as the annual increment in the plant biomass (ΔW) plus soil (ΔS) carbon pools, defined as NEP = ΔW + ΔS. The third approach requires evaluating the annual carbon increment in the soil compartment. The soil carbon accumulation rate cannot be measured directly over short periods because the annual accumulation is small, but it can be estimated by model calculation. The Rothamsted carbon (Roth-C) model is a soil organic carbon turnover model and a useful tool for estimating the rate of soil carbon accumulation. However, the model does not sufficiently include variations in the decomposition processes of organic matter in forest ecosystems. Organic matter pools in forest ecosystems have different turnover rates, which creates temporal variations in the input-decomposition balance, and they also vary widely in spatial distribution. Thus, in order to estimate the rate of soil carbon accumulation, temporal and spatial variation in the input-decomposition balance of heterotrophic processes should be incorporated in the model. In this study, we estimated the input-decomposition balance and the rate of soil carbon accumulation using a modified Roth-C model. We measured the respiration rate of many types of organic matter, such as leaf litter, fine root litter, twigs and coarse woody debris, using a chamber method, which allows us to relate respiration rate to the diameter of the organic matter (leaf and fine root litter were assigned a diameter of zero). Small organic matter, such as leaf and fine root litter, shows high decomposition respiration, which may be caused by differences in the structure of the organic matter: because coarse woody debris is roughly cylindrical, microbes decompose it from its surface, so its respiration rate is lower than that of leaf and fine root litter. Based on these results, we modified the Roth-C model and estimated the soil carbon accumulation rate in recent years. A soil survey showed that the forest soil stored 30 tC ha-1 in the O and A horizons, which we used to evaluate the modified model. Since NEP can be expressed as the annual increment in the plant biomass plus soil carbon pools, estimating NEP with this approach allows us to cross-check the NEP estimates obtained by micrometeorological and ecological approaches and reduce the uncertainty of NEP estimation.

  17. Decomposition of energetic chemicals contaminated with iron or stainless steel.

    PubMed

    Chervin, Sima; Bodman, Glenn T; Barnhart, Richard W

    2006-03-17

    Contamination of chemicals or reaction mixtures with iron or stainless steel is likely to take place during chemical processing. If energetic and thermally unstable chemicals are involved in a manufacturing process, contamination with iron or stainless steel can affect the decomposition characteristics of these chemicals and, subsequently, the safety of the processes, and should be investigated. The goal of this project was to undertake a systematic approach to studying the impact of iron or stainless steel contamination on the decomposition characteristics of different chemical classes. Differential scanning calorimetry (DSC) was used to study the decomposition reaction by testing each chemical in pure form and in mixtures with iron and stainless steel. The following classes of energetic chemicals were investigated: nitrobenzenes, tetrazoles, hydrazines, hydroxylamines and oximes, sulfonic acid derivatives and monomers. The following non-energetic groups were investigated for contributing effects: halogens, hydroxyls, amines, amides, nitriles, sulfonic acid esters, carbonyl halides and salts of hydrochloric acid. Based on the results obtained, conclusions were drawn regarding the sensitivity of the decomposition reaction to contamination with iron and stainless steel for the chemical classes listed above. It was demonstrated that the most sensitive classes are hydrazines and hydroxylamines/oximes. Contamination of these chemicals with iron or stainless steel not only destabilizes them, leading to decomposition at significantly lower temperatures, but also sometimes increases the severity of the decomposition. The sensitivity of nitrobenzenes to contamination with iron or stainless steel depended upon the presence of other contributing groups: groups such as acid chlorides or chlorine/fluorine significantly increased the effect of contamination on the decomposition characteristics of nitrobenzenes. The decomposition of sulfonic acid derivatives and tetrazoles was not affected by the presence of iron or stainless steel.

  18. Surface EMG decomposition based on K-means clustering and convolution kernel compensation.

    PubMed

    Ning, Yong; Zhu, Xiangjun; Zhu, Shanan; Zhang, Yingchun

    2015-03-01

    A new approach has been developed by combining the K-means clustering (KMC) method and a modified convolution kernel compensation (CKC) method for multichannel surface electromyogram (EMG) decomposition. The KMC method was first utilized to cluster vectors of observations at different time instants and then estimate the initial innervation pulse train (IPT). The CKC method, modified with a novel multistep iterative process, was conducted to update the estimated IPT. The performance of the proposed K-means clustering-modified CKC (KmCKC) approach was evaluated by reconstructing IPTs from both simulated and experimental surface EMG signals. The KmCKC approach successfully reconstructed all 10 IPTs from the simulated surface EMG signals with true positive rates (TPR) of over 90% at a low signal-to-noise ratio (SNR) of -10 dB. More than 10 motor units were also successfully extracted from the 64-channel experimental surface EMG signals of the first dorsal interosseous (FDI) muscles when a contraction force was held at 8 N by using the KmCKC approach. A "two-source" test was further conducted with 64-channel surface EMG signals. The high percentage of common MUs and common pulses (over 92% at all force levels) between the IPTs reconstructed from the two independent groups of surface EMG signals demonstrates the reliability and capability of the proposed KmCKC approach in multichannel surface EMG decomposition. Results from both simulated and experimental data are consistent and confirm that the proposed KmCKC approach can successfully reconstruct IPTs with high accuracy at different levels of contraction.
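
    The first (clustering) stage can be sketched as follows, treating each time instant's multichannel sample as an observation vector and reading candidate pulse trains off the cluster labels; shapes, the cluster count, and the binary-train encoding are assumptions rather than the paper's exact procedure.

      import numpy as np
      from sklearn.cluster import KMeans

      def initial_ipts(emg, n_clusters=10):
          """emg: (n_channels, n_samples); returns (n_clusters, n_samples) binary trains."""
          labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(emg.T)
          trains = np.zeros((n_clusters, emg.shape[1]), dtype=int)
          trains[labels, np.arange(emg.shape[1])] = 1   # mark candidate firing instants
          return trains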

  19. Dictionary-Based Tensor Canonical Polyadic Decomposition

    NASA Astrophysics Data System (ADS)

    Cohen, Jeremy Emile; Gillis, Nicolas

    2018-04-01

    To ensure interpretability of the extracted sources in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed, which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.

  20. The nexus between geopolitical uncertainty and crude oil markets: An entropy-based wavelet analysis

    NASA Astrophysics Data System (ADS)

    Uddin, Gazi Salah; Bekiros, Stelios; Ahmed, Ali

    2018-04-01

    The global financial crisis and the subsequent geopolitical turbulence in energy markets have brought increased attention to proper statistical modeling, especially of the crude oil markets. In particular, we utilize a time-frequency decomposition approach based on wavelet analysis to explore the inherent dynamics and the causal interrelationships between various types of geopolitical, economic and financial uncertainty indices and oil markets. Via the introduction of a mixed discrete-continuous multiresolution analysis, we employ the entropic criterion for the selection of the optimal decomposition level of a MODWT, as well as the continuous-time coherency and phase measures for the detection of business cycle (a)synchronization. Overall, a strong heterogeneity in the revealed interrelationships is detected over time and across scales.

  1. Independent Component Analysis-motivated Approach to Classificatory Decomposition of Cortical Evoked Potentials

    PubMed Central

    Smolinski, Tomasz G; Buchanan, Roger; Boratyn, Grzegorz M; Milanova, Mariofanna; Prinz, Astrid A

    2006-01-01

    Background Independent Component Analysis (ICA) proves to be useful in the analysis of neural activity, as it allows for identification of distinct sources of activity. Applied to measurements registered in a controlled setting and under exposure to an external stimulus, it can facilitate analysis of the impact of the stimulus on those sources. The link between the stimulus and a given source can be verified by a classifier that is able to "predict" the condition a given signal was registered under, solely based on the components. However, the ICA's assumption about statistical independence of sources is often unrealistic and turns out to be insufficient to build an accurate classifier. Therefore, we propose to utilize a novel method, based on hybridization of ICA, multi-objective evolutionary algorithms (MOEA), and rough sets (RS), that attempts to improve the effectiveness of signal decomposition techniques by providing them with "classification-awareness." Results The preliminary results described here are very promising and further investigation of other MOEAs and/or RS-based classification accuracy measures should be pursued. Even a quick visual analysis of those results can provide an interesting insight into the problem of neural activity analysis. Conclusion We present a methodology of classificatory decomposition of signals. One of the main advantages of our approach is the fact that rather than solely relying on often unrealistic assumptions about statistical independence of sources, components are generated in the light of an underlying classification problem itself. PMID:17118151
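
    A plain ICA-then-classify baseline (without the MOEA/rough-sets hybridization, which is the paper's actual contribution) can be sketched with scikit-learn; the trial/label shapes and the choice of classifier are assumptions.

      import numpy as np
      from sklearn.decomposition import FastICA
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      def condition_predictability(trials, labels, n_components=8):
          """trials: (n_trials, n_samples) evoked potentials; labels: (n_trials,) conditions."""
          ica = FastICA(n_components=n_components, random_state=0)
          coeffs = ica.fit_transform(trials)       # per-trial component coefficients
          clf = LogisticRegression(max_iter=1000)
          # How well the recording condition is predicted from the components alone.
          return cross_val_score(clf, coeffs, labels, cv=5).mean()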

  2. Integration of Neuroimaging and Microarray Datasets through Mapping and Model-Theoretic Semantic Decomposition of Unstructured Phenotypes

    PubMed Central

    Pantazatos, Spiro P.; Li, Jianrong; Pavlidis, Paul; Lussier, Yves A.

    2009-01-01

    An approach towards heterogeneous neuroscience dataset integration is proposed that uses Natural Language Processing (NLP) and a knowledge-based phenotype organizer system (PhenOS) to link ontology-anchored terms to underlying data from each database, and then maps these terms based on a computable model of disease (SNOMED CT®). The approach was implemented using sample datasets from fMRIDC, GEO, The Whole Brain Atlas and Neuronames, and allowed for complex queries such as “List all disorders with a finding site of brain region X, and then find the semantically related references in all participating databases based on the ontological model of the disease or its anatomical and morphological attributes”. Precision of the NLP-derived coding of the unstructured phenotypes in each dataset was 88% (n = 50), and precision of the semantic mapping between these terms across datasets was 98% (n = 100). To our knowledge, this is the first example of the use of both semantic decomposition of disease relationships and hierarchical information found in ontologies to integrate heterogeneous phenotypes across clinical and molecular datasets. PMID:20495688

  3. Evolution-Based Functional Decomposition of Proteins

    PubMed Central

    Rivoire, Olivier; Reynolds, Kimberly A.; Ranganathan, Rama

    2016-01-01

    The essential biological properties of proteins—folding, biochemical activities, and the capacity to adapt—arise from the global pattern of interactions between amino acid residues. The statistical coupling analysis (SCA) is an approach to defining this pattern that involves the study of amino acid coevolution in an ensemble of sequences comprising a protein family. This approach indicates a functional architecture within proteins in which the basic units are coupled networks of amino acids termed sectors. This evolution-based decomposition has potential for new understandings of the structural basis for protein function. To facilitate its usage, we present here the principles and practice of the SCA and introduce new methods for sector analysis in a python-based software package (pySCA). We show that the pattern of amino acid interactions within sectors is linked to the divergence of functional lineages in a multiple sequence alignment—a model for how sector properties might be differentially tuned in members of a protein family. This work provides new tools for studying proteins and for generally testing the concept of sectors as the principal units of function and adaptive variation. PMID:27254668

  4. A data-driven method to enhance vibration signal decomposition for rolling bearing fault analysis

    NASA Astrophysics Data System (ADS)

    Grasso, M.; Chatterton, S.; Pennacchi, P.; Colosimo, B. M.

    2016-12-01

    Health condition analysis and diagnostics of rotating machinery requires the capability of properly characterizing the information content of sensor signals in order to detect and identify possible fault features. Time-frequency analysis plays a fundamental role, as it allows determining both the existence and the causes of a fault. The separation of components belonging to different time-frequency scales, associated with either healthy or faulty conditions, represents a challenge that motivates the development of effective methodologies for multi-scale signal decomposition. In this framework, the Empirical Mode Decomposition (EMD) is a flexible tool, thanks to its data-driven and adaptive nature. However, the EMD usually yields an over-decomposition of the original signals into a large number of intrinsic mode functions (IMFs). The selection of the most relevant IMFs is a challenging task, and the reference literature lacks automated methods to achieve a synthetic decomposition into a few physically meaningful modes while avoiding the generation of spurious or meaningless modes. The paper proposes a novel automated approach aimed at generating a decomposition into a minimal number of relevant modes, called Combined Mode Functions (CMFs), each consisting of a sum of adjacent IMFs that share similar properties. The final number of CMFs is selected in a fully data-driven way, leading to an enhanced characterization of the signal content without any information loss. A novel criterion to assess the dissimilarity between adjacent CMFs is proposed, based on probability density functions of frequency spectra. The method is suitable for analyzing vibration signals that may be periodically acquired within the operating life of rotating machinery. A rolling element bearing fault analysis based on experimental data is presented to demonstrate the performance of the method and the benefits it provides.
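
    The merging rule can be illustrated compactly: normalize each mode's power spectrum into a probability density and merge adjacent modes whose dissimilarity falls below a threshold. The sketch below uses the Jensen-Shannon distance as a stand-in for the paper's dissimilarity criterion; the threshold value is an assumption.

      import numpy as np
      from scipy.spatial.distance import jensenshannon

      def merge_imfs(imfs, threshold=0.4):
          """imfs: list of equal-length 1-D arrays (finest first); returns list of CMFs."""
          def spectral_pdf(x):
              p = np.abs(np.fft.rfft(x)) ** 2
              return p / p.sum()
          cmfs = [imfs[0]]
          for imf in imfs[1:]:
              if jensenshannon(spectral_pdf(cmfs[-1]), spectral_pdf(imf)) < threshold:
                  cmfs[-1] = cmfs[-1] + imf         # similar spectra: same CMF
              else:
                  cmfs.append(imf)                  # dissimilar: start a new CMF
          return cmfs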

  5. Qualitative Fault Isolation of Hybrid Systems: A Structural Model Decomposition-Based Approach

    NASA Technical Reports Server (NTRS)

    Bregon, Anibal; Daigle, Matthew; Roychoudhury, Indranil

    2016-01-01

    Quick and robust fault diagnosis is critical to ensuring safe operation of complex engineering systems. A large number of techniques are available to provide fault diagnosis in systems with continuous dynamics. However, many systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete behavioral modes, each with its own continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task computationally more complex due to the large number of possible system modes and the existence of autonomous mode transitions. This paper presents a qualitative fault isolation framework for hybrid systems based on structural model decomposition. The fault isolation is performed by analyzing the qualitative information of the residual deviations. However, in hybrid systems this process becomes complex due to the possible existence of observation delays, which can cause observed deviations to be inconsistent with the expected deviations for the current mode of the system. The great advantage of structural model decomposition is that (i) it allows the design of residuals that respond to only a subset of the faults, and (ii) every time a mode change occurs, only a subset of the residuals needs to be reconfigured, thus reducing the complexity of the reasoning process for isolation purposes. To demonstrate and test the validity of our approach, we use an electric circuit simulation as the case study.

  6. A New Domain Decomposition Approach for the Gust Response Problem

    NASA Technical Reports Server (NTRS)

    Scott, James R.; Atassi, Hafiz M.; Susan-Resiga, Romeo F.

    2002-01-01

    A domain decomposition method is developed for solving the aerodynamic/aeroacoustic problem of an airfoil in a vortical gust. The computational domain is divided into inner and outer regions wherein the governing equations are cast in different forms suitable for accurate computations in each region. Boundary conditions which ensure continuity of pressure and velocity are imposed along the interface separating the two regions. A numerical study is presented for reduced frequencies ranging from 0.1 to 3.0. It is seen that the domain decomposition approach succeeds in providing robust and grid-independent solutions.

  7. Initial mechanisms for the unimolecular decomposition of electronically excited bisfuroxan based energetic materials.

    PubMed

    Yuan, Bing; Bernstein, Elliot R

    2017-01-07

    Unimolecular decomposition of energetic molecules, 3,3'-diamino-4,4'-bisfuroxan (labeled as A) and 4,4'-diamino-3,3'-bisfuroxan (labeled as B), has been explored via 226/236 nm single photon laser excitation/decomposition. These two energetic molecules, subsequent to UV excitation, create NO as an initial decomposition product at the nanosecond excitation energies (5.0-5.5 eV) with warm vibrational temperature (1170 ± 50 K for A, 1400 ± 50 K for B) and cold rotational temperature (<55 K). Initial decomposition mechanisms for these two electronically excited, isolated molecules are explored at the complete active space self-consistent field (CASSCF(12,12)/6-31G(d)) level with and without MP2 correction. Potential energy surface calculations illustrate that conical intersections play an essential role in the calculated decomposition mechanisms. Based on experimental observations and theoretical calculations, NO product is released through opening of the furoxan ring: ring opening can occur either on the S1 excited or the S0 ground electronic state. The reaction path with the lowest energetic barrier is that for which the furoxan ring opens on the S1 state via the breaking of the N1-O1 bond. Subsequently, the molecule moves to the ground S0 state through related ring-opening conical intersections, and an NO product is formed on the ground state surface with little rotational excitation at the last NO dissociation step. For the ground state ring-opening decomposition mechanism, the N-O bond and C-N bond break together in order to generate dissociated NO. With the MP2 correction for the CASSCF(12,12) surface, the potential energies of molecules with dissociated NO product are in the range from 2.04 to 3.14 eV, close to the theoretical result for the density functional theory (B3LYP) and MP2 methods. The CASMP2(12,12) corrected approach is essential in order to obtain a reasonable potential energy surface that corresponds to the observed decomposition behavior of these molecules. Apparently, highly excited states are essential for an accurate representation of the kinetics and dynamics of excited state decomposition of both of these bisfuroxan energetic molecules. The experimental vibrational temperatures of NO products of A and B are about 800-1000 K lower than those of previously studied energetic molecules with NO as a decomposition product.

  8. Efficient subtle motion detection from high-speed video for sound recovery and vibration analysis using singular value decomposition-based approach

    NASA Astrophysics Data System (ADS)

    Zhang, Dashan; Guo, Jie; Jin, Yi; Zhu, Chang'an

    2017-09-01

    High-speed cameras provide full-field measurement of structure motions and have been applied in nondestructive testing and noncontact structure monitoring. Recently, a phase-based method has been proposed to extract sound-induced vibrations from phase variations in videos, and this method provides insights into the study of remote sound surveillance and material analysis. Here, an efficient singular value decomposition (SVD)-based approach is introduced to detect sound-induced subtle motions from pixel intensities in silent high-speed videos. A high-speed camera is first applied to capture a video of the vibrating objects stimulated by sound fluctuations. Then, subimages collected from a small region of the captured video are reshaped into vectors and assembled into a matrix. Orthonormal image bases (OIBs) are obtained from the SVD of the matrix; the vibration signal can then be obtained by projecting subsequent subimages onto specific OIBs. A simulation test validates the effectiveness and efficiency of the proposed method, and two experiments demonstrate its potential applications in sound recovery and material analysis. Results show that the proposed method efficiently detects subtle motions from the video.
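
    The SVD stage can be sketched directly: stack the vectorized subimages as matrix columns, take the orthonormal image bases (OIBs) from the left singular vectors, and read the vibration signal off the projections onto one basis. Shapes and the basis choice below are assumptions.

      import numpy as np

      def motion_signal(frames, basis_index=1):
          """frames: (n_frames, h, w) subimages from a small region of the video."""
          A = frames.reshape(frames.shape[0], -1).astype(float).T  # columns = frames
          A -= A.mean(axis=1, keepdims=True)         # remove the static background
          U, S, Vt = np.linalg.svd(A, full_matrices=False)
          oib = U[:, basis_index]                    # one orthonormal image basis
          return A.T @ oib                           # per-frame projection = signal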

  9. Hierarchical prediction of industrial water demand based on refined Laspeyres decomposition analysis.

    PubMed

    Shang, Yizi; Lu, Shibao; Gong, Jiaguo; Shang, Ling; Li, Xiaofei; Wei, Yongping; Shi, Hongwang

    2017-12-01

    A recent study decomposed the changes in industrial water use into three hierarchies (output, technology, and structure) using a refined Laspeyres decomposition model, and found monotonous and exclusive trends in the output and technology hierarchies. Based on that research, this study proposes a hierarchical prediction approach to forecast future industrial water demand. Three water demand scenarios (high, medium, and low) were then established based on potential future industrial structural adjustments, and used to predict water demand for the structural hierarchy. The predictive results of this approach were compared with results from a grey prediction model (GPM(1,1)). The comparison shows that the results of the two approaches were basically identical, differing by less than 10%. Taking Tianjin, China, as a case, and using data from 2003-2012, this study predicts that industrial water demand will continuously increase, reaching 580 million m³, 776.4 million m³, and approximately 1.09 billion m³ by the years 2015, 2020 and 2025 respectively. It is concluded that Tianjin will soon face another water crisis if no immediate measures are taken. This study recommends that Tianjin adjust its industrial structure with water savings as the main objective, and actively seek new sources of water to increase its supply.
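
    The grey-model baseline is compact enough to sketch. Below is the textbook formulation of this model (commonly written GM(1,1)), not the authors' code; it assumes a short, strictly positive annual series.

      import numpy as np

      def gm11_forecast(x0, n_ahead):
          """x0: positive 1-D series; returns forecasts for the next n_ahead steps."""
          x0 = np.asarray(x0, dtype=float)
          x1 = np.cumsum(x0)                                # accumulated series
          z1 = 0.5 * (x1[1:] + x1[:-1])                     # background values
          B = np.column_stack([-z1, np.ones_like(z1)])
          a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # developing coeff., grey input
          k = np.arange(len(x0), len(x0) + n_ahead)
          x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
          x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
          return x1_hat - x1_prev                           # de-accumulate to forecasts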

  10. Assessments on GOCE-based Gravity Field Model Comparisons with Terrestrial Data Using Wavelet Decomposition and Spectral Enhancement Approaches

    NASA Astrophysics Data System (ADS)

    Erol, Serdar; Serkan Isık, Mustafa; Erol, Bihter

    2016-04-01

    Data from the recent Earth gravity field satellite missions have led to significant improvements in Global Geopotential Models in terms of both accuracy and resolution. However, the improvement in accuracy is not uniform across the Earth, and therefore the level of improvement must be quantified locally using independent data. The level-3 products of the gravity field satellite missions can be validated, independently of their estimation procedures, against various data sets, such as terrestrial gravity observations, astrogeodetic vertical deflections, GPS/leveling data, and the stationary sea surface topography. Quantifying the quality of the gravity field functionals derived from recent products is important for regional geoid modeling based on an optimal fusion of satellite and terrestrial data, in addition to the statistical reporting of improvement rates as a function of spatial location. For comparable results, the validations must account for the errors and systematic differences between the data sets and for the varying spectral content of the compared signals. In this manner, this study compares the performance of wavelet decomposition and spectral enhancement techniques in validating GOCE/GRACE-based Earth gravity field models using GPS/leveling and terrestrial gravity data in Turkey. The terrestrial validation data are filtered using the wavelet decomposition technique, and the numerical results at varying levels of decomposition are compared with those derived using the spectral enhancement approach with the contribution of an ultra-high-resolution Earth gravity field model. The tests include the GO-DIR-R5, GO-TIM-R5, GOCO05S, EIGEN-6C4 and EGM2008 global models. The conclusions discuss the strengths and drawbacks of both concepts and report the performance of the tested gravity field models with an estimate of their contribution to modeling the geoid in Turkish territory.

  11. A Compound Fault Diagnosis for Rolling Bearings Method Based on Blind Source Separation and Ensemble Empirical Mode Decomposition

    PubMed Central

    Wang, Huaqing; Li, Ruitong; Tang, Gang; Yuan, Hongfang; Zhao, Qingliang; Cao, Xi

    2014-01-01

    A compound fault signal usually contains multiple characteristic signals and strong confounding noise, which makes it difficult to separate weak fault signals by conventional means, such as FFT-based envelope detection, wavelet transform or empirical mode decomposition used individually. In order to improve the compound-fault diagnosis of rolling bearings via signal separation, the present paper proposes a new method to identify compound faults from measured mixed signals, based on the ensemble empirical mode decomposition (EEMD) method and the independent component analysis (ICA) technique. With this approach, a vibration signal is first decomposed into intrinsic mode functions (IMF) by the EEMD method to obtain multichannel signals. Then, according to a cross-correlation criterion, the corresponding IMFs are selected as the input matrix of ICA. Finally, the compound faults can be separated effectively by executing the ICA method, which makes the fault features more easily extracted and more clearly identified. Experimental results validate the effectiveness of the proposed method in compound fault separation, which works not only for the outer race defect, but also for the roller defect and the unbalance fault of the experimental system. PMID:25289644
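
    The EEMD-then-ICA chain can be sketched in a few lines (PyEMD and scikit-learn are assumed; the correlation threshold and source count are placeholders): EEMD turns the single vibration signal into a multichannel surrogate, the cross-correlation criterion selects IMFs, and ICA separates the sources.

      import numpy as np
      from PyEMD import EEMD
      from sklearn.decomposition import FastICA

      def separate_faults(signal, n_sources=2, corr_thr=0.3):
          signal = np.asarray(signal, dtype=float)
          imfs = EEMD().eemd(signal)                    # multichannel surrogate
          # Keep IMFs sufficiently correlated with the raw signal.
          corr = np.array([abs(np.corrcoef(imf, signal)[0, 1]) for imf in imfs])
          X = imfs[corr > corr_thr].T                   # selected IMFs as observations
          return FastICA(n_components=n_sources, random_state=0).fit_transform(X).T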

  12. Forest height estimation from mountain forest areas using general model-based decomposition for polarimetric interferometric synthetic aperture radar images

    NASA Astrophysics Data System (ADS)

    Minh, Nghia Pham; Zou, Bin; Cai, Hongjun; Wang, Chengyi

    2014-01-01

    The estimation of forest parameters over mountain forest areas using polarimetric interferometric synthetic aperture radar (PolInSAR) images is of great interest in remote sensing applications. For mountain forest areas, scattering mechanisms are strongly affected by ground topography variations, yet most previous studies on modeling the microwave backscattering signatures of forests have been carried out over relatively flat areas. Therefore, a new algorithm for forest height estimation over mountain forest areas using the general model-based decomposition (GMBD) for PolInSAR images is proposed. This algorithm enables the retrieval of not only the forest parameters, but also the magnitude associated with each mechanism. In addition, general double- and single-bounce scattering models are proposed to fit the cross-polarization and off-diagonal terms by separating their independent orientation angles, something not achieved in previous model-based decompositions. The efficiency of the proposed approach is demonstrated with simulated data from PolSARProSim software and ALOS-PALSAR spaceborne PolInSAR datasets over the Kalimantan area, Indonesia. Experimental results indicate that forest height can be effectively estimated by GMBD.

  13. Intelligent Diagnosis Method for Rotating Machinery Using Dictionary Learning and Singular Value Decomposition.

    PubMed

    Han, Te; Jiang, Dongxiang; Zhang, Xiaochen; Sun, Yankui

    2017-03-27

    Rotating machinery is widely used in industrial applications. With the trend towards more precise and more critical operating conditions, mechanical failures may easily occur. Condition monitoring and fault diagnosis (CMFD) technology is an effective tool to enhance the reliability and security of rotating machinery. In this paper, an intelligent fault diagnosis method based on dictionary learning and singular value decomposition (SVD) is proposed. First, the dictionary learning scheme generates an adaptive dictionary whose atoms reveal the underlying structure of the raw signals; essentially, dictionary learning is employed as an adaptive feature extraction method that requires no prior knowledge. Second, the singular value sequence of the learned dictionary matrix is used to form the feature vector. Since this vector is generally of high dimensionality, a simple and practical principal component analysis (PCA) is applied to reduce the dimensionality. Finally, the K-nearest neighbor (KNN) algorithm is adopted for automatic identification and classification of fault patterns. Two experimental case studies are investigated to corroborate the effectiveness of the proposed method in intelligent diagnosis of rotating machinery faults. The comparison analysis validates that the dictionary-learning-based matrix construction approach outperforms the mode-decomposition-based methods in terms of capacity and adaptability for feature extraction.
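
    The whole chain condenses to a short scikit-learn pipeline. The sketch below makes several assumptions (segmenting each signal with a sliding window, the atom/window sizes, the classifier settings) and illustrates the idea rather than the paper's implementation.

      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning, PCA
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline

      def dict_svd_features(signal, n_atoms=32, win=64):
          """Learn a dictionary from signal segments; return its singular value sequence."""
          segs = np.lib.stride_tricks.sliding_window_view(signal, win)[::win // 2]
          dico = MiniBatchDictionaryLearning(n_components=n_atoms, random_state=0)
          D = dico.fit(segs).components_             # learned dictionary matrix
          return np.linalg.svd(D, compute_uv=False)  # feature vector

      # X = np.array([dict_svd_features(s) for s in signals]); y = fault labels
      # clf = make_pipeline(PCA(n_components=10), KNeighborsClassifier(5)).fit(X, y)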

  14. Parallel processing methods for space based power systems

    NASA Technical Reports Server (NTRS)

    Berry, F. C.

    1993-01-01

    This report presents a method for performing load-flow analysis of a power system using a decomposition approach. The power system of the Space Shuttle is used as a basis to build a model for the load-flow analysis. To test the decomposition method, simulations were performed on power systems of 16, 25, 34, 43, 52, 61, 70, and 79 nodes. Each of the power systems was divided into subsystems and simulated under steady-state conditions. The results from these tests have been found to be as accurate as tests performed using a standard serial simulator. The division of the power systems into subsystems was done by assigning a processor to each area; there were 13 transputers available, so up to 13 subsystems could be simulated at the same time. The preliminary results reported here for load-flow analysis using a decomposition principle show that the decomposition algorithm is well suited for parallel processing and provides increases in the speed of execution.

  16. A Structural Model Decomposition Framework for Systems Health Management

    NASA Technical Reports Server (NTRS)

    Roychoudhury, Indranil; Daigle, Matthew J.; Bregon, Anibal; Pulido, Belamino

    2013-01-01

    Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.

  17. Mechanism of thermal decomposition of K2FeO4 and BaFeO4: A review

    NASA Astrophysics Data System (ADS)

    Sharma, Virender K.; Machala, Libor

    2016-12-01

    This paper presents the thermal decomposition of potassium ferrate(VI) (K2FeO4) and barium ferrate(VI) (BaFeO4) in air and nitrogen atmospheres. Mössbauer spectroscopy and nuclear forward scattering (NFS) of synchrotron radiation are reviewed to advance understanding of the electron-transfer processes involved in the reduction of ferrate(VI) to Fe(III) phases. Direct evidence of Fe(V) and Fe(IV) as intermediate iron species, obtained with the applied techniques, is given. Thermal decomposition of K2FeO4 involved Fe(V), Fe(IV), and K3FeO3 as intermediate species, while BaFeO3 (i.e., Fe(IV)) was the only intermediate species during the decomposition of BaFeO4. The nature of the ferrite phases formed as the final Fe(III) species of the thermal decomposition of K2FeO4 and BaFeO4 under different conditions is evaluated. Steps of the mechanisms of thermal decomposition of ferrate(VI) that reasonably explain the experimental observations of the applied approaches, in conjunction with thermal and surface techniques, are summarized.

  18. An investigation of the use of temporal decomposition in space mission scheduling

    NASA Technical Reports Server (NTRS)

    Bullington, Stanley E.; Narayanan, Venkat

    1994-01-01

    This research involves an examination of techniques for solving scheduling problems in long-duration space missions. The mission timeline is broken up into several time segments, which are then scheduled incrementally. Three methods are presented for identifying the activities that are to be attempted within these segments. The first method is a mathematical model, which is presented primarily to illustrate the structure of the temporal decomposition problem. Since the mathematical model is bound to be computationally prohibitive for realistic problems, two heuristic assignment procedures are also presented. The first heuristic method is based on dispatching rules for activity selection, and the second heuristic assigns performances of a model evenly over timeline segments. These heuristics are tested using a sample Space Station mission and a Spacelab mission. The results are compared with those obtained by scheduling the missions without any problem decomposition. The applicability of this approach to large-scale mission scheduling problems is also discussed.

  19. Pole-Like Street Furniture Decomposition in Mobile Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Li, F.; Oude Elberink, S.; Vosselman, G.

    2016-06-01

    Automatic semantic interpretation of street furniture has become a popular topic in recent years. Current studies detect street furniture as connected components of points above street level. Street furniture classification based on properties of such components suffers from large intra-class variability of shapes and cannot deal with mixed classes like traffic signs attached to light poles. In this paper, we focus on the decomposition of point clouds of pole-like street furniture. A novel street furniture decomposition method is proposed, which consists of three steps: (i) acquisition of prior knowledge, (ii) pole extraction, and (iii) component separation. For the pole extraction, a novel global pole extraction approach is proposed to handle three different cases of street furniture. In the evaluation of results, which involves the decomposition of 27 different instances of street furniture, we demonstrate that our method decomposes mixed-class street furniture into poles and components corresponding to different functionalities.

  20. Single-Scale Fusion: An Effective Approach to Merging Images.

    PubMed

    Ancuti, Codruta O; Ancuti, Cosmin; De Vleeschouwer, Christophe; Bovik, Alan C

    2017-01-01

    Due to its robustness and effectiveness, multi-scale fusion (MSF) based on the Laplacian pyramid decomposition has emerged as a popular technique that has shown utility in many applications. Guided by several intuitive measures (weight maps), the MSF process is versatile and straightforward to implement. However, the number of pyramid levels increases with the image size, which implies sophisticated data management and memory accesses, as well as additional computations. Here, we introduce a simplified formulation that reduces MSF to a single-level process. Starting from the MSF decomposition, we explain both mathematically and intuitively (visually) a way to simplify the classical MSF approach with minimal loss of information. The resulting single-scale fusion (SSF) solution is a close approximation of the MSF process that eliminates important redundant computations. It also provides insights into why MSF is so effective. While our simplified expression is derived in the context of high dynamic range imaging, we show its generality on several well-known fusion-based applications, such as image compositing, extended depth of field, medical imaging, and blending thermal (infrared) images with visible light. Besides visual validation, quantitative evaluations demonstrate that our SSF strategy is able to yield results that are highly competitive with traditional MSF approaches.
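
    For orientation, the classical MSF baseline that SSF approximates can be sketched as follows (OpenCV assumed; inputs are float32 grayscale images with caller-supplied weight maps normalized to sum to one per pixel).

      import cv2
      import numpy as np

      def msf_fuse(images, weights, levels=5):
          """Blend Laplacian pyramids of the inputs with Gaussian pyramids of the weights."""
          fused = None
          for img, w in zip(images, weights):
              gi, gw = [img], [w]
              for _ in range(levels):                      # Gaussian pyramids
                  gi.append(cv2.pyrDown(gi[-1]))
                  gw.append(cv2.pyrDown(gw[-1]))
              lap = [gi[l] - cv2.pyrUp(gi[l + 1], dstsize=gi[l].shape[1::-1])
                     for l in range(levels)] + [gi[-1]]    # Laplacian pyramid
              contrib = [lap[l] * gw[l] for l in range(levels + 1)]
              fused = contrib if fused is None else [f + c for f, c in zip(fused, contrib)]
          out = fused[-1]                                  # collapse from the coarsest level
          for l in range(levels - 1, -1, -1):
              out = cv2.pyrUp(out, dstsize=fused[l].shape[1::-1]) + fused[l]
          return out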

  1. Spectral decompositions of multiple time series: a Bayesian non-parametric approach.

    PubMed

    Macaro, Christian; Prado, Raquel

    2014-01-01

    We consider spectral decompositions of multiple time series that arise in studies where the interest lies in assessing the influence of two or more factors. We write the spectral density of each time series as a sum of the spectral densities associated with the different levels of the factors. We then use Whittle's approximation to the likelihood function and follow a Bayesian non-parametric approach to obtain posterior inference on the spectral densities based on Bernstein-Dirichlet prior distributions. The prior is strategically important, as it carries identifiability conditions for the models and allows us to quantify our degree of confidence in such conditions. A Markov chain Monte Carlo (MCMC) algorithm for posterior inference within this class of frequency-domain models is presented. We illustrate the approach by analyzing simulated and real data via spectral one-way and two-way models. In particular, we present an analysis of functional magnetic resonance imaging (fMRI) brain responses measured in individuals who participated in a designed experiment to study pain perception in humans.
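
    The Whittle approximation at the heart of this approach is easy to sketch. The snippet below is a minimal illustration with names of our own choosing; the Bernstein-Dirichlet prior and MCMC machinery of the paper are beyond a snippet.

      # Whittle log-likelihood sketch: sum over Fourier frequencies of
      # -log f(w_j) - I(w_j)/f(w_j), with I the periodogram.
      import numpy as np

      def whittle_loglik(x, spec_density):
          n = len(x)
          periodogram = np.abs(np.fft.fft(x - x.mean()))**2 / (2 * np.pi * n)
          j = np.arange(1, (n - 1) // 2 + 1)          # Fourier frequencies 0 < w < pi
          omegas = 2 * np.pi * j / n
          f = spec_density(omegas)
          return -np.sum(np.log(f) + periodogram[j] / f)

      # Example: score white noise against its true flat spectrum f(w) = 1/(2*pi).
      rng = np.random.default_rng(0)
      x = rng.standard_normal(512)
      print(whittle_loglik(x, lambda w: np.full_like(w, 1 / (2 * np.pi))))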

  2. Effect decomposition in the presence of an exposure-induced mediator-outcome confounder

    PubMed Central

    VanderWeele, Tyler J.; Vansteelandt, Stijn; Robins, James M.

    2014-01-01

    Methods from causal mediation analysis have generalized the traditional approach to direct and indirect effects in the epidemiologic and social science literature by allowing for interaction and non-linearities. However, the methods from the causal inference literature have themselves been subject to a major limitation in that the so-called natural direct and indirect effects that are employed are not identified from data whenever there is a variable that is affected by the exposure, which also confounds the relationship between the mediator and the outcome. In this paper we describe three alternative approaches to effect decomposition that give quantities that can be interpreted as direct and indirect effects, and that can be identified from data even in the presence of an exposure-induced mediator-outcome confounder. We describe a simple weighting-based estimation method for each of these three approaches, illustrated with data from perinatal epidemiology. The methods described here can shed insight into pathways and questions of mediation even when an exposure-induced mediator-outcome confounder is present. PMID:24487213

  3. Algebraic multigrid domain and range decomposition (AMG-DD / AMG-RD)*

    DOE PAGES

    Bank, R.; Falgout, R. D.; Jones, T.; ...

    2015-10-29

    In modern large-scale supercomputing applications, algebraic multigrid (AMG) is a leading choice for solving matrix equations. However, the high cost of communication relative to that of computation is a concern for the scalability of traditional implementations of AMG on emerging architectures. This paper introduces two new algebraic multilevel algorithms, algebraic multigrid domain decomposition (AMG-DD) and algebraic multigrid range decomposition (AMG-RD), that replace traditional AMG V-cycles with a fully overlapping domain decomposition approach. While the methods introduced here are similar in spirit to the geometric methods developed by Brandt and Diskin [Multigrid solvers on decomposed domains, in Domain Decomposition Methods in Science and Engineering, Contemp. Math. 157, AMS, Providence, RI, 1994, pp. 135-155], Mitchell [Electron. Trans. Numer. Anal., 6 (1997), pp. 224-233], and Bank and Holst [SIAM J. Sci. Comput., 22 (2000), pp. 1411-1443], they differ primarily in that they are purely algebraic: AMG-RD and AMG-DD trade communication for computation by forming global composite “grids” based only on the matrix, not the geometry. (As is the usual AMG convention, “grids” here should be taken only in the algebraic sense, regardless of whether or not they correspond to any geometry.) Another important distinguishing feature of AMG-RD and AMG-DD is their novel residual communication process that enables effective parallel computation on composite grids, avoiding the all-to-all communication costs of the geometric methods. The main purpose of this paper is to study the potential of these two algebraic methods as possible alternatives to existing AMG approaches for future parallel machines. To that end, the paper develops some theoretical properties of these methods and reports on serial numerical tests of their convergence properties over a spectrum of problem parameters.

  4. Integrand reduction for two-loop scattering amplitudes through multivariate polynomial division

    NASA Astrophysics Data System (ADS)

    Mastrolia, Pierpaolo; Mirabella, Edoardo; Ossola, Giovanni; Peraro, Tiziano

    2013-04-01

    We describe the application of a novel approach for the reduction of scattering amplitudes, based on multivariate polynomial division, which we have recently presented. This technique yields the complete integrand decomposition for arbitrary amplitudes, regardless of the number of loops. It allows for the determination of the residue at any multiparticle cut, whose knowledge is a mandatory prerequisite for applying the integrand-reduction procedure. By using the division modulo Gröbner basis, we can derive a simple integrand recurrence relation that generates the multiparticle pole decomposition for integrands of arbitrary multiloop amplitudes. We apply the new reduction algorithm to the two-loop planar and nonplanar diagrams contributing to the five-point scattering amplitudes in N=4 super Yang-Mills and N=8 supergravity in four dimensions, whose numerator functions contain up to rank-two terms in the integration momenta. We determine all polynomial residues parametrizing the cuts of the corresponding topologies and subtopologies. We obtain the integral basis for the decomposition of each diagram from the polynomial form of the residues. Our approach is well suited for a seminumerical implementation, and its general mathematical properties provide an effective algorithm for the generalization of the integrand-reduction method to all orders in perturbation theory.
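
    The core algebraic operation the method builds on, multivariate polynomial division modulo a Gröbner basis, can be illustrated with sympy on toy polynomials. The polynomials below are arbitrary examples of our own, not actual loop-integrand numerators or cut equations.

      # Toy illustration of division modulo a Groebner basis with sympy.
      from sympy import symbols, groebner, reduced

      x, y = symbols('x y')
      cut_equations = [x**2 + y**2 - 1, x*y - 1]      # generators of the "cut" ideal
      G = groebner(cut_equations, x, y, order='lex')

      numerator = x**3 * y + x * y**2 + y
      quotients, residue = reduced(numerator, list(G.exprs), x, y, order='lex')
      # residue is the unique remainder modulo the ideal -- the analogue of the
      # polynomial residue parametrizing a multiparticle cut.
      print(residue)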

  5. Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.

    1996-01-01

    The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.
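
    The mechanism can be sketched with a minimal genetic algorithm over process orderings, in the spirit of the DeMAID feature described above. Everything here is an illustrative assumption of ours, not the NTRS implementation: the cost simply counts "feedback" couplings (output fed to a process scheduled earlier), and the GA settings are arbitrary.

      # Minimal GA sketch for ordering coupled design processes.
      import random

      def feedback_cost(order, coupling):
          # coupling[i][j] = 1 if process i sends output to process j.
          pos = {p: k for k, p in enumerate(order)}
          return sum(coupling[i][j] for i in pos for j in pos
                     if coupling[i][j] and pos[j] < pos[i])

      def crossover(a, b):
          # Order crossover (OX): keep a slice of parent a, fill the rest from b.
          n = len(a)
          lo, hi = sorted(random.sample(range(n), 2))
          child = [None] * n
          child[lo:hi] = a[lo:hi]
          rest = [g for g in b if g not in child]
          for k in range(n):
              if child[k] is None:
                  child[k] = rest.pop(0)
          return child

      def ga_order(coupling, pop_size=40, generations=200):
          n = len(coupling)
          pop = [random.sample(range(n), n) for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=lambda o: feedback_cost(o, coupling))
              survivors = pop[:pop_size // 2]
              children = []
              while len(children) < pop_size - len(survivors):
                  a, b = random.sample(survivors, 2)
                  c = crossover(a, b)
                  if random.random() < 0.2:             # swap mutation
                      i, j = random.sample(range(n), 2)
                      c[i], c[j] = c[j], c[i]
                  children.append(c)
              pop = survivors + children
          return min(pop, key=lambda o: feedback_cost(o, coupling))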

  6. Autonomous movement of silica and glass micro-objects based on a catalytic molecular propulsion system.

    PubMed

    Stock, Christoph; Heureux, Nicolas; Browne, Wesley R; Feringa, Ben L

    2008-01-01

    A general approach for the easy functionalization of bare silica and glass surfaces with a synthetic manganese catalyst is reported. Decomposition of H₂O₂ by this dinuclear metallic center into H₂O and O₂ induced autonomous movement of silica microparticles and glass micro-sized fibers. Although several mechanisms have been proposed to rationalise the movement of particles driven by H₂O₂ decomposition to O₂ and water (recoil from O₂ bubbles [36,45], interfacial tension gradients [37-42]), it is apparent in the present system that ballistic movement is due to the growth of O₂ bubbles.

  7. Decomposing phenotype descriptions for the human skeletal phenome.

    PubMed

    Groza, Tudor; Hunter, Jane; Zankl, Andreas

    2013-01-01

    In recent years, a significant amount of research has been performed on ontology-based formalization of phenotype descriptions. The intrinsic value and knowledge captured within such descriptions can only be expressed by taking advantage of their inner structure, which implicitly combines qualities and anatomical entities. We present a meta-model (the Phenotype Fragment Ontology) and a processing pipeline that together enable the automatic decomposition and conceptualization of phenotype descriptions for the human skeletal phenome. We use this approach to showcase the usefulness of the generic concept of phenotype decomposition through an experimental study on all skeletal phenotype concepts defined in the Human Phenotype Ontology.

  8. Analysis of typical fault-tolerant architectures using HARP

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Bechta Dugan, Joanne; Trivedi, Kishor S.; Rothmann, Elizabeth M.; Smith, W. Earl

    1987-01-01

    Difficulties encountered in the modeling of fault-tolerant systems are discussed. The Hybrid Automated Reliability Predictor (HARP) approach to modeling fault-tolerant systems is described. The HARP is written in FORTRAN, consists of nearly 30,000 lines of code and comments, and is based on behavioral decomposition. Using the behavioral decomposition, the dependability model is divided into fault-occurrence/repair and fault/error-handling models; the characteristics and combination of these two models are examined. Examples in which the HARP is applied to the modeling of some typical fault-tolerant systems, including a local-area network, two fault-tolerant computer systems, and a flight control system, are presented.

  9. Wavelet-bounded empirical mode decomposition for measured time series analysis

    NASA Astrophysics Data System (ADS)

    Moore, Keegan J.; Kurt, Mehmet; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.

    2018-01-01

    Empirical mode decomposition (EMD) is a powerful technique for separating the transient responses of nonlinear and nonstationary systems into finite sets of nearly orthogonal components, called intrinsic mode functions (IMFs), which represent the dynamics on different characteristic time scales. However, a deficiency of EMD is the mixing of two or more components in a single IMF, which can drastically affect the physical meaning of the empirical decomposition results. In this paper, we present a new approach based on EMD, designated wavelet-bounded empirical mode decomposition (WBEMD), which is a closed-loop, optimization-based solution to the problem of mode mixing. The optimization routine relies on maximizing the isolation of an IMF around a characteristic frequency. This isolation is measured by fitting a bounding function around the IMF in the frequency domain and computing the area under this function, so that a large (small) area corresponds to a poorly (well) separated IMF. An optimization routine is developed based on this result, with the objective of minimizing the bounding-function area and with the masking-signal parameters serving as free parameters, such that a well-separated IMF is extracted. As examples of the application of WBEMD, we apply the proposed method first to a stationary, two-component signal and then to the numerically simulated response of a cantilever beam with an essentially nonlinear end attachment. We find that WBEMD vastly improves upon EMD and that the extracted sets of IMFs provide insight into the underlying physics of the response of each system.
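
    The separation measure can be sketched numerically. In the rough sketch below, a running-maximum envelope stands in for the paper's bounding function, and all names and parameter values are illustrative choices of ours, not the authors' implementation; a well-separated (single-tone) IMF scores a smaller area than a mixed one.

      # Crude proxy for the WBEMD separation objective: area under an
      # envelope bounding the normalized magnitude spectrum of an IMF.
      import numpy as np
      from scipy.ndimage import maximum_filter1d

      def bounded_spectral_area(imf, fs, width=15):
          spectrum = np.abs(np.fft.rfft(imf))
          spectrum /= spectrum.max() + 1e-12           # normalize peak to 1
          envelope = maximum_filter1d(spectrum, size=width)
          freqs = np.fft.rfftfreq(len(imf), d=1/fs)
          return np.sum(envelope) * (freqs[1] - freqs[0])  # Riemann-sum area

      fs = 1000
      t = np.arange(0, 1, 1/fs)
      pure = np.sin(2*np.pi*50*t)                      # well-separated "IMF"
      mixed = np.sin(2*np.pi*50*t) + np.sin(2*np.pi*160*t)  # mode-mixed "IMF"
      print(bounded_spectral_area(pure, fs), bounded_spectral_area(mixed, fs))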

  10. Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners

    DOE PAGES

    Li, Ruipeng; Saad, Yousef

    2017-08-01

    This study presents a parallel preconditioning method for distributed sparse linear systems, based on an approximate inverse of the original matrix, that adopts a general framework of distributed sparse matrices and exploits domain decomposition (DD) and low-rank corrections. The DD approach decouples the matrix and, once inverted, a low-rank approximation is applied by exploiting the Sherman-Morrison-Woodbury formula, which yields two variants of the preconditioning methods. The low-rank expansion is computed by the Lanczos procedure with reorthogonalizations. Numerical experiments indicate that, when combined with Krylov subspace accelerators, this preconditioner can be efficient and robust for solving symmetric sparse linear systems. Comparisons with pARMS, a DD-based parallel incomplete LU (ILU) preconditioning method, are presented for solving Poisson's equation and linear elasticity problems.
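
    The Sherman-Morrison-Woodbury identity that the preconditioner exploits can be illustrated in a few lines of numpy. The diagonal "decoupled" matrix and random low-rank factors below are stand-ins of our own, meant only to show how a solve with A + UVᵀ reduces to cheap solves with A plus a small k×k system.

      # Woodbury identity sketch:
      # (A + U V^T)^-1 b = A^-1 b - A^-1 U (I + V^T A^-1 U)^-1 V^T A^-1 b
      import numpy as np

      rng = np.random.default_rng(1)
      n, k = 200, 5
      d = rng.uniform(1, 2, n)                 # "decoupled" part: diagonal of A
      U = rng.standard_normal((n, k))          # low-rank correction factors
      V = rng.standard_normal((n, k))
      b = rng.standard_normal(n)

      Ainv_b = b / d                           # cheap solves with A alone
      Ainv_U = U / d[:, None]
      small = np.eye(k) + V.T @ Ainv_U         # the only dense solve is k x k
      x = Ainv_b - Ainv_U @ np.linalg.solve(small, V.T @ Ainv_b)

      print(np.allclose((np.diag(d) + U @ V.T) @ x, b))   # True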

  11. Alternative Modal Basis Selection Procedures for Nonlinear Random Response Simulation

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.

    2010-01-01

    Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of the three reduced-order analyses are compared with the results of the computationally taxing simulation in the physical degrees of freedom. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.
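
    The proper orthogonal decomposition step shared by these procedures is compact enough to sketch: the left singular vectors of a snapshot matrix, ordered by captured response energy, form the reduced basis. The snapshot data below is synthetic and the energy cutoff is an illustrative choice.

      # Minimal POD sketch via SVD of a snapshot matrix.
      import numpy as np

      rng = np.random.default_rng(2)
      n_dof, n_snap = 100, 400
      x = np.linspace(0, 1, n_dof)
      modes_true = np.stack([np.sin(np.pi * x), np.sin(2 * np.pi * x)])
      # Synthetic response: two spatial modes with random amplitudes, plus noise.
      snapshots = modes_true.T @ rng.standard_normal((2, n_snap)) \
                  + 0.01 * rng.standard_normal((n_dof, n_snap))

      U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
      energy = s**2 / np.sum(s**2)                       # fraction of energy per mode
      n_keep = int(np.searchsorted(np.cumsum(energy), 0.999) + 1)
      basis = U[:, :n_keep]                              # reduced modal basis
      print(n_keep, energy[:4])                          # expect ~2 dominant modes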

  12. A novel approach for baseline correction in 1H-MRS signals based on ensemble empirical mode decomposition.

    PubMed

    Parto Dezfouli, Mohammad Ali; Dezfouli, Mohsen Parto; Rad, Hamidreza Saligheh

    2014-01-01

    Proton magnetic resonance spectroscopy (¹H-MRS) is a non-invasive diagnostic tool for measuring biochemical changes in the human body. Acquired ¹H-MRS signals may be corrupted by a wideband baseline signal generated by macromolecules. Recently, several methods have been developed for the correction of such baseline signals; however, most of them are not able to estimate the baseline in complex overlapped signals. In this study, a novel automatic baseline correction method is proposed for ¹H-MRS spectra based on ensemble empirical mode decomposition (EEMD). The method was applied to both simulated data and in-vivo ¹H-MRS signals of the human brain. Results demonstrate the efficiency of the proposed method in removing the baseline from ¹H-MRS signals.
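
    A hedged sketch of the idea follows, assuming the third-party PyEMD package (distributed on PyPI as EMD-signal) and its EEMD class; the synthetic spectrum and the choice of which slow components form the baseline are illustrative assumptions of ours, not the authors' implementation.

      # EEMD-style baseline removal sketch on a synthetic spectrum.
      import numpy as np
      from PyEMD import EEMD    # assumed third-party package (EMD-signal)

      ppm = np.linspace(0, 5, 2048)
      peaks = np.exp(-((ppm - 2.0) / 0.02)**2) + 0.7 * np.exp(-((ppm - 3.2) / 0.03)**2)
      baseline = 0.3 * np.exp(-((ppm - 2.5) / 1.5)**2)   # broad macromolecule hump
      noise = 0.01 * np.random.default_rng(3).standard_normal(ppm.size)
      spectrum = peaks + baseline + noise

      eemd = EEMD(trials=100)
      imfs = eemd.eemd(spectrum)
      # Treat the slowest-oscillating components (last rows) as the wideband
      # baseline estimate and subtract them -- an illustrative choice.
      baseline_est = imfs[-2:].sum(axis=0)
      corrected = spectrum - baseline_est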

  13. Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Ruipeng; Saad, Yousef

    This study presents a parallel preconditioning method for distributed sparse linear systems, based on an approximate inverse of the original matrix, that adopts a general framework of distributed sparse matrices and exploits domain decomposition (DD) and low-rank corrections. The DD approach decouples the matrix and, once inverted, a low-rank approximation is applied by exploiting the Sherman-Morrison-Woodbury formula, which yields two variants of the preconditioning methods. The low-rank expansion is computed by the Lanczos procedure with reorthogonalizations. Numerical experiments indicate that, when combined with Krylov subspace accelerators, this preconditioner can be efficient and robust for solving symmetric sparse linear systems. Comparisons with pARMS, a DD-based parallel incomplete LU (ILU) preconditioning method, are presented for solving Poisson's equation and linear elasticity problems.

  14. Tree decomposition based fast search of RNA structures including pseudoknots in genomes.

    PubMed

    Song, Yinglei; Liu, Chunmei; Malmberg, Russell; Pan, Fangfang; Cai, Liming

    2005-01-01

    Searching genomes for RNA secondary structure with computational methods has become an important approach to the annotation of non-coding RNAs. However, due to the lack of efficient algorithms for accurate RNA structure-sequence alignment, computer programs capable of fast and effective genome-wide search for RNA secondary structures have not been available. In this paper, a novel RNA structure profiling model is introduced based on the notion of a conformational graph to specify the consensus structure of an RNA family. Tree decomposition yields a small tree width t for such conformational graphs (e.g., t = 2 for stem loops and only a slight increase for pseudoknots). Within this modelling framework, the optimal alignment of a sequence to the structure model corresponds to finding a maximum-valued isomorphic subgraph and consequently can be accomplished through dynamic programming on the tree decomposition of the conformational graph in time O(k^t N^2), where k is a small parameter and N is the size of the profiled RNA structure. Experiments show that applying the alignment algorithm to genome search yields the same search accuracy as methods based on a covariance model, with a significant reduction in computation time. In particular, very accurate searches for tmRNAs in bacterial genomes and for telomerase RNAs in yeast genomes can be accomplished in days, as opposed to the months required by other methods. The tree decomposition based search tool is free upon request and can be downloaded at http://www.uga.edu/RNA-informatics/software/index.php.

  15. Application of decomposition techniques to the preliminary design of a transport aircraft

    NASA Technical Reports Server (NTRS)

    Rogan, J. E.; Mcelveen, R. P.; Kolb, M. A.

    1986-01-01

    A multifaceted decomposition of a nonlinear constrained optimization problem describing the preliminary design process for a transport aircraft has been made. Flight dynamics, flexible aircraft loads and deformations, and preliminary structural design subproblems appear prominently in the decomposition. The use of design process decomposition for scheduling design projects, a new system integration approach to configuration control, and the application of object-centered programming to a new generation of design tools are discussed.

  16. Subgrid-scale physical parameterization in atmospheric modeling: How can we make it consistent?

    NASA Astrophysics Data System (ADS)

    Yano, Jun-Ichi

    2016-07-01

    Approaches to subgrid-scale physical parameterization in atmospheric modeling are reviewed by taking turbulent combustion flow research as a point of reference. Three major general approaches are considered for its consistent development: moment, distribution density function (DDF), and mode decomposition. The moment expansion is a standard method for describing the subgrid-scale turbulent flows both in geophysics and engineering. The DDF (commonly called PDF) approach is intuitively appealing as it deals with a distribution of variables in subgrid scale in a more direct manner. Mode decomposition was originally applied by Aubry et al (1988 J. Fluid Mech. 192 115-73) in the context of wall boundary-layer turbulence. It is specifically designed to represent coherencies in compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (empirical orthogonal functions) as their mode-decomposition basis. However, the methodology can easily be generalized into any decomposition basis. Among those, wavelet is a particularly attractive alternative. The mass-flux formulation that is currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally constant modes for the expansion basis. This perspective further identifies a very basic but also general geometrical constraint imposed on the massflux formulation: the segmentally-constant approximation. Mode decomposition can, furthermore, be understood by analogy with a Galerkin method in numerically modeling. This analogy suggests that the subgrid parameterization may be re-interpreted as a type of mesh-refinement in numerical modeling. A link between the subgrid parameterization and downscaling problems is also pointed out.

  17. A novel key-frame extraction approach for both video summary and video index.

    PubMed

    Lei, Shaoshuai; Xie, Gang; Yan, Gaowei

    2014-01-01

    Existing key-frame extraction methods are mostly oriented toward video summarization, while the indexing role of key-frames is ignored. This paper presents a novel key-frame extraction approach suitable for both video summarization and video indexing. First, a dynamic distance separability algorithm is proposed to divide a shot into subshots based on semantic structure; appropriate key-frames are then extracted in each subshot by SVD decomposition. Finally, three evaluation indicators are proposed to evaluate the performance of the new approach. Experimental results show that the proposed approach achieves good semantic structure for semantics-based video indexing and meanwhile produces video summaries consistent with human perception.

  18. Boreal soil carbon dynamics under a changing climate: a model inversion approach

    Treesearch

    Zhaosheng Fan; Jason C. Neff; Jennifer W. Harden; Kimberly P. Wickland

    2008-01-01

    Several fundamental but important factors controlling the feedback of boreal organic carbon (OC) to climate change were examined using a mechanistic model of soil OC dynamics, including the combined effects of temperature and moisture on the decomposition of OC and the factors controlling carbon quality and decomposition with depth. To estimate decomposition rates and...

  19. Detecting the Extent of Cellular Decomposition after Sub-Eutectoid Annealing in Rolled UMo Foils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kautz, Elizabeth J.; Jana, Saumyadeep; Devaraj, Arun

    2017-07-31

    This report presents an automated image processing approach to quantifying microstructure image data, specifically the extent of eutectoid (cellular) decomposition in rolled U-10Mo foils. Quantitative description of the microstructure image data makes it possible to relate the microstructure to processing parameters (time, temperature, deformation).

  20. Drogue detection for vision-based autonomous aerial refueling via low rank and sparse decomposition with multiple features

    NASA Astrophysics Data System (ADS)

    Gao, Shibo; Cheng, Yongmei; Song, Chunhua

    2013-09-01

    Vision-based probe-and-drogue autonomous aerial refueling is a challenging task in modern aviation for both manned and unmanned aircraft. A key issue is to accurately determine the relative orientation and position of the drogue and the probe for the relative navigation system during the approach phase, which requires locating the drogue precisely. Drogue detection is difficult due to the disorderly motion of the drogue caused by both the tanker wake vortex and atmospheric turbulence. In this paper, drogue detection is treated as a moving-object detection problem, and a drogue detection algorithm based on low-rank and sparse decomposition with local multiple features is proposed. Global and local information about the drogue is introduced into the detection model in a unified way. Experimental results on real autonomous aerial refueling videos show that the proposed drogue detection algorithm is effective.
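
    The generic mechanism underlying such detectors, low-rank plus sparse decomposition (robust PCA), can be sketched with a simplified principal component pursuit via alternating singular-value and soft thresholding. This is an illustrative stand-in under our own parameter defaults, not the paper's multi-feature model; the columns of D would be vectorized video frames, and the moving drogue would appear in the sparse part S.

      # Simplified robust PCA (principal component pursuit) sketch.
      import numpy as np

      def svt(X, tau):
          """Singular value thresholding."""
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

      def shrink(X, tau):
          """Elementwise soft thresholding."""
          return np.sign(X) * np.maximum(np.abs(X) - tau, 0)

      def rpca(D, lam=None, mu=None, n_iter=200):
          m, n = D.shape
          lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
          mu = mu if mu is not None else m * n / (4.0 * np.abs(D).sum())
          L = np.zeros_like(D)      # low-rank part (static background)
          S = np.zeros_like(D)      # sparse part (moving object)
          Y = np.zeros_like(D)      # Lagrange multiplier
          for _ in range(n_iter):
              L = svt(D - S + Y / mu, 1.0 / mu)
              S = shrink(D - L + Y / mu, lam / mu)
              Y = Y + mu * (D - L - S)
          return L, S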

  1. WEALTH-BASED INEQUALITY IN CHILD IMMUNIZATION IN INDIA: A DECOMPOSITION APPROACH.

    PubMed

    Debnath, Avijit; Bhattacharjee, Nairita

    2018-05-01

    Despite years of health and medical advancement, children still suffer from infectious diseases that are vaccine preventable. India reacted in 1978 by launching the Expanded Programme on Immunization in an attempt to reduce the incidence of vaccine-preventable diseases (VPDs). Although the nation has made remarkable progress over the years, there is significant variation in immunization coverage across different socioeconomic strata. This study attempts to identify the determinants of wealth-based inequality in child immunization using a new, modified method. The analysis is based on 11,001 eligible ever-married women aged 15-49 and their children aged 12-23 months, with data from the third District Level Household and Facility Survey (DLHS-3) of India, 2007-08. Using an approximation of Erreygers' decomposition technique, the study identifies unequal access to antenatal care as the main factor associated with inequality in immunization coverage in India.
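
    A hedged sketch of the index machinery behind such analyses follows: the wealth-rank concentration index with Erreygers' correction for a binary outcome (immunized yes/no). Survey weights and the regression-based decomposition into determinants are omitted, the data is synthetic, and the formula assumes outcome bounds of 0 and 1.

      # Erreygers-corrected concentration index sketch for a 0/1 outcome.
      import numpy as np

      def erreygers_index(outcome, wealth):
          order = np.argsort(wealth)                 # poorest first
          h = np.asarray(outcome, float)[order]
          n = h.size
          rank = (np.arange(1, n + 1) - 0.5) / n     # fractional wealth rank
          mu = h.mean()
          ci = 2.0 * np.cov(h, rank, bias=True)[0, 1] / mu   # standard CI
          return 4.0 * mu * ci                       # correction for bounds a=0, b=1

      rng = np.random.default_rng(4)
      wealth = rng.uniform(size=5000)
      immunized = rng.uniform(size=5000) < (0.4 + 0.4 * wealth)  # pro-rich gap
      print(erreygers_index(immunized, wealth))      # positive => pro-rich inequality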

  2. Bi-dimensional empirical mode decomposition based fringe-like pattern suppression in polarization interference imaging spectrometer

    NASA Astrophysics Data System (ADS)

    Ren, Wenyi; Cao, Qizhi; Wu, Dan; Jiang, Jiangang; Yang, Guoan; Xie, Yingge; Wang, Guodong; Zhang, Sheqi

    2018-01-01

    Many observations made with interference imaging spectrometers are plagued by the fringe-like pattern (FP) that occurs at optical wavelengths in the red and near-infrared region. This pattern complicates data processing tasks such as spectrum calibration and information retrieval. An adaptive method based on bi-dimensional empirical mode decomposition was developed to suppress the nonlinear FP in a polarization interference imaging spectrometer. The FP and the corrected interferogram were separated effectively, and the stripes introduced by the CCD mosaic were suppressed. Nonlinear interferogram background removal and spectrum distortion correction were implemented as well. The method adaptively suppresses the nonlinear FP without requiring prior experimental data or knowledge, and is potentially a powerful tool in Fourier transform spectroscopy, holographic imaging, optical measurement based on moiré fringes, and related fields.

  3. Heterogeneous Tensor Decomposition for Clustering via Manifold Optimization.

    PubMed

    Sun, Yanfeng; Gao, Junbin; Hong, Xia; Mishra, Bamdev; Yin, Baocai

    2016-03-01

    Tensor clustering is an important tool that exploits the intrinsically rich structure of real-world multiarray or tensor datasets. Standard practice in dealing with such datasets is to use subspace clustering based on vectorizing the multiarray data. However, vectorization of tensorial data does not exploit the complete structural information. In this paper, we propose a subspace clustering algorithm that avoids any vectorization process. Our approach is based on a novel heterogeneous Tucker decomposition model that takes cluster membership information into account. We propose a new clustering algorithm that alternates between the different modes of the proposed heterogeneous tensor model. All but the last mode have closed-form updates; updating the last mode reduces to optimizing over the multinomial manifold, for which we investigate second-order Riemannian geometry and propose a trust-region algorithm. Numerical experiments show that our proposed algorithm competes effectively with state-of-the-art clustering algorithms based on tensor factorization.

  4. Numerical simulation of tonal fan noise of computers and air conditioning systems

    NASA Astrophysics Data System (ADS)

    Aksenov, A. A.; Gavrilyuk, V. N.; Timushev, S. F.

    2016-07-01

    Current approaches to fan noise simulation are mainly based on the Lighthill equation and the so-called aeroacoustic analogy, which also rests on transformed forms of the Lighthill equation, such as the well-known FW-H equation or the Kirchhoff theorem. A disadvantage of such methods, leading to significant modeling errors, is the incorrect solution of the decomposition problem, i.e., the separation of acoustic and vortex (pseudosound) modes in the region of the oscillation source. In this paper, we propose a method for tonal noise simulation based on the mesh solution of the Helmholtz equation for the Fourier transform of the pressure perturbation, with boundary conditions in the form of a complex impedance. A noise source is placed on a surface surrounding each fan rotor. The acoustic fan power is determined by the acoustic-vortex method, which ensures more accurate decomposition and determination of the pressure pulsation amplitudes in the near field of the fan.

  5. Glove-based approach to online signature verification.

    PubMed

    Kamel, Nidal S; Sayeed, Shohel; Ellis, Grant A

    2008-06-01

    Utilizing the multiple degrees of freedom offered by the data glove for each finger and the hand, a novel online signature verification system using the singular value decomposition (SVD) numerical tool for signature classification and verification is presented. The proposed technique uses the SVD to find the r singular vectors sensing the maximal energy of the glove data matrix A, called the principal subspace, so that the effective dimensionality of A can be reduced. Having modeled the data glove signature through its r-principal subspace, signature authentication is performed by finding the angles between the different subspaces. The data glove is demonstrated to be an effective high-bandwidth data entry device for signature verification. This SVD-based signature verification technique is tested and shown to recognize forged signatures with a false acceptance rate of less than 1.2%.
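
    The subspace comparison can be sketched in a few lines: model each recording by the r leading left singular vectors of its data matrix, then compare recordings through principal angles. The 22-channel layout, r, and the synthetic data below are illustrative assumptions of ours; scipy.linalg.subspace_angles does the angle computation.

      # Principal-subspace matching sketch for glove-based verification.
      import numpy as np
      from scipy.linalg import subspace_angles

      def principal_subspace(A, r=3):
          U, _, _ = np.linalg.svd(A, full_matrices=False)
          return U[:, :r]                       # r-dimensional principal subspace

      rng = np.random.default_rng(5)
      genuine = rng.standard_normal((22, 300))  # 22 glove channels x time samples
      repeat = genuine + 0.05 * rng.standard_normal(genuine.shape)
      forgery = rng.standard_normal((22, 300))

      ref = principal_subspace(genuine)
      print(np.rad2deg(subspace_angles(ref, principal_subspace(repeat))))   # small angles
      print(np.rad2deg(subspace_angles(ref, principal_subspace(forgery))))  # large angles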

  6. Application of decomposition techniques to the preliminary design of a transport aircraft

    NASA Technical Reports Server (NTRS)

    Rogan, J. E.; Kolb, M. A.

    1987-01-01

    A nonlinear constrained optimization problem describing the preliminary design process for a transport aircraft has been formulated. A multifaceted decomposition of the optimization problem has been made. Flight dynamics, flexible aircraft loads and deformations, and preliminary structural design subproblems appear prominently in the decomposition. The use of design process decomposition for scheduling design projects, a new system integration approach to configuration control, and the application of object-centered programming to a new generation of design tools are discussed.

  7. Structural analysis and design of multivariable control systems: An algebraic approach

    NASA Technical Reports Server (NTRS)

    Tsay, Yih Tsong; Shieh, Leang-San; Barnett, Stephen

    1988-01-01

    The application of algebraic system theory to the design of controllers for multivariable (MV) systems is explored analytically using an approach based on state-space representations and matrix-fraction descriptions. Chapters are devoted to characteristic lambda matrices and canonical descriptions of MIMO systems; spectral analysis, divisors, and spectral factors of nonsingular lambda matrices; feedback control of MV systems; and structural decomposition theories and their application to MV control systems.

  8. Linear dynamical modes as new variables for data-driven ENSO forecast

    NASA Astrophysics Data System (ADS)

    Gavrilov, Andrey; Seleznev, Aleksei; Mukhin, Dmitry; Loskutov, Evgeny; Feigin, Alexander; Kurths, Juergen

    2018-05-01

    A new data-driven model for the analysis and prediction of spatially distributed time series is proposed. The model is based on a linear dynamical mode (LDM) decomposition of the observed data, which is derived from a recently developed nonlinear dimensionality reduction approach. The key point of this approach is its ability to take into account simple dynamical properties of the observed system by revealing the system's dominant time scales. The LDMs are used as new variables for the empirical construction of a nonlinear stochastic evolution operator. The method is applied to the sea surface temperature anomaly field in the tropical belt, where the El Niño-Southern Oscillation (ENSO) is the main mode of variability. The advantage of LDMs over the traditionally used empirical orthogonal function decomposition is demonstrated for these data. Specifically, it is shown that the new model has competitive ENSO forecast skill in comparison with other existing ENSO models.

  9. Data Decomposition Techniques with Multi-Scale Permutation Entropy Calculations for Bearing Fault Diagnosis

    PubMed Central

    Yasir, Muhammad Naveed; Koh, Bong-Hwan

    2018-01-01

    This paper presents the local mean decomposition (LMD) integrated with multi-scale permutation entropy (MPE), referred to as LMD-MPE, to investigate rolling element bearing (REB) fault diagnosis from measured vibration signals. First, the LMD decomposes the vibration data or acceleration measurement into separate product functions composed of both amplitude and frequency modulation. MPE then calculates the statistical permutation entropy of the product functions to extract nonlinear features with which to assess and classify the condition of healthy and damaged REB systems. Comparative experimental results for conventional LMD-based multi-scale entropy and MPE are presented to verify the effectiveness of the proposed technique. The study found that the integrated LMD-MPE approach provides reliable, damage-sensitive features when analyzing the bearing condition. Results on REB experimental datasets show that the proposed approach yields more robust outcomes than existing methods. PMID:29690526
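
    The MPE feature itself is compact enough to sketch in numpy: coarse-grain the series at each scale, then compute the Shannon entropy of ordinal-pattern frequencies. Embedding order, delay, and scales below are illustrative choices of ours, and the LMD stage (which would supply the product functions) is omitted.

      # Multi-scale permutation entropy sketch.
      import numpy as np
      from math import factorial

      def permutation_entropy(x, order=4, delay=1):
          n = len(x) - (order - 1) * delay
          patterns = np.array([np.argsort(x[i:i + order * delay:delay])
                               for i in range(n)])
          # Encode each ordinal pattern as a single integer key (base `order`).
          keys = patterns @ (order ** np.arange(order))
          _, counts = np.unique(keys, return_counts=True)
          p = counts / counts.sum()
          return -np.sum(p * np.log(p)) / np.log(factorial(order))  # normalized

      def multiscale_pe(x, order=4, scales=range(1, 11)):
          out = []
          for s in scales:
              m = len(x) // s
              coarse = x[:m * s].reshape(m, s).mean(axis=1)   # coarse-graining
              out.append(permutation_entropy(coarse, order))
          return np.array(out)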

  10. Data Decomposition Techniques with Multi-Scale Permutation Entropy Calculations for Bearing Fault Diagnosis.

    PubMed

    Yasir, Muhammad Naveed; Koh, Bong-Hwan

    2018-04-21

    This paper presents the local mean decomposition (LMD) integrated with multi-scale permutation entropy (MPE), referred to as LMD-MPE, to investigate rolling element bearing (REB) fault diagnosis from measured vibration signals. First, the LMD decomposes the vibration data or acceleration measurement into separate product functions composed of both amplitude and frequency modulation. MPE then calculates the statistical permutation entropy of the product functions to extract nonlinear features with which to assess and classify the condition of healthy and damaged REB systems. Comparative experimental results for conventional LMD-based multi-scale entropy and MPE are presented to verify the effectiveness of the proposed technique. The study found that the integrated LMD-MPE approach provides reliable, damage-sensitive features when analyzing the bearing condition. Results on REB experimental datasets show that the proposed approach yields more robust outcomes than existing methods.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spogen, L.R.; Cleland, L.L.

    An approach to the development of performance-based regulations (PBRs) is described. Initially, a framework is constructed that consists of a function hierarchy and associated measures. The function at the top of the hierarchy is described in terms of societal objectives. Decomposition of this function into subordinate functions, and their subsequent decompositions, yields the function hierarchy. "Bottom" functions describe the roles of system components. When measures are identified for the performance of each function and means of aggregating performances to higher levels are established, the framework may be employed for developing PBRs. Considerations of system flexibility and performance uncertainty guide the determination of the hierarchical level at which regulations are formulated; ease of testing compliance is also a factor. To show the viability of the approach, the framework developed by Lawrence Livermore Laboratory for the Nuclear Regulatory Commission for evaluation of material control systems at fixed facilities is presented.

  12. CrossTalk: The Journal of Defense Software Engineering. Volume 27, Number 1, January/February 2014

    DTIC Science & Technology

    2014-02-01

    deficit in trustworthiness and will permit analysis on how this deficit needs to be overcome. This analysis will help identify adaptations that are...approaches to trustworthy analysis split into two categories: product-based and process-based. Product-based techniques [9] identify factors that...Criticalities may also be assigned to decompositions and contributions. 5. Evaluation and analysis: in this task the propagation rules of the NFR

  13. Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices

    NASA Astrophysics Data System (ADS)

    Finn, Conor; Lizier, Joseph

    2018-04-01

    What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much-criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information, which we refer to as the specificity and the ambiguity. This yields a separate redundancy lattice for each component. Then, based upon an operational interpretation of redundancy, we define measures of redundant specificity and ambiguity, enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example.
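
    The pointwise split that this construction starts from can be computed directly: i(s;t) = h(s) - h(s|t), where h(s) = -log p(s) is the specificity and h(s|t) = -log p(s|t) the ambiguity. The joint distribution below is a made-up two-bit example; the lattice construction itself is beyond a snippet.

      # Pointwise mutual information split into specificity and ambiguity.
      import numpy as np

      # p(s, t) for one predictor S and target T (rows: s, cols: t).
      p_st = np.array([[0.4, 0.1],
                       [0.1, 0.4]])
      p_s = p_st.sum(axis=1)
      p_t = p_st.sum(axis=0)

      for s in range(2):
          for t in range(2):
              specificity = -np.log2(p_s[s])                 # h(s)
              ambiguity = -np.log2(p_st[s, t] / p_t[t])      # h(s|t)
              pmi = specificity - ambiguity                  # i(s;t), possibly negative
              print(f"s={s} t={t}: h(s)={specificity:.3f} "
                    f"h(s|t)={ambiguity:.3f} i(s;t)={pmi:+.3f}")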

  14. Measuring and decomposing socioeconomic inequality in healthcare delivery: A microsimulation approach with application to the Palestinian conflict-affected fragile setting.

    PubMed

    Abu-Zaineh, Mohammad; Mataria, Awad; Moatti, Jean-Paul; Ventelou, Bruno

    2011-01-01

    Socioeconomic-related inequalities in healthcare delivery have been extensively studied in developed countries using standard linear models of decomposition. This paper assesses equity in healthcare delivery in the particular context of the occupied Palestinian territory (the West Bank and the Gaza Strip), using a new method of decomposition based on microsimulations. Besides avoiding the 'unavoidable price' of the linearity restriction imposed by standard decomposition methods, the microsimulation-based decomposition circumvents the potentially contentious role of heterogeneity in behaviours and better disentangles the various sources driving inequality in healthcare utilisation. Results suggest that the worse-off do have a disproportionately greater need for all levels of care. However, with the exception of primary-level care, utilisation of all levels of care appears to be significantly higher for the better-off. The microsimulation method makes it possible to identify the contributions of the factors driving such pro-rich patterns. While much of the inequality in utilisation appears to be caused by prevailing socioeconomic inequalities, detailed analysis attributes a non-trivial part (circa 30% of inequalities) to heterogeneity in healthcare-seeking behaviours across socioeconomic groups of the population. Several policy recommendations for improving equity in healthcare delivery in the occupied Palestinian territory are proposed.

  15. Cognitive workload reduction in hospital information systems: Decision support for order set optimization.

    PubMed

    Gartner, Daniel; Zhang, Yiye; Padman, Rema

    2018-06-01

    Order sets are a critical component of hospital information systems that are expected to substantially reduce physicians' physical and cognitive workload and improve patient safety. Order sets represent time-interval-clustered order items, such as medications prescribed at hospital admission, that are administered to patients during their hospital stay. In this paper, we develop a mathematical programming model together with exact and heuristic solution procedures that minimize physicians' cognitive workload associated with prescribing order sets. Furthermore, we provide structural insights into the problem that lead to a valid lower bound on the order set size. In a case study using order data on asthma patients of moderate complexity from a major pediatric hospital, we compare the hospital's current solution with the exact and heuristic solutions on a variety of performance metrics. Our computational results confirm the lower bound and reveal that a time-interval decomposition approach substantially reduces computation times for the mathematical program, as does a K-means clustering-based decomposition approach, which, however, does not guarantee optimality because it violates the lower bound. Comparing the mathematical program with the hospital's current order set configuration indicates that cognitive workload can be reduced by about 20.2% when 1 to 5 order sets are allowed; the K-means-based decomposition achieves a reduction of about 19.5% over the same range. Finally, we provide a decision support system to help practitioners analyze the current order set configuration, the results of the mathematical program, and the heuristic approach.
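
    The K-means decomposition idea can be illustrated with a small, hedged sketch: cluster order items by their co-occurrence across patient encounters and treat each cluster as a candidate order set. The synthetic usage matrix, scikit-learn's KMeans, and all parameters below are our own assumptions; the paper's mathematical program and lower bound are not reproduced here.

      # K-means over item co-occurrence as an order-set decomposition heuristic.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(6)
      n_encounters, n_items, n_sets = 500, 30, 5
      groups = rng.integers(0, n_sets, n_items)       # latent item groupings
      usage = np.zeros((n_encounters, n_items))
      for e in range(n_encounters):
          g = rng.integers(0, n_sets)                 # each encounter draws one group
          usage[e] = (groups == g) & (rng.uniform(size=n_items) < 0.8)
      usage = np.maximum(usage, rng.uniform(size=usage.shape) < 0.05)  # noise orders

      km = KMeans(n_clusters=n_sets, n_init=10, random_state=0).fit(usage.T)
      for k in range(n_sets):
          print(f"order set {k}: items {np.where(km.labels_ == k)[0].tolist()}")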

  16. Ranking of critical species to preserve the functionality of mutualistic networks using the k-core decomposition

    PubMed Central

    García-Algarra, Javier; Pastor, Juan Manuel; Iriondo, José María

    2017-01-01

    Background: Network analysis has become a relevant approach to analyze cascading species extinctions resulting from perturbations on mutualistic interactions as a result of environmental change. In this context, it is essential to be able to point out key species, whose stability would prevent cascading extinctions and the consequent loss of ecosystem function. In this study, we aim to explain how the k-core decomposition sheds light on the robustness of bipartite mutualistic networks. Methods: We defined three k-magnitudes based on the k-core decomposition: k-radius, k-degree, and k-risk. The first one, k-radius, quantifies the distance from a node to the innermost shell of the partner guild, while k-degree provides a measure of centrality in the k-shell based decomposition. k-risk is a way to measure the vulnerability of a network to the loss of a particular species. Using these magnitudes we analyzed 89 mutualistic networks involving plant pollinators or seed dispersers. Two static extinction procedures were implemented in which k-degree and k-risk were compared against other commonly used ranking indexes, such as MusRank, explained in detail in Material and Methods. Results: When extinctions take place in both guilds, k-risk is the best ranking index if the goal is to identify the key species to preserve the giant component. When species are removed only in the primary class and cascading extinctions are measured in the secondary class, the most effective ranking index to identify the key species to preserve the giant component is k-degree. However, the MusRank index was more effective when the goal is to identify the key species to preserve the greatest species richness in the second class. Discussion: The k-core decomposition offers a new topological view of the structure of mutualistic networks. The new k-radius, k-degree and k-risk magnitudes take advantage of its properties and provide new insight into the structure of mutualistic networks. The k-risk and k-degree ranking indexes are especially effective approaches to identify key species to preserve when conservation practitioners focus on the preservation of ecosystem functionality over species richness. PMID:28533969
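
    The underlying k-core machinery is standard and can be explored with networkx; the toy plant-pollinator graph below is made up, and the authors' k-radius, k-degree, and k-risk magnitudes are not implemented here.

      # k-core decomposition of a tiny (made-up) bipartite mutualistic network.
      import networkx as nx

      B = nx.Graph()
      B.add_edges_from([
          ("plant1", "pol1"), ("plant1", "pol2"), ("plant1", "pol3"),
          ("plant2", "pol1"), ("plant2", "pol2"), ("plant2", "pol3"),
          ("plant3", "pol1"), ("plant3", "pol2"),
          ("plant4", "pol3"), ("plant5", "pol4"),
      ])
      core = nx.core_number(B)        # k-shell index per species
      kmax = max(core.values())
      print(core)
      print("innermost shell:", [n for n, k in core.items() if k == kmax])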

  17. Ranking of critical species to preserve the functionality of mutualistic networks using the k-core decomposition.

    PubMed

    García-Algarra, Javier; Pastor, Juan Manuel; Iriondo, José María; Galeano, Javier

    2017-01-01

    Network analysis has become a relevant approach to analyze cascading species extinctions resulting from perturbations on mutualistic interactions as a result of environmental change. In this context, it is essential to be able to point out key species, whose stability would prevent cascading extinctions and the consequent loss of ecosystem function. In this study, we aim to explain how the k-core decomposition sheds light on the robustness of bipartite mutualistic networks. We defined three k-magnitudes based on the k-core decomposition: k-radius, k-degree, and k-risk. The first one, k-radius, quantifies the distance from a node to the innermost shell of the partner guild, while k-degree provides a measure of centrality in the k-shell based decomposition. k-risk is a way to measure the vulnerability of a network to the loss of a particular species. Using these magnitudes we analyzed 89 mutualistic networks involving plant pollinators or seed dispersers. Two static extinction procedures were implemented in which k-degree and k-risk were compared against other commonly used ranking indexes, such as MusRank, explained in detail in Material and Methods. When extinctions take place in both guilds, k-risk is the best ranking index if the goal is to identify the key species to preserve the giant component. When species are removed only in the primary class and cascading extinctions are measured in the secondary class, the most effective ranking index to identify the key species to preserve the giant component is k-degree. However, the MusRank index was more effective when the goal is to identify the key species to preserve the greatest species richness in the second class. The k-core decomposition offers a new topological view of the structure of mutualistic networks. The new k-radius, k-degree and k-risk magnitudes take advantage of its properties and provide new insight into the structure of mutualistic networks. The k-risk and k-degree ranking indexes are especially effective approaches to identify key species to preserve when conservation practitioners focus on the preservation of ecosystem functionality over species richness.

  18. Generalized decompositions of dynamic systems and vector Lyapunov functions

    NASA Astrophysics Data System (ADS)

    Ikeda, M.; Siljak, D. D.

    1981-10-01

    The notion of decomposition is generalized to provide more freedom in constructing vector Lyapunov functions for stability analysis of nonlinear dynamic systems. A generalized decomposition is defined as a disjoint decomposition of a system which is obtained by expanding the state-space of a given system. An inclusion principle is formulated for the solutions of the expansion to include the solutions of the original system, so that stability of the expansion implies stability of the original system. Stability of the expansion can then be established by standard disjoint decompositions and vector Lyapunov functions. The applicability of the new approach is demonstrated using the Lotka-Volterra equations.

  19. Rapid Transient Pressure Field Computations in the Nearfield of Circular Transducers using Frequency Domain Time-Space Decomposition

    PubMed Central

    Alles, E. J.; Zhu, Y.; van Dongen, K. W. A.; McGough, R. J.

    2013-01-01

    The fast nearfield method, when combined with time-space decomposition, is a rapid and accurate approach for calculating transient nearfield pressures generated by ultrasound transducers. However, the standard time-space decomposition approach is only applicable to certain analytical representations of the temporal transducer surface velocity that, when applied to the fast nearfield method, are expressed as a finite sum of products of separate temporal and spatial terms. To extend time-space decomposition such that accelerated transient field simulations are enabled in the nearfield for an arbitrary transducer surface velocity, a new transient simulation method, frequency domain time-space decomposition (FDTSD), is derived. With this method, the temporal transducer surface velocity is transformed into the frequency domain, and then each complex-valued term is processed separately. Further improvements are achieved by spectral clipping, which reduces the number of terms and the computation time. Trade-offs between speed and accuracy are established for FDTSD calculations, and pressure fields obtained with the FDTSD method for a circular transducer are compared to those obtained with Field II and the impulse response method. The FDTSD approach, when combined with the fast nearfield method and spectral clipping, consistently achieves smaller errors in less time and requires less memory than Field II or the impulse response method. PMID:23160476

  20. Factors controlling bark decomposition and its role in wood decomposition in five tropical tree species

    PubMed Central

    Dossa, Gbadamassi G. O.; Paudel, Ekananda; Cao, Kunfang; Schaefer, Douglas; Harrison, Rhett D.

    2016-01-01

    Organic matter decomposition represents a vital ecosystem process by which nutrients are made available for plant uptake and is a major flux in the global carbon cycle. Previous studies have investigated decomposition of different plant parts, but few considered bark decomposition or its role in decomposition of wood. However, bark can comprise a large fraction of tree biomass. We used a common litter-bed approach to investigate factors affecting bark decomposition and its role in wood decomposition for five tree species in a secondary seasonal tropical rain forest in SW China. For bark, we implemented a litter bag experiment over 12 mo, using different mesh sizes to investigate effects of litter meso- and macro-fauna. For wood, we compared the decomposition of branches with and without bark over 24 mo. Bark in coarse mesh bags decomposed 1.11–1.76 times faster than bark in fine mesh bags. For wood decomposition, responses to bark removal were species dependent. Three species with slow wood decomposition rates showed significant negative effects of bark-removal, but there was no significant effect in the other two species. Future research should also separately examine bark and wood decomposition, and consider bark-removal experiments to better understand roles of bark in wood decomposition. PMID:27698461

  1. Factors controlling bark decomposition and its role in wood decomposition in five tropical tree species.

    PubMed

    Dossa, Gbadamassi G O; Paudel, Ekananda; Cao, Kunfang; Schaefer, Douglas; Harrison, Rhett D

    2016-10-04

    Organic matter decomposition represents a vital ecosystem process by which nutrients are made available for plant uptake and is a major flux in the global carbon cycle. Previous studies have investigated decomposition of different plant parts, but few considered bark decomposition or its role in decomposition of wood. However, bark can comprise a large fraction of tree biomass. We used a common litter-bed approach to investigate factors affecting bark decomposition and its role in wood decomposition for five tree species in a secondary seasonal tropical rain forest in SW China. For bark, we implemented a litter bag experiment over 12 mo, using different mesh sizes to investigate effects of litter meso- and macro-fauna. For wood, we compared the decomposition of branches with and without bark over 24 mo. Bark in coarse mesh bags decomposed 1.11-1.76 times faster than bark in fine mesh bags. For wood decomposition, responses to bark removal were species dependent. Three species with slow wood decomposition rates showed significant negative effects of bark-removal, but there was no significant effect in the other two species. Future research should also separately examine bark and wood decomposition, and consider bark-removal experiments to better understand roles of bark in wood decomposition.

  2. Investigations of image fusion

    NASA Astrophysics Data System (ADS)

    Zhang, Zhong

    1999-12-01

    The objective of image fusion is to combine information from multiple images of the same scene. The result of image fusion is a single image which is more suitable for the purpose of human visual perception or further image processing tasks. In this thesis, a region-based fusion algorithm using the wavelet transform is proposed. The identification of important features in each image, such as edges and regions of interest, is used to guide the fusion process. The idea of multiscale grouping is also introduced, and a generic image fusion framework based on multiscale decomposition is studied. The framework includes all of the existing multiscale-decomposition-based fusion approaches we found in the literature which did not assume a statistical model for the source images. Comparisons indicate that our framework includes some new approaches which outperform the existing approaches for the cases we consider. Registration must precede our fusion algorithms, so we propose a hybrid scheme that uses both feature-based and intensity-based methods. The idea of robust estimation of optical flow from time-varying images is employed with a coarse-to-fine multi-resolution approach and feature-based registration to overcome some of the limitations of the intensity-based schemes. Experiments show that this approach is robust and efficient. Assessing image fusion performance in a real application is a complicated issue. In this dissertation, a mixture probability density function model is used in conjunction with the Expectation-Maximization algorithm to model histograms of edge intensity. Some new techniques are proposed for estimating the quality of a noisy image of a natural scene; such quality measures can be used to guide the fusion. Finally, we study fusion of images obtained from several copies of a new type of camera developed for video surveillance. Our techniques increase the capability and reliability of the surveillance system and provide an easy way to obtain 3-D information of objects in the space monitored by the system.

  3. Alternative Modal Basis Selection Procedures For Reduced-Order Nonlinear Random Response Simulation

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.

    2012-01-01

    Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of a computationally taxing full-order analysis in physical degrees of freedom are taken as the benchmark for comparison with the results from the three reduced-order analyses. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.

  4. Neurocomputing strategies in decomposition based structural design

    NASA Technical Reports Server (NTRS)

    Szewczyk, Z.; Hajela, P.

    1993-01-01

    The present paper explores the applicability of neurocomputing strategies in decomposition based structural optimization problems. It is shown that the modeling capability of a backpropagation neural network can be used to detect weak couplings in a system, and to effectively decompose it into smaller, more tractable, subsystems. When such partitioning of a design space is possible, parallel optimization can be performed in each subsystem, with a penalty term added to its objective function to account for constraint violations in all other subsystems. Dependencies among subsystems are represented in terms of global design variables, and a neural network is used to map the relations between these variables and all subsystem constraints. A vector quantization technique, referred to as a z-Network, can effectively be used for this purpose. The approach is illustrated with applications to minimum weight sizing of truss structures with multiple design constraints.

  5. A Kohonen-like decomposition method for the Euclidean traveling salesman problem-KNIES/spl I.bar/DECOMPOSE.

    PubMed

    Aras, N; Altinel, I K; Oommen, J

    2003-01-01

    In addition to the classical heuristic algorithms of operations research, there have also been several approaches based on artificial neural networks for solving the traveling salesman problem. Their efficiency, however, decreases as the problem size (number of cities) increases. A technique to reduce the complexity of a large-scale traveling salesman problem (TSP) instance is to decompose or partition it into smaller subproblems. We introduce an all-neural decomposition heuristic based on a recent self-organizing map called KNIES, which has been successfully implemented for solving both the Euclidean TSP and the Euclidean Hamiltonian path problem (HPP). Our solution for the Euclidean TSP proceeds by solving the Euclidean HPP for the subproblems and then patching these solutions together. No such all-neural solution has ever been reported.

  6. Moving Beyond ERP Components: A Selective Review of Approaches to Integrate EEG and Behavior

    PubMed Central

    Bridwell, David A.; Cavanagh, James F.; Collins, Anne G. E.; Nunez, Michael D.; Srinivasan, Ramesh; Stober, Sebastian; Calhoun, Vince D.

    2018-01-01

    Relationships between neuroimaging measures and behavior provide important clues about brain function and cognition in healthy and clinical populations. While electroencephalography (EEG) provides a portable, low cost measure of brain dynamics, it has been somewhat underrepresented in the emerging field of model-based inference. We seek to address this gap in this article by highlighting the utility of linking EEG and behavior, with an emphasis on approaches for EEG analysis that move beyond focusing on peaks or “components” derived from averaging EEG responses across trials and subjects (generating the event-related potential, ERP). First, we review methods for deriving features from EEG in order to enhance the signal within single trials. These methods include filtering based on user-defined features (i.e., frequency decomposition, time-frequency decomposition), filtering based on data-driven properties (i.e., blind source separation, BSS), and generating more abstract representations of data (e.g., using deep learning). We then review cognitive models which extract latent variables from experimental tasks, including the drift diffusion model (DDM) and reinforcement learning (RL) approaches. Next, we discuss ways to assess associations among these measures, including statistical models, data-driven joint models and cognitive joint modeling using hierarchical Bayesian models (HBMs). We think that these methodological tools are likely to contribute to theoretical advancements and will help inform our understanding of brain dynamics that contribute to moment-to-moment cognitive function. PMID:29632480
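
    As one concrete instance of the "filtering based on user-defined features" family mentioned above, the sketch below computes single-trial band-limited power with a zero-phase Butterworth filter and the Hilbert envelope. The band edges and sampling rate are illustrative assumptions, and this is only one of many possible single-trial features:

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def band_power(trials, fs, low, high, order=4):
        """Single-trial band-limited power.
        trials: (n_trials, n_samples) EEG array; fs: sampling rate in Hz."""
        b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, trials, axis=-1)    # zero-phase band-pass
        envelope = np.abs(hilbert(filtered, axis=-1))
        return envelope**2                            # instantaneous power per trial

    # e.g. theta_power = band_power(trials, fs=256, low=4.0, high=8.0)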

  7. Recursive inverse factorization.

    PubMed

    Rubensson, Emanuel H; Bock, Nicolas; Holmström, Erik; Niklasson, Anders M N

    2008-03-14

    A recursive algorithm for the inverse factorization S^-1 = ZZ^* of Hermitian positive definite matrices S is proposed. The inverse factorization is based on iterative refinement [A. M. N. Niklasson, Phys. Rev. B 70, 193102 (2004)] combined with a recursive decomposition of S. As the computational kernel is matrix-matrix multiplication, the algorithm can be parallelized, and the computational effort increases linearly with system size for systems with sufficiently sparse matrices. Recent advances in network theory are used to find appropriate recursive decompositions. We show that optimization of the so-called network modularity results in an improved partitioning compared to other approaches, in particular when the recursive inverse factorization is applied to overlap matrices of irregularly structured three-dimensional molecules.
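
    The iterative-refinement kernel can be illustrated with a dense, non-recursive sketch: starting from a scaled identity, each step multiplies Z by a low-order correction until Z^T S Z approaches the identity. The recursive decomposition and sparsity exploitation of the actual algorithm are omitted, and the initial scaling is just one choice that guarantees convergence for symmetric positive definite S:

    import numpy as np

    def inverse_factor(S, tol=1e-10, max_iter=100):
        """Refine Z such that Z.T @ S @ Z -> I, i.e. inv(S) = Z @ Z.T,
        for a symmetric positive definite matrix S (dense sketch)."""
        n = S.shape[0]
        I = np.eye(n)
        Z = I / np.sqrt(np.linalg.norm(S, 2))   # ensures ||I - Z.T S Z|| < 1
        for _ in range(max_iter):
            delta = I - Z.T @ S @ Z
            if np.linalg.norm(delta) < tol:
                break
            Z = Z @ (I + 0.5 * delta)           # first-order refinement step
        return Z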

  8. Fetal ECG extraction using independent component analysis by Jade approach

    NASA Astrophysics Data System (ADS)

    Giraldo-Guzmán, Jader; Contreras-Ortiz, Sonia H.; Lasprilla, Gloria Isabel Bautista; Kotas, Marian

    2017-11-01

    Fetal ECG monitoring is a useful method to assess fetal health and detect abnormal conditions. In this paper we propose an approach to extract the fetal ECG from abdomen and chest signals using independent component analysis based on the joint approximate diagonalization of eigenmatrices (JADE) approach. JADE avoids redundancy, which reduces matrix dimension and computational costs. Signals were filtered with a high pass filter to eliminate low frequency noise. Several levels of decomposition were tested until the fetal ECG was recognized in one of the separated output sources. The proposed method is fast and shows good performance.
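
    A minimal sketch of the pipeline described here (high-pass filtering followed by ICA source separation) is shown below. scikit-learn ships FastICA rather than JADE, so FastICA stands in for the joint approximate diagonalization step; the cutoff frequency and number of sources are assumptions:

    from scipy.signal import butter, filtfilt
    from sklearn.decomposition import FastICA

    def extract_sources(signals, fs, n_sources=4, cutoff=1.0):
        """Separate candidate maternal/fetal components from multichannel
        abdomen and chest recordings.
        signals: (n_samples, n_channels) array; fs: sampling rate in Hz."""
        b, a = butter(4, cutoff / (fs / 2), btype="high")  # remove baseline drift
        filtered = filtfilt(b, a, signals, axis=0)
        ica = FastICA(n_components=n_sources, random_state=0)
        return ica.fit_transform(filtered)    # columns are separated sources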

  9. Climate fails to predict wood decomposition at regional scales

    Treesearch

    Mark A. Bradford; Robert J. Warren; Petr Baldrian; Thomas W. Crowther; Daniel S. Maynard; Emily E. Oldfield; William R. Wieder; Stephen A. Wood; Joshua R. King

    2014-01-01

    Decomposition of organic matter strongly influences ecosystem carbon storage [1]. In Earth-system models, climate is a predominant control on the decomposition rates of organic matter [2-5]. This assumption is based on the mean response of decomposition to climate, yet there is a growing appreciation in other areas of global change science that projections based on...

  10. Intelligent Diagnosis Method for Rotating Machinery Using Dictionary Learning and Singular Value Decomposition

    PubMed Central

    Han, Te; Jiang, Dongxiang; Zhang, Xiaochen; Sun, Yankui

    2017-01-01

    Rotating machinery is widely used in industrial applications. With the trend towards more precise and more critical operating conditions, mechanical failures may easily occur. Condition monitoring and fault diagnosis (CMFD) technology is an effective tool to enhance the reliability and security of rotating machinery. In this paper, an intelligent fault diagnosis method based on dictionary learning and singular value decomposition (SVD) is proposed. First, the dictionary learning scheme is capable of generating an adaptive dictionary whose atoms reveal the underlying structure of raw signals. Essentially, dictionary learning is employed as an adaptive feature extraction method that requires no prior knowledge. Second, the singular value sequence of the learned dictionary matrix serves as the feature vector. Since this vector is generally of high dimensionality, a simple and practical principal component analysis (PCA) is applied to reduce its dimensionality. Finally, the K-nearest neighbor (KNN) algorithm is adopted for automatic identification and classification of fault patterns. Two experimental case studies are investigated to corroborate the effectiveness of the proposed method in intelligent diagnosis of rotating machinery faults. The comparison analysis validates that the dictionary learning-based matrix construction approach outperforms mode decomposition-based methods in terms of capacity and adaptability for feature extraction. PMID:28346385
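
    A compact sketch of the described chain (dictionary learning, singular values of the learned dictionary as features, then PCA and KNN) might look as follows; the segmentation scheme, atom count, and segment length are illustrative assumptions rather than the paper's settings:

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning, PCA
    from sklearn.neighbors import KNeighborsClassifier

    def feature_vector(signal, n_atoms=32, atom_len=64):
        """Learn an adaptive dictionary from overlapping segments of one
        vibration signal; return the singular value sequence of the
        dictionary matrix as the raw feature vector."""
        segments = np.lib.stride_tricks.sliding_window_view(
            signal, atom_len)[::atom_len // 2]
        dico = MiniBatchDictionaryLearning(n_components=n_atoms, random_state=0)
        D = dico.fit(segments).components_          # (n_atoms, atom_len)
        return np.linalg.svd(D, compute_uv=False)

    # Downstream: PCA for dimensionality reduction, then KNN classification.
    # X = PCA(n_components=8).fit_transform(
    #     np.vstack([feature_vector(s) for s in signals]))
    # clf = KNeighborsClassifier(n_neighbors=5).fit(X, labels)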

  11. Efficient morse decompositions of vector fields.

    PubMed

    Chen, Guoning; Mischaikow, Konstantin; Laramee, Robert S; Zhang, Eugene

    2008-01-01

    Existing topology-based vector field analysis techniques rely on the ability to extract individual trajectories such as fixed points, periodic orbits, and separatrices, which are sensitive to noise and to errors introduced by simulation and interpolation. This can make such vector field analysis unsuitable for rigorous interpretation. We advocate the use of Morse decompositions, which are robust with respect to perturbations, to encode the topological structure of a vector field in the form of a directed graph, called a Morse connection graph (MCG). While an MCG exists for every vector field, it need not be unique. Previous techniques for computing MCGs, while fast, are overly conservative and usually result in MCGs that are too coarse to be useful in applications. To address this issue, we present a new technique for performing Morse decomposition based on the concept of tau-maps, which typically provides finer MCGs than existing techniques. Furthermore, the choice of tau provides a natural tradeoff between the fineness of the MCGs and the computational cost. We provide efficient implementations of Morse decomposition based on tau-maps, which include the use of forward and backward mapping techniques and an adaptive approach to constructing better approximations of the images of the triangles in the meshes used for simulation. Furthermore, we propose the use of spatial tau-maps in addition to the original temporal tau-maps. These provide additional trade-offs between the quality of the MCGs and the speed of computation. We demonstrate the utility of our technique with various examples in the plane and on surfaces, including engine simulation data sets.
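
    The core graph construction can be sketched independently of any particular mesh or integrator: each cell becomes a node, edges follow the tau-map image of a cell, nontrivial strongly connected components approximate Morse sets, and condensing the graph yields an MCG-like directed graph. The flow_image callback below is a hypothetical stand-in for the forward/backward mapping machinery:

    import networkx as nx

    def morse_connection_graph(cells, flow_image):
        """Outer approximation of a Morse decomposition.
        cells: hashable cell identifiers; flow_image(c): the cells that the
        tau-map image of cell c overlaps (user-supplied)."""
        G = nx.DiGraph()
        G.add_nodes_from(cells)
        for c in cells:
            G.add_edges_from((c, d) for d in flow_image(c))
        # Nontrivial SCCs (or self-looping cells) approximate Morse sets.
        morse_sets = [s for s in nx.strongly_connected_components(G)
                      if len(s) > 1 or G.has_edge(next(iter(s)), next(iter(s)))]
        return nx.condensation(G), morse_sets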

  12. Pi2 detection using Empirical Mode Decomposition (EMD)

    NASA Astrophysics Data System (ADS)

    Mieth, Johannes Z. D.; Frühauff, Dennis; Glassmeier, Karl-Heinz

    2017-04-01

    Empirical Mode Decomposition has been used as an alternative to wavelet transformation for identifying onset times of Pi2 pulsations in data sets of the Scandinavian Magnetometer Array (SMA). Pi2 pulsations are magnetohydrodynamic waves occurring during magnetospheric substorms. Pi2 are almost always observed at substorm onset in mid to low latitudes on Earth's nightside. They are fed by magnetic energy release caused by dipolarization processes, and their periods lie between 40 and 150 seconds. Usually, Pi2 are detected using wavelet transformation. Here, Empirical Mode Decomposition (EMD) is presented as an alternative to the traditional procedure. EMD is a relatively recent signal decomposition method designed for nonlinear and non-stationary time series. It provides an adaptive, data-driven, and complete decomposition of time series into slow and fast oscillations. An optimized version using Monte-Carlo-type noise assistance is used here. Displaying the results in time-frequency space reveals a characteristic frequency modulation, which can be correlated with the onset of Pi2 pulsations. A basic algorithm to find the onset is presented. Finally, the results are compared to classical wavelet-based analysis. The use of different SMA stations furthermore allows the spatial analysis of Pi2 onset times. EMD mostly finds application in the fields of engineering and medicine; this work demonstrates the applicability of the method to geomagnetic time series.
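
    A minimal version of the detection idea, assuming the third-party PyEMD package (pip install EMD-signal) and a crude zero-crossing estimate of each IMF's mean period, could select the IMFs falling in the Pi2 band as follows; the Monte-Carlo noise-assisted variant used in the study is not shown:

    import numpy as np
    from PyEMD import EMD   # pip install EMD-signal

    def pi2_candidate_imfs(b_field, fs):
        """Decompose a magnetometer time series and keep the IMFs whose
        mean period falls in the Pi2 range (40-150 s).
        b_field: 1-D array; fs: sampling rate in Hz."""
        imfs = EMD().emd(b_field)
        keep = []
        for imf in imfs:
            sign = np.signbit(imf)
            crossings = np.count_nonzero(sign[1:] != sign[:-1])
            mean_period = 2.0 * len(imf) / (fs * max(crossings, 1))
            if 40.0 <= mean_period <= 150.0:
                keep.append(imf)
        return keep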

  13. Complete ensemble local mean decomposition with adaptive noise and its application to fault diagnosis for rolling bearings

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Liu, Zhiwen; Miao, Qiang; Zhang, Xin

    2018-06-01

    Mode mixing resulting from intermittent signals is an annoying problem associated with the local mean decomposition (LMD) method. Based on a noise-assisted approach, the ensemble local mean decomposition (ELMD) method alleviates the mode mixing issue of LMD to some degree. However, the product functions (PFs) produced by ELMD often contain considerable residual noise, so a relatively large number of ensemble trials is required to eliminate it. Furthermore, since different realizations of Gaussian white noise are added to the original signal, different trials may generate different numbers of PFs, making it difficult to take the ensemble mean. In this paper, a novel method called complete ensemble local mean decomposition with adaptive noise (CELMDAN) is proposed to solve these two problems. The method adds a particular and adaptive noise at every decomposition stage for each trial. Moreover, a unique residue is obtained after separating each PF, and this residue is used as input for the next stage. Two simulated signals are analyzed to illustrate the advantages of CELMDAN over ELMD and CEEMDAN. To further demonstrate its efficiency, the method is applied to diagnose faults for rolling bearings in an experimental case and an engineering case. The diagnosis results indicate that CELMDAN can extract more fault characteristic information with less interference than ELMD.

  14. Carbon emissions from decomposition of fire-killed trees following a large wildfire in Oregon, United States

    NASA Astrophysics Data System (ADS)

    Campbell, John L.; Fontaine, Joseph B.; Donato, Daniel C.

    2016-03-01

    A key uncertainty concerning the effect of wildfire on carbon dynamics is the rate at which fire-killed biomass (e.g., dead trees) decays and emits carbon to the atmosphere. We used a ground-based approach to compute decomposition of forest biomass killed, but not combusted, in the Biscuit Fire of 2002, an exceptionally large wildfire that burned over 200,000 ha of mixed conifer forest in southwestern Oregon, USA. A combination of federal inventory data and supplementary ground measurements afforded the estimation of fire-caused mortality and subsequent 10 year decomposition for several functionally distinct carbon pools at 180 independent locations in the burn area. Decomposition was highest for fire-killed leaves and fine roots and lowest for large-diameter wood. Decomposition rates varied somewhat among tree species and were only 35% lower for trees still standing than for trees fallen at the time of the fire. We estimate a total of 4.7 Tg C was killed but not combusted in the Biscuit Fire, 85% of which remained 10 years after the fire. Biogenic carbon emissions from fire-killed necromass were estimated to be 1.0, 0.6, and 0.4 Mg C ha^-1 yr^-1 at 1, 10, and 50 years after the fire, respectively, compared to the one-time pyrogenic emission of nearly 17 Mg C ha^-1.

  15. Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes

    NASA Astrophysics Data System (ADS)

    Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt; Stuehn, Torsten

    2017-11-01

    Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation from the outset. Here, we introduce the heterogeneous domain decomposition approach, which combines a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, theoretical modeling and scaling laws for the force computation time are proposed and studied as functions of the number of particles and the spatial resolution ratio. We also demonstrate the capabilities of the new approach by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated with the heterogeneous domain decomposition proposed in this work: an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.

  16. Phase-based motion magnification video for monitoring of vital signals using the Hermite transform

    NASA Astrophysics Data System (ADS)

    Brieva, Jorge; Moya-Albor, Ernesto

    2017-11-01

    In this paper we present a new Eulerian phase-based motion magnification technique using the Hermite Transform (HT) decomposition, inspired by the human visual system (HVS). We test our method on a sequence of the breathing of a newborn baby and on a video sequence showing the heartbeat at the wrist, detecting and magnifying the heart pulse with our technique. Our motion magnification approach is compared to the Laplacian phase-based approach by means of quantitative metrics (based on the RMS error and the Fourier transform) that measure the quality of both reconstruction and magnification. In addition, a noise robustness analysis is performed for both methods.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, X; Petrongolo, M; Wang, T

    Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded quality of the decomposed images. We have previously proposed an iterative denoising method for DECT. Using a linear decomposition function, that method does not gain the full benefit of DECT for beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filtered backprojection. Noise on the decomposed images is then suppressed by an iterative method, formulated as least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam-hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of two basis materials by one order of magnitude without sacrificing spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition of DECT. Preliminary phantom studies have shown that the proposed method improves image uniformity and reduces noise level without resolution loss. In the future, we will perform more phantom studies to further validate the performance of the proposed method. This work is supported by a Varian MRA grant.
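
    The least-square-plus-smoothness idea can be sketched as a simple gradient iteration on the first-pass material images: a fidelity term weighted by the per-pixel inverse variance-covariance of the decomposition noise plus a quadratic roughness penalty. This is a schematic reading of the abstract, not the authors' implementation; the array shapes, step size, and Laplacian penalty are assumptions:

    import numpy as np

    def denoise_material_images(x0, w, beta=0.1, n_iter=200, step=0.2):
        """Penalized weighted least-squares denoising (sketch).
        x0: (2, H, W) first-pass basis-material images;
        w:  (2, 2, H, W) per-pixel inverse variance-covariance weights."""
        x = x0.copy()
        for _ in range(n_iter):
            fidelity = np.einsum("mnhw,nhw->mhw", w, x - x0)
            rough = (4 * x
                     - np.roll(x, 1, -1) - np.roll(x, -1, -1)
                     - np.roll(x, 1, -2) - np.roll(x, -1, -2))  # discrete Laplacian
            x -= step * (fidelity + beta * rough)
        return x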

  18. Who is who in litter decomposition? Metaproteomics reveals major microbial players and their biogeochemical functions

    PubMed Central

    Schneider, Thomas; Keiblinger, Katharina M; Schmid, Emanuel; Sterflinger-Gleixner, Katja; Ellersdorfer, Günther; Roschitzki, Bernd; Richter, Andreas; Eberl, Leo; Zechmeister-Boltenstern, Sophie; Riedel, Kathrin

    2012-01-01

    Leaf-litter decomposition is a central process in carbon cycling; however, our knowledge about the microbial regulation of this process is still scarce. Metaproteomics allows us to link the abundance and activity of enzymes during nutrient cycling to their phylogenetic origin based on proteins, the 'active building blocks' in the system. Moreover, we employed metaproteomics to investigate the influence of environmental factors and nutrients on the decomposer structure and function during beech litter decomposition. Litter was collected at forest sites in Austria with different litter nutrient content. Proteins were analyzed by 1-D-SDS-PAGE followed by liquid-chromatography and tandem mass-spectrometry. Mass spectra were assigned to phylogenetic and functional groups by a newly developed bioinformatics workflow, assignments being validated by complementary approaches. We provide evidence that the litter nutrient content and the stoichiometry of C:N:P affect the decomposer community structure and activity. Fungi were found to be the main producers of extracellular hydrolytic enzymes, with no bacterial hydrolases being detected by our metaproteomics approach. Detailed investigation of microbial succession suggests that it is influenced by litter nutrient content. Microbial activity was stimulated at higher litter nutrient contents via a higher abundance and activity of extracellular enzymes. PMID:22402400

  19. The generalized Hill model: A kinematic approach towards active muscle contraction

    NASA Astrophysics Data System (ADS)

    Göktepe, Serdar; Menzel, Andreas; Kuhl, Ellen

    2014-12-01

    Excitation-contraction coupling is the physiological process of converting an electrical stimulus into a mechanical response. In muscle, the electrical stimulus is an action potential and the mechanical response is active contraction. The classical Hill model characterizes muscle contraction through one contractile element, activated by electrical excitation, and two non-linear springs, one in series and one in parallel. This rheology translates into an additive decomposition of the total stress into a passive and an active part. Here we supplement this additive decomposition of the stress by a multiplicative decomposition of the deformation gradient into a passive and an active part. We generalize the one-dimensional Hill model to the three-dimensional setting and constitutively define the passive stress as a function of the total deformation gradient and the active stress as a function of both the total deformation gradient and its active part. We show that this novel approach combines the features of both the classical stress-based Hill model and the recent active-strain models. While the notion of active stress is rather phenomenological in nature, active strain is micro-structurally motivated, physically measurable, and straightforward to calibrate. We demonstrate that our model is capable of simulating excitation-contraction coupling in cardiac muscle with its characteristic features of wall thickening, apical lift, and ventricular torsion.
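
    Read as equations, the two decompositions take the following schematic form (the notation and the factor ordering in the multiplicative split are ours; the constitutive dependencies are as stated in the abstract):

    \sigma = \sigma^{\mathrm{pas}} + \sigma^{\mathrm{act}}, \qquad
    \mathbf{F} = \mathbf{F}^{\mathrm{pas}}\,\mathbf{F}^{\mathrm{act}}, \qquad
    \sigma^{\mathrm{pas}} = \hat{\sigma}^{\mathrm{pas}}(\mathbf{F}), \qquad
    \sigma^{\mathrm{act}} = \hat{\sigma}^{\mathrm{act}}(\mathbf{F}, \mathbf{F}^{\mathrm{act}}).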

  20. Density-based Energy Decomposition Analysis for Intermolecular Interactions with Variationally Determined Intermediate State Energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Q.; Ayers, P.W.; Zhang, Y.

    2009-10-28

    The first purely density-based energy decomposition analysis (EDA) for intermolecular binding is developed within density functional theory. The most important feature of this scheme is that it variationally determines the frozen density energy, based on a constrained search formalism and implemented with the Wu-Yang algorithm [Q. Wu and W. Yang, J. Chem. Phys. 118, 2498 (2003)]. This variational process dispenses with the Heitler-London antisymmetrization of wave functions used in most previous methods and calculates the electrostatic and Pauli repulsion energies together without any distortion of the frozen density, an important fact that enables a clean separation of these two terms from the relaxation (i.e., polarization and charge transfer) terms. The new EDA also employs the constrained density functional theory approach [Q. Wu and T. Van Voorhis, Phys. Rev. A 72, 24502 (2005)] to separate out charge transfer effects. Because the charge transfer energy is based on the density flow in real space, it has a small basis set dependence. Applications of this decomposition to hydrogen bonding in the water dimer and the formamide dimer show that the frozen density energy dominates the binding in these systems, consistent with the noncovalent nature of the interactions. A more detailed examination reveals how the interplay of electrostatics and the Pauli repulsion determines the distance and angular dependence of these hydrogen bonds.

  1. Reactive Goal Decomposition Hierarchies for On-Board Autonomy

    NASA Astrophysics Data System (ADS)

    Hartmann, L.

    2002-01-01

    As our experience grows, space missions and systems are expected to address ever more complex and demanding requirements with fewer resources (e.g., mass, power, budget). One approach to accommodating these higher expectations is to increase the level of autonomy to improve the capabilities and robustness of on-board systems and to simplify operations. The goal decomposition hierarchies described here provide a simple but powerful form of goal-directed behavior that is relatively easy to implement for space systems. A goal corresponds to a state or condition that an operator of the space system would like to bring about. In the system described here, goals are decomposed into simpler subgoals until the subgoals are simple enough to execute directly. For each goal there is an activation condition and a set of decompositions. The decompositions correspond to different ways of achieving the higher level goal. Each decomposition contains a gating condition and a set of subgoals to be "executed" sequentially or in parallel. The gating conditions are evaluated in order and for the first one that is true, the corresponding decomposition is executed in order to achieve the higher level goal. The activation condition specifies global conditions (i.e., for all decompositions of the goal) that need to hold in order for the goal to be achieved. In real-time, parameters and state information are passed between goals and subgoals in the decomposition; a termination indication (success, failure, degree) is passed up when a decomposition finishes executing. The lowest level decompositions include servo control loops and finite state machines for generating control signals and sequencing I/O. Semaphores and shared memory are used to synchronize and coordinate decompositions that execute in parallel. The goal decomposition hierarchy is reactive in that the generated behavior is sensitive to the real-time state of the system and the environment. That is, the system is able to react to state and environment and in general can terminate the execution of a decomposition and attempt a new decomposition at any level in the hierarchy. This goal decomposition system is suitable for workstation, microprocessor and FPGA implementation and thus is able to support the full range of prototyping activities, from mission design in the laboratory to development of the FPGA firmware for the flight system. This approach is based on previous artificial intelligence work including (1) Brooks' subsumption architecture for robot control, (2) Firby's Reactive Action Package System (RAPS) for mediating between high level automated planning and low level execution and (3) hierarchical task networks for automated planning. Reactive goal decomposition hierarchies can be used for a wide variety of on-board autonomy applications including automating low level operation sequences (such as scheduling prerequisite operations, e.g., heaters, warm-up periods, monitoring power constraints), coordinating multiple spacecraft as in formation flying and constellations, robot manipulator operations, rendezvous, docking, servicing, assembly, on-orbit maintenance, planetary rover operations, solar system and interstellar probes, intelligent science data gathering and disaster early warning. Goal decomposition hierarchies can support high level fault tolerance.
Given models of on-board resources and goals to accomplish, the decomposition hierarchy could allocate resources to goals taking into account existing faults and reallocating resources in real time as new faults arise. Resources to be modeled include memory (e.g., ROM, FPGA configuration memory, processor memory, payload instrument memory), processors, on-board and interspacecraft network nodes and links, sensors, actuators (e.g., attitude determination and control, guidance and navigation) and payload instruments. A goal decomposition hierarchy could be defined to map mission goals and tasks to available on-board resources. As faults occur and are detected, the resource allocation is modified to avoid using the faulty resource. Goal decomposition hierarchies can implement variable autonomy (in which the operator chooses to command the system at a high or low level), mixed initiative planning (in which the system is able to interact with the operator, e.g., to request operator intervention when a working envelope is exceeded) and distributed control (in which, for example, multiple spacecraft cooperate to accomplish a task without a fixed master). The full paper will describe in greater detail how goal decompositions work, how they can be implemented, techniques for implementing a candidate application and the current state of the FPGA implementation.
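
    A minimal Python sketch of the goal/decomposition structure described above: each goal carries an activation condition and an ordered list of decompositions, each decomposition a gating condition and subgoals, and the first decomposition whose gate holds is executed. Parallel execution, parameter passing, and termination degrees are omitted; all names are illustrative:

    from dataclasses import dataclass, field
    from typing import Callable, List, Optional

    @dataclass
    class Decomposition:
        gate: Callable[[dict], bool]        # gating condition on system state
        subgoals: List["Goal"]              # executed sequentially if gate holds

    @dataclass
    class Goal:
        name: str
        activation: Callable[[dict], bool]  # global condition for all decompositions
        decompositions: List[Decomposition] = field(default_factory=list)
        action: Optional[Callable[[dict], bool]] = None  # leaf: servo loop, FSM, ...

        def achieve(self, state: dict) -> bool:
            if not self.activation(state):
                return False
            if self.action is not None:     # primitive goal executes directly
                return self.action(state)
            for d in self.decompositions:   # first decomposition whose gate is true
                if d.gate(state):
                    return all(g.achieve(state) for g in d.subgoals)
            return False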

  2. Investigation of automated task learning, decomposition and scheduling

    NASA Technical Reports Server (NTRS)

    Livingston, David L.; Serpen, Gursel; Masti, Chandrashekar L.

    1990-01-01

    The details and results of research conducted in the application of neural networks to task planning and decomposition are presented. Task planning and decomposition are operations that humans perform in a reasonably efficient manner. Without the use of good heuristics and usually much human interaction, automatic planners and decomposers generally do not perform well due to the intractable nature of the problems under consideration. The human-like performance of neural networks has shown promise for generating acceptable solutions to intractable problems such as planning and decomposition; this was the primary motivation for the study. The basis for the work is the use of state machines to model tasks. State machine models provide a useful means for examining the structure of tasks, since many formal techniques have been developed for their analysis and synthesis. The approach taken is to integrate the strong algebraic foundations of state machines with the heretofore trial-and-error approach to neural network synthesis.

  3. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    NASA Astrophysics Data System (ADS)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems, both static and real-time, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach for face recognition. Principal component analysis is chosen over other algorithms for its relative simplicity, efficiency, and robustness. Face recognition here means identifying a person from facial features, and it resembles factor analysis in the sense of extracting the principal components of an image. Principal component analysis suffers from some drawbacks, mainly poor discriminatory power and the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification. The experimental results indicate that this face recognition method achieves a significant improvement in recognition rate as well as better computational efficiency.
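
    A minimal sketch of the combination described above: a wavelet approximation subband shrinks each face image before PCA and nearest-neighbor classification. The wavelet choice, decomposition level, and component count are illustrative assumptions:

    import numpy as np
    import pywt
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    def wavelet_features(images, wavelet="haar", level=2):
        """Keep only the low-frequency approximation subband of each face,
        reducing the input size to PCA and suppressing high-frequency noise.
        images: (n, H, W) grayscale faces."""
        return np.array([pywt.wavedec2(img, wavelet, level=level)[0].ravel()
                         for img in images])

    # X = wavelet_features(train_imgs)
    # clf = make_pipeline(PCA(n_components=40),
    #                     KNeighborsClassifier(n_neighbors=1)).fit(X, train_ids)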

  4. Cloud parallel processing of tandem mass spectrometry based proteomics data.

    PubMed

    Mohammed, Yassene; Mostovenko, Ekaterina; Henneman, Alex A; Marissen, Rob J; Deelder, André M; Palmblad, Magnus

    2012-10-05

    Data analysis in mass spectrometry based proteomics struggles to keep pace with the advances in instrumentation and the increasing rate of data acquisition. Analyzing this data involves multiple steps requiring diverse software, using different algorithms and data formats. Speed and performance of the mass spectral search engines are continuously improving, although not always quickly enough to meet the challenges posed by the volume of acquired data. Improving and parallelizing the search algorithms is one possibility; data decomposition presents another, simpler strategy for introducing parallelism. We describe a general method for parallelizing identification of tandem mass spectra using data decomposition that keeps the search engine intact and wraps the parallelization around it. We introduce two algorithms for decomposing mzXML files and recomposing resulting pepXML files. This makes the approach applicable to different search engines, including those relying on sequence databases and those searching spectral libraries. We use cloud computing to deliver the computational power and scientific workflow engines to interface and automate the different processing steps. We show how to leverage these technologies to achieve faster data analysis in proteomics and present three scientific workflows for parallel database as well as spectral library search using our data decomposition programs, X!Tandem and SpectraST.

  5. Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.

    PubMed

    Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D

    2015-05-08

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.

  7. 3-D parallel program for numerical calculation of gas dynamics problems with heat conductivity on distributed memory computational systems (CS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sofronov, I.D.; Voronin, B.L.; Butnev, O.I.

    1997-12-31

    The aim of the work performed is to develop a 3D parallel program for numerical calculation of gas dynamics problems with heat conductivity on distributed memory computational systems (CS), satisfying the condition that the numerical results be independent of the number of processors involved. Two basically different approaches to the structure of massively parallel computations have been developed. The first approach uses a 3D data matrix decomposition reconstructed at each temporal cycle and is a development of parallelization algorithms for multiprocessor CS with shareable memory. The second approach is based on a 3D data matrix decomposition that is not reconstructed during a temporal cycle. The program was developed on the 8-processor CS MP-3 made at VNIIEF and was adapted to the massively parallel CS Meiko-2 at LLNL by joint efforts of the VNIIEF and LLNL staffs. A large number of numerical experiments has been carried out with up to 256 processors, and the efficiency of parallelization has been evaluated as a function of the number of processors and their parameters.

  8. A reduced-order model for compressible flows with buffeting condition using higher order dynamic mode decomposition with a mode selection criterion

    NASA Astrophysics Data System (ADS)

    Kou, Jiaqing; Le Clainche, Soledad; Zhang, Weiwei

    2018-01-01

    This study proposes an improvement in the performance of reduced-order models (ROMs) based on dynamic mode decomposition to model the flow dynamics of the attractor from a transient solution. By combining higher order dynamic mode decomposition (HODMD) with an efficient mode selection criterion, the HODMD with criterion (HODMDc) ROM is able to identify dominant flow patterns with high accuracy. This helps us to develop a more parsimonious ROM structure, allowing better predictions of the attractor dynamics. The method is tested on the solution of a NACA0012 airfoil buffeting in transonic flow, and its good performance in both the reconstruction of the original solution and the prediction of the permanent dynamics is shown. In addition, the robustness of the method has been successfully tested using different types of parameters, indicating that the proposed ROM approach is a promising tool for use with both numerical simulations and experimental data.
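
    For orientation, the standard first-order dynamic mode decomposition that HODMD builds on can be sketched in a few lines of NumPy; the higher-order snapshot stacking and the mode selection criterion of the paper are not shown, and the truncation rank is an assumption:

    import numpy as np

    def dmd(X, Y, rank):
        """Exact DMD. X, Y: (n_dof, n_snapshots) matrices with Y[:, k]
        the state one time step after X[:, k]."""
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        Ur = U[:, :rank]
        Sinv = np.diag(1.0 / s[:rank])
        Vr = Vh[:rank].conj().T
        A_tilde = Ur.conj().T @ Y @ Vr @ Sinv   # reduced linear operator
        eigvals, W = np.linalg.eig(A_tilde)     # Ritz values: growth/frequency
        modes = Y @ Vr @ Sinv @ W               # exact DMD modes
        return eigvals, modes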

  9. SDE decomposition and A-type stochastic interpretation in nonequilibrium processes

    NASA Astrophysics Data System (ADS)

    Yuan, Ruoshi; Tang, Ying; Ao, Ping

    2017-12-01

    An innovative theoretical framework for stochastic dynamics based on the decomposition of a stochastic differential equation (SDE) into a dissipative component, a detailed-balance-breaking component, and a dual-role potential landscape has been developed, which has fruitful applications in physics, engineering, chemistry, and biology. It introduces the A-type stochastic interpretation of the SDE beyond the traditional Ito or Stratonovich interpretation or even the α-type interpretation for multidimensional systems. The potential landscape serves as a Hamiltonian-like function in nonequilibrium processes without detailed balance, which extends this important concept from equilibrium statistical physics to the nonequilibrium region. A question on the uniqueness of the SDE decomposition was recently raised. Our review of both the mathematical and physical aspects shows that uniqueness is guaranteed. The demonstration leads to a better understanding of the robustness of the novel framework. In addition, we discuss related issues including the limitations of an approach to obtaining the potential function from a steady-state distribution.
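
    Schematically, the decomposition rewrites the original SDE (notation ours, details as in the cited framework)

    \dot{x} = f(x) + \zeta(x, t)

    into the form

    [\, S(x) + A(x) \,]\, \dot{x} = -\nabla \phi(x) + \xi(x, t),

    with S(x) symmetric (the dissipative component), A(x) antisymmetric (the detailed-balance-breaking component), and \phi(x) the dual-role potential landscape; the A-type interpretation then fixes how the noise \xi is to be read.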

  10. Numerical simulations of incompressible laminar flows using viscous-inviscid interaction procedures

    NASA Astrophysics Data System (ADS)

    Shatalov, Alexander V.

    The present method is based on the Helmholtz velocity decomposition, where the velocity is written as a sum of irrotational (gradient of a potential) and rotational (correction due to vorticity) components. Substitution of the velocity decomposition into the continuity equation yields an equation for the potential, while substitution into the momentum equations yields equations for the velocity corrections. A continuation approach is used to relate the pressure to the gradient of the potential through a modified Bernoulli's law, which allows the elimination of the pressure variable from the momentum equations. The present work considers steady and unsteady two-dimensional incompressible flows over an infinite cylinder and a NACA 0012 airfoil. The numerical results are compared against standard methods (stream function-vorticity and SMAC methods) and data available in the literature. The results demonstrate that the proposed formulation leads to a good approximation with some possible benefits compared to the available formulations. The method is not restricted to two-dimensional flows and can be used for viscous-inviscid domain decomposition calculations.
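
    In equations, the decomposition and the resulting potential equation read (notation ours):

    \mathbf{u} = \nabla \phi + \mathbf{u}_v, \qquad
    \nabla \cdot \mathbf{u} = 0
    \;\;\Rightarrow\;\;
    \nabla^2 \phi = -\nabla \cdot \mathbf{u}_v ,

    after which the pressure is eliminated from the momentum equations via a modified Bernoulli relation of the schematic form p/\rho = C - \partial\phi/\partial t - \tfrac{1}{2}|\mathbf{u}|^2 (the exact relation used in the work may differ).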

  11. Fast and efficient indexing approach for object recognition

    NASA Astrophysics Data System (ADS)

    Hefnawy, Alaa; Mashali, Samia A.; Rashwan, Mohsen; Fikri, Magdi

    1999-08-01

    This paper introduces a fast and efficient indexing approach for both 2D and 3D model-based object recognition in the presence of rotation, translation, and scale variations of objects. The indexing entries are computed after preprocessing the data by Haar wavelet decomposition. The scheme relies on a unified image feature detection approach based on Zernike moments. A set of low level features, e.g. high precision edges and gray level corners, is estimated by a set of orthogonal Zernike moments calculated locally around every image point. High-dimensional, highly descriptive indexing entries are then calculated based on the correlation of these local features and employed for fast access to the model database to generate hypotheses. A list of the most likely candidate models is then produced by evaluating the hypotheses. Experimental results are included to demonstrate the effectiveness of the proposed indexing approach.

  12. Theoretical Studies of Chemical Reactions following Electronic Excitation

    NASA Technical Reports Server (NTRS)

    Chaban, Galina M.

    2003-01-01

    The use of multi-configurational wave functions is demonstrated for several processes: tautomerization reactions in the ground and excited states of the DNA base adenine, dissociation of the glycine molecule after electronic excitation, and decomposition/deformation of novel rare gas molecules HRgF. These processes involve bond breaking/formation and require multi-configurational approaches that include dynamic correlation.

  13. GC × GC-TOFMS and supervised multivariate approaches to study human cadaveric decomposition olfactive signatures.

    PubMed

    Stefanuto, Pierre-Hugues; Perrault, Katelynn A; Stadler, Sonja; Pesesse, Romain; LeBlanc, Helene N; Forbes, Shari L; Focant, Jean-François

    2015-06-01

    In forensic thanato-chemistry, the understanding of the process of soft tissue decomposition is still limited. A better understanding of the decomposition process and the characterization of the associated volatile organic compounds (VOC) can help to improve the training of victim recovery (VR) canines, which are used to search for trapped victims in natural disasters or to locate corpses during criminal investigations. The complexity of matrices and the dynamic nature of this process require the use of comprehensive analytical methods for investigation. Moreover, the variability of the environment and between individuals creates additional difficulties in terms of normalization. The resolution of the complex mixture of VOCs emitted by a decaying corpse can be improved using comprehensive two-dimensional gas chromatography (GC × GC), compared to classical single-dimensional gas chromatography (1DGC). This study combines the analytical advantages of GC × GC coupled to time-of-flight mass spectrometry (TOFMS) with the data handling robustness of supervised multivariate statistics to investigate the VOC profile of human remains during early stages of decomposition. Various supervised multivariate approaches are compared to interpret the large data set. Moreover, early decomposition stages of pig carcasses (typically used as human surrogates in field studies) are also monitored to obtain a direct comparison of the two VOC profiles and estimate the robustness of this human decomposition analog model. In this research, we demonstrate that pig and human decomposition processes can be described by the same trends for the major compounds produced during the early stages of soft tissue decomposition.

  14. Horizontal decomposition of data table for finding one reduct

    NASA Astrophysics Data System (ADS)

    Hońko, Piotr

    2018-04-01

    Attribute reduction, being one of the most essential tasks in rough set theory, is a challenge for data that does not fit in the available memory. This paper proposes new definitions of attribute reduction using horizontal data decomposition. Algorithms for computing a superreduct and subsequently exact reducts of a data table are developed and experimentally verified. In the proposed approach, the size of the subtables obtained during the decomposition can be arbitrarily small. Reducts of the subtables are computed independently from one another using any heuristic method for finding one reduct. Compared with standard attribute reduction methods, the proposed approach can produce superreducts that usually differ only slightly from an exact reduct. The approach needs comparable time and much less memory to reduce the attribute set. The method proposed for removing unnecessary attributes from superreducts executes relatively quickly for bigger databases.

  15. Conceptual design optimization study

    NASA Technical Reports Server (NTRS)

    Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.

    1990-01-01

    The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.

  16. On the thermomechanical coupling in dissipative materials: A variational approach for generalized standard materials

    NASA Astrophysics Data System (ADS)

    Bartels, A.; Bartel, T.; Canadija, M.; Mosler, J.

    2015-09-01

    This paper deals with the thermomechanical coupling in dissipative materials. The focus lies on finite strain plasticity theory and the temperature increase resulting from plastic deformation. For this type of problem, two fundamentally different modeling approaches can be found in the literature: (a) models based on thermodynamical considerations and (b) models based on the so-called Taylor-Quinney factor. While a naive straightforward implementation of thermodynamically consistent approaches usually leads to an over-prediction of the temperature increase due to plastic deformation, models relying on the Taylor-Quinney factor often violate fundamental physical principles such as the first and the second law of thermodynamics. In this paper, a thermodynamically consistent framework is elaborated which indeed allows the realistic prediction of the temperature evolution. In contrast to previously proposed frameworks, it is based on a fully three-dimensional, finite strain setting and it naturally covers coupled isotropic and kinematic hardening - also based on non-associative evolution equations. Considering a variationally consistent description based on incremental energy minimization, it is shown that the aforementioned problem (thermodynamical consistency and a realistic temperature prediction) is essentially equivalent to correctly defining the decomposition of the total energy into stored and dissipative parts. Interestingly, this decomposition shows strong analogies to the Taylor-Quinney factor. In this respect, the Taylor-Quinney factor can be well motivated from a physical point of view. Furthermore, certain intervals for this factor can be derived in order to guarantee that fundamental physical principles are fulfilled a priori. Representative examples demonstrate the predictive capabilities of the final constitutive modeling framework.
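
    For reference, the Taylor-Quinney factor \beta enters the temperature evolution in the familiar schematic form (density \rho, specific heat c, Cauchy stress \boldsymbol{\sigma}, plastic strain rate \dot{\boldsymbol{\varepsilon}}^p; conduction and other source terms omitted):

    \rho \, c \, \dot{T} = \beta \, \boldsymbol{\sigma} : \dot{\boldsymbol{\varepsilon}}^{p},

    i.e. \beta is the fraction of the plastic power dissipated as heat rather than stored in the material, which is exactly the stored/dissipative energy split discussed above.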

  17. Atomic-batched tensor decomposed two-electron repulsion integrals

    NASA Astrophysics Data System (ADS)

    Schmitz, Gunnar; Madsen, Niels Kristian; Christiansen, Ove

    2017-04-01

    We present a new integral format for 4-index electron repulsion integrals, in which several strategies like the Resolution-of-the-Identity (RI) approximation and other more general tensor-decomposition techniques are combined with an atomic batching scheme. The 3-index RI integral tensor is divided into sub-tensors defined by atom pairs, on which we perform an accelerated decomposition to the canonical product (CP) format. In a first step, the RI integrals are decomposed to a high-rank CP-like format by repeated singular value decompositions followed by a rank reduction, which uses a Tucker decomposition as an intermediate step to lower the prefactor of the algorithm. After decomposing the RI sub-tensors (within the Coulomb metric), they can be reassembled to the full decomposed tensor (RC approach) or the atomic batched format can be maintained (ABC approach). In the first case, the integrals are very similar to the well-known tensor hypercontraction integral format, which has gained some attention in recent years since it allows for quartic-scaling implementations of MP2 and some coupled cluster methods. At the MP2 level, the RC and ABC approaches are compared concerning efficiency and storage requirements, and the overall accuracy of the approach is assessed. Initial test calculations show good accuracy and that the approach is not limited to small systems.
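
    The canonical product (CP) target format can be illustrated with the third-party tensorly package (assuming a recent version in which parafac returns a CP tensor); the ALS-based parafac here merely stands in for the paper's accelerated SVD/Tucker-based decomposition, and the tensor sizes and rank are toy values:

    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac

    # Toy 3-index tensor standing in for an RI integral block B[P, mu, nu].
    B = tl.tensor(np.random.rand(20, 12, 12))

    cp = parafac(B, rank=8)            # decompose to canonical product format
    B_approx = tl.cp_to_tensor(cp)     # reassemble to check the error
    rel_err = np.linalg.norm(B - B_approx) / np.linalg.norm(B)
    print(f"relative CP reconstruction error: {rel_err:.2e}")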

  19. Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach

    NASA Technical Reports Server (NTRS)

    Mak, Victor W. K.

    1986-01-01

    Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.

  20. A comparative study on pyrolysis characteristics of Indonesian biomass and low grade coal

    NASA Astrophysics Data System (ADS)

    Adhityatama, G. I.; Hanif, F.; Cahyono, R. B.; Hidayat, M.; Akiyama, T.

    2017-05-01

    A comparative study on pyrolysis of biomass and low grade coal was conducted using a thermogravimetric analyzer. Each kind of biomass and coal has a characteristic pyrolysis behavior, which can be explained in terms of its individual component characteristics. All fuels experienced a small weight loss as temperatures approached 450 K because of moisture evaporation. The coal had the smallest total weight loss compared to the biomass due to its high content of fixed carbon, suggesting that coal would produce large amounts of char and small amounts of volatile matter (e.g., tar and gas). The biomass samples exhibited a similar decomposition sequence: hemicellulose breaks down first at temperatures of 470 to 530 K, cellulose follows in the temperature range 510 to 620 K, and lignin pyrolyzes last at temperatures of 550 to 770 K. The thermal decomposition of biomass showed two predominant peaks, corresponding first to the decomposition of cellulose and second to the decomposition of lignin. Meanwhile, the coal exhibited only a single peak because this fuel is predominantly composed of carbon. Based on the kinetic analysis, coal has a smaller activation energy (55.32 kJ/mol) than biomass (89.80-172.86 kJ/mol). Pyrolysis also created a more porous solid product. These results are important for the optimization of energy conversion from these solid fuels. Biomass yielded less solid product and more tar, and thus would be suitable for liquid and gaseous energy production.

  1. Statistical iterative material image reconstruction for spectral CT using a semi-empirical forward model

    NASA Astrophysics Data System (ADS)

    Mechlem, Korbinian; Ehn, Sebastian; Sellerer, Thorsten; Pfeiffer, Franz; Noël, Peter B.

    2017-03-01

    In spectral computed tomography (spectral CT), the additional information about the energy dependence of attenuation coefficients can be exploited to generate material selective images. These images have found applications in various areas such as artifact reduction, quantitative imaging or clinical diagnosis. However, significant noise amplification on material decomposed images remains a fundamental problem of spectral CT. Most spectral CT algorithms separate the process of material decomposition and image reconstruction. Separating these steps is suboptimal because the full statistical information contained in the spectral tomographic measurements cannot be exploited. Statistical iterative reconstruction (SIR) techniques provide an alternative, mathematically elegant approach to obtaining material selective images with improved tradeoffs between noise and resolution. Furthermore, image reconstruction and material decomposition can be performed jointly. This is accomplished by a forward model which directly connects the (expected) spectral projection measurements and the material selective images. To obtain this forward model, detailed knowledge of the different photon energy spectra and the detector response was assumed in previous work. However, accurately determining the spectrum is often difficult in practice. In this work, a new algorithm for statistical iterative material decomposition is presented. It uses a semi-empirical forward model which relies on simple calibration measurements. Furthermore, an efficient optimization algorithm based on separable surrogate functions is employed. This partially negates one of the major shortcomings of SIR, namely high computational cost and long reconstruction times. Numerical simulations and real experiments show strongly improved image quality and reduced statistical bias compared to projection-based material decomposition.

  2. Big Data-Based Approach to Detect, Locate, and Enhance the Stability of an Unplanned Microgrid Islanding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang; Li, Yan; Zhang, Yingchen

    In this paper, a big data-based approach is proposed for the security improvement of an unplanned microgrid islanding (UMI). The proposed approach contains two major steps: the first step is big data analysis of wide-area monitoring to detect a UMI and locate it; the second step is particle swarm optimization (PSO)-based stability enhancement for the UMI. First, an optimal synchrophasor measurement device selection (OSMDS) and matching pursuit decomposition (MPD)-based spatial-temporal analysis approach is proposed to significantly reduce the volume of data while keeping appropriate information from the synchrophasor measurements. Second, a random forest-based ensemble learning approach is trained to detect the UMI. When combined with grid topology, the UMI can be located. Then the stability problem of the UMI is formulated as an optimization problem and the PSO is used to find the optimal operational parameters of the UMI. An eigenvalue-based multiobjective function is proposed, which aims to improve the damping and dynamic characteristics of the UMI. Finally, the simulation results demonstrate the effectiveness and robustness of the proposed approach.
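
    The second stage above treats stability enhancement as an optimization problem solved with PSO. For orientation, here is a minimal, generic particle swarm loop; the quadratic objective and every hyperparameter are placeholders rather than the eigenvalue-based multiobjective function of the paper.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimizer (global-best topology)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))     # positions
    v = np.zeros_like(x)                            # velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
    g = pbest[pbest_f.argmin()].copy()              # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Example: minimize a simple quadratic stand-in for the stability objective.
best_x, best_f = pso(lambda z: np.sum(z ** 2), dim=5)
print(best_x.round(3), best_f)
```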

  3. Determination of gold and cobalt dopants in advanced materials based on tin oxide by slurry sampling high-resolution continuum source graphite furnace atomic absorption spectrometry

    NASA Astrophysics Data System (ADS)

    Filatova, Daria G.; Eskina, Vasilina V.; Baranovskaya, Vasilisa B.; Vladimirova, Svetlana A.; Gaskov, Alexander M.; Rumyantseva, Marina N.; Karpov, Yuri A.

    2018-02-01

    A novel approach is developed for the determination of Co and Au dopants in advanced materials based on tin oxide using high-resolution continuum source graphite furnace atomic absorption spectrometry (HR CS GFAAS) with direct slurry sampling. Sodium carboxymethylcellulose (Na-CMC) is an effective stabilizer for diluted suspensions; its use allows the analytes to be transferred into the graphite furnace completely and reproducibly. The relative standard deviation obtained by HR CS GFAAS was not higher than 4%. Accuracy was verified using inductively coupled plasma mass spectrometry (ICP-MS) on solutions after decomposition as a comparative technique. To determine Au and Co in the bulk of SnO2, acid decomposition conditions (HCl, HF) using an autoclave in a microwave oven were proposed.

  4. Nanoporous Substrate with Mixed Nanoclusters for Surface Enhanced Raman Scattering.

    NASA Astrophysics Data System (ADS)

    Chang, Sehoon; Ko, Hyunhyub; Singamaneni, Srikanth; Gunawidjaja, Ray; Tsukruk, Vladimir

    2009-03-01

    Rapid detection of plastic and liquid explosives is an urgent need for various societal and technological reasons. We employed a novel design of surface-enhanced Raman scattering (SERS)-active substrate based on porous alumina membranes decorated with mixed nanoclusters of gold nanorods and nanoparticles. We demonstrated trace-level detection of several important explosives such as dinitrotoluene (DNT), trinitrotoluene (TNT), and hexamethylenetriperoxidediamine (HMTD) by a fast, sensitive, and reliable Raman spectroscopic method. We achieved near molecular-level detection (about 15-30 molecules) of DNT and TNT utilizing the SERS substrate. However, trace-level detection of peroxide-based explosives such as HMTD is challenging due to their lack of common optical signatures (fluorescence, absorption in the UV-vis range) or chemical functionality. To overcome this, we employed a photochemical decomposition approach and analyzed the chemical fragments using SERS. We suggest that the tailored polymer coating, mixed nanoclusters, and laser-induced photocatalytic decomposition are all critical for achieving this unprecedented sensitivity level.

  5. Dynamics in the Decompositions Approach to Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Harding, John

    2017-12-01

    In Harding (Trans. Amer. Math. Soc. 348(5), 1839-1862, 1996) it was shown that the direct product decompositions of any non-empty set, group, vector space, and topological space X form an orthomodular poset Fact X. This is the basis for a line of study in foundational quantum mechanics replacing Hilbert spaces with other types of structures. Here we develop dynamics and an abstract version of a time-independent Schrödinger's equation in the setting of decompositions by considering representations of the group of real numbers in the automorphism group of the orthomodular poset Fact X of decompositions.
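
    As an orientation for readers used to the Hilbert-space formalism, the display below sketches the standard correspondence that the decompositional dynamics abstracts: a one-parameter group of automorphisms, which in the Hilbert-space case arises via Stone's theorem from a self-adjoint generator. The notation is generic and not taken from Harding's paper.

```latex
\varphi : (\mathbb{R},+) \longrightarrow \mathrm{Aut}(\mathrm{Fact}\,X),
\qquad \varphi_{s+t} = \varphi_s \circ \varphi_t .
% Hilbert-space special case (Stone's theorem):
U(t) = e^{-iHt/\hbar}, \qquad \varphi_t(\,\cdot\,) = U(t)\,(\,\cdot\,)\,U(t)^{\ast},
\qquad H\psi = E\psi \quad \text{(time-independent Schr\"odinger equation).}
```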

  6. Fast modal decomposition for optical fibers using digital holography.

    PubMed

    Lyu, Meng; Lin, Zhiquan; Li, Guowei; Situ, Guohai

    2017-07-26

    Eigenmode decomposition of the light field at the output end of optical fibers can provide fundamental insights into the nature of electromagnetic-wave propagation through the fibers. Here we present a fast and complete modal decomposition technique for step-index optical fibers. The proposed technique employs digital holography to measure the light field at the output end of the multimode optical fiber, and utilizes the modal orthonormal property of the basis modes to calculate the modal coefficients of each mode. Optical experiments were carried out to demonstrate the proposed decomposition technique, showing that this approach is fast, accurate and cost-effective.
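
    The central computational step described above, projecting the measured field onto an orthonormal mode basis, amounts to a discretized overlap integral per mode. A minimal numpy sketch under that assumption follows; the grid, the field and the two toy "modes" are hypothetical stand-ins for LP fiber modes sampled on the camera grid.

```python
import numpy as np

def modal_coefficients(E, modes, dA):
    """Project a complex field onto orthonormal fiber modes.

    E     : (Ny, Nx) complex field measured by digital holography
    modes : (M, Ny, Nx) complex mode fields, orthonormal under integration
    dA    : area of one grid cell
    Returns the complex modal coefficients c_m = <psi_m, E>.
    """
    return np.tensordot(np.conj(modes), E, axes=([1, 2], [0, 1])) * dA

# Toy check with two orthonormal "modes" on the unit square:
Ny = Nx = 128
dx = 1.0 / Nx
x = (np.arange(Nx) + 0.5) * dx
X, Y = np.meshgrid(x, x)
psi0 = np.ones((Ny, Nx))                         # unit norm on [0,1]^2
psi1 = np.sqrt(2.0) * np.cos(np.pi * X)          # orthogonal to psi0
modes = np.stack([psi0, psi1]).astype(complex)
E = 0.8 * psi0 + 0.6j * psi1                     # synthetic output field
c = modal_coefficients(E, modes, dA=dx * dx)
print(np.round(c, 3))            # approximately [0.8+0j, 0+0.6j]
print(np.sum(np.abs(c) ** 2))    # mode powers sum to the field power (~1.0)
```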

  7. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy.

    PubMed

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-05

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase the freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on the dose distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation from the soft constraints subject to the hard constraints, with a constraint on the l1 norm of the beam weights. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weights. After beam weight optimization, the number of beams is reduced by removing the beams with low weights and re-optimizing the weights of the remaining beams using the same model. This beam reduction technique was further validated by a mixed integer programming (MIP) model. The SVDLP approach was first tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a plan quality similar to that of the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. The SVDLP approach was then tested and compared with MultiPlan on three clinical cases of varying complexity. In general, the plans generated by the SVDLP achieve a steeper dose gradient, better conformity and less damage to normal tissues. In conclusion, the SVDLP approach effectively improves the quality of the treatment plan thanks to the use of the complete beam search space. This challenging optimization problem over the complete beam search space is effectively handled by the proposed SVD acceleration.
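
    The acceleration idea, compressing a rank-degenerate influence matrix before optimizing and back-projecting afterwards, can be illustrated compactly. The sketch below uses a truncated SVD with a nonnegative least-squares solve standing in for the paper's full LP model with hard/soft constraints and the l1 sparsity term; all matrices are synthetic.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# Hypothetical influence matrix D: dose to 2000 voxels from 400 candidate beams,
# built from 25 latent profiles so that it is strongly rank-degenerate.
D = rng.random((2000, 25)) @ rng.random((25, 400))
w_true = rng.random(400) * (rng.random(400) < 0.1)     # sparse nonnegative weights
d_target = D @ w_true                                  # toy dose prescription

# 1) Compress: truncated SVD keeping essentially all of the spectral energy.
U, s, Vt = np.linalg.svd(D, full_matrices=False)
k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 1.0 - 1e-10)) + 1

# 2) Optimize in the k-dimensional subspace. Since U has orthonormal columns,
#    ||D w - d||^2 = ||diag(s_k) V_k^T w - U_k^T d||^2 + (w-independent term).
A_small = s[:k, None] * Vt[:k]                         # k x 400 compressed system
b_small = U[:, :k].T @ d_target
w, _ = nnls(A_small, b_small)                          # nonnegative beam weights

# 3) Back-project and check the fit in the full voxel space.
print(f"rank kept: {k} of {min(D.shape)}")
rel_err = np.linalg.norm(D @ w - d_target) / np.linalg.norm(d_target)
print(f"relative dose error after back-projection: {rel_err:.2e}")
```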

  8. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy

    NASA Astrophysics Data System (ADS)

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-01

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase the freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on the dose distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation from the soft constraints subject to the hard constraints, with a constraint on the l1 norm of the beam weights. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weights. After beam weight optimization, the number of beams is reduced by removing the beams with low weights and re-optimizing the weights of the remaining beams using the same model. This beam reduction technique was further validated by a mixed integer programming (MIP) model. The SVDLP approach was first tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a plan quality similar to that of the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. The SVDLP approach was then tested and compared with MultiPlan on three clinical cases of varying complexity. In general, the plans generated by the SVDLP achieve a steeper dose gradient, better conformity and less damage to normal tissues. In conclusion, the SVDLP approach effectively improves the quality of the treatment plan thanks to the use of the complete beam search space. This challenging optimization problem over the complete beam search space is effectively handled by the proposed SVD acceleration.

  9. Performance of today’s dual energy CT and future multi energy CT in virtual non-contrast imaging and in iodine quantification: A simulation study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faby, Sebastian, E-mail: sebastian.faby@dkfz.de; Kuchenbecker, Stefan; Sawall, Stefan

    2015-07-15

    Purpose: To study the performance of different dual energy computed tomography (DECT) techniques, which are available today, and future multi energy CT (MECT) employing novel photon counting detectors in an image-based material decomposition task. Methods: The material decomposition performance of different energy-resolved CT acquisition techniques is assessed and compared in a simulation study of virtual non-contrast imaging and iodine quantification. The material-specific images are obtained via a statistically optimal image-based material decomposition. A projection-based maximum likelihood approach was used for comparison with the authors’ image-based method. The different dedicated dual energy CT techniques are simulated employing realistic noise models and x-ray spectra. The authors compare dual source DECT with fast kV switching DECT and the dual layer sandwich detector DECT approach. Subsequent scanning and a subtraction method are studied as well. Further, the authors benchmark future MECT with novel photon counting detectors in a dedicated DECT application against the performance of today’s DECT using a realistic model. Additionally, possible dual source concepts employing photon counting detectors are studied. Results: The DECT comparison study shows that dual source DECT has the best performance, followed by the fast kV switching technique and the sandwich detector approach. Comparing DECT with future MECT, the authors found noticeable material image quality improvements for an ideal photon counting detector; however, a realistic detector model with multiple energy bins predicts a performance on the level of dual source DECT at 100 kV/Sn 140 kV. Employing photon counting detectors in dual source concepts can improve the performance again above the level of a single realistic photon counting detector and also above the level of dual source DECT. Conclusions: Substantial differences in the performance of today’s DECT approaches were found for the application of virtual non-contrast and iodine imaging. Future MECT with realistic photon counting detectors currently can only perform comparably to dual source DECT at 100 kV/Sn 140 kV. Dual source concepts with photon counting detectors could be a solution to this problem, promising a better performance.

  10. An intelligent decomposition approach for efficient design of non-hierarchic systems

    NASA Technical Reports Server (NTRS)

    Bloebaum, Christina L.

    1992-01-01

    The design process associated with large engineering systems requires an initial decomposition of the complex systems into subsystem modules which are coupled through transference of output data. The implementation of such a decomposition approach assumes the ability exists to determine what subsystems and interactions exist and what order of execution will be imposed during the analysis process. Unfortunately, this is quite often an extremely complex task which may be beyond human ability to efficiently achieve. Further, in optimizing such a coupled system, it is essential to be able to determine which interactions figure prominently enough to significantly affect the accuracy of the optimal solution. The ability to determine 'weak' versus 'strong' coupling strengths would aid the designer in deciding which couplings could be permanently removed from consideration or which could be temporarily suspended so as to achieve computational savings with minimal loss in solution accuracy. An approach that uses normalized sensitivities to quantify coupling strengths is presented. The approach is applied to a coupled system composed of analysis equations for verification purposes.
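
    The coupling-strength idea can be made concrete with a toy pair of coupled analyses: converge the coupled system, then estimate normalized sensitivities S_ij = (dy_i/dy_j)(y_j/y_i) by finite differences and threshold them. The two "analysis" functions below are invented for illustration.

```python
import numpy as np

# Toy coupled subsystem analyses: each output depends on the other's output.
def analysis1(y2):            # subsystem 1
    return 10.0 + 0.02 * y2   # weakly coupled to y2

def analysis2(y1):            # subsystem 2
    return 4.0 * y1 - 5.0     # strongly coupled to y1

# Converge the coupled system by fixed-point iteration.
y1, y2 = 1.0, 1.0
for _ in range(50):
    y1, y2 = analysis1(y2), analysis2(y1)

# Normalized sensitivities S_ij = (dy_i/dy_j) * (y_j / y_i), by finite differences.
h = 1e-6
S12 = (analysis1(y2 + h) - analysis1(y2)) / h * (y2 / y1)
S21 = (analysis2(y1 + h) - analysis2(y1)) / h * (y1 / y2)
print(f"S12 = {S12:.4f} (weak), S21 = {S21:.4f} (strong)")
# A threshold on |S_ij| would decide which couplings can be suspended
# or removed with minimal loss in solution accuracy.
```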

  11. Multiple multicontrol unitary operations: Implementation and applications

    NASA Astrophysics Data System (ADS)

    Lin, Qing

    2018-04-01

    The efficient implementation of computational tasks is critical to quantum computations. In quantum circuits, multicontrol unitary operations are important components. Here, we present an extremely efficient and direct approach to multiple multicontrol unitary operations without decomposition into CNOT and single-photon gates. With the proposed approach, the necessary two-photon operations can be reduced from O(n^3) with the traditional decomposition approach to O(n), which greatly relaxes the requirements and makes large-scale quantum computation feasible. Moreover, we propose a potential application to the (n-k)-uniform hypergraph state.

  12. Application of singular value decomposition to structural dynamics systems with constraints

    NASA Technical Reports Server (NTRS)

    Juang, J.-N.; Pinson, L. D.

    1985-01-01

    Singular value decomposition is used to construct a coordinate transformation for a linear dynamic system subject to linear, homogeneous constraint equations. The method is compared with two commonly used methods, namely classical Gaussian elimination and the Walton-Steeves approach. Although the classical method requires fewer numerical operations, the singular value decomposition method is more accurate and more convenient for eliminating the dependent coordinates. Numerical examples are presented to demonstrate the application of the method.
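
    Concretely, for linear homogeneous constraints C q = 0, the right singular vectors associated with zero singular values span the admissible subspace, and projecting the mass and stiffness matrices onto that basis yields the reduced unconstrained system. A small numpy sketch with hypothetical matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 2                       # 6 coordinates, 2 constraint equations

# Hypothetical symmetric positive definite mass/stiffness and constraints C q = 0.
A = rng.standard_normal((n, n)); M = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, n)); K = B @ B.T + n * np.eye(n)
C = rng.standard_normal((m, n))

# SVD of C: rows of Vt beyond rank(C) span the null space {q : C q = 0}.
U, s, Vt = np.linalg.svd(C)
r = int(np.sum(s > s.max() * 1e-12))   # numerical rank
T = Vt[r:].T                           # n x (n-r) transformation, q = T p

# Reduced system in the independent coordinates p.
M_red, K_red = T.T @ M @ T, T.T @ K @ T
print("constraint residual:", np.linalg.norm(C @ T))       # ~ 0
eigvals = np.linalg.eigvals(np.linalg.solve(M_red, K_red))
print("constrained natural frequencies^2:", np.sort(eigvals.real))
```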

  13. Scare Tactics: Evaluating Problem Decompositions Using Failure Scenarios

    NASA Technical Reports Server (NTRS)

    Helm, B. Robert; Fickas, Stephen

    1992-01-01

    Our interest is in the design of multi-agent problem-solving systems, which we refer to as composite systems. We have proposed an approach to composite system design by decomposition of problem statements. An automated assistant called Critter provides a library of reusable design transformations which allow a human analyst to search the space of decompositions for a problem. In this paper we describe a method for evaluating and critiquing problem decompositions generated by this search process. The method uses knowledge stored in the form of failure decompositions attached to design transformations. We suggest the benefits of our critiquing method by showing how it could re-derive steps of a published development example. We then identify several open issues for the method.

  14. A New Strategy for ECG Baseline Wander Elimination Using Empirical Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Shahbakhti, Mohammad; Bagheri, Hamed; Shekarchi, Babak; Mohammadi, Somayeh; Naji, Mohsen

    2016-06-01

    Electrocardiogram (ECG) signals may be affected by various artifacts and noises that have biological and external sources. Baseline wander (BW) is a low-frequency artifact that may be caused by breathing, body movements and loose sensor contact. In this paper, a novel method based on empirical mode decomposition (EMD) for the removal of baseline noise from the ECG is presented. Compared to other EMD-based methods, the novelty of this research is to reach the optimized number of decomposition levels for ECG BW de-noising using the mean power frequency (MPF), while also considering the reduction of processing time. To evaluate the performance of the proposed method, a fifth-order Butterworth high-pass filter (BHPF) with a cut-off frequency of 0.5 Hz and a wavelet approach are applied for comparison. Three performance indices, signal-to-noise ratio (SNR), mean square error (MSE) and correlation coefficient (CC), between pure and filtered signals are utilized to qualify the presented techniques. Results suggest that the EMD-based method outperforms the other filtering methods.
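
    A minimal sketch of the core idea, removing the slowest EMD components as the baseline estimate, is shown below. It assumes the third-party PyEMD package (installed as EMD-signal) and simply drops a fixed number of trailing IMFs, rather than selecting the level with the mean-power-frequency criterion proposed in the paper.

```python
import numpy as np
from PyEMD import EMD  # third-party package, installed as "EMD-signal"

def remove_baseline_wander(ecg, n_baseline_imfs=2):
    """Remove ECG baseline wander by dropping the slowest EMD components.

    The signal is decomposed into intrinsic mode functions (IMFs); the last,
    lowest-frequency components approximate the baseline drift. (The paper
    selects how many levels to drop using mean power frequency; here the
    count is simply a fixed argument.)
    """
    imfs = EMD().emd(ecg)                      # shape: (n_imfs, len(ecg))
    baseline = imfs[-n_baseline_imfs:].sum(axis=0)
    return ecg - baseline, baseline

# Synthetic example: spiky "ECG" plus a 0.3 Hz respiratory drift.
fs = 250.0
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 21        # crude periodic spikes
drift = 0.5 * np.sin(2 * np.pi * 0.3 * t)
clean, est_drift = remove_baseline_wander(ecg + drift)
print("drift estimation error:", np.round(np.std(est_drift - drift), 3))
```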

  15. MRF energy minimization and beyond via dual decomposition.

    PubMed

    Komodakis, Nikos; Paragios, Nikos; Tziritas, Georgios

    2011-03-01

    This paper introduces a new rigorous theoretical framework to address discrete MRF-based optimization in computer vision. Such a framework exploits the powerful technique of Dual Decomposition. It is based on a projected subgradient scheme that attempts to solve an MRF optimization problem by first decomposing it into a set of appropriately chosen subproblems, and then combining their solutions in a principled way. In order to determine the limits of this method, we analyze the conditions that these subproblems have to satisfy and demonstrate the extreme generality and flexibility of such an approach. We thus show that by appropriately choosing what subproblems to use, one can design novel and very powerful MRF optimization algorithms. For instance, in this manner we are able to derive algorithms that: 1) generalize and extend state-of-the-art message-passing methods, 2) optimize very tight LP-relaxations to MRF optimization, and 3) take full advantage of the special structure that may exist in particular MRFs, allowing the use of efficient inference techniques such as, e.g., graph-cut-based methods. Theoretical analysis of the bounds related to the different algorithms derived from our framework and experimental results/comparisons using synthetic and real data for a variety of tasks in computer vision demonstrate the extreme potential of our approach.
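
    The projected-subgradient scheme is easiest to state on a toy problem: duplicate the shared variable across subproblems, solve each subproblem independently, and move the dual variables along the disagreement between their solutions. The sketch below does this for two cost tables over one discrete label; it is a generic illustration, not the vision-specific MRF machinery of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
K = 5
f1, f2 = rng.random(K), rng.random(K)   # two cost tables sharing one label variable

lam = np.zeros(K)                       # dual variables on the consistency constraint
for it in range(1, 201):
    k1 = int(np.argmin(f1 + lam))       # subproblem 1: min_k f1[k] + lam[k]
    k2 = int(np.argmin(f2 - lam))       # subproblem 2: min_k f2[k] - lam[k]
    if k1 == k2:                        # subproblems agree -> certificate of optimality
        break
    g = np.zeros(K)
    g[k1] += 1.0
    g[k2] -= 1.0                        # subgradient of the dual function
    lam += (1.0 / it) * g               # subgradient ascent, diminishing step size

print("dual decomposition label:", k1, " direct minimum:", int(np.argmin(f1 + f2)))
```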

  16. Idiopathic interstitial pneumonias and emphysema: detection and classification using a texture-discriminative approach

    NASA Astrophysics Data System (ADS)

    Fetita, C.; Chang-Chien, K. C.; Brillet, P. Y.; Prêteux, F.; Chang, R. F.

    2012-03-01

    Our study aims at developing a computer-aided diagnosis (CAD) system for fully automatic detection and classification of pathological lung parenchyma patterns in idiopathic interstitial pneumonias (IIP) and emphysema using multi-detector computed tomography (MDCT). The proposed CAD system is based on three-dimensional (3-D) mathematical morphology, texture and fuzzy logic analysis, and can be divided into four stages: (1) a multi-resolution decomposition scheme based on a 3-D morphological filter was exploited to discriminate the lung region patterns at different analysis scales. (2) An additional spatial lung partitioning based on the lung tissue texture was introduced to reinforce the spatial separation between patterns extracted at the same resolution level in the decomposition pyramid. Then, (3) a hierarchic tree structure was exploited to describe the relationship between patterns at different resolution levels, and for each pattern, six fuzzy membership functions were established for assigning a probability of association with a normal tissue or a pathological target. Finally, (4) a decision step exploiting the fuzzy-logic assignments selects the target class of each lung pattern among the following categories: normal (N), emphysema (EM), fibrosis/honeycombing (FHC), and ground glass (GDG). According to a preliminary evaluation on an extended database, the proposed method can overcome the drawbacks of a previously developed approach and achieve higher sensitivity and specificity.

  17. Bond Order Conservation Strategies in Catalysis Applied to the NH3 Decomposition Reaction

    DOE PAGES

    Yu, Liang; Abild-Pedersen, Frank

    2016-12-14

    On the basis of an extensive set of density functional theory calculations, it is shown that a simple scheme provides a fundamental understanding of variations in the transition state energies and structures of reaction intermediates on transition metal surfaces across the periodic table. The scheme is built on the bond order conservation principle and requires a limited set of input data, still achieving transition state energies as a function of simple descriptors with errors smaller than those of approaches based on linear fits to a set of calculated transition state energies. Here, we have applied this approach together with linear scaling of adsorption energies to obtain the energetics of the NH3 decomposition reaction on a series of stepped fcc(211) transition metal surfaces. Moreover, this information is used to establish a microkinetic model for the formation of N2 and H2, thus providing insight into the components of the reaction that determine the activity.

  18. Data decomposition of Monte Carlo particle transport simulations via tally servers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit

    An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
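
    The role split described above is a standard MPI pattern. The sketch below illustrates its structure with mpi4py, using random numbers in place of particle tracking and a plain sum in place of tally updates; the rank layout and sizes are hypothetical, and this is not OpenMC's implementation.

```python
# Structural sketch of a tracker / tally-server split (run under MPI, e.g.
# mpiexec -n 8 python tally_server_sketch.py). Hypothetical sizes throughout.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
n_servers = max(1, size // 4)              # e.g. one server per three trackers
TALLY_LEN, N_BATCHES, TAG_DATA, TAG_DONE = 1000, 10, 1, 2

if rank < n_servers:                       # ---- tally server role ----
    tally = np.zeros(TALLY_LEN)
    n_my_trackers = sum(1 for t in range(n_servers, size) if t % n_servers == rank)
    done = 0
    while done < n_my_trackers:            # serve until every tracker signs off
        buf, status = np.empty(TALLY_LEN), MPI.Status()
        comm.Recv(buf, source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == TAG_DONE:
            done += 1
        else:
            tally += buf                   # accumulate scores sent by trackers
    print(f"server {rank}: total score {tally.sum():.1f}")
else:                                      # ---- particle tracking role ----
    rng = np.random.default_rng(rank)
    server = rank % n_servers              # this tracker's assigned tally server
    for _ in range(N_BATCHES):
        scores = rng.random(TALLY_LEN)     # stand-in for tracked-particle scores
        comm.Send(scores, dest=server, tag=TAG_DATA)
    comm.Send(np.zeros(TALLY_LEN), dest=server, tag=TAG_DONE)
```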

  19. Delay decomposition approach to H∞ filtering analysis of genetic oscillator networks with time-varying delays.

    PubMed

    Revathi, V M; Balasubramaniam, P

    2016-04-01

    In this paper, the H∞ filtering problem is treated for N coupled genetic oscillator networks with time-varying delays and extrinsic molecular noises. Each individual genetic oscillator is a complex dynamical network that represents the genetic oscillations in terms of complicated biological functions, with inner and outer couplings denoting the biochemical interactions of mRNAs, proteins and other small molecules. First, by constructing an appropriate delay-decomposition-dependent Lyapunov-Krasovskii functional combined with a reciprocal convex approach, improved delay-dependent sufficient conditions are obtained to ensure the asymptotic stability of the filtering error system with a prescribed H∞ performance. Second, based on the above analysis, the existence of the designed H∞ filters is established in terms of linear matrix inequalities with Kronecker products. Finally, numerical examples, including a coupled Goodwin oscillator model, are provided to illustrate the effectiveness and reduced conservatism of the proposed techniques.

  20. Separation of distinct photoexcitation species in femtosecond transient absorption microscopy

    DOE PAGES

    Xiao, Kai; Ma, Ying-Zhong; Simpson, Mary Jane; ...

    2016-02-03

    Femtosecond transient absorption microscopy is a novel chemical imaging capability with simultaneous high spatial and temporal resolution. Although several powerful data analysis approaches have been developed and successfully applied to separate distinct chemical species in such images, the application of such analysis to distinguish different photoexcited species is rare. In this paper, we demonstrate a combined approach based on phasor and linear decomposition analysis on a microscopic level that allows us to separate the contributions of both the excitons and free charge carriers in the observed transient absorption response of a composite organometallic lead halide perovskite film. We found spatial regions where the transient absorption response was predominately a result of excitons and others where it was predominately due to charge carriers, and regions consisting of signals from both contributors. Lastly, quantitative decomposition of the transient absorption response curves further enabled us to reveal the relative contribution of each photoexcitation to the measured response at spatially resolved locations in the film.

  1. An approach to solving large reliability models

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Veeraraghavan, Malathi; Dugan, Joanne Bechta; Trivedi, Kishor S.

    1988-01-01

    This paper describes a unified approach to the problem of solving large realistic reliability models. The methodology integrates behavioral decomposition, state truncation, and efficient sparse matrix-based numerical methods. The use of fault trees, together with ancillary information regarding dependencies, to automatically generate the underlying Markov model state space is proposed. The effectiveness of this approach is illustrated by modeling a state-of-the-art flight control system and a multiprocessor system. Nonexponential distributions for times to failure of components are assumed in the latter example. The modeling tool used for most of this analysis is HARP (the Hybrid Automated Reliability Predictor).

  2. Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt

    Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation from its genesis. Here, we introduce the heterogeneous domain decomposition approach, which is a combination of a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, theoretical models and scaling laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also show the capabilities of the new approach by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and compared to the heterogeneous domain decomposition proposed in this work. These two systems comprise an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.
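
    The a priori rearrangement of subdomain walls can be illustrated in one dimension: estimate a per-particle cost reflecting the local resolution and place walls at equal-cost quantiles instead of equal widths. In the sketch below the cost model is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 1-D system: 100k particles, atomistic (expensive) in the
# central region, coarse-grained (cheap) elsewhere, as in adaptive resolution.
x = rng.uniform(0.0, 100.0, 100_000)
cost = np.where(np.abs(x - 50.0) < 15.0, 10.0, 1.0)   # invented cost per particle

def place_walls(x, cost, n_domains):
    """Place subdomain walls at equal-cost quantiles of the particle positions."""
    order = np.argsort(x)
    cum = np.cumsum(cost[order])
    targets = cum[-1] * np.arange(1, n_domains) / n_domains
    idx = np.searchsorted(cum, targets)
    return x[order][idx]                               # wall coordinates

walls = place_walls(x, cost, n_domains=8)
bins = np.digitize(x, walls)
loads = np.array([cost[bins == d].sum() for d in range(8)])
print("walls:", np.round(walls, 1))
print("load imbalance:", loads.max() / loads.mean())   # ~1.0 here, vs ~2-3
                                                       # for equal-width walls
```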

  3. Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes

    DOE PAGES

    Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt; ...

    2017-11-27

    Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation from its genesis. Here, we introduce the heterogeneous domain decomposition approach, which is a combination of a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, theoretical models and scaling laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also show the capabilities of the new approach by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and compared to the heterogeneous domain decomposition proposed in this work. These two systems comprise an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.

  4. Hamiltonian formulation of the KdV equation

    NASA Astrophysics Data System (ADS)

    Nutku, Y.

    1984-06-01

    We consider the canonical formulation of Whitham's variational principle for the KdV equation. This Lagrangian is degenerate and we have found it necessary to use Dirac's theory of constrained systems in constructing the Hamiltonian. Earlier discussions of the Hamiltonian structure of the KdV equation were based on various different decompositions of the field which is avoided by this new approach.
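
    For reference, in one common sign convention the KdV equation and its Hamiltonian form read as follows; this standard textbook form is given for orientation and is not quoted from Nutku's paper.

```latex
u_t - 6\,u\,u_x + u_{xxx} = 0, \qquad
u_t = \partial_x \frac{\delta H}{\delta u}, \qquad
H[u] = \int \left( u^3 + \tfrac{1}{2}\,u_x^2 \right) dx .
```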

  5. Motion magnification using the Hermite transform

    NASA Astrophysics Data System (ADS)

    Brieva, Jorge; Moya-Albor, Ernesto; Gomez-Coronel, Sandra L.; Escalante-Ramírez, Boris; Ponce, Hiram; Mora Esquivel, Juan I.

    2015-12-01

    We present an Eulerian motion magnification technique with a spatial decomposition based on the Hermite Transform (HT). We compare our results to the approach presented in [1]. We test our method on a sequence of the breathing of a newborn baby and on an MRI left-ventricle sequence. The methods are compared using quantitative and qualitative metrics after the application of the motion magnification algorithm.

  6. Simulation of reaction diffusion processes over biologically relevant size and time scales using multi-GPU workstations

    PubMed Central

    Hallock, Michael J.; Stone, John E.; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida

    2014-01-01

    Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems. PMID:24882911

  7. Simulation of reaction diffusion processes over biologically relevant size and time scales using multi-GPU workstations.

    PubMed

    Hallock, Michael J; Stone, John E; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida

    2014-05-01

    Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems.

  8. VELOCITY FIELD OF COMPRESSIBLE MAGNETOHYDRODYNAMIC TURBULENCE: WAVELET DECOMPOSITION AND MODE SCALINGS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kowal, Grzegorz; Lazarian, A., E-mail: kowal@astro.wisc.ed, E-mail: lazarian@astro.wisc.ed

    We study compressible magnetohydrodynamic turbulence, which holds the key to many astrophysical processes, including star formation and cosmic-ray propagation. To account for the variations of the magnetic field in the strongly turbulent fluid, we use wavelet decomposition of the turbulent velocity field into Alfven, slow, and fast modes, which presents an extension of the Cho and Lazarian decomposition approach based on Fourier transforms. The wavelets allow us to follow the variations of the local direction of the magnetic field and therefore improve the quality of the decomposition compared to the Fourier transforms, which are done in the mean-field reference frame. For each resulting component, we calculate the spectra and two-point statistics such as longitudinal and transverse structure functions as well as higher-order intermittency statistics. In addition, we perform a Helmholtz-Hodge decomposition of the velocity field into incompressible and compressible parts and analyze these components. We find that the turbulence intermittency is different for different components, and we show that the intermittency statistics depend on whether the phenomenon was studied in the global reference frame related to the mean magnetic field or in the frame defined by the local magnetic field. The dependencies of the measures we obtained are different for different components of the velocity; for instance, we show that while the Alfven mode intermittency changes marginally with the Mach number, the intermittency of the fast mode is substantially affected by the change.

  9. A practical material decomposition method for x-ray dual spectral computed tomography.

    PubMed

    Hu, Jingjing; Zhao, Xing

    2016-03-17

    X-ray dual spectral CT (DSCT) scans the measured object with two different x-ray spectra, and the acquired rawdata can be used to perform the material decomposition of the object. Direct calibration methods allow a faster material decomposition for DSCT and can be separated in two groups: image-based and rawdata-based. The image-based method is an approximative method, and beam hardening artifacts remain in the resulting material-selective images. The rawdata-based method generally obtains better image quality than the image-based method, but this method requires geometrically consistent rawdata. However, today's clinical dual energy CT scanners usually measure different rays for different energy spectra and acquire geometrically inconsistent rawdata sets, and thus cannot meet the requirement. This paper proposes a practical material decomposition method to perform rawdata-based material decomposition in the case of inconsistent measurement. This method first yields the desired consistent rawdata sets from the measured inconsistent rawdata sets, and then employs rawdata-based technique to perform material decomposition and reconstruct material-selective images. The proposed method was evaluated by use of simulated FORBILD thorax phantom rawdata and dental CT rawdata, and simulation results indicate that this method can produce highly quantitative DSCT images in the case of inconsistent DSCT measurements.

  10. Fusion of MultiSpectral and Panchromatic Images Based on Morphological Operators.

    PubMed

    Restaino, Rocco; Vivone, Gemine; Dalla Mura, Mauro; Chanussot, Jocelyn

    2016-04-20

    Nonlinear decomposition schemes constitute an alternative to classical approaches for facing the problem of data fusion. In this paper we discuss the application of this methodology to a popular remote sensing application called pansharpening, which consists in the fusion of a low-resolution multispectral image and a high-resolution panchromatic image. We design a complete pansharpening scheme based on the use of morphological half-gradient operators and demonstrate the suitability of this algorithm through comparison with state-of-the-art approaches. Four datasets acquired by the Pleiades, Worldview-2, Ikonos and Geoeye-1 satellites are employed for the performance assessment, testifying to the effectiveness of the proposed approach in producing top-class images with a setting independent of the specific sensor.

  11. Robust and automated three-dimensional segmentation of densely packed cell nuclei in different biological specimens with Lines-of-Sight decomposition.

    PubMed

    Mathew, B; Schmitz, A; Muñoz-Descalzo, S; Ansari, N; Pampaloni, F; Stelzer, E H K; Fischer, S C

    2015-06-08

    Due to the large amount of data produced by advanced microscopy, automated image analysis is crucial in modern biology. Most applications require reliable cell nuclei segmentation. However, in many biological specimens cell nuclei are densely packed and appear to touch one another in the images. Therefore, a major difficulty of three-dimensional cell nuclei segmentation is the decomposition of cell nuclei that apparently touch each other. Current methods are highly adapted to a certain biological specimen or a specific microscope. They do not ensure similarly accurate segmentation performance, i.e. their robustness for different datasets is not guaranteed. Hence, these methods require elaborate adjustments to each dataset. We present an advanced three-dimensional cell nuclei segmentation algorithm that is accurate and robust. Our approach combines local adaptive pre-processing with decomposition based on Lines-of-Sight (LoS) to separate apparently touching cell nuclei into approximately convex parts. We demonstrate the superior performance of our algorithm using data from different specimens recorded with different microscopes. The three-dimensional images were recorded with confocal and light sheet-based fluorescence microscopes. The specimens are an early mouse embryo and two different cellular spheroids. We compared the segmentation accuracy of our algorithm with ground truth data for the test images and results from state-of-the-art methods. The analysis shows that our method is accurate throughout all test datasets (mean F-measure: 91%) whereas the other methods each failed for at least one dataset (F-measure≤69%). Furthermore, nuclei volume measurements are improved for LoS decomposition. The state-of-the-art methods required laborious adjustments of parameter values to achieve these results. Our LoS algorithm did not require parameter value adjustments. The accurate performance was achieved with one fixed set of parameter values. We developed a novel and fully automated three-dimensional cell nuclei segmentation method incorporating LoS decomposition. LoS are easily accessible features that ensure correct splitting of apparently touching cell nuclei independent of their shape, size or intensity. Our method showed superior performance compared to state-of-the-art methods, performing accurately for a variety of test images. Hence, our LoS approach can be readily applied to quantitative evaluation in drug testing, developmental and cell biology.

  12. Through-wall image enhancement using fuzzy and QR decomposition.

    PubMed

    Riaz, Muhammad Mohsin; Ghafoor, Abdul

    2014-01-01

    A QR decomposition and fuzzy logic based scheme is proposed for through-wall image enhancement. QR decomposition is less complex than singular value decomposition. A fuzzy inference engine assigns weights to different overlapping subspaces. Quantitative measures and visual inspection are used to analyze the existing and proposed techniques.

  13. Detecting phase-amplitude coupling with high frequency resolution using adaptive decompositions

    PubMed Central

    Pittman-Polletta, Benjamin; Hsieh, Wan-Hsin; Kaur, Satvinder; Lo, Men-Tzung; Hu, Kun

    2014-01-01

    Background: Phase-amplitude coupling (PAC) – the dependence of the amplitude of one rhythm on the phase of another, lower-frequency rhythm – has recently been used to illuminate cross-frequency coordination in neurophysiological activity. An essential step in measuring PAC is decomposing data to obtain rhythmic components of interest. Current methods of PAC assessment employ narrowband Fourier-based filters, which assume that biological rhythms are stationary, harmonic oscillations. However, biological signals frequently contain irregular and nonstationary features, which may contaminate rhythms of interest and complicate comodulogram interpretation, especially when frequency resolution is limited by short data segments. New method: To better account for nonstationarities while maintaining sharp frequency resolution in PAC measurement, even for short data segments, we introduce a new method of PAC assessment which utilizes adaptive and more generally broadband decomposition techniques – such as the empirical mode decomposition (EMD). To obtain high frequency resolution PAC measurements, our method distributes the PAC associated with pairs of broadband oscillations over frequency space according to the time-local frequencies of these oscillations. Comparison with existing methods: We compare our novel adaptive approach to a narrowband comodulogram approach on a variety of simulated signals of short duration, studying systematically how different types of nonstationarities affect these methods, as well as on EEG data. Conclusions: Our results show: (1) narrowband filtering can lead to poor PAC frequency resolution, and inaccuracy and false negatives in PAC assessment; (2) our adaptive approach attains better PAC frequency resolution and is more resistant to nonstationarities and artifacts than traditional comodulograms. PMID:24452055
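
    Once the components are in hand, by narrowband filtering or by an adaptive decomposition such as EMD, a PAC measurement itself is short: take the phase of the slow component and the envelope of the fast one via the Hilbert transform, then score the nonuniformity of the envelope over phase. The sketch below computes a Tort-style modulation index on synthetic, already-separated components; it illustrates the measurement step only, not the paper's frequency-distribution scheme.

```python
import numpy as np
from scipy.signal import hilbert

def modulation_index(slow, fast, n_bins=18):
    """Tort-style modulation index between a slow and a fast component."""
    phase = np.angle(hilbert(slow))                 # phase of the slow rhythm
    amp = np.abs(hilbert(fast))                     # envelope of the fast rhythm
    bins = np.linspace(-np.pi, np.pi, n_bins + 1)
    which = np.clip(np.digitize(phase, bins) - 1, 0, n_bins - 1)
    mean_amp = np.array([amp[which == b].mean() for b in range(n_bins)])
    p = mean_amp / mean_amp.sum()                   # amplitude distribution over phase
    # Normalized KL divergence from the uniform distribution:
    return (np.log(n_bins) + np.sum(p * np.log(p))) / np.log(n_bins)

# Synthetic coupled signal: 40 Hz amplitude driven by a 6 Hz phase.
fs = 1000.0
t = np.arange(0, 20, 1 / fs)
slow = np.cos(2 * np.pi * 6 * t)
fast = (1.0 + 0.8 * slow) * np.cos(2 * np.pi * 40 * t)
rng = np.random.default_rng(5)
print("coupled:  ", round(modulation_index(slow, fast), 4))
print("uncoupled:", round(modulation_index(slow, rng.standard_normal(t.size)), 4))
```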

  14. Adaptive Fourier decomposition based ECG denoising.

    PubMed

    Wang, Ze; Wan, Feng; Wong, Chi Man; Zhang, Liming

    2016-10-01

    A novel ECG denoising method is proposed based on the adaptive Fourier decomposition (AFD). The AFD decomposes a signal according to its energy distribution, thereby making this algorithm suitable for separating a pure ECG signal from noise with overlapping frequency ranges but different energy distributions. A stop criterion for the iterative decomposition process in the AFD is calculated on the basis of the estimated signal-to-noise ratio (SNR) of the noisy signal. The proposed AFD-based method is validated on a synthetic ECG signal generated from an ECG model and on real ECG signals from the MIT-BIH Arrhythmia Database, both with additive Gaussian white noise. Simulation results show that the proposed method performs better in denoising and QRS detection than major ECG denoising schemes based on the wavelet transform, the Stockwell transform, the empirical mode decomposition, and the ensemble empirical mode decomposition.

  15. Incorporating DSA in multipatterning semiconductor manufacturing technologies

    NASA Astrophysics Data System (ADS)

    Badr, Yasmine; Torres, J. A.; Ma, Yuansheng; Mitra, Joydeep; Gupta, Puneet

    2015-03-01

    Multi-patterning (MP) is the process of record for many sub-10nm process technologies. The drive to higher densities has required the use of double and triple patterning for several layers, but this increases the cost of new processes, especially for low-volume products in which the mask set is a large percentage of the total cost. For that reason there has been a strong incentive to develop technologies like Directed Self Assembly (DSA), EUV or E-beam direct write to reduce the total number of masks needed in a new technology node. Because of the nature of the technology, DSA cylinder graphoepitaxy only allows single-size holes in a single-patterning approach. However, by integrating DSA and MP into a hybrid DSA-MP process, it is possible to devise decomposition approaches that increase design flexibility, allowing different-size holes or bar structures by independently changing the process for every patterning step. A simple approach to integrating multi-patterning with DSA is to perform DSA grouping and MP decomposition in sequence, whether grouping-then-decomposition or decomposition-then-grouping; each of the two sequences has its pros and cons. However, this paper describes why these intuitive approaches do not produce results of acceptable quality from the point of view of design compliance, and highlights the need for custom DSA-aware MP algorithms.

  16. Bayesian inference of spectral induced polarization parameters for laboratory complex resistivity measurements of rocks and soils

    NASA Astrophysics Data System (ADS)

    Bérubé, Charles L.; Chouteau, Michel; Shamsipour, Pejman; Enkin, Randolph J.; Olivo, Gema R.

    2017-08-01

    Spectral induced polarization (SIP) measurements are now widely used to infer mineralogical or hydrogeological properties from the low-frequency electrical properties of the subsurface in both mineral exploration and environmental sciences. We present an open-source program that performs fast multi-model inversion of laboratory complex resistivity measurements using Markov-chain Monte Carlo simulation. Using this stochastic method, SIP parameters and their uncertainties may be obtained from the Cole-Cole and Dias models, or from the Debye and Warburg decomposition approaches. The program is tested on synthetic and laboratory data to show that the posterior distribution of a multiple Cole-Cole model is multimodal in particular cases. The Warburg and Debye decomposition approaches yield unique solutions in all cases. It is shown that an adaptive Metropolis algorithm performs faster and is less dependent on the initial parameter values than the Metropolis-Hastings step method when inverting SIP data through the decomposition schemes. There are no advantages in using an adaptive step method for well-defined Cole-Cole inversion. Finally, the influence of measurement noise on the recovered relaxation time distribution is explored. We provide the geophysics community with an open-source platform that can serve as a base for further developments in stochastic SIP data inversion and that may be used to perform parameter analysis with various SIP models.

  17. Wavelet-based unsupervised learning method for electrocardiogram suppression in surface electromyograms.

    PubMed

    Niegowski, Maciej; Zivanovic, Miroslav

    2016-03-01

    We present a novel approach aimed at removing electrocardiogram (ECG) perturbation from single-channel surface electromyogram (EMG) recordings by means of unsupervised learning of wavelet-based intensity images. The general idea is to combine the suitability of certain wavelet decomposition bases which provide sparse electrocardiogram time-frequency representations, with the capacity of non-negative matrix factorization (NMF) for extracting patterns from images. In order to overcome convergence problems which often arise in NMF-related applications, we design a novel robust initialization strategy which ensures proper signal decomposition in a wide range of ECG contamination levels. Moreover, the method can be readily used because no a priori knowledge or parameter adjustment is needed. The proposed method was evaluated on real surface EMG signals against two state-of-the-art unsupervised learning algorithms and a singular spectrum analysis based method. The results, expressed in terms of high-to-low energy ratio, normalized median frequency, spectral power difference and normalized average rectified value, suggest that the proposed method enables better ECG-EMG separation quality than the reference methods.

  18. Rank-based decompositions of morphological templates.

    PubMed

    Sussner, P; Ritter, G X

    2000-01-01

    Methods for matrix decomposition have found numerous applications in image processing, in particular for the problem of template decomposition. Since existing matrix decomposition techniques are mainly concerned with the linear domain, we consider it timely to investigate matrix decomposition techniques in the nonlinear domain with applications in image processing. The mathematical basis for these investigations is the new theory of rank within minimax algebra. Thus far, only minimax decompositions of rank 1 and rank 2 matrices into outer product expansions are known to the image processing community. We derive a heuristic algorithm for the decomposition of matrices having arbitrary rank.

  19. Learning inverse kinematics: reduced sampling through decomposition into virtual robots.

    PubMed

    de Angulo, Vicente Ruiz; Torras, Carme

    2008-12-01

    We propose a technique to speed up the learning of the inverse kinematics of a robot manipulator by decomposing it into two or more virtual robot arms. Unlike previous decomposition approaches, this one does not place any requirement on the robot architecture and is thus completely general. Parametrized self-organizing maps are particularly adequate for this type of learning, and permit comparing results obtained directly and through the decomposition. Experimentation shows that time reductions of up to two orders of magnitude are easily attained.

  20. Climate fails to predict wood decomposition at regional scales

    NASA Astrophysics Data System (ADS)

    Bradford, Mark A.; Warren, Robert J., II; Baldrian, Petr; Crowther, Thomas W.; Maynard, Daniel S.; Oldfield, Emily E.; Wieder, William R.; Wood, Stephen A.; King, Joshua R.

    2014-07-01

    Decomposition of organic matter strongly influences ecosystem carbon storage. In Earth-system models, climate is a predominant control on the decomposition rates of organic matter. This assumption is based on the mean response of decomposition to climate, yet there is a growing appreciation in other areas of global change science that projections based on mean responses can be irrelevant and misleading. We test whether climate controls on the decomposition rate of dead wood--a carbon stock estimated to represent 73 +/- 6 Pg carbon globally--are sensitive to the spatial scale from which they are inferred. We show that the common assumption that climate is a predominant control on decomposition is supported only when local-scale variation is aggregated into mean values. Disaggregated data instead reveal that local-scale factors explain 73% of the variation in wood decomposition, and climate only 28%. Further, the temperature sensitivity of decomposition estimated from local versus mean analyses is 1.3-times greater. Fundamental issues with mean correlations were highlighted decades ago, yet mean climate-decomposition relationships are used to generate simulations that inform management and adaptation under environmental change. Our results suggest that to predict accurately how decomposition will respond to climate change, models must account for local-scale factors that control regional dynamics.

  1. Identifying key nodes in multilayer networks based on tensor decomposition.

    PubMed

    Wang, Dingjie; Wang, Haitao; Zou, Xiufen

    2017-06-01

    The identification of essential agents in multilayer networks characterized by different types of interactions is a crucial and challenging topic, one that is essential for understanding the topological structure and dynamic processes of multilayer networks. In this paper, we use the fourth-order tensor to represent multilayer networks and propose a novel method to identify essential nodes based on CANDECOMP/PARAFAC (CP) tensor decomposition, referred to as the EDCPTD centrality. This method is based on the perspective of multilayer networked structures, which integrate the information of edges among nodes and links between different layers to quantify the importance of nodes in multilayer networks. Three real-world multilayer biological networks are used to evaluate the performance of the EDCPTD centrality. The bar charts and ROC curves of these multilayer networks indicate that the proposed approach is a good alternative index for identifying truly important nodes. Meanwhile, by comparing the behavior of both the proposed method and aggregated single-layer methods, we demonstrate that neglecting the multiple relationships between nodes may lead to incorrect identification of the most versatile nodes. Furthermore, Gene Ontology functional annotation demonstrates that the top nodes identified by the proposed approach play a significant role in many vital biological processes. Finally, we have implemented many centrality methods for multilayer networks (including our method and published methods) and created visualization software based on the MATLAB GUI, called ENMNFinder, which can be used by other researchers.
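
    The mechanics behind a CP-based centrality can be sketched with tensorly: decompose the fourth-order adjacency tensor with parafac and aggregate the node-mode factor matrices into a score. The network, the rank and the simple aggregation below are illustrative assumptions, not the exact EDCPTD weighting.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(6)
n_nodes, n_layers, R = 20, 3, 4

# Hypothetical 4th-order multilayer adjacency tensor A[i, j, k, l]:
# node i in layer k -> node j in layer l (random intra-layer blocks).
A = np.zeros((n_nodes, n_nodes, n_layers, n_layers))
for k in range(n_layers):
    A[:, :, k, k] = (rng.random((n_nodes, n_nodes)) < 0.15).astype(float)
hub = 0
A[hub, :, :, :] = A[:, hub, :, :] = 0.8        # one node active across all layers

# CP decomposition: A ~ sum_r a_r o b_r o c_r o d_r.
weights, factors = parafac(tl.tensor(A), rank=R, n_iter_max=200, tol=1e-8)
F_out, F_in = factors[0], factors[1]           # node modes (sender / receiver)

# Aggregate node-mode factors into a simple centrality score
# (the general recipe behind EDCPTD-style indices).
centrality = np.abs(F_out).sum(axis=1) + np.abs(F_in).sum(axis=1)
print("top nodes:", np.argsort(centrality)[::-1][:5])   # the hub should rank first
```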

  2. Identifying key nodes in multilayer networks based on tensor decomposition

    NASA Astrophysics Data System (ADS)

    Wang, Dingjie; Wang, Haitao; Zou, Xiufen

    2017-06-01

    The identification of essential agents in multilayer networks characterized by different types of interactions is a crucial and challenging topic, one that is essential for understanding the topological structure and dynamic processes of multilayer networks. In this paper, we use the fourth-order tensor to represent multilayer networks and propose a novel method to identify essential nodes based on CANDECOMP/PARAFAC (CP) tensor decomposition, referred to as the EDCPTD centrality. This method is based on the perspective of multilayer networked structures, which integrate the information of edges among nodes and links between different layers to quantify the importance of nodes in multilayer networks. Three real-world multilayer biological networks are used to evaluate the performance of the EDCPTD centrality. The bar charts and ROC curves of these multilayer networks indicate that the proposed approach is a good alternative index for identifying truly important nodes. Meanwhile, by comparing the behavior of both the proposed method and aggregated single-layer methods, we demonstrate that neglecting the multiple relationships between nodes may lead to incorrect identification of the most versatile nodes. Furthermore, Gene Ontology functional annotation demonstrates that the top nodes identified by the proposed approach play a significant role in many vital biological processes. Finally, we have implemented many centrality methods for multilayer networks (including our method and published methods) and created visualization software based on the MATLAB GUI, called ENMNFinder, which can be used by other researchers.

  3. Energy Decomposition Analysis Based on Absolutely Localized Molecular Orbitals for Large-Scale Density Functional Theory Calculations in Drug Design.

    PubMed

    Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K

    2016-07-12

    We report the development and implementation of an energy decomposition analysis (EDA) scheme in the ONETEP linear-scaling electronic structure package. Our approach is hybrid as it combines the localized molecular orbital EDA (Su, P.; Li, H. J. Chem. Phys., 2009, 131, 014102) and the absolutely localized molecular orbital EDA (Khaliullin, R. Z.; et al. J. Phys. Chem. A, 2007, 111, 8753-8765) to partition the intermolecular interaction energy into chemically distinct components (electrostatic, exchange, correlation, Pauli repulsion, polarization, and charge transfer). Limitations shared in EDA approaches such as the issue of basis set dependence in polarization and charge transfer are discussed, and a remedy to this problem is proposed that exploits the strictly localized property of the ONETEP orbitals. Our method is validated on a range of complexes with interactions relevant to drug design. We demonstrate the capabilities for large-scale calculations with our approach on complexes of thrombin with an inhibitor comprised of up to 4975 atoms. Given the capability of ONETEP for large-scale calculations, such as on entire proteins, we expect that our EDA scheme can be applied in a large range of biomolecular problems, especially in the context of drug design.

  4. Ex-ante and ex-post measurement of equality of opportunity in health: a normative decomposition.

    PubMed

    Donni, Paolo Li; Peragine, Vito; Pignataro, Giuseppe

    2014-02-01

This paper proposes and discusses two different approaches to the definition of inequality in health: the ex-ante and the ex-post approach. It proposes strategies for measuring inequality of opportunity in health based on the path-independent Atkinson inequality index. The proposed methodology is illustrated using data from the British Household Panel Survey; the results suggest that in the period 2000-2005, at least one-third of the observed health inequalities in the UK were inequalities of opportunity. Copyright © 2013 John Wiley & Sons, Ltd.
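
    The following sketch illustrates the two ingredients on synthetic data: the standard Atkinson index and an ex-ante, between-type component obtained by smoothing outcomes to type means. The paper's path-independent decomposition is more involved, so treat this only as a minimal illustration under assumed data.

```python
# Sketch: Atkinson inequality index and an ex-ante "between-type" component.
# Standard textbook formulas only; outcomes and types are simulated.
import numpy as np

def atkinson(x, eps=0.5):
    """Atkinson index A(eps) = 1 - EDE / mean, for eps != 1."""
    x = np.asarray(x, dtype=float)
    ede = np.mean(x ** (1.0 - eps)) ** (1.0 / (1.0 - eps))  # equally-distributed equivalent
    return 1.0 - ede / x.mean()

rng = np.random.default_rng(1)
types = rng.integers(0, 3, size=1000)             # circumstance "types"
health = rng.gamma(shape=2.0 + types, scale=1.0)  # hypothetical health outcomes

# Ex-ante: replace outcomes by type means; remaining inequality is "of opportunity".
type_means = np.array([health[types == t].mean() for t in range(3)])
smoothed = type_means[types]

total = atkinson(health)
opportunity = atkinson(smoothed)                  # between-type (ex-ante) component
print(f"total={total:.3f}  opportunity share={opportunity / total:.2%}")
```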

  5. Wavelet subspace decomposition of thermal infrared images for defect detection in artworks

    NASA Astrophysics Data System (ADS)

    Ahmad, M. Z.; Khan, A. A.; Mezghani, S.; Perrin, E.; Mouhoubi, K.; Bodnar, J. L.; Vrabie, V.

    2016-07-01

The health of ancient artworks must be routinely monitored for their adequate preservation. Faults in these artworks may develop over time and must be identified as precisely as possible. Classical acoustic testing techniques, being invasive, risk causing permanent damage during periodic inspections. Infrared thermometry offers a promising solution for mapping faults in artworks. It involves heating the artwork and recording its thermal response using an infrared camera. A novel strategy based on the pseudo-random binary excitation principle is used in this work to suppress the risks associated with prolonged heating. The objective of this work is to develop an automatic scheme for detecting faults in the captured images. An efficient scheme based on wavelet subspace decomposition is developed which favors identification of the otherwise invisible weaker faults. Two major problems addressed in this work are the selection of the optimal wavelet basis and the subspace level selection. A novel criterion based on regional mutual information is proposed for the latter. A new contrast enhancement metric is developed to demonstrate the quantitative efficiency of the algorithm. The approach is successfully deployed on both laboratory-based samples and real artworks.
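
    A minimal sketch of the subspace idea using PyWavelets: decompose an image, zero all but one detail subspace, and reconstruct. The wavelet basis ('db4') and the retained level are fixed by hand here, whereas the paper selects both, the level via its regional-mutual-information criterion.

```python
# Sketch: isolating one wavelet subspace of a thermal image to enhance weak
# defects. Basis and level are hard-coded illustrative choices.
import numpy as np
import pywt

thermal = np.random.default_rng(2).random((256, 256))   # stand-in for an IR frame

# Multilevel 2-D decomposition, then keep only one detail subspace.
coeffs = pywt.wavedec2(thermal, wavelet="db4", level=3)
kept = [np.zeros_like(coeffs[0])]                       # zero out the approximation
for lvl, (cH, cV, cD) in enumerate(coeffs[1:], start=1):  # lvl 1 = coarsest details
    if lvl == 2:                                        # retain level-2 details only
        kept.append((cH, cV, cD))
    else:
        kept.append((np.zeros_like(cH), np.zeros_like(cV), np.zeros_like(cD)))

subspace_img = pywt.waverec2(kept, wavelet="db4")       # defect-enhancing reconstruction
```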

  6. SU-F-J-138: An Extension of PCA-Based Respiratory Deformation Modeling Via Multi-Linear Decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iliopoulos, AS; Sun, X; Pitsianis, N

Purpose: To address and lift the limited degree of freedom (DoF) of globally bilinear motion components such as those based on principal components analysis (PCA), for encoding and modeling volumetric deformation motion. Methods: We provide a systematic approach to obtaining a multi-linear decomposition (MLD) and associated motion model from deformation vector field (DVF) data. We had previously introduced MLD for capturing multi-way relationships between DVF variables, without being restricted by the bilinear component format of PCA-based models. PCA-based modeling is commonly used for encoding patient-specific deformation as per planning 4D-CT images, and aiding on-board motion estimation during radiotherapy. However, the bilinear space-time decomposition inherently limits the DoF of such models by the small number of respiratory phases. While this limit is not reached in model studies using analytical or digital phantoms with low-rank motion, it compromises modeling power in the presence of relative motion, asymmetries and hysteresis, etc, which are often observed in patient data. Specifically, a low-DoF model will spuriously couple incoherent motion components, compromising its adaptability to on-board deformation changes. By the multi-linear format of extracted motion components, MLD-based models can encode higher-DoF deformation structure. Results: We conduct mathematical and experimental comparisons between PCA- and MLD-based models. A set of temporally-sampled analytical trajectories provides a synthetic, high-rank DVF; trajectories correspond to respiratory and cardiac motion factors, including different relative frequencies and spatial variations. Additionally, a digital XCAT phantom is used to simulate a lung lesion deforming incoherently with respect to the body, which adheres to a simple respiratory trend. In both cases, coupling of incoherent motion components due to a low model DoF is clearly demonstrated. Conclusion: Multi-linear decomposition can enable decoupling of distinct motion factors in high-rank DVF measurements. This may improve motion model expressiveness and adaptability to on-board deformation, aiding model-based image reconstruction for target verification. NIH Grant No. R01-184173.
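
    A small sketch of the contrast being drawn, under illustrative sizes: PCA on flattened DVF snapshots yields at most one component per respiratory phase, while a multilinear decomposition (here a Tucker model via tensorly, used as a stand-in for the abstract's MLD) keeps per-axis factor matrices and hence a higher DoF.

```python
# Sketch: bilinear PCA vs. a multilinear (Tucker) decomposition of DVF data.
# All sizes and the synthetic field are illustrative assumptions.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

nx, ny, nz, ncomp, nphase = 16, 16, 8, 3, 10
dvf = np.random.default_rng(9).standard_normal((nx, ny, nz, ncomp, nphase))

# PCA route: space-time matrix, at most nphase - 1 usable components (low DoF).
X = dvf.reshape(-1, nphase)
U, s, Vt = np.linalg.svd(X - X.mean(axis=1, keepdims=True), full_matrices=False)

# Multilinear route: per-axis factor matrices allow far more motion DoF.
core, factors = tucker(tl.tensor(dvf), rank=[8, 8, 4, 3, 5])
print([f.shape for f in factors])
```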

  7. Intelligence Fusion Modeling. A Proposed Approach.

    DTIC Science & Technology

    1983-09-16

based techniques developed by artificial intelligence researchers. This paper describes the application of these techniques in the modeling of an... intelligence requirements, although the methods presented are applicable. We treat PIR/IR as given. ... Movement... items from the PIR/IR/HVT decomposition are received from the CMDS. Formatted tactical intelligence reports are received from sensors of like types

  8. Singular value decomposition for the truncated Hilbert transform

    NASA Astrophysics Data System (ADS)

    Katsevich, A.

    2010-11-01

    Starting from a breakthrough result by Gelfand and Graev, inversion of the Hilbert transform became a very important tool for image reconstruction in tomography. In particular, their result is useful when the tomographic data are truncated and one deals with an interior problem. As was established recently, the interior problem admits a stable and unique solution when some a priori information about the object being scanned is available. The most common approach to solving the interior problem is based on converting it to the Hilbert transform and performing analytic continuation. Depending on what type of tomographic data are available, one gets different Hilbert inversion problems. In this paper, we consider two such problems and establish singular value decomposition for the operators involved. We also propose algorithms for performing analytic continuation.
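
    A numerical illustration of the object being analyzed: discretize a finite Hilbert transform on a toy pair of intervals (an assumed setup, not the paper's exact operators) and inspect its singular values with numpy.

```python
# Sketch: numerical SVD of a truncated/finite Hilbert transform operator.
# Toy setup: f supported on [0, 1], data observed on [0.5, 1.5]; the grids
# are staggered so that no quadrature node hits the 1/(y - x) singularity.
import numpy as np

n = 200
y = (np.arange(n) + 0.5) / n                 # source grid on [0, 1]
x = 0.5 + np.arange(n) / n                   # observation grid on [0.5, 1.5]
w = 1.0 / n                                  # midpoint quadrature weight

K = w / (np.pi * (y[None, :] - x[:, None]))  # discretized Hilbert kernel
U, s, Vt = np.linalg.svd(K)

# The rapid decay of the trailing singular values reflects the severe
# ill-posedness of the analytic continuation behind the interior problem.
print(s[:3], s[-3:])
```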

  9. Ordering Design Tasks Based on Coupling Strengths

    NASA Technical Reports Server (NTRS)

    Rogers, J. L.; Bloebaum, C. L.

    1994-01-01

    The design process associated with large engineering systems requires an initial decomposition of the complex system into modules of design tasks which are coupled through the transference of output data. In analyzing or optimizing such a coupled system, it is essential to be able to determine which interactions figure prominently enough to significantly affect the accuracy of the system solution. Many decomposition approaches assume the capability is available to determine what design tasks and interactions exist and what order of execution will be imposed during the analysis process. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature for DeMAID (Design Manager's Aid for Intelligent Decomposition) will allow the design manager to use coupling strength information to find a proper sequence for ordering the design tasks. In addition, these coupling strengths aid in deciding if certain tasks or couplings could be removed (or temporarily suspended) from consideration to achieve computational savings without a significant loss of system accuracy. New rules are presented and two small test cases are used to show the effects of using coupling strengths in this manner.

  10. Ordering design tasks based on coupling strengths

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Bloebaum, Christina L.

    1994-01-01

    The design process associated with large engineering systems requires an initial decomposition of the complex system into modules of design tasks which are coupled through the transference of output data. In analyzing or optimizing such a coupled system, it is essential to be able to determine which interactions figure prominently enough to significantly affect the accuracy of the system solution. Many decomposition approaches assume the capability is available to determine what design tasks and interactions exist and what order of execution will be imposed during the analysis process. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature for DeMAID (Design Manager's Aid for Intelligent Decomposition) will allow the design manager to use coupling strength information to find a proper sequence for ordering the design tasks. In addition, these coupling strengths aid in deciding if certain tasks or couplings could be removed (or temporarily suspended) from consideration to achieve computational savings without a significant loss of system accuracy. New rules are presented and two small test cases are used to show the effects of using coupling strengths in this manner.

  11. Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.

    2015-03-01

In this paper, we describe a new look at the application of Givens rotations to the QR-decomposition problem, which is similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, which was introduced by Grigoryan (2006) and used in signal and image processing. Both cases of real and complex nonsingular matrices are considered, and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for the complex matrix is novel, differs from the known method of complex Givens rotations, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap transform method of QR-decomposition are given, algorithms are described in detail, and MATLAB-based codes are included.
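
    For reference, the textbook Givens-rotation QR that the heap-transform method generalizes can be sketched as follows (real case only; this is not the paper's heap-transform algorithm).

```python
# Sketch: classic Givens-rotation QR for a real matrix.
import numpy as np

def givens_qr(A):
    """Return Q, R with A = Q @ R and R upper triangular, via Givens rotations."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q, R = np.eye(m), A.copy()
    for j in range(n):
        for i in range(m - 1, j, -1):         # zero out entries below the diagonal
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            G = np.eye(m)                     # rotation in the (i-1, i) plane
            G[[i - 1, i], [i - 1, i]] = c
            G[i - 1, i], G[i, i - 1] = s, -s
            R = G @ R                         # annihilates R[i, j]
            Q = Q @ G.T                       # accumulate the orthogonal factor
    return Q, R

A = np.random.default_rng(3).random((5, 3))
Q, R = givens_qr(A)
assert np.allclose(Q @ R, A) and np.allclose(np.tril(R, -1), 0.0, atol=1e-12)
```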

  12. A Collaborative Neurodynamic Approach to Multiple-Objective Distributed Optimization.

    PubMed

    Yang, Shaofu; Liu, Qingshan; Wang, Jun

    2018-04-01

This paper is concerned with multiple-objective distributed optimization. Based on objective weighting and decision space decomposition, a collaborative neurodynamic approach to multiobjective distributed optimization is presented. In the approach, a system of collaborative neural networks is developed to search for Pareto optimal solutions, where each neural network is associated with one objective function and given constraints. Sufficient conditions are derived for ascertaining the convergence to a Pareto optimal solution of the collaborative neurodynamic system. In addition, it is proved that each connected subsystem can generate a Pareto optimal solution when the communication topology is disconnected. Then, a switching-topology-based method is proposed to compute multiple Pareto optimal solutions for a discretized approximation of the Pareto front. Finally, simulation results are discussed to substantiate the performance of the collaborative neurodynamic approach. A portfolio selection application is also given.

  13. Electrochemical and Infrared Absorption Spectroscopy Detection of SF₆ Decomposition Products.

    PubMed

    Dong, Ming; Zhang, Chongxing; Ren, Ming; Albarracín, Ricardo; Ye, Rixin

    2017-11-15

    Sulfur hexafluoride (SF₆) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF₆ decomposition and ultimately generates several types of decomposition products. These SF₆ decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF₆ decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF₆ gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. A SF₆ decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF₆ gas decomposition and is verified to reliably and accurately detect the gas components and concentrations.

  14. Cerebrospinal fluid PCR analysis and biochemistry in bodies with severe decomposition.

    PubMed

    Palmiere, Cristian; Vanhaebost, Jessica; Ventura, Francesco; Bonsignore, Alessandro; Bonetti, Luca Reggiani

    2015-02-01

    The aim of this study was to assess whether Neisseria meningitidis, Listeria monocytogenes, Streptococcus pneumoniae and Haemophilus influenzae can be identified using the polymerase chain reaction technique in the cerebrospinal fluid of severely decomposed bodies with known, noninfectious causes of death or whether postmortem changes can lead to false positive results and thus erroneous diagnostic information. Biochemical investigations, postmortem bacteriology and real-time polymerase chain reaction analysis in cerebrospinal fluid were performed in a series of medico-legal autopsies that included noninfectious causes of death with decomposition, bacterial meningitis without decomposition, bacterial meningitis with decomposition, low respiratory tract infections with decomposition and abdominal infections with decomposition. In noninfectious causes of death with decomposition, postmortem investigations failed to reveal results consistent with generalized inflammation or bacterial infections at the time of death. Real-time polymerase chain reaction analysis in cerebrospinal fluid did not identify the studied bacteria in any of these cases. The results of this study highlight the usefulness of molecular approaches in bacteriology as well as the use of alternative biological samples in postmortem biochemistry in order to obtain suitable information even in corpses with severe decompositional changes. Copyright © 2014 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  15. An integrated spectroscopic and wet chemical approach to investigate grass litter decomposition chemistry

    USDA-ARS?s Scientific Manuscript database

    Litter decomposition is a key process for soil organic matter formation and terrestrial biogeochemistry. Yet we still lack complete understanding of the chemical transformations which occur in the litter residue as it decomposes. A number of methods such as bulk nutrient concentrations, chemical fra...

  16. Approaches for Subgrid Parameterization: Does Scaling Help?

    NASA Astrophysics Data System (ADS)

    Yano, Jun-Ichi

    2016-04-01

Arguably, scaling behavior is a well-established fact in many geophysical systems, and there are already many theoretical studies elucidating this issue. However, the scaling law is slow to be introduced into "operational" geophysical modelling, notably weather forecast and climate projection models. The main purpose of this presentation is to ask why, and to try to answer this question. As a reference point, the presentation reviews the three major approaches to traditional subgrid parameterization: moment, PDF (probability density function), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows both in the atmosphere and the oceans. The PDF approach is intuitively appealing as it deals with the distribution of subgrid-scale variables in a more direct manner. The third category, originally proposed by Aubry et al. (1988) in the context of wall boundary-layer turbulence, is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (POD, or empirical orthogonal functions, EOF) as the mode-decomposition basis. However, the methodology can easily be generalized to any decomposition basis. The mass-flux formulation currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally-constant modes as the expansion basis. The mode decomposition can, furthermore, be re-interpreted as a type of Galerkin approach for numerically modelling subgrid-scale processes. Simple extrapolation of this re-interpretation further suggests that the subgrid parameterization problem may be re-interpreted as a type of mesh-refinement problem in numerical modelling. We furthermore see a link between the subgrid parameterization and downscaling problems along this line. The mode decomposition approach would also be the best framework for linking the traditional parameterizations and the scaling perspectives. However, by seeing the link more clearly, we also see the strengths and weaknesses of introducing scaling perspectives into parameterizations. Any diagnosis under a mode decomposition would immediately reveal the power-law nature of the spectrum. However, exploiting this knowledge in an operational parameterization is a different story. It is symbolic that POD studies have focused on representing the largest-scale coherency within a grid box under a high truncation; this problem is already hard enough. Looked at differently, the scaling law is a very concise way of characterizing many subgrid-scale variabilities in systems. We may even argue that the scaling law can provide almost complete subgrid-scale information for constructing a parameterization, but with a major missing link: its amplitude must be specified by an additional condition. This condition is called "closure" in the parameterization problem, and it is known to be a tough problem. We should also realize that studies of scaling behavior tend to be statistical in the sense that they hardly provide complete information for constructing a parameterization: can we specify the coefficients of all the decomposition modes by a scaling law when only the first few leading modes are specified?
Arguably, the renormalization group (RNG) is a very powerful tool for reducing a system with scaling behavior to a low dimension, say, under an appropriate mode-decomposition procedure. However, RNG is an analytical tool: it is extremely hard to apply to real, complex geophysical systems. It appears that we still have a long way to go before we can begin to exploit the scaling law to construct operational subgrid parameterizations in an effective manner.
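
    As a concrete anchor for the mode-decomposition discussion, the following sketch computes a POD/EOF basis of a synthetic space-time field via the SVD of its anomaly matrix; the field itself is an illustrative assumption.

```python
# Sketch: proper orthogonal decomposition (EOF analysis) of a space-time
# field via SVD, i.e., the mode-decomposition basis discussed above.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 500)
x = np.linspace(0.0, 1.0, 64)

# Synthetic field: two coherent modes plus noise.
field = (np.outer(np.sin(2 * np.pi * t), np.sin(np.pi * x))
         + 0.3 * np.outer(np.cos(5 * t), np.sin(3 * np.pi * x))
         + 0.05 * rng.standard_normal((t.size, x.size)))

anomaly = field - field.mean(axis=0)           # remove the time mean
U, s, Vt = np.linalg.svd(anomaly, full_matrices=False)

eofs = Vt                                      # spatial modes (rows)
pcs = U * s                                    # principal-component time series
explained = s**2 / np.sum(s**2)
print("variance captured by the first two modes:", explained[:2].sum())
```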

  17. The development of a new technical platform to measure soil organic nitrogen cycling processes by microbes

    NASA Astrophysics Data System (ADS)

    Hu, Yuntao; Richter, Andreas; Wanek, Wolfgang

    2016-04-01

Soil organic matter (SOM) decomposition is one of the most important processes of the global nitrogen cycle, having strong implications for soil N availability, terrestrial carbon cycling and soil carbon sequestration. During SOM decomposition, low-molecular weight organic nitrogen (LMWON) is released, which can be taken up by microbes (and plants). The breakdown of high-molecular weight organic nitrogen (HMWON, e.g. proteins, peptidoglycan, chitin, nucleic acids) represents the bottleneck of soil HMWON decomposition and is performed by extracellular enzymes released mainly by soil microorganisms. Despite this, the current understanding of the controls of these processes is incomplete. The only way to measure gross decomposition rates of these polymers is to use isotope pool dilution (IPD) techniques. In IPD approaches the product pool is isotopically enriched (by e.g. 15N) and the isotope dilution of this pool is measured over time. We have pioneered an IPD approach for protein and cellulose depolymerization, but IPD approaches for other polymers, specifically for important microbial necromass components such as chitin (fungi) and peptidoglycan (bacteria), or nucleic acids, have not yet been developed. Here we present a workflow based on a universally applicable technical platform that allows estimation of the gross depolymerization rates of SOM (HMWON) at the molecular level, using ultra high performance liquid chromatography/high resolution Orbitrap mass spectrometry (UPLC/HRMS) combined with IPD techniques. The necessary isotopically labeled organic polymers (chitin, peptidoglycan and others) are extracted from laboratory bacterial and fungal cultures grown in fully isotopically labeled nutrient media (15N, 13C or both). A purification scheme for the different polymers is currently being established. Labeled potential decomposition products (e.g. amino sugars and muropeptides from peptidoglycan, amino sugars and chitooligosaccharides from chitin, nucleotides and nucleosides from nucleic acids) are prepared by enzymatic and/or acid digestion of the polymers. Different UPLC separation columns (Hypercarb, HiliC and C18) make it possible to separate more than 100 related monomers and oligomers produced during polymer decomposition, a prerequisite for analyzing the concentrations and isotope kinetics of decomposition products in complex soil samples. The benchtop Orbitrap mass analyzer has a nominal mass resolving power of 100,000 (FWHM at m/z 200), which enables us to resolve 13C- and 15N-labelled species (mass difference: 0.00632), allowing carbon and nitrogen isotopes to be traced in the same compound in IPD experiments. With the accurate masses, retention times and the isotopic pattern we can quantify and identify the target decomposition products and their isotope kinetics during soil incubation experiments. This will enable us to estimate in situ decomposition rates of the major organic nitrogen polymers in soils, allowing new insights into the major controls of the most important step in soil organic nitrogen recycling.

  18. Extracting fingerprint of wireless devices based on phase noise and multiple level wavelet decomposition

    NASA Astrophysics Data System (ADS)

    Zhao, Weichen; Sun, Zhuo; Kong, Song

    2016-10-01

Wireless devices can be identified by a fingerprint extracted from their transmitted signal, which is useful in wireless communication security and other fields. This paper presents a method that extracts a fingerprint based on the phase noise of the signal and multiple-level wavelet decomposition. The phase of the signal is extracted first and then decomposed by multiple-level wavelet decomposition. Statistics of each wavelet coefficient vector are used to construct the fingerprint. In addition, the relationship between wavelet decomposition level and recognition accuracy is simulated, and a recommended decomposition level is identified. Compared with previous methods, our method is simpler, and the recognition accuracy remains high when the signal-to-noise ratio (SNR) is low.
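
    A minimal sketch of the pipeline, assuming PyWavelets and a toy phase-noise signal model: extract the phase, apply a multilevel wavelet decomposition, and collect per-level statistics as the fingerprint. The wavelet, level, and statistics are illustrative choices, not the paper's exact settings.

```python
# Sketch: wavelet-based RF fingerprint from the instantaneous phase.
import numpy as np
import pywt

rng = np.random.default_rng(5)
n = 4096
# Toy complex baseband signal: carrier plus a random-walk phase-noise term.
iq = np.exp(1j * (2 * np.pi * 0.01 * np.arange(n)
                  + 0.02 * np.cumsum(rng.standard_normal(n))))

phase = np.unwrap(np.angle(iq))                 # extract the signal phase
coeffs = pywt.wavedec(phase, wavelet="db6", level=6)

# Fingerprint: (mean, std) of each coefficient vector across levels.
fingerprint = np.array([stat for c in coeffs for stat in (c.mean(), c.std())])
print(fingerprint.shape)                        # one compact device signature
```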

  19. Image edge detection based tool condition monitoring with morphological component analysis.

    PubMed

    Yu, Xiaolong; Lin, Xin; Dai, Yiquan; Zhu, Kunpeng

    2017-07-01

The measurement and monitoring of tool condition are key to product precision in automated manufacturing. To meet this need, this study proposes a novel tool wear monitoring approach based on edge detection in the monitored images. Image edge detection is a fundamental tool for obtaining image features. The approach extracts the tool edge with morphological component analysis. Through the decomposition of the original tool wear image, the approach reduces the influence of texture and noise on edge measurement. Based on sparse representation of the target image and edge detection, the approach can accurately extract the tool wear edge with a continuous and complete contour, and is convenient for characterizing tool conditions. Compared to celebrated algorithms in the literature, this approach improves the integrity and connectivity of edges, and the results show that it achieves better geometric accuracy and a lower error rate in the estimation of tool conditions. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Wavelet analysis for wind fields estimation.

    PubMed

    Leite, Gladeston C; Ushizima, Daniela M; Medeiros, Fátima N S; de Lima, Gilson G

    2010-01-01

Wind field analysis from synthetic aperture radar images allows the estimation of wind direction and speed based on image descriptors. In this paper, we propose a framework to automate wind direction retrieval based on wavelet decomposition associated with spectral processing. We extend existing undecimated wavelet transform approaches by including à trous with a B(3) spline scaling function, in addition to other wavelet bases such as Gabor and Mexican-hat. The purpose is to extract more reliable directional information when wind speed values range from 5 to 10 m s(-1). Using C-band empirical models, associated with the estimated directional information, we calculate local wind speed values and compare our results with QuikSCAT scatterometer data. The proposed approach has potential application in the evaluation of oil spills and wind farms.

  1. Decomposition patterns of buried remains at different intervals in the Central Highveld region of South Africa.

    PubMed

    Marais-Werner, A; Myburgh, J; Meyer, A; Nienaber, W C; Steyn, M

    2017-07-01

Burial of remains is an important factor when one attempts to establish the post-mortem interval, as it reduces and, in extreme cases, excludes oviposition by Diptera species. This in turn leads to modification of the decomposition process. The aim of this study was to record decomposition patterns of buried remains using a pig model. The pattern of decomposition was evaluated at different intervals and recorded according to existing guidelines. In order to contribute to our knowledge of decomposition in different settings, a quantifiable approach was followed. Results indicated that the early stages of decomposition occurred rapidly for buried remains within 7-33 days. Between 14 and 33 days, buried pigs displayed features commonly associated with the early to middle stages of decomposition, such as discoloration and bloating. From 33 to 90 days, advanced decomposition manifested on the remains, and pigs then reached a stage of advanced decomposition where little change was observed over the following approximately 90-183 days after interment. Throughout this study, total body scores remained higher for surface remains. Overall, buried pigs followed a pattern of decomposition similar to that of surface remains, although at a much slower rate when compared with similar post-mortem intervals in surface remains. In this study, the decomposition patterns and rates of buried remains were mostly influenced by limited insect activity and adipocere formation, which reduces the rate of decay in a conducive environment (i.e., burial in soil).

  2. Validation of Distributed Soil Moisture: Airborne Polarimetric SAR vs. Ground-based Sensor Networks

    NASA Astrophysics Data System (ADS)

    Jagdhuber, T.; Kohling, M.; Hajnsek, I.; Montzka, C.; Papathanassiou, K. P.

    2012-04-01

The knowledge of spatially distributed soil moisture is highly desirable for enhanced hydrological modeling in terms of flood prevention, and for yield optimization in combination with precision farming. Especially in mid-latitudes, the growing agricultural vegetation results in increasing soil coverage along the crop cycle. For a remote sensing approach, this vegetation influence has to be separated from the soil contribution within the resolution cell to extract the actual soil moisture. Therefore, a hybrid decomposition was developed for the estimation of soil moisture under vegetation cover using fully polarimetric SAR data. The novel polarimetric decomposition combines a model-based decomposition, separating the volume component from the ground components, with an eigen-based decomposition of the two ground components into a surface and a dihedral scattering contribution. Hence, this hybrid decomposition, which is based on [1,2], establishes an innovative way to retrieve soil moisture under vegetation. The developed inversion algorithm for soil moisture under vegetation cover is applied to fully polarimetric data of the TERENO campaign, conducted in May and June 2011 for the Rur catchment within the Eifel/Lower Rhine Valley Observatory. The fully polarimetric SAR data were acquired in high spatial resolution (range: 1.92m, azimuth: 0.6m) by DLR's novel F-SAR sensor at L-band. The inverted soil moisture product from the airborne SAR data is validated with corresponding distributed ground measurements for a quality assessment of the developed algorithm. The in situ measurements were obtained on the one hand by mobile FDR probes from agricultural fields near the towns of Merzenhausen and Selhausen, incorporating different crop types, and on the other hand by distributed wireless sensor networks (SoilNet clusters) from a grassland test site (near the town of Rollesbroich) and from a forest stand (within the Wüstebach sub-catchment). Each SoilNet cluster incorporates around 150 wireless measuring devices on a grid of approximately 30 ha for distributed soil moisture sensing. Finally, the comparison of both distributed soil moisture products leads to a discussion of the potentials and limitations of obtaining soil moisture under vegetation cover with high-resolution fully polarimetric SAR. [1] S.R. Cloude, Polarisation: applications in remote sensing. Oxford, Oxford University Press, 2010. [2] Jagdhuber, T., Hajnsek, I., Papathanassiou, K.P. and Bronstert, A.: A Hybrid Decomposition for Soil Moisture Estimation under Vegetation Cover Using Polarimetric SAR. Proc. of the 5th International Workshop on Science and Applications of SAR Polarimetry and Polarimetric Interferometry, ESA-ESRIN, Frascati, Italy, January 24-28, 2011, p.1-6.

  3. Hierarchical fractional-step approximations and parallel kinetic Monte Carlo algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arampatzis, Giorgos, E-mail: garab@math.uoc.gr; Katsoulakis, Markos A., E-mail: markos@math.umass.edu; Plechac, Petr, E-mail: plechac@math.udel.edu

    2012-10-01

We present a mathematical framework for constructing and analyzing parallel algorithms for lattice kinetic Monte Carlo (KMC) simulations. The resulting algorithms have the capacity to simulate a wide range of spatio-temporal scales in spatially distributed, non-equilibrium physiochemical processes with complex chemistry and transport micro-mechanisms. Rather than focusing on constructing exactly the stochastic trajectories, our approach relies on approximating the evolution of observables, such as density, coverage, correlations and so on. More specifically, we develop a spatial domain decomposition of the Markov operator (generator) that describes the evolution of all observables according to the kinetic Monte Carlo algorithm. This domain decomposition corresponds to a decomposition of the Markov generator into a hierarchy of operators and can be tailored to specific hierarchical parallel architectures such as multi-core processors or clusters of Graphical Processing Units (GPUs). Based on this operator decomposition, we formulate parallel Fractional step kinetic Monte Carlo algorithms by employing the Trotter Theorem and its randomized variants; these schemes, (a) are partially asynchronous on each fractional step time-window, and (b) are characterized by their communication schedule between processors. The proposed mathematical framework allows us to rigorously justify the numerical and statistical consistency of the proposed algorithms, showing the convergence of our approximating schemes to the original serial KMC. The approach also provides a systematic evaluation of different processor communicating schedules. We carry out a detailed benchmarking of the parallel KMC schemes using available exact solutions, for example, in Ising-type systems and we demonstrate the capabilities of the method to simulate complex spatially distributed reactions at very large scales on GPUs. Finally, we discuss work load balancing between processors and propose a re-balancing scheme based on probabilistic mass transport methods.
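
    The Trotter-splitting idea underlying these fractional-step schemes can be illustrated on small dense matrices, a toy stand-in for the hierarchy of Markov generators:

```python
# Sketch: Lie-Trotter fractional-step approximation,
# exp(t(A+B)) ~= (exp(tA/n) exp(tB/n))^n, on small dense matrices.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(6)
A = rng.standard_normal((6, 6))
B = rng.standard_normal((6, 6))
t = 1.0

exact = expm(t * (A + B))
for n in (1, 10, 100):
    step = expm(t * A / n) @ expm(t * B / n)        # one fractional step
    approx = np.linalg.matrix_power(step, n)        # compose n fractional steps
    err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
    print(f"n={n:4d}  relative error={err:.2e}")    # error shrinks like O(1/n)
```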

  4. Ozone decomposition

    PubMed Central

    Batakliev, Todor; Georgiev, Vladimir; Anachkov, Metody; Rakovsky, Slavcho

    2014-01-01

Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work on ozone decomposition has been reported in the literature. This review provides a comprehensive summary of that literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review of kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals has stimulated the use of metal oxide catalysts, particularly catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is first order. A mechanism for the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates. PMID:26109880

  5. Generalized first-order kinetic model for biosolids decomposition and oxidation during hydrothermal treatment.

    PubMed

    Shanableh, A

    2005-01-01

The main objective of this study was to develop generalized first-order kinetic models to represent hydrothermal decomposition and oxidation of biosolids within a wide range of temperatures (200-450 degrees C). A lumping approach was used in which oxidation of the various organic ingredients was characterized by the chemical oxygen demand (COD), and decomposition was characterized by the particulate (i.e., nonfilterable) chemical oxygen demand (PCOD). Using the Arrhenius equation, k = k0*exp(-Ea/RT), activation energy (Ea) levels were derived from 42 continuous-flow hydrothermal treatment experiments conducted at temperatures in the range of 200-450 degrees C. Using predetermined values for k0 in the Arrhenius equation, the activation energies of the various organic ingredients were separated into 42 values for oxidation and a similar number for decomposition. The activation energy values were then classified into levels representing the relative ease with which the organic ingredients of the biosolids were oxidized or decomposed. The resulting simple first-order kinetic models adequately represented, within the experimental data range, hydrothermal decomposition of the organic particles as measured by PCOD and oxidation of the organic content as measured by COD. The modeling approach presented in the paper provides a simple and general framework suitable for assessing the relative reaction rates of the various organic ingredients of biosolids.
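
    A minimal sketch of the lumped model form, with placeholder k0 and Ea values rather than the paper's fitted ones:

```python
# Sketch: lumped first-order hydrothermal kinetics with an Arrhenius rate,
# C(t) = C0 * exp(-k(T) * t),  k(T) = k0 * exp(-Ea / (R * T)).
# k0 and Ea below are assumed placeholders, not the paper's fitted values.
import numpy as np

R = 8.314          # gas constant, J/(mol K)
k0 = 1.0e8         # assumed pre-exponential factor, 1/min
Ea = 90.0e3        # assumed activation energy, J/mol

def cod_remaining(t_min, temp_c, c0=1.0):
    """Fraction of COD remaining after t_min minutes at temp_c degrees C."""
    k = k0 * np.exp(-Ea / (R * (temp_c + 273.15)))
    return c0 * np.exp(-k * t_min)

for temp in (200, 300, 450):
    print(temp, "degC -> COD fraction after 30 min:", cod_remaining(30.0, temp))
```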

  6. Deorientation of PolSAR coherency matrix for volume scattering retrieval

    NASA Astrophysics Data System (ADS)

    Kumar, Shashi; Garg, R. D.; Kushwaha, S. P. S.

    2016-05-01

Polarimetric SAR data has proven its potential to extract scattering information for different features appearing in a single resolution cell. Several decomposition modelling approaches have been developed to retrieve scattering information from PolSAR data. During scattering power decomposition based on physical scattering models, it becomes very difficult to distinguish the volume scattering of randomly oriented vegetation from the scattering of oblique structures that produce both double-bounce and volume scattering, because both are assigned to the same scattering mechanism. The polarization orientation angle (POA) of an electromagnetic wave is one of its most important characteristics, and it is changed by scattering from the geometrical structure of topographic slopes, oriented urban areas and randomly oriented features such as vegetation cover. The shift in POA affects polarimetric radar signatures. So, for accurate estimation of the scattering nature of a feature, compensation of the polarization orientation shift becomes an essential procedure. The prime objective of this work was to investigate the effect of the shift in POA on scattering information retrieval and to explore the effect of deorientation on the regression between field-estimated aboveground biomass (AGB) and volume scattering. For this study, Dudhwa National Park, U.P., India was selected as the study area, and fully polarimetric ALOS PALSAR data were used to retrieve scattering information from the forest area of the park. Field data for DBH and tree height were collected for AGB estimation using stratified random sampling. AGB was estimated for 170 plots at different locations in the forest area. The Yamaguchi four-component decomposition modelling approach was utilized to retrieve surface, double-bounce, helix and volume scattering information. The shift in polarization orientation angle was estimated, and deorientation of the coherency matrix was performed to compensate for the POA shift. The effect of deorientation on the RGB color composite for the forest area can easily be seen. Overestimation of volume scattering and underestimation of double-bounce scattering were recorded for PolSAR decomposition without deorientation, and an increase in double-bounce scattering and a decrease in volume scattering were noticed after deorientation. This study was mainly focused on volume scattering retrieval and its relation to field-estimated AGB. The change in volume scattering after POA compensation of the PolSAR data was recorded, and a comparison was performed on the volume scattering values for all 170 forest plots for which field data were collected. A decrease in volume scattering after deorientation was noted for all plots. Regression between PolSAR decomposition-based volume scattering and AGB was performed. Before deorientation, the coefficient of determination (R2) between volume scattering and AGB was 0.225. After deorientation, an improvement in the coefficient of determination was found, with a value of 0.613. This study recommends deorientation of PolSAR data in decomposition modelling to retrieve reliable volume scattering information from forest areas.
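
    The deorientation step itself is a similarity rotation of the 3x3 coherency matrix by the orientation angle. Below is a sketch with an assumed angle and a hypothetical coherency matrix; the POA estimation step is omitted.

```python
# Sketch: deorientation of a 3x3 PolSAR coherency matrix T by a given
# polarization orientation angle theta: T_deor = R(theta) T R(theta)^H.
import numpy as np

def deorient(T, theta):
    c, s = np.cos(2.0 * theta), np.sin(2.0 * theta)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0,   c,   s],
                  [0.0,  -s,   c]], dtype=complex)
    return R @ T @ R.conj().T

# Hypothetical Hermitian coherency matrix for a forest pixel.
T = np.array([[2.0,        0.1 + 0.2j, 0.0],
              [0.1 - 0.2j, 1.0,        0.3j],
              [0.0,       -0.3j,       0.8]])
T_deor = deorient(T, theta=np.deg2rad(10.0))
print(np.real(np.diag(T_deor)))   # surface / double-bounce / volume-related powers
```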

  7. Uncertainty in the fate of soil organic carbon: A comparison of three conceptually different soil decomposition models

    USGS Publications Warehouse

    He, Yujie; Yang, Jinyan; Zhuang, Qianlai; McGuire, A. David; Zhu, Qing; Liu, Yaling; Teskey, Robert O.

    2014-01-01

Conventional Q10 soil organic matter decomposition models and more complex microbial models are available for making projections of future soil carbon dynamics. However, it is unclear (1) how well the conceptually different approaches can simulate observed decomposition and (2) to what extent the trajectories of long-term simulations differ when using the different approaches. In this study, we compared three structurally different soil carbon (C) decomposition models (one Q10 and two microbial models of different complexity), each with a one- and a two-horizon version. The models were calibrated and validated using 4 years of measurements of heterotrophic soil CO2 efflux from trenched plots in a Dahurian larch (Larix gmelinii Rupr.) plantation. All models reproduced the observed heterotrophic component of soil CO2 efflux, but the trajectories of soil carbon dynamics differed substantially in 100 year simulations with and without warming and increased litterfall input, with the microbial models producing better agreement with observed changes in soil organic C in long-term warming experiments. Our results also suggest that both constant and varying carbon use efficiency are plausible when modeling future decomposition dynamics and that the use of a short-term (e.g., a few years) period of measurement is insufficient to adequately constrain model parameters that represent long-term responses of microbial thermal adaptation. These results highlight the need to reframe the representation of decomposition models and to constrain parameters with long-term observations and multiple data streams. We urge caution in interpreting future soil carbon responses derived from existing decomposition models because both conceptual and parameter uncertainties are substantial.
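
    For concreteness, the conventional Q10 formulation the study starts from can be sketched as a one-pool model; all parameter values below are assumptions.

```python
# Sketch: Q10 temperature scaling of a first-order decomposition rate,
# R(T) = R_ref * Q10**((T - T_ref) / 10), in a one-pool soil carbon model.
def q10_rate(temp_c, r_ref=0.02, q10=2.0, t_ref=10.0):
    """Decomposition rate (1/yr) at temperature temp_c under Q10 scaling."""
    return r_ref * q10 ** ((temp_c - t_ref) / 10.0)

# One-pool trajectory over 100 years under 3 degrees of warming.
c, dt = 1000.0, 1.0                  # initial stock (gC/m2), 1-year time step
litter_in = 20.0                     # litterfall input, gC/m2/yr
for year in range(100):
    c += dt * (litter_in - q10_rate(13.0) * c)   # warmed from 10 to 13 degC
print("stock after 100 yr of +3 degC warming:", round(c, 1))
```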

  8. Adiabatic markovian dynamics.

    PubMed

    Oreshkov, Ognyan; Calsamiglia, John

    2010-07-30

    We propose a theory of adiabaticity in quantum markovian dynamics based on a decomposition of the Hilbert space induced by the asymptotic behavior of the Lindblad semigroup. A central idea of our approach is that the natural generalization of the concept of eigenspace of the Hamiltonian in the case of markovian dynamics is a noiseless subsystem with a minimal noisy cofactor. Unlike previous attempts to define adiabaticity for open systems, our approach deals exclusively with physical entities and provides a simple, intuitive picture at the Hilbert-space level, linking the notion of adiabaticity to the theory of noiseless subsystems. As two applications of our theory, we propose a general framework for decoherence-assisted computation in noiseless codes and a dissipation-driven approach to holonomic computation based on adiabatic dragging of subsystems that is generally not achievable by nondissipative means.

  9. Dilation and Hypertrophy: A Cell-Based Continuum Mechanics Approach Towards Ventricular Growth and Remodeling

    NASA Astrophysics Data System (ADS)

    Ulerich, J.; Göktepe, S.; Kuhl, E.

This manuscript presents a continuum approach towards cardiac growth and remodeling that is capable of predicting chronic maladaptation of the heart in response to changes in mechanical loading. It is based on the multiplicative decomposition of the deformation gradient into an elastic and a growth part. Motivated by morphological changes in cardiomyocyte geometry, we introduce an anisotropic growth tensor that can capture both hypertrophic wall thickening and ventricular dilation within one generic concept. In agreement with clinical observations, we propose wall thickening to be a stress-driven phenomenon, whereas dilation is introduced as a strain-driven process. The features of the proposed approach are illustrated in terms of the adaptation of thin heart slices and of overload-induced dilation in a generic bi-ventricular heart model.
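
    A minimal sketch of the kinematic split, with an assumed fiber direction and growth multiplier:

```python
# Sketch: multiplicative split of the deformation gradient, F = Fe @ Fg.
# Given a total F and an assumed anisotropic growth tensor Fg, the elastic,
# stress-generating part is recovered as Fe = F @ inv(Fg).
import numpy as np

f0 = np.array([1.0, 0.0, 0.0])                 # assumed fiber direction
theta = 1.2                                    # assumed growth multiplier along the fiber

Fg = np.eye(3) + (theta - 1.0) * np.outer(f0, f0)   # fiber-wise (dilation-type) growth
F = np.array([[1.3, 0.05, 0.0],
              [0.0, 1.00, 0.0],
              [0.0, 0.00, 1.0]])               # example total deformation gradient

Fe = F @ np.linalg.inv(Fg)                     # elastic part drives the stress response
Je, Jg = np.linalg.det(Fe), np.linalg.det(Fg)  # elastic vs. grown volume change
print(Fe, Je, Jg)
```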

  10. The design and implementation of signal decomposition system of CL multi-wavelet transform based on DSP builder

    NASA Astrophysics Data System (ADS)

    Huang, Yan; Wang, Zhihui

    2015-12-01

With the development of FPGAs, DSP Builder is widely applied to design system-level algorithms. The CL multi-wavelet algorithm is more advanced and effective than scalar wavelets for signal decomposition. Thus, a CL multi-wavelet system based on DSP Builder is designed for the first time in this paper. The system mainly contains three parts: a pre-filtering subsystem, a one-level decomposition subsystem and a two-level decomposition subsystem. It can be converted into the hardware language VHDL by the Signal Compiler block, which can be used in Quartus II. Analysis of the energy indicator shows that this system outperforms the Daubechies wavelet in signal decomposition. Furthermore, it has proved to be suitable for the implementation of signal fusion based on SoPC hardware, and it will become a solid foundation in this new field.

  11. Combined iterative reconstruction and image-domain decomposition for dual energy CT using total-variation regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Xue; Niu, Tianye; Zhu, Lei, E-mail: leizhu@gatech.edu

    2014-05-15

Purpose: Dual-energy CT (DECT) is being increasingly used for its capability of material decomposition and energy-selective imaging. A generic problem of DECT, however, is that the decomposition process is unstable in the sense that the relative magnitude of decomposed signals is reduced due to signal cancellation while the image noise is accumulating from the two CT images of independent scans. Direct image decomposition, therefore, leads to severe degradation of signal-to-noise ratio on the resultant images. Existing noise suppression techniques are typically implemented in DECT with the procedures of reconstruction and decomposition performed independently, which do not explore the statistical properties of decomposed images during the reconstruction for noise reduction. In this work, the authors propose an iterative approach that combines the reconstruction and the signal decomposition procedures to minimize the DECT image noise without noticeable loss of resolution. Methods: The proposed algorithm is formulated as an optimization problem, which balances the data fidelity and total variation of decomposed images in one framework, and the decomposition step is carried out iteratively together with reconstruction. The noise in the CT images from the proposed algorithm becomes well correlated even though the noise of the raw projections is independent on the two CT scans. Due to this feature, the proposed algorithm avoids noise accumulation during the decomposition process. The authors evaluate the method performance on noise suppression and spatial resolution using phantom studies and compare the algorithm with conventional denoising approaches as well as combined iterative reconstruction methods with different forms of regularization. Results: On the Catphan©600 phantom, the proposed method outperforms the existing denoising methods on preserving spatial resolution at the same level of noise suppression, i.e., a reduction of noise standard deviation by one order of magnitude. This improvement is mainly attributed to the high noise correlation in the CT images reconstructed by the proposed algorithm. Iterative reconstruction using different regularization, including quadratic or q-generalized Gaussian Markov random field regularization, achieves similar noise suppression from high noise correlation. However, the proposed TV regularization obtains a better edge preserving performance. Studies of electron density measurement also show that our method reduces the average estimation error from 9.5% to 7.1%. On the anthropomorphic head phantom, the proposed method suppresses the noise standard deviation of the decomposed images by a factor of ∼14 without blurring the fine structures in the sinus area. Conclusions: The authors propose a practical method for DECT imaging reconstruction, which combines the image reconstruction and material decomposition into one optimization framework. Compared to the existing approaches, our method achieves a superior performance on DECT imaging with respect to decomposition accuracy, noise reduction, and spatial resolution.
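
    The instability of direct image-domain decomposition that motivates the paper can be sketched as a per-pixel inversion of a 2x2 basis-material matrix; all numbers below are illustrative assumptions, not calibrated attenuation values.

```python
# Sketch: direct two-material image-domain decomposition and its noise
# amplification. A maps material densities to (high, low) kVp attenuation.
import numpy as np

A = np.array([[0.30, 0.18],    # high-kVp attenuation per unit of material 1, 2 (assumed)
              [0.45, 0.20]])   # low-kVp attenuation (assumed)

rng = np.random.default_rng(10)
truth = np.stack([np.ones((64, 64)), 0.5 * np.ones((64, 64))])       # material maps
imgs = np.einsum("em,mxy->exy", A, truth) + 0.01 * rng.standard_normal((2, 64, 64))

Ainv = np.linalg.inv(A)
decomp = np.einsum("me,exy->mxy", Ainv, imgs)    # noise is amplified by inv(A)
print(decomp[0].std(), decomp[1].std())          # cf. the 0.01 noise in the inputs
```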

  12. Low-rank plus sparse decomposition for exoplanet detection in direct-imaging ADI sequences. The LLSG algorithm

    NASA Astrophysics Data System (ADS)

    Gomez Gonzalez, C. A.; Absil, O.; Absil, P.-A.; Van Droogenbroeck, M.; Mawet, D.; Surdej, J.

    2016-05-01

    Context. Data processing constitutes a critical component of high-contrast exoplanet imaging. Its role is almost as important as the choice of a coronagraph or a wavefront control system, and it is intertwined with the chosen observing strategy. Among the data processing techniques for angular differential imaging (ADI), the most recent is the family of principal component analysis (PCA) based algorithms. It is a widely used statistical tool developed during the first half of the past century. PCA serves, in this case, as a subspace projection technique for constructing a reference point spread function (PSF) that can be subtracted from the science data for boosting the detectability of potential companions present in the data. Unfortunately, when building this reference PSF from the science data itself, PCA comes with certain limitations such as the sensitivity of the lower dimensional orthogonal subspace to non-Gaussian noise. Aims: Inspired by recent advances in machine learning algorithms such as robust PCA, we aim to propose a localized subspace projection technique that surpasses current PCA-based post-processing algorithms in terms of the detectability of companions at near real-time speed, a quality that will be useful for future direct imaging surveys. Methods: We used randomized low-rank approximation methods recently proposed in the machine learning literature, coupled with entry-wise thresholding to decompose an ADI image sequence locally into low-rank, sparse, and Gaussian noise components (LLSG). This local three-term decomposition separates the starlight and the associated speckle noise from the planetary signal, which mostly remains in the sparse term. We tested the performance of our new algorithm on a long ADI sequence obtained on β Pictoris with VLT/NACO. Results: Compared to a standard PCA approach, LLSG decomposition reaches a higher signal-to-noise ratio and has an overall better performance in the receiver operating characteristic space. This three-term decomposition brings a detectability boost compared to the full-frame standard PCA approach, especially in the small inner working angle region where complex speckle noise prevents PCA from discerning true companions from noise.
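
    A rough sketch of the three-term idea, combining a randomized low-rank approximation with entry-wise soft-thresholding. This is a generic low-rank-plus-sparse split, not the published LLSG algorithm with its localized patch processing.

```python
# Sketch: decompose M into low-rank (L), sparse (S), and residual noise terms
# via alternating randomized-SVD projection and soft-thresholding.
import numpy as np

def lowrank_sparse(M, rank=5, thresh=0.1, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    S = np.zeros_like(M)
    for _ in range(iters):
        # Randomized range finder + small SVD -> low-rank approximation of M - S.
        Omega = rng.standard_normal((M.shape[1], rank + 5))
        Q, _ = np.linalg.qr((M - S) @ Omega)
        Uw, s, Vt = np.linalg.svd(Q.T @ (M - S), full_matrices=False)
        L = Q @ (Uw[:, :rank] * s[:rank]) @ Vt[:rank]
        # Entry-wise soft-thresholding of the residual gives the sparse term.
        Rres = M - L
        S = np.sign(Rres) * np.maximum(np.abs(Rres) - thresh, 0.0)
    return L, S, M - L - S

M = np.random.default_rng(7).standard_normal((100, 100))
L, S, G = lowrank_sparse(M)
```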

  13. Low bit rate coding of Earth science images

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1993-01-01

In this paper, the authors discuss compression based on some new ideas in vector quantization and their incorporation in a sub-band coding framework. Several variations are considered, which collectively address many of the individual compression needs within the earth science community. The approach taken in this work is based on some recent advances in the area of variable rate residual vector quantization (RVQ). This new RVQ method is considered separately and in conjunction with sub-band image decomposition. Very good results are achieved in coding a variety of earth science images. The last section of the paper provides some comparisons that illustrate the improvement in performance attributable to this approach relative to the JPEG coding standard.

  14. Scalable parallel elastic-plastic finite element analysis using a quasi-Newton method with a balancing domain decomposition preconditioner

    NASA Astrophysics Data System (ADS)

    Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu

    2018-04-01

A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) it avoids a double-loop iteration algorithm, which generally has large computational complexity, and (2) it accounts for the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.

  15. Vibration fatigue using modal decomposition

    NASA Astrophysics Data System (ADS)

    Mršnik, Matjaž; Slavič, Janko; Boltežar, Miha

    2018-01-01

    Vibration-fatigue analysis deals with the material fatigue of flexible structures operating close to natural frequencies. Based on the uniaxial stress response, calculated in the frequency domain, the high-cycle fatigue model using the S-N curve material data and the Palmgren-Miner hypothesis of damage accumulation is applied. The multiaxial criterion is used to obtain the equivalent uniaxial stress response followed by the spectral moment approach to the cycle-amplitude probability density estimation. The vibration-fatigue analysis relates the fatigue analysis in the frequency domain to the structural dynamics. However, once the stress response within a node is obtained, the physical model of the structure dictating that response is discarded and does not propagate through the fatigue-analysis procedure. The structural model can be used to evaluate how specific dynamic properties (e.g., damping, modal shapes) affect the damage intensity. A new approach based on modal decomposition is presented in this research that directly links the fatigue-damage intensity with the dynamic properties of the system. It thus offers a valuable insight into how different modes of vibration contribute to the total damage to the material. A numerical study was performed showing good agreement between results obtained using the newly presented approach with those obtained using the classical method, especially with regards to the distribution of damage intensity and critical point location. The presented approach also offers orders of magnitude faster calculation in comparison with the conventional procedure. Furthermore, it can be applied in a straightforward way to strain experimental modal analysis results, taking advantage of experimentally measured strains.
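
    The S-N curve and Palmgren-Miner ingredients mentioned above can be sketched in a few lines; the material constants and cycle histogram are illustrative assumptions.

```python
# Sketch: S-N curve plus Palmgren-Miner damage accumulation for a counted
# set of stress cycles.
import numpy as np

C, k = 1.0e12, 3.0                       # assumed S-N parameters: N(S) = C * S**(-k)

def miner_damage(amplitudes, counts):
    """Total damage D = sum_i n_i / N(S_i); failure is predicted at D >= 1."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    N_allow = C * amplitudes ** (-k)     # allowable cycles at each amplitude
    return np.sum(np.asarray(counts, dtype=float) / N_allow)

# Example cycle histogram (e.g., from rainflow counting or spectral methods).
D = miner_damage(amplitudes=[50.0, 80.0, 120.0], counts=[1e6, 2e5, 1e4])
print("damage per load block:", D, "-> blocks to failure ~", 1.0 / D)
```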

  16. s-core network decomposition: A generalization of k-core analysis to weighted networks

    NASA Astrophysics Data System (ADS)

    Eidsaa, Marius; Almaas, Eivind

    2013-12-01

    A broad range of systems spanning biology, technology, and social phenomena may be represented and analyzed as complex networks. Recent studies of such networks using k-core decomposition have uncovered groups of nodes that play important roles. Here, we present s-core analysis, a generalization of k-core (or k-shell) analysis to complex networks where the links have different strengths or weights. We demonstrate the s-core decomposition approach on two random networks (ER and configuration model with scale-free degree distribution) where the link weights are (i) random, (ii) correlated, and (iii) anticorrelated with the node degrees. Finally, we apply the s-core decomposition approach to the protein-interaction network of the yeast Saccharomyces cerevisiae in the context of two gene-expression experiments: oxidative stress in response to cumene hydroperoxide (CHP), and fermentation stress response (FSR). We find that the innermost s-cores are (i) different from innermost k-cores, (ii) different for the two stress conditions CHP and FSR, and (iii) enriched with proteins whose biological functions give insight into how yeast manages these specific stresses.
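
    A minimal sketch of s-core pruning with networkx, assuming link weights are stored under the 'weight' key: repeatedly remove nodes whose strength (sum of incident link weights) falls below s, since each removal can push other nodes below the threshold.

```python
# Sketch: s-core decomposition of a weighted graph.
import networkx as nx

def s_core(G, s):
    """Return the s-core subgraph of weighted graph G."""
    H = G.copy()
    while True:
        weak = [n for n, strength in H.degree(weight="weight") if strength < s]
        if not weak:
            return H
        H.remove_nodes_from(weak)    # removal can push other nodes below s

G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 2.0), ("b", "c", 1.5),
                           ("c", "a", 1.0), ("c", "d", 0.2)])
print(sorted(s_core(G, s=2.0).nodes()))   # node "d" is pruned first
```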

  17. High-purity Cu nanocrystal synthesis by a dynamic decomposition method.

    PubMed

    Jian, Xian; Cao, Yu; Chen, Guozhang; Wang, Chao; Tang, Hui; Yin, Liangjun; Luan, Chunhong; Liang, Yinglin; Jiang, Jing; Wu, Sixin; Zeng, Qing; Wang, Fei; Zhang, Chengui

    2014-12-01

    Cu nanocrystals are applied extensively in several fields, particularly in microelectronics, sensing, and catalysis. The catalytic behavior of Cu nanocrystals depends mainly on their structure and particle size. In this work, the formation of high-purity Cu nanocrystals is studied using a common chemical vapor deposition precursor, cupric tartrate. This process is investigated through a combined experimental and computational approach. The decomposition kinetics were studied via differential scanning calorimetry and thermogravimetric analysis using the Flynn-Wall-Ozawa, Kissinger, and Starink methods. The growth was found to be influenced by reaction temperature, protective gas, and time. Microstructural and thermal characterizations were performed by X-ray diffraction, scanning electron microscopy, transmission electron microscopy, and differential scanning calorimetry. Decomposition of cupric tartrate at different temperatures was simulated by density functional theory calculations under the generalized gradient approximation. Highly crystalline Cu nanocrystals without floccules were obtained from thermal decomposition of cupric tartrate at 271°C for 8 h under Ar. This general approach paves the way to the controllable synthesis of Cu nanocrystals with high purity.

  18. High-purity Cu nanocrystal synthesis by a dynamic decomposition method

    NASA Astrophysics Data System (ADS)

    Jian, Xian; Cao, Yu; Chen, Guozhang; Wang, Chao; Tang, Hui; Yin, Liangjun; Luan, Chunhong; Liang, Yinglin; Jiang, Jing; Wu, Sixin; Zeng, Qing; Wang, Fei; Zhang, Chengui

    2014-12-01

    Cu nanocrystals are applied extensively in several fields, particularly in microelectronics, sensing, and catalysis. The catalytic behavior of Cu nanocrystals depends mainly on their structure and particle size. In this work, the formation of high-purity Cu nanocrystals is studied using a common chemical vapor deposition precursor, cupric tartrate. This process is investigated through a combined experimental and computational approach. The decomposition kinetics were studied via differential scanning calorimetry and thermogravimetric analysis using the Flynn-Wall-Ozawa, Kissinger, and Starink methods. The growth was found to be influenced by reaction temperature, protective gas, and time. Microstructural and thermal characterizations were performed by X-ray diffraction, scanning electron microscopy, transmission electron microscopy, and differential scanning calorimetry. Decomposition of cupric tartrate at different temperatures was simulated by density functional theory calculations under the generalized gradient approximation. Highly crystalline Cu nanocrystals without floccules were obtained from thermal decomposition of cupric tartrate at 271°C for 8 h under Ar. This general approach paves the way to the controllable synthesis of Cu nanocrystals with high purity.

  19. A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields

    DOE PAGES

    Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto

    2017-10-26

    In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen-Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.
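
    As a point of reference, the classical KL sampling that the abstract contrasts with can be sketched in a few lines: eigendecompose a dense covariance matrix on a small grid and synthesize a correlated sample. The exponential covariance model, correlation length, and grid are assumptions for illustration; the dense eigenproblem below is exactly what becomes infeasible at large scale.

```python
# Sketch of classical KL sampling on a small 1D grid: eigendecompose a dense
# exponential covariance and synthesize one correlated field sample.
import numpy as np

x = np.linspace(0.0, 1.0, 200)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)   # exponential covariance
vals, vecs = np.linalg.eigh(C)                       # dense eigendecomposition
vals = np.clip(vals, 0.0, None)                      # guard tiny negative rounding
xi = np.random.default_rng(6).standard_normal(x.size)
sample = vecs @ (np.sqrt(vals) * xi)                 # one correlated field sample
print(sample.shape, float(sample.std()))
```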

  20. A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto

    In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen-Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.

  1. Singular value decomposition based feature extraction technique for physiological signal analysis.

    PubMed

    Chang, Cheng-Ding; Wang, Chien-Chih; Jiang, Bernard C

    2012-06-01

    Multiscale entropy (MSE) is one of the popular techniques used to calculate and describe the complexity of physiological signals. Many studies use this approach to detect changes in the physiological conditions of the human body. However, MSE results are easily affected by noise and trends, leading to incorrect estimation of MSE values. In this paper, singular value decomposition (SVD) is adopted in place of MSE to extract the features of physiological signals, and the support vector machine (SVM) is adopted to classify the different physiological states. A test data set from the PhysioNet website was used, and the classification results showed that using SVD to extract features of the physiological signal could attain a classification accuracy of 89.157%, which is higher than that obtained using the MSE value (71.084%). The results show that the proposed analysis procedure is effective and appropriate for distinguishing different physiological states. This promising result could serve as a reference for doctors in the diagnosis of congestive heart failure (CHF).
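
    A minimal sketch of the pipeline described above uses the singular values of a windowed-signal matrix as features for an SVM; the synthetic two-state signals, window length, and number of singular values are assumptions, and the PhysioNet data loading is omitted.

```python
# Sketch: singular values of a matrix of overlapping signal windows as features
# for an SVM classifier, in the spirit of the SVD-based feature extraction above.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def svd_features(signal, window=32, n_values=5):
    # Stack overlapping windows into a matrix and keep the top singular values.
    segs = np.lib.stride_tricks.sliding_window_view(signal, window)[::window // 2]
    s = np.linalg.svd(segs, compute_uv=False)
    return s[:n_values]

rng = np.random.default_rng(0)
X = np.array([svd_features(rng.standard_normal(512) + a * np.sin(np.arange(512) / 5.0))
              for a in np.repeat([0.0, 2.0], 30)])   # two synthetic "states"
y = np.repeat([0, 1], 30)
print(cross_val_score(SVC(), X, y, cv=5).mean())
```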

  2. Direct Extraction of Tumor Response Based on Ensemble Empirical Mode Decomposition for Image Reconstruction of Early Breast Cancer Detection by UWB.

    PubMed

    Li, Qinwei; Xiao, Xia; Wang, Liang; Song, Hang; Kono, Hayato; Liu, Peifang; Lu, Hong; Kikkawa, Takamaro

    2015-10-01

    A direct extraction method of the tumor response based on ensemble empirical mode decomposition (EEMD) is proposed for early breast cancer detection by ultra-wideband (UWB) microwave imaging. With this approach, image reconstruction for tumor detection can be realized using only signals extracted from the as-detected waveforms. The calibration process used in previous research to obtain reference waveforms, which represent signals detected from a tumor-free model, is not required. The correctness of the method is verified by successfully detecting a 4 mm tumor located inside the glandular region of one breast model and at the interface between the gland and the fat in another. The reliability of the method is checked by distinguishing a tumor buried in glandular tissue whose dielectric constant is 35. The feasibility of the method is confirmed by the correct tumor information obtained in both simulation results and experimental results for a realistic 3-D printed breast phantom.

  3. Distribution of apparent activation energy counterparts during thermo - And thermo-oxidative degradation of Aronia melanocarpa (black chokeberry).

    PubMed

    Janković, Bojan; Marinović-Cincović, Milena; Janković, Marija

    2017-09-01

    The kinetics of degradation of Aronia melanocarpa (black chokeberry) fresh fruits in argon and air atmospheres were investigated. The investigation was based on probability distributions of the apparent activation energy counterparts (εa). Isoconversional analysis results indicated that the degradation process in an inert atmosphere was governed by decomposition reactions of esterified compounds. Based on the same kinetics approach, it was assumed that in an air atmosphere the primary compounds in the degradation pathways could be anthocyanins, which undergo rapid chemical reactions. A new model of reactivity demonstrated that, under an inert atmosphere, expectation values for εa occurred at distinct levels of statistical probability. These values corresponded to decomposition processes in which polyphenolic compounds might be involved, and the εa values obeyed a binomial distribution law. It was established that, for thermo-oxidative degradation, the Poisson distribution represented a very successful approximation for εa values where there was additional mechanistic complexity and the binomial distribution was no longer valid.

  4. One-Channel Surface Electromyography Decomposition for Muscle Force Estimation.

    PubMed

    Sun, Wentao; Zhu, Jinying; Jiang, Yinlai; Yokoi, Hiroshi; Huang, Qiang

    2018-01-01

    Estimating muscle force by surface electromyography (sEMG) is a non-invasive and flexible way to diagnose biomechanical diseases and control assistive devices such as prosthetic hands. To estimate muscle force using sEMG, a supervised method is commonly adopted. This requires simultaneous recording of sEMG signals and muscle force measured by additional devices to tune the variables involved. However, recording the muscle force of the lost limb of an amputee is challenging, and the supervised method has limitations in this regard. Although the unsupervised method does not require muscle force recording, it suffers from low accuracy due to a lack of reference data. To achieve accurate and easy estimation of muscle force by the unsupervised method, we propose a decomposition of one-channel sEMG signals into constituent motor unit action potentials (MUAPs) in two steps: (1) learning an orthogonal basis of sEMG signals through reconstruction independent component analysis; (2) extracting spike-like MUAPs from the basis vectors. Nine healthy subjects were recruited to evaluate the accuracy of the proposed approach in estimating muscle force of the biceps brachii. The results demonstrated that the proposed approach based on decomposed MUAPs explains more than 80% of the muscle force variability recorded at an arbitrary force level, while the conventional amplitude-based approach explains only 62.3% of this variability. With the proposed approach, we were also able to achieve grip force control of a prosthetic hand, which is one of the most important clinical applications of the unsupervised method. Experiments on two trans-radial amputees indicated that the proposed approach improves the performance of the prosthetic hand in grasping everyday objects.

  5. 3D shape decomposition and comparison for gallbladder modeling

    NASA Astrophysics Data System (ADS)

    Huang, Weimin; Zhou, Jiayin; Liu, Jiang; Zhang, Jing; Yang, Tao; Su, Yi; Law, Gim Han; Chui, Chee Kong; Chang, Stephen

    2011-03-01

    This paper presents an approach to gallbladder shape comparison using 3D shape modeling and decomposition. The gallbladder models can be used for shape anomaly analysis and for model comparison and selection in image-guided robotic surgical training, especially for laparoscopic cholecystectomy simulation. The 3D shape of a gallbladder is first represented as a surface model, reconstructed from the contours segmented in CT data by a scheme of propagation-based voxel learning and classification. To better extract the shape features, the surface mesh is further down-sampled by a decimation filter and smoothed by the Taubin algorithm, followed by an advancing-front algorithm to further enhance the regularity of the mesh. Multi-scale curvatures are then computed on the regularized mesh for robust saliency landmark localization on the surface. The shape decomposition is based on the saliency landmarks and the concavity, measured by the distance from the surface point to the convex hull. Given a tolerance, the 3D shape can be decomposed and represented as 3D ellipsoids, which reveal the shape topology and anomaly of a gallbladder. Features based on the decomposed shape model are proposed for gallbladder shape comparison, which can be used for new model selection. We collected 19 sets of abdominal CT scan data with gallbladders, some with normal shapes and some with abnormal shapes. The experiments have shown that the decomposed shapes reveal important topology features.

  6. An efficient approach for pixel decomposition to increase the spatial resolution of land surface temperature images from MODIS thermal infrared band data.

    PubMed

    Wang, Fei; Qin, Zhihao; Li, Wenjuan; Song, Caiying; Karnieli, Arnon; Zhao, Shuhe

    2014-12-25

    Land surface temperature (LST) images retrieved from the thermal infrared (TIR) band data of the Moderate Resolution Imaging Spectroradiometer (MODIS) have much lower spatial resolution than the MODIS visible and near-infrared (VNIR) band data. The coarse pixel scale of MODIS LST images (1000 m at nadir) has limited their applicability in studies requiring high spatial resolution, in contrast to the MODIS VNIR band data with pixel scales of 250-500 m. In this paper we develop an efficient approach for pixel decomposition to increase the spatial resolution of MODIS LST images using the VNIR band data as assistance. The unique feature of this approach is that the thermal radiance of the parent pixels in the MODIS LST image remains unchanged after they are decomposed into the sub-pixels of the resulting image. There are two important steps in the decomposition: initial temperature estimation and final temperature determination. The approach can therefore be termed double-step pixel decomposition (DSPD). Both steps involve a series of procedures to achieve the final decomposed LST image, including classification of the surface patterns, establishment of the LST change with the normalized difference vegetation index (NDVI) and building index (NDBI), conversion of LST into thermal radiance through the Planck equation, and computation of weights for the sub-pixels of the resulting image. Since the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), with much higher spatial resolution than MODIS, was on board the same platform (Terra) as MODIS, an experiment was performed to validate the accuracy and efficiency of our approach. The ASTER LST image was used as the reference for comparison with the decomposed LST image. The results showed that the spatial distribution of the decomposed LST image was very similar to that of the ASTER LST image, with a root mean square error (RMSE) of 2.7 K for the entire image. Comparison with the evaluation DisTrad (E-DisTrad) and re-sampling methods for pixel decomposition also indicates that our DSPD has the lowest RMSE in all cases, including urban regions, water bodies, and natural terrain. The increase in spatial resolution substantially improves the ability of the coarse MODIS LST images to resolve the details of LST variation. It can therefore be concluded that, in spite of its complicated procedures, the proposed DSPD approach provides an alternative way to improve the spatial resolution of MODIS LST images and hence expands their applicability to the real world.
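
    The radiance-conservation step that distinguishes DSPD can be illustrated in a few lines: convert sub-pixel temperature estimates to radiance with the Planck equation, rescale so the mean matches the parent pixel's radiance, and invert back. The wavelength, the simplified single-band Planck conversion, and the hypothetical NDVI/NDBI-derived initial estimates below are illustrative stand-ins for the paper's full procedure.

```python
# Sketch of the radiance-conservation step: scale sub-pixel radiances so the
# parent MODIS pixel's mean thermal radiance is preserved, then invert Planck.
import numpy as np

H = 6.626e-34; C = 2.998e8; K = 1.381e-23
LAM = 11.0e-6                                  # ~MODIS band 31 wavelength [m]

def planck_radiance(T):
    return (2 * H * C**2 / LAM**5) / (np.exp(H * C / (LAM * K * T)) - 1.0)

def inverse_planck(L):
    return H * C / (LAM * K * np.log(2 * H * C**2 / (LAM**5 * L) + 1.0))

parent_T = 300.0                               # 1 km LST parent pixel [K]
initial_T = np.array([295.0, 298.0, 303.0, 306.0])  # hypothetical sub-pixel estimates
L_sub = planck_radiance(initial_T)
L_sub *= planck_radiance(parent_T) / L_sub.mean()   # enforce radiance conservation
final_T = inverse_planck(L_sub)
print(final_T, "mean radiance preserved:",
      np.isclose(planck_radiance(final_T).mean(), planck_radiance(parent_T)))
```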

  7. Multi-Objectivising Combinatorial Optimisation Problems by Means of Elementary Landscape Decompositions.

    PubMed

    Ceberio, Josu; Calvo, Borja; Mendiburu, Alexander; Lozano, Jose A

    2018-02-15

    In the last decade, many works in combinatorial optimisation have shown that, due to the advances in multi-objective optimisation, the algorithms from this field could be used for solving single-objective problems as well. In this sense, a number of papers have proposed multi-objectivising single-objective problems in order to use multi-objective algorithms in their optimisation. In this article, we follow up this idea by presenting a methodology for multi-objectivising combinatorial optimisation problems based on elementary landscape decompositions of their objective function. Under this framework, each of the elementary landscapes obtained from the decomposition is considered as an independent objective function to optimise. In order to illustrate this general methodology, we consider four problems from different domains: the quadratic assignment problem and the linear ordering problem (permutation domain), the 0-1 unconstrained quadratic optimisation problem (binary domain), and the frequency assignment problem (integer domain). We implemented two widely known multi-objective algorithms, NSGA-II and SPEA2, and compared their performance with that of a single-objective GA. The experiments conducted on a large benchmark of instances of the four problems show that the multi-objective algorithms clearly outperform the single-objective approaches. Furthermore, a discussion on the results suggests that the multi-objective space generated by this decomposition enhances the exploration ability, thus permitting NSGA-II and SPEA2 to obtain better results in the majority of the tested instances.

  8. Flexible Mediation Analysis With Multiple Mediators.

    PubMed

    Steen, Johan; Loeys, Tom; Moerkerke, Beatrijs; Vansteelandt, Stijn

    2017-07-15

    The advent of counterfactual-based mediation analysis has triggered enormous progress on how, and under what assumptions, one may disentangle path-specific effects upon combining arbitrary (possibly nonlinear) models for mediator and outcome. However, current developments have largely focused on single mediators because required identification assumptions prohibit simple extensions to settings with multiple mediators that may depend on one another. In this article, we propose a procedure for obtaining fine-grained decompositions that may still be recovered from observed data in such complex settings. We first show that existing analytical approaches target specific instances of a more general set of decompositions and may therefore fail to provide a comprehensive assessment of the processes that underpin cause-effect relationships between exposure and outcome. We then outline conditions for obtaining the remaining set of decompositions. Because the number of targeted decompositions increases rapidly with the number of mediators, we introduce natural effects models along with estimation methods that allow for flexible and parsimonious modeling. Our procedure can easily be implemented using off-the-shelf software and is illustrated using a reanalysis of the World Health Organization's Large Analysis and Review of European Housing and Health Status (WHO-LARES) study on the effect of mold exposure on mental health (2002-2003).

  9. A novel iterative scheme and its application to differential equations.

    PubMed

    Khan, Yasir; Naeem, F; Šmarda, Zdeněk

    2014-01-01

    The purpose of this paper is to employ an alternative approach to reconstruct the standard variational iteration algorithm II proposed by He, including the Lagrange multiplier, and to give a simpler formulation of the Adomian decomposition and modified Adomian decomposition methods in terms of the newly proposed variational iteration method-II (VIM). Through careful investigation of the earlier variational iteration algorithm and the Adomian decomposition method, we find unnecessary calculations of the Lagrange multiplier in the former and repeated calculations in each iteration of the latter. Several examples are given to verify the reliability and efficiency of the method.
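
    For context, the correction functional of the standard variational iteration method, which the paper's reformulation simplifies, has the following general form (textbook notation, not an equation quoted from the paper: L a linear operator, N a nonlinear operator, g the source term, and the tilde marking the restricted variation):

```latex
% General form of the VIM correction functional (standard notation):
% \lambda is the Lagrange multiplier, \tilde{u}_n the restricted variation.
u_{n+1}(t) = u_n(t) + \int_0^{t} \lambda(s)\left[ L u_n(s) + N \tilde{u}_n(s) - g(s) \right] ds
```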

  10. Hidden discriminative features extraction for supervised high-order time series modeling.

    PubMed

    Nguyen, Ngoc Anh Thi; Yang, Hyung-Jeong; Kim, Sunhee

    2016-11-01

    In this paper, an orthogonal Tucker-decomposition-based extraction of high-order discriminative subspaces from a tensor-based time series data structure is presented, named Tensor Discriminative Feature Extraction (TDFE). TDFE relies on the employment of category information for the maximization of the between-class scatter and the minimization of the within-class scatter to extract optimal hidden discriminative feature subspaces that are simultaneously spanned by every modality for supervised tensor modeling. In this context, the proposed tensor-decomposition method provides the following benefits: i) it reduces dimensionality while robustly mining the underlying discriminative features; ii) it results in effective, interpretable features that lead to improved classification and visualization; and iii) it reduces the processing time during the training stage and the filtering of the projection by solving the generalized eigenvalue problem at each alternation step. Two real third-order tensor structures of time series datasets (an epilepsy electroencephalogram (EEG) modeled as channel×frequency bin×time frame, and microarray data modeled as gene×sample×time) were used for the evaluation of TDFE. The experimental results corroborate the advantages of the proposed method, with averages of 98.26% and 89.63% for the classification accuracies of the epilepsy dataset and the microarray dataset, respectively. These performance averages represent an improvement over those of matrix-based algorithms and recent tensor-based discriminant-decomposition approaches; this is especially the case considering the small number of samples used in practice.
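
    The underlying operation, an orthogonal Tucker decomposition of a third-order tensor into a core and per-mode factor matrices, can be sketched with the tensorly library as below; the tensor dimensions and ranks are arbitrary placeholders, and the supervised scatter-based optimization that defines TDFE itself is not reproduced.

```python
# Sketch: Tucker decomposition of a third-order time-series tensor
# (channel x frequency x time) into a core tensor and factor matrices.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

X = tl.tensor(np.random.default_rng(5).standard_normal((16, 32, 100)))
core, factors = tucker(X, rank=[4, 6, 10])       # per-mode subspace dimensions
print("core:", core.shape, "factors:", [f.shape for f in factors])
X_hat = tl.tucker_to_tensor((core, factors))     # reconstruction from subspaces
print("relative error:", float(tl.norm(X - X_hat) / tl.norm(X)))
```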

  11. Enhance the Quality of Crowdsensing for Fine-Grained Urban Environment Monitoring via Data Correlation

    PubMed Central

    Kang, Xu; Liu, Liang; Ma, Huadong

    2017-01-01

    Monitoring the status of urban environments, which provides fundamental information for a city, yields crucial insights into various fields of urban research. Recently, with the popularity of smartphones and vehicles equipped with onboard sensors, a people-centric scheme, namely “crowdsensing”, for city-scale environment monitoring is emerging. This paper proposes a data correlation based crowdsensing approach for fine-grained urban environment monitoring. To demonstrate urban status, we generate sensing images via crowdsensing network, and then enhance the quality of sensing images via data correlation. Specifically, to achieve a higher quality of sensing images, we not only utilize temporal correlation of mobile sensing nodes but also fuse the sensory data with correlated environment data by introducing a collective tensor decomposition approach. Finally, we conduct a series of numerical simulations and a real dataset based case study. The results validate that our approach outperforms the traditional spatial interpolation-based method. PMID:28054968

  12. Contrasting effects of plant species traits and moisture on the decomposition of multiple litter fractions.

    PubMed

    Riggs, Charlotte E; Hobbie, Sarah E; Cavender-Bares, Jeannine; Savage, Jessica A; Wei, Xiaojing

    2015-10-01

    Environmental variation in moisture directly influences plant litter decomposition through effects on microbial activity, and indirectly via plant species traits. Whether the effects of moisture and plant species traits are mutually reinforcing or counteracting during decomposition is unknown. To disentangle the effects of moisture from the effects of species traits that vary with moisture, we decomposed leaf litter from 12 plant species in the willow family (Salicaceae) with different native-habitat moisture preferences in paired mesic and wetland plots. We fit litter mass loss data to an exponential decomposition model and estimated the decay rate of the rapidly cycling litter fraction and the size of the remaining fraction that decays at a rate approaching zero. Litter traits that covaried with moisture in the species' native habitat significantly influenced the decomposition rate of the rapidly cycling litter fraction, but moisture in the decomposition environment did not. In contrast, for the slowly cycling litter fraction, litter traits that did not covary with moisture in the species' native habitat and moisture in the decomposition environment were significant. Overall, the effects of moisture and plant species traits on litter decomposition were somewhat reinforcing along a hydrologic gradient that spanned mesic upland to wetland (but not permanently surface-saturated) plots. In this system, plant trait and moisture effects may lead to greater in situ decomposition rates for wetland species compared to upland species; however, plant traits that do not covary with moisture will also influence decomposition of the slowest cycling litter fraction.
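
    A minimal sketch of fitting the asymptotic (two-fraction) exponential model described above, with k the decay rate of the rapidly cycling fraction and A the slowly cycling fraction; the mass-loss numbers below are synthetic illustrations, not data from the study.

```python
# Sketch: fit m(t) = (1 - A) * exp(-k * t) + A to litter mass-remaining data,
# where A is the slowly cycling fraction and k the fast-fraction decay rate.
import numpy as np
from scipy.optimize import curve_fit

def asymptotic_decay(t, k, A):
    return (1.0 - A) * np.exp(-k * t) + A

t = np.array([0, 30, 60, 120, 240, 365])              # days in the field
mass = np.array([1.0, 0.80, 0.66, 0.52, 0.43, 0.41])  # fraction of mass remaining
(k, A), _ = curve_fit(asymptotic_decay, t, mass, p0=(0.01, 0.3), bounds=(0, [1, 1]))
print(f"fast-fraction decay rate k = {k:.4f} 1/day, slow fraction A = {A:.2f}")
```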

  13. ENVIRONMENTAL ASSESSMENT OF THE BASE CATALYZED DECOMPOSITION (BCD) PROCESS

    EPA Science Inventory

    This report summarizes laboratory-scale, pilot-scale, and field performance data on the BCD (Base Catalyzed Decomposition) technology, collected to date by various governmental, academic, and private organizations.

  14. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array

    PubMed Central

    Gao, Yu-Fei; Gui, Guan; Xie, Wei; Zou, Yan-Bin; Yang, Yue; Wan, Qun

    2017-01-01

    This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1,L2,·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method. PMID:28448431

  15. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array.

    PubMed

    Gao, Yu-Fei; Gui, Guan; Xie, Wei; Zou, Yan-Bin; Yang, Yue; Wan, Qun

    2017-04-27

    This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1,L2,·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method.

  16. Classification of fully polarimetric F-SAR ( X / S ) airborne radar images using decomposition methods. (Polish Title: Klasyfikacja treści polarymetrycznych obrazów radarowych z wykorzystaniem metod dekompozycji na przykładzie systemu F-SAR ( X / S ))

    NASA Astrophysics Data System (ADS)

    Mleczko, M.

    2014-12-01

    Polarimetric SAR data are not widely used in practice because they are not yet operationally available from satellites. Currently, two approaches can be distinguished in Pol-In-SAR technology: alternating-polarization imaging (Alt-POL) and fully polarimetric (QuadPol) imaging. The first is a subset of the second and is more operational, while the second is experimental, because classification of such data requires a polarimetric decomposition of the scattering matrix as a first stage. In the literature, decomposition methods are divided into two types: coherent and incoherent decompositions. In this paper, the decomposition methods have been tested using data from the high-resolution airborne F-SAR system. Classification results are interpreted in the context of land cover mapping capabilities.
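
    As an example of the coherent type, the Pauli decomposition of a quad-pol scattering matrix takes only a few lines; the single-pixel scattering matrix below is an arbitrary illustration, and real F-SAR processing would add calibration, speckle filtering, and the classification stage.

```python
# Sketch of a coherent (Pauli) decomposition of one quad-pol scattering matrix
# into odd-bounce, even-bounce, and volume (cross-pol) components.
import numpy as np

S = np.array([[0.8 + 0.1j, 0.05 + 0.02j],     # [[S_HH, S_HV],
              [0.05 + 0.02j, 0.6 - 0.2j]])    #  [S_VH, S_VV]] (reciprocal medium)

alpha = (S[0, 0] + S[1, 1]) / np.sqrt(2)      # odd-bounce (surface) component
beta = (S[0, 0] - S[1, 1]) / np.sqrt(2)       # even-bounce (double-bounce)
gamma = np.sqrt(2) * S[0, 1]                  # volume (cross-pol) component

print("Pauli RGB intensities:", np.abs(beta)**2, np.abs(gamma)**2, np.abs(alpha)**2)
```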

  17. An Intelligent Pattern Recognition System Based on Neural Network and Wavelet Decomposition for Interpretation of Heart Sounds

    DTIC Science & Technology

    2001-10-25

    Wavelet decomposition of signals and classification using a neural network; inputs to the system are the heart sound signals acquired by a stethoscope. (Only fragments of this DTIC record are available; the remainder is garbled reference entries and a repeat of the title.)

  18. The Utility of Decomposition and Associated Microbial Parameters to Assess Changes in Stream Ecosystems due to Eutrophication

    NASA Astrophysics Data System (ADS)

    Gulis, V.; Ferreira, V. J.; Graca, M. A.

    2005-05-01

    Traditional approaches to assessing stream ecosystem health rely on structural parameters, e.g., a variety of biotic indices. The goal of the Europe-wide RivFunction project is to develop methodology that uses functional parameters (e.g., plant litter decomposition) to this end. Here we report on decomposition experiments carried out in Portugal in five pairs of streams that differed in dissolved inorganic nutrients. On average, decomposition rates of alder and oak leaves were 2.8 and 1.4 times higher in high-nutrient streams, in coarse and fine mesh bags respectively, than in the corresponding reference streams. Breakdown rate correlated better with stream water SRP concentration than with TIN. Fungal biomass and sporulation rates of aquatic hyphomycetes associated with decomposing leaves were stimulated by higher nutrient levels. Both fungal parameters measured at very early stages of decomposition (e.g., days 7-13) correlated well with overall decomposition rates. Eutrophication had no significant effect on shredder abundances in leaf bags, but species richness was higher in disturbed streams. Decomposition is a key functional parameter in streams, integrating many other variables, and can be useful in assessing stream ecosystem health. We also argue that, because decomposition is often controlled by fungal activity, microbial parameters can be useful in bioassessment.

  19. Electrochemical and Infrared Absorption Spectroscopy Detection of SF6 Decomposition Products

    PubMed Central

    Dong, Ming; Ren, Ming; Ye, Rixin

    2017-01-01

    Sulfur hexafluoride (SF6) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF6 decomposition and ultimately generates several types of decomposition products. These SF6 decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF6 decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF6 gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. A SF6 decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF6 gas decomposition and is verified to reliably and accurately detect the gas components and concentrations. PMID:29140268

  20. Kinetics of the cellular decomposition of supersaturated solid solutions

    NASA Astrophysics Data System (ADS)

    Ivanov, M. A.; Naumuk, A. Yu.

    2014-09-01

    A consistent description is given of the kinetics of the cellular decomposition of supersaturated solid solutions with the development of a spatially periodic structure of lamellar (platelike) type, consisting of alternating lamellae of a precipitate phase based on the impurity component and of the depleted initial solid solution. One of the equations, which determines the relationship between the parameters describing the decomposition process, has been obtained from a comparison of two approaches to determining the rate of change of the free energy of the system. The other kinetic parameters can be described with the use of a variational method, namely, by the maximum velocity of motion of the decomposition boundary at a given temperature. It is shown that the mutual directions of growth of the lamellae of the different phases are determined by the minimum value of the interphase surface energy. To determine the parameters of the decomposition, a simple thermodynamic model of states with a parabolic dependence of the free energy on the concentrations has been used. As a result, expressions have been derived that describe the decomposition rate, the interlamellar distance, and the concentration of impurities in the phase that remains after the decomposition. This concentration proves to be equal to the half-sum of the initial concentration and the equilibrium concentration corresponding to the decomposition temperature.
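
    In symbols, the closing result reads as follows (notation assumed: c_0 the initial concentration, c_eq(T) the equilibrium concentration at the decomposition temperature T):

```latex
% The abstract's closing half-sum result (notation assumed):
c_{\mathrm{res}} = \frac{c_0 + c_{\mathrm{eq}}(T)}{2}
```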

  1. Application of an improved spectral decomposition method to examine earthquake source scaling in Southern California

    NASA Astrophysics Data System (ADS)

    Trugman, Daniel T.; Shearer, Peter M.

    2017-04-01

    Earthquake source spectra contain fundamental information about the dynamics of earthquake rupture. However, the inherent tradeoffs in separating source and path effects, when combined with limitations in recorded signal bandwidth, make it challenging to obtain reliable source spectral estimates for large earthquake data sets. We present here a stable and statistically robust spectral decomposition method that iteratively partitions the observed waveform spectra into source, receiver, and path terms. Unlike previous methods of its kind, our new approach provides formal uncertainty estimates and does not assume self-similar scaling in earthquake source properties. Its computational efficiency allows us to examine large data sets (tens of thousands of earthquakes) that would be impractical to analyze using standard empirical Green's function-based approaches. We apply the spectral decomposition technique to P wave spectra from five areas of active contemporary seismicity in Southern California: the Yuha Desert, the San Jacinto Fault, and the Big Bear, Landers, and Hector Mine regions of the Mojave Desert. We show that the source spectra are generally consistent with an increase in median Brune-type stress drop with seismic moment but that this observed deviation from self-similar scaling is both model dependent and varies in strength from region to region. We also present evidence for significant variations in median stress drop and stress drop variability on regional and local length scales. These results both contribute to our current understanding of earthquake source physics and have practical implications for the next generation of ground motion prediction assessments.
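
    The core idea of iteratively partitioning log spectra into source, receiver, and path terms can be sketched with alternating averages; the toy below keeps only source and receiver terms (no travel-time path binning, robust statistics, or uncertainty estimates), so it illustrates the structure rather than the paper's method.

```python
# Sketch: alternately average residuals over each index to partition observed
# log spectra into per-source and per-receiver terms.
import numpy as np

rng = np.random.default_rng(1)
n_src, n_rec, n_f = 40, 15, 64
true_src = rng.normal(0, 1.0, (n_src, 1, n_f))
true_rec = rng.normal(0, 0.5, (1, n_rec, n_f))
obs = true_src + true_rec + rng.normal(0, 0.1, (n_src, n_rec, n_f))  # log spectra

src = np.zeros((n_src, 1, n_f)); rec = np.zeros((1, n_rec, n_f))
for _ in range(20):                      # alternate updates until stable
    src = (obs - rec).mean(axis=1, keepdims=True)
    rec = (obs - src).mean(axis=0, keepdims=True)

resid = obs - src - rec
print("RMS residual:", np.sqrt((resid ** 2).mean()))
```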

  2. Algorithms for Spectral Decomposition with Applications to Optical Plume Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Matthews, Bryan; Das, Santanu

    2008-01-01

    The analysis of spectral signals for features that represent physical phenomenon is ubiquitous in the science and engineering communities. There are two main approaches that can be taken to extract relevant features from these high-dimensional data streams. The first set of approaches relies on extracting features using a physics-based paradigm where the underlying physical mechanism that generates the spectra is used to infer the most important features in the data stream. We focus on a complementary methodology that uses a data-driven technique that is informed by the underlying physics but also has the ability to adapt to unmodeled system attributes and dynamics. We discuss the following four algorithms: Spectral Decomposition Algorithm (SDA), Non-Negative Matrix Factorization (NMF), Independent Component Analysis (ICA) and Principal Components Analysis (PCA) and compare their performance on a spectral emulator which we use to generate artificial data with known statistical properties. This spectral emulator mimics the real-world phenomena arising from the plume of the space shuttle main engine and can be used to validate the results that arise from various spectral decomposition algorithms and is very useful for situations where real-world systems have very low probabilities of fault or failure. Our results indicate that methods like SDA and NMF provide a straightforward way of incorporating prior physical knowledge while NMF with a tuning mechanism can give superior performance on some tests. We demonstrate these algorithms to detect potential system-health issues on data from a spectral emulator with tunable health parameters.
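
    Of the four algorithms, NMF is the most compact to demonstrate: it factors a non-negative matrix of observed spectra into additive components. The sketch below runs scikit-learn's NMF on synthetic two-line spectra; the Gaussian line shapes, mixing weights, and rank are arbitrary test inputs, not emulator data.

```python
# Sketch: non-negative matrix factorization of synthetic emission spectra
# into additive spectral components and mixing weights.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
wav = np.linspace(300, 800, 256)
lines = np.vstack([np.exp(-0.5 * ((wav - c) / 8.0) ** 2) for c in (420.0, 610.0)])
weights = rng.random((200, 2))                       # 200 observed spectra
X = weights @ lines + 0.01 * rng.random((200, 256))  # non-negative mixtures

model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)                           # recovered mixing weights
print("reconstruction error:", model.reconstruction_err_)
```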

  3. Tensorial extensions of independent component analysis for multisubject FMRI analysis.

    PubMed

    Beckmann, C F; Smith, S M

    2005-03-01

    We discuss model-free analysis of multisubject or multisession FMRI data by extending the single-session probabilistic independent component analysis model (PICA; Beckmann and Smith, 2004. IEEE Trans. on Medical Imaging, 23 (2) 137-152) to higher dimensions. This results in a three-way decomposition that represents the different signals and artefacts present in the data in terms of their temporal, spatial, and subject-dependent variations. The technique is derived from and compared with parallel factor analysis (PARAFAC; Harshman and Lundy, 1984. In Research methods for multimode data analysis, chapter 5, pages 122-215. Praeger, New York). Using simulated data as well as data from multisession and multisubject FMRI studies we demonstrate that the tensor PICA approach is able to efficiently and accurately extract signals of interest in the spatial, temporal, and subject/session domain. The final decompositions improve upon PARAFAC results in terms of greater accuracy, reduced interference between the different estimated sources (reduced cross-talk), robustness (against deviations of the data from modeling assumptions and against overfitting), and computational speed. On real FMRI 'activation' data, the tensor PICA approach is able to extract plausible activation maps, time courses, and session/subject modes as well as provide a rich description of additional processes of interest such as image artefacts or secondary activation patterns. The resulting data decomposition gives simple and useful representations of multisubject/multisession FMRI data that can aid the interpretation and optimization of group FMRI studies beyond what can be achieved using model-based analysis techniques.

  4. An ordinal classification approach for CTG categorization.

    PubMed

    Georgoulas, George; Karvelis, Petros; Gavrilis, Dimitris; Stylios, Chrysostomos D; Nikolakopoulos, George

    2017-07-01

    Evaluation of the cardiotocogram (CTG) is a standard approach employed during pregnancy and delivery. However, its interpretation requires high-level expertise to decide whether the recording is Normal, Suspicious, or Pathological. Therefore, a number of attempts have been made over the past three decades to develop automated, sophisticated systems. These systems are usually (multiclass) classification systems that assign a category to the respective CTG. However, most of these systems do not take into consideration the natural ordering of the categories associated with CTG recordings. In this work, an algorithm that explicitly takes into consideration the ordering of CTG categories, based on a binary decomposition method, is investigated. Results obtained using the C4.5 decision tree as the base classifier show that the ordinal classification approach is marginally better than the traditional multiclass classification approach, which utilizes the standard C4.5 algorithm, for several performance criteria.
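
    A common way to realize such an ordinal binary decomposition (in the style of Frank and Hall) is one binary model per threshold 'y > k', with class probabilities recovered by differencing. The sketch below uses scikit-learn decision trees as a stand-in for C4.5 and random placeholder data rather than CTG features.

```python
# Sketch of ordinal classification by binary decomposition: one classifier per
# ordered threshold, probabilities combined by successive differences.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class OrdinalClassifier:
    def fit(self, X, y):
        self.classes_ = np.sort(np.unique(y))
        self.models_ = [DecisionTreeClassifier(random_state=0).fit(X, (y > k).astype(int))
                        for k in self.classes_[:-1]]
        return self

    def predict(self, X):
        # P(y > k) for each threshold; P(y = c) by successive differences.
        pg = np.column_stack([m.predict_proba(X)[:, 1] for m in self.models_])
        probs = np.column_stack([1 - pg[:, 0],
                                 *(pg[:, i] - pg[:, i + 1] for i in range(pg.shape[1] - 1)),
                                 pg[:, -1]])
        return self.classes_[probs.argmax(axis=1)]

rng = np.random.default_rng(3)
X, y = rng.normal(size=(300, 5)), rng.integers(0, 3, 300)   # 3 ordered classes
print(OrdinalClassifier().fit(X, y).predict(X[:5]))
```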

  5. Utilizing the Structure and Content Information for XML Document Clustering

    NASA Astrophysics Data System (ADS)

    Tran, Tien; Kutty, Sangeetha; Nayak, Richi

    This paper reports on the experiments and results of a clustering approach used in the INEX 2008 document mining challenge. The clustering approach utilizes both the structure and content information of the Wikipedia XML document collection. A latent semantic kernel (LSK) is used to measure the semantic similarity between XML documents based on their content features. The construction of a latent semantic kernel involves computing the singular value decomposition (SVD). On a large feature-space matrix, the computation of the SVD is very expensive in terms of time and memory requirements. Thus, in this clustering approach, the dimension of the document space of the term-document matrix is reduced before performing the SVD. The document-space reduction is based on the common structural information of the Wikipedia XML document collection. The proposed clustering approach has been shown to be effective on the Wikipedia collection in the INEX 2008 document mining challenge.

  6. Simulation of Decomposition Kinetics of Supercooled Austenite in Powder Steel

    NASA Astrophysics Data System (ADS)

    Tsyganova, M. S.; Ivashko, A. G.; Polyshuk, I. N.; Nabatov, R. I.; Tsyganova, A. I.

    2017-10-01

    To validate heat-treatment modes for steel, quantitative data on austenite decomposition are required, and obtaining these data experimentally is extremely complicated. In the present work, several approaches to simulating the phase transformation process are proposed that take into account the structural characteristics of powder steels. Results of a comparative analysis of these approaches are also given. Simulation-based prediction of the transformation kinetics is verified against published experimental data for PK40N2M (0.38% C, 2.10% Ni, 0.40% Mo) steel with 3% porosity and PK80 (0.80% C) steel with different porosities.

  7. Singular value decomposition approach to the yttrium occurrence in mineral maps of rare earth element ores using laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Romppanen, Sari; Häkkänen, Heikki; Kaski, Saara

    2017-08-01

    Laser-induced breakdown spectroscopy (LIBS) has been used in the analysis of rare earth element (REE) ores from the geological formation of the Norra Kärr Alkaline Complex in southern Sweden. Yttrium has been detected in eudialyte (Na15Ca6(Fe,Mn)3Zr3Si(Si25O73)(O,OH,H2O)3(OH,Cl)2) and catapleiite (Ca/Na2ZrSi3O9·2H2O). Singular value decomposition (SVD) has been employed in the classification of the minerals in the rock samples, and maps representing the mineralogy in the sampled area have been constructed. Based on the SVD classification, the percentage of the yttrium-bearing ore minerals can be calculated even in fine-grained rock samples.

  8. Validation of Heat Transfer Thermal Decomposition and Container Pressurization of Polyurethane Foam.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, Sarah Nicole; Dodd, Amanda B.; Larsen, Marvin E.

    Polymer foam encapsulants provide mechanical, electrical, and thermal isolation in engineered systems. In fire environments, gas pressure from thermal decomposition of polymers can cause mechanical failure of sealed systems. In this work, a detailed uncertainty quantification study of PMDI-based polyurethane foam is presented to assess the validity of the computational model. Both experimental measurement uncertainty and model prediction uncertainty are examined and compared. Both the mean value method and the Latin hypercube sampling approach are used to propagate the uncertainty through the model. In addition to comparing computational and experimental results, the importance of each input parameter on the simulation result is also investigated. These results show that further development of the physics model of the foam and appropriate associated material testing are necessary to improve model accuracy.
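
    The Latin hypercube propagation step can be sketched briefly: draw stratified samples on the unit cube, scale them to parameter ranges, and push them through the model. The three parameters, their ranges, and the algebraic response function below are invented placeholders, not the foam decomposition/pressurization model.

```python
# Sketch: propagate input uncertainty with Latin hypercube sampling.
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=256)
# Scale to assumed parameter ranges: conductivity, heat of reaction, porosity.
lo, hi = np.array([0.02, 1.0e5, 0.6]), np.array([0.06, 3.0e5, 0.9])
params = qmc.scale(unit, lo, hi)

def pressure_model(k, dh, phi):          # placeholder response surface
    return 1.0e5 + dh * (1 - phi) * np.exp(-5.0 * k)

p = pressure_model(*params.T)
print(f"mean = {p.mean():.3e} Pa, std = {p.std():.3e} Pa")
```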

  9. Towards Interactive Construction of Topical Hierarchy: A Recursive Tensor Decomposition Approach

    PubMed Central

    Wang, Chi; Liu, Xueqing; Song, Yanglei; Han, Jiawei

    2015-01-01

    Automatic construction of user-desired topical hierarchies over large volumes of text data is a highly desirable but challenging task. This study proposes to give users freedom to construct topical hierarchies via interactive operations such as expanding a branch and merging several branches. Existing hierarchical topic modeling techniques are inadequate for this purpose because (1) they cannot consistently preserve the topics when the hierarchy structure is modified; and (2) the slow inference prevents swift response to user requests. In this study, we propose a novel method, called STROD, that allows efficient and consistent modification of topic hierarchies, based on a recursive generative model and a scalable tensor decomposition inference algorithm with theoretical performance guarantee. Empirical evaluation shows that STROD reduces the runtime of construction by several orders of magnitude, while generating consistent and quality hierarchies. PMID:26705505

  10. Wavelet decomposition and radial basis function networks for system monitoring

    NASA Astrophysics Data System (ADS)

    Ikonomopoulos, A.; Endou, A.

    1998-10-01

    Two approaches are coupled to develop a novel collection of black box models for monitoring operational parameters in a complex system. The idea springs from the intention of obtaining multiple predictions for each system variable and fusing them before they are used to validate the actual measurement. The proposed architecture pairs the analytical abilities of the discrete wavelet decomposition with the computational power of radial basis function networks. Members of a wavelet family are constructed in a systematic way and chosen through a statistical selection criterion that optimizes the structure of the network. Network parameters are further optimized through a quasi-Newton algorithm. The methodology is demonstrated utilizing data obtained during two transients of the Monju fast breeder reactor. The models developed are benchmarked with respect to similar regressors based on Gaussian basis functions.
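
    A minimal sketch of the pairing described above: a discrete wavelet decomposition (PyWavelets) supplies sub-band features, and a small Gaussian radial basis function regressor consumes them. The test signal, wavelet choice, and the one-shot RBF implementation are illustrative assumptions, not the Monju models.

```python
# Sketch: wavelet sub-band energies as inputs to a minimal Gaussian RBF model.
import numpy as np
import pywt

t = np.linspace(0, 1, 512)
signal = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)
coeffs = pywt.wavedec(signal, "db4", level=4)           # [cA4, cD4, cD3, cD2, cD1]
features = np.array([np.sum(c ** 2) for c in coeffs])   # sub-band energies

def rbf_fit_predict(X, y, Xq, width=1.0):
    """One-shot Gaussian RBF regression with centers at the training points."""
    Phi = np.exp(-np.square(X[:, None] - X[None, :]) / width**2)
    w = np.linalg.solve(Phi + 1e-8 * np.eye(len(X)), y)
    return np.exp(-np.square(Xq[:, None] - X[None, :]) / width**2) @ w

X = np.arange(len(features), dtype=float)
print(rbf_fit_predict(X, features, X))                  # reproduces the energies
```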

  11. Towards Interactive Construction of Topical Hierarchy: A Recursive Tensor Decomposition Approach.

    PubMed

    Wang, Chi; Liu, Xueqing; Song, Yanglei; Han, Jiawei

    2015-08-01

    Automatic construction of user-desired topical hierarchies over large volumes of text data is a highly desirable but challenging task. This study proposes to give users freedom to construct topical hierarchies via interactive operations such as expanding a branch and merging several branches. Existing hierarchical topic modeling techniques are inadequate for this purpose because (1) they cannot consistently preserve the topics when the hierarchy structure is modified; and (2) the slow inference prevents swift response to user requests. In this study, we propose a novel method, called STROD, that allows efficient and consistent modification of topic hierarchies, based on a recursive generative model and a scalable tensor decomposition inference algorithm with theoretical performance guarantee. Empirical evaluation shows that STROD reduces the runtime of construction by several orders of magnitude, while generating consistent and quality hierarchies.

  12. Acidic attack of perfluorinated alkyl ether lubricant molecules by metal oxide surfaces

    NASA Technical Reports Server (NTRS)

    Zehe, Michael J.; Faut, Owen D.

    1990-01-01

    The reactions of linear perfluoropolyalkylether (PFAE) lubricants with alpha-Fe2O3 and Fe2O3-based solid superacids were studied. The reaction with alpha-Fe2O3 proceeds in two stages. The first stage is an initial slow catalytic decomposition of the fluid. This reaction releases reactive gaseous products which attack the metal oxide and convert it to FeF3. The second stage is a more rapid decomposition of the fluid, effected by the surface FeF3. A study of the initial breakdown step was performed using alpha-Fe2O3, alpha-Fe2O3 preconverted to FeF3, and sulfate-promoted alpha-Fe2O3 superacids. The results indicate that the breakdown reaction involves acidic attack at fluorine atoms on acetal carbons in the linear PFAE. Possible approaches to combat the problem are outlined.

  13. The decomposition of deformation: New metrics to enhance shape analysis in medical imaging.

    PubMed

    Varano, Valerio; Piras, Paolo; Gabriele, Stefano; Teresi, Luciano; Nardinocchi, Paola; Dryden, Ian L; Torromeo, Concetta; Puddu, Paolo E

    2018-05-01

    In landmark-based shape analysis, size is measured, in most cases, by Centroid Size. Changes in shape are decomposed into affine and non-affine components. Furthermore, the non-affine component can in turn be decomposed into a series of local deformations (partial warps). If the extent of deformation between two shapes is small, the difference between Centroid Size and m-Volume increment is barely appreciable. In medical imaging applied to soft tissues, bodies can undergo very large deformations involving large changes in size. The cardiac example analyzed in the present paper shows changes in m-Volume that can reach 60%. We show here that standard Geometric Morphometrics tools (landmarks, the Thin Plate Spline, and the related decomposition of the deformation) can be generalized to better describe the very large deformations of biological tissues, without losing a synthetic description. In particular, the classical decomposition of the space tangent to the shape space into affine and non-affine components is enriched to also include the change in size, in order to give a complete description of the tangent space to the size-and-shape space. The proposed generalization is formulated by means of a new Riemannian metric describing the change in size as a change in m-Volume rather than a change in Centroid Size. This leads to a redefinition of some aspects of Kendall's size-and-shape space without losing Kendall's original formulation. The new formulation is discussed by means of simulated examples using 2D and 3D platonic shapes as well as a real example from clinical 3D echocardiographic data. We demonstrate that our decomposition-based approaches discriminate very effectively between healthy subjects and patients affected by Hypertrophic Cardiomyopathy.

  14. Steganography based on pixel intensity value decomposition

    NASA Astrophysics Data System (ADS)

    Abdulla, Alan Anwar; Sellahewa, Harin; Jassim, Sabah A.

    2014-05-01

    This paper focuses on steganography based on pixel intensity value decomposition. A number of existing schemes, such as binary, Fibonacci, Prime, Natural, Lucas, and Catalan-Fibonacci (CF), are evaluated in terms of payload capacity and stego quality. A new technique based on a specific representation is proposed to decompose pixel intensity values into 16 (virtual) bit-planes suitable for embedding purposes. The proposed decomposition has a desirable property whereby the sum of all bit-planes does not exceed the maximum pixel intensity value, i.e., 255. Experimental results demonstrate that the proposed technique offers an effective compromise between the payload capacity and stego quality of existing embedding techniques based on pixel intensity value decomposition. Its capacity is equal to that of binary and Lucas, while it offers a higher capacity than Fibonacci, Prime, Natural, and CF when the secret bits are embedded in the 1st least significant bit (LSB). When the secret bits are embedded in higher bit-planes, i.e., the 2nd LSB to the 8th most significant bit (MSB), the proposed scheme has more capacity than Natural-number-based embedding. However, from the 6th bit-plane onwards, the proposed scheme offers better stego quality. In general, the proposed decomposition scheme has less effect on pixel values, and hence on quality, than most existing pixel intensity value decomposition techniques when embedding messages in higher bit-planes.
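
    To make the idea of intensity-value decomposition concrete, here is a sketch of one of the compared schemes, the Fibonacci (Zeckendorf) decomposition of a pixel value into virtual bit-planes, with a secret bit embedded in the lowest plane; it illustrates the mechanism only and is not the paper's proposed 16-plane representation.

```python
# Sketch of Fibonacci (Zeckendorf) pixel-value decomposition: each pixel maps
# to virtual bit-planes with Fibonacci weights; embedding flips low-order planes.
def fib_weights(limit=255):
    w = [1, 2]
    while w[-1] + w[-2] <= limit:
        w.append(w[-1] + w[-2])
    return w[::-1]                      # descending weights

def fib_decompose(value, weights=fib_weights()):
    bits = []
    for w in weights:                   # greedy Zeckendorf: no two adjacent 1s
        bits.append(1 if w <= value else 0)
        value -= w * bits[-1]
    return bits

def fib_compose(bits, weights=fib_weights()):
    return sum(b * w for b, w in zip(bits, weights))

pixel = 200
planes = fib_decompose(pixel)
planes[-1] = 1                          # embed a secret bit in the 1st LSB plane
print(planes, "->", fib_compose(planes))
```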

  15. A comparison of reduced-order modelling techniques for application in hyperthermia control and estimation.

    PubMed

    Bailey, E A; Dutton, A W; Mattingly, M; Devasia, S; Roemer, R B

    1998-01-01

    Reduced-order modelling techniques can make important contributions in the control and state estimation of large systems. In hyperthermia, reduced-order modelling can provide a useful tool by which a large thermal model can be reduced to the most significant subset of its full-order modes, making real-time control and estimation possible. Two such reduction methods, one based on modal decomposition and the other on balanced realization, are compared in the context of simulated hyperthermia heat transfer problems. The results show that the modal decomposition reduction method has three significant advantages over that of balanced realization. First, modal decomposition reduced models result in less error, when compared to the full-order model, than balanced realization reduced models of similar order in problems with low or moderate advective heat transfer. Second, because the balanced realization based methods require a priori knowledge of the sensor and actuator placements, the reduced-order model is not robust to changes in sensor or actuator locations, a limitation not present in modal decomposition. Third, the modal decomposition transformation is less demanding computationally. On the other hand, in thermal problems dominated by advective heat transfer, numerical instabilities make modal decomposition based reduction problematic. Modal decomposition methods are therefore recommended for reduction of models in which advection is not dominant and research continues into methods to render balanced realization based reduction more suitable for real-time clinical hyperthermia control and estimation.

  16. Assessment of a new method for the analysis of decomposition gases of polymers by combining thermogravimetric solid-phase extraction and thermal desorption gas chromatography mass spectrometry.

    PubMed

    Duemichen, E; Braun, U; Senz, R; Fabian, G; Sturm, H

    2014-08-08

    For analysis of the gaseous thermal decomposition products of polymers, the common techniques are thermogravimetry combined with Fourier transform infrared spectroscopy (TGA-FTIR) or mass spectrometry (TGA-MS). These methods offer a simple approach to the decomposition mechanism, especially for small decomposition molecules. Complex spectra of gaseous mixtures are very often hard to identify because of overlapping signals. In this paper a new method is described to adsorb the decomposition products, under controlled conditions in TGA, on solid-phase extraction (SPE) material (Twisters). Subsequently the Twisters were analysed by thermal desorption gas chromatography mass spectrometry (TDS-GC-MS), which allows the decomposition products to be separated and identified using an MS library. The thermoplastics polyamide 66 (PA 66) and polybutylene terephthalate (PBT) were used as example polymers. The influence of the sample mass and of the purge gas flow during the decomposition process was investigated in TGA. The advantages and limitations of the method are presented in comparison to the common analysis techniques, TGA-FTIR and TGA-MS. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Recognizing of stereotypic patterns in epileptic EEG using empirical modes and wavelets

    NASA Astrophysics Data System (ADS)

    Grubov, V. V.; Sitnikova, E.; Pavlov, A. N.; Koronovskii, A. A.; Hramov, A. E.

    2017-11-01

    Epileptic activity in the form of spike-wave discharges (SWD) appears in the electroencephalogram (EEG) during absence seizures. This paper evaluates two approaches for detecting stereotypic rhythmic activities in EEG: the continuous wavelet transform (CWT) and empirical mode decomposition (EMD). The CWT is a well-known method of time-frequency analysis of EEG, whereas EMD is a relatively novel approach for extracting a signal's waveforms. A new method for pattern recognition based on the combination of CWT and EMD is proposed. This combined approach achieved a sensitivity of 86.5% and a specificity of 92.9% for sleep spindles, and 97.6% and 93.2%, respectively, for SWD. Considering the strong within- and between-subject variability of sleep spindles, the obtained detection efficiency is high in comparison with other CWT-based methods. It is concluded that the combination of a wavelet-based approach and empirical modes increases the quality of automatic detection of stereotypic patterns in rat EEG.
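
    A minimal sketch of this kind of combined EMD-plus-CWT detection pipeline is given below. It assumes the PyWavelets and PyEMD packages; the sampling rate, frequency band and threshold are illustrative stand-ins, not the paper's tuned values.

      import numpy as np
      import pywt                      # PyWavelets
      from PyEMD import EMD            # pip install EMD-signal

      fs = 250.0                                    # sampling rate (Hz), illustrative
      t = np.arange(0, 10, 1 / fs)
      eeg = np.sin(2 * np.pi * 9 * t) * (t > 5) + 0.5 * np.random.randn(t.size)

      # 1) EMD: keep the first IMFs, which carry the fastest oscillations.
      imfs = EMD().emd(eeg)
      detail = imfs[:2].sum(axis=0)

      # 2) CWT of the EMD-filtered signal in an illustrative 7-12 Hz band.
      freqs = np.arange(7.0, 12.5, 0.5)
      scales = pywt.central_frequency('morl') * fs / freqs
      coeffs, _ = pywt.cwt(detail, scales, 'morl')
      energy = (np.abs(coeffs) ** 2).mean(axis=0)

      # 3) Mark samples whose band energy exceeds an (illustrative) threshold.
      events = energy > 2.0 * energy.mean()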

  18. Quantifying the effect of plant growth on litter decomposition using a novel, triple-isotope label approach

    NASA Astrophysics Data System (ADS)

    Ernakovich, J. G.; Baldock, J.; Carter, T.; Davis, R. A.; Kalbitz, K.; Sanderman, J.; Farrell, M.

    2017-12-01

    Microbial degradation of plant detritus is now accepted as a major stabilizing process of organic matter in soils. Most of our understanding of the dynamics of decomposition comes from laboratory litter decay studies in the absence of plants, despite the fact that litter decays in the presence of plants in many native and managed systems. There is growing evidence that living plants significantly impact the degradation and stabilization of litter carbon (C) due to changes in the chemical and physical nature of soils in the rhizosphere. For example, mechanistic studies have observed stimulatory effects of root exudates on litter decomposition, and greenhouse studies have shown that living plants accelerate detrital decay. Despite this, we lack a quantitative understanding of the contribution of living plants to litter decomposition and of how interactions of these two sources of C build soil organic matter (SOM). We used a novel triple-isotope approach to determine the effect of living plants on litter decomposition and C cycling. In the first stage of the experiment, we grew a temperate grass commonly used for forage, Poa labillardieri, in a continuously-labelled atmosphere of 14CO2, fertilized with K15NO3, such that the grass biomass was uniformly labelled with 14C and 15N. In the second stage, we constructed litter decomposition mesocosms with and without a living plant to test for the effect of a growing plant on litter decomposition. The 14C/15N litter was decomposed in a sandy clay loam while a temperate forage grass, Lolium perenne, grew in an atmosphere of enriched 13CO2. The fate of the litter-14C/15N and plant-13C was traced into soil mineral fractions and dissolved organic matter (DOM) over the course of nine weeks using four destructive harvests of the mesocosms. Our preliminary results suggest that living plants play a major role in the degradation of plant litter, as litter decomposition was greater, both in rate and in absolute amount, in soil mesocosms with a growing plant. Our observations during the decomposition experiment suggest that plant roots physically disrupted the litter and thereby increased decomposition. Isotopic analyses are currently underway, and transformations of litter-14C will be presented. Refining our understanding of in situ litter decay will add to our growing knowledge of the C cycle.

  19. Wavelet Analysis for Wind Fields Estimation

    PubMed Central

    Leite, Gladeston C.; Ushizima, Daniela M.; Medeiros, Fátima N. S.; de Lima, Gilson G.

    2010-01-01

    Wind field analysis from synthetic aperture radar images allows the estimation of wind direction and speed based on image descriptors. In this paper, we propose a framework to automate wind direction retrieval based on wavelet decomposition associated with spectral processing. We extend existing undecimated wavelet transform approaches by including the à trous wavelet with a B3 spline scaling function, in addition to other wavelet bases such as Gabor and Mexican-hat. The purpose is to extract more reliable directional information when wind speed values range from 5 to 10 m s−1. Using C-band empirical models associated with the estimated directional information, we calculate local wind speed values and compare our results with QuikSCAT scatterometer data. The proposed approach has potential application in the evaluation of oil spills and wind farms. PMID:22219699

  20. Young Children's Thinking about Decomposition: Early Modeling Entrees to Complex Ideas in Science

    ERIC Educational Resources Information Center

    Ero-Tolliver, Isi; Lucas, Deborah; Schauble, Leona

    2013-01-01

    This study was part of a multi-year project on the development of elementary students' modeling approaches to understanding the life sciences. Twenty-three first grade students conducted a series of coordinated observations and investigations on decomposition, a topic that is rarely addressed in the early grades. The instruction included…

  1. Strawman Distributed Interactive Simulation Architecture Description Document. Volume 2. Supporting Rationale. Book 2. DIS Architecture Issues

    DTIC Science & Technology

    1992-03-31

    the-loop, interactive training environment. Its primary advantage is that it has a long history of use and a number of experienced users. However...programmer teams. The Object Oriented Behavioral Decomposition Approach: Object oriented behavioral decomposition is

  2. Factorization-based texture segmentation

    DOE PAGES

    Yuan, Jiangye; Wang, Deliang; Cheriyadat, Anil M.

    2015-06-17

    This study introduces a factorization-based approach that efficiently segments textured images. We use local spectral histograms as features, and construct an M × N feature matrix using M-dimensional feature vectors in an N-pixel image. Based on the observation that each feature can be approximated by a linear combination of several representative features, we factor the feature matrix into two matrices: one consisting of the representative features, and the other containing the weights of the representative features at each pixel, used for the linear combination. The factorization method is based on singular value decomposition and nonnegative matrix factorization. The method uses local spectral histograms to discriminate region appearances in a computationally efficient way while accurately localizing region boundaries. Finally, experiments conducted on public segmentation data sets show the promise of this simple yet powerful approach.
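
    A schematic version of the factorization step, using scikit-learn's NMF on a toy feature matrix (the local-spectral-histogram feature extraction itself is omitted, and all names here are ours):

      import numpy as np
      from sklearn.decomposition import NMF

      # Toy stand-in for the M x N feature matrix (M-dim features at N pixels).
      rng = np.random.default_rng(1)
      M, N, k = 20, 1000, 3                    # k representative features
      true_W = rng.random((M, k))
      labels_true = rng.integers(0, k, N)
      Y = true_W[:, labels_true] + 0.01 * rng.random((M, N))

      # Factor Y ~ W H: W holds representative features, H per-pixel weights.
      model = NMF(n_components=k, init='nndsvd', max_iter=500, random_state=0)
      W = model.fit_transform(Y)               # M x k
      H = model.components_                    # k x N

      segmentation = H.argmax(axis=0)          # each pixel -> dominant representative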

  3. Bearing performance degradation assessment based on a combination of empirical mode decomposition and k-medoids clustering

    NASA Astrophysics Data System (ADS)

    Rai, Akhand; Upadhyay, S. H.

    2017-09-01

    Bearing is the most critical component in rotating machinery, since it is the most susceptible to failure. The monitoring of degradation in bearings is thus of great concern for averting sudden machinery breakdown. In this study, a novel method for bearing performance degradation assessment (PDA) based on a combination of empirical mode decomposition (EMD) and k-medoids clustering is proposed. Fault features are extracted from the bearing signals using the EMD process. The extracted features are then subjected to k-medoids-based clustering to obtain the normal-state and failure-state cluster centres. A confidence value (CV) curve, based on the dissimilarity of a test data object to the normal state, is obtained and employed as the degradation indicator for assessing the health of bearings. The proposed approach is applied to vibration signals collected in run-to-failure tests of bearings to assess its effectiveness in bearing PDA. To validate the superiority of the suggested approach, it is compared with the commonly used time-domain features RMS and kurtosis, the well-known fault diagnosis method of envelope analysis (EA), and existing PDA classifiers, i.e. self-organizing maps (SOM) and fuzzy c-means (FCM). The results demonstrate that the recommended method outperforms time-domain features, SOM and FCM based PDA in detecting early-stage degradation more precisely. Moreover, EA can be used as an accompanying method to confirm the early-stage defect detected by the proposed bearing PDA approach. The study shows the potential of k-medoids clustering as an effective tool for the PDA of bearings.
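
    The sketch below mimics the overall flow (EMD-based features, k-medoids cluster centres, and a dissimilarity-based confidence value). It assumes the PyEMD and scikit-learn-extra packages, and the particular CV formula is one plausible choice of ours, not necessarily the paper's.

      import numpy as np
      from PyEMD import EMD                        # pip install EMD-signal
      from sklearn_extra.cluster import KMedoids   # pip install scikit-learn-extra

      def features(signal):
          """Toy feature vector: energies of the first four IMFs."""
          imfs = EMD().emd(signal)[:4]
          return np.array([np.sum(m ** 2) for m in imfs])

      # Feature vectors from (toy) normal-state and failure-state records.
      rng = np.random.default_rng(0)
      X = np.vstack([features(rng.standard_normal(2048)) for _ in range(20)])

      km = KMedoids(n_clusters=2, random_state=0).fit(X)
      normal_centre = km.cluster_centers_[0]       # assume cluster 0 = normal state

      def confidence_value(x):
          """CV near 1 = healthy; decays as the object drifts from normal."""
          return float(np.exp(-np.linalg.norm(x - normal_centre)))

      cv = confidence_value(features(rng.standard_normal(2048)))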

  4. Direct CP asymmetry in D → π-π+ and D → K-K+ in QCD-based approach

    NASA Astrophysics Data System (ADS)

    Khodjamirian, Alexander; Petrov, Alexey A.

    2017-11-01

    We present the first QCD-based calculation of hadronic matrix elements with penguin topology determining direct CP-violating asymmetries in D0 →π-π+ and D0 →K-K+ nonleptonic decays. The method is based on the QCD light-cone sum rules and does not rely on any model-inspired amplitude decomposition, instead leaning heavily on quark-hadron duality. We provide a Standard Model estimate of the direct CP-violating asymmetries in both pion and kaon modes and their difference and comment on further improvements of the presented computation.

  5. A fault diagnosis scheme for rolling bearing based on local mean decomposition and improved multiscale fuzzy entropy

    NASA Astrophysics Data System (ADS)

    Li, Yongbo; Xu, Minqiang; Wang, Rixin; Huang, Wenhu

    2016-01-01

    This paper presents a new rolling bearing fault diagnosis method based on local mean decomposition (LMD), improved multiscale fuzzy entropy (IMFE), Laplacian score (LS) and an improved support vector machine based binary tree (ISVM-BT). When a fault occurs in rolling bearings, the measured vibration signal is a multi-component amplitude-modulated and frequency-modulated (AM-FM) signal. LMD, a new self-adaptive time-frequency analysis method, can decompose any complicated signal into a series of product functions (PFs), each of which is exactly a mono-component AM-FM signal; hence, LMD is introduced to preprocess the vibration signal. Furthermore, IMFE, which is designed to avoid the inaccurate estimation of fuzzy entropy, is utilized to quantify the complexity and self-similarity of the time series over a range of scales based on fuzzy entropy. In addition, the LS approach is introduced to refine the fault features by sorting the scale factors. Subsequently, the obtained features are fed into the multi-fault classifier ISVM-BT to automatically fulfill the fault pattern identification. The experimental results validate the effectiveness of the methodology and demonstrate that the proposed algorithm can be applied to recognize the different categories and severities of rolling bearing faults.

  6. An efficient and general approach for implementing thermodynamic phase equilibria information in geophysical and geodynamic studies

    NASA Astrophysics Data System (ADS)

    Afonso, Juan Carlos; Zlotnik, Sergio; Díez, Pedro

    2015-10-01

    We present a flexible, general, and efficient approach for implementing thermodynamic phase equilibria information (in the form of sets of physical parameters) into geophysical and geodynamic studies. The approach is based on Tensor Rank Decomposition methods, which transform the original multidimensional discrete information into a separated representation that contains significantly fewer terms, thus drastically reducing the amount of information to be stored in memory during a numerical simulation or geophysical inversion. Accordingly, the amount and resolution of the thermodynamic information that can be used in a simulation or inversion increases substantially. In addition, the method is independent of the actual software used to obtain the primary thermodynamic information, and therefore, it can be used in conjunction with any thermodynamic modeling program and/or database. Also, the errors associated with the decomposition procedure are readily controlled by the user, depending on her/his actual needs (e.g., preliminary runs versus full resolution runs). We illustrate the benefits, generality, and applicability of our approach with several examples of practical interest for both geodynamic modeling and geophysical inversion/modeling. Our results demonstrate that the proposed method is a competitive and attractive candidate for implementing thermodynamic constraints into a broad range of geophysical and geodynamic studies. MATLAB implementations of the method and examples are provided as supporting information and can be downloaded from the journal's website.
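
    The core idea, replacing a dense multidimensional lookup table by a low-rank separated representation, can be sketched with the TensorLy library (the toy tensor, grid and rank are ours; TensorLy's CP/PARAFAC API is assumed):

      import numpy as np
      import tensorly as tl
      from tensorly.decomposition import parafac

      # Toy 3-D "thermodynamic table": property values on a (P, T, composition) grid.
      P = np.linspace(1, 10, 30)
      T = np.linspace(300, 1800, 40)
      c = np.linspace(0, 1, 20)
      table = (np.exp(-P[:, None, None] / 5.0)
               * np.sqrt(T[None, :, None]) * (1.0 + c[None, None, :]))

      # Separated representation: a few rank-1 terms instead of 30*40*20 values.
      cp = parafac(tl.tensor(table), rank=2)
      approx = tl.cp_to_tensor(cp)
      rel_err = np.linalg.norm(table - approx) / np.linalg.norm(table)
      print(f"relative error: {rel_err:.2e}")   # error is controlled via the rank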

  7. Losses of soil organic carbon by converting tropical forest to plantations: Assessment of erosion and decomposition by new δ13C approach

    NASA Astrophysics Data System (ADS)

    Guillaume, Thomas; Muhammad, Damris; Kuzyakov, Yakov

    2015-04-01

    Indonesia lost more tropical forest than all of Brazil in 2012, mainly driven by the rubber, oil palm and timber industries. Nonetheless, the effects of converting forest to oil palm and rubber plantations on soil organic carbon (SOC) stocks remain unclear. We analyzed SOC losses after lowland rainforest conversion to oil palm, intensive rubber and extensive rubber plantations in Jambi province on Sumatra Island. We developed and applied a new δ13C-based approach to assess and separate two processes: 1) erosion and 2) decomposition. Carbon contents in the Ah horizon under oil palm and rubber plantations were strongly reduced: by up to 70% and 62%, respectively. The decrease was lower under extensive rubber plantations (41%). The C content in the subsoil was similar in the forest and the plantations. We therefore assumed that a shift to higher δ13C values in the subsoil of the plantations corresponds to the loss of the upper soil layer by erosion. Erosion was estimated by comparing the δ13C profiles in the undisturbed soils under forest with the disturbed soils under plantations. The estimated erosion was strongest in oil palm (35±8 cm) and rubber (33±10 cm) plantations. The 13C enrichment of SOC, used as a proxy for its turnover, indicates a decrease of the SOC decomposition rate in the Ah horizon under oil palm plantations after forest conversion. SOC availability, measured by microbial respiration rate and Fourier transform infrared spectroscopy, was lower under oil palm plantations. Despite similar trends in C losses and erosion in intensive plantations, our results indicate that microorganisms in oil palm plantations mineralized mainly the old C stabilized prior to conversion, whereas microorganisms under rubber plantations mineralized the fresh C from the litter, leaving the old C pool mainly untouched. Based on the lack of C input from litter, we expect further losses of SOC under oil palm plantations, which are therefore a less sustainable land use than rubber plantations. Finally, we discuss the advantages and limitations of the new δ13C-based approach to assess erosion and decomposition, as well as possibilities for its development and broader application. The re-establishment of new oil palm plantations has just started in the studied region. We therefore advise 1) reducing the period without soil protection by planting cover crops at the early stage of establishment, to reduce soil erosion, and 2) leaving as much as possible of the biomass from the old palm trees on site and/or letting the land lie fallow for a few years to enable reconstruction of the SOC pool for the next oil palm generation.

  8. 3D tensor-based blind multispectral image decomposition for tumor demarcation

    NASA Astrophysics Data System (ADS)

    Kopriva, Ivica; Peršin, Antun

    2010-03-01

    Blind decomposition of a multi-spectral fluorescent image for tumor demarcation is formulated exploiting the tensorial structure of the image. The first contribution of the paper is the identification of the matrix of spectral responses and the 3D tensor of spatial distributions of the materials present in the image from Tucker3 or PARAFAC models of the 3D image tensor. The second contribution is a clustering-based estimation of the number of materials present in the image, as well as of the matrix of their spectral profiles. The 3D tensor of the spatial distributions of the materials is recovered through 3-mode multiplication of the multi-spectral image tensor and the inverse of the matrix of spectral profiles. The tensor representation of the multi-spectral image preserves its local spatial structure, which is lost, due to the vectorization process, when matrix factorization-based decomposition methods (such as non-negative matrix factorization and independent component analysis) are used. The superior performance of tensor-based image decomposition over matrix factorization-based decompositions is demonstrated on an experimental red-green-blue (RGB) image with known ground truth, as well as on RGB fluorescent images of skin tumor (basal cell carcinoma).

  9. Decomposition of metabolic network into functional modules based on the global connectivity structure of reaction graph.

    PubMed

    Ma, Hong-Wu; Zhao, Xue-Ming; Yuan, Ying-Jin; Zeng, An-Ping

    2004-08-12

    Metabolic networks are organized in a modular, hierarchical manner. Methods for a rational decomposition of the metabolic network into relatively independent functional subsets are essential to better understand the modularity and organization principles of a large-scale, genome-wide network. Network decomposition is also necessary for functional analysis of metabolism by pathway analysis methods, which are often hampered by the problem of combinatorial explosion due to the complexity of the metabolic network. Decomposition methods proposed in the literature are mainly based on the connection degree of metabolites. To obtain a more reasonable decomposition, the global connectivity structure of metabolic networks should be taken into account. In this work, we use a reaction graph representation of a metabolic network for the identification of its global connectivity structure and for decomposition. A bow-tie connectivity structure similar to that previously discovered for the metabolite graph is found to exist in the reaction graph as well. Based on this bow-tie structure, a new decomposition method is proposed, which uses a distance definition derived from the path length between two reactions. A hierarchical classification tree is first constructed from the distance matrix among the reactions in the giant strong component of the bow-tie structure. These reactions are then grouped into different subsets based on the hierarchical tree. Reactions in the IN and OUT subsets of the bow-tie structure are subsequently placed in the corresponding subsets according to a 'majority rule'. Compared with the decomposition methods proposed in the literature, ours is based on combined properties of the global network structure and local reaction connectivity rather than, primarily, on the connection degree of metabolites. The method is applied to decompose the metabolic network of Escherichia coli, yielding eleven subsets. More detailed investigation shows that reactions in the same subset are indeed functionally related. The rational decomposition of metabolic networks, and subsequent studies of the subsets, makes it easier to understand the inherent organization and functionality of metabolic networks at the modular level. http://genome.gbf.de/bioinformatics/
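
    The grouping step, building a hierarchical tree from a reaction-reaction distance matrix and cutting it into subsets, can be sketched with SciPy; the random distance matrix below stands in for the paper's path-length-derived distances:

      import numpy as np
      from scipy.spatial.distance import squareform
      from scipy.cluster.hierarchy import linkage, fcluster

      # Toy symmetric distance matrix between reactions in the giant strong component.
      rng = np.random.default_rng(0)
      n = 50
      D = rng.random((n, n))
      D = (D + D.T) / 2.0
      np.fill_diagonal(D, 0.0)

      Z = linkage(squareform(D), method='average')       # hierarchical classification tree
      subsets = fcluster(Z, t=11, criterion='maxclust')  # cut into e.g. eleven subsets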

  10. The Distributed Diagonal Force Decomposition Method for Parallelizing Molecular Dynamics Simulations

    PubMed Central

    Boršnik, Urban; Miller, Benjamin T.; Brooks, Bernard R.; Janežič, Dušanka

    2011-01-01

    Parallelization is an effective way to reduce the computational time needed for molecular dynamics simulations. We describe a new parallelization method, the distributed-diagonal force decomposition method, with which we extend and improve the existing force decomposition methods. Our new method requires less data communication during molecular dynamics simulations than replicated data and current force decomposition methods, increasing the parallel efficiency. It also dynamically load-balances the processors' computational load throughout the simulation. The method is readily implemented in existing molecular dynamics codes and it has been incorporated into the CHARMM program, allowing its immediate use in conjunction with the many molecular dynamics simulation techniques that are already present in the program. We also present the design of the Force Decomposition Machine, a cluster of personal computers and networks that is tailored to running molecular dynamics simulations using the distributed diagonal force decomposition method. The design is expandable and provides various degrees of fault resilience. This approach is easily adaptable to computers with Graphics Processing Units because it is independent of the processor type being used. PMID:21793007

  11. Profiling physicochemical and planktonic features from discretely/continuously sampled surface water.

    PubMed

    Oita, Azusa; Tsuboi, Yuuri; Date, Yasuhiro; Oshima, Takahiro; Sakata, Kenji; Yokoyama, Akiko; Moriya, Shigeharu; Kikuchi, Jun

    2018-04-24

    There is an increasing need for assessing aquatic ecosystems that are globally endangered. Since aquatic ecosystems are complex, integrated consideration of multiple factors utilizing omics technologies can help us better understand aquatic ecosystems. An integrated strategy linking three analytical (machine learning, factor mapping, and forecast-error-variance decomposition) approaches for extracting the features of surface water from datasets comprising ions, metabolites, and microorganisms is proposed herein. The three developed approaches can be employed for diverse datasets of sample sizes and experimentally analyzed factors. The three approaches are applied to explore the features of bay water surrounding Odaiba, Tokyo, Japan, as a case study. Firstly, the machine learning approach separated 681 surface water samples within Japan into three clusters, categorizing Odaiba water into seawater with relatively low inorganic ions, including Mg, Ba, and B. Secondly, the factor mapping approach illustrated Odaiba water samples from the summer as rich in multiple amino acids and some other metabolites and poor in inorganic ions relative to other seasons based on their seasonal dynamics. Finally, forecast-error-variance decomposition using vector autoregressive models indicated that a type of microalgae (Raphidophyceae) grows in close correlation with alanine, succinic acid, and valine on filters and with isobutyric acid and 4-hydroxybenzoic acid in filtrate, Ba, and average wind speed. Our integrated strategy can be used to examine many biological, chemical, and environmental physical factors to analyze surface water. Copyright © 2018. Published by Elsevier B.V.
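
    The final analytical step, forecast-error-variance decomposition from a vector autoregressive model, can be sketched with statsmodels; the column names and series below are hypothetical stand-ins for the measured factors:

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.api import VAR

      # Hypothetical series: microalgae abundance and two co-measured factors.
      rng = np.random.default_rng(0)
      df = pd.DataFrame(rng.standard_normal((60, 3)),
                        columns=['raphidophyceae', 'alanine', 'wind_speed'])

      res = VAR(df).fit(maxlags=2, ic='aic')
      fevd = res.fevd(8)        # variance shares over an 8-step forecast horizon
      print(fevd.summary())     # how much each factor explains each variable's variance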

  12. An NN-Based SRD Decomposition Algorithm and Its Application in Nonlinear Compensation

    PubMed Central

    Yan, Honghang; Deng, Fang; Sun, Jian; Chen, Jie

    2014-01-01

    In this study, a neural network-based square root of descending (SRD) order decomposition algorithm for compensating for nonlinear data generated by sensors is presented. The study aims at exploring the optimized decomposition of the data and minimizing the computational complexity and memory cost of the training process. A linear decomposition algorithm, which automatically finds the optimal decomposition into N subparts and reduces the training time to 1/N and the memory cost to 1/N, has been implemented on nonlinear data obtained from an encoder. Particular focus is given to the theoretical estimation of the number of hidden nodes and of the precision achieved by varying the decomposition method. Numerical experiments are designed to evaluate the effect of this algorithm. Moreover, a device designed for angular sensor calibration is presented. We conduct an experiment that samples the data of an encoder and compensates for the nonlinearity of the encoder to test this novel algorithm. PMID:25232912

  13. Acid and alkali effects on the decomposition of HMX molecule: a computational study.

    PubMed

    Zhang, Chaoyang; Li, Yuzhen; Xiong, Ying; Wang, Xiaolin; Zhou, Mingfei

    2011-11-03

    Stored and waste explosives are usually in an acid or alkali environment, making it important to explore the effects of acids and alkalis on the decomposition mechanisms of explosives. The acid and alkali effects on the decomposition of the HMX molecule in the gaseous state and in aqueous solution at 298 K are studied using quantum chemistry and molecular force field calculations. The results show that both H(+) and OH(-) make decomposition in the gaseous state energetically favorable. However, the effect of H(+) is much different from that of OH(-) in aqueous solution: OH(-) can accelerate the decomposition but H(+) cannot. The difference is mainly caused by the large difference in aqueous solvation energy between H(+) and OH(-). The results confirm that the dissociation of HMX is energetically favored only in base solutions, in good agreement with previous HMX base-hydrolysis experimental observations. The different acid and alkali effects on HMX decomposition are dominated by the large difference in aqueous solvation energy between H(+) and OH(-).

  14. A knowledge-based tool for multilevel decomposition of a complex design problem

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    1989-01-01

    Although much work has been done in applying artificial intelligence (AI) tools and techniques to problems in different engineering disciplines, only recently has the application of these tools begun to spread to the decomposition of complex design problems. A new tool based on AI techniques has been developed to implement a decomposition scheme suitable for multilevel optimization and display of data in an N x N matrix format.

  15. ARMA Cholesky Factor Models for the Covariance Matrix of Linear Models.

    PubMed

    Lee, Keunbaik; Baek, Changryong; Daniels, Michael J

    2017-11-01

    In longitudinal studies, serial dependence of repeated outcomes must be taken into account to make correct inferences on covariate effects. As such, care must be taken in modeling the covariance matrix. However, estimation of the covariance matrix is challenging because it contains many parameters and the estimate must be positive definite. To overcome these limitations, two Cholesky decomposition approaches have been proposed: the modified Cholesky decomposition for autoregressive (AR) structure and the moving average Cholesky decomposition for moving average (MA) structure. However, the correlations of repeated outcomes are often not captured parsimoniously using either approach separately. In this paper, we propose a class of flexible, nonstationary, heteroscedastic models that exploits the structure allowed by combining the AR and MA modeling of the covariance matrix, which we denote ARMACD. We analyze a recent lung cancer study to illustrate the power of our proposed methods.
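
    For orientation, the two building blocks can be written in the usual modified-Cholesky notation (ours, not quoted from the paper):

      % AR-type modified Cholesky decomposition: T is unit lower triangular,
      % its rows holding negated generalized autoregressive parameters, and
      % D is the diagonal matrix of innovation variances.
      T \Sigma T^{\top} = D
      % MA-type (moving average) Cholesky decomposition, built from L = T^{-1},
      % whose below-diagonal entries act as moving-average parameters.
      \Sigma = L D L^{\top}, \qquad L = T^{-1}

    Loosely, the ARMACD class combines these two parameterizations; the exact formulation is given in the paper.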

  16. Unified path integral approach to theories of diffusion-influenced reactions

    NASA Astrophysics Data System (ADS)

    Prüstel, Thorsten; Meier-Schellersheim, Martin

    2017-08-01

    Building on mathematical similarities between quantum mechanics and theories of diffusion-influenced reactions, we develop a general approach for computational modeling of diffusion-influenced reactions that is capable of capturing not only the classical Smoluchowski picture but also alternative theories, as is here exemplified by a volume reactivity model. In particular, we prove the path decomposition expansion of various Green's functions describing the irreversible and reversible reaction of an isolated pair of molecules. To this end, we exploit a connection between boundary value and interaction potential problems with δ - and δ'-function perturbation. We employ a known path-integral-based summation of a perturbation series to derive a number of exact identities relating propagators and survival probabilities satisfying different boundary conditions in a unified and systematic manner. Furthermore, we show how the path decomposition expansion represents the propagator as a product of three factors in the Laplace domain that correspond to quantities figuring prominently in stochastic spatially resolved simulation algorithms. This analysis will thus be useful for the interpretation of current and the design of future algorithms. Finally, we discuss the relation between the general approach and the theory of Brownian functionals and calculate the mean residence time for the case of irreversible and reversible reactions.

  17. A molecular dynamics study of model SI clathrate hydrates: the effect of guest size and guest-water interaction on decomposition kinetics.

    PubMed

    Das, Subhadip; Baghel, Vikesh Singh; Roy, Sudip; Kumar, Rajnish

    2015-04-14

    One of the options suggested for methane recovery from natural gas hydrates is molecular replacement of methane by suitable guests like CO2 and N2. This approach has been found to be feasible through many experimental and molecular dynamics simulation studies. However, the long term stability of the resultant hydrate needs to be evaluated; the decomposition rate of these hydrates is expected to depend on the interaction between these guest and water molecules. In this work, molecular dynamics simulation has been performed to illustrate the effect of guest molecules with different sizes and interaction strengths with water on structure I (SI) hydrate decomposition and hence the stability. The van der Waals interaction between water of hydrate cages and guest molecules is defined by Lennard Jones potential parameters. A wide range of parameter spaces has been scanned by changing the guest molecules in the SI hydrate, which acts as a model gas for occupying the small and large cages of the SI hydrate. All atomistic simulation results show that the stability of the hydrate is sensitive to the size and interaction of the guest molecules with hydrate water. The increase in the interaction of guest molecules with water stabilizes the hydrate, which in turn shows a slower rate of hydrate decomposition. Similarly guest molecules with a reasonably small (similar to Helium) or large size increase the decomposition rate. The results were also analyzed by calculating the structural order parameter to understand the dynamics of crystal structure and correlated with the release rate of guest molecules from the solid hydrate phase. The results have been explained based on the calculation of potential energies felt by guest molecules in amorphous water, hydrate bulk and hydrate-water interface regions.

  18. Chemistry of decomposition of freshwater wetland sedimentary organic material during ramped pyrolysis

    NASA Astrophysics Data System (ADS)

    Williams, E. K.; Rosenheim, B. E.

    2011-12-01

    Ramped pyrolysis methodology, such as that used in the programmed-temperature pyrolysis/combustion system (PTP/CS), improves radiocarbon analysis of geologic materials devoid of authigenic carbonate compounds and with low concentrations of extractable autochthonous organic molecules. The approach has improved sediment chronology in organic-rich sediments proximal to Antarctic ice shelves (Rosenheim et al., 2008) and constrained the carbon sequestration potential of suspended sediments in the lower Mississippi River (Roe et al., in review). Although ramped pyrolysis allows for separation of sedimentary organic material based upon relative reactivity, chemical information (i.e. the chemical composition of pyrolysis products) is lost during the in-line combustion of pyrolysis products. A first-order approximation of ramped pyrolysis/combustion system CO2 evolution, employing a simple Gaussian decomposition routine, has been useful (Rosenheim et al., 2008), but improvements may be possible. First, without prior compound-specific extractions, the molecular composition of sedimentary organic matter is unknown and/or unidentifiable. Second, even if determined as constituents of sedimentary organic material, many organic compounds have unknown or variable decomposition temperatures. Third, mixtures of organic compounds may result in significant chemistry within the pyrolysis reactor, prior to the introduction of oxygen along the flow path. Gaussian decomposition of the reaction rate may be too simple to fully account for the combination of these factors. To relate both the radiocarbon age over different temperature intervals and the pyrolysis reaction thermograph (temperature (°C) vs. CO2 evolved (μmol)) obtained from the PTP/CS to the chemical composition of sedimentary organic material, we present a modeling framework based upon the ramped pyrolysis decomposition of simple mixtures of organic compounds (i.e. cellulose, lignin, plant fatty acids, etc.) often found in sedimentary organic material, to account for changes in thermograph shape. The decompositions will be compositionally verified by 13C NMR analysis of pyrolysis residues from interrupted reactions. This will allow constraint of the decomposition temperatures of individual compounds, as well as of chemical reactions between volatilized moieties in mixtures of these compounds. We will apply this framework with 13C NMR analysis of interrupted pyrolysis residues and radiocarbon data from PTP/CS analysis of sedimentary organic material from a freshwater marsh wetland in Barataria Bay, Louisiana. We expect to characterize the bulk chemical composition during pyrolysis as well as diagenetic changes with depth. Most importantly, we expect to constrain the potential and the limitations of this modeling framework for application to other depositional environments.
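
    The "simple Gaussian decomposition routine" mentioned above amounts to fitting the thermograph with a small sum of Gaussians. A generic sketch with SciPy follows (synthetic data; peak positions and initial guesses are illustrative):

      import numpy as np
      from scipy.optimize import curve_fit

      def gaussians(T, *p):
          """Sum of Gaussians; p = (a1, mu1, s1, a2, mu2, s2, ...)."""
          y = np.zeros_like(T)
          for a, mu, s in zip(p[0::3], p[1::3], p[2::3]):
              y += a * np.exp(-0.5 * ((T - mu) / s) ** 2)
          return y

      # Synthetic thermograph: CO2 evolved (umol) versus temperature (deg C).
      T = np.linspace(150, 900, 300)
      y = gaussians(T, 4.0, 330.0, 40.0, 2.5, 520.0, 60.0)
      y += 0.05 * np.random.randn(T.size)

      p0 = [3, 300, 50, 3, 500, 50]            # illustrative initial guesses
      popt, _ = curve_fit(gaussians, T, y, p0=p0)
      components = [popt[i:i + 3] for i in range(0, len(popt), 3)]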

  19. Spinodal Decomposition in Functionally Graded Super Duplex Stainless Steel and Weld Metal

    NASA Astrophysics Data System (ADS)

    Hosseini, Vahid A.; Thuvander, Mattias; Wessman, Sten; Karlsson, Leif

    2018-07-01

    Low-temperature phase separations (T < 500 °C), resulting in changes in mechanical and corrosion properties, of super duplex stainless steel (SDSS) base and weld metals were investigated for short heat treatment times (0.5 to 600 minutes). A novel heat treatment technique, where a stationary arc produces a steady state temperature gradient for selected times, was employed to fabricate functionally graded materials. Three different initial material conditions including 2507 SDSS, remelted 2507 SDSS, and 2509 SDSS weld metal were investigated. Selective etching of ferrite significantly decreased in regions heat treated at 435 °C to 480 °C already after 3 minutes due to rapid phase separations. Atom probe tomography results revealed spinodal decomposition of ferrite and precipitation of Cu particles. Microhardness mapping showed that as-welded microstructure and/or higher Ni content accelerated decomposition. The arc heat treatment technique combined with microhardness mapping and electrolytical etching was found to be a successful approach to evaluate kinetics of low-temperature phase separations in SDSS, particularly at its earlier stages. A time-temperature transformation diagram was proposed showing the kinetics of 475 °C-embrittlement in 2507 SDSS.

  1. Spectral decomposition of nonlinear systems with memory

    NASA Astrophysics Data System (ADS)

    Svenkeson, Adam; Glaz, Bryan; Stanton, Samuel; West, Bruce J.

    2016-02-01

    We present an alternative approach to the analysis of nonlinear systems with long-term memory that is based on the Koopman operator and a Lévy transformation in time. Memory effects are considered to be the result of interactions between a system and its surrounding environment. The analysis leads to the decomposition of a nonlinear system with memory into modes whose temporal behavior is anomalous and lacks a characteristic scale. On average, the time evolution of a mode follows a Mittag-Leffler function, and the system can be described using the fractional calculus. The general theory is demonstrated on the fractional linear harmonic oscillator and the fractional nonlinear logistic equation. When analyzing data from an ill-defined (black-box) system, the spectral decomposition in terms of Mittag-Leffler functions that we propose may uncover inherent memory effects through identification of a small set of dynamically relevant structures that would otherwise be obscured by conventional spectral methods. Consequently, the theoretical concepts we present may be useful for developing more general methods for numerical modeling that are able to determine whether observables of a dynamical system are better represented by memoryless operators, or operators with long-term memory in time, when model details are unknown.
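
    The Mittag-Leffler function governing the average mode evolution has the series definition E_alpha(z) = sum_{k>=0} z^k / Gamma(alpha*k + 1); a direct truncated-series evaluation (adequate only for moderate |z|) looks like:

      import numpy as np
      from scipy.special import gamma

      def mittag_leffler(z, alpha, n_terms=100):
          """Truncated series E_alpha(z); fine for moderate |z|,
          large arguments require dedicated algorithms."""
          k = np.arange(n_terms)
          return float(np.sum(np.power(z, k) / gamma(alpha * k + 1)))

      # alpha = 1 recovers the exponential; alpha < 1 gives the slower,
      # scale-free relaxation described above.
      print(mittag_leffler(-1.0, 1.0), np.exp(-1.0))   # both ~0.3679
      print(mittag_leffler(-1.0, 0.5))                 # heavier-tailed decay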

  2. A domain decomposition approach to implementing fault slip in finite-element models of quasi-static and dynamic crustal deformation

    USGS Publications Warehouse

    Aagaard, Brad T.; Knepley, M.G.; Williams, C.A.

    2013-01-01

    We employ a domain decomposition approach with Lagrange multipliers to implement fault slip in a finite-element code, PyLith, for use in both quasi-static and dynamic crustal deformation applications. This integrated approach to solving both quasi-static and dynamic simulations leverages common finite-element data structures and implementations of various boundary conditions, discretization schemes, and bulk and fault rheologies. We have developed a custom preconditioner for the Lagrange multiplier portion of the system of equations that provides excellent scalability with problem size compared to conventional additive Schwarz methods. We demonstrate application of this approach using benchmarks for both quasi-static viscoelastic deformation and dynamic spontaneous rupture propagation that verify the numerical implementation in PyLith.

  3. An Integrated Framework for Model-Based Distributed Diagnosis and Prognosis

    NASA Technical Reports Server (NTRS)

    Bregon, Anibal; Daigle, Matthew J.; Roychoudhury, Indranil

    2012-01-01

    Diagnosis and prognosis are necessary tasks for system reconfiguration and fault-adaptive control in complex systems. Diagnosis consists of detection, isolation and identification of faults, while prognosis consists of prediction of the remaining useful life of systems. This paper presents a novel integrated framework for model-based distributed diagnosis and prognosis, where system decomposition is used to enable the diagnosis and prognosis tasks to be performed in a distributed way. We show how different submodels can be automatically constructed to solve the local diagnosis and prognosis problems. We illustrate our approach using a simulated four-wheeled rover for different fault scenarios. Our experiments show that our approach correctly performs distributed fault diagnosis and prognosis in an efficient and robust manner.

  4. TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS

    PubMed Central

    Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.

    2017-01-01

    Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971

  5. Sparse decomposition of seismic data and migration using Gaussian beams with nonzero initial curvature

    NASA Astrophysics Data System (ADS)

    Liu, Peng; Wang, Yanfei

    2018-04-01

    We study problems associated with seismic data decomposition and migration imaging. We first represent the seismic data utilizing Gaussian beam basis functions, which have nonzero curvature, and then consider the sparse decomposition technique. The sparse decomposition problem is an l0-norm constrained minimization problem. In solving the l0-norm minimization, a polynomial Radon transform is performed to achieve sparsity, and a fast gradient descent method is used to calculate the waveform functions. The waveform functions can subsequently be used for sparse Gaussian beam migration. Compared with traditional sparse Gaussian beam methods, the seismic data can be properly reconstructed employing fewer Gaussian beams with nonzero initial curvature. The migration approach described in this paper is more efficient than the traditional sparse Gaussian beam migration.
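
    In symbols (our schematic notation, not the paper's): with d the recorded data and \Phi a dictionary of Gaussian-beam waveforms with nonzero initial curvature, the sparse decomposition solves

      \min_{m} \; \| d - \Phi m \|_2^2 \quad \text{subject to} \quad \| m \|_0 \le K,

    where the l0 "norm" counts the selected beams and K caps how many are used; the paper attacks this with a polynomial Radon transform to achieve sparsity and fast gradient descent to compute the waveform functions.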

  6. Study on the decomposition of trace benzene over V2O5-WO3 ...

    EPA Pesticide Factsheets

    Commercial and laboratory-prepared V2O5–WO3/TiO2-based catalysts with different compositions were tested for catalytic decomposition of chlorobenzene (ClBz) in simulated flue gas. Resonance enhanced multiphoton ionization-time of flight mass spectrometry (REMPI-TOFMS) was employed to measure real-time, trace concentrations of ClBz contained in the flue gas before and after the catalyst. The effects of various parameters, including vanadium content of the catalyst, the catalyst support, as well as the reaction temperature on decomposition of ClBz were investigated. The results showed that the ClBz decomposition efficiency was significantly enhanced when nano-TiO2 instead of conventional TiO2 was used as the catalyst support. No promotion effects were found in the ClBz decomposition process when the catalysts were wet-impregnated with CuO and CeO2. Tests with different concentrations (1,000, 500, and 100 ppb) of ClBz showed that ClBz-decomposition efficiency decreased with increasing concentration, unless active sites were plentiful. A comparison between ClBz and benzene decomposition on the V2O5–WO3/TiO2-based catalyst and the relative kinetics analysis showed that two different active sites were likely involved in the decomposition mechanism and the V=O and V-O-Ti groups may only work for the degradation of the phenyl group and the benzene ring rather than the C-Cl bond. V2O5-WO3/TiO2 based catalysts, that have been used for destruction of a wide variet

  7. Hybrid Nested Partitions and Math Programming Framework for Large-scale Combinatorial Optimization

    DTIC Science & Technology

    2010-03-31

    optimization problems: 1) exact algorithms and 2) metaheuristic algorithms. This project will integrate concepts from these two technologies to develop...optimal solutions within an acceptable amount of computation time, and 2) metaheuristic algorithms such as genetic algorithms, tabu search, and the...integer programming decomposition approaches, such as Dantzig-Wolfe decomposition and Lagrangian relaxation, and metaheuristics such as the Nested

  8. Implementing dense linear algebra algorithms using multitasking on the CRAY X-MP-4 (or approaching the gigaflop)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, J.J.; Hewitt, T.

    1985-08-01

    This note describes some experiments on simple, dense linear algebra algorithms. These experiments show that the CRAY X-MP is capable of small-grain multitasking arising from standard implementations of LU and Cholesky decomposition. The implementation described here provides the "fastest" execution rate for LU decomposition: 718 MFLOPS for a matrix of order 1000.

  9. Evaluation of a 3D local multiresolution algorithm for the correction of partial volume effects in positron emission tomography.

    PubMed

    Le Pogam, Adrien; Hatt, Mathieu; Descourt, Patrice; Boussion, Nicolas; Tsoumpas, Charalampos; Turkheimer, Federico E; Prunier-Aesch, Caroline; Baulieu, Jean-Louis; Guilloteau, Denis; Visvikis, Dimitris

    2011-09-01

    Partial volume effects (PVEs) are consequences of the limited spatial resolution in emission tomography leading to underestimation of uptake in tissues of size similar to the point spread function (PSF) of the scanner as well as activity spillover between adjacent structures. Among PVE correction methodologies, a voxel-wise mutual multiresolution analysis (MMA) was recently introduced. MMA is based on the extraction and transformation of high resolution details from an anatomical image (MR/CT) and their subsequent incorporation into a low-resolution PET image using wavelet decompositions. Although this method allows creating PVE corrected images, it is based on a 2D global correlation model, which may introduce artifacts in regions where no significant correlation exists between anatomical and functional details. A new model was designed to overcome these two issues (2D only and global correlation) using a 3D wavelet decomposition process combined with a local analysis. The algorithm was evaluated on synthetic, simulated and patient images, and its performance was compared to the original approach as well as the geometric transfer matrix (GTM) method. Quantitative performance was similar to the 2D global model and GTM in correlated cases. In cases where mismatches between anatomical and functional information were present, the new model outperformed the 2D global approach, avoiding artifacts and significantly improving quality of the corrected images and their quantitative accuracy. A new 3D local model was proposed for a voxel-wise PVE correction based on the original mutual multiresolution analysis approach. Its evaluation demonstrated an improved and more robust qualitative and quantitative accuracy compared to the original MMA methodology, particularly in the absence of full correlation between anatomical and functional information.

  10. Evaluation of a 3D local multiresolution algorithm for the correction of partial volume effects in positron emission tomography

    PubMed Central

    Le Pogam, Adrien; Hatt, Mathieu; Descourt, Patrice; Boussion, Nicolas; Tsoumpas, Charalampos; Turkheimer, Federico E.; Prunier-Aesch, Caroline; Baulieu, Jean-Louis; Guilloteau, Denis; Visvikis, Dimitris

    2011-01-01

    Purpose: Partial volume effects (PVE) are consequences of the limited spatial resolution in emission tomography leading to under-estimation of uptake in tissues of size similar to the point spread function (PSF) of the scanner as well as activity spillover between adjacent structures. Among PVE correction methodologies, a voxel-wise mutual multi-resolution analysis (MMA) was recently introduced. MMA is based on the extraction and transformation of high resolution details from an anatomical image (MR/CT) and their subsequent incorporation into a low resolution PET image using wavelet decompositions. Although this method allows creating PVE corrected images, it is based on a 2D global correlation model which may introduce artefacts in regions where no significant correlation exists between anatomical and functional details. Methods: A new model was designed to overcome these two issues (2D only and global correlation) using a 3D wavelet decomposition process combined with a local analysis. The algorithm was evaluated on synthetic, simulated and patient images, and its performance was compared to the original approach as well as the geometric transfer matrix (GTM) method. Results: Quantitative performance was similar to the 2D global model and GTM in correlated cases. In cases where mismatches between anatomical and functional information were present, the new model outperformed the 2D global approach, avoiding artefacts and significantly improving the quality of the corrected images and their quantitative accuracy. Conclusions: A new 3D local model was proposed for a voxel-wise PVE correction based on the original mutual multi-resolution analysis approach. Its evaluation demonstrated an improved and more robust qualitative and quantitative accuracy compared to the original MMA methodology, particularly in the absence of full correlation between anatomical and functional information. PMID:21978037

  11. A novel hybrid decomposition-and-ensemble model based on CEEMD and GWO for short-term PM2.5 concentration forecasting

    NASA Astrophysics Data System (ADS)

    Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu

    2016-06-01

    To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, this proposed model has improved the prediction accuracy and hit rates of directional prediction. The proposed model involves three main steps, i.e., decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) for simplifying the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; integrating all predicted IMFs for the ensemble result as the final prediction by another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models for its higher prediction accuracy and hit rates of directional prediction.

  12. A graph decomposition-based approach for water distribution network optimization

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.; Deuerlein, Jochen W.

    2013-04-01

    A novel optimization approach for water distribution network design is proposed in this paper. Using graph theory algorithms, a full water network is first decomposed into different subnetworks based on the connectivity of the network's components. The original whole network is simplified to a directed augmented tree, in which the subnetworks are substituted by augmented nodes and directed links are created to connect them. Differential evolution (DE) is then employed to optimize each subnetwork based on the sequence specified by the assigned directed links in the augmented tree. Rather than optimizing the original network as a whole, the subnetworks are sequentially optimized by the DE algorithm. A solution choice table is established for each subnetwork (except for the subnetwork that includes a supply node) and the optimal solution of the original whole network is finally obtained by use of the solution choice tables. Furthermore, a preconditioning algorithm is applied to the subnetworks to produce an approximately optimal solution for the original whole network. This solution specifies promising regions for the final optimization algorithm to further optimize the subnetworks. Five water network case studies are used to demonstrate the effectiveness of the proposed optimization method. A standard DE algorithm (SDE) and a genetic algorithm (GA) are applied to each case study without network decomposition to enable a comparison with the proposed method. The results show that the proposed method consistently outperforms the SDE and GA (both with tuned parameters) in terms of both the solution quality and efficiency.
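
    A toy version of the first step, decomposing a network by connectivity, can be written with NetworkX; the graph and the cut-at-bridges rule below are simplified stand-ins for the paper's graph-theory algorithms:

      import networkx as nx

      # Toy water network: nodes are junctions, edges are pipes.
      G = nx.Graph()
      G.add_edges_from([(0, 1), (1, 2), (2, 0),    # looped subnetwork A
                        (2, 3),                    # connecting pipe (bridge)
                        (3, 4), (4, 5), (5, 3)])   # looped subnetwork B

      # Cut at bridges to obtain subnetworks.
      bridges = list(nx.bridges(G))
      H = G.copy()
      H.remove_edges_from(bridges)
      subnetworks = [sorted(c) for c in nx.connected_components(H)]

      print(subnetworks)   # [[0, 1, 2], [3, 4, 5]]
      print(bridges)       # [(2, 3)]: becomes a directed link in the augmented tree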

  13. Early stage litter decomposition across biomes

    Treesearch

    Ika Djukic; Sebastian Kepfer-Rojas; Inger Kappel Schmidt; Klaus Steenberg Larsen; Claus Beier; Björn Berg; Kris Verheyen; Adriano Caliman; Alain Paquette; Alba Gutiérrez-Girón; Alberto Humber; Alejandro Valdecantos; Alessandro Petraglia; Heather Alexander; Algirdas Augustaitis; Amélie Saillard; Ana Carolina Ruiz Fernández; Ana I. Sousa; Ana I. Lillebø; Anderson da Rocha Gripp; André-Jean Francez; Andrea Fischer; Andreas Bohner; Andrey Malyshev; Andrijana Andrić; Andy Smith; Angela Stanisci; Anikó Seres; Anja Schmidt; Anna Avila; Anne Probst; Annie Ouin; Anzar A. Khuroo; Arne Verstraeten; Arely N. Palabral-Aguilera; Artur Stefanski; Aurora Gaxiola; Bart Muys; Bernard Bosman; Bernd Ahrends; Bill Parker; Birgit Sattler; Bo Yang; Bohdan Juráni; Brigitta Erschbamer; Carmen Eugenia Rodriguez Ortiz; Casper T. Christiansen; E. Carol Adair; Céline Meredieu; Cendrine Mony; Charles A. Nock; Chi-Ling Chen; Chiao-Ping Wang; Christel Baum; Christian Rixen; Christine Delire; Christophe Piscart; Christopher Andrews; Corinna Rebmann; Cristina Branquinho; Dana Polyanskaya; David Fuentes Delgado; Dirk Wundram; Diyaa Radeideh; Eduardo Ordóñez-Regil; Edward Crawford; Elena Preda; Elena Tropina; Elli Groner; Eric Lucot; Erzsébet Hornung; Esperança Gacia; Esther Lévesque; Evanilde Benedito; Evgeny A. Davydov; Evy Ampoorter; Fabio Padilha Bolzan; Felipe Varela; Ferdinand Kristöfel; Fernando T. Maestre; Florence Maunoury-Danger; Florian Hofhansl; Florian Kitz; Flurin Sutter; Francisco Cuesta; Francisco de Almeida Lobo; Franco Leandro de Souza; Frank Berninger; Franz Zehetner; Georg Wohlfahrt; George Vourlitis; Geovana Carreño-Rocabado; Gina Arena; Gisele Daiane Pinha; Grizelle González; Guylaine Canut; Hanna Lee; Hans Verbeeck; Harald Auge; Harald Pauli; Hassan Bismarck Nacro; Héctor A. Bahamonde; Heike Feldhaar; Heinke Jäger; Helena C. Serrano; Hélène Verheyden; Helge Bruelheide; Henning Meesenburg; Hermann Jungkunst; Hervé Jactel; Hideaki Shibata; Hiroko Kurokawa; Hugo López Rosas; Hugo L. Rojas Villalobos; Ian Yesilonis; Inara Melece; Inge Van Halder; Inmaculada García Quirós; Isaac Makelele; Issaka Senou; István Fekete; Ivan Mihal; Ivika Ostonen; Jana Borovská; Javier Roales; Jawad Shoqeir; Jean-Christophe Lata; Jean-Paul Theurillat; Jean-Luc Probst; Jess Zimmerman; Jeyanny Vijayanathan; Jianwu Tang; Jill Thompson; Jiří Doležal; Joan-Albert Sanchez-Cabeza; Joël Merlet; Joh Henschel; Johan Neirynck; Johannes Knops; John Loehr; Jonathan von Oppen; Jónína Sigríður Þorláksdóttir; Jörg Löffler; José-Gilberto Cardoso-Mohedano; José-Luis Benito-Alonso; Jose Marcelo Torezan; Joseph C. Morina; Juan J. Jiménez; Juan Dario Quinde; Juha Alatalo; Julia Seeber; Jutta Stadler; Kaie Kriiska; Kalifa Coulibaly; Karibu Fukuzawa; Katalin Szlavecz; Katarína Gerhátová; Kate Lajtha; Kathrin Käppeler; Katie A. Jennings; Katja Tielbörger; Kazuhiko Hoshizaki; Ken Green; Lambiénou Yé; Laryssa Helena Ribeiro Pazianoto; Laura Dienstbach; Laura Williams; Laura Yahdjian; Laurel M. Brigham; Liesbeth van den Brink; Lindsey Rustad; al. et

    2018-01-01

    Through litter decomposition, enormous amounts of carbon are emitted to the atmosphere. Numerous large-scale decomposition experiments have been conducted focusing on this fundamental soil process in order to understand the controls on the terrestrial carbon transfer to the atmosphere. However, previous studies were mostly based on site-specific litter and methodologies...

  14. An Efficient Approach for Pixel Decomposition to Increase the Spatial Resolution of Land Surface Temperature Images from MODIS Thermal Infrared Band Data

    PubMed Central

    Wang, Fei; Qin, Zhihao; Li, Wenjuan; Song, Caiying; Karnieli, Arnon; Zhao, Shuhe

    2015-01-01

    Land surface temperature (LST) images retrieved from the thermal infrared (TIR) band data of the Moderate Resolution Imaging Spectroradiometer (MODIS) have much lower spatial resolution than the MODIS visible and near-infrared (VNIR) band data. The coarse pixel scale of MODIS LST images (1000 m at nadir) has limited their applicability to studies requiring high spatial resolution, especially in comparison with the MODIS VNIR band data, whose pixel scale is 250–500 m. In this paper we develop an efficient approach for pixel decomposition to increase the spatial resolution of MODIS LST images using the VNIR band data as assistance. The unique feature of this approach is that the thermal radiance of parent pixels in the MODIS LST image remains unchanged after they are decomposed into the sub-pixels of the resulting image. There are two important steps in the decomposition: initial temperature estimation and final temperature determination. The approach can therefore be termed double-step pixel decomposition (DSPD). Both steps involve a series of procedures to achieve the final decomposed LST image, including classification of the surface patterns, establishment of LST change with the normalized difference vegetation index (NDVI) and the normalized difference building index (NDBI), conversion of LST into thermal radiance through the Planck equation, and computation of weights for the sub-pixels of the resulting image. Since the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), with much higher spatial resolution than MODIS, is on board the same platform (Terra), an experiment was conducted to validate the accuracy and efficiency of our approach for pixel decomposition. The ASTER LST image was used as the reference against the decomposed LST image. The results showed that the spatial distribution of the decomposed LST image was very similar to that of the ASTER LST image, with a root mean square error (RMSE) of 2.7 K for the entire image. Comparison with the evaluation DisTrad (E-DisTrad) and re-sampling methods for pixel decomposition also indicates that our DSPD has the lowest RMSE in all cases, including urban regions, water bodies, and natural terrain. The obvious increase in spatial resolution considerably improves the capability of coarse MODIS LST images to reveal the details of LST variation. It can therefore be concluded that, in spite of its complicated procedures, the proposed DSPD approach provides an alternative way to improve the spatial resolution of MODIS LST images and hence expands their applicability to real-world problems. PMID:25609048
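
    The radiance-conservation constraint at the heart of DSPD can be shown in a few lines. In this minimal sketch the sub-pixel weights are hypothetical (in the paper they come from NDVI/NDBI regressions and Planck-equation conversions); the point is only that rescaling the weights keeps the parent pixel's mean radiance unchanged.

```python
# Minimal sketch of the DSPD radiance-conservation step (weights illustrative).
import numpy as np

def decompose_parent_pixel(parent_radiance, subpixel_weights):
    """Distribute one coarse pixel's thermal radiance over its sub-pixels.

    subpixel_weights: positive initial estimates for each sub-pixel; they are
    rescaled so the mean sub-pixel radiance equals the parent radiance.
    """
    w = np.asarray(subpixel_weights, dtype=float)
    return parent_radiance * w / w.mean()

sub = decompose_parent_pixel(9.5, [[1.02, 0.98], [1.05, 0.95]])
print(sub, sub.mean())  # the mean stays at the parent value, 9.5
```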

  15. Thermal decomposition of high-nitrogen energetic compounds: TAGzT and GUzT

    NASA Astrophysics Data System (ADS)

    Hayden, Heather F.

    The U.S. Navy is exploring high-nitrogen compounds as burning-rate additives to meet the growing demands of future high-performance gun systems. Two high-nitrogen compounds investigated as potential burning-rate additives are bis(triaminoguanidinium) 5,5'-azobitetrazolate (TAGzT) and bis(guanidinium) 5,5'-azobitetrazolate (GUzT). Small-scale tests showed that formulations containing TAGzT exhibit significant increases in the burning rates of RDX-based gun propellants. However, when GUzT, a similarly structured molecule, was incorporated into the formulation, there was essentially no effect on the burning rate of the propellant. Through the use of simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) and Fourier-transform ion cyclotron resonance (FTICR) mass spectrometry methods, an investigation of the underlying chemical and physical processes that control the thermal decomposition behavior of TAGzT and GUzT, alone and in the presence of RDX, was conducted. The objective was to determine why GUzT is not as good a burning-rate enhancer in RDX-based gun propellants as TAGzT. The results show that TAGzT is an effective burning-rate modifier in the presence of RDX because the decomposition of TAGzT alters the initial stages of the decomposition of RDX. Hydrazine, formed in the decomposition of TAGzT, reacts with RDX faster than RDX can decompose itself. The reactions occur at temperatures below the melting point of RDX, and thus the TAGzT decomposition products react with RDX in the gas phase. Although no hydrazine is formed in the decomposition of GUzT, amines formed in its decomposition react with aldehydes formed in the decomposition of RDX, resulting in an increased reaction rate of RDX in the presence of GUzT. However, GUzT is not an effective burning-rate modifier because its decomposition does not alter the initial gas-phase decomposition of RDX. The decomposition of GUzT occurs at temperatures above the melting point of RDX. Therefore, the decomposition of GUzT affects reactions that are dominant in the liquid phase of RDX. Although GUzT is not an effective burning-rate modifier, the reaction between amines formed in its decomposition and aldehydes formed in the decomposition of RDX may have implications from an insensitive-munitions perspective.

  16. Novel approaches to address spectral distortions in photon counting x-ray CT using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Touch, M.; Clark, D. P.; Barber, W.; Badea, C. T.

    2016-04-01

    Spectral CT using a photon-counting x-ray detector (PCXD) can potentially increase the accuracy of measuring tissue composition. However, PCXD spectral measurements suffer from distortion due to charge sharing, pulse pileup, and K-escape energy loss. This study proposes two novel artificial neural network (ANN)-based algorithms: one to model and compensate for the distortion, and another to correct for the distortion directly. The ANN-based distortion model was obtained by training on a set of projections from a calibration scan. The learned distortion model was then applied in the forward statistical model to compensate for distortion during projection decomposition. An ANN was also used to learn to correct distortions directly in the projections. The resulting corrected projections were used for reconstructing the image, denoising via joint bilateral filtration, and decomposition into three material basis functions: Compton scattering, the photoelectric effect, and iodine. The ANN-based distortion model proved more robust to noise and performed better than an imperfect parametric distortion model. In the presence of noise, the mean relative errors in iodine concentration estimation were 11.82% (ANN distortion model) and 16.72% (parametric model). With distortion correction, the mean relative error in iodine concentration estimation was improved by 50% over direct decomposition from distorted data. With our joint bilateral filtration, the resulting material image quality and iodine detectability, as defined by the contrast-to-noise ratio, were greatly enhanced, allowing iodine concentrations as low as 2 mg/ml to be detected. Future work will be dedicated to experimental evaluation of our ANN-based methods using 3D-printed phantoms.

  17. Projection decomposition algorithm for dual-energy computed tomography via deep neural network.

    PubMed

    Xu, Yifu; Yan, Bin; Chen, Jian; Zeng, Lei; Li, Lei

    2018-03-15

    Dual-energy computed tomography (DECT) has been widely used to improve the identification of substances using spectral information. Decomposition of mixed test samples into two materials relies on a well-calibrated material decomposition function. This work aims to establish and validate a data-driven algorithm for estimation of the decomposition function. A deep neural network (DNN) consisting of two sub-nets is proposed to solve the projection decomposition problem. The compressing sub-net, essentially a stacked auto-encoder (SAE), learns a compact representation of the energy spectrum. The decomposing sub-net, with a two-layer structure, fits the nonlinear transform between the energy projections and the basis material thicknesses. The proposed DNN not only delivers images with lower standard deviation and higher quality on both simulated and real data, but also yields the best performance in cases contaminated with photon noise. Moreover, the DNN takes only 0.4 s to generate a decomposition solution at a 360 × 512 scale, about 200 times faster than the competing algorithms. The DNN model is applicable to decomposition tasks with different dual-energy settings. Experimental results demonstrated the strong function-fitting ability of the DNN. Thus, the deep learning paradigm provides a promising approach to solving the nonlinear decomposition problem in DECT.
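
    A minimal sketch of the data-driven decomposition idea, with an off-the-shelf MLP standing in for the paper's SAE-plus-decomposing-net architecture: the network learns the map from dual-energy log-projections to basis-material thicknesses on synthetic data. The attenuation coefficients are illustrative, not calibrated values.

```python
# Minimal sketch: learn a dual-energy projection decomposition from data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = rng.uniform(0, 5, size=(5000, 2))        # thicknesses of materials A and B
MU = np.array([[0.5, 0.3],                   # mu(A), mu(B) at the low energy
               [0.2, 0.25]])                 # mu(A), mu(B) at the high energy
proj = t @ MU.T + 0.01 * rng.standard_normal((5000, 2))  # noisy log-projections

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(proj, t)                           # fit the decomposition function

test = np.array([[2.0, 1.0]]) @ MU.T         # projection of a known sample
print(model.predict(test))                   # approximately [2.0, 1.0]
```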

  18. Layout compliance for triple patterning lithography: an iterative approach

    NASA Astrophysics Data System (ADS)

    Yu, Bei; Garreton, Gilda; Pan, David Z.

    2014-10-01

    As the semiconductor process scales further down, the industry encounters many lithography-related issues. At the 14nm logic node and beyond, triple patterning lithography (TPL) is one of the most promising techniques for the Metal1 layer and possibly the Via0 layer. Layout decomposition, one of the most challenging problems in TPL, has recently received increasing attention from both industry and academia. Ideally, the decomposer should point out locations in the layout that are not triple-patterning decomposable so that designers can intervene manually. A traditional decomposition flow is an iterative process in which each iteration consists of an automatic layout decomposition step and a manual layout modification task. However, due to the NP-hardness of triple patterning layout decomposition, automatic full-chip layout decomposition requires long computational times, and design closure issues therefore linger in the traditional flow. To address this issue, we present a novel incremental layout decomposition framework to accelerate iterative decomposition. In the first iteration, our decomposer not only points out all conflicts but also provides suggestions for fixing them. After the layout modification, instead of solving the full-chip problem from scratch, our decomposer provides a quick solution for a selected portion of the layout. We believe this framework is efficient in terms of performance and is designer friendly.
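
    At its core, deciding triple-patterning decomposability is 3-coloring of the conflict graph (features closer than the coloring distance share an edge). The sketch below is a minimal backtracking colorer, without stitch insertion or full-chip scaling; it returns None exactly when no 3-mask assignment exists, the situation a decomposer must report back to the designer.

```python
# Minimal sketch: triple patterning assignment as conflict-graph 3-coloring.
def three_color(conflicts, n):
    """conflicts: adjacency sets for n features; returns a color list or None."""
    colors = [-1] * n

    def backtrack(v):
        if v == n:
            return True
        for c in range(3):  # the three masks
            if all(colors[u] != c for u in conflicts[v]):
                colors[v] = c
                if backtrack(v + 1):
                    return True
        colors[v] = -1      # no mask fits this feature under current choices
        return False

    return colors if backtrack(0) else None

# K4 is not 3-colorable: the layout needs designer modification.
k4 = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
print(three_color(k4, 4))                                  # None
cycle5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(three_color(cycle5, 5))                              # a valid coloring
```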

  19. Brain extraction from normal and pathological images: A joint PCA/Image-Reconstruction approach.

    PubMed

    Han, Xu; Kwitt, Roland; Aylward, Stephen; Bakas, Spyridon; Menze, Bjoern; Asturias, Alexander; Vespa, Paul; Van Horn, John; Niethammer, Marc

    2018-08-01

    Brain extraction from 3D medical images is a common pre-processing step. A variety of approaches exist, but they are frequently only designed to perform brain extraction from images without strong pathologies. Extracting the brain from images exhibiting strong pathologies, for example, the presence of a brain tumor or of a traumatic brain injury (TBI), is challenging. In such cases, tissue appearance may substantially deviate from normal tissue appearance and hence violate algorithmic assumptions for standard approaches to brain extraction; consequently, the brain may not be correctly extracted. This paper proposes a brain extraction approach which can explicitly account for pathologies by jointly modeling normal tissue appearance and pathologies. Specifically, our model uses a three-part image decomposition: (1) normal tissue appearance is captured by principal component analysis (PCA), (2) pathologies are captured via a total variation term, and (3) the skull and surrounding tissue are captured by a sparsity term. Due to its convexity, the resulting decomposition model allows for efficient optimization. Decomposition and image registration steps are alternated to allow statistical modeling of normal tissue appearance in a fixed atlas coordinate system. As a beneficial side effect, the decomposition model allows for the identification of potentially pathological areas and the reconstruction of a quasi-normal image in atlas space. We demonstrate the effectiveness of our approach on four datasets: the publicly available IBSR and LPBA40 datasets which show normal image appearance, the BRATS dataset containing images with brain tumors, and a dataset containing clinical TBI images. We compare the performance with other popular brain extraction models: ROBEX, BEaST, MASS, BET, BSE and a recently proposed deep learning approach. Our model performs better than these competing approaches on all four datasets. Specifically, our model achieves the best median (97.11) and mean (96.88) Dice scores over all datasets. The two best performing competitors, ROBEX and MASS, achieve scores of 96.23/95.62 and 96.67/94.25, respectively. Hence, our approach is an effective method for high quality brain extraction for a wide variety of images. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. Vertically-oriented graphenes supported Mn3O4 as advanced catalysts in post plasma-catalysis for toluene decomposition

    NASA Astrophysics Data System (ADS)

    Bo, Zheng; Hao, Han; Yang, Shiling; Zhu, Jinhui; Yan, Jianhua; Cen, Kefa

    2018-04-01

    This work reports the catalytic performance of vertically-oriented graphene (VG) supported manganese oxide catalysts for toluene decomposition in a post plasma-catalysis (PPC) system. Dense networks of VGs were synthesized on carbon paper (CP) via a microwave plasma-enhanced chemical vapor deposition (PECVD) method. A constant current approach was applied in a conventional three-electrode electrochemical system for the electrodeposition of Mn3O4 catalysts on the VGs. The as-obtained catalysts were characterized and investigated for ozone conversion and toluene decomposition in a PPC system. Experimental results show that the Mn3O4 catalyst loading mass on VG-coated CP was significantly higher than that on pristine CP (almost 1.8 times for an electrodeposition current of 10 mA). Moreover, the decoration with VGs led to both enhanced catalytic activity for ozone conversion and increased toluene decomposition, showing great promise in PPC systems for the effective decomposition of volatile organic compounds.

  1. 3D quantitative analysis of early decomposition changes of the human face.

    PubMed

    Caplova, Zuzana; Gibelli, Daniele Maria; Poppa, Pasquale; Cummaudo, Marco; Obertova, Zuzana; Sforza, Chiarella; Cattaneo, Cristina

    2018-03-01

    Decomposition of the human body and human face is influenced, among other things, by environmental conditions. The early decomposition changes that modify the appearance of the face may hamper the recognition and identification of the deceased. Quantitative assessment of those changes may provide important information for forensic identification. This report presents a pilot 3D quantitative approach for tracking early decomposition changes of a single cadaver under controlled environmental conditions, summarizing the changes with weekly morphological descriptions. The root mean square (RMS) value was used to evaluate the changes of the face after death. The results showed a high correlation (r = 0.863) between the measured RMS and the time since death. RMS values of each scan are presented, as well as the average weekly RMS values. The quantification of decomposition changes could improve the accuracy of antemortem facial approximation and potentially allow direct comparisons of antemortem and postmortem 3D scans.
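
    A minimal sketch of the RMS metric between a baseline facial scan and a later scan, assuming the scans are already registered; nearest-neighbour vertex distances stand in for the surface comparison performed by dedicated 3D software.

```python
# Minimal sketch: RMS change between two registered 3D scans (synthetic points).
import numpy as np
from scipy.spatial import cKDTree

def rms_change(baseline_pts, later_pts):
    """RMS of distances from each later-scan vertex to the nearest baseline
    vertex; a larger RMS indicates a larger decomposition change."""
    d, _ = cKDTree(baseline_pts).query(later_pts)
    return float(np.sqrt(np.mean(d ** 2)))

rng = np.random.default_rng(3)
base = rng.random((2000, 3))
week1 = base + 0.002 * rng.standard_normal(base.shape)    # slight change
week4 = base + 0.010 * rng.standard_normal(base.shape)    # larger change
print(rms_change(base, week1) < rms_change(base, week4))  # True
```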

  2. Forward Looking Radar Imaging by Truncated Singular Value Decomposition and Its Application for Adverse Weather Aircraft Landing.

    PubMed

    Huang, Yulin; Zha, Yuebo; Wang, Yue; Yang, Jianyu

    2015-06-18

    Forward looking radar imaging is a practical and challenging problem for the adverse weather aircraft landing industry. Deconvolution methods can achieve forward looking imaging, but they often amplify noise in the radar image. In this paper, a forward looking radar imaging method based on deconvolution is presented for adverse weather aircraft landing. We first present the theoretical background of the forward looking radar imaging task and its application to aircraft landing. Then, we convert the forward looking radar imaging task into a corresponding deconvolution problem, which is solved in the framework of algebraic theory using the truncated singular value decomposition (TSVD) method. The key issue of selecting the truncation parameter is addressed using the generalized cross validation (GCV) approach. Simulation and experimental results demonstrate that the proposed method is effective in achieving angular resolution enhancement while suppressing noise amplification in forward looking radar imaging.
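
    The TSVD step itself is compact: keeping only the first k singular triplets suppresses the noise amplification caused by small singular values. In this minimal sketch the truncation level is swept by hand, whereas the paper selects it by generalized cross validation, and a Gaussian blur stands in for the real antenna pattern.

```python
# Minimal sketch: truncated SVD deconvolution of y = A x + noise.
import numpy as np

rng = np.random.default_rng(0)
n = 80
# Toeplitz-like Gaussian smoothing kernel standing in for the antenna pattern.
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 3.0) ** 2)
x_true = np.zeros(n)
x_true[[20, 45, 60]] = [1.0, 0.7, 0.9]          # point targets
y = A @ x_true + 0.01 * rng.standard_normal(n)

U, s, Vt = np.linalg.svd(A)

def tsvd(k):
    # Keep the k largest singular triplets; discard the noise-dominated rest.
    return Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

for k in (10, 25, 60):
    print(k, np.linalg.norm(tsvd(k) - x_true))  # error vs. truncation level
```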

  3. A study of the parallel algorithm for large-scale DC simulation of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Cortés Udave, Diego Ernesto; Ogrodzki, Jan; Gutiérrez de Anda, Miguel Angel

    Newton-Raphson DC analysis of large-scale nonlinear circuits may be an extremely time-consuming process even if sparse matrix techniques and bypassing of nonlinear model calculations are used. A slight decrease in the time required for this task may be achieved on multi-core, multithreaded computers if the calculation of the mathematical models for the nonlinear elements, as well as the stamp management of the sparse matrix entries, is handled by concurrent processes. The numerical complexity can be further reduced via circuit decomposition and parallel solution of blocks, taking the bordered block diagonal (BBD) matrix structure as a departure point. This block-parallel approach may yield considerable gains, though it is strongly dependent on the system topology and, of course, on the processor type. This contribution presents an easily parallelizable decomposition-based algorithm for DC simulation and provides a detailed study of its effectiveness.

  4. Real Gas Computation Using an Energy Relaxation Method and High-Order WENO Schemes

    NASA Technical Reports Server (NTRS)

    Montarnal, Philippe; Shu, Chi-Wang

    1998-01-01

    In this paper, we use a recently developed energy relaxation theory by Coquel and Perthame and high order weighted essentially non-oscillatory (WENO) schemes to simulate the Euler equations of real gas. The main idea is an energy decomposition into two parts: one part is associated with a simpler pressure law and the other part (the nonlinear deviation) is convected with the flow. A relaxation process is performed for each time step to ensure that the original pressure law is satisfied. The necessary characteristic decomposition for the high order WENO schemes is performed on the characteristic fields based on the first part. The algorithm only calls for the original pressure law once per grid point per time step, without the need to compute its derivatives or any Riemann solvers. Both one and two dimensional numerical examples are shown to illustrate the effectiveness of this approach.

  5. Adaptive-projection intrinsically transformed multivariate empirical mode decomposition in cooperative brain-computer interface applications.

    PubMed

    Hemakom, Apit; Goverdovsky, Valentin; Looney, David; Mandic, Danilo P

    2016-04-13

    An extension to multivariate empirical mode decomposition (MEMD), termed adaptive-projection intrinsically transformed MEMD (APIT-MEMD), is proposed to cater for power imbalances and inter-channel correlations in real-world multichannel data. It is shown that the APIT-MEMD exhibits similar or better performance than MEMD for a large number of projection vectors, whereas it outperforms MEMD for the critical case of a small number of projection vectors within the sifting algorithm. We also employ the noise-assisted APIT-MEMD within our proposed intrinsic multiscale analysis framework and illustrate the advantages of such an approach in notoriously noise-dominated cooperative brain-computer interface (BCI) based on the steady-state visual evoked potentials and the P300 responses. Finally, we show that for a joint cognitive BCI task, the proposed intrinsic multiscale analysis framework improves system performance in terms of the information transfer rate. © 2016 The Author(s).

  6. Ground cross-modal impedance as a tool for analyzing ground/plate interaction and ground wave propagation.

    PubMed

    Grau, L; Laulagnet, B

    2015-05-01

    An analytical approach to modeling ground-plate interaction based on modal decomposition and the two-dimensional Fourier transform is investigated. A finite rectangular plate subjected to flexural vibration is coupled with the ground and modeled under the Kirchhoff hypothesis. A Navier equation represents the stratified ground, assumed infinite in the x- and y-directions and free at the top surface. To obtain an analytical solution, modal decomposition is applied to the structure and a Fourier transform is applied to the ground. The result is a new tool for analyzing ground-plate interaction: the ground cross-modal impedance. It allows quantification of the added stiffness, added mass, and added damping from the ground to the structure. The similarity with the analogous acoustic problem is highlighted. A comparison between theory and experiment shows good agreement. Finally, specific cases are investigated, notably the influence of layer depth on plate vibration.

  7. Towards estimation of respiratory muscle effort with respiratory inductance plethysmography signals and complementary ensemble empirical mode decomposition.

    PubMed

    Chen, Ya-Chen; Hsiao, Tzu-Chien

    2018-07-01

    The respiratory inductance plethysmography (RIP) sensor is an inexpensive, non-invasive, easy-to-use transducer for collecting respiratory movement data. Studies have reported that the RIP signal's amplitude and frequency can be used to discriminate respiratory diseases. However, with the conventional approach to RIP data analysis, respiratory muscle effort cannot be estimated. In this paper, estimation of respiratory muscle effort from the RIP signal is proposed. A complementary ensemble empirical mode decomposition method was used to extract hidden signals from the RIP signals based on the frequency bands of the activities of different respiratory muscles. To validate the proposed method, an experiment was conducted to collect subjects' RIP signals under thoracic breathing (TB) and abdominal breathing (AB). The experimental results for both TB and AB indicate that the proposed method can be used to loosely estimate the activities of the thoracic muscles, abdominal muscles, and diaphragm.

  8. Acidic attack of perfluorinated alkyl ether lubricant molecules by metal oxide surfaces

    NASA Technical Reports Server (NTRS)

    Zehe, Michael J.; Faut, Owen D.

    1989-01-01

    The reactions of linear perfluoropolyalkylether (PFAE) lubricants with alpha-Fe2O3 and Fe2O3-based solid superacids were studied. The reaction with alpha-Fe2O3 proceeds in two stages. The first stage is an initial slow catalytic decomposition of the fluid. This reaction releases reactive gaseous products which attack the metal oxide and convert it to FeF3. The second stage is a more rapid decomposition of the fluid, effected by the surface FeF3. A study of the initial breakdown step was performed using alpha-Fe2O3, alpha-Fe2O3 preconverted to FeF3, and sulfate-promoted alpha-Fe2O3 superacids. The results indicate that the breakdown reaction involves acidic attack at fluorine atoms on acetal carbons in the linear PFAE. Possible approaches to combat the problem are outlined.

  9. Effects of magnesium-based hydrogen storage materials on the thermal decomposition, burning rate, and explosive heat of ammonium perchlorate-based composite solid propellant.

    PubMed

    Liu, Leili; Li, Jie; Zhang, Lingyao; Tian, Siyu

    2018-01-15

    MgH2, Mg2NiH4, and Mg2CuH3 were prepared, and their structures and hydrogen storage properties were determined through X-ray photoelectron spectroscopy and thermal analysis. The effects of MgH2, Mg2NiH4, and Mg2CuH3 on the thermal decomposition, burning rate, and explosive heat of an ammonium perchlorate-based composite solid propellant were subsequently studied. Results indicated that MgH2, Mg2NiH4, and Mg2CuH3 can decrease the thermal decomposition peak temperature and increase the total released heat of decomposition; these compounds can thus promote the thermal decomposition of the propellant. The burning rates of the propellant increased when Mg-based hydrogen storage materials were used as promoters. The burning rates also increased when MgH2 was used instead of Al in the propellant, but the explosive heat was not increased, even though the combustion heat of MgH2 is higher than that of Al. A possible mechanism is proposed. Copyright © 2017. Published by Elsevier B.V.

  10. Influence of Different Forest System Management Practices on Leaf Litter Decomposition Rates, Nutrient Dynamics and the Activity of Ligninolytic Enzymes: A Case Study from Central European Forests

    PubMed Central

    Purahong, Witoon; Kapturska, Danuta; Pecyna, Marek J.; Schulz, Elke; Schloter, Michael; Buscot, François; Hofrichter, Martin; Krüger, Dirk

    2014-01-01

    Leaf litter decomposition is the key ecological process that determines the sustainability of managed forest ecosystems, however very few studies hitherto have investigated this process with respect to silvicultural management practices. The aims of the present study were to investigate the effects of forest management practices on leaf litter decomposition rates, nutrient dynamics (C, N, Mg, K, Ca, P) and the activity of ligninolytic enzymes. We approached these questions using a 473 day long litterbag experiment. We found that age-class beech and spruce forests (high forest management intensity) had significantly higher decomposition rates and nutrient release (most nutrients) than unmanaged deciduous forest reserves (P<0.05). The site with near-to-nature forest management (low forest management intensity) exhibited no significant differences in litter decomposition rate, C release, lignin decomposition, and C/N, lignin/N and ligninolytic enzyme patterns compared to the unmanaged deciduous forest reserves, but most nutrient dynamics examined in this study were significantly faster under such near-to-nature forest management practices. Analyzing the activities of ligninolytic enzymes provided evidence that different forest system management practices affect litter decomposition by changing microbial enzyme activities, at least over the investigated time frame of 473 days (laccase, P<0.0001; manganese peroxidase (MnP), P = 0.0260). Our results also indicate that lignin decomposition is the rate limiting step in leaf litter decomposition and that MnP is one of the key oxidative enzymes of litter degradation. We demonstrate here that forest system management practices can significantly affect important ecological processes and services such as decomposition and nutrient cycling. PMID:24699676

  11. Influence of different forest system management practices on leaf litter decomposition rates, nutrient dynamics and the activity of ligninolytic enzymes: a case study from central European forests.

    PubMed

    Purahong, Witoon; Kapturska, Danuta; Pecyna, Marek J; Schulz, Elke; Schloter, Michael; Buscot, François; Hofrichter, Martin; Krüger, Dirk

    2014-01-01

    Leaf litter decomposition is the key ecological process that determines the sustainability of managed forest ecosystems, however very few studies hitherto have investigated this process with respect to silvicultural management practices. The aims of the present study were to investigate the effects of forest management practices on leaf litter decomposition rates, nutrient dynamics (C, N, Mg, K, Ca, P) and the activity of ligninolytic enzymes. We approached these questions using a 473 day long litterbag experiment. We found that age-class beech and spruce forests (high forest management intensity) had significantly higher decomposition rates and nutrient release (most nutrients) than unmanaged deciduous forest reserves (P<0.05). The site with near-to-nature forest management (low forest management intensity) exhibited no significant differences in litter decomposition rate, C release, lignin decomposition, and C/N, lignin/N and ligninolytic enzyme patterns compared to the unmanaged deciduous forest reserves, but most nutrient dynamics examined in this study were significantly faster under such near-to-nature forest management practices. Analyzing the activities of ligninolytic enzymes provided evidence that different forest system management practices affect litter decomposition by changing microbial enzyme activities, at least over the investigated time frame of 473 days (laccase, P<0.0001; manganese peroxidase (MnP), P = 0.0260). Our results also indicate that lignin decomposition is the rate limiting step in leaf litter decomposition and that MnP is one of the key oxidative enzymes of litter degradation. We demonstrate here that forest system management practices can significantly affect important ecological processes and services such as decomposition and nutrient cycling.

  12. Image restoration for three-dimensional fluorescence microscopy using an orthonormal basis for efficient representation of depth-variant point-spread functions

    PubMed Central

    Patwary, Nurmohammed; Preza, Chrysanthe

    2015-01-01

    A depth-variant (DV) image restoration algorithm for wide field fluorescence microscopy, using an orthonormal basis decomposition of DV point-spread functions (PSFs), is investigated in this study. The efficient PSF representation is based on a previously developed principal component analysis (PCA), which is computationally intensive. We present an approach developed to reduce the number of DV PSFs required for the PCA computation, thereby making the PCA-based approach computationally tractable for thick samples. Restoration results from both synthetic and experimental images are consistent and show that the proposed algorithm efficiently addresses depth-induced aberration using a small number of principal components. Comparison of the PCA-based algorithm with a previously developed strata-based DV restoration algorithm demonstrates that the proposed method improves performance by 50% in terms of accuracy and simultaneously reduces the processing time by 64% using comparable computational resources. PMID:26504634
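
    A minimal sketch of the orthonormal-basis idea: a stack of depth-variant PSFs (toy Gaussians whose width grows with depth) is expanded in a few principal components via an SVD, so each depth's PSF reduces to a short coefficient vector. The paper's contribution is reducing how many PSFs enter this computation; the sketch shows only the representation itself.

```python
# Minimal sketch: PCA (via SVD) of a toy depth-variant PSF stack.
import numpy as np

depths, px = 40, 33
grid = np.linspace(-4, 4, px)
X, Y = np.meshgrid(grid, grid)
# Toy DV PSFs: Gaussians widening with depth, standing in for measured PSFs.
psfs = np.stack([np.exp(-(X**2 + Y**2) / (2 * (1 + 0.05 * z) ** 2))
                 for z in range(depths)])
flat = psfs.reshape(depths, -1)
mean = flat.mean(axis=0)
U, s, Vt = np.linalg.svd(flat - mean, full_matrices=False)

k = 3                                  # a few components suffice here
coeffs = U[:, :k] * s[:k]              # per-depth expansion coefficients
approx = coeffs @ Vt[:k] + mean        # reconstructed PSF stack
print(np.linalg.norm(flat - approx) / np.linalg.norm(flat))  # small error
```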

  13. A variance-decomposition approach to investigating multiscale habitat associations

    USGS Publications Warehouse

    Lawler, J.J.; Edwards, T.C.

    2006-01-01

    The recognition of the importance of spatial scale in ecology has led many researchers to take multiscale approaches to studying habitat associations. However, few of the studies that investigate habitat associations at multiple spatial scales have considered the potential effects of cross-scale correlations in measured habitat variables. When cross-scale correlations in such studies are strong, conclusions drawn about the relative strength of habitat associations at different spatial scales may be inaccurate. Here we adapt and demonstrate an analytical technique based on variance decomposition for quantifying the influence of cross-scale correlations on multiscale habitat associations. We used the technique to quantify the variation in nest-site locations of Red-naped Sapsuckers (Sphyrapicus nuchalis) and Northern Flickers (Colaptes auratus) associated with habitat descriptors at three spatial scales. We demonstrate how the method can be used to identify components of variation that are associated only with factors at a single spatial scale as well as shared components of variation that represent cross-scale correlations. Despite the fact that no explanatory variables in our models were highly correlated (r < 0.60), we found that shared components of variation reflecting cross-scale correlations accounted for roughly half of the deviance explained by the models. These results highlight the importance of both conducting habitat analyses at multiple spatial scales and of quantifying the effects of cross-scale correlations in such analyses. Given the limits of conventional analytical techniques, we recommend alternative methods, such as the variance-decomposition technique demonstrated here, for analyzing habitat associations at multiple spatial scales. © The Cooper Ornithological Society 2006.
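
    A minimal sketch of the variance-decomposition logic on synthetic variables: R² values from nested regressions are differenced to isolate the pure single-scale components and the shared component attributable to cross-scale correlation. (The paper partitions explained deviance in habitat models; ordinary least squares is used here for brevity.)

```python
# Minimal sketch: partitioning explained variation across two spatial scales.
import numpy as np
from sklearn.linear_model import LinearRegression

def r2(parts, y):
    X = np.column_stack(parts)
    return LinearRegression().fit(X, y).score(X, y)

rng = np.random.default_rng(0)
n = 500
local = rng.standard_normal(n)                           # e.g. nest-site scale
landscape = 0.7 * local + 0.7 * rng.standard_normal(n)   # cross-scale correlated
y = local + 0.5 * landscape + rng.standard_normal(n)

r_both = r2([local, landscape], y)
pure_local = r_both - r2([landscape], y)       # local-only component
pure_landscape = r_both - r2([local], y)       # landscape-only component
shared = r_both - pure_local - pure_landscape  # cross-scale (shared) component
print(f"pure local {pure_local:.2f}, pure landscape {pure_landscape:.2f}, "
      f"shared {shared:.2f}")
```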

  14. A Study of Strong Stability of Distributed Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Cataltepe, Tayfun

    1989-01-01

    The strong stability of distributed systems is studied and the problem of characterizing strongly stable semigroups of operators associated with distributed systems is addressed. The main emphasis is on contractive systems. Three different approaches to the characterization of strongly stable contractive semigroups are developed. The first is an operator-theoretic approach. Using the theory of dilations, it is shown that every strongly stable contractive semigroup is related to the left shift semigroup on an L^2 space. Then, a decomposition of the state space which identifies strongly stable and unstable states is introduced. Based on this decomposition, conditions for a contractive semigroup to be strongly stable are obtained. Finally, extensions of Lyapunov's equation for distributed parameter systems are investigated. Sufficient conditions for weak and strong stability of uniformly bounded semigroups are obtained by relaxing the equivalent norm condition on the right-hand side of the Lyapunov equation. These characterizations are then applied to the problem of feedback stabilization. First, it is shown via the state space decomposition that under certain conditions a contractive system (A,B) can be strongly stabilized by the feedback -B(*). Then, application of the extensions of the Lyapunov equation results in sufficient conditions for weak, strong, and exponential stabilization of contractive systems by the feedback -B(*). Finally, it is shown that for a contractive system dx/dt = Ax + Bu (where B is any bounded linear operator), there is a related linear quadratic regulator problem and a corresponding steady-state Riccati equation which always has a bounded nonnegative solution.

  15. In silico local structure approach: a case study on outer membrane proteins.

    PubMed

    Martin, Juliette; de Brevern, Alexandre G; Camproux, Anne-Claude

    2008-04-01

    The detection of Outer Membrane Proteins (OMP) in whole genomes is a question of current interest, and their sequence characteristics have thus been intensively studied. This class of proteins displays a common beta-barrel architecture, formed by adjacent antiparallel strands. However, due to the lack of available structures, few structural studies have been made on this class of proteins. Here we propose a novel investigation of OMP local structure, based on a structural alphabet approach, i.e., the decomposition of 3D structures using a library of four-residue protein fragments. The optimal decomposition of structures using a hidden Markov model results in a specific structural alphabet of 20 fragments, six of them dedicated to the decomposition of beta-strands. This optimal alphabet, called SA20-OMP, is analyzed in detail, in terms of local structures and transitions between fragments. It highlights a particular and strong organization of beta-strands as series of regular canonical structural fragments. The comparison with alphabets learned on globular structures indicates that the internal organization of OMP structures is more constrained than that of globular structures. The analysis of OMP structures using SA20-OMP reveals recurrent structural patterns. The preferred location of fragments in the distinct regions of the membrane is investigated. The study of the pairwise specificity of fragments reveals that some contacts between structural fragments in beta-sheets are clearly favored whereas others are avoided. This contact specificity is stronger in OMP than in globular structures. Moreover, SA20-OMP also captures sequential information. This can be integrated in a scoring function for structural model ranking, with very promising results. © 2007 Wiley-Liss, Inc.

  16. Flux Analysis of Free Amino Sugars and Amino Acids in Soils by Isotope Tracing with a Novel Liquid Chromatography/High Resolution Mass Spectrometry Platform.

    PubMed

    Hu, Yuntao; Zheng, Qing; Wanek, Wolfgang

    2017-09-05

    Soil fluxomics analysis can provide pivotal information for understanding soil biochemical pathways and their regulation, but direct measurement methods are rare. Here, we describe an approach to measure soil extracellular metabolite (amino sugar and amino acid) concentrations and fluxes based on a 15N isotope pool dilution technique via liquid chromatography and high-resolution mass spectrometry. We produced commercially unavailable 15N- and 13C-labeled amino sugars and amino acids by hydrolyzing peptidoglycan isolated from isotopically labeled bacterial biomass and used them as tracers (15N) and internal standards (13C). High-resolution (Orbitrap Exactive) MS with a resolution of 50 000 allowed us to separate different stable isotope labeled analogues across a large range of metabolites. The utilization of 13C internal standards greatly improved the accuracy and reliability of absolute quantification. We successfully applied this method to two types of soils and quantified the extracellular gross fluxes of 2 amino sugars, 18 amino acids, and 4 amino acid enantiomers. Compared to the influx and efflux rates of most amino acids, similar ones were found for glucosamine, indicating that this amino sugar is released through peptidoglycan and chitin decomposition and serves as an important nitrogen source for soil microorganisms. d-Alanine and d-glutamic acid derived from peptidoglycan decomposition exhibited similar turnover rates as their l-enantiomers. This novel approach offers new strategies to advance our understanding of the production and transformation pathways of soil organic N metabolites, including the unknown contributions of peptidoglycan and chitin decomposition to soil organic N cycling.

  17. An MPI + X implementation of contact global search using Kokkos

    DOE PAGES

    Hansen, Glen A.; Xavier, Patrick G.; Mish, Sam P.; ...

    2015-10-05

    This paper describes an approach that seeks to parallelize the spatial search associated with computational contact mechanics. In contact mechanics, the purpose of the spatial search is to find “nearest neighbors,” which is the prelude to an imprinting search that resolves the interactions between the external surfaces of contacting bodies. In particular, we are interested in the contact global search portion of the spatial search associated with this operation on domain-decomposition-based meshes. Specifically, we describe an implementation that combines standard domain-decomposition-based MPI-parallel spatial search with thread-level parallelism (MPI-X) available on advanced computer architectures (those with GPU coprocessors). Our goal is to demonstrate the efficacy of the MPI-X paradigm in the overall contact search. Standard MPI-parallel implementations typically use a domain decomposition of the external surfaces of bodies within the domain in an attempt to efficiently distribute computational work. This decomposition may or may not be the same as the volume decomposition associated with the host physics. The parallel contact global search phase is then employed to find and distribute surface entities (nodes and faces) that are needed to compute contact constraints between entities owned by different MPI ranks without further inter-rank communication. Key steps of the contact global search include computing bounding boxes, building surface entity (node and face) search trees, and finding and distributing entities required to complete on-rank (local) spatial searches. To enable source-code portability and performance across a variety of different computer architectures, we implemented the algorithm using the Kokkos hardware abstraction library. While we targeted development towards machines with a GPU accelerator per MPI rank, we also report performance results for OpenMP with a conventional multi-core compute node per rank. Results here demonstrate a 47% decrease in the time spent within the global search algorithm, comparing the reference ACME algorithm with the GPU implementation, on an 18M face problem using four MPI ranks. While further work remains to maximize performance on the GPU, this result illustrates the potential of the proposed implementation.

  18. Accuracy assessment of a surface electromyogram decomposition system in human first dorsal interosseus muscle

    NASA Astrophysics Data System (ADS)

    Hu, Xiaogang; Rymer, William Z.; Suresh, Nina L.

    2014-04-01

    Objective. The aim of this study is to assess the accuracy of a surface electromyogram (sEMG) motor unit (MU) decomposition algorithm during low levels of muscle contraction. Approach. A two-source method was used to verify the accuracy of the sEMG decomposition system, by utilizing simultaneous intramuscular and surface EMG recordings from the human first dorsal interosseous muscle recorded during isometric trapezoidal force contractions. Spike trains from each recording type were decomposed independently utilizing two different algorithms, EMGlab and dEMG decomposition algorithms. The degree of agreement of the decomposed spike timings was assessed for three different segments of the EMG signals, corresponding to specified regions in the force task. A regression analysis was performed to examine whether certain properties of the sEMG and force signal can predict the decomposition accuracy. Main results. The average accuracy of successful decomposition among the 119 MUs that were common to both intramuscular and surface records was approximately 95%, and the accuracy was comparable between the different segments of the sEMG signals (i.e., force ramp-up versus steady state force versus combined). The regression function between the accuracy and properties of sEMG and force signals revealed that the signal-to-noise ratio of the action potential and stability in the action potential records were significant predictors of the surface decomposition accuracy. Significance. The outcomes of our study confirm the accuracy of the sEMG decomposition algorithm during low muscle contraction levels and provide confidence in the overall validity of the surface dEMG decomposition algorithm.
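
    A minimal sketch of the two-source agreement computation on synthetic discharge times: each surface-detected discharge is matched to the nearest intramuscular discharge within a tolerance window, and the matched fraction serves as the decomposition accuracy.

```python
# Minimal sketch: spike-timing agreement between two decompositions.
import numpy as np

def agreement_rate(intra_times, surface_times, tol=0.005):
    """Fraction of surface-detected discharges lying within tol seconds of an
    unmatched intramuscular discharge (each reference spike matched once)."""
    matched, used = 0, set()
    for t in surface_times:
        j = int(np.argmin(np.abs(intra_times - t)))
        if abs(intra_times[j] - t) <= tol and j not in used:
            matched += 1
            used.add(j)
    return matched / len(surface_times)

intra = np.cumsum(np.full(100, 0.08))        # ~12.5 Hz discharge train
jitter = 0.002 * np.random.default_rng(0).standard_normal(100)
surface = intra + jitter                     # surface estimate with 2 ms jitter
print(agreement_rate(intra, surface))        # close to 1.0
```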

  19. Teaching a New Method of Partial Fraction Decomposition to Senior Secondary Students: Results and Analysis from a Pilot Study

    ERIC Educational Resources Information Center

    Man, Yiu-Kwong; Leung, Allen

    2012-01-01

    In this paper, we introduce a new approach to computing the partial fraction decompositions of rational functions and describe the results of its trials at three secondary schools in Hong Kong. The data were collected via quizzes, questionnaires and interviews. In general, according to the responses from the teachers and students concerned, this new…
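
    For readers who want to check decompositions mechanically (the paper's interest is the hand method itself), sympy's apart() produces the same result symbolically:

```python
# Cross-check of a partial fraction decomposition with sympy.
import sympy as sp

x = sp.symbols('x')
f = (3 * x + 5) / ((x - 1) * (x + 2))
print(sp.apart(f))  # 8/(3*(x - 1)) + 1/(3*(x + 2))
```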

  20. A Hybrid Method in Vegetation Height Estimation Using PolInSAR Images of Campaign BioSAR

    NASA Astrophysics Data System (ADS)

    Dehnavi, S.; Maghsoudi, Y.

    2015-12-01

    Recently, there has been plenty of research on the retrieval of forest height from PolInSAR data. This paper evaluates a hybrid method for vegetation height estimation based on L-band multi-polarized airborne SAR images. The SAR data used in this paper were collected by the airborne E-SAR system. The objective of this research is firstly to describe each interferometric cross-correlation as a sum of contributions corresponding to single bounce, double bounce, and volume scattering processes. Then, an ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) algorithm is implemented to determine the interferometric phase of each local scatterer (ground and canopy). Secondly, the canopy height is estimated by the phase differencing method, according to the RVOG (Random Volume Over Ground) concept. A strength of the applied model-based decomposition method is that it is not limited to a specific type of vegetation, unlike previous decomposition techniques. In fact, the use of a generalized probability density function based on the nth power of a cosine-squared function, characterized by two parameters, makes this method applicable to different vegetation types. Experimental results show the efficiency of the approach for vegetation height estimation in the test site.

  1. Spider foraging strategy affects trophic cascades under natural and drought conditions.

    PubMed

    Liu, Shengjie; Chen, Jin; Gan, Wenjin; Schaefer, Douglas; Gan, Jianmin; Yang, Xiaodong

    2015-07-23

    Spiders can cause trophic cascades affecting litter decomposition rates. However, it remains unclear how spiders with different foraging strategies influence faunal communities, or present cascading effects on decomposition. Furthermore, increased dry periods predicted in future climates will likely have important consequences for trophic interactions in detritus-based food webs. We investigated independent and interactive effects of spider predation and drought on litter decomposition in a tropical forest floor. We manipulated densities of dominant spiders with actively hunting or sit-and-wait foraging strategies in microcosms which mimicked the tropical-forest floor. We found a positive trophic cascade on litter decomposition was triggered by actively hunting spiders under ambient rainfall, but sit-and-wait spiders did not cause this. The drought treatment reversed the effect of actively hunting spiders on litter decomposition. Under drought conditions, we observed negative trophic cascade effects on litter decomposition in all three spider treatments. Thus, reduced rainfall can alter predator-induced indirect effects on lower trophic levels and ecosystem processes, and is an example of how such changes may alter trophic cascades in detritus-based webs of tropical forests.

  2. Spider foraging strategy affects trophic cascades under natural and drought conditions

    PubMed Central

    Liu, Shengjie; Chen, Jin; Gan, Wenjin; Schaefer, Douglas; Gan, Jianmin; Yang, Xiaodong

    2015-01-01

    Spiders can cause trophic cascades affecting litter decomposition rates. However, it remains unclear how spiders with different foraging strategies influence faunal communities, or present cascading effects on decomposition. Furthermore, increased dry periods predicted in future climates will likely have important consequences for trophic interactions in detritus-based food webs. We investigated independent and interactive effects of spider predation and drought on litter decomposition in a tropical forest floor. We manipulated densities of dominant spiders with actively hunting or sit-and-wait foraging strategies in microcosms which mimicked the tropical-forest floor. We found a positive trophic cascade on litter decomposition was triggered by actively hunting spiders under ambient rainfall, but sit-and-wait spiders did not cause this. The drought treatment reversed the effect of actively hunting spiders on litter decomposition. Under drought conditions, we observed negative trophic cascade effects on litter decomposition in all three spider treatments. Thus, reduced rainfall can alter predator-induced indirect effects on lower trophic levels and ecosystem processes, and is an example of how such changes may alter trophic cascades in detritus-based webs of tropical forests. PMID:26202370

  3. A Four-Stage Hybrid Model for Hydrological Time Series Forecasting

    PubMed Central

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of ‘denoising, decomposition and ensemble’. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models. PMID:25111782

  4. A four-stage hybrid model for hydrological time series forecasting.

    PubMed

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of 'denoising, decomposition and ensemble'. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models.
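
    A minimal sketch of the decomposition / per-component prediction / linear ensemble chain, assuming the third-party PyEMD package provides EEMD; a naive one-step persistence forecast stands in for the RBF network stage, and least squares plays the role of the linear (LNN) ensemble. The denoising stage is omitted.

```python
# Minimal sketch: EEMD decomposition, per-component forecasts, linear ensemble.
# Assumes the PyEMD package (pip install EMD-signal) for EEMD.
import numpy as np
from PyEMD import EEMD

rng = np.random.default_rng(0)
t = np.arange(600)
flow = 10 + np.sin(2 * np.pi * t / 50) + 0.3 * rng.standard_normal(600)

imfs = EEMD()(flow)                  # IMF components plus residual, (k, 600)
split = 500
# Per-component stand-in forecaster: one-step persistence, y_i(t) = imf_i(t-1).
comp_pred = imfs[:, split - 1:-1]
target = flow[split:]
# Ensemble stage: combine component forecasts with least-squares weights.
w, *_ = np.linalg.lstsq(comp_pred.T, target, rcond=None)
forecast = comp_pred.T @ w
print(np.sqrt(np.mean((forecast - target) ** 2)))  # RMSE of ensemble forecast
```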

  5. Antepartum fetal heart rate feature extraction and classification using empirical mode decomposition and support vector machine

    PubMed Central

    2011-01-01

    Background Cardiotocography (CTG) is the most widely used tool for fetal surveillance. The visual analysis of fetal heart rate (FHR) traces largely depends on the expertise and experience of the clinician involved. Several approaches have been proposed for the effective interpretation of FHR. In this paper, a new approach for FHR feature extraction based on empirical mode decomposition (EMD) is proposed, which was used along with a support vector machine (SVM) for the classification of FHR recordings as 'normal' or 'at risk'. Methods The FHR signals were recorded from 15 subjects at a sampling rate of 4 Hz, and a dataset consisting of 90 randomly selected records of 20-minute duration was formed from these. All records were labelled as 'normal' or 'at risk' by two experienced obstetricians. A training set was formed from 60 records, with the remaining 30 left as the testing set. The standard deviations of the EMD components are input as features to a support vector machine (SVM) to classify FHR samples. Results For the training set, a five-fold cross validation test resulted in an accuracy of 86%, whereas the overall geometric mean of sensitivity and specificity was 94.8%. The Kappa value for the training set was .923. Application of the proposed method to the testing set (30 records) resulted in a geometric mean of 81.5%. The Kappa value for the testing set was .684. Conclusions Based on the overall performance of the system, it can be stated that the proposed methodology is a promising new approach for the feature extraction and classification of FHR signals. PMID:21244712
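
    A minimal sketch of the feature pipeline on synthetic traces, assuming the third-party PyEMD package for EMD: standard deviations of the decomposed components feed an SVM classifier, mirroring the paper's feature choice. The simulated traces are illustrative, not real FHR recordings.

```python
# Minimal sketch: EMD-based features + SVM on synthetic FHR-like traces.
# Assumes the PyEMD package (pip install EMD-signal).
import numpy as np
from PyEMD import EMD
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def make_trace(at_risk):
    t = np.linspace(0, 300, 1200)          # 5 minutes sampled at 4 Hz
    base = 150.0 if at_risk else 140.0
    amp = 2.0 if at_risk else 8.0          # reduced variability when at risk
    return base + amp * np.sin(2 * np.pi * t / 20) + 3 * rng.standard_normal(t.size)

def features(sig, n_imfs=5):
    imfs = EMD()(sig, max_imf=n_imfs)
    stds = imfs.std(axis=1)[:n_imfs + 1]   # per-component standard deviations
    return np.pad(stds, (0, n_imfs + 1 - len(stds)))  # fixed-length vector

X = np.array([features(make_trace(i % 2 == 1)) for i in range(60)])
y = np.array([i % 2 for i in range(60)])
clf = SVC(kernel='rbf').fit(X[:40], y[:40])
print(clf.score(X[40:], y[40:]))           # held-out accuracy
```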

  6. Matrix- and tensor-based recommender systems for the discovery of currently unknown inorganic compounds

    NASA Astrophysics Data System (ADS)

    Seko, Atsuto; Hayashi, Hiroyuki; Kashima, Hisashi; Tanaka, Isao

    2018-01-01

    Chemically relevant compositions (CRCs) and atomic arrangements of inorganic compounds have been collected as inorganic crystal structure databases. Machine learning is a unique approach to search for currently unknown CRCs from vast candidates. Herein we propose matrix- and tensor-based recommender system approaches to predict currently unknown CRCs from database entries of CRCs. Firstly, the performance of the recommender system approaches to discover currently unknown CRCs is examined. A Tucker decomposition recommender system shows the best discovery rate of CRCs as the majority of the top 100 recommended ternary and quaternary compositions correspond to CRCs. Secondly, systematic density functional theory (DFT) calculations are performed to investigate the phase stability of the recommended compositions. The phase stability of the 27 compositions reveals that 23 currently unknown compounds are newly found to be stable. These results indicate that the recommender system has great potential to accelerate the discovery of new compounds.
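
    A minimal sketch of the tensor-recommender idea, assuming the tensorly package: a binary composition tensor is Tucker-decomposed and its low-rank reconstruction is used to score compositions absent from the database. The random 0/1 tensor below stands in for a real inorganic-compound database.

```python
# Minimal sketch: Tucker-decomposition recommender over a composition tensor.
# Assumes the tensorly package (pip install tensorly).
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

rng = np.random.default_rng(0)
known = (rng.random((20, 20, 20)) < 0.05).astype(float)  # known CRC entries

core, factors = tucker(tl.tensor(known), rank=[5, 5, 5])
scores = tl.to_numpy(tl.tucker_to_tensor((core, factors)))

scores[known == 1] = -np.inf        # hide compositions already in the database
order = np.argsort(scores, axis=None)[::-1][:5]
top = np.unravel_index(order, scores.shape)
print(list(zip(*top)))              # top candidate (A, B, C) index triples
```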

  7. A multi-domain spectral method for time-fractional differential equations

    NASA Astrophysics Data System (ADS)

    Chen, Feng; Xu, Qinwu; Hesthaven, Jan S.

    2015-07-01

    This paper proposes an approach for high-order time integration within a multi-domain setting for time-fractional differential equations. Since the kernel is singular or nearly singular, two main difficulties arise after the domain decomposition: how to properly account for the history/memory part and how to perform the integration accurately. To address these issues, we propose a novel hybrid approach for the numerical integration based on the combination of three-term-recurrence relations of Jacobi polynomials and high-order Gauss quadrature. The different approximations used in the hybrid approach are justified theoretically and through numerical examples. Based on this, we propose a new multi-domain spectral method for high-order accurate time integrations and study its stability properties by identifying the method as a generalized linear method. Numerical experiments confirm hp-convergence for both time-fractional differential equations and time-fractional partial differential equations.
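
    The quadrature building block can be illustrated with SciPy's Gauss-Jacobi rule, which absorbs exactly the endpoint-singular weights that arise from the kernel: an n-point rule integrates (1-x)^a (1+x)^b times any polynomial of degree up to 2n-1 exactly, so the 3-point and 10-point rules below agree to rounding error.

```python
# Minimal sketch: Gauss-Jacobi quadrature for a weakly singular weight.
import numpy as np
from scipy.special import roots_jacobi

alpha, beta = -0.5, 0.0        # weight (1 - x)^(-1/2), singular at x = 1
f = lambda t: t ** 4           # smooth polynomial factor of the integrand

x10, w10 = roots_jacobi(10, alpha, beta)
x3, w3 = roots_jacobi(3, alpha, beta)
print(np.sum(w10 * f(x10)), np.sum(w3 * f(x3)))  # identical: rules are exact
```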

  8. Prediction of S-wave velocity using complete ensemble empirical mode decomposition and neural networks

    NASA Astrophysics Data System (ADS)

    Gaci, Said; Hachay, Olga; Zaourar, Naima

    2017-04-01

    One of the key elements in hydrocarbon reservoirs characterization is the S-wave velocity (Vs). Since the traditional estimating methods often fail to accurately predict this physical parameter, a new approach that takes into account its non-stationary and non-linear properties is needed. In this view, a prediction model based on complete ensemble empirical mode decomposition (CEEMD) and a multiple layer perceptron artificial neural network (MLP ANN) is suggested to compute Vs from P-wave velocity (Vp). Using a fine-to-coarse reconstruction algorithm based on CEEMD, the Vp log data is decomposed into a high frequency (HF) component, a low frequency (LF) component and a trend component. Then, different combinations of these components are used as inputs of the MLP ANN algorithm for estimating Vs log. Applications on well logs taken from different geological settings illustrate that the predicted Vs values using MLP ANN with the combinations of HF, LF and trend in inputs are more accurate than those obtained with the traditional estimating methods. Keywords: S-wave velocity, CEEMD, multilayer perceptron neural networks.
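
    A rough sketch of the fine-to-coarse split feeding a regressor, assuming PyEMD's CEEMDAN (a close variant of CEEMD) and scikit-learn; `vp` and `vs` are synthetic stand-ins for the well-log curves, and the cut point between the HF and LF groups is an arbitrary illustrative choice.

```python
import numpy as np
from PyEMD import CEEMDAN
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
vp = np.cumsum(rng.standard_normal(2000)) + 3000.0  # stand-in P-wave log
vs = 0.6 * vp + 50 * rng.standard_normal(2000)      # stand-in S-wave log

imfs = CEEMDAN()(vp)              # rows: IMFs, fine (HF) to coarse (LF)
trend = vp - imfs.sum(axis=0)     # residue left after removing all IMFs
split = len(imfs) // 2            # simple fine-to-coarse cut point
hf = imfs[:split].sum(axis=0)
lf = imfs[split:].sum(axis=0)

X = np.column_stack([hf, lf, trend])
model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000)
model.fit(X[:1500], vs[:1500])            # train on the upper log section
print(model.score(X[1500:], vs[1500:]))   # R^2 on the held-out section
```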

  9. An Efficient Model-based Diagnosis Engine for Hybrid Systems Using Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Bregon, Anibal; Narasimhan, Sriram; Roychoudhury, Indranil; Daigle, Matthew; Pulido, Belarmino

    2013-01-01

    Complex hybrid systems are present in a large range of engineering applications, such as mechanical systems, electrical circuits, or embedded computation systems. The behavior of these systems is made up of continuous and discrete-event dynamics, which increases the difficulty of accurate and timely online fault diagnosis. The Hybrid Diagnosis Engine (HyDE) offers flexibility to the diagnosis application designer in choosing the modeling paradigm and the reasoning algorithms. The HyDE architecture supports the use of multiple modeling paradigms at the component and system level. However, HyDE faces some problems regarding performance in terms of complexity and time. Our focus in this paper is on developing efficient model-based methodologies for online fault diagnosis in complex hybrid systems. To do this, we propose a diagnosis framework where structural model decomposition is integrated within the HyDE diagnosis framework to reduce the computational complexity associated with the fault diagnosis of hybrid systems. As a case study, we apply our approach to a diagnostic testbed, the Advanced Diagnostics and Prognostics Testbed (ADAPT), using real data.

  10. Beyond Low Rank + Sparse: Multi-scale Low Rank Matrix Decomposition

    PubMed Central

    Ong, Frank; Lustig, Michael

    2016-01-01

    We present a natural generalization of the recent low rank + sparse matrix decomposition and consider the decomposition of matrices into components of multiple scales. Such decomposition is well motivated in practice as data matrices often exhibit local correlations at multiple scales. Concretely, we propose a multi-scale low rank modeling that represents a data matrix as a sum of block-wise low rank matrices with increasing scales of block sizes. We then consider the inverse problem of decomposing the data matrix into its multi-scale low rank components and approach the problem via a convex formulation. Theoretically, we show that under various incoherence conditions, the convex program recovers the multi-scale low rank components either exactly or approximately. Practically, we provide guidance on selecting the regularization parameters and incorporate cycle spinning to reduce blocking artifacts. Experimentally, we show that the multi-scale low rank decomposition provides a more intuitive decomposition than conventional low rank methods and demonstrate its effectiveness in four applications, including illumination normalization for face images, motion separation for surveillance videos, multi-scale modeling of dynamic contrast-enhanced magnetic resonance imaging, and collaborative filtering exploiting age information. PMID:28450978

  11. Design and Integration for High Performance Robotic Systems Based on Decomposition and Hybridization Approaches

    PubMed Central

    Zhang, Dan; Wei, Bin

    2017-01-01

    Currently, the use of robotics is limited with respect to performance capabilities. Improving the performance of robotic mechanisms is, and will remain, a main research topic in the next decade. In this paper, design and integration for improving the performance of robotic systems are achieved through three different approaches, i.e., a structure synthesis design approach, a dynamic balancing approach, and an adaptive control approach. The purpose of robotic mechanism structure synthesis design is to propose mechanisms that have better kinematic and dynamic performance than existing ones. The dynamic balancing design approach is normally accomplished by employing counterweights or counter-rotations. The potential issue is that more weight and inertia are added to the system. Here, a reactionless design based on the reconfiguration concept is put forward, which addresses this problem. With the mechanism reconfiguration, the control system needs to be adapted accordingly. One way to address control system adaptation is by applying the “divide and conquer” methodology. It entails modularizing the functionalities: breaking up the control functions into small functional modules, and from those modules assembling the control system according to the changing needs of the mechanism. PMID:28075360

  12. A Deep Learning based Approach to Reduced Order Modeling of Fluids using LSTM Neural Networks

    NASA Astrophysics Data System (ADS)

    Mohan, Arvind; Gaitonde, Datta

    2017-11-01

    Reduced Order Modeling (ROM) can be used as a surrogate for prohibitively expensive simulations to model flow behavior over long time periods. ROM is predicated on extracting dominant spatio-temporal features of the flow from CFD or experimental datasets. We explore ROM development with a deep learning approach, which comprises learning functional relationships between different variables in large datasets for predictive modeling. Although deep learning and related artificial intelligence based predictive modeling techniques have shown varied success in other fields, such approaches are in their initial stages of application to fluid dynamics. Here, we explore the application of the Long Short Term Memory (LSTM) neural network to sequential data, specifically to predict the time coefficients of Proper Orthogonal Decomposition (POD) modes of the flow for future timesteps, by training it on data at previous timesteps. The approach is demonstrated by constructing ROMs of several canonical flows. Additionally, we show that statistical estimates of stationarity in the training data can indicate a priori how amenable a given flow-field is to this approach. Finally, the potential and limitations of deep learning based ROM approaches are elucidated and further developments discussed.
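
    The core of the approach can be sketched compactly: obtain POD time coefficients from an SVD of mean-centered snapshots, then train an LSTM to map a window of past coefficients to the next step. This is a toy illustration in PyTorch with synthetic snapshot data; all sizes are arbitrary.

```python
import numpy as np
import torch
import torch.nn as nn

snapshots = np.random.randn(128, 500)     # (space, time) stand-in flow data
U, S, Vt = np.linalg.svd(snapshots - snapshots.mean(axis=1, keepdims=True),
                         full_matrices=False)
r = 8
a = torch.tensor(Vt[:r].T, dtype=torch.float32)  # (time, r) POD coefficients

win = 16                                  # predict a(t+1) from 16 past states
Xb = torch.stack([a[i:i + win] for i in range(len(a) - win)])
yb = a[win:]

class PodLSTM(nn.Module):
    def __init__(self, r, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(r, hidden, batch_first=True)
        self.out = nn.Linear(hidden, r)
    def forward(self, x):
        h, _ = self.lstm(x)
        return self.out(h[:, -1])         # last hidden state -> a(t+1)

model = PodLSTM(r)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                      # full-batch training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(Xb), yb)
    loss.backward()
    opt.step()
print(float(loss))
```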

  13. Detection and identification of concealed weapons using matrix pencil

    NASA Astrophysics Data System (ADS)

    Adve, Raviraj S.; Thayaparan, Thayananthan

    2011-06-01

    The detection and identification of concealed weapons is an extremely hard problem due to the weak signature of the target buried within the much stronger signal from the human body. This paper furthers the automatic detection and identification of concealed weapons by proposing the use of an effective approach to obtain the resonant frequencies in a measurement. The technique, based on Matrix Pencil, a scheme for model-based parameter estimation, also provides amplitude information, hence providing a level of confidence in the results. Of specific interest is the fact that Matrix Pencil is based on a singular value decomposition, making the scheme robust against noise.
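
    The Matrix Pencil scheme itself is compact enough to sketch. Below is the generic SVD-filtered algorithm for extracting complex poles (resonant frequencies and damping) plus amplitudes from a sampled signature; it is a textbook-style illustration, not the authors' full detection pipeline.

```python
import numpy as np

def matrix_pencil(y, M, L=None):
    """Estimate M poles z_k and amplitudes R_k with y[n] ~ sum_k R_k z_k**n."""
    N = len(y)
    L = L or N // 2                     # pencil parameter, N/3..N/2 is typical
    Y = np.lib.stride_tricks.sliding_window_view(y, L + 1)   # Hankel rows
    _, _, Vh = np.linalg.svd(Y, full_matrices=False)
    V = Vh[:M].conj().T                 # M dominant right singular vectors
    V1, V2 = V[:-1], V[1:]              # shift-invariant subspaces
    z = np.linalg.eigvals(np.linalg.pinv(V1) @ V2)           # the poles
    A = np.vander(z, N, increasing=True).T                   # columns z_k**n
    R, *_ = np.linalg.lstsq(A, y, rcond=None)                # amplitudes
    return z, R

n = np.arange(200)                      # two damped tones in light noise
y = (np.exp((-0.01 + 0.40j) * n) + 0.5 * np.exp((-0.02 + 1.10j) * n)
     + 0.01 * np.random.randn(200))
z, R = matrix_pencil(y, M=2)
print(np.log(z))    # ~ -0.01+0.40j and -0.02+1.10j; |R| gives confidence
```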

  14. Fuzzy-based decision strategy in real-time strategic games

    NASA Astrophysics Data System (ADS)

    Volna, Eva

    2017-11-01

    The aim of this article is to describe our own game artificial intelligence for OpenTTD, which is a real-time building strategy game. A multi-agent system with fuzzy decision-making was used for the design itself. The multi-agent system was chosen because real-time strategy games achieve great complexity and require decomposition of the problem into individual subproblems, which are then solved by individual cooperating agents. The system then becomes more stable and easily expandable. The fuzzy approach makes the decision-making process for strategies easier thanks to its handling of uncertainty. In the conclusion, our experimental results are compared with other approaches.

  15. Co-simulation coupling spectral/finite elements for 3D soil/structure interaction problems

    NASA Astrophysics Data System (ADS)

    Zuchowski, Loïc; Brun, Michael; De Martin, Florent

    2018-05-01

    The coupling between an implicit finite element (FE) code and an explicit spectral element (SE) code has been explored for solving elastic wave propagation in the case of a soil/structure interaction problem. The coupling approach is based on domain decomposition methods in transient dynamics. The spatial coupling at the interface is managed by a standard mortar coupling approach, whereas the time integration is handled by a hybrid asynchronous time integrator. An external coupling software tool, handling the interface problem, has been set up in order to couple the FE software Code_Aster with the SE software EFISPEC3D.

  16. Sensitivity based coupling strengths in complex engineering systems

    NASA Technical Reports Server (NTRS)

    Bloebaum, C. L.; Sobieszczanski-Sobieski, J.

    1993-01-01

    The iterative design scheme necessary for complex engineering systems is generally time-consuming and difficult to implement. Although a decomposition approach results in a more tractable problem, the inherent couplings make establishing the interdependencies of the various subsystems difficult. Another difficulty lies in identifying the most efficient order of execution for the subsystem analyses. The paper describes an approach for determining the dependencies that could be suspended during the system analysis with minimal accuracy losses, thereby reducing the system complexity. A new multidisciplinary testbed is presented, involving the interaction of the structures, aerodynamics, and performance disciplines. Results are presented to demonstrate the effectiveness of the system reduction scheme.

  17. Time-dependent quantum transport: An efficient method based on Liouville-von-Neumann equation for single-electron density matrix

    NASA Astrophysics Data System (ADS)

    Xie, Hang; Jiang, Feng; Tian, Heng; Zheng, Xiao; Kwok, Yanho; Chen, Shuguang; Yam, ChiYung; Yan, YiJing; Chen, Guanhua

    2012-07-01

    Based on our hierarchical equations of motion for time-dependent quantum transport [X. Zheng, G. H. Chen, Y. Mo, S. K. Koo, H. Tian, C. Y. Yam, and Y. J. Yan, J. Chem. Phys. 133, 114101 (2010), 10.1063/1.3475566], we develop an efficient and accurate numerical algorithm to solve the Liouville-von-Neumann equation. We solve the real-time evolution of the reduced single-electron density matrix at the tight-binding level. Calculations are carried out to simulate the transient current through a linear chain of atoms, with each represented by a single orbital. The self-energy matrix is expanded in terms of multiple Lorentzian functions, and the Fermi distribution function is evaluated via the Padé spectrum decomposition. This Lorentzian-Padé decomposition scheme is employed to simulate the transient current. With sufficient Lorentzian functions used to fit the self-energy matrices, we show that the lead spectral function and the dynamic response can be treated accurately. Compared to conventional master equation approaches, our method is much more efficient, as the computational time scales cubically with the system size and linearly with the simulation time. As a result, simulations of the transient currents through systems containing up to one hundred atoms have been carried out. As density functional theory is also an effective one-particle theory, the Lorentzian-Padé decomposition scheme developed here can be generalized for first-principles simulation of realistic systems.

  18. Nutrient-enhanced decomposition of plant biomass in a freshwater wetland

    USGS Publications Warehouse

    Bodker, James E.; Turner, Robert Eugene; Tweel, Andrew; Schulz, Christopher; Swarzenski, Christopher M.

    2015-01-01

    We studied soil decomposition in a Panicum hemitomon (Schultes)-dominated freshwater marsh located in southeastern Louisiana that was unambiguously changed by secondarily-treated municipal wastewater effluent. We used four approaches to evaluate how belowground biomass decomposition rates vary under different nutrient regimes in this marsh. The results of laboratory experiments demonstrated how nutrient enrichment enhanced the loss of soil or plant organic matter by 50%, and increased gas production. An experiment demonstrated that nitrogen, not phosphorus, limited decomposition. Cellulose decomposition at the field site was higher in the flowfield of the introduced secondarily treated sewage water, and the quality of the substrate (% N or % P) was directly related to the decomposition rates. We therefore rejected the null hypothesis that nutrient enrichment had no effect on the decomposition rates of these organic soils. In response to nutrient enrichment, plants respond through biomechanical or structural adaptations that alter the labile characteristics of plant tissue. These adaptations eventually change litter type and quality (where the marsh survives) as the % N content of plant tissue rises and is followed by even higher decomposition rates of the litter produced, creating a positive feedback loop. Marsh fragmentation will increase as a result. The assumptions and conditions underlying the use of unconstrained wastewater flow within natural wetlands, rather than controlled treatment within the confines of constructed wetlands, are revealed in the loss of previously sequestered carbon, habitat, public use, and other societal benefits.

  19. Structural design using equilibrium programming formulations

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.

    1995-01-01

    Solutions to increasingly larger structural optimization problems are desired. However, computational resources are strained to meet this need. New methods will be required to solve increasingly larger problems. The present approaches to solving large-scale problems involve approximations for the constraints of structural optimization problems and/or decomposition of the problem into multiple subproblems that can be solved in parallel. An area of game theory, equilibrium programming (also known as noncooperative game theory), can be used to unify these existing approaches from a theoretical point of view (considering the existence and optimality of solutions), and be used as a framework for the development of new methods for solving large-scale optimization problems. Equilibrium programming theory is described, and existing design techniques such as fully stressed design and constraint approximations are shown to fit within its framework. Two new structural design formulations are also derived. The first new formulation is another approximation technique which is a general updating scheme for the sensitivity derivatives of design constraints. The second new formulation uses a substructure-based decomposition of the structure for analysis and sensitivity calculations. Significant computational benefits of the new formulations compared with a conventional method are demonstrated.

  20. JPEG2000-coded image error concealment exploiting convex sets projections.

    PubMed

    Atzori, Luigi; Ginesu, Giaime; Raccis, Alessio

    2005-04-01

    Transmission errors in JPEG2000 can be grouped into three main classes, depending on the affected area: LL, high frequencies at the lower decomposition levels, and high frequencies at the higher decomposition levels. The first type of error is the most annoying but can be concealed by exploiting the signal's spatial correlation, as in a number of techniques proposed in the past; the second is less annoying but more difficult to address; the latter is often imperceptible. In this paper, we address the problem of concealing the second class of errors when high bit-planes are damaged by proposing a new approach based on the theory of projections onto convex sets. Accordingly, the error effects are masked by iteratively applying two procedures: low-pass (LP) filtering in the spatial domain and restoration of the uncorrupted wavelet coefficients in the transform domain. It has been observed that uniform LP filtering introduced some undesired side effects that offset the advantages. This problem has been overcome by applying an adaptive solution, which exploits an edge map to choose the optimal filter mask size. Simulation results demonstrated the efficiency of the proposed approach.
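
    A bare-bones POCS loop in the spirit described above, assuming PyWavelets and SciPy; the uniform (non-adaptive) low-pass filter stands in for the paper's edge-adaptive mask, and the "lost" coefficient region is invented for illustration.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

img = np.random.rand(64, 64)                  # stand-in decoded image
coeffs = pywt.wavedec2(img, "db4", level=3)
known, slices = pywt.coeffs_to_array(coeffs)  # pack coeffs for easy masking
mask = np.ones_like(known, dtype=bool)
mask[32:40, 32:40] = False                    # pretend these coeffs were lost

x = pywt.waverec2(pywt.array_to_coeffs(np.where(mask, known, 0.0), slices,
                                       output_format="wavedec2"), "db4")
for _ in range(20):
    x = uniform_filter(x, size=3)             # projection 1: spatial smoothness
    c, _ = pywt.coeffs_to_array(pywt.wavedec2(x, "db4", level=3))
    c[mask] = known[mask]                     # projection 2: keep good coeffs
    x = pywt.waverec2(pywt.array_to_coeffs(c, slices,
                                           output_format="wavedec2"), "db4")
print(x.shape)
```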

  1. Interface conditions for domain decomposition with radical grid refinement

    NASA Technical Reports Server (NTRS)

    Scroggs, Jeffrey S.

    1991-01-01

    Interface conditions for coupling the domains in a physically motivated domain decomposition method are discussed. The domain decomposition is based on an asymptotic-induced method for the numerical solution of hyperbolic conservation laws with small viscosity. The method consists of multiple stages. The first stage is to obtain a first approximation using a first-order method, such as the Godunov scheme. Subsequent stages of the method involve solving internal-layer problems via a domain decomposition. The method is derived and justified via singular perturbation techniques.

  2. Velocity measurements of heterogeneous RBC flow in capillary vessels using dynamic laser speckle signal.

    PubMed

    Li, Chenxi; Wang, Ruikang

    2017-04-01

    We propose an approach to measure heterogeneous velocities of red blood cells (RBCs) in capillary vessels using full-field time-varying dynamic speckle signals. The approach utilizes a low-coherence laser speckle imaging system to record the instantaneous speckle pattern, followed by an eigen-decomposition-based filtering algorithm to extract the dynamic speckle signal due to the moving RBCs. The velocity of heterogeneous RBC flows is determined by cross-correlating the temporal dynamic speckle signals obtained at adjacent locations. We verify the approach by imaging the mouse pinna in vivo, demonstrating its capability for full-field RBC flow mapping and for quantifying flow patterns with high resolution. The approach is expected to enable investigation of the dynamic behavior of RBC flow in capillaries under physiological changes.

  3. Health monitoring of pipeline girth weld using empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Rezaei, Davood; Taheri, Farid

    2010-05-01

    In the present paper, the Hilbert-Huang transform (HHT), as a time-series analysis technique, has been combined with a local diagnostic approach in an effort to identify flaws in pipeline girth welds. This method is based on monitoring the free vibration signals of the pipe in its healthy and flawed states, and processing the signals through the HHT and its associated signal decomposition technique, known as empirical mode decomposition (EMD). The EMD method decomposes the vibration signals into a collection of intrinsic mode functions (IMFs). The deviations in structural integrity, measured from a healthy-state baseline, are subsequently evaluated by two damage-sensitive parameters. The first is a damage index, referred to as the EM-EDI, which is established based on an energy comparison of the first or second IMF of the vibration signals before and after the occurrence of damage. The second parameter is the lag in instantaneous phase, a quantity derived from the HHT. In the developed methodologies, the pipe's free vibration is monitored by piezoceramic sensors and a laser Doppler vibrometer. The effectiveness of the proposed techniques is demonstrated through a set of numerical and experimental studies on a steel pipe with a mid-span girth weld, for both pressurized and nonpressurized conditions. To simulate a crack, a narrow notch is cut on one side of the girth weld. Several damage scenarios, including notches of different depths and at various locations on the pipe, are investigated. Results from both numerical and experimental studies reveal that in all damage cases the sensor located in the notch vicinity could successfully detect the notch and qualitatively predict its severity. The effect of internal pressure on the damage identification method is also examined. Overall, the results are encouraging and demonstrate the promise of the proposed approaches as inexpensive systems for structural health monitoring purposes.

  4. A comparison between decomposition rates of buried and surface remains in a temperate region of South Africa.

    PubMed

    Marais-Werner, Anátulie; Myburgh, J; Becker, P J; Steyn, M

    2018-01-01

    Several studies have been conducted on decomposition patterns and rates of surface remains; however, much less is known about this process for buried remains. Understanding the process of decomposition in buried remains is extremely important and aids in criminal investigations, especially when attempting to estimate the post mortem interval (PMI). The aim of this study was to compare the rates of decomposition between buried and surface remains. For this purpose, 25 pigs (Sus scrofa; 45-80 kg) were buried and excavated at different post mortem intervals (7, 14, 33, 92, and 183 days). The observed total body scores were then compared to those of surface remains decomposing at the same location. Stages of decomposition were scored according to separate categories for different anatomical regions based on standardised methods. Variation in the degree of decomposition was considerable, especially among the buried 7-day-interval pigs, which displayed different degrees of discolouration in the lower abdomen and trunk. At 14 and 33 days, buried pigs displayed features commonly associated with the early stages of decomposition, but with less variation. A state of advanced decomposition was then reached, after which little change was observed over the following 90-183 days after interment. Although the patterns of decomposition for buried and surface remains were very similar, the rates differed considerably. Based on the observations made in this study, guidelines for the estimation of PMI are proposed. This pertains to buried remains found at a depth of approximately 0.75 m in the Central Highveld of South Africa.

  5. A trait-based approach to community assembly: partitioning of species trait values into within- and among-community components.

    PubMed

    Ackerly, D D; Cornwell, W K

    2007-02-01

    Plant functional traits vary both along environmental gradients and among species occupying similar conditions, creating a challenge for the synthesis of functional and community ecology. We present a trait-based approach that provides an additive decomposition of species' trait values into alpha and beta components: beta values refer to a species' position along a gradient defined by community-level mean trait values; alpha values are the difference between a species' trait values and the mean of co-occurring taxa. In woody plant communities of coastal California, beta trait values for specific leaf area, leaf size, wood density and maximum height all covary strongly, reflecting species distributions across a gradient of soil moisture availability. Alpha values, on the other hand, are generally not significantly correlated, suggesting several independent axes of differentiation within communities. This trait-based framework provides a novel approach to integrate functional ecology and gradient analysis with community ecology and coexistence theory.
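
    The decomposition itself is simple enough to show concretely: beta is the mean trait value of the community a species occurs in, and alpha is the species' deviation from that mean, so trait = alpha + beta by construction. The numbers below are invented for illustration.

```python
import pandas as pd

obs = pd.DataFrame({
    "community": ["dry", "dry", "dry", "wet", "wet", "wet"],
    "species":   ["s1", "s2", "s3", "s4", "s5", "s6"],
    "sla":       [8.0, 10.0, 12.0, 18.0, 20.0, 22.0],  # specific leaf area
})
obs["beta"] = obs.groupby("community")["sla"].transform("mean")
obs["alpha"] = obs["sla"] - obs["beta"]   # deviation from co-occurring taxa
print(obs)
```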

  6. Morphological decomposition of 2-D binary shapes into convex polygons: a heuristic algorithm.

    PubMed

    Xu, J

    2001-01-01

    In many morphological shape decomposition algorithms, either a shape can only be decomposed into shape components of extremely simple forms, or a time-consuming search process is employed to determine a decomposition. In this paper, we present a morphological shape decomposition algorithm that decomposes a two-dimensional (2-D) binary shape into a collection of convex polygonal components. A single convex polygonal approximation for a given image is first identified. This first component is determined incrementally by selecting a sequence of basic shape primitives. These shape primitives are chosen based on shape information extracted from the given shape at different scale levels. Additional shape components are identified recursively from the difference image between the given image and the first component. Simple operations are used to repair certain concavities caused by the set difference operation. The resulting hierarchical structure provides descriptions of the given shape at different levels of detail. The experiments show that the decomposition results produced by the algorithm seem to be in good agreement with the natural structures of the given shapes. The computational cost of the algorithm is significantly lower than that of an earlier search-based convex decomposition algorithm. Compared to nonconvex decomposition algorithms, our algorithm allows accurate approximations of the given shapes at low coding costs.

  7. A reduced basis approach for implementing thermodynamic phase-equilibria information in geophysical and geodynamic studies

    NASA Astrophysics Data System (ADS)

    Afonso, J. C.; Zlotnik, S.; Diez, P.

    2015-12-01

    We present a flexible, general and efficient approach for implementing thermodynamic phase equilibria information (in the form of sets of physical parameters) in geophysical and geodynamic studies. The approach is based on multi-dimensional decomposition methods, which transform the original multi-dimensional discrete information into a dimensionally separated representation. In this representation, the number of coefficients to be stored grows only linearly with the number of dimensions (as opposed to a full multi-dimensional cube, whose storage grows exponentially with the number of dimensions). Thus, the amount of information to be stored in memory during a numerical simulation or geophysical inversion is drastically reduced. Accordingly, the amount and resolution of the thermodynamic information that can be used in a simulation or inversion increases substantially. In addition, the method is independent of the actual software used to obtain the primary thermodynamic information, and therefore it can be used in conjunction with any thermodynamic modeling program and/or database. Also, the errors associated with the decomposition procedure are readily controlled by the user, depending on her/his actual needs (e.g. preliminary runs vs full resolution runs). We illustrate the benefits, generality and applicability of our approach with several examples of practical interest for both geodynamic modeling and geophysical inversion/modeling. Our results demonstrate that the proposed method is a competitive and attractive candidate for implementing thermodynamic constraints in a broad range of geophysical and geodynamic studies.

  8. Automatic network coupling analysis for dynamical systems based on detailed kinetic models.

    PubMed

    Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich

    2005-10-01

    We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.

  9. Testing the monogamy relations via rank-2 mixtures

    NASA Astrophysics Data System (ADS)

    Jung, Eylee; Park, DaeKil

    2016-10-01

    We introduce two tangle-based four-party entanglement measures t1 and t2, and two negativity-based measures n1 and n2, which are derived from the monogamy relations. These measures are computed for three four-qubit maximally entangled and W states explicitly. We also compute these measures for the rank-2 mixture ρ4 = p|GHZ4⟩⟨GHZ4| + (1-p)|W4⟩⟨W4| by finding the corresponding optimal decompositions. It turns out that t1(ρ4) is trivial and the corresponding optimal decomposition is equal to the spectral decomposition. Probably, this triviality is a sign of the fact that the corresponding monogamy inequality is not sufficiently tight. We fail to compute t2(ρ4) due to the difficulty in the calculation of the residual entanglement. The negativity-based measures n1(ρ4) and n2(ρ4) are explicitly computed and the corresponding optimal decompositions are also derived explicitly.

  10. Geometrical eigen-subspace framework based molecular conformation representation for efficient structure recognition and comparison

    NASA Astrophysics Data System (ADS)

    Li, Xiao-Tian; Yang, Xiao-Bao; Zhao, Yu-Jun

    2017-04-01

    We have developed an extended distance matrix approach to study the molecular geometric configuration through spectral decomposition. It is shown that the positions of all atoms in the eigen-space can be specified precisely by their eigen-coordinates, while the refined atomic eigen-subspace projection array adopted in our approach is demonstrated to be a competent invariant in structure comparison. Furthermore, a visual eigen-subspace projection function (EPF) is derived to characterize the surrounding configuration of an atom naturally. A complete set of atomic EPFs constitute an intrinsic representation of molecular conformation, based on which the interatomic EPF distance and intermolecular EPF distance can be reasonably defined. Exemplified with a few cases, the intermolecular EPF distance shows exceptional rationality and efficiency in structure recognition and comparison.

  11. Effect of urea additive on the thermal decomposition kinetics of flame retardant greige cotton nonwoven fabric

    Treesearch

    Sunghyun Nam; Brian D. Condon; Robert H. White; Qi Zhao; Fei Yao; Michael Santiago Cintrón

    2012-01-01

    Urea is well known to have a synergistic action with phosphorus-based flame retardants (FRs) in enhancing the FR performance of cellulosic materials, but the effect of urea on the thermal decomposition kinetics has not been thoroughly studied. In this study, the activation energy (Ea) for the thermal decomposition of greige...

  12. Evaluating litter decomposition and soil organic matter dynamics in earth system models: contrasting analysis of long-term litter decomposition and steady-state soil carbon

    NASA Astrophysics Data System (ADS)

    Bonan, G. B.; Wieder, W. R.

    2012-12-01

    Decomposition is a large term in the global carbon budget, but models of the earth system that simulate carbon cycle-climate feedbacks are largely untested with respect to litter decomposition. Here, we demonstrate a protocol to document model performance with respect to both long-term (10-year) litter decomposition and steady-state soil carbon stocks. First, we test the soil organic matter parameterization of the Community Land Model version 4 (CLM4), the terrestrial component of the Community Earth System Model, with data from the Long-term Intersite Decomposition Experiment Team (LIDET). The LIDET dataset is a 10-year study of litter decomposition at multiple sites across North America and Central America. We show results for 10-year litter decomposition simulations compared with LIDET for 9 litter types and 20 sites in tundra, grassland, and boreal, conifer, deciduous, and tropical forest biomes. We show additional simulations with DAYCENT, a version of the CENTURY model, to ask how well an established ecosystem model matches the observations. The results reveal a large discrepancy between the laboratory microcosm studies used to parameterize the CLM4 litter decomposition and the LIDET field study. Simulated carbon loss is more rapid than the observations across all sites, despite using the LIDET-provided climatic decomposition index to constrain temperature and moisture effects on decomposition. Nitrogen immobilization is similarly biased high. Closer agreement with the observations requires much lower decomposition rates, obtained with the assumption that nitrogen severely limits decomposition. DAYCENT better replicates the observations, for both carbon mass remaining and nitrogen, without requiring nitrogen limitation of decomposition. Second, we compare global observationally based datasets of soil carbon with simulated steady-state soil carbon stocks for both models. The model simulations were forced with observationally based estimates of annual litterfall and a model-derived climatic decomposition index. While comparison with the LIDET 10-year litterbag study reveals sharp contrasts between CLM4 and DAYCENT, simulations of steady-state soil carbon show less difference between the models. Both CLM4 and DAYCENT significantly underestimate soil carbon. Sensitivity analyses highlight causes of the low soil carbon bias. The terrestrial biogeochemistry of earth system models must be critically tested with observations, and the consequences of particular model choices must be documented. Long-term litter decomposition experiments such as LIDET provide a real-world, process-oriented benchmark to evaluate models and can critically inform model development. Analysis of steady-state soil carbon estimates reveals additional, but here different, inferences about model performance.

  13. Superfast algorithms of multidimensional discrete k-wave transforms and Volterra filtering based on superfast radon transform

    NASA Astrophysics Data System (ADS)

    Labunets, Valeri G.; Labunets-Rundblad, Ekaterina V.; Astola, Jaakko T.

    2001-12-01

    Fast algorithms for a wide class of non-separable n-dimensional (nD) discrete unitary K-transforms (DKT) are introduced. They need fewer 1D DKTs than the classical radix-2 FFT-type approach. The method utilizes a decomposition of the nD K-transform into the product of a new nD discrete Radon transform and a set of parallel/independent 1D K-transforms. If the nD K-transform has a separable kernel (e.g., the case of the discrete Fourier transform), our approach decreases the multiplicative complexity by a factor of n compared to the classical row/column separable approach. It is well known that an n-th order Volterra filter of a one-dimensional signal can be evaluated by an appropriate nD linear convolution. This work describes a new superfast algorithm for Volterra filtering. The new approach is based on the superfast discrete Radon and Nussbaumer polynomial transforms.

  14. Decomposition of Potassium Ferrate(VI) (K2FeO4) and Potassium Ferrate(III) (KFeO2): In-situ Mössbauer Spectroscopy Approach

    NASA Astrophysics Data System (ADS)

    Machala, Libor; Zboril, Radek; Sharma, Virender K.; Homonnay, Zoltan

    2008-10-01

    Mössbauer spectroscopy was shown to be a very useful technique for studying the mechanism of thermal decomposition and the aging processes of the best-known ferrate(VI), K2FeO4. An in-situ Mössbauer spectroscopy approach was used to monitor the phase composition during the studied processes. The experimental set-up was designed to perform in-situ measurements at high temperatures and under different air humidity conditions at room temperature. Potassium ferrate(III), KFeO2, was demonstrated to be the primary product of the thermal decomposition of K2FeO4. KFeO2 was unstable in humid air at room temperature and reacted with the H2O and CO2 components of air to give Fe2O3 nanoparticles and KHCO3. The aging kinetics of K2FeO4 and KFeO2 under humid air were significantly dependent on the relative air humidity.

  15. Domain decomposition for aerodynamic and aeroacoustic analyses, and optimization

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay

    1995-01-01

    The overarching theme was domain decomposition, which was intended to improve the numerical solution technique for the partial differential equations at hand; in the present study, those that governed either the fluid flow, the aeroacoustic wave propagation, or the sensitivity analysis for a gradient-based optimization. The role of the domain decomposition extended beyond the original impetus of discretizing geometrically complex regions or writing modular software for distributed-hardware computers. It induced function-space decompositions and operator decompositions that offered the valuable property of near independence of operator evaluation tasks. The objectives gravitated around the extensions and implementations of methodologies either previously developed or concurrently being developed: (1) aerodynamic sensitivity analysis with domain decomposition (SADD); (2) computational aeroacoustics of cavities; and (3) dynamic, multibody computational fluid dynamics using unstructured meshes.

  16. An effective hierarchical model for the biomolecular covalent bond: an approach integrating artificial chemistry and an actual terrestrial life system.

    PubMed

    Oohashi, Tsutomu; Ueno, Osamu; Maekawa, Tadao; Kawai, Norie; Nishina, Emi; Honda, Manabu

    2009-01-01

    Under the AChem paradigm and the programmed self-decomposition (PSD) model, we propose a hierarchical model for the biomolecular covalent bond (HBCB model). This model assumes that terrestrial organisms arrange their biomolecules in a hierarchical structure according to the energy strength of their covalent bonds. It also assumes that they have evolutionarily selected the PSD mechanism of turning biological polymers (BPs) into biological monomers (BMs) as an efficient biomolecular recycling strategy. We have examined the validity and effectiveness of the HBCB model by coordinating two complementary approaches: biological experiments using an existent terrestrial life system, and simulation experiments using an AChem system. The biological experiments have shown that terrestrial life possesses a PSD mechanism as an endergonic, genetically regulated process, and that hydrolysis, which decomposes a BP into BMs, is one of the main processes of such a mechanism. In the simulation experiments, we compared different virtual self-decomposition processes. The virtual species in which the self-decomposition process mainly involved covalent bond cleavage from a BP to BMs showed evolutionary superiority over other species in which the self-decomposition process involved cleavage from a BP to classes lower than BMs. These converging findings strongly support the existence of PSD and the validity and effectiveness of the HBCB model.

  17. Young Children's Thinking About Decomposition: Early Modeling Entrees to Complex Ideas in Science

    NASA Astrophysics Data System (ADS)

    Ero-Tolliver, Isi; Lucas, Deborah; Schauble, Leona

    2013-10-01

    This study was part of a multi-year project on the development of elementary students' modeling approaches to understanding the life sciences. Twenty-three first grade students conducted a series of coordinated observations and investigations on decomposition, a topic that is rarely addressed in the early grades. The instruction included in-class observations of different types of soil and soil profiling, visits to the school's compost bin, structured observations of decaying organic matter of various kinds, study of organisms that live in the soil, and models of environmental conditions that affect rates of decomposition. Both before and after instruction, students completed a written performance assessment that asked them to reason about the process of decomposition. Additional information was gathered through one-on-one interviews with six focus students who represented variability of performance across the class. During instruction, researchers collected video of classroom activity, student science journal entries, and charts and illustrations produced by the teacher. After instruction, the first-grade students showed a more nuanced understanding of the composition and variability of soils, the role of visible organisms in decomposition, and environmental factors that influence rates of decomposition. Through a variety of representational devices, including drawings, narrative records, and physical models, students came to regard decomposition as a process, rather than simply as an end state that does not require explanation.

  18. Video rate morphological processor based on a redundant number representation

    NASA Astrophysics Data System (ADS)

    Kuczborski, Wojciech; Attikiouzel, Yianni; Crebbin, Gregory A.

    1992-03-01

    This paper presents a video-rate morphological processor for automated visual inspection of printed circuit boards, integrated circuit masks, and other complex objects. The inspection algorithms are based on gray-scale mathematical morphology. The hardware complexity of the known methods for real-time implementation of gray-scale morphology--the umbra transform and threshold decomposition--has prompted us to propose a novel technique that applies an arithmetic system without carry propagation. After considering several arithmetic systems, a redundant number representation was selected for implementation. Two options are analyzed here. The first is a pure signed digit number representation (SDNR) with base 4. The second option is a combination of the base-2 SDNR (to represent gray levels of images) and the conventional two's complement code (to represent gray levels of structuring elements). The operating principle of the morphological processor is based on the concept of the digit-level systolic array. Individual processing units and small memory elements create a pipeline. The memory elements store current image windows (kernels). All operation primitives of the processing units apply a unified direction of digit processing: most significant digit first (MSDF). The implementation technology is based on field programmable gate arrays from Xilinx. This paper justifies the rationality of a new approach to logic design, namely the decomposition of Boolean functions instead of Boolean minimization.

  19. Neural image analysis for estimating aerobic and anaerobic decomposition of organic matter based on the example of straw decomposition

    NASA Astrophysics Data System (ADS)

    Boniecki, P.; Nowakowski, K.; Slosarz, P.; Dach, J.; Pilarski, K.

    2012-04-01

    The purpose of the project was to identify the degree of organic matter decomposition by means of a neural model based on graphical information derived from image analysis. Empirical data (photographs of compost content at various stages of maturation) were used to generate an optimal neural classifier (Boniecki et al. 2009, Nowakowski et al. 2009). The best classification properties were found in an RBF (Radial Basis Function) artificial neural network, which demonstrates that the process is non-linear.

  20. Ozone time scale decomposition and trend assessment from surface observations

    NASA Astrophysics Data System (ADS)

    Boleti, Eirini; Hueglin, Christoph; Takahama, Satoshi

    2017-04-01

    Emissions of ozone precursors have been regulated in Europe since around 1990, with control measures primarily targeting industry and traffic. In order to understand how these measures have affected air quality, it is now important to investigate concentrations of tropospheric ozone in different types of environments, based on their NOx burden, and in different geographic regions. In this study, we analyze high-quality data sets for Switzerland (NABEL network) and the whole of Europe (AirBase) over the last 25 years to calculate long-term trends of ozone concentrations. A sophisticated time scale decomposition method, called Ensemble Empirical Mode Decomposition (EEMD) (Huang,1998;Wu,2009), is used to decompose the different time scales of the variation of ozone, namely the long-term trend, the seasonal variability and the short-term variability. This allows subtraction of the seasonal pattern of ozone from the observations and estimation of long-term changes of ozone concentrations with lower uncertainty ranges compared to typical methodologies. We observe that, despite the implementation of regulations, ozone daily mean values at most measurement sites increased until around the mid-2000s. Afterwards, we observe a decline or a leveling off in the concentrations; certainly a late effect of limitations on ozone precursor emissions. On the other hand, peak ozone concentrations have been decreasing in almost all regions. The evolution of the trend exhibits some differences between the different types of measurement sites. In addition, ozone is known to be strongly affected by meteorology. In the applied approach, some of the meteorological effects are already captured by the seasonal signal and thus already removed in the de-seasonalized ozone time series. To adjust for the influence of meteorology on the higher-frequency ozone variation, a statistical approach based on Generalized Additive Models (GAM) (Hastie,1990;Wood,2006), which corrects for meteorological effects, has been developed in order to (a) investigate whether trends are masked by meteorological variability and (b) understand which part of the observed trends is meteorology-driven. By correlating the short-term variation of ozone, as obtained from the EEMD, with the corresponding short-term variation of relevant meteorological parameters, we subtract the variation of ozone concentrations that is related to the meteorological effects explained by the GAM. We find that this higher-frequency meteorological correction further reduces the uncertainty in trend estimation by a small factor. In addition, the seasonal variability of ozone as obtained from the EEMD has been studied in more detail for possible changes in its behavior. A shortening of the seasonal cycle was observed, i.e. a reduction of the maximum and an increase of the minimum concentration per year, while the occurrence of the maximum is shifted to earlier times in the year. In summary, we present a sophisticated and consistent approach for detecting and categorizing trends and meteorological influences on ozone concentrations in long-term measurements across Europe.

  1. An optimized time varying filtering based empirical mode decomposition method with grey wolf optimizer for machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei

    2018-03-01

    A time-varying filtering based empirical mode decomposition (EMD) method, termed TVF-EMD, was proposed recently to solve the mode mixing problem of the EMD method. Compared with classical EMD, TVF-EMD was proven to improve the frequency separation performance and to be robust to noise interference. However, the decomposition parameters (i.e., bandwidth threshold and B-spline order) significantly affect the decomposition results of this method. In the original TVF-EMD method, the parameter values are assigned in advance, which makes it difficult to achieve satisfactory analysis results. To solve this problem, this paper develops an optimized TVF-EMD method based on the grey wolf optimizer (GWO) algorithm for fault diagnosis of rotating machinery. Firstly, a measurement index termed the weighted kurtosis index is constructed from the kurtosis index and the correlation coefficient. Subsequently, the optimal TVF-EMD parameters that match the input signal can be obtained by the GWO algorithm using the maximum weighted kurtosis index as the objective function. Finally, fault features can be extracted by analyzing the sensitive intrinsic mode function (IMF) having the maximum weighted kurtosis index. Simulations and comparisons highlight the performance of the TVF-EMD method for signal decomposition, and meanwhile verify that the bandwidth threshold and B-spline order are critical to the decomposition results. Two case studies on rotating machinery fault diagnosis demonstrate the effectiveness and advantages of the proposed method.
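
    The objective driving the optimization is simple to state in code. Below is a hedged sketch of the weighted kurtosis index (kurtosis of an IMF scaled by its correlation with the raw signal) wrapped as a GWO objective; `tvf_emd` is a placeholder for whatever TVF-EMD implementation is at hand, since the original is not a standard library routine.

```python
import numpy as np
from scipy.stats import kurtosis, pearsonr

def weighted_kurtosis_index(imf, signal):
    """Kurtosis weighted by |correlation with the raw signal|."""
    return kurtosis(imf, fisher=False) * abs(pearsonr(imf, signal)[0])

def objective(params, signal):
    """Candidate (bandwidth threshold, B-spline order) -> value to minimize."""
    bandwidth_thr, bspline_order = params
    imfs = tvf_emd(signal, bandwidth_thr, int(bspline_order))  # placeholder
    # GWO conventionally minimizes, so negate the best index over the IMFs
    return -max(weighted_kurtosis_index(imf, signal) for imf in imfs)
```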

  2. Randomized interpolative decomposition of separated representations

    NASA Astrophysics Data System (ADS)

    Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory

    2015-01-01

    We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
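
    The matrix-level building block that tensor ID reduces to is available in SciPy, which makes the idea easy to try: select a column skeleton and a projection matrix so that the remaining columns are expressed through the selected ones. The low-rank test matrix below is synthetic.

```python
import numpy as np
import scipy.linalg.interpolative as sli

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 12)) @ rng.standard_normal((12, 150))  # rank 12

k, idx, proj = sli.interp_decomp(A, 1e-8)   # rank k chosen to meet accuracy
B = A[:, idx[:k]]                           # skeleton: k selected columns
A_id = sli.reconstruct_matrix_from_id(B, idx, proj)
print(k, np.linalg.norm(A - A_id) / np.linalg.norm(A))   # k=12, tiny error
```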

  3. Improving EMG based classification of basic hand movements using EMD.

    PubMed

    Sapsanis, Christos; Georgoulas, George; Tzes, Anthony; Lymberopoulos, Dimitrios

    2013-01-01

    This paper presents a pattern recognition approach for the identification of basic hand movements using surface electromyographic (EMG) data. The EMG signal is decomposed using Empirical Mode Decomposition (EMD) into Intrinsic Mode Functions (IMFs) and subsequently a feature extraction stage takes place. Various combinations of feature subsets are tested using a simple linear classifier for the detection task. Our results suggest that the use of EMD can increase the discrimination ability of the conventional feature sets extracted from the raw EMG signal.

  4. [The segmentation of urinary cells--a first step in the automated processing in urine cytology (author's transl)].

    PubMed

    Liedtke, C E; Aeikens, B

    1980-01-01

    By segmentation of cell images we understand the automated decomposition of microscopic cell scenes into nucleus, plasma and background. A segmentation is achieved by using information from the microscope image and prior knowledge about the content of the scene. Different algorithms have been investigated and applied to samples of urothelial cells. A particular algorithm based on a histogram approach which can be easily implemented in hardware is discussed in more detail.
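
    As a concrete modern stand-in for the histogram approach, multi-Otsu thresholding splits a gray-level histogram into three classes (background, plasma, nucleus) in one call; this sketch assumes scikit-image and uses a synthetic cell-like image.

```python
import numpy as np
from skimage.filters import threshold_multiotsu

rng = np.random.default_rng(2)
img = rng.normal(50, 5, (128, 128))                  # background
img[30:100, 30:100] = rng.normal(120, 5, (70, 70))   # cytoplasm
img[55:75, 55:75] = rng.normal(200, 5, (20, 20))     # nucleus

t1, t2 = threshold_multiotsu(img, classes=3)  # two thresholds from histogram
labels = np.digitize(img, bins=[t1, t2])      # 0/1/2 = background/plasma/nucleus
print(np.bincount(labels.ravel()))
```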

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, F.; Banks, J. W.; Henshaw, W. D.

    We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially, and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. Lastly, the CHAMP scheme is also developed for general curvilinear grids, and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.

  6. Modal analysis of 2-D sedimentary basin from frequency domain decomposition of ambient vibration array recordings

    NASA Astrophysics Data System (ADS)

    Poggi, Valerio; Ermert, Laura; Burjanek, Jan; Michel, Clotaire; Fäh, Donat

    2015-01-01

    Frequency domain decomposition (FDD) is a well-established spectral technique used in civil engineering to analyse and monitor the modal response of buildings and structures. The method is based on singular value decomposition of the cross-power spectral density matrix from simultaneous array recordings of ambient vibrations. The method makes it possible to retrieve not only the resonance frequencies of the investigated structure but also the corresponding modal shapes, without the need for an absolute reference. This is an important piece of information, which can be used to validate the consistency of numerical models and analytical solutions. We apply this approach using advanced signal processing to evaluate the resonance characteristics of 2-D Alpine sedimentary valleys. In this study, we present the results obtained at Martigny, in the Rhône valley (Switzerland). For the analysis, we use 2 hr of ambient vibration recordings from a linear seismic array deployed perpendicular to the valley axis. Only the horizontal-axial direction (SH) of the ground motion is considered. Using the FDD method, six separate resonant frequencies are retrieved together with their corresponding modal shapes. We compare the mode shapes with results from classical standard spectral ratios and numerical simulations of ambient vibration recordings.
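
    The core FDD computation is short: build the cross-power spectral density matrix over all sensor pairs, take its SVD at each frequency, and read resonances off the peaks of the first singular value, with the matching singular vector approximating the mode shape. A toy illustration with SciPy and a synthetic five-channel array follows.

```python
import numpy as np
from scipy.signal import csd

fs, n = 100.0, 60_000
t = np.arange(n) / fs
mode = np.sin(2 * np.pi * 3.0 * t)              # one 3 Hz "resonance"
shape = np.array([0.3, 0.8, 1.0, 0.8, 0.3])     # mode shape across the array
data = np.outer(shape, mode) + 0.5 * np.random.randn(5, n)

nchan = data.shape[0]
f, _ = csd(data[0], data[0], fs=fs, nperseg=2048)
G = np.zeros((len(f), nchan, nchan), dtype=complex)   # CPSD matrix per freq
for i in range(nchan):
    for j in range(nchan):
        _, G[:, i, j] = csd(data[i], data[j], fs=fs, nperseg=2048)

U, s, _ = np.linalg.svd(G)          # batched SVD over the frequency axis
peak = np.argmax(s[:, 0])
print(f[peak])                      # ~3 Hz
print(np.abs(U[peak, :, 0]))        # recovered mode shape (up to scale)
```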

  7. Spatiotemporal Domain Decomposition for Massive Parallel Computation of Space-Time Kernel Density

    NASA Astrophysics Data System (ADS)

    Hohl, A.; Delmelle, E. M.; Tang, W.

    2015-07-01

    Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems in a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. In order to avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers to include adjacent data points that are within the spatial and temporal kernel bandwidths. Then, we quantify the computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of Dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density reaches substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors due to great accuracy in quantifying computational intensity. Our approach is portable to other space-time analytical tests.
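
    A toy version of the adaptive split makes the idea tangible: treat (x, y, t) as a 3-D cube and recursively divide any cell holding more points than a capacity limit into eight children, so dense regions get finer cells. The buffers and workload scoring from the paper are omitted for brevity.

```python
import numpy as np

def octree(points, lo, hi, capacity=100, leaves=None):
    """Recursively split [lo, hi) until each leaf holds <= capacity points."""
    if leaves is None:
        leaves = []
    if len(points) <= capacity:
        leaves.append((lo, hi, points))
        return leaves
    mid = (lo + hi) / 2.0
    for octant in range(8):                      # one child per corner
        hi_side = np.array([(octant >> d) & 1 for d in range(3)], dtype=bool)
        c_lo = np.where(hi_side, mid, lo)
        c_hi = np.where(hi_side, hi, mid)
        inside = np.all((points >= c_lo) & (points < c_hi), axis=1)
        octree(points[inside], c_lo, c_hi, capacity, leaves)
    return leaves

pts = np.random.rand(10_000, 3)                  # (x, y, t) in the unit cube
leaves = octree(pts, np.zeros(3), np.ones(3))
print(len(leaves), max(len(p) for *_, p in leaves))
```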

  8. Separating and Recycling Plastic, Glass, and Gallium from Waste Solar Cell Modules by Nitrogen Pyrolysis and Vacuum Decomposition.

    PubMed

    Zhang, Lingen; Xu, Zhenming

    2016-09-06

    Many countries have benefited from the solar cell industry due to the high efficiency and nonpolluting power generation associated with solar energy. Accordingly, the market for solar cell modules has expanded rapidly in the recent decade. However, the question of how to recycle waste solar cell modules effectively and in an environmentally friendly manner has received little attention. Based on nitrogen pyrolysis and vacuum decomposition, this work successfully recycles useful organic components, glass, and gallium from solar cell modules. The results are summarized as follows: (i) the nitrogen pyrolysis process can effectively decompose the plastic; the organic conversion rate approached 100% at 773 K, 30 min, and a 0.5 L/min N2 flow rate. However, the pyrolysis temperature should not exceed 773 K, since harmful products such as benzene and its derivatives, identified by GC-MS measurement, increase with rising temperature; (ii) the separation principle, product analysis, and optimization of vacuum decomposition are discussed. Gallium can be recovered effectively at a temperature of 1123 K, a system pressure of 1 Pa, and a reaction time of 40 min. This technology is significant in view of the "Reduce, Reuse, and Recycle" principle for solid waste, and provides an opportunity for sustainable development of the photovoltaic industry.

  9. Unconditionally energy stable time stepping scheme for Cahn–Morral equation: Application to multi-component spinodal decomposition and optimal space tiling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tavakoli, Rouhollah, E-mail: rtavakoli@sharif.ir

    An unconditionally energy stable time stepping scheme is introduced to solve Cahn–Morral-like equations in the present study. It is constructed based on the combination of David Eyre's time stepping scheme and a Schur complement approach. Although the presented method is general and independent of the choice of the homogeneous free energy density function term, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study spinodal decomposition in multi-component systems and optimal space tiling problems. A penalization strategy is developed, in the case of the latter problem, to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time step size. Its MATLAB implementation is included in the appendix for the numerical evaluation of the algorithm and reproduction of the presented results. -- Highlights: •Extension of Eyre's convex–concave splitting scheme to multiphase systems. •Efficient solution of spinodal decomposition in multi-component systems. •Efficient solution of the least-perimeter periodic space partitioning problem. •Development of a penalization strategy to avoid trivial solutions. •Presentation of the MATLAB implementation of the introduced algorithm.
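
    The appendix code referenced in this record is MATLAB; as a language-neutral illustration of the underlying idea, here is a linearly stabilized convex-concave splitting step of the Eyre type for the scalar (binary) Cahn-Hilliard equation on a 1-D periodic grid. The Cahn-Morral case treated in the paper generalizes this to a vector of concentrations with the Schur complement solve; the stabilization constant s and all names below are illustrative assumptions:

      import numpy as np

      def ch_step(u, dt, eps=0.02, s=2.0):
          """One semi-implicit step for u_t = (u^3 - u - eps^2 u_xx)_xx on [0, 2*pi):
          convex part implicit, concave part explicit with stabilization s."""
          n = u.size
          k = 2 * np.pi * np.fft.fftfreq(n, d=2 * np.pi / n)   # integer wavenumbers
          u_hat, n_hat = np.fft.fft(u), np.fft.fft(u**3)
          u_hat = (u_hat - dt * k**2 * (n_hat - (1 + s) * u_hat)) \
                  / (1 + dt * (s * k**2 + eps**2 * k**4))
          return np.real(np.fft.ifft(u_hat))

      # Spinodal decomposition from a small random perturbation:
      rng = np.random.default_rng(0)
      u = 0.05 * rng.standard_normal(256)
      for _ in range(2000):
          u = ch_step(u, dt=0.1)   # remains stable even for large time steps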

  10. Comparing the Scoring of Human Decomposition from Digital Images to Scoring Using On-site Observations.

    PubMed

    Dabbs, Gretchen R; Bytheway, Joan A; Connor, Melissa

    2017-09-01

    When in-person assessment of human decomposition is not possible in forensic casework or empirical research, the sensible substitute is color photographic images. To date, no research has confirmed the utility of color photographic images as a proxy for in situ observation of the level of decomposition. Sixteen observers scored photographs of 13 human cadavers in varying decomposition stages (PMI 2-186 days) using the Total Body Score (TBS) system (total n = 929 observations). The on-site TBS was compared with recorded observations from digital color images using a paired samples t-test. The average difference between on-site and photographic observations was -0.20 (t = -1.679, df = 928, p = 0.094). Individually, only two observers, both students with <1 year of experience, produced TBS values statistically significantly different from the on-site values, suggesting that, with experience, observations of human decomposition based on digital images can be substituted for assessments based on observation of the corpse in situ, when necessary. © 2017 American Academy of Forensic Sciences.

  11. Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy

    NASA Astrophysics Data System (ADS)

    Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng

    2018-06-01

    To analyse the components of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropies contributed by the individual fuzzy inputs. Based on this decomposition, a new global sensitivity analysis model is established for measuring the effects of the uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only rank the importance of the fuzzy inputs but also reflect, to a certain degree, the structural composition of the response function. Several examples illustrate the validity of the proposed global sensitivity analysis, which provides a useful reference for engineering design and the optimization of structural systems.

  12. Integrated Network Decompositions and Dynamic Programming for Graph Optimization (INDDGO)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The INDDGO software package offers a set of tools for finding exact solutions to graph optimization problems via tree decompositions and dynamic programming algorithms. Currently the framework offers serial and parallel (distributed memory) algorithms for finding tree decompositions and solving the maximum weighted independent set problem. The parallel dynamic programming algorithm is implemented on top of the MADNESS task-based runtime.
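
    INDDGO's dynamic programming runs over tree decompositions of general graphs; the flavor of the computation is easiest to see in the degenerate case where the input graph is itself a tree, where maximum weighted independent set reduces to a two-state DP. The sketch below is this simplification for illustration, not the package's algorithm:

      def mwis_tree(adj, weight, root=0):
          """Max-weight independent set on a tree (adjacency lists).
          dp[v] = [best with v excluded, best with v included]."""
          n = len(adj)
          dp = [[0.0, 0.0] for _ in range(n)]
          parent = [-1] * n
          order = [root]
          for v in order:                       # build a preorder traversal
              for w in adj[v]:
                  if w != parent[v]:
                      parent[w] = v
                      order.append(w)
          for v in reversed(order):             # children before parents
              dp[v][1] = weight[v]
              for w in adj[v]:
                  if w != parent[v]:
                      dp[v][0] += max(dp[w])    # v excluded: child unconstrained
                      dp[v][1] += dp[w][0]      # v included: child excluded
          return max(dp[root])

      # Path a-b-c with weights 2, 3, 2: the optimum takes both endpoints.
      print(mwis_tree([[1], [0, 2], [1]], [2.0, 3.0, 2.0]))   # 4.0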

  13. Attention trees and semantic paths

    NASA Astrophysics Data System (ADS)

    Giusti, Christian; Pieroni, Goffredo G.; Pieroni, Laura

    2007-02-01

    In the last few decades several techniques for image content extraction, often based on segmentation, have been proposed. It has been suggested that under the assumption of very general image content, segmentation becomes unstable and classification becomes unreliable. According to recent psychological theories, certain image regions attract the attention of human observers more than others and, generally, the main meaning of the image appears concentrated in those regions. Initially, regions attracting our attention are perceived as a whole and hypotheses on their content are formulated; successively the components of those regions are carefully analyzed and a more precise interpretation is reached. It is interesting to observe that an image decomposition process performed according to these psychological visual attention theories might present advantages with respect to a traditional segmentation approach. In this paper we propose an automatic procedure generating image decomposition based on the detection of visual attention regions. A new clustering algorithm taking advantage of Delaunay-Voronoi diagrams for achieving the decomposition target is proposed. By applying that algorithm recursively, starting from the whole image, a transformation of the image into a tree of related meaningful regions is obtained (Attention Tree). Successively, a semantic interpretation of the leaf nodes is carried out by using a structure of Neural Networks (Neural Tree) assisted by a knowledge base (Ontology Net). Starting from leaf nodes, paths toward the root node across the Attention Tree are attempted. The task of the path consists in relating the semantics of each child-parent node pair and, consequently, in merging the corresponding image regions. The relationship detected in this way between two tree nodes generates, as a result, the extension of the interpreted image area through each step of the path. The construction of several Attention Trees has been performed and partial results will be shown.

  14. Spectral response model for a multibin photon-counting spectral computed tomography detector and its applications.

    PubMed

    Liu, Xuejin; Persson, Mats; Bornefalk, Hans; Karlsson, Staffan; Xu, Cheng; Danielsson, Mats; Huber, Ben

    2015-07-01

    Variations among detector channels in computed tomography can lead to ring artifacts in the reconstructed images and biased estimates in projection-based material decomposition. Typically, the ring artifacts are corrected by compensation methods based on flat fielding, where transmission measurements are required for a number of material-thickness combinations. Phantoms used in these methods can be rather complex and require an extensive number of transmission measurements. Moreover, material decomposition needs knowledge of the individual response of each detector channel to account for the detector inhomogeneities. For this purpose, we have developed a spectral response model that binwise predicts the response of a multibin photon-counting detector individually for each detector channel. The spectral response model is performed in two steps. The first step employs a forward model to predict the expected numbers of photon counts, taking into account parameters such as the incident x-ray spectrum, absorption efficiency, and energy response of the detector. The second step utilizes a limited number of transmission measurements with a set of flat slabs of two absorber materials to fine-tune the model predictions, resulting in a good correspondence with the physical measurements. To verify the response model, we apply the model in two cases. First, the model is used in combination with a compensation method which requires an extensive number of transmission measurements to determine the necessary parameters. Our spectral response model successfully replaces these measurements by simulations, saving a significant amount of measurement time. Second, the spectral response model is used as the basis of the maximum likelihood approach for projection-based material decomposition. The reconstructed basis images show a good separation between the calcium-like material and the contrast agents, iodine and gadolinium. The contrast agent concentrations are reconstructed with more than 94% accuracy.
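
    A minimal sketch of the second application, projection-based two-material decomposition by maximum likelihood, is shown below for a single detector channel. The bin responses S_b and linear attenuation coefficients mu are assumed given on a discrete energy grid (all names illustrative); the estimate maximizes the Poisson log-likelihood over the basis path lengths:

      import numpy as np
      from scipy.optimize import minimize

      def expected_counts(A, S_b, mu):
          """A: basis path lengths (2,); mu: (2, energies) attenuation;
          S_b: (bins, energies) effective bin response of the channel."""
          return S_b @ np.exp(-A @ mu)

      def ml_decompose(counts, S_b, mu, A0=(0.1, 0.1)):
          """Maximize the Poisson likelihood of the measured bin counts
          (constant log(counts!) term dropped from the objective)."""
          def nll(A):
              lam = expected_counts(A, S_b, mu)
              return np.sum(lam - counts * np.log(lam))
          return minimize(nll, A0, method="Nelder-Mead").x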

  15. Spectral response model for a multibin photon-counting spectral computed tomography detector and its applications

    PubMed Central

    Liu, Xuejin; Persson, Mats; Bornefalk, Hans; Karlsson, Staffan; Xu, Cheng; Danielsson, Mats; Huber, Ben

    2015-01-01

    Abstract. Variations among detector channels in computed tomography can lead to ring artifacts in the reconstructed images and biased estimates in projection-based material decomposition. Typically, the ring artifacts are corrected by compensation methods based on flat fielding, where transmission measurements are required for a number of material-thickness combinations. Phantoms used in these methods can be rather complex and require an extensive number of transmission measurements. Moreover, material decomposition needs knowledge of the individual response of each detector channel to account for the detector inhomogeneities. For this purpose, we have developed a spectral response model that binwise predicts the response of a multibin photon-counting detector individually for each detector channel. The spectral response model is performed in two steps. The first step employs a forward model to predict the expected numbers of photon counts, taking into account parameters such as the incident x-ray spectrum, absorption efficiency, and energy response of the detector. The second step utilizes a limited number of transmission measurements with a set of flat slabs of two absorber materials to fine-tune the model predictions, resulting in a good correspondence with the physical measurements. To verify the response model, we apply the model in two cases. First, the model is used in combination with a compensation method which requires an extensive number of transmission measurements to determine the necessary parameters. Our spectral response model successfully replaces these measurements by simulations, saving a significant amount of measurement time. Second, the spectral response model is used as the basis of the maximum likelihood approach for projection-based material decomposition. The reconstructed basis images show a good separation between the calcium-like material and the contrast agents, iodine and gadolinium. The contrast agent concentrations are reconstructed with more than 94% accuracy. PMID:26839904

  16. An inductance Fourier decomposition-based current-hysteresis control strategy for switched reluctance motors

    NASA Astrophysics Data System (ADS)

    Hua, Wei; Qi, Ji; Jia, Meng

    2017-05-01

    Switched reluctance machines (SRMs) have attracted extensive attention due to their inherent advantages, including a simple and robust structure, low cost, excellent fault tolerance and a wide speed range. However, one of the bottlenecks limiting further application of SRMs is their unfavorable torque ripple, and the consequent noise and vibration, caused by the unique doubly salient structure and the pulsed-current power supply method. In this paper, an inductance Fourier decomposition-based current-hysteresis-control (IFD-CHC) strategy is proposed to reduce the torque ripple of SRMs. After obtaining a nonlinear inductance-current-position model based on Fourier decomposition, reference currents can be calculated from the reference torque and the derived inductance model. Both simulations and experimental results confirm the effectiveness of the proposed strategy.
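
    The first step of such a strategy, fitting the measured inductance-position characteristic with a truncated Fourier series, can be sketched as follows for one phase at a fixed current. The rotor pole number Nr fixing the electrical period and all other names are illustrative assumptions; under magnetic linearity the fitted model then yields the reference current through T = (1/2) i^2 dL/dθ:

      import numpy as np

      def fit_inductance_fourier(theta, L, n_harm=3, Nr=8):
          """Least-squares fit L(theta) ~ a0 + sum_k a_k cos(k*Nr*theta)."""
          X = np.column_stack(
              [np.ones_like(theta)]
              + [np.cos(k * Nr * theta) for k in range(1, n_harm + 1)])
          coeffs, *_ = np.linalg.lstsq(X, L, rcond=None)
          return coeffs                    # a0, a1, ..., a_{n_harm}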

  17. Community ecology in 3D: Tensor decomposition reveals spatio-temporal dynamics of large ecological communities.

    PubMed

    Frelat, Romain; Lindegren, Martin; Denker, Tim Spaanheden; Floeter, Jens; Fock, Heino O; Sguotti, Camilla; Stäbler, Moritz; Otto, Saskia A; Möllmann, Christian

    2017-01-01

    Understanding spatio-temporal dynamics of biotic communities containing large numbers of species is crucial to guide ecosystem management and conservation efforts. However, traditional approaches usually focus on studying community dynamics either in space or in time, often failing to fully account for interlinked spatio-temporal changes. In this study, we demonstrate and promote the use of tensor decomposition for disentangling spatio-temporal community dynamics in long-term monitoring data. Tensor decomposition builds on traditional multivariate statistics (e.g. Principal Component Analysis) but extends it to multiple dimensions. This extension allows for the synchronized study of multiple ecological variables measured repeatedly in time and space. We applied this comprehensive approach to explore the spatio-temporal dynamics of 65 demersal fish species in the North Sea, a marine ecosystem strongly altered by human activities and climate change. Our case study demonstrates how tensor decomposition can successfully (i) characterize the main spatio-temporal patterns and trends in species abundances, (ii) identify sub-communities of species that share similar spatial distribution and temporal dynamics, and (iii) reveal external drivers of change. Our results revealed a strong spatial structure in fish assemblages persistent over time and linked to differences in depth, primary production and seasonality. Furthermore, we simultaneously characterized important temporal distribution changes related to the low frequency temperature variability inherent in the Atlantic Multidecadal Oscillation. Finally, we identified six major sub-communities composed of species sharing similar spatial distribution patterns and temporal dynamics. Our case study demonstrates the application and benefits of using tensor decomposition for studying complex community data sets usually derived from large-scale monitoring programs.
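
    The record does not state which tensor decomposition variant was used; a compact NumPy-only sketch of one standard choice, the truncated higher-order SVD, is given below for a species x station x year abundance array. Names are illustrative:

      import numpy as np

      def unfold(T, mode):
          """Mode-n unfolding: move axis `mode` to the front and flatten."""
          return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

      def hosvd(T, ranks):
          """Truncated HOSVD: one factor matrix per mode plus a core tensor."""
          U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
          core = T
          for m, Um in enumerate(U):       # project each mode onto its factors
              core = np.moveaxis(
                  np.tensordot(Um.T, np.moveaxis(core, m, 0), axes=1), 0, m)
          return core, U   # U[0]: species loadings, U[1]: spatial, U[2]: temporal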

  18. Generalized Cahn-Hilliard equation for solutions with drastically different diffusion coefficients. Application to exsolution in ternary feldspar

    NASA Astrophysics Data System (ADS)

    Petrishcheva, E.; Abart, R.

    2012-04-01

    We address mathematical modeling and computer simulations of phase decomposition in a multicomponent system. As opposed to binary alloys with one common diffusion parameter, our main concern is phase decomposition in real geological systems under influence of strongly different interdiffusion coefficients, as it is frequently encountered in mineral solid solutions with coupled diffusion on different sub-lattices. Our goal is to explain deviations from equilibrium element partitioning which are often observed in nature, e.g., in a cooled ternary feldspar. To this end we first adopt the standard Cahn-Hilliard model to the multicomponent diffusion problem and account for arbitrary diffusion coefficients. This is done by using Onsager's approach such that flux of each component results from the combined action of chemical potentials of all components. In a second step the generalized Cahn-Hilliard equation is solved numerically using finite-elements approach. We introduce and investigate several decomposition scenarios that may produce systematic deviations from the equilibrium element partitioning. Both ideal solutions and ternary feldspar are considered. Typically, the slowest component is initially "frozen" and the decomposition effectively takes place only for two "fast" components. At this stage the deviations from the equilibrium element partitioning are indeed observed. These deviations may became "frozen" under conditions of cooling. The final equilibration of the system occurs on a considerably slower time scale. Therefore the system may indeed remain unaccomplished at the observation point. Our approach reveals the intrinsic reasons for the specific phase separation path and rigorously describes it by direct numerical solution of the generalized Cahn-Hilliard equation.

  19. Defect Detection in Textures through the Use of Entropy as a Means for Automatically Selecting the Wavelet Decomposition Level.

    PubMed

    Navarro, Pedro J; Fernández-Isla, Carlos; Alcover, Pedro María; Suardíaz, Juan

    2016-07-27

    This paper presents a robust method for defect detection in textures, entropy-based automatic selection of the wavelet decomposition level (EADL), based on a wavelet reconstruction scheme, for detecting defects in a wide variety of structural and statistical textures. Two main features are presented. One of the new features is an original use of the normalized absolute function value (NABS) calculated from the wavelet coefficients derived at various different decomposition levels in order to identify textures where the defect can be isolated by eliminating the texture pattern in the first decomposition level. The second is the use of Shannon's entropy, calculated over detail subimages, for automatic selection of the band for image reconstruction, which, unlike other techniques, such as those based on the co-occurrence matrix or on energy calculation, provides a lower decomposition level, thus avoiding excessive degradation of the image, allowing a more accurate defect segmentation. A metric analysis of the results of the proposed method with nine different thresholding algorithms determined that selecting the appropriate thresholding method is important to achieve optimum performance in defect detection. As a consequence, several different thresholding algorithms depending on the type of texture are proposed.
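
    One plausible reading of the entropy-based level selection, sketched with PyWavelets (the function names and the minimum-entropy criterion are assumptions for illustration, not the paper's exact rule):

      import numpy as np
      import pywt

      def shannon_entropy(c):
          p = np.abs(c).ravel()
          p = p / p.sum() if p.sum() > 0 else p
          p = p[p > 0]
          return -np.sum(p * np.log2(p))

      def select_level(img, wavelet="db2", max_level=4):
          """Score each level by the entropy of its detail subimages and
          return the level chosen for reconstruction."""
          coeffs = pywt.wavedec2(img, wavelet, level=max_level)
          # coeffs[1:] holds (cH, cV, cD) tuples from coarsest to finest.
          scores = {max_level - i: sum(shannon_entropy(d) for d in details)
                    for i, details in enumerate(coeffs[1:])}
          return min(scores, key=scores.get)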

  20. Dominant modal decomposition method

    NASA Astrophysics Data System (ADS)

    Dombovari, Zoltan

    2017-03-01

    The paper deals with the automatic decomposition of experimental frequency response functions (FRFs) of mechanical structures. The decomposition of FRFs is based on the Green function representation of free vibratory systems. After the determination of the impulse dynamic subspace, the system matrix is formulated and the poles are calculated directly. By means of the corresponding eigenvectors, the contribution of each element of the impulse dynamic subspace is determined and a sufficient decomposition of the corresponding FRF is carried out. With the presented dominant modal decomposition (DMD) method, the mode shapes, the modal participation vectors and the modal scaling factors are identified using the decomposed FRFs. An analytical example is presented along with experimental case studies from the machine tool industry.

  1. Negative values of quasidistributions and quantum wave and number statistics

    NASA Astrophysics Data System (ADS)

    Peřina, J.; Křepelka, J.

    2018-04-01

    We consider nonclassical wave and number quantum statistics, and perform a decomposition of quasidistributions for nonlinear optical down-conversion processes using Bessel functions. We show that negative values of the quasidistribution do not directly represent probabilities; however, they directly influence measurable number statistics. Negative terms in the decomposition related to the nonclassical behavior with negative amplitudes of probability can be interpreted as positive amplitudes of probability in the negative orthogonal Bessel basis, whereas positive amplitudes of probability in the positive basis describe classical cases. However, probabilities are positive in all cases, including negative values of quasidistributions. Negative and positive contributions of decompositions to quasidistributions are estimated. The approach can be adapted to quantum coherence functions.

  2. Isoconversional approach for non-isothermal decomposition of un-irradiated and photon-irradiated 5-fluorouracil.

    PubMed

    Mohamed, Hala Sh; Dahy, AbdelRahman A; Mahfouz, Refaat M

    2017-10-25

    Kinetic analysis of the non-isothermal decomposition of un-irradiated and photon-beam-irradiated 5-fluorouracil (5-FU), an anti-cancer drug, was carried out in static air. Thermal decomposition of 5-FU proceeds in two steps: one minor step in the temperature range 270-283°C, followed by the major step in the temperature range 285-360°C. The non-isothermal data for un-irradiated and photon-irradiated 5-FU were analyzed using linear (Tang) and non-linear (Vyazovkin) isoconversional methods. The application of these model-free methods to the present kinetic data showed a clear dependence of the activation energy on the extent of conversion. For un-irradiated 5-FU, the non-isothermal data analysis indicates that the decomposition is generally described by the A3 and A4 models for the minor and major decomposition steps, respectively. For a photon-irradiated sample of 5-FU with a total absorbed dose of 10 Gy, the decomposition is controlled by the A2 model throughout the conversion range. The activation energies calculated for photon-irradiated 5-FU were found to be lower than the values obtained from the thermal decomposition of the un-irradiated sample, probably due to the formation of additional nucleation sites created by photon irradiation. The decomposition path was investigated by intrinsic reaction coordinate (IRC) calculations at the B3LYP/6-311++G(d,p) level of DFT. Two transition states were involved in the process, corresponding to homolytic rupture of the NH bond and ring scission, respectively. Published by Elsevier B.V.
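
    For readers unfamiliar with isoconversional ("model-free") analysis, a differential variant of the idea is easy to sketch: at each fixed conversion, ln(dα/dt) versus 1/T across several heating rates is a line of slope -Ea/R. The sketch below implements the Friedman method as a stand-in for the Tang and Vyazovkin methods named in the abstract; names are illustrative:

      import numpy as np

      R = 8.314  # J/(mol K)

      def isoconversional_Ea(temps_K, rates, alphas, alpha_grid):
          """temps_K, rates, alphas: lists of per-run arrays (one per
          heating rate); returns apparent Ea at each conversion level."""
          Ea = []
          for a in alpha_grid:
              inv_T, ln_r = [], []
              for T, r, al in zip(temps_K, rates, alphas):
                  i = np.argmin(np.abs(al - a))   # sample closest to conversion a
                  inv_T.append(1.0 / T[i])
                  ln_r.append(np.log(r[i]))
              slope, _ = np.polyfit(inv_T, ln_r, 1)
              Ea.append(-slope * R)
          return np.array(Ea)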

  3. Exploring Patterns of Soil Organic Matter Decomposition with Students through the Global Decomposition Project (GDP) and the Interactive Model of Leaf Decomposition (IMOLD)

    NASA Astrophysics Data System (ADS)

    Steiner, S. M.; Wood, J. H.

    2015-12-01

    As decomposition rates are affected by climate change, understanding crucial soil interactions that affect plant growth and decomposition becomes a vital part of contributing to the students' knowledge base. The Global Decomposition Project (GDP) is designed to introduce and educate students about soil organic matter and decomposition through a standardized protocol for collecting, reporting, and sharing data. The Interactive Model of Leaf Decomposition (IMOLD) utilizes animations and modeling to learn about the carbon cycle, leaf anatomy, and the role of microbes in decomposition. Paired together, IMOLD teaches the background information and allows simulation of numerous scenarios, and the GDP is a data collection protocol that allows students to gather usable measurements of decomposition in the field. Our presentation will detail how the GDP protocol works, how to obtain or make the materials needed, and how results will be shared. We will also highlight learning objectives from the three animations of IMOLD, and demonstrate how students can experiment with different climates and litter types using the interactive model to explore a variety of decomposition scenarios. The GDP demonstrates how scientific methods can be extended to educate broader audiences, and data collected by students can provide new insight into global patterns of soil decomposition. Using IMOLD, students will gain a better understanding of carbon cycling in the context of litter decomposition, as well as learn to pose questions they can answer with an authentic computer model. Using the GDP protocols and IMOLD provide a pathway for scientists and educators to interact and reach meaningful education and research goals.

  4. TE/TM decomposition of electromagnetic sources

    NASA Technical Reports Server (NTRS)

    Lindell, Ismo V.

    1988-01-01

    Three methods are given by which bounded EM sources can be decomposed into two parts radiating transverse electric (TE) and transverse magnetic (TM) fields with respect to a given constant direction in space. The theory applies source equivalence and nonradiating source concepts, which lead to decomposition methods based on a recursive formula or two differential equations for the determination of the TE and TM components of the original source. Decompositions for a dipole in terms of point, line, and plane sources are studied in detail. The planar decomposition is seen to match to an earlier result given by Clemmow (1963). As an application of the point decomposition method, it is demonstrated that the general exact image expression for the Sommerfeld half-space problem, previously derived through heuristic reasoning, can be more straightforwardly obtained through the present decomposition method.

  5. Double Bounce Component in Cross-Polarimetric SAR from a New Scattering Target Decomposition

    NASA Astrophysics Data System (ADS)

    Hong, Sang-Hoon; Wdowinski, Shimon

    2013-08-01

    Common vegetation scattering theories assume that the Synthetic Aperture Radar (SAR) cross-polarization (cross-pol) signal represents solely volume scattering. We found this assumption incorrect based on SAR phase measurements acquired over the south Florida Everglades wetlands, which indicate that the cross-pol radar signal often samples the water surface beneath the vegetation. Based on these new observations, we propose that the cross-pol measurement consists of both volume scattering and double bounce components. The simplest multi-bounce scattering mechanism that generates a cross-pol signal is scattering from rotated dihedrals. Thus, we use the rotated dihedral mechanism with a probability density function to revise some of the vegetation scattering theories and develop a three-component decomposition algorithm with single bounce, double bounce from both co-pol and cross-pol, and volume scattering components. We applied the new decomposition analysis to both urban and rural environments using Radarsat-2 quad-pol datasets. The decomposition of San Francisco's urban area shows higher double bounce scattering and reduced volume scattering compared to other common three-component decompositions. The decomposition of the rural Everglades area shows that the relations between volume and cross-pol double bounce depend on the vegetation density. The new decomposition can be useful for better understanding vegetation scattering behavior over various surfaces and for the estimation of above-ground biomass using SAR observations.

  6. Aging-driven decomposition in zolpidem hemitartrate hemihydrate and the single-crystal structure of its decomposition products.

    PubMed

    Vega, Daniel R; Baggio, Ricardo; Roca, Mariana; Tombari, Dora

    2011-04-01

    The "aging-driven" decomposition of zolpidem hemitartrate hemihydrate (form A) has been followed by X-ray powder diffraction (XRPD), and the crystal and molecular structures of the decomposition products studied by single-crystal methods. The process is very similar to the "thermally driven" one, recently described in the literature for form E (Halasz and Dinnebier. 2010. J Pharm Sci 99(2): 871-874), resulting in a two-phase system: the neutral free base (common to both decomposition processes) and, in the present case, a novel zolpidem tartrate monohydrate, unique to the "aging-driven" decomposition. Our room-temperature single-crystal analysis gives for the free base comparable results as the high-temperature XRPD ones already reported by Halasz and Dinnebier: orthorhombic, Pcba, a = 9.6360(10) Å, b = 18.2690(5) Å, c = 18.4980(11) Å, and V = 3256.4(4) Å(3) . The unreported zolpidem tartrate monohydrate instead crystallizes in monoclinic P21 , which, for comparison purposes, we treated in the nonstandard setting P1121 with a = 20.7582(9) Å, b = 15.2331(5) Å, c = 7.2420(2) Å, γ = 90.826(2)°, and V = 2289.73(14) Å(3) . The structure presents two complete moieties in the asymmetric unit (z = 4, z' = 2). The different phases obtained in both decompositions are readily explained, considering the diverse genesis of both processes. Copyright © 2010 Wiley-Liss, Inc.

  7. Assessing the effect of different treatments on decomposition rate of dairy manure.

    PubMed

    Khalil, Tariq M; Higgins, Stewart S; Ndegwa, Pius M; Frear, Craig S; Stöckle, Claudio O

    2016-11-01

    Confined animal feeding operations (CAFOs) contribute to greenhouse gas emission, but the magnitude of these emissions as a function of operation size, infrastructure, and manure management are difficult to assess. Modeling is a viable option to estimate gaseous emission and nutrient flows from CAFOs. These models use a decomposition rate constant for carbon mineralization. However, this constant is usually determined assuming a homogenous mix of manure, ignoring the effects of emerging manure treatments. The aim of this study was to measure and compare the decomposition rate constants of dairy manure in single and three-pool decomposition models, and to develop an empirical model based on chemical composition of manure for prediction of a decomposition rate constant. Decomposition rate constants of manure before and after an anaerobic digester (AD), following coarse fiber separation, and fine solids removal were determined under anaerobic conditions for single and three-pool decomposition models. The decomposition rates of treated manure effluents differed significantly from untreated manure for both single and three-pool decomposition models. In the single-pool decomposition model, AD effluent containing only suspended solids had a relatively high decomposition rate of 0.060 d(-1), while liquid with coarse fiber and fine solids removed had the lowest rate of 0.013 d(-1). In the three-pool decomposition model, fast and slow decomposition rate constants (0.25 d(-1) and 0.016 d(-1) respectively) of untreated AD influent were also significantly different from treated manure fractions. A regression model to predict the decomposition rate of treated dairy manure fitted well (R(2) = 0.83) to observed data. Copyright © 2016 Elsevier Ltd. All rights reserved.
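
    The two model forms compared in the study are simple to state and fit. A sketch with SciPy, using made-up remaining-carbon data for the single-pool fit (all numbers and names illustrative):

      import numpy as np
      from scipy.optimize import curve_fit

      def single_pool(t, C0, k):
          """First-order decay: C(t) = C0 * exp(-k t)."""
          return C0 * np.exp(-k * t)

      def three_pool(t, f, s, kf, ks, kr):
          """Fast, slow and residual pools with separate rate constants."""
          return (f * np.exp(-kf * t) + s * np.exp(-ks * t)
                  + (1.0 - f - s) * np.exp(-kr * t))

      t = np.array([0, 5, 10, 20, 40, 60.0])              # days
      C = np.array([1.0, 0.74, 0.56, 0.31, 0.10, 0.04])   # fraction of C remaining
      (C0, k), _ = curve_fit(single_pool, t, C, p0=(1.0, 0.05))
      print(k)   # decomposition rate constant, d^-1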

  8. Wash Bottle Laboratory Exercises: Iodide-Catalyzed H[subscript 2]O[subscript 2] Decomposition Reaction Kinetics Using the Initial Rate Approach

    ERIC Educational Resources Information Center

    Barlag, Rebecca; Nyasulu, Frazier

    2010-01-01

    A wash bottle water displacement scheme is used to determine the kinetics of the iodide-catalyzed H[subscript 2]O[subscript 2] decomposition reaction. The reagents (total volume 5.00 mL) are added to a test tube that is placed in a wash bottle containing water. The mass of the water displaced in [approximately]60 s is measured. The reaction is…

  9. Heuristic decomposition for non-hierarchic systems

    NASA Technical Reports Server (NTRS)

    Bloebaum, Christina L.; Hajela, P.

    1991-01-01

    Design and optimization is substantially more complex in multidisciplinary and large-scale engineering applications due to the existing inherently coupled interactions. The paper introduces a quasi-procedural methodology for multidisciplinary optimization that is applicable for nonhierarchic systems. The necessary decision-making support for the design process is provided by means of an embedded expert systems capability. The method employs a decomposition approach whose modularity allows for implementation of specialized methods for analysis and optimization within disciplines.

  10. Waveform LiDAR processing: comparison of classic approaches and optimized Gold deconvolution to characterize vegetation structure and terrain elevation

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Popescu, S. C.; Krause, K.

    2016-12-01

    Waveform Light Detection and Ranging (LiDAR) data have advantages over discrete-return LiDAR data in accurately characterizing vegetation structure. However, we lack a comprehensive understanding of waveform data processing approaches under different topography and vegetation conditions. The objective of this paper is to highlight a novel deconvolution algorithm, the Gold algorithm, for processing waveform LiDAR data with optimal deconvolution parameters. Further, we present a comparative study of waveform processing methods to provide insight into selecting an approach for a given combination of vegetation and terrain characteristics. We employed two waveform processing methods: 1) direct decomposition, 2) deconvolution and decomposition. In method two, we utilized two deconvolution algorithms: the Richardson-Lucy (RL) algorithm and the Gold algorithm. Comprehensive and quantitative comparisons were conducted in terms of the number of detected echoes, position accuracy, the bias of the end products (such as the digital terrain model (DTM) and canopy height model (CHM)) from discrete LiDAR data, along with parameter uncertainty for these end products obtained from the different methods. This study was conducted at three study sites that include diverse ecological regions, vegetation and elevation gradients. Results demonstrate that the two deconvolution algorithms are sensitive to the pre-processing steps of the input data. The deconvolution and decomposition method is more capable of detecting hidden echoes with a lower false echo detection rate, especially for the Gold algorithm. Compared to the reference data, all approaches generate satisfactory accuracy assessment results with small mean spatial difference (<1.22 m for DTMs, <0.77 m for CHMs) and root mean square error (RMSE) (<1.26 m for DTMs, <1.93 m for CHMs). More specifically, the Gold algorithm is superior to the others, with a smaller RMSE (<1.01 m), while the direct decomposition approach works better in terms of the percentage of spatial difference within 0.5 and 1 m. The parameter uncertainty analysis demonstrates that the Gold algorithm outperforms the other approaches in dense vegetation areas, with the smallest RMSE, while the RL algorithm performs better in sparse vegetation areas in terms of RMSE.
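
    Both deconvolution algorithms are short iterative schemes. Illustrative 1-D NumPy versions are sketched below; the paper applies them to recorded waveforms with optimized parameters before Gaussian decomposition. The adjoint is implemented with the flipped kernel, and small constants guard against division by zero:

      import numpy as np

      def richardson_lucy(y, kernel, n_iter=50):
          """RL iteration: x <- x * K^T(y / (K x)), nonnegative x."""
          x = np.full_like(y, y.mean())
          k_flip = kernel[::-1]
          for _ in range(n_iter):
              conv = np.convolve(x, kernel, mode="same") + 1e-12
              x *= np.convolve(y / conv, k_flip, mode="same")
          return x

      def gold(y, kernel, n_iter=200):
          """Gold ratio iteration: x <- x * (K^T y) / (K^T K x)."""
          x = np.full_like(y, y.mean())
          k_flip = kernel[::-1]
          Kty = np.convolve(y, k_flip, mode="same")
          for _ in range(n_iter):
              KtKx = np.convolve(np.convolve(x, kernel, mode="same"),
                                 k_flip, mode="same") + 1e-12
              x *= Kty / KtKx
          return x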

  11. Adaptive Fourier decomposition based R-peak detection for noisy ECG Signals.

    PubMed

    Ze Wang; Chi Man Wong; Feng Wan

    2017-07-01

    An adaptive Fourier decomposition (AFD) based R-peak detection method is proposed for noisy ECG signals. Although many QRS detection methods have been proposed in the literature, most require high signal quality. The proposed method extracts the R waves from the energy domain using the AFD and determines the R-peak locations based on the key decomposition parameters, achieving denoising and R-peak detection at the same time. Validated on clinical ECG signals from the MIT-BIH Arrhythmia Database, the proposed method shows better performance than the Pan-Tompkins (PT) algorithm in both situations: the native PT algorithm and the PT algorithm combined with a denoising process.

  12. Flux Analysis of Free Amino Sugars and Amino Acids in Soils by Isotope Tracing with a Novel Liquid Chromatography/High Resolution Mass Spectrometry Platform

    PubMed Central

    2017-01-01

    Soil fluxomics analysis can provide pivotal information for understanding soil biochemical pathways and their regulation, but direct measurement methods are rare. Here, we describe an approach to measure soil extracellular metabolite (amino sugar and amino acid) concentrations and fluxes based on a 15N isotope pool dilution technique via liquid chromatography and high-resolution mass spectrometry. We produced commercially unavailable 15N and 13C labeled amino sugars and amino acids by hydrolyzing peptidoglycan isolated from isotopically labeled bacterial biomass and used them as tracers (15N) and internal standards (13C). High-resolution (Orbitrap Exactive) MS with a resolution of 50 000 allowed us to separate different stable isotope labeled analogues across a large range of metabolites. The utilization of 13C internal standards greatly improved the accuracy and reliability of absolute quantification. We successfully applied this method to two types of soils and quantified the extracellular gross fluxes of 2 amino sugars, 18 amino acids, and 4 amino acid enantiomers. Compared to the influx and efflux rates of most amino acids, similar ones were found for glucosamine, indicating that this amino sugar is released through peptidoglycan and chitin decomposition and serves as an important nitrogen source for soil microorganisms. d-Alanine and d-glutamic acid derived from peptidoglycan decomposition exhibited similar turnover rates as their l-enantiomers. This novel approach offers new strategies to advance our understanding of the production and transformation pathways of soil organic N metabolites, including the unknown contributions of peptidoglycan and chitin decomposition to soil organic N cycling. PMID:28776982

  13. Hybrid Upwind Splitting (HUS) by a Field-by-Field Decomposition

    NASA Technical Reports Server (NTRS)

    Coquel, Frederic; Liou, Meng-Sing

    1995-01-01

    We introduce and develop a new approach for upwind biasing: the hybrid upwind splitting (HUS) method. This original procedure is based on a suitable hybridization of current prominent flux vector splitting (FVS) and flux difference splitting (FDS) methods. The HUS method is designed to naturally combine the respective strengths of the above methods while excluding their main deficiencies. Specifically, the HUS strategy yields a family of upwind methods that exhibit the robustness of FVS schemes in the capture of nonlinear waves and the accuracy of some FDS schemes in the resolution of linear waves. We give a detailed construction of the HUS methods following a general and systematic procedure performed directly at the basic level of the field-by-field (i.e. wave) decomposition involved in FDS methods. For such a given decomposition, each field is endowed either with FVS or FDS numerical fluxes, depending on the nonlinear nature of the field under consideration. Such a design principle is made possible thanks to the introduction of a convenient formalism that provides us with a unified framework for upwind methods. The HUS methods we propose bring significant improvements over current methods in terms of accuracy and robustness. They yield entropy-satisfying approximate solutions, as is strongly supported by numerical experiments. Field-by-field hybrid numerical fluxes also achieve fairly simple and explicit expressions and hence require a computational effort between that of the FVS and FDS methods. Several numerical experiments, ranging from stiff 1-D shock-tube problems to high-speed viscous flow problems, are displayed to illustrate the benefits of the present approach. We assess in particular the relevance of our HUS schemes to viscous flow calculations.

  14. Adaptability in linkage of soil carbon nutrient cycles - the SEAM model

    NASA Astrophysics Data System (ADS)

    Wutzler, Thomas; Zaehle, Sönke; Schrumpf, Marion; Ahrens, Bernhard; Reichstein, Markus

    2017-04-01

    In order to understand the coupling of carbon (C) and nitrogen (N) cycles, it is necessary to understand the C- and N-use efficiencies of microbial soil organic matter (SOM) decomposition. While important controls of those efficiencies by microbial community adaptations have been shown at the scale of a soil pore, an abstract simplified representation of community adaptations is needed at ecosystem scale. Therefore we developed the soil enzyme allocation model (SEAM), which takes a holistic, partly optimality-based approach to describe C and N dynamics at the spatial scale of an ecosystem and time-scales of years and longer. We explicitly modelled community adaptation strategies of resource allocation to extracellular enzymes and enzyme limitations on SOM decomposition. Using SEAM, we explored whether alternative strategy hypotheses can have strong effects on SOM and inorganic N cycling. Results from prototypical simulations and a calibration to observations of an intensive pasture site showed that the so-called revenue enzyme allocation strategy was most viable. This strategy accounts for microbial adaptations to both the stoichiometry and the amount of different SOM resources, and supported the largest microbial biomass under a wide range of conditions. Predictions of the SEAM model were qualitatively similar to models explicitly representing competing microbial groups. With adaptive enzyme allocation under conditions of high C/N ratio of litter inputs, N formerly locked in slowly degrading SOM pools was made accessible, whereas with high N inputs, N was sequestered in SOM and protected from leaching. The finding that adaptation in enzyme allocation changes the C- and N-use efficiencies of SOM decomposition implies that concepts of C-nutrient cycle interactions should account for the effects of such adaptations. This can be done using a holistic optimality approach.

  15. Do soil organisms affect aboveground litter decomposition in the semiarid Patagonian steppe, Argentina?

    PubMed

    Araujo, Patricia I; Yahdjian, Laura; Austin, Amy T

    2012-01-01

    Surface litter decomposition in arid and semiarid ecosystems is often faster than predicted by climatic parameters such as annual precipitation or evapotranspiration, or based on standard indices of litter quality such as lignin or nitrogen concentrations. Abiotic photodegradation has been demonstrated to be an important factor controlling aboveground litter decomposition in aridland ecosystems, but soil fauna, particularly macrofauna such as termites and ants, have also been identified as key players affecting litter mass loss in warm deserts. Our objective was to quantify the importance of soil organisms on surface litter decomposition in the Patagonian steppe in the absence of photodegradative effects, to establish the relative importance of soil organisms on rates of mass loss and nitrogen release. We estimated the relative contribution of soil fauna and microbes to litter decomposition of a dominant grass using litterboxes with variable mesh sizes that excluded groups of soil fauna based on size class (10, 2, and 0.01 mm), which were placed beneath shrub canopies. We also employed chemical repellents (naphthalene and fungicide). The exclusion of macro- and mesofauna had no effect on litter mass loss over 3 years (P = 0.36), as litter decomposition was similar in all soil fauna exclusions and naphthalene-treated litter. In contrast, reduction of fungal activity significantly inhibited litter decomposition (P < 0.001). Although soil fauna have been mentioned as a key control of litter decomposition in warm deserts, biogeographic legacies and temperature limitation may constrain the importance of these organisms in temperate aridlands, particularly in the southern hemisphere.

  16. Application of Petri net based analysis techniques to signal transduction pathways.

    PubMed

    Sackmann, Andrea; Heiner, Monika; Koch, Ina

    2006-11-02

    Signal transduction pathways are usually modelled using classical quantitative methods, which are based on ordinary differential equations (ODEs). However, some difficulties are inherent in this approach. On the one hand, the kinetic parameters involved are often unknown and have to be estimated. With increasing size and complexity of signal transduction pathways, the estimation of missing kinetic data is not possible. On the other hand, ODEs based models do not support any explicit insights into possible (signal-) flows within the network. Moreover, a huge amount of qualitative data is available due to high-throughput techniques. In order to get information on the systems behaviour, qualitative analysis techniques have been developed. Applications of the known qualitative analysis methods concern mainly metabolic networks. Petri net theory provides a variety of established analysis techniques, which are also applicable to signal transduction models. In this context special properties have to be considered and new dedicated techniques have to be designed. We apply Petri net theory to model and analyse signal transduction pathways first qualitatively before continuing with quantitative analyses. This paper demonstrates how to build systematically a discrete model, which reflects provably the qualitative biological behaviour without any knowledge of kinetic parameters. The mating pheromone response pathway in Saccharomyces cerevisiae serves as case study. We propose an approach for model validation of signal transduction pathways based on the network structure only. For this purpose, we introduce the new notion of feasible t-invariants, which represent minimal self-contained subnets being active under a given input situation. Each of these subnets stands for a signal flow in the system. We define maximal common transition sets (MCT-sets), which can be used for t-invariant examination and net decomposition into smallest biologically meaningful functional units. The paper demonstrates how Petri net analysis techniques can promote a deeper understanding of signal transduction pathways. The new concepts of feasible t-invariants and MCT-sets have been proven to be useful for model validation and the interpretation of the biological system behaviour. Whereas MCT-sets provide a decomposition of the net into disjunctive subnets, feasible t-invariants describe subnets, which generally overlap. This work contributes to qualitative modelling and to the analysis of large biological networks by their fully automatic decomposition into biologically meaningful modules.
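
    The structural objects involved are easy to compute for small nets. A t-invariant is a nonnegative integer vector x with C x = 0, where C is the incidence matrix (places x transitions). A toy check with SymPy follows; the net and names are illustrative, and finding all minimal-support invariants requires a dedicated algorithm rather than a plain nullspace:

      from sympy import Matrix

      # t1 produces p1; t2 moves a token p1 -> p2; t3 consumes p2.
      C = Matrix([[1, -1,  0],
                  [0,  1, -1]])

      # Nullspace directions are candidate t-invariants; here [1, 1, 1]:
      # firing each transition once reproduces the marking (a signal flow).
      for v in C.nullspace():
          print(v.T)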

  17. Application of Petri net based analysis techniques to signal transduction pathways

    PubMed Central

    Sackmann, Andrea; Heiner, Monika; Koch, Ina

    2006-01-01

    Background Signal transduction pathways are usually modelled using classical quantitative methods, which are based on ordinary differential equations (ODEs). However, some difficulties are inherent in this approach. On the one hand, the kinetic parameters involved are often unknown and have to be estimated. With increasing size and complexity of signal transduction pathways, the estimation of missing kinetic data is not possible. On the other hand, ODEs based models do not support any explicit insights into possible (signal-) flows within the network. Moreover, a huge amount of qualitative data is available due to high-throughput techniques. In order to get information on the systems behaviour, qualitative analysis techniques have been developed. Applications of the known qualitative analysis methods concern mainly metabolic networks. Petri net theory provides a variety of established analysis techniques, which are also applicable to signal transduction models. In this context special properties have to be considered and new dedicated techniques have to be designed. Methods We apply Petri net theory to model and analyse signal transduction pathways first qualitatively before continuing with quantitative analyses. This paper demonstrates how to build systematically a discrete model, which reflects provably the qualitative biological behaviour without any knowledge of kinetic parameters. The mating pheromone response pathway in Saccharomyces cerevisiae serves as case study. Results We propose an approach for model validation of signal transduction pathways based on the network structure only. For this purpose, we introduce the new notion of feasible t-invariants, which represent minimal self-contained subnets being active under a given input situation. Each of these subnets stands for a signal flow in the system. We define maximal common transition sets (MCT-sets), which can be used for t-invariant examination and net decomposition into smallest biologically meaningful functional units. Conclusion The paper demonstrates how Petri net analysis techniques can promote a deeper understanding of signal transduction pathways. The new concepts of feasible t-invariants and MCT-sets have been proven to be useful for model validation and the interpretation of the biological system behaviour. Whereas MCT-sets provide a decomposition of the net into disjunctive subnets, feasible t-invariants describe subnets, which generally overlap. This work contributes to qualitative modelling and to the analysis of large biological networks by their fully automatic decomposition into biologically meaningful modules. PMID:17081284

  18. Velocity measurements of heterogeneous RBC flow in capillary vessels using dynamic laser speckle signal

    PubMed Central

    Li, Chenxi; Wang, Ruikang

    2017-01-01

    Abstract. We propose an approach to measure heterogeneous velocities of red blood cells (RBCs) in capillary vessels using full-field time-varying dynamic speckle signals. The approach utilizes a low-coherence laser speckle imaging system to record the instantaneous speckle pattern, followed by an eigen-decomposition-based filtering algorithm to extract the dynamic speckle signal due to the moving RBCs. The velocity of heterogeneous RBC flow is determined by cross-correlating the temporal dynamic speckle signals obtained at adjacent locations. We verify the approach by imaging the mouse pinna in vivo, demonstrating its capability for full-field RBC flow mapping and for quantifying flow patterns with high resolution. The approach is expected to enable investigation of the dynamics of RBC flow in capillaries under physiological changes. PMID:28384709
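
    The final cross-correlation step reduces to a lag estimate between two pixel time series. A minimal sketch (names and units illustrative):

      import numpy as np

      def rbc_velocity(sig_a, sig_b, fs, spacing_um):
          """Velocity from dynamic speckle signals at two adjacent pixels
          along a capillary: the lag maximizing the cross-correlation is
          the RBC transit time between the pixels."""
          a = sig_a - sig_a.mean()
          b = sig_b - sig_b.mean()
          xc = np.correlate(b, a, mode="full")
          lag = np.argmax(xc) - (len(a) - 1)   # samples; > 0 if b lags a
          return np.inf if lag == 0 else spacing_um * fs / lag   # um/s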

  19. Better Decomposition Heuristics for the Maximum-Weight Connected Graph Problem Using Betweenness Centrality

    NASA Astrophysics Data System (ADS)

    Yamamoto, Takanori; Bannai, Hideo; Nagasaki, Masao; Miyano, Satoru

    We present new decomposition heuristics for finding the optimal solution for the maximum-weight connected graph problem, which is known to be NP-hard. Previous optimal algorithms for solving the problem decompose the input graph into subgraphs using heuristics based on node degree. We propose new heuristics based on betweenness centrality measures, and show through computational experiments that our new heuristics tend to reduce the number of subgraphs in the decomposition, and therefore could lead to the reduction in computational time for finding the optimal solution. The method is further applied to analysis of biological pathway data.
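
    The heuristic idea can be sketched with NetworkX: repeatedly remove the highest-betweenness node from any component that is still too large, so the graph falls apart into small subgraphs. This is a simplified illustration of the decomposition step only, not the paper's full optimal solver:

      import networkx as nx

      def decompose_by_betweenness(G, max_size):
          """Split G into connected subgraphs of at most max_size nodes."""
          H = G.copy()
          while True:
              big = [c for c in nx.connected_components(H) if len(c) > max_size]
              if not big:
                  return [H.subgraph(c).copy()
                          for c in nx.connected_components(H)]
              bc = nx.betweenness_centrality(H.subgraph(big[0]))
              H.remove_node(max(bc, key=bc.get))   # cut the most central node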

  20. A copyright protection scheme for digital images based on shuffled singular value decomposition and visual cryptography.

    PubMed

    Devi, B Pushpa; Singh, Kh Manglem; Roy, Sudipta

    2016-01-01

    This paper proposes a new watermarking algorithm based on the shuffled singular value decomposition and the visual cryptography for copyright protection of digital images. It generates the ownership and identification shares of the image based on visual cryptography. It decomposes the image into low and high frequency sub-bands. The low frequency sub-band is further divided into blocks of same size after shuffling it and then the singular value decomposition is applied to each randomly selected block. Shares are generated by comparing one of the elements in the first column of the left orthogonal matrix with its corresponding element in the right orthogonal matrix of the singular value decomposition of the block of the low frequency sub-band. The experimental results show that the proposed scheme clearly verifies the copyright of the digital images, and is robust to withstand several image processing attacks. Comparison with the other related visual cryptography-based algorithms reveals that the proposed method gives better performance. The proposed method is especially resilient against the rotation attack.
