Sample records for additively decomposable problems

  1. A Kohonen-like decomposition method for the Euclidean traveling salesman problem - KNIES_DECOMPOSE.

    PubMed

    Aras, N; Altinel, I K; Oommen, J

    2003-01-01

    In addition to the classical heuristic algorithms of operations research, there have also been several approaches based on artificial neural networks for solving the traveling salesman problem. Their efficiency, however, decreases as the problem size (number of cities) increases. A technique to reduce the complexity of a large-scale traveling salesman problem (TSP) instance is to decompose or partition it into smaller subproblems. We introduce an all-neural decomposition heuristic that is based on a recent self-organizing map called KNIES, which has been successfully implemented for solving both the Euclidean TSP and the Euclidean Hamiltonian path problem (HPP). Our solution for the Euclidean TSP proceeds by solving the Euclidean HPP for the subproblems, and then patching these solutions together. No such all-neural solution has ever been reported.

  2. Domain decomposition in time for PDE-constrained optimization

    DOE PAGES

    Barker, Andrew T.; Stoll, Martin

    2015-08-28

    Here, PDE-constrained optimization problems have a wide range of applications, but they lead to very large and ill-conditioned linear systems, especially if the problems are time dependent. In this paper we outline an approach for dealing with such problems by decomposing them in time and applying an additive Schwarz preconditioner in time, so that we can take advantage of parallel computers to deal with the very large linear systems. We then illustrate the performance of our method on a variety of problems.
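
    The time-decomposition idea is concrete enough to sketch. Below is a minimal, hedged illustration (my own toy construction, not the authors' code): an all-at-once backward-Euler system for u_t = u_xx is preconditioned by summing solves over overlapping time windows, i.e. additive Schwarz in time. The window and overlap sizes are arbitrary toy choices.

    ```python
    import numpy as np

    def all_at_once_system(nt=8, nx=16, dt=0.01, dx=0.1):
        """Block system for backward Euler on u_t = u_xx: (I - dt*L) u^k - u^{k-1} = rhs."""
        L = (np.diag(-2.0 * np.ones(nx)) + np.diag(np.ones(nx - 1), 1)
             + np.diag(np.ones(nx - 1), -1)) / dx**2
        B = np.eye(nx) - dt * L                       # diagonal (time-step) block
        A = np.kron(np.eye(nt), B)
        A -= np.kron(np.eye(nt, k=-1), np.eye(nx))    # coupling to the previous step
        return A

    def additive_schwarz_in_time(A, r, nt, nx, window=3, overlap=1):
        """One additive Schwarz sweep: solve on overlapping time windows, sum corrections."""
        z = np.zeros_like(r)
        s = 0
        while s < nt:
            e = min(s + window, nt)
            idx = np.arange(s * nx, e * nx)           # unknowns in this time window
            z[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
            if e == nt:
                break
            s += window - overlap
        return z

    A = all_at_once_system()
    r = np.random.default_rng(0).standard_normal(A.shape[0])
    z = additive_schwarz_in_time(A, r, nt=8, nx=16)   # use as M^{-1} r inside e.g. GMRES
    ```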

  3. A Multiple Period Problem in Distributed Energy Management Systems Considering CO2 Emissions

    NASA Astrophysics Data System (ADS)

    Muroda, Yuki; Miyamoto, Toshiyuki; Mori, Kazuyuki; Kitamura, Shoichi; Yamamoto, Takaya

    Consider a special district (group) composed of multiple companies (agents), where each agent responds to an energy demand and has a CO2 emission allowance imposed. A distributed energy management system (DEMS) optimizes the energy consumption of a group through energy trading within the group. In this paper, we extended the energy distribution decision and optimal planning problem in DEMSs from a single-period problem to a multiple-period one. The extension enabled us to consider more realistic constraints such as demand patterns, start-up costs, and minimum running/outage times of equipment. First, we extended the market-oriented programming (MOP) method for deciding energy distribution to the multiple-period problem. The bidding strategy of each agent is formulated as a 0-1 mixed non-linear programming problem. Second, we proposed decomposing the problem into a set of single-period problems in order to solve it faster. To decompose the problem, we proposed a CO2 emission allowance distribution method, called the EP method. Computational experiments confirmed that the proposed method produces solutions whose group costs are close to lower-bound group costs. In addition, we verified that the EP method reduces computational time without losing solution quality.

  4. Decomposition of timed automata for solving scheduling problems

    NASA Astrophysics Data System (ADS)

    Nishi, Tatsushi; Wakatake, Masato

    2014-03-01

    A decomposition algorithm for scheduling problems based on a timed automata (TA) model is proposed. The problem is represented as an optimal state transition problem for TA. The model comprises the parallel composition of submodels such as jobs and resources. The proposed methodology consists of two steps. The first step is to decompose the TA model into several submodels by using a decomposability condition. The second step is to combine the individual solutions of the subproblems for the decomposed submodels by the penalty function method. A feasible solution for the entire model is derived through iterated computation, solving the subproblem for each submodel. The proposed methodology is applied to flowshop and jobshop scheduling problems. Computational experiments demonstrate the effectiveness of the proposed algorithm compared with a conventional TA scheduling algorithm without decomposition.
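
    The coordination step in this kind of decomposition can be sketched generically. The toy below is my own construction, not the paper's timed-automata model; it only shows the penalty-function pattern the abstract describes: two subproblems each optimize their own cost plus a growing quadratic penalty on disagreement over a shared variable (say, a resource handover time), iterating until the copies agree.

    ```python
    # Penalty-based coordination between two quadratic toy subproblems.
    rho, t1, t2 = 1.0, 0.0, 5.0
    for _ in range(40):
        # argmin_t (t - 3)^2 + (rho/2) * (t - t2)^2   (job subproblem, closed form)
        t1 = (2 * 3.0 + rho * t2) / (2 + rho)
        # argmin_t (t - 4)^2 + (rho/2) * (t - t1)^2   (resource subproblem)
        t2 = (2 * 4.0 + rho * t1) / (2 + rho)
        rho *= 1.3                     # tighten the penalty each coordination round
    print(t1, t2)                      # the copies of the shared variable agree
    ```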

  5. Recovering Long-wavelength Velocity Models using Spectrogram Inversion with Single- and Multi-frequency Components

    NASA Astrophysics Data System (ADS)

    Ha, J.; Chung, W.; Shin, S.

    2015-12-01

    Many waveform inversion algorithms have been proposed for constructing subsurface velocity structures from seismic data sets. These algorithms suffer from computational burden, local minima problems, and the lack of low-frequency components. Computational efficiency can be improved by the application of back-propagation techniques and advances in computing hardware. In addition, waveform inversion algorithms for obtaining long-wavelength velocity models can avoid both the local minima problem and the effect of the lack of low-frequency components in seismic data. In this study, we proposed spectrogram inversion as a technique for recovering long-wavelength velocity models. In spectrogram inversion, frequency components decomposed from spectrograms of traces, in the observed and calculated data, are utilized to generate traces with reproduced low-frequency components. Moreover, since each decomposed component can reveal different characteristics of a subsurface structure, several frequency components were utilized to analyze the velocity features in the subsurface. We performed the spectrogram inversion using a modified SEG/EAGE salt model A-A' line. Numerical results demonstrate that spectrogram inversion can recover the long-wavelength velocity features. However, inversion results varied according to the frequency components utilized. Based on the results of inversion using a decomposed single-frequency component, we noticed that robust inversion results are obtained when the dominant frequency component of the spectrogram is utilized. In addition, detailed information on recovered long-wavelength velocity models was obtained using a multi-frequency component combined with single-frequency components. Numerical examples indicate that various detailed analyses of long-wavelength velocity models can be carried out utilizing several frequency components.
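
    The decomposition ingredient, pulling single-frequency components out of a trace's spectrogram, can be sketched with a short-time Fourier transform. This is a hedged toy on a synthetic two-tone trace, not the paper's seismic workflow; `scipy.signal.stft`/`istft` handle the spectrogram analysis and synthesis.

    ```python
    import numpy as np
    from scipy.signal import stft, istft

    fs = 500.0
    t = np.arange(0, 2.0, 1 / fs)
    trace = np.sin(2*np.pi*12*t) * np.exp(-t) + 0.5*np.sin(2*np.pi*30*t)  # toy trace

    f, tau, Z = stft(trace, fs=fs, nperseg=128)   # spectrogram of the trace
    k = np.argmax(np.abs(Z).sum(axis=1))          # dominant frequency row
    Zk = np.zeros_like(Z)
    Zk[k] = Z[k]                                  # keep a single-frequency component
    _, comp = istft(Zk, fs=fs, nperseg=128)       # time series of that component
    ```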

  6. Solution of the determinantal assignment problem using the Grassmann matrices

    NASA Astrophysics Data System (ADS)

    Karcanias, Nicos; Leventides, John

    2016-02-01

    The paper provides a direct solution to the determinantal assignment problem (DAP), which unifies all frequency assignment problems of linear control theory. The current approach is based on the solvability of the exterior equation v1 ∧ v2 ∧ … ∧ vm = z, where the vi belong to an n-dimensional vector space over ℝ; this equation is an integral part of the solution of DAP. New criteria for the existence of solutions and for their computation are based on the properties of structured matrices referred to as Grassmann matrices. The solvability of this exterior equation is referred to as decomposability of z, and it is in turn characterised by the set of quadratic Plücker relations (QPRs) describing the Grassmann variety of the corresponding projective space. Alternative new tests for decomposability of the multi-vector z are given in terms of the rank properties of the Grassmann matrix of the vector z, which is constructed from the coordinates of z. It is shown that the exterior equation is solvable (z is decomposable) if and only if the right null space of the Grassmann matrix has dimension m; the solution space for a decomposable z is exactly this null space. This provides an alternative linear algebra characterisation of the decomposability problem and of the Grassmann variety to that defined by the QPRs. Further properties of the Grassmann matrices are explored by defining the Hodge-Grassmann matrix as the dual of the Grassmann matrix. The connections of the Hodge-Grassmann matrix to the solution of exterior equations are examined, and an alternative new characterisation of decomposability is given in terms of the dimension of its image space. The framework based on the Grassmann matrices provides the means for the development of a new computational method for the solutions of the exact DAP (when such solutions exist), as well as for computing approximate solutions when exact solutions do not exist.
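
    For the smallest nontrivial case (m = 2, n = 4) the Grassmann variety is cut out by a single quadratic Plücker relation, so the decomposability test collapses to one equation. A hedged toy check (my example, not the paper's Grassmann-matrix construction):

    ```python
    import numpy as np

    def is_decomposable_2_4(z, tol=1e-10):
        """z = (z12, z13, z14, z23, z24, z34): the one Plücker relation for Gr(2,4)."""
        z12, z13, z14, z23, z24, z34 = z
        return abs(z12*z34 - z13*z24 + z14*z23) < tol

    v1 = np.array([1.0, 2.0, 0.0, 1.0])
    v2 = np.array([0.0, 1.0, 3.0, -1.0])
    M = np.vstack([v1, v2])
    # Plücker coordinates of v1 ∧ v2 are the 2x2 minors of [v1; v2]
    z = [np.linalg.det(M[:, [i, j]]) for i in range(4) for j in range(i + 1, 4)]
    print(is_decomposable_2_4(z))   # True: z really is a wedge of two vectors
    ```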

  7. The complexity of divisibility.

    PubMed

    Bausch, Johannes; Cubitt, Toby

    2016-09-01

    We address two sets of long-standing open questions in linear algebra and probability theory, from a computational complexity perspective: stochastic matrix divisibility, and divisibility and decomposability of probability distributions. We prove that finite divisibility of stochastic matrices is an NP-complete problem, and extend this result to nonnegative matrices, and completely-positive trace-preserving maps, i.e. the quantum analogue of stochastic matrices. We further prove a complexity hierarchy for the divisibility and decomposability of probability distributions, showing that finite distribution divisibility is in P, but decomposability is NP-hard. For the former, we give an explicit polynomial-time algorithm. All results on distributions extend to weak-membership formulations, proving that the complexity of these problems is robust to perturbations.
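
    The matrix-divisibility question is easy to state computationally: does a stochastic P have a stochastic Q with Q @ Q = P? Below is a hedged brute-force numerical test for a 2 x 2 example; the paper's complexity results concern exactly how badly such a search scales, and this sketch is not its machinery.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    P = np.array([[0.9, 0.1],
                  [0.2, 0.8]])

    def residual(x):
        Q = x.reshape(2, 2)
        return np.linalg.norm(Q @ Q - P) ** 2

    # Search over entrywise-nonnegative Q with unit row sums (stochasticity).
    cons = ({"type": "eq", "fun": lambda x: x.reshape(2, 2).sum(axis=1) - 1.0},)
    res = minimize(residual, x0=np.eye(2).ravel(), bounds=[(0, 1)] * 4,
                   constraints=cons)
    print(res.fun)   # ~0 here: this P is 2-divisible (a stochastic root exists)
    ```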

  8. A spectral analysis of the domain decomposed Monte Carlo method for linear systems

    DOE PAGES

    Slattery, Stuart R.; Evans, Thomas M.; Wilson, Paul P. H.

    2015-09-08

    The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of random walks from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem in numerical experiments to test the models for symmetric operators with spectral qualities similar to light water reactor problems. We find, in general, the derived approximations show good agreement with random walk lengths and leakage fractions computed by the numerical experiments.
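
    The underlying Neumann-Ulam idea is compact enough to sketch: random walks whose expected payoff sums the Neumann series of x = Hx + b. The toy below is the forward, single-domain variant (the paper analyzes the adjoint, domain-decomposed version); it estimates one solution entry and compares it with a direct solve.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    H = np.array([[0.1, 0.3],
                  [0.2, 0.2]])          # spectral radius < 1, so x = sum_k H^k b
    b = np.array([1.0, 2.0])
    P = np.abs(H) / np.abs(H).sum(axis=1, keepdims=True)   # walk transition probs
    W = H / P                                              # per-step weights

    def estimate(i, walks=50000, pkill=0.3):
        total = 0.0
        for _ in range(walks):
            j, w, acc = i, 1.0, b[i]
            while rng.random() > pkill:                    # absorb with prob pkill
                k = rng.choice(2, p=P[j])
                w *= W[j, k] / (1.0 - pkill)               # importance correction
                j = k
                acc += w * b[j]
            total += acc
        return total / walks

    print(estimate(0), np.linalg.solve(np.eye(2) - H, b)[0])  # agree up to MC error
    ```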

  9. Integrated control/structure optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Gilbert, Michael G.

    1990-01-01

    A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The system is fully decomposed into structural and control subsystem designs and an improved design is produced. Theory, implementation, and results for the method are presented and compared with the benchmark example.

  10. Integrated control/structure optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Gilbert, Michael G.

    1990-01-01

    A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The present paper fully decomposes the system into structural and control subsystem designs and produces an improved design. Theory, implementation, and results for the method are presented and compared with the benchmark example.

  11. The reactive bed plasma system for contamination control

    NASA Technical Reports Server (NTRS)

    Birmingham, Joseph G.; Moore, Robert R.; Perry, Tony R.

    1990-01-01

    The contamination control capabilities of the Reactive Bed Plasma (RBP) system are described by delineating the results of toxic chemical composition studies, aerosol filtration work, and other testing. The RBP system has demonstrated its capability to decompose toxic materials and process hazardous aerosols. Possible solutions exist for the post-treatment requirements of the reaction products. Although additional work is required to meet NASA requirements, the RBP may be able to address contamination control problems aboard the Space Station.

  12. Gaze Fluctuations Are Not Additively Decomposable: Reply to Bogartz and Staub

    ERIC Educational Resources Information Center

    Kelty-Stephen, Damian G.; Mirman, Daniel

    2013-01-01

    Our previous work interpreted single-lognormal fits to inter-gaze distance (i.e., "gaze steps") histograms as evidence of multiplicativity and hence interactions across scales in visual cognition. Bogartz and Staub (2012) proposed that gaze steps are additively decomposable into fixations and saccades, matching the histograms better and…

  13. Decomposing intuitive components in a conceptual problem solving task.

    PubMed

    Reber, Rolf; Ruch-Monachon, Marie-Antoinette; Perrig, Walter J

    2007-06-01

    Research into intuitive problem solving has shown that participants' hypotheses were objectively closer to the correct solution than their subjective ratings of closeness indicated. After separating conceptually intuitive problem solving from the solution of rational incremental tasks and of sudden insight tasks, we replicated this finding using more precise measures in a conceptual problem-solving task. In a second study, we distinguished performance level, processing style, implicit knowledge and subjective feeling of closeness to the solution within the problem-solving task and examined the relationships of these different components with measures of intelligence and personality. Verbal intelligence correlated with performance level in problem solving, but not with processing style and implicit knowledge. Faith in intuition, openness to experience, and conscientiousness correlated with processing style, but not with implicit knowledge. These findings suggest that one needs to decompose processing style and intuitive components in problem solving to make predictions about the effects of intelligence and personality measures.

  14. New evidence favoring multilevel decomposition and optimization

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Polignone, Debra A.

    1990-01-01

    The issue of the utility of multilevel decomposition and optimization remains controversial. To date, only the structural optimization community has actively developed and promoted multilevel optimization techniques. However, even this community acknowledges that multilevel optimization is ideally suited to a rather limited set of problems. A common warning is that decomposition typically requires eliminating local variables by using global variables, and that this in turn causes ill-conditioning of the multilevel optimization by adding equality constraints. The purpose here is to suggest a new multilevel optimization technique. This technique uses behavior variables, in addition to design variables and constraints, to decompose the problem. The new technique removes the need for equality constraints, simplifies the decomposition of the design problem, simplifies the programming task, and improves the convergence speed of multilevel optimization compared with conventional optimization.

  15. Fractional-order TV-L2 model for image denoising

    NASA Astrophysics Data System (ADS)

    Chen, Dali; Sun, Shenshen; Zhang, Congrong; Chen, YangQuan; Xue, Dingyu

    2013-10-01

    This paper proposes a new fractional-order total variation (TV) denoising method, which provides a more elegant and effective way of treating problems of algorithm implementation, ill-posed inversion, regularization parameter selection, and the blocky effect. Two fractional-order TV-L2 models are constructed for image denoising. The majorization-minimization (MM) algorithm is used to decompose these two complex fractional TV optimization problems into a set of linear optimization problems, which can be solved by the conjugate gradient algorithm. The final adaptive numerical procedure is given. Finally, we report experimental results which show that the proposed methodology avoids the blocky effect and achieves state-of-the-art performance. In addition, two medical image processing experiments are presented to demonstrate the validity of the proposed methodology.
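
    The MM-plus-conjugate-gradient decomposition described here follows a standard pattern. Below is a hedged 1D sketch using the classical first-order TV-L2 model; the paper's fractional-order model would change only the difference operator D.

    ```python
    import numpy as np
    from scipy.sparse import eye, diags
    from scipy.sparse.linalg import cg

    def tv_denoise_mm(y, lam=2.0, iters=20, eps=1e-8):
        """MM for (1/2)||x - y||^2 + lam*||Dx||_1: each step is a linear solve."""
        n = len(y)
        D = diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
        x = y.copy()
        for _ in range(iters):
            w = 1.0 / (np.abs(D @ x) + eps)           # majorizer weights at current x
            A = eye(n) + lam * D.T @ diags(w) @ D     # quadratic (linear) subproblem
            x, _ = cg(A, y)                           # solved by conjugate gradients
        return x

    y = (np.sign(np.sin(np.linspace(0, 6, 200)))
         + 0.3 * np.random.default_rng(0).standard_normal(200))
    x = tv_denoise_mm(y)                              # piecewise-constant estimate
    ```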

  16. Reduced Toxicity Fuel Satellite Propulsion System

    NASA Technical Reports Server (NTRS)

    Schneider, Steven J. (Inventor)

    2001-01-01

    A reduced toxicity fuel satellite propulsion system including a reduced toxicity propellant supply for consumption in an axial class thruster and an ACS class thruster. The system includes suitable valves and conduits for supplying the reduced toxicity propellant to the ACS decomposing element of an ACS thruster. The ACS decomposing element is operative to decompose the reduced toxicity propellant into hot propulsive gases. In addition the system includes suitable valves and conduits for supplying the reduced toxicity propellant to an axial decomposing element of the axial thruster. The axial decomposing element is operative to decompose the reduced toxicity propellant into hot gases. The system further includes suitable valves and conduits for supplying a second propellant to a combustion chamber of the axial thruster, whereby the hot gases and the second propellant auto-ignite and begin the combustion process for producing thrust.

  17. Reduced Toxicity Fuel Satellite Propulsion System Including Plasmatron

    NASA Technical Reports Server (NTRS)

    Schneider, Steven J. (Inventor)

    2003-01-01

    A reduced toxicity fuel satellite propulsion system including a reduced toxicity propellant supply for consumption in an axial class thruster and an ACS class thruster. The system includes suitable valves and conduits for supplying the reduced toxicity propellant to the ACS decomposing element of an ACS thruster. The ACS decomposing element is operative to decompose the reduced toxicity propellant into hot propulsive gases. In addition the system includes suitable valves and conduits for supplying the reduced toxicity propellant to an axial decomposing element of the axial thruster. The axial decomposing element is operative to decompose the reduced toxicity propellant into hot gases. The system further includes suitable valves and conduits for supplying a second propellant to a combustion chamber of the axial thruster, whereby the hot gases and the second propellant auto-ignite and begin the combustion process for producing thrust.

  18. Linkages between below and aboveground communities: Decomposer responses to simulated tree species loss are largely additive.

    Treesearch

    Becky A. Ball; Mark A. Bradford; Dave C. Coleman; Mark D. Hunter

    2009-01-01

    Inputs of aboveground plant litter influence the abundance and activities of belowground decomposer biota. Litter-mixing studies have examined whether the diversity and heterogeneity of litter inputs...

  19. Using Volunteer Computing to Study Some Features of Diagonal Latin Squares

    NASA Astrophysics Data System (ADS)

    Vatutin, Eduard; Zaikin, Oleg; Kochemazov, Stepan; Valyaev, Sergey

    2017-12-01

    This study concerns several features of diagonal Latin squares (DLSs) of small order. The authors suggest an algorithm for computing the minimal and maximal numbers of transversals of DLSs. According to this algorithm, all DLSs of a particular order are generated, and for each square all its transversals and diagonal transversals are constructed. The algorithm was implemented and applied to DLSs of order at most 7 on a personal computer. The experiment for order 8 was performed in the volunteer computing project Gerasim@home. In addition, the problem of finding pairs of orthogonal DLSs of order 10 was considered and reduced to the Boolean satisfiability problem. The resulting problem turned out to be very hard; therefore, it was decomposed into a family of subproblems. To solve the problem, the volunteer computing project SAT@home was used. As a result, several dozen pairs of the described kind were found.
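
    The transversal-counting step is easy to sketch: a transversal picks one cell per row and per column with all symbols distinct, which suits row-by-row backtracking. A toy order-4 diagonal Latin square follows (my example, not data from the project).

    ```python
    def count_transversals(L):
        """Backtracking over rows; tracks used columns and used symbols."""
        n = len(L)
        used_cols, used_syms = set(), set()
        def go(r):
            if r == n:
                return 1
            total = 0
            for c in range(n):
                if c not in used_cols and L[r][c] not in used_syms:
                    used_cols.add(c); used_syms.add(L[r][c])
                    total += go(r + 1)
                    used_cols.remove(c); used_syms.remove(L[r][c])
            return total
        return go(0)

    L = [[0, 1, 2, 3],      # rows, columns, and both diagonals are all distinct
         [2, 3, 0, 1],
         [3, 2, 1, 0],
         [1, 0, 3, 2]]
    print(count_transversals(L))   # order-4 squares have either 0 or 8 transversals
    ```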

  20. Reduced Toxicity Fuel Satellite Propulsion System Including Fuel Cell Reformer with Alcohols Such as Methanol

    NASA Technical Reports Server (NTRS)

    Schneider, Steven J. (Inventor)

    2001-01-01

    A reduced toxicity fuel satellite propulsion system including a reduced toxicity propellant supply for consumption in an axial class thruster and an ACS class thruster. The system includes suitable valves and conduits for supplying the reduced toxicity propellant to the ACS decomposing element of an ACS thruster. The ACS decomposing element is operative to decompose the reduced toxicity propellant into hot propulsive gases. In addition the system includes suitable valves and conduits for supplying the reduced toxicity propellant to an axial decomposing element of the axial thruster. The axial decomposing element is operative to decompose the reduced toxicity propellant into hot gases. The system further includes suitable valves and conduits for supplying a second propellant to a combustion chamber of the axial thruster, whereby the hot gases and the second propellant auto-ignite and begin the combustion process for producing thrust.

  1. Sparse time-frequency decomposition based on dictionary adaptation.

    PubMed

    Hou, Thomas Y; Shi, Zuoqiang

    2016-04-13

    In this paper, we propose a time-frequency analysis method to obtain instantaneous frequencies and the corresponding decomposition by solving an optimization problem. In this optimization problem, the basis that is used to decompose the signal is not known a priori. Instead, it is adapted to the signal and is determined as part of the optimization problem. In this sense, this optimization problem can be seen as a dictionary adaptation problem, in which the dictionary is adapted to one signal rather than to a training set as in dictionary learning. This dictionary adaptation problem is solved iteratively using the augmented Lagrangian multiplier (ALM) method. We further accelerate the ALM method in each iteration by using the fast wavelet transform. We apply our method to decompose several signals, including signals with poor scale separation, signals with outliers and noise pollution, and a real signal. The results show that this method can give accurate recovery of both the instantaneous frequencies and the intrinsic mode functions.

  2. Layout compliance for triple patterning lithography: an iterative approach

    NASA Astrophysics Data System (ADS)

    Yu, Bei; Garreton, Gilda; Pan, David Z.

    2014-10-01

    As the semiconductor process scales down further, the industry encounters many lithography-related issues. At the 14nm logic node and beyond, triple patterning lithography (TPL) is one of the most promising techniques for the Metal1 layer and possibly the Via0 layer. Layout decomposition, one of the most challenging problems in TPL, has recently received increasing attention from both industry and academia. Ideally the decomposer should point out locations in the layout that are not triple patterning decomposable, so that designers can intervene manually. A traditional decomposition flow is an iterative process, where each iteration consists of an automatic layout decomposition step and a manual layout modification task. However, due to the NP-hardness of triple patterning layout decomposition, automatic full-chip layout decomposition requires long computational times, and therefore design closure issues continue to linger in the traditional flow. Challenged by this issue, we present a novel incremental layout decomposition framework to facilitate accelerated iterative decomposition. In the first iteration, our decomposer not only points out all conflicts, but also provides suggestions to fix them. After the layout modification, instead of solving the full-chip problem from scratch, our decomposer can provide a quick solution for a selected portion of the layout. We believe this framework is efficient in terms of performance and is designer friendly.
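
    At its core, TPL layout decomposition is 3-colouring of a conflict graph: features too close to share a mask get an edge. A hedged miniature follows (brute force on a toy graph; real decomposers must scale and, as the abstract stresses, report and help fix the non-colourable spots).

    ```python
    from itertools import product

    edges = [(0, 1), (1, 2), (2, 0), (2, 3)]    # toy conflict graph on 4 features

    def three_colourable(n, edges):
        """Brute-force mask assignment; exponential, fine only for toys."""
        for colouring in product(range(3), repeat=n):
            if all(colouring[u] != colouring[v] for u, v in edges):
                return colouring
        return None                              # a conflict to report to designers

    print(three_colourable(4, edges))            # e.g. (0, 1, 2, 0)
    ```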

  3. Automatic single-image-based rain streaks removal via image decomposition.

    PubMed

    Kang, Li-Wei; Lin, Chia-Wen; Fu, Yu-Hsiang

    2012-04-01

    Rain removal from a video is a challenging problem and has been recently investigated extensively. Nevertheless, the problem of rain removal from a single image was rarely studied in the literature, where no temporal information among successive images can be exploited, making the problem very challenging. In this paper, we propose a single-image-based rain removal framework via properly formulating rain removal as an image decomposition problem based on morphological component analysis. Instead of directly applying a conventional image decomposition technique, the proposed method first decomposes an image into the low- and high-frequency (HF) parts using a bilateral filter. The HF part is then decomposed into a "rain component" and a "nonrain component" by performing dictionary learning and sparse coding. As a result, the rain component can be successfully removed from the image while preserving most original image details. Experimental results demonstrate the efficacy of the proposed algorithm.
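
    The first stage of the pipeline, the bilateral-filter split into low- and high-frequency parts, can be sketched directly. This is a hedged fragment on a random stand-in image; the dictionary-learning and sparse-coding stage on the HF part is omitted.

    ```python
    import numpy as np
    import cv2

    rng = np.random.default_rng(0)
    img = rng.random((64, 64)).astype(np.float32)          # stand-in "rainy" image
    low = cv2.bilateralFilter(img, d=9, sigmaColor=0.1, sigmaSpace=5.0)
    high = img - low    # HF part: rain streaks plus fine texture live here
    # Next (omitted): learn a dictionary on HF patches and split its atoms
    # into "rain" and "non-rain" components via sparse coding.
    ```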

  4. Sulfate minerals: a problem for the detection of organic compounds on Mars?

    PubMed

    Lewis, James M T; Watson, Jonathan S; Najorka, Jens; Luong, Duy; Sephton, Mark A

    2015-03-01

    The search for in situ organic matter on Mars involves encounters with minerals and requires an understanding of their influence on lander and rover experiments. Inorganic host materials can be helpful by aiding the preservation of organic compounds or unhelpful by causing the destruction of organic matter during thermal extraction steps. Perchlorates are recognized as confounding minerals for thermal degradation studies. On heating, perchlorates can decompose to produce oxygen, which then oxidizes organic matter. Other common minerals on Mars, such as sulfates, may also produce oxygen upon thermal decay, presenting an additional complication. Different sulfate species decompose within a large range of temperatures. We performed a series of experiments on a sample containing the ferric sulfate jarosite. The sulfate ions within jarosite break down from 500 °C. Carbon dioxide detected during heating of the sample was attributed to oxidation of organic matter. A laboratory standard of ferric sulfate hydrate released sulfur dioxide from 550 °C, and an oxygen peak was detected in the products. Calcium sulfate did not decompose below 1000 °C. Oxygen released from sulfate minerals may have already affected organic compound detection during in situ thermal experiments on Mars missions. A combination of preliminary mineralogical analyses and suitably selected pyrolysis temperatures may increase future success in the search for past or present life on Mars.

  5. An experimental study of postmortem decomposition of methomyl in blood.

    PubMed

    Kawakami, Yuka; Fuke, Chiaki; Fukasawa, Maki; Ninomiya, Kenji; Ihama, Yoko; Miyazaki, Tetsuji

    2017-03-01

    Methomyl (S-methyl-1-N-[(methylcarbamoyl)oxy]thioacetimidate) is a carbamate pesticide. It has been noted that in some cases of methomyl poisoning, methomyl is either not detected or detected only in low concentrations in the blood of the victims. However, in such cases, methomyl is detected at higher concentrations in the vitreous humor than in the blood. This indicates that methomyl in the blood is possibly decomposed after death. However, the reasons for this phenomenon have been unclear. We have previously reported that methomyl is decomposed to dimethyl disulfide (DMDS) in the livers and kidneys of pigs but not in their blood. In addition, in the field of forensic toxicology, it is known that some compounds are decomposed or produced by internal bacteria in biological samples after death. This indicates that there is a possibility that methomyl in blood may be decomposed by bacteria after death. The aim of this study was therefore to investigate whether methomyl in blood is decomposed by bacteria isolated from human stool. Our findings demonstrated that methomyl was decomposed in human stool homogenates, resulting in the generation of DMDS. In addition, it was observed that three bacterial species isolated from the stool homogenates, Bacillus cereus, Pseudomonas aeruginosa, and Bacillus sp., showed methomyl-decomposing activity. The results therefore indicated that one reason for the difficulty in detecting methomyl in postmortem blood from methomyl-poisoning victims is the decomposition of methomyl by internal bacteria such as B. cereus, P. aeruginosa, and Bacillus sp.

  6. Forensic entomology of decomposing humans and their decomposing pets.

    PubMed

    Sanford, Michelle R

    2015-02-01

    Domestic pets are commonly found in the homes of decedents whose deaths are investigated by a medical examiner or coroner. When these pets become trapped with a decomposing decedent, they may resort to feeding on the body or succumb to starvation and/or dehydration and begin to decompose as well. In this case report, photographic documentation of cases involving pets and decedents was examined from 2009 through the beginning of 2014. This photo review indicated that in many cases the pets were cats and dogs that were trapped with the decedent, died, and were discovered in a moderate (bloat to active decay) state of decomposition. In addition, three cases involving decomposing humans and their decomposing pets are described as they were processed for time of insect colonization by a forensic entomological approach. Differences in timing and in the species colonizing the human and animal bodies were noted, as was the potential for the human- or animal-derived specimens to contaminate one another at the scene.

  7. C, N and P fertilization in an Amazonian rainforest supports stoichiometric dissimilarity as a driver of litter diversity effects on decomposition

    PubMed Central

    Barantal, Sandra; Schimann, Heidy; Fromin, Nathalie; Hättenschwiler, Stephan

    2014-01-01

    Plant leaf litter generally decomposes faster as a group of different species than when individual species decompose alone, but the underlying mechanisms of these diversity effects remain poorly understood. Because resource C : N : P stoichiometry (i.e. the ratios of these key elements) exerts strong control on consumers, we supposed that stoichiometric dissimilarity of litter mixtures (i.e. the divergence in C : N : P ratios among species) improves resource complementarity to decomposers, leading to faster mixture decomposition. We tested this hypothesis with: (i) a wide range of leaf litter mixtures of neotropical tree species varying in C : N : P dissimilarity, and (ii) a nutrient addition experiment (C, N and P) to create stoichiometric similarity. Litter mixtures were decomposed in the field using two different types of litterbags, allowing or preventing access to soil fauna. Litter mixture mass loss was higher than expected from species decomposing singly, especially in the presence of soil fauna. With fauna, synergistic litter mixture effects increased with increasing stoichiometric dissimilarity of litter mixtures, and this positive relationship disappeared with fertilizer addition. Our results indicate that litter stoichiometric dissimilarity drives mixture effects via the nutritional requirements of soil fauna. Incorporating ecological stoichiometry in biodiversity research allows refinement of the underlying mechanisms of how changing biodiversity affects ecosystem functioning. PMID:25320173

  8. Massively Parallel Dantzig-Wolfe Decomposition Applied to Traffic Flow Scheduling

    NASA Technical Reports Server (NTRS)

    Rios, Joseph Lucio; Ross, Kevin

    2009-01-01

    Optimal scheduling of air traffic over the entire National Airspace System is a computationally difficult task. To speed computation, Dantzig-Wolfe decomposition is applied to a known linear integer programming approach for assigning delays to flights. The optimization model is proven to have the block-angular structure necessary for Dantzig-Wolfe decomposition. The subproblems for this decomposition are solved in parallel via independent computation threads. Experimental evidence suggests that as the number of subproblems/threads increases (and their respective sizes decrease), the solution quality, convergence, and runtime improve. A demonstration of this is provided by using one flight per subproblem, which is the finest possible decomposition. This results in thousands of subproblems and associated computation threads. This massively parallel approach is compared to one with few threads and to standard (non-decomposed) approaches in terms of solution quality and runtime. Since this method generally provides a non-integral (relaxed) solution to the original optimization problem, two heuristics are developed to generate an integral solution. Dantzig-Wolfe followed by these heuristics can provide a near-optimal (sometimes optimal) solution to the original problem hundreds of times faster than standard (non-decomposed) approaches. In addition, when massive decomposition is employed, the solution is shown to be more likely integral, which obviates the need for an integerization step. These results indicate that nationwide, real-time, high fidelity, optimal traffic flow scheduling is achievable for (at least) 3 hour planning horizons.

  9. Hydrogen production by the decomposition of water

    DOEpatents

    Hollabaugh, Charles M.; Bowman, Melvin G.

    1981-01-01

    How to produce hydrogen from water was the problem addressed by this invention. The solution employs a combined electrolytical-thermochemical sulfuric acid process. Additionally, high purity sulfuric acid can be produced in the process. Water and SO2 react in electrolyzer (12) so that hydrogen is produced at the cathode and sulfuric acid is produced at the anode. Then the sulfuric acid is reacted with a particular compound M_rX_s so as to form at least one water-insoluble sulfate and at least one water-insoluble oxide of molybdenum, tungsten, or boron. Water is removed by filtration; and the sulfate is decomposed in the presence of the oxide in sulfate decomposition zone (21), thus forming SO3 and reforming M_rX_s. The M_rX_s is recycled to sulfate formation zone (16). If desired, the SO3 can be decomposed to SO2 and O2; and the SO2 can be recycled to electrolyzer (12) to provide a cycle for producing hydrogen.

  10. Scheduling double round-robin tournaments with divisional play using constraint programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlsson, Mats; Johansson, Mikael; Larson, Jeffrey

    We study a tournament format that extends a traditional double round-robin format with divisional single round-robin tournaments. Elitserien, the top Swedish handball league, uses such a format for its league schedule. We present a constraint programming model that characterizes the general double round-robin plus divisional single round-robin format. This integrated model allows scheduling to be performed in a single step, as opposed to common multistep approaches that decompose scheduling into smaller problems and possibly miss optimal solutions. In addition to general constraints, we introduce Elitserien-specific requirements for its tournament. These general and league-specific constraints allow us to identify implicit and symmetry-breaking properties that reduce the time to solution from hours to seconds. A scalability study of the number of teams shows that our approach is reasonably fast for even larger league sizes. The experimental evaluation of the integrated approach takes considerably less computational effort to schedule Elitserien than does the previous decomposed approach.

  11. Decision-problem state analysis methodology

    NASA Technical Reports Server (NTRS)

    Dieterly, D. L.

    1980-01-01

    A methodology for analyzing a decision-problem state is presented. The methodology is based on the analysis of an incident in terms of the set of decision-problem conditions encountered. By decomposing the events that preceded an unwanted outcome, such as an accident, into the set of decision-problem conditions that were resolved, a more comprehensive understanding is possible. Not all human-error accidents are caused by faulty decision-problem resolutions, but this appears to be one of the major classes of accidents cited in the literature. A three-phase methodology is presented which accommodates a wide spectrum of events. It allows for a systems content analysis of the available data to establish: (1) the resolutions made, (2) alternatives not considered, (3) resolutions missed, and (4) possible conditions not considered. The product is a map of the decision-problem conditions that were encountered, as well as a projected, assumed set of conditions that should have been considered. The application of this methodology introduces a systematic approach to decomposing the events that transpired prior to the accident. The initial emphasis is on decision and problem resolution. The technique provides a standardized method of decomposing an accident into a scenario which may be used for review or for the development of a training simulation.

  12. Sulfate Minerals: A Problem for the Detection of Organic Compounds on Mars?

    PubMed Central

    Watson, Jonathan S.; Najorka, Jens; Luong, Duy; Sephton, Mark A.

    2015-01-01

    The search for in situ organic matter on Mars involves encounters with minerals and requires an understanding of their influence on lander and rover experiments. Inorganic host materials can be helpful by aiding the preservation of organic compounds or unhelpful by causing the destruction of organic matter during thermal extraction steps. Perchlorates are recognized as confounding minerals for thermal degradation studies. On heating, perchlorates can decompose to produce oxygen, which then oxidizes organic matter. Other common minerals on Mars, such as sulfates, may also produce oxygen upon thermal decay, presenting an additional complication. Different sulfate species decompose within a large range of temperatures. We performed a series of experiments on a sample containing the ferric sulfate jarosite. The sulfate ions within jarosite break down from 500°C. Carbon dioxide detected during heating of the sample was attributed to oxidation of organic matter. A laboratory standard of ferric sulfate hydrate released sulfur dioxide from 550°C, and an oxygen peak was detected in the products. Calcium sulfate did not decompose below 1000°C. Oxygen released from sulfate minerals may have already affected organic compound detection during in situ thermal experiments on Mars missions. A combination of preliminary mineralogical analyses and suitably selected pyrolysis temperatures may increase future success in the search for past or present life on Mars. Key Words: Mars—Life detection—Geochemistry—Organic matter—Jarosite. Astrobiology 15, 247–258. PMID:25695727

  13. The intelligence of dual simplex method to solve linear fractional fuzzy transportation problem.

    PubMed

    Narayanamoorthy, S; Kalyani, S

    2015-01-01

    An approach is presented to solve a fuzzy transportation problem with a linear fractional fuzzy objective function. In this approach the fractional fuzzy transportation problem is decomposed into two linear fuzzy transportation problems. The two linear fuzzy transportation problems are solved by the dual simplex method, and from their solutions the optimal solution of the fractional fuzzy transportation problem is obtained. The proposed method is explained in detail with an example.
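
    For the crisp (non-fuzzy) case, reducing a linear-fractional objective to linear programming is classical. The sketch below uses the Charnes-Cooper transformation, a standard reduction named plainly here because the paper's fuzzy dual-simplex decomposition itself is not reproduced; all numbers are toy data.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # min (c.x + a) / (d.x + b)  s.t.  A x <= h, x >= 0, with d.x + b > 0
    c, a = np.array([2.0, 3.0]), 1.0
    d, b = np.array([1.0, 1.0]), 2.0
    A, h = np.array([[1.0, 2.0]]), np.array([10.0])

    # Charnes-Cooper: variables (y, t) with x = y/t, d.y + b*t = 1, t >= 0
    res = linprog(np.append(c, a),
                  A_ub=np.hstack([A, -h[:, None]]), b_ub=np.zeros(1),
                  A_eq=[np.append(d, b)], b_eq=[1.0],
                  bounds=[(0, None)] * 3)
    y, t = res.x[:2], res.x[2]
    print(y / t)   # optimal x of the fractional program
    ```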

  14. C, N and P fertilization in an Amazonian rainforest supports stoichiometric dissimilarity as a driver of litter diversity effects on decomposition.

    PubMed

    Barantal, Sandra; Schimann, Heidy; Fromin, Nathalie; Hättenschwiler, Stephan

    2014-12-07

    Plant leaf litter generally decomposes faster as a group of different species than when individual species decompose alone, but the underlying mechanisms of these diversity effects remain poorly understood. Because resource C : N : P stoichiometry (i.e. the ratios of these key elements) exerts strong control on consumers, we supposed that stoichiometric dissimilarity of litter mixtures (i.e. the divergence in C : N : P ratios among species) improves resource complementarity to decomposers, leading to faster mixture decomposition. We tested this hypothesis with: (i) a wide range of leaf litter mixtures of neotropical tree species varying in C : N : P dissimilarity, and (ii) a nutrient addition experiment (C, N and P) to create stoichiometric similarity. Litter mixtures were decomposed in the field using two different types of litterbags, allowing or preventing access to soil fauna. Litter mixture mass loss was higher than expected from species decomposing singly, especially in the presence of soil fauna. With fauna, synergistic litter mixture effects increased with increasing stoichiometric dissimilarity of litter mixtures, and this positive relationship disappeared with fertilizer addition. Our results indicate that litter stoichiometric dissimilarity drives mixture effects via the nutritional requirements of soil fauna. Incorporating ecological stoichiometry in biodiversity research allows refinement of the underlying mechanisms of how changing biodiversity affects ecosystem functioning.

  15. A Systematic Methodology for Verifying Superscalar Microprocessors

    NASA Technical Reports Server (NTRS)

    Srivas, Mandayam; Hosabettu, Ravi; Gopalakrishnan, Ganesh

    1999-01-01

    We present a systematic approach to decompose and incrementally build the proof of correctness of pipelined microprocessors. The central idea is to construct the abstraction function by using completion functions, one per unfinished instruction, each of which specifies the effect (on the observables) of completing the instruction. In addition to avoiding the term size and case explosion problem that limits the pure flushing approach, our method helps localize errors, and also handles stages with interactive loops. The technique is illustrated on pipelined and superscalar pipelined implementations of a subset of the DLX architecture. It has also been applied to a processor with out-of-order execution.

  16. PGA/MOEAD: a preference-guided evolutionary algorithm for multi-objective decision-making problems with interval-valued fuzzy preferences

    NASA Astrophysics Data System (ADS)

    Luo, Bin; Lin, Lin; Zhong, ShiSheng

    2018-02-01

    In this research, we propose a preference-guided optimisation algorithm for multi-criteria decision-making (MCDM) problems with interval-valued fuzzy preferences. First, the interval-valued fuzzy preferences are decomposed into a series of precise and evenly distributed preference-vectors (reference directions) over the objectives to be optimised, on the basis of a uniform design strategy. Then the preference information is further incorporated into the preference-vectors based on the boundary intersection approach; meanwhile, the MCDM problem with interval-valued fuzzy preferences is reformulated into a series of single-objective optimisation sub-problems (each sub-problem corresponds to a decomposed preference-vector). Finally, a preference-guided optimisation algorithm based on MOEA/D (multi-objective evolutionary algorithm based on decomposition) is proposed to solve the sub-problems in a single run. The proposed algorithm incorporates the preference-vectors within the optimisation process, guiding the search towards a more promising subset of the efficient solutions matching the interval-valued fuzzy preferences. Numerous test instances and an engineering application are employed to validate the performance of the proposed algorithm, and the results demonstrate its effectiveness and feasibility.
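
    The decomposition device PGA/MOEAD builds on can be shown in a few lines: each preference (weight) vector turns the multi-objective problem into one scalar subproblem via the Tchebycheff function. A hedged toy follows, with random search standing in for the evolutionary machinery and an assumed known ideal point.

    ```python
    import numpy as np

    def f(x):                          # ZDT1-like toy objectives on [0, 1]^2
        f1 = x[0]
        g = 1 + 9 * x[1]
        return np.array([f1, g * (1 - np.sqrt(f1 / g))])

    def tchebycheff(x, w, z_star):
        """Scalarised subproblem value for weight vector w and ideal point z_star."""
        return np.max(w * np.abs(f(x) - z_star))

    weights = [np.array([i / 10, 1 - i / 10]) for i in range(1, 10)]
    z_star = np.zeros(2)               # assumed ideal point
    rng = np.random.default_rng(0)
    best = []
    for w in weights:                  # one scalar subproblem per weight vector
        xs = rng.random((2000, 2))
        best.append(min(xs, key=lambda x: tchebycheff(x, w, z_star)))
    ```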

  17. Reinforcement Learning for Weakly-Coupled MDPs and an Application to Planetary Rover Control

    NASA Technical Reports Server (NTRS)

    Bernstein, Daniel S.; Zilberstein, Shlomo

    2003-01-01

    Weakly-coupled Markov decision processes can be decomposed into subprocesses that interact only through a small set of bottleneck states. We study a hierarchical reinforcement learning algorithm designed to take advantage of this particular type of decomposability. To test our algorithm, we use a decision-making problem faced by autonomous planetary rovers. In this problem, a Mars rover must decide which activities to perform and when to traverse between science sites in order to make the best use of its limited resources. In our experiments, the hierarchical algorithm performs better than Q-learning in the early stages of learning, but unlike Q-learning it converges to a suboptimal policy. This suggests that it may be advantageous to use the hierarchical algorithm when training time is limited.

  18. The Intelligence of Dual Simplex Method to Solve Linear Fractional Fuzzy Transportation Problem

    PubMed Central

    Narayanamoorthy, S.; Kalyani, S.

    2015-01-01

    An approach is presented to solve a fuzzy transportation problem with a linear fractional fuzzy objective function. In this approach the fractional fuzzy transportation problem is decomposed into two linear fuzzy transportation problems. The two linear fuzzy transportation problems are solved by the dual simplex method, and from their solutions the optimal solution of the fractional fuzzy transportation problem is obtained. The proposed method is explained in detail with an example. PMID:25810713

  19. Model-Free Primitive-Based Iterative Learning Control Approach to Trajectory Tracking of MIMO Systems With Experimental Validation.

    PubMed

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M

    2015-11-01

    This paper proposes a novel model-free trajectory tracking of multiple-input multiple-output (MIMO) systems by the combination of iterative learning control (ILC) and primitives. The optimal trajectory tracking solution is obtained in terms of previously learned solutions to simple tasks called primitives. The library of primitives that are stored in memory consists of pairs of reference input/controlled output signals. The reference input primitives are optimized in a model-free ILC framework without using knowledge of the controlled process. The guaranteed convergence of the learning scheme is built upon a model-free virtual reference feedback tuning design of the feedback decoupling controller. Each new complex trajectory to be tracked is decomposed into the output primitives regarded as basis functions. The optimal reference input for the control system to track the desired trajectory is next recomposed from the reference input primitives. This is advantageous because the optimal reference input is computed straightforward without the need to learn from repeated executions of the tracking task. In addition, the optimization problem specific to trajectory tracking of square MIMO systems is decomposed in a set of optimization problems assigned to each separate single-input single-output control channel that ensures a convenient model-free decoupling. The new model-free primitive-based ILC approach is capable of planning, reasoning, and learning. A case study dealing with the model-free control tuning for a nonlinear aerodynamic system is included to validate the new approach. The experimental results are given.
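
    The recomposition step is plain linear algebra when the plant is (approximately) linear: project the new desired trajectory onto the stored output primitives, then replay the same combination of reference-input primitives. A hedged toy follows, with random primitives and a running-sum stand-in plant, not the authors' ILC/VRFT pipeline.

    ```python
    import numpy as np

    n, k = 100, 5
    rng = np.random.default_rng(0)
    R = rng.standard_normal((n, k))            # reference-input primitives (columns)
    G = np.tril(np.ones((n, n))) * 0.1         # toy linear plant (scaled running sum)
    Y = G @ R                                  # corresponding output primitives

    y_star = np.sin(np.linspace(0, 3, n))      # new trajectory to track
    c, *_ = np.linalg.lstsq(Y, y_star, rcond=None)   # decompose onto outputs
    r_star = R @ c                             # recompose the reference input
    print(np.linalg.norm(G @ r_star - y_star)) # small residual: tracking achieved
    ```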

  20. Tracking Time Evolution of Collective Attention Clusters in Twitter: Time Evolving Nonnegative Matrix Factorisation.

    PubMed

    Saito, Shota; Hirata, Yoshito; Sasahara, Kazutoshi; Suzuki, Hideyuki

    2015-01-01

    Micro-blogging services, such as Twitter, offer opportunities to analyse user behaviour. Discovering and distinguishing behavioural patterns in micro-blogging services is valuable. However, it is difficult and challenging to distinguish users and to track the temporal development of collective attention within distinct user groups in Twitter. In this paper, we formulate this problem as tracking matrices decomposed by Nonnegative Matrix Factorisation for time-sequential matrix data, and propose a novel extension of Nonnegative Matrix Factorisation, which we refer to as Time Evolving Nonnegative Matrix Factorisation (TENMF). In our method, we describe users and the words they post in some time interval by a matrix, and use several matrices as time-sequential data. Subsequently, we apply Time Evolving Nonnegative Matrix Factorisation to these time-sequential matrices. TENMF can decompose time-sequential matrices and can track the connection among the decomposed matrices, whereas previous NMF decomposes a matrix into two lower-dimension matrices arbitrarily, which can lose the time-sequential connection. Our proposed method performs adequately well on artificial data. Moreover, we present several results and insights from experiments using real data from Twitter.
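
    The coupling idea can be sketched with multiplicative NMF updates plus one extra term tying each window's user factor to the previous window's. This is my own toy variant of the idea, not the authors' TENMF updates.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def nmf_step(V, W, H, W_prev, mu=0.5, eps=1e-9):
        """Multiplicative updates for ||V - WH||^2 + mu*||W - W_prev||^2."""
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T + mu * W_prev) / (W @ H @ H.T + mu * W + eps)
        return W, H

    T, n_users, n_words, k = 5, 30, 40, 3
    Vs = [np.abs(rng.standard_normal((n_users, n_words))) for _ in range(T)]
    W = np.abs(rng.standard_normal((n_users, k)))
    for V in Vs:                                # one matrix per time window
        H = np.abs(rng.standard_normal((k, n_words)))
        W_prev = W.copy()                       # anchor to the previous window
        for _ in range(200):
            W, H = nmf_step(V, W, H, W_prev)
    ```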

  1. Cat got your tongue? Using the tip-of-the-tongue state to investigate fixed expressions.

    PubMed

    Nordmann, Emily; Cleland, Alexandra A; Bull, Rebecca

    2013-01-01

    Despite the fact that they play a prominent role in everyday speech, the representation and processing of fixed expressions during language production are poorly understood. Here, we report a study investigating the processes underlying fixed expression production. "Tip-of-the-tongue" (TOT) states were elicited for well-known idioms (e.g., hit the nail on the head) and participants were asked to report any information they could regarding the content of the phrase. Participants were able to correctly report individual words for idioms that they could not produce. In addition, participants produced both figurative (e.g., pretty for easy on the eye) and literal errors (e.g., hammer for hit the nail on the head) when in a TOT state, suggesting that both figurative and literal meanings are active during production. There was no effect of semantic decomposability on overall TOT incidence; however, participants recalled a greater proportion of words for decomposable than for non-decomposable idioms. This finding suggests there may be differences in how decomposable and non-decomposable idioms are retrieved during production.

  2. On decoupling of volatility smile and term structure in inverse option pricing

    NASA Astrophysics Data System (ADS)

    Egger, Herbert; Hein, Torsten; Hofmann, Bernd

    2006-08-01

    Correct pricing of options and other financial derivatives is of great importance to financial markets and one of the key subjects of mathematical finance. Usually, parameters specifying the underlying stochastic model are not directly observable, but have to be determined indirectly from observable quantities. The identification of local volatility surfaces from market data of European vanilla options is one very important example of this type. As with many other parameter identification problems, the reconstruction of local volatility surfaces is ill-posed, and reasonable results can only be achieved via regularization methods. Moreover, due to the sparsity of data, the local volatility is not uniquely determined, but depends strongly on the kind of regularization norm used and a good a priori guess for the parameter. By assuming a multiplicative structure for the local volatility, which is motivated by the specific data situation, the inverse problem can be decomposed into two separate sub-problems. This removes part of the non-uniqueness and allows us to establish convergence and convergence rates under weak assumptions. Additionally, a numerical solution of the two sub-problems is much cheaper than that of the overall identification problem. The theoretical results are illustrated by numerical tests.
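
    The decoupling rests on a simple observation: a multiplicative surface sigma(T, K) = rho(T) * eta(K) is additively separable in logs, so the two factors can be identified (up to a constant split) by row and column averaging on a quote grid. A hedged toy on synthetic data, not market quotes:

    ```python
    import numpy as np

    T = np.linspace(0.1, 2.0, 20)[:, None]          # maturities
    K = np.linspace(80.0, 120.0, 25)[None, :]       # strikes
    sigma = (0.2 + 0.05 * np.exp(-T)) * (1 + 0.002 * (K - 100.0) ** 2 / 100.0)

    Ls = np.log(sigma)                               # log sigma = log rho + log eta
    log_rho = Ls.mean(axis=1) - Ls.mean()            # term-structure factor
    log_eta = Ls.mean(axis=0)                        # smile factor (absorbs constant)
    recon = log_rho[:, None] + log_eta[None, :]
    print(np.abs(recon - Ls).max())                  # ~1e-16: exactly separable
    ```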

  3. Telechelic Poly(2-oxazoline)s with a biocidal and a polymerizable terminal as collagenase inhibiting additive for long-term active antimicrobial dental materials

    PubMed Central

    Fik, Christoph P.; Konieczny, Stefan; Pashley, David H.; Waschinski, Christian J.; Ladisch, Reinhild S.; Salz, Ulrich; Bock, Thorsten; Tiller, Joerg C.

    2015-01-01

    Although modern dental repair materials show excellent mechanical and adhesion properties, they still face two major problems. First, any microbes that remain alive below the composite fillings actively decompose dentin and thus subsequently cause secondary caries. Second, even if those microbes are killed, extracellular proteases, such as MMPs, remain active and can still degrade collagenous dental tissue. In order to address both problems, a poly(2-methyloxazoline) with a biocidal quaternary ammonium terminal and a polymerizable methacrylate terminal was explored as an additive for a commercial dental adhesive. It could be demonstrated that the additive rendered the adhesive contact-active antimicrobial against S. mutans at a concentration of only 2.5 wt%, and even constant washing with water for 101 days did not diminish this effect. Increasing the amount of the additive to 5 wt% allowed killing of S. mutans cells in the tubuli of bovine dentin upon application of the adhesive. Further, the additive fully inhibited bacterial collagenase at a concentration of 0.5 wt% and reduced human recombinant collagenase MMP-9 to 13% of its original activity at that concentration. Human MMPs naturally bound to dentin were inhibited by more than 96% in a medium containing 5 wt% of the additive. Moreover, no adverse effect on the enamel/dentine shear bond strength was detected in combination with a dental composite. PMID:25130877

  4. Program Helps Decompose Complicated Design Problems

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.

    1993-01-01

    Time saved by intelligent decomposition into smaller, interrelated problems. DeMAID is a knowledge-based software system for ordering the sequence of modules and identifying a possible multilevel structure for a design problem. Displays modules in N x N matrix format. Although it requires an investment of time to generate and refine the list of modules for input, it saves a considerable amount of money and time in the total design process, particularly for new design problems in which the ordering of modules has not been defined. Program also implemented to examine assembly-line processes or the ordering of tasks and milestones.
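
    The core of what DeMAID does with the N x N matrix can be miniaturized: reorder modules so that couplings above the diagonal (feedbacks forcing design iteration) are minimized. A hedged toy using exhaustive search instead of DeMAID's knowledge-based rules:

    ```python
    import numpy as np
    from itertools import permutations

    D = np.array([[0, 1, 0, 0],     # D[i, j] = 1: module i needs output of module j
                  [0, 0, 1, 0],
                  [1, 0, 0, 0],
                  [0, 1, 0, 0]])

    def feedbacks(order):
        """Count couplings above the diagonal after reordering the matrix."""
        P = D[np.ix_(order, order)]
        return np.triu(P, 1).sum()

    best = min(permutations(range(4)), key=feedbacks)
    print(best, feedbacks(best))    # one feedback remains: modules 0,1,2 form a cycle
    ```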

  5. Traits determining the digestibility-decomposability relationships in species from Mediterranean rangelands.

    PubMed

    Bumb, Iris; Garnier, Eric; Coq, Sylvain; Nahmani, Johanne; Del Rey Granado, Maria; Gimenez, Olivier; Kazakou, Elena

    2018-03-05

    Forage quality for herbivores and litter quality for decomposers are two key plant properties affecting ecosystem carbon and nutrient cycling. Although there is a positive relationship between palatability and decomposition, very few studies have focused on larger vertebrate herbivores while considering the links between the digestibility of living leaves and stems and the decomposability of litter and its associated traits. The hypothesis tested is that some defences of living organs would reduce their digestibility and, as a consequence, their litter decomposability, through 'afterlife' effects. Additionally, in high-fertility conditions, the presence of intense herbivory would select for communities dominated by fast-growing plants, which are able to compensate for tissue loss to herbivory, producing both highly digestible organs and easily decomposable litter. Relationships between dry matter digestibility and decomposability were quantified in 16 dominant species from Mediterranean rangelands, which are subject to management regimes that differ in grazing intensity and fertilization. The digestibility and decomposability of leaves and stems were estimated at peak standing biomass, in plots that were either fertilized and intensively grazed or unfertilized and moderately grazed. Several traits were measured on living and senesced organs: fibre content, dry matter content and nitrogen, phosphorus and tannin concentrations. Digestibility was positively related to decomposability, both properties being influenced in the same direction by management regime, organ and growth form. Digestibility of leaves and stems was negatively related to their fibre concentrations, and positively related to their nitrogen concentrations. Decomposability was more strongly related to traits measured on living organs than on litter. Digestibility and decomposition were governed by similar structural traits, in particular fibre concentration, affecting both herbivores and micro-organisms through afterlife effects. This study contributes to a better understanding of the interspecific relationships between forage quality and litter decomposition in leaves and stems and demonstrates the key role these traits play in the link between plant and soil via herbivory and decomposition. Fibre concentration and dry matter content can be considered good predictors of both digestibility and decomposability.

  6. Optimization by nonhierarchical asynchronous decomposition

    NASA Technical Reports Server (NTRS)

    Shankar, Jayashree; Ribbens, Calvin J.; Haftka, Raphael T.; Watson, Layne T.

    1992-01-01

    Large scale optimization problems are tractable only if they are somehow decomposed. Hierarchical decompositions are inappropriate for some types of problems and do not parallelize well. Sobieszczanski-Sobieski has proposed a nonhierarchical decomposition strategy for nonlinear constrained optimization that is naturally parallel. Despite some successes on engineering problems, the algorithm as originally proposed fails on simple two-dimensional quadratic programs. The algorithm is carefully analyzed for quadratic programs, and a number of modifications are suggested to improve its robustness.

  7. Fully Decomposable Split Graphs

    NASA Astrophysics Data System (ADS)

    Broersma, Hajo; Kratsch, Dieter; Woeginger, Gerhard J.

    We discuss various questions around partitioning a split graph into connected parts. Our main result is a polynomial time algorithm that decides whether a given split graph is fully decomposable, i.e., whether it can be partitioned into connected parts of order α_1, α_2, ..., α_k for every α_1, α_2, ..., α_k summing up to the order of the graph. In contrast, we show that the decision problem whether a given split graph can be partitioned into connected parts of order α_1, α_2, ..., α_k for a given partition α_1, α_2, ..., α_k of the order of the graph is NP-hard.

  8. Fast inverse scattering solutions using the distorted Born iterative method and the multilevel fast multipole algorithm

    PubMed Central

    Hesford, Andrew J.; Chew, Weng C.

    2010-01-01

    The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438
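
    As a toy illustration of the successive-linearization idea behind DBIM (a sketch only: a synthetic mildly nonlinear forward model stands in for the scattering physics, and Tikhonov regularization is an assumption, not the paper's setup), each iteration computes a residual with the current background and solves a linearized least-squares subproblem:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 40, 60
    G = rng.standard_normal((m, n)) / np.sqrt(m)        # stand-in operator

    def forward(x):                                     # mildly nonlinear model
        return G @ (x + 0.3 * x**2)

    x_true = np.zeros(n); x_true[15:25] = 1.0           # material perturbation
    d = forward(x_true) + 0.01 * rng.standard_normal(m) # noisy measurements

    x = np.zeros(n)                                     # background estimate
    for it in range(20):
        r = d - forward(x)                              # forward solve + residual
        J = G @ np.diag(1.0 + 0.6 * x)                  # Jacobian about current x
        # linearized (distorted-Born-like) update with Tikhonov regularization
        dx = np.linalg.solve(J.T @ J + 1e-3 * np.eye(n), J.T @ r)
        x += dx
        if np.linalg.norm(dx) < 1e-8:
            break
    ```

    In the actual method, the expensive step is the repeated forward solve, which is exactly what the MLFMA accelerates.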

  9. Program Helps Decompose Complex Design Systems

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Hall, Laura E.

    1994-01-01

    DeMAID (A Design Manager's Aid for Intelligent Decomposition) computer program is knowledge-based software system for ordering sequence of modules and identifying possible multilevel structure for design problem. Groups modular subsystems on basis of interactions among them. Saves considerable money and time in total design process, particularly in new design problem in which order of modules has not been defined. Available in two machine versions: Macintosh and Sun.

  10. Fast, Nonlinear, Fully Probabilistic Inversion of Large Geophysical Problems

    NASA Astrophysics Data System (ADS)

    Curtis, A.; Shahraeeni, M.; Trampert, J.; Meier, U.; Cho, G.

    2010-12-01

    Almost all geophysical inverse problems are in reality nonlinear. Fully nonlinear inversion including non-approximated physics, and solving for probability distribution functions (pdfs) that describe the solution uncertainty, generally requires sampling-based Monte Carlo-style methods that are computationally intractable for most large problems. In order to solve such problems, physical relationships are usually linearized, leading to efficiently solved (possibly iterated) linear inverse problems. However, it is well known that linearization can lead to erroneous solutions, and in particular to overly optimistic uncertainty estimates. What is needed across many geophysical disciplines is a method to invert large inverse problems (or potentially tens of thousands of small inverse problems) fully probabilistically and without linearization. This talk shows how very large nonlinear inverse problems can be solved fully probabilistically, incorporating any available prior information, using mixture density networks (driven by neural network banks), provided the problem can be decomposed into many small inverse problems. In this talk I will explain the methodology, compare multi-dimensional pdf inversion results to full Monte Carlo solutions, and illustrate the method with two applications: first, inverting surface wave group and phase velocities for a fully probabilistic global tomography model of the Earth's crust and mantle, and second, inverting industrial 3D seismic data for petrophysical properties throughout and around a subsurface hydrocarbon reservoir. The latter problem is typically decomposed into 10^4 to 10^5 individual inverse problems, each solved fully probabilistically and without linearization. The results in both cases are sufficiently close to the Monte Carlo solution to exhibit realistic uncertainty, multimodality and bias. This provides far greater confidence in the results, and in decisions made on their basis.

  11. A Volunteer Computing Project for Solving Geoacoustic Inversion Problems

    NASA Astrophysics Data System (ADS)

    Zaikin, Oleg; Petrov, Pavel; Posypkin, Mikhail; Bulavintsev, Vadim; Kurochkin, Ilya

    2017-12-01

    A volunteer computing project aimed at solving computationally hard inverse problems in underwater acoustics is described. This project was used to study the possibilities of sound speed profile reconstruction in a shallow-water waveguide using a dispersion-based geoacoustic inversion scheme. The computational capabilities provided by the project allowed us to investigate the accuracy of the inversion for different mesh sizes of the sound speed profile discretization grid. This problem is well suited to volunteer computing because it can easily be decomposed into independent, simpler subproblems.

  12. Changes in herbivore control in arable fields by detrital subsidies depend on predator species and vary in space.

    PubMed

    von Berg, Karsten; Thies, Carsten; Tscharntke, Teja; Scheu, Stefan

    2010-08-01

    Prey from the decomposer subsystem may help sustain predator populations in arable fields. Adding organic residues to agricultural systems may therefore enhance pest control. We investigated whether resource addition (maize mulch) strengthens aboveground trophic cascades in winter wheat fields. Evaluating the flux of the maize-borne carbon into the food web after 9 months via stable isotope analysis allowed differentiating between prey in predator diets originating from the above- and belowground subsystems. Furthermore, we recorded aphid populations in predator-reduced and control plots of no-mulch and mulch addition treatments. All analyzed soil-dwelling species incorporated maize-borne carbon. In contrast, only 2 out of 13 aboveground predator species incorporated maize carbon, suggesting that these 2 predators forage on prey from both the above- and belowground systems. Supporting this conclusion, densities of these two predator species were increased in the mulch addition fields. Nitrogen isotope signatures suggested that these generalist predators in part fed on Collembola, thereby benefiting indirectly from detrital resources. Increased density of these two predator species was associated with increased aphid control, but the identity of predators responsible for aphid control varied in space. One of the three wheat fields studied even lacked aphid control despite the mulch-mediated increase in density of generalist predators. The results suggest that detrital subsidies quickly enter belowground food webs but only a few aboveground predator species include prey from the decomposer system in their diet. Variation between sites in the identity of predator species benefiting from detrital resources suggests that, depending on locality, different predator species are subsidised by prey from the decomposer system and that these predators contribute to aphid control. Therefore, by engineering the decomposer subsystem via detrital subsidies, biological control by generalist predators may be strengthened.

  13. Teaching Analytical Thinking

    ERIC Educational Resources Information Center

    Behn, Robert D.; Vaupel, James W.

    1976-01-01

    Description of the philosophy and general nature of a course at Drake University that emphasizes basic concepts of analytical thinking, including think, decompose, simplify, specify, and rethink problems. Some sample homework exercises are included. The journal is available from University of California Press, Berkeley, California 94720.…

  14. Decomposing Large Inverse Problems with an Augmented Lagrangian Approach: Application to Joint Inversion of Body-Wave Travel Times and Surface-Wave Dispersion Measurements

    NASA Astrophysics Data System (ADS)

    Reiter, D. T.; Rodi, W. L.

    2015-12-01

    Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
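
    The alternating scheme described here is close in spirit to consensus-style augmented Lagrangian methods. A minimal numerical sketch (with quadratic misfits standing in for the travel-time and dispersion objectives, which is an assumption for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 20
    A1, A2 = rng.standard_normal((30, n)), rng.standard_normal((25, n))
    x_ref = rng.standard_normal(n)
    b1, b2 = A1 @ x_ref, A2 @ x_ref          # two data subsets

    # minimize f1(x1) + f2(x2)  subject to  x1 = x2, with augmented Lagrangian
    # L_rho = f1 + f2 + lam.(x1 - x2) + (rho/2)||x1 - x2||^2
    rho = 1.0
    x1 = x2 = np.zeros(n)
    lam = np.zeros(n)
    for it in range(100):
        # component problems solved separately (and potentially in parallel)
        x1 = np.linalg.solve(A1.T @ A1 + rho * np.eye(n),
                             A1.T @ b1 - lam + rho * x2)
        x2 = np.linalg.solve(A2.T @ A2 + rho * np.eye(n),
                             A2.T @ b2 + lam + rho * x1)
        lam += rho * (x1 - x2)               # multiplier update steers consensus
        if np.linalg.norm(x1 - x2) < 1e-8:
            break
    ```

    The multiplier update is what steers the two subset models toward a common model solving the full problem, as the abstract describes.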

  15. Autonomous Information Unit: Why Making Data Smart Can also Make Data Secured?

    NASA Technical Reports Server (NTRS)

    Chow, Edward T.

    2006-01-01

    In this paper, we introduce a new fine-grain distributed information protection mechanism which can self-protect, self-discover, self-organize, and self-manage. In our approach, we decompose data into smaller pieces and provide individualized protection. We also provide a policy control mechanism to allow 'smart' access control and context based re-assembly of the decomposed data. By combining smart policy with individually protected data, we are able to provide better protection of sensitive information and achieve more flexible access during emergency conditions. As a result, this new fine-grain protection mechanism can enable us to achieve better solutions for problems such as distributed information protection and identity theft.

  16. Bi-Level Integrated System Synthesis (BLISS) for Concurrent and Distributed Processing

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Altus, Troy D.; Phillips, Matthew; Sandusky, Robert

    2002-01-01

    The paper introduces a new version of the Bi-Level Integrated System Synthesis (BLISS) methods intended for optimization of engineering systems conducted by distributed specialty groups working concurrently and using a multiprocessor computing environment. The method decomposes the overall optimization task into subtasks associated with disciplines or subsystems, in which the local design variables are numerous, and a single system-level optimization whose design variables are relatively few. The subtasks are fully autonomous as to their inner operations and decision making. Their purpose is to eliminate the local design variables and generate a wide spectrum of feasible designs whose behavior is represented by Response Surfaces to be accessed by the system-level optimization. It is shown that, if the problem is convex, the solution of the decomposed problem is the same as that obtained without decomposition. A simplified example of an aircraft design shows the method working as intended. The paper includes a discussion of the method's merits and demerits and recommendations for further research.

  17. Cascading effects of induced terrestrial plant defences on aquatic and terrestrial ecosystem function

    PubMed Central

    Jackrel, Sara L.; Wootton, J. Timothy

    2015-01-01

    Herbivores induce plants to undergo diverse processes that minimize costs to the plant, such as producing defences to deter herbivory or reallocating limited resources to inaccessible portions of the plant. Yet most plant tissue is consumed by decomposers, not herbivores, and these defensive processes aimed to deter herbivores may alter plant tissue even after detachment from the plant. All consumers value nutrients, but plants also require these nutrients for primary functions and defensive processes. We experimentally simulated herbivory with and without nutrient additions on red alder (Alnus rubra), which supplies the majority of leaf litter for many rivers in western North America. Simulated herbivory induced a defence response with cascading effects: terrestrial herbivores and aquatic decomposers fed less on leaves from stressed trees. This effect was context dependent: leaves from fertilized-only trees decomposed most rapidly while leaves from fertilized trees receiving the herbivory treatment decomposed least, suggesting plants funnelled a nutritionally valuable resource into enhanced defence. One component of the defence response was a decrease in leaf nitrogen leading to elevated carbon : nitrogen. Aquatic decomposers prefer leaves naturally low in C : N and this altered nutrient profile largely explains the lower rate of aquatic decomposition. Furthermore, terrestrial soil decomposers were unaffected by either treatment but did show a preference for local and nitrogen-rich leaves. Our study illustrates the ecological implications of terrestrial herbivory and these findings demonstrate that the effects of selection caused by terrestrial herbivory in one ecosystem can indirectly shape the structure of other ecosystems through ecological fluxes across boundaries. PMID:25788602

  18. Decomposability and scalability in space-based observatory scheduling

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Smith, Stephen F.

    1992-01-01

    In this paper, we discuss issues of problem and model decomposition within the HSTS scheduling framework. HSTS was developed and originally applied in the context of the Hubble Space Telescope (HST) scheduling problem, motivated by the limitations of the current solution and, more generally, the insufficiency of classical planning and scheduling approaches in this problem context. We first summarize the salient architectural characteristics of HSTS and their relationship to previous scheduling and AI planning research. Then, we describe some key problem decomposition techniques supported by HSTS and underlying our integrated planning and scheduling approach, and we discuss the leverage they provide in solving space-based observatory scheduling problems.

  19. Monopropellant combustion system

    NASA Technical Reports Server (NTRS)

    Berg, Gerald R. (Inventor); Mueller, Donn C. (Inventor); Parish, Mark W. (Inventor)

    2005-01-01

    An apparatus and method are provided for decomposition of a propellant. The propellant includes an ionic salt and an additional fuel. Means are provided for decomposing a major portion of the ionic salt. Means are provided for combusting the additional fuel and decomposition products of the ionic salt.

  20. The Processes Involved in Designing Software.

    DTIC Science & Technology

    1980-08-01

    repeats itself at the next level, terminating with a plan whose individual steps can be executed to solve the initial problem. Hayes-Roth and Hayes-Roth...that the original design problem is decomposed into a collection of well structured subproblems under the control of some type of executive process...given element to refine further, the schema is assumed to execute to completion, developing a solution model for that element and refining it into a

  1. An accurate, fast, and scalable solver for high-frequency wave propagation

    NASA Astrophysics Data System (ADS)

    Zepeda-Núñez, L.; Taus, M.; Hewett, R.; Demanet, L.

    2017-12-01

    In many science and engineering applications, solving time-harmonic high-frequency wave propagation problems quickly and accurately is of paramount importance. For example, in geophysics, particularly in oil exploration, such problems can be the forward problem in an iterative process for solving the inverse problem of subsurface inversion. It is important to solve these wave propagation problems accurately in order to efficiently obtain meaningful solutions of the inverse problems: low order forward modeling can hinder convergence. Additionally, due to the volume of data and the iterative nature of most optimization algorithms, the forward problem must be solved many times. Therefore, a fast solver is necessary to make solving the inverse problem feasible. For time-harmonic high-frequency wave propagation, obtaining both speed and accuracy is historically challenging. Recently, there have been many advances in the development of fast solvers for such problems, including methods which have linear complexity with respect to the number of degrees of freedom. While most methods scale optimally only in the context of low-order discretizations and smooth wave speed distributions, the method of polarized traces has been shown to retain optimal scaling for high-order discretizations, such as hybridizable discontinuous Galerkin methods, and for highly heterogeneous (and even discontinuous) wave speeds. The resulting fast and accurate solver is consequently highly attractive for geophysical applications. To date, this method relies on a layered domain decomposition together with a preconditioner applied in a sweeping fashion, which limits straightforward parallelization. In this work, we introduce a new version of the method of polarized traces which reveals more parallel structure than previous versions while preserving all of its other advantages. We achieve this by further decomposing each layer and applying the preconditioner to these new components separately and in parallel. We demonstrate that this produces an even more effective and parallelizable preconditioner for a single right-hand side. As before, additional speed can be gained by pipelining several right-hand sides.

  2. Comparison study of image quality and effective dose in dual energy chest digital tomosynthesis

    NASA Astrophysics Data System (ADS)

    Lee, Donghoon; Choi, Sunghoon; Lee, Haenghwa; Kim, Dohyeon; Choi, Seungyeon; Kim, Hee-Joung

    2018-07-01

    The present study aimed to introduce a recently developed digital tomosynthesis system for the chest and describe the procedure for acquiring dual energy bone-decomposed tomosynthesis images. Various beam qualities and reconstruction algorithms were evaluated for acquiring dual energy chest digital tomosynthesis (CDT) images, and the effective dose was calculated with an ion chamber and Monte Carlo simulations. The results demonstrated that dual energy CDT improved visualization of the lung field by eliminating the bony structures. In addition, the qualitative and quantitative image quality of dual energy CDT using iterative reconstruction was better than that with the filtered backprojection (FBP) algorithm. The contrast-to-noise ratio and figure of merit values of dual energy CDT acquired with iterative reconstruction were three times better than those acquired with FBP reconstruction. The difference in image quality according to the acquisition conditions was not noticeable, but the effective dose was significantly affected by the acquisition condition. The high-energy acquisition condition using 130 kVp recorded a relatively high effective dose. We conclude that dual energy CDT has the potential to compensate for major problems in CDT caused by bony structures, which induce significant artifacts. Although there are many variables in clinical practice, our results regarding reconstruction algorithms and acquisition conditions may be used as a basis for clinical use of dual energy CDT imaging.
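
    A common way to obtain bone-suppressed images from dual-energy pairs is weighted log subtraction; a minimal sketch under that assumption (the weight value below is purely illustrative, and in practice it is obtained by calibration):

    ```python
    import numpy as np

    def bone_suppressed(I_low, I_high, w=0.45):
        """Weighted log subtraction of co-registered low/high-kVp projections.

        Intensities are assumed positive. w is chosen by calibration so that
        the bone contribution cancels, leaving a soft-tissue image; the
        default here is illustrative only.
        """
        return np.exp(np.log(I_high) - w * np.log(I_low))
    ```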

  3. A Kullback-Leibler approach for 3D reconstruction of spectral CT data corrupted by Poisson noise

    NASA Astrophysics Data System (ADS)

    Hohweiller, Tom; Ducros, Nicolas; Peyrin, Françoise; Sixou, Bruno

    2017-09-01

    While standard computed tomography (CT) data do not depend on energy, spectral computed tomography (SPCT) acquires energy-resolved data, which allows material decomposition of the object of interest. Decompositions in the projection domain produce a projection mass density (PMD) per material. From the decomposed projections, a tomographic reconstruction creates a 3D material density volume. The decomposition is made possible by minimizing a cost function. A variational approach is preferred since this is an ill-posed non-linear inverse problem. Moreover, noise plays a critical role when decomposing data. That is why, in this paper, a new data fidelity term is used to take the photonic noise into account. In this work two data fidelity terms were investigated: a weighted least squares (WLS) term, adapted to Gaussian noise, and the Kullback-Leibler distance (KL), adapted to Poisson noise. A regularized Gauss-Newton algorithm minimizes the cost function iteratively. Both methods decompose materials of a numerical phantom of a mouse. Soft tissues and bones are decomposed in the projection domain; a tomographic reconstruction then creates a 3D material density volume for each material. Comparing relative errors, KL is shown to outperform WLS for low photon counts, in 2D and 3D. This new method could be of particular interest when low-dose acquisitions are performed.
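
    A minimal sketch of the two data-fidelity terms being compared, for measured counts y and forward-modeled counts m (notation assumed; the regularizer and the Gauss-Newton loop are omitted):

    ```python
    import numpy as np

    def wls(y, m, var):
        """Weighted least squares, matched to Gaussian noise."""
        return 0.5 * np.sum((y - m) ** 2 / var)

    def kl(y, m, eps=1e-12):
        """Generalized Kullback-Leibler divergence, matched to Poisson noise."""
        m = np.clip(m, eps, None)          # guard against log(0)
        y_safe = np.clip(y, eps, None)
        return np.sum(m - y + y * np.log(y_safe / m))
    ```

    Up to constants, the KL term equals the negative Poisson log-likelihood, which is why it behaves better at low photon counts.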

  4. Artificial Epigenetic Networks: Automatic Decomposition of Dynamical Control Tasks Using Topological Self-Modification.

    PubMed

    Turner, Alexander P; Caves, Leo S D; Stepney, Susan; Tyrrell, Andy M; Lones, Michael A

    2017-01-01

    This paper describes the artificial epigenetic network, a recurrent connectionist architecture that is able to dynamically modify its topology in order to automatically decompose and solve dynamical problems. The approach is motivated by the behavior of gene regulatory networks, particularly the epigenetic process of chromatin remodeling that leads to topological change and which underlies the differentiation of cells within complex biological organisms. We expected this approach to be useful in situations where there is a need to switch between different dynamical behaviors, and do so in a sensitive and robust manner in the absence of a priori information about problem structure. This hypothesis was tested using a series of dynamical control tasks, each requiring solutions that could express different dynamical behaviors at different stages within the task. In each case, the addition of topological self-modification was shown to improve the performance and robustness of controllers. We believe this is due to the ability of topological changes to stabilize attractors, promoting stability within a dynamical regime while allowing rapid switching between different regimes. Post hoc analysis of the controllers also demonstrated how the partitioning of the networks could provide new insights into problem structure.

  5. Analytical Solutions for Rumor Spreading Dynamical Model in a Social Network

    NASA Astrophysics Data System (ADS)

    Fallahpour, R.; Chakouvari, S.; Askari, H.

    2015-03-01

    In this paper, the Laplace Adomian decomposition method (LADM) is used to evaluate a rumor-spreading model. First, a brief review is given of the use of analytical methods such as the Adomian decomposition method, the variational iteration method and the homotopy analysis method for epidemic models and biomathematics. A rumor-spreading model incorporating a forgetting mechanism is then considered, and LADM is applied to solve it. By means of this method, a general solution is obtained that can readily be used to assess the rumor model without any computer program. The results are discussed for different cases and parameters. Furthermore, the method is shown to be straightforward and fruitful for analyzing equations with complicated terms, such as the rumor model. Comparison with numerical methods reveals that LADM is powerful and accurate for obtaining solutions of this model. It is concluded that the method is well suited to this problem and can provide researchers a powerful vehicle for scrutinizing rumor models in diverse kinds of social networks such as Facebook, YouTube, Flickr, LinkedIn and Twitter.
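
    For readers unfamiliar with the method, the generic LADM recursion for an initial-value problem Lu + Ru + Nu = g(t) with L = d/dt runs roughly as follows (a schematic of the standard method under assumed notation, not the specific rumor model):

    ```latex
    % Laplace transform of the equation, then series ansatz:
    \mathcal{L}\{u\} = \frac{u(0)}{s} + \frac{1}{s}\,\mathcal{L}\{g - Ru - Nu\},
    \qquad u = \sum_{n=0}^{\infty} u_n,
    \qquad Nu = \sum_{n=0}^{\infty} A_n,
    % with Adomian polynomials
    \qquad A_n = \frac{1}{n!}\,\frac{d^n}{d\lambda^n}
      \Big[\, N\Big(\textstyle\sum_{k \ge 0}\lambda^k u_k\Big) \Big]_{\lambda=0}.
    ```

    Inverting the transform term by term gives u_0 from the initial data and forcing, and each subsequent u_{n+1} from A_n; truncating the series yields the closed-form approximation referred to above.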

  6. Pauses and Intonational Phrasing: ERP Studies in 5-Month-Old German Infants and Adults

    ERIC Educational Resources Information Center

    Mannel, Claudia; Friederici, Angela D.

    2009-01-01

    In language learning, infants are faced with the challenge of decomposing continuous speech into relevant units, such as syntactic clauses and words. Within the framework of prosodic bootstrapping, behavioral studies suggest infants approach this segmentation problem by relying on prosodic information, especially on acoustically marked…

  7. Decomposability and convex structure of thermal processes

    NASA Astrophysics Data System (ADS)

    Mazurek, Paweł; Horodecki, Michał

    2018-05-01

    We present an example of a thermal process (TP) for a system of d energy levels which cannot be performed without instant access to the whole energy space. This TP is uniquely connected with a transition between certain states of the system that cannot be performed without access to the whole energy space, even when approximate transitions are allowed. Pursuing the question of the decomposability of TPs into convex combinations of compositions of processes acting non-trivially on smaller subspaces, we investigate transitions within the subspace of states diagonal in the energy basis. For three-level systems, we determine the set of extremal points of these operations, as well as the minimal set of operations needed to perform an arbitrary TP, and connect the set of TPs with the thermomajorization criterion. We show that the structure of the set depends on temperature, which is associated with the fact that TPs cannot deterministically increase the extractable work from a state, a conclusion that holds for an arbitrary d-level system. We also connect the decomposability problem with the detailed balance symmetry of extremal TPs.

  8. Effect of addition of butyl benzyl phthalate plasticizer and zinc oxide nanoparticles on mechanical properties of cellulose acetate butyrate/organoclay biocomposite

    NASA Astrophysics Data System (ADS)

    Putra, B. A. P.; Juwono, A. L.; Rochman, N. T.

    2017-07-01

    Plastics as packaging materials and coatings are subject to globally increasing demand each year. This poses a serious problem for the environment because plastics are difficult to degrade. One approach to the problem of plastic waste is the use of bioplastics; according to the European Bioplastics organization, cellulose derivatives are among the biodegradable plastics. To improve the mechanical properties of bioplastics, biocomposites are made with the addition of certain additives and fillers. The aim of this study was to investigate the effect of butyl benzyl phthalate (BBP) plasticizer and ZnO nanoparticle addition on the mechanical properties of a cellulose acetate butyrate (CAB)/organoclay biocomposite. ZnO nanoparticles were synthesized from a commercial ZnO precursor using a sol-gel size reduction method. ZnO was dissolved in a solution of citric acid at ratios of 1:1 to 1:5 to form zinc citrate, which was then decomposed by calcination at 600 °C. ZnO nanoparticles with an average size of 44.4 nm were obtained at a ratio of 1:2. The addition of 10-15 wt% ZnO nanoparticles and 30-40 wt% BBP plasticizer was studied to determine the effect on the tensile strength, elongation and elastic modulus of the biocomposites. Biocomposite films were made by solution casting with acetone as the solvent. The addition of 30 wt% BBP plasticizer and 10 wt% ZnO nanoparticles gave a biocomposite with a tensile strength of 2.223 MPa.

  9. Generalization of Jacobi's Decomposition Theorem to the Rotation and Translation of a Solid in a Fluid.

    NASA Astrophysics Data System (ADS)

    Chiang, Rong-Chang

    Jacobi found that the rotation of a symmetrical heavy top about a fixed point is composed of two torque-free rotations of two triaxial bodies about their centers of mass. His discovery rests on the fact that the orthogonal matrix which represents the rotation of a symmetrical heavy top is decomposed into a product of two orthogonal matrices, each of which represents the torque-free rotation of a triaxial body. This theorem is generalized to Kirchhoff's case of the rotation and translation of a symmetrical solid in a fluid. The generalization requires the explicit computation, by means of theta functions, of the nine direction cosines between the rotating body axes and the fixed space axes. The addition theorem of theta functions makes it possible to decompose the rotational matrix into a product of similar matrices. This basic idea of utilizing the addition theorem is simple, but carrying through the computation is quite involved, and the full proof turns out to be a lengthy process of computing rather long and complex expressions. For the translational motion we give a new treatment. The position of the center of mass as a function of time is found by a direct evaluation of the elliptic integral by means of a new theta interpretation of Legendre's reduction formula for the elliptic integral. For the complete solution of the problem we have further added a study of the physical aspects of the motion. Based on a complete examination of all possible manifolds of the steady helical cases, it is possible to obtain a full qualitative description of the motion. Many numerical examples and graphs are given to illustrate the rotation and translation of the solid in a fluid.

  10. Working Papers in Speech Recognition. IV. The Hearsay II System

    DTIC Science & Technology

    1976-02-01

    implementation of this model (Reddy, Erman, and Neely [73]; Reddy, Erman, Fennell, and Neely [73]; Neely [73]; Erman [74]). This system, which was the... Fennell, Erman, and Reddy [74]). Hearsay II is also based on the Hearsay model: it generalizes and extends many of the concepts which exist in a...difficulty of decomposing large problems for such machines. Erman, Fennell, Lesser, and Reddy [73] describe this problem and outline some early solutions

  11. Multicriteria hierarchical iterative interactive algorithm for organizing operational modes of large heat supply systems

    NASA Astrophysics Data System (ADS)

    Korotkova, T. I.; Popova, V. I.

    2017-11-01

    The generalized mathematical model of decision-making in the problem of planning and mode selection providing the required heat loads in a large heat supply system is considered. The system is multilevel, decomposed into levels of main and distribution heating networks with intermediate control stages. Evaluation of the effectiveness, reliability and safety of such a complex system is carried out directly in terms of several indicators, in particular pressure, flow and temperature. This global multicriteria optimization problem with constraints is decomposed into a number of local optimization problems and a coordination problem. An agreed solution of the local problems provides a solution to the global multicriteria decision-making problem for the complex system. The choice of the optimal operational mode of a complex heat supply system is made on the basis of an iterative coordination process, which converges to the coordinated solution of the local optimization tasks. The interactive principle of multicriteria decision-making includes, in particular, periodic adjustments when necessary, guaranteeing optimal safety, reliability and efficiency of the system as a whole during operation. The degree of accuracy of the solution, for example the permitted deviation of the internal air temperature from the required value, can also be changed interactively. This makes it possible to carry out adjustment activities in the best way and to improve the quality of heat supply to consumers. At the same time, an energy-saving task is solved to determine the minimum required heads at sources and pumping stations.

  12. Slow decomposition of lower order roots: a key mechanism of root carbon and nutrient retention in the soil.

    PubMed

    Fan, Pingping; Guo, Dali

    2010-06-01

    Among tree fine roots, the distal small-diameter lateral branches comprising first- and second-order roots lack secondary (wood) development. Therefore, these roots are expected to decompose more rapidly than higher order woody roots. But this prediction has not been tested and may not be correct. Current evidence suggests that lower order roots may decompose more slowly than higher order roots in tree species associated with ectomycorrhizal (EM) fungi because they are preferentially colonized by fungi and encased by a fungal sheath rich in chitin (a recalcitrant compound). In trees associated with arbuscular mycorrhizal (AM) fungi, lower order roots do not form fungal sheaths, but they may have poorer C quality, e.g. lower concentrations of soluble carbohydrates and higher concentrations of acid-insolubles than higher order roots, and thus may decompose more slowly. In addition, litter with high concentrations of acid-insolubles decomposes more slowly under higher N concentrations (as in lower order roots). Therefore, we propose that in both AM and EM trees, lower order roots decompose more slowly than higher order roots due to the combination of poor C quality and high N concentrations. To test this hypothesis, we examined decomposition of the first six root orders in Fraxinus mandshurica (an AM species) and Larix gmelinii (an EM species) using the litterbag method in northeastern China. We found that lower order roots of both species decomposed more slowly than higher order roots, and this pattern appears to be associated mainly with initial C quality and N concentrations. Because these lower order roots have short life spans and thus dominate root mortality, their slow decomposition implies that a substantial fraction of the stable soil organic matter pool is derived from these lower order roots, at least in the two species we studied.

  13. Program Helps Decompose Complex Design Systems

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Hall, Laura E.

    1995-01-01

    DeMAID (Design Manager's Aid for Intelligent Decomposition) computer program is knowledge-based software system for ordering sequence of modules and identifying possible multilevel structure for design problems such as large platforms in outer space. Groups modular subsystems on basis of interactions among them. Saves considerable amount of money and time in total design process, particularly in new design problem in which order of modules has not been defined. Originally written for design problems, also applicable to problems containing modules (processes) that take inputs and generate outputs. Available in three machine versions: Macintosh written in Symantec's Think C 3.01, Sun, and SGI IRIS in C language.

  14. Investigating the Conceptual Variation of Major Physics Textbooks

    NASA Astrophysics Data System (ADS)

    Stewart, John; Campbell, Richard; Clanton, Jessica

    2008-04-01

    The conceptual problem content of the electricity and magnetism chapters of seven major physics textbooks was investigated. The textbooks presented a total of 1600 conceptual electricity and magnetism problems. The solution to each problem was decomposed into its fundamental reasoning steps. These fundamental steps are, then, used to quantify the distribution of conceptual content among the set of topics common to the texts. The variation of the distribution of conceptual coverage within each text is studied. The variation between the major groupings of the textbooks (conceptual, algebra-based, and calculus-based) is also studied. A measure of the conceptual complexity of the problems in each text is presented.

  15. Linear decomposition approach for a class of nonconvex programming problems.

    PubMed

    Shen, Peiping; Wang, Chunfeng

    2017-01-01

    This paper presents a linear decomposition approach for a class of nonconvex programming problems by dividing the input space into polynomially many grids. It shows that under certain assumptions the original problem can be transformed and decomposed into a polynomial number of equivalent linear programming subproblems. By solving a series of linear programming subproblems corresponding to those grid points, we can obtain a near-optimal solution of the original problem. Compared to existing results in the literature, the proposed algorithm does not require the assumptions of quasi-concavity and differentiability of the objective function, and it differs significantly from them, giving an interesting approach to solving the problem with a reduced running time.
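
    A rough Python sketch of the grid-and-linearize idea (illustrative only: the paper's transformation into equivalent LPs is more specific than the naive gradient-based linearization used here, and the objective is a made-up example):

    ```python
    import itertools
    import numpy as np
    from scipy.optimize import linprog

    def f(x):                       # nonconvex objective (illustrative)
        return np.sin(3 * x[0]) + (x[1] - 0.5) ** 2 * np.cos(x[0])

    def grad_f(x, h=1e-6):          # numerical gradient for the linearization
        g = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x); e[i] = h
            g[i] = (f(x + e) - f(x - e)) / (2 * h)
        return g

    lo, hi, k = 0.0, 1.0, 8         # box bounds and grid resolution per axis
    edges = np.linspace(lo, hi, k + 1)
    best = (np.inf, None)
    for idx in itertools.product(range(k), repeat=2):
        cell_lo = np.array([edges[i] for i in idx])
        cell_hi = np.array([edges[i + 1] for i in idx])
        c = grad_f((cell_lo + cell_hi) / 2)          # linear model on the cell
        res = linprog(c, bounds=list(zip(cell_lo, cell_hi)))  # LP subproblem
        if res.success and f(res.x) < best[0]:
            best = (f(res.x), res.x)                 # keep best cell solution
    ```

    The number of LP subproblems grows with the grid resolution, which is where the polynomial bound in the paper comes in.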

  16. Particle agglomeration and fuel decomposition in burning slurry droplets

    NASA Astrophysics Data System (ADS)

    Choudhury, P. Roy; Gerstein, Melvin

    In a burning slurry droplet the particles tend to agglomerate and produce large clusters which are difficult to burn. As a consequence, the combustion efficiency is drastically reduced. For such a droplet the nonlinear D²-t behavior associated with the formation of hard-to-burn agglomerates can be explained if the fuel decomposes on the surface of the particles. This paper deals with analysis and experiments with JP-10 and Diesel #2 slurries prepared with inert SiC and Al₂O₃ particles. It provides direct evidence of decomposed fuel residue on the surface of the particles heated by flame radiation. These decomposed fuel residues act as bonding agents and appear to be responsible for the observed agglomeration of particles in a slurry. Chemical analysis, scanning electron microscope photographs and finally micro-analysis by electron scattering clearly show the presence of decomposed fuel residue on the surface of the particles. Diesel #2 is decomposed relatively easily and therefore leaves a thicker deposit on SiC and forms larger agglomerates than the more stable JP-10. A surface reaction model with particles heated by flame radiation is able to describe the observed trend of the diameter history of the slurry fuel. Additional experiments with particles of lower emissivity (Al₂O₃) and radiation-absorbing dye validate the theoretical model of the role of flame radiation in fuel decomposition and the formation of agglomerates in burning slurry droplets.

  17. Reactions in trifluoroacetic acid (CF₃COOH) induced by low energy electron attachment

    NASA Astrophysics Data System (ADS)

    Langer, Judith; Stano, Michal; Gohlke, Sascha; Foltin, Victor; Matejcik, Stefan; Illenberger, Eugen

    2006-02-01

    Dissociative electron attachment to trifluoroacetic acid (CF₃COOH) is characterized by an intense low energy shape resonance located near 1 eV and a comparatively weaker core excited resonance located near 7 eV. The shape resonance decomposes into the fragment ions CF₃COO⁻, CF₂COO⁻, and CF₂⁻. The underlying reactions include simple bond cleavage but also more complex sequences involving multiple bond cleavages, rearrangement in the precursor ion and formation of new molecules (HF, CO₂). The core excited resonance additionally decomposes into F⁻, CF₃⁻ and probably metastable CO₂⁻.

  18. An algorithm of adaptive scale object tracking in occlusion

    NASA Astrophysics Data System (ADS)

    Zhao, Congmei

    2017-05-01

    Although correlation filter-based trackers achieve competitive results in both accuracy and robustness, there are still problems in handling scale variation, object occlusion, fast motion and so on. In this paper, a multi-scale kernel correlation filter algorithm based on a random fern detector was proposed. The tracking task was decomposed into target scale estimation and translation estimation. At the same time, Color Names features and HOG features were fused at the response level to further improve the overall tracking performance of the algorithm. In addition, an online random fern classifier was trained to recover the target after it was lost. By comparison with algorithms such as KCF, DSST, TLD, MIL, CT and CSK, experimental results show that the proposed approach can estimate the object state accurately and handle object occlusion effectively.

  19. A structural model decomposition framework for systems health management

    NASA Astrophysics Data System (ADS)

    Roychoudhury, I.; Daigle, M.; Bregon, A.; Pulido, B.

    Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.

  20. A Structural Model Decomposition Framework for Systems Health Management

    NASA Technical Reports Server (NTRS)

    Roychoudhury, Indranil; Daigle, Matthew J.; Bregon, Anibal; Pulido, Belamino

    2013-01-01

    Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.

  1. Better Decomposition Heuristics for the Maximum-Weight Connected Graph Problem Using Betweenness Centrality

    NASA Astrophysics Data System (ADS)

    Yamamoto, Takanori; Bannai, Hideo; Nagasaki, Masao; Miyano, Satoru

    We present new decomposition heuristics for finding the optimal solution for the maximum-weight connected graph problem, which is known to be NP-hard. Previous optimal algorithms for solving the problem decompose the input graph into subgraphs using heuristics based on node degree. We propose new heuristics based on betweenness centrality measures, and show through computational experiments that our new heuristics tend to reduce the number of subgraphs in the decomposition, and therefore could lead to the reduction in computational time for finding the optimal solution. The method is further applied to analysis of biological pathway data.
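
    One plausible reading of a centrality-guided decomposition (a sketch under assumed details, not necessarily the authors' exact heuristic), using networkx: rank nodes by betweenness centrality and remove the top-ranked ones so the remainder splits into smaller components that can be searched separately.

    ```python
    import networkx as nx

    def decompose_by_betweenness(G, n_separators=2):
        """Split an undirected graph by removing high-betweenness nodes."""
        bc = nx.betweenness_centrality(G)
        separators = sorted(bc, key=bc.get, reverse=True)[:n_separators]
        H = G.copy()
        H.remove_nodes_from(separators)
        parts = [H.subgraph(c).copy() for c in nx.connected_components(H)]
        return separators, parts

    # usage on a small example graph
    G = nx.path_graph(7)                      # nodes 0..6 in a line
    seps, parts = decompose_by_betweenness(G)
    print(seps, [sorted(p.nodes) for p in parts])
    ```

    The intuition is that high-betweenness nodes lie on many shortest paths, so removing them tends to disconnect the graph into fewer, more balanced pieces than degree-based removal.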

  2. Innovating Method of Existing Mechanical Product Based on TRIZ Theory

    NASA Astrophysics Data System (ADS)

    Zhao, Cunyou; Shi, Dongyan; Wu, Han

    The main ways of product development are adaptive design and variant design based on an existing product. In this paper, a conceptual design framework and its flow model for product innovation are put forward by combining conceptual design methods with TRIZ theory. A process system model of innovative design is constructed that includes requirement analysis, total function analysis and decomposition, engineering problem analysis, finding solutions to the engineering problem, and preliminary design; this establishes the basis for the innovative design of existing products.

  3. Long-term Priming-induced Changes in Permafrost Soil Organic Matter Decomposition

    NASA Astrophysics Data System (ADS)

    Pegoraro, E.; Bracho, R. G.; Schuur, E.

    2016-12-01

    Warming of tundra ecosystems due to climate change is predicted to thaw permafrost and increase plant biomass and litter input to soil. Additional input of easily decomposable carbon can stimulate microbial activity, consequently increasing soil organic matter decomposition rates. This phenomenon, known as the priming effect, can exacerbate the effects of climate change by releasing more CO2 from permafrost soils; however, the extent to which it could decrease soil carbon stocks in the Arctic is unknown. Most priming incubation studies are conducted for a short period of time, making it difficult to assess whether priming is a short-term phenomenon or could persist over the long term. We incubated permafrost soil from a moist acidic tundra site in Healy, Alaska for 456 days at 15 °C. Soil from surface and deep layers was amended with three pulses of uniformly ¹³C-labeled glucose, a fast-decomposing substrate, every 152 days. We also quantified the proportion of old carbon respired by measuring ¹⁴CO₂. Substrate addition resulted in higher respiration rates in glucose-amended soils; however, positive priming was only observed in deep layers, where on average 9%, 57%, and 25% more soil-derived C was respired at the 45-55, 65-75, and 75-85 cm depth increments for the duration of the experiment. This suggests that microbes in deep layers are limited in energy, and the addition of easily decomposable carbon increases native soil organic matter decomposition.

  4. Decomposition Algorithm for Global Reachability on a Time-Varying Graph

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki

    2010-01-01

    A decomposition algorithm has been developed for global reachability analysis on a space-time grid. By exploiting the upper block-triangular structure, the planning problem is decomposed into smaller subproblems, which is much more scalable than the original approach. Recent studies have proposed the use of a hot-air (Montgolfier) balloon for possible exploration of Titan and Venus because these bodies have thick haze or cloud layers that limit the science return from an orbiter, and the atmospheres would provide enough buoyancy for balloons. One of the important questions that needs to be addressed is what surface locations the balloon can reach from an initial location, and how long it would take. This is referred to as the global reachability problem, where the paths from starting locations to all possible target locations must be computed. The balloon could be driven with its own actuation, but its actuation capability is fairly limited. It would be more efficient to take advantage of the wind field and ride the wind that is much stronger than what the actuator could produce. It is possible to pose the path planning problem as a graph search problem on a directed graph by discretizing the spacetime world and the vehicle actuation. The decomposition algorithm provides reachability analysis of a time-varying graph. Because the balloon only moves in the positive direction in time, the adjacency matrix of the graph can be represented with an upper block-triangular matrix, and this upper block-triangular structure can be exploited to decompose a large graph search problem. The new approach consumes a much smaller amount of memory, which also helps speed up the overall computation when the computing resource has a limited physical memory compared to the problem size.
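
    Because the time-expanded adjacency matrix is upper block-triangular (motion only goes forward in time), reachability can be computed one time layer at a time, keeping only the current frontier in memory. A minimal sketch (the node and wind-field encodings are assumptions for illustration):

    ```python
    # adjacency_per_step[t] maps each node to the successors reachable during
    # step t (e.g., locations the wind field can carry the balloon to).
    def reachable_sets(adjacency_per_step, start_nodes):
        frontier = set(start_nodes)
        reach = [frontier]
        for adj in adjacency_per_step:      # time only moves forward
            frontier = {v for u in frontier for v in adj.get(u, ())}
            reach.append(frontier)          # nodes reachable after this step
        return reach

    # usage with a tiny two-step wind field
    steps = [{"A": ["B"], "B": ["C"]}, {"B": ["C"], "C": ["C"]}]
    print(reachable_sets(steps, ["A"]))     # [{'A'}, {'B'}, {'C'}]
    ```

    Processing one block (time layer) at a time is exactly what lets the decomposed approach use memory proportional to a single layer rather than the whole space-time graph.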

  5. Discrete-time entropy formulation of optimal and adaptive control problems

    NASA Technical Reports Server (NTRS)

    Tsai, Yweting A.; Casiello, Francisco A.; Loparo, Kenneth A.

    1992-01-01

    The discrete-time version of the entropy formulation of optimal control of problems developed by G. N. Saridis (1988) is discussed. Given a dynamical system, the uncertainty in the selection of the control is characterized by the probability distribution (density) function which maximizes the total entropy. The equivalence between the optimal control problem and the optimal entropy problem is established, and the total entropy is decomposed into a term associated with the certainty equivalent control law, the entropy of estimation, and the so-called equivocation of the active transmission of information from the controller to the estimator. This provides a useful framework for studying the certainty equivalent and adaptive control laws.
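
    Schematically, the decomposition described here can be written as follows (the symbols are assumed for illustration, not Saridis's original notation):

    ```latex
    H_{\text{total}}
      \;=\; \underbrace{H_{\text{CE}}}_{\text{certainty-equivalent law}}
      \;+\; \underbrace{H_{\text{est}}}_{\text{entropy of estimation}}
      \;+\; \underbrace{H_{\text{eq}}}_{\text{equivocation}}
    ```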

  6. Decomposing of Socioeconomic Inequality in Mental Health: A Cross-Sectional Study into Female-Headed Households.

    PubMed

    Veisani, Yousef; Delpisheh, Ali

    2015-01-01

    A connection between socioeconomic status and mental health has already been reported: mental health is distributed asymmetrically in society, so people in disadvantaged conditions suffer a disproportionate burden of mental disorders. In this study, we aimed to understand the determinants of socioeconomic inequality in mental health in female-headed households and to decompose the contributions of socioeconomic determinants to that inequality. In this cross-sectional study, 787 female-headed households were enrolled using systematic random sampling in 2014. Data were taken from the household assets survey and the self-administered 28-item General Health Questionnaire (GHQ-28), a screening tool for detecting possible cases of mental disorders. Inequality was measured by the concentration index (CI) and decomposed into its contributing factors. All analyses were performed with the standard statistical software Stata 11.2. The overall CI for mental health in female-headed households was -0.049 (95% CI: -0.072, 0.025). The largest positive contributors to inequality in mental health in female-headed households were age (34%) and poor household economic status (22%). Socioeconomic inequalities in mental health exist in female-headed households, and mental health problems are more prevalent in women with lower socioeconomic status.

  7. Vectorization and parallelization of the finite strip method for dynamic Mindlin plate problems

    NASA Technical Reports Server (NTRS)

    Chen, Hsin-Chu; He, Ai-Fang

    1993-01-01

    The finite strip method is a semi-analytical finite element process which allows for a discrete analysis of certain types of physical problems by discretizing the domain of the problem into finite strips. This method decomposes a single large problem into m smaller independent subproblems when m harmonic functions are employed, thus yielding natural parallelism at a very high level. In this paper we address vectorization and parallelization strategies for the dynamic analysis of simply-supported Mindlin plate bending problems and show how to prevent potential conflicts in memory access during the assemblage process. The vector and parallel implementations of this method and the performance results of a test problem under scalar, vector, and vector-concurrent execution modes on the Alliant FX/80 are also presented.
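
    Since the m harmonic subproblems are mutually independent, they map directly onto parallel workers. A minimal sketch of that high-level parallelism (the per-harmonic stiffness assembly is assumed to be done elsewhere):

    ```python
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def solve_harmonic(subproblem):
        """Solve one independent harmonic subproblem K_m a_m = f_m."""
        K_m, f_m = subproblem
        return np.linalg.solve(K_m, f_m)

    def finite_strip_solve(subproblems):
        # subproblems: list of (K_m, f_m) pairs, one per harmonic function m;
        # wrap calls in `if __name__ == "__main__":` when run as a script so
        # process spawning works on all platforms.
        with ProcessPoolExecutor() as pool:
            return list(pool.map(solve_harmonic, subproblems))
    ```

    Within each worker, vectorization of the per-strip assembly and solve then provides the second, lower level of parallelism the abstract describes.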

  8. Decomposing Slavic Aspect: The Role of Aspectual Morphology in Polish and Other Slavic Languages

    ERIC Educational Resources Information Center

    Lazorczyk, Agnieszka Agata

    2010-01-01

    This dissertation considers the problem of the semantic function of verbal aspectual morphology in Polish and other Slavic languages in the framework of generative syntax and semantics. Three kinds of such morphology are examined: (i) prefixes attaching directly to the root, (ii) "secondary imperfective" suffixes, and (iii) three prefixes that…

  9. Parallel Logic Programming Architecture

    DTIC Science & Technology

    1990-04-01

    Section 3.1. 3.1. A STATIC ALLOCATION SCHEME (SAS) Methods that have been used for decomposing distributed problems in artificial intelligence...multiple agents, knowledge organization and allocation, and cooperative parallel execution. These difficulties are common to distributed artificial...for the following reasons. First, intelligent backtracking requires much more bookkeeping and is therefore more costly during consult-time and during

  10. The Second Conference on the Environmental Chemistry of Hydrazine Fuels; 15 February 1979.

    DTIC Science & Technology

    1982-04-01

    tank by a moving piston in the tank. The hydrazine travels to a gas generator where it decomposes on an iridium/alumina catalyst. The gas is used to...possibility of nitrogen trichloride formation and presented control instrument problems since commercially available instruments required a pH of about 5

  11. Construct DTPB Model by Using DEMATEL: A Study of a University Library Website

    ERIC Educational Resources Information Center

    Lee, Yu-Cheng; Hsieh, Yi-Fang; Guo, Yau-Bin

    2013-01-01

    Purpose: Traditional studies on a decomposed theory of planned behavior (DTPB) analyze the relationship of variables through a structural equation model. If certain variables do not fully comply with the independent hypothesis, it is not possible to conduct proper analysis, which leads to false conclusions. To solve these problems, the aim of this…

  12. Improvements in Operational Readiness by Distributing Manufacturing Capability in the Supply Chain through Additive Manufacturing

    DTIC Science & Technology

    2017-12-01

    inefficiencies of a more complex system. Additional time may also be due to the longer distances traveled. The fulfillment time for a requisition to...advanced manufacturing methods with additive manufacturing. This work decomposes the additive manufacturing processes into 11 primary functions. The time

  13. Phenotypic responses to microbial volatiles render a mold fungus more susceptible to insect damage.

    PubMed

    Caballero Ortiz, Silvia; Trienens, Monika; Pfohl, Katharina; Karlovsky, Petr; Holighaus, Gerrit; Rohlfs, Marko

    2018-04-01

    In decomposer systems, fungi show diverse phenotypic responses to volatile organic compounds of microbial origin (volatiles). The mechanisms underlying such responses and their consequences for the performance and ecological success of fungi in a multitrophic community context have rarely been tested explicitly. We used a laboratory-based approach in which we investigated a tripartite yeast-mold-insect model decomposer system to understand the possible influence of yeast-borne volatiles on the ability of a chemically defended mold fungus to resist insect damage. The volatile-exposed mold phenotype (1) did not exhibit protein kinase A-dependent morphological differentiation, (2) was more susceptible to insect foraging activity, and (3) had reduced insecticidal properties. Additionally, the volatile-exposed phenotype was strongly impaired in secondary metabolite formation and unable to activate "chemical defense" genes upon insect damage. These results suggest that volatiles can be ecologically important factors that affect the chemical-based combative abilities of fungi against insect antagonists and, consequently, the structure and dynamics of decomposer communities.

  14. Large photorefractive effect in a thermally decomposed polymer compared with that in molecularly doped systems

    NASA Astrophysics Data System (ADS)

    Yokoyama, Kenji; Arishima, Koichi; Sukegawa, Ken

    1994-07-01

    Photorefractive polymers with the same electro-optic effect were fabricated to investigate the photorefractive effects in different photoconductive systems. The photoconduction in the polymers was varied by the addition of squarylium dye to diethylaminobenzaldehyde-diphenylhydrazone (DEH), by the formation of a charge-transfer complex between tetracyanoquinodimethane and DEH, and by the thermal decomposition of DEH. The largest photorefractive effect was observed in the thermally decomposed polymer among these polymers. A diffraction efficiency of 1.1% and a beam-coupling gain coefficient of 10 cm-1 were achieved in a 34.9 V/μm dc electric field.

  15. Structural optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; James, B.; Dovi, A.

    1983-01-01

    A method is described for decomposing an optimization problem into a set of subproblems and a coordination problem which preserves coupling between the subproblems. The method is introduced as a special case of multilevel, multidisciplinary system optimization, and its algorithm is fully described for two-level optimization of structures assembled from finite elements of arbitrary type. Numerical results are given for an example framework to show that the decomposition method converges and yields results comparable to those obtained without decomposition. It is pointed out that optimization by decomposition should reduce design time by allowing groups of engineers, using different computers, to work concurrently on the same large problem.

  16. Influence of Litter Diversity on Dissolved Organic Matter Release and Soil Carbon Formation in a Mixed Beech Forest

    PubMed Central

    Scheibe, Andrea; Gleixner, Gerd

    2014-01-01

    We investigated the effect of leaf litter on below ground carbon export and soil carbon formation in order to understand how litter diversity affects carbon cycling in forest ecosystems. 13C labeled and unlabeled leaf litter of beech (Fagus sylvatica) and ash (Fraxinus excelsior), characterized by low and high decomposability, were used in a litter exchange experiment in the Hainich National Park (Thuringia, Germany). Litter was added in pure and mixed treatments with either beech or ash labeled with 13C. We collected soil water at 5 cm mineral soil depth below each treatment biweekly and determined dissolved organic carbon (DOC), δ13C values and anion contents. In addition, we measured carbon concentrations and δ13C values in the organic and mineral soil (collected in 1 cm increments) up to 5 cm soil depth at the end of the experiment. Litter-derived C contributes less than 1% to dissolved organic matter (DOM) collected at 5 cm mineral soil depth. The better decomposable ash litter released significantly more (0.50±0.17%) litter carbon than beech litter (0.17±0.07%). All soil layers together held around 30% of litter-derived carbon, indicating the large retention potential of the top soil for litter-derived C. Interestingly, in mixed (ash and beech litter) treatments we did not find a higher contribution of the better decomposable ash-derived carbon in DOM, the O horizon or the mineral soil. This suggests that the known selective decomposition of better decomposable litter by soil fauna has no or only minor effects on the release and formation of litter-derived DOM and soil organic matter. Overall, our experiment showed that 1) litter-derived carbon is of low importance for dissolved organic carbon release and 2) litter of higher decomposability decomposes faster, but litter diversity does not influence the carbon flow. PMID:25486628

  17. Influence of litter diversity on dissolved organic matter release and soil carbon formation in a mixed beech forest.

    PubMed

    Scheibe, Andrea; Gleixner, Gerd

    2014-01-01

    We investigated the effect of leaf litter on below ground carbon export and soil carbon formation in order to understand how litter diversity affects carbon cycling in forest ecosystems. 13C labeled and unlabeled leaf litter of beech (Fagus sylvatica) and ash (Fraxinus excelsior), characterized by low and high decomposability, were used in a litter exchange experiment in the Hainich National Park (Thuringia, Germany). Litter was added in pure and mixed treatments with either beech or ash labeled with 13C. We collected soil water at 5 cm mineral soil depth below each treatment biweekly and determined dissolved organic carbon (DOC), δ13C values and anion contents. In addition, we measured carbon concentrations and δ13C values in the organic and mineral soil (collected in 1 cm increments) up to 5 cm soil depth at the end of the experiment. Litter-derived C contributes less than 1% to dissolved organic matter (DOM) collected at 5 cm mineral soil depth. The better decomposable ash litter released significantly more (0.50±0.17%) litter carbon than beech litter (0.17±0.07%). All soil layers together held around 30% of litter-derived carbon, indicating the large retention potential of the top soil for litter-derived C. Interestingly, in mixed (ash and beech litter) treatments we did not find a higher contribution of the better decomposable ash-derived carbon in DOM, the O horizon or the mineral soil. This suggests that the known selective decomposition of better decomposable litter by soil fauna has no or only minor effects on the release and formation of litter-derived DOM and soil organic matter. Overall, our experiment showed that 1) litter-derived carbon is of low importance for dissolved organic carbon release and 2) litter of higher decomposability decomposes faster, but litter diversity does not influence the carbon flow.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, X; Petrongolo, M; Wang, T

    Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded quality of the decomposed images. We have previously proposed an iterative denoising method for DECT. Using a linear decomposition function, that method does not gain the full benefits of DECT for beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filtered backprojection reconstruction. Noise on the decomposed images is then suppressed by an iterative method, which is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam-hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of two basis materials by one order of magnitude without sacrificing the spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition of DECT. Preliminary phantom studies have shown the proposed method improves the image uniformity and reduces noise level without resolution loss. In the future, we will perform more phantom studies to further validate the performance of the proposed method. This work is supported by a Varian MRA grant.
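
    The denoising step described above is a least-squares estimate with a smoothness penalty, weighted by the inverse variance-covariance of the decomposed images. The following is a minimal single-image sketch of that structure only; the diagonal weight, periodic boundaries and simple gradient descent are assumptions, whereas the actual method operates on material-image pairs with a full covariance-derived weight.

    ```python
    import numpy as np

    def penalized_wls_denoise(x0, inv_var, beta=0.1, iters=200, step=0.1):
        """Minimize (x - x0)^T W (x - x0) + beta * ||grad x||^2 by gradient
        descent, with W a diagonal inverse-variance weight (an assumption;
        the paper uses the full variance-covariance of the decomposed images)."""
        x = x0.copy()
        for _ in range(iters):
            # Discrete Laplacian via finite differences (periodic boundaries
            # for simplicity in this sketch)
            lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0)
                   + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
            grad = 2 * inv_var * (x - x0) - 2 * beta * lap
            x -= step * grad
        return x
    ```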

  19. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1988-01-01

    The purpose is to document research to develop strategies for concurrent processing of complex algorithms in data driven architectures. The problem domain consists of decision-free algorithms having large-grained, computationally complex primitive operations. Such algorithms are often found in signal processing and control applications. The anticipated multiprocessor environment is a data flow architecture containing between two and twenty computing elements. Each computing element is a processor with local program memory that communicates with a common global data memory. A new graph-theoretic model called ATAMM, which establishes rules for relating a decomposed algorithm to its execution in a data flow architecture, is presented. The ATAMM model is used to determine strategies to achieve optimum time performance and to develop a system diagnostic software tool. In addition, preliminary work on a new multiprocessor operating system based on the ATAMM specifications is described.

  20. A Review of the Bayesian Occupancy Filter

    PubMed Central

    Saval-Calvo, Marcelo; Medina-Valdés, Luis; Castillo-Secilla, José María; Cuenca-Asensi, Sergio; Martínez-Álvarez, Antonio; Villagrá, Jorge

    2017-01-01

    Autonomous vehicle systems are currently the object of intense research within scientific and industrial communities; however, many problems remain to be solved. One of the most critical aspects addressed in both autonomous driving and robotics is environment perception, since it consists of the ability to understand the surroundings of the vehicle to estimate risks and make decisions on future movements. In recent years, the Bayesian Occupancy Filter (BOF) method has been developed to evaluate occupancy by tessellation of the environment. A review of the BOF and its variants is presented in this paper. Moreover, we propose a detailed taxonomy where the BOF is decomposed into five progressive layers, from the level closest to the sensor to the highest abstract level of risk assessment. In addition, we present a study of implemented use cases to provide a practical understanding on the main uses of the BOF and its taxonomy. PMID:28208638

  1. Weighted least squares phase unwrapping based on the wavelet transform

    NASA Astrophysics Data System (ADS)

    Chen, Jiafeng; Chen, Haiqin; Yang, Zhengang; Ren, Haixia

    2007-01-01

    The weighted least squares phase unwrapping algorithm is a robust and accurate method for solving the phase unwrapping problem. It usually leads to a large sparse linear equation system, which is commonly solved by Gauss-Seidel relaxation. That approach is not practical, however, due to its extremely slow convergence. The multigrid method improves the convergence rate efficiently, but it requires an additional weight restriction operator which is very complicated. For this reason, a multiresolution analysis method based on the wavelet transform is proposed. By applying the wavelet transform, the original system is decomposed into its coarse and fine resolution levels, and an equivalent equation system with a better convergence condition is obtained. Fast convergence in the separate coarse resolution levels speeds up the overall system convergence rate. The simulated experiment shows that the proposed method converges faster and provides better results than the multigrid method.
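
    The linear system that both multigrid and the wavelet approach accelerate reduces, in the unweighted case, to a discrete Poisson equation driven by wrapped phase differences. Below is a minimal sketch of the slow baseline relaxation that motivates those accelerations, written as a vectorized Jacobi-style variant of the Gauss-Seidel sweep mentioned in the abstract; the zero-padded boundary handling is a simplification.

    ```python
    import numpy as np

    def wrap(d):
        """Wrap values into (-pi, pi]."""
        return (d + np.pi) % (2 * np.pi) - np.pi

    def relax_unwrap(psi, iters=2000):
        """Unweighted least-squares phase unwrapping: solve the discrete
        Poisson equation lap(phi) = rho, where rho is the divergence of the
        wrapped gradients of the wrapped phase psi."""
        dx = wrap(np.diff(psi, axis=1)); dx = np.pad(dx, ((0, 0), (1, 1)))
        dy = wrap(np.diff(psi, axis=0)); dy = np.pad(dy, ((1, 1), (0, 0)))
        rho = np.diff(dx, axis=1) + np.diff(dy, axis=0)
        phi = np.zeros_like(psi)
        for _ in range(iters):   # slow convergence is what multigrid/wavelets fix
            # Jacobi sweep (an element-wise in-place loop would give Gauss-Seidel)
            phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1]
                                      + phi[1:-1, 2:] + phi[1:-1, :-2]
                                      - rho[1:-1, 1:-1])
        return phi
    ```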

  2. Correlated Noise: How it Breaks NMF, and What to Do About It.

    PubMed

    Plis, Sergey M; Potluru, Vamsi K; Lane, Terran; Calhoun, Vince D

    2011-01-12

    Non-negative matrix factorization (NMF) is a problem of decomposing multivariate data into a set of features and their corresponding activations. When applied to experimental data, NMF has to cope with noise, which is often highly correlated. We show that correlated noise can break the Donoho and Stodden separability conditions of a dataset and a regular NMF algorithm will fail to decompose it, even when given freedom to be able to represent the noise as a separate feature. To cope with this issue, we present an algorithm for NMF with a generalized least squares objective function (glsNMF) and derive multiplicative updates for the method together with proving their convergence. The new algorithm successfully recovers the true representation from the noisy data. Robust performance can make glsNMF a valuable tool for analyzing empirical data.

  3. Correlated Noise: How it Breaks NMF, and What to Do About It

    PubMed Central

    Plis, Sergey M.; Potluru, Vamsi K.; Lane, Terran; Calhoun, Vince D.

    2010-01-01

    Non-negative matrix factorization (NMF) is a problem of decomposing multivariate data into a set of features and their corresponding activations. When applied to experimental data, NMF has to cope with noise, which is often highly correlated. We show that correlated noise can break the Donoho and Stodden separability conditions of a dataset and a regular NMF algorithm will fail to decompose it, even when given freedom to be able to represent the noise as a separate feature. To cope with this issue, we present an algorithm for NMF with a generalized least squares objective function (glsNMF) and derive multiplicative updates for the method together with proving their convergence. The new algorithm successfully recovers the true representation from the noisy data. Robust performance can make glsNMF a valuable tool for analyzing empirical data. PMID:23750288
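
    As a point of reference for the glsNMF idea, here is a minimal sketch of standard multiplicative-update NMF (the Lee-Seung updates for the unweighted Frobenius objective). glsNMF generalizes this objective with a noise-covariance weighting; its specific update rules are given in the paper and are not reproduced here.

    ```python
    import numpy as np

    def nmf_multiplicative(X, k, iters=500, eps=1e-9, seed=0):
        """Standard NMF via Lee-Seung multiplicative updates minimizing
        ||X - W H||_F^2 (unweighted baseline; glsNMF replaces this with a
        generalized least squares objective that accounts for correlated noise)."""
        rng = np.random.default_rng(seed)
        n, m = X.shape
        W = rng.uniform(size=(n, k))
        H = rng.uniform(size=(k, m))
        for _ in range(iters):
            H *= (W.T @ X) / (W.T @ W @ H + eps)   # update activations
            W *= (X @ H.T) / (W @ H @ H.T + eps)   # update features
        return W, H
    ```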

  4. Finite element analysis of periodic transonic flow problems

    NASA Technical Reports Server (NTRS)

    Fix, G. J.

    1978-01-01

    Flow about an oscillating thin airfoil in a transonic stream was considered. It was assumed that the flow field can be decomposed into a mean flow plus a periodic perturbation. On the surface of the airfoil the usual Neumann conditions are imposed. Two computer programs were written, both using linear basis functions over triangles for the finite element space. The first program uses a banded Gaussian elimination solver to solve the matrix problem, while the second uses an iterative technique, namely SOR. The only results obtained are for an oscillating flat plate.

  5. Denoising, deconvolving, and decomposing photon observations. Derivation of the D3PO algorithm

    NASA Astrophysics Data System (ADS)

    Selig, Marco; Enßlin, Torsten A.

    2015-02-01

    The analysis of astronomical images is a non-trivial task. The D3PO algorithm addresses the inference problem of denoising, deconvolving, and decomposing photon observations. Its primary goal is the simultaneous but individual reconstruction of the diffuse and point-like photon flux given a single photon count image, where the fluxes are superimposed. In order to discriminate between these morphologically different signal components, a probabilistic algorithm is derived in the language of information field theory based on a hierarchical Bayesian parameter model. The signal inference exploits prior information on the spatial correlation structure of the diffuse component and the brightness distribution of the spatially uncorrelated point-like sources. A maximum a posteriori solution and a solution minimizing the Gibbs free energy of the inference problem using variational Bayesian methods are discussed. Since the derivation of the solution is not dependent on the underlying position space, the implementation of the D3PO algorithm uses the nifty package to ensure applicability to various spatial grids and at any resolution. The fidelity of the algorithm is validated by the analysis of simulated data, including a realistic high energy photon count image showing a 32 × 32 arcmin2 observation with a spatial resolution of 0.1 arcmin. In all tests the D3PO algorithm successfully denoised, deconvolved, and decomposed the data into a diffuse and a point-like signal estimate for the respective photon flux components. A copy of the code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/574/A74

  6. Molybdenum-based additives to mixed-metal oxides for use in hot gas cleanup sorbents for the catalytic decomposition of ammonia in coal gases

    DOEpatents

    Ayala, Raul E.

    1993-01-01

    This invention relates to additives to mixed-metal oxides that act simultaneously as sorbents and catalysts in cleanup systems for hot coal gases. Additives of this type generally act as a sorbent to remove sulfur from the coal gases while substantially simultaneously catalytically decomposing appreciable amounts of ammonia from the coal gases.

  7. CO2 enrichment and N addition increase nutrient loss from decomposing leaf litter in subtropical model forest ecosystems

    PubMed Central

    Liu, Juxiu; Fang, Xiong; Deng, Qi; Han, Tianfeng; Huang, Wenjuan; Li, Yiyong

    2015-01-01

    As atmospheric CO2 concentration increases, many experiments have been carried out to study effects of CO2 enrichment on litter decomposition and nutrient release. However, the result is still uncertain. Meanwhile, the impact of CO2 enrichment on nutrients other than N and P are far less studied. Using open-top chambers, we examined effects of elevated CO2 and N addition on leaf litter decomposition and nutrient release in subtropical model forest ecosystems. We found that both elevated CO2 and N addition increased nutrient (C, N, P, K, Ca, Mg and Zn) loss from the decomposing litter. The N, P, Ca and Zn loss was more than tripled in the chambers exposed to both elevated CO2 and N addition than those in the control chambers after 21 months of treatment. The stimulation of nutrient loss under elevated CO2 was associated with the increased soil moisture, the higher leaf litter quality and the greater soil acidity. Accelerated nutrient release under N addition was related to the higher leaf litter quality, the increased soil microbial biomass and the greater soil acidity. Our results imply that elevated CO2 and N addition will increase nutrient cycling in subtropical China under the future global change. PMID:25608664

  8. Flexible configuration-interaction shell-model many-body solver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Calvin W.; Ormand, W. Erich; McElvain, Kenneth S.

    BIGSTICK is a flexible, open-source configuration-interaction shell-model code for the many-fermion problem in a shell model (occupation representation) framework. BIGSTICK can generate energy spectra, static and transition one-body densities, and expectation values of scalar operators. Using the built-in Lanczos algorithm one can compute transition probability distributions and decompose wave functions into components defined by group theory.
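
    The Lanczos algorithm mentioned above reduces a large Hermitian matrix to a small tridiagonal one whose eigenvalues rapidly approximate the extremal spectrum. A minimal dense-matrix sketch of the recurrence follows; BIGSTICK itself works matrix-free on huge many-body spaces, so this toy version (without reorthogonalization) only illustrates the idea.

    ```python
    import numpy as np

    def lanczos(A, v0, k):
        """k-step Lanczos: build an orthonormal Krylov basis V and the
        tridiagonal matrix T whose eigenvalues (Ritz values) approximate
        A's extremal eigenvalues. A is dense symmetric here for illustration."""
        n = len(v0)
        V = np.zeros((n, k))
        alpha = np.zeros(k)
        beta = np.zeros(k - 1)
        v = v0 / np.linalg.norm(v0)
        V[:, 0] = v
        w = A @ v
        alpha[0] = v @ w
        w -= alpha[0] * v
        for j in range(1, k):
            beta[j - 1] = np.linalg.norm(w)       # breakdown if 0; ignored in this toy
            v = w / beta[j - 1]
            V[:, j] = v
            w = A @ v - beta[j - 1] * V[:, j - 1]
            alpha[j] = v @ w
            w -= alpha[j] * v
        T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
        return np.linalg.eigvalsh(T), V

    # Toy usage: lowest eigenvalue of a random symmetric matrix
    rng = np.random.default_rng(1)
    M = rng.normal(size=(200, 200)); M = (M + M.T) / 2
    ritz, _ = lanczos(M, rng.normal(size=200), 30)
    print(ritz[0])   # approximates the smallest eigenvalue of M
    ```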

  9. Improving engineering system design by formal decomposition, sensitivity analysis, and optimization

    NASA Technical Reports Server (NTRS)

    Sobieski, J.; Barthelemy, J. F. M.

    1985-01-01

    A method for use in the design of a complex engineering system by decomposing the problem into a set of smaller subproblems is presented. Coupling of the subproblems is preserved by means of the sensitivity derivatives of the subproblem solution to the inputs received from the system. The method allows for the division of work among many people and computers.

  10. Learning and diagnosing faults using neural networks

    NASA Technical Reports Server (NTRS)

    Whitehead, Bruce A.; Kiech, Earl L.; Ali, Moonis

    1990-01-01

    Neural networks have been employed for learning fault behavior from rocket engine simulator parameters and for diagnosing faults on the basis of the learned behavior. Two problems in applying neural networks to learning and diagnosing faults are (1) the complexity of the sensor data to fault mapping to be modeled by the neural network, which implies difficult and lengthy training procedures; and (2) the lack of sufficient training data to adequately represent the very large number of different types of faults which might occur. Methods are derived and tested in an architecture which addresses these two problems. First, the sensor data to fault mapping is decomposed into three simpler mappings which perform sensor data compression, hypothesis generation, and sensor fusion. Efficient training is performed for each mapping separately. Secondly, the neural network which performs sensor fusion is structured to detect new unknown faults for which training examples were not presented during training. These methods were tested on a task of fault diagnosis by employing rocket engine simulator data. Results indicate that the decomposed neural network architecture can be trained efficiently, can identify faults for which it has been trained, and can detect the occurrence of faults for which it has not been trained.

  11. A connectionist model for diagnostic problem solving

    NASA Technical Reports Server (NTRS)

    Peng, Yun; Reggia, James A.

    1989-01-01

    A competition-based connectionist model for solving diagnostic problems is described. The problems considered are computationally difficult in that (1) multiple disorders may occur simultaneously and (2) a global optimum in the space exponential to the total number of possible disorders is sought as a solution. The diagnostic problem is treated as a nonlinear optimization problem, and global optimization criteria are decomposed into local criteria governing node activation updating in the connectionist model. Nodes representing disorders compete with each other to account for each individual manifestation, yet complement each other to account for all manifestations through parallel node interactions. When equilibrium is reached, the network settles into a locally optimal state. Three randomly generated examples of diagnostic problems, each of which has 1024 cases, were tested, and the decomposition plus competition plus resettling approach yielded very high accuracy.

  12. Requirements Analysis and Modeling with Problem Frames and SysML: A Case Study

    NASA Astrophysics Data System (ADS)

    Colombo, Pietro; Khendek, Ferhat; Lavazza, Luigi

    Requirements analysis based on Problem Frames is getting an increasing attention in the academic community and has the potential to become of relevant interest also for industry. However the approach lacks an adequate notational support and methodological guidelines, and case studies that demonstrate its applicability to problems of realistic complexity are still rare. These weaknesses may hinder its adoption. This paper aims at contributing towards the elimination of these weaknesses. We report on an experience in analyzing and specifying the requirements of a controller for traffic lights of an intersection using Problem Frames in combination with SysML. The analysis was performed by decomposing the problem, addressing the identified sub-problems, and recomposing them while solving the identified interferences. The experience allowed us to identify certain guidelines for decomposition and re-composition patterns.

  13. An efficient variable projection formulation for separable nonlinear least squares problems.

    PubMed

    Gan, Min; Li, Han-Xiong

    2014-05-01

    We consider in this paper a class of nonlinear least squares problems in which the model can be represented as a linear combination of nonlinear functions. The variable projection algorithm projects the linear parameters out of the problem, leaving the nonlinear least squares problems involving only the nonlinear parameters. To implement the variable projection algorithm more efficiently, we propose a new variable projection functional based on matrix decomposition. The advantage of the proposed formulation is that the size of the decomposed matrix may be much smaller than those of previous ones. The Levenberg-Marquardt algorithm using finite difference method is then applied to minimize the new criterion. Numerical results show that the proposed approach achieves significant reduction in computing time.
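
    The variable projection idea is easy to state concretely: for a model y ≈ Φ(α)c, eliminate the linear coefficients c with a linear least squares solve inside the residual, and optimize only over the nonlinear parameters α. A minimal SciPy sketch for a sum of two exponentials; the model and parameter values are illustrative, and the paper's contribution (a more efficient functional based on matrix decomposition) is not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    t = np.linspace(0, 4, 200)
    y = 2.0 * np.exp(-1.3 * t) + 0.5 * np.exp(-0.4 * t)   # synthetic data

    def residual(alpha):
        """Variable projection residual: solve for the linear coefficients c
        by linear least squares, leaving a problem in the nonlinear decay
        rates alpha only."""
        Phi = np.exp(-np.outer(t, alpha))          # basis matrix Phi(alpha)
        c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return Phi @ c - y

    fit = least_squares(residual, x0=[1.0, 0.1])   # optimize nonlinear params only
    print(fit.x)                                   # recovered decay rates
    ```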

  14. Effects of Added Organic Matter and Water on Soil Carbon Sequestration in an Arid Region

    PubMed Central

    Tian, Yuan; Jiang, Lianhe; Zhao, Xuechun; Zhu, Linhai; Chen, Xi; Gao, Yong; Wang, Shaoming; Zheng, Yuanrun; Rimmington, Glyn M.

    2013-01-01

    It is generally predicted that global warming will stimulate primary production and lead to more carbon (C) inputs to soil. However, many studies have found that soil C does not necessarily increase with increased plant litter input. Precipitation has increased in arid central Asia, and is predicted to increase more, so we tested the effects of adding fresh organic matter (FOM) and water on soil C sequestration in an arid region in northwest China. The results suggested that added FOM quickly decomposed and had minor effects on the soil organic carbon (SOC) pool to a depth of 30 cm. Both FOM and water addition had significant effects on the soil microbial biomass. The soil microbial biomass increased with added FOM, reached a maximum, and then declined as the FOM decomposed. The FOM had a more significant stimulating effect on microbial biomass with water addition. Under the soil moisture ranges used in this experiment (21.0%–29.7%), FOM input was more important than water addition in the soil C mineralization process. We concluded that short-term FOM input into the belowground soil and water addition do not affect the SOC pool in shrubland in an arid region. PMID:23875022

  15. A Coral Reef Algorithm Based on Learning Automata for the Coverage Control Problem of Heterogeneous Directional Sensor Networks

    PubMed Central

    Li, Ming; Miao, Chunyan; Leung, Cyril

    2015-01-01

    Coverage control is one of the most fundamental issues in directional sensor networks. In this paper, the coverage optimization problem in a directional sensor network is formulated as a multi-objective optimization problem. It takes into account the coverage rate of the network, the number of working sensor nodes and the connectivity of the network. The coverage problem considered in this paper is characterized by the geographical irregularity of the sensed events and heterogeneity of the sensor nodes in terms of sensing radius, field of angle and communication radius. To solve this multi-objective problem, we introduce a learning automata-based coral reef algorithm for adaptive parameter selection and use a novel Tchebycheff decomposition method to decompose the multi-objective problem into a single-objective problem. Simulation results show the consistent superiority of the proposed algorithm over alternative approaches. PMID:26690162

  16. A Coral Reef Algorithm Based on Learning Automata for the Coverage Control Problem of Heterogeneous Directional Sensor Networks.

    PubMed

    Li, Ming; Miao, Chunyan; Leung, Cyril

    2015-12-04

    Coverage control is one of the most fundamental issues in directional sensor networks. In this paper, the coverage optimization problem in a directional sensor network is formulated as a multi-objective optimization problem. It takes into account the coverage rate of the network, the number of working sensor nodes and the connectivity of the network. The coverage problem considered in this paper is characterized by the geographical irregularity of the sensed events and heterogeneity of the sensor nodes in terms of sensing radius, field of angle and communication radius. To solve this multi-objective problem, we introduce a learning automata-based coral reef algorithm for adaptive parameter selection and use a novel Tchebycheff decomposition method to decompose the multi-objective problem into a single-objective problem. Simulation results show the consistent superiority of the proposed algorithm over alternative approaches.
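
    The Tchebycheff decomposition used in both versions of this work scalarizes the multi-objective problem: each subproblem minimizes g(x | λ, z*) = max_i λ_i |f_i(x) − z*_i| for a different weight vector λ, with z* the ideal point. A minimal sketch of that scalarization on a toy bi-objective problem (the objectives here are placeholders, not the coverage model):

    ```python
    import numpy as np

    def tchebycheff(fx, lam, z_star):
        """Tchebycheff scalarization: g = max_i lam_i * |f_i - z*_i|.
        Minimizing g for a spread of weight vectors turns a multi-objective
        problem into many single-objective subproblems (the MOEA/D idea)."""
        return np.max(lam * np.abs(np.asarray(fx) - z_star))

    # Toy bi-objective problem: f1(x) = x^2, f2(x) = (x - 2)^2, ideal point (0, 0)
    xs = np.linspace(-1, 3, 1001)
    z_star = np.zeros(2)
    for lam in [np.array([w, 1 - w]) for w in (0.1, 0.5, 0.9)]:
        g = [tchebycheff((x**2, (x - 2)**2), lam, z_star) for x in xs]
        print(lam, xs[int(np.argmin(g))])   # each weight picks a different Pareto point
    ```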

  17. Fungal decomposers of leaf litter from an invaded and native mountain forest of NW Argentina.

    PubMed

    Fernandez, Romina Daiana; Bulacio, Natalia; Álvarez, Analía; Pajot, Hipólito; Aragón, Roxana

    2017-09-01

    The impact of plant species invasions on the abundance, composition and activity of fungal decomposers of leaf litter is poorly understood. In this study, we isolated and compared the relative abundance of ligninocellulolytic fungi of leaf litter mixtures from a native forest and a forest invaded by Ligustrum lucidum in a lower mountain forest of Tucuman, Argentina. In addition, we evaluated the relationship between the relative abundance of ligninocellulolytic fungi and properties of the soil of both forest types. Finally, we identified lignin-degrading fungi and characterized their polyphenol oxidase activities. The relative abundance of ligninocellulolytic fungi was higher in leaf litter mixtures from the native forest. The abundance of cellulolytic fungi was negatively related with soil pH while the abundance of ligninolytic fungi was positively related with soil humidity. We identified fifteen genera of ligninolytic fungi; four strains were isolated from both forest types, six strains only from the invaded forest and five strains only from the native forest. The results found in this study suggest that L. lucidum invasion could alter the abundance and composition of fungal decomposers. Long-term studies that include an analysis of the nutritional quality of litter are needed, for a more complete overview of the influence of L. lucidum invasion on fungal decomposers and on leaf litter decomposition.

  18. Decomposition of Rotor Hopfield Neural Networks Using Complex Numbers.

    PubMed

    Kobayashi, Masaki

    2018-04-01

    A complex-valued Hopfield neural network (CHNN) is a multistate model of a Hopfield neural network. It has the disadvantage of low noise tolerance. Meanwhile, a symmetric CHNN (SCHNN) is a modification of a CHNN that improves noise tolerance. Furthermore, a rotor Hopfield neural network (RHNN) is an extension of a CHNN. It has twice the storage capacity of CHNNs and SCHNNs, and much better noise tolerance than CHNNs, although it requires twice as many connection parameters. In this brief, we investigate the relations between CHNNs, SCHNNs, and RHNNs; an RHNN is uniquely decomposed into a CHNN and an SCHNN. In addition, the Hebbian learning rule for RHNNs is decomposed into those for CHNNs and SCHNNs.

  19. Application of supercritical water to decompose brominated epoxy resin and environmental friendly recovery of metals from waste memory module.

    PubMed

    Li, Kuo; Xu, Zhenming

    2015-02-03

    Waste memory modules (WMMs), a particular kind of waste printed circuit board (WPCB), contain a high amount of brominated epoxy resin (BER), which may bring a series of environmental and health problems. On the other hand, metals like gold and copper are very valuable and important to recover from WMMs. In the present study, an effective and environmentally friendly method using supercritical water (SCW) to decompose BER and simultaneously recover metals from WMMs was developed, instead of hydrometallurgy or pyrometallurgy. Experiments were conducted under external-catalyst-free conditions with temperatures ranging from 350 to 550 °C, pressures from 25 to 40 MPa, and reaction times from 120 to 360 min in a semibatch-type reactor. The results showed that BER could be quickly and efficiently decomposed under SCW conditions, possibly through a free radical reaction mechanism. After the SCW treatments, the glass fibers and metal foils in the solid residue could be easily liberated and recovered. The metal recovery rate reached 99.80%. The optimal parameters were determined as 495 °C, 33 MPa, and 305 min on the basis of response surface methodology (RSM). This study provides an efficient and environmentally friendly approach for WMM recycling compared with electrolysis, pyrometallurgy, and hydrometallurgy.

  20. Grain refinement of a nickel and manganese free austenitic stainless steel produced by pressurized solution nitriding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohammadzadeh, Roghayeh, E-mail: r_mohammadzadeh@sut.ac.ir; Akbari, Alireza, E-mail: akbari@sut.ac.ir

    2014-07-01

    Prolonged exposure at high temperatures during solution nitriding induces grain coarsening, which deteriorates the mechanical properties of high nitrogen austenitic stainless steels. In this study, grain refinement of nickel and manganese free Fe–22.75Cr–2.42Mo–1.17N high nitrogen austenitic stainless steel plates was investigated via a two-stage heat treatment procedure. Initially, the coarse-grained austenitic stainless steel samples were subjected to isothermal heating at 700 °C to be decomposed into the ferrite + Cr{sub 2}N eutectoid structure, and then re-austenitized at 1200 °C followed by water quenching. Microstructure and hardness of the samples were characterized using X-ray diffraction, optical and scanning electron microscopy, and micro-hardness testing. The results showed that the as-solution-nitrided steel decomposes non-uniformly into colonies of ferrite and Cr{sub 2}N nitrides with strip-like morphology after isothermal heat treatment at 700 °C. Additionally, the complete dissolution of the Cr{sub 2}N precipitates located at the sample edges during re-austenitizing requires longer times than 1 h. In order to avoid this problem, an intermediate nitrogen homogenizing heat treatment cycle at 1200 °C for 10 h was applied before the grain refinement process. As a result, the initial austenite was uniformly decomposed during the first stage, and a fine-grained austenitic structure with an average grain size of about 20 μm was successfully obtained by re-austenitizing for 10 min. - Highlights: • Successful grain refinement of Fe–22.75Cr–2.42Mo–1.17N steel by heat treatment • Using the γ → α + Cr{sub 2}N reaction for grain refinement of a Ni and Mn free HNASS • Obtaining a single phase austenitic structure with average grain size of ∼ 20 μm • Incomplete dissolution of Cr{sub 2}N during re-austenitizing at 1200 °C for long times • Reducing re-austenitizing time by homogenizing treatment before grain refinement.

  1. Performance optimization of the power user electric energy data acquire system based on MOEA/D evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Ding, Zhongan; Gao, Chen; Yan, Shengteng; Yang, Canrong

    2017-10-01

    The power user electric energy data acquire system (PUEEDAS) is an important part of the smart grid. This paper builds a multi-objective optimization model for the performance of the PUEEDAS that combines comprehensive benefits and cost. The Chebyshev decomposition approach is used to decompose the multi-objective optimization problem, and a MOEA/D evolutionary algorithm is designed to solve it. The Pareto-optimal solution set of the multi-objective problem is analyzed and compared with monitored values to determine the direction for optimizing the performance of the PUEEDAS. Finally, an example is designed for specific analysis.

  2. Distributed Task Offloading in Heterogeneous Vehicular Crowd Sensing

    PubMed Central

    Liu, Yazhi; Wang, Wendong; Ma, Yuekun; Yang, Zhigang; Yu, Fuxing

    2016-01-01

    The ability of road vehicles to efficiently execute different sensing tasks varies because of the heterogeneity in their sensing ability and trajectories. Therefore, the data collection sensing task, which requires tempo-spatial sensing data, becomes a serious problem in vehicular sensing systems, particularly those with limited sensing capabilities. A utility-based sensing task decomposition and offloading algorithm is proposed in this paper. The utility function for a task executed by a certain vehicle is built according to the mobility traces and sensing interfaces of the vehicle, as well as the sensing data type and tempo-spatial coverage requirements of the sensing task. Then, the sensing tasks are decomposed and offloaded to neighboring vehicles according to the utilities of the neighboring vehicles to the decomposed sensing tasks. Real trace-driven simulation shows that the proposed task offloading is able to collect much more comprehensive and uniformly distributed sensing data than other algorithms. PMID:27428967

  3. American option pricing in Gauss-Markov interest rate models

    NASA Astrophysics Data System (ADS)

    Galluccio, Stefano

    1999-07-01

    In the context of Gaussian non-homogeneous interest-rate models, we study the problem of American bond option pricing. In particular, we show how to efficiently compute the exercise boundary in these models in order to decompose the price as a sum of a European option and an American premium. Generalizations to coupon-bearing bonds and jump-diffusion processes for the interest rates are also discussed.

  4. MAUD: An Interactive Computer Program for the Structuring, Decomposition, and Recomposition of Preferences between Multiattributed Alternatives. Final Report. Technical Report 543.

    ERIC Educational Resources Information Center

    Humphreys, Patrick; Wisudha, Ayleen

    As a demonstration of the application of heuristic devices to decision-theoretical techniques, an interactive computer program known as MAUD (Multiattribute Utility Decomposition) has been designed to support decision or choice problems that can be decomposed into component factors, or to act as a tool for investigating the microstructure of a…

  5. Plant Diversity Impacts Decomposition and Herbivory via Changes in Aboveground Arthropods

    PubMed Central

    Ebeling, Anne; Meyer, Sebastian T.; Abbas, Maike; Eisenhauer, Nico; Hillebrand, Helmut; Lange, Markus; Scherber, Christoph; Vogel, Anja; Weigelt, Alexandra; Weisser, Wolfgang W.

    2014-01-01

    Loss of plant diversity influences essential ecosystem processes such as aboveground productivity and can have cascading effects on the arthropod communities in adjacent trophic levels. However, few studies have examined how those changes in arthropod communities in turn affect the ecosystem processes that arthropods drive (e.g. pollination, bioturbation, predation, decomposition, herbivory). Including arthropod effects in predictions of the impact of plant diversity loss on such ecosystem processes is therefore an important but little studied piece of information. In a grassland biodiversity experiment, we addressed this gap by assessing aboveground decomposer and herbivore communities and linking their abundance and diversity to rates of decomposition and herbivory. Path analyses showed that increasing plant diversity led to higher abundance and diversity of decomposing arthropods through higher plant biomass. Higher species richness of decomposers, in turn, enhanced decomposition. Similarly, species-rich plant communities hosted a higher abundance and diversity of herbivores through elevated plant biomass and C:N ratio, leading to higher herbivory rates. Integrating trophic interactions into the study of biodiversity effects is required to understand the multiple pathways by which biodiversity affects ecosystem functioning. PMID:25226237

  6. Catalytic ignition of ionic liquids for propellant applications.

    PubMed

    Shamshina, Julia L; Smiglak, Marcin; Drab, David M; Parker, T Gannon; Dykes, H Waite H; Di Salvo, Roberto; Reich, Alton J; Rogers, Robin D

    2010-12-21

    In this proof of concept study, the ionic liquids, 2-hydroxyethylhydrazinium nitrate and 2-hydroxyethylhydrazinium dinitrate, ignited on contact with preheated Shell 405 (iridium supported on alumina) catalyst and energetically decomposed with no additional ignition source, suggesting a possible route to hydrazine replacements.

  7. An Optimization-based Framework to Learn Conditional Random Fields for Multi-label Classification

    PubMed Central

    Naeini, Mahdi Pakdaman; Batal, Iyad; Liu, Zitao; Hong, CharmGil; Hauskrecht, Milos

    2015-01-01

    This paper studies the multi-label classification problem, in which data instances are associated with multiple, possibly high-dimensional, label vectors. This problem is especially challenging when labels are dependent and one cannot decompose the problem into a set of independent classification problems. To address the problem and properly represent label dependencies we propose and study a pairwise conditional random field (CRF) model. We develop a new approach for learning the structure and parameters of the CRF from data. The approach maximizes the pseudo-likelihood of observed labels and relies on fast proximal gradient descent for learning the structure and limited-memory BFGS for learning the parameters of the model. Empirical results on several datasets show that our approach outperforms several multi-label classification baselines, including recently published state-of-the-art methods. PMID:25927015

  8. Large-scale structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1983-01-01

    Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single, rather than trade-off, design methodology and the incompatibility of large-scale optimization with single program, single computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop. Full analysis is then performed only periodically. Problem-dependent software can be removed from the generic code using a systems programming technique, and then embody the definitions of design variables, objective function and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.

  9. Cognitive mechanisms of insight: the role of heuristics and representational change in solving the eight-coin problem.

    PubMed

    Öllinger, Michael; Jones, Gary; Faber, Amory H; Knoblich, Günther

    2013-05-01

    The 8-coin insight problem requires the problem solver to move 2 coins so that each coin touches exactly 3 others. Ormerod, MacGregor, and Chronicle (2002) explained differences in task performance across different versions of the 8-coin problem using the availability of particular moves in a 2-dimensional search space. We explored 2 further explanations by developing 6 new versions of the 8-coin problem in order to investigate the influence of grouping and self-imposed constraints on solutions. The results identified 2 sources of problem difficulty: first, the necessity to overcome the constraint that a solution can be found in 2-dimensional space and, second, the necessity to decompose perceptual groupings. A detailed move analysis suggested that the selection of moves was driven by the established representation rather than the application of the appropriate heuristics. Both results support the assumptions of representational change theory (Ohlsson, 1992).

  10. Propellant Charge with Reduced Muzzle Smoke and Flash Characteristics.

    DTIC Science & Technology

    a conventional double base extruded propellant as well as more energetic nitramine composition and a microencapsulated oxamide coolant additive for...cooling the gases exiting the weapons barrel. In the preferred embodiment, the oxamide is encapsulated with a gelatin and the resulting microcapsules ...of this invention to provide a novel microencapsulated propellant additive which will pass through the propellant flame zone intact and decompose

  11. An Integrated Constraint Programming Approach to Scheduling Sports Leagues with Divisional and Round-robin Tournaments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlsson, Mats; Johansson, Mikael; Larson, Jeffrey

    Previous approaches for scheduling a league with round-robin and divisional tournaments involved decomposing the problem into easier subproblems. This approach, used to schedule the top Swedish handball league Elitserien, reduces the problem complexity but can result in suboptimal schedules. This paper presents an integrated constraint programming model that allows the scheduling to be performed in a single step. Particular attention is given to identifying implied and symmetry-breaking constraints that significantly reduce the computational complexity. Experimental evaluation shows that the integrated approach takes considerably less computational effort than the previous decomposition approach.

  12. Simulating and Synthesizing Substructures Using Neural Network and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Liu, Youhua; Kapania, Rakesh K.; VanLandingham, Hugh F.

    1997-01-01

    The feasibility of simulating and synthesizing substructures with computational neural network models is illustrated by investigating a statically indeterminate beam, using both 1-D and 2-D plane stress modelling. The beam can be decomposed into two cantilevers with free-end loads. By training neural networks to simulate the cantilever responses to different loads, the original beam problem can be solved as a match-up between two subsystems under compatible interface conditions. Genetic algorithms are successfully used to solve the match-up problem. Simulated results are in good agreement with the analytical and FEM solutions.

  13. Model reduction for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Williams, Trevor

    1992-01-01

    Model reduction is an important practical problem in the control of flexible spacecraft, and a considerable amount of work has been carried out on this topic. Two of the best known methods developed are modal truncation and internal balancing. Modal truncation is simple to implement but can give poor results when the structure possesses clustered natural frequencies, as often occurs in practice. Balancing avoids this problem but has the disadvantages of high computational cost, possible numerical sensitivity problems, and no physical interpretation for the resulting balanced 'modes'. The purpose of this work is to examine the performance of the subsystem balancing technique developed by the investigator when tested on a realistic flexible space structure, in this case a model of the Permanently Manned Configuration (PMC) of Space Station Freedom. This method retains the desirable properties of standard balancing while overcoming the three difficulties listed above. It achieves this by first decomposing the structural model into subsystems of highly correlated modes. Each subsystem is approximately uncorrelated from all others, so balancing them separately and then combining yields comparable results to balancing the entire structure directly. The operation count reduction obtained by the new technique is considerable: a factor of roughly r^2 if the system decomposes into r equal subsystems. Numerical accuracy is also improved significantly, as the matrices being operated on are of reduced dimension, and the modes of the reduced-order model now have a clear physical interpretation; they are, to first order, linear combinations of repeated-frequency modes.
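
    Standard internal balancing, which the subsystem technique accelerates, computes the controllability and observability Gramians and ranks states by Hankel singular values. A minimal dense sketch with SciPy follows; the subsystem method would apply this per block of correlated modes rather than to the whole model, and the toy system below is an assumption for illustration.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    def hankel_singular_values(A, B, C):
        """Solve the two Lyapunov equations for the Gramians and return the
        Hankel singular values; small values mark states safe to truncate."""
        Wc = solve_continuous_lyapunov(A, -B @ B.T)     # A Wc + Wc A^T = -B B^T
        Wo = solve_continuous_lyapunov(A.T, -C.T @ C)   # A^T Wo + Wo A = -C^T C
        eigs = np.linalg.eigvals(Wc @ Wo)
        return np.sort(np.sqrt(np.abs(eigs.real)))[::-1]

    # Toy stable two-state system
    A = np.array([[-1.0, 0.1], [0.0, -5.0]])
    B = np.array([[1.0], [1.0]])
    C = np.array([[1.0, 0.5]])
    print(hankel_singular_values(A, B, C))
    ```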

  14. A Three-way Decomposition of a Total Effect into Direct, Indirect, and Interactive Effects

    PubMed Central

    VanderWeele, Tyler J.

    2013-01-01

    Recent theory in causal inference has provided concepts for mediation analysis and effect decomposition that allow one to decompose a total effect into a direct and an indirect effect. Here, it is shown that what is often taken as an indirect effect can in fact be further decomposed into a “pure” indirect effect and a mediated interactive effect, thus yielding a three-way decomposition of a total effect (direct, indirect, and interactive). This three-way decomposition applies to difference scales and also to additive ratio scales and additive hazard scales. Assumptions needed for the identification of each of these three effects are discussed and simple formulae are given for each when regression models allowing for interaction are used. The three-way decomposition is illustrated by examples from genetic and perinatal epidemiology, and discussion is given to what is gained over the traditional two-way decomposition into simply a direct and an indirect effect. PMID:23354283
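
    The counterfactual algebra behind the three-way split can be written compactly. With Y_{a,m} the outcome under exposure a and mediator m, and M_a the mediator under exposure a, the following identity is one standard way to restate the decomposition (an illustrative restatement in common notation, not an equation quoted from the paper; each term telescopes so the right-hand side sums exactly to the total effect):

    ```latex
    \underbrace{Y_{1,M_1}-Y_{0,M_0}}_{\text{total effect}}
      = \underbrace{\left(Y_{1,M_0}-Y_{0,M_0}\right)}_{\text{direct effect}}
      + \underbrace{\left(Y_{0,M_1}-Y_{0,M_0}\right)}_{\text{pure indirect effect}}
      + \underbrace{\left(Y_{1,M_1}-Y_{1,M_0}-Y_{0,M_1}+Y_{0,M_0}\right)}_{\text{mediated interactive effect}}
    ```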

  15. Decomposition pathways of polytetrafluoroethylene by co-grinding with strontium/calcium oxides.

    PubMed

    Qu, Jun; He, Xiaoman; Zhang, Qiwu; Liu, Xinzhong; Saito, Fumio

    2017-06-01

    Waste polytetrafluoroethylene (PTFE) could be easily decomposed by co-grinding with an inorganic additive such as strontium oxide (SrO), strontium peroxide (SrO2) or calcium oxide (CaO) in a planetary ball mill, in which the fluorine was transformed into nontoxic inorganic fluoride salts such as strontium fluoride (SrF2) or calcium fluoride (CaF2). Depending on the kind of additive as well as the molar ratio added, however, the reaction mechanism of the decomposition was found to change, with different compositions of carbon compounds formed. CO gas, a mixture of strontium carbonate (SrCO3) and carbon, and only SrCO3 were obtained as reaction products when equimolar SrO, excess SrO and excess SrO2 relative to the monomer unit CF2 of PTFE were used, respectively. An excess amount of CaO was needed to effectively decompose PTFE because of its lower reactivity compared with the strontium oxides, but it promises practical applications due to its low cost.
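
    The product sets reported above are consistent with simple balanced stoichiometries per CF2 monomer unit. The equations below are inferred by mass balance from the stated reactants and products, not quoted from the paper, and the actual mechanism may differ:

    ```latex
    \mathrm{CF_2} + \mathrm{SrO} \rightarrow \mathrm{SrF_2} + \mathrm{CO}
      \quad\text{(equimolar SrO: CO gas evolved)}
    2\,\mathrm{CF_2} + 3\,\mathrm{SrO} \rightarrow 2\,\mathrm{SrF_2} + \mathrm{SrCO_3} + \mathrm{C}
      \quad\text{(excess SrO: carbonate plus residual carbon)}
    ```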

  16. A Greener Arctic: Vascular Plant Litter Input in Subarctic Peat Bogs Changes Soil Invertebrate Diets and Decomposition Patterns

    NASA Astrophysics Data System (ADS)

    Krab, E. J.; Berg, M. P.; Aerts, R.; van Logtestijn, R. S. P.; Cornelissen, H. H. C.

    2014-12-01

    Climate-change-induced trends towards shrub dominance in subarctic, moss-dominated peatlands will most likely have large effects on soil carbon (C) dynamics through an input of more easily decomposable litter. The mechanisms by which this increase in vascular litter input interacts with the abundance and diet choice of the decomposer community to alter C-processing have, however, not yet been unraveled. We used a novel 13C tracer approach to link invertebrate (Collembola) species composition, abundance and species-specific feeding behavior to C-processing of vascular and peat moss litters. We incubated different litter mixtures, 100% Sphagnum moss litter, 100% Betula leaf litter, and a 50/50 mixture of both, in mesocosms for 406 days. We revealed the transfer of C from the litters to the soil invertebrate species by 13C labeling of each of the litter types and assessed the 13C signatures of the invertebrates. Collembola species composition differed significantly between Sphagnum and Betula litter. Within the 'single type litter' mesocosms, Collembola species showed different 13C signatures, implying species-specific differences in diet choice. Surprisingly, the species composition and Collembola abundance changed relatively little as a consequence of Betula input to a Sphagnum based system. Their diet choice, however, changed drastically; species-specific differences in diet choice disappeared and approximately 67% of the food ingested by all Collembola originated from Betula litter. Furthermore, litter decomposition patterns corresponded to these findings; mass loss of Betula increased from 16.1% to 26.2% when decomposing in combination with Sphagnum, while Sphagnum decomposed even more slowly in combination with Betula litter (1.9%) than alone (4.7%). This study is the first to empirically show that collective diet shifts of the peatland decomposer community from mosses towards vascular plant litter may drive altered decomposition patterns. In addition, we showed that although species-specific differences in Collembola feeding behavior appear to exist, species are very plastic in their diet. This implies that changes in C turnover rates with vegetation shifts might well be due to diet shifts of the present decomposer community rather than to changes in species composition.

  17. Fluid-mechanic/thermal interaction of a molten material and a decomposing solid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, D.W.; Lee, D.O.

    1976-12-01

    Bench-scale experiments of a molten material in contact with a decomposing solid were conducted to gain insight into the expected interaction of a hot, molten reactor core with a concrete base. The results indicate that either of two regimes can occur: violent agitation and splattering of the melt, or a very quiescent settling of the melt when placed in contact with the solid. The two regimes appear to be governed by the interface temperature condition. A conduction heat transfer model predicts the critical interface temperature with reasonable accuracy. In addition, a film thermal resistance model correlates well with the data in predicting the time for a solid skin to form on the molten material.

  18. Relativistic Causality and Quasi-Orthomodular Algebras

    NASA Astrophysics Data System (ADS)

    Nobili, Renato

    2006-05-01

    The concept of fractionability or decomposability in parts of a physical system has its mathematical counterpart in the lattice-theoretic concept of orthomodularity. Systems with a finite number of degrees of freedom can be decomposed in different ways, corresponding to different groupings of the degrees of freedom. The orthomodular structure of these simple systems is trivially manifest. The problem then arises as to whether the same property is shared by physical systems with an infinite number of degrees of freedom, in particular by quantum relativistic ones. The latter case was approached several years ago by Haag and Schroer (1962; Haag, 1992), who noted that the causally complete sets of Minkowski spacetime form an orthomodular lattice and posed the question of whether the subalgebras of local observables, with topological supports on such subsets, themselves form a corresponding orthomodular lattice. Were it so, the way would be paved to interpreting spacetime as an intrinsic property of a local quantum field algebra. Surprisingly enough, however, the hoped-for property does not hold for local algebras of free fields with superselection rules. The possibility instead seems open if the local currents that govern the superselection rules are driven by gauge fields. Thus, in the framework of local quantum physics, the request for algebraic orthomodularity seems to imply physical interactions! Despite its charm, however, such a request appears plagued by ambiguities and critical issues that make it an ill-posed problem. The proposers themselves, indeed, concluded that the orthomodular correspondence hypothesis is too strong to have a chance of being practicable. Thus, the idea was neither taken seriously by its proposers nor investigated further by others to a reasonable degree of clarification. This paper is an attempt to reformulate and properly pose the problem. It will be shown that the idea is viable provided that the algebra of local observables: (1) is considered over the whole range of its irreducible representations; (2) is widened with the addition of the elements of a suitable intertwining group of automorphisms; and (3) the orthomodular correspondence requirement is modified to an extent sufficient to impart a natural topological structure to the intertwined algebra of observables so obtained. A novel scenario then emerges in which local quantum physics appears to provide a general framework for non-perturbative quantum field dynamics.

  19. Litter type affects the activity of aerobic decomposers in a boreal peatland more than site nutrient and water level regimes

    NASA Astrophysics Data System (ADS)

    Straková, P.; Niemi, R. M.; Freeman, C.; Peltoniemi, K.; Toberman, H.; Heiskanen, I.; Fritze, H.; Laiho, R.

    2011-02-01

    Peatlands are carbon (C) storage ecosystems sustained by a high water level (WL). High WL creates anoxic conditions that suppress the activity of aerobic decomposers and provide conditions for peat accumulation. Peatland function can be dramatically affected by WL drawdown caused by land-use and/or climate change. Aerobic decomposers are directly affected by WL drawdown through environmental factors such as increased oxygenation and nutrient availability. Additionally, they are indirectly affected via changes in plant community composition and litter quality. We studied the relative importance of direct and indirect effects of WL drawdown on aerobic decomposer activity in plant litter. We did this by profiling 11 extracellular enzymes involved in the mineralization of organic C, nitrogen, phosphorus and sulphur. Our study sites represented a three-stage chronosequence from pristine (undrained) to short-term (years) and long-term (decades) WL drawdown conditions under two nutrient regimes. The litter types included reflected the prevalent vegetation, i.e., Sphagnum mosses, graminoids, shrubs and trees. WL drawdown had a direct and positive effect on microbial activity. Enzyme allocation shifted towards C acquisition, which caused an increase in the rate of decomposition. However, litter type overruled the direct effects of WL drawdown and was the main factor shaping microbial activity patterns. Our results imply that changes in plant community composition in response to persistent WL drawdown will strongly affect the C dynamics of peatlands.

  20. Group Decision Support System to Aid the Process of Design and Maintenance of Large Scale Systems

    DTIC Science & Technology

    1992-03-23

    from a fuzzy set of user requirements. The overall objective of the project is to develop a system combining the characteristics of a compact computer ... AHP) for hierarchical prioritization. 4) Individual Evaluation and Selection of Alternatives - Allows the decision maker to individually evaluate ... its concept of outranking relations. The AHP method supports complex decision problems by successively decomposing and synthesizing various elements

  1. High-quality compressive ghost imaging

    NASA Astrophysics Data System (ADS)

    Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun

    2018-04-01

    We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and a guided filter, which effectively reduces undersampling noise and improves resolution. In our scheme, the original object is reconstructed by decomposing the compressive reconstruction process into separate regularization and denoising steps instead of solving a single minimization problem. The simulation and experimental results show that our method obtains high ghost imaging quality in terms of PSNR and visual observation.
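
    As a rough illustration of the decomposition idea in this record, the sketch below alternates a Landweber (data-fidelity) step with a separate denoising step rather than solving one combined minimization. It is a minimal sketch under our own assumptions: a Gaussian filter stands in for the guided filter, and the measurement matrix, object size, and iteration count are hypothetical.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter  # stand-in for the guided filter

    def decomposed_recovery(A, y, side, n_iter=100):
        """Alternate a projected Landweber step with a denoising step."""
        tau = 1.0 / np.linalg.norm(A, 2) ** 2   # step size keeping the iteration stable
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = x + tau * A.T @ (y - A @ x)     # Landweber update (data fidelity)
            x = np.clip(x, 0.0, None)           # projection: intensities are nonnegative
            x = gaussian_filter(x.reshape(side, side), sigma=1.0).ravel()  # denoising
        return x.reshape(side, side)

    # Toy usage: a 32x32 object sampled by 400 random speckle patterns (compressive)
    rng = np.random.default_rng(0)
    obj = np.zeros((32, 32)); obj[10:20, 12:22] = 1.0
    A = rng.standard_normal((400, 32 * 32))
    rec = decomposed_recovery(A, A @ obj.ravel(), side=32)
    ```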

  2. Processing method for superconducting ceramics

    DOEpatents

    Bloom, Ira D.; Poeppel, Roger B.; Flandermeyer, Brian K.

    1993-01-01

    A process for preparing a superconducting ceramic, particularly YBa₂Cu₃O₇₋δ, where δ is on the order of 0.1-0.4, is carried out using a polymeric binder which decomposes below its ignition point to reduce carbon residue between the grains of the sintered ceramic, and a nonhydroxylic organic solvent to limit the problems caused by water or certain alcohols in the ceramic composition.

  3. Processing method for superconducting ceramics

    DOEpatents

    Bloom, Ira D.; Poeppel, Roger B.; Flandermeyer, Brian K.

    1993-02-02

    A process for preparing a superconducting ceramic, particularly YBa₂Cu₃O₇₋δ, where δ is on the order of 0.1-0.4, is carried out using a polymeric binder which decomposes below its ignition point to reduce carbon residue between the grains of the sintered ceramic, and a nonhydroxylic organic solvent to limit the problems caused by water or certain alcohols in the ceramic composition.

  4. A Sparsity-Promoted Decomposition for Compressed Fault Diagnosis of Roller Bearings

    PubMed Central

    Wang, Huaqing; Ke, Yanliang; Song, Liuyang; Tang, Gang; Chen, Peng

    2016-01-01

    The traditional approaches for condition monitoring of roller bearings are almost always achieved under Shannon sampling theorem conditions, leading to a big-data problem. The compressed sensing (CS) theory provides a new solution to the big-data problem. However, the vibration signals are insufficiently sparse and it is difficult to achieve sparsity using the conventional techniques, which impedes the application of CS theory. Therefore, it is of great significance to promote the sparsity when applying the CS theory to fault diagnosis of roller bearings. To increase the sparsity of vibration signals, a sparsity-promoted method called the tunable Q-factor wavelet transform based on decomposing the analyzed signals into transient impact components and high oscillation components is utilized in this work. The former become sparser than the raw signals with noise eliminated, whereas the latter include noise. Thus, the decomposed transient impact components replace the original signals for analysis. The CS theory is applied to extract the fault features without complete reconstruction, which means that the reconstruction can be completed when the components with interested frequencies are detected and the fault diagnosis can be achieved during the reconstruction procedure. The application cases prove that the CS theory assisted by the tunable Q-factor wavelet transform can successfully extract the fault features from the compressed samples. PMID:27657063

  5. MO-FG-204-06: A New Algorithm for Gold Nano-Particle Concentration Identification in Dual Energy CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, L; Shen, C; Ng, M

    Purpose: Gold nano-particles (GNPs) have recently attracted much attention due to their potential as an imaging contrast agent and radiotherapy sensitiser. Imaging GNPs at low concentration is a challenging problem. We propose a new algorithm to improve the identification of GNPs based on dual energy CT (DECT). Methods: We consider three base materials: water, bone, and gold. Determining three density images from two images in DECT is an under-determined problem. We propose to solve this problem by exploring image domain sparsity via an optimization approach. The objective function contains four terms. A data-fidelity term ensures the fidelity between the identified material densities and the DECT images, while the other three terms enforce sparsity in the gradient domain of the three images corresponding to the densities of the base materials by using total variation (TV) regularization. A primal-dual algorithm is applied to solve the proposed optimization problem. We have performed simulation studies to test this model. Results: Our digital phantom in the tests contains water and bone regions and gold inserts of different sizes and densities. The gold inserts contain a mixed material consisting of water at 1 g/cm3 and gold at a certain density. At a low gold density of 0.0008 g/cm3, the insert is hardly visible in DECT images, especially for inserts of small size. Our algorithm is able to decompose the DECT data into three density images, and the gold inserts at low density can be clearly visualized in the density image. Conclusion: We have developed a new algorithm to decompose DECT images into three material density images and, in particular, to retrieve the density of gold. Numerical studies showed promising results.
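
    One plausible form of the four-term objective described above is written out below; the notation is ours, not the authors'. Here y^{(L)}, y^{(H)} are the two DECT images, x_w, x_b, x_g the density images of water, bone, and gold, and the mu coefficients the effective attenuation of each basis material in each energy channel.

    ```latex
    \min_{x_w,\,x_b,\,x_g}\;
    \frac{1}{2}\sum_{e\in\{L,H\}}
    \Bigl\| \mu_w^{(e)} x_w + \mu_b^{(e)} x_b + \mu_g^{(e)} x_g - y^{(e)} \Bigr\|_2^2
    \;+\;\lambda_w\,\mathrm{TV}(x_w)+\lambda_b\,\mathrm{TV}(x_b)+\lambda_g\,\mathrm{TV}(x_g)
    ```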

  6. Proposals for Solutions to Problems Related to the Use of F-34 (SFP) and High Sulphur Diesel on Ground Equipment Using Advanced Reduction Emission Technologies (Propositions de solutions aux problemes lies a l’utilisation de F-34 (SFP) et de diesel a haute teneur en soufre pour le materiel terrestre disposant de technologies avancees de reduction des emissions)

    DTIC Science & Technology

    2008-09-01

    In a two-stage process the urea decomposes to ammonia (NH3) which then reacts with the nitrogen oxides (NOx) and leads to formation of nitrogen and ... Sulphur Fuel (HSF) is a potential problem to NATO forces when vehicles and equipment are fitted with advanced emission reduction devices that require Low ... worldwide available, standard fuel (F-34) and equipment capable of using such high sulphur fuels (HSF). Recommendations • Future equipment fitted with

  7. Enabling fast, stable and accurate peridynamic computations using multi-time-step integration

    DOE PAGES

    Lindsay, P.; Parks, M. L.; Prakash, A.

    2016-04-13

    Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.

  8. Exploiting Quantum Resonance to Solve Combinatorial Problems

    NASA Technical Reports Server (NTRS)

    Zak, Michail; Fijany, Amir

    2006-01-01

    Quantum resonance would be exploited in a proposed quantum-computing approach to the solution of combinatorial optimization problems. In quantum computing in general, one takes advantage of the fact that an algorithm cannot be decoupled from the physical effects available to implement it. Prior approaches to quantum computing have involved exploitation of only a subset of known quantum physical effects, notably including parallelism and entanglement, but not including resonance. In the proposed approach, one would utilize the combinatorial properties of tensor-product decomposability of unitary evolution of many-particle quantum systems for physically simulating solutions to NP-complete problems (a class of problems that are intractable with respect to classical methods of computation). In this approach, reinforcement and selection of a desired solution would be executed by means of quantum resonance. Classes of NP-complete problems that are important in practice and could be solved by the proposed approach include planning, scheduling, search, and optimal design.

  9. Removal of methylmercury and tributyltin (TBT) using marine microorganisms.

    PubMed

    Lee, Seong Eon; Chung, Jin Wook; Won, Ho Shik; Lee, Dong Sup; Lee, Yong-Woo

    2012-02-01

    Two marine species of bacteria were isolated that are capable of degrading organometallic contaminants: Pseudomonas balearica, which decomposes methylmercury; and Shewanella putrefaciens, which decomposes tributyltin. P. balearica decomposed 97% of methylmercury (20.0 μg/L) into inorganic mercury after 3 h, while S. putrefaciens decomposed 88% of tributyltin (55.3 μg Sn/L) in real wastewater after 36 h. These data indicate that the two bacteria efficiently decomposed the targeted substances and may be applied to real wastewater.

  10. A methodology to find the elementary landscape decomposition of combinatorial optimization problems.

    PubMed

    Chicano, Francisco; Whitley, L Darrell; Alba, Enrique

    2011-01-01

    A small number of combinatorial optimization problems have search spaces that correspond to elementary landscapes, where the objective function f is an eigenfunction of the Laplacian that describes the neighborhood structure of the search space. Many problems are not elementary; however, the objective function of a combinatorial optimization problem can always be expressed as a superposition of multiple elementary landscapes if the underlying neighborhood used is symmetric. This paper presents theoretical results that provide the foundation for algebraic methods that can be used to decompose the objective function of an arbitrary combinatorial optimization problem into a sum of subfunctions, where each subfunction is an elementary landscape. Many steps of this process can be automated, and indeed a software tool could be developed that assists the researcher in finding a landscape decomposition. This methodology is then used to show that the subset sum problem is a superposition of two elementary landscapes, and to show that the quadratic assignment problem is a superposition of three elementary landscapes.
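
    The defining property of an elementary landscape can be checked numerically on a small instance: f is elementary when L f = λ (f − f̄), where L is the Laplacian of the neighborhood graph and f̄ the mean objective value. A minimal sketch, using OneMax under the single-bit-flip neighborhood as an assumed example (this specific instance is ours, not from the paper):

    ```python
    import itertools
    import numpy as np

    n = 4
    states = list(itertools.product([0, 1], repeat=n))
    index = {s: k for k, s in enumerate(states)}
    f = np.array([sum(s) for s in states], dtype=float)   # OneMax objective

    # Adjacency matrix of the single-bit-flip neighborhood (an n-cube, symmetric)
    A = np.zeros((2 ** n, 2 ** n))
    for s in states:
        for i in range(n):
            t = list(s); t[i] ^= 1
            A[index[s], index[tuple(t)]] = 1.0
    L = np.diag(A.sum(axis=1)) - A                        # graph Laplacian

    # f is elementary iff L f = lam * (f - mean(f)) for a single eigenvalue lam
    d = f - f.mean()
    k = int(np.argmax(np.abs(d)))
    lam = (L @ f)[k] / d[k]
    print(np.allclose(L @ f, lam * d), lam)               # True 2.0
    ```

    OneMax turns out to be elementary with eigenvalue 2; a non-elementary objective would fail this check and would instead split into a superposition of several such components, as the paper's methodology formalizes.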

  11. Addition of biochar to simulated golf greens promotes creeping bentgrass growth

    USDA-ARS?s Scientific Manuscript database

    Organic amendments such as peat moss and various composts are typically added to sand-based root zones such as golf greens to increase water and nutrient retention. However, these attributes are generally lost as these amendments decompose in a few years. Biochar is a high carbon, extremely porous ...

  12. Decomposed bodies--still an unrewarding autopsy?

    PubMed

    Ambade, Vipul Namdeorao; Keoliya, Ajay Narmadaprasad; Deokar, Ravindra Baliram; Dixit, Pradip Gangadhar

    2011-04-01

    One of the classic mistakes in forensic pathology is to regard the autopsy of a decomposed body as unrewarding. The present study was undertaken with a view to debunking this myth and to determine the characteristic pattern in decomposed bodies brought for medicolegal autopsy. Of a total of 4997 medicolegal deaths reported at an Apex Medical Centre, Yeotmal, a rural district of Maharashtra, over a seven-year study period, only 180 cases were decomposed, representing 3.6% of the total medicolegal autopsies, a rate of 1.5 decomposed bodies/100,000 population per year. Male (79.4%) predominance was seen in decomposed bodies, with a male to female ratio of 3.9:1. Most of the victims were between the ages of 31 and 60 years, with a peak at 31-40 years (26.7%) followed by 41-50 years (19.4%). Age above 60 years was found in 8.6% of cases. Married individuals (64.4%) outnumbered unmarried ones. Most of the decomposed bodies were complete (83.9%) and identified (75%), but when the body was incomplete/mutilated or skeletonised, 57.7% of the deceased remained unidentified. The cause and manner of death were ascertained in 85.6% and 81.1% of cases, respectively. Drowning (35.6%) was the commonest cause of death in decomposed bodies, with suicide (52.8%) the commonest manner of death. Decomposed bodies were most commonly recovered from open places (43.9%), followed by water sources (43.3%) and enclosed places (12.2%). Most were retrieved from wells (49 cases), followed by barren land (27 cases) and forest (17 cases). 83.8% of the decomposed bodies were recovered before 72 h; only in 16.2% of cases was the time since death more than 72 h, these mostly recovered from barren land, forest, and rivers. Most decomposed bodies were found in the summer season (42.8%), with a peak in the month of May. Despite technical difficulties in handling the body and artefactual alteration of the tissue, a decomposed body may still reveal the cause and manner of death in a significant number of cases.

  13. Exotic superconductivity with enhanced energy scales in materials with three band crossings

    NASA Astrophysics Data System (ADS)

    Lin, Yu-Ping; Nandkishore, Rahul M.

    2018-04-01

    Three band crossings can arise in three-dimensional quantum materials with certain space group symmetries. The low energy Hamiltonian supports spin one fermions and a flat band. We study the pairing problem in this setting. We write down a minimal BCS Hamiltonian and decompose it into spin-orbit coupled irreducible pairing channels. We then solve the resulting gap equations in channels with zero total angular momentum. We find that in the s-wave spin singlet channel (and also in an unusual d-wave 'spin quintet' channel), superconductivity is enormously enhanced, with a possibility for the critical temperature to be linear in interaction strength. Meanwhile, in the p-wave spin triplet channel, the superconductivity exhibits features of conventional BCS theory due to the absence of flat band pairing. Three band crossings thus represent an exciting new platform for realizing exotic superconducting states with enhanced energy scales. We also discuss the effects of doping, nonzero temperature, and of retaining additional terms in the k·p expansion of the Hamiltonian.

  14. Laboratory-scale bioremediation of oil-contaminated soil of Kuwait with soil amendment materials.

    PubMed

    Cho, B H; Chino, H; Tsuji, H; Kunito, T; Nagaoka, K; Otsuka, S; Yamashita, K; Matsumoto, S; Oyaizu, H

    1997-10-01

    A huge amount of oil-contaminated soil remains unremediated in the Kuwait desert. The contaminating oil has the potential to pollute groundwater and to affect the health of people in the neighborhood. In this study, laboratory-scale bioremediation experiments were carried out. Hyponex (Hyponex, Inc.) and bark manure were added as basic nutrients for microorganisms, and twelve kinds of materials (baked diatomite, microporous glass, coconut charcoal, an oil-decomposing bacterial mixture (Formula X from Oppenheimer, Inc.), and eight kinds of surfactants) were applied to accelerate the biodegradation of oil hydrocarbons. 15% to 33% of the contaminated oil was decomposed during 43 weeks' incubation. Among the materials tested, coconut charcoal enhanced the biodegradation. In contrast, the addition of the oil-decomposing bacterial mixture impeded the biodegradation. The effects of the other materials were very slight. The toxicity of the biodegraded compounds was estimated by the Ames test and the tea pollen tube growth test. Both the hydrophobic (dichloromethane extracts) and hydrophilic (methanol extracts) fractions showed very slight toxicity in the Ames test. In the tea pollen tube growth test, the hydrophobic fraction was not toxic and enhanced the growth of pollen tubes.

  15. Quantitative Diagnosis of Continuous-Valued, Steady-State Systems

    NASA Technical Reports Server (NTRS)

    Rouquette, N.

    1995-01-01

    Quantitative diagnosis involves numerically estimating the values of unobservable parameters that best explain the observed parameter values. We consider quantitative diagnosis for continuous, lumped-parameter, steady-state physical systems because such models are easy to construct and the diagnosis problem is considerably simpler than that for corresponding dynamic models. To further tackle the difficulties of numerically inverting a simulation model to compute a diagnosis, we propose to decompose a physical system model in terms of feedback loops. This decomposition reduces the dimension of the problem and consequently decreases the diagnosis search space. We illustrate this approach on a model of a thermal control system studied in earlier research.

  16. Cooperative Coevolution with Formula-Based Variable Grouping for Large-Scale Global Optimization.

    PubMed

    Wang, Yuping; Liu, Haiyan; Wei, Fei; Zong, Tingting; Li, Xiaodong

    2017-08-09

    For a large-scale global optimization (LSGO) problem, divide-and-conquer is usually considered an effective strategy to decompose the problem into smaller subproblems, each of which can then be solved individually. Among these decomposition methods, variable grouping has been shown to be promising in recent years. Existing variable grouping methods usually assume the problem to be black-box (i.e., assuming that an analytical model of the objective function is unknown), and they attempt to learn an appropriate variable grouping that would allow for a better decomposition of the problem. In such cases, these variable grouping methods do not make direct use of the formula of the objective function. However, it can be argued that many real-world problems are white-box problems, that is, the formulas of objective functions are often known a priori. These formulas of the objective functions provide rich information which can then be used to design an effective variable grouping method. In this article, a formula-based grouping strategy (FBG) for white-box problems is first proposed. It groups variables directly via the formula of an objective function, which usually consists of a finite number of operations (i.e., the four arithmetic operations "+", "−", "×", "÷" and composite operations of basic elementary functions). In FBG, the operations are classified into two classes: one resulting in nonseparable variables, and the other resulting in separable variables. In FBG, variables can be automatically grouped into a suitable number of non-interacting subcomponents, with variables in each subcomponent being interdependent. FBG can easily be applied to any white-box problem and can be integrated into a cooperative coevolution framework. Based on FBG, a novel cooperative coevolution algorithm with formula-based variable grouping (so-called CCF) is proposed in this article for decomposing a large-scale white-box problem into several smaller subproblems and optimizing them separately. To further enhance the efficiency of CCF, a new local search scheme is designed to improve the solution quality. To verify the efficiency of CCF, experiments are conducted on the standard LSGO benchmark suites of CEC'2008, CEC'2010, CEC'2013, and a real-world problem. Our results suggest that the performance of CCF is very competitive when compared with those of the state-of-the-art LSGO algorithms.
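
    A minimal sketch of the white-box idea: when the formula is known, top-level additive terms that share no variables can be separated directly. The objective below and the union-merge loop are our own simplified illustration, not the authors' FBG implementation, which additionally classifies the operations inside each term.

    ```python
    import sympy as sp

    x = sp.symbols('x0:6')
    # A white-box objective whose formula is known a priori (hypothetical example)
    f = (x[0] * x[1] + sp.sin(x[2])) ** 2 + sp.exp(x[3]) * (x[4] - x[5]) ** 2

    # Merge top-level additive terms that share variables into one group each
    groups = []
    for term in sp.Add.make_args(f):
        vars_in_term = set(term.free_symbols)
        for g in [g for g in groups if g & vars_in_term]:
            groups.remove(g)
            vars_in_term |= g
        groups.append(vars_in_term)

    print(groups)   # [{x0, x1, x2}, {x3, x4, x5}] -> two independent subproblems
    ```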

  17. A hierarchy of generalized Jaulent-Miodek equations and their explicit solutions

    NASA Astrophysics Data System (ADS)

    Geng, Xianguo; Guan, Liang; Xue, Bo

    A hierarchy of generalized Jaulent-Miodek (JM) equations related to a new spectral problem with energy-dependent potentials is proposed. With the aid of the Lax matrix and elliptic variables, the generalized JM hierarchy is decomposed into two systems of solvable ordinary differential equations. Explicit theta function representations of the meromorphic function and the Baker-Akhiezer function are constructed, and the solutions of the hierarchy are obtained based on the theory of algebraic curves.

  18. Quality improvement of diagnosis of the electromyography data based on statistical characteristics of the measured signals

    NASA Astrophysics Data System (ADS)

    Selivanova, Karina G.; Avrunin, Oleg G.; Zlepko, Sergii M.; Romanyuk, Sergii O.; Zabolotna, Natalia I.; Kotyra, Andrzej; Komada, Paweł; Smailova, Saule

    2016-09-01

    The study and systematization of motor disorders, taking into account clinical and neurophysiologic phenomena, is an important and topical problem in neurology. The article describes a technique for decomposing surface electromyography (EMG) signals using Principal Component Analysis. The decomposition is achieved by a set of algorithms developed specially for EMG analysis. Accuracy was verified by calculating the Mahalanobis distance and the probability of error.
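
    A minimal sketch of PCA-based decomposition of multichannel surface EMG, with an entirely synthetic signal standing in for real recordings (the channel count, latent sources, and noise level are assumptions):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    # Hypothetical surface-EMG recording: 2000 samples x 8 electrode channels,
    # built as a mixture of 3 latent sources plus measurement noise
    sources = rng.standard_normal((2000, 3))
    mixing = rng.standard_normal((3, 8))
    emg = sources @ mixing + 0.1 * rng.standard_normal((2000, 8))

    pca = PCA(n_components=3)
    components = pca.fit_transform(emg)        # decomposed activity patterns
    print(pca.explained_variance_ratio_.sum()) # fraction of signal retained (~1.0)
    ```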

  19. A system decomposition approach to the design of functional observers

    NASA Astrophysics Data System (ADS)

    Fernando, Tyrone; Trinh, Hieu

    2014-09-01

    This paper reports a system decomposition that allows the construction of a minimum-order functional observer using a state observer design approach. The system decomposition translates the functional observer design problem to that of a state observer for a smaller decomposed subsystem. Functional observability indices are introduced, and a closed-form expression for the minimum order required for a functional observer is derived in terms of those functional observability indices.

  20. Bacterial decontamination using ambient pressure nonthermal discharges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birmingham, J.G.; Hammerstrom, D.J.

    2000-02-01

    Atmospheric pressure nonthermal plasmas can efficiently deactivate bacteria in gases, in liquids, and on surfaces, and can also decompose hazardous chemicals. This paper focuses on the changes to bacterial spores and toxic biochemical compounds, such as mycotoxins, after their treatment in ambient pressure discharges. The ability of nonthermal plasmas to decompose toxic chemicals and deactivate hazardous biological materials has been applied to sterilizing medical instruments, ozonating water, and purifying air. In addition, the fast lysis of bacterial spores and other cells has led us to include plasma devices within pathogen detection instruments, where nucleic acids must be accessed. Decontaminating chemical and biological warfare materials from large, high-value targets such as building surfaces, after a terrorist attack, is especially challenging. A large-area plasma decontamination technology is described.

  1. ADVANCED OXIDATION: OXALATE DECOMPOSITION TESTING WITH OZONE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ketusky, E.; Subramanian, K.

    At the Savannah River Site (SRS), oxalic acid is currently considered the preferred agent for chemically cleaning the large underground Liquid Radioactive Waste Tanks. It is applied only in the final stages of emptying a tank, when generally less than 5,000 kg of waste solids remain and slurrying-based removal methods are no longer effective. The use of oxalic acid is preferred because of its combined dissolution and chelating properties, as well as the fact that corrosion of the carbon steel tank walls can be controlled. Although oxalic acid is the preferred agent, there are significant potential downstream impacts. Impacts include: (1) degraded evaporator operation; (2) resultant oxalate precipitates taking away critically needed operating volume; and (3) eventual creation of significant volumes of additional feed to salt processing. As an alternative to dealing with the downstream impacts, oxalate decomposition using variations of the ozone-based Advanced Oxidation Process (AOP) was investigated. In general, AOPs use ozone or peroxide and a catalyst to create hydroxyl radicals. Hydroxyl radicals have among the highest oxidation potentials and are commonly used to decompose organics. Although oxalate is considered among the most difficult organics to decompose, the ability of hydroxyl radicals to decompose oxalate is considered to be well demonstrated. In addition, as AOPs are considered 'green', their use enables any net chemical additions to the waste to be minimized. In order to test the ability to decompose the oxalate and determine the decomposition rates, a test rig was designed in which 10 vol% ozone would be educted into a spent oxalic acid decomposition loop, with the loop maintained at 70 °C and recirculated at 40 L/min. Each of the spent oxalic acid streams would be created from three oxalic acid strikes of an F-Area simulant (i.e., Purex = high Fe/Al concentration) and an H-Area simulant (i.e., H-Area modified Purex = high Al/Fe concentration) after nearing dissolution equilibrium, and then decomposed to ≤ 100 parts per million (ppm) oxalate. Since AOP technology largely originated from using ultraviolet (UV) light as a primary catalyst, decomposition of the spent oxalic acid, well exposed to a medium-pressure mercury vapor light, was considered the benchmark. However, with multi-valent metals already contained in the feed, and maintenance of the UV light a concern, testing was conducted to evaluate the impact of removing the UV light. Using current AOP terminology, the test without the UV light would likely be considered an ozone-based, dark, ferrioxalate-type decomposition process. Specifically, as part of the testing, the impacts of the following were investigated: (1) the importance of the UV light on the decomposition rates when decomposing 1 wt% spent oxalic acid; (2) the impact of increasing the oxalic acid strength from 1 to 2.5 wt% on the decomposition rates; and (3) for F-Area testing, the advantage of increasing the spent oxalic acid flowrate from 40 L/min to 50 L/min during decomposition of the 2.5 wt% spent oxalic acid. The results showed that removal of the UV light (from 1 wt% testing) slowed the decomposition rates in both the F- and H-Area testing. Specifically, for F-Area Strike 1, the time increased from about 6 hours to 8 hours. In H-Area, the impact was not as significant, with the time required for Strike 1 to be decomposed to less than 100 ppm increasing slightly, from 5.4 to 6.4 hours.
    For all of the spent 2.5 wt% oxalic acid decomposition tests without the UV light, the F-Area decompositions required approximately 10 to 13 hours, while the corresponding H-Area decomposition times ranged from 10 to 21 hours. For the 2.5 wt% F-Area sludge, the increased availability of iron likely caused the increased decomposition rates compared to the 1 wt% oxalic acid tests. In addition, for the F-Area testing, increasing the recirculation flow rate from 40 L/min to 50 L/min resulted in an increased decomposition rate, suggesting better use of the ozone.

  2. Modeling Women's Menstrual Cycles using PICI Gates in Bayesian Network.

    PubMed

    Zagorecki, Adam; Łupińska-Dubicka, Anna; Voortman, Mark; Druzdzel, Marek J

    2016-03-01

    A major difficulty in building Bayesian network (BN) models is the size of conditional probability tables, which grow exponentially in the number of parents. One way of dealing with this problem is through parametric conditional probability distributions that usually require only a number of parameters that is linear in the number of parents. In this paper, we introduce a new class of parametric models, the Probabilistic Independence of Causal Influences (PICI) models, that aim at lowering the number of parameters required to specify local probability distributions, but are still capable of efficiently modeling a variety of interactions. A subset of PICI models is decomposable and this leads to significantly faster inference as compared to models that cannot be decomposed. We present an application of the proposed method to learning dynamic BNs for modeling a woman's menstrual cycle. We show that PICI models are especially useful for parameter learning from small data sets and lead to higher parameter accuracy than when learning CPTs.
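
    For context, the best-known independence-of-causal-influences (ICI) gate is the noisy-OR, which needs one parameter per parent instead of a CPT that is exponential in the number of parents. The sketch below builds a noisy-OR CPT; it illustrates the general ICI idea, not the specific PICI gates proposed in the paper.

    ```python
    import itertools
    import numpy as np

    def noisy_or_cpt(p_leak, p_causes):
        """Noisy-OR, the classic independence-of-causal-influences gate:
        one parameter per parent instead of 2**n full-CPT columns."""
        n = len(p_causes)
        cpt = np.zeros([2] * n)
        for parents in itertools.product([0, 1], repeat=n):
            q = 1.0 - p_leak                      # chance the effect stays off
            for on, p in zip(parents, p_causes):
                if on:
                    q *= 1.0 - p                  # each active cause fails independently
            cpt[parents] = 1.0 - q                # P(effect = 1 | parent states)
        return cpt

    cpt = noisy_or_cpt(0.05, [0.8, 0.6, 0.3])     # 3 parents -> 4 parameters total
    print(cpt[1, 1, 0])                           # P(effect | first two causes active)
    ```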

  3. A facile self-assembly approach to prepare palladium/carbon nanotubes catalyst for the electro-oxidation of ethanol

    NASA Astrophysics Data System (ADS)

    Wen, Cuilian; Zhang, Xinyuan; Wei, Ying; Zhang, Teng; Chen, Changxin

    2018-02-01

    A facile self-assembly approach is reported to prepare a palladium/carbon nanotubes (Pd/CNTs) catalyst for the electro-oxidation of ethanol. In this method, the Pd-oleate/CNTs was decomposed into the Pd/CNTs at an optimal temperature of 195 °C in air; no inert gas is needed for the thermal decomposition process due to the low temperature used, and the decomposition products are also environmentally friendly. The prepared Pd/CNTs catalyst has a high metallic Pd0 content, and the Pd particles in the catalyst are well dispersed, uniform in size with an average diameter of ~2.1 nm, and evenly distributed on the CNTs. By employing our strategy, problems such as the exfoliation of the metal particles from the CNTs and the aggregation of the metal particles are avoided. Compared with the commercial Pd/C catalyst, the prepared Pd/CNTs catalyst exhibits much higher electrochemical activity and stability for the electro-oxidation of ethanol in direct ethanol fuel cells.

  4. Using "big data" to optimally model hydrology and water quality across expansive regions

    USGS Publications Warehouse

    Roehl, E.A.; Cook, J.B.; Conrads, P.A.

    2009-01-01

    This paper describes a new divide and conquer approach that leverages big environmental data, utilizing all available categorical and time-series data without subjectivity, to empirically model hydrologic and water-quality behaviors across expansive regions. The approach decomposes large, intractable problems into smaller ones that are optimally solved; decomposes complex signals into behavioral components that are easier to model with "sub-models"; and employs a sequence of numerically optimizing algorithms that include time-series clustering, nonlinear, multivariate sensitivity analysis and predictive modeling using multi-layer perceptron artificial neural networks, and classification for selecting the best sub-models to make predictions at new sites. This approach has many advantages over traditional modeling approaches, including being faster and less expensive, more comprehensive in its use of available data, and more accurate in representing a system's physical processes. This paper describes the application of the approach to model groundwater levels in Florida, stream temperatures across Western Oregon and Wisconsin, and water depths in the Florida Everglades. © 2009 ASCE.
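
    A minimal sketch of the divide-and-conquer pattern described here: cluster the sites into more homogeneous groups, then train one small neural-network sub-model per cluster. All data, feature counts, and model sizes below are hypothetical stand-ins.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    # Hypothetical monitoring network: 300 sites x 5 site features
    X = rng.standard_normal((300, 5))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(300)

    # Step 1: decompose the big problem -- cluster sites into homogeneous groups
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

    # Step 2: fit one small ANN sub-model per cluster instead of one global model
    sub_models = {
        k: MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                        random_state=0).fit(X[km.labels_ == k], y[km.labels_ == k])
        for k in np.unique(km.labels_)
    }

    # Predict at a new site: assign it to a cluster, then use that sub-model
    x_new = rng.standard_normal((1, 5))
    pred = sub_models[km.predict(x_new)[0]].predict(x_new)
    ```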

  5. Accidental bait: do deceased fish increase freshwater turtle bycatch in commercial fyke nets?

    PubMed

    Larocque, Sarah M; Watson, Paige; Blouin-Demers, Gabriel; Cooke, Steven J

    2012-07-01

    Bycatch of turtles in passive inland fyke net fisheries has been poorly studied, yet bycatch is an important conservation issue given the decline in many freshwater turtle populations. Delayed maturity and low natural adult mortality make turtles particularly susceptible to population declines when faced with additional anthropogenic adult mortality such as bycatch. When turtles are captured in fyke nets, the prolonged submergence can lead to stress and subsequent drowning. Fish die within infrequently checked passive fishing nets and dead fish are a potential food source for many freshwater turtles. Dead fish could thus act as attractants and increase turtle captures in fishing nets. We investigated the attraction of turtles to decomposing fish within fyke nets in eastern Ontario. We set fyke nets with either 1 kg of one-day or five-day decomposed fish, or no decomposed fish in the cod-end of the net. Decomposing fish did not alter the capture rate of turtles or fish, nor did it alter the species composition of the catch. Thus, reducing fish mortality in nets using shorter soak times is unlikely to alter turtle bycatch rates since turtles were not attracted by the dead fish. Interestingly, turtle bycatch rates increased as water temperatures did. Water temperature also influences turtle mortality by affecting the duration turtles can remain submerged. We thus suggest that submerged nets either not be set or have reduced soak times in warm water conditions (e.g., >20 °C) as turtles tend to be captured more frequently and cannot withstand prolonged submergence.

  6. Hedonic price models with omitted variables and measurement errors: a constrained autoregression-structural equation modeling approach with application to urban Indonesia

    NASA Astrophysics Data System (ADS)

    Suparman, Yusep; Folmer, Henk; Oud, Johan H. L.

    2014-01-01

    Omitted variables and measurement errors in explanatory variables frequently occur in hedonic price models. Ignoring these problems leads to biased estimators. In this paper, we develop a constrained autoregression-structural equation model (ASEM) to handle both types of problems. Standard panel data models to handle omitted variables bias are based on the assumption that the omitted variables are time-invariant. ASEM allows handling of both time-varying and time-invariant omitted variables by constrained autoregression. In the case of measurement error, standard approaches require additional external information which is usually difficult to obtain. ASEM exploits the fact that panel data are repeatedly measured which allows decomposing the variance of a variable into the true variance and the variance due to measurement error. We apply ASEM to estimate a hedonic housing model for urban Indonesia. To get insight into the consequences of measurement error and omitted variables, we compare the ASEM estimates with the outcomes of (1) a standard SEM, which does not account for omitted variables, (2) a constrained autoregression model, which does not account for measurement error, and (3) a fixed effects hedonic model, which ignores measurement error and time-varying omitted variables. The differences between the ASEM estimates and the outcomes of the three alternative approaches are substantial.

  7. Illustrating chaos: a schematic discretization of the general three-body problem in Newtonian gravity

    NASA Astrophysics Data System (ADS)

    Leigh, Nathan W. C.; Wegsman, Shalma

    2018-05-01

    We present a formalism for constructing schematic diagrams to depict chaotic three-body interactions in Newtonian gravity. This is done by decomposing each interaction into a series of discrete transformations in energy- and angular momentum-space. Each time a transformation is applied, the system changes state as the particles re-distribute their energy and angular momenta. These diagrams have the virtue of containing all of the quantitative information needed to fully characterize most bound or unbound interactions through time and space, including the total duration of the interaction, the initial and final stable states in addition to every intervening temporary meta-stable state. As shown via an illustrative example for the bound case, prolonged excursions of one of the particles, which by far dominates the computational cost of the simulations, are reduced to a single discrete transformation in energy- and angular momentum-space, thereby potentially mitigating any computational expense. We further generalize our formalism to sequences of (unbound) three-body interactions, as occur in dense stellar environments during binary hardening. Finally, we provide a method for dynamically evolving entire populations of binaries via three-body scattering interactions, using a purely analytic formalism. In principle, the techniques presented here are adaptable to other three-body problems that conserve energy and angular momentum.

  8. ParaExp Using Leapfrog as Integrator for High-Frequency Electromagnetic Simulations

    NASA Astrophysics Data System (ADS)

    Merkel, M.; Niyonzima, I.; Schöps, S.

    2017-12-01

    Recently, ParaExp was proposed for the time integration of linear hyperbolic problems. It splits the time interval of interest into subintervals and computes the solution on each subinterval in parallel. The overall solution is decomposed into a particular solution defined on each subinterval with zero initial conditions and a homogeneous solution propagated by the matrix exponential applied to the initial conditions. The efficiency of the method depends on fast approximations of this matrix exponential based on recent results from numerical linear algebra. This paper deals with the application of ParaExp in combination with Leapfrog to electromagnetic wave problems in time domain. Numerical tests are carried out for a simple toy problem and a realistic spiral inductor model discretized by the Finite Integration Technique.
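
    A minimal sketch of the ParaExp splitting on a toy linear system x' = A x + g(t): particular solutions with zero initial data are computed independently on each subinterval (the parallel part), and the homogeneous pieces are propagated to the final time with the matrix exponential. Here `solve_ivp` stands in for the Leapfrog integrator used in the paper, and the system, source term, and horizon are assumptions.

    ```python
    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import solve_ivp

    A = np.array([[0.0, 1.0], [-4.0, 0.0]])        # toy wave-like linear system
    g = lambda t: np.array([0.0, np.sin(3.0 * t)])
    x0 = np.array([1.0, 0.0])
    T, P = 2.0, 4                                  # horizon, number of subintervals
    edges = np.linspace(0.0, T, P + 1)

    # Particular solutions: zero initial conditions, one per subinterval (parallel)
    v_end = [solve_ivp(lambda t, x: A @ x + g(t), (a, b), np.zeros(2),
                       rtol=1e-10, atol=1e-12).y[:, -1]
             for a, b in zip(edges[:-1], edges[1:])]

    # Homogeneous parts: propagate by the matrix exponential to the final time
    xT = expm(A * T) @ x0
    for b, v in zip(edges[1:], v_end):
        xT += expm(A * (T - b)) @ v

    ref = solve_ivp(lambda t, x: A @ x + g(t), (0.0, T), x0,
                    rtol=1e-10, atol=1e-12).y[:, -1]
    print(np.allclose(xT, ref, atol=1e-6))         # True: the pieces recombine
    ```

    The final check confirms that the recombined subinterval solutions match a direct serial integration over the whole interval.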

  9. Structural factoring approach for analyzing stochastic networks

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.; Shier, Douglas R.

    1991-01-01

    The problem of finding the distribution of the shortest path length through a stochastic network is investigated. A general algorithm for determining the exact distribution of the shortest path length is developed based on the concept of conditional factoring, in which a directed, stochastic network is decomposed into an equivalent set of smaller, generally less complex subnetworks. Several network constructs are identified and exploited to reduce significantly the computational effort required to solve a network problem relative to complete enumeration. This algorithm can be applied to two important classes of stochastic path problems: determining the critical path distribution for acyclic networks and the exact two-terminal reliability for probabilistic networks. Computational experience with the algorithm was encouraging and allowed the exact solution of networks that have been previously analyzed only by approximation techniques.
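
    For scale, the complete-enumeration baseline that conditional factoring is designed to beat can be written in a few lines for a tiny network. This sketch computes the exact shortest-path-length distribution by brute force over all arc realizations; the network itself is a made-up example, not one from the paper.

    ```python
    import itertools
    from collections import defaultdict

    # Tiny stochastic network: each arc has a discrete (length, probability) law
    arcs = {('s', 'a'): [(1, 0.5), (3, 0.5)],
            ('s', 'b'): [(2, 1.0)],
            ('a', 't'): [(1, 0.5), (4, 0.5)],
            ('b', 't'): [(2, 0.5), (3, 0.5)]}
    paths = [[('s', 'a'), ('a', 't')], [('s', 'b'), ('b', 't')]]

    dist = defaultdict(float)
    names = list(arcs)
    for combo in itertools.product(*(arcs[e] for e in names)):
        length = {e: c[0] for e, c in zip(names, combo)}
        prob = 1.0
        for _, p in combo:
            prob *= p                              # arc lengths are independent
        shortest = min(sum(length[e] for e in path) for path in paths)
        dist[shortest] += prob

    print(dict(dist))   # exact P(shortest path length = value)
    ```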

  10. Experimental evidence that the Ornstein-Uhlenbeck model best describes the evolution of leaf litter decomposability.

    PubMed

    Pan, Xu; Cornelissen, Johannes H C; Zhao, Wei-Wei; Liu, Guo-Fang; Hu, Yu-Kun; Prinzing, Andreas; Dong, Ming; Cornwell, William K

    2014-09-01

    Leaf litter decomposability is an important effect trait for ecosystem functioning. However, it is unknown how this effect trait evolved through plant history as a leaf 'afterlife' integrator of the evolution of multiple underlying traits upon which adaptive selection must have acted. Did decomposability evolve in a Brownian fashion without any constraints? Was evolution rapid at first, slowing later? Or was there an underlying mean-reverting process that makes the evolution of extreme trait values unlikely? Here, we test the hypothesis that the evolution of decomposability has undergone certain mean-reverting forces due to strong constraints and trade-offs in the leaf traits that have afterlife effects on litter quality to decomposers. In order to test this, we examined the leaf litter decomposability and seven key leaf traits of 48 tree species in the temperate area of China and fitted them to three evolutionary models: the Brownian motion model (BM), the early burst model (EB), and the Ornstein-Uhlenbeck model (OU). The OU model, which does not allow unlimited trait divergence through time, was the best-fit model for leaf litter decomposability and all seven leaf traits. These results support the hypothesis that neither decomposability nor the underlying traits have been able to diverge toward progressively extreme values through evolutionary time. These results have reinforced our understanding of the relationships between leaf litter decomposability and leaf traits in an evolutionary perspective and may be a helpful step toward reconstructing deep-time carbon cycling based on taxonomic composition with more confidence.
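
    The qualitative difference between the candidate models is easy to see in simulation: Brownian motion drifts without bound, while the Ornstein-Uhlenbeck process is pulled back toward an optimum, so its variance saturates. A minimal sketch with assumed parameter values (not fitted to the paper's data):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    theta, mu, sigma, dt, n = 2.0, 0.0, 0.5, 0.01, 5000

    bm = np.zeros(n)
    ou = np.zeros(n)
    for t in range(1, n):
        dw = np.sqrt(dt) * rng.standard_normal()
        bm[t] = bm[t - 1] + sigma * dw                                  # unconstrained
        ou[t] = ou[t - 1] + theta * (mu - ou[t - 1]) * dt + sigma * dw  # mean-reverting

    print(bm.std(), ou.std())   # OU variance saturates near sigma**2 / (2 * theta)
    ```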

  11. Characterizing the Fundamental Intellectual Steps Required in the Solution of Conceptual Problems

    NASA Astrophysics Data System (ADS)

    Stewart, John

    2010-02-01

    At some level, the performance of a science class must depend on what is taught, the information content of the materials and assignments of the course. The introductory calculus-based electricity and magnetism class at the University of Arkansas is examined using a catalog of the basic reasoning steps involved in the solution of problems assigned in the class. This catalog was developed by sampling popular physics textbooks for conceptual problems. The solution to each conceptual problem was decomposed into its fundamental reasoning steps. These fundamental steps are, then, used to quantify the distribution of conceptual content within the course. Using this characterization technique, an exceptionally detailed picture of the information flow and structure of the class can be produced. The intellectual structure of published conceptual inventories is compared with the information presented in the class and the dependence of conceptual performance on the details of coverage extracted.

  12. Parallel Algorithms for Monte Carlo Particle Transport Simulation on Exascale Computing Architectures

    NASA Astrophysics Data System (ADS)

    Romano, Paul Kollath

    Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N) whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups only particle histories on a single processor into batches for tally purposes---in doing so it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. The model predictions were verified with measured data from simulations in OpenMC on a full-core benchmark problem. Finally, a novel algorithm for decomposing large tally data was proposed, analyzed, and implemented and tested in OpenMC. The algorithm relies on disjoint sets of compute processes and tally servers. The analysis showed that for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead. Tests were performed on Intrepid and Titan and demonstrated that the algorithm did indeed perform well over a wide range of parameters. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs - docs@mit.edu)

  13. Obtaining identical results with double precision global accuracy on different numbers of processors in parallel particle Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cleveland, Mathew A., E-mail: cleveland7@llnl.gov; Brunner, Thomas A.; Gentile, Nicholas A.

    2013-10-15

    We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double precision arithmetic is not associative. Parallel Monte Carlo simulations, both domain-replicated and domain-decomposed, will run their particles in a different order during different runs of the same simulation because of the non-reproducible order of communication between processors. In addition, runs of the same simulation using different domain decompositions will also result in particles being simulated in a different order. In [1], a way of eliminating non-associative accumulations using integer tallies was described. This approach successfully achieves reproducibility at the cost of lost accuracy by rounding double precision numbers to fewer significant digits. This integer approach, and other extended- and reduced-precision reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary precision approaches require a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time-step.
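
    The core numerical issue, that floating-point addition is not associative while an exactly rounded sum is order-independent, can be demonstrated in a few lines. This sketch uses Python's `math.fsum` as the exact accumulator; it illustrates the problem, not the integer-tally scheme of [1].

    ```python
    import math
    import random

    random.seed(1)
    tallies = [random.uniform(-1.0, 1.0) * 10.0 ** random.randint(-8, 8)
               for _ in range(10000)]

    fwd = 0.0
    for v in tallies:               # one processor layout's summation order
        fwd += v
    rev = 0.0
    for v in reversed(tallies):     # another layout visits particles differently
        rev += v

    print(fwd == rev)               # usually False: double addition isn't associative
    print(math.fsum(tallies) == math.fsum(reversed(tallies)))  # True: exactly
                                                               # rounded, order-free
    ```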

  14. Identification of the Radiative and Nonradiative Parts of a Wave Field

    NASA Astrophysics Data System (ADS)

    Hoenders, B. J.; Ferwerda, H. A.

    2001-08-01

    We present a method for decomposing a wave field, described by a second-order ordinary differential equation, into a radiative component and a nonradiative one, using a biorthonormal system related to the problem under consideration. We show that it is possible to select a special system such that the wave field is purely radiating. We discuss the differences and analogies with approaches which, unlike our approach, start from the corresponding sources of the field.

  15. A TV-constrained decomposition method for spectral CT

    NASA Astrophysics Data System (ADS)

    Guo, Xiaoyue; Zhang, Li; Xing, Yuxiang

    2017-03-01

    Spectral CT is attracting more and more attention in medicine, industrial nondestructive testing, and the security inspection field. Material decomposition is an important step for a spectral CT system to discriminate materials. Because of the spectral overlap of energy channels, as well as the correlation of basis functions, it is well acknowledged that the decomposition step in spectral CT imaging causes noise amplification and artifacts in the component coefficient images. In this work, we propose material decomposition via an optimization method to improve the quality of the decomposed coefficient images. On the basis of the general optimization problem, total variation minimization is imposed as a constraint on the coefficient images in our overall objective function with adjustable weights. We solve this constrained optimization problem under the framework of ADMM. Validation was performed on both a numerical dental phantom in simulation and a real pig-leg phantom on a practical CT system using dual-energy imaging. Both numerical and physical experiments give visibly better reconstructions than a general direct inversion method. SNR and SSIM are adopted to quantitatively evaluate the image quality of the decomposed component coefficients. All results demonstrate that the TV-constrained decomposition method performs well in reducing noise without losing spatial resolution, thus improving image quality. The method can easily be incorporated into different types of spectral imaging modalities, as well as cases with more than two energy channels.

  16. Human reinforcement learning subdivides structured action spaces by learning effector-specific values

    PubMed Central

    Gershman, Samuel J.; Pesaran, Bijan; Daw, Nathaniel D.

    2009-01-01

    Humans and animals are endowed with a large number of effectors. Although this enables great behavioral flexibility, it presents an equally formidable reinforcement learning problem of discovering which actions are most valuable, due to the high dimensionality of the action space. An unresolved question is how neural systems for reinforcement learning – such as prediction error signals for action valuation associated with dopamine and the striatum – can cope with this “curse of dimensionality.” We propose a reinforcement learning framework that allows for learned action valuations to be decomposed into effector-specific components when appropriate to a task, and test it by studying to what extent human behavior and BOLD activity can exploit such a decomposition in a multieffector choice task. Subjects made simultaneous decisions with their left and right hands and received separate reward feedback for each hand movement. We found that choice behavior was better described by a learning model that decomposed the values of bimanual movements into separate values for each effector, rather than a traditional model that treated the bimanual actions as unitary with a single value. A decomposition of value into effector-specific components was also observed in value-related BOLD signaling, in the form of lateralized biases in striatal correlates of prediction error and anticipatory value correlates in the intraparietal sulcus. These results suggest that the human brain can use decomposed value representations to “divide and conquer” reinforcement learning over high-dimensional action spaces. PMID:19864565

  17. Human reinforcement learning subdivides structured action spaces by learning effector-specific values.

    PubMed

    Gershman, Samuel J; Pesaran, Bijan; Daw, Nathaniel D

    2009-10-28

    Humans and animals are endowed with a large number of effectors. Although this enables great behavioral flexibility, it presents an equally formidable reinforcement learning problem of discovering which actions are most valuable because of the high dimensionality of the action space. An unresolved question is how neural systems for reinforcement learning-such as prediction error signals for action valuation associated with dopamine and the striatum-can cope with this "curse of dimensionality." We propose a reinforcement learning framework that allows for learned action valuations to be decomposed into effector-specific components when appropriate to a task, and test it by studying to what extent human behavior and blood oxygen level-dependent (BOLD) activity can exploit such a decomposition in a multieffector choice task. Subjects made simultaneous decisions with their left and right hands and received separate reward feedback for each hand movement. We found that choice behavior was better described by a learning model that decomposed the values of bimanual movements into separate values for each effector, rather than a traditional model that treated the bimanual actions as unitary with a single value. A decomposition of value into effector-specific components was also observed in value-related BOLD signaling, in the form of lateralized biases in striatal correlates of prediction error and anticipatory value correlates in the intraparietal sulcus. These results suggest that the human brain can use decomposed value representations to "divide and conquer" reinforcement learning over high-dimensional action spaces.
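
    A minimal sketch of the decomposed-value model the study favors: one learned value table per effector, both updated from the shared reward. The bandit-style task, reward rule, and learning parameters are our own stand-ins for the bimanual experiment.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_acts, alpha, eps = 4, 0.1, 0.1
    Q_left = np.zeros(n_acts)     # one value table per effector (hand)
    Q_right = np.zeros(n_acts)

    def reward(a_l, a_r):         # hypothetical task: each hand rewarded separately
        return float(a_l == 2) + float(a_r == 1)

    for _ in range(2000):
        # epsilon-greedy choice made independently for each hand
        a_l = rng.integers(n_acts) if rng.random() < eps else int(Q_left.argmax())
        a_r = rng.integers(n_acts) if rng.random() < eps else int(Q_right.argmax())
        r = reward(a_l, a_r)
        # decomposed update: both effector tables learn from the shared reward
        Q_left[a_l] += alpha * (r - Q_left[a_l])
        Q_right[a_r] += alpha * (r - Q_right[a_r])

    print(Q_left.argmax(), Q_right.argmax())   # should recover 2 and 1
    ```

    A unitary model would instead keep one table over all n_acts**2 joint actions, which is the "curse of dimensionality" the decomposition avoids.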

  18. Effect of carboxymethyl cellulose (CMC) as biopolymers to the edible film sorghum starch hydrophobicity characteristics

    NASA Astrophysics Data System (ADS)

    Putri, Rr. Dewi Artanti; Setiawan, Aji; Anggraini, Puji D.

    2017-03-01

    The use of synthetic plastics should be limited because they create waste that cannot decompose quickly, triggering environmental problems. One solution is the use of biodegradable plastic as environmentally friendly packaging. Edible films can be synthesized from a variety of components; a mixture of starch and cellulose-derivative products is one method for making them. Sorghum is a cereal crop containing 80.42% starch, yet in Indonesia sorghum is used merely as fodder. Sorghum is therefore a potential source of starch for edible film synthesis. This research aims to study the characteristics of edible sorghum starch films and to assess the effect of CMC (carboxymethyl cellulose) as an additional biopolymer on the characteristics of the films produced. The study began with the production of sorghum starch, followed by film synthesis with added CMC (5, 10, 15, 20, and 25% w/w starch), and finally hydrophobicity testing (water uptake and water solubility). The addition of CMC decreased the water absorption of the film, reaching a minimum of 65.8% at 25% CMC (w/w starch). CMC also influenced the water solubility of the film, which was lowest (28.2% TSM) at 25% CMC (w/w starch).

  19. Enhanced Chemical Cleaning: A New Process for Chemically Cleaning Savannah River Waste Tanks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ketusky, Edward; Spires, Renee; Davis, Neil

    2009-02-11

    At the Savannah River Site (SRS) there are 49 High Level Waste (HLW) tanks that eventually must be emptied, cleaned, and closed. The current method of chemically cleaning SRS HLW tanks, commonly referred to as Bulk Oxalic Acid Cleaning (BOAC), requires about a half million liters (130,000 gallons) of 8 weight percent (wt%) oxalic acid to clean a single tank. During the cleaning, the oxalic acid acts as the solvent to digest sludge solids and insoluble salt solids, such that they can be suspended and pumped out of the tank. Because of the volume and concentration of acid used, a significant quantity of oxalate is added to the HLW process. This added oxalate significantly impacts downstream processing. In addition to the oxalate, the volume of liquid added competes for the limited available tank space. A search, therefore, was initiated for a new cleaning process. Using TRIZ (Teoriya Resheniya Izobretatelskikh Zadatch, roughly translated as the Theory of Inventive Problem Solving), Chemical Oxidation Reduction Decontamination with Ultraviolet Light (CORD-UV®), a mature technology used in the commercial nuclear power industry, was identified as an alternate technology. Similar to BOAC, CORD-UV® also uses oxalic acid as the solvent to dissolve the metal (hydr)oxide solids. CORD-UV® is different, however, since it uses photo-oxidation (via peroxide/UV or ozone/UV to form hydroxyl radicals) to decompose the spent oxalate into carbon dioxide and water. Since the oxalate is decomposed and off-gassed, CORD-UV® would not have the negative downstream oxalate process impacts of BOAC. With the oxalate destruction occurring physically outside the HLW tank, re-precipitation and transfer of the solids, as well as regeneration of the cleaning solution, can be performed without adding additional solids or a significant volume of liquid to the process. With a draft of the pre-conceptual Enhanced Chemical Cleaning (ECC) flowsheet, taking full advantage of the many CORD-UV® benefits, performance demonstration testing was initiated using available SRS sludge simulant. The demonstration testing confirmed that ECC is a viable technology, as it can dissolve greater than 90% of the sludge simulant and destroy greater than 90% of the oxalates. Additional simulant and real waste testing are planned.

  20. X-ray EM simulation tool for ptychography dataset construction

    NASA Astrophysics Data System (ADS)

    Stoevelaar, L. Pjotr; Gerini, Giampiero

    2018-03-01

    In this paper, we present an electromagnetic full-wave modeling framework as a supporting EM tool that provides data sets for X-ray ptychographic imaging. Modeling the entire scattering problem with Finite Element Method (FEM) tools is, in fact, a prohibitive task, because of the large area illuminated by the beam (due to the poor focusing power at these wavelengths) and the very small features to be imaged. To overcome this problem, the spectrum of the illumination beam is decomposed into a discrete set of plane waves. This allows reducing the electromagnetic modeling volume to the volume enclosing the area to be imaged. The total scattered field is reconstructed by superimposing the solutions for each plane wave illumination.
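
    Because the problem is linear, the decompose-and-superimpose step amounts to a Fourier decomposition of the incident beam followed by a weighted sum of per-plane-wave solutions. The 1-D numpy sketch below illustrates only this bookkeeping; the Gaussian beam, grid sizes, and the trivial per-wave "solver" are assumptions standing in for the FEM solution of each plane-wave illumination.

```python
import numpy as np

n, dx = 256, 40e-9                       # samples and pixel size (assumed)
x = np.arange(n) * dx
beam = np.exp(-((x - x.mean()) / 1e-6) ** 2)   # assumed Gaussian illumination

# Decompose the illumination into a discrete set of plane waves.
spectrum = np.fft.fft(beam)
kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)

def solve_one_plane_wave(k):
    # Stand-in for the full-wave solution of a single plane-wave illumination
    # over the reduced modeling volume; here simply the incident wave itself.
    return np.exp(1j * k * x)

# Superimpose the per-plane-wave solutions with their spectral weights.
total = sum(w * solve_one_plane_wave(k) for w, k in zip(spectrum, kx)) / n
assert np.allclose(total, beam)          # linearity recovers the incident field
```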

  1. Experimental evidence that the Ornstein-Uhlenbeck model best describes the evolution of leaf litter decomposability

    PubMed Central

    Pan, Xu; Cornelissen, Johannes H C; Zhao, Wei-Wei; Liu, Guo-Fang; Hu, Yu-Kun; Prinzing, Andreas; Dong, Ming; Cornwell, William K

    2014-01-01

    Leaf litter decomposability is an important effect trait for ecosystem functioning. However, it is unknown how this effect trait evolved through plant history as a leaf ‘afterlife’ integrator of the evolution of multiple underlying traits upon which adaptive selection must have acted. Did decomposability evolve in a Brownian fashion without any constraints? Was evolution rapid at first and then slowed? Or was there an underlying mean-reverting process that makes the evolution of extreme trait values unlikely? Here, we test the hypothesis that the evolution of decomposability has undergone certain mean-reverting forces due to strong constraints and trade-offs in the leaf traits that have afterlife effects on litter quality to decomposers. In order to test this, we examined the leaf litter decomposability and seven key leaf traits of 48 tree species in the temperate area of China and fitted them to three evolutionary models: Brownian motion model (BM), Early burst model (EB), and Ornstein-Uhlenbeck model (OU). The OU model, which does not allow unlimited trait divergence through time, was the best-fit model for leaf litter decomposability and all seven leaf traits. These results support the hypothesis that neither decomposability nor the underlying traits have been able to diverge toward progressively extreme values through evolutionary time. These results have reinforced our understanding of the relationships between leaf litter decomposability and leaf traits from an evolutionary perspective and may be a helpful step toward reconstructing deep-time carbon cycling based on taxonomic composition with more confidence. PMID:25535551
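
    The qualitative difference between the candidate models is how trait variance accumulates along a branch: without bound under Brownian motion, but saturating under mean reversion. A short Euler-Maruyama sketch makes this concrete (all parameter values are illustrative assumptions, not fitted estimates):

```python
import numpy as np

rng = np.random.default_rng(1)
dt, steps, lineages = 0.01, 5000, 200
sigma, theta, mu = 1.0, 2.0, 0.0         # diffusion rate, OU pull, OU optimum

bm = np.zeros(lineages)
ou = np.zeros(lineages)
for _ in range(steps):
    noise = rng.normal(0.0, np.sqrt(dt), lineages)
    bm += sigma * noise                              # Brownian motion: unbounded
    ou += theta * (mu - ou) * dt + sigma * noise     # OU: mean-reverting pull

print(f"BM variance: {bm.var():.2f}")    # grows like sigma^2 * t
print(f"OU variance: {ou.var():.2f}")    # saturates near sigma^2 / (2 * theta)
```

    Model selection then reduces to asking which variance profile (plus the early-burst alternative) best explains the observed trait disparity across the 48 species.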

  2. A new approach for solving seismic tomography problems and assessing the uncertainty through the use of graph theory and direct methods

    NASA Astrophysics Data System (ADS)

    Bogiatzis, P.; Ishii, M.; Davis, T. A.

    2016-12-01

    Seismic tomography inverse problems are among the largest high-dimensional parameter estimation tasks in Earth science. We show how combinatorics and graph theory can be used to analyze the structure of such problems, and to effectively decompose them into smaller ones that can be solved efficiently by means of the least squares method. In combination with recent high performance direct sparse algorithms, this reduction in dimensionality allows for an efficient computation of the model resolution and covariance matrices using limited resources. Furthermore, we show that a new sparse singular value decomposition method can be used to obtain the complete spectrum of the singular values. This procedure provides the means for more objective regularization and further dimensionality reduction of the problem. We apply this methodology to a moderate size, non-linear seismic tomography problem to image the structure of the crust and the upper mantle beneath Japan using local deep earthquakes recorded by the High Sensitivity Seismograph Network stations.
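
    A toy version of the computational workflow, using off-the-shelf sparse iterative and spectral tools from SciPy (the random sparse matrix is a stand-in for a real ray-path kernel, and no graph-theoretic reordering is attempted here):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr, svds

rng = np.random.default_rng(0)
m, n = 2000, 500                                 # rays x model cells (toy sizes)
G = sp.random(m, n, density=0.01, format="csr", random_state=0)  # sparse kernel
m_true = rng.normal(size=n)
d = G @ m_true + 0.01 * rng.normal(size=m)       # noisy travel-time residuals

m_est = lsqr(G, d, damp=0.1)[0]                  # damped least-squares solution
s = svds(G, k=6, return_singular_vectors=False)  # a few singular values
print("largest singular values:", np.sort(s)[::-1])
```

    Inspecting the singular-value spectrum in this way is what supports the more objective choice of regularization mentioned in the abstract.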

  3. The genome sequence of Dyella jiangningensis FCAV SCS01 from a lignocellulose-decomposing microbial consortium metagenome reveals potential for biotechnological applications.

    PubMed

    Desiderato, Joana G; Alvarenga, Danillo O; Constancio, Milena T L; Alves, Lucia M C; Varani, Alessandro M

    2018-05-14

    Cellulose and its associated polymers are structural components of the plant cell wall, constituting one of the major sources of carbon and energy in nature. The carbon cycle is dependent on cellulose- and lignin-decomposing microbial communities and their enzymatic systems acting as consortia. These microbial consortia are under constant exploration for their potential biotechnological use. Herein, we describe the characterization of the genome of Dyella jiangningensis FCAV SCS01, recovered from the metagenome of a lignocellulose-degrading microbial consortium, which was isolated from a sugarcane crop soil under mechanical harvesting and covered by decomposing straw. The 4.7 Mbp genome encodes 4,194 proteins, including 36 glycoside hydrolases (GH), supporting the hypothesis that this bacterium may contribute to lignocellulose decomposition. Comparative analysis among fully sequenced Dyella species indicates that the genome synteny is not conserved, and that D. jiangningensis FCAV SCS01 carries 372 unique genes, including alpha-glucosidase and maltodextrin glucosidase coding genes, and other potential biomass degradation related genes. Additional genomic features, such as prophage-like elements, genomic islands, and putative new biosynthetic clusters, were also uncovered. Overall, D. jiangningensis FCAV SCS01 represents the first South American Dyella genome sequenced and shows features related to biomass degradation that are exclusive within its genus.

  4. Experimentally simulated global warming and nitrogen enrichment effects on microbial litter decomposers in a marsh.

    PubMed

    Flury, Sabine; Gessner, Mark O

    2011-02-01

    Atmospheric warming and increased nitrogen deposition can lead to changes of microbial communities with possible consequences for biogeochemical processes. We used an enclosure facility in a freshwater marsh to assess the effects on microbes associated with decomposing plant litter under conditions of simulated climate warming and pulsed nitrogen supply. Standard batches of litter were placed in coarse-mesh and fine-mesh bags and submerged in a series of heated, nitrogen-enriched, and control enclosures. They were retrieved later and analyzed for a range of microbial parameters. Fingerprinting profiles obtained by denaturing gradient gel electrophoresis (DGGE) indicated that simulated global warming induced a shift in bacterial community structure. In addition, warming reduced fungal biomass, whereas bacterial biomass was unaffected. The mesh size of the litter bags and sampling date also had an influence on bacterial community structure, with the apparent number of dominant genotypes increasing from spring to summer. Microbial respiration was unaffected by any treatment, and nitrogen enrichment had no clear effect on any of the microbial parameters considered. Overall, these results suggest that microbes associated with decomposing plant litter in nutrient-rich freshwater marshes are resistant to extra nitrogen supplies but are likely to respond to temperature increases projected for this century.

  5. Thermal Stability of Fluorinated Polydienes Synthesized by Addition of Difluorocarbene

    DTIC Science & Technology

    2012-01-01

    Thermal degradation of the fluorinated polydienes proceeds through a two-stage decomposition involving chain scission, crosslinking, dehydrogenation, and dehalogenation. The pyrolysis leads to graphite-like residues, whereas their polydiene precursors decompose completely under the same conditions.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kılıç, Emre, E-mail: emre.kilic@tum.de; Eibert, Thomas F.

    An approach combining boundary integral and finite element methods is introduced for the solution of three-dimensional inverse electromagnetic medium scattering problems. Based on the equivalence principle, unknown equivalent electric and magnetic surface current densities on a closed surface are utilized to decompose the inverse medium problem into two parts: a linear radiation problem and a nonlinear cavity problem. The first problem is formulated by a boundary integral equation, the computational burden of which is reduced by employing the multilevel fast multipole method (MLFMM). Reconstructed Cauchy data on the surface allows the utilization of the Lorentz reciprocity and the Poynting's theorems. Exploiting these theorems, the noise level and an initial guess are estimated for the cavity problem. Moreover, it is possible to determine whether the material is lossy or not. In the second problem, the estimated surface currents form inhomogeneous boundary conditions of the cavity problem. The cavity problem is formulated by the finite element technique and solved iteratively by the Gauss–Newton method to reconstruct the properties of the object. Regularization for both the first and the second problems is achieved by a Krylov subspace method. The proposed method is tested against both synthetic and experimental data and promising reconstruction results are obtained.

  7. The nature of the transitory product in the gas-phase ozonolysis of ethene

    NASA Astrophysics Data System (ADS)

    Neeb, Peter; Horie, Osamu; Moortgat, Geert K.

    1995-11-01

    One of the reactants for the formation of the previously identified transitory product in the gas-phase ozonolysis of C2H4 was shown to be HCOOH. The most probable structure of this compound is HOOCH2OCHO. Its concentration increased with the addition of HCOOH but decreased with the addition of HCHO, which had previously been assumed to be one of the reactants. This compound slowly decomposed to formic acid anhydride and water.

  8. Method for increasing steam decomposition in a coal gasification process

    DOEpatents

    Wilson, Marvin W.

    1988-01-01

    The gasification of coal in the presence of steam and oxygen is significantly enhanced by introducing a thermochemical water-splitting agent, such as sulfuric acid, into the gasifier for decomposing the steam to provide additional oxygen and hydrogen usable in the gasification process for the combustion of the coal and enrichment of the gaseous gasification products. The addition of the water-splitting agent into the gasifier also allows for the operation of the reactor at a lower temperature.

  9. Method for increasing steam decomposition in a coal gasification process

    DOEpatents

    Wilson, M.W.

    1987-03-23

    The gasification of coal in the presence of steam and oxygen is significantly enhanced by introducing a thermochemical water-splitting agent, such as sulfuric acid, into the gasifier for decomposing the steam to provide additional oxygen and hydrogen usable in the gasification process for the combustion of the coal and enrichment of the gaseous gasification products. The addition of the water-splitting agent into the gasifier also allows for the operation of the reactor at a lower temperature.

  10. Plant diversity does not buffer drought effects on early-stage litter mass loss rates and microbial properties.

    PubMed

    Vogel, Anja; Eisenhauer, Nico; Weigelt, Alexandra; Scherer-Lorenzen, Michael

    2013-09-01

    Human activities are decreasing biodiversity and changing the climate worldwide. Both global change drivers have been shown to affect ecosystem functioning, but they may also act in concert in a non-additive way. We studied early-stage litter mass loss rates and soil microbial properties (basal respiration and microbial biomass) during the summer season in response to plant species richness and summer drought in a large grassland biodiversity experiment, the Jena Experiment, Germany. In line with our expectations, decreasing plant diversity and summer drought decreased litter mass loss rates and soil microbial properties. In contrast to our hypotheses, however, this was only true for mass loss of standard litter (wheat straw) used in all plots, and not for plant community-specific litter mass loss. We found no interactive effects between global change drivers, that is, drought reduced litter mass loss rates and soil microbial properties irrespective of plant diversity. High mass loss rates of plant community-specific litter and low responsiveness to drought relative to the standard litter indicate that soil microbial communities were adapted to decomposing community-specific plant litter material including lower susceptibility to dry conditions during summer months. Moreover, higher microbial enzymatic diversity at high plant diversity may have caused elevated mass loss of standard litter. Our results indicate that plant diversity loss and summer drought independently impede soil processes. However, soil decomposer communities may be highly adapted to decomposing plant community-specific litter material, even in situations of environmental stress. Results of standard litter mass loss moreover suggest that decomposer communities under diverse plant communities are able to cope with a greater variety of plant inputs possibly making them less responsive to biotic changes. © 2013 John Wiley & Sons Ltd.

  11. Cloud-based large-scale air traffic flow optimization

    NASA Astrophysics Data System (ADS)

    Cao, Yi

    The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. Firstly, a new aggregate model, called Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is assumed to be the mode of the distribution of historical flight records, and the mode is estimated by using Kernel Density Estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of LTM is validated against recorded traffic data. Secondly, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on LTM. The optimization problem aims at alleviating traffic congestion with minimal global delays. This problem is intractable due to millions of variables. A dual decomposition method is applied to decompose the large-scale problem such that the subproblems are solvable. However, the whole problem is still computationally expensive to solve since each subproblem is a smaller integer programming problem that pursues integer solutions. Solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to linear runtime increase when the problem size increases. To address the computational efficiency problem, a parallel computing framework is designed which accommodates concurrent executions via multithreading programming. The multithreaded version is compared with its monolithic version to show decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. This framework is an "off-the-shelf" parallel computing model that can be used for both offline historical traffic data analysis and online traffic flow optimization. It provides an efficient and robust platform for easy deployment and implementation. A small cloud consisting of five workstations was configured and used to demonstrate the advantages of cloud computing in dealing with large-scale parallelizable traffic problems.
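
    The decomposition step can be illustrated on a toy coupled-resource problem: relax the shared capacity constraint with a Lagrange multiplier, solve each route's subproblem independently (in parallel, in principle), and update the multiplier by subgradient ascent. All numbers below are assumptions, and the integer subproblems of the real formulation are relaxed here to continuous ones with closed-form solutions.

```python
import numpy as np

c = np.array([1.0, 2.0, 0.5, 1.5])   # per-route delay weights (assumed)
d = np.array([4.0, 3.0, 6.0, 5.0])   # unconstrained route demands (assumed)
C = 10.0                             # shared capacity
lam, step = 0.0, 0.2                 # dual variable and subgradient step

for _ in range(200):
    # Each subproblem min_x c_i*(x - d_i)^2 + lam*x has a closed-form solution
    # and could run on a separate worker; this is the point of decomposition.
    x = np.maximum(0.0, d - lam / (2 * c))
    lam = max(0.0, lam + step * (x.sum() - C))   # ascend the dual

print("allocation:", np.round(x, 2), "total:", round(float(x.sum()), 2))
```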

  12. Motion Planning and Synthesis of Human-Like Characters in Constrained Environments

    NASA Astrophysics Data System (ADS)

    Zhang, Liangjun; Pan, Jia; Manocha, Dinesh

    We give an overview of our recent work on generating naturally-looking human motion in constrained environments with multiple obstacles. This includes a whole-body motion planning algorithm for high DOF human-like characters. The planning problem is decomposed into a sequence of low dimensional sub-problems. We use a constrained coordination scheme to solve the sub-problems in an incremental manner and a local path refinement algorithm to compute collision-free paths in tight spaces and satisfy the static stability constraint on the CoM. We also present a hybrid algorithm to generate plausible motion by combining the motion computed by our planner with mocap data. We demonstrate the performance of our algorithm on a 40 DOF human-like character and generate efficient motion strategies for object placement, bending, walking, and lifting in complex environments.

  13. An Automatic Orthonormalization Method for Solving Stiff Boundary-Value Problems

    NASA Astrophysics Data System (ADS)

    Davey, A.

    1983-08-01

    A new initial-value method is described, based on a remark by Drury, for solving stiff linear differential two-point eigenvalue and boundary-value problems. The method is extremely reliable, it is especially suitable for high-order differential systems, and it is capable of accommodating realms of stiffness which other methods cannot reach. The key idea behind the method is to decompose the stiff differential operator into two non-stiff operators, one of which is nonlinear. The nonlinear one is specially chosen so that it advances an orthonormal frame; indeed, the method is essentially a kind of automatic orthonormalization. The second operator is auxiliary but is needed to determine the required function. The usefulness of the method is demonstrated by calculating some eigenfunctions for an Orr-Sommerfeld problem when the Reynolds number is as large as 10^6.

  14. Lac Qui Parle Flood Control Project Master Plan for Public Use Development and Resource Management.

    DTIC Science & Technology

    1980-08-01

    the project area is the disposal of dead carp. Minnesota fishing regulations prohibit fishermen from returning rough fish to lakes or rivers after...in trash cans. Unless the dead fish are removed virtually daily, they begin to decompose and smell. Due to current workforce constraints, the Corps...is unable to remove the dead fish as often as it would like. No easy solution to this problem is apparent.

  15. A Heuristic Fast Method to Solve the Nonlinear Schroedinger Equation in Fiber Bragg Gratings with Arbitrary Shape Input Pulse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emami, F.; Hatami, M.; Keshavarz, A. R.

    2009-08-13

    Using a combination of the Runge-Kutta and Jacobi iterative methods, we could solve the nonlinear Schroedinger equation describing pulse propagation in FBGs. By decomposing the electric field into forward and backward components in the fiber Bragg grating and utilizing the Fourier series analysis technique, the boundary-value problem for the set of coupled equations governing pulse propagation in the FBG changes to an initial-value problem for the coupled equations, which can be solved by a simple Runge-Kutta method.

  16. Decomposition of the linking number of a closed ribbon: A problem from molecular biology

    PubMed Central

    Fuller, F. Brock

    1978-01-01

    A closed duplex DNA molecule relaxed and containing nucleosomes has a different linking number from the same molecule relaxed and without nucleosomes. What does this say about the structure of the nucleosome? A mathematical study of this question is made, representing the DNA molecule by a ribbon. It is shown that the linking number of a closed ribbon can be decomposed into the linking number of a reference ribbon plus a sum of locally determined “linking differences.” PMID:16592550
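
    Stated compactly (with notation chosen here for illustration), the decomposition is

    $$ Lk(R) \;=\; Lk(R_{\mathrm{ref}}) \;+\; \sum_{i} \Delta Lk_{i}, $$

    where R_ref is the reference ribbon and each ΔLk_i is a locally determined linking difference, so a global topological invariant is assembled from contributions of local structures such as individual nucleosomes.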

  17. Distributed Multi-Cell Resource Allocation with Price Based ICI Coordination in Downlink OFDMA Networks

    NASA Astrophysics Data System (ADS)

    Lv, Gangming; Zhu, Shihua; Hui, Hui

    Multi-cell resource allocation under minimum rate request for each user in OFDMA networks is addressed in this paper. Based on Lagrange dual decomposition theory, the joint multi-cell resource allocation problem is decomposed and modeled as a limited-cooperative game, and a distributed multi-cell resource allocation algorithm is thus proposed. Analysis and simulation results show that, compared with non-cooperative iterative water-filling algorithm, the proposed algorithm can remarkably reduce the ICI level and improve overall system performances.

  18. Northeast Artificial Intelligence Consortium (NAIC). Volume 15. Strategies for Coupling Symbolic and Numerical Computation in Knowledge Base Systems

    DTIC Science & Technology

    1990-12-01

    ...occurs during specific phases of the problem-solving process. By decomposing the coupling process into its component layers we effectively study the nature...by the qualitative model, an appropriate mathematical model is invoked. 5) The results are verified. If successful, stop. Else go to (2) and use an

  19. Influence of nitrogen additions on litter decomposition, nutrient dynamics, and enzymatic activity of two plant species in a peatland in Northeast China.

    PubMed

    Song, Yanyu; Song, Changchun; Ren, Jiusheng; Tan, Wenwen; Jin, Shaofei; Jiang, Lei

    2018-06-01

    Nitrogen (N) availability affects litter decomposition and nutrient dynamics, especially in N-limited ecosystems. We investigated the response of litter decomposition to N additions in Eriophorum vaginatum and Vaccinium uliginosum peatlands. These two species dominate peatlands in Northeast China. In 2012, mesh bags containing senesced leaf litter of Eriophorum vaginatum and Vaccinium uliginosum were placed in N addition plots and sprayed monthly for two years with NH4NO3 solution at dose rates of 0, 6, 12, and 24 g N m⁻² year⁻¹ (CK, N1, N2 and N3, respectively). Mass loss, N and phosphorus (P) content, and enzymatic activity were measured over time as litter decomposed. In the control plots, V. uliginosum litter decomposed faster than E. vaginatum litter. N1, N2, and N3 treatments increased the mass losses of V. uliginosum litter by 6%, 9%, and 4% respectively, when compared with control. No significant influence of N additions was found on the decomposition of E. vaginatum litter. However, N and P content in E. vaginatum litter and V. uliginosum litter significantly increased with N additions. Moreover, N additions significantly promoted invertase and β-glucosidase activity in E. vaginatum and V. uliginosum litter. However, only in V. uliginosum litter was polyphenol oxidase activity significantly enhanced. Our results showed that initial litter quality and polyphenol oxidase activity influence the response of plant litter to N additions in peatland ecosystems. Increased N availability may change peatland soil N and P cycling by enhancing N and P immobilization during litter decomposition. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Limited Effects of Variable-Retention Harvesting on Fungal Communities Decomposing Fine Roots in Coastal Temperate Rainforests.

    PubMed

    Philpott, Timothy J; Barker, Jason S; Prescott, Cindy E; Grayston, Sue J

    2018-02-01

    Fine root litter is the principal source of carbon stored in forest soils and a dominant source of carbon for fungal decomposers. Differences in decomposer capacity between fungal species may be important determinants of fine-root decomposition rates. Variable-retention harvesting (VRH) provides refuge for ectomycorrhizal fungi, but its influence on fine-root decomposers is unknown, as are the effects of functional shifts in these fungal communities on carbon cycling. We compared fungal communities decomposing fine roots (in litter bags) under VRH, clear-cut, and uncut stands at two sites (6 and 13 years postharvest) and two decay stages (43 days and 1 year after burial) in Douglas fir forests in coastal British Columbia, Canada. Fungal species and guilds were identified from decomposed fine roots using high-throughput sequencing. Variable retention had short-term effects on β-diversity; harvest treatment modified the fungal community composition at the 6-year-postharvest site, but not at the 13-year-postharvest site. Ericoid and ectomycorrhizal guilds were not more abundant under VRH, but stand age significantly structured species composition. Guild composition varied by decay stage, with ruderal species later replaced by saprotrophs and ectomycorrhizae. Ectomycorrhizal abundance on decomposing fine roots may partially explain why fine roots typically decompose more slowly than surface litter. Our results indicate that stand age structures fine-root decomposers but that decay stage is more important in structuring the fungal community than shifts caused by harvesting. The rapid postharvest recovery of fungal communities decomposing fine roots suggests resiliency within this community, at least in these young regenerating stands in coastal British Columbia. IMPORTANCE Globally, fine roots are a dominant source of carbon in forest soils, yet the fungi that decompose this material and that drive the sequestration or respiration of this carbon remain largely uncharacterized. Fungi vary in their capacity to decompose plant litter, suggesting that fungal community composition is an important determinant of decomposition rates. Variable-retention harvesting is a forestry practice that modifies fungal communities by providing refuge for ectomycorrhizal fungi. We evaluated the effects of variable retention and clear-cut harvesting on fungal communities decomposing fine roots at two sites (6 and 13 years postharvest), at two decay stages (43 days and 1 year), and in uncut stands in temperate rainforests. Harvesting impacts on fungal community composition were detected only at the site 6 years after harvest. We suggest that fungal community composition may be an important factor that reduces fine-root decomposition rates relative to those of above-ground plant litter, which has important consequences for forest carbon cycling. Copyright © 2018 American Society for Microbiology.

  1. Phase diagram of ammonium nitrate

    NASA Astrophysics Data System (ADS)

    Dunuwille, Mihindra; Yoo, Choong-Shik

    2013-12-01

    Ammonium Nitrate (AN) is a fertilizer, yet becomes an explosive upon a small addition of chemical impurities. The origin of enhanced chemical sensitivity in impure AN (or AN mixtures) is not well understood, posing significant safety issues in using AN even today. To remedy the situation, we have carried out an extensive study to investigate the phase stability of AN and its mixtures with hexane (ANFO: AN mixed with fuel oil) and aluminum (Ammonal) at high pressures and temperatures, using diamond anvil cells (DAC) and micro-Raman spectroscopy. The results indicate that pure AN decomposes to N2, N2O, and H2O at the onset of the melt, whereas the mixtures, ANFO and Ammonal, decompose at substantially lower temperatures. The present results also confirm the recently proposed phase IV-IV' transition above 17 GPa and provide new constraints for the melting and phase diagram of AN to 40 GPa and 400°C.

  2. Electrochemical alkaline Fe(VI) water purification and remediation.

    PubMed

    Licht, Stuart; Yu, Xingwen

    2005-10-15

    Fe(VI) is an unusual and strongly oxidizing form of iron, which provides a potentially less hazardous water-purifying agent than chlorine. A novel on-line electrochemical Fe(VI) water purification methodology is introduced. Fe(VI) addition had been a barrier to its effective use in water remediation, because solid Fe(VI) salts require complex (costly) synthesis steps and solutions of Fe(VI) decompose. Online electrochemical Fe(VI) water purification avoids these limitations: Fe(VI) is directly prepared in solution from an iron anode as the FeO₄²⁻ ion and added to the contaminant stream. Added FeO₄²⁻ decomposes, by oxidizing a wide range of water contaminants including sulfides (demonstrated in this study) and other sulfur-containing compounds, cyanides (demonstrated in this study), arsenic (demonstrated in this study), ammonia and other nitrogen-containing compounds (previously demonstrated), a wide range of organics (phenol demonstrated in this study), algae, and viruses (each previously demonstrated).

  3. A phylogenetic transform enhances analysis of compositional microbiota data.

    PubMed

    Silverman, Justin D; Washburne, Alex D; Mukherjee, Sayan; David, Lawrence A

    2017-02-15

    Surveys of microbial communities (microbiota), typically measured as relative abundance of species, have illustrated the importance of these communities in human health and disease. Yet, statistical artifacts commonly plague the analysis of relative abundance data. Here, we introduce the PhILR transform, which incorporates microbial evolutionary models with the isometric log-ratio transform to allow off-the-shelf statistical tools to be safely applied to microbiota surveys. We demonstrate that analyses of community-level structure can be applied to PhILR transformed data with performance on benchmarks rivaling or surpassing standard tools. Additionally, by decomposing distance in the PhILR transformed space, we identified neighboring clades that may have adapted to distinct human body sites. Decomposing variance revealed that covariation of bacterial clades within human body sites increases with phylogenetic relatedness. Together, these findings illustrate how the PhILR transform combines statistical and phylogenetic models to overcome compositional data challenges and enable evolutionary insights relevant to microbial communities.
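
    The core operation is the isometric log-ratio step: a centered log-ratio of the composition followed by projection onto an orthonormal "balance" basis. A minimal numpy/scipy sketch using a generic Helmert basis is shown below; the PhILR transform additionally derives the basis and weights from the phylogenetic tree, which this sketch does not reproduce.

```python
import numpy as np
from scipy.linalg import helmert

def ilr(x):
    """Isometric log-ratio transform of a strictly positive composition."""
    x = np.asarray(x, dtype=float)
    x = x / x.sum()                        # close the composition
    clr = np.log(x) - np.log(x).mean()     # centered log-ratio
    V = helmert(len(x))                    # (D-1, D) orthonormal contrast basis
    return V @ clr                         # D-1 unconstrained coordinates

abundances = np.array([12.0, 3.0, 30.0, 5.0])   # toy relative-abundance counts
print(ilr(abundances))
```

    Because the output coordinates are unconstrained and Euclidean, standard statistical tools can be applied to them without the artifacts that plague raw relative abundances.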

  4. "Going to town": Large-scale norming and statistical analysis of 870 American English idioms.

    PubMed

    Bulkes, Nyssa Z; Tanner, Darren

    2017-04-01

    An idiom is classically defined as a formulaic sequence whose meaning is comprised of more than the sum of its parts. For this reason, idioms pose a unique problem for models of sentence processing, as researchers must take into account how idioms vary and along what dimensions, as these factors can modulate the ease with which an idiomatic interpretation can be activated. In order to help ensure external validity and comparability across studies, idiom research benefits from the availability of publicly available resources reporting ratings from a large number of native speakers. Resources such as the one outlined in the current paper facilitate opportunities for consensus across studies on idiom processing and help to further our goals as a research community. To this end, descriptive norms were obtained for 870 American English idioms from 2,100 participants along five dimensions: familiarity, meaningfulness, literal plausibility, global decomposability, and predictability. Idiom familiarity and meaningfulness strongly correlated with one another, whereas familiarity and meaningfulness were positively correlated with both global decomposability and predictability. Correlations with previous norming studies are also discussed.

  5. Assume-Guarantee Verification of Source Code with Design-Level Assumptions

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Pasareanu, Corina S.; Cobleigh, Jamieson M.

    2004-01-01

    Model checking is an automated technique that can be used to determine whether a system satisfies certain required properties. To address the 'state explosion' problem associated with this technique, we propose to integrate assume-guarantee verification at different phases of system development. During design, developers build abstract behavioral models of the system components and use them to establish key properties of the system. To increase the scalability of model checking at this level, we have developed techniques that automatically decompose the verification task by generating component assumptions for the properties to hold. The design-level artifacts are subsequently used to guide the implementation of the system, but also to enable more efficient reasoning at the source-code level. In particular we propose to use design-level assumptions to similarly decompose the verification of the actual system implementation. We demonstrate our approach on a significant NASA application, where design-level models were used to identify and correct a safety property violation, and design-level assumptions allowed us to check successfully that the property was preserved by the implementation.

  6. Natural 13C abundance reveals trophic status of fungi and host-origin of carbon in mycorrhizal fungi in mixed forests

    PubMed Central

    Högberg, Peter; Plamboeck, Agneta H.; Taylor, Andrew F. S.; Fransson, Petra M. A.

    1999-01-01

    Fungi play crucial roles in the biogeochemistry of terrestrial ecosystems, most notably as saprophytes decomposing organic matter and as mycorrhizal fungi enhancing plant nutrient uptake. However, a recurrent problem in fungal ecology is to establish the trophic status of species in the field. Our interpretations and conclusions are too often based on extrapolations from laboratory microcosm experiments or on anecdotal field evidence. Here, we used natural variations in stable carbon isotope ratios (δ13C) as an approach to distinguish between fungal decomposers and symbiotic mycorrhizal fungal species in the rich sporocarp flora (our sample contains 135 species) of temperate forests. We also demonstrated that host-specific mycorrhizal fungi that receive C from overstorey or understorey tree species differ in their δ13C. The many promiscuous mycorrhizal fungi, associated with and connecting several tree hosts, were calculated to receive 57–100% of their C from overstorey trees. Thus, overstorey trees also support, partly or wholly, the nutrient-absorbing mycelia of their alleged competitors, the understorey trees. PMID:10411910

  7. Release of elicitors from rice blast spores under the action of reactive oxygen species

    USDA-ARS?s Scientific Manuscript database

    The effects of reactive oxygen species (ROS) on secretion of hypothesized elicitors from spores of rice blast causal fungus Magnaporthe grisea were studied. For spore exposure to exogenous ROS, they were germinated for 5 h in 50 µM H2O2 followed by addition of catalase E.C. 1.11.1.6 (to decompose pe...

  8. Our World without Decomposers: How Scary!

    ERIC Educational Resources Information Center

    Spring, Patty; Harr, Natalie

    2014-01-01

    Bugs, slugs, bacteria, and fungi are decomposers at the heart of every ecosystem. Fifth graders at Dodge Intermediate School in Twinsburg, Ohio, ventured outdoors to learn about the necessity of these amazing organisms. With the help of a naturalist, students explored their local park and discovered the wonder of decomposers and their…

  9. Destruction of inorganic municipal solid waste incinerator fly ash in a DC arc plasma furnace.

    PubMed

    Zhao, Peng; Ni, Guohua; Jiang, Yiman; Chen, Longwei; Chen, Mingzhou; Meng, Yuedong

    2010-09-15

    Due to the toxicity of dioxins, furans and heavy metals, there is growing environmental concern about municipal solid waste incinerator (MSWI) fly ash in China. The purpose of this study is the volume reduction of fly ash by thermal plasma without any additive, and the recycling of the vitrified slag. This process uses extremely high temperatures in an oxygen-starved environment to completely decompose complex waste into very simple molecules. For developing the proper plasma processes to treat MSWI fly ash, a new crucible-type plasma furnace was built. The melting process transformed fly ash into granulated slag that was less than 1/3 of the volume of the fly ash, and about 64% of the weight of the fly ash. The safety of the vitrified slag was tested. The properties of the slag were affected by the differences in the cooling methods. Water-cooled and composite-cooled slag showed excellent resistance against the leaching of heavy metals and can be utilized as building material without toxicity problems. Copyright 2010 Elsevier B.V. All rights reserved.

  10. Ethylene-Vinyl Acetate Potential Problems for Photovoltaic Packaging: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kempe, M. D.; Jorgensen, G. J.; Terwilliger, K. M.

    2006-05-01

    Photovoltaic (PV) devices are typically encapsulated using ethylene-vinyl acetate (EVA) to provide mechanical support, optical coupling, electrical isolation, and protection against environmental exposure. Under exposure to atmospheric water and/or ultraviolet radiation, EVA will decompose to produce acetic acid, lowering the pH and increasing the surface corrosion rates of embedded devices. Even though acetic acid is produced at a very slow rate, it may not take much to catalyze reactions that lead to rapid module deterioration. Another consideration is that the glass transition of EVA, as measured using dynamic mechanical analysis, begins at temperatures of about -15 °C. Temperatures lower than this can be reached for extended periods of time in some climates. Because of increased moduli below the glass transition temperature, a module may be more vulnerable to damage if a mechanical load is applied by snow or wind at low temperatures. Modules using EVA should not be rated for use at such low temperatures without additional low-temperature mechanical testing beyond the scope of UL 1703.

  11. Ethylene-Vinyl Acetate Potential Problems for Photovoltaic Packaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kempe, M. D.; Jorgensen, G. J.; Terwilliger, K. M.

    2006-01-01

    Photovoltaic (PV) devices are typically encapsulated using ethylene-vinyl acetate (EVA) to provide mechanical support, optical coupling, electrical isolation, and protection against environmental exposure. Under exposure to atmospheric water and/or ultraviolet radiation, EVA will decompose to produce acetic acid, lowering the pH and increasing the surface corrosion rates of embedded devices. Even though acetic acid is produced at a very slow rate, it may not take much to catalyze reactions that lead to rapid module deterioration. Another consideration is that the glass transition of EVA, as measured using dynamic mechanical analysis, begins at temperatures of about -15 °C. Temperatures lower than this can be reached for extended periods of time in some climates. Because of increased moduli below the glass transition temperature, a module may be more vulnerable to damage if a mechanical load is applied by snow or wind at low temperatures. Modules using EVA should not be rated for use at such low temperatures without additional low-temperature mechanical testing beyond the scope of UL 1703.

  12. Potential Problems with Ethylene-Vinyl Acetate for Photovoltaic Packaging (Poster)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kempe, M. D.; Jorgensen, G. J.; Terwilliger, K. M.

    2006-05-01

    Photovoltaic (PV) devices are typically encapsulated using ethylene-vinyl acetate (EVA) to provide mechanical support, electrical isolation, optical coupling, and protection against environmental exposure. Under exposure to atmospheric water and/or ultraviolet radiation, EVA will decompose to produce acetic acid, lowering the pH and increasing the surface corrosion rates of embedded devices. Even though acetic acid is produced at a very slow rate it may not take much to catalyze reactions that lead to rapid module deterioration. Another consideration is that the glass transition of EVA, as measured using dynamic mechanical analysis, begins at temperatures of about -15 °C. Temperatures lower than this can be reached for extended periods of time in some climates. Due to increased moduli below the glass transition temperature, a module may be more vulnerable to damage if a mechanical load is applied by snow or wind at low temperatures. Modules using EVA should not be rated for use at such low temperatures without additional low-temperature mechanical testing beyond the scope of UL 1703.

  13. Incipient fault feature extraction of rolling bearings based on the MVMD and Teager energy operator.

    PubMed

    Ma, Jun; Wu, Jiande; Wang, Xiaodong

    2018-06-04

    Incipient faults in rolling bearings are difficult to recognize, and the number of intrinsic mode functions (IMFs) produced by variational mode decomposition (VMD) must be set in advance rather than selected adaptively. Taking full advantage of adaptive scale-spectrum segmentation and Teager energy operator (TEO) demodulation, a new method for early fault feature extraction of rolling bearings based on the modified VMD and Teager energy operator (MVMD-TEO) is proposed. Firstly, the vibration signal of rolling bearings is analyzed by adaptive scale space spectrum segmentation to obtain the spectrum segmentation support boundary, and then the number K of IMFs decomposed by VMD is adaptively determined. Secondly, the original vibration signal is adaptively decomposed into K IMFs, and the effective IMF components are extracted based on the correlation coefficient criterion. Finally, the Teager energy spectrum of the reconstructed signal of the effective IMF components is calculated by the TEO, and then the early fault features of rolling bearings are extracted to realize the fault identification and location. Comparative experiments of the proposed method and the existing fault feature extraction method based on Local Mean Decomposition and Teager energy operator (LMD-TEO) have been implemented using experimental datasets and a measured dataset. The results of comparative experiments in three application cases show that the presented method can achieve a comparable or slightly better performance than the LMD-TEO method, and the validity and feasibility of the proposed method are demonstrated. Copyright © 2018. Published by Elsevier Ltd.
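
    The demodulation step rests on the discrete Teager energy operator, a three-sample formula that tracks the instantaneous energy of an oscillation. A minimal sketch follows; the test signal and sampling rate are assumptions for illustration.

```python
import numpy as np

def teager(x):
    """Discrete Teager energy operator: psi[x](n) = x(n)^2 - x(n-1)*x(n+1)."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

fs = 12_000                                   # sampling rate in Hz (assumed)
t = np.arange(2048) / fs
carrier = np.sin(2 * np.pi * 3000 * t)        # resonance excited by impacts
gate = (np.sin(2 * np.pi * 100 * t) > 0.99)   # sparse impact train (assumed)
signal = carrier * gate + 0.01 * np.random.randn(t.size)

energy = teager(signal)
# Peaks in `energy` mark the impact instants; in the MVMD-TEO pipeline the
# operator is applied to the selected IMFs, and the Teager energy spectrum
# then exposes the bearing fault characteristic frequency.
```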

  14. Discretized energy minimization in a wave guide with point sources

    NASA Technical Reports Server (NTRS)

    Propst, G.

    1994-01-01

    An anti-noise problem on a finite time interval is solved by minimization of a quadratic functional on the Hilbert space of square integrable controls. To this end, the one-dimensional wave equation with point sources and pointwise reflecting boundary conditions is decomposed into a system for the two propagating components of waves. Wellposedness of this system is proved for a class of data that includes piecewise linear initial conditions and piecewise constant forcing functions. It is shown that for such data the optimal piecewise constant control is the solution of a sparse linear system. Methods for its computational treatment are presented as well as examples of their applicability. The convergence of discrete approximations to the general optimization problem is demonstrated by finite element methods.
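
    The splitting into propagating components is the classical characteristic decomposition of the 1-D wave equation; in generic notation (a sketch of the idea, not necessarily the paper's exact system),

    $$ u_{tt} = c^{2} u_{xx}, \qquad w^{\pm} = u_t \pm c\,u_x \quad\Longrightarrow\quad w^{\pm}_t \mp c\,w^{\pm}_x = 0, $$

    so each component obeys a first-order advection equation and travels in one direction at speed c.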

  15. Hierarchical coarse-graining transform.

    PubMed

    Pancaldi, Vera; King, Peter R; Christensen, Kim

    2009-03-01

    We present a hierarchical transform that can be applied to Laplace-like differential equations such as Darcy's equation for single-phase flow in a porous medium. A finite-difference discretization scheme is used to set the equation in the form of an eigenvalue problem. Within the formalism suggested, the pressure field is decomposed into an average value and fluctuations of different kinds and at different scales. The application of the transform to the equation allows us to calculate the unknown pressure with a varying level of detail. A procedure is suggested to localize important features in the pressure field based only on the fine-scale permeability, and hence we develop a form of adaptive coarse graining. The formalism and method are described and demonstrated using two synthetic toy problems.

  16. Task decomposition for a multilimbed robot to work in reachable but unorientable space

    NASA Technical Reports Server (NTRS)

    Su, Chau; Zheng, Yuan F.

    1991-01-01

    Robot manipulators installed on legged mobile platforms are suggested for enlarging robot workspace. To plan the motion of such a system, the arm-platform motion coordination problem is raised, and a task decomposition is proposed to solve the problem. A given task described by the destination position and orientation of the end effector is decomposed into subtasks for arm manipulation and for platform configuration, respectively. The former is defined as the end-effector position and orientation with respect to the platform, and the latter as the platform position and orientation in the base coordinates. Three approaches are proposed for the task decomposition. The approaches are also evaluated in terms of the displacements, from which an optimal approach can be selected.

  17. Multi-blocking strategies for the INS3D incompressible Navier-Stokes code

    NASA Technical Reports Server (NTRS)

    Gatlin, Boyd

    1990-01-01

    With the continuing development of bigger and faster supercomputers, computational fluid dynamics (CFD) has become a useful tool for real-world engineering design and analysis. However, the number of grid points necessary to resolve realistic flow fields numerically can easily exceed the memory capacity of available computers. In addition, geometric shapes of flow fields, such as those in the Space Shuttle Main Engine (SSME) power head, may be impossible to fill with continuous grids upon which to obtain numerical solutions to the equations of fluid motion. The solution to this dilemma is simply to decompose the computational domain into subblocks of manageable size. Computer codes that are single-block by construction can be modified to handle multiple blocks, but ad-hoc changes in the FORTRAN have to be made for each geometry treated. For engineering design and analysis, what is needed is generalization so that the blocking arrangement can be specified by the user. INS3D is a computer program for the solution of steady, incompressible flow problems. It is used frequently to solve engineering problems in the CFD Branch at Marshall Space Flight Center. INS3D uses an implicit solution algorithm and the concept of artificial compressibility to provide the necessary coupling between the pressure field and the velocity field. The development of generalized multi-block capability in INS3D is described.

  18. Guided wave localization of damage via sparse reconstruction

    NASA Astrophysics Data System (ADS)

    Levine, Ross M.; Michaels, Jennifer E.; Lee, Sang Jun

    2012-05-01

    Ultrasonic guided waves are frequently applied for structural health monitoring and nondestructive evaluation of plate-like metallic and composite structures. Spatially distributed arrays of fixed piezoelectric transducers can be used to detect damage by recording and analyzing all pairwise signal combinations. By subtracting pre-recorded baseline signals, the effects due to scatterer interactions can be isolated. Given these residual signals, techniques such as delay-and-sum imaging are capable of detecting flaws, but do not exploit the expected sparse nature of damage. It is desired to determine the location of a possible flaw by leveraging the anticipated sparsity of damage; i.e., most of the structure is assumed to be damage-free. Unlike least-squares methods, L1-norm minimization techniques favor sparse solutions to inverse problems such as the one considered here of locating damage. Using this type of method, it is possible to exploit sparsity of damage by formulating the imaging process as an optimization problem. A model-based damage localization method is presented that simultaneously decomposes all scattered signals into location-based signal components. The method is first applied to simulated data to investigate sensitivity to both model mismatch and additive noise, and then to experimental data recorded from an aluminum plate with artificial damage. Compared to delay-and-sum imaging, results exhibit a significant reduction in both spot size and imaging artifacts when the model is reasonably well-matched to the data.
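
    The sparsity-promoting localization can be prototyped with a few lines of ISTA (iterative soft-thresholding) applied to min_x 0.5*||Ax - b||^2 + lambda*||x||_1. In this sketch the random dictionary A stands in for the model-based mapping from candidate damage locations to residual signals; it is an assumption, not the authors' propagation model.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 128, 512                    # measurements x candidate damage locations
A = rng.normal(size=(m, n)) / np.sqrt(m)     # stand-in propagation dictionary
x_true = np.zeros(n)
x_true[[37, 305]] = [1.0, -0.7]              # two sparse scatterers (assumed)
b = A @ x_true + 0.01 * rng.normal(size=m)   # noisy residual signals

lam = 0.05
L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    x = x - A.T @ (A @ x - b) / L                            # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)    # soft threshold

print("recovered support:", np.nonzero(np.abs(x) > 0.1)[0])
```

    Unlike delay-and-sum imaging, the L1 penalty drives most candidate locations exactly to zero, which is the sparsity prior the abstract leverages.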

  19. Energy index decomposition methodology at the plant level

    NASA Astrophysics Data System (ADS)

    Kumphai, Wisit

    Scope and method of study. The dissertation explores the use of a high level energy intensity index as a facility-level energy performance monitoring indicator with a goal of developing a methodology for an economically based energy performance monitoring system that incorporates production information. The performance measure closely monitors energy usage, production quantity, and product mix and determines the production efficiency as a part of an ongoing process that would enable facility managers to keep track of and, in the future, be able to predict when to perform a recommissioning process. The study focuses on the use of the index decomposition methodology and explored several high level (industry, sector, and country levels) energy utilization indexes, namely, Additive Log Mean Divisia, Multiplicative Log Mean Divisia, and Additive Refined Laspeyres. One level of index decomposition is performed. The indexes are decomposed into Intensity and Product mix effects. These indexes are tested on a flow shop brick manufacturing plant model in three different climates in the United States. The indexes obtained are analyzed by fitting an ARIMA model and testing for dependency between the two decomposed indexes. Findings and conclusions. The results concluded that the Additive Refined Laspeyres index decomposition methodology is suitable to use on a flow shop, non air conditioned production environment as an energy performance monitoring indicator. It is likely that this research can be further expanded in to predicting when to perform a recommissioning process.
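
    For reference, the additive LMDI form decomposes an energy change using logarithmic-mean weights, L(a, b) = (a - b) / (ln a - ln b). The sketch below computes the intensity and product-mix effects for made-up two-product plant data; an activity term, omitted here, completes the exact identity.

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.where(np.isclose(a, b), a, (a - b) / (np.log(a) - np.log(b)))

# Toy two-product plant data (assumed): production q and energy use e
q0, q1 = np.array([100.0, 50.0]), np.array([90.0, 80.0])   # period 0 -> T
e0, e1 = np.array([200.0, 150.0]), np.array([170.0, 230.0])

s0, s1 = q0 / q0.sum(), q1 / q1.sum()    # product-mix shares
i0, i1 = e0 / q0, e1 / q1                # energy intensities per product
w = logmean(e1, e0)                      # LMDI weights

mix_effect = float((w * np.log(s1 / s0)).sum())
intensity_effect = float((w * np.log(i1 / i0)).sum())
print(f"mix effect: {mix_effect:.1f}, intensity effect: {intensity_effect:.1f}")
```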

  20. Confidence-Based Data Association and Discriminative Deep Appearance Learning for Robust Online Multi-Object Tracking.

    PubMed

    Bae, Seung-Hwan; Yoon, Kuk-Jin

    2018-03-01

    Online multi-object tracking aims at estimating the tracks of multiple objects instantly with each incoming frame and the information provided up to the moment. It still remains a difficult problem in complex scenes, because of the large ambiguity in associating multiple objects in consecutive frames and the low discriminability between object appearances. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first define the tracklet confidence using the detectability and continuity of a tracklet, and decompose a multi-object tracking problem into small subproblems based on the tracklet confidence. We then solve the online multi-object tracking problem by associating tracklets and detections in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive association steps. For more reliable association between tracklets and detections, we also propose a deep appearance learning method to learn a discriminative appearance model from large training datasets, since the conventional appearance learning methods do not provide rich representation that can distinguish multiple objects with large appearance variations. In addition, we combine online transfer learning for improving appearance discriminability by adapting the pre-trained deep model during online tracking. Experiments with challenging public datasets show distinct performance improvement over other state-of-the-art batch and online tracking methods, and demonstrate the effectiveness and usefulness of the proposed methods for online multi-object tracking.

  1. Integrated boiler, superheater, and decomposer for sulfuric acid decomposition

    DOEpatents

    Moore, Robert [Edgewood, NM; Pickard, Paul S [Albuquerque, NM; Parma, Jr., Edward J.; Vernon, Milton E [Albuquerque, NM; Gelbard, Fred [Albuquerque, NM; Lenard, Roger X [Edgewood, NM

    2010-01-12

    A method and apparatus, constructed of ceramics and other corrosion resistant materials, for decomposing sulfuric acid into sulfur dioxide, oxygen and water using an integrated boiler, superheater, and decomposer unit comprising a bayonet-type, dual-tube, counter-flow heat exchanger with a catalytic insert and a central baffle to increase recuperation efficiency.
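
    For context, the well-established two-step chemistry of sulfuric acid decomposition (as used in thermochemical hydrogen cycles) is

    $$ \mathrm{H_2SO_4} \longrightarrow \mathrm{H_2O} + \mathrm{SO_3}, \qquad \mathrm{SO_3} \longrightarrow \mathrm{SO_2} + \tfrac{1}{2}\,\mathrm{O_2}, $$

    with boiling and superheating driving the first step and the catalytic insert driving the high-temperature second step.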

  2. Procedures for Decomposing a Redox Reaction into Half-Reaction

    ERIC Educational Resources Information Center

    Fishtik, Ilie; Berka, Ladislav H.

    2005-01-01

    A simple algorithm for a complete enumeration of the possible ways a redox reaction (RR) might be uniquely decomposed into half-reactions (HRs) using the response reactions (RERs) formalism is presented. A complete enumeration of the possible ways a RR may be decomposed into HRs is equivalent to a complete enumeration of stoichiometrically…

  3. Hidden Statistics Approach to Quantum Simulations

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2010-01-01

    Recent advances in quantum information theory have inspired an explosion of interest in new quantum algorithms for solving hard computational (quantum and non-quantum) problems. The basic principle of quantum computation is that quantum properties can be used to represent structured data, and that quantum mechanisms can be devised and built to perform operations with this data. Three basic non-classical properties of quantum mechanics (superposition, entanglement, and direct-product decomposability) were the main reasons for optimism about the capabilities of quantum computers, which promised simultaneous processing of large masses of highly correlated data. Unfortunately, these advantages of quantum mechanics came with a high price. One major problem is keeping the components of the computer in a coherent state, as the slightest interaction with the external world would cause the system to decohere. That is why the hardware implementation of a quantum computer is still unsolved. The basic idea of this work is to create a new kind of dynamical system that would preserve the three main properties of quantum physics (superposition, entanglement, and direct-product decomposability) while allowing one to measure its state variables using classical methods. In other words, such a system would reinforce the advantages and minimize the limitations of both the quantum and classical aspects. Based upon a concept of hidden statistics, a new kind of dynamical system for simulating the Schroedinger equation is proposed. The system represents a modified Madelung version of the Schroedinger equation. It preserves superposition, entanglement, and direct-product decomposability while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for simulating quantum systems. The model includes a transitional component of the quantum potential (that has been overlooked in previous treatments of the Madelung equation). The role of the transitional potential is to provide a jump from a deterministic state to a random state with prescribed probability density. This jump is triggered by blowup instability due to violation of the Lipschitz condition generated by the quantum potential. As a result, the dynamics attains quantum properties on a classical scale. The model can be implemented physically as an analog VLSI-based (very-large-scale integration-based) computer, or numerically on a digital computer. This work opens a way of developing fundamentally new algorithms for quantum simulations of exponentially complex problems that expand NASA capabilities in conducting space activities. It has been illustrated that the complexity of simulating particle interactions can be reduced from exponential to polynomial.

  4. Evaluation of Methods for Multidisciplinary Design Optimization (MDO). Part 2

    NASA Technical Reports Server (NTRS)

    Kodiyalam, Srinivas; Yuan, Charles; Sobieski, Jaroslaw (Technical Monitor)

    2000-01-01

    A new MDO method, BLISS, and two different variants of the method, BLISS/RS and BLISS/S, have been implemented using iSIGHT's scripting language and evaluated in this report on multidisciplinary problems. All of these methods are based on decomposing a modular system optimization into several subtask optimizations, which may be executed concurrently, and a system-level optimization that coordinates the subtask optimizations. The BLISS method and its variants are well suited for exploiting the concurrent processing capabilities in a multiprocessor machine. Several steps, including the local sensitivity analysis, local optimization, response surface construction and updates, are all ideally suited for concurrent processing. Needless to say, algorithms that can effectively exploit the concurrent processing capabilities of the compute servers will be a key requirement for solving large-scale industrial design problems, such as the automotive vehicle problem detailed in Section 3.4.
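
    To make the concurrency pattern concrete, here is a minimal Python sketch, not BLISS itself: subtask optimizations run in parallel while a system-level step coordinates them. The toy quadratic problem and the names subsystem_opt/system_opt are invented for illustration.

      # Toy problem (assumed): minimize sum_i (x_i - z)^2 + (z - 3)^2,
      # where each x_i is a local subsystem variable and z is shared.
      from concurrent.futures import ProcessPoolExecutor

      def subsystem_opt(z):
          # Local optimum of (x - z)^2 for fixed system variable z is x = z.
          return z

      def system_opt(xs):
          # System-level coordination: minimize over z (closed form here).
          return (sum(xs) + 3.0) / (len(xs) + 1.0)

      if __name__ == "__main__":
          z = 0.0
          for _ in range(20):                       # fixed-point iteration
              with ProcessPoolExecutor() as pool:   # concurrent subtasks
                  xs = list(pool.map(subsystem_opt, [z] * 4))
              z = system_opt(xs)
          print(z, xs)  # converges to z = 3 with all x_i = 3

    The separable structure is what makes the pool.map step embarrassingly parallel; real MDO subtasks would each be full disciplinary optimizations.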

  5. A mesh gradient technique for numerical optimization

    NASA Technical Reports Server (NTRS)

    Willis, E. A., Jr.

    1973-01-01

    A class of successive-improvement optimization methods in which directions of descent are defined in the state space along each trial trajectory are considered. The given problem is first decomposed into two discrete levels by imposing mesh points. Level 1 consists of running optimal subarcs between each successive pair of mesh points. For normal systems, these optimal two-point boundary value problems can be solved by following a routine prescription if the mesh spacing is sufficiently close. A spacing criterion is given. Under appropriate conditions, the criterion value depends only on the coordinates of the mesh points, and its gradient with respect to those coordinates may be defined by interpreting the adjoint variables as partial derivatives of the criterion value function. In level 2, the gradient data is used to generate improvement steps or search directions in the state space which satisfy the boundary values and constraints of the given problem.

  6. Combined iterative reconstruction and image-domain decomposition for dual energy CT using total-variation regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Xue; Niu, Tianye; Zhu, Lei, E-mail: leizhu@gatech.edu

    2014-05-15

    Purpose: Dual-energy CT (DECT) is being increasingly used for its capability of material decomposition and energy-selective imaging. A generic problem of DECT, however, is that the decomposition process is unstable in the sense that the relative magnitude of decomposed signals is reduced due to signal cancellation while the image noise is accumulating from the two CT images of independent scans. Direct image decomposition, therefore, leads to severe degradation of signal-to-noise ratio on the resultant images. Existing noise suppression techniques are typically implemented in DECT with the procedures of reconstruction and decomposition performed independently, which do not explore the statistical properties of decomposed images during the reconstruction for noise reduction. In this work, the authors propose an iterative approach that combines the reconstruction and the signal decomposition procedures to minimize the DECT image noise without noticeable loss of resolution. Methods: The proposed algorithm is formulated as an optimization problem, which balances the data fidelity and total variation of decomposed images in one framework, and the decomposition step is carried out iteratively together with reconstruction. The noise in the CT images from the proposed algorithm becomes well correlated even though the noise of the raw projections is independent on the two CT scans. Due to this feature, the proposed algorithm avoids noise accumulation during the decomposition process. The authors evaluate the method performance on noise suppression and spatial resolution using phantom studies and compare the algorithm with conventional denoising approaches as well as combined iterative reconstruction methods with different forms of regularization. Results: On the Catphan©600 phantom, the proposed method outperforms the existing denoising methods on preserving spatial resolution at the same level of noise suppression, i.e., a reduction of noise standard deviation by one order of magnitude. This improvement is mainly attributed to the high noise correlation in the CT images reconstructed by the proposed algorithm. Iterative reconstruction using different regularization, including quadratic or q-generalized Gaussian Markov random field regularization, achieves similar noise suppression from high noise correlation. However, the proposed TV regularization obtains a better edge-preserving performance. Studies of electron density measurement also show that our method reduces the average estimation error from 9.5% to 7.1%. On the anthropomorphic head phantom, the proposed method suppresses the noise standard deviation of the decomposed images by a factor of ∼14 without blurring the fine structures in the sinus area. Conclusions: The authors propose a practical method for DECT imaging reconstruction, which combines the image reconstruction and material decomposition into one optimization framework. Compared to the existing approaches, our method achieves a superior performance on DECT imaging with respect to decomposition accuracy, noise reduction, and spatial resolution.

  7. Hybrid Multiscale Finite Volume method for multiresolution simulations of flow and reactive transport in porous media

    NASA Astrophysics Data System (ADS)

    Barajas-Solano, D. A.; Tartakovsky, A. M.

    2017-12-01

    We present a multiresolution method for the numerical simulation of flow and reactive transport in porous, heterogeneous media, based on the hybrid Multiscale Finite Volume (h-MsFV) algorithm. The h-MsFV algorithm allows us to couple high-resolution (fine scale) flow and transport models with lower resolution (coarse) models to locally refine both spatial resolution and transport models. The fine scale problem is decomposed into various "local" problems solved independently in parallel and coordinated via a "global" problem. This global problem is then coupled with the coarse model to strictly ensure domain-wide coarse-scale mass conservation. The proposed method provides an alternative to adaptive mesh refinement (AMR), due to its capacity to rapidly refine spatial resolution beyond what is possible with state-of-the-art AMR techniques, and the capability to locally swap transport models. We illustrate our method by applying it to groundwater flow and reactive transport of multiple species.

  8. A linear decomposition method for large optimization problems. Blueprint for development

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1982-01-01

    A method is proposed for decomposing large optimization problems encountered in the design of engineering systems such as an aircraft into a number of smaller subproblems. The decomposition is achieved by organizing the problem and the subordinated subproblems in a tree hierarchy and optimizing each subsystem separately. Coupling of the subproblems is accounted for by subsequent optimization of the entire system based on sensitivities of the suboptimization problem solutions at each level of the tree to variables of the next higher level. A formalization of the procedure suitable for computer implementation is developed and the state of readiness of the implementation building blocks is reviewed showing that the ingredients for the development are on the shelf. The decomposition method is also shown to be compatible with the natural human organization of the design process of engineering systems. The method is also examined with respect to the trends in computer hardware and software progress to point out that its efficiency can be amplified by network computing using parallel processors.

  9. Coordinated Platoon Routing in a Metropolitan Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, Jeffrey; Munson, Todd; Sokolov, Vadim

    2016-10-10

    Platooning vehicles—connected and automated vehicles traveling with small intervehicle distances—use less fuel because of reduced aerodynamic drag. Given a network defined by vertex and edge sets and a set of vehicles with origin/destination nodes/times, we model and solve the combinatorial optimization problem of coordinated routing of vehicles in a manner that routes them to their destination on time while using the least amount of fuel. Common approaches decompose the platoon coordination and vehicle routing into separate problems. Our model addresses both problems simultaneously to obtain the best solution. We use modern modeling techniques and constraints implied from analyzing the platoon routing problem to address larger numbers of vehicles and larger networks than previously considered. While the numerical method used is unable to certify optimality for candidate solutions to all networks and parameters considered, we obtain excellent solutions in approximately one minute for much larger networks and vehicle sets than previously considered in the literature.
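
    As a rough illustration of the decomposed approach the paper improves upon, the sketch below (assuming networkx and a hypothetical 10% platooning fuel discount) routes vehicles independently and only then credits savings on edges the routes happen to share; the paper's model instead chooses routes and platoons jointly.

      import networkx as nx

      G = nx.Graph()
      G.add_weighted_edges_from([("A", "B", 4.0), ("B", "C", 3.0),
                                 ("A", "C", 8.0), ("C", "D", 2.0)])

      routes = {
          "truck1": nx.shortest_path(G, "A", "D", weight="weight"),
          "truck2": nx.shortest_path(G, "B", "D", weight="weight"),
      }

      def edges_of(path):
          return {frozenset(e) for e in zip(path, path[1:])}

      shared = edges_of(routes["truck1"]) & edges_of(routes["truck2"])
      fuel = 0.0
      for path in routes.values():
          for u, v in zip(path, path[1:]):
              w = G[u][v]["weight"]
              # assumed 10% platooning discount on jointly used edges
              fuel += 0.9 * w if frozenset((u, v)) in shared else w

      print(routes, fuel)

    A joint formulation can do better than this two-stage scheme because it may route a vehicle slightly out of its way, or adjust timing, precisely to create more shared (discounted) edges.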

  10. Equivalence between a generalized dendritic network and a set of one-dimensional networks as a ground of linear dynamics.

    PubMed

    Koda, Shin-ichi

    2015-05-28

    It has been shown by some existing studies that some linear dynamical systems defined on a dendritic network are equivalent to those defined on a set of one-dimensional networks in special cases and this transformation to the simple picture, which we call linear chain (LC) decomposition, has a significant advantage in understanding properties of dendrimers. In this paper, we expand the class of LC decomposable systems with some generalizations. In addition, we propose two general sufficient conditions for LC decomposability with a procedure to systematically realize the LC decomposition. Some examples of LC decomposable linear dynamical systems are also presented with their graphs. The generalization of the LC decomposition is implemented in the following three aspects: (i) the type of linear operators; (ii) the shape of dendritic networks on which linear operators are defined; and (iii) the type of symmetry operations representing the symmetry of the systems. In the generalization (iii), symmetry groups that represent the symmetry of dendritic systems are defined. The LC decomposition is realized by changing the basis of a linear operator defined on a dendritic network into bases of irreducible representations of the symmetry group. The achievement of this paper makes it easier to utilize the LC decomposition in various cases. This may lead to a further understanding of the relation between structure and functions of dendrimers in future studies.

  11. Normative data for idiomatic expressions.

    PubMed

    Nordmann, Emily; Jambazova, Antonia A

    2017-02-01

    Idiomatic expressions such as kick the bucket or go down a storm can differ on a number of internal features, such as familiarity, meaning, literality, and decomposability, and these types of features have been the focus of a number of normative studies. In this article, we provide normative data for a set of Bulgarian idioms and their English translations, and by doing so replicate in a Slavic language the relationships between the ratings previously found in Romance and Germanic languages. Additionally, we compared whether collecting these types of ratings in between-subjects or within-subjects designs affects the data and the conclusions drawn, and found no evidence that design type affects the final outcome. Finally, we present the results of a meta-analysis that summarizes the relationships found across the literature. As in many previous individual studies, we found that familiarity correlates with a number of other features; however, such studies have shown conflicting results concerning literality and decomposability ratings. The meta-analysis revealed reliable relationships of decomposability with a number of other measures, such as familiarity, meaning, and predictability. Conversely, literality was shown to have little to no relationship with any of the other subjective ratings. The implications for these relationships in the context of the wider experimental literature are discussed, with a particular focus on the importance of attaining familiarity ratings for each sample of participants in experimental work.

  12. Decomposer food web in a deciduous forest shows high share of generalist microorganisms and importance of microbial biomass recycling.

    PubMed

    López-Mondéjar, Ruben; Brabcová, Vendula; Štursová, Martina; Davidová, Anna; Jansa, Jan; Cajthaml, Tomaš; Baldrian, Petr

    2018-06-01

    Forest soils represent important terrestrial carbon (C) pools where C is primarily fixed in the plant-derived biomass but it flows further through the biomass of fungi and bacteria before it is lost from the ecosystem as CO2 or immobilized in recalcitrant organic matter. Microorganisms are the main drivers of C flow in forests and play critical roles in the C balance through the decomposition of dead biomass of different origins. Here, we track the path of C that enters forest soil by following respiration, microbial biomass production, and C accumulation by individual microbial taxa in soil microcosms upon the addition of 13C-labeled biomass of plant, fungal, and bacterial origin. We demonstrate that both fungi and bacteria are involved in the assimilation and mineralization of C from the major complex sources existing in soil. Decomposer fungi are, however, better suited to utilize plant biomass compounds, whereas the ability to utilize fungal and bacterial biomass is more frequent among bacteria. Due to the ability of microorganisms to recycle microbial biomass, we suggest that the decomposer food web in forest soil displays a network structure with loops between and within individual pools. These results question the present paradigms describing food webs as hierarchical structures with unidirectional flow of C and assumptions about the dominance of fungi in the decomposition of complex organic matter.

  13. System for thermochemical hydrogen production

    DOEpatents

    Werner, R.W.; Galloway, T.R.; Krikorian, O.H.

    1981-05-22

    Method and apparatus are described for joule boosting an SO3 decomposer using electrical instead of thermal energy to heat the reactants of the high temperature SO3 decomposition step of a thermochemical hydrogen production process driven by a tandem mirror reactor. Joule boosting the decomposer to a sufficiently high temperature from a lower temperature heat source eliminates the need for expensive catalysts and reduces the temperature and consequent materials requirements for the reactor blanket. A particular decomposer design utilizes electrically heated silicon carbide rods, at a temperature of 1250 K, to decompose a cross flow of SO3 gas.

  14. A Distributed Approach to System-Level Prognostics

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, Indranil

    2012-01-01

    Prognostics, which deals with predicting remaining useful life of components, subsystems, and systems, is a key technology for systems health management that leads to improved safety and reliability with reduced costs. The prognostics problem is often approached from a component-centric view. However, in most cases, it is not specifically component lifetimes that are important, but, rather, the lifetimes of the systems in which these components reside. The system-level prognostics problem can be quite difficult due to the increased scale and scope of the prognostics problem and the relative lack of scalability and efficiency of typical prognostics approaches. In order to address these issues, we develop a distributed solution to the system-level prognostics problem, based on the concept of structural model decomposition. The system model is decomposed into independent submodels. Independent local prognostics subproblems are then formed based on these local submodels, resulting in a scalable, efficient, and flexible distributed approach to the system-level prognostics problem. We provide a formulation of the system-level prognostics problem and demonstrate the approach on a four-wheeled rover simulation testbed. The results show that the system-level prognostics problem can be accurately and efficiently solved in a distributed fashion.

  15. Is it worth hyperaccumulating Ni on non-serpentine soils? Decomposition dynamics of mixed-species litters containing hyperaccumulated Ni across serpentine and non-serpentine environments.

    PubMed

    Adamidis, George C; Kazakou, Elena; Aloupi, Maria; Dimitrakopoulos, Panayiotis G

    2016-06-01

    Nickel (Ni)-hyperaccumulating species produce high-Ni litters and may potentially influence important ecosystem processes such as decomposition. Although litters resembling the natural community conditions are essential in order to predict decomposition dynamics, decomposition of mixed-species litters containing hyperaccumulated Ni has never been studied. This study aims to test the effect of different litter mixtures containing hyperaccumulated Ni on decomposition and Ni release across serpentine and non-serpentine soils. Three different litter mixtures were prepared based on the relative abundance of the dominant species in three serpentine soils on the island of Lesbos, Greece, where the Ni-hyperaccumulator Alyssum lesbiacum is present. Each litter mixture decomposed on its original serpentine habitat and on an adjacent non-serpentine habitat, in order to investigate whether the decomposition rates differ across the contrasted soils. In order to make comparisons across litter mixtures and to investigate whether additive or non-additive patterns of mass loss occur, a control non-serpentine site was used. Mass loss and Ni release were measured after 90, 180 and 270 d of field exposure. The decomposition rates and Ni release had higher values on serpentine soils after all periods of field exposure. The recorded rapid release of hyperaccumulated Ni is positively related to the initial litter Ni concentration. No differences were found in the decomposition of the three different litter mixtures at the control non-serpentine site, while their patterns of mass loss were additive. Our results: (1) demonstrate the rapid decomposition of litters containing hyperaccumulated Ni on serpentine soils, indicating the presence of metal-tolerant decomposers; and (2) imply the selective decomposition of low-Ni parts of litters by the decomposers on non-serpentine soils. This study provides support for the elemental allelopathy hypothesis of hyperaccumulation, presenting the potential selective advantages acquired by metal-hyperaccumulating plants through litter decomposition on serpentine soils. © The Author 2016. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  16. From nonlinear Schrödinger hierarchy to some (2+1)-dimensional nonlinear pseudodifferential equations

    NASA Astrophysics Data System (ADS)

    Yang, Xiao; Du, Dianlou

    2010-08-01

    The Poisson structure on C^N × R^N is introduced to give the Hamiltonian system associated with a spectral problem which yields the nonlinear Schrödinger (NLS) hierarchy. The Hamiltonian system is proven to be Liouville integrable. Some (2+1)-dimensional equations, including the NLS equation, the Kadomtsev-Petviashvili I (KPI) equation, the coupled KPI equation, and the modified Kadomtsev-Petviashvili (mKP) equation, are decomposed into Hamiltonian flows via the NLS hierarchy. The algebraic curve, Abel-Jacobi coordinates, and Riemann-Jacobi inversion are used to obtain the algebro-geometric solutions of these equations.

  17. Earth observations taken by the Expedition Seven crew

    NASA Image and Video Library

    2003-10-11

    ISS007-E-17038 (11 October 2003) --- This view featuring a close-up of the Salton Sea was taken by an Expedition 7 crewmember onboard the International Space Station (ISS). The image provides detail of the structure of the algal bloom. These blooms continue to be a problem for the Salton Sea. They are caused by high concentrations of nutrients, especially nitrogen and phosphorus, which drain into the basin from agricultural run-off. As the algae die and decompose, oxygen levels in the sea drop, causing fish kills and hazardous conditions for other wildlife.

  18. Crossing symmetry in alpha space

    NASA Astrophysics Data System (ADS)

    Hogervorst, Matthijs; van Rees, Balt C.

    2017-11-01

    We initiate the study of the conformal bootstrap using Sturm-Liouville theory, specializing to four-point functions in one-dimensional CFTs. We do so by decomposing conformal correlators using a basis of eigenfunctions of the Casimir which are labeled by a complex number α. This leads to a systematic method for computing conformal block decompositions. Analyzing bootstrap equations in alpha space turns crossing symmetry into an eigenvalue problem for an integral operator K. The operator K is closely related to the Wilson transform, and some of its eigenfunctions can be found in closed form.

  19. Research on Computer Aided Innovation Model of Weapon Equipment Requirement Demonstration

    NASA Astrophysics Data System (ADS)

    Li, Yong; Guo, Qisheng; Wang, Rui; Li, Liang

    Firstly, in order to overcome the shortcoming of using AD or TRIZ alone, and to solve the problems that currently exist in weapon equipment requirement demonstration, the paper constructs a method system for weapon equipment requirement demonstration combining QFD, AD, TRIZ and FA. Then, we construct a CAI model framework for weapon equipment requirement demonstration, which includes a requirement decomposition model, a requirement mapping model and a requirement plan optimization model. Finally, we construct the computer aided innovation model of weapon equipment requirement demonstration and develop CAI software for equipment requirement demonstration.

  20. Process for converting magnesium fluoride to calcium fluoride

    DOEpatents

    Kreuzmann, A.B.; Palmer, D.A.

    1984-12-21

    This invention is a process for the conversion of magnesium fluoride to calcium fluoride whereby magnesium fluoride is decomposed by heating in the presence of calcium carbonate, calcium oxide or calcium hydroxide. Magnesium fluoride is a by-product of the reduction of uranium tetrafluoride to form uranium metal and has no known commercial use; thus, its production creates a significant storage problem. The advantage of this invention is that the quality of the calcium fluoride produced is sufficient for use in the industrial manufacture of anhydrous hydrogen fluoride, as steel mill flux, or in ceramic applications.

  1. Do Nonnative Language Speakers "Chew the Fat" and "Spill the Beans" with Different Brain Hemispheres? Investigating Idiom Decomposability with the Divided Visual Field Paradigm

    ERIC Educational Resources Information Center

    Cieslicka, Anna B.

    2013-01-01

    The purpose of this study was to explore possible cerebral asymmetries in the processing of decomposable and nondecomposable idioms by fluent nonnative speakers of English. In the study, native language (Polish) and foreign language (English) decomposable and nondecomposable idioms were embedded in ambiguous (neutral) and unambiguous (biasing…

  2. Advantages and limitations for users of double pit pour-flush latrines: a qualitative study in rural Bangladesh.

    PubMed

    Hussain, Faruqe; Clasen, Thomas; Akter, Shahinoor; Bawel, Victoria; Luby, Stephen P; Leontsini, Elli; Unicomb, Leanne; Barua, Milan Kanti; Thomas, Brittany; Winch, Peter J

    2017-05-25

    In rural Bangladesh, India and elsewhere, pour-flush pit latrines are the most common sanitation system. When a single pit latrine becomes full, users must empty it themselves and risk exposure to fresh feces, pay an emptying service to remove pit contents, or build a new latrine. Double pit pour-flush latrines may serve as a long-term sanitation option, including in high water table areas, because the pits do not need to be emptied immediately and the excreta decomposes into reusable soil. Double pit pour-flush latrines were implemented in rural Bangladesh for 'hardcore poor' households by a national NGO, BRAC. We conducted interviews, focus groups, and spot checks in two low-income, rural areas of Bangladesh to explore the advantages and limitations of using double pit latrines compared to single pit latrines. The rural households accepted the double pit pour-flush latrine model and considered it feasible to use and maintain. This latrine design increased accessibility of a sanitation facility for these low-income residents and provided privacy, convenience and comfort, compared to open defecation. Although a double pit latrine is more costly and requires more space than a single pit latrine, the households perceived this sanitation system to save resources, because households did not need to hire service workers to empty pits or remove decomposed contents themselves. In addition, the excreta decomposition process produced a reusable soil product that some households used in homestead gardening. The durability of the latrine superstructures was a problem, as most of the bamboo-pole superstructures broke after 6-18 months of use. Double pit pour-flush latrines are a long-term improved sanitation option that offers users several important advantages over single pit pour-flush latrines in settings like rural Bangladesh, and they can also be used in areas with a high water table. Further research can provide an understanding of the comparative health impacts and effectiveness of the model in preventing human excreta from entering the environment.

  3. Decomposing potassium peroxychromate produces hydroxyl radical (.OH) that can peroxidize the unsaturated fatty acids of phospholipid dispersions.

    PubMed

    Edwards, J C; Quinn, P J

    1982-09-01

    The unsaturated fatty acyl residues of egg yolk lecithin are selectively removed when bilayer dispersions of the lipid are exposed to decomposing peroxychromate at pH 7.6 or pH 9.0. Mannitol (50 mM or 100 mM) partially prevents the oxidation of the phospholipid due to decomposing peroxychromate at pH 7.6, and the amount of lipid lost is inversely proportional to the concentration of mannitol. N,N-Dimethyl-p-nitrosoaniline, mixed with the lipid in a molar ratio of 1.3:1, completely prevents the oxidation of lipid due to decomposing peroxychromate at pH 9.0, but some linoleic acid is lost if the incubation is done at pH 7.6. If the concentration of this quench reagent is reduced tenfold, oxidation of linoleic acid by decomposing peroxychromate at pH 9.0 is observed. Hydrogen peroxide is capable of oxidizing the unsaturated fatty acids of lecithin dispersions. Catalase or boiled catalase (2 mg/ml) protects the lipid from oxidation due to decomposing peroxychromate at pH 7.6 to approximately the same extent, but their protective effect is believed to be due to the non-specific removal of .OH. It is concluded that .OH is the species responsible for the lipid oxidation caused by decomposing peroxychromate. This is consistent with the observed bleaching of N,N-dimethyl-p-nitrosoaniline and the formation of a characteristic paramagnetic .OH adduct of the spin trap, 5,5-dimethylpyrroline-1-oxide.

  4. Integrating microbial physiology and enzyme traits in the quality model

    NASA Astrophysics Data System (ADS)

    Sainte-Marie, Julien; Barrandon, Matthieu; Martin, Francis; Saint-André, Laurent; Derrien, Delphine

    2017-04-01

    Microbe activity plays an undisputable role in soil carbon storage and there have been many calls to integrate microbial ecology in soil carbon (C) models. With regard to this challenge, a few trait-based microbial models of C dynamics have emerged during the past decade. They parameterize specific traits related to decomposer physiology (substrate use efficiency, growth and mortality rates...) and enzyme properties (enzyme production rate, catalytic properties of enzymes…). But these models are built on the premise that organic matter (OM) can be represented as one single entity or divided into a few pools, while organic matter exists as a continuum of many different compounds spanning from intact plant molecules to highly oxidised microbial metabolites. In addition, a given molecule may also exist in different forms, depending on its stage of polymerization or on its interactions with other organic compounds or mineral phases of the soil. Here we develop a general theoretical model relating the evolution of soil organic matter, as a continuum of progressively decomposing compounds, to decomposer activity and enzyme traits. The model is based on the notion of quality developed by Agren and Bosatta (1998), which is a measure of molecule accessibility to degradation. The model integrates three major processes: OM depolymerisation by enzyme action, OM assimilation and OM biotransformation. For any enzyme, the model reports the quality range where this enzyme selectively operates and how the initial quality distribution of the OM subset evolves into another distribution of qualities under the enzyme action. The model also defines the quality range where the OM can be taken up and assimilated by microbes. It finally describes how the quality of the assimilated molecules is transformed into another quality distribution, corresponding to the decomposer metabolite signature. Upon decomposer death, these metabolites return to the substrate. We explore here how microbial physiology and enzyme traits can be incorporated in a model based on a continuous representation of the organic matter and evaluate how it can improve our ability to predict soil C cycling. To do so, we analyse the properties of the model by implementing different scenarios and testing the sensitivity of its parameters. Agren, G. I., & Bosatta, E. (1998). Theoretical ecosystem ecology: understanding element cycles. Cambridge University Press.

  5. Robust distributed model predictive control of linear systems with structured time-varying uncertainties

    NASA Astrophysics Data System (ADS)

    Zhang, Langwen; Xie, Wei; Wang, Jingcheng

    2017-11-01

    In this work, synthesis of robust distributed model predictive control (MPC) is presented for a class of linear systems subject to structured time-varying uncertainties. By decomposing a global system into smaller dimensional subsystems, a set of distributed MPC controllers, instead of a centralised controller, are designed. To ensure the robust stability of the closed-loop system with respect to model uncertainties, distributed state feedback laws are obtained by solving a min-max optimisation problem. The design of robust distributed MPC is then transformed into solving a minimisation optimisation problem with linear matrix inequality constraints. An iterative online algorithm with adjustable maximum iteration is proposed to coordinate the distributed controllers to achieve a global performance. The simulation results show the effectiveness of the proposed robust distributed MPC algorithm.

  6. A look at scalable dense linear algebra libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, J.J.; Van de Geijn, R.A.; Walker, D.W.

    1992-01-01

    We discuss the essential design features of a library of scalable software for performing dense linear algebra computations on distributed memory concurrent computers. The square block scattered decomposition is proposed as a flexible and general-purpose way of decomposing most, if not all, dense matrix problems. An object-oriented interface to the library permits more portable applications to be written, and is easy to learn and use, since details of the parallel implementation are hidden from the user. Experiments on the Intel Touchstone Delta system with a prototype code that uses the square block scattered decomposition to perform LU factorization are presented and analyzed. It was found that the code was both scalable and efficient, performing at about 14 GFLOPS (double precision) for the largest problem considered.
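
    A minimal sketch of the square block scattered (2D block-cyclic) mapping may help; the block size and process grid below are assumed purely for illustration.

      import numpy as np

      def owner(i, j, nb=2, P=2, Q=3):
          """Process-grid cell owning global matrix entry (i, j):
          block (I, J) maps to (I mod P, J mod Q)."""
          return (i // nb) % P, (j // nb) % Q

      n = 12
      layout = np.array([[owner(i, j) for j in range(n)] for i in range(n)])
      # Each process holds many blocks scattered over the matrix, which keeps
      # the load balanced as the active submatrix shrinks during LU.
      print(layout[:4, :6, 0])  # row-owner component in the top-left corner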

  8. A Global Approach to the Optimal Trajectory Based on an Improved Ant Colony Algorithm for Cold Spray

    NASA Astrophysics Data System (ADS)

    Cai, Zhenhua; Chen, Tingyang; Zeng, Chunnian; Guo, Xueping; Lian, Huijuan; Zheng, You; Wei, Xiaoxu

    2016-12-01

    This paper is concerned with finding a global approach to obtain the shortest complete coverage trajectory on complex surfaces for cold spray applications. A slicing algorithm is employed to decompose the free-form complex surface into several small pieces of simple topological type. The problem of finding the optimal arrangement of the pieces is translated into a generalized traveling salesman problem (GTSP). Owing to its high searching capability and convergence performance, an improved ant colony algorithm is then used to solve the GTSP. Through off-line simulation, a robot trajectory is generated based on the optimized result. The approach is applied to coat real components with a complex surface by using the cold spray system with copper as the spraying material.
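
    For readers unfamiliar with ant colony optimisation, the sketch below is a compact ACO for a plain TSP; it is illustrative only, since the paper applies an improved ACO to a GTSP built from the decomposed surface pieces, and all parameter values here are assumptions.

      import random

      def tour_length(tour, d):
          return sum(d[tour[i]][tour[(i + 1) % len(tour)]]
                     for i in range(len(tour)))

      def aco_tsp(d, n_ants=20, n_iter=100, rho=0.5, alpha=1.0, beta=2.0):
          n = len(d)
          tau = [[1.0] * n for _ in range(n)]      # pheromone trails
          best, best_len = None, float("inf")
          for _ in range(n_iter):
              for _ in range(n_ants):
                  tour = [random.randrange(n)]
                  while len(tour) < n:
                      i = tour[-1]
                      cand = [j for j in range(n) if j not in tour]
                      # probabilistic choice weighted by pheromone and 1/distance
                      w = [tau[i][j] ** alpha * (1.0 / d[i][j]) ** beta
                           for j in cand]
                      tour.append(random.choices(cand, weights=w)[0])
                  L = tour_length(tour, d)
                  if L < best_len:
                      best, best_len = tour, L
              # evaporate, then deposit pheromone along the best tour so far
              tau = [[(1 - rho) * x for x in row] for row in tau]
              for i in range(n):
                  a, b = best[i], best[(i + 1) % n]
                  tau[a][b] += 1.0 / best_len
                  tau[b][a] += 1.0 / best_len
          return best, best_len

      d = [[0, 2, 9, 5], [2, 0, 4, 7], [9, 4, 0, 3], [5, 7, 3, 0]]
      print(aco_tsp(d, n_iter=50))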

  9. Let's Break it Down: A Study of Organic Decomposition Rates in Clay Soil

    NASA Astrophysics Data System (ADS)

    Weiss, E.

    2016-12-01

    In this experiment I will be testing if temperature affects the organic decomposition rates in clay soil. I will need to be able to clean and weigh each filter paper without disrupting my data by damaging or brushing off additional paper material. From there I need to be able to analyze and interpret my data to factor in anything else that may affect the decomposition rates in the soil. Soil decomposers include bacteria and fungi. They obtain energy from plant and animal detritus through aerobic decomposition, which is similar to how humans break down sugar. The formula is: C6H12O6 + 6O2 → 6CO2 + 6H2O + energy. Besides oxygen and sugar the organisms need nutrients such as water and sustainable temperatures. Decomposition is important to us because it helps regulate soil structure, moisture, and temperature, and provides nutrients to soil organisms. This matters on a global scale since decomposers release a large amount of carbon when breaking down matter, which contributes to greenhouse gasses such as carbon dioxide and methane. These greenhouse gasses affect the earth's climate. People who care about decomposition are farmers and those in agriculture, as well as environmental scientists. Even national parks might care because decomposition may affect park safety, how the park looks, and the amount of plants and wildlife. Things that can affect decomposition are the decomposers in the soil, temperature, and water or moisture. My secondary research also showed that pH and chemical composition of the soil affect the rate of decomposition. Cold or freezing temperatures can help preserve organic material in soil because they freeze the soil and moisture, making it too dense for the organic decomposers to break down the organic matter. Soil also can be preserved by drying out and being stored at 4 °C (or 39 °F) for 28 days. However, soil can degrade slowly in these conditions because it is not frozen and can be oxidized.

  10. Enhanced summer warming reduces fungal decomposer diversity and litter mass loss more strongly in dry than in wet tundra.

    PubMed

    Christiansen, Casper T; Haugwitz, Merian S; Priemé, Anders; Nielsen, Cecilie S; Elberling, Bo; Michelsen, Anders; Grogan, Paul; Blok, Daan

    2017-01-01

    Many Arctic regions are currently experiencing substantial summer and winter climate changes. Litter decomposition is a fundamental component of ecosystem carbon and nutrient cycles, with fungi being among the primary decomposers. To assess the impacts of seasonal climatic changes on litter fungal communities and their functioning, Betula glandulosa leaf litter was surface-incubated in two adjacent low Arctic sites with contrasting soil moisture regimes: dry shrub heath and wet sedge tundra at Disko Island, Greenland. At both sites, we investigated the impacts of factorial combinations of enhanced summer warming (using open-top chambers; OTCs) and deepened snow (using snow fences) on surface litter mass loss, chemistry and fungal decomposer communities after approximately 1 year. Enhanced summer warming significantly restricted litter mass loss by 32% in the dry and 17% in the wet site. Litter moisture content was significantly reduced by summer warming in the dry, but not in the wet site. Likewise, fungal total abundance and diversity were reduced by OTC warming at the dry site, while comparatively modest warming effects were observed in the wet site. These results suggest that increased evapotranspiration in the OTC plots lowered litter moisture content to the point where fungal decomposition activities became inhibited. In contrast, snow addition enhanced fungal abundance in both sites but did not significantly affect litter mass loss rates. Across sites, control plots only shared 15% of their fungal phylotypes, suggesting strong local controls on fungal decomposer community composition. Nevertheless, fungal community functioning (litter decomposition) was negatively affected by warming in both sites. We conclude that although buried soil organic matter decomposition is widely expected to increase with future summer warming, surface litter decay and nutrient turnover rates in both xeric and relatively moist tundra are likely to be significantly restricted by the evaporative drying associated with warmer air temperatures. © 2016 John Wiley & Sons Ltd.

  11. Evaluating clustering methods within the Artificial Ecosystem Algorithm and their application to bike redistribution in London.

    PubMed

    Adham, Manal T; Bentley, Peter J

    2016-08-01

    This paper proposes and evaluates a solution to the truck redistribution problem prominent in London's Santander Cycle scheme. Due to the complexity of this NP-hard combinatorial optimisation problem, no efficient optimisation techniques are known to solve the problem exactly. This motivates our use of the heuristic Artificial Ecosystem Algorithm (AEA) to find good solutions in a reasonable amount of time. The AEA is designed to take advantage of highly distributed computer architectures and adapt to changing problems. In the AEA a problem is first decomposed into its relative sub-components; they then evolve solution building blocks that fit together to form a single optimal solution. Three variants of the AEA centred on evaluating clustering methods are presented: the baseline AEA, the community-based AEA which groups stations according to journey flows, and the Adaptive AEA which actively modifies clusters to cater for changes in demand. We applied these AEA variants to the redistribution problem prominent in bike share schemes (BSS). The AEA variants are empirically evaluated using historical data from Santander Cycles to validate the proposed approach and prove its potential effectiveness. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  12. Problem decomposition by mutual information and force-based clustering

    NASA Astrophysics Data System (ADS)

    Otero, Richard Edward

    The scale of engineering problems has sharply increased over the last twenty years. Larger coupled systems, increasing complexity, and limited resources create a need for methods that automatically decompose problems into manageable sub-problems by discovering and leveraging problem structure. The ability to learn the coupling (inter-dependence) structure and reorganize the original problem could lead to large reductions in the time to analyze complex problems. Such decomposition methods could also provide engineering insight on the fundamental physics driving problem solution. This work advances the current state of the art in engineering decomposition through the application of techniques originally developed within computer science and information theory. The work describes the current state of automatic problem decomposition in engineering and utilizes several promising ideas to advance the state of the practice. Mutual information is a novel metric for data dependence and works on both continuous and discrete data. Mutual information can measure both the linear and non-linear dependence between variables without the limitations of linear dependence measured through covariance. Mutual information is also able to handle data that does not have derivative information, unlike other metrics that require it. The value of mutual information to engineering design work is demonstrated on a planetary entry problem. This study utilizes a novel tool developed in this work for planetary entry system synthesis. A graphical method, force-based clustering, is used to discover related sub-graph structure as a function of problem structure and links ranked by their mutual information. This method does not require the stochastic use of neural networks and could be used with any link ranking method currently utilized in the field. Application of this method is demonstrated on a large, coupled low-thrust trajectory problem. Mutual information also serves as the basis for an alternative global optimizer, called MIMIC, which is unrelated to Genetic Algorithms. Advancing the current practice, this work demonstrates the use of MIMIC as a global method that explicitly models problem structure with mutual information, providing an alternate method for globally searching multi-modal domains. By leveraging discovered problem inter-dependencies, MIMIC may be appropriate for highly coupled problems or those with large function evaluation cost. This work introduces a useful addition to the MIMIC algorithm that enables its use on continuous input variables. By leveraging automatic decision tree generation methods from Machine Learning and a set of randomly generated test problems, decision trees for which method to apply are also created, quantifying decomposition performance over a large region of the design space.
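
    The coupling-discovery step can be illustrated with a generic mutual information estimator; the sketch below uses scikit-learn's estimator rather than the thesis's own implementation, and the toy variables are invented to show MI catching a non-linear dependence that covariance misses.

      import numpy as np
      from sklearn.feature_selection import mutual_info_regression

      rng = np.random.default_rng(0)
      x1 = rng.normal(size=500)
      x2 = x1 ** 2 + 0.1 * rng.normal(size=500)   # non-linear coupling
      x3 = rng.normal(size=500)                    # independent variable

      X = np.column_stack([x1, x3])
      mi = mutual_info_regression(X, x2, random_state=0)
      print(mi)                        # MI(x1; x2) is large, MI(x3; x2) ~ 0
      print(np.corrcoef(x1, x2)[0, 1]) # yet the linear correlation is near zero

    Pairwise MI values like these would form the link weights that the force-based clustering then groups into sub-problems.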

  13. The Chemical Decomposition of 5-aza-2′-deoxycytidine (Decitabine): Kinetic Analyses and Identification of Products by NMR, HPLC, and Mass Spectrometry

    PubMed Central

    Rogstad, Daniel K.; Herring, Jason L.; Theruvathu, Jacob A.; Burdzy, Artur; Perry, Christopher C.; Neidigh, Jonathan W.; Sowers, Lawrence C.

    2014-01-01

    The nucleoside analog 5-aza-2′-deoxycytidine (Decitabine, DAC) is one of several drugs in clinical use that inhibit DNA methyltransferases, leading to a decrease of 5-methylcytosine in newly replicated DNA and subsequent transcriptional activation of genes silenced by cytosine methylation. In addition to methyltransferase inhibition, DAC has demonstrated toxicity and potential mutagenicity, and can induce a DNA-repair response. The mechanisms accounting for these events are not well understood. DAC is chemically unstable in aqueous solutions, but there is little consensus between previous reports as to its half-life and corresponding products of decomposition at physiological temperature and pH, potentially confounding studies on its mechanism of action and long-term use in humans. Here we have employed a battery of analytical methods to estimate kinetic rates and to characterize DAC decomposition products under conditions of physiological temperature and pH. Our results indicate that DAC decomposes into a plethora of products, formed by hydrolytic opening and deformylation of the triazine ring, in addition to anomerization and possibly other changes in the sugar ring structure. We also discuss the advantages and problems associated with each analytical method used. The results reported here will facilitate ongoing studies and clinical trials aimed at understanding the mechanisms of action, toxicity, and possible mutagenicity of DAC and related analogs. PMID:19480391

  14. FormTracer. A mathematica tracing package using FORM

    NASA Astrophysics Data System (ADS)

    Cyrol, Anton K.; Mitter, Mario; Strodthoff, Nils

    2017-10-01

    We present FormTracer, a high-performance, general-purpose, easy-to-use Mathematica tracing package which uses FORM. It supports arbitrary space and spinor dimensions as well as an arbitrary number of simple compact Lie groups. While keeping the usability of the Mathematica interface, it relies on the efficiency of FORM. An additional performance gain is achieved by a decomposition algorithm that avoids redundant traces in the product tensor spaces. FormTracer supports a wide range of syntaxes, which endows it with high flexibility. Mathematica notebooks that automatically install the package and guide the user through performing standard traces in space-time, spinor and gauge-group spaces are provided. Program Files doi:http://dx.doi.org/10.17632/7rd29h4p3m.1 Licensing provisions: GPLv3 Programming language: Mathematica and FORM Nature of problem: Efficiently compute traces of large expressions Solution method: The expression to be traced is decomposed into its subspaces by a recursive Mathematica expansion algorithm. The result is subsequently translated to a FORM script that takes the traces. After FORM is executed, the final result is either imported into Mathematica or exported as optimized C/C++/Fortran code. Unusual features: The outstanding features of FormTracer are the simple interface, the capability to efficiently handle an arbitrary number of Lie groups in addition to Dirac and Lorentz tensors, and a customizable input syntax.

  15. Segmentation of Large Unstructured Point Clouds Using Octree-Based Region Growing and Conditional Random Fields

    NASA Astrophysics Data System (ADS)

    Bassier, M.; Bonduel, M.; Van Genechten, B.; Vergauwen, M.

    2017-11-01

    Point cloud segmentation is a crucial step in scene understanding and interpretation. The goal is to decompose the initial data into sets of workable clusters with similar properties. Additionally, it is a key aspect in the automated procedure from point cloud data to BIM. Current approaches typically only segment a single type of primitive such as planes or cylinders. Also, current algorithms suffer from oversegmenting the data and are often sensor or scene dependent. In this work, a method is presented to automatically segment large unstructured point clouds of buildings. More specifically, the segmentation is formulated as a graph optimisation problem. First, the data is oversegmented with a greedy octree-based region growing method. The growing is conditioned on the segmentation of planes as well as smooth surfaces. Next, the candidate clusters are represented by a Conditional Random Field after which the most likely configuration of candidate clusters is computed given a set of local and contextual features. The experiments prove that the used method is a fast and reliable framework for unstructured point cloud segmentation. Processing speeds up to 40,000 points per second are recorded for the region growing. Additionally, the recall and precision of the graph clustering is approximately 80%. Overall, nearly 22% of oversegmentation is reduced by clustering the data. These clusters will be classified and used as a basis for the reconstruction of BIM models.

  16. Chemical decomposition of 5-aza-2'-deoxycytidine (Decitabine): kinetic analyses and identification of products by NMR, HPLC, and mass spectrometry.

    PubMed

    Rogstad, Daniel K; Herring, Jason L; Theruvathu, Jacob A; Burdzy, Artur; Perry, Christopher C; Neidigh, Jonathan W; Sowers, Lawrence C

    2009-06-01

    The nucleoside analogue 5-aza-2'-deoxycytidine (Decitabine, DAC) is one of several drugs in clinical use that inhibit DNA methyltransferases, leading to a decrease of 5-methylcytosine in newly replicated DNA and subsequent transcriptional activation of genes silenced by cytosine methylation. In addition to methyltransferase inhibition, DAC has demonstrated toxicity and potential mutagenicity, and can induce a DNA-repair response. The mechanisms accounting for these events are not well understood. DAC is chemically unstable in aqueous solutions, but there is little consensus between previous reports as to its half-life and corresponding products of decomposition at physiological temperature and pH, potentially confounding studies on its mechanism of action and long-term use in humans. Here, we have employed a battery of analytical methods to estimate kinetic rates and to characterize DAC decomposition products under conditions of physiological temperature and pH. Our results indicate that DAC decomposes into a plethora of products, formed by hydrolytic opening and deformylation of the triazine ring, in addition to anomerization and possibly other changes in the sugar ring structure. We also discuss the advantages and problems associated with each analytical method used. The results reported here will facilitate ongoing studies and clinical trials aimed at understanding the mechanisms of action, toxicity, and possible mutagenicity of DAC and related analogues.

  17. Non-negative infrared patch-image model: Robust target-background separation via partial sum minimization of singular values

    NASA Astrophysics Data System (ADS)

    Dai, Yimian; Wu, Yiquan; Song, Yu; Guo, Jun

    2017-03-01

    To further enhance the small targets and suppress the heavy clutters simultaneously, a robust non-negative infrared patch-image model via partial sum minimization of singular values is proposed. First, the intrinsic reason behind the undesirable performance of the state-of-the-art infrared patch-image (IPI) model when facing extremely complex backgrounds is analyzed. We point out that it lies in the mismatch between the IPI model's implicit assumption of a large number of observations and the reality of deficient observations of strong edges. To fix this problem, instead of the nuclear norm, we adopt the partial sum of singular values to constrain the low-rank background patch-image, which could provide a more accurate background estimation and almost eliminate all the salient residuals in the decomposed target image. In addition, considering the fact that the infrared small target is always brighter than its adjacent background, we propose an additional non-negative constraint on the sparse target patch-image, which not only further removes undesirable components but also accelerates the convergence rate. Finally, an algorithm based on the inexact augmented Lagrange multiplier method is developed to solve the proposed model. A large number of experiments are conducted, demonstrating that the proposed model has a significant improvement over the other nine competitive methods in terms of both clutter suppressing performance and convergence rate.
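
    A minimal numpy sketch of the core idea follows, assuming a simple alternating scheme rather than the authors' inexact augmented Lagrange multiplier solver: the proximal step for the partial sum of singular values leaves the r largest singular values untouched and soft-thresholds only the tail, so strong background structure survives while the sparse residual is peeled off.

      import numpy as np

      def pssv_prox(M, r, tau):
          """Partial-sum-of-singular-values proximal step: shrink only
          the singular values beyond the first r."""
          U, s, Vt = np.linalg.svd(M, full_matrices=False)
          s[r:] = np.maximum(s[r:] - tau, 0.0)
          return (U * s) @ Vt

      def soft(M, tau):
          return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

      def decompose(D, r=1, lam=0.1, n_iter=50):
          """Split patch-image D into low-rank background L and sparse
          target S (illustrative alternating minimization)."""
          L = np.zeros_like(D)
          S = np.zeros_like(D)
          for _ in range(n_iter):
              L = pssv_prox(D - S, r, 1.0)   # background estimate
              S = soft(D - L, lam)           # sparse target estimate
          return L, S

    The non-negative constraint of the paper would additionally clip S at zero after the soft-thresholding step, reflecting that targets are brighter than their local background.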

  18. Detection of concealed mercury with thermal neutrons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bell, Z.W.

    1994-08-18

    In the United States today, governments at all levels and the citizenry are paying increasing attention to the effects, both real and hypothetical, of industrial activity on the environment. Responsible modern industries, reflecting this heightened public and regulatory awareness, are either substituting benign materials for hazardous ones, or using hazardous materials only under carefully controlled conditions. In addition, present-day environmental consciousness dictates that we deal responsibly with legacy wastes. The decontamination and decommissioning (D&D) of facilities at which mercury was used or processed presents a variety of challenges. Elemental mercury is a liquid at room temperature and readily evaporates in air. In large mercury-laden buildings, droplets may evaporate from one area only to recondense in other cooler areas. The rate of evaporation is a function of humidity and temperature; consequently, different parts of a building may be sources or sinks of mercury at different times of the day or even the year. Additionally, although mercury oxidizes in air, the oxides decompose upon heating. Hence, oxides contained within pipes or equipment may be decomposed when those pipes and equipment are cut with saws or torches. Furthermore, mercury seeps through the pores and cracks in concrete blocks and pads, and collects as puddles and blobs in void spaces within and under them.

  19. Profiler - A Fast and Versatile New Program for Decomposing Galaxy Light Profiles

    NASA Astrophysics Data System (ADS)

    Ciambur, Bogdan C.

    2016-12-01

    I introduce Profiler, a user-friendly program designed to analyse the radial surface brightness profiles of galaxies. With an intuitive graphical user interface, Profiler can accurately model galaxies of a broad range of morphological types, with various parametric functions routinely employed in the field (Sérsic, core-Sérsic, exponential, Gaussian, Moffat, and Ferrers). In addition to these, Profiler can employ the broken exponential model for disc truncations or anti-truncations, and two special cases of the edge-on disc model: along the disc's major or minor axis. The convolution of (circular or elliptical) models with the point spread function is performed in 2D, and offers a choice between Gaussian, Moffat or a user-provided profile for the point spread function. Profiler is optimised to work with galaxy light profiles obtained from isophotal measurements, which allow for radial gradients in the geometric parameters of the isophotes, and are thus often better at capturing the total light than 2D image-fitting programs. Additionally, the 1D approach is generally less computationally expensive and more stable. I demonstrate Profiler's features by decomposing three case-study galaxies: the cored elliptical galaxy NGC 3348, the nucleated dwarf Seyfert I galaxy Pox 52, and NGC 2549, a double-barred galaxy with an edge-on, truncated disc.

  20. A bio-anodic filter facilitated entrapment, decomposition and in situ oxidation of algal biomass in wastewater effluent.

    PubMed

    Mohammadi Khalfbadam, Hassan; Cheng, Ka Yu; Sarukkalige, Ranjan; Kaksonen, Anna H; Kayaalp, Ahmet S; Ginige, Maneesha P

    2016-09-01

    This study examined for the first time the use of bioelectrochemical systems (BES) to entrap, decompose and oxidise fresh algal biomass from an algae-laden effluent. The experimental process consisted of a photobioreactor for a continuous production of the algal-laden effluent, and a two-chamber BES equipped with anodic graphite granules and carbon-felt to physically remove and oxidise algal biomass from the influent. Results showed that the BES filter could retain ca. 90% of the suspended solids (SS) loaded. A coulombic efficiency (CE) of 36.6% (based on particulate chemical oxygen demand (PCOD) removed) was achieved, which was consistent with the highest CEs of BES studies (operated in microbial fuel cell (MFC) mode) that included additional pre-treatment steps for algae hydrolysis. Overall, this study suggests that a filter type BES anode can effectively entrap, decompose and in situ oxidise algae without the need for a separate pre-treatment step. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. A phylogenetic transform enhances analysis of compositional microbiota data

    PubMed Central

    Silverman, Justin D; Washburne, Alex D; Mukherjee, Sayan; David, Lawrence A

    2017-01-01

    Surveys of microbial communities (microbiota), typically measured as relative abundance of species, have illustrated the importance of these communities in human health and disease. Yet, statistical artifacts commonly plague the analysis of relative abundance data. Here, we introduce the PhILR transform, which incorporates microbial evolutionary models with the isometric log-ratio transform to allow off-the-shelf statistical tools to be safely applied to microbiota surveys. We demonstrate that analyses of community-level structure can be applied to PhILR transformed data with performance on benchmarks rivaling or surpassing standard tools. Additionally, by decomposing distance in the PhILR transformed space, we identified neighboring clades that may have adapted to distinct human body sites. Decomposing variance revealed that covariation of bacterial clades within human body sites increases with phylogenetic relatedness. Together, these findings illustrate how the PhILR transform combines statistical and phylogenetic models to overcome compositional data challenges and enable evolutionary insights relevant to microbial communities. DOI: http://dx.doi.org/10.7554/eLife.21887.001 PMID:28198697
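
    The building block behind the PhILR transform is the isometric log-ratio "balance" evaluated at each node of the phylogeny. A minimal sketch, assuming strictly positive relative abundances and hypothetical index sets (this is the standard ILR balance formula, not the authors' implementation):

        import numpy as np

        def ilr_balance(x, left, right):
            """One ILR balance contrasting two groups of parts.

            x     : 1-D array of relative abundances (strictly positive)
            left  : indices of taxa descending one side of a node
            right : indices of taxa descending the other side
            """
            r, s = len(left), len(right)
            gm_left = np.exp(np.mean(np.log(x[left])))    # geometric means
            gm_right = np.exp(np.mean(np.log(x[right])))
            return np.sqrt(r * s / (r + s)) * np.log(gm_left / gm_right)

        # Toy community of four taxa; contrast taxa {0, 1} against {2, 3}
        abundances = np.array([0.4, 0.3, 0.2, 0.1])
        print(ilr_balance(abundances, [0, 1], [2, 3]))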

  2. Simultaneous acquisition of differing image types

    DOEpatents

    Demos, Stavros G

    2012-10-09

    A system in one embodiment includes an image forming device for forming an image from an area of interest containing different image components; an illumination device for illuminating the area of interest with light containing multiple components; at least one light source coupled to the illumination device, the at least one light source providing light to the illumination device containing different components, each component having distinct spectral characteristics and relative intensity; an image analyzer coupled to the image forming device, the image analyzer decomposing the image formed by the image forming device into multiple component parts based on type of imaging; and multiple image capture devices, each image capture device receiving one of the component parts of the image. A method in one embodiment includes receiving an image from an image forming device; decomposing the image formed by the image forming device into multiple component parts based on type of imaging; receiving the component parts of the image; and outputting image information based on the component parts of the image. Additional systems and methods are presented.

  3. Stabilization of source-separated human urine by chemical oxidation.

    PubMed

    Zhang, Yang; Li, Zifu; Zhao, Yuan; Chen, Shuangling; Mahmood, Ibrahim Babatunde

    2013-01-01

    The inhibitory effects of ozone and hydrogen peroxide (HP) on urea hydrolysis in stored urine were investigated and compared. Ozone showed less effect on urea hydrolysis due to the complicated composition of urine (including a large amount of urease-producing bacteria) and bacterial regeneration. Ozone concentration and total heterotrophic bacteria analyses demonstrated that the residual ozone concentration decreased by 43% within 15 hr, from 13.50 to 7.72 mg/L, in the one-time ozonation urine test, and was completely decomposed within 4 days. In addition, bacteria regenerated quickly after the ozone had completely decomposed. However, HP showed a significant effect in inhibiting urea hydrolysis, not only in stored urine but also in fecal-contaminated urine. The suitable HP doses to inhibit urea hydrolysis in stored urine and in urine contaminated with 0.5 and 1.0 g of feces per liter were 0.03, 0.16 and 0.23 mol/L, respectively. The urea concentrations after 2 months of storage were 7,145, 7,109 and 7,234 mg/L, respectively.

  4. A Novel approach for predicting monthly water demand by combining singular spectrum analysis with neural networks

    NASA Astrophysics Data System (ADS)

    Zubaidi, Salah L.; Dooley, Jayne; Alkhaddar, Rafid M.; Abdellatif, Mawada; Al-Bugharbee, Hussein; Ortega-Martorell, Sandra

    2018-06-01

    Valid and dependable water demand prediction is a major element of the effective and sustainable expansion of municipal water infrastructures. This study provides a novel approach to quantifying water demand through the assessment of climatic factors, using a combination of a pretreatment signal technique, a hybrid particle swarm optimisation algorithm and an artificial neural network (PSO-ANN). The Singular Spectrum Analysis (SSA) technique was adopted to decompose and reconstruct water consumption in relation to six weather variables, to create a seasonal and stochastic time series. The results revealed that SSA is a powerful technique, capable of decomposing the original time series into many independent components including trend, oscillatory behaviours and noise. In addition, the PSO-ANN algorithm was shown to be a reliable prediction model, outperforming the hybrid Backtracking Search Algorithm BSA-ANN in terms of fitness function (RMSE). The findings of this study also support the view that water demand is driven by climatological variables.
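
    The core of SSA is embedding the series in a Hankel (trajectory) matrix, taking its SVD, and averaging each rank-1 term back into a series. A minimal sketch of those standard steps (the window length and toy data are hypothetical, not the study's settings):

        import numpy as np

        def ssa_components(series, window, n_components):
            """Return the leading SSA components of a 1-D series."""
            series = np.asarray(series, dtype=float)
            N = len(series)
            K = N - window + 1
            # Trajectory (Hankel) matrix: lagged copies of the series
            X = np.column_stack([series[i:i + window] for i in range(K)])
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            comps = []
            for k in range(n_components):
                Xk = s[k] * np.outer(U[:, k], Vt[k])
                # Anti-diagonal averaging maps the matrix back to a series
                comp = np.array([Xk[::-1].diagonal(i - window + 1).mean()
                                 for i in range(N)])
                comps.append(comp)
            return comps

        # Toy monthly series: trend plus a 12-month seasonal cycle
        t = np.arange(120)
        demand = 10 + 0.05 * t + np.sin(2 * np.pi * t / 12)
        trend, cycle = ssa_components(demand, window=24, n_components=2)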

  5. Acute toxicity of live and decomposing green alga Ulva ( Enteromorpha) prolifera to abalone Haliotis discus hannai

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Yu, Rencheng; Zhou, Mingjiang

    2011-05-01

    From 2007 to 2009, large-scale blooms of green algae (the so-called "green tides") occurred every summer in the Yellow Sea, China. In June 2008, huge amounts of floating green algae accumulated along the coast of Qingdao and led to mass mortality of cultured abalone and sea cucumber. However, the mechanism for the mass mortality of cultured animals remains undetermined. This study examined the toxic effects of Ulva ( Enteromorpha) prolifera, the causative species of green tides in the Yellow Sea during the last three years. The acute toxicity of fresh culture medium and decomposing algal effluent of U. prolifera to the cultured abalone Haliotis discus hannai were tested. It was found that both fresh culture medium and decomposing algal effluent had toxic effects to abalone, and decomposing algal effluent was more toxic than fresh culture medium. The acute toxicity of decomposing algal effluent could be attributed to the ammonia and sulfide presented in the effluent, as well as the hypoxia caused by the decomposition process.

  6. Plant–herbivore–decomposer stoichiometric mismatches and nutrient cycling in ecosystems

    PubMed Central

    Cherif, Mehdi; Loreau, Michel

    2013-01-01

    Plant stoichiometry is thought to have a major influence on how herbivores affect nutrient availability in ecosystems. Most conceptual models predict that plants with high nutrient contents increase nutrient excretion by herbivores, in turn raising nutrient availability. To test this hypothesis, we built a stoichiometrically explicit model that includes a simple but thorough description of the processes of herbivory and decomposition. Our results challenge traditional views of herbivore impacts on nutrient availability in many ways. They show that the relationship between plant nutrient content and the impact of herbivores predicted by conceptual models holds only at high plant nutrient contents. At low plant nutrient contents, the impact of herbivores is mediated by the mineralization/immobilization of nutrients by decomposers and by the type of resource limiting the growth of decomposers. Both parameters are functions of the mismatch between plant and decomposer stoichiometries. Our work provides new predictions about the impacts of herbivores on ecosystem fertility that depend on critical interactions between plant, herbivore and decomposer stoichiometries in ecosystems. PMID:23303537

  7. Gas Sensitivity and Sensing Mechanism Studies on Au-Doped TiO2 Nanotube Arrays for Detecting SF6 Decomposed Components

    PubMed Central

    Zhang, Xiaoxing; Yu, Lei; Tie, Jing; Dong, Xingchen

    2014-01-01

    The analysis of SF6 decomposition gases is an efficient diagnostic approach for detecting partial discharge in gas-insulated switchgear (GIS) and thereby assessing the operating state of power equipment. This paper applied an Au-doped TiO2 nanotube array sensor (Au-TiO2 NTAs) to detect SF6 decomposition components. The electrochemical constant-potential method was adopted in the Au-TiO2 NTAs' fabrication, and a series of experiments was conducted on the characteristic SF6 decomposition gases for a thorough investigation of the sensing performance. The sensing characteristic curves of intrinsic and Au-doped TiO2 NTAs were compared to study the mechanism of the gas sensing response. The results indicated that the doped Au could change the gas sensing selectivity of the TiO2 nanotube arrays towards SF6 decomposition components, as well as reduce the working temperature of the TiO2 NTAs. PMID:25330053

  8. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  9. Coordinated platooning with multiple speeds

    DOE PAGES

    Luo, Fengqiao; Larson, Jeffrey; Munson, Todd

    2018-03-22

    In a platoon, vehicles travel one after another with small intervehicle distances; trailing vehicles in a platoon save fuel because they experience less aerodynamic drag. This work presents a coordinated platooning model with multiple speed options that integrates scheduling, routing, speed selection, and platoon formation/dissolution in a mixed-integer linear program that minimizes the total fuel consumed by a set of vehicles while traveling between their respective origins and destinations. The performance of this model is numerically tested on a grid network and the Chicago-area highway network. We find that the fuel-savings factor of a multivehicle system significantly depends on the time each vehicle is allowed to stay in the network; this time affects vehicles’ available speed choices, possible routes, and the amount of time for coordinating platoon formation. For problem instances with a large number of vehicles, we propose and test a heuristic decomposed approach that applies a clustering algorithm to partition the set of vehicles and then routes each group separately. When the set of vehicles is large and the available computational time is small, the decomposed approach finds significantly better solutions than does the full model.
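
    The first step of the decomposed heuristic is the clustering of vehicles; below is a minimal sketch of that partitioning step using k-means on origin/destination coordinates (the trip data and cluster count are hypothetical, and the per-group routing MILP is not shown):

        import numpy as np
        from sklearn.cluster import KMeans

        # Each row is one vehicle's (origin_x, origin_y, dest_x, dest_y);
        # clustering this geometry groups vehicles that are likely to be
        # able to share route segments and form platoons.
        trips = np.array([[0, 0, 9, 8], [1, 0, 9, 9], [0, 1, 8, 9],
                          [5, 5, 0, 1], [6, 5, 0, 0], [5, 6, 1, 0]])

        groups = KMeans(n_clusters=2, n_init=10).fit_predict(trips)
        for g in np.unique(groups):
            # Each group would then be routed separately by the full model
            print(f"group {g}: vehicles {np.where(groups == g)[0]}")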

  11. Vertebrate Decomposition Is Accelerated by Soil Microbes

    PubMed Central

    Lauber, Christian L.; Metcalf, Jessica L.; Keepers, Kyle; Ackermann, Gail; Carter, David O.

    2014-01-01

    Carrion decomposition is an ecologically important natural phenomenon influenced by a complex set of factors, including temperature, moisture, and the activity of microorganisms, invertebrates, and scavengers. The role of soil microbes as decomposers in this process is essential but not well understood and represents a knowledge gap in carrion ecology. To better define the role and sources of microbes in carrion decomposition, lab-reared mice were decomposed on either (i) soil with an intact microbial community or (ii) soil that was sterilized. We characterized the microbial community (16S rRNA gene for bacteria and archaea, and the 18S rRNA gene for fungi and microbial eukaryotes) for three body sites along with the underlying soil (i.e., gravesoils) at time intervals coinciding with visible changes in carrion morphology. Our results indicate that mice placed on soil with intact microbial communities reach advanced stages of decomposition 2 to 3 times faster than those placed on sterile soil. Microbial communities associated with skin and gravesoils of carrion in stages of active and advanced decay were significantly different between soil types (sterile versus untreated), suggesting that substrates on which carrion decompose may partially determine the microbial decomposer community. However, the source of the decomposer community (soil- versus carcass-associated microbes) was not clear in our data set, suggesting that greater sequencing depth needs to be employed to identify the origin of the decomposer communities in carrion decomposition. Overall, our data show that soil microbial communities have a significant impact on the rate at which carrion decomposes and have important implications for understanding carrion ecology. PMID:24907317

  12. Into the decomposed body-forensic digital autopsy using multislice-computed tomography.

    PubMed

    Thali, M J; Yen, K; Schweitzer, W; Vock, P; Ozdoba, C; Dirnhofer, R

    2003-07-08

    It is impossible to obtain representative anatomical documentation of an entire body using classical X-ray methods, because they collapse three-dimensional bodies onto a two-dimensional plane. We used the novel multislice-computed tomography (MSCT) technique to evaluate a case of homicide, with putrefaction of the corpse, before performing a classical forensic autopsy. This non-invasive method showed in detail the gaseous distension of the decomposing organs and tissues, as well as a complex fracture of the calvarium. MSCT also proved useful in screening for foreign matter in decomposing bodies, and full-body scanning took only a few minutes. In conclusion, we believe postmortem MSCT imaging is an excellent visualisation tool with great potential for the forensic documentation and evaluation of decomposed bodies.

  13. Convex reformulation of biologically-based multi-criteria intensity-modulated radiation therapy optimization including fractionation effects

    NASA Astrophysics Data System (ADS)

    Hoffmann, Aswin L.; den Hertog, Dick; Siem, Alex Y. D.; Kaanders, Johannes H. A. M.; Huizenga, Henk

    2008-11-01

    Finding fluence maps for intensity-modulated radiation therapy (IMRT) can be formulated as a multi-criteria optimization problem for which Pareto optimal treatment plans exist. To account for the dose-per-fraction effect of fractionated IMRT, it is desirable to exploit radiobiological treatment plan evaluation criteria based on the linear-quadratic (LQ) cell survival model as a means to balance the radiation benefits and risks in terms of biologic response. Unfortunately, the LQ-model-based radiobiological criteria are nonconvex functions, which make the optimization problem hard to solve. We apply the framework proposed by Romeijn et al (2004 Phys. Med. Biol. 49 1991-2013) to find transformations of LQ-model-based radiobiological functions and establish conditions under which transformed functions result in equivalent convex criteria that do not change the set of Pareto optimal treatment plans. The functions analysed are: the LQ-Poisson-based model for tumour control probability (TCP) with and without inter-patient heterogeneity in radiation sensitivity, the LQ-Poisson-based relative seriality s-model for normal tissue complication probability (NTCP), the equivalent uniform dose (EUD) under the LQ-Poisson model and the fractionation-corrected Probit-based model for NTCP according to Lyman, Kutcher and Burman. These functions differ from those analysed before in that they cannot be decomposed into elementary EUD or generalized-EUD functions. In addition, we show that applying increasing and concave transformations to the convexified functions is beneficial for the piecewise approximation of the Pareto efficient frontier.
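
    For reference, a standard form of the LQ-Poisson tumour control probability mentioned above is (notation assumed here, not necessarily the paper's exact parameterization):

        S(d, D) = \exp\left(-\alpha D - \beta d D\right), \qquad
        \mathrm{TCP} = \exp\left(-N_0 \, S(d, D)\right),

    where D is the total dose delivered in fractions of size d, \alpha and \beta are the LQ sensitivity parameters, and N_0 is the initial number of clonogenic cells. The composition of exponentials is what makes such criteria nonconvex in the dose variables and motivates the convexifying transformations studied in the paper.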

  14. The Data Transfer Kit: A geometric rendezvous-based tool for multiphysics data transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slattery, S. R.; Wilson, P. P. H.; Pawlowski, R. P.

    2013-07-01

    The Data Transfer Kit (DTK) is a software library designed to provide parallel data transfer services for arbitrary physics components based on the concept of geometric rendezvous. The rendezvous algorithm provides a means to geometrically correlate two geometric domains that may be arbitrarily decomposed in a parallel simulation. By repartitioning both domains such that they have the same geometric domain on each parallel process, efficient and load balanced search operations and data transfer can be performed at a desirable algorithmic time complexity with low communication overhead relative to other types of mapping algorithms. With the increased development efforts in multiphysics simulation and other multiple mesh and geometry problems, generating parallel topology maps for transferring fields and other data between geometric domains is a common operation. The algorithms used to generate parallel topology maps based on the concept of geometric rendezvous as implemented in DTK are described with an example using a conjugate heat transfer calculation and thermal coupling with a neutronics code. In addition, we provide the results of initial scaling studies performed on the Jaguar Cray XK6 system at Oak Ridge National Laboratory for a worst-case-scenario problem in terms of algorithmic complexity, which show good scaling on O(1 x 10^4) cores for topology map generation and excellent scaling on O(1 x 10^5) cores for the data transfer operation with meshes of O(1 x 10^9) elements. (authors)

  15. Spatio-temporal reconstruction of brain dynamics from EEG with a Markov prior.

    PubMed

    Hansen, Sofie Therese; Hansen, Lars Kai

    2017-03-01

    Electroencephalography (EEG) can capture brain dynamics in high temporal resolution. By projecting the scalp EEG signal back to its origin in the brain, high spatial resolution can also be achieved. Source localized EEG therefore has the potential to be a very powerful tool for understanding the functional dynamics of the brain. Solving the EEG inverse problem is, however, highly ill-posed, as there are many more potential locations of the EEG generators than EEG measurement points. Several well-known properties of brain dynamics can be exploited to alleviate this problem. More short-ranging connections exist in the brain than long-ranging ones, arguing for spatially focal sources. Additionally, recent work (Delorme et al., 2012) argues that EEG can be decomposed into components having sparse source distributions. On the temporal side, both short- and long-term stationarity of brain activation are observed. We summarize these insights in an inverse solver, the so-called "Variational Garrote" (Kappen and Gómez, 2013). Using a Markov prior we can incorporate flexible degrees of temporal stationarity. Through spatial basis functions, spatially smooth distributions are obtained; sparsity of these is inherent to the Variational Garrote solver. We name our method MarkoVG and demonstrate its ability to adapt to the temporal smoothness and spatial sparsity in simulated EEG data. Finally, a benchmark EEG dataset is used to demonstrate MarkoVG's ability to recover non-stationary brain dynamics. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Model and algorithm for container ship stowage planning based on bin-packing problem

    NASA Astrophysics Data System (ADS)

    Zhang, Wei-Ying; Lin, Yan; Ji, Zhuo-Shang

    2005-09-01

    In a general case, a container ship serves many different ports on each voyage. A stowage plan made at one port must take account of its influence on subsequent ports, so the multi-port nature of the problem increases the complexity of stowage planning. This problem is NP-hard. In order to reduce the computational complexity, the problem is decomposed into two sub-problems in this paper. First, the container ship stowage problem (CSSP) is regarded as a “packing problem”: ship-bays on board the vessel are regarded as bins, the number of slots in each bay is taken as the bin capacity, and groups of containers with the same characteristics (homogeneous container groups) are treated as the items to be packed. At this stage there are two objective functions: one is to minimize the number of bays occupied by containers, and the other is to minimize the number of overstows. Secondly, the containers assigned to each bay in the first stage are allocated to specific slots; the objective functions are to minimize the metacentric height, heel and overstows. A tabu search heuristic is used to solve the sub-problem. The main focus of this paper is on the first sub-problem. A case study demonstrates the feasibility of the model and algorithm.
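
    To make the first-stage "packing" view concrete, the sketch below assigns homogeneous container groups to bays with a first-fit-decreasing heuristic; this is a generic stand-in for illustration, not the paper's tabu search procedure:

        def pack_groups_into_bays(group_sizes, bay_capacity, n_bays):
            """First-fit-decreasing assignment of container groups to bays.

            Returns the per-bay group lists, or None if the groups do
            not fit within the available slot capacities.
            """
            bays = [[] for _ in range(n_bays)]
            free = [bay_capacity] * n_bays
            for size in sorted(group_sizes, reverse=True):
                for b in range(n_bays):
                    if free[b] >= size:
                        bays[b].append(size)
                        free[b] -= size
                        break
                else:
                    return None  # no bay can take this group
            return bays

        # Five homogeneous groups packed into two 20-slot bays
        print(pack_groups_into_bays([12, 8, 7, 5, 4], bay_capacity=20, n_bays=2))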

  17. Centralized Multi-Sensor Square Root Cubature Joint Probabilistic Data Association

    PubMed Central

    Liu, Jun; Li, Gang; Qi, Lin; Li, Yaowen; He, You

    2017-01-01

    This paper focuses on the tracking problem of multiple targets with multiple sensors in a nonlinear cluttered environment. To avoid Jacobian matrix computation and scaling parameter adjustment, improve numerical stability, and acquire more accurate estimates for centralized nonlinear tracking, a novel centralized multi-sensor square root cubature joint probabilistic data association algorithm (CMSCJPDA) is proposed. Firstly, the multi-sensor tracking problem is decomposed into several single-sensor multi-target tracking problems, which are sequentially processed during the estimation. Then, in each sensor, the assignment of its measurements to target tracks is accomplished on the basis of joint probabilistic data association (JPDA), and a weighted probability fusion method with a square root version of the cubature Kalman filter (SRCKF) is utilized to estimate the targets’ state. Once the measurements from all sensors are processed, CMSCJPDA is derived and the global state estimate is obtained. Experimental results show that CMSCJPDA is superior to state-of-the-art algorithms in terms of tracking accuracy, numerical stability, and computational cost, which provides a new idea for solving multi-sensor tracking problems. PMID:29113085

  19. Microwave Absorption Characteristics of Tire

    NASA Astrophysics Data System (ADS)

    Zhang, Yuzhe; Hwang, Jiann-Yang; Peng, Zhiwei; Andriese, Matthew; Li, Bowen; Huang, Xiaodi; Wang, Xinli

    The recycling of waste tires is a major environmental problem. About 280 million waste tires are produced annually in the United States, and more than 2 billion tires are stockpiled, causing fire hazards and health issues. Tire rubbers are insoluble, elastic, high-polymer materials. They are not biodegradable and may take hundreds of years to decompose in the natural environment. Microwave irradiation can serve as a thermal processing method for the decomposition of tire rubbers. In this study, the microwave absorption properties of waste tire at various temperatures are characterized to determine the conditions favorable for the microwave heating of waste tires.

  20. Nuclide Depletion Capabilities in the Shift Monte Carlo Code

    DOE PAGES

    Davidson, Gregory G.; Pandya, Tara M.; Johnson, Seth R.; ...

    2017-12-21

    A new depletion capability has been developed in the Exnihilo radiation transport code suite. This capability enables massively parallel domain-decomposed coupling between the Shift continuous-energy Monte Carlo solver and the nuclide depletion solvers in ORIGEN to perform high-performance Monte Carlo depletion calculations. This paper describes this new depletion capability and discusses its various features, including a multi-level parallel decomposition, high-order transport-depletion coupling, and energy-integrated power renormalization. Several test problems are presented to validate the new capability against other Monte Carlo depletion codes, and the parallel performance of the new capability is analyzed.

  1. Self-reduction of a copper complex MOD ink for inkjet printing conductive patterns on plastics.

    PubMed

    Farraj, Yousef; Grouchko, Michael; Magdassi, Shlomo

    2015-01-31

    Highly conductive copper patterns on low-cost flexible substrates are obtained by inkjet printing a metal complex based ink. Upon heating the ink, the soluble complex, which is composed of copper formate and 2-amino-2-methyl-1-propanol, decomposes under nitrogen at 140 °C and is converted to pure metallic copper. The decomposition process of the complex is investigated and a suggested mechanism is presented. The ink is stable in air for prolonged periods, with no sedimentation or oxidation problems, which are usually encountered in copper nanoparticle based inks.

  2. Verification of RDX Photolysis Mechanism

    DTIC Science & Technology

    1999-11-01

    which re-addition of HNO2 was proposed to yield a hydroxydiazo intermediate that then decomposed to an alcohol. This sequence is shown for ... various organic products such as alcohols, or undergo carbon-nitrogen (C-N) bond cleavage (Noller 1965). This reaction is sufficiently quanti... carbon-centered functional group such as the alcohol shown below, or C-N bond cleavage.

  3. The Use of Wetting Agents/Fume Suppressants for Minimizing the Atmospheric Emissions from Hard Chromium Electroplating Baths

    DTIC Science & Technology

    2004-03-01

    oxidized rapidly producing trivalent chromium and insoluble organic compounds that eventually decomposed to carbon dioxide. This behavior required...frequent or continuous WA/FS additions, making them a more temporary than permanent solution. The trivalent chromium was also a bath contaminant requiring...need for hard chromium electroplating, but is not expected to ever be able to eliminate it. • Trivalent Chromium Electroplating: Chromium can be

  4. SPACE PROPULSION SYSTEM PHASED-MISSION PROBABILITY ANALYSIS USING CONVENTIONAL PRA METHODS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis Smith; James Knudsen

    As part of a series of papers on the topic of advanced probabilistic methods, a benchmark phased-mission problem has been suggested. This problem consists of modeling a space mission using an ion propulsion system, where the mission consists of seven mission phases. The mission requires that the propulsion operate for several phases, where the configuration changes as a function of phase. The ion propulsion system itself consists of five thruster assemblies and a single propellant supply, where each thruster assembly has one propulsion power unit and two ion engines. In this paper, we evaluate the probability of mission failure using the conventional methodology of event tree/fault tree analysis. The event tree and fault trees are developed and analyzed using Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE). While the benchmark problem is nominally a "dynamic" problem, in our analysis the mission phases are modeled in a single event tree to show the progression from one phase to the next. The propulsion system is modeled in fault trees to account for the operation, or in this case the failure, of the system. Specifically, the propulsion system is decomposed into each of the five thruster assemblies and fed into the appropriate N-out-of-M gate to evaluate mission failure. A separate fault tree for the propulsion system is developed to account for the different success criteria of each mission phase. Common-cause failure modeling is treated using traditional (i.e., parametric) methods. As part of this paper, we discuss the overall results in addition to the positive and negative aspects of modeling dynamic situations with non-dynamic modeling techniques. One insight from the use of this conventional method for analyzing the benchmark problem is that it requires significant manual manipulation of the fault trees and how they are linked into the event tree. The conventional method also requires editing the resultant cut sets to obtain the correct results. While conventional methods may be used to evaluate a dynamic system like that in the benchmark, the level of effort required may preclude its use on real-world problems.
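
    The N-out-of-M success criterion described above reduces, for independent and identical assemblies, to a binomial tail probability. A minimal sketch (the 5% failure probability is hypothetical, and common-cause failures are deliberately ignored here):

        from math import comb

        def k_out_of_n_success(n, k, p_fail):
            """Probability that at least k of n independent assemblies work."""
            p_ok = 1.0 - p_fail
            return sum(comb(n, m) * p_ok**m * p_fail**(n - m)
                       for m in range(k, n + 1))

        # e.g. a phase that needs 3 of the 5 thruster assemblies, each with
        # an assumed 5% failure probability over the phase
        print(1.0 - k_out_of_n_success(5, 3, 0.05))  # phase failure probability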

  5. Joint terminals and relay optimization for two-way power line information exchange systems with QoS constraints

    NASA Astrophysics Data System (ADS)

    Wu, Xiaolin; Rong, Yue

    2015-12-01

    The quality-of-service (QoS) criteria (measured in terms of the minimum capacity requirement in this paper) are very important to practical indoor power line communication (PLC) applications as they greatly affect the user experience. With a two-way multicarrier relay configuration, in this paper we investigate the joint terminals and relay power optimization for the indoor broadband PLC environment, where the relay node works in the amplify-and-forward (AF) mode. As the QoS-constrained power allocation problem is highly non-convex, the globally optimal solution is computationally intractable to obtain. To overcome this challenge, we propose an alternating optimization (AO) method to decompose this problem into three convex/quasi-convex sub-problems. Simulation results demonstrate the fast convergence of the proposed algorithm under practical PLC channel conditions. Compared with the conventional bidirectional direct transmission (BDT) system, the relay-assisted two-way information exchange (R2WX) scheme can meet the same QoS requirement with less total power consumption.
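
    The alternating optimization idea is to cycle through the blocks of variables, re-solving one convex/quasi-convex sub-problem at a time with the others held fixed. A generic sketch on a toy objective (the actual PLC power-allocation sub-problems are not reproduced):

        from scipy.optimize import minimize_scalar

        def alternating_optimization(f, x0, tol=1e-8, max_iter=100):
            """Minimize f over scalar blocks, one block at a time."""
            x = list(x0)
            prev = f(*x)
            for _ in range(max_iter):
                for i in range(len(x)):
                    def sub(v, i=i):          # fix all blocks except block i
                        y = x[:]
                        y[i] = v
                        return f(*y)
                    x[i] = minimize_scalar(sub).x
                cur = f(*x)
                if abs(prev - cur) < tol:     # stop when no block improves
                    break
                prev = cur
            return x

        # Toy coupled objective standing in for total power consumption
        f = lambda a, b, c: (a - 1) ** 2 + (b + 2) ** 2 + (c - a) ** 2
        print(alternating_optimization(f, [0.0, 0.0, 0.0]))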

  6. A design for an intelligent monitor and controller for space station electrical power using parallel distributed problem solving

    NASA Technical Reports Server (NTRS)

    Morris, Robert A.

    1990-01-01

    The emphasis is on defining a set of communicating processes for intelligent spacecraft secondary power distribution and control. The computer hardware and software implementation platform for this work is that of the ADEPTS project at the Johnson Space Center (JSC). The electrical power system design which was used as the basis for this research is that of Space Station Freedom, although the functionality of the processes defined here generalizes to any permanently manned space power control application. First, the Space Station Electrical Power Subsystem (EPS) hardware to be monitored is described, followed by a set of scenarios describing typical monitor and control activity. Then, the parallel distributed problem solving approach to knowledge engineering is introduced. There follows a two-step presentation of the intelligent software design for secondary power control. The first step decomposes the problem of monitoring and control into three primary functions. Each of the primary functions is described in detail. Suggestions for refinements and embellishments in the design specifications are given.

  7. Use of the Collaborative Optimization Architecture for Launch Vehicle Design

    NASA Technical Reports Server (NTRS)

    Braun, R. D.; Moore, A. A.; Kroo, I. M.

    1996-01-01

    Collaborative optimization is a new design architecture specifically created for large-scale distributed-analysis applications. In this approach, the problem is decomposed into a user-defined number of subspace optimization problems that are driven towards interdisciplinary compatibility and the appropriate solution by a system-level coordination process. This decentralized design strategy allows domain-specific issues to be accommodated by disciplinary analysts, while requiring interdisciplinary decisions to be reached by consensus. The present investigation focuses on application of the collaborative optimization architecture to the multidisciplinary design of a single-stage-to-orbit launch vehicle. Vehicle design, trajectory, and cost issues are directly modeled. Posed to suit the collaborative architecture, the design problem is characterized by 5 design variables and 16 constraints. Numerous collaborative solutions are obtained. Comparison of these solutions demonstrates the influence which an a priori ascent-abort criterion has on development cost. Similarly, objective-function selection is discussed, demonstrating the difference between minimum-weight and minimum-cost concepts. The operational advantages of the collaborative optimization
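
    The standard collaborative optimization formulation that this description sketches can be written as follows (generic notation, assumed here rather than taken from the paper). The system level drives interdisciplinary compatibility:

        \min_{z} \; F(z) \quad \text{subject to} \quad J_i^*(z) = 0, \quad i = 1, \dots, N,

    while each discipline i minimizes its discrepancy from the system-level targets subject only to its own constraints:

        J_i^*(z) = \min_{x_i} \; \| z - c_i(x_i) \|^2 \quad \text{subject to} \quad g_i(x_i) \le 0,

    where z are the system-level target variables and c_i(x_i) maps discipline i's local design x_i onto the shared variables.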

  8. Influence of neurobehavioral incentive valence and magnitude on alcohol drinking behavior

    PubMed Central

    Joseph, Jane E.; Zhu, Xun; Corbly, Christine R.; DeSantis, Stacia; Lee, Dustin C.; Baik, Grace; Kiser, Seth; Jiang, Yang; Lynam, Donald R.; Kelly, Thomas H.

    2014-01-01

    The monetary incentive delay (MID) task is a widely used probe for isolating neural circuitry in the human brain associated with incentive motivation. In the present functional magnetic resonance imaging (fMRI) study, 82 young adults, characterized along dimensions of impulsive sensation seeking, completed a MID task. fMRI and behavioral incentive functions were decomposed into incentive valence and magnitude parameters, which were used as predictors in linear regression to determine whether mesolimbic response is associated with problem drinking and recent alcohol use. Alcohol use was best explained by higher fMRI response to anticipation of losses and feedback on high gains in the thalamus. In contrast, problem drinking was best explained by reduced sensitivity to large incentive values in mesolimbic regions in the anticipation phase and increased sensitivity to small incentive values in the dorsal caudate nucleus in the feedback phase. Altered fMRI responses to monetary incentives in mesolimbic circuitry, particularly those alterations associated with problem drinking, may serve as potential early indicators of substance abuse trajectories. PMID:25261001

  9. Reconstruction of hyperspectral image using matting model for classification

    NASA Astrophysics Data System (ADS)

    Xie, Weiying; Li, Yunsong; Ge, Chiru

    2016-05-01

    Although hyperspectral images (HSIs) captured by satellites provide much information across spectral regions, some bands are redundant or contain large amounts of noise and are not suitable for image analysis. To address this problem, we introduce, for the first time, a method for reconstructing an HSI with noise reduction and contrast enhancement using a matting model. The matting model treats each spectral band of an HSI as decomposable into three components: an alpha channel, a spectral foreground, and a spectral background. First, one spectral band with more refined information than most other bands is selected and used as the alpha channel of the HSI to estimate the hyperspectral foreground and hyperspectral background. Finally, a combination operation is applied to reconstruct the HSI. In addition, the support vector machine (SVM) classifier and three sparsity-based classifiers, i.e., orthogonal matching pursuit (OMP), simultaneous OMP, and OMP based on a first-order neighborhood system weighted classifier, are applied to the reconstructed HSI and the original HSI to verify the effectiveness of the proposed method. Specifically, using the reconstructed HSI, the average accuracy of the SVM classifier can be improved by as much as 19%.
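
    The matting decomposition referenced here is, per band, the familiar compositing equation (notation assumed for illustration):

        x_i(p) = \alpha(p) \, F_i(p) + \bigl(1 - \alpha(p)\bigr) \, B_i(p),

    where x_i(p) is pixel p of band i, \alpha is the single alpha channel shared across bands, and F_i and B_i are the spectral foreground and background of that band.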

  10. Rebuilding DEMATEL threshold value: an example of a food and beverage information system.

    PubMed

    Hsieh, Yi-Fang; Lee, Yu-Cheng; Lin, Shao-Bin

    2016-01-01

    This study demonstrates how a decision-making trial and evaluation laboratory (DEMATEL) threshold value can be quickly and reasonably determined when combining DEMATEL with decomposed theory of planned behavior (DTPB) models. The models are combined to identify the key factors of a complex problem. This paper presents a case study of a food and beverage information system as an example. The analysis of the example indicates that, given direct and indirect relationships among variables, if a traditional DTPB model only simulates the effects of the variables without considering that the variables will affect the original cause-and-effect relationships among them, then the original DTPB model variables cannot represent a complete relationship. For the food and beverage example, a DEMATEL method was employed to reconstruct a DTPB model and, more importantly, to calculate a reasonable DEMATEL threshold value for determining additional relationships among variables in the original DTPB model. This study is method-oriented, and the depth of investigation into any individual case is limited. Therefore, the proposed methods should ideally be used in various fields of study to identify deeper and more practical implications.

  11. Comparing and decomposing differences in preventive and hospital care: USA versus Taiwan.

    PubMed

    Hsiou, Tiffany R; Pylypchuk, Yuriy

    2012-07-01

    As the USA expands health insurance coverage, comparing utilization of healthcare services with countries like Taiwan that already have universal coverage can highlight problematic areas of each system. The universal coverage plan of Taiwan is the newest among developed countries, and it is known for readily providing access to care at low costs. However, Taiwan experiences problems on the supply side, such as inadequate compensation for providers, especially in the area of preventive care. We compare the use of preventive, hospital, and emergency care between the USA and Taiwan. The rate of preventive care use is much higher in the USA than in Taiwan, whereas the use of hospital and emergency care is about the same. Results of our decomposition analysis suggest that higher levels of education and income, along with inferior health status in the USA, are significant factors, each explaining between 7% and 15% of the gap in preventive care use. Our analysis suggests that, in addition to universal coverage, proper remuneration schemes, education levels, and cultural attitudes towards health care are important factors that influence the use of preventive care. Copyright © 2011 John Wiley & Sons, Ltd.

  12. Tracing Acetylene Dissolved in Transformer Oil by Tunable Diode Laser Absorption Spectrum.

    PubMed

    Ma, Guo-Ming; Zhao, Shu-Jing; Jiang, Jun; Song, Hong-Tu; Li, Cheng-Rong; Luo, Ying-Ting; Wu, Hao

    2017-11-02

    Dissolved gas analysis (DGA) is widely used in the monitoring and diagnosis of power transformers, since the insulation material in a power transformer decomposes into gases under abnormal operating conditions. Among these gases, acetylene, as an indicator of low-energy spark discharge and high-energy electrical faults (arc discharge) in power transformers, is an important monitoring parameter. The gas detection methods currently used by online DGA equipment suffer from problems such as cross-sensitivity, electromagnetic compatibility, and reliability. In this paper, an optical gas detection system based on TDLAS technology is proposed to detect acetylene dissolved in transformer oil. We selected a 1530.370 nm laser in the near-infrared wavelength range to match the absorption peak of acetylene, while using a wavelength modulation strategy and a Herriott cell to improve the detection precision. Results show that the limit of detection reaches 0.49 ppm. The detection system responds quickly to changes in gas concentration, is easy to maintain, and requires no carrier gas while being free of electromagnetic interference and cross-sensitivity. In addition, a complete detection process takes only 8 minutes, indicating good prospects for practical online monitoring.

  13. Reconstruction of Complex Network based on the Noise via QR Decomposition and Compressed Sensing.

    PubMed

    Li, Lixiang; Xu, Dafei; Peng, Haipeng; Kurths, Jürgen; Yang, Yixian

    2017-11-08

    It is generally known that the states of network nodes are stable and have strong correlations in a linear network system. We find that, without a control input, compressed sensing cannot succeed in reconstructing complex networks in which the states of nodes are generated by a linear network system. However, noise can drive the dynamics between nodes to break the stability of the system state. Therefore, a new method integrating QR decomposition and compressed sensing is proposed to solve the reconstruction problem of complex networks with the assistance of input noise. The state matrix of the system is decomposed by QR decomposition. We construct the measurement matrix with the aid of Gaussian noise so that the sparse input matrix can be reconstructed by compressed sensing. We also discover that noise can build a bridge between the dynamics and the topological structure. Experiments on four model networks and six real networks show that the proposed method is more accurate and more efficient than compressed sensing alone. In addition, the proposed method can reconstruct not only sparse complex networks but also dense complex networks.
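
    A minimal sketch of the flavor of this pipeline: orthogonalize the state matrix with a QR factorization, then recover a sparse coefficient vector with an l1 solver (plain ISTA is used below as a generic stand-in for the paper's compressed-sensing step; the data are synthetic):

        import numpy as np

        def ista(Phi, y, lam=0.1, n_iter=500):
            """Iterative soft-thresholding for min ||Phi a - y||^2 + lam ||a||_1."""
            L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant
            a = np.zeros(Phi.shape[1])
            for _ in range(n_iter):
                g = a - (Phi.T @ (Phi @ a - y)) / L  # gradient step
                a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
            return a

        rng = np.random.default_rng(0)
        X = rng.normal(size=(40, 20))                # noisy state matrix
        Q, R = np.linalg.qr(X)                       # orthonormal measurement basis
        a_true = np.zeros(20)
        a_true[[2, 7]] = [1.5, -2.0]                 # sparse ground truth
        y = Q @ a_true + 0.01 * rng.normal(size=40)
        print(np.round(ista(Q, y, lam=0.05), 2))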

  14. Mechanistic and Kinetic Analysis of Na2SO4-Modified Laterite Decomposition by Thermogravimetry Coupled with Mass Spectrometry

    PubMed Central

    Yang, Song; Du, Wenguang; Shi, Pengzheng; Shangguan, Ju; Liu, Shoujun; Zhou, Changhai; Chen, Peng; Zhang, Qian; Fan, Huiling

    2016-01-01

    Nickel laterites cannot be effectively upgraded by physical methods because of their poor crystallinity and fine grain size. Na2SO4 is the most efficient additive for grade enrichment and Ni recovery. However, how Na2SO4 affects the selective reduction of laterite ores has not been clearly established. This study investigated the decomposition of laterite with and without the addition of Na2SO4 in an argon atmosphere using thermogravimetry coupled with mass spectrometry (TG-MS). Approximately 25 mg of sample with 20 wt% Na2SO4 was pyrolyzed under a 100 ml/min Ar flow at a heating rate of 10°C/min from room temperature to 1300°C. The kinetic study was based on derivative thermogravimetric (DTG) curves. The evolution of the pyrolysis gas composition was detected by mass spectrometry, and the decomposition products were analyzed by X-ray diffraction (XRD). The decomposition behavior of laterite with the addition of Na2SO4 was similar to that of pure laterite below 800°C during the first three stages. However, in the fourth stage, the dolomite decomposed at 897°C, approximately 200°C lower than in pure laterite. In the last stage, the laterite decomposed and emitted SO2 in the presence of Na2SO4, with an activation energy of 91.37 kJ/mol. The decomposition of laterite with and without the addition of Na2SO4 can be described by a single first-order reaction. Moreover, the use of Na2SO4 as the modification agent reduces the activation energy of laterite decomposition; thus, the reaction rate can be accelerated, and the reaction temperature can be markedly reduced. PMID:27333072
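
    The first-order kinetic description used above corresponds to the standard rate law (symbols assumed):

        \frac{d\alpha}{dt} = A \exp\!\left(-\frac{E_a}{RT}\right)(1 - \alpha),

    where \alpha is the conversion fraction, A the pre-exponential factor, E_a the activation energy (91.37 kJ/mol for the Na2SO4-assisted step reported here), R the gas constant and T the absolute temperature; lowering E_a with the Na2SO4 additive raises the rate at a given temperature.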

  15. Decomposing Additive Genetic Variance Revealed Novel Insights into Trait Evolution in Synthetic Hexaploid Wheat.

    PubMed

    Jighly, Abdulqader; Joukhadar, Reem; Singh, Sukhwinder; Ogbonnaya, Francis C

    2018-01-01

    Whole genome duplication (WGD) is an evolutionary phenomenon, which causes significant changes to genomic structure and trait architecture. In recent years, a number of studies decomposed the additive genetic variance explained by different sets of variants. However, they investigated diploid populations only and none of the studies examined any polyploid organism. In this research, we extended the application of this approach to polyploids, to differentiate the additive variance explained by the three subgenomes and seven sets of homoeologous chromosomes in synthetic allohexaploid wheat (SHW) to gain a better understanding of trait evolution after WGD. Our SHW population was generated by crossing improved durum parents ( Triticum turgidum; 2n = 4x = 28, AABB subgenomes) with the progenitor species Aegilops tauschii (syn Ae. squarrosa, T. tauschii ; 2n = 2x = 14, DD subgenome). The population was phenotyped for 10 fungal/nematode resistance traits as well as two abiotic stresses. We showed that the wild D subgenome dominated the additive effect and this dominance affected the A more than the B subgenome. We provide evidence that this dominance was not inflated by population structure, relatedness among individuals or by longer linkage disequilibrium blocks observed in the D subgenome within the population used for this study. The cumulative size of the three homoeologs of the seven chromosomal groups showed a weak but significant positive correlation with their cumulative explained additive variance. Furthermore, an average of 69% for each chromosomal group's cumulative additive variance came from one homoeolog that had the highest explained variance within the group across all 12 traits. We hypothesize that structural and functional changes during diploidization may explain chromosomal group relations as allopolyploids keep balanced dosage for many genes. Our results contribute to a better understanding of trait evolution mechanisms in polyploidy, which will facilitate the effective utilization of wheat wild relatives in breeding.

  17. Evaluation of the SSRCT engine with a hydrazine as a fuel, phase 1

    NASA Technical Reports Server (NTRS)

    Minton, S. J.

    1978-01-01

    The performance parameters for the space shuttle reaction control thruster (SSRCT) when the fuel is changed from monomethylhydrazine to hydrazine were predicted. Potential problems are higher chamber wall temperature during steady state operation and explosive events during pulse mode operation. Solutions to the problems are suggested. To conduct the analysis, a more realistic film cooling model was devised which considers that hydrazine based fuels are reactive when used as a film coolant on the walls of the combustion chamber. Hydrazine based fuels can decompose exothermally as a monopropellant and also enter into bipropellant reactions with any excess oxidizer in the combustion chamber. It is concluded that the conversion of the thruster from MMH to hydrazine fuel is feasible but that a number of changes would be required to achieve the same safety margins as the monomethylhydrazine-fueled thruster.

  18. Using parallel banded linear system solvers in generalized eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Zhang, Hong; Moss, William F.

    1993-01-01

    Subspace iteration is a reliable and cost effective method for solving positive definite banded symmetric generalized eigenproblems, especially in the case of large scale problems. This paper discusses an algorithm that makes use of two parallel banded solvers in subspace iteration. A shift is introduced to decompose the banded linear systems into relatively independent subsystems and to accelerate the iterations. With this shift, an eigenproblem is mapped efficiently into the memories of a multiprocessor and a high speed-up is obtained for parallel implementations. An optimal shift is a shift that balances total computation and communication costs. Under certain conditions, we show how to estimate an optimal shift analytically using the decay rate for the inverse of a banded matrix, and how to improve this estimate. Computational results on iPSC/2 and iPSC/860 multiprocessors are presented.
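
    In generic notation (assumed here, not the paper's symbols), the shifted subspace iteration for the banded generalized eigenproblem K x = \lambda M x takes the form

        (K - \sigma M)\, \bar{X}_{k+1} = M X_k,

    followed by M-orthonormalization and a Rayleigh-Ritz projection onto the iterated subspace. As described above, the shift \sigma both accelerates convergence to the eigenvalues nearest \sigma and weakens the coupling so that the banded linear systems decompose into relatively independent subsystems.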

  19. Stringy Toda cosmologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaloper, N.

    We discuss a particular stringy modular cosmology with two axion fields in seven space-time dimensions, decomposable as a time and two flat three-spaces. The effective equations of motion for the problem are those of the SU(3) Toda molecule and, hence, are integrable. We write down the solutions, and show that all of them are singular. They can be thought of as a generalization of the pre-big-bang cosmology with excited internal degrees of freedom, and still suffering from the graceful exit problem. Some of the solutions, however, show a rather unexpected property: some of their spatial sections shrink to a point in spite of winding modes wrapped around them. We also comment how more general, anisotropic solutions, with fewer Killing symmetries, can be obtained with the help of STU dualities. © 1997 The American Physical Society.

  20. Wave propagation problem for a micropolar elastic waveguide

    NASA Astrophysics Data System (ADS)

    Kovalev, V. A.; Murashkin, E. V.; Radayev, Y. N.

    2018-04-01

    A propagation problem for coupled harmonic waves of translational displacements and microrotations along the axis of a long cylindrical waveguide is discussed in the present study. Microrotations are modeled within the framework of linear micropolar elasticity. The mathematical model of linear (or even nonlinear) micropolar elasticity is also extended to a field-theoretic model via the least action integral and the least action principle. The governing coupled vector differential equations of linear micropolar elasticity are given. The translational displacements and microrotations in the coupled harmonic wave are decomposed into potential and vortex parts. Calibrating equations providing simplification of the equations for the wave potentials are proposed. The coupled differential equations are then reduced to uncoupled ones and finally to Helmholtz wave equations. The wave equation solutions for the translational and microrotational wave potentials are obtained for a high-frequency range.
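
    The potential/vortex split referred to above is a Helmholtz-type decomposition of both kinematic fields (generic symbols, assumed for illustration):

        \mathbf{u} = \nabla \Phi + \nabla \times \boldsymbol{\Psi}, \qquad
        \boldsymbol{\phi} = \nabla \Sigma + \nabla \times \mathbf{H},

    where \mathbf{u} is the translational displacement and \boldsymbol{\phi} the microrotation vector; after calibrating conditions are imposed, each potential satisfies a Helmholtz equation of the form (\nabla^2 + k^2)\Phi = 0 with its own wavenumber k.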

  1. The influence of body position and microclimate on ketamine and metabolite distribution in decomposed skeletal remains.

    PubMed

    Cornthwaite, H M; Watterson, J H

    2014-10-01

    The influence of body position and microclimate on ketamine (KET) and metabolite distribution in decomposed bone tissue was examined. Rats received 75 mg/kg (i.p.) KET (n = 30) or remained drug-free (controls, n = 4). Following euthanasia, rats were divided into two groups and placed outdoors to decompose in one of the three positions: supine (SUP), prone (PRO) or upright (UPR). One group decomposed in a shaded, wooded microclimate (Site 1) while the other decomposed in an exposed sunlit microclimate with gravel substrate (Site 2), roughly 500 m from Site 1. Following decomposition, bones (lumbar vertebrae, thoracic vertebra, cervical vertebrae, rib, pelvis, femora, tibiae, humeri and scapulae) were collected and sorted for analysis. Clean, ground bones underwent microwave-assisted extraction using acetone : hexane mixture (1 : 1, v/v), followed by solid-phase extraction and analysis using GC-MS. Drug levels, expressed as mass normalized response ratios, were compared across all bone types between body position and microclimates. Bone type was a main effect (P < 0.05) for drug level and drug/metabolite level ratio for all body positions and microclimates examined. Microclimate and body position significantly influenced observed drug levels: higher levels were observed in carcasses decomposing in direct sunlight, where reduced entomological activity led to slowed decomposition. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  2. A solution quality assessment method for swarm intelligence optimization algorithms.

    PubMed

    Zhang, Zhaojun; Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua

    2014-01-01

    Nowadays, swarm intelligence optimization has become an important optimization tool widely used in many fields of application. In contrast to its many successful applications, the theoretical foundation is rather weak, and there are still many problems to be solved. One problem is how to quantify the performance of an algorithm in finite time, that is, how to evaluate the quality of the solution obtained by an algorithm for a practical problem; this greatly limits application to practical problems. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on analysis of the search space and the characteristics of the algorithm itself. Instead of "value performance," "ordinal performance" is used as the evaluation criterion in this method. The feasible solutions are clustered according to distance to divide the solution samples into several parts. Then, the solution space and the "good enough" set can be decomposed based on the clustering results. Last, using elementary statistics, the evaluation result can be obtained. To validate the proposed method, intelligent algorithms such as ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial fish swarm algorithm (AFS) were applied to the traveling salesman problem. Computational results indicate the feasibility of the proposed method.
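
    The "ordinal performance" idea above lends itself to a compact illustration. The sketch below is a minimal interpretation assuming uniform random sampling of the solution space: it estimates (a) the empirical ordinal rank of the best solution an algorithm found against a large reference sample, and (b) the classic alignment probability that n samples hit the top-q "good enough" set. Function names and the test data are illustrative, not from the paper.

```python
import numpy as np

# A minimal sketch of "ordinal performance" evaluation: instead of asking how
# close the best sampled objective value is to the optimum ("value
# performance"), we ask how likely it is that the sampled set intersects the
# "good enough" set, here taken as the best q-fraction of the solution space.

def alignment_probability(n_samples: int, good_enough_fraction: float) -> float:
    """P(at least one of n uniform random samples falls in the top q fraction)."""
    q = good_enough_fraction
    return 1.0 - (1.0 - q) ** n_samples

def ordinal_rank(best_found: float, population: np.ndarray) -> float:
    """Empirical quantile (0 = best) of the best solution an algorithm found,
    measured against a large reference sample of feasible solution values."""
    return float(np.mean(population < best_found))

rng = np.random.default_rng(0)
population = rng.normal(size=100_000)          # stand-in for sampled tour lengths
best_found = np.sort(rng.choice(population, 500))[0]
print(f"ordinal rank of best found: {ordinal_rank(best_found, population):.4%}")
print(f"P(500 samples hit top 1%): {alignment_probability(500, 0.01):.4f}")
```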

  3. Toxicity to woodlice of zinc and lead oxides added to soil litter

    USGS Publications Warehouse

    Beyer, W.N.; Anderson, A.

    1985-01-01

    Previous studies have shown that high concentrations of metals in soil are associated with reductions in decomposer populations. We have here determined the relation between the concentrations of lead and zinc added as oxides to soil litter and the survival and reproduction of a decomposer population under controlled conditions. Laboratory populations of woodlice (Porcellio scaber Latr) were fed soil litter treated with lead or zinc at concentrations that ranged from 100 to 12,800 ppm. The survival of the adults, the maximum number of young alive, and the average number of young alive, were recorded over 64 weeks. Lead at 12,800 ppm and zinc at 1,600 ppm or more had statistically significant (p < 0.05) negative effects on the populations. These results agree with field observations suggesting that lead and zinc have reduced populations of decomposers in contaminated forest soil litter, and concentrations are similar to those reported to be associated with reductions in natural populations of decomposers. Poisoning of decomposers may disrupt nutrient cycling, reduce the numbers of invertebrates available to other wildlife for food, and contribute to the contamination of food chains.

  4. The Reduced Basis Method in Geosciences: Practical examples for numerical forward simulations

    NASA Astrophysics Data System (ADS)

    Degen, D.; Veroy, K.; Wellmann, F.

    2017-12-01

    Due to the highly heterogeneous character of the earth's subsurface, the complex coupling of thermal, hydrological, mechanical, and chemical processes, and the limited accessibility, we have to face high-dimensional problems associated with high uncertainties in geosciences. Performing the obviously necessary uncertainty quantifications with a reasonable number of parameters is often not possible due to the high-dimensional character of the problem. Therefore, we present the reduced basis (RB) method, a model order reduction (MOR) technique that constructs low-order approximations to, for instance, the finite element (FE) space. We use the RB method to address these computationally challenging simulations because it significantly reduces the degrees of freedom. The RB method is decomposed into an offline and an online stage, allowing the expensive pre-computations to be made beforehand so that real-time results are available during field campaigns. Generally, the RB approach is most beneficial in the many-query and real-time contexts. We illustrate the advantages of the RB method for the field of geosciences through two examples of numerical forward simulations. The first example, a geothermal conduction problem, demonstrates the implementation of the RB method for a steady-state case. The second example, a Darcy flow problem, shows the benefits for transient scenarios. In both cases, a quality evaluation of the approximations is given. Additionally, the runtimes for both the FE and the RB simulations are compared. We emphasize the advantages of this method for repetitive simulations by showing the speed-up of the RB solution in contrast to the FE solution. Finally, we demonstrate how the implementation can be used on high-performance computing (HPC) infrastructures and evaluate its performance for such infrastructures. Hence, we especially point out its scalability, yielding optimal usage on HPC infrastructures and normal workstations.
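
    The offline/online split described above can be made concrete with a small sketch. The following is a minimal proper-orthogonal-decomposition style reduced basis workflow for a parametrized linear system, assuming an affine parameter dependence A(mu) = A0 + mu*A1; the "FE" operators here are synthetic diagonal stand-ins, and the basis size of 5 is arbitrary.

```python
import numpy as np

n = 2000                                   # "full" FE dimension (synthetic)
rng = np.random.default_rng(1)
A0 = np.diag(np.linspace(1.0, 2.0, n))     # stand-ins for assembled FE operators
A1 = np.diag(np.linspace(0.5, 1.5, n))
f = rng.normal(size=n)

# --- offline stage: expensive, done once ----------------------------------
snapshots = np.column_stack(
    [np.linalg.solve(A0 + mu * A1, f) for mu in np.linspace(0.1, 10.0, 20)]
)
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :5]                               # 5 POD/RB basis vectors
A0r, A1r, fr = V.T @ A0 @ V, V.T @ A1 @ V, V.T @ f

# --- online stage: cheap 5x5 solve for each new parameter -----------------
mu = 3.7
u_rb = V @ np.linalg.solve(A0r + mu * A1r, fr)
u_fe = np.linalg.solve(A0 + mu * A1, f)    # full solve, for comparison only
print("relative RB error:", np.linalg.norm(u_rb - u_fe) / np.linalg.norm(u_fe))
```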

  5. Priming of soil carbon decomposition in two Inner Mongolia grassland soils following sheep dung addition: a study using ¹³C natural abundance approach.

    PubMed

    Ma, Xiuzhi; Ambus, Per; Wang, Shiping; Wang, Yanfen; Wang, Chengjie

    2013-01-01

    To investigate the effect of sheep dung on soil carbon (C) sequestration, a 152-day incubation experiment was conducted with soils from two different Inner Mongolian grasslands, i.e. a Leymus chinensis dominated grassland representing the climax community (2.1% organic matter content) and a heavily degraded Artemisia frigida dominated community (1.3% organic matter content). Dung was collected from sheep either fed on L. chinensis (C₃ plant with δ¹³C = -26.8‰; dung δ¹³C = -26.2‰) or Cleistogenes squarrosa (C₄ plant with δ¹³C = -14.6‰; dung δ¹³C = -15.7‰). Fresh C₃ and C₄ sheep dung was mixed with the two grassland soils and incubated under controlled conditions for analysis of ¹³C-CO₂ emissions. Soil samples were taken at days 17, 43, 86, 127 and 152 after sheep dung addition to detect the δ¹³C signal in soil and dung components. Analysis revealed that 16.9% and 16.6% of the sheep dung C had decomposed, of which 3.5% and 2.8% was sequestered in the soils of L. chinensis and A. frigida grasslands, respectively, while the remaining decomposed sheep dung was emitted as CO₂. The cumulative amounts of C respired from dung treated soils during 152 days were 7-8 times higher than in the un-amended controls. In both grassland soils, ca. 60% of the evolved CO₂ originated from the decomposing sheep dung and 40% from the native soil C. Priming effects of soil C decomposition were observed in both soils, i.e. 1.4 g and 1.6 g additional soil C kg⁻¹ dry soil had been emitted as CO₂ for the L. chinensis and A. frigida soils, respectively. Hence, the net C losses from L. chinensis and A. frigida soils were 0.6 g and 0.9 g C kg⁻¹ soil, which was 2.6% and 7.0% of the total C in L. chinensis and A. frigida grasslands soils, respectively. Our results suggest that grazing of degraded Inner Mongolian pastures may cause a net soil C loss due to the positive priming effect, thereby accelerating soil deterioration.
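
    The source partitioning reported above rests on a two-source δ¹³C mass balance. A minimal sketch, using the end-member values quoted in the abstract and a hypothetical measured δ¹³C for the evolved CO₂:

```python
# Two-source delta-13C mixing model underlying the natural-abundance
# partitioning in this study: CO2 evolved from a C3 soil amended with C4 dung
# is a mixture of two isotopically distinct sources, so the dung-derived
# fraction follows from linear mixing. End-member values are taken from the
# abstract; the measured delta of the respired CO2 below is illustrative.

def dung_fraction(delta_mix: float, delta_soil: float, delta_dung: float) -> float:
    """Fraction of respired C derived from dung in a two-source mixing model."""
    return (delta_mix - delta_soil) / (delta_dung - delta_soil)

delta_soil = -26.8   # C3 (L. chinensis) derived soil C, permil
delta_dung = -15.7   # C4 (C. squarrosa) sheep dung, permil
delta_mix = -20.1    # hypothetical measured delta-13C of evolved CO2

f = dung_fraction(delta_mix, delta_soil, delta_dung)
print(f"dung-derived share of CO2: {f:.1%}, soil-derived: {1 - f:.1%}")
```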

  6. Priming of Soil Carbon Decomposition in Two Inner Mongolia Grassland Soils following Sheep Dung Addition: A Study Using 13C Natural Abundance Approach

    PubMed Central

    Ma, Xiuzhi; Ambus, Per; Wang, Shiping; Wang, Yanfen; Wang, Chengjie

    2013-01-01

    To investigate the effect of sheep dung on soil carbon (C) sequestration, a 152-day incubation experiment was conducted with soils from two different Inner Mongolian grasslands, i.e. a Leymus chinensis dominated grassland representing the climax community (2.1% organic matter content) and a heavily degraded Artemisia frigida dominated community (1.3% organic matter content). Dung was collected from sheep either fed on L. chinensis (C3 plant with δ13C = −26.8‰; dung δ13C = −26.2‰) or Cleistogenes squarrosa (C4 plant with δ13C = −14.6‰; dung δ13C = −15.7‰). Fresh C3 and C4 sheep dung was mixed with the two grassland soils and incubated under controlled conditions for analysis of 13C-CO2 emissions. Soil samples were taken at days 17, 43, 86, 127 and 152 after sheep dung addition to detect the δ13C signal in soil and dung components. Analysis revealed that 16.9% and 16.6% of the sheep dung C had decomposed, of which 3.5% and 2.8% was sequestered in the soils of L. chinensis and A. frigida grasslands, respectively, while the remaining decomposed sheep dung was emitted as CO2. The cumulative amounts of C respired from dung treated soils during 152 days were 7–8 times higher than in the un-amended controls. In both grassland soils, ca. 60% of the evolved CO2 originated from the decomposing sheep dung and 40% from the native soil C. Priming effects of soil C decomposition were observed in both soils, i.e. 1.4 g and 1.6 g additional soil C kg−1 dry soil had been emitted as CO2 for the L. chinensis and A. frigida soils, respectively. Hence, the net C losses from L. chinensis and A. frigida soils were 0.6 g and 0.9 g C kg−1 soil, which was 2.6% and 7.0% of the total C in L. chinensis and A. frigida grasslands soils, respectively. Our results suggest that grazing of degraded Inner Mongolian pastures may cause a net soil C loss due to the positive priming effect, thereby accelerating soil deterioration. PMID:24236024

  7. An optimality framework to predict decomposer carbon-use efficiency trends along stoichiometric gradients

    NASA Astrophysics Data System (ADS)

    Manzoni, S.; Capek, P.; Mooshammer, M.; Lindahl, B.; Richter, A.; Santruckova, H.

    2016-12-01

    Litter and soil organic matter decomposers feed on substrates with much wider C:N and C:P ratios than their own cellular composition, raising the question as to how they can adapt their metabolism to such a chronic stoichiometric imbalance. Here we propose an optimality framework to address this question, based on the hypothesis that carbon-use efficiency (CUE) can be optimally adjusted to maximize the decomposer growth rate. When nutrients are abundant, increasing CUE improves decomposer growth rate, at the expense of higher nutrient demand. However, when nutrients are scarce, increased nutrient demand driven by high CUE can trigger nutrient limitation and inhibit growth. An intermediate, 'optimal' CUE ensures balanced growth at the verge of nutrient limitation. We derive a simple analytical equation that links this optimal CUE to organic substrate and decomposer biomass C:N and C:P ratios, and to the rate of inorganic nutrient supply (e.g., fertilization). This equation allows formulating two specific hypotheses: i) decomposer CUE should decrease with widening organic substrate C:N and C:P ratios, with a scaling exponent between 0 (abundant inorganic nutrients) and -1 (scarce inorganic nutrients), and ii) CUE should increase with increasing inorganic nutrient supply, for a given organic substrate stoichiometry. These hypotheses are tested using a new database encompassing nearly 2000 estimates of CUE from about 160 studies, spanning aquatic and terrestrial decomposers of litter and more stabilized organic matter. The theoretical predictions are largely confirmed by our data analysis, except for the lack of fertilization effects on terrestrial decomposer CUE. While stoichiometric drivers constrain the general trends in CUE, the relatively large variability in CUE estimates suggests that other factors could be at play as well. For example, temperature is often cited as a potential driver of CUE, but we only found limited evidence of temperature effects, although in some subsets of the data temperature and substrate stoichiometry appeared to interact. Based on our results, the optimality principle can provide a solid (but still incomplete) framework to develop CUE models for large-scale applications.

  8. A Sharp methodology for VLSI layout

    NASA Astrophysics Data System (ADS)

    Bapat, Shekhar

    1993-01-01

    The layout problem for VLSI circuits is recognized as a very difficult problem and has traditionally been decomposed into several seemingly independent sub-problems of placement, global routing, and detailed routing. Although this structure achieves a reduction in programming complexity, it is also typically accompanied by a reduction in solution quality. Most current placement research recognizes that the separation is artificial, and that the placement and routing problems should ideally be solved in tandem. We propose a new interconnection model, Sharp, and an associated partitioning algorithm. The Sharp interconnection model uses a partitioning shape that roughly resembles the musical sharp (number sign) and makes extensive use of pre-computed rectilinear Steiner trees. The model is designed to generate strategic routing information along with the partitioning results. Additionally, the Sharp model also generates estimates of the routing congestion. We also propose the Sharp layout heuristic, which solves the layout problem in its entirety. The Sharp layout heuristic makes extensive use of the Sharp partitioning model. The use of precomputed Steiner tree forms enables the method to accurately model net characteristics. For example, the Steiner tree forms can model both the length of a net and, more importantly, its route. In fact, the tree forms are also appropriate for modeling the timing delays of nets. The Sharp heuristic works to minimize both the total layout area, by minimizing total net length (thus reducing the total wiring area), and the congestion imbalances in the various channels (thus reducing the unused or wasted channel area). Our heuristic uses circuit element movements amongst the different partitioning blocks and the selection of alternate minimal Steiner tree forms to achieve this goal. The objective function for the algorithm can readily be modified to include other important circuit constraints like propagation delays. The layout technique first computes a very high-level approximation of the layout solution (i.e., the positions of the circuit elements and the associated net routes). The approximate solution is then alternately refined to improve the objective function. The technique creates well-defined sub-problems and offers intermediary steps that can be solved in parallel, as well as a parallel mechanism to merge the sub-problem solutions.

  9. Efficient operator splitting algorithm for joint sparsity-regularized SPIRiT-based parallel MR imaging reconstruction.

    PubMed

    Duan, Jizhong; Liu, Yu; Jing, Peiguang

    2018-02-01

    Self-consistent parallel imaging (SPIRiT) is an auto-calibrating model for the reconstruction of parallel magnetic resonance imaging, which can be formulated as a regularized SPIRiT problem. The Projection Over Convex Sets (POCS) method has been used to solve the formulated regularized SPIRiT problem; however, the quality of the reconstructed image still needs to be improved. Though methods such as NonLinear Conjugate Gradients (NLCG) can achieve higher spatial resolution, they demand complex computation and converge slowly. In this paper, we propose a new algorithm to solve the formulated Cartesian SPIRiT problem with the JTV and JL1 regularization terms. The proposed algorithm uses the operator splitting (OS) technique to decompose the problem into a gradient problem and a denoising problem with two regularization terms, which is solved by our proposed split-Bregman-based denoising algorithm, and adopts the Barzilai-Borwein method to update the step size. Simulation experiments on two in vivo data sets demonstrate that the proposed algorithm is 1.3 times faster than ADMM for datasets with 8 channels, and 2 times faster for the dataset with 32 channels.
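
    The operator-splitting idea above can be sketched generically. The following minimal proximal-gradient loop alternates a gradient step on a smooth data-fidelity term with a "denoising" (proximal) step on the regularizer, and uses a Barzilai-Borwein step size. Soft-thresholding stands in for the paper's JTV/JL1 split-Bregman denoiser, and the operator A, data b, and weight lam are synthetic, so this is an illustration of the OS technique rather than the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(100, 400))                 # synthetic forward operator
x_true = np.zeros(400)
x_true[rng.choice(400, 10, replace=False)] = 1.0
b = A @ x_true                                  # synthetic measurements
lam = 0.1                                       # regularization weight

def soft_threshold(v, t):
    """Proximal map of t*||.||_1, standing in for the denoising sub-problem."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(400)
x_old, g_old, step = None, None, 1e-3
for _ in range(300):
    grad = A.T @ (A @ x - b)                    # gradient step on 0.5||Ax-b||^2
    if g_old is not None:
        s, y = x - x_old, grad - g_old
        step = min((s @ s) / max(s @ y, 1e-12), 1.0)   # BB1 step size, guarded
    x_old, g_old = x, grad
    x = soft_threshold(x - step * grad, step * lam)    # denoising/prox step

print("residual norm:", np.linalg.norm(A @ x - b))
print("nonzero coefficients:", int((np.abs(x) > 1e-3).sum()))
```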

  10. Decomposed multidimensional control grid interpolation for common consumer electronic image processing applications

    NASA Astrophysics Data System (ADS)

    Zwart, Christine M.; Venkatesan, Ragav; Frakes, David H.

    2012-10-01

    Interpolation is an essential and broadly employed function of signal processing. Accordingly, considerable development has focused on advancing interpolation algorithms toward optimal accuracy. Such development has motivated a clear shift in the state of the art from classical interpolation to more intelligent and resourceful approaches, registration-based interpolation for example. As a natural result, many of the most accurate current algorithms are highly complex, specific, and computationally demanding. However, the diverse hardware destinations for interpolation algorithms present unique constraints that often preclude use of the most accurate available options. For example, while computationally demanding interpolators may be suitable for highly equipped image processing platforms (e.g., computer workstations and clusters), only more efficient interpolators may be practical for less well equipped platforms (e.g., smartphones and tablet computers). The latter examples of consumer electronics present a design tradeoff in this regard: high-accuracy interpolation benefits the consumer experience but computing capabilities are limited. It follows that interpolators with favorable combinations of accuracy and efficiency are of great practical value to the consumer electronics industry. We address multidimensional interpolation-based image processing problems that are common to consumer electronic devices through a decomposition approach. The multidimensional problems are first broken down into multiple, independent, one-dimensional (1-D) interpolation steps that are then executed with a newly modified registration-based 1-D control grid interpolator. The proposed approach, decomposed multidimensional control grid interpolation (DMCGI), combines the accuracy of registration-based interpolation with the simplicity, flexibility, and computational efficiency of a 1-D interpolation framework. Results demonstrate that DMCGI provides improved interpolation accuracy (and other benefits) in image resizing, color sample demosaicing, and video deinterlacing applications, at a computational cost that is manageable or reduced in comparison to popular alternatives.
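
    The core decomposition in DMCGI, reducing a multidimensional interpolation to independent 1-D passes, is easy to sketch. Below, plain linear interpolation (np.interp) stands in for the registration-based 1-D control grid interpolator; a 2-D resize is performed as a row pass followed by a column pass.

```python
import numpy as np

# A minimal sketch of the decomposition idea: a 2-D image resize broken into
# two independent 1-D interpolation passes (rows, then columns).

def resize_1d(signal: np.ndarray, new_len: int) -> np.ndarray:
    old = np.linspace(0.0, 1.0, signal.size)
    new = np.linspace(0.0, 1.0, new_len)
    return np.interp(new, old, signal)

def resize_2d(image: np.ndarray, new_shape: tuple) -> np.ndarray:
    rows_done = np.stack([resize_1d(r, new_shape[1]) for r in image])
    return np.stack([resize_1d(c, new_shape[0]) for c in rows_done.T]).T

image = np.add.outer(np.arange(8.0), np.arange(6.0))  # smooth test ramp
print(resize_2d(image, (16, 12)).shape)               # -> (16, 12)
```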

  11. Using Adaptive Mesh Refinement to Simulate Storm Surge

    NASA Astrophysics Data System (ADS)

    Mandli, K. T.; Dawson, C.

    2012-12-01

    Coastal hazards related to strong storms such as hurricanes and typhoons are one of the most frequently recurring and widespread hazards to coastal communities. Storm surges are among the most devastating effects of these storms, and their prediction and mitigation through numerical simulations is of great interest to coastal communities that need to plan for the subsequent rise in sea level during these storms. Unfortunately these simulations require a large amount of resolution in regions of interest to capture relevant effects, resulting in a computational cost that may be intractable. This problem is exacerbated in situations where a large number of similar runs is needed, such as in design of infrastructure or forecasting with ensembles of probable storms. One solution to address the problem of computational cost is to employ adaptive mesh refinement (AMR) algorithms. AMR functions by decomposing the computational domain into regions which may vary in resolution as time proceeds. Decomposing the domain as the flow evolves makes this class of methods effective at ensuring that computational effort is spent only where it is needed. AMR also allows for placement of computational resolution independent of user interaction and expectation of the dynamics of the flow, as well as particular regions of interest such as harbors. Simulations of many different applications have only been made possible by using AMR-type algorithms, which have allowed otherwise impractical simulations to be performed for much less computational expense. Our work involves studying how storm surge simulations can be improved with AMR algorithms. We have implemented relevant storm surge physics in the GeoClaw package and tested how Hurricane Ike's surge into Galveston Bay and up the Houston Ship Channel compares to available tide gauge data. We will also discuss issues dealing with refinement criteria, optimal resolution and refinement ratios, and inundation.
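
    The flag-and-refine cycle at the heart of AMR can be illustrated in one dimension. The sketch below is only a caricature of what GeoClaw's patch-based AMR actually does: it flags cells where the solution gradient exceeds a tolerance and splits just those cells. The tanh "surge front" and the tolerance are arbitrary choices for the example.

```python
import numpy as np

def refine_once(x: np.ndarray, u, tol: float) -> np.ndarray:
    """Return refined cell-center coordinates for a solution field u(x):
    cells whose gradient magnitude exceeds tol are split into two."""
    grad = np.abs(np.gradient(u(x), x))
    flagged = grad > tol
    new_x = []
    for xi, dxi, f in zip(x, np.gradient(x), flagged):
        if f:                                  # split flagged cell in two
            new_x.extend([xi - dxi / 4, xi + dxi / 4])
        else:
            new_x.append(xi)
    return np.array(new_x)

surge = lambda x: np.tanh(20 * (x - 0.5))      # sharp surge front at x = 0.5
x = np.linspace(0.0, 1.0, 41)
x_fine = refine_once(x, surge, tol=2.0)
print(f"{x.size} cells -> {x_fine.size} cells, refined near the front")
```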

  12. Regeneration of carboxylic acid-laden basic sorbents by leaching with a volatile base in an organic solvent

    DOEpatents

    King, C. Judson; Husson, Scott M.

    1999-01-01

    Carboxylic acids are sorbed from aqueous feedstocks onto a solid adsorbent. The acids are freed from the sorbent phase by treating it with an organic solution of an alkylamine, thus forming an alkylamine/carboxylic acid complex which is decomposed with improved efficiency to the desired carboxylic acid and the alkylamine. Carbon dioxide addition can be used to improve the adsorption of the carboxylic acids by the solid-phase sorbent.

  13. The Design, Synthesis and Screening of Potential Pyridinium Oxime Prodrugs.

    DTIC Science & Technology

    1985-07-31

    copper sulfate pentahydrate, and 15 g (87 mmol) of the mixture of bromopicolines 13c and 131. The combined reactions produced 27 g (96%) of a brown...extracted with ethyl ether. The ether extracts were washed with brine, dried with sodium sulfate, filtered and flashed. The residue was then purified by...stirring to the reaction mix. The addition was exothermic as the copper complexes decomposed. The cooled mixture was extracted with several 20 ml

  14. A finite element method for the thermochemical decomposition of polymeric materials. II - Carbon phenolic composites

    NASA Technical Reports Server (NTRS)

    Sullivan, R. M.; Salamon, N. J.

    1992-01-01

    A previously developed formulation for modeling the thermomechanical behavior of chemically decomposing, polymeric materials is verified by simulating the response of carbon phenolic specimens during two high temperature tests: restrained thermal growth and free thermal expansion. Plane strain and plane stress models are used to simulate the specimen response, respectively. In addition, the influence of the poroelasticity constants upon the specimen response is examined through a series of parametric studies.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, J. A. M.; Jiang, J.; Post, W. M.

    Carbon cycle models often lack explicit belowground organism activity, yet belowground organisms regulate carbon storage and release in soil. Ectomycorrhizal fungi are important players in the carbon cycle because they are a conduit into soil for carbon assimilated by the plant. It is hypothesized that ectomycorrhizal fungi can also be active decomposers when plant carbon allocation to fungi is low. Here, we reviewed the literature on ectomycorrhizal decomposition and we developed a simulation model of the plant-mycorrhizae interaction where a reduction in plant productivity stimulates ectomycorrhizal fungi to decompose soil organic matter. Our review highlights evidence demonstrating the potential for ectomycorrhizal fungi to decompose soil organic matter. Our model output suggests that ectomycorrhizal activity accounts for a portion of carbon decomposed in soil, but this portion varied with plant productivity and the mycorrhizal carbon uptake strategy simulated. Lower organic matter inputs to soil were largely responsible for reduced soil carbon storage. Using mathematical theory, we demonstrated that biotic interactions affect predictions of ecosystem functions. Specifically, we developed a simple function to model the mycorrhizal switch in function from plant symbiont to decomposer. In conclusion, we show that including mycorrhizal fungi with the flexibility of mutualistic and saprotrophic lifestyles alters predictions of ecosystem function.

  16. Preservation and rapid purification of DNA from decomposing human tissue samples.

    PubMed

    Sorensen, Amy; Rahman, Elizabeth; Canela, Cassandra; Gangitano, David; Hughes-Stamm, Sheree

    2016-11-01

    One of the key features to be considered in a mass disaster is victim identification. However, the recovery and identification of human remains are sometimes complicated by harsh environmental conditions, limited facilities, loss of electricity and lack of refrigeration. If human remains cannot be collected, stored, or identified immediately, bodies decompose and DNA degrades, making genotyping more difficult and ultimately decreasing DNA profiling success. In order to prevent further DNA damage and degradation after collection, tissue preservatives may be used. The goal of this study was to evaluate three customized (modified TENT, DESS, LST) and two commercial DNA preservatives (RNAlater and DNAgard®) on fresh and decomposed human skin and muscle samples stored in hot (35°C) and humid (60-70% relative humidity) conditions for up to three months. Skin and muscle samples were harvested from the thigh of three human cadavers placed outdoors for up to two weeks. In addition, the possibility of purifying DNA directly from the preservative solutions ("free DNA") was investigated in order to eliminate lengthy tissue digestion processes and increase throughput. The efficiency of each preservative was evaluated based on the quantity of DNA recovered from both the "free DNA" in solution and the tissue sample itself, in conjunction with the quality and completeness of downstream STR profiles. As expected, DNA quantity and STR success decreased with time of decomposition. However, a marked decrease in DNA quantity and STR quality was observed in all samples after the bodies entered the bloat stage (approximately six days of decomposition in this study). Similar amounts of DNA were retrieved from skin and muscle samples over time, but slightly more complete STR profiles were obtained from muscle tissue. Although higher amounts of DNA were recovered from tissue samples than from the surrounding preservative, the average number of reportable alleles from the "free DNA" was comparable. Overall, DNAgard® and the modified TENT buffer were the most successful tissue preservatives tested in this study, based on STR profile success from "free DNA" in solution when decomposing tissues were stored for up to three months in hot, humid conditions.

  17. Influence of Biopreparations on the Bacterial Community of Oily Waste

    NASA Astrophysics Data System (ADS)

    Biktasheva, L. R.; Galitskaya, P. Yu; Selivanovskaya, S. Yu

    2018-01-01

    Oil pollution is reported to be one of the most serious environmental problems nowadays. Therefore, methods for the remediation of oil-polluted soils and oily wastes are of great importance. Bioremediation, a promising method for the sanitation of oil pollution, includes biostimulation of the polluted sites' indigenous microflora and, in some cases, the additional introduction of active strains able to decompose hydrocarbons. The efficacy of introducing such biopreparations depends on the interactions between the introduced microbes and the indigenous ones. In this study, the influence of introducing a bacterial consortium (Rhodococcus jialingiae, Stenotrophomonas rhizophila and Pseudomonas gessardii) on the bioremediation of an oily waste sampled from a refinery situated in the Mari El region (Russia) was estimated. Single and multiple inoculations of the consortium, in addition to moistening and aeration, were compared with a control sample, which received only aeration and moistening of the waste. It was shown that gene copy numbers of two of the three introduced strains (Rh. jialingiae and Ps. gessardii) were higher in the inoculated variants than in the control sample and than their initial counts, which means that these strains survived and became part of the bacterial community of the waste. At the same time, bacterial counts were significantly lower, and the physiological profile of the waste microflora slightly altered, in the inoculated remediation variants as compared with the control sample. Interestingly, no difference in the degradation rates of hydrocarbons was revealed between the inoculated remediation variants and the control sample.

  18. Spatio-temporal imaging of the hemoglobin in the compressed breast with diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Boverman, Gregory; Fang, Qianqian; Carp, Stefan A.; Miller, Eric L.; Brooks, Dana H.; Selb, Juliette; Moore, Richard H.; Kopans, Daniel B.; Boas, David A.

    2007-07-01

    We develop algorithms for imaging the time-varying optical absorption within the breast given diffuse optical tomographic data collected over a time span that is long compared to the dynamics of the medium. Multispectral measurements allow for the determination of the time-varying total hemoglobin concentration and of oxygen saturation. To facilitate the image reconstruction, we decompose the hemodynamics in time into a linear combination of spatio-temporal basis functions, the coefficients of which are estimated using all of the data simultaneously, making use of a Newton-based nonlinear optimization algorithm. The solution of the extremely large least-squares problem which arises in computing the Newton update is obtained iteratively using the LSQR algorithm. A Laplacian spatial regularization operator is applied, and, in addition, we make use of temporal regularization which tends to encourage similarity between the images of the spatio-temporal coefficients. Results are shown for an extensive simulation, in which we are able to image and quantify localized changes in both total hemoglobin concentration and oxygen saturation. Finally, a breast compression study has been performed for a normal breast cancer screening subject, using an instrument which allows for highly accurate co-registration of multispectral diffuse optical measurements with an x-ray tomosynthesis image of the breast. We are able to quantify the global return of blood to the breast following compression, and, in addition, localized changes are observed which correspond to the glandular region of the breast.
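
    The regularized least-squares solve mentioned above can be sketched with an augmented system. Assuming a Tikhonov-type formulation min ||J dx - r||² + lam ||L dx||², where the Jacobian J, residual r, and 1-D Laplacian L below are synthetic stand-ins for the tomographic quantities, LSQR can be applied to the stacked operator:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(3)
m, n, lam = 300, 500, 1e-2
J = sp.random(m, n, density=0.01, random_state=3, format="csr")  # stand-in Jacobian
r = rng.normal(size=m)                                           # stand-in residual

# 1-D Laplacian regularization operator (a stand-in for the spatial Laplacian)
L = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csr")

# Stack J and sqrt(lam)*L so that ordinary least squares on the augmented
# system solves the regularized problem; LSQR handles the sparse solve.
A_aug = sp.vstack([J, np.sqrt(lam) * L])
b_aug = np.concatenate([r, np.zeros(n)])

dx = lsqr(A_aug, b_aug, atol=1e-8, btol=1e-8)[0]
print("update norm:", np.linalg.norm(dx))
```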

  19. A domain-decomposed multi-model plasma simulation of collisionless magnetic reconnection

    NASA Astrophysics Data System (ADS)

    Datta, I. A. M.; Shumlak, U.; Ho, A.; Miller, S. T.

    2017-10-01

    Collisionless magnetic reconnection is a process relevant to many areas of plasma physics in which energy stored in magnetic fields within highly conductive plasmas is rapidly converted into kinetic and thermal energy. Both in natural phenomena such as solar flares and terrestrial aurora as well as in magnetic confinement fusion experiments, the reconnection process is observed on timescales much shorter than those predicted by a resistive MHD model. As a result, this topic is an active area of research in which plasma models with varying fidelity have been tested in order to understand the proper physics explaining the reconnection process. In this research, a hybrid multi-model simulation employing the Hall-MHD and two-fluid plasma models on a decomposed domain is used to study this problem. The simulation is set up using the WARPXM code developed at the University of Washington, which uses a discontinuous Galerkin Runge-Kutta finite element algorithm and implements boundary conditions between models in the domain to couple their variable sets. The goal of the current work is to determine the parameter regimes most appropriate for each model to maintain sufficient physical fidelity over the whole domain while minimizing computational expense. This work is supported by a Grant from US AFOSR.

  20. Unified Generic Geometric-Decompositions for Consensus or Flocking Systems of Cooperative Agents and Fast Recalculations of Decomposed Subsystems Under Topology-Adjustments.

    PubMed

    Li, Wei

    2016-06-01

    This paper considers a unified geometric projection approach for: 1) decomposing a general system of cooperative agents coupled via Laplacian matrices or stochastic matrices and 2) deriving a centroid-subsystem and many shape-subsystems, where each shape-subsystem has the distinct properties (e.g., preservation of formation and stability of the original system, sufficiently simple structures and explicit formation evolution of agents, and decoupling from the centroid-subsystem) which will facilitate subsequent analyses. Particularly, this paper provides an additional merit of the approach: considering adjustments of coupling topologies of agents which frequently occur in system design (e.g., to add or remove an edge, to move an edge to a new place, and to change the weight of an edge), the corresponding new shape-subsystems can be derived by a few simple computations merely from the old shape-subsystems and without referring to the original system, which will provide further convenience for analysis and flexibility of choice. Finally, such fast recalculations of new subsystems under topology adjustments are provided with examples.
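
    The centroid/shape split can be illustrated on the simplest consensus dynamics. In the sketch below, assuming single-integrator agents dx/dt = -Lx on an arbitrary 4-agent path graph, the centroid (average) is invariant while the shape (deviation) part decays, mirroring the decoupled centroid-subsystem/shape-subsystem structure described above.

```python
import numpy as np

L = np.array([[ 1, -1,  0,  0],          # graph Laplacian of a 4-agent path
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)

x = np.array([4.0, -1.0, 3.0, 0.0])      # initial agent states
dt = 0.01
for _ in range(2000):                    # forward-Euler integration of dx/dt = -Lx
    x = x - dt * (L @ x)
    centroid = x.mean()                  # invariant: L has the all-ones null vector
    shape = x - centroid                 # decays to zero as agents reach consensus

print(f"centroid = {centroid:.6f} (initial mean was {6.0 / 4:.6f})")
print("final shape deviations:", np.round(shape, 6))
```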

  1. Normal Mode Analysis on the Relaxation of AN Excited Nitromethane Molecule in Argon Bath

    NASA Astrophysics Data System (ADS)

    Rivera-Rivera, Luis A.; Wagner, Albert F.

    2017-06-01

    In our previous work [Rivera-Rivera et al., J. Chem. Phys. 142, 014303 (2015)], classical molecular dynamics simulations followed, in an Ar bath, the relaxation of nitromethane (CH_3NO_2) instantaneously excited by statistically distributing 50 kcal/mol among all its internal degrees of freedom. The 300 K Ar bath was at pressures of 10 to 400 atm. Both rotational and vibrational energies exhibited multi-exponential decay. This study explores mode-specific mechanisms at work in the decay process. With the separation of rotation and vibration developed by Rhee and Kim [J. Chem. Phys. 107, 1394 (1997)], one can show that the vibrational kinetic energy decomposes only into vibrational normal modes, while the rotational and Coriolis energies decompose into both vibrational and rotational normal modes. The saved CH_3NO_2 positions and momenta can then be converted into mode-specific energies whose decay over 1000 ps can be monitored. The results identify vibrational and rotational modes that promote or resist energy loss and drive the multi-exponential behavior. In addition to mode-specificity, the results show disruption of IVR with increasing pressure.
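
    The mode-specific bookkeeping described above reduces to projecting mass-weighted velocities onto normal modes. A minimal sketch with a synthetic 3-degree-of-freedom system; the Hessian, masses, and snapshot velocities are illustrative, not the CH_3NO_2 data:

```python
import numpy as np

masses = np.array([12.0, 1.0, 16.0])                 # illustrative masses
H = np.array([[ 2.0, -1.0,  0.0],                    # illustrative Hessian
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

Minv_sqrt = np.diag(1.0 / np.sqrt(masses))
w2, modes = np.linalg.eigh(Minv_sqrt @ H @ Minv_sqrt)  # mass-weighted normal modes

v = np.array([0.3, -0.2, 0.1])                       # snapshot velocities
v_mw = np.sqrt(masses) * v                           # mass-weighted velocities
mode_velocities = modes.T @ v_mw                     # project onto each mode
mode_kinetic = 0.5 * mode_velocities**2              # kinetic energy per mode

print("mode frequencies^2:", np.round(w2, 4))
print("per-mode kinetic energy:", np.round(mode_kinetic, 4))
# orthonormal modes: the per-mode energies sum to the total kinetic energy
print("total KE check:", 0.5 * np.sum(masses * v**2), mode_kinetic.sum())
```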

  2. Bio-reduction of free and laden perchlorate by the pure and mixed perchlorate reducing bacteria: Considering the pH and coexisting nitrate.

    PubMed

    Shang, Yanan; Wang, Ziyang; Xu, Xing; Gao, Baoyu; Ren, Zhongfei

    2018-08-01

    A pure bacterial strain (Azospira sp. KJ) and mixed perchlorate reducing bacteria (MPRB) were employed to decompose free perchlorate in water as well as perchlorate laden on the surface of quaternary ammonium wheat residuals (QAWR). Results indicated that perchlorate was decomposed by Azospira sp. KJ prior to nitrate, while the MPRB showed the reverse preference. Bio-reduction of laden perchlorate by Azospira sp. KJ was optimal at pH 8.0; in contrast, bio-reduction of laden perchlorate by MPRB was optimal at pH 7.0. Generally, the rate of perchlorate reduction was controlled by the enzyme activity of the PRB. In addition, perchlorate recovery (26.0 mg/g) onto QAWR bio-regenerated by MPRB showed a small decrease compared with that (31.1 mg/g) by Azospira sp. KJ over the first 48 h. Basically, this study is expected to offer alternative ideas on the bio-regeneration of perchlorate-saturated adsorbents using biological processes, which may provide an economical alternative to conventional methods.

  3. Local geology determines responses of stream producers and fungal decomposers to nutrient enrichment: A field experiment.

    PubMed

    Mykrä, Heikki; Sarremejane, Romain; Laamanen, Tiina; Karjalainen, Satu Maaria; Markkola, Annamari; Lehtinen, Sirkku; Lehosmaa, Kaisa; Muotka, Timo

    2018-04-16

    We examined how short-term (19 days) nutrient enrichment influences stream fungal and diatom communities, and rates of leaf decomposition and algal biomass accrual. We conducted a field experiment using slow-releasing nutrient pellets to increase nitrate (NO3-N) and phosphate (PO4-P) concentrations in a riffle section of six naturally acidic (naturally low pH due to catchment geology) and six circumneutral streams. Nutrient enrichment increased the microbial decomposition rate on average by 14%, but the effect was significant only in naturally acidic streams. Nutrient enrichment also decreased the richness and increased the compositional variability of fungal communities in naturally acidic streams. Algal biomass increased in both stream types, but algal growth was overall very low. Diatom richness increased in response to nutrient addition, but only in circumneutral streams. Our results suggest that primary producers and decomposers are differentially affected by nutrient enrichment and that their responses to excess nutrients are context dependent, with a potentially stronger response of detrital processes and fungal communities in naturally acidic streams than in less selective environments.

  4. A novel method to decompose two potent greenhouse gases: photoreduction of SF6 and SF5CF3 in the presence of propene.

    PubMed

    Huang, Li; Shen, Yan; Dong, Wenbo; Zhang, Renxi; Zhang, Jianliang; Hou, Huiqi

    2008-03-01

    SF5CF3 and SF6 are the most effective greenhouse gases on a per-molecule basis in the atmosphere. An original laboratory trial of their photoreduction using propene as a reactant was performed to develop a novel technique for destroying them. The highly reductive radicals produced during the photolysis of propene at 184.9 nm, such as .CH3, .C2H3, and .C3H5, could efficiently decompose SF6 and SF5CF3 to CH4, elemental sulfur and trace amounts of fluorinated organic compounds. It was further demonstrated that the destruction and removal efficiency (DRE) of SF5X (X represents F or CF3) was highly dependent on the initial propene-to-SF5X ratio. The addition of certain amounts of oxygen and water vapor not only enhanced the DRE but also avoided the generation of deposits. In both systems, employing nitrogen as the dilution gas slightly lessened the DRE. Given the advantage of less toxic products, the technique might contribute to SF5X remediation.

  5. Method for preparing a thick film conductor

    DOEpatents

    Nagesh, Voddarahalli K.; Fulrath, deceased, Richard M.

    1978-01-01

    A method for preparing a thick film conductor which comprises providing surface active glass particles, mixing the surface active glass particles with a thermally decomposable organometallic compound, for example, a silver resinate, and then decomposing the organometallic compound by heating, thereby chemically depositing metal on the glass particles. The glass particle mixture is applied to a suitable substrate either before or after the organometallic compound is thermally decomposed. The resulting system is then fired in an oxidizing atmosphere, providing a microstructure of glass particles substantially uniformly coated with metal.

  6. Atom economy and green elimination of nitric oxide using ZrN powders.

    PubMed

    Chen, Ning; Wang, Jigang; Yin, Wenyan; Li, Zhen; Li, Peishen; Guo, Ming; Wang, Qiang; Li, Chunlei; Wang, Changzheng; Chen, Shaowei

    2018-05-01

    Nitric oxide (NO) may cause serious environmental and health problems, such as acid rain, haze, global warming and even death. Herein, a new low-cost, highly efficient and green method for the elimination of NO using zirconium nitride (ZrN) is reported for the first time, which does not produce any waste or by-products. Relevant experimental parameters, such as reaction temperature and gas concentration, were investigated to explore the reaction mechanism. Interestingly, NO can be easily decomposed into nitrogen (N2) by ZrN powders at 600°C, with the ZrN simultaneously and gradually transformed into zirconium dioxide (ZrO2). The time for the complete conversion of NO into N2 was approximately 14 h over 0.5 g of ZrN at a NO concentration of 500 ppm. This green elimination process for NO demonstrated good atom economy and practical significance for mitigating environmental problems.

  7. Decomposition Technique for Remaining Useful Life Prediction

    NASA Technical Reports Server (NTRS)

    Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor); Saxena, Abhinav (Inventor); Celaya, Jose R. (Inventor)

    2014-01-01

    The prognostic tool disclosed here decomposes the problem of estimating the remaining useful life (RUL) of a component or sub-system into two separate regression problems: the feature-to-damage mapping and the operational conditions-to-damage-rate mapping. These maps are initially generated in off-line mode. One or more regression algorithms are used to generate each of these maps from measurements (and features derived from these), operational conditions, and ground truth information. This decomposition technique allows for the explicit quantification and management of different sources of uncertainty present in the process. Next, the maps are used in an on-line mode where run-time data (sensor measurements and operational conditions) are used in conjunction with the maps generated in off-line mode to estimate both current damage state as well as future damage accumulation. Remaining life is computed by subtracting the instance when the extrapolated damage reaches the failure threshold from the instance when the prediction is made.
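
    A minimal sketch of the two-map decomposition is given below: a feature-to-damage regression and a condition-to-damage-rate regression are fit offline, then used online to extrapolate damage to a failure threshold. The quadratic/linear ground truths, noise levels, and threshold are synthetic, and real use would add the explicit uncertainty management the patent emphasizes.

```python
import numpy as np

rng = np.random.default_rng(4)

# --- offline: fit the two maps from (noisy) training data ------------------
features = rng.uniform(0, 1, 200)
damage = 0.8 * features**2 + 0.01 * rng.normal(size=200)
feat_to_damage = np.polynomial.Polynomial.fit(features, damage, deg=2)

loads = rng.uniform(0, 1, 200)
damage_rate = 0.02 + 0.05 * loads + 0.001 * rng.normal(size=200)
load_to_rate = np.polynomial.Polynomial.fit(loads, damage_rate, deg=1)

# --- online: current feature + expected future load -> RUL estimate --------
current_feature, future_load, threshold = 0.6, 0.5, 1.0
d_now = feat_to_damage(current_feature)   # map 1: feature -> damage state
rate = load_to_rate(future_load)          # map 2: condition -> damage rate
rul = (threshold - d_now) / rate          # linear damage extrapolation
print(f"damage now {d_now:.3f}, rate {rate:.4f}/cycle, RUL ~ {rul:.0f} cycles")
```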

  8. The Caltech Concurrent Computation Program - Project description

    NASA Technical Reports Server (NTRS)

    Fox, G.; Otto, S.; Lyzenga, G.; Rogstad, D.

    1985-01-01

    The Caltech Concurrent Computation Program, which studies basic issues in computational science, is described. The research builds on initial work in which novel concurrent hardware, the necessary systems software to use it, and twenty significant scientific implementations running on the initial 32-, 64-, and 128-node hypercube machines were constructed. A major goal of the program will be to extend this work into new disciplines and more complex algorithms, including general packages that decompose arbitrary problems in major application areas. New high-performance concurrent processors with up to 1024 nodes, over a gigabyte of memory and multigigaflop performance are being constructed. The implementations cover a wide range of problems in areas such as high energy physics and astrophysics, condensed matter, chemical reactions, plasma physics, applied mathematics, geophysics, simulation, CAD for VLSI, graphics and image processing. The products of the research program include the concurrent algorithms, hardware, systems software, and complete program implementations.

  9. Angular velocity of gravitational radiation from precessing binaries and the corotating frame

    NASA Astrophysics Data System (ADS)

    Boyle, Michael

    2013-05-01

    This paper defines an angular velocity for time-dependent functions on the sphere and applies it to gravitational waveforms from compact binaries. Because it is geometrically meaningful and has a clear physical motivation, the angular velocity is uniquely useful in helping to solve an important—and largely ignored—problem in models of compact binaries: the inverse problem of deducing the physical parameters of a system from the gravitational waves alone. It is also used to define the corotating frame of the waveform. When decomposed in this frame, the waveform has no rotational dynamics and is therefore as slowly evolving as possible. The resulting simplifications lead to straightforward methods for accurately comparing waveforms and constructing hybrids. As formulated in this paper, the methods can be applied robustly to both precessing and nonprecessing waveforms, providing a clear, comprehensive, and consistent framework for waveform analysis. Explicit implementations of all these methods are provided in accompanying computer code.

  10. An Ensemble Multilabel Classification for Disease Risk Prediction

    PubMed Central

    Liu, Wei; Zhao, Hongling; Zhang, Chaoyang

    2017-01-01

    It is important to identify and prevent disease risk as early as possible through regular physical examinations. We formulate disease risk prediction as a multilabel classification problem. A novel Ensemble Label Power-set Pruned datasets Joint Decomposition (ELPPJD) method is proposed in this work. First, we transform the multilabel classification into a multiclass classification. Then, we propose the pruned datasets and joint decomposition methods to deal with the imbalanced learning problem. Two strategies, size balanced (SB) and label similarity (LS), are designed to decompose the training dataset. In the experiments, the dataset comes from real physical examination records. We contrast the performance of the ELPPJD method with the two different decomposition strategies. Moreover, a comparison between ELPPJD and the classic multilabel classification methods RAkEL and HOMER is carried out. The experimental results show that the ELPPJD method with the label similarity strategy has outstanding performance. PMID:29065647
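
    The first step of ELPPJD, transforming the multilabel problem into a multiclass one, is the standard label power-set construction, sketched below with synthetic data and an off-the-shelf classifier. The pruned-datasets and joint-decomposition steps that give ELPPJD its name are not reproduced here.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 6))
Y = (X[:, :3] > 0).astype(int)            # 3 binary labels per sample (synthetic)

# Label power-set: each distinct label combination, e.g. (1, 0, 1), becomes
# one class of an ordinary multiclass problem.
to_class = {}
y_ps = np.array([to_class.setdefault(tuple(row), len(to_class)) for row in Y])
from_class = {c: np.array(t) for t, c in to_class.items()}

clf = DecisionTreeClassifier(random_state=0).fit(X, y_ps)

# Map multiclass predictions back to label sets.
pred_labels = np.stack([from_class[c] for c in clf.predict(X[:5])])
print("predicted label sets:\n", pred_labels)
```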

  11. Decentralized control of large flexible structures by joint decoupling

    NASA Technical Reports Server (NTRS)

    Su, Tzu-Jeng; Juang, Jer-Nan

    1994-01-01

    This paper presents a novel method to design decentralized controllers for large complex flexible structures by using the idea of joint decoupling. Decoupling of the joint degrees of freedom from the interior degrees of freedom is achieved by setting the joint actuator commands to cancel the internal forces exerted on the joint degrees of freedom. By doing so, the interactions between substructures are eliminated. The global structure control design problem is then decomposed into several substructure control design problems. Control commands for interior actuators are set to be localized state feedback, using decentralized observers for state estimation. The proposed decentralized controllers can operate successfully at the individual substructure level as well as at the global structure level. Not only the control design but also the control implementation is decentralized. A two-component mass-spring-damper system is used as an example to demonstrate the proposed method.

  12. A decomposition approach to the design of a multiferroic memory bit

    NASA Astrophysics Data System (ADS)

    Acevedo, Ruben; Liang, Cheng-Yen; Carman, Gregory P.; Sepulveda, Abdon E.

    2017-06-01

    The objective of this paper is to present a methodology for the design of a memory bit that minimizes the energy required to write data at the bit level. When a ferromagnetic nickel nano-dot is strained by means of a piezoelectric substrate, its magnetization vector rotates between two stable states, defined as a 1 and a 0 for digital memory. The memory bit geometry, actuation mechanism and voltage control law were used as design variables. The approach was to decompose the overall design process into simpler sub-problems whose structure can be exploited for a more efficient solution. This method minimizes the number of fully coupled dynamic finite element analyses required to converge to a near-optimal design, thus decreasing the computational time of the design process. An in-plane sample design problem is presented to illustrate the advantages and flexibility of the procedure.

  13. Comparison of dual and single exposure techniques in dual-energy chest radiography.

    PubMed

    Ho, J T; Kruger, R A; Sorenson, J A

    1989-01-01

    Conventional chest radiography is the most effective tool for lung cancer detection and diagnosis; nevertheless, a high percentage of lung cancer tumors are missed because of the overlap of lung nodule image contrast with bone image contrast in a chest radiograph. Two different energy subtraction strategies, the dual exposure and single exposure techniques, were studied for decomposing a radiograph into bone-free and soft tissue-free images to address this problem. To compare the efficiency of these two techniques for lung nodule detection, their performance was evaluated on the basis of residual tissue contrast, energy separation, and signal-to-noise ratio. The evaluation was based on both computer simulation and experimental verification. The dual exposure technique was found to be better than the single exposure technique because of its higher signal-to-noise ratio and greater residual tissue contrast. However, x-ray tube loading and patient motion are problems.
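
    The decomposition underlying both techniques is a weighted log-domain subtraction. A minimal sketch, with illustrative (not calibrated) attenuation coefficients: choosing the weight as the ratio of bone attenuations at the two energies cancels the bone signal, leaving a soft-tissue-only image.

```python
import numpy as np

# Illustrative linear attenuation coefficients at (high-kVp, low-kVp).
mu = {"tissue": (0.20, 0.25), "bone": (0.40, 0.80)}
t_tissue = np.full((4, 4), 10.0)                      # uniform tissue thickness
t_bone = np.zeros((4, 4)); t_bone[1:3, 1:3] = 2.0     # a "rib" in the middle

# Log-domain projections at the two energies (Beer-Lambert).
log_high = mu["tissue"][0] * t_tissue + mu["bone"][0] * t_bone
log_low  = mu["tissue"][1] * t_tissue + mu["bone"][1] * t_bone

w = mu["bone"][0] / mu["bone"][1]                     # weight that cancels bone
soft_only = log_high - w * log_low                    # bone term drops out
print("bone contrast removed:", np.ptp(soft_only) < 1e-12)
```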

  14. Three-Component Decomposition of Polarimetric SAR Data Integrating Eigen-Decomposition Results

    NASA Astrophysics Data System (ADS)

    Lu, Da; He, Zhihua; Zhang, Huan

    2018-01-01

    This paper presents a novel three-component scattering power decomposition of polarimetric SAR data. There are two problems with the three-component decomposition method: overestimation of the volume scattering component in urban areas, and a parameter that is artificially set to a fixed value. Though volume scattering overestimation can be partly solved by a deorientation process, volume scattering still dominates some oriented urban areas. The speckle-like decomposition results introduced by the artificially set value are not conducive to further image interpretation. This paper integrates the results of eigen-decomposition to solve the aforementioned problems. Two principal eigenvectors are used to substitute for the surface scattering model and the double-bounce scattering model. The decomposed scattering powers are obtained using a constrained linear least-squares method. The proposed method has been verified using an ESAR PolSAR image, and the results show that it performs better in urban areas.

  15. Investigation of automated task learning, decomposition and scheduling

    NASA Technical Reports Server (NTRS)

    Livingston, David L.; Serpen, Gursel; Masti, Chandrashekar L.

    1990-01-01

    The details and results of research on the application of neural networks to task planning and decomposition are presented. Task planning and decomposition are operations that humans perform in a reasonably efficient manner. Without good heuristics, and usually much human interaction, automatic planners and decomposers generally do not perform well due to the intractable nature of the problems under consideration. The human-like performance of neural networks has shown promise for generating acceptable solutions to intractable problems such as planning and decomposition; this was the primary motivation for the study. The basis for the work is the use of state machines to model tasks. State machine models provide a useful means for examining the structure of tasks, since many formal techniques have been developed for their analysis and synthesis. The approach integrates the strong algebraic foundations of state machines with the heretofore trial-and-error approach to neural network synthesis.

  16. Network exploitation using WAMI tracks

    NASA Astrophysics Data System (ADS)

    Rimey, Ray; Record, Jim; Keefe, Dan; Kennedy, Levi; Cramer, Chris

    2011-06-01

    Creating and exploiting network models from wide area motion imagery (WAMI) is an important task for intelligence analysis. Tracks of entities observed moving in the WAMI sensor data are extracted, then large numbers of tracks are studied over long time intervals to determine specific locations that are visited (e.g., buildings in an urban environment), what locations are related to other locations, and the function of each location. This paper describes several parts of the network detection/exploitation problem, and summarizes a solution technique for each: (a) Detecting nodes; (b) Detecting links between known nodes; (c) Node attributes to characterize a node; (d) Link attributes to characterize each link; (e) Link structure inferred from node attributes and vice versa; and (f) Decomposing a detected network into smaller networks. Experimental results are presented for each solution technique, and those are used to discuss issues for each problem part and its solution technique.

  17. Extension of the frequency-domain pFFT method for wave structure interaction in finite depth

    NASA Astrophysics Data System (ADS)

    Teng, Bin; Song, Zhi-jie

    2017-06-01

    To analyze wave interaction with a large-scale body in the frequency domain, a precorrected Fast Fourier Transform (pFFT) method has been proposed for infinite-depth problems with the deep-water Green function, as it can form a matrix with Toeplitz and Hankel properties. In this paper, a method is proposed to decompose the finite-depth Green function into two terms, which form matrices with Toeplitz and Hankel properties, respectively. Then, a pFFT method for finite-depth problems is developed. Based on the pFFT method, a numerical code, pFFT-HOBEM, is developed with a discretization using high-order elements. The model is validated, and examinations of the computing efficiency and memory requirements of the new method have also been carried out. It is shown that the new method has the same advantages as that for infinite depth.
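
    The reason Toeplitz (and, analogously, Hankel) structure matters here is that it permits O(N log N) matrix-vector products via circulant embedding and the FFT, which is what the pFFT method exploits. A minimal sketch with a generic decaying kernel standing in for the decomposed Green-function coefficients:

```python
import numpy as np

def toeplitz_matvec(col: np.ndarray, row: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Multiply the Toeplitz matrix with first column `col` and first row `row`
    (col[0] == row[0]) by x, via circulant embedding and FFT."""
    n = len(x)
    c = np.concatenate([col, [0.0], row[:0:-1]])      # first column of circulant
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, len(c)))
    return y[:n].real

n = 512
col = 1.0 / (1.0 + np.arange(n))                      # generic decaying kernel
row = 1.0 / (1.0 + 0.5 * np.arange(n))
row[0] = col[0]
x = np.random.default_rng(6).normal(size=n)

# Check against the dense product.
from scipy.linalg import toeplitz
assert np.allclose(toeplitz_matvec(col, row, x), toeplitz(col, row) @ x)
print("FFT-based Toeplitz product matches the dense product")
```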

  18. Diversity of Riparian Plants among and within Species Shapes River Communities

    PubMed Central

    Jackrel, Sara L.; Wootton, J. Timothy

    2015-01-01

    Organismal diversity among and within species may affect ecosystem function with effects transmitting across ecosystem boundaries. Whether recipient communities adjust their composition, in turn, to maximize their function in response to changes in donor composition at these two scales of diversity is unknown. We use small stream communities that rely on riparian subsidies as a model system. We used leaf pack experiments to ask how variation in plants growing beside streams in the Olympic Peninsula of Washington State, USA affects stream communities via leaf subsidies. Leaves from red alder (Alnus rubra), vine maple (Acer circinatum), bigleaf maple (Acer macrophyllum) and western hemlock (Tsuga heterophylla) were assembled in leaf packs to contrast low versus high diversity, and deployed in streams to compare local versus non-local leaf sources at the among and within species scales. Leaves from individuals within species decomposed at varying rates; most notably thin leaves decomposed rapidly. Among deciduous species, vine maple decomposed most rapidly, harbored the least algal abundance, and supported the greatest diversity of aquatic invertebrates, while bigleaf maple was at the opposite extreme for these three metrics. Recipient communities decomposed leaves from local species rapidly: leaves from early successional plants decomposed rapidly in stream reaches surrounded by early successional forest and leaves from later successional plants decomposed rapidly adjacent to later successional forest. The species diversity of leaves inconsistently affected decomposition, algal abundance and invertebrate metrics. Intraspecific diversity of leaf packs also did not affect decomposition or invertebrate diversity. However, locally sourced alder leaves decomposed more rapidly and harbored greater levels of algae than leaves sourced from conspecifics growing in other areas on the Olympic Peninsula, but did not harbor greater aquatic invertebrate diversity. In contrast to alder, local intraspecific differences via decomposition, algal or invertebrate metrics were not observed consistently among maples. These results emphasize that biodiversity of riparian subsidies at the within and across species scale have the potential to affect aquatic ecosystems, although there are complex species-specific effects. PMID:26539714

  19. Diversity of Riparian Plants among and within Species Shapes River Communities.

    PubMed

    Jackrel, Sara L; Wootton, J Timothy

    2015-01-01

    Organismal diversity among and within species may affect ecosystem function with effects transmitting across ecosystem boundaries. Whether recipient communities adjust their composition, in turn, to maximize their function in response to changes in donor composition at these two scales of diversity is unknown. We use small stream communities that rely on riparian subsidies as a model system. We used leaf pack experiments to ask how variation in plants growing beside streams in the Olympic Peninsula of Washington State, USA affects stream communities via leaf subsidies. Leaves from red alder (Alnus rubra), vine maple (Acer circinatum), bigleaf maple (Acer macrophyllum) and western hemlock (Tsuga heterophylla) were assembled in leaf packs to contrast low versus high diversity, and deployed in streams to compare local versus non-local leaf sources at the among and within species scales. Leaves from individuals within species decomposed at varying rates; most notably thin leaves decomposed rapidly. Among deciduous species, vine maple decomposed most rapidly, harbored the least algal abundance, and supported the greatest diversity of aquatic invertebrates, while bigleaf maple was at the opposite extreme for these three metrics. Recipient communities decomposed leaves from local species rapidly: leaves from early successional plants decomposed rapidly in stream reaches surrounded by early successional forest and leaves from later successional plants decomposed rapidly adjacent to later successional forest. The species diversity of leaves inconsistently affected decomposition, algal abundance and invertebrate metrics. Intraspecific diversity of leaf packs also did not affect decomposition or invertebrate diversity. However, locally sourced alder leaves decomposed more rapidly and harbored greater levels of algae than leaves sourced from conspecifics growing in other areas on the Olympic Peninsula, but did not harbor greater aquatic invertebrate diversity. In contrast to alder, local intraspecific differences via decomposition, algal or invertebrate metrics were not observed consistently among maples. These results emphasize that biodiversity of riparian subsidies at the within and across species scale have the potential to affect aquatic ecosystems, although there are complex species-specific effects.

  20. Ozone decomposing filter

    DOEpatents

    Simandl, Ronald F.; Brown, John D.; Whinnery, Jr., LeRoy L.

    1999-01-01

    In an improved ozone-decomposing air filter, carbon fibers are held together with a carbonized binder in a perforated structure. The structure is made by combining rayon fibers with gelatin, forming the mixture in a mold, freeze-drying, and vacuum baking.

  1. Reactive codoping of GaAlInP compound semiconductors

    DOEpatents

    Hanna, Mark Cooper [Boulder, CO; Reedy, Robert [Golden, CO

    2008-02-12

    A GaAlInP compound semiconductor and a method of producing a GaAlInP compound semiconductor are provided. The apparatus and method comprise a GaAs crystal substrate in a metal organic vapor deposition reactor. Al, Ga and In vapors are prepared by thermally decomposing organometallic compounds; P vapors are prepared by thermally decomposing phosphine gas; group II vapors are prepared by thermally decomposing an organometallic group IIA or IIB compound; and group VIB vapors are prepared by thermally decomposing a gaseous compound of group VIB. The Al, Ga, In, P, group II, and group VIB vapors grow a GaAlInP crystal doped with group IIA or IIB and group VIB elements on the substrate, wherein the group IIA or IIB and group VIB vapors produce a codoped GaAlInP compound semiconductor with the group IIA or IIB element serving as a p-type dopant having low group II atomic diffusion.

  2. sdg interacting-boson model in the SU(3) scheme and its application to 168Er

    NASA Astrophysics Data System (ADS)

    Yoshinaga, N.; Akiyama, Y.; Arima, A.

    1988-07-01

    The sdg interacting-boson model is presented in the SU(3) tensor formalism. The interactions are decomposed according to their SU(3) tensor character. The existence of the SU(3)-seniority preserving operator is found to be important. The model is applied to 168Er. Energy levels and electromagnetic transitions are calculated. This model is shown to solve the problem of anharmonicity regarding the excitation energy of the first Kπ=4+ band relative to that of the first Kπ=2+ one. E4 transitions are calculated to give predictions different from those of the quasiparticle-phonon nuclear model.

  3. Magnetic resonance imaging as a tool for extravehicular activity analysis

    NASA Technical Reports Server (NTRS)

    Dickenson, R.; Lorenz, C.; Peterson, S.; Strauss, A.; Main, J.

    1992-01-01

    The purpose of this research is to examine the value of magnetic resonance imaging (MRI) as a means of conducting kinematic studies of the hand for the purpose of EVA capability enhancement. After imaging the subject hand using a magnetic resonance scanner, the resulting 2D slices were reconstructed into a 3D model of the proximal phalanx of the left hand. Using the coordinates of several landmark positions, one is then able to decompose the motion of the rigid body. MRI offers highly accurate measurements due to its tomographic nature without the problems associated with other imaging modalities for in vivo studies.

  4. On the strain energy of laminated composite plates

    NASA Technical Reports Server (NTRS)

    Atilgan, Ali R.; Hodges, Dewey H.

    1991-01-01

    The present effort to obtain the asymptotically correct form of the strain energy in inhomogeneous laminated composite plates proceeds from the geometrically nonlinear elastic theory-based three-dimensional strain energy by decomposing the nonlinear three-dimensional problem into a linear, through-the-thickness analysis and a nonlinear, two-dimensional plate analysis. Attention is given to the case in which each lamina exhibits material symmetry about its middle surface, deriving closed-form analytical expressions for the plate elastic constants and the displacement and strain distributions through the plate's thickness. Despite the simplicity of the plate strain energy's form, there are no restrictions on the magnitudes of displacement and rotation measures.

  5. Optimization-based manufacturing scheduling with multiple resources and setup requirements

    NASA Astrophysics Data System (ADS)

    Chen, Dong; Luh, Peter B.; Thakur, Lakshman S.; Moreno, Jack, Jr.

    1998-10-01

    The increasing demand for on-time delivery and low price forces manufacturers to seek effective schedules that improve the coordination of multiple resources and reduce product internal costs associated with labor, setup and inventory. This study describes the design and implementation of a scheduling system for J. M. Product Inc., whose manufacturing is characterized by the need to simultaneously consider machines and operators, where an operator may attend several operations at the same time, and by the presence of machines requiring significant setup times. Scheduling problems with these characteristics are typical for many manufacturers, are very difficult to handle, and have not been adequately addressed in the literature. In this study, both machines and operators are modeled as resources with finite capacities to obtain efficient coordination between them, and an operator's time can be shared by several operations at the same time to make full use of the operator. Setups are explicitly modeled following our previous work, with additional penalties on excessive setups to reduce setup costs and avoid possible scrap. An integer formulation with a separable structure is developed to maximize on-time delivery of products, low inventory, and a small number of setups. Within the Lagrangian relaxation framework, the problem is decomposed into individual subproblems that are effectively solved by using dynamic programming with the additional penalties embedded in state transitions. A heuristic is then developed, following our previous work, to obtain a feasible schedule, with a new mechanism to satisfy operator capacity constraints. The method has been implemented in the object-oriented programming language C++ with a user-friendly interface, and numerical testing shows that it generates high-quality schedules in a timely fashion. Through simultaneous consideration of machines and operators, the two resources are well coordinated to facilitate the smooth flow of parts through the system. The explicit modeling of setups and the associated penalties lets parts with the same setup requirements be clustered together to avoid excessive setups.
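
    To make the decomposition concrete, the sketch below shows the generic Lagrangian relaxation loop this abstract describes: coupling capacity constraints are relaxed with multipliers, each part is scheduled independently, and the multipliers are updated by a projected subgradient step. The one-line subproblem solver is a hypothetical stand-in for the paper's dynamic program with setup penalties, and all data are invented for illustration.

```python
import numpy as np

def solve_subproblem(part, lam):
    """Hypothetical stand-in for the paper's dynamic-programming subproblem:
    given current multipliers lam (one per time slot), pick the cheapest slot.
    The part index is ignored here; a real DP would use part-specific data."""
    t = int(np.argmin(lam))
    usage = np.zeros_like(lam)
    usage[t] = 1.0                      # this part occupies one capacity unit at t
    return lam[t], usage

def lagrangian_schedule(parts, capacity, iters=50, step0=1.0):
    lam = np.zeros_like(capacity)       # multipliers for relaxed capacity constraints
    for k in range(1, iters + 1):
        total_usage = np.zeros_like(capacity)
        for p in parts:                 # subproblems decouple once coupling is relaxed
            _, usage = solve_subproblem(p, lam)
            total_usage += usage
        g = total_usage - capacity      # subgradient of the dual function
        lam = np.maximum(0.0, lam + (step0 / k) * g)   # projected subgradient step
    return lam

print(lagrangian_schedule(parts=range(5), capacity=np.full(8, 1.0)))
```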

  6. The Effect of Soil Warming on Decomposition of Biochar, Wood, and Bulk Soil Organic Carbon in Contrasting Temperate and Tropical Soils

    NASA Astrophysics Data System (ADS)

    Torn, Margaret; Tas, Neslihan; Reichl, Ken; Castanha, Cristina; Fischer, Marc; Abiven, Samuel; Schmidt, Michael; Brodie, Eoin; Jansson, Janet

    2013-04-01

    Biochar and wood are known to decay at different rates in soil, but the long-term effect of char versus unaltered wood inputs on soil carbon dynamics may vary by soil ecosystem and by their sensitivity to warming. We conducted an incubation experiment to explore three questions: (1) How do decomposition rates of char and wood vary with soil type and depth? (2) How vulnerable to warming are these slowly decomposing inputs? And (3) do char or wood additions increase loss of native soil organic carbon (priming)? Soils from a Mediterranean grassland (Hopland Experimental Research Station, California) and a moist tropical forest (Tabonuco Forest, Puerto Rico) were collected from two soil depths and incubated at ambient temperature (14°C and 20°C for Hopland and Tabonuco, respectively) and ambient +6°C. We added 13C-labeled wood and char (made from the wood at 450°C) to the soils and quantified CO2 and 13CO2 fluxes with continuous online carbon isotope measurements using a cavity ringdown spectrometer (Picarro, Inc.) for one year. As expected, in all treatments the wood decomposed much (about 50 times) more quickly than did the char amendment. With few exceptions, amendments placed in the surface soil decomposed more quickly than those in deeper soil, and amendments in forest soil decomposed faster than those in grassland soil at the same temperature. The two substrates were not very temperature sensitive: both had Q10 less than 2, and char decomposition in particular was relatively insensitive to warming. Finally, the addition of wood caused a significant increase of roughly 30% in decomposition losses of the native soil organic carbon in the grassland, and slightly less in the forest. Char had only a slight positive priming effect but a significant effect on the microbial community. These results show that conversion of wood inputs to char through wildfire or intentional management will alter not only the persistence of the carbon in soil but also its temperature response and effect on microbial communities.

  7. Super-resolution with an SLM and two intensity images

    NASA Astrophysics Data System (ADS)

    Alcalá Ochoa, Noé; de León, Y. Ponce

    2018-06-01

    We report a method that may simplify the optical setups used to achieve super-resolution through the amplitude multiplication of two waves. To this end, we decompose a super-resolving pupil into two complex masks and, with the aid of a liquid-crystal-on-silicon (LCoS) spatial light modulator, obtain two intensity images that are subtracted. With this proposal, the traditional experimental optical setups are considerably simplified, with the additional benefit that different masks can be used without realigning the setup each time.

  8. Estimate of fine root production including the impact of decomposed roots in a Bornean tropical rainforest

    NASA Astrophysics Data System (ADS)

    Katayama, Ayumi; Khoon Koh, Lip; Kume, Tomonori; Makita, Naoki; Matsumoto, Kazuho; Ohashi, Mizue

    2016-04-01

    Considerable carbon is allocated belowground and used for respiration and production of roots. Approximately 40% of GPP is reportedly allocated belowground in a Bornean tropical rainforest, much more than in Neotropical rainforests. This may be caused by high root production in this forest. The ingrowth core is a popular method for estimating fine root production, but a recent study by Osawa et al. (2012) showed that this method can underestimate production because it does not account for roots that decompose during the measurement interval. Accounting for decomposed roots is especially important in the tropics, where decomposition rates are higher than in other regions. The objective of this study was therefore to estimate fine root production while accounting for decomposed roots, using ingrowth cores and root litter bags in a tropical rainforest. The study was conducted in Lambir Hills National Park in Borneo. Ingrowth cores and litter bags for fine roots were buried in March 2013. Eighteen ingrowth cores and 27 litter bags were collected in May and September 2013, March 2014, and March 2015. Fine root production was comparable to aboveground biomass increment and litterfall amount, and accounted for only 10% of GPP in this study site, suggesting most of the carbon allocated belowground might be used for other purposes. Fine root production was comparable to that in the Neotropics. Decomposed roots accounted for 18% of fine root production. This result suggests that neglecting decomposed fine roots may cause underestimation of fine root production.

  9. Flawed foundations of associationism? Comments on Machado and Silva (2007).

    PubMed

    Gallistel, C R

    2007-10-01

    A. Machado and F. J. Silva have spotted an important conceptual problem in scalar expectancy theory's account of the 2-standard-interval time-left experiment. C. R. Gallistel and J. Gibbon (2000) were aware of it but did not discuss it for historical and sociological reasons, owned up to in this article. A problem of broader significance for psychology, cognitive science, neuroscience, and the philosophy of mind concerns the closely related concepts of a trial and of temporal pairing, which are foundational in associative theories of learning and memory. Association formation is assumed to depend on the temporal pairing of the to-be-associated events. In modeling it, theorists have assumed continuous time to be decomposable into trials. But life is not composed of trials, and attempts to specify the conditions under which two events may be regarded as temporally paired have never succeeded. Thus, associative theories of learning and memory are built on conceptual sand. Undeterred, neuroscientists have defined the neurobiology-of-memory problem as the problem of determining the cellular and molecular mechanism of association formation, and connectionist modelers have made it a cornerstone of their efforts. More conceptual analysis is indeed needed. Copyright 2007 APA, all rights reserved.

  10. Guaranteed Discrete Energy Optimization on Large Protein Design Problems.

    PubMed

    Simoncini, David; Allouche, David; de Givry, Simon; Delmas, Céline; Barbe, Sophie; Schiex, Thomas

    2015-12-08

    In Computational Protein Design (CPD), assuming a rigid backbone and an amino-acid rotamer library, the problem of finding a sequence with an optimal conformation is NP-hard. In this paper, using Dunbrack's rotamer library and the Talaris2014 decomposable energy function, we use an exact deterministic method combining branch and bound, arc consistency, and tree decomposition to provably identify the global minimum energy sequence-conformation on full-redesign problems, defining search spaces of size up to 10^234. This is achieved on a single core of a standard computing server, requiring a maximum of 66 GB RAM. A variant of the algorithm is able to exhaustively enumerate all sequence-conformations within an energy threshold of the optimum. These proven optimal solutions are then used to evaluate the frequencies and amplitudes, in energy and sequence, at which an existing CPD-dedicated simulated annealing implementation may miss the optimum on these full-redesign problems. The probability of finding an optimum drops close to 0 very quickly. In the worst case, despite 1,000 repeats, the annealing algorithm remained more than 1 Rosetta unit away from the optimum, leading to design sequences that could differ from the optimal sequence by more than 30% of their amino acids.
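
    The following toy sketch illustrates only the branch-and-bound ingredient of such exact methods on a decomposable energy E = Σi Ei(ri) + Σi<j Eij(ri, rj); it omits the arc consistency and tree-decomposition machinery the paper combines it with, and uses random non-negative energies rather than Talaris2014 values.

```python
import math, random

random.seed(0)
n, k = 6, 4                                      # positions and rotamers per position
E1 = [[random.random() for _ in range(k)] for _ in range(n)]
E2 = {(i, j): [[random.random() for _ in range(k)] for _ in range(k)]
      for i in range(n) for j in range(i + 1, n)}

def energy(assign):
    e = sum(E1[i][r] for i, r in enumerate(assign))
    return e + sum(E2[(i, j)][assign[i]][assign[j]]
                   for i in range(len(assign)) for j in range(i + 1, len(assign)))

def lower_bound(partial):
    """Energy of the assigned part plus an optimistic bound for each free
    position (admissible here because all energy terms are non-negative)."""
    m, e = len(partial), energy(partial)
    for j in range(m, n):
        e += min(E1[j][r] + sum(E2[(i, j)][partial[i]][r] for i in range(m))
                 for r in range(k))
    return e

best = [math.inf, None]
def branch(partial):
    if lower_bound(partial) >= best[0]:
        return                                   # prune: cannot beat the incumbent
    if len(partial) == n:
        best[0], best[1] = energy(partial), tuple(partial)
        return
    for r in range(k):
        branch(partial + [r])

branch([])
print(best)                                      # global minimum energy assignment
```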

  11. Knowledge-based approach to system integration

    NASA Technical Reports Server (NTRS)

    Blokland, W.; Krishnamurthy, C.; Biegl, C.; Sztipanovits, J.

    1988-01-01

    To solve complex problems one can often use the decomposition principle. However, a problem is seldom decomposable into completely independent subproblems. System integration deals with the problem of resolving these interdependencies and integrating the subsolutions. A natural method of decomposition is the hierarchical one: high-level specifications are broken down into lower-level specifications until they can be transformed into solutions relatively easily. By automating the hierarchical decomposition and solution generation, an integrated system is obtained in which the declaration of high-level specifications is enough to solve the problem. We offer a knowledge-based approach to integrating the development and building of control systems. Process modeling is supported by graphic editors: the user selects and connects icons that represent subprocesses and may refer to prewritten programs. The graphical editor assists the user in selecting parameters for each subprocess and allows the testing of a specific configuration. Next, from the definitions created by the graphical editor, the actual control program is built; fault-diagnosis routines are generated automatically as well. Since the user is not required to write program code, and knowledge about the process is present in the development system, the user does not need expertise in many fields.

  12. PEROXIDE DESTRUCTION TESTING FOR THE 200 AREA EFFLUENT TREATMENT FACILITY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HALGREN DL

    2010-03-12

    The hydrogen peroxide decomposer columns at the 200 Area Effluent Treatment Facility (ETF) have been taken out of service due to ongoing problems with particulate fines and poor destruction performance from the granular activated carbon (GAC) used in the columns. An alternative search was initiated and led to bench scale testing and then pilot scale testing. Based on the bench scale testing, three manganese dioxide based catalysts were evaluated in the peroxide destruction pilot column installed at the 300 Area Treated Effluent Disposal Facility. The ten-inch-diameter, nine-foot-tall, clear polyvinyl chloride (PVC) column allowed for the same six-foot catalyst bed depth as in the existing ETF system. The flow rate to the column was controlled to evaluate the performance at the same superficial velocity (gpm/ft²) as the full-scale design flow and normal process flow. Each catalyst was evaluated on peroxide destruction performance and particulate fines capacity and carryover. Peroxide destruction was measured by hydrogen peroxide concentration analysis of samples taken before and after the column. The presence of fines in the column headspace and the discharge from carryover was generally assessed by visual observation. All three catalysts met the peroxide destruction criteria by achieving hydrogen peroxide discharge concentrations of less than 0.5 mg/L at the design flow with inlet peroxide concentrations greater than 100 mg/L. The Sud-Chemie T-2525 catalyst was markedly better in the minimization of fines and particle carryover. It is anticipated the T-2525 can be installed as a direct replacement for the GAC in the peroxide decomposer columns. Based on the results of the peroxide method development work, the recommendation is to purchase the T-2525 catalyst and initially load one of the ETF decomposer columns for full-scale testing.

  13. Data Mining and Optimization Tools for Developing Engine Parameters Tools

    NASA Technical Reports Server (NTRS)

    Dhawan, Atam P.

    1998-01-01

    This project was awarded for understanding the problem and developing a plan for data mining tools for use in designing and implementing an Engine Condition Monitoring System. From the total budget of $5,000, Tricia and I studied the problem domain for developing an Engine Condition Monitoring system using the sparse and non-standardized datasets to be made available through a consortium at NASA Lewis Research Center. We visited NASA three times to discuss additional issues related to the dataset that had not yet been made available to us. We discussed and developed a general framework of data mining and optimization tools to extract useful information from sparse and non-standard datasets. These discussions led to the training of Tricia Erhardt to develop genetic algorithm (GA) based search programs, which were written in C++ and used to demonstrate the capability of a GA to search for an optimal solution in noisy datasets. From the study and discussions with NASA LeRC personnel, we then prepared a proposal, being submitted to NASA, for future work on the development of data mining algorithms for engine condition monitoring. The proposed set of algorithms uses wavelet processing to create a multi-resolution pyramid of the data for GA-based multi-resolution optimal search. Wavelet processing creates a coarse-resolution representation of the data, providing two advantages in GA-based search: (1) there is less data to begin with when forming search subspaces, and (2) robustness against noise, because each level of the wavelet decomposition splits the signal into low-pass and high-pass bands.
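
    A minimal sketch of the proposed wavelet step, assuming the PyWavelets package and a synthetic signal in place of real engine data: a few levels of decomposition yield a short, denoised approximation that a GA could search before refining at finer scales.

```python
import numpy as np
import pywt   # PyWavelets

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.3 * rng.standard_normal(1024)

# Three-level decomposition: coeffs[0] is the coarse approximation,
# the remaining entries hold detail (high-pass) coefficients per level.
coeffs = pywt.wavedec(signal, 'db4', level=3)
coarse = coeffs[0]

print(len(signal), '->', len(coarse))   # a much smaller search space for the GA
```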

  14. Recovery of sparse translation-invariant signals with continuous basis pursuit

    PubMed Central

    Ekanadham, Chaitanya; Tranchina, Daniel; Simoncelli, Eero

    2013-01-01

    We consider the problem of decomposing a signal into a linear combination of features, each a continuously translated version of one of a small set of elementary features. Although these constituents are drawn from a continuous family, most current signal decomposition methods rely on a finite dictionary of discrete examples selected from this family (e.g., shifted copies of a set of basic waveforms), and apply sparse optimization methods to select and solve for the relevant coefficients. Here, we generate a dictionary that includes auxiliary interpolation functions that approximate translates of features via adjustment of their coefficients. We formulate a constrained convex optimization problem, in which the full set of dictionary coefficients represents a linear approximation of the signal, the auxiliary coefficients are constrained so as to only represent translated features, and sparsity is imposed on the primary coefficients using an L1 penalty. The basis pursuit denoising (BP) method may be seen as a special case, in which the auxiliary interpolation functions are omitted, and we thus refer to our methodology as continuous basis pursuit (CBP). We develop two implementations of CBP for a one-dimensional translation-invariant source, one using a first-order Taylor approximation, and another using a form of trigonometric spline. We examine the tradeoff between sparsity and signal reconstruction accuracy in these methods, demonstrating empirically that trigonometric CBP substantially outperforms Taylor CBP, which in turn offers substantial gains over ordinary BP. In addition, the CBP bases can generally achieve equally good or better approximations with much coarser sampling than BP, leading to a reduction in dictionary dimensionality. PMID:24352562
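
    A minimal sketch of the first-order Taylor variant on synthetic data, assuming the cvxpy package for the convex program: the dictionary holds Gaussian waveforms F and their derivatives Fd with respect to the center, the auxiliary coefficients b are constrained to represent subsample shifts, and an L1 penalty enforces sparsity on the primary coefficients a. The recovered center is read off as c_i + b_i/a_i, illustrating how translates between grid points are captured without a finer dictionary.

```python
import numpy as np
import cvxpy as cp

t = np.linspace(-1.0, 1.0, 200)
centers = np.linspace(-0.8, 0.8, 17)                 # coarse grid with spacing delta
delta = centers[1] - centers[0]
F  = np.stack([np.exp(-(t - c) ** 2 / 0.01) for c in centers], axis=1)
Fd = np.stack([(2 * (t - c) / 0.01) * np.exp(-(t - c) ** 2 / 0.01)
               for c in centers], axis=1)            # derivative w.r.t. the center

y = np.exp(-(t - 0.13) ** 2 / 0.01)                  # true feature lies off the grid

a = cp.Variable(len(centers))                        # primary (sparse) coefficients
b = cp.Variable(len(centers))                        # auxiliary shift coefficients
obj = cp.Minimize(cp.sum_squares(y - F @ a - Fd @ b) + 0.05 * cp.norm1(a))
cons = [a >= 0, cp.abs(b) <= (delta / 2) * a]        # b_i = a_i * shift_i, |shift| <= delta/2
cp.Problem(obj, cons).solve()

i = int(np.argmax(a.value))
print(centers[i] + b.value[i] / a.value[i])          # recovered center, near 0.13
```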

  15. Vulnerability assessment of urban ecosystems driven by water resources, human health and atmospheric environment

    NASA Astrophysics Data System (ADS)

    Shen, Jing; Lu, Hongwei; Zhang, Yang; Song, Xinshuang; He, Li

    2016-05-01

    Ecosystem management is an increasingly urgent topic given population growth and resource depletion. This paper develops an urban ecosystem vulnerability assessment method representing a new vulnerability paradigm for decision makers and environmental managers: an early warning system to identify and prioritize undesirable environmental changes in terms of natural, human, economic and social elements. The underlying idea is to decompose a complex problem into sub-problems, analyze each sub-problem, and then aggregate the sub-solutions. The method integrates the spatial context of a Geographic Information System (GIS) tool, multi-criteria decision analysis (MCDA), ordered weighted averaging (OWA) operators, and socio-economic elements. Decision makers can obtain vulnerability assessment results under different decision attitudes. To test the potential of the methodology, it has been applied to a case study area in Beijing, China, where it proved to be reliable and consistent with the Beijing City Master Plan. The results of urban ecosystem vulnerability assessment can support decision makers in evaluating the necessity of taking specific measures to protect human health and manage environmental stressors for one or more cities, while identifying the implications and consequences of their decisions.
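
    As a hedged illustration of the OWA ingredient mentioned above (the criteria, scores and weights here are invented): OWA applies weights to the ranked criterion scores rather than to fixed criteria, so skewing the weights toward the best or worst ranks encodes an optimistic or pessimistic decision attitude.

```python
import numpy as np

def owa(scores, weights):
    """Aggregate criterion scores with weights applied by rank, not by criterion."""
    s = np.sort(scores)[::-1]                        # best score first
    return float(np.dot(s, weights))

criteria = np.array([0.9, 0.4, 0.7])                 # e.g. water, health, air sub-scores
print(owa(criteria, np.array([0.6, 0.3, 0.1])))      # optimistic attitude
print(owa(criteria, np.array([0.1, 0.3, 0.6])))      # pessimistic attitude
```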

  16. Model reduction method using variable-separation for stochastic saddle point problems

    NASA Astrophysics Data System (ADS)

    Jiang, Lijian; Li, Qiuqi

    2018-02-01

    In this paper, we consider a variable-separation (VS) method to solve stochastic saddle point (SSP) problems. The VS method is applied to obtain the solution in tensor product structure for stochastic partial differential equations (SPDEs) in a mixed formulation. The aim of the technique is to construct a reduced basis approximation of the solution of the SSP problems. The VS method attempts to obtain a low-rank separated representation of the solution of the SSP in a systematic enrichment manner; no iteration is performed at each enrichment step. In order to satisfy the inf-sup condition in the mixed formulation, we enrich the separated terms for the primal system variable at each enrichment step. For SSP problems treated by regularization or penalty, we propose a more efficient variant, the variable-separation-by-penalty method, which avoids further enrichment of the separated terms in the original mixed formulation. The computation of the variable-separation method decomposes into an offline phase and an online phase. A sparse low-rank tensor approximation method is used to significantly improve the online computational efficiency when the number of separated terms is large. We present three numerical examples of SSP problems to illustrate the performance of the proposed methods.

  17. Post-Optimality Analysis In Aerospace Vehicle Design

    NASA Technical Reports Server (NTRS)

    Braun, Robert D.; Kroo, Ilan M.; Gage, Peter J.

    1993-01-01

    This analysis pertains to the applicability of optimal sensitivity information to aerospace vehicle design. An optimal sensitivity (or post-optimality) analysis refers to computations performed once the initial optimization problem is solved. These computations may be used to characterize the design space about the present solution and infer changes in this solution as a result of constraint or parameter variations, without reoptimizing the entire system. The present analysis demonstrates that post-optimality information generated through first-order computations can be used to accurately predict the effect of constraint and parameter perturbations on the optimal solution. This assessment is based on the solution of an aircraft design problem in which the post-optimality estimates are shown to be within a few percent of the true solution over the practical range of constraint and parameter variations. Through solution of a reusable, single-stage-to-orbit, launch vehicle design problem, this optimal sensitivity information is also shown to improve the efficiency of the design process. For a hierarchically decomposed problem, this computational efficiency is realized by estimating the main-problem objective gradient through optimal sensitivity calculations. By reducing the need for finite differentiation of a re-optimized subproblem, a significant decrease in the number of objective function evaluations required to reach the optimal solution is obtained.
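
    A toy numeric illustration of the first-order post-optimality idea on a made-up problem: the Lagrange multiplier of the active constraint predicts the change in the optimal objective under a small constraint perturbation, so the perturbed optimum can be estimated without re-optimizing.

```python
from scipy.optimize import minimize

def fstar(b):
    """Re-optimize from scratch: minimize x^2 + y^2 subject to x + y >= b."""
    res = minimize(lambda v: v[0] ** 2 + v[1] ** 2, x0=[1.0, 1.0],
                   constraints=[{'type': 'ineq',
                                 'fun': lambda v, b=b: v[0] + v[1] - b}])
    return res.fun

b, db = 2.0, 0.1
lam = b                          # KKT multiplier of the active constraint (analytically, lambda = b)
predicted = fstar(b) + lam * db  # first-order post-optimality estimate, no re-optimization
print(predicted, fstar(b + db))  # approx 2.200 vs 2.205
```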

  18. Scalable Domain Decomposed Monte Carlo Particle Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Matthew Joseph

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  19. Decomposers and the fire cycle in a phryganic (East Mediterranean) ecosystem.

    PubMed

    Arianoutsou-Faraggitaki, M; Margaris, N S

    1982-06-01

    Dehydrogenase activity, cellulose decomposition, nitrification, and CO2 release were measured for 2 years to estimate the effects of a wildfire on a phryganic ecosystem. In the decomposer subsystem, we found that, compared with the control site, fire mainly affected the nitrification process during the whole period and soil respiration during the second post-fire year. Our data show that after 3-4 months the activity of microbial decomposers is almost the same at the two sites, suggesting that fire is not a catastrophic event, but a simple perturbation common to Mediterranean-type ecosystems.

  20. Numerical analysis on effect of aspect ratio of planar solid oxide fuel cell fueled with decomposed ammonia

    NASA Astrophysics Data System (ADS)

    Tan, Wee Choon; Iwai, Hiroshi; Kishimoto, Masashi; Brus, Grzegorz; Szmyd, Janusz S.; Yoshida, Hideo

    2018-04-01

    Planar solid oxide fuel cells (SOFCs) fueled with decomposed ammonia are numerically studied to investigate the effect of the cell aspect ratio. The ammonia decomposer is assumed to be located next to the SOFCs, and the heat required for the endothermic decomposition reaction is supplied by thermal radiation from the SOFCs. Cells with aspect ratios (ratios of the streamwise length to the spanwise width) between 0.130 and 7.68 are provided with the reactants at a constant mass flow rate. A parametric study is conducted by varying the cell temperature and fuel utilization factor to investigate their effects on cell performance in terms of voltage efficiency. The effect of the heat supply to the ammonia decomposer is also studied. The developed model shows good agreement, in terms of the current-voltage curve, with experimental data obtained from a short stack, without parameter tuning. The simulation study reveals that the cell with the highest aspect ratio achieves the highest performance under furnace operation. On the other hand, the cell with an aspect ratio of 0.750 and the highest voltage efficiency of 0.67 is capable of thermally sustaining the ammonia decomposers at a fuel utilization of 0.80 using the thermal radiation from both sidewalls.

  1. Decomposition by ectomycorrhizal fungi alters soil carbon storage in a simulation model

    DOE PAGES

    Moore, J. A. M.; Jiang, J.; Post, W. M.; ...

    2015-03-06

    Carbon cycle models often lack explicit belowground organism activity, yet belowground organisms regulate carbon storage and release in soil. Ectomycorrhizal fungi are important players in the carbon cycle because they are a conduit into soil for carbon assimilated by the plant. It is hypothesized that ectomycorrhizal fungi can also be active decomposers when plant carbon allocation to fungi is low. Here, we reviewed the literature on ectomycorrhizal decomposition and we developed a simulation model of the plant-mycorrhizae interaction where a reduction in plant productivity stimulates ectomycorrhizal fungi to decompose soil organic matter. Our review highlights evidence demonstrating the potential for ectomycorrhizal fungi to decompose soil organic matter. Our model output suggests that ectomycorrhizal activity accounts for a portion of carbon decomposed in soil, but this portion varied with plant productivity and the mycorrhizal carbon uptake strategy simulated. Lower organic matter inputs to soil were largely responsible for reduced soil carbon storage. Using mathematical theory, we demonstrated that biotic interactions affect predictions of ecosystem functions. Specifically, we developed a simple function to model the mycorrhizal switch in function from plant symbiont to decomposer. In conclusion, we show that including mycorrhizal fungi with the flexibility of mutualistic and saprotrophic lifestyles alters predictions of ecosystem function.
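
    A hedged sketch of the kind of switch function the abstract describes, with an assumed logistic form and invented parameters (the paper's actual equations are not reproduced here): as plant carbon supply to the fungus falls, the modeled fungus shifts from symbiont toward decomposer.

```python
import numpy as np

def decomposer_fraction(plant_c_supply, half_sat=0.5, steepness=10.0):
    """Fraction of fungal activity devoted to decomposing soil organic matter;
    logistic in plant C supply (illustrative assumption, not the paper's form)."""
    return 1.0 / (1.0 + np.exp(steepness * (plant_c_supply - half_sat)))

for c in [0.1, 0.5, 0.9]:          # low, medium, high plant C allocation
    print(c, round(decomposer_fraction(c), 3))
```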

  2. Application of continuous normal-lognormal bivariate density functions in a sensitivity analysis of municipal solid waste landfill.

    PubMed

    Petrovic, Igor; Hip, Ivan; Fredlund, Murray D

    2016-09-01

    The variability of untreated municipal solid waste (MSW) shear strength parameters, namely cohesion and shear friction angle, with respect to waste stability problems is of primary concern due to the strong heterogeneity of MSW. A large number of MSW shear strength parameters (friction angle and cohesion) were collected from the published literature and analyzed. Basic statistical analysis showed that the central tendency of both shear strength parameters fits reasonably well within the ranges of recommended values proposed by different authors. In addition, it was established that the correlation between shear friction angle and cohesion is not strong but is still significant. Through use of a distribution fitting method it was found that the shear friction angle can be fitted to a normal probability density function while cohesion follows a log-normal density function. The continuous normal-lognormal bivariate density function was therefore selected as an adequate model to ascertain rational boundary values ("confidence interval") for MSW shear strength parameters. It was concluded that a curve with a 70% confidence level generates a "confidence interval" within reasonable limits. With respect to the decomposition stage of the waste material, three different ranges of appropriate shear strength parameters were indicated. The defined parameters were then used as input parameters for an Alternative Point Estimated Method (APEM) stability analysis of a real case, the Jakusevec landfill, the disposal site of Zagreb, the capital of Croatia. The analysis shows that for a dry landfill the most significant factor influencing the safety factor was the shear friction angle of old, decomposed waste material, while for a landfill with a significant leachate level the most significant factor was the cohesion of old, decomposed waste material. The analysis also showed that a satisfactory level of performance, with a small probability of failure, was achieved both for the standard-practice design of waste landfills and for an analysis scenario immediately after landfill closure. Copyright © 2015 Elsevier Ltd. All rights reserved.
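
    As a hedged illustration of how such a correlated normal-lognormal pair can be realized for probabilistic stability analysis (a Gaussian copula construction with invented parameter values, not the paper's fitted ones):

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.3                                   # weak but significant correlation
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=10_000)

phi = 25.0 + 8.0 * z[:, 0]                  # friction angle (deg): normal marginal
c = np.exp(2.5 + 0.6 * z[:, 1])             # cohesion (kPa): lognormal marginal

print(np.corrcoef(phi, c)[0, 1])            # induced correlation, roughly rho
```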

  3. Immobilization of Candida rugosa lipase by adsorption-crosslinking onto corn husk

    NASA Astrophysics Data System (ADS)

    Nuraliyah, A.; Wijanarko, A.; Hermansyah, H.

    2018-04-01

    Corn husk is one of the agricultural wastes that has not been used optimally. Corn husk waste can be used as an immobilization support for biocatalysts because it is easy to obtain, abundantly available, renewable and easy to decompose. This research was conducted in two phases: adsorption of the enzyme onto the support, followed by cross-linking between the enzyme and the support through the addition of glutaraldehyde. The optimum conditions for adsorption-crosslinking immobilization on the corn husk support were achieved at an enzyme concentration of 0.75 mg/ml and a 4 h reaction time. The highest unit activity, 2.37 U/g support, was obtained with 0.5% glutaraldehyde addition.

  4. An adaptive evolutionary multi-objective approach based on simulated annealing.

    PubMed

    Li, H; Landa-Silva, D

    2011-01-01

    A multi-objective optimization problem can be solved by decomposing it into one or more single objective subproblems in some multi-objective metaheuristic algorithms. Each subproblem corresponds to one weighted aggregation function. For example, MOEA/D is an evolutionary multi-objective optimization (EMO) algorithm that attempts to optimize multiple subproblems simultaneously by evolving a population of solutions. However, the performance of MOEA/D highly depends on the initial setting and diversity of the weight vectors. In this paper, we present an improved version of MOEA/D, called EMOSA, which incorporates an advanced local search technique (simulated annealing) and adapts the search directions (weight vectors) corresponding to various subproblems. In EMOSA, the weight vector of each subproblem is adaptively modified at the lowest temperature in order to diversify the search toward the unexplored parts of the Pareto-optimal front. Our computational results show that EMOSA outperforms six other well established multi-objective metaheuristic algorithms on both the (constrained) multi-objective knapsack problem and the (unconstrained) multi-objective traveling salesman problem. Moreover, the effects of the main algorithmic components and parameter sensitivities on the search performance of EMOSA are experimentally investigated.
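
    For readers unfamiliar with the decomposition step shared by MOEA/D and EMOSA, the sketch below shows one common scalarization, the weighted Tchebycheff function: each weight vector defines one single-objective subproblem, and adapting the weights redirects the corresponding search direction. The objective values and weights are invented for illustration.

```python
import numpy as np

def tchebycheff(f, weights, ideal):
    """Scalarize an objective vector f for one subproblem."""
    return float(np.max(weights * np.abs(np.asarray(f) - ideal)))

ideal = np.array([0.0, 0.0])                   # best value seen per objective
weights = [np.array([w, 1.0 - w]) for w in np.linspace(0.05, 0.95, 5)]

f_candidate = [0.4, 0.7]
for w in weights:                              # each subproblem scores the solution
    print(np.round(w, 2), round(tchebycheff(f_candidate, w, ideal), 3))
```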

  5. Evaluating oxidation-reduction properties of dissolved organic matter from Chinese milk vetch (Astragalus sinicus L.): a comprehensive multi-parametric study.

    PubMed

    Liu, Yong; Lou, Jun; Li, Fang-Bai; Xu, Jian-Ming; Yu, Xiong-Sheng; Zhu, Li-An; Wang, Feng

    2014-08-01

    Green manuring is a common practice for replenishing soil organic matter and nutrients in rice paddy fields. Owing to the complex interplay of multiple factors, the oxidation-reduction (redox) properties of dissolved organic matter (DOM) from green manure crops are presently not fully understood. In this study, a variety of surrogate parameters were used to evaluate the redox capacity and redox state of DOM derived from Chinese milk vetch (CMV, Astragalus sinicus L.) via microbial decomposition under continuously flooded (CF) and non-flooded (NF) conditions. Additionally, the correlation between the surrogate parameters of CMV-DOM and the kinetic parameters of relevant redox reactions was evaluated in a soil-water system containing CMV-DOM. Results showed that the redox properties of CMV-DOM differed substantially between the fresh and decomposed CMV-DOM treatments. Determination of the surrogate parameters via ultraviolet-visible/Fourier transform infrared absorption spectroscopy and gel permeation chromatography generally provided high-quality data for predicting the redox capacity of CMV-DOM, while the surrogate parameters determined by elemental analysis were suitable for predicting the redox state of CMV-DOM. Depending on the redox capacity and redox state of various moieties/components, NF-decomposed CMV-DOM could easily accelerate soil reduction by shuttling electrons to iron oxides, because it contained more reversible redox-active functional groups (e.g. quinone and hydroquinone pairs) than CF-decomposed CMV-DOM. This work demonstrates that a single index cannot capture complex changes in the multiple factors that jointly determine the redox reactivity of CMV-DOM; a multi-parametric study is needed to provide comprehensive information on the redox properties of green manure DOM.

  6. Decomposing properties of phosphogypsum with iron addition under two-step cycle multi-atmosphere control in fluidised bed.

    PubMed

    Zheng, Dalong; Ma, Liping; Wang, Rongmou; Yang, Jie; Dai, Quxiu

    2018-02-01

    Phosphogypsum is a solid industrial by-product generated when sulphuric acid is used to process phosphate ore into fertiliser. Phosphogypsum stacks without pretreatment are often piled on the land surface or dumped in the sea, causing significant environmental damage. This study examined the reaction characteristics of phosphogypsum when decomposed in a multi-atmosphere fluidised bed. Phosphogypsum was first dried, sieved and mixed proportionally with lignite at a mass ratio of 10:1; it was then immersed in 0.8 [Formula: see text] with a solid-liquid ratio of 8:25. The study included a two-step cycle of multi-atmosphere control. First, a reducing atmosphere was provided to allow phosphogypsum decomposition through partial lignite combustion. After the reduction stage reaction was completed, the reducing atmosphere was changed to an air-supported oxidising atmosphere at constant temperature. Each atmosphere cycle had a conversion time of 30 min to ensure a sufficient reaction. The decomposing properties of phosphogypsum were obtained in different atmosphere cycles, at different reaction temperatures, heating rates and fluidised gas velocities, using experimental results combined with a theoretical analysis in the FactSage 7.0 Reaction module. The study revealed that the optimum reaction condition was to circulate the atmosphere twice at a temperature of 1100 °C, with a heating rate above 800 °C of 5 [Formula: see text] and a fluidised gas velocity of 0.40 [Formula: see text]. The procedure proposed in this article can serve as a phosphogypsum decomposition solution and can support the future management of this by-product, resulting in more sustainable production.

  7. Comparative sensitivity and inhibitor tolerance of GlobalFiler® PCR Amplification and Investigator® 24plex QS kits for challenging samples.

    PubMed

    Elwick, Kyleen; Mayes, Carrie; Hughes-Stamm, Sheree

    2018-05-01

    In cases such as mass disasters or missing persons, human remains are challenging to identify as they may be fragmented, burnt, buried, or decomposed, and/or contain inhibitory substances. This study compares the performance of a relatively new STR kit in the US market (Investigator® 24plex QS kit; Qiagen) with the GlobalFiler® PCR Amplification kit (Thermo Fisher Scientific) when genotyping highly inhibited and low-level DNA samples. DNA samples ranging from 1 ng to 7.8 pg were amplified to define the sensitivity of the two systems. In addition, DNA (1 ng and 0.1 ng input amounts) was spiked with various concentrations of five inhibitors common to human remains (humic acid, melanin, hematin, collagen, calcium). Furthermore, bone (N = 5) and tissue samples from decomposed human remains (N = 6) were used as mock casework samples for comparative analysis with both STR kits. The data suggest that the GlobalFiler® kit may be slightly more sensitive than the Investigator® kit: on average, STR profiles appeared more balanced and average peak heights were higher when using the GlobalFiler® kit. However, the data also show that the Investigator® kit may be more tolerant of common PCR inhibitors. While both STR kits showed a decrease in alleles as the inhibitor concentration increased, more complete profiles were obtained when the Investigator® kit was used. Of the 11 bone and decomposed tissue samples tested, 8 resulted in more complete and balanced STR profiles when amplified with the GlobalFiler® kit. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Decreases in Soil Moisture and Organic Matter Quality Suppress Microbial Decomposition Following a Boreal Forest Fire

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holden, Sandra R.; Berhe, Asmeret A.; Treseder, Kathleen K.

    Climate warming is projected to increase the frequency and severity of wildfires in boreal forests, and increased wildfire activity may alter the large soil carbon (C) stocks in boreal forests. Changes in boreal soil C stocks that result from increased wildfire activity will be regulated in part by the response of microbial decomposition to fire, but post-fire changes in microbial decomposition are poorly understood. Here, we investigate the response of microbial decomposition to a boreal forest fire in interior Alaska and test the mechanisms that control post-fire changes in microbial decomposition. We used a reciprocal transplant between a recently burned boreal forest stand and a late successional boreal forest stand to test how post-fire changes in abiotic conditions, soil organic matter (SOM) composition, and soil microbial communities influence microbial decomposition. We found that SOM decomposing at the burned site lost 30.9% less mass over two years than SOM decomposing at the unburned site, indicating that post-fire changes in abiotic conditions suppress microbial decomposition. Our results suggest that moisture availability is one abiotic factor that constrains microbial decomposition in recently burned forests. In addition, we observed that burned SOM decomposed more slowly than unburned SOM, but the exact nature of SOM changes in the recently burned stand are unclear. Finally, we found no evidence that post-fire changes in soil microbial community composition significantly affect decomposition. Taken together, our study has demonstrated that boreal forest fires can suppress microbial decomposition due to post-fire changes in abiotic factors and the composition of SOM. Models that predict the consequences of increased wildfires for C storage in boreal forests may increase their predictive power by incorporating the observed negative response of microbial decomposition to boreal wildfires.

  9. [Release and supplement of carbon, nitrogen and phosphorus from jellyfish (Nemopilema nomurai) decomposition in seawater].

    PubMed

    Qu, Chang-feng; Song, Jin-ming; Li, Ning; Li, Xue-gang; Yuan, Hua-mao; Duan, Li-qin

    2016-01-01

    Jellyfish blooms have been increasing in Chinese seas, and decomposition after a bloom has great influence on the marine ecological environment. We conducted simulated incubation experiments on decomposing Nemopilema nomurai to evaluate its effect on carbon, nitrogen and phosphorus recycling in the water column. The results showed that jellyfish decomposition involved a fast release of biogenic elements, with the release of carbon, nitrogen and phosphorus reaching maxima at the beginning of decomposition. The release of biogenic elements from jellyfish decomposition was dominated by dissolved matter, which had a much higher level than particulate matter. The highest net release rates of dissolved organic carbon and particulate organic carbon reached (103.77 ± 12.60) and (1.52 ± 0.37) mg · kg⁻¹ · h⁻¹, respectively. Dissolved nitrogen was dominated by NH₄⁺-N during the whole incubation time, accounting for 69.6%-91.6% of total dissolved nitrogen, whereas dissolved phosphorus was dominated by dissolved organic phosphorus during the initial stage of decomposition, at 63.9%-86.7% of total dissolved phosphorus, and by PO₄³⁻-P during the late stage of decomposition, at 50.4%-60.2%. In contrast, particulate nitrogen was mainly particulate organic nitrogen, accounting for (88.6 ± 6.9)% of total particulate nitrogen, whereas particulate phosphorus was mainly particulate inorganic phosphorus, accounting for (73.9 ± 10.5)% of total particulate phosphorus. In addition, jellyfish decomposition decreased the C/N and increased the N/P of the water column. These results indicate that jellyfish decomposition could result in relatively high carbon and nitrogen loads.

  10. Economic Inequality in Presenting Vision in Shahroud, Iran: Two Decomposition Methods

    PubMed Central

    Mansouri, Asieh; Emamian, Mohammad Hassan; Zeraati, Hojjat; Hashemi, Hasan; Fotouhi, Akbar

    2018-01-01

    Background: Visual acuity, like many other health-related outcomes, is not equally distributed across socio-economic groups. We conducted this study to estimate and decompose economic inequality in presenting visual acuity using two methods, and to compare their results, in a population aged 40-64 years in Shahroud, Iran. Methods: The data of 5188 participants in the first phase of the Shahroud Cohort Eye Study, performed in 2009, were used for this study. Our outcome variable was presenting visual acuity (PVA), measured in LogMAR (logarithm of the minimum angle of resolution) units. The living standard variable used for estimating inequality was economic status, constructed by principal component analysis on home assets. The inequality indices were the concentration index and the gap between low and high economic groups; we decomposed them using the concentration index and Blinder-Oaxaca decomposition approaches, respectively, and compared the results. Results: The concentration index of PVA was -0.245 (95% CI: -0.278, -0.212). The PVA gap between groups with high and low economic status was 0.0705, in favor of the high economic group. Education, economic status, and age were the most important contributors to inequality in both the concentration index and Blinder-Oaxaca decompositions. The percent contributions of these three factors in the concentration index versus the Blinder-Oaxaca decomposition were 41.1% vs. 43.4%, 25.4% vs. 19.1%, and 15.2% vs. 16.2%, respectively. Other factors, including gender, marital status, employment status and diabetes, had minor contributions. Conclusion: This study showed that poorer visual acuity was concentrated among people with lower economic status. The main contributors to this inequality were similar in the concentration index and Blinder-Oaxaca decompositions. Appropriate interventions to promote literacy and income in people with low economic status, policies to address economic problems in the elderly, and greater attention to their vision problems could help alleviate economic inequality in visual acuity. PMID:29325403
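
    A minimal sketch of the "convenient covariance" formula commonly used to compute the concentration index, C = 2·cov(y, r)/ȳ, with y the outcome and r the fractional rank by economic status; the data below are synthetic and merely reproduce the qualitative sign of the study's finding.

```python
import numpy as np

def concentration_index(outcome, economic_status):
    order = np.argsort(economic_status)              # poorest first
    y = np.asarray(outcome, dtype=float)[order]
    n = len(y)
    r = (np.arange(1, n + 1) - 0.5) / n              # fractional rank
    return 2.0 * np.cov(y, r, bias=True)[0, 1] / y.mean()

rng = np.random.default_rng(0)
wealth = rng.normal(size=500)
logmar = 0.3 - 0.05 * wealth + rng.normal(0, 0.1, 500)   # worse PVA among the poor
print(concentration_index(logmar, wealth))               # negative, as in the study
```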

  11. Optimal image alignment with random projections of manifolds: algorithm and geometric analysis.

    PubMed

    Kokiopoulou, Effrosyni; Kressner, Daniel; Frossard, Pascal

    2011-06-01

    This paper addresses the problem of image alignment based on random measurements. Image alignment consists of estimating the relative transformation between a query image and a reference image. We consider the specific problem where the query image is provided in compressed form in terms of linear measurements captured by a vision sensor. We cast the alignment problem as a manifold distance minimization problem in the linear subspace defined by the measurements. The transformation manifold that represents synthesis of shift, rotation, and isotropic scaling of the reference image can be given in closed form when the reference pattern is sparsely represented over a parametric dictionary. We show that the objective function can then be decomposed as the difference of two convex functions (DC) in the particular case where the dictionary is built on Gaussian functions. Thus, the optimization problem becomes a DC program, which in turn can be solved globally by a cutting plane method. The quality of the solution is typically affected by the number of random measurements and the condition number of the manifold that describes the transformations of the reference image. We show that the curvature, which is closely related to the condition number, remains bounded in our image alignment problem, which means that the relative transformation between two images can be determined optimally in a reduced subspace.
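
    The toy sketch below illustrates the difference-of-convex structure itself, using the simple DCA scheme (linearize the concave part, minimize the remaining convex part) on a one-dimensional example; note the paper instead solves its DC program globally with a cutting plane method, which this sketch does not reproduce.

```python
import numpy as np

g = lambda x: x ** 4            # convex part
h = lambda x: 2 * x ** 2        # convex part being subtracted, so f = g - h = x^4 - 2x^2
dh = lambda x: 4 * x            # gradient of h

x = 0.3
for _ in range(30):
    s = dh(x)                                   # linearize h at the current iterate
    x = np.cbrt(s / 4.0)                        # argmin of g(y) - s*y: 4y^3 = s
print(x, g(x) - h(x))                           # converges to x = 1, f = -1 (a local min of f)
```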

  12. The median problems on linear multichromosomal genomes: graph representation and fast exact solutions.

    PubMed

    Xu, Andrew Wei

    2010-09-01

    In genome rearrangement, given a set of genomes G and a distance measure d, the median problem asks for another genome q that minimizes the total distance [Formula: see text]. This is a key problem in genome rearrangement based phylogenetic analysis. Although this problem is known to be NP-hard, we have shown in a previous article, on circular genomes and under the DCJ distance measure, that a family of patterns in the given genomes--represented by adequate subgraphs--allows us to rapidly find exact solutions to the median problem in a decomposition approach. In this article, we extend this result to the case of linear multichromosomal genomes, in order to solve more interesting problems on eukaryotic nuclear genomes. A multi-way capping problem in the linear multichromosomal case imposes an extra computational challenge on top of the difficulty of the circular case; this difficulty was underestimated in our previous study and is addressed in this article. We represent the median problem by the capped multiple breakpoint graph, extend the adequate subgraphs into capped adequate subgraphs, and prove optimality-preserving decomposition theorems, which give us the tools to solve the median problem and the multi-way capping optimization problem together. We also develop an exact algorithm, ASMedian-linear, which iteratively detects instances of (capped) adequate subgraphs and decomposes problems into subproblems. Tested on simulated data, ASMedian-linear can rapidly solve most problems with up to several thousand genes, and it can also provide optimal or near-optimal solutions to the median problem under the reversal/HP distance measures. ASMedian-linear is available at http://sites.google.com/site/andrewweixu.

  13. Leaf Litter Mixtures Alter Microbial Community Development: Mechanisms for Non-Additive Effects in Litter Decomposition

    PubMed Central

    Chapman, Samantha K.; Newman, Gregory S.; Hart, Stephen C.; Schweitzer, Jennifer A.; Koch, George W.

    2013-01-01

    To what extent microbial community composition can explain variability in ecosystem processes remains an open question in ecology. Microbial decomposer communities can change during litter decomposition due to biotic interactions and shifting substrate availability. Though the relative abundance of decomposers may change when leaf litter is mixed, linking these shifts to the non-additive patterns often recorded in mixed-species litter decomposition rates has been elusive; establishing such a link would tie community composition to ecosystem function. We extracted phospholipid fatty acids (PLFAs) from single-species and mixed-species leaf litterbags after 10 and 27 months of decomposition in a mixed conifer forest. Total PLFA concentrations were 70% higher on litter mixtures than on single litter types after 10 months, but only 20% higher after 27 months. Similarly, fungal-to-bacterial ratios differed between mixed and single litter types after 10 months of decomposition, but equalized over time. Microbial community composition, as indicated by principal components analyses, differed due to both litter mixing and stage of litter decomposition. The PLFA biomarkers a15:0 and cy17:0, which indicate gram-positive and gram-negative bacteria respectively, in particular drove these shifts. Total PLFA correlated significantly with single-litter mass loss early in decomposition but not at later stages. We conclude that litter mixing alters microbial community development, which can contribute to synergisms in litter decomposition. These findings advance our understanding of how changing forest biodiversity can alter microbial communities and the ecosystem processes they mediate. PMID:23658639

  14. The FPase properties and morphology changes of a cellulolytic bacterium, Sporocytophaga sp. JL-01, on decomposing filter paper cellulose.

    PubMed

    Wang, Xiuran; Peng, Zhongqi; Sun, Xiaoling; Liu, Dongbo; Chen, Shan; Li, Fan; Xia, Hongmei; Lu, Tiancheng

    2012-01-01

    Sporocytophaga sp. JL-01 is a gliding cellulose-degrading bacterium that can decompose filter paper (FP), carboxymethyl cellulose (CMC) and cellulose CF11. In this paper, the morphological characteristics of Sporocytophaga sp. JL-01 growing in FP liquid medium were studied by scanning electron microscopy (SEM), and one of the FPase components of this bacterium was analyzed. The results showed that the cell shapes were variable during filter paper cellulose decomposition and that the rod shape might be connected with filter paper decomposition. After incubation for 120 h, the filter paper was decomposed significantly, and it was completely degraded within 144 h. An FPase, FPase1, was purified from the supernatant and its characteristics were analyzed. The molecular weight of FPase1 was 55 kDa; the optimum pH was 7.2 and the optimum temperature 50°C under the experimental conditions. Zn²⁺ and Co²⁺ enhanced the enzyme activity, but Fe³⁺ inhibited it.

  15. Comparison of two interpolation methods for empirical mode decomposition based evaluation of radiographic femur bone images.

    PubMed

    Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan

    2013-01-01

    Analysis of bone strength in radiographic images is an important component of estimating bone quality in diseases such as osteoporosis. Conventional radiographic images of the femur are used to analyze its architecture using the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition, and the choice of an appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods for bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular architecture of femur bone radiographic images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40), recorded under standard conditions, are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as multiquadric radial basis functions and hierarchical b-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent architectural variations in femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study appears to be clinically useful.
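
    A hedged sketch of the surface-interpolation step inside bi-dimensional empirical mode decomposition, assuming SciPy's multiquadric RBF interpolator and synthetic extrema in place of radiographic data (the hierarchical b-spline alternative is not shown):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
pts = rng.uniform(0, 64, size=(40, 2))            # scattered extrema locations
vals = np.sin(pts[:, 0] / 10) + np.cos(pts[:, 1] / 10)

rbf = RBFInterpolator(pts, vals, kernel='multiquadric', epsilon=1.0)

gx, gy = np.meshgrid(np.arange(64), np.arange(64))
grid = np.stack([gx.ravel(), gy.ravel()], axis=-1)
envelope = rbf(grid).reshape(64, 64)              # smooth envelope surface
print(envelope.shape)
```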

  16. Exploration for Agents with Different Personalities in Unknown Environments

    NASA Astrophysics Data System (ADS)

    Doumit, Sarjoun; Minai, Ali

    We present in this paper a personality-based architecture (PA) that combines elements from the subsumption architecture and reinforcement learning to find alternate solutions to problems facing artificial agents exploring unknown environments. The underlying PA algorithm is decomposed into layers according to the different (non-contiguous) stages that our agent passes through, which in turn are influenced by the sources of rewards present in the environment. The cumulative rewards collected by an agent, in addition to its internal composition, serve as factors in shaping its personality. In missions where multiple agents are deployed, our goal is to allow each agent to develop its own distinct personality so that the collective reaches a balanced society, which can then accumulate the largest possible amount of rewards for both the agent and the society. The architecture is tested in a simulated matrix world that embodies different types of positive and negative rewards. Various experiments are performed to compare the performance of our algorithm with other algorithms under the same environmental conditions. The use of our architecture accelerates the overall adaptation of the agents to their environment and goals by allowing the emergence of an optimal society of agents with different personalities. We believe that our approach achieves more efficient results when compared to other, more restrictive policy designs.

  17. Coal-Quality Information - Key to the Efficient and Environmentally Sound Use of Coal

    USGS Publications Warehouse

    Finkelman, Robert B.

    1997-01-01

    The rock that we refer to as coal is derived principally from decomposed organic matter (plants) consisting primarily of the element carbon. When coal is burned, it produces energy in the form of heat, which is used to power machines such as steam engines or to drive turbines that produce electricity. Almost 60 percent of the electricity produced in the United States is derived from coal combustion. Coal is an extraordinarily complex material. In addition to organic matter, coal contains water (up to 40 or more percent by weight for some lignitic coals), oils, gases (such as methane), waxes (used to make shoe polish), and perhaps most importantly, inorganic matter (fig. 1). The inorganic matter--minerals and trace elements--cause many of the health, environmental, and technological problems attributed to coal use (fig. 2). 'Coal quality' is the term used to refer to the properties and characteristics of coal that influence its behavior and use. Among the coal-quality characteristics that will be important for future coal use are the concentrations, distribution, and forms of the many elements contained in the coal that we intend to burn. Knowledge of these quality characteristics in U.S. coal deposits may allow us to use this essential energy resource more efficiently and effectively and with less undesirable environmental impact.

  18. Experimental and Numerical Study of Ammonium Perchlorate Counterflow Diffusion Flames

    NASA Technical Reports Server (NTRS)

    Smooke, M. D.; Yetter, R. A.; Parr, T. P.; Hanson-Parr, D. M.; Tanoff, M. A.

    1999-01-01

    Many solid rocket propellants are based on a composite mixture of ammonium perchlorate (AP) oxidizer and polymeric binder fuels. In these propellants, complex three-dimensional diffusion flame structures between the AP and binder decomposition products, dependent upon the length scales of the heterogeneous mixture, drive the combustion via heat transfer back to the surface. Changing the AP crystal size changes the burn rate of such propellants. Large AP crystals are governed by the cooler AP self-deflagration flame and burn slowly, while small AP crystals are governed more by the hot diffusion flame with the binder and burn faster. This allows control of composite propellant ballistic properties via particle size variation. Previous measurements on these diffusion flames in the planar two-dimensional sandwich configuration yielded insight into controlling flame structure, but there are several drawbacks that make comparison with modeling difficult. First, the flames are two-dimensional and this makes modeling much more complex computationally than with one-dimensional problems, such as RDX self- and laser-supported deflagration. In addition, little is known about the nature, concentration, and evolution rates of the gaseous chemical species produced by the various binders as they decompose. This makes comparison with models quite difficult. Alternatively, counterflow flames provide an excellent geometric configuration within which AP/binder diffusion flames can be studied both experimentally and computationally.

  19. Efficient source separation algorithms for acoustic fall detection using a microsoft kinect.

    PubMed

    Li, Yun; Ho, K C; Popescu, Mihail

    2014-03-01

    Falls have become a common health problem among older adults. In a previous study, we proposed an acoustic fall detection system (acoustic FADE) that employed a microphone array and beamforming to provide automatic fall detection. However, the previous acoustic FADE had difficulty detecting the fall signal in environments where interference comes from the fall direction, the number of interferences exceeds FADE's ability to handle, or a fall is occluded. To address these issues, in this paper we propose two blind source separation (BSS) methods for extracting the fall signal from the interferences to improve the fall classification task. We first propose single-channel BSS using nonnegative matrix factorization (NMF) to automatically decompose the mixture into a linear combination of several basis components. Based on the distinct patterns of the bases of falls, we identify them efficiently and then reconstruct the interference-free fall signal. Next, we extend the single-channel BSS to the multichannel case through a joint NMF over all channels, followed by a delay-and-sum beamformer for additional ambient noise reduction. In our experiments, we used the Microsoft Kinect to collect the acoustic data in real-home environments. The results show that in environments with high interference and background noise levels, the fall detection performance is significantly improved using the proposed BSS approaches.
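
    The single-channel step can be pictured with a minimal NMF sketch: factor a nonnegative magnitude spectrogram V into bases W and activations H, then rebuild one source from a subset of components. The hand-picked basis indices below stand in for the paper's fall-pattern identification and are purely an assumption of this sketch.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy magnitude "spectrogram": frequency bins x time frames, nonnegative
rng = np.random.default_rng(0)
V = np.abs(rng.normal(size=(64, 200)))

# Decompose V ~ W @ H into nonnegative spectral bases and activations
model = NMF(n_components=8, init='nndsvda', max_iter=500, random_state=0)
W = model.fit_transform(V)   # (64, 8) spectral bases
H = model.components_        # (8, 200) temporal activations

# Suppose bases 0 and 3 match the expected fall signature (placeholder rule)
fall_idx = [0, 3]
V_fall = W[:, fall_idx] @ H[fall_idx, :]   # interference-free estimate
```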

  20. A taxonomy and comparison of parallel block multi-level preconditioners for the incompressible Navier-Stokes equations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shadid, John Nicolas; Elman, Howard; Shuttleworth, Robert R.

    2007-04-01

    In recent years, considerable effort has been placed on developing efficient and robust solution algorithms for the incompressible Navier-Stokes equations based on preconditioned Krylov methods. These include physics-based methods, such as SIMPLE, and purely algebraic preconditioners based on the approximation of the Schur complement. All these techniques can be represented as approximate block factorization (ABF) type preconditioners. The goal is to decompose the application of the preconditioner into simplified sub-systems in which scalable multi-level type solvers can be applied. In this paper we develop a taxonomy of these ideas based on an adaptation of a generalized approximate factorization of the Navier-Stokes system first presented in [25]. This taxonomy illuminates the similarities and differences among these preconditioners and the central role played by efficient approximation of certain Schur complement operators. We then present a parallel computational study that examines the performance of these methods and compares them to an additive Schwarz domain decomposition (DD) algorithm. Results are presented for two and three-dimensional steady state problems for enclosed domains and inflow/outflow systems on both structured and unstructured meshes. The numerical experiments are performed using MPSalsa, a stabilized finite element code.
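
    For orientation, ABF preconditioners start from the exact block LU factorization of the discrete saddle-point system (standard notation, not necessarily the paper's): with F the discrete convection-diffusion operator and B the divergence,

```latex
\begin{pmatrix} F & B^{T}\\ B & 0 \end{pmatrix}
=
\begin{pmatrix} I & 0\\ B F^{-1} & I \end{pmatrix}
\begin{pmatrix} F & B^{T}\\ 0 & S \end{pmatrix},
\qquad S = -\,B F^{-1} B^{T},
```

    and the taxonomy's members differ chiefly in how they approximate the Schur complement S and the action of F^{-1}.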

  1. Efficient co-conversion process of chicken manure into protein feed and organic fertilizer by Hermetia illucens L. (Diptera: Stratiomyidae) larvae and functional bacteria.

    PubMed

    Xiao, Xiaopeng; Mazza, Lorenzo; Yu, Yongqiang; Cai, Minmin; Zheng, Longyu; Tomberlin, Jeffery K; Yu, Jeffrey; van Huis, Arnold; Yu, Ziniu; Fasulo, Salvatore; Zhang, Jibin

    2018-07-01

    A chicken manure management process was developed through co-conversion by Hermetia illucens L. larvae (BSFL) and functional bacteria, producing larvae as feedstuff and organic fertilizer. Thirteen days of co-conversion of 1000 kg of chicken manure inoculated with one million 6-day-old BSFL and 10^9 CFU of Bacillus subtilis BSF-CL produced mature larvae; this was followed by eleven days of aerobic fermentation of the residue, inoculated with a decomposing agent, to reach maturity. 93.2 kg of fresh larvae were harvested from the B. subtilis BSF-CL-inoculated group, while the control group yielded only 80.4 kg. The chicken manure reduction rate of the B. subtilis BSF-CL-inoculated group was 40.5%, versus 35.8% for the control group. Compared to the control (no B. subtilis BSF-CL), BSFL weight increased by 15.9%, the BSFL conversion rate increased by 12.7%, and the chicken manure reduction rate increased by 13.4%. The residue inoculated with the decomposing agent had higher maturity (germination index >92%) than the group without the decomposing agent (germination index ~86%). The activity patterns of different enzymes further indicated that this product was more mature and stable than that of the group without the decomposing agent. Physical and chemical production parameters showed that the residue inoculated with the decomposing agent was more suitable for organic fertilizer than that of the group without it. The co-conversion of chicken manure by BSFL with its synergistic bacteria plus the aerobic fermentation with the decomposing agent together required only 24 days. The results demonstrate that the co-conversion process can shorten the processing time of chicken manure compared to the traditional compost process, and that gut bacteria can enhance manure conversion and reduction. We established an efficient manure co-conversion process based on the black soldier fly and bacteria, harvesting high-value-added larval biomass and biofertilizer.

  2. Community structure and estimated contribution of primary consumers (Nematodes and Copepods) of decomposing plant litter (Juncus roemerianus and Rhizophora mangle) in South Florida

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fell, J.W.; Cefalu, R.

    1984-01-01

    The paper discusses the meiofauna associated with decomposing leaf litter from two species of coastal marshland plants: the black needle rush, Juncus roemerianus and the red mangrove, Rhizophora mangle. The following aspects were investigated: (1) types of meiofauna present, especially nematodes; (2) changes in meiofaunal community structures with regard to season, station location, and type of plant litter; (3) amount of nematode and copepod biomass present on the decomposing plant litter; and (4) an estimation of the possible role of the nematodes in the decomposition process. 28 references, 5 figures, 9 tables. (ACR)

  3. Catalytic cartridge SO.sub.3 decomposer

    DOEpatents

    Galloway, Terry R.

    1982-01-01

    A catalytic cartridge internally heated is utilized as a SO.sub.3 decomposer for thermochemical hydrogen production. The cartridge has two embodiments, a cross-flow cartridge and an axial flow cartridge. In the cross-flow cartridge, SO.sub.3 gas is flowed through a chamber and incident normally to a catalyst coated tube extending through the chamber, the catalyst coated tube being internally heated. In the axial-flow cartridge, SO.sub.3 gas is flowed through the annular space between concentric inner and outer cylindrical walls, the inner cylindrical wall being coated by a catalyst and being internally heated. The modular cartridge decomposer provides high thermal efficiency, high conversion efficiency, and increased safety.

  4. A discrimination-association model for decomposing component processes of the implicit association test.

    PubMed

    Stefanutti, Luca; Robusto, Egidio; Vianello, Michelangelo; Anselmi, Pasquale

    2013-06-01

    A formal model is proposed that decomposes the implicit association test (IAT) effect into three process components: stimuli discrimination, automatic association, and termination criterion. Both response accuracy and reaction time are considered. Four independent and parallel Poisson processes, one for each of the four label categories of the IAT, are assumed. The model parameters are the rate at which information accrues on the counter of each process and the amount of information that is needed before a response is given. The aim of this study is to present the model and an illustrative application in which the process components of a Coca-Pepsi IAT are decomposed.
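
    The mechanics of the model are easy to simulate: each category's counter accrues Poisson events at its own rate, and the first counter to reach its termination criterion determines the response and the reaction time. The rates and threshold below are illustrative placeholders, not parameter estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def poisson_race(rates, threshold, n_trials=10000):
    """Race between independent Poisson accumulators.
    rates: information-accrual rate per counter; threshold: counts needed."""
    winners, rts = [], []
    for _ in range(n_trials):
        # Time for a counter to reach `threshold` events ~ Gamma(threshold, 1/rate)
        finish = rng.gamma(shape=threshold, scale=1.0 / np.asarray(rates))
        winners.append(int(np.argmin(finish)))
        rts.append(float(np.min(finish)))
    return np.array(winners), np.array(rts)

# Two competing category assignments, e.g. compatible vs incompatible mapping
winners, rts = poisson_race(rates=[3.0, 2.2], threshold=5)
print("P(correct) ~", np.mean(winners == 0), " mean RT ~", rts.mean())
```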

  5. Thermally Regenerative Battery with Intercalatable Electrodes and Selective Heating Means

    NASA Technical Reports Server (NTRS)

    Sharma, Pramod K. (Inventor); Narayanan, Sekharipuram R. (Inventor); Hickey, Gregory S. (Inventor)

    2000-01-01

    The battery contains at least one electrode such as graphite that intercalates a first species from the electrolyte disposed in a first compartment such as bromine to form a thermally decomposable complex during discharge. The other electrode can also be graphite which supplies another species such as lithium to the electrolyte in a second electrode compartment. The thermally decomposable complex is stable at room temperature but decomposes at elevated temperatures such as 50 C. to 150 C. The electrode compartments are separated by a selective ion permeable membrane that is impermeable to the first species. Charging is effected by selectively heating the first electrode.

  6. Input of easily available organic C and N stimulates microbial decomposition of soil organic matter in arctic permafrost soil

    PubMed Central

    Wild, Birgit; Schnecker, Jörg; Alves, Ricardo J. Eloy; Barsukov, Pavel; Bárta, Jiří; Čapek, Petr; Gentsch, Norman; Gittel, Antje; Guggenberger, Georg; Lashchinskiy, Nikolay; Mikutta, Robert; Rusalimova, Olga; Šantrůčková, Hana; Shibistova, Olga; Urich, Tim; Watzka, Margarete; Zrazhevskaya, Galina; Richter, Andreas

    2014-01-01

    Rising temperatures in the Arctic can affect soil organic matter (SOM) decomposition directly and indirectly, by increasing plant primary production and thus the allocation of plant-derived organic compounds into the soil. Such compounds, for example root exudates or decaying fine roots, are easily available for microorganisms, and can alter the decomposition of older SOM (“priming effect”). We here report on a SOM priming experiment in the active layer of a permafrost soil from the central Siberian Arctic, comparing responses of organic topsoil, mineral subsoil, and cryoturbated subsoil material (i.e., poorly decomposed topsoil material subducted into the subsoil by freeze–thaw processes) to additions of 13C-labeled glucose, cellulose, a mixture of amino acids, and protein (added at levels corresponding to approximately 1% of soil organic carbon). SOM decomposition in the topsoil was barely affected by higher availability of organic compounds, whereas SOM decomposition in both subsoil horizons responded strongly. In the mineral subsoil, SOM decomposition increased by a factor of two to three after any substrate addition (glucose, cellulose, amino acids, protein), suggesting that the microbial decomposer community was limited in energy to break down more complex components of SOM. In the cryoturbated horizon, SOM decomposition increased by a factor of two after addition of amino acids or protein, but was not significantly affected by glucose or cellulose, indicating nitrogen rather than energy limitation. Since the stimulation of SOM decomposition in cryoturbated material was not connected to microbial growth or to a change in microbial community composition, the additional nitrogen was likely invested in the production of extracellular enzymes required for SOM decomposition. Our findings provide a first mechanistic understanding of priming in permafrost soils and suggest that an increase in the availability of organic carbon or nitrogen, e.g., by increased plant productivity, can change the decomposition of SOM stored in deeper layers of permafrost soils, with possible repercussions on the global climate. PMID:25089062

  7. Draft Genome Sequence of the Lignocellulose Decomposer Thermobifida fusca Strain TM51.

    PubMed

    Tóth, Akos; Barna, Terézia; Nagy, István; Horváth, Balázs; Nagy, István; Táncsics, András; Kriszt, Balázs; Baka, Erzsébet; Fekete, Csaba; Kukolya, József

    2013-07-11

    Here, we present the complete genome sequence of Thermobifida fusca strain TM51, which was isolated from the hot upper layer of a compost pile in Hungary. T. fusca TM51 is a thermotolerant, aerobic actinomycete with outstanding lignocellulose-decomposing activity.

  8. Ecosystem and decomposer effects on litter dynamics along an old field to old-growth forest successional gradient

    EPA Science Inventory

    Identifying the biotic (e.g. decomposers, vegetation) and abiotic (e.g. temperature, moisture) mechanisms controlling litter decomposition is key to understanding ecosystem function, especially where variation in ecosystem structure due to successional processes may alter the str...

  9. [Water-holding characteristics and accumulation amount of the litters under main forest types in Xinglong Mountain of Gansu, Northwest China].

    PubMed

    Wei, Qiang; Ling, Lei; Zhang, Guang-zhong; Yan, Pei-bin; Tao, Ji-xin; Chai, Chun-shan; Xue, Rui

    2011-10-01

    By the methods of field survey and laboratory soaking extraction, an investigation was conducted on the accumulation amount, water-holding capacity, water-holding rate, and water-absorption rate of the litters under six main forests (Picea wilsonii forest, P. wilsonii - Betula platyphlla forest, Populus davidiana - B. platyphlla forest, Cotonester multiglorus - Rosa xanthina shrubs, Pinus tabulaeformis forest, and Larix principis-rupprechtii forest) in Xinglong Mountain of Gansu. The accumulation amount of the litters under the forests was 13.40-46.32 t hm(-2), in the order P. tabulaeformis forest > P. wilsonii - B. platyphlla forest > L. principis-rupprechtii forest > P. wilsonii forest > C. multiglorus - R. xanthina shrubs > P. davidiana - B. platyphlla forest. The litter storage of coniferous forests was greater than that of broadleaved forests, and the storage percentage of semi-decomposed litters was always higher than that of un-decomposed litters. The maximum water-holding rate of the litters was 185.5%-303.6%, being highest for the L. principis-rupprechtii forest and lowest for the P. tabulaeformis forest. The litters' water-holding capacity changed logarithmically with soaking time. For coniferous forests, un-decomposed litters had a lower water-holding rate than semi-decomposed litters, whereas for broadleaved forests the reverse was true. The maximum water-holding capacity of the litters varied from 3.94 mm to 8.59 mm, in the order P. tabulaeformis forest > L. principis-rupprechtii forest > P. wilsonii - B. platyphlla forest > P. wilsonii forest > C. multiglorus - R. xanthina shrubs > P. davidiana - B. platyphlla forest. The litters' water-holding capacity also changed logarithmically with immersion time, and the semi-decomposed litters had a larger water-holding capacity than un-decomposed litters. The water-absorption rate of the litters followed a power function of immersion time. Within the first hour of immersion, the water-absorption rate of the litters declined linearly; after the first hour it became smaller and changed slowly over the different immersion stages. Semi-decomposed litters had a higher water-absorption rate than un-decomposed litters. The effective retaining amount (depth) of the litters was in the order P. wilsonii - B. platyphlla forest (5.97 mm) > P. tabulaeformis forest (5.59 mm) > L. principis-rupprechtii forest (5.46 mm) > P. wilsonii forest (4.30 mm) > C. multiglorus - R. xanthina shrubs (3.03 mm) > P. davidiana - B. platyphlla forest (2.13 mm).

  10. ℓ(p)-Norm multikernel learning approach for stock market price forecasting.

    PubMed

    Shao, Xigao; Wu, Kun; Liao, Bifeng

    2012-01-01

    The linear multiple kernel learning model has been used for predicting financial time series. However, ℓ(1)-norm multiple kernel support vector regression is rarely observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtures that generalize well, we adopt ℓ(p)-norm multiple kernel support vector regression (1 ≤ p < ∞) as a stock price prediction model. The optimization problem is decomposed into smaller subproblems, and an interleaved optimization strategy is employed to solve the regression model. The model is evaluated on forecasting the daily stock closing prices of the Shanghai Stock Index in China. Experimental results show that our proposed model performs better than the ℓ(1)-norm multiple kernel support vector regression model.
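
    A minimal sketch of the interleaved strategy, under stated assumptions: alternate between fitting an SVR on the weighted kernel sum and updating the kernel weights with the standard closed-form ℓp-norm MKL update (in the style of Kloft et al.). The data, kernel choices, and hyperparameters below are placeholders, and the paper's actual optimizer may differ.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel, polynomial_kernel

# Toy regression data standing in for a price series (illustrative only)
rng = np.random.default_rng(2)
X = rng.normal(size=(120, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=120)

kernels = [linear_kernel(X), rbf_kernel(X, gamma=0.5), polynomial_kernel(X, degree=2)]
p = 2.0                                             # the l_p-norm on kernel weights
beta = np.full(len(kernels), (1.0 / len(kernels)) ** (1.0 / p))

for _ in range(10):                                 # interleaved optimization
    K = sum(b * Kk for b, Kk in zip(beta, kernels)) # weighted kernel mixture
    svr = SVR(kernel='precomputed', C=10.0).fit(K, y)
    a = svr.dual_coef_.ravel()                      # signed dual coefs on support vectors
    sv = svr.support_
    # ||w_k||^2 = beta_k^2 * a^T K_k[sv, sv] a  (kernel block restricted to SVs)
    norms2 = np.array([b**2 * a @ Kk[np.ix_(sv, sv)] @ a
                       for b, Kk in zip(beta, kernels)])
    beta = norms2 ** (1.0 / (p + 1))
    beta /= np.sum(beta ** p) ** (1.0 / p)          # project onto the l_p unit ball

print("kernel weights:", np.round(beta, 3))
```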

  11. Shock-wave flow regimes at entry into the diffuser of a hypersonic ramjet engine: Influence of physical properties of the gas medium

    NASA Astrophysics Data System (ADS)

    Tarnavskii, G. A.

    2006-07-01

    The physical aspects of the effective-adiabatic-exponent model, which makes it possible to decompose the total problem of modeling high-velocity gas flows into individual subproblems (“physicochemical processes” and “aeromechanics”) and thereby ensures the creation of a universal and efficient computer complex divided into a number of independent units, have been analyzed. Shock-wave structures appearing at entry into the duct of a hypersonic aircraft have been investigated based on this methodology, and the influence of the physical properties of the gas medium has been studied over a wide range of variation of the effective adiabatic exponent.

  12. Registering Cortical Surfaces Based on Whole-Brain Structural Connectivity and Continuous Connectivity Analysis

    PubMed Central

    Gutman, Boris; Leonardo, Cassandra; Jahanshad, Neda; Hibar, Derrek; Eschenburg, Kristian; Nir, Talia; Villalon, Julio; Thompson, Paul

    2014-01-01

    We present a framework for registering cortical surfaces based on tractography-informed structural connectivity. We define connectivity as a continuous kernel on the product space of the cortex, and develop a method for estimating this kernel from tractography fiber models. Next, we formulate the kernel registration problem, and present a means to non-linearly register two brains’ continuous connectivity profiles. We apply theoretical results from operator theory to develop an algorithm for decomposing the connectome into its shared and individual components. Lastly, we extend two discrete connectivity measures to the continuous case, and apply our framework to 98 Alzheimer’s patients and controls. Our measures show significant differences between the two groups. PMID:25320795

  13. Iris recognition based on robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong

    2014-11-01

    Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieved competitive performance in both recognition accuracy and computational efficiency.
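
    The decomposition at the heart of the method is principal component pursuit: M = L + S with L low-rank and S sparse. Below is a compact sketch via the widely used inexact augmented Lagrange multiplier iteration (a generic RPCA solver, not necessarily the authors' implementation).

```python
import numpy as np

def rpca(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Principal component pursuit via the inexact augmented Lagrange
    multiplier method: decompose M into low-rank L plus sparse S."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * m * n / np.abs(M).sum()
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(max_iter):
        # Singular-value thresholding step for the low-rank component
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Soft-thresholding (shrinkage) step for the sparse error component
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y = Y + mu * (M - L - S)
        if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
            break
    return L, S

# Toy usage: each column is a vectorized training image
M = np.random.default_rng(3).normal(size=(100, 40))
L, S = rpca(M)   # L: features for recognition, S: occlusion/specular errors
```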

  14. Attitude control of the space construction base: A modular approach

    NASA Technical Reports Server (NTRS)

    Oconnor, D. A.

    1982-01-01

    A planar model of a space base and one module is considered. For this simplified system, a feedback controller which is compatible with the modular construction method is described. The system's dynamics are decomposed into two parts corresponding to the base and the module. The information structure of the problem is non-classical in that not all system information is supplied to each controller. The base controller is designed to accommodate structural changes that occur as the module is added, and the module controller is designed to regulate its own states and follow commands from the base. Overall stability of the system is checked by Liapunov analysis, and controller effectiveness is verified by computer simulation.

  15. The Design Manager's Aid for Intelligent Decomposition (DeMAID)

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    1994-01-01

    Before the design of new complex systems such as large space platforms can begin, the possible interactions among subsystems and their parts must be determined. Once this is completed, the proposed system can be decomposed to identify its hierarchical structure. The design manager's aid for intelligent decomposition (DeMAID) is a knowledge based system for ordering the sequence of modules and identifying a possible multilevel structure for design. Although DeMAID requires an investment of time to generate and refine the list of modules for input, it could save considerable money and time in the total design process, particularly in new design problems where the ordering of the modules has not been defined.
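
    The ordering task DeMAID automates can be illustrated with a design structure matrix (DSM): sequence the modules so that couplings pointing upstream (feedbacks) are as few as possible. The greedy rule below is a deliberately simplified stand-in for DeMAID's knowledge-based rules.

```python
import numpy as np

def sequence_modules(dsm):
    """Greedy DSM sequencing: repeatedly schedule the module that receives
    the fewest inputs from still-unscheduled modules (fewest feedbacks)."""
    n = dsm.shape[0]
    remaining = set(range(n))
    order = []
    while remaining:
        # dsm[i, j] = 1 means module i needs the output of module j
        best = min(remaining,
                   key=lambda i: sum(dsm[i, j] for j in remaining if j != i))
        order.append(best)
        remaining.remove(best)
    return order

# Module 0 depends on 2; 1 on 0; 2 on 1 (a feedback loop); 3 on 0 and 2
dsm = np.array([[0, 0, 1, 0],
                [1, 0, 0, 0],
                [0, 1, 0, 0],
                [1, 0, 1, 0]])
print(sequence_modules(dsm))  # an ordering with few upstream feedbacks
```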

  16. Discussion on Boiler Efficiency Correction Method with Low Temperature Economizer-Air Heater System

    NASA Astrophysics Data System (ADS)

    Ke, Liu; Xing-sen, Yang; Fan-jun, Hou; Zhi-hong, Hu

    2017-05-01

    This paper points out that it is wrong to take the outlet flue gas temperature of the low temperature economizer as the exhaust gas temperature in boiler efficiency calculations based on GB10184-1988. Furthermore, this paper proposes a new correction method, which decomposes the low temperature economizer-air heater system into two hypothetical parts, an air preheater and a pre-condensed-water heater, and takes the outlet equivalent gas temperature of the air preheater as the exhaust gas temperature in the boiler efficiency calculation. This method makes the boiler efficiency calculation more concise, requiring no air heater correction. It provides a useful reference for handling this kind of problem correctly.

  17. A comparison of algorithms for inference and learning in probabilistic graphical models.

    PubMed

    Frey, Brendan J; Jojic, Nebojsa

    2005-09-01

    Research into methods for reasoning under uncertainty is currently one of the most exciting areas of artificial intelligence, largely because it has recently become possible to record, store, and process large amounts of data. While impressive achievements have been made in pattern classification problems such as handwritten character recognition, face detection, speaker identification, and prediction of gene function, it is even more exciting that researchers are on the verge of introducing systems that can perform large-scale combinatorial analyses of data, decomposing the data into interacting components. For example, computational methods for automatic scene analysis are now emerging in the computer vision community. These methods decompose an input image into its constituent objects, lighting conditions, motion patterns, etc. Two of the main challenges are finding effective representations and models in specific applications and finding efficient algorithms for inference and learning in these models. In this paper, we advocate the use of graph-based probability models and their associated inference and learning algorithms. We review exact techniques and various approximate, computationally efficient techniques, including iterated conditional modes, the expectation maximization (EM) algorithm, Gibbs sampling, the mean field method, variational techniques, structured variational techniques and the sum-product algorithm ("loopy" belief propagation). We describe how each technique can be applied in a vision model of multiple, occluding objects and contrast the behaviors and performances of the techniques using a unifying cost function, free energy.
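
    Of the reviewed techniques, the sum-product algorithm is exact on tree-structured graphs; a minimal sketch on a chain-structured model (forward-backward message passing with made-up parameters) shows the flavor of the messages.

```python
import numpy as np

# Chain model: P(z1) prior, P(z_t | z_{t-1}) transitions, P(x_t | z_t) emissions
prior = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3],
                  [0.2, 0.8]])
emit = np.array([[0.9, 0.1],     # emit[z, x]
                 [0.3, 0.7]])
obs = [0, 0, 1, 1, 1]            # observed sequence

# Forward messages alpha_t(z) ∝ P(z_t, x_1..t); normalized for stability
alpha = prior * emit[:, obs[0]]
alphas = [alpha / alpha.sum()]
for x in obs[1:]:
    alpha = (alphas[-1] @ trans) * emit[:, x]
    alphas.append(alpha / alpha.sum())

# Backward messages beta_t(z) ∝ P(x_{t+1}..T | z_t)
beta = np.ones(2)
betas = [beta]
for x in reversed(obs[1:]):
    beta = trans @ (emit[:, x] * beta)
    betas.insert(0, beta / beta.sum())

# Posterior marginals P(z_t | x_1..T): product of incoming messages
for t, (a, b) in enumerate(zip(alphas, betas)):
    p = a * b
    print(f"t={t}: P(z|x) =", np.round(p / p.sum(), 3))
```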

  18. Draft Genome Sequence of the Lignocellulose Decomposer Thermobifida fusca Strain TM51

    PubMed Central

    Tóth, Ákos; Barna, Terézia; Nagy, István; Horváth, Balázs; Nagy, István; Táncsics, András; Kriszt, Balázs; Baka, Erzsébet; Fekete, Csaba

    2013-01-01

    Here, we present the complete genome sequence of Thermobifida fusca strain TM51, which was isolated from the hot upper layer of a compost pile in Hungary. T. fusca TM51 is a thermotolerant, aerobic actinomycete with outstanding lignocellulose-decomposing activity. PMID:23846276

  19. Decomposing University Grades: A Longitudinal Study of Students and Their Instructors

    ERIC Educational Resources Information Center

    Beenstock, Michael; Feldman, Dan

    2018-01-01

    First-degree course grades for a cohort of social science students are matched to their instructors, and are statistically decomposed into departmental, course, instructor, and student components. Student ability is measured alternatively by university acceptance scores, or by fixed effects estimated using panel data methods. After controlling for…

  20. Decomposing Achievement Gaps among OECD Countries

    ERIC Educational Resources Information Center

    Zhang, Liang; Lee, Kristen A.

    2011-01-01

    In this study, we use decomposition methods on PISA 2006 data to compare student academic performance across OECD countries. We first establish an empirical model to explain the variation in academic performance across individuals, and then use the Oaxaca-Blinder decomposition method to decompose the achievement gap between each of the OECD…
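
    For reference, the two-fold Oaxaca-Blinder decomposition splits a mean outcome gap between groups A and B into an endowments (explained) part and a coefficients (unexplained) part; in standard notation, which may differ from the article's:

```latex
\bar{Y}_A - \bar{Y}_B
  \;=\; \underbrace{(\bar{X}_A - \bar{X}_B)'\hat{\beta}_A}_{\text{explained (endowments)}}
  \;+\; \underbrace{\bar{X}_B'\,(\hat{\beta}_A - \hat{\beta}_B)}_{\text{unexplained (coefficients)}}
```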

  1. Amorphous Silica Based Nanomedicine with Safe Carrier Excretion and Enhanced Drug Efficacy

    NASA Astrophysics Data System (ADS)

    Zhang, Silu

    With recent developments in nanoscience and nanotechnology, a great amount of effort has been devoted to nanomedicine development. Among various nanomaterials, the silica nanoparticle (NP) is generally accepted as non-toxic and provides a versatile platform for drug loading. In addition, the surface of the silica NP is hydrophilic, which is favorable for cellular uptake. It is therefore considered one of the most promising candidates to serve as a drug carrier. The present thesis focuses on the design of silica-based nanocarrier-drug systems, aiming to achieve safe nanocarrier excretion from the biological system and enhanced drug efficacy, which are considered two of the most important issues in nanomedicine development. To address the safe carrier excretion issue, we developed a special type of self-decomposable SiO2-drug composite NP. By creating a radial concentration gradient of drug in the NP, drug release occurred simultaneously with silica carrier decomposition. This unique characteristic differs from the conventional dense SiO2-drug NP, in which the drug is uniformly distributed and can hardly escape the carrier. We found that the controllable release of the drug was primarily determined by diffusion, driven by the radial drug concentration gradient in the NP. Escape of the drug molecules then triggered decomposition of the silica carrier, which started from the center of the NP and eventually led to its complete fragmentation. The small size of the final carrier fragments enabled their easy excretion via the renal system. Apart from safe carrier excretion, we also found that the controlled release of drugs contributed significantly to enhanced drug efficacy. By loading the anticancer drug doxorubicin (Dox) into decomposable SiO2-methylene blue (MB) NPs, we achieved a self-decomposable SiO2(MB)-Dox nanomedicine. The gradual escape of drug molecules from the NPs, and their cytosolic release enabled by an optical switch, led to a drug concentration in the cytosol that was not only high but also stable over a sustained period. This resulted in enhanced drug efficacy, which is especially manifest in multidrug-resistant (MDR) cancer cells, because the NP-carried drug can efficiently bypass efflux mechanisms and increase drug availability. Together with its spontaneous carrier decomposition and safe excretion, this nanomedicine's high drug efficacy highlights its potential for low-dose anticancer treatment with reduced adverse effects on the biological system, holding great promise for clinical translation. The enhanced drug efficacy obtained by employing the self-decomposable silica nanocarrier was also demonstrated in photodynamic therapy (PDT). The loose and fragmentable features of the self-decomposable SiO2-photosensitizer (PS) NPs promoted the out-diffusion of the generated ROS, which resulted in a higher efficacy than that of dense SiO2-PS NPs. On the other hand, we also explored another nanocarrier configuration of Au nanorod-decorated SiO2 NPs, with the PS drug embedded in a dense SiO2 matrix. A different mechanism of drug efficacy enhancement was presented, as the Au surface plasmon resonance enhanced ROS production. Although the drug efficacy of such SiO2(PS)-Au NPs was similar to that of the self-decomposable SiO2-PS NPs, their potential for clinical applications was limited without the feature of safe carrier excretion.
    In summary, the self-decomposable SiO2-based NPs developed here are among the most promising systems to serve as safe and effective drug carriers. Together with the known biocompatibility of silica, the controllable drug release and simultaneous carrier decomposition achieved in the self-decomposable SiO2-drug NPs make them well suited for a wide range of therapeutic applications.

  2. Photocatalytic degradation of commercial phoxim over La-doped TiO2 nanoparticles in aqueous suspension.

    PubMed

    Dai, Ke; Peng, Tianyou; Chen, Hao; Liu, Juan; Zan, Lin

    2009-03-01

    Photocatalytic degradation of a commercial phoxim emulsion in aqueous suspension was investigated using La-doped mesoporous TiO2 nanoparticles (m-TiO2) as the photocatalyst under UV irradiation. The effects of La-doping level, calcination temperature, and the amount of photocatalyst added on the photocatalytic degradation efficiency were investigated in detail. Experimental results indicate that 20 mg L(-1) phoxim in a 0.5 g L(-1) La/m-TiO2 suspension (initial pH 4.43) was progressively decomposed as the irradiation time was prolonged. Almost 100% of the phoxim was decomposed after 4 h of irradiation according to spectrophotometric analyses, whereas the mineralization rate of phoxim reached only ca. 80% as determined by ion chromatography (IC) analyses. The elimination of the organic solvent in the phoxim emulsion, as well as the formation and decomposition of some degradation intermediates, was observed by high-performance liquid chromatography-mass spectrometry (HPLC-MS). On the basis of the analysis of the photocatalytic degradation intermediates, two possible photocatalytic degradation pathways are proposed under the present experimental conditions, which reveal that both the hydrolysis and the adsorption of phoxim under UV light irradiation play important roles during its photocatalytic degradation.

  3. Studies on the redox H2-CO2 cycle on CoCrxFe2-xO4

    NASA Astrophysics Data System (ADS)

    Ma, Ling Juan; Chen, Lin Shen; Chen, Song Ying

    2009-01-01

    Completely reduced CoCrxFe2-xO4 can be used to decompose CO2. It was found that for pure CoFe2O4 there is no FeO formation in the first oxidation step, while FeO does form in the second step. For CoCr0.08Fe1.92O4, no FeO formed during the entire oxidation process, because of the effect of Cr3+. Pure CoFe2O4 was destroyed in the first reaction cycle of H2 reduction and CO2 oxidation, while the Cr3+-doped spinel CoCr0.08Fe1.92O4 showed good stability. The results from H2-TG, CO2-TG, and XRD show that the addition of Cr3+ to CoFe2O4 can inhibit the growth of crystallite size and the sintering of the alloy. Most importantly, CoCr0.08Fe1.92O4 can be used to decompose CO2 repeatedly, implying that it is a potential catalyst for dealing with CO2 as a greenhouse gas.

  4. Arrowheaded enhanced multivariance products representation for matrices (AEMPRM): Specifically focusing on infinite matrices and converting arrowheadedness to tridiagonality

    NASA Astrophysics Data System (ADS)

    Özdemir, Gizem; Demiralp, Metin

    2015-12-01

    In this work, the Enhanced Multivariance Products Representation (EMPR) approach, an extension by Demiralp and his group of Sobol's High Dimensional Model Representation (HDMR), is used as the basic tool. Its discrete form has also been developed and used in practice by Demiralp and his group, in addition to some other authors, for the decomposition of arrays such as vectors, matrices, or multiway arrays. This work focuses specifically on the decomposition of infinite matrices involving denumerably infinitely many rows and columns. To this end, the target matrix is first decomposed into a sum of certain outer products, and then each outer product is treated by Tridiagonal Matrix Enhanced Multivariance Products Representation (TMEMPR), which was developed by Demiralp and his group. The result is a three-matrix-factor product whose kernel (the middle factor) is an arrowheaded matrix, while the pre and post factors are invertible matrices composed of the support vectors of TMEMPR. This new method is called the Arrowheaded Enhanced Multivariance Products Representation for Matrices. The general purpose is the approximation of denumerably infinite matrices with the new method.

  5. Developing CORBA-Based Distributed Scientific Applications From Legacy Fortran Programs

    NASA Technical Reports Server (NTRS)

    Sang, Janche; Kim, Chan; Lopez, Isaac

    2000-01-01

    An efficient methodology is presented for integrating legacy applications written in Fortran into a distributed object framework. Issues and strategies regarding the conversion and decomposition of Fortran codes into Common Object Request Broker Architecture (CORBA) objects are discussed. Fortran codes are modified as little as possible as they are decomposed into modules and wrapped as objects. A new conversion tool takes the Fortran application as input and generates the C/C++ header file and Interface Definition Language (IDL) file. In addition, the performance of the client-server computing is evaluated.

  6. Decomposition of aquatic plants in lakes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Godshalk, G.L.

    1977-01-01

    This study was carried out to systematically determine the effects of temperature and oxygen concentration, two environmental parameters crucial to lake metabolism in general, on the decomposition of five species of aquatic vascular plants of three growth forms in a Michigan lake. Samples of dried plant material were decomposed in flasks in the laboratory under three different oxygen regimes (aerobic-to-anaerobic, strictly anaerobic, and aerated), each at 10°C and 25°C. In addition, in situ decomposition of the same species was monitored using the litter bag technique under four conditions.

  7. Populations of Pratylenchus penetrans Relative to Decomposing Nitrogenous Soil Amendments

    PubMed Central

    Walker, J. T.

    1971-01-01

    Populations of Pratylenchus penetrans decreased in soil following the addition of 70 and 700 ppm N in the form of nitrate, nitrite, organic nitrogen, or ammonium compounds. Nitrate was less effective than the other nitrogen carriers. The population reduction is principally attributed to ammonification during decomposition. This hypothesis is supported by chromatographic analyses of soil atmospheres, survival of nematodes in pure CO₂ and N₂, the inverse relationship of CO₂ content in amended soils to nematode populations, and the direct relationship of NH₃-N content of amended soils to nematode populations. PMID:19322339

  8. Biochar-carrying hydrocarbon decomposers promote degradation during the early stage of bioremediation

    NASA Astrophysics Data System (ADS)

    Galitskaya, Polina; Akhmetzyanova, Leisan; Selivanovskaya, Svetlana

    2016-10-01

    Oil pollution is one of the most serious current environmental problems. In this study, four strategies for the bioremediation of oil-polluted soil were tested in the laboratory over a period of 84 days: (A) aeration and moistening; (B) amendment with 1% biochar (w/w) in combination with A; and amendment with 1% biochar carrying immobilized Pseudomonas aeruginosa (C) or Acinetobacter radioresistens (D), each in combination with A. All strategies resulted in a decrease of the hydrocarbon content, while biochar addition (strategies B, C, D) accelerated decomposition at the beginning. Microbial biomass and respiration rate increased significantly at the start of bioremediation. It was demonstrated that moistening and aeration were the main factors influencing microbial biomass, while biochar amendment and the introduction of microbes were the main factors influencing microbial respiration. All four remediation strategies altered bacterial community structure and phytotoxicity. The Illumina MiSeq method revealed 391 unique operational taxonomic units (OTUs) belonging to 40 bacterial phyla, with Proteobacteria dominating in all investigated soil samples. The lowest alpha diversity was observed in the samples with introduced bacteria on the first day of remediation. Metric multidimensional scaling demonstrated that the microbial community structures at the beginning and at the end were more similar to each other than to those on the 28th day of remediation. Strategies A and B decreased the phytotoxicity of the remediated soil by a factor of 2.5 to 3.1 compared with untreated soil; strategies C and D led to an additional decrease in phytotoxicity by a factor of 2.1 to 3.2.

  9. Multiple soil nutrient competition between plants, microbes, and mineral surfaces: model development, parameterization, and example applications in several tropical forests

    NASA Astrophysics Data System (ADS)

    Zhu, Q.; Riley, W. J.; Tang, J.; Koven, C. D.

    2015-03-01

    Soil is a complex system where biotic (e.g., plant roots, micro-organisms) and abiotic (e.g., mineral surfaces) consumers compete for resources necessary for life (e.g., nitrogen, phosphorus). This competition is ecologically significant, since it regulates the dynamics of soil nutrients and controls aboveground plant productivity. Here we develop, calibrate, and test a nutrient competition model that accounts for multiple soil nutrients interacting with multiple biotic and abiotic consumers. As applied here for tropical forests, the Nutrient COMpetition model (N-COM) includes three primary soil nutrients (NH4(+), NO3(-), and POx, representing the sum of PO4(3-), HPO4(2-), and H2PO4(-)) and five potential competitors (plant roots, decomposing microbes, nitrifiers, denitrifiers, and mineral surfaces). The competition is formulated with a quasi-steady-state chemical equilibrium approximation to account for substrate effects (multiple substrates share one consumer) and consumer effects (multiple consumers compete for one substrate). N-COM successfully reproduced observed soil heterotrophic respiration, N2O emissions, free phosphorus, sorbed phosphorus, and free NH4(+) at a tropical forest site (Tapajos). The overall model posterior uncertainty was moderately well constrained. Our sensitivity analysis revealed that soil nutrient competition was primarily regulated by consumer-substrate affinity rather than by environmental factors such as soil temperature or soil moisture. Our results imply that competitiveness (from most to least competitive) followed this order: (1) for NH4(+), nitrifiers ~ decomposing microbes > plant roots; (2) for NO3(-), denitrifiers ~ decomposing microbes > plant roots; (3) for POx, mineral surfaces > decomposing microbes ~ plant roots. Although smaller, the relative competitiveness of plants is of the same order of magnitude as that of microbes. We then applied the N-COM model to analyze field nitrogen and phosphorus perturbation experiments at two tropical forest sites (in Hawaii and Puerto Rico) not used in model development or calibration. Under elevated soil inorganic nitrogen and phosphorus conditions, the model accurately replicated the experimentally observed competition among different nutrient consumers. Although we used as many observations as we could obtain, more nutrient addition experiments in tropical systems would greatly benefit model testing and calibration. In summary, the N-COM model provides an ecologically consistent representation of nutrient competition appropriate for land BGC models integrated in Earth System Models.
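
    The quasi-steady-state competition term belongs to the equilibrium chemistry approximation (ECA) family of kinetics; a representative ECA flux of substrate i to consumer j has roughly the form below (our paraphrase of the ECA literature, not the paper's exact notation), where S are substrate concentrations, C consumer abundances, and K_{ij} affinity constants:

```latex
F_{ij} \;=\; \frac{V^{\max}_{ij}\, S_i\, C_j}
{K_{ij}\Big(1 \;+\; \sum_{k} S_k / K_{kj} \;+\; \sum_{l} C_l / K_{il}\Big)}
```

    The two sums implement exactly the substrate-sharing and consumer-competition effects described above.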

  10. On the utility of the multi-level algorithm for the solution of nearly completely decomposable Markov chains

    NASA Technical Reports Server (NTRS)

    Leutenegger, Scott T.; Horton, Graham

    1994-01-01

    Recently the Multi-Level algorithm was introduced as a general-purpose solver for the solution of steady-state Markov chains. In this paper, we consider the performance of the Multi-Level algorithm for solving Nearly Completely Decomposable (NCD) Markov chains, for which special-purpose iterative aggregation/disaggregation algorithms such as the Koury-McAllister-Stewart (KMS) method have been developed that can exploit the decomposability of the Markov chain. We present experimental results indicating that the general-purpose Multi-Level algorithm is competitive, and can be significantly faster than the special-purpose KMS algorithm when Gauss-Seidel and Gaussian elimination are used for solving the individual blocks.
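
    To see what the KMS-style competitors do, here is a compact iterative aggregation/disaggregation sketch for an NCD chain: aggregate each block using the current within-block distribution, solve the small coupling chain, disaggregate, and smooth. A plain power step replaces the KMS block Gauss-Seidel smoother, which is an assumption of this sketch.

```python
import numpy as np

def iad_step(P, blocks, x):
    """One iterative aggregation/disaggregation sweep for row-stochastic P.
    blocks are assumed contiguous and in state order (sketch simplification)."""
    nb = len(blocks)
    A = np.zeros((nb, nb))
    for I, bi in enumerate(blocks):
        u = x[bi] / x[bi].sum()                     # within-block distribution
        for J, bj in enumerate(blocks):
            A[I, J] = u @ P[np.ix_(bi, bj)].sum(axis=1)
    # Solve the small aggregated chain: xi A = xi, xi stochastic
    w, V = np.linalg.eig(A.T)
    xi = np.real(V[:, np.argmax(np.real(w))])
    xi = np.abs(xi) / np.abs(xi).sum()
    # Disaggregate, then smooth with one power step (stand-in for Gauss-Seidel)
    z = np.concatenate([xi[I] * x[bi] / x[bi].sum() for I, bi in enumerate(blocks)])
    return z @ P

# NCD example: two tightly coupled pairs with weak coupling eps between them
eps = 1e-4
P = np.array([[0.5 - eps, 0.5, eps, 0.0],
              [0.5, 0.5 - eps, 0.0, eps],
              [eps, 0.0, 0.4 - eps, 0.6],
              [0.0, eps, 0.6, 0.4 - eps]])
blocks = [np.array([0, 1]), np.array([2, 3])]
x = np.full(4, 0.25)
for _ in range(20):
    x = iad_step(P, blocks, x)
print("stationary estimate:", np.round(x, 6))
```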

  11. Methods for assessing the impact of avermectins on the decomposer community of sheep pastures.

    PubMed

    King, K L

    1993-06-01

    This paper outlines methods which can be used in the field assessment of potentially toxic chemicals such as the avermectins. The procedures focus on measuring the effects of the drug on decomposer organisms and the nutrient cycling process in pastures grazed by sheep. Measurements of decomposer activity are described along with methods for determining dry and organic matter loss and mineral loss from dung to the underlying soil. Sampling methods for both micro- and macro-invertebrates are discussed along with determination of the percentage infection of plant roots with vesicular-arbuscular mycorrhizal fungi. An integrated sampling unit for assessing the ecotoxicity of ivermectin in pastures grazed by sheep is presented.

  12. Fluidized bed silicon deposition from silane

    NASA Technical Reports Server (NTRS)

    Hsu, George C. (Inventor); Levin, Harry (Inventor); Hogle, Richard A. (Inventor); Praturi, Ananda (Inventor); Lutwack, Ralph (Inventor)

    1982-01-01

    A process and apparatus for thermally decomposing silicon containing gas for deposition on fluidized nucleating silicon seed particles is disclosed. Silicon seed particles are produced in a secondary fluidized reactor by thermal decomposition of a silicon containing gas. The thermally produced silicon seed particles are then introduced into a primary fluidized bed reactor to form a fluidized bed. Silicon containing gas is introduced into the primary reactor where it is thermally decomposed and deposited on the fluidized silicon seed particles. Silicon seed particles having the desired amount of thermally decomposed silicon product thereon are removed from the primary fluidized reactor as ultra pure silicon product. An apparatus for carrying out this process is also disclosed.

  13. Fluidized bed silicon deposition from silane

    NASA Technical Reports Server (NTRS)

    Hsu, George (Inventor); Levin, Harry (Inventor); Hogle, Richard A. (Inventor); Praturi, Ananda (Inventor); Lutwack, Ralph (Inventor)

    1984-01-01

    A process and apparatus for thermally decomposing silicon containing gas for deposition on fluidized nucleating silicon seed particles is disclosed. Silicon seed particles are produced in a secondary fluidized reactor by thermal decomposition of a silicon containing gas. The thermally produced silicon seed particles are then introduced into a primary fluidized bed reactor to form a fluidized bed. Silicon containing gas is introduced into the primary reactor where it is thermally decomposed and deposited on the fluidized silicon seed particles. Silicon seed particles having the desired amount of thermally decomposed silicon product thereon are removed from the primary fluidized reactor as ultra pure silicon product. An apparatus for carrying out this process is also disclosed.

  14. Quantitative local analysis of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Topcu, Ufuk

    This thesis investigates quantitative methods for local robustness and performance analysis of nonlinear dynamical systems with polynomial vector fields. We propose measures to quantify systems' robustness against uncertainties in initial conditions (regions-of-attraction) and external disturbances (local reachability/gain analysis). S-procedure and sum-of-squares relaxations are used to translate Lyapunov-type characterizations into sum-of-squares optimization problems. These problems are typically bilinear/nonconvex (due to local rather than global analysis) and their size grows rapidly with the dimension of the state/uncertainty space. Our approach is based on exploiting system-theoretic interpretations of these optimization problems to reduce their complexity. We propose a methodology incorporating simulation data into formal proof construction, enabling a more reliable and efficient search for robustness and performance certificates compared to the direct use of general-purpose solvers. This technique is adapted both to region-of-attraction and reachability analysis. We extend the analysis to uncertain systems by taking an intentionally simplistic and potentially conservative route, namely employing parameter-independent rather than parameter-dependent certificates. The conservatism is reduced by a branch-and-bound type refinement procedure. The main thrust of these methods is their suitability for parallel computing, achieved by decomposing otherwise challenging problems into relatively tractable smaller ones. We demonstrate the proposed methods on several small/medium size examples in each chapter and apply each method to a benchmark example with an uncertain short-period pitch-axis model of an aircraft. Additional practical issues leading to a more rigorous basis for the proposed methodology as well as promising further research topics are also addressed. We show that stability of the linearized dynamics is not only necessary but also sufficient for the feasibility of the formulations in region-of-attraction analysis. Furthermore, we generalize an upper bound refinement procedure in local reachability/gain analysis which effectively generates non-polynomial certificates from polynomial ones. Finally, broader applicability of optimization-based tools stringently depends on the availability of scalable/hierarchical algorithms. As an initial step in this direction, we propose a local small-gain theorem and apply it to stability region analysis in the presence of unmodeled dynamics.
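
    The region-of-attraction computations referred to here are commonly posed as an S-procedure-based sum-of-squares program: enlarge a sublevel set {x : V(x) ≤ β} on which the Lyapunov derivative is certified negative. One common formulation (not necessarily the thesis's exact one) is:

```latex
\max_{\beta>0,\; s}\ \beta
\quad\text{s.t.}\quad
s(x)\ \text{is SOS},\qquad
-\big(\nabla V(x)\cdot f(x) + \epsilon\,\lVert x\rVert^{2}\big) + s(x)\,\big(V(x)-\beta\big)\ \text{is SOS},
```

    so that V(x) ≤ β forces the derivative to satisfy V̇(x) ≤ -ε‖x‖², making the sublevel set an invariant inner estimate of the region of attraction.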

  15. A method for joint routing, wavelength dimensioning and fault tolerance for any set of simultaneous failures on dynamic WDM optical networks

    NASA Astrophysics Data System (ADS)

    Jara, Nicolás; Vallejos, Reinaldo; Rubino, Gerardo

    2017-11-01

    The design of optical networks decomposes into different tasks, where the engineers must organize the way the main system's resources are used, minimizing the design and operation costs and respecting critical performance constraints. More specifically, network operators face the challenge of solving routing and wavelength dimensioning problems while aiming to simultaneously minimize the network cost and ensure that the network performance meets the level established in the Service Level Agreement (SLA). We call this the Routing and Wavelength Dimensioning (R&WD) problem. Another important problem is how to deal with link failures when the network is operating. When at least one link fails, a high rate of data loss may occur. To avoid it, the network must be designed in such a manner that upon one or multiple failures, the affected connections can still communicate using alternative routes, a mechanism known as Fault Tolerance (FT). When the mechanism can deal with an arbitrary number of faults, we speak of Multiple Fault Tolerance (MFT). The different tasks mentioned above are usually solved separately, or in some cases in pairs, leading to solutions that are not necessarily close to optimal. This paper proposes a novel method to solve all of them simultaneously, that is, the Routing, the Wavelength Dimensioning, and the Multiple Fault Tolerance problems. The method allows us to obtain: a) all the primary routes by which each connection normally transmits its information; b) the additional routes, called secondary routes, used to keep each user connected in cases where one or more simultaneous failures occur; and c) the number of wavelengths available at each link of the network, calculated such that the blocking probability of each connection is lower than a pre-determined threshold (a network design parameter), despite the occurrence of simultaneous link failures. The solution obtained by the new algorithm is significantly more efficient than current methods; its implementation is notably simple, and its on-line operation is very fast. In the paper, different examples illustrate the results provided by the proposed technique.

  16. Node-Based Learning of Multiple Gaussian Graphical Models

    PubMed Central

    Mohan, Karthik; London, Palma; Fazel, Maryam; Witten, Daniela; Lee, Su-In

    2014-01-01

    We consider the problem of estimating high-dimensional Gaussian graphical models corresponding to a single set of variables under several distinct conditions. This problem is motivated by the task of recovering transcriptional regulatory networks on the basis of gene expression data containing heterogeneous samples, such as different disease states, multiple species, or different developmental stages. We assume that most aspects of the conditional dependence networks are shared, but that there are some structured differences between them. Rather than assuming that similarities and differences between networks are driven by individual edges, we take a node-based approach, which in many cases provides a more intuitive interpretation of the network differences. We consider estimation under two distinct assumptions: (1) differences between the K networks are due to individual nodes that are perturbed across conditions, or (2) similarities among the K networks are due to the presence of common hub nodes that are shared across all K networks. Using a row-column overlap norm penalty function, we formulate two convex optimization problems that correspond to these two assumptions. We solve these problems using an alternating direction method of multipliers algorithm, and we derive a set of necessary and sufficient conditions that allows us to decompose the problem into independent subproblems so that our algorithm can be scaled to high-dimensional settings. Our proposal is illustrated on synthetic data, a webpage data set, and a brain cancer gene expression data set. PMID:25309137

  17. Domain Decomposition Method Applied to a Flow Problem

    NASA Astrophysics Data System (ADS)

    Vera, N. C.; GMMC

    2013-05-01

    In this paper we present results for macrohybrid mixed Darcian flow in porous media in a general three-dimensional domain. The global problem is solved as a set of local subproblems posed using a domain decomposition method. The unknown fields of the local problems, velocity and pressure, are approximated using mixed finite elements. For this application, a general three-dimensional domain is considered and discretized using tetrahedra. The discrete domain is decomposed into subdomains, and the original problem is reformulated as a set of subproblems communicating through their interfaces. To solve this set of subproblems, we use mixed finite elements and parallel computing. Parallelizing a problem with this methodology can, in principle, fully exploit the computing equipment and also provides results in less time, two very important elements in modeling. References: G. Alduncin and N. Vera-Guzmán, Parallel proximal-point algorithms for mixed finite element models of flow in the subsurface, Commun. Numer. Meth. Engng 2004; 20:83-104 (DOI: 10.1002/cnm.647). Z. Chen, G. Huan and Y. Ma, Computational Methods for Multiphase Flows in Porous Media, SIAM, Society for Industrial and Applied Mathematics, Philadelphia, 2006. A. Quarteroni and A. Valli, Numerical Approximation of Partial Differential Equations, Springer-Verlag, Berlin, 1994. F. Brezzi and M. Fortin, Mixed and Hybrid Finite Element Methods, Springer, New York, 1991.

  18. The behavior of plasma with an arbitrary degree of degeneracy of electron gas in the conductive layer

    NASA Astrophysics Data System (ADS)

    Latyshev, A. V.; Gordeeva, N. M.

    2017-09-01

    We obtain an analytic solution of the boundary problem for the behavior (fluctuations) of an electron plasma with an arbitrary degree of degeneracy of the electron gas in the conductive layer in an external electric field. We use the kinetic Vlasov-Boltzmann equation with the Bhatnagar-Gross-Krook collision integral and the Maxwell equation for the electric field. We use the mirror boundary conditions for the reflections of electrons from the layer boundary. The boundary problem reduces to a one-dimensional problem with a single velocity. For this, we use the method of consecutive approximations, linearization of the equations with respect to the absolute distribution of the Fermi-Dirac electrons, and the conservation law for the number of particles. Separation of variables then helps reduce the problem equations to a characteristic system of equations. In the space of generalized functions, we find the eigensolutions of the initial system, which correspond to the continuous spectrum (Van Kampen mode). Solving the dispersion equation, we then find the eigensolutions corresponding to the adjoint and discrete spectra (Drude and Debye modes). We then construct the general solution of the boundary problem by decomposing it into the eigensolutions. The coefficients of the decomposition are given by the boundary conditions. This allows obtaining the decompositions of the distribution function and the electric field in explicit form.

  19. High Penetration of Electrical Vehicles in Microgrids: Threats and Opportunities

    NASA Astrophysics Data System (ADS)

    Khederzadeh, Mojtaba; Khalili, Mohammad

    2014-10-01

    Given that the microgrid concept is the building block of future electric distribution systems and electric vehicles (EVs) are the future of the transportation market, this paper investigates the impact of EVs on the performance of microgrids. Demand-side participation is used to cope with the increasing demand for EV charging. The problem of coordinating EV charging and discharging (with vehicle-to-grid (V2G) functionality) and demand response is formulated as a market-clearing mechanism that accepts bids from the demand and supply sides and takes into account the constraints put forward by the different parties. A day-ahead market with detailed bids and offers within the microgrid is therefore designed, whose objective is to maximize social welfare, defined as the value that consumers attach to the electrical energy they buy, plus the benefit to EV owners participating in the V2G functionality, minus the cost of producing/purchasing this energy. As the optimization problem is a mixed integer nonlinear program, it is decomposed into one master problem for energy scheduling and one subproblem for power flow computation. The two problems are solved iteratively by interfacing MATLAB with GAMS. Simulation results on a sample microgrid with different residential, commercial and industrial consumers, associated demand-side bidding, and different penetration levels of EVs support the proposed formulation of the problem and the applied methods.
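
    A minimal sketch of a single-period welfare-maximizing market clearing with scipy.optimize.linprog; the bid and offer numbers are invented, and the paper's day-ahead model additionally handles V2G bids, time coupling, and power-flow constraints:

      from scipy.optimize import linprog

      demand_bids = [(0.30, 40.0), (0.22, 25.0)]    # ($/kWh, max kWh)
      supply_offers = [(0.15, 30.0), (0.25, 50.0)]

      # linprog minimizes, so minimize the negative of social welfare
      c = [-p for p, _ in demand_bids] + [p for p, _ in supply_offers]
      bounds = [(0, q) for _, q in demand_bids + supply_offers]
      # balance constraint: accepted demand - accepted supply = 0
      A_eq = [[1.0] * len(demand_bids) + [-1.0] * len(supply_offers)]
      res = linprog(c=c, A_eq=A_eq, b_eq=[0.0], bounds=bounds)
      print(res.x, -res.fun)  # cleared quantities and the achieved welfare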

  20. Thermal energy storage to minimize cost and improve efficiency of a polygeneration district energy system in a real-time electricity market

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Powell, Kody M.; Kim, Jong Suk; Cole, Wesley J.

    2016-10-01

    District energy systems can produce low-cost utilities for large energy networks, but they can also be a resource for the electric grid through their ability to ramp production or store thermal energy in response to real-time market signals. In this work, dynamic optimization exploits the flexibility of thermal energy storage by determining the optimal times to store and extract excess energy. This concept is applied to a polygeneration distributed energy system with combined heat and power, district heating, district cooling, and chilled water thermal energy storage. The system is a university campus responsible for meeting the energy needs of tens of thousands of people. The objective of the dynamic optimization problem is to minimize cost over a 24-h period while meeting multiple loads in real time. The paper presents a novel algorithm to solve this dynamic optimization problem with energy storage by decomposing it into multiple static mixed-integer nonlinear programming (MINLP) problems. Another innovative feature of this work is the study of a large, complex energy network which includes the interrelations of a wide variety of energy technologies. Results indicate that a cost savings of 16.5% is realized when the system can participate in the wholesale electricity market.
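
    A toy illustration of decomposing a 24-h storage-dispatch horizon into per-hour static decisions; the price signal, storage limits, and the threshold policy are invented stand-ins for the paper's static MINLP subproblems:

      import numpy as np

      prices = 0.10 + 0.05 * np.sin(np.linspace(0, 2 * np.pi, 24))  # $/kWh signal
      soc, cap, rate = 0.0, 50.0, 10.0   # state of charge, capacity, kWh per hour
      threshold = np.median(prices)
      cost = 0.0
      for price in prices:
          if price < threshold:          # static subproblem: charge when cheap
              e = min(rate, cap - soc)
              soc += e
              cost += e * price
          else:                          # discharge when expensive
              e = min(rate, soc)
              soc -= e
              cost -= e * price
      print(f"net cost over the day: {cost:.2f} $")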

  1. A Tale of Three Classes: Case Studies in Course Complexity

    ERIC Educational Resources Information Center

    Gill, T. Grandon; Jones, Joni

    2010-01-01

    This paper examines the question of decomposability versus complexity of teaching situations by presenting three case studies of MIS courses. Because all three courses were highly successful in their observed outcomes, the paper hypothesizes that if the attributes of effective course design are decomposable, one would expect to see a large number…

  2. Potassium cuprate (3)

    NASA Technical Reports Server (NTRS)

    Wahl, Kurt; Klemm, Wilhelm

    1988-01-01

    The reaction of KO2 and CuO in an O2 atmosphere at 400 to 450 C results in KCuO, which is a steel-blue and nonmagnetic compound. This substance exhibits a characteristic X-ray diagram; it decomposes in dilute acids to form O2 and Cu(II) salts. It decomposes thermally above 500 C.

  3. An improved triple collocation algorithm for decomposing autocorrelated and white soil moisture retrieval errors

    USDA-ARS's Scientific Manuscript database

    If not properly accounted for, autocorrelated errors in observations can lead to inaccurate results in soil moisture data analysis and reanalysis. Here, we propose a more generalized form of the triple collocation algorithm (GTC) capable of decomposing the total error variance of remotely-sensed surf...
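
    For reference, the classical (white-error) triple collocation estimate that the proposed GTC generalizes can be sketched as follows, assuming three collocated series that share one signal and have mutually independent, zero-mean errors:

      import numpy as np

      def triple_collocation_error_vars(x, y, z):
          # error variance of each series from pairwise cross-covariances
          c = np.cov(np.vstack([x, y, z]))
          ex = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
          ey = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
          ez = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
          return ex, ey, ez

      # Synthetic check: three noisy views of one signal.
      rng = np.random.default_rng(0)
      s = rng.standard_normal(100000)
      print(triple_collocation_error_vars(s + 0.3 * rng.standard_normal(s.size),
                                          s + 0.5 * rng.standard_normal(s.size),
                                          s + 0.7 * rng.standard_normal(s.size)))
      # expected roughly (0.09, 0.25, 0.49)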

  4. Kill the Song--Steal the Show: What Does Distinguish Predicative Metaphors from Decomposable Idioms?

    ERIC Educational Resources Information Center

    Caillies, Stephanie; Declercq, Christelle

    2011-01-01

    This study examined the semantic processing difference between decomposable idioms and novel predicative metaphors. It was hypothesized that idiom comprehension results from the retrieval of a figurative meaning stored in memory, that metaphor comprehension requires a sense creation process and that this process difference affects the processing…

  5. A review of bacterial interactions with blow flies (Diptera: Calliphoridae) of medical, veterinary, and forensic importance

    USDA-ARS's Scientific Manuscript database

    Blow flies are commonly associated with decomposing material. In most cases, the larvae are found feeding on decomposing vertebrate remains. However, some species have specialized to feed on living tissue or can survive on other alternate resources like feces. Because of their affiliation with su...

  6. When microbes and consumers determine the limiting nutrient of autotrophs: a theoretical analysis

    PubMed Central

    Cherif, Mehdi; Loreau, Michel

    2008-01-01

    Ecological stoichiometry postulates that differential nutrient recycling of elements such as nitrogen and phosphorus by consumers can shift the element that limits plant growth. However, this hypothesis has so far considered the effect of consumers, mostly herbivores, out of their food-web context. Microbial decomposers are important components of food webs, and might prove as important as consumers in changing the availability of elements for plants. In this theoretical study, we investigate how decomposers determine the nutrient that limits plants, both by feeding on nutrients and organic carbon released by plants and consumers, and by being fed upon by omnivorous consumers. We show that decomposers can greatly alter the relative availability of nutrients for plants. The type of limiting nutrient promoted by decomposers depends on their own elemental composition and, when applicable, on their ingestion by consumers. Our results highlight the limitations of previous stoichiometric theories of plant nutrient limitation control, which often ignored trophic levels other than plants and herbivores. They also suggest that detrital chains play an important role in determining plant nutrient limitation in many ecosystems. PMID:18854301

  7. Maintenance planning for a fleet of turbine generators via mathematical programming (Planification de la maintenance d'un parc de turbines-alternateurs par programmation mathematique)

    NASA Astrophysics Data System (ADS)

    Aoudjit, Hakim

    A growing number of Hydro-Quebec's hydro generators are at the end of their useful life, and maintenance managers fear facing a number of overhauls exceeding what can be handled. Maintenance crews and budgets are limited, and these withdrawals may take up to a full year and mobilize significant resources, in addition to the loss of electricity production. Moreover, increased export sales forecasts and severe production patterns are expected to speed up wear, which can lead to halting many units at the same time. Currently, expert judgment is at the heart of withdrawal decisions, which rely primarily on periodic inspections and in-situ measurements; the results are sent to the maintenance planning team, which coordinates all withdrawal decisions. The degradation phenomena at play are random in nature, and the ability to predict wear using inspections alone is limited to the short term at best. Long-term planning of major overhauls is sought by managers for the sake of justifying and rationalizing budgets and resources. The maintenance managers are able to provide a huge amount of data, including the hourly production of each unit over several years, the repair history for each part of a unit, and the major withdrawals since the 1950s. In this research, we tackle the problem of long-term maintenance planning for a fleet of 90 hydro generators at Hydro-Quebec over a 50-year planning horizon. We lay out a scientific and rational framework to support withdrawal decisions by using part of the available data and maintenance history while fulfilling a set of technical and economic constraints. We propose a planning approach based on a constrained optimization framework. We begin by decomposing and sorting hydro generator components to highlight the most influential parts. A failure rate model is developed to take into account the technical characteristics and utilization of each unit. Replacement and repair policies are then evaluated for each of the components, and strategies are derived for the whole unit. Traditional univariate policies, such as the age replacement policy and the minimal repair policy, are calculated. These policies are extended to build an alternative bivariate maintenance policy, as well as a repair strategy in which the state of a component after a repair is rejuvenated by a constant coefficient. These templates form the basis for the calculation of the objective functions of the scheduling problem. On the one hand, the problem is treated as a nonlinear program whose objective is to minimize the average total maintenance cost per unit of time over an infinite horizon for the fleet, subject to technical and economic constraints. A formulation is also proposed for the case of a finite time horizon. In the event of electricity production variation, and given that the usage profile is known, the influence of production scenarios is reflected in the unit's components through their failure rates. In this context, prognoses of possible resource problems are made by studying the characteristics of the generated plans. On the other hand, withdrawals are also subjected to two decision criteria: in addition to minimizing the average total maintenance cost per unit of time over an infinite time horizon, the best achievable reliability of the remaining turbine generators is sought. This problem is treated as a biobjective nonlinear optimization problem. Finally, a series of problems describing multiple contexts is solved for planning the renovation of the 90 turbine generator units, considering 3 major components in each unit and 2 types of maintenance policies for each component.
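
    A sketch of one ingredient named above, the classical age replacement policy, which minimizes the long-run cost rate for a wear-out lifetime distribution; the Weibull parameters and costs below are illustrative, not Hydro-Quebec data:

      import numpy as np
      from scipy.integrate import quad
      from scipy.optimize import minimize_scalar

      # Replace preventively at age T (cost cp) or on failure (cost cf > cp).
      beta, eta = 2.5, 10.0          # Weibull shape (wear-out) and scale, years
      cp, cf = 1.0, 5.0

      F = lambda t: 1.0 - np.exp(-(t / eta) ** beta)   # failure CDF
      R = lambda t: np.exp(-(t / eta) ** beta)         # survival function

      def cost_rate(T):
          # renewal-reward: expected cycle cost over expected cycle length
          expected_cycle_cost = cp * R(T) + cf * F(T)
          expected_cycle_length = quad(R, 0.0, T)[0]
          return expected_cycle_cost / expected_cycle_length

      res = minimize_scalar(cost_rate, bounds=(0.1, 30.0), method="bounded")
      print(f"replace every {res.x:.2f} years, cost rate {res.fun:.3f}")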

  8. Domain decomposition and matching for time-domain analysis of motions of ships advancing in head sea

    NASA Astrophysics Data System (ADS)

    Tang, Kai; Zhu, Ren-chuan; Miao, Guo-ping; Fan, Ju

    2014-08-01

    A domain decomposition and matching method in the time domain is outlined for simulating the motions of ships advancing in waves. The flow field is decomposed into inner and outer domains by an imaginary control surface; the Rankine source method is applied in the inner domain, while the transient Green function method is used in the outer domain. The two initial boundary value problems are matched on the control surface. The corresponding numerical codes are developed, and the added masses, wave exciting forces, and motions for ships advancing in head sea, for the Series 60 ship and the S175 containership, are presented and verified. Good agreement is obtained when the numerical results are compared with experimental data and other references. The present method is more efficient because panel discretization is required only in the inner domain during the numerical calculation, and it exhibits good numerical stability, avoiding the divergence problems encountered for ships with flare.

  9. Robust model predictive control of nonlinear systems with unmodeled dynamics and bounded uncertainties based on neural networks.

    PubMed

    Yan, Zheng; Wang, Jun

    2014-03-01

    This paper presents a neural network approach to robust model predictive control (MPC) for constrained discrete-time nonlinear systems with unmodeled dynamics affected by bounded uncertainties. The exact nonlinear model of the underlying process is not precisely known, but a partially known nominal model is available. This partially known nonlinear model is first decomposed into an affine term plus an unknown high-order term via Jacobian linearization. The linearization residue, combined with the unmodeled dynamics, is then modeled using an extreme learning machine via supervised learning. The minimax methodology is exploited to deal with the bounded uncertainties. The minimax optimization problem is reformulated as a convex minimization problem and is iteratively solved by a two-layer recurrent neural network. The proposed neurodynamic approach to nonlinear MPC improves computational efficiency and sheds light on the real-time implementability of MPC technology. Simulation results are provided to substantiate the effectiveness and characteristics of the proposed approach.
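
    A minimal extreme learning machine of the kind used here to fit the linearization residue; the architecture details below (tanh features, 100 hidden units) are assumptions of the sketch, not the paper's configuration:

      import numpy as np

      def elm_fit(X, y, n_hidden=100, seed=0):
          # random hidden layer; only the output weights are trained (least squares)
          rng = np.random.default_rng(seed)
          W = rng.standard_normal((X.shape[1], n_hidden))
          b = rng.standard_normal(n_hidden)
          H = np.tanh(X @ W + b)
          beta = np.linalg.lstsq(H, y, rcond=None)[0]
          return W, b, beta

      def elm_predict(X, W, b, beta):
          return np.tanh(X @ W + b) @ beta

      # Supervised check on a toy nonlinear residue.
      X = np.random.default_rng(1).uniform(-1, 1, (500, 2))
      y = np.sin(3 * X[:, 0]) * X[:, 1]
      W, b, beta = elm_fit(X, y)
      print(np.mean((elm_predict(X, W, b, beta) - y) ** 2))  # small training error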

  10. Analysis of Drop Oscillations Excited by an Electrical Point Force in AC EWOD

    NASA Astrophysics Data System (ADS)

    Oh, Jung Min; Ko, Sung Hee; Kang, Kwan Hyoung

    2008-03-01

    Recently, a few researchers have reported the oscillation of a sessile drop in AC EWOD (electrowetting on dielectrics) and some of its consequences. The drop oscillation problem in AC EWOD is associated with various applications based on electrowetting, such as LOC (lab-on-a-chip) devices, liquid lenses, and electronic displays. However, no theoretical analysis of the problem has been attempted yet. In the present paper, we propose a theoretical model for this oscillation based on the conventional analysis of drop oscillations. The domain perturbation method is used to derive the shape mode equations under the assumptions of weak viscous flow and small deformation. The Maxwell stress is exerted on the three-phase contact line of the droplet like a point force. The force is regarded as a delta function and is decomposed into the driving forces of each shape mode. The theoretical results on the shape and the frequency responses are compared with experiments, showing qualitative agreement.
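
    For a free drop, the standard route to such a mode decomposition is the completeness relation of the Legendre polynomials, which splits a contact-line point force at polar angle theta_0 into shape modes (the sessile-drop geometry of the paper modifies the mode functions; this is only the textbook expansion):

      \delta(\cos\theta - \cos\theta_0)
          = \sum_{n=0}^{\infty} \frac{2n+1}{2}\, P_n(\cos\theta_0)\, P_n(\cos\theta)

    so the n-th shape mode is driven with a strength proportional to P_n(cos theta_0).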

  11. Wave chaos in the elastic disk.

    PubMed

    Sondergaard, Niels; Tanner, Gregor

    2002-12-01

    The relation between the elastic wave equation for plane, isotropic bodies and an underlying classical ray dynamics is investigated. We study, in particular, the eigenfrequencies of an elastic disk with free boundaries and their connection to periodic rays inside the circular domain. Even though the problem is separable, wave mixing between the shear and pressure component of the wave field at the boundary leads to an effective stochastic part in the ray dynamics. This introduces phenomena typically associated with classical chaos as, for example, an exponential increase in the number of periodic orbits. Classically, the problem can be decomposed into an integrable part and a simple binary Markov process. Similarly, the wave equation can, in the high-frequency limit, be mapped onto a quantum graph. Implications of this result for the level statistics are discussed. Furthermore, a periodic trace formula is derived from the scattering matrix based on the inside-outside duality between eigenmodes and scattering solutions and periodic orbits are identified by Fourier transforming the spectral density.

  12. Designing and optimizing a healthcare kiosk for the community.

    PubMed

    Lyu, Yongqiang; Vincent, Christopher James; Chen, Yu; Shi, Yuanchun; Tang, Yida; Wang, Wenyao; Liu, Wei; Zhang, Shuangshuang; Fang, Ke; Ding, Ji

    2015-03-01

    Investigating new ways to deliver care, such as the use of self-service kiosks to collect and monitor signs of wellness, supports healthcare efficiency and inclusivity. Self-service kiosks offer this potential, but there is a need for solutions to meet acceptable standards, e.g. provision of accurate measurements. This study investigates the design and optimization of a prototype healthcare kiosk to collect vital signs measures. The design problem was decomposed, formalized, focused and used to generate multiple solutions. Systematic implementation and evaluation allowed for the optimization of measurement accuracy, first for individuals and then for a population. The optimized solution was tested independently to check the suitability of the methods, and quality of the solution. The process resulted in a reduction of measurement noise and an optimal fit, in terms of the positioning of measurement devices. This guaranteed the accuracy of the solution and provides a general methodology for similar design problems.

  13. Kernel Methods for Mining Instance Data in Ontologies

    NASA Astrophysics Data System (ADS)

    Bloehdorn, Stephan; Sure, York

    The amount of ontologies and metadata available on the Web is constantly growing. The successful application of machine learning techniques for learning ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principled approaches have existed so far for mining from the Semantic Web itself. We investigate how machine learning algorithms can be made amenable to directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by decomposing the kernel computation into specialized kernels for selected characteristics of an ontology, which can be flexibly assembled and tuned. Initial experiments on real-world Semantic Web data show promising results and demonstrate the usefulness of our approach.
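
    A small sketch of the decomposition idea: specialized kernels for selected instance characteristics are combined, with tunable weights, into one valid kernel (a weighted sum of positive semi-definite kernels is again positive semi-definite). The instance fields and weights below are invented for illustration:

      import numpy as np

      def class_kernel(a, b):
          # set-intersection kernel on the classes an instance belongs to
          return float(len(a["classes"] & b["classes"]))

      def property_kernel(a, b):
          # Gaussian kernel on a numeric data property
          return float(np.exp(-0.5 * (a["age"] - b["age"]) ** 2))

      def instance_kernel(a, b, w=(1.0, 0.5)):
          return w[0] * class_kernel(a, b) + w[1] * property_kernel(a, b)

      x = {"classes": {"Person", "Student"}, "age": 24.0}
      y = {"classes": {"Person", "Employee"}, "age": 31.0}
      print(instance_kernel(x, y))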

  14. High Performance Computing of Meshless Time Domain Method on Multi-GPU Cluster

    NASA Astrophysics Data System (ADS)

    Ikuno, Soichiro; Nakata, Susumu; Hirokawa, Yuta; Itoh, Taku

    2015-01-01

    High performance computing of the Meshless Time Domain Method (MTDM) on multiple GPUs using the supercomputer HA-PACS (Highly Accelerated Parallel Advanced system for Computational Sciences) at the University of Tsukuba is investigated. Generally, the finite difference time domain (FDTD) method is adopted for the numerical simulation of electromagnetic wave propagation phenomena. However, the numerical domain must be divided into rectangular meshes, and it is difficult to apply the method to problems with complex domains. On the other hand, MTDM can easily be adapted to such problems because it does not require meshes. In the present study, we implement MTDM on a multi-GPU cluster to speed up the method, and we numerically investigate its performance on the cluster. To reduce the computation time, the communication between the decomposed domains is hidden behind the perfectly matched layer (PML) calculation procedure. The computational results show that MTDM on 128 GPUs is 173 times faster than a single-CPU calculation.
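
    The overlap pattern can be sketched with mpi4py (run under mpiexec): post non-blocking halo exchanges, do local work, then wait. The array sizes and the stencil standing in for the PML-phase work are invented:

      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      field = np.random.rand(10000)
      halo = np.empty(1, dtype=field.dtype)

      reqs = []
      if rank + 1 < size:   # post non-blocking halo exchange with the neighbor
          reqs.append(comm.Isend(field[-1:], dest=rank + 1))
      if rank > 0:
          reqs.append(comm.Irecv(halo, source=rank - 1))

      interior = np.convolve(field, [0.25, 0.5, 0.25], mode="same")  # local work
      MPI.Request.Waitall(reqs)   # communication finished while we computed
      if rank > 0:
          interior[0] = 0.5 * (halo[0] + field[0])  # boundary update afterwards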

  15. A Combined Adaptive Neural Network and Nonlinear Model Predictive Control for Multirate Networked Industrial Process Control.

    PubMed

    Wang, Tong; Gao, Huijun; Qiu, Jianbin

    2016-02-01

    This paper investigates the multirate networked industrial process control problem in a double-layer architecture. First, the output tracking problem for the sampled-data nonlinear plant at the device layer, with sampling period T_d, is investigated using adaptive neural network (NN) control, and it is shown that the outputs of the subsystems at the device layer can track the decomposed setpoints. Then, the outputs and inputs of the device-layer subsystems are sampled with sampling period T_u at the operation layer to form the index prediction, which is used to predict the overall performance index at a lower frequency. A radial basis function NN is utilized as the prediction function due to its approximation ability. Then, considering the dynamics of the overall closed-loop system, a nonlinear model predictive control method is proposed to guarantee system stability and compensate for network-induced delays and packet dropouts. Finally, a continuous stirred tank reactor system is given in the simulation part to demonstrate the effectiveness of the proposed method.

  16. Nonlinear zero-sum differential game analysis by singular perturbation methods

    NASA Technical Reports Server (NTRS)

    Shinar, J.; Farber, N.

    1982-01-01

    A class of nonlinear, zero-sum differential games exhibiting time-scale separation properties can be analyzed by singular-perturbation techniques. The merits of such an analysis, leading to an approximate game solution, as well as the 'well-posedness' of the formulation, are discussed. This approach is shown to be attractive for investigating pursuit-evasion problems; the original multidimensional differential game is decomposed into a 'simple pursuit' (free-stream) game and two independent (boundary-layer) optimal-control problems. Using multiple time-scale boundary-layer models results in a pair of uniformly valid zero-order composite feedback strategies. The dependence of suboptimal strategies on relative geometry and own-state measurements is demonstrated by a three-dimensional, constant-speed example. For game analysis with realistic vehicle dynamics, the technique of forced singular perturbations and a variable modeling approach are proposed. The accuracy of the analysis is evaluated by comparison with the numerical solution of a time-optimal, variable-speed 'game of two cars' in the horizontal plane.

  17. An integral equation formulation for the diffraction from convex plates and polyhedra.

    PubMed

    Asheim, Andreas; Svensson, U Peter

    2013-06-01

    A formulation of the problem of scattering from obstacles with edges is presented. The formulation is based on decomposing the field into geometrical acoustics, first-order, and multiple-order edge diffraction components. An existing secondary-source model for edge diffraction from finite edges is extended to handle multiple diffraction of all orders. It is shown that the multiple-order diffraction component can be found via the solution to an integral equation formulated on pairs of edge points. This gives what can be called an edge source signal. In a subsequent step, this edge source signal is propagated to yield a multiple-order diffracted field, taking all diffraction orders into account. Numerical experiments demonstrate accurate response for frequencies down to 0 for thin plates and a cube. No problems with irregular frequencies, as happen with the Kirchhoff-Helmholtz integral equation, are observed for this formulation. For the axisymmetric scattering from a circular disc, a highly effective symmetric formulation results, and results agree with reference solutions across the entire frequency range.

  18. A quantification method for heat-decomposable methylglyoxal oligomers and its application on 1,3,5-trimethylbenzene SOA

    NASA Astrophysics Data System (ADS)

    Rodigast, Maria; Mutzel, Anke; Herrmann, Hartmut

    2017-03-01

    Methylglyoxal forms oligomeric compounds in the atmospheric aqueous particle phase, which could establish a significant contribution to the formation of aqueous secondary organic aerosol (aqSOA). Thus far, no suitable method for the quantification of methylglyoxal oligomers is available despite the great effort spent for structure elucidation. In the present study a simplified method was developed to quantify heat-decomposable methylglyoxal oligomers as a sum parameter. The method is based on the thermal decomposition of oligomers into methylglyoxal monomers. Formed methylglyoxal monomers were detected using PFBHA (o-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine hydrochloride) derivatisation and gas chromatography-mass spectrometry (GC/MS) analysis. The method development was focused on the heating time (varied between 15 and 48 h), pH during the heating process (pH = 1-7), and heating temperature (50, 100 °C). The optimised values of these method parameters are presented. The developed method was applied to quantify heat-decomposable methylglyoxal oligomers formed during the OH-radical oxidation of 1,3,5-trimethylbenzene (TMB) in the Leipzig aerosol chamber (LEipziger AerosolKammer, LEAK). Oligomer formation was investigated as a function of seed particle acidity and relative humidity. A fraction of heat-decomposable methylglyoxal oligomers of up to 8 % in the produced organic particle mass was found, highlighting the importance of those oligomers formed solely by methylglyoxal for SOA formation. Overall, the present study provides a new and suitable method for quantification of heat-decomposable methylglyoxal oligomers in the aqueous particle phase.

  19. Application of wavelet multi-resolution analysis for correction of seismic acceleration records

    NASA Astrophysics Data System (ADS)

    Ansari, Anooshiravan; Noorzad, Assadollah; Zare, Mehdi

    2007-12-01

    During an earthquake, many stations record the ground motion, but often only a few of the records can be corrected using conventional high-pass and low-pass filtering methods; the others are identified as highly contaminated by noise and, as a result, useless. There are two major problems associated with these noisy records. First, since the signal-to-noise ratio (S/N) is low, it is not possible to discriminate between the original signal and the noise, either in the frequency domain or in the time domain. Consequently, it is not possible to cancel out the noise using conventional filtering methods. The second problem is the non-stationary character of the noise: in many cases the characteristics of the noise vary over time, and in these situations it is not possible to apply frequency-domain correction schemes. When correcting acceleration signals contaminated with high-level non-stationary noise, an important question is whether it is possible to estimate the state of the noise in different bands of time and frequency. Wavelet multi-resolution analysis decomposes a signal into different time-frequency components and, besides introducing a suitable criterion for identifying the noise in each component, also provides the mathematical tool required for the correction of highly noisy acceleration records. In this paper, the characteristics of the wavelet de-noising procedures are examined through the correction of selected real and synthetic acceleration time histories. It is concluded that this method provides a very flexible and efficient tool for the correction of very noisy and non-stationary records of ground acceleration. In addition, a two-step correction scheme is proposed for long-period correction of the acceleration records. This method has the advantage of stable results in the displacement time history and response spectrum.
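
    A minimal wavelet de-noising loop in the spirit described, using PyWavelets; the universal soft threshold estimated from the finest level is one common choice, not necessarily the paper's criterion:

      import numpy as np
      import pywt  # PyWavelets

      def wavelet_denoise(signal, wavelet="db4", level=6):
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise scale estimate
          thr = sigma * np.sqrt(2.0 * np.log(signal.size))
          coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
          return pywt.waverec(coeffs, wavelet)[: signal.size]

      t = np.linspace(0, 1, 2048)
      noisy = np.sin(2 * np.pi * 5 * t) + 0.4 * np.random.randn(t.size)
      clean = wavelet_denoise(noisy)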

  20. Deconstructing sub-clinical psychosis into latent-state and trait variables over a 30-year time span.

    PubMed

    Rössler, Wulf; Hengartner, Michael P; Ajdacic-Gross, Vladeta; Haker, Helene; Angst, Jules

    2013-10-01

    Our aim was to deconstruct the variance underlying the expression of sub-clinical psychosis symptoms into portions associated with latent time-dependent states and time-invariant traits. We analyzed data of 335 subjects from the general population of Zurich, Switzerland, who had been repeatedly measured between 1979 (age 20/21) and 2008 (age 49/50). We applied two measures of sub-clinical psychosis derived from the SCL-90-R, namely schizotypal signs (STS) and schizophrenia nuclear symptoms (SNS). Variance was decomposed with latent state-trait analysis and associations with covariates were examined with generalized linear models. At ages 19/20 and 49/50, the latent states underlying STS accounted for 48% and 51% of variance, whereas for SNS those estimates were 62% and 50%. Between those age classes, however, expression of sub-clinical psychosis was strongly associated with stable traits (75% and 89% of total variance in STS and SNS, respectively, at age 27/28). Latent states underlying variance in STS and SNS were particularly related to partnership problems over almost the entire observation period. STS was additionally related to employment problems, whereas drug-use was a strong predictor of states underlying both syndromes at age 19/20. The latent trait underlying expression of STS and SNS was particularly related to low sense of mastery and self-esteem and to high depressiveness. Although most psychosis symptoms are transient and episodic in nature, the variability in their expression is predominantly caused by stable traits. Those time-invariant and rather consistent effects are particularly influential around age 30, whereas the occasion-specific states appear to be particularly influential at ages 20 and 50.

  1. Reconfigurable Model Execution in the OpenMDAO Framework

    NASA Technical Reports Server (NTRS)

    Hwang, John T.

    2017-01-01

    NASA's OpenMDAO framework facilitates constructing complex models and computing their derivatives for multidisciplinary design optimization. Decomposing a model into components that follow a prescribed interface enables OpenMDAO to assemble multidisciplinary derivatives from the component derivatives using what amounts to the adjoint method, direct method, chain rule, global sensitivity equations, or any combination thereof, using the MAUD architecture. OpenMDAO also handles the distribution of processors among the disciplines by hierarchically grouping the components, and it automates the data transfer between components that are on different processors. These features have made OpenMDAO useful for applications in aircraft design, satellite design, wind turbine design, and aircraft engine design, among others. This paper presents new algorithms for OpenMDAO that enable reconfigurable model execution. This concept refers to dynamically changing, during execution, one or more of the variable sizes, solution algorithm, parallel load balancing, or set of variables, i.e., adding and removing components, perhaps to switch to a higher-fidelity sub-model. Any component can reconfigure at any point, even when running in parallel with other components, and the reconfiguration algorithm presented here performs the synchronized updates to all other components that are affected. A reconfigurable software framework for multidisciplinary design optimization enables new adaptive solvers, adaptive parallelization, and new applications such as gradient-based optimization with overset flow solvers and adaptive mesh refinement. Benchmarking results demonstrate the time savings for reconfiguration compared to setting up the model again from scratch, which can be significant in large-scale problems. Additionally, the new reconfigurability feature is applied to a mission profile optimization problem for commercial aircraft where both the parametrization of the mission profile and the time discretization are adaptively refined, resulting in computational savings of roughly 10% and the elimination of oscillations in the optimized altitude profile.
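
    What such a framework automates, in miniature: assembling a total derivative across a component boundary from component partials via the chain rule (toy functions, not OpenMDAO's API):

      def f(x):
          return x ** 2          # component 1: y = f(x)

      def g(y):
          return 3.0 * y + 1.0   # component 2: z = g(y)

      def df_dx(x):
          return 2.0 * x         # component 1 partial

      def dg_dy(y):
          return 3.0             # component 2 partial

      x = 2.0
      dz_dx = dg_dy(f(x)) * df_dx(x)  # chain rule across the boundary
      print(dz_dx)                    # 12.0; real frameworks generalize this
                                      # to coupled, implicit, parallel components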

  2. Anomaly detection for medical images based on a one-class classification

    NASA Astrophysics Data System (ADS)

    Wei, Qi; Ren, Yinhao; Hou, Rui; Shi, Bibo; Lo, Joseph Y.; Carin, Lawrence

    2018-02-01

    Detecting an anomaly such as a malignant tumor or a nodule from medical images, including mammograms, CT, or PET images, is still an ongoing research problem drawing a lot of attention, with applications in medical diagnosis. A conventional way to address this is to learn a discriminative model using training datasets of negative and positive samples. The learned model can be used to classify a testing sample into a positive or negative class. However, in medical applications, the high imbalance between negative and positive samples poses a difficulty for learning algorithms, as they will be biased towards the majority group, i.e., the negative one. To address this imbalanced data issue as well as leverage the huge amount of negative samples, i.e., normal medical images, we propose to learn an unsupervised model to characterize the negative class. To make the learned model more flexible and extendable for medical images of different scales, we have designed an autoencoder based on a deep neural network to characterize the negative patches decomposed from large medical images. A testing image is decomposed into patches and then fed into the learned autoencoder to reconstruct these patches themselves. The reconstruction error of one patch is used to classify this patch into a binary class, i.e., a positive or a negative one, leading to a one-class classifier. The positive patches highlight the suspicious areas containing anomalies in a large medical image. The proposed method has been tested on the InBreast dataset and achieves an AUC of 0.84. The main contribution of our work can be summarized as follows. 1) The proposed one-class learning requires only data from one class, i.e., the negative data; 2) The patch-based learning makes the proposed method scalable to images of different sizes and helps avoid the large-scale problem for medical images; 3) The training of the proposed deep convolutional neural network (DCNN) based auto-encoder is fast and stable.
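
    The patch-based scoring step can be sketched as follows; `autoencoder` is a stand-in for an already trained model exposing a .predict method (a hypothetical API, not defined here), and patch size, stride, and threshold are assumptions:

      import numpy as np

      def extract_patches(img, size=64, stride=64):
          H, W = img.shape
          return np.array([img[i:i + size, j:j + size]
                           for i in range(0, H - size + 1, stride)
                           for j in range(0, W - size + 1, stride)])

      def score_patches(autoencoder, patches):
          flat = patches.reshape(len(patches), -1)
          recon = autoencoder.predict(flat)            # hypothetical API
          return np.mean((flat - recon) ** 2, axis=1)  # per-patch error

      # flags = score_patches(model, extract_patches(image)) > threshold
      # patches with large reconstruction error are flagged as anomalous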

  3. Unified commutation-pruning technique for efficient computation of composite DFTs

    NASA Astrophysics Data System (ADS)

    Castro-Palazuelos, David E.; Medina-Melendrez, Modesto Gpe.; Torres-Roman, Deni L.; Shkvarko, Yuriy V.

    2015-12-01

    An efficient computation of a composite length discrete Fourier transform (DFT), as well as a fast Fourier transform (FFT) of both time and space data sequences in uncertain (non-sparse or sparse) computational scenarios, requires specific processing algorithms. Traditional algorithms typically employ some pruning methods without any commutations, which prevents them from attaining the potential computational efficiency. In this paper, we propose an alternative unified approach with automatic commutations between three computational modalities aimed at efficient computations of the pruned DFTs adapted for variable composite lengths of the non-sparse input-output data. The first modality is an implementation of the direct computation of a composite length DFT, the second one employs the second-order recursive filtering method, and the third one performs the new pruned decomposed transform. The pruned decomposed transform algorithm performs decimation in time or space (DIT) in the data acquisition domain and then decimation in frequency (DIF). The unified combination of these three algorithms is addressed as the DFTCOMM technique. Based on the treatment of the combinational-type hypothesis-testing optimization problem of preferable allocations between all feasible commuting-pruning modalities, we have found the globally optimal solution to the pruning problem, which always requires fewer or, at most, the same number of arithmetic operations as the other feasible modalities. The DFTCOMM method therefore outperforms the existing competing pruning techniques in terms of attainable savings in the number of required arithmetic operations. Finally, we compare DFTCOMM with the recently developed sparse fast Fourier transform (SFFT) algorithmic family. We show that, in sensing scenarios with either sparse or non-sparse data Fourier spectra, the DFTCOMM technique is robust against such model uncertainties, in the sense of being insensitive to sparsity/non-sparsity restrictions and to the variability of the operating parameters.
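
    For background, the composite-length decomposition that such algorithms exploit can be sketched as a recursive mixed-radix decimation-in-time DFT; this illustrates only the factorization, not the DFTCOMM commutation-pruning logic:

      import numpy as np

      def smallest_factor(n):
          for p in range(2, int(n ** 0.5) + 1):
              if n % p == 0:
                  return p
          return n

      def dft_composite(x):
          # recursive mixed-radix DIT DFT for composite lengths
          x = np.asarray(x, dtype=complex)
          n = len(x)
          p = smallest_factor(n)
          if p == n:  # prime (or length-1) case: direct DFT
              k = np.arange(n)
              return np.exp(-2j * np.pi * np.outer(k, k) / n) @ x
          m = n // p
          sub = [dft_composite(x[r::p]) for r in range(p)]  # p subproblems of size m
          X = np.empty(n, dtype=complex)
          for k in range(n):
              X[k] = sum(np.exp(-2j * np.pi * r * k / n) * sub[r][k % m]
                         for r in range(p))
          return X

      x = np.random.rand(12)  # composite length 12 = 2 * 2 * 3
      print(np.allclose(dft_composite(x), np.fft.fft(x)))  # True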

  4. Management intensity alters decomposition via biological pathways

    USGS Publications Warehouse

    Wickings, Kyle; Grandy, A. Stuart; Reed, Sasha; Cleveland, Cory

    2011-01-01

    Current conceptual models predict that changes in plant litter chemistry during decomposition are primarily regulated by both initial litter chemistry and the stage-or extent-of mass loss. Far less is known about how variations in decomposer community structure (e.g., resulting from different ecosystem management types) could influence litter chemistry during decomposition. Given the recent agricultural intensification occurring globally and the importance of litter chemistry in regulating soil organic matter storage, our objectives were to determine the potential effects of agricultural management on plant litter chemistry and decomposition rates, and to investigate possible links between ecosystem management, litter chemistry and decomposition, and decomposer community composition and activity. We measured decomposition rates, changes in litter chemistry, extracellular enzyme activity, microarthropod communities, and bacterial versus fungal relative abundance in replicated conventional-till, no-till, and old field agricultural sites for both corn and grass litter. After one growing season, litter decomposition under conventional-till was 20% greater than in old field communities. However, decomposition rates in no-till were not significantly different from those in old field or conventional-till sites. After decomposition, grass residue in both conventional- and no-till systems was enriched in total polysaccharides relative to initial litter, while grass litter decomposed in old fields was enriched in nitrogen-bearing compounds and lipids. These differences corresponded with differences in decomposer communities, which also exhibited strong responses to both litter and management type. Overall, our results indicate that agricultural intensification can increase litter decomposition rates, alter decomposer communities, and influence litter chemistry in ways that could have important and long-term effects on soil organic matter dynamics. We suggest that future efforts to more accurately predict soil carbon dynamics under different management regimes may need to explicitly consider how changes in litter chemistry during decomposition are influenced by the specific metabolic capabilities of the extant decomposer communities.

  5. Synthesis, Characterization, and Processing of Copper, Indium, and Gallium Dithiocarbamates for Energy Conversion Applications

    NASA Technical Reports Server (NTRS)

    Duraj, S. A.; Duffy, N. V.; Hepp, A. F.; Cowen, J. E.; Hoops, M. D.; Brothers, S. M.; Baird, M. J.; Fanwick, P. E.; Harris, J. D.; Jin, M. H.-C.

    2009-01-01

    Ten dithiocarbamate complexes of indium(III) and gallium(III) have been prepared and characterized by elemental analysis, infrared spectra and melting point. Each complex was decomposed thermally and its decomposition products separated and identified with the combination of gas chromatography/mass spectrometry. Their potential utility as photovoltaic materials precursors was assessed. Bis(dibenzyldithiocarbamato)- and bis(diethyldithiocarbamato)copper(II), Cu(S2CN(CH2C6H5)2)2 and Cu(S2CN(C2H5)2)2 respectively, have also been examined for their suitability as precursors for copper sulfides for the fabrication of photovoltaic materials. Each complex was decomposed thermally and the products analyzed by GC/MS, TGA and FTIR. The dibenzyl derivative complex decomposed at a lower temperature (225-320 C) to yield CuS as the product. The diethyl derivative complex decomposed at a higher temperature (260-325 C) to yield Cu2S. No Cu containing fragments were noted in the mass spectra. Unusual recombination fragments were observed in the mass spectra of the diethyl derivative. Tris(bis(phenylmethyl)carbamodithioato-S,S'), commonly referred to as tris(N,N-dibenzyldithiocarbamato)indium(III), In(S2CNBz2)3, was synthesized and characterized by single crystal X-ray crystallography. The compound crystallizes in the triclinic space group P1(bar) with two molecules per unit cell. The material was further characterized using a novel analytical system employing the combined powers of thermogravimetric analysis, gas chromatography/mass spectrometry, and Fourier transform infrared (FT-IR) spectroscopy to investigate its potential use as a precursor for the chemical vapor deposition (CVD) of thin film materials for photovoltaic applications. Upon heating, the material thermally decomposes to release CS2 and benzyl moieties into the gas phase, resulting in bulk In2S3. Preliminary spray CVD experiments indicate that In(S2CNBz2)3 decomposed on a Cu substrate reacts to produce stoichiometric CuInS2 films.

  6. Global Climatic Indices Influence on Rainfall Spatiotemporal Distribution : A Case Study from Morocco

    NASA Astrophysics Data System (ADS)

    Elkadiri, R.; Zemzami, M.; Phillips, J.

    2017-12-01

    The climate of Morocco is affected by the Mediterranean Sea, the Atlantic Ocean, the Sahara, and the Atlas Mountains, creating a highly variable spatial and temporal rainfall distribution. In this study, we aim to decompose rainfall in Morocco into global and local signals and to understand the contribution of climatic indices (CIs) to rainfall. These analyses will contribute to understanding the Moroccan climate, which is typical of other Mediterranean and North African climatic zones, and to long-term climate prediction. The constructed database ranges from 1950 to 2013 and consists of monthly data from 147 rainfall stations and 37 CIs, provided mostly by the NOAA Climate Prediction Center. The following general steps were taken: (1) the study area was divided into 9 homogeneous climatic regions, and weighted precipitation was calculated for each region to reduce local effects. (2) Each CI was decomposed into nine components of different frequencies (D1 to D9) using wavelet multiresolution analysis; the four lowest frequencies of each CI were selected. (3) Each of the original and resulting signals was shifted by one to six months to account for the lagged effect of the global patterns. Applying steps two and three produced 1225 variables from the original 37 CIs. (4) The final 1225 variables were used to identify links between the global and regional CIs and precipitation in each of the nine homogeneous regions using stepwise regression and decision trees. The preliminary analyses focused on the north Atlantic zone and have shown that the North Atlantic Oscillation (PC-based) from NCAR (NAOPC), the Arctic Oscillation (AO), the North Atlantic Oscillation (NAO), the Western Mediterranean Oscillation (WMO) and the Extreme Eastern Tropical Pacific Sea Surface Temperature (NINO12) have the highest correlations with rainfall (33%, 30%, 27%, 21% and -20%, respectively). In addition, the 4-month-lagged NINO12 and the 6-month-lagged NAOPC and WMO have a collective contribution of more than 45% of the rainfall signal. Low frequencies are also represented in the rainfall, especially the 5th and 4th components of the decomposed CIs (48% and 42% of the frequencies, respectively), suggesting their potential contribution to interannual rainfall variability.
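
    Step (2) can be sketched with PyWavelets: split a climate-index series into components of different frequency bands via multiresolution analysis, then lag them to build candidate predictors. The wavelet choice and lags below are assumptions of the sketch:

      import numpy as np
      import pywt

      def mra_components(series, wavelet="db4", level=5):
          # reconstruct each band separately; components sum back to the
          # series (up to boundary effects)
          coeffs = pywt.wavedec(series, wavelet, level=level)
          comps = []
          for i in range(len(coeffs)):
              kept = [c if j == i else np.zeros_like(c)
                      for j, c in enumerate(coeffs)]
              comps.append(pywt.waverec(kept, wavelet)[: len(series)])
          return comps  # [approximation, D_level, ..., D1]

      ci = np.random.randn(768)   # stand-in for a monthly climate index
      # np.roll used for brevity; a real pipeline would truncate, not wrap
      predictors = [np.roll(c, lag)
                    for c in mra_components(ci) for lag in range(1, 7)]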

  7. Using Mid Infrared Spectroscopy to Predict the Decomposability of Soil Organic Matter Stored in Arctic Tundra Soils

    USDA-ARS's Scientific Manuscript database

    The large amounts of organic matter stored in permafrost-region soils are preserved in a relatively undecomposed state by the cold and wet environmental conditions limiting decomposer activity. With pending climate changes and the potential for warming of Arctic soils, there is a need to better unde...

  8. Draft genome sequence of the white-rot fungus Obba rivulosa 3A-2

    Treesearch

    Otto Miettinen; Robert Riley; Kerrie Barry; Daniel Cullen; Ronald P. de Vries; Matthieu Hainaut; Annele Hatakka; Bernard Henrissat; Kristiina Hilden; Rita Kuo; Kurt LaButti; Anna Lipzen; Miia R. Makela; Laura Sandor; Joseph W. Spatafora; Igor V. Grigoriev; David S. Hibbett

    2016-01-01

    We report here the first genome sequence of the white-rot fungus Obba rivulosa (Polyporales, Basidiomycota), a polypore known for its lignin-decomposing ability. The genome is based on the homokaryon 3A-2 originating in Finland. The genome is typical in size and carbohydrate-active enzyme (CAZy) content for wood-decomposing basidiomycetes.

  9. Environmental Influences on Well-Being: A Dyadic Latent Panel Analysis of Spousal Similarity

    ERIC Educational Resources Information Center

    Schimmack, Ulrich; Lucas, Richard E.

    2010-01-01

    This article uses dyadic latent panel analysis (DLPA) to examine environmental influences on well-being. DLPA requires longitudinal dyadic data. It decomposes the observed variance of both members of a dyad into trait, state, and error components. Furthermore, the state variance is decomposed into initial and new state variance. Total observed…

  10. Understanding E-Learning Adoption among Brazilian Universities: An Application of the Decomposed Theory of Planned Behavior

    ERIC Educational Resources Information Center

    Dos Santos, Luiz Miguel Renda; Okazaki, Shintaro

    2013-01-01

    This study sheds light on the organizational dimensions underlying e-learning adoption among Brazilian universities. We propose an organizational e-learning adoption model based on the decomposed theory of planned behavior (TPB). A series of hypotheses are posited with regard to the relationships among the proposed constructs. The model is…

  11. Dust to dust - How a human corpse decomposes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vass, Arpad Alexander

    2010-01-01

    After death, the human body decomposes through four stages. The final, skeleton stage may be reached as quickly as two weeks or as slowly as two years, depending on temperature, humidity and other environmental conditions where the body lies. Dead bodies emit a surprising array of chemicals, from benzene to freon, which can help forensic scientists find clandestine graves.

  12. Chemical vapor deposition of group IIIB metals

    DOEpatents

    Erbil, A.

    1989-11-21

    Coatings of Group IIIB metals and compounds thereof are formed by chemical vapor deposition, in which a heat-decomposable organometallic compound of the formula given in the patent, where M is a Group IIIB metal, such as lanthanum or yttrium, and R is a lower alkyl or alkenyl radical containing from 2 to about 6 carbon atoms, is contacted with a heated substrate which is above the decomposition temperature of the organometallic compound. The pure metal is obtained when the compound of the formula 1 is the sole heat-decomposable compound present and deposition is carried out under nonoxidizing conditions. Intermetallic compounds such as lanthanum telluride can be deposited from a lanthanum compound of formula 1 and a heat-decomposable tellurium compound under nonoxidizing conditions.

  13. WELDING PROCESS

    DOEpatents

    Zambrow, J.; Hausner, H.

    1957-09-24

    A method of joining metal parts for the preparation of relatively long, thin fuel element cores of uranium or alloys thereof for nuclear reactors is described. The process includes the steps of cleaning the surfaces to be joined, placing the surfaces together, and providing between and in contact with them a layer of a compound, in finely divided form, that is decomposable to metal by heat. The fuel element members are then heated at the contact zone and maintained under pressure during the heating to decompose the compound to metal and to sinter the members and the reduced metal together, producing a weld. The preferred class of decomposable compounds is the metal hydrides, such as uranium hydride, which release hydrogen, thus providing a reducing atmosphere in the vicinity of the welding operation.

  14. Catalytic cartridge SO3 decomposer

    DOEpatents

    Galloway, Terry R.

    1982-01-01

    A catalytic cartridge surrounding a heat pipe driven by a heat source is utilized as an SO3 decomposer for thermochemical hydrogen production. The cartridge has two embodiments, a cross-flow cartridge and an axial-flow cartridge. In the cross-flow cartridge, SO3 gas is flowed through a chamber and incident normally to a catalyst-coated tube extending through the chamber, the catalyst-coated tube surrounding the heat pipe. In the axial-flow cartridge, SO3 gas is flowed through the annular space between concentric inner and outer cylindrical walls, the inner cylindrical wall being coated with a catalyst and surrounding the heat pipe. The modular cartridge decomposer provides high thermal efficiency, high conversion efficiency, and increased safety.

  15. Chemical vapor deposition of group IIIB metals

    DOEpatents

    Erbil, Ahmet

    1989-01-01

    Coatings of Group IIIB metals and compounds thereof are formed by chemical vapor deposition, in which a heat-decomposable organometallic compound of the formula (I) (structure given in the patent), where M is a Group IIIB metal, such as lanthanum or yttrium, and R is a lower alkyl or alkenyl radical containing from 2 to about 6 carbon atoms, is contacted with a heated substrate which is above the decomposition temperature of the organometallic compound. The pure metal is obtained when the compound of the formula I is the sole heat-decomposable compound present and deposition is carried out under nonoxidizing conditions. Intermetallic compounds such as lanthanum telluride can be deposited from a lanthanum compound of formula I and a heat-decomposable tellurium compound under nonoxidizing conditions.

  16. Method for forming hermetic seals

    NASA Technical Reports Server (NTRS)

    Gallagher, Brian D.

    1987-01-01

    A firmly adherent film of bondable metal, such as silver, is applied to the surface of glass or another substrate by decomposing a layer of a solution of a thermally decomposable metallo-organic deposition (MOD) compound, such as silver neodecanoate in xylene. The MOD compound thermally decomposes into metal and gaseous by-products. Sealing is accomplished by depositing a layer of bonding metal, such as solder or a brazing alloy, on the metal film and then forming an assembly with another high-melting-point metal surface, such as a layer of Kovar. When the assembly is heated above the melting temperature of the solder, the solder flows, wets the adjacent surfaces, and forms a hermetic seal between the metal film and the metal surface as the assembly cools.

  17. Quantum Metropolis sampling.

    PubMed

    Temme, K; Osborne, T J; Vollbrecht, K G; Poulin, D; Verstraete, F

    2011-03-03

    The original motivation to build a quantum computer came from Feynman, who imagined a machine capable of simulating generic quantum mechanical systems--a task that is believed to be intractable for classical computers. Such a machine could have far-reaching applications in the simulation of many-body quantum physics in condensed-matter, chemical and high-energy systems. Part of Feynman's challenge was met by Lloyd, who showed how to approximately decompose the time evolution operator of interacting quantum particles into a short sequence of elementary gates, suitable for operation on a quantum computer. However, this left open the problem of how to simulate the equilibrium and static properties of quantum systems. This requires the preparation of ground and Gibbs states on a quantum computer. For classical systems, this problem is solved by the ubiquitous Metropolis algorithm, a method that has basically acquired a monopoly on the simulation of interacting particles. Here we demonstrate how to implement a quantum version of the Metropolis algorithm. This algorithm permits sampling directly from the eigenstates of the Hamiltonian, and thus evades the sign problem present in classical simulations. A small-scale implementation of this algorithm should be achievable with today's technology.
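
    The gate decomposition credited to Lloyd can be illustrated numerically by a first-order Lie-Trotter product; the random Hermitian matrices below stand in for pieces of a Hamiltonian:

      import numpy as np
      from scipy.linalg import expm

      rng = np.random.default_rng(0)

      def rand_herm(n):
          M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
          return (M + M.conj().T) / 2

      # approximate exp(-i(A+B)t) by alternating short evolutions under A and B
      A, B, t, n = rand_herm(4), rand_herm(4), 1.0, 100
      exact = expm(-1j * (A + B) * t)
      step = expm(-1j * A * t / n) @ expm(-1j * B * t / n)
      approx = np.linalg.matrix_power(step, n)
      print(np.linalg.norm(exact - approx))  # error shrinks roughly like 1/n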

  18. Image wavelet decomposition and applications

    NASA Technical Reports Server (NTRS)

    Treil, N.; Mallat, S.; Bajcsy, R.

    1989-01-01

    The general problem of computer vision has been investigated for more than 20 years and is still one of the most challenging fields in artificial intelligence. Indeed, a look at the human visual system gives an idea of the complexity of any solution to the problem of visual recognition. This general task can be decomposed into a whole hierarchy of problems, ranging from pixel processing to high-level segmentation and complex object recognition. Contrasting an image at different resolutions provides useful information such as edges. An example of low-level signal and image processing using the theory of wavelets is introduced, which provides the basis for multiresolution representation. Like the human brain, we use a multiorientation process which detects features independently in different orientation sectors. Thus, images of the same orientation but of different resolutions are contrasted to gather information about an image. An interesting image representation using energy zero crossings is developed. This representation is shown to be experimentally complete and leads to some higher-level applications such as edge and corner finding, which in turn provide two basic steps towards image segmentation. The possibilities of feedback between different levels of processing are also discussed.

  19. Litter type affects the activity of aerobic decomposers in a boreal peatland more than site nutrient and water table regimes

    NASA Astrophysics Data System (ADS)

    Straková, P.; Niemi, R. M.; Freeman, C.; Peltoniemi, K.; Toberman, H.; Heiskanen, I.; Fritze, H.; Laiho, R.

    2011-09-01

    Peatlands are carbon (C) storage ecosystems sustained by a high water table (WT). High WT creates anoxic conditions that suppress the activity of aerobic decomposers and provide conditions for peat accumulation. Peatland function can be dramatically affected by WT drawdown caused by climate and/or land-use change. Aerobic decomposers are directly affected by WT drawdown through environmental factors such as increased oxygenation and nutrient availability. Additionally, they are indirectly affected via changes in plant community composition and litter quality. We studied the relative importance of direct and indirect effects of WT drawdown on aerobic decomposer activity in plant litter at two stages of decomposition (incubated in the field for 1 or 2 years). We did this by profiling 11 extracellular enzymes involved in the mineralization of organic C, nitrogen (N), phosphorus (P) and sulphur. Our study sites represented a three-stage chronosequence from pristine to short-term (years) and long-term (decades) WT drawdown conditions under two nutrient regimes (bog and fen). The litter types included reflected the prevalent vegetation: Sphagnum mosses, graminoids, shrubs and trees. Litter type was the main factor shaping microbial activity patterns and explained about 30 % of the variation in enzyme activities and activity allocation. Overall, enzyme activities were higher in vascular plant litters compared to Sphagnum litters, and the allocation of enzyme activities towards C or nutrient acquisition was related to the initial litter quality (chemical composition). Direct effects of WT regime, site nutrient regime and litter decomposition stage (length of incubation period) summed to only about 40 % of the litter type effect. WT regime alone explained about 5 % of the variation in enzyme activities and activity allocation. Generally, enzyme activity increased following the long-term WT drawdown and the activity allocation turned from P and N acquisition towards C acquisition. This caused an increase in the rate of litter decomposition. The effects of the short-term WT drawdown were minor compared to those of the long-term WT drawdown: e.g., the increase in the activity of C-acquiring enzymes was up to 120 % (bog) or 320 % (fen) higher after the long-term WT drawdown compared to the short-term WT drawdown. In general, the patterns of microbial activity as well as their responses to WT drawdown depended on peatland type: e.g., the shift in activity allocation to C-acquisition was up to 100 % stronger at the fen compared to the bog. Our results imply that changes in plant community composition in response to persistent WT drawdown will strongly affect the C dynamics of peatlands. The predictions of decomposer activity under changing climate and/or land-use thus cannot be based on the direct effects of the changed environment only, but need to consider the indirect effects of environmental changes: the changes in plant community composition, their dependence on peatland type, and their time scale.

  20. Multicasting for all-optical multifiber networks

    NASA Astrophysics Data System (ADS)

    Köksal, Fatih; Ersoy, Cem

    2007-02-01

    All-optical wavelength-routed WDM WANs can support the high bandwidth and long session duration requirements of application scenarios such as interactive distance learning or simultaneous on-line diagnosis of patients in different hospitals. However, multifiber operation and the limited sparse light splitting and wavelength conversion capabilities of switches result in a difficult optimization problem. We attack this problem using a layered graph model. The problem is defined as a k-edge-disjoint degree-constrained Steiner tree problem for routing and fiber and wavelength assignment of k multicasts. A mixed integer linear programming formulation for the problem is given, and a solution using CPLEX is provided. However, the complexity of the problem grows quickly with the number of edges in the layered graph, which depends on the number of nodes, fibers, wavelengths, and multicast sessions. Hence, we propose two heuristics [the layered all-optical multicast algorithm (LAMA) and conservative fiber and wavelength assignment (C-FWA)] to compare with CPLEX, existing work, and unicasting. Extensive computational experiments show that LAMA's performance is very close to CPLEX, and it is significantly better than existing work and C-FWA for nearly all metrics, since LAMA jointly optimizes the routing and fiber-wavelength assignment phases, whereas the other candidates attack the problem by decomposing it into two phases. Experiments also show that important metrics (e.g., session and group blocking probability, transmitter wavelength, and fiber conversion resources) are adversely affected by the separation of the two phases. Finally, the fiber-wavelength assignment strategy of C-FWA (Ex-Fit) uses wavelength and fiber conversion resources more effectively than First Fit.

  1. Fine-Scale Structure Design for 3D Printing

    NASA Astrophysics Data System (ADS)

    Panetta, Francis Julian

    Modern additive fabrication technologies can manufacture shapes whose geometric complexities far exceed what existing computational design tools can analyze or optimize. At the same time, falling costs have placed these fabrication technologies within the average consumer's reach. Especially for inexpert designers, new software tools are needed to take full advantage of 3D printing technology. This thesis develops such tools and demonstrates the exciting possibilities enabled by fine-tuning objects at the small scales achievable by 3D printing. The thesis applies two high-level ideas to invent these tools: two-scale design and worst-case analysis. The two-scale design approach addresses the problem that accurately simulating--let alone optimizing--the full-resolution geometry sent to the printer requires orders of magnitude more computational power than currently available. However, we can decompose the design problem into a small-scale problem (designing tileable structures achieving a particular deformation behavior) and a macro-scale problem (deciding where to place these structures in the larger object). This separation is particularly effective, since structures for every useful behavior can be designed once, stored in a database, then reused for many different macroscale problems. Worst-case analysis refers to determining how likely an object is to fracture by studying the worst possible scenario: the forces most efficiently breaking it. This analysis is needed when the designer has insufficient knowledge or experience to predict what forces an object will undergo, or when the design is intended for use in many different scenarios unknown a priori. The thesis begins by summarizing the physics and mathematics necessary to rigorously approach these design and analysis problems. Specifically, the second chapter introduces linear elasticity and periodic homogenization. The third chapter presents a pipeline to design microstructures achieving a wide range of effective isotropic elastic material properties on a single-material 3D printer. It also proposes a macroscale optimization algorithm placing these microstructures to achieve deformation goals under prescribed loads. The thesis then turns to worst-case analysis, first considering the macroscale problem: given a user's design, the fourth chapter aims to determine the distribution of pressures over the surface creating the highest stress at any point in the shape. Solving this problem exactly is difficult, so we introduce two heuristics: one to focus our efforts on only regions likely to concentrate stresses and another converting the pressure optimization into an efficient linear program. Finally, the fifth chapter introduces worst-case analysis at the microscopic scale, leveraging the insight that the structure of periodic homogenization enables us to solve the problem exactly and efficiently. Then we use this worst-case analysis to guide a shape optimization, designing structures with prescribed deformation behavior that experience minimal stresses in generic use.

  2. Functional renormalization group and Kohn-Sham scheme in density functional theory

    NASA Astrophysics Data System (ADS)

    Liang, Haozhao; Niu, Yifei; Hatsuda, Tetsuo

    2018-04-01

    Deriving an accurate energy density functional is one of the central problems in condensed matter physics, nuclear physics, and quantum chemistry. We propose a novel method to deduce the energy density functional by combining the idea of the functional renormalization group with the Kohn-Sham scheme in density functional theory. The key idea is to solve the renormalization group flow for the effective action decomposed into a mean-field part and a correlation part. We also propose a simple practical method to quantify the uncertainty associated with the truncation of the correlation part. Taking the φ4 theory in zero dimension as a benchmark, we demonstrate that our method converges extremely fast to the exact result even in the strongly coupled regime.
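
    Because the zero-dimensional φ4 benchmark reduces the path integral to an ordinary integral, the exact answer is cheap to check numerically. The sketch below (my illustration, not the authors' code) compares the exact free energy with the first two orders of naive perturbation theory, which is what makes the strong-coupling regime a meaningful test; the normalization and the λφ⁴/4! coupling convention are assumptions.

      # Zero-dimensional phi^4 benchmark: Z(lam) is an ordinary integral, so
      # -ln Z can be computed "exactly" by quadrature and compared with
      # low-order perturbation theory, which fails at strong coupling.
      import numpy as np
      from scipy.integrate import quad

      def free_energy(lam):
          integrand = lambda phi: np.exp(-0.5*phi**2 - lam*phi**4/24.0)
          z, _ = quad(integrand, -np.inf, np.inf)
          return -np.log(z / np.sqrt(2*np.pi))

      for lam in (0.1, 1.0, 10.0):
          pert = lam/8.0 - lam**2/12.0   # first two perturbative orders
          print(f"lambda={lam:5.1f}  exact={free_energy(lam):+.4f}  pert={pert:+.4f}")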

  3. Shatter cones - An outstanding problem in shock mechanics. [geological impact fracture surface in cratering]

    NASA Technical Reports Server (NTRS)

    Milton, D. J.

    1977-01-01

    Shatter cone characteristics are surveyed. Shatter cones, a form of rock fracture in impact structures, apparently form as a shock front interacts with inhomogeneities or discontinuities in the rock. Topics discussed include morphology, conditions of formation, shock pressure of formation, and theories of formation. It is thought that shatter cones are produced within a limited range of shock pressures extending from about 20 to perhaps 250 kbar. Apical angles range from less than 70 deg to over 120 deg. Tentative hypotheses concerning the physical process of shock coning are considered. The range in shock pressures which produce shatter cones might correspond to the range in which shock waves decompose into elastic and deformational fronts.

  4. Vibration energy harvesting with polyphase AC transducers

    NASA Astrophysics Data System (ADS)

    McCullagh, James J.; Scruggs, Jeffrey T.; Asai, Takehiko

    2016-04-01

    Three-phase transduction affords certain advantages in the efficient electromechanical conversion of energy, especially at higher power scales. This paper considers the use of a three-phase electric machine for harvesting energy from vibrations. We apply vector control techniques, which are common in industrial electronics, to optimize the feedback loops in a stochastically excited energy harvesting system. To do this, we decompose the problem into two separate feedback loops for the direct and quadrature current components, and illustrate how each might be separately optimized to maximize power output. In a simple analytical example, we illustrate how these techniques can be used to gain insight into the tradeoffs in the design of the electronic hardware and the choice of bus voltage.
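
    The direct/quadrature decomposition mentioned above is conventionally done with the Park transform; the sketch below is a generic, amplitude-invariant version (illustrative only; the machine model, angle source, and sign conventions vary by text).

      # Park transform: balanced three-phase quantities map onto two orthogonal
      # d-q components that can be regulated by separate feedback loops.
      import numpy as np

      def park(ia, ib, ic, theta):
          """Amplitude-invariant Park transform of three-phase currents."""
          d = (2/3) * (ia*np.cos(theta) + ib*np.cos(theta - 2*np.pi/3)
                       + ic*np.cos(theta + 2*np.pi/3))
          q = -(2/3) * (ia*np.sin(theta) + ib*np.sin(theta - 2*np.pi/3)
                        + ic*np.sin(theta + 2*np.pi/3))
          return d, q

      # Balanced sinusoidal currents map to constant d-q values:
      t = np.linspace(0, 0.02, 5)
      theta = 2*np.pi*50*t
      ia = np.cos(theta); ib = np.cos(theta - 2*np.pi/3); ic = np.cos(theta + 2*np.pi/3)
      print(park(ia, ib, ic, theta))  # d ~ 1, q ~ 0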

  5. Segmental Refinement: A Multigrid Technique for Data Locality

    DOE PAGES

    Adams, Mark F.; Brown, Jed; Knepley, Matt; ...

    2016-08-04

    In this paper, we investigate a domain decomposed multigrid technique, termed segmental refinement, for solving general nonlinear elliptic boundary value problems. We extend the method first proposed in 1994 by analytically and experimentally investigating its complexity. We confirm that communication of traditional parallel multigrid is eliminated on fine grids, with modest amounts of extra work and storage, while maintaining the asymptotic exactness of full multigrid. We observe an accuracy dependence on the segmental refinement subdomain size, which was not considered in the original analysis. Finally, we present a communication complexity analysis that quantifies the communication costs ameliorated by segmental refinement and report performance results with up to 64K cores on a Cray XC30.

  6. Primary decomposition of zero-dimensional ideals over finite fields

    NASA Astrophysics Data System (ADS)

    Gao, Shuhong; Wan, Daqing; Wang, Mingsheng

    2009-03-01

    A new algorithm is presented for computing the primary decomposition of zero-dimensional ideals over finite fields. Like Berlekamp's algorithm for univariate polynomials, the new method is based on the invariant subspace of the Frobenius map acting on the quotient algebra. The dimension of the invariant subspace equals the number of primary components, and a basis of the invariant subspace yields a complete decomposition. Unlike previous approaches for decomposing multivariate polynomial systems, the new method needs neither primality testing nor any generic projection; instead, it reduces the general decomposition problem directly to root finding of univariate polynomials over the ground field. It is also shown how Groebner basis structure can be used to obtain a partial primary decomposition without any root finding.
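
    The reduction target named in this abstract, root finding of univariate polynomials over the ground field, is easy to picture for a small prime field. The sketch below is a brute-force stand-in (real implementations use Berlekamp or Cantor-Zassenhaus style factorization); the polynomial and prime are arbitrary examples.

      # Roots of a univariate polynomial over a small prime field GF(p),
      # found by direct evaluation with Horner's rule.
      p = 13
      coeffs = [1, 0, 2, 5]  # f(x) = x^3 + 2x + 5

      def f(x, coeffs, p):
          acc = 0
          for c in coeffs:
              acc = (acc * x + c) % p  # Horner evaluation mod p
          return acc

      roots = [x for x in range(p) if f(x, coeffs, p) == 0]
      print(roots)  # [8] for this example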

  7. Cluster synchronization induced by one-node clusters in networks with asymmetric negative couplings

    NASA Astrophysics Data System (ADS)

    Zhang, Jianbao; Ma, Zhongjun; Zhang, Gang

    2013-12-01

    This paper deals with the problem of cluster synchronization in networks with asymmetric negative couplings. By decomposing the coupling matrix into three matrices and employing the Lyapunov function method, sufficient conditions are derived for cluster synchronization. The conditions show that the couplings from one-node clusters to multi-node clusters have beneficial effects on cluster synchronization. Based on the effects of the one-node clusters, an effective and universal control scheme is put forward for the first time. The obtained results may help us better understand the relation between cluster synchronization and the cluster structures of networks. The validity of the control scheme is confirmed through two numerical simulations, in a network with no cluster structure and in a scale-free network.

  8. On distributed wavefront reconstruction for large-scale adaptive optics systems.

    PubMed

    de Visser, Cornelis C; Brunner, Elisabeth; Verhaegen, Michel

    2016-05-01

    The distributed-spline-based aberration reconstruction (D-SABRE) method is proposed for distributed wavefront reconstruction with applications to large-scale adaptive optics systems. D-SABRE decomposes the wavefront sensor domain into any number of partitions and solves a local wavefront reconstruction problem on each partition using multivariate splines. D-SABRE accuracy is within 1% of a global approach, with a speedup that scales quadratically with the number of partitions. D-SABRE is compared to the distributed cumulative reconstruction (CuRe-D) method in open-loop and closed-loop simulations using the YAO adaptive optics simulation tool. D-SABRE accuracy exceeds that of CuRe-D for low levels of decomposition, and D-SABRE proves more robust to variations in the loop gain.

  9. ℓp-Norm Multikernel Learning Approach for Stock Market Price Forecasting

    PubMed Central

    Shao, Xigao; Wu, Kun; Liao, Bifeng

    2012-01-01

    Linear multiple kernel learning models have been used for predicting financial time series. However, ℓ1-norm multiple support vector regression is rarely observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtures that generalize well, we adopt ℓp-norm multiple kernel support vector regression (1 ≤ p < ∞) as a stock price prediction model. The optimization problem is decomposed into smaller subproblems, and an interleaved optimization strategy is employed to solve the regression model. The model is evaluated on forecasting the daily closing prices of the Shanghai Stock Index in China. Experimental results show that our proposed model performs better than the ℓ1-norm multiple support vector regression model. PMID:23365561

  10. Simulation of blood flow through an artificial heart

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Chang, I-Dee; Rogers, Stuart E.; Kwak, Dochan

    1991-01-01

    A numerical simulation of the incompressible viscous flow through a prosthetic tilting disk heart valve is presented in order to demonstrate the current capability to model unsteady flows with moving boundaries. Both steady state and unsteady flow calculations are done by solving the incompressible Navier-Stokes equations in 3-D generalized curvilinear coordinates. In order to handle the moving boundary problems, the chimera grid embedding scheme which decomposes a complex computational domain into several simple subdomains is used. An algebraic turbulence model for internal flows is incorporated to reach the physiological values of Reynolds number. Good agreement is obtained between the numerical results and experimental measurements. It is found that the tilting disk valve causes large regions of separated flow, and regions of high shear.

  11. Automated Decomposition of Model-based Learning Problems

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Millar, Bill

    1996-01-01

    A new generation of sensor-rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large-scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.

  12. Newton–Hooke-type symmetry of anisotropic oscillators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, P.M., E-mail: zhpm@impcas.ac.cn; Horvathy, P.A., E-mail: horvathy@lmpt.univ-tours.fr; Laboratoire de Mathématiques et de Physique Théorique, Université de Tours

    2013-06-15

    Rotation-less Newton–Hooke-type symmetry, found recently in the Hill problem, and instrumental for explaining the center-of-mass decomposition, is generalized to an arbitrary anisotropic oscillator in the plane. Conversely, the latter system is shown, by the orbit method, to be the most general one with such a symmetry. Full Newton–Hooke symmetry is recovered in the isotropic case. Star escape from a galaxy is studied as an application. -- Highlights: ► Rotation-less Newton–Hooke (NH) symmetry is generalized to an arbitrary anisotropic oscillator. ► The orbit method is used to find the most general case for rotation-less NH symmetry. ► The NH symmetry is decomposed into Heisenberg algebras based on chiral decomposition.

  13. Slow-cycle effects of foliar herbivory alter the nitrogen acquisition and population size of Collembola

    Treesearch

    Mark A. Bradford; Tara Gancos; Christopher J. Frost

    2008-01-01

    In terrestrial systems there is a close relationship between litter quality and the activity and abundance of decomposers. Therefore, the potential exists for aboveground, herbivore-induced changes in foliar chemistry to affect soil decomposer fauna. These herbivore-induced changes in chemistry may persist across growing seasons. While the impacts of such slow-cycle...

  14. A test of the hierarchical model of litter decomposition.

    PubMed

    Bradford, Mark A; Veen, G F Ciska; Bonis, Anne; Bradford, Ella M; Classen, Aimee T; Cornelissen, J Hans C; Crowther, Thomas W; De Long, Jonathan R; Freschet, Gregoire T; Kardol, Paul; Manrubia-Freixa, Marta; Maynard, Daniel S; Newman, Gregory S; Logtestijn, Richard S P; Viketoft, Maria; Wardle, David A; Wieder, William R; Wood, Stephen A; van der Putten, Wim H

    2017-12-01

    Our basic understanding of plant litter decomposition informs the assumptions underlying widely applied soil biogeochemical models, including those embedded in Earth system models. Confidence in projected carbon cycle-climate feedbacks therefore depends on accurate knowledge about the controls regulating the rate at which plant biomass is decomposed into products such as CO2. Here we test underlying assumptions of the dominant conceptual model of litter decomposition. The model posits that a primary control on the rate of decomposition at regional to global scales is climate (temperature and moisture), with the controlling effects of decomposers negligible at such broad spatial scales. Using a regional-scale litter decomposition experiment at six sites spanning from northern Sweden to southern France, capturing both within- and among-site variation in putative controls, we find that contrary to predictions from the hierarchical model, decomposer (microbial) biomass strongly regulates decomposition at regional scales. Furthermore, the size of the microbial biomass dictates the absolute change in decomposition rates with changing climate variables. Our findings suggest the need for revision of the hierarchical model, with decomposers acting as both local- and broad-scale controls on litter decomposition rates, necessitating their explicit consideration in global biogeochemical models.

  15. Are leaves that fall from imidacloprid-treated maple trees to control Asian longhorned beetles toxic to non-target decomposer organisms?

    PubMed

    Kreutzweiser, David P; Good, Kevin P; Chartrand, Derek T; Scarr, Taylor A; Thompson, Dean G

    2008-01-01

    The systemic insecticide imidacloprid may be applied to deciduous trees for control of the Asian longhorned beetle, an invasive wood-boring insect. Senescent leaves falling from systemically treated trees contain imidacloprid concentrations that could pose a risk to natural decomposer organisms. We examined the effects of foliar imidacloprid concentrations on decomposer organisms by adding leaves from imidacloprid-treated sugar maple trees to aquatic and terrestrial microcosms under controlled laboratory conditions. Imidacloprid in maple leaves at realistic field concentrations (3-11 mg kg(-1)) did not affect survival of aquatic leaf-shredding insects or litter-dwelling earthworms. However, adverse sublethal effects at these concentrations were detected. Feeding rates by aquatic insects and earthworms were reduced, leaf decomposition (mass loss) was decreased, measurable weight losses occurred among earthworms, and aquatic and terrestrial microbial decomposition activity was significantly inhibited. Results of this study suggest that sugar maple trees systemically treated with imidacloprid to control Asian longhorned beetles may yield senescent leaves with residue levels sufficient to reduce natural decomposition processes in aquatic and terrestrial environments through adverse effects on non-target decomposer organisms.

  16. Screening on oil-decomposing microorganisms and application in organic waste treatment machine.

    PubMed

    Lu, Yi-Tong; Chen, Xiao-Bin; Zhou, Pei; Li, Zhen-Hong

    2005-01-01

    Y3, an oil-decomposing mixture of two bacterial strains (Bacillus sp. and Pseudomonas sp.), was isolated after 50 d of domestication under conditions in which oil was the limiting carbon source. The decomposition rate achieved by Y3 was higher than that of either individual strain, indicating a synergistic effect of the two bacteria. Under the conditions T = 25-40 degrees C, pH = 6-8, HRT (hydraulic retention time) = 36 h and an oil concentration of 0.1%, Y3 yielded the highest decomposition rate of 95.7%. Y3 was also applied in an organic waste treatment machine, with a certain proportion of activated bacteria added to the stuffing. A series of tests of the stuffing, including humidity, pH, temperature, C/N ratio and oil percentage, were carried out to check the efficacy of oil decomposition. Results showed that the oil content of the stuffing with inoculums was only half that of the control. Furthermore, the bacteria were also beneficial for maintaining stable operation of the machine. Therefore, the bacterial mixture as well as the machines in this study could be very useful for waste treatment.

  17. Natural image statistics and low-complexity feature selection.

    PubMed

    Vasconcelos, Manuela; Vasconcelos, Nuno

    2009-02-01

    Low-complexity feature selection is analyzed in the context of visual recognition. It is hypothesized that high-order dependences of bandpass features contain little information for discrimination of natural images. This hypothesis is characterized formally by the introduction of the concepts of conjunctive interference and decomposability order of a feature set. Necessary and sufficient conditions for the feasibility of low-complexity feature selection are then derived in terms of these concepts. It is shown that the intrinsic complexity of feature selection is determined by the decomposability order of the feature set and not its dimension. Feature selection algorithms are then derived for all levels of complexity and are shown to be approximated by existing information-theoretic methods, which they consistently outperform. The new algorithms are also used to objectively test the hypothesis of low decomposability order through comparison of classification performance. It is shown that, for image classification, the gain of modeling feature dependencies has strongly diminishing returns: best results are obtained under the assumption of decomposability order 1. This suggests a generic law for bandpass features extracted from natural images: that the effect, on the dependence of any two features, of observing any other feature is constant across image classes.

  18. Adsorption mechanism of SF6 decomposed species on pyridine-like PtN3 embedded CNT: A DFT study

    NASA Astrophysics Data System (ADS)

    Cui, Hao; Zhang, Xiaoxing; Chen, Dachang; Tang, Ju

    2018-07-01

    Metal-Nx embedded CNTs have attracted considerable attention in the field of gas interaction due to their strong catalytic behavior, which provides promising prospects for gas adsorption and sensing. Detecting SF6 decomposed species in certain devices is essential to guarantee their safe operation. In this work, we performed DFT calculations simulating the adsorption of three SF6 decomposed gases (SO2, SOF2 and SO2F2) onto the PtN3 embedded CNT surface, in order to shed light on its adsorption ability and sensing mechanism. Results suggest that the CNT embedded with a PtN3 center interacts strongly with these gas molecules, leading to high hybridization between the Pt dopant and the active atoms of the gas molecules. These interactions can be classified as chemisorption on account of the remarkable Ead and QT, resulting in dramatic deformations of the electronic structure of PtN3-CNT near the Fermi level. Furthermore, based on frontier molecular orbital theory, the electronic redistribution increases the conductivity of the proposed material in all three systems. Our calculations suggest a novel sensing material potentially employable in the detection of SF6 decomposed components.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khan, S. F.; Izumi, N.; Glenn, S.

    At the National Ignition Facility, the symmetry of the hot spot of imploding capsules is diagnosed by imaging the emitted x-rays using gated cameras and image plates. The symmetry of an implosion is an important factor in the yield generated by the resulting fusion process. The x-ray images are analyzed by decomposing the image intensity contours into Fourier and Legendre modes. This paper focuses on the additional protocols for the time-integrated shape analysis from image plates. Here, for implosions with temperatures above ~4 keV, the hard x-ray background can be utilized to infer the temperature of the hot spot.
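
    To make the contour-mode analysis concrete, the sketch below expands a synthetic hot-spot contour radius in Fourier modes; the contour shape, sample count, and mode normalization are illustrative assumptions, not the NIF analysis code.

      # Fourier-mode analysis of an intensity contour: expand the contour
      # radius r(theta) and report the low-order asymmetry amplitudes.
      import numpy as np

      theta = np.linspace(0, 2*np.pi, 360, endpoint=False)
      # Hypothetical contour with an m=2 (P2-like) distortion:
      r = 40.0 + 5.0*np.cos(2*theta)

      c = np.fft.rfft(r) / len(theta)
      m0 = c[0].real                 # mean contour radius
      for m in (1, 2, 3, 4):
          amp = 2*np.abs(c[m])       # amplitude of Fourier mode m
          print(f"mode {m}: {amp:.2f} ({100*amp/m0:.1f}% of mean radius)")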

  20. Entanglement Rate for Gaussian Continuous Variable Beams

    DTIC Science & Technology

    2016-08-24

    beams can then be decomposed into a sum, because of the additivity of the logarithmic negativity: E_N[T] = Σ_{ω>0} E_N[ω]. We note that E_N[ω] itself may be interpreted as the entanglement rate per frequency interval. It is thus possible to define a spectral density of entanglement, since it is closely related to E_N[ω].

  1. Adaptive phase extraction: incorporating the Gabor transform in the matching pursuit algorithm.

    PubMed

    Wacker, Matthias; Witte, Herbert

    2011-10-01

    Short-time Fourier transform (STFT), Gabor transform (GT), wavelet transform (WT), and the Wigner-Ville distribution (WVD) are just some examples of time-frequency analysis methods which are frequently applied in biomedical signal analysis. However, all of these methods have their individual drawbacks. The STFT, GT, and WT have a time-frequency resolution that is determined by algorithm parameters, and the WVD is contaminated by cross terms. In 1993, Mallat and Zhang introduced the matching pursuit (MP) algorithm, which decomposes a signal into a sum of atoms and uses a cross-term-free pseudo-WVD to generate a data-adaptive power distribution in the time-frequency space. Thus, it solved some of the problems of the GT and WT, but it lacks phase information that is crucial, e.g., for synchronization analysis. We introduce a new time-frequency analysis method that combines the MP with a pseudo-GT. The signal is first decomposed into a set of Gabor atoms. Afterward, each atom is analyzed with a Gabor analysis whose time-domain Gaussian window matches that of the specific atom envelope. A superposition of the single time-frequency planes gives the final result. This is the first time that a complete analysis of the complex time-frequency plane can be performed in a fully data-adaptive and frequency-selective manner. We demonstrate the capabilities of our approach on a simulation and on real-life magnetoencephalogram data.
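
    A minimal matching-pursuit loop over a Gabor dictionary, illustrating the decomposition step described above (my sketch, not the authors' implementation; the dictionary parameters are arbitrary):

      # Greedy matching pursuit: repeatedly pick the dictionary atom most
      # correlated with the residual and subtract its contribution.
      import numpy as np

      def gabor_atom(n, center, freq, width):
          t = np.arange(n)
          g = np.exp(-0.5*((t - center)/width)**2) * np.cos(2*np.pi*freq*t)
          return g / np.linalg.norm(g)

      def matching_pursuit(signal, dictionary, n_iter=5):
          residual = signal.copy()
          decomposition = []
          for _ in range(n_iter):
              corr = dictionary @ residual
              k = np.argmax(np.abs(corr))
              decomposition.append((k, corr[k]))
              residual = residual - corr[k] * dictionary[k]
          return decomposition, residual

      n = 256
      atoms = np.array([gabor_atom(n, c, f, 16.0)
                        for c in (64, 128, 192) for f in (0.05, 0.1, 0.2)])
      signal = 2.0*atoms[4] + 0.5*atoms[7]       # known mixture of two atoms
      decomp, res = matching_pursuit(signal, atoms)
      print(decomp[:2])  # should recover atoms 4 and 7 first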

  2. Multiplicative Multitask Feature Learning

    PubMed Central

    Wang, Xin; Bi, Jinbo; Yu, Shipeng; Sun, Jiangwen; Song, Minghu

    2016-01-01

    We investigate a general framework of multiplicative multitask feature learning which decomposes individual task’s model parameters into a multiplication of two components. One of the components is used across all tasks and the other component is task-specific. Several previous methods can be proved to be special cases of our framework. We study the theoretical properties of this framework when different regularization conditions are applied to the two decomposed components. We prove that this framework is mathematically equivalent to the widely used multitask feature learning methods that are based on a joint regularization of all model parameters, but with a more general form of regularizers. Further, an analytical formula is derived for the across-task component as related to the task-specific component for all these regularizers, leading to a better understanding of the shrinkage effects of different regularizers. Study of this framework motivates new multitask learning algorithms. We propose two new learning formulations by varying the parameters in the proposed framework. An efficient blockwise coordinate descent algorithm is developed suitable for solving the entire family of formulations with rigorous convergence analysis. Simulation studies have identified the statistical properties of data that would be in favor of the new formulations. Extensive empirical studies on various classification and regression benchmark data sets have revealed the relative advantages of the two new formulations by comparing with the state of the art, which provides instructive insights into the feature learning problem with multiple tasks. PMID:28428735
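
    As a toy illustration of the multiplicative decomposition described above (w_t = c ∘ v_t, with c shared and v_t task-specific), the sketch below alternates blockwise ridge updates on synthetic data; the regularizers and update order in the paper differ.

      # Alternating blockwise updates for w_t = c * v_t (elementwise).
      import numpy as np

      rng = np.random.default_rng(0)
      T, d, n, lam = 3, 8, 50, 0.1
      X = [rng.normal(size=(n, d)) for _ in range(T)]
      w_true = []
      for _ in range(T):                    # tasks share support on features 0-2
          w = np.zeros(d); w[:3] = rng.normal(size=3); w_true.append(w)
      y = [X[t] @ w_true[t] + 0.1*rng.normal(size=n) for t in range(T)]

      c = np.ones(d)                        # shared component
      V = [np.zeros(d) for _ in range(T)]   # task-specific components
      for _ in range(20):
          for t in range(T):                # update v_t with c fixed
              Xc = X[t] * c                 # columns scaled by c
              V[t] = np.linalg.solve(Xc.T @ Xc + lam*np.eye(d), Xc.T @ y[t])
          A, b = lam*np.eye(d), np.zeros(d) # update c with all v_t fixed
          for t in range(T):
              Xv = X[t] * V[t]
              A += Xv.T @ Xv; b += Xv.T @ y[t]
          c = np.linalg.solve(A, b)
      print(np.round(np.abs(c), 2))         # large entries align with shared support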

  3. Numerical benchmarking of a Coarse-Mesh Transport (COMET) Method for medical physics applications

    NASA Astrophysics Data System (ADS)

    Blackburn, Megan Satterfield

    2009-12-01

    Radiation therapy has become a very important method for treating cancer patients. It is thus extremely important to accurately determine the location of energy deposition during these treatments, maximizing dose to the tumor region and minimizing it to healthy tissue. A Coarse-Mesh Transport Method (COMET) has been developed at the Georgia Institute of Technology in the Computational Reactor and Medical Physics Group and used very successfully with neutron transport to analyze whole-core criticality. COMET works by decomposing a large, heterogeneous system into a set of smaller fixed source problems. For each unique local problem that exists, a solution is obtained that we call a response function. These response functions are pre-computed and stored in a library for future use. The overall solution to the global problem can then be found by a linear superposition of these local problems. This method has now been extended to the transport of photons and electrons for use in medical physics problems to determine the energy deposition from radiation therapy treatments. The main goal of this work was to develop benchmarks for testing in order to evaluate the COMET code and determine its strengths and weaknesses for these medical physics applications. For the response function calculations, Legendre polynomial expansions are necessary in space, energy, polar angle, and azimuthal angle. An initial sensitivity study was done to determine the best orders for future testing. After the expansion orders were found, three simple benchmarks were tested: a water phantom, a simplified lung phantom, and a non-clinical slab phantom. Each of these benchmarks was decomposed into 1 cm x 1 cm and 0.5 cm x 0.5 cm coarse meshes. Three more clinically relevant problems were developed from patient CT scans. These benchmarks modeled a lung patient, a prostate patient, and a beam re-entry situation. As before, the problems were divided into 1 cm x 1 cm, 0.5 cm x 0.5 cm, and 0.25 cm x 0.25 cm coarse-mesh cases. Multiple beam energies were also tested for each case. The COMET solutions for each case were compared to a reference solution obtained by pure Monte Carlo simulation with EGSnrc. When comparing the COMET results to the reference cases, a pattern of differences appeared in each phantom case. Better results were obtained for lower-energy incident photon beams as well as for larger mesh sizes. Changes may need to be made to the expansion orders used for energy and angle to better model high-energy secondary electrons. Heterogeneity did not pose a problem for the COMET methodology: heterogeneous results were obtained in an amount of time comparable to that for the homogeneous water phantom. The COMET results were typically found in minutes to hours of computational time, whereas the reference cases typically required hundreds or thousands of hours. A second sensitivity study was also performed on a more stringent problem and with smaller coarse meshes. Previously, the same expansion order had been used for every incident photon beam energy so that better comparisons could be made; from this second study, it was found to be optimal to vary the expansion orders with the incident beam energy. Recommendations for future work with this method include testing higher expansion orders or modifying the code to better handle secondary electrons. The method also needs to handle more clinically relevant beam descriptions with associated energy and angular distributions.

  4. Microbial community assembly and metabolic function during mammalian corpse decomposition

    USGS Publications Warehouse

    Metcalf, Jessica L; Xu, Zhenjiang Zech; Weiss, Sophie; Lax, Simon; Van Treuren, Will; Hyde, Embriette R.; Song, Se Jin; Amir, Amnon; Larsen, Peter; Sangwan, Naseer; Haarmann, Daniel; Humphrey, Greg C; Ackermann, Gail; Thompson, Luke R; Lauber, Christian; Bibat, Alexander; Nicholas, Catherine; Gebert, Matthew J; Petrosino, Joseph F; Reed, Sasha C.; Gilbert, Jack A; Lynne, Aaron M; Bucheli, Sibyl R; Carter, David O; Knight, Rob

    2016-01-01

    Vertebrate corpse decomposition provides an important stage in nutrient cycling in most terrestrial habitats, yet microbially mediated processes are poorly understood. Here we combine deep microbial community characterization, community-level metabolic reconstruction, and soil biogeochemical assessment to understand the principles governing microbial community assembly during decomposition of mouse and human corpses on different soil substrates. We find a suite of bacterial and fungal groups that contribute to nitrogen cycling and a reproducible network of decomposers that emerge on predictable time scales. Our results show that this decomposer community is derived primarily from bulk soil, but key decomposers are ubiquitous in low abundance. Soil type was not a dominant factor driving community development, and the process of decomposition is sufficiently reproducible to offer new opportunities for forensic investigations.

  5. Microbial community assembly and metabolic function during mammalian corpse decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metcalf, J. L.; Xu, Z. Z.; Weiss, S.

    2015-12-10

    Vertebrate corpse decomposition provides an important stage in nutrient cycling in most terrestrial habitats, yet microbially mediated processes are poorly understood. Here we combine deep microbial community characterization, community-level metabolic reconstruction, and soil biogeochemical assessment to understand the principles governing microbial community assembly during decomposition of mouse and human corpses on different soil substrates. We find a suite of bacterial and fungal groups that contribute to nitrogen cycling and a reproducible network of decomposers that emerge on predictable time scales. Our results show that this decomposer community is derived primarily from bulk soil, but key decomposers are ubiquitous in low abundance. Soil type was not a dominant factor driving community development, and the process of decomposition is sufficiently reproducible to offer new opportunities for forensic investigations.

  6. Catalytic cartridge SO3 decomposer

    DOEpatents

    Galloway, T.R.

    1980-11-18

    A catalytic cartridge surrounding a heat pipe driven by a heat source is utilized as a SO3 decomposer for thermochemical hydrogen production. The cartridge has two embodiments, a cross-flow cartridge and an axial-flow cartridge. In the cross-flow cartridge, SO3 gas is flowed through a chamber and incident normally to a catalyst-coated tube extending through the chamber, the catalyst-coated tube surrounding the heat pipe. In the axial-flow cartridge, SO3 gas is flowed through the annular space between concentric inner and outer cylindrical walls, the inner cylindrical wall being coated with a catalyst and surrounding the heat pipe. The modular cartridge decomposer provides high thermal efficiency, high conversion efficiency, and increased safety. A fusion reactor may be used as the heat source.

  7. Alcoa Pressure Calcination Process for Alumina

    NASA Astrophysics Data System (ADS)

    Sucech, S. W.; Misra, C.

    A new alumina calcination process developed at Alcoa Laboratories is described. Alumina is calcined in two stages. In the first stage, alumina hydrate is heated indirectly to 500°C in a decomposer vessel. Released water is recovered as process steam at 110 psig pressure. Partial transformation of gibbsite to boehmite occurs under the hydrothermal conditions of the decomposer. The product from the decomposer, containing about 5% LOI, is then calcined by direct heating to 850°C to obtain smelting-grade alumina. The final product is highly attrition resistant, has a surface area of 50-80 m2/g and an LOI of less than 1%. Accounting for the recovered steam, the effective fuel consumption of the new calcination process is only 1.6 GJ/t Al2O3.

  8. Economic Inequality in Presenting Vision in Shahroud, Iran: Two Decomposition Methods.

    PubMed

    Mansouri, Asieh; Emamian, Mohammad Hassan; Zeraati, Hojjat; Hashemi, Hasan; Fotouhi, Akbar

    2017-04-22

    Visual acuity, like many other health-related outcomes, is not distributed equally across socio-economic groups. We conducted this study to estimate and decompose economic inequality in presenting visual acuity using two methods, and to compare their results, in a population aged 40-64 years in Shahroud, Iran. The data of 5188 participants in the first phase of the Shahroud Cohort Eye Study, performed in 2009, were used for this study. Our outcome variable was presenting visual acuity (PVA), measured in LogMAR (logarithm of the minimum angle of resolution) units. The living standard variable used for the estimation of inequality was economic status, constructed by principal component analysis on home assets. The inequality indices were the concentration index and the PVA gap between low and high economic groups. We decomposed these indices by the concentration index and Blinder-Oaxaca decomposition approaches, respectively, and compared the results. The concentration index of PVA was -0.245 (95% CI: -0.278, -0.212). The PVA gap between groups with a high and low economic status was 0.0705, in favor of the high economic group. Education, economic status, and age were the most important contributors to inequality in both the concentration index and Blinder-Oaxaca decompositions. The percent contributions of these three factors in the concentration index versus the Blinder-Oaxaca decomposition were 41.1% vs. 43.4%, 25.4% vs. 19.1% and 15.2% vs. 16.2%, respectively. Other factors, including gender, marital status, employment status and diabetes, had minor contributions. This study showed that poorer visual acuity was concentrated among people with a lower economic status. The main contributors to this inequality were similar for the concentration index and the Blinder-Oaxaca decomposition. Setting appropriate interventions to promote literacy and income in people with low economic status, formulating policies to address economic problems in the elderly, and paying more attention to their vision problems can thus help to alleviate economic inequality in visual acuity.
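
    For readers unfamiliar with the concentration index used above, a convenient covariance formula is CI = 2·cov(h, r)/μ, where h is the health variable, r the fractional economic rank, and μ the mean of h. The sketch below computes it on synthetic data; the data-generating assumptions are mine, not the study's.

      # Concentration index via the covariance formula, on synthetic data.
      import numpy as np

      rng = np.random.default_rng(1)
      wealth = rng.lognormal(size=1000)
      # Hypothetical PVA (LogMAR): worse vision (higher values) among the poor
      pva = 0.3 - 0.1*np.log(wealth) + rng.normal(scale=0.2, size=1000)

      order = np.argsort(wealth)
      rank = (np.arange(1, len(wealth)+1) - 0.5) / len(wealth)  # fractional rank
      h = pva[order]
      ci = 2*np.cov(h, rank, bias=True)[0, 1] / h.mean()
      print(round(ci, 3))  # negative: poorer vision concentrated among the poor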

  9. Hybridization of decomposition and local search for multiobjective optimization.

    PubMed

    Ke, Liangjun; Zhang, Qingfu; Battiti, Roberto

    2014-10-01

    Combining ideas from evolutionary algorithms, decomposition approaches, and Pareto local search, this paper suggests a simple yet efficient memetic algorithm for combinatorial multiobjective optimization problems: the memetic algorithm based on decomposition (MOMAD). It decomposes a combinatorial multiobjective problem into a number of single-objective optimization problems using an aggregation method. MOMAD evolves three populations: 1) population P(L) for recording the current solution to each subproblem; 2) population P(P) for storing starting solutions for Pareto local search; and 3) an external population P(E) for maintaining all the nondominated solutions found so far during the search. A problem-specific single-objective heuristic can be applied to these subproblems to initialize the three populations. At each generation, a Pareto local search method is first applied to search a neighborhood of each solution in P(P) to update P(L) and P(E). Then a single-objective local search is applied to each perturbed solution in P(L) to improve P(L) and P(E), and to reinitialize P(P). The procedure is repeated until a stopping condition is met. MOMAD provides a generic hybrid multiobjective algorithmic framework in which problem-specific knowledge, well-developed single-objective local search and heuristics, and Pareto local search methods can be hybridized. It is a population-based iterative method and thus an anytime algorithm. Extensive experiments have been conducted in this paper to study MOMAD and compare it with some other state-of-the-art algorithms on the multiobjective traveling salesman problem and the multiobjective knapsack problem. The experimental results show that our proposed algorithm outperforms or performs similarly to the best heuristics known so far on these two problems.
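
    To make the three-population structure concrete, here is a self-contained toy version on a trivially small biobjective bit-string problem (LOTZ: maximize leading ones and trailing zeros); the problem, weights, neighborhood, and loop counts are all illustrative stand-ins for the paper's benchmarks.

      # Toy MOMAD-style loop: P_L holds one solution per scalarized subproblem,
      # P_P the Pareto-local-search starting set, P_E the nondominated archive.
      import random
      random.seed(0)
      N = 10

      def objs(x):                        # (leading ones, trailing zeros)
          lo = next((i for i, b in enumerate(x) if b == 0), N)
          tz = next((i for i, b in enumerate(reversed(x)) if b == 1), N)
          return (lo, tz)

      def dominated(a, b):                # is objective vector a dominated by b?
          return all(u <= v for u, v in zip(a, b)) and a != b

      def nondom(pop):
          uniq = [list(t) for t in {tuple(x) for x in pop}]
          return [x for x in uniq
                  if not any(dominated(objs(x), objs(y)) for y in uniq)]

      weights = [(k/4, 1 - k/4) for k in range(5)]   # scalarizing weights
      scal = lambda x, w: w[0]*objs(x)[0] + w[1]*objs(x)[1]

      def local_search(x, w):             # single-objective 1-flip hill climb
          improved = True
          while improved:
              improved = False
              for i in range(N):
                  y = x[:]; y[i] ^= 1
                  if scal(y, w) > scal(x, w):
                      x, improved = y, True
          return x

      P_L = [local_search([random.randint(0, 1) for _ in range(N)], w)
             for w in weights]            # one solution per subproblem
      P_E = nondom(P_L)                   # external archive
      P_P = [x[:] for x in P_E]           # Pareto local search starting set
      for _ in range(20):
          for s in P_P:                   # 1) Pareto local search step
              for i in range(N):
                  y = s[:]; y[i] ^= 1
                  P_E = nondom(P_E + [y])
          P_P = []
          for k, w in enumerate(weights): # 2) perturb + single-objective search
              y = P_L[k][:]; y[random.randrange(N)] ^= 1
              y = local_search(y, w)
              if scal(y, w) >= scal(P_L[k], w):
                  P_L[k] = y
              if y in nondom(P_E + [y]):
                  P_E = nondom(P_E + [y]); P_P.append(y)
          if not P_P:
              P_P = [x[:] for x in P_E]
      print(sorted({objs(x) for x in P_E}))  # approximates the LOTZ front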

  10. A multi-pixel InSAR time series analysis method: Simultaneous estimation of atmospheric noise, orbital errors and deformation

    NASA Astrophysics Data System (ADS)

    Jolivet, R.; Simons, M.

    2016-12-01

    InSAR time series analysis allows reconstruction of ground deformation with meter-scale spatial resolution and high temporal sampling. For instance, the ESA Sentinel-1 constellation is capable of providing 6-day temporal sampling, thereby opening a new window on the spatio-temporal behavior of tectonic processes. However, due to computational limitations, most time series methods rely on a pixel-by-pixel approach. This limitation is a concern because (1) accounting for orbital errors requires referencing all interferograms to a common set of pixels before reconstruction of the time series and (2) spatially correlated atmospheric noise due to tropospheric turbulence is ignored. Decomposing interferograms into statistically independent wavelets will mitigate issues of correlated noise, but prior estimation of orbital uncertainties will still be required. Here, we explore a method that considers all pixels simultaneously when solving for the spatio-temporal evolution of the interferometric phase. Our method is based on a massively parallel implementation of a conjugate direction solver. We consider an interferogram as the sum of the phase difference between 2 SAR acquisitions and the corresponding orbital errors. In addition, we fit the temporal evolution with a physically parameterized function while accounting for spatially correlated noise in the data covariance. We assume noise is isotropic for any given InSAR pair, with a covariance described by an exponential function that decays with increasing separation distance between pixels. We regularize our solution in space using a similar exponential function as the model covariance. Given the problem size, we avoid matrix multiplications with the full covariances by computing convolutions in the Fourier domain. We first solve the unregularized least squares problem using the LSQR algorithm to approach the final solution, then run our conjugate direction solver to account for data and model covariances. We present synthetic tests showing the efficiency of our method. We then reconstruct a 20-year continuous time series covering Northern Chile. Without input from any additional GNSS data, we recover the secular deformation rate, seasonal oscillations and the deformation fields of the 2005 Mw 7.8 Tarapaca and 2007 Mw 7.7 Tocopilla earthquakes.
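
    The Fourier-domain trick mentioned above, applying an isotropic exponential covariance without forming the dense matrix, can be sketched in a few lines; the grid size, sill, and correlation length below are arbitrary illustrative values.

      # Apply C(d) = s^2 * exp(-d/L) to a gridded field as a circular
      # convolution in the Fourier domain (periodic boundaries implied).
      import numpy as np

      n, dx = 256, 1.0          # grid size and pixel spacing (arbitrary units)
      sigma2, L = 1.0, 20.0     # sill and e-folding length of the covariance

      y, x = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
      d = np.hypot(x - n//2, y - n//2) * dx
      kernel = sigma2 * np.exp(-d / L)

      v = np.random.default_rng(2).normal(size=(n, n))  # field to multiply by C
      Cv = np.real(np.fft.ifft2(np.fft.fft2(np.fft.ifftshift(kernel))
                                * np.fft.fft2(v)))
      # Cv approximates the dense covariance matrix applied to vec(v), at
      # O(n^2 log n) cost instead of O(n^4).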

  11. Applications of Hilbert Spectral Analysis for Speech and Sound Signals

    NASA Technical Reports Server (NTRS)

    Huang, Norden E.

    2003-01-01

    A new method for analyzing nonlinear and nonstationary data has been developed, and its natural applications are to speech and sound signals. The key part of the method is the Empirical Mode Decomposition method, with which any complicated data set can be decomposed into a finite and often small number of Intrinsic Mode Functions (IMF). An IMF is defined as any function having the same numbers of zero-crossings and extrema, and also having symmetric envelopes defined by the local maxima and minima respectively. The IMF also admits a well-behaved Hilbert transform. This decomposition method is adaptive and therefore highly efficient. Since the decomposition is based on the local characteristic time scale of the data, it is applicable to nonlinear and nonstationary processes. With the Hilbert transform, the Intrinsic Mode Functions yield instantaneous frequencies as functions of time, which give sharp identifications of embedded structures. This invention can be used to process all acoustic signals. Specifically, it can process speech signals for speech synthesis, speaker identification and verification, speech recognition, and sound-signal enhancement and filtering. Additionally, the acoustical signals from machinery are essentially the way the machines talk to us: whether transmitted through the air as sound or as vibration on the machine itself, they reveal the operating condition of the machine. Thus, the acoustic signal can be used to diagnose machine problems.
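
    A compact sketch of the sifting operation at the heart of EMD follows; real implementations add stopping criteria, boundary handling, and iterate over all IMFs, so treat this as a cartoon of one step rather than the method itself.

      # One EMD sifting step: spline-interpolate the local maxima and minima
      # and subtract the mean envelope from the signal.
      import numpy as np
      from scipy.interpolate import CubicSpline
      from scipy.signal import argrelextrema

      def sift_once(t, x):
          imax = argrelextrema(x, np.greater)[0]
          imin = argrelextrema(x, np.less)[0]
          if len(imax) < 2 or len(imin) < 2:
              return None                       # too few extrema: x is a residue
          upper = CubicSpline(t[imax], x[imax])(t)
          lower = CubicSpline(t[imin], x[imin])(t)
          return x - 0.5*(upper + lower)        # candidate IMF component

      t = np.linspace(0, 1, 1000)
      x = np.sin(2*np.pi*5*t) + 0.5*np.sin(2*np.pi*40*t)
      h = x.copy()
      for _ in range(10):                       # iterate sifting toward an IMF
          h_new = sift_once(t, h)
          if h_new is None:
              break
          h = h_new
      # h now approximates the fastest oscillation (the 40 Hz component)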

  12. Factors driving stable growth of He clusters in W: first-principles study

    NASA Astrophysics Data System (ADS)

    Feng, Y. J.; Xin, T. Y.; Xu, Q.; Wang, Y. X.

    2018-07-01

    The evolution of helium (He) bubbles is responsible for the surface morphology variation and subsequent degradation of the properties of plasma-facing materials (PFMs) in nuclear fusion reactors. These severe problems unquestionably trace back to the behavior of He in PFMs, which is closely associated with the interaction between He and the matrix. In this paper, we decomposed the binding energy of the He cluster into three parts, those from W–W, W–He, and He–He interactions, using density functional theory. As a result, we clearly identified the main factors that determine a steplike decrease in the binding energy with increasing number of He atoms, which explains the process of self-trapping and athermal vacancy generation during He cluster growth in the PFM tungsten. The three interactions were found to synergetically shape the features of the steplike decrease in the binding energy. Fairly strong He–He repulsive forces at a short distance, which stem from antibonding states between He atoms, need to be released when additional He atoms are continuously bonded to the He cluster. This causes the steplike feature in the binding energy. The bonding states between W and He atoms in principle facilitate the decreasing trend of the binding energy. The decrease in binding energy with increasing number of He atoms implies that He clusters can grow stably.

  13. ZnO core spike particles and nano-networks and their wide range of applications

    NASA Astrophysics Data System (ADS)

    Wille, S.; Mishra, Y. K.; Gedamu, D.; Kaps, S.; Jin, X.; Koschine, T.; Bathnagar, A.; Adelung, R.

    2011-05-01

    In our approach we produce a polymer composite material with ZnO core spike particles as concave fillers. The core spike particles are synthesized by a high-throughput method. Using PDMS (polydimethylsiloxane) as the matrix material, the core spike particles not only provide high mechanical reinforcement but also influence other material properties in interesting ways, making such a composite attractive for a wide range of applications. In a very similar synthesis route, a nanoscopic ZnO network is produced. As a ceramic, this network can withstand temperatures as high as 1300 K; in addition, the material is quite elastic. Finding a material with these two properties is difficult, as polymers decompose at much lower temperatures and metals melt. Under ambient conditions in particular, oxygen poses an additional problem for metals at these temperatures. If such a material is at the same time a semiconductor, it has high potential as a multifunctional material. Ceramics and classical semiconductors of the III-V or II-VI type are stable at high temperature but typically brittle. This is different on the nanoscale: even semiconductor wires such as silicon with a very small diameter do not easily build up enough stress to fail while being bent, because to a first-order approximation the maximum stress of a fiber scales with its diameter.

  14. Hydrogen-fluorine exchange in NaBH4-NaBF4.

    PubMed

    Rude, L H; Filsø, U; D'Anna, V; Spyratou, A; Richter, B; Hino, S; Zavorotynska, O; Baricco, M; Sørby, M H; Hauback, B C; Hagemann, H; Besenbacher, F; Skibsted, J; Jensen, T R

    2013-11-07

    Hydrogen-fluorine exchange in the NaBH4-NaBF4 system is investigated using a range of experimental methods combined with DFT calculations and a possible mechanism for the reactions is proposed. Fluorine substitution is observed using in situ synchrotron radiation powder X-ray diffraction (SR-PXD) as a new Rock salt type compound with idealized composition NaBF2H2 in the temperature range T = 200 to 215 °C. Combined use of solid-state (19)F MAS NMR, FT-IR and DFT calculations supports the formation of a BF2H2(-) complex ion, reproducing the observation of a (19)F chemical shift at -144.2 ppm, which is different from that of NaBF4 at -159.2 ppm, along with the new absorption bands observed in the IR spectra. After further heating, the fluorine substituted compound becomes X-ray amorphous and decomposes to NaF at ~310 °C. This work shows that fluorine-substituted borohydrides tend to decompose to more stable compounds, e.g. NaF and BF3 or amorphous products such as closo-boranes, e.g. Na2B12H12. The NaBH4-NaBF4 composite decomposes at lower temperatures (300 °C) compared to NaBH4 (476 °C), as observed by thermogravimetric analysis. NaBH4-NaBF4 (1:0.5) preserves 30% of the hydrogen storage capacity after three hydrogen release and uptake cycles compared to 8% for NaBH4 as measured using Sievert's method under identical conditions, but more than 50% using prolonged hydrogen absorption time. The reversible hydrogen storage capacity tends to decrease possibly due to the formation of NaF and Na2B12H12. On the other hand, the additive sodium fluoride appears to facilitate hydrogen uptake, prevent foaming, phase segregation and loss of material from the sample container for samples of NaBH4-NaF.

  15. Fungal-to-bacterial dominance of soil detrital food-webs: Consequences for biogeochemistry

    NASA Astrophysics Data System (ADS)

    Rousk, Johannes; Frey, Serita

    2015-04-01

    Resolving fungal and bacterial groups within the microbial decomposer community is thought to capture disparate microbial life strategies, associating bacteria with an r-selected strategy for carbon (C) and nutrient use, and fungi with a K-selected strategy. Additionally, food-web models have established a widely held belief that the bacterial decomposer pathway in soil supports high turnover rates of easily available substrates, while the slower fungal pathway supports the decomposition of more complex organic material, thus characterising the biogeochemistry of the ecosystem. Three field experiments generating gradients of SOC quality were assessed: (1) the Detritus Input, Removal, and Trenching (DIRT) experiment in mixed hardwood stands of a temperate forest at Harvard Forest LTER, US, where 23 years of experimentally adjusted litter and root inputs have affected SOC quality; (2) field application of 14C-labelled glucose to grassland soils, sampled over the course of 13 months to generate an age gradient of SOM (1 day to 13 months); and (3) the Park Grass Experiment at Rothamsted, UK, where 150 years of continuous N fertilisation (0, 50, 100, 150 kg N ha-1 y-1) has affected the quality of SOM in grassland soils. A combination of carbon stable and radio isotope studies, fungal and bacterial growth and biomass measurements, and C and N mineralisation (15N pool dilution) assays was used to investigate how SOC quality influenced the fungal and bacterial food-web pathways and the implications this had for C and nutrient turnover. There was no support for the view that decomposer food-webs dominated by bacteria sustain high turnover rates of easily available substrates, while slower fungal-dominated decomposition pathways sustain the decomposition of more complex organic material. Rather, an association between high-quality SOC and fungi emerged from the results. This suggests that we need to revise our basic understanding of soil microbial communities and the processes they regulate in soil.

  16. A Graph-Based Recovery and Decomposition of Swanson’s Hypothesis using Semantic Predications

    PubMed Central

    Cameron, Delroy; Bodenreider, Olivier; Yalamanchili, Hima; Danh, Tu; Vallabhaneni, Sreeram; Thirunarayan, Krishnaprasad; Sheth, Amit P.; Rindflesch, Thomas C.

    2014-01-01

    Objectives This paper presents a methodology for recovering and decomposing Swanson's Raynaud Syndrome–Fish Oil Hypothesis semi-automatically. The methodology leverages the semantics of assertions extracted from biomedical literature (called semantic predications) along with structured background knowledge and graph-based algorithms to semi-automatically capture the informative associations originally discovered manually by Swanson. Demonstrating that Swanson's manually intensive techniques can be undertaken semi-automatically paves the way for fully automatic semantics-based hypothesis generation from scientific literature. Methods Semantic predications obtained from biomedical literature allow the construction of labeled directed graphs which contain various associations among concepts from the literature. By aggregating such associations into informative subgraphs, some of the relevant details originally articulated by Swanson have been uncovered. Moreover, by leveraging background knowledge to bridge important knowledge gaps in the literature, a methodology for semi-automatically capturing the detailed associations originally explicated in natural language by Swanson has been developed. Results Our methodology not only recovered the 3 associations commonly recognized as Swanson's Hypothesis, but also decomposed them into an additional 16 detailed associations, formulated as chains of semantic predications. Altogether, 14 out of the 19 associations that can be attributed to Swanson were retrieved using our approach. To the best of our knowledge, such an in-depth recovery and decomposition of Swanson's Hypothesis has never been attempted. Conclusion In this work, therefore, we presented a methodology for semi-automatically recovering and decomposing Swanson's RS-DFO Hypothesis using semantic representations and graph algorithms. Our methodology provides new insights into potential prerequisites for semantics-driven Literature-Based Discovery (LBD). These suggest that three critical aspects of LBD include: 1) the need for more expressive representations beyond Swanson's ABC model; 2) the ability to accurately extract semantic information from text; and 3) the semantic integration of scientific literature with structured background knowledge. PMID:23026233
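
    In the spirit of the ABC-style chaining described above, the sketch below builds a tiny labeled directed graph of predications and enumerates the chains linking a source and a target concept; the triples are invented stand-ins, not actual SemRep output.

      # Chaining semantic predications through a labeled directed graph.
      import networkx as nx

      predications = [
          ("fish oil", "REDUCES", "blood viscosity"),
          ("blood viscosity", "ASSOCIATED_WITH", "Raynaud syndrome"),
          ("fish oil", "INHIBITS", "platelet aggregation"),
          ("platelet aggregation", "ASSOCIATED_WITH", "Raynaud syndrome"),
      ]

      G = nx.DiGraph()
      for subj, pred, obj in predications:
          G.add_edge(subj, obj, predicate=pred)

      # Enumerate predication chains linking the source and target concepts
      for path in nx.all_simple_paths(G, "fish oil", "Raynaud syndrome"):
          chain = [f"{a} --{G[a][b]['predicate']}--> {b}"
                   for a, b in zip(path, path[1:])]
          print(" ; ".join(chain))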

  17. Peat soil properties and erodibility: what factors affect erosion and suspended sediment yields in peat extraction areas?

    NASA Astrophysics Data System (ADS)

    Tuukkanen, Tapio; Marttila, Hannu; Kløve, Bjørn

    2014-05-01

    Peatland drainage and peat extraction operations change soil properties and expose bare peat to erosive forces, resulting in increased suspended sediment (SS) loads to downstream water bodies. SS yields from peat extraction areas are known to vary significantly between sites, but the contribution of peat properties and catchment characteristics to this variation is not well understood. In this study, we investigated peat erosion at 20 Finnish peat extraction sites by conducting in situ and laboratory measurements of peat erodibility and associated peat properties (degree of humification, peat type, bulk density, loss on ignition, porosity, moisture content, and shear strength), and by comparing the results with monitored long-term SS concentrations and loads at each catchment outlet. We used a cohesive strength meter (CSM) to measure direct erosion thresholds for undisturbed soil cores collected from each study site. The results suggested that the degree of peat decomposition clearly affects peat erodibility and explains much of the variation in SS concentration between the study sites. According to the CSM tests, critical shear stresses for particle entrainment were lowest (on average) in well-decomposed peat samples, while undecomposed, dry and fiber-rich peat generally resisted erosion very well. Furthermore, the results indicated that two separate critical shear stresses often exist in moderately decomposed peat. In these cases, the well-decomposed parts of the peat samples eroded first at relatively low shear stresses, and the remaining peat fibers prevented further erosion until a much higher shear stress was reached. In addition to peat soil properties, the study showed that the erosion of mineral subsoil may play a key role in runoff water SS concentration at peat extraction areas with drainage ditches extending into the mineral soil. The interactions between peat properties and peat erodibility found in this study, as well as the critical shear stress values obtained, can be used for several purposes in, e.g., water conservation and sediment management planning for peat extraction areas and other bare peat-covered catchments.

  18. Shelters of leaf-tying herbivores decompose faster than leaves damaged by free-living insects: Implications for nutrient turnover in polluted habitats.

    PubMed

    Kozlov, Mikhail V; Zverev, Vitali; Zvereva, Elena L

    2016-10-15

    Leaf-eating insects can influence decomposition processes by modifying the quality of leaf litter, and this impact can be especially pronounced in habitats where leaf-eating insects reach high densities, for example in heavily polluted areas. We hypothesized that the decomposition rate is faster for shelters of leaf-tying larvae than for leaves damaged by free-living insects, in particular due to the accumulation of larval frass within shelters. We exposed litter bags containing samples of three different compositions (shelters built by moth larvae, leaves damaged by free-living insects, and intact leaves of mountain birch, Betula pubescens ssp. czerepanovii) for one year at two heavily polluted sites near the nickel-copper smelter at Monchegorsk in north-western Russia and at two unpolluted sites. The decomposition rate of leaves damaged by free-living insects was 91% of that of undamaged leaves, whereas the mass loss of leaves composing shelters did not differ from that of undamaged leaves. These differences between leaves damaged by different guilds of herbivorous insects were uniform across the study sites, although the decomposition rate in polluted sites was reduced to 77% of that in unpolluted sites. Addition of larval frass to undamaged leaves had no effect on the subsequent decomposition rate. Therefore, we suggest that damaged leaves tied by shelter-building larvae decompose faster than untied damaged leaves due to a looser physical structure of the litter, which creates favourable conditions for detritivores and soil decomposers. Thus, while leaf damage by insects per se reduces litter quality and its decomposition rate, structuring of litter by leaf-tying insects counterbalances these negative effects. We conclude that leaf-tying larvae, in contrast to free-living defoliators, do not impose negative effects on nutrient turnover rate even at their high densities, which are frequently observed in heavily polluted sites.
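
    The relative decomposition rates quoted above are conventionally derived from litter-bag mass loss via the single-exponential (Olson) decay model m(t) = m0·e^(−kt). The sketch below shows that calculation on invented mass-remaining fractions chosen to roughly reproduce the 91% figure; neither the model choice nor the numbers come from the paper.

```python
# Sketch: deriving decay constants from one-year litter-bag mass loss
# using the single-exponential (Olson) model m(t) = m0 * exp(-k * t).
# The mass-remaining fractions below are illustrative assumptions; the
# study reports only relative decomposition rates.
import math

def decay_constant(mass_remaining_fraction, years=1.0):
    """k (yr^-1) such that m(t) / m0 = exp(-k * t)."""
    return -math.log(mass_remaining_fraction) / years

# Hypothetical fractions of initial mass left after one year of exposure.
treatments = {
    "intact leaves": 0.60,
    "free-living damage": 0.63,   # slower: roughly 91% of the intact rate
    "larval shelters": 0.60,      # no detectable difference from intact
}

k_intact = decay_constant(treatments["intact leaves"])
for name, frac in treatments.items():
    k = decay_constant(frac)
    print(f"{name}: k = {k:.3f} /yr ({100 * k / k_intact:.0f}% of intact)")
```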

  19. Morphological decomposition of 2-D binary shapes into convex polygons: a heuristic algorithm.

    PubMed

    Xu, J

    2001-01-01

    Many morphological shape decomposition algorithms either decompose a shape only into components of extremely simple forms or employ a time-consuming search process to determine a decomposition. In this paper, we present a morphological shape decomposition algorithm that decomposes a two-dimensional (2-D) binary shape into a collection of convex polygonal components. A single convex polygonal approximation for a given image is first identified. This first component is determined incrementally by selecting a sequence of basic shape primitives. These shape primitives are chosen based on shape information extracted from the given shape at different scale levels. Additional shape components are identified recursively from the difference image between the given image and the first component. Simple operations are used to repair certain concavities caused by the set difference operation. The resulting hierarchical structure provides descriptions for the given shape at different detail levels. The experiments show that the decomposition results produced by the algorithm are in good agreement with the natural structures of the given shapes. The computational cost of the algorithm is significantly lower than that of an earlier search-based convex decomposition algorithm. Compared to nonconvex decomposition algorithms, our algorithm allows accurate approximations of the given shapes at low coding costs.
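
    A heavily simplified sketch of the extract-subtract-recurse loop underlying this family of algorithms is given below. It extracts square components via morphological opening rather than the convex polygonal primitives of the algorithm summarised above, so it should be read as an illustration of the general recursion on difference images, not as the paper's method.

```python
# Simplified sketch of morphological shape decomposition: the shape is
# peeled into components, each the opening of the residual image by the
# largest structuring element that still fits. It extracts square (not
# general convex polygonal) components, unlike the algorithm above.
import numpy as np
from scipy import ndimage

def decompose(image, max_components=10):
    square = np.ones((3, 3), dtype=bool)  # basic structuring element
    residual = image.astype(bool).copy()
    components = []
    while residual.any() and len(components) < max_components:
        # Find the largest n for which the n-fold erosion is non-empty.
        n, eroded = 0, residual
        while True:
            nxt = ndimage.binary_erosion(eroded, square)
            if not nxt.any():
                break
            eroded, n = nxt, n + 1
        # Component = opening at scale n (dilate the core back out).
        comp = (ndimage.binary_dilation(eroded, square, iterations=n)
                if n else eroded)
        components.append(comp)
        residual = residual & ~comp  # recurse on the difference image
    return components
```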

  20. Thermal Decomposition Behaviors and Burning Characteristics of AN/Nitramine-Based Composite Propellant

    NASA Astrophysics Data System (ADS)

    Naya, Tomoki; Kohga, Makoto

    2015-04-01

    Ammonium nitrate (AN) has attracted much attention as an oxidizer due to its clean-burning nature. However, an AN-based composite propellant has the disadvantages of a low burning rate and poor ignitability. In this study, we added a nitramine, either cyclotrimethylene trinitramine (RDX) or cyclotetramethylene tetranitramine (HMX), as a high-energy material to AN propellants to overcome these disadvantages. The thermal decomposition and burning rate characteristics of the prepared propellants were examined as the ratio of AN to nitramine was varied. In the thermal decomposition process, AN/RDX propellants showed unique mass loss peaks in the lower temperature range that were not observed for AN or RDX propellants alone: AN and RDX decomposed continuously, almost as a single oxidizer, in the AN/RDX propellant. In contrast, AN/HMX propellants exhibited thermal decomposition characteristics similar to those of AN and HMX individually, i.e., the two oxidizers decomposed almost separately. The ignitability was improved and the burning rate increased by the addition of nitramine for both AN/RDX and AN/HMX propellants, with the burning rate increase being greater for AN/RDX than for AN/HMX. This difference in thermal decomposition and burning characteristics was caused by the interaction between AN and RDX.
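
    Burning rate characteristics of composite propellants are conventionally summarised with Saint-Robert/Vieille's law, r = a·P^n. The sketch below fits that law to invented pressure/burning-rate pairs; the abstract does not reproduce the study's measured values, so every number here is an assumption.

```python
# Sketch: characterising propellant burning rate with the conventional
# Saint-Robert/Vieille law, r = a * P**n. The (pressure, burning rate)
# pairs below are invented for illustration only.
import math

data = [(1.0, 1.2), (3.0, 2.1), (5.0, 2.8), (7.0, 3.3)]  # (P MPa, r mm/s)

# Least-squares fit of log r = log a + n * log P.
logs = [(math.log(p), math.log(r)) for p, r in data]
mx = sum(x for x, _ in logs) / len(logs)
my = sum(y for _, y in logs) / len(logs)
n = (sum((x - mx) * (y - my) for x, y in logs)
     / sum((x - mx) ** 2 for x, _ in logs))
a = math.exp(my - n * mx)
print(f"fitted law: r = {a:.2f} * P^{n:.2f}")
```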
