Sample records for decomposed parallel processing

  1. Sequential or parallel decomposed processing of two-digit numbers? Evidence from eye-tracking.

    PubMed

    Moeller, Korbinian; Fischer, Martin H; Nuerk, Hans-Christoph; Willmes, Klaus

    2009-02-01

    While reaction time data have shown that decomposed processing of two-digit numbers occurs, there is little evidence about how decomposed processing functions. Poltrock and Schwartz (1984) argued that multi-digit numbers are compared in a sequential digit-by-digit fashion starting at the leftmost digit pair. In contrast, Nuerk and Willmes (2005) favoured parallel processing of the digits constituting a number. These models (i.e., sequential decomposition, parallel decomposition) make different predictions regarding the fixation pattern in a two-digit number magnitude comparison task and can therefore be differentiated by eye fixation data. We tested these models by evaluating participants' eye fixation behaviour while selecting the larger of two numbers. The stimulus set consisted of within-decade comparisons (e.g., 53_57) and between-decade comparisons (e.g., 42_57). The between-decade comparisons were further divided into compatible and incompatible trials (cf. Nuerk, Weger, & Willmes, 2001) and trials with different decade and unit distances. The observed fixation pattern implies that the comparison of two-digit numbers is not executed by sequentially comparing decade and unit digits as proposed by Poltrock and Schwartz (1984) but rather in a decomposed but parallel fashion. Moreover, the present fixation data provide first evidence that digit processing in multi-digit numbers is not a pure bottom-up effect, but is also influenced by top-down factors. Finally, implications for multi-digit number processing beyond the range of two-digit numbers are discussed.

  2. A discrimination-association model for decomposing component processes of the implicit association test.

    PubMed

    Stefanutti, Luca; Robusto, Egidio; Vianello, Michelangelo; Anselmi, Pasquale

    2013-06-01

    A formal model is proposed that decomposes the implicit association test (IAT) effect into three process components: stimuli discrimination, automatic association, and termination criterion. Both response accuracy and reaction time are considered. Four independent and parallel Poisson processes, one for each of the four label categories of the IAT, are assumed. The model parameters are the rate at which information accrues on the counter of each process and the amount of information that is needed before a response is given. The aim of this study is to present the model and an illustrative application in which the process components of a Coca-Pepsi IAT are decomposed.
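
    The race between independent accrual processes described above can be pictured with a small simulation. The sketch below is not the authors' estimation procedure; it is a minimal toy model, assuming one Poisson counter per label category with hypothetical accrual rates and thresholds, where the first counter to reach its threshold determines the response and the reaction time.

```python
import random

def simulate_iat_trial(rates, thresholds, correct_category):
    """Race four independent Poisson counters; the first to reach its
    threshold determines the response and the reaction time (toy model)."""
    # Each counter accrues events as an independent Poisson process, so the
    # waiting time between events is exponential with mean 1/rate.
    times = {}
    for cat in rates:
        t = 0.0
        for _ in range(thresholds[cat]):
            t += random.expovariate(rates[cat])
        times[cat] = t
    winner = min(times, key=times.get)
    return winner == correct_category, times[winner]

if __name__ == "__main__":
    # Hypothetical parameters: accrual rate (events/s) and evidence threshold
    # for each of the four IAT label categories.
    rates = {"flower": 9.0, "insect": 8.0, "pleasant": 10.0, "unpleasant": 7.5}
    thresholds = {cat: 6 for cat in rates}
    trials = [simulate_iat_trial(rates, thresholds, "flower") for _ in range(10000)]
    accuracy = sum(ok for ok, _ in trials) / len(trials)
    mean_rt = sum(rt for _, rt in trials) / len(trials)
    print(f"accuracy={accuracy:.3f}  mean RT={mean_rt:.3f} s")
```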

  3. FPGA-Based Filterbank Implementation for Parallel Digital Signal Processing

    NASA Technical Reports Server (NTRS)

    Berner, Stephan; DeLeon, Phillip

    1999-01-01

    One approach to parallel digital signal processing decomposes a high bandwidth signal into multiple lower bandwidth (rate) signals by an analysis bank. After processing, the subband signals are recombined into a fullband output signal by a synthesis bank. This paper describes an implementation of the analysis and synthesis banks using (Field Programmable Gate Arrays) FPGAs.
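
    A minimal software analogue of the analysis/synthesis idea, assuming a trivial two-channel polyphase split (even and odd samples) rather than the FPGA filterbank actually described: the signal is decomposed into two half-rate subband streams that could be processed independently, then recombined by interleaving, giving perfect reconstruction in this toy case.

```python
import numpy as np

def analysis_bank(x):
    """Decompose a full-rate signal into two half-rate subband signals
    (trivial polyphase split: even and odd samples)."""
    return x[0::2].copy(), x[1::2].copy()

def synthesis_bank(even, odd):
    """Recombine the subband signals into the full-rate output by interleaving."""
    y = np.empty(even.size + odd.size, dtype=even.dtype)
    y[0::2], y[1::2] = even, odd
    return y

if __name__ == "__main__":
    x = np.sin(2 * np.pi * 0.01 * np.arange(64))
    sub0, sub1 = analysis_bank(x)          # each subband runs at half the rate
    sub0, sub1 = sub0 * 1.0, sub1 * 1.0    # placeholder per-subband processing
    y = synthesis_bank(sub0, sub1)
    print("perfect reconstruction:", np.allclose(x, y))
```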

  4. Scalable Domain Decomposed Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    O'Brien, Matthew Joseph

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are: • Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node. • Load Balancing: keeps the workload per processor as even as possible so the calculation runs efficiently. • Global Particle Find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain. • Visualizing constructive solid geometry, sourcing particles, deciding that particle streaming communication is completed and spatial redecomposition. These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms up to 2 million MPI processes on the Sequoia supercomputer.
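
    The "global particle find" step can be pictured with a small serial sketch. The code below is a hypothetical illustration, not the dissertation's implementation: domains are a regular grid of boxes, each owned by one rank, and particles that have ended up on the wrong rank are routed to the owner of the box containing their coordinates.

```python
import numpy as np

# Hypothetical 2x2 regular decomposition of the unit square: domain index -> owning rank.
NX = NY = 2

def owning_rank(x, y):
    """Map a particle coordinate to the rank that owns the enclosing domain box."""
    i = min(int(x * NX), NX - 1)
    j = min(int(y * NY), NY - 1)
    return j * NX + i

def global_particle_find(particles_by_rank):
    """Route every particle to the rank whose domain contains it
    (a serial stand-in for the MPI exchange)."""
    routed = {r: [] for r in range(NX * NY)}
    for rank, particles in particles_by_rank.items():
        for (x, y) in particles:
            routed[owning_rank(x, y)].append((x, y))
    return routed

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Particles scattered arbitrarily over ranks, e.g. after a streaming step.
    scattered = {r: [tuple(p) for p in rng.random((5, 2))] for r in range(4)}
    fixed = global_particle_find(scattered)
    for r, ps in fixed.items():
        assert all(owning_rank(x, y) == r for x, y in ps)
    print({r: len(ps) for r, ps in fixed.items()})
```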

  5. Scalable Domain Decomposed Monte Carlo Particle Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Matthew Joseph

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  6. A parallelized binary search tree

    USDA-ARS?s Scientific Manuscript database

    PTTRNFNDR is an unsupervised statistical learning algorithm that detects patterns in DNA sequences, protein sequences, or any natural language texts that can be decomposed into letters of a finite alphabet. PTTRNFNDR performs complex mathematical computations and its processing time increases when i...

  7. Parallel distributed, reciprocal Monte Carlo radiation in coupled, large eddy combustion simulations

    NASA Astrophysics Data System (ADS)

    Hunsaker, Isaac L.

    Radiation is the dominant mode of heat transfer in high temperature combustion environments. Radiative heat transfer affects the gas and particle phases, including all the associated combustion chemistry. The radiative properties are in turn affected by the turbulent flow field. This bi-directional coupling of radiation-turbulence interactions poses a major challenge in creating parallel-capable, high-fidelity combustion simulations. In this work, a new model was developed in which reciprocal Monte Carlo radiation was coupled with a turbulent, large-eddy simulation combustion model. A technique wherein domain patches are stitched together was implemented to allow for scalable parallelism. The combustion model runs in parallel on a decomposed domain. The radiation model runs in parallel on a recomposed domain. The recomposed domain is stored on each processor after information sharing of the decomposed domain is handled via the message passing interface. Verification and validation testing of the new radiation model were favorable. Strong scaling analyses were performed on the Ember cluster and the Titan cluster for the CPU-radiation model and GPU-radiation model, respectively. The model demonstrated strong scaling to over 1,700 and 16,000 processing cores on Ember and Titan, respectively.

  8. Vectorization and parallelization of the finite strip method for dynamic Mindlin plate problems

    NASA Technical Reports Server (NTRS)

    Chen, Hsin-Chu; He, Ai-Fang

    1993-01-01

    The finite strip method is a semi-analytical finite element process which allows for a discrete analysis of certain types of physical problems by discretizing the domain of the problem into finite strips. This method decomposes a single large problem into m smaller independent subproblems when m harmonic functions are employed, thus yielding natural parallelism at a very high level. In this paper we address vectorization and parallelization strategies for the dynamic analysis of simply-supported Mindlin plate bending problems and show how to prevent potential conflicts in memory access during the assemblage process. The vector and parallel implementations of this method and the performance results of a test problem under scalar, vector, and vector-concurrent execution modes on the Alliant FX/80 are also presented.
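
    The natural parallelism noted above, with m harmonic terms giving m independent subproblems, can be sketched in a few lines. This is not the authors' Alliant FX/80 implementation; it is a hedged illustration in which each "subproblem" is a small, independent linear solve dispatched to a process pool.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def solve_subproblem(m):
    """Solve the independent subproblem associated with harmonic m
    (here just a stand-in dense linear system)."""
    rng = np.random.default_rng(m)
    K = rng.random((50, 50)) + 50 * np.eye(50)   # well-conditioned stiffness-like matrix
    f = rng.random(50)
    return m, np.linalg.solve(K, f)

if __name__ == "__main__":
    harmonics = range(1, 9)                      # m independent subproblems
    with ProcessPoolExecutor() as pool:
        results = dict(pool.map(solve_subproblem, harmonics))
    print("solved harmonics:", sorted(results))
```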

  9. LMFAO! Humor as a Response to Fear: Decomposing Fear Control within the Extended Parallel Process Model

    PubMed Central

    Abril, Eulàlia P.; Szczypka, Glen; Emery, Sherry L.

    2017-01-01

    This study seeks to analyze fear control responses to the 2012 Tips from Former Smokers campaign using the Extended Parallel Process Model (EPPM). The goal is to examine the occurrence of ancillary fear control responses, like humor. In order to explore individuals’ responses in an organic setting, we use Twitter data—tweets—collected via the Firehose. Content analysis of relevant fear control tweets (N = 14,281) validated the existence of boomerang responses within the EPPM: denial, defensive avoidance, and reactance. More importantly, results showed that humor tweets were not only a significant occurrence but constituted the majority of fear control responses. PMID:29527092

  10. Efficient Delaunay Tessellation through K-D Tree Decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Peterka, Tom

    Delaunay tessellations are fundamental data structures in computational geometry. They are important in data analysis, where they can represent the geometry of a point set or approximate its density. The algorithms for computing these tessellations at scale perform poorly when the input data is unbalanced. We investigate the use of k-d trees to evenly distribute points among processes and compare two strategies for picking split points between domain regions. Because resulting point distributions no longer satisfy the assumptions of existing parallel Delaunay algorithms, we develop a new parallel algorithm that adapts to its input and prove its correctness. We evaluate the new algorithm using two late-stage cosmology datasets. The new running times are up to 50 times faster using k-d tree compared with regular grid decomposition. Moreover, in the unbalanced data sets, decomposing the domain into a k-d tree is up to five times faster than decomposing it into a regular grid.
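
    A toy version of the k-d decomposition, assuming points in 2-D and comparing the two split-point strategies mentioned (a median split that balances point counts versus a midpoint split of the bounding box). This is an illustrative sketch, not the authors' distributed implementation.

```python
import numpy as np

def kd_decompose(points, depth, leaves, strategy="median"):
    """Recursively split a point set along alternating axes until the
    requested number of leaf domains is reached."""
    if leaves == 1 or len(points) == 0:
        return [points] * max(leaves, 1)
    axis = depth % points.shape[1]
    if strategy == "median":
        split = np.median(points[:, axis])                              # balances point counts
    else:
        split = 0.5 * (points[:, axis].min() + points[:, axis].max())   # midpoint of extent
    left = points[points[:, axis] <= split]
    right = points[points[:, axis] > split]
    half = leaves // 2
    return (kd_decompose(left, depth + 1, half, strategy)
            + kd_decompose(right, depth + 1, leaves - half, strategy))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Deliberately unbalanced input: a dense cluster plus a sparse background.
    pts = np.vstack([rng.normal(0.2, 0.02, (9000, 2)), rng.random((1000, 2))])
    for strategy in ("median", "midpoint"):
        sizes = [len(p) for p in kd_decompose(pts, 0, 8, strategy)]
        print(strategy, "leaf sizes:", sizes)
```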

  11. Block-Parallel Data Analysis with DIY2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Peterka, Tom

    DIY2 is a programming model and runtime for block-parallel analytics on distributed-memory machines. Its main abstraction is block-structured data parallelism: data are decomposed into blocks; blocks are assigned to processing elements (processes or threads); computation is described as iterations over these blocks, and communication between blocks is defined by reusable patterns. By expressing computation in this general form, the DIY2 runtime is free to optimize the movement of blocks between slow and fast memories (disk and flash vs. DRAM) and to concurrently execute blocks residing in memory with multiple threads. This enables the same program to execute in-core, out-of-core, serial, parallel, single-threaded, multithreaded, or combinations thereof. This paper describes the implementation of the main features of the DIY2 programming model and optimizations to improve performance. DIY2 is evaluated on benchmark test cases to establish baseline performance for several common patterns and on larger complete analysis codes running on large-scale HPC machines.
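
    The block abstraction can be mimicked in a few lines of plain Python. This sketch is only loosely modeled on the ideas described above (decompose into blocks, assign blocks to processing elements, iterate over blocks, exchange with neighbors); it is not the DIY2 API, and all names are illustrative.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Block:
    gid: int                      # global block id
    data: np.ndarray              # the block's portion of the global array
    neighbors: list = field(default_factory=list)
    inbox: list = field(default_factory=list)

def decompose(array, nblocks):
    """Split a 1-D array into contiguous blocks and wire up left/right neighbors."""
    chunks = np.array_split(array, nblocks)
    blocks = [Block(i, c) for i, c in enumerate(chunks)]
    for b in blocks:
        b.neighbors = [g for g in (b.gid - 1, b.gid + 1) if 0 <= g < nblocks]
    return blocks

def exchange(blocks):
    """Communication pattern: every block sends its boundary values to its neighbors."""
    for b in blocks:
        for g in b.neighbors:
            edge = b.data[0] if g < b.gid else b.data[-1]
            blocks[g].inbox.append(edge)

def iterate(blocks):
    """Computation expressed as an iteration over blocks (could run one block per thread)."""
    for b in blocks:
        b.data = b.data + 0.1 * (np.mean(b.inbox) if b.inbox else 0.0)
        b.inbox.clear()

if __name__ == "__main__":
    blocks = decompose(np.arange(20, dtype=float), nblocks=4)
    for _ in range(3):
        exchange(blocks)
        iterate(blocks)
    print(np.concatenate([b.data for b in blocks]))
```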

  12. Reducing neural network training time with parallel processing

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Lamarsh, William J., II

    1995-01-01

    Obtaining optimal solutions for engineering design problems is often expensive because the process typically requires numerous iterations involving analysis and optimization programs. Previous research has shown that a near optimum solution can be obtained in less time by simulating a slow, expensive analysis with a fast, inexpensive neural network. A new approach has been developed to further reduce this time. This approach decomposes a large neural network into many smaller neural networks that can be trained in parallel. Guidelines are developed to avoid some of the pitfalls when training smaller neural networks in parallel. These guidelines allow the engineer: to determine the number of nodes on the hidden layer of the smaller neural networks; to choose the initial training weights; and to select a network configuration that will capture the interactions among the smaller neural networks. This paper presents results describing how these guidelines are developed.

  13. Cloud parallel processing of tandem mass spectrometry based proteomics data.

    PubMed

    Mohammed, Yassene; Mostovenko, Ekaterina; Henneman, Alex A; Marissen, Rob J; Deelder, André M; Palmblad, Magnus

    2012-10-05

    Data analysis in mass spectrometry based proteomics struggles to keep pace with the advances in instrumentation and the increasing rate of data acquisition. Analyzing this data involves multiple steps requiring diverse software, using different algorithms and data formats. Speed and performance of the mass spectral search engines are continuously improving, although not necessarily as needed to face the challenges of acquired big data. Improving and parallelizing the search algorithms is one possibility; data decomposition presents another, simpler strategy for introducing parallelism. We describe a general method for parallelizing identification of tandem mass spectra using data decomposition that keeps the search engine intact and wraps the parallelization around it. We introduce two algorithms for decomposing mzXML files and recomposing resulting pepXML files. This makes the approach applicable to different search engines, including those relying on sequence databases and those searching spectral libraries. We use cloud computing to deliver the computational power and scientific workflow engines to interface and automate the different processing steps. We show how to leverage these technologies to achieve faster data analysis in proteomics and present three scientific workflows for parallel database as well as spectral library search using our data decomposition programs, X!Tandem and SpectraST.
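
    The decomposition-around-the-engine idea generalizes beyond mzXML/pepXML. The following sketch assumes a hypothetical `search_engine` callable treated as a black box: the spectra list is decomposed into chunks, each chunk is searched in parallel, and the per-chunk results are recomposed in the original order. It is a minimal stand-in for the workflow, not the authors' cloud pipeline.

```python
from concurrent.futures import ProcessPoolExecutor

def search_engine(spectra_chunk):
    """Stand-in for an unmodified search engine invoked on one data chunk
    (e.g. one decomposed mzXML file)."""
    return [f"identification-for-{spectrum}" for spectrum in spectra_chunk]

def decompose(spectra, nchunks):
    """Split the spectra into nchunks pieces of near-equal size."""
    step = (len(spectra) + nchunks - 1) // nchunks
    return [spectra[i:i + step] for i in range(0, len(spectra), step)]

def recompose(chunk_results):
    """Merge per-chunk result lists back into a single result set, order preserved."""
    return [r for chunk in chunk_results for r in chunk]

if __name__ == "__main__":
    spectra = [f"scan{i:04d}" for i in range(1000)]
    with ProcessPoolExecutor() as pool:
        results = recompose(pool.map(search_engine, decompose(spectra, nchunks=8)))
    assert len(results) == len(spectra)
    print(results[:2], "...", results[-1])
```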

  14. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  15. A Stream Tilling Approach to Surface Area Estimation for Large Scale Spatial Data in a Shared Memory System

    NASA Astrophysics Data System (ADS)

    Liu, Jiping; Kang, Xiaochen; Dong, Chun; Xu, Shenghua

    2017-12-01

    Surface area estimation is a widely used tool for resource evaluation in the physical world. When processing large scale spatial data, the input/output (I/O) can easily become the bottleneck in parallelizing the algorithm due to the limited physical memory resources and the very slow disk transfer rate. In this paper, we proposed a stream tilling approach to surface area estimation that first decomposed a spatial data set into tiles with topological expansions. With these tiles, the one-to-one mapping relationship between the input and the computing process was broken. Then, we realized a streaming framework for the scheduling of the I/O processes and computing units. Herein, each computing unit encapsulated an identical copy of the estimation algorithm, and multiple asynchronous computing units could work individually in parallel. Finally, the experiments demonstrated that our stream tilling estimation efficiently alleviates the heavy pressure of the I/O-bound work, and the measured speedups after optimization greatly outperform the directly parallel versions in shared memory systems with multi-core processors.
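
    The scheduling of I/O and computing units described above resembles a classic producer/consumer pipeline. The sketch below is a hedged analogue using the standard library, with hypothetical `load_tile`/`estimate_area` functions standing in for the paper's tile reader and surface-area estimator.

```python
import queue
import threading

def load_tile(tile_id):
    """Stand-in for the I/O side that reads one tile (with topological expansion)."""
    return {"id": tile_id, "cells": [(tile_id, k) for k in range(1000)]}

def estimate_area(tile):
    """Stand-in for one computing unit's copy of the estimation algorithm."""
    return len(tile["cells"]) * 0.25          # pretend each cell contributes 0.25 area units

def worker(tiles, results, lock):
    while True:
        tile = tiles.get()
        if tile is None:                      # sentinel: no more tiles
            tiles.task_done()
            break
        area = estimate_area(tile)
        with lock:
            results.append(area)
        tiles.task_done()

if __name__ == "__main__":
    nworkers = 4
    tiles = queue.Queue(maxsize=8)            # bounded queue decouples I/O from compute
    results, lock = [], threading.Lock()
    threads = [threading.Thread(target=worker, args=(tiles, results, lock))
               for _ in range(nworkers)]
    for t in threads:
        t.start()
    for tile_id in range(32):                 # the "streaming" I/O side
        tiles.put(load_tile(tile_id))
    for _ in threads:
        tiles.put(None)
    for t in threads:
        t.join()
    print("total estimated area:", sum(results))
```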

  16. Parallel Logic Programming Architecture

    DTIC Science & Technology

    1990-04-01

    Section 3.1. 3.1. A STATIC ALLOCATION SCHEME (SAS) Methods that have been used for decomposing distributed problems in artificial intelligence ... multiple agents, knowledge organization and allocation, and cooperative parallel execution. These difficulties are common to distributed artificial ... for the following reasons. First, intelligent backtracking requires much more bookkeeping and is therefore more costly during consult-time and during

  17. Aerobic Biodegradation Characteristic of Different Water-Soluble Azo Dyes.

    PubMed

    Sheng, Shixiong; Liu, Bo; Hou, Xiangyu; Wu, Bing; Yao, Fang; Ding, Xinchun; Huang, Lin

    2017-12-26

    This study investigated the biodegradation performance and characteristics of Sudan I and Acid Orange 7 (AO7) to improve biological dye removal efficiency in wastewater and optimize the treatment process. The dyes, which have different water solubilities but similar molecular structures, were biologically treated under aerobic conditions in parallel continuous-flow mixed stirred reactors. Biophase analysis using microscopic examination suggested that the removal processes of the two azo dyes differ. Sudan I was removed through biosorption, since its insolubility means it readily assembles and adsorbs on the surface of the zoogloea, whereas AO7 was incompletely biodegraded and bioconverted: the AO7 molecule was decomposed into benzene-series compounds and inorganic ions, since it could reach the interior of the zoogloea, where low oxidation-reduction potential conditions and the corresponding anaerobic microorganisms prevail. The transformation of NH₃-N and SO₄²⁻, together with the presence of tryptophan-like components, confirms that AO7 can be decomposed into non-toxic products in an aerobic bioreactor. This study provides a theoretical basis for the use of biosorption or biodegradation mechanisms for the treatment of different azo dyes in wastewater.

  18. Genten: Software for Generalized Tensor Decompositions v. 1.0.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phipps, Eric T.; Kolda, Tamara G.; Dunlavy, Daniel

    Tensors, or multidimensional arrays, are a powerful mathematical means of describing multiway data. This software provides computational means for decomposing or approximating a given tensor in terms of smaller tensors of lower dimension, focusing on decomposition of large, sparse tensors. These techniques have applications in many scientific areas, including signal processing, linear algebra, computer vision, numerical analysis, data mining, graph analysis, neuroscience and more. The software is designed to take advantage of the parallelism present in emerging computer architectures such as multi-core CPUs, many-core accelerators such as the Intel Xeon Phi, and computation-oriented GPUs to enable efficient processing of large tensors.
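
    As a worked miniature of "decomposing a tensor in terms of smaller tensors," the sketch below fits a single rank-1 term to a small dense 3-way tensor with alternating least squares using NumPy einsum. It is illustrative only and is unrelated to Genten's sparse, parallel implementation.

```python
import numpy as np

def rank1_cp(T, iters=50):
    """Fit T ~ weight * a (x) b (x) c by alternating least squares (rank-1 CP)."""
    rng = np.random.default_rng(0)
    b, c = rng.random(T.shape[1]), rng.random(T.shape[2])
    for _ in range(iters):
        # Each factor update is the least-squares solution with the others fixed.
        a = np.einsum('ijk,j,k->i', T, b, c) / ((b @ b) * (c @ c))
        b = np.einsum('ijk,i,k->j', T, a, c) / ((a @ a) * (c @ c))
        c = np.einsum('ijk,i,j->k', T, a, b) / ((a @ a) * (b @ b))
    weight = np.linalg.norm(a) * np.linalg.norm(b) * np.linalg.norm(c)
    return weight, a / np.linalg.norm(a), b / np.linalg.norm(b), c / np.linalg.norm(c)

if __name__ == "__main__":
    # Build an exactly rank-1 tensor plus a little noise, then recover the factors.
    u, v, w = np.arange(1, 5.0), np.arange(1, 4.0), np.arange(1, 6.0)
    T = np.einsum('i,j,k->ijk', u, v, w) + 1e-3 * np.random.default_rng(1).random((4, 3, 5))
    weight, a, b, c = rank1_cp(T)
    approx = weight * np.einsum('i,j,k->ijk', a, b, c)
    print("relative error:", np.linalg.norm(T - approx) / np.linalg.norm(T))
```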

  19. Parallel Algorithms for Monte Carlo Particle Transport Simulation on Exascale Computing Architectures

    NASA Astrophysics Data System (ADS)

    Romano, Paul Kollath

    Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N) whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups only particle histories on a single processor into batches for tally purposes---in doing so it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. The model predictions were verified with measured data from simulations in OpenMC on a full-core benchmark problem. Finally, a novel algorithm for decomposing large tally data was proposed, analyzed, and implemented/tested in OpenMC. The algorithm relies on disjoint sets of compute processes and tally servers. The analysis showed that for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead. Tests were performed on Intrepid and Titan and demonstrated that the algorithm did indeed perform well over a wide range of parameters. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs - docs mit.edu)
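
    One of the ideas above, the disjoint compute/tally-server split, can be mocked up without MPI. The sketch below is a hypothetical serial analogue, not the OpenMC implementation: "compute ranks" batch their tally increments locally and ship them to a small set of "server ranks," so no global reduction is needed until the end of the run.

```python
from collections import defaultdict

NSERVERS = 2

def server_for(tally_bin):
    """Static mapping of tally bins to tally-server ranks."""
    return tally_bin % NSERVERS

def compute_rank(rank, nbins, histories_per_rank):
    """Accumulate tally increments locally, then return one message per server."""
    local = defaultdict(float)
    for h in range(histories_per_rank):
        tally_bin = (rank * 7919 + h) % nbins        # stand-in for a particle's scoring bin
        local[tally_bin] += 1.0                      # stand-in for the scored quantity
    messages = defaultdict(dict)
    for tally_bin, score in local.items():
        messages[server_for(tally_bin)][tally_bin] = score
    return messages

if __name__ == "__main__":
    nbins, ncompute = 16, 8
    servers = [defaultdict(float) for _ in range(NSERVERS)]
    for rank in range(ncompute):                     # would run concurrently under MPI
        for server, msg in compute_rank(rank, nbins, histories_per_rank=1000).items():
            for tally_bin, score in msg.items():
                servers[server][tally_bin] += score  # server-side accumulation
    total = sum(sum(s.values()) for s in servers)
    print("histories tallied:", int(total))
```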

  20. A design for an intelligent monitor and controller for space station electrical power using parallel distributed problem solving

    NASA Technical Reports Server (NTRS)

    Morris, Robert A.

    1990-01-01

    The emphasis is on defining a set of communicating processes for intelligent spacecraft secondary power distribution and control. The computer hardware and software implementation platform for this work is that of the ADEPTS project at the Johnson Space Center (JSC). The electrical power system design which was used as the basis for this research is that of Space Station Freedom, although the functionality of the processes defined here generalizes to any permanent manned space power control application. First, the Space Station Electrical Power Subsystem (EPS) hardware to be monitored is described, followed by a set of scenarios describing typical monitor and control activity. Then, the parallel distributed problem solving approach to knowledge engineering is introduced. There follows a two-step presentation of the intelligent software design for secondary power control. The first step decomposes the problem of monitoring and control into three primary functions. Each of the primary functions is described in detail. Suggestions for refinements and embellishments in design specifications are given.

  1. Aerobic Biodegradation Characteristic of Different Water-Soluble Azo Dyes

    PubMed Central

    Sheng, Shixiong; Liu, Bo; Hou, Xiangyu; Wu, Bing; Yao, Fang; Ding, Xinchun; Huang, Lin

    2017-01-01

    This study investigated the biodegradation performance and characteristics of Sudan I and Acid Orange 7 (AO7) to improve biological dye removal efficiency in wastewater and optimize the treatment process. The dyes, which have different water solubilities but similar molecular structures, were biologically treated under aerobic conditions in parallel continuous-flow mixed stirred reactors. Biophase analysis using microscopic examination suggested that the removal processes of the two azo dyes differ. Sudan I was removed through biosorption, since its insolubility means it readily assembles and adsorbs on the surface of the zoogloea, whereas AO7 was incompletely biodegraded and bioconverted: the AO7 molecule was decomposed into benzene-series compounds and inorganic ions, since it could reach the interior of the zoogloea, where low oxidation-reduction potential conditions and the corresponding anaerobic microorganisms prevail. The transformation of NH₃-N and SO₄²⁻, together with the presence of tryptophan-like components, confirms that AO7 can be decomposed into non-toxic products in an aerobic bioreactor. This study provides a theoretical basis for the use of biosorption or biodegradation mechanisms for the treatment of different azo dyes in wastewater. PMID:29278390

  2. Automatic partitioning of unstructured meshes for the parallel solution of problems in computational mechanics

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel; Lesoinne, Michel

    1993-01-01

    Most of the recently proposed computational methods for solving partial differential equations on multiprocessor architectures stem from the 'divide and conquer' paradigm and involve some form of domain decomposition. For those methods which also require grids of points or patches of elements, it is often necessary to explicitly partition the underlying mesh, especially when working with local memory parallel processors. In this paper, a family of cost-effective algorithms for the automatic partitioning of arbitrary two- and three-dimensional finite element and finite difference meshes is presented and discussed in view of a domain decomposed solution procedure and parallel processing. The influence of the algorithmic aspects of a solution method (implicit/explicit computations), and the architectural specifics of a multiprocessor (SIMD/MIMD, startup/transmission time), on the design of a mesh partitioning algorithm are discussed. The impact of the partitioning strategy on load balancing, operation count, operator conditioning, rate of convergence and processor mapping is also addressed. Finally, the proposed mesh decomposition algorithms are demonstrated with realistic examples of finite element, finite volume, and finite difference meshes associated with the parallel solution of solid and fluid mechanics problems on the iPSC/2 and iPSC/860 multiprocessors.

  3. A Domain-Decomposed Multilevel Method for Adaptively Refined Cartesian Grids with Embedded Boundaries

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.

    2000-01-01

    Preliminary verification and validation of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries is presented. The parallel, multilevel method makes use of a new on-the-fly parallel domain decomposition strategy based upon the use of space-filling curves, and automatically generates a sequence of coarse meshes for processing by the multigrid smoother. The coarse mesh generation algorithm produces grids which completely cover the computational domain at every level in the mesh hierarchy. A series of examples on realistically complex three-dimensional configurations demonstrate that this new coarsening algorithm reliably achieves mesh coarsening ratios in excess of 7 on adaptively refined meshes. Numerical investigations of the scheme's local truncation error demonstrate an achieved order of accuracy between 1.82 and 1.88. Convergence results for the multigrid scheme are presented for both subsonic and transonic test cases and demonstrate W-cycle multigrid convergence rates between 0.84 and 0.94. Preliminary parallel scalability tests on both simple wing and complex complete aircraft geometries show a computational speedup of 52 on 64 processors using the run-time mesh partitioner.
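
    The on-the-fly decomposition based on space-filling curves can be illustrated with Morton (Z-order) keys, one common choice of space-filling curve; the solver in the paper is not reproduced here. Cells are sorted by interleaved-bit keys and the sorted list is cut into equal contiguous ranges, one per process, which tends to keep each partition spatially compact.

```python
def morton_key(i, j, bits=10):
    """Interleave the bits of integer cell coordinates (i, j) into a Z-order key."""
    key = 0
    for b in range(bits):
        key |= ((i >> b) & 1) << (2 * b) | ((j >> b) & 1) << (2 * b + 1)
    return key

def decompose_by_curve(cells, nprocs):
    """Sort cells along the space-filling curve and cut the curve into nprocs pieces."""
    ordered = sorted(cells, key=lambda c: morton_key(*c))
    chunk = (len(ordered) + nprocs - 1) // nprocs
    return [ordered[k:k + chunk] for k in range(0, len(ordered), chunk)]

if __name__ == "__main__":
    cells = [(i, j) for i in range(32) for j in range(32)]   # a 32x32 Cartesian mesh
    parts = decompose_by_curve(cells, nprocs=8)
    print([len(p) for p in parts])          # balanced cell counts per process
    print(parts[0][:4])                     # first few spatially clustered cells of rank 0
```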

  4. Nuclide Depletion Capabilities in the Shift Monte Carlo Code

    DOE PAGES

    Davidson, Gregory G.; Pandya, Tara M.; Johnson, Seth R.; ...

    2017-12-21

    A new depletion capability has been developed in the Exnihilo radiation transport code suite. This capability enables massively parallel domain-decomposed coupling between the Shift continuous-energy Monte Carlo solver and the nuclide depletion solvers in ORIGEN to perform high-performance Monte Carlo depletion calculations. This paper describes this new depletion capability and discusses its various features, including a multi-level parallel decomposition, high-order transport-depletion coupling, and energy-integrated power renormalization. Several test problems are presented to validate the new capability against other Monte Carlo depletion codes, and the parallel performance of the new capability is analyzed.

  5. Decomposition of timed automata for solving scheduling problems

    NASA Astrophysics Data System (ADS)

    Nishi, Tatsushi; Wakatake, Masato

    2014-03-01

    A decomposition algorithm for scheduling problems based on a timed automata (TA) model is proposed. The problem is represented as an optimal state transition problem for the TA. The model comprises the parallel composition of submodels such as jobs and resources. The proposed methodology can be divided into two steps. The first step is to decompose the TA model into several submodels by using a decomposability condition. The second step is to combine the individual solutions of the subproblems for the decomposed submodels by the penalty function method. A feasible solution for the entire model is derived through iterated computation, solving the subproblem for each submodel. The proposed methodology is applied to solve flowshop and jobshop scheduling problems. Computational experiments demonstrate the effectiveness of the proposed algorithm compared with a conventional TA scheduling algorithm without decomposition.

  6. Optimization by nonhierarchical asynchronous decomposition

    NASA Technical Reports Server (NTRS)

    Shankar, Jayashree; Ribbens, Calvin J.; Haftka, Raphael T.; Watson, Layne T.

    1992-01-01

    Large scale optimization problems are tractable only if they are somehow decomposed. Hierarchical decompositions are inappropriate for some types of problems and do not parallelize well. Sobieszczanski-Sobieski has proposed a nonhierarchical decomposition strategy for nonlinear constrained optimization that is naturally parallel. Despite some successes on engineering problems, the algorithm as originally proposed fails on simple two dimensional quadratic programs. The algorithm is carefully analyzed for quadratic programs, and a number of modifications are suggested to improve its robustness.

  7. On-line range images registration with GPGPU

    NASA Astrophysics Data System (ADS)

    Będkowski, J.; Naruniec, J.

    2013-03-01

    This paper concerns the implementation of algorithms for two important aspects of modern 3D data processing: data registration and segmentation. The solution proposed for the first task is based on 3D space decomposition, while the second relies on image processing and local neighbourhood search. Data processing is implemented using NVIDIA compute unified device architecture (CUDA) parallel computation. The result of the segmentation is a coloured map in which different colours correspond to different objects, such as walls, floor and stairs. The research is related to the problem of collecting 3D data with an RGB-D camera mounted on a rotating head, to be used in mobile robot applications. The data registration algorithm is designed for on-line processing. The iterative closest point (ICP) approach is chosen as the registration method. Computations are based on a parallel fast nearest-neighbour search. This procedure decomposes 3D space into cubic buckets and, therefore, the matching time is deterministic. The first segmentation technique uses accelerometers integrated with the RGB-D sensor to obtain rotation compensation, together with an image processing method for defining prerequisites of the known categories. The second technique uses the adapted nearest-neighbour search procedure to obtain normal vectors for each range point.
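
    The deterministic bucket-based nearest-neighbour search mentioned for the ICP step can be sketched on the CPU (the paper's version runs under CUDA). The code below decomposes 3-D space into cubic buckets with a hash map and restricts each query to the 27 neighbouring buckets; the names and the bucket size are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

BUCKET = 0.05   # cubic bucket edge length (illustrative)

def bucket_of(p):
    return tuple((p // BUCKET).astype(int))

def build_buckets(points):
    """Decompose 3-D space into cubic buckets holding point indices."""
    buckets = defaultdict(list)
    for idx, p in enumerate(points):
        buckets[bucket_of(p)].append(idx)
    return buckets

def nearest(query, points, buckets):
    """Search only the 27 buckets around the query cell (deterministic cost).
    Returns -1 if those buckets are empty (a miss in this toy version)."""
    ci, cj, ck = bucket_of(query)
    best, best_d = -1, np.inf
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            for dk in (-1, 0, 1):
                for idx in buckets.get((ci + di, cj + dj, ck + dk), ()):
                    d = np.sum((points[idx] - query) ** 2)
                    if d < best_d:
                        best, best_d = idx, d
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    cloud = rng.random((20000, 3))
    buckets = build_buckets(cloud)
    q = np.array([0.5, 0.5, 0.5])
    i = nearest(q, cloud, buckets)
    print("nearest index", i, "distance", float(np.linalg.norm(cloud[i] - q)))
```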

  8. Multi-variants synthesis of Petri nets for FPGA devices

    NASA Astrophysics Data System (ADS)

    Bukowiec, Arkadiusz; Doligalski, Michał

    2015-09-01

    A new method is presented for the synthesis of application-specific logic controllers for FPGA devices. The control algorithm is specified with a control-interpreted Petri net (PT type), which allows parallel processes to be specified easily. The Petri net is decomposed into state-machine-type subnets, each of which represents one parallel process. For this purpose, Petri net coloring algorithms are applied. Two approaches to this decomposition are presented: with doublers of macroplaces, or with one global wait place. Next, the subnets are implemented in a two-level logic circuit of the controller. The levels of the logic circuit are obtained as a result of its architectural decomposition. The first-level combinational circuit is responsible for generating the next places, and the second-level decoder is responsible for generating the output symbols. Two variants of such circuits are worked out: with one shared operational memory, or with many flexible distributed memories serving as the decoder. Variants of Petri net decomposition and logic circuit structures can be combined without restriction, which leads to four variants of multi-variant synthesis.

  9. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images

    PubMed Central

    Afshar, Yaser; Sbalzarini, Ivo F.

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144
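
    The decomposition of a large image into sub-images owned by different machines can be imitated with a halo (ghost-border) split of a NumPy array. This is a single-process sketch under assumed tile counts and halo width, not the distributed Region Competition implementation.

```python
import numpy as np

def split_with_halo(image, nrows, ncols, halo=1):
    """Decompose a 2-D image into nrows x ncols sub-images, each padded with a halo
    of pixels from its neighbours so local operations can run without communication."""
    H, W = image.shape
    r_edges = np.linspace(0, H, nrows + 1, dtype=int)
    c_edges = np.linspace(0, W, ncols + 1, dtype=int)
    tiles = []
    for r in range(nrows):
        for c in range(ncols):
            r0, r1 = max(r_edges[r] - halo, 0), min(r_edges[r + 1] + halo, H)
            c0, c1 = max(c_edges[c] - halo, 0), min(c_edges[c + 1] + halo, W)
            owned = (r_edges[r], r_edges[r + 1], c_edges[c], c_edges[c + 1])
            tiles.append((owned, image[r0:r1, c0:c1].copy()))
    return tiles

if __name__ == "__main__":
    img = np.random.default_rng(3).random((1024, 2048))
    tiles = split_with_halo(img, nrows=2, ncols=4)
    # Each (owned-region, padded-tile) pair would be shipped to one worker.
    print(len(tiles), "tiles; first owned region:", tiles[0][0],
          "padded shape:", tiles[0][1].shape)
```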

  10. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    PubMed

    Afshar, Yaser; Sbalzarini, Ivo F

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.

  11. A conservative scheme of drift kinetic electrons for gyrokinetic simulation of kinetic-MHD processes in toroidal plasmas

    NASA Astrophysics Data System (ADS)

    Bao, J.; Liu, D.; Lin, Z.

    2017-10-01

    A conservative scheme of drift kinetic electrons for gyrokinetic simulations of kinetic-magnetohydrodynamic processes in toroidal plasmas has been formulated and verified. Both vector potential and electron perturbed distribution function are decomposed into adiabatic part with analytic solution and non-adiabatic part solved numerically. The adiabatic parallel electric field is solved directly from the electron adiabatic response, resulting in a high degree of accuracy. The consistency between electrostatic potential and parallel vector potential is enforced by using the electron continuity equation. Since particles are only used to calculate the non-adiabatic response, which is used to calculate the non-adiabatic vector potential through Ohm's law, the conservative scheme minimizes the electron particle noise and mitigates the cancellation problem. Linear dispersion relations of the kinetic Alfvén wave and the collisionless tearing mode in cylindrical geometry have been verified in gyrokinetic toroidal code simulations, which show that the perpendicular grid size can be larger than the electron collisionless skin depth when the mode wavelength is longer than the electron skin depth.

  12. mHealthMon: toward energy-efficient and distributed mobile health monitoring using parallel offloading.

    PubMed

    Ahnn, Jong Hoon; Potkonjak, Miodrag

    2013-10-01

    Although mobile health monitoring, in which mobile sensors continuously gather, process, and update sensor readings (e.g., vital signs) from a patient's sensors, is emerging, little effort has been invested in energy-efficient management of sensor information gathering and processing. Mobile health monitoring with a focus on energy consumption may instead be holistically analyzed and systematically designed as a global solution to a set of optimization subproblems. This paper presents an attempt to decompose the very complex mobile health monitoring system so that each layer in the system corresponds to a decomposed subproblem, with the interfaces between them quantified as functions of the optimization variables in order to orchestrate the subproblems. We propose a distributed and energy-saving mobile health platform, called mHealthMon, where mobile users publish/access sensor data via a cloud computing-based distributed P2P overlay network. The key objective is to satisfy the mobile health monitoring application's quality of service requirements by modeling each subsystem: mobile clients with medical sensors, the wireless network medium, and distributed cloud services. In simulations based on experimental data, we show that the proposed system can be up to 10.1 times more energy-efficient and 20.2 times faster than a standalone mobile health monitoring application, in various mobile health monitoring scenarios applying a realistic mobility model.

  13. Efficient abstract data type components for distributed and parallel systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bastani, F.; Hilal, W.; Iyengar, S.S.

    1987-10-01

    One way of improving a software system's comprehensibility and maintainability is to decompose it into several components, each of which encapsulates some information concerning the system. These components can be classified into four categories, namely, abstract data type, functional, interface, and control components. Such a classification underscores the need for different specification, implementation, and performance-improvement methods for different types of components. This article focuses on the development of high-performance abstract data type components for distributed and parallel environments.

  14. A Framework for Load Balancing of Tensor Contraction Expressions via Dynamic Task Partitioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Pai-Wei; Stock, Kevin; Rajbhandari, Samyam

    In this paper, we introduce the Dynamic Load-balanced Tensor Contractions (DLTC) framework, a domain-specific library for efficient task-parallel execution of tensor contraction expressions, a class of computation encountered in quantum chemistry and physics. Our framework decomposes each contraction into smaller units of work (tasks), represented by an abstraction referred to as iterators. We exploit an extra level of parallelism by having tasks across independent contractions executed concurrently through a dynamic load-balancing runtime. We demonstrate the improved performance, scalability, and flexibility for the computation of tensor contraction expressions on parallel computers using examples from coupled cluster methods.

  15. Reducing Design Cycle Time and Cost Through Process Resequencing

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    2004-01-01

    In today's competitive environment, companies are under enormous pressure to reduce the time and cost of their design cycle. One method for reducing both time and cost is to develop an understanding of the flow of the design processes and the effects of the iterative subcycles that are found in complex design projects. Once these aspects are understood, the design manager can make decisions that take advantage of decomposition, concurrent engineering, and parallel processing techniques to reduce the total time and the total cost of the design cycle. One software tool that can aid in this decision-making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). The DeMAID software minimizes the feedback couplings that create iterative subcycles, groups processes into iterative subcycles, and decomposes the subcycles into a hierarchical structure. The real benefits of producing the best design in the least time and at a minimum cost are obtained from sequencing the processes in the subcycles.

  16. Hardware design and implementation of fast DOA estimation method based on multicore DSP

    NASA Astrophysics Data System (ADS)

    Guo, Rui; Zhao, Yingxiao; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-10-01

    In this paper, we present a high-speed real-time signal processing hardware platform based on multicore digital signal processor (DSP). The real-time signal processing platform shows several excellent characteristics including high performance computing, low power consumption, large-capacity data storage and high speed data transmission, which make it able to meet the constraint of real-time direction of arrival (DOA) estimation. To reduce the high computational complexity of DOA estimation algorithm, a novel real-valued MUSIC estimator is used. The algorithm is decomposed into several independent steps and the time consumption of each step is counted. Based on the statistics of the time consumption, we present a new parallel processing strategy to distribute the task of DOA estimation to different cores of the real-time signal processing hardware platform. Experimental results demonstrate that the high processing capability of the signal processing platform meets the constraint of real-time direction of arrival (DOA) estimation.

  17. Quantitative analysis of RNA-protein interactions on a massively parallel array for mapping biophysical and evolutionary landscapes

    PubMed Central

    Buenrostro, Jason D.; Chircus, Lauren M.; Araya, Carlos L.; Layton, Curtis J.; Chang, Howard Y.; Snyder, Michael P.; Greenleaf, William J.

    2015-01-01

    RNA-protein interactions drive fundamental biological processes and are targets for molecular engineering, yet quantitative and comprehensive understanding of the sequence determinants of affinity remains limited. Here we repurpose a high-throughput sequencing instrument to quantitatively measure binding and dissociation of MS2 coat protein to >10^7 RNA targets generated on a flow-cell surface by in situ transcription and inter-molecular tethering of RNA to DNA. We decompose the binding energy contributions from primary and secondary RNA structure, finding that differences in affinity are often driven by sequence-specific changes in association rates. By analyzing the biophysical constraints and modeling mutational paths describing the molecular evolution of MS2 from low- to high-affinity hairpins, we quantify widespread molecular epistasis and a long-hypothesized structure-dependent preference for G:U base pairs over C:A intermediates in evolutionary trajectories. Our results suggest that quantitative analysis of RNA on a massively parallel array (RNAMaP) can reveal such relationships across molecular variants. PMID:24727714

  18. Scattered acoustic field above a grating of parallel rectangular cavities

    NASA Astrophysics Data System (ADS)

    Khanfir, A.; Faiz, A.; Ducourneau, J.; Chatillon, J.; Skali Lami, S.

    2013-02-01

    The aim of this research project was to predict the sound pressure above a wall facing composed of N parallel rectangular cavities. The diffracted acoustic field is processed by generalizing the Kobayashi Potential (KP) method used for determining the electromagnetic field diffracted by a rectangular cavity set in a thick screen. This model enables the diffracted field to be expressed in modal form. Modal amplitudes are subsequently calculated using matrix equations obtained by enforcing boundary conditions. Solving these equations allows the determination of the total reflected acoustic field above the wall facing. This model was compared with experimental results obtained in a semi-anechoic room for a single cavity, a periodic array of three rectangular cavities and an aperiodic grating of nine rectangular cavities of different size and spacing. These facings were insonified by an incident spherical acoustic field, which was decomposed into plane waves. The validity of this model is supported by the agreement between the numerical and experimental results observed.

  19. Parallel, confocal, and complete spectrum imager for fluorescent detection of high-density microarray

    NASA Astrophysics Data System (ADS)

    Bogdanov, Valery L.; Boyce-Jacino, Michael

    1999-05-01

    Confined arrays of biochemical probes deposited on a solid support surface (analytical microarrays or 'chips') provide an opportunity to analyze multiple reactions simultaneously. Microarrays are increasingly used in genetics, medicine and environment scanning as research and analytical instruments. The power of microarray technology comes from its parallelism, which grows with array miniaturization, minimization of reagent volume per reaction site and reaction multiplexing. An optical detector of microarray signals should combine high sensitivity with spatial and spectral resolution. Additionally, low cost and a high processing rate are needed to transfer microarray technology into biomedical practice. We designed an imager that provides confocal and complete-spectrum detection of an entire fluorescently labeled microarray in parallel. The imager uses a microlens array, a non-slit spectral decomposer, and a highly sensitive detector (cooled CCD). Two imaging channels provide simultaneous detection of localization, integrated intensity and spectral intensity for each reaction site in the microarray. Dimensional matching between the microarray and the imager's optics eliminates all moving parts in the instrumentation, enabling highly informative, fast and low-cost microarray detection. We report the theory of confocal hyperspectral imaging with a microlens array and experimental data for the implementation of the developed imager to detect a fluorescently labeled microarray with a density of approximately 10^3 sites per cm^2.

  20. Progressive Vector Quantization on a massively parallel SIMD machine with application to multispectral image data

    NASA Technical Reports Server (NTRS)

    Manohar, Mareboyana; Tilton, James C.

    1994-01-01

    A progressive vector quantization (VQ) compression approach is discussed which decomposes image data into a number of levels using full search VQ. The final level is losslessly compressed, enabling lossless reconstruction. The computational difficulties are addressed by implementation on a massively parallel SIMD machine. We demonstrate progressive VQ on multispectral imagery obtained from the Advanced Very High Resolution Radiometer instrument and other Earth observation image data, and investigate the trade-offs in selecting the number of decomposition levels and codebook training method.
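
    A toy version of the progressive decomposition, assuming simple scalar codebooks learned by a few Lloyd iterations rather than full-search VQ on image vectors: each level quantizes the residual left by the previous level, and keeping the final residual losslessly gives exact reconstruction, as in the paper's final lossless level.

```python
import numpy as np

def lloyd_codebook(x, k, iters=10, seed=0):
    """Train a k-entry scalar codebook with a few Lloyd (k-means) iterations."""
    rng = np.random.default_rng(seed)
    codebook = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        idx = np.argmin(np.abs(x[:, None] - codebook[None, :]), axis=1)
        for c in range(k):
            if np.any(idx == c):
                codebook[c] = x[idx == c].mean()
    return codebook

def progressive_vq(x, levels=2, k=8):
    """Quantize x level by level; each level encodes the residual of the previous one."""
    encoded, residual = [], x.astype(float)
    for level in range(levels):
        codebook = lloyd_codebook(residual, k, seed=level)
        idx = np.argmin(np.abs(residual[:, None] - codebook[None, :]), axis=1)
        encoded.append((codebook, idx))
        residual = residual - codebook[idx]
    return encoded, residual          # final residual kept losslessly

if __name__ == "__main__":
    x = np.random.default_rng(4).normal(size=4096)
    encoded, final_residual = progressive_vq(x)
    # Progressive reconstruction: adding levels reduces error; adding the
    # lossless residual restores the data exactly.
    recon = np.zeros_like(x)
    for codebook, idx in encoded:
        recon += codebook[idx]
        print("RMS error so far:", float(np.sqrt(np.mean((x - recon) ** 2))))
    print("lossless:", np.allclose(x, recon + final_residual))
```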

  1. Transport in the plateau regime in a tokamak pedestal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seol, J.; Shaing, K. C.

    In a tokamak H-mode, a strong E × B flow shear is generated during the L-H transition. Turbulence in a pedestal is suppressed significantly by this E × B flow shear. In this case, neoclassical transport may become important. The neoclassical fluxes are calculated in the plateau regime with the parallel plasma flow using their kinetic definitions. In an axisymmetric tokamak, the neoclassical particle fluxes can be decomposed into the banana-plateau flux and the Pfirsch-Schlueter flux. The banana-plateau particle flux is driven by the parallel viscous force and the Pfirsch-Schlueter flux by the poloidal variation of the friction force. The combined quantity of the radial electric field and the parallel flow is determined by the flux surface averaged parallel momentum balance equation rather than by requiring the ambipolarity of the total particle fluxes. In this process, the Pfirsch-Schlueter flux does not appear in the flux surface averaged parallel momentum equation. Only the banana-plateau flux is used to determine the parallel flow in the form of the flux surface averaged parallel viscosity. The heat flux, obtained using the solution of the parallel momentum balance equation, decreases exponentially in the presence of sonic M_p without any enhancement over that in the standard neoclassical theory. Here, M_p is a combination of the poloidal E × B flow and the parallel mass flow. The neoclassical bootstrap current in the plateau regime is presented. It indicates that the neoclassical bootstrap current also is related only to the banana-plateau fluxes. Finally, transport fluxes are calculated when M_p is large enough to make the parallel electron viscosity comparable with the parallel ion viscosity. It is found that the bootstrap current has a finite value regardless of the magnitude of M_p.

  2. Possible Effects of Synaptic Imbalances on Oligodendrocyte–Axonic Interactions in Schizophrenia: A Hypothetical Model

    PubMed Central

    Mitterauer, Bernhard J.; Kofler-Westergren, Birgitta

    2011-01-01

    A model of glial–neuronal interactions is proposed that could be explanatory for the demyelination identified in brains with schizophrenia. It is based on two hypotheses: (1) that glia–neuron systems are functionally viable and important for normal brain function, and (2) that disruption of this postulated function disturbs the glial categorization function, as shown by formal analysis. According to this model, in schizophrenia receptors on astrocytes in glial–neuronal synaptic units are not functional, losing their modulatory influence on synaptic neurotransmission. Hence, an unconstrained neurotransmission flux occurs that hyperactivates the axon and floods the cognate receptors of neurotransmitters on oligodendrocytes. The excess of neurotransmitters may have a toxic effect on oligodendrocytes and myelin, causing demyelination. In parallel, an increasing impairment of axons may disconnect neuronal networks. It is formally shown how oligodendrocytes normally categorize axonic information processing via their processes. Demyelination decomposes the oligodendrocyte–axonic system, making it incapable of generating categories of information. This incoherence may be responsible for symptoms of disorganization in schizophrenia, such as thought disorder, inappropriate affect and incommunicable motor behavior. In parallel, the loss of oligodendrocytes affects gap junctions in the panglial syncytium, presumably responsible for memory impairment in schizophrenia. PMID:21647404

  3. The Data Transfer Kit: A geometric rendezvous-based tool for multiphysics data transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slattery, S. R.; Wilson, P. P. H.; Pawlowski, R. P.

    2013-07-01

    The Data Transfer Kit (DTK) is a software library designed to provide parallel data transfer services for arbitrary physics components based on the concept of geometric rendezvous. The rendezvous algorithm provides a means to geometrically correlate two geometric domains that may be arbitrarily decomposed in a parallel simulation. By repartitioning both domains such that they have the same geometric domain on each parallel process, efficient and load balanced search operations and data transfer can be performed at a desirable algorithmic time complexity with low communication overhead relative to other types of mapping algorithms. With the increased development efforts in multiphysics simulation and other multiple mesh and geometry problems, generating parallel topology maps for transferring fields and other data between geometric domains is a common operation. The algorithms used to generate parallel topology maps based on the concept of geometric rendezvous as implemented in DTK are described with an example using a conjugate heat transfer calculation and thermal coupling with a neutronics code. In addition, we provide the results of initial scaling studies performed on the Jaguar Cray XK6 system at Oak Ridge National Laboratory for a worst-case-scenario problem in terms of algorithmic complexity that show good scaling on O(1 × 10^4) cores for topology map generation and excellent scaling on O(1 × 10^5) cores for the data transfer operation with meshes of O(1 × 10^9) elements. (authors)

  4. A Parallel Processing Algorithm for Remote Sensing Classification

    NASA Technical Reports Server (NTRS)

    Gualtieri, J. Anthony

    2005-01-01

    A current thread in parallel computation is the use of cluster computers created by networking a few to thousands of commodity general-purpose workstation-level computers using the Linux operating system. For example, on the Medusa cluster at NASA/GSFC, this provides supercomputing performance, 130 Gflops (Linpack benchmark), at moderate cost, $370K. However, to be useful for scientific computing in the area of Earth science, issues of ease of programming, access to existing scientific libraries, and portability of existing code need to be considered. In this paper, I address these issues in the context of tools for rendering earth science remote sensing data into useful products. In particular, I focus on a problem that can be decomposed into a set of independent tasks, which on a serial computer would be performed sequentially, but which with a cluster computer can be performed in parallel, giving an obvious speedup. To make the ideas concrete, I consider the problem of classifying hyperspectral imagery where some ground truth is available to train the classifier. In particular, I will use the Support Vector Machine (SVM) approach as applied to hyperspectral imagery. The approach will be to introduce notions about parallel computation and then to restrict the development to the SVM problem. Pseudocode (an outline of the computation) will be described and then details specific to the implementation will be given. Then timing results will be reported to show what speedups are possible using parallel computation. The paper will close with a discussion of the results.

  5. An approach to enhance pnetCDF performance in ...

    EPA Pesticide Factsheets

    Data intensive simulations are often limited by their I/O (input/output) performance, and "novel" techniques need to be developed in order to overcome this limitation. The software package pnetCDF (parallel network Common Data Form), which works with parallel file systems, was developed to address this issue by providing parallel I/O capability. This study examines the performance of an application-level data aggregation approach which performs data aggregation along either row or column dimension of MPI (Message Passing Interface) processes on a spatially decomposed domain, and then applies the pnetCDF parallel I/O paradigm. The test was done with three different domain sizes which represent small, moderately large, and large data domains, using a small-scale Community Multiscale Air Quality model (CMAQ) mock-up code. The examination includes comparing I/O performance with traditional serial I/O technique, straight application of pnetCDF, and the data aggregation along row and column dimension before applying pnetCDF. After the comparison, "optimal" I/O configurations of this application-level data aggregation approach were quantified. Data aggregation along the row dimension (pnetCDFcr) works better than along the column dimension (pnetCDFcc) although it may perform slightly worse than the straight pnetCDF method with a small number of processors. When the number of processors becomes larger, pnetCDFcr outperforms pnetCDF significantly. If the number of proces
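
    The row-wise aggregation described above can be sketched with mpi4py: each process row gathers its subdomains onto the rank in its first column, so that only one rank per row issues the subsequent parallel write. The process-grid width, the array sizes, and the comment standing in for the actual pnetCDF call are assumptions for illustration only.

```python
# Hedged sketch of row-wise data aggregation before a parallel write (pnetCDFcr-style).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

ncols = 4                                     # assumed process-grid width (illustrative)
row, col = divmod(rank, ncols)                # logical position in the 2D domain decomposition

row_comm = comm.Split(color=row, key=col)     # one communicator per process row
local = np.full((10, 10), float(rank))        # this rank's piece of the decomposed domain

gathered = None
if row_comm.Get_rank() == 0:
    gathered = np.empty((row_comm.Get_size(), 10, 10))
row_comm.Gather(local, gathered, root=0)      # aggregate the whole row onto its first column

if row_comm.Get_rank() == 0:
    pass  # only the row roots would now issue the (fewer, larger) pnetCDF write calls
```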

  6. Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmalz, Mark S

    2011-07-24

    Statement of Problem - Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation {und G}more » for a high-performance architecture. Key computational and data movement kernels of the application were analyzed/optimized for parallel execution using the mapping G {yields} {und G}, which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in solution of problems related to efficient parallel computation of particle and fluid dynamics simulations. These problems occur throughout DOE, military and commercial sectors: the potential payoff is high. We plan to license or sell the solution to contractors for military and domestic applications such as disaster simulation (aerodynamic and hydrodynamic), Government agencies (hydrological and environmental simulations), and medical applications (e.g., in tomographic image reconstruction). Keywords - High-performance Computing, Graphic Processing Unit, Fluid/Particle Simulation. Summary for Members of Congress - Department of Energy has many simulation codes that must compute faster, to be effective. The Phase I research parallelized particle/fluid simulations for rocket combustion, for high-performance computing systems.« less

  7. Massively Parallel Dantzig-Wolfe Decomposition Applied to Traffic Flow Scheduling

    NASA Technical Reports Server (NTRS)

    Rios, Joseph Lucio; Ross, Kevin

    2009-01-01

    Optimal scheduling of air traffic over the entire National Airspace System is a computationally difficult task. To speed computation, Dantzig-Wolfe decomposition is applied to a known linear integer programming approach for assigning delays to flights. The optimization model is proven to have the block-angular structure necessary for Dantzig-Wolfe decomposition. The subproblems for this decomposition are solved in parallel via independent computation threads. Experimental evidence suggests that as the number of subproblems/threads increases (and their respective sizes decrease), the solution quality, convergence, and runtime improve. A demonstration of this is provided by using one flight per subproblem, which is the finest possible decomposition. This results in thousands of subproblems and associated computation threads. This massively parallel approach is compared to one with few threads and to standard (non-decomposed) approaches in terms of solution quality and runtime. Since this method generally provides a non-integral (relaxed) solution to the original optimization problem, two heuristics are developed to generate an integral solution. Dantzig-Wolfe followed by these heuristics can provide a near-optimal (sometimes optimal) solution to the original problem hundreds of times faster than standard (non-decomposed) approaches. In addition, when massive decomposition is employed, the solution is shown to be more likely integral, which obviates the need for an integerization step. These results indicate that nationwide, real-time, high fidelity, optimal traffic flow scheduling is achievable for (at least) 3 hour planning horizons.
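
    With one flight per subproblem, the pricing step of the decomposition reduces to many small, independent optimizations that can be farmed out to parallel workers. The sketch below is a toy rendering of that idea, not the paper's formulation: the delay costs, dual prices, and the trivial per-flight subproblem are illustrative.

```python
# Hedged sketch: one pricing subproblem per flight, solved in parallel (Dantzig-Wolfe style).
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def price_flight(args):
    flight_id, delay_costs, duals = args
    # one subproblem per flight: pick the delay option with the most negative reduced cost
    reduced = delay_costs - duals
    best = int(np.argmin(reduced))
    return flight_id, best, float(reduced[best])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_flights, n_delays = 1000, 15
    duals = rng.random(n_delays)                    # dual prices from the restricted master LP
    jobs = [(f, rng.random(n_delays), duals) for f in range(n_flights)]

    with ProcessPoolExecutor() as pool:             # each worker handles a batch of subproblems
        proposals = list(pool.map(price_flight, jobs))
    columns = [p for p in proposals if p[2] < 0]    # only improving columns enter the master
```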

  8. Automated Vectorization of Decision-Based Algorithms

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    Virtually all existing vectorization algorithms are designed to analyze only the numeric properties of an algorithm and distribute those elements across multiple processors. The software reported here advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements, analyzes them for their decision properties, and converts them to a form that allows them to be executed automatically in parallel. The software takes a high-level source program that describes a complex decision-based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and they have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so that it naturally decomposes across parallel architectures.
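
    A toy illustration of the rewriting idea, not the NASA software itself: a nested decision is flattened into a disjunction of independent Boolean clauses, each of which can be evaluated concurrently.

```python
# Hedged sketch: flatten nested decision logic into independent, concurrently evaluable clauses.
from concurrent.futures import ThreadPoolExecutor

def original(x, y, z):
    # nested decision logic as it might appear in the source program
    if x > 0:
        if y > 0 or z < 0:
            return True
    return False

# the same condition rewritten as a disjunction of independent conjunctive clauses
clauses = [
    lambda x, y, z: x > 0 and y > 0,
    lambda x, y, z: x > 0 and z < 0,
]

def rewritten(x, y, z):
    with ThreadPoolExecutor() as pool:
        # each clause is self-contained, so the clauses can be evaluated concurrently
        results = pool.map(lambda clause: clause(x, y, z), clauses)
    return any(results)

assert original(1, -1, -2) == rewritten(1, -1, -2) == True
assert original(-1, 2, -2) == rewritten(-1, 2, -2) == False
```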

  9. Kill the Song--Steal the Show: What Does Distinguish Predicative Metaphors from Decomposable Idioms?

    ERIC Educational Resources Information Center

    Caillies, Stephanie; Declercq, Christelle

    2011-01-01

    This study examined the semantic processing difference between decomposable idioms and novel predicative metaphors. It was hypothesized that idiom comprehension results from the retrieval of a figurative meaning stored in memory, that metaphor comprehension requires a sense creation process and that this process difference affects the processing…

  10. Speed-Accuracy Trade-Off in Skilled Typewriting: Decomposing the Contributions of Hierarchical Control Loops

    ERIC Educational Resources Information Center

    Yamaguchi, Motonori; Crump, Matthew J. C.; Logan, Gordon D.

    2013-01-01

    Typing performance involves hierarchically structured control systems: At the higher level, an outer loop generates a word or a series of words to be typed; at the lower level, an inner loop activates the keystrokes comprising the word in parallel and executes them in the correct order. The present experiments examined contributions of the outer-…

  11. Speed-accuracy trade-off in skilled typewriting: decomposing the contributions of hierarchical control loops.

    PubMed

    Yamaguchi, Motonori; Crump, Matthew J C; Logan, Gordon D

    2013-06-01

    Typing performance involves hierarchically structured control systems: At the higher level, an outer loop generates a word or a series of words to be typed; at the lower level, an inner loop activates the keystrokes comprising the word in parallel and executes them in the correct order. The present experiments examined contributions of the outer- and inner-loop processes to the control of speed and accuracy in typewriting. Experiments 1 and 2 involved discontinuous typing of single words, and Experiments 3 and 4 involved continuous typing of paragraphs. Across experiments, typists were able to trade speed for accuracy but were unable to type at rates faster than 100 ms/keystroke, implying limits to the flexibility of the underlying processes. The analyses of the component latencies and errors indicated that the majority of the trade-offs were due to inner-loop processing. The contribution of outer-loop processing to the trade-offs was small, but it resulted in large costs in error rate. Implications for strategic control of automatic processes are discussed. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  12. Porting Gravitational Wave Signal Extraction to Parallel Virtual Machine (PVM)

    NASA Technical Reports Server (NTRS)

    Thirumalainambi, Rajkumar; Thompson, David E.; Redmon, Jeffery

    2009-01-01

    Laser Interferometer Space Antenna (LISA) is a planned NASA-ESA mission to be launched around 2012. Gravitational wave detection is fundamentally the determination of frequency, source parameters, and waveform amplitude derived in a specific order from the interferometric time-series of the rotating LISA spacecraft. The LISA Science Team has developed a Mock LISA Data Challenge intended to promote the testing of complicated nested search algorithms to detect the 1-100 millihertz frequency signals at amplitudes of 10^-21. However, it has become clear that sequential search of the parameters is very time-consuming and ultra-sensitive; hence, a new strategy has been developed. Parallelization of existing sequential search algorithms for gravitational wave signal identification consists of decomposing sequential search loops, beginning with outermost loops and working inward. In this process, the main challenge is to detect interdependencies among loops and to partition the loops so as to preserve concurrency. Existing parallel programs are based upon either shared memory or distributed memory paradigms. In PVM, master and node programs are used to execute parallelization and process spawning. PVM can handle process management and process addressing schemes using a virtual machine configuration. The task scheduling and the messaging and signaling can be implemented efficiently for the LISA gravitational wave search process using a master and 6 nodes. This approach is accomplished using a server that is available at NASA Ames Research Center and has been dedicated to the LISA Data Challenge Competition. Historically, extraction of gravitational wave and source identification parameters has taken around 7 days on this dedicated single-thread Linux-based server. Using the PVM approach, the parameter extraction problem can be reduced to within a day. The low-frequency computation and a proxy signal-to-noise ratio are calculated in separate nodes that are controlled by the master using message passing of data vectors. The message passing among nodes follows a pattern of synchronous and asynchronous send-and-receive protocols. The communication model and the message buffers are allocated dynamically to address rapid search of gravitational wave source information in the Mock LISA data sets.
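
    The master-and-nodes pattern described above can be sketched with mpi4py standing in for PVM: the master scatters blocks of the search space, each node computes a proxy signal-to-noise ratio for its block, and the results are sent back. The sketch assumes at least two ranks (e.g., mpirun -np 7 to mirror the master-plus-six-nodes setup); the search kernel is a placeholder.

```python
# Hedged sketch of the master/worker message-passing pattern (mpi4py in place of PVM).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    # master: hand each node a block of the search parameter space, then collect results
    blocks = np.array_split(np.arange(6000), size - 1)
    for worker, block in enumerate(blocks, start=1):
        comm.send(block, dest=worker, tag=1)
    results = [comm.recv(source=w, tag=2) for w in range(1, size)]
else:
    # node: evaluate a proxy signal-to-noise ratio for its block and report back to the master
    block = comm.recv(source=0, tag=1)
    snr = float(np.sum(np.cos(1e-3 * block)))     # placeholder for the real search kernel
    comm.send((rank, snr), dest=0, tag=2)
```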

  13. I/O Parallelization for the Goddard Earth Observing System Data Assimilation System (GEOS DAS)

    NASA Technical Reports Server (NTRS)

    Lucchesi, Rob; Sawyer, W.; Takacs, L. L.; Lyster, P.; Zero, J.

    1998-01-01

    The National Aeronautics and Space Administration (NASA) Data Assimilation Office (DAO) at the Goddard Space Flight Center (GSFC) has developed the GEOS DAS, a data assimilation system that provides production support for NASA missions and will support NASA's Earth Observing System (EOS) in the coming years. The GEOS DAS will be used to provide background fields of meteorological quantities to EOS satellite instrument teams for use in their data algorithms as well as providing assimilated data sets for climate studies on decadal time scales. The DAO has been involved in prototyping parallel implementations of the GEOS DAS for a number of years and is now embarking on an effort to convert the production version from shared-memory parallelism to distributed-memory parallelism using the portable Message-Passing Interface (MPI). The GEOS DAS consists of two main components, an atmospheric General Circulation Model (GCM) and a Physical-space Statistical Analysis System (PSAS). The GCM operates on data that are stored on a regular grid while PSAS works with observational data that are scattered irregularly throughout the atmosphere. As a result, the two components have different data decompositions. The GCM is decomposed horizontally as a checkerboard with all vertical levels of each box existing on the same processing element(PE). The dynamical core of the GCM can also operate on a rotated grid, which requires communication-intensive grid transformations during GCM integration. PSAS groups observations on PEs in a more irregular and dynamic fashion.

  14. Efficient co-conversion process of chicken manure into protein feed and organic fertilizer by Hermetia illucens L. (Diptera: Stratiomyidae) larvae and functional bacteria.

    PubMed

    Xiao, Xiaopeng; Mazza, Lorenzo; Yu, Yongqiang; Cai, Minmin; Zheng, Longyu; Tomberlin, Jeffery K; Yu, Jeffrey; van Huis, Arnold; Yu, Ziniu; Fasulo, Salvatore; Zhang, Jibin

    2018-07-01

    A chicken manure management process was carried out through co-conversion of Hermetia illucens L. larvae (BSFL) with functional bacteria for producing larvae as feedstuff and organic fertilizer. Thirteen days of co-conversion of 1000 kg of chicken manure inoculated with one million 6-day-old BSFL and 10^9 CFU Bacillus subtilis BSF-CL produced aging larvae, followed by eleven days of aerobic fermentation inoculated with the decomposing agent to maturity. 93.2 kg of fresh larvae were harvested from the B. subtilis BSF-CL-inoculated group, while the control group only harvested 80.4 kg of fresh larvae. Chicken manure reduction rate of the B. subtilis BSF-CL-inoculated group was 40.5%, while chicken manure reduction rate of the control group was 35.8%. The weight of BSFL increased by 15.9%, BSFL conversion rate increased by 12.7%, and chicken manure reduction rate increased by 13.4% compared to the control (no B. subtilis BSF-CL). The residue inoculated with decomposing agent had higher maturity (germination index >92%), compared with the no decomposing agent group (germination index ∼86%). The activity patterns of different enzymes further indicated that its product was more mature and stable than that of the no decomposing agent group. Physical and chemical production parameters showed that the residue inoculated with the decomposing agent was more suitable for organic fertilizer than the no decomposing agent group. Both the co-conversion of chicken manure by BSFL with its synergistic bacteria and the aerobic fermentation with the decomposing agent required only 24 days. The results demonstrate that the co-conversion process could shorten the processing time of chicken manure compared to the traditional composting process. Gut bacteria could enhance manure conversion and manure reduction. We established an efficient manure co-conversion process using black soldier fly larvae and bacteria and harvested high value-added larval mass and biofertilizer. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Do Nonnative Language Speakers "Chew the Fat" and "Spill the Beans" with Different Brain Hemispheres? Investigating Idiom Decomposability with the Divided Visual Field Paradigm

    ERIC Educational Resources Information Center

    Cieslicka, Anna B.

    2013-01-01

    The purpose of this study was to explore possible cerebral asymmetries in the processing of decomposable and nondecomposable idioms by fluent nonnative speakers of English. In the study, native language (Polish) and foreign language (English) decomposable and nondecomposable idioms were embedded in ambiguous (neutral) and unambiguous (biasing…

  16. Parallel architecture for rapid image generation and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nerheim, R.J.

    1987-01-01

    A multiprocessor architecture inspired by the Disney multiplane camera is proposed. For many applications, this approach produces a natural mapping of processors to objects in a scene. Such a mapping promotes parallelism and reduces the hidden-surface work with minimal interprocessor communication and low-overhead cost. Existing graphics architectures store the final picture as a monolithic entity. The architecture here stores each object's image separately. It assembles the final composite picture from component images only when the video display needs to be refreshed. This organization simplifies the work required to animate moving objects that occlude other objects. In addition, the architecture has multiple processors that generate the component images in parallel. This further shortens the time needed to create a composite picture. In addition to generating images for animation, the architecture has the ability to decompose images.

  17. Parallel Anisotropic Tetrahedral Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.

  18. Using parallel banded linear system solvers in generalized eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Zhang, Hong; Moss, William F.

    1993-01-01

    Subspace iteration is a reliable and cost effective method for solving positive definite banded symmetric generalized eigenproblems, especially in the case of large scale problems. This paper discusses an algorithm that makes use of two parallel banded solvers in subspace iteration. A shift is introduced to decompose the banded linear systems into relatively independent subsystems and to accelerate the iterations. With this shift, an eigenproblem is mapped efficiently into the memories of a multiprocessor and a high speed-up is obtained for parallel implementations. An optimal shift is a shift that balances total computation and communication costs. Under certain conditions, we show how to estimate an optimal shift analytically using the decay rate for the inverse of a banded matrix, and how to improve this estimate. Computational results on iPSC/2 and iPSC/860 multiprocessors are presented.
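
    One standard way to write the shifted subspace iteration step the abstract refers to, in notation of ours rather than the paper's: A and B are the banded stiffness and mass matrices, sigma the shift, X_k the current subspace basis, and the diagonal of Theta collects the Ritz values, which approximate the eigenvalues nearest the shift. The banded solves in the first line are the part distributed across the multiprocessor.

```latex
\begin{aligned}
(A - \sigma B)\,\tilde{X}_{k+1} &= B X_k
  && \text{shifted banded solves (the distributed step)} \\
\hat{A} = \tilde{X}_{k+1}^{\mathsf{T}} A \tilde{X}_{k+1}, \qquad
\hat{B} &= \tilde{X}_{k+1}^{\mathsf{T}} B \tilde{X}_{k+1}
  && \text{projection onto the current subspace} \\
\hat{A} Q &= \hat{B} Q \Theta, \qquad X_{k+1} = \tilde{X}_{k+1} Q
  && \text{Rayleigh--Ritz step, } \operatorname{diag}(\Theta) \approx \text{eigenvalues near } \sigma
\end{aligned}
```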

  19. Parallelizing quantum circuit synthesis

    NASA Astrophysics Data System (ADS)

    Di Matteo, Olivia; Mosca, Michele

    2016-03-01

    Quantum circuit synthesis is the process in which an arbitrary unitary operation is decomposed into a sequence of gates from a universal set, typically one which a quantum computer can implement both efficiently and fault-tolerantly. As physical implementations of quantum computers improve, the need is growing for tools that can effectively synthesize components of the circuits and algorithms they will run. Existing algorithms for exact, multi-qubit circuit synthesis scale exponentially in the number of qubits and circuit depth, leaving synthesis intractable for circuits on more than a handful of qubits. Even modest improvements in circuit synthesis procedures may lead to significant advances, pushing forward the boundaries of not only the size of solvable circuit synthesis problems, but also in what can be realized physically as a result of having more efficient circuits. We present a method for quantum circuit synthesis using deterministic walks. Also termed pseudorandom walks, these are walks in which once a starting point is chosen, its path is completely determined. We apply our method to construct a parallel framework for circuit synthesis, and implement one such version performing optimal T-count synthesis over the Clifford+T gate set. We use our software to present examples where parallelization offers a significant speedup on the runtime, as well as directly confirm that the 4-qubit 1-bit full adder has optimal T-count 7 and T-depth 3.

  20. Reduced Toxicity Fuel Satellite Propulsion System

    NASA Technical Reports Server (NTRS)

    Schneider, Steven J. (Inventor)

    2001-01-01

    A reduced toxicity fuel satellite propulsion system including a reduced toxicity propellant supply for consumption in an axial class thruster and an ACS class thruster. The system includes suitable valves and conduits for supplying the reduced toxicity propellant to the ACS decomposing element of an ACS thruster. The ACS decomposing element is operative to decompose the reduced toxicity propellant into hot propulsive gases. In addition the system includes suitable valves and conduits for supplying the reduced toxicity propellant to an axial decomposing element of the axial thruster. The axial decomposing element is operative to decompose the reduced toxicity propellant into hot gases. The system further includes suitable valves and conduits for supplying a second propellant to a combustion chamber of the axial thruster, whereby the hot gases and the second propellant auto-ignite and begin the combustion process for producing thrust.

  1. Reduced Toxicity Fuel Satellite Propulsion System Including Plasmatron

    NASA Technical Reports Server (NTRS)

    Schneider, Steven J. (Inventor)

    2003-01-01

    A reduced toxicity fuel satellite propulsion system including a reduced toxicity propellant supply for consumption in an axial class thruster and an ACS class thruster. The system includes suitable valves and conduits for supplying the reduced toxicity propellant to the ACS decomposing element of an ACS thruster. The ACS decomposing element is operative to decompose the reduced toxicity propellant into hot propulsive gases. In addition the system includes suitable valves and conduits for supplying the reduced toxicity propellant to an axial decomposing element of the axial thruster. The axial decomposing element is operative to decompose the reduced toxicity propellant into hot gases. The system further includes suitable valves and conduits for supplying a second propellant to a combustion chamber of the axial thruster, whereby the hot gases and the second propellant auto-ignite and begin the combustion process for producing thrust.

  2. Alcoa Pressure Calcination Process for Alumina

    NASA Astrophysics Data System (ADS)

    Sucech, S. W.; Misra, C.

    A new alumina calcination process developed at Alcoa Laboratories is described. Alumina is calcined in two stages. In the first stage, alumina hydrate is heated indirectly to 500°C in a decomposer vessel. Released water is recovered as process steam at 110 psig pressure. Partial transformation of gibbsite to boehmite occurs under hydrothermal conditions of the decomposer. The product from the decomposer containing about 5% LOI is then calcined by direct heating to 850°C to obtain smelting grade alumina. The final product is highly attrition resistant, has a surface area of 50-80 m²/g and a LOI of less than 1%. Accounting for the recovered steam, the effective fuel consumption for the new calcination process is only 1.6 GJ/t Al2O3.

  3. Parallel ICA and its hardware implementation in hyperspectral image analysis

    NASA Astrophysics Data System (ADS)

    Du, Hongtao; Qi, Hairong; Peterson, Gregory D.

    2004-04-01

    Advances in hyperspectral images have dramatically boosted remote sensing applications by providing abundant information using hundreds of contiguous spectral bands. However, the high volume of information also results in an excessive computation burden. Since most materials have specific characteristics only at certain bands, much of this information is redundant. This property of hyperspectral images has motivated many researchers to study various dimensionality reduction algorithms, including Projection Pursuit (PP), Principal Component Analysis (PCA), wavelet transform, and Independent Component Analysis (ICA), where ICA is one of the most popular techniques. It searches for a linear or nonlinear transformation which minimizes the statistical dependence between spectral bands. Through this process, ICA can eliminate superfluous but retain practical information given only the observations of hyperspectral images. One hurdle of applying ICA in hyperspectral image (HSI) analysis, however, is its long computation time, especially for high volume hyperspectral data sets. Even the most efficient method, FastICA, is a very time-consuming process. In this paper, we present a parallel ICA (pICA) algorithm derived from FastICA. During the unmixing process, pICA divides the estimation of the weight matrix into sub-processes which can be conducted in parallel on multiple processors. The decorrelation process is decomposed into the internal decorrelation and the external decorrelation, which perform weight vector decorrelations within individual processors and between cooperative processors, respectively. In order to further improve the performance of pICA, we seek hardware solutions in the implementation of pICA. To date, there have been very few hardware designs for ICA-related processes due to the complicated and iterative computation. This paper discusses the capacity limitations of FPGA implementations for pICA in HSI analysis. An Application-Specific Integrated Circuit (ASIC) synthesis is designed for pICA-based dimensionality reduction in HSI analysis. The pICA design is implemented using standard-height cells and is aimed at the TSMC 0.18 micron process. During the synthesis procedure, three ICA-related reconfigurable components are developed for reuse and retargeting purposes. Preliminary results show that the standard-height cell based ASIC synthesis provides an effective solution for pICA and ICA-related processes in HSI analysis.

  4. Development and Application of a Parallel LCAO Cluster Method

    NASA Astrophysics Data System (ADS)

    Patton, David C.

    1997-08-01

    CPU intensive steps in the SCF electronic structure calculations of clusters and molecules with a first-principles LCAO method have been fully parallelized via a message passing paradigm. Identification of the parts of the code that are composed of many independent compute-intensive steps is discussed in detail as they are the most readily parallelized. Most of the parallelization involves spatially decomposing numerical operations on a mesh. One exception is the solution of Poisson's equation which relies on distribution of the charge density and multipole methods. The method we use to parallelize this part of the calculation is quite novel and is covered in detail. We present a general method for dynamically load-balancing a parallel calculation and discuss how we use this method in our code. The results of benchmark calculations of the IR and Raman spectra of PAH molecules such as anthracene (C_14H_10) and tetracene (C_18H_12) are presented. These benchmark calculations were performed on an IBM SP2 and a SUN Ultra HPC server with both MPI and PVM. Scalability and speedup for these calculations is analyzed to determine the efficiency of the code. In addition, performance and usage issues for MPI and PVM are presented.

  5. Analysis of turbine-grid interaction of grid-connected wind turbine using HHT

    NASA Astrophysics Data System (ADS)

    Chen, A.; Wu, W.; Miao, J.; Xie, D.

    2018-05-01

    This paper applies a denoising and feature-extraction method based on the Hilbert-Huang transform (HHT) to the output power of a grid-connected wind turbine in order to analyse turbine-grid interaction. First, the Empirical Mode Decomposition (EMD) and the Hilbert Transform (HT) are introduced in detail. Then, after decomposing the output power of the grid-connected wind turbine into a series of Intrinsic Mode Functions (IMFs), the energy ratio and power volatility are calculated to detect the unessential components. Meanwhile, combined with the vibration function of turbine-grid interaction, data fitting of the instantaneous amplitude and phase of each IMF is performed to extract characteristic parameters of the different interactions. Finally, using measured data from actual parallel-operated wind turbines in China, this work accurately obtains the characteristic parameters of turbine-grid interaction for a grid-connected wind turbine.
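
    A minimal sketch of the HHT pipeline described above, assuming the PyEMD package (distributed as EMD-signal) and SciPy are available; the synthetic test signal and the energy-ratio screening are illustrative stand-ins for the measured wind-turbine output power.

```python
# Hedged sketch: EMD into IMFs, then Hilbert transform for instantaneous amplitude and phase.
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD   # assumed dependency: pip install EMD-signal

t = np.linspace(0, 10, 2000)
power = np.sin(2 * np.pi * 0.8 * t) + 0.3 * np.sin(2 * np.pi * 7.0 * t)   # toy "output power"

imfs = EMD().emd(power)                        # decompose into Intrinsic Mode Functions
energy = np.sum(imfs ** 2, axis=1)
ratio = energy / energy.sum()                  # energy ratio used to flag unessential IMFs

analytic = hilbert(imfs, axis=1)               # Hilbert transform of each IMF
amplitude = np.abs(analytic)                   # instantaneous amplitude
phase = np.unwrap(np.angle(analytic), axis=1)  # instantaneous phase, ready for data fitting
```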

  6. Reduced Toxicity Fuel Satellite Propulsion System Including Fuel Cell Reformer with Alcohols Such as Methanol

    NASA Technical Reports Server (NTRS)

    Schneider, Steven J. (Inventor)

    2001-01-01

    A reduced toxicity fuel satellite propulsion system including a reduced toxicity propellant supply for consumption in an axial class thruster and an ACS class thruster. The system includes suitable valves and conduits for supplying the reduced toxicity propellant to the ACS decomposing element of an ACS thruster. The ACS decomposing element is operative to decompose the reduced toxicity propellant into hot propulsive gases. In addition the system includes suitable valves and conduits for supplying the reduced toxicity propellant to an axial decomposing element of the axial thruster. The axial decomposing element is operative to decompose the reduced toxicity propellant into hot gases. The system further includes suitable valves and conduits for supplying a second propellant to a combustion chamber of the axial thruster, whereby the hot gases and the second propellant auto-ignite and begin the combustion process for producing thrust.

  7. System for thermochemical hydrogen production

    DOEpatents

    Werner, R.W.; Galloway, T.R.; Krikorian, O.H.

    1981-05-22

    Method and apparatus are described for joule boosting an SO3 decomposer using electrical instead of thermal energy to heat the reactants of the high-temperature SO3 decomposition step of a thermochemical hydrogen production process driven by a tandem mirror reactor. Joule boosting the decomposer to a sufficiently high temperature from a lower temperature heat source eliminates the need for expensive catalysts and reduces the temperature and consequent materials requirements for the reactor blanket. A particular decomposer design utilizes electrically heated silicon carbide rods, at a temperature of 1250 K, to decompose a cross flow of SO3 gas.

  8. Cat got your tongue? Using the tip-of-the-tongue state to investigate fixed expressions.

    PubMed

    Nordmann, Emily; Cleland, Alexandra A; Bull, Rebecca

    2013-01-01

    Despite the fact that they play a prominent role in everyday speech, the representation and processing of fixed expressions during language production is poorly understood. Here, we report a study investigating the processes underlying fixed expression production. "Tip-of-the-tongue" (TOT) states were elicited for well-known idioms (e.g., hit the nail on the head) and participants were asked to report any information they could regarding the content of the phrase. Participants were able to correctly report individual words for idioms that they could not produce. In addition, participants produced both figurative (e.g., pretty for easy on the eye) and literal errors (e.g., hammer for hit the nail on the head) when in a TOT state, suggesting that both figurative and literal meanings are active during production. There was no effect of semantic decomposability on overall TOT incidence; however, participants recalled a greater proportion of words for decomposable rather than non-decomposable idioms. This finding suggests there may be differences in how decomposable and non-decomposable idioms are retrieved during production. Copyright © 2013 Cognitive Science Society, Inc.

  9. Kill the song—steal the show: what does distinguish predicative metaphors from decomposable idioms?

    PubMed

    Caillies, Stéphanie; Declercq, Christelle

    2011-06-01

    This study examined the semantic processing difference between decomposable idioms and novel predicative metaphors. It was hypothesized that idiom comprehension results from the retrieval of a figurative meaning stored in memory, that metaphor comprehension requires a sense creation process and that this process difference affects the processing time of idiomatic and metaphoric expressions. In the first experiment, participants read sentences containing decomposable idioms, predicative metaphors or control expressions and performed a lexical decision task on figurative targets presented 0, 350, 500, or 750 ms after reading. Results demonstrated that idiomatic expressions were processed sooner than metaphoric ones. In the second experiment, participants were asked to assess the meaningfulness of idiomatic, metaphoric and literal expressions after reading a verb prime that belongs to the target phrase (identity priming). The results showed that verb identity priming was stronger for idiomatic expressions than for metaphoric ones, indicating different mental representations.

  10. Anisotropic three-dimensional inversion of CSEM data using finite-element techniques on unstructured grids

    NASA Astrophysics Data System (ADS)

    Wang, Feiyan; Morten, Jan Petter; Spitzer, Klaus

    2018-05-01

    In this paper, we present a recently developed anisotropic 3-D inversion framework for interpreting controlled-source electromagnetic (CSEM) data in the frequency domain. The framework integrates a high-order finite-element forward operator and a Gauss-Newton inversion algorithm. Conductivity constraints are applied using a parameter transformation. We discretize the continuous forward and inverse problems on unstructured grids for a flexible treatment of arbitrarily complex geometries. Moreover, an unstructured mesh is more desirable in comparison to a single rectilinear mesh for multisource problems because local grid refinement will not significantly influence the mesh density outside the region of interest. The non-uniform spatial discretization facilitates parametrization of the inversion domain at a suitable scale. For a rapid simulation of multisource EM data, we opt to use a parallel direct solver. We further accelerate the inversion process by decomposing the entire data set into subsets with respect to frequencies (and transmitters if memory requirement is affordable). The computational tasks associated with each data subset are distributed to different processes and run in parallel. We validate the scheme using a synthetic marine CSEM model with rough bathymetry, and finally, apply it to an industrial-size 3-D data set from the Troll field oil province in the North Sea acquired in 2008 to examine its robustness and practical applicability.
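
    The data-parallel layer described above, which decomposes the data set by frequency, can be sketched with mpi4py: each rank simulates its own subset of frequencies and the root gathers the responses. The forward_model function and the frequency list are placeholders for the finite-element forward operator and the survey frequencies.

```python
# Hedged sketch: distribute frequency subsets of a CSEM data set across MPI ranks.
from mpi4py import MPI
import numpy as np

def forward_model(freq):
    # stand-in for the finite-element CSEM forward operator at one frequency
    return np.exp(-freq) * np.ones(8)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

frequencies = np.array([0.25, 0.5, 0.75, 1.0, 1.25, 1.5])   # Hz, illustrative values
my_freqs = frequencies[rank::size]                           # round-robin split of the subsets

local = {float(f): forward_model(f) for f in my_freqs}       # this rank's simulated responses
all_responses = comm.gather(local, root=0)                   # root assembles misfit and Jacobian later
```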

  11. GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models

    PubMed Central

    Mukherjee, Chiranjit; Rodriguez, Abel

    2016-01-01

    Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful. PMID:28626348

  12. GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models.

    PubMed

    Mukherjee, Chiranjit; Rodriguez, Abel

    2016-01-01

    Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful.

  13. High performance computing environment for multidimensional image analysis

    PubMed Central

    Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo

    2007-01-01

    Background: The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact in microscopy applications. Results: We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors, and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478× speedup. Conclusion: Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large scale experiments with massive datasets. PMID:17634099
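
    A minimal sketch of the slab decomposition with nearest-neighbour communication described above, written with mpi4py and SciPy rather than for Blue Gene/L: each rank filters its z-slab after exchanging one ghost plane with each neighbour. The volume is synthetic, and the one-dimensional slab split is a simplification of the 3D torus mapping.

```python
# Hedged sketch: slab-decomposed 3D median filtering with nearest-neighbour ghost exchange.
from mpi4py import MPI
import numpy as np
from scipy.ndimage import median_filter

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

slab = np.random.default_rng(rank).random((32, 64, 64))   # this rank's z-slab of the volume
up, down = rank - 1, rank + 1                              # neighbouring slabs, if they exist

# exchange a single ghost plane with each existing neighbour
ghost_lo = comm.sendrecv(slab[0], dest=up, source=up) if up >= 0 else None
ghost_hi = comm.sendrecv(slab[-1], dest=down, source=down) if down < size else None

parts = []
if ghost_lo is not None:
    parts.append(ghost_lo[None])                           # received plane becomes a one-slice pad
parts.append(slab)
if ghost_hi is not None:
    parts.append(ghost_hi[None])
padded = np.concatenate(parts, axis=0)

filtered = median_filter(padded, size=3)                   # 3x3x3 median filter on the padded slab
lo = 1 if ghost_lo is not None else 0
result = filtered[lo:lo + slab.shape[0]]                   # strip the ghost planes back off
```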

  14. High performance computing environment for multidimensional image analysis.

    PubMed

    Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo

    2007-07-10

    The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact in microscopy applications. We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors, and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478× speedup. Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large scale experiments with massive datasets.

  15. Cascading effects of induced terrestrial plant defences on aquatic and terrestrial ecosystem function

    PubMed Central

    Jackrel, Sara L.; Wootton, J. Timothy

    2015-01-01

    Herbivores induce plants to undergo diverse processes that minimize costs to the plant, such as producing defences to deter herbivory or reallocating limited resources to inaccessible portions of the plant. Yet most plant tissue is consumed by decomposers, not herbivores, and these defensive processes aimed to deter herbivores may alter plant tissue even after detachment from the plant. All consumers value nutrients, but plants also require these nutrients for primary functions and defensive processes. We experimentally simulated herbivory with and without nutrient additions on red alder (Alnus rubra), which supplies the majority of leaf litter for many rivers in western North America. Simulated herbivory induced a defence response with cascading effects: terrestrial herbivores and aquatic decomposers fed less on leaves from stressed trees. This effect was context dependent: leaves from fertilized-only trees decomposed most rapidly while leaves from fertilized trees receiving the herbivory treatment decomposed least, suggesting plants funnelled a nutritionally valuable resource into enhanced defence. One component of the defence response was a decrease in leaf nitrogen leading to elevated carbon : nitrogen. Aquatic decomposers prefer leaves naturally low in C : N and this altered nutrient profile largely explains the lower rate of aquatic decomposition. Furthermore, terrestrial soil decomposers were unaffected by either treatment but did show a preference for local and nitrogen-rich leaves. Our study illustrates the ecological implications of terrestrial herbivory and these findings demonstrate that the effects of selection caused by terrestrial herbivory in one ecosystem can indirectly shape the structure of other ecosystems through ecological fluxes across boundaries. PMID:25788602

  16. Application of Direct Parallel Methods to Reconstruction and Forecasting Problems

    NASA Astrophysics Data System (ADS)

    Song, Changgeun

    Many important physical processes in nature are represented by partial differential equations. Numerical weather prediction in particular, requires vast computational resources. We investigate the significance of parallel processing technology to the real world problem of atmospheric prediction. In this paper we consider the classic problem of decomposing the observed wind field into the irrotational and nondivergent components. Recognizing the fact that on a limited domain this problem has a non-unique solution, Lynch (1989) described eight different ways to accomplish the decomposition. One set of elliptic equations is associated with the decomposition--this determines the initial nondivergent state for the forecast model. It is shown that the entire decomposition problem can be solved in a fraction of a second using multi-vector processor such as ALLIANT FX/8. Secondly, the barotropic model is used to track hurricanes. Also, one set of elliptic equations is solved to recover the streamfunction from the forecasted vorticity. A 72 h prediction of Elena is made while it is in the Gulf of Mexico. During this time the hurricane executes a dramatic re-curvature that is captured by the model. Furthermore, an improvement in the track prediction results when a simple assimilation strategy is used. This technique makes use of the wind fields in the 24 h period immediately preceding the initial time for the prediction. In this particular application, solutions to systems of elliptic equations are the center of the computational mechanics. We demonstrate that direct, parallel methods based on accelerated block cyclic reduction (BCR) significantly reduce the computational time required to solve the elliptic equations germane to the decomposition, the forecast and adjoint assimilation.
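
    For reference, the classical partitioning behind the wind-field decomposition problem, written in notation of ours rather than the author's (psi the streamfunction, chi the velocity potential, zeta the vorticity, delta the divergence); the two Poisson equations below are the elliptic systems that the block cyclic reduction solver targets.

```latex
\begin{aligned}
u &= \frac{\partial \chi}{\partial x} - \frac{\partial \psi}{\partial y}, &
v &= \frac{\partial \chi}{\partial y} + \frac{\partial \psi}{\partial x}, \\[4pt]
\nabla^{2}\psi &= \zeta = \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y}, &
\nabla^{2}\chi &= \delta = \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y}.
\end{aligned}
```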

  17. Real-Time Spaceborne Synthetic Aperture Radar Float-Point Imaging System Using Optimized Mapping Methodology and a Multi-Node Parallel Accelerating Technique

    PubMed Central

    Li, Bingyi; Chen, Liang; Yu, Wenyue; Xie, Yizhuang; Bian, Mingming; Zhang, Qingjun; Pang, Long

    2018-01-01

    With the development of satellite load technology and very large-scale integrated (VLSI) circuit technology, on-board real-time synthetic aperture radar (SAR) imaging systems have facilitated rapid response to disasters. A key goal of the on-board SAR imaging system design is to achieve high real-time processing performance under severe size, weight, and power consumption constraints. This paper presents a multi-node prototype system for real-time SAR imaging processing. We decompose the commonly used chirp scaling (CS) SAR imaging algorithm into two parts according to the computing features. The linearization and logic-memory optimum allocation methods are adopted to realize the nonlinear part in a reconfigurable structure, and the two-part bandwidth balance method is used to realize the linear part. Thus, float-point SAR imaging processing can be integrated into a single Field Programmable Gate Array (FPGA) chip instead of relying on distributed technologies. A single-processing node requires 10.6 s and consumes 17 W to focus on 25-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384. The design methodology of the multi-FPGA parallel accelerating system under the real-time principle is introduced. As a proof of concept, a prototype with four processing nodes and one master node is implemented using a Xilinx xc6vlx315t FPGA. The weight and volume of one single machine are 10 kg and 32 cm × 24 cm × 20 cm, respectively, and the power consumption is under 100 W. The real-time performance of the proposed design is demonstrated on Chinese Gaofen-3 stripmap continuous imaging. PMID:29495637

  18. Kinetic model of excess activated sludge thermohydrolysis.

    PubMed

    Imbierowicz, Mirosław; Chacuk, Andrzej

    2012-11-01

    Thermal hydrolysis of excess activated sludge suspensions was carried out at temperatures ranging from 423 K to 523 K and under pressures of 0.2-4.0 MPa. Changes of total organic carbon (TOC) concentration in the solid and liquid phases were measured during these studies. At a temperature of 423 K, after 2 h of the process, the TOC concentration in the reaction mixture decreased by 15-18% of the initial value. At 473 K total organic carbon removal from the activated sludge suspension increased to 30%. It was also found that the solubilisation of particulate organic matter strongly depended on the process temperature. At 423 K the transfer of TOC from solid particles into the liquid phase after 1 h of the process reached 25% of the initial value; however, at the temperature of 523 K the conversion degree of 'solid' TOC attained 50% just after 15 min of the process. In the article a lumped kinetic model of the process of activated sludge thermohydrolysis has been proposed. It was assumed that during heating of the activated sludge suspension to a temperature in the range of 423-523 K two parallel reactions occurred. One, connected with thermal destruction of activated sludge particles, caused solubilisation of organic carbon and an increase of dissolved organic carbon concentration in the liquid phase (hydrolysate). The parallel reaction led to a new kind of insoluble solid phase, which was further decomposed into gaseous products (CO2). The collected experimental data were used to identify unknown parameters of the model, i.e. activation energies and pre-exponential factors of elementary reactions. The mathematical model of activated sludge thermohydrolysis appropriately describes the kinetics of reactions occurring in the studied system. Copyright © 2012 Elsevier Ltd. All rights reserved.
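
    One first-order reading of the lumped scheme described above, in notation of ours rather than the authors': C_S, C_L and C_R denote the organic carbon in the original solid, in the hydrolysate, and in the new solid phase, and the rate constants take the Arrhenius form whose activation energies and pre-exponential factors the authors fit to the data.

```latex
\begin{aligned}
\frac{dC_{\mathrm{S}}}{dt} &= -(k_1 + k_2)\,C_{\mathrm{S}}
  && \text{organic carbon in the original solid,}\\
\frac{dC_{\mathrm{L}}}{dt} &= k_1\,C_{\mathrm{S}}
  && \text{solubilised carbon in the hydrolysate,}\\
\frac{dC_{\mathrm{R}}}{dt} &= k_2\,C_{\mathrm{S}} - k_3\,C_{\mathrm{R}}
  && \text{new solid phase, further degraded to CO}_2,\\
k_i &= A_i \exp\!\left(-\frac{E_{a,i}}{RT}\right)
  && \text{Arrhenius form of the fitted rate constants.}
\end{aligned}
```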

  19. Solution of the within-group multidimensional discrete ordinates transport equations on massively parallel architectures

    NASA Astrophysics Data System (ADS)

    Zerr, Robert Joseph

    2011-12-01

    The integral transport matrix method (ITMM) has been used as the kernel of new parallel solution methods for the discrete ordinates approximation of the within-group neutron transport equation. The ITMM abandons the repetitive mesh sweeps of the traditional source iterations (SI) scheme in favor of constructing stored operators that account for the direct coupling factors among all the cells and between the cells and boundary surfaces. The main goals of this work were to develop the algorithms that construct these operators and employ them in the solution process, determine the most suitable way to parallelize the entire procedure, and evaluate the behavior and performance of the developed methods for increasing number of processes. This project compares the effectiveness of the ITMM with the SI scheme parallelized with the Koch-Baker-Alcouffe (KBA) method. The primary parallel solution method involves a decomposition of the domain into smaller spatial sub-domains, each with their own transport matrices, and coupled together via interface boundary angular fluxes. Each sub-domain has its own set of ITMM operators and represents an independent transport problem. Multiple iterative parallel solution methods have been investigated, including parallel block Jacobi (PBJ), parallel red/black Gauss-Seidel (PGS), and parallel GMRES (PGMRES). The fastest observed parallel solution method, PGS, was used in a weak scaling comparison with the PARTISN code. Compared to the state-of-the-art SI-KBA with diffusion synthetic acceleration (DSA), this new method without acceleration/preconditioning is not competitive for any problem parameters considered. The best comparisons occur for problems that are difficult for SI DSA, namely highly scattering and optically thick. SI DSA execution time curves are generally steeper than the PGS ones. However, until further testing is performed it cannot be concluded that SI DSA does not outperform the ITMM with PGS even on several thousand or tens of thousands of processors. The PGS method does outperform SI DSA for the periodic heterogeneous layers (PHL) configuration problems. Although this demonstrates a relative strength/weakness between the two methods, the practicality of these problems is much less, further limiting instances where it would be beneficial to select ITMM over SI DSA. The results strongly indicate a need for a robust, stable, and efficient acceleration method (or preconditioner for PGMRES). The spatial multigrid (SMG) method is currently incomplete in that it does not work for all cases considered and does not effectively improve the convergence rate for all values of scattering ratio c or cell dimension h. Nevertheless, it does display the desired trend for highly scattering, optically thin problems. That is, it tends to lower the rate of growth of number of iterations with increasing number of processes, P, while not increasing the number of additional operations per iteration to the extent that the total execution time of the rapidly converging accelerated iterations exceeds that of the slower unaccelerated iterations. A predictive parallel performance model has been developed for the PBJ method. Timing tests were performed such that trend lines could be fitted to the data for the different components and used to estimate the execution times.
Applied to the weak scaling results, the model notably underestimates construction time, but combined with a slight overestimation in iterative solution time, the model predicts total execution time very well for large P. It also does a decent job with the strong scaling results, closely predicting the construction time and time per iteration, especially as P increases. Although not shown to be competitive up to 1,024 processing elements with the current state of the art, the parallelized ITMM exhibits promising scaling trends. Ultimately, compared to the KBA method, the parallelized ITMM may be found to be a very attractive option for transport calculations spatially decomposed over several tens of thousands of processes. Acceleration/preconditioning of the parallelized ITMM once developed will improve the convergence rate and improve its competitiveness. (Abstract shortened by UMI.)

  20. Fluidized bed silicon deposition from silane

    NASA Technical Reports Server (NTRS)

    Hsu, George C. (Inventor); Levin, Harry (Inventor); Hogle, Richard A. (Inventor); Praturi, Ananda (Inventor); Lutwack, Ralph (Inventor)

    1982-01-01

    A process and apparatus for thermally decomposing silicon containing gas for deposition on fluidized nucleating silicon seed particles is disclosed. Silicon seed particles are produced in a secondary fluidized reactor by thermal decomposition of a silicon containing gas. The thermally produced silicon seed particles are then introduced into a primary fluidized bed reactor to form a fluidized bed. Silicon containing gas is introduced into the primary reactor where it is thermally decomposed and deposited on the fluidized silicon seed particles. Silicon seed particles having the desired amount of thermally decomposed silicon product thereon are removed from the primary fluidized reactor as ultra pure silicon product. An apparatus for carrying out this process is also disclosed.

  1. Fluidized bed silicon deposition from silane

    NASA Technical Reports Server (NTRS)

    Hsu, George (Inventor); Levin, Harry (Inventor); Hogle, Richard A. (Inventor); Praturi, Ananda (Inventor); Lutwack, Ralph (Inventor)

    1984-01-01

    A process and apparatus for thermally decomposing silicon containing gas for deposition on fluidized nucleating silicon seed particles is disclosed. Silicon seed particles are produced in a secondary fluidized reactor by thermal decomposition of a silicon containing gas. The thermally produced silicon seed particles are then introduced into a primary fluidized bed reactor to form a fluidized bed. Silicon containing gas is introduced into the primary reactor where it is thermally decomposed and deposited on the fluidized silicon seed particles. Silicon seed particles having the desired amount of thermally decomposed silicon product thereon are removed from the primary fluidized reactor as ultra pure silicon product. An apparatus for carrying out this process is also disclosed.

  2. Forensic entomology of decomposing humans and their decomposing pets.

    PubMed

    Sanford, Michelle R

    2015-02-01

    Domestic pets are commonly found in the homes of decedents whose deaths are investigated by a medical examiner or coroner. When these pets become trapped with a decomposing decedent, they may resort to feeding on the body or succumb to starvation and/or dehydration and begin to decompose as well. In this case report, photographic documentation of cases involving pets and decedents was examined from 2009 through the beginning of 2014. This photo review indicated that in many cases the pets were cats and dogs that were trapped with the decedent, died, and were discovered in a moderate (bloat to active decay) state of decomposition. In addition, three cases involving decomposing humans and their decomposing pets are described as they were processed for time of insect colonization by a forensic entomological approach. Differences in timing and species colonizing the human and animal bodies were noted, as was the potential for the human- or animal-derived specimens to contaminate one another at the scene. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  3. Modelling Nonlinear Dynamic Textures using Hybrid DWT-DCT and Kernel PCA with GPU

    NASA Astrophysics Data System (ADS)

    Ghadekar, Premanand Pralhad; Chopade, Nilkanth Bhikaji

    2016-12-01

    Most of the real-world dynamic textures are nonlinear, non-stationary, and irregular. Nonlinear motion also has some repetition of motion, but it exhibits high variation, stochasticity, and randomness. Hybrid DWT-DCT and Kernel Principal Component Analysis (KPCA) with YCbCr/YIQ colour coding using the Dynamic Texture Unit (DTU) approach is proposed to model a nonlinear dynamic texture, which provides better results than state-of-the-art methods in terms of PSNR, compression ratio, model coefficients, and model size. Dynamic texture is decomposed into DTUs as they help to extract temporal self-similarity. Hybrid DWT-DCT is used to extract spatial redundancy. YCbCr/YIQ colour encoding is performed to capture chromatic correlation. KPCA is applied to capture nonlinear motion. Further, the proposed algorithm is implemented on a Graphics Processing Unit (GPU), which comprises hundreds of small processors, to decrease time complexity and to achieve parallelism.
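
    A rough sketch of the hybrid-transform-plus-KPCA idea is given below, assuming NumPy, PyWavelets, SciPy, and scikit-learn; the frame size, the 'haar' wavelet, and the number of components are illustrative assumptions, and the sketch omits the DTU segmentation, YCbCr/YIQ coding, and GPU mapping of the paper.

```python
import numpy as np
import pywt
from scipy.fft import dctn
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
frames = rng.random((16, 64, 64))          # hypothetical stand-in for one dynamic texture unit

def dwt_dct_features(frame):
    cA, _ = pywt.dwt2(frame, 'haar')       # spatial redundancy: wavelet approximation band
    return dctn(cA, norm='ortho').ravel()  # further energy compaction with the DCT

X = np.stack([dwt_dct_features(f) for f in frames])

# KPCA captures nonlinear structure across the frame features
kpca = KernelPCA(n_components=4, kernel='rbf', fit_inverse_transform=True)
codes = kpca.fit_transform(X)              # compact model coefficients
recon = kpca.inverse_transform(codes)      # approximate reconstruction of the features
print(codes.shape, recon.shape)
```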

  4. Parallel SOR methods with a parabolic-diffusion acceleration technique for solving an unstructured-grid Poisson equation on 3D arbitrary geometries

    NASA Astrophysics Data System (ADS)

    Zapata, M. A. Uh; Van Bang, D. Pham; Nguyen, K. D.

    2016-05-01

    This paper presents a parallel algorithm for the finite-volume discretisation of the Poisson equation on three-dimensional arbitrary geometries. The proposed method is formulated by using a 2D horizontal block domain decomposition and interprocessor data communication techniques with message passing interface. The horizontal unstructured-grid cells are reordered according to the neighbouring relations and decomposed into blocks using a load-balanced distribution to give all processors an equal amount of elements. In this algorithm, two parallel successive over-relaxation methods are presented: a multi-colour ordering technique for unstructured grids based on distributed memory and a block method using a reordering index following similar ideas to the partitioning for structured grids. In all cases, the parallel algorithms are combined with an iterative acceleration solver. This solver is based on a parabolic-diffusion equation introduced to obtain faster solutions of the linear systems arising from the discretisation. Numerical results are given to evaluate the performance of the methods, showing speedups better than linear.
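
    The red/black (two-colour) ordering mentioned above can be sketched as follows for a plain 2D Poisson problem with zero Dirichlet boundaries; the grid size, relaxation factor, and the absence of the parabolic-diffusion acceleration are simplifying assumptions. Points of one colour depend only on points of the other colour, so each colour sweep could be distributed over processes.

```python
import numpy as np

def redblack_sor(f, h, omega=1.7, iters=200):
    """Red/black SOR sweeps for -laplacian(u) = f on a unit square grid."""
    u = np.zeros_like(f)
    n = f.shape[0]
    for _ in range(iters):
        for colour in (0, 1):                       # red points, then black points
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    if (i + j) % 2 != colour:
                        continue
                    gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]
                                 + h * h * f[i, j])
                    u[i, j] = (1 - omega) * u[i, j] + omega * gs
    return u

n = 33
h = 1.0 / (n - 1)
f = np.ones((n, n))
print(redblack_sor(f, h)[n // 2, n // 2])
```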

  5. Domain decomposition methods for the parallel computation of reacting flows

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1988-01-01

    Domain decomposition is a natural route to parallel computing for partial differential equation solvers. Subdomains of which the original domain of definition is comprised are assigned to independent processors at the price of periodic coordination between processors to compute global parameters and maintain the requisite degree of continuity of the solution at the subdomain interfaces. In the domain-decomposed solution of steady multidimensional systems of PDEs by finite difference methods using a pseudo-transient version of Newton iteration, the only portion of the computation which generally stands in the way of efficient parallelization is the solution of the large, sparse linear systems arising at each Newton step. For some Jacobian matrices drawn from an actual two-dimensional reacting flow problem, comparisons are made between relaxation-based linear solvers and preconditioned iterative methods of Conjugate Gradient and Chebyshev type, focusing attention on both iteration count and global inner product count. The generalized minimum residual method with block-ILU preconditioning is judged the best serial method among those considered, and parallel numerical experiments on the Encore Multimax demonstrate for it an approximately 10-fold speedup on 16 processors.
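
    As a hedged illustration of the preconditioned Krylov approach judged best in this record, the snippet below solves a toy sparse system with restarted GMRES and an incomplete-LU preconditioner via SciPy; the scalar ILU stands in for the block-ILU of the paper, and the tridiagonal matrix is not a reacting-flow Jacobian.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# toy sparse system standing in for the linear system of one Newton step
n = 200
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)     # incomplete LU factors
M = spla.LinearOperator((n, n), matvec=ilu.solve)      # preconditioner M ~ A^-1

x, info = spla.gmres(A, b, M=M, restart=30)            # restarted, preconditioned GMRES
print(info, np.linalg.norm(A @ x - b))                 # info == 0 means converged
```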

  6. A Comparative Study of the Application of Fluorescence Excitation-Emission Matrices Combined with Parallel Factor Analysis and Nonnegative Matrix Factorization in the Analysis of Zn Complexation by Humic Acids

    PubMed Central

    Boguta, Patrycja; Pieczywek, Piotr M.; Sokołowska, Zofia

    2016-01-01

    The main aim of this study was the application of excitation-emission fluorescence matrices (EEMs) combined with two decomposition methods: parallel factor analysis (PARAFAC) and nonnegative matrix factorization (NMF) to study the interaction mechanisms between humic acids (HAs) and Zn(II) over a wide concentration range (0–50 mg·dm−3). The influence of HA properties on Zn(II) complexation was also investigated. Stability constants, quenching degree and complexation capacity were estimated for binding sites found in raw EEM, EEM-PARAFAC and EEM-NMF data using mathematical models. A combination of EEM fluorescence analysis with one of the proposed decomposition methods enabled separation of overlapping binding sites and yielded more accurate calculations of the binding parameters. PARAFAC and NMF processing allowed finding binding sites invisible in a few raw EEM datasets, as well as totally new maxima attributed to structures of the lowest humification. Decomposed data showed an increase in Zn complexation with an increase in humification, aromaticity and molecular weight of HAs. EEM-PARAFAC analysis also revealed that the most stable compounds were formed by structures containing the highest amounts of nitrogen. The content of oxygen-functional groups did not influence the binding parameters, mainly due to the stronger competition between metal cations and protons. EEM spectra coupled with NMF and especially PARAFAC processing gave more adequate assessments of interactions as compared to raw EEM data and should be especially recommended for modeling of complexation processes where the changes in fluorescence intensity (FI) are weak or where the processes are interfered with by the presence of other fluorophores. PMID:27782078
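
    A minimal sketch of the NMF half of such an analysis is shown below, assuming scikit-learn and synthetic data in place of measured EEMs; each EEM is unfolded into a row before factorization, whereas PARAFAC (e.g., via a tensor library) would act on the three-way array directly. The number of components is an illustrative assumption.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
# hypothetical stack of EEMs: 20 samples x 30 excitation x 40 emission wavelengths
eems = rng.random((20, 30, 40))

# unfold each EEM into a row so the matrix is (samples x excitation*emission)
X = eems.reshape(20, -1)

model = NMF(n_components=3, init='nndsvda', max_iter=500)
scores = model.fit_transform(X)                     # per-sample contribution of each component
components = model.components_.reshape(3, 30, 40)   # nonnegative spectral signature per component
print(scores.shape, components.shape)
```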

  7. Experimental evidence that the Ornstein-Uhlenbeck model best describes the evolution of leaf litter decomposability.

    PubMed

    Pan, Xu; Cornelissen, Johannes H C; Zhao, Wei-Wei; Liu, Guo-Fang; Hu, Yu-Kun; Prinzing, Andreas; Dong, Ming; Cornwell, William K

    2014-09-01

    Leaf litter decomposability is an important effect trait for ecosystem functioning. However, it is unknown how this effect trait evolved through plant history as a leaf 'afterlife' integrator of the evolution of multiple underlying traits upon which adaptive selection must have acted. Did decomposability evolve in a Brownian fashion without any constraints? Was evolution rapid at first and then slowed? Or was there an underlying mean-reverting process that makes the evolution of extreme trait values unlikely? Here, we test the hypothesis that the evolution of decomposability has undergone certain mean-reverting forces due to strong constraints and trade-offs in the leaf traits that have afterlife effects on litter quality to decomposers. In order to test this, we examined the leaf litter decomposability and seven key leaf traits of 48 tree species in the temperate area of China and fitted them to three evolutionary models: Brownian motion model (BM), Early burst model (EB), and Ornstein-Uhlenbeck model (OU). The OU model, which does not allow unlimited trait divergence through time, was the best fit model for leaf litter decomposability and all seven leaf traits. These results support the hypothesis that neither decomposability nor the underlying traits has been able to diverge toward progressively extreme values through evolutionary time. These results have reinforced our understanding of the relationships between leaf litter decomposability and leaf traits in an evolutionary perspective and may be a helpful step toward reconstructing deep-time carbon cycling based on taxonomic composition with more confidence.
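
    To make the mean-reverting intuition concrete, the following sketch simulates an Ornstein-Uhlenbeck path with the Euler-Maruyama scheme; the parameter values are arbitrary assumptions, and the snippet is not the phylogenetic model-fitting procedure used in the study, only an illustration of why OU dynamics keep trait values from drifting to extremes as Brownian motion would.

```python
import numpy as np

def simulate_ou(theta=1.0, mu=0.0, sigma=0.5, x0=2.0, dt=0.01, steps=2000, seed=0):
    """Euler-Maruyama simulation of dX = -theta*(X - mu)*dt + sigma*dW."""
    rng = np.random.default_rng(seed)
    x = np.empty(steps)
    x[0] = x0
    for t in range(1, steps):
        x[t] = x[t-1] - theta * (x[t-1] - mu) * dt \
               + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

path = simulate_ou()
print(path[0], path[-1], path[-500:].mean())   # late values hover around mu
```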

  8. WELDING PROCESS

    DOEpatents

    Zambrow, J.; Hausner, H.

    1957-09-24

    A method of joining metal parts for the preparation of relatively long, thin fuel element cores of uranium or alloys thereof for nuclear reactors is described. The process includes the steps of cleaning the surfaces to be joined, placing the surfaces together, and providing between and in contact with them a layer of a compound in finely divided form that is decomposable to metal by heat. The fuel element members are then heated at the contact zone and maintained under pressure during the heating to decompose the compound to metal and sinter the members and the reduced metal together, producing a weld. The preferred class of decomposable compounds is the metal hydrides, such as uranium hydride, which release hydrogen, thus providing a reducing atmosphere in the vicinity of the welding operation.

  9. Ecosystem and decomposer effects on litter dynamics along an old field to old-growth forest successional gradient

    EPA Science Inventory

    Identifying the biotic (e.g. decomposers, vegetation) and abiotic (e.g. temperature, moisture) mechanisms controlling litter decomposition is key to understanding ecosystem function, especially where variation in ecosystem structure due to successional processes may alter the str...

  10. Domain decomposition in time for PDE-constrained optimization

    DOE PAGES

    Barker, Andrew T.; Stoll, Martin

    2015-08-28

    Here, PDE-constrained optimization problems have a wide range of applications, but they lead to very large and ill-conditioned linear systems, especially if the problems are time dependent. In this paper we outline an approach for dealing with such problems by decomposing them in time and applying an additive Schwarz preconditioner in time, so that we can take advantage of parallel computers to deal with the very large linear systems. We then illustrate the performance of our method on a variety of problems.

  11. Mapping of MPEG-4 decoding on a flexible architecture platform

    NASA Astrophysics Data System (ADS)

    van der Tol, Erik B.; Jaspers, Egbert G.

    2001-12-01

    In the field of consumer electronics, the advent of new features such as Internet, games, video conferencing, and mobile communication has triggered the convergence of television and computer technologies. This requires a generic media-processing platform that enables simultaneous execution of very diverse tasks such as high-throughput stream-oriented data processing and highly data-dependent irregular processing with complex control flows. As a representative application, this paper presents the mapping of a Main Visual profile MPEG-4 decoder for High-Definition (HD) video onto a flexible architecture platform. A stepwise approach is taken, going from the decoder application toward an implementation proposal. First, the application is decomposed into separate tasks with self-contained functionality, clear interfaces, and distinct characteristics. Next, a hardware-software partitioning is derived by analyzing the characteristics of each task, such as the amount of inherent parallelism, the throughput requirements, the complexity of control processing, and the reuse potential over different applications and different systems. Finally, a feasible implementation is proposed that includes, amongst others, a very-long-instruction-word (VLIW) media processor, one or more RISC processors, and some dedicated processors. The mapping study of the MPEG-4 decoder proves the flexibility and extensibility of the media-processing platform. This platform enables an effective HW/SW co-design yielding a high performance density.

  12. Experimental evidence that the Ornstein-Uhlenbeck model best describes the evolution of leaf litter decomposability

    PubMed Central

    Pan, Xu; Cornelissen, Johannes H C; Zhao, Wei-Wei; Liu, Guo-Fang; Hu, Yu-Kun; Prinzing, Andreas; Dong, Ming; Cornwell, William K

    2014-01-01

    Leaf litter decomposability is an important effect trait for ecosystem functioning. However, it is unknown how this effect trait evolved through plant history as a leaf ‘afterlife’ integrator of the evolution of multiple underlying traits upon which adaptive selection must have acted. Did decomposability evolve in a Brownian fashion without any constraints? Was evolution rapid at first and then slowed? Or was there an underlying mean-reverting process that makes the evolution of extreme trait values unlikely? Here, we test the hypothesis that the evolution of decomposability has undergone certain mean-reverting forces due to strong constraints and trade-offs in the leaf traits that have afterlife effects on litter quality to decomposers. In order to test this, we examined the leaf litter decomposability and seven key leaf traits of 48 tree species in the temperate area of China and fitted them to three evolutionary models: Brownian motion model (BM), Early burst model (EB), and Ornstein-Uhlenbeck model (OU). The OU model, which does not allow unlimited trait divergence through time, was the best fit model for leaf litter decomposability and all seven leaf traits. These results support the hypothesis that neither decomposability nor the underlying traits has been able to diverge toward progressively extreme values through evolutionary time. These results have reinforced our understanding of the relationships between leaf litter decomposability and leaf traits in an evolutionary perspective and may be a helpful step toward reconstructing deep-time carbon cycling based on taxonomic composition with more confidence. PMID:25535551

  13. Microbial community assembly and metabolic function during mammalian corpse decomposition

    USGS Publications Warehouse

    Metcalf, Jessica L; Xu, Zhenjiang Zech; Weiss, Sophie; Lax, Simon; Van Treuren, Will; Hyde, Embriette R.; Song, Se Jin; Amir, Amnon; Larsen, Peter; Sangwan, Naseer; Haarmann, Daniel; Humphrey, Greg C; Ackermann, Gail; Thompson, Luke R; Lauber, Christian; Bibat, Alexander; Nicholas, Catherine; Gebert, Matthew J; Petrosino, Joseph F; Reed, Sasha C.; Gilbert, Jack A; Lynne, Aaron M; Bucheli, Sibyl R; Carter, David O; Knight, Rob

    2016-01-01

    Vertebrate corpse decomposition provides an important stage in nutrient cycling in most terrestrial habitats, yet microbially mediated processes are poorly understood. Here we combine deep microbial community characterization, community-level metabolic reconstruction, and soil biogeochemical assessment to understand the principles governing microbial community assembly during decomposition of mouse and human corpses on different soil substrates. We find a suite of bacterial and fungal groups that contribute to nitrogen cycling and a reproducible network of decomposers that emerge on predictable time scales. Our results show that this decomposer community is derived primarily from bulk soil, but key decomposers are ubiquitous in low abundance. Soil type was not a dominant factor driving community development, and the process of decomposition is sufficiently reproducible to offer new opportunities for forensic investigations.

  14. Microbial community assembly and metabolic function during mammalian corpse decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metcalf, J. L.; Xu, Z. Z.; Weiss, S.

    2015-12-10

    Vertebrate corpse decomposition provides an important stage in nutrient cycling in most terrestrial habitats, yet microbially mediated processes are poorly understood. Here we combine deep microbial community characterization, community-level metabolic reconstruction, and soil biogeochemical assessment to understand the principles governing microbial community assembly during decomposition of mouse and human corpses on different soil substrates. We find a suite of bacterial and fungal groups that contribute to nitrogen cycling and a reproducible network of decomposers that emerge on predictable time scales. Our results show that this decomposer community is derived primarily from bulk soil, but key decomposers are ubiquitous in low abundance. Soil type was not a dominant factor driving community development, and the process of decomposition is sufficiently reproducible to offer new opportunities for forensic investigations.

  15. Acute toxicity of live and decomposing green alga Ulva ( Enteromorpha) prolifera to abalone Haliotis discus hannai

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Yu, Rencheng; Zhou, Mingjiang

    2011-05-01

    From 2007 to 2009, large-scale blooms of green algae (the so-called "green tides") occurred every summer in the Yellow Sea, China. In June 2008, huge amounts of floating green algae accumulated along the coast of Qingdao and led to mass mortality of cultured abalone and sea cucumber. However, the mechanism for the mass mortality of cultured animals remains undetermined. This study examined the toxic effects of Ulva ( Enteromorpha) prolifera, the causative species of green tides in the Yellow Sea during the last three years. The acute toxicity of fresh culture medium and decomposing algal effluent of U. prolifera to the cultured abalone Haliotis discus hannai were tested. It was found that both fresh culture medium and decomposing algal effluent had toxic effects to abalone, and decomposing algal effluent was more toxic than fresh culture medium. The acute toxicity of decomposing algal effluent could be attributed to the ammonia and sulfide presented in the effluent, as well as the hypoxia caused by the decomposition process.

  16. Plant–herbivore–decomposer stoichiometric mismatches and nutrient cycling in ecosystems

    PubMed Central

    Cherif, Mehdi; Loreau, Michel

    2013-01-01

    Plant stoichiometry is thought to have a major influence on how herbivores affect nutrient availability in ecosystems. Most conceptual models predict that plants with high nutrient contents increase nutrient excretion by herbivores, in turn raising nutrient availability. To test this hypothesis, we built a stoichiometrically explicit model that includes a simple but thorough description of the processes of herbivory and decomposition. Our results challenge traditional views of herbivore impacts on nutrient availability in many ways. They show that the relationship between plant nutrient content and the impact of herbivores predicted by conceptual models holds only at high plant nutrient contents. At low plant nutrient contents, the impact of herbivores is mediated by the mineralization/immobilization of nutrients by decomposers and by the type of resource limiting the growth of decomposers. Both parameters are functions of the mismatch between plant and decomposer stoichiometries. Our work provides new predictions about the impacts of herbivores on ecosystem fertility that depend on critical interactions between plant, herbivore and decomposer stoichiometries in ecosystems. PMID:23303537

  17. Special purpose parallel computer architecture for real-time control and simulation in robotic applications

    NASA Technical Reports Server (NTRS)

    Fijany, Amir (Inventor); Bejczy, Antal K. (Inventor)

    1993-01-01

    This is a real-time robotic controller and simulator which is a MIMD-SIMD parallel architecture for interfacing with an external host computer and providing a high degree of parallelism in computations for robotic control and simulation. It includes a host processor for receiving instructions from the external host computer and for transmitting answers to the external host computer. There are a plurality of SIMD microprocessors, each SIMD processor being a SIMD parallel processor capable of exploiting fine grain parallelism and further being able to operate asynchronously to form a MIMD architecture. Each SIMD processor comprises a SIMD architecture capable of performing two matrix-vector operations in parallel while fully exploiting parallelism in each operation. There is a system bus connecting the host processor to the plurality of SIMD microprocessors and a common clock providing a continuous sequence of clock pulses. There is also a ring structure interconnecting the plurality of SIMD microprocessors and connected to the clock for providing the clock pulses to the SIMD microprocessors and for providing a path for the flow of data and instructions between the SIMD microprocessors. The host processor includes logic for controlling the RRCS by interpreting instructions sent by the external host computer, decomposing the instructions into a series of computations to be performed by the SIMD microprocessors, using the system bus to distribute associated data among the SIMD microprocessors, and initiating activity of the SIMD microprocessors to perform the computations on the data by procedure call.

  18. Decomposers and the fire cycle in a phryganic (East Mediterranean) ecosystem.

    PubMed

    Arianoutsou-Faraggitaki, M; Margaris, N S

    1982-06-01

    Dehydrogenase activity, cellulose decomposition, nitrification, and CO2 release were measured for 2 years to estimate the effects of a wildfire on a phryganic ecosystem. In the decomposer subsystem we found that, compared with the control site, fire mainly affected the nitrification process during the whole period, and soil respiration during the second post-fire year. Our data suggest that after 3-4 months the activity of microbial decomposers is almost the same at the two sites, indicating that fire is not a catastrophic event, but a simple perturbation common to Mediterranean-type ecosystems.

  19. Microbial community assembly and metabolic function during mammalian corpse decomposition.

    PubMed

    Metcalf, Jessica L; Xu, Zhenjiang Zech; Weiss, Sophie; Lax, Simon; Van Treuren, Will; Hyde, Embriette R; Song, Se Jin; Amir, Amnon; Larsen, Peter; Sangwan, Naseer; Haarmann, Daniel; Humphrey, Greg C; Ackermann, Gail; Thompson, Luke R; Lauber, Christian; Bibat, Alexander; Nicholas, Catherine; Gebert, Matthew J; Petrosino, Joseph F; Reed, Sasha C; Gilbert, Jack A; Lynne, Aaron M; Bucheli, Sibyl R; Carter, David O; Knight, Rob

    2016-01-08

    Vertebrate corpse decomposition provides an important stage in nutrient cycling in most terrestrial habitats, yet microbially mediated processes are poorly understood. Here we combine deep microbial community characterization, community-level metabolic reconstruction, and soil biogeochemical assessment to understand the principles governing microbial community assembly during decomposition of mouse and human corpses on different soil substrates. We find a suite of bacterial and fungal groups that contribute to nitrogen cycling and a reproducible network of decomposers that emerge on predictable time scales. Our results show that this decomposer community is derived primarily from bulk soil, but key decomposers are ubiquitous in low abundance. Soil type was not a dominant factor driving community development, and the process of decomposition is sufficiently reproducible to offer new opportunities for forensic investigations. Copyright © 2016, American Association for the Advancement of Science.

  20. Process for decomposing nitrates in aqueous solution

    DOEpatents

    Haas, Paul A.

    1980-01-01

    This invention is a process for decomposing ammonium nitrate and/or selected metal nitrates in an aqueous solution at an elevated temperature and pressure. Where the compound to be decomposed is a metal nitrate (e.g., a nuclear-fuel metal nitrate), a hydroxylated organic reducing agent therefor is provided in the solution. In accordance with the invention, an effective proportion of both nitromethane and nitric acid is incorporated in the solution to accelerate decomposition of the ammonium nitrate and/or selected metal nitrate. As a result, decomposition can be effected at significantly lower temperatures and pressures, permitting the use of system components composed of off-the-shelf materials, such as stainless steel, rather than more costly materials of construction. Preferably, the process is conducted on a continuous basis. Fluid can be automatically vented from the reaction zone as required to maintain the operating temperature at a moderate value, e.g., at a value in the range of about 130°-200 °C.

  1. Decomposition Mechanism and Decomposition Promoting Factors of Waste Hard Metal for Zinc Decomposition Process (ZDP)

    NASA Astrophysics Data System (ADS)

    Pee, J. H.; Kim, Y. J.; Kim, J. Y.; Seong, N. E.; Cho, W. S.; Kim, K. J.

    2011-10-01

    Decomposition promoting factors and the decomposition mechanism in the zinc decomposition process of waste hard metals, which are composed mostly of tungsten carbide and cobalt, were evaluated. Zinc volatilization was suppressed and zinc vapor pressure was maintained in the graphite reaction crucible inside an electric furnace for ZDP. The reaction was run for 2 h at 650 °C, which completely (100 %) decomposed waste hard metals more than 30 mm thick. During the separation-decomposition of the waste hard metals, molten zinc alloy formed a liquid mixture of γ-β1 phases at the cobalt binder layer (reaction interface). The volume of the reacted zone expanded, and the waste hard metal layer was decomposed and separated horizontally from the hard metal. Zinc used in the ZDP process was almost completely removed and collected by decantation and by a volatilization-collection process at 1000 °C. The small amount of zinc remaining in the fully decomposed tungsten carbide-cobalt powder was removed using a phosphate solution with a slow cobalt dissolution rate.

  2. Temporal relation between top-down and bottom-up processing in lexical tone perception

    PubMed Central

    Shuai, Lan; Gong, Tao

    2013-01-01

    Speech perception entails both top-down processing that relies primarily on language experience and bottom-up processing that depends mainly on instant auditory input. Previous models of speech perception often claim that bottom-up processing occurs in an early time window, whereas top-down processing takes place in a late time window after stimulus onset. In this paper, we evaluated the temporal relation of both types of processing in lexical tone perception. We conducted a series of event-related potential (ERP) experiments that recruited Mandarin participants and adopted three experimental paradigms, namely dichotic listening, lexical decision with phonological priming, and semantic violation. By systematically analyzing the lateralization patterns of the early and late ERP components that are observed in these experiments, we discovered that: auditory processing of pitch variations in tones, as a bottom-up effect, elicited greater right hemisphere activation; in contrast, linguistic processing of lexical tones, as a top-down effect, elicited greater left hemisphere activation. We also found that both types of processing co-occurred in both the early (around 200 ms) and late (around 300–500 ms) time windows, which supported a parallel model of lexical tone perception. Unlike the previous view that language processing is special and performed by dedicated neural circuitry, our study has elucidated that language processing can be decomposed into general cognitive functions (e.g., sensory and memory) and shares neural resources with these functions. PMID:24723863

  3. The microstructure and formation of duplex and black plessite in iron meteorites

    NASA Technical Reports Server (NTRS)

    Zhang, J.; Williams, D. B.; Goldstein, J. I.

    1993-01-01

    Two of the most common plessite structures, duplex and black plessite, in the taenite region of the Widmanstätten pattern of two iron meteorites (Grant and Carlton) are characterized using high-resolution electron microscopy and microanalysis techniques. Two types of gamma precipitates, found in the duplex plessite and black plessite regions, respectively, are identified, and their morphologies are described. The formation of the plessite structure is discussed using the information obtained in this study and the results of a parallel investigation of decomposed martensitic Fe-Ni laboratory alloys.

  4. Cognitive costs of decision-making strategies: A resource demand decomposition analysis with a cognitive architecture.

    PubMed

    Fechner, Hanna B; Schooler, Lael J; Pachur, Thorsten

    2018-01-01

    Several theories of cognition distinguish between strategies that differ in the mental effort that their use requires. But how can the effort (or cognitive costs) associated with a strategy be conceptualized and measured? We propose an approach that decomposes the effort a strategy requires into the time costs associated with the demands for using specific cognitive resources. We refer to this approach as resource demand decomposition analysis (RDDA) and instantiate it in the cognitive architecture Adaptive Control of Thought-Rational (ACT-R). ACT-R provides the means to develop computer simulations of the strategies. These simulations take into account how strategies interact with quantitative implementations of cognitive resources and incorporate the possibility of parallel processing. Using this approach, we quantified, decomposed, and compared the time costs of two prominent strategies for decision making, take-the-best and tallying. Because take-the-best often ignores information and foregoes information integration, it has been considered simpler than strategies like tallying. However, in both ACT-R simulations and an empirical study we found that under increasing cognitive demands the response times (i.e., time costs) of take-the-best sometimes exceeded those of tallying. The RDDA suggested that this pattern is driven by greater requirements for working memory updates, memory retrievals, and the coordination of mental actions when using take-the-best compared to tallying. The results illustrate that assessing the relative simplicity of strategies requires consideration of the overall cognitive system in which the strategies are embedded. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Parallel community climate model: Description and user's guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drake, J.B.; Flanery, R.E.; Semeraro, B.D.

    This report gives an overview of a parallel version of the NCAR Community Climate Model, CCM2, implemented for MIMD massively parallel computers using a message-passing programming paradigm. The parallel implementation was developed on an Intel iPSC/860 with 128 processors and on the Intel Delta with 512 processors, and the initial target platform for the production version of the code is the Intel Paragon with 2048 processors. Because the implementation uses standard, portable message-passing libraries, the code has been easily ported to other multiprocessors supporting a message-passing programming paradigm. The parallelization strategy used is to decompose the problem domain into geographical patches and assign each processor the computation associated with a distinct subset of the patches. With this decomposition, the physics calculations involve only grid points and data local to a processor and are performed in parallel. Using parallel algorithms developed for the semi-Lagrangian transport, the fast Fourier transform and the Legendre transform, both physics and dynamics are computed in parallel with minimal data movement and modest change to the original CCM2 source code. Sequential or parallel history tapes are written and input files (in history tape format) are read sequentially by the parallel code to promote compatibility with production use of the model on other computer systems. A validation exercise has been performed with the parallel code and is detailed along with some performance numbers on the Intel Paragon and the IBM SP2. A discussion of reproducibility of results is included. A user's guide for the PCCM2 version 2.1 on the various parallel machines completes the report. Procedures for compilation, setup and execution are given. A discussion of code internals is included for those who may wish to modify and use the program in their own research.

  6. A linear decomposition method for large optimization problems. Blueprint for development

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1982-01-01

    A method is proposed for decomposing large optimization problems encountered in the design of engineering systems such as an aircraft into a number of smaller subproblems. The decomposition is achieved by organizing the problem and the subordinated subproblems in a tree hierarchy and optimizing each subsystem separately. Coupling of the subproblems is accounted for by subsequent optimization of the entire system based on sensitivities of the suboptimization problem solutions at each level of the tree to variables of the next higher level. A formalization of the procedure suitable for computer implementation is developed and the state of readiness of the implementation building blocks is reviewed showing that the ingredients for the development are on the shelf. The decomposition method is also shown to be compatible with the natural human organization of the design process of engineering systems. The method is also examined with respect to the trends in computer hardware and software progress to point out that its efficiency can be amplified by network computing using parallel processors.

  7. Methods for compressible fluid simulation on GPUs using high-order finite differences

    NASA Astrophysics Data System (ADS)

    Pekkilä, Johannes; Väisälä, Miikka S.; Käpylä, Maarit J.; Käpylä, Petri J.; Anjum, Omer

    2017-08-01

    We focus on implementing and optimizing a sixth-order finite-difference solver for simulating compressible fluids on a GPU using third-order Runge-Kutta integration. Since graphics processing units perform well in data-parallel tasks, this makes them an attractive platform for fluid simulation. However, high-order stencil computation is memory-intensive with respect to both main memory and the caches of the GPU. We present two approaches for simulating compressible fluids using 55-point and 19-point stencils. We seek to reduce the requirements for memory bandwidth and cache size in our methods by using cache blocking and decomposing a latency-bound kernel into several bandwidth-bound kernels. Our fastest implementation is bandwidth-bound and integrates 343 million grid points per second on a Tesla K40t GPU, achieving a 3.6× speedup over a comparable hydrodynamics solver benchmarked on two Intel Xeon E5-2690v3 processors. Our alternative GPU implementation is latency-bound and achieves the rate of 168 million updates per second.
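
    A minimal NumPy sketch of a sixth-order central first-derivative stencil, the basic building block of such solvers, is shown below; the periodic grid, the test function, and the absence of any GPU-specific cache blocking are simplifying assumptions.

```python
import numpy as np

def d1_sixth_order(u, dx):
    """Sixth-order central difference of du/dx on a periodic grid, using the
    7-point stencil (45*(u[i+1]-u[i-1]) - 9*(u[i+2]-u[i-2]) + (u[i+3]-u[i-3])) / (60*dx)."""
    c1, c2, c3 = 3.0 / 4.0, -3.0 / 20.0, 1.0 / 60.0
    return (c1 * (np.roll(u, -1) - np.roll(u, 1))
            + c2 * (np.roll(u, -2) - np.roll(u, 2))
            + c3 * (np.roll(u, -3) - np.roll(u, 3))) / dx

n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
err = np.max(np.abs(d1_sixth_order(np.sin(x), dx) - np.cos(x)))
print(err)   # should be tiny for a smooth periodic field
```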

  8. Method for the decontamination of soil containing solid organic explosives therein

    DOEpatents

    Radtke, Corey W.; Roberto, Francisco F.

    2000-01-01

    An efficient method for decontaminating soil containing organic explosives ("TNT" and others) in the form of solid portions or chunks which are not ordinarily subject to effective bacterial degradation. The contaminated soil is treated by delivering an organic solvent to the soil which is capable of dissolving the explosives. This process makes the explosives more bioavailable to natural bacteria in the soil which can decompose the explosives. An organic nutrient composition is also preferably added to facilitate decomposition and yield a compost product. After dissolution, the explosives are allowed to remain in the soil until they are decomposed by the bacteria. Decomposition occurs directly in the soil which avoids the need to remove both the explosives and the solvents (which either evaporate or are decomposed by the bacteria). Decomposition is directly facilitated by the solvent pre-treatment process described above which enables rapid bacterial remediation of the soil.

  9. Community structure and estimated contribution of primary consumers (Nematodes and Copepods) of decomposing plant litter (Juncus roemerianus and Rhizophora mangle) in South Florida

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fell, J.W.; Cefalu, R.

    1984-01-01

    The paper discusses the meiofauna associated with decomposing leaf litter from two species of coastal marshland plants: the black needle rush, Juncus roemerianus and the red mangrove, Rhizophora mangle. The following aspects were investigated: (1) types of meiofauna present, especially nematodes; (2) changes in meiofaunal community structures with regard to season, station location, and type of plant litter; (3) amount of nematode and copepod biomass present on the decomposing plant litter; and (4) an estimation of the possible role of the nematodes in the decomposition process. 28 references, 5 figures, 9 tables. (ACR)

  10. Parallel group independent component analysis for massive fMRI data sets.

    PubMed

    Chen, Shaojie; Huang, Lei; Qiu, Huitong; Nebel, Mary Beth; Mostofsky, Stewart H; Pekar, James J; Lindquist, Martin A; Eloyan, Ani; Caffo, Brian S

    2017-01-01

    Independent component analysis (ICA) is widely used in the field of functional neuroimaging to decompose data into spatio-temporal patterns of co-activation. In particular, ICA has found wide usage in the analysis of resting state fMRI (rs-fMRI) data. Recently, a number of large-scale data sets have become publicly available that consist of rs-fMRI scans from thousands of subjects. As a result, efficient ICA algorithms that scale well to the increased number of subjects are required. To address this problem, we propose a two-stage likelihood-based algorithm for performing group ICA, which we denote Parallel Group Independent Component Analysis (PGICA). By utilizing the sequential nature of the algorithm and parallel computing techniques, we are able to efficiently analyze data sets from large numbers of subjects. We illustrate the efficacy of PGICA, which has been implemented in R and is freely available through the Comprehensive R Archive Network, through simulation studies and application to rs-fMRI data from two large multi-subject data sets, consisting of 301 and 779 subjects respectively.
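
    The sketch below shows a plain temporal-concatenation group ICA using scikit-learn's FastICA as a stand-in; it is not the two-stage likelihood-based PGICA algorithm of the record, and the subject count, voxel count, and component number are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
# hypothetical rs-fMRI data: 5 subjects, 100 time points, 500 voxels each
subjects = [rng.random((100, 500)) for _ in range(5)]

# temporal concatenation, then spatial ICA: samples are voxels, features are time points
X = np.vstack(subjects).T                    # 500 voxels x (5*100) concatenated time points
ica = FastICA(n_components=10, max_iter=1000, random_state=0)
spatial_maps = ica.fit_transform(X)          # 500 x 10: group spatial maps
time_courses = ica.mixing_                   # (5*100) x 10: concatenated time courses
print(spatial_maps.shape, time_courses.shape)
```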

  11. An information-theoretic approach to motor action decoding with a reconfigurable parallel architecture.

    PubMed

    Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C

    2011-01-01

    Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian distribution of errors, this approach is not optimal for performance. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts) we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters, necessary in motor decoding, is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.

  12. Fast segmentation of satellite images using SLIC, WebGL and Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Donchyts, Gennadii; Baart, Fedor; Gorelick, Noel; Eisemann, Elmar; van de Giesen, Nick

    2017-04-01

    Google Earth Engine (GEE) is a parallel geospatial processing platform, which harmonizes access to petabytes of freely available satellite images. It provides a very rich API, allowing development of dedicated algorithms to extract useful geospatial information from these images. At the same time, modern GPUs provide thousands of computing cores, which are mostly not utilized in this context. In recent years, WebGL has become a popular and well-supported API, allowing fast image processing directly in web browsers. In this work, we will evaluate the applicability of WebGL to enable fast segmentation of satellite images. A new implementation of a Simple Linear Iterative Clustering (SLIC) algorithm using GPU shaders will be presented. SLIC is a simple and efficient method to decompose an image into visually homogeneous regions. It adapts a k-means clustering approach to generate superpixels efficiently. While this approach will be hard to scale, due to a significant amount of data to be transferred to the client, it should significantly improve exploratory possibilities and simplify development of dedicated algorithms for geoscience applications. Our prototype implementation will be used to improve surface water detection of reservoirs using multispectral satellite imagery.
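
    A CPU-side sketch of SLIC superpixel segmentation using scikit-image is given below as a stand-in for the WebGL shader implementation; the sample image and the per-superpixel statistic are assumptions for illustration only.

```python
import numpy as np
from skimage import data
from skimage.segmentation import slic

# any RGB image stands in here for a multispectral satellite tile
image = data.astronaut()

# SLIC: k-means clustering in (colour, x, y) space -> visually homogeneous superpixels
labels = slic(image, n_segments=400, compactness=10, start_label=1)
print(labels.shape, labels.max())            # one integer label per pixel

# per-superpixel mean intensity, a typical input for later water/land classification
means = np.array([image[labels == k].mean() for k in range(1, labels.max() + 1)])
print(means[:5])
```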

  13. Nonequilibrium adiabatic molecular dynamics simulations of methane clathrate hydrate decomposition

    NASA Astrophysics Data System (ADS)

    Alavi, Saman; Ripmeester, J. A.

    2010-04-01

    Nonequilibrium, constant energy, constant volume (NVE) molecular dynamics simulations are used to study the decomposition of methane clathrate hydrate in contact with water. Under adiabatic conditions, the rate of methane clathrate decomposition is affected by heat and mass transfer arising from the breakup of the clathrate hydrate framework and release of the methane gas at the solid-liquid interface and diffusion of methane through water. We observe that temperature gradients are established between the clathrate and solution phases as a result of the endothermic clathrate decomposition process and this factor must be considered when modeling the decomposition process. Additionally we observe that clathrate decomposition does not occur gradually with breakup of individual cages, but rather in a concerted fashion with rows of structure I cages parallel to the interface decomposing simultaneously. Due to the concerted breakup of layers of the hydrate, large amounts of methane gas are released near the surface which can form bubbles that will greatly affect the rate of mass transfer near the surface of the clathrate phase. The effects of these phenomena on the rate of methane hydrate decomposition are determined and implications on hydrate dissociation in natural methane hydrate reservoirs are discussed.

  14. Nonequilibrium adiabatic molecular dynamics simulations of methane clathrate hydrate decomposition.

    PubMed

    Alavi, Saman; Ripmeester, J A

    2010-04-14

    Nonequilibrium, constant energy, constant volume (NVE) molecular dynamics simulations are used to study the decomposition of methane clathrate hydrate in contact with water. Under adiabatic conditions, the rate of methane clathrate decomposition is affected by heat and mass transfer arising from the breakup of the clathrate hydrate framework and release of the methane gas at the solid-liquid interface and diffusion of methane through water. We observe that temperature gradients are established between the clathrate and solution phases as a result of the endothermic clathrate decomposition process and this factor must be considered when modeling the decomposition process. Additionally we observe that clathrate decomposition does not occur gradually with breakup of individual cages, but rather in a concerted fashion with rows of structure I cages parallel to the interface decomposing simultaneously. Due to the concerted breakup of layers of the hydrate, large amounts of methane gas are released near the surface which can form bubbles that will greatly affect the rate of mass transfer near the surface of the clathrate phase. The effects of these phenomena on the rate of methane hydrate decomposition are determined and implications on hydrate dissociation in natural methane hydrate reservoirs are discussed.

  15. Plant Diversity Impacts Decomposition and Herbivory via Changes in Aboveground Arthropods

    PubMed Central

    Ebeling, Anne; Meyer, Sebastian T.; Abbas, Maike; Eisenhauer, Nico; Hillebrand, Helmut; Lange, Markus; Scherber, Christoph; Vogel, Anja; Weigelt, Alexandra; Weisser, Wolfgang W.

    2014-01-01

    Loss of plant diversity influences essential ecosystem processes such as aboveground productivity, and can have cascading effects on the arthropod communities in adjacent trophic levels. However, few studies have examined how those changes in arthropod communities can have additional impacts on the ecosystem processes they mediate (e.g., pollination, bioturbation, predation, decomposition, herbivory). Therefore, including arthropod effects in predictions of the impact of plant diversity loss on such ecosystem processes is important but has received little study. In a grassland biodiversity experiment, we addressed this gap by assessing aboveground decomposer and herbivore communities and linking their abundance and diversity to rates of decomposition and herbivory. Path analyses showed that increasing plant diversity led to higher abundance and diversity of decomposing arthropods through higher plant biomass. Higher species richness of decomposers, in turn, enhanced decomposition. Similarly, species-rich plant communities hosted a higher abundance and diversity of herbivores through elevated plant biomass and C:N ratio, leading to higher herbivory rates. Integrating trophic interactions into the study of biodiversity effects is required to understand the multiple pathways by which biodiversity affects ecosystem functioning. PMID:25226237

  16. Decomposability and convex structure of thermal processes

    NASA Astrophysics Data System (ADS)

    Mazurek, Paweł; Horodecki, Michał

    2018-05-01

    We present an example of a thermal process (TP) for a system of d energy levels, which cannot be performed without instant access to the whole energy space. This TP is uniquely connected with a transition between some states of the system that cannot be performed without access to the whole energy space, even when approximate transitions are allowed. Pursuing the question of the decomposability of TPs into convex combinations of compositions of processes acting non-trivially on smaller subspaces, we investigate transitions within the subspace of states diagonal in the energy basis. For three-level systems, we determine the set of extremal points of these operations, as well as the minimal set of operations needed to perform an arbitrary TP, and connect the set of TPs with the thermomajorization criterion. We show that the structure of the set depends on temperature, which is associated with the fact that TPs cannot deterministically increase the extractable work from a state—a conclusion that holds for an arbitrary d-level system. We also connect the decomposability problem with the detailed balance symmetry of extremal TPs.

  17. Transcranial Magnetic Stimulation: Decomposing the Processes Underlying Action Preparation.

    PubMed

    Bestmann, Sven; Duque, Julie

    2016-08-01

    Preparing actions requires the operation of several cognitive control processes that influence the state of the motor system to ensure that the appropriate behavior is ultimately selected and executed. For example, some form of competition resolution ensures that the right action is chosen among alternatives, often in the presence of conflict; at the same time, impulse control ought to be deployed to prevent premature responses. Here we review how state-changes in the human motor system during action preparation can be studied through motor-evoked potentials (MEPs) elicited by transcranial magnetic stimulation over the contralateral primary motor cortex (M1). We discuss how the physiological fingerprints afforded by MEPs have helped to decompose some of the dynamic and effector-specific influences on the motor system during action preparation. We focus on competition resolution, conflict and impulse control, as well as on the influence of higher cognitive decision-related variables. The selected examples demonstrate the usefulness of MEPs as physiological readouts for decomposing the influence of distinct, but often overlapping, control processes on the human motor system during action preparation. © The Author(s) 2015.

  18. Process to make structured particles

    DOEpatents

    Knapp, Angela Michelle; Richard, Monique N; Luhrs, Claudia; Blada, Timothy; Phillips, Jonathan

    2014-02-04

    Disclosed is a process for making a composite material that contains structured particles. The process includes providing a first precursor in the form of a dry precursor powder, a precursor liquid, a precursor vapor of a liquid and/or a precursor gas. The process also includes providing a plasma that has a high field zone and passing the first precursor through the high field zone of the plasma. As the first precursor passes through the high field zone of the plasma, at least part of the first precursor is decomposed. An aerosol having a second precursor is provided downstream of the high field zone of the plasma, and the decomposed first material is allowed to condense onto the second precursor to form structured particles.

  19. A spectral analysis of the domain decomposed Monte Carlo method for linear systems

    DOE PAGES

    Slattery, Stuart R.; Evans, Thomas M.; Wilson, Paul P. H.

    2015-09-08

    The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of random walks from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem in numerical experiments to test the models for symmetric operators with spectral qualities similar to light water reactor problems. We find, in general, the derived approximations show good agreement with random walk lengths and leakage fractions computed by the numerical experiments.
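
    The following sketch shows a forward Neumann-Ulam Monte Carlo estimate for one component of x = Hx + b, which converges when the Neumann series of H converges; it does not reproduce the adjoint variant, domain decomposition, or diffusion-theory leakage estimates analysed in the record, and the toy matrix and absorption probability are assumptions.

```python
import numpy as np

def neumann_ulam_forward(H, b, i, n_walks=20000, seed=0):
    """Forward Neumann-Ulam estimate of x[i] for x = H x + b: each random
    walk accumulates weighted source contributions along its path."""
    rng = np.random.default_rng(seed)
    P = np.abs(H) / np.abs(H).sum(axis=1, keepdims=True)   # transition probabilities
    cdf = np.cumsum(P, axis=1)
    kill = 0.2                                             # absorption probability per step
    total = 0.0
    for _ in range(n_walks):
        state, weight, est = i, 1.0, b[i]
        while rng.random() > kill:                         # continue the walk
            nxt = int(np.searchsorted(cdf[state], rng.random()))
            weight *= H[state, nxt] / (P[state, nxt] * (1.0 - kill))
            state = nxt
            est += weight * b[state]
        total += est
    return total / n_walks

H = np.array([[0.1, 0.3], [0.2, 0.2]])
b = np.array([1.0, 2.0])
x_exact = np.linalg.solve(np.eye(2) - H, b)
print(neumann_ulam_forward(H, b, 0), x_exact[0])           # estimate vs. direct solve
```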

  20. The Prefrontal Model Revisited: Double Dissociations Between Young Sleep Deprived and Elderly Subjects on Cognitive Components of Performance

    PubMed Central

    Tucker, Adrienne M.; Stern, Yaakov; Basner, Robert C.; Rakitin, Brian C.

    2011-01-01

    Study Objectives: The prefrontal model suggests that total sleep deprivation (TSD) and healthy aging produce parallel cognitive deficits. Here we decompose global performance on two common tasks into component measures of specific cognitive processes to pinpoint the source of impairments in elderly and young TSD participants relative to young controls and to each other. Setting: The delayed letter recognition task (DLR) was performed in 3 studies. The psychomotor vigilance task (PVT) was performed in 1 of the DLR studies and 2 additional studies. Subjects: For DLR, young TSD (n = 20, age = 24.60 ± 0.62 years) and young control (n = 17, age = 24.00 ± 2.42); elderly (n = 26, age = 69.92 ± 1.06). For the PVT, young TSD (n = 18, age = 26.65 ± 4.57) and young control (n = 16, age = 25.19 ± 2.90); elderly (n = 21, age = 71.1 ± 4.92). Measurements and Results: Both elderly and young TSD subjects displayed impaired reaction time (RT), our measure of global performance, on both tasks relative to young controls. After decomposing global performance on the DLR, however, a double dissociation was observed as working memory scanning speed was impaired only in elderly subjects while other components of performance were impaired only by TSD. Similarly, for the PVT a second double dissociation was observed as vigilance impairments were present only in TSD while short-term response preparation effects were altered only in the elderly. Conclusions: The similarity between TSD and the elderly in impaired performance was evident only when examining global RT. In contrast, when specific cognitive components were examined double dissociations were observed between TSD and elderly subjects. This demonstrates the heterogeneity in those cognitive processes impaired in TSD versus the elderly. Citation: Tucker AM; Stern Y; Basner RC; Rakitin BC. The prefrontal model revisited: double dissociations between young sleep deprived and elderly subjects on cognitive components of performance. SLEEP 2011;34(8):1039-1050. PMID:21804666

  1. The Forest Method as a New Parallel Tree Method with the Sectional Voronoi Tessellation

    NASA Astrophysics Data System (ADS)

    Yahagi, Hideki; Mori, Masao; Yoshii, Yuzuru

    1999-09-01

    We have developed a new parallel tree method which will be called the forest method hereafter. This new method uses the sectional Voronoi tessellation (SVT) for the domain decomposition. The SVT decomposes a whole space into polyhedra and allows their flat borders to move by assigning different weights. The forest method determines these weights based on the load balancing among processors by means of the overload diffusion (OLD). Moreover, since all the borders are flat, before receiving the data from other processors, each processor can collect enough data to calculate the gravity force with precision. Both the SVT and the OLD are coded in a highly vectorizable manner to run on vector parallel processors. The parallel code based on the forest method with the Message Passing Interface is run on various platforms so that a wide portability is guaranteed. Extensive calculations with 15 processors of Fujitsu VPP300/16R indicate that the code can calculate the gravity force exerted on 10^5 particles each second for some ideal dark halo. This code is found to enable an N-body simulation with 10^7 or more particles for a wide dynamic range and is therefore a very powerful tool for the study of galaxy formation and large-scale structure in the universe.

  2. Methods for assessing the impact of avermectins on the decomposer community of sheep pastures.

    PubMed

    King, K L

    1993-06-01

    This paper outlines methods which can be used in the field assessment of potentially toxic chemicals such as the avermectins. The procedures focus on measuring the effects of the drug on decomposer organisms and the nutrient cycling process in pastures grazed by sheep. Measurements of decomposer activity are described along with methods for determining dry and organic matter loss and mineral loss from dung to the underlying soil. Sampling methods for both micro- and macro-invertebrates are discussed along with determination of the percentage infection of plant roots with vesicular-arbuscular mycorrhizal fungi. An integrated sampling unit for assessing the ecotoxicity of ivermectin in pastures grazed by sheep is presented.

  3. Vertebrate Decomposition Is Accelerated by Soil Microbes

    PubMed Central

    Lauber, Christian L.; Metcalf, Jessica L.; Keepers, Kyle; Ackermann, Gail; Carter, David O.

    2014-01-01

    Carrion decomposition is an ecologically important natural phenomenon influenced by a complex set of factors, including temperature, moisture, and the activity of microorganisms, invertebrates, and scavengers. The role of soil microbes as decomposers in this process is essential but not well understood and represents a knowledge gap in carrion ecology. To better define the role and sources of microbes in carrion decomposition, lab-reared mice were decomposed on either (i) soil with an intact microbial community or (ii) soil that was sterilized. We characterized the microbial community (16S rRNA gene for bacteria and archaea, and the 18S rRNA gene for fungi and microbial eukaryotes) for three body sites along with the underlying soil (i.e., gravesoils) at time intervals coinciding with visible changes in carrion morphology. Our results indicate that mice placed on soil with intact microbial communities reach advanced stages of decomposition 2 to 3 times faster than those placed on sterile soil. Microbial communities associated with skin and gravesoils of carrion in stages of active and advanced decay were significantly different between soil types (sterile versus untreated), suggesting that substrates on which carrion decompose may partially determine the microbial decomposer community. However, the source of the decomposer community (soil- versus carcass-associated microbes) was not clear in our data set, suggesting that greater sequencing depth needs to be employed to identify the origin of the decomposer communities in carrion decomposition. Overall, our data show that soil microbial communities have a significant impact on the rate at which carrion decomposes and have important implications for understanding carrion ecology. PMID:24907317

  4. Layout compliance for triple patterning lithography: an iterative approach

    NASA Astrophysics Data System (ADS)

    Yu, Bei; Garreton, Gilda; Pan, David Z.

    2014-10-01

    As the semiconductor process further scales down, the industry encounters many lithography-related issues. At the 14nm logic node and beyond, triple patterning lithography (TPL) is one of the most promising techniques for the Metal1 layer and possibly the Via0 layer. As one of the most challenging problems in TPL, layout decomposition has recently received increasing attention from both industry and academia. Ideally the decomposer should point out locations in the layout that are not triple patterning decomposable and therefore require manual intervention by designers. A traditional decomposition flow is an iterative process in which each iteration consists of an automatic layout decomposition step and a manual layout modification task. However, due to the NP-hardness of triple patterning layout decomposition, automatic full-chip layout decomposition requires long computational times, so design closure issues linger in the traditional flow. To address this issue, we present a novel incremental layout decomposition framework that accelerates iterative decomposition. In the first iteration, our decomposer not only points out all conflicts but also suggests how to fix them. After the layout modification, instead of solving the full-chip problem from scratch, our decomposer provides a quick solution for the selected portion of the layout. We believe this framework is efficient in terms of both runtime and designer friendliness.

  5. The FPase properties and morphology changes of a cellulolytic bacterium, Sporocytophaga sp. JL-01, on decomposing filter paper cellulose.

    PubMed

    Wang, Xiuran; Peng, Zhongqi; Sun, Xiaoling; Liu, Dongbo; Chen, Shan; Li, Fan; Xia, Hongmei; Lu, Tiancheng

    2012-01-01

    Sporocytophaga sp. JL-01 is a gliding, cellulose-degrading bacterium that can decompose filter paper (FP), carboxymethyl cellulose (CMC), and cellulose CF11. In this paper, the morphological characteristics of S. sp. JL-01 growing in FP liquid medium were studied by scanning electron microscopy (SEM), and one of the FPase components of this bacterium was analyzed. The results showed that the cell shape varied during filter paper cellulose decomposition and that the rod shape might be connected with filter paper decomposition. After incubation for 120 h, the filter paper was decomposed significantly, and it was completely degraded within 144 h. An FPase1 was purified from the supernatant and its characteristics were analyzed. The molecular weight of FPase1 was 55 kDa. The optimum pH was 7.2 and the optimum temperature was 50°C under the experimental conditions. Zn(2+) and Co(2+) enhanced the enzyme activity, but Fe(3+) inhibited it.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hesse, Cedar N.; Mueller, Rebecca C.; Vuyisich, Momchilo

    Anthropogenic N deposition alters patterns of C and N cycling in temperate forests, where forest floor litter decomposition is a key process mediated by a diverse community of bacteria and fungi. To track forest floor decomposer activity we generated metatranscriptomes that simultaneously surveyed the actively expressed bacterial and eukaryote genes in the forest floor, to compare the impact of N deposition on the decomposers in two natural maple forests in Michigan, USA, where replicate field plots had been amended with N for 16 years. Site and N amendment responses were compared using about 74,000 carbohydrate active enzyme transcript sequences (CAZymes) in each metatranscriptome. Parallel ribosomal RNA (rRNA) surveys of bacterial and fungal biomass and taxonomic composition showed no significant differences in either biomass or OTU richness between the two sites or in response to N. Site and N amendment were not significant variables defining bacterial taxonomic composition, but they were significant for fungal community composition, explaining 17 and 14% of the variability, respectively. The relative abundance of expressed bacterial and fungal CAZymes changed significantly with N amendment in one of the forests, and N-response trends were also identified in the second forest. Although the two ambient forests were similar in community biomass, taxonomic structure and active CAZyme profile, the shifts in active CAZyme profiles in response to N-amendment differed between the sites. One site responded with an over-expression of bacterial CAZymes, and the other site responded with an over-expression of both fungal and different bacterial CAZymes. Both sites showed reduced representation of fungal lignocellulose degrading enzymes in N-amendment plots. The metatranscriptome approach provided a holistic assessment of eukaryote and bacterial gene expression and is applicable to other systems where eukaryotes and bacteria interact.

  7. Machine Learning Based Online Performance Prediction for Runtime Parallelization and Task Scheduling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, J; Ma, X; Singh, K

    2008-10-09

    With the emerging many-core paradigm, parallel programming must extend beyond its traditional realm of scientific applications. Converting existing sequential applications as well as developing next-generation software requires assistance from hardware, compilers and runtime systems to exploit parallelism transparently within applications. These systems must decompose applications into tasks that can be executed in parallel and then schedule those tasks to minimize load imbalance. However, many systems lack a priori knowledge about the execution time of all tasks to perform effective load balancing with low scheduling overhead. In this paper, we approach this fundamental problem using machine learning techniques, first generating performance models for all tasks and then applying those models to perform automatic performance prediction across program executions. We also extend an existing scheduling algorithm to use generated task cost estimates for online task partitioning and scheduling. We implement the above techniques in the pR framework, which transparently parallelizes scripts in the popular R language, and evaluate their performance and overhead with both a real-world application and a large number of synthetic representative test scripts. Our experimental results show that our proposed approach significantly improves task partitioning and scheduling, with maximum improvements of 21.8%, 40.3% and 22.1% and average improvements of 15.9%, 16.9% and 4.2% for LMM (a real R application) and synthetic test cases with independent and dependent tasks, respectively.
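    As a rough sketch of the idea described above, learning per-task cost models and then using the predicted costs for task partitioning and scheduling, the following Python fragment fits a simple regression on hypothetical task features and measured runtimes and feeds the predictions to a greedy longest-processing-time scheduler. It is an illustrative stand-in, not the pR framework's implementation; the feature set, model choice, and data are assumptions.

      import heapq
      import numpy as np
      from sklearn.linear_model import LinearRegression

      # Hypothetical training data: one task feature (e.g., input size) vs. measured runtime
      X_train = np.array([[1e3], [5e3], [1e4], [5e4], [1e5]])
      y_train = np.array([0.02, 0.09, 0.21, 1.0, 2.1])
      cost_model = LinearRegression().fit(X_train, y_train)

      def schedule(task_features, n_workers):
          # Greedy longest-processing-time scheduling driven by predicted task costs.
          costs = cost_model.predict(task_features)
          order = np.argsort(-costs)                   # longest predicted task first
          heap = [(0.0, w) for w in range(n_workers)]  # (accumulated load, worker id)
          heapq.heapify(heap)
          assignment = {}
          for t in order:
              load, w = heapq.heappop(heap)
              assignment[int(t)] = w
              heapq.heappush(heap, (load + costs[t], w))
          return assignment

      print(schedule(np.array([[2e4], [8e4], [1e3], [6e4]]), n_workers=2))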

  8. The Electrolytic Effect on the Catalytic Degradation of Dye and Nitrate Ion by New Ceramic Beads of Natural Minerals and TiO2

    NASA Astrophysics Data System (ADS)

    Sata, Akiyoshi; Sakai, Takako; Goto, Yusuke; Ohta, Toshiyuki; Hayakawa, Katumitu

    2007-05-01

    We have developed a new hybrid ceramic material, "Taiyo", as a water-processing catalyst. The porous ceramic has a core-shell structure. It completely decolorized dye solutions as well as the wastewater output after primary treatment by microorganisms at a pig farm. This new material accelerated water purification when an electric voltage was applied. The degradation of dyes and of pig urine output from the primary treatment was accelerated by applying voltage. Nitrate in groundwater was also decomposed only when voltage was applied; it was not decomposed without voltage.

  9. 3D multiphysics modeling of superconducting cavities with a massively parallel simulation suite

    NASA Astrophysics Data System (ADS)

    Kononenko, Oleksiy; Adolphsen, Chris; Li, Zenghai; Ng, Cho-Kuen; Rivetta, Claudio

    2017-10-01

    Radiofrequency cavities based on superconducting technology are widely used in particle accelerators for various applications. The cavities usually have high quality factors and hence narrow bandwidths, so the field stability is sensitive to detuning from the Lorentz force and external loads, including vibrations and helium pressure variations. If not properly controlled, the detuning can result in a serious performance degradation of a superconducting accelerator, so an understanding of the underlying detuning mechanisms can be very helpful. Recent advances in the simulation suite ace3p have enabled realistic multiphysics characterization of such complex accelerator systems on supercomputers. In this paper, we present the new capabilities in ace3p for large-scale 3D multiphysics modeling of superconducting cavities, in particular, a parallel eigensolver for determining mechanical resonances, a parallel harmonic response solver to calculate the response of a cavity to external vibrations, and a numerical procedure to decompose mechanical loads, such as from the Lorentz force or piezoactuators, into the corresponding mechanical modes. These capabilities have been used to do an extensive rf-mechanical analysis of dressed TESLA-type superconducting cavities. The simulation results and their implications for the operational stability of the Linac Coherent Light Source-II are discussed.
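    A minimal sketch of the load-decomposition step mentioned above, expanding an external mechanical load in the basis of mechanical eigenmodes, is shown below for a toy spring-mass chain. It only illustrates the modal projection with mass-orthonormal eigenvectors; the matrices and the load are invented and have nothing to do with the actual ace3p cavity models.

      import numpy as np
      from scipy.linalg import eigh

      # Toy stiffness and mass matrices for a small spring-mass chain (placeholder model)
      n = 6
      K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      M = np.eye(n)

      # Mechanical resonances: generalized eigenproblem K phi = omega^2 M phi
      omega2, modes = eigh(K, M)        # columns of `modes` are mass-orthonormal eigenmodes

      # Decompose an external load (e.g., a Lorentz-force-like pressure) into modal amplitudes
      load = np.linspace(0.0, 1.0, n)
      amplitudes = modes.T @ (M @ load)  # modal participation factors

      # Reconstruction check: the load is recovered from its modal expansion
      assert np.allclose(modes @ amplitudes, load)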

  10. Different pathways but same result? Comparing chemistry and biological effects of burned and decomposed litter

    NASA Astrophysics Data System (ADS)

    Mazzoleni, Stefano; Bonanomi, Giuliano; Incerti, Guido; El-Gawad, Ahmed M. Abd; Sarker, Tushar Chandra; Cesarano, Gaspare; Saulino, Luigi; Saracino, Antonio; Castro Rego, Francisco

    2017-04-01

    Litter burning and biological decomposition are oxidative processes co-occurring in many terrestrial ecosystems, producing organic matter with different chemical properties and differently affecting plant growth and soil microbial activity. Here, we tested the chemical convergence hypothesis (i.e. materials with different initial chemistry tend to converge towards a common profile, with similar biological effects, as the oxidative process advances) for burning and decomposition. We compared the molecular composition of 63 organic materials - 7 litter types either fresh, decomposed for 30, 90, 180 days, or heated at 100, 200, 300, 400, 500 °C - as assessed by 13C NMR. We used litter water extracts (5% dw) as treatments in bioassays on plant (Lepidium sativum) and fungal (Aspergillus niger) growth, and a washed quartz sand amended with litter materials (0.5 % dw) to assess heterotrophic respiration by CO2 flux chamber. We observed different molecular variations for materials either burning (i.e. a sharp increase of aromatic C and a decrease of most other fractions above 200 °C) or decomposing (i.e. early increase of alkyl, methoxyl and N-alkyl C and decrease of O-alkyl and di-O-alkyl C fractions). Soil respiration and fungal growth progressively decreased with litter age and temperature. Plant growth underwent an inhibitory effect by untreated litter, more and less rapidly released over decomposing and burning materials, respectively. Correlation analysis between NMR and bioassay data showed that opposite responses for soil respiration and fungi, compared to plants, are related to essentially the same C molecular types. Our findings suggest a functional convergence of decomposed and burnt organic substrates, emerging from the balance between the bioavailability of labile C sources and the presence of recalcitrant and pyrogenic compounds, oppositely affecting different trophic levels.

  11. Cooperativity-regulated parallel pathways of the bacteriorhodopsin photocycle.

    PubMed

    Tokaji, Z

    1995-01-03

    The paper demonstrates that the actinic light density dependence of the millisecond part of the bacteriorhodopsin (BR) photocycle at high pH predicts a model, which is the same in the sequence of the intermediates as concluded previously on the basis of double flash experiments [1992, FEBS Lett. 311, 267-270]. This model consists of the Mf-->N-->BR and M(s)-->BR parallel pathways, the relative yields of which are regulated by cooperative interaction of the BR molecules. The decay of M(s) is always slower than the decay of Mf and described as a direct reprotonation of the Schiff-base from the bulk, and the recovery of the ground-state nearly at the same time. M(s) is decomposed into M'f and M's. The first does not reprotonate, and similarly to Mf, it is suggested to be before the conformational change (switch), which latter process would be just before the decay of Mf. A simple way for the determination of the kinetics is also used. This confirms that the amount of N decreases with increasing fraction cycling and shows that the decay rate of N is independent of the fraction cycling. The differences in the kinetics are compared to each other, and they seem to allow a new way of kinetic evaluation at least under special conditions. The aim of this paper was briefly explained in my poster presented on the VIth International Conference on Retinal Protein (see [14]).

  12. Parallel 3D Multi-Stage Simulation of a Turbofan Engine

    NASA Technical Reports Server (NTRS)

    Turner, Mark G.; Topp, David A.

    1998-01-01

    A 3D multistage simulation of each component of a modern GE Turbofan engine has been made. An axisymmetric view of this engine is presented in the document. This includes a fan, booster rig, high pressure compressor rig, high pressure turbine rig and a low pressure turbine rig. In the near future, all components will be run in a single calculation for a solution of 49 blade rows. The simulation exploits the use of parallel computations by using two levels of parallelism. Each blade row is run in parallel and each blade row grid is decomposed into several domains and run in parallel. 20 processors are used for the 4 blade row analysis. The average passage approach developed by John Adamczyk at NASA Lewis Research Center has been further developed and parallelized. This is APNASA Version A. It is a Navier-Stokes solver using a 4-stage explicit Runge-Kutta time marching scheme with variable time steps and residual smoothing for convergence acceleration. It has an implicit K-E turbulence model which uses an ADI solver to factor the matrix. Between 50 and 100 explicit time steps are solved before a blade row body force is calculated and exchanged with the other blade rows. This outer iteration has been coined a "flip." Efforts have been made to make the solver linearly scalable with the number of blade rows. Enough flips are run (between 50 and 200) so that the solution in the entire machine is no longer changing. The K-E equations are generally solved every other explicit time step. One of the key requirements in the development of the parallel code was to make the parallel solution exactly (bit for bit) match the serial solution. This has helped isolate many small parallel bugs and guarantee the parallelization was done correctly. The domain decomposition is done only in the axial direction since the number of points axially is much larger than in the other two directions. This code uses MPI for message passing. Parallel speedup of the solver portion (excluding I/O and the body force calculation) was measured for a grid with 227 points axially.

  13. Organic and inorganic–organic thin film structures by molecular layer deposition: A review

    PubMed Central

    Sundberg, Pia

    2014-01-01

    Summary The possibility to deposit purely organic and hybrid inorganic–organic materials in a way parallel to the state-of-the-art gas-phase deposition method of inorganic thin films, i.e., atomic layer deposition (ALD), is currently experiencing a strongly growing interest. Like ALD in case of the inorganics, the emerging molecular layer deposition (MLD) technique for organic constituents can be employed to fabricate high-quality thin films and coatings with thickness and composition control on the molecular scale, even on complex three-dimensional structures. Moreover, by combining the two techniques, ALD and MLD, fundamentally new types of inorganic–organic hybrid materials can be produced. In this review article, we first describe the basic concepts regarding the MLD and ALD/MLD processes, followed by a comprehensive review of the various precursors and precursor pairs so far employed in these processes. Finally, we discuss the first proof-of-concept experiments in which the newly developed MLD and ALD/MLD processes are exploited to fabricate novel multilayer and nanostructure architectures by combining different inorganic, organic and hybrid material layers into on-demand designed mixtures, superlattices and nanolaminates, and employing new innovative nanotemplates or post-deposition treatments to, e.g., selectively decompose parts of the structure. Such layer-engineered and/or nanostructured hybrid materials with exciting combinations of functional properties hold great promise for high-end technological applications. PMID:25161845

  14. An organization of a digital subsystem for generating spacecraft timing and control signals

    NASA Technical Reports Server (NTRS)

    Perlman, M.

    1972-01-01

    A modulo-M counter (of clock pulses) is decomposed into parallel modulo-m_i counters, where each m_i is a prime-power divisor of M. The modulo-p_i counters are feedback shift registers which cycle through p_i distinct states. By this organization, every possible nontrivial data frame subperiod and delayed subperiod may be derived. The number of clock pulses required to bring every modulo-p_i counter to a respective designated state or count is determined by the Chinese remainder theorem. This corresponds to the solution of simultaneous congruences over relatively prime moduli.
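    As a small illustration of the counting scheme just described, the Python sketch below reconstructs the global clock-pulse count from the residues held by parallel prime-power counters using the Chinese remainder theorem. The moduli and the pulse count are arbitrary example values, not taken from the original subsystem.

      from math import prod

      def crt(residues, moduli):
          # Reconstruct the unique count modulo prod(moduli) from per-counter residues
          # (Chinese remainder theorem; the moduli must be pairwise coprime).
          M = prod(moduli)
          total = 0
          for r, m in zip(residues, moduli):
              Mi = M // m
              total += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m) is the modular inverse
          return total % M

      # Example: M = 8 * 9 * 5 = 360, i.e., three parallel prime-power counters
      moduli = [8, 9, 5]
      clock_pulses = 247
      residues = [clock_pulses % m for m in moduli]   # states of the parallel counters
      assert crt(residues, moduli) == clock_pulses % 360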

  15. Decomposition of cellulose by ultrasonic welding in water

    NASA Astrophysics Data System (ADS)

    Nomura, Shinfuku; Miyagawa, Seiya; Mukasa, Shinobu; Toyota, Hiromichi

    2016-07-01

    The use of ultrasonic welding to decompose cellulose placed in water was examined experimentally. Filter paper was used as the decomposition material, with a 19.5 kHz horn-type transducer adopted as the ultrasonic welding power source. The frictional heat at the point where the tip of the ultrasonic horn contacts the filter paper decomposes the cellulose into 5-hydroxymethylfurfural (5-HMF), furfural, and oligosaccharides through the hydrolysis and thermolysis that occur in the welding process.

  16. A Greener Arctic: Vascular Plant Litter Input in Subarctic Peat Bogs Changes Soil Invertebrate Diets and Decomposition Patterns

    NASA Astrophysics Data System (ADS)

    Krab, E. J.; Berg, M. P.; Aerts, R.; van Logtestijn, R. S. P.; Cornelissen, H. H. C.

    2014-12-01

    Climate-change-induced trends towards shrub dominance in subarctic, moss-dominated peatlands will most likely have large effects on soil carbon (C) dynamics through an input of more easily decomposable litter. The mechanisms by which this increase in vascular litter input interacts with the abundance and diet-choice of the decomposer community to alter C-processing have, however, not yet been unraveled. We used a novel 13C tracer approach to link invertebrate species composition (Collembola), abundance and species-specific feeding behavior to C-processing of vascular and peat moss litters. We incubated different litter mixtures, 100% Sphagnum moss litter, 100% Betula leaf litter, and a 50/50 mixture of both, in mesocosms for 406 days. We revealed the transfer of C from the litters to the soil invertebrate species by 13C labeling of each of the litter types and assessed 13C signatures of the invertebrates. Collembola species composition differed significantly between Sphagnum and Betula litter. Within the 'single type litter' mesocosms, Collembola species showed different 13C signatures, implying species-specific differences in diet choice. Surprisingly, the species composition and Collembola abundance changed relatively little as a consequence of Betula input to a Sphagnum based system. Their diet choice, however, changed drastically; species-specific differences in diet choice disappeared and approximately 67% of the food ingested by all Collembola originated from Betula litter. Furthermore, litter decomposition patterns corresponded to these findings; mass loss of Betula increased from 16.1% to 26.2% when decomposing in combination with Sphagnum, while Sphagnum decomposed even more slowly in combination with Betula litter (1.9%) than alone (4.7%). This study is the first to empirically show that collective diet shifts of the peatland decomposer community from mosses towards vascular plant litter may drive altered decomposition patterns. In addition, we showed that although species-specific differences in Collembola feeding behavior appear to exist, species are very plastic in their diet. This implies that changes in C turnover rates with vegetation shifts might well be due to diet shifts of the present decomposer community rather than to changes in species composition.

  17. Decomposing intuitive components in a conceptual problem solving task.

    PubMed

    Reber, Rolf; Ruch-Monachon, Marie-Antoinette; Perrig, Walter J

    2007-06-01

    Research into intuitive problem solving has shown that participants' hypotheses were objectively closer to the correct solution than their subjective ratings of closeness indicated. After separating conceptually intuitive problem solving from the solutions of rational incremental tasks and of sudden insight tasks, we replicated this finding using more precise measures in a conceptual problem-solving task. In a second study, we distinguished performance level, processing style, implicit knowledge and subjective feeling of closeness to the solution within the problem-solving task and examined the relationships of these different components with measures of intelligence and personality. Verbal intelligence correlated with performance level in problem solving, but not with processing style and implicit knowledge. Faith in intuition, openness to experience, and conscientiousness correlated with processing style, but not with implicit knowledge. These findings suggest that one needs to decompose processing style and intuitive components in problem solving to make predictions about the effects of intelligence and personality measures.

  18. Three-dimensional ceramic molding process based on microstereolithography for the production of piezoelectric energy harvesters

    NASA Astrophysics Data System (ADS)

    Maruo, Shoji; Sugiyama, Kenji; Daicho, Yuya; Monri, Kensaku

    2014-03-01

    A three-dimensional (3-D) molding process using a master polymer mold produced by microstereolithography has been developed for the production of piezoelectric ceramic elements. In this method, ceramic slurry is injected into a 3-D polymer mold via a centrifugal casting process. The polymer master mold is thermally decomposed so that complex 3-D piezoelectric ceramic elements can be produced. As an example, we produced a spiral piezoelectric element that can convert multidirectional loads into a voltage. It was confirmed that a prototype of the spiral piezoelectric element could generate a voltage when a load was applied both parallel and lateral to the helical axis. A power output of 123 pW was obtained by applying a maximum load of 2.8 N at 2 Hz along the helical axis. In addition, to improve the power generation performance, we utilized a two-step sintering process to obtain dense piezoelectric elements. As a result, we obtained a sintered body with a relative density of 92.8%. The piezoelectric constant d31 of the sintered body reached -40.0 pC/N. Furthermore, we analyzed the open-circuit voltage of the spiral piezoelectric element using COMSOL Multiphysics. The analysis showed that electrodes patterned according to the surface potential distribution of the spiral piezoelectric element could provide an output voltage 20 times larger than that of uniform electrodes.

  19. The Reciprocal Relations between Morphological Processes and Reading

    ERIC Educational Resources Information Center

    Kruk, Richard S.; Bergman, Krista

    2013-01-01

    Reciprocal relations between emerging morphological processes and reading skills were examined in a longitudinal study tracking children from Grade 1 through Grade 3. The aim was to examine predictive relationships between productive morphological processing involving composing and decomposing of inflections and derivations, reading ability for…

  20. Development of processes for the production of solar grade silicon from halides and alkali metals, phase 1 and phase 2

    NASA Technical Reports Server (NTRS)

    Dickson, C. R.; Gould, R. K.; Felder, W.

    1981-01-01

    High-temperature reactions of silicon halides with alkali metals for the production of solar-grade silicon are described. Product separation and collection processes were evaluated, heat release parameters were measured for scaling purposes, effects of reactants and/or products on materials of reactor construction were determined, and a preliminary engineering and economic analysis of a scaled-up process was made. The feasibility of the basic process to make and collect silicon was demonstrated. The jet impaction/separation process was demonstrated to be a purification process. The rate at which gas-phase species form silicon particle precursors, the time required for silane decomposition to produce particles, and the competing rate of growth of silicon seed particles injected into a decomposing silane environment were determined. The extent of silane decomposition as a function of residence time, temperature, and pressure was measured by infrared absorption spectroscopy. A simple model is presented to explain the growth of silicon in a decomposing silane environment.

  1. Thermodynamic analysis of trimethylgallium decomposition during GaN metal organic vapor phase epitaxy

    NASA Astrophysics Data System (ADS)

    Sekiguchi, Kazuki; Shirakawa, Hiroki; Chokawa, Kenta; Araidai, Masaaki; Kangawa, Yoshihiro; Kakimoto, Koichi; Shiraishi, Kenji

    2018-04-01

    We analyzed the decomposition of Ga(CH3)3 (TMG) during the metal organic vapor phase epitaxy (MOVPE) of GaN on the basis of first-principles calculations and thermodynamic analysis. We performed activation energy calculations of TMG decomposition and determined the main reaction processes of TMG during GaN MOVPE. We found that TMG reacts with the H2 carrier gas and that (CH3)2GaH is generated after the desorption of the methyl group. Next, (CH3)2GaH decomposes into (CH3)GaH2 and this decomposes into GaH3. Finally, GaH3 becomes GaH. In the MOVPE growth of GaN, TMG decomposes into GaH by the successive desorption of its methyl groups. The results presented here concur with recent high-resolution mass spectroscopy results.

  2. Foliar pH as a new plant trait: can it explain variation in foliar chemistry and carbon cycling processes among subarctic plant species and types?

    PubMed

    Cornelissen, J H C; Quested, H M; van Logtestijn, R S P; Pérez-Harguindeguy, N; Gwynn-Jones, D; Díaz, S; Callaghan, T V; Press, M C; Aerts, R

    2006-03-01

    Plant traits have become popular as predictors of interspecific variation in important ecosystem properties and processes. Here we introduce foliar pH as a possible new plant trait, and tested whether (1) green leaf pH or leaf litter pH correlates with biochemical and structural foliar traits that are linked to biogeochemical cycling; (2) there is consistent variation in green leaf pH or leaf litter pH among plant types as defined by nutrient uptake mode and higher taxonomy; (3) green leaf pH can predict a significant proportion of variation in leaf digestibility among plant species and types; (4) leaf litter pH can predict a significant proportion of variation in leaf litter decomposability among plant species and types. We found some evidence in support of all four hypotheses for a wide range of species in a subarctic flora, although cryptogams (fern allies and a moss) tended to weaken the patterns by showing relatively poor leaf digestibility or litter decomposability at a given pH. Among seed plant species, green leaf pH itself explained only up to a third of the interspecific variation in leaf digestibility and leaf litter pH up to a quarter of the interspecific variation in leaf litter decomposability. However, foliar pH substantially improved the power of foliar lignin and/or cellulose concentrations as predictors of these processes when added to regression models as a second variable. When species were aggregated into plant types as defined by higher taxonomy and nutrient uptake mode, green-specific leaf area was a more powerful predictor of digestibility or decomposability than any of the biochemical traits including pH. The usefulness of foliar pH as a new predictive trait, whether or not in combination with other traits, remains to be tested across more plant species, types and biomes, and also in relation to other plant or ecosystem traits and processes.

  3. Placement-aware decomposition of a digital standard cells library for double patterning lithography

    NASA Astrophysics Data System (ADS)

    Wassal, Amr G.; Sharaf, Heba; Hammouda, Sherif

    2012-11-01

    To continue scaling the circuit features down, Double Patterning (DP) technology is needed in 22nm technologies and lower. DP requires decomposing the layout features into two masks for pitch relaxation, such that the spacing between any two features on each mask is greater than the minimum allowed mask spacing. The relaxed pitches of each mask are then processed on two separate exposure steps. In many cases, post-layout decomposition fails to decompose the layout into two masks due to the presence of conflicts. Post-layout decomposition of a standard cells block can result in native conflicts inside the cells (internal conflict), or native conflicts on the boundary between two cells (boundary conflict). Resolving native conflicts requires a redesign and/or multiple iterations for the placement and routing phases to get a clean decomposition. Therefore, DP compliance must be considered in earlier phases, before getting the final placed cell block. The main focus of this paper is generating a library of decomposed standard cells to be used in a DP-aware placer. This library should contain all possible decompositions for each standard cell, i.e., these decompositions consider all possible combinations of boundary conditions. However, the large number of combinations of boundary conditions for each standard cell will significantly increase the processing time and effort required to obtain all possible decompositions. Therefore, an efficient methodology is required to reduce this large number of combinations. In this paper, three different reduction methodologies are proposed to reduce the number of different combinations processed to get the decomposed library. Experimental results show a significant reduction in the number of combinations and decompositions needed for the library processing. To generate and verify the proposed flow and methodologies, a prototype for a placement-aware DP-ready cell-library is developed with an optimized number of cell views.
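    The decomposition problem described above is commonly modeled as 2-coloring of a conflict graph: features whose spacing is below the same-mask minimum are connected, and a clean decomposition exists exactly when that graph is bipartite. The sketch below is a generic illustration of this check, not the paper's placement-aware flow; the feature ids and conflict list are invented. It reports either a mask assignment or the edge at which an odd cycle, i.e., a native conflict, is detected.

      from collections import deque

      def decompose_double_patterning(features, conflicts):
          # Try to 2-color the conflict graph (one color per mask). `conflicts` lists
          # pairs whose spacing violates the minimum same-mask spacing. Returns
          # (mask_assignment, []) on success or (None, conflicting_edge) on failure.
          adj = {f: [] for f in features}
          for a, b in conflicts:
              adj[a].append(b)
              adj[b].append(a)
          mask = {}
          for start in features:
              if start in mask:
                  continue
              mask[start] = 0
              queue = deque([start])
              while queue:
                  u = queue.popleft()
                  for v in adj[u]:
                      if v not in mask:
                          mask[v] = 1 - mask[u]
                          queue.append(v)
                      elif mask[v] == mask[u]:
                          return None, (u, v)   # odd cycle: native conflict
          return mask, []

      # toy layout: a triangle of conflicts is the classic native conflict
      print(decompose_double_patterning(["A", "B", "C"], [("A", "B"), ("B", "C"), ("A", "C")]))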

  4. 50 CFR 260.103 - Operations and operating procedures shall be in accordance with an effective sanitation program.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... cause contamination of foods by oil, dust, paint, scale, fumes, grinding materials, decomposed food... partially processed food ingredients shall not be stacked in such manner as to permit contamination of the... PROCESSED FISHERY PRODUCTS, PROCESSED PRODUCTS THEREOF, AND CERTAIN OTHER PROCESSED FOOD PRODUCTS INSPECTION...

  5. Detecting Forest Disturbance Events from MODIS and Landsat Time Series for the Conterminous United States

    NASA Astrophysics Data System (ADS)

    Zhang, G.; Ganguly, S.; Saatchi, S. S.; Hagen, S. C.; Harris, N.; Yu, Y.; Nemani, R. R.

    2013-12-01

    Spatial and temporal patterns of forest disturbance and regrowth processes are key for understanding aboveground terrestrial vegetation biomass and carbon stocks at regional-to-continental scales. The NASA Carbon Monitoring System (CMS) program seeks key input datasets, especially information related to impacts due to natural/man-made disturbances in forested landscapes of the Conterminous U.S. (CONUS), that would reduce uncertainties in current carbon stock estimation and emission models. This study provides an end-to-end forest disturbance detection framework based on pixel time series analysis from MODIS (Moderate Resolution Imaging Spectroradiometer) and Landsat surface spectral reflectance data. We applied the BFAST (Breaks for Additive Seasonal and Trend) algorithm to the Normalized Difference Vegetation Index (NDVI) data for the time period from 2000 to 2011. A harmonic seasonal model was implemented in BFAST to decompose the time series into seasonal and interannual trend components in order to detect abrupt changes in the magnitude and direction of these components. To apply BFAST to the whole of CONUS, we built a parallel computing setup for processing massive time-series data using the high performance computing facility of the NASA Earth Exchange (NEX). In the implementation process, we extracted the dominant deforestation events from the magnitude of abrupt changes in both seasonal and interannual components, and estimated dates for the corresponding deforestation events. We estimated the recovery rate for deforested regions through regression models developed between NDVI values and time since disturbance for all pixels. A similar implementation of the BFAST algorithm was performed over selected Landsat scenes (all cloud-free Landsat data were used to generate NDVI from atmospherically corrected spectral reflectances) to demonstrate the spatial coherence in retrieval layers between MODIS and Landsat. In the future, the application of this largely parallel disturbance detection setup will facilitate large-scale processing and wall-to-wall mapping of forest disturbance and regrowth from Landsat data for the whole of CONUS. This exercise will aid in improving the present capabilities of the NASA CMS effort in reducing uncertainties in national-level estimates of biomass and carbon stocks.
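    As a much-simplified stand-in for the BFAST decomposition described above (not the actual BFAST implementation), the following Python sketch fits a harmonic seasonal model plus a linear trend to a synthetic NDVI series and uses a CUSUM of the residuals to locate a candidate abrupt change. The series, sampling interval, and break-picking rule are illustrative assumptions.

      import numpy as np

      def fit_harmonic_trend(t_years, ndvi, n_harmonics=2):
          # Least-squares fit of NDVI ~ intercept + linear trend + annual harmonics
          cols = [np.ones_like(t_years), t_years]
          for k in range(1, n_harmonics + 1):
              cols.append(np.sin(2 * np.pi * k * t_years))
              cols.append(np.cos(2 * np.pi * k * t_years))
          X = np.column_stack(cols)
          coef, *_ = np.linalg.lstsq(X, ndvi, rcond=None)
          return X @ coef

      # Synthetic 16-day NDVI record (years since 2000) with an abrupt drop after year 6.5
      t = np.arange(0.0, 11.0, 16.0 / 365.25)
      rng = np.random.default_rng(3)
      ndvi = 0.6 + 0.2 * np.sin(2 * np.pi * t) + 0.01 * rng.standard_normal(t.size)
      ndvi[t > 6.5] -= 0.3

      residuals = ndvi - fit_harmonic_trend(t, ndvi)
      cusum = np.cumsum(residuals - residuals.mean())
      break_time = t[np.argmax(np.abs(cusum))]   # candidate disturbance date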

  6. 3D multiphysics modeling of superconducting cavities with a massively parallel simulation suite

    DOE PAGES

    Kononenko, Oleksiy; Adolphsen, Chris; Li, Zenghai; ...

    2017-10-10

    Radiofrequency cavities based on superconducting technology are widely used in particle accelerators for various applications. The cavities usually have high quality factors and hence narrow bandwidths, so the field stability is sensitive to detuning from the Lorentz force and external loads, including vibrations and helium pressure variations. If not properly controlled, the detuning can result in a serious performance degradation of a superconducting accelerator, so an understanding of the underlying detuning mechanisms can be very helpful. Recent advances in the simulation suite ace3p have enabled realistic multiphysics characterization of such complex accelerator systems on supercomputers. In this paper, we present the new capabilities in ace3p for large-scale 3D multiphysics modeling of superconducting cavities, in particular, a parallel eigensolver for determining mechanical resonances, a parallel harmonic response solver to calculate the response of a cavity to external vibrations, and a numerical procedure to decompose mechanical loads, such as from the Lorentz force or piezoactuators, into the corresponding mechanical modes. These capabilities have been used to do an extensive rf-mechanical analysis of dressed TESLA-type superconducting cavities. Furthermore, the simulation results and their implications for the operational stability of the Linac Coherent Light Source-II are discussed.

  7. Efficient operator splitting algorithm for joint sparsity-regularized SPIRiT-based parallel MR imaging reconstruction.

    PubMed

    Duan, Jizhong; Liu, Yu; Jing, Peiguang

    2018-02-01

    Self-consistent parallel imaging (SPIRiT) is an auto-calibrating model for the reconstruction of parallel magnetic resonance imaging, which can be formulated as a regularized SPIRiT problem. The Projections Onto Convex Sets (POCS) method has been used to solve the regularized SPIRiT problem, but the quality of the reconstructed image still needs to be improved. Although methods such as NonLinear Conjugate Gradients (NLCG) can achieve higher spatial resolution, they demand complex computation and converge slowly. In this paper, we propose a new algorithm to solve the formulated Cartesian SPIRiT problem with the JTV and JL1 regularization terms. The proposed algorithm uses the operator splitting (OS) technique to decompose the problem into a gradient problem and a denoising problem with two regularization terms, which is solved by our proposed split Bregman based denoising algorithm, and adopts the Barzilai and Borwein method to update the step size. Simulation experiments on two in vivo data sets demonstrate that the proposed algorithm is 1.3 times faster than ADMM for datasets with 8 channels. In particular, it is 2 times faster than ADMM for the dataset with 32 channels. Copyright © 2017 Elsevier Inc. All rights reserved.
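    The operator-splitting pattern described above, a gradient step on the data-consistency term followed by a denoising (proximal) step with the step size chosen by the Barzilai-Borwein rule, can be illustrated on a much simpler problem. The Python sketch below uses plain soft-thresholding as a stand-in for the paper's split-Bregman JTV/JL1 denoiser and a generic least-squares data term rather than the SPIRiT operator; it shows only the splitting/BB pattern, not the actual reconstruction method.

      import numpy as np

      def soft_threshold(x, lam):
          # Simple L1 proximal operator, standing in for the joint-sparsity denoiser
          return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

      def forward_backward(A, b, lam, n_iter=100):
          # Minimize 0.5*||A x - b||^2 + lam*||x||_1 by operator splitting:
          # gradient step on the smooth term, denoising (proximal) step on the rest,
          # with the step size updated by the Barzilai-Borwein rule.
          x = np.zeros(A.shape[1])
          step = 1.0 / np.linalg.norm(A, 2) ** 2
          grad = A.T @ (A @ x - b)
          for _ in range(n_iter):
              x_new = soft_threshold(x - step * grad, step * lam)
              grad_new = A.T @ (A @ x_new - b)
              s, y = x_new - x, grad_new - grad
              if y @ y > 0:
                  step = abs(s @ y) / (y @ y)   # BB step size
              x, grad = x_new, grad_new
          return x

      # toy sparse-recovery example with invented data
      rng = np.random.default_rng(1)
      A = rng.standard_normal((60, 120))
      x_true = np.zeros(120); x_true[[3, 40, 77]] = [1.0, -2.0, 0.5]
      x_rec = forward_backward(A, A @ x_true, lam=0.05)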

  8. 3D multiphysics modeling of superconducting cavities with a massively parallel simulation suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kononenko, Oleksiy; Adolphsen, Chris; Li, Zenghai

    Radiofrequency cavities based on superconducting technology are widely used in particle accelerators for various applications. The cavities usually have high quality factors and hence narrow bandwidths, so the field stability is sensitive to detuning from the Lorentz force and external loads, including vibrations and helium pressure variations. If not properly controlled, the detuning can result in a serious performance degradation of a superconducting accelerator, so an understanding of the underlying detuning mechanisms can be very helpful. Recent advances in the simulation suite ace3p have enabled realistic multiphysics characterization of such complex accelerator systems on supercomputers. In this paper, we present the new capabilities in ace3p for large-scale 3D multiphysics modeling of superconducting cavities, in particular, a parallel eigensolver for determining mechanical resonances, a parallel harmonic response solver to calculate the response of a cavity to external vibrations, and a numerical procedure to decompose mechanical loads, such as from the Lorentz force or piezoactuators, into the corresponding mechanical modes. These capabilities have been used to do an extensive rf-mechanical analysis of dressed TESLA-type superconducting cavities. Furthermore, the simulation results and their implications for the operational stability of the Linac Coherent Light Source-II are discussed.

  9. An accurate, fast, and scalable solver for high-frequency wave propagation

    NASA Astrophysics Data System (ADS)

    Zepeda-Núñez, L.; Taus, M.; Hewett, R.; Demanet, L.

    2017-12-01

    In many science and engineering applications, solving time-harmonic high-frequency wave propagation problems quickly and accurately is of paramount importance. For example, in geophysics, particularly in oil exploration, such problems can be the forward problem in an iterative process for solving the inverse problem of subsurface inversion. It is important to solve these wave propagation problems accurately in order to efficiently obtain meaningful solutions of the inverse problems: low order forward modeling can hinder convergence. Additionally, due to the volume of data and the iterative nature of most optimization algorithms, the forward problem must be solved many times. Therefore, a fast solver is necessary to make solving the inverse problem feasible. For time-harmonic high-frequency wave propagation, obtaining both speed and accuracy is historically challenging. Recently, there have been many advances in the development of fast solvers for such problems, including methods which have linear complexity with respect to the number of degrees of freedom. While most methods scale optimally only in the context of low-order discretizations and smooth wave speed distributions, the method of polarized traces has been shown to retain optimal scaling for high-order discretizations, such as hybridizable discontinuous Galerkin methods and for highly heterogeneous (and even discontinuous) wave speeds. The resulting fast and accurate solver is consequently highly attractive for geophysical applications. To date, this method relies on a layered domain decomposition together with a preconditioner applied in a sweeping fashion, which has limited straight-forward parallelization. In this work, we introduce a new version of the method of polarized traces which reveals more parallel structure than previous versions while preserving all of its other advantages. We achieve this by further decomposing each layer and applying the preconditioner to these new components separately and in parallel. We demonstrate that this produces an even more effective and parallelizable preconditioner for a single right-hand side. As before, additional speed can be gained by pipelining several right-hand-sides.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niu, T; Dong, X; Petrongolo, M

    Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical value. Existing de-noising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. We propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. It includes the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Performance is evaluated using an evaluation phantom (Catphan 600) and an anthropomorphic head phantom. Results are compared to those generated using direct matrix inversion with no noise suppression, a de-noising method applied on the decomposed images, and an existing algorithm with similar formulation but with an edge-preserving regularization term. Results: On the Catphan phantom, our method retains the same spatial resolution as the CT images before decomposition while reducing the noise standard deviation of decomposed images by over 98%. The other methods either degrade spatial resolution or achieve less low-contrast detectability. Also, our method yields lower electron density measurement error than direct matrix inversion and reduces error variation by over 97%. On the head phantom, it reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusion: We propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. The proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability. This work is supported by a Varian MRA grant.

  11. Thermochemical process for recovering Cu from CuO or CuO2

    DOEpatents

    Richardson, deceased, Donald M.; Bamberger, Carlos E.

    1981-01-01

    A process for producing hydrogen comprises the step of reacting metallic Cu with Ba(OH)2 in the presence of steam to produce hydrogen and BaCu2O2. The BaCu2O2 is reacted with H2O to form Cu2O and a Ba(OH)2 product for recycle to the initial reaction step. Cu can be obtained from the Cu2O product by several methods. In one embodiment the Cu2O is reacted with HF solution to provide CuF2 and Cu. The CuF2 is reacted with H2O to provide CuO and HF. CuO is decomposed to Cu2O and O2. The HF, Cu and Cu2O are recycled. In another embodiment the Cu2O is reacted with aqueous H2SO4 solution to provide CuSO4 solution and Cu. The CuSO4 is decomposed to CuO and SO3. The CuO is decomposed to form Cu2O and O2. The SO3 is dissolved to form H2SO4. H2SO4, Cu and Cu2O are recycled. In another embodiment Cu2O is decomposed electrolytically to Cu and O2. In another aspect of the invention, Cu is recovered from CuO by the steps of decomposing CuO to Cu2O and O2, reacting the Cu2O with aqueous HF solution to produce Cu and CuF2, reacting the CuF2 with H2O to form CuO and HF, and recycling the CuO and HF to previous reaction steps.

  12. Are leaves that fall from imidacloprid-treated maple trees to control Asian longhorned beetles toxic to non-target decomposer organisms?

    PubMed

    Kreutzweiser, David P; Good, Kevin P; Chartrand, Derek T; Scarr, Taylor A; Thompson, Dean G

    2008-01-01

    The systemic insecticide imidacloprid may be applied to deciduous trees for control of the Asian longhorned beetle, an invasive wood-boring insect. Senescent leaves falling from systemically treated trees contain imidacloprid concentrations that could pose a risk to natural decomposer organisms. We examined the effects of foliar imidacloprid concentrations on decomposer organisms by adding leaves from imidacloprid-treated sugar maple trees to aquatic and terrestrial microcosms under controlled laboratory conditions. Imidacloprid in maple leaves at realistic field concentrations (3-11 mg kg⁻¹) did not affect survival of aquatic leaf-shredding insects or litter-dwelling earthworms. However, adverse sublethal effects at these concentrations were detected. Feeding rates by aquatic insects and earthworms were reduced, leaf decomposition (mass loss) was decreased, measurable weight losses occurred among earthworms, and aquatic and terrestrial microbial decomposition activity was significantly inhibited. Results of this study suggest that sugar maple trees systemically treated with imidacloprid to control Asian longhorned beetles may yield senescent leaves with residue levels sufficient to reduce natural decomposition processes in aquatic and terrestrial environments through adverse effects on non-target decomposer organisms.

  13. Convolution of large 3D images on GPU and its decomposition

    NASA Astrophysics Data System (ADS)

    Karas, Pavel; Svoboda, David

    2011-12-01

    In this article, we propose a method for computing convolution of large 3D images. The convolution is performed in a frequency domain using a convolution theorem. The algorithm is accelerated on a graphic card by means of the CUDA parallel computing model. Convolution is decomposed in a frequency domain using the decimation in frequency algorithm. We pay attention to keeping our approach efficient in terms of both time and memory consumption and also in terms of memory transfers between CPU and GPU which have a significant influence on overall computational time. We also study the implementation on multiple GPUs and compare the results between the multi-GPU and multi-CPU implementations.
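    For reference, the core frequency-domain step the article builds on, computing a linear 3D convolution through zero-padded FFTs (the convolution theorem), can be sketched in a few lines of Python. The decimation-in-frequency splitting and the GPU/CUDA specifics of the article are not reproduced here; the snippet only checks the basic transform-domain convolution against a direct reference.

      import numpy as np
      from scipy.signal import fftconvolve

      def fft_convolve3d(image, kernel):
          # Linear 3D convolution via the convolution theorem (zero-padded FFTs).
          # The paper additionally splits the transform itself (decimation in
          # frequency) so that very large volumes fit on a GPU; that part is omitted.
          out_shape = [i + k - 1 for i, k in zip(image.shape, kernel.shape)]
          F_img = np.fft.rfftn(image, out_shape)
          F_ker = np.fft.rfftn(kernel, out_shape)
          return np.fft.irfftn(F_img * F_ker, out_shape)

      # check against a direct reference on a tiny volume
      img = np.random.rand(16, 16, 16)
      ker = np.random.rand(5, 5, 5)
      assert np.allclose(fft_convolve3d(img, ker), fftconvolve(img, ker), atol=1e-10)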

  14. Iterative image-domain decomposition for dual-energy CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niu, Tianye; Dong, Xue; Petrongolo, Michael

    2014-04-15

    Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical values of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the proposed method but with an edge-preserving regularization term. Results: On the Catphan phantom, the method maintains the same spatial resolution on the decomposed images as that of the CT images before decomposition (8 pairs/cm) while significantly reducing their noise standard deviation. Compared to that obtained by the direct matrix inversion, the noise standard deviation in the images decomposed by the proposed algorithm is reduced by over 98%. Without considering the noise correlation properties in the formulation, the denoising scheme degrades the spatial resolution to 6 pairs/cm for the same level of noise suppression. Compared to the edge-preserving algorithm, the method achieves better low-contrast detectability. A quantitative study is performed on the contrast-rod slice of the Catphan phantom. The proposed method achieves lower electron density measurement error as compared to that by the direct matrix inversion, and significantly reduces the error variation by over 97%. On the head phantom, the method reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusions: The authors propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. By exploring the full variance-covariance properties of the decomposed images and utilizing the edge predetection, the proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability.
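    As a schematic of the formulation described above, a least-squares data term weighted by the inverse of the estimated variance-covariance matrix of the directly decomposed images plus a smoothness penalty, iterated to convergence, the toy one-dimensional Python sketch below decomposes a two-material phantom by plain gradient descent. The mixing matrix, noise level, and penalty weight are invented, and the edge-preserving down-weighting of the actual method is omitted.

      import numpy as np

      def decompose_dect(meas, A, sigma, beta=5.0, n_iter=500):
          # Toy 1-D sketch: data term weighted by the inverse variance-covariance of the
          # direct decomposition plus a quadratic smoothness penalty, gradient descent.
          x_direct = np.linalg.inv(A) @ meas          # direct (noisy) per-pixel decomposition
          W = np.linalg.inv(sigma)                    # inverse variance-covariance (2x2)
          step = 1.0 / (np.linalg.eigvalsh(W).max() + 4.0 * beta)
          x = x_direct.copy()
          for _ in range(n_iter):
              grad_data = W @ (x - x_direct)
              grad_smooth = -np.diff(x, n=2, axis=1, prepend=x[:, :1], append=x[:, -1:])
              x -= step * (grad_data + beta * grad_smooth)
          return x

      # hypothetical two-material example with correlated decomposition noise
      A = np.array([[0.8, 0.3], [0.4, 0.7]])          # assumed material-to-CT mixing matrix
      truth = np.stack([np.repeat([1.0, 0.2], 128), np.repeat([0.1, 0.9], 128)])
      meas = A @ truth + 0.05 * np.random.randn(2, 256)
      sigma = 0.05**2 * np.linalg.inv(A) @ np.linalg.inv(A).T
      x_hat = decompose_dect(meas, A, sigma)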

  15. A quantification method for heat-decomposable methylglyoxal oligomers and its application on 1,3,5-trimethylbenzene SOA

    NASA Astrophysics Data System (ADS)

    Rodigast, Maria; Mutzel, Anke; Herrmann, Hartmut

    2017-03-01

    Methylglyoxal forms oligomeric compounds in the atmospheric aqueous particle phase, which could establish a significant contribution to the formation of aqueous secondary organic aerosol (aqSOA). Thus far, no suitable method for the quantification of methylglyoxal oligomers is available despite the great effort spent for structure elucidation. In the present study a simplified method was developed to quantify heat-decomposable methylglyoxal oligomers as a sum parameter. The method is based on the thermal decomposition of oligomers into methylglyoxal monomers. Formed methylglyoxal monomers were detected using PFBHA (o-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine hydrochloride) derivatisation and gas chromatography-mass spectrometry (GC/MS) analysis. The method development was focused on the heating time (varied between 15 and 48 h), pH during the heating process (pH = 1-7), and heating temperature (50, 100 °C). The optimised values of these method parameters are presented. The developed method was applied to quantify heat-decomposable methylglyoxal oligomers formed during the OH-radical oxidation of 1,3,5-trimethylbenzene (TMB) in the Leipzig aerosol chamber (LEipziger AerosolKammer, LEAK). Oligomer formation was investigated as a function of seed particle acidity and relative humidity. A fraction of heat-decomposable methylglyoxal oligomers of up to 8 % in the produced organic particle mass was found, highlighting the importance of those oligomers formed solely by methylglyoxal for SOA formation. Overall, the present study provides a new and suitable method for quantification of heat-decomposable methylglyoxal oligomers in the aqueous particle phase.

  16. Cholesky-decomposed density MP2 with density fitting: Accurate MP2 and double-hybrid DFT energies for large systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maurer, Simon A.; Clin, Lucien; Ochsenfeld, Christian, E-mail: christian.ochsenfeld@uni-muenchen.de

    2014-06-14

    Our recently developed QQR-type integral screening is introduced in our Cholesky-decomposed pseudo-densities Møller-Plesset perturbation theory of second order (CDD-MP2) method. We use the resolution-of-the-identity (RI) approximation in combination with efficient integral transformations employing sparse matrix multiplications. The RI-CDD-MP2 method shows an asymptotic cubic scaling behavior with system size and a small prefactor that results in an early crossover to conventional methods for both small and large basis sets. We also explore the use of local fitting approximations which allow us to further reduce the scaling behavior for very large systems. The reliability of our method is demonstrated on test sets for interaction and reaction energies of medium sized systems and on a diverse selection from our own benchmark set for total energies of larger systems. Timings on DNA systems show that fast calculations for systems with more than 500 atoms are feasible using a single processor core. Parallelization extends the range of accessible system sizes on one computing node with multiple cores to more than 1000 atoms in a double-zeta basis and more than 500 atoms in a triple-zeta basis.

  17. On the Development of Arabic Three-Digit Number Processing in Primary School Children

    ERIC Educational Resources Information Center

    Mann, Anne; Moeller, Korbinian; Pixner, Silvia; Kaufmann, Liane; Nuerk, Hans-Christoph

    2012-01-01

    The development of two-digit number processing in children, and in particular the influence of place-value understanding, has recently received increasing research interest. However, place-value influences leading to decomposed processing have not yet been investigated for multi-digit numbers beyond the two-digit number range in children.…

  18. Wavelet-Based Processing for Fiber Optic Sensing Systems

    NASA Technical Reports Server (NTRS)

    Hamory, Philip J. (Inventor); Parker, Allen R., Jr. (Inventor)

    2016-01-01

    The present invention is an improved method of processing conglomerate data. The method employs a Triband Wavelet Transform that decomposes and decimates the conglomerate signal to obtain a final result. The invention may be employed to improve performance of Optical Frequency Domain Reflectometry systems.
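
    The record describes decomposing and decimating a conglomerate signal with a wavelet transform. As a rough, hedged illustration only (not the patented Triband Wavelet Transform), the sketch below performs a generic multilevel discrete wavelet decomposition, where each level halves the band and decimates by two; it assumes the third-party PyWavelets package and an invented test signal.

```python
# Generic wavelet decomposition-and-decimation illustration, NOT the patented
# Triband Wavelet Transform itself. Assumes the PyWavelets package (pywt).
import numpy as np
import pywt

fs = 10_000.0                              # assumed sample rate (Hz)
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(2 * np.pi * 1200 * t)

# each DWT level splits the band in two and decimates by 2
coeffs = pywt.wavedec(signal, wavelet="db4", level=3)
for i, c in enumerate(coeffs):
    label = "approx" if i == 0 else f"detail L{len(coeffs) - i}"
    print(f"{label:10s} length={len(c)}")
```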

  19. Tangent linear super-parameterization: attributable, decomposable moist processes for tropical variability studies

    NASA Astrophysics Data System (ADS)

    Mapes, B. E.; Kelly, P.; Song, S.; Hu, I. K.; Kuang, Z.

    2015-12-01

    An economical 10-layer global primitive equation solver is driven by time-independent forcing terms, derived from a training process, to produce a realistic eddying basic state with a tracer q trained to act like water vapor mixing ratio. Within this basic state, linearized anomaly moist physics in the column is applied in the form of a 20x20 matrix. The control matrix was derived from the results of Kuang (2010, 2012), who fitted a linear response function from a cloud resolving model in a state of deep convecting equilibrium. By editing this matrix in physical space and eigenspace, scaling and clipping its action, and optionally adding terms for processes that do not conserve moist static energy (radiation, surface fluxes), we can decompose and explain the model's diverse moist process coupled variability. Rectified effects of this variability on the general circulation and climate, even in strictly zero-mean centered anomaly physics cases, are also sometimes surprising.
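
    To make the "linearized anomaly moist physics as a 20x20 matrix" idea concrete, here is a minimal, hedged sketch of stepping a 20-component column anomaly state with a linear response operator, dx/dt = Mx. The matrix M below is a damped random placeholder, not the fitted Kuang (2010, 2012) matrix, and the time step and duration are assumptions.

```python
import numpy as np

n = 20                     # 10 levels x (T anomaly, q anomaly), as in the abstract
rng = np.random.default_rng(0)

# Placeholder linear-response matrix: a damped random operator standing in
# for the fitted Kuang (2010, 2012) matrix, which is not reproduced here.
M = -np.eye(n) / 86400.0 + 1e-6 * rng.standard_normal((n, n))

x = rng.standard_normal(n) * 0.1   # column anomaly state [T'; q'] (illustrative)
dt = 600.0                         # time step in seconds (assumed)

for step in range(144):            # one day of anomaly-physics tendencies
    x = x + dt * (M @ x)           # dx/dt = M x, forward Euler for brevity

print("anomaly norm after 1 day:", np.linalg.norm(x))
```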

  20. Slow Off-rates and Strong Product Binding Are Required for Processivity and Efficient Degradation of Recalcitrant Chitin by Family 18 Chitinases*

    PubMed Central

    Kurašin, Mihhail; Kuusk, Silja; Kuusk, Piret; Sørlie, Morten; Väljamäe, Priit

    2015-01-01

    Processive glycoside hydrolases are the key components of enzymatic machineries that decompose recalcitrant polysaccharides, such as chitin and cellulose. The intrinsic processivity (PIntr) of cellulases has been shown to be governed by the rate constant of dissociation from the polymer chain (koff). However, the reported koff values of cellulases are strongly dependent on the method used for their measurement. Here, we developed a new method for determining koff, based on measuring the exchange rate of the enzyme between a non-labeled and a 14C-labeled polymeric substrate. The method was applied to the study of the processive chitinase ChiA from Serratia marcescens. In parallel, ChiA variants with weaker binding of the N-acetylglucosamine unit either in substrate-binding site −3 (ChiA-W167A) or the product-binding site +1 (ChiA-W275A) were studied. Both ChiA variants showed increased off-rates and lower apparent processivity on α-chitin. The rate of the production of insoluble reducing groups on the reduced α-chitin was an order of magnitude higher than koff, suggesting that the enzyme can initiate several processive runs without leaving the substrate. On crystalline chitin, the general activity of the wild type enzyme was higher, and the difference increased with hydrolysis time. On amorphous chitin, the variants clearly outperformed the wild type. A model is proposed whereby strong interactions with the polymer in the substrate-binding sites (low off-rates) and strong binding of the product in the product-binding sites (high pushing potential) are required for the removal of obstacles, like disintegration of chitin microfibrils. PMID:26468285

  1. Analyzing the Multiscale Processes in Tropical Cyclone Genesis Associated with African Easterly Waves using the PEEMD. Part I: Downscaling Processes

    NASA Astrophysics Data System (ADS)

    Wu, Y.; Shen, B. W.; Cheung, S.

    2016-12-01

    Recent advances in high-resolution global hurricane simulations and visualizations have collectively suggested the importance of both downscaling and upscaling processes in the formation and intensification of TCs. To reveal multiscale processes from a massive volume of global data for multiple years, a scalable Parallel Ensemble Empirical Mode Decomposition (PEEMD) method has been developed for the analysis. In this study, the PEEMD is applied to analyzing 10-year (2004-2013) ERA-Interim global 0.75° resolution reanalysis data to explore the role of the downscaling processes in tropical cyclogenesis associated with African Easterly Waves (AEWs). Using the PEEMD, raw data are decomposed into oscillatory Intrinsic Mode Functions (IMFs) that represent atmospheric systems of the various length scales and the trend mode that represents a non-oscillatory large-scale environmental flow. Among oscillatory modes, results suggest that the third oscillatory mode (IMF3) is statistically correlated with the TC/AEW scale systems. Therefore, IMF3 and the trend mode are analyzed in detail. Our 10-year analysis shows that more than 50% of the AEW-associated hurricanes reveal the association of storm formation with significant downscaling shear transfer from the larger-scale trend mode to the smaller-scale IMF3. Future work will apply the PEEMD to the analysis of higher-resolution datasets to explore the role of the upscaling processes provided by the convection (or TC) in the development of the TC (or AEW). Figure caption: The tendency for horizontal wind shear for the total winds (black line), IMF3 (blue line), and trend mode (red line) and SLP (black dotted line) along the storm track of Helene (2006).
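
    The scalable PEEMD code itself is not reproduced in the record; as a conceptual, hedged sketch of ensemble empirical mode decomposition, the snippet below decomposes noise-perturbed copies of a signal in parallel worker processes and averages the resulting IMFs. It assumes the third-party PyEMD package (EMD().emd) and a synthetic test signal; the actual analysis operates on gridded reanalysis fields rather than a single time series.

```python
# Conceptual ensemble-EMD sketch (not the authors' scalable PEEMD code).
# Assumes the third-party PyEMD package provides EMD().emd(signal).
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from PyEMD import EMD

def one_realization(args):
    signal, noise_amp, seed = args
    rng = np.random.default_rng(seed)
    noisy = signal + noise_amp * rng.standard_normal(signal.size)
    return EMD().emd(noisy)                 # IMFs for this ensemble member

if __name__ == "__main__":
    t = np.linspace(0, 1, 2000)
    signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
    members = [(signal, 0.2, s) for s in range(16)]

    with ProcessPoolExecutor() as pool:     # one ensemble member per worker
        results = list(pool.map(one_realization, members))

    n_imfs = min(len(r) for r in results)   # IMF counts can differ per member
    imfs = np.mean([r[:n_imfs] for r in results], axis=0)
    print("ensemble-mean IMFs:", imfs.shape)
```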

  2. Slow Off-rates and Strong Product Binding Are Required for Processivity and Efficient Degradation of Recalcitrant Chitin by Family 18 Chitinases.

    PubMed

    Kurašin, Mihhail; Kuusk, Silja; Kuusk, Piret; Sørlie, Morten; Väljamäe, Priit

    2015-11-27

    Processive glycoside hydrolases are the key components of enzymatic machineries that decompose recalcitrant polysaccharides, such as chitin and cellulose. The intrinsic processivity (P(Intr)) of cellulases has been shown to be governed by the rate constant of dissociation from the polymer chain (koff). However, the reported koff values of cellulases are strongly dependent on the method used for their measurement. Here, we developed a new method for determining koff, based on measuring the exchange rate of the enzyme between a non-labeled and a (14)C-labeled polymeric substrate. The method was applied to the study of the processive chitinase ChiA from Serratia marcescens. In parallel, ChiA variants with weaker binding of the N-acetylglucosamine unit either in substrate-binding site -3 (ChiA-W167A) or the product-binding site +1 (ChiA-W275A) were studied. Both ChiA variants showed increased off-rates and lower apparent processivity on α-chitin. The rate of the production of insoluble reducing groups on the reduced α-chitin was an order of magnitude higher than koff, suggesting that the enzyme can initiate several processive runs without leaving the substrate. On crystalline chitin, the general activity of the wild type enzyme was higher, and the difference increased with hydrolysis time. On amorphous chitin, the variants clearly outperformed the wild type. A model is proposed whereby strong interactions with the polymer in the substrate-binding sites (low off-rates) and strong binding of the product in the product-binding sites (high pushing potential) are required for the removal of obstacles, like disintegration of chitin microfibrils. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.

  3. From master slave interferometry to complex master slave interferometry: theoretical work

    NASA Astrophysics Data System (ADS)

    Rivet, Sylvain; Bradu, Adrian; Maria, Michael; Feuchter, Thomas; Leick, Lasse; Podoleanu, Adrian

    2018-03-01

    A general theoretical framework is described to obtain the advantages and the drawbacks of two novel Fourier Domain Optical Coherence Tomography (OCT) methods denoted as Master/Slave Interferometry (MSI) and its extension denoted as Complex Master/Slave Interferometry (CMSI). Instead of linearizing the digital data representing the channeled spectrum before a Fourier transform can be applied to it (as in standard OCT methods), the channeled spectrum is decomposed over a basis of local oscillations. This removes the need for linearization, which is generally time consuming, before any calculation of the depth profile in the range of interest. In this model two functions, g and h, are introduced. The function g describes the modulation chirp of the channeled spectrum signal due to nonlinearities in the decoding process from wavenumber to time. The function h describes the dispersion in the interferometer. The utilization of these two functions brings two major improvements to previous implementations of the MSI method. The paper details the steps to obtain the functions g and h, and represents the CMSI in a matrix formulation that enables the method to be implemented easily in LabVIEW using parallel programming on multiple cores.

  4. Measuring and modeling C flux rates through the central metabolic pathways in microbial communities using position-specific 13C-labeled tracers

    NASA Astrophysics Data System (ADS)

    Dijkstra, P.; van Groenigen, K.; Hagerty, S.; Salpas, E.; Fairbanks, D. E.; Hungate, B. A.; KOCH, G. W.; Schwartz, E.

    2012-12-01

    The production of energy and metabolic precursors occurs in well-known processes such as glycolysis and the Krebs cycle. We use position-specific 13C-labeled metabolic tracers, combined with models of microbial metabolic organization, to analyze the response of microbial community energy production, biosynthesis, and C use efficiency (CUE) in soils, decomposing litter, and aquatic communities. The method consists of adding position-specific 13C-labeled metabolic tracers to parallel soil incubations, in this case 1-13C and 2,3-13C pyruvate and 1-13C and U-13C glucose. The measurement of CO2 released from the labeled tracers is used to calculate the C flux rates through the various metabolic pathways. A simplified metabolic model consisting of 23 reactions is solved using results of the metabolic tracer experiments and assumptions of microbial precursor demand. This new method enables direct estimation of fundamental aspects of microbial energy production, CUE, and soil organic matter formation in relatively undisturbed microbial communities. We will present results showing the range of metabolic patterns observed in these communities and discuss results from testing metabolic models.

  5. Forest floor community metatranscriptomes identify fungal and bacterial responses to N deposition in two maple forests

    DOE PAGES

    Hesse, Cedar N.; Mueller, Rebecca C.; Vuyisich, Momchilo; ...

    2015-04-23

    Anthropogenic N deposition alters patterns of C and N cycling in temperate forests, where forest floor litter decomposition is a key process mediated by a diverse community of bacteria and fungi. To track forest floor decomposer activity we generated metatranscriptomes that simultaneously surveyed the actively expressed bacterial and eukaryote genes in the forest floor, to compare the impact of N deposition on the decomposers in two natural maple forests in Michigan, USA, where replicate field plots had been amended with N for 16 years. Site and N amendment responses were compared using about 74,000 carbohydrate active enzyme transcript sequences (CAZymes) in each metatranscriptome. Parallel ribosomal RNA (rRNA) surveys of bacterial and fungal biomass and taxonomic composition showed no significant differences in either biomass or OTU richness between the two sites or in response to N. Site and N amendment were not significant variables defining bacterial taxonomic composition, but they were significant for fungal community composition, explaining 17 and 14% of the variability, respectively. The relative abundance of expressed bacterial and fungal CAZymes changed significantly with N amendment in one of the forests, and N-response trends were also identified in the second forest. Although the two ambient forests were similar in community biomass, taxonomic structure and active CAZyme profile, the shifts in active CAZyme profiles in response to N-amendment differed between the sites. One site responded with an over-expression of bacterial CAZymes, and the other site responded with an over-expression of both fungal and different bacterial CAZymes. Both sites showed reduced representation of fungal lignocellulose degrading enzymes in N-amendment plots. The metatranscriptome approach provided a holistic assessment of eukaryote and bacterial gene expression and is applicable to other systems where eukaryotes and bacteria interact.

  6. Extraction of drainage networks from large terrain datasets using high throughput computing

    NASA Astrophysics Data System (ADS)

    Gong, Jianya; Xie, Jibo

    2009-02-01

    Advanced digital photogrammetry and remote sensing technology produces large terrain datasets (LTD). How to process and use these LTD has become a big challenge for GIS users. Extracting drainage networks, which are basic for hydrological applications, from LTD is one of the typical applications of digital terrain analysis (DTA) in geographical information applications. Existing serial drainage algorithms cannot deal with large data volumes in a timely fashion, and few GIS platforms can process LTD beyond the GB size. High throughput computing (HTC), a distributed parallel computing mode, is proposed to improve the efficiency of drainage networks extraction from LTD. Drainage network extraction using HTC involves two key issues: (1) how to decompose the large DEM datasets into independent computing units and (2) how to merge the separate outputs into a final result. A new decomposition method is presented in which the large datasets are partitioned into independent computing units using natural watershed boundaries instead of using regular 1-dimensional (strip-wise) and 2-dimensional (block-wise) decomposition. Because the distribution of drainage networks is strongly related to watershed boundaries, the new decomposition method is more effective and natural. The method to extract natural watershed boundaries was improved by using multi-scale DEMs instead of single-scale DEMs. A HTC environment is employed to test the proposed methods with real datasets.

  7. Partitioning in parallel processing of production systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oflazer, K.

    1987-01-01

    This thesis presents research on certain issues related to parallel processing of production systems. It first presents a parallel production system interpreter that has been implemented on a four-processor multiprocessor. This parallel interpreter is based on Forgy's OPS5 interpreter and exploits production-level parallelism in production systems. Runs on the multiprocessor system indicate that it is possible to obtain speed-up of around 1.7 in the match computation for certain production systems when productions are split into three sets that are processed in parallel. The next issue addressed is that of partitioning a set of rules to processors in a parallel interpreter with production-level parallelism, and the extent of additional improvement in performance. The partitioning problem is formulated and an algorithm for approximate solutions is presented. The thesis next presents a parallel processing scheme for OPS5 production systems that allows some redundancy in the match computation. This redundancy enables the processing of a production to be divided into units of medium granularity each of which can be processed in parallel. Subsequently, a parallel processor architecture for implementing the parallel processing algorithm is presented.
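
    The thesis formulates rule-to-processor partitioning formally; as a loose, hedged stand-in (not the approximation algorithm from the thesis), the sketch below assigns production rules with estimated match costs to processors using a greedy least-loaded heuristic.

```python
import heapq

def partition_rules(rule_costs, n_procs):
    """Greedy LPT partitioning: assign each rule (by estimated match cost)
    to the currently least-loaded processor. A heuristic stand-in only,
    not the approximation algorithm presented in the thesis."""
    heap = [(0.0, p, []) for p in range(n_procs)]
    heapq.heapify(heap)
    for rule, cost in sorted(rule_costs.items(), key=lambda kv: -kv[1]):
        load, p, rules = heapq.heappop(heap)   # least-loaded processor
        rules.append(rule)
        heapq.heappush(heap, (load + cost, p, rules))
    return sorted(heap, key=lambda x: x[1])

# hypothetical per-rule match-cost estimates
costs = {"r1": 5.0, "r2": 3.0, "r3": 3.0, "r4": 2.0, "r5": 1.0, "r6": 1.0}
for load, proc, rules in partition_rules(costs, 3):
    print(f"proc {proc}: load={load:.1f} rules={rules}")
```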

  8. Parallel processing considerations for image recognition tasks

    NASA Astrophysics Data System (ADS)

    Simske, Steven J.

    2011-01-01

    Many image recognition tasks are well-suited to parallel processing. The most obvious example is that many imaging tasks require the analysis of multiple images. From this standpoint, then, parallel processing need be no more complicated than assigning individual images to individual processors. However, there are three less trivial categories of parallel processing that will be considered in this paper: parallel processing (1) by task; (2) by image region; and (3) by meta-algorithm. Parallel processing by task allows the assignment of multiple workflows, as diverse as optical character recognition (OCR), document classification and barcode reading, to parallel pipelines. This can substantially decrease time to completion for the document tasks. For this approach, each parallel pipeline is generally performing a different task. Parallel processing by image region allows a larger imaging task to be sub-divided into a set of parallel pipelines, each performing the same task but on a different data set. This type of image analysis is readily addressed by a map-reduce approach. Examples include document skew detection and multiple face detection and tracking. Finally, parallel processing by meta-algorithm allows different algorithms to be deployed on the same image simultaneously. This approach may result in improved accuracy.
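
    As a small, hedged illustration of "parallel processing by image region", the sketch below tiles an image and maps a per-tile task onto a pool of worker processes; the gradient-counting task and tile counts are placeholders, not anything specified in the paper.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def process_tile(tile):
    """Placeholder per-region task: count strong horizontal gradients."""
    grad = np.abs(np.diff(tile.astype(float), axis=1))
    return int((grad > 30).sum())

def split_tiles(img, rows, cols):
    """Yield rows x cols rectangular regions of the image."""
    for band in np.array_split(img, rows, axis=0):
        yield from np.array_split(band, cols, axis=1)

if __name__ == "__main__":
    img = (np.random.rand(1024, 1024) * 255).astype(np.uint8)
    tiles = list(split_tiles(img, 4, 4))        # 16 regions -> 16 parallel tasks
    with ProcessPoolExecutor() as pool:
        counts = list(pool.map(process_tile, tiles))
    print("edge pixels per region:", counts)
```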

  9. System and process for production of magnesium metal and magnesium hydride from magnesium-containing salts and brines

    DOEpatents

    McGrail, Peter B.; Nune, Satish K.; Motkuri, Radha K.; Glezakou, Vassiliki-Alexandra; Koech, Phillip K.; Adint, Tyler T.; Fifield, Leonard S.; Fernandez, Carlos A.; Liu, Jian

    2016-11-22

    A system and process are disclosed for production of consolidated magnesium metal products and alloys with selected densities from magnesium-containing salts and feedstocks. The system and process employ a dialkyl magnesium compound that decomposes to produce the Mg metal product. Energy requirements and production costs are lower than for conventional processing.

  10. Fungi: Strongmen of the Underground.

    ERIC Educational Resources Information Center

    Morrell, Patricia D.; Morrell, Jeffrey J.

    1999-01-01

    Presents an activity that stresses the role of fungi and decomposers, highlights the rapidity by which they complete this process, and allows students to experiment with ways to control the rate of decomposition. (CCM)

  11. Synthesis, Characterization, and Processing of Copper, Indium, and Gallium Dithiocarbamates for Energy Conversion Applications

    NASA Technical Reports Server (NTRS)

    Duraj, S. A.; Duffy, N. V.; Hepp, A. F.; Cowen, J. E.; Hoops, M. D.; Brothrs, S. M.; Baird, M. J.; Fanwick, P. E.; Harris, J. D.; Jin, M. H.-C.

    2009-01-01

    Ten dithiocarbamate complexes of indium(III) and gallium(III) have been prepared and characterized by elemental analysis, infrared spectra and melting point. Each complex was decomposed thermally and its decomposition products separated and identified by combined gas chromatography/mass spectrometry. Their potential utility as photovoltaic materials precursors was assessed. Bis(dibenzyldithiocarbamato)- and bis(diethyldithiocarbamato)copper(II), Cu(S2CN(CH2C6H5)2)2 and Cu(S2CN(C2H5)2)2, respectively, have also been examined for their suitability as precursors for copper sulfides for the fabrication of photovoltaic materials. Each complex was decomposed thermally and the products analyzed by GC/MS, TGA and FTIR. The dibenzyl derivative complex decomposed at a lower temperature (225-320 C) to yield CuS as the product. The diethyl derivative complex decomposed at a higher temperature (260-325 C) to yield Cu2S. No Cu containing fragments were noted in the mass spectra. Unusual recombination fragments were observed in the mass spectra of the diethyl derivative. Tris(bis(phenylmethyl)carbamodithioato-S,S'), commonly referred to as tris(N,N-dibenzyldithiocarbamato)indium(III), In(S2CNBz2)3, was synthesized and characterized by single crystal X-ray crystallography. The compound crystallizes in the triclinic space group P1(bar) with two molecules per unit cell. The material was further characterized using a novel analytical system employing the combined powers of thermogravimetric analysis, gas chromatography/mass spectrometry, and Fourier transform infrared (FT-IR) spectroscopy to investigate its potential use as a precursor for the chemical vapor deposition (CVD) of thin film materials for photovoltaic applications. Upon heating, the material thermally decomposes to release CS2 and benzyl moieties into the gas phase, resulting in bulk In2S3. Preliminary spray CVD experiments indicate that In(S2CNBz2)3 decomposed on a Cu substrate reacts to produce stoichiometric CuInS2 films.

  12. Determination of polycyclic aromatic hydrocarbons by four-way parallel factor analysis in presence of humic acid.

    PubMed

    Yang, Ruifang; Zhao, Nanjing; Xiao, Xue; Yu, Shaohui; Liu, Jianguo; Liu, Wenqing

    2016-01-05

    There is no effective method to handle the quenching effect of quenchers in the fluorescence spectral measurement and recognition of polycyclic aromatic hydrocarbons in aquatic environments. In this work, a four-way dataset combined with four-way parallel factor analysis is used to identify and quantify polycyclic aromatic hydrocarbons in the presence of humic acid, a fluorescent quencher and a ubiquitous substance in aquatic systems, through modeling the quenching effect of humic acid by decomposing the four-way dataset into four loading matrices corresponding to relative concentration, excitation spectra, emission spectra and fluorescence quantum yield, respectively. It is found that phenanthrene, pyrene, anthracene and fluorene can be recognized simultaneously, with the similarities between resolved spectra and reference spectra all above 0.980. Moreover, the concentrations of these compounds, ranging from 0 to 8 μg L(-1) in the test samples prepared with river water, could also be predicted successfully, with a recovery rate for each polycyclic aromatic hydrocarbon between 100% and 120%, higher than those of three-way PARAFAC. These results demonstrate that the combination of a four-way dataset with four-way parallel factor analysis could be a promising method to recognize the fluorescence spectra of polycyclic aromatic hydrocarbons in the presence of a fluorescent quencher from both qualitative and quantitative perspectives. Copyright © 2015 Elsevier B.V. All rights reserved.
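
    A hedged sketch of the four-way decomposition idea is given below: a synthetic four-way tensor (samples x excitation x emission x quencher level) is factored with PARAFAC into four loading matrices. It assumes the third-party TensorLy package and invented dimensions; it is not the software or the data used in the study.

```python
# Four-way PARAFAC on a synthetic EEM-style tensor. Assumes the TensorLy
# package; dimensions, rank and data are illustrative, not from the paper.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(1)
I, J, K, L, R = 12, 30, 40, 5, 4          # 4 components ~ 4 PAHs (illustrative)

# build a random rank-R four-way tensor plus a little noise
factors_true = [rng.random((dim, R)) for dim in (I, J, K, L)]
X = np.einsum('ir,jr,kr,lr->ijkl', *factors_true)
X += 0.01 * rng.standard_normal(X.shape)

weights, factors = parafac(tl.tensor(X), rank=R, n_iter_max=500, tol=1e-8)
conc, excitation, emission, quantum_yield = factors
print("relative-concentration loadings shape:", conc.shape)   # (12, 4)
```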

  13. DOMAIN DECOMPOSITION METHOD APPLIED TO A FLOW PROBLEM Norberto C. Vera Guzmán Institute of Geophysics, UNAM

    NASA Astrophysics Data System (ADS)

    Vera, N. C.; GMMC

    2013-05-01

    In this paper we present results for macrohybrid mixed Darcian flow in porous media in a general three-dimensional domain. The global problem is solved as a set of local subproblems posed via a domain decomposition method. The unknown fields of the local problems, velocity and pressure, are approximated using mixed finite elements. For this application, a general three-dimensional domain is considered and discretized using tetrahedra. The discrete domain is decomposed into subdomains, and the original problem is reformulated as a set of subproblems that communicate through their interfaces. To solve this set of subproblems, we use mixed finite elements and parallel computing. Parallelizing a problem with this methodology can, in principle, fully exploit the available computing hardware and deliver results in less time, two very important elements in modeling. References: G. Alduncin and N. Vera-Guzmán, Parallel proximal-point algorithms for mixed finite element models of flow in the subsurface, Commun. Numer. Meth. Engng 2004; 20:83-104 (DOI: 10.1002/cnm.647). Z. Chen, G. Huan and Y. Ma, Computational Methods for Multiphase Flows in Porous Media, SIAM, Society for Industrial and Applied Mathematics, Philadelphia, 2006. A. Quarteroni and A. Valli, Numerical Approximation of Partial Differential Equations, Springer-Verlag, Berlin, 1994. F. Brezzi and M. Fortin, Mixed and Hybrid Finite Element Methods, Springer, New York, 1991.
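
    The record itself uses macrohybrid mixed finite elements in three dimensions; as a much simpler, hedged illustration of solving a global problem through communicating local subproblems, the sketch below runs an alternating overlapping Schwarz iteration on a one-dimensional Poisson problem.

```python
# Minimal overlapping (alternating) Schwarz sketch on -u'' = f, u(0)=u(1)=0.
# Illustrates "local subproblems communicating through interfaces" only; the
# paper itself uses macrohybrid mixed finite elements in 3-D, not this scheme.
import numpy as np

n = 101                                   # grid points including boundaries
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.ones(n)                            # simple right-hand side
u = np.zeros(n)

# two overlapping subdomains (index ranges share an overlap region)
sub1 = np.arange(0, 60)
sub2 = np.arange(45, n)

def solve_subdomain(u, idx):
    """Direct solve of the tridiagonal Dirichlet problem on one subdomain,
    using the current values of u at the subdomain ends as boundary data."""
    interior = idx[1:-1]
    m = interior.size
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    b = f[interior].copy()
    b[0] += u[idx[0]] / h**2              # interface value from the neighbour
    b[-1] += u[idx[-1]] / h**2
    u[interior] = np.linalg.solve(A, b)

for sweep in range(30):                   # alternate until interfaces agree
    solve_subdomain(u, sub1)
    solve_subdomain(u, sub2)

exact = 0.5 * x * (1.0 - x)               # analytic solution for f = 1
print("max error:", np.abs(u - exact).max())
```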

  14. Segmental Refinement: A Multigrid Technique for Data Locality

    DOE PAGES

    Adams, Mark F.; Brown, Jed; Knepley, Matt; ...

    2016-08-04

    In this paper, we investigate a domain decomposed multigrid technique, termed segmental refinement, for solving general nonlinear elliptic boundary value problems. We extend the method first proposed in 1994 by analytically and experimentally investigating its complexity. We confirm that communication of traditional parallel multigrid is eliminated on fine grids, with modest amounts of extra work and storage, while maintaining the asymptotic exactness of full multigrid. We observe an accuracy dependence on the segmental refinement subdomain size, which was not considered in the original analysis. Finally, we present a communication complexity analysis that quantifies the communication costs ameliorated by segmental refinement and report performance results with up to 64K cores on a Cray XC30.

  15. Chemical oxidation for mitigation of UV-quenching substances (UVQS) from municipal landfill leachate: Fenton process versus ozonation.

    PubMed

    Jung, Chanil; Deng, Yang; Zhao, Renzun; Torrens, Kevin

    2017-01-01

    UV-quenching substance (UVQS), as an emerging municipal solid waste (MSW)-derived leachate contaminant, has the potential to interfere with UV disinfection when leachate is disposed of at publicly owned treatment works (POTWs). The objective of this study was to evaluate and compare two chemical oxidation processes under different operational conditions, i.e. the Fenton process and ozonation, for alleviation of the UV254 absorbance of a biologically pre-treated landfill leachate. Results showed that leachate UV254 absorbance was reduced due to UVQS decomposition by hydroxyl radicals (·OH) during Fenton treatment, or by ozone (O3) and ·OH during ozonation. The Fenton process exhibited better treatment performance than ozonation under their respective optimal conditions, because ·OH could effectively decompose both hydrophobic and hydrophilic dissolved organic matter (DOM), whereas O3 tended to selectively oxidize hydrophobic compounds alone. Different analytical techniques, including molecular weight (MW) fractionation, hydrophobic/hydrophilic isolation, UV spectra scanning, parallel factor (PARAFAC) analysis, and fluorescence excitation-emission matrix spectrophotometry, were used to characterize UVQS. After either oxidation treatment, residual UVQS was more hydrophilic with a higher fraction of low MW molecules. It should be noted that the removed UV254 absorbance (ΔUV254) was directly proportional to the removed COD (ΔCOD) for both treatments (Fenton process: ΔUV254 = 0.011ΔCOD; ozonation: ΔUV254 = 0.016ΔCOD). A greater ΔUV254/ΔCOD was observed for ozonation, suggesting that oxidant was more efficiently utilized during ozonation than in Fenton treatment for mitigation of the UV absorbance. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Thermochemical hydrogen production based on magnetic fusion

    NASA Astrophysics Data System (ADS)

    Krikorian, O. H.; Brown, L. C.

    Preliminary results of a DOE study to define the configuration and production costs for a Tandem Mirror Reactor (TMR) heat source H2 fuel production plant are presented. The TMR uses the D-T reaction to produce thermal energy and dc electrical current, with an Li blanket employed to breed more H-3 for fuel. Various blanket designs are being considered, and the coupling of two of them, a heat pipe blanket to a Joule-boosted decomposer, and a two-temperature zone blanket to a fluidized bed decomposer, is discussed. The thermal energy would be used in an H2SO4 thermochemical cycle to produce the H2. The Joule-boosted decomposer, involving the use of electrically heated commercial SiC furnace elements to transfer process heat to the thermochemical H2 cycle, is found to yield H2 fuel at a cost of $12-14/GJ, which is the projected cost of fossil fuels in 30-40 yr, when the TMR H2 production facility would be operable.

  17. Algebraic multigrid domain and range decomposition (AMG-DD / AMG-RD)*

    DOE PAGES

    Bank, R.; Falgout, R. D.; Jones, T.; ...

    2015-10-29

    In modern large-scale supercomputing applications, algebraic multigrid (AMG) is a leading choice for solving matrix equations. However, the high cost of communication relative to that of computation is a concern for the scalability of traditional implementations of AMG on emerging architectures. This paper introduces two new algebraic multilevel algorithms, algebraic multigrid domain decomposition (AMG-DD) and algebraic multigrid range decomposition (AMG-RD), that replace traditional AMG V-cycles with a fully overlapping domain decomposition approach. While the methods introduced here are similar in spirit to the geometric methods developed by Brandt and Diskin [Multigrid solvers on decomposed domains, in Domain Decomposition Methods in Science and Engineering, Contemp. Math. 157, AMS, Providence, RI, 1994, pp. 135--155], Mitchell [Electron. Trans. Numer. Anal., 6 (1997), pp. 224--233], and Bank and Holst [SIAM J. Sci. Comput., 22 (2000), pp. 1411--1443], they differ primarily in that they are purely algebraic: AMG-RD and AMG-DD trade communication for computation by forming global composite “grids” based only on the matrix, not the geometry. (As is the usual AMG convention, “grids” here should be taken only in the algebraic sense, regardless of whether or not it corresponds to any geometry.) Another important distinguishing feature of AMG-RD and AMG-DD is their novel residual communication process that enables effective parallel computation on composite grids, avoiding the all-to-all communication costs of the geometric methods. The main purpose of this paper is to study the potential of these two algebraic methods as possible alternatives to existing AMG approaches for future parallel machines. As a result, this paper develops some theoretical properties of these methods and reports on serial numerical tests of their convergence properties over a spectrum of problem parameters.

  18. Some thoughts about parallel process and psychotherapy supervision: when is a parallel just a parallel?

    PubMed

    Watkins, C Edward

    2012-09-01

    In a way not done before, Tracey, Bludworth, and Glidden-Tracey ("Are there parallel processes in psychotherapy supervision: An empirical examination," Psychotherapy, 2011, advance online publication, doi.10.1037/a0026246) have shown us that parallel process in psychotherapy supervision can indeed be rigorously and meaningfully researched, and their groundbreaking investigation provides a nice prototype for future supervision studies to emulate. In what follows, I offer a brief complementary comment to Tracey et al., addressing one matter that seems to be a potentially important conceptual and empirical parallel process consideration: When is a parallel just a parallel? PsycINFO Database Record (c) 2012 APA, all rights reserved.

  19. Seeing the forest for the trees: Networked workstations as a parallel processing computer

    NASA Technical Reports Server (NTRS)

    Breen, J. O.; Meleedy, D. M.

    1992-01-01

    Unlike traditional 'serial' processing computers in which one central processing unit performs one instruction at a time, parallel processing computers contain several processing units, thereby performing several instructions at once. Many of today's fastest supercomputers achieve their speed by employing thousands of processing elements working in parallel. Few institutions can afford these state-of-the-art parallel processors, but many already have the makings of a modest parallel processing system. Workstations on existing high-speed networks can be harnessed as nodes in a parallel processing environment, bringing the benefits of parallel processing to many. While such a system cannot rival the industry's latest machines, many common tasks can be accelerated greatly by spreading the processing burden and exploiting idle network resources. We study several aspects of this approach, from algorithms to select nodes to speed gains in specific tasks. With ever-increasing volumes of astronomical data, it becomes all the more necessary to utilize our computing resources fully.

  20. Nonplanar on-shell diagrams and leading singularities of scattering amplitudes

    NASA Astrophysics Data System (ADS)

    Chen, Baoyi; Chen, Gang; Cheung, Yeuk-Kwan E.; Li, Yunxuan; Xie, Ruofei; Xin, Yuan

    2017-02-01

    Bipartite on-shell diagrams are the latest tool in constructing scattering amplitudes. In this paper we prove that a Britto-Cachazo-Feng-Witten (BCFW) decomposable on-shell diagram possesses a rational top form if and only if the algebraic ideal comprising the geometrical constraints is shifted linearly during successive BCFW integrations. With a proper geometric interpretation of the constraints in the Grassmannian manifold, the rational top form integration contours can thus be obtained, and understood, in a straightforward way. All rational top-form integrands of arbitrary higher-loop leading singularities can therefore be derived recursively, as long as the corresponding on-shell diagram is BCFW decomposable.

  1. Effect of hydrogen radical on decomposition of chlorosilane source gases

    NASA Astrophysics Data System (ADS)

    Sumiya, Masatomo; Akizuki, Tomohiro; Itaka, Kenji; Kubota, Makoto; Tsubouchi, Kenta; Ishigaki, Takamasa; Koinuma, Hideomi

    2013-06-01

    The effect of hydrogen radicals on the production of Si from chlorosilane sources has been studied. We used hydrogen radicals generated from a pulsed thermal plasma to decompose SiHCl3 and SiCl4. Hydrogen radicals were effective for lowering the temperature needed to produce Si from SiHCl3. The SiCl4 source, which is chemically stable and a by-product of the Siemens process, was decomposed effectively by hydrogen radicals. The decomposition of SiCl4 was consistent with the thermodynamic calculation predicting that the use of hydrogen radicals could drastically enhance the yield of Si production compared with the case of H2 gas.

  2. Parallel Processing at the High School Level.

    ERIC Educational Resources Information Center

    Sheary, Kathryn Anne

    This study investigated the ability of high school students to cognitively understand and implement parallel processing. Data indicates that most parallel processing is being taught at the university level. Instructional modules on C, Linux, and the parallel processing language, P4, were designed to show that high school students are highly…

  3. An integrated condition-monitoring method for a milling process using reduced decomposition features

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Wu, Bo; Wang, Yan; Hu, Youmin

    2017-08-01

    Complex and non-stationary cutting chatter affects productivity and quality in the milling process. Developing an effective condition-monitoring approach is critical to accurately identify cutting chatter. In this paper, an integrated condition-monitoring method is proposed, where reduced features are used to efficiently recognize and classify machine states in the milling process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition, and Shannon power spectral entropy is calculated to extract features from the decomposed signals. Principal component analysis is adopted to reduce feature size and computational cost. With the extracted feature information, the probabilistic neural network model is used to recognize and classify the machine states, including stable, transition, and chatter states. Experimental studies are conducted, and results show that the proposed method can effectively detect cutting chatter during different milling operation conditions. This monitoring method is also efficient enough to satisfy fast machine state recognition and classification.
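
    As a hedged sketch of the feature pipeline described above (decomposed modes, then Shannon power spectral entropy, then dimensionality reduction), the snippet below replaces variational mode decomposition with a crude FFT band split purely for illustration and uses scikit-learn's PCA; the signals and mode count are invented.

```python
# Feature-extraction sketch in the spirit of the paper: Shannon power spectral
# entropy per decomposed mode, then PCA. The VMD step is replaced here by a
# crude FFT band split purely for illustration, not the method of the paper.
import numpy as np
from sklearn.decomposition import PCA

def band_split(signal, n_modes):
    """Placeholder for VMD: split the spectrum into contiguous bands."""
    spec = np.fft.rfft(signal)
    bands = np.array_split(np.arange(spec.size), n_modes)
    modes = []
    for idx in bands:
        s = np.zeros_like(spec)
        s[idx] = spec[idx]
        modes.append(np.fft.irfft(s, n=signal.size))
    return modes

def spectral_entropy(mode):
    """Shannon entropy of the normalized power spectrum of one mode."""
    p = np.abs(np.fft.rfft(mode)) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
signals = rng.standard_normal((50, 4096))          # 50 hypothetical vibration records
features = np.array([[spectral_entropy(m) for m in band_split(s, 6)]
                     for s in signals])

reduced = PCA(n_components=3).fit_transform(features)   # reduced feature size
print(features.shape, "->", reduced.shape)
```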

  4. Energy densification of biomass-derived organic acids

    DOEpatents

    Wheeler, M. Clayton; van Walsum, G. Peter; Schwartz, Thomas J.; van Heiningen, Adriaan

    2013-01-29

    A process for upgrading an organic acid includes neutralizing the organic acid to form a salt and thermally decomposing the resulting salt to form an energy densified product. In certain embodiments, the organic acid is levulinic acid. The process may further include upgrading the energy densified product by conversion to alcohol and subsequent dehydration.

  5. PRODUCTION OF METALS

    DOEpatents

    Spedding, F.H.; Wilhelm, H.A.; Keller, W.H.

    1961-09-19

    A process is described for producing metallic thorium, titanium, zirconium, or hafnium from the fluoride. In the process, the fluoride is reduced with an alkali or alkaline earth metal and a booster compound (e.g. iodine or a decomposable oxysalt) in a sealed bomb at superatmospheric pressure and a temperature above the melting point of the metal to be produced.

  6. An Empirical Investigation of the Impact of the Anchor and Adjustment Heuristic on the Audit Judgment Process

    DTIC Science & Technology

    1988-01-01

    [Scanned report excerpt; only the table of contents and a short fragment are recoverable.] Contents: Introduction; Audit Opinion Process; Professional Judgment; Heuristics in the Audit Process. Fragment: "...to evaluating the results of analytic reviews and internal control compliance tests (Felix and Kinney 1982; also Libby 1981). Decomposing the audit opinion..."

  7. Fast decomposition of two ultrasound longitudinal waves in cancellous bone using a phase rotation parameter for bone quality assessment: Simulation study.

    PubMed

    Taki, Hirofumi; Nagatani, Yoshiki; Matsukawa, Mami; Kanai, Hiroshi; Izumi, Shin-Ichi

    2017-10-01

    Ultrasound signals that pass through cancellous bone may be considered to consist of two longitudinal waves, which are called fast and slow waves. Accurate decomposition of these fast and slow waves is considered to be highly beneficial in determination of the characteristics of cancellous bone. In the present study, a fast decomposition method using a wave transfer function with a phase rotation parameter was applied to received signals that had passed through bovine bone specimens with various bone volume to total volume (BV/TV) ratios in a simulation study, where the elastic finite-difference time-domain method was used and the ultrasound wave propagated parallel to the bone axes. The proposed method succeeded in decomposing both fast and slow waves accurately; the normalized residual intensity was less than -19.5 dB when the specimen thickness ranged from 4 to 7 mm and the BV/TV value ranged from 0.144 to 0.226. There was a strong relationship between the phase rotation value and the BV/TV value. The ratio of the peak envelope amplitude of the decomposed fast wave to that of the slow wave increased monotonically with increasing BV/TV ratio, indicating the high performance of the proposed method in estimation of the BV/TV value in cancellous bone.

  8. Wage Discrimination in the Reemployment Process.

    ERIC Educational Resources Information Center

    Mavromaras, Kostas G.; Rudolph, Helmut

    1997-01-01

    Wage discrimination by gender in reemployment was examined by decomposing the wage gap upon reemployment. Results suggest that employers are using discriminatory hiring practices that are less likely to be detected and harder to prove in court. (SK)

  9. [Spectral characteristics of decomposition of incorporated straw in compound polluted arid loess].

    PubMed

    Fan, Chun-Hui; Zhang, Ying-Chao; Xu, Ji-Ting; Wang, Jia-Hong

    2014-04-01

    The original loess from western China was used as the soil sample, and the spectral methods of scanning electron microscopy-energy dispersive X-ray spectroscopy (SEM-EDS), elemental analysis, Fourier transform infrared spectroscopy (FT-IR) and 13C nuclear magnetic resonance (13C NMR) were used to investigate the characteristics of decomposed straw and the formed humic acids in compound polluted arid loess. The SEM micrographs show the variation from a dense to a decomposed surface, and finally to a damaged structure, and the EDS data reveal the phenomenon of element transfer. The newly formed humic acids are of low aromaticity, helpful for increasing the activity of organic matters in loess. The FTIR spectra throughout the whole process are similar, indicating the complexity of the transformation dynamics of humic acids. The molecular structure of the humic acids becomes simpler, as shown by the 13C NMR spectra. The spectral methods are useful for humic acid identification in loess regions during straw incorporation.

  10. Removal of methylmercury and tributyltin (TBT) using marine microorganisms.

    PubMed

    Lee, Seong Eon; Chung, Jin Wook; Won, Ho Shik; Lee, Dong Sup; Lee, Yong-Woo

    2012-02-01

    Two marine species of bacteria were isolated that are capable of degrading organometallic contaminants: Pseudomonas balearica, which decomposes methylmercury; and Shewanella putrefaciens, which decomposes tributyltin. P. balearica decomposed 97% of methylmercury (20.0 μg/L) into inorganic mercury after 3 h, while S. putrefaciens decomposed 88% of tributyltin (55.3 μg Sn/L) in real wastewater after 36 h. These data indicate that the two bacteria efficiently decomposed the targeted substances and may be applied to real wastewater.

  11. The source of dual-task limitations: Serial or parallel processing of multiple response selections?

    PubMed Central

    Marois, René

    2014-01-01

    Although it is generally recognized that the concurrent performance of two tasks incurs costs, the sources of these dual-task costs remain controversial. The serial bottleneck model suggests that serial postponement of task performance in dual-task conditions results from a central stage of response selection that can only process one task at a time. Cognitive-control models, by contrast, propose that multiple response selections can proceed in parallel, but that serial processing of task performance is predominantly adopted because its processing efficiency is higher than that of parallel processing. In the present study, we empirically tested this proposition by examining whether parallel processing would occur when it was more efficient and financially rewarded. The results indicated that even when parallel processing was more efficient and was incentivized by financial reward, participants still failed to process tasks in parallel. We conclude that central information processing is limited by a serial bottleneck. PMID:23864266

  12. Parallel Activation in Bilingual Phonological Processing

    ERIC Educational Resources Information Center

    Lee, Su-Yeon

    2011-01-01

    In bilingual language processing, the parallel activation hypothesis suggests that bilinguals activate their two languages simultaneously during language processing. Support for the parallel activation mainly comes from studies of lexical (word-form) processing, with relatively less attention to phonological (sound) processing. According to…

  13. A Simple Application of Compressed Sensing to Further Accelerate Partially Parallel Imaging

    PubMed Central

    Miao, Jun; Guo, Weihong; Narayan, Sreenath; Wilson, David L.

    2012-01-01

    Compressed Sensing (CS) and partially parallel imaging (PPI) enable fast MR imaging by reducing the amount of k-space data required for reconstruction. Past attempts to combine these two have been limited by the incoherent sampling requirement of CS, since PPI routines typically sample on a regular (coherent) grid. Here, we developed a new method, “CS+GRAPPA,” to overcome this limitation. We decomposed sets of equidistant samples into multiple random subsets. Then, we reconstructed each subset using CS and averaged the results to get a final CS k-space reconstruction. We used both a standard CS reconstruction and an edge- and joint-sparsity-guided CS reconstruction. We tested these intermediate results on both synthetic and real MR phantom data, and performed a human observer experiment to determine the effectiveness of decomposition and to optimize the number of subsets. We then used these CS reconstructions to calibrate the GRAPPA complex coil weights. In vivo parallel MR brain and heart data sets were used. An objective image quality evaluation metric, Case-PDM, was used to quantify image quality. Coherent aliasing and noise artifacts were significantly reduced using two decompositions. More decompositions further reduced coherent aliasing and noise artifacts but introduced blurring. However, the blurring was effectively minimized using our new edge- and joint-sparsity-guided CS using two decompositions. Numerical results on parallel data demonstrated that the combined method greatly improved image quality as compared to standard GRAPPA, on average halving Case-PDM scores across a range of sampling rates. The proposed technique allowed the same Case-PDM scores as standard GRAPPA, using about half the number of samples. We conclude that the new method augments GRAPPA by combining it with CS, allowing CS to work even when the k-space sampling pattern is equidistant. PMID:22902065
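
    A hedged sketch of the decomposition step alone is shown below: equidistant phase-encode lines are split into random subsets and the per-subset reconstructions are averaged. The CS solver is replaced by a zero-filled inverse FFT placeholder, so this is only the outer loop of the idea, not the CS+GRAPPA method itself.

```python
# Sketch of the sample-decomposition idea only: equidistant k-space lines are
# split into random subsets, each subset is "reconstructed" and the results
# are averaged. The CS solver is replaced by a zero-filled inverse FFT here,
# so this is NOT the CS+GRAPPA algorithm itself, just its outer loop.
import numpy as np

rng = np.random.default_rng(0)
img = np.zeros((128, 128))
img[40:90, 30:100] = 1.0                        # toy phantom
kspace = np.fft.fft2(img)

acquired = np.arange(0, 128, 2)                 # equidistant (R = 2) phase-encode lines
subsets = np.array_split(rng.permutation(acquired), 2)    # two random subsets

recons = []
for lines in subsets:
    masked = np.zeros_like(kspace)
    masked[lines, :] = kspace[lines, :]         # keep only this subset's lines
    recons.append(np.abs(np.fft.ifft2(masked))) # stand-in for a CS reconstruction

combined = np.mean(recons, axis=0)
print("combined image shape:", combined.shape)
```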

  14. Anglesite and silver recovery from jarosite residues through roasting and sulfidization-flotation in zinc hydrometallurgy.

    PubMed

    Han, Haisheng; Sun, Wei; Hu, Yuehua; Jia, Baoliang; Tang, Honghu

    2014-08-15

    Hazardous jarosite residues contain abundant valuable minerals that are difficult to recover by a traditional flotation process. This study presents a new route, roasting combined with sulfidization-flotation, for the recovery of anglesite and silver from jarosite residues of zinc hydrometallurgy. The surface appearance and elemental distribution of jarosite residues were examined by scanning electron microscopy and energy dispersive X-ray spectrometry analysis, respectively. Decomposition and transformation mechanisms of jarosite residues were illustrated by differential thermal analysis. Results showed that after roasting combined with flotation, the grade and recovery of lead were 43.89% and 66.86%, respectively, and those of silver were 1.3 kg/t and 81.60%, respectively. At 600-700 °C, jarosite was decomposed to release encapsulated valuable minerals such as anglesite (PbSO4) and silver mineral; silver jarosite decomposed into silver sulfate (Ag2SO4); and zinc ferrite (ZnO · Fe2O3) decomposed into zinc sulfate (ZnSO4) and hematite (Fe2O3). The exposed anglesite and silver minerals were modified by sodium sulfide and easily collected by flotation collectors. This study demonstrates that the combination of roasting and sulfidization-flotation provides a promising process for the recovery of zinc, lead, and silver from jarosite residues of zinc hydrometallurgy. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Decomposed bodies--still an unrewarding autopsy?

    PubMed

    Ambade, Vipul Namdeorao; Keoliya, Ajay Narmadaprasad; Deokar, Ravindra Baliram; Dixit, Pradip Gangadhar

    2011-04-01

    One of the classic mistakes in forensic pathology is to regard the autopsy of a decomposed body as unrewarding. The present study was undertaken with a view to debunk this myth and to determine the characteristic pattern in decomposed bodies brought for medicolegal autopsy. From a total of 4997 medicolegal deaths reported at an Apex Medical Centre, Yeotmal, a rural district of Maharashtra, over a seven-year study period, only 180 cases were decomposed, representing 3.6% of the total medicolegal autopsies, with a rate of 1.5 decomposed bodies/100,000 population per year. A male (79.4%) predominance was seen in decomposed bodies, with a male-to-female ratio of 3.9:1. Most of the victims were between the ages of 31 and 60 years, with a peak at 31-40 years (26.7%) followed by 41-50 years (19.4%). Older age above 60 years was found in 8.6% of cases. Married (64.4%) outnumbered unmarried ones in decomposition. Most of the decomposed bodies were complete (83.9%) and identified (75%). But when the body was incomplete/mutilated or skeletonised, 57.7% of the deceased remained unidentified. The cause and manner of death were ascertained in 85.6% and 81.1% of cases, respectively. Drowning (35.6%) was the commonest cause of death in decomposed bodies, with suicide (52.8%) as the commonest manner of death. Decomposed bodies were commonly recovered from open places (43.9%), followed by water sources (43.3%) and enclosed places (12.2%). Most of the decomposed bodies were retrieved from wells (49 cases) followed by barren land (27 cases) and forest (17 cases). 83.8% of the decomposed bodies were recovered before 72 h, and only in 16.2% of cases was the time since death more than 72 h, mostly recovered from barren land, forest and river. Most of the decomposed bodies were found in the summer season (42.8%), with a peak in the month of May. Despite technical difficulties in handling the body and artefactual alteration of the tissue, the decomposed body may still reveal the cause and manner of death in a significant number of cases. Copyright © 2011 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  16. Fast Numerical Solution of the Plasma Response Matrix for Real-time Ideal MHD Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glasser, Alexander; Kolemen, Egemen; Glasser, Alan H.

    To help effectuate near real-time feedback control of ideal MHD instabilities in tokamak geometries, a parallelized version of A.H. Glasser’s DCON (Direct Criterion of Newcomb) code is developed. To motivate the numerical implementation, we first solve DCON’s δW formulation with a Hamilton-Jacobi theory, elucidating analytical and numerical features of the ideal MHD stability problem. The plasma response matrix is demonstrated to be the solution of an ideal MHD Riccati equation. We then describe our adaptation of DCON with numerical methods natural to solutions of the Riccati equation, parallelizing it to enable its operation in near real-time. We replace DCON’s serial integration of perturbed modes, which satisfy a singular Euler-Lagrange equation, with a domain-decomposed integration of state transition matrices. Output is shown to match results from DCON with high accuracy, and with computation time < 1 s. Such computational speed may enable active feedback ideal MHD stability control, especially in plasmas whose ideal MHD equilibria evolve with inductive timescale τ ≳ 1 s, as in ITER. Further potential applications of this theory are discussed.

  17. Functional Parallel Factor Analysis for Functions of One- and Two-dimensional Arguments.

    PubMed

    Choi, Ji Yeh; Hwang, Heungsun; Timmerman, Marieke E

    2018-03-01

    Parallel factor analysis (PARAFAC) is a useful multivariate method for decomposing three-way data that consist of three different types of entities simultaneously. This method estimates trilinear components, each of which is a low-dimensional representation of a set of entities, often called a mode, to explain the maximum variance of the data. Functional PARAFAC permits the entities in different modes to be smooth functions or curves, varying over a continuum, rather than a collection of unconnected responses. The existing functional PARAFAC methods handle functions of a one-dimensional argument (e.g., time) only. In this paper, we propose a new extension of functional PARAFAC for handling three-way data whose responses are sequenced along both a two-dimensional domain (e.g., a plane with x- and y-axis coordinates) and a one-dimensional argument. Technically, the proposed method combines PARAFAC with basis function expansion approximations, using a set of piecewise quadratic finite element basis functions for estimating two-dimensional smooth functions and a set of one-dimensional basis functions for estimating one-dimensional smooth functions. In a simulation study, the proposed method appeared to outperform the conventional PARAFAC. We apply the method to EEG data to demonstrate its empirical usefulness.

  18. Fast Numerical Solution of the Plasma Response Matrix for Real-time Ideal MHD Control

    DOE PAGES

    Glasser, Alexander; Kolemen, Egemen; Glasser, Alan H.

    2018-03-26

    To help effectuate near real-time feedback control of ideal MHD instabilities in tokamak geometries, a parallelized version of A.H. Glasser’s DCON (Direct Criterion of Newcomb) code is developed. To motivate the numerical implementation, we first solve DCON’s δW formulation with a Hamilton-Jacobi theory, elucidating analytical and numerical features of the ideal MHD stability problem. The plasma response matrix is demonstrated to be the solution of an ideal MHD Riccati equation. We then describe our adaptation of DCON with numerical methods natural to solutions of the Riccati equation, parallelizing it to enable its operation in near real-time. We replace DCON’s serial integration of perturbed modes, which satisfy a singular Euler-Lagrange equation, with a domain-decomposed integration of state transition matrices. Output is shown to match results from DCON with high accuracy, and with computation time < 1 s. Such computational speed may enable active feedback ideal MHD stability control, especially in plasmas whose ideal MHD equilibria evolve with inductive timescale τ ≳ 1 s, as in ITER. Further potential applications of this theory are discussed.

  19. Synthesizing parallel imaging applications using the CAP (computer-aided parallelization) tool

    NASA Astrophysics Data System (ADS)

    Gennart, Benoit A.; Mazzariol, Marc; Messerli, Vincent; Hersch, Roger D.

    1997-12-01

    Imaging applications such as filtering, image transforms and compression/decompression require vast amounts of computing power when applied to large data sets. These applications would potentially benefit from the use of parallel processing. However, dedicated parallel computers are expensive and their processing power per node lags behind that of the most recent commodity components. Furthermore, developing parallel applications remains a difficult task: writing and debugging the application is difficult (deadlocks), programs may not be portable from one parallel architecture to the other, and performance often falls short of expectations. In order to facilitate the development of parallel applications, we propose the CAP computer-aided parallelization tool, which enables application programmers to specify at a high level of abstraction the flow of data between pipelined-parallel operations. In addition, the CAP tool supports the programmer in developing parallel imaging and storage operations. CAP enables the efficient combination of parallel storage access routines and sequential image processing operations. This paper shows how processing and I/O intensive imaging applications must be implemented to take advantage of parallelism and pipelining between data access and processing. This paper's contribution is (1) to show how such implementations can be compactly specified in CAP, and (2) to demonstrate that CAP-specified applications achieve the performance of custom parallel code. The paper analyzes theoretically the performance of CAP-specified applications and demonstrates the accuracy of the theoretical analysis through experimental measurements.

  20. Recovery of decomposition and soil microarthropod communities in a clearcut watershed in the Southern Appalachians

    Treesearch

    Liam Heneghan; Alissa Salmore

    2014-01-01

    The recovery of ecosystems after disturbance remains a productive theme for ecological research. Numerous studies have focused either on the reestablishment of biological communities or on the recovery of ecosystem processes after perturbations. In the case of decomposer organisms and the processes of organic matter decay and the mineralization of nutrients, the...

  1. Interaction of Substrate and Nutrient Availability on wood Biofilm Processes in Streams

    Treesearch

    Jennifer L. Tank; J.R. Webster

    1998-01-01

    We examined the effect of decomposing leaf litter and dissolved inorganic nutrients on the heterotrophic biofilm of submerged wood in streams with and without leaves. Leaf litter was excluded from one headwater stream in August 1993 at Coweeta Hydrologic Laboratory in the southern Appalachian Mountains. We compared microbial processes on wood in the litter-excluded...

  2. Pathogen analysis of NYSDOT road-killed deer carcass compost facilities.

    DOT National Transportation Integrated Search

    2008-09-01

    Composting of deer carcasses was effective in reducing pathogen levels, decomposing the carcasses and producing a useable end product after 12 months. The composting process used in this project involved enveloping the carcasses of road-killed de...

  3. Expansion of Tabulated Scattering Matrices in Generalized Spherical Functions

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Geogdzhayev, Igor V.; Yang, Ping

    2016-01-01

    An efficient way to solve the vector radiative transfer equation for plane-parallel turbid media is to Fourier-decompose it in azimuth. This methodology is typically based on the analytical computation of the Fourier components of the phase matrix and is predicated on the knowledge of the coefficients appearing in the expansion of the normalized scattering matrix in generalized spherical functions. Quite often the expansion coefficients have to be determined from tabulated values of the scattering matrix obtained from measurements or calculated by solving the Maxwell equations. In such cases one needs an efficient and accurate computer procedure converting a tabulated scattering matrix into the corresponding set of expansion coefficients. This short communication summarizes the theoretical basis of this procedure and serves as the user guide to a simple public-domain FORTRAN program.

  4. A look at scalable dense linear algebra libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, J.J.; Van de Geijn, R.A.; Walker, D.W.

    1992-01-01

    We discuss the essential design features of a library of scalable software for performing dense linear algebra computations on distributed memory concurrent computers. The square block scattered decomposition is proposed as a flexible and general-purpose way of decomposing most, if not all, dense matrix problems. An object-oriented interface to the library permits more portable applications to be written, and is easy to learn and use, since details of the parallel implementation are hidden from the user. Experiments on the Intel Touchstone Delta system with a prototype code that uses the square block scattered decomposition to perform LU factorization are presented and analyzed. It was found that the code was both scalable and efficient, performing at about 14 GFLOPS (double precision) for the largest problem considered.
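
    Illustrative sketch (not the library's code): in a square block scattered (block-cyclic) decomposition, block (i, j) of the matrix is owned by process (i mod P, j mod Q) of a P×Q process grid, which spreads every region of the matrix over all processes. The matrix size, block size, and grid shape below are arbitrary assumptions.

        import numpy as np

        def owner(i_block, j_block, p_rows, p_cols):
            # Block (i, j) is assigned to process (i mod p_rows, j mod p_cols).
            return (i_block % p_rows, j_block % p_cols)

        n, block = 8, 2          # an 8x8 matrix partitioned into 2x2 blocks
        p_rows, p_cols = 2, 2    # a 2x2 process grid
        layout = np.empty((n // block, n // block), dtype=object)
        for i in range(n // block):
            for j in range(n // block):
                layout[i, j] = owner(i, j, p_rows, p_cols)
        print(layout)  # each entry shows which process owns that block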

  5. A look at scalable dense linear algebra libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, J.J.; Van de Geijn, R.A.; Walker, D.W.

    1992-08-01

    We discuss the essential design features of a library of scalable software for performing dense linear algebra computations on distributed memory concurrent computers. The square block scattered decomposition is proposed as a flexible and general-purpose way of decomposing most, if not all, dense matrix problems. An object-oriented interface to the library permits more portable applications to be written, and is easy to learn and use, since details of the parallel implementation are hidden from the user. Experiments on the Intel Touchstone Delta system with a prototype code that uses the square block scattered decomposition to perform LU factorization are presented and analyzed. It was found that the code was both scalable and efficient, performing at about 14 GFLOPS (double precision) for the largest problem considered.

  6. ParaExp Using Leapfrog as Integrator for High-Frequency Electromagnetic Simulations

    NASA Astrophysics Data System (ADS)

    Merkel, M.; Niyonzima, I.; Schöps, S.

    2017-12-01

    Recently, ParaExp was proposed for the time integration of linear hyperbolic problems. It splits the time interval of interest into subintervals and computes the solution on each subinterval in parallel. The overall solution is decomposed into a particular solution defined on each subinterval with zero initial conditions and a homogeneous solution propagated by the matrix exponential applied to the initial conditions. The efficiency of the method depends on fast approximations of this matrix exponential based on recent results from numerical linear algebra. This paper deals with the application of ParaExp in combination with Leapfrog to electromagnetic wave problems in time domain. Numerical tests are carried out for a simple toy problem and a realistic spiral inductor model discretized by the Finite Integration Technique.
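
    Illustrative sketch (not the paper's code): for a linear problem u' = A u + g(t), the ParaExp splitting computes, on each subinterval, an inhomogeneous solution with zero initial condition (parallelizable with any time stepper) and then propagates those endpoint values, together with the true initial data, to the final time with the matrix exponential. The test matrix A, forcing g, and interval below are made-up placeholders.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.linalg import expm

        A = np.array([[0.0, 1.0], [-4.0, -0.1]])        # damped oscillator
        g = lambda t: np.array([0.0, np.sin(3.0 * t)])  # forcing term
        u0 = np.array([1.0, 0.0])
        T, p = 2.0, 4
        t = np.linspace(0.0, T, p + 1)

        # Type-1 problems (parallel): inhomogeneous, zero initial condition,
        # integrated only over their own subinterval.
        def particular(t0, t1):
            rhs = lambda s, v: A @ v + g(s)
            return solve_ivp(rhs, (t0, t1), np.zeros(2), rtol=1e-10).y[:, -1]
        v_end = [particular(t[j], t[j + 1]) for j in range(p)]

        # Type-2 problems (parallel): homogeneous, propagated to T by the
        # matrix exponential applied to each subinterval's initial condition.
        u_T = expm(A * T) @ u0                          # carries the initial data
        for j in range(p - 1):
            u_T += expm(A * (T - t[j + 1])) @ v_end[j]  # propagate v_j to time T
        u_T += v_end[-1]                                # last particular solution ends at T

        # Reference: direct serial integration of the full problem.
        ref = solve_ivp(lambda s, u: A @ u + g(s), (0.0, T), u0, rtol=1e-10).y[:, -1]
        print(u_T, ref)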

  7. The Goddard Space Flight Center Program to develop parallel image processing systems

    NASA Technical Reports Server (NTRS)

    Schaefer, D. H.

    1972-01-01

    Parallel image processing, defined as image processing in which all points of an image are operated upon simultaneously, is discussed. Coherent optical, noncoherent optical, and electronic methods are considered as parallel image processing techniques.

  8. Robot acting on moving bodies (RAMBO): Preliminary results

    NASA Technical Reports Server (NTRS)

    Davis, Larry S.; Dementhon, Daniel; Bestul, Thor; Ziavras, Sotirios; Srinivasan, H. V.; Siddalingaiah, Madju; Harwood, David

    1989-01-01

    A robot system called RAMBO is being developed. It is equipped with a camera, which, given a sequence of simple tasks, can perform these tasks on a moving object. RAMBO is given a complete geometric model of the object. A low level vision module extracts and groups characteristic features in images of the object. The positions of the object are determined in a sequence of images, and a motion estimate of the object is obtained. This motion estimate is used to plan trajectories of the robot tool to locations near the object sufficient for achieving the tasks. More specifically, low level vision uses parallel algorithms for image enhancement by symmetric nearest neighbor filtering, edge detection by local gradient operators, and corner extraction by sector filtering. The object pose estimation is a Hough transform method accumulating position hypotheses obtained by matching triples of image features (corners) to triples of model features. To maximize computing speed, the estimate of the position in space of a triple of features is obtained by decomposing its perspective view into a product of rotations and a scaled orthographic projection. This allows the use of 2-D lookup tables at each stage of the decomposition. The position hypotheses for each possible match of model feature triples and image feature triples are calculated in parallel. Trajectory planning combines heuristic and dynamic programming techniques. Then trajectories are created using parametric cubic splines between initial and goal trajectories. All the parallel algorithms run on a Connection Machine CM-2 with 16K processors.

  9. Obtaining identical results with double precision global accuracy on different numbers of processors in parallel particle Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cleveland, Mathew A., E-mail: cleveland7@llnl.gov; Brunner, Thomas A.; Gentile, Nicholas A.

    2013-10-15

    We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double precision arithmetic is not associative. Parallel Monte Carlo simulations, both domain-replicated and domain-decomposed, will run their particles in a different order during different runs of the same simulation because of the non-reproducibility of communication between processors. In addition, runs of the same simulation using different domain decompositions will also result in particles being simulated in a different order. In [1], a way of eliminating non-associative accumulations using integer tallies was described. This approach successfully achieves reproducibility at the cost of lost accuracy by rounding double precision numbers to fewer significant digits. This integer approach, and other extended and reduced precision reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary precision approaches require a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time-step.
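
    Illustrative sketch (not the paper's implementation): one way to see why integer tallies restore reproducibility is that integer addition is associative, so an accumulation that first rounds each contribution to a fixed number of digits gives the same answer in any summation order, at the cost of the discarded digits. The number of digits kept below is an arbitrary assumption.

        import random

        def fixed_point_sum(values, digits=12):
            # Round each contribution to a fixed number of decimal digits and
            # accumulate exact integers; the result is order-independent.
            scale = 10 ** digits
            total = sum(int(round(v * scale)) for v in values)
            return total / scale

        values = [random.uniform(-1.0, 1.0) for _ in range(100000)]
        shuffled = random.sample(values, len(values))

        print(sum(values) == sum(shuffled))                          # often False
        print(fixed_point_sum(values) == fixed_point_sum(shuffled))  # True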

  10. Fast Detection of Material Deformation through Structural Dissimilarity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ushizima, Daniela; Perciano, Talita; Parkinson, Dilworth

    2015-10-29

    Designing materials that are resistant to extreme temperatures and brittleness relies on assessing structural dynamics of samples. Algorithms are critically important to characterize material deformation under stress conditions. Here, we report on our design of coarse-grain parallel algorithms for image quality assessment based on structural information and on crack detection of gigabyte-scale experimental datasets. We show how key steps can be decomposed into distinct processing flows, one based on structural similarity (SSIM) quality measure, and another on spectral content. These algorithms act upon image blocks that fit into memory, and can execute independently. We discuss the scientific relevance of the problem, key developments, and decomposition of complementary tasks into separate executions. We show how to apply SSIM to detect material degradation, and illustrate how this metric can be allied to spectral analysis for structure probing, while using tiled multi-resolution pyramids stored in HDF5 chunked multi-dimensional arrays. Results show that the proposed experimental data representation supports an average compression rate of 10X, and data compression scales linearly with the data size. We also illustrate how to correlate SSIM to crack formation, and how to use our numerical schemes to enable fast detection of deformation from 3D datasets evolving in time.
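
    Illustrative sketch (not the paper's pipeline): because the SSIM-based quality check acts on independent image blocks, a simplified single-window SSIM can be evaluated block by block and the lowest-scoring blocks flagged as candidate deformation sites; each block could equally be dispatched to a separate worker. The SSIM constants, block size, and synthetic defect are assumptions for the demo.

        import numpy as np

        def ssim_block(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
            # Simplified single-window SSIM between two blocks of an 8-bit image.
            x, y = x.astype(float), y.astype(float)
            mx, my = x.mean(), y.mean()
            vx, vy = x.var(), y.var()
            cov = ((x - mx) * (y - my)).mean()
            return ((2 * mx * my + c1) * (2 * cov + c2)) / \
                   ((mx**2 + my**2 + c1) * (vx + vy + c2))

        def blockwise_ssim(img_a, img_b, block=64):
            # Score each block independently; there is no coupling between blocks.
            scores = {}
            for i in range(0, img_a.shape[0], block):
                for j in range(0, img_a.shape[1], block):
                    scores[(i, j)] = ssim_block(img_a[i:i+block, j:j+block],
                                                img_b[i:i+block, j:j+block])
            return scores  # low scores flag blocks whose structure changed

        rng = np.random.default_rng(0)
        before = rng.integers(0, 256, (256, 256))
        after = before.copy()
        after[100:140, 100:140] = 0  # simulate a local defect
        worst = min(blockwise_ssim(before, after).items(), key=lambda kv: kv[1])
        print(worst)  # a block overlapping the simulated defect scores lowest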

  11. Optimizing the inner loop of the gravitational force interaction on modern processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren, Michael S

    2010-12-08

    We have achieved superior performance on multiple generations of the fastest supercomputers in the world with our hashed oct-tree N-body code (HOT), spanning almost two decades and garnering multiple Gordon Bell Prizes for significant achievement in parallel processing. Execution time for our N-body code is largely influenced by the force calculation in the inner loop. Improvements to the inner loop using SSE3 instructions have enabled the calculation of over 200 million gravitational interactions per second per processor on a 2.6 GHz Opteron, for a computational rate of over 7 Gflops in single precision (70% of peak). We obtain optimal performance on some processors (including the Cell) by decomposing the reciprocal square root function required for a gravitational interaction into a table lookup, Chebyshev polynomial interpolation, and Newton-Raphson iteration, using the algorithm of Karp. By unrolling the loop by a factor of six, and using SPU intrinsics to compute on vectors, we obtain performance of over 16 Gflops on a single Cell SPE. Aggregated over the 8 SPEs on a Cell processor, the overall performance is roughly 130 Gflops. In comparison, the ordinary C version of our inner loop only obtains 1.6 Gflops per SPE with the spuxlc compiler.
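
    Illustrative sketch (not the HOT inner loop): the decomposition of the reciprocal square root into a table lookup followed by Newton-Raphson refinement can be shown in scalar form; the table resolution and iteration count are arbitrary assumptions, and the Chebyshev interpolation stage of Karp's algorithm is omitted for brevity.

        import math

        # Coarse seed table for 1/sqrt(m) with m in [1, 4).
        TABLE = [1.0 / math.sqrt(1.0 + 3.0 * (k + 0.5) / 64.0) for k in range(64)]

        def rsqrt(x, iterations=3):
            # Range reduction: write x = m * 4**e with m in [1, 4), so that
            # 1/sqrt(x) = 2**(-e) / sqrt(m).
            e, m = 0, x
            while m >= 4.0:
                m /= 4.0
                e += 1
            while m < 1.0:
                m *= 4.0
                e -= 1
            y = TABLE[int((m - 1.0) / 3.0 * 64.0)]   # table lookup seed
            for _ in range(iterations):
                y = 0.5 * y * (3.0 - m * y * y)      # Newton-Raphson for 1/sqrt(m)
            return y * 2.0 ** (-e)

        print(rsqrt(0.37), 1.0 / math.sqrt(0.37))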

  12. Thread concept for automatic task parallelization in image analysis

    NASA Astrophysics Data System (ADS)

    Lueckenhaus, Maximilian; Eckstein, Wolfgang

    1998-09-01

    Parallel processing of image analysis tasks is an essential method to speed up image processing and helps to exploit the full capacity of distributed systems. However, writing parallel code is a difficult and time-consuming process and often leads to an architecture-dependent program that has to be re-implemented when changing the hardware. Therefore it is highly desirable to do the parallelization automatically. For this we have developed a special kind of thread concept for image analysis tasks. Threads derived from one subtask may share objects and run in the same context but may follow different threads of execution and work on different data in parallel. In this paper we describe the basics of our thread concept and show how it can be used as the basis of an automatic task parallelization to speed up image processing. We further illustrate the design and implementation of an agent-based system that uses image analysis threads for generating and processing parallel programs by taking into account the available hardware. The tests made with our system prototype show that the thread concept combined with the agent paradigm is suitable for speeding up image processing by an automatic parallelization of image analysis tasks.

  13. Studies in optical parallel processing. [All optical and electro-optic approaches

    NASA Technical Reports Server (NTRS)

    Lee, S. H.

    1978-01-01

    Threshold and A/D devices for converting a gray scale image into a binary one were investigated for all-optical and opto-electronic approaches to parallel processing. Integrated optical logic circuits (IOC) and optical parallel logic devices (OPA) were studied as an approach to processing optical binary signals. In the IOC logic scheme, a single row of an optical image is coupled into the IOC substrate at a time through an array of optical fibers. Parallel processing is carried out, on each image element of these rows, in the IOC substrate and the resulting output exits via a second array of optical fibers. The OPAL system for parallel processing, which uses a Fabry-Perot interferometer for image thresholding and analog-to-digital conversion, achieves a higher degree of parallel processing than is possible with IOC.

  14. Petri net model for analysis of concurrently processed complex algorithms

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1986-01-01

    This paper presents a Petri-net model suitable for analyzing the concurrent processing of computationally complex algorithms. The decomposed operations are to be processed in a multiple processor, data driven architecture. Of particular interest is the application of the model to both the description of the data/control flow of a particular algorithm, and to the general specification of the data driven architecture. A candidate architecture is also presented.

  15. Parallel workflow tools to facilitate human brain MRI post-processing

    PubMed Central

    Cui, Zaixu; Zhao, Chenxi; Gong, Gaolang

    2015-01-01

    Multi-modal magnetic resonance imaging (MRI) techniques are widely applied in human brain studies. To obtain specific brain measures of interest from MRI datasets, a number of complex image post-processing steps are typically required. Parallel workflow tools have recently been developed, concatenating individual processing steps and enabling fully automated processing of raw MRI data to obtain the final results. These workflow tools are also designed to make optimal use of available computational resources and to support the parallel processing of different subjects or of independent processing steps for a single subject. Automated, parallel MRI post-processing tools can greatly facilitate relevant brain investigations and are being increasingly applied. In this review, we briefly summarize these parallel workflow tools and discuss relevant issues. PMID:26029043

  16. DEMONSTRATION BULLETIN: IN SITU VITRIFICATION - GEOSAFE CORPORATION

    EPA Science Inventory

    In Situ Vitrification (ISV) is designed to treat soils, sludges, sediments, and mine tailings contaminated with organic and inorganic compounds. The process uses electrical current to heat (melt) and vitrify the soil in place. Organic contaminants are decomposed by the extreme h...

  17. 9 CFR 590.510 - Classifications of shell eggs used in the processing of egg products.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... to include black rots, white rots, mixed rots, green whites, eggs with diffused blood in the albumen... any other filthy and decomposed eggs including the following: (1) Any egg with visible foreign matter...

  18. 9 CFR 590.510 - Classifications of shell eggs used in the processing of egg products.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... to include black rots, white rots, mixed rots, green whites, eggs with diffused blood in the albumen... any other filthy and decomposed eggs including the following: (1) Any egg with visible foreign matter...

  19. Method of carbon dioxide-free hydrogen production from hydrocarbon decomposition over metal salts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erlebacher, Jonah; Gaskey, Bernard

    A process to decompose methane into carbon (graphitic powder) and hydrogen (H₂ gas) without secondary production of carbon dioxide, employing a cycle in which a secondary chemical is recycled and reused, is disclosed.

  20. Modeling diffusion control on organic matter decomposition in unsaturated soil pore space

    NASA Astrophysics Data System (ADS)

    Vogel, Laure; Pot, Valérie; Garnier, Patricia; Vieublé-Gonod, Laure; Nunan, Naoise; Raynaud, Xavier; Chenu, Claire

    2014-05-01

    Soil organic matter decomposition is affected by soil structure and water content, but field and laboratory studies of this issue reach highly variable conclusions. This variability could be explained by the discrepancy between the scale at which key processes occur and the scale at which measurements are made. We think that the physical and biological interactions driving carbon transformation dynamics can be best understood at the pore scale. Because of the spatial disconnection between carbon sources and decomposers, the latter rely on nutrient transport unless they can actively move. In the hydrostatic case, diffusion in the soil pore space is thus thought to regulate biological activity. In unsaturated conditions, the heterogeneous distribution of water modifies diffusion pathways and rates, and thus affects diffusion control on decomposition. Innovative imaging and modeling tools offer new means to address these effects. We have developed a new model based on the association between a 3D Lattice-Boltzmann model and an adimensional decomposition module. We designed scenarios to study the impact of physical properties (geometry, saturation, decomposer position) and biological properties on decomposition. The model was applied to porous media with various morphologies. We selected three cubic images of 100 voxels per side from µCT-scanned images of an undisturbed soil sample at 68 µm resolution. We used LBM to perform phase separation and obtained water phase distributions at equilibrium for different saturation indices. We then simulated the diffusion of a simple soluble substrate (glucose) and its consumption by bacteria. The same mass of glucose was added as a pulse at the beginning of all simulations. Bacteria were placed in a few voxels, either regularly spaced or concentrated close to or far from the glucose source. We modulated the physiological features of the decomposers in order to weight them against abiotic conditions. We were able to demonstrate several effects that create unequal substrate access conditions for decomposers and hence induce contrasting decomposition kinetics: the position of bacteria relative to the substrate diffusion pathways, the diffusion rate and hydraulic connectivity between bacteria and the substrate source, and local substrate enrichment due to restricted mass transfer. Physiological characteristics had a strong impact on decomposition only when glucose diffused easily, not when diffusion limitation prevailed. This suggests that carbon dynamics should not be considered to derive from decomposers' physiology alone but rather from the interactions of biological and physical processes at the microscale.

  1. Cooperative storage of shared files in a parallel computing system with dynamic block size

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-11-10

    Improved techniques are provided for parallel writing of data to a shared object in a parallel computing system. A method is provided for storing data generated by a plurality of parallel processes to a shared object in a parallel computing system. The method is performed by at least one of the processes and comprises: dynamically determining a block size for storing the data; exchanging a determined amount of the data with at least one additional process to achieve a block of the data having the dynamically determined block size; and writing the block of the data having the dynamically determined block size to a file system. The determined block size comprises, e.g., a total amount of the data to be stored divided by the number of parallel processes. The file system comprises, for example, a log structured virtual parallel file system, such as a Parallel Log-Structured File System (PLFS).
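
    Illustrative sketch (not the patented implementation): the block-size rule stated above, total data divided by the number of processes, determines how much each process must exchange with its neighbours before writing its block. The chunk sizes below are made-up numbers.

        def plan_blocks(chunk_sizes):
            # chunk_sizes[i] is the amount of data process i generated locally.
            total = sum(chunk_sizes)
            nprocs = len(chunk_sizes)
            block = total // nprocs          # dynamically determined block size
            # Positive entries: data a process must give away; negative: receive.
            imbalance = [size - block for size in chunk_sizes]
            return block, imbalance

        print(plan_blocks([120, 80, 100, 100]))  # -> (100, [20, -20, 0, 0])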

  2. Efficient multitasking: parallel versus serial processing of multiple tasks

    PubMed Central

    Fischer, Rico; Plessow, Franziska

    2015-01-01

    In the context of performance optimizations in multitasking, a central debate has unfolded in multitasking research around whether cognitive processes related to different tasks proceed only sequentially (one at a time), or can operate in parallel (simultaneously). This review features a discussion of theoretical considerations and empirical evidence regarding parallel versus serial task processing in multitasking. In addition, we highlight how methodological differences and theoretical conceptions determine the extent to which parallel processing in multitasking can be detected, to guide their employment in future research. Parallel and serial processing of multiple tasks are not mutually exclusive. Therefore, questions focusing exclusively on either task-processing mode are too simplified. We review empirical evidence and demonstrate that shifting between more parallel and more serial task processing critically depends on the conditions under which multiple tasks are performed. We conclude that efficient multitasking is reflected by the ability of individuals to adjust multitasking performance to environmental demands by flexibly shifting between different processing strategies of multiple task-component scheduling. PMID:26441742

  3. Efficient multitasking: parallel versus serial processing of multiple tasks.

    PubMed

    Fischer, Rico; Plessow, Franziska

    2015-01-01

    In the context of performance optimizations in multitasking, a central debate has unfolded in multitasking research around whether cognitive processes related to different tasks proceed only sequentially (one at a time), or can operate in parallel (simultaneously). This review features a discussion of theoretical considerations and empirical evidence regarding parallel versus serial task processing in multitasking. In addition, we highlight how methodological differences and theoretical conceptions determine the extent to which parallel processing in multitasking can be detected, to guide their employment in future research. Parallel and serial processing of multiple tasks are not mutually exclusive. Therefore, questions focusing exclusively on either task-processing mode are too simplified. We review empirical evidence and demonstrate that shifting between more parallel and more serial task processing critically depends on the conditions under which multiple tasks are performed. We conclude that efficient multitasking is reflected by the ability of individuals to adjust multitasking performance to environmental demands by flexibly shifting between different processing strategies of multiple task-component scheduling.

  4. Automated Interactive Simulation Model (AISIM) VAX Version 5.0 Training Manual.

    DTIC Science & Technology

    1987-05-29

    action, activity, decision, etc. that consumes time. The entity is automatically created by the system when an ACTION Primitive is placed. 1.3.2.4 The...MODELED SYSTEM 1.3.2.1 The Process Entity. A Process is used to represent the operations, decisions, actions or activities that can be decomposed and...is associated with the Action entity described below, is included in Process definitions to indicate the time a certain Action (or process, decision

  5. Process for decomposing lignin in biomass

    DOEpatents

    Rector, Kirk Davin; Lucas, Marcel; Wagner, Gregory Lawrence; Kimball, David Bryan; Hanson, Susan Kloek

    2014-10-28

    A mild, inexpensive process for treating lignocellulosic biomass involves oxidative delignification of wood using an aqueous solution prepared by dissolving a catalytic amount of manganese(III) acetate in water and adding hydrogen peroxide. Within 4 days and without agitation, the solution converted poplar wood sections into fine, powder-like, delignified, cellulose-rich material that included individual wood cells.

  6. The prospect of hazardous sludge reduction through gasification process

    NASA Astrophysics Data System (ADS)

    Hakiki, R.; Wikaningrum, T.; Kurniawan, T.

    2018-01-01

    Biological sludge generated from a centralized industrial WWTP is classified as toxic and hazardous waste under Indonesian Government Regulation No. 101/2014. The mass and volume of sludge produced have an impact on the cost of managing or disposing of it. The main objective of this study is to identify the opportunity for gasification technology to be applied to reduce the quantity of hazardous sludge before it is sent to final disposal. This preliminary study covers the technical and economic assessment of the application of the gasification process, combining lab-scale experimental results with assumptions based on prior research. The results showed that the process was quite effective in reducing the amount and volume of hazardous sludge, which reduces disposal costs without causing a negative impact on the environment. The reduced mass consists of moisture and volatile carbon, which are decomposed, while the residues are fixed carbon and other minerals that are not decomposed by the thermal process. The economic simulation showed that the project will achieve a payback period of 2.5 years, an IRR of 53% and a B/C ratio of 2.3. A further study at the pilot scale to obtain a more accurate design and calculations is recommended.

  7. Mössbauer study of iron in high oxidation states in the K-Fe-O system

    NASA Astrophysics Data System (ADS)

    Dedushenko, Sergey K.; Perfiliev, Yurii D.; Saprykin, Aleksandr A.

    2008-07-01

    Oxidation of metallic iron by potassium superoxide leads to the formation of ferrate(V). At room temperature this compound is unstable and instantly decomposes via a disproportionation mechanism. Grinding the substance into powder accelerates the decomposition process.

  8. Variations in the microstructure and properties of Mn-Ti multiple-phase steel with high strength under different tempering temperatures

    NASA Astrophysics Data System (ADS)

    Li, Dazhao; Li, Xiaonan; Cui, Tianxie; Li, Jianmin; Wang, Yutian; Fu, Peimao

    2015-03-01

    There has been little research on the tempering of coils, and the variation of the microstructure and properties of steel coils during the tempering process remains unclear. Using thermo-mechanical control process (TMCP) technology, typical Mn-Ti HSLA steel coils with a yield strength of 920 MPa are produced on the 2250 hot rolling production line. Samples are then taken from the coils and tempered at 220 °C, 350 °C, and 620 °C, respectively. After tempering, the strength, ductility and toughness of the samples are tested, and the microstructures are investigated. Precipitates initially emerge inside the ferrite laths and the dislocation density drops. Then, the lath-shaped ferrites begin to gather, and the retained austenite films start to decompose. Finally, the retained austenite films are completely decomposed into coarse, short, rod-shaped precipitates composed of C and Ti compounds. The yield strength increases with increasing tempering temperature due to the pinning effect of the precipitates, while the dislocation density decreases. The yield strength is highest when the steel is tempered at 220 °C because of the pinning of the precipitates to dislocations. The total elongation increases in all samples because of the development of ferrites during tempering. The tensile strength and impact absorbed energy decline because the effect of impeding crack propagation weakens as the retained austenite films completely decompose and the precipitates coarsen. This paper clarifies the influence of different tempering temperatures on the phase transformation characteristics and process of typical Mn-Ti multiphase steels, as well as the resulting variation in their properties.

  9. Interplay between morphology and frequency in lexical access: The case of the base frequency effect

    PubMed Central

    Vannest, Jennifer; Newport, Elissa L.; Newman, Aaron J.; Bavelier, Daphne

    2011-01-01

    A major issue in lexical processing concerns storage and access of lexical items. Here we make use of the base frequency effect to examine this. Specifically, reaction time to morphologically complex words (words made up of base and suffix, e.g., agree+able) typically reflects frequency of the base element (i.e., total frequency of all words in which agree appears) rather than surface word frequency (i.e., frequency of agreeable itself). We term these complex words decomposable. However, a class of words termed whole-word do not show such sensitivity to base frequency (e.g., serenity). Using an event-related MRI design, we exploited the fact that processing low-frequency words increases BOLD activity relative to high frequency ones, and examined effects of base frequency on brain activity for decomposable and whole-word items. Morphologically complex words, half high and half low base frequency, were compared to matched high and low frequency simple monomorphemic words using a lexical decision task. Morphologically complex words increased activation in left inferior frontal and left superior temporal cortices versus simple words. The only area to mirror the behavioral distinction between decomposable and whole-word types was the thalamus. Surprisingly, most frequency-sensitive areas failed to show base frequency effects. This variety of responses to frequency and word type across brain areas supports an integrative view of multiple variables during lexical access, rather than a dichotomy between memory-based access and on-line computation. Lexical access appears best captured as interplay of several neural processes with different sensitivities to various linguistic factors including frequency and morphological complexity. PMID:21167136

  10. How to decompose arbitrary continuous-variable quantum operations.

    PubMed

    Sefi, Seckin; van Loock, Peter

    2011-10-21

    We present a general, systematic, and efficient method for decomposing any given exponential operator of bosonic mode operators, describing an arbitrary multimode Hamiltonian evolution, into a set of universal unitary gates. Although our approach is mainly oriented towards continuous-variable quantum computation, it may be used more generally whenever quantum states are to be transformed deterministically, e.g., in quantum control, discrete-variable quantum computation, or Hamiltonian simulation. We illustrate our scheme by presenting decompositions for various nonlinear Hamiltonians including quartic Kerr interactions. Finally, we conclude with two potential experiments utilizing offline-prepared optical cubic states and homodyne detections, in which quantum information is processed optically or in an atomic memory using quadratic light-atom interactions. © 2011 American Physical Society

  11. Parallelized multi–graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy

    PubMed Central

    Tankam, Patrice; Santhanam, Anand P.; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P.

    2014-01-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6  mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing. PMID:24695868

  12. Parallelized multi-graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy.

    PubMed

    Tankam, Patrice; Santhanam, Anand P; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P

    2014-07-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6  mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing.

  13. Decomposed direct matrix inversion for fast non-cartesian SENSE reconstructions.

    PubMed

    Qian, Yongxian; Zhang, Zhenghui; Wang, Yi; Boada, Fernando E

    2006-08-01

    A new k-space direct matrix inversion (DMI) method is proposed here to accelerate non-Cartesian SENSE reconstructions. In this method a global k-space matrix equation is established on basic MRI principles, and the inverse of the global encoding matrix is found from a set of local matrix equations by taking advantage of the small extension of k-space coil maps. The DMI algorithm's efficiency is achieved by reloading the precalculated global inverse when the coil maps and trajectories remain unchanged, such as in dynamic studies. Phantom and human subject experiments were performed on a 1.5T scanner with a standard four-channel phased-array cardiac coil. Interleaved spiral trajectories were used to collect fully sampled and undersampled 3D raw data. The equivalence of the global k-space matrix equation to its image-space version was verified via conjugate gradient (CG) iterative algorithms on a 2x undersampled phantom and numerical-model data sets. When applied to the 2x undersampled phantom and human-subject raw data, the decomposed DMI method produced images with small errors (< or = 3.9%) relative to the reference images obtained from the fully-sampled data, at a rate of 2 s per slice (excluding 4 min for precalculating the global inverse at an image size of 256 x 256). The DMI method may be useful for noise evaluations in parallel coil designs, dynamic MRI, and 3D sodium MRI with fixed coils and trajectories. Copyright 2006 Wiley-Liss, Inc.

  14. A high-speed linear algebra library with automatic parallelism

    NASA Technical Reports Server (NTRS)

    Boucher, Michael L.

    1994-01-01

    Parallel or distributed processing is key to getting the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely limited even though there are numerous computationally demanding programs that would significantly benefit from application of parallel processing. This paper describes DSSLIB, which is a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.

  15. Neural Parallel Engine: A toolbox for massively parallel neural signal processing.

    PubMed

    Tam, Wing-Kin; Yang, Zhi

    2018-05-01

    Large-scale neural recordings provide detailed information on neuronal activities and can help elicit the underlying neural mechanisms of the brain. However, the computational burden is also formidable when we try to process the huge data stream generated by such recordings. In this study, we report the development of Neural Parallel Engine (NPE), a toolbox for massively parallel neural signal processing on graphical processing units (GPUs). It offers a selection of the most commonly used routines in neural signal processing such as spike detection and spike sorting, including advanced algorithms such as exponential-component-power-component (EC-PC) spike detection and binary pursuit spike sorting. We also propose a new method for detecting peaks in parallel through a parallel compact operation. Our toolbox is able to offer a 5× to 110× speedup compared with its CPU counterparts depending on the algorithms. A user-friendly MATLAB interface is provided to allow easy integration of the toolbox into existing workflows. Previous efforts on GPU neural signal processing only focus on a few rudimentary algorithms, are not well-optimized and often do not provide a user-friendly programming interface to fit into existing workflows. There is a strong need for a comprehensive toolbox for massively parallel neural signal processing. A new toolbox for massively parallel neural signal processing has been created. It can offer significant speedup in processing signals from large-scale recordings up to thousands of channels. Copyright © 2018 Elsevier B.V. All rights reserved.
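
    Illustrative sketch (not the NPE code): the parallel compact operation for peak detection can be pictured in vectorized form: every sample is tested against its neighbours and a threshold simultaneously, and the sparse hits are then compacted into a dense index list. The synthetic trace and threshold are assumptions for the demo.

        import numpy as np

        def detect_peaks(signal, threshold):
            # Data-parallel peak test: each interior sample is compared with its
            # neighbours at once (one GPU thread per sample in the real setting).
            s = np.asarray(signal)
            is_peak = np.zeros(s.shape, dtype=bool)
            is_peak[1:-1] = (s[1:-1] > s[:-2]) & (s[1:-1] >= s[2:]) & (s[1:-1] > threshold)
            # "Compaction": gather the sparse peak positions into a dense list.
            return np.flatnonzero(is_peak)

        t = np.linspace(0.0, 1.0, 1000)
        trace = np.sin(2 * np.pi * 5 * t) + \
                0.1 * np.random.default_rng(1).standard_normal(t.size)
        print(detect_peaks(trace, threshold=0.8))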

  16. Anatomically constrained neural network models for the categorization of facial expression

    NASA Astrophysics Data System (ADS)

    McMenamin, Brenton W.; Assadi, Amir H.

    2004-12-01

    In humans, the ability to recognize facial expressions relies on the amygdala, which uses parallel processing streams to identify expressions quickly and accurately. Additionally, it is possible that a feedback mechanism may play a role in this process as well. A model with a similar parallel structure and feedback mechanisms could be used to improve current facial recognition algorithms, for which varied expressions are a source of error. An anatomically constrained artificial neural-network model was created that uses this parallel processing architecture and feedback to categorize facial expressions. The presence of a feedback mechanism was not found to significantly improve performance for models with parallel architecture. However the use of parallel processing streams significantly improved accuracy over a similar network that did not have parallel architecture. Further investigation is necessary to determine the benefits of using parallel streams and feedback mechanisms in more advanced object recognition tasks.

  17. Anatomically constrained neural network models for the categorization of facial expression

    NASA Astrophysics Data System (ADS)

    McMenamin, Brenton W.; Assadi, Amir H.

    2005-01-01

    In humans, the ability to recognize facial expressions relies on the amygdala, which uses parallel processing streams to identify expressions quickly and accurately. Additionally, it is possible that a feedback mechanism may play a role in this process as well. A model with a similar parallel structure and feedback mechanisms could be used to improve current facial recognition algorithms, for which varied expressions are a source of error. An anatomically constrained artificial neural-network model was created that uses this parallel processing architecture and feedback to categorize facial expressions. The presence of a feedback mechanism was not found to significantly improve performance for models with parallel architecture. However the use of parallel processing streams significantly improved accuracy over a similar network that did not have parallel architecture. Further investigation is necessary to determine the benefits of using parallel streams and feedback mechanisms in more advanced object recognition tasks.

  18. Parallel processing data network of master and slave transputers controlled by a serial control network

    DOEpatents

    Crosetto, D.B.

    1996-12-31

    The present device provides for a dynamically configurable communication network having a multi-processor parallel processing system having a serial communication network and a high speed parallel communication network. The serial communication network is used to disseminate commands from a master processor to a plurality of slave processors to effect communication protocol, to control transmission of high density data among nodes and to monitor each slave processor's status. The high speed parallel processing network is used to effect the transmission of high density data among nodes in the parallel processing system. Each node comprises a transputer, a digital signal processor, a parallel transfer controller, and two three-port memory devices. A communication switch within each node connects it to a fast parallel hardware channel through which all high density data arrives or leaves the node. 6 figs.

  19. Parallel processing data network of master and slave transputers controlled by a serial control network

    DOEpatents

    Crosetto, Dario B.

    1996-01-01

    The present device provides for a dynamically configurable communication network having a multi-processor parallel processing system having a serial communication network and a high speed parallel communication network. The serial communication network is used to disseminate commands from a master processor (100) to a plurality of slave processors (200) to effect communication protocol, to control transmission of high density data among nodes and to monitor each slave processor's status. The high speed parallel processing network is used to effect the transmission of high density data among nodes in the parallel processing system. Each node comprises a transputer (104), a digital signal processor (114), a parallel transfer controller (106), and two three-port memory devices. A communication switch (108) within each node (100) connects it to a fast parallel hardware channel (70) through which all high density data arrives or leaves the node.

  20. Hydrogen production from water using copper and barium hydroxide

    DOEpatents

    Bamberger, Carlos E.; Richardson, deceased, Donald M.

    1979-01-01

    A process for producing hydrogen comprises the step of reacting metallic Cu with Ba(OH)₂ in the presence of steam to produce hydrogen and BaCu₂O₂. The BaCu₂O₂ is reacted with H₂O to form Cu₂O and a Ba(OH)₂ product for recycle to the initial reaction step. Cu can be obtained from the Cu₂O product by several methods. In one embodiment the Cu₂O is reacted with HF solution to provide CuF₂ and Cu. The CuF₂ is reacted with H₂O to provide CuO and HF. CuO is decomposed to Cu₂O and O₂. The HF, Cu and Cu₂O are recycled. In another embodiment the Cu₂O is reacted with aqueous H₂SO₄ solution to provide CuSO₄ solution and Cu. The CuSO₄ is decomposed to CuO and SO₃. The CuO is decomposed to form Cu₂O and O₂. The SO₃ is dissolved to form H₂SO₄. H₂SO₄, Cu and Cu₂O are recycled. In another embodiment Cu₂O is decomposed electrolytically to Cu and O₂. In another aspect of the invention, Cu is recovered from CuO by the steps of decomposing CuO to Cu₂O and O₂, reacting the Cu₂O with aqueous HF solution to produce Cu and CuF₂, reacting the CuF₂ with H₂O to form CuO and HF, and recycling the CuO and HF to previous reaction steps.

  1. Preparation of cermets

    DOEpatents

    Morgan, Chester S.

    1978-01-01

    Cermets are produced by the process of forming a physical mixture of a ceramic powder material with an elemental metal precursor compound and by decomposing the elemental metal precursor compound within the mixture. The decomposition step may be carried out either prior to or during a forming and densification step.

  2. Super and parallel computers and their impact on civil engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamat, M.P.

    1986-01-01

    This book presents the papers given at a conference on the use of supercomputers in civil engineering. Topics considered at the conference included solving nonlinear equations on a hypercube, a custom architectured parallel processing system, distributed data processing, algorithms, computer architecture, parallel processing, vector processing, computerized simulation, and cost benefit analysis.

  3. [Drying characteristics and apparent change of sludge granules during drying].

    PubMed

    Ma, Xue-Wen; Weng, Huan-Xin; Zhang, Jin-Jun

    2011-08-01

    Three different weight grades of sludge granules (2.5, 5, and 10 g) were dried at constant temperatures of 100, 200, 300, 400 and 500 degrees C. The characteristics of weight loss and the change of apparent form during sludge drying were then analyzed. Results showed that there were three stages during sludge drying at 100-200 degrees C: an acceleration phase, a constant-rate phase, and a falling-rate phase. At 300-500 degrees C there was no constant-rate phase, but because many cracks were generated at the sludge surface, average drying rates were still high. There was a quadratic nonlinear relationship between average drying rate and drying temperature. At 100-200 degrees C, the drying processes of different weight grades of sludge granules were similar. At 300-500 degrees C, the drying processes of the same weight grade of sludge granules were similar. Little organic matter decomposed before the sludge started burning at 100-300 degrees C, whereas some organic matter began to decompose at the beginning of drying at 400-500 degrees C.

  4. COMDECOM: predicting the lifetime of screening compounds in DMSO solution.

    PubMed

    Zitha-Bovens, Emrin; Maas, Peter; Wife, Dick; Tijhuis, Johan; Hu, Qian-Nan; Kleinöder, Thomas; Gasteiger, Johann

    2009-06-01

    The technological evolution of the 1990s in both combinatorial chemistry and high-throughput screening created the demand for rapid access to the compound deck to support the screening process. The common strategy within the pharmaceutical industry is to store the screening library in DMSO solution. Several studies have shown that a percentage of these compounds decompose in solution, varying from a few percent of the total to a substantial part of the library. In the COMDECOM (COMpound DECOMposition) project, the compound stability of screening compounds in DMSO solution is monitored in an accelerated thermal, hydrolytic, and oxidative decomposition program. A large database with stability data is collected, and from this database, a predictive model is being developed. The aim of this program is to build an algorithm that can flag compounds that are likely to decompose; this information is considered to be of utmost importance, e.g., in the compound acquisition process, when evaluating screening results of library compounds, and in the determination of optimal storage conditions.

  5. Parallel processing architecture for computing inverse differential kinematic equations of the PUMA arm

    NASA Technical Reports Server (NTRS)

    Hsia, T. C.; Lu, G. Z.; Han, W. H.

    1987-01-01

    In advanced robot control problems, on-line computation of the inverse Jacobian solution is frequently required. Parallel processing architecture is an effective way to reduce computation time. A parallel processing architecture is developed for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipeline/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units. Computation time is thus significantly reduced.

  6. Performance evaluation of canny edge detection on a tiled multicore architecture

    NASA Astrophysics Data System (ADS)

    Brethorst, Andrew Z.; Desai, Nehal; Enright, Douglas P.; Scrofano, Ronald

    2011-01-01

    In the last few years, a variety of multicore architectures have been used to parallelize image processing applications. In this paper, we focus on assessing the parallel speed-ups of different Canny edge detection parallelization strategies on the Tile64, a tiled multicore architecture developed by the Tilera Corporation. Included in these strategies are different ways Canny edge detection can be parallelized, as well as differences in data management. The two parallelization strategies examined were loop-level parallelism and domain decomposition. Loop-level parallelism is achieved through the use of OpenMP, and it is capable of parallelization across the range of values over which a loop iterates. Domain decomposition is the process of breaking down an image into subimages, where each subimage is processed independently, in parallel. The results of the two strategies show that for the same number of threads, programmer-implemented domain decomposition exhibits higher speed-ups than the compiler-managed loop-level parallelism implemented with OpenMP.
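
    Illustrative sketch (not the Tile64/OpenMP code): the domain-decomposition strategy, splitting the image into strips that are processed independently, can be mimicked with a thread pool, using a one-row halo so gradients at strip boundaries stay correct; a plain gradient magnitude stands in for the full Canny pipeline, and the strip count is an arbitrary assumption.

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        def edge_strip(image, row0, row1):
            # Gradient-magnitude "edges" for one horizontal strip, computed with
            # a one-row halo so differences at strip boundaries are still valid.
            lo, hi = max(row0 - 1, 0), min(row1 + 1, image.shape[0])
            gy, gx = np.gradient(image[lo:hi].astype(float))
            mag = np.hypot(gx, gy)
            return mag[row0 - lo: row1 - lo]

        def edges_decomposed(image, n_strips=4):
            bounds = np.linspace(0, image.shape[0], n_strips + 1, dtype=int)
            with ThreadPoolExecutor(max_workers=n_strips) as pool:
                parts = pool.map(lambda b: edge_strip(image, b[0], b[1]),
                                 zip(bounds[:-1], bounds[1:]))
            return np.vstack(list(parts))

        img = np.zeros((256, 256))
        img[96:160, 96:160] = 1.0  # a bright square has strong edges at its border
        print(edges_decomposed(img).shape, edges_decomposed(img).max())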

  7. Use of parallel computing in mass processing of laser data

    NASA Astrophysics Data System (ADS)

    Będkowski, J.; Bratuś, R.; Prochaska, M.; Rzonca, A.

    2015-12-01

    The first part of the paper includes a description of the rules used to generate the algorithm needed for the purpose of parallel computing and also discusses the origins of the idea of research on the use of graphics processors in large scale processing of laser scanning data. The next part of the paper includes the results of an efficiency assessment performed for an array of different processing options, all of which were substantially accelerated with parallel computing. The processing options were divided into the generation of orthophotos using point clouds, coloring of point clouds, transformations, and the generation of a regular grid, as well as advanced processes such as the detection of planes and edges, point cloud classification, and the analysis of data for the purpose of quality control. Most algorithms had to be formulated from scratch in the context of the requirements of parallel computing. A few of the algorithms were based on existing technology developed by the Dephos Software Company and then adapted to parallel computing in the course of this research study. Processing time was determined for each process employed for a typical quantity of data processed, which helped confirm the high efficiency of the solutions proposed and the applicability of parallel computing to the processing of laser scanning data. The high efficiency of parallel computing yields new opportunities in the creation and organization of processing methods for laser scanning data.

  8. Assembly planning based on subassembly extraction

    NASA Technical Reports Server (NTRS)

    Lee, Sukhan; Shin, Yeong Gil

    1990-01-01

    A method is presented for the automatic determination of assembly partial orders from a liaison graph representation of an assembly through the extraction of preferred subassemblies. In particular, the authors show how to select a set of tentative subassemblies by decomposing a liaison graph into a set of subgraphs based on feasibility and difficulty of disassembly, how to evaluate each of the tentative subassemblies in terms of assembly cost using the subassembly selection indices, and how to construct a hierarchical partial order graph (HPOG) as an assembly plan. The method provides an approach to assembly planning by identifying spatial parallelism in assembly as a means of constructing temporal relationships among assembly operations and solves the problem of finding a cost-effective assembly plan in a flexible environment. A case study of the assembly planning of a mechanical assembly is presented.

  9. Neurocomputing strategies in decomposition based structural design

    NASA Technical Reports Server (NTRS)

    Szewczyk, Z.; Hajela, P.

    1993-01-01

    The present paper explores the applicability of neurocomputing strategies in decomposition based structural optimization problems. It is shown that the modeling capability of a backpropagation neural network can be used to detect weak couplings in a system, and to effectively decompose it into smaller, more tractable, subsystems. When such partitioning of a design space is possible, parallel optimization can be performed in each subsystem, with a penalty term added to its objective function to account for constraint violations in all other subsystems. Dependencies among subsystems are represented in terms of global design variables, and a neural network is used to map the relations between these variables and all subsystem constraints. A vector quantization technique, referred to as a z-Network, can effectively be used for this purpose. The approach is illustrated with applications to minimum weight sizing of truss structures with multiple design constraints.

  10. RPYFMM: Parallel adaptive fast multipole method for Rotne-Prager-Yamakawa tensor in biomolecular hydrodynamics simulations

    NASA Astrophysics Data System (ADS)

    Guan, W.; Cheng, X.; Huang, J.; Huber, G.; Li, W.; McCammon, J. A.; Zhang, B.

    2018-06-01

    RPYFMM is a software package for the efficient evaluation of the potential field governed by the Rotne-Prager-Yamakawa (RPY) tensor interactions in biomolecular hydrodynamics simulations. In our algorithm, the RPY tensor is decomposed as a linear combination of four Laplace interactions, each of which is evaluated using the adaptive fast multipole method (FMM) (Greengard and Rokhlin, 1997) where the exponential expansions are applied to diagonalize the multipole-to-local translation operators. RPYFMM offers a unified execution on both shared and distributed memory computers by leveraging the DASHMM library (DeBuhr et al., 2016, 2018). Preliminary numerical results show that the interactions for a molecular system of 15 million particles (beads) can be computed within one second on a Cray XC30 cluster using 12,288 cores, while achieving approximately 54% strong-scaling efficiency.
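
    For orientation, the quantity the fast multipole method accelerates here is the dense sum of pairwise RPY mobilities. A direct O(N^2) reference evaluation of the standard far-field Rotne-Prager-Yamakawa block (valid for bead separations r >= 2a; overlapping beads are not handled) might look like the sketch below. The constants and parameter values are illustrative assumptions, and RPYFMM itself evaluates this kernel through four Laplace FMM calls rather than directly.

      import numpy as np

      def rpy_block(r_vec, a=1.0, eta=1.0):
          """Far-field RPY mobility block between two beads of radius a,
          separation vector r_vec with |r_vec| >= 2a (overlap not handled)."""
          r = np.linalg.norm(r_vec)
          rhat = r_vec / r
          outer = np.outer(rhat, rhat)
          pref = 1.0 / (8.0 * np.pi * eta * r)
          return pref * ((1.0 + 2.0 * a**2 / (3.0 * r**2)) * np.eye(3)
                         + (1.0 - 2.0 * a**2 / r**2) * outer)

      def rpy_matvec_direct(pos, forces, a=1.0, eta=1.0):
          """O(N^2) reference: velocity induced on each bead by all others."""
          n = len(pos)
          vel = forces / (6.0 * np.pi * eta * a)      # self term (Stokes drag)
          for i in range(n):
              for j in range(n):
                  if i != j:
                      vel[i] += rpy_block(pos[i] - pos[j], a, eta) @ forces[j]
          return vel

      pos = np.random.rand(50, 3) * 10.0    # toy bead positions in a 10x10x10 box
      F = np.random.randn(50, 3)            # toy forces on each bead
      v = rpy_matvec_direct(pos, F, a=0.1)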

  11. Microstructure of InxGa1-xN nanorods grown by molecular beam epitaxy

    NASA Astrophysics Data System (ADS)

    Webster, R. F.; Soundararajah, Q. Y.; Griffiths, I. J.; Cherns, D.; Novikov, S. V.; Foxon, C. T.

    2015-11-01

    Transmission electron microscopy is used to examine the structure and composition of InxGa1-xN nanorods grown by plasma-assisted molecular beam epitaxy. The results confirm a core-shell structure with an In-rich core and In-poor shell resulting from axial and lateral growth sectors respectively. Atomic resolution mapping by energy-dispersive x-ray microanalysis and high angle annular dark field imaging show that both the core and the shell are decomposed into Ga-rich and In-rich platelets parallel to their respective growth surfaces. It is argued that platelet formation occurs at the surfaces, through the lateral expansion of surface steps. Studies of nanorods with graded composition show that decomposition ceases for x ≥ 0.8 and the ratio of growth rates, shell:core, decreases with increasing In concentration.

  12. A connectionist model for diagnostic problem solving

    NASA Technical Reports Server (NTRS)

    Peng, Yun; Reggia, James A.

    1989-01-01

    A competition-based connectionist model for solving diagnostic problems is described. The problems considered are computationally difficult in that (1) multiple disorders may occur simultaneously and (2) a global optimum in the space exponential to the total number of possible disorders is sought as a solution. The diagnostic problem is treated as a nonlinear optimization problem, and global optimization criteria are decomposed into local criteria governing node activation updating in the connectionist model. Nodes representing disorders compete with each other to account for each individual manifestation, yet complement each other to account for all manifestations through parallel node interactions. When equilibrium is reached, the network settles into a locally optimal state. Three randomly generated examples of diagnostic problems, each of which has 1024 cases, were tested, and the decomposition plus competition plus resettling approach yielded very high accuracy.

  13. A New GPU-Enabled MODTRAN Thermal Model for the PLUME TRACKER Volcanic Emission Analysis Toolkit

    NASA Astrophysics Data System (ADS)

    Acharya, P. K.; Berk, A.; Guiang, C.; Kennett, R.; Perkins, T.; Realmuto, V. J.

    2013-12-01

    Real-time quantification of volcanic gaseous and particulate releases is important for (1) recognizing rapid increases in SO2 gaseous emissions which may signal an impending eruption; (2) characterizing ash clouds to enable safe and efficient commercial aviation; and (3) quantifying the impact of volcanic aerosols on climate forcing. The Jet Propulsion Laboratory (JPL) has developed state-of-the-art algorithms, embedded in their analyst-driven Plume Tracker toolkit, for performing SO2, NH3, and CH4 retrievals from remotely sensed multi-spectral Thermal InfraRed spectral imagery. While Plume Tracker provides accurate results, it typically requires extensive analyst time. A major bottleneck in this processing is the relatively slow but accurate FORTRAN-based MODTRAN atmospheric and plume radiance model, developed by Spectral Sciences, Inc. (SSI). To overcome this bottleneck, SSI in collaboration with JPL, is porting these slow thermal radiance algorithms onto massively parallel, relatively inexpensive and commercially-available GPUs. This paper discusses SSI's efforts to accelerate the MODTRAN thermal emission algorithms used by Plume Tracker. Specifically, we are developing a GPU implementation of the Curtis-Godson averaging and the Voigt in-band transmittances from near line center molecular absorption, which comprise the major computational bottleneck. The transmittance calculations were decomposed into separate functions, individually implemented as GPU kernels, and tested for accuracy and performance relative to the original CPU code. Speedup factors of 14 to 30× were realized for individual processing components on an NVIDIA GeForce GTX 295 graphics card with no loss of accuracy. Due to the separate host (CPU) and device (GPU) memory spaces, a redesign of the MODTRAN architecture was required to ensure efficient data transfer between host and device, and to facilitate high parallel throughput. Currently, we are incorporating the separate GPU kernels into a single function for calculating the Voigt in-band transmittance, and subsequently for integration into the re-architectured MODTRAN6 code. Our overall objective is that by combining the GPU processing with more efficient Plume Tracker retrieval algorithms, a 100-fold increase in the computational speed will be realized. Since the Plume Tracker runs on Windows-based platforms, the GPU-enhanced MODTRAN6 will be packaged as a DLL. We do however anticipate that the accelerated option will be made available to the general MODTRAN community through an application programming interface (API).

  14. [Spectral characteristics of dissolved organic matter released during the metabolic process of small medusa].

    PubMed

    Guo, Dong-Hui; Yi, Yue-Yuan; Zhao, Lei; Guo, Wei-Dong

    2012-06-01

    The metabolic processes of jellyfish can produce dissolved organic matter (DOM) which will influence the functioning of the aquatic ecosystems, yet the optical properties of DOM released by jellyfish are unknown. Here we report the absorption and fluorescence properties of DOM released by the medusa species Blackfordia virginica during a 24 h incubation experiment. Compared with the control group, an obvious increase in the concentrations of dissolved organic carbon (DOC), absorption coefficient (a280) and total dissolved nitrogen (TDN) was observed in the incubation group. This clearly demonstrated the release of DOM, chromophoric DOM (CDOM) and dissolved nutrients by B. virginica, which had been fed sufficient Artemia sp. before the experiment. The increase in spectral slope ratio (SR) and decrease in humification index (HIX) indicated that the released DOM was less-humified and had relatively lower molecular weight. Parallel factor analysis (PARAFAC) decomposed the fluorescence matrices of DOM into three humic-like components (C1-C3) and one protein-like component (C4). The Fmax of two components (C2: < 250, 295/386 nm; C4: 275/334 nm) with the emission wavelength < 400 nm increased significantly during the metabolic process of B. virginica. However, the Fmax of the other two components with the emission wavelength > 400 nm showed little change. Thus, we suggested a zooplankton index (ZIX) to trace and characterize the DOM excreted by metabolic activity of zooplankton, which is calculated as the ratio of the sum of Fmax of all fluorescence components with the emission wavelength < 400 nm to the sum of Fmax of the other components with the emission wavelength > 400 nm.
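
    The zooplankton index (ZIX) proposed at the end of the abstract is a simple ratio of PARAFAC component maxima. A minimal sketch of the calculation is given below; the component labels and emission peaks follow the abstract, but the Fmax intensities are invented.

      # Fmax of PARAFAC components keyed by emission wavelength (nm); intensities are invented.
      components = {
          "C1": {"emission": 420, "fmax": 0.12},   # humic-like, emission > 400 nm
          "C2": {"emission": 386, "fmax": 0.35},   # humic-like, emission < 400 nm
          "C3": {"emission": 450, "fmax": 0.10},   # humic-like, emission > 400 nm
          "C4": {"emission": 334, "fmax": 0.40},   # protein-like, emission < 400 nm
      }

      below_400 = sum(c["fmax"] for c in components.values() if c["emission"] < 400)
      above_400 = sum(c["fmax"] for c in components.values() if c["emission"] >= 400)
      zix = below_400 / above_400
      print(f"ZIX = {zix:.2f}")   # (0.35 + 0.40) / (0.12 + 0.10) ≈ 3.41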

  15. Parallelized CCHE2D flow model with CUDA Fortran on Graphics Process Units

    USDA-ARS?s Scientific Manuscript database

    This paper presents the CCHE2D implicit flow model parallelized using CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized implicit Alternating Direction Implicit (ADI) solver using Parallel Cyclic Reduction (PCR) algorithm on GPU is developed and tested. This solve...
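
    Although the abstract is truncated, the Parallel Cyclic Reduction step it names is a standard tridiagonal solver in which every equation is updated independently at each reduction level, which is what maps well onto GPU threads. The following serial NumPy emulation is only a sketch of the algorithm, not the CCHE2D CUDA Fortran solver.

      import numpy as np

      def pcr_solve(a, b, c, d):
          """Solve a tridiagonal system by parallel cyclic reduction.
          a: sub-diagonal, b: diagonal, c: super-diagonal, d: right-hand side.
          Every equation i is updated independently at each level, so on a GPU
          each i would be one thread; here the inner loop is simply serial."""
          a, b, c, d = (np.array(x, dtype=float) for x in (a, b, c, d))
          n = len(b)
          s = 1
          while s < n:
              na, nb, nc, nd = a.copy(), b.copy(), c.copy(), d.copy()
              for i in range(n):
                  lo, hi = i - s, i + s
                  alpha = -a[i] / b[lo] if lo >= 0 else 0.0
                  gamma = -c[i] / b[hi] if hi < n else 0.0
                  nb[i] = (b[i] + (alpha * c[lo] if lo >= 0 else 0.0)
                                + (gamma * a[hi] if hi < n else 0.0))
                  nd[i] = (d[i] + (alpha * d[lo] if lo >= 0 else 0.0)
                                + (gamma * d[hi] if hi < n else 0.0))
                  na[i] = alpha * a[lo] if lo >= 0 else 0.0
                  nc[i] = gamma * c[hi] if hi < n else 0.0
              a, b, c, d = na, nb, nc, nd
              s *= 2
          return d / b   # after log2(n) levels each equation is decoupled

      # quick check against a dense solve
      n = 8
      a = np.r_[0.0, -np.ones(n - 1)]       # a[0] unused
      c = np.r_[-np.ones(n - 1), 0.0]       # c[-1] unused
      b = 2.0 * np.ones(n)
      d = np.arange(1.0, n + 1)
      A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
      assert np.allclose(pcr_solve(a, b, c, d), np.linalg.solve(A, d))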

  16. Massively parallel information processing systems for space applications

    NASA Technical Reports Server (NTRS)

    Schaefer, D. H.

    1979-01-01

    NASA is developing massively parallel systems for ultra high speed processing of digital image data collected by satellite borne instrumentation. Such systems contain thousands of processing elements. Work is underway on the design and fabrication of the 'Massively Parallel Processor', a ground computer containing 16,384 processing elements arranged in a 128 x 128 array. This computer uses existing technology. Advanced work includes the development of semiconductor chips containing thousands of feedthrough paths. Massively parallel image analog to digital conversion technology is also being developed. The goal is to provide compact computers suitable for real-time onboard processing of images.

  17. Parallel log structured file system collective buffering to achieve a compact representation of scientific and/or dimensional data

    DOEpatents

    Grider, Gary A.; Poole, Stephen W.

    2015-09-01

    Collective buffering and data pattern solutions are provided for storage, retrieval, and/or analysis of data in a collective parallel processing environment. For example, a method can be provided for data storage in a collective parallel processing environment. The method comprises receiving data to be written for a plurality of collective processes within a collective parallel processing environment, extracting a data pattern for the data to be written for the plurality of collective processes, generating a representation describing the data pattern, and saving the data and the representation.

  18. schwimmbad: A uniform interface to parallel processing pools in Python

    NASA Astrophysics Data System (ADS)

    Price-Whelan, Adrian M.; Foreman-Mackey, Daniel

    2017-09-01

    Many scientific and computing problems require doing some calculation on all elements of some data set. If the calculations can be executed in parallel (i.e. without any communication between calculations), these problems are said to be perfectly parallel. On computers with multiple processing cores, these tasks can be distributed and executed in parallel to greatly improve performance. A common paradigm for handling these distributed computing problems is to use a processing "pool": the "tasks" (the data) are passed in bulk to the pool, and the pool handles distributing the tasks to a number of worker processes when available. schwimmbad provides a uniform interface to parallel processing pools and enables switching easily between local development (e.g., serial processing or with multiprocessing) and deployment on a cluster or supercomputer (via, e.g., MPI or JobLib).
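
    A minimal sketch of the usage pattern described above is given below; the worker function and task list are invented, and the exact pool classes and constructor arguments should be checked against the schwimmbad documentation. The point of the uniform interface is that swapping SerialPool for MultiPool (or MPIPool on a cluster) leaves the map call unchanged.

      from schwimmbad import MultiPool, SerialPool

      def worker(task):
          """Any picklable function of one task element (perfectly parallel)."""
          a, b = task
          return a * a + b * b

      tasks = [(i, i + 1) for i in range(16)]

      if __name__ == "__main__":
          # local development: serial pool
          pool = SerialPool()
          serial_results = list(pool.map(worker, tasks))
          pool.close()

          # same call signature on a multiprocessing pool; swapping in
          # schwimmbad's MPIPool for a cluster run changes only this line
          pool = MultiPool(processes=4)
          parallel_results = list(pool.map(worker, tasks))
          pool.close()

          assert parallel_results == serial_results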

  19. Carboxylic acid sorption regeneration process

    DOEpatents

    King, C. Judson; Poole, Loree J.

    1995-01-01

    Carboxylic acids are sorbed from aqueous feedstocks into an organic liquid phase or onto a solid adsorbent. The acids are freed from the sorbent phase by treating it with aqueous alkylamine thus forming an alkylammonium carboxylate which is dewatered and decomposed to the desired carboxylic acid and the alkylamine.

  20. Parallel Signal Processing and System Simulation using aCe

    NASA Technical Reports Server (NTRS)

    Dorband, John E.; Aburdene, Maurice F.

    2003-01-01

    Recently, networked and cluster computation have become very popular for both signal processing and system simulation. A new language, aCe, is ideally suited for parallel signal processing applications and system simulation since it allows the programmer to explicitly express the computations that can be performed concurrently. In addition, this new C-based parallel language for architecture-adaptive programming allows programmers to implement algorithms and system simulation applications on parallel architectures with the assurance that future parallel architectures will be able to run their applications with a minimum of modification. In this paper, we focus on some fundamental features of aCe and present a signal processing application (FFT).
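
    The FFT used as the example application lends itself to this style of explicit concurrency because the decimation-in-time split produces two independent half-size transforms. The plain-Python recursion below is only a sketch of that structure for power-of-two lengths; it says nothing about how aCe itself expresses the parallelism.

      import cmath

      def fft(x):
          """Radix-2 decimation-in-time FFT; len(x) must be a power of two.
          The two recursive calls are independent and could run concurrently."""
          n = len(x)
          if n == 1:
              return list(x)
          even = fft(x[0::2])
          odd = fft(x[1::2])
          out = [0j] * n
          for k in range(n // 2):
              tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
              out[k] = even[k] + tw
              out[k + n // 2] = even[k] - tw
          return out

      # quick sanity check against a direct DFT of a small signal
      sig = [1.0, 2.0, 0.0, -1.0]
      dft = [sum(sig[t] * cmath.exp(-2j * cmath.pi * k * t / 4) for t in range(4))
             for k in range(4)]
      assert all(abs(a - b) < 1e-9 for a, b in zip(fft(sig), dft))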

  1. Parallel processing in finite element structural analysis

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1987-01-01

    A brief review is made of the fundamental concepts and basic issues of parallel processing. Discussion focuses on parallel numerical algorithms, performance evaluation of machines and algorithms, and parallelism in finite element computations. A computational strategy is proposed for maximizing the degree of parallelism at different levels of the finite element analysis process including: 1) formulation level (through the use of mixed finite element models); 2) analysis level (through additive decomposition of the different arrays in the governing equations into the contributions to a symmetrized response plus correction terms); 3) numerical algorithm level (through the use of operator splitting techniques and application of iterative processes); and 4) implementation level (through the effective combination of vectorization, multitasking and microtasking, whenever available).

  2. Connectionism, parallel constraint satisfaction processes, and gestalt principles: (re) introducing cognitive dynamics to social psychology.

    PubMed

    Read, S J; Vanman, E J; Miller, L C

    1997-01-01

    We argue that recent work in connectionist modeling, in particular the parallel constraint satisfaction processes that are central to many of these models, has great importance for understanding issues of both historical and current concern for social psychologists. We first provide a brief description of connectionist modeling, with particular emphasis on parallel constraint satisfaction processes. Second, we examine the tremendous similarities between parallel constraint satisfaction processes and the Gestalt principles that were the foundation for much of modern social psychology. We propose that parallel constraint satisfaction processes provide a computational implementation of the principles of Gestalt psychology that were central to the work of such seminal social psychologists as Asch, Festinger, Heider, and Lewin. Third, we then describe how parallel constraint satisfaction processes have been applied to three areas that were key to the beginnings of modern social psychology and remain central today: impression formation and causal reasoning, cognitive consistency (balance and cognitive dissonance), and goal-directed behavior. We conclude by discussing implications of parallel constraint satisfaction principles for a number of broader issues in social psychology, such as the dynamics of social thought and the integration of social information within the narrow time frame of social interaction.

  3. Ecosystem and decomposer effects on litter dynamics along an old field to old-growth forest successional gradient

    NASA Astrophysics Data System (ADS)

    Mayer, Paul M.

    2008-03-01

    Identifying the biotic (e.g. decomposers, vegetation) and abiotic (e.g. temperature, moisture) mechanisms controlling litter decomposition is key to understanding ecosystem function, especially where variation in ecosystem structure due to successional processes may alter the strength of these mechanisms. To identify these controls and feedbacks, I measured mass loss and N flux in herbaceous, leaf, and wood litter along a successional gradient of ecosystem types (old field, transition forest, old-growth forest) while manipulating detritivore access to litter. Ecosystem type, litter type, and decomposers contributed directly and interactively to decomposition. Litter mass loss and N accumulation was higher while litter C:N remained lower in old-growth forests than in either old fields or transition forest. Old-growth forests influenced litter dynamics via microclimate (coolest and wettest) but also, apparently, through a decomposer community adapted to consuming the large standing stocks of leaf litter, as indicated by rapid leaf litter loss. In all ecosystem types, mass loss of herbaceous litter was greater than leaf litter which, in turn was greater than wood. However, net N loss from wood litter was faster than expected, suggesting localized N flux effects of wood litter. Restricting detritivore access to litter reduced litter mass loss and slowed the accumulation of N in litter, suggesting that macro-detritivores affect both physical and chemical characteristics of litter through selective grazing. These data suggest that the distinctive litter loss rates and efficient N cycling observed in old-growth forest ecosystems are not likely to be realized soon after old fields are restored to forested ecosystems.

  4. Using Parallel Processing for Problem Solving.

    DTIC Science & Technology

    1979-12-01

    Activities are the basic parallel processing primitive. Different goals of the system can be pursued in parallel by placing them in separate activities. Language primitives are provided for manipulating running activities. Viewpoints are a generalization of context ...

  5. Real-time implementations of image segmentation algorithms on shared memory multicore architecture: a survey (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Akil, Mohamed

    2017-05-01

    Real-time processing is getting more and more important in many image processing applications. Image segmentation is one of the most fundamental tasks in image analysis. As a consequence, many different approaches for image segmentation have been proposed. The watershed transform is a well-known image segmentation tool. The watershed transform is a very data intensive task. To achieve acceleration and obtain real-time processing of watershed algorithms, parallel architectures and programming models for multicore computing have been developed. This paper focuses on the survey of the approaches for parallel implementation of sequential watershed algorithms on multicore general purpose CPUs: homogeneous multicore processors with shared memory. To achieve an efficient parallel implementation, it is necessary to explore different strategies (parallelization/distribution/distributed scheduling) combined with different acceleration and optimization techniques to enhance parallelism. In this paper, we give a comparison of various parallelizations of sequential watershed algorithms on shared memory multicore architecture. We analyze the performance measurements of each parallel implementation and the impact of the different sources of overhead on the performance of the parallel implementations. In this comparison study, we also discuss the advantages and disadvantages of the parallel programming models. Thus, we compare OpenMP (an application programming interface for multi-processing) with Pthreads (POSIX Threads) to illustrate the impact of each parallel programming model on the performance of the parallel implementations.

  6. ALEGRA -- A massively parallel h-adaptive code for solid dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Summers, R.M.; Wong, M.K.; Boucheron, E.A.

    1997-12-31

    ALEGRA is a multi-material, arbitrary-Lagrangian-Eulerian (ALE) code for solid dynamics designed to run on massively parallel (MP) computers. It combines the features of modern Eulerian shock codes, such as CTH, with modern Lagrangian structural analysis codes using an unstructured grid. ALEGRA is being developed for use on the teraflop supercomputers to conduct advanced three-dimensional (3D) simulations of shock phenomena important to a variety of systems. ALEGRA was designed with the Single Program Multiple Data (SPMD) paradigm, in which the mesh is decomposed into sub-meshes so that each processor gets a single sub-mesh with approximately the same number of elements. Using this approach the authors have been able to produce a single code that can scale from one processor to thousands of processors. A current major effort is to develop efficient, high precision simulation capabilities for ALEGRA, without the computational cost of using a global highly resolved mesh, through flexible, robust h-adaptivity of finite elements. H-adaptivity is the dynamic refinement of the mesh by subdividing elements, thus changing the characteristic element size and reducing numerical error. The authors are working on several major technical challenges that must be met to make effective use of HAMMER on MP computers.
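
    In its simplest form, the SPMD decomposition described above (each processor receiving a sub-mesh with roughly the same number of elements) amounts to splitting an element list into near-equal contiguous blocks. The toy sketch below shows only that counting step; the actual ALEGRA partitioner works on an unstructured mesh and must also respect connectivity.

      def block_ranges(n_elements, n_procs):
          """Assign contiguous element ranges so every rank gets within one
          element of the same count (n_elements need not divide evenly)."""
          base, extra = divmod(n_elements, n_procs)
          ranges, start = [], 0
          for rank in range(n_procs):
              count = base + (1 if rank < extra else 0)
              ranges.append((start, start + count))
              start += count
          return ranges

      print(block_ranges(10, 4))   # [(0, 3), (3, 6), (6, 8), (8, 10)]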

  7. Applying graph partitioning methods in measurement-based dynamic load balancing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatele, Abhinav; Fourestier, Sebastien; Menon, Harshitha

    Load imbalance leads to an increasing waste of resources as an application is scaled to more and more processors. Achieving the best parallel efficiency for a program requires optimal load balancing, which is an NP-hard problem. However, finding near-optimal solutions to this problem for complex computational science and engineering applications is becoming increasingly important. Charm++, a migratable objects based programming model, provides a measurement-based dynamic load balancing framework. This framework instruments and then migrates over-decomposed objects to balance computational load and communication at runtime. This paper explores the use of graph partitioning algorithms, traditionally used for partitioning physical domains/meshes, for measurement-based dynamic load balancing of parallel applications. In particular, we present repartitioning methods developed in a graph partitioning toolbox called SCOTCH that consider the previous mapping to minimize migration costs. We also discuss a new imbalance reduction algorithm for graphs with irregular load distributions. We compare several load balancing algorithms using microbenchmarks on Intrepid and Ranger and evaluate the effect of communication, number of cores and number of objects on the benefit achieved from load balancing. New algorithms developed in SCOTCH lead to better performance compared to the METIS partitioners for several cases, both in terms of application execution time and the number of objects migrated.
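
    As a baseline for what a measurement-based load balancer does, the simplest strategy ignores communication and migration cost and greedily assigns measured object loads to the least-loaded processor; the SCOTCH repartitioning methods discussed above improve on this by also using the object communication graph and the previous mapping. The object names and loads in the sketch are invented.

      import heapq

      def greedy_balance(object_loads, n_procs):
          """Assign over-decomposed objects to processors by descending load,
          always to the currently least-loaded processor (ignores communication
          and migration cost, unlike the graph repartitioners in the paper)."""
          heap = [(0.0, p) for p in range(n_procs)]   # (accumulated load, proc)
          heapq.heapify(heap)
          mapping = {}
          for obj, load in sorted(object_loads.items(), key=lambda kv: -kv[1]):
              total, proc = heapq.heappop(heap)
              mapping[obj] = proc
              heapq.heappush(heap, (total + load, proc))
          return mapping

      loads = {"chare0": 5.0, "chare1": 3.0, "chare2": 3.0, "chare3": 2.0, "chare4": 1.0}
      print(greedy_balance(loads, 2))
      # e.g. {'chare0': 0, 'chare1': 1, 'chare2': 1, 'chare3': 0, 'chare4': 1}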

  8. A taxonomy and comparison of parallel block multi-level preconditioners for the incompressible Navier-Stokes equations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shadid, John Nicolas; Elman, Howard; Shuttleworth, Robert R.

    2007-04-01

    In recent years, considerable effort has been placed on developing efficient and robust solution algorithms for the incompressible Navier-Stokes equations based on preconditioned Krylov methods. These include physics-based methods, such as SIMPLE, and purely algebraic preconditioners based on the approximation of the Schur complement. All these techniques can be represented as approximate block factorization (ABF) type preconditioners. The goal is to decompose the application of the preconditioner into simplified sub-systems in which scalable multi-level type solvers can be applied. In this paper we develop a taxonomy of these ideas based on an adaptation of a generalized approximate factorization of the Navier-Stokes system first presented in [25]. This taxonomy illuminates the similarities and differences among these preconditioners and the central role played by efficient approximation of certain Schur complement operators. We then present a parallel computational study that examines the performance of these methods and compares them to an additive Schwarz domain decomposition (DD) algorithm. Results are presented for two and three-dimensional steady state problems for enclosed domains and inflow/outflow systems on both structured and unstructured meshes. The numerical experiments are performed using MPSalsa, a stabilized finite element code.
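
    For readers unfamiliar with the notation, a common starting point of these approximate block factorization preconditioners is the exact block LU factorization of the discrete saddle-point system (written here without stabilization terms, with F the velocity convection-diffusion block and B the discrete divergence), in which the Schur complement S appears explicitly:

      \begin{pmatrix} F & B^{T} \\ B & 0 \end{pmatrix}
      =
      \begin{pmatrix} I & 0 \\ B F^{-1} & I \end{pmatrix}
      \begin{pmatrix} F & B^{T} \\ 0 & -S \end{pmatrix},
      \qquad S = B F^{-1} B^{T}.

    Replacing the action of F^{-1} (for example by a multigrid sweep) and approximating S in different ways yields the different preconditioners compared in the study; this is an illustrative summary of the general ABF idea, not the exact factorization of reference [25].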

  9. Reverse control for humanoid robot task recognition.

    PubMed

    Hak, Sovannara; Mansard, Nicolas; Stasse, Olivier; Laumond, Jean Paul

    2012-12-01

    Efficient methods to perform motion recognition have been developed using statistical tools. Those methods rely on primitive learning in a suitable space, for example, the latent space of the joint angle and/or adequate task spaces. Learned primitives are often sequential: A motion is segmented according to the time axis. When working with a humanoid robot, a motion can be decomposed into parallel subtasks. For example, in a waiter scenario, the robot has to keep some plates horizontal with one of its arms while placing a plate on the table with its free hand. Recognition can thus not be limited to one task per consecutive segment of time. The method presented in this paper takes advantage of the knowledge of what tasks the robot is able to do and how the motion is generated from this set of known controllers, to perform a reverse engineering of an observed motion. This analysis is intended to recognize parallel tasks that have been used to generate a motion. The method relies on the task-function formalism and the projection operation into the null space of a task to decouple the controllers. The approach is successfully applied on a real robot to disambiguate motion in different scenarios where two motions look similar but have different purposes.
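
    The projection operation mentioned above is the standard task-function construction: a secondary task command is filtered through the null-space projector of the primary task Jacobian, so it cannot disturb the primary task. The small numerical sketch below uses invented Jacobians and task velocities and is not the authors' controller.

      import numpy as np

      def null_space_projector(J):
          """P = I - J^+ J projects joint velocities into the null space of task J."""
          return np.eye(J.shape[1]) - np.linalg.pinv(J) @ J

      def two_task_velocity(J1, xdot1, J2, xdot2):
          """Primary task executed exactly; secondary task only in the null space."""
          qdot1 = np.linalg.pinv(J1) @ xdot1
          P1 = null_space_projector(J1)
          qdot2 = np.linalg.pinv(J2 @ P1) @ (xdot2 - J2 @ qdot1)
          return qdot1 + P1 @ qdot2

      # toy 4-DOF example with two 1-D tasks
      rng = np.random.default_rng(0)
      J1, J2 = rng.standard_normal((1, 4)), rng.standard_normal((1, 4))
      qdot = two_task_velocity(J1, np.array([0.2]), J2, np.array([-0.1]))
      print(np.allclose(J1 @ qdot, 0.2))   # primary task achieved exactly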

  10. Limited Effects of Variable-Retention Harvesting on Fungal Communities Decomposing Fine Roots in Coastal Temperate Rainforests.

    PubMed

    Philpott, Timothy J; Barker, Jason S; Prescott, Cindy E; Grayston, Sue J

    2018-02-01

    Fine root litter is the principal source of carbon stored in forest soils and a dominant source of carbon for fungal decomposers. Differences in decomposer capacity between fungal species may be important determinants of fine-root decomposition rates. Variable-retention harvesting (VRH) provides refuge for ectomycorrhizal fungi, but its influence on fine-root decomposers is unknown, as are the effects of functional shifts in these fungal communities on carbon cycling. We compared fungal communities decomposing fine roots (in litter bags) under VRH, clear-cut, and uncut stands at two sites (6 and 13 years postharvest) and two decay stages (43 days and 1 year after burial) in Douglas fir forests in coastal British Columbia, Canada. Fungal species and guilds were identified from decomposed fine roots using high-throughput sequencing. Variable retention had short-term effects on β-diversity; harvest treatment modified the fungal community composition at the 6-year-postharvest site, but not at the 13-year-postharvest site. Ericoid and ectomycorrhizal guilds were not more abundant under VRH, but stand age significantly structured species composition. Guild composition varied by decay stage, with ruderal species later replaced by saprotrophs and ectomycorrhizae. Ectomycorrhizal abundance on decomposing fine roots may partially explain why fine roots typically decompose more slowly than surface litter. Our results indicate that stand age structures fine-root decomposers but that decay stage is more important in structuring the fungal community than shifts caused by harvesting. The rapid postharvest recovery of fungal communities decomposing fine roots suggests resiliency within this community, at least in these young regenerating stands in coastal British Columbia. IMPORTANCE Globally, fine roots are a dominant source of carbon in forest soils, yet the fungi that decompose this material and that drive the sequestration or respiration of this carbon remain largely uncharacterized. Fungi vary in their capacity to decompose plant litter, suggesting that fungal community composition is an important determinant of decomposition rates. Variable-retention harvesting is a forestry practice that modifies fungal communities by providing refuge for ectomycorrhizal fungi. We evaluated the effects of variable retention and clear-cut harvesting on fungal communities decomposing fine roots at two sites (6 and 13 years postharvest), at two decay stages (43 days and 1 year), and in uncut stands in temperate rainforests. Harvesting impacts on fungal community composition were detected only after 6 years after harvest. We suggest that fungal community composition may be an important factor that reduces fine-root decomposition rates relative to those of above-ground plant litter, which has important consequences for forest carbon cycling. Copyright © 2018 American Society for Microbiology.

  11. Image Processing Using a Parallel Architecture.

    DTIC Science & Technology

    1987-12-01

    This study developed a set of low-level image processing tools on a parallel computer that allows concurrent processing of images ... the set of tools offers a significant reduction in the time required to perform some commonly used image processing operations ... As a step toward developing these systems, a structured set of image processing tools was implemented using a parallel computer.

  12. Do SiO2 and carbon-doped SiO2 nanoparticles melt? Insights from QM/MD simulations and ramifications regarding carbon nanotube growth

    NASA Astrophysics Data System (ADS)

    Page, Alister J.; Chandrakumar, K. R. S.; Irle, Stephan; Morokuma, Keiji

    2011-05-01

    Quantum chemical molecular dynamics (QM/MD) simulations of pristine and carbon-doped SiO2 nanoparticles have been performed between 1000 and 3000 K. At temperatures above 1600 K, pristine nanoparticle SiO2 decomposes rapidly, primarily forming SiO. Similarly, carbon-doped nanoparticle SiO2 decomposes at temperatures above 2000 K, primarily forming SiO and CO. Analysis of the physical states of these pristine and carbon-doped SiO2 nanoparticles indicates that they remain in the solid phase throughout decomposition. This process is therefore one of sublimation, as the liquid phase is never entered. Ramifications of these observations with respect to presently debated mechanisms of carbon nanotube growth on SiO2 nanoparticles will be discussed.

  13. Process for depositing Cr-bearing layer

    DOEpatents

    Ellis, Timothy W.; Lograsso, Thomas A.; Eshelman, Mark A.

    1995-05-09

    A method of applying a Cr-bearing layer to a substrate, comprises introducing an organometallic compound, in vapor or solid powder form entrained in a carrier gas to a plasma of an inductively coupled plasma torch or device to thermally decompose the organometallic compound and contacting the plasma and the substrate to be coated so as to deposit the Cr-bearing layer on the substrate. A metallic Cr, Cr alloy or Cr compound such as chromium oxide, nitride and carbide can be provided on the substrate. Typically, the organometallic compound is introduced to an inductively coupled plasma torch that is disposed in ambient air so to thermally decompose the organometallic compound in the plasma. The plasma is directed at the substrate to deposit the Cr-bearing layer or coating on the substrate.

  14. Process for depositing Cr-bearing layer

    DOEpatents

    Ellis, T.W.; Lograsso, T.A.; Eshelman, M.A.

    1995-05-09

    A method of applying a Cr-bearing layer to a substrate, comprises introducing an organometallic compound, in vapor or solid powder form entrained in a carrier gas to a plasma of an inductively coupled plasma torch or device to thermally decompose the organometallic compound and contacting the plasma and the substrate to be coated so as to deposit the Cr-bearing layer on the substrate. A metallic Cr, Cr alloy or Cr compound such as chromium oxide, nitride and carbide can be provided on the substrate. Typically, the organometallic compound is introduced to an inductively coupled plasma torch that is disposed in ambient air so to thermally decompose the organometallic compound in the plasma. The plasma is directed at the substrate to deposit the Cr-bearing layer or coating on the substrate. 7 figs.

  15. Reinforcement Learning for Weakly-Coupled MDPs and an Application to Planetary Rover Control

    NASA Technical Reports Server (NTRS)

    Bernstein, Daniel S.; Zilberstein, Shlomo

    2003-01-01

    Weakly-coupled Markov decision processes can be decomposed into subprocesses that interact only through a small set of bottleneck states. We study a hierarchical reinforcement learning algorithm designed to take advantage of this particular type of decomposability. To test our algorithm, we use a decision-making problem faced by autonomous planetary rovers. In this problem, a Mars rover must decide which activities to perform and when to traverse between science sites in order to make the best use of its limited resources. In our experiments, the hierarchical algorithm performs better than Q-learning in the early stages of learning, but unlike Q-learning it converges to a suboptimal policy. This suggests that it may be advantageous to use the hierarchical algorithm when training time is limited.
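
    For comparison with the flat learner used as the baseline above, the tabular Q-learning update is the one-line rule in the sketch below; the hierarchical algorithm in the paper instead learns policies for the subprocesses plus a higher-level policy over the bottleneck states. The toy chain environment, rewards, and parameter values are invented.

      import random
      from collections import defaultdict

      def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
          """One tabular step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
          best_next = max(Q[(s_next, a2)] for a2 in actions)
          Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

      # toy 1-D chain: move "left"/"right" over states 0..4, reward 1 on reaching state 4
      actions = ["left", "right"]
      Q = defaultdict(float)
      for _ in range(2000):
          s = 0
          while s != 4:
              a = random.choice(actions)            # pure exploration
              s_next = max(0, s - 1) if a == "left" else s + 1
              r = 1.0 if s_next == 4 else 0.0
              q_learning_update(Q, s, a, r, s_next, actions)
              s = s_next
      print(round(Q[(3, "right")], 2))   # approaches 1.0 (immediate reward)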

  16. ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers.

    PubMed

    Xing, Yuting; Wu, Chengkun; Yang, Xi; Wang, Wei; Zhu, En; Yin, Jianping

    2018-04-27

    A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, utilizing three types of named entity recognition tasks as demonstration. Results show that, in most cases, the processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other tasks of biomedical text mining besides NER.

  17. Design of a dataway processor for a parallel image signal processing system

    NASA Astrophysics Data System (ADS)

    Nomura, Mitsuru; Fujii, Tetsuro; Ono, Sadayasu

    1995-04-01

    Recently, demands for high-speed signal processing have been increasing, especially in the field of image data compression, computer graphics, and medical imaging. To achieve sufficient power for real-time image processing, we have been developing parallel signal-processing systems. This paper describes a communication processor called 'dataway processor' designed for a new scalable parallel signal-processing system. The processor has six high-speed communication links (Dataways), a data-packet routing controller, a RISC CORE, and a DMA controller. Each communication link operates at 8-bit parallel in a full duplex mode at 50 MHz. Moreover, data routing, DMA, and CORE operations are processed in parallel. Therefore, sufficient throughput is available for high-speed digital video signals. The processor is designed in a top-down fashion using a CAD system called 'PARTHENON.' The hardware is fabricated using 0.5-micrometers CMOS technology and comprises about 200 K gates.

  18. Decomposing Bias in Different Types of Simple Decisions

    ERIC Educational Resources Information Center

    White, Corey N.; Poldrack, Russell A.

    2014-01-01

    The ability to adjust bias, or preference for an option, allows for great behavioral flexibility. Decision bias is also important for understanding cognition as it can provide useful information about underlying cognitive processes. Previous work suggests that bias can be adjusted in 2 primary ways: by adjusting how the stimulus under…

  19. Decomposing Task-Switching Costs with the Diffusion Model

    ERIC Educational Resources Information Center

    Schmitz, Florian; Voss, Andreas

    2012-01-01

    In four experiments, task-switching processes were investigated with variants of the alternating runs paradigm and the explicit cueing paradigm. The classical diffusion model for binary decisions (Ratcliff, 1978) was used to dissociate different components of task-switching costs. Findings can be reconciled with the view that task-switching…
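
    Although the abstract is truncated, the diffusion model it refers to can be summarized as a noisy accumulation process: evidence drifts at rate v between two boundaries 0 and a, starting at z, and the response time adds a non-decision component Ter. The simulation sketch below uses invented parameter values, not the authors' fits.

      import numpy as np

      def simulate_ddm(v=0.3, a=1.0, z=0.5, ter=0.3, sigma=1.0, dt=0.001, rng=None):
          """Simulate one trial of the diffusion model; returns (choice, RT in seconds)."""
          rng = rng or np.random.default_rng()
          x, t = z, 0.0
          while 0.0 < x < a:
              x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
              t += dt
          return (1 if x >= a else 0), t + ter

      rng = np.random.default_rng(1)
      trials = [simulate_ddm(rng=rng) for _ in range(500)]
      upper = [rt for resp, rt in trials if resp == 1]
      print(f"P(upper) = {len(upper) / len(trials):.2f}, mean RT = {np.mean(upper):.2f} s")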

  20. How Things Work. Teacher's Guide.

    ERIC Educational Resources Information Center

    Brown, Mark; And Others

    This unit examines the earth's processes and systems from an energy perspective. A technical language for discussion of energy systems is developed. Objectives include the ability of students to discuss earth's carbon/oxygen cycle, hydrological cycle, and heat patterns and the functioning of producers, consumers and decomposers in the environment.…

  1. An integrated spectroscopic and wet chemical approach to investigate grass litter decomposition chemistry

    USDA-ARS?s Scientific Manuscript database

    Litter decomposition is a key process for soil organic matter formation and terrestrial biogeochemistry. Yet we still lack complete understanding of the chemical transformations which occur in the litter residue as it decomposes. A number of methods such as bulk nutrient concentrations, chemical fra...

  2. Morphological Decomposition Based on the Analysis of Orthography

    ERIC Educational Resources Information Center

    Rastle, Kathleen; Davis, Matthew H.

    2008-01-01

    Recent theories of morphological processing have been dominated by the notion that morphologically complex words are decomposed into their constituents on the basis of their semantic properties. In this article we argue that the weight of evidence now suggests that the recognition of morphologically complex words begins with a rapid morphemic…

  3. Carboxylic acid sorption regeneration process

    DOEpatents

    King, C.J.; Poole, L.J.

    1995-05-02

    Carboxylic acids are sorbed from aqueous feedstocks into an organic liquid phase or onto a solid adsorbent. The acids are freed from the sorbent phase by treating it with aqueous alkylamine thus forming an alkylammonium carboxylate which is dewatered and decomposed to the desired carboxylic acid and the alkylamine. 10 figs.

  4. Air-stable ink for scalable, high-throughput layer deposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weil, Benjamin D; Connor, Stephen T; Cui, Yi

    A method for producing and depositing air-stable, easily decomposable, vulcanized ink on any of a wide range of substrates is disclosed. The ink enables high-volume production of optoelectronic and/or electronic devices using scalable production methods, such as roll-to-roll transfer, fast rolling processes, and the like.

  5. 9 CFR 590.539 - Defrosting operations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 590.539 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE EGG... objectionable odors and are unfit for human food (e.g., sour, musty, fermented, or decomposed odors) shall be.... Defrosted liquid shall not be held more than 16 hours prior to processing or drying. (e) Sanitary methods...

  6. Rapid low-temperature epitaxial growth using a hot-element assisted chemical vapor deposition process

    DOEpatents

    Iwancizko, Eugene; Jones, Kim M.; Crandall, Richard S.; Nelson, Brent P.; Mahan, Archie Harvin

    2001-01-01

    The invention provides a process for depositing an epitaxial layer on a crystalline substrate, comprising the steps of providing a chamber having an element capable of heating, introducing the substrate into the chamber, heating the element at a temperature sufficient to decompose a source gas, passing the source gas in contact with the element; and forming an epitaxial layer on the substrate.

  7. PROCESS FOR PREPARING URANIUM METAL

    DOEpatents

    Prescott, C.H. Jr.; Reynolds, F.L.

    1959-01-13

    A process is presented for producing oxygen-free uranium metal comprising contacting iodine vapor with crude uranium in a reaction zone maintained at 400 to 800 C to produce a vaporous mixture of UI4 and iodine. Also disposed within the reaction zone is a tungsten filament which is heated to about 1600 C. The UI4, upon contacting the hot filament, is decomposed to molten uranium substantially free of oxygen.

  8. Processing method for superconducting ceramics

    DOEpatents

    Bloom, Ira D.; Poeppel, Roger B.; Flandermeyer, Brian K.

    1993-01-01

    A process for preparing a superconducting ceramic and particularly YBa2Cu3O7-δ, where δ is in the order of about 0.1-0.4, is carried out using a polymeric binder which decomposes below its ignition point to reduce carbon residue between the grains of the sintered ceramic and a nonhydroxylic organic solvent to limit the problems with water or certain alcohols on the ceramic composition.

  9. Processing method for superconducting ceramics

    DOEpatents

    Bloom, Ira D.; Poeppel, Roger B.; Flandermeyer, Brian K.

    1993-02-02

    A process for preparing a superconducting ceramic and particularly YBa2Cu3O7-δ, where δ is in the order of about 0.1-0.4, is carried out using a polymeric binder which decomposes below its ignition point to reduce carbon residue between the grains of the sintered ceramic and a nonhydroxylic organic solvent to limit the problems with water or certain alcohols on the ceramic composition.

  10. Search asymmetries: parallel processing of uncertain sensory information.

    PubMed

    Vincent, Benjamin T

    2011-08-01

    What is the mechanism underlying search phenomena such as search asymmetry? Two-stage models such as Feature Integration Theory and Guided Search propose parallel pre-attentive processing followed by serial post-attentive processing. They claim search asymmetry effects are indicative of finding pairs of features, one processed in parallel, the other in serial. An alternative proposal is that a 1-stage parallel process is responsible, and search asymmetries occur when one stimulus has greater internal uncertainty associated with it than another. While the latter account is simpler, only a few studies have set out to empirically test its quantitative predictions, and many researchers still subscribe to the 2-stage account. This paper examines three separate parallel models (Bayesian optimal observer, max rule, and a heuristic decision rule). All three parallel models can account for search asymmetry effects and I conclude that either people can optimally utilise the uncertain sensory data available to them, or are able to select heuristic decision rules which approximate optimal performance. Copyright © 2011 Elsevier Ltd. All rights reserved.
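
    The max rule mentioned above can be stated compactly: the observer responds "target present" whenever the largest of the noisy internal responses exceeds a criterion, and an asymmetry emerges as soon as one stimulus class carries more internal uncertainty than the other. The simulation sketch below uses invented parameter values purely for illustration.

      import numpy as np

      def max_rule_hit_fa(d_prime, sigma_distractor, n_items=8, criterion=1.5,
                          n_trials=20000, rng=None):
          """Hit and false-alarm rates for a max-rule observer:
          respond 'present' when the maximum internal response exceeds the criterion."""
          rng = rng or np.random.default_rng(0)
          # target-absent trials: n_items distractor responses
          absent = rng.normal(0.0, sigma_distractor, (n_trials, n_items))
          # target-present trials: one target response replaces one distractor
          present = absent.copy()
          present[:, 0] = rng.normal(d_prime, 1.0, n_trials)
          fa = np.mean(absent.max(axis=1) > criterion)
          hit = np.mean(present.max(axis=1) > criterion)
          return hit, fa

      # more internal uncertainty about the distractors degrades performance,
      # mimicking one direction of a search asymmetry
      print(max_rule_hit_fa(d_prime=2.0, sigma_distractor=0.5))
      print(max_rule_hit_fa(d_prime=2.0, sigma_distractor=1.0))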

  11. 77 FR 47573 - Approval and Promulgation of Implementation Plans; Mississippi; 110(a)(2)(E)(ii) Infrastructure...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-09

    ... Mississippi Department of Environmental Quality (MDEQ), on July 13, 2012, for parallel processing. This... of Contents I. What is parallel processing? II. Background III. What elements are required under... Executive Order Reviews I. What is parallel processing? Consistent with EPA regulations found at 40 CFR Part...

  12. Double Take: Parallel Processing by the Cerebral Hemispheres Reduces Attentional Blink

    ERIC Educational Resources Information Center

    Scalf, Paige E.; Banich, Marie T.; Kramer, Arthur F.; Narechania, Kunjan; Simon, Clarissa D.

    2007-01-01

    Recent data have shown that parallel processing by the cerebral hemispheres can expand the capacity of visual working memory for spatial locations (J. F. Delvenne, 2005) and attentional tracking (G. A. Alvarez & P. Cavanagh, 2005). Evidence that parallel processing by the cerebral hemispheres can improve item identification has remained elusive.…

  13. On the costs of parallel processing in dual-task performance: The case of lexical processing in word production.

    PubMed

    Paucke, Madlen; Oppermann, Frank; Koch, Iring; Jescheniak, Jörg D

    2015-12-01

    Previous dual-task picture-naming studies suggest that lexical processes require capacity-limited processes and prevent other tasks to be carried out in parallel. However, studies involving the processing of multiple pictures suggest that parallel lexical processing is possible. The present study investigated the specific costs that may arise when such parallel processing occurs. We used a novel dual-task paradigm by presenting 2 visual objects associated with different tasks and manipulating between-task similarity. With high similarity, a picture-naming task (T1) was combined with a phoneme-decision task (T2), so that lexical processes were shared across tasks. With low similarity, picture-naming was combined with a size-decision T2 (nonshared lexical processes). In Experiment 1, we found that a manipulation of lexical processes (lexical frequency of T1 object name) showed an additive propagation with low between-task similarity and an overadditive propagation with high between-task similarity. Experiment 2 replicated this differential forward propagation of the lexical effect and showed that it disappeared with longer stimulus onset asynchronies. Moreover, both experiments showed backward crosstalk, indexed as worse T1 performance with high between-task similarity compared with low similarity. Together, these findings suggest that conditions of high between-task similarity can lead to parallel lexical processing in both tasks, which, however, does not result in benefits but rather in extra performance costs. These costs can be attributed to crosstalk based on the dual-task binding problem arising from parallel processing. Hence, the present study reveals that capacity-limited lexical processing can run in parallel across dual tasks but only at the expense of extraordinary high costs. (c) 2015 APA, all rights reserved).

  14. Graphical Representation of Parallel Algorithmic Processes

    DTIC Science & Technology

    1990-12-01

    ... interface with the AAARF main process. The source code for the AAARF class-common library is in the common subdirectory and consists of the following files ... The goal of this study is to develop an algorithm animation facility for parallel processes executing on different architectures, from multiprocessor ...

  15. Our World without Decomposers: How Scary!

    ERIC Educational Resources Information Center

    Spring, Patty; Harr, Natalie

    2014-01-01

    Bugs, slugs, bacteria, and fungi are decomposers at the heart of every ecosystem. Fifth graders at Dodge Intermediate School in Twinsburg, Ohio, ventured outdoors to learn about the necessity of these amazing organisms. With the help of a naturalist, students explored their local park and discovered the wonder of decomposers and their…

  16. Finding the lost open-circuit voltage in polymer solar cells by UV-ozone treatment of the nickel acetate anode buffer layer.

    PubMed

    Wang, Fuzhi; Sun, Gang; Li, Cong; Liu, Jiyan; Hu, Siqian; Zheng, Hua; Tan, Zhan'ao; Li, Yongfang

    2014-06-25

    Efficient polymer solar cells (PSCs) with enhanced open-circuit voltage (Voc) are fabricated by introducing solution-processed and UV-ozone (UVO)-treated nickel acetate (O-NiAc) as an anode buffer layer. According to X-ray photoelectron spectroscopy data, NiAc partially decomposed to NiOOH during the UVO treatment. NiOOH is a dipole species, which leads to an increase in the work function (as confirmed by ultraviolet photoemission spectroscopy), thus benefitting the formation of ohmic contact between the anode and photoactive layer and leading to increased Voc. In addition, the UVO treatment improves the wettability between the substrate and solvent of the active layer, which facilitates the formation of an upper photoactive layer with better morphology. Further, the O-NiAc layer can decrease the series resistance (Rs) and increase the parallel resistance (Rp) of the devices, inducing enhanced Voc in comparison with the as-prepared NiAc-buffered control devices without UVO treatment. For PSCs based on the P3HT:PCBM system, Voc increases from 0.50 to 0.60 V after the NiAc buffer layer undergoes UVO treatment. Similarly, in the P3HT:ICBA system, the Voc value of the device with a UVO-treated NiAc buffer layer increases from 0.78 to 0.88 V, showing an enhanced power conversion efficiency of 6.64%.

  17. Fast parallel algorithm for slicing STL based on pipeline

    NASA Astrophysics Data System (ADS)

    Ma, Xulong; Lin, Feng; Yao, Bo

    2016-05-01

    In the Additive Manufacturing field, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages. However, traditional algorithms cannot make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of thread number and layer number are investigated in a series of experiments. The experimental results show that thread number and layer number are two significant factors for the speedup ratio. The tendency of speedup versus thread number reveals a positive relationship that agrees closely with Amdahl's law, and the tendency of speedup versus layer number also keeps a positive relationship, agreeing with Gustafson's law. The new algorithm uses topological information to compute contours with a parallel method. Another parallel algorithm based on data parallelism is used in the experiments to show that the pipeline parallel mode is more efficient. A concluding case study demonstrates the performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline parallel algorithm makes full use of the multi-core CPU hardware and accelerates the slicing process; compared with the data parallel slicing algorithm, the pipeline parallel model presented in this paper achieves a much higher speedup ratio and efficiency.
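
    The two scaling laws invoked above are simple enough to state inline; the sketch below evaluates both for an assumed parallel fraction rather than the authors' measured values.

      def amdahl_speedup(parallel_fraction, n_threads):
          """Fixed problem size: S = 1 / ((1 - p) + p / n)."""
          p = parallel_fraction
          return 1.0 / ((1.0 - p) + p / n_threads)

      def gustafson_speedup(parallel_fraction, n_threads):
          """Scaled problem size: S = n - (1 - p) * (n - 1)."""
          p = parallel_fraction
          return n_threads - (1.0 - p) * (n_threads - 1)

      for n in (2, 4, 8, 16):
          print(n, round(amdahl_speedup(0.9, n), 2), round(gustafson_speedup(0.9, n), 2))
      # e.g. at 16 threads: Amdahl ~6.4x, Gustafson ~14.5x for a 90% parallel fraction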

  18. Traits determining the digestibility-decomposability relationships in species from Mediterranean rangelands.

    PubMed

    Bumb, Iris; Garnier, Eric; Coq, Sylvain; Nahmani, Johanne; Del Rey Granado, Maria; Gimenez, Olivier; Kazakou, Elena

    2018-03-05

    Forage quality for herbivores and litter quality for decomposers are two key plant properties affecting ecosystem carbon and nutrient cycling. Although there is a positive relationship between palatability and decomposition, very few studies have focused on larger vertebrate herbivores while considering links between the digestibility of living leaves and stems and the decomposability of litter and associated traits. The hypothesis tested is that some defences of living organs would reduce their digestibility and, as a consequence, their litter decomposability, through 'afterlife' effects. Additionally in high-fertility conditions the presence of intense herbivory would select for communities dominated by fast-growing plants, which are able to compensate for tissue loss by herbivory, producing both highly digestible organs and easily decomposable litter. Relationships between dry matter digestibility and decomposability were quantified in 16 dominant species from Mediterranean rangelands, which are subject to management regimes that differ in grazing intensity and fertilization. The digestibility and decomposability of leaves and stems were estimated at peak standing biomass, in plots that were either fertilized and intensively grazed or unfertilized and moderately grazed. Several traits were measured on living and senesced organs: fibre content, dry matter content and nitrogen, phosphorus and tannin concentrations. Digestibility was positively related to decomposability, both properties being influenced in the same direction by management regime, organ and growth forms. Digestibility of leaves and stems was negatively related to their fibre concentrations, and positively related to their nitrogen concentration. Decomposability was more strongly related to traits measured on living organs than on litter. Digestibility and decomposition were governed by similar structural traits, in particular fibre concentration, affecting both herbivores and micro-organisms through the afterlife effects. This study contributes to a better understanding of the interspecific relationships between forage quality and litter decomposition in leaves and stems and demonstrates the key role these traits play in the link between plant and soil via herbivory and decomposition. Fibre concentration and dry matter content can be considered as good predictors of both digestibility and decomposability. © The Author(s) 2018. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  19. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael E; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

    Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  20. Distributed computing feasibility in a non-dedicated homogeneous distributed system

    NASA Technical Reports Server (NTRS)

    Leutenegger, Scott T.; Sun, Xian-He

    1993-01-01

    The low cost and availability of clusters of workstations have led researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, this approach often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term, task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. It was proposed that task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a non-dedicated distributed system.

  1. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.
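
    The claim describes one task's collective traffic being divided across several endpoints. A toy sketch of that division step (illustrative only, not the patented PAMI interface; names are assumptions):

      # Illustrative sketch: split one collective operation's send buffer into
      # chunks, one per endpoint belonging to the same task, so the transfer
      # can proceed over several contexts in parallel.
      def divide_among_endpoints(data: bytes, num_endpoints: int) -> list:
          chunk = (len(data) + num_endpoints - 1) // num_endpoints  # ceiling division
          return [data[i:i + chunk] for i in range(0, len(data), chunk)]

      chunks = divide_among_endpoints(b"x" * 10, 4)
      print([len(c) for c in chunks])  # -> [3, 3, 3, 1]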

  2. Toward a Model Framework of Generalized Parallel Componential Processing of Multi-Symbol Numbers

    ERIC Educational Resources Information Center

    Huber, Stefan; Cornelsen, Sonja; Moeller, Korbinian; Nuerk, Hans-Christoph

    2015-01-01

    In this article, we propose and evaluate a new model framework of parallel componential multi-symbol number processing, generalizing the idea of parallel componential processing of multi-digit numbers to the case of negative numbers by considering the polarity signs similar to single digits. In a first step, we evaluated this account by defining…

  3. Recent Changes in Global Photosynthesis and Terrestrial Ecosystem Respiration Constrained From Multiple Observations

    NASA Astrophysics Data System (ADS)

    Li, Wei; Ciais, Philippe; Wang, Yilong; Yin, Yi; Peng, Shushi; Zhu, Zaichun; Bastos, Ana; Yue, Chao; Ballantyne, Ashley P.; Broquet, Grégoire; Canadell, Josep G.; Cescatti, Alessandro; Chen, Chi; Cooper, Leila; Friedlingstein, Pierre; Le Quéré, Corinne; Myneni, Ranga B.; Piao, Shilong

    2018-01-01

    To assess global carbon cycle variability, we decompose the net land carbon sink into the sum of gross primary productivity (GPP), terrestrial ecosystem respiration (TER), and fire emissions and apply a Bayesian framework to constrain these fluxes between 1980 and 2014. The constrained GPP and TER fluxes show an increasing trend of only half of the prior trend simulated by models. From the optimization, we infer that TER increased in parallel with GPP from 1980 to 1990, but then stalled during the cooler periods, in 1990-1994 coincident with the Pinatubo eruption, and during the recent warming hiatus period. After each of these TER stalling periods, TER is found to increase faster than GPP, explaining a relative reduction of the net land sink. These results shed light on decadal variations of GPP and TER and suggest that they exhibit different responses to temperature anomalies over the last 35 years.
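
    Written out explicitly (the symbols and the uptake-positive sign convention are assumptions made here for clarity), the flux decomposition referred to above is:

      % Net land carbon sink (NLS) decomposed into component fluxes;
      % sign convention assumed: carbon uptake counted as positive.
      \[
        \mathrm{NLS} \;=\; \mathrm{GPP} \;-\; \mathrm{TER} \;-\; E_{\mathrm{fire}}
      \]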

  4. Reverse color sequence in the diffraction of white light by the wing of the male butterfly Pierella luna (Nymphalidae: Satyrinae)

    NASA Astrophysics Data System (ADS)

    Vigneron, Jean Pol; Simonis, Priscilla; Aiello, Annette; Bay, Annick; Windsor, Donald M.; Colomer, Jean-François; Rassart, Marie

    2010-08-01

    The butterfly Pierella luna (Nymphalidae) shows an intriguing rainbow iridescence effect: the forewings of the male, when illuminated along the axis from the body to the wing tip, decompose a white light beam as a diffraction grating would do. Violet light, however, emerges along a grazing angle, near the wing surface, while the other colors, from blue to red, exit respectively at angles progressively closer to the direction perpendicular to the wing plane. This sequence is the reverse of the usual decomposition of light by a grating with a periodicity parallel to the wing surface. It is shown that this effect is produced by a macroscopic deformation of the entire scale, which curls in such a way that it forms a “vertical” grating, perpendicular to the wing surface, and functions in transmission instead of reflection.
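
    For reference, the ordinary behaviour that the observed sequence departs from is the standard grating relation, written here for normal incidence (symbols assumed: d is the grating period, m the diffraction order, theta_m the exit angle):

      % Standard transmission-grating equation at normal incidence.
      \[
        d \, \sin\theta_m \;=\; m\,\lambda
      \]

    Under this relation longer wavelengths exit at larger angles from the grating normal; the reversed ordering reported for Pierella luna follows from the grating being oriented perpendicular to the wing surface and operating in transmission, as described above.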

  5. Exploiting Quantum Resonance to Solve Combinatorial Problems

    NASA Technical Reports Server (NTRS)

    Zak, Michail; Fijany, Amir

    2006-01-01

    Quantum resonance would be exploited in a proposed quantum-computing approach to the solution of combinatorial optimization problems. In quantum computing in general, one takes advantage of the fact that an algorithm cannot be decoupled from the physical effects available to implement it. Prior approaches to quantum computing have involved exploitation of only a subset of known quantum physical effects, notably including parallelism and entanglement, but not including resonance. In the proposed approach, one would utilize the combinatorial properties of tensor-product decomposability of unitary evolution of many-particle quantum systems for physically simulating solutions to NP-complete problems (a class of problems that are intractable with respect to classical methods of computation). In this approach, reinforcement and selection of a desired solution would be executed by means of quantum resonance. Classes of NP-complete problems that are important in practice and could be solved by the proposed approach include planning, scheduling, search, and optimal design.

  6. Multiple mechanisms in the perception of face gender: Effect of sex-irrelevant features.

    PubMed

    Komori, Masashi; Kawamura, Satoru; Ishihara, Shigekazu

    2011-06-01

    Effects of sex-relevant and sex-irrelevant facial features on the evaluation of facial gender were investigated. Participants rated masculinity of 48 male facial photographs and femininity of 48 female facial photographs. Eighty feature points were measured on each of the facial photographs. Using a generalized Procrustes analysis, facial shapes were converted into multidimensional vectors, with the average face as a starting point. Each vector was decomposed into a sex-relevant subvector and a sex-irrelevant subvector which were, respectively, parallel and orthogonal to the main male-female axis. Principal components analysis (PCA) was performed on the sex-irrelevant subvectors. One principal component was negatively correlated with both perceived masculinity and femininity, and another was correlated only with femininity, though both components were orthogonal to the male-female dimension (and thus by definition sex-irrelevant). These results indicate that evaluation of facial gender depends on sex-irrelevant as well as sex-relevant facial features.
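
    A minimal numpy sketch of the subvector decomposition described above (variable names are illustrative; the PCA step on the orthogonal residuals is only indicated):

      import numpy as np

      # Split a shape vector (face minus average face) into a component parallel
      # to the male-female axis (sex-relevant) and an orthogonal residual
      # (sex-irrelevant), as in the analysis described above.
      def decompose(shape_vector: np.ndarray, mf_axis: np.ndarray):
          axis = mf_axis / np.linalg.norm(mf_axis)   # unit male-female axis
          parallel = shape_vector.dot(axis) * axis   # sex-relevant subvector
          orthogonal = shape_vector - parallel       # sex-irrelevant subvector
          return parallel, orthogonal

      # The sex-irrelevant subvectors of all faces would then be stacked and
      # submitted to a principal components analysis.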

  7. Applying Reduced Generator Models in the Coarse Solver of Parareal in Time Parallel Power System Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Nan; Dimitrovski, Aleksandar D; Simunovic, Srdjan

    2016-01-01

    The development of high-performance computing techniques and platforms has provided many opportunities for real-time or even faster-than-real-time implementation of power system simulations. One approach uses the Parareal in time framework. The Parareal algorithm has shown promising theoretical simulation speedups by temporally decomposing a simulation run into a coarse simulation on the entire simulation interval and fine simulations on sequential sub-intervals linked through the coarse simulation. However, it has been found that the time cost of the coarse solver needs to be reduced to fully exploit the potentials of the Parareal algorithm. This paper studies a Parareal implementation using reduced generator models for the coarse solver and reports the testing results on the IEEE 39-bus system and a 327-generator 2383-bus Polish system model.
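
    A compact sketch of the Parareal iteration described above, assuming a cheap coarse propagator G (the role the reduced generator models play) and an accurate fine propagator F; the structure and names are illustrative, not the paper's implementation:

      # Parareal skeleton: G and F each advance a state over one sub-interval.
      # The fine solves are independent across sub-intervals and can run in
      # parallel; only the coarse correction sweep is sequential.
      def parareal(u0, G, F, n_intervals, n_iterations):
          u = [u0] * (n_intervals + 1)
          for n in range(n_intervals):          # initial coarse sweep
              u[n + 1] = G(u[n])
          for _ in range(n_iterations):
              fine = [F(u[n]) for n in range(n_intervals)]        # parallelizable
              coarse_old = [G(u[n]) for n in range(n_intervals)]
              for n in range(n_intervals):                        # sequential correction
                  u[n + 1] = G(u[n]) + fine[n] - coarse_old[n]
          return u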

  8. Integrated boiler, superheater, and decomposer for sulfuric acid decomposition

    DOEpatents

    Moore, Robert [Edgewood, NM; Pickard, Paul S [Albuquerque, NM; Parma, Jr., Edward J.; Vernon, Milton E [Albuquerque, NM; Gelbard, Fred [Albuquerque, NM; Lenard, Roger X [Edgewood, NM

    2010-01-12

    A method and apparatus, constructed of ceramics and other corrosion resistant materials, for decomposing sulfuric acid into sulfur dioxide, oxygen and water using an integrated boiler, superheater, and decomposer unit comprising a bayonet-type, dual-tube, counter-flow heat exchanger with a catalytic insert and a central baffle to increase recuperation efficiency.

  9. Procedures for Decomposing a Redox Reaction into Half-Reaction

    ERIC Educational Resources Information Center

    Fishtik, Ilie; Berka, Ladislav H.

    2005-01-01

    A simple algorithm for a complete enumeration of the possible ways a redox reaction (RR) might be uniquely decomposed into half-reactions (HRs) using the response reactions (RERs) formalism is presented. A complete enumeration of the possible ways a RR may be decomposed into HRs is equivalent to a complete enumeration of stoichiometrically…
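
    As a concrete illustration of splitting a redox reaction into half-reactions (a textbook example chosen here, not taken from the article):

      % Overall redox reaction and its two half-reactions (illustrative example).
      \begin{align*}
        \mathrm{Zn} + \mathrm{Cu}^{2+} &\longrightarrow \mathrm{Zn}^{2+} + \mathrm{Cu}\\
        \text{oxidation half-reaction:}\quad \mathrm{Zn} &\longrightarrow \mathrm{Zn}^{2+} + 2e^{-}\\
        \text{reduction half-reaction:}\quad \mathrm{Cu}^{2+} + 2e^{-} &\longrightarrow \mathrm{Cu}
      \end{align*}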

  10. Parallel processing via a dual olfactory pathway in the honeybee.

    PubMed

    Brill, Martin F; Rosenbaum, Tobias; Reus, Isabelle; Kleineidam, Christoph J; Nawrot, Martin P; Rössler, Wolfgang

    2013-02-06

    In their natural environment, animals face complex and highly dynamic olfactory input. Thus vertebrates as well as invertebrates require fast and reliable processing of olfactory information. Parallel processing has been shown to improve processing speed and power in other sensory systems and is characterized by extraction of different stimulus parameters along parallel sensory information streams. Honeybees possess an elaborate olfactory system with unique neuronal architecture: a dual olfactory pathway comprising a medial projection-neuron (PN) antennal lobe (AL) protocerebral output tract (m-APT) and a lateral PN AL output tract (l-APT) connecting the olfactory lobes with higher-order brain centers. We asked whether this neuronal architecture serves parallel processing and employed a novel technique for simultaneous multiunit recordings from both tracts. The results revealed response profiles from a high number of PNs of both tracts to floral, pheromonal, and biologically relevant odor mixtures tested over multiple trials. PNs from both tracts responded to all tested odors, but with different characteristics indicating parallel processing of similar odors. Both PN tracts were activated by widely overlapping response profiles, which is a requirement for parallel processing. The l-APT PNs had broad response profiles suggesting generalized coding properties, whereas the responses of m-APT PNs were comparatively weaker and less frequent, indicating higher odor specificity. Comparison of response latencies within and across tracts revealed odor-dependent latencies. We suggest that parallel processing via the honeybee dual olfactory pathway provides enhanced odor processing capabilities serving sophisticated odor perception and olfactory demands associated with a complex olfactory world of this social insect.

  11. The Processes Involved in Designing Software.

    DTIC Science & Technology

    1980-08-01

    repeats itself at the next level, terminating with a plan whose individual steps can be executed to solve the initial problem. Hayes-Roth and Hayes-Roth...that the original design problem is decomposed into a collection of well-structured subproblems under the control of some type of executive process...given element to refine further, the schema is assumed to execute to completion, developing a solution model for that element and refining it into a

  12. Process for making surfactant capped metal oxide nanocrystals, and products produced by the process

    DOEpatents

    Alivisatos, A. Paul; Rockenberger, Joerg

    2006-01-10

    Disclosed is a process for making surfactant capped nanocrystals of metal oxides which are dispersable in organic solvents. The process comprises decomposing a metal cupferron complex of the formula MXCupX, wherein M is a metal, and Cup is a N-substituted N-Nitroso hydroxylamine, in the presence of a coordinating surfactant, the reaction being conducted at a temperature ranging from about 150 to about 400.degree. C., for a period of time sufficient to complete the reaction. Also disclosed are compounds made by the process.

  13. Visual analysis of inter-process communication for large-scale parallel computing.

    PubMed

    Muelder, Chris; Gygi, Francois; Ma, Kwan-Liu

    2009-01-01

    In serial computation, program profiling is often helpful for optimization of key sections of code. When moving to parallel computation, not only does the code execution need to be considered but also communication between the different processes, which can induce delays that are detrimental to performance. As the number of processes increases, so does the impact of the communication delays on performance. For large-scale parallel applications, it is critical to understand how the communication impacts performance in order to make the code more efficient. There are several tools available for visualizing program execution and communications on parallel systems. These tools generally provide either views which statistically summarize the entire program execution or process-centric views. However, process-centric visualizations do not scale well as the number of processes gets very large. In particular, the most common representation of parallel processes is a Gantt chart with a row for each process. As the number of processes increases, these charts can become difficult to work with and can even exceed screen resolution. We propose a new visualization approach that affords more scalability and then demonstrate it on systems running with up to 16,384 processes.

  14. Parallel processing for nonlinear dynamics simulations of structures including rotating bladed-disk assemblies

    NASA Technical Reports Server (NTRS)

    Hsieh, Shang-Hsien

    1993-01-01

    The principal objective of this research is to develop, test, and implement coarse-grained, parallel-processing strategies for nonlinear dynamic simulations of practical structural problems. There are contributions to four main areas: finite element modeling and analysis of rotational dynamics, numerical algorithms for parallel nonlinear solutions, automatic partitioning techniques to effect load-balancing among processors, and an integrated parallel analysis system.

  15. Plant diversity does not buffer drought effects on early-stage litter mass loss rates and microbial properties.

    PubMed

    Vogel, Anja; Eisenhauer, Nico; Weigelt, Alexandra; Scherer-Lorenzen, Michael

    2013-09-01

    Human activities are decreasing biodiversity and changing the climate worldwide. Both global change drivers have been shown to affect ecosystem functioning, but they may also act in concert in a non-additive way. We studied early-stage litter mass loss rates and soil microbial properties (basal respiration and microbial biomass) during the summer season in response to plant species richness and summer drought in a large grassland biodiversity experiment, the Jena Experiment, Germany. In line with our expectations, decreasing plant diversity and summer drought decreased litter mass loss rates and soil microbial properties. In contrast to our hypotheses, however, this was only true for mass loss of standard litter (wheat straw) used in all plots, and not for plant community-specific litter mass loss. We found no interactive effects between global change drivers, that is, drought reduced litter mass loss rates and soil microbial properties irrespective of plant diversity. High mass loss rates of plant community-specific litter and low responsiveness to drought relative to the standard litter indicate that soil microbial communities were adapted to decomposing community-specific plant litter material including lower susceptibility to dry conditions during summer months. Moreover, higher microbial enzymatic diversity at high plant diversity may have caused elevated mass loss of standard litter. Our results indicate that plant diversity loss and summer drought independently impede soil processes. However, soil decomposer communities may be highly adapted to decomposing plant community-specific litter material, even in situations of environmental stress. Results of standard litter mass loss moreover suggest that decomposer communities under diverse plant communities are able to cope with a greater variety of plant inputs possibly making them less responsive to biotic changes. © 2013 John Wiley & Sons Ltd.

  16. Widening and Deepening Questions in Web-Based Investigative Learning

    ERIC Educational Resources Information Center

    Kashihara, Akihiro; Akiyama, Naoto

    2016-01-01

    Web allows learners to investigate any question with a great variety of Web resources, in which they could construct a wider, and deeper knowledge. In such investigative learning process, it is important for them to deepen and widen the question, which involves decomposing the question into the sub-questions to be further investigated. This…

  17. Soil fauna and plant litter decomposition in tropical and subalpine forests

    Treesearch

    G. Gonzalez; T.R. Seastedt

    2001-01-01

    The decomposition of plant residues is influenced by their chemical composition, the physical-chemical environment, and the decomposer organisms. Most studies interested in latitudinal gradients of decomposition have focused on substrate quality and climate effects on decomposition, and have excluded explicit recognition of the soil organisms involved in the process....

  18. Opaque for the Reader but Transparent for the Brain: Neural Signatures of Morphological Complexity

    ERIC Educational Resources Information Center

    Meinzer, Marcus; Lahiri, Aditi; Flaisch, Tobias; Hannemann, Ronny; Eulitz, Carsten

    2009-01-01

    Within linguistics, words with a complex internal structure are commonly assumed to be decomposed into their constituent morphemes (e.g., un-help-ful). Nevertheless, an ongoing debate concerns the brain structures that subserve this process. Using functional magnetic resonance imaging, the present study varied the internal complexity of derived…

  19. Hydrogen production by the decomposition of water

    DOEpatents

    Hollabaugh, Charles M.; Bowman, Melvin G.

    1981-01-01

    How to produce hydrogen from water was a problem addressed by this invention. The solution employs a combined electrolytical-thermochemical sulfuric acid process. Additionally, high purity sulfuric acid can be produced in the process. Water and SO2 react in electrolyzer (12) so that hydrogen is produced at the cathode and sulfuric acid is produced at the anode. Then the sulfuric acid is reacted with a particular compound MrXs so as to form at least one water insoluble sulfate and at least one water insoluble oxide of molybdenum, tungsten, or boron. Water is removed by filtration; and the sulfate is decomposed in the presence of the oxide in sulfate decomposition zone (21), thus forming SO3 and reforming MrXs. The MrXs is recycled to sulfate formation zone (16). If desired, the SO3 can be decomposed to SO2 and O2; and the SO2 can be recycled to electrolyzer (12) to provide a cycle for producing hydrogen.

  20. High-quality AlN grown on a thermally decomposed sapphire surface

    NASA Astrophysics Data System (ADS)

    Hagedorn, S.; Knauer, A.; Brunner, F.; Mogilatenko, A.; Zeimer, U.; Weyers, M.

    2017-12-01

    In this study we show how to realize a self-assembled nano-patterned sapphire surface on a 2 inch diameter epi-ready wafer and the subsequent AlN overgrowth, both in the same metal-organic vapor phase epitaxial process. For this purpose, in-situ annealing in an H2 environment was applied prior to AlN growth to thermally decompose the c-plane oriented sapphire surface. By proper AlN overgrowth management, misoriented grains that start to grow on non c-plane oriented facets of the roughened sapphire surface could be overcome. We achieved crack-free, atomically flat AlN layers of 3.5 μm thickness. The layers show excellent material quality homogeneously over the whole wafer as proved by the full width at half maximum of X-ray measured ω-rocking curves of 120 arcsec to 160 arcsec for the 002 reflection and 440 arcsec to 550 arcsec for the 302 reflection. The threading dislocation density is 2 × 10⁹ cm⁻², which shows that the annealing and overgrowth process investigated in this work leads to cost-efficient AlN templates for UV LED devices.

  1. "Going to town": Large-scale norming and statistical analysis of 870 American English idioms.

    PubMed

    Bulkes, Nyssa Z; Tanner, Darren

    2017-04-01

    An idiom is classically defined as a formulaic sequence whose meaning is comprised of more than the sum of its parts. For this reason, idioms pose a unique problem for models of sentence processing, as researchers must take into account how idioms vary and along what dimensions, as these factors can modulate the ease with which an idiomatic interpretation can be activated. In order to help ensure external validity and comparability across studies, idiom research benefits from the availability of publicly available resources reporting ratings from a large number of native speakers. Resources such as the one outlined in the current paper facilitate opportunities for consensus across studies on idiom processing and help to further our goals as a research community. To this end, descriptive norms were obtained for 870 American English idioms from 2,100 participants along five dimensions: familiarity, meaningfulness, literal plausibility, global decomposability, and predictability. Idiom familiarity and meaningfulness strongly correlated with one another, whereas familiarity and meaningfulness were positively correlated with both global decomposability and predictability. Correlations with previous norming studies are also discussed.

  2. Function of terahertz spectra in monitoring the decomposing process of biological macromolecules and in investigating the causes of photoinhibition.

    PubMed

    Qu, Yuangang; Zhang, Shuai; Lian, Yuji; Kuang, Tingyun

    2017-03-01

    Chlorophyll a and β-carotene play an important role in harvesting light energy, which is used to drive photosynthesis in plants. In this study, terahertz (THz) and visible range spectra of chlorophyll a and β-carotene and their changes under light treatment were investigated. The results show that all the THz transmission and absorption spectra of chlorophyll a and β-carotene changed upon light treatment, with the maximum changes at 15 min of illumination, indicating the greatest changes of the collective vibrational mode of chlorophyll a and β-carotene. The absorption spectra of chlorophyll a in the visible light region decreased upon light treatment, signifying the degradation of chlorophyll a molecules. It can be inferred from these results that the THz spectra are very sensitive in monitoring the changes of the collective vibrational mode, despite the absence of changes in molecular configuration. The THz spectra can therefore be used to monitor the decomposing process of biological macromolecules; however, visible absorption spectra can only be used to monitor the breakdown extent of biological macromolecules.

  3. Semiconductor laser self-mixing micro-vibration measuring technology based on Hilbert transform

    NASA Astrophysics Data System (ADS)

    Tao, Yufeng; Wang, Ming; Xia, Wei

    2016-06-01

    A signal-processing method combining the wavelet transform and the Hilbert transform is employed for the measurement of uniform or non-uniform vibrations with a self-mixing interferometer based on a quantum-well semiconductor laser diode. Background noise and fringe inclination are removed by the decomposition, fringe counting is adopted to automatically determine the decomposition level, and a pair of exact quadrature signals is produced by the Hilbert transform to extract the vibration. The potential of real-time micro-vibration measurement with high accuracy and wide dynamic response bandwidth using the proposed method is demonstrated by both simulation and experiment. Advantages and error sources are presented as well. The main features of the proposed semiconductor laser self-mixing interferometer are constant current supply, high resolution, the simplest optical path, and much higher tolerance to feedback level than existing self-mixing interferometers, which makes it competitive for non-contact vibration measurement.
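
    A minimal sketch of the Hilbert-transform quadrature step described above, assuming the wavelet-based denoising has already been applied to the fringe signal (function and variable names are illustrative):

      import numpy as np
      from scipy.signal import hilbert

      # Build the analytic signal of the (pre-filtered) self-mixing fringe
      # signal, unwrap its phase, and convert phase to displacement using the
      # half-wavelength-per-fringe rule of self-mixing interferometry.
      def vibration_from_fringes(fringes: np.ndarray, wavelength: float) -> np.ndarray:
          analytic = hilbert(fringes)              # in-phase + exact quadrature pair
          phase = np.unwrap(np.angle(analytic))    # continuous interferometric phase
          return phase * wavelength / (4 * np.pi)  # one 2*pi fringe = lambda/2 displacement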

  4. Experimentally simulated global warming and nitrogen enrichment effects on microbial litter decomposers in a marsh.

    PubMed

    Flury, Sabine; Gessner, Mark O

    2011-02-01

    Atmospheric warming and increased nitrogen deposition can lead to changes of microbial communities with possible consequences for biogeochemical processes. We used an enclosure facility in a freshwater marsh to assess the effects on microbes associated with decomposing plant litter under conditions of simulated climate warming and pulsed nitrogen supply. Standard batches of litter were placed in coarse-mesh and fine-mesh bags and submerged in a series of heated, nitrogen-enriched, and control enclosures. They were retrieved later and analyzed for a range of microbial parameters. Fingerprinting profiles obtained by denaturing gradient gel electrophoresis (DGGE) indicated that simulated global warming induced a shift in bacterial community structure. In addition, warming reduced fungal biomass, whereas bacterial biomass was unaffected. The mesh size of the litter bags and sampling date also had an influence on bacterial community structure, with the apparent number of dominant genotypes increasing from spring to summer. Microbial respiration was unaffected by any treatment, and nitrogen enrichment had no clear effect on any of the microbial parameters considered. Overall, these results suggest that microbes associated with decomposing plant litter in nutrient-rich freshwater marshes are resistant to extra nitrogen supplies but are likely to respond to temperature increases projected for this century.

  5. Reactivity of a Thick BaO Film Supported on Pt(111): Adsorption and Reaction of NO2, H2O and CO2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mudiyanselage, Kumudu; Yi, Cheol-Woo W.; Szanyi, Janos

    2009-09-15

    Reactions of NO2, H2O, and CO2 with a thick (> 20 MLE) BaO film supported on Pt(111) were studied with temperature programmed desorption (TPD) and X-ray photoelectron spectroscopy (XPS). NO2 reacts with a thick BaO to form surface nitrite-nitrate ion pairs at 300 K, while only nitrates form at 600 K. In the thermal decomposition process of nitrite–nitrate ion pairs, first nitrites decompose and desorb as NO. Then nitrates decompose in two steps: at lower temperature with the release of NO2 and at higher temperature, nitrates dissociate to NO + O2. The thick BaO layer converts completely to Ba(OH)2 following the adsorption of H2O at 300 K. Dehydration/dehydroxylation of this hydroxide layer can be fully achieved by annealing to 550 K. CO2 also reacts with BaO to form BaCO3 that completely decomposes to regenerate BaO upon annealing to 825 K. However, the thick BaO film cannot be converted completely to Ba(NOx)2 or BaCO3 under the experimental conditions employed in this study.

  6. The Processing of Somatosensory Information Shifts from an Early Parallel into a Serial Processing Mode: A Combined fMRI/MEG Study.

    PubMed

    Klingner, Carsten M; Brodoehl, Stefan; Huonker, Ralph; Witte, Otto W

    2016-01-01

    The question regarding whether somatosensory inputs are processed in parallel or in series has not been clearly answered. Several studies that have applied dynamic causal modeling (DCM) to fMRI data have arrived at seemingly divergent conclusions. However, these divergent results could be explained by the hypothesis that the processing route of somatosensory information changes with time. Specifically, we suggest that somatosensory stimuli are processed in parallel only during the early stage, whereas the processing is later dominated by serial processing. This hypothesis was revisited in the present study based on fMRI analyses of tactile stimuli and the application of DCM to magnetoencephalographic (MEG) data collected during sustained (260 ms) tactile stimulation. Bayesian model comparisons were used to infer the processing stream. We demonstrated that the favored processing stream changes over time. We found that the neural activity elicited in the first 100 ms following somatosensory stimuli is best explained by models that support a parallel processing route, whereas a serial processing route is subsequently favored. These results suggest that the secondary somatosensory area (SII) receives information regarding a new stimulus in parallel with the primary somatosensory area (SI), whereas later processing in the SII is dominated by the preprocessed input from the SI.

  7. The Processing of Somatosensory Information Shifts from an Early Parallel into a Serial Processing Mode: A Combined fMRI/MEG Study

    PubMed Central

    Klingner, Carsten M.; Brodoehl, Stefan; Huonker, Ralph; Witte, Otto W.

    2016-01-01

    The question regarding whether somatosensory inputs are processed in parallel or in series has not been clearly answered. Several studies that have applied dynamic causal modeling (DCM) to fMRI data have arrived at seemingly divergent conclusions. However, these divergent results could be explained by the hypothesis that the processing route of somatosensory information changes with time. Specifically, we suggest that somatosensory stimuli are processed in parallel only during the early stage, whereas the processing is later dominated by serial processing. This hypothesis was revisited in the present study based on fMRI analyses of tactile stimuli and the application of DCM to magnetoencephalographic (MEG) data collected during sustained (260 ms) tactile stimulation. Bayesian model comparisons were used to infer the processing stream. We demonstrated that the favored processing stream changes over time. We found that the neural activity elicited in the first 100 ms following somatosensory stimuli is best explained by models that support a parallel processing route, whereas a serial processing route is subsequently favored. These results suggest that the secondary somatosensory area (SII) receives information regarding a new stimulus in parallel with the primary somatosensory area (SI), whereas later processing in the SII is dominated by the preprocessed input from the SI. PMID:28066197

  8. The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Zhou, Liqing

    2015-12-01

    With the development of satellite remote sensing technology and the growth of remote sensing image data, traditional remote sensing image segmentation technology cannot meet the requirements of massive remote sensing image processing and storage. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process and builds a cheap and efficient computer cluster system that uses parallel processing to implement the MeanShift segmentation algorithm based on the MapReduce model. This not only ensures the quality of remote sensing image segmentation but also improves segmentation speed and better meets real-time requirements. The MapReduce-based parallel MeanShift algorithm for remote sensing image segmentation thus shows practical significance and value.
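
    A toy sketch of the split/map/reduce structure described above; `mean_shift_segment` is a placeholder for a real mean-shift segmentation routine, and the merge step is deliberately simplified:

      from multiprocessing import Pool

      def mean_shift_segment(tile):
          # Placeholder: would return a label map for the tile.
          return tile

      def split_into_tiles(image_rows, n):
          rows = -(-len(image_rows) // n)  # ceiling division so no rows are dropped
          return [image_rows[i:i + rows] for i in range(0, len(image_rows), rows)]

      def segment_parallel(image_rows, n_workers=4):
          tiles = split_into_tiles(image_rows, n_workers)
          with Pool(n_workers) as pool:
              labelled = pool.map(mean_shift_segment, tiles)    # "map" step
          return [row for tile in labelled for row in tile]     # "reduce"/merge step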

  9. Enzyme Activities at Different Stages of Plant Biomass Decomposition in Three Species of Fungus-Growing Termites

    PubMed Central

    Pedersen, Kristine S. K.; Aanen, Duur K.

    2017-01-01

    ABSTRACT Fungus-growing termites rely on mutualistic fungi of the genus Termitomyces and gut microbes for plant biomass degradation. Due to a certain degree of symbiont complementarity, this tripartite symbiosis has evolved as a complex bioreactor, enabling decomposition of nearly any plant polymer, likely contributing to the success of the termites as one of the main plant decomposers in the Old World. In this study, we evaluated which plant polymers are decomposed and which enzymes are active during the decomposition process in two major genera of fungus-growing termites. We found a diversity of active enzymes at different stages of decomposition and a consistent decrease in plant components during the decomposition process. Furthermore, our findings are consistent with the hypothesis that termites transport enzymes from the older mature parts of the fungus comb through young worker guts to freshly inoculated plant substrate. However, preliminary fungal RNA sequencing (RNA-seq) analyses suggest that this likely transport is supplemented with enzymes produced in situ. Our findings support that the maintenance of an external fungus comb, inoculated with an optimal mixture of plant material, fungal spores, and enzymes, is likely the key to the extraordinarily efficient plant decomposition in fungus-growing termites. IMPORTANCE Fungus-growing termites have a substantial ecological footprint in the Old World (sub)tropics due to their ability to decompose dead plant material. Through the establishment of an elaborate plant biomass inoculation strategy and through fungal and bacterial enzyme contributions, this farming symbiosis has become an efficient and versatile aerobic bioreactor for plant substrate conversion. Since little is known about what enzymes are expressed and where they are active at different stages of the decomposition process, we used enzyme assays, transcriptomics, and plant content measurements to shed light on how this decomposition of plant substrate is so effectively accomplished. PMID:29269491

  10. Enzyme Activities at Different Stages of Plant Biomass Decomposition in Three Species of Fungus-Growing Termites.

    PubMed

    da Costa, Rafael R; Hu, Haofu; Pilgaard, Bo; Vreeburg, Sabine M E; Schückel, Julia; Pedersen, Kristine S K; Kračun, Stjepan K; Busk, Peter K; Harholt, Jesper; Sapountzis, Panagiotis; Lange, Lene; Aanen, Duur K; Poulsen, Michael

    2018-03-01

    Fungus-growing termites rely on mutualistic fungi of the genus Termitomyces and gut microbes for plant biomass degradation. Due to a certain degree of symbiont complementarity, this tripartite symbiosis has evolved as a complex bioreactor, enabling decomposition of nearly any plant polymer, likely contributing to the success of the termites as one of the main plant decomposers in the Old World. In this study, we evaluated which plant polymers are decomposed and which enzymes are active during the decomposition process in two major genera of fungus-growing termites. We found a diversity of active enzymes at different stages of decomposition and a consistent decrease in plant components during the decomposition process. Furthermore, our findings are consistent with the hypothesis that termites transport enzymes from the older mature parts of the fungus comb through young worker guts to freshly inoculated plant substrate. However, preliminary fungal RNA sequencing (RNA-seq) analyses suggest that this likely transport is supplemented with enzymes produced in situ. Our findings support that the maintenance of an external fungus comb, inoculated with an optimal mixture of plant material, fungal spores, and enzymes, is likely the key to the extraordinarily efficient plant decomposition in fungus-growing termites. IMPORTANCE Fungus-growing termites have a substantial ecological footprint in the Old World (sub)tropics due to their ability to decompose dead plant material. Through the establishment of an elaborate plant biomass inoculation strategy and through fungal and bacterial enzyme contributions, this farming symbiosis has become an efficient and versatile aerobic bioreactor for plant substrate conversion. Since little is known about what enzymes are expressed and where they are active at different stages of the decomposition process, we used enzyme assays, transcriptomics, and plant content measurements to shed light on how this decomposition of plant substrate is so effectively accomplished. Copyright © 2018 da Costa et al.

  11. The Design and Evaluation of "CAPTools"--A Computer Aided Parallelization Toolkit

    NASA Technical Reports Server (NTRS)

    Yan, Jerry; Frumkin, Michael; Hribar, Michelle; Jin, Haoqiang; Waheed, Abdul; Johnson, Steve; Cross, Jark; Evans, Emyr; Ierotheou, Constantinos; Leggett, Pete

    1998-01-01

    Writing applications for high performance computers is a challenging task. Although writing code by hand still offers the best performance, it is extremely costly and often not very portable. The Computer Aided Parallelization Tools (CAPTools) are a toolkit designed to help automate the mapping of sequential FORTRAN scientific applications onto multiprocessors. CAPTools consists of the following major components: an inter-procedural dependence analysis module that incorporates user knowledge; a 'self-propagating' data partitioning module driven via user guidance; an execution control mask generation and optimization module for the user to fine tune parallel processing of individual partitions; a program transformation/restructuring facility for source code clean up and optimization; a set of browsers through which the user interacts with CAPTools at each stage of the parallelization process; and a code generator supporting multiple programming paradigms on various multiprocessors. Besides describing the rationale behind the architecture of CAPTools, the parallelization process is illustrated via case studies involving structured and unstructured meshes. The programming process and the performance of the generated parallel programs are compared against other programming alternatives based on the NAS Parallel Benchmarks, ARC3D and other scientific applications. Based on these results, a discussion on the feasibility of constructing architectural independent parallel applications is presented.

  12. The reactions of thiophene on Mo(110) and Mo(110)-p(2×2)-S

    NASA Astrophysics Data System (ADS)

    Roberts, Jeffrey T.; Friend, C. M.

    1987-07-01

    The reactions of thiophene and 2,5-dideuterothiophene on Mo(110) and Mo(110)-p(2×2)-S have been investigated under ultrahigh vacuum conditions using temperature programmed reaction spectroscopy and Auger electron spectroscopy. Thiophene chemisorbed on Mo(110) decomposes during temperature programmed reaction to yield only gaseous dihydrogen, surface carbon, and surface sulfur. At low thiophene exposures, dihydrogen evolves from Mo(110) in a symmetric peak at 440 K. At saturation exposures, three dihydrogen peaks are detected at 360 K, at 420 K and at 565 K. Multilayers of thiophene desorb at 180 K. Temperature programmed reaction of 2,5-dideuterothiophene demonstrates that at high thiophene coverages, one of the α-C-H bonds (those nearest sulfur) breaks first. No bond breaking selectivity is observed at low thiophene exposures. The Mo(110)-p(2×2)-S surface is less active for thiophene decomposition. Thiophene adsorbed on Mo(110)-p(2×2)-S to low coverages decomposes to surface carbon, surface sulfur, and hydrogen at 430 K. At reaction saturation, dihydrogen production is observed at 375 and 570 K. In addition, at moderate and high exposures, chemisorbed thiophene desorbs from Mo(110)-p(2×2)-S. At saturation the desorption temperature of the reversibly chemisorbed state is 215 K. Experiments with 2,5-dideuterothiophene demonstrate no surface selectivity for α-C-H bond breaking reactions on Mo(110)-p(2×2)-S. The mechanism and energetics of thiophene decomposition are proposed to be dependent on the coverage of thiophene. At low thiophene exposures, the ring is proposed to bond parallel to the surface. All C-H bonds in the parallel geometry are sterically available for activation by the surface, accounting for the lack of selectivity in C-H bond breaking. High thiophene coverages are suggested to result in perpendicularly bound thiophene, which undergoes selective α-dehydrogenation to an α-thiophenyl intermediate. The presence of sulfur leads to a high energy pathway for cleavage of C-H bonds in a thiophene derived intermediate. Carbon-hydrogen bonds survive on the surface up to temperatures of 650 K. Comparison of this study with work on Mo(100) demonstrates that the reaction of thiophene on molybdenum is relatively insensitive to the surface geometric structure.

  13. Robot Acting on Moving Bodies (RAMBO): Interaction with tumbling objects

    NASA Technical Reports Server (NTRS)

    Davis, Larry S.; Dementhon, Daniel; Bestul, Thor; Ziavras, Sotirios; Srinivasan, H. V.; Siddalingaiah, Madhu; Harwood, David

    1989-01-01

    Interaction with tumbling objects will become more common as human activities in space expand. Attempting to interact with a large complex object translating and rotating in space, a human operator using only his visual and mental capacities may not be able to estimate the object motion, plan actions or control those actions. A robot system (RAMBO) equipped with a camera, which, given a sequence of simple tasks, can perform these tasks on a tumbling object, is being developed. RAMBO is given a complete geometric model of the object. A low level vision module extracts and groups characteristic features in images of the object. The positions of the object are determined in a sequence of images, and a motion estimate of the object is obtained. This motion estimate is used to plan trajectories of the robot tool to relative locations near the object sufficient for achieving the tasks. More specifically, low level vision uses parallel algorithms for image enhancement by symmetric nearest neighbor filtering, edge detection by local gradient operators, and corner extraction by sector filtering. The object pose estimation is a Hough transform method accumulating position hypotheses obtained by matching triples of image features (corners) to triples of model features. To maximize computing speed, the estimate of the position in space of a triple of features is obtained by decomposing its perspective view into a product of rotations and a scaled orthographic projection. This allows use of 2-D lookup tables at each stage of the decomposition. The position hypotheses for each possible match of model feature triples and image feature triples are calculated in parallel. Trajectory planning combines heuristic and dynamic programming techniques. Then trajectories are created using dynamic interpolations between initial and goal trajectories. All the parallel algorithms run on a Connection Machine CM-2 with 16K processors.

  14. Reconfigurable Model Execution in the OpenMDAO Framework

    NASA Technical Reports Server (NTRS)

    Hwang, John T.

    2017-01-01

    NASA's OpenMDAO framework facilitates constructing complex models and computing their derivatives for multidisciplinary design optimization. Decomposing a model into components that follow a prescribed interface enables OpenMDAO to assemble multidisciplinary derivatives from the component derivatives using what amounts to the adjoint method, direct method, chain rule, global sensitivity equations, or any combination thereof, using the MAUD architecture. OpenMDAO also handles the distribution of processors among the disciplines by hierarchically grouping the components, and it automates the data transfer between components that are on different processors. These features have made OpenMDAO useful for applications in aircraft design, satellite design, wind turbine design, and aircraft engine design, among others. This paper presents new algorithms for OpenMDAO that enable reconfigurable model execution. This concept refers to dynamically changing, during execution, one or more of: the variable sizes, solution algorithm, parallel load balancing, or set of variables-i.e., adding and removing components, perhaps to switch to a higher-fidelity sub-model. Any component can reconfigure at any point, even when running in parallel with other components, and the reconfiguration algorithm presented here performs the synchronized updates to all other components that are affected. A reconfigurable software framework for multidisciplinary design optimization enables new adaptive solvers, adaptive parallelization, and new applications such as gradient-based optimization with overset flow solvers and adaptive mesh refinement. Benchmarking results demonstrate the time savings for reconfiguration compared to setting up the model again from scratch, which can be significant in large-scale problems. Additionally, the new reconfigurability feature is applied to a mission profile optimization problem for commercial aircraft where both the parametrization of the mission profile and the time discretization are adaptively refined, resulting in computational savings of roughly 10% and the elimination of oscillations in the optimized altitude profile.

  15. Serial and parallel attentive visual searches: evidence from cumulative distribution functions of response times.

    PubMed

    Sung, Kyongje

    2008-12-01

    Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the results suggested parallel rather than serial processing, even though the tasks produced significant set-size effects. Serial processing was produced only in a condition with a difficult discrimination and a very large set-size effect. The results support C. Bundesen's (1990) claim that an extreme set-size effect leads to serial processing. Implications for parallel models of visual selection are discussed.

  16. Adaptive neuro-heuristic hybrid model for fruit peel defects detection.

    PubMed

    Woźniak, Marcin; Połap, Dawid

    2018-02-01

    Fusion of machine learning methods benefits decision support systems. A composition of approaches makes it possible to combine the most efficient features into one solution. In this article we present an approach to the development of an adaptive method based on the fusion of a proposed novel neural architecture and heuristic search into one co-working solution. We propose a neural network architecture that adapts to the processed input, co-working with a heuristic method used to precisely detect areas of interest. Input images are first decomposed into segments. This makes processing easier, since in smaller images (decomposed segments) the developed Adaptive Artificial Neural Network (AANN) processes less information, which makes numerical calculations more precise. For each segment a descriptor vector is composed and presented to the proposed AANN architecture. Evaluation is run adaptively, with the developed AANN adapting its composed architecture to the inputs and their features. After evaluation, selected segments are forwarded to heuristic search, which detects areas of interest. As a result, the system returns the image with pixels located over peel damages. Experimental results on the developed solution are discussed and compared with other commonly used methods to validate the efficacy of the proposed fusion, within the system structure and training process, and its impact on classification results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Challenges of including nitrogen effects on decomposition in earth system models

    NASA Astrophysics Data System (ADS)

    Hobbie, S. E.

    2011-12-01

    Despite the importance of litter decomposition for ecosystem fertility and carbon balance, key uncertainties remain about how this fundamental process is affected by nitrogen (N) availability. Nevertheless, resolving such uncertainties is critical for mechanistic inclusion of such processes in earth system models, towards predicting the ecosystem consequences of increased anthropogenic reactive N. Towards that end, we have conducted a series of experiments examining nitrogen effects on litter decomposition. We found that both substrate N and externally supplied N (regardless of form) accelerated the initial decomposition rate. Faster initial decomposition rates were linked to the higher activity of carbohydrate-degrading enzymes associated with externally supplied N and the greater relative abundances of Gram negative and Gram positive bacteria associated with green leaves and externally supplied organic N (assessed using phospholipid fatty acid analysis, PLFA). By contrast, later in decomposition, externally supplied N slowed decomposition, increasing the fraction of slowly decomposing litter and reducing lignin-degrading enzyme activity and relative abundances of Gram negative and Gram positive bacteria. Our results suggest that elevated atmospheric N deposition may have contrasting effects on the dynamics of different soil carbon pools, decreasing mean residence times of active fractions comprising very fresh litter, while increasing those of more slowly decomposing fractions including more processed litter. Incorporating these contrasting effects of N on decomposition processes into models is complicated by lingering uncertainties about how these effects generalize across ecosystems and substrates.

  18. Decomposing potassium peroxychromate produces hydroxyl radical (.OH) that can peroxidize the unsaturated fatty acids of phospholipid dispersions.

    PubMed

    Edwards, J C; Quinn, P J

    1982-09-01

    The unsaturated fatty acyl residues of egg yolk lecithin are selectively removed when bilayer dispersions of the lipid are exposed to decomposing peroxychromate at pH 7.6 or pH 9.0. Mannitol (50 mM or 100 mM) partially prevents the oxidation of the phospholipid due to decomposing peroxychromate at pH 7.6, and the amount of lipid lost is inversely proportional to the concentration of mannitol. N,N-Dimethyl-p-nitrosoaniline, mixed with the lipid in a molar ratio of 1.3:1, completely prevents the oxidation of lipid due to decomposing peroxychromate at pH 9.0, but some linoleic acid is lost if the incubation is done at pH 7.6. If the concentration of this quench reagent is reduced tenfold, oxidation of linoleic acid by decomposing peroxychromate at pH 9.0 is observed. Hydrogen peroxide is capable of oxidizing the unsaturated fatty acids of lecithin dispersions. Catalase or boiled catalase (2 mg/ml) protects the lipid from oxidation due to decomposing peroxychromate at pH 7.6 to approximately the same extent, but their protective effect is believed to be due to the non-specific removal of .OH. It is concluded that .OH is the species responsible for the lipid oxidation caused by decomposing peroxychromate. This is consistent with the observed bleaching of N,N-dimethyl-p-nitrosoaniline and the formation of a characteristic paramagnetic .OH adduct of the spin trap, 5,5-dimethylpyrroline-1-oxide.

  19. Performance characterization of water recovery and water quality from chemical/organic waste products

    NASA Technical Reports Server (NTRS)

    Moses, W. M.; Rogers, T. D.; Chowdhury, H.; Cullingford, H. S.

    1989-01-01

    The water reclamation subsystems currently being evaluated for Space Station Freedom are briefly reviewed with emphasis on a waste water management system capable of processing wastes containing high concentrations of organic/inorganic materials. The process combines low temperature/pressure to vaporize water with high temperature catalytic oxidation to decompose volatile organics. The reclaimed water is of potable quality and has high potential for maintenance under sterile conditions. Results from preliminary experiments and modifications in process and equipment required to control reliability and repeatability of system operation are presented.

  20. Development of the silane process for the production of low-cost polysilicon

    NASA Technical Reports Server (NTRS)

    Iya, S. K.

    1986-01-01

    It was recognized that the traditional hot rod type deposition process for decomposing silane is energy intensive, and a different approach for converting silane to silicon was chosen. A 1200 metric tons/year capacity commercial plant was constructed in Moses Lake, Washington. A fluidized bed processor was chosen as the most promising technology and several encouraging test runs were conducted. This technology continues to be very promising in producing low cost polysilicon. The Union Carbide silane process and the research development on the fluidized bed silane decomposition are discussed.

  1. Methods for design and evaluation of parallel computing systems (The PISCES project)

    NASA Technical Reports Server (NTRS)

    Pratt, Terrence W.; Wise, Robert; Haught, Mary JO

    1989-01-01

    The PISCES project started in 1984 under the sponsorship of the NASA Computational Structural Mechanics (CSM) program. A PISCES 1 programming environment and parallel FORTRAN were implemented in 1984 for the DEC VAX (using UNIX processes to simulate parallel processes). This system was used for experimentation with parallel programs for scientific applications and AI (dynamic scene analysis) applications. PISCES 1 was ported to a network of Apollo workstations by N. Fitzgerald.

  2. Parallel computing in genomic research: advances and applications

    PubMed Central

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today’s genomic experiments have to process the so-called “biological big data” that is now reaching the size of Terabytes and Petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analyses of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities. PMID:26604801

  3. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (Inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.

  4. Parallel computing in genomic research: advances and applications.

    PubMed

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today's genomic experiments have to process the so-called "biological big data" that is now reaching the size of Terabytes and Petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analyses of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities.

  5. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Lau, Sonie

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 90's cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for large expert systems. Speed-up via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial labs in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems was surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of parallelism of expert systems. Results to date indicate that the parallelism achieved for these systems is small. In order to obtain greater speed-ups, data parallelism and application parallelism must be exploited.

  6. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach are demonstrated on several parallel computers.

  7. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Byun, Chansup; Kwak, Dochan (Technical Monitor)

    2001-01-01

    A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach are demonstrated on several parallel computers.

  8. Parallel implementation of all-digital timing recovery for high-speed and real-time optical coherent receivers.

    PubMed

    Zhou, Xian; Chen, Xue

    2011-05-09

    Digital coherent receivers combine coherent detection with digital signal processing (DSP) to compensate for transmission impairments, and therefore are a promising candidate for future high-speed optical transmission systems. However, the maximum symbol rate supported by such real-time receivers is limited by the processing rate of the hardware. In order to cope with this difficulty, parallel processing algorithms are imperative. In this paper, we propose a novel parallel digital timing recovery loop (PDTRL) based on our previous work. Furthermore, to increase the dynamic dispersion tolerance range of receivers, we embed a parallel adaptive equalizer in the PDTRL. This parallel joint scheme (PJS) can be used to complete synchronization, equalization and polarization de-multiplexing simultaneously. Finally, we demonstrate that the PDTRL and PJS allow the hardware to process a 112 Gbit/s POLMUX-DQPSK signal at clock rates in the hundreds-of-MHz range. © 2011 Optical Society of America

  9. A domain-decomposed multi-model plasma simulation of collisionless magnetic reconnection

    NASA Astrophysics Data System (ADS)

    Datta, I. A. M.; Shumlak, U.; Ho, A.; Miller, S. T.

    2017-10-01

    Collisionless magnetic reconnection is a process relevant to many areas of plasma physics in which energy stored in magnetic fields within highly conductive plasmas is rapidly converted into kinetic and thermal energy. Both in natural phenomena such as solar flares and terrestrial aurora as well as in magnetic confinement fusion experiments, the reconnection process is observed on timescales much shorter than those predicted by a resistive MHD model. As a result, this topic is an active area of research in which plasma models with varying fidelity have been tested in order to understand the proper physics explaining the reconnection process. In this research, a hybrid multi-model simulation employing the Hall-MHD and two-fluid plasma models on a decomposed domain is used to study this problem. The simulation is set up using the WARPXM code developed at the University of Washington, which uses a discontinuous Galerkin Runge-Kutta finite element algorithm and implements boundary conditions between models in the domain to couple their variable sets. The goal of the current work is to determine the parameter regimes most appropriate for each model to maintain sufficient physical fidelity over the whole domain while minimizing computational expense. This work is supported by a Grant from US AFOSR.

  10. Spatially parallel processing of within-dimension conjunctions.

    PubMed

    Linnell, K J; Humphreys, G W

    2001-01-01

    Within-dimension conjunction search for red-green targets amongst red-blue, and blue-green, nontargets is extremely inefficient (Wolfe et al, 1990 Journal of Experimental Psychology: Human Perception and Performance 16 879-892). We tested whether pairs of red-green conjunction targets can nevertheless be processed spatially in parallel. Participants made speeded detection responses whenever a red-green target was present. Across trials where a second identical target was present, the distribution of detection times was compatible with the assumption that targets were processed in parallel (Miller, 1982 Cognitive Psychology 14 247-279). We show that this was not an artifact of response-competition or feature-based processing. We suggest that within-dimension conjunctions can be processed spatially in parallel. Visual search for such items may be inefficient owing to within-dimension grouping between items.

  11. Hadoop neural network for parallel and distributed feature selection.

    PubMed

    Hodge, Victoria J; O'Keefe, Simon; Austin, Jim

    2016-06-01

    In this paper, we introduce a theoretical basis for a Hadoop-based neural network for parallel and distributed feature selection in Big Data sets. It is underpinned by an associative memory (binary) neural network which is highly amenable to parallel and distributed processing and fits with the Hadoop paradigm. There are many feature selectors described in the literature which all have various strengths and weaknesses. We present the implementation details of five feature selection algorithms constructed using our artificial neural network framework embedded in Hadoop YARN. Hadoop allows parallel and distributed processing. Each feature selector can be divided into subtasks and the subtasks can then be processed in parallel. Multiple feature selectors can also be processed simultaneously (in parallel), allowing multiple feature selectors to be compared. We identify commonalities among the five feature selectors. All can be processed in the framework using a single representation, and the overall processing can also be greatly reduced by only processing the common aspects of the feature selectors once and propagating these aspects across all five feature selectors as necessary. This allows the best feature selector, and the actual features to select, to be identified for large, high-dimensional data sets by exploiting the efficiency and flexibility of embedding the binary associative-memory neural network in Hadoop. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
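
    The abstract describes dividing each feature selector into subtasks and running several selectors at once. A minimal sketch of that pattern, assuming a plain Python process pool and two illustrative scoring functions rather than the paper's five Hadoop/YARN-based selectors:

      from concurrent.futures import ProcessPoolExecutor
      import numpy as np

      # Two illustrative stand-in scorers, not the paper's selectors.
      def variance_score(X, y, j):
          return float(np.var(X[:, j]))

      def correlation_score(X, y, j):
          return float(abs(np.corrcoef(X[:, j], y)[0, 1]))

      SELECTORS = {"variance": variance_score, "correlation": correlation_score}

      def run_selector(args):
          name, X, y = args
          scorer = SELECTORS[name]
          scores = [scorer(X, y, j) for j in range(X.shape[1])]   # per-feature subtasks
          return name, int(np.argmax(scores))                     # index of the best feature

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          X = rng.normal(size=(200, 10))
          y = X[:, 3] + 0.1 * rng.normal(size=200)                # feature 3 is informative
          jobs = [(name, X, y) for name in SELECTORS]
          with ProcessPoolExecutor() as pool:                     # selectors run in parallel
              for name, best in pool.map(run_selector, jobs):
                  print(f"{name}: best feature index {best}")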

  12. Speculations on the nature of cellulose pyrolysis

    Treesearch

    F.J. Kilzer; A. Broido

    1965-01-01

    Consideration of the available data on cellulose pyrolysis suggests that, with relative importance depending upon heating rate in the temperature range 200-400°C, very pure cellulose decomposes by two competitive endothermic processes. It is postulated that an unzipping reaction produces 1,4-anhydro-α-D-glucopyranose, which rearranges to give levoglucosan. The other...

  13. Thermal Decomposition Of Hydroxylamine Nitrate

    NASA Astrophysics Data System (ADS)

    Oxley, Jimmie C.; Brower, Kay R.

    1988-05-01

    used hydroxylamine nitrate decomposes within a few minutes in the temperature range 130-140°C. Added ammonium ion is converted to N2, while hydrazinium ion is converted to HN3. Nitrous acid is an intermediate and its formation is rate-determining. A hydride transfer process is postulated. The reaction pathways have been elucidated by use of N tracers.

  14. Computational Modeling of Morphological Effects in Bangla Visual Word Recognition

    ERIC Educational Resources Information Center

    Dasgupta, Tirthankar; Sinha, Manjira; Basu, Anupam

    2015-01-01

    In this paper we aim to model the organization and processing of Bangla polymorphemic words in the mental lexicon. Our objective is to determine whether the mental lexicon accesses a polymorphemic word as a whole or decomposes the word into its constituent morphemes and then recognizes them accordingly. To address this issue, we adopted two…

  15. Accelerated Fast Spin-Echo Magnetic Resonance Imaging of the Heart Using a Self-Calibrated Split-Echo Approach

    PubMed Central

    Klix, Sabrina; Hezel, Fabian; Fuchs, Katharina; Ruff, Jan; Dieringer, Matthias A.; Niendorf, Thoralf

    2014-01-01

    Purpose Design, validation and application of an accelerated fast spin-echo (FSE) variant that uses a split-echo approach for self-calibrated parallel imaging. Methods For self-calibrated, split-echo FSE (SCSE-FSE), extra displacement gradients were incorporated into FSE to decompose odd and even echo groups which were independently phase encoded to derive coil sensitivity maps, and to generate undersampled data (reduction factor up to R = 3). Reference and undersampled data were acquired simultaneously. SENSE reconstruction was employed. Results The feasibility of SCSE-FSE was demonstrated in phantom studies. Point spread function performance of SCSE-FSE was found to be competitive with traditional FSE variants. The immunity of SCSE-FSE for motion induced mis-registration between reference and undersampled data was shown using a dynamic left ventricular model and cardiac imaging. The applicability of black blood prepared SCSE-FSE for cardiac imaging was demonstrated in healthy volunteers including accelerated multi-slice per breath-hold imaging and accelerated high spatial resolution imaging. Conclusion SCSE-FSE obviates the need of external reference scans for SENSE reconstructed parallel imaging with FSE. SCSE-FSE reduces the risk for mis-registration between reference scans and accelerated acquisitions. SCSE-FSE is feasible for imaging of the heart and of large cardiac vessels but also meets the needs of brain, abdominal and liver imaging. PMID:24728341

  16. Endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface of a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J; Blocksome, Michael A; Cernohous, Bob R

    Methods, apparatuses, and computer program products for endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface (`PAMI`) of a parallel computer are provided. Embodiments include establishing by a parallel application a data communications geometry, the geometry specifying a set of endpoints that are used in collective operations of the PAMI, including associating with the geometry a list of collective algorithms valid for use with the endpoints of the geometry. Embodiments also include registering in each endpoint in the geometry a dispatch callback function for a collective operation and executing without blocking, through a single one of the endpoints in the geometry, an instruction for the collective operation.
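
    PAMI is an IBM-specific messaging layer, so the sketch below only illustrates the analogous idea with MPI-3 non-blocking collectives through mpi4py (an assumption and an analogy, not the PAMI API): a collective is started, local work proceeds, and the operation is completed later without blocking. It would be run under an MPI launcher, e.g. mpirun -n 4 python script.py.

      # Analogy only: MPI-3 non-blocking collectives via mpi4py, not the PAMI API.
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      local = np.array([comm.Get_rank()], dtype='i')
      total = np.zeros(1, dtype='i')

      request = comm.Iallreduce(local, total, op=MPI.SUM)   # start collective, do not block

      busy_work = sum(i * i for i in range(100_000))        # overlap unrelated local work

      request.Wait()                                        # complete the collective
      if comm.Get_rank() == 0:
          print("sum of ranks:", int(total[0]), "| local work:", busy_work)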

  17. Development Program of IS Process Pilot Test Plant for Hydrogen Production With High-Temperature Gas-Cooled Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin Iwatsuki; Atsuhiko Terada; Hiroyuki Noguchi

    2006-07-01

    At the present time, we are alarmed by depletion of fossil energy and effects on the global environment such as acid rain and global warming, because our lives still depend heavily on fossil energy. So, it is universally recognized that hydrogen is one of the best energy media and its demand will increase greatly in the near future. In Japan, the Basic Plan for Energy Supply and Demand based on the Basic Law on Energy Policy Making was decided upon by the Cabinet on 6 October, 2003. In the plan, efforts for hydrogen energy utilization were expressed as follows: hydrogen is a clean energy carrier without carbon dioxide (CO2) emission, and commercialization of hydrogen production systems using nuclear, solar and biomass, not fossil fuels, is desired. However, it is necessary to develop suitable technology to produce hydrogen without CO2 emission from the viewpoint of global environmental protection, since little hydrogen exists naturally. Hydrogen production from water using nuclear energy, especially the high-temperature gas-cooled reactor (HTGR), is one of the most attractive solutions for the environmental issue, because HTGR hydrogen production by water-splitting methods such as the thermochemical iodine-sulfur (IS) process has a high potential to produce hydrogen effectively and economically. The Japan Atomic Energy Agency (JAEA) has been conducting the HTTR (High-Temperature Engineering Test Reactor) project with a view to establishing a technology base for the HTGR and also for the IS process. In the IS process, the raw material, water, is reacted with iodine (I2) and sulfur dioxide (SO2) to produce hydrogen iodide (HI) and sulfuric acid (H2SO4), the so-called Bunsen reaction; these are then decomposed endothermically to produce hydrogen (H2) and oxygen (O2), respectively. Iodine and sulfur dioxide produced in the decomposition reactions can be used again as the reactants in the Bunsen reaction. In JAEA, continuous hydrogen production was demonstrated at a hydrogen production rate of about 30 NL/hr for one week using a bench-scale test apparatus made of glass. Based on the test results and know-how obtained through the bench-scale tests, a pilot test plant that can produce about 30 Nm3/hr of hydrogen is being designed. The test plant will be fabricated with industrial materials such as glass-coated steel, SiC ceramics, etc., and operated under high-pressure conditions up to 2 MPa. The test plant will consist of an IS process plant and a helium gas (He) circulation facility (He loop). The He loop can simulate HTTR operating conditions and consists of a 400 kW electric heater for He heating, a He circulator, and a steam generator working as a He cooler. In parallel to the design study, key components of the IS process such as the sulfuric acid (H2SO4) and sulfur trioxide (SO3) decomposers working under high-temperature corrosive environments have been designed and test-fabricated to confirm their fabricability. Also, other R&D efforts are under way, such as corrosion studies and the processing of HIx solutions. This paper describes the present status of these activities. (authors)
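
    For reference, the three reactions the abstract describes in prose are, in their standard textbook form (not taken from the paper):

      \begin{align}
        \mathrm{SO_2} + \mathrm{I_2} + 2\,\mathrm{H_2O} &\rightarrow \mathrm{H_2SO_4} + 2\,\mathrm{HI}
          && \text{(Bunsen reaction)} \\
        2\,\mathrm{HI} &\rightarrow \mathrm{H_2} + \mathrm{I_2}
          && \text{(hydrogen-producing decomposition)} \\
        \mathrm{H_2SO_4} &\rightarrow \mathrm{H_2O} + \mathrm{SO_2} + \tfrac{1}{2}\,\mathrm{O_2}
          && \text{(oxygen-producing decomposition)}
      \end{align}

    The iodine and sulfur dioxide released by the two decomposition steps are recycled into the Bunsen reaction, so water is the only net feedstock.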

  18. [CMACPAR: a modified parallel neuro-controller for control processes].

    PubMed

    Ramos, E; Surós, R

    1999-01-01

    CMACPAR is a parallel neurocontroller oriented to real-time systems such as process control. Its main characteristics are a fast learning algorithm, a reduced number of calculations, great generalization capacity, local learning, and intrinsic parallelism. This type of neurocontroller is used in real-time applications required by refineries, hydroelectric plants, factories, etc. In this work we present the analysis and the parallel implementation of a modified scheme of the Cerebellar Model CMAC for n-dimensional space projection using a medium-granularity parallel neurocontroller. The proposed memory management allows for a significant reduction in training time and required memory size.

  19. Parallel-Processing Test Bed For Simulation Software

    NASA Technical Reports Server (NTRS)

    Blech, Richard; Cole, Gary; Townsend, Scott

    1996-01-01

    Second-generation Hypercluster computing system is multiprocessor test bed for research on parallel algorithms for simulation in fluid dynamics, electromagnetics, chemistry, and other fields with large computational requirements but relatively low input/output requirements. Built from standard, off-the-shelf hardware readily upgraded as improved technology becomes available. System used for experiments with such parallel-processing concepts as message-passing algorithms, debugging software tools, and computational steering. First-generation Hypercluster system described in "Hypercluster Parallel Processor" (LEW-15283).

  20. Decomposition of S-nitrosocysteine via S- to N-transnitrosation

    PubMed Central

    Peterson, Lisa A.; Wagener, Tanja; Sies, Helmut; Stahl, Wilhelm

    2008-01-01

    S-Nitrosothiols are thought to be important intermediates in nitric oxide signaling pathways. These compounds are unstable, in part, through their ability to donate NO. One model S-nitrosothiol, S-nitrosocysteine is particularly unstable. Recently, it was proposed that this compound decomposed via intra- and intermolecular transfer of the NO group from the sulfur to the nitrogen to form N-nitrosocysteine. This primary nitrosamine is expected to rapidly rearrange to ultimately form a reactive diazonium ion intermediate. To test this hypothesis, we demonstrated that thiirane-2-carboxylic acid is formed during the decomposition of S-nitrosocysteine at neutral pH. Acrylic acid was another product of this reaction. These results indicate that a small but significant amount of S-nitrosocysteine decomposes via S- to N-transnitrosation. The formation of a reactive intermediate in this process indicates the potential for this reaction to contribute to the toxicological properties of nitric oxide. PMID:17439249

  1. Geological controls on soil parent material geochemistry along a northern Manitoba-North Dakota transect

    USGS Publications Warehouse

    Klassen, R.A.

    2009-01-01

    As a pilot study for mapping the geochemistry of North American soils, samples were collected along two continental transects extending east–west from Virginia to California, and north–south from northern Manitoba to the US–Mexican border and subjected to geochemical and mineralogical analyses. For the northern Manitoba–North Dakota segment of the north–south transect, X-ray diffraction analysis and bivariate relations indicate that geochemical properties of soil parent materials may be interpreted in terms of minerals derived from Shield and clastic sedimentary bedrock, and carbonate sedimentary bedrock terranes. The elements Cu, Zn, Ni, Cr and Ti occur primarily in silicate minerals decomposed by aqua regia, likely phyllosilicates, that preferentially concentrate in clay-sized fractions; Cr and Ti also occur in minerals decomposed only by stronger acid. Physical glacial processes affecting the distribution and concentration of carbonate minerals are significant controls on the variation of trace metal background concentrations.

  2. Performance Analysis of Multilevel Parallel Applications on Shared Memory Architectures

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A. (Technical Monitor); Jost, G.; Jin, H.; Labarta J.; Gimenez, J.; Caubet, J.

    2003-01-01

    Parallel programming paradigms include process-level parallelism, thread-level parallelism, and multilevel parallelism. This viewgraph presentation describes a detailed performance analysis of these paradigms for Shared Memory Architecture (SMA). This analysis uses the Paraver Performance Analysis System. The presentation includes diagrams of the flow of useful computations.

  3. Idealised large-eddy-simulation of thermally driven flows over an isolated mountain range with multiple ridges

    NASA Astrophysics Data System (ADS)

    Lang, Moritz N.; Gohm, Alexander; Wagner, Johannes S.; Leukauf, Daniel; Posch, Christian

    2014-05-01

    Two-dimensional idealised large-eddy simulations are performed using the WRF model to investigate thermally driven flows during the daytime over complex terrain. Both the upslope flows and the temporal evolution of the boundary layer structure are studied with a constant surface heat flux forcing of 150 W m-2. In order to distinguish between different heating processes, the flow is Reynolds-decomposed into its mean and turbulent parts. The heating processes associated with the mean flow are a cooling through cold-air advection along the slopes and subsidence warming within the valleys. The turbulent component causes bottom-up heating near the ground, leading to a convective boundary layer (CBL) inside the valleys. Overshooting potentially colder thermals cool the stably stratified valley atmosphere above the CBL. Compared to recent investigations (Schmidli 2013, J. Atmos. Sci., Vol. 70, No. 12: pp. 4041-4066; Wagner et al. 2014, manuscript submitted to Mon. Wea. Rev.), which used an idealised topography with two parallel mountain crests separated by a straight valley, this project focuses on multiple, periodic ridges and valleys within an isolated mountain range. The impact of different numbers of ridges on the flow structure is compared with the sinusoidal envelope topography. The present simulations show an interaction between the smaller-scale upslope winds within the different valleys and the large-scale flow of the superimposed mountain-plain wind circulation. Despite a smaller boundary-layer air volume in the envelope case compared to the multiple-ridges case, the volume-averaged heating rates are comparable. The reason is a stronger advection-induced cooling along the slopes and a weaker warming through subsidence for the envelope topography compared to the mountain range with multiple ridges.
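
    For clarity, the Reynolds decomposition referred to above splits each field into a mean and a fluctuating part; in its standard textbook form (not reproduced from the paper), the mean potential-temperature budget then separates mean-flow advection from turbulent heat-flux divergence:

      \begin{align}
        \phi &= \overline{\phi} + \phi', \qquad \overline{\phi'} = 0, \\
        \frac{\partial \overline{\theta}}{\partial t}
          &= -\,\overline{\mathbf{u}} \cdot \nabla \overline{\theta}
             \;-\; \nabla \cdot \overline{\mathbf{u}'\,\theta'},
      \end{align}

    where the first right-hand term carries the mean-flow contributions (advective cooling along the slopes, subsidence warming in the valleys) and the second carries the turbulent contributions (bottom-up heating near the ground and cooling by overshooting thermals).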

  4. On the Optimality of Serial and Parallel Processing in the Psychological Refractory Period Paradigm: Effects of the Distribution of Stimulus Onset Asynchronies

    ERIC Educational Resources Information Center

    Miller, Jeff; Ulrich, Rolf; Rolke, Bettina

    2009-01-01

    Within the context of the psychological refractory period (PRP) paradigm, we developed a general theoretical framework for deciding when it is more efficient to process two tasks in serial and when it is more efficient to process them in parallel. This analysis suggests that a serial mode is more efficient than a parallel mode under a wide variety…

  5. ADVANCED OXIDATION: OXALATE DECOMPOSITION TESTING WITH OZONE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ketusky, E.; Subramanian, K.

    At the Savannah River Site (SRS), oxalic acid is currently considered the preferred agent for chemically cleaning the large underground Liquid Radioactive Waste Tanks. It is applied only in the final stages of emptying a tank, when generally less than 5,000 kg of waste solids remain and slurry-based removal methods are no longer effective. The use of oxalic acid is preferred because of its combined dissolution and chelating properties, as well as the fact that corrosion of the carbon steel tank walls can be controlled. Although oxalic acid is the preferred agent, there are significant potential downstream impacts. Impacts include: (1) degraded evaporator operation; (2) resultant oxalate precipitates taking away critically needed operating volume; and (3) eventual creation of significant volumes of additional feed to salt processing. As an alternative to dealing with the downstream impacts, oxalate decomposition using variations of the ozone-based Advanced Oxidation Process (AOP) was investigated. In general, AOPs use ozone or peroxide and a catalyst to create hydroxyl radicals. Hydroxyl radicals have among the highest oxidation potentials and are commonly used to decompose organics. Although oxalate is considered among the most difficult organics to decompose, the ability of hydroxyl radicals to decompose oxalate is considered to be well demonstrated. In addition, as AOPs are considered to be 'green', their use enables any net chemical additions to the waste to be minimized. In order to test the ability to decompose the oxalate and determine the decomposition rates, a test rig was designed in which 10 vol% ozone would be educted into a spent oxalic acid decomposition loop, with the loop maintained at 70 °C and recirculated at 40 L/min. Each of the spent oxalic acid streams would be created from three oxalic acid strikes of an F-area simulant (i.e., Purex = high Fe/Al concentration) and an H-area simulant (i.e., H-area modified Purex = high Al/Fe concentration) after nearing dissolution equilibrium, and then decomposed to ≤ 100 parts per million (ppm) oxalate. Since AOP technology largely originated from using ultraviolet (UV) light as a primary catalyst, decomposition of the spent oxalic acid, well exposed to a medium-pressure mercury vapor light, was considered the benchmark. However, with multi-valent metals already contained in the feed and maintenance of the UV light a concern, testing was conducted to evaluate the impact of removing the UV light. Using current AOP terminology, the test without the UV light would likely be considered an ozone-based, dark, ferrioxalate-type decomposition process. Specifically, as part of the testing, the impacts of the following were investigated: (1) the importance of the UV light on the decomposition rates when decomposing 1 wt% spent oxalic acid; (2) the impact of increasing the oxalic acid strength from 1 to 2.5 wt% on the decomposition rates; and (3) for F-area testing, the advantage of increasing the spent oxalic acid flowrate from 40 L/min (liters/minute) to 50 L/min during decomposition of the 2.5 wt% spent oxalic acid. The results showed that removal of the UV light (from 1 wt% testing) slowed the decomposition rates in both the F and H testing. Specifically, for F-Area Strike 1, the time increased from about 6 hours to 8 hours. In H-Area, the impact was not as significant, with the time required for Strike 1 to be decomposed to less than 100 ppm increasing slightly, from 5.4 to 6.4 hours.
For the spent 2.5 wt% oxalic acid decomposition tests (all without the UV light), the F-Area decompositions required approximately 10 to 13 hours, while the corresponding H-Area decomposition times ranged from 10 to 21 hours. For the 2.5 wt% F-Area sludge, the increased availability of iron likely caused the increased decomposition rates compared to the 1 wt% oxalic acid tests. In addition, for the F testing, increasing the recirculation flow rate from 40 liters/minute to 50 liters/minute resulted in an increased decomposition rate, suggesting a better use of ozone.

  6. Gas Sensitivity and Sensing Mechanism Studies on Au-Doped TiO2 Nanotube Arrays for Detecting SF6 Decomposed Components

    PubMed Central

    Zhang, Xiaoxing; Yu, Lei; Tie, Jing; Dong, Xingchen

    2014-01-01

    The analysis of SF6 decomposed component gases is an efficient diagnostic approach for detecting partial discharge in gas-insulated switchgear (GIS) for the purpose of assessing the operating state of power equipment. This paper applied an Au-doped TiO2 nanotube array sensor (Au-TiO2 NTAs) to detect SF6 decomposed components. The electrochemical constant potential method was adopted in the Au-TiO2 NTAs' fabrication, and a series of experiments were conducted to test the characteristic SF6 decomposed gases for a thorough investigation of sensing performances. The sensing characteristic curves of intrinsic and Au-doped TiO2 NTAs were compared to study the mechanism of the gas sensing response. The results indicated that the doped Au could change the TiO2 nanotube arrays' gas-sensing selectivity for SF6 decomposed components, as well as reduce the working temperature of the TiO2 NTAs. PMID:25330053

  7. The role of parallelism in the real-time processing of anaphora.

    PubMed

    Poirier, Josée; Walenski, Matthew; Shapiro, Lewis P

    2012-06-01

    Parallelism effects refer to the facilitated processing of a target structure when it follows a similar, parallel structure. In coordination, a parallelism-related conjunction triggers the expectation that a second conjunct with the same structure as the first conjunct should occur. It has been proposed that parallelism effects reflect the use of the first structure as a template that guides the processing of the second. In this study, we examined the role of parallelism in real-time anaphora resolution by charting activation patterns in coordinated constructions containing anaphora, Verb-Phrase Ellipsis (VPE) and Noun-Phrase Traces (NP-traces). Specifically, we hypothesised that an expectation of parallelism would incite the parser to assume a structure similar to the first conjunct in the second, anaphora-containing conjunct. The speculation of a similar structure would result in early postulation of covert anaphora. Experiment 1 confirms that following a parallelism-related conjunction, first-conjunct material is activated in the second conjunct. Experiment 2 reveals that an NP-trace in the second conjunct is posited immediately where licensed, which is earlier than previously reported in the literature. In light of our findings, we propose an intricate relation between structural expectations and anaphor resolution.

  8. The role of parallelism in the real-time processing of anaphora

    PubMed Central

    Poirier, Josée; Walenski, Matthew; Shapiro, Lewis P.

    2012-01-01

    Parallelism effects refer to the facilitated processing of a target structure when it follows a similar, parallel structure. In coordination, a parallelism-related conjunction triggers the expectation that a second conjunct with the same structure as the first conjunct should occur. It has been proposed that parallelism effects reflect the use of the first structure as a template that guides the processing of the second. In this study, we examined the role of parallelism in real-time anaphora resolution by charting activation patterns in coordinated constructions containing anaphora, Verb-Phrase Ellipsis (VPE) and Noun-Phrase Traces (NP-traces). Specifically, we hypothesised that an expectation of parallelism would incite the parser to assume a structure similar to the first conjunct in the second, anaphora-containing conjunct. The speculation of a similar structure would result in early postulation of covert anaphora. Experiment 1 confirms that following a parallelism-related conjunction, first-conjunct material is activated in the second conjunct. Experiment 2 reveals that an NP-trace in the second conjunct is posited immediately where licensed, which is earlier than previously reported in the literature. In light of our findings, we propose an intricate relation between structural expectations and anaphor resolution. PMID:23741080

  9. An experimental study of postmortem decomposition of methomyl in blood.

    PubMed

    Kawakami, Yuka; Fuke, Chiaki; Fukasawa, Maki; Ninomiya, Kenji; Ihama, Yoko; Miyazaki, Tetsuji

    2017-03-01

    Methomyl (S-methyl-1-N-[(methylcarbamoyl)oxy]thioacetimidate) is a carbamate pesticide. It has been noted that in some cases of methomyl poisoning, methomyl is either not detected or detected only in low concentrations in the blood of the victims. However, in such cases, methomyl is detected at higher concentrations in the vitreous humor than in the blood. This indicates that methomyl in the blood is possibly decomposed after death. However, the reasons for this phenomenon have been unclear. We have previously reported that methomyl is decomposed to dimethyl disulfide (DMDS) in the livers and kidneys of pigs but not in their blood. In addition, in the field of forensic toxicology, it is known that some compounds are decomposed or produced by internal bacteria in biological samples after death. This indicates that there is a possibility that methomyl in blood may be decomposed by bacteria after death. The aim of this study was therefore to investigate whether methomyl in blood is decomposed by bacteria isolated from human stool. Our findings demonstrated that methomyl was decomposed in human stool homogenates, resulting in the generation of DMDS. In addition, it was observed that three bacterial species isolated from the stool homogenates, Bacillus cereus, Pseudomonas aeruginosa, and Bacillus sp., showed methomyl-decomposing activity. The results therefore indicated that one reason for the difficulty in detecting methomyl in postmortem blood from methomyl-poisoning victims is the decomposition of methomyl by internal bacteria such as B. cereus, P. aeruginosa, and Bacillus sp. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Algorithms and programming tools for image processing on the MPP

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.

    1985-01-01

    Topics addressed include: data mapping and rotational algorithms for the Massively Parallel Processor (MPP); Parallel Pascal language; documentation for the Parallel Pascal Development system; and a description of the Parallel Pascal language used on the MPP.

  11. Parallelization strategies for continuum-generalized method of moments on the multi-thread systems

    NASA Astrophysics Data System (ADS)

    Bustamam, A.; Handhika, T.; Ernastuti, Kerami, D.

    2017-07-01

    The Continuum-Generalized Method of Moments (C-GMM) addresses the shortfall of the Generalized Method of Moments (GMM), which is not as efficient as the Maximum Likelihood estimator, by using a continuum of moment conditions in a GMM framework. However, this computation takes a very long time because the regularization parameter must be optimized. Unfortunately, these calculations are processed sequentially, even though all modern computers are now supported by hierarchical memory systems and hyperthreading technology, which allow for parallel computing. This paper aims to speed up the C-GMM calculation by designing a parallel algorithm for C-GMM on multi-thread systems. First, parallel regions are detected in the original C-GMM algorithm. Two parallel regions in the original algorithm contribute significantly to the reduction of computational time: the outer loop and the inner loop. Furthermore, this parallel algorithm is implemented with a standard shared-memory application programming interface, i.e., Open Multi-Processing (OpenMP). The experiment shows that outer-loop parallelization is the best strategy for any number of observations.
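
    A minimal sketch of the outer-loop strategy described above, assuming a generic placeholder objective in place of the C-GMM criterion and a Python process pool in place of the paper's OpenMP threads:

      from concurrent.futures import ProcessPoolExecutor
      import numpy as np

      # Placeholder objective, not the C-GMM criterion: each candidate
      # regularization parameter is evaluated independently (outer loop),
      # while the loop over the data stays serial (inner loop).
      def evaluate(alpha, data):
          inner = 0.0
          for x in data:                        # serial inner loop
              inner += (x - alpha) ** 2
          return alpha, inner / len(data)

      if __name__ == "__main__":
          data = np.random.default_rng(2).normal(size=10_000)
          alphas = np.linspace(-1.0, 1.0, 32)   # outer loop, spread over workers
          with ProcessPoolExecutor() as pool:
              results = list(pool.map(evaluate, alphas, [data] * len(alphas)))
          best_alpha, best_value = min(results, key=lambda r: r[1])
          print(f"best alpha = {best_alpha:.3f}, objective = {best_value:.4f}")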

  12. Multirate-based fast parallel algorithms for 2-D DHT-based real-valued discrete Gabor transform.

    PubMed

    Tao, Liang; Kwan, Hon Keung

    2012-07-01

    Novel algorithms for the multirate and fast parallel implementation of the 2-D discrete Hartley transform (DHT)-based real-valued discrete Gabor transform (RDGT) and its inverse transform are presented in this paper. A 2-D multirate-based analysis convolver bank is designed for the 2-D RDGT, and a 2-D multirate-based synthesis convolver bank is designed for the 2-D inverse RDGT. The parallel channels in each of the two convolver banks have a unified structure and can apply the 2-D fast DHT algorithm to speed up their computations. The computational complexity of each parallel channel is low and is independent of the Gabor oversampling rate. All the 2-D RDGT coefficients of an image are computed in parallel during the analysis process and can be reconstructed in parallel during the synthesis process. The computational complexity and time of the proposed parallel algorithms are analyzed and compared with those of the existing fastest algorithms for 2-D discrete Gabor transforms. The results indicate that the proposed algorithms are the fastest, which make them attractive for real-time image processing.

  13. Parallel adaptive wavelet collocation method for PDEs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nejadmalayeri, Alireza, E-mail: Alireza.Nejadmalayeri@gmail.com; Vezolainen, Alexei, E-mail: Alexei.Vezolainen@Colorado.edu; Brown-Dymkoski, Eric, E-mail: Eric.Browndymkoski@Colorado.edu

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048^3 using as many as 2048 CPU cores.
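
    The dynamic load-balancing step reassigns trees so that every process holds roughly the same number of grid points. A minimal sketch of that goal, assuming a simple greedy longest-processing-time heuristic rather than the paper's repartitioning algorithm:

      from heapq import heappush, heappop

      def balance(tree_sizes, n_procs):
          """Greedily assign each tree (largest first) to the lightest-loaded process."""
          heap = [(0, p) for p in range(n_procs)]          # (grid points held, process id)
          assignment = {}
          for tree_id, size in sorted(enumerate(tree_sizes), key=lambda t: -t[1]):
              load, proc = heappop(heap)
              assignment[tree_id] = proc
              heappush(heap, (load + size, proc))
          return assignment

      sizes = [500, 120, 980, 300, 450, 760, 90, 610]      # grid points per tree
      plan = balance(sizes, n_procs=3)
      loads = {p: 0 for p in range(3)}
      for tree, proc in plan.items():
          loads[proc] += sizes[tree]
      print(plan)
      print("grid points per process:", loads)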

  14. The Design, Development and Testing of Complex Avionics Systems: Conference Proceedings Held at the Avionics Panel Symposium in Las Vegas, Nevada on 27 April-1 May 1987

    DTIC Science & Technology

    1987-12-01

    Normally, the system is decomposed into manageable parts with accurately defined interfaces. By rigidly controlling this process, aerospace companies have...

  15. Program Helps Decompose Complicated Design Problems

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.

    1993-01-01

    Time saved by intelligent decomposition into smaller, interrelated problems. DeMAID is knowledge-based software system for ordering sequence of modules and identifying possible multilevel structure for design problem. Displays modules in N x N matrix format. Requires investment of time to generate and refine list of modules for input, but saves considerable amount of money and time in total design process, particularly for new design problems in which ordering of modules has not been defined. Program also implemented to examine assembly-line process or ordering of tasks and milestones.
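
    A minimal sketch of the kind of module ordering such a tool performs, assuming an acyclic N x N dependency matrix and a plain topological sort; DeMAID's actual knowledge-based ordering and its handling of feedback loops are not reproduced here.

      from collections import deque

      # dep[i][j] == 1 means module i needs output from module j (acyclic by assumption).
      dep = [
          [0, 1, 0, 0],   # module 0 depends on module 1
          [0, 0, 0, 0],   # module 1 has no prerequisites
          [1, 1, 0, 0],   # module 2 depends on modules 0 and 1
          [0, 0, 1, 0],   # module 3 depends on module 2
      ]

      n = len(dep)
      indegree = [sum(dep[i]) for i in range(n)]
      ready = deque(i for i in range(n) if indegree[i] == 0)
      order = []
      while ready:
          j = ready.popleft()
          order.append(j)
          for i in range(n):                 # release every module that was waiting on j
              if dep[i][j]:
                  indegree[i] -= 1
                  if indegree[i] == 0:
                      ready.append(i)
      print("execution order:", order)       # [1, 0, 2, 3]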

  16. Northeast Artificial Intelligence Consortium (NAIC). Volume 15. Strategies for Coupling Symbolic and Numerical Computation in Knowledge Base Systems

    DTIC Science & Technology

    1990-12-01

    ...occurs during specific phases of the problem-solving process. By decomposing the coupling process into its component layers we effectively study the nature... by the qualitative model, the appropriate mathematical model is invoked. 5) The results are verified. If successful, stop; else go to (2) and use an...

  17. Method for increasing steam decomposition in a coal gasification process

    DOEpatents

    Wilson, Marvin W.

    1988-01-01

    The gasification of coal in the presence of steam and oxygen is significantly enhanced by introducing a thermochemical water-splitting agent, such as sulfuric acid, into the gasifier for decomposing the steam to provide additional oxygen and hydrogen usable in the gasification process for the combustion of the coal and enrichment of the gaseous gasification products. The addition of the water-splitting agent into the gasifier also allows for the operation of the reactor at a lower temperature.

  18. Method for increasing steam decomposition in a coal gasification process

    DOEpatents

    Wilson, M.W.

    1987-03-23

    The gasification of coal in the presence of steam and oxygen is significantly enhanced by introducing a thermochemical water-splitting agent, such as sulfuric acid, into the gasifier for decomposing the steam to provide additional oxygen and hydrogen usable in the gasification process for the combustion of the coal and enrichment of the gaseous gasification products. The addition of the water-splitting agent into the gasifier also allows for the operation of the reactor at a lower temperature.

  19. Adapting high-level language programs for parallel processing using data flow

    NASA Technical Reports Server (NTRS)

    Standley, Hilda M.

    1988-01-01

    EASY-FLOW, a very high-level data flow language, is introduced for the purpose of adapting programs written in a conventional high-level language to a parallel environment. The level of parallelism provided is of the large-grained variety in which parallel activities take place between subprograms or processes. A program written in EASY-FLOW is a set of subprogram calls as units, structured by iteration, branching, and distribution constructs. A data flow graph may be deduced from an EASY-FLOW program.
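
    A minimal sketch of the large-grained data-flow idea in Python (the subprograms and their dependencies below are made up for illustration and are not EASY-FLOW syntax): subprogram calls run as soon as their inputs are available, and independent calls run concurrently.

      from concurrent.futures import ThreadPoolExecutor

      # Made-up subprograms; only the scheduling idea is illustrated.
      def load_a():
          return 3

      def load_b():
          return 4

      def combine(a, b):
          return a * b

      def report(c):
          return f"product = {c}"

      with ThreadPoolExecutor() as pool:
          fa = pool.submit(load_a)                 # no inputs: load_a and load_b
          fb = pool.submit(load_b)                 # start concurrently
          c = combine(fa.result(), fb.result())    # fires once both inputs are ready
          print(report(c))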

  20. Overview of reductants utilized in nuclear fuel reprocessing/recycling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patricia Paviet-Hartmann; Catherine Riddle; Keri Campbell

    2013-10-01

    Most of the aqueous processes developed, or under consideration worldwide, for the recycling of used nuclear fuel (UNF) utilize the oxidation-reduction properties of actinides to separate them from other radionuclides. Generally, after acid dissolution of the UNF (essentially in nitric acid solution), actinides are separated from the raffinate by liquid-liquid extraction using specific solvents, associated along the process with a particular reductant that allows the separation to occur. For example, the industrial PUREX process utilizes hydroxylamine as a plutonium reductant. Hydroxylamine has numerous advantages: not only does it have the proper attributes to reduce Pu(IV) to Pu(III), but it is also a non-metallic chemical that is readily decomposed to innocuous products by heating. However, it has been observed that the presence of high nitric acid concentrations or impurities (such as metal ions) in hydroxylamine solutions increases the likelihood of the initiation of an autocatalytic reaction. Recently there has been some interest in the application of simple hydrophilic hydroxamic ligands such as acetohydroxamic acid (AHA) for the stripping of tetravalent actinides in the UREX process flowsheet. This approach is based on the high coordinating ability of hydroxamic acids with tetravalent actinides (Np and Pu) compared with hexavalent uranium. Thus, the use of AHA offers a route for controlling neptunium and plutonium in the UREX process by complexant-based stripping of Np(IV) and Pu(IV) from the TBP solvent phase, while U(VI) ions are not affected by AHA and remain solvated in the TBP phase. In the European GANEX process, AHA is also used to form hydrophilic complexes with actinides and strip them from the organic phase into nitric acid. However, AHA does not decompose completely when treated with nitric acid and hampers nitric acid recycling. In lieu of using AHA in the UREX+ process, formohydroxamic acid (FHA), although not commercially available, holds promise as a replacement for AHA. FHA undergoes hydrolysis to formic acid, which is volatile, thus allowing the recycling of nitric acid. Unfortunately, FHA powder was not stable in the experiments we ran in our laboratory. In addition, AHA and FHA also decompose to hydroxylamine, which may undergo an autocatalytic reaction. Other reductants are available and could be extremely useful for actinide separation. This review presents the plutonium reductants currently used in used nuclear fuel reprocessing and introduces innovative and novel reductants that could serve in future research on UNF separation.

  1. Hypergraph partitioning implementation for parallelizing matrix-vector multiplication using CUDA GPU-based parallel computing

    NASA Astrophysics Data System (ADS)

    Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.

    2017-07-01

    Calculation of matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size. Therefore, parallelization is needed to speed up a calculation process that usually takes a long time. The graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication for matrices of arbitrary size, because graph partitioning assumes a square and symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of graph partitioning. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model that was created by NVIDIA and implemented on the GPU (graphics processing unit).
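
    A minimal sketch of the partitioned matrix-vector product, assuming a simple 1-D row split across CPU processes; the paper's approach instead chooses the partition with hypergraph techniques and executes the kernels on the GPU through CUDA.

      from concurrent.futures import ProcessPoolExecutor
      import numpy as np

      def block_matvec(args):
          """Each worker multiplies its own row block of A with the shared vector x."""
          A_block, x = args
          return A_block @ x

      if __name__ == "__main__":
          rng = np.random.default_rng(3)
          A = rng.normal(size=(8, 6))
          x = rng.normal(size=6)
          blocks = np.array_split(A, 4, axis=0)              # one row block per worker
          with ProcessPoolExecutor(max_workers=4) as pool:
              parts = list(pool.map(block_matvec, [(b, x) for b in blocks]))
          y = np.concatenate(parts)
          print(np.allclose(y, A @ x))                       # True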

  2. Applying Parallel Processing Techniques to Tether Dynamics Simulation

    NASA Technical Reports Server (NTRS)

    Wells, B. Earl

    1996-01-01

    The focus of this research has been to determine the effectiveness of applying parallel processing techniques to a sizable real-world problem, the simulation of the dynamics associated with a tether which connects two objects in low earth orbit, and to explore the degree to which the parallelization process can be automated through the creation of new software tools. The goal has been to utilize this specific application problem as a base to develop more generally applicable techniques.

  3. Root structure-function relationships in 74 species: evidence of a root economics spectrum related to carbon economy.

    PubMed

    Roumet, Catherine; Birouste, Marine; Picon-Cochard, Catherine; Ghestem, Murielle; Osman, Normaniza; Vrignon-Brenas, Sylvain; Cao, Kun-Fang; Stokes, Alexia

    2016-05-01

    Although fine roots are important components of the global carbon cycle, there is limited understanding of root structure-function relationships among species. We determined whether root respiration rate and decomposability, two key processes driving carbon cycling but always studied separately, varied with root morphological and chemical traits, in a coordinated way that would demonstrate the existence of a root economics spectrum (RES). Twelve traits were measured on fine roots (diameter ≤ 2 mm) of 74 species (31 graminoids and 43 herbaceous and dwarf shrub eudicots) collected in three biomes. The findings of this study support the existence of a RES representing an axis of trait variation in which root respiration was positively correlated to nitrogen concentration and specific root length and negatively correlated to the root dry matter content, lignin : nitrogen ratio and the remaining mass after decomposition. This pattern of traits was highly consistent within graminoids but less consistent within eudicots, as a result of an uncoupling between decomposability and morphology, and of heterogeneity of individual roots of eudicots within the fine-root pool. The positive relationship found between root respiration and decomposability is essential for a better understanding of vegetation-soil feedbacks and for improving terrestrial biosphere models predicting the consequences of plant community changes for carbon cycling. © 2016 CNRS. New Phytologist © 2016 New Phytologist Trust.

  4. Complete Decomposition of Li2CO3 in Li–O2 Batteries Using Ir/B4C as Noncarbon-Based Oxygen Electrode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Shidong; Xu, Wu; Zheng, Jianming

    Incomplete decomposition of Li2CO3 during the charge process is a critical barrier for rechargeable Li-O2 batteries. Here we report complete decomposition of Li2CO3 in Li-O2 batteries using an ultrafine iridium-decorated boron carbide (Ir/B4C) nanocomposite as the oxygen electrode. A systematic investigation of charging the Li2CO3-preloaded Ir/B4C electrode in an ether-based electrolyte demonstrates that the Ir/B4C electrode can decompose Li2CO3 with an efficiency close to 100% at below 4.37 V. In contrast, bare B4C without the Ir electrocatalyst can only decompose 4.7% of the preloaded Li2CO3. The reaction mechanism of Li2CO3 decomposition in the presence of the Ir/B4C electrocatalyst has been further investigated. A Li-O2 battery using Ir/B4C as the oxygen electrode material shows much better cycling stability than one using a bare B4C oxygen electrode. These results clearly demonstrate that Ir/B4C is an effective oxygen electrode material to completely decompose Li2CO3 at relatively low charge voltages and is of significant importance in improving the cycle performance of aprotic Li-O2 batteries.

  5. Photodecomposition of volatile organic compounds using TiO2 nanoparticles.

    PubMed

    Jwo, Ching-Song; Chang, Ho; Kao, Mu-Jnug; Lin, Chi-Hsiang

    2007-06-01

    This study examined the photodecomposition of volatile organic compounds (VOCs) using TiO2 catalyst fabricated by the Submerged Arc Nanoparticle Synthesis System (SANSS). TiO2 catalyst was employed to decompose volatile organic compounds and compare with Degussa-P25 TiO2 in terms of decomposition efficiency. In the electric discharge manufacturing process, a Ti bar, applied as the electrode, was melted and vaporized under high temperature. The vaporized Ti powders were then rapidly quenched under low-temperature and low-pressure conditions in deionized water, thus nucleating and forming nanocrystalline powders uniformly dispersed in the base solvent. The average diameter of the TiO2 nanoparticles was 20 nm. X-ray diffraction analysis confirmed that the nanoparticles in the deionized water were Anatase type TiO2. It was found that gaseous toluene exposed to UV irradiation produced intermediates that were even harder to decompose. After 60-min photocomposition, Degussa-P25 TiO2 reduced the concentration of gaseous toluene to 8.18% while the concentration after decomposition by SANSS TiO2 catalyst dropped to 0.35%. Under UV irradiation at 253.7 +/- 184.9 nm, TiO2 prepared by SANSS can produce strong chemical debonding energy, thus showing great efficiency, superior to that of Degussa-P25 TiO2, in decomposing gaseous toluene and its intermediates.

  6. Parallel and serial grouping of image elements in visual perception.

    PubMed

    Houtkamp, Roos; Roelfsema, Pieter R

    2010-12-01

    The visual system groups image elements that belong to an object and segregates them from other objects and the background. Important cues for this grouping process are the Gestalt criteria, and most theories propose that these are applied in parallel across the visual scene. Here, we find that Gestalt grouping can indeed occur in parallel in some situations, but we demonstrate that there are also situations where Gestalt grouping becomes serial. We observe substantial time delays when image elements have to be grouped indirectly through a chain of local groupings. We call this chaining process incremental grouping and demonstrate that it can occur for only a single object at a time. We suggest that incremental grouping requires the gradual spread of object-based attention so that eventually all the object's parts become grouped explicitly by an attentional labeling process. Our findings inspire a new incremental grouping theory that relates the parallel, local grouping process to feedforward processing and the serial, incremental grouping process to recurrent processing in the visual cortex.

  7. Aging and feature search: the effect of search area.

    PubMed

    Burton-Danner, K; Owsley, C; Jackson, G R

    2001-01-01

    The preattentive system involves the rapid parallel processing of visual information in the visual scene so that attention can be directed to meaningful objects and locations in the environment. This study used the feature search methodology to examine whether there are aging-related deficits in parallel-processing capabilities when older adults are required to visually search a large area of the visual field. Like young subjects, older subjects displayed flat, near-zero slopes for the Reaction Time x Set Size function when searching over a broad area (30 degrees radius) of the visual field, implying parallel processing of the visual display. These same older subjects exhibited impairment in another task, also dependent on parallel processing, performed over the same broad field area; this task, called the useful field of view test, has more complex task demands. Results imply that aging-related breakdowns of parallel processing over a large visual field area are not likely to emerge when required responses are simple, there is only one task to perform, and there is no limitation on visual inspection time.

  8. Evolution of water repellency of organic growing media used in Horticulture and consequences on hysteretic behaviours of the water retention curve

    NASA Astrophysics Data System (ADS)

    Michel, Jean-Charles; Qi, Guifang; Charpentier, Sylvain; Boivin, Pascal

    2010-05-01

    Most growing media used in horticulture (particularly peat substrates) show hysteresis phenomena during desiccation and rehydration cycles, which greatly affects their hydraulic properties. The origins of these properties have often been related to one or several specific mechanisms, such as the non-geometrical uniformity of the pores (the so-called 'ink bottle' effect), the presence of trapped air, shrinkage-swelling phenomena, and changes in water repellency. However, recent results showed that changes in wettability during desiccation and rehydration could be considered one of the main factors leading to hysteretic behaviour in these materials with high organic matter contents (Naasz et al., 2008). The general objective was to estimate the influence of changes in water repellency on the water retention properties and associated hysteresis phenomena, in relation to the intensity and the number of drying/wetting cycles. For this, simultaneous shrinkage/swelling and water retention curves were obtained using a method previously developed for soil shrinkage analysis by Boivin (2006), which we adapted to growing media and to their physical behaviour during rewetting. The experiment was performed in a climatic chamber at 20°C. A cylinder containing the growing medium under test was placed on a porous ceramic disk used to control the pressure and to fill or drain water from the sample. The whole device was then placed on a balance to record water loss/storage over time, while linear displacement transducers were used to measure the changes in sample height and diameter upon drying and wetting in the axial and radial directions. Ceramic cups (2 cm long and 0.21 cm diameter) connected to pressure transducers were inserted in the middle of the samples to record the water pressure head. In parallel, contact angles were measured by the direct droplet method at different steps during the drying/rewetting cycles. First results obtained on weakly decomposed peat samples with or without surfactants showed isotropic shrinkage and swelling, and highlighted hysteresis phenomena in relation to the intensity of the drying/wetting cycle. Contact angle measurements are in progress. Other measurements on highly decomposed peat (more repellent than weakly decomposed peat), composted pine bark (without volume change during drying/wetting cycles), and coco fiber (expected to be a non-repellent organic growing medium) are also in progress.

  9. Into the decomposed body-forensic digital autopsy using multislice-computed tomography.

    PubMed

    Thali, M J; Yen, K; Schweitzer, W; Vock, P; Ozdoba, C; Dirnhofer, R

    2003-07-08

    It is impossible to obtain a representative anatomical documentation of an entire body using classical X-ray methods, since they project three-dimensional bodies onto a two-dimensional plane. We used the novel multislice-computed tomography (MSCT) technique in order to evaluate a case of homicide with putrefaction of the corpse before performing a classical forensic autopsy. This non-invasive method showed gaseous distension of the decomposing organs and tissues in detail as well as a complex fracture of the calvarium. MSCT also proved useful in screening for foreign matter in decomposing bodies, and full-body scanning took only a few minutes. In conclusion, we believe postmortem MSCT imaging is an excellent visualisation tool with great potential for forensic documentation and evaluation of decomposed bodies.

  10. Interdisciplinary Research and Phenomenology as Parallel Processes of Consciousness

    ERIC Educational Resources Information Center

    Arvidson, P. Sven

    2016-01-01

    There are significant parallels between interdisciplinarity and phenomenology. Interdisciplinary conscious processes involve identifying relevant disciplines, evaluating each disciplinary insight, and creating common ground. In an analogous way, phenomenology involves conscious processes of epoché, reduction, and eidetic variation. Each stresses…

  11. Processing data communications events by awakening threads in parallel active messaging interface of a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.

    Processing data communications events in a parallel active messaging interface (`PAMI`) of a parallel computer that includes compute nodes that execute a parallel application, with the PAMI including data communications endpoints, and the endpoints are coupled for data communications through the PAMI and through other data communications resources, including determining by an advance function that there are no actionable data communications events pending for its context, placing by the advance function its thread of execution into a wait state, waiting for a subsequent data communications event for the context; responsive to occurrence of a subsequent data communications event for the context, awakening by the thread from the wait state; and processing by the advance function the subsequent data communications event now pending for the context.

  12. A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Rao, Hariprasad Nannapaneni

    1989-01-01

    The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors, in parallel for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.

  13. Parallel computation with the force

    NASA Technical Reports Server (NTRS)

    Jordan, H. F.

    1985-01-01

    A methodology, called the force, supports the construction of programs to be executed in parallel by a force of processes. The number of processes in the force is unspecified, but potentially very large. The force idea is embodied in a set of macros which produce multiprocessor FORTRAN code and has been studied on two shared memory multiprocessors of fairly different character. The method has simplified the writing of highly parallel programs within a limited class of parallel algorithms and is being extended to cover a broader class. The individual parallel constructs which comprise the force methodology are discussed. Of central concern are their semantics, implementation on different architectures and performance implications.
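
    Because a force program is written for an unspecified number of worker processes, the same idea can be pictured in any SPMD setting. The Python fragment below is a minimal sketch, not the original FORTRAN macros: every worker executes the same routine and derives its share of the work only from its rank and the total process count, so that count can be chosen at launch time. All names here are illustrative.

        # Minimal SPMD-style sketch: every process runs the same routine and the
        # work split depends only on (rank, nprocs), so the number of processes
        # is a launch-time choice.  Names (force_member, nprocs) are illustrative.
        import multiprocessing as mp

        def force_member(rank, nprocs, data, results):
            # Each member handles a strided slice of the shared problem.
            results[rank] = sum(data[rank::nprocs])

        if __name__ == "__main__":
            nprocs = 4                       # unspecified by the program logic itself
            data = list(range(1_000))
            results = mp.Array("d", nprocs)  # shared result slots, one per member
            workers = [mp.Process(target=force_member, args=(r, nprocs, data, results))
                       for r in range(nprocs)]
            for w in workers:
                w.start()
            for w in workers:
                w.join()
            print("total =", sum(results[:]))   # combine the partial sums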

  14. Computer-Aided Parallelizer and Optimizer

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  15. The Components of Working Memory Updating: An Experimental Decomposition and Individual Differences

    ERIC Educational Resources Information Center

    Ecker, Ullrich K. H.; Lewandowsky, Stephan; Oberauer, Klaus; Chee, Abby E. H.

    2010-01-01

    Working memory updating (WMU) has been identified as a cognitive function of prime importance for everyday tasks and has also been found to be a significant predictor of higher mental abilities. Yet, little is known about the constituent processes of WMU. We suggest that operations required in a typical WMU task can be decomposed into 3 major…

  16. Acid sorption regeneration process using carbon dioxide

    DOEpatents

    King, C. Judson; Husson, Scott M.

    2001-01-01

    Carboxylic acids are sorbed from aqueous feedstocks onto a solid adsorbent in the presence of carbon dioxide under pressure. The acids are freed from the sorbent phase by a suitable regeneration method, one of which is treating them with an organic alkylamine solution thus forming an alkylamine-carboxylic acid complex which thermally decomposes to the desired carboxylic acid and the alkylamine.

  17. A Comparison of Reinforcement Learning Models for the Iowa Gambling Task Using Parameter Space Partitioning

    ERIC Educational Resources Information Center

    Steingroever, Helen; Wetzels, Ruud; Wagenmakers, Eric-Jan

    2013-01-01

    The Iowa gambling task (IGT) is one of the most popular tasks used to study decision-making deficits in clinical populations. In order to decompose performance on the IGT in its constituent psychological processes, several cognitive models have been proposed (e.g., the Expectancy Valence (EV) and Prospect Valence Learning (PVL) models). Here we…

  18. Ferroic Materials: Design, Preparation and Characteristics. Ceramic Transactions. Volume 43. Proceedings of International Symposium Held in Honolulu, Hawaii on November 7-10, 1993.

    DTIC Science & Technology

    1993-11-10

    realized. Metal carboxylates are often used as precursors for ceramic oxides since they tend to be air-stable, soluble in organic solvents, and decompose...metalorganic precursors [9]. These include routes based solely on metal alkoxides [9, 10] or metal carboxylates (e.g. the Pechini (or citrate) process

  19. Acquisition of HPLC-Mass Spectrometer

    DTIC Science & Technology

    2015-08-18

    phenyl alanine. This dithiol is coordinated to the iron and all attempts to decompose the ionic coordination complex 56 to recover strictly the...sulfonation process of an asymmetric deprotonation providing a lithium complex with sparteine. This reaction scheme will also direct stereochemistry of...currently used in ointments for treatment of pain and inflammation. Capsaicin shows promise as an effective anti-cancer nutritional agent and

  20. Molybdenum enhanced low-temperature deposition of crystalline silicon nitride

    DOEpatents

    Lowden, Richard A.

    1994-01-01

    A process for chemical vapor deposition of crystalline silicon nitride which comprises the steps of: introducing a mixture of a silicon source, a molybdenum source, a nitrogen source, and a hydrogen source into a vessel containing a suitable substrate; and thermally decomposing the mixture to deposit onto the substrate a coating comprising crystalline silicon nitride containing a dispersion of molybdenum silicide.

  1. In situ catalytic hydrogenation of model compounds and biomass-derived phenolic compounds for bio-oil upgrading

    Treesearch

    Junfeng Feng; Zhongzhi Yang; Chung-yun Hse; Qiuli Su; Kui Wang; Jianchun Jiang; Junming Xu

    2017-01-01

    The renewable phenolic compounds produced by directional liquefaction of biomass are a mixture of complete fragments decomposed from native lignin. These compounds are unstable and difficult to use directly as biofuel. Here, we report an efficient in situ catalytic hydrogenation method that can convert phenolic compounds into saturated cyclohexanes. The process has...

  2. Comprehensive evaluation of liver resection procedures: surgical mind development through cognitive task analysis.

    PubMed

    Ho, Cheng-Maw; Wakabayashi, Go; Yeh, Chi-Chuan; Hu, Rey-Heng; Sakaguchi, Takanori; Hasegawa, Yasushi; Takahara, Takeshi; Nitta, Hiroyuki; Sasaki, Akira; Lee, Po-Huang

    2018-01-01

    Liver resection is a complex procedure for trainee surgeons. Cognitive task analysis (CTA) facilitates understanding and decomposing tasks that require a great proportion of mental activity from experts. Using CTA and video-based coaching to compare liver resection by open and laparoscopic approaches, we decomposed the task of liver resection into exposure (visual field building), adequate tension at the working plane (which may change three-dimensionally during the resection process), and target processing (intervention strategy), bridging the gap from basic surgical principles. The key steps of highly specialized techniques, including hanging maneuvers and looping of extra-hepatic hepatic veins, were shown on video for both open and laparoscopic approaches. Familiarization with laparoscopic anatomical orientation may help surgeons already skilled at open liver resection transition smoothly to laparoscopic liver resection. Facilities at hand (such as patient tolerability, advanced instruments, and trained teams of personnel) can influence surgical decision making. Applying this rationale and recognizing the interplay between surgical principles and other paramedical factors may help surgeons in training to understand the mental abstractions of experienced surgeons, to choose the most appropriate surgical strategy effectively at will, and to minimize the gap.

  3. Effect of surface oxide films on the properties of pulse electric-current sintered metal powders

    NASA Astrophysics Data System (ADS)

    Xie, Guoqiang; Ohashi, Osamu; Yamaguchi, Norio; Wang, Airu

    2003-11-01

    Metallic powders bearing surface oxide films of differing thermodynamic stability (Ag, Cu, and Al powders) were sintered using a pulse electric-current sintering (PECS) process. The behavior of the oxide films at the powder surfaces and their effect on the sintering properties were investigated. The results showed that the sintering properties of metallic powders in the PECS process are governed by the thermodynamic stability of the oxide films at the particle surfaces. The oxide films at Ag powder surfaces decompose during sintering, so the contact regions between particles form direct metal/metal bonds. The oxide films at Cu powder surfaces are mainly broken by the loading pressure at low sintering temperature; at high sintering temperature they are mainly dissolved in the parent metal, and the contact regions again turn into direct metal/metal bonds. Excellent sintering properties can therefore be achieved. The oxide films at Al powder surfaces are very stable and cannot be decomposed or dissolved, only broken by plastic deformation of the particles under the loading pressure at the experimental temperatures. The interface between particles is then only partially bonded by direct metal/metal bonding, making it difficult to achieve good sintered properties.

  4. Fungal colonization and decomposition of leaves and stems of Salix arctica on deglaciated moraines in high-Arctic Canada

    NASA Astrophysics Data System (ADS)

    Osono, Takashi; Matsuoka, Shunsuke; Hirose, Dai; Uchida, Masaki; Kanda, Hiroshi

    2014-06-01

    Fungal colonization, succession, and decomposition of leaves and stems of Salix arctica were studied to estimate the roles of fungi in the decomposition processes in the high Arctic. The samples were collected from five moraines with different periods of development since deglaciation to investigate the effects of ecosystem development on the decomposition processes during primary succession. The total hyphal length and the length of darkly pigmented hyphae increased during decomposition of leaves and stems and did not vary among the moraines. Four fungal morphotaxa were frequently isolated from both leaves and stems. The frequencies of occurrence of two morphotaxa varied with the decay class of leaves and/or stems. The hyphal lengths and the frequencies of occurrence of fungal morphotaxa were positively or negatively correlated with the contents of organic chemical components and nutrients in leaves and stems, suggesting roles for fungi in the chemical changes observed in the field. Pure culture decomposition tests demonstrated that the fungal morphotaxa were cellulose decomposers. Our results suggest that fungi took part in the chemical changes in decomposing leaves and stems even under the harsh environment of the high Arctic.

  5. Gaussian process regression of chirplet decomposed ultrasonic B-scans of a simulated design case

    NASA Astrophysics Data System (ADS)

    Wertz, John; Homa, Laura; Welter, John; Sparkman, Daniel; Aldrin, John

    2018-04-01

    The US Air Force seeks to implement damage tolerant lifecycle management of composite structures. Nondestructive characterization of damage is a key input to this framework. One approach to characterization is model-based inversion of the ultrasonic response from damage features; however, the computational expense of modeling the ultrasonic waves within composites is a major hurdle to implementation. A surrogate forward model with sufficient accuracy and greater computational efficiency is therefore critical to enabling model-based inversion and damage characterization. In this work, a surrogate model is developed on the simulated ultrasonic response from delamination-like structures placed at different locations within a representative composite layup. The resulting B-scans are decomposed via the chirplet transform, and a Gaussian process model is trained on the chirplet parameters. The quality of the surrogate is tested by comparing the predicted and simulated B-scans for a delamination configuration not represented in the training data set. The estimated B-scan has a maximum error of ~15% for an estimated reduction in computational runtime of ~95% for 200 function calls. This considerable reduction in computational expense makes full 3D characterization of impact damage tractable.
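
    The core surrogate idea (learn a cheap mapping from a damage descriptor to the parameters of a decomposed waveform) can be sketched with an off-the-shelf Gaussian process regressor. The scikit-learn fragment below uses synthetic stand-in data; the feature set, kernel and outputs are assumptions for illustration, not the paper's model.

        # Sketch of a Gaussian-process surrogate mapping a delamination descriptor
        # (here just depth and lateral position) to a few decomposition coefficients.
        # The data below are synthetic stand-ins, not the simulated B-scans of the paper.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        rng = np.random.default_rng(0)
        X = rng.uniform([0.5, -10.0], [3.0, 10.0], size=(40, 2))   # depth (mm), x-offset (mm)
        # Pretend each configuration is summarised by three chirplet-like parameters.
        Y = np.column_stack([np.exp(-X[:, 0]),
                             np.cos(0.3 * X[:, 1]),
                             0.1 * X[:, 0] + 0.01 * X[:, 1]])

        kernel = ConstantKernel(1.0) * RBF(length_scale=[1.0, 5.0])
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, Y)

        x_new = np.array([[1.7, 2.5]])          # unseen delamination configuration
        y_pred, y_std = gp.predict(x_new, return_std=True)
        print(y_pred, y_std)                    # surrogate response and its uncertainty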

  6. Quantitative phase imaging of biological cells and tissues using single-shot white light interference microscopy and phase subtraction method for extended range of measurement

    NASA Astrophysics Data System (ADS)

    Mehta, Dalip Singh; Sharma, Anuradha; Dubey, Vishesh; Singh, Veena; Ahmad, Azeem

    2016-03-01

    We present a single-shot white light interference microscopy method for the quantitative phase imaging (QPI) of biological cells and tissues. A common-path white light interference microscope is developed, and a colorful white light interferogram is recorded by a three-chip color CCD camera. The recorded white light interferogram is decomposed into its red, green and blue color wavelength component interferograms, which are processed to determine the refractive index (RI) at the different color wavelengths. The decomposed interferograms are analyzed using a local model fitting (LMF) algorithm developed for reconstructing the phase map from a single interferogram. LMF is a slightly off-axis interferometric QPI method that employs only a single image, so it is fast and accurate. The present method is very useful for dynamic processes where the path length changes at the millisecond level. From the single interferogram, wavelength-dependent quantitative phase images of human red blood cells (RBCs) are reconstructed and the refractive index is determined. The LMF algorithm is simple to implement and is efficient in computation. The results are compared with the conventional phase shifting interferometry and Hilbert transform techniques.
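
    A minimal sketch of the decomposition step is shown below: the colour interferogram is split into its R, G and B fringe patterns and a wrapped phase map is recovered from each channel. The LMF algorithm itself is not reproduced here; the sketch substitutes standard Fourier-transform (off-axis) demodulation, and the input array is a random placeholder.

        # Split a colour interferogram into its R, G, B fringe patterns and recover a
        # wrapped phase map from each channel.  The paper's LMF algorithm is replaced
        # here by ordinary Fourier-transform (off-axis) demodulation; 'interferogram'
        # is a placeholder HxWx3 array.
        import numpy as np

        def phase_from_channel(fringes):
            F = np.fft.fftshift(np.fft.fft2(fringes))
            h, w = F.shape
            # Ignore the DC half-plane, then keep the strongest side lobe.
            half = np.abs(F[:, w // 2 + 5:])
            cy, cx = np.unravel_index(np.argmax(half), half.shape)
            cx += w // 2 + 5
            r = 20                                   # side-lobe window radius (pixels)
            mask = np.zeros_like(F)
            mask[max(cy - r, 0):cy + r, max(cx - r, 0):cx + r] = 1
            side = F * mask
            side = np.roll(side, (h // 2 - cy, w // 2 - cx), axis=(0, 1))  # carrier to centre
            return np.angle(np.fft.ifft2(np.fft.ifftshift(side)))          # wrapped phase

        interferogram = np.random.rand(256, 256, 3)   # stand-in for the recorded image
        phases = [phase_from_channel(interferogram[..., c]) for c in range(3)]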

  7. Systematic approach to in-depth understanding of photoelectrocatalytic bacterial inactivation mechanisms by tracking the decomposed building blocks.

    PubMed

    Sun, Hongwei; Li, Guiying; Nie, Xin; Shi, Huixian; Wong, Po-Keung; Zhao, Huijun; An, Taicheng

    2014-08-19

    A systematic approach was developed to understand, in depth, the mechanisms involved during the inactivation of bacterial cells using photoelectrocatalytic (PEC) processes with Escherichia coli K-12 as the model microorganism. The bacterial cells were found to be inactivated and decomposed primarily due to attack from photogenerated H2O2. Extracellular reactive oxygen species (ROS), such as H2O2, may penetrate into the bacterial cell and cause dramatically elevated intracellular ROS levels, which would overwhelm the antioxidative capacity of bacterial protective enzymes such as superoxide dismutase and catalase. The activities of these two enzymes were found to decrease due to the ROS attacks during PEC inactivation. Bacterial cell wall damage was then observed, including loss of cell membrane integrity and increased permeability, followed by the decomposition of the cell envelope (demonstrated by scanning electron microscope images). One of the bacterial building blocks, protein, was found to be oxidatively damaged by the ROS attacks as well. Leakage of cytoplasm and biomolecules (bacterial building blocks such as proteins and nucleic acids) was evident during the prolonged PEC inactivation process. The leaked cytoplasmic substances and cell debris could be further degraded and, ultimately, mineralized with prolonged PEC treatment.

  8. Decomposition and extraction: a new framework for visual classification.

    PubMed

    Fang, Yuqiang; Chen, Qiang; Sun, Lin; Dai, Bin; Yan, Shuicheng

    2014-08-01

    In this paper, we present a novel framework for visual classification based on hierarchical image decomposition and hybrid midlevel feature extraction. Unlike most midlevel feature learning methods, which focus on the process of coding or pooling, we emphasize that the mechanism of image composition also strongly influences the feature extraction. To effectively explore the image content for feature extraction, we model a multiplicity feature representation mechanism through meaningful hierarchical image decomposition followed by a fusion step. In particular, we first propose a new hierarchical image decomposition approach in which each image is decomposed into a series of hierarchical semantic components, i.e., the structure and texture images. Then, different feature extraction schemes can be adopted to match the decomposed structure and texture processes in a dissociative manner. Here, two schemes are explored to produce property-related feature representations: one is based on a single-stage network over hand-crafted features, and the other is based on a multistage network that can learn features from raw pixels automatically. Finally, those multiple midlevel features are incorporated by solving a multiple kernel learning task. Extensive experiments are conducted on several challenging data sets for visual classification, and the experimental results demonstrate the effectiveness of the proposed method.

  9. Enhanced solvent production by metabolic engineering of a twin-clostridial consortium.

    PubMed

    Wen, Zhiqiang; Minton, Nigel P; Zhang, Ying; Li, Qi; Liu, Jinle; Jiang, Yu; Yang, Sheng

    2017-01-01

    The efficient fermentative production of solvents (acetone, n-butanol, and ethanol) from a lignocellulosic feedstock using a single process microorganism has yet to be demonstrated. Herein, we developed a consolidated bioprocessing (CBP) approach based on a twin-clostridial consortium composed of Clostridium cellulovorans and Clostridium beijerinckii capable of producing cellulosic butanol from alkali-extracted, deshelled corn cobs (AECC). To accomplish this, a genetic system was developed for C. cellulovorans and used to knock out the genes encoding acetate kinase (Clocel_1892) and lactate dehydrogenase (Clocel_1533), and to overexpress the gene encoding butyrate kinase (Clocel_3674), thereby pulling carbon flux towards butyrate production. In parallel, to enhance ethanol production, the expression of a putative hydrogenase gene (Clocel_2243) was down-regulated using CRISPR interference (CRISPRi). Simultaneously, genes involved in organic acid reassimilation (ctfAB, cbei_3833/3834) and pentose utilization (xylR, cbei_2385 and xylT, cbei_0109) were engineered in C. beijerinckii to enhance solvent production. The engineered twin-clostridial consortium was shown to decompose 83.2 g/L of AECC and produce 22.1 g/L of solvents (4.25 g/L acetone, 11.5 g/L butanol and 6.37 g/L ethanol). This titer of acetone-butanol-ethanol (ABE) approximates that achieved from a starchy feedstock. The developed twin-clostridial consortium serves as a promising platform for ABE fermentation from lignocellulose by CBP.

  10. Simultaneous F0-F1 modifications of Arabic for the improvement of natural-sounding

    NASA Astrophysics Data System (ADS)

    Ykhlef, F.; Bensebti, M.

    2013-03-01

    Pitch (F0) modification is one of the most important problems in the area of speech synthesis. Several techniques have been developed in the literature to achieve this goal. The main restrictions of these techniques lie in the modification range and in the quality, intelligibility and naturalness of the synthesised speech. The control of formants in a spoken language can significantly improve the naturalness of the synthesised speech. This improvement depends mainly on the control of the first formant (F1). Inspired by this observation, this article proposes a new approach that modifies both F0 and F1 of Arabic voiced sounds in order to improve the naturalness of the pitch-shifted speech. The developed strategy takes a parallel processing approach, in which the analysis segments are decomposed into sub-bands in the wavelet domain, modified in the desired sub-band using a resampling technique and reconstructed without affecting the remaining sub-bands. Pitch marking and voicing detection are performed in the frequency decomposition step based on the comparison of the multi-level approximation and detail signals. The performance of the proposed technique is evaluated by listening tests and compared to the pitch synchronous overlap and add (PSOLA) technique at the third approximation level. Experimental results have shown that manipulating F0 in conjunction with F1 in the wavelet domain yields more natural-sounding synthesised speech than the classical pitch modification technique. This improvement was appropriate for high pitch modifications.
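
    The sub-band manipulation described above can be pictured with PyWavelets: decompose a voiced segment, resample the contents of one sub-band, and reconstruct while leaving the other sub-bands untouched. The wavelet, level and resampling factor below are illustrative assumptions, not the paper's settings.

        # Sketch of sub-band modification in the wavelet domain: decompose a voiced
        # segment, alter only one sub-band by resampling its content, and reconstruct
        # with the remaining sub-bands unchanged.
        import numpy as np
        import pywt
        from scipy.signal import resample

        fs = 16_000
        t = np.arange(0, 0.04, 1 / fs)
        segment = np.sin(2 * np.pi * 150 * t) + 0.3 * np.sin(2 * np.pi * 600 * t)

        coeffs = pywt.wavedec(segment, "db4", level=3)        # [cA3, cD3, cD2, cD1]
        target = coeffs[0]                                     # modify the approximation band only
        stretched = resample(target, int(len(target) * 1.1))   # crude time-scale of the band content
        coeffs[0] = stretched[: len(target)]                   # keep the band length so waverec stays valid

        modified = pywt.waverec(coeffs, "db4")[: len(segment)]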

  11. Long-term litter decomposition controlled by manganese redox cycling

    PubMed Central

    Keiluweit, Marco; Nico, Peter; Harmon, Mark E.; Mao, Jingdong; Pett-Ridge, Jennifer; Kleber, Markus

    2015-01-01

    Litter decomposition is a keystone ecosystem process impacting nutrient cycling and productivity, soil properties, and the terrestrial carbon (C) balance, but the factors regulating decomposition rate are still poorly understood. Traditional models assume that the rate is controlled by litter quality, relying on parameters such as lignin content as predictors. However, a strong correlation has been observed between the manganese (Mn) content of litter and decomposition rates across a variety of forest ecosystems. Here, we show that long-term litter decomposition in forest ecosystems is tightly coupled to Mn redox cycling. Over 7 years of litter decomposition, microbial transformation of litter was paralleled by variations in Mn oxidation state and concentration. A detailed chemical imaging analysis of the litter revealed that fungi recruit and redistribute unreactive Mn2+ provided by fresh plant litter to produce oxidative Mn3+ species at sites of active decay, with Mn eventually accumulating as insoluble Mn3+/4+ oxides. Formation of reactive Mn3+ species coincided with the generation of aromatic oxidation products, providing direct proof of the previously posited role of Mn3+-based oxidizers in the breakdown of litter. Our results suggest that the litter-decomposing machinery at our coniferous forest site depends on the ability of plants and microbes to supply, accumulate, and regenerate short-lived Mn3+ species in the litter layer. This observation indicates that biogeochemical constraints on bioavailability, mobility, and reactivity of Mn in the plant–soil system may have a profound impact on litter decomposition rates. PMID:26372954

  12. Characteristics of dissolved organic matter in the Upper Klamath River, Lost River, and Klamath Straits Drain, Oregon and California

    USGS Publications Warehouse

    Goldman, Jami H.; Sullivan, Annett B.

    2017-12-11

    Concentrations of particulate organic carbon (POC) and dissolved organic carbon (DOC), which together comprise total organic carbon, were measured in this reconnaissance study at sampling sites in the Upper Klamath River, Lost River, and Klamath Straits Drain in 2013–16. Optical absorbance and fluorescence properties of dissolved organic matter (DOM), which contains DOC, also were analyzed. Parallel factor analysis was used to decompose the optical fluorescence data into five key components for all samples. Principal component analysis (PCA) was used to investigate differences in DOM source and processing among sites. At all sites in this study, average DOC concentrations were higher than average POC concentrations. The highest DOC concentrations were at sites in the Klamath Straits Drain and at Pump Plant D. Evaluation of optical properties indicated that Klamath Straits Drain DOM had a refractory, terrestrial source, likely extracted from the interaction of this water with wetland peats and irrigated soils. Pump Plant D DOM exhibited more labile characteristics, which could, for instance, indicate contributions from algal or microbial exudates. The samples from the Klamath River also had more microbially or algally derived material, as indicated by PCA analysis of the optical properties. Most sites, except Pump Plant D, showed a linear relation between fluorescent dissolved organic matter (fDOM) and DOC concentration, indicating these measurements are highly correlated (R2 = 0.84), and thus a continuous fDOM probe could be used to estimate DOC loads from these sites.
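
    The fDOM-to-DOC step is an ordinary linear calibration, sketched below with NumPy. The sensor readings and concentrations are synthetic placeholders, not the study's data.

        # Sketch of the linear fDOM-DOC relation used to estimate DOC from a
        # continuous fDOM probe.  The numbers below are synthetic, not the study data.
        import numpy as np

        fdom = np.array([12.0, 18.5, 25.1, 31.0, 40.2, 47.8])   # sensor units (e.g., QSU)
        doc  = np.array([ 2.1,  3.0,  3.9,  4.7,  6.1,  7.0])   # mg/L

        slope, intercept = np.polyfit(fdom, doc, 1)
        pred = slope * fdom + intercept
        r2 = 1 - np.sum((doc - pred) ** 2) / np.sum((doc - doc.mean()) ** 2)
        print(f"DOC ~= {slope:.3f} * fDOM + {intercept:.3f}  (R^2 = {r2:.2f})")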

  13. Long-term litter decomposition controlled by manganese redox cycling.

    PubMed

    Keiluweit, Marco; Nico, Peter; Harmon, Mark E; Mao, Jingdong; Pett-Ridge, Jennifer; Kleber, Markus

    2015-09-22

    Litter decomposition is a keystone ecosystem process impacting nutrient cycling and productivity, soil properties, and the terrestrial carbon (C) balance, but the factors regulating decomposition rate are still poorly understood. Traditional models assume that the rate is controlled by litter quality, relying on parameters such as lignin content as predictors. However, a strong correlation has been observed between the manganese (Mn) content of litter and decomposition rates across a variety of forest ecosystems. Here, we show that long-term litter decomposition in forest ecosystems is tightly coupled to Mn redox cycling. Over 7 years of litter decomposition, microbial transformation of litter was paralleled by variations in Mn oxidation state and concentration. A detailed chemical imaging analysis of the litter revealed that fungi recruit and redistribute unreactive Mn(2+) provided by fresh plant litter to produce oxidative Mn(3+) species at sites of active decay, with Mn eventually accumulating as insoluble Mn(3+/4+) oxides. Formation of reactive Mn(3+) species coincided with the generation of aromatic oxidation products, providing direct proof of the previously posited role of Mn(3+)-based oxidizers in the breakdown of litter. Our results suggest that the litter-decomposing machinery at our coniferous forest site depends on the ability of plants and microbes to supply, accumulate, and regenerate short-lived Mn(3+) species in the litter layer. This observation indicates that biogeochemical constraints on bioavailability, mobility, and reactivity of Mn in the plant-soil system may have a profound impact on litter decomposition rates.

  14. Parallel integrated frame synchronizer chip

    NASA Technical Reports Server (NTRS)

    Solomon, Jeffrey Michael (Inventor); Ghuman, Parminder Singh (Inventor); Bennett, Toby Dennis (Inventor)

    2000-01-01

    A parallel integrated frame synchronizer which implements a sequential pipeline process wherein serial data in the form of telemetry data or weather satellite data enters the synchronizer by means of a front-end subsystem and passes to a parallel correlator subsystem or a weather satellite data processing subsystem. When in a CCSDS mode, data from the parallel correlator subsystem passes through a window subsystem, then to a data alignment subsystem and then to a bit transition density (BTD)/cyclical redundancy check (CRC) decoding subsystem. Data from the BTD/CRC decoding subsystem or data from the weather satellite data processing subsystem is then fed to an output subsystem where it is output from a data output port.

  15. Decomposition of 3,5-dinitrobenzamide in aqueous solution during UV/H2O2 and UV/TiO2 oxidation processes.

    PubMed

    Yan, Yingjie; Liao, Qi-Nan; Ji, Feng; Wang, Wei; Yuan, Shoujun; Hu, Zhen-Hu

    2017-02-01

    3,5-Dinitrobenzamide has been widely used as a feed additive to control coccidiosis in poultry, and part of the added 3,5-dinitrobenzamide is excreted into wastewater and surface water. The removal of 3,5-dinitrobenzamide from wastewater and surface water has not been reported in previous studies. Highly reactive hydroxyl radicals from UV/hydrogen peroxide (H2O2) and UV/titanium dioxide (TiO2) advanced oxidation processes (AOPs) can decompose organic contaminants efficiently. In this study, the decomposition of 3,5-dinitrobenzamide in aqueous solution during UV/H2O2 and UV/TiO2 oxidation processes was investigated. The decomposition of 3,5-dinitrobenzamide fits well with a fluence-based pseudo-first-order kinetics model. The decomposition in both oxidation processes was affected by solution pH, and was inhibited under alkaline conditions. Inorganic anions such as NO3-, Cl-, SO42-, HCO3-, and CO32- inhibited the degradation of 3,5-dinitrobenzamide during the UV/H2O2 and UV/TiO2 oxidation processes. After complete decomposition in both oxidation processes, approximately 50% of the 3,5-dinitrobenzamide was decomposed into organic intermediates, and the rest was mineralized to CO2, H2O, and other inorganic ions. Ions such as NH4+, NO3-, and NO2- were released into aqueous solution during the degradation. The primary decomposition products of 3,5-dinitrobenzamide were identified using time-of-flight mass spectrometry (LCMS-IT-TOF). Based on these products and the released ions, a possible decomposition pathway of 3,5-dinitrobenzamide in both the UV/H2O2 and UV/TiO2 processes was proposed.
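
    A fluence-based pseudo-first-order model of this kind, C(F) = C0 · exp(-k·F), can be fitted with a short SciPy script. The fluence/concentration pairs below are synthetic placeholders, not measurements from the study.

        # Fluence-based pseudo-first-order fit, C(F) = C0 * exp(-k * F), as a sketch of
        # the kinetics model described above.
        import numpy as np
        from scipy.optimize import curve_fit

        def first_order(fluence, c0, k):
            return c0 * np.exp(-k * fluence)

        fluence = np.array([0, 50, 100, 200, 400, 800])        # mJ/cm^2 (assumed units)
        conc    = np.array([10.0, 7.8, 6.1, 3.9, 1.4, 0.2])    # mg/L (synthetic)

        (c0, k), _ = curve_fit(first_order, fluence, conc, p0=(10.0, 0.005))
        print(f"C0 = {c0:.2f} mg/L, k = {k:.4f} cm^2/mJ")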

  16. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Lau, Sonie; Yan, Jerry C.

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.

  17. Parallelization of ARC3D with Computer-Aided Tools

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang; Hribar, Michelle; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    A series of efforts have been devoted to investigating methods of porting and parallelizing applications quickly and efficiently for new architectures, such as the SGI Origin 2000 and Cray T3E. This report presents the parallelization of a CFD application, ARC3D, using the computer-aided tools CAPTools. The steps of parallelizing this code and the requirements for achieving better performance are discussed. The generated parallel version has achieved reasonably good performance, for example a speedup of 30 on 36 Cray T3E processors. However, this performance could not be obtained without modification of the original serial code. It is suggested that in many cases improving the serial code and performing necessary code transformations are important parts of the automated parallelization process, although user intervention in many of these parts is still necessary. Nevertheless, development and improvement of useful software tools, such as CAPTools, can help trim down many tedious parallelization details and improve the processing efficiency.

  18. Development of a parallel FE simulator for modeling the whole trans-scale failure process of rock from meso- to engineering-scale

    NASA Astrophysics Data System (ADS)

    Li, Gen; Tang, Chun-An; Liang, Zheng-Zhao

    2017-01-01

    Multi-scale high-resolution modeling of rock failure process is a powerful means in modern rock mechanics studies to reveal the complex failure mechanism and to evaluate engineering risks. However, multi-scale continuous modeling of rock, from deformation, damage to failure, has raised high requirements on the design, implementation scheme and computation capacity of the numerical software system. This study is aimed at developing the parallel finite element procedure, a parallel rock failure process analysis (RFPA) simulator that is capable of modeling the whole trans-scale failure process of rock. Based on the statistical meso-damage mechanical method, the RFPA simulator is able to construct heterogeneous rock models with multiple mechanical properties, deal with and represent the trans-scale propagation of cracks, in which the stress and strain fields are solved for the damage evolution analysis of representative volume element by the parallel finite element method (FEM) solver. This paper describes the theoretical basis of the approach and provides the details of the parallel implementation on a Windows - Linux interactive platform. A numerical model is built to test the parallel performance of FEM solver. Numerical simulations are then carried out on a laboratory-scale uniaxial compression test, and field-scale net fracture spacing and engineering-scale rock slope examples, respectively. The simulation results indicate that relatively high speedup and computation efficiency can be achieved by the parallel FEM solver with a reasonable boot process. In laboratory-scale simulation, the well-known physical phenomena, such as the macroscopic fracture pattern and stress-strain responses, can be reproduced. In field-scale simulation, the formation process of net fracture spacing from initiation, propagation to saturation can be revealed completely. In engineering-scale simulation, the whole progressive failure process of the rock slope can be well modeled. It is shown that the parallel FE simulator developed in this study is an efficient tool for modeling the whole trans-scale failure process of rock from meso- to engineering-scale.

  19. Plasma plume oscillations monitoring during laser welding of stainless steel by discrete wavelet transform application.

    PubMed

    Sibillano, Teresa; Ancona, Antonio; Rizzi, Domenico; Lupo, Valentina; Tricarico, Luigi; Lugarà, Pietro Mario

    2010-01-01

    The plasma optical radiation emitted during CO2 laser welding of stainless steel samples has been detected with a Si-PIN photodiode and analyzed under different process conditions. The discrete wavelet transform (DWT) has been used to decompose the optical signal into various discrete series of sequences over different frequency bands. The results show that changes of the process settings may yield different signal features in the range of frequencies between 200 Hz and 30 kHz. Potential applications of this method to monitor in real time the laser welding processes are also discussed.
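
    As a rough illustration of the analysis described above, the fragment below decomposes a sampled photodiode signal into wavelet sub-bands with PyWavelets and reports the energy in each band. The sampling rate, wavelet and decomposition level are illustrative assumptions, not the study's settings.

        # Decompose a sampled plasma-emission signal into wavelet sub-bands and
        # track the energy per band.
        import numpy as np
        import pywt

        fs = 100_000                                   # assumed sampling rate (Hz)
        t = np.arange(0, 0.1, 1 / fs)
        signal = np.sin(2 * np.pi * 800 * t) + 0.2 * np.random.randn(t.size)

        level = 5
        coeffs = pywt.wavedec(signal, "db8", level=level)      # [cA5, cD5, ..., cD1]
        for i, c in enumerate(coeffs[1:], start=1):
            band = level - i + 1                               # detail level of this array
            hi = fs / 2 ** band                                # approximate band edges
            lo = hi / 2
            print(f"cD{band}: ~{lo:.0f}-{hi:.0f} Hz, energy = {np.sum(c ** 2):.1f}")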

  20. Integrated control/structure optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Gilbert, Michael G.

    1990-01-01

    A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The system is fully decomposed into structural and control subsystem designs and an improved design is produced. Theory, implementation, and results for the method are presented and compared with the benchmark example.

  1. The influence of body position and microclimate on ketamine and metabolite distribution in decomposed skeletal remains.

    PubMed

    Cornthwaite, H M; Watterson, J H

    2014-10-01

    The influence of body position and microclimate on ketamine (KET) and metabolite distribution in decomposed bone tissue was examined. Rats received 75 mg/kg (i.p.) KET (n = 30) or remained drug-free (controls, n = 4). Following euthanasia, rats were divided into two groups and placed outdoors to decompose in one of three positions: supine (SUP), prone (PRO) or upright (UPR). One group decomposed in a shaded, wooded microclimate (Site 1) while the other decomposed in an exposed, sunlit microclimate with gravel substrate (Site 2), roughly 500 m from Site 1. Following decomposition, bones (lumbar vertebrae, thoracic vertebra, cervical vertebrae, rib, pelvis, femora, tibiae, humeri and scapulae) were collected and sorted for analysis. Clean, ground bones underwent microwave-assisted extraction using an acetone:hexane mixture (1:1, v/v), followed by solid-phase extraction and analysis using GC-MS. Drug levels, expressed as mass-normalized response ratios, were compared across bone types, body positions and microclimates. Bone type was a main effect (P < 0.05) for drug level and drug/metabolite level ratio for all body positions and microclimates examined. Microclimate and body position significantly influenced the observed drug levels: higher levels were observed in carcasses decomposing in direct sunlight, where reduced entomological activity led to slowed decomposition.

  2. Integrating microbial physiology and enzyme traits in the quality model

    NASA Astrophysics Data System (ADS)

    Sainte-Marie, Julien; Barrandon, Matthieu; Martin, Francis; Saint-André, Laurent; Derrien, Delphine

    2017-04-01

    Microbe activity plays an undisputable role in soil carbon storage and there have been many calls to integrate microbial ecology into soil carbon (C) models. With regard to this challenge, a few trait-based microbial models of C dynamics have emerged during the past decade. They parameterize specific traits related to decomposer physiology (substrate use efficiency, growth and mortality rates...) and enzyme properties (enzyme production rate, catalytic properties of enzymes…). But these models are built on the premise that organic matter (OM) can be represented as a single entity or divided into a few pools, while organic matter actually exists as a continuum of many different compounds spanning from intact plant molecules to highly oxidised microbial metabolites. In addition, a given molecule may also exist in different forms, depending on its stage of polymerization or on its interactions with other organic compounds or mineral phases of the soil. Here we develop a general theoretical model relating the evolution of soil organic matter, as a continuum of progressively decomposing compounds, to decomposer activity and enzyme traits. The model is based on the notion of quality developed by Agren and Bosatta (1998), which is a measure of a molecule's accessibility to degradation. The model integrates three major processes: OM depolymerisation by enzyme action, OM assimilation and OM biotransformation. For any enzyme, the model reports the quality range in which this enzyme selectively operates and how the initial quality distribution of the OM subset evolves into another distribution of qualities under the enzyme's action. The model also defines the quality range in which OM can be taken up and assimilated by microbes. It finally describes how the quality of the assimilated molecules is transformed into another quality distribution, corresponding to the decomposers' metabolite signature. Upon decomposer death, these metabolites return to the substrate. We explore here how microbial physiology and enzyme traits can be incorporated in a model based on a continuous representation of organic matter and evaluate how this can improve our ability to predict soil C cycling. To do so, we analyse the properties of the model by implementing different scenarios and testing the sensitivity of its parameters. Agren, G. I., & Bosatta, E. (1998). Theoretical ecosystem ecology: understanding element cycles. Cambridge University Press.

  3. Curious parallels and curious connections--phylogenetic thinking in biology and historical linguistics.

    PubMed

    Atkinson, Quentin D; Gray, Russell D

    2005-08-01

    In The Descent of Man (1871), Darwin observed "curious parallels" between the processes of biological and linguistic evolution. These parallels mean that evolutionary biologists and historical linguists seek answers to similar questions and face similar problems. As a result, the theory and methodology of the two disciplines have evolved in remarkably similar ways. In addition to Darwin's curious parallels of process, there are a number of equally curious parallels and connections between the development of methods in biology and historical linguistics. Here we briefly review the parallels between biological and linguistic evolution and contrast the historical development of phylogenetic methods in the two disciplines. We then look at a number of recent studies that have applied phylogenetic methods to language data and outline some current problems shared by the two fields.

  4. Performing a local reduction operation on a parallel computer

    DOEpatents

    Blocksome, Michael A; Faraj, Daniel A

    2013-06-04

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.
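
    The copy-then-reduce pattern in this claim can be pictured with a small serial sketch: two input buffers are interleaved chunk by chunk into a "shared" array, and an element-wise sum is then formed together with the copied network buffers. In the patent each chunk is handled by one of the two reduction cores; here the loop is serial, and all names and sizes are hypothetical.

        # Illustrative single-process sketch of the local reduction described above.
        import numpy as np

        CHUNK, N = 4, 16
        core0 = np.arange(N, dtype=float)
        core1 = np.arange(N, dtype=float) * 2
        net_write = np.ones(N)
        net_read = np.full(N, 3.0)

        # Step 1: interleave the reduction cores' buffers chunk by chunk ("shared memory").
        interleaved = np.empty(2 * N)
        for start in range(0, N, CHUNK):
            interleaved[2 * start:2 * start + CHUNK] = core0[start:start + CHUNK]
            interleaved[2 * start + CHUNK:2 * start + 2 * CHUNK] = core1[start:start + CHUNK]

        # Step 2: element-wise local reduction over all four buffers, chunk by chunk.
        result = np.empty(N)
        for start in range(0, N, CHUNK):
            sl = slice(start, start + CHUNK)
            a = interleaved[2 * start:2 * start + CHUNK]              # core 0's chunk
            b = interleaved[2 * start + CHUNK:2 * start + 2 * CHUNK]  # core 1's chunk
            result[sl] = a + b + net_write[sl] + net_read[sl]

        print(result)   # result[j] = core0[j] + core1[j] + net_write[j] + net_read[j]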

  5. Performing a local reduction operation on a parallel computer

    DOEpatents

    Blocksome, Michael A.; Faraj, Daniel A.

    2012-12-11

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.

  6. Toxicity to woodlice of zinc and lead oxides added to soil litter

    USGS Publications Warehouse

    Beyer, W.N.; Anderson, A.

    1985-01-01

    Previous studies have shown that high concentrations of metals in soil are associated with reductions in decomposer populations. We have here determined the relation between the concentrations of lead and zinc added as oxides to soil litter and the survival and reproduction of a decomposer population under controlled conditions. Laboratory populations of woodlice (Porcellio scaber Latr) were fed soil litter treated with lead or zinc at concentrations that ranged from 100 to 12,800 ppm. The survival of the adults, the maximum number of young alive, and the average number of young alive, were recorded over 64 weeks. Lead at 12,800 ppm and zinc at 1,600 ppm or more had statistically significant (p < 0.05) negative effects on the populations. These results agree with field observations suggesting that lead and zinc have reduced populations of decomposers in contaminated forest soil litter, and concentrations are similar to those reported to be associated with reductions in natural populations of decomposers. Poisoning of decomposers may disrupt nutrient cycling, reduce the numbers of invertebrates available to other wildlife for food, and contribute to the contamination of food chains.

  7. Parallel pivoting combined with parallel reduction

    NASA Technical Reports Server (NTRS)

    Alaghband, Gita

    1987-01-01

    Parallel algorithms for the triangularization of large, sparse, and unsymmetric matrices are presented. The method combines parallel reduction with a new parallel pivoting technique, control over the generation of fill-ins, and a check for numerical stability, all done in parallel with the work distributed over the active processes. The parallel pivoting technique uses the compatibility relation between pivots to identify parallel pivot candidates and uses the Markowitz number of pivots to minimize fill-in. This technique is not a preordering of the sparse matrix and is applied dynamically as the decomposition proceeds.
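
    The pivot-selection idea can be sketched in a few lines: rank nonzero candidates by their Markowitz count and accept them greedily as long as they share no row or column with pivots already chosen, so the accepted set can be eliminated in parallel. The SciPy fragment below is such a sketch under assumed matrix sizes; it omits the numerical-stability check mentioned above.

        # Select one set of structurally compatible pivots for a parallel elimination
        # step.  Candidates are ranked by Markowitz count (r_i - 1) * (c_j - 1).
        import numpy as np
        from scipy.sparse import identity, random as sprandom

        A = sprandom(60, 60, density=0.05, format="coo", random_state=1) + 0.1 * identity(60)
        A = A.tocoo()

        row_counts = np.bincount(A.row, minlength=60)
        col_counts = np.bincount(A.col, minlength=60)

        candidates = sorted(
            zip(A.row, A.col, A.data),
            key=lambda e: (row_counts[e[0]] - 1) * (col_counts[e[1]] - 1),
        )

        used_rows, used_cols, pivots = set(), set(), []
        for i, j, v in candidates:
            if i not in used_rows and j not in used_cols and v != 0:
                pivots.append((i, j))          # compatible: can be eliminated in parallel
                used_rows.add(i)
                used_cols.add(j)

        print(f"{len(pivots)} compatible pivots selected for one parallel elimination step")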

  8. Improving operating room productivity via parallel anesthesia processing.

    PubMed

    Brown, Michael J; Subramanian, Arun; Curry, Timothy B; Kor, Daryl J; Moran, Steven L; Rohleder, Thomas R

    2014-01-01

    Parallel processing of regional anesthesia may improve operating room (OR) efficiency in patients undergoing upper extremity surgical procedures. The purpose of this paper is to evaluate whether performing regional anesthesia outside the OR in parallel increases total cases per day and improves efficiency and productivity. Data from all adult patients who underwent regional anesthesia as their primary anesthetic for upper extremity surgery over a one-year period were used to develop a simulation model. The model evaluated pure operating modes of regional anesthesia performed within and outside the OR in a parallel manner. The scenarios were used to evaluate how many surgeries could be completed in a standard work day (555 minutes) and, assuming a standard three cases per day, what the predicted end-of-day overtime would be. Modeling results show that parallel processing of regional anesthesia increases the average cases per day for all surgeons included in the study. The average increase was 0.42 surgeries per day. Where it was assumed that three cases per day would be performed by all surgeons, the days going to overtime were reduced by 43 percent with the parallel block. The overtime with parallel anesthesia was also projected to be 40 minutes less per day per surgeon. Key limitations include the assumption that all cases used regional anesthesia in the comparisons; many days may include both regional and general anesthesia. Also, as a single-center case study, the research may have limited generalizability. Perioperative care providers should consider parallel administration of regional anesthesia where there is a desire to increase daily upper extremity surgical case capacity. Where there are sufficient resources for parallel anesthesia processing, efficiency and productivity can be significantly improved. Simulation modeling can be an effective tool to show practice-change effects at a system-wide level.
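
    A toy simulation makes the scheduling argument concrete: when the block is placed outside the OR, its duration overlaps the room turnover instead of adding to it. The Monte Carlo fragment below compares the two modes with made-up time distributions; it is not the study's model, and every parameter is an assumption.

        # Monte Carlo sketch: sequential vs parallel (out-of-OR) block placement.
        import numpy as np

        rng = np.random.default_rng(42)
        DAY_MINUTES, N_DAYS, CASES = 555, 10_000, 3

        def simulate(parallel_block):
            over = []
            for _ in range(N_DAYS):
                t = 0.0
                for _ in range(CASES):
                    block = rng.normal(25, 5)          # regional block placement
                    surgery = rng.normal(120, 20)      # surgical + emergence time
                    turnover = rng.normal(30, 5)       # room cleaning / setup
                    if parallel_block:
                        t += surgery + max(turnover, block)   # block done outside the OR
                    else:
                        t += block + surgery + turnover       # block done inside the OR
                over.append(max(0.0, t - DAY_MINUTES))
            return np.mean(over), np.mean(np.array(over) > 0)

        for mode in (False, True):
            mean_ot, frac = simulate(mode)
            print(f"parallel={mode}: mean overtime {mean_ot:.0f} min, overtime days {frac:.0%}")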

  9. Parallel, Asynchronous Executive (PAX): System concepts, facilities, and architecture

    NASA Technical Reports Server (NTRS)

    Jones, W. H.

    1983-01-01

    The Parallel, Asynchronous Executive (PAX) is a software operating system simulation that allows many computers to work on a single problem at the same time. PAX is currently implemented on a UNIVAC 1100/42 computer system. Independent UNIVAC runstreams are used to simulate independent computers. Data are shared among independent UNIVAC runstreams through shared mass-storage files. PAX has achieved the following: (1) applied several computing processes simultaneously to a single, logically unified problem; (2) resolved most parallel processor conflicts by careful work assignment; (3) resolved by means of worker requests to PAX all conflicts not resolved by work assignment; (4) provided fault isolation and recovery mechanisms to meet the problems of an actual parallel, asynchronous processing machine. Additionally, one real-life problem has been constructed for the PAX environment. This is CASPER, a collection of aerodynamic and structural dynamic problem simulation routines. CASPER is not discussed in this report except to provide examples of parallel-processing techniques.

  10. Properties of Soil Pore Space Regulate Pathways of Plant Residue Decomposition and Community Structure of Associated Bacteria

    PubMed Central

    Negassa, Wakene C.; Guber, Andrey K.; Kravchenko, Alexandra N.; Marsh, Terence L.; Hildebrandt, Britton; Rivers, Mark L.

    2015-01-01

    Physical protection of soil carbon (C) is one of the important components of C storage. However, its exact mechanisms are still not sufficiently lucid. The goal of this study was to explore the influence of soil structure, that is, soil pore spatial arrangements, with and without presence of plant residue on (i) decomposition of added plant residue, (ii) CO2 emission from soil, and (iii) structure of soil bacterial communities. The study consisted of several soil incubation experiments with samples of contrasting pore characteristics with/without plant residue, accompanied by X-ray micro-tomographic analyses of soil pores and by microbial community analysis of amplified 16S–18S rRNA genes via pyrosequencing. We observed that in the samples with substantial presence of air-filled well-connected large (>30 µm) pores, 75–80% of the added plant residue was decomposed, cumulative CO2 emission constituted 1,200 µm C g-1 soil, and movement of C from decomposing plant residue into adjacent soil was insignificant. In the samples with greater abundance of water-filled small pores, 60% of the added plant residue was decomposed, cumulative CO2 emission constituted 2,000 µm C g-1 soil, and the movement of residue C into adjacent soil was substantial. In the absence of plant residue the influence of pore characteristics on CO2 emission, that is on decomposition of the native soil organic C, was negligible. The microbial communities on the plant residue in the samples with large pores had more microbial groups known to be cellulose decomposers, that is, Bacteroidetes, Proteobacteria, Actinobacteria, and Firmicutes, while a number of oligotrophic Acidobacteria groups were more abundant on the plant residue from the samples with small pores. This study provides the first experimental evidence that characteristics of soil pores and their air/water flow status determine the phylogenetic composition of the local microbial community and directions and magnitudes of soil C decomposition processes. PMID:25909444

  11. Properties of soil pore space regulate pathways of plant residue decomposition and community structure of associated bacteria

    DOE PAGES

    Negassa, Wakene C.; Guber, Andrey K.; Kravchenko, Alexandra N.; ...

    2015-07-01

    Physical protection of soil carbon (C) is one of the important components of C storage. However, its exact mechanisms are still not sufficiently lucid. The goal of this study was to explore the influence of soil structure, that is, soil pore spatial arrangements, with and without presence of plant residue on (i) decomposition of added plant residue, (ii) CO₂ emission from soil, and (iii) structure of soil bacterial communities. The study consisted of several soil incubation experiments with samples of contrasting pore characteristics with/without plant residue, accompanied by X-ray micro-tomographic analyses of soil pores and by microbial community analysis of amplified 16S–18S rRNA genes via pyrosequencing. We observed that in the samples with substantial presence of air-filled well-connected large (>30 µm) pores, 75–80% of the added plant residue was decomposed, cumulative CO₂ emission constituted 1,200 µm C g⁻¹ soil, and movement of C from decomposing plant residue into adjacent soil was insignificant. In the samples with greater abundance of water-filled small pores, 60% of the added plant residue was decomposed, cumulative CO₂ emission constituted 2,000 µm C g⁻¹ soil, and the movement of residue C into adjacent soil was substantial. In the absence of plant residue the influence of pore characteristics on CO₂ emission, that is on decomposition of the native soil organic C, was negligible. The microbial communities on the plant residue in the samples with large pores had more microbial groups known to be cellulose decomposers, that is, Bacteroidetes, Proteobacteria, Actinobacteria, and Firmicutes, while a number of oligotrophic Acidobacteria groups were more abundant on the plant residue from the samples with small pores. This study provides the first experimental evidence that characteristics of soil pores and their air/water flow status determine the phylogenetic composition of the local microbial community and directions and magnitudes of soil C decomposition processes.

  12. Soil chemistry changes beneath decomposing cadavers over a one-year period.

    PubMed

    Szelecz, Ildikó; Koenig, Isabelle; Seppey, Christophe V W; Le Bayon, Renée-Claire; Mitchell, Edward A D

    2018-05-01

    Decomposing vertebrate cadavers release large, localized inputs of nutrients. These temporally limited resource patches affect nutrient cycling and soil organisms. The impact of decomposing cadavers on soil chemistry is relevant to soil biology, as a natural disturbance, and forensic science, to estimate the postmortem interval. However, cadaver impacts on soils are rarely studied, making it difficult to identify common patterns. We investigated the effects of decomposing pig cadavers (Sus scrofa domesticus) on soil chemistry (pH, ammonium, nitrate, nitrogen, phosphorous, potassium and carbon) over a one-year period in a spruce-dominant forest. Four treatments were applied, each with five replicates: two treatments including pig cadavers (placed on the ground and hung one metre above ground) and two controls (bare soil and bags filled with soil placed on the ground i.e. "fake pig" treatment). In the first two months (15-59 days after the start of the experiment), cadavers caused significant increases of ammonium, nitrogen, phosphorous and potassium (p<0.05) whereas nitrate significantly increased towards the end of the study (263-367 days; p<0.05). Soil pH increased significantly at first and then decreased significantly at the end of the experiment. After one year, some markers returned to basal levels (i.e. not significantly different from control plots), whereas others were still significantly different. Based on these response patterns and in comparison with previous studies, we define three categories of chemical markers that may have the potential to date the time since death: early peak markers (EPM), late peak markers (LPM) and late decrease markers (LDM). The marker categories will enhance our understanding of soil processes and can be highly useful when changes in soil chemistry are related to changes in the composition of soil organism communities. For actual casework further studies and more data are necessary to refine the marker categories along a more precise timeline and to develop a method that can be used in court. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Properties of soil pore space regulate pathways of plant residue decomposition and community structure of associated bacteria.

    PubMed

    Negassa, Wakene C; Guber, Andrey K; Kravchenko, Alexandra N; Marsh, Terence L; Hildebrandt, Britton; Rivers, Mark L

    2015-01-01

    Physical protection of soil carbon (C) is one of the important components of C storage. However, its exact mechanisms are still not sufficiently lucid. The goal of this study was to explore the influence of soil structure, that is, soil pore spatial arrangements, with and without presence of plant residue on (i) decomposition of added plant residue, (ii) CO2 emission from soil, and (iii) structure of soil bacterial communities. The study consisted of several soil incubation experiments with samples of contrasting pore characteristics with/without plant residue, accompanied by X-ray micro-tomographic analyses of soil pores and by microbial community analysis of amplified 16S-18S rRNA genes via pyrosequencing. We observed that in the samples with substantial presence of air-filled well-connected large (>30 µm) pores, 75-80% of the added plant residue was decomposed, cumulative CO2 emission constituted 1,200 µg C g⁻¹ soil, and movement of C from decomposing plant residue into adjacent soil was insignificant. In the samples with greater abundance of water-filled small pores, 60% of the added plant residue was decomposed, cumulative CO2 emission constituted 2,000 µg C g⁻¹ soil, and the movement of residue C into adjacent soil was substantial. In the absence of plant residue the influence of pore characteristics on CO2 emission, that is on decomposition of the native soil organic C, was negligible. The microbial communities on the plant residue in the samples with large pores had more microbial groups known to be cellulose decomposers, that is, Bacteroidetes, Proteobacteria, Actinobacteria, and Firmicutes, while a number of oligotrophic Acidobacteria groups were more abundant on the plant residue from the samples with small pores. This study provides the first experimental evidence that characteristics of soil pores and their air/water flow status determine the phylogenetic composition of the local microbial community and directions and magnitudes of soil C decomposition processes.

  14. Properties of soil pore space regulate pathways of plant residue decomposition and community structure of associated bacteria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Negassa, Wakene C.; Guber, Andrey K.; Kravchenko, Alexandra N.

    Physical protection of soil carbon (C) is one of the important components of C storage. However, its exact mechanisms are still not sufficiently lucid. The goal of this study was to explore the influence of soil structure, that is, soil pore spatial arrangements, with and without presence of plant residue on (i) decomposition of added plant residue, (ii) CO₂ emission from soil, and (iii) structure of soil bacterial communities. The study consisted of several soil incubation experiments with samples of contrasting pore characteristics with/without plant residue, accompanied by X-ray micro-tomographic analyses of soil pores and by microbial community analysis of amplified 16S–18S rRNA genes via pyrosequencing. We observed that in the samples with substantial presence of air-filled well-connected large (>30 µm) pores, 75–80% of the added plant residue was decomposed, cumulative CO₂ emission constituted 1,200 µg C g⁻¹ soil, and movement of C from decomposing plant residue into adjacent soil was insignificant. In the samples with greater abundance of water-filled small pores, 60% of the added plant residue was decomposed, cumulative CO₂ emission constituted 2,000 µg C g⁻¹ soil, and the movement of residue C into adjacent soil was substantial. In the absence of plant residue the influence of pore characteristics on CO₂ emission, that is on decomposition of the native soil organic C, was negligible. The microbial communities on the plant residue in the samples with large pores had more microbial groups known to be cellulose decomposers, that is, Bacteroidetes, Proteobacteria, Actinobacteria, and Firmicutes, while a number of oligotrophic Acidobacteria groups were more abundant on the plant residue from the samples with small pores. This study provides the first experimental evidence that characteristics of soil pores and their air/water flow status determine the phylogenetic composition of the local microbial community and directions and magnitudes of soil C decomposition processes.

  15. Parallel Algorithms for Image Analysis.

    DTIC Science & Technology

    1982-06-01

    [Scanned DTIC report documentation page; most of the text is OCR residue.] Recoverable details: technical report TR-1180, "Parallel Algorithms for Image Analysis," by Azriel Rosenfeld, under contract/grant AFOSR-77-3271. Keywords: image processing; image analysis; parallel processing; cellular computers.

  16. Energy index decomposition methodology at the plant level

    NASA Astrophysics Data System (ADS)

    Kumphai, Wisit

    Scope and method of study. The dissertation explores the use of a high-level energy intensity index as a facility-level energy performance monitoring indicator, with the goal of developing a methodology for an economically based energy performance monitoring system that incorporates production information. The performance measure closely monitors energy usage, production quantity, and product mix, and determines production efficiency as part of an ongoing process that would enable facility managers to keep track of and, in the future, predict when to perform a recommissioning process. The study focuses on the index decomposition methodology and examines several high-level (industry, sector, and country level) energy utilization indexes, namely Additive Log Mean Divisia, Multiplicative Log Mean Divisia, and Additive Refined Laspeyres. One level of index decomposition is performed: the indexes are decomposed into intensity and product-mix effects. These indexes are tested on a flow-shop brick manufacturing plant model in three different climates in the United States. The indexes obtained are analyzed by fitting an ARIMA model and testing for dependency between the two decomposed indexes. Findings and conclusions. The results indicate that the Additive Refined Laspeyres index decomposition methodology is suitable for use in a flow-shop, non-air-conditioned production environment as an energy performance monitoring indicator. It is likely that this research can be further expanded into predicting when to perform a recommissioning process.
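
    For readers who want the formula behind a one-level split into intensity and product-mix effects, the additive Log Mean Divisia (LMDI-I) form is sketched below. The notation is ours and only approximates the dissertation's setup: E_i = Q_i I_i is the energy use of product i, Q_i its production quantity, I_i its energy intensity, and superscripts 0 and T denote the base and comparison periods.

        \Delta E = E^T - E^0 = \Delta E_{\mathrm{mix}} + \Delta E_{\mathrm{int}},
        \Delta E_{\mathrm{mix}} = \sum_i L(E_i^T, E_i^0)\,\ln\frac{Q_i^T}{Q_i^0}, \qquad
        \Delta E_{\mathrm{int}} = \sum_i L(E_i^T, E_i^0)\,\ln\frac{I_i^T}{I_i^0},
        L(a,b) = \frac{a-b}{\ln a - \ln b}\ (a \neq b), \qquad L(a,a) = a.

    The Refined Laspeyres variant favoured in the findings attributes the joint (interaction) terms differently, typically splitting them equally between the two effects, but targets the same intensity and product-mix components.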

  17. Integrated control/structure optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Gilbert, Michael G.

    1990-01-01

    A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The present paper fully decomposes the system into structural and control subsystem designs and produces an improved design. Theory, implementation, and results for the method are presented and compared with the benchmark example.

  18. Parallel computing method for simulating hydrological processes of large rivers under climate change

    NASA Astrophysics Data System (ADS)

    Wang, H.; Chen, Y.

    2016-12-01

    Climate change is one of the most widely recognized global environmental problems, and it has altered the temporal and spatial distribution of watershed hydrological processes, especially in the world's large rivers. Simulations based on physically based distributed hydrological models can give better results than lumped models. However, such simulations involve very large amounts of computation, especially for large rivers, and therefore need computing resources that may not be steadily available to researchers, or only at high expense; this has seriously restricted research and application. Existing parallel methods mostly parallelize the computation over the space and time dimensions: they process the natural features in order, based on the distributed hydrological model, by grid (unit or sub-basin) from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and parallel efficiency. It combines the temporal and spatial runoff characteristics of the distributed hydrological model with distributed data storage, an in-memory database, distributed computing, and parallel computing based on computing power units. The method has strong adaptability and extensibility, meaning that it can make full use of the available computing and storage resources even when those resources are limited, and its computing efficiency improves linearly as computing resources increase. The method can satisfy the parallel computing requirements of hydrological process simulation in small, medium, and large rivers.
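
    As a rough illustration of the upstream-to-downstream parallelism discussed above (a sketch of the general idea, not the authors' implementation), the following Python fragment evaluates sub-basins level by level: every sub-basin whose upstream neighbours have finished is dispatched to a process pool in the same round. The toy river network, the placeholder runoff model, and the function names are hypothetical.

        # Hedged sketch: level-by-level parallel routing of sub-basins.
        # 'upstream' maps each sub-basin to the sub-basins that drain into it (toy data).
        from concurrent.futures import ProcessPoolExecutor

        upstream = {          # A and B drain into C; C and D drain into E
            "A": [], "B": [], "C": ["A", "B"], "D": [], "E": ["C", "D"],
        }

        def simulate_subbasin(basin, inflows):
            """Placeholder hydrological model: outflow = local runoff + routed inflows."""
            local_runoff = 1.0                      # stand-in for a rainfall-runoff computation
            return local_runoff + sum(inflows)

        def run_parallel(upstream):
            outflow, done = {}, set()
            with ProcessPoolExecutor() as pool:
                while len(done) < len(upstream):
                    # every basin whose upstream neighbours are all finished can run now
                    ready = [b for b in upstream
                             if b not in done and all(u in done for u in upstream[b])]
                    futures = {b: pool.submit(simulate_subbasin, b,
                                              [outflow[u] for u in upstream[b]])
                               for b in ready}
                    for b, f in futures.items():
                        outflow[b] = f.result()
                        done.add(b)
            return outflow

        if __name__ == "__main__":
            print(run_parallel(upstream))   # E accumulates the runoff of all basins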

  19. Parallels between a Collaborative Research Process and the Middle Level Philosophy

    ERIC Educational Resources Information Center

    Dever, Robin; Ross, Diane; Miller, Jennifer; White, Paula; Jones, Karen

    2014-01-01

    The characteristics of the middle level philosophy as described in This We Believe closely parallel the collaborative research process. The journey of one research team is described in relationship to these characteristics. The collaborative process includes strengths such as professional relationships, professional development, courageous…

  20. Multi-component Wronskian solution to the Kadomtsev-Petviashvili equation

    NASA Astrophysics Data System (ADS)

    Xu, Tao; Sun, Fu-Wei; Zhang, Yi; Li, Juan

    2014-01-01

    It is known that the Kadomtsev-Petviashvili (KP) equation can be decomposed into the first two members of the coupled Ablowitz-Kaup-Newell-Segur (AKNS) hierarchy by the binary non-linearization of Lax pairs. In this paper, we construct the N-th iterated Darboux transformation (DT) for the second- and third-order m-coupled AKNS systems. Using the N-th iterated DT together with Cramer's rule, we find that the KPII equation admits an unreduced multi-component Wronskian solution and the KPI equation admits a reduced multi-component Wronskian solution. In particular, based on the unreduced and reduced two-component Wronskians, we obtain two families of fully resonant line-soliton solutions of the KPII equation, which contain arbitrary numbers of asymptotic solitons as y → ∓∞, and the ordinary N-soliton solution of the KPI equation. In addition, we find that KPI line solitons propagating in parallel can exhibit a bound state at the moment of collision.
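
    For reference, the KP equation in the normalization commonly used in this context is shown below; the sign of the transverse term distinguishes the two variants named in the abstract (conventions vary between papers).

        \left(u_t + 6uu_x + u_{xxx}\right)_x + 3\sigma^2 u_{yy} = 0, \qquad
        \sigma^2 = -1\ \text{(KPI)}, \quad \sigma^2 = +1\ \text{(KPII)}.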

  1. A structural design decomposition method utilizing substructuring

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.

    1994-01-01

    A new method of design decomposition for structural analysis and optimization is described. For this method, the structure is divided into substructures where each substructure has its structural response described by a structural-response subproblem, and its structural sizing determined from a structural-sizing subproblem. The structural responses of substructures that have rigid body modes when separated from the remainder of the structure are further decomposed into displacements that have no rigid body components, and a set of rigid body modes. The structural-response subproblems are linked together through forces determined within a structural-sizing coordination subproblem which also determines the magnitude of any rigid body displacements. Structural-sizing subproblems having constraints local to the substructures are linked together through penalty terms that are determined by a structural-sizing coordination subproblem. All the substructure structural-response subproblems are totally decoupled from each other, as are all the substructure structural-sizing subproblems, thus there is significant potential for use of parallel solution methods for these subproblems.

  2. A Self-Binding, Melt-Castable, Crystalline Organic Electrolyte for Sodium Ion Conduction.

    PubMed

    Chinnam, Parameswara Rao; Fall, Birane; Dikin, Dmitriy A; Jalil, AbdelAziz; Hamilton, Clifton R; Wunder, Stephanie L; Zdilla, Michael J

    2016-12-05

    The preparation and characterization of the cocrystalline solid-organic sodium ion electrolyte NaClO₄(DMF)₃ (DMF = dimethylformamide) is described. The crystal structure of NaClO₄(DMF)₃ reveals parallel channels of Na⁺ and ClO₄⁻ ions. Pressed pellets of microcrystalline NaClO₄(DMF)₃ exhibit a conductivity of 3×10⁻⁴ S cm⁻¹ at room temperature with a low activation barrier to conduction of 25 kJ mol⁻¹. SEM revealed thin liquid interfacial contacts between crystalline grains, which promote conductivity. The material melts gradually between 55-65 °C, but does not decompose, and upon cooling, it resolidifies as solid NaClO₄(DMF)₃, permitting melt casting of the electrolyte into thin films and the fabrication of cells in the liquid state and ensuring penetration of the electrolyte between the electrode active particles. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. PHoToNs–A parallel heterogeneous and threads oriented code for cosmological N-body simulation

    NASA Astrophysics Data System (ADS)

    Wang, Qiao; Cao, Zong-Yan; Gao, Liang; Chi, Xue-Bin; Meng, Chen; Wang, Jie; Wang, Long

    2018-06-01

    We introduce a new code for cosmological simulations, PHoToNs, which incorporates features for performing massive cosmological simulations on heterogeneous high performance computer (HPC) systems and thread-oriented programming. PHoToNs adopts a hybrid scheme to compute the gravitational force, with the conventional Particle-Mesh (PM) algorithm to compute the long-range force, the Tree algorithm to compute the short-range force, and the direct-summation Particle-Particle (PP) algorithm to compute gravity from very close particles. A self-similar, space-filling Peano-Hilbert curve is used to decompose the computing domain. Thread programming is advantageously used to more flexibly manage the domain communication, PM calculation and synchronization, as well as Dual Tree Traversal on the CPU+MIC platform. PHoToNs scales well, and the efficiency of the PP kernel reaches 68.6% of peak performance on MIC and 74.4% on CPU platforms. We also test the accuracy of the code against the widely used Gadget-2 code and find excellent agreement.
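
    A minimal sketch of the space-filling-curve decomposition idea follows. For brevity it uses a Morton (Z-order) key as a stand-in for the Peano-Hilbert key that PHoToNs itself employs, and the particle positions, box size, and rank count are invented.

        # Hedged sketch: split particles across ranks by sorting on a space-filling-curve key.
        # A Morton (Z-order) key stands in for the Peano-Hilbert key used by PHoToNs.
        import numpy as np

        def morton_key(ix, iy, iz, bits=10):
            """Interleave the bits of integer cell coordinates into a single key."""
            key = 0
            for b in range(bits):
                key |= ((ix >> b) & 1) << (3 * b + 2)
                key |= ((iy >> b) & 1) << (3 * b + 1)
                key |= ((iz >> b) & 1) << (3 * b)
            return key

        def decompose(positions, box_size, n_ranks, bits=10):
            """Assign each particle to a rank via contiguous ranges of sorted keys."""
            cells = np.clip((positions / box_size * (1 << bits)).astype(int),
                            0, (1 << bits) - 1)
            keys = np.array([morton_key(*c, bits=bits) for c in cells])
            order = np.argsort(keys)                  # particles ordered along the curve
            rank_of = np.empty(len(keys), dtype=int)
            for r, chunk in enumerate(np.array_split(order, n_ranks)):
                rank_of[chunk] = r                    # equal-count, curve-contiguous chunks
            return rank_of

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            pos = rng.random((1000, 3)) * 100.0       # toy particles in a 100-unit box
            print(np.bincount(decompose(pos, 100.0, n_ranks=8)))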

  4. Hybrid Multiscale Finite Volume method for multiresolution simulations of flow and reactive transport in porous media

    NASA Astrophysics Data System (ADS)

    Barajas-Solano, D. A.; Tartakovsky, A. M.

    2017-12-01

    We present a multiresolution method for the numerical simulation of flow and reactive transport in porous, heterogeneous media, based on the hybrid Multiscale Finite Volume (h-MsFV) algorithm. The h-MsFV algorithm allows us to couple high-resolution (fine scale) flow and transport models with lower resolution (coarse) models to locally refine both spatial resolution and transport models. The fine scale problem is decomposed into various "local'' problems solved independently in parallel and coordinated via a "global'' problem. This global problem is then coupled with the coarse model to strictly ensure domain-wide coarse-scale mass conservation. The proposed method provides an alternative to adaptive mesh refinement (AMR), due to its capacity to rapidly refine spatial resolution beyond what's possible with state-of-the-art AMR techniques, and the capability to locally swap transport models. We illustrate our method by applying it to groundwater flow and reactive transport of multiple species.

  5. High Performance Computing of Meshless Time Domain Method on Multi-GPU Cluster

    NASA Astrophysics Data System (ADS)

    Ikuno, Soichiro; Nakata, Susumu; Hirokawa, Yuta; Itoh, Taku

    2015-01-01

    High-performance computing of the Meshless Time Domain Method (MTDM) on multiple GPUs, using the supercomputer HA-PACS (Highly Accelerated Parallel Advanced system for Computational Sciences) at the University of Tsukuba, is investigated. Generally, the finite difference time domain (FDTD) method is adopted for the numerical simulation of electromagnetic wave propagation phenomena. However, the numerical domain must be divided into rectangular meshes, and it is difficult to apply the method to problems in complex domains. On the other hand, MTDM can easily be adapted to such problems because it does not require meshes. In the present study, we implement MTDM on a multi-GPU cluster to speed up the method, and numerically investigate its performance on the cluster. To reduce the computation time, the communication between the decomposed domains is hidden behind the perfectly matched layer (PML) calculation. The results show that MTDM on 128 GPUs is 173 times faster than a single-CPU calculation.

  6. Development and parameter identification of a visco-hyperelastic model for the periodontal ligament.

    PubMed

    Huang, Huixiang; Tang, Wencheng; Tan, Qiyan; Yan, Bin

    2017-04-01

    The present study developed and implemented a new visco-hyperelastic model that is capable of predicting the time-dependent biomechanical behavior of the periodontal ligament. The constitutive model has been implemented into the finite element package ABAQUS by means of a user-defined material subroutine (UMAT). The stress response is decomposed into two constitutive parts in parallel which are a hyperelastic and a time-dependent viscoelastic stress response. In order to identify the model parameters, the indentation equation based on V-W hyperelastic model and the indentation creep model are developed. Then the parameters are determined by fitting them to the corresponding nanoindentation experimental data of the PDL. The nanoindentation experiment was simulated by finite element analysis to validate the visco-hyperelastic model. The simulated results are in good agreement with the experimental data, which demonstrates that the visco-hyperelastic model developed is able to accurately predict the time-dependent mechanical behavior of the PDL. Copyright © 2017 Elsevier Ltd. All rights reserved.
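
    The parallel split described in the abstract can be written schematically as follows; this is the generic hyperelastic-plus-Prony-series form often used in UMAT implementations of finite viscoelasticity, not necessarily the authors' exact constitutive equations. Here S is the second Piola-Kirchhoff stress, C the right Cauchy-Green tensor, W the hyperelastic strain-energy function, and gamma_i, tau_i are Prony coefficients and relaxation times.

        \mathbf{S}(t) = \mathbf{S}^{e}\!\left(\mathbf{C}(t)\right)
          + \sum_{i=1}^{N} \int_0^t \gamma_i\, e^{-(t-s)/\tau_i}\, \frac{d\mathbf{S}^{e}}{ds}\, ds,
        \qquad \mathbf{S}^{e} = 2\,\frac{\partial W}{\partial \mathbf{C}}.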

  7. Removal of suspended solids and turbidity from marble processing wastewaters by electrocoagulation: comparison of electrode materials and electrode connection systems.

    PubMed

    Solak, Murat; Kiliç, Mehmet; Hüseyin, Yazici; Sencan, Aziz

    2009-12-15

    In this study, removal of suspended solids (SS) and turbidity from marble processing wastewaters by the electrocoagulation (EC) process was investigated using aluminium (Al) and iron (Fe) electrodes run in serial and parallel connection systems. To remove these pollutants, an EC reactor with monopolar electrodes (Al/Fe) in parallel and serial connection systems was used. The operating parameters pH, current density, and electrolysis time were optimized with respect to SS and turbidity removal. The EC process with monopolar Al electrodes in parallel and serial connections, carried out at the optimum conditions of pH 9, a current density of approximately 15 A/m², and an electrolysis time of 2 min, resulted in 100% SS removal. Removal efficiencies for SS with monopolar Fe electrodes in parallel and serial connections were 99.86% and 99.94%, respectively. For monopolar Fe electrodes, the optimum parameters for both connection types were pH 8 and an electrolysis time of 2 min; the optimum current density for Fe electrodes in serial and parallel connections was 10 and 20 A/m², respectively. Based on these results, the EC process was highly effective for the removal of SS and turbidity from marble processing wastewaters with every electrode and connection type, and operating costs were lowest with monopolar Al electrodes in parallel connection, cheaper than the serial connection and all of the Fe electrode configurations.

  8. Least squares parameter estimation methods for material decomposition with energy discriminating detectors

    PubMed Central

    Le, Huy Q.; Molloi, Sabee

    2011-01-01

    Purpose: Energy resolving detectors provide more than one spectral measurement in one image acquisition. The purpose of this study is to investigate, with simulation, the ability to decompose four materials using energy discriminating detectors and least squares minimization techniques. Methods: Three least squares parameter estimation decomposition techniques were investigated for four-material breast imaging tasks in the image domain. The first technique treats the voxel as if it consisted of fractions of all the materials. The second method assumes that a voxel primarily contains one material and divides the decomposition process into segmentation and quantification tasks. The third is similar to the second method but a calibration was used. The simulated computed tomography (CT) system consisted of an 80 kVp spectrum and a CdZnTe (CZT) detector that could resolve the x-ray spectrum into five energy bins. A postmortem breast specimen was imaged with flat panel CT to provide a model for the digital phantoms. Hydroxyapatite (HA) (50, 150, 250, 350, 450, and 550 mg∕ml) and iodine (4, 12, 20, 28, 36, and 44 mg∕ml) contrast elements were embedded into the glandular region of the phantoms. Calibration phantoms consisted of a 30∕70 glandular-to-adipose tissue ratio with embedded HA (100, 200, 300, 400, and 500 mg∕ml) and iodine (5, 15, 25, 35, and 45 mg∕ml). The x-ray transport process was simulated where the Beer–Lambert law, Poisson process, and CZT absorption efficiency were applied. Qualitative and quantitative evaluations of the decomposition techniques were performed and compared. The effect of breast size was also investigated. Results: The first technique decomposed iodine adequately but failed for other materials. The second method separated the materials but was unable to quantify the materials. With the addition of a calibration, the third technique provided good separation and quantification of hydroxyapatite, iodine, glandular, and adipose tissues. Quantification with this technique was accurate with errors of 9.83% and 6.61% for HA and iodine, respectively. Calibration at one point (one breast size) showed increased errors as the mismatch in breast diameters between calibration and measurement increased. A four-point calibration successfully decomposed breast diameter spanning the entire range from 8 to 20 cm. For a 14 cm breast, errors were reduced from 5.44% to 1.75% and from 6.17% to 3.27% with the multipoint calibration for HA and iodine, respectively. Conclusions: The results of the simulation study showed that a CT system based on CZT detectors in conjunction with least squares minimization technique can be used to decompose four materials. The calibrated least squares parameter estimation decomposition technique performed the best, separating and accurately quantifying the concentrations of hydroxyapatite and iodine. PMID:21361193
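
    The sketch below shows only the general least-squares idea behind decomposition techniques of this kind: per voxel, solve for the material amounts that best reproduce the measured multi-energy attenuation. The attenuation matrix, the bin count, and the concentrations are invented illustration numbers, not the paper's calibration data.

        # Hedged sketch: per-voxel least-squares material decomposition from multi-energy data.
        # mu[b, m] = effective attenuation of material m in energy bin b (made-up values).
        import numpy as np

        mu = np.array([            # 5 energy bins x 4 basis materials
            [0.80, 2.10, 0.25, 0.20],
            [0.55, 1.40, 0.23, 0.18],
            [0.40, 0.95, 0.21, 0.17],
            [0.30, 0.70, 0.20, 0.16],
            [0.24, 0.55, 0.19, 0.15],
        ])
        materials = ["hydroxyapatite", "iodine", "glandular", "adipose"]

        def decompose_voxel(measured_mu, mu_matrix):
            """Amounts x >= 0 minimizing || mu_matrix @ x - measured_mu ||_2."""
            x, *_ = np.linalg.lstsq(mu_matrix, measured_mu, rcond=None)
            return np.clip(x, 0.0, None)        # crude non-negativity constraint

        if __name__ == "__main__":
            true_x = np.array([0.10, 0.02, 0.60, 0.28])   # hypothetical composition
            measured = mu @ true_x + np.random.default_rng(1).normal(0, 0.002, size=5)
            for name, v in zip(materials, decompose_voxel(measured, mu)):
                print(f"{name:15s} {v:.3f}")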

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le, Huy Q.; Molloi, Sabee

    Purpose: Energy resolving detectors provide more than one spectral measurement in one image acquisition. The purpose of this study is to investigate, with simulation, the ability to decompose four materials using energy discriminating detectors and least squares minimization techniques. Methods: Three least squares parameter estimation decomposition techniques were investigated for four-material breast imaging tasks in the image domain. The first technique treats the voxel as if it consisted of fractions of all the materials. The second method assumes that a voxel primarily contains one material and divides the decomposition process into segmentation and quantification tasks. The third is similar to the second method but a calibration was used. The simulated computed tomography (CT) system consisted of an 80 kVp spectrum and a CdZnTe (CZT) detector that could resolve the x-ray spectrum into five energy bins. A postmortem breast specimen was imaged with flat panel CT to provide a model for the digital phantoms. Hydroxyapatite (HA) (50, 150, 250, 350, 450, and 550 mg/ml) and iodine (4, 12, 20, 28, 36, and 44 mg/ml) contrast elements were embedded into the glandular region of the phantoms. Calibration phantoms consisted of a 30/70 glandular-to-adipose tissue ratio with embedded HA (100, 200, 300, 400, and 500 mg/ml) and iodine (5, 15, 25, 35, and 45 mg/ml). The x-ray transport process was simulated where the Beer-Lambert law, Poisson process, and CZT absorption efficiency were applied. Qualitative and quantitative evaluations of the decomposition techniques were performed and compared. The effect of breast size was also investigated. Results: The first technique decomposed iodine adequately but failed for other materials. The second method separated the materials but was unable to quantify the materials. With the addition of a calibration, the third technique provided good separation and quantification of hydroxyapatite, iodine, glandular, and adipose tissues. Quantification with this technique was accurate with errors of 9.83% and 6.61% for HA and iodine, respectively. Calibration at one point (one breast size) showed increased errors as the mismatch in breast diameters between calibration and measurement increased. A four-point calibration successfully decomposed breast diameter spanning the entire range from 8 to 20 cm. For a 14 cm breast, errors were reduced from 5.44% to 1.75% and from 6.17% to 3.27% with the multipoint calibration for HA and iodine, respectively. Conclusions: The results of the simulation study showed that a CT system based on CZT detectors in conjunction with least squares minimization technique can be used to decompose four materials. The calibrated least squares parameter estimation decomposition technique performed the best, separating and accurately quantifying the concentrations of hydroxyapatite and iodine.

  10. Stress and decision making: neural correlates of the interaction between stress, executive functions, and decision making under risk.

    PubMed

    Gathmann, Bettina; Schulte, Frank P; Maderwald, Stefan; Pawlikowski, Mirko; Starcke, Katrin; Schäfer, Lena C; Schöler, Tobias; Wolf, Oliver T; Brand, Matthias

    2014-03-01

    Stress and additional load on the executive system, produced by a parallel working memory task, impair decision making under risk. However, the combination of stress and a parallel task seems to preserve the decision-making performance [e.g., operationalized by the Game of Dice Task (GDT)] from decreasing, probably by a switch from serial to parallel processing. The question remains how the brain manages such demanding decision-making situations. The current study used a 7-tesla magnetic resonance imaging (MRI) system in order to investigate the underlying neural correlates of the interaction between stress (induced by the Trier Social Stress Test), risky decision making (GDT), and a parallel executive task (2-back task) to get a better understanding of those behavioral findings. The results show that on a behavioral level, stressed participants did not show significant differences in task performance. Interestingly, when comparing the stress group (SG) with the control group, the SG showed a greater increase in neural activation in the anterior prefrontal cortex when performing the 2-back task simultaneously with the GDT than when performing each task alone. This brain area is associated with parallel processing. Thus, the results may suggest that in stressful dual-tasking situations, where a decision has to be made while working memory is demanded in parallel, a stronger activation of a brain area associated with parallel processing takes place. The findings are in line with the idea that stress seems to trigger a switch from serial to parallel processing in demanding dual-tasking situations.

  11. An optimality framework to predict decomposer carbon-use efficiency trends along stoichiometric gradients

    NASA Astrophysics Data System (ADS)

    Manzoni, S.; Capek, P.; Mooshammer, M.; Lindahl, B.; Richter, A.; Santruckova, H.

    2016-12-01

    Litter and soil organic matter decomposers feed on substrates with much wider C:N and C:P ratios than their own cellular composition, raising the question as to how they can adapt their metabolism to such a chronic stoichiometric imbalance. Here we propose an optimality framework to address this question, based on the hypothesis that carbon-use efficiency (CUE) can be optimally adjusted to maximize the decomposer growth rate. When nutrients are abundant, increasing CUE improves decomposer growth rate, at the expense of higher nutrient demand. However, when nutrients are scarce, increased nutrient demand driven by high CUE can trigger nutrient limitation and inhibit growth. An intermediate, `optimal' CUE ensures balanced growth on the verge of nutrient limitation. We derive a simple analytical equation that links this optimal CUE to organic substrate and decomposer biomass C:N and C:P ratios, and to the rate of inorganic nutrient supply (e.g., fertilization). This equation allows formulating two specific hypotheses: i) decomposer CUE should decrease with widening organic substrate C:N and C:P ratios, with a scaling exponent between 0 (with abundant inorganic nutrients) and -1 (scarce inorganic nutrients), and ii) CUE should increase with increasing inorganic nutrient supply, for a given organic substrate stoichiometry. These hypotheses are tested using a new database encompassing nearly 2000 estimates of CUE from about 160 studies, spanning aquatic and terrestrial decomposers of litter and more stabilized organic matter. The theoretical predictions are largely confirmed by our data analysis, except for the lack of fertilization effects on terrestrial decomposer CUE. While stoichiometric drivers constrain the general trends in CUE, the relatively large variability in CUE estimates suggests that other factors could be at play as well. For example, temperature is often cited as a potential driver of CUE, but we only found limited evidence of temperature effects, although in some subsets of data, temperature and substrate stoichiometry appeared to interact. Based on our results, the optimality principle can provide a solid (but still incomplete) framework to develop CUE models for large-scale applications.

  12. Parallel-hierarchical processing and classification of laser beam profile images based on the GPU-oriented architecture

    NASA Astrophysics Data System (ADS)

    Yarovyi, Andrii A.; Timchenko, Leonid I.; Kozhemiako, Volodymyr P.; Kokriatskaia, Nataliya I.; Hamdi, Rami R.; Savchuk, Tamara O.; Kulyk, Oleksandr O.; Surtel, Wojciech; Amirgaliyev, Yedilkhan; Kashaganova, Gulzhan

    2017-08-01

    The paper addresses the insufficient performance of existing computing systems for large-image processing, which does not meet the modern requirements posed by the resource-intensive computing tasks of laser beam profiling. The research concentrates on one of the profiling problems, namely real-time processing of spot images of the laser beam profile. The development of a theory of parallel-hierarchical transformation made it possible to produce models of high-performance parallel-hierarchical processes, as well as algorithms and software for their implementation on a GPU-oriented architecture using GPGPU technologies. The measured performance of the proposed tools for processing and classification of laser beam profile images allows real-time processing of dynamic images of various sizes.

  13. Rubus: A compiler for seamless and extensible parallelism.

    PubMed

    Adnan, Muhammad; Aslam, Faisal; Nawaz, Zubair; Sarwar, Syed Mansoor

    2017-01-01

    Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special purpose processing unit called Graphic Processing Unit (GPU), originally designed for 2D/3D games, is now available for general purpose use in computers and mobile devices. However, the traditional programming languages, which were designed to work with machines having single-core CPUs, cannot utilize the parallelism available on multi-core processors efficiently. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of code in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent in code optimizations. This paper proposes a new open source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without a programmer's expertise in parallel programming. For five different benchmarks, on average a speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores. For a matrix multiplication benchmark, an average execution speedup of 84 times was achieved by Rubus on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program.

  14. Rubus: A compiler for seamless and extensible parallelism

    PubMed Central

    Adnan, Muhammad; Aslam, Faisal; Sarwar, Syed Mansoor

    2017-01-01

    Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special purpose processing unit called Graphic Processing Unit (GPU), originally designed for 2D/3D games, is now available for general purpose use in computers and mobile devices. However, the traditional programming languages, which were designed to work with machines having single-core CPUs, cannot utilize the parallelism available on multi-core processors efficiently. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of code in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent in code optimizations. This paper proposes a new open source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without a programmer's expertise in parallel programming. For five different benchmarks, on average a speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores. For a matrix multiplication benchmark, an average execution speedup of 84 times was achieved by Rubus on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program. PMID:29211758

  15. Support for Debugging Automatically Parallelized Programs

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Hood, Robert; Biegel, Bryan (Technical Monitor)

    2001-01-01

    We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.

  16. Relative Debugging of Automatically Parallelized Programs

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Hood, Robert; Biegel, Bryan (Technical Monitor)

    2002-01-01

    We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular, the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.

  17. American option pricing in Gauss-Markov interest rate models

    NASA Astrophysics Data System (ADS)

    Galluccio, Stefano

    1999-07-01

    In the context of Gaussian non-homogeneous interest-rate models, we study the problem of American bond option pricing. In particular, we show how to efficiently compute the exercise boundary in these models in order to decompose the price as a sum of a European option and an American premium. Generalizations to coupon-bearing bonds and jump-diffusion processes for the interest rates are also discussed.

  18. Molybdenum enhanced low-temperature deposition of crystalline silicon nitride

    DOEpatents

    Lowden, R.A.

    1994-04-05

    A process for chemical vapor deposition of crystalline silicon nitride is described which comprises the steps of: introducing a mixture of a silicon source, a molybdenum source, a nitrogen source, and a hydrogen source into a vessel containing a suitable substrate; and thermally decomposing the mixture to deposit onto the substrate a coating comprising crystalline silicon nitride containing a dispersion of molybdenum silicide. 5 figures.

  19. Hidden Statistics of Schroedinger Equation

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2011-01-01

    Work was carried out in determination of the mathematical origin of randomness in quantum mechanics and creating a hidden statistics of the Schrödinger equation; i.e., to expose the transitional stochastic process as a "bridge" to the quantum world. The governing equations of hidden statistics would preserve such properties of quantum physics as superposition, entanglement, and direct-product decomposability while allowing one to measure its state variables using classical methods.

  20. Program Helps Decompose Complex Design Systems

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Hall, Laura E.

    1994-01-01

    DeMAID (A Design Manager's Aid for Intelligent Decomposition) computer program is knowledge-based software system for ordering sequence of modules and identifying possible multilevel structure for design problem. Groups modular subsystems on basis of interactions among them. Saves considerable money and time in total design process, particularly in new design problem in which order of modules has not been defined. Available in two machine versions: Macintosh and Sun.

  1. Wood decomposition following clearcutting at Coweeta Hydrologic Laboratory

    Treesearch

    Kim G. Mattson; Wayne T. Swank

    2014-01-01

    Most of the forest on Watershed (WS) 7 was cut and left on site to decompose. This chapter describes the rate and manner of wood decomposition and also quantifies the fluxes from decaying wood to the forest floor on WS 7. In doing so, we make the case that wood and its process of decomposition contribute to ecosystem stability. We also review some of the history of...

  2. Catalytic Ignition of Ionic Liquid Fuels by Ionic Liquids

    DTIC Science & Technology

    2014-07-01

    [OCR residue from a DTIC report documentation form; only fragments are recoverable.] Recoverable details: in-house briefing charts, "Catalytic Ignition of Ionic Liquid Fuels by Ionic Liquids," covering July-August 2014; distribution not approved through the STINFO process. The work concerns a catalytic approach for decomposing hydrogen peroxide, considered as a more environmentally friendly oxidizer than the highly hazardous alternatives.

  3. Using pattern based layout comparison for a quick analysis of design changes

    NASA Astrophysics Data System (ADS)

    Huang, Lucas; Yang, Legender; Kan, Huan; Zou, Elain; Wan, Qijian; Du, Chunshan; Hu, Xinyi; Liu, Zhengfang

    2018-03-01

    A design usually goes through several versions until the most successful one is achieved. The changes between versions are not a complete substitution but a continual improvement, either fixing known issues of the prior versions (an engineering change order) or substituting a more optimized design for a portion of the layout. On the manufacturing side, process engineers care more about design pattern changes, because any new pattern occurrence may be a yield killer. An effective and efficient way to narrow down the diagnosis scope appeals to the engineers. What is the best approach to comparing two layouts? A direct overlay of two layouts may not always work: even though most of the design instances are kept from version to version, the actual placements may differ. An alternative approach, pattern-based layout comparison, comes into play. By expanding this application, it becomes possible to transfer the learning from one cycle to another and accelerate failure analysis. This paper presents a solution for comparing two layouts using Calibre DRC and Pattern Matching. The key step in this flow is layout decomposition. In theory, with a fixed pattern size, a layout can always be decomposed into a limited number of patterns by moving the pattern center around the layout; the number is limited but may be huge if the layout is not processed smartly. A purely mathematical answer is not what we are looking for; an engineering solution is desired. Layouts must be decomposed in a smart way into patterns with physical meaning. When a layout is decomposed and its patterns are classified, a pattern library containing the unique patterns is created for that layout. After individual pattern libraries have been created for each layout, the pattern comparison utility provided by Calibre Pattern Matching is run to compare the libraries, and the patterns unique to each layout are reported. This paper illustrates this flow in detail and demonstrates the advantage of combining Calibre DRC and Calibre Pattern Matching.
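
    As a toy illustration of the decompose-classify-compare idea (on rasterized arrays rather than real layout data, and without Calibre), the following sketch slides a fixed-size window over two binary layout images, hashes every window into a pattern library, and reports the patterns unique to each version. The window size, step, and example arrays are arbitrary choices.

        # Hedged sketch: pattern-based comparison of two rasterized layouts.
        # Each fixed-size window is hashed; the sets of unique window hashes are compared.
        import numpy as np

        def pattern_library(layout, size=4, step=2):
            """Decompose a binary layout into a set of hashed size x size patterns."""
            patterns = set()
            rows, cols = layout.shape
            for r in range(0, rows - size + 1, step):
                for c in range(0, cols - size + 1, step):
                    patterns.add(layout[r:r + size, c:c + size].tobytes())
            return patterns

        def compare_layouts(old, new, size=4, step=2):
            lib_old = pattern_library(old, size, step)
            lib_new = pattern_library(new, size, step)
            return lib_new - lib_old, lib_old - lib_new   # patterns unique to each version

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            old = (rng.random((64, 64)) > 0.5).astype(np.uint8)
            new = old.copy()
            new[10:20, 10:20] ^= 1                        # a local "engineering change"
            only_new, only_old = compare_layouts(old, new)
            print(len(only_new), "patterns only in the new layout;",
                  len(only_old), "only in the old one")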

  4. High speed infrared imaging system and method

    DOEpatents

    Zehnder, Alan T.; Rosakis, Ares J.; Ravichandran, G.

    2001-01-01

    A system and method for radiation detection with an increased frame rate. A semi-parallel processing configuration is used to process a row or column of pixels in a focal-plane array in parallel to achieve a processing rate up to and greater than 1 million frames per second.

  5. Idle waves in high-performance computing

    NASA Astrophysics Data System (ADS)

    Markidis, Stefano; Vencels, Juris; Peng, Ivy Bo; Akhmetova, Dana; Laure, Erwin; Henri, Pierre

    2015-01-01

    The vast majority of parallel scientific applications distributes computation among processes that are in a busy state when computing and in an idle state when waiting for information from other processes. We identify the propagation of idle waves through processes in scientific applications with a local information exchange between the two processes. Idle waves are nondispersive and have a phase velocity inversely proportional to the average busy time. The physical mechanism enabling the propagation of idle waves is the local synchronization between two processes due to remote data dependency. This study provides a description of the large number of processes in parallel scientific applications as a continuous medium. This work also is a step towards an understanding of how localized idle periods can affect remote processes, leading to the degradation of global performance in parallel scientific applications.

  6. The science of computing - Parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1985-01-01

    Although parallel computation architectures have been known for computers since the 1920s, it was only in the 1970s that microelectronic components technologies advanced to the point where it became feasible to incorporate multiple processors in one machine. Concomitantly, the development of algorithms for parallel processing also lagged due to hardware limitations. The speed of computing with solid-state chips is limited by gate switching delays. The physical limit implies that a 1 Gflop operational speed is the maximum for sequential processors. A computer recently introduced features a 'hypercube' architecture with 128 processors connected in networks at 5, 6 or 7 points per grid, depending on the design choice. Its computing speed rivals that of supercomputers, but at a fraction of the cost. The added speed with less hardware is due to parallel processing, which utilizes algorithms representing different parts of an equation that can be broken into simpler statements and processed simultaneously. Present, highly developed computer languages like FORTRAN, PASCAL, COBOL, etc., rely on sequential instructions. Thus, increased emphasis will now be directed at parallel processing algorithms to exploit the new architectures.

  7. Expressing Parallelism with ROOT

    NASA Astrophysics Data System (ADS)

    Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.

    2017-10-01

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  8. Expressing Parallelism with ROOT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piparo, D.; Tejedor, E.; Guiraud, E.

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  9. File concepts for parallel I/O

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1989-01-01

    The subject of input/output (I/O) was often neglected in the design of parallel computer systems, although for many problems I/O rates will limit the speedup attainable. The I/O problem is addressed by considering the role of files in parallel systems. The notion of parallel files is introduced. Parallel files provide for concurrent access by multiple processes, and utilize parallelism in the I/O system to improve performance. Parallel files can also be used conventionally by sequential programs. A set of standard parallel file organizations is proposed, and implementations using multiple storage devices are suggested. Problem areas are also identified and discussed.

  10. Modeling and optimum time performance for concurrent processing

    NASA Technical Reports Server (NTRS)

    Mielke, Roland R.; Stoughton, John W.; Som, Sukhamoy

    1988-01-01

    The development of a new graph theoretic model for describing the relation between a decomposed algorithm and its execution in a data flow environment is presented. Called ATAMM, the model consists of a set of Petri net marked graphs useful for representing decision-free algorithms having large-grained, computationally complex primitive operations. Performance time measures which determine computing speed and throughput capacity are defined, and the ATAMM model is used to develop lower bounds for these times. A concurrent processing operating strategy for achieving optimum time performance is presented and illustrated by example.

  11. Precipitation of lamellar gold nanocrystals in molten polymers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palomba, M.; Carotenuto, G., E-mail: giancaro@unina.it

    Non-aggregated lamellar gold crystals with regular shapes (triangles, squares, pentagons, etc.) have been produced by thermal decomposition of gold chloride (AuCl) molecules in molten amorphous polymers (polystyrene and poly(methyl methacrylate)). This covalent inorganic gold salt is highly soluble in non-polar polymers and thermally decomposes at temperatures compatible with the polymers' thermal stability, producing gold atoms and chlorine radicals. At the end of the gold precipitation process, the polymer matrix was chemically modified because of partial cross-linking caused by the gold atom formation reaction.

  12. Hydrogen and elemental carbon production from natural gas and other hydrocarbons

    DOEpatents

    Detering, Brent A.; Kong, Peter C.

    2002-01-01

    Diatomic hydrogen and unsaturated hydrocarbons are produced as reactor gases in a fast quench reactor. During the fast quench, the unsaturated hydrocarbons are further decomposed by reheating the reactor gases. More diatomic hydrogen is produced, along with elemental carbon. Other gas may be added at different stages in the process to form a desired end product and prevent back reactions. The product is a substantially clean-burning hydrogen fuel that leaves no greenhouse gas emissions, and elemental carbon that may be used in powder form as a commodity for several processes.

  13. Periodic, On-Demand, and User-Specified Information Reconciliation

    NASA Technical Reports Server (NTRS)

    Kolano, Paul

    2007-01-01

    Automated sequence generation (autogen) signifies both a process and software used to automatically generate sequences of commands to operate various spacecraft. Autogen requires fewer workers than are needed for older manual sequence-generation processes and reduces sequence-generation times from weeks to minutes. The autogen software comprises the autogen script plus the Activity Plan Generator (APGEN) program. APGEN can be used for planning missions and command sequences. APGEN includes a graphical user interface that facilitates scheduling of activities on a time line and affords a capability to automatically expand, decompose, and schedule activities.

  14. Process for converting magnesium fluoride to calcium fluoride

    DOEpatents

    Kreuzmann, A.B.; Palmer, D.A.

    1984-12-21

    This invention is a process for the conversion of magnesium fluoride to calcium fluoride whereby magnesium fluoride is decomposed by heating in the presence of calcium carbonate, calcium oxide or calcium hydroxide. Magnesium fluoride is a by-product of the reduction of uranium tetrafluoride to form uranium metal and has no known commercial use, thus its production creates a significant storage problem. The advantage of this invention is that the quality of calcium fluoride produced is sufficient to be used in the industrial manufacture of anhydrous hydrogen fluoride, steel mill flux or ceramic applications.

  15. Object-oriented software for evaluating measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Hall, B. D.

    2013-05-01

    An earlier publication (Hall 2006 Metrologia 43 L56-61) introduced the notion of an uncertain number that can be used in data processing to represent quantity estimates with associated uncertainty. The approach can be automated, allowing data processing algorithms to be decomposed into convenient steps, so that complicated measurement procedures can be handled. This paper illustrates the uncertain-number approach using several simple measurement scenarios and two different software tools. One is an extension library for Microsoft Excel®. The other is a special-purpose calculator using the Python programming language.
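
    As a rough illustration of the uncertain-number idea (and not the API of either software tool described in the paper), the sketch below carries a value and a standard uncertainty through each arithmetic step using first-order, GUM-style propagation and assuming uncorrelated inputs, so that a longer measurement calculation can be decomposed into simple operations.

      import math
      from dataclasses import dataclass

      @dataclass
      class Uncertain:
          """A quantity estimate with a standard uncertainty (inputs assumed uncorrelated)."""
          value: float
          u: float

          def __add__(self, other):
              return Uncertain(self.value + other.value, math.hypot(self.u, other.u))

          def __mul__(self, other):
              v = self.value * other.value
              # First-order propagation: combine relative uncertainties in quadrature.
              return Uncertain(v, abs(v) * math.hypot(self.u / self.value, other.u / other.value))

      # Hypothetical example: electrical power P = V * I evaluated stepwise.
      voltage = Uncertain(5.000, 0.010)   # volts
      current = Uncertain(0.100, 0.001)   # amperes
      power = voltage * current
      print(f"P = {power.value:.4f} W, u(P) = {power.u:.4f} W")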

  16. Process and apparatus for obtaining silicon from fluosilicic acid

    DOEpatents

    Sanjurjo, Angel

    1988-06-28

    Process and apparatus for producing low cost, high purity solar grade silicon ingots in single crystal or quasi single crystal ingot form in a substantially continuous operation in a two stage reactor starting with sodium fluosilicate and a metal more electropositive than silicon (preferably sodium) in separate compartments having easy vapor transport therebetween and thermally decomposing the sodium fluosilicate to cause formation of substantially pure silicon and a metal fluoride which may be continuously separated in the melt and silicon may be directly and continuously cast from the melt.

  17. Synchronization Of Parallel Discrete Event Simulations

    NASA Technical Reports Server (NTRS)

    Steinman, Jeffrey S.

    1992-01-01

    Adaptive, parallel, discrete-event-simulation-synchronization algorithm, Breathing Time Buckets, developed in Synchronous Parallel Environment for Emulation and Discrete Event Simulation (SPEEDES) operating system. Algorithm allows parallel simulations to process events optimistically in fluctuating time cycles that naturally adapt while simulation is in progress. Combines best of optimistic and conservative synchronization strategies while avoiding major disadvantages of each. Well suited for modeling communication networks, for large-scale war games, for simulated flights of aircraft, for simulations of computer equipment, for mathematical modeling, for interactive engineering simulations, and for depictions of flows of information.
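
    A minimal single-process sketch of the "event horizon" idea behind Breathing Time Buckets follows: events are processed optimistically within a cycle, newly generated events are held back, and the cycle ends at the earliest timestamp among the held events. The event handler and timestamps are invented and rollback is omitted, so this illustrates only the adaptive time-cycle concept, not the SPEEDES implementation.

      import heapq

      pending = [(1.0, "a"), (2.5, "b"), (4.0, "c")]   # hypothetical initial events
      heapq.heapify(pending)

      def handle(time, name):
          """Hypothetical event handler: each event schedules one follow-up event."""
          return [(time + 3.0, name + "+")]

      cycle = 0
      while pending and cycle < 3:
          cycle += 1
          generated = []
          horizon = float("inf")
          # Optimistically process every known event that precedes the current horizon.
          while pending and pending[0][0] < horizon:
              t, name = heapq.heappop(pending)
              new_events = handle(t, name)
              generated.extend(new_events)
              horizon = min(horizon, min(ts for ts, _ in new_events))
          # Release the held events; the next cycle starts at the event horizon.
          for ev in generated:
              heapq.heappush(pending, ev)
          print(f"cycle {cycle}: event horizon = {horizon}")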

  18. Knowledge representation into Ada parallel processing

    NASA Technical Reports Server (NTRS)

    Masotto, Tom; Babikyan, Carol; Harper, Richard

    1990-01-01

    The Knowledge Representation into Ada Parallel Processing project is a joint NASA and Air Force funded project to demonstrate the execution of intelligent systems in Ada on the Charles Stark Draper Laboratory fault-tolerant parallel processor (FTPP). Two applications were demonstrated - a portion of the adaptive tactical navigator and a real time controller. Both systems are implemented as Activation Framework Objects on the Activation Framework intelligent scheduling mechanism developed by Worcester Polytechnic Institute. The implementations, results of performance analyses showing speedup due to parallelism and initial efficiency improvements are detailed and further areas for performance improvements are suggested.

  19. Measurement of in vivo stress resultants in neurulation-stage amphibian embryos.

    PubMed

    Benko, Richard; Brodland, G Wayne

    2007-04-01

    In order to obtain the first quantitative measurements of the in vivo stresses in early-stage amphibian embryos, we developed a novel instrument that uses a pair of parallel wires that are glued to the surface of an embryo normal to the direction in which the stress is to be determined. When a slit is made parallel to the wires and between them, tension in the surrounding tissue causes the slit to open. Under computer control, one of the wires is moved so as to restore the original wire spacing, and the steady-state closure force is determined from the degree of wire flexure. A cell-level finite element model is used to convert the wire bending force to an in-plane stress since the wire force is not proportional to the slit length. The device was used to measure stress resultants (force carried per unit of slit length) on the dorsal, ventral and lateral aspects of neurulation-stage axolotl (Ambystoma mexicanum) embryos. The resultants were anisotropic and varied with location and developmental stage, with values ranging from -0.17 mN/m to 1.92 mN/m. In general, the resultants could be decomposed into patterns associated with internal pressure in the embryo, bending of the embryo along its mid-sagittal plane and neural tube closure. The patterns of stress revealed by the experiments support a number of current theories about the mechanics of neurulation.

  20. A Subspace Approach to the Structural Decomposition and Identification of Ankle Joint Dynamic Stiffness.

    PubMed

    Jalaleddini, Kian; Tehrani, Ehsan Sobhani; Kearney, Robert E

    2017-06-01

    The purpose of this paper is to present a structural decomposition subspace (SDSS) method for decomposition of the joint torque to intrinsic, reflexive, and voluntary torques and identification of joint dynamic stiffness. First, it formulates a novel state-space representation for the joint dynamic stiffness modeled by a parallel-cascade structure with a concise parameter set that provides a direct link between the state-space representation matrices and the parallel-cascade parameters. Second, it presents a subspace method for the identification of the new state-space model that involves two steps: 1) the decomposition of the intrinsic and reflex pathways and 2) the identification of an impulse response model of the intrinsic pathway and a Hammerstein model of the reflex pathway. Extensive simulation studies demonstrate that SDSS has significant performance advantages over some other methods. Thus, SDSS was more robust under high noise conditions, converging where others failed; it was more accurate, giving estimates with lower bias and random errors. The method also worked well in practice and yielded high-quality estimates of intrinsic and reflex stiffnesses when applied to experimental data at three muscle activation levels. The simulation and experimental results demonstrate that SDSS accurately decomposes the intrinsic and reflex torques and provides accurate estimates of physiologically meaningful parameters. SDSS will be a valuable tool for studying joint stiffness under functionally important conditions. It has important clinical implications for the diagnosis, assessment, objective quantification, and monitoring of neuromuscular diseases that change the muscle tone.

  1. Highly scalable parallel processing of extracellular recordings of Multielectrode Arrays.

    PubMed

    Gehring, Tiago V; Vasilaki, Eleni; Giugliano, Michele

    2015-01-01

    Technological advances in Multielectrode Arrays (MEAs) used for multisite, parallel electrophysiological recordings lead to an ever-increasing amount of raw data being generated. Arrays with hundreds up to a few thousands of electrodes are slowly seeing widespread use, and the expectation is that more sophisticated arrays will become available in the near future. In order to process the large data volumes resulting from MEA recordings there is a pressing need for new software tools able to process many data channels in parallel. Here we present a new tool for processing MEA data recordings that makes use of new programming paradigms and recent technology developments to unleash the power of modern highly parallel hardware, such as multi-core CPUs with vector instruction sets or GPGPUs. Our tool builds on and complements existing MEA data analysis packages. It shows high scalability and can be used to speed up some performance-critical pre-processing steps such as data filtering and spike detection, helping to make the analysis of larger data sets tractable.
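
    As an illustration of the per-channel parallelism described above (and not the tool's actual pipeline), the sketch below filters and thresholds each channel of a synthetic recording in a separate worker process. The sampling rate, filter band, and threshold rule are generic placeholder choices.

      import numpy as np
      from multiprocessing import Pool
      from scipy.signal import butter, sosfiltfilt

      FS = 25_000  # sampling rate in Hz (hypothetical)

      def detect_spikes(channel):
          """Band-pass filter one channel and return sample indices below -4*sigma."""
          sos = butter(4, [300, 3000], btype="bandpass", fs=FS, output="sos")
          filtered = sosfiltfilt(sos, channel)
          threshold = -4.0 * np.median(np.abs(filtered)) / 0.6745  # robust noise estimate
          return np.where(filtered < threshold)[0]

      if __name__ == "__main__":
          # Synthetic stand-in for a MEA recording: 64 channels x 1 s of noise.
          rng = np.random.default_rng(0)
          data = rng.standard_normal((64, FS))

          with Pool() as pool:                        # one worker per CPU core by default
              spikes = pool.map(detect_spikes, data)  # channels processed in parallel
          print([len(s) for s in spikes[:4]])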

  2. Constituent order and semantic parallelism in online comprehension: eye-tracking evidence from German.

    PubMed

    Knoeferle, Pia; Crocker, Matthew W

    2009-12-01

    Reading times for the second conjunct of and-coordinated clauses are faster when the second conjunct parallels the first conjunct in its syntactic or semantic (animacy) structure than when its structure differs (Frazier, Munn, & Clifton, 2000; Frazier, Taft, Roeper, & Clifton, 1984). What remains unclear, however, is the time course of parallelism effects, their scope, and the kinds of linguistic information to which they are sensitive. Findings from the first two eye-tracking experiments revealed incremental constituent order parallelism across the board-both during structural disambiguation (Experiment 1) and in sentences with unambiguously case-marked constituent order (Experiment 2), as well as for both marked and unmarked constituent orders (Experiments 1 and 2). Findings from Experiment 3 revealed effects of both constituent order and subtle semantic (noun phrase similarity) parallelism. Together our findings provide evidence for an across-the-board account of parallelism for processing and-coordinated clauses, in which both constituent order and semantic aspects of representations contribute towards incremental parallelism effects. We discuss our findings in the context of existing findings on parallelism and priming, as well as mechanisms of sentence processing.

  3. Six Years of Parallel Computing at NAS (1987 - 1993): What Have we Learned?

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    In the fall of 1987 the age of parallelism at NAS began with the installation of a 32K-processor CM-2 from Thinking Machines. In 1987 this was described as an "experiment" in parallel processing. In the six years since, NAS acquired a series of parallel machines and conducted an active research and development effort focused on the use of highly parallel machines for applications in the computational aerosciences. In this time period parallel processing for scientific applications evolved from a fringe research topic into one of the main activities at NAS. In this presentation I will review the history of parallel computing at NAS in the context of the major progress that has been made in the field in general. I will attempt to summarize the lessons we have learned so far, and the contributions NAS has made to the state of the art. Based on these insights I will comment on the current state of parallel computing (including the HPCC effort) and try to predict some trends for the next six years.

  4. A comparative study of the microstructures observed in statically cast and continuously cast Bi-In-Sn ternary eutectic alloy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sengupta, S.; Soda, H.; McLean, A.

    2000-01-01

    A ternary eutectic alloy with a composition of 57.2 pct Bi, 24.8 pct In, and 18 pct Sn was continuously cast into wire of 2 mm diameter with casting speeds of 14 and 79 mm/min using the Ohno Continuous Casting (OCC) process. The microstructures obtained were compared with those of statically cast specimens. Extensive segregation of massive Bi blocks, Bi complex structures, and tin-rich dendrites was found in specimens that were statically cast. Decomposition of {radical}Sn by a eutectoid reaction was confirmed based on microstructural evidence. Ternary eutectic alloy with a cooling rate of approximately 1 C/min formed a double binary eutectic. The double binary eutectic consisted of regions of BiIn and decomposed {radical}Sn in the form of a dendrite cell structure and regions of Bi and decomposed {radical}Sn in the form of a complex-regular cell. The Bi complex-regular cells, which are a ternary eutectic constituent, existed either along the boundaries of the BiIn-decomposed {radical}Sn dendrite cells or at the front of elongated dendrite cell structures. In the continuously cast wires, primary Sn dendrites coupled with a small Bi phase were uniformly distributed within the Bi-In alloy matrix. Neither massive Bi phase, Bi complex-regular cells, nor BiIn eutectic dendrite cells were observed, resulting in a more uniform microstructure in contrast to the heavily segregated structures of the statically cast specimens.

  5. Simulated nitrogen deposition affects wood decomposition by cord-forming fungi.

    PubMed

    Bebber, Daniel P; Watkinson, Sarah C; Boddy, Lynne; Darrah, Peter R

    2011-12-01

    Anthropogenic nitrogen (N) deposition affects many natural processes, including forest litter decomposition. Saprotrophic fungi are the only organisms capable of completely decomposing lignocellulosic (woody) litter in temperate ecosystems, and therefore the responses of fungi to N deposition are critical in understanding the effects of global change on the forest carbon cycle. Plant litter decomposition under elevated N has been intensively studied, with varying results. The complexity of forest floor biota and variability in litter quality have obscured N-elevation effects on decomposers. Field experiments often utilize standardized substrates and N-levels, but few studies have controlled the decay organisms. Decomposition of beech (Fagus sylvatica) blocks inoculated with two cord-forming basidiomycete fungi, Hypholoma fasciculare and Phanerochaete velutina, was compared experimentally under realistic levels of simulated N deposition at Wytham Wood, Oxfordshire, UK. Mass loss was greater with P. velutina than with H. fasciculare, and with N treatment than in the control. Decomposition was accompanied by growth of the fungal mycelium and increasing N concentration in the remaining wood. We attribute the N effect on wood decay to the response of cord-forming wood decay fungi to N availability. Previous studies demonstrated the capacity of these fungi to scavenge and import N to decaying wood via a translocating network of mycelium. This study shows that small increases in N availability can increase wood decomposition by these organisms. Dead wood is an important carbon store and habitat. The responses of wood decomposers to anthropogenic N deposition should be considered in models of forest carbon dynamics.

  6. A parallel algorithm for the two-dimensional time fractional diffusion equation with implicit difference method.

    PubMed

    Gong, Chunye; Bao, Weimin; Tang, Guojian; Jiang, Yuewen; Liu, Jie

    2014-01-01

    It is very time consuming to solve fractional differential equations. The computational complexity of the two-dimensional time fractional diffusion equation (2D-TFDE) solved with an iterative implicit finite difference method is O(M_x M_y N^2). In this paper, we present a parallel algorithm for the 2D-TFDE and give an in-depth discussion of this algorithm. A task distribution model and a data layout with virtual boundaries are designed for this parallel algorithm. The experimental results show that the parallel algorithm agrees well with the exact solution. The parallel algorithm on a single Intel Xeon X5540 CPU runs 3.16-4.17 times faster than the serial algorithm on a single CPU core. The parallel efficiency of 81 processes is up to 88.24% compared with 9 processes on a distributed-memory cluster system. We expect parallel computing to become a basic tool for computationally intensive fractional applications in the near future.
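
    The quoted relative efficiency can be checked against the standard definition: the efficiency of p processes measured relative to a baseline of p0 processes is (p0 * T_p0) / (p * T_p). The wall-clock times below are hypothetical values chosen only to be consistent with the 88.24% figure; the paper's actual timings are not given in this abstract.

      def relative_efficiency(p_base, t_base, p, t):
          """Parallel efficiency of p processes relative to a p_base-process baseline run."""
          return (p_base * t_base) / (p * t)

      # Hypothetical wall-clock times (seconds), consistent with ~88% efficiency for 81 vs. 9 processes.
      t9, t81 = 100.0, 12.59
      print(f"relative efficiency: {relative_efficiency(9, t9, 81, t81):.2%}")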

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, J. A. M.; Jiang, J.; Post, W. M.

    Carbon cycle models often lack explicit belowground organism activity, yet belowground organisms regulate carbon storage and release in soil. Ectomycorrhizal fungi are important players in the carbon cycle because they are a conduit into soil for carbon assimilated by the plant. It is hypothesized that ectomycorrhizal fungi can also be active decomposers when plant carbon allocation to fungi is low. Here, we reviewed the literature on ectomycorrhizal decomposition and we developed a simulation model of the plant-mycorrhizae interaction where a reduction in plant productivity stimulates ectomycorrhizal fungi to decompose soil organic matter. Our review highlights evidence demonstrating the potential for ectomycorrhizal fungi to decompose soil organic matter. Our model output suggests that ectomycorrhizal activity accounts for a portion of carbon decomposed in soil, but this portion varied with plant productivity and the mycorrhizal carbon uptake strategy simulated. Lower organic matter inputs to soil were largely responsible for reduced soil carbon storage. Using mathematical theory, we demonstrated that biotic interactions affect predictions of ecosystem functions. Specifically, we developed a simple function to model the mycorrhizal switch in function from plant symbiont to decomposer. In conclusion, we show that including mycorrhizal fungi with the flexibility of mutualistic and saprotrophic lifestyles alters predictions of ecosystem function.

  8. Mathematical Abstraction: Constructing Concept of Parallel Coordinates

    NASA Astrophysics Data System (ADS)

    Nurhasanah, F.; Kusumah, Y. S.; Sabandar, J.; Suryadi, D.

    2017-09-01

    Mathematical abstraction is an important process in teaching and learning mathematics, so pre-service mathematics teachers need to understand and experience this process. One of the theoretical-methodological frameworks for studying this process is Abstraction in Context (AiC). Based on this framework, the abstraction process comprises the observable epistemic actions Recognition, Building-With, Construction, and Consolidation, referred to as the RBC + C model. This study investigates and analyzes how pre-service mathematics teachers constructed and consolidated the concept of Parallel Coordinates in a group discussion. It uses the AiC framework to analyze the mathematical abstraction of a group of four pre-service teachers learning Parallel Coordinates concepts. The data were collected through video recording, students' worksheets, a test, and field notes. The result shows that the students' prior knowledge of the Cartesian coordinate system played a significant role in the process of constructing the Parallel Coordinates concept as new knowledge. The consolidation process was influenced by the social interaction between group members. The abstraction processes that took place in this group were dominated by empirical abstraction, which emphasizes identifying characteristics of manipulated or imagined objects during the processes of recognizing and building-with.

  9. Linkages between below and aboveground communities: Decomposer responses to simulated tree species loss are largely additive.

    Treesearch

    Becky A. Ball; Mark A. Bradford; Dave C. Coleman; Mark D. Hunter

    2009-01-01

    Inputs of aboveground plant litter influence the abundance and activities of belowground decomposer biota. Litter-mixing studies have examined whether the diversity and heterogeneity of litter inputs...

  10. Tunable color parallel tandem organic light emitting devices with carbon nanotube and metallic sheet interlayers

    NASA Astrophysics Data System (ADS)

    Oliva, Jorge; Papadimitratos, Alexios; Desirena, Haggeo; De la Rosa, Elder; Zakhidov, Anvar A.

    2015-11-01

    Parallel tandem organic light emitting devices (OLEDs) were fabricated with transparent multiwall carbon nanotube sheets (MWCNT) and thin metal films (Al, Ag) as interlayers. In the parallel monolithic tandem architecture, the MWCNT (or metallic film) interlayer is an active electrode which injects like charges into both subunits. In the case of parallel tandems with a common anode (C.A.) in this study, holes are injected into the top and bottom subunits from the common interlayer electrode, whereas in the common cathode (C.C.) configuration, electrons are injected into the top and bottom subunits. Both subunits of the tandem can thus be functionally connected in a monolithic active structure in which each subunit can be electrically addressed separately. Our tandem OLEDs have a polymer emitter in the bottom subunit and a small-molecule emitter in the top subunit. We also compared the performance of the parallel tandem with that of the in-series tandem; the additional advantages of the parallel architecture over the in-series one were tunable chromaticity, lower-voltage operation, and higher brightness. Finally, we demonstrate that processing of the MWCNT sheets as a common anode in parallel tandems is an easy and low-cost process, since their integration as electrodes in OLEDs is achieved by a simple dry lamination process.

  11. Relationship between mathematical abstraction in learning parallel coordinates concept and performance in learning analytic geometry of pre-service mathematics teachers: an investigation

    NASA Astrophysics Data System (ADS)

    Nurhasanah, F.; Kusumah, Y. S.; Sabandar, J.; Suryadi, D.

    2018-05-01

    As one of the non-conventional mathematics concepts, Parallel Coordinates has the potential to be learned by pre-service mathematics teachers in order to give them experience in constructing richer schemes and carrying out the abstraction process. Unfortunately, research related to this issue is still limited. This study addresses the research question "to what extent the abstraction process of pre-service mathematics teachers in learning the concept of Parallel Coordinates could indicate their performance in learning Analytic Geometry". This is a case study that is part of a larger study examining the mathematical abstraction of pre-service mathematics teachers learning a non-conventional mathematics concept. Descriptive statistics are used to analyze the scores from three different tests: Cartesian Coordinate, Parallel Coordinates, and Analytic Geometry. The participants in this study were 45 pre-service mathematics teachers. The result shows that there is a linear association between the scores on Cartesian Coordinate and Parallel Coordinates. It was also found that higher levels of the abstraction process in learning Parallel Coordinates are linearly associated with higher student achievement in Analytic Geometry. The result of this study shows that the concept of Parallel Coordinates has a significant role for pre-service mathematics teachers in learning Analytic Geometry.

  12. Bi-Level Integrated System Synthesis (BLISS) for Concurrent and Distributed Processing

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Altus, Troy D.; Phillips, Matthew; Sandusky, Robert

    2002-01-01

    The paper introduces a new version of the Bi-Level Integrated System Synthesis (BLISS) methods intended for optimization of engineering systems conducted by distributed specialty groups working concurrently and using a multiprocessor computing environment. The method decomposes the overall optimization task into subtasks associated with disciplines or subsystems where the local design variables are numerous and a single, system-level optimization whose design variables are relatively few. The subtasks are fully autonomous as to their inner operations and decision making. Their purpose is to eliminate the local design variables and generate a wide spectrum of feasible designs whose behavior is represented by Response Surfaces to be accessed by a system-level optimization. It is shown that, if the problem is convex, the solution of the decomposed problem is the same as that obtained without decomposition. A simplified example of an aircraft design shows the method working as intended. The paper includes a discussion of the method merits and demerits and recommendations for further research.

  13. A bio-anodic filter facilitated entrapment, decomposition and in situ oxidation of algal biomass in wastewater effluent.

    PubMed

    Mohammadi Khalfbadam, Hassan; Cheng, Ka Yu; Sarukkalige, Ranjan; Kaksonen, Anna H; Kayaalp, Ahmet S; Ginige, Maneesha P

    2016-09-01

    This study examined for the first time the use of bioelectrochemical systems (BES) to entrap, decompose and oxidise fresh algal biomass from an algae-laden effluent. The experimental process consisted of a photobioreactor for a continuous production of the algal-laden effluent, and a two-chamber BES equipped with anodic graphite granules and carbon-felt to physically remove and oxidise algal biomass from the influent. Results showed that the BES filter could retain ca. 90% of the suspended solids (SS) loaded. A coulombic efficiency (CE) of 36.6% (based on particulate chemical oxygen demand (PCOD) removed) was achieved, which was consistent with the highest CEs of BES studies (operated in microbial fuel cell mode (MFC)) that included additional pre-treatment steps for algae hydrolysis. Overall, this study suggests that a filter type BES anode can effectively entrap, decompose and in situ oxidise algae without the need for a separate pre-treatment step. Copyright © 2016 Elsevier Ltd. All rights reserved.
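
    The abstract does not show the CE calculation, but a COD-based definition commonly used in microbial fuel cell studies divides the charge actually harvested by the charge equivalent of the COD removed, using 8 g of COD (as O2) per mole of electrons. The sketch below applies that definition with purely illustrative numbers not taken from the study.

      F = 96485.0            # Faraday constant, C per mol of electrons
      G_COD_PER_MOL_E = 8.0  # grams of COD (as O2) per mol of electrons (32 g/mol over 4 e-)

      def coulombic_efficiency(charge_coulombs, delta_cod_g_per_l, volume_l):
          """CE = charge recovered / charge equivalent of the COD removed."""
          theoretical_charge = F * delta_cod_g_per_l * volume_l / G_COD_PER_MOL_E
          return charge_coulombs / theoretical_charge

      # Hypothetical example: 500 C harvested while 0.12 g/L of particulate COD was removed from 1.0 L.
      print(f"CE = {coulombic_efficiency(500.0, 0.12, 1.0):.1%}")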

  14. Electron attachment to trinitrotoluene (TNT) embedded in He droplets: complete freezing of dissociation intermediates in an extended range of electron energies.

    PubMed

    Mauracher, Andreas; Schöbel, Harald; Ferreira da Silva, Filipe; Edtbauer, Achim; Mitterdorfer, Christian; Denifl, Stephan; Märk, Tilmann D; Illenberger, Eugen; Scheier, Paul

    2009-10-01

    Electron attachment to the explosive trinitrotoluene (TNT) embedded in helium droplets (TNT@He) generates the non-decomposed complexes (TNT)(n)(-), but no fragment ions, over the entire energy range 0-12 eV. This strongly contrasts with the behavior of single TNT molecules in the gas phase at ambient temperatures, where electron capture leads to a variety of different fragmentation products via different dissociative electron attachment (DEA) reactions. Single TNT molecules decompose upon attachment of an electron at virtually no extra energy, reflecting the explosive nature of the compound. The complete freezing of dissociation intermediates in TNT embedded in the droplet is explained by the particular mechanisms of DEA in nitrobenzenes, which are characterized by complex rearrangement processes in the transient negative ion (TNI) prior to decomposition. These mechanisms provide the condition for effective energy withdrawal from the TNI into the dissipative environment, thereby completely suppressing its decomposition.

  15. Optimal reconstruction of the states in qutrit systems

    NASA Astrophysics Data System (ADS)

    Yan, Fei; Yang, Ming; Cao, Zhuo-Liang

    2010-10-01

    Based on mutually unbiased measurements, an optimal tomographic scheme for the multiqutrit states is presented explicitly. Because the reconstruction process of states based on mutually unbiased states is free of information waste, we refer to our scheme as the optimal scheme. By optimal we mean that the number of the required conditional operations reaches the minimum in this tomographic scheme for the states of qutrit systems. Special attention will be paid to how those different mutually unbiased measurements are realized; that is, how to decompose each transformation that connects each mutually unbiased basis with the standard computational basis. It is found that all those transformations can be decomposed into several basic implementable single- and two-qutrit unitary operations. For the three-qutrit system, there exist five different mutually unbiased-bases structures with different entanglement properties, so we introduce the concept of physical complexity to minimize the number of nonlocal operations needed over the five different structures. This scheme is helpful for experimental scientists to realize the most economical reconstruction of quantum states in qutrit systems.

  16. Children's understanding of idioms and theory of mind development.

    PubMed

    Caillies, Stéphanie; Le Sourn-Bissaoui, Sandrine

    2008-09-01

    The aim of this study was to test the hypothesis that theory of mind competence is a prerequisite for understanding ambiguous idioms. We hypothesized that, to correctly process idiomatic expressions, the child needs to understand that the literal interpretation could be a false world representation, a false belief, and that the speaker's intention is to mean something else. Two kinds of ambiguous idioms were of interest: decomposable and nondecomposable expressions (Titone & Connine, 1999). An experiment was designed to assess the figurative developmental changes that occur with theory of mind competence. Five-, 6- and 7-year-old children performed five theory of mind tasks (an appearance-reality task, three false-belief tasks and a second-order false-belief task) and listened to decomposable and nondecomposable idiomatic expressions inserted in context, before performing a multiple choice task. Results indicated that only the understanding of nondecomposable idiomatic expressions was predicted by the theory of mind scores, particularly the second-order competences. Results are discussed with respect to theory of mind and verbal competences.

  17. Large-Area Chemical and Biological Decontamination Using a High Energy Arc Lamp (HEAL) System.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duty, Chad E; Smith, Rob R; Vass, Arpad Alexander

    2008-01-01

    Methods for quickly decontaminating large areas exposed to chemical and biological (CB) warfare agents can present significant logistical, manpower, and waste management challenges. Oak Ridge National Laboratory (ORNL) is pursuing an alternate method to decompose CB agents without the use of toxic chemicals or other potentially harmful substances. This process uses a high energy arc lamp (HEAL) system to photochemically decompose CB agents over large areas (12 m2). Preliminary tests indicate that more than 5 decades (99.999%) of an Anthrax spore simulant (Bacillus globigii) were killed in less than 7 seconds of exposure to the HEAL system. When combined with a catalyst material (TiO2) the HEAL system was also effective against a chemical agent simulant, diisopropyl methyl phosphonate (DIMP). These results demonstrate the feasibility of a rapid, large-area chemical and biological decontamination method that does not require toxic or corrosive reagents or generate hazardous wastes.

  18. A facile self-assembly approach to prepare palladium/carbon nanotubes catalyst for the electro-oxidation of ethanol

    NASA Astrophysics Data System (ADS)

    Wen, Cuilian; Zhang, Xinyuan; Wei, Ying; Zhang, Teng; Chen, Changxin

    2018-02-01

    A facile self-assembly approach is reported to prepare a palladium/carbon nanotubes (Pd/CNTs) catalyst for the electro-oxidation of ethanol. In this method, the Pd-oleate/CNTs was decomposed into the Pd/CNTs at an optimal temperature of 195 °C in air; no inert gas is needed for the thermal decomposition process because of the low temperature used, and the decomposition products are environmentally friendly. The prepared Pd/CNTs catalyst has a high metallic Pd0 content, and the Pd particles in the catalyst are well dispersed, uniform in size with an average diameter of ~2.1 nm, and evenly distributed on the CNTs. With this strategy, problems such as the exfoliation of the metal particles from the CNTs and the aggregation of the metal particles can be avoided. Compared with the commercial Pd/C catalyst, the prepared Pd/CNTs catalyst exhibits much higher electrochemical activity and stability for the electro-oxidation of ethanol in direct ethanol fuel cells.

  19. Reforming and decomposition of glucose in an aqueous phase

    NASA Technical Reports Server (NTRS)

    Amin, S.; Reid, R. C.; Modell, M.

    1975-01-01

    Exploratory experiments have been carried out to study the decomposition of glucose, a typical carbohydrate, in a high temperature-high pressure water reactor. The objective of the study was to examine the feasibility of such a process to decompose cellulosic waste materials in long-term space missions. At temperatures below the critical point of water, glucose decomposed to form liquid products and char. Little gas was noted with or without reforming catalysts present. The rate of the primary glucose reaction increased significantly with temperature. Partial identification of the liquid phase was made and the C:H:O ratios determined for both the liquid and solid products. One of the more interesting results from this study was the finding that when glucose was injected into a reactor held at the critical temperature (and pressure) of water, no solid products formed. Gas production increased, but the majority of the carbon was found in soluble furans (and furan derivatives). This significant result is now being investigated further.

  20. Augmenting the decomposition of EMG signals using supervised feature extraction techniques.

    PubMed

    Parsaei, Hossein; Gangeh, Mehrdad J; Stashuk, Daniel W; Kamel, Mohamed S

    2012-01-01

    Electromyographic (EMG) signal decomposition is the process of resolving an EMG signal into its constituent motor unit potential trains (MUPTs). In this work, the possibility of improving the decomposing results using two supervised feature extraction methods, i.e., Fisher discriminant analysis (FDA) and supervised principal component analysis (SPCA), is explored. Using the MUP labels provided by a decomposition-based quantitative EMG system as a training data for FDA and SPCA, the MUPs are transformed into a new feature space such that the MUPs of a single MU become as close as possible to each other while those created by different MUs become as far as possible. The MUPs are then reclassified using a certainty-based classification algorithm. Evaluation results using 10 simulated EMG signals comprised of 3-11 MUPTs demonstrate that FDA and SPCA on average improve the decomposition accuracy by 6%. The improvement for the most difficult-to-decompose signal is about 12%, which shows the proposed approach is most beneficial in the decomposition of more complex signals.
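
    As a rough sketch of the project-then-reclassify idea (and not the authors' certainty-based classifier), the example below uses scikit-learn's LinearDiscriminantAnalysis as the Fisher-discriminant step and a nearest-centroid rule on synthetic stand-ins for MUP feature vectors; the provisional labels play the role of the assignments produced by an initial decomposition pass.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.neighbors import NearestCentroid

      rng = np.random.default_rng(1)

      # Synthetic stand-in for MUP feature vectors: 3 motor units, 200 MUPs each, 20 features per MUP.
      X = np.vstack([rng.normal(loc=mu, scale=2.0, size=(200, 20)) for mu in (0.0, 1.5, 3.0)])
      y = np.repeat([0, 1, 2], 200)   # provisional MU labels from an initial decomposition

      # Project into a space where MUPs of one motor unit cluster tightly (FDA/LDA),
      # then reclassify with a simple nearest-centroid rule in the projected space.
      lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
      X_proj = lda.transform(X)
      labels = NearestCentroid().fit(X_proj, y).predict(X_proj)

      print("agreement with provisional labels:", np.mean(labels == y))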
