Task-discriminative space-by-time factorization of muscle activity
Delis, Ioannis; Panzeri, Stefano; Pozzo, Thierry; Berret, Bastien
2015-01-01
Movement generation has been hypothesized to rely on a modular organization of muscle activity. Crucial to this hypothesis is the ability to perform reliably a variety of motor tasks by recruiting a limited set of modules and combining them in a task-dependent manner. Thus far, existing algorithms that extract putative modules of muscle activations, such as Non-negative Matrix Factorization (NMF), identify modular decompositions that maximize the reconstruction of the recorded EMG data. Typically, the functional role of the decompositions, i.e., task accomplishment, is only assessed a posteriori. However, as motor actions are defined in task space, we suggest that motor modules should be computed in task space too. In this study, we propose a new module extraction algorithm, named DsNM3F, that uses task information during the module identification process. DsNM3F extends our previous space-by-time decomposition method (the so-called sNM3F algorithm, which could assess task performance only after having computed modules) to identify modules gauging between two complementary objectives: reconstruction of the original data and reliable discrimination of the performed tasks. We show that DsNM3F recovers the task dependence of module activations more accurately than sNM3F. We also apply it to electromyographic signals recorded during performance of a variety of arm pointing tasks and identify spatial and temporal modules of muscle activity that are highly consistent with previous studies. DsNM3F achieves perfect task categorization without significant loss in data approximation when task information is available and generalizes as well as sNM3F when applied to new data. These findings suggest that the space-by-time decomposition of muscle activity finds robust task-discriminating modular representations of muscle activity and that the insertion of task discrimination objectives is useful for describing the task modulation of module recruitment. PMID:26217213
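To make the space-by-time model concrete, here is a minimal numerical sketch: a single-trial EMG matrix E (time × muscles) is approximated as W_t A W_s, with temporal modules in W_t, spatial modules in W_s, and a trial-specific activation matrix A; the DsNM3F idea of gauging between objectives is caricatured as a weighted difference of reconstruction error and a between-task separation term. All array shapes, the toy discriminability term, and the data are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
T, M, P, N = 50, 12, 3, 4      # time samples, muscles, temporal and spatial modules

W_t = np.abs(rng.standard_normal((T, P)))    # temporal modules (time x P)
W_s = np.abs(rng.standard_normal((N, M)))    # spatial modules (N x muscles)

def reconstruct(A):
    # Space-by-time reconstruction of one trial: E_hat = W_t @ A @ W_s
    return W_t @ A @ W_s

# Synthetic single-trial activations (P x N, nonnegative) for two tasks.
A_trials = np.abs(rng.standard_normal((20, P, N)))
labels = np.repeat([0, 1], 10)
A_trials[labels == 1] += 0.5                 # task 1 recruits modules more strongly
E_trials = np.array([reconstruct(A) + 0.05 * rng.standard_normal((T, M))
                     for A in A_trials])

def objective(lam=0.5):
    # Reconstruction error summed over trials ...
    recon = sum(np.linalg.norm(E - reconstruct(A)) ** 2
                for E, A in zip(E_trials, A_trials))
    # ... minus a toy discriminability bonus: squared distance between the
    # class means of the vectorized activations (stand-in for the task term).
    mu0 = A_trials[labels == 0].reshape(10, -1).mean(axis=0)
    mu1 = A_trials[labels == 1].reshape(10, -1).mean(axis=0)
    return recon - lam * np.linalg.norm(mu0 - mu1) ** 2

print(f"combined objective: {objective():.2f}")
```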
Hilt, Pauline M.; Delis, Ioannis; Pozzo, Thierry; Berret, Bastien
2018-01-01
The modular control hypothesis suggests that motor commands are built from precoded modules whose specific combined recruitment can allow the performance of virtually any motor task. Despite considerable experimental support, this hypothesis remains tentative, as classical findings of reduced dimensionality in muscle activity may also result from other constraints (biomechanical couplings, data averaging or the low dimensionality of motor tasks). Here we assessed the effectiveness of modularity in describing muscle activity in a comprehensive experiment comprising 72 distinct point-to-point whole-body movements, during which the activity of 30 muscles was recorded. To identify invariant modules of a temporal and spatial nature, we used a space-by-time decomposition of muscle activity that has been shown to encompass classical modularity models. To examine the decompositions, we focused not only on the amount of variance they explained but also on whether the task performed on each trial could be decoded from the single-trial activations of modules. For the sake of comparison, we compared these scores with those obtained from alternative non-modular descriptions of the muscle data. We found that the space-by-time decomposition was effective in terms of data approximation and task discrimination at a comparable reduction of dimensionality. These findings show that a few spatial and temporal modules give a compact yet approximate representation of muscle patterns carrying nearly all task-relevant information for a variety of whole-body reaching movements. PMID:29666576
A unifying model of concurrent spatial and temporal modularity in muscle activity.
Delis, Ioannis; Panzeri, Stefano; Pozzo, Thierry; Berret, Bastien
2014-02-01
Modularity in the central nervous system (CNS), i.e., the brain's capability to generate a wide repertoire of movements by combining a small number of building blocks ("modules"), is thought to underlie the control of movement. Numerous studies have reported evidence for such a modular organization by identifying invariant muscle activation patterns across various tasks. However, previous studies relied on decompositions differing in both the nature and dimensionality of the identified modules. Here, we derive a single framework that encompasses all influential models of muscle activation modularity. We introduce a new model (named space-by-time decomposition) that factorizes muscle activations into concurrent spatial and temporal modules. To infer these modules, we develop an algorithm, referred to as sample-based nonnegative matrix trifactorization (sNM3F). We test the space-by-time decomposition on a comprehensive electromyographic dataset recorded during execution of arm pointing movements and show that it provides a low-dimensional yet accurate, highly flexible and task-relevant representation of muscle patterns. The extracted modules have a well-characterized functional meaning and implement an efficient trade-off between replication of the original muscle patterns and task discriminability. Furthermore, they are compatible with the modules extracted from existing models, such as synchronous synergies and temporal primitives, and generalize time-varying synergies. Our results indicate the effectiveness of a simultaneous but separate condensation of the spatial and temporal dimensions of muscle patterns. The space-by-time decomposition accommodates a unified view of the hierarchical mapping from task parameters to coordinated muscle activations, which could be employed as a reference framework for studying compositional motor control.
Quantitative evaluation of muscle synergy models: a single-trial task decoding approach
Delis, Ioannis; Berret, Bastien; Pozzo, Thierry; Panzeri, Stefano
2013-01-01
Muscle synergies, i.e., invariant coordinated activations of groups of muscles, have been proposed as building blocks that the central nervous system (CNS) uses to construct the patterns of muscle activity utilized for executing movements. Several efficient dimensionality reduction algorithms that extract putative synergies from electromyographic (EMG) signals have been developed. Typically, the quality of synergy decompositions is assessed by computing the Variance Accounted For (VAF). Yet, little is known about the extent to which the combination of those synergies encodes task-discriminating variations of muscle activity in individual trials. To address this question, here we conceive and develop a novel computational framework to evaluate muscle synergy decompositions in task space. Unlike previous methods, which consider the total variance of muscle patterns (VAF-based metrics), our approach focuses on the variance that discriminates between the executions of different tasks. The procedure is based on single-trial task decoding from muscle synergy activation features. The task-decoding metric quantitatively evaluates the mapping between synergy recruitment and task identification and automatically determines the minimal number of synergies that captures all the task-discriminating variability in the synergy activations. In this paper, we first validate the method on plausibly simulated EMG datasets. We then show that it can be applied to different types of muscle synergy decomposition and illustrate its applicability to real data by using it for the analysis of EMG recordings during an arm pointing task. We find that time-varying and synchronous synergies with similar numbers of parameters are equally efficient in task decoding, suggesting that in this experimental paradigm they are equally valid representations of muscle synergies. Overall, these findings stress the effectiveness of the decoding metric in systematically assessing muscle synergy decompositions in task space. PMID:23471195
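The decoding-based evaluation is straightforward to sketch with off-the-shelf tools: classify the task of each trial from its synergy activation coefficients under cross-validation and increase the number of synergies until accuracy saturates. The synthetic data, the LDA classifier, and the 1% saturation rule below are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_tasks, max_synergies = 120, 4, 8
tasks = rng.integers(0, n_tasks, n_trials)

# Synthetic activation coefficients: only the first synergies carry task info.
activations = rng.standard_normal((n_trials, max_synergies))
activations[:, :3] += tasks[:, None] * 0.8

def decoding_accuracy(k):
    # Cross-validated task-decoding accuracy using the first k synergies.
    clf = LinearDiscriminantAnalysis()
    return cross_val_score(clf, activations[:, :k], tasks, cv=5).mean()

accs = [decoding_accuracy(k) for k in range(1, max_synergies + 1)]
# Minimal number of synergies whose accuracy is within 1% of the best.
n_min = int(np.argmax(np.array(accs) >= max(accs) - 0.01)) + 1
print([f"{a:.2f}" for a in accs], "->", n_min, "synergies")
```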
Iterative filtering decomposition based on local spectral evolution kernel
Wang, Yang; Wei, Guo-Wei; Yang, Siyang
2011-01-01
Synthesizing information, achieving understanding, and deriving insight from increasingly massive, time-varying, noisy and possibly conflicting data sets are among the most challenging tasks of the present information age. Traditional technologies, such as the Fourier transform and wavelet multi-resolution analysis, are inadequate to handle all of the above-mentioned tasks. Empirical mode decomposition (EMD) has emerged as a powerful new tool for resolving many challenging problems in data processing and analysis. Recently, an iterative filtering decomposition (IFD) has been introduced to address the stability and efficiency problems of the EMD. Another data analysis technique is the local spectral evolution kernel (LSEK), which provides a near-perfect low-pass filter with desirable time-frequency localizations. The present work utilizes the LSEK to further stabilize the IFD, and offers an efficient, flexible and robust scheme for information extraction, complexity reduction, and signal and image understanding. The performance of the present LSEK-based IFD is intensively validated over a wide range of data processing tasks, including mode decomposition, analysis of time-varying data, and information extraction from nonlinear dynamic systems. The utility, robustness and usefulness of the proposed LSEK-based IFD are demonstrated via a large number of applications, such as the analysis of stock market data, the decomposition of ocean wave magnitudes, the understanding of physiologic signals and information recovery from noisy images. The performance of the proposed method is compared with that of existing methods in the literature. Our results indicate that the LSEK-based IFD improves both the efficiency and the stability of conventional EMD algorithms. PMID:22350559
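The iterative filtering idea is easy to caricature in a few lines: repeatedly subtract a local moving average (a crude stand-in for the LSEK low-pass filter) to sift out the fastest mode, remove it, and repeat on the residue. A minimal sketch under these assumptions:

```python
import numpy as np

def lowpass(x, width):
    # Moving-average low-pass filter; a crude stand-in for the LSEK kernel.
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

def iterative_filtering(x, width=21, n_modes=3, n_sift=30):
    # Extract modes by repeated "subtract the local mean" sifting.
    modes, residue = [], x.astype(float).copy()
    for _ in range(n_modes):
        mode = residue.copy()
        for _ in range(n_sift):
            mode = mode - lowpass(mode, width)   # sift: remove the slow trend
        modes.append(mode)
        residue = residue - mode                 # peel the mode off
    return modes, residue

t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 40 * t) + np.sin(2 * np.pi * 4 * t) + 0.5 * t
modes, residue = iterative_filtering(signal)
print([f"{np.std(m):.3f}" for m in modes], f"residue std {np.std(residue):.3f}")
```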
Task Decomposition Module For Telerobot Trajectory Generation
NASA Astrophysics Data System (ADS)
Wavering, Albert J.; Lumia, Ron
1988-10-01
A major consideration in the design of trajectory generation software for a Flight Telerobotic Servicer (FTS) is that the FTS will be called upon to perform tasks which require a diverse range of manipulator behaviors and capabilities. In a hierarchical control system where tasks are decomposed into simpler and simpler subtasks, the task decomposition module which performs trajectory planning and execution should therefore be able to accommodate a wide range of algorithms. In some cases, it will be desirable to plan a trajectory for an entire motion before manipulator motion commences, as when optimizing over the entire trajectory. Many FTS motions, however, will be highly sensory-interactive, such as moving to attain a desired position relative to a non-stationary object whose position is periodically updated by a vision system. In this case, the time-varying nature of the trajectory may be handled either by frequent replanning using updated sensor information, or by using an algorithm which creates a less specific state-dependent plan that determines the manipulator path as the trajectory is executed (rather than a priori). This paper discusses a number of trajectory generation techniques from these categories and how they may be implemented in a task decomposition module of a hierarchical control system. The structure, function, and interfaces of the proposed trajectory generation module are briefly described, followed by several examples of how different algorithms may be performed by the module. The proposed task decomposition module provides a logical structure for trajectory planning and execution, and supports a large number of published trajectory generation techniques.
Grouping individual independent BOLD effects: a new way to ICA group analysis
NASA Astrophysics Data System (ADS)
Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott
2009-04-01
A new group analysis method for summarizing task-related BOLD responses based on independent component analysis (ICA) is presented. In contrast to the previously proposed group ICA (gICA) method, which first combines multi-subject fMRI data in either the temporal or spatial domain and applies ICA decomposition only once to the combined fMRI data to extract the task-related BOLD effects, the method presented here applies ICA decomposition to each individual subject's fMRI data to first find the independent BOLD effects specific to that subject. The task-related independent BOLD component is then selected from the components resulting from each single-subject ICA decomposition and grouped across subjects to derive the group inference. In this new ICA group analysis (ICAga) method, one does not need to assume that the task-related BOLD time courses are identical across brain areas and subjects, as is assumed in the grand ICA decomposition of spatially concatenated fMRI data. Neither does one need to assume that, after spatial normalization, the voxels at the same coordinates represent exactly the same functional or structural brain anatomies across different subjects. Both assumptions are problematic given recent BOLD activation evidence. Further, since the independent BOLD effects are obtained from each individual subject, the ICAga method can better account for individual differences in the task-related BOLD effects, unlike the gICA approach, in which the task-related BOLD effects can only be accounted for by a single unified BOLD model across multiple subjects. As a result, the newly proposed ICAga method is able to better fit the task-related BOLD effects at the individual level and thus allows grouping of more appropriate multisubject BOLD effects in the group analysis.
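A compact sketch of the ICAga workflow on toy data: run ICA separately for each subject, select the component whose time course correlates best with the task regressor, and group the selected components across subjects. FastICA from scikit-learn and the correlation-based selection are illustrative stand-ins for the authors' choices.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
n_subjects, n_time, n_voxels = 5, 200, 50
task = (np.arange(n_time) // 20) % 2 * 1.0        # boxcar task regressor

selected = []
for s in range(n_subjects):
    # Toy data: a task-locked source (subject-specific lag) plus noise sources.
    src = np.roll(task, rng.integers(0, 3))
    mixing = rng.standard_normal((n_voxels, 3))
    sources = np.c_[src, rng.standard_normal((n_time, 2))]
    data = sources @ mixing.T + 0.3 * rng.standard_normal((n_time, n_voxels))

    ica = FastICA(n_components=3, random_state=0)
    comps = ica.fit_transform(data)               # time courses (n_time x 3)
    r = [abs(np.corrcoef(comps[:, k], task)[0, 1]) for k in range(3)]
    selected.append(ica.mixing_[:, int(np.argmax(r))])  # map of best component

group_map = np.mean(selected, axis=0)             # grouped across subjects
print(group_map.shape)
```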
Synthetic Proxy Infrastructure for Task Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Junghans, Christoph; Pavel, Robert
The Synthetic Proxy Infrastructure for Task Evaluation is a proxy application designed to support application developers in gauging the performance of various task granularities when determining how best to utilize task-based programming models. The infrastructure is designed to provide examples of common communication patterns with a synthetic workload intended to provide performance data for evaluating programming model and platform overheads, for the purpose of determining task granularity for task decomposition. It is presented as a reference implementation of a proxy application with run-time configurable input and output task dependencies, ranging from an embarrassingly parallel scenario to patterns with stencil-like dependencies upon their nearest neighbors. Once all inputs, if any, are satisfied, each task executes a synthetic workload (a simple DGEMM in this case) of varying size and sends all outputs, if any, to the next tasks. The intent is for this reference implementation to be implemented as a proxy app in different programming models, so as to provide the same infrastructure and to allow application developers to simulate their own communication needs when deciding on task decomposition under various models on a given platform.
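The described infrastructure can be approximated in a few lines: a dependency graph of tasks, each of which blocks until its inputs are satisfied and then runs a synthetic DGEMM whose size is the granularity knob. The thread-pool executor and the toy graph below are illustrative assumptions, not the OSTI reference implementation.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def workload(n=256):
    # Synthetic task body: a DGEMM of configurable size (the granularity knob).
    rng = np.random.default_rng()
    a, b = rng.standard_normal((n, n)), rng.standard_normal((n, n))
    return float((a @ b).trace())

# Toy task graph with stencil-like dependencies: task -> list of input tasks.
deps = {0: [], 1: [0], 2: [0], 3: [1, 2], 4: [3]}
futures = {}

def run(task_id):
    for d in deps[task_id]:
        futures[d].result()        # block until all inputs are satisfied
    return workload()

# max_workers must cover the longest blocking chain in this naive scheme,
# otherwise the pool can deadlock on the .result() calls above.
with ThreadPoolExecutor(max_workers=len(deps)) as pool:
    for t in sorted(deps):         # submit in dependency order
        futures[t] = pool.submit(run, t)
    results = [futures[t].result() for t in sorted(deps)]
print([f"{r:.1f}" for r in results])
```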
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.
1989-01-01
Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.
Investigation of automated task learning, decomposition and scheduling
NASA Technical Reports Server (NTRS)
Livingston, David L.; Serpen, Gursel; Masti, Chandrashekar L.
1990-01-01
The details and results of research conducted in the application of neural networks to task planning and decomposition are presented. Task planning and decomposition are operations that humans perform in a reasonably efficient manner. Without the use of good heuristics and usually much human interaction, automatic planners and decomposers generally do not perform well, due to the intractable nature of the problems under consideration. The human-like performance of neural networks has shown promise for generating acceptable solutions to intractable problems such as planning and decomposition; this was the primary motivation for the study. The basis for the work is the use of state machines to model tasks. State machine models provide a useful means for examining the structure of tasks, since many formal techniques have been developed for their analysis and synthesis. The approach taken here is to integrate the strong algebraic foundations of state machines with the heretofore trial-and-error approach to neural network synthesis.
Crosslinking EEG time-frequency decomposition and fMRI in error monitoring.
Hoffmann, Sven; Labrenz, Franziska; Themann, Maria; Wascher, Edmund; Beste, Christian
2014-03-01
Recent studies implicate a common response monitoring system that is active during both erroneous and correct responses. Converging evidence from time-frequency decompositions of the response-related ERP revealed that evoked theta activity at fronto-central electrode positions differentiates correct from erroneous responses not only in simple tasks but also in more complex tasks. However, it has remained unclear how different electrophysiological parameters of error processing, especially at the level of neural oscillations, are related to, or predictive of, BOLD signal changes reflecting error processing at a functional-neuroanatomical level. The present study aims to provide crosslinks between time-domain information, time-frequency information, the fMRI BOLD signal and behavioral parameters in a task examining error monitoring due to mistakes in a mental rotation task. The results show that BOLD signal changes reflecting error processing on a functional-neuroanatomical level are best predicted by evoked oscillations in the theta frequency band. Although the fMRI results in this study account for an involvement of the anterior cingulate cortex, middle frontal gyrus, and the insula in error processing, the correlation of evoked oscillations and BOLD signal was restricted to a coupling of evoked theta and anterior cingulate cortex BOLD activity. The current results indicate that although there is a distributed functional-neuroanatomical network mediating error processing, only distinct parts of this network seem to modulate electrophysiological properties of error monitoring.
A Four-Stage Hybrid Model for Hydrological Time Series Forecasting
Di, Chongli; Yang, Xiaohua; Wang, Xiaochao
2014-01-01
Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of ‘denoising, decomposition and ensemble’. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models. PMID:25111782
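The four-stage pipeline can be sketched around an off-the-shelf EEMD implementation (the PyEMD package is assumed to be available), with a generic regressor standing in for the RBF network and a plain sum standing in for the learned linear ensemble; denoising is omitted for brevity.

```python
import numpy as np
from PyEMD import EEMD                       # pip install EMD-signal (assumed)
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
t = np.arange(500)
series = np.sin(0.1 * t) + 0.5 * np.sin(0.02 * t) + 0.2 * rng.standard_normal(500)

# Stage 2: EEMD splits the series into IMF components plus a residue.
imfs = EEMD(trials=20).eemd(series)          # rows: IMF_1 ... IMF_k, residue

def lagged(x, p=8):
    # Build (lag-vector, next-value) pairs for one-step-ahead prediction.
    X = np.array([x[i:i + p] for i in range(len(x) - p)])
    return X[:-1], x[p:-1], X[-1]            # train inputs, targets, last window

# Stage 3: predict each component separately (MLP as an RBFNN stand-in);
# Stage 4: ensemble the component forecasts (plain sum instead of an LNN).
forecast = 0.0
for comp in imfs:
    X, y, last = lagged(comp)
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    forecast += model.fit(X, y).predict(last[None])[0]

print(f"one-step-ahead forecast: {forecast:.3f}")
```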
A Bayesian hierarchical diffusion model decomposition of performance in Approach–Avoidance Tasks
Krypotos, Angelos-Miltiadis; Beckers, Tom; Kindt, Merel; Wagenmakers, Eric-Jan
2015-01-01
Common methods for analysing response time (RT) tasks, frequently used across different disciplines of psychology, suffer from a number of limitations, such as the failure to directly measure the underlying latent processes of interest and the inability to take into account the uncertainty associated with each individual's point estimate of performance. Here, we discuss a Bayesian hierarchical diffusion model and apply it to RT data. This model allows researchers to decompose performance into meaningful psychological processes and to account optimally for individual differences and commonalities, even with relatively sparse data. We highlight the advantages of the Bayesian hierarchical diffusion model decomposition by applying it to performance on Approach–Avoidance Tasks, widely used in the emotion and psychopathology literature. Model fits for two experimental data sets demonstrate that the model performs well. The Bayesian hierarchical diffusion model overcomes important limitations of current analysis procedures and provides deeper insight into the latent psychological processes of interest. PMID:25491372
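The diffusion model at the heart of this decomposition is simple to simulate: evidence accumulates at drift rate v, with noise, until it hits one of two boundaries separated by a, and a non-decision time Ter is added. A hedged simulation sketch follows; the hierarchical Bayesian estimation itself would sit on top of such a likelihood (e.g. via a package such as HDDM) and is not shown.

```python
import numpy as np

def simulate_ddm(v, a, ter, n=1000, dt=1e-3, s=1.0, rng=None):
    """Simulate choices and RTs from a two-boundary diffusion process.
    v: drift rate, a: boundary separation, ter: non-decision time (s)."""
    rng = rng or np.random.default_rng(0)
    rts, choices = np.empty(n), np.empty(n, dtype=int)
    for i in range(n):
        x, t = a / 2.0, 0.0                 # start midway between boundaries
        while 0.0 < x < a:
            x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts[i] = t + ter
        choices[i] = int(x >= a)            # 1 = upper (e.g. approach) boundary
    return rts, choices

# E.g. approach trials with positive drift: mostly upper-boundary responses.
rts, choices = simulate_ddm(v=1.5, a=1.0, ter=0.3)
print(f"P(approach) = {choices.mean():.2f}, median RT = {np.median(rts):.3f} s")
```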
Geometric subspace methods and time-delay embedding for EEG artifact removal and classification.
Anderson, Charles W; Knight, James N; O'Connor, Tim; Kirby, Michael J; Sokolov, Artem
2006-06-01
Generalized singular-value decomposition is used to separate multichannel electroencephalogram (EEG) into components found by optimizing a signal-to-noise quotient. These components are used to filter out artifacts. Short-time principal components analysis of time-delay embedded EEG is used to represent windowed EEG data to classify EEG according to which mental task is being performed. Examples are presented of the filtering of various artifacts and results are shown of classification of EEG from five mental tasks using committees of decision trees.
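Optimizing a signal-to-noise quotient over spatial filters reduces to a generalized eigenvalue problem, maximizing (w' Cs w) / (w' Cn w), which SciPy solves directly. The sketch below assumes labelled clean and artifact segments from which the two covariance matrices are estimated; the toy data are invented.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(5)
n_ch, n_t = 8, 5000

# Toy EEG: a slow "brain" source everywhere plus a frontal artifact confined
# to two known time windows (e.g. marked blink segments).
mask = np.zeros(n_t, dtype=bool)
mask[1000:1500] = mask[3000:3500] = True
brain = np.sin(0.05 * np.arange(n_t))
artifact = np.where(mask, np.sin(0.3 * np.arange(n_t)), 0.0)
A = rng.standard_normal((n_ch, 2))
A[:2, 1] *= 5.0                              # artifact loads on frontal channels
X = A @ np.vstack([brain, artifact]) + 0.1 * rng.standard_normal((n_ch, n_t))

Cs = np.cov(X[:, ~mask])                     # covariance of clean segments
Cn = np.cov(X[:, mask])                      # covariance of artifact segments

# Generalized eigendecomposition: spatial filters ordered by SNR quotient.
evals, W = eigh(Cs, Cn)                      # ascending eigenvalues
components = W.T @ X
keep = 4                                     # retain the highest-SNR components
denoised = np.linalg.pinv(W.T)[:, -keep:] @ components[-keep:]
print(f"best signal-to-noise quotient: {evals[-1]:.1f}, denoised {denoised.shape}")
```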
Niederegger, Senta; Schermer, Julia; Höfig, Juliane; Mall, Gita
2015-01-01
Estimating the time of death of buried human bodies is a very difficult task; Casper's rule from 1860 is still widely used, which illustrates the lack of suitable methods. In this case study, excavations in an arbor revealed the crouched body of a human being, dressed only in boxer shorts and socks. Witnesses were not able to give a concise answer as to when the person in question was last seen alive; the available information opened a window of 2-6 weeks for the possible time of death. To determine the post mortem interval (PMI), an experiment using a pig carcass was conducted to set up a decomposition matrix. Fitting the autopsy findings of the victim into the decomposition matrix yielded a time-of-death estimate of 2-3 weeks. This time frame was later confirmed by a new witness. The authors feel confident that the widespread compilation of decomposition matrices using pig carcasses can lead to a great increase in experience and knowledge in PMI estimation of buried bodies and will eventually lead to applicable new methods.
Reactive Goal Decomposition Hierarchies for On-Board Autonomy
NASA Astrophysics Data System (ADS)
Hartmann, L.
2002-01-01
As our experience grows, space missions and systems are expected to address ever more complex and demanding requirements with fewer resources (e.g., mass, power, budget). One approach to accommodating these higher expectations is to increase the level of autonomy to improve the capabilities and robustness of on-board systems and to simplify operations. The goal decomposition hierarchies described here provide a simple but powerful form of goal-directed behavior that is relatively easy to implement for space systems. A goal corresponds to a state or condition that an operator of the space system would like to bring about. In the system described here, goals are decomposed into simpler subgoals until the subgoals are simple enough to execute directly. For each goal there is an activation condition and a set of decompositions. The decompositions correspond to different ways of achieving the higher-level goal. Each decomposition contains a gating condition and a set of subgoals to be "executed" sequentially or in parallel. The gating conditions are evaluated in order, and for the first one that is true, the corresponding decomposition is executed in order to achieve the higher-level goal. The activation condition specifies global conditions (i.e., for all decompositions of the goal) that need to hold in order for the goal to be achieved. In real time, parameters and state information are passed between goals and subgoals in the decomposition; a termination indication (success, failure, degree) is passed up when a decomposition finishes executing. The lowest-level decompositions include servo control loops and finite state machines for generating control signals and sequencing I/O. Semaphores and shared memory are used to synchronize and coordinate decompositions that execute in parallel. The goal decomposition hierarchy is reactive in that the generated behavior is sensitive to the real-time state of the system and the environment. That is, the system is able to react to state and environment and in general can terminate the execution of a decomposition and attempt a new decomposition at any level in the hierarchy. This goal decomposition system is suitable for workstation, microprocessor and FPGA implementation and thus is able to support the full range of prototyping activities, from mission design in the laboratory to development of the FPGA firmware for the flight system. This approach is based on previous artificial intelligence work, including (1) Brooks' subsumption architecture for robot control, (2) Firby's Reactive Action Package System (RAPS) for mediating between high-level automated planning and low-level execution, and (3) hierarchical task networks for automated planning. Reactive goal decomposition hierarchies can be used for a wide variety of on-board autonomy applications, including automating low-level operation sequences (such as scheduling prerequisite operations, e.g., heaters, warm-up periods, monitoring power constraints), coordinating multiple spacecraft as in formation flying and constellations, robot manipulator operations, rendezvous, docking, servicing, assembly, on-orbit maintenance, planetary rover operations, solar system and interstellar probes, intelligent science data gathering and disaster early warning. Goal decomposition hierarchies can support high-level fault tolerance.
Given models of on-board resources and goals to accomplish, the decomposition hierarchy could allocate resources to goals, taking existing faults into account and reallocating resources in real time as new faults arise. Resources to be modeled include memory (e.g., ROM, FPGA configuration memory, processor memory, payload instrument memory), processors, on-board and interspacecraft network nodes and links, sensors, actuators (e.g., attitude determination and control, guidance and navigation) and payload instruments. A goal decomposition hierarchy could be defined to map mission goals and tasks to available on-board resources. As faults occur and are detected, the resource allocation is modified to avoid using the faulty resource. Goal decomposition hierarchies can implement variable autonomy (in which the operator chooses to command the system at a high or low level), mixed-initiative planning (in which the system is able to interact with the operator, e.g., to request operator intervention when a working envelope is exceeded) and distributed control (in which, for example, multiple spacecraft cooperate to accomplish a task without a fixed master). The full paper will describe in greater detail how goal decompositions work, how they can be implemented, techniques for implementing a candidate application and the current state of the FPGA implementation.
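The mechanism described, a goal with an activation condition and an ordered list of gated decompositions of which the first true gate fires, maps naturally onto a small class hierarchy. The sketch below is one illustrative reading of the scheme, not flight code; conditions are plain callables over a state dictionary.

```python
class Goal:
    """A goal with an activation condition and ordered, gated decompositions."""
    def __init__(self, name, activation, decompositions=(), primitive=None):
        self.name = name
        self.activation = activation          # global precondition for the goal
        self.decompositions = decompositions  # list of (gate, [subgoals])
        self.primitive = primitive            # leaf action, if directly executable

    def execute(self, state):
        if not self.activation(state):
            return False                      # report failure upward
        if self.primitive:
            return self.primitive(state)
        for gate, subgoals in self.decompositions:
            if gate(state):                   # first true gate wins
                return all(g.execute(state) for g in subgoals)  # sequential
        return False

# Toy example: warm up a heater before moving an actuator.
warm_up = Goal("warm-up", lambda s: True,
               primitive=lambda s: s.update(temp=s["temp"] + 20) or True)
move = Goal("move", lambda s: s["temp"] >= 20,
            primitive=lambda s: s.update(pos=1.0) or True)
point = Goal("point", lambda s: True,
             decompositions=[(lambda s: s["temp"] < 20, [warm_up, move]),
                             (lambda s: True, [move])])

state = {"temp": 0, "pos": 0.0}
print(point.execute(state), state)
```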
Uncertainty propagation in orbital mechanics via tensor decomposition
NASA Astrophysics Data System (ADS)
Sun, Yifei; Kumar, Mrinal
2016-03-01
Uncertainty forecasting in orbital mechanics is an essential but difficult task, primarily because the underlying Fokker-Planck equation (FPE) is defined on a relatively high-dimensional (6-D) state space and is driven by the nonlinear perturbed Keplerian dynamics. In addition, an enormously large solution domain is required for numerical solution of this FPE (e.g. encompassing the entire orbit in the x-y-z subspace), of which the state probability density function (pdf) occupies a tiny fraction at any given time. This coupling of large size, high dimensionality and nonlinearity makes for a formidable computational task, and has caused the FPE for orbital uncertainty propagation to remain an unsolved problem. To the best of the authors' knowledge, this paper presents the first successful direct solution of the FPE for perturbed Keplerian mechanics. To tackle the dimensionality issue, the time-varying state pdf is approximated in the CANDECOMP/PARAFAC decomposition tensor form, where all six spatial dimensions as well as the time dimension are separated from one another. The pdf approximation for all times is obtained simultaneously via the alternating least squares algorithm. Chebyshev spectral differentiation is employed for discretization on account of its spectral ("super-fast") convergence rate. To facilitate the tensor decomposition and control the solution domain size, system dynamics is expressed using spherical coordinates in a noninertial reference frame. Numerical results obtained on a regular personal computer are compared with Monte Carlo simulations.
Understanding neuromotor strategy during functional upper extremity tasks using symbolic dynamics.
Nathan, Dominic E; Guastello, Stephen J; Prost, Robert W; Jeutter, Dean C
2012-01-01
The ability to model and quantify brain activation patterns that pertain to natural neuromotor strategy of the upper extremities during functional task performance is critical to the development of therapeutic interventions such as neuroprosthetic devices. The mechanisms of information flow, activation sequences and patterns, and the interaction between anatomical regions of the brain that are specific to movement planning, intention and execution of voluntary upper extremity motor tasks were investigated here. This paper presents a novel method using symbolic dynamics (orbital decomposition) and the nonlinear dynamic tools of entropy, self-organization and chaos to describe the underlying structure of activation shifts in regions of the brain that are involved with the cognitive aspects of functional upper extremity task performance. Several questions were addressed: (a) How is it possible to distinguish deterministic or causal patterns of activity in brain fMRI from those that are really random or non-contributory to the neuromotor control process? (b) Can the complexity of activation patterns over time be quantified? (c) What are the optimal ways of organizing fMRI data to preserve patterns of activation and activation levels, and to extract meaningful temporal patterns as they evolve over time? Analysis was performed using data from a custom-developed, time-resolved fMRI paradigm involving human subjects (N=18) who performed functional upper extremity motor tasks with varying time delays between the onset of intention and the onset of actual movements. The results indicate that there is structure in the data that can be quantified through entropy and dimensional complexity metrics and statistical inference; furthermore, orbital decomposition is sensitive in capturing the transitions of states that correlate with the cognitive aspects of functional task performance.
Task decomposition for a multilimbed robot to work in reachable but unorientable space
NASA Technical Reports Server (NTRS)
Su, Chau; Zheng, Yuan F.
1991-01-01
Robot manipulators installed on legged mobile platforms have been suggested as a way of enlarging the robot workspace. To plan the motion of such a system, the arm-platform motion coordination problem arises, and a task decomposition is proposed to solve it. A given task, described by the destination position and orientation of the end effector, is decomposed into subtasks for arm manipulation and for platform configuration, respectively. The former is defined as the end-effector position and orientation with respect to the platform, and the latter as the platform position and orientation in the base coordinates. Three approaches are proposed for the task decomposition. The approaches are also evaluated in terms of the displacements, from which an optimal approach can be selected.
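The decomposition itself is frame bookkeeping: once a platform pose is chosen, the arm subtask is the end-effector goal re-expressed in platform coordinates, T_arm = inverse(T_platform) @ T_goal. A short sketch with planar homogeneous transforms and invented numbers:

```python
import numpy as np

def pose(x, y, theta):
    # Homogeneous transform for a planar pose (in base coordinates).
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# Task: desired end-effector pose in base coordinates.
T_goal = pose(4.0, 2.0, np.pi / 2)

# Subtask 1: platform configuration chosen to bring the goal into reach.
T_platform = pose(3.0, 1.0, np.pi / 4)

# Subtask 2: arm manipulation, i.e. the goal expressed w.r.t. the platform.
T_arm = np.linalg.inv(T_platform) @ T_goal

reach = np.hypot(*T_arm[:2, 2])
print(f"arm target (platform frame): {T_arm[:2, 2]}, reach needed: {reach:.2f}")
```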
NASA Astrophysics Data System (ADS)
Cork, Christopher; Miloslavsky, Alexander; Friedberg, Paul; Luk-Pat, Gerry
2013-04-01
Lithographers had hoped that single patterning would be enabled at the 20nm node by way of EUV lithography. However, due to delays in EUV readiness, double patterning with 193i lithography is currently relied upon for volume production of the 20nm node's metal 1 layer. At the 14nm node, and likely at the 10nm node, LE-LE-LE triple patterning technology (TPT) is one of the favored options [1,2] for patterning local interconnect and Metal 1 layers. While previous research has focused on TPT for the contact mask, metal layers offer new challenges and opportunities, in particular the ability to decompose design polygons across more than one mask. The extra flexibility offered by the third mask and the ability to leverage polygon stitching both serve to improve compliance. However, ensuring TPT compliance - the task of finding a 3-color mask decomposition for a design - is still difficult. Moreover, scalability concerns multiply the difficulty of triple patterning decomposition, which is an NP-complete problem. Indeed, previous work shows that network sizes above a few thousand nodes or polygons start to take significantly longer times to compute [3], making full-chip decomposition for arbitrary layouts impractical. In practice, Metal 1 layouts can be considered as two separate problem domains: decomposition of standard cells and decomposition of IP blocks. Standard cells typically include only a few tens of polygons and should be amenable to fast decomposition. Successive design iterations should resolve compliance issues and improve packing density. Density improvements are multiplied repeatedly as standard cells are placed multiple times. IP blocks, on the other hand, may involve very large networks. This paper evaluates multiple approaches to triple patterning decomposition for the Metal 1 layer. The benefits of polygon stitching, in particular the ability to resolve commonly encountered non-compliant layout configurations and improve packing density, are weighed against the increased difficulty of finding an optimized, legal decomposition and coping with the increased scalability challenges.
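TPT compliance checking reduces to 3-coloring the conflict graph (vertices are polygons, edges join polygons closer than the same-mask spacing limit), which is NP-complete in general. A small backtracking sketch on an invented graph makes the role of stitching concrete: removing a conflict edge by splitting a polygon can restore colorability.

```python
def three_color(graph):
    """Backtracking 3-coloring of a conflict graph; returns None if the
    layout is not triple-patterning decomposable (a conflict to report)."""
    colors = {}
    nodes = sorted(graph, key=lambda n: -len(graph[n]))  # hardest first
    def assign(i):
        if i == len(nodes):
            return True
        node = nodes[i]
        for mask in (0, 1, 2):                           # the three masks
            if all(colors.get(nb) != mask for nb in graph[node]):
                colors[node] = mask
                if assign(i + 1):
                    return True
                del colors[node]
        return False                                     # dead end: backtrack
    return colors if assign(0) else None

# Toy Metal1 conflict graph: K4 is not 3-colorable... unless an edge is
# removed by stitching one polygon into two parts on different masks.
k4 = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
print(three_color(k4))                                   # None -> conflict
k4_stitched = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
print(three_color(k4_stitched))                          # a legal assignment
```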
Relation between SM-covers and SM-decompositions of Petri nets
NASA Astrophysics Data System (ADS)
Karatkevich, Andrei; Wiśniewski, Remigiusz
2015-12-01
The task of finding, for a given Petri net, a set of sequential components that together are able to represent the behavior of the net arises often in the formal analysis of Petri nets and in applications of Petri nets to logical control. This task comes in two variants: obtaining a Petri net cover or a decomposition. A Petri net cover supposes that a set of subnets of the given net is selected, whereas the sequential nets forming a decomposition may have additional places which do not belong to the decomposed net. The paper discusses the difference and the relations between the two tasks and their results.
Multidisciplinary optimization for engineering systems - Achievements and potential
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1989-01-01
The currently common sequential design process for engineering systems is likely to lead to suboptimal designs. Recently developed decomposition methods offer an alternative for coming closer to optimum by breaking the large task of system optimization into smaller, concurrently executed and, yet, coupled tasks, identified with engineering disciplines or subsystems. The hierarchic and non-hierarchic decompositions are discussed and illustrated by examples. An organization of a design process centered on the non-hierarchic decomposition is proposed.
Birkett, Emma E; Talcott, Joel B
2012-01-01
Motor timing tasks have been employed in studies of neurodevelopmental disorders such as developmental dyslexia and ADHD, where they provide an index of temporal processing ability. Investigations of these disorders have used different stimulus parameters within the motor timing tasks that are likely to affect performance measures. Here we assessed the effect of auditory and visual pacing stimuli on synchronised motor timing performance and its relationship with cognitive and behavioural predictors that are commonly used in the diagnosis of these highly prevalent developmental disorders. Twenty-one children (mean age 9.6 years) completed a finger tapping task in two stimulus conditions, together with additional psychometric measures. As anticipated, synchronisation to the beat (ISI 329 ms) was less accurate in the visually paced condition. Decomposition of timing variance indicated that this effect resulted from differences in the way that visually and auditorily paced tasks are processed by central timekeeping and associated peripheral implementation systems. The ability to utilise an efficient processing strategy on the visual task correlated with both reading and sustained attention skills. Dissociations between these patterns of relationship across task modality suggest that not all timing tasks are equivalent.
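The variance decomposition referred to is plausibly of the classic Wing-Kristofferson type, where the lag-1 autocovariance of inter-tap intervals separates central timekeeper variance from peripheral motor implementation variance (var(I) = var(C) + 2 var(M), acov(1) = -var(M)). A sketch under that assumption, with invented noise levels:

```python
import numpy as np

def wing_kristofferson(intervals):
    """Split inter-tap interval variance into central timekeeper and motor
    implementation components via the lag-1 autocovariance:
    var(I) = var(C) + 2*var(M),  acov(1) = -var(M)."""
    I = intervals - intervals.mean()
    acov1 = np.mean(I[:-1] * I[1:])
    var_motor = max(-acov1, 0.0)             # clip in case acov(1) > 0
    var_clock = I.var() - 2 * var_motor
    return var_clock, var_motor

# Simulate tapping: clock intervals C_n plus differenced motor delays M_n.
rng = np.random.default_rng(6)
n, isi = 200, 329.0                           # ms, as in the paced task
clock = isi + rng.normal(0, 15, n)            # central timekeeper
motor = rng.normal(0, 8, n + 1)               # peripheral delays
intervals = clock + np.diff(motor)            # I_n = C_n + M_{n+1} - M_n

vc, vm = wing_kristofferson(intervals)
print(f"clock sd ~ {np.sqrt(vc):.1f} ms, motor sd ~ {np.sqrt(max(vm, 0)):.1f} ms")
```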
NASA Technical Reports Server (NTRS)
John, Bonnie; Vera, Alonso; Matessa, Michael; Freed, Michael; Remington, Roger
2002-01-01
CPM-GOMS is a modeling method that combines the task decomposition of a GOMS analysis with a model of human resource usage at the level of cognitive, perceptual, and motor operations. CPM-GOMS models have made accurate predictions about skilled user behavior in routine tasks, but developing such models is tedious and error-prone. We describe a process for automatically generating CPM-GOMS models from a hierarchical task decomposition expressed in a cognitive modeling tool called Apex. Resource scheduling in Apex automates the difficult task of interleaving the cognitive, perceptual, and motor resources underlying common task operators (e.g. mouse move-and-click). Apex's UI automatically generates PERT charts, which allow modelers to visualize a model's complex parallel behavior. Because interleaving and visualization are now automated, it is feasible to construct arbitrarily long sequences of behavior. To demonstrate the process, we present a model of automated teller interactions in Apex and discuss implications for user modeling. Of the methods available to model human users, the Goals, Operators, Methods, and Selection (GOMS) method [6, 21] has been the most widely used, providing accurate, often zero-parameter, predictions of the routine performance of skilled users in a wide range of procedural tasks [6, 13, 15, 27, 28]. GOMS is meant to model routine behavior. The user is assumed to have methods that apply sequences of operators to achieve a goal. Selection rules are applied when there is more than one method to achieve a goal. Many routine tasks lend themselves well to such decomposition. Decomposition produces a representation of the task as a set of nested goal states that include an initial state and a final state. The iterative decomposition into goals and nested subgoals can terminate in primitives of any desired granularity, the choice of level of detail depending on the predictions required. Although GOMS has proven useful in HCI, tools to support the construction of GOMS models have not yet come into general use.
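The scheduling core of CPM-GOMS, interleaved cognitive, perceptual, and motor operators visualized as a PERT chart, amounts to a longest-path computation over the operator DAG. A toy sketch follows; the operator set and durations are invented for illustration, not taken from the paper's automated teller model.

```python
from functools import lru_cache

# Toy operator DAG for one "move and click" chunk: name -> (ms, predecessors).
operators = {
    "perceive-target":  (100, []),
    "attend-target":    (50,  ["perceive-target"]),
    "init-move-cursor": (50,  ["attend-target"]),
    "move-cursor":      (300, ["init-move-cursor"]),
    "verify-position":  (100, ["move-cursor"]),
    "click":            (100, ["verify-position", "attend-target"]),
}

@lru_cache(maxsize=None)
def finish(op):
    # Earliest finish time of an operator = longest path to it + its duration.
    ms, preds = operators[op]
    return ms + max((finish(p) for p in preds), default=0)

makespan = max(finish(op) for op in operators)
critical = [op for op in operators if finish(op) == makespan]
print(f"predicted chunk time: {makespan} ms; ends with {critical}")
```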
Massively Parallel Dantzig-Wolfe Decomposition Applied to Traffic Flow Scheduling
NASA Technical Reports Server (NTRS)
Rios, Joseph Lucio; Ross, Kevin
2009-01-01
Optimal scheduling of air traffic over the entire National Airspace System is a computationally difficult task. To speed computation, Dantzig-Wolfe decomposition is applied to a known linear integer programming approach for assigning delays to flights. The optimization model is proven to have the block-angular structure necessary for Dantzig-Wolfe decomposition. The subproblems for this decomposition are solved in parallel via independent computation threads. Experimental evidence suggests that as the number of subproblems/threads increases (and their respective sizes decrease), the solution quality, convergence, and runtime improve. A demonstration of this is provided by using one flight per subproblem, which is the finest possible decomposition. This results in thousands of subproblems and associated computation threads. This massively parallel approach is compared to one with few threads and to standard (non-decomposed) approaches in terms of solution quality and runtime. Since this method generally provides a non-integral (relaxed) solution to the original optimization problem, two heuristics are developed to generate an integral solution. Dantzig-Wolfe followed by these heuristics can provide a near-optimal (sometimes optimal) solution to the original problem hundreds of times faster than standard (non-decomposed) approaches. In addition, when massive decomposition is employed, the solution is shown to be more likely integral, which obviates the need for an integerization step. These results indicate that nationwide, real-time, high fidelity, optimal traffic flow scheduling is achievable for (at least) 3 hour planning horizons.
Task-level control for autonomous robots
NASA Technical Reports Server (NTRS)
Simmons, Reid
1994-01-01
Task-level control refers to the integration and coordination of planning, perception, and real-time control to achieve given high-level goals. Autonomous mobile robots need task-level control to effectively achieve complex tasks in uncertain, dynamic environments. This paper describes the Task Control Architecture (TCA), an implemented system that provides commonly needed constructs for task-level control. Facilities provided by TCA include distributed communication, task decomposition and sequencing, resource management, monitoring and exception handling. TCA supports a design methodology in which robot systems are developed incrementally, starting first with deliberative plans that work in nominal situations, and then layering them with reactive behaviors that monitor plan execution and handle exceptions. To further support this approach, design and analysis tools are under development to provide ways of graphically viewing the system and validating its behavior.
NASREN: Standard reference model for telerobot control
NASA Technical Reports Server (NTRS)
Albus, J. S.; Lumia, R.; Mccain, H.
1987-01-01
A hierarchical architecture is described which supports space station telerobots in a variety of modes. The system is divided into three hierarchies: task decomposition, world model, and sensory processing. Goals at each level of the task decomposition hierarchy are divided both spatially and temporally into simpler commands for the next lower level. This decomposition is repeated until, at the lowest level, the drive signals to the robot actuators are generated. To accomplish its goals, task decomposition modules must often use information stored in the world model. The purpose of the sensory system is to update the world model as rapidly as possible to keep the model in registration with the physical world. The architecture of the entire control system hierarchy is described, along with how it can be applied to space telerobot applications.
Enhancements to the Design Manager's Aide for Intelligent Decomposition (DeMAID)
NASA Technical Reports Server (NTRS)
Rogers, James L.; Barthelemy, Jean-Francois M.
1992-01-01
This paper discusses the addition of two new enhancements to the program Design Manager's Aide for Intelligent Decomposition (DeMAID). DeMAID is a knowledge-based tool used to aid a design manager in understanding the interactions among the tasks of a complex design problem. This is done by ordering the tasks to minimize feedback, determining the participating subsystems, and displaying them in an easily understood format. The two new enhancements include (1) rules for ordering a complex assembly process and (2) rules for determining which analysis tasks must be re-executed to compute the output of one task based on a change in input to that or another task.
NASA Technical Reports Server (NTRS)
Simmons, Reid; Apfelbaum, David
2005-01-01
Task Description Language (TDL) is an extension of the C++ programming language that enables programmers to quickly and easily write complex, concurrent computer programs for controlling real-time autonomous systems, including robots and spacecraft. TDL is based on earlier work (circa 1984 through 1989) on the Task Control Architecture (TCA). TDL provides syntactic support for hierarchical task-level control functions, including task decomposition, synchronization, execution monitoring, and exception handling. A Java-language-based compiler transforms TDL programs into pure C++ code that includes calls to a platform-independent task-control-management (TCM) library. TDL has been used to control and coordinate multiple heterogeneous robots in projects sponsored by NASA and the Defense Advanced Research Projects Agency (DARPA). It has also been used in Brazil to control an autonomous airship and in Canada to control a robotic manipulator.
Projection decomposition algorithm for dual-energy computed tomography via deep neural network.
Xu, Yifu; Yan, Bin; Chen, Jian; Zeng, Lei; Li, Lei
2018-03-15
Dual-energy computed tomography (DECT) has been widely used to improve the identification of substances from different spectral information. Decomposition of the mixed test samples into two materials relies on a well-calibrated material decomposition function. This work aims to establish and validate a data-driven algorithm for estimation of the decomposition function. A deep neural network (DNN) consisting of two sub-nets is proposed to solve the projection decomposition problem. The compressing sub-net, essentially a stacked auto-encoder (SAE), learns a compact representation of the energy spectrum. The decomposing sub-net, with a two-layer structure, fits the nonlinear transform between energy projections and basis material thicknesses. The proposed DNN not only delivers images with lower standard deviation and higher quality on both simulated and real data, but also yields the best performance in cases mixed with photon noise. Moreover, the DNN costs only 0.4 s to generate a decomposition solution at the 360 × 512 scale, which is about 200 times faster than the competing algorithms. The DNN model is applicable to decomposition tasks with different dual energies. Experimental results demonstrated the strong function-fitting ability of the DNN. Thus, the deep learning paradigm provides a promising approach to solving the nonlinear problem in DECT.
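The decomposing sub-net's job, fitting the nonlinear map from dual-energy projections to basis-material thicknesses, can be imitated with a generic regressor trained on simulated calibration data. The two-material forward model and all coefficients below are invented, and the compressing SAE stage is omitted:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)

# Toy forward model: low/high-kVp log-projections of two basis materials.
MU = np.array([[0.35, 0.18],      # attenuation of the two materials, low energy
               [0.20, 0.12]])     # ... at high energy (invented values)

def project(thickness):
    # Beer-Lambert log-projections p = MU @ t, with a mild nonlinearity
    # standing in for beam-hardening-like distortion.
    p = thickness @ MU.T
    return p + 0.02 * p ** 2

t_train = rng.uniform(0, 5, size=(5000, 2))          # calibration thicknesses
p_train = project(t_train) + 0.005 * rng.standard_normal((5000, 2))

# Decomposing net: projections -> basis-material thicknesses.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
net.fit(p_train, t_train)

t_true = np.array([[2.0, 1.0]])
print("estimated thicknesses:", net.predict(project(t_true)).round(2))
```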
Layout compliance for triple patterning lithography: an iterative approach
NASA Astrophysics Data System (ADS)
Yu, Bei; Garreton, Gilda; Pan, David Z.
2014-10-01
As the semiconductor process further scales down, the industry encounters many lithography-related issues. At the 14nm logic node and beyond, triple patterning lithography (TPL) is one of the most promising techniques for the Metal1 layer and possibly the Via0 layer. As one of the most challenging problems in TPL, layout decomposition has recently received much attention from both industry and academia. Ideally, the decomposer should point out locations in the layout that are not triple-patterning decomposable and therefore require manual intervention by designers. A traditional decomposition flow would be an iterative process, where each iteration consists of an automatic layout decomposition step and a manual layout modification task. However, due to the NP-hardness of triple patterning layout decomposition, automatic full-chip layout decomposition requires long computational times, and therefore design closure issues continue to linger in the traditional flow. Challenged by this issue, we present a novel incremental layout decomposition framework to facilitate accelerated iterative decomposition. In the first iteration, our decomposer not only points out all conflicts, but also provides suggestions to fix them. After the layout modification, instead of solving the full-chip problem from scratch, our decomposer can provide a quick solution for a selected portion of the layout. We believe this framework is efficient in terms of performance and is designer-friendly.
Si, Jiwei; Li, Hongxia; Sun, Yan; Xu, Yanli; Sun, Yu
2016-01-01
The present study used the choice/no-choice method to investigate the effect of math anxiety on the strategy used in computational estimation and mental arithmetic tasks and to examine age-related differences in this regard. Fifty-seven fourth graders, 56 sixth graders, and 60 adults were randomly selected to participate in the experiment. Results showed the following: (1) High-anxious individuals were more likely to use a rounding-down strategy in the computational estimation task under the best-choice condition. Additionally, sixth-grade students and adults performed faster than fourth-grade students on the strategy execution parameter. Math anxiety affected response times (RTs) and the accuracy with which strategies were executed. (2) The execution of the partial-decomposition strategy was superior to that of the full-decomposition strategy on the mental arithmetic task. Low-math-anxious persons provided more accurate answers than did high-math-anxious participants under the no-choice condition. This difference was significant for sixth graders. With regard to the strategy selection parameter, the RTs for strategy selection varied with age. PMID:27803685
Ordering Design Tasks Based on Coupling Strengths
NASA Technical Reports Server (NTRS)
Rogers, J. L.; Bloebaum, C. L.
1994-01-01
The design process associated with large engineering systems requires an initial decomposition of the complex system into modules of design tasks which are coupled through the transference of output data. In analyzing or optimizing such a coupled system, it is essential to be able to determine which interactions figure prominently enough to significantly affect the accuracy of the system solution. Many decomposition approaches assume the capability is available to determine what design tasks and interactions exist and what order of execution will be imposed during the analysis process. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature for DeMAID (Design Manager's Aid for Intelligent Decomposition) will allow the design manager to use coupling strength information to find a proper sequence for ordering the design tasks. In addition, these coupling strengths aid in deciding if certain tasks or couplings could be removed (or temporarily suspended) from consideration to achieve computational savings without a significant loss of system accuracy. New rules are presented and two small test cases are used to show the effects of using coupling strengths in this manner.
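The use of coupling strengths can be illustrated with a small design structure matrix (DSM) heuristic: schedule next the task whose not-yet-scheduled inputs are weakest, and optionally suspend couplings below a threshold. This is a generic sketch of the idea, not DeMAID's actual rule base.

import numpy as np

def order_tasks(C, suspend_below=0.0):
    """C[i, j] = strength of the coupling feeding task j from task i.
    Greedy ordering that minimizes strong feedback couplings (illustrative)."""
    C = np.where(C >= suspend_below, C, 0.0)  # suspend weak couplings
    remaining = set(range(C.shape[0]))
    order = []
    while remaining:
        # pick the task that depends least on tasks not yet scheduled
        nxt = min(remaining,
                  key=lambda j: sum(C[i, j] for i in remaining if i != j))
        order.append(nxt)
        remaining.remove(nxt)
    return order

C = np.array([[0, 9, 1],
              [0, 0, 8],
              [2, 0, 0]], dtype=float)
print(order_tasks(C))                   # keeps the strong chain 0 -> 1 -> 2
print(order_tasks(C, suspend_below=3))  # weak couplings dropped first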
Matrix decomposition graphics processing unit solver for Poisson image editing
NASA Astrophysics Data System (ADS)
Lei, Zhao; Wei, Li
2012-10-01
In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computation- and memory-intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to address this problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining direct and iterative techniques) with a two-level architecture. These properties enable MDGS to generate solutions identical to those of common Poisson methods and to achieve a high convergence rate in most cases. The approach is advantageous in terms of parallelizability, real-time image processing, low memory consumption, and breadth of application.
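The Poisson equation being solved can be demonstrated with a plain CPU Jacobi iteration in NumPy; MDGS accelerates essentially this computation on the GPU. This sketch assumes single-channel float images and a boolean interior mask (names are illustrative).

import numpy as np

def poisson_clone(src, dst, mask, iters=5000):
    """Seamless cloning by Jacobi iteration on the Poisson equation:
    inside the mask, solve lap(f) = lap(src) with dst as boundary values."""
    f = dst.astype(float).copy()

    def neighbors(a):
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                np.roll(a, 1, 1) + np.roll(a, -1, 1))

    div = 4.0 * src - neighbors(src)   # negative Laplacian of the source
    for _ in range(iters):
        f[mask] = (neighbors(f)[mask] + div[mask]) / 4.0
    return f

# tiny demo: paste a bright square into a dark-to-light gradient
dst = np.tile(np.linspace(0, 1, 32), (32, 1))
src = np.full((32, 32), 0.9)
mask = np.zeros((32, 32), bool); mask[8:24, 8:24] = True
out = poisson_clone(src, dst, mask)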
Horizontal decomposition of data table for finding one reduct
NASA Astrophysics Data System (ADS)
Hońko, Piotr
2018-04-01
Attribute reduction, one of the most essential tasks in rough set theory, is a challenge for data that does not fit in the available memory. This paper proposes new definitions of attribute reduction using horizontal data decomposition. Algorithms for computing a superreduct and subsequently exact reducts of a data table are developed and experimentally verified. In the proposed approach, the size of the subtables obtained during the decomposition can be arbitrarily small. Reducts of the subtables are computed independently from one another using any heuristic method for finding one reduct. Compared with standard attribute reduction methods, the proposed approach can produce superreducts that usually differ only slightly from an exact reduct. The approach needs comparable time and much less memory to reduce the attribute set. The method proposed for removing unnecessary attributes from superreducts runs relatively fast on larger databases.
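A minimal sketch of the idea, under simplifying assumptions: each horizontal block yields a superreduct via a greedy purity heuristic, and the union of block results is then pruned against the full table. The union is itself a heuristic superreduct (it may miss conflicts that only appear across blocks); function names and the heuristic are illustrative, not the paper's algorithms.

import numpy as np

def consistent(X, y, attrs):
    """True if rows that agree on `attrs` always share a decision."""
    groups = {}
    for row, d in zip(X[:, attrs], y):
        if groups.setdefault(tuple(row), d) != d:
            return False
    return True

def greedy_superreduct(X, y):
    attrs, remaining = [], list(range(X.shape[1]))
    while not consistent(X, y, attrs) and remaining:
        def purity(a):  # number of single-decision groups after adding a
            g = {}
            for row, d in zip(X[:, attrs + [a]], y):
                g.setdefault(tuple(row), set()).add(d)
            return sum(len(s) == 1 for s in g.values())
        best = max(remaining, key=purity)
        attrs.append(best); remaining.remove(best)
    return attrs

def horizontal_reduct(X, y, n_blocks=4):
    blocks = np.array_split(np.arange(len(y)), n_blocks)
    union = sorted({a for b in blocks for a in greedy_superreduct(X[b], y[b])})
    # prune: drop any attribute whose removal keeps the full table consistent
    for a in list(union):
        trial = [t for t in union if t != a]
        if consistent(X, y, trial):
            union = trial
    return union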
Developing Battery Computer Aided Engineering Tools for Military Vehicles
2013-12-01
Task 1.b Modeling Bullet Penetration. The purpose of Task 1.a was to extend the chemical kinetics models of CoO2 cathodes developed under CAEBAT to...lithium-ion batteries. The new finite element model captures swelling/shrinking in cathodes/anodes due to thermal expansion and lithium intercalation... Exothermic reactions and approximate onset temperatures (°C): (1) Solid Electrolyte Interphase (SEI) layer decomposition, 80; (2) Anode-electrolyte, 100; (3) Cathode-electrolyte, 130; (4) Electrolyte decomposition, 180.
NASA Astrophysics Data System (ADS)
Hu, Xiaogang; Rymer, William Z.; Suresh, Nina L.
2014-04-01
Objective. The aim of this study is to assess the accuracy of a surface electromyogram (sEMG) motor unit (MU) decomposition algorithm during low levels of muscle contraction. Approach. A two-source method was used to verify the accuracy of the sEMG decomposition system, by utilizing simultaneous intramuscular and surface EMG recordings from the human first dorsal interosseous muscle recorded during isometric trapezoidal force contractions. Spike trains from each recording type were decomposed independently utilizing two different algorithms, EMGlab and dEMG decomposition algorithms. The degree of agreement of the decomposed spike timings was assessed for three different segments of the EMG signals, corresponding to specified regions in the force task. A regression analysis was performed to examine whether certain properties of the sEMG and force signal can predict the decomposition accuracy. Main results. The average accuracy of successful decomposition among the 119 MUs that were common to both intramuscular and surface records was approximately 95%, and the accuracy was comparable between the different segments of the sEMG signals (i.e., force ramp-up versus steady state force versus combined). The regression function between the accuracy and properties of sEMG and force signals revealed that the signal-to-noise ratio of the action potential and stability in the action potential records were significant predictors of the surface decomposition accuracy. Significance. The outcomes of our study confirm the accuracy of the sEMG decomposition algorithm during low muscle contraction levels and provide confidence in the overall validity of the surface dEMG decomposition algorithm.
Program Helps Decompose Complicated Design Problems
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.
1993-01-01
Time is saved by intelligent decomposition of a design problem into smaller, interrelated problems. DeMAID is a knowledge-based software system for ordering the sequence of modules and identifying a possible multilevel structure for a design problem. It displays the modules in an N x N matrix format. Although it requires an investment of time to generate and refine the list of modules for input, it saves a considerable amount of money and time in the total design process, particularly for new design problems in which the ordering of modules has not been defined. The program can also be used to examine an assembly-line process or the ordering of tasks and milestones.
Integrated Network Decompositions and Dynamic Programming for Graph Optimization (INDDGO)
DOE Office of Scientific and Technical Information (OSTI.GOV)
The INDDGO software package offers a set of tools for finding exact solutions to graph optimization problems via tree decompositions and dynamic programming algorithms. Currently the framework offers serial and parallel (distributed memory) algorithms for finding tree decompositions and solving the maximum weighted independent set problem. The parallel dynamic programming algorithm is implemented on top of the MADNESS task-based runtime.
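The dynamic program at the heart of such frameworks is easiest to see on an actual tree (treewidth 1), where each bag of the tree decomposition is a single vertex. A sketch of max-weight independent set on a rooted tree, not INDDGO code:

def mwis_tree(children, w, root=0):
    """Max-weight independent set on a rooted tree by post-order DP.
    children[v] lists v's children; w[v] is v's weight."""
    incl, excl = {}, {}
    stack, order = [root], []
    while stack:                       # iterative post-order traversal
        v = stack.pop(); order.append(v); stack.extend(children[v])
    for v in reversed(order):
        incl[v] = w[v] + sum(excl[c] for c in children[v])
        excl[v] = sum(max(incl[c], excl[c]) for c in children[v])
    return max(incl[root], excl[root])

#      0(3)
#     /    \
#   1(5)   2(2)
#    |
#   3(4)
print(mwis_tree({0: [1, 2], 1: [3], 2: [], 3: []},
                {0: 3, 1: 5, 2: 2, 3: 4}))  # 7, e.g. {1, 2} or {0, 3}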
Using Apex To Construct CPM-GOMS Models
NASA Technical Reports Server (NTRS)
John, Bonnie; Vera, Alonso; Matessa, Michael; Freed, Michael; Remington, Roger
2006-01-01
A process for automatically generating computational models of human/computer interactions, as well as graphical and textual representations of the models, has been built on the conceptual foundation of a method known in the art as CPM-GOMS. This method is so named because it combines (1) the task decomposition of an analysis according to an underlying method known in the art as the goals, operators, methods, and selection rules (GOMS) method with (2) a model of human resource usage at the level of cognitive, perceptual, and motor (CPM) operations. CPM-GOMS models have made accurate predictions about the behaviors of skilled computer users in routine tasks, but heretofore, such models have been generated in a tedious, error-prone manual process. In the present process, CPM-GOMS models are generated automatically from a hierarchical task decomposition expressed by use of a computer program, known as Apex, designed previously to model human behavior in complex, dynamic tasks. An inherent capability of Apex for scheduling of resources automates the difficult task of interleaving the cognitive, perceptual, and motor resources that underlie common task operators (e.g., move and click mouse). The user interface of Apex automatically generates Program Evaluation Review Technique (PERT) charts, which enable modelers to visualize the complex parallel behavior represented by a model. Because interleaving and the generation of displays to aid visualization are automated, it is now feasible to construct arbitrarily long sequences of behaviors. The process was tested by using Apex to create a CPM-GOMS model of a relatively simple human/computer-interaction task and comparing the time predictions of the model with measurements of the times taken by human users in performing the various steps of the task. The task was to withdraw $80 in cash from an automated teller machine (ATM). For the test, a Visual Basic mockup of an ATM was created, with provision for input from (and measurement of the performance of) the user via a mouse. The times predicted by the automatically generated model approximated the measured times fairly well. While these results are promising, there is a need for further development of the process. Moreover, it will be necessary to test other, more complex models: the actions required of the user in the ATM task are too sequential to involve substantial parallelism and interleaving and, hence, do not serve as an adequate test of the unique strength of CPM-GOMS models to accommodate parallelism and interleaving.
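The PERT charts Apex generates rest on standard critical-path arithmetic over a dependency graph of cognitive, perceptual, and motor operators. A generic sketch with invented operator names and durations, not Apex output:

def critical_path(tasks):
    """tasks: {name: (duration_ms, [prerequisites])} -> earliest finish times."""
    finish = {}
    def ft(t):
        if t not in finish:
            dur, deps = tasks[t]
            finish[t] = dur + max((ft(d) for d in deps), default=0.0)
        return finish[t]
    for t in tasks:
        ft(t)
    return finish

ops = {
    'perceive-target': (100, []),
    'init-move-mouse': (50,  ['perceive-target']),
    'move-mouse':      (300, ['init-move-mouse']),
    'verify-position': (100, ['perceive-target']),   # interleaved in parallel
    'click':           (100, ['move-mouse', 'verify-position']),
}
fin = critical_path(ops)
print(fin['click'])  # 550: the verify step runs off the critical path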
Multitasking the three-dimensional transport code TORT on CRAY platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azmy, Y.Y.; Barnett, D.A.; Burre, C.A.
1996-04-01
The multitasking options in the three-dimensional neutral particle transport code TORT, originally implemented for Cray's CTSS operating system, are revived and extended to run on Cray Y/MP and C90 computers using the UNICOS operating system. These include two coarse-grained domain decompositions: across octants, and across directions within an octant, termed Octant Parallel (OP) and Direction Parallel (DP), respectively. Parallel performance of the DP is significantly enhanced by increasing the task grain size and reducing load imbalance via dynamic scheduling of the discrete angles among the participating tasks. Substantial wall-clock speedup factors, approaching 4.5 using 8 tasks, have been measured in a time-sharing environment, and generally depend on the test problem specifications, number of tasks, and machine loading during execution.
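Dynamic scheduling of the discrete angles can be sketched with a shared work queue from which tasks pull angles as they become free; this pull model is what reduces load imbalance relative to a static split. A generic Python threading sketch (the original is Cray multitasking, not Python):

import queue
import threading

def sweep(angle):
    """Placeholder for the per-angle transport sweep."""
    return sum(i * angle for i in range(50_000))  # uneven fake workload

def worker(q, results):
    while True:
        try:
            angle = q.get_nowait()    # dynamic scheduling: pull next angle
        except queue.Empty:
            return
        results[angle] = sweep(angle)

angles = list(range(1, 33))           # discrete ordinates in one octant
q = queue.Queue()
for a in angles:
    q.put(a)
results = {}
threads = [threading.Thread(target=worker, args=(q, results)) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()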
Optimal dynamic voltage scaling for wireless sensor nodes with real-time constraints
NASA Astrophysics Data System (ADS)
Cassandras, Christos G.; Zhuang, Shixin
2005-11-01
Sensors are increasingly embedded in manufacturing systems and wirelessly networked to monitor and manage operations ranging from process and inventory control to tracking equipment and even post-manufacturing product monitoring. In building such sensor networks, a critical issue is the limited and hard to replenish energy in the devices involved. Dynamic voltage scaling is a technique that controls the operating voltage of a processor to provide desired performance while conserving energy and prolonging the overall network's lifetime. We consider such power-limited devices processing time-critical tasks which are non-preemptive, aperiodic and have uncertain arrival times. We treat voltage scaling as a dynamic optimization problem whose objective is to minimize energy consumption subject to hard or soft real-time execution constraints. In the case of hard constraints, we build on prior work (which engages a voltage scaling controller at task completion times) by developing an intra-task controller that acts at all arrival times of incoming tasks. We show that this optimization problem can be decomposed into two simpler ones whose solution leads to an algorithm that does not actually require solving any nonlinear programming problems. In the case of soft constraints, this decomposition must be partly relaxed, but it still leads to a scalable (linear in the number of tasks) algorithm. Simulation results are provided to illustrate performance improvements in systems with intra-task controllers compared to uncontrolled systems or those using inter-task control.
A characterization of the two-step reaction mechanism of phenol decomposition by a Fenton reaction
NASA Astrophysics Data System (ADS)
Valdés, Cristian; Alzate-Morales, Jans; Osorio, Edison; Villaseñor, Jorge; Navarro-Retamal, Carlos
2015-11-01
Phenol is one of the worst contaminants to date, and its degradation has been a crucial task over the years. Here, the decomposition process of phenol in a Fenton reaction is described. Using scavengers, it was observed that decomposition of phenol was mainly influenced by the production of hydroxyl radicals. Experimental and theoretical activation energies (Ea) for phenol oxidation intermediates were calculated. According to these Ea, phenol decomposition is a two-step reaction mechanism mediated predominantly by hydroxyl radicals, producing a decomposition yield order of hydroquinone > catechol > resorcinol. Furthermore, traces of reaction-derived acids were detected by HPLC and GC-MS.
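Activation energies of this kind are typically obtained from an Arrhenius fit of rate constants measured at several temperatures. A generic sketch with made-up numbers, not the paper's data:

import numpy as np

R = 8.314  # gas constant, J/(mol K)
T = np.array([293.0, 303.0, 313.0, 323.0])        # temperatures, K
k = np.array([1.2e-4, 2.9e-4, 6.5e-4, 1.4e-3])    # hypothetical rate constants

# Arrhenius: ln k = ln A - Ea / (R T), a straight line in 1/T
slope, ln_A = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R
print(f"Ea = {Ea / 1000:.1f} kJ/mol, A = {np.exp(ln_A):.3g}")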
A data-driven method to enhance vibration signal decomposition for rolling bearing fault analysis
NASA Astrophysics Data System (ADS)
Grasso, M.; Chatterton, S.; Pennacchi, P.; Colosimo, B. M.
2016-12-01
Health condition analysis and diagnostics of rotating machinery requires the capability of properly characterizing the information content of sensor signals in order to detect and identify possible fault features. Time-frequency analysis plays a fundamental role, as it allows determining both the existence and the causes of a fault. The separation of components belonging to different time-frequency scales, either associated to healthy or faulty conditions, represents a challenge that motivates the development of effective methodologies for multi-scale signal decomposition. In this framework, the Empirical Mode Decomposition (EMD) is a flexible tool, thanks to its data-driven and adaptive nature. However, the EMD usually yields an over-decomposition of the original signals into a large number of intrinsic mode functions (IMFs). The selection of the most relevant IMFs is a challenging task, and the reference literature lacks automated methods to achieve a synthetic decomposition into few physically meaningful modes while avoiding the generation of spurious or meaningless modes. The paper proposes a novel automated approach aimed at generating a decomposition into a minimal number of relevant modes, called Combined Mode Functions (CMFs), each consisting of a sum of adjacent IMFs that share similar properties. The final number of CMFs is selected in a fully data-driven way, leading to an enhanced characterization of the signal content without any information loss. A novel criterion to assess the dissimilarity between adjacent CMFs is proposed, based on the probability density functions of the frequency spectra. The method is suitable for analyzing vibration signals that may be periodically acquired within the operating life of rotating machinery. A rolling element bearing fault analysis based on experimental data is presented to demonstrate the performance of the method and the benefits it provides.
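The grouping step can be sketched as follows: given IMFs (assumed precomputed, e.g., by an EMD library), adjacent modes are merged while the dissimilarity between the spectral probability density functions of the current CMF and the next IMF stays below a threshold. The Jensen-Shannon divergence here stands in for the paper's dissimilarity criterion; both it and the threshold are illustrative.

import numpy as np

def spectral_pdf(x):
    p = np.abs(np.fft.rfft(x)) ** 2
    return p / p.sum()

def js_divergence(p, q, eps=1e-12):
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def combine_modes(imfs, threshold=0.4):
    """Merge adjacent IMFs (rows of `imfs`) into combined mode functions."""
    cmfs = [imfs[0].copy()]
    for imf in imfs[1:]:
        if js_divergence(spectral_pdf(cmfs[-1]), spectral_pdf(imf)) < threshold:
            cmfs[-1] += imf          # similar spectra: same CMF
        else:
            cmfs.append(imf.copy())  # dissimilar: start a new CMF
    return cmfs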
A study of the parallel algorithm for large-scale DC simulation of nonlinear systems
NASA Astrophysics Data System (ADS)
Cortés Udave, Diego Ernesto; Ogrodzki, Jan; Gutiérrez de Anda, Miguel Angel
Newton-Raphson DC analysis of large-scale nonlinear circuits may be an extremely time-consuming process, even if sparse matrix techniques and bypassing of nonlinear model calculations are used. A slight decrease in the time required for this task may be achieved on multi-core, multithreaded computers if the calculation of the mathematical models for the nonlinear elements, as well as the stamp management of the sparse matrix entries, is handled through concurrent processes. The numerical complexity can be further reduced via circuit decomposition and parallel solution of blocks, taking as a departure point the BBD matrix structure. This block-parallel approach may give considerable gains, though it is strongly dependent on the system topology and, of course, on the processor type. This contribution presents an easily parallelizable decomposition-based algorithm for DC simulation and provides a detailed study of its effectiveness.
The two faces of avoidance: Time-frequency correlates of motivational disposition in blood phobia.
Mennella, Rocco; Sarlo, Michela; Messerotti Benvenuti, Simone; Buodo, Giulia; Mento, Giovanni; Palomba, Daniela
2017-11-01
Contrary to other phobias, individuals with blood phobia do not show a clear-cut withdrawal disposition from the feared stimulus. The study of response inhibition provides insights into reduced action disposition in blood phobia. Twenty individuals with and 20 without blood phobia completed an emotional go/no-go task including phobia-related pictures, as well as phobia-unrelated unpleasant, neutral, and pleasant stimuli. Behavioral results did not indicate a phobia-specific reduced action disposition in the phobic group. Time-frequency decomposition of event-related EEG data showed a reduction of right prefrontal activity, as indexed by an increase in alpha power (200 ms), for no-go mutilation trials in the phobic group but not in controls. Moreover, theta power (300 ms) increased specifically for phobia-related pictures in individuals with, but not without, blood phobia, irrespective of go or no-go trial types. Passive avoidance of phobia-related stimuli subtended by the increased alpha in the right prefrontal cortex, associated with increased emotional salience indexed by theta synchronization, represents a possible neurophysiological correlate of the conflicting motivational response in blood phobia. Through the novel use of time-frequency decomposition in an emotional go/no-go task, the present study contributed to clarifying the neurophysiological correlates of the overlapping motivational tendencies in blood phobia. © 2017 Society for Psychophysiological Research.
Domain decomposition for aerodynamic and aeroacoustic analyses, and optimization
NASA Technical Reports Server (NTRS)
Baysal, Oktay
1995-01-01
The overarching theme was domain decomposition, which was intended to improve the numerical solution technique for the partial differential equations at hand; in the present study, those that governed either the fluid flow, the aeroacoustic wave propagation, or the sensitivity analysis for a gradient-based optimization. The role of the domain decomposition extended beyond the original impetus of discretizing geometrically complex regions or writing modular software for distributed-hardware computers. It induced function-space decompositions and operator decompositions that offered the valuable property of near independence of operator evaluation tasks. The objectives gravitated toward the extension and implementation of methodologies either previously developed or concurrently being developed: (1) aerodynamic sensitivity analysis with domain decomposition (SADD); (2) computational aeroacoustics of cavities; and (3) dynamic, multibody computational fluid dynamics using unstructured meshes.
Stable Scalp EEG Spatiospectral Patterns Across Paradigms Estimated by Group ICA.
Labounek, René; Bridwell, David A; Mareček, Radek; Lamoš, Martin; Mikl, Michal; Slavíček, Tomáš; Bednařík, Petr; Baštinec, Jaromír; Hluštík, Petr; Brázdil, Milan; Jan, Jiří
2018-01-01
Electroencephalography (EEG) oscillations reflect the superposition of different cortical sources with potentially different frequencies. Various blind source separation (BSS) approaches have been developed and implemented in order to decompose these oscillations, and a subset of approaches have been developed for decomposition of multi-subject data. Group independent component analysis (Group ICA) is one such approach, revealing spatiospectral maps at the group level with distinct frequency and spatial characteristics. The reproducibility of these distinct maps across subjects and paradigms is a relatively unexplored domain and the topic of the present study. To address this, we conducted separate group ICA decompositions of EEG spatiospectral patterns on data collected during three different paradigms or tasks (resting state, a semantic decision task, and a visual oddball task). K-means clustering analysis of back-reconstructed individual subject maps demonstrates that fourteen different independent spatiospectral maps are present across the different paradigms/tasks, i.e., they are generally stable.
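A minimal group-level decomposition in the same spirit can be assembled with scikit-learn: subjects' spatiospectral matrices are concatenated along samples, unmixed once with ICA at the group level, and the back-reconstructed subject maps clustered with k-means. Shapes, the back-reconstruction step, and parameters are placeholders, not the study's pipeline.

import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans

n_subj, n_feat, n_comp = 20, 500, 14   # feature = space x frequency, flattened
subj_data = [np.random.randn(200, n_feat) for _ in range(n_subj)]  # stand-ins

# group ICA: concatenate subjects along samples, unmix at the group level
group = np.vstack(subj_data)
ica = FastICA(n_components=n_comp, random_state=0, max_iter=1000)
ica.fit(group)
group_maps = ica.mixing_.T             # group-level spatiospectral maps

# back-reconstruct subject-specific maps, then test stability by clustering
subj_maps = []
for d in subj_data:
    s = ica.transform(d)                        # subject sources
    m, *_ = np.linalg.lstsq(s, d, rcond=None)   # subject-level maps
    subj_maps.append(m)
labels = KMeans(n_clusters=n_comp, n_init=10, random_state=0).fit_predict(
    np.vstack(subj_maps))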
Richard, Nelly; Laursen, Bettina; Grupe, Morten; Drewes, Asbjørn M; Graversen, Carina; Sørensen, Helge B D; Bastlund, Jesper F
2017-04-01
Active auditory oddball paradigms are simple tone discrimination tasks used to study the P300 deflection of event-related potentials (ERPs). These ERPs may be quantified by time-frequency analysis. As auditory stimuli cause early high frequency and late low frequency ERP oscillations, the continuous wavelet transform (CWT) is often chosen for decomposition due to its multi-resolution properties. However, as the conventional CWT traditionally applies only one mother wavelet to represent the entire spectrum, the time-frequency resolution is not optimal across all scales. To account for this, we developed and validated a novel method specifically refined to analyse P300-like ERPs in rats. An adapted CWT (aCWT) was implemented to preserve high time-frequency resolution across all scales by commissioning multiple wavelets operating at different scales. First, decomposition of simulated ERPs was illustrated using the classical CWT and the aCWT. Next, the two methods were applied to EEG recordings obtained from the prefrontal cortex in rats performing a two-tone auditory discrimination task. While only early ERP frequency changes between responses to target and non-target tones were detected by the CWT, both early and late changes were successfully described with high accuracy by the aCWT in rat ERPs. Increased frontal gamma power and phase synchrony was observed particularly within theta and gamma frequency bands during deviant tones. The study suggests superior performance of the aCWT over the CWT in terms of detailed quantification of the time-frequency properties of ERPs. Our methodological investigation indicates that accurate and complete assessment of the time-frequency components of short-time neural signals is feasible with the novel analysis approach, which may be advantageous for the characterisation of several types of evoked potentials, particularly in rodents.
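The resolution issue the aCWT addresses can be seen by computing a conventional single-wavelet CWT, e.g., with PyWavelets; the adapted method would instead assign different mother wavelets to different scale ranges. A sketch of the conventional baseline on a toy ERP-like signal (parameter values illustrative):

import numpy as np
import pywt

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
# toy ERP-like signal: early 40 Hz burst plus late 6 Hz wave
sig = (np.exp(-((t - 0.1) / 0.02) ** 2) * np.sin(2 * np.pi * 40 * t) +
       np.exp(-((t - 0.4) / 0.10) ** 2) * np.sin(2 * np.pi * 6 * t))

wavelet = 'cmor1.5-1.0'                       # one complex Morlet for all scales
freqs_hz = np.geomspace(2, 95, 60)
# convert desired center frequencies to scales for this mother wavelet
scales = pywt.central_frequency(wavelet) * fs / freqs_hz
coeffs, freqs = pywt.cwt(sig, scales, wavelet, sampling_period=1.0 / fs)
power = np.abs(coeffs) ** 2                   # time-frequency power map
# an aCWT-style variant would recompute bands with wavelets tuned per scale range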
A Graph Based Backtracking Algorithm for Solving General CSPs
NASA Technical Reports Server (NTRS)
Pang, Wanlin; Goodwin, Scott D.
2003-01-01
Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts contributed to development of a class of CSP solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph based backtracking algorithm called omega-CDBT, which shares merits and overcomes the weaknesses of both decomposition and search approaches.
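A plain constraint-graph-guided backtracking skeleton, useful as a baseline for such hybrids (this is generic CSP search, not omega-CDBT itself):

def solve_csp(domains, constraints):
    """domains: {var: list of values}; constraints: {(u, v): predicate}.
    Returns one satisfying assignment or None."""
    neighbors = {v: [] for v in domains}
    for (u, v), pred in constraints.items():
        neighbors[u].append((v, pred, False))
        neighbors[v].append((u, pred, True))

    def ok(var, val, assign):
        for other, pred, flipped in neighbors[var]:
            if other in assign:
                a, b = (assign[other], val) if flipped else (val, assign[other])
                if not pred(a, b):
                    return False
        return True

    order = sorted(domains, key=lambda v: len(domains[v]))  # small domains first
    def backtrack(i, assign):
        if i == len(order):
            return assign
        var = order[i]
        for val in domains[var]:
            if ok(var, val, assign):
                assign[var] = val
                if backtrack(i + 1, assign) is not None:
                    return assign
                del assign[var]
        return None
    return backtrack(0, {})

# map coloring: adjacent regions must differ
doms = {v: ['r', 'g', 'b'] for v in 'ABC'}
cons = {(u, v): (lambda a, b: a != b)
        for (u, v) in [('A', 'B'), ('B', 'C'), ('A', 'C')]}
print(solve_csp(doms, cons))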
Cognitive task analysis and innovation of training: the case of structured troubleshooting.
Schaafstal, A; Schraagen, J M; van Berlo, M
2000-01-01
Troubleshooting is often a time-consuming and difficult activity. The question of how the training of novice technicians can be improved was the starting point of the research described in this article. A cognitive task analysis was carried out consisting of two preliminary observational studies on troubleshooting in naturalistic settings, combined with an interpretation of the data obtained in the context of the existing literature. On the basis of this cognitive task analysis, a new method for the training of troubleshooting was developed (structured troubleshooting), which combines a domain-independent strategy for troubleshooting with a context-dependent, multiple-level, functional decomposition of systems. This method has been systematically evaluated for its use in training. The results show that technicians trained in structured troubleshooting solve twice as many malfunctions, in less time, than those trained in the traditional way. Moreover, structured troubleshooting can be taught in less time than can traditional troubleshooting. Finally, technicians learn to troubleshoot in an explicit and uniform way. These advantages of structured troubleshooting ultimately lead to a reduction in training and troubleshooting costs.
Guastello, Stephen J; Gorin, Hillary; Huschen, Samuel; Peters, Natalie E; Fabisch, Megan; Poston, Kirsten
2012-10-01
It has become well established in laboratory experiments that switching tasks, perhaps due to interruptions at work, incurs costs in the response time to complete the next task. Conditions are also known that exaggerate or lessen the switching costs. Although switching costs can contribute to fatigue, task switching can also be an adaptive response to fatigue. The present study introduces a new research paradigm for studying the emergence of voluntary task switching regimes, self-organizing processes therein, and the possibly conflicting roles of switching costs and minimum entropy. Fifty-four undergraduates performed 7 different computer-based cognitive tasks producing sets of 49 responses under instructional conditions requiring task quotas or no quotas. The sequences of task choices were analyzed using orbital decomposition to extract pattern types and lengths, which were then classified and compared with regard to Shannon entropy, topological entropy, number of task switches involved, and overall performance. Results indicated that similar but different patterns were generated under the two instructional conditions, and better performance was associated with lower topological entropy. Both entropy metrics were associated with the amount of voluntary task switching. Future research should explore conditions affecting the trade-off between switching costs and entropy, levels of automaticity between task elements, and the role of voluntary switching regimes on fatigue.
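The pattern-entropy side of orbital decomposition reduces to counting recurring subsequences of task choices. A small sketch of the two entropy estimates over symbol strings (generic, not the study's full procedure):

import math
from collections import Counter

def pattern_entropies(seq, L=2):
    """Shannon entropy of length-L patterns, plus a topological-entropy
    estimate, (1/L) * log2(number of distinct patterns observed)."""
    pats = [tuple(seq[i:i + L]) for i in range(len(seq) - L + 1)]
    counts = Counter(pats)
    n = len(pats)
    shannon = -sum(c / n * math.log2(c / n) for c in counts.values())
    topological = math.log2(len(counts)) / L
    return shannon, topological

tasks = list("ABABABCDABAB")   # a sequence of task choices
print(pattern_entropies(tasks, L=2))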
Does Learning a Complex Task Have To Be Complex?: A Study in Learning Decomposition.
ERIC Educational Resources Information Center
Lee, Frank J.; Anderson, John R.
2001-01-01
Decomposed the learning in the Kanfer-Ackerman Air-Traffic Controller Task (P. Ackerman, 1988) down to learning at the keyboard level. Reanalyzed the Ackerman data to show that learning in this complex task reflects learning at the keystroke level. Conducted an eye-tracking experiment with 10 adults that showed that learning at the key stroke…
Keough, N; L'Abbé, E N; Steyn, M; Pretorius, S
2015-01-01
Forensic anthropologists are tasked with interpreting the sequence of events from death to the discovery of a body. Burned bone often evokes questions as to the timing of burning events. The purpose of this study was to assess the progression of thermal damage on bones with advancement in decomposition. Twenty-five pigs in various stages of decomposition (fresh, early, advanced, early and late skeletonisation) were exposed to fire for 30 min. The scored heat-related features on bone included colour change (unaltered, charred, calcined), brown and heat borders, heat lines, delineation, greasy bone, joint shielding, predictable and minimal cracking, delamination and heat-induced fractures. Colour changes were scored according to a ranked percentage scale (0-3) and the remaining traits as absent or present (0/1). Kappa statistics was used to evaluate intra- and inter-observer error. Transition analysis was used to formulate probability mass functions [P(X=j|i)] to predict decomposition stage from the scored features of thermal destruction. Nine traits displayed potential to predict decomposition stage from burned remains. An increase in calcined and charred bone occurred synchronously with advancement of decomposition with subsequent decrease in unaltered surfaces. Greasy bone appeared more often in the early/fresh stages (fleshed bone). Heat borders, heat lines, delineation, joint shielding, predictable and minimal cracking are associated with advanced decomposition, when bone remains wet but lacks extensive soft tissue protection. Brown burn/borders, delamination and other heat-induced fractures are associated with early and late skeletonisation, showing that organic composition of bone and percentage of flesh present affect the manner in which it burns. No statistically significant difference was noted among observers for the majority of the traits, indicating that they can be scored reliably. Based on the data analysis, the pattern of heat-induced changes may assist in estimating decomposition stage from unknown, burned remains. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Reconstructing multi-mode networks from multivariate time series
NASA Astrophysics Data System (ADS)
Gao, Zhong-Ke; Yang, Yu-Xuan; Dang, Wei-Dong; Cai, Qing; Wang, Zhen; Marwan, Norbert; Boccaletti, Stefano; Kurths, Jürgen
2017-09-01
Unveiling the dynamics hidden in multivariate time series is a task of the utmost importance in a broad variety of areas in physics. We here propose a method that leads to the construction of a novel functional network, a multi-mode weighted graph combined with an empirical mode decomposition, and to the realization of multi-information fusion of multivariate time series. The method is illustrated in a couple of successful applications (a multi-phase flow and an epileptic electro-encephalogram), which demonstrate its power in revealing the dynamical behaviors underlying the transitions of different flow patterns and in differentiating brain states of seizure and non-seizure.
Research on air and missile defense task allocation based on extended contract net protocol
NASA Astrophysics Data System (ADS)
Zhang, Yunzhi; Wang, Gang
2017-10-01
Against the background of distributed cooperative engagement of air and missile defense elements, the interception task allocation problem of multiple weapon units with multiple targets under network conditions is analyzed. First, a mathematical model of task allocation is established by combat task decomposition. Second, an initialization assignment based on auction contracts and an adjustment allocation scheme based on swap contracts are introduced into the task allocation. Finally, simulation of a typical scenario shows that the model can be used to solve the task allocation problem in a complex combat environment.
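A toy version of the two contract phases: an auction round assigns each target to the highest-utility free unit, then swap contracts exchange assignments whenever that raises total utility. The utility matrix and the one-target-per-unit restriction are invented for illustration.

import numpy as np

def auction_then_swap(U):
    """U[i, j]: utility of unit i intercepting target j (one target per unit)."""
    n_units, n_targets = U.shape
    assign = {}                               # target -> unit
    free = set(range(n_units))
    for j in np.argsort(-U.max(axis=0)):      # auction hottest targets first
        if not free:
            break
        winner = max(free, key=lambda i: U[i, j])
        assign[j] = winner
        free.discard(winner)
    improved = True
    while improved:                           # swap-contract adjustment
        improved = False
        targets = list(assign)
        for a in targets:
            for b in targets:
                ia, ib = assign[a], assign[b]
                if U[ib, a] + U[ia, b] > U[ia, a] + U[ib, b]:
                    assign[a], assign[b] = ib, ia
                    improved = True
    return assign

U = np.array([[9, 2, 4],
              [3, 8, 1],
              [5, 6, 7]], dtype=float)
print(auction_then_swap(U))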
Tzur, Gabriel; Berger, Andrea
2009-03-17
Theta rhythm has been connected to ERP components such as the error-related negativity (ERN) and the feedback-related negativity (FRN). The nature of this theta activity is still unclear, that is, whether it is related to error detection, conflict between responses, or reinforcement learning processes. We examined slow (e.g., theta) and fast (e.g., gamma) brain rhythms related to rule violation. A time-frequency decomposition analysis over a wide frequency band (0-95 Hz) indicated that the theta activity relates to evaluation processes, regardless of motor/action processes. Similarities between the theta activities found in rule-violation tasks and in tasks eliciting the ERN/FRN suggest that this theta activity reflects the operation of general evaluation mechanisms. Moreover, significant effects were also found in fast brain rhythms. These effects might be related to the synchronization between different types of cognitive processes involved in the fulfillment of a task (e.g., working memory, visual perception, mathematical calculation, etc.).
Task analysis of autonomous on-road driving
NASA Astrophysics Data System (ADS)
Barbera, Anthony J.; Horst, John A.; Schlenoff, Craig I.; Aha, David W.
2004-12-01
The Real-time Control System (RCS) Methodology has evolved over a number of years as a technique to capture task knowledge and organize it into a framework conducive to implementation in computer control systems. The fundamental premise of this methodology is that the present state of the task activities sets the context that identifies the requirements for all of the support processing. In particular, the task context at any time determines what is to be sensed in the world, what world model states are to be evaluated, which situations are to be analyzed, what plans should be invoked, and which behavior generation knowledge is to be accessed. This methodology concentrates on the task behaviors explored through scenario examples to define a task decomposition tree that clearly represents the branching of tasks into layers of simpler and simpler subtask activities. There is a named branching condition/situation identified for every fork of this task tree. These become the input conditions of the if-then rules of the knowledge set that define how the task is to respond to input state changes. Detailed analysis of each branching condition/situation is used to identify antecedent world states, and these, in turn, are further analyzed to identify all of the entities, objects, and attributes that have to be sensed to determine whether any of these world states exist. This paper explores the use of this 4D/RCS methodology in some detail for the particular task of autonomous on-road driving. This work was funded under the Defense Advanced Research Projects Agency (DARPA) Mobile Autonomous Robot Software (MARS) effort (Doug Gage, Program Manager).
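The flavor of such a task decomposition tree, with a named branching condition at every fork, can be conveyed by a small rule table. Task names and conditions below are invented for illustration, not taken from the 4D/RCS analysis.

# each task maps named branching conditions to the subtask to invoke
TASK_TREE = {
    'drive_on_road': [
        ('intersection_ahead', 'handle_intersection'),
        ('obstacle_in_lane',   'pass_obstacle'),
        ('lane_clear',         'follow_lane'),
    ],
    'handle_intersection': [
        ('signal_red',   'stop_at_line'),
        ('signal_green', 'proceed_through'),
    ],
}

def select_subtask(task, world_state):
    """Fire the first if-then rule whose branching condition holds."""
    for condition, subtask in TASK_TREE.get(task, []):
        if world_state.get(condition, False):
            return subtask
    return None

state = {'intersection_ahead': True, 'signal_red': True}
step = select_subtask('drive_on_road', state)   # 'handle_intersection'
print(step, '->', select_subtask(step, state))  # 'stop_at_line'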
ERIC Educational Resources Information Center
Xu, Chang; LeFevre, Jo-Anne
2016-01-01
Are there differential benefits of training sequential number knowledge versus spatial skills for children's numerical and spatial performance? Three- to five-year-old children (N = 84) participated in 1 session of either sequential training (e.g., what comes before and after the number 5?) or non-numerical spatial training (i.e., decomposition of…
[Detection of constitutional types of EEG using the orthogonal decomposition method].
Kuznetsova, S M; Kudritskaia, O V
1987-01-01
The authors present an algorithm for investigating the processes of brain bioelectrical activity with the help of an orthogonal decomposition procedure intended for the identification of constitutional types of EEGs. The method has helped to effectively solve the task of diagnosing constitutional types of EEGs, which are determined by varying degrees of hereditary predisposition to longevity or cerebral stroke.
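Orthogonal decomposition of an EEG epoch is, in modern terms, an SVD/principal-component expansion; the leading coefficients can then feed a classifier of constitutional types. A minimal NumPy sketch with illustrative shapes:

import numpy as np

def orthogonal_features(epoch, n_keep=5):
    """epoch: channels x samples. Returns the energies of the leading
    orthogonal components as a compact feature vector."""
    centered = epoch - epoch.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    return (s[:n_keep] ** 2) / np.sum(s ** 2)   # variance fraction per mode

eeg = np.random.randn(19, 1024)                 # stand-in 19-channel epoch
print(orthogonal_features(eeg))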
Huang, Yulin; Zha, Yuebo; Wang, Yue; Yang, Jianyu
2015-06-18
The forward looking radar imaging task is a practical and challenging problem for the adverse-weather aircraft landing industry. Deconvolution methods can realize forward looking imaging, but they often lead to noise amplification in the radar image. In this paper, a forward looking radar imaging method based on deconvolution is presented for adverse-weather aircraft landing. We first present the theoretical background of the forward looking radar imaging task and its application to aircraft landing. Then, we convert the forward looking radar imaging task into a corresponding deconvolution problem, which is solved in the framework of algebraic theory using the truncated singular value decomposition (TSVD) method. The key issue of selecting the truncation parameter is addressed using a generalized cross-validation (GCV) approach. Simulation and experimental results demonstrate that the proposed method is effective in achieving angular resolution enhancement while suppressing noise amplification in forward looking radar imaging.
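A one-dimensional sketch of the approach: build the convolution matrix from the antenna pattern and pick the TSVD truncation order by minimizing a GCV score. The pattern, sizes, and the simple GCV form are illustrative, not the paper's exact formulation.

import numpy as np

def tsvd_deconvolve(h, b):
    """Deconvolve b = A x (A built from kernel h) with GCV-chosen truncation."""
    n = len(b)
    A = np.zeros((n, n))
    for i in range(n):                       # causal convolution matrix
        for j in range(max(0, i - len(h) + 1), i + 1):
            A[i, j] = h[i - j]
    U, s, Vt = np.linalg.svd(A)
    best, x_best = np.inf, None
    for k in range(1, n):                    # truncation order sweep
        coef = (U.T @ b)[:k] / s[:k]
        x = Vt[:k].T @ coef
        gcv = np.sum((A @ x - b) ** 2) / (n - k) ** 2   # GCV score
        if gcv < best:
            best, x_best = gcv, x
    return x_best

h = np.array([0.2, 0.6, 0.2])                # stand-in antenna pattern
truth = np.zeros(64); truth[20] = 1.0; truth[40] = 0.7
b = np.convolve(truth, h)[:64] + 0.01 * np.random.randn(64)
x = tsvd_deconvolve(h, b)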
Catching a Ball at the Right Time and Place: Individual Factors Matter
Cesqui, Benedetta; d'Avella, Andrea; Portone, Alessandro; Lacquaniti, Francesco
2012-01-01
Intercepting a moving object requires accurate spatio-temporal control. Several studies have investigated how the CNS copes with such a challenging task, focusing on the nature of the information used to extract target motion parameters and on the identification of general control strategies. In the present study we provide evidence that the right time and place of the collision is not univocally specified by the CNS for a given target motion; instead, different but equally successful solutions can be adopted by different subjects when task constraints are loose. We characterized arm kinematics of fourteen subjects and performed a detailed analysis on a subset of six subjects who showed comparable success rates when asked to catch a flying ball in three dimensional space. Balls were projected by an actuated launching apparatus in order to obtain different arrival flight time and height conditions. Inter-individual variability was observed in several kinematic parameters, such as wrist trajectory, wrist velocity profile, timing and spatial distribution of the impact point, upper limb posture, trunk motion, and submovement decomposition. Individual idiosyncratic behaviors were consistent across different ball flight time conditions and across two experimental sessions carried out at one year distance. These results highlight the importance of a systematic characterization of individual factors in the study of interceptive tasks. PMID:22384072
Cascaded systems analysis of noise and detectability in dual-energy cone-beam CT
Gang, Grace J.; Zbijewski, Wojciech; Webster Stayman, J.; Siewerdsen, Jeffrey H.
2012-01-01
Purpose: Dual-energy computed tomography and dual-energy cone-beam computed tomography (DE-CBCT) are promising modalities for applications ranging from vascular to breast, renal, hepatic, and musculoskeletal imaging. Accordingly, the optimization of imaging techniques for such applications would benefit significantly from a general theoretical description of image quality that properly incorporates factors of acquisition, reconstruction, and tissue decomposition in DE tomography. This work reports a cascaded systems analysis model that includes the Poisson statistics of x rays (quantum noise), detector model (flat-panel detectors), anatomical background, image reconstruction (filtered backprojection), DE decomposition (weighted subtraction), and simple observer models to yield a task-based framework for DE technique optimization. Methods: The theoretical framework extends previous modeling of DE projection radiography and CBCT. Signal and noise transfer characteristics are propagated through physical and mathematical stages of image formation and reconstruction. Dual-energy decomposition was modeled according to weighted subtraction of low- and high-energy images to yield the 3D DE noise-power spectrum (NPS) and noise-equivalent quanta (NEQ), which, in combination with observer models and the imaging task, yields the dual-energy detectability index (d′). Model calculations were validated with NPS and NEQ measurements from an experimental imaging bench simulating the geometry of a dedicated musculoskeletal extremities scanner. Imaging techniques, including kVp pair and dose allocation, were optimized using d′ as an objective function for three example imaging tasks: (1) kidney stone discrimination; (2) iodine vs bone in a uniform, soft-tissue background; and (3) soft tissue tumor detection on power-law anatomical background. Results: Theoretical calculations of DE NPS and NEQ demonstrated good agreement with experimental measurements over a broad range of imaging conditions. Optimization results suggest a lower fraction of total dose imparted by the low-energy acquisition, a finding consistent with previous literature. The selection of optimal kVp pair reveals the combined effect of both quantum noise and contrast in the kidney stone discrimination and soft-tissue tumor detection tasks, whereas the K-edge effect of iodine was the dominant factor in determining kVp pairs in the iodine vs bone task. The soft-tissue tumor task illustrated the benefit of dual-energy imaging in eliminating anatomical background noise and improving detectability beyond that achievable by single-energy scans. Conclusions: This work established a task-based theoretical framework that is predictive of DE image quality. The model can be utilized in optimizing a broad range of parameters in image acquisition, reconstruction, and decomposition, providing a useful tool for maximizing DE-CBCT image quality and reducing dose. PMID:22894440
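The weighted-subtraction decomposition modeled in this framework is a one-liner on log-normalized projections; the weight w trades contrast against the noise terms that the NPS analysis propagates. A minimal sketch with hypothetical arrays and illustrative attenuation values:

import numpy as np

def dual_energy_subtract(I_low, I_high, I0_low, I0_high, w):
    """Weighted log subtraction of low/high-kVp projections."""
    p_low = -np.log(I_low / I0_low)     # line integrals, low energy
    p_high = -np.log(I_high / I0_high)  # line integrals, high energy
    return p_high - w * p_low

# choosing w = mu_high/mu_low of a tissue cancels that tissue in the DE image;
# a detectability-based optimization would instead pick w maximizing d' for the task
mu_soft_low, mu_soft_high = 0.227, 0.207   # illustrative values, cm^-1
w_soft = mu_soft_high / mu_soft_low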
How to Develop an Engineering Design Task
ERIC Educational Resources Information Center
Dankenbring, Chelsey; Capobianco, Brenda M.; Eichinger, David
2014-01-01
In this article, the authors provide an overview of engineering and the engineering design process, and describe the steps they took to develop a fifth grade-level, standards-based engineering design task titled "Getting the Dirt on Decomposition." Their main goal was to focus more on modeling the discrete steps they took to create and…
Anderson, John R; Bothell, Daniel; Fincham, Jon M; Anderson, Abraham R; Poole, Ben; Qin, Yulin
2011-12-01
Part- and whole-task conditions were created by manipulating the presence of certain components of the Space Fortress video game. A cognitive model was created for two-part games that could be combined into a model that performed the whole game. The model generated predictions both for behavioral patterns and activation patterns in various brain regions. The activation predictions concerned both tonic activation that was constant in these regions during performance of the game and phasic activation that occurred when there was resource competition. The model's predictions were confirmed about how tonic and phasic activation in different regions would vary with condition. These results support the Decomposition Hypothesis that the execution of a complex task can be decomposed into a set of information-processing components and that these components combine unchanged in different task conditions. In addition, individual differences in learning gains were predicted by individual differences in phasic activation in those regions that displayed highest tonic activity. This individual difference pattern suggests that the rate of learning of a complex skill is determined by capacity limits.
New evidence favoring multilevel decomposition and optimization
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Polignone, Debra A.
1990-01-01
The issue of the utility of multilevel decomposition and optimization remains controversial. To date, only the structural optimization community has actively developed and promoted multilevel optimization techniques. However, even this community acknowledges that multilevel optimization is ideally suited for a rather limited set of problems. It is warned that decomposition typically requires eliminating local variables by using global variables and that this in turn causes ill-conditioning of the multilevel optimization by adding equality constraints. The purpose is to suggest a new multilevel optimization technique. This technique uses behavior variables, in addition to design variables and constraints, to decompose the problem. The new technique removes the need for equality constraints, simplifies the decomposition of the design problem, simplifies the programming task, and improves the convergence speed of multilevel optimization compared to conventional optimization.
Adjustments differ among low-threshold motor units during intermittent, isometric contractions.
Farina, Dario; Holobar, Ales; Gazzoni, Marco; Zazula, Damjan; Merletti, Roberto; Enoka, Roger M
2009-01-01
We investigated the changes in muscle fiber conduction velocity, recruitment and derecruitment thresholds, and discharge rate of low-threshold motor units during a series of ramp contractions. The aim was to compare the adjustments in motor unit activity relative to the duration that each motor unit was active during the task. Multichannel surface electromyographic (EMG) signals were recorded from the abductor pollicis brevis muscle of eight healthy men during 12-s contractions (n = 25) in which the force increased and decreased linearly from 0 to 10% of the maximum. The maximal force exhibited a modest decline (8.5 +/- 9.3%; P < 0.05) at the end of the task. The discharge times of 73 motor units that were active for 16-98% of the time during the first five contractions were identified throughout the task by decomposition of the EMG signals. Action potential conduction velocity decreased during the task by a greater amount for motor units that were initially active for >70% of the time compared with that of less active motor units. Moreover, recruitment and derecruitment thresholds increased for these most active motor units, whereas the thresholds decreased for the less active motor units. Another 18 motor units were recruited at an average of 171 +/- 32 s after the beginning of the task. The recruitment and derecruitment thresholds of these units decreased during the task, but muscle fiber conduction velocity did not change. These results indicate that low-threshold motor units exhibit individual adjustments in muscle fiber conduction velocity and motor neuron activation that depended on the relative duration of activity during intermittent contractions.
Information-theoretic decomposition of embodied and situated systems.
Da Rold, Federico
2018-07-01
The embodied and situated view of cognition stresses the importance of real-time and nonlinear bodily interaction with the environment for developing concepts and structuring knowledge. In this article, populations of robots controlled by an artificial neural network learn a wall-following task through artificial evolution. At the end of the evolutionary process, time series are recorded from perceptual and motor neurons of selected robots. Information-theoretic measures are estimated on pairings of variables to unveil nonlinear interactions that structure the agent-environment system. Specifically, the mutual information is utilized to quantify the degree of dependence and the transfer entropy to detect the direction of the information flow. Furthermore, the system is analyzed with the local form of such measures, thus capturing the underlying dynamics of information. Results show that different measures are interdependent and complementary in uncovering aspects of the robots' interaction with the environment, as well as characteristics of the functional neural structure. Therefore, the set of information-theoretic measures provides a decomposition of the system, capturing the intricacy of nonlinear relationships that characterize robots' behavior and neural dynamics. Copyright © 2018 Elsevier Ltd. All rights reserved.
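For discretized sensor and motor time series, both measures reduce to joint-entropy bookkeeping. A compact plug-in estimator sketch (the binning and the one-sample histories are simplifications):

import numpy as np
from collections import Counter

def H(*cols):
    """Joint Shannon entropy (bits) of discrete columns."""
    joint = Counter(zip(*cols))
    n = sum(joint.values())
    p = np.array([c / n for c in joint.values()])
    return float(-np.sum(p * np.log2(p)))

def mutual_info(x, y):
    return H(x) + H(y) - H(x, y)

def transfer_entropy(x, y):
    """TE from x to y with one-step histories: how much x's past helps
    predict y beyond y's own past."""
    yt, ypast, xpast = y[1:], y[:-1], x[:-1]
    return H(yt, ypast) - H(ypast) - H(yt, ypast, xpast) + H(ypast, xpast)

rng = np.random.default_rng(0)
x = rng.integers(0, 4, 5000)
y = np.roll(x, 1)   # y copies x with one step of lag
# instantaneous MI is near zero, while TE x->y is large and TE y->x is near zero
print(mutual_info(x, y), transfer_entropy(x, y), transfer_entropy(y, x))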
Hallock, Michael J.; Stone, John E.; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida
2014-01-01
Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems. PMID:24882911
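The load-balancing idea, splitting the lattice among devices in proportion to their measured throughput, can be sketched independently of CUDA (device speeds below are invented):

import numpy as np

def balance_planes(n_planes, throughput):
    """Assign contiguous lattice planes to devices proportional to throughput."""
    share = np.asarray(throughput, float)
    share /= share.sum()
    counts = np.floor(share * n_planes).astype(int)
    for i in np.argsort(-share)[: n_planes - counts.sum()]:
        counts[i] += 1                      # hand out the remainder
    bounds = np.concatenate(([0], np.cumsum(counts)))
    return [(int(bounds[i]), int(bounds[i + 1])) for i in range(len(counts))]

# e.g., one fast GPU paired with two slower ones
print(balance_planes(128, throughput=[2.0, 1.0, 1.0]))
# [(0, 64), (64, 96), (96, 128)]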
Real-Time Inhibitor Recession Measurements in the Space Shuttle Reusable Solid Rocket Motors
NASA Technical Reports Server (NTRS)
McWhorter, Bruce B.; Ewing, Mark E.; McCool, Alex (Technical Monitor)
2001-01-01
Real-time char line recession measurements were made on propellant inhibitors of the Space Shuttle Reusable Solid Rocket Motor (RSRM). The RSRM FSM-8 static test motor propellant inhibitors (composed of a rubber insulation material) were successfully instrumented with eroding potentiometers and thermocouples. The data were used to establish inhibitor recession versus time relationships. Normally, pre-fire and post-fire insulation thickness measurements establish the thermal performance of an ablating insulation material. However, post-fire inhibitor decomposition and recession measurements are complicated by the fact that most of the inhibitor is burned back during motor operation. It is therefore a difficult task to evaluate the thermal protection offered by the inhibitor material, and real-time measurements help with this task. The instrumentation program for this static test motor marks the first time that real-time recession measurements have been made on RSRM inhibitors. This report presents the data for the center and aft field joint forward-facing inhibitors. The data were primarily used to measure char line recession of the forward face of the inhibitors, which provides inhibitor thickness reduction versus time. The data were also used to estimate the inhibitor height versus time relationship during motor operation.
An Approach to Operational Analysis: Doctrinal Task Decomposition
2016-08-04
Once the unit is selected, CATS will output all of the doctrinal collective tasks associated with the unit. Currently, CATS outputs this information... Army unit are controlled data items, but for explanation purposes consider this simple example using a restaurant as the unit of interest. Table 1... shows an example Task Model for a restaurant using language and format similar to what CATS provides. Only 3 levels are shown in the example, but
Gil, Yolanda; Michel, Felix; Ratnakar, Varun; Read, Jordan S.; Hauder, Matheus; Duffy, Christopher; Hanson, Paul C.; Dugan, Hilary
2015-01-01
The Web was originally developed to support collaboration in science. Although scientists benefit from many forms of collaboration on the Web (e.g., blogs, wikis, forums, code sharing, etc.), most collaborative projects are coordinated over email, phone calls, and in-person meetings. Our goal is to develop a collaborative infrastructure for scientists to work on complex science questions that require multi-disciplinary contributions to gather and analyze data, that cannot occur without significant coordination to synthesize findings, and that grow organically to accommodate new contributors as needed as the work evolves over time. Our approach is to develop an organic data science framework that is based on a task-centered organization of the collaboration, incorporates principles from social sciences for successful on-line communities, and exposes an open science process. Our approach is implemented as an extension of a semantic wiki platform, and captures formal representations of task decomposition structures, relations between tasks and users, and other properties of tasks, data, and other relevant science objects. All these entities are captured through the semantic wiki user interface, represented as semantic web objects, and exported as linked data.
The Components of Working Memory Updating: An Experimental Decomposition and Individual Differences
ERIC Educational Resources Information Center
Ecker, Ullrich K. H.; Lewandowsky, Stephan; Oberauer, Klaus; Chee, Abby E. H.
2010-01-01
Working memory updating (WMU) has been identified as a cognitive function of prime importance for everyday tasks and has also been found to be a significant predictor of higher mental abilities. Yet, little is known about the constituent processes of WMU. We suggest that operations required in a typical WMU task can be decomposed into 3 major…
Ludeña-Choez, Jimmy; Quispe-Soncco, Raisa; Gallardo-Antolín, Ascensión
2017-01-01
Feature extraction for Acoustic Bird Species Classification (ABSC) tasks has traditionally been based on parametric representations that were specifically developed for speech signals, such as Mel Frequency Cepstral Coefficients (MFCC). However, the discrimination capabilities of these features for ABSC could be enhanced by accounting for the vocal production mechanisms of birds, and, in particular, the spectro-temporal structure of bird sounds. In this paper, a new front-end for ABSC is proposed that incorporates this specific information through the non-negative decomposition of bird sound spectrograms. It consists of the following two different stages: short-time feature extraction and temporal feature integration. In the first stage, which aims at providing a better spectral representation of bird sounds on a frame-by-frame basis, two methods are evaluated. In the first method, cepstral-like features (NMF_CC) are extracted by using a filter bank that is automatically learned by means of the application of Non-Negative Matrix Factorization (NMF) on bird audio spectrograms. In the second method, the features are directly derived from the activation coefficients of the spectrogram decomposition as performed through NMF (H_CC). The second stage summarizes the most relevant information contained in the short-time features by computing several statistical measures over long segments. The experiments show that the use of NMF_CC and H_CC in conjunction with temporal integration significantly improves the performance of a Support Vector Machine (SVM)-based ABSC system with respect to conventional MFCC.
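As a rough illustration of the NMF_CC idea (a sketch under assumed parameters, not the authors' implementation), one can learn a spectral basis from a magnitude spectrogram with NMF, treat the basis vectors as a data-driven filter bank, and derive cepstral-like coefficients with a log-DCT step:

```python
# Hedged sketch of NMF-learned filter-bank cepstral features (NMF_CC-like).
import numpy as np
from sklearn.decomposition import NMF
from scipy.fft import dct

S = np.random.rand(257, 400)              # stand-in magnitude spectrogram (freq x frames)
model = NMF(n_components=20, init='nndsvda', max_iter=400)
W = model.fit_transform(S)                # learned spectral basis = filter bank
F = W.T @ S                               # filter-bank activations per frame
nmf_cc = dct(np.log(F + 1e-9), axis=0, norm='ortho')[:13]  # cepstral-like coeffs
```

The H_CC variant would instead use the frame activations of the decomposition itself (model.components_ in this orientation) as the short-time features.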
Quispe-Soncco, Raisa
2017-01-01
Feature extraction for Acoustic Bird Species Classification (ABSC) tasks has traditionally been based on parametric representations that were specifically developed for speech signals, such as Mel Frequency Cepstral Coefficients (MFCC). However, the discrimination capabilities of these features for ABSC could be enhanced by accounting for the vocal production mechanisms of birds, and, in particular, the spectro-temporal structure of bird sounds. In this paper, a new front-end for ABSC is proposed that incorporates this specific information through the non-negative decomposition of bird sound spectrograms. It consists of the following two different stages: short-time feature extraction and temporal feature integration. In the first stage, which aims at providing a better spectral representation of bird sounds on a frame-by-frame basis, two methods are evaluated. In the first method, cepstral-like features (NMF_CC) are extracted by using a filter bank that is automatically learned by means of the application of Non-Negative Matrix Factorization (NMF) on bird audio spectrograms. In the second method, the features are directly derived from the activation coefficients of the spectrogram decomposition as performed through NMF (H_CC). The second stage summarizes the most relevant information contained in the short-time features by computing several statistical measures over long segments. The experiments show that the use of NMF_CC and H_CC in conjunction with temporal integration significantly improves the performance of a Support Vector Machine (SVM)-based ABSC system with respect to conventional MFCC. PMID:28628630
Anderson, John R.; Bothell, Daniel; Fincham, Jon M.; Anderson, Abraham R.; Poole, Ben; Qin, Yulin
2013-01-01
Part- and whole-task conditions were created by manipulating the presence of certain components of the Space Fortress video game. A cognitive model was created for two-part games that could be combined into a model that performed the whole game. The model generated predictions both for behavioral patterns and activation patterns in various brain regions. The activation predictions concerned both tonic activation that was constant in these regions during performance of the game and phasic activation that occurred when there was resource competition. The model’s predictions about how tonic and phasic activation in different regions would vary with condition were confirmed. These results support the Decomposition Hypothesis that the execution of a complex task can be decomposed into a set of information-processing components and that these components combine unchanged in different task conditions. In addition, individual differences in learning gains were predicted by individual differences in phasic activation in those regions that displayed highest tonic activity. This individual difference pattern suggests that the rate of learning of a complex skill is determined by capacity limits. PMID:21557648
Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fattebert, J.-L.; Richards, D.F.; Glosli, J.N.
2012-12-01
We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing Voronoi sites associated with each processor/sub-domain along steepest descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440·10^6 particles on 65,536 MPI tasks.
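A conceptual stand-in for such a gradient step can be sketched as follows (the paper uses analytic steepest-descent directions; here the per-domain counts are smoothed with a soft nearest-site assignment, an assumption of this sketch, so that a finite-difference gradient of a load-variance cost is well defined):

```python
# Toy finite-difference descent on a smoothed load-imbalance cost.
import numpy as np

def soft_loads(sites, particles, beta=5.0):
    d2 = ((particles[:, None, :] - sites[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-beta * d2)
    w /= w.sum(axis=1, keepdims=True)   # soft Voronoi membership per particle
    return w.sum(axis=0)                # expected load of each sub-domain

def descent_step(sites, particles, eps=1e-3, lr=0.1):
    base = np.var(soft_loads(sites, particles))
    grad = np.zeros_like(sites)
    for i in range(sites.shape[0]):
        for k in range(sites.shape[1]):
            s = sites.copy()
            s[i, k] += eps
            grad[i, k] = (np.var(soft_loads(s, particles)) - base) / eps
    return sites - lr * grad            # displace sites down the cost gradient
```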
NASA Astrophysics Data System (ADS)
Roverso, Davide
2003-08-01
Many-class learning is the problem of training a classifier to discriminate among a large number of target classes. Together with the problem of dealing with high-dimensional patterns (i.e. a high-dimensional input space), the many-class problem (i.e. a high-dimensional output space) is a major obstacle to be faced when scaling up classifier systems and algorithms from small pilot applications to large full-scale applications. The Autonomous Recursive Task Decomposition (ARTD) algorithm is here proposed as a solution to the problem of many-class learning. Example applications of ARTD to neural classifier training are also presented. In these examples, improvements in training time are shown to range from 4-fold to more than 30-fold in pattern classification tasks of both static and dynamic character.
Research in Parallel Algorithms and Software for Computational Aerosciences
NASA Technical Reports Server (NTRS)
Domel, Neal D.
1996-01-01
Phase I is complete for the development of a Computational Fluid Dynamics parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.
Research in Parallel Algorithms and Software for Computational Aerosciences
NASA Technical Reports Server (NTRS)
Domel, Neal D.
1996-01-01
Phase 1 is complete for the development of a computational fluid dynamics (CFD) parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.
Integrating impairments in reaction time and executive function using a diffusion model framework
Karalunas, Sarah L.; Huang-Pollock, Cynthia L.
2013-01-01
Using Ratcliff’s diffusion model and ex-Gaussian decomposition, we directly evaluate the role individual differences in reaction time (RT) distribution components play in the prediction of inhibitory control and working memory (WM) capacity in children with and without ADHD. Children with (n=92, x̄ age= 10.2 years, 67% male) and without ADHD (n=62, x̄ age=10.6 years, 46% male) completed four tasks of WM and a stop signal reaction time (SSRT) task. Children with ADHD had smaller WM capacities and less efficient inhibitory control. Diffusion model analyses revealed that children with ADHD had slower drift rates (v) and faster non-decision times (Ter), but there were no group differences in boundary separations (a). Similarly, using an ex-Gaussian approach, children with ADHD had larger τ values than non-ADHD controls, but did not differ in µ or σ distribution components. Drift rate mediated the association between ADHD status and performance on both inhibitory control and WM capacity. τ also mediated the ADHD-executive function impairment associations; however, models were a poorer fit to the data. Impaired performance on RT and executive functioning tasks has long been associated with childhood ADHD. Both are believed to be important cognitive mechanisms to the disorder. We demonstrate here that drift rate, or the speed at which information accumulates towards a decision, is able to explain both. PMID:23334775
Analysis of tasks for dynamic man/machine load balancing in advanced helicopters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jorgensen, C.C.
1987-10-01
This report considers task allocation requirements imposed by advanced helicopter designs incorporating mixes of human pilots and intelligent machines. Specifically, it develops an analogy between load balancing using distributed non-homogeneous multiprocessors and human team functions. A taxonomy is presented which can be used to identify task combinations likely to cause overload for dynamic scheduling and process allocation mechanisms. Designer criteria are given for function decomposition, separation of control from data, and communication handling for dynamic tasks. Possible effects of NP-complete scheduling problems are noted and a class of combinatorial optimization methods is examined.
Time series decomposition methods were applied to meteorological and air quality data and their numerical model estimates. Decomposition techniques express a time series as the sum of a small number of independent modes which hypothetically represent identifiable forcings, thereby...
Task planning with uncertainty for robotic systems. Thesis
NASA Technical Reports Server (NTRS)
Cao, Tiehua
1993-01-01
In a practical robotic system, it is important to represent and plan sequences of operations and to be able to choose an efficient sequence from them for a specific task. During the generation and execution of task plans, different kinds of uncertainty may occur and erroneous states need to be handled to ensure the efficiency and reliability of the system. An approach to task representation, planning, and error recovery for robotic systems is demonstrated. Our approach to task planning is based on an AND/OR net representation, which is then mapped to a Petri net representation of all feasible geometric states and associated feasibility criteria for net transitions. Task decomposition of robotic assembly plans based on this representation is performed on the Petri net for robotic assembly tasks, and the inheritance of properties of liveness, safeness, and reversibility at all levels of decomposition is explored. This approach provides a framework for robust execution of tasks through the properties of traceability and viability. Uncertainty in robotic systems is modeled by local fuzzy variables, fuzzy marking variables, and global fuzzy variables which are incorporated in fuzzy Petri nets. Analysis of properties and reasoning about uncertainty are investigated using fuzzy reasoning structures built into the net. Two applications of fuzzy Petri nets, robot task sequence planning and sensor-based error recovery, are explored. In the first application, the search space for feasible and complete task sequences with correct precedence relationships is reduced via the use of global fuzzy variables in reasoning about subgoals. In the second application, sensory verification operations are modeled by mutually exclusive transitions to reason about local and global fuzzy variables on-line and automatically select a retry or an alternative error recovery sequence when errors occur. Task sequencing and task execution with error recovery capability for one and multiple soft components in robotic systems are investigated.
De Sá Teixeira, Nuno Alexandre
2014-12-01
Given its conspicuous nature, gravity has been acknowledged by several research lines as a prime factor in structuring the spatial perception of one's environment. One such line of enquiry has focused on errors in spatial localization aimed at the vanishing location of moving objects - it has been systematically reported that humans mislocalize spatial positions forward, in the direction of motion (representational momentum) and downward in the direction of gravity (representational gravity). Moreover, spatial localization errors were found to evolve dynamically with time in a pattern congruent with an anticipated trajectory (representational trajectory). The present study attempts to ascertain the degree to which vestibular information plays a role in these phenomena. Human observers performed a spatial localization task while tilted to varying degrees and referring to the vanishing locations of targets moving along several directions. A Fourier decomposition of the obtained spatial localization errors revealed that although spatial errors were increased "downward" mainly along the body's longitudinal axis (idiotropic dominance), the degree of misalignment between the latter and physical gravity modulated the time course of the localization responses. This pattern is surmised to reflect increased uncertainty about the internal model when faced with conflicting cues regarding the perceived "downward" direction.
An optimization approach for fitting canonical tensor decompositions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson
Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
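For orientation, a single ALS sweep for a third-order CPD can be sketched in a few lines of numpy (a generic textbook version, not the authors' code); a gradient-based method differentiates the same least-squares objective instead of solving the three subproblems in alternation:

```python
# Minimal CP-ALS sweep for a 3-way tensor X ~ [[A, B, C]].
import numpy as np

def khatri_rao(A, B):
    # column-wise Kronecker product, rows ordered (i slow, j fast)
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als_sweep(X, A, B, C):
    X0 = X.reshape(X.shape[0], -1)                     # mode-0 unfolding
    X1 = np.moveaxis(X, 1, 0).reshape(X.shape[1], -1)  # mode-1 unfolding
    X2 = np.moveaxis(X, 2, 0).reshape(X.shape[2], -1)  # mode-2 unfolding
    A = X0 @ np.linalg.pinv(khatri_rao(B, C).T)
    B = X1 @ np.linalg.pinv(khatri_rao(A, C).T)
    C = X2 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

I, J, K, R = 10, 8, 6, 3
X = np.random.rand(I, J, K)
A, B, C = (np.random.rand(d, R) for d in (I, J, K))
for _ in range(50):
    A, B, C = cp_als_sweep(X, A, B, C)
```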
Morphological learning in a novel language: A cross-language comparison.
Havas, Viktória; Waris, Otto; Vaquero, Lucía; Rodríguez-Fornells, Antoni; Laine, Matti
2015-01-01
Being able to extract and interpret the internal structure of complex word forms such as the English word dance+r+s is crucial for successful language learning. We examined whether the ability to extract morphological information during word learning is affected by the morphological features of one's native tongue. Spanish and Finnish adult participants performed a word-picture associative learning task in an artificial language where the target words included a suffix marking the gender of the corresponding animate object. The short exposure phase was followed by a word recognition task and a generalization task for the suffix. The participants' native tongues vary greatly in terms of morphological structure, leading to two opposing hypotheses. On the one hand, Spanish speakers may be more effective in identifying gender in a novel language because this feature is present in Spanish but not in Finnish. On the other hand, Finnish speakers may have an advantage as the abundance of bound morphemes in their language calls for continuous morphological decomposition. The results support the latter alternative, suggesting that lifelong experience on morphological decomposition provides an advantage in novel morphological learning.
Zhou, Xuhui; Xu, Xia; Zhou, Guiyao; Luo, Yiqi
2018-02-01
Temperature sensitivity of soil organic carbon (SOC) decomposition is one of the major uncertainties in predicting climate-carbon (C) cycle feedback. Results from previous studies are highly contradictory, with old soil C decomposition being more, similarly, or less sensitive to temperature than decomposition of young fractions. The contradictory results stem partly from difficulties in distinguishing old from young SOC and their changes over time in the experiments with or without isotopic techniques. In this study, we have conducted a long-term field incubation experiment with deep soil collars (0-70 cm in depth, 10 cm in diameter of PVC tubes) for excluding root C input to examine apparent temperature sensitivity of SOC decomposition under ambient and warming treatments from 2002 to 2008. The data from the experiment were infused into a multi-pool soil C model to estimate intrinsic temperature sensitivity of SOC decomposition and C residence times of three SOC fractions (i.e., active, slow, and passive) using a data assimilation (DA) technique. As active SOC with the short C residence time was progressively depleted in the deep soil collars under both ambient and warming treatments, the residence times of the whole SOC became longer over time. Concomitantly, the estimated apparent and intrinsic temperature sensitivity of SOC decomposition also became gradually higher over time as more than 50% of active SOC was depleted. Thus, the temperature sensitivity of soil C decomposition in deep soil collars was positively correlated with the mean C residence times. However, the regression slope of the temperature sensitivity against the residence time was lower under the warming treatment than under ambient temperature, indicating that other processes also regulated temperature sensitivity of SOC decomposition. These results indicate that old SOC decomposition is more sensitive to temperature than young components, making the old C more vulnerable to future warmer climate. © 2017 John Wiley & Sons Ltd.
Dang, Nhan C; Dreger, Zbigniew A; Gupta, Yogendra M; Hooks, Daniel E
2010-11-04
Plate impact experiments on the (210), (100), and (111) planes were performed to examine the role of crystalline anisotropy on the shock-induced decomposition of cyclotrimethylenetrinitramine (RDX) crystals. Time-resolved emission spectroscopy was used to probe the decomposition of single crystals shocked to peak stresses ranging between 7 and 20 GPa. Emission produced by decomposition intermediates was analyzed in terms of induction time to emission, emission intensity, and the emission spectra shapes as a function of stress and time. Utilizing these features, we found that the shock-induced decomposition of RDX crystals exhibits considerable anisotropy. Crystals shocked on the (210) and (100) planes were more sensitive to decomposition than crystals shocked on the (111) plane. The possible sources of the observed anisotropy are discussed with regard to the inelastic deformation mechanisms of shocked RDX. Our results suggest that, despite the anisotropy observed for shock initiation, decomposition pathways for all three orientations are similar.
Time-frequency analysis : mathematical analysis of the empirical mode decomposition.
DOT National Transportation Integrated Search
2009-01-01
Invented over 10 years ago, empirical mode decomposition (EMD) provides a nonlinear time-frequency analysis with the ability to successfully analyze nonstationary signals. Mathematical Analysis of the Empirical Mode Decomposition is a...
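The core of EMD is the sifting step: subtract the mean of the cubic-spline envelopes drawn through the local maxima and minima. A bare-bones sketch (ignoring the end-effect handling and stopping criteria that practical EMD implementations require):

```python
# One sifting step of empirical mode decomposition (illustrative only).
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(t, x):
    imax = argrelextrema(x, np.greater)[0]
    imin = argrelextrema(x, np.less)[0]
    upper = CubicSpline(t[imax], x[imax])(t)   # upper envelope
    lower = CubicSpline(t[imin], x[imin])(t)   # lower envelope
    return x - (upper + lower) / 2             # candidate IMF after one sift

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
h = sift_once(t, x)   # iterate until the envelope mean is ~0 to obtain an IMF
```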
NASA Astrophysics Data System (ADS)
Gao, Shibo; Cheng, Yongmei; Song, Chunhua
2013-09-01
Vision-based probe-and-drogue autonomous aerial refueling is a demanding task in modern aviation for both manned and unmanned aircraft. A key issue is to determine the relative orientation and position of the drogue and the probe accurately for the relative navigation system during the approach phase, which requires locating the drogue precisely. Drogue detection is a challenging task due to the disorderly motion of the drogue caused by both the tanker wake vortex and atmospheric turbulence. In this paper, the problem of drogue detection is considered as a problem of moving object detection. A drogue detection algorithm based on low rank and sparse decomposition with local multiple features is proposed. The global and local information of the drogue is introduced into the detection model in a unified way. The experimental results on real autonomous aerial refueling videos show that the proposed drogue detection algorithm is effective.
Reactivity continuum modeling of leaf, root, and wood decomposition across biomes
NASA Astrophysics Data System (ADS)
Koehler, Birgit; Tranvik, Lars J.
2015-07-01
Large carbon dioxide amounts are released to the atmosphere during organic matter decomposition. Yet the large-scale and long-term regulation of this critical process in global carbon cycling by litter chemistry and climate remains poorly understood. We used reactivity continuum (RC) modeling to analyze the decadal data set of the "Long-term Intersite Decomposition Experiment," in which fine litter and wood decomposition was studied in eight biome types (224 time series). In 32 and 46% of all sites the litter content of the acid-unhydrolyzable residue (AUR, formerly referred to as lignin) and the AUR/nitrogen ratio, respectively, retarded initial decomposition rates. This initial rate-retarding effect generally disappeared within the first year of decomposition, and rate-stimulating effects of nutrients and a rate-retarding effect of the carbon/nitrogen ratio became more prevalent. For needles and leaves/grasses, the influence of climate on decomposition decreased over time. For fine roots, the climatic influence was initially smaller but increased toward later-stage decomposition. The climate decomposition index was the strongest climatic predictor of decomposition. The similar variability in initial decomposition rates across litter categories as across biome types suggested that future changes in decomposition may be dominated by warming-induced changes in plant community composition. In general, the RC model parameters successfully predicted independent decomposition data for the different litter-biome combinations (196 time series). We argue that parameterization of large-scale decomposition models with RC model parameters, as opposed to the currently common discrete multiexponential models, could significantly improve their mechanistic foundation and predictive accuracy across climate zones and litter categories.
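In RC modeling, the initial reactivity k is treated as a continuous distribution rather than a few discrete pools; with a Gamma(α, β) distribution the remaining mass has the closed form m(t)/m(0) = (β/(β+t))^α. A minimal fitting sketch on synthetic data (the parameter values below are invented, not taken from the study):

```python
# Fitting the gamma-based reactivity continuum model to mass-loss data.
import numpy as np
from scipy.optimize import curve_fit

def rc_model(t, alpha, beta):
    return (beta / (beta + t)) ** alpha   # remaining mass fraction

t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])                   # years
m = rc_model(t, 1.2, 2.0) + np.random.normal(0, 0.01, t.size)  # fake litterbag data
(alpha, beta), _ = curve_fit(rc_model, t, m, p0=(1.0, 1.0))
```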
Robust-mode analysis of hydrodynamic flows
NASA Astrophysics Data System (ADS)
Roy, Sukesh; Gord, James R.; Hua, Jia-Chen; Gunaratne, Gemunu H.
2017-04-01
The emergence of techniques to extract high-frequency high-resolution data introduces a new avenue for modal decomposition to assess the underlying dynamics, especially of complex flows. However, this task requires the differentiation of robust, repeatable flow constituents from noise and other irregular features of a flow. Traditional approaches involving low-pass filtering and principal component analysis have shortcomings. The approach outlined here, referred to as robust-mode analysis, is based on Koopman decomposition. Three applications to (a) a counter-rotating cellular flame state, (b) variations in financial markets, and (c) turbulent injector flows are provided.
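Koopman-based mode extraction is commonly approximated in practice by dynamic mode decomposition (DMD). A compact exact-DMD sketch (the generic algorithm, not the authors' robust-mode variant, which additionally screens modes for repeatability):

```python
# Exact dynamic mode decomposition of a snapshot matrix X (state x time).
import numpy as np

def dmd(X, r):
    X1, X2 = X[:, :-1], X[:, 1:]                 # successive snapshot pairs
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]           # rank-r truncation
    Atilde = U.conj().T @ X2 @ Vh.conj().T / s   # projected linear operator
    evals, W = np.linalg.eig(Atilde)             # eigenvalues ~ frequencies/growth rates
    modes = X2 @ Vh.conj().T / s @ W             # spatial DMD modes
    return evals, modes
```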
Multiple multicontrol unitary operations: Implementation and applications
NASA Astrophysics Data System (ADS)
Lin, Qing
2018-04-01
The efficient implementation of computational tasks is critical to quantum computations. In quantum circuits, multicontrol unitary operations are important components. Here, we present an extremely efficient and direct approach to multiple multicontrol unitary operations without decomposition to CNOT and single-photon gates. With the proposed approach, the necessary two-photon operations could be reduced from O(n^3) with the traditional decomposition approach to O(n), which will greatly relax the requirements and make large-scale quantum computation feasible. Moreover, we propose the potential application to the (n-k)-uniform hypergraph state.
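For intuition (a toy illustration, not the paper's photonic implementation): in the computational basis, an n-control unitary is simply the identity with U in the bottom-right 2×2 block, applied only when all controls are |1⟩. The cost discussed above concerns realizing this matrix with physical two-photon operations, not writing it down:

```python
# Matrix form of a multicontrol-U gate on (n_controls + 1) qubits.
import numpy as np

def multicontrol_u(n_controls, U):
    dim = 2 ** (n_controls + 1)
    M = np.eye(dim, dtype=complex)
    M[-2:, -2:] = U          # act with U only on the all-controls-set subspace
    return M

X = np.array([[0, 1], [1, 0]])           # Pauli-X; two controls give a Toffoli
toffoli = multicontrol_u(2, X)
print(np.allclose(toffoli @ toffoli, np.eye(8)))   # X is self-inverse -> True
```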
Yao, Shengnan; Zeng, Weiming; Wang, Nizhuan; Chen, Lei
2013-07-01
Independent component analysis (ICA) has been proven to be effective for functional magnetic resonance imaging (fMRI) data analysis. However, ICA decomposition requires to optimize the unmixing matrix iteratively whose initial values are generated randomly. Thus the randomness of the initialization leads to different ICA decomposition results. Therefore, just one-time decomposition for fMRI data analysis is not usually reliable. Under this circumstance, several methods about repeated decompositions with ICA (RDICA) were proposed to reveal the stability of ICA decomposition. Although utilizing RDICA has achieved satisfying results in validating the performance of ICA decomposition, RDICA cost much computing time. To mitigate the problem, in this paper, we propose a method, named ATGP-ICA, to do the fMRI data analysis. This method generates fixed initial values with automatic target generation process (ATGP) instead of being produced randomly. We performed experimental tests on both hybrid data and fMRI data to indicate the effectiveness of the new method and made a performance comparison of the traditional one-time decomposition with ICA (ODICA), RDICA and ATGP-ICA. The proposed method demonstrated that it not only could eliminate the randomness of ICA decomposition, but also could save much computing time compared to RDICA. Furthermore, the ROC (Receiver Operating Characteristic) power analysis also denoted the better signal reconstruction performance of ATGP-ICA than that of RDICA. Copyright © 2013 Elsevier Inc. All rights reserved.
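ATGP itself is simple to sketch: repeatedly pick the observation with the largest norm in the subspace orthogonal to the targets already found. A hedged numpy version (not the authors' code) whose output could seed the ICA unmixing matrix deterministically:

```python
# Automatic target generation process (ATGP) via orthogonal-subspace projection.
import numpy as np

def atgp(X, n_targets):
    """X: observations in rows; returns n_targets extreme observations."""
    n_features = X.shape[1]
    targets = []
    P = np.eye(n_features)                         # projector onto residual subspace
    for _ in range(n_targets):
        proj = X @ P
        t = X[np.argmax((proj ** 2).sum(axis=1))]  # most extreme observation
        targets.append(t)
        U = np.column_stack(targets)
        P = np.eye(n_features) - U @ np.linalg.pinv(U)  # project targets out
    return np.array(targets)
```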
Dynamic Load Balancing Based on Constrained K-D Tree Decomposition for Parallel Particle Tracing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiang; Guo, Hanqi; Yuan, Xiaoru
Particle tracing is a fundamental technique in flow field data visualization. In this work, we present a novel dynamic load balancing method for parallel particle tracing. Specifically, we employ a constrained k-d tree decomposition approach to dynamically redistribute tasks among processes. Each process is initially assigned a regularly partitioned block along with a duplicated ghost layer under the memory limit. During particle tracing, the k-d tree decomposition is dynamically performed by constraining the cutting planes in the overlap range of duplicated data. This ensures that each process is reassigned particles as evenly as possible, and on the other hand the newly assigned particles for a process always locate in its block. Results show good load balance and high efficiency of our method.
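The distinguishing constraint is that each cutting plane must stay within the ghost-layer overlap, so reassigned particles remain inside a process's resident data. One constrained median cut can be sketched as follows (recursion over axes and the memory-limit bookkeeping are omitted):

```python
# One cutting step of a constrained k-d decomposition (illustrative sketch).
import numpy as np

def constrained_cut(points, axis, cut_min, cut_max):
    """Median split along `axis`, clamped to the allowed overlap range
    [cut_min, cut_max] covered by duplicated ghost data."""
    cut = float(np.clip(np.median(points[:, axis]), cut_min, cut_max))
    left = points[points[:, axis] <= cut]
    right = points[points[:, axis] > cut]
    return left, right, cut   # recurse on each half, cycling the axis
```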
Distributed Task Offloading in Heterogeneous Vehicular Crowd Sensing
Liu, Yazhi; Wang, Wendong; Ma, Yuekun; Yang, Zhigang; Yu, Fuxing
2016-01-01
The ability of road vehicles to efficiently execute different sensing tasks varies because of the heterogeneity in their sensing ability and trajectories. Therefore, the data collection sensing task, which requires tempo-spatial sensing data, becomes a serious problem in vehicular sensing systems, particularly those with limited sensing capabilities. A utility-based sensing task decomposition and offloading algorithm is proposed in this paper. The utility function for a task executed by a certain vehicle is built according to the mobility traces and sensing interfaces of the vehicle, as well as the sensing data type and tempo-spatial coverage requirements of the sensing task. Then, the sensing tasks are decomposed and offloaded to neighboring vehicles according to the utilities of the neighboring vehicles to the decomposed sensing tasks. Real trace-driven simulation shows that the proposed task offloading is able to collect much more comprehensive and uniformly distributed sensing data than other algorithms. PMID:27428967
NASA Astrophysics Data System (ADS)
Li, Miao; Lin, Zaiping; Long, Yunli; An, Wei; Zhou, Yiyu
2016-05-01
The high variability of target size makes small target detection in Infrared Search and Track (IRST) a challenging task. A joint detection and tracking method based on block-wise sparse decomposition is proposed to address this problem. For detection, the infrared image is divided into overlapped blocks, and each block is weighted on the local image complexity and target existence probabilities. Target-background decomposition is solved by block-wise inexact augmented Lagrange multipliers. For tracking, a labeled multi-Bernoulli (LMB) tracker tracks multiple targets taking the result of single-frame detection as input, and provides corresponding target existence probabilities for detection. Unlike fixed-size methods, the proposed method can accommodate size-varying targets, since it makes no special assumptions about the size and shape of small targets. Because of exact decomposition, classical target measurements are extended and additional direction information is provided to improve tracking performance. The experimental results show that the proposed method can effectively suppress background clutters, and detect and track size-varying targets in infrared images.
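Generic low-rank-plus-sparse separation conveys the flavor of such detection models: the slowly varying background is low rank while small targets are sparse. The crude alternating-shrinkage sketch below is a stand-in for the paper's block-wise weighted decomposition solved with inexact augmented Lagrange multipliers; the parameter defaults are made up:

```python
# Simple alternating shrinkage for D ~ L (low-rank background) + S (sparse targets).
import numpy as np

def lowrank_sparse(D, lam=None, mu=None, iters=100):
    m, n = D.shape
    lam = lam if lam is not None else 1 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * np.abs(D).mean()
    S = np.zeros_like(D)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U * np.maximum(s - mu, 0)) @ Vt                          # singular-value thresholding
        S = np.sign(D - L) * np.maximum(np.abs(D - L) - lam * mu, 0)  # soft thresholding
    return L, S
```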
Rotational-path decomposition based recursive planning for spacecraft attitude reorientation
NASA Astrophysics Data System (ADS)
Xu, Rui; Wang, Hui; Xu, Wenming; Cui, Pingyuan; Zhu, Shengying
2018-02-01
Spacecraft reorientation is a common task in many space missions. With multiple pointing constraints, the constrained spacecraft reorientation planning problem is very difficult to solve. To deal with this problem, an efficient rotational-path decomposition based recursive planning (RDRP) method is proposed in this paper. The uniform pointing-constraint-ignored attitude rotation planning process is designed to solve all rotations without considering pointing constraints. Then the whole path is checked node by node. If any pointing constraint is violated, the nearest critical increment approach will be used to generate feasible alternative nodes in the process of rotational-path decomposition. As the planning path of each subdivision may still violate pointing constraints, multiple decomposition is needed and the reorientation planning is designed in a recursive manner. Simulation results demonstrate the effectiveness of the proposed method. The proposed method has been successfully applied to solve the onboard constrained attitude reorientation planning problem in two SPARK microsatellites, which were developed by the Shanghai Engineering Center for Microsatellites and launched on 22 December 2016.
Cortical subnetwork dynamics during human language tasks.
Collard, Maxwell J; Fifer, Matthew S; Benz, Heather L; McMullen, David P; Wang, Yujing; Milsap, Griffin W; Korzeniewska, Anna; Crone, Nathan E
2016-07-15
Language tasks require the coordinated activation of multiple subnetworks-groups of related cortical interactions involved in specific components of task processing. Although electrocorticography (ECoG) has sufficient temporal and spatial resolution to capture the dynamics of event-related interactions between cortical sites, it is difficult to decompose these complex spatiotemporal patterns into functionally discrete subnetworks without explicit knowledge of each subnetwork's timing. We hypothesized that subnetworks corresponding to distinct components of task-related processing could be identified as groups of interactions with co-varying strengths. In this study, five subjects implanted with ECoG grids over language areas performed word repetition and picture naming. We estimated the interaction strength between each pair of electrodes during each task using a time-varying dynamic Bayesian network (tvDBN) model constructed from the power of high gamma (70-110Hz) activity, a surrogate for population firing rates. We then reduced the dimensionality of this model using principal component analysis (PCA) to identify groups of interactions with co-varying strengths, which we term functional network components (FNCs). This data-driven technique estimates both the weight of each interaction's contribution to a particular subnetwork, and the temporal profile of each subnetwork's activation during the task. We found FNCs with temporal and anatomical features consistent with articulatory preparation in both tasks, and with auditory and visual processing in the word repetition and picture naming tasks, respectively. These FNCs were highly consistent between subjects with similar electrode placement, and were robust enough to be characterized in single trials. Furthermore, the interaction patterns uncovered by FNC analysis correlated well with recent literature suggesting important functional-anatomical distinctions between processing external and self-produced speech. Our results demonstrate that subnetwork decomposition of event-related cortical interactions is a powerful paradigm for interpreting the rich dynamics of large-scale, distributed cortical networks during human cognitive tasks. Copyright © 2016 Elsevier Inc. All rights reserved.
Spatial, temporal, and hybrid decompositions for large-scale vehicle routing with time windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, Russell W
This paper studies the use of decomposition techniques to quickly find high-quality solutions to large-scale vehicle routing problems with time windows. It considers an adaptive decomposition scheme which iteratively decouples a routing problem based on the current solution. Earlier work considered vehicle-based decompositions that partition the vehicles across the subproblems. The subproblems can then be optimized independently and merged easily. This paper argues that vehicle-based decompositions, although very effective on various problem classes, also have limitations. In particular, they do not accommodate temporal decompositions and may produce spatial decompositions that are not focused enough. This paper then proposes customer-based decompositions, which generalize vehicle-based decouplings and allow for focused spatial and temporal decompositions. Experimental results on class R2 of the extended Solomon benchmarks demonstrate the benefits of the customer-based adaptive decomposition scheme and its spatial, temporal, and hybrid instantiations. In particular, they show that customer-based decompositions bring significant benefits over large neighborhood search in contrast to vehicle-based decompositions.
Task Decomposition Model for Dispatchers in Dynamic Scheduling of Demand Responsive Transit Systems
DOT National Transportation Integrated Search
2000-06-01
Since the passage of ADA, the demand for paratransit service is steadily increasing. Paratransit companies are relying on computer automation to streamline dispatch operations, increase productivity and reduce operator stress and error. Little research...
Time-frequency analysis of band-limited EEG with BMFLC and Kalman filter for BCI applications
2013-01-01
Background: Time-frequency analysis of electroencephalogram (EEG) during different mental tasks has received significant attention. As EEG is non-stationary, time-frequency analysis is essential to analyze brain states during different mental tasks. Further, the time-frequency information of EEG signal can be used as a feature for classification in brain-computer interface (BCI) applications. Methods: To accurately model the EEG, band-limited multiple Fourier linear combiner (BMFLC), a linear combination of truncated multiple Fourier series models is employed. A state-space model for BMFLC in combination with Kalman filter/smoother is developed to obtain accurate adaptive estimation. By virtue of construction, BMFLC with Kalman filter/smoother provides accurate time-frequency decomposition of the bandlimited signal. Results: The proposed method is computationally fast and is suitable for real-time BCI applications. To evaluate the proposed algorithm, a comparison with short-time Fourier transform (STFT) and continuous wavelet transform (CWT) for both synthesized and real EEG data is performed in this paper. The proposed method is applied to BCI Competition data IV for ERD detection in comparison with existing methods. Conclusions: Results show that the proposed algorithm can provide optimal time-frequency resolution as compared to STFT and CWT. For ERD detection, BMFLC-KF outperforms STFT and BMFLC-KS in real-time applicability with low computational requirement. PMID:24274109
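A minimal BMFLC-with-Kalman-filter sketch (the frequency grid and noise levels below are illustrative assumptions, not the paper's settings) tracks the Fourier weights of a fixed band over time and reads out a time-frequency amplitude map:

```python
# BMFLC state = Fourier weights on a fixed frequency grid, tracked by a Kalman filter.
import numpy as np

def bmflc_kalman(y, fs, f_grid, q=1e-4, r=1e-2):
    n = len(f_grid)
    x = np.zeros(2 * n)                  # [sin weights, cos weights]
    P = np.eye(2 * n)
    amps = []
    for k, yk in enumerate(y):
        t = k / fs
        H = np.concatenate([np.sin(2 * np.pi * f_grid * t),
                            np.cos(2 * np.pi * f_grid * t)])
        P = P + q * np.eye(2 * n)        # random-walk prediction step
        K = P @ H / (H @ P @ H + r)      # Kalman gain
        x = x + K * (yk - H @ x)         # weight update
        P = P - np.outer(K, H) @ P
        amps.append(np.hypot(x[:n], x[n:]))   # per-frequency amplitude
    return np.array(amps)                # time-frequency map (time x frequency)

fs = 256.0
f_grid = np.arange(8.0, 13.0, 0.5)       # e.g. the alpha band
y = np.sin(2 * np.pi * 10 * np.arange(512) / fs)
tf_map = bmflc_kalman(y, fs, f_grid)
```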
Integrating impairments in reaction time and executive function using a diffusion model framework.
Karalunas, Sarah L; Huang-Pollock, Cynthia L
2013-07-01
Using Ratcliff's diffusion model and ex-Gaussian decomposition, we directly evaluate the role individual differences in reaction time (RT) distribution components play in the prediction of inhibitory control and working memory (WM) capacity in children with and without ADHD. Children with (n = 91, x̄ age = 10.2 years, 67 % male) and without ADHD (n = 62, x̄ age = 10.6 years, 46 % male) completed four tasks of WM and a stop signal reaction time (SSRT) task. Children with ADHD had smaller WM capacities and less efficient inhibitory control. Diffusion model analyses revealed that children with ADHD had slower drift rates (v) and faster non-decision times (Ter), but there were no group differences in boundary separations (a). Similarly, using an ex-Gaussian approach, children with ADHD had larger τ values than non-ADHD controls, but did not differ in μ or σ distribution components. Drift rate mediated the association between ADHD status and performance on both inhibitory control and WM capacity. τ also mediated the ADHD-executive function impairment associations; however, models were a poorer fit to the data. Impaired performance on RT and executive functioning tasks has long been associated with childhood ADHD. Both are believed to be important cognitive mechanisms to the disorder. We demonstrate here that drift rate, or the speed at which information accumulates towards a decision, is able to explain both.
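An ex-Gaussian decomposition of an RT distribution is easy to sketch with scipy's exponnorm parameterization (shape K = τ/σ); the sample below is synthetic, not the study's data:

```python
# Maximum-likelihood ex-Gaussian fit of reaction times (mu, sigma, tau).
import numpy as np
from scipy.stats import exponnorm

rng = np.random.default_rng(0)
rts = exponnorm.rvs(K=2.0, loc=0.45, scale=0.08, size=500, random_state=rng)
K, mu, sigma = exponnorm.fit(rts)   # returns (shape, loc, scale)
tau = K * sigma                     # exponential component of the RT tail
print(f"mu={mu:.3f}s sigma={sigma:.3f}s tau={tau:.3f}s")
```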
Berns, G S; Song, A W; Mao, H
1999-07-15
Linear experimental designs have dominated the field of functional neuroimaging, but although successful at mapping regions of relative brain activation, the technique assumes that both cognition and brain activation are linear processes. To test these assumptions, we performed a continuous functional magnetic resonance imaging (MRI) experiment of finger opposition. Subjects performed a visually paced bimanual finger-tapping task. The frequency of finger tapping was continuously varied between 1 and 5 Hz, without any rest blocks. After continuous acquisition of fMRI images, the task-related brain regions were identified with independent components analysis (ICA). When the time courses of the task-related components were plotted against tapping frequency, nonlinear "dose- response" curves were obtained for most subjects. Nonlinearities appeared in both the static and dynamic sense, with hysteresis being prominent in several subjects. The ICA decomposition also demonstrated the spatial dynamics with different components active at different times. These results suggest that the brain response to tapping frequency does not scale linearly, and that it is history-dependent even after accounting for the hemodynamic response function. This implies that finger tapping, as measured with fMRI, is a nonstationary process. When analyzed with a conventional general linear model, a strong correlation to tapping frequency was identified, but the spatiotemporal dynamics were not apparent.
Decomposition Odour Profiling in the Air and Soil Surrounding Vertebrate Carrion
2014-01-01
Chemical profiling of decomposition odour is conducted in the environmental sciences to detect malodourous target sources in air, water or soil. More recently decomposition odour profiling has been employed in the forensic sciences to generate a profile of the volatile organic compounds (VOCs) produced by decomposed remains. The chemical profile of decomposition odour is still being debated with variations in the VOC profile attributed to the sample collection technique, method of chemical analysis, and environment in which decomposition occurred. To date, little consideration has been given to the partitioning of odour between different matrices and the impact this has on developing an accurate VOC profile. The purpose of this research was to investigate the decomposition odour profile surrounding vertebrate carrion to determine how VOCs partition between soil and air. Four pig carcasses (Sus scrofa domesticus L.) were placed on a soil surface to decompose naturally and their odour profile monitored over a period of two months. Corresponding control sites were also monitored to determine the VOC profile of the surrounding environment. Samples were collected from the soil below and the air (headspace) above the decomposed remains using sorbent tubes and analysed using gas chromatography-mass spectrometry. A total of 249 compounds were identified but only 58 compounds were common to both air and soil samples. This study has demonstrated that soil and air samples produce distinct subsets of VOCs that contribute to the overall decomposition odour. Sample collection from only one matrix will reduce the likelihood of detecting the complete spectrum of VOCs, which further confounds the issue of determining a complete and accurate decomposition odour profile. Confirmation of this profile will enhance the performance of cadaver-detection dogs that are tasked with detecting decomposition odour in both soil and air to locate victim remains. PMID:24740412
Decomposition odour profiling in the air and soil surrounding vertebrate carrion.
Forbes, Shari L; Perrault, Katelynn A
2014-01-01
Chemical profiling of decomposition odour is conducted in the environmental sciences to detect malodourous target sources in air, water or soil. More recently decomposition odour profiling has been employed in the forensic sciences to generate a profile of the volatile organic compounds (VOCs) produced by decomposed remains. The chemical profile of decomposition odour is still being debated with variations in the VOC profile attributed to the sample collection technique, method of chemical analysis, and environment in which decomposition occurred. To date, little consideration has been given to the partitioning of odour between different matrices and the impact this has on developing an accurate VOC profile. The purpose of this research was to investigate the decomposition odour profile surrounding vertebrate carrion to determine how VOCs partition between soil and air. Four pig carcasses (Sus scrofa domesticus L.) were placed on a soil surface to decompose naturally and their odour profile monitored over a period of two months. Corresponding control sites were also monitored to determine the VOC profile of the surrounding environment. Samples were collected from the soil below and the air (headspace) above the decomposed remains using sorbent tubes and analysed using gas chromatography-mass spectrometry. A total of 249 compounds were identified but only 58 compounds were common to both air and soil samples. This study has demonstrated that soil and air samples produce distinct subsets of VOCs that contribute to the overall decomposition odour. Sample collection from only one matrix will reduce the likelihood of detecting the complete spectrum of VOCs, which further confounds the issue of determining a complete and accurate decomposition odour profile. Confirmation of this profile will enhance the performance of cadaver-detection dogs that are tasked with detecting decomposition odour in both soil and air to locate victim remains.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faby, Sebastian; Maier, Joscha; Sawall, Stefan
2016-07-15
Purpose: To introduce and evaluate an increment matrix approach (IMA) describing the signal statistics of energy-selective photon counting detectors including spatial–spectral correlations between energy bins of neighboring detector pixels. The importance of the occurring correlations for image-based material decomposition is studied. Methods: An IMA describing the counter increase patterns in a photon counting detector is proposed. This IMA has the potential to decrease the number of required random numbers compared to Monte Carlo simulations by pursuing an approach based on convolutions. To validate and demonstrate the IMA, an approximate semirealistic detector model is provided, simulating a photon counting detector in a simplified manner, e.g., by neglecting count rate-dependent effects. In this way, the spatial–spectral correlations on the detector level are obtained and fed into the IMA. The importance of these correlations in reconstructed energy bin images and the corresponding detector performance in image-based material decomposition is evaluated using a statistically optimal decomposition algorithm. Results: The results of IMA together with the semirealistic detector model were compared to other models and measurements using the spectral response and the energy bin sensitivity, finding a good agreement. Correlations between the different reconstructed energy bin images could be observed, and turned out to be of weak nature. These correlations were found to be not relevant in image-based material decomposition. An even simpler simulation procedure based on the energy bin sensitivity was tested instead and yielded similar results for the image-based material decomposition task, as long as the fact that one incident photon can increase multiple counters across neighboring detector pixels is taken into account. Conclusions: The IMA is computationally efficient as it required about 10^2 random numbers per ray incident on a detector pixel instead of an estimated 10^8 random numbers per ray as Monte Carlo approaches would need. The spatial–spectral correlations as described by IMA are not important for the studied image-based material decomposition task. Respecting the absolute photon counts and thus the multiple counter increases by a single x-ray photon, the same material decomposition performance could be obtained with a simpler detector description using the energy bin sensitivity.
Qualitative Fault Isolation of Hybrid Systems: A Structural Model Decomposition-Based Approach
NASA Technical Reports Server (NTRS)
Bregon, Anibal; Daigle, Matthew; Roychoudhury, Indranil
2016-01-01
Quick and robust fault diagnosis is critical to ensuring safe operation of complex engineering systems. A large number of techniques are available to provide fault diagnosis in systems with continuous dynamics. However, many systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete behavioral modes, each with its own continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task computationally more complex due to the large number of possible system modes and the existence of autonomous mode transitions. This paper presents a qualitative fault isolation framework for hybrid systems based on structural model decomposition. The fault isolation is performed by analyzing the qualitative information of the residual deviations. However, in hybrid systems this process becomes complex due to possible existence of observation delays, which can cause observed deviations to be inconsistent with the expected deviations for the current mode in the system. The great advantage of structural model decomposition is that (i) it allows to design residuals that respond to only a subset of the faults, and (ii) every time a mode change occurs, only a subset of the residuals will need to be reconfigured, thus reducing the complexity of the reasoning process for isolation purposes. To demonstrate and test the validity of our approach, we use an electric circuit simulation as the case study.
Diagnosis of multiple sclerosis from EEG signals using nonlinear methods.
Torabi, Ali; Daliri, Mohammad Reza; Sabzposhan, Seyyed Hojjat
2017-12-01
EEG signals have essential and important information about the brain and neural diseases. The main purpose of this study is classifying two groups of healthy volunteers and Multiple Sclerosis (MS) patients using nonlinear features of EEG signals while performing cognitive tasks. EEG signals were recorded when users were doing two different attentional tasks. One of the tasks was based on detecting a desired change in color luminance and the other task was based on detecting a desired change in direction of motion. EEG signals were analyzed in two ways: EEG signals analysis without rhythms decomposition and EEG sub-bands analysis. After recording and preprocessing, time delay embedding method was used for state space reconstruction; embedding parameters were determined for original signals and their sub-bands. Afterwards nonlinear methods were used in feature extraction phase. To reduce the feature dimension, scalar feature selections were done by using T-test and Bhattacharyya criteria. Then, the data were classified using linear support vector machines (SVM) and k-nearest neighbor (KNN) method. The best combination of the criteria and classifiers was determined for each task by comparing performances. For both tasks, the best results were achieved by using T-test criterion and SVM classifier. For the direction-based and the color-luminance-based tasks, maximum classification performances were 93.08 and 79.79% respectively which were reached by using optimal set of features. Our results show that the nonlinear dynamic features of EEG signals seem to be useful and effective in MS diseases diagnosis.
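The time-delay embedding step mentioned above is straightforward: stack m lagged copies of the series at delay τ to reconstruct the state space. A minimal sketch (m and τ would be chosen per signal, e.g. by mutual-information and false-nearest-neighbor criteria):

```python
# Time-delay embedding of a scalar series into m-dimensional state vectors.
import numpy as np

def embed(x, m, tau):
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

x = np.sin(np.linspace(0, 50, 2000))   # stand-in EEG channel
states = embed(x, m=5, tau=8)          # (n_points, 5) reconstructed states
```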
Delis, Ioannis; Berret, Bastien; Pozzo, Thierry; Panzeri, Stefano
2013-01-01
Muscle synergies have been hypothesized to be the building blocks used by the central nervous system to generate movement. According to this hypothesis, the accomplishment of various motor tasks relies on the ability of the motor system to recruit a small set of synergies on a single-trial basis and combine them in a task-dependent manner. It is conceivable that this requires a fine tuning of the trial-to-trial relationships between the synergy activations. Here we develop an analytical methodology to address the nature and functional role of trial-to-trial correlations between synergy activations, which is designed to help to better understand how these correlations may contribute to generating appropriate motor behavior. The algorithm we propose first divides correlations between muscle synergies into types (noise correlations, quantifying the trial-to-trial covariations of synergy activations at fixed task, and signal correlations, quantifying the similarity of task tuning of the trial-averaged activation coefficients of different synergies), and then uses single-trial methods (task-decoding and information theory) to quantify their overall effect on the task-discriminating information carried by muscle synergy activations. We apply the method to both synchronous and time-varying synergies and exemplify it on electromyographic data recorded during performance of reaching movements in different directions. Our method reveals the robust presence of information-enhancing patterns of signal and noise correlations among pairs of synchronous synergies, and shows that they enhance by 9-15% (depending on the set of tasks) the task-discriminating information provided by the synergy decompositions. We suggest that the proposed methodology could be useful for assessing whether single-trial activations of one synergy depend on activations of other synergies and quantifying the effect of such dependences on the task-to-task differences in muscle activation patterns.
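Both correlation types can be computed directly from single-trial activation coefficients. A simplified sketch for one synergy pair (the full analysis additionally feeds these quantities into decoding and information-theoretic measures):

```python
# Signal vs. noise correlations between two synergies' single-trial activations.
import numpy as np

def signal_noise_corr(a1, a2, tasks):
    """a1, a2: activation per trial; tasks: task label per trial."""
    labels = np.unique(tasks)
    m1 = np.array([a1[tasks == t].mean() for t in labels])
    m2 = np.array([a2[tasks == t].mean() for t in labels])
    signal = np.corrcoef(m1, m2)[0, 1]            # similarity of task tuning
    noise = np.mean([np.corrcoef(a1[tasks == t], a2[tasks == t])[0, 1]
                     for t in labels])            # trial covariation at fixed task
    return signal, noise
```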
Adamopoulou, Theodora; Papadaki, Maria I; Kounalakis, Manolis; Vazquez-Carreto, Victor; Pineda-Solano, Alba; Wang, Qingsheng; Mannan, M Sam
2013-06-15
Thermal decomposition of hydroxylamine, NH2OH, was responsible for two serious accidents. However, its reactive behavior and the synergy of factors affecting its decomposition are not yet well understood. In this work, the global enthalpy of hydroxylamine decomposition was measured in the temperature range of 130-150 °C employing isoperibolic calorimetry. Measurements were performed in a metal reactor, employing 30-80 ml solutions containing 1.4-20 g of pure hydroxylamine (2.8-40 g of the supplied reagent). The measurements showed that increased concentration or temperature results in higher global enthalpies of reaction per unit mass of reactant. At 150 °C, specific enthalpies as high as 8 kJ per gram of hydroxylamine were measured, although in general they were in the range of 3-5 kJ g(-1). The accurate measurement of the generated heat proved to be a cumbersome task as (a) it is difficult to identify the end of decomposition, which after a fast initial stage, proceeds very slowly, especially at lower temperatures and (b) the environment of gases affects the reaction rate. Copyright © 2013 Elsevier B.V. All rights reserved.
MEG masked priming evidence for form-based decomposition of irregular verbs
Fruchter, Joseph; Stockall, Linnaea; Marantz, Alec
2013-01-01
To what extent does morphological structure play a role in early processing of visually presented English past tense verbs? Previous masked priming studies have demonstrated effects of obligatory form-based decomposition for genuinely affixed words (teacher-TEACH) and pseudo-affixed words (corner-CORN), but not for orthographic controls (brothel-BROTH). Additionally, MEG single word reading studies have demonstrated that the transition probability from stem to affix (in genuinely affixed words) modulates an early evoked response known as the M170; parallel findings have been shown for the transition probability from stem to pseudo-affix (in pseudo-affixed words). Here, utilizing the M170 as a neural index of visual form-based morphological decomposition, we ask whether the M170 demonstrates masked morphological priming effects for irregular past tense verbs (following a previous study which obtained behavioral masked priming effects for irregulars). Dual mechanism theories of the English past tense predict a rule-based decomposition for regulars but not for irregulars, while certain single mechanism theories predict rule-based decomposition even for irregulars. MEG data were recorded for 16 subjects performing a visual masked priming lexical decision task. Using a functional region of interest (fROI) defined on the basis of repetition priming and regular morphological priming effects within the left fusiform and inferior temporal regions, we found that activity in this fROI was modulated by the masked priming manipulation for irregular verbs, during the time window of the M170. We also found effects of the scores generated by the learning model of Albright and Hayes (2003) on the degree of priming for irregular verbs. The results favor a single mechanism account of the English past tense, in which even irregulars are decomposed into stems and affixes prior to lexical access, as opposed to a dual mechanism model, in which irregulars are recognized as whole forms. PMID:24319420
How does spatial extent of fMRI datasets affect independent component analysis decomposition?
Aragri, Adriana; Scarabino, Tommaso; Seifritz, Erich; Comani, Silvia; Cirillo, Sossio; Tedeschi, Gioacchino; Esposito, Fabrizio; Di Salle, Francesco
2006-09-01
Spatial independent component analysis (sICA) of functional magnetic resonance imaging (fMRI) time series can generate meaningful activation maps and associated descriptive signals, which are useful to evaluate datasets of the entire brain or selected portions of it. Besides computational implications, variations in the input dataset combined with the multivariate nature of ICA may lead to different spatial or temporal readouts of brain activation phenomena. By reducing and increasing a volume of interest (VOI), we applied sICA to different datasets from real activation experiments with multislice acquisition and single or multiple sensory-motor task-induced blood oxygenation level-dependent (BOLD) signal sources with different spatial and temporal structure. Using receiver operating characteristics (ROC) methodology for accuracy evaluation and multiple regression analysis as benchmark, we compared sICA decompositions of reduced and increased VOI fMRI time-series containing auditory, motor and hemifield visual activation occurring separately or simultaneously in time. Both approaches yielded valid results; however, the results of the increased VOI approach were spatially more accurate than those of the reduced VOI approach. This is consistent with the capability of sICA to take advantage of extended samples of statistical observations and suggests that sICA is more powerful with extended rather than reduced VOI datasets to delineate brain activity. (c) 2006 Wiley-Liss, Inc.
Jared, Debra; Jouravlev, Olessia; Joanisse, Marc F
2017-03-01
Decomposition theories of morphological processing in visual word recognition posit an early morpho-orthographic parser that is blind to semantic information, whereas parallel distributed processing (PDP) theories assume that the transparency of orthographic-semantic relationships influences processing from the beginning. To test these alternatives, the performance of participants on transparent (foolish), quasi-transparent (bookish), opaque (vanish), and orthographic control words (bucket) was examined in a series of 5 experiments. In Experiments 1-3 variants of a masked priming lexical-decision task were used; Experiment 4 used a masked priming semantic decision task, and Experiment 5 used a single-word (nonpriming) semantic decision task with a color-boundary manipulation. In addition to the behavioral data, event-related potential (ERP) data were collected in Experiments 1, 2, 4, and 5. Across all experiments, we observed a graded effect of semantic transparency in behavioral and ERP data, with the largest effect for semantically transparent words, the next largest for quasi-transparent words, and the smallest for opaque words. The results are discussed in terms of decomposition versus PDP approaches to morphological processing. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Cai, Suxian; Yang, Shanshan; Zheng, Fang; Lu, Meng; Wu, Yunfeng; Krishnan, Sridhar
2013-01-01
Analysis of knee joint vibration (VAG) signals can provide quantitative indices for detection of knee joint pathology at an early stage. In addition to the statistical features developed in related previous studies, we extracted two separable features, that is, the number of atoms derived from the wavelet matching pursuit decomposition and the number of significant signal turns detected with a fixed threshold in the time domain. To perform a better classification over the data set of 89 VAG signals, we applied a novel classifier fusion system based on the dynamic weighted fusion (DWF) method to improve the classification performance. For comparison, a single least-squares support vector machine (LS-SVM) and the Bagging ensemble were used for the classification task as well. The results in terms of overall accuracy in percentage and area under the receiver operating characteristic curve obtained with the DWF-based classifier fusion method reached 88.76% and 0.9515, respectively, which demonstrated the effectiveness and superiority of the DWF method with two distinct features for VAG signal analysis. PMID:23573175
The neural basis of novelty and appropriateness in processing of creative chunk decomposition.
Huang, Furong; Fan, Jin; Luo, Jing
2015-06-01
Novelty and appropriateness have been recognized as the fundamental features of creative thinking. However, the brain mechanisms underlying these features remain largely unknown. In this study, we used event-related functional magnetic resonance imaging (fMRI) to dissociate these mechanisms in a revised creative chunk decomposition task in which participants were required to perform different types of chunk decomposition that systematically varied in novelty and appropriateness. We found that novelty processing involved functional areas for procedural memory (caudate), mental rewarding (substantia nigra, SN), and visual-spatial processing, whereas appropriateness processing was mediated by areas for declarative memory (hippocampus), emotional arousal (amygdala), and orthography recognition. These results indicate that non-declarative and declarative memory systems may jointly contribute to the two fundamental features of creative thinking. Copyright © 2015 Elsevier Inc. All rights reserved.
Mueller matrix imaging and analysis of cancerous cells
NASA Astrophysics Data System (ADS)
Fernández, A.; Fernández-Luna, J. L.; Moreno, F.; Saiz, J. M.
2017-08-01
Imaging polarimetry is a focus of increasing interest in diagnostic medicine because of its non-invasive nature and its potential for recognizing abnormal tissues. However, handling polarimetric images is not an easy task, and different intermediate steps have been proposed to introduce physical parameters that may be helpful to interpret results. In this work, transmission Mueller matrices (MM) corresponding to cancer cell samples have been experimentally obtained, and three different transformations have been applied: MM-Polar Decomposition, MM-Transformation and MM-Differential Decomposition. Special attention has been paid to diattenuation as a sensitive parameter to identify apoptosis processes induced by cisplatin and etoposide.
A Conceptual Framework for Adaptive Project Management in the Department of Defense
2016-04-30
schedule work) established a core set of principles that went unchallenged until the start of the 21st century. This belief that managing...detailed planning, task decomposition and assignment of hours at the start of a project as unnecessary, often wasted effort that sacrifices accuracy...with the illusion of precision. Work, at the task level, is best assigned by the team performing the work as close as possible to the actual start
Cerebrospinal fluid PCR analysis and biochemistry in bodies with severe decomposition.
Palmiere, Cristian; Vanhaebost, Jessica; Ventura, Francesco; Bonsignore, Alessandro; Bonetti, Luca Reggiani
2015-02-01
The aim of this study was to assess whether Neisseria meningitidis, Listeria monocytogenes, Streptococcus pneumoniae and Haemophilus influenzae can be identified using the polymerase chain reaction technique in the cerebrospinal fluid of severely decomposed bodies with known, noninfectious causes of death or whether postmortem changes can lead to false positive results and thus erroneous diagnostic information. Biochemical investigations, postmortem bacteriology and real-time polymerase chain reaction analysis in cerebrospinal fluid were performed in a series of medico-legal autopsies that included noninfectious causes of death with decomposition, bacterial meningitis without decomposition, bacterial meningitis with decomposition, low respiratory tract infections with decomposition and abdominal infections with decomposition. In noninfectious causes of death with decomposition, postmortem investigations failed to reveal results consistent with generalized inflammation or bacterial infections at the time of death. Real-time polymerase chain reaction analysis in cerebrospinal fluid did not identify the studied bacteria in any of these cases. The results of this study highlight the usefulness of molecular approaches in bacteriology as well as the use of alternative biological samples in postmortem biochemistry in order to obtain suitable information even in corpses with severe decompositional changes. Copyright © 2014 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Merchant, Hugo; Honing, Henkjan
2013-01-01
We propose a decomposition of the neurocognitive mechanisms that might underlie interval-based timing and rhythmic entrainment. Next to reviewing the concepts central to the definition of rhythmic entrainment, we discuss recent studies that suggest rhythmic entrainment to be specific to humans and a selected group of bird species but, surprisingly, not obvious in non-human primates. On the basis of these studies we propose the gradual audiomotor evolution hypothesis, which suggests that humans fully share interval-based timing with other primates but only partially share the ability of rhythmic entrainment (or beat-based timing). This hypothesis accommodates the fact that the performance of non-human primates (i.e., macaques) is comparable to that of humans in single-interval tasks (such as interval reproduction, categorization, and interception) but differs in multiple-interval tasks (such as rhythmic entrainment, synchronization, and continuation). Furthermore, it is in line with the observation that macaques can, apparently, synchronize in the visual domain but show less sensitivity in the auditory domain. Finally, while macaques are sensitive to interval-based timing and rhythmic grouping, the absence of a strong coupling between the auditory and motor systems of non-human primates might be the reason why macaques cannot rhythmically entrain in the way humans do.
Mind the Gap: Bridging economic and naturalistic risk-taking with cognitive neuroscience
Schonberg, Tom; Fox, Craig R.; Poldrack, Russell A.
2010-01-01
Economists define risk in terms of variability of possible outcomes whereas clinicians and laypeople generally view risk as exposure to possible loss or harm. Neuroeconomic studies using relatively simple behavioral tasks have identified a network of brain regions that respond to economic risk, but these studies have had limited success predicting naturalistic risk-taking. In contrast, more complex behavioral tasks developed by clinicians (e.g., Balloon Analogue Risk Task and Iowa Gambling Task) correlate with naturalistic risk-taking but resist decomposition into distinct cognitive constructs. We propose that to bridge this gap and better understand neural substrates of naturalistic risk-taking, new tasks are needed that: (1) are decomposable into basic cognitive/economic constructs; (2) predict naturalistic risk-taking; and (3) engender dynamic, affective engagement. PMID:21130018
The spinodal decomposition in 17-4PH stainless steel subjected to long-term aging at 350 °C
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Jun; Zou Hong; Li Cong
2008-05-15
The influence of aging time on the microstructure evolution of 17-4 PH martensitic stainless steel was studied by transmission electron microscopy (TEM). Results showed that the martensite decomposed by a spinodal decomposition mechanism after the alloy was subjected to long-term aging at 350 °C. The fine-scale spinodal decomposition of α-ferrite produced a Cr-enriched bright stripe and an Fe-enriched dark stripe, i.e., the α′ and α phases, respectively, which were perpendicular to the grain boundary. The spinodal decomposition started at the grain boundary. With prolonged aging time, the decomposition microstructure then expanded from the grain boundary to the interior. The wavelength of the spinodally decomposed microstructure changed little with extended aging time.
NASA Technical Reports Server (NTRS)
Kuo, Kenneth K.; Lu, Yeu-Cherng; Chiaverini, Martin J.; Johnson, David K.; Serin, Nadir; Risha, Grant A.; Merkle, Charles L.; Venkateswaran, Sankaran
1996-01-01
This final report summarizes the major findings on the subject of 'Fundamental Phenomena on Fuel Decomposition and Boundary-Layer Combustion Processes with Applications to Hybrid Rocket Motors', performed from 1 April 1994 to 30 June 1996. Both experimental results from Task 1 and theoretical/numerical results from Task 2 are reported here in two parts. Part 1 covers the experimental work performed and describes the test facility setup, data reduction techniques employed, and results of the test firings, including effects of operating conditions and fuel additives on solid fuel regression rate and thermal profiles of the condensed phase. Part 2 concerns the theoretical/numerical work. It covers physical modeling of the combustion processes including gas/surface coupling, and radiation effect on regression rate. The numerical solution of the flowfield structure and condensed phase regression behavior are presented. Experimental data from the test firings were used for numerical model validation.
Backward assembly planning with DFA analysis
NASA Technical Reports Server (NTRS)
Lee, Sukhan (Inventor)
1995-01-01
An assembly planning system that operates based on a recursive decomposition of assembly into subassemblies, and analyzes assembly cost in terms of stability, directionality, and manipulability to guide the generation of preferred assembly plans, is presented. The planning in this system incorporates the special processes, such as cleaning, testing, and labeling, that must occur during the assembly, and handles nonreversible as well as reversible assembly tasks through backward assembly planning. In order to increase planning efficiency, the system avoids the analysis of decompositions that do not correspond to feasible assembly tasks. This is achieved by grouping and merging those parts that cannot be decomposed at the current stage of backward assembly planning due to the requirement of special processes and the constraint of interconnection feasibility. The invention includes methods of evaluating assembly cost in terms of the number of fixtures (or holding devices) and reorientations required for assembly, through the analysis of stability, directionality, and manipulability. All these factors are used in defining cost and heuristic functions for an AO* search for an optimal plan.
Wang, Jinjia; Liu, Yuan
2015-04-01
This paper presents a feature extraction method based on multivariate empirical mode decomposition (MEMD) combined with power spectrum features, aimed at the non-stationary electroencephalogram (EEG) or magnetoencephalogram (MEG) signals in brain-computer interface (BCI) systems. Firstly, we utilized the MEMD algorithm to decompose multichannel brain signals into a series of multiple intrinsic mode functions (IMFs), which were approximately stationary and multi-scale. Then we extracted power features from each IMF and reduced them to a lower dimension using principal component analysis (PCA). Finally, we classified the motor imagery tasks with a linear discriminant analysis classifier. Experimental verification showed that the correct recognition rates for the two-class and four-class tasks of BCI competition III and competition IV reached 92.0% and 46.2%, respectively, superior to those of the competition winners. The experiments showed that the proposed method is reasonably effective and stable, and that it provides a new way for feature extraction.
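A single-channel sketch of the decompose-power-reduce-classify chain described above, under stated assumptions: the univariate EMD from the PyEMD package (an assumed dependency) stands in for true multivariate MEMD, and all trial data, dimensions, and class labels are synthetic.

```python
# Sketch: EMD -> per-IMF log power -> PCA -> LDA, on synthetic trials.
import numpy as np
from PyEMD import EMD
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def imf_log_powers(signal, max_imfs=5):
    imfs = EMD().emd(signal)[:max_imfs]              # intrinsic mode functions
    p = np.log([np.mean(imf ** 2) for imf in imfs])  # log power per IMF
    return np.pad(p, (0, max_imfs - len(p)))         # pad if fewer IMFs found

rng = np.random.default_rng(2)
trials = rng.normal(size=(40, 512))                  # 40 synthetic EEG trials
labels = rng.integers(0, 2, size=40)                 # two motor-imagery classes
feats = np.array([imf_log_powers(t) for t in trials])

pca = PCA(n_components=3).fit(feats)                 # dimension reduction
lda = LinearDiscriminantAnalysis().fit(pca.transform(feats), labels)
print(lda.score(pca.transform(feats), labels))
```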
Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach
NASA Technical Reports Server (NTRS)
Mak, Victor W. K.
1986-01-01
Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.
Feng, Wenting; Liang, Junyi; Hale, Lauren E; Jung, Chang Gyo; Chen, Ji; Zhou, Jizhong; Xu, Minggang; Yuan, Mengting; Wu, Liyou; Bracho, Rosvel; Pegoraro, Elaine; Schuur, Edward A G; Luo, Yiqi
2017-11-01
Quantifying soil organic carbon (SOC) decomposition under warming is critical to predict carbon-climate feedbacks. According to the substrate regulating principle, SOC decomposition would decrease as labile SOC declines under field warming, but observations of SOC decomposition under warming do not always support this prediction. This discrepancy could result from varying changes in SOC components and soil microbial communities under warming. This study aimed to determine the decomposition of SOC components with different turnover times after being subjected to long-term field warming and/or root exclusion to limit C input, and to test whether SOC decomposition is driven by substrate lability under warming. Taking advantage of a 12-year field warming experiment in a prairie, we assessed the decomposition of SOC components by incubating soils from control and warmed plots, with and without root exclusion for 3 years. We assayed SOC decomposition from these incubations by combining inverse modeling and microbial functional genes during decomposition with a metagenomic technique (GeoChip). The decomposition of SOC components with turnover times of years and decades, which contributed 95% of total cumulative CO2 respiration, was greater in soils from warmed plots. But the decomposition of labile SOC was similar in warmed plots compared to the control. The diversity of C-degradation microbial genes generally declined with time during the incubation in all treatments, suggesting shifts of microbial functional groups as substrate composition was changing. Compared to the control, soils from warmed plots showed significant increases in the signal intensities of microbial genes involved in degrading complex organic compounds, implying enhanced potential abilities of microbial catabolism. These are likely responsible for accelerated decomposition of SOC components with slow turnover rates. Overall, the shifted microbial community induced by long-term warming accelerates the decomposition of SOC components with slow turnover rates and thus amplifies the positive feedback to climate change. © 2017 John Wiley & Sons Ltd.
A leakage-free resonance sparse decomposition technique for bearing fault detection in gearboxes
NASA Astrophysics Data System (ADS)
Osman, Shazali; Wang, Wilson
2018-03-01
Most rotating machinery deficiencies are related to defects in rolling element bearings. Reliable bearing fault detection still remains a challenging task, especially for bearings in gearboxes, as bearing-defect-related features are nonstationary and modulated by gear mesh vibration. A new leakage-free resonance sparse decomposition (LRSD) technique is proposed in this paper for early bearing fault detection in gearboxes. In the proposed LRSD technique, a leakage-free filter is suggested to remove strong gear mesh and shaft running signatures. A kurtosis and cosine distance measure is suggested to select an appropriate redundancy r and quality factor Q. The signal residual is processed by signal sparse decomposition for highpass and lowpass resonance analysis to extract representative features for bearing fault detection. The effectiveness of the proposed technique is verified by a succession of experimental tests corresponding to different gearbox and bearing conditions.
Matching multiple rigid domain decompositions of proteins
Flynn, Emily; Streinu, Ileana
2017-01-01
We describe efficient methods for consistently coloring and visualizing collections of rigid cluster decompositions obtained from variations of a protein structure, and lay the foundation for more complex setups that may involve different computational and experimental methods. The focus here is on three biological applications: the conceptually simpler problems of visualizing results of dilution and mutation analyses, and the more complex task of matching decompositions of multiple NMR models of the same protein. Implemented into the KINARI web server application, the improved visualization techniques give useful information about protein folding cores, help examining the effect of mutations on protein flexibility and function, and provide insights into the structural motions of PDB proteins solved with solution NMR. These tools have been developed with the goal of improving and validating rigidity analysis as a credible coarse-grained model capturing essential information about a protein’s slow motions near the native state. PMID:28141528
Neural correlates of true and false memory in mild cognitive impairment.
Sweeney-Reed, Catherine M; Riddell, Patricia M; Ellis, Judi A; Freeman, Jayne E; Nasuto, Slawomir J
2012-01-01
The goal of this research was to investigate the changes in neural processing in mild cognitive impairment. We measured phase synchrony, amplitudes, and event-related potentials in veridical and false memory to determine whether these differed in participants with mild cognitive impairment compared with typical, age-matched controls. Empirical mode decomposition phase locking analysis was used to assess synchrony, which is the first time this analysis technique has been applied in a complex cognitive task such as memory processing. The technique allowed assessment of changes in frontal and parietal cortex connectivity over time during a memory task, without a priori selection of frequency ranges, which has been shown previously to influence synchrony detection. Phase synchrony differed significantly in its timing and degree between participant groups in the theta and alpha frequency ranges. Timing differences suggested greater dependence on gist memory in the presence of mild cognitive impairment. The group with mild cognitive impairment had significantly more frontal theta phase locking than the controls in the absence of a significant behavioural difference in the task, providing new evidence for compensatory processing in the former group. Both groups showed greater frontal phase locking during false than true memory, suggesting increased searching when no actual memory trace was found. Significant inter-group differences in frontal alpha phase locking provided support for a role for lower and upper alpha oscillations in memory processing. Finally, fronto-parietal interaction was significantly reduced in the group with mild cognitive impairment, supporting the notion that mild cognitive impairment could represent an early stage in Alzheimer's disease, which has been described as a 'disconnection syndrome'.
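The synchrony measure at the heart of this analysis can be sketched as a phase-locking value. The version below uses a plain Hilbert transform on synthetic band-limited signals, whereas the study derives phases from empirical mode decomposition components, so treat this as a simplified illustration.

```python
# Sketch: phase-locking value between two channels via the Hilbert transform.
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    phase_x = np.angle(hilbert(x))                  # instantaneous phase of x
    phase_y = np.angle(hilbert(y))                  # instantaneous phase of y
    # Mean resultant length of the phase difference on the unit circle.
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

t = np.linspace(0, 2, 1000)
frontal = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)
parietal = np.sin(2 * np.pi * 6 * t + 0.3) + 0.5 * np.random.randn(t.size)
print(phase_locking_value(frontal, parietal))       # near 1 for locked phases
```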
Bilingual Reading of Compound Words
ERIC Educational Resources Information Center
Ko, In Yeong; Wang, Min; Kim, Say Young
2011-01-01
The present study investigated whether bilingual readers activate constituents of compound words in one language while processing compound words in the other language via decomposition. Two experiments using a lexical decision task were conducted with adult Korean-English bilingual readers. In Experiment 1, the lexical decision of real English…
Stone, David B.; Coffman, Brian A.; Bustillo, Juan R.; Aine, Cheryl J.; Stephen, Julia M.
2014-01-01
Deficits in auditory and visual unisensory responses are well documented in patients with schizophrenia; however, potential abnormalities elicited from multisensory audio-visual stimuli are less understood. Further, schizophrenia patients have shown abnormal patterns in task-related and task-independent oscillatory brain activity, particularly in the gamma frequency band. We examined oscillatory responses to basic unisensory and multisensory stimuli in schizophrenia patients (N = 46) and healthy controls (N = 57) using magnetoencephalography (MEG). Time-frequency decomposition was performed to determine regions of significant changes in gamma band power by group in response to unisensory and multisensory stimuli relative to baseline levels. Results showed significant behavioral differences between groups in response to unisensory and multisensory stimuli. In addition, time-frequency analysis revealed significant decreases and increases in gamma-band power in schizophrenia patients relative to healthy controls, which emerged both early and late over both sensory and frontal regions in response to unisensory and multisensory stimuli. Unisensory gamma-band power predicted multisensory gamma-band power differently by group. Furthermore, gamma-band power in these regions predicted performance in select measures of the Measurement and Treatment Research to Improve Cognition in Schizophrenia (MATRICS) test battery differently by group. These results reveal a unique pattern of task-related gamma-band power in schizophrenia patients relative to controls that may indicate reduced inhibition in combination with impaired oscillatory mechanisms in patients with schizophrenia. PMID:25414652
A Parallel Multiclassification Algorithm for Big Data Using an Extreme Learning Machine.
Duan, Mingxing; Li, Kenli; Liao, Xiangke; Li, Keqin
2018-06-01
As data sets become larger and more complicated, an extreme learning machine (ELM) that runs in a traditional serial environment cannot realize its ability to be fast and effective. Although a parallel ELM (PELM) based on MapReduce to process large-scale data shows more efficient learning speed than identical ELM algorithms in a serial environment, some operations, such as intermediate results stored on disks and multiple copies for each task, are indispensable; these operations create a large amount of extra overhead and degrade the learning speed and efficiency of the PELMs. In this paper, an efficient ELM based on the Spark framework (SELM), which includes three parallel subalgorithms, is proposed for big data classification. By partitioning the corresponding data sets reasonably, the hidden layer output matrix calculation algorithm and the matrix decomposition algorithms perform most of the computations locally. At the same time, they retain the intermediate results in distributed memory and cache the diagonal matrix as broadcast variables instead of several copies for each task to reduce a large amount of the costs, and these actions strengthen the learning ability of the SELM. Finally, we implement our SELM algorithm to classify large data sets. Extensive experiments have been conducted to validate the effectiveness of the proposed algorithms. As shown, our SELM achieves increasing speedups on clusters of 10, 15, 20, 25, 30, and 35 nodes.
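For reference, a minimal serial ELM in numpy; the paper's contribution is distributing exactly these matrix computations on Spark, which this sketch does not attempt. The network size, data, and activation are arbitrary choices.

```python
# Sketch: serial extreme learning machine. The hidden layer is random and
# fixed; training is a single least-squares solve for the output weights.
import numpy as np

def elm_train(X, Y, n_hidden=64, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden layer output matrix
    beta = np.linalg.pinv(H) @ Y                  # output weights (pseudo-inverse)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = np.random.randn(200, 10)
Y = np.eye(3)[np.random.randint(0, 3, 200)]       # one-hot labels, 3 classes
W, b, beta = elm_train(X, Y)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
```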
Catalysts for the decomposition of hydrazine and its derivatives and a method for its production
NASA Technical Reports Server (NTRS)
Sasse, R.
1986-01-01
Catalysts of various types are used to decompose hydrazine and its derivatives. One type of catalyst is made as follows: aluminum is dissolved out of a cobalt/aluminum or nickel/aluminum alloy so that a structure is produced that is chemically active toward the monergol and has a large active surface. The objective was to avoid difficulties and to create a catalyst that not only has a short start time but can also be manufactured easily and relatively inexpensively. The solution to this task is to coat the base structure of the catalyst with oxides of copper, cobalt and cerium or oxides of copper, cobalt and cerite earths.
Rank-based decompositions of morphological templates.
Sussner, P; Ritter, G X
2000-01-01
Methods for matrix decomposition have found numerous applications in image processing, in particular for the problem of template decomposition. Since existing matrix decomposition techniques are mainly concerned with the linear domain, we consider it timely to investigate matrix decomposition techniques in the nonlinear domain with applications in image processing. The mathematical basis for these investigations is the new theory of rank within minimax algebra. Thus far, only minimax decompositions of rank 1 and rank 2 matrices into outer product expansions are known to the image processing community. We derive a heuristic algorithm for the decomposition of matrices having arbitrary rank.
Graph Frequency Analysis of Brain Signals
Huang, Weiyu; Goldsberry, Leah; Wymbs, Nicholas F.; Grafton, Scott T.; Bassett, Danielle S.; Ribeiro, Alejandro
2016-01-01
This paper presents methods to analyze functional brain networks and signals from graph spectral perspectives. The notion of frequency and filters traditionally defined for signals supported on regular domains such as discrete time and image grids has been recently generalized to irregular graph domains, and defines brain graph frequencies associated with different levels of spatial smoothness across the brain regions. Brain network frequency also enables the decomposition of brain signals into pieces corresponding to smooth or rapid variations. We relate graph frequency with principal component analysis when the networks of interest denote functional connectivity. The methods are utilized to analyze brain networks and signals as subjects master a simple motor skill. We observe that brain signals corresponding to different graph frequencies exhibit different levels of adaptability throughout learning. Further, we notice a strong association between graph spectral properties of brain networks and the level of exposure to tasks performed, and recognize the most contributing and important frequency signatures at different levels of task familiarity. PMID:28439325
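A small numpy sketch of the graph-frequency decomposition idea: eigenvectors of a graph Laplacian define the graph Fourier basis, and a signal on the nodes splits into smooth and rapid parts. The random connectivity matrix and the cutoff below are illustrative assumptions.

```python
# Sketch: graph Fourier transform and low/high graph-frequency split.
import numpy as np

rng = np.random.default_rng(3)
A = rng.random((20, 20)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)  # connectivity
L = np.diag(A.sum(axis=1)) - A              # combinatorial graph Laplacian
eigvals, V = np.linalg.eigh(L)              # graph frequencies and Fourier basis

x = rng.normal(size=20)                     # signal on brain regions
x_hat = V.T @ x                             # graph Fourier transform
k = 5
x_smooth = V[:, :k] @ x_hat[:k]             # low graph frequencies (smooth)
x_rapid = V[:, k:] @ x_hat[k:]              # high graph frequencies (rapid)
assert np.allclose(x, x_smooth + x_rapid)   # exact decomposition
```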
NASA Astrophysics Data System (ADS)
He, Zhi; Liu, Lin
2016-11-01
Empirical mode decomposition (EMD) and its variants have recently been applied for hyperspectral image (HSI) classification due to their ability to extract useful features from the original HSI. However, it remains a challenging task to effectively exploit the spectral-spatial information by the traditional vector or image-based methods. In this paper, a three-dimensional (3D) extension of EMD (3D-EMD) is proposed to naturally treat the HSI as a cube and decompose the HSI into varying oscillations (i.e., 3D intrinsic mode functions (3D-IMFs)). To achieve fast 3D-EMD implementation, 3D Delaunay triangulation (3D-DT) is utilized to determine the distances of extrema, while separable filters are adopted to generate the envelopes. Taking the extracted 3D-IMFs as features of different tasks, robust multitask learning (RMTL) is further proposed for HSI classification. In RMTL, pairs of low-rank and sparse structures are formulated by the trace norm and the l1,2-norm to capture task relatedness and specificity, respectively. Moreover, the optimization problems of RMTL can be efficiently solved by the inexact augmented Lagrangian method (IALM). Compared with several state-of-the-art feature extraction and classification methods, the experimental results conducted on three benchmark data sets demonstrate the superiority of the proposed methods.
Mind the gap: bridging economic and naturalistic risk-taking with cognitive neuroscience.
Schonberg, Tom; Fox, Craig R; Poldrack, Russell A
2011-01-01
Economists define risk in terms of the variability of possible outcomes, whereas clinicians and laypeople generally view risk as exposure to possible loss or harm. Neuroeconomic studies using relatively simple behavioral tasks have identified a network of brain regions that respond to economic risk, but these studies have had limited success predicting naturalistic risk-taking. By contrast, more complex behavioral tasks developed by clinicians (e.g. Balloon Analogue Risk Task and Iowa Gambling Task) correlate with naturalistic risk-taking but resist decomposition into distinct cognitive constructs. We propose here that to bridge this gap and better understand neural substrates of naturalistic risk-taking, new tasks are needed that: are decomposable into basic cognitive and/or economic constructs; predict naturalistic risk-taking; and engender dynamic, affective engagement. Copyright © 2010 Elsevier Ltd. All rights reserved.
A Structured Model for Software Documentation.
ERIC Educational Resources Information Center
Swigger, Keith
The concept of "structured programming" was developed to facilitate software production, but it has not carried over to documentation design. Two concepts of structure are relevant to user documentation for computer programs. The first is based on programming techniques that emphasize decomposition of tasks into discrete modules, while the second…
Compressed Continuous Computation v. 12/20/2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorodetsky, Alex
2017-02-17
A library for performing numerical computation with low-rank functions. The (C3) library enables performing continuous linear and multilinear algebra with multidimensional functions. Common tasks include taking "matrix" decompositions of vector- or matrix-valued functions, approximating multidimensional functions in low-rank format, adding or multiplying functions together, integrating multidimensional functions.
Complexity Measures in Magnetoencephalography: Measuring "Disorder" in Schizophrenia
Brookes, Matthew J.; Hall, Emma L.; Robson, Siân E.; Price, Darren; Palaniyappan, Lena; Liddle, Elizabeth B.; Liddle, Peter F.; Robinson, Stephen E.; Morris, Peter G.
2015-01-01
This paper details a methodology which, when applied to magnetoencephalography (MEG) data, is capable of measuring the spatio-temporal dynamics of ‘disorder’ in the human brain. Our method, which is based upon signal entropy, shows that spatially separate brain regions (or networks) generate temporally independent entropy time-courses. These time-courses are modulated by cognitive tasks, with an increase in local neural processing characterised by localised and transient increases in entropy in the neural signal. We explore the relationship between entropy and the more established time-frequency decomposition methods, which elucidate the temporal evolution of neural oscillations. We observe a direct but complex relationship between entropy and oscillatory amplitude, which suggests that these metrics are complementary. Finally, we provide a demonstration of the clinical utility of our method, using it to shed light on aberrant neurophysiological processing in schizophrenia. We demonstrate significantly increased task induced entropy change in patients (compared to controls) in multiple brain regions, including a cingulo-insula network, bilateral insula cortices and a right fronto-parietal network. These findings demonstrate potential clinical utility for our method and support a recent hypothesis that schizophrenia can be characterised by abnormalities in the salience network (a well characterised distributed network comprising bilateral insula and cingulate cortices). PMID:25886553
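A toy version of an entropy time-course, assuming a simple sliding-window Shannon entropy over amplitude histograms; the paper's estimator and its application to beamformed MEG source signals differ in detail.

```python
# Sketch: sliding-window entropy of a signal's amplitude distribution.
import numpy as np
from scipy.stats import entropy

def entropy_timecourse(x, win=256, step=64, bins=32):
    out = []
    for start in range(0, len(x) - win + 1, step):
        counts, _ = np.histogram(x[start:start + win], bins=bins)
        out.append(entropy(counts + 1e-12))  # normalized internally; offset avoids log(0)
    return np.array(out)

signal = np.random.randn(4096)               # synthetic stand-in for one source
print(entropy_timecourse(signal)[:5])        # localized "disorder" over time
```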
Repeated decompositions reveal the stability of infomax decomposition of fMRI data
Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott
2010-01-01
In this study, we decomposed each of 12 fMRI data sets from six subjects 101 times using the infomax algorithm. The first decomposition was taken as a reference decomposition; the others were used to form a component matrix of 100 by 100 components. Equivalence relations between components in this matrix, defined by maximum spatial correlations to the components of the reference decomposition, were found by the Hungarian sorting method and used to form 100 equivalence classes for each data set. We then tested the reproducibility of the matched components in the equivalence classes using uncertainty measures based on component distributions, time courses, and ROC curves. Infomax ICA rarely failed to derive nearly the same components in different decompositions. Very few components per data set were poorly reproduced, even using vector angle uncertainty measures stricter than correlation and detection theory measures. PMID:17281453
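The matching step can be sketched with the Hungarian method as implemented in scipy: build an absolute spatial-correlation matrix between two sets of component maps and solve the assignment problem. Map sizes and data below are synthetic.

```python
# Sketch: one-to-one matching of ICA components across two decompositions.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_components(maps_a, maps_b):
    # maps_*: (n_components, n_voxels) spatial maps from two decompositions
    a = (maps_a - maps_a.mean(1, keepdims=True)) / maps_a.std(1, keepdims=True)
    b = (maps_b - maps_b.mean(1, keepdims=True)) / maps_b.std(1, keepdims=True)
    corr = np.abs(a @ b.T) / maps_a.shape[1]     # |spatial correlation| matrix
    rows, cols = linear_sum_assignment(-corr)    # maximize total correlation
    return cols, corr[rows, cols]

ref = np.random.randn(10, 500)                   # reference decomposition
other = ref[np.random.permutation(10)] + 0.1 * np.random.randn(10, 500)
pairing, similarity = match_components(ref, other)
```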
Energy-Based Wavelet De-Noising of Hydrologic Time Series
Sang, Yan-Fang; Liu, Changming; Wang, Zhonggen; Wen, Jun; Shang, Lunyu
2014-01-01
De-noising is a substantial issue in hydrologic time series analysis, but it is a difficult task due to the shortcomings of existing methods. In this paper an energy-based wavelet de-noising method is proposed. It removes noise by comparing the energy distribution of the series with a background energy distribution established from a Monte-Carlo test. Differing from the wavelet threshold de-noising (WTD) method, which is based on thresholding wavelet coefficients, the proposed method is based on the energy distribution of the series. It can distinguish noise from deterministic components in the series, and the uncertainty of the de-noising result can be quantitatively estimated using a proper confidence interval, which WTD cannot do. Analysis of both synthetic and observed series verified the comparable power of the proposed method and WTD, but the de-noising process of the former is more easily operable. The results also indicate the influences of three key factors (wavelet choice, decomposition level choice and noise content) on wavelet de-noising. The wavelet should be carefully chosen when using the proposed method. The suitable decomposition level for wavelet de-noising should correspond to the series' deterministic sub-signal that has the smallest temporal scale. If too much noise is included in a series, an accurate de-noising result cannot be obtained by the proposed method or WTD, but such a series would show purely random rather than autocorrelated character, so de-noising is no longer needed. PMID:25360533
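A sketch of the energy-comparison idea under stated assumptions: per-level wavelet energies of the series are compared against a Monte-Carlo background built from white noise, using the PyWavelets package; the paper's background model and confidence handling are more elaborate.

```python
# Sketch: compare per-level wavelet energies with a Monte-Carlo noise band.
import numpy as np
import pywt

def level_energies(x, wavelet="db4", level=5):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs[1:]])  # detail levels only

series = np.cumsum(np.random.randn(1024))       # toy hydrologic-like series
obs = level_energies(series)

# Background: energies of white noise with matched standard deviation.
sims = np.array([level_energies(np.random.randn(1024) * series.std())
                 for _ in range(200)])
upper = np.percentile(sims, 95, axis=0)          # 95% background bound
noise_like = obs < upper                         # levels indistinguishable from noise
print(noise_like)
```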
Decomposition-based transfer distance metric learning for image classification.
Luo, Yong; Liu, Tongliang; Tao, Dacheng; Xu, Chao
2014-09-01
Distance metric learning (DML) is a critical factor for image analysis and pattern recognition. To learn a robust distance metric for a target task, we need abundant side information (i.e., the similarity/dissimilarity pairwise constraints over the labeled data), which is usually unavailable in practice due to the high labeling cost. This paper considers the transfer learning setting by exploiting the large quantity of side information from certain related, but different source tasks to help with target metric learning (with only a little side information). The state-of-the-art metric learning algorithms usually fail in this setting because the data distributions of the source task and target task are often quite different. We address this problem by assuming that the target distance metric lies in the space spanned by the eigenvectors of the source metrics (or other randomly generated bases). The target metric is represented as a combination of the base metrics, which are computed using the decomposed components of the source metrics (or simply a set of random bases); we call the proposed method decomposition-based transfer DML (DTDML). In particular, DTDML learns a sparse combination of the base metrics to construct the target metric by forcing the target metric to be close to an integration of the source metrics. The main advantage of the proposed method compared with existing transfer metric learning approaches is that we directly learn the base metric coefficients instead of the target metric. To this end, far fewer variables need to be learned. We therefore obtain more reliable solutions given the limited side information, and the optimization tends to be faster. Experiments on the popular handwritten image (digit, letter) classification and challenging natural image annotation tasks demonstrate the effectiveness of the proposed method.
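The core construction, learning only combination coefficients over base metrics, can be illustrated in a few lines; here a Lasso on vectorized metrics stands in for the paper's actual optimization, and all matrices are synthetic.

```python
# Sketch: recover sparse weights that combine PSD base metrics into a target.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
d, n_bases = 8, 12
bases = []
for _ in range(n_bases):
    X = rng.normal(size=(d, d))
    bases.append(X @ X.T / d)                     # random PSD base metric

alpha_true = np.zeros(n_bases)
alpha_true[[1, 5, 7]] = [0.6, 0.3, 0.1]           # sparse ground-truth weights
M_target = sum(a * B for a, B in zip(alpha_true, bases))

A = np.stack([B.ravel() for B in bases], axis=1)  # columns: vectorized bases
fit = Lasso(alpha=1e-3, positive=True, fit_intercept=False)
fit.fit(A, M_target.ravel())
print(np.round(fit.coef_, 2))                     # weights close to alpha_true
```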
NASA Technical Reports Server (NTRS)
Juang, Hann-Ming Henry; Tao, Wei-Kuo; Zeng, Xi-Ping; Shie, Chung-Lin; Simpson, Joanne; Lang, Steve
2004-01-01
The capability for massively parallel programming (MPP) using a message passing interface (MPI) has been implemented into a three-dimensional version of the Goddard Cumulus Ensemble (GCE) model. The design for the MPP with MPI uses the concept of maintaining a similar code structure between the whole domain and the portions after decomposition. Hence the model follows the same integration for single and multiple tasks (CPUs). It also requires minimal changes to the original code, so it is easily modified and/or managed by the model developers and users who have little knowledge of MPP. The entire model domain can be sliced into a one- or two-dimensional decomposition with a halo region overlaid on the partial domains. The halo region requires that no data be fetched across tasks during the computational stage, but it must be updated before the next computational stage through data exchange via MPI. For reproducibility, transposing data among tasks is required for the spectral transform (Fast Fourier Transform, FFT), which is used in the anelastic version of the model for solving the pressure equation. The performance of the MPI-implemented codes (i.e., the compressible and anelastic versions) was tested on three different computing platforms. The major results are: 1) both versions show parallel efficiencies of about 99% up to 256 tasks but not for 512 tasks; 2) the anelastic version has better speedup and efficiency because it requires more computations than the compressible version; 3) equal or approximately equal numbers of slices between the x- and y-directions provide the fastest integration due to fewer data exchanges; and 4) one-dimensional slices in the x-direction result in the slowest integration due to the need for more memory relocation for computation.
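The halo-update pattern described here can be sketched with mpi4py for a one-dimensional slice decomposition; this is an illustrative stand-in, not the GCE model's actual code, and the array sizes are arbitrary.

```python
# Sketch: 1D slice decomposition with one-cell halos refreshed between
# computational stages. Run with, e.g.: mpiexec -n 4 python halo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local = np.full(10 + 2, float(rank))      # 10 interior cells plus 2 halo cells

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Send interior edge cells, receive into halos; during the computational
# stage itself no data is fetched across tasks.
comm.Sendrecv(local[1:2], dest=left, recvbuf=local[-1:], source=right)
comm.Sendrecv(local[-2:-1], dest=right, recvbuf=local[0:1], source=left)

# ... interior computation proceeds here using the refreshed halos ...
```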
Profiling the decomposition odour at the grave surface before and after probing.
Forbes, S L; Troobnikoff, A N; Ueland, M; Nizio, K D; Perrault, K A
2016-02-01
Human remains detection (HRD) dogs are recognised as a valuable and non-invasive search method for remains concealed in many different environments, including clandestine graves. However, the search for buried remains can be a challenging task as minimal odour may be available at the grave surface for detection by the dogs. Handlers often use a soil probe during these searches in an attempt to increase the amount of odour available for detection, but soil probing is considered an invasive search technique. The aim of this study was to determine whether the soil probe assists with increasing the abundance of volatile organic compounds (VOCs) available at the grave surface. A proof-of-concept method was developed using porcine remains to collect VOCs within the grave without disturbing the burial environment, and to compare their abundance at the grave surface before and after probing. Detection and identification of the VOC profiles required the use of comprehensive two-dimensional gas chromatography-time-of-flight mass spectrometry (GC×GC-TOFMS) due to its superior sensitivity and selectivity for decomposition odour profiling. The abundance of decomposition VOCs was consistently higher within the grave environment compared to the grave surface, except when the grave surface had been disturbed, confirming the reduced availability of odour at the grave surface. Although probing appeared to increase the abundance of VOCs at the grave surface on many of the sampling days, there were no clear trends identified across the study and no direct relationships with the environmental variables measured. Typically, the decomposition VOCs that were most prevalent in the grave soil were the same VOCs detected at the grave surface, whereas the trace VOCs detected in these environments varied throughout the post-burial period. This study highlighted that probing the soil can assist with releasing decomposition VOCs but is likely correlated to environmental and burial variables which require further study. The use of a soil probe to assist HRD dogs should not be disregarded but should only follow the use of non-invasive methods if deemed appropriate. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Human Guidance Behavior Decomposition and Modeling
NASA Astrophysics Data System (ADS)
Feit, Andrew James
Trained humans are capable of high performance, adaptable, and robust first-person dynamic motion guidance behavior. This behavior is exhibited in a wide variety of activities such as driving, piloting aircraft, skiing, biking, and many others. Human performance in such activities far exceeds the current capability of autonomous systems in terms of adaptability to new tasks, real-time motion planning, robustness, and trading safety for performance. The present work investigates the structure of human dynamic motion guidance that enables these performance qualities. This work uses a first-person experimental framework that presents a driving task to the subject, measuring control inputs, vehicle motion, and operator visual gaze movement. The resulting data is decomposed into subspace segment clusters that form primitive elements of action-perception interactive behavior. Subspace clusters are defined by both agent-environment system dynamic constraints and operator control strategies. A key contribution of this work is to define transitions between subspace cluster segments, or subgoals, as points where the set of active constraints, either system or operator defined, changes. This definition provides necessary conditions to determine transition points for a given task-environment scenario that allow a solution trajectory to be planned from known behavior elements. In addition, human gaze behavior during this task contains predictive behavior elements, indicating that the identified control modes are internally modeled. Based on these ideas, a generative, autonomous guidance framework is introduced that efficiently generates optimal dynamic motion behavior in new tasks. The new subgoal planning algorithm is shown to generate solutions to certain tasks more quickly than existing approaches currently used in robotics.
The formal verification of generic interpreters
NASA Technical Reports Server (NTRS)
Windley, P.; Levitt, K.; Cohen, G. C.
1991-01-01
Task assignment 3 of the design and validation of digital flight control systems suitable for fly-by-wire applications is studied; this task is associated with the formal verification of embedded systems. In particular, results are presented that provide a methodological approach to microprocessor verification. A hierarchical decomposition strategy for specifying microprocessors is also presented. A theory of generic interpreters is presented that can be used to model microprocessor behavior. The generic interpreter theory abstracts away the details of instruction functionality, leaving a general model of what an interpreter does.
Alles, E. J.; Zhu, Y.; van Dongen, K. W. A.; McGough, R. J.
2013-01-01
The fast nearfield method, when combined with time-space decomposition, is a rapid and accurate approach for calculating transient nearfield pressures generated by ultrasound transducers. However, the standard time-space decomposition approach is only applicable to certain analytical representations of the temporal transducer surface velocity that, when applied to the fast nearfield method, are expressed as a finite sum of products of separate temporal and spatial terms. To extend time-space decomposition such that accelerated transient field simulations are enabled in the nearfield for an arbitrary transducer surface velocity, a new transient simulation method, frequency domain time-space decomposition (FDTSD), is derived. With this method, the temporal transducer surface velocity is transformed into the frequency domain, and then each complex-valued term is processed separately. Further improvements are achieved by spectral clipping, which reduces the number of terms and the computation time. Trade-offs between speed and accuracy are established for FDTSD calculations, and pressure fields obtained with the FDTSD method for a circular transducer are compared to those obtained with Field II and the impulse response method. The FDTSD approach, when combined with the fast nearfield method and spectral clipping, consistently achieves smaller errors in less time and requires less memory than Field II or the impulse response method. PMID:23160476
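The frequency-domain step can be sketched as follows, assuming an arbitrary tone-burst surface velocity: transform the pulse with an FFT and apply spectral clipping by discarding terms below a magnitude threshold. The field computation that each retained term would drive is outside this sketch, and the 1% threshold and signal parameters are assumed values.

```python
# Sketch: frequency-domain decomposition of a transient surface velocity
# with spectral clipping to reduce the number of retained terms.
import numpy as np

fs = 100e6                                   # sampling rate, Hz (illustrative)
t = np.arange(0, 2e-6, 1 / fs)
pulse = np.sin(2 * np.pi * 5e6 * t) * np.hanning(t.size)   # 5 MHz tone burst

spectrum = np.fft.rfft(pulse)                # complex frequency-domain terms
freqs = np.fft.rfftfreq(t.size, 1 / fs)

keep = np.abs(spectrum) > 0.01 * np.abs(spectrum).max()    # spectral clipping
print(f"terms retained: {keep.sum()} of {keep.size}")

# Each retained (freqs[i], spectrum[i]) pair would drive one time-harmonic
# nearfield evaluation; summing those fields and inverse transforming
# recovers the transient pressure.
```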
Video segmentation and camera motion characterization using compressed data
NASA Astrophysics Data System (ADS)
Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain
1997-10-01
We address the problem of automatically extracting visual indexes from videos, in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments, and the characterization of each shot by camera motion parameters. For the first task we use a Bayesian classification approach to detecting scene cuts by analyzing motion vectors. For the second task a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. In order to guarantee the highest processing speed, all techniques process and analyze MPEG-1 motion vectors directly, without need for video decompression. Experimental results are reported for a database of news video clips.
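The second task admits a compact illustration: under a simple model with horizontal/vertical pan (tilt) and zoom, the motion vector at each block position is linear in the unknown parameters, so a least-squares solve recovers them. The motion model and synthetic vectors below are assumptions for illustration.

```python
# Sketch: least-squares fit of pan (x, y) and zoom to block motion vectors,
# where (u, v) = (pan_x + zoom * x, pan_y + zoom * y) about the image centre.
import numpy as np

rng = np.random.default_rng(5)
xy = rng.uniform(-1, 1, size=(100, 2))            # block positions (centred)
true_pan, true_zoom = np.array([0.3, -0.1]), 0.05
uv = true_pan + true_zoom * xy
uv += 0.01 * rng.normal(size=uv.shape)            # noisy decoded vectors

# Unknowns: [pan_x, pan_y, zoom]; rows interleave the u and v equations.
A = np.zeros((2 * len(xy), 3))
A[0::2, 0] = 1; A[0::2, 2] = xy[:, 0]             # u equations
A[1::2, 1] = 1; A[1::2, 2] = xy[:, 1]             # v equations
params, *_ = np.linalg.lstsq(A, uv.ravel(), rcond=None)
print(params)                                     # ~ [0.3, -0.1, 0.05]
```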
Caffo, Brian S.; Crainiceanu, Ciprian M.; Verduzco, Guillermo; Joel, Suresh; Mostofsky, Stewart H.; Bassett, Susan Spear; Pekar, James J.
2010-01-01
Functional connectivity is the study of correlations in measured neurophysiological signals. Altered functional connectivity has been shown to be associated with a variety of cognitive and memory impairments and dysfunction, including Alzheimer’s disease. In this manuscript we use a two-stage application of the singular value decomposition to obtain data driven population-level measures of functional connectivity in functional magnetic resonance imaging (fMRI). The method is computationally simple and amenable to high dimensional fMRI data with large numbers of subjects. Simulation studies suggest the ability of the decomposition methods to recover population brain networks and their associated loadings. We further demonstrate the utility of these decompositions in a functional logistic regression model. The method is applied to a novel fMRI study of Alzheimer’s disease risk under a verbal paired associates task. We found an indication of alternative connectivity in clinically asymptomatic at-risk subjects when compared to controls, which was not significant in the light of multiple comparisons adjustment. The relevant brain network loads primarily on the temporal lobe and overlaps significantly with the olfactory areas and temporal poles. PMID:20227508
Caffo, Brian S; Crainiceanu, Ciprian M; Verduzco, Guillermo; Joel, Suresh; Mostofsky, Stewart H; Bassett, Susan Spear; Pekar, James J
2010-07-01
Functional connectivity is the study of correlations in measured neurophysiological signals. Altered functional connectivity has been shown to be associated with a variety of cognitive and memory impairments and dysfunction, including Alzheimer's disease. In this manuscript we use a two-stage application of the singular value decomposition to obtain data driven population-level measures of functional connectivity in functional magnetic resonance imaging (fMRI). The method is computationally simple and amenable to high dimensional fMRI data with large numbers of subjects. Simulation studies suggest the ability of the decomposition methods to recover population brain networks and their associated loadings. We further demonstrate the utility of these decompositions in a functional logistic regression model. The method is applied to a novel fMRI study of Alzheimer's disease risk under a verbal paired associates task. We found an indication of alternative connectivity in clinically asymptomatic at-risk subjects when compared to controls, which was not significant in the light of multiple comparisons adjustment. The relevant brain network loads primarily on the temporal lobe and overlaps significantly with the olfactory areas and temporal poles. Copyright (c) 2010 Elsevier Inc. All rights reserved.
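As a rough illustration of the two-stage idea (a subject-level SVD followed by a population-level SVD of the stacked components), here is a minimal numpy sketch. The dimensions, the use of raw voxel-by-time matrices rather than connectivity measures, and the absence of any preprocessing are all assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_vox, n_time, k = 8, 500, 120, 5

# Stage 1: an SVD per subject compresses each voxel-by-time matrix
# into its top-k weighted spatial components.
subject_maps = []
for _ in range(n_subj):
    X = rng.standard_normal((n_vox, n_time))   # stand-in fMRI data
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    subject_maps.append(U[:, :k] * s[:k])

# Stage 2: an SVD of the stacked subject components yields
# population-level spatial networks and per-subject loadings.
stacked = np.hstack(subject_maps)              # n_vox x (n_subj * k)
U_pop, s_pop, Vt_pop = np.linalg.svd(stacked, full_matrices=False)
networks = U_pop[:, :k]                        # population brain "networks"
loadings = Vt_pop[:k]                          # how each subject expresses them
```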
Decomposition and extraction: a new framework for visual classification.
Fang, Yuqiang; Chen, Qiang; Sun, Lin; Dai, Bin; Yan, Shuicheng
2014-08-01
In this paper, we present a novel framework for visual classification based on hierarchical image decomposition and hybrid midlevel feature extraction. Unlike most midlevel feature learning methods, which focus on the process of coding or pooling, we emphasize that the mechanism of image composition also strongly influences the feature extraction. To effectively explore the image content for feature extraction, we model a multiplicity feature representation mechanism through meaningful hierarchical image decomposition followed by a fusion step. In particular, we first propose a new hierarchical image decomposition approach in which each image is decomposed into a series of hierarchical semantic components, i.e., the structure and texture images. Then, different feature extraction schemes can be adopted to match the decomposed structure and texture processes in a dissociative manner. Here, two schemes are explored to produce property-related feature representations. One is based on a single-stage network over hand-crafted features and the other is based on a multistage network, which can learn features from raw pixels automatically. Finally, these multiple midlevel features are incorporated by solving a multiple kernel learning task. Extensive experiments are conducted on several challenging data sets for visual classification, and experimental results demonstrate the effectiveness of the proposed method.
Using dynamic mode decomposition for real-time background/foreground separation in video
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kutz, Jose Nathan; Grosek, Jacob; Brunton, Steven
The technique of dynamic mode decomposition (DMD) is disclosed herein for the purpose of robustly separating video frames into background (low-rank) and foreground (sparse) components in real-time. Foreground/background separation is achieved at the computational cost of just one singular value decomposition (SVD) and one linear equation solve, thus producing results orders of magnitude faster than robust principal component analysis (RPCA). Additional techniques, including techniques for analyzing the video for multi-resolution time-scale components, and techniques for reusing computations to allow processing of streaming video in real time, are also described herein.
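For readers wanting the computational core, a minimal numpy sketch of DMD-based separation follows: one SVD, one eigendecomposition of the projected propagator, and one linear solve, with near-zero-frequency modes taken as background. The rank, tolerance, and synthetic video are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def dmd_separate(frames, r=2, dt=1.0, tol=1e-2):
    """Split a pixels-by-time video matrix into a low-rank background
    and a sparse foreground using dynamic mode decomposition."""
    X, Y = frames[:, :-1], frames[:, 1:]
    U, s, Vt = np.linalg.svd(X, full_matrices=False)        # the single SVD
    U, s, V = U[:, :r], s[:r], Vt[:r].conj().T
    A_tilde = U.conj().T @ Y @ V / s                        # projected propagator
    evals, W = np.linalg.eig(A_tilde)
    Phi = Y @ V / s @ W                                     # DMD modes
    omega = np.log(evals.astype(complex)) / dt              # continuous-time frequencies
    b = np.linalg.lstsq(Phi, frames[:, 0], rcond=None)[0]   # the single linear solve
    bg = np.abs(omega) < tol                                # near-zero modes = background
    t = np.arange(frames.shape[1]) * dt
    background = ((Phi[:, bg] * b[bg]) @ np.exp(np.outer(omega[bg], t))).real
    return background, frames - background

# Toy video: a static background plus a brief bright moving object.
rng = np.random.default_rng(1)
video = np.tile(rng.random((64, 1)), (1, 30))
video[20:25, 10:15] += 1.0
background, foreground = dmd_separate(video)
```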
The effect of body size on the rate of decomposition in a temperate region of South Africa.
Sutherland, A; Myburgh, J; Steyn, M; Becker, P J
2013-09-10
Forensic anthropologists rely on the state of decomposition of a body to estimate the post-mortem interval (PMI), which provides information about the natural events and environmental forces that could have affected the remains after death. Various factors are known to influence the rate of decomposition, among them temperature, rainfall and exposure of the body. However, conflicting reports appear in the literature on the effect of body size on the rate of decay. The aim of this project was to compare decomposition rates of large pigs (Sus scrofa; 60-90 kg) with those of small pigs (<35 kg), to assess the influence of body size on decomposition rates. For the decomposition rates of small pigs, 15 piglets were assessed three times per week over a period of three months during spring and early summer. Data collection was conducted until complete skeletonization occurred. Stages of decomposition were scored according to separate categories for each anatomical region, and the point values for each region were added to determine the total body score (TBS), which represents the overall stage of decomposition for each pig. For the large pigs, data of 15 pigs were used. Scatter plots illustrating the relationships between TBS and PMI as well as TBS and accumulated degree days (ADD) were used to assess the pattern of decomposition and to compare decomposition rates between small and large pigs. Results indicated that rapid decomposition occurs during the early stages of decomposition for both samples. Large pigs showed a plateau phase in the course of advanced stages of decomposition, during which decomposition was minimal. A similar, but much shorter plateau was reached by small pigs of >20 kg at a PMI of 20-25 days, after which decomposition commenced swiftly. This was in contrast to the small pigs of <20 kg, which showed no plateau phase and whose decomposition rates were swift throughout the duration of the study. Overall, small pigs decomposed 2.82 times faster than large pigs, indicating that body size does have an effect on the rate of decomposition. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
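Accumulated degree days, as used in this literature, are typically computed as the running sum of daily mean temperatures above a 0 °C base. A minimal sketch with invented temperatures:

```python
import numpy as np

# Daily mean temperatures in deg C for the first post-mortem days (invented).
daily_mean_temp = np.array([18.2, 20.1, 22.5, 19.8, 21.0])

# Accumulated degree days (ADD): cumulative sum of daily means above 0 deg C.
add = np.cumsum(np.clip(daily_mean_temp, 0.0, None))
print(add)   # ADD reached at the end of each post-mortem day
```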
1993-06-01
Completes the functional decomposition of the detection and monitoring requirements of the Counterdrug JTF, following the SADT methodology described by Marca and McGowan (SADT: Structured Analysis and Design Technique, McGraw-Hill, 1988).
Evidence for Early Morphological Decomposition in Visual Word Recognition
ERIC Educational Resources Information Center
Solomyak, Olla; Marantz, Alec
2010-01-01
We employ a single-trial correlational MEG analysis technique to investigate early processing in the visual recognition of morphologically complex words. Three classes of affixed words were presented in a lexical decision task: free stems (e.g., taxable), bound roots (e.g., tolerable), and unique root words (e.g., vulnerable, the root of which…
"Wh-on-Earth" in Chinese Speakers' L2 English: Evidence of Dormant Features
ERIC Educational Resources Information Center
Yuan, Boping
2014-01-01
Adopting a decompositional approach to items in the lexicon, this article reports on an empirical study investigating Chinese speakers' second language (L2) acquisition of English "wh-on-earth" questions (i.e. questions with phrases like "what on earth" or "who on earth"). An acceptability judgment task, a discourse-completion…
Backward assembly planning with DFA analysis
NASA Technical Reports Server (NTRS)
Lee, Sukhan (Inventor)
1992-01-01
An assembly planning system that operates based on a recursive decomposition of assembly into subassemblies is presented. The planning system analyzes assembly cost in terms of stability, directionality, and manipulability to guide the generation of preferred assembly plans. The planning in this system incorporates the special processes, such as cleaning, testing, labeling, etc., that must occur during the assembly. Additionally, the planning handles nonreversible, as well as reversible, assembly tasks through backward assembly planning. To improve planning efficiency, the system avoids the analysis of decompositions that do not correspond to feasible assembly tasks. This is achieved by grouping and merging those parts that cannot be decomposed at the current stage of backward assembly planning due to the requirement of special processes and the constraint of interconnection feasibility. The invention includes methods of evaluating assembly cost in terms of the number of fixtures (or holding devices) and reorientations required for assembly, through the analysis of stability, directionality, and manipulability. All these factors are used in defining cost and heuristic functions for an AO* search for an optimal plan.
Neural Correlates of True and False Memory in Mild Cognitive Impairment
Sweeney-Reed, Catherine M.; Riddell, Patricia M.; Ellis, Judi A.; Freeman, Jayne E.; Nasuto, Slawomir J.
2012-01-01
The goal of this research was to investigate the changes in neural processing in mild cognitive impairment. We measured phase synchrony, amplitudes, and event-related potentials in veridical and false memory to determine whether these differed in participants with mild cognitive impairment compared with typical, age-matched controls. Empirical mode decomposition phase locking analysis was used to assess synchrony, which is the first time this analysis technique has been applied in a complex cognitive task such as memory processing. The technique allowed assessment of changes in frontal and parietal cortex connectivity over time during a memory task, without a priori selection of frequency ranges, which has been shown previously to influence synchrony detection. Phase synchrony differed significantly in its timing and degree between participant groups in the theta and alpha frequency ranges. Timing differences suggested greater dependence on gist memory in the presence of mild cognitive impairment. The group with mild cognitive impairment had significantly more frontal theta phase locking than the controls in the absence of a significant behavioural difference in the task, providing new evidence for compensatory processing in the former group. Both groups showed greater frontal phase locking during false than true memory, suggesting increased searching when no actual memory trace was found. Significant inter-group differences in frontal alpha phase locking provided support for a role for lower and upper alpha oscillations in memory processing. Finally, fronto-parietal interaction was significantly reduced in the group with mild cognitive impairment, supporting the notion that mild cognitive impairment could represent an early stage in Alzheimer’s disease, which has been described as a ‘disconnection syndrome’. PMID:23118992
X-Ray Thomson Scattering Without the Chihara Decomposition
NASA Astrophysics Data System (ADS)
Magyar, Rudolph; Baczewski, Andrew; Shulenburger, Luke; Hansen, Stephanie B.; Desjarlais, Michael P.; Sandia National Laboratories Collaboration
X-Ray Thomson Scattering is an important experimental technique used in dynamic compression experiments to measure the properties of warm dense matter. The fundamental property probed in these experiments is the electronic dynamic structure factor that is typically modeled using an empirical three-term decomposition (Chihara, J. Phys. F, 1987). One of the crucial assumptions of this decomposition is that the system's electrons can be either classified as bound to ions or free. This decomposition may not be accurate for materials in the warm dense regime. We present unambiguous first principles calculations of the dynamic structure factor independent of the Chihara decomposition that can be used to benchmark these assumptions. Results are generated using a finite-temperature real-time time-dependent density functional theory applied for the first time in these conditions. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Security Administration under contract DE-AC04-94AL85000.
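For context, the three-term Chihara form referenced above is commonly written as follows. This is the standard form found in the X-ray Thomson scattering literature, not an equation taken from the abstract, and the notation (ion feature, free-electron feature, bound-free term) is the conventional one:

```latex
% Chihara decomposition of the electronic dynamic structure factor
S(k,\omega) = \left| f_I(k) + q(k) \right|^2 S_{ii}(k,\omega)
            + Z_f \, S_{ee}^{0}(k,\omega)
            + Z_b \int \tilde{S}_{ce}(k,\omega-\omega') \, S_s(k,\omega') \, \mathrm{d}\omega'
% First term: bound electrons following the ion motion (ion feature);
% second term: free-electron feature; third term: bound-free transitions.
```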
AFRRI Reports October - December 1990
1991-01-01
In the reaction between cytosine radicals and adriamycin, it is possible that the yields of the spin-trapped superoxide adduct (DMPO-O2-) and of its decomposition product, DMPO-OH, change in the mixture due to the decomposition of DMPO-O2- over time. Adriamycin radical yield was measured as a function of time in gamma-irradiated samples, with radicals formed by the decomposition of spin-trapped superoxide and, upon ionization of thymine, as thymine cation and anion radical adducts.
Comparison of decomposition rates between autopsied and non-autopsied human remains.
Bates, Lennon N; Wescott, Daniel J
2016-04-01
Penetrating trauma has been cited as a significant factor in the rate of decomposition. Therefore, penetrating trauma may have an effect on estimations of time-since-death in medicolegal investigations and on research examining decomposition rates and processes when autopsied human bodies are used. The goal of this study was to determine if there are differences in the rate of decomposition between autopsied and non-autopsied human remains in the same environment. The purpose is to shed light on how large incisions, such as those from a thoracoabdominal autopsy, affect time-since-death estimations and research on the rate of decomposition that uses both autopsied and non-autopsied human remains. In this study, 59 non-autopsied and 24 autopsied bodies were studied. The number of accumulated degree days required to reach each decomposition stage was then compared between autopsied and non-autopsied remains. Additionally, both types of bodies were examined for seasonal differences in decomposition rates. As temperature affects the rate of decomposition, this study also compared the internal body temperatures of autopsied and non-autopsied remains to see if differences between the two may be leading to differential decomposition. For this portion of the study, eight non-autopsied and five autopsied bodies were investigated. Internal temperature was collected once a day for two weeks. The results showed that differences in the decomposition rate between autopsied and non-autopsied remains were not statistically significant, though the average ADD needed to reach each stage of decomposition was slightly lower for autopsied bodies than non-autopsied bodies. There was also no significant difference between autopsied and non-autopsied bodies in the rate of decomposition by season or in internal temperature. Therefore, this study suggests that it is unnecessary to separate autopsied and non-autopsied remains when studying gross stages of human decomposition in Central Texas and that penetrating trauma may not be a significant factor in the overall rate of decomposition. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Vatansever, Deniz; Bzdok, Danilo; Wang, Hao-Ting; Mollo, Giovanna; Sormaz, Mladen; Murphy, Charlotte; Karapanagiotidis, Theodoros; Smallwood, Jonathan; Jefferies, Elizabeth
2017-09-01
Contemporary theories assume that semantic cognition emerges from a neural architecture in which different component processes are combined to produce aspects of conceptual thought and behaviour. In addition to the state-level, momentary variation in brain connectivity, individuals may also differ in their propensity to generate particular configurations of such components, and these trait-level differences may relate to individual differences in semantic cognition. We tested this view by exploring how variation in intrinsic brain functional connectivity between semantic nodes in fMRI was related to performance on a battery of semantic tasks in 154 healthy participants. Through simultaneous decomposition of brain functional connectivity and semantic task performance, we identified distinct components of semantic cognition at rest. In a subsequent validation step, these data-driven components demonstrated explanatory power for neural responses in an fMRI-based semantic localiser task and variation in self-generated thoughts during the resting-state scan. Our findings showed that good performance on harder semantic tasks was associated with relative segregation at rest between frontal brain regions implicated in controlled semantic retrieval and the default mode network. Poor performance on easier tasks was linked to greater coupling between the same frontal regions and the anterior temporal lobe; a pattern associated with deliberate, verbal thematic thoughts at rest. We also identified components that related to qualities of semantic cognition: relatively good performance on pictorial semantic tasks was associated with greater separation of angular gyrus from frontal control sites and greater integration with posterior cingulate and anterior temporal cortex. In contrast, good speech production was linked to the separation of angular gyrus, posterior cingulate and temporal lobe regions. Together these data show that quantitative and qualitative variation in semantic cognition across individuals emerges from variations in the interaction of nodes within distinct functional brain networks. Copyright © 2017 Elsevier Inc. All rights reserved.
Li, Chuan; Peng, Juan; Liang, Ming
2014-01-01
Oil debris sensors are effective tools to monitor wear particles in lubricants. For in situ applications, surrounding noise and vibration interferences often distort the oil debris signature of the sensor. Hence extracting oil debris signatures from sensor signals is a challenging task for wear particle monitoring. In this paper we employ the maximal overlap discrete wavelet transform (MODWT) with optimal decomposition depth to enhance the wear particle monitoring capability. The sensor signal is decomposed by the MODWT into different depths for detecting the wear particle existence. To extract the authentic particle signature with minimal distortion, the root mean square deviation of kurtosis value of the segmented signal residue is adopted as a criterion to obtain the optimal decomposition depth for the MODWT. The proposed approach is evaluated using both simulated and experimental wear particles. The results show that the present method can improve the oil debris monitoring capability without structural upgrade requirements. PMID:24686730
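A rough sketch of the depth-selection idea follows, using PyWavelets' stationary wavelet transform as an undecimated stand-in for the MODWT. The kurtosis-based score and the argmin selection rule are simplified illustrations of the paper's criterion, and the test signal is invented.

```python
import numpy as np
import pywt  # PyWavelets; its SWT is an undecimated, MODWT-like transform

def kurtosis(x):
    x = x - x.mean()
    return (x ** 4).mean() / (x ** 2).mean() ** 2

def best_depth(signal, wavelet="db4", max_level=6, n_segments=8):
    """Scan decomposition depths and score the smooth residue at each
    depth by the RMS deviation of segment kurtosis from the Gaussian
    value of 3 (an illustrative stand-in for the paper's criterion)."""
    scores = []
    for level in range(1, max_level + 1):
        coeffs = pywt.swt(signal, wavelet, level=level)  # len must divide by 2**level
        approx = coeffs[0][0]                            # deepest approximation
        k = np.array([kurtosis(s) for s in np.array_split(approx, n_segments)])
        scores.append(np.sqrt(((k - 3.0) ** 2).mean()))
    return int(np.argmin(scores)) + 1, scores

# Invented sensor trace: noise plus a transient "wear particle" signature.
rng = np.random.default_rng(4)
sig = 0.1 * rng.standard_normal(1024)
sig[500:520] += np.exp(-np.arange(20) / 5.0)
depth, scores = best_depth(sig)
print("chosen depth:", depth)
```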
An intelligent decomposition approach for efficient design of non-hierarchic systems
NASA Technical Reports Server (NTRS)
Bloebaum, Christina L.
1992-01-01
The design process associated with large engineering systems requires an initial decomposition of the complex systems into subsystem modules which are coupled through transference of output data. The implementation of such a decomposition approach assumes the ability exists to determine what subsystems and interactions exist and what order of execution will be imposed during the analysis process. Unfortunately, this is quite often an extremely complex task which may be beyond human ability to efficiently achieve. Further, in optimizing such a coupled system, it is essential to be able to determine which interactions figure prominently enough to significantly affect the accuracy of the optimal solution. The ability to determine 'weak' versus 'strong' coupling strengths would aid the designer in deciding which couplings could be permanently removed from consideration or which could be temporarily suspended so as to achieve computational savings with minimal loss in solution accuracy. An approach that uses normalized sensitivities to quantify coupling strengths is presented. The approach is applied to a coupled system composed of analysis equations for verification purposes.
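One common way to make such coupling strengths dimensionless is to scale each partial derivative by the ratio of input to output magnitudes; the sketch below applies a finite-difference version of that normalization to a toy two-input subsystem. The paper's exact normalization scheme may differ, so treat this as an assumption.

```python
import numpy as np

def normalized_sensitivity(f, x, i, h=1e-6):
    """Dimensionless coupling strength of output f to input x[i]:
    (df/dx_i) * (x_i / f), estimated by forward differences."""
    xp = x.copy()
    xp[i] += h
    return (f(xp) - f(x)) / h * x[i] / f(x)

# Toy subsystem output: strongly coupled to x[0], weakly to x[1].
f = lambda x: x[0] ** 2 + 0.01 * x[1]
x = np.array([2.0, 3.0])
strengths = [normalized_sensitivity(f, x, i) for i in range(x.size)]
print(strengths)  # large vs small values flag "strong" vs "weak" couplings
```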
Li, Chuan; Peng, Juan; Liang, Ming
2014-03-28
Oil debris sensors are effective tools to monitor wear particles in lubricants. For in situ applications, surrounding noise and vibration interferences often distort the oil debris signature of the sensor. Hence extracting oil debris signatures from sensor signals is a challenging task for wear particle monitoring. In this paper we employ the maximal overlap discrete wavelet transform (MODWT) with optimal decomposition depth to enhance the wear particle monitoring capability. The sensor signal is decomposed by the MODWT into different depths for detecting the wear particle existence. To extract the authentic particle signature with minimal distortion, the root mean square deviation of kurtosis value of the segmented signal residue is adopted as a criterion to obtain the optimal decomposition depth for the MODWT. The proposed approach is evaluated using both simulated and experimental wear particles. The results show that the present method can improve the oil debris monitoring capability without structural upgrade requirements.
NASA Astrophysics Data System (ADS)
Hu, Yijia; Zhong, Zhong; Zhu, Yimin; Ha, Yao
2018-04-01
In this paper, a statistical forecast model based on a time-scale decomposition method is established for seasonal prediction of the rainfall during the flood period (FPR) over the middle and lower reaches of the Yangtze River Valley (MLYRV). The method decomposes the rainfall over the MLYRV into three time-scale components: an interannual component with periods shorter than 8 years, an interdecadal component with periods of 8-30 years, and an interdecadal component with periods longer than 30 years. Predictors are then selected for the three time-scale components of FPR through correlation analysis. Finally, a statistical forecast model is established using the multiple linear regression technique to predict the three time-scale components of the FPR separately. The results show that this forecast model captures the interannual and interdecadal variation of FPR. A hindcast of FPR over the 14 years from 2001 to 2014 shows that the FPR is predicted successfully in 11 of the 14 years. This forecast model performs better than a model using the traditional scheme without time-scale decomposition. Therefore, the statistical forecast model using the time-scale decomposition technique has good skill and application value in the operational prediction of FPR over the MLYRV.
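A schematic of the scheme follows, with moving-average filters standing in for whatever band-separation the authors used and a single synthetic predictor per component standing in for the selected climate indices; all data here are invented.

```python
import numpy as np

def smooth(x, w):
    """Centered moving average with odd window w; edges by reflection."""
    xp = np.pad(x, w // 2, mode="reflect")
    return np.convolve(xp, np.ones(w) / w, mode="valid")

rng = np.random.default_rng(3)
years = np.arange(1951, 2015)
rain = (100 + 10 * np.sin(2 * np.pi * years / 4)      # interannual signal
            + 5 * np.sin(2 * np.pi * years / 15)      # interdecadal signal
            + 3 * np.sin(2 * np.pi * years / 40)      # slower signal
            + rng.normal(0, 2, years.size))

slow = smooth(rain, 31)            # periods > 30 yr
mid = smooth(rain, 9) - slow       # periods of roughly 8-30 yr
fast = rain - smooth(rain, 9)      # periods < 8 yr

def fit_predict(component, predictor):
    """Multiple linear regression of one component on its predictor(s)."""
    A = np.column_stack([np.ones_like(predictor), predictor])
    return A @ np.linalg.lstsq(A, component, rcond=None)[0]

forecast = (fit_predict(fast, np.sin(2 * np.pi * years / 4))
            + fit_predict(mid, np.sin(2 * np.pi * years / 15))
            + fit_predict(slow, np.sin(2 * np.pi * years / 40)))
```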
Outline for a theory of intelligence
NASA Technical Reports Server (NTRS)
Albus, James S.
1991-01-01
Intelligence is defined as that which produces successful behavior. Intelligence is assumed to result from natural selection. A model is proposed that integrates knowledge from research in both natural and artificial systems. The model consists of a hierarchical system architecture wherein: (1) control bandwidth decreases about an order of magnitude at each higher level, (2) perceptual resolution of spatial and temporal patterns contracts about an order-of-magnitude at each higher level, (3) goals expand in scope and planning horizons expand in space and time about an order-of-magnitude at each higher level, and (4) models of the world and memories of events expand their range in space and time by about an order-of-magnitude at each higher level. At each level, functional modules perform behavior generation (task decomposition planning and execution), world modeling, sensory processing, and value judgment. Sensory feedback control loops are closed at every level.
Size-controlled magnetic nanoparticles with lecithin for biomedical applications
NASA Astrophysics Data System (ADS)
Park, S. I.; Kim, J. H.; Kim, C. G.; Kim, C. O.
2007-05-01
Lecithin-adsorbed magnetic nanoparticles were prepared by a three-step process in which thermal decomposition was combined with ultrasonication. Three experimental parameters were varied: the molar ratio between Fe(CO)5 and oleic acid, the holding time at the decomposition temperature, and the lecithin concentration. As the molar ratio between Fe(CO)5 and oleic acid and the holding time at the decomposition temperature increased, the particle size increased. However, changing the lecithin concentration did not produce a remarkable variation in particle size.
Pini, Giovanni; Brutschy, Arne; Scheidler, Alexander; Dorigo, Marco; Birattari, Mauro
2014-01-01
We study task partitioning in the context of swarm robotics. Task partitioning is the decomposition of a task into subtasks that can be tackled by different workers. We focus on the case in which a task is partitioned into a sequence of subtasks that must be executed in a certain order. This implies that the subtasks must interface with each other, and that the output of a subtask is used as input for the subtask that follows. A distinction can be made between task partitioning with direct transfer and with indirect transfer. We focus our study on the first case: The output of a subtask is directly transferred from an individual working on that subtask to an individual working on the subtask that follows. As a test bed for our study, we use a swarm of robots performing foraging. The robots have to harvest objects from a source, situated in an unknown location, and transport them to a home location. When a robot finds the source, it memorizes its position and uses dead reckoning to return there. Dead reckoning is appealing in robotics, since it is a cheap localization method and it does not require any additional external infrastructure. However, dead reckoning leads to errors that grow in time if not corrected periodically. We compare a foraging strategy that does not make use of task partitioning with one that does. We show that cooperation through task partitioning can be used to limit the effect of dead reckoning errors. This results in improved capability of locating the object source and in increased performance of the swarm. We use the implemented system as a test bed to study benefits and costs of task partitioning with direct transfer. We implement the system with real robots, demonstrating the feasibility of our approach in a foraging scenario.
Dynamic correlations at different time-scales with empirical mode decomposition
NASA Astrophysics Data System (ADS)
Nava, Noemi; Di Matteo, T.; Aste, Tomaso
2018-07-01
We introduce a simple approach which combines Empirical Mode Decomposition (EMD) and Pearson's cross-correlations over rolling windows to quantify dynamic dependency at different time scales. The EMD is a tool to separate time series into implicit components which oscillate at different time-scales. We apply this decomposition to intraday time series of the following three financial indices: the S&P 500 (USA), the IPC (Mexico) and the VIX (volatility index USA), obtaining time-varying multidimensional cross-correlations at different time-scales. The correlations computed over a rolling window are compared across the three indices, across the components at different time-scales and across different time lags. We uncover a rich heterogeneity of interactions, which depends on the time-scale and has important lead-lag relations that could have practical use for portfolio management, risk estimation and investment decisions.
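A compact sketch of the approach, assuming the third-party PyEMD package (installed as EMD-signal) for the decomposition and plain numpy for the rolling Pearson correlation; the synthetic series stand in for the intraday index data:

```python
import numpy as np
from PyEMD import EMD  # assumed: the PyEMD package (pip install EMD-signal)

rng = np.random.default_rng(7)
n = 2000
common = np.cumsum(rng.standard_normal(n))           # shared driver
x = common + np.cumsum(rng.standard_normal(n))       # "index 1"
y = common + np.cumsum(rng.standard_normal(n))       # "index 2"

imfs_x = EMD().emd(x)   # intrinsic mode functions, fastest first
imfs_y = EMD().emd(y)

def rolling_corr(a, b, w=250):
    """Pearson correlation over a trailing window of length w."""
    out = np.full(a.size, np.nan)
    for i in range(w, a.size):
        out[i] = np.corrcoef(a[i - w:i], b[i - w:i])[0, 1]
    return out

# Time-varying dependence between the two series at each shared time-scale.
n_scales = min(len(imfs_x), len(imfs_y))
corr_by_scale = [rolling_corr(imfs_x[k], imfs_y[k]) for k in range(n_scales)]
```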
Decomposition into Multiple Morphemes during Lexical Access: A Masked Priming Study of Russian Nouns
ERIC Educational Resources Information Center
Kazanina, Nina; Dukova-Zheleva, Galina; Geber, Dana; Kharlamov, Viktor; Tonciulescu, Keren
2008-01-01
The study reports the results of a masked priming experiment with morphologically complex Russian nouns. Participants performed a lexical decision task to a visual target that differed from its prime in one consonant. Three conditions were included: (1) "transparent," in which the prime was morphologically related to the target and contained the…
ERIC Educational Resources Information Center
Cheng, Chenxi; Wang, Min; Perfetti, Charles A.
2011-01-01
This study investigated compound processing and cross-language activation in a group of Chinese-English bilingual children, and they were divided into four groups based on the language proficiency levels in their two languages. A lexical decision task was designed using compound words in both languages. The compound words in one language contained…
The Training of Morphological Decomposition in Word Processing and Its Effects on Literacy Skills.
Bar-Kochva, Irit; Hasselhorn, Marcus
2017-01-01
This study set out to examine the effects of a morpheme-based training on reading and spelling in fifth and sixth graders ( N = 47), who present poor literacy skills and speak German as a second language. A computerized training, consisting of a visual lexical decision task (comprising 2,880 items, presented in 12 sessions), was designed to encourage fast morphological analysis in word processing. The children were divided into two groups: one underwent a morpheme-based training, in which word-stems of inflections and derivations were presented for a limited duration, while their pre- and suffixes remained on screen until response. The other group received a control training consisting of the same task, except that the duration of presentation of a non-morphological unit was restricted. In a Word Disruption Task, participants read words under three conditions: morphological separation (with symbols separating between the words' morphemes), non-morphological separation (with symbols separating between non-morphological units of words), and no-separation (with symbols presented at the beginning and end of each word). The group receiving the morpheme-based program improved more than the control group in terms of word reading fluency in the morphological condition. The former group also presented similar word reading fluency after training in the morphological condition and in the no-separation condition, thereby suggesting that the morpheme-based training contributed to the integration of morphological decomposition into the process of word recognition. At the same time, both groups similarly improved in other measures of word reading fluency. With regard to spelling, the morpheme-based training group showed a larger improvement than the control group in spelling of trained items, and a unique improvement in spelling of untrained items (untrained word-stems integrated into trained pre- and suffixes). The results further suggest some contribution of the morpheme-based training to performance in a standardized spelling task. The morpheme-based training did not, however, show any unique effect on comprehension. These results suggest that the morpheme-based training is effective in enhancing some basic literacy skills in the population examined, i.e., morphological analysis in word processing and the access to orthographic representations in spelling, with no specific effects on reading fluency and comprehension.
The Training of Morphological Decomposition in Word Processing and Its Effects on Literacy Skills
Bar-Kochva, Irit; Hasselhorn, Marcus
2017-01-01
This study set out to examine the effects of a morpheme-based training on reading and spelling in fifth and sixth graders (N = 47), who present poor literacy skills and speak German as a second language. A computerized training, consisting of a visual lexical decision task (comprising 2,880 items, presented in 12 sessions), was designed to encourage fast morphological analysis in word processing. The children were divided into two groups: one underwent a morpheme-based training, in which word-stems of inflections and derivations were presented for a limited duration, while their pre- and suffixes remained on screen until response. The other group received a control training consisting of the same task, except that the duration of presentation of a non-morphological unit was restricted. In a Word Disruption Task, participants read words under three conditions: morphological separation (with symbols separating between the words’ morphemes), non-morphological separation (with symbols separating between non-morphological units of words), and no-separation (with symbols presented at the beginning and end of each word). The group receiving the morpheme-based program improved more than the control group in terms of word reading fluency in the morphological condition. The former group also presented similar word reading fluency after training in the morphological condition and in the no-separation condition, thereby suggesting that the morpheme-based training contributed to the integration of morphological decomposition into the process of word recognition. At the same time, both groups similarly improved in other measures of word reading fluency. With regard to spelling, the morpheme-based training group showed a larger improvement than the control group in spelling of trained items, and a unique improvement in spelling of untrained items (untrained word-stems integrated into trained pre- and suffixes). The results further suggest some contribution of the morpheme-based training to performance in a standardized spelling task. The morpheme-based training did not, however, show any unique effect on comprehension. These results suggest that the morpheme-based training is effective in enhancing some basic literacy skills in the population examined, i.e., morphological analysis in word processing and the access to orthographic representations in spelling, with no specific effects on reading fluency and comprehension. PMID:29163245
Spectroscopic study of shock-induced decomposition in ammonium perchlorate single crystals.
Gruzdkov, Y A; Winey, J M; Gupta, Y M
2008-05-01
Time-resolved Raman scattering measurements were performed on ammonium perchlorate (AP) single crystals under stepwise shock loading. For particular temperature and pressure conditions, the intensity of the Raman spectra in shocked AP decayed exponentially with time. This decay is attributed to shock-induced chemical decomposition in AP. A series of shock experiments, reaching peak stresses from 10-18 GPa, demonstrated that higher stresses inhibit decomposition while higher temperatures promote it. No orientation dependence was found when AP crystals were shocked normal to the (210) and (001) crystallographic planes. VISAR (velocity interferometer system for any reflector) particle velocity measurements and time-resolved optical extinction measurements carried out to verify these observations are consistent with the Raman data. The combined kinetic and spectroscopic results are consistent with a proton-transfer reaction as the first decomposition step in shocked AP.
Photodegradation at day, microbial decomposition at night - decomposition in arid lands
NASA Astrophysics Data System (ADS)
Gliksman, Daniel; Gruenzweig, Jose
2014-05-01
Our current knowledge of decomposition in dry seasons and its role in carbon turnover is fragmentary. So far, decomposition during dry seasons was mostly attributed to abiotic mechanisms, mainly photochemical and thermal degradation, while the contribution of microorganisms to the decay process was excluded. We asked whether microbial decomposition occurs during the dry season and explored its interaction with photochemical degradation under Mediterranean climate. We conducted a litter bag experiment with local plant litter and manipulated litter exposure to radiation using radiation filters. We found notable rates of CO2 fluxes from litter which were related to microbial activity mainly during night-time throughout the dry season. This activity was correlated with litter moisture content and high levels of air humidity and dew. Day-time CO2 fluxes were related to solar radiation, and radiation manipulation suggested photodegradation as the underlying mechanism. In addition, a decline in microbial activity was followed by a reduction in photodegradation-related CO2 fluxes. The levels of microbial decomposition and photodegradation in the dry season were likely the factors influencing carbon mineralization during the subsequent wet season. This study showed that microbial decomposition can be a dominant contributor to CO2 emissions and mass loss in the dry season and it suggests a regulating effect of microbial activity on photodegradation. Microbial decomposition is an important contributor to the dry season decomposition and impacts the annual litter turn-over rates in dry regions. Global warming may lead to reduced moisture availability and dew deposition, which may greatly influence not only microbial decomposition of plant litter, but also photodegradation.
NASA Technical Reports Server (NTRS)
Nashman, Marilyn; Chaconas, Karen J.
1988-01-01
The sensory processing system for the NASA/NBS Standard Reference Model (NASREM) for telerobotic control is described. This control system architecture was adopted by NASA for the Flight Telerobotic Servicer. The control system is hierarchically designed and consists of three parallel systems: task decomposition, world modeling, and sensory processing. The sensory processing system is examined, with particular attention to the image processing hardware and software used to extract features at low levels of sensory processing for tasks representative of those envisioned for the Space Station, such as assembly and maintenance.
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.
1989-01-01
Several techniques for static and dynamic load balancing in vision systems are presented. These techniques are novel in the sense that they capture the computational requirements of a task by examining the data when it is produced. Furthermore, they can be applied to many vision systems because many algorithms in different systems are either the same or have similar computational characteristics. The techniques are evaluated by applying them to a parallel implementation of the algorithms in a motion estimation system on a hypercube multiprocessor system. The motion estimation system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from different time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters. It is shown that the performance gains when these data decomposition and load balancing techniques are used are significant and that the overhead of using these techniques is minimal.
Pi2 detection using Empirical Mode Decomposition (EMD)
NASA Astrophysics Data System (ADS)
Mieth, Johannes Z. D.; Frühauff, Dennis; Glassmeier, Karl-Heinz
2017-04-01
Empirical Mode Decomposition has been used as an alternative to wavelet transformation for identifying onset times of Pi2 pulsations in data sets of the Scandinavian Magnetometer Array (SMA). Pi2 pulsations are magnetohydrodynamic waves occurring during magnetospheric substorms. Pi2 are almost always observed at substorm onset at mid to low latitudes on Earth's nightside. They are fed by magnetic energy release caused by dipolarization processes, and their periods lie between 40 and 150 seconds. Usually, Pi2 are detected using wavelet transformation. Here, Empirical Mode Decomposition (EMD) is presented as an alternative to the traditional procedure. EMD is a relatively young signal decomposition method designed for nonlinear and non-stationary time series. It provides an adaptive, data-driven, and complete decomposition of time series into slow and fast oscillations. An optimized version using Monte-Carlo-type noise assistance is used here. By displaying the results in a time-frequency space, a characteristic frequency modulation is observed that can be correlated with the onset of Pi2 pulsations. A basic algorithm to find the onset is presented. Finally, the results are compared to classical wavelet-based analysis. The use of different SMA stations furthermore allows the spatial analysis of Pi2 onset times. EMD mostly finds application in the fields of engineering and medicine. This work demonstrates the applicability of this method to geomagnetic time series.
Automated Decomposition of Model-based Learning Problems
NASA Technical Reports Server (NTRS)
Williams, Brian C.; Millar, Bill
1996-01-01
A new generation of sensor rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.
Dossa, Gbadamassi G. O.; Paudel, Ekananda; Cao, Kunfang; Schaefer, Douglas; Harrison, Rhett D.
2016-01-01
Organic matter decomposition represents a vital ecosystem process by which nutrients are made available for plant uptake and is a major flux in the global carbon cycle. Previous studies have investigated decomposition of different plant parts, but few considered bark decomposition or its role in decomposition of wood. However, bark can comprise a large fraction of tree biomass. We used a common litter-bed approach to investigate factors affecting bark decomposition and its role in wood decomposition for five tree species in a secondary seasonal tropical rain forest in SW China. For bark, we implemented a litter bag experiment over 12 mo, using different mesh sizes to investigate effects of litter meso- and macro-fauna. For wood, we compared the decomposition of branches with and without bark over 24 mo. Bark in coarse mesh bags decomposed 1.11–1.76 times faster than bark in fine mesh bags. For wood decomposition, responses to bark removal were species dependent. Three species with slow wood decomposition rates showed significant negative effects of bark-removal, but there was no significant effect in the other two species. Future research should also separately examine bark and wood decomposition, and consider bark-removal experiments to better understand roles of bark in wood decomposition. PMID:27698461
Dossa, Gbadamassi G O; Paudel, Ekananda; Cao, Kunfang; Schaefer, Douglas; Harrison, Rhett D
2016-10-04
Organic matter decomposition represents a vital ecosystem process by which nutrients are made available for plant uptake and is a major flux in the global carbon cycle. Previous studies have investigated decomposition of different plant parts, but few considered bark decomposition or its role in decomposition of wood. However, bark can comprise a large fraction of tree biomass. We used a common litter-bed approach to investigate factors affecting bark decomposition and its role in wood decomposition for five tree species in a secondary seasonal tropical rain forest in SW China. For bark, we implemented a litter bag experiment over 12 mo, using different mesh sizes to investigate effects of litter meso- and macro-fauna. For wood, we compared the decomposition of branches with and without bark over 24 mo. Bark in coarse mesh bags decomposed 1.11-1.76 times faster than bark in fine mesh bags. For wood decomposition, responses to bark removal were species dependent. Three species with slow wood decomposition rates showed significant negative effects of bark-removal, but there was no significant effect in the other two species. Future research should also separately examine bark and wood decomposition, and consider bark-removal experiments to better understand roles of bark in wood decomposition.
Does oxygen exposure time control the extent of organic matter decomposition in peatlands?
NASA Astrophysics Data System (ADS)
Philben, Michael; Kaiser, Karl; Benner, Ronald
2014-05-01
The extent of peat decomposition was investigated in four cores collected along a latitudinal gradient from 56°N to 66°N in the West Siberian Lowland. The acid:aldehyde ratios of lignin phenols were significantly higher in the two northern cores compared with the two southern cores, indicating peats at the northern sites were more highly decomposed. Yields of hydroxyproline, an amino acid found in plant structural glycoproteins, were also significantly higher in northern cores compared with southern cores. Hydroxyproline-rich glycoproteins are not synthesized by microbes and are generally less reactive than bulk plant carbon, so elevated yields indicated that northern cores were more extensively decomposed than the southern cores. The southern cores experienced warmer temperatures, but were less decomposed, indicating that temperature was not the primary control of peat decomposition. The plant community oscillated between Sphagnum and vascular plant dominance in the southern cores, but vegetation type did not appear to affect the extent of decomposition. Oxygen exposure time appeared to be the strongest control of the extent of peat decomposition. The northern cores had lower accumulation rates and drier conditions, so these peats were exposed to oxic conditions for a longer time before burial in the catotelm, where anoxic conditions prevail and rates of decomposition are generally lower by an order of magnitude.
NASA Technical Reports Server (NTRS)
Mejzak, R. S.
1980-01-01
The distributed processing concept is defined in terms of control primitives, variables, and structures and their use in performing a decomposed discrete Fourier transform (DFT) application function. The design assumes interprocessor communications to be anonymous. In this scheme, all processors can access an entire common database by employing control primitives. Access to selected areas within the common database is random, enforced by a hardware lock, and determined by task and subtask pointers. This enables the number of processors to be varied in the configuration without any modifications to the control structure. Decompositional elements of the DFT application function in terms of tasks and subtasks are also described. The experimental hardware configuration consists of IMSAI 8080 chassis, which are independent 8-bit microcomputer units. These chassis are linked together to form a multiple processing system by means of a shared memory facility. This facility consists of hardware which provides a bus structure to enable up to six microcomputers to be interconnected. It provides polling and arbitration logic so that only one processor has access to shared memory at any one time.
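A modern analogue of this scheme is a pool of workers claiming DFT subtasks from a shared task structure. The sketch below uses Python's multiprocessing in place of the shared-memory microcomputer bus, and the particular task split (blocks of output bins) is an illustrative assumption rather than the original decomposition.

```python
import numpy as np
from multiprocessing import Pool

def dft_subtask(args):
    """One subtask: compute a block of DFT output bins for signal x."""
    x, ks = args
    n = np.arange(x.size)
    return [(k, np.sum(x * np.exp(-2j * np.pi * k * n / x.size))) for k in ks]

if __name__ == "__main__":
    x = np.sin(2 * np.pi * 5 * np.arange(64) / 64)
    # Decompose the DFT into six subtasks, as if claimed by six processors.
    blocks = np.array_split(np.arange(64), 6)
    with Pool(processes=3) as pool:
        results = pool.map(dft_subtask, [(x, ks) for ks in blocks])
    X = np.zeros(64, dtype=complex)
    for block in results:
        for k, value in block:
            X[k] = value
    assert np.allclose(X, np.fft.fft(x))  # matches the library DFT
```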
FACETS: multi-faceted functional decomposition of protein interaction networks.
Seah, Boon-Siew; Bhowmick, Sourav S; Dewey, C Forbes
2012-10-15
The availability of large-scale curated protein interaction datasets has given rise to the opportunity to investigate higher level organization and modularity within the protein-protein interaction (PPI) network using graph theoretic analysis. Despite the recent progress, systems level analysis of high-throughput PPIs remains a daunting task because of the amount of data they present. In this article, we propose a novel PPI network decomposition algorithm called FACETS in order to make sense of the deluge of interaction data using Gene Ontology (GO) annotations. FACETS finds not just a single functional decomposition of the PPI network, but a multi-faceted atlas of functional decompositions that portray alternative perspectives of the functional landscape of the underlying PPI network. Each facet in the atlas represents a distinct interpretation of how the network can be functionally decomposed and organized. Our algorithm maximizes the interpretative value of the atlas by optimizing inter-facet orthogonality and intra-facet cluster modularity. We tested our algorithm on global networks from IntAct and compared it with gold standard datasets from MIPS and KEGG, demonstrating the performance of FACETS. We also performed a case study that illustrates the utility of our approach. Supplementary data are available at Bioinformatics online. Our software is available freely for non-commercial purposes from: http://www.cais.ntu.edu.sg/~assourav/Facets/
Cockle, Diane Lyn; Bell, Lynne S
2017-03-01
Little is known about the nature and trajectory of human decomposition in Canada. This study involved the examination of 96 retrospective police death investigation cases selected using the Canadian ViCLAS (Violent Crime Linkage Analysis System) and sudden death police databases. A classification system was designed and applied based on the latest visible stages of autolysis (stages 1-2), putrefaction (3-5) and skeletonisation (6-8) observed. The analysis of the progression of decomposition using time (Post Mortem Interval (PMI) in days) and temperature (accumulated degree days (ADD) score) found considerable variability during the putrefaction and skeletonisation phases, with poor predictability noted after stage 5 (post bloat). The visible progression of decomposition outdoors was characterized by a brown to black discolouration at stage 5 and remnant desiccated black tissue at stage 7. No bodies were totally skeletonised in under one year. Mummification of tissue was rare, with earlier onset in winter as opposed to summer, considered likely due to lower seasonal humidity. It was found that neither ADD nor PMI was a significant dependent variable for the decomposition score, with correlations of 53% for temperature and 41% for time. It took almost twice as much time and 1.5 times more temperature (ADD) for the set of cases exposed to cold and freezing temperatures (4°C or less) to reach putrefaction compared to the warm group. The amount of precipitation and/or clothing had a negligible impact on the advancement of decomposition, whereas the lack of sun exposure (full shade) had a small positive effect. This study found that the poor predictability of onset and duration of late stage decomposition, combined with our limited understanding of the full range of variables which influence the speed of decomposition, makes PMI estimations for exposed terrestrial cases in Canada unreliable, and also calls into question PMI estimations elsewhere. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.
Comparison of Techniques for Sampling Adult Necrophilous Insects From Pig Carcasses.
Cruise, Angela; Hatano, Eduardo; Watson, David W; Schal, Coby
2018-02-06
Studies of the pre-colonization interval and mechanisms driving necrophilous insect ecological succession depend on effective sampling of adult insects and knowledge of their diel and successional activity patterns. The number of insects trapped, their diversity, and their diel periodicity were compared across four sampling methods on neonate pigs. Sampling method, time of day, and decomposition age of the pigs significantly affected the number of insects sampled from pigs. We also found significant interactions of sampling method with decomposition day, and of sampling time with decomposition day. No single method was superior to the other methods during all three decomposition days. Sampling times after noon yielded the largest samples during the first 2 d of decomposition. On day 3 of decomposition, however, all sampling times were equally effective. Therefore, to maximize insect collections from neonate pigs, the sampling method must vary by decomposition day. The suction trap collected the most species-rich samples, but sticky trap samples were the most diverse when both species richness and evenness were factored into a Shannon diversity index. Repeated sampling during the noon to 18:00 hours period was most effective for obtaining the maximum diversity of trapped insects. The integration of multiple sampling techniques would most effectively sample the necrophilous insect community. However, because all four tested methods were deficient at sampling beetle species, future work should focus on optimizing the most promising methods, alone or in combination, and incorporate hand-collections of beetles. © The Author(s) 2018. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Emissions of volatile organic compounds during the decomposition of plant litter
NASA Astrophysics Data System (ADS)
Gray, Christopher M.; Monson, Russell K.; Fierer, Noah
2010-09-01
Volatile organic compounds (VOCs) are emitted during plant litter decomposition, and such VOCs can have wide-ranging impacts on atmospheric chemistry, terrestrial biogeochemistry, and soil ecology. However, we currently have a limited understanding of the relative importance of biotic versus abiotic sources of these VOCs and whether distinct types of litter emit different types and quantities of VOCs during decomposition. We analyzed VOCs emitted by microbes or by abiotic mechanisms during the decomposition of litter from 12 plant species in a laboratory experiment using proton transfer reaction mass spectrometry (PTR-MS). Net emissions from litter with active microbial populations (non-sterile litters) were between 0 and 11 times higher than emissions from sterile controls over a 20-d incubation period, suggesting that abiotic sources of VOCs are generally less important than biotic sources. In all cases, the sterile and non-sterile litter treatments emitted different types of VOCs, with methanol being the dominant VOC emitted from litters during microbial decomposition, accounting for 78 to 99% of the net emissions. We also found that the types of VOCs released during biotic decomposition differed in a predictable manner among litter types with VOC profiles also changing as decomposition progressed over time. These results show the importance of incorporating both the biotic decomposition of litter and the species-dependent differences in terrestrial vegetation into global VOC emission models.
Reschechtko, Sasha; Zatsiorsky, Vladimir M.; Latash, Mark L.
2016-01-01
Manipulating objects with the hands requires the accurate production of resultant forces including shear forces; effective control of these shear forces also requires the production of internal forces normal to the surface of the object(s) being manipulated. In the present study, we investigated multi-finger synergies stabilizing shear and normal components of force, as well as drifts in both components of force, during isometric pressing tasks requiring a specific magnitude of shear force production. We hypothesized that shear and normal forces would evolve similarly in time, and also show similar stability properties as assessed by the decomposition of inter-trial variance within the uncontrolled manifold hypothesis. Healthy subjects were required to accurately produce total shear and total normal forces with four fingers of the hand during a steady-state force task (with and without visual feedback) and a self-paced force pulse task. The two force components showed similar time profiles during both shear force pulse production and unintentional drift induced by turning the visual feedback off. Only the explicitly instructed components of force, however, were stabilized with multi-finger synergies. No force-stabilizing synergies and no anticipatory synergy adjustments were seen for the normal force in shear force production trials. These unexpected qualitative differences in the control of the two force components – which are produced by some of the same muscles and show high degree of temporal coupling – are interpreted within the theory of control with referent coordinates for salient variables. These observations suggest the existence of two classes of neural variables: one that translates into shifts of referent coordinates and defines changes in magnitude of salient variables, and the other controlling gains in back-coupling loops that define stability of the salient variables. Only the former are shared between the explicit and implicit task components. PMID:27601252
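The inter-trial variance decomposition mentioned above projects trial-to-trial deviations onto the null space of the task Jacobian (the uncontrolled manifold, UCM) and onto its orthogonal complement. A minimal numpy sketch for a four-finger total-force task follows; the trial data are invented.

```python
import numpy as np

# Four-finger total-force task: total force F is the sum of finger forces,
# so the Jacobian is J = [1, 1, 1, 1] and its null space is the UCM.
rng = np.random.default_rng(2)
trials = rng.normal(loc=[2.0, 3.0, 2.5, 2.5], scale=0.3, size=(40, 4))
dev = trials - trials.mean(axis=0)         # inter-trial deviations

J = np.ones((1, 4))
_, _, Vt = np.linalg.svd(J)                # rows of Vt: orthonormal basis
ort_basis = Vt[:1].T                       # spans range(J^T), 1 dimension
ucm_basis = Vt[1:].T                       # spans null(J), 3 dimensions

# Variance per degree of freedom within and orthogonal to the UCM.
v_ucm = (dev @ ucm_basis).var(axis=0, ddof=1).sum() / 3
v_ort = (dev @ ort_basis).var(axis=0, ddof=1).sum() / 1
print(v_ucm, v_ort)   # v_ucm > v_ort indicates a force-stabilizing synergy
```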
[Effects of tree species fine root decomposition on soil active organic carbon].
Liu, Yan; Wang, Si-Long; Wang, Xiao-Wei; Yu, Xiao-Jun; Yang, Yue-Jun
2007-03-01
Using an incubation test, this paper studied the effects of fine root decomposition of Alnus cremastogyne, Cunninghamia lanceolata, and Michelia macclurei on the content of soil active organic carbon at 9 degrees C, 14 degrees C, 24 degrees C, and 28 degrees C. The results showed that the decomposition rate of fine roots differed significantly among the test tree species, decreasing in the order M. macclurei > A. cremastogyne > C. lanceolata. The decomposition rate increased with increasing temperature but declined with prolonged incubation time. Fine root source, incubation temperature, and incubation time all affected the contents of soil microbial biomass carbon and water-soluble organic carbon. The decomposition of fine roots increased soil microbial biomass carbon and water-soluble organic carbon significantly, and the effect decreased in the order M. macclurei > A. cremastogyne > C. lanceolata. Higher contents of soil microbial biomass carbon and water-soluble organic carbon were observed at medium temperatures and in the middle incubation stage. Fine root decomposition had less effect on the content of soil readily oxidized organic carbon.
Yamashita, Satoshi; Masuya, Hayato; Abe, Shin; Masaki, Takashi; Okabe, Kimiko
2015-01-01
We examined the relationship between the community structure of wood-decaying fungi, detected by high-throughput sequencing, and the decomposition rate using 13 years of data from a forest dynamics plot. For molecular analysis and wood density measurements, drill dust samples were collected from logs and stumps of Fagus and Quercus in the plot. Regression using a negative exponential model between wood density and time since death revealed that the decomposition rate of Fagus was greater than that of Quercus. The residual between the expected value obtained from the regression curve and the observed wood density was used as a decomposition rate index. Principal component analysis showed that the fungal community compositions of both Fagus and Quercus changed with time since death. Principal component analysis axis scores were used as an index of fungal community composition. A structural equation model for each wood genus was used to assess the effect of fungal community structure traits on the decomposition rate and how the fungal community structure was determined by the traits of coarse woody debris. Results of the structural equation model suggested that the decomposition rate of Fagus was affected by two fungal community composition components: one that was affected by time since death and another that was not affected by the traits of coarse woody debris. In contrast, the decomposition rate of Quercus was not affected by coarse woody debris traits or fungal community structure. These findings suggest that, in the case of Fagus coarse woody debris, the fungal community structure is related to the decomposition process of its host substrate. Because fungal community structure is affected partly by the decay stage and wood density of its substrate, these factors influence each other. Further research on interactive effects is needed to improve our understanding of the relationship between fungal community structure and the woody debris decomposition process. PMID:26110605
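The negative exponential regression and the residual-based decomposition rate index described above can be reproduced in a few lines. The sketch below uses made-up density values; the study's Fagus and Quercus measurements are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def neg_exp(t, rho0, k):
    """Negative exponential decay of wood density with time since death."""
    return rho0 * np.exp(-k * t)

# Made-up (time since death in years, wood density in g/cm^3) pairs.
t = np.array([1.0, 3.0, 5.0, 8.0, 13.0])
rho = np.array([0.55, 0.48, 0.40, 0.31, 0.20])

(rho0, k), _ = curve_fit(neg_exp, t, rho, p0=(0.6, 0.1))

# Decomposition rate index: observed minus expected density; negative
# values flag logs decaying faster than the genus-level curve predicts.
residual_index = rho - neg_exp(t, rho0, k)
```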
Examining impairment of adaptive compensation for stabilizing motor repetitions in stroke survivors.
Kim, Yushin; Koh, Kyung; Yoon, BumChul; Kim, Woo-Sub; Shin, Joon-Ho; Park, Hyung-Soon; Shim, Jae Kun
2017-12-01
The hand, one of the most versatile but mechanically redundant parts of the human body, suffers more severely and for longer than other body parts after stroke. One rehabilitation paradigm, task-oriented rehabilitation, encourages motor repeatability, the ability to produce similar motor performance over repetitions through compensatory strategies, while taking advantage of the motor system's redundancy. Previous studies showed that stroke survivors inconsistently performed a given motor task with limited motor solutions. We hypothesized that stroke survivors would exhibit deficits in motor repeatability and adaptive compensation compared to healthy controls during repetitive force-pulse (RFP) production tasks using multiple fingers. Seventeen hemiparetic stroke survivors and seven healthy controls were asked to repeatedly press force sensors as fast as possible using the four fingers of each hand. The hierarchical variability decomposition model was employed to compute motor repeatability and adaptive compensation across finger-force impulses. Stroke survivors showed decreased repeatability and adaptive compensation of force impulses between individual fingers as compared to the controls (p < 0.05). The stroke survivors also showed decreased pulse frequency and greater peak-to-peak time variance than the controls (p < 0.05). Force-related variables, such as mean peak force and peak force interval variability, demonstrated no significant difference between groups. Our findings indicate that stroke-induced brain injury negatively affects survivors' ability to exploit their redundant, or abundant, motor system in an RFP task.
Restrepo-Agudelo, Sebastian; Roldan-Vasco, Sebastian; Ramirez-Arbelaez, Lina; Cadavid-Arboleda, Santiago; Perez-Giraldo, Estefania; Orozco-Duque, Andres
2017-08-01
Visual inspection is a widely used method for evaluating the surface electromyographic (sEMG) signal during deglutition, a process highly dependent on the examiner's expertise. A less subjective, automated technique is desirable to improve onset detection in swallowing-related muscles, which have a low signal-to-noise ratio. In this work, we acquired sEMG from the infrahyoid muscles, with high baseline noise, of ten healthy adults during water swallowing tasks. Two methods were applied to find the combination of cutoff frequencies that achieves the most accurate onset detection: a discrete wavelet decomposition based method, and fixed-step variation of the low and high cutoff frequencies of a digital bandpass filter. The Teager-Kaiser energy operator, root mean square, and a simple threshold method were applied for both techniques. Results show a narrowing of the effective bandwidth relative to the parameters recommended in the literature for sEMG acquisition. Both level-3 decomposition with mother wavelet db4 and a bandpass filter with cutoff frequencies between 130 and 180 Hz were optimal for onset detection in infrahyoid muscles. The proposed methodologies recognized the onset time with predictive power above 0.95, similar to previous findings obtained in larger and more superficial limb muscles.
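A minimal sketch of the best-performing pipeline reported above (130-180 Hz bandpass, Teager-Kaiser energy operator, simple threshold). The baseline window length and threshold multiplier are assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_onset(semg, fs, baseline_ms=500, h=3.0):
    """Onset-detection sketch: 130-180 Hz bandpass + Teager-Kaiser energy
    operator + simple amplitude threshold. Assumes the first `baseline_ms`
    of the record contain no swallow; h is an assumed noise multiplier."""
    b, a = butter(4, [130.0, 180.0], btype="band", fs=fs)
    x = filtfilt(b, a, semg)
    tkeo = x[1:-1] ** 2 - x[:-2] * x[2:]        # Teager-Kaiser energy
    n0 = int(fs * baseline_ms / 1000)
    thr = tkeo[:n0].mean() + h * tkeo[:n0].std()
    above = np.flatnonzero(tkeo > thr)
    return (above[0] + 1) / fs if above.size else None  # onset time in seconds
```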
NASA Astrophysics Data System (ADS)
Zhong, X. Y.; Gao, J. X.; Ren, H.; Cai, W. G.
2018-04-01
The acceleration of urbanization has brought new opportunities for China's development. With rapid economic development and improving living standards, building energy consumption has also shown rigid growth. As the level of industrialization continues to rise, industrial energy-saving potential declines, so the construction industry will face more severe challenges in bearing the task of energy saving and emission reduction. As three municipalities of China, Beijing, Shanghai, and Chongqing have significant radiating effects on the economy, urbanization level, and construction industry development of their regions. It is therefore of great significance to study how building energy consumption in the three regions changes with urbanization level and to identify the key driving factors. Based on data for Beijing, Shanghai, and Chongqing from 2001 to 2015, this paper examines whether an environmental Kuznets curve (EKC) for building energy consumption exists. Based on the model results, the data for the three regions are then divided into three periods, and the logarithmic mean Divisia index (LMDI) decomposition method is used to identify the factors with the greatest impact on building energy consumption at each stage. Finally, the paper analyzes the policy background of each stage and puts forward policy suggestions on this basis.
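For concreteness, additive LMDI decomposes a change in energy use exactly into driver effects via logarithmic-mean weights. The sketch below uses a hypothetical three-factor identity and made-up numbers; the paper's actual factor set is not reproduced here.

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean L(a, b), the weighting at the heart of LMDI."""
    return (a - b) / (np.log(a) - np.log(b)) if a != b else a

# Hypothetical identity for building energy use:
# E = population x (floor area per capita) x (energy per unit floor area).
# All numbers are made up for illustration; they are not the paper's data.
p0, a0, i0 = 13.6, 30.0, 0.020   # base year
p1, a1, i1 = 21.5, 36.0, 0.018   # end year
E0, E1 = p0 * a0 * i0, p1 * a1 * i1

L = logmean(E1, E0)
effects = {
    "population": L * np.log(p1 / p0),
    "floor area per capita": L * np.log(a1 / a0),
    "energy intensity": L * np.log(i1 / i0),
}
# Additive LMDI is exact: the effects sum to the total change, no residual.
assert np.isclose(sum(effects.values()), E1 - E0)
```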
Information Processing Research
1992-01-03
structure of instances. Opal provides special graphical objects called "AggreGadgets" which are used to hold a collection of other objects (either...available in classes of expert systems tasks, relate this to the structure of parallel production systems, and incorporate parallel-decomposition...Anantharaman et al. 88]. We designed a new pawn structure algorithm and upgraded the king-safety pattern recognizers, which contributed significantly
ERIC Educational Resources Information Center
Al Dahhan, Noor Z.; Kirby, John R.; Brien, Donald C.; Munoz, Douglas P.
2017-01-01
Naming speed (NS) refers to how quickly and accurately participants name a set of familiar stimuli (e.g., letters). NS is an established predictor of reading ability, but controversy remains over why it is related to reading. We used three techniques (stimulus manipulations to emphasize phonological and/or visual aspects, decomposition of NS times…
The Work of Steering Instruction toward the Mathematical Point: A Decomposition of Teaching Practice
ERIC Educational Resources Information Center
Sleep, Laurie
2012-01-01
Despite its centrality in teaching, what it takes to identify the goals of instruction and use those goals to manage the work has yet to be articulated in ways that it can be adequately studied or taught. Using data from preservice teachers' mathematics lessons, this study identifies and illustrates seven central tasks of "steering…
Synthesis of a hybrid model of the VSC FACTS devices and HVDC technologies
NASA Astrophysics Data System (ADS)
Borovikov, Yu S.; Gusev, A. S.; Sulaymanov, A. O.; Ufa, R. A.
2014-10-01
The motivation for the presented research is the need for new methods and tools for adequate simulation of FACTS devices and HVDC systems as part of real electric power systems (EPS). Research object: an alternative hybrid approach for synthesizing a VSC-FACTS and -HVDC hybrid model is proposed. Results: the VSC-FACTS and -HVDC hybrid model is designed in accordance with the presented concepts of hybrid simulation. The developed model allows us to carry out adequate real-time simulation of all processes in HVDC, FACTS devices, and the EPS as a whole, without any decomposition or limitation on their duration, and to use the developed tool for effective solution of design, operational, and research tasks for EPS containing such devices.
Attribute And-Or Grammar for Joint Parsing of Human Pose, Parts and Attributes.
Park, Seyoung; Nie, Xiaohan; Zhu, Song-Chun
2017-07-25
This paper presents an attribute and-or grammar (A-AOG) model for jointly inferring human body pose and human attributes in a parse graph, with attributes augmented to nodes in the hierarchical representation. In contrast to other popular methods in the current literature that train separate classifiers for poses and individual attributes, our method explicitly represents the decomposition and articulation of body parts and accounts for the correlations between poses and attributes. The A-AOG model is an amalgamation of three traditional grammar formulations: (i) a phrase structure grammar representing the hierarchical decomposition of the human body from whole to parts; (ii) a dependency grammar modeling the geometric articulation by a kinematic graph of the body pose; and (iii) an attribute grammar accounting for the compatibility relations between different parts in the hierarchy so that their appearances follow a consistent style. The parse graph outputs human detection, pose estimation, and attribute prediction simultaneously, and these outputs are intuitive and interpretable. We conduct experiments on two tasks on two datasets, and the experimental results demonstrate the advantage of joint modeling in comparison with computing poses and attributes independently. Furthermore, our model obtains better performance than existing methods for both pose estimation and attribute prediction tasks.
Ab initio Kinetics and Thermal Decomposition Mechanism of Mononitrobiuret and 1,5-Dinitrobiuret
2016-03-14
Journal article (dates covered: Feb 2015-May 2015). Mononitrobiuret (MNB) and 1,5-dinitrobiuret (DNB) are tetrazole-free, nitrogen-rich, energetic compounds. For the first time, the thermal decomposition mechanisms of MNB and DNB have been investigated: the potential energy surfaces for thermal decomposition of MNB and DNB were characterized at the RCCSD(T)/cc-pV∞Z//M06-2X/aug-cc-pVTZ level of theory.
Limited-memory adaptive snapshot selection for proper orthogonal decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill
2015-04-02
Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers' test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
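The idea of error-controlled snapshot selection can be sketched as follows. This simplified version recomputes a thin SVD instead of the paper's single-pass incremental algorithm, and the tolerance value is arbitrary; it only illustrates the "take a snapshot when the current basis reconstructs the state poorly" logic.

```python
import numpy as np

def select_snapshots(states, tol=1e-3):
    """Greedy, error-controlled snapshot selection for POD.

    `states` is a sequence of 1-D state vectors in time order. A snapshot
    is kept only when the current POD basis reconstructs it with relative
    error above `tol`, mimicking the error test of an adaptive ODE solver.
    (Recomputing a thin SVD here is a memory-hungry simplification of the
    paper's single-pass incremental SVD.)"""
    kept, basis = [], None
    for k, u in enumerate(states):
        if basis is None:
            err = np.inf
        else:
            err = np.linalg.norm(u - basis @ (basis.T @ u)) / np.linalg.norm(u)
        if err > tol:
            kept.append(k)
            S = np.column_stack([states[i] for i in kept])
            U, s, _ = np.linalg.svd(S, full_matrices=False)
            basis = U[:, s > 1e-12 * s[0]]   # drop numerically null modes
    return kept, basis

# Example: a trajectory confined to a 2-D subspace needs only ~2 snapshots.
rng = np.random.default_rng(0)
e1, e2 = rng.standard_normal((2, 50))
traj = [np.cos(0.1 * t) * e1 + np.sin(0.1 * t) * e2 for t in range(100)]
kept, basis = select_snapshots(traj, tol=1e-3)
```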
Liu, Haizhou; Bruton, Thomas A; Doyle, Fiona M; Sedlak, David L
2014-09-02
Persulfate (S2O8(2-)) is being used increasingly for in situ chemical oxidation (ISCO) of organic contaminants in groundwater, despite an incomplete understanding of the mechanism through which it is converted into reactive species. In particular, the decomposition of persulfate by naturally occurring mineral surfaces has not been studied in detail. To gain insight into the reaction rates and mechanism of persulfate decomposition in the subsurface, and to identify possible approaches for improving its efficacy, the decomposition of persulfate was investigated in the presence of pure metal oxides, clays, and representative aquifer solids collected from field sites in the presence and absence of benzene. Under conditions typical of groundwater, Fe(III)- and Mn(IV)-oxides catalytically converted persulfate into sulfate radical (SO4(•-)) and hydroxyl radical (HO(•)) over time scales of several weeks at rates that were 2-20 times faster than those observed in metal-free systems. Amorphous ferrihydrite was the most reactive iron mineral with respect to persulfate decomposition, with reaction rates proportional to solid mass and surface area. As a result of radical chain reactions, the rate of persulfate decomposition increased by as much as 100 times when benzene concentrations exceeded 0.1 mM. Due to its relatively slow rate of decomposition in the subsurface, it can be advantageous to inject persulfate into groundwater, allowing it to migrate to zones of low hydraulic conductivity where clays, metal oxides, and contaminants will accelerate its conversion into reactive oxidants. PMID:25133603
Theoretical studies of the decomposition mechanisms of 1,2,4-butanetriol trinitrate.
Pei, Liguan; Dong, Kehai; Tang, Yanhui; Zhang, Bo; Yu, Chang; Li, Wenzuo
2017-12-06
Density functional theory (DFT) and canonical variational transition-state theory combined with a small-curvature tunneling correction (CVT/SCT) were used to explore the decomposition mechanisms of 1,2,4-butanetriol trinitrate (BTTN) in detail. The results showed that the γ-H abstraction reaction is the initial pathway for autocatalytic BTTN decomposition. The three possible hydrogen atom abstraction reactions are all exothermic. The rate constants for autocatalytic BTTN decomposition are 3 to 10^40 times greater than the rate constants for the two unimolecular decomposition reactions (O-NO2 cleavage and HONO elimination). The process of BTTN decomposition can be divided into two stages according to whether the NO2 concentration is above a threshold value. HONO elimination is the main reaction channel during the first stage because autocatalytic decomposition requires NO2 and the concentration of NO2 is initially low. As the reaction proceeds, the concentration of NO2 gradually increases; when it exceeds the threshold value, the second stage begins, with autocatalytic decomposition becoming the main reaction channel.
Mohammed, Abdul-Wahid; Xu, Yang; Hu, Haixiao; Agyemang, Brighter
2016-09-21
In novel collaborative systems, cooperative entities combine their services to achieve local and global objectives. With the growing pervasiveness of cyber-physical systems, however, such collaboration is hampered by differences in the operations of the cyber and physical objects, and the dynamic formation of collaborative functionality from high-level system goals has become a practical need. In this paper, we propose a cross-layer automation and management model for cyber-physical systems. It models the dynamic formation of collaborative services pursuing laid-down system goals as an ontology-oriented hierarchical task network. Ontological intelligence provides the semantic technology of this model, and through semantic reasoning, primitive tasks can be dynamically composed from high-level system goals. To deal with uncertainty, we further propose a novel bridge between hierarchical task networks and Markov logic networks, called the Markov task network. This leverages the efficient inference algorithms of Markov logic networks to reduce both the computational and inferential loads of task decomposition. Our experimental results show that high-precision service composition under uncertainty can be achieved using this approach.
Seasonal variation of carcass decomposition and gravesoil chemistry in a cold (Dfa) climate.
Meyer, Jessica; Anderson, Brianna; Carter, David O
2013-09-01
It is well known that temperature significantly affects corpse decomposition, yet relatively few taphonomy studies investigate the effects of seasonality on decomposition. Here, we propose the use of the Köppen-Geiger climate classification system and describe the decomposition of swine (Sus scrofa domesticus) carcasses during the summer and winter near Lincoln, Nebraska, USA. Decomposition was scored, and gravesoil chemistry (total carbon, total nitrogen, ninhydrin-reactive nitrogen, ammonium, nitrate, and soil pH) was assessed. Gross carcass decomposition in summer was three to seven times greater than in winter. Initial significant changes in gravesoil chemistry occurred following approximately 320 accumulated degree days, regardless of season. Furthermore, significant (p < 0.05) correlations were observed between ammonium and pH (positive) and between nitrate and pH (negative). We hope that future decomposition studies employ the Köppen-Geiger climate classification system to understand the seasonality of corpse decomposition, to validate taphonomic methods, and to facilitate cross-climate comparisons of carcass decomposition.
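Accumulated degree days (ADD), the thermal-time measure behind the ~320 ADD threshold above, are straightforward to compute. The base temperature of 0 degrees C and the example temperatures below are illustrative assumptions.

```python
def accumulated_degree_days(daily_mean_temps_c, base=0.0):
    """Accumulated degree days (ADD): the sum of daily mean temperatures
    above a base temperature (0 degrees C assumed here, a common
    taphonomic convention)."""
    return sum(max(t - base, 0.0) for t in daily_mean_temps_c)

# ~10 C mean days reach the ~320 ADD threshold in about 32 days,
# whereas ~2 C winter days would take about 160 days (illustrative values).
print(accumulated_degree_days([10.0] * 32))   # 320.0
```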
Yang, Haixuan; Seoighe, Cathal
2016-01-01
Nonnegative Matrix Factorization (NMF) has proved to be an effective method for unsupervised clustering analysis of gene expression data. Through the nonnegativity constraint, NMF provides a decomposition of the data matrix into two matrices that have been used for clustering analysis. However, the decomposition is not unique, so different clustering results can be obtained, leading to different interpretations of the decomposition. To alleviate this problem, some existing methods directly enforce uniqueness to some extent by adding regularization terms to the NMF objective function. Alternatively, various normalization methods have been applied to the factor matrices; however, the effects of the choice of normalization have not been carefully investigated. Here we investigate the performance of NMF for the task of cancer class discovery under a wide range of normalization choices. After extensive evaluations, we observe that the maximum norm showed the best performance, although the maximum norm had not previously been used for NMF. Matlab codes are freely available from: http://maths.nuigalway.ie/~haixuanyang/pNMF/pNMF.htm.
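A minimal sketch of max-norm normalization applied to NMF factors before class discovery, using scikit-learn's NMF on random stand-in data. Which factor matrix to normalize and the clustering-by-argmax step are illustrative choices, not the paper's exact procedure (which is in Matlab at the URL above).

```python
import numpy as np
from sklearn.decomposition import NMF

# Random stand-in for a nonnegative genes x samples expression matrix.
rng = np.random.default_rng(0)
X = rng.random((200, 30))

model = NMF(n_components=3, init="nndsvd", max_iter=500)
W = model.fit_transform(X)      # genes x k metagenes
H = model.components_           # k x samples loadings

# Max-norm normalization: divide each row of H by its maximum and push the
# scale into W, leaving the product W @ H unchanged.
scale = H.max(axis=1)
H_norm = H / scale[:, None]
W_norm = W * scale[None, :]
assert np.allclose(W @ H, W_norm @ H_norm)

# Class discovery: assign each sample to its dominant metagene.
labels = H_norm.argmax(axis=0)
```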
Extracting Leading Nonlinear Modes of Changing Climate From Global SST Time Series
NASA Astrophysics Data System (ADS)
Mukhin, D.; Gavrilov, A.; Loskutov, E. M.; Feigin, A. M.; Kurths, J.
2017-12-01
Data-driven modeling of climate requires adequate principal variables extracted from observed high-dimensional data. Constructing such variables requires finding spatio-temporal patterns that explain a substantial part of the variability and comprise all dynamically related time series in the data. The difficulties of this task arise from the nonlinearity and non-stationarity of the climate dynamical system. The nonlinearity makes linear methods of data decomposition insufficient for separating the different processes entangled in the observed time series. On the other hand, various forcings, both anthropogenic and natural, make the dynamics non-stationary, and we should be able to describe the response of the system to such forcings in order to separate out the modes explaining the internal variability. The method we present is aimed at overcoming both these problems. It is based on the Nonlinear Dynamical Mode (NDM) decomposition [1,2], but takes external forcing signals into account. Each mode depends on hidden time series, unknown a priori, which, together with the external forcing time series, are mapped onto data space. Finding both the hidden signals and the mapping allows us to study the evolution of the modes' structure under changing external conditions and to compare the roles of internal variability and forcing in the observed behavior. The method is used for extracting the principal modes of SST variability on inter-annual and multidecadal time scales, accounting for external forcings such as CO2, variations of solar activity, and volcanic activity. The structure of the revealed teleconnection patterns, as well as their forecast under different CO2 emission scenarios, are discussed. [1] Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). Principal nonlinear dynamical modes of climate variability. Scientific Reports, 5, 15510. [2] Gavrilov, A., Mukhin, D., Loskutov, E., Volodin, E., Feigin, A., & Kurths, J. (2016). Method for reconstructing nonlinear modes with adaptive structure from multidimensional data. Chaos: An Interdisciplinary Journal of Nonlinear Science, 26(12), 123101.
Power System Decomposition for Practical Implementation of Bulk-Grid Voltage Control Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vallem, Mallikarjuna R.; Vyakaranam, Bharat GNVSR; Holzer, Jesse T.
Power system algorithms such as AC optimal power flow and coordinated volt/var control of the bulk power system are computationally intensive and become difficult to solve in operational time frames. The computational time required to run these algorithms increases exponentially as the size of the power system increases. The solution time for multiple subsystems is less than that for solving the entire system simultaneously, and the local nature of the voltage problem lends itself to such decomposition. This paper describes an algorithm that can be used to perform power system decomposition from the point of view of the voltage control problem. Our approach takes advantage of the dominant localized effect of voltage control and is based on clustering buses according to the electrical distances between them. One contribution of the paper is the use of multidimensional scaling to compute n-dimensional Euclidean coordinates for each bus based on electrical distance, enabling algorithms like K-means clustering. A simple coordinated reactive power control of photovoltaic inverters for voltage regulation is used to demonstrate the effectiveness of the proposed decomposition algorithm and its components. The proposed decomposition method is demonstrated on the IEEE 118-bus system.
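The embedding-plus-clustering step can be sketched with off-the-shelf tools: multidimensional scaling turns an electrical-distance matrix into Euclidean bus coordinates, and K-means partitions them into control zones. The distance matrix below is a random stand-in (a real study would derive it from the network model), and the cluster count is arbitrary.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

# Random stand-in for a symmetric bus-to-bus electrical distance matrix.
rng = np.random.default_rng(1)
pts = rng.random((20, 3))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

# MDS embeds the buses in n-D Euclidean coordinates that approximately
# reproduce the electrical distances.
coords = MDS(n_components=3, dissimilarity="precomputed",
             random_state=0).fit_transform(D)

# K-means on those coordinates yields candidate voltage-control subsystems.
zones = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(coords)
```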
A fast new algorithm for a robot neurocontroller using inverse QR decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, A.S.; Khemaissia, S.
2000-01-01
A new adaptive neural network controller for robots is presented. The controller is based on direct adaptive techniques. Unlike many neural network controllers in the literature, inverse dynamical model evaluation is not required. A numerically robust, computationally efficient processing scheme for neural network weight estimation is described, namely, the inverse QR decomposition (INVQR). The inverse QR decomposition and a weighted recursive least-squares (WRLS) method for neural network weight estimation are derived using Cholesky factorization of the data matrix. The algorithm that performs the efficient INVQR of the underlying space-time data matrix may be implemented in parallel on a triangular array, and its systolic architecture is well suited for VLSI implementation. Another important benefit of the INVQR decomposition is that it solves directly for the time-recursive least-squares filter vector, while avoiding the sequential back-substitution step required by QR decomposition approaches.
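For orientation, the WRLS problem that INVQR solves can be written as the familiar recursive least-squares update below. This textbook form is shown only to make the estimation target concrete; the INVQR algorithm computes the same solution via inverse QR updates, without forming the inverse correlation matrix explicitly, which is what gives it its numerical robustness and systolic-array structure.

```python
import numpy as np

def rls_update(w, P, x, d, lam=0.99):
    """One exponentially weighted recursive least-squares (WRLS) step for
    estimating network output weights (plain form, not the INVQR scheme).

    w: (n,) weights; P: (n, n) inverse correlation matrix;
    x: (n,) regressor (e.g., hidden-layer activations); d: desired output;
    lam: forgetting factor."""
    Px = P @ x
    g = Px / (lam + x @ Px)            # gain vector
    e = d - w @ x                      # a priori output error
    w = w + g * e                      # weight update
    P = (P - np.outer(g, Px)) / lam    # inverse-correlation update
    return w, P
```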
Zhou, Xiaohong; Feng, Deyou; Wen, Chunzi; Liu, Dan
2018-03-29
In freshwater ecosystems, aquatic macrophytes play significant roles in nutrient cycling. One problem in this process is nutrient loss from the tissues of plants that are not harvested in time. In this study, we used two aquatic species, Nelumbo nucifera and Trapa bispinosa Roxb., to investigate the decomposition dynamics and nutrient release from detritus. Litter bags containing 10 g of stems (plus petioles) or leaves of each species were incubated in a pond from November 2016 to May 2017. Litterbags were retrieved nine times, on days 6, 14, 25, 45, 65, 90, 125, 145, and 165 of the decomposition experiment, to monitor biomass loss and nutrient release. The results showed that the dry masses of N. nucifera and T. bispinosa decomposed by 49.35-69.40 and 82.65-91.65%, respectively. The decomposition rate constants (k) ranked as follows: leaves of T. bispinosa (0.0122 day-1) > stems (plus petioles) of T. bispinosa (0.0090 day-1) > leaves of N. nucifera (0.0060 day-1) > stems (plus petioles) of N. nucifera (0.0030 day-1). The time for 50% dry mass decay, the time for 95% dry mass decay, and the turnover rate ranked leaves < stems (plus petioles) and T. bispinosa < N. nucifera. These results indicate that dry mass loss, k values, and the parameters related to k differ significantly between species and tissues. The C, N, and P concentrations and the C/N, C/P, and N/P ratios showed irregular temporal trends over the whole decay period. In addition, the nutrient accumulation index (AI) changed significantly depending on the dry mass remaining and the C, N, and P concentrations in detritus at different decomposition times. At day 165, the nutrient AIs were 36.72, 8.08, 6.35, and 2.56% for N; 31.25, 9.85, 4.00, and 1.63% for P; and 25.15, 16.96, 7.36, and 6.16% for C in the stems (plus petioles) of N. nucifera, leaves of N. nucifera, stems (plus petioles) of T. bispinosa, and leaves of T. bispinosa, respectively. These results indicate that 63.28-97.44% of N, 68.75-98.37% of P, and 74.85-93.84% of C had been released from the plant detritus to the water by day 165 of the decomposition period. The initial detritus chemistry, particularly the P-related parameters (P concentration and C/P and N/P ratios), strongly affected dry mass loss, decomposition rates, and nutrient release from detritus into water. Two-way ANOVA results also confirm that the effects of species were significant for decomposition dynamics (dry mass loss) and nutrient release (nutrient concentrations, their ratios, and nutrient AI) (P < 0.01), except for N concentration (P > 0.05). Decomposition time also had significant effects on detritus decomposition dynamics and nutrient release, although the contributions of species and decomposition time differed markedly on the basis of their F values. This study can provide a scientific basis for the management of aquatic plants in freshwater ecosystems of eastern China.
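Given the single-exponential litterbag model m(t) = m0·exp(-k·t), the derived times reported above follow directly from k (t50 = ln 2/k; t95 = ln 20/k, often approximated as 3/k). The sketch below recomputes them from the study's own rate constants.

```python
import numpy as np

def decay_times(k):
    """Single-exponential litterbag model m(t) = m0 * exp(-k * t):
    times (days) to 50% and 95% dry-mass loss."""
    return np.log(2.0) / k, np.log(20.0) / k   # ln 20 ~ 3, hence the 3/k rule

for tissue, k in {"T. bispinosa leaves": 0.0122,
                  "T. bispinosa stems (plus petioles)": 0.0090,
                  "N. nucifera leaves": 0.0060,
                  "N. nucifera stems (plus petioles)": 0.0030}.items():
    t50, t95 = decay_times(k)
    print(f"{tissue}: t50 = {t50:.0f} d, t95 = {t95:.0f} d")
```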
A Wavelet Polarization Decomposition Net Model for Polarimetric SAR Image Classification
NASA Astrophysics Data System (ADS)
He, Chu; Ou, Dan; Yang, Teng; Wu, Kun; Liao, Mingsheng; Chen, Erxue
2014-11-01
In this paper, a deep model based on wavelet texture is proposed for Polarimetric Synthetic Aperture Radar (PolSAR) image classification, inspired by the recent success of deep learning methods. Our model is intended to learn powerful and informative representations that improve generalization for complex scene classification tasks. Given the influence of speckle noise in PolSAR images, wavelet polarization decomposition is applied first to obtain basic, discriminative texture features, which are then fed into a Deep Neural Network (DNN) to compose multi-layer, higher-level representations. We demonstrate that the model can produce a powerful representation that captures information otherwise difficult to trace in PolSAR images, and it shows promising performance in comparison with traditional SAR image classification methods on the SAR image dataset considered.
TARGET - TASK ANALYSIS REPORT GENERATION TOOL, VERSION 1.0
NASA Technical Reports Server (NTRS)
Ortiz, C. J.
1994-01-01
The Task Analysis Report Generation Tool, TARGET, is a graphical interface tool used to capture procedural knowledge and translate that knowledge into a hierarchical report. TARGET is based on VISTA, a knowledge acquisition tool developed by the Naval Systems Training Center. TARGET assists a programmer and/or task expert organize and understand the steps involved in accomplishing a task. The user can label individual steps in the task through a dialogue-box and get immediate graphical feedback for analysis. TARGET users can decompose tasks into basic action kernels or minimal steps to provide a clear picture of all basic actions needed to accomplish a job. This method allows the user to go back and critically examine the overall flow and makeup of the process. The user can switch between graphics (box flow diagrams) and text (task hierarchy) versions to more easily study the process being documented. As the practice of decomposition continues, tasks and their subtasks can be continually modified to more accurately reflect the user's procedures and rationale. This program is designed to help a programmer document an expert's task thus allowing the programmer to build an expert system which can help others perform the task. Flexibility is a key element of the system design and of the knowledge acquisition session. If the expert is not able to find time to work on the knowledge acquisition process with the program developer, the developer and subject matter expert may work in iterative sessions. TARGET is easy to use and is tailored to accommodate users ranging from the novice to the experienced expert systems builder. TARGET is written in C-language for IBM PC series and compatible computers running MS-DOS and Microsoft Windows version 3.0 or 3.1. No source code is supplied. The executable also requires 2Mb of RAM, a Microsoft compatible mouse, a VGA display and an 80286, 386 or 486 processor machine. The standard distribution medium for TARGET is one 5.25 inch 360K MS-DOS format diskette. TARGET was developed in 1991.
Surface fuel litterfall and decomposition in the northern Rocky Mountains, U.S.A.
Robert E. Keane
2008-01-01
Surface fuel deposition and decomposition rates are important to fire management and research because they can define the longevity of fuel treatments in time and space and they can be used to design, build, test, and validate complex fire and ecosystem models useful in evaluating management alternatives. We determined rates of surface fuel litterfall and decomposition...
Richard T. Conant; Michael Ryan; Goran I. Agren; Hannah E. Birge; Eric A. Davidson; Peter E. Eliasson; Sarah E. Evans; Serita D. Frey; Christian P. Giardina; Francesca M. Hopkins; Riitta Hyvonen; Miko U. F . Kirschbaum; Jocelyn M. Lavallee; Jens Leifeld; William J. Parton; Jessica Megan Steinweg; Matthew D. Wallenstein; J . A. Martin Wetterstedt; Mark A. Bradford
2011-01-01
The response of soil organic matter (OM) decomposition to increasing temperature is a critical aspect of ecosystem responses to global change. The impacts of climate warming on decomposition dynamics have not been resolved due to apparently contradictory results from field and lab experiments, most of which have focused on labile carbon with short turnover times. But...
Watterson, James H; Donohue, Joseph P
2011-09-01
Skeletal tissues (rat) were analyzed for ketamine (KET) and norketamine (NKET) following acute ketamine exposure (75 mg/kg i.p.) to examine the influence of bone type and decomposition period on drug levels. Following euthanasia, drug-free (n = 6) and drug-positive (n = 20) animals decomposed outdoors in rural Ontario for 0, 1, or 2 weeks. Skeletal remains were recovered, and ground samples of various bones underwent passive methanolic extraction and analysis by GC-MS after solid-phase extraction. Drug levels, expressed as mass-normalized response ratios, were compared across tissue types and decomposition periods. Bone type was a main effect (p < 0.05) for drug level and drug/metabolite level ratio (DMLR) for all decomposition times, except for DMLR after 2 weeks of decomposition. Mean drug level (KET and NKET) and DMLR varied by up to 23-fold, 18-fold, and 5-fold, respectively, between tissue types. Decomposition time was significantly related to DMLR, KET level, and NKET level in 3/7, 4/7, and 1/7 tissue types, respectively. Although substantial site-dependence may exist in measured bone drug levels, ratios of drug and metabolite levels should be investigated for utility in discriminating drug administration patterns in forensic work.
Premkumar, Thathan; Govindarajan, Subbiah; Coles, Andrew E; Wight, Charles A
2005-04-07
The thermal decomposition kinetics of N(2)H(5)[Ce(pyrazine-2,3-dicarboxylate)(2)(H(2)O)] (Ce-P) have been studied by thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC), for the first time; TGA analysis reveals an oxidative decomposition process yielding CeO(2) as the final product with an activation energy of approximately 160 kJ mol(-1). This complex may be used as a precursor to fine particle cerium oxides due to its low temperature of decomposition.
A General-purpose Framework for Parallel Processing of Large-scale LiDAR Data
NASA Astrophysics Data System (ADS)
Li, Z.; Hodgson, M.; Li, W.
2016-12-01
Light detection and ranging (LiDAR) technologies have proven efficient for quickly obtaining very detailed Earth surface data over large spatial extents. Such data are important for scientific discovery in the Earth and ecological sciences and for natural disaster and environmental applications. However, handling LiDAR data poses grand geoprocessing challenges because it is both data intensive and computationally intensive. Previous studies have achieved notable success in parallel processing of LiDAR data to address these challenges. However, those studies either relied on high performance computers and specialized hardware (GPUs) or focused mostly on customized solutions for specific algorithms. We developed a general-purpose scalable framework, coupled with a sophisticated data decomposition and parallelization strategy, to efficiently handle big LiDAR data. Specifically, 1) a tile-based spatial index is proposed to manage big LiDAR data in the scalable and fault-tolerant Hadoop distributed file system, 2) two spatial decomposition techniques are developed to enable efficient parallelization of different types of LiDAR processing tasks, and 3) by coupling existing LiDAR processing tools with Hadoop, the framework is able to conduct a variety of LiDAR data processing tasks in parallel in a highly scalable distributed computing environment. The performance and scalability of the framework were evaluated with a series of experiments conducted on a real LiDAR dataset using a proof-of-concept prototype system. The results show that the proposed framework 1) handles massive LiDAR data more efficiently than standalone tools, and 2) provides almost linear scalability in terms of either increased workload (data volume) or increased computing nodes with both spatial decomposition strategies. We believe that the proposed framework provides a valuable reference for developing a collaborative cyberinfrastructure for processing big earth science data in a highly scalable environment.
NASA Technical Reports Server (NTRS)
Ortega, J. M.
1986-01-01
Various graduate research activities in the field of computer science are reported. Among the topics discussed are: (1) failure probabilities in multi-version software; (2) Gaussian Elimination on parallel computers; (3) three dimensional Poisson solvers on parallel/vector computers; (4) automated task decomposition for multiple robot arms; (5) multi-color incomplete cholesky conjugate gradient methods on the Cyber 205; and (6) parallel implementation of iterative methods for solving linear equations.
Frequency hopping signal detection based on wavelet decomposition and Hilbert-Huang transform
NASA Astrophysics Data System (ADS)
Zheng, Yang; Chen, Xihao; Zhu, Rui
2017-07-01
Frequency hopping (FH) signals are widely adopted in military communications as a kind of low probability of interception signal, so it is important to develop effective FH signal detection algorithms. Existing detection algorithms for FH signals based on time-frequency analysis cannot satisfy the time and frequency resolution requirements simultaneously because of the influence of the window function. To solve this problem, an algorithm based on wavelet decomposition and the Hilbert-Huang transform (HHT) is proposed. The proposed algorithm removes the noise from the received signals by wavelet decomposition and detects the FH signals by the Hilbert-Huang transform. Simulation results show that the proposed algorithm takes both the time resolution and the frequency resolution into account, and the accuracy of FH signal detection is correspondingly improved.
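A compact sketch of the two stages, with one simplification: the Hilbert transform is applied to the denoised signal directly, whereas the HHT proper first performs empirical mode decomposition. The wavelet, level, and universal soft threshold are assumed defaults, not the paper's settings.

```python
import numpy as np
import pywt
from scipy.signal import hilbert

def fh_frequency_track(x, fs, wavelet="db4", level=3):
    """Two-stage sketch: wavelet-threshold denoising, then an instantaneous
    frequency track whose abrupt jumps reveal hop instants. (The full HHT
    would apply empirical mode decomposition before the Hilbert transform;
    that step is omitted here for brevity.)"""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise scale estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))           # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
    xd = pywt.waverec(coeffs, wavelet)[: len(x)]
    phase = np.unwrap(np.angle(hilbert(xd)))
    return np.diff(phase) * fs / (2.0 * np.pi)            # Hz, per sample
```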
Crowdsourcing Austrian data on decomposition with the help of citizen scientists
NASA Astrophysics Data System (ADS)
Sandén, Taru; Berthold, Helene; Schwarz, Michael; Baumgarten, Andreas; Spiegel, Heide
2017-04-01
Decomposition, the decay of organic material, is a critical process for life on earth. Through decomposition, nutrients become available to plants and soil organisms, which use them for growth and maintenance. When plant material decomposes, it loses weight and releases the greenhouse gas carbon dioxide (CO2) into the atmosphere. Terrestrial soils contain about three times more carbon than the atmosphere; therefore, changes in the balance of soil carbon storage and release can significantly amplify or attenuate global warming. Many factors affecting the global carbon cycle are already known and mapped; however, an index of decomposition rate is still missing, even though it is needed for climate modelling. The Tea Bag Index (TBI) measures decomposition in a standardised, achievable, climate-relevant, and time-relevant way by burying commercial nylon tea bags in soils for three months (Keuskamp et al., 2013). In the summer of 2016, the TBI (expressed as a decomposition rate (k) and a stabilisation factor (S)) was measured with the help of Austrian citizen scientists at 7-8 cm soil depth in three different land uses (maize croplands, grasslands, and forests). In total, ca. 2700 tea bags were sent to the citizen scientists, of which ca. 50% were returned. The data generated by the citizen scientists will be incorporated into an Austrian as well as a global soil map of decomposition. This map can be used as input to improve climate modelling in the future.
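A minimal implementation of the TBI computation, assuming the hydrolysable-fraction constants reported in Keuskamp et al. (2013) (H_g of about 0.842 for green tea and H_r of about 0.552 for rooibos) and a ~90-day burial. The input masses in the example are made up.

```python
import numpy as np

# Hydrolysable fractions of green and rooibos tea as reported by
# Keuskamp et al. (2013); treated as assumed constants here.
H_G, H_R = 0.842, 0.552

def tea_bag_index(green_mass_remaining, rooibos_mass_remaining, t_days=90):
    """TBI: stabilisation factor S from green tea and decomposition rate k
    (per day) from rooibos tea, both buried for about 90 days. Inputs are
    fractions of initial dry mass remaining; assumes the labile rooibos
    pool has not yet fully decomposed (so the log argument stays positive)."""
    a_g = 1.0 - green_mass_remaining        # decomposed fraction, green tea
    S = 1.0 - a_g / H_G                     # stabilisation factor
    a_r = H_R * (1.0 - S)                   # predicted labile fraction, rooibos
    w = rooibos_mass_remaining
    k = np.log(a_r / (w - (1.0 - a_r))) / t_days
    return S, k

print(tea_bag_index(0.40, 0.75))   # made-up field masses
```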
Guo, Feng; Cheng, Xin-lu; Zhang, Hong
2012-04-12
Whether the first step in the decomposition of nitromethane is proton dissociation or C-N bond scission has been a controversial issue. We applied reactive force field (ReaxFF) molecular dynamics to probe the initial decomposition mechanisms of nitromethane. By comparing simulations of impact on the (010) surface with simulations without impact (heating only), we found that proton dissociation is the first step in the pyrolysis of nitromethane; the C-N bond decomposes on the same time scale in the impact simulations, but in the non-impact simulations C-N bond dissociation takes place at a later time. At the end of these simulations, a large number of clusters are formed. By analyzing the trajectories, we discuss the role of the hydrogen bond in the initial stage of nitromethane decomposition, the intermediates observed early in the simulations, and the formation of clusters consisting of C-N-C-N chain/ring structures.
The trait contribution to wood decomposition rates of 15 Neotropical tree species.
van Geffen, Koert G; Poorter, Lourens; Sass-Klaassen, Ute; van Logtestijn, Richard S P; Cornelissen, Johannes H C
2010-12-01
The decomposition of dead wood is a critical uncertainty in models of the global carbon cycle. Despite this, relatively few studies have focused on dead wood decomposition, with a strong bias to higher latitudes. Especially the effect of interspecific variation in species traits on differences in wood decomposition rates remains unknown. In order to fill these gaps, we applied a novel method to study long-term wood decomposition of 15 tree species in a Bolivian semi-evergreen tropical moist forest. We hypothesized that interspecific differences in species traits are important drivers of variation in wood decomposition rates. Wood decomposition rates (fractional mass loss) varied between 0.01 and 0.31 yr(-1). We measured 10 different chemical, anatomical, and morphological traits for all species. The species' average traits were useful predictors of wood decomposition rates, particularly the average diameter (dbh) of the tree species (R2 = 0.41). Lignin concentration further increased the proportion of explained inter-specific variation in wood decomposition (both negative relations, cumulative R2 = 0.55), although it did not significantly explain variation in wood decomposition rates if considered alone. When dbh values of the actual dead trees sampled for decomposition rate determination were used as a predictor variable, the final model (including dead tree dbh and lignin concentration) explained even more variation in wood decomposition rates (R2 = 0.71), underlining the importance of dbh in wood decomposition. Other traits, including wood density, wood anatomical traits, macronutrient concentrations, and the amount of phenolic extractives could not significantly explain the variation in wood decomposition rates. The surprising results of this multi-species study, in which for the first time a large set of traits is explicitly linked to wood decomposition rates, merits further testing in other forest ecosystems.
Temperature, oxygen, and vegetation controls on decomposition in a James Bay peatland
NASA Astrophysics Data System (ADS)
Philben, Michael; Holmquist, James; MacDonald, Glen; Duan, Dandan; Kaiser, Karl; Benner, Ronald
2015-06-01
The biochemical composition of a peat core from James Bay Lowland, Canada, was used to assess the extent of peat decomposition and diagenetic alteration. Our goal was to identify environmental controls on peat decomposition, particularly its sensitivity to naturally occurring changes in temperature, oxygen exposure time, and vegetation. All three varied substantially during the last 7000 years, providing a natural experiment for evaluating their effects on decomposition. The bottom 50 cm of the core formed during the Holocene Climatic Optimum (~7000-4000 years B.P.), when mean annual air temperature was likely 1-2°C warmer than present. A reconstruction of the water table level using testate amoebae indicated oxygen exposure time was highest in the subsequent upper portion of the core between 150 and 225 cm depth (from ~2560 to 4210 years B.P.) and the plant community shifted from mostly Sphagnum to vascular plant dominance. Several independent biochemical indices indicated that decomposition was greatest in this interval. Hydrolysable amino acid yields, hydroxyproline yields, and acid:aldehyde ratios of syringyl lignin phenols were higher, while hydrolysable neutral sugar yields and carbon:nitrogen ratios were lower in this zone of both vascular plant vegetation and elevated oxygen exposure time. Thus, peat formed during the Holocene Climatic Optimum did not appear to be more extensively decomposed than peat formed during subsequent cooler periods. Comparison with a core from the West Siberian Lowland, Russia, indicates that oxygen exposure time and vegetation are both important controls on decomposition, while temperature appears to be of secondary importance. The low apparent sensitivity of decomposition to temperature is consistent with recent observations of a positive correlation between peat accumulation rates and mean annual temperature, suggesting that contemporary warming could enhance peatland carbon sequestration, although this could be offset by an increasing contribution of vascular plants to the vegetation.
Litter decay controlled by temperature, not soil properties, affecting future soil carbon.
Gregorich, Edward G; Janzen, Henry; Ellert, Benjamin H; Helgason, Bobbi L; Qian, Budong; Zebarth, Bernie J; Angers, Denis A; Beyaert, Ronald P; Drury, Craig F; Duguid, Scott D; May, William E; McConkey, Brian G; Dyck, Miles F
2017-04-01
Widespread global changes, including rising atmospheric CO2 concentrations, climate warming, and loss of biodiversity, are predicted for this century; all of these will affect terrestrial ecosystem processes like plant litter decomposition. Conversely, increased plant litter decomposition can have potential carbon-cycle feedbacks on atmospheric CO2 levels, climate warming, and biodiversity. But predicting litter decomposition is difficult because of many interacting factors related to the chemical, physical, and biological properties of soil, as well as to climate and agricultural management practices. We applied 13C-labelled plant litter to soil at ten sites spanning a 3500-km transect across the agricultural regions of Canada and measured its decomposition over five years. Despite large differences in soil type and climatic conditions, we found that the kinetics of litter decomposition were similar once the effect of temperature had been removed, indicating no measurable effect of soil properties. A two-pool exponential decay model expressing undecomposed carbon simply as a function of thermal time accurately described the kinetics of decomposition (R2 = 0.94; RMSE = 0.0508). Soil properties such as texture, cation exchange capacity, pH, and moisture, although very different among sites, had minimal discernible influence on decomposition kinetics. Using this kinetic model under different climate change scenarios, we projected that the time required to decompose 50% of the litter (i.e., the labile fractions) would be reduced by 1-4 months, whereas the time required to decompose 90% of the litter (including recalcitrant fractions) would be reduced by 1 year at cooler sites to as much as 2 years at warmer sites. These findings confirm quantitatively the sensitivity of litter decomposition to temperature increases and demonstrate how climate change may constrain future soil carbon storage, an effect apparently not influenced by soil properties.
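The two-pool model can be fit directly to fraction-of-carbon-remaining data expressed against thermal time (degree-days). The observations below are illustrative stand-ins, not the study's pooled cross-site data.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_pool(thermal_time, f_labile, k1, k2):
    """Undecomposed C fraction vs thermal time (degree-days): a fast labile
    pool plus a slow recalcitrant pool, each decaying exponentially."""
    return (f_labile * np.exp(-k1 * thermal_time)
            + (1.0 - f_labile) * np.exp(-k2 * thermal_time))

# Illustrative observations only (degree-days, fraction of 13C remaining).
tt = np.array([0.0, 500.0, 1500.0, 3000.0, 6000.0, 10000.0])
frac = np.array([1.00, 0.72, 0.55, 0.45, 0.36, 0.30])

(f_lab, k1, k2), _ = curve_fit(two_pool, tt, frac, p0=(0.5, 1e-3, 1e-5),
                               bounds=([0.0, 0.0, 0.0], [1.0, 1.0, 1.0]))
```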
Miclele Renschin; Hal O. Leichty; Michael G. Shelton
2001-01-01
Although fire has been used extensively over long periods of time in loblolly pine (Pinus taeda L.) ecosystems, little is known concerning the effects of frequent fire use on nutrient cycling and decomposition. To better understand the long-term effects of fire on these processes, foliar litter decomposition rates were quantified in a study...
Climate fails to predict wood decomposition at regional scales
NASA Astrophysics Data System (ADS)
Bradford, Mark A.; Warren, Robert J., II; Baldrian, Petr; Crowther, Thomas W.; Maynard, Daniel S.; Oldfield, Emily E.; Wieder, William R.; Wood, Stephen A.; King, Joshua R.
2014-07-01
Decomposition of organic matter strongly influences ecosystem carbon storage. In Earth-system models, climate is a predominant control on the decomposition rates of organic matter. This assumption is based on the mean response of decomposition to climate, yet there is a growing appreciation in other areas of global change science that projections based on mean responses can be irrelevant and misleading. We test whether climate controls on the decomposition rate of dead wood, a carbon stock estimated to represent 73 +/- 6 Pg carbon globally, are sensitive to the spatial scale from which they are inferred. We show that the common assumption that climate is a predominant control on decomposition is supported only when local-scale variation is aggregated into mean values. Disaggregated data instead reveal that local-scale factors explain 73% of the variation in wood decomposition, and climate only 28%. Further, the temperature sensitivity of decomposition estimated from local versus mean analyses is 1.3 times greater. Fundamental issues with mean correlations were highlighted decades ago, yet mean climate-decomposition relationships are used to generate simulations that inform management and adaptation under environmental change. Our results suggest that to predict accurately how decomposition will respond to climate change, models must account for local-scale factors that control regional dynamics.
Microbial Signatures of Cadaver Gravesoil During Decomposition.
Finley, Sheree J; Pechal, Jennifer L; Benbow, M Eric; Robertson, B K; Javan, Gulnaz T
2016-04-01
Genomic studies have estimated that there are approximately 10(3)-10(6) bacterial species per gram of soil. The microbial species found in soil associated with decomposing human remains (gravesoil) have been investigated and recognized as potential molecular determinants for estimating time since death. The nascent era of high-throughput amplicon sequencing of the conserved 16S ribosomal RNA (rRNA) gene region of gravesoil microbes is allowing research to expand beyond the more subjective empirical methods used in forensic microbiology. The goal of the present study was to evaluate microbial communities and identify taxonomic signatures associated with gravesoil from human cadavers. Using 16S rRNA gene amplicon-based sequencing, soil microbial communities were surveyed from 18 cadavers, placed on the surface or buried, that were allowed to decompose over a range of decomposition time periods (3-303 days). Surface-soil microbial communities showed a decreasing trend in taxon richness, diversity, and evenness over decomposition, while buried cadaver-soil microbial communities demonstrated increasing taxon richness, consistent diversity, and decreasing evenness. The ubiquitous Proteobacteria was confirmed as the most abundant phylum in all gravesoil samples. Surface cadaver-soil communities demonstrated a decrease in Acidobacteria and an increase in Firmicutes relative abundance over decomposition, while buried soil communities were consistent in their community composition throughout decomposition. A better understanding of microbial community structure and its shifts over time may be important for advancing general knowledge of decomposition soil ecology and its potential use during forensic investigations.
Study on the decomposition of trace benzene over V2O5-WO3 ...
Commercial and laboratory-prepared V2O5–WO3/TiO2-based catalysts with different compositions were tested for catalytic decomposition of chlorobenzene (ClBz) in simulated flue gas. Resonance enhanced multiphoton ionization-time of flight mass spectrometry (REMPI-TOFMS) was employed to measure real-time, trace concentrations of ClBz contained in the flue gas before and after the catalyst. The effects of various parameters, including vanadium content of the catalyst, the catalyst support, as well as the reaction temperature on decomposition of ClBz were investigated. The results showed that the ClBz decomposition efficiency was significantly enhanced when nano-TiO2 instead of conventional TiO2 was used as the catalyst support. No promotion effects were found in the ClBz decomposition process when the catalysts were wet-impregnated with CuO and CeO2. Tests with different concentrations (1,000, 500, and 100 ppb) of ClBz showed that ClBz-decomposition efficiency decreased with increasing concentration, unless active sites were plentiful. A comparison between ClBz and benzene decomposition on the V2O5–WO3/TiO2-based catalyst and the relative kinetics analysis showed that two different active sites were likely involved in the decomposition mechanism and the V=O and V-O-Ti groups may only work for the degradation of the phenyl group and the benzene ring rather than the C-Cl bond. V2O5-WO3/TiO2 based catalysts, that have been used for destruction of a wide variet
Decomposing the relation between Rapid Automatized Naming (RAN) and reading ability.
Arnell, Karen M; Joanisse, Marc F; Klein, Raymond M; Busseri, Michael A; Tannock, Rosemary
2009-09-01
The Rapid Automatized Naming (RAN) test involves rapidly naming sequences of items presented in a visual array. RAN has generated considerable interest because RAN performance predicts reading achievement. This study sought to determine what elements of RAN are responsible for the shared variance between RAN and reading performance using a series of cognitive tasks and a latent variable modelling approach. Participants performed RAN measures, a test of reading speed and comprehension, and six tasks, which tapped various hypothesised components of the RAN. RAN shared 10% of the variance with reading comprehension and 17% with reading rate. Together, the decomposition tasks explained 52% and 39% of the variance shared between RAN and reading comprehension and between RAN and reading rate, respectively. Significant predictors suggested that working memory encoding underlies part of the relationship between RAN and reading ability.
NASA Astrophysics Data System (ADS)
Mao, J.; Chen, N.; Harmon, M. E.; Li, Y.; Cao, X.; Chappell, M.
2012-12-01
Advanced 13C solid-state NMR techniques were employed to study the chemical structural changes of litter decomposition across broad spatial and long temporal scales. The fresh and decomposed litter samples of four species (Acer saccharum (ACSA), Drypetes glauca (DRGL), Pinus resinosa (PIRE), and Thuja plicata (THPL)) incubated for up to 10 years at four sites under different climatic conditions (from Arctic to tropical forest) were examined. Decomposition generally led to an enrichment of cutin and surface wax materials and a depletion of carbohydrates, causing the overall compositions to become more similar to one another than those of the original litters. However, the changes in the main constituents were inconsistent across the four litters, which followed different pathways of decomposition at the same site. As decomposition proceeded, waxy materials decreased at the early stage and then gradually increased in PIRE; DRGL showed a significant depletion of lignin and tannin, while the changes in lignin and tannin were relatively small and inconsistent for ACSA and THPL. In addition, the NCH groups, which could be associated with either fungal cell wall chitin or bacterial cell wall peptidoglycan, were enriched in all litters except THPL. Contrary to the classic lignin-enrichment hypothesis, DRGL, with a low-quality C substrate, had the highest degree of composition changes. Furthermore, some samples had more "advanced" compositional changes in the intermediate stage of decomposition than in the highly-decomposed stage. This pattern might be attributed to the formation of new cross-linking structures that rendered substrates more complex and difficult for enzymes to attack. Finally, litter quality overrode climate and time factors as a control of long-term changes in chemical composition.
NASA Astrophysics Data System (ADS)
Hu, Shujuan; Chou, Jifan; Cheng, Jianbo
2018-04-01
In order to study the interactions between the atmospheric circulations at the middle-high and low latitudes from the global perspective, the authors proposed a mathematical definition of three-pattern circulations, i.e., horizontal, meridional and zonal circulations, in terms of which the actual atmospheric circulation is expanded. This novel decomposition method is proved to accurately describe the actual atmospheric circulation dynamics. The authors used the NCEP/NCAR reanalysis data to calculate the climate characteristics of the three-pattern circulations, and found that the decomposition model agreed with the observed results. Further dynamical analysis indicates that the decomposition model captures the major features of global three-dimensional atmospheric motions more accurately than the traditional definitions of the Rossby wave, Hadley circulation and Walker circulation. The decomposition model realizes, for the first time, the decomposition of the global atmospheric circulation into three orthogonal circulations within the horizontal, meridional and zonal planes, offering new opportunities to study the large-scale interactions between the middle-high latitude and low latitude circulations.
Pressure-dependent decomposition kinetics of the energetic material HMX up to 3.6 GPa.
Glascoe, Elizabeth A; Zaug, Joseph M; Burnham, Alan K
2009-12-03
The effect of pressure on the global thermal decomposition rate of the energetic material HMX was studied. HMX was precompressed in a diamond anvil cell (DAC) and heated at various rates. The parent species population was monitored as a function of time and temperature using Fourier transform infrared (FTIR) spectroscopy. Global decomposition rates were determined by fitting the fraction reacted to the extended Prout-Tompkins nucleation-growth model and the Friedman isoconversional method. The results of these experiments and analyses indicate that pressure accelerates the decomposition at low-to-moderate pressures (i.e., between ambient pressure and 0.1 GPa) and decelerates the decomposition at higher pressures. The decomposition acceleration is attributed to pressure-enhanced autocatalysis, whereas the deceleration at high pressures is attributed to pressure-inhibited bond homolysis step(s), which would result in an increase in volume. These results indicate that the thermally induced decomposition kinetics of both the beta- and delta-polymorphs of HMX are pressure sensitive.
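The Friedman isoconversional analysis used above reduces, at each fixed conversion level, to a linear fit of ln(rate) against 1/T across experiments, with slope -Ea/R. A minimal Python sketch (the data here are invented for illustration):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def friedman_Ea(rates, temps):
    """Apparent activation energy at one fixed conversion level.

    rates : d(alpha)/dt at that conversion in each experiment
    temps : sample temperature (K) at that conversion in each experiment
    """
    x = 1.0 / np.asarray(temps)
    y = np.log(np.asarray(rates))
    slope, _ = np.polyfit(x, y, 1)   # ln(rate) = ln(A f(alpha)) - Ea/(R T)
    return -slope * R                # Ea in J/mol

# Mock data: conversion alpha = 0.5 reached at different T under three heating rates
print(friedman_Ea([1.2e-3, 3.1e-3, 7.4e-3], [540.0, 555.0, 570.0]) / 1e3, "kJ/mol")
```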
Modal Decomposition of TTV: Inferring Planet Masses and Eccentricities
NASA Astrophysics Data System (ADS)
Linial, Itai; Gilbaum, Shmuel; Sari, Re’em
2018-06-01
Transit timing variations (TTVs) are a powerful tool for characterizing the properties of transiting exoplanets. However, inferring planet properties from the observed timing variations is a challenging task, which is usually addressed by extensive numerical searches. We propose a new, computationally inexpensive method for inverting TTV signals in a planetary system of two transiting planets. To the lowest order in planetary masses and eccentricities, TTVs can be expressed as a linear combination of three functions, which we call the TTV modes. These functions depend only on the planets’ linear ephemerides, and can be either constructed analytically, or by performing three orbital integrations of the three-body system. Given a TTV signal, the underlying physical parameters are found by decomposing the data as a sum of the TTV modes. We demonstrate the use of this method by inferring the mass and eccentricity of six Kepler planets that were previously characterized in other studies. Finally we discuss the implications and future prospects of our new method.
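Once the three TTV modes are constructed, the decomposition described above is an ordinary linear least-squares problem. An illustrative Python sketch with synthetic modes (the mode shapes and coefficients here are made up; in practice the modes come from the analytic construction or from three orbital integrations):

```python
import numpy as np

def decompose_ttv(ttv_obs, modes):
    """Fit observed TTVs as a linear combination of precomputed TTV modes.

    ttv_obs : (N,) observed timing residuals
    modes   : (N, 3) columns are the three TTV mode functions
    """
    coeffs, *_ = np.linalg.lstsq(modes, ttv_obs, rcond=None)
    return coeffs, modes @ coeffs

# Synthetic stand-in modes and a noisy "observed" signal
N = 40
t = np.arange(N)
modes = np.column_stack([np.sin(2*np.pi*t/13), np.cos(2*np.pi*t/13), t - t.mean()])
truth = np.array([3.0, -1.5, 0.02])
ttv = modes @ truth + 0.1 * np.random.default_rng(0).normal(size=N)
coeffs, model = decompose_ttv(ttv, modes)
print(coeffs)   # recovered amplitudes, from which masses/eccentricities follow
```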
Time-frequency dynamics of resting-state brain connectivity measured with fMRI.
Chang, Catie; Glover, Gary H
2010-03-01
Most studies of resting-state functional connectivity using fMRI employ methods that assume temporal stationarity, such as correlation and data-driven decompositions computed across the duration of the scan. However, evidence from both task-based fMRI studies and animal electrophysiology suggests that functional connectivity may exhibit dynamic changes within time scales of seconds to minutes. In the present study, we investigated the dynamic behavior of resting-state connectivity across the course of a single scan, performing a time-frequency coherence analysis based on the wavelet transform. We focused on the connectivity of the posterior cingulate cortex (PCC), a primary node of the default-mode network, examining its relationship with both the "anticorrelated" ("task-positive") network as well as other nodes of the default-mode network. It was observed that coherence and phase between the PCC and the anticorrelated network were variable in time and frequency, and statistical testing based on Monte Carlo simulations revealed the presence of significant scale-dependent temporal variability. In addition, a sliding-window correlation procedure identified other regions across the brain that exhibited variable connectivity with the PCC across the scan, which included areas previously implicated in attention and salience processing. Although it is unclear whether the observed coherence and phase variability can be attributed to residual noise or modulation of cognitive state, the present results illustrate that resting-state functional connectivity is not static, and it may therefore prove valuable to consider measures of variability, in addition to average quantities, when characterizing resting-state networks. Copyright (c) 2009 Elsevier Inc. All rights reserved.
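The sliding-window correlation procedure mentioned above is simple to express in code; a minimal Python sketch (window length, step, and the random stand-in time series are our choices, not the study's):

```python
import numpy as np

def sliding_window_corr(x, y, win, step=1):
    """Windowed Pearson correlation between two time series (e.g. seed vs. region)."""
    starts = range(0, len(x) - win + 1, step)
    return np.array([np.corrcoef(x[s:s+win], y[s:s+win])[0, 1] for s in starts])

# Mock example: 300 time points, 40-sample window, step of 5
rng = np.random.default_rng(1)
a, b = rng.normal(size=300), rng.normal(size=300)
r = sliding_window_corr(a, b, win=40, step=5)
print(r.round(2))   # the spread of r across windows reflects connectivity variability
```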
Han, Si-ping; van Duin, Adri C T; Goddard, William A; Strachan, Alejandro
2011-05-26
We studied the thermal decomposition and subsequent reaction of the energetic material nitromethane (CH(3)NO(2)) using molecular dynamics with ReaxFF, a first principles-based reactive force field. We characterize the chemistry of liquid and solid nitromethane at high temperatures (2000-3000 K) and density 1.97 g/cm(3) for times up to 200 ps. At T = 3000 K the first reaction in the decomposition of nitromethane is an intermolecular proton transfer leading to CH(3)NOOH and CH(2)NO(2). For lower temperatures (T = 2500 and 2000 K) the first reaction during decomposition is often an isomerization reaction involving the scission of the C-N bond and the formation of a C-O bond to form methyl nitrite (CH(3)ONO). Also at very early times we observe intramolecular proton transfer events. The main product of these reactions is H(2)O, which starts forming after these initiation steps. The appearance of H(2)O marks the beginning of the exothermic chemistry. Recent quantum-mechanics-based molecular dynamics simulations of the chemical reactions and time scales for decomposition of a crystalline sample heated to T = 3000 K for a few picoseconds are in excellent agreement with our results, providing an important, direct validation of ReaxFF.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Xiang; Yang, Chao; State Key Laboratory of Computer Science, Chinese Academy of Sciences, Beijing 100190
2015-03-15
We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn–Hilliard–Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton–Krylov–Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors.
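The study solves the CHC equation with a fully implicit Newton–Krylov–Schwarz scheme, which is well beyond a few lines of code; purely to illustrate the equation itself, here is a toy explicit 2D update (all parameters are ours, and the Cook noise is simplified to a plain additive term rather than a conservative one):

```python
import numpy as np

def lap(f, h):
    """5-point Laplacian with periodic boundaries."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4*f) / h**2

def chc_step(c, dt, h, M=1.0, kappa=1.0, noise=0.01, rng=np.random.default_rng()):
    """One explicit Euler step of a 2D Cahn-Hilliard-Cook toy model."""
    mu = c**3 - c - kappa * lap(c, h)          # chemical potential
    eta = noise * rng.normal(size=c.shape)     # thermal fluctuation (simplified)
    return c + dt * (M * lap(mu, h) + eta)

# Spinodal decomposition from a near-uniform quench
c = 0.05 * np.random.default_rng(2).normal(size=(64, 64))
for _ in range(20000):
    c = chc_step(c, dt=5e-3, h=1.0)
print(c.min(), c.max())   # the field separates toward the c = -1 / +1 phases
```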
Chowriappa, Ashirwad J; Shi, Yi; Raza, Syed Johar; Ahmed, Kamran; Stegemann, Andrew; Wilding, Gregory; Kaouk, Jihad; Peabody, James O; Menon, Mani; Hassett, James M; Kesavadas, Thenkurussi; Guru, Khurshid A
2013-12-01
A standardized scoring system does not exist in virtual reality-based assessment metrics to describe safe and crucial surgical skills in robot-assisted surgery. This study aims to develop an assessment score along with its construct validation. All subjects performed key tasks on the previously validated Fundamental Skills of Robotic Surgery curriculum, which were recorded, and metrics were stored. After an expert consensus for the purpose of content validation (Delphi), critical safety-determining procedural steps were identified from the Fundamental Skills of Robotic Surgery curriculum, and a hierarchical task decomposition of multiple parameters using a variety of metrics was used to develop the Robotic Skills Assessment Score (RSA-Score). Robotic Skills Assessment mainly focuses on safety in the operative field, critical errors, economy, bimanual dexterity, and time. The RSA-Score was then evaluated for construct validity and feasibility. Spearman correlation tests performed between tasks using the RSA-Scores indicate no cross-correlation. Wilcoxon rank sum tests were performed between the two groups. The proposed RSA-Score was evaluated on non-robotic surgeons (n = 15) and on expert robotic surgeons (n = 12). The expert group demonstrated significantly better performance on all four tasks in comparison to the novice group. Validation of the RSA-Score in this study was carried out on the Robotic Surgical Simulator. The RSA-Score is a valid scoring system that could be incorporated in any virtual reality-based surgical simulator to achieve standardized assessment of fundamental surgical tenets during robot-assisted surgery. Copyright © 2013 Elsevier Inc. All rights reserved.
Density-dependent liquid nitromethane decomposition: molecular dynamics simulations based on ReaxFF.
Rom, Naomi; Zybin, Sergey V; van Duin, Adri C T; Goddard, William A; Zeiri, Yehuda; Katz, Gil; Kosloff, Ronnie
2011-09-15
The decomposition mechanism of hot liquid nitromethane at various compressions was studied using reactive force field (ReaxFF) molecular dynamics simulations. A competition between two different initial thermal decomposition schemes is observed, depending on compression. At low densities, unimolecular C-N bond cleavage is the dominant route, producing CH(3) and NO(2) fragments. As density and pressure rise, approaching the Chapman–Jouguet detonation conditions (∼30% compression, >2500 K), the dominant mechanism switches to the formation of the CH(3)NO fragment via H-transfer and/or N-O bond rupture. The change in the decomposition mechanism of hot liquid NM leads to different kinetic and energetic behavior, as well as product distribution. The calculated density dependence of the enthalpy change correlates with the change in the initial decomposition reaction mechanism. It can be used as a convenient and useful global parameter for the detection of reaction dynamics. Atomic averaged local diffusion coefficients are shown to be sensitive to the reaction dynamics, and can be used to distinguish between time periods where chemical reactions occur and diffusion-dominated, nonreactive time periods. © 2011 American Chemical Society
Dynamics in the Decompositions Approach to Quantum Mechanics
NASA Astrophysics Data System (ADS)
Harding, John
2017-12-01
In Harding (Trans. Amer. Math. Soc. 348(5), 1839-1862 1996) it was shown that the direct product decompositions of any non-empty set, group, vector space, and topological space X form an orthomodular poset Fact X. This is the basis for a line of study in foundational quantum mechanics replacing Hilbert spaces with other types of structures. Here we develop dynamics and an abstract version of a time independent Schrödinger's equation in the setting of decompositions by considering representations of the group of real numbers in the automorphism group of the orthomodular poset Fact X of decompositions.
Optimizing spectral CT parameters for material classification tasks
NASA Astrophysics Data System (ADS)
Rigie, D. S.; La Rivière, P. J.
2016-06-01
In this work, we propose a framework for optimizing spectral CT imaging parameters and hardware design with regard to material classification tasks. Compared with conventional CT, many more parameters must be considered when designing spectral CT systems and protocols. These choices will impact material classification performance in a non-obvious, task-dependent way with direct implications for radiation dose reduction. In light of this, we adapt Hotelling Observer formalisms typically applied to signal detection tasks to the spectral CT, material-classification problem. The result is a rapidly computable metric that makes it possible to sweep out many system configurations, generating parameter optimization curves (POC’s) that can be used to select optimal settings. The proposed model avoids restrictive assumptions about the basis-material decomposition (e.g. linearity) and incorporates signal uncertainty with a stochastic object model. This technique is demonstrated on dual-kVp and photon-counting systems for two different, clinically motivated material classification tasks (kidney stone classification and plaque removal). We show that the POC’s predicted with the proposed analytic model agree well with those derived from computationally intensive numerical simulation studies.
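For a two-class material task with approximately Gaussian measurement statistics, the Hotelling observer reduces to a prewhitened matched filter, and the separability metric swept over system settings is a quadratic form. A minimal Python sketch (the two-channel means and covariance below are invented for illustration):

```python
import numpy as np

def hotelling_separability(mu1, mu2, cov):
    """Hotelling-observer separability (SNR^2) for a binary classification task.

    mu1, mu2 : mean measurement vectors of the two material classes
    cov      : shared measurement covariance (noise + object variability)
    """
    d = mu1 - mu2
    w = np.linalg.solve(cov, d)     # Hotelling template: cov^-1 (mu1 - mu2)
    return d @ w                    # larger = easier classification

# Mock dual-energy measurements of two candidate materials
mu_a, mu_b = np.array([1.00, 0.80]), np.array([0.95, 0.90])
cov = np.array([[0.004, 0.001],
                [0.001, 0.006]])
print(hotelling_separability(mu_a, mu_b, cov))
```

Sweeping a scalar of this kind over candidate kVp pairs, filtrations, or energy thresholds is what traces out parameter optimization curves of the sort described above.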
Lamb Waves Decomposition and Mode Identification Using Matching Pursuit Method
2009-01-01
Time-frequency representations discussed in the report include the short-time Fourier transform (STFT), the wavelet transform, the Wigner-Ville distribution (WVD), and matching pursuit (MP) decomposition. The WVD suffers from severe interferences, called cross-terms, which occupy time-frequency regions between the true signal components. MP decomposition using a chirplet dictionary was applied to a simulated S0-mode Lamb wave, and its Wigner-Ville distribution was examined.
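Matching pursuit itself is a short greedy loop: project the residual on every dictionary atom, keep the strongest, subtract, repeat. The report uses a chirplet dictionary; the sketch below uses plain Gabor atoms for brevity, and every parameter is ours:

```python
import numpy as np

def gabor_atom(n, t0, f, s):
    """Unit-norm Gaussian-windowed sinusoid (a chirplet with zero chirp rate)."""
    t = np.arange(n)
    g = np.exp(-0.5 * ((t - t0) / s)**2) * np.cos(2*np.pi*f*t)
    return g / np.linalg.norm(g)

def matching_pursuit(x, atoms, n_iter=10):
    """Greedy MP: repeatedly subtract the best-correlated dictionary atom."""
    residual = x.astype(float).copy()
    picks = []
    for _ in range(n_iter):
        corr = atoms @ residual                  # projections onto unit-norm atoms
        k = int(np.argmax(np.abs(corr)))
        picks.append((k, corr[k]))
        residual -= corr[k] * atoms[k]
    return picks, residual

# Small time-frequency-scale dictionary and a noisy one-atom test signal
n = 256
atoms = np.array([gabor_atom(n, t0, f, s)
                  for t0 in range(0, n, 32)
                  for f in (0.05, 0.1, 0.2)
                  for s in (8, 16)])
signal = 2 * gabor_atom(n, 96, 0.1, 16) + 0.1*np.random.default_rng(3).normal(size=n)
picks, res = matching_pursuit(signal, atoms, n_iter=5)
print(picks[0], float(np.linalg.norm(res)))   # first pick recovers the planted atom
```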
Oliveira, Diego L; Soares, Thiago F; Vasconcelos, Simão D
2016-01-01
Insects associated with carrion can have parasitological importance as vectors of several pathogens and causal agents of myiasis in humans and in domestic and wild animals. We tested the attractiveness of animal baits (chicken liver) at different stages of decomposition to necrophagous species of Diptera (Calliphoridae, Fanniidae, Muscidae, Phoridae and Sarcophagidae) in a rainforest fragment in Brazil. Five types of bait were used: fresh, and decomposed at room temperature (26 °C) for 24, 48, 72 and 96 h. A positive correlation was detected between the time of decomposition and the abundance of Calliphoridae and Muscidae, whilst the abundance of adults of Phoridae decreased with the time of decomposition. Ten species of calliphorids were registered, of which Chrysomya albiceps, Chrysomya megacephala and Chloroprocta idioidea showed a significant positive correlation between abundance and decomposition. Specimens of Sarcophagidae and Fanniidae did not discriminate between fresh and highly decomposed baits. A strong female bias was registered for all species of Calliphoridae irrespective of the type of bait. The results reinforce the feasibility of using animal tissues as attractants for a wide diversity of dipterans of medical, parasitological and forensic importance in short-term surveys, especially using baits at intermediate stages of decomposition.
Aerospace engineering design by systematic decomposition and multilevel optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Barthelemy, J. F. M.; Giles, G. L.
1984-01-01
A method is described for systematic analysis and optimization of large engineering systems by decomposition of a large task into a set of smaller subtasks that are solved concurrently. The subtasks may be arranged in hierarchical levels. Analyses are carried out in each subtask using inputs received from other subtasks, and are followed by optimizations carried out from the bottom up. Each optimization at the lower levels is augmented by an analysis of its sensitivity to the inputs received from other subtasks, to account for the couplings among the subtasks in a formal manner. The analysis and optimization operations alternate iteratively until they converge to a system design whose performance is maximized with all constraints satisfied. The method, which is still under development, is tentatively validated by test cases in structural applications and an aircraft configuration optimization.
Towards Interactive Construction of Topical Hierarchy: A Recursive Tensor Decomposition Approach
Wang, Chi; Liu, Xueqing; Song, Yanglei; Han, Jiawei
2015-01-01
Automatic construction of user-desired topical hierarchies over large volumes of text data is a highly desirable but challenging task. This study proposes to give users freedom to construct topical hierarchies via interactive operations such as expanding a branch and merging several branches. Existing hierarchical topic modeling techniques are inadequate for this purpose because (1) they cannot consistently preserve the topics when the hierarchy structure is modified; and (2) the slow inference prevents swift response to user requests. In this study, we propose a novel method, called STROD, that allows efficient and consistent modification of topic hierarchies, based on a recursive generative model and a scalable tensor decomposition inference algorithm with theoretical performance guarantee. Empirical evaluation shows that STROD reduces the runtime of construction by several orders of magnitude, while generating consistent and quality hierarchies. PMID:26705505
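The scalable inference in STROD rests on tensor decomposition of moment statistics. The recursive model itself is beyond a sketch, but the flavor of the core step, a CP (PARAFAC) decomposition of a third-order tensor, can be shown with the open-source tensorly package (assumed available; the rank-3 synthetic tensor is ours, not the STROD model):

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Plant a rank-3 structure in a 50x50x50 tensor, then add a little noise
rng = np.random.default_rng(4)
A, B, C = (rng.random((50, 3)) for _ in range(3))
T = tl.cp_to_tensor((np.ones(3), [A, B, C])) + 0.01*rng.normal(size=(50, 50, 50))

# Recover three 50x3 factor matrices (the analogue of topic components)
weights, factors = parafac(tl.tensor(T), rank=3, n_iter_max=200)
print([f.shape for f in factors])
```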
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glascoe, E A; Zaug, J M; Burnham, A K
The effect of pressure on the thermal decomposition rate of the energetic material HMX was studied. HMX was precompressed in a diamond anvil cell (DAC) and heated at various rates. The parent species population was monitored as a function of time and temperature using Fourier transform infrared (FTIR) spectroscopy. Decomposition rates were determined by fitting the fraction reacted to the extended Prout-Tompkins nucleation-growth model and the Friedman isoconversional method. The results of these experiments and analysis indicate that pressure accelerates the decomposition at low to moderate pressures (i.e., between ambient pressure and 1 GPa) and decelerates the decomposition at higher pressures. The decomposition acceleration is attributed to pressure-enhanced autocatalysis, whereas the deceleration at high pressures is attributed to pressure-inhibited bond homolysis step(s), which would result in an increase in volume. These results indicate that the thermally induced decomposition kinetics of both beta- and delta-phase HMX are pressure sensitive.
Integrated structure/control design - Present methodology and future opportunities
NASA Technical Reports Server (NTRS)
Weisshaar, T. A.; Newsom, J. R.; Zeiler, T. A.; Gilbert, M. G.
1986-01-01
Attention is given to current methodology applied to the integration of the optimal design process for structures and controls. Multilevel linear decomposition techniques proved to be most effective in organizing the computational efforts necessary for ISCD (integrated structures and control design) tasks. With the development of large orbiting space structures and actively controlled, high performance aircraft, there will be more situations in which this concept can be applied.
Study of Track Irregularity Time Series Calibration and Variation Pattern at Unit Section
Jia, Chaolong; Wei, Lili; Wang, Hanning; Yang, Jiulin
2014-01-01
Focusing on problems existing in track irregularity time series data quality, this paper first presents abnormal data identification, data offset correction, local outlier identification, and noise cancellation algorithms, and then proposes track irregularity time series decomposition and reconstruction through a wavelet decomposition and reconstruction approach. Finally, the patterns and features of the track irregularity standard deviation data sequence in unit sections are studied, and the changing trend of the track irregularity time series is identified and described. PMID:25435869
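The wavelet decomposition/reconstruction step described above can be prototyped in a few lines with the PyWavelets package (assumed available); the wavelet choice, level, and threshold below are arbitrary placeholders:

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=4, thresh=0.1):
    """Decompose, soft-threshold the detail coefficients, and reconstruct."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

# Mock series: smooth trend + a local defect + measurement noise
rng = np.random.default_rng(5)
n = 1024
x = np.sin(np.linspace(0, 8*np.pi, n)) + 0.3*rng.normal(size=n)
x[500:510] += 2.0                         # local irregularity
print(float(np.abs(wavelet_denoise(x) - x).mean()))
```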
Bailey, E A; Dutton, A W; Mattingly, M; Devasia, S; Roemer, R B
1998-01-01
Reduced-order modelling techniques can make important contributions in the control and state estimation of large systems. In hyperthermia, reduced-order modelling can provide a useful tool by which a large thermal model can be reduced to the most significant subset of its full-order modes, making real-time control and estimation possible. Two such reduction methods, one based on modal decomposition and the other on balanced realization, are compared in the context of simulated hyperthermia heat transfer problems. The results show that the modal decomposition reduction method has three significant advantages over that of balanced realization. First, modal decomposition reduced models result in less error, when compared to the full-order model, than balanced realization reduced models of similar order in problems with low or moderate advective heat transfer. Second, because the balanced realization based methods require a priori knowledge of the sensor and actuator placements, the reduced-order model is not robust to changes in sensor or actuator locations, a limitation not present in modal decomposition. Third, the modal decomposition transformation is less demanding computationally. On the other hand, in thermal problems dominated by advective heat transfer, numerical instabilities make modal decomposition based reduction problematic. Modal decomposition methods are therefore recommended for reduction of models in which advection is not dominant and research continues into methods to render balanced realization based reduction more suitable for real-time clinical hyperthermia control and estimation.
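Modal-decomposition reduction of a linear thermal model x' = Ax + Bu keeps only the slowest-decaying eigenmodes. A minimal Python sketch on a toy conduction chain (the system matrices are ours, standing in for a full bioheat model):

```python
import numpy as np

def modal_reduce(A, B, k):
    """Project x' = A x + B u onto its k slowest-decaying eigenmodes."""
    lam, V = np.linalg.eig(A)
    idx = np.argsort(-lam.real)[:k]       # least-negative eigenvalues = slowest modes
    Vk = V[:, idx]
    return np.diag(lam[idx]), np.linalg.pinv(Vk) @ B, Vk   # Ar, Br, lifting matrix

# Toy 50-node conduction chain with one heat source as the actuator
n = 50
A = -2*np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
B = np.zeros((n, 1)); B[n // 2] = 1.0
Ar, Br, Vk = modal_reduce(A, B, k=5)
print(Ar.shape, Br.shape)   # 5-state model; full field recovered as Vk @ x_reduced
```

Balanced realization, by contrast, needs the actuator and sensor matrices up front, which is the placement-robustness limitation the abstract notes.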
Automatic network coupling analysis for dynamical systems based on detailed kinetic models.
Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich
2005-10-01
We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
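The mode-counting step, singular value decomposition of a sensitivity matrix with a relative cutoff, looks roughly like this in Python (the matrix, tolerance, and sizes are invented for illustration):

```python
import numpy as np

def active_modes(S, tol=1e-3):
    """Count locally active dynamical modes from a sensitivity matrix S."""
    s = np.linalg.svd(S, compute_uv=False)
    return int(np.sum(s / s[0] > tol)), s   # modes above the relative cutoff

# Mock sensitivity matrix with two dominant directions plus tiny perturbations
rng = np.random.default_rng(6)
S = rng.normal(size=(8, 2)) @ rng.normal(size=(2, 8)) + 1e-6*rng.normal(size=(8, 8))
n_active, s = active_modes(S)
print(n_active, s.round(6))   # ~2 active modes -> a 2-dimensional local model
```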
Dumas, Raphaël; Jacquelin, Eric
2017-09-06
The so-called soft tissue artefacts and wobbling masses have both been widely studied in biomechanics, though usually separately, from either a kinematic or a dynamic point of view. As such, the estimation of the stiffness of the springs connecting the wobbling masses to the rigid-body model of the lower limb, based on the in vivo displacements of the skin relative to the underlying bone, has not been performed yet. For this estimation, the displacements of the skin markers in the bone-embedded coordinate systems are viewed as a proxy for the wobbling mass movement. The present study applied a structural vibration analysis method called smooth orthogonal decomposition to estimate this stiffness from retrospective simultaneous measurements of skin and intra-cortical pin markers during running, walking, cutting and hopping. For the translations about the three axes of the bone-embedded coordinate systems, the estimated stiffness coefficients (i.e. between 2.3 kN/m and 55.5 kN/m) as well as the corresponding forces representing the connection between bone and skin (i.e. up to 400 N) and corresponding frequencies (i.e. in the 10-30 Hz band) were in agreement with the literature. Consistent with the STA descriptions, the estimated stiffness coefficients were found to be subject- and task-specific. Copyright © 2017 Elsevier Ltd. All rights reserved.
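Smooth orthogonal decomposition amounts to a generalized symmetric eigenproblem between the covariance of the displacements and the covariance of their time derivatives; each eigenvalue behaves like 1/omega^2 for a modal frequency omega. A minimal Python sketch on synthetic two-channel data (all signals and parameters are ours):

```python
import numpy as np
from scipy.linalg import eigh

def smooth_orthogonal_decomposition(X, dt):
    """SOD of displacement data X (time x channels): cov(X) psi = lam cov(dX/dt) psi."""
    X = X - X.mean(axis=0)
    V = np.gradient(X, dt, axis=0)               # velocities by finite differences
    lam, psi = eigh(np.cov(X.T), np.cov(V.T))    # generalized eigenproblem
    return 1.0 / (2*np.pi*np.sqrt(lam)), psi     # modal frequencies (Hz), mode shapes

# Two noisy oscillations at ~12 and ~25 Hz sampled at 500 Hz
t = np.arange(0, 2, 1/500)
X = np.column_stack([np.sin(2*np.pi*12*t), 0.5*np.sin(2*np.pi*25*t)])
X += 0.01 * np.random.default_rng(7).normal(size=X.shape)
freqs, _ = smooth_orthogonal_decomposition(X, dt=1/500)
print(np.sort(freqs))   # ~[12, 25]; stiffness then follows as k = m (2 pi f)^2
```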
Moving Beyond ERP Components: A Selective Review of Approaches to Integrate EEG and Behavior
Bridwell, David A.; Cavanagh, James F.; Collins, Anne G. E.; Nunez, Michael D.; Srinivasan, Ramesh; Stober, Sebastian; Calhoun, Vince D.
2018-01-01
Relationships between neuroimaging measures and behavior provide important clues about brain function and cognition in healthy and clinical populations. While electroencephalography (EEG) provides a portable, low cost measure of brain dynamics, it has been somewhat underrepresented in the emerging field of model-based inference. We seek to address this gap in this article by highlighting the utility of linking EEG and behavior, with an emphasis on approaches for EEG analysis that move beyond focusing on peaks or “components” derived from averaging EEG responses across trials and subjects (generating the event-related potential, ERP). First, we review methods for deriving features from EEG in order to enhance the signal within single-trials. These methods include filtering based on user-defined features (i.e., frequency decomposition, time-frequency decomposition), filtering based on data-driven properties (i.e., blind source separation, BSS), and generating more abstract representations of data (e.g., using deep learning). We then review cognitive models which extract latent variables from experimental tasks, including the drift diffusion model (DDM) and reinforcement learning (RL) approaches. Next, we discuss ways to access associations among these measures, including statistical models, data-driven joint models and cognitive joint modeling using hierarchical Bayesian models (HBMs). We think that these methodological tools are likely to contribute to theoretical advancements, and will help inform our understandings of brain dynamics that contribute to moment-to-moment cognitive function. PMID:29632480
Chellali, Amine; Schwaitzberg, Steven D.; Jones, Daniel B.; Romanelli, John; Miller, Amie; Rattner, David; Roberts, Kurt E.; Cao, Caroline G.L.
2014-01-01
Background NOTES is an emerging technique for performing surgical procedures, such as cholecystectomy. Debate about its real benefit over the traditional laparoscopic technique is ongoing. There have been several clinical studies comparing NOTES to conventional laparoscopic surgery. However, no work has been done to compare these techniques from a Human Factors perspective. This study presents a systematic analysis describing and comparing different existing NOTES methods to laparoscopic cholecystectomy. Methods Videos of endoscopic/laparoscopic views from fifteen live cholecystectomies were analyzed to conduct a detailed task analysis of the NOTES technique. A hierarchical task analysis of laparoscopic cholecystectomy and several hybrid transvaginal NOTES cholecystectomies was performed and validated by expert surgeons. To identify similarities and differences between these techniques, their hierarchical decomposition trees were compared. Finally, a timeline analysis was conducted to compare the steps and substeps. Results At least three variations of the NOTES technique were used for cholecystectomy. Differences between the observed techniques at the substep level of the hierarchy and in the instruments being used were found. The timeline analysis showed an increase in the time taken to perform some surgical steps and substeps in NOTES compared to laparoscopic cholecystectomy. Conclusion As pure NOTES is extremely difficult given the current state of development in instrumentation design, most surgeons utilize different hybrid methods – combinations of endoscopic and laparoscopic instruments/optics. Our hierarchical task analysis identified three different hybrid methods to perform cholecystectomy, with significant variability amongst them. The varying degrees to which laparoscopic instruments are utilized to assist in NOTES methods appear to introduce different technical issues and additional tasks leading to an increase in the surgical time. The NOTES continuum of invasiveness is proposed here as a classification scheme for these methods, which was used to construct a clear roadmap for training and technology development. PMID:24902811
Effect of Aging on ERP Components of Cognitive Control
Kropotov, Juri; Ponomarev, Valery; Tereshchenko, Ekaterina P.; Müller, Andreas; Jäncke, Lutz
2016-01-01
As people age, their performance on tasks requiring cognitive control often declines. Such a decline is frequently explained as either a general or specific decline in cognitive functioning with age. In the context of hypotheses suggesting a general decline, it is often proposed that processing speed generally declines with age. A further hypothesis is that an age-related compensation mechanism is associated with a specific cognitive decline. One prominent theory is the compensation hypothesis, which proposes that deteriorated functions are compensated for by higher performing functions. In this study, we used event-related potentials (ERPs) in the context of a GO/NOGO task to examine the age-related changes observed during cognitive control in a large group of healthy subjects aged between 18 and 84 years. The main question we attempted to answer was whether we could find neurophysiological support for either a general decline in processing speed or a compensation strategy. The subjects performed a relatively demanding cued GO/NOGO task with similar omissions and reaction times across the five age groups. The ERP waves of cognitive control, such as N2, P3cue and CNV, were decomposed into latent components by means of a blind source separation method. Based on this decomposition, it was possible to more precisely delineate the different neurophysiological and psychological processes involved in cognitive control. These data support the processing speed hypothesis because the latencies of all cognitive control ERP components increased with age, by 8 ms per decade for the early components (<200 ms) and by 20 ms per decade for the late components. At the same time, the compensatory hypothesis of aging was also supported, as the amplitudes of the components localized in posterior brain areas decreased with age, while those localized in the prefrontal cortical areas increased with age in order to maintain performance on this simple task at a relatively stable level. PMID:27092074
Nakasaki, Kiyohiko; Ohtaki, Akihito
2002-01-01
Using dog food as a model of the organic waste that comprises composting raw material, the degradation pattern of organic materials was examined by continuously measuring the quantity of CO2 evolved during the composting process in both batch and fed-batch operations. A simple numerical model was made on the basis of three suppositions for describing the organic matter decomposition in the batch operation. First, a certain quantity of carbon in the dog food was assumed to be recalcitrant to degradation in the composting reactor within the retention time allowed. Second, it was assumed that the decomposition rate of carbon is proportional to the quantity of easily degradable carbon, that is, the carbon recalcitrant to degradation was subtracted from the total carbon remaining in the dog food. Third, a certain lag time is assumed to occur before the start of active decomposition of organic matter in the dog food; this lag corresponds to the time required for microorganisms to proliferate and become active. It was then ascertained that the decomposition pattern for the organic matter in the dog food during the fed-batch operation could be predicted by the numerical model with the parameters obtained from the batch operation. This numerical model was modified so that the change in dry weight of composting materials could be obtained. The modified model was found suitable for describing the organic matter decomposition pattern in an actual fed-batch composting operation of the garbage obtained from a restaurant, approximately 10 kg d(-1) loading for 60 d.
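Read literally, the three suppositions define a first-order decay of the easily degradable carbon pool after a lag, with CO2 evolution proportional to what remains; fed-batch additions superpose delayed copies. A sketch of that reading in Python (all parameter values are invented):

```python
import numpy as np

def co2_rate(t, c0, f_rec, k, lag, feeds=()):
    """CO2-C evolution rate under the abstract's three suppositions.

    c0    : carbon in the initial charge
    f_rec : recalcitrant fraction, never degraded within the run (supposition 1)
    k     : first-order rate constant, 1/day (supposition 2)
    lag   : days before active decomposition starts (supposition 3)
    feeds : (time, carbon) pairs for fed-batch additions
    """
    easy = np.zeros_like(t)
    for t0, c in [(0.0, c0)] + list(feeds):
        on = t >= t0 + lag
        easy[on] += (1 - f_rec) * c * np.exp(-k * (t[on] - t0 - lag))
    return k * easy

t = np.linspace(0, 60, 601)
rate = co2_rate(t, c0=100.0, f_rec=0.3, k=0.25, lag=1.5, feeds=[(10, 50), (20, 50)])
print(float(rate.max()))   # each feeding produces a delayed pulse of CO2 evolution
```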
Laser decontamination and decomposition of PCB-containing paint
NASA Astrophysics Data System (ADS)
Anthofer, A.; Kögler, P.; Friedrich, C.; Lippmann, W.; Hurtado, A.
2017-01-01
Decontamination of concrete surfaces contaminated with paint containing polychlorinated biphenyls is an elaborate and complex task that must be performed within the scope of nuclear power plant dismantling as well as conventional pollutant cleanup in buildings. The state of the art is mechanical decontamination, which generates dust as well as secondary waste and is both dangerous and physically demanding. Moreover, the ablated PCB-containing paint has to be treated in a separate process step. Laser technology offers a multitude of possibilities for contactless surface treatment with no restoring forces and a high potential for automation. An advanced experimental setup was developed for performing standard laser decontamination investigations on PCB-painted concrete surfaces. As tested with epoxy paints, a high-power diode laser with a laser power of 10 kW in continuous wave (CW) mode was implemented and resulted in decontamination of the concrete surfaces as well as significant PCB decomposition. The experimental results showed PCB removal of 96.8% from the concrete surface and PCB decomposition of 88.8% in the laser decontamination process. Significant PCDD/F formation was thereby avoided. A surface ablation rate of approx. 7.2 m2/h was realized.
Morais, Helena; Ramos, Cristina; Forgács, Esther; Cserháti, Tibor; Oliviera, José
2002-04-25
The effect of light, storage time and temperature on the decomposition rate of monomeric anthocyanin pigments extracted from skins of grape (Vitis vinifera var. Red globe) was determined by reversed-phase high-performance liquid chromatography (RP-HPLC). The impact of various storage conditions on the pigment stability was assessed by stepwise regression analysis. RP-HPLC separated well the five anthocyanins identified and proved the presence of other unidentified pigments at lower concentrations. Stepwise regression analysis confirmed that the overall decomposition rate of monomeric anthocyanins, peonidin-3-glucoside and malvidin-3-glucoside significantly depended on the time and temperature of storage, the effect of storage time being the most important. The presence or absence of light exerted a negligible impact on the decomposition rate.
William S. Currie; Mark E. Harmon; Ingrid C. Burke; Stephen C. Hart; William J. Parton; Whendee L. Silver
2009-01-01
We analyzed results from 10-year long field incubations of foliar and fine root litter from the Long-term lntersite Decomposition Experiment Team (LIDET) study. We tested whether a variety of climate and litter quality variables could be used to develop regression models of decomposition parameters across wide ranges in litter quality and climate and whether these...
Perrault, Katelynn A; Stefanuto, Pierre-Hugues; Stuart, Barbara H; Rai, Tapan; Focant, Jean-François; Forbes, Shari L
2015-09-01
Cadaver-detection dogs use volatile organic compounds (VOCs) to search for human remains including those deposited on or beneath soil. Soil can act as a sink for VOCs, causing loading of decomposition VOCs in the soil following soft tissue decomposition. The objective of this study was to chemically profile decomposition VOCs from surface decomposition sites after remains were removed from their primary location. Pig carcasses were used as human analogues and were deposited on a soil surface to decompose for 3 months. The remains were then removed from each site and VOCs were collected from the soil for 7 months thereafter and analyzed by comprehensive two-dimensional gas chromatography-time-of-flight mass spectrometry (GC×GC-TOFMS). Decomposition VOCs diminished within 6 weeks and hydrocarbons were the most persistent compound class. Decomposition VOCs could still be detected in the soil after 7 months using Principal Component Analysis. This study demonstrated that the decomposition VOC profile, while detectable by GC×GC-TOFMS in the soil, was considerably reduced and altered in composition upon removal of remains. Chemical reference data is provided by this study for future investigations of canine alert behavior in scenarios involving scattered or scavenged remains.
Code of Federal Regulations, 2012 CFR
2012-07-01
… times. (c) Vanadium decomposition wet air pollution control. BPT limitations for the Secondary… [tabulated effluent limits, expressed as average mg/kg (pounds per million pounds) of vanadium produced by decomposition; e.g., Arsenic 0.000 0.000 …]
Ma, Haixia; Yan, Biao; Li, Zhaona; Guan, Yulei; Song, Jirong; Xu, Kangzhen; Hu, Rongzu
2009-09-30
NTO·DNAZ was prepared by mixing 3,3-dinitroazetidine (DNAZ) and 3-nitro-1,2,4-triazol-5-one (NTO) in ethanol solution. The thermal behavior of the title compound was studied under non-isothermal conditions by DSC and TG/DTG methods. The kinetic parameters were obtained from analysis of the DSC and TG/DTG curves by the Kissinger method, the Ozawa method, the differential method and the integral method. The main exothermic decomposition reaction mechanism of NTO·DNAZ is classified as a chemical reaction, and the kinetic parameters of the reaction are E(a)=149.68 kJ mol(-1) and A=10(15.81) s(-1). The specific heat capacity of the title compound was determined with the continuous C(p) mode of a microcalorimeter. The standard molar heat capacity of NTO·DNAZ was 352.56 J mol(-1) K(-1) at 298.15 K. Using the relationship between C(p) and T together with the thermal decomposition parameters, the time of the thermal decomposition from initiation to thermal explosion (adiabatic time-to-explosion) was obtained.
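The Kissinger analysis used above is a one-line linear fit: ln(beta/Tp^2) against 1/Tp across heating rates has slope -Ea/R. A minimal Python sketch with invented DSC peak temperatures:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def kissinger_Ea(betas, T_peaks):
    """Activation energy from DSC peak temperatures at several heating rates.

    Kissinger: ln(beta / Tp^2) = const - Ea / (R * Tp)
    """
    x = 1.0 / np.asarray(T_peaks)
    y = np.log(np.asarray(betas) / np.asarray(T_peaks)**2)
    slope, _ = np.polyfit(x, y, 1)
    return -slope * R

# Mock peaks at heating rates of 5, 10, 20 K/min
print(kissinger_Ea([5, 10, 20], [510.0, 519.0, 528.5]) / 1e3, "kJ/mol")
```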
DOE Office of Scientific and Technical Information (OSTI.GOV)
Staschus, K.
1985-01-01
In this dissertation, efficient algorithms for electric-utility capacity expansion planning with renewable energy are developed. The algorithms include a deterministic phase that quickly finds a near-optimal expansion plan using derating and a linearized approximation to the time-dependent availability of nondispatchable energy sources. A probabilistic second phase needs comparatively few computer-time-consuming probabilistic simulation iterations to modify this solution towards the optimal expansion plan. For the deterministic first phase, two algorithms, based on a Lagrangian Dual decomposition and a Generalized Benders Decomposition, are developed. The probabilistic second phase uses a Generalized Benders Decomposition approach. Extensive computational tests of the algorithms are reported. Among the deterministic algorithms, the one based on Lagrangian Duality proves fastest. The two-phase approach is shown to save up to 80% in computing time as compared to a purely probabilistic algorithm. The algorithms are applied to determine the optimal expansion plan for the Tijuana-Mexicali subsystem of the Mexican electric utility system. A strong recommendation to push conservation programs in the desert city of Mexicali results from this implementation.
NASA Astrophysics Data System (ADS)
Zhang, Xuebing; Liu, Ning; Xi, Jiaxin; Zhang, Yunqi; Zhang, Wenchun; Yang, Peipei
2017-08-01
How to analyze nonstationary response signals and extract vibration characteristics is extremely important in vibration-based structural diagnosis methods. In this work, we introduce a more suitable time-frequency decomposition method termed local mean decomposition (LMD) in place of the widely used empirical mode decomposition (EMD). By employing the LMD method, one can derive a group of component signals, each of which is more nearly stationary, and then analyze the vibration state and assess structural damage of a construction or building. We illustrate the effectiveness of LMD with synthetic data and with experimental data recorded on a simply supported reinforced concrete beam. Based on the decomposition results, an elementary method of damage diagnosis is then proposed.
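For orientation, LMD iteratively demodulates the signal with moving-average-smoothed local means and local magnitudes built from successive extrema, yielding product functions PF = envelope x frequency-modulated carrier. The sketch below is a deliberately stripped-down, single-product-function version with no boundary handling or stopping criteria, so treat it only as a reading aid:

```python
import numpy as np

def moving_average(y, w):
    w = max(3, int(w) | 1)                              # odd smoothing window
    return np.convolve(np.pad(y, w // 2, mode="edge"), np.ones(w)/w, mode="valid")

def lmd_first_pf(x, n_sift=8):
    """Crude extraction of the first LMD product function PF1 = a(t) * s(t)."""
    env, s = np.ones(x.size), x.astype(float).copy()
    for _ in range(n_sift):
        ext = np.flatnonzero(np.diff(np.sign(np.diff(s)))) + 1   # extrema indices
        if ext.size < 3:
            break
        ext = np.r_[0, ext, s.size - 1]
        m_loc = 0.5 * (s[ext[:-1]] + s[ext[1:]])        # local means
        a_loc = 0.5 * np.abs(s[ext[:-1]] - s[ext[1:]])  # local magnitudes
        seg = np.diff(ext)
        m = moving_average(np.r_[np.repeat(m_loc, seg), m_loc[-1]], seg.max())
        a = moving_average(np.r_[np.repeat(a_loc, seg), a_loc[-1]], seg.max())
        env *= a
        s = (s - m) / np.maximum(a, 1e-12)              # demodulate and iterate
    return env * s

# AM-FM test signal: PF1 should capture the modulated 25 Hz carrier
t = np.linspace(0, 1, 1000)
x = (1 + 0.5*np.cos(2*np.pi*2*t)) * np.cos(2*np.pi*25*t) + 0.5*np.sin(2*np.pi*5*t)
print(float(np.std(lmd_first_pf(x))))
```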
Using Microwave Sample Decomposition in Undergraduate Analytical Chemistry
NASA Astrophysics Data System (ADS)
Griff Freeman, R.; McCurdy, David L.
1998-08-01
A shortcoming of many undergraduate classes in analytical chemistry is that students receive little exposure to sample preparation in chemical analysis. This paper reports the progress made in introducing microwave sample decomposition into several quantitative analysis experiments at Truman State University. Two experiments being performed in our current laboratory rotation include closed vessel microwave decomposition applied to the classical gravimetric determination of nickel and the determination of sodium in snack foods by flame atomic emission spectrometry. A third lab, using open-vessel microwave decomposition for the Kjeldahl nitrogen determination is now ready for student trial. Microwave decomposition reduces the time needed to complete these experiments and significantly increases the student awareness of the importance of sample preparation in quantitative chemical analyses, providing greater breadth and realism in the experiments.
NASA Astrophysics Data System (ADS)
Fredenberg, Michael Duane
The idea that problems and tasks play a pivotal role in a mathematics lesson has a long standing in mathematics education research. Recent calls for teaching reform appeal for training teachers to better understand how students learn mathematics and to employ students' mathematical thinking as the basis for pedagogy (CCSSM, 2010; NCTM, 2000; NRC 1999). The teaching practices of (a) developing a task for a mathematics lesson and (b) modifying the task for students while enacting the lesson fit within the scope of supporting students' mathematical thinking. Surprisingly, an extensive search of the literature did not yield any research aimed to identify and refine the constituent parts of the aforementioned teaching practices in the manner called for by Grossman and colleagues (2009). Consequently, my research addresses the two questions: (a) what factors do exemplary elementary teachers consider when developing a task for a mathematics lesson? (b) what factors do they consider when they modify a task for a student when enacting a lesson? I conducted a multiple case study involving three elementary teachers, each with extensive training in the area of Cognitively Guided Instruction (CGI), as well as several years experience teaching mathematics following the principles of CGI (Carpenter et al., 1999). I recorded video of three mathematics lessons with each participant and after each lesson I conducted a semi-structured stimulated recall interview. A subsequent follow-up clinical interview was conducted soon thereafter to further explore the teacher's thoughts (Ginsberg, 1997). In addition, my methodology included interjecting myself at select times during a lesson to ask the teacher to explain her reasoning. Qualitative analysis led to a framework that identified four categories of influencing factors and seven categories of supporting objectives for the development of a task. Subsets of these factors and objectives emerged as particularly relevant when the teachers decided to modify a task. Moreover, relationships between and among the various factors were identified. The emergent framework from this study offers insight into decompositions of the two teaching practices of interest, and, in particular, the utility of the number choices made by the teachers.
NASA Astrophysics Data System (ADS)
Wang, RuLin; Zheng, Xiao; Kwok, YanHo; Xie, Hang; Chen, GuanHua; Yam, ChiYung
2015-04-01
Understanding electronic dynamics on material surfaces is fundamentally important for applications including nanoelectronics, inhomogeneous catalysis, and photovoltaics. Practical approaches based on time-dependent density functional theory for open systems have been developed to characterize the dissipative dynamics of electrons in bulk materials. The accuracy and reliability of such approaches depend critically on how the electronic structure and memory effects of surrounding material environment are accounted for. In this work, we develop a novel squared-Lorentzian decomposition scheme, which preserves the positive semi-definiteness of the environment spectral matrix. The resulting electronic dynamics is guaranteed to be both accurate and convergent even in the long-time limit. The long-time stability of electronic dynamics simulation is thus greatly improved within the current decomposition scheme. The validity and usefulness of our new approach are exemplified via two prototypical model systems: quasi-one-dimensional atomic chains and two-dimensional bilayer graphene.
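The positivity idea behind the squared-Lorentzian scheme can be illustrated with a toy fit: a sum of squared Lorentzians with non-negative amplitudes is non-negative by construction, so a spectral function decomposed this way cannot go negative the way an unconstrained Lorentzian sum can. A sketch with an invented target spectrum (not the paper's actual memory-kernel machinery):

```python
import numpy as np
from scipy.optimize import curve_fit

def squared_lorentzians(w, *p):
    """Sum of A / ((w - W)^2 + G^2)^2 terms; non-negative whenever A >= 0."""
    J = np.zeros_like(w)
    for A, W, G in zip(p[0::3], p[1::3], p[2::3]):
        J += A / ((w - W)**2 + G**2)**2
    return J

# Invented two-peak spectral function to decompose
w = np.linspace(-5, 5, 400)
target = 2.0/((w - 1.0)**2 + 0.5**2)**2 + 1.0/((w + 2.0)**2 + 1.0**2)**2

p0 = [1.0, 1.0, 1.0,  1.0, -2.0, 1.0]          # two (A, W, G) triples
popt, _ = curve_fit(squared_lorentzians, w, target, p0=p0,
                    bounds=([0, -5, 0.1]*2, [10, 5, 5]*2))
print(popt.round(2))   # recovers the planted amplitudes, centers, and widths
```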
Abril, Meritxell; Muñoz, Isabel; Casas-Ruiz, Joan P; Gómez-Gener, Lluís; Barceló, Milagros; Oliva, Francesc; Menéndez, Margarita
2015-06-01
Mediterranean rivers are extensively modified by flow regulation practises along their courses. An important part of the river impoundment in this area is related to the presence of small dams constructed mainly for water abstraction purposes. These projects drastically modified the ecosystem morphology, transforming lotic into lentic reaches and increasing their alternation along the river. Hydro-morphological differences between these reaches indicate that flow regulation can trigger important changes in ecosystem functioning. Decomposition of organic matter is an integrative process, and this complexity makes it a good indicator of changes in the ecosystem. The aim of this study was to assess the effect caused by flow regulation on ecosystem functioning at the river network scale, using wood decomposition as a functional indicator. We studied the mass loss from wood sticks during three months in different lotic and lentic reaches located along a Mediterranean river basin, in both winter and summer. Additionally, we identified the environmental factors affecting decomposition rates along the river orders. The results revealed differences in decomposition rates between sites in both seasons that were principally related to the differences between stream orders. The rates were mainly related to temperature, nutrient concentrations (NO2(-), NO3(-)) and water residence time. High-order streams with higher temperature and nutrient concentrations exhibited higher decomposition rates compared with low-order streams. The effect of flow regulation on the decomposition rates only appeared to be significant in high orders, especially in winter, when the hydrological characteristics of lotic and lentic habitats varied widely. Lotic reaches with lower water residence time exhibited greater decomposition rates compared with lentic reaches, probably due to more physical abrasion and differences in the microbial assemblages. Overall, our study revealed that in high orders the reduction of flow caused by flow regulation affects wood decomposition, indicating changes in ecosystem functioning. Copyright © 2015 Elsevier B.V. All rights reserved.
Functional and Structural Succession of Soil Microbial Communities below Decomposing Human Cadavers
Cobaugh, Kelly L.; Schaeffer, Sean M.; DeBruyn, Jennifer M.
2015-01-01
The ecological succession of microbes during cadaver decomposition has garnered interest in both basic and applied research contexts (e.g. community assembly and dynamics; forensic indicator of time since death). Yet current understanding of microbial ecology during decomposition is almost entirely based on plant litter. We know very little about microbes recycling carcass-derived organic matter despite the unique decomposition processes. Our objective was to quantify the taxonomic and functional succession of microbial populations in soils below decomposing cadavers, testing the hypotheses that a) periods of increased activity during decomposition are associated with particular taxa; and b) human-associated taxa are introduced to soils, but do not persist outside their host. We collected soils from beneath four cadavers throughout decomposition, and analyzed soil chemistry, microbial activity and bacterial community structure. As expected, decomposition resulted in pulses of soil C and nutrients (particularly ammonia) and stimulated microbial activity. There was no change in total bacterial abundances, however we observed distinct changes in both function and community composition. During active decay (7 - 12 days postmortem), respiration and biomass production rates were high: the community was dominated by Proteobacteria (increased from 15.0 to 26.1% relative abundance) and Firmicutes (increased from 1.0 to 29.0%), with reduced Acidobacteria abundances (decreased from 30.4 to 9.8%). Once decay rates slowed (10 - 23 d postmortem), respiration was elevated, but biomass production rates dropped dramatically; this community with low growth efficiency was dominated by Firmicutes (increased to 50.9%) and other anaerobic taxa. Human-associated bacteria, including the obligately anaerobic Bacteroides, were detected at high concentrations in soil throughout decomposition, up to 198 d postmortem. Our results revealed the pattern of functional and compositional succession in soil microbial communities during decomposition of human-derived organic matter, provided insight into decomposition processes, and identified putative predictor populations for time since death estimation. PMID:26067226
FACETS: multi-faceted functional decomposition of protein interaction networks
Seah, Boon-Siew; Bhowmick, Sourav S.; Forbes Dewey, C.
2012-01-01
Motivation: The availability of large-scale curated protein interaction datasets has given rise to the opportunity to investigate higher level organization and modularity within the protein–protein interaction (PPI) network using graph theoretic analysis. Despite the recent progress, systems level analysis of high-throughput PPIs remains a daunting task because of the amount of data they present. In this article, we propose a novel PPI network decomposition algorithm called FACETS in order to make sense of the deluge of interaction data using Gene Ontology (GO) annotations. FACETS finds not just a single functional decomposition of the PPI network, but a multi-faceted atlas of functional decompositions that portray alternative perspectives of the functional landscape of the underlying PPI network. Each facet in the atlas represents a distinct interpretation of how the network can be functionally decomposed and organized. Our algorithm maximizes interpretative value of the atlas by optimizing inter-facet orthogonality and intra-facet cluster modularity. Results: We tested our algorithm on the global networks from IntAct, and compared it with gold standard datasets from MIPS and KEGG. We demonstrated the performance of FACETS. We also performed a case study that illustrates the utility of our approach. Contact: seah0097@ntu.edu.sg or assourav@ntu.edu.sg Supplementary information: Supplementary data are available at the Bioinformatics online. Availability: Our software is available freely for non-commercial purposes from: http://www.cais.ntu.edu.sg/∼assourav/Facets/ PMID:22908217
Progressive Precision Surface Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duchaineau, M; Joy, KJ
2002-01-11
We introduce a novel wavelet decomposition algorithm that makes a number of powerful new surface design operations practical. Wavelets, and hierarchical representations generally, have held promise to facilitate a variety of design tasks in a unified way by approximating results very precisely, thus avoiding a proliferation of undergirding mathematical representations. However, traditional wavelet decomposition is defined from fine to coarse resolution, thus limiting its efficiency for highly precise surface manipulation when attempting to create new non-local editing methods. Our key contribution is the progressive wavelet decomposition algorithm, a general-purpose coarse-to-fine method for hierarchical fitting, based in this paper on an underlying multiresolution representation called dyadic splines. The algorithm requests input via a generic interval query mechanism, allowing a wide variety of non-local operations to be quickly implemented. The algorithm performs work proportionate to the tiny compressed output size, rather than to some arbitrarily high resolution that would otherwise be required, thus increasing performance by several orders of magnitude. We describe several design operations that are made tractable because of the progressive decomposition. Free-form pasting is a generalization of the traditional control-mesh edit, but for which the shape of the change is completely general and where the shape can be placed using a free-form deformation within the surface domain. Smoothing and roughening operations are enhanced so that an arbitrary loop in the domain specifies the area of effect. Finally, the sculpting effect of moving a tool shape along a path is simulated.
Decomposing delta, theta, and alpha time–frequency ERP activity from a visual oddball task using PCA
Bernat, Edward M.; Malone, Stephen M.; Williams, William J.; Patrick, Christopher J.; Iacono, William G.
2008-01-01
Objective Time–frequency (TF) analysis has become an important tool for assessing electrical and magnetic brain activity from event-related paradigms. In electrical potential data, theta and delta activities have been shown to underlie P300 activity, and alpha has been shown to be inhibited during P300 activity. Measures of delta, theta, and alpha activity are commonly taken from TF surfaces. However, methods for extracting relevant activity do not commonly go beyond taking means of windows on the surface, analogous to measuring activity within a defined P300 window in time-only signal representations. The current objective was to use a data-driven method to derive relevant TF components from event-related potential data from a large number of participants in an oddball paradigm. Methods A recently developed PCA approach was employed to extract TF components [Bernat, E. M., Williams, W. J., and Gehring, W. J. (2005). Decomposing ERP time-frequency energy using PCA. Clin Neurophysiol, 116(6), 1314–1334] from an ERP dataset of 2068 17-year-olds (979 males). TF activity was taken from both individual trials and condition averages. Activity including frequencies ranging from 0 to 14 Hz and time ranging from stimulus onset to 1312.5 ms was decomposed. Results A coordinated set of time–frequency events was apparent across the decompositions. Similar TF components representing earlier theta followed by delta were extracted from both individual trials and averaged data. Alpha activity, as predicted, was apparent only when time–frequency surfaces were generated from trial-level data, and was characterized by a reduction during the P300. Conclusions Theta, delta, and alpha activities were extracted with predictable time-courses. Notably, this approach was effective at characterizing data from a single electrode. Finally, decomposition of TF data generated from individual trials and condition averages produced similar results, but with predictable differences. Specifically, trial-level data evidenced more, and more varied, theta measures, and accounted for less overall variance. PMID:17027110
Thermal decomposition behavior of nano/micro bimodal feedstock with different solids loading
NASA Astrophysics Data System (ADS)
Oh, Joo Won; Lee, Won Sik; Park, Seong Jin
2018-01-01
Debinding is one of the most critical processes in powder injection molding. Parts are vulnerable to defect formation during debinding, and the long processing time of debinding decreases the production rate of the whole process. In order to determine the optimal conditions for the debinding process, the decomposition behavior of the feedstock should be understood. Since nano powder affects the decomposition behavior of feedstock, its effect needs to be investigated for nano/micro bimodal feedstocks. In this research, the effect of nano powder on the decomposition behavior of nano/micro bimodal feedstock has been studied. Bimodal powders were fabricated with different ratios of nano powder, and the critical solids loading of each powder was measured by torque rheometer. Three different feedstocks were fabricated for each powder depending on the solids loading condition. Thermogravimetric analysis (TGA) was carried out to analyze the thermal decomposition behavior of the feedstocks, and the decomposition activation energy was calculated. The results indicated that nano powder had a limited effect on feedstocks at solids loadings below the optimal range, whereas it strongly influenced the decomposition behavior at the optimal solids loading by causing polymer chain scission at high viscosity.
The initial changes of fat deposits during the decomposition of human and pig remains.
Notter, Stephanie J; Stuart, Barbara H; Rowe, Rebecca; Langlois, Neil
2009-01-01
The early stages of adipocere formation in both pig and human adipose tissue in aqueous environments have been investigated. The aims were to determine the short-term changes occurring to fat deposits during decomposition and to ascertain the suitability of pigs as models for human decomposition. Subcutaneous adipose tissue from both species after immersion in distilled water for up to six months was compared using Fourier transform infrared spectroscopy, gas chromatography-mass spectrometry and inductively coupled plasma-mass spectrometry. Changes associated with decomposition were observed, but no adipocere was formed during the initial month of decomposition for either tissue type. Early-stage adipocere formation in pig samples during later months was detected. The variable time courses for adipose tissue decomposition were attributed to differences in the distribution of total fatty acids between species. Variations in the amount of sodium, potassium, calcium, and magnesium were also detected between species. The study shows that differences in total fatty acid composition between species need to be considered when interpreting results from experimental decomposition studies using pigs as human body analogs.
NASA Astrophysics Data System (ADS)
Yi, Feng; DeLisio, Jeffery B.; Nguyen, Nam; Zachariah, Michael R.; LaVan, David A.
2017-12-01
The thermodynamics and evolved gases were measured during the rapid decomposition of copper oxide (CuO) thin films at rates exceeding 100,000 K/s. CuO decomposes to release oxygen when heated and serves as an oxidizer in reactive composites and chemical looping combustion. Other instruments have shown either one or two decomposition steps during heating. We have confirmed that CuO decomposes in two steps at both slower and higher heating rates. The decomposition path influences the reaction course in reactive Al/CuO/Al composites, and a full understanding is important in designing reactive mixtures and other new reactive materials.
Reactivity of fluoroalkanes in reactions of coordinated molecular decomposition
NASA Astrophysics Data System (ADS)
Pokidova, T. S.; Denisov, E. T.
2017-08-01
Experimental results on the coordinated molecular decomposition of RF fluoroalkanes to olefin and HF are analyzed using the model of intersecting parabolas (IPM). The kinetic parameters are calculated to allow estimates of the activation energy (E) and rate constant (k) of these reactions, based on enthalpy and IPM algorithms. Parameters E and k are found for the first time for eight RF decomposition reactions. The factors that affect activation energy E of RF decomposition (the enthalpy of the reaction, the electronegativity of the atoms of reaction centers, and the dipole-dipole interaction of polar groups) are determined. The values of E and k for reverse reactions of addition are estimated.
NASA Astrophysics Data System (ADS)
Jaber, Abobaker M.
2014-12-01
Two nonparametric methods for prediction and modeling of financial time series signals are proposed. The techniques are designed to handle non-stationary and non-linear behavior and to extract meaningful signals for reliable prediction. Using the Fourier Transform (FT), the methods select the significant decomposed signals to be employed for prediction. The proposed techniques are developed by coupling the Holt-Winters method with Empirical Mode Decomposition (EMD) and with its smoothed extension (SEMD). To demonstrate the performance of the proposed techniques, we analyze the daily closing prices of the Kuala Lumpur stock market index.
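As a concrete illustration of the coupling described above, the sketch below decomposes a series with EMD and forecasts each intrinsic mode function with Holt-Winters, summing the component forecasts. It is a minimal sketch, assuming the third-party PyEMD and statsmodels packages; the synthetic series and the additive-trend configuration stand in for the paper's actual data and settings.

    import numpy as np
    from PyEMD import EMD
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    # Synthetic stand-in for a daily closing-price series (assumption).
    rng = np.random.default_rng(0)
    t = np.arange(500)
    price = 100 + 0.05 * t + 5 * np.sin(2 * np.pi * t / 50) + rng.normal(0, 1, t.size)

    # 1. Decompose the series into intrinsic mode functions (IMFs) plus residue.
    imfs = EMD()(price)

    # 2. Forecast each decomposed signal with Holt-Winters and sum the parts.
    horizon = 20
    forecast = np.zeros(horizon)
    for imf in imfs:
        fit = ExponentialSmoothing(imf, trend="add").fit()
        forecast += fit.forecast(horizon)

    print(forecast[:5])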
Structural optimization by multilevel decomposition
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; James, B.; Dovi, A.
1983-01-01
A method is described for decomposing an optimization problem into a set of subproblems and a coordination problem which preserves coupling between the subproblems. The method is introduced as a special case of multilevel, multidisciplinary system optimization, and its algorithm is fully described for two-level optimization of structures assembled from finite elements of arbitrary type. Numerical results are given for an example of a framework to show that the decomposition method converges and yields results comparable to those obtained without decomposition. It is pointed out that optimization by decomposition should reduce the design time by allowing groups of engineers, using different computers, to work concurrently on the same large problem.
Decomposition of Time Scales in Linear Systems and Markovian Decision Processes.
1980-11-01
The extracted front matter indicates chapters on eigenstructure, ordering of state variables, and an example 8th-order power system model; Chapter 3 considers the time-scale decomposition of singularly perturbed systems.
Parallel CE/SE Computations via Domain Decomposition
NASA Technical Reports Server (NTRS)
Himansu, Ananda; Jorgenson, Philip C. E.; Wang, Xiao-Yen; Chang, Sin-Chung
2000-01-01
This paper describes the parallelization strategy and achieved parallel efficiency of an explicit time-marching algorithm for solving conservation laws. The Space-Time Conservation Element and Solution Element (CE/SE) algorithm for solving the 2D and 3D Euler equations is parallelized with the aid of domain decomposition. The parallel efficiency of the resultant algorithm on a Silicon Graphics Origin 2000 parallel computer is checked.
Optimal Multi-scale Demand-side Management for Continuous Power-Intensive Processes
NASA Astrophysics Data System (ADS)
Mitra, Sumit
With the advent of deregulation in electricity markets and an increasing share of intermittent power generation sources, the profitability of industrial consumers that operate power-intensive processes has become directly linked to the variability in energy prices. Thus, for industrial consumers that are able to adjust to the fluctuations, time-sensitive electricity prices (as part of so-called Demand-Side Management (DSM) in the smart grid) offer potential economic incentives. In this thesis, we introduce optimization models and decomposition strategies for the multi-scale Demand-Side Management of continuous power-intensive processes. On an operational level, we derive a mode formulation for scheduling under time-sensitive electricity prices. The formulation is applied to air separation plants and cement plants to minimize the operating cost. We also describe how a mode formulation can be used for industrial combined heat and power plants that are co-located at integrated chemical sites to increase operating profit by adjusting their steam and electricity production according to their inherent flexibility. Furthermore, a robust optimization formulation is developed to address the uncertainty in electricity prices by accounting for correlations and multiple ranges in the realization of the random variables. On a strategic level, we introduce a multi-scale model that provides an understanding of the value of flexibility of the current plant configuration and the value of additional flexibility in terms of retrofits for Demand-Side Management under product demand uncertainty. The integration of multiple time scales leads to large-scale two-stage stochastic programming problems, for which we need to apply decomposition strategies in order to obtain a good solution within a reasonable amount of time. Hence, we describe two decomposition schemes that can be applied to solve two-stage stochastic programming problems: first, a hybrid bi-level decomposition scheme with novel Lagrangean-type and subset-type cuts to strengthen the relaxation; second, an enhanced cross-decomposition scheme that integrates Benders decomposition and Lagrangean decomposition on a scenario basis. To demonstrate the effectiveness of our developed methodology, we provide several industrial case studies throughout the thesis.
Forensically significant scavenging guilds in the southwest of Western Australia.
O'Brien, R Christopher; Forbes, Shari L; Meyer, Jan; Dadour, Ian
2010-05-20
Estimation of time since death is an important factor in forensic investigations and the state of decomposition of a body is a prime basis for such estimations. The rate of decomposition is, however, affected by many environmental factors such as temperature, rainfall, and solar radiation as well as by indoor or outdoor location, covering and the type of surface the body is resting upon. Scavenging has the potential for major impact upon the rate of decomposition of a body, but there is little direct research upon its effect. The information that is available relates almost exclusively to North American and European contexts. The Australian faunal assemblage is unique in that it includes no native large predators or large detritivorous avians. This research investigates the animals that scavenge carcasses in natural outdoor settings in southern Western Australia and the factors which can affect each scavenger's activity. The research was conducted at four locations around Perth, Western Australia, with different environmental conditions. Pig carcasses, acting as models for the human body, were positioned in an outdoor environment with no protection from scavengers or other environmental conditions. Twenty-four hour continuous time-lapse video capture was used to observe the pattern of visits of all animals to the carcasses. The time of day, length of feeding, material fed upon, area of feeding, and any movement of the carcass were recorded for each feeding event. Some species were observed to scavenge almost continually throughout the day and night. Insectivores visited the carcasses mostly during bloat and putrefaction; omnivores fed during all stages of decomposition; and scavenging by carnivores, rare at any time, was most likely to occur during the early stages of decomposition. Avian species, which were the most prolific visitors to the carcasses in all locations, like reptiles, fed only during daylight hours. Only mammals and amphibians, which were seldom seen during diurnal hours, were nocturnal feeders. The combined effects of the whole guild of scavengers significantly accelerated the later stages of decomposition, especially in the cooler months of the year when natural decomposition was slowest.
A study of photothermal laser ablation of various polymers on microsecond time scales.
Kappes, Ralf S; Schönfeld, Friedhelm; Li, Chen; Golriz, Ali A; Nagel, Matthias; Lippert, Thomas; Butt, Hans-Jürgen; Gutmann, Jochen S
2014-01-01
To analyze the photothermal ablation of polymers, we designed a temperature measurement setup based on spectral pyrometry. The setup allows 2D temperature distributions to be acquired with 1 μm spatial and 1 μs temporal resolution, and therefore the determination of the center temperature of a laser heating process. Finite element simulations were used to verify and understand the heat conversion and heat flow in the process. With this setup, the photothermal ablation of polystyrene, poly(α-methylstyrene), a polyimide and a triazene polymer was investigated. The thermal stability, the glass transition temperature Tg and the viscosity above Tg governed the ablation process. Thermal decomposition for the applied laser pulse of about 10 μs started at temperatures similar to the start of decomposition in thermogravimetry. Furthermore, for polystyrene and poly(α-methylstyrene), both with a Tg in the range between room and decomposition temperature, ablation already occurred at temperatures well below the decomposition temperature, only 30-40 K above Tg. The mechanism was photomechanical, i.e. stress due to the thermal expansion of the polymer was responsible for ablation. Low molecular weight polymers showed differences in photomechanical ablation, corresponding to their lower Tg and lower viscosity above the glass transition. However, the difference in ablated volume was only significant at higher temperatures, in the temperature regime for thermal decomposition at quasi-equilibrium time scales.
NASA Astrophysics Data System (ADS)
Haris, A.; Pradana, G. S.; Riyanto, A.
2017-07-01
The tectonic setting of the Bird's Head of Papua Island provides an important model for petroleum systems in the eastern part of Indonesia. Exploration in the region began with the discovery of oil seepage in the Bintuni and Salawati Basins. Biogenic gas in shallow layers has turned out to be an interesting issue in hydrocarbon exploration: the appearance of hydrocarbon accumulations of dry gas type in shallow layers makes biogenic gas appealing for further research. This paper aims at delineating sweet spots of hydrocarbon potential in shallow layers by applying the spectral decomposition technique. Spectral decomposition decomposes the seismic signal into individual frequencies, which carry significant geological meaning. One spectral decomposition method is the Continuous Wavelet Transform (CWT), which transforms the seismic signal into time and frequency simultaneously, making time-frequency map analysis easier. When time resolution increases, frequency resolution decreases, and vice versa. In this study, we performed low-frequency shadow zone analysis: the amplitude anomaly observed at a low frequency of 15 Hz was compared to the amplitude at mid (20 Hz) and high (30 Hz) frequencies. The amplitude anomaly apparent at low frequency disappeared at high frequency. The spectral decomposition using the CWT algorithm was successfully applied to delineate the sweet spot zone.
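The sketch below illustrates the CWT step under stated assumptions: it uses the PyWavelets package and a Morlet wavelet, converts the three target frequencies (15, 20, 30 Hz, matching the low/mid/high comparison above) to scales via the wavelet's center frequency, and reports the amplitude of a synthetic trace at each frequency. The sampling interval and test signal are illustrative, not the study's survey parameters.

    import numpy as np
    import pywt

    dt = 0.002                                   # 2 ms sampling interval (assumed)
    t = np.arange(0, 2, dt)
    # Synthetic trace: a 15 Hz burst standing in for a low-frequency shadow.
    trace = np.sin(2 * np.pi * 15 * t) * np.exp(-((t - 1.2) / 0.2) ** 2)

    wavelet = "morl"
    freqs_hz = np.array([15.0, 20.0, 30.0])
    # Convert target frequencies to CWT scales via the wavelet center frequency.
    scales = pywt.central_frequency(wavelet) / (freqs_hz * dt)

    coefs, freqs = pywt.cwt(trace, scales, wavelet, sampling_period=dt)
    for f, row in zip(freqs, np.abs(coefs)):
        print(f"{f:5.1f} Hz: peak amplitude {row.max():.3f} at t = {t[row.argmax()]:.3f} s")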
NASA Astrophysics Data System (ADS)
Udomsungworagul, A.; Charnsethikul, P.
2018-03-01
This article introduces a methodology for solving large-scale two-phase linear programs, applied to a multiple-time-period animal diet problem under uncertainty in both the nutrient content of raw materials and the demand for finished products. The model additionally allows multiple product formulas to be manufactured in the same time period and allows inventories of raw materials and finished products to be held. Dantzig-Wolfe decomposition, Benders decomposition, and column generation techniques were combined and applied to solve the problem. The proposed procedure was programmed using VBA and the Solver tool in Microsoft Excel. A case study was used and tested in terms of efficiency and effectiveness trade-offs.
NASA Astrophysics Data System (ADS)
Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei
2018-03-01
A time varying filtering based empirical mode decomposition (TVF-EMD) method was proposed recently to solve the mode mixing problem of the EMD method. Compared with classical EMD, TVF-EMD was proven to improve frequency separation performance and to be robust to noise interference. However, the decomposition parameters (i.e., bandwidth threshold and B-spline order) significantly affect the decomposition results of this method. In the original TVF-EMD method, the parameter values are assigned in advance, which makes it difficult to achieve satisfactory analysis results. To solve this problem, this paper develops an optimized TVF-EMD method based on the grey wolf optimizer (GWO) algorithm for fault diagnosis of rotating machinery. Firstly, a measurement index termed the weighted kurtosis index is constructed from the kurtosis index and the correlation coefficient. Subsequently, the optimal TVF-EMD parameters that match the input signal are obtained by the GWO algorithm using the maximum weighted kurtosis index as the objective function. Finally, fault features are extracted by analyzing the sensitive intrinsic mode function (IMF), i.e., the one with the maximum weighted kurtosis index. Simulations and comparisons highlight the performance of the TVF-EMD method for signal decomposition, and verify that the bandwidth threshold and B-spline order are critical to the decomposition results. Two case studies on rotating machinery fault diagnosis demonstrate the effectiveness and advantages of the proposed method.
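A minimal sketch of the IMF-selection criterion follows. The abstract states only that the weighted kurtosis index combines the kurtosis index with the correlation coefficient; the product form used here is an assumption for illustration, and the function names are hypothetical.

    import numpy as np
    from scipy.stats import kurtosis

    def weighted_kurtosis_index(imf: np.ndarray, signal: np.ndarray) -> float:
        """Kurtosis of the IMF weighted by its correlation with the raw signal
        (product form assumed; the paper defines the exact combination)."""
        k = kurtosis(imf, fisher=False)
        rho = abs(np.corrcoef(imf, signal)[0, 1])
        return k * rho

    def select_sensitive_imf(imfs: np.ndarray, signal: np.ndarray) -> int:
        """Index of the sensitive IMF: the one maximizing the weighted index."""
        scores = [weighted_kurtosis_index(imf, signal) for imf in imfs]
        return int(np.argmax(scores))

    # Example: the structured component wins over pure noise.
    sig = np.sin(np.linspace(0, 20 * np.pi, 1000))
    imfs = np.stack([sig, np.random.default_rng(5).normal(size=1000)])
    print(select_sensitive_imf(imfs, sig))   # 0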
Hierarchical Goal Network Planning: Initial Results
2011-05-31
Author affiliations recovered from the front matter: svikas@cs.umd.edu, University of Maryland; Ugur Kuter, Smart Information Flow Technologies, 211 North 1st Street, Minneapolis, MN 55401, USA (ukuter@sift.net); Dana S. Nau, University of Maryland. The extracted fragment also lists references on translating HTNs to PDDL (Alford, Kuter, and Nau) and on task decomposition on abstract states for planning under nondeterminism (Kuter, Nau, Pistore, and Traverso).
System Re-engineering Project Executive Summary
1991-11-01
The system re-engineered was a Standard Army Management Information System (STAMIS) application. This project involved reverse engineering, evaluation of structured design and object-oriented design, and re-implementation of the system in Ada. This executive summary presents the approach to re-engineering the system, the lessons learned while going through the process, and issues to be considered in future tasks of this nature. Keywords: Computer-Aided Software Engineering (CASE), Distributed Software, Ada, COBOL, Systems Analysis, Systems Design, Life Cycle Development, Functional Decomposition, Object-Oriented Design.
CrossTalk: The Journal of Defense Software Engineering. Volume 27, Number 1, January/February 2014
2014-02-01
deficit in trustworthiness and will permit analysis of how this deficit needs to be overcome. This analysis will help identify adaptations that are... Approaches to trustworthy analysis split into two categories: product-based and process-based. Product-based techniques [9] identify factors that... Criticalities may also be assigned to decompositions and contributions. 5. Evaluation and analysis: in this task the propagation rules of the NFR
Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U J; Seemann, Gunnar; Dossel, Olaf; Pitman, Michael C; Rice, John J
2009-01-01
Orthogonal recursive bisection (ORB) algorithm can be used as a data decomposition strategy to distribute a large data set of a cardiac model to a distributed memory supercomputer. It has been shown previously that good scaling results can be achieved using the ORB algorithm for data decomposition. However, the ORB algorithm depends on the distribution of computational load of each element in the data set. In this work we investigated the dependence of data decomposition and load balancing on different rotations of the anatomical data set to achieve optimization in load balancing. The anatomical data set was given by both ventricles of the Visible Female data set at 0.2 mm resolution. Fiber orientation was included. The data set was rotated by 90 degrees around the x, y and z axes, respectively. By either translating or by simply taking the magnitude of the resulting negative coordinates we were able to create 14 data sets of the same anatomy with different orientation and position in the overall volume. Computation load ratios for non-tissue vs. tissue elements used in the data decomposition were 1:1, 1:2, 1:5, 1:10, 1:25, 1:38.85, 1:50 and 1:100 to investigate the effect of different load ratios on the data decomposition. The ten Tusscher et al. (2004) electrophysiological cell model was used in monodomain simulations of 1 ms simulation time to compare performance using the different data sets and orientations. The simulations were carried out for load ratios 1:10, 1:25 and 1:38.85 on a 512 processor partition of the IBM Blue Gene/L supercomputer. The results show that the data decomposition does depend on the orientation and position of the anatomy in the global volume. The difference in total run time between the data sets is 10 s for a simulation time of 1 ms. This yields a difference of about 28 h for a simulation of 10 s simulation time. However, given larger processor partitions, the difference in run time decreases and becomes less significant. Depending on the processor partition size, future work will have to consider the orientation of the anatomy in the global volume for longer simulation runs.
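To make the decomposition strategy concrete, here is a simplified sketch of orthogonal recursive bisection: each split is made along the longest axis of the current bounding box, at the point that balances the summed computational load of the two halves. The data layout and the 10:1 tissue to non-tissue load ratio are illustrative assumptions, not the cardiac model's actual format; the axis-aligned cuts are also why the resulting partitions depend on the anatomy's orientation in the global volume.

    import numpy as np

    def orb(points: np.ndarray, loads: np.ndarray, depth: int):
        """Recursively bisect points (n, 3) into 2**depth load-balanced parts."""
        if depth == 0:
            return [np.arange(len(points))]
        axis = np.ptp(points, axis=0).argmax()        # longest bounding-box axis
        order = np.argsort(points[:, axis])
        # Cut where the cumulative load reaches half of the total load.
        cut = np.searchsorted(np.cumsum(loads[order]), loads.sum() / 2)
        left, right = order[:cut], order[cut:]
        return ([left[idx] for idx in orb(points[left], loads[left], depth - 1)]
                + [right[idx] for idx in orb(points[right], loads[right], depth - 1)])

    rng = np.random.default_rng(1)
    pts = rng.random((10000, 3))
    w = np.where(rng.random(10000) < 0.3, 10.0, 1.0)  # tissue : non-tissue = 10 : 1
    parts = orb(pts, w, depth=3)                      # 8 partitions
    print([w[p].sum() for p in parts])                # near-equal loads per partition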
NASA Astrophysics Data System (ADS)
Keiser, A. D.; Strickland, M. S.; Fierer, N.; Bradford, M. A.
2011-02-01
Historical resource conditions appear to influence microbial community function. With time, historical influences might diminish as populations respond to the contemporary environment. Alternatively, they may persist given factors such as contrasting genetic potentials for adaptation to a new environment. Using experimental microcosms, we test competing hypotheses that function of distinct soil microbial communities in common environments (H1a) converge or (H1b) remain dissimilar over time. Using a 6 × 2 (soil community inoculum × litter environment) full-factorial design, we compare decomposition rates in experimental microcosms containing grass or hardwood litter environments. After 100 days, communities that develop are inoculated into fresh litters and decomposition followed for another 100 days. We repeat this for a third, 100-day period. In each successive, 100-day period, we find higher decomposition rates (i.e. functioning) suggesting communities function better when they have an experimental history of the contemporary environment. Despite these functional gains, differences in decomposition rates among initially distinct communities persist, supporting the hypothesis that dissimilarity is maintained across time. In contrast to function, community composition is more similar following a common, experimental history. We also find that "specialization" on one experimental environment incurs a cost, with loss of function in the alternate environment. For example, experimental history of a grass-litter environment reduced decomposition when communities were inoculated into a hardwood-litter environment. Our work demonstrates experimentally that despite expectations of fast growth rates, physiological flexibility and rapid evolution, initial functional differences between microbial communities are maintained across time. These findings question whether microbial dynamics can be omitted from models of ecosystem processes if we are to predict reliably global change effects on biogeochemical cycles.
NASA Technical Reports Server (NTRS)
Ziegler, C.; Schilling, D. L.
1977-01-01
Two networks consisting of single server queues, each with a constant service time, are considered. The external inputs to each network are assumed to follow some general probability distribution. Several interesting equivalencies that exist between the two networks considered are derived. This leads to the introduction of an important concept in delay decomposition. It is shown that the waiting time experienced by a customer can be decomposed into two basic components called self delay and interference delay.
Shen, Wangbing; Yuan, Yuan; Liu, Chang; Zhang, Xiaojiang; Luo, Jing; Gong, Zhe
2016-12-01
The question of whether creative insight varies across problem types has recently come to the forefront of studies of creative cognition. In the present study, to address the nature of creative insight, the coordinate-based activation likelihood estimation (ALE) technique was utilized to individually conduct three quantitative meta-analyses of neuroimaging experiments that used the compound remote associate (CRA) task, the prototype heuristic (PH) task and the Chinese character chunk decomposition (CCD) task. These tasks were chosen because they are frequently used to uncover the neurocognitive correlates of insight. Our results demonstrated that creative insight reliably activates largely non-overlapping brain regions across task types, with the exception of some shared regions: the CRA task mainly relied on the right parahippocampal gyrus, the superior frontal gyrus and the inferior frontal gyrus; the PH task primarily depended on the right middle occipital gyrus (MOG), the bilateral superior parietal lobule/precuneus, the left inferior parietal lobule, the left lingual gyrus and the left middle frontal gyrus; and the CCD task activated a broad cerebral network consisting of most dorsolateral and medial prefrontal regions, frontoparietal regions and the right MOG. These results provide the first neural evidence of the task dependence of creative insight. The implications of these findings for resolving conflict surrounding the different theories of creative cognition and for defining insight as a set of heterogeneous processes are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groer, Christopher S; Sullivan, Blair D; Weerapurage, Dinesh P
2012-10-01
It is well-known that dynamic programming algorithms can utilize tree decompositions to provide a way to solve some NP-hard problems on graphs, where the complexity is polynomial in the number of nodes and edges in the graph but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
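For intuition about the dynamic program, the sketch below solves maximum weighted independent set in the width-1 special case, where the graph is itself a tree: each vertex keeps two table entries (best value with the vertex in or out of the set). The bag-based generalization over a tree decomposition follows the same pattern with tables of size exponential in the width. This is a minimal illustration, not INDDGO's implementation.

    def mwis_tree(adj: dict, weight: dict, root) -> float:
        """Maximum weighted independent set on a tree via two-state DP."""
        take, skip = {}, {}

        def dfs(v, parent):
            take[v], skip[v] = float(weight[v]), 0.0
            for u in adj[v]:
                if u == parent:
                    continue
                dfs(u, v)
                take[v] += skip[u]                # v in the set: children must be out
                skip[v] += max(take[u], skip[u])  # v out: children unconstrained

        dfs(root, None)
        return max(take[root], skip[root])

    adj = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}
    w = {0: 5.0, 1: 1.0, 2: 2.0, 3: 4.0, 4: 3.0}
    print(mwis_tree(adj, w, root=0))   # 12.0: vertices {0, 3, 4}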
Verbs in the lexicon: Why is hitting easier than breaking?
McKoon, Gail; Love, Jessica
2011-11-01
Adult speakers use verbs in syntactically appropriate ways. For example, they know implicitly that the boy hit at the fence is acceptable but the boy broke at the fence is not. We suggest that this knowledge is lexically encoded in semantic decompositions. The decomposition for break verbs (e.g. crack, smash) is hypothesized to be more complex than that for hit verbs (e.g. kick, kiss). Specifically, the decomposition of a break verb denotes that "an entity changes state as the result of some external force" whereas the decomposition for a hit verb denotes only that "an entity potentially comes in contact with another entity." In this article, verbs of the two types were compared in a lexical decision experiment - Experiment 1 - and they were compared in sentence comprehension experiments with transitive sentences (e.g. the car hit the bicycle and the car broke the bicycle) - Experiments 2 and 3. In Experiment 1, processing times were shorter for the hit than the break verbs and in Experiments 2 and 3, processing times were shorter for the hit sentences than the break sentences, results that are in accord with the complexities of the postulated semantic decompositions.
Delay decomposition at a single server queue with constant service time and multiple inputs
NASA Technical Reports Server (NTRS)
Ziegler, C.; Schilling, D. L.
1978-01-01
Two networks consisting of single server queues, each with a constant service time, are considered. The external inputs to each network are assumed to follow some general probability distribution. Several interesting equivalencies that exist between the two networks considered are derived. This leads to the introduction of an important concept in delay decomposition. It is shown that the waiting time experienced by a customer can be decomposed into two basic components called self-delay and interference delay.
Homotopy decomposition method for solving one-dimensional time-fractional diffusion equation
NASA Astrophysics Data System (ADS)
Abuasad, Salah; Hashim, Ishak
2018-04-01
In this paper, we present the homotopy decomposition method with a modified definition of the beta fractional derivative, applied for the first time to find the exact solution of the one-dimensional time-fractional diffusion equation. In this method, the solution takes the form of a convergent series with easily computable terms. The exact solution obtained by the proposed method is compared with the exact solution obtained using the fractional variational homotopy perturbation iteration method via a modified Riemann-Liouville derivative.
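For reference, the target equation in its standard form is shown below; the fractional order α replaces the usual first-order time derivative, and the paper's contribution is to treat the time-fractional operator with a modified beta fractional derivative. The diffusion coefficient D and initial condition f(x) are generic placeholders.

    \frac{\partial^{\alpha} u(x,t)}{\partial t^{\alpha}}
      = D \, \frac{\partial^{2} u(x,t)}{\partial x^{2}},
    \qquad 0 < \alpha \le 1,
    \qquad u(x,0) = f(x)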
NASA Astrophysics Data System (ADS)
Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret
2003-12-01
A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.
Bilingual reading of compound words.
Ko, In Yeong; Wang, Min; Kim, Say Young
2011-02-01
The present study investigated whether bilingual readers activate constituents of compound words in one language while processing compound words in the other language via decomposition. Two experiments using a lexical decision task were conducted with adult Korean-English bilingual readers. In Experiment 1, the lexical decision of real English compound words was more accurate when the translated compounds (the combination of the translation equivalents of the constituents) in Korean (the nontarget language) were real words than when they were nonwords. In Experiment 2, when the frequency of the second constituents of compound words in English (the target language) was manipulated, the effect of lexical status of the translated compounds was greater on the compounds with high-frequency second constituents than on those with low-frequency second constituents in the target language. Together, these results provided evidence for morphological decomposition and cross-language activation in bilingual reading of compound words.
Hemakom, Apit; Goverdovsky, Valentin; Looney, David; Mandic, Danilo P
2016-04-13
An extension to multivariate empirical mode decomposition (MEMD), termed adaptive-projection intrinsically transformed MEMD (APIT-MEMD), is proposed to cater for power imbalances and inter-channel correlations in real-world multichannel data. It is shown that the APIT-MEMD exhibits similar or better performance than MEMD for a large number of projection vectors, whereas it outperforms MEMD for the critical case of a small number of projection vectors within the sifting algorithm. We also employ the noise-assisted APIT-MEMD within our proposed intrinsic multiscale analysis framework and illustrate the advantages of such an approach in notoriously noise-dominated cooperative brain-computer interface (BCI) based on the steady-state visual evoked potentials and the P300 responses. Finally, we show that for a joint cognitive BCI task, the proposed intrinsic multiscale analysis framework improves system performance in terms of the information transfer rate. © 2016 The Author(s).
A Framework for Parallel Unstructured Grid Generation for Complex Aerodynamic Simulations
NASA Technical Reports Server (NTRS)
Zagaris, George; Pirzadeh, Shahyar Z.; Chrisochoides, Nikos
2009-01-01
A framework for parallel unstructured grid generation targeting both shared memory multi-processors and distributed memory architectures is presented. The two fundamental building-blocks of the framework consist of: (1) the Advancing-Partition (AP) method used for domain decomposition and (2) the Advancing Front (AF) method used for mesh generation. Starting from the surface mesh of the computational domain, the AP method is applied recursively to generate a set of sub-domains. Next, the sub-domains are meshed in parallel using the AF method. The recursive nature of domain decomposition naturally maps to a divide-and-conquer algorithm which exhibits inherent parallelism. For the parallel implementation, the Master/Worker pattern is employed to dynamically balance the varying workloads of each task on the set of available CPUs. Performance results by this approach are presented and discussed in detail as well as future work and improvements.
Telephone-quality pathological speech classification using empirical mode decomposition.
Kaleem, M F; Ghoraani, B; Guergachi, A; Krishnan, S
2011-01-01
This paper presents a computationally simple and effective methodology based on empirical mode decomposition (EMD) for classification of telephone quality normal and pathological speech signals. EMD is used to decompose continuous normal and pathological speech signals into intrinsic mode functions, which are analyzed to extract physically meaningful and unique temporal and spectral features. Using continuous speech samples from a database of 51 normal and 161 pathological speakers, which has been modified to simulate telephone quality speech under different levels of noise, a linear classifier is used with the feature vector thus obtained to obtain a high classification accuracy, thereby demonstrating the effectiveness of the methodology. The classification accuracy reported in this paper (89.7% for signal-to-noise ratio 30 dB) is a significant improvement over previously reported results for the same task, and demonstrates the utility of our methodology for cost-effective remote voice pathology assessment over telephone channels.
An NN-Based SRD Decomposition Algorithm and Its Application in Nonlinear Compensation
Yan, Honghang; Deng, Fang; Sun, Jian; Chen, Jie
2014-01-01
In this study, a neural network-based square root of descending (SRD) order decomposition algorithm for compensating for nonlinear data generated by sensors is presented. The study aims at exploring the optimized decomposition of data 1.00,0.00,0.00 and minimizing the computational complexity and memory space of the training process. A linear decomposition algorithm, which automatically finds the optimal decomposition into N subparts and reduces the training time to 1/N and the memory cost to 1/N, has been implemented on nonlinear data obtained from an encoder. Particular focus is given to the theoretical access of estimating the numbers of hidden nodes and the precision of varying the decomposition method. Numerical experiments are designed to evaluate the effect of this algorithm. Moreover, a designed device for angular sensor calibration is presented. We conduct an experiment that samples the data of an encoder and compensates for the nonlinearity of the encoder to test this novel algorithm. PMID:25232912
A simple method for decomposition of peracetic acid in a microalgal cultivation system.
Sung, Min-Gyu; Lee, Hansol; Nam, Kibok; Rexroth, Sascha; Rögner, Matthias; Kwon, Jong-Hee; Yang, Ji-Won
2015-03-01
A cost-efficient process that avoids several washing steps was developed, based on direct cultivation following decomposition of the sterilizer. Peracetic acid (PAA) is known to be an efficient antimicrobial agent due to its high oxidizing potential. Sterilization with 2 mM PAA requires at least 1 h of incubation for effective disinfection. Direct degradation of PAA was demonstrated by utilizing components of a conventional algal medium. Consequently, ferric ion and pH buffer (HEPES) showed a synergistic effect on the decomposition of PAA within 6 h. On the contrary, NaNO3, one of the main components in algal media, inhibited the decomposition of PAA. Improved growth of Chlorella vulgaris and Synechocystis PCC6803 was observed in BG11 prepared by decomposition of PAA. This process involving sterilization and decomposition of PAA should help cost-efficient management of photobioreactors at large scale for the production of value-added products and biofuels from microalgal biomass.
Forecasting hotspots in East Kutai, Kutai Kartanegara, and West Kutai as early warning information
NASA Astrophysics Data System (ADS)
Wahyuningsih, S.; Goejantoro, R.; Rizki, N. A.
2018-04-01
The aims of this research are to model hotspots and forecast hotspots for 2017 in East Kutai, Kutai Kartanegara and West Kutai. The methods used in this research were Holt exponential smoothing, Holt's additive damped trend method, Holt-Winters' additive method, the additive decomposition method, the multiplicative decomposition method, the Loess decomposition method and the Box-Jenkins method. Among the smoothing techniques, additive decomposition performed better than Holt's exponential smoothing. The hotspot models obtained using the Box-Jenkins method were the Autoregressive Integrated Moving Average models ARIMA(1,1,0), ARIMA(0,2,1), and ARIMA(0,1,0). Comparing the results from all methods used in this research on the basis of Root Mean Squared Error (RMSE) shows that the Loess decomposition method is the best time series model, because it has the least RMSE. Thus the Loess decomposition model was used to forecast the number of hotspots. The forecasting results indicate that hotspots tend to increase at the end of 2017 in Kutai Kartanegara and West Kutai, but remain stationary in East Kutai.
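The sketch below illustrates the model-comparison step under stated assumptions: a Loess-based (STL) decomposition forecast and an ARIMA(1,1,0) fit are scored by RMSE on a held-out year of a synthetic monthly hotspot series. It uses the statsmodels package; the data, the naive STL forecast rule, and the holdout split are illustrative, not the study's.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.seasonal import STL
    from statsmodels.tsa.arima.model import ARIMA

    # Synthetic monthly hotspot counts with a dry-season peak (assumption).
    rng = np.random.default_rng(2)
    idx = pd.date_range("2010-01", periods=84, freq="MS")
    hotspots = pd.Series(50 + 30 * np.sin(2 * np.pi * (idx.month - 8) / 12)
                         + rng.normal(0, 5, idx.size), index=idx)
    train, test = hotspots[:-12], hotspots[-12:]

    # STL (Loess) forecast: last trend level plus last year's seasonal pattern.
    stl = STL(train, period=12).fit()
    stl_fc = stl.trend.iloc[-1] + stl.seasonal.iloc[-12:].values

    arima_fc = ARIMA(train, order=(1, 1, 0)).fit().forecast(12)

    rmse = lambda f: float(np.sqrt(np.mean((np.asarray(f) - test.values) ** 2)))
    print(f"STL RMSE: {rmse(stl_fc):.2f}  ARIMA RMSE: {rmse(arima_fc):.2f}")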
Three dimensional empirical mode decomposition analysis apparatus, method and article of manufacture
NASA Technical Reports Server (NTRS)
Gloersen, Per (Inventor)
2004-01-01
An apparatus and method of analysis for three-dimensional (3D) physical phenomena. The physical phenomena may include any varying 3D phenomena such as time varying polar ice flows. A representation of the 3D phenomena is passed through a Hilbert transform to convert the data into complex form. A spatial variable is separated from the complex representation by producing a time-based covariance matrix. The temporal parts of the principal components are produced by applying Singular Value Decomposition (SVD). Based on the rapidity with which the eigenvalues decay, the first 3-10 complex principal components (CPC) are selected for Empirical Mode Decomposition into intrinsic modes. The intrinsic modes produced are filtered in order to reconstruct the spatial part of the CPC. Finally, a filtered time series may be reconstructed from the first 3-10 filtered complex principal components.
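A minimal sketch of the first stages of this pipeline follows, assuming a space-by-time array as input: the Hilbert transform supplies the complex form, a time-based covariance matrix separates out the spatial variable, and an SVD yields the temporal principal components whose eigenvalue decay guides how many CPCs (typically the first 3-10) to keep. Shapes and the random test field are assumptions; the EMD filtering stage is omitted.

    import numpy as np
    from scipy.signal import hilbert

    rng = np.random.default_rng(3)
    field = rng.normal(size=(120, 400))        # (n_times, n_spatial_points), assumed

    analytic = hilbert(field, axis=0)          # complex representation in time
    analytic -= analytic.mean(axis=0)

    # Time-based covariance matrix (n_times x n_times).
    cov = analytic @ analytic.conj().T / field.shape[1]

    # SVD yields the temporal parts of the complex principal components (CPCs);
    # eigenvalue decay guides how many to pass on to EMD.
    u, s, vh = np.linalg.svd(cov)
    cpc_temporal = u[:, :5]
    print(s[:5] / s.sum())                     # fraction of variance per CPC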
NASA Astrophysics Data System (ADS)
Zhu, Ming; Liu, Tingting; Wang, Shu; Zhang, Kesheng
2017-08-01
Existing two-frequency reconstructive methods can only capture primary (single) molecular relaxation processes in excitable gases. In this paper, we present a reconstructive method based on the novel decomposition of frequency-dependent acoustic relaxation spectra to capture the entire molecular multimode relaxation process. This decomposition of acoustic relaxation spectra is developed from the frequency-dependent effective specific heat, indicating that a multi-relaxation process is the sum of the interior single-relaxation processes. Based on this decomposition, we can reconstruct the entire multi-relaxation process by capturing the relaxation times and relaxation strengths of N interior single-relaxation processes, using the measurements of acoustic absorption and sound speed at 2N frequencies. Experimental data for the gas mixtures CO2-N2 and CO2-O2 validate our decomposition and reconstruction approach.
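In the form underlying this decomposition, the frequency-dependent effective specific heat of a gas with N relaxation processes is a sum of single-relaxation (Debye-type) terms; the notation below is a standard presentation, with relaxation strengths c_i and relaxation times tau_i, rather than the paper's exact symbols.

    C_V^{\mathrm{eff}}(\omega)
      = C_V^{\infty} + \sum_{i=1}^{N} \frac{c_i}{1 + j\omega\tau_i}

Measuring acoustic absorption and sound speed at 2N frequencies then supplies enough constraints to recover the N pairs (c_i, tau_i), which is the reconstruction the abstract describes.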
Comparison of litter decomposition in a natural versus coal-slurry pond reclaimed as a wetland
Taylor, J.; Middleton, B.A.
2004-01-01
Decomposition is a key function in reclaimed wetlands, and changes in its rate have ramifications for organic-matter accumulation, nutrient cycling, and production. The purpose of this study was to compare leaf litter decomposition rates in coal-slurry ponds vs. natural wetlands on natural floodplain wetlands in Illinois, USA. The rate of decomposition was slower in the natural wetland vs. the coal pond (k = 0.0043 ± 0.0008 vs. 0.0066 ± 0.0011, respectively); the soil of the natural wetland was more acidic than the coal pond in this study (pH = 5.3 vs. 7.9, respectively). Similarly, higher organic matter levels were related to lower pH levels, and organic matter levels were seven times higher in the natural wetland than in the coal pond. The coal slurry pond was five years old at the time of the study, while the natural oxbow wetland was older (more than 550 years). The coal-slurry pond was originally a floodplain wetland (slough); the downstream end was blocked with a stoplog structure and the oxbow filled with slurry. The pattern of decomposition for all species in the coal pond was the same as in the natural pond; Potomogeton nodosus decomposed more quickly than Phragmites australis, and both of these species decomposed more quickly than either Typha latifolia or Cyperus erythrorhizos (k = 0.0121 ± 0.0008, 0.0051 ± 0.0006, 0.0024 ± 0.0001, 0.0024 ± 0.0004, respectively). Depending on how open or closed the system is to outside inputs, decomposition rate regulates other functions such as production, nutrient cycling, organic-layer accumulation in the soil, and the timing and nature of delivery of detritus to the food chain. © 2004 John Wiley and Sons, Ltd.
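The k values above are decay constants of the standard single-exponential litterbag model (in units of per day); under that model the mass remaining and the litter half-life are:

    \frac{M(t)}{M_0} = e^{-kt},
    \qquad t_{1/2} = \frac{\ln 2}{k}

For the reported whole-wetland rates this gives half-lives of roughly 161 d in the natural wetland (k = 0.0043 d^{-1}) and 105 d in the coal pond (k = 0.0066 d^{-1}).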
Microbial decomposition is highly sensitive to leaf litter emersion in a permanent temperate stream.
Mora-Gómez, Juanita; Duarte, Sofia; Cássio, Fernanda; Pascoal, Cláudia; Romaní, Anna M
2018-04-15
Drought frequency and intensity in some temperate regions are forecasted to increase under the ongoing global change, which might expose permanent streams to intermittence and have severe repercussions on stream communities and ecosystem processes. In this study, we investigated the effect of drought duration on microbial decomposition of Populus nigra leaf litter in a temperate permanent stream (Oliveira, NW Portugal). Specifically, we measured the response of the structural (assemblage composition, bacterial and fungal biomass) and functional (leaf litter decomposition, extracellular enzyme activities (EEA), and fungal sporulation) parameters of fungal and bacterial communities on leaf litter exposed to emersion during different time periods (7, 14 and 21 d). Emersion time affected microbial assemblages and litter decomposition, but the response differed among variables. Leaf decomposition rates and the activity of β-glucosidase, cellobiohydrolase and phosphatase were gradually reduced with increasing emersion time, while β-xylosidase reduction was similar when emersion lasted 7 or more days, and the phenol oxidase reduction was similar at 14 and 21 days of leaf emersion. Microbial biomass and fungal sporulation were reduced after 21 days of emersion. The structure of microbial assemblages was affected by the duration of the emersion period. The shifts in fungal assemblages were correlated with a decreased microbial capacity to degrade lignin and hemicellulose in leaf litter exposed to emersion. Additionally, some resilience was observed in leaf litter mass loss, bacterial biomass, some enzyme activities and the structure of fungal assemblages. Our study shows that drought can strongly alter structural and functional aspects of microbial decomposers. Therefore, the exposure of leaf litter to increasing emersion periods in temperate streams is expected to affect decomposer communities and overall decomposition of plant material by decelerating carbon cycling in streams. Copyright © 2017 Elsevier B.V. All rights reserved.
Ribéreau-Gayon, Agathe; Rando, Carolyn; Morgan, Ruth M; Carter, David O
2018-05-01
In the context of increased scrutiny of the methods in forensic sciences, it is essential to ensure that the approaches used in forensic taphonomy to measure decomposition and estimate the postmortem interval are underpinned by robust evidence-based data. Digital photographs are an important source of documentation in forensic taphonomic investigations but the suitability of the current approaches for photographs, rather than real-time remains, is poorly studied which can undermine accurate forensic conclusions. The present study aimed to investigate the suitability of 2D colour digital photographs for evaluating decomposition of exposed human analogues (Sus scrofa domesticus) in a tropical savanna environment (Hawaii), using two published scoring methods; Megyesi et al., 2005 and Keough et al., 2017. It was found that there were significant differences between the real-time and photograph decomposition scores when the Megyesi et al. method was used. However, the Keough et al. method applied to photographs reflected real-time decomposition more closely and thus appears more suitable to evaluate pig decomposition from 2D photographs. The findings indicate that the type of scoring method used has a significant impact on the ability to accurately evaluate the decomposition of exposed pig carcasses from photographs. It was further identified that photographic taphonomic analysis can reach high inter-observer reproducibility. These novel findings are of significant importance for the forensic sciences as they highlight the potential for high quality photograph coverage to provide useful complementary information for the forensic taphonomic investigation. New recommendations to develop robust transparent approaches adapted to photographs in forensic taphonomy are suggested based on these findings. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Florindo, João Batista
2018-04-01
This work proposes the use of Singular Spectrum Analysis (SSA) for the classification of texture images, more specifically, to enhance the performance of the Bouligand-Minkowski fractal descriptors in this task. Fractal descriptors are known to be a powerful approach to model and particularly identify complex patterns in natural images. Nevertheless, the multiscale analysis involved in those descriptors makes them highly correlated. Although other attempts to address this point were proposed in the literature, none of them investigated the relation between the fractal correlation and the well-established analyses employed for time series, and SSA is one of the most powerful techniques for this purpose. The proposed method was employed for the classification of benchmark texture images and the results were compared with other state-of-the-art classifiers, confirming the potential of this analysis in image classification.
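The sketch below shows basic SSA in its usual formulation: embed the series in a trajectory (Hankel) matrix, take its SVD, and reconstruct a low-rank grouping by diagonal averaging. Applying this to sequences of fractal descriptors, to decorrelate them, is the paper's idea; the series, window and rank here are illustrative assumptions.

    import numpy as np

    def ssa(series: np.ndarray, window: int, rank: int) -> np.ndarray:
        """Rank-limited SSA reconstruction of a 1D series."""
        n = series.size
        k = n - window + 1
        # Trajectory (Hankel) matrix: window x k lagged copies of the series.
        traj = np.column_stack([series[i:i + window] for i in range(k)])
        u, s, vh = np.linalg.svd(traj, full_matrices=False)
        approx = (u[:, :rank] * s[:rank]) @ vh[:rank]
        # Diagonal averaging (Hankelization) back to a length-n series.
        out = np.zeros(n)
        counts = np.zeros(n)
        for i in range(window):
            for j in range(k):
                out[i + j] += approx[i, j]
                counts[i + j] += 1
        return out / counts

    x = (np.sin(np.linspace(0, 8 * np.pi, 200))
         + 0.3 * np.random.default_rng(4).normal(size=200))
    print(np.round(ssa(x, window=40, rank=2)[:5], 3))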
Visual target modulation of functional connectivity networks revealed by self-organizing group ICA.
van de Ven, Vincent; Bledowski, Christoph; Prvulovic, David; Goebel, Rainer; Formisano, Elia; Di Salle, Francesco; Linden, David E J; Esposito, Fabrizio
2008-12-01
We applied a data-driven analysis based on self-organizing group independent component analysis (sogICA) to fMRI data from a three-stimulus visual oddball task. SogICA is particularly suited to the investigation of the underlying functional connectivity and does not rely on a predefined model of the experiment, which overcomes some of the limitations of hypothesis-driven analysis. Unlike most previous applications of ICA in functional imaging, our approach allows the analysis of the data at the group level, which is of particular interest in high order cognitive studies. SogICA is based on the hierarchical clustering of spatially similar independent components, derived from single subject decompositions. We identified four main clusters of components, centered on the posterior cingulate, bilateral insula, bilateral prefrontal cortex, and right posterior parietal and prefrontal cortex, consistently across all participants. Post hoc comparison of time courses revealed that insula, prefrontal cortex and right fronto-parietal components showed higher activity for targets than for distractors. Activation for distractors was higher in the posterior cingulate cortex, where deactivation was observed for targets. While our results conform to previous neuroimaging studies, they also complement conventional results by showing functional connectivity networks with unique contributions to the task that were consistent across subjects. SogICA can thus be used to probe functional networks of active cognitive tasks at the group-level and can provide additional insights to generate new hypotheses for further study. Copyright 2007 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Shechter, M.; Chefetz, B.
2009-04-01
Plant cuticle materials, especially the highly aliphatic biopolymers cutin and cutan, have been reported to be highly efficient natural sorbents. The objective of this study was to examine the effects of decomposition on their sorption behavior with naphthol and phenanthrene. The levels of cutin and cutan were reduced by 15 and 27%, respectively, during the first 3 mo of incubation. From that point, the level of cutan did not change, while the level of cutin continued to decrease, by up to 32% after 20 mo. 13C NMR analysis suggested transformation of cutan mainly within its alkyl-C structures, which are assigned to crystalline moieties. Cutin, however, did not exhibit significant structural changes with time. The level of humic-like substances increased due to cutin decomposition but was not influenced in the cutan system after 20 mo of incubation. This indicates that the cutin biopolymer was decomposed and transformed into humic-like substances, whereas cutan was less subject to transformation. Decomposition affected the sorption properties of cutin and cutan in similar ways. The Freundlich capacity coefficients (KFOC) of naphthol were much lower than those of phenanthrene and were less influenced by decomposition, whereas phenanthrene KFOC values increased significantly with time. Naphthol exhibited nonlinear isotherms, and nonlinearity decreased with incubation time. In contrast, phenanthrene isotherms were more linear and showed only moderate change with time. The decrease in the nonlinearity of naphthol isotherms might relate to the transformation of the sorption sites due to structural changes in the biopolymers. With phenanthrene, however, these changes did not affect sorption linearity but increased sorption affinities, mainly for cutan. This is probably due to decomposition of the rigid alkyl-C moieties in the cutan biopolymer. Our data suggest that both biopolymers were relatively stable in the soil for 20 mo. Cutan is less degradable than cutin and therefore is more likely to accumulate in soils and contribute to the refractory aliphatic components of soil organic matter.
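The Freundlich analysis referred to above can be reproduced on hypothetical data with a simple curve fit; the concentrations and sorbed amounts below are invented for illustration, and KFOC would additionally require normalization by organic carbon content.

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(c, kf, n):
    """Freundlich isotherm: sorbed amount q = KF * C^n (n < 1 => nonlinear)."""
    return kf * c ** n

# Hypothetical equilibrium data (Ce in mg/L, q in mg/kg), not from the study
ce = np.array([0.05, 0.1, 0.5, 1.0, 2.0, 5.0])
q = np.array([12.0, 21.0, 80.0, 140.0, 240.0, 500.0])

(kf, n), _ = curve_fit(freundlich, ce, q, p0=(100.0, 0.9))
print(f"KF = {kf:.1f}, n = {n:.2f}")  # n closer to 1 => more linear isotherm
```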
NASA Astrophysics Data System (ADS)
Gulis, V.; Ferreira, V. J.; Graca, M. A.
2005-05-01
Traditional approaches to assessing stream ecosystem health rely on structural parameters, e.g., a variety of biotic indices. The goal of the Europe-wide RivFunction project is to develop methodology that uses functional parameters (e.g., plant litter decomposition) to this end. Here we report on decomposition experiments carried out in Portugal in five pairs of streams that differed in dissolved inorganic nutrients. On average, decomposition rates of alder and oak leaves were 2.8 and 1.4 times higher in high-nutrient streams in coarse and fine mesh bags, respectively, than in the corresponding reference streams. Breakdown rate correlated better with stream water SRP (soluble reactive phosphorus) concentration than with TIN (total inorganic nitrogen). Fungal biomass and sporulation rates of aquatic hyphomycetes associated with decomposing leaves were stimulated by higher nutrient levels. Both fungal parameters measured at very early stages of decomposition (e.g., days 7-13) correlated well with overall decomposition rates. Eutrophication had no significant effect on shredder abundances in leaf bags, but species richness was higher in disturbed streams. Decomposition is a key functional parameter in streams that integrates many other variables and can be useful in assessing stream ecosystem health. We also argue that, because decomposition is often controlled by fungal activity, microbial parameters can also be useful in bioassessment.
Learning inverse kinematics: reduced sampling through decomposition into virtual robots.
de Angulo, Vicente Ruiz; Torras, Carme
2008-12-01
We propose a technique to speed up the learning of the inverse kinematics of a robot manipulator by decomposing it into two or more virtual robot arms. Unlike previous decomposition approaches, this one does not place any requirement on the robot architecture and is thus completely general. Parameterized self-organizing maps are particularly well suited to this type of learning and permit comparison of results obtained directly and through the decomposition. Experimentation shows that time reductions of up to two orders of magnitude are easily attained.
NASA Astrophysics Data System (ADS)
De Waal, D.; Heyns, A. M.; Range, K.-J.
1989-06-01
Raman spectroscopy was used as a method in the kinetic investigation of the thermal decomposition of solid (NH4)2CrO4. Time-dependent measurements of the intensity of the totally symmetric Cr-O stretching mode of (NH4)2CrO4 have been made between 343 and 363 K. A short initial acceleratory period is observed at lower temperatures and the decomposition reaction decelerates after the maximum decomposition rate has been reached at all temperatures. These results can be interpreted in terms of the Avrami-Erofe'ev law 1 - χr^(1/2) = kt, where χr is the fraction of reactant at time t. At 358 K, k is equal to (1.76 ± 0.01) × 10⁻³ s⁻¹ for microcrystals and for powdered samples. Activation energies of 97 ± 10 and 49 ± 0.9 kJ mol⁻¹ have been calculated for microcrystalline and powdered samples, respectively.
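A sketch of how the reported rate constant follows from the quoted kinetic law; the fraction-of-reactant values below are synthetic, generated to be consistent with the k reported at 358 K.

```python
import numpy as np

# Hypothetical fraction-of-reactant curve consistent with the quoted k at 358 K
t = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 500.0])          # seconds
chi_r = np.array([1.000, 0.679, 0.420, 0.223, 0.088, 0.014])    # fraction left

# Linearize the Avrami-Erofe'ev form 1 - chi_r**(1/2) = k*t and fit k
y = 1.0 - np.sqrt(chi_r)
k = np.sum(y * t) / np.sum(t * t)   # least squares slope through the origin
print(f"k = {k:.2e} s^-1")          # ~1.76e-3 s^-1, matching the reported value
```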
An Integrated Framework for Model-Based Distributed Diagnosis and Prognosis
NASA Technical Reports Server (NTRS)
Bregon, Anibal; Daigle, Matthew J.; Roychoudhury, Indranil
2012-01-01
Diagnosis and prognosis are necessary tasks for system reconfiguration and fault-adaptive control in complex systems. Diagnosis consists of detection, isolation and identification of faults, while prognosis consists of prediction of the remaining useful life of systems. This paper presents a novel integrated framework for model-based distributed diagnosis and prognosis, where system decomposition is used to enable the diagnosis and prognosis tasks to be performed in a distributed way. We show how different submodels can be automatically constructed to solve the local diagnosis and prognosis problems. We illustrate our approach using a simulated four-wheeled rover for different fault scenarios. Our experiments show that our approach correctly performs distributed fault diagnosis and prognosis in an efficient and robust manner.
Toward an Efficient Icing CFD Process Using an Interactive Software Toolkit: Smagglce 2D
NASA Technical Reports Server (NTRS)
Vickerman, Mary B.; Choo, Yung K.; Schilling, Herbert W.; Baez, Marivell; Braun, Donald C.; Cotton, Barbara J.
2001-01-01
Two-dimensional CFD analysis for iced airfoils can be a labor-intensive task. The software toolkit SmaggIce 2D is being developed to help streamline the CFD process and provide the unique features needed for icing. When complete, it will include a combination of partially automated and fully interactive tools for all aspects of the tasks leading up to the flow analysis: geometry preparation, domain decomposition, block boundary discretization, gridding, and linking with a flow solver. It also includes tools to perform ice shape characterization, an important aid in determining the relationship between ice characteristics and their effects on aerodynamic performance. Completed tools, work in progress, and planned features of the software toolkit are presented here.
Decomposition of carbon dioxide by recombining hydrogen plasma with ultralow electron temperature
NASA Astrophysics Data System (ADS)
Yamazaki, Masahiro; Nishiyama, Shusuke; Sasaki, Koichi
2018-06-01
We examined the rate coefficient for the decomposition of CO2 in low-pressure recombining hydrogen plasmas with electron temperatures between 0.15 and 0.45 eV, where electron-impact dissociation was negligible. By using this ultralow-temperature plasma, we clearly observed decomposition processes via vibrationally excited states. The rate coefficient of the overall reaction, CO2 + e → products, was 1.5 × 10⁻¹⁷ m³/s in the ultralow-temperature plasma, which was 10 times larger than the decomposition rate coefficient of 2 × 10⁻¹⁸ m³/s in an ionizing plasma with an electron temperature of 4 eV.
NASA Astrophysics Data System (ADS)
Lin, Mingpei; Xu, Ming; Fu, Xiaoyu
2017-05-01
Currently, a tremendous amount of space debris in Earth orbit imperils operational spacecraft. It is essential to undertake risk assessments of collisions and predict dangerous encounters in space. However, collision predictions for an enormous amount of space debris give rise to large-scale computations. In this paper, a parallel algorithm is established on the Compute Unified Device Architecture (CUDA) platform of NVIDIA Corporation for collision prediction. According to the parallel structure of NVIDIA graphics processors, a block decomposition strategy is adopted in the algorithm. Space debris is divided into batches, and the computation and data transfer operations of adjacent batches overlap. As a consequence, the latency to access shared memory during the entire computing process is significantly reduced, and a higher computing speed is reached. Theoretically, a simulation of collision prediction for any amount of space debris and any time span can be executed. To verify this algorithm, a simulation example including 1382 pieces of debris, whose operational time scales vary from 1 min to 3 days, is conducted on an NVIDIA Tesla C2075. The simulation results demonstrate that, with the same computational accuracy as a CPU, the computing speed of the parallel algorithm on a GPU is 30 times that on a CPU. Based on this algorithm, collision prediction for over 150 Chinese spacecraft over a time span of 3 days can be completed in less than 3 h on a single computer, which meets the timeliness requirement of the initial screening task. Furthermore, the algorithm can be adapted for multiple tasks, including particle filtration, constellation design, and Monte-Carlo simulation of orbital computation.
Sornborger, Andrew T; Lauderdale, James D
2016-11-01
Neural data analysis has increasingly incorporated causal information to study circuit connectivity. Dimensional reduction forms the basis of most analyses of large multivariate time series. Here, we present a new, multitaper-based decomposition for stochastic, multivariate time series that acts on the covariance of the time series at all lags, C(τ), as opposed to standard methods that decompose the time series, X(t), using only information at zero lag. In both simulated and neural imaging examples, we demonstrate that methods that neglect the full causal structure may be discarding important dynamical information in a time series.
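A simplified numpy illustration of the central idea, decomposing the covariance at all lags rather than only at zero lag; this sketch uses a plain SVD of the stacked lagged-covariance matrices instead of the authors' multitaper-based estimator.

```python
import numpy as np

def lagged_covariance(X, max_lag):
    """Covariance of a multivariate series at lags 0..max_lag.
    X has shape (n_channels, n_times); returns (max_lag+1, n_ch, n_ch)."""
    Xc = X - X.mean(axis=1, keepdims=True)
    T = X.shape[1]
    return np.array([Xc[:, : T - tau] @ Xc[:, tau:].T / (T - tau)
                     for tau in range(max_lag + 1)])

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 1000))     # toy 8-channel recording
C = lagged_covariance(X, max_lag=20)   # C(tau), tau = 0..20

# One simple decomposition of the full lag structure: SVD of the stacked C(tau)
U, s, Vt = np.linalg.svd(C.reshape(21 * 8, 8), full_matrices=False)
print(s[:3])   # leading modes summarize covariance across all lags, not just lag 0
```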
Rahman, Md Mostafizur; Fattah, Shaikh Anowarul
2017-01-01
In view of the recent increase in brain-computer interface (BCI) based applications, efficient classification of various mental tasks has become increasingly important. In order to obtain effective classification, an efficient feature extraction scheme is necessary, for which, in the proposed method, the interchannel relationship among electroencephalogram (EEG) data is utilized. It is expected that the correlation obtained from different combinations of channels will differ across mental tasks, which can be exploited to extract distinctive features. The empirical mode decomposition (EMD) technique is employed on a test EEG signal obtained from a channel, which provides a number of intrinsic mode functions (IMFs), and correlation coefficients are extracted from interchannel IMF data. Simultaneously, different statistical features are also obtained from each IMF. Finally, the feature matrix is formed utilizing interchannel correlation features and intrachannel statistical features of the selected IMFs of the EEG signal. Different kernels of the support vector machine (SVM) classifier are used to carry out the classification task. An EEG dataset containing ten different combinations of five different mental tasks is utilized to demonstrate the classification performance, and a very high level of accuracy is achieved by the proposed scheme compared to existing methods.
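A minimal sketch of the feature pipeline described above, assuming the third-party PyEMD package for the EMD step; the trial data, the choice of three IMFs, and the variance/peak-to-peak statistics are illustrative stand-ins for the paper's exact settings.

```python
import numpy as np
from PyEMD import EMD              # third-party package 'EMD-signal' (assumed)
from sklearn.svm import SVC

rng = np.random.default_rng(2)
fs, n_ch, n_imf = 256, 4, 3
trial = rng.standard_normal((n_ch, 2 * fs))    # toy 2-s, 4-channel EEG trial

emd = EMD()
imfs = [emd.emd(trial[ch])[:n_imf] for ch in range(n_ch)]  # first IMFs per channel

feats = []
for i in range(n_ch):                      # interchannel correlation features
    for j in range(i + 1, n_ch):
        for m in range(n_imf):
            feats.append(np.corrcoef(imfs[i][m], imfs[j][m])[0, 1])
for ch in range(n_ch):                     # intrachannel statistical features
    feats += [f(imfs[ch][m]) for m in range(n_imf) for f in (np.var, np.ptp)]

X = np.array(feats)[None, :]     # one trial -> one row of the feature matrix
clf = SVC(kernel='rbf')          # would be trained on many labeled trials
print(X.shape)
```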
NASA Astrophysics Data System (ADS)
Volkov, R. S.; Zhdanova, A. O.; Kuznetsov, G. V.; Strizhak, P. A.
2017-07-01
From the results of experimental studies of the suppression of thermal decomposition of typical forest combustibles (birch leaves, fir needles, aspen twigs, and a mixture of these three materials) by water aerosol, the minimum volumes of the fire-extinguishing liquid have been determined (by varying the volume of the forest combustible samples from 0.00002 m³ to 0.0003 m³ and the area of their open surface from 0.0001 m² to 0.018 m²). The dependences of the minimum volume of water on the area of the open surface of the forest combustible have been established, and approximation expressions for these dependences have been obtained. A forecast has been made of the minimum volume of water needed to suppress the thermal decomposition of forest combustibles over areas from 1 cm² to 1 km², as well as of the characteristic quenching times as the water concentration supplied per unit time is varied. It has been shown that the amount of water needed for effective suppression of the thermal decomposition of forest combustibles is several times less than is customarily assumed.
Computer implemented empirical mode decomposition method, apparatus and article of manufacture
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
1999-01-01
A computer-implemented physical signal analysis method is invented. This method includes two essential steps and the associated presentation techniques for the results. All the steps exist only in a computer: there are no analytic expressions resulting from the method. The first step is a computer-implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMFs) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with the various intrinsic time scales in the physical signal. Expressed in terms of the IMFs, the signals have well-behaved Hilbert transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert transform, and the final result is the Hilbert spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMFs. These IMFs, based on and derived from the data, can then serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMFs through the Hilbert transform give a full energy-frequency-time distribution of the data, which is designated the Hilbert spectrum.
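The two steps of the method, EMD followed by the Hilbert transform of each IMF, can be sketched as follows; the EMD step here relies on the third-party PyEMD package rather than the patented implementation, and the test signal is arbitrary.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD            # third-party EMD implementation (assumed)

fs = 500.0
t = np.arange(0, 4.0, 1.0 / fs)
# Nonstationary test signal: a chirp plus a slower oscillation
x = np.sin(2 * np.pi * (2.0 + 3.0 * t) * t) + 0.6 * np.sin(2 * np.pi * 1.5 * t)

imfs = EMD().emd(x)                  # step 1: extract the IMFs
analytic = hilbert(imfs, axis=1)     # step 2: Hilbert transform of each IMF
amp = np.abs(analytic)               # local energy (instantaneous amplitude)
inst_freq = np.diff(np.unwrap(np.angle(analytic)), axis=1) * fs / (2 * np.pi)
print(imfs.shape, inst_freq.shape)   # together: energy-frequency-time picture
```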
Saliasi, Emi; Geerligs, Linda; Lorist, Monicque M.; Maurits, Natasha M.
2014-01-01
To investigate which neural correlates are associated with successful working memory performance, fMRI was recorded in healthy younger and older adults during performance on an n-back task with varying task demands. To identify functional networks supporting working memory processes, we used independent component analysis (ICA) decomposition of the fMRI data. Compared to younger adults, older adults showed a larger neural (BOLD) response in the more complex (2-back) than in the baseline (0-back) task condition, in the ventral lateral prefrontal cortex (VLPFC) and in the right fronto-parietal network (FPN). Our results indicated that a higher BOLD response in the VLPFC was associated with increased performance accuracy in older adults, in both the baseline and the more complex task condition. This ‘BOLD-performance’ relationship suggests that the neural correlates linked with successful performance in the older adults are not uniquely related to specific working memory processes present in the complex but not in the baseline task condition. Furthermore, the selective presence of this relationship in older but not in younger adults suggests that increased neural activity in the VLPFC serves a compensatory role in the aging brain which benefits task performance in the elderly. PMID:24911016
Decomposition of heterogeneous organic matter and its long-term stabilization in soils
Sierra, Carlos A.; Harmon, Mark E.; Perakis, Steven S.
2011-01-01
Soil organic matter is a complex mixture of material with heterogeneous biological, physical, and chemical properties. Decomposition models represent this heterogeneity either as a set of discrete pools with different residence times or as a continuum of qualities. It is unclear, though, whether these two different approaches yield comparable predictions of organic matter dynamics. Here, we compare predictions from these two approaches and propose an intermediate approach to studying organic matter decomposition, based on concepts from continuous models implemented numerically. We found that the disagreement between discrete and continuous approaches can be considerable depending on the degree of nonlinearity of the model and the simulation time. The two approaches can diverge substantially for predicting long-term processes in soils. Based on our alternative approach, which is a modification of the continuous quality theory, we explored the temporal patterns that emerge by treating substrate heterogeneity explicitly. The analysis suggests that the pattern of carbon mineralization over time is highly dependent on the degree and form of nonlinearity in the model, mostly expressed as differences in microbial growth and efficiency for different substrates. Moreover, short-term stabilization and destabilization mechanisms operating simultaneously result in long-term accumulation of carbon characterized by low decomposition rates, independent of the characteristics of the incoming litter. We show that representation of heterogeneity in the decomposition process can lead to substantial improvements in our understanding of carbon mineralization and its long-term stability in soils.
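For concreteness, a minimal discrete-pool model of the kind discussed above: total carbon is the sum of pools decaying first-order at different rates. The pool sizes and rate constants are illustrative, and a continuum-quality model would replace the finite sum with an integral over a distribution of rates.

```python
import numpy as np

# A minimal discrete-pool model: each pool decays first-order at its own rate.
# Pool sizes and rates are illustrative, not taken from the study.
k = np.array([2.0, 0.2, 0.02])        # 1/yr: fast, intermediate, slow pools
c0 = np.array([0.3, 0.5, 0.2])        # initial fraction of C in each pool

t = np.linspace(0.0, 100.0, 201)      # years
C = (c0[None, :] * np.exp(-np.outer(t, k))).sum(axis=1)   # total C remaining

# A continuum view instead integrates over a distribution of qualities/rates;
# with many pools the discrete sum approaches that integral.
print(C[[0, 20, 200]])   # remaining fractions at t = 0, 10, 100 yr
```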
Environmental fate of emamectin benzoate after tree micro injection of horse chestnut trees.
Burkhard, Rene; Binz, Heinz; Roux, Christian A; Brunner, Matthias; Ruesch, Othmar; Wyss, Peter
2015-02-01
Emamectin benzoate, an insecticide derived from the avermectin family of natural products, has a unique translocation behavior in trees when applied by tree micro injection (TMI), which can result in protection from insect pests (foliar and borers) for several years. Active ingredient imported into leaves was measured at the end of the season in the fallen leaves of treated horse chestnut (Aesculus hippocastanum) trees. The dissipation of emamectin benzoate in these leaves seems to be biphasic and depends on the decomposition of the leaf. In compost piles, where decomposition of leaves was fastest, a cumulative emamectin benzoate degradation half-life of 20 d was measured. In leaves immersed in water, where decomposition was much slower, the degradation half-life was 94 d, and in leaves left on the ground in contact with soil, where decomposition was slowest, the degradation half-life was 212 d. The biphasic decline and the correlation with leaf decomposition might be attributed to extensive sorption of emamectin benzoate residues to leaf macromolecules. This may also explain why earthworms ingesting leaves from injected trees take up very little emamectin benzoate and excrete it with the feces. Furthermore, no emamectin benzoate was found in water containing decomposing leaves from injected trees. It is concluded that emamectin benzoate present in abscised leaves from horse chestnut trees injected with the insecticide is not available to nontarget organisms present in soil or water bodies. Published 2014 SETAC.
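Assuming simple first-order kinetics within each phase (a simplification of the biphasic decline described above), the reported half-lives translate into remaining residue fractions as follows.

```python
import numpy as np

def remaining(t_days, half_life_days):
    """First-order decay: fraction of residue left after t days."""
    return np.exp(-np.log(2.0) * t_days / half_life_days)

# Degradation half-lives reported for the three leaf environments
for env, t_half in [("compost", 20.0), ("water", 94.0), ("soil surface", 212.0)]:
    print(f"{env:12s}: {remaining(180.0, t_half):.1%} left after 180 d")
```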
1978-01-01
Analytical Test Methodology. Sampling and analysis of thermal decomposition products are formidable tasks (Rasbash, 1967; Gaskill, 1973; Bankston ...) ... by a flowing solution. [Figure: sampling cell with (A) sample gas inlet, (B) alkali solution inlet, (C) gas and solution outlet, (D) specific ion electrode, and (E) reference electrode.] ... of radiant heat (Zinn, Powell, Cassanova and Bankston, 1977). Seader and Ou have recently proposed a theory relating optical density to particulate ...
NASA Technical Reports Server (NTRS)
Ortega, J. M.
1985-01-01
Synopses are given for NASA-supported work in computer science at the University of Virginia. Some areas of research include: error seeding as a testing method; knowledge representation for engineering design; analysis of faults in a multi-version software experiment; implementation of a parallel programming environment; two computer graphics systems for visualization of pressure distribution and convective density particles; task decomposition for multiple robot arms; vectorized incomplete conjugate gradient; and iterative methods for solving linear equations on the Flex/32.
Laser-assisted solar cell metallization processing
NASA Technical Reports Server (NTRS)
Dutta, S.
1984-01-01
Laser-assisted processing techniques utilized to produce the fine-line, thin metal grid structures that are required to fabricate high-efficiency solar cells are investigated. The tasks comprising these investigations are summarized. Metal deposition experiments are carried out utilizing laser-assisted pyrolysis of a variety of metal-bearing polymer films and metalloorganic inks spun onto silicon substrates. Laser decomposition of spun-on silver neodecanoate ink yields very promising results. Solar cell comb metallization patterns are written using this technique.
Hahne, Anja; Mueller, Jutta L; Clahsen, Harald
2006-01-01
This study reports the results of two behavioral and two event-related brain potential experiments examining the processing of inflected words in second-language (L2) learners with Russian as their native language. Two different subsystems of German inflection were studied, participial inflection and noun plurals. For participial forms, L2 learners were found to widely generalize the -t suffixation rule in a nonce-word elicitation task, and in the event-related brain potential experiment, they showed an anterior negativity followed by a P600-both results resembling previous findings from native speakers of German on the same materials. For plural formation, the L2 learners displayed different preference patterns for regular and irregular forms in an off-line plural judgment task. Regular and irregular plural forms also differed clearly with regard to their brain responses. Whereas overapplications of the -s plural rule produced a P600 component, overapplications of irregular patterns elicited an N400. In contrast to native speakers of German, however, the L2 learners did not show an anterior negativity for -s plural overapplications. Taken together, the results show clear dissociations between regular and irregular inflection for both morphological subsystems. We argue that the two processing routes posited by dual-mechanism models of inflection (lexical storage and morphological decomposition) are also employed by L2 learners.
Analysis of dystonic tremor in musicians using empirical mode decomposition.
Lee, A; Schoonderwaldt, E; Chadde, M; Altenmüller, E
2015-01-01
We tested the hypotheses that tremor amplitude in musicians with task-specific dystonia is higher at the affected finger (dystonic tremor, DT) or the adjacent finger (tremor associated with dystonia, TAD) than (1) in matched fingers of healthy musicians and non-musicians and (2) in the unaffected and non-adjacent fingers of the affected side within patients. We measured 21 patients, 21 healthy musicians and 24 non-musicians. Participants performed a flexion-extension movement. Instantaneous frequency and amplitude values were obtained with empirical mode decomposition and a Hilbert transform, allowing comparison of tremor amplitudes throughout the movement at various frequency ranges. We did not find a significant difference in tremor amplitude between patients and controls for either DT or TAD. Nor did tremor amplitude differ in the within-patient comparisons. Both hypotheses were rejected, and apparently neither DT nor TAD occurs in musician's dystonia (MD) of the fingers. This is the first study assessing DT and TAD in musician's dystonia. Our finding suggests that even though MD is an excellent model for malplasticity due to excessive practice, it does not seem to provide a good model for DT. Rather, it seems that musician's dystonia may manifest itself either as dystonic cramping without tremor or as task-specific tremor without overt dystonic cramping. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Huang, Yan; Wang, Zhihui
2015-12-01
With the development of FPGA technology, DSP Builder is widely applied to design system-level algorithms. The CL multi-wavelet algorithm is more effective than scalar wavelets for signal decomposition. Thus, a system of CL multi-wavelets based on DSP Builder is designed for the first time in this paper. The system mainly contains three parts: a pre-filtering subsystem, a one-level decomposition subsystem and a two-level decomposition subsystem. It can be converted into the hardware language VHDL by the Signal Compiler block that can be used in Quartus II. Analysis of the energy indicator shows that this system outperforms the Daubechies wavelet in signal decomposition. Furthermore, it has proved to be suitable for the implementation of signal fusion based on SoPC hardware, and it will become a solid foundation in this new field.
Xue, Kun; Wang, Lei; An, Jin; Xu, Jianbin
2011-05-13
The thermal decomposition of ultrathin HfO(2) films (∼0.6-1.2 nm) on Si by ultrahigh vacuum annealing (25-800 °C) is investigated in situ and in real time by scanning tunneling microscopy. Two distinct thickness-dependent decomposition behaviors are observed. When the HfO(2) thickness is ∼0.6 nm, no discernible morphological changes are found below ∼700 °C; an abrupt reaction then occurs at 750 °C, with crystalline hafnium silicide nanostructures formed instantaneously. However, when the thickness is about 1.2 nm, the decomposition proceeds gradually with the creation and growth of two-dimensional voids at 800 °C. The observed thickness-dependent behavior is closely related to SiO desorption, which is believed to be the rate-limiting step of the decomposition process.
Nanomaterial release characteristics in a single-walled carbon nanotube manufacturing workplace
NASA Astrophysics Data System (ADS)
Ji, Jun Ho; Kim, Jong Bum; Lee, Gwangjae; Bae, Gwi-Nam
2015-02-01
As carbon nanotubes (CNTs) are widely used in various applications, exposure assessment is increasing in importance along with the various toxicity tests for CNTs. We conducted 24-h continuous nanoaerosol measurements to identify possible nanomaterial release in a single-walled carbon nanotube (SWCNT) manufacturing workplace. Four real-time aerosol instruments were used to determine the nanosized and microsized particle numbers, particle surface area, and carbonaceous species. Task-based exposure assessment was carried out for SWCNT synthesis using the arc plasma process and for the thermal decomposition process used to remove amorphous carbon impurities. During SWCNT synthesis, the black carbon (BC) concentration was 2-12 μg/m³. The maximum BC mass concentrations occurred when the synthesis chamber was opened for harvesting the SWCNTs. The number concentrations of particles with sizes of 10-420 nm were 10,000-40,000 particles/cm³ during the tasks. The maximum number concentration occurred when a vacuum pump was operated to remove exhaust air from the SWCNT synthesis chamber, due to the penetration of highly concentrated oil mists through the opened window. We analyzed the particle mass size distribution and particle number size distribution for each peak episode. Using real-time aerosol detectors, we distinguished SWCNT releases from background nanoaerosols such as oil mist and atmospheric photochemical smog particles. SWCNT aggregates with sizes of 1-10 μm were mainly released from the arc plasma synthesis. The harvesting process was the main release route of SWCNTs in the workplace.
Le, Huy Q.; Molloi, Sabee
2011-01-01
Purpose: Energy resolving detectors provide more than one spectral measurement in one image acquisition. The purpose of this study is to investigate, with simulation, the ability to decompose four materials using energy discriminating detectors and least squares minimization techniques. Methods: Three least squares parameter estimation decomposition techniques were investigated for four-material breast imaging tasks in the image domain. The first technique treats the voxel as if it consisted of fractions of all the materials. The second method assumes that a voxel primarily contains one material and divides the decomposition process into segmentation and quantification tasks. The third is similar to the second method but a calibration was used. The simulated computed tomography (CT) system consisted of an 80 kVp spectrum and a CdZnTe (CZT) detector that could resolve the x-ray spectrum into five energy bins. A postmortem breast specimen was imaged with flat panel CT to provide a model for the digital phantoms. Hydroxyapatite (HA) (50, 150, 250, 350, 450, and 550 mg/ml) and iodine (4, 12, 20, 28, 36, and 44 mg/ml) contrast elements were embedded into the glandular region of the phantoms. Calibration phantoms consisted of a 30/70 glandular-to-adipose tissue ratio with embedded HA (100, 200, 300, 400, and 500 mg/ml) and iodine (5, 15, 25, 35, and 45 mg/ml). The x-ray transport process was simulated where the Beer–Lambert law, Poisson process, and CZT absorption efficiency were applied. Qualitative and quantitative evaluations of the decomposition techniques were performed and compared. The effect of breast size was also investigated. Results: The first technique decomposed iodine adequately but failed for other materials. The second method separated the materials but was unable to quantify the materials. With the addition of a calibration, the third technique provided good separation and quantification of hydroxyapatite, iodine, glandular, and adipose tissues. Quantification with this technique was accurate with errors of 9.83% and 6.61% for HA and iodine, respectively. Calibration at one point (one breast size) showed increased errors as the mismatch in breast diameters between calibration and measurement increased. A four-point calibration successfully decomposed breast diameter spanning the entire range from 8 to 20 cm. For a 14 cm breast, errors were reduced from 5.44% to 1.75% and from 6.17% to 3.27% with the multipoint calibration for HA and iodine, respectively. Conclusions: The results of the simulation study showed that a CT system based on CZT detectors in conjunction with least squares minimization technique can be used to decompose four materials. The calibrated least squares parameter estimation decomposition technique performed the best, separating and accurately quantifying the concentrations of hydroxyapatite and iodine. PMID:21361193
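A least squares decomposition of the kind investigated here can be sketched per voxel as a small non-negative least squares problem; the per-bin attenuation matrix below is hypothetical, standing in for values that the study derives from calibration phantoms.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical per-bin linear attenuation (1/cm) for the four basis materials;
# real coefficients would come from the calibration phantoms described above.
#              HA   iodine  gland  adipose
A = np.array([[2.0, 1.5,    0.40,  0.20],
              [1.2, 1.0,    0.30,  0.18],
              [0.8, 1.8,    0.25,  0.16],   # iodine K-edge raises a middle bin
              [0.6, 1.3,    0.22,  0.15],
              [0.5, 0.9,    0.20,  0.14]])

rng = np.random.default_rng(0)
x_true = np.array([0.10, 0.05, 0.60, 0.25])      # voxel composition
b = A @ x_true + 1e-3 * rng.standard_normal(5)   # measured attenuation per bin

x_hat, _ = nnls(A, b)    # least squares with non-negativity, solved per voxel
print(np.round(x_hat, 3))
```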
Modeling Personnel Turnover in the Parametric Organization
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1991-01-01
A primary issue in organizing a new parametric cost analysis function is to determine the skill mix and number of personnel required. The skill mix can be obtained by a functional decomposition of the tasks required within the organization and a matrixed correlation with educational or experience backgrounds. The number of personnel is a function of the skills required to cover all tasks, personnel skill background and cross-training, the intensity of the workload for each task, migration through various tasks by personnel along a career path, personnel hiring limitations imposed by management and the applicant marketplace, personnel training limitations imposed by management and personnel capability, and the rate at which personnel leave the organization for whatever reason. Faced with the task of relating all of these organizational facets in order to grow a parametric cost analysis (PCA) organization from scratch, it was decided that a dynamic model was required in order to account for the obvious dynamics of the forming organization. The challenge was to create a simple model that would remain credible during all phases of organizational development. The model development process was broken down into the activities of determining the tasks required for PCA, determining the skills required for each PCA task, determining the skills available in the applicant marketplace, determining the structure of the dynamic model, implementing the dynamic model, and testing the dynamic model.
Catalysts for the decomposition of hydrazine, hydrazine derivatives and mixtures of both
NASA Technical Reports Server (NTRS)
Sasse, R.
1986-01-01
This invention concerns a catalyst designed for the decomposition of hydrazine, hydrazine derivatives and mixtures of the two. The objective is to develop a catalyst of the type described that is cheap and easy to produce and is also characterized by extremely short response times.
Simultaneous Tensor Decomposition and Completion Using Factor Priors.
Chen, Yi-Lei; Hsu, Chiou-Ting Candy; Liao, Hong-Yuan Mark
2013-08-27
Tensor completion, which is a high-order extension of matrix completion, has generated a great deal of research interest in recent years. Given a tensor with incomplete entries, existing methods use either factorization or completion schemes to recover the missing parts. However, as the number of missing entries increases, factorization schemes may overfit the model because of incorrectly predefined ranks, while completion schemes may fail to interpret the model factors. In this paper, we introduce a novel concept: complete the missing entries and simultaneously capture the underlying model structure. To this end, we propose a method called Simultaneous Tensor Decomposition and Completion (STDC) that combines a rank minimization technique with Tucker model decomposition. Moreover, as the model structure is implicitly included in the Tucker model, we use factor priors, which are usually known a priori in real-world tensor objects, to characterize the underlying joint-manifold drawn from the model factors. We conducted experiments to empirically verify the convergence of our algorithm on synthetic data, and evaluate its effectiveness on various kinds of real-world data. The results demonstrate the efficacy of the proposed method and its potential usage in tensor-based applications. It also outperforms state-of-the-art methods on multilinear model analysis and visual data completion tasks.
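For contrast with STDC, the following sketch shows the naive route the paper improves upon: an EM-style loop that alternates a fixed-rank Tucker fit (via the tensorly package) with re-imputation of the missing entries; it includes neither the rank minimization nor the factor priors of STDC.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker
from tensorly.tenalg import multi_mode_dot

rng = np.random.default_rng(0)
# Low-rank ground-truth tensor plus a random mask of observed entries
G = rng.standard_normal((2, 2, 2))
U = [rng.standard_normal((d, 2)) for d in (10, 12, 8)]
X_true = multi_mode_dot(G, U)
mask = rng.random(X_true.shape) < 0.6          # 60% of entries observed

# Naive EM-style completion: alternate Tucker fits with re-imputation.
# This is NOT the STDC algorithm (no rank minimization or factor priors).
X = np.where(mask, X_true, 0.0)
for _ in range(30):
    core, factors = tucker(tl.tensor(X), rank=[2, 2, 2])
    X_hat = multi_mode_dot(core, factors)
    X = np.where(mask, X_true, X_hat)          # keep observed, update missing

err = np.linalg.norm((X_hat - X_true)[~mask]) / np.linalg.norm(X_true[~mask])
print(f"relative error on missing entries: {err:.3f}")
```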
Ehn, S; Sellerer, T; Mechlem, K; Fehringer, A; Epple, M; Herzen, J; Pfeiffer, F; Noël, P B
2017-01-07
Following the development of energy-sensitive photon-counting detectors using high-Z sensor materials, application of spectral x-ray imaging methods to clinical practice comes into reach. However, these detectors require extensive calibration efforts in order to perform spectral imaging tasks like basis material decomposition. In this paper, we report a novel approach to basis material decomposition that utilizes a semi-empirical estimator for the number of photons registered in distinct energy bins in the presence of beam-hardening effects, which can be termed a polychromatic Beer-Lambert model. A maximum-likelihood estimator is applied to the model in order to obtain estimates of the underlying sample composition. Using a Monte-Carlo simulation of a typical clinical CT acquisition, the performance of the proposed estimator was evaluated. The estimator is shown to be unbiased and efficient according to the Cramér-Rao lower bound. In particular, the estimator is capable of operating with a minimum number of calibration measurements. Good results were obtained after calibration using fewer than 10 samples of known composition in a two-material attenuation basis. This opens up the possibility of fast re-calibration in the clinical routine, which is considered an advantage of the proposed method over other implementations reported in the literature.
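A stripped-down sketch of the estimation principle: a Beer-Lambert forward model per energy bin combined with a Poisson maximum-likelihood fit. The two-bin, two-material coefficients are invented, and the paper's semi-empirical handling of beam hardening is omitted.

```python
import numpy as np
from scipy.optimize import minimize

# Toy two-material, two-bin model. mu[b, m] is an *effective* attenuation per
# bin (values hypothetical); the paper's semi-empirical model also handles
# beam hardening, which this simplified sketch ignores.
mu = np.array([[0.30, 0.80],
               [0.20, 0.45]])        # rows: energy bins, cols: materials
N0 = np.array([1e5, 8e4])            # unattenuated counts per bin

def expected_counts(x):
    return N0 * np.exp(-mu @ x)      # Beer-Lambert per energy bin

def neg_log_likelihood(x, y):
    lam = expected_counts(x)
    return np.sum(lam - y * np.log(lam))   # Poisson NLL up to a constant

rng = np.random.default_rng(3)
x_true = np.array([2.0, 1.0])                 # material thicknesses (cm)
y = rng.poisson(expected_counts(x_true))      # measured bin counts

res = minimize(neg_log_likelihood, x0=np.ones(2), args=(y,),
               bounds=[(0, None)] * 2)
print(res.x)   # maximum-likelihood estimate of the two basis thicknesses
```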
NASA Astrophysics Data System (ADS)
Ehn, S.; Sellerer, T.; Mechlem, K.; Fehringer, A.; Epple, M.; Herzen, J.; Pfeiffer, F.; Noël, P. B.
2017-01-01
Following the development of energy-sensitive photon-counting detectors using high-Z sensor materials, application of spectral x-ray imaging methods to clinical practice comes into reach. However, these detectors require extensive calibration efforts in order to perform spectral imaging tasks like basis material decomposition. In this paper, we report a novel approach to basis material decomposition that utilizes a semi-empirical estimator for the number of photons registered in distinct energy bins in the presence of beam-hardening effects, which can be termed a polychromatic Beer-Lambert model. A maximum-likelihood estimator is applied to the model in order to obtain estimates of the underlying sample composition. Using a Monte-Carlo simulation of a typical clinical CT acquisition, the performance of the proposed estimator was evaluated. The estimator is shown to be unbiased and efficient according to the Cramér-Rao lower bound. In particular, the estimator is capable of operating with a minimum number of calibration measurements. Good results were obtained after calibration using fewer than 10 samples of known composition in a two-material attenuation basis. This opens up the possibility of fast re-calibration in the clinical routine, which is considered an advantage of the proposed method over other implementations reported in the literature.
Wang, Deyun; Liu, Yanling; Luo, Hongyuan; Yue, Chenqiang; Cheng, Sheng
2017-01-01
Accurate PM2.5 concentration forecasting is crucial for protecting public health and the atmospheric environment. However, the intermittent and unstable nature of PM2.5 concentration series makes forecasting them a very difficult task. In order to improve the forecast accuracy of PM2.5 concentration, this paper proposes a hybrid model based on wavelet transform (WT), variational mode decomposition (VMD) and a back propagation (BP) neural network optimized by the differential evolution (DE) algorithm. Firstly, WT is employed to disassemble the PM2.5 concentration series into a number of subsets with different frequencies. Secondly, VMD is applied to decompose each subset into a set of variational modes (VMs). Thirdly, the DE-BP model is utilized to forecast all the VMs. Fourthly, the forecast value of each subset is obtained by aggregating the forecast results of all the VMs obtained from the VMD decomposition of that subset. Finally, the final forecast series of PM2.5 concentration is obtained by adding up the forecast values of all subsets. Two PM2.5 concentration series, collected from Wuhan and Tianjin, China, are used to test the effectiveness of the proposed model. The results demonstrate that the proposed model outperforms all the other models considered in this paper. PMID:28704955
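The decompose-forecast-aggregate scheme can be illustrated with a reduced version of the model: a single wavelet stage (via PyWavelets) whose additive components are each forecast by a plain BP network (sklearn's MLPRegressor, without the DE optimization) and then summed; the data and hyperparameters are arbitrary.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

def additive_components(x, wavelet='db4', level=3):
    """Split x into level+1 additive series (details + approximation)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    parts = []
    for i in range(len(coeffs)):
        sel = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        parts.append(pywt.waverec(sel, wavelet)[: len(x)])
    return parts

def one_step_forecast(series, lags=8):
    X = np.array([series[i - lags:i] for i in range(lags, len(series))])
    y = series[lags:]
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=0).fit(X, y)
    return model.predict(series[-lags:][None, :])[0]

rng = np.random.default_rng(4)
t = np.arange(512)
pm25 = 50 + 20 * np.sin(2 * np.pi * t / 48) + 5 * rng.standard_normal(512)

# Decompose, forecast each component, then aggregate the component forecasts
forecast = sum(one_step_forecast(c) for c in additive_components(pm25))
print(f"next-step PM2.5 forecast: {forecast:.1f}")
```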
NASA Astrophysics Data System (ADS)
Feigin, Alexander; Gavrilov, Andrey; Loskutov, Evgeny; Mukhin, Dmitry
2015-04-01
Proper decomposition of a complex system into well-separated "modes" is a way to reveal and understand the mechanisms governing the system's behaviour, as well as to discover essential feedbacks and nonlinearities. The decomposition is also a natural procedure for constructing models that are adequate and at the same time as simple as possible, both of the corresponding sub-systems and of the system as a whole. In recent works, two new methods of decomposition of the Earth's climate system into well-separated modes were discussed. The first method [1-3] is based on MSSA (Multichannel Singular Spectrum Analysis) [4] for the linear expansion of vector (space-distributed) time series and makes allowance for delayed correlations of the processes recorded at spatially separated points. The second one [5-7] allows the construction of nonlinear dynamic modes, but neglects delays in the correlations. It was demonstrated [1-3] that the first method provides effective separation of different time scales but prevents correct reduction of the data dimension: the slope of the variance spectrum of the spatio-temporal empirical orthogonal functions that are the "structural material" for linear spatio-temporal modes is too flat. The second method overcomes this problem: the variance spectrum of nonlinear modes falls much more sharply [5-7]. However, neglecting time-lag correlations introduces an uncontrolled mode-selection error that increases with the mode time scale. In this report we combine the two methods so that the resulting algorithm allows the construction of nonlinear spatio-temporal modes. The algorithm is applied to the decomposition of (i) several hundred years of globally distributed data generated by the INM RAS Coupled Climate Model [8], and (ii) a 156-year time series of SST anomalies distributed over the globe [9]. We compare the efficiency of the different decomposition methods and discuss the capability of nonlinear spatio-temporal modes for constructing adequate and at the same time simplest ("optimal") models of climate systems. 1. Feigin A.M., Mukhin D., Gavrilov A., Volodin E.M., and Loskutov E.M. (2013) "Separation of spatial-temporal patterns ("climatic modes") by combined analysis of really measured and generated numerically vector time series", AGU 2013 Fall Meeting, Abstract NG33A-1574. 2. Feigin A., Mukhin D., Gavrilov A., Volodin E., and Loskutov E. (2014) "Approach to analysis of multiscale space-distributed time series: separation of spatio-temporal modes with essentially different time scales", Geophysical Research Abstracts, Vol. 16, EGU2014-6877. 3. Mukhin D., Kondrashov D., Loskutov E., Gavrilov A., Feigin A., and Ghil M. (2014) "Predicting critical transitions in ENSO models, Part II: Spatially dependent models", Journal of Climate (accepted, doi: 10.1175/JCLI-D-14-00240.1). 4. Ghil M., Allen R.M., Dettinger M.D., Ide K., Kondrashov D., et al. (2002) "Advanced spectral methods for climatic time series", Rev. Geophys. 40(1), 3.1-3.41. 5. Mukhin D., Gavrilov A., Loskutov E.M., and Feigin A.M. (2014) "Nonlinear decomposition of climate data: a new method for reconstruction of dynamical modes", AGU 2014 Fall Meeting, Abstract NG43A-3752. 6. Gavrilov A., Mukhin D., Loskutov E., and Feigin A. (2015) "Empirical decomposition of climate data into nonlinear dynamic modes", Geophysical Research Abstracts, Vol. 17, EGU2015-627. 7. Mukhin D., Gavrilov A., Loskutov E., Feigin A., and Kurths J. (2015) "Reconstruction of principal dynamical modes from climatic variability: nonlinear approach", Geophysical Research Abstracts, Vol. 17, EGU2015-5729. 8. http://83.149.207.89/GCM_DATA_PLOTTING/GCM_INM_DATA_XY_en.htm. 9. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/.
NASA Astrophysics Data System (ADS)
Abrokwah, K.; O'Reilly, A. M.
2017-12-01
Groundwater is an important resource that is extracted every day for domestic, industrial and agricultural purposes. The need to sustain groundwater resources, clearly indicated by declining water levels, has motivated efforts to model and forecast groundwater levels accurately. In this study, spectral decomposition of climatic forcing time series was used to develop hybrid wavelet analysis (WA) and moving window average (MWA) artificial neural network (ANN) models. These techniques are explored by modeling historical groundwater levels in order to provide understanding of potential causes of the observed groundwater-level fluctuations. Selection of the appropriate decomposition level for WA and window size for MWA helps in understanding the important time scales of climatic forcing, such as rainfall, that influence water levels. The discrete wavelet transform (DWT) is used to decompose the input time-series data into various levels of approximation and detail wavelet coefficients, whilst MWA acts as a low-pass signal-filtering technique for removing high-frequency signals from the input data. The variables used to develop and validate the models were daily average rainfall measurements from five National Oceanic and Atmospheric Administration (NOAA) weather stations and daily water-level measurements from two wells recorded from 1978 to 2008 in central Florida, USA. Using different decomposition levels and different window sizes, several WA-ANN and MWA-ANN models for simulating the water levels were created and their relative performances compared against each other. The WA-ANN models performed better than the corresponding MWA-ANN models, and higher decomposition levels of the input signal by the DWT gave the best results. The results obtained show the applicability and feasibility of hybrid WA-ANN and MWA-ANN models for simulating daily water levels using only climatic forcing time series as model inputs.
Forbes, Shari L.; Perrault, Katelynn A.; Stefanuto, Pierre-Hugues; Nizio, Katie D.; Focant, Jean-François
2014-01-01
The investigation of volatile organic compounds (VOCs) associated with decomposition is an emerging field in forensic taphonomy due to their importance in locating human remains using biological detectors such as insects and canines. A consistent decomposition VOC profile has not yet been elucidated due to the intrinsic impact of the environment on the decomposition process in different climatic zones. The study of decomposition VOCs has typically occurred during the warmer months to enable chemical profiling of all decomposition stages. The present study investigated the decomposition VOC profile in air during both warmer and cooler months in a moist, mid-latitude (Cfb) climate as decomposition occurs year-round in this environment. Pig carcasses (Sus scrofa domesticus L.) were placed on a soil surface to decompose naturally and their VOC profile was monitored during the winter and summer months. Corresponding control sites were also monitored to determine the natural VOC profile of the surrounding soil and vegetation. VOC samples were collected onto sorbent tubes and analyzed using comprehensive two-dimensional gas chromatography – time-of-flight mass spectrometry (GC×GC-TOFMS). The summer months were characterized by higher temperatures and solar radiation, greater rainfall accumulation, and comparable humidity when compared to the winter months. The rate of decomposition was faster and the number and abundance of VOCs was proportionally higher in summer. However, a similar trend was observed in winter and summer demonstrating a rapid increase in VOC abundance during active decay with a second increase in abundance occurring later in the decomposition process. Sulfur-containing compounds, alcohols and ketones represented the most abundant classes of compounds in both seasons, although almost all 10 compound classes identified contributed to discriminating the stages of decomposition throughout both seasons. The advantages of GC×GC-TOFMS were demonstrated for detecting and identifying trace levels of VOCs, particularly ethers, which are rarely reported as decomposition VOCs. PMID:25412504
Forbes, Shari L; Perrault, Katelynn A; Stefanuto, Pierre-Hugues; Nizio, Katie D; Focant, Jean-François
2014-01-01
The investigation of volatile organic compounds (VOCs) associated with decomposition is an emerging field in forensic taphonomy due to their importance in locating human remains using biological detectors such as insects and canines. A consistent decomposition VOC profile has not yet been elucidated due to the intrinsic impact of the environment on the decomposition process in different climatic zones. The study of decomposition VOCs has typically occurred during the warmer months to enable chemical profiling of all decomposition stages. The present study investigated the decomposition VOC profile in air during both warmer and cooler months in a moist, mid-latitude (Cfb) climate as decomposition occurs year-round in this environment. Pig carcasses (Sus scrofa domesticus L.) were placed on a soil surface to decompose naturally and their VOC profile was monitored during the winter and summer months. Corresponding control sites were also monitored to determine the natural VOC profile of the surrounding soil and vegetation. VOC samples were collected onto sorbent tubes and analyzed using comprehensive two-dimensional gas chromatography--time-of-flight mass spectrometry (GC × GC-TOFMS). The summer months were characterized by higher temperatures and solar radiation, greater rainfall accumulation, and comparable humidity when compared to the winter months. The rate of decomposition was faster and the number and abundance of VOCs was proportionally higher in summer. However, a similar trend was observed in winter and summer demonstrating a rapid increase in VOC abundance during active decay with a second increase in abundance occurring later in the decomposition process. Sulfur-containing compounds, alcohols and ketones represented the most abundant classes of compounds in both seasons, although almost all 10 compound classes identified contributed to discriminating the stages of decomposition throughout both seasons. The advantages of GC × GC-TOFMS were demonstrated for detecting and identifying trace levels of VOCs, particularly ethers, which are rarely reported as decomposition VOCs.
Sequential or parallel decomposed processing of two-digit numbers? Evidence from eye-tracking.
Moeller, Korbinian; Fischer, Martin H; Nuerk, Hans-Christoph; Willmes, Klaus
2009-02-01
While reaction time data have shown that decomposed processing of two-digit numbers occurs, there is little evidence about how decomposed processing functions. Poltrock and Schwartz (1984) argued that multi-digit numbers are compared in a sequential digit-by-digit fashion starting at the leftmost digit pair. In contrast, Nuerk and Willmes (2005) favoured parallel processing of the digits constituting a number. These models (i.e., sequential decomposition, parallel decomposition) make different predictions regarding the fixation pattern in a two-digit number magnitude comparison task and can therefore be differentiated by eye fixation data. We tested these models by evaluating participants' eye fixation behaviour while selecting the larger of two numbers. The stimulus set consisted of within-decade comparisons (e.g., 53_57) and between-decade comparisons (e.g., 42_57). The between-decade comparisons were further divided into compatible and incompatible trials (cf. Nuerk, Weger, & Willmes, 2001) and trials with different decade and unit distances. The observed fixation pattern implies that the comparison of two-digit numbers is not executed by sequentially comparing decade and unit digits as proposed by Poltrock and Schwartz (1984) but rather in a decomposed but parallel fashion. Moreover, the present fixation data provide first evidence that digit processing in multi-digit numbers is not a pure bottom-up effect, but is also influenced by top-down factors. Finally, implications for multi-digit number processing beyond the range of two-digit numbers are discussed.
The Distributed Diagonal Force Decomposition Method for Parallelizing Molecular Dynamics Simulations
Boršnik, Urban; Miller, Benjamin T.; Brooks, Bernard R.; Janežič, Dušanka
2011-01-01
Parallelization is an effective way to reduce the computational time needed for molecular dynamics simulations. We describe a new parallelization method, the distributed-diagonal force decomposition method, with which we extend and improve the existing force decomposition methods. Our new method requires less data communication during molecular dynamics simulations than replicated data and current force decomposition methods, increasing the parallel efficiency. It also dynamically load-balances the processors' computational load throughout the simulation. The method is readily implemented in existing molecular dynamics codes and it has been incorporated into the CHARMM program, allowing its immediate use in conjunction with the many molecular dynamics simulation techniques that are already present in the program. We also present the design of the Force Decomposition Machine, a cluster of personal computers and networks that is tailored to running molecular dynamics simulations using the distributed diagonal force decomposition method. The design is expandable and provides various degrees of fault resilience. This approach is easily adaptable to computers with Graphics Processing Units because it is independent of the processor type being used. PMID:21793007
Decomposition of multilayer benzene and n-hexane films on vanadium.
Souda, Ryutaro
2015-09-21
Reactions of multilayer hydrocarbon films with a polycrystalline V substrate have been investigated using temperature-programmed desorption and time-of-flight secondary ion mass spectrometry. Most of the benzene molecules were dissociated on V, as evidenced by the strong depression of the thermal desorption yields of physisorbed species at 150 K. The reaction products dehydrogenated gradually after the multilayer film disappeared from the surface. A large amount of oxygen was needed to passivate benzene decomposition on V. These behaviors indicate that the subsurface sites of V play a role in multilayer benzene decomposition. Decomposition of the n-hexane multilayer films is manifested by the desorption of methane at 105 K and gradual hydrogen desorption starting at this temperature, indicating that C-C bond scission precedes C-H bond cleavage. The n-hexane dissociation temperature is considerably lower than the thermal desorption temperature of the physisorbed species (140 K). The n-hexane multilayer morphology changes at the decomposition temperature, suggesting that a liquid-like phase formed after crystallization plays a role in the low-temperature decomposition of n-hexane.
Domain decomposition: A bridge between nature and parallel computers
NASA Technical Reports Server (NTRS)
Keyes, David E.
1992-01-01
Domain decomposition is an intuitive organizing principle for a partial differential equation (PDE) computation, both physically and architecturally. However, its significance extends beyond the readily apparent issues of geometry and discretization, on one hand, and of modular software and distributed hardware, on the other. Engineering and computer science aspects are bridged by an old but recently enriched mathematical theory that offers the subject not only unity, but also tools for analysis and generalization. Domain decomposition induces function-space and operator decompositions with valuable properties. Function-space bases and operator splittings that are not derived from domain decompositions generally lack one or more of these properties. The evolution of domain decomposition methods for elliptically dominated problems has linked two major algorithmic developments of the last 15 years: multilevel and Krylov methods. Domain decomposition methods may be considered descendants of both classes with an inheritance from each: they are nearly optimal and at the same time efficiently parallelizable. Many computationally driven application areas are ripe for these developments. A progression is made from a mathematically informal motivation for domain decomposition methods to a specific focus on fluid dynamics applications. To be introductory rather than comprehensive, simple examples are provided while convergence proofs and algorithmic details are left to the original references; however, an attempt is made to convey their most salient features, especially where this leads to algorithmic insight.
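A classical concrete instance of the idea is the alternating Schwarz method, sketched below for a 1D Poisson problem split into two overlapping subdomains; each sweep solves the subdomain problems directly, using the latest iterate as boundary data.

```python
import numpy as np

# Alternating Schwarz for -u'' = f on (0,1), u(0)=u(1)=0, with two overlapping
# subdomains: a minimal illustration of domain decomposition as an iteration.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.ones(n)                       # exact solution: u = x(1-x)/2

def solve_sub(u, lo, hi):
    """Direct solve of the tridiagonal Dirichlet problem on nodes lo..hi."""
    m = hi - lo - 1                  # interior unknowns of the subdomain
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    rhs = f[lo + 1:hi].copy()
    rhs[0] += u[lo] / h**2           # boundary data from the current iterate
    rhs[-1] += u[hi] / h**2
    u[lo + 1:hi] = np.linalg.solve(A, rhs)

u = np.zeros(n)
for it in range(20):                 # subdomains [0,60] and [40,100] overlap
    solve_sub(u, 0, 60)
    solve_sub(u, 40, 100)

print(np.max(np.abs(u - x * (1 - x) / 2)))   # error shrinks with iterations
```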
Yang, Lin; Deng, Chang-chun; Chen Ya-mei; He, Run-lian; Zhang, Jian; Liu, Yang
2015-12-01
The relationships between litter decomposition rates and the initial litter quality of 14 representative plants in the alpine forest ecotone of western Sichuan were investigated in this paper. The decomposition rate k of the litter ranged from 0.16 to 1.70. Woody leaf litter and moss litter decomposed much more slowly, shrub litter decomposed somewhat faster, and herbaceous litter decomposed fastest among all plant forms. There were significant linear regression relationships between the litter decomposition rate and the N content, lignin content, phenolics content, C/N, C/P and lignin/N. In a path analysis, lignin/N and hemicellulose content together explained 78.4% of the variation in the litter decomposition rate (k). Lignin/N alone explained 69.5% of the variation in k, and the direct path coefficient of lignin/N on k was -0.913. Principal component analysis (PCA) showed that the contribution rate of the first sort axis to k and the decomposition time (t) reached 99.2%. Significant positive correlations existed between lignin/N, lignin content, C/N, C/P and the first sort axis, with the closest relationship between lignin/N and the first sort axis (r = 0.923). Lignin/N was the key quality factor affecting plant litter decomposition rates across the alpine timberline ecotone: the higher the initial lignin/N, the lower the decomposition rate of the leaf litter.
Russell, Matthew B.; Woodall, Christopher W.; D'Amato, Anthony W.; Fraver, Shawn; Bradford, John B.
2014-01-01
Forest ecosystems play a critical role in mitigating greenhouse gas emissions. Forest carbon (C) is stored through photosynthesis and released via decomposition and combustion. Relative to C fixation in biomass, much less is known about C depletion through decomposition of woody debris, particularly under a changing climate. It is assumed that the increased temperatures and longer growing seasons associated with projected climate change will increase the decomposition rates (i.e., more rapid C cycling) of downed woody debris (DWD); however, the magnitude of this increase has not been previously addressed. Using DWD measurements collected from a national forest inventory of the eastern United States, we show that the residence time of DWD may decrease (i.e., more rapid decomposition) by as much as 13% over the next 200 years, depending on various future climate change scenarios and forest types. Although existing dynamic global vegetation models account for the decomposition process, they typically do not include the effect of a changing climate on DWD decomposition rates. We expect that an increased understanding of decomposition rates, as presented in this current work, will be needed to adequately quantify the fate of woody detritus in future forests. Furthermore, we hope these results will lead to improved models that incorporate climate change scenarios for depicting future dead wood dynamics in addition to a traditional emphasis on live-tree demographics.
Challenges of including nitrogen effects on decomposition in earth system models
NASA Astrophysics Data System (ADS)
Hobbie, S. E.
2011-12-01
Despite the importance of litter decomposition for ecosystem fertility and carbon balance, key uncertainties remain about how this fundamental process is affected by nitrogen (N) availability. Nevertheless, resolving such uncertainties is critical for mechanistic inclusion of such processes in earth system models, towards predicting the ecosystem consequences of increased anthropogenic reactive N. Towards that end, we have conducted a series of experiments examining nitrogen effects on litter decomposition. We found that both substrate N and externally supplied N (regardless of form) accelerated the initial decomposition rate. Faster initial decomposition rates were linked to the higher activity of carbohydrate-degrading enzymes associated with externally supplied N and the greater relative abundances of Gram negative and Gram positive bacteria associated with green leaves and externally supplied organic N (assessed using phospholipid fatty acid analysis, PLFA). By contrast, later in decomposition, externally supplied N slowed decomposition, increasing the fraction of slowly decomposing litter and reducing lignin-degrading enzyme activity and relative abundances of Gram negative and Gram positive bacteria. Our results suggest that elevated atmospheric N deposition may have contrasting effects on the dynamics of different soil carbon pools, decreasing mean residence times of active fractions comprising very fresh litter, while increasing those of more slowly decomposing fractions including more processed litter. Incorporating these contrasting effects of N on decomposition processes into models is complicated by lingering uncertainties about how these effects generalize across ecosystems and substrates.
Task analysis of information technology-mediated medication management in outpatient care.
van Stiphout, F; Zwart-van Rijkom, J E F; Maggio, L A; Aarts, J E C M; Bates, D W; van Gelder, T; Jansen, P A F; Schraagen, J M C; Egberts, A C G; ter Braak, E W M T
2015-09-01
Educating physicians in the procedural as well as cognitive skills of information technology (IT)-mediated medication management could be one of the missing links for the improvement of patient safety. We aimed to compose a framework of tasks that need to be addressed to optimize medication management in outpatient care. We used formal task analysis: the decomposition of a complex task into a set of subtasks. First, we obtained a general description of the medication management process from exploratory interviews. Second, we interviewed experts in depth to further define tasks and subtasks. The setting was outpatient care in different fields of medicine in six teaching and academic medical centres in the Netherlands and the United States; the participants were 20 experts. Tasks were divided into procedural, cognitive and macrocognitive tasks and categorized into the three components of dynamic decision making. The medication management process consists of three components: (i) reviewing the medication situation; (ii) composing a treatment plan; and (iii) accomplishing and communicating a treatment and surveillance plan. Subtasks include multiple cognitive tasks such as composing a list of current medications and evaluating the reliability of sources, and procedural tasks such as documenting current medication. The identified macrocognitive tasks were: planning, integration of IT in workflow, managing uncertainties and responsibilities, and problem detection. All identified procedural, cognitive and macrocognitive skills should be included when designing education for IT-mediated medication management. The resulting framework supports the design of educational interventions to improve IT-mediated medication management in outpatient care. © 2015 The Authors. British Journal of Clinical Pharmacology published by John Wiley & Sons Ltd on behalf of The British Pharmacological Society.
Abstract-Reasoning Software for Coordinating Multiple Agents
NASA Technical Reports Server (NTRS)
Clement, Bradley; Barrett, Anthony; Rabideau, Gregg; Knight, Russell
2003-01-01
A computer program for scheduling the activities of multiple agents that share limited resources has been incorporated into the Automated Scheduling and Planning Environment (ASPEN) software system, aspects of which have been reported in several previous NASA Tech Briefs articles. In the original intended application, the agents would be multiple spacecraft and/or robotic vehicles engaged in scientific exploration of distant planets. The program could also be used on Earth in such diverse settings as production lines and military maneuvers. This program includes a planning/scheduling subprogram of the iterative repair type that reasons about the activities of multiple agents at abstract levels in order to greatly improve the scheduling of their use of shared resources. The program summarizes the information about the constraints on, and resource requirements of, abstract activities on the basis of the constraints and requirements that pertain to their potential refinements (decomposition into less-abstract and ultimately to primitive activities). The advantage of reasoning about summary information is that time needed to find consistent schedules is exponentially smaller than the time that would be needed for reasoning about the same tasks at the primitive level.
Predicting Flows of Rarefied Gases
NASA Technical Reports Server (NTRS)
LeBeau, Gerald J.; Wilmoth, Richard G.
2005-01-01
DSMC Analysis Code (DAC) is a flexible, highly automated, easy-to-use computer program for predicting flows of rarefied gases -- especially flows of upper-atmospheric, propulsion, and vented gases impinging on spacecraft surfaces. DAC implements the direct simulation Monte Carlo (DSMC) method, which is widely recognized as standard for simulating flows at densities so low that the continuum-based equations of computational fluid dynamics are invalid. DAC enables users to model complex surface shapes and boundary conditions quickly and easily. The discretization of a flow field into computational grids is automated, thereby relieving the user of a traditionally time-consuming task while ensuring (1) appropriate refinement of grids throughout the computational domain, (2) determination of optimal settings for temporal discretization and other simulation parameters, and (3) satisfaction of the fundamental constraints of the method. In so doing, DAC ensures an accurate and efficient simulation. In addition, DAC can utilize parallel processing to reduce computation time. The domain decomposition needed for parallel processing is completely automated, and the software employs a dynamic load-balancing mechanism to ensure optimal parallel efficiency throughout the simulation.
Elastic and acoustic wavefield decompositions and application to reverse time migrations
NASA Astrophysics Data System (ADS)
Wang, Wenlong
P- and S-waves coexist in elastic wavefields, and separation between them is an essential step in elastic reverse-time migrations (RTMs). Unlike the traditional separation methods that use curl and divergence operators, which do not preserve the wavefield vector component information, we propose and compare two vector decomposition methods, which preserve the same vector components that exist in the input elastic wavefield. The amplitude and phase information is automatically preserved, so no amplitude or phase corrections are required. The decoupled propagation method is extended from elastic to viscoelastic wavefields. To use the decomposed P and S vector wavefields and generate PP and PS images, we create a new 2D migration context for isotropic, elastic RTM which includes P/S vector decomposition; the propagation directions of both incident and reflected P- and S-waves are calculated directly from the stress and particle-velocity definitions of the decomposed P- and S-wave Poynting vectors. Then an excitation-amplitude image condition that scales the receiver wavelet by the source vector magnitude produces angle-dependent images of PP and PS reflection coefficients with the correct polarities, polarization, and amplitudes. It thus simplifies the process of obtaining PP and PS angle-domain common-image gathers (ADCIGs), as generating ADCIGs takes less effort from vector data than from scalar data. Besides P- and S-wave decomposition, separation of up- and down-going waves is also a part of the processing of multi-component recorded data and propagating wavefields. A complex-trace-based up/down separation approach is extended from acoustic to elastic wavefields and combined with P- and S-wave decomposition by decoupled propagation. This eliminates the need for a Fourier transform over time, thereby significantly reducing the storage cost and improving computational efficiency. Wavefield decomposition is applied to both synthetic elastic VSP data and propagating wavefield snapshots. Poynting vectors obtained from the particle-velocity and stress fields after P/S and up/down decompositions are much more accurate than those computed without them. The up/down separation algorithm is also applicable in acoustic RTMs, where both the (forward-time extrapolated) source and (reverse-time extrapolated) receiver wavefields are decomposed into up-going and down-going parts. Together with the crosscorrelation imaging condition, four images (down-up, up-down, up-up and down-down) are generated, which facilitates analysis of the artifacts in, and the imaging ability of, each image. Artifacts may exist in all the decomposed images, but their positions and types differ. The causes of artifacts in the different images are explained and illustrated with sketches and numerical tests.
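The Poynting-vector step described above reduces to a pointwise product of stress and particle-velocity fields. A minimal sketch for the 2D elastic case, P_i = -sigma_ij v_j, using random stand-in snapshots rather than real wavefield data:

```python
import numpy as np

# Toy snapshots standing in for stress and particle-velocity wavefields.
nx, nz = 200, 200
rng = np.random.default_rng(0)
sxx, szz, sxz = (rng.standard_normal((nz, nx)) for _ in range(3))  # stresses
vx, vz = (rng.standard_normal((nz, nx)) for _ in range(2))         # velocities

# Elastic Poynting vector P_i = -sigma_ij v_j, evaluated pointwise.
Px = -(sxx * vx + sxz * vz)              # horizontal energy-flux component
Pz = -(sxz * vx + szz * vz)              # vertical energy-flux component
angle = np.degrees(np.arctan2(Px, Pz))   # local propagation angle from vertical
print(angle.shape, float(angle.min()), float(angle.max()))
```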
Aeroelastic Uncertainty Quantification Studies Using the S4T Wind Tunnel Model
NASA Technical Reports Server (NTRS)
Nikbay, Melike; Heeg, Jennifer
2017-01-01
This paper originates from the joint efforts of an aeroelastic study team in the Applied Vehicle Technology Panel from NATO Science and Technology Organization, with the Task Group number AVT-191, titled "Application of Sensitivity Analysis and Uncertainty Quantification to Military Vehicle Design." We present aeroelastic uncertainty quantification studies using the SemiSpan Supersonic Transport wind tunnel model at the NASA Langley Research Center. The aeroelastic study team decided to treat both structural and aerodynamic input parameters as uncertain and represent them as samples drawn from statistical distributions, propagating them through aeroelastic analysis frameworks. Uncertainty quantification processes require many function evaluations to assess the impact of variations in numerous parameters on the vehicle characteristics, rapidly increasing the computational time requirement relative to that required to assess a system deterministically. The increased computational time is particularly prohibitive if high-fidelity analyses are employed. As a remedy, the Istanbul Technical University team employed an Euler solver in an aeroelastic analysis framework, and implemented reduced order modeling with Polynomial Chaos Expansion and Proper Orthogonal Decomposition to perform the uncertainty propagation. The NASA team chose to reduce the prohibitive computational time by employing linear solution processes. The NASA team also focused on determining input sample distributions.
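For readers unfamiliar with the propagation step, a sampling-based sketch is shown below. It uses plain Monte Carlo with a toy surrogate function rather than the Polynomial Chaos Expansion or linear solution processes the abstract describes; the distributions and the flutter_speed stand-in are illustrative assumptions, not the study's models.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 10_000

# Uncertain inputs drawn from assumed distributions (normalized units).
stiffness = rng.normal(1.0, 0.05, n_samples)   # assumed 5% structural scatter
density = rng.normal(1.0, 0.02, n_samples)     # assumed 2% aerodynamic scatter

def flutter_speed(k, rho):
    # Toy monotone surrogate standing in for the expensive aeroelastic solver.
    return np.sqrt(k / rho)

v = flutter_speed(stiffness, density)          # propagate samples through model
print(f"mean = {v.mean():.3f}, std = {v.std():.3f}, "
      f"5th percentile = {np.percentile(v, 5):.3f}")
```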
NASA Astrophysics Data System (ADS)
Wood, J. H.; Natali, S.
2014-12-01
The Global Decomposition Project (GDP) is a program designed to introduce and educate students and the general public about soil organic matter and decomposition through a standardized protocol for collecting, reporting, and sharing data. This easy-to-use hands-on activity focuses on questions such as "How do environmental conditions control decomposition of organic matter in soil?" and "Why do some areas accumulate organic matter and others do not?" Soil organic matter is important to local ecosystems because it affects soil structure, regulates soil moisture and temperature, and provides energy and nutrients to soil organisms. It is also important globally because it stores a large amount of carbon, and when microbes "eat," or decompose, organic matter, they release greenhouse gases such as carbon dioxide and methane into the atmosphere, which affects the earth's climate. The protocol describes a commonly used method to measure decomposition using a paper made of cellulose, a component of plant cell walls. Participants can receive pre-made cellulose decomposition bags, or make decomposition bags using instructions in the protocol and easily obtained materials (e.g., window screen and lignin-free paper). Individual results will be shared with all participants and the broader public through an online database. We will present decomposition bag results from a research site in Alaskan tundra, as well as from a middle-school-student-led experiment in California. The GDP demonstrates how scientific methods can be extended to educate broader audiences, while at the same time, data collected by students and the public can provide new insight into global patterns of soil decomposition. The GDP provides a pathway for scientists and educators to interact and reach meaningful education and research goals.
Purahong, Witoon; Kapturska, Danuta; Pecyna, Marek J; Schulz, Elke; Schloter, Michael; Buscot, François; Hofrichter, Martin; Krüger, Dirk
2014-01-01
Leaf litter decomposition is the key ecological process that determines the sustainability of managed forest ecosystems; however, very few studies to date have investigated this process with respect to silvicultural management practices. The aims of the present study were to investigate the effects of forest management practices on leaf litter decomposition rates, nutrient dynamics (C, N, Mg, K, Ca, P) and the activity of ligninolytic enzymes. We approached these questions using a 473 day long litterbag experiment. We found that age-class beech and spruce forests (high forest management intensity) had significantly higher decomposition rates and nutrient release (most nutrients) than unmanaged deciduous forest reserves (P<0.05). The site with near-to-nature forest management (low forest management intensity) exhibited no significant differences in litter decomposition rate, C release, lignin decomposition, and C/N, lignin/N and ligninolytic enzyme patterns compared to the unmanaged deciduous forest reserves, but most nutrient dynamics examined in this study were significantly faster under such near-to-nature forest management practices. Analyzing the activities of ligninolytic enzymes provided evidence that different forest system management practices affect litter decomposition by changing microbial enzyme activities, at least over the investigated time frame of 473 days (laccase, P<0.0001; manganese peroxidase (MnP), P = 0.0260). Our results also indicate that lignin decomposition is the rate limiting step in leaf litter decomposition and that MnP is one of the key oxidative enzymes of litter degradation. We demonstrate here that forest system management practices can significantly affect important ecological processes and services such as decomposition and nutrient cycling.
Numeric Modified Adomian Decomposition Method for Power System Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimitrovski, Aleksandar D; Simunovic, Srdjan; Pannala, Sreekanth
This paper investigates the applicability of the numeric Wazwaz-El Sayed modified Adomian Decomposition Method (WES-ADM) for time-domain simulation of power systems. WES-ADM is a numerical method based on a modified Adomian decomposition (ADM) technique for approximating the solution of nonlinear ordinary differential equations; the nonlinear terms in the differential equations are approximated using Adomian polynomials. In this paper, WES-ADM is applied to time-domain simulations of multimachine power systems. The WECC 3-generator, 9-bus system and the IEEE 10-generator, 39-bus system have been used to test the applicability of the approach, and several fault scenarios have been tested. It has been found that the proposed approach is faster than the trapezoidal method with comparable accuracy.
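The Adomian-polynomial machinery is compact enough to sketch. The following illustrates the classical Adomian Decomposition Method (not the Wazwaz-El Sayed modification, and not the power-system application) on the scalar ODE y' = -y^2, y(0) = 1, whose exact solution 1/(1+t) the partial sums reproduce term by term:

```python
import sympy as sp

t, lam = sp.symbols("t lambda")
N = lambda y: y**2                     # the nonlinear term in y' = -N(y)
n_terms = 6

y = [sp.Integer(1)]                    # y0 = y(0)
for n in range(n_terms - 1):
    # Adomian polynomial A_n = (1/n!) d^n/dlam^n N(sum y_i lam^i) at lam = 0.
    series = sum(y[i] * lam**i for i in range(len(y)))
    A_n = sp.diff(N(series), lam, n).subs(lam, 0) / sp.factorial(n)
    # Recursion for this first-order ODE: y_{n+1} = -integral of A_n.
    y.append(sp.integrate(-A_n, (t, 0, t)))

approx = sp.expand(sum(y))
print(approx)    # 1 - t + t**2 - t**3 + t**4 - t**5: the series of 1/(1+t)
```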
Improving EMG based classification of basic hand movements using EMD.
Sapsanis, Christos; Georgoulas, George; Tzes, Anthony; Lymberopoulos, Dimitrios
2013-01-01
This paper presents a pattern recognition approach for the identification of basic hand movements using surface electromyographic (EMG) data. The EMG signal is decomposed using Empirical Mode Decomposition (EMD) into Intrinsic Mode Functions (IMFs) and subsequently a feature extraction stage takes place. Various combinations of feature subsets are tested using a simple linear classifier for the detection task. Our results suggest that the use of EMD can increase the discrimination ability of the conventional feature sets extracted from the raw EMG signal.
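A hedged sketch of that pipeline, using the third-party PyEMD package (its EMD class is callable on a 1D array, per its documentation) and two simple time-domain features per IMF; the signal below is synthetic, not real EMG:

```python
import numpy as np
from PyEMD import EMD   # pip install EMD-signal; API assumed from PyEMD docs

rng = np.random.default_rng(0)
fs = 1000.0
tt = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 50 * tt) * rng.standard_normal(tt.size)  # toy "EMG"

emd = EMD()
imfs = emd(signal)                     # rows are Intrinsic Mode Functions

features = []
for imf in imfs:
    mav = np.mean(np.abs(imf))                     # mean absolute value
    zc = int(np.sum(np.diff(np.signbit(imf))))     # zero-crossing count
    features.extend([mav, zc])
print(len(imfs), "IMFs ->", len(features), "features for the linear classifier")
```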
A system approach to aircraft optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1991-01-01
Mutual couplings among the mathematical models of physical phenomena and parts of a system such as an aircraft complicate the design process because each contemplated design change may have a far reaching consequence throughout the system. Techniques are outlined for computing these influences as system design derivatives useful for both judgemental and formal optimization purposes. The techniques facilitate decomposition of the design process into smaller, more manageable tasks and they form a methodology that can easily fit into existing engineering organizations and incorporate their design tools.
Modular neural networks: a survey.
Auda, G; Kamel, M
1999-04-01
Modular Neural Networks (MNNs) are a rapidly growing field in artificial Neural Network (NN) research. This paper surveys the different motivations for creating MNNs: biological, psychological, hardware, and computational. Then, the general stages of MNN design are outlined and surveyed as well, viz., task decomposition techniques, learning schemes and multi-module decision-making strategies. Advantages and disadvantages of the surveyed methods are pointed out, and an assessment with respect to practical potential is provided. Finally, some general recommendations for future designs are presented.
An iterative requirements specification procedure for decision support systems.
Brookes, C H
1987-08-01
Requirements specification is a key element in a DSS development project because it not only determines what is to be done, it also drives the evolution process. A procedure for requirements elicitation is described that is based on the decomposition of the DSS design task into a number of functions, subfunctions, and operators. It is postulated that the procedure facilitates the building of a DSS that is complete and integrates MIS, modelling and expert system components. Some examples given are drawn from the health administration field.
Morphological decomposition of 2-D binary shapes into convex polygons: a heuristic algorithm.
Xu, J
2001-01-01
In many morphological shape decomposition algorithms, either a shape can only be decomposed into shape components of extremely simple forms or a time-consuming search process is employed to determine a decomposition. In this paper, we present a morphological shape decomposition algorithm that decomposes a two-dimensional (2-D) binary shape into a collection of convex polygonal components. A single convex polygonal approximation for a given image is first identified. This first component is determined incrementally by selecting a sequence of basic shape primitives. These shape primitives are chosen based on shape information extracted from the given shape at different scale levels. Additional shape components are identified recursively from the difference image between the given image and the first component. Simple operations are used to repair certain concavities caused by the set difference operation. The resulting hierarchical structure provides descriptions for the given shape at different detail levels. The experiments show that the decomposition results produced by the algorithm are in good agreement with the natural structures of the given shapes. The computational cost of the algorithm is significantly lower than that of an earlier search-based convex decomposition algorithm. Compared to nonconvex decomposition algorithms, our algorithm allows accurate approximations for the given shapes at low coding costs.
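To convey the flavor of opening-based shape decomposition (this is a generic greedy variant, not the paper's heuristic with multiscale primitives and concavity repair), consider the sketch below, which peels off disk-shaped components and recurses on the set-difference residual:

```python
import numpy as np
from skimage.morphology import disk, binary_erosion, binary_dilation

def decompose(shape, max_radius=30):
    """Greedily extract disk-openable components from a binary shape."""
    components, residual = [], shape.copy()
    for r in range(max_radius, 0, -1):
        se = disk(r)
        eroded = binary_erosion(residual, se)
        if eroded.any():
            comp = binary_dilation(eroded, se)   # opening = erosion + dilation
            components.append(comp)
            residual = residual & ~comp          # set difference for next pass
    return components, residual

img = np.zeros((100, 100), dtype=bool)
img[20:80, 30:70] = True                         # a simple rectangular shape
parts, rest = decompose(img)
print(len(parts), "components,", int(rest.sum()), "residual pixels")
```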
McIntosh, Craig S; Dadour, Ian R; Voss, Sasha C
2017-05-01
The rate of decomposition and insect succession onto decomposing pig carcasses were investigated following burning of carcasses. Ten pig carcasses (40-45 kg) were exposed to insect activity during autumn (March-April) in Western Australia. Five replicates were burnt to a degree described by the Crow-Glassman Scale (CGS) level #2, while five carcasses were left unburnt as controls. Burning greatly accelerated decomposition relative to the unburnt carcasses. Physical modifications following burning, such as skin discolouration, splitting of abdominal tissue and leathery consolidation of skin, eliminated evidence of bloat and altered the microambient temperatures associated with carcasses throughout decomposition. Insect species identified on carcasses were consistent between treatment groups; however, a statistically significant difference in insect succession onto remains was evident between treatments (PERMANOVA F (1, 224) = 14.23, p < 0.01) during an 8-day period that corresponds with the wet stage of decomposition. Differences were noted in the arrival time of late colonisers (Coleoptera) and the development of colonising insects between treatment groups. Differences in the duration of decomposition stages and insect assemblages indicate that burning has an effect on both the rate of decomposition and insect succession. The findings presented here provide baseline data for entomological casework in criminal investigations involving burnt remains.
Henze Bancroft, Leah C; Strigel, Roberta M; Hernando, Diego; Johnson, Kevin M; Kelcz, Frederick; Kijowski, Richard; Block, Walter F
2016-03-01
Chemical shift based fat/water decomposition methods such as IDEAL are frequently used in challenging imaging environments with large B0 inhomogeneity. However, they do not account for the signal modulations introduced by a balanced steady state free precession (bSSFP) acquisition. Here we demonstrate improved performance when the bSSFP frequency response is properly incorporated into the multipeak spectral fat model used in the decomposition process. Balanced SSFP allows for rapid imaging but also introduces a characteristic frequency response featuring periodic nulls and pass bands. Fat spectral components in adjacent pass bands will experience bulk phase offsets and magnitude modulations that change the expected constructive and destructive interference between the fat spectral components. A bSSFP signal model was incorporated into the fat/water decomposition process and used to generate images of a fat phantom, and bilateral breast and knee images in four normal volunteers at 1.5 Tesla. Incorporation of the bSSFP signal model into the decomposition process improved the performance of the fat/water decomposition. Incorporation of this model allows rapid bSSFP imaging sequences to use robust fat/water decomposition methods such as IDEAL. While only one set of imaging parameters was presented, the method is compatible with any field strength or repetition time. © 2015 Wiley Periodicals, Inc.
Perrault, Katelynn A; Stefanuto, Pierre-Hugues; Stuart, Barbara H; Rai, Tapan; Focant, Jean-François; Forbes, Shari L
2015-01-01
Challenges in decomposition odour profiling have led to variation in the documented odour profile by different research groups worldwide. Background subtraction and use of controls are important considerations given the variation introduced by decomposition studies conducted in different geographical environments. The collection of volatile organic compounds (VOCs) from soil beneath decomposing remains is challenging due to the high levels of inherent soil VOCs, further confounded by the use of highly sensitive instrumentation. This study presents a method that provides suitable chromatographic resolution for profiling decomposition odour in soil by comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry using appropriate controls and field blanks. Logarithmic transformation and t-testing of compounds permitted the generation of a compound list of decomposition VOCs in soil. Principal component analysis demonstrated the improved discrimination between experimental and control soil, verifying the value of the data handling method. Data handling procedures have not been well documented in this field, and standardisation would reduce the misidentification of VOCs present in the surrounding environment as decomposition byproducts. Uniformity of data handling and instrumental procedures will reduce analytical variation, increasing confidence in the future when investigating the effect of taphonomic variables on the decomposition VOC profile. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
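The data-handling chain described, log transformation, per-compound t-tests against control soil, then PCA to check discrimination, can be sketched as follows with stand-in random data in place of GCxGC-TOFMS peak tables:

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.decomposition import PCA

# Stand-in peak-area matrices: 10 samples x 40 VOCs per treatment.
rng = np.random.default_rng(1)
grave = rng.lognormal(mean=2.0, sigma=0.5, size=(10, 40))
control = rng.lognormal(mean=1.5, sigma=0.5, size=(10, 40))

# Log transform, then t-test each compound between treatments.
log_grave, log_control = np.log10(grave), np.log10(control)
_, p = ttest_ind(log_grave, log_control, axis=0)
decomposition_vocs = np.where(p < 0.05)[0]   # candidate decomposition VOCs

# PCA scores to visualize separation between experimental and control soil.
scores = PCA(n_components=2).fit_transform(np.vstack([log_grave, log_control]))
print(len(decomposition_vocs), "significant compounds;", scores.shape)
```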
USDA-ARS?s Scientific Manuscript database
To improve stand establishment in high crop residue situations, the utility of fertilizer to stimulate microbial decomposition of residue has been debated. Field experiments assessed winter wheat (Triticum aestivum) straw decomposition under different fertilizer rates and application timings at thre...
Behavioral networks as a model for intelligent agents
NASA Technical Reports Server (NTRS)
Sliwa, Nancy E.
1990-01-01
On-going work at NASA Langley Research Center in the development and demonstration of a paradigm called behavioral networks as an architecture for intelligent agents is described. This work focuses on the need to identify a methodology for smoothly integrating the characteristics of low-level robotic behavior, including actuation and sensing, with intelligent activities such as planning, scheduling, and learning. This work assumes that all these needs can be met within a single methodology, and attempts to formalize this methodology in a connectionist architecture called behavioral networks. Behavioral networks are networks of task processes arranged in a task decomposition hierarchy. These processes are connected by both command/feedback data flow, and by the forward and reverse propagation of weights which measure the dynamic utility of actions and beliefs.
Miller, Kai J; Honey, Christopher J; Hermes, Dora; Rao, Rajesh PN; den Nijs, Marcel; Ojemann, Jeffrey G
2013-01-01
We illustrate a general principle of electrical potential measurements from the surface of the cerebral cortex by revisiting and reanalyzing experimental work from the visual, language and motor systems. A naïve decomposition technique of electrocorticographic power spectral measurements reveals that broadband spectral changes reliably track task engagement. These broadband changes are shown to be a generic correlate of local cortical function across a variety of brain areas and behavioral tasks. Furthermore, they fit a power-law form that is consistent with simple models of the dendritic integration of asynchronous local population firing. Because broadband spectral changes covary with diverse perceptual and behavioral states on the timescale of 20–50 ms, they provide a powerful and widely applicable experimental tool. PMID:24018305
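A hedged sketch of the power-law fit mentioned above: estimate the spectral exponent chi in P(f) ~ f^(-chi) by linear regression in log-log coordinates, here on synthetic integrated noise (expected exponent near -2) rather than cortical recordings:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 1000.0
white = rng.standard_normal(60_000)
x = np.cumsum(white)                   # integrated noise ~ 1/f**2 spectrum

f, pxx = welch(x, fs=fs, nperseg=4096)
band = (f > 10) & (f < 200)            # broadband range away from DC and peaks
chi, intercept = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)
print(f"fitted spectral exponent: {chi:.2f} (expect about -2 here)")
```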
Initial insights into bacterial succession during human decomposition.
Hyde, Embriette R; Haarmann, Daniel P; Petrosino, Joseph F; Lynne, Aaron M; Bucheli, Sibyl R
2015-05-01
Decomposition is a dynamic ecological process dependent upon many factors such as environment, climate, and bacterial, insect, and vertebrate activity in addition to intrinsic properties inherent to individual cadavers. Although largely attributed to microbial metabolism, very little is known about the bacterial basis of human decomposition. To assess the change in bacterial community structure through time, bacterial samples were collected from several sites across two cadavers placed outdoors to decompose and analyzed through 454 pyrosequencing and analysis of variable regions 3-5 of the bacterial 16S ribosomal RNA (16S rRNA) gene. Each cadaver was characterized by a change in bacterial community structure for all sites sampled as time, and decomposition, progressed. Bacterial community structure was variable at placement and before purge for all body sites. At bloat and purge and until tissues began to dehydrate or were removed, bacteria associated with flies, such as Ignatzschineria and Wohlfahrtimonas, were common. After dehydration and skeletonization, bacteria associated with soil, such as Acinetobacter, were common at most body sites sampled. However, more cadavers sampled through multiple seasons are necessary to assess major trends in bacterial succession.
A Novel Multilevel-SVD Method to Improve Multistep Ahead Forecasting in Traffic Accidents Domain.
Barba, Lida; Rodríguez, Nibaldo
2017-01-01
A novel method is proposed here for decomposing a nonstationary time series into components of low and high frequency. The method is based on Multilevel Singular Value Decomposition (MSVD) of a Hankel matrix. The decomposition is used to improve the forecasting accuracy of Multiple Input Multiple Output (MIMO) linear and nonlinear models. Three time series from the traffic accidents domain are used; they represent the number of persons injured in traffic accidents in Santiago, Chile. The data were collected continuously by the Chilean Police and sampled weekly from 2000:1 to 2014:12. The performance of MSVD is compared with the decomposition into components of low and high frequency by a commonly accepted method based on the Stationary Wavelet Transform (SWT). SWT in conjunction with an Autoregressive model (SWT + MIMO-AR) and SWT in conjunction with an Autoregressive Neural Network (SWT + MIMO-ANN) were evaluated. The empirical results show that the best accuracy was achieved by the forecasting model based on the proposed decomposition method, MSVD, in comparison with the forecasting models based on SWT.
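A minimal SSA-style sketch of the idea behind MSVD, embed the series in a Hankel matrix, take the SVD, and rebuild low- and high-frequency components by anti-diagonal averaging, follows; the window length and the number of retained components are illustrative choices, not the paper's settings:

```python
import numpy as np
from scipy.linalg import hankel, svd

def hankel_split(x, window, n_low=1):
    """Split a series into a smooth (low-frequency) part and its residual."""
    H = hankel(x[:window], x[window - 1:])        # trajectory matrix
    U, s, Vt = svd(H, full_matrices=False)

    def reconstruct(idx):
        M = (U[:, idx] * s[idx]) @ Vt[idx, :]
        # Average anti-diagonals back into a series of the original length.
        return np.array([np.mean(M[::-1, :].diagonal(k))
                         for k in range(-M.shape[0] + 1, M.shape[1])])

    low = reconstruct(np.arange(n_low))           # dominant, smooth part
    return low, x - low                           # low- and high-frequency

t = np.linspace(0, 10, 520)                       # ~10 "years" of weekly data
x = 50 + 10 * np.sin(2 * np.pi * t) \
    + np.random.default_rng(2).normal(0, 3, t.size)
low, high = hankel_split(x, window=52, n_low=3)
print(low.shape, high.shape)
```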
Filtering and left ventricle segmentation of the fetal heart in ultrasound images
NASA Astrophysics Data System (ADS)
Vargas-Quintero, Lorena; Escalante-Ramírez, Boris
2013-11-01
In this paper, we propose to use filtering methods and a segmentation algorithm for the analysis of the fetal heart in ultrasound images. Since speckle noise makes the analysis of ultrasound images difficult, filtering becomes a useful task in these types of applications. The filtering techniques considered in this work assume that the speckle noise is a random variable with a Rayleigh distribution. We use two multiresolution methods: one based on wavelet decomposition and another based on the Hermite transform. The filtering process is used as a way to strengthen the performance of the segmentation tasks. For the wavelet-based approach, a Bayesian estimator at the subband level is employed for pixel classification. The Hermite method computes a mask to find those pixels that are corrupted by speckle. For segmentation, we selected a method based on a deformable model, or "snake", to evaluate the influence of the filtering techniques on the segmentation of the left ventricle in fetal echocardiographic images.
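A hedged sketch of wavelet-domain despeckling in the spirit of the approach described (plain soft thresholding of detail subbands stands in for the paper's Bayesian subband estimator), using the PyWavelets package:

```python
import numpy as np
import pywt

# Toy speckle-like image: Rayleigh-distributed values stand in for ultrasound.
rng = np.random.default_rng(0)
img = rng.rayleigh(scale=1.0, size=(128, 128))

# Decompose, soft-threshold the detail subbands, reconstruct.
coeffs = pywt.wavedec2(img, wavelet="db2", level=3)
thresholded = [coeffs[0]] + [
    tuple(pywt.threshold(d, value=0.5, mode="soft") for d in detail)
    for detail in coeffs[1:]
]
denoised = pywt.waverec2(thresholded, wavelet="db2")
print(denoised.shape)
```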
Bullinaria, John A; Levy, Joseph P
2012-09-01
In a previous article, we presented a systematic computational study of the extraction of semantic representations from the word-word co-occurrence statistics of large text corpora. The conclusion was that semantic vectors of pointwise mutual information values from very small co-occurrence windows, together with a cosine distance measure, consistently resulted in the best representations across a range of psychologically relevant semantic tasks. This article extends that study by investigating the use of three further factors--namely, the application of stop-lists, word stemming, and dimensionality reduction using singular value decomposition (SVD)--that have been used to provide improved performance elsewhere. It also introduces an additional semantic task and explores the advantages of using a much larger corpus. This leads to the discovery and analysis of improved SVD-based methods for generating semantic representations (that provide new state-of-the-art performance on a standard TOEFL task) and the identification and discussion of problems and misleading results that can arise without a full systematic study.
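The core pipeline studied, co-occurrence counts to pointwise mutual information to SVD, fits in a short sketch. The positive-PMI clipping below is a common variant, and the tiny count matrix is made up:

```python
import numpy as np

# Made-up word-word co-occurrence counts: rows are target words, columns
# are context words (real corpora yield matrices with many thousands of rows).
counts = np.array([[10.0, 2.0, 0.0],
                   [2.0, 8.0, 1.0],
                   [0.0, 1.0, 6.0]])

total = counts.sum()
p_joint = counts / total
p_w = counts.sum(axis=1, keepdims=True) / total
p_c = counts.sum(axis=0, keepdims=True) / total

with np.errstate(divide="ignore"):
    pmi = np.log(p_joint / (p_w * p_c))
ppmi = np.maximum(pmi, 0.0)              # PPMI: negative values clipped to 0

U, s, Vt = np.linalg.svd(ppmi)
k = 2
vectors = U[:, :k] * s[:k]               # k-dimensional semantic vectors
print(vectors)
```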
Environmental Fate of Emamectin Benzoate After Tree Micro Injection of Horse Chestnut Trees
Burkhard, Rene; Binz, Heinz; Roux, Christian A; Brunner, Matthias; Ruesch, Othmar; Wyss, Peter
2015-01-01
Emamectin benzoate, an insecticide derived from the avermectin family of natural products, has a unique translocation behavior in trees when applied by tree micro injection (TMI), which can result in protection from insect pests (foliar and borers) for several years. Active ingredient imported into leaves was measured at the end of season in the fallen leaves of treated horse chestnut (Aesculus hippocastanum) trees. The dissipation of emamectin benzoate in these leaves seems to be biphasic and depends on the decomposition of the leaf. In compost piles, where decomposition of leaves was fastest, a cumulative emamectin benzoate degradation half-life of 20 d was measured. In leaves immersed in water, where decomposition was much slower, the degradation half-life was 94 d, and in leaves left on the ground in contact with soil, where decomposition was slowest, the degradation half-life was 212 d. The biphasic decline and the correlation with leaf decomposition might be attributed to an extensive sorption of emamectin benzoate residues to leaf macromolecules. This may also explain why earthworms ingesting leaves from injected trees take up very little emamectin benzoate and excrete it with the feces. Furthermore, no emamectin benzoate was found in water containing decomposing leaves from injected trees. It is concluded that emamectin benzoate present in abscised leaves from horse chestnut trees injected with the insecticide is not available to nontarget organisms present in soil or water bodies. Environ Toxicol Chem 2014;9999:1–6. © 2014 The Authors. Published 2014 SETAC PMID:25363584
Frelat, Romain; Lindegren, Martin; Denker, Tim Spaanheden; Floeter, Jens; Fock, Heino O; Sguotti, Camilla; Stäbler, Moritz; Otto, Saskia A; Möllmann, Christian
2017-01-01
Understanding spatio-temporal dynamics of biotic communities containing large numbers of species is crucial to guide ecosystem management and conservation efforts. However, traditional approaches usually focus on studying community dynamics either in space or in time, often failing to fully account for interlinked spatio-temporal changes. In this study, we demonstrate and promote the use of tensor decomposition for disentangling spatio-temporal community dynamics in long-term monitoring data. Tensor decomposition builds on traditional multivariate statistics (e.g. Principal Component Analysis) but extends it to multiple dimensions. This extension allows for the synchronized study of multiple ecological variables measured repeatedly in time and space. We applied this comprehensive approach to explore the spatio-temporal dynamics of 65 demersal fish species in the North Sea, a marine ecosystem strongly altered by human activities and climate change. Our case study demonstrates how tensor decomposition can successfully (i) characterize the main spatio-temporal patterns and trends in species abundances, (ii) identify sub-communities of species that share similar spatial distribution and temporal dynamics, and (iii) reveal external drivers of change. Our results revealed a strong spatial structure in fish assemblages persistent over time and linked to differences in depth, primary production and seasonality. Furthermore, we simultaneously characterized important temporal distribution changes related to the low frequency temperature variability inherent in the Atlantic Multidecadal Oscillation. Finally, we identified six major sub-communities composed of species sharing similar spatial distribution patterns and temporal dynamics. Our case study demonstrates the application and benefits of using tensor decomposition for studying complex community data sets usually derived from large-scale monitoring programs.
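A hedged sketch of the approach using the third-party tensorly package (its parafac returns a weights/factors pair in recent versions): a random species-by-site-by-year array stands in for the North Sea survey data, and each rank-one component couples a species group to a spatial footprint and a temporal trend:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac  # assumes tensorly >= 0.5 API

# Stand-in abundance tensor: 65 species x 30 stations x 25 years.
rng = np.random.default_rng(0)
abundance = tl.tensor(rng.lognormal(size=(65, 30, 25)))

weights, factors = parafac(abundance, rank=3, normalize_factors=True)
species_modes, site_modes, year_modes = factors
print(species_modes.shape, site_modes.shape, year_modes.shape)
# Each rank-one component: a species group with a spatial pattern and a trend.
```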
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2004-01-01
A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.
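The second step described, the Hilbert transform of each IMF, is standard enough to sketch with SciPy: the analytic signal gives instantaneous amplitude and frequency, the coordinates of the Hilbert spectrum. A chirp stands in for an IMF here:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
imf = np.sin(2 * np.pi * (5 * t + 2 * t**2))     # frequency sweeps 5 -> 13 Hz

analytic = hilbert(imf)                          # analytic signal via FFT
amplitude = np.abs(analytic)                     # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) / (2 * np.pi) * fs    # instantaneous frequency (Hz)
print(inst_freq[100], inst_freq[-100])           # near 5.4 Hz, then near 12.6 Hz
```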
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2002-01-01
A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.
Quantitative analysis of microbial biomass yield in aerobic bioreactor.
Watanabe, Osamu; Isoda, Satoru
2013-12-01
We have studied an integrated model of reaction rate equations with a thermal energy balance in an aerobic bioreactor for food waste decomposition, and showed that the integrated model is capable both of monitoring microbial activity in real time and of analyzing biodegradation kinetics and thermal-hydrodynamic properties. Concerning microbial metabolism, it is known that balancing catabolic reactions with anabolic reactions in terms of energy and electron flow provides stoichiometric metabolic reactions and enables the estimation of microbial biomass yield (stoichiometric reaction model). We have studied a method for estimating real-time microbial biomass yield in the bioreactor during food waste decomposition by combining the integrated model with the stoichiometric reaction model. As a result, it was found that the time course of microbial biomass yield in the bioreactor during decomposition can be evaluated from the operational data of the bioreactor (weight of input food waste and bed temperature) using the combined model. The combined model can be applied to manage food waste decomposition, not only for controlling system operation to keep microbial activity stable, but also for producing value-added products such as compost under optimum conditions. Copyright © 2013 The Research Centre for Eco-Environmental Sciences, Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved.
Campos, Xochi; Germino, Matthew; de Graaff, Marie-Anne
2017-01-01
Aims: Changing precipitation regimes in semiarid ecosystems will affect the balance of soil carbon (C) input and release, but the net effect on soil C storage is unclear. We asked how changes in the amount and timing of precipitation affect litter decomposition and soil C stabilization in semiarid ecosystems. Methods: The study took place at a long-term (18 years) ecohydrology experiment located in Idaho. Precipitation treatments consisted of a doubling of annual precipitation (+200 mm) added either in the cold-dormant season or in the growing season. Experimental plots were planted with big sagebrush (Artemisia tridentata), or with crested wheatgrass (Agropyron cristatum). We quantified decomposition of sagebrush leaf litter, and we assessed organic soil C (SOC) in aggregates, and silt and clay fractions. Results: We found that: (1) increased precipitation applied in the growing season consistently enhanced decomposition rates relative to the ambient treatment, and (2) precipitation applied in the dormant season enhanced soil C stabilization. Conclusions: These data indicate that prolonged increases in precipitation can promote soil C storage in semiarid ecosystems, but only if these increases happen at times of the year when conditions allow precipitation to promote plant C input rates to soil.
NASA Technical Reports Server (NTRS)
Shen, Zheng (Inventor); Huang, Norden Eh (Inventor)
2003-01-01
A computer implemented physical signal analysis method includes two essential steps and the associated presentation techniques of the results. All the steps exist only in a computer: there are no analytic expressions resulting from the method. The first step is a computer implemented Empirical Mode Decomposition to extract a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals based on local extrema and curvature extrema. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform. The final result is the Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum.
NASA Astrophysics Data System (ADS)
Yokozawa, M.
2017-12-01
Attention has been paid to agricultural fields, where ecosystem carbon exchange can be regulated by water management and residue treatments. However, little is known about the dynamic responses of these ecosystems to environmental change. In this study, focusing on a paddy field, where CO2 emissions from microbial decomposition of organic matter are suppressed under flooded conditions during the rice growing season (CH4 being emitted instead) and CO2 is emitted during the fallow season after harvest, we examined the responses of ecosystem carbon exchange. We conducted a model-data fusion analysis to examine the response of cropland-atmosphere carbon exchange to environmental variation. The model consists of two sub-models: a paddy rice growth sub-model and a soil decomposition sub-model. The crop growth sub-model mimics the rice growth processes, including the formation of reproductive organs as well as leaf expansion. The soil decomposition sub-model simulates the decomposition of soil organic carbon. Assimilating data on the time changes in CO2 flux measured by the eddy covariance method, rice plant biomass, LAI and the final yield into the model, the parameters were calibrated using a stochastic optimization algorithm with a particle filter. The particle filter, one of the Monte Carlo filters, enables us to evaluate time changes in parameters based on the data observed up to that time and to make predictions of the system. Iterative filtering and prediction with changing parameters and/or boundary conditions yield the time changes in the parameters governing crop production as well as carbon exchange. In this study, we focused on the parameters related to crop production and soil carbon storage. As a result, the calibrated model with estimated parameters could accurately predict the NEE flux in the subsequent years. The temperature sensitivities, denoted by Q10, of the decomposition rate of soil organic carbon (SOC) were estimated as 1.4 for the non-cultivation period and 2.9 for the cultivation period (submerged soil in the flooding season). This suggests that the response of ecosystem carbon exchange differs because the SOC decomposition process is sensitive to environmental variation during the paddy rice cultivation period.
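The reported Q10 values translate into a simple multiplicative rate modifier on SOC decomposition. A sketch using the abstract's fitted values (2.9 during the flooded cultivation period, 1.4 otherwise) with a toy seasonal temperature series; the reference temperature and season boundaries are illustrative assumptions:

```python
import numpy as np

def rate_modifier(temp_c, q10, t_ref=20.0):
    # Multiplicative factor on the base decomposition rate at t_ref.
    return q10 ** ((temp_c - t_ref) / 10.0)

days = np.arange(365)
daily_temp = 15 + 10 * np.sin(2 * np.pi * days / 365)   # toy climate (deg C)
cultivated = (days >= 120) & (days < 270)               # assumed rice season

q10 = np.where(cultivated, 2.9, 1.4)                    # values from the study
relative_rate = rate_modifier(daily_temp, q10)
print("mean rate multiplier over the year:", round(float(relative_rate.mean()), 2))
```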
NASA Astrophysics Data System (ADS)
Zangarini, Sara; Cattaneo, Cristina; Trombino, Luca
2014-05-01
The role played by soil scientists is growing in importance in modern forensic science, in particular when buried human remains that are strongly decomposed or skeletonized are found in different environmental settings. An interdisciplinary team of earth science and legal medicine researchers from the University of Milan is working on several sets of experimental burials of pigs in different soil types and for different burial times, in order to obtain new evidence on environmental responses to burial, focusing specifically on geopedological and micropedological aspects. The present work aims at the cross characterization of bone tissue in buried remains by micromorphology (petrographic microscope) and ultramicroscopy (SEM), in order to describe bone alteration pathways due both to decomposition and to permanence in soil. These methods allow identification, in the tissues of the analysed bones, of: - Unusual concentrations of metal oxides (i.e., Fe, Mn), in the form of violet-blue colorations (in XPL), which seem to be related to the chemical conditions in the burial area; their presence could help discriminate permanence in soil from decomposition in a different environment. - Magnesium phosphate (i.e., Mg3(PO4)2) crystallizations, usually noticed in bones buried for 7 to 103 weeks; their presence seems to be related to the decomposition both of the bones themselves and of the soft tissues. - Significant sulphur levels (i.e., SO3) in bones buried for over 7 weeks, which seem to be related to the transport and fixation of fluids from soft-tissue decomposition. These results point out that micromorphological techniques coupled with spatially resolved chemical analyses allow identifying both indicators of the permanence of the remains in the soil (i.e., metal oxide concentrations) and time-dependent markers of decomposition (i.e., significant sulphur levels and magnesium phosphate), in order to determine the PMI (post-mortem interval) and TSB (time since burial). Further studies and new experiments are in progress to better clarify the bone alteration pathways in different skeletal districts and in different kinds of soils.
Yin, Xiao-Li; Gu, Hui-Wen; Liu, Xiao-Lu; Zhang, Shan-Hui; Wu, Hai-Long
2018-03-05
Multiway calibration in combination with spectroscopic techniques is an attractive tool for online or real-time monitoring of target analyte(s) in complex samples. However, how to choose a suitable multiway calibration method for the resolution of spectroscopic-kinetic data is a challenging problem in practical applications. In this work, for the first time, three-way and four-way fluorescence-kinetic data arrays were generated during the real-time monitoring of the hydrolysis of irinotecan (CPT-11) in human plasma by excitation-emission matrix fluorescence. Alternating normalization-weighted error (ANWE) and alternating penalty trilinear decomposition (APTLD) were used as three-way calibration for the decomposition of the three-way kinetic data array, whereas alternating weighted residual constraint quadrilinear decomposition (AWRCQLD) and alternating penalty quadrilinear decomposition (APQLD) were applied as four-way calibration to the four-way kinetic data array. The quantitative results of the two kinds of calibration models were fully compared in terms of predicted real-time concentrations, spiked recoveries of initial concentration, and analytical figures of merit. The comparison demonstrated that both three-way and four-way calibration models can achieve real-time quantitative analysis of the hydrolysis of CPT-11 in human plasma under certain conditions; however, each possesses critical advantages and shortcomings during dynamic analysis. The conclusions obtained in this paper provide helpful guidance for the reasonable selection of multiway calibration models for real-time quantitative analysis of target analyte(s) in complex dynamic systems. Copyright © 2017 Elsevier B.V. All rights reserved.
Cai, Andong; Liang, Guopeng; Zhang, Xubo; Zhang, Wenju; Li, Ling; Rui, Yichao; Xu, Minggang; Luo, Yiqi
2018-05-01
Understanding drivers of straw decomposition is essential for adopting appropriate management practice to improve soil fertility and promote carbon (C) sequestration in agricultural systems. However, predicting straw decomposition and characteristics is difficult because of the interactions between many factors related to straw properties, soil properties, and climate, especially under future climate change conditions. This study investigated the driving factors of straw decomposition of six types of crop straw including wheat, maize, rice, soybean, rape, and other straw by synthesizing 1642 paired data points from 98 published papers at spatial and temporal scales across China. All data derived from field experiments using litter bags over twelve years. Overall, despite large differences in climatic and soil properties, the remaining straw carbon (C, %) could be accurately represented by a three-exponent equation with thermal time (accumulative temperature). The lignin/nitrogen and lignin/phosphorus ratios of straw can be used to define the size of the labile, intermediate, and recalcitrant C pools. The remaining C for an individual type of straw in the mild-temperature zone was higher than that in the warm-temperature and subtropical zones within one calendar year. The remaining straw C after one thermal year was 40.28%, 37.97%, 37.77%, 34.71%, 30.87%, and 27.99% for rice, soybean, rape, wheat, maize, and other straw, respectively. Soil available nitrogen and phosphorus influenced the remaining straw C at different decomposition stages. For one calendar year, the total amount of remaining straw C was estimated to be 29.41 Tg, and a future temperature increase of 2 °C could reduce the remaining straw C by 1.78 Tg. These findings confirmed that long-term straw decomposition could be mainly driven by temperature and straw quality, and quantitatively predicted by thermal time with the three-exponent equation for a wide array of straw types at spatial and temporal scales in the agro-ecosystems of China. Copyright © 2018 Elsevier B.V. All rights reserved.
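The three-exponent form referred to above is a three-pool decay in thermal time. A hedged sketch with made-up data points (the pool sizes and rate constants below are not the paper's estimates):

```python
import numpy as np
from scipy.optimize import curve_fit

def three_pool(tt, a, b, k1, k2, k3):
    # Labile (a), intermediate (b), and recalcitrant pools summing to 100%.
    c = 100.0 - a - b
    return a * np.exp(-k1 * tt) + b * np.exp(-k2 * tt) + c * np.exp(-k3 * tt)

tt = np.array([0, 500, 1000, 2000, 4000, 8000.0])   # thermal time (deg C d)
remaining = np.array([100, 72, 58, 45, 38, 33.0])   # remaining straw C (%)

p0 = [40, 30, 2e-3, 3e-4, 1e-5]                     # illustrative initial guess
params, _ = curve_fit(three_pool, tt, remaining, p0=p0, maxfev=20_000)
print(dict(zip("a b k1 k2 k3".split(), np.round(params, 5))))
```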
Characteristics of root decomposition in a tropical rainforest in Sarawak, Malaysia
NASA Astrophysics Data System (ADS)
Ohashi, Mizue; Makita, Naoki; Katayama, Ayumi; Kume, Tomonori; Matsumoto, Kazuho; Khoon Kho, L.
2016-04-01
Woody roots play a significant role in forest carbon cycling, as up to 60 percent of tree photosynthetic production can be allocated belowground. Root decay is one of the main processes of soil C dynamics and potentially relates to soil C sequestration. However, much less attention has been paid to root litter decomposition than to leaf litter, because roots are hidden from view. Previous studies have revealed that the physico-chemical quality of roots, climate, and soil organisms significantly affect root decomposition. However, patterns and mechanisms of root decomposition are still poorly understood because of the high variability of root properties, field environments, and potential decomposers. For example, root size is likely a factor controlling decomposition rates, but general understanding of the difference between coarse and fine root decomposition is still lacking. Also, root decomposition is known to be performed by soil animals, fungi, and bacteria, but their relative importance is poorly understood. In this study, therefore, we aimed to characterize root decomposition in a tropical rainforest in Sarawak, Malaysia, and to clarify the impact of soil organisms and root size on root litter decomposition. We buried soil cores with fine and coarse root litter bags in Lambir Hills National Park. Three different types of soil cores were prepared, covered by 1.5 cm plastic mesh, a root-impermeable sheet (50 µm), or a fungus-impermeable sheet (1 µm). The soil cores were buried in February 2013 and collected four times: 134, 226, 786, and 1151 days after installation. We found that nearly 80 percent of the coarse root litter was decomposed after two years, whereas only 60 percent of the fine root litter was decomposed. Our results also showed significantly different decomposition between core types, suggesting differing contributions of soil organisms to the decomposition process.
Microbial community assembly and metabolic function during mammalian corpse decomposition
Metcalf, Jessica L; Xu, Zhenjiang Zech; Weiss, Sophie; Lax, Simon; Van Treuren, Will; Hyde, Embriette R.; Song, Se Jin; Amir, Amnon; Larsen, Peter; Sangwan, Naseer; Haarmann, Daniel; Humphrey, Greg C; Ackermann, Gail; Thompson, Luke R; Lauber, Christian; Bibat, Alexander; Nicholas, Catherine; Gebert, Matthew J; Petrosino, Joseph F; Reed, Sasha C.; Gilbert, Jack A; Lynne, Aaron M; Bucheli, Sibyl R; Carter, David O; Knight, Rob
2016-01-01
Vertebrate corpse decomposition provides an important stage in nutrient cycling in most terrestrial habitats, yet microbially mediated processes are poorly understood. Here we combine deep microbial community characterization, community-level metabolic reconstruction, and soil biogeochemical assessment to understand the principles governing microbial community assembly during decomposition of mouse and human corpses on different soil substrates. We find a suite of bacterial and fungal groups that contribute to nitrogen cycling and a reproducible network of decomposers that emerge on predictable time scales. Our results show that this decomposer community is derived primarily from bulk soil, but key decomposers are ubiquitous in low abundance. Soil type was not a dominant factor driving community development, and the process of decomposition is sufficiently reproducible to offer new opportunities for forensic investigations.
A novel ECG data compression method based on adaptive Fourier decomposition
NASA Astrophysics Data System (ADS)
Tan, Chunyu; Zhang, Liming
2017-12-01
This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach, which can decompose a signal with fast convergence, and hence reconstruct ECG signals with high fidelity. Unlike most of the high performance algorithms, our method does not make use of any preprocessing operation before compression. Huffman coding is employed for further compression. Validated with 48 ECG recordings of MIT-BIH arrhythmia database, the proposed method achieves the compression ratio (CR) of 35.53 and the percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition times and a robust PRD-CR relationship. The results demonstrate that the proposed method has a good performance compared with the state-of-the-art ECG compressors.
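For context, the two figures of merit quoted above can be computed as follows; this sketch shows only the standard CR and PRD definitions, not the AFD compressor itself, and the signal and bit counts are illustrative.

    # Standard evaluation metrics for ECG compressors: compression ratio (CR)
    # and percentage root-mean-square difference (PRD).
    import numpy as np

    def compression_ratio(n_bits_original, n_bits_compressed):
        return n_bits_original / n_bits_compressed

    def prd(x, x_rec):
        """PRD (%) between original x and reconstruction x_rec."""
        x = np.asarray(x, float)
        x_rec = np.asarray(x_rec, float)
        return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

    # Toy usage with a noisy reconstruction of a synthetic beat-like signal
    t = np.linspace(0, 1, 360)
    x = np.sin(2 * np.pi * 5 * t) * np.exp(-20 * (t - 0.5) ** 2)
    x_rec = x + 0.01 * np.random.default_rng(0).standard_normal(t.size)
    print(f"PRD = {prd(x, x_rec):.2f}%")
    print("CR (illustrative bit counts):", compression_ratio(360 * 11, 1000))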
Microbial community assembly and metabolic function during mammalian corpse decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Metcalf, J. L.; Xu, Z. Z.; Weiss, S.
2015-12-10
Vertebrate corpse decomposition provides an important stage in nutrient cycling in most terrestrial habitats, yet microbially mediated processes are poorly understood. Here we combine deep microbial community characterization, community-level metabolic reconstruction, and soil biogeochemical assessment to understand the principles governing microbial community assembly during decomposition of mouse and human corpses on different soil substrates. We find a suite of bacterial and fungal groups that contribute to nitrogen cycling and a reproducible network of decomposers that emerge on predictable time scales. Our results show that this decomposer community is derived primarily from bulk soil, but key decomposers are ubiquitous in low abundance. Soil type was not a dominant factor driving community development, and the process of decomposition is sufficiently reproducible to offer new opportunities for forensic investigations.
Mode decomposition and Lagrangian structures of the flow dynamics in orbitally shaken bioreactors
NASA Astrophysics Data System (ADS)
Weheliye, Weheliye Hashi; Cagney, Neil; Rodriguez, Gregorio; Micheletti, Martina; Ducci, Andrea
2018-03-01
In this study, two mode decomposition techniques were applied and compared to assess the flow dynamics in an orbital shaken bioreactor (OSB) of cylindrical geometry and flat bottom: proper orthogonal decomposition and dynamic mode decomposition. Particle Image Velocimetry (PIV) experiments were carried out for different operating conditions including fluid height, h, and shaker rotational speed, N. A detailed flow analysis is provided for conditions when the fluid and vessel motions are in-phase (Fr = 0.23) and out-of-phase (Fr = 0.47). PIV measurements in vertical and horizontal planes were combined to reconstruct low order models of the full 3D flow and to determine its Finite-Time Lyapunov Exponent (FTLE) within OSBs. The combined results from the mode decomposition and the FTLE fields provide a useful insight into the flow dynamics and Lagrangian coherent structures in OSBs and offer a valuable tool to optimise bioprocess design in terms of mixing and cell suspension.
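As a minimal illustration of the first of the two techniques, snapshot POD of PIV-like data reduces to an SVD of the mean-subtracted snapshot matrix; the random data below stand in for real velocity fields, and the sizes are arbitrary.

    # Minimal snapshot POD via SVD: each column of the snapshot matrix is one
    # flattened, mean-subtracted velocity field.
    import numpy as np

    rng = np.random.default_rng(1)
    n_points, n_snapshots = 2000, 150      # grid points x time steps (illustrative)
    snapshots = rng.standard_normal((n_points, n_snapshots))

    mean_flow = snapshots.mean(axis=1, keepdims=True)
    fluct = snapshots - mean_flow          # POD acts on the fluctuating field

    # Columns of U are spatial POD modes; s**2 ranks their energy content;
    # rows of (diag(s) @ Vt) are the modes' temporal coefficients.
    U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
    energy = s**2 / np.sum(s**2)
    print("energy captured by first 5 modes:", energy[:5].sum())

    # A low-order model keeps only the leading r modes:
    r = 5
    low_order = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :] + mean_flow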
Humphreys, Michael K; Panacek, Edward; Green, William; Albers, Elizabeth
2013-03-01
Protocols for determining postmortem submersion interval (PMSI) have long been problematic for forensic investigators due to the wide variety of factors affecting the rate of decomposition of submerged carrion. Likewise, it has been equally problematic for researchers to develop standardized experimental protocols to monitor underwater decomposition without artificially affecting the decomposition rate. This study compares two experimental protocols: (i) underwater in situ evaluation with photographic documentation utilizing the Heaton et al. total aquatic decomposition (TAD) score and (ii) weighing the carrion before and after submersion. Complete forensic necropsies were performed as a control. Perinatal piglets were used as human analogs. The results of this study indicate that in order to objectively measure decomposition over time, the human analog should be examined at depth using the TAD scoring system rather than utilizing a carrion weight evaluation. The acquired TAD score can be used to calculate an approximate PMSI. © 2012 American Academy of Forensic Sciences.
NASA Astrophysics Data System (ADS)
Bo, Zheng; Hao, Han; Yang, Shiling; Zhu, Jinhui; Yan, Jianhua; Cen, Kefa
2018-04-01
This work reports the catalytic performance of vertically-oriented graphenes (VGs) supported manganese oxide catalysts toward toluene decomposition in a post plasma-catalysis (PPC) system. Dense networks of VGs were synthesized on carbon paper (CP) via a microwave plasma-enhanced chemical vapor deposition (PECVD) method. A constant-current approach was applied in a conventional three-electrode electrochemical system for the electrodeposition of Mn3O4 catalysts on VGs. The as-obtained catalysts were characterized and investigated for ozone conversion and toluene decomposition in a PPC system. Experimental results show that the Mn3O4 catalyst loading mass on VG-coated CP was significantly higher than that on pristine CP (almost 1.8 times for an electrodeposition current of 10 mA). Moreover, the decoration of VGs led to both enhanced catalytic activity for ozone conversion and increased toluene decomposition, exhibiting great promise in PPC systems for the effective decomposition of volatile organic compounds.
3D quantitative analysis of early decomposition changes of the human face.
Caplova, Zuzana; Gibelli, Daniele Maria; Poppa, Pasquale; Cummaudo, Marco; Obertova, Zuzana; Sforza, Chiarella; Cattaneo, Cristina
2018-03-01
Decomposition of the human body and human face is influenced, among other things, by environmental conditions. The early decomposition changes that modify the appearance of the face may hamper the recognition and identification of the deceased. Quantitative assessment of those changes may provide important information for forensic identification. This report presents a pilot 3D quantitative approach of tracking early decomposition changes of a single cadaver in controlled environmental conditions by summarizing the change with weekly morphological descriptions. The root mean square (RMS) value was used to evaluate the changes of the face after death. The results showed a high correlation (r = 0.863) between the measured RMS and the time since death. RMS values of each scan are presented, as well as the average weekly RMS values. The quantification of decomposition changes could improve the accuracy of antemortem facial approximation and potentially could allow the direct comparisons of antemortem and postmortem 3D scans.
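A hedged sketch of the quantitative step described above: the RMS of point-to-point distances between a baseline scan and a later scan (assumed already registered and in point correspondence), followed by correlation of weekly RMS values against elapsed time; all numbers are illustrative, not the study's data.

    # RMS distance between two registered 3D scans and its correlation with time.
    import numpy as np
    from scipy.stats import pearsonr

    def rms_distance(scan_a, scan_b):
        """RMS of point-to-point distances between two registered (N x 3) scans."""
        d = np.linalg.norm(scan_a - scan_b, axis=1)
        return np.sqrt(np.mean(d ** 2))

    # Illustrative weekly RMS values and elapsed weeks (not the study's data)
    weeks = np.arange(1, 9)
    rms_values = np.array([0.4, 0.9, 1.3, 1.6, 2.1, 2.4, 2.9, 3.1])
    r, p = pearsonr(weeks, rms_values)
    print(f"correlation of RMS with time: r = {r:.3f}, p = {p:.3g}")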
Associational Patterns of Scavenger Beetles to Decomposition Stages.
Zanetti, Noelia I; Visciarelli, Elena C; Centeno, Nestor D
2015-07-01
Beetles associated with carrion play an important role in recycling organic matter in an ecosystem. Four experiments on decomposition, one per season, were conducted in a semirural area in Bahía Blanca, Argentina. Melyridae are reported as being of forensic interest for the first time. Apart from adults and larvae of Scarabaeidae, thirteen species and two genera of other coleopteran families are new forensic records in Argentina. Diversity, abundance, and species composition of beetles showed differences between stages and seasons. Our results differed from other studies conducted in temperate regions. Four guilds and succession patterns were established in relation to decomposition stages and seasons. Dermestidae (necrophages) predominated in winter during the decomposition process; Staphylinidae (necrophiles) in the Fresh and Bloat stages during spring, summer, and autumn; and Histeridae (necrophiles) and Cleridae (omnivores) in the following stages during those seasons. Finally, coleopteran activity, diversity and abundance, and decomposition rate change with biogeoclimatic characteristics, which is of significance in forensics. © 2015 American Academy of Forensic Sciences.
Decomposition and arthropod succession in Whitehorse, Yukon Territory, Canada.
Bygarski, Katherine; LeBlanc, Helene N
2013-03-01
Forensic arthropod succession patterns are known to vary between regions. However, the northern habitats of the globe have been largely left unstudied. Three pig carcasses were studied outdoors in Whitehorse, Yukon Territory. Adult and immature insects were collected for identification and comparison. The dominant Diptera and Coleoptera species at all carcasses were Protophormia terraenovae (R-D) (Fam: Calliphoridae) and Thanatophilus lapponicus (Herbst) (Fam: Silphidae), respectively. Rate of decomposition, patterns of Diptera and Coleoptera succession, and species dominance were shown to differ from previous studies in temperate regions, particularly as P. terraenovae showed complete dominance among blowfly species. Rate of decomposition through the first four stages was generally slow, and the last stage of decomposition was not observed at any carcass due to time constraints. It is concluded that biogeoclimatic range has a significant effect on insect presence and rate of decomposition, making it an important factor to consider when calculating a postmortem interval. © 2012 American Academy of Forensic Sciences.
MacAulay, Lauren E; Barr, Darryl G; Strongman, Doug B
2009-03-01
Previous studies document characteristics of gunshot wounds shortly after they were inflicted. This study was conducted to determine if the early stages of decomposition obscure or alter the physical surface characteristics of gunshot wounds, thereby affecting the quantity and quality of information retrievable from such evidence. The study was conducted in August and September, 2005 in Nova Scotia, Canada in forested and exposed environments. Recently killed pigs were used as research models and were shot six times each at three different ranges (contact, 2.5 cm, and 1.5 m). Under these test conditions, the gunshot wounds maintained the characteristics unique to each gunshot range and changes that occurred during decomposition were not critical to the interpretation of the evidence. It was concluded that changes due to decomposition under the conditions tested would not affect the collection and interpretation of gunshot wound evidence until the skin was degraded in the late active or advanced decay stage of decomposition.
Intelligent robots for planetary exploration and construction
NASA Technical Reports Server (NTRS)
Albus, James S.
1992-01-01
Robots capable of practical applications in planetary exploration and construction will require real-time sensory-interactive goal-directed control systems. A reference model architecture based on the NIST Real-time Control System (RCS) for real-time intelligent control systems is suggested. RCS partitions the control problem into four basic elements: behavior generation (or task decomposition), world modeling, sensory processing, and value judgment. It clusters these elements into computational nodes that have responsibility for specific subsystems, and arranges these nodes in hierarchical layers such that each layer has characteristic functionality and timing. Planetary exploration robots should have mobility systems that can safely maneuver over rough surfaces at high speeds. Walking machines and wheeled vehicles with dynamic suspensions are candidates. The technology of sensing and sensory processing has progressed to the point where real-time autonomous path planning and obstacle avoidance behavior is feasible. Map-based navigation systems will support long-range mobility goals and plans. Planetary construction robots must have high strength-to-weight ratios for lifting and positioning tools and materials in six degrees-of-freedom over large working volumes. A new generation of cable-suspended Stewart platform devices and inflatable structures are suggested for lifting and positioning materials and structures, as well as for excavation, grading, and manipulating a variety of tools and construction machinery.
Multi-component separation and analysis of bat echolocation calls.
DiCecco, John; Gaudette, Jason E; Simmons, James A
2013-01-01
The vast majority of animal vocalizations contain multiple frequency modulated (FM) components with varying amounts of non-linear modulation and harmonic instability. This is especially true of biosonar sounds where precise time-frequency templates are essential for neural information processing of echoes. Understanding the dynamic waveform design by bats and other echolocating animals may help to improve the efficacy of man-made sonar through biomimetic design. Bats are known to adapt their call structure based on the echolocation task, proximity to nearby objects, and density of acoustic clutter. To interpret the significance of these changes, a method was developed for component separation and analysis of biosonar waveforms. Techniques for imaging in the time-frequency plane are typically limited due to the uncertainty principle and interference cross terms. This problem is addressed by extending the use of the fractional Fourier transform to isolate each non-linear component for separate analysis. Once separated, empirical mode decomposition can be used to further examine each component. The Hilbert transform may then successfully extract detailed time-frequency information from each isolated component. This multi-component analysis method is applied to the sonar signals of four species of bats recorded in-flight by radiotelemetry along with a comparison of other common time-frequency representations.
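The last step named above, extracting time-frequency detail from an isolated component via the Hilbert transform, can be sketched as follows; a synthetic hyperbolic chirp stands in for a separated biosonar component, and the sample rate is an assumption.

    # Instantaneous frequency of an isolated FM component via the Hilbert transform.
    import numpy as np
    from scipy.signal import hilbert, chirp

    fs = 250_000                               # assumed sample rate (Hz)
    t = np.arange(0, 0.003, 1 / fs)            # 3 ms call
    component = chirp(t, f0=55_000, f1=25_000, t1=t[-1], method="hyperbolic")

    analytic = hilbert(component)              # analytic signal x + i*H{x}
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) / (2 * np.pi) * fs   # Hz, length len(t)-1

    print("start/end instantaneous frequency (Hz):", inst_freq[5], inst_freq[-5])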
Statistical feature extraction for artifact removal from concurrent fMRI-EEG recordings.
Liu, Zhongming; de Zwart, Jacco A; van Gelderen, Peter; Kuo, Li-Wei; Duyn, Jeff H
2012-02-01
We propose a set of algorithms for sequentially removing artifacts related to MRI gradient switching and cardiac pulsations from electroencephalography (EEG) data recorded during functional magnetic resonance imaging (fMRI). Special emphasis is directed upon the use of statistical metrics and methods for the extraction and selection of features that characterize gradient and pulse artifacts. To remove gradient artifacts, we use channel-wise filtering based on singular value decomposition (SVD). To remove pulse artifacts, we first decompose data into temporally independent components and then select a compact cluster of components that possess sustained high mutual information with the electrocardiogram (ECG). After the removal of these components, the time courses of remaining components are filtered by SVD to remove the temporal patterns phase-locked to the cardiac timing markers derived from the ECG. The filtered component time courses are then inversely transformed into multi-channel EEG time series free of pulse artifacts. Evaluation based on a large set of simultaneous EEG-fMRI data obtained during a variety of behavioral tasks, sensory stimulations and resting conditions showed excellent data quality and robust performance attainable with the proposed methods. These algorithms have been implemented as a Matlab-based toolbox made freely available for public access and research use. Published by Elsevier Inc.
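A minimal sketch of the channel-wise SVD filtering idea for gradient artifacts, assuming artifact-locked epochs of a single channel have already been extracted; removing exactly one leading component is a placeholder choice, and the paper's selection criteria differ.

    # Channel-wise SVD filter: stack artifact-locked epochs of one channel into a
    # matrix; leading singular components capture the reproducible artifact.
    import numpy as np

    def svd_artifact_filter(epochs, n_remove):
        """epochs: (n_epochs, n_samples) matrix of one channel's segments;
        zero the n_remove leading singular components and reconstruct."""
        U, s, Vt = np.linalg.svd(epochs, full_matrices=False)
        s_clean = s.copy()
        s_clean[:n_remove] = 0.0           # remove the artifact subspace
        return U @ np.diag(s_clean) @ Vt

    # Toy example: identical artifact + small "neural" signal across 100 epochs
    rng = np.random.default_rng(2)
    artifact = 50.0 * np.sin(np.linspace(0, 20 * np.pi, 500))
    epochs = artifact + 0.5 * rng.standard_normal((100, 500))
    cleaned = svd_artifact_filter(epochs, n_remove=1)
    print("residual amplitude after filtering:", np.abs(cleaned).max())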
Effects of reward context on feedback processing as indexed by time-frequency analysis.
Watts, Adreanna T M; Bernat, Edward M
2018-05-11
The role of reward context has been investigated as an important factor in feedback processing. Previous work has demonstrated that the amplitude of the feedback negativity (FN) depends on the value of the outcome relative to the range of possible outcomes in a given context, not the objective value of the outcome. However, some research has shown that the FN does not scale with loss magnitude in loss-only contexts, suggesting that some contexts do not show a pattern of context dependence. Methodologically, time-frequency decomposition techniques have proven useful for isolating time-domain ERP activity as separable processes indexed in delta (< 3 Hz) and theta (3-7 Hz). Thus, the current study assessed the role of context in a modified gambling feedback task using time-frequency analysis to better isolate the underlying processes. Results revealed that theta was more context dependent and reflected a binary evaluation of bad versus good outcomes in the gain and even contexts. Delta was more context independent: good outcomes scaled linearly with reward magnitude and good-bad differences scaled with context valence. Our findings reveal that theta and delta are differentially sensitive to context and that context valence may play a critical role in determining how the brain processes feedback. © 2018 Society for Psychophysiological Research.
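As a rough illustration of the delta and theta bands defined above, the sketch below splits a toy feedback-locked waveform with zero-phase Butterworth filters; the study's actual time-frequency decomposition is more sophisticated than simple band-pass filtering, and the sampling rate is an assumption.

    # Splitting a feedback-locked waveform into delta (<3 Hz) and theta (3-7 Hz).
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 250                                   # assumed sampling rate (Hz)
    t = np.arange(-0.2, 0.8, 1 / fs)
    erp = np.exp(-((t - 0.3) / 0.1) ** 2) * np.sin(2 * np.pi * 5 * t)  # toy ERP

    def bandfilter(x, low, high, fs, order=4):
        if low is None:                        # low-pass for delta
            b, a = butter(order, high / (fs / 2), btype="low")
        else:
            b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
        return filtfilt(b, a, x)               # zero-phase filtering

    delta = bandfilter(erp, None, 3.0, fs)
    theta = bandfilter(erp, 3.0, 7.0, fs)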
van Ede, Freek; Maris, Eric
2016-01-01
Oscillatory neuronal activity is implicated in many cognitive functions, and its phase coupling between sensors may reflect networks of communicating neuronal populations. Oscillatory activity is often studied using extracranial recordings and compared between experimental conditions. This is challenging, because there is overlap between sensor-level activity generated by different sources, and this can obscure differential experimental modulations of these sources. Additionally, in extracranial data, sensor-level phase coupling not only reflects communicating populations, but can also be generated by a current dipole, whose sensor-level phase coupling does not reflect source-level interactions. We present a novel method, which is capable of separating and characterizing sources on the basis of their phase coupling patterns as a function of space, frequency and time (trials). Importantly, this method depends on a plausible model of a neurobiological rhythm. We present this model and an accompanying analysis pipeline. Next, we demonstrate our approach, using magnetoencephalographic (MEG) recordings during a cued tactile detection task as a case study. We show that the extracted components have overlapping spatial maps and frequency content, which are difficult to resolve using conventional pairwise measures. Because our decomposition also provides trial loadings, components can be readily contrasted between experimental conditions. Strikingly, we observed heterogeneity in alpha and beta sources with respect to whether their activity was suppressed or enhanced as a function of attention and performance, and this happened both in task relevant and irrelevant regions. This heterogeneity contrasts with the common view that alpha and beta amplitude over sensory areas are always negatively related to attention and performance. PMID:27336159
Chiew, Mark; Graedel, Nadine N; Miller, Karla L
2018-07-01
Recent developments in highly accelerated fMRI data acquisition have employed low-rank and/or sparsity constraints for image reconstruction, as an alternative to conventional, time-independent parallel imaging. When under-sampling factors are high or the signals of interest are low-variance, however, functional data recovery can be poor or incomplete. We introduce a method for improving reconstruction fidelity using external constraints, like an experimental design matrix, to partially orient the estimated fMRI temporal subspace. Combining these external constraints with low-rank constraints introduces a new image reconstruction model that is analogous to using a mixture of subspace-decomposition (PCA/ICA) and regression (GLM) models in fMRI analysis. We show that this approach improves fMRI reconstruction quality in simulations and experimental data, focusing on the model problem of detecting subtle 1-s latency shifts between brain regions in a block-design task-fMRI experiment. Successful latency discrimination is shown at acceleration factors up to R = 16 in a radial-Cartesian acquisition. We show that this approach works with approximate, or not perfectly informative constraints, where the derived benefit is commensurate with the information content contained in the constraints. The proposed method extends low-rank approximation methods for under-sampled fMRI data acquisition by leveraging knowledge of expected task-based variance in the data, enabling improvements in the speed and efficiency of fMRI data acquisition without the loss of subtle features. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Chao, T.T.; Sanzolone, R.F.
1992-01-01
Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor for sample throughput, especially with the recent application of fast, modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose a sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, the elements to be determined, precision and accuracy requirements, sample throughput, technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition along with examples of their application to geochemical analysis. The chemical properties of reagents, as they relate to their function as decomposition agents, are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire-assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods discussed. © 1992.
NASA Astrophysics Data System (ADS)
Russell, M. B.; Woodall, C. W.; D'Amato, A. W.; Fraver, S.; Bradford, J. B.
2014-06-01
Forest ecosystems play a critical role in mitigating greenhouse gas emissions. Long-term forest carbon (C) storage is determined by the balance between C fixation into biomass through photosynthesis and C release via decomposition and combustion. Relative to C fixation in biomass, much less is known about C depletion through decomposition of woody debris, particularly under a changing climate. It is assumed that the increased temperatures and longer growing seasons associated with projected climate change will increase the decomposition rates (i.e., more rapid C cycling) of downed woody debris (DWD); however, the magnitude of this increase has not been previously addressed. Using DWD measurements collected from a national forest inventory of the eastern United States, we show that the residence time of DWD may decrease (i.e., more rapid decomposition) by as much as 13% over the next 200 years depending on various future climate change scenarios and forest types. Although existing dynamic global vegetation models account for the decomposition process, they typically do not include the effect of a changing climate on DWD decomposition rates. We expect that an increased understanding of decomposition rates, as presented in this current work, will be needed to adequately quantify the fate of woody detritus in future forests. Furthermore, we hope these results will lead to improved models that incorporate climate change scenarios for depicting future dead wood dynamics, in addition to a traditional emphasis on live tree demographics.
Chapman, Samantha K.; Newman, Gregory S.; Hart, Stephen C.; Schweitzer, Jennifer A.; Koch, George W.
2013-01-01
To what extent microbial community composition can explain variability in ecosystem processes remains an open question in ecology. Microbial decomposer communities can change during litter decomposition due to biotic interactions and shifting substrate availability. Though the relative abundance of decomposers may change due to mixing leaf litter, linking these shifts to the non-additive patterns often recorded in mixed-species litter decomposition rates has been elusive; establishing such a link would tie community composition to ecosystem function. We extracted phospholipid fatty acids (PLFAs) from single-species and mixed-species leaf litterbags after 10 and 27 months of decomposition in a mixed conifer forest. Total PLFA concentrations were 70% higher on litter mixtures than single litter types after 10 months, but were only 20% higher after 27 months. Similarly, fungal-to-bacterial ratios differed between mixed and single litter types after 10 months of decomposition, but equalized over time. Microbial community composition, as indicated by principal components analyses, differed due to both litter mixing and stage of litter decomposition. PLFA biomarkers a15:0 and cy17:0, which indicate gram-positive and gram-negative bacteria respectively, particularly drove these shifts. Total PLFA correlated significantly with single-litter mass loss early in decomposition but not at later stages. We conclude that litter mixing alters microbial community development, which can contribute to synergisms in litter decomposition. These findings advance our understanding of how changing forest biodiversity can alter microbial communities and the ecosystem processes they mediate. PMID:23658639
Model-free fMRI group analysis using FENICA.
Schöpf, V; Windischberger, C; Robinson, S; Kasess, C H; Fischmeister, F PhS; Lanzenberger, R; Albrecht, J; Kleemann, A M; Kopietz, R; Wiesmann, M; Moser, E
2011-03-01
Exploratory analysis of functional MRI data allows activation to be detected even if the time course differs from that which is expected. Independent Component Analysis (ICA) has emerged as a powerful approach, but current extensions to the analysis of group studies suffer from a number of drawbacks: they can be computationally demanding, results are dominated by technical and motion artefacts, and some methods require that time courses be the same for all subjects or that templates be defined to identify common components. We have developed a group ICA (gICA) method which is based on single-subject ICA decompositions and the assumption that the spatial distribution of signal changes in components which reflect activation is similar between subjects. This approach, which we have called Fully Exploratory Network Independent Component Analysis (FENICA), identifies group activation in two stages. ICA is performed on the single-subject level, then consistent components are identified via spatial correlation. Group activation maps are generated in a second-level GLM analysis. FENICA is applied to data from three studies employing a wide range of stimulus and presentation designs. These are an event-related motor task, a block-design cognition task and an event-related chemosensory experiment. In all cases, the group maps identified by FENICA as being the most consistent over subjects correspond to task activation. There is good agreement between FENICA results and regions identified in prior GLM-based studies. In the chemosensory task, additional regions are identified by FENICA and temporal concatenation ICA that we show is related to the stimulus, but exhibit a delayed response. FENICA is a fully exploratory method that allows activation to be identified without assumptions about temporal evolution, and isolates activation from other sources of signal fluctuation in fMRI. It has the advantage over other gICA methods that it is computationally undemanding, spotlights components relating to activation rather than artefacts, allows the use of familiar statistical thresholding through deployment of a higher level GLM analysis and can be applied to studies where the paradigm is different for all subjects. Copyright © 2010 Elsevier Inc. All rights reserved.
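A hedged sketch of FENICA's two-stage logic, single-subject ICA followed by spatial correlation matching across subjects; the second-level GLM is omitted, the data are random stand-ins, and the matching rule shown (maximum absolute correlation for one subject pair) is a simplification of the published consistency measure.

    # Stage 1: per-subject ICA; Stage 2: match components by spatial correlation.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(3)
    n_subjects, n_timepoints, n_voxels, n_comp = 4, 120, 500, 10

    subject_maps = []
    for _ in range(n_subjects):
        data = rng.standard_normal((n_timepoints, n_voxels))  # stand-in fMRI data
        ica = FastICA(n_components=n_comp, random_state=0, max_iter=1000)
        ica.fit(data)
        subject_maps.append(ica.components_)   # rows serve as spatial maps here

    # Spatial correlation of every subject-1 component with every subject-2 one;
    # consistent components would show high |r| across all subject pairs.
    corr = np.corrcoef(np.vstack([subject_maps[0], subject_maps[1]]))
    cross = corr[:n_comp, n_comp:]
    best_match = np.abs(cross).argmax(axis=1)
    print("best subject-2 match for each subject-1 component:", best_match)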
Daily water level forecasting using wavelet decomposition and artificial intelligence techniques
NASA Astrophysics Data System (ADS)
Seo, Youngmin; Kim, Sungwon; Kisi, Ozgur; Singh, Vijay P.
2015-01-01
Reliable water level forecasting for reservoir inflow is essential for reservoir operation. The objective of this paper is to develop and apply two hybrid models for daily water level forecasting and investigate their accuracy. These two hybrid models are the wavelet-based artificial neural network (WANN) and the wavelet-based adaptive neuro-fuzzy inference system (WANFIS). Wavelet decomposition is employed to decompose an input time series into approximation and detail components. The decomposed time series are used as inputs to artificial neural networks (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) for the WANN and WANFIS models, respectively. Based on statistical performance indexes, the WANN and WANFIS models are found to achieve better efficiency than the ANN and ANFIS models. WANFIS7-sym10 yields the best performance among all models. It is found that wavelet decomposition improves the accuracy of ANN and ANFIS. This study evaluates the accuracy of the WANN and WANFIS models for different mother wavelets, including Daubechies, Symmlet, and Coiflet wavelets. It is found that model performance depends on the input sets and mother wavelets, and that wavelet decomposition using the mother wavelet db10 can further improve the efficiency of the ANN and ANFIS models. Results obtained from this study indicate that the conjunction of wavelet decomposition and artificial intelligence models can be a useful tool for accurately forecasting daily water levels and can yield better efficiency than conventional forecasting models.
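A minimal sketch of the WANN idea under stated assumptions: decompose the series with PyWavelets using db10 (one of the mother wavelets evaluated above), reconstruct each approximation/detail component at full length, and train a small neural network to predict the next day's level; the data, lag structure, and network size are illustrative.

    # Wavelet decomposition + neural network forecasting (WANN-style sketch).
    import numpy as np
    import pywt
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(4)
    series = rng.standard_normal(500).cumsum()     # stand-in daily water levels

    coeffs = pywt.wavedec(series, "db10", level=3) # [cA3, cD3, cD2, cD1]
    # Reconstruct each component at full length so they align with the series
    parts = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        parts.append(pywt.waverec(kept, "db10")[: len(series)])

    X = np.column_stack(parts)[:-1]                # today's components ...
    y = series[1:]                                 # ... predict tomorrow's level

    model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
    model.fit(X, y)
    print("in-sample R^2:", model.score(X, y))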
NASA Astrophysics Data System (ADS)
Campbell, John L.; Fontaine, Joseph B.; Donato, Daniel C.
2016-03-01
A key uncertainty concerning the effect of wildfire on carbon dynamics is the rate at which fire-killed biomass (e.g., dead trees) decays and emits carbon to the atmosphere. We used a ground-based approach to compute decomposition of forest biomass killed, but not combusted, in the Biscuit Fire of 2002, an exceptionally large wildfire that burned over 200,000 ha of mixed conifer forest in southwestern Oregon, USA. A combination of federal inventory data and supplementary ground measurements afforded the estimation of fire-caused mortality and subsequent 10 year decomposition for several functionally distinct carbon pools at 180 independent locations in the burn area. Decomposition was highest for fire-killed leaves and fine roots and lowest for large-diameter wood. Decomposition rates varied somewhat among tree species and were only 35% lower for trees still standing than for trees fallen at the time of the fire. We estimate a total of 4.7 Tg C was killed but not combusted in the Biscuit Fire, 85% of which remained 10 years after the fire. Biogenic carbon emissions from fire-killed necromass were estimated to be 1.0, 0.6, and 0.4 Mg C ha⁻¹ yr⁻¹ at 1, 10, and 50 years after the fire, respectively, compared to the one-time pyrogenic emission of nearly 17 Mg C ha⁻¹.
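For intuition, residence time and decay constant are linked by a single-exponential model M(t) = M0·exp(-k·t), with residence time 1/k; the arithmetic below uses made-up numbers, not the inventory estimates, and the study's decomposition models are more detailed than this.

    # Back-of-envelope link between mass loss, decay constant k, and residence
    # time (1/k), the quantity whose ~13% decrease is projected above.
    import numpy as np

    m0, m_t, years = 100.0, 70.0, 10.0        # initial mass, mass after 10 yr
    k = -np.log(m_t / m0) / years             # yr^-1, from M(t) = M0 * exp(-k t)
    residence_time = 1.0 / k
    print(f"k = {k:.4f} /yr, residence time = {residence_time:.1f} yr")

    # A 13% shorter residence time corresponds to a faster decay constant:
    print(f"under warming: {residence_time * 0.87:.1f} yr")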
Decomposition of timed automata for solving scheduling problems
NASA Astrophysics Data System (ADS)
Nishi, Tatsushi; Wakatake, Masato
2014-03-01
A decomposition algorithm for scheduling problems based on the timed automata (TA) model is proposed. The problem is represented as an optimal state transition problem for TA. The model comprises the parallel composition of submodels such as jobs and resources. The procedure of the proposed methodology can be divided into two steps. The first step is to decompose the TA model into several submodels using a decomposability condition. The second step is to combine the individual solutions of the subproblems for the decomposed submodels by the penalty function method. A feasible solution for the entire model is derived through iterated solution of the subproblem for each submodel. The proposed methodology is applied to solve flowshop and jobshop scheduling problems. Computational experiments demonstrate the effectiveness of the proposed algorithm compared with a conventional TA scheduling algorithm without decomposition.
Decomposition of the Multistatic Response Matrix and Target Characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chambers, D H
2008-02-14
Decomposition of the time-reversal operator for an array, or equivalently the singular value decomposition of the multistatic response matrix, has been used to improve imaging and localization of targets in complicated media. Typically, each singular value is associated with one scatterer even though it has been shown in several cases that a single scatterer can generate several singular values. In this paper we review the analysis of the time-reversal operator (TRO), or equivalently the multistatic response matrix (MRM), of an array system and a small target. We begin with two-dimensional scattering from a small cylinder, then show the results for a small non-spherical target in three dimensions. We show that the number and magnitudes of the singular values contain information about target composition, shape, and orientation.
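A small numerical illustration of the MRM decomposition discussed above: in the idealized point-scatterer model, the MRM is a rank-one outer product of the array's Green's-function vector, so its SVD shows a single dominant singular value; the geometry, standoff, and wavelength below are arbitrary assumptions.

    # SVD of a toy multistatic response matrix K, where K[i, j] is the signal
    # received on element i when element j transmits, for one point scatterer.
    import numpy as np

    n_elements = 16
    positions = np.linspace(-1.0, 1.0, n_elements)
    scatterer, wavelength, standoff = 0.3, 0.1, 2.0

    # Idealized Green's-function phases from each element to the scatterer
    dist = np.sqrt((positions - scatterer) ** 2 + standoff ** 2)
    g = np.exp(2j * np.pi * dist / wavelength) / dist
    K = np.outer(g, g)                     # rank-1 MRM for one point target

    s = np.linalg.svd(K, compute_uv=False)
    print("leading singular values:", np.round(s[:4], 4))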
Decomposition of Proteins into Dynamic Units from Atomic Cross-Correlation Functions.
Calligari, Paolo; Gerolin, Marco; Abergel, Daniel; Polimeno, Antonino
2017-01-10
In this article, we present a clustering method of atoms in proteins based on the analysis of the correlation times of interatomic distance correlation functions computed from MD simulations. The goal is to provide a coarse-grained description of the protein in terms of fewer elements that can be treated as dynamically independent subunits. Importantly, this domain decomposition method does not take into account structural properties of the protein. Instead, the clustering of protein residues in terms of networks of dynamically correlated domains is defined on the basis of the effective correlation times of the pair distance correlation functions. For these properties, our method stands as a complementary analysis to the customary protein decomposition in terms of quasi-rigid, structure-based domains. Results obtained for a prototypal protein structure illustrate the approach proposed.
NASA Astrophysics Data System (ADS)
Yamamoto, Takanori; Bannai, Hideo; Nagasaki, Masao; Miyano, Satoru
We present new decomposition heuristics for finding the optimal solution for the maximum-weight connected graph problem, which is known to be NP-hard. Previous optimal algorithms for solving the problem decompose the input graph into subgraphs using heuristics based on node degree. We propose new heuristics based on betweenness centrality measures, and show through computational experiments that our new heuristics tend to reduce the number of subgraphs in the decomposition, and therefore could lead to the reduction in computational time for finding the optimal solution. The method is further applied to analysis of biological pathway data.
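A hedged sketch of a betweenness-driven decomposition in the spirit described above, not the authors' algorithm: repeatedly delete the highest-betweenness node until every connected component is small enough, then return the components as subgraphs (uses NetworkX; the size threshold is an assumption).

    # Betweenness-centrality-guided graph decomposition sketch.
    import networkx as nx

    def betweenness_decompose(G, max_subgraph_size):
        work = G.copy()
        while max(len(c) for c in nx.connected_components(work)) > max_subgraph_size:
            bc = nx.betweenness_centrality(work)
            work.remove_node(max(bc, key=bc.get))  # cut at the biggest bottleneck
        return [G.subgraph(c).copy() for c in nx.connected_components(work)]

    G = nx.barbell_graph(8, 2)                     # two cliques joined by a path
    subgraphs = betweenness_decompose(G, max_subgraph_size=9)
    print("subgraph sizes:", sorted(len(sg) for sg in subgraphs))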
Applying Behavior-Based Robotics Concepts to Telerobotic Use of Power Tooling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noakes, Mark W; Hamel, Dr. William R.
While it has long been recognized that telerobotics has potential advantages to reduce operator fatigue, to permit lower skilled operators to function as if they had higher skill levels, and to protect tools and manipulators from excessive forces during operation, relatively little laboratory research in telerobotics has actually been implemented in fielded systems. Much of this has to do with the complexity of the implementation and its lack of ability to operate in complex unstructured remote systems environments. One possible solution is to approach the tooling task using an adaptation of behavior-based techniques to facilitate task decomposition to a simpler perspective and to provide sensor registration to the task target object in the field. An approach derived from behavior-based concepts has been implemented to provide automated tool operation for a teleoperated manipulator system. The generic approach is adaptable to a wide range of typical remote tools used in hot-cell and decontamination and dismantlement-type operations. Two tasks are used in this work to test the validity of the concept. First, a reciprocating saw is used to cut a pipe. The second task is bolt removal from mockup process equipment. This paper explains the technique, its implementation, and covers experimental data, analysis of results, and suggestions for implementation on fielded systems.
Johansson, Mikael; Mecklinger, Axel
2003-10-01
The focus of the present paper is a late posterior negative slow wave (LPN) that has frequently been reported in event-related potential (ERP) studies of memory. An overview of these studies suggests that two broad classes of experimental conditions tend to elicit this component: (a) item recognition tasks associated with enhanced action monitoring demands arising from response conflict and (b) memory tasks that require the binding of items with contextual information specifying the study episode. A combined stimulus- and response-locked analysis of data from two studies mapping onto these classes allowed a temporal and functional decomposition of the LPN. While only the LPN observed in the item recognition task could be attributed to the involvement of a posteriorly distributed response-locked error-related negativity (or error negativity; ERN/Ne) occurring immediately after the response, the source-memory task was associated with a stimulus-locked negative slow wave occurring prior and during response execution that was evident when data were matched for response latencies. We argue that the presence of the former reflects action monitoring due to high levels of response conflict, whereas the latter reflects retrieval processes that may act to reconstruct the prior study episode when task-relevant attribute conjunctions are not readily recovered or need continued evaluation.
Zope, Indraneel S.; Yu, Zhong-Zhen
2017-01-01
Metal ions present on smectite clay (montmorillonite) platelets have preferential reactivity towards peroxy/alkoxy groups during polyamide 6 (PA6) thermal decomposition. This changes the decomposition pathway and negatively affects the ignition response of PA6. To restrict these interfacial interactions, high-temperature-resistant polymers such as polyetherimide (PEI) and polyimide (PI) were used to coat clay layers. PEI was deposited on clay by solution-precipitation, whereas PI was deposited through a solution-imidization-precipitation technique before melt blending with PA6. The absence of polymer-clay interfacial interactions has resulted in a similar time-to-ignition of PA6/PEI-clay (133 s) and PA6/PI-clay (139 s) composites as neat PA6 (140 s). On the contrary, PA6 with conventional ammonium-based surfactant modified clay has showed a huge drop in time-to-ignition (81 s), as expected. The experimental evidences provided herein reveal the role of the catalytic activity of clay during the early stages of polymer decomposition. PMID:28800095
Zope, Indraneel S; Dasari, Aravind; Yu, Zhong-Zhen
2017-08-11
Metal ions present on smectite clay (montmorillonite) platelets have preferential reactivity towards peroxy/alkoxy groups during polyamide 6 (PA6) thermal decomposition. This changes the decomposition pathway and negatively affects the ignition response of PA6. To restrict these interfacial interactions, high-temperature-resistant polymers such as polyetherimide (PEI) and polyimide (PI) were used to coat clay layers. PEI was deposited on clay by solution-precipitation, whereas PI was deposited through a solution-imidization-precipitation technique before melt blending with PA6. The absence of polymer-clay interfacial interactions has resulted in a similar time-to-ignition of PA6/PEI-clay (133 s) and PA6/PI-clay (139 s) composites as neat PA6 (140 s). On the contrary, PA6 with conventional ammonium-based surfactant modified clay has showed a huge drop in time-to-ignition (81 s), as expected. The experimental evidences provided herein reveal the role of the catalytic activity of clay during the early stages of polymer decomposition.
Evolutionary and Developmental Modules
Lacquaniti, Francesco; Ivanenko, Yuri P.; d’Avella, Andrea; Zelik, Karl E.; Zago, Myrka
2013-01-01
The identification of biological modules at the systems level often follows top-down decomposition of a task goal, or bottom-up decomposition of multidimensional data arrays into basic elements or patterns representing shared features. These approaches traditionally have been applied to mature, fully developed systems. Here we review some results from two other perspectives on modularity, namely the developmental and evolutionary perspective. There is growing evidence that modular units of development were highly preserved and recombined during evolution. We first consider a few examples of modules well identifiable from morphology. Next we consider the more difficult issue of identifying functional developmental modules. We dwell especially on modular control of locomotion to argue that the building blocks used to construct different locomotor behaviors are similar across several animal species, presumably related to ancestral neural networks of command. A recurrent theme from comparative studies is that the developmental addition of new premotor modules underlies the postnatal acquisition and refinement of several different motor behaviors in vertebrates. PMID:23730285
Evolutionary and developmental modules.
Lacquaniti, Francesco; Ivanenko, Yuri P; d'Avella, Andrea; Zelik, Karl E; Zago, Myrka
2013-01-01
The identification of biological modules at the systems level often follows top-down decomposition of a task goal, or bottom-up decomposition of multidimensional data arrays into basic elements or patterns representing shared features. These approaches traditionally have been applied to mature, fully developed systems. Here we review some results from two other perspectives on modularity, namely the developmental and evolutionary perspective. There is growing evidence that modular units of development were highly preserved and recombined during evolution. We first consider a few examples of modules well identifiable from morphology. Next we consider the more difficult issue of identifying functional developmental modules. We dwell especially on modular control of locomotion to argue that the building blocks used to construct different locomotor behaviors are similar across several animal species, presumably related to ancestral neural networks of command. A recurrent theme from comparative studies is that the developmental addition of new premotor modules underlies the postnatal acquisition and refinement of several different motor behaviors in vertebrates.
NASA Astrophysics Data System (ADS)
Hao, Zhenhua; Cui, Ziqiang; Yue, Shihong; Wang, Huaxiang
2018-06-01
As an important means in electrical impedance tomography (EIT), multi-frequency phase-sensitive demodulation (PSD) can be viewed as a matched filter for measurement signals and as an optimal linear filter in the case of Gaussian-type noise. However, the additive noise usually possesses impulsive noise characteristics, so it is a challenging task to reduce the impulsive noise in multi-frequency PSD effectively. In this paper, an approach for impulsive noise reduction in multi-frequency PSD of EIT is presented. Instead of linear filters, a singular value decomposition filter is employed as the pre-stage filtering module prior to PSD, which has advantages of zero phase shift, little distortion, and a high signal-to-noise ratio (SNR) in digital signal processing. Simulation and experimental results demonstrated that the proposed method can effectively eliminate the influence of impulsive noise in multi-frequency PSD, and it was capable of achieving a higher SNR and smaller demodulation error.
Absolute continuity for operator valued completely positive maps on C∗-algebras
NASA Astrophysics Data System (ADS)
Gheondea, Aurelian; Kavruk, Ali Şamil
2009-02-01
Motivated by applicability to quantum operations, quantum information, and quantum probability, we investigate the notion of absolute continuity for operator valued completely positive maps on C∗-algebras, previously introduced by Parthasarathy [in Athens Conference on Applied Probability and Time Series Analysis I (Springer-Verlag, Berlin, 1996), pp. 34-54]. We obtain an intrinsic definition of absolute continuity, we show that the Lebesgue decomposition defined by Parthasarathy is the maximal one among all other Lebesgue-type decompositions and that this maximal Lebesgue decomposition does not depend on the jointly dominating completely positive map, we obtain more flexible formulas for calculating the maximal Lebesgue decomposition, and we point out the nonuniqueness of the Lebesgue decomposition as well as a sufficient condition for uniqueness. In addition, we consider Radon-Nikodym derivatives for absolutely continuous completely positive maps that, in general, are unbounded positive self-adjoint operators affiliated to a certain von Neumann algebra, and we obtain a spectral approximation by bounded Radon-Nikodym derivatives. An application to the existence of the infimum of two completely positive maps is indicated, and formulas in terms of Choi's matrices for the Lebesgue decomposition of completely positive maps in matrix algebras are obtained.
NASA Astrophysics Data System (ADS)
Greenfield, Margo
Energetic materials play an important role in aeronautics, the weapons industry, and the propellant industry due to their broad applications as explosives and fuels. RDX (1,3,5-trinitrohexahydro-s-triazine), HMX (octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine), and CL-20 (2,4,6,8,10,12-hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane) are compounds with high energy density. Although RDX and HMX have been studied extensively over the past several decades, a complete understanding of their decomposition mechanisms and dynamics is lacking. Time-of-flight mass spectrometry (TOFMS) UV photodissociation (ns) experiments on gas-phase RDX, HMX, and CL-20 identify the NO molecule as the initial decomposition product. Four different vibronic transitions of this initial decomposition product are observed: A²Σ(υ′ = 0) ← X²Π(υ″ = 0, 1, 2, 3). Simulations of the rovibronic intensities for the A ← X transitions demonstrate that NO dissociated from RDX, HMX, and CL-20 is rotationally cold (~20 K) and vibrationally hot (~1800 K). Conversely, experiments on the five model systems (nitromethane, dimethylnitramine (DMNA), nitropyrrolidine, nitropiperidine, and dinitropiperazine) produce rotationally hot and vibrationally cold spectra. Laser-induced fluorescence (LIF) experiments were performed to rule out the possible decomposition product OH, generated along with NO, perhaps from the suggested HONO elimination mechanism. The OH radical is not observed in the fluorescence experiments, indicating the HONO decomposition intermediate is not an important pathway for the excited electronic state decomposition of cyclic nitramines. The NO molecule is also employed to measure the dynamics of the excited state decomposition. A 226 nm, 180 fs light pulse is utilized to photodissociate the gas-phase systems. Stable ion states of DMNA and nitropyrrolidine are observed, while the energetic materials and the remaining model systems present the NO molecule as the only observed product. Pump-probe transients of the resonant A ← X (0-0) transition of the NO molecule show a constant signal, indicating these materials decompose faster than the time duration of the 226 nm laser light. Computational results together with the experimental results indicate the energetic materials decompose through internal conversion to very highly excited (~5 eV of vibrational energy) vibrational states of their ground electronic state, while the model systems follow an excited electronic state decomposition pathway.
Detecting the Extent of Cellular Decomposition after Sub-Eutectoid Annealing in Rolled UMo Foils
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kautz, Elizabeth J.; Jana, Saumyadeep; Devaraj, Arun
2017-07-31
This report presents an automated image processing approach to quantifying microstructure image data, specifically the extent of eutectoid (cellular) decomposition in rolled U-10Mo foils. An image processing approach is used here to be able to quantitatively describe microstructure image data in order to relate microstructure to processing parameters (time, temperature, deformation).
Decomposition rates and termite assemblage composition in semiarid Africa
Schuurman, G.
2005-01-01
Outside of the humid tropics, abiotic factors are generally considered the dominant regulators of decomposition, and biotic influences are frequently not considered in predicting decomposition rates. In this study, I examined the effect of termite assemblage composition and abundance on decomposition of wood litter of an indigenous species (Croton megalobotrys) in five terrestrial habitats of the highly seasonal semiarid Okavango Delta region of northern Botswana, to determine whether natural variation in decomposer community composition and abundance influences decomposition rates. I conducted the study in two areas, Xudum and Santawani, with the Xudum study preceding the Santawani study. I assessed termite assemblage composition and abundance using a grid of survey baits (rolls of toilet paper) placed on the soil surface and checked 2-4 times/month. I placed a billet (a section of wood litter) next to each survey bait and measured decomposition in a plot by averaging the mass loss of its billets. Decomposition rates varied up to sixfold among plots within the same habitat and locality, despite the fact that these plots experienced the same climate. In addition, billets decomposed significantly faster during the cooler and drier Santawani study, contradicting climate-based predictions. Because termite incidence was generally higher in Santawani plots, termite abundance initially seemed a likely determinant of decomposition in this system. However, no significant effect of termite incidence on billet mass loss rates was observed among the Xudum plots, where decomposition rates remained low even though termite incidence varied considerably. Considering the incidences of fungus-growing termites and non-fungus-growing termites separately resolves this apparent contradiction: in both Santawani and Xudum, only fungus-growing termites play a significant role in decomposition. This result is mirrored in an analysis of the full data set of combined Xudum and Santawani data. The determination that natural variation in the abundance of a single taxonomic group of soil fauna, a termite subfamily, determines almost all observed variation in decomposition rates supports the emerging view that biotic influences may be important in many biomes and that consideration of decomposer community composition and abundance may be critical for accurate prediction of decomposition rates. © 2005 by the Ecological Society of America.
Data-adaptive harmonic spectra and multilayer Stuart-Landau models
NASA Astrophysics Data System (ADS)
Chekroun, Mickaël D.; Kondrashov, Dmitri
2017-09-01
Harmonic decompositions of multivariate time series are considered for which we adopt an integral operator approach with periodic semigroup kernels. Spectral decomposition theorems are derived that cover the important cases of two-time statistics drawn from a mixing invariant measure. The corresponding eigenvalues can be grouped per Fourier frequency and are actually given, at each frequency, as the singular values of a cross-spectral matrix depending on the data. These eigenvalues obey, furthermore, a variational principle that allows us to define naturally a multidimensional power spectrum. The eigenmodes, as far as they are concerned, exhibit a data-adaptive character manifested in their phase which allows us in turn to define a multidimensional phase spectrum. The resulting data-adaptive harmonic (DAH) modes allow for reducing the data-driven modeling effort to elemental models stacked per frequency, only coupled at different frequencies by the same noise realization. In particular, the DAH decomposition extracts time-dependent coefficients stacked by Fourier frequency which can be efficiently modeled—provided the decay of temporal correlations is sufficiently well-resolved—within a class of multilayer stochastic models (MSMs) tailored here on stochastic Stuart-Landau oscillators. Applications to the Lorenz 96 model and to a stochastic heat equation driven by a space-time white noise are considered. In both cases, the DAH decomposition allows for an extraction of spatio-temporal modes revealing key features of the dynamics in the embedded phase space. The multilayer Stuart-Landau models (MSLMs) are shown to successfully model the typical patterns of the corresponding time-evolving fields, as well as their statistics of occurrence.
Evidence for morphological composition in compound words using MEG.
Brooks, Teon L; Cid de Garcia, Daniela
2015-01-01
Psycholinguistic and electrophysiological studies of lexical processing show convergent evidence for morpheme-based lexical access for morphologically complex words that involves early decomposition into their constituent morphemes followed by some combinatorial operation. Considering that both semantically transparent (e.g., sailboat) and semantically opaque (e.g., bootleg) compounds undergo morphological decomposition during the earlier stages of lexical processing, subsequent combinatorial operations should account for the difference in the contribution of the constituent morphemes to the meaning of these different word types. In this study we use magnetoencephalography (MEG) to pinpoint the neural bases of this combinatorial stage in English compound word recognition. MEG data were acquired while participants performed a word naming task in which three word types, transparent compounds (e.g., roadside), opaque compounds (e.g., butterfly), and morphologically simple words (e.g., brothel) were contrasted in a partial-repetition priming paradigm where the word of interest was primed by one of its constituent morphemes. Analysis of onset latency revealed shorter latencies to name compound words than simplex words when primed, further supporting a stage of morphological decomposition in lexical access. An analysis of the associated MEG activity uncovered a region of interest implicated in morphological composition, the Left Anterior Temporal Lobe (LATL). Only transparent compounds showed increased activity in this area from 250 to 470 ms. Previous studies using sentences and phrases have highlighted the role of LATL in performing computations for basic combinatorial operations. Results are in tune with decomposition models for morpheme accessibility early in processing and suggest that semantics play a role in combining the meanings of morphemes when their composition is transparent to the overall word meaning.
Li, Xiaoyan; Holobar, Ales; Gazzoni, Marco; Merletti, Roberto; Rymer, William Zev; Zhou, Ping
2015-05-01
Recent advances in high-density surface electromyogram (EMG) decomposition have made it a feasible task to discriminate single motor unit activity from surface EMG interference patterns, thus providing a noninvasive approach for examination of motor unit control properties. In the current study, we applied high-density surface EMG recording and decomposition techniques to assess motor unit firing behavior alterations poststroke. Surface EMG signals were collected using a 64-channel 2-D electrode array from the paretic and contralateral first dorsal interosseous (FDI) muscles of nine hemiparetic stroke subjects at different isometric discrete contraction levels between 2 and 10 N with a 2 N increment step. Motor unit firing rates were extracted through decomposition of the high-density surface EMG signals and compared between paretic and contralateral muscles. Across the nine tested subjects, paretic FDI muscles showed decreased motor unit firing rates compared with contralateral muscles at different contraction levels. Regression analysis indicated a linear relation between the mean motor unit firing rate and the muscle contraction level for both paretic and contralateral muscles (p < 0.001), with the former demonstrating a lower increment rate (0.32 pulses per second (pps)/N) compared with the latter (0.67 pps/N). The coefficient of variation (averaged over the contraction levels) of the motor unit firing rates for the paretic muscles (0.21 ± 0.012) was significantly higher than for the contralateral muscles (0.17 ± 0.014) (p < 0.05). This study provides direct evidence of motor unit firing behavior alterations poststroke using surface EMG, which can be an important factor contributing to hemiparetic muscle weakness.
Li, Xiaoyan; Holobar, Aleš; Gazzoni, Marco; Merletti, Roberto; Rymer, William Z.; Zhou, Ping
2014-01-01
Recent advances in high density surface electromyogram (EMG) decomposition have made it a feasible task to discriminate single motor unit activity from surface EMG interference patterns, thus providing a noninvasive approach for examination of motor unit control properties. In the current study we applied high density surface EMG recording and decomposition techniques to assess motor unit firing behavior alterations post-stroke. Surface EMG signals were collected using a 64-channel 2-dimensional electrode array from the paretic and contralateral first dorsal interosseous (FDI) muscles of nine hemiparetic stroke subjects at different isometric discrete contraction levels between 2 N and 10 N with a 2 N increment step. Motor unit firing rates were extracted through decomposition of the high density surface EMG signals, and compared between paretic and contralateral muscles. Across the nine tested subjects, paretic FDI muscles showed decreased motor unit firing rates compared with contralateral muscles at different contraction levels. Regression analysis indicated a linear relation between the mean motor unit firing rate and the muscle contraction level for both paretic and contralateral muscles (p < 0.001), with the former demonstrating a lower increment rate (0.32 pulses per second (pps)/N) compared with the latter (0.67 pps/N). The coefficient of variation (CoV, averaged over the contraction levels) of the motor unit firing rates for the paretic muscles (0.21 ± 0.012) was significantly higher than for the contralateral muscles (0.17 ± 0.014) (p < 0.05). This study provides direct evidence of motor unit firing behavior alterations post-stroke using surface EMG, which can be an important factor contributing to hemiparetic muscle weakness. PMID:25389239
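As an illustration of the firing-rate statistics reported in both versions of this study (not of the surface EMG decomposition itself), the sketch below derives mean firing rate and its coefficient of variation from toy discharge times and fits the linear rate-force relation; all numbers and scalings are invented:

```python
# Sketch: firing-rate statistics from (hypothetical) decomposed spike trains.
import numpy as np

rng = np.random.default_rng(0)

def firing_stats(spike_times):
    """Mean firing rate (pps) and CoV from discharge times in seconds."""
    isi = np.diff(np.sort(spike_times))          # inter-spike intervals
    rate = 1.0 / isi.mean()
    return rate, isi.std() / isi.mean()

levels = np.array([2, 4, 6, 8, 10])              # contraction levels (N)
rates = []
for lvl in levels:
    n_spikes = 40 + 3 * lvl                      # invented rate-force scaling
    spikes = np.sort(rng.uniform(0.0, 5.0, n_spikes))  # 5 s toy contraction
    rates.append(firing_stats(spikes)[0])
slope, intercept = np.polyfit(levels, rates, 1)  # pps/N, cf. 0.32 vs 0.67 reported
```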
Gershman, Samuel J.; Pesaran, Bijan; Daw, Nathaniel D.
2009-01-01
Humans and animals are endowed with a large number of effectors. Although this enables great behavioral flexibility, it presents an equally formidable reinforcement learning problem of discovering which actions are most valuable, due to the high dimensionality of the action space. An unresolved question is how neural systems for reinforcement learning – such as prediction error signals for action valuation associated with dopamine and the striatum – can cope with this “curse of dimensionality.” We propose a reinforcement learning framework that allows for learned action valuations to be decomposed into effector-specific components when appropriate to a task, and test it by studying to what extent human behavior and BOLD activity can exploit such a decomposition in a multieffector choice task. Subjects made simultaneous decisions with their left and right hands and received separate reward feedback for each hand movement. We found that choice behavior was better described by a learning model that decomposed the values of bimanual movements into separate values for each effector, rather than a traditional model that treated the bimanual actions as unitary with a single value. A decomposition of value into effector-specific components was also observed in value-related BOLD signaling, in the form of lateralized biases in striatal correlates of prediction error and anticipatory value correlates in the intraparietal sulcus. These results suggest that the human brain can use decomposed value representations to “divide and conquer” reinforcement learning over high-dimensional action spaces. PMID:19864565
Gershman, Samuel J; Pesaran, Bijan; Daw, Nathaniel D
2009-10-28
Humans and animals are endowed with a large number of effectors. Although this enables great behavioral flexibility, it presents an equally formidable reinforcement learning problem of discovering which actions are most valuable because of the high dimensionality of the action space. An unresolved question is how neural systems for reinforcement learning, such as prediction error signals for action valuation associated with dopamine and the striatum, can cope with this "curse of dimensionality." We propose a reinforcement learning framework that allows for learned action valuations to be decomposed into effector-specific components when appropriate to a task, and test it by studying to what extent human behavior and blood oxygen level-dependent (BOLD) activity can exploit such a decomposition in a multieffector choice task. Subjects made simultaneous decisions with their left and right hands and received separate reward feedback for each hand movement. We found that choice behavior was better described by a learning model that decomposed the values of bimanual movements into separate values for each effector, rather than a traditional model that treated the bimanual actions as unitary with a single value. A decomposition of value into effector-specific components was also observed in value-related BOLD signaling, in the form of lateralized biases in striatal correlates of prediction error and anticipatory value correlates in the intraparietal sulcus. These results suggest that the human brain can use decomposed value representations to "divide and conquer" reinforcement learning over high-dimensional action spaces.
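The model comparison at the heart of both reports, unitary values for bimanual action pairs versus separate per-effector values, can be sketched in a few lines. Parameters, the separable toy reward, and all names are assumptions for illustration only, not the paper's fitted model:

```python
# Sketch: effector-decomposed vs. unitary value learning for bimanual choices.
import numpy as np

rng = np.random.default_rng(1)
n_actions, alpha, beta = 2, 0.2, 3.0

def softmax(q):
    p = np.exp(beta * (q - q.max()))
    return p / p.sum()

Q_left = np.zeros(n_actions)                     # effector-specific values
Q_right = np.zeros(n_actions)
Q_joint = np.zeros((n_actions, n_actions))       # unitary value per action pair

for trial in range(1000):
    a_l = rng.choice(n_actions, p=softmax(Q_left))
    a_r = rng.choice(n_actions, p=softmax(Q_right))
    r_l, r_r = float(a_l == 0), float(a_r == 1)  # toy per-hand reward feedback
    Q_left[a_l] += alpha * (r_l - Q_left[a_l])   # decomposed updates
    Q_right[a_r] += alpha * (r_r - Q_right[a_r])
    Q_joint[a_l, a_r] += alpha * ((r_l + r_r) - Q_joint[a_l, a_r])  # unitary update
```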
Human decomposition and the reliability of a 'Universal' model for post mortem interval estimations.
Cockle, Diane L; Bell, Lynne S
2015-08-01
Human decomposition is a complex biological process driven by an array of variables which are not clearly understood. The medico-legal community have long been searching for a reliable method to establish the post-mortem interval (PMI) for those whose deaths have either been hidden, or gone un-noticed. To date, attempts to develop a PMI estimation method based on the state of the body either at the scene or at autopsy have been unsuccessful. One recent study has proposed that two simple formulae, based on the level of decomposition, humidity and temperature, could be used to accurately calculate the PMI for bodies outside, on or under the surface worldwide. This study attempted to validate 'Formula I' [1] (for bodies on the surface) using 42 Canadian cases with known PMIs. The results indicated that Formula I estimations consistently overestimated the known PMI by a large and inconsistent margin for bodies exposed to warm temperatures, while for bodies exposed to cold and freezing temperatures (less than 4°C) the PMI was dramatically underestimated. The ability of 'Formula II' to estimate the PMI for buried bodies was also examined using a set of 22 known Canadian burial cases. As the cases used in this study are retrospective, some of the data needed for Formula II were not available. The 4.6 value used in Formula II to represent the standard ratio of time by which burial decelerates the rate of decomposition was examined. The average time taken to achieve each stage of decomposition both on, and under, the surface was compared for the 118 known cases. It was found that the rate of decomposition was not consistent throughout all stages of decomposition. The rates of autolysis above and below the ground were equivalent, with the buried cases staying in a state of putrefaction for a prolonged period of time. It is suggested that differences in temperature extremes and humidity levels between geographic regions may make it impractical to apply formulas developed in one region to any other region. These results also suggest that there are other variables, apart from temperature and humidity, that may impact the rate of human decomposition. These variables, or complex of variables, are considered regionally specific. Neither of the Universal Formulae performed well, and our results do not support the proposition of Universality for PMI estimation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behrens, R.; Minier, L.; Bulusu, S.
1998-12-31
The time-dependent, solid-phase thermal decomposition behavior of 2,4-dinitroimidazole (2,4-DNI) has been measured utilizing simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) methods. The decomposition products consist of gaseous and non-volatile polymeric products. The temporal behavior of the gas formation rates of the identified products indicates that the overall thermal decomposition process is complex. In isothermal experiments with 2,4-DNI in the solid phase, four distinguishing features are observed: (1) elevated rates of gas formation are observed during the early stages of the decomposition, which appear to be correlated to the presence of exogenous water in the sample; (2) this is followed by a period of relatively constant rates of gas formation; (3) next, the rates of gas formation accelerate, characteristic of an autocatalytic reaction; (4) finally, the 2,4-DNI is depleted and gaseous decomposition products continue to evolve at a decreasing rate. A physicochemical and mathematical model of the decomposition of 2,4-DNI has been developed and applied to the experimental results. The first generation of this model is described in this paper. Differences between the first generation of the model and the experimental data collected under different conditions suggest refinements for the next generation of the model.
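The staged behavior described above, an induction period followed by autocatalytic acceleration and eventual depletion, is the qualitative signature of rate laws like the one sketched below. The rate constants are invented for illustration; this is not the paper's fitted physicochemical model:

```python
# Sketch: first-order channel plus an autocatalytic channel for the decomposed
# fraction x; the solution is sigmoidal (slow start, acceleration, depletion).
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 1e-4, 5e-3   # assumed rate constants (1/s), not fitted values

def rate(t, y):
    x = y[0]          # fraction of 2,4-DNI decomposed
    return [k1 * (1 - x) + k2 * x * (1 - x)]

sol = solve_ivp(rate, (0, 20000), [0.0], dense_output=True)
t = np.linspace(0, 20000, 200)
x = sol.sol(t)[0]     # evaluate the sigmoidal decomposition curve
```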
The components of working memory updating: an experimental decomposition and individual differences.
Ecker, Ullrich K H; Lewandowsky, Stephan; Oberauer, Klaus; Chee, Abby E H
2010-01-01
Working memory updating (WMU) has been identified as a cognitive function of prime importance for everyday tasks and has also been found to be a significant predictor of higher mental abilities. Yet, little is known about the constituent processes of WMU. We suggest that operations required in a typical WMU task can be decomposed into 3 major component processes: retrieval, transformation, and substitution. We report a large-scale experiment that instantiated all possible combinations of those 3 component processes. Results show that the 3 components make independent contributions to updating performance. We additionally present structural equation models that link WMU task performance and working memory capacity (WMC) measures. These feature the methodological advancement of estimating interindividual covariation and experimental effects on mean updating measures simultaneously. The modeling results imply that WMC is a strong predictor of WMU skills in general, although some component processes, in particular substitution skills, were independent of WMC. Hence, the reported predictive power of WMU measures may rely largely on common WM functions also measured in typical WMC tasks, although substitution skills may make an independent contribution to predicting higher mental abilities. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
On the hadron mass decomposition
NASA Astrophysics Data System (ADS)
Lorcé, Cédric
2018-02-01
We argue that the standard decompositions of the hadron mass overlook pressure effects, and hence should be interpreted with great care. Based on the semiclassical picture, we propose a new decomposition that properly accounts for these pressure effects. Because of Lorentz covariance, we stress that the hadron mass decomposition automatically comes along with a stability constraint, which we discuss for the first time. We show also that if a hadron is seen as made of quarks and gluons, one cannot decompose its mass into more than two contributions without running into trouble with the consistency of the physical interpretation. In particular, the so-called quark mass and trace anomaly contributions appear to be purely conventional. Based on the current phenomenological values, we find that on average quarks exert a repulsive force inside nucleons, balanced exactly by the gluon attractive force.
Thermal decomposition of ammonium hexachloroosmate.
Asanova, T I; Kantor, I; Asanov, I P; Korenev, S V; Yusenko, K V
2016-12-07
Structural changes of (NH4)2[OsCl6] occurring during thermal decomposition in a reducing atmosphere have been studied in situ using combined energy-dispersive X-ray absorption spectroscopy (ED-XAFS) and powder X-ray diffraction (PXRD). According to PXRD, (NH4)2[OsCl6] transforms directly to metallic Os without the formation of any crystalline intermediates, but through a plateau where no reactions occur. XANES and EXAFS data analyzed by means of Multivariate Curve Resolution (MCR) show that thermal decomposition occurs with the formation of an amorphous intermediate {OsCl4}x with a possible polymeric structure. The intermediate, revealed here for the first time, was analyzed to determine the local atomic structure around osmium. The thermal decomposition of hexachloroosmate is thus considerably more complex than previously assumed and proceeds via at least a two-step process, which had not been observed before.
Microbial community assembly and metabolic function during mammalian corpse decomposition.
Metcalf, Jessica L; Xu, Zhenjiang Zech; Weiss, Sophie; Lax, Simon; Van Treuren, Will; Hyde, Embriette R; Song, Se Jin; Amir, Amnon; Larsen, Peter; Sangwan, Naseer; Haarmann, Daniel; Humphrey, Greg C; Ackermann, Gail; Thompson, Luke R; Lauber, Christian; Bibat, Alexander; Nicholas, Catherine; Gebert, Matthew J; Petrosino, Joseph F; Reed, Sasha C; Gilbert, Jack A; Lynne, Aaron M; Bucheli, Sibyl R; Carter, David O; Knight, Rob
2016-01-08
Vertebrate corpse decomposition provides an important stage in nutrient cycling in most terrestrial habitats, yet microbially mediated processes are poorly understood. Here we combine deep microbial community characterization, community-level metabolic reconstruction, and soil biogeochemical assessment to understand the principles governing microbial community assembly during decomposition of mouse and human corpses on different soil substrates. We find a suite of bacterial and fungal groups that contribute to nitrogen cycling and a reproducible network of decomposers that emerge on predictable time scales. Our results show that this decomposer community is derived primarily from bulk soil, but key decomposers are ubiquitous in low abundance. Soil type was not a dominant factor driving community development, and the process of decomposition is sufficiently reproducible to offer new opportunities for forensic investigations. Copyright © 2016, American Association for the Advancement of Science.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Haijun; Deng, Xiangong
2016-07-15
Graphical abstract: PVP-protected Rh/Au bimetallic nanoparticles (BNPs) were prepared using a hydrogen sacrificial reduction method; the activity of the Rh80Au20 BNPs was about 3.6 times higher than that of Rh NPs. - Highlights: • Rh/Au bimetallic nanoparticles (BNPs) of 3-5 nm in diameter were prepared. • Activity for H2O2 decomposition of the BNPs is 3.6 times higher than that of Rh NPs. • The high activity of the BNPs was caused by the existence of charged Rh atoms. • The apparent activation energy for H2O2 decomposition over the BNPs was calculated. - Abstract: PVP-protected Rh/Au bimetallic nanoparticles (BNPs) were prepared using a hydrogen sacrificial reduction method and characterized by UV-vis, XRD, FT-IR, XPS, TEM, HR-TEM and DF-STEM; the effects of composition on their particle sizes and catalytic activities for H2O2 decomposition were also studied. The as-prepared Rh/Au BNPs possessed a high catalytic activity for H2O2 decomposition, and the activity of the Rh80Au20 BNPs with an average size of 2.7 nm was about 3.6 times higher than that of Rh monometallic nanoparticles (MNPs), even though the Rh MNPs possess a smaller particle size of 1.7 nm. In contrast, Au MNPs with a size of 2.7 nm showed no activity at all. Density functional theory (DFT) calculations as well as XPS results showed that charged Rh and Au atoms formed via electronic charge transfer effects could be responsible for the high catalytic activity of the BNPs.
Lindegren, Martin; Denker, Tim Spaanheden; Floeter, Jens; Fock, Heino O.; Sguotti, Camilla; Stäbler, Moritz; Otto, Saskia A.; Möllmann, Christian
2017-01-01
Understanding spatio-temporal dynamics of biotic communities containing large numbers of species is crucial to guide ecosystem management and conservation efforts. However, traditional approaches usually focus on studying community dynamics either in space or in time, often failing to fully account for interlinked spatio-temporal changes. In this study, we demonstrate and promote the use of tensor decomposition for disentangling spatio-temporal community dynamics in long-term monitoring data. Tensor decomposition builds on traditional multivariate statistics (e.g. Principal Component Analysis) but extends it to multiple dimensions. This extension allows for the synchronized study of multiple ecological variables measured repeatedly in time and space. We applied this comprehensive approach to explore the spatio-temporal dynamics of 65 demersal fish species in the North Sea, a marine ecosystem strongly altered by human activities and climate change. Our case study demonstrates how tensor decomposition can successfully (i) characterize the main spatio-temporal patterns and trends in species abundances, (ii) identify sub-communities of species that share similar spatial distribution and temporal dynamics, and (iii) reveal external drivers of change. Our results revealed a strong spatial structure in fish assemblages persistent over time and linked to differences in depth, primary production and seasonality. Furthermore, we simultaneously characterized important temporal distribution changes related to the low frequency temperature variability inherent in the Atlantic Multidecadal Oscillation. Finally, we identified six major sub-communities composed of species sharing similar spatial distribution patterns and temporal dynamics. Our case study demonstrates the application and benefits of using tensor decomposition for studying complex community data sets usually derived from large-scale monitoring programs. PMID:29136658
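For readers unfamiliar with the method, the three-way arrangement promoted above can be sketched with the tensorly library; the dimensions, rank, and random stand-in data below are assumptions, not the study's settings:

```python
# Sketch: CP/PARAFAC decomposition of a species x sites x years abundance array.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
abund = tl.tensor(rng.random((65, 30, 25)))   # species x sites x years
cp = parafac(abund, rank=6)                   # low-rank multi-way model
species_f, space_f, time_f = cp.factors      # loading matrices per mode
# each rank-1 component couples a species loading vector, a spatial map,
# and a temporal trend, i.e. a candidate "sub-community" with its dynamics
```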
Biological decomposition efficiency in different woodland soils.
Herlitzius, H
1983-03-01
The decomposition (meaning disappearance) of different leaf types and artificial leaves made from cellulose hydrate foil was studied in three forests - an alluvial forest (Ulmetum), a beech forest on limestone soil (Melico-Fagetum), and a spruce forest on soil overlying limestone bedrock. Fine, medium, and coarse mesh litter bags of special design were used to investigate the roles of abiotic factors, microorganisms, and meso- and macrofauna in effecting decomposition in the three habitats. Additionally, the experimental design was carefully arranged so as to provide information about the effects on decomposition processes of the duration of exposure and the date or moment of exposure. 1. Exposure of litter samples for 12 months showed: a) Litter enclosed in fine mesh bags decomposed to some 40-44% of the initial amount placed in each of the three forests. Most of this decomposition can be attributed to abiotic factors and microorganisms. b) Litter placed in medium mesh litter bags was reduced by ca. 60% in alluvial forest, ca. 50% in beech forest and ca. 44% in spruce forest. c) Litter enclosed in coarse mesh litter bags was reduced by 71% of the initial weights exposed in alluvial and beech forests; in the spruce forest decomposition was no greater than observed with fine and medium mesh litter bags. Clearly, in spruce forest the macrofauna has little or no part to play in effecting decomposition. 2. Sequential month by month exposure of hazel leaves and cellulose hydrate foil in coarse mesh litter bags in all three forests showed that one month of exposure led to only slight material losses; these were smallest between March and May and largest between June and October/November. 3. Coarse mesh litter bags containing either hazel or artificial leaves of cellulose hydrate foil were exposed to natural decomposition processes in December 1977 and subsampled monthly over a period of one year; this series constituted the From-sequence of experiments. Each of the From-sequence samples removed was immediately replaced by a fresh litter bag which was left in place until December 1978; this series constituted the To-sequence of experiments. The results arising from the designated From- and To-sequences showed: a) During the course of one year hazel leaves decomposed completely in alluvial forest, almost completely in beech forest, but to only 50% of the initial value in spruce forest. b) Duration of exposure and not the date of exposure is the major controlling influence on decomposition in alluvial forest, a characteristic reflected in the mirror-image courses of the From- and To-sequence curves with respect to the abscissa or time axis. Conversely, the date of exposure and not the duration of exposure is the major controlling influence on decomposition in the spruce forest, a characteristic reflected in the mirror-image courses of the From- and To-sequences with respect to the ordinate or axis of percentage decomposition. c) Leaf powder amendment increased the decomposition rate of the hazel and cellulose hydrate leaves in the spruce forest but had no significant effect on their decomposition rate in alluvial and beech forests. It is concluded from this, and other evidence, that litter amendment by leaf fragments of phytophage frass in sites of low biological decomposition activity (e.g. spruce) enhances decomposition processes. d) The time course of hazel leaf decomposition in both alluvial and beech forest is sigmoidal.
Three s-phases are distinguished and correspond to the activity of microflora/microfauna, mesofauna/macrofauna, and then microflora/microfauna again. In general, the sigmoidal pattern of the curve can be considered valid for all decomposition processes occurring in terrestrial situations. It is contended that no decomposition (=disappearance) curve actually follows an e-type exponential function. A logarithmic linear regression can be constructed from the sigmoid curve data, and although this facilitates inter-system comparisons it does not clearly express the dynamics of decomposition. 4. The course of the curve constructed from information about the standard deviations of means derived from the From- and To-sequence data does reflect the dynamics of litter decomposition. The three s-phases can be recognised, and by comparing the actual From-sequence deviation curve with a mirror inversion representation of the To-sequence curve it is possible to determine whether decomposition is primarily controlled by the duration of exposure or the date of exposure. As is the case for hazel leaf decomposition in beech forest, intermediate conditions can be readily recognised.
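The contrast drawn above, a sigmoidal mass-loss curve rather than an e-type exponential, is straightforward to test by curve fitting. A minimal sketch with invented monthly mass-loss figures (not the study's data):

```python
# Sketch: fit a logistic (sigmoid) curve to cumulative litter mass loss.
import numpy as np
from scipy.optimize import curve_fit

def logistic_loss(t, L, k, t0):
    """Cumulative % mass loss as a logistic function of exposure time."""
    return L / (1.0 + np.exp(-k * (t - t0)))

months = np.arange(1, 13)
loss = np.array([3, 5, 9, 16, 28, 43, 58, 70, 79, 85, 89, 91])  # invented data
(L, k, t0), _ = curve_fit(logistic_loss, months, loss, p0=(90.0, 0.5, 6.0))
# the three s-phases appear as the slow onset, steep middle, and final plateau
```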
NASA Astrophysics Data System (ADS)
Tobler, M.; White, D. A.; Abbene, M. L.; Burst, S. L.; McCulley, R. L.; Barnes, P. W.
2016-02-01
Decomposition is a crucial component of global biogeochemical cycles that influences the fate and residence time of carbon and nutrients in organic matter pools, yet the processes controlling litter decomposition in coastal marshes are not fully understood. We conducted a series of field studies to examine what role photodegradation, a process driven in part by solar UV radiation (280-400 nm), plays in the decomposition of the standing dead litter of Sagittaria lancifolia and Spartina patens, two common species in marshes of intermediate salinity in southern Louisiana, USA. Results indicate that the exclusion of solar UV significantly altered litter mass loss, but the magnitude and direction of these effects varied depending on species, height of the litter above the water surface and the stage of decomposition. Over one growing season, S. lancifolia litter exposed to ambient solar UV had significantly less mass loss than litter exposed to attenuated UV over the initial phase of decomposition (0-5 months; ANOVA P=0.004), after which the treatment effects reversed in the latter phase of the study (5-7 months; ANOVA P<0.001). Similar results were found in S. patens over an 11-month period. UV exposure reduced total C, N and lignin by 24-33% in remaining tissue, with treatment differences most pronounced in S. patens. Phospholipid fatty-acid analysis (PLFA) indicated that UV also significantly altered microbial (bacterial) biomass and bacteria:fungi ratios of decomposing litter. These findings, and others, indicate that solar UV can have positive and negative net effects on litter decomposition in marsh plants, with inhibition of biotic (microbial) processes occurring early in the decomposition process and then shifting to enhancement of decomposition via abiotic (photodegradation) processes later in decomposition. Photodegradation of standing litter represents a potentially significant pathway of C and N loss from these coastal wetland ecosystems.
Warming and Nitrogen Addition Increase Litter Decomposition in a Temperate Meadow Ecosystem
Gong, Shiwei; Guo, Rui; Zhang, Tao; Guo, Jixun
2015-01-01
Background Litter decomposition greatly influences soil structure, nutrient content and carbon sequestration, but how litter decomposition is affected by climate change is still not well understood. Methodology/Principal Findings A field experiment with increased temperature and nitrogen (N) addition was established in April 2007 to examine the effects of experimental warming, N addition and their interaction on litter decomposition in a temperate meadow steppe in northeastern China. Warming, N addition and warming plus N addition reduced the residual mass of L. chinensis litter by 3.78%, 7.51% and 4.53%, respectively, in 2008 and 2009, and by 4.73%, 24.08% and 16.1%, respectively, in 2010. Warming, N addition and warming plus N addition had no effect on the decomposition of P. communis litter in 2008 or 2009, but reduced the residual litter mass by 5.58%, 15.53% and 5.17%, respectively, in 2010. Warming and N addition reduced the cellulose percentage of L. chinensis and P. communis, specifically in 2010. The lignin percentage of L. chinensis and P. communis was reduced by warming but increased by N addition. The C, N and P contents of L. chinensis and P. communis litter increased with time. Warming and N addition reduced the C content and C:N ratios of L. chinensis and P. communis litter, but increased the N and P contents. Significant interactive effects of warming and N addition on litter decomposition were observed (P<0.01). Conclusion/Significance The litter decomposition rate was highly correlated with soil temperature, soil water content and litter quality. Warming and N addition significantly impacted the litter decomposition rate in the Songnen meadow ecosystem, and the effects of warming and N addition on litter decomposition were also influenced by the quality of the litter. These results highlight how climate change could alter grassland ecosystem carbon, nitrogen and phosphorus contents in soil by influencing litter decomposition. PMID:25774776
ADVANCED OXIDATION: OXALATE DECOMPOSITION TESTING WITH OZONE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ketusky, E.; Subramanian, K.
At the Savannah River Site (SRS), oxalic acid is currently considered the preferred agent for chemically cleaning the large underground Liquid Radioactive Waste Tanks. It is applied only in the final stages of emptying a tank, when generally less than 5,000 kg of waste solids remain and slurry-based removal methods are no longer effective. The use of oxalic acid is preferred because of its combined dissolution and chelating properties, as well as the fact that corrosion of the carbon steel tank walls can be controlled. Although oxalic acid is the preferred agent, there are significant potential downstream impacts. Impacts include: (1) degraded evaporator operation; (2) resultant oxalate precipitates taking away critically needed operating volume; and (3) eventual creation of significant volumes of additional feed to salt processing. As an alternative to dealing with the downstream impacts, oxalate decomposition using variations of ozone-based Advanced Oxidation Processes (AOPs) was investigated. In general, AOPs use ozone or peroxide and a catalyst to create hydroxyl radicals. Hydroxyl radicals have among the highest oxidation potentials and are commonly used to decompose organics. Although oxalate is considered among the most difficult organics to decompose, the ability of hydroxyl radicals to decompose oxalate is considered to be well demonstrated. In addition, as AOPs are considered 'green', their use enables any net chemical additions to the waste to be minimized. In order to test the ability to decompose the oxalate and determine the decomposition rates, a test rig was designed in which 10 vol% ozone would be educted into a spent oxalic acid decomposition loop, with the loop maintained at 70°C and recirculated at 40 L/min. Each of the spent oxalic acid streams would be created from three oxalic acid strikes of an F-area simulant (i.e., Purex = high Fe/Al concentration) and an H-area simulant (i.e., H-area modified Purex = high Al/Fe concentration) after nearing dissolution equilibrium, and then decomposed to ≤ 100 parts per million (ppm) oxalate. Since AOP technology largely originated with using ultraviolet (UV) light as a primary catalyst, decomposition of the spent oxalic acid, well exposed to a medium-pressure mercury vapor light, was considered the benchmark. However, with multivalent metals already contained in the feed, and maintenance of the UV light a concern, testing was conducted to evaluate the impact of removing the UV light. Using current AOP terminology, the test without the UV light would likely be considered an ozone-based, dark, ferrioxalate-type decomposition process. Specifically, as part of the testing, the impacts of the following were investigated: (1) the importance of the UV light on the decomposition rates when decomposing 1 wt% spent oxalic acid; (2) the impact of increasing the oxalic acid strength from 1 to 2.5 wt% on the decomposition rates; and (3) for F-area testing, the advantage of increasing the spent oxalic acid flow rate from 40 L/min to 50 L/min during decomposition of the 2.5 wt% spent oxalic acid. The results showed that removal of the UV light (from 1 wt% testing) slowed the decomposition rates in both the F and H testing. Specifically, for F-Area Strike 1, the time increased from about 6 hours to 8 hours. In H-Area, the impact was not as significant, with the time required for Strike 1 to be decomposed to less than 100 ppm increasing slightly, from 5.4 to 6.4 hours.
For all of the spent 2.5 wt% oxalic acid decomposition tests without the UV light, the F-Area decompositions required approximately 10 to 13 hours, while the corresponding H-Area decomposition times ranged from 10 to 21 hours. For the 2.5 wt% tests with F-Area sludge, the increased availability of iron likely caused the increased decomposition rates compared to the 1 wt% oxalic acid based tests. In addition, for the F-testing, increasing the recirculation flow rate from 40 L/min to 50 L/min resulted in an increased decomposition rate, suggesting better use of the ozone.
Multidimensional k-nearest neighbor model based on EEMD for financial time series forecasting
NASA Astrophysics Data System (ADS)
Zhang, Ningning; Lin, Aijing; Shang, Pengjian
2017-07-01
In this paper, we propose a new two-stage methodology that combines the ensemble empirical mode decomposition (EEMD) with a multidimensional k-nearest neighbor model (MKNN) in order to forecast the closing price and high price of stocks simultaneously. Modified k-nearest neighbors (KNN) algorithms are increasingly widely applied to prediction problems in many fields. Empirical mode decomposition (EMD) decomposes a nonlinear and non-stationary signal into a series of intrinsic mode functions (IMFs); however, it cannot reveal characteristic information of the signal with much accuracy as a result of mode mixing. So ensemble empirical mode decomposition (EEMD), an improved method of EMD, is presented to resolve the weaknesses of EMD by adding white noise to the original data. With EEMD, the components with true physical meaning can be extracted from the time series. Utilizing the advantages of EEMD and MKNN, the newly proposed ensemble empirical mode decomposition combined with the multidimensional k-nearest neighbor model (EEMD-MKNN) has high predictive precision for short-term forecasting. Moreover, we extend this methodology to the case of two dimensions to forecast the closing price and high price of four stocks (NAS, S&P500, DJI and STI stock indices) at the same time. The results indicate that the proposed EEMD-MKNN model has a higher forecast precision than the EMD-KNN, KNN and ARIMA methods.
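A compressed sketch of the two-stage idea follows, using the PyEMD package (distributed as EMD-signal) for EEMD and scikit-learn for the KNN stage. It illustrates plain one-dimensional EEMD-KNN rather than the paper's multidimensional MKNN variant, and all settings are assumptions:

```python
# Sketch: decompose a series into IMFs with EEMD, forecast each IMF one step
# ahead from its own lags with KNN regression, then sum the component forecasts.
import numpy as np
from PyEMD import EEMD
from sklearn.neighbors import KNeighborsRegressor

def knn_one_step(series, lags=5, k=5):
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    y = series[lags:]
    model = KNeighborsRegressor(n_neighbors=k).fit(X, y)
    return model.predict(series[-lags:].reshape(1, -1))[0]

rng = np.random.default_rng(0)
close = np.cumsum(rng.standard_normal(500)) + 100.0   # stand-in price series
imfs = EEMD(trials=50).eemd(close)                    # noise-assisted decomposition
forecast = sum(knn_one_step(imf) for imf in imfs)     # recombine IMF forecasts
```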
Miyake, Yuichi; Tokumura, Masahiro; Wang, Qi; Amagai, Takashi; Horii, Yuichi
2017-11-01
Here, we examined the incineration of extruded polystyrene containing hexabromocyclododecane (HBCD) in a pilot-scale incinerator under various combustion temperatures (800-950°C) and flue gas residence times (2-8 sec). Rates of HBCD decomposition ranged from 99.996% (800°C, 2 sec) to 99.9999% (950°C, 8 sec); the decomposition of HBCD, except during the initial stage of combustion (flue gas residence time < 2 sec), followed a pseudo-first-order kinetics model. An Arrhenius plot revealed that the activation energy and frequency factor of the decomposition of HBCD by combustion were 14.2 kJ/mol and 1.69 sec^-1, respectively. During combustion, 11 brominated polycyclic aromatic hydrocarbons (BrPAHs) were detected as unintentional by-products. Of the 11 BrPAHs detected, 2-bromoanthracene and 1-bromopyrene were detected at the highest concentrations. The mutagenic and carcinogenic BrPAHs 1,5-dibromoanthracene and 1-bromopyrene were most frequently detected in the flue gases analyzed. The total concentration of BrPAHs exponentially increased (range, 87.8-2,040,000 ng/m^3) with increasing flue gas residence time. Results from a qualitative analysis using gas chromatography/high-resolution mass spectrometry suggest that bromofluorene and bromopyrene (or fluoranthene) congeners were also produced during the combustion. Copyright © 2017. Published by Elsevier B.V.
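As a worked illustration of the pseudo-first-order treatment quoted above (the Ea = 14.2 kJ/mol and A = 1.69 sec^-1 values are the paper's; the helper function and comparison are ours, and note the abstract excludes the initial < 2 sec stage from the kinetic model):

```python
# Sketch: convert a destruction efficiency at a given residence time into a
# rate constant, and evaluate the reported Arrhenius expression at 800 C.
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def rate_constant(destruction, t_res):
    """Pseudo-first-order k (1/sec) from fractional decomposition at t_res (sec)."""
    return -np.log(1.0 - destruction) / t_res

k_800 = rate_constant(0.99996, 2.0)                   # 800 C, 2 sec case above
Ea, A = 14.2e3, 1.69                                  # reported parameters
k_arrhenius = A * np.exp(-Ea / (R * (800 + 273.15)))  # k at 800 C from the fit
```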
Isayev, Olexandr; Gorb, Leonid; Qasim, Mo; Leszczynski, Jerzy
2008-09-04
CL-20 (2,4,6,8,10,12-hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane or HNIW) is a high-energy nitramine explosive. To improve atomistic understanding of the thermal decomposition of CL-20 gas and solid phases, we performed a series of ab initio molecular dynamics simulations. We found that during unimolecular decomposition, unlike other nitramines (e.g., RDX, HMX), CL-20 has only one distinct initial reaction channel: homolysis of the N-NO2 bond. We did not observe any HONO elimination reaction during unimolecular decomposition, whereas the ring-breaking reaction was followed by NO2 fission. Therefore, in spite of limited sampling, which provides a mostly qualitative picture, we propose here a scheme of unimolecular decomposition of CL-20. The averaged product population over all trajectories was estimated at four HCN, two to four NO2, two to four NO, one CO, and one OH molecule per CL-20 molecule. Our simulations provide a detailed description of the chemical processes in the initial stages of thermal decomposition of condensed CL-20, allowing elucidation of key features of such processes as composition of primary reaction products, reaction timing, and Arrhenius behavior of the system. The primary reactions leading to NO2, NO, N2O, and N2 occur at very early stages. We also estimated potential activation barriers for the formation of NO2, which essentially determines overall decomposition kinetics, and effective rate constants for NO2 and N2. The calculated solid-phase decomposition pathways correlate with available condensed-phase experimental data.
Marian, Franca; Sandmann, Dorothee; Krashevska, Valentyna; Maraun, Mark; Scheu, Stefan
2017-08-01
We investigated how altitude affects the decomposition of leaf and root litter in the Andean tropical montane rainforest of southern Ecuador, that is, whether through changes in litter quality between altitudes or through other site-specific differences in microenvironmental conditions. Leaf litter from three abundant tree species and roots of different diameters from sites at 1,000, 2,000, and 3,000 m were placed in litterbags and incubated for 6, 12, 24, 36, and 48 months. Environmental conditions at the three altitudes and the sampling time were the main factors driving litter decomposition, while origin, and therefore quality, of the litter was of minor importance. At 2,000 and 3,000 m, litter mass declined for 12 months, reaching a limit value of ~50% of the initial mass, and then did not decompose further for about 24 months. After 36 months, decomposition resumed at low rates, leaving on average 37.9% and 44.4% of the initial mass remaining after 48 months. In contrast, at 1,000 m decomposition continued for 48 months until only 10.9% of the initial litter mass remained. Changes in decomposition rates were paralleled by changes in microorganisms, with microbial biomass decreasing after 24 months at 2,000 and 3,000 m while varying little at 1,000 m. The results show that, irrespective of litter origin (1,000, 2,000, 3,000 m) and type (leaves, roots), unfavorable microenvironmental conditions at high altitudes inhibit decomposition processes, resulting in the sequestration of carbon in thick organic layers.
Stefanuto, Pierre-Hugues; Perrault, Katelynn A; Stadler, Sonja; Pesesse, Romain; LeBlanc, Helene N; Forbes, Shari L; Focant, Jean-François
2015-06-01
In forensic thanato-chemistry, the understanding of the process of soft tissue decomposition is still limited. A better understanding of the decomposition process and the characterization of the associated volatile organic compounds (VOC) can help to improve the training of victim recovery (VR) canines, which are used to search for trapped victims in natural disasters or to locate corpses during criminal investigations. The complexity of matrices and the dynamic nature of this process require the use of comprehensive analytical methods for investigation. Moreover, the variability of the environment and between individuals creates additional difficulties in terms of normalization. The resolution of the complex mixture of VOCs emitted by a decaying corpse can be improved using comprehensive two-dimensional gas chromatography (GC × GC), compared to classical single-dimensional gas chromatography (1DGC). This study combines the analytical advantages of GC × GC coupled to time-of-flight mass spectrometry (TOFMS) with the data handling robustness of supervised multivariate statistics to investigate the VOC profile of human remains during early stages of decomposition. Various supervised multivariate approaches are compared to interpret the large data set. Moreover, early decomposition stages of pig carcasses (typically used as human surrogates in field studies) are also monitored to obtain a direct comparison of the two VOC profiles and estimate the robustness of this human decomposition analog model. In this research, we demonstrate that pig and human decomposition processes can be described by the same trends for the major compounds produced during the early stages of soft tissue decomposition.
Seasonal necrophagous insect community assembly during vertebrate carrion decomposition.
Benbow, M E; Lewis, A J; Tomberlin, J K; Pechal, J L
2013-03-01
Necrophagous invertebrates have been documented to be a predominant driver of vertebrate carrion decomposition; however, very little is understood about the assembly of these communities both within and among seasons. The objective of this study was to evaluate the seasonal differences in insect taxa composition, richness, and diversity on carrion over decomposition, with the intention that such data will be useful for refining error estimates in forensic entomology. Sus scrofa (L.) carcasses (n = 3-6, depending on season) were placed in a forested habitat near Xenia, OH, during spring, summer, autumn, and winter. Taxon richness varied substantially among seasons but was generally lower (1-2 taxa) during early decomposition and increased (3-8 taxa) through intermediate stages of decomposition. Autumn and winter showed the highest richness during late decomposition. Overall, taxon richness was higher during active decay for all seasons. While invertebrate community composition was generally consistent among seasons, the relative abundance of five taxa significantly differed across seasons, demonstrating different source communities for colonization depending on the time of year. There were significantly distinct necrophagous insect communities for each stage of decomposition, and between summer and autumn and summer and winter, but the communities were similar between autumn and winter. Calliphoridae represented significant indicator taxa for summer and autumn but were replaced by Coleoptera during winter. Here we demonstrated substantial variability in necrophagous communities and their assembly on carrion over decomposition and among seasons. Recognizing this variation has important consequences for forensic entomology and future efforts to provide error rates for estimates of the postmortem interval using arthropod succession data as evidence during criminal investigations.
NASA Astrophysics Data System (ADS)
Martínez, S.; Barreiro, J.; Cuesta, E.; Álvarez, B. J.; González, D.
2012-04-01
This paper is focused on the task of eliciting and structuring knowledge related to the selection of inspection resources. The final goal is to obtain an informal model of knowledge oriented to inspection planning on coordinate measuring machines. In the first tasks, where knowledge is captured, it is necessary to use tools that ease the analysis and structuring of knowledge, so that selection rules can be easily stated to configure the inspection resources. In order to store the knowledge, a so-called Onto-Process ontology has been developed. This ontology may be applicable to diverse processes in manufacturing engineering. This paper describes the decomposition of the ontology in terms of general units of knowledge and other units more specific to the selection of sensor assemblies in inspection planning with touch sensors.
Hybrid Nested Partitions and Math Programming Framework for Large-scale Combinatorial Optimization
2010-03-31
optimization problems: 1) exact algorithms and 2) metaheuristic algorithms. This project will integrate concepts from these two technologies to develop...optimal solutions within an acceptable amount of computation time, and 2) metaheuristic algorithms such as genetic algorithms, tabu search, and the...integer programming decomposition approaches, such as Dantzig-Wolfe decomposition and Lagrangian relaxation, and metaheuristics such as the Nested
Study on the decomposition of trace benzene over V2O5-WO3/TiO2-based catalysts in simulated flue gas
Commercial and laboratory-prepared V2O5–WO3/TiO2-based catalysts with different compositions were tested for catalytic decomposition of chlorobenzene (ClBz) in simulated flue gas. Resonance enhanced multiphoton ionization-time of flight mass spectrometry (REMPI-TOFMS) was employe...
Defect inspection using a time-domain mode decomposition technique
NASA Astrophysics Data System (ADS)
Zhu, Jinlong; Goddard, Lynford L.
2018-03-01
In this paper, we propose a technique called time-varying frequency scanning (TVFS) to meet the challenges of killer defect inspection. The proposed technique enables dynamic monitoring of defects by checking for hopping in the instantaneous frequency data, and classification of defect types by comparing the differences in frequencies. The TVFS technique utilizes the bidimensional empirical mode decomposition (BEMD) method to separate the defect information from the sea of system errors. This significantly improves the signal-to-noise ratio (SNR) and, moreover, potentially enables reference-free defect inspection.
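Since the defect cue here is a hop in instantaneous frequency, a hedged one-dimensional analogue may help: the paper operates on images via BEMD, while this sketch applies the same idea to a single extracted mode using the Hilbert transform, with an invented signal and threshold:

```python
# Sketch: instantaneous frequency of a mode via the Hilbert transform, then a
# crude detector flagging abrupt frequency hops.
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
mode = np.sin(2 * np.pi * np.where(t < 0.5, 50.0, 80.0) * t)  # hop at 0.5 s
phase = np.unwrap(np.angle(hilbert(mode)))
inst_freq = np.diff(phase) * fs / (2 * np.pi)                 # Hz per sample
hops = np.where(np.abs(np.diff(inst_freq)) > 5.0)[0]          # flagged samples
```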
Jaber, Abobaker M; Ismail, Mohd Tahir; Altaher, Alsaidi M
2014-01-01
This paper mainly forecasts the daily closing price of stock markets. We propose a two-stage technique that combines the empirical mode decomposition (EMD) with nonparametric methods of local linear quantile (LLQ) regression. We use the proposed technique, EMD-LLQ, to forecast two stock index time series. Detailed experiments are implemented for the proposed method, in which the EMD-LLQ, EMD, and Holt-Winters methods are compared. The proposed EMD-LLQ model is determined to be superior to the EMD and Holt-Winters methods in predicting the stock closing prices.
Foster, Ken; Anwar, Nasim; Pogue, Rhea; Morré, Dorothy M.; Keenan, T. W.; Morré, D. James
2003-01-01
Seasonal decomposition analyses were applied to the statistical evaluation of an oscillating activity for a plasma membrane NADH oxidase activity with a temperature-compensated period of 24 min. The decomposition fits were used to validate the cyclic oscillatory pattern. Accuracy was evaluated using three measured values: the mean average percentage error (MAPE), a measure of the periodic oscillation; the mean average deviation (MAD), a measure of the absolute average deviations from the fitted values; and the mean standard deviation (MSD), a measure of the standard deviation from the fitted values; together with R-squared and the Henriksson-Merton p value. Decomposition was carried out by fitting a trend line to the data, then detrending the data, if necessary, by subtracting the trend component. The data, with or without detrending, were then smoothed by subtracting a centered moving average of length equal to the period determined by Fourier analysis. Finally, the time series were decomposed into cyclic and error components. The findings not only validate the periodic nature of the major oscillations but suggest, as well, that the minor intervening fluctuations also recur within each period with a reproducible pattern of recurrence. PMID:19330112
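The procedure described (trend fit, detrending, centered moving-average smoothing at the Fourier-determined period, and MAPE/MAD/MSD scoring) can be sketched on synthetic data; all constants below are illustrative, not the study's measurements:

```python
# Sketch: classical seasonal decomposition with a centered moving average,
# scored by the three accuracy measures named above.
import numpy as np

rng = np.random.default_rng(0)
period = 24                                   # minutes, per the abstract
t = np.arange(240.0)
y = 5.0 + 0.01 * t + np.sin(2 * np.pi * t / period) + 0.1 * rng.standard_normal(t.size)

kernel = np.ones(period) / period
trend = np.convolve(y, kernel, mode="same")   # centered moving average (edge effects ignored)
detr = y - trend
seasonal = np.array([detr[p::period].mean() for p in range(period)])
cyclic = np.tile(seasonal, t.size // period + 1)[: t.size]
resid = detr - cyclic                         # error component
mape = 100.0 * np.mean(np.abs(resid / y))
mad = np.mean(np.abs(resid))
msd = np.mean(resid ** 2)
```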
NASA Astrophysics Data System (ADS)
Debnath, M.; Santoni, C.; Leonardi, S.; Iungo, G. V.
2017-03-01
The dynamics of the velocity field resulting from the interaction between the atmospheric boundary layer and a wind turbine array can significantly affect the performance of a wind power plant and the durability of wind turbines. In this work, dynamics in wind turbine wakes and instabilities of helicoidal tip vortices are detected and characterized through modal decomposition techniques. The dataset under examination consists of snapshots of the velocity field obtained from large-eddy simulations (LES) of an isolated wind turbine, for which the aerodynamic forcing exerted by the turbine blades on the atmospheric boundary layer is mimicked through the actuator line model. Particular attention is paid to the interaction between the downstream evolution of the helicoidal tip vortices and the alternate vortex shedding from the turbine tower. The LES dataset is interrogated through different modal decomposition techniques, such as proper orthogonal decomposition and dynamic mode decomposition. The dominant wake dynamics are selected for the formulation of a reduced order model, which consists of a linear time-marching algorithm in which the temporal evolution of the flow dynamics is obtained from the previous temporal realization multiplied by a time-invariant operator. This article is part of the themed issue 'Wind energy in complex terrains'.
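Of the two techniques named, dynamic mode decomposition is compact enough to sketch. The snapshot matrix, rank, and random data below are assumptions; in the study the columns would be flattened LES velocity fields:

```python
# Sketch: exact DMD from a snapshot matrix whose columns are flattened fields.
import numpy as np

def dmd(X, r=10):
    """Fit X[:, 1:] ~ A @ X[:, :-1]; return eigenvalues and spatial modes."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ X2 @ Vh.conj().T / s        # low-rank operator
    eigvals, W = np.linalg.eig(Atilde)                # temporal behavior per mode
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W   # exact DMD modes
    return eigvals, modes

snapshots = np.random.default_rng(0).random((5000, 60))  # stand-in snapshot data
eigvals, modes = dmd(snapshots, r=8)
```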
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pierce, Dean T.; Coughlin, D. R.; Williamson, Don L.
Here, the influence of partitioning temperature on microstructural evolution during quenching and partitioning was investigated in a 0.38C-1.54Mn-1.48Si wt.% steel using Mössbauer spectroscopy and transmission electron microscopy. η-carbide formation occurs in the martensite during the quenching, holding, and partitioning steps. More effective carbon partitioning from martensite to austenite was observed at 450°C than at 400°C, resulting in lower martensite carbon contents, less carbide formation, and greater retained austenite amounts for short partitioning times. Conversely, greater austenite decomposition occurs at 450°C for longer partitioning times. Lastly, cementite forms during austenite decomposition and in the martensite for longer partitioning times at 450°C.
Photocatalytic activity of silicon-based nanoflakes for the decomposition of nitrogen monoxide.
Itahara, Hiroshi; Wu, Xiaoyong; Imagawa, Haruo; Yin, Shu; Kojima, Kazunobu; Chichibu, Shigefusa F; Sato, Tsugio
2017-07-04
The photocatalytic decomposition of nitrogen monoxide (NO) was achieved for the first time using Si-based nanomaterials. Nanocomposite powders composed of Si nanoflakes and metallic particles (Ni and Ni3Si) were synthesized using a simple one-pot reaction of layered CaSi2 and NiCl2. The synthesized nanocomposites have a wide optical absorption band from the visible to the ultraviolet. Under the assumption of a direct transition, the photoabsorption behavior is well described and an absorption edge of ca. 1.8 eV is indicated. Conventional Si and SiO powders with indirect absorption edges of 1.1 and 1.4 eV, respectively, exhibit considerably low photocatalytic activities for NO decomposition. In contrast, the synthesized nanocomposites exhibited photocatalytic activities under irradiation with light at wavelengths >290 nm (<4.28 eV). The photocatalytic activities of the nanocomposites were confirmed to be constant and did not degrade with light irradiation time.
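The direct-transition analysis implied here is the familiar Tauc-type extrapolation. The sketch below uses a synthetic direct-gap absorption curve; the 1.8 eV value is the paper's, everything else is invented:

```python
# Sketch: estimate a direct absorption edge by extrapolating (alpha*E)^2 vs E.
import numpy as np

E = np.linspace(1.5, 3.0, 60)                         # photon energy (eV)
Eg_true = 1.8
alpha = np.sqrt(np.clip(E - Eg_true, 0.0, None)) / E  # toy direct-gap absorption
y = (alpha * E) ** 2                                  # Tauc quantity, direct case
lin = (E > 1.9) & (E < 2.4)                           # assumed linear window
m, b = np.polyfit(E[lin], y[lin], 1)
Eg_est = -b / m                                       # x-intercept ~ 1.8 eV
```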
Schindler, Severin; Vollnhals, Florian; Halbig, Christian E; Marbach, Hubertus; Steinrück, Hans-Peter; Papp, Christian; Eigler, Siegfried
2017-01-25
Controlled patterning of graphene is an important task towards device fabrication and thus is the focus of current research activities. Graphene oxide (GO) is a solution-processible precursor of graphene. It can be patterned by thermal processing. However, thermal processing of GO leads to decomposition and CO2 formation. Alternatively, focused electron beam induced processing (FEBIP) techniques can be used to pattern graphene with high spatial resolution. Based on this approach, we explore FEBIP of GO deposited on SiO2. Using oxo-functionalized graphene (oxo-G) with an in-plane lattice defect density of 1%, we are able to image the electron beam-induced effects by scanning Raman microscopy for the first time. Depending on electron energy (2-30 keV) and dose (50-800 mC m^-2), either reduction of GO or formation of permanent lattice defects occurs. This result reflects a step towards controlled FEBIP processing of oxo-G.
A mixed parallel strategy for the solution of coupled multi-scale problems at finite strains
NASA Astrophysics Data System (ADS)
Lopes, I. A. Rodrigues; Pires, F. M. Andrade; Reis, F. J. P.
2018-02-01
A mixed parallel strategy for the solution of homogenization-based multi-scale constitutive problems undergoing finite strains is proposed. The approach aims to reduce the computational time and memory requirements of non-linear coupled simulations that use finite element discretization at both scales (FE^2). In the first level of the algorithm, a non-conforming domain decomposition technique, based on the FETI method combined with a mortar discretization at the interface of macroscopic subdomains, is employed. A master-slave scheme, which distributes tasks by macroscopic element and adopts dynamic scheduling, is then used for each macroscopic subdomain composing the second level of the algorithm. This strategy allows the parallelization of FE^2 simulations in computers with either shared memory or distributed memory architectures. The proposed strategy preserves the quadratic rates of asymptotic convergence that characterize the Newton-Raphson scheme. Several examples are presented to demonstrate the robustness and efficiency of the proposed parallel strategy.
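The second level of the algorithm, a master distributing macroscopic elements to workers with dynamic scheduling, can be sketched with Python's multiprocessing; the placeholder micro-solve and all names are assumptions standing in for the per-element RVE problem:

```python
# Sketch: master-slave distribution of per-element micro-scale solves with
# dynamic scheduling (one task handed out at a time as workers free up).
from multiprocessing import Pool

def solve_rve(element_id):
    """Placeholder for the microscopic FE solve attached to one macro element."""
    return element_id, sum(i * i for i in range(10000))  # stand-in workload

if __name__ == "__main__":
    elements = range(200)                     # elements of one macro subdomain
    with Pool(processes=4) as pool:
        # chunksize=1 emulates dynamic scheduling: slow tasks don't block others
        results = dict(pool.imap_unordered(solve_rve, elements, chunksize=1))
```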
Prioritization of Disease Susceptibility Genes Using LSM/SVD.
Gong, Lejun; Yang, Ronggen; Yan, Qin; Sun, Xiao
2013-12-01
Understanding the role of genetics in diseases is one of the most important tasks of the postgenome era. It is generally too expensive and time consuming to perform experimental validation for all candidate genes related to a disease, so computational methods play an important role in prioritizing these candidates. Herein, we propose an approach to prioritize disease genes using latent semantic mapping based on singular value decomposition. Our hypothesis is that functionally similar genes are likely to cause similar diseases; measuring the functional similarity between known disease susceptibility genes and unknown genes thus allows new susceptibility genes to be predicted. Taking autism as an example, analysis of the top ten prioritized genes suggests that they may be autism susceptibility genes, indicating that our approach can discover new disease susceptibility genes as well as latent disease-gene relations. The prioritized results can also serve as computational evidence supporting interpretation and experimental work by disease researchers.
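A minimal sketch of the latent-semantic idea follows: factor a gene-by-feature matrix with a truncated SVD and rank candidates by cosine similarity to known disease genes in the latent space. The matrix, dimensionality, and gene indices are invented, not the study's data:

```python
# Sketch: LSM/SVD-style gene prioritization by latent-space similarity.
import numpy as np

rng = np.random.default_rng(0)
M = rng.random((100, 40))                     # genes x functional annotations
U, s, Vh = np.linalg.svd(M, full_matrices=False)
k = 10
G = U[:, :k] * s[:k]                          # gene coordinates in latent space

known = [0, 3, 7]                             # indices of known disease genes
profile = G[known].mean(axis=0)               # centroid of known-gene vectors
cos = G @ profile / (np.linalg.norm(G, axis=1) * np.linalg.norm(profile))
ranking = np.argsort(-cos)                    # most similar candidates first
```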
West, Robert; Braver, Todd
2009-01-01
Current theories are divided as to whether prospective memory (PM) involves primarily sustained processes such as strategic monitoring, or transient processes such as the retrieval of intentions from memory when a relevant cue is encountered. The current study examined the neural correlates of PM using a functional magnetic resonance imaging design that allows for the decomposition of brain activity into sustained and transient components. Performance of the PM task was primarily associated with sustained responses in a network including anterior prefrontal cortex (lateral Brodmann area 10), and these responses were dissociable from sustained responses associated with active maintenance in working memory. Additionally, the sustained responses in anterior prefrontal cortex correlated with faster response times for prospective responses. Prospective cues also elicited selective transient activity in a region of interest along the right middle temporal gyrus. The results support the conclusion that both sustained and transient processes contribute to efficient PM and provide novel constraints on the functional role of anterior PFC in higher-order cognition. PMID:18854581
Wimmer, Klaus; Compte, Albert; Roxin, Alex; Peixoto, Diogo; Renart, Alfonso; de la Rocha, Jaime
2015-01-01
Neuronal variability in sensory cortex predicts perceptual decisions. This relationship, termed choice probability (CP), can arise from sensory variability biasing behaviour and from top-down signals reflecting behaviour. To investigate the interaction of these mechanisms during the decision-making process, we use a hierarchical network model composed of reciprocally connected sensory and integration circuits. Consistent with monkey behaviour in a fixed-duration motion discrimination task, the model integrates sensory evidence transiently, giving rise to a decaying bottom-up CP component. However, the dynamics of the hierarchical loop recruits a concurrently rising top-down component, resulting in sustained CP. We compute the CP time-course of neurons in the middle temporal area (MT) and find an early transient component and a separate late contribution reflecting decision build-up. The stability of individual CPs and the dynamics of noise correlations further support this decomposition. Our model provides a unified understanding of the circuit dynamics linking neural and behavioural variability. PMID:25649611
Linked independent component analysis for multimodal data fusion.
Groves, Adrian R; Beckmann, Christian F; Smith, Steve M; Woolrich, Mark W
2011-02-01
In recent years, neuroimaging studies have increasingly been acquiring multiple modalities of data and searching for task- or disease-related changes in each modality separately. A major challenge in analysis is to find systematic approaches for fusing these differing data types together to automatically find patterns of related changes across multiple modalities, when they exist. Independent Component Analysis (ICA) is a popular unsupervised learning method that can be used to find the modes of variation in neuroimaging data across a group of subjects. When multimodal data are acquired for the subjects, ICA is typically performed separately on each modality, leading to incompatible decompositions across modalities. Using a modular Bayesian framework, we develop a novel "Linked ICA" model for simultaneously modelling and discovering common features across multiple modalities, which can potentially have completely different units, signal- and contrast-to-noise ratios, voxel counts, spatial smoothnesses and intensity distributions. Furthermore, this general model can be configured to allow tensor ICA or spatially-concatenated ICA decompositions, or a combination of both at the same time. Linked ICA automatically determines the optimal weighting of each modality, and also can detect single-modality structured components when present. This is a fully probabilistic approach, implemented using Variational Bayes. We evaluate the method on simulated multimodal data sets, as well as on a real data set of Alzheimer's patients and age-matched controls that combines two very different types of structural MRI data: morphological data (grey matter density) and diffusion data (fractional anisotropy, mean diffusivity, and tensor mode).
High-temperature catalyst for catalytic combustion and decomposition
NASA Technical Reports Server (NTRS)
Mays, Jeffrey A. (Inventor); Lohner, Kevin A. (Inventor); Sevener, Kathleen M. (Inventor); Jensen, Jeff J. (Inventor)
2005-01-01
A robust, high temperature mixed metal oxide catalyst for propellant decomposition, including high concentration hydrogen peroxide, and catalytic combustion, including methane-air mixtures. The uses include target, space, and on-orbit propulsion systems and low-emission terrestrial power and gas generation. The catalyst system requires no special preheat apparatus or special sequencing to meet start-up requirements, enabling a fast overall response time. Start-up transients of less than 1 second have been demonstrated with catalyst bed and propellant temperatures as low as 50 degrees Fahrenheit. The catalyst system has consistently demonstrated high decomposition efficiency, extremely low decomposition roughness, and long operating life on multiple test articles.
On the decomposition of synchronous state machines using sequence invariant state machines
NASA Technical Reports Server (NTRS)
Hebbalalu, K.; Whitaker, S.; Cameron, K.
1992-01-01
This paper presents several techniques for the decomposition of synchronous state machines of medium to large size into smaller component machines. The methods are based on the nature of the transitions and sequences of states in the machine and on the number and variety of inputs to the machine. Decomposing a machine in this way, and using the Sequence Invariant State Machine (SISM) design technique to generate the component machines, greatly simplifies and accelerates the design and implementation processes. Furthermore, modifications to the original design become easier to make, leading to negligible re-design time.
Ogienko, Andrey G; Tkacz, Marek; Manakov, Andrey Yu; Lipkowski, Janusz
2007-11-08
Pressure-temperature (P-T) conditions of the decomposition reaction of the structure H high-pressure methane hydrate to the cubic structure I methane hydrate and fluid methane were studied with a piston-cylinder apparatus at room temperature. For the first time, the volume changes accompanying this reaction were determined. Using the Clausius-Clapeyron equation, the enthalpies of the decomposition reaction were then calculated.
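For reference, the Clausius-Clapeyron relation used here connects the slope of the measured decomposition line and the volume change to the reaction enthalpy:

\[
\frac{dP}{dT} = \frac{\Delta H}{T\,\Delta V}
\quad\Longrightarrow\quad
\Delta H = T\,\Delta V\,\frac{dP}{dT},
\]

so that the enthalpy follows directly once dP/dT and the volume change ΔV are known at a given temperature T.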
Behavior of decomposition of rifampicin in the presence of isoniazid in the pH range 1-3.
Sankar, R; Sharda, Nishi; Singh, Saranjit
2003-08-01
The extent of decomposition of rifampicin in the presence of isoniazid was determined in the pH range 1-3 at 37 degrees C over 50 min, the mean stomach residence time. With increase in pH, the degradation initially increased from pH 1 to 2 and then decreased, resulting in a bell-shaped pH-decomposition profile. This showed that rifampicin degraded in the presence of isoniazid to the greatest extent at pH 2, the maximum pH in the fasting condition, under which antituberculosis fixed-dose combination (FDC) products are administered. At this pH and in 50 min, rifampicin decomposed by approximately 34%, while the loss of isoniazid was 10%. The extent of decomposition of the two drugs was also determined in marketed formulations, and the values ranged between 13-35% and 4-11%, respectively. The extents of decomposition at stomach residence times of 15 min and 3 h were 11.94% and 62.57%, respectively, for rifampicin and 4.78% and 11.12%, respectively, for isoniazid. The results show that a substantial loss of rifampicin and isoniazid can occur as a result of the interaction between them under fasting pH conditions. This emphasizes that antituberculosis FDC formulations, which contain both drugs, should be designed in a manner that prevents the interaction of the two drugs when the formulations are administered on an empty stomach.
Wibral, Michael; Priesemann, Viola; Kay, Jim W; Lizier, Joseph T; Phillips, William A
2017-03-01
In many neural systems anatomical motifs are present repeatedly, but despite their structural similarity they can serve very different tasks. A prime example of such a motif is the canonical microcircuit of six-layered neocortex, which is repeated across cortical areas and is involved in a number of different tasks (e.g. sensory, cognitive, or motor tasks). This observation has spawned interest in finding a common underlying principle, a 'goal function', of the information processing implemented in this structure. By definition such a goal function, if universal, cannot be cast in processing-domain specific language (e.g. 'edge filtering', 'working memory'). Thus, to formulate such a principle, we have to use a domain-independent framework. Information theory offers such a framework. However, while the classical framework of information theory focuses on the relation between one input and one output (Shannon's mutual information), we argue that neural information processing crucially depends on the combination of multiple inputs to create the output of a processor. To account for this, we use a very recent extension of Shannon information theory, called partial information decomposition (PID). PID makes it possible to quantify the information that several inputs provide individually (unique information), redundantly (shared information) or only jointly (synergistic information) about the output. First, we review the framework of PID. Then we apply it to reevaluate and analyze several earlier proposals of information theoretic neural goal functions (predictive coding, infomax and coherent infomax, efficient coding). We find that PID allows these goal functions to be compared in a common framework, and also provides a versatile approach to designing new goal functions from first principles. Building on this, we design and analyze a novel goal function, called 'coding with synergy', which combines external input and prior knowledge in a synergistic manner. We suggest that this novel goal function may be highly useful in neural information processing.
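As a concrete illustration of PID, the sketch below computes the Williams-Beer I_min decomposition (the original PID measure; several alternative redundancy measures have since been proposed) for a discrete joint distribution p(x1, x2, y), and checks on the XOR example that all the information is synergistic.

```python
import numpy as np

def mutual_info(pxy):
    """I(X;Y) in bits from a joint distribution table p(x, y)."""
    px = pxy.sum(1, keepdims=True)
    py = pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def pid_imin(p):
    """Williams-Beer PID of p(x1, x2, y) with axes (x1, x2, y)."""
    p = p / p.sum()
    py = p.sum((0, 1))
    p1y, p2y = p.sum(1), p.sum(0)        # p(x1, y) and p(x2, y)
    def spec(pxy):                       # specific information per y
        px = pxy.sum(1)
        out = np.zeros(len(py))
        for yi in range(len(py)):
            for xi in range(len(px)):
                if pxy[xi, yi] > 0:
                    out[yi] += (pxy[xi, yi] / py[yi]) * \
                        np.log2((pxy[xi, yi] / px[xi]) / py[yi])
        return out
    rdn = float((py * np.minimum(spec(p1y), spec(p2y))).sum())
    i1, i2 = mutual_info(p1y), mutual_info(p2y)
    i12 = mutual_info(p.reshape(-1, p.shape[2]))   # I(Y; X1, X2)
    return {"redundant": rdn, "unique1": i1 - rdn, "unique2": i2 - rdn,
            "synergistic": i12 - i1 - i2 + rdn}

# XOR example: all information about y = x1 XOR x2 is synergistic.
p = np.zeros((2, 2, 2))
for a in (0, 1):
    for b in (0, 1):
        p[a, b, a ^ b] = 0.25
print(pid_imin(p))  # ~1 bit synergy, 0 elsewhere
```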
Nonlinear mode decomposition: A noise-robust, adaptive decomposition method
NASA Astrophysics Data System (ADS)
Iatsenko, Dmytro; McClintock, Peter V. E.; Stefanovska, Aneta
2015-09-01
The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool, nonlinear mode decomposition (NMD), which decomposes a given signal into a set of physically meaningful oscillations of any waveform, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques (which, together with the adaptive choice of their parameters, make it extremely noise robust) and surrogate data tests, used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary MATLAB codes for running NMD are freely available for download.
Efficient material decomposition method for dual-energy X-ray cargo inspection system
NASA Astrophysics Data System (ADS)
Lee, Donghyeon; Lee, Jiseoc; Min, Jonghwan; Lee, Byungcheol; Lee, Byeongno; Oh, Kyungmin; Kim, Jaehyun; Cho, Seungryong
2018-03-01
Dual-energy X-ray inspection systems are widely used today because they provide both X-ray attenuation contrast and material information for the imaged object. Material decomposition capability allows higher detection sensitivity for potential targets, for example purposely loaded impurities in agricultural product inspections and threats in security scans. Dual-energy X-ray transmission data can be transformed into two basis material thickness data, and the transformation accuracy relies heavily on the calibration of the material decomposition process. The calibration process in general can be laborious and time consuming. Moreover, a conventional calibration method is often challenged by the nonuniform spectral characteristics of the X-ray beam across the entire field-of-view (FOV). In this work, we developed an efficient material decomposition calibration process for a linear accelerator (LINAC) based high-energy X-ray cargo inspection system. We also proposed a multi-spot calibration method to improve the decomposition performance throughout the entire FOV. Experimental validation of the proposed method has been demonstrated by use of a cargo inspection system that supports 6 MV and 9 MV dual-energy imaging.
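One common way to realize such a calibration is to fit a low-order polynomial inverse map from the two log-attenuation measurements to the two basis-material thicknesses using step-wedge data; the multi-spot idea then amounts to fitting a separate coefficient set per detector region. The sketch below shows a single-spot fit on entirely synthetic calibration data with an assumed quadratic model order (not the paper's procedure).

```python
# Fit an inverse map (aH, aL) -> (t1, t2) from calibration measurements.
import numpy as np

def design(aH, aL):
    """Quadratic polynomial features of the two log-attenuations."""
    return np.stack([np.ones_like(aH), aH, aL, aH*aL, aH**2, aL**2], axis=-1)

def calibrate(aH, aL, t1, t2):
    """Least-squares fit of the thickness maps t1(aH,aL) and t2(aH,aL)."""
    A = design(aH, aL)
    c1, *_ = np.linalg.lstsq(A, t1, rcond=None)
    c2, *_ = np.linalg.lstsq(A, t2, rcond=None)
    return c1, c2

def decompose(aH, aL, c1, c2):
    A = design(aH, aL)
    return A @ c1, A @ c2

# Toy calibration: simulate step wedges of two basis materials.
rng = np.random.default_rng(1)
t1 = rng.uniform(0, 5, 200); t2 = rng.uniform(0, 5, 200)
aH = 0.20*t1 + 0.30*t2 + 0.01*t1*t2 + 0.001*rng.standard_normal(200)
aL = 0.35*t1 + 0.40*t2 + 0.02*t1*t2 + 0.001*rng.standard_normal(200)
c1, c2 = calibrate(aH, aL, t1, t2)
est1, est2 = decompose(aH, aL, c1, c2)
print(np.abs(est1 - t1).max(), np.abs(est2 - t2).max())
```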
Evaluation of the Dornier GmbH interactive grid generation system
NASA Technical Reports Server (NTRS)
Brown, Robert L.
1989-01-01
An interactive grid generation program, INGRID, is investigated and evaluated. A description of the task and work performed, a description and evaluation of INGRID, and a discussion of the possibilities for bringing INGRID into the NASA and Numerical Aerodynamic Simulator (NAS) computing environments are included. INGRID was found to be a viable approach to grid generation that could be converted to work in the NAS environment, but it does not solve the fundamentally hard problems associated with grid generation, specifically domain decomposition.
Wise, H; Golden, D M; Wood, B J
1975-07-25
[OCR-damaged report record; only fragments are recoverable. The study concerns the catalytic decomposition of hydrazine: the catalyst was placed in a microreactor apparatus under helium to determine which species are retained by the catalyst after exposure to reactants, and the reaction intermediates were found to be adsorbed on the surface, following a general scheme of the form N2H4(gas) -> X + Y, with X subsequently decomposing to products.]
NASA Astrophysics Data System (ADS)
Hwang, James Ho-Jin; Duran, Adam
2016-08-01
In most cases, pyrotechnic shock design and test requirements for space systems are provided as a Shock Response Spectrum (SRS) without the input time history. Since the SRS does not describe the input or the environment, a decomposition method is used to obtain the source time history. The main objective of this paper is to develop a decomposition method producing input time histories that can satisfy the SRS requirement, based on pyrotechnic shock test data measured from a mechanical impact test apparatus. At the heart of this decomposition method is the statistical representation of the pyrotechnic shock test data measured from the MIT Lincoln Laboratory (LL) designed Universal Pyrotechnic Shock Simulator (UPSS). Each pyrotechnic shock test data set measured at the interface of a test unit has been analyzed to produce the temporal peak acceleration, Root Mean Square (RMS) acceleration, and the phase lag at each band center frequency. The maximum SRS of each filtered time history has been calculated to produce a relationship between the input and the response. Two new definitions are proposed as a result. The Peak Ratio (PR) is defined as the ratio between the maximum SRS and the temporal peak acceleration at each band center frequency. The ratio between the maximum SRS and the RMS acceleration is defined as the Energy Ratio (ER) at each band center frequency. The phase lag is estimated based on the time delay between the temporal peak acceleration at each band center frequency and the peak acceleration at the lowest band center frequency. This stochastic process has been applied to more than one hundred pyrotechnic shock test data sets to produce probabilistic definitions of the PR, ER, and phase lag. The SRS is decomposed at each band center frequency using damped sinusoids, with the PR and the decays obtained by matching the ER of the damped sinusoids to the ER of the test data. The final step in this stochastic SRS decomposition process is the Monte Carlo (MC) simulation. The MC simulation identifies combinations of the PR and decays that can meet the SRS requirement at each band center frequency. Decomposed input time histories are produced by summing the converged damped sinusoids with the MC simulation of the phase lag distribution.
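The core mechanics can be illustrated as follows (this is not the authors' PR/ER machinery): synthesize a candidate input as a sum of damped sinusoids at the band center frequencies, then compute its maximax SRS from the absolute acceleration of a base-driven single-degree-of-freedom system at the conventional Q = 10 (zeta = 0.05) damping. All amplitudes, decays, and phases below are placeholders; in the paper they would be drawn from the PR, ER, and phase-lag distributions and screened by the Monte Carlo step.

```python
import numpy as np
from scipy.signal import lsim

def srs(accel, t, fn_grid, zeta=0.05):
    """Maximax SRS: peak |absolute acceleration| of an SDOF per fn."""
    out = []
    for fn in fn_grid:
        wn = 2 * np.pi * fn
        # Absolute-acceleration transfer function of a base-driven SDOF.
        sys = ([2 * zeta * wn, wn**2], [1.0, 2 * zeta * wn, wn**2])
        _, y, _ = lsim(sys, accel, t)
        out.append(np.max(np.abs(y)))
    return np.array(out)

fs = 100_000.0
t = np.arange(0, 0.05, 1 / fs)
centers = np.array([500.0, 1000.0, 2000.0, 4000.0])  # band center freqs (Hz)
amps = np.array([50.0, 120.0, 300.0, 600.0])         # peak accelerations (g)
decays = np.array([40.0, 60.0, 90.0, 140.0])         # decay rates (1/s)
phases = np.array([0.0, 0.3, 0.8, 1.4])              # phase lags (rad)
x = sum(a * np.exp(-d * t) * np.sin(2 * np.pi * f * t + p)
        for a, d, f, p in zip(amps, decays, centers, phases))
print(srs(x, t, centers))  # compare against the SRS requirement
```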
NASA Technical Reports Server (NTRS)
Wade, T. O.
1984-01-01
Reduction techniques for traffic matrices are explored in some detail. These matrices arise in satellite switched time-division multiple access (SS/TDMA) techniques whereby switching of uplink and downlink beams is required to facilitate interconnectivity of beam zones. A traffic matrix is given to represent that traffic to be transmitted from n uplink beams to n downlink beams within a TDMA frame typically of 1 ms duration. The frame is divided into segments of time and during each segment a portion of the traffic is represented by a switching mode. This time slot assignment is characterized by a mode matrix in which there is not more than a single non-zero entry on each line (row or column) of the matrix. Investigation is confined to decomposition of an n x n traffic matrix by mode matrices with a requirement that the decomposition be 100 percent efficient or, equivalently, that the line(s) in the original traffic matrix whose sum is maximal (called critical line(s)) remain maximal as mode matrices are subtracted throughout the decomposition process. A method of decomposition of an n x n traffic matrix by mode matrices results in a number of steps that is bounded by n^2 - 2n + 2. It is shown that this upper bound exists for an n x n matrix wherein all the lines are maximal (called a quasi doubly stochastic (QDS) matrix) or for an n x n matrix that is completely arbitrary. That is, the fact that no method can exist with a lower upper bound is shown for both QDS and arbitrary matrices, in an elementary and straightforward manner.
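A simple greedy variant of such a decomposition can be sketched with a maximum-weight assignment at each step. Note that, unlike the method above, this sketch does not enforce the 100 percent efficiency (critical line) condition and is therefore not guaranteed to stay within the n^2 - 2n + 2 bound; it only shows the mechanics of peeling off weighted mode matrices.

```python
# Greedy decomposition of a traffic matrix into weighted mode matrices
# (at most one non-zero per row and column).
import numpy as np
from scipy.optimize import linear_sum_assignment

def decompose(T, max_steps=1000):
    T = T.astype(float).copy()
    steps = []
    for _ in range(max_steps):
        if T.sum() == 0:
            break
        rows, cols = linear_sum_assignment(T, maximize=True)
        keep = T[rows, cols] > 0            # drop zero-weight pairings
        rows, cols = rows[keep], cols[keep]
        delta = T[rows, cols].min()         # duration of this switching mode
        mode = np.zeros_like(T)
        mode[rows, cols] = 1.0
        steps.append((delta, mode))
        T[rows, cols] -= delta
    return steps

T = np.array([[3, 1, 0],
              [0, 2, 2],
              [1, 1, 2]])
steps = decompose(T)
print(len(steps), "modes")  # the paper's bound for n=3 would be 5
assert np.allclose(sum(d * m for d, m in steps), T)  # exact reconstruction
```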
Catalytic Decomposition of Hydroxylammonium Nitrate Ionic Liquid: Enhancement of NO Formation.
Chambreau, Steven D; Popolan-Vaida, Denisia M; Vaghjiani, Ghanshyam L; Leone, Stephen R
2017-05-18
Hydroxylammonium nitrate (HAN) is a promising candidate to replace highly toxic hydrazine in monopropellant thruster space applications. The reactivity of HAN aerosols on heated copper and iridium targets was investigated using tunable vacuum ultraviolet photoionization time-of-flight aerosol mass spectrometry. The reaction products were identified by their mass-to-charge ratios and their ionization energies. Products include NH3, H2O, NO, hydroxylamine (HA), HNO3, and a small amount of NO2 at high temperature. No N2O was detected under these experimental conditions, despite the fact that N2O is one of the expected products according to the generally accepted thermal decomposition mechanism of HAN. Upon introduction of the iridium catalyst, a significant enhancement of the NO/HA ratio was observed. This observation indicates that the formation of NO via decomposition of HA is an important pathway in the catalytic decomposition of HAN.
Jia, Yu-Hui; Yang, Kai-Xiang; Chen, Shi-Lu; Huang, Mu-Hua
2018-01-11
Nitrogen-rich compounds such as tetrazoles are widely used as candidates in gas-generating agents. However, the differences between the two isomers of disubstituted tetrazoles have rarely been studied in detail, although this information is very important for designing advanced materials based on tetrazoles. In this article, pairs of 2,5- and 1,5-disubstituted tetrazoles were carefully designed and prepared for a study of their thermal decomposition behavior. Both the substitution pattern (2,5- versus 1,5-) and the substituents at the C-5 position were found to affect whether decomposition is endothermic or exothermic. To the best of our knowledge, this is the first time that the thermal decomposition properties of different tetrazoles have been shown to be tunable by substitution pattern and substituent groups, which could serve as a useful platform for designing advanced materials for temperature-dependent rockets. An aza-Claisen rearrangement was proposed to explain the endothermic decomposition behavior.
NASA Astrophysics Data System (ADS)
Ke, Xianhua; Jiang, Hao; Lv, Wen; Liu, Shiyuan
2016-03-01
Triple patterning (TP) lithography has become a feasible technology for manufacturing as feature sizes scale down to the sub-14/10 nm regime. In TP, a layout is decomposed into three masks, each followed by its own exposure and etch/freeze process. Previous works mostly focus on layout decomposition with minimal conflicts and stitches simultaneously. However, since any native conflict forces layout re-design/modification and re-running of the time-consuming decomposition, an effective method that can detect native conflicts (NCs) in a layout is desirable. In this paper, a bin-based library matching method is proposed for NC detection and layout decomposition. First, a layout is divided into bins and the corresponding conflict graph in each bin is constructed. Then, each conflict graph is matched against a prebuilt colored library, so that NCs can be located and highlighted quickly.
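The native-conflict condition itself is just 3-colorability of the conflict graph, which a small backtracking check makes concrete; the bin/library matching in the paper effectively replaces this per-graph computation with a fast lookup. The sketch below is illustrative only.

```python
# Check whether a conflict graph admits a 3-mask (3-color) assignment;
# graphs with no valid 3-coloring contain a native conflict.
def three_color(n, edges):
    """Return a 3-coloring of nodes 0..n-1 as a list, or None if a
    native conflict makes the graph uncolorable with three masks."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    colors = [None] * n
    def assign(u):
        if u == n:
            return True
        for c in range(3):
            if all(colors[v] != c for v in adj[u]):
                colors[u] = c
                if assign(u + 1):
                    return True
        colors[u] = None
        return False
    return colors if assign(0) else None

# K4 (four mutually conflicting features) is a classic native conflict.
print(three_color(4, [(0,1),(0,2),(0,3),(1,2),(1,3),(2,3)]))  # None
print(three_color(3, [(0,1),(1,2),(2,0)]))                    # valid coloring
```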
Teodoro, Douglas; Lovis, Christian
2013-01-01
Background: Antibiotic resistance is a major worldwide public health concern. In clinical settings, timely antibiotic resistance information is key for care providers as it allows appropriate targeted treatment or improved empirical treatment when the specific results of the patient are not yet available. Objective: To improve antibiotic resistance trend analysis algorithms by building a novel, fully data-driven forecasting method from the combination of trend extraction and machine learning models for enhanced biosurveillance systems. Methods: We investigate a robust model for extraction and forecasting of antibiotic resistance trends using a decade of microbiology data. Our method consists of breaking down the resistance time series into independent oscillatory components via the empirical mode decomposition technique. The resulting waveforms describing intrinsic resistance trends serve as the input for the forecasting algorithm. The algorithm applies the delay coordinate embedding theorem together with the k-nearest neighbor framework to project mappings from past events into the future dimension and estimate the resistance levels. Results: The algorithms that decompose the resistance time series and filter out high frequency components showed statistically significant performance improvements in comparison with a benchmark random walk model. We present further qualitative use-cases of antibiotic resistance trend extraction, where empirical mode decomposition was applied to highlight the specificities of the resistance trends. Conclusion: The decomposition of the raw signal was found not only to yield valuable insight into the resistance evolution, but also to produce novel models of resistance forecasters with boosted prediction performance, which could be utilized as a complementary method in the analysis of antibiotic resistance trends. PMID:23637796
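A compact sketch of the described pipeline (EMD trend extraction, delay-coordinate embedding, k-NN forecasting) follows. It assumes the PyEMD (EMD-signal) and scikit-learn packages; the embedding dimension, lag, neighbour count, and the choice to drop the two fastest IMFs are illustrative, not the published settings.

```python
import numpy as np
from PyEMD import EMD                       # assumed: EMD-signal package
from sklearn.neighbors import KNeighborsRegressor

def embed(x, dim=3, lag=1):
    """Delay-coordinate embedding: rows of lagged values plus next value."""
    n = len(x) - (dim - 1) * lag
    X = np.column_stack([x[i * lag: i * lag + n] for i in range(dim)])
    return X[:-1], x[(dim - 1) * lag + 1:]  # predictors, next values

rng = np.random.default_rng(2)
t = np.arange(520)
series = 0.02 * t + np.sin(2 * np.pi * t / 52) \
         + 0.3 * rng.standard_normal(len(t))

imfs = EMD().emd(series)                    # intrinsic mode functions
trend = imfs[2:].sum(axis=0)                # drop the two fastest IMFs

X, y = embed(trend[:500], dim=4, lag=2)
model = KNeighborsRegressor(n_neighbors=5).fit(X, y)
print(model.predict(X[-1:]))                # one-step-ahead estimate
```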
Error reduction in EMG signal decomposition
Kline, Joshua C.
2014-01-01
Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization. PMID:25210159
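The combination step can be illustrated with a toy consensus rule: pool the firing instances from several decomposition runs, group them within a tolerance window, and keep a firing only when a majority of runs agree. The window width and majority threshold below are illustrative; the published algorithm's actual weighting of estimates is more sophisticated.

```python
import numpy as np

def consensus_firings(estimates, tol=0.005, min_votes=None):
    """estimates: list of arrays of firing times (s) from separate runs."""
    if min_votes is None:
        min_votes = len(estimates) // 2 + 1   # simple majority
    events = np.sort(np.concatenate(estimates))
    out, group = [], [events[0]]
    for t in events[1:]:
        if t - group[-1] <= tol:
            group.append(t)                   # same firing instance
        else:
            if len(group) >= min_votes:
                out.append(np.mean(group))    # consensus firing instant
            group = [t]
    if len(group) >= min_votes:
        out.append(np.mean(group))
    return np.array(out)

runs = [np.array([0.100, 0.212, 0.305]),
        np.array([0.101, 0.214, 0.420]),      # 0.420 is a false detection
        np.array([0.099, 0.210, 0.304])]
print(consensus_firings(runs))                # ~[0.100, 0.212, 0.3045]
```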
Statistical Feature Extraction for Artifact Removal from Concurrent fMRI-EEG Recordings
Liu, Zhongming; de Zwart, Jacco A.; van Gelderen, Peter; Kuo, Li-Wei; Duyn, Jeff H.
2011-01-01
We propose a set of algorithms for sequentially removing artifacts related to MRI gradient switching and cardiac pulsations from electroencephalography (EEG) data recorded during functional magnetic resonance imaging (fMRI). Special emphases are directed upon the use of statistical metrics and methods for the extraction and selection of features that characterize gradient and pulse artifacts. To remove gradient artifacts, we use a channel-wise filtering based on singular value decomposition (SVD). To remove pulse artifacts, we first decompose data into temporally independent components and then select a compact cluster of components that possess sustained high mutual information with the electrocardiogram (ECG). After the removal of these components, the time courses of remaining components are filtered by SVD to remove the temporal patterns phase-locked to the cardiac markers derived from the ECG. The filtered component time courses are then inversely transformed into multi-channel EEG time series free of pulse artifacts. Evaluation based on a large set of simultaneous EEG-fMRI data obtained during a variety of behavioral tasks, sensory stimulations and resting conditions showed excellent data quality and robust performance attainable by the proposed methods. These algorithms have been implemented as a Matlab-based toolbox made freely available for public access and research use. PMID:22036675
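A stripped-down version of the channel-wise SVD filtering idea for gradient artifacts might look as follows: epoch the channel around each gradient trigger, zero the dominant singular components (which capture the trigger-locked artifact), and stitch the cleaned epochs back. The epoch length, trigger spacing, and number of removed components are illustrative assumptions, not the toolbox's parameters.

```python
import numpy as np

def svd_filter_channel(x, triggers, epoch_len, n_remove=3):
    """Remove the top singular components across gradient-locked epochs."""
    epochs = np.stack([x[t:t + epoch_len] for t in triggers])
    U, s, Vt = np.linalg.svd(epochs, full_matrices=False)
    s[:n_remove] = 0.0                        # zero the artifact subspace
    cleaned = (U * s) @ Vt
    y = x.copy()
    for row, t in zip(cleaned, triggers):
        y[t:t + epoch_len] = row
    return y

# Toy demo: 10 Hz "brain" signal plus trigger-locked gradient bursts.
fs = 5000
t = np.arange(fs * 2) / fs
triggers = np.arange(0, len(t) - 500, 613)    # gradient burst onsets
artifact = np.zeros_like(t)
for tr in triggers:
    artifact[tr:tr + 250] += 50 * np.sin(2 * np.pi * 600 * t[:250])
eeg = np.sin(2 * np.pi * 10 * t) + artifact
clean = svd_filter_channel(eeg, triggers, 500)
print(np.abs(clean - np.sin(2 * np.pi * 10 * t)).mean())  # residual error
```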