A test of the hierarchical model of litter decomposition.
Bradford, Mark A; Veen, G F Ciska; Bonis, Anne; Bradford, Ella M; Classen, Aimee T; Cornelissen, J Hans C; Crowther, Thomas W; De Long, Jonathan R; Freschet, Gregoire T; Kardol, Paul; Manrubia-Freixa, Marta; Maynard, Daniel S; Newman, Gregory S; Logtestijn, Richard S P; Viketoft, Maria; Wardle, David A; Wieder, William R; Wood, Stephen A; van der Putten, Wim H
2017-12-01
Our basic understanding of plant litter decomposition informs the assumptions underlying widely applied soil biogeochemical models, including those embedded in Earth system models. Confidence in projected carbon cycle-climate feedbacks therefore depends on accurate knowledge about the controls regulating the rate at which plant biomass is decomposed into products such as CO2. Here we test underlying assumptions of the dominant conceptual model of litter decomposition. The model posits that a primary control on the rate of decomposition at regional to global scales is climate (temperature and moisture), with the controlling effects of decomposers negligible at such broad spatial scales. Using a regional-scale litter decomposition experiment at six sites spanning from northern Sweden to southern France, and capturing both within- and among-site variation in putative controls, we find that, contrary to predictions from the hierarchical model, decomposer (microbial) biomass strongly regulates decomposition at regional scales. Furthermore, the size of the microbial biomass dictates the absolute change in decomposition rates with changing climate variables. Our findings suggest the need for revision of the hierarchical model, with decomposers acting as both local- and broad-scale controls on litter decomposition rates, necessitating their explicit consideration in global biogeochemical models.
NASREN: Standard reference model for telerobot control
NASA Technical Reports Server (NTRS)
Albus, J. S.; Lumia, R.; Mccain, H.
1987-01-01
A hierarchical architecture is described which supports space station telerobots in a variety of modes. The system is divided into three hierarchies: task decomposition, world model, and sensory processing. Goals at each level of the task decomposition hierarchy are divided both spatially and temporally into simpler commands for the next lower level. This decomposition is repeated until, at the lowest level, the drive signals to the robot actuators are generated. To accomplish its goals, task decomposition modules must often use information stored in the world model. The purpose of the sensory system is to update the world model as rapidly as possible to keep the model in registration with the physical world. The architecture of the entire control system hierarchy is described, along with how it can be applied to space telerobot applications.
A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields
Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto
2017-10-26
In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen-Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.
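For context on the scalability point above, the classical Karhunen-Loève route can be sketched in a few lines; the dense eigenvalue solve it requires is exactly the step that becomes infeasible at large scale. A minimal sketch in Python, with grid size, covariance kernel, and truncation order chosen as illustrative assumptions rather than values from the paper:

# Karhunen-Loeve sampling of a 1-D Gaussian random field (illustrative sketch).
import numpy as np

n = 200                                    # number of grid points (assumed)
x = np.linspace(0.0, 1.0, n)
corr_len, sigma2 = 0.1, 1.0                # assumed exponential-kernel parameters

# Dense covariance matrix C_ij = sigma^2 * exp(-|x_i - x_j| / L)
C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# The dense eigenvalue problem -- the computational bottleneck at large n
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]          # sort modes by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

m = 50                                     # truncation order (assumed)
xi = np.random.default_rng(0).standard_normal(m)
sample = eigvecs[:, :m] @ (np.sqrt(eigvals[:m]) * xi)   # one field realization
print(sample.shape)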
Planning paths through a spatial hierarchy - Eliminating stair-stepping effects
NASA Technical Reports Server (NTRS)
Slack, Marc G.
1989-01-01
Stair-stepping effects are a result of the loss of spatial continuity resulting from the decomposition of space into a grid. This paper presents a path planning algorithm which eliminates stair-stepping effects induced by the grid-based spatial representation. The algorithm exploits a hierarchical spatial model to efficiently plan paths for a mobile robot operating in dynamic domains. The spatial model and path planning algorithm map to a parallel machine, allowing the system to operate incrementally, thereby accounting for unexpected events in the operating space.
NASA Technical Reports Server (NTRS)
Bradshaw, G. A.
1995-01-01
Interest in the quantification of pattern in ecological systems has increased in recent years. This interest is motivated by the desire to construct valid models which extend across many scales. Spatial methods must quantify pattern, discriminate types of pattern, and relate hierarchical phenomena across scales. Wavelet analysis is introduced as a method to identify spatial structure in ecological transect data. The main advantage of the wavelet transform over other methods is its ability to preserve and display hierarchical information while allowing for pattern decomposition. Two applications of wavelet analysis are illustrated, as a means to: (1) quantify known spatial patterns in Douglas-fir forests at several scales, and (2) construct spatially-explicit hypotheses regarding pattern generating mechanisms. Application of the wavelet variance, derived from the wavelet transform, is developed for forest ecosystem analysis to obtain additional insight into spatially-explicit data. Specifically, the resolution capabilities of the wavelet variance are compared to the semi-variogram and Fourier power spectra for the description of spatial data using a set of one-dimensional stationary and non-stationary processes. The wavelet cross-covariance function is derived from the wavelet transform and introduced as an alternative method for the analysis of multivariate spatial data of understory vegetation and canopy in Douglas-fir forests of the western Cascades of Oregon.
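A rough illustration of the wavelet-variance idea on a one-dimensional transect, using PyWavelets rather than the authors' implementation; the synthetic signal and the db4 wavelet are assumptions chosen only to show how variance is attributed to dyadic scales:

# Wavelet variance by scale for a synthetic 1-D "transect" (illustration only).
import numpy as np
import pywt

rng = np.random.default_rng(0)
x = np.arange(1024)
# Coarse trend + mid-scale pattern + noise, standing in for transect data
transect = (np.sin(2 * np.pi * x / 256)
            + 0.5 * np.sin(2 * np.pi * x / 32)
            + 0.2 * rng.standard_normal(x.size))

coeffs = pywt.wavedec(transect, 'db4', level=6)     # [cA6, cD6, cD5, ..., cD1]
levels = range(len(coeffs) - 1, 0, -1)              # detail levels 6, 5, ..., 1
for j, d in zip(levels, coeffs[1:]):
    print(f"detail level {j} (dyadic scale ~{2**j} steps): variance = {np.var(d):.4f}")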
Hierarchical fractional-step approximations and parallel kinetic Monte Carlo algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arampatzis, Giorgos, E-mail: garab@math.uoc.gr; Katsoulakis, Markos A., E-mail: markos@math.umass.edu; Plechac, Petr, E-mail: plechac@math.udel.edu
2012-10-01
We present a mathematical framework for constructing and analyzing parallel algorithms for lattice kinetic Monte Carlo (KMC) simulations. The resulting algorithms have the capacity to simulate a wide range of spatio-temporal scales in spatially distributed, non-equilibrium physicochemical processes with complex chemistry and transport micro-mechanisms. Rather than focusing on constructing the stochastic trajectories exactly, our approach relies on approximating the evolution of observables, such as density, coverage, correlations and so on. More specifically, we develop a spatial domain decomposition of the Markov operator (generator) that describes the evolution of all observables according to the kinetic Monte Carlo algorithm. This domain decomposition corresponds to a decomposition of the Markov generator into a hierarchy of operators and can be tailored to specific hierarchical parallel architectures such as multi-core processors or clusters of Graphical Processing Units (GPUs). Based on this operator decomposition, we formulate parallel fractional-step kinetic Monte Carlo algorithms by employing the Trotter theorem and its randomized variants; these schemes (a) are partially asynchronous on each fractional-step time-window, and (b) are characterized by their communication schedule between processors. The proposed mathematical framework allows us to rigorously justify the numerical and statistical consistency of the proposed algorithms, showing the convergence of our approximating schemes to the original serial KMC. The approach also provides a systematic evaluation of different processor communicating schedules. We carry out a detailed benchmarking of the parallel KMC schemes using available exact solutions, for example, in Ising-type systems, and we demonstrate the capabilities of the method to simulate complex spatially distributed reactions at very large scales on GPUs. Finally, we discuss work load balancing between processors and propose a re-balancing scheme based on probabilistic mass transport methods.
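The fractional-step construction above rests on the Trotter product formula for a generator split into sub-domain pieces. A tiny numerical check of that formula is sketched below on generic matrices; this illustrates only the mathematical ingredient, not a KMC simulation, and the matrices and step counts are assumptions:

# Trotter product formula check: exp(t(L1+L2)) ~ (exp(t L1/n) exp(t L2/n))^n.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
L1, L2 = rng.standard_normal((5, 5)), rng.standard_normal((5, 5))
t = 1.0
exact = expm(t * (L1 + L2))

for n in (1, 10, 100):
    step = expm(t * L1 / n) @ expm(t * L2 / n)
    approx = np.linalg.matrix_power(step, n)
    err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
    print(f"n = {n:4d}: relative splitting error = {err:.2e}")   # decreases ~ 1/n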
Multidisciplinary optimization for engineering systems - Achievements and potential
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1989-01-01
The currently common sequential design process for engineering systems is likely to lead to suboptimal designs. Recently developed decomposition methods offer an alternative for coming closer to the optimum by breaking the large task of system optimization into smaller, concurrently executed, yet coupled, tasks identified with engineering disciplines or subsystems. The hierarchic and non-hierarchic decompositions are discussed and illustrated by examples. An organization of a design process centered on the non-hierarchic decomposition is proposed.
A unifying model of concurrent spatial and temporal modularity in muscle activity.
Delis, Ioannis; Panzeri, Stefano; Pozzo, Thierry; Berret, Bastien
2014-02-01
Modularity in the central nervous system (CNS), i.e., the brain capability to generate a wide repertoire of movements by combining a small number of building blocks ("modules"), is thought to underlie the control of movement. Numerous studies reported evidence for such a modular organization by identifying invariant muscle activation patterns across various tasks. However, previous studies relied on decompositions differing in both the nature and dimensionality of the identified modules. Here, we derive a single framework that encompasses all influential models of muscle activation modularity. We introduce a new model (named space-by-time decomposition) that factorizes muscle activations into concurrent spatial and temporal modules. To infer these modules, we develop an algorithm, referred to as sample-based nonnegative matrix trifactorization (sNM3F). We test the space-by-time decomposition on a comprehensive electromyographic dataset recorded during execution of arm pointing movements and show that it provides a low-dimensional yet accurate, highly flexible and task-relevant representation of muscle patterns. The extracted modules have a well characterized functional meaning and implement an efficient trade-off between replication of the original muscle patterns and task discriminability. Furthermore, they are compatible with the modules extracted from existing models, such as synchronous synergies and temporal primitives, and generalize time-varying synergies. Our results indicate the effectiveness of a simultaneous but separate condensation of spatial and temporal dimensions of muscle patterns. The space-by-time decomposition accommodates a unified view of the hierarchical mapping from task parameters to coordinated muscle activations, which could be employed as a reference framework for studying compositional motor control.
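A minimal sketch of the space-by-time generative structure described above, not an implementation of the sNM3F algorithm itself; module counts, trial sizes, and the random nonnegative factors are assumptions used only to show how a trial's time-by-muscle matrix is composed from temporal modules, spatial modules, and a small coefficient matrix:

# Space-by-time structure: M_trial ~ W_t @ A @ W_s (all factors nonnegative).
import numpy as np

T, n_muscles, P, N = 50, 12, 3, 2      # time samples, muscles, temporal/spatial modules (assumed)
rng = np.random.default_rng(1)

W_t = np.abs(rng.standard_normal((T, P)))          # temporal modules
W_s = np.abs(rng.standard_normal((N, n_muscles)))  # spatial modules
A = np.abs(rng.standard_normal((P, N)))            # trial-specific activation coefficients

M_trial = W_t @ A @ W_s                            # reconstructed muscle pattern, T x muscles
print(M_trial.shape)                               # (50, 12)
# Each trial is summarized by P*N coefficients instead of T*n_muscles samples.
print("per-trial parameters:", P * N, "instead of", T * n_muscles)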
ERIC Educational Resources Information Center
Qasem, Mousa; Foote, Rebecca
2010-01-01
This study tested the predictions of the revised hierarchical (RHM) and morphological decomposition (MDM) models with Arabic-English bilinguals. The RHM (Kroll & Stewart, 1994) predicts that the amount of activation of first language translation equivalents is negatively correlated with second language (L2) proficiency. The MDM (Frost, Forster, &…
Implementation of Hybrid V-Cycle Multilevel Methods for Mixed Finite Element Systems with Penalty
NASA Technical Reports Server (NTRS)
Lai, Chen-Yao G.
1996-01-01
The goal of this paper is the implementation of hybrid V-cycle hierarchical multilevel methods for the indefinite discrete systems which arise when a mixed finite element approximation is used to solve elliptic boundary value problems. By introducing a penalty parameter, the perturbed indefinite system can be reduced to a symmetric positive definite system containing the small penalty parameter for the velocity unknown alone. We stabilize the hierarchical spatial decomposition approach proposed by Cai, Goldstein, and Pasciak for the reduced system. We demonstrate that the relative condition number of the preconditioner is bounded uniformly with respect to the penalty parameter, the number of levels and possible jumps of the coefficients as long as they occur only across the edges of the coarsest elements.
Decomposition and extraction: a new framework for visual classification.
Fang, Yuqiang; Chen, Qiang; Sun, Lin; Dai, Bin; Yan, Shuicheng
2014-08-01
In this paper, we present a novel framework for visual classification based on hierarchical image decomposition and hybrid midlevel feature extraction. Unlike most midlevel feature learning methods, which focus on the process of coding or pooling, we emphasize that the mechanism of image composition also strongly influences the feature extraction. To effectively explore the image content for the feature extraction, we model a multiplicity feature representation mechanism through meaningful hierarchical image decomposition followed by a fusion step. In particular, we first propose a new hierarchical image decomposition approach in which each image is decomposed into a series of hierarchical semantic components, i.e., the structure and texture images. Then, different feature extraction schemes can be adopted to match the decomposed structure and texture processes in a dissociative manner. Here, two schemes are explored to produce property-related feature representations. One is based on a single-stage network over hand-crafted features and the other is based on a multistage network, which can learn features from raw pixels automatically. Finally, those multiple midlevel features are incorporated by solving a multiple kernel learning task. Extensive experiments are conducted on several challenging data sets for visual classification, and experimental results demonstrate the effectiveness of the proposed method.
Some practicable applications of quadtree data structures/representation in astronomy
NASA Technical Reports Server (NTRS)
Pasztor, L.
1992-01-01
Development of the quadtree as a hierarchical data structuring technique for representing spatial data (points, regions, surfaces, lines, curves, volumes, etc.) has been motivated to a large extent by the storage requirements of images, maps, and other multidimensional (spatially structured) data. For many spatial algorithms, the time-efficiency of quadtrees in terms of execution may be as important as their space-efficiency in terms of storage. Briefly, the quadtree is a class of hierarchical data structures based on the recursive partition of a square region into quadrants and sub-quadrants until a predefined limit is reached. Beyond the wide applicability of quadtrees in image processing, spatial information analysis, and building digital databases (processes becoming ordinary for the astronomical community), there may be numerous further applications in astronomy. Some of these practicable applications based on quadtree representation of astronomical data are presented and suggested for further consideration. Examples are shown for the use of point as well as region quadtrees. Statistics of different leaf and non-leaf nodes (homogeneous and heterogeneous sub-quadrants, respectively) at different levels may provide useful information on the spatial structure of the astronomical data in question. By altering the principle guiding the decomposition process, different types of spatial data may be focused on. Finally, a sampling method based on quadtree representation of an image is proposed which may prove to be efficient in the elaboration of a sampling strategy in a region where observations were carried out previously either with different resolution or/and in different bands.
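A minimal point-region quadtree sketch in the spirit of the decomposition described above; the synthetic point catalogue, leaf capacity, and depth limit are assumptions:

# Recursive quadtree decomposition of a unit square containing points.
import numpy as np

def build_quadtree(points, x0, y0, size, capacity=4, depth=0, max_depth=8):
    """Return a nested dict for the node covering [x0, x0+size) x [y0, y0+size)."""
    if len(points) <= capacity or depth == max_depth:
        return {"bounds": (x0, y0, size), "points": points, "children": None}
    half = size / 2.0
    children = []
    for dx in (0.0, half):
        for dy in (0.0, half):
            sub = [p for p in points
                   if x0 + dx <= p[0] < x0 + dx + half
                   and y0 + dy <= p[1] < y0 + dy + half]
            children.append(build_quadtree(sub, x0 + dx, y0 + dy, half,
                                           capacity, depth + 1, max_depth))
    return {"bounds": (x0, y0, size), "points": None, "children": children}

def count_leaves(node):
    if node["children"] is None:
        return 1
    return sum(count_leaves(c) for c in node["children"])

rng = np.random.default_rng(0)
pts = rng.random((500, 2)).tolist()            # synthetic "catalogue" of positions
tree = build_quadtree(pts, 0.0, 0.0, 1.0)
print("leaf nodes:", count_leaves(tree))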
Chen, Jin; He, Simin; Huang, Bing; Wu, Peng; Qiao, Zhiqiang; Wang, Jun; Zhang, Liyuan; Yang, Guangcheng; Huang, Hui
2017-03-29
High energy and low signature properties are the future trend of solid propellant development. As a new and promising oxidizer, hexanitrohexaazaisowurtzitane (CL-20) is expected to replace the conventional oxidizer ammonium perchlorate to reach the above goals. However, the high pressure exponent of CL-20 hinders its application in solid propellants, so the development of effective catalysts to improve the thermal decomposition properties of CL-20 remains challenging. Here, 3D hierarchically ordered porous carbon (3D HOPC) is presented as a catalyst for the thermal decomposition of CL-20 via synthesizing a series of nanostructured CL-20/HOPC composites. In these nanocomposites, CL-20 is homogeneously space-confined into the 3D HOPC scaffold as nanocrystals 9.2-26.5 nm in diameter. The effect of the pore textural parameters and surface modification of 3D HOPC as well as the CL-20 loading amount on the thermal decomposition of CL-20 is discussed. A significant improvement of the thermal decomposition properties of CL-20 is achieved, with a remarkable decrease in decomposition peak temperature (from 247.0 to 174.8 °C) and activation energy (from 165.5 to 115.3 kJ/mol). The exceptional performance of 3D HOPC could be attributed to its well-connected 3D hierarchically ordered porous structure, high surface area, and the confined CL-20 nanocrystals. This work clearly demonstrates that 3D HOPC is a superior catalyst for CL-20 thermal decomposition and opens new potential for further applications of CL-20 in solid propellants.
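As a back-of-the-envelope reading of the reported activation energies (165.5 versus 115.3 kJ/mol), an Arrhenius comparison gives the rate-constant ratio at a chosen temperature, under the simplifying and purely illustrative assumption of equal pre-exponential factors:

# Arrhenius comparison of the two reported activation energies (illustration).
import math

R = 8.314                              # J / (mol K)
Ea_neat, Ea_conf = 165.5e3, 115.3e3    # J/mol, values from the abstract
T = 273.15 + 175.0                     # ~175 degC, near the reported peak (assumed)

# With equal pre-exponential factors (an assumption), k ~ exp(-Ea / (R T))
ratio = math.exp((Ea_neat - Ea_conf) / (R * T))
print(f"k(confined)/k(neat) ~ {ratio:.2e} at {T - 273.15:.0f} degC (equal prefactors assumed)")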
Bayesian Hierarchical Grouping: perceptual grouping as mixture estimation
Froyen, Vicky; Feldman, Jacob; Singh, Manish
2015-01-01
We propose a novel framework for perceptual grouping based on the idea of mixture models, called Bayesian Hierarchical Grouping (BHG). In BHG we assume that the configuration of image elements is generated by a mixture of distinct objects, each of which generates image elements according to some generative assumptions. Grouping, in this framework, means estimating the number and the parameters of the mixture components that generated the image, including estimating which image elements are “owned” by which objects. We present a tractable implementation of the framework, based on the hierarchical clustering approach of Heller and Ghahramani (2005). We illustrate it with examples drawn from a number of classical perceptual grouping problems, including dot clustering, contour integration, and part decomposition. Our approach yields an intuitive hierarchical representation of image elements, giving an explicit decomposition of the image into mixture components, along with estimates of the probability of various candidate decompositions. We show that BHG accounts well for a diverse range of empirical data drawn from the literature. Because BHG provides a principled quantification of the plausibility of grouping interpretations over a wide range of grouping problems, we argue that it provides an appealing unifying account of the elusive Gestalt notion of Prägnanz. PMID:26322548
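The core idea of grouping as mixture estimation can be illustrated with an ordinary Gaussian mixture and BIC model selection; this stand-in is not the Bayesian Hierarchical Grouping model itself, and the synthetic dot-cluster data are an assumption:

# "Grouping = mixture estimation" illustrated with a plain Gaussian mixture.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic "dot clustering" stimulus: three dot clouds
X = np.vstack([rng.normal(loc, 0.3, size=(60, 2))
               for loc in ([0, 0], [3, 0], [1.5, 2.5])])

fits = [GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 7)]
bics = [m.bic(X) for m in fits]
best = fits[int(np.argmin(bics))]
print("selected number of groups:", best.n_components)
labels = best.predict(X)               # which mixture component "owns" each dot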
A Bayesian hierarchical diffusion model decomposition of performance in Approach–Avoidance Tasks
Krypotos, Angelos-Miltiadis; Beckers, Tom; Kindt, Merel; Wagenmakers, Eric-Jan
2015-01-01
Common methods for analysing response time (RT) tasks, frequently used across different disciplines of psychology, suffer from a number of limitations such as the failure to directly measure the underlying latent processes of interest and the inability to take into account the uncertainty associated with each individual's point estimate of performance. Here, we discuss a Bayesian hierarchical diffusion model and apply it to RT data. This model allows researchers to decompose performance into meaningful psychological processes and to account optimally for individual differences and commonalities, even with relatively sparse data. We highlight the advantages of the Bayesian hierarchical diffusion model decomposition by applying it to performance on Approach–Avoidance Tasks, widely used in the emotion and psychopathology literature. Model fits for two experimental data-sets demonstrate that the model performs well. The Bayesian hierarchical diffusion model overcomes important limitations of current analysis procedures and provides deeper insight in latent psychological processes of interest. PMID:25491372
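A minimal simulation of the generative drift-diffusion process underlying the decomposition (drift rate, boundary separation, starting point, non-decision time); this sketches only the forward model, not the Bayesian hierarchical estimation, and all parameter values are assumptions:

# One-trial drift-diffusion simulation (Euler scheme; illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(drift=0.3, boundary=1.0, start_frac=0.5, ndt=0.3, dt=0.001, sigma=1.0):
    """Return (response, reaction_time) for one simulated trial."""
    evidence = start_frac * boundary      # starting point as a fraction of the boundary
    t = 0.0
    while 0.0 < evidence < boundary:
        evidence += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    response = "upper" if evidence >= boundary else "lower"
    return response, ndt + t              # non-decision time added to diffusion time

trials = [simulate_ddm() for _ in range(200)]
rts = [rt for _, rt in trials]
print("mean RT:", round(float(np.mean(rts)), 3),
      "| upper-boundary proportion:", np.mean([r == "upper" for r, _ in trials]))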
Architecture of the parallel hierarchical network for fast image recognition
NASA Astrophysics Data System (ADS)
Timchenko, Leonid; Wójcik, Waldemar; Kokriatskaia, Natalia; Kutaev, Yuriy; Ivasyuk, Igor; Kotyra, Andrzej; Smailova, Saule
2016-09-01
Multistage integration of visual information in the brain allows humans to respond quickly to the most significant stimuli while maintaining the ability to recognize small details in the image. Implementation of this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing includes the main types of cortical multistage convergence. The input images are mapped into a flexible hierarchy that reflects the complexity of the image data. Procedures of temporal image decomposition and hierarchy formation are described in mathematical expressions. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image that encapsulates structure at different hierarchical levels in the image. At each processing stage a single output result is computed to allow a quick response of the system. The result is presented as an activity pattern, which can be compared with previously computed patterns on the basis of the closest match. The idea of the forecasting method is as follows: in the results synchronization block, network-processed data arrive at the database, where a sample of the most correlated data is drawn using service parameters of the parallel-hierarchical network.
Oohashi, Tsutomu; Ueno, Osamu; Maekawa, Tadao; Kawai, Norie; Nishina, Emi; Honda, Manabu
2009-01-01
Under the AChem paradigm and the programmed self-decomposition (PSD) model, we propose a hierarchical model for the biomolecular covalent bond (HBCB model). This model assumes that terrestrial organisms arrange their biomolecules in a hierarchical structure according to the energy strength of their covalent bonds. It also assumes that they have evolutionarily selected the PSD mechanism of turning biological polymers (BPs) into biological monomers (BMs) as an efficient biomolecular recycling strategy. We have examined the validity and effectiveness of the HBCB model by coordinating two complementary approaches: biological experiments using existing terrestrial life, and simulation experiments using an AChem system. Biological experiments have shown that terrestrial life possesses a PSD mechanism as an endergonic, genetically regulated process and that hydrolysis, which decomposes a BP into BMs, is one of the main processes of such a mechanism. In simulation experiments, we compared different virtual self-decomposition processes. The virtual species in which the self-decomposition process mainly involved covalent bond cleavage from a BP to BMs showed evolutionary superiority over other species in which the self-decomposition process involved cleavage from BP to classes lower than BM. These converging findings strongly support the existence of PSD and the validity and effectiveness of the HBCB model.
Egri-Nagy, Attila; Nehaniv, Chrystopher L
2008-01-01
Beyond complexity measures, it is sometimes worthwhile to also investigate how complexity changes structurally, especially in artificial systems where we have complete knowledge about the evolutionary process. Hierarchical decomposition is a useful way of assessing structural complexity changes of organisms modeled as automata, and we show how recently developed computational tools can be used for this purpose, by computing holonomy decompositions and holonomy complexity. To gain insight into the evolution of complexity, we investigate the smoothness of the landscape structure of complexity under minimal transitions. As a proof of concept, we illustrate how the hierarchical complexity analysis reveals symmetries and irreversible structure in biological networks by applying the methods to the lac operon mechanism in the genetic regulatory network of Escherichia coli.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Chen, Xingyuan; Ye, Ming
Sensitivity analysis is an important tool for quantifying uncertainty in the outputs of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a hierarchical sensitivity analysis method that (1) constructs an uncertainty hierarchy by analyzing the input uncertainty sources, and (2) accounts for the spatial correlation among parameters at each level of the hierarchy using geostatistical tools. The contribution of the uncertainty source at each hierarchy level is measured by sensitivity indices calculated using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and the permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally as driven by the dynamic interaction between groundwater and river water at the site. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed parameters.
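A minimal Monte Carlo sketch of the first-order variance-based (Sobol') indices on which the hierarchical method builds; the test function and sample size are assumptions, and the geostatistical handling of spatially correlated fields described above is not reproduced:

# First-order Sobol' indices via a pick-freeze (Saltelli-style) estimator.
import numpy as np

def model(x):
    # Simple additive test model: three inputs with very different weights (assumed)
    return 4.0 * x[:, 0] + 2.0 * x[:, 1] + 0.5 * x[:, 2]

rng = np.random.default_rng(0)
N, d = 100_000, 3
A, B = rng.random((N, d)), rng.random((N, d))      # two independent input samples
fA, fB = model(A), model(B)
var_Y = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                             # "pick-freeze": swap one column
    S_i = np.mean(fB * (model(ABi) - fA)) / var_Y   # first-order index estimate
    print(f"first-order index S_{i + 1} ~ {S_i:.3f}")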
Ma, Hong-Wu; Zhao, Xue-Ming; Yuan, Ying-Jin; Zeng, An-Ping
2004-08-12
Metabolic networks are organized in a modular, hierarchical manner. Methods for a rational decomposition of the metabolic network into relatively independent functional subsets are essential to better understand the modularity and organization principles of a large-scale, genome-wide network. Network decomposition is also necessary for functional analysis of metabolism by pathway analysis methods, which are often hampered by the problem of combinatorial explosion due to the complexity of the metabolic network. Decomposition methods proposed in the literature are mainly based on the connection degree of metabolites. To obtain a more reasonable decomposition, the global connectivity structure of metabolic networks should be taken into account. In this work, we use a reaction graph representation of a metabolic network for the identification of its global connectivity structure and for decomposition. A bow-tie connectivity structure similar to that previously discovered for the metabolite graph is found also to exist in the reaction graph. Based on this bow-tie structure, a new decomposition method is proposed, which uses a distance definition derived from the path length between two reactions. A hierarchical classification tree is first constructed from the distance matrix among the reactions in the giant strong component of the bow-tie structure. These reactions are then grouped into different subsets based on the hierarchical tree. Reactions in the IN and OUT subsets of the bow-tie structure are subsequently placed in the corresponding subsets according to a 'majority rule'. Compared with the decomposition methods proposed in the literature, ours is based on combined properties of the global network structure and local reaction connectivity rather than, primarily, on the connection degree of metabolites. The method is applied to decompose the metabolic network of Escherichia coli. Eleven subsets are obtained. More detailed investigations of the subsets show that reactions in the same subset are really functionally related. The rational decomposition of metabolic networks, and subsequent studies of the subsets, make it easier to understand the inherent organization and functionality of metabolic networks at the modular level. http://genome.gbf.de/bioinformatics/
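A rough sketch of the pipeline described above, shortest-path distances on a reaction graph feeding a hierarchical classification tree that is cut into subsets; the toy adjacency matrix and the two-subset cut are assumptions, and the bow-tie-specific placement of IN and OUT reactions is not reproduced:

# Reaction-graph distances -> hierarchical tree -> functional subsets.
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Toy reaction graph (adjacency matrix): two loosely linked modules (assumed)
adj = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

D = shortest_path(adj, method="D", directed=False)           # path-length distance matrix
Z = linkage(squareform(D, checks=False), method="average")   # hierarchical classification tree
subsets = fcluster(Z, t=2, criterion="maxclust")             # cut into 2 subsets
print("reaction subset labels:", subsets)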
NASA Astrophysics Data System (ADS)
Fetita, C.; Chang-Chien, K. C.; Brillet, P. Y.; Prêteux, F.; Chang, R. F.
2012-03-01
Our study aims at developing a computer-aided diagnosis (CAD) system for fully automatic detection and classification of pathological lung parenchyma patterns in idiopathic interstitial pneumonias (IIP) and emphysema using multi-detector computed tomography (MDCT). The proposed CAD system is based on three-dimensional (3-D) mathematical morphology, texture and fuzzy logic analysis, and can be divided into four stages: (1) a multi-resolution decomposition scheme based on a 3-D morphological filter was exploited to discriminate the lung region patterns at different analysis scales. (2) An additional spatial lung partitioning based on the lung tissue texture was introduced to reinforce the spatial separation between patterns extracted at the same resolution level in the decomposition pyramid. Then, (3) a hierarchic tree structure was exploited to describe the relationship between patterns at different resolution levels, and for each pattern, six fuzzy membership functions were established for assigning a probability of association with a normal tissue or a pathological target. Finally, (4) a decision step exploiting the fuzzy-logic assignments selects the target class of each lung pattern among the following categories: normal (N), emphysema (EM), fibrosis/honeycombing (FHC), and ground glass (GDG). According to a preliminary evaluation on an extended database, the proposed method can overcome the drawbacks of a previously developed approach and achieve higher sensitivity and specificity.
NASA Astrophysics Data System (ADS)
Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.
2017-05-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
Tracking Hierarchical Processing in Morphological Decomposition with Brain Potentials
ERIC Educational Resources Information Center
Lavric, Aureliu; Elchlepp, Heike; Rastle, Kathleen
2012-01-01
One important debate in psycholinguistics concerns the nature of morphological decomposition processes in visual word recognition (e.g., darkness = {dark} + {-ness}). One theory claims that these processes arise during orthographic analysis and prior to accessing meaning (Rastle & Davis, 2008), and another argues that these processes arise through…
USDA-ARS?s Scientific Manuscript database
Hyperspectral scattering is a promising technique for rapid and noninvasive measurement of multiple quality attributes of apple fruit. A hierarchical evolutionary algorithm (HEA) approach, in combination with subspace decomposition and partial least squares (PLS) regression, was proposed to select o...
Spatial, temporal, and hybrid decompositions for large-scale vehicle routing with time windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, Russell W
This paper studies the use of decomposition techniques to quickly find high-quality solutions to large-scale vehicle routing problems with time windows. It considers an adaptive decomposition scheme which iteratively decouples a routing problem based on the current solution. Earlier work considered vehicle-based decompositions that partition the vehicles across the subproblems. The subproblems can then be optimized independently and merged easily. This paper argues that vehicle-based decompositions, although very effective on various problem classes, also have limitations. In particular, they do not accommodate temporal decompositions and may produce spatial decompositions that are not focused enough. This paper then proposes customer-based decompositions, which generalize vehicle-based decouplings and allow for focused spatial and temporal decompositions. Experimental results on class R2 of the extended Solomon benchmarks demonstrate the benefits of the customer-based adaptive decomposition scheme and its spatial, temporal, and hybrid instantiations. In particular, they show that customer-based decompositions bring significant benefits over large neighborhood search in contrast to vehicle-based decompositions.
NASA Technical Reports Server (NTRS)
Consoli, Robert David; Sobieszczanski-Sobieski, Jaroslaw
1990-01-01
Advanced multidisciplinary analysis and optimization methods, namely system sensitivity analysis and non-hierarchical system decomposition, are applied to reduce the cost and improve the visibility of an automated vehicle design synthesis process. This process is inherently complex due to the large number of functional disciplines and associated interdisciplinary couplings. Recent developments in system sensitivity analysis as applied to complex non-hierarchic multidisciplinary design optimization problems enable the decomposition of these complex interactions into sub-processes that can be evaluated in parallel. The application of these techniques results in significant cost, accuracy, and visibility benefits for the entire design synthesis process.
NASA Astrophysics Data System (ADS)
Hu, Jie; Luo, Meng; Jiang, Feng; Xu, Rui-Xue; Yan, YiJing
2011-06-01
Padé spectrum decomposition is an optimal sum-over-poles expansion scheme for the Fermi and Bose functions [J. Hu, R. X. Xu, and Y. J. Yan, J. Chem. Phys. 133, 101106 (2010); doi:10.1063/1.3484491]. In this work, we report two additional members of this family, from which the best among all sum-over-poles methods can be chosen for different cases of application. Methods are developed for determining these three Padé spectrum decomposition expansions at machine precision via simple algorithms. We exemplify the applications of the present development with the optimal construction of hierarchical equations-of-motion formulations for nonperturbative quantum dissipation and quantum transport dynamics. Numerical demonstrations are given for two systems. One is the transient transport current through an interacting quantum-dot system, together with the involved high-order co-tunneling dynamics. The other is the non-Markovian dynamics of a spin-boson system.
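For orientation, the plain Matsubara expansion below shows what a sum-over-poles representation of the Fermi function looks like and how slowly it converges; it is not the Padé spectrum decomposition itself, whose poles and residues come from a small eigenvalue problem, and the evaluation point is an assumption:

# Matsubara sum-over-poles expansion of the Fermi function:
# f(x) = 1/2 - sum_k 2x / (x^2 + nu_k^2), with nu_k = (2k - 1) * pi.
import numpy as np

def fermi_matsubara(x, n_poles):
    k = np.arange(1, n_poles + 1)
    nu = (2 * k - 1) * np.pi               # fermionic Matsubara frequencies
    return 0.5 - np.sum(2 * x / (x**2 + nu**2))

x = 2.0                                     # evaluation point (assumed)
exact = 1.0 / (np.exp(x) + 1.0)
for n in (10, 100, 1000):
    approx = fermi_matsubara(x, n)
    print(f"{n:5d} poles: error = {abs(approx - exact):.2e}")   # slow ~ 1/N convergence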
Hierarchical Diagnosis of Vocal Fold Disorders
NASA Astrophysics Data System (ADS)
Nikkhah-Bahrami, Mansour; Ahmadi-Noubari, Hossein; Seyed Aghazadeh, Babak; Khadivi Heris, Hossein
This paper explores the use of a hierarchical structure for the diagnosis of vocal fold disorders. The hierarchical structure is initially used to train different second-level classifiers. At the first level, normal and pathological signals are distinguished. Next, pathological signals are classified into neurogenic and organic vocal fold disorders. At the final level, vocal fold nodules are distinguished from polyps in the organic disorders category. For feature selection at each level of the hierarchy, the reconstructed signal at each wavelet packet decomposition sub-band in 5 levels of decomposition with the db10 mother wavelet is used to extract the nonlinear features of self-similarity and approximate entropy. Also, wavelet packet coefficients are used to measure energy and Shannon entropy features at different spectral sub-bands. The Davies-Bouldin criterion is employed to find the most discriminant features. Finally, support vector machines are adopted as classifiers at each level of the hierarchy, resulting in a diagnosis accuracy of 92%.
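A rough sketch of the sub-band feature extraction described above (energy and Shannon entropy of level-5 wavelet packet coefficients with the db10 wavelet), using PyWavelets; the synthetic signal stands in for a voice recording, and the nonlinear features, Davies-Bouldin selection, and SVM stage are not reproduced:

# Wavelet-packet energy and Shannon entropy features per sub-band.
import numpy as np
import pywt

rng = np.random.default_rng(0)
fs = 8000
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 150 * t) + 0.3 * rng.standard_normal(t.size)  # toy "voice"

wp = pywt.WaveletPacket(data=signal, wavelet="db10", mode="symmetric", maxlevel=5)
features = []
for node in wp.get_level(5, order="natural"):
    c = node.data
    energy = float(np.sum(c**2))
    p = c**2 / (energy + 1e-12)                       # normalized energy distribution
    shannon = float(-np.sum(p * np.log2(p + 1e-12)))  # Shannon entropy of the sub-band
    features.extend([energy, shannon])

print("feature vector length:", len(features))        # 2 features x 32 sub-bands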
Proof of a new colour decomposition for QCD amplitudes
Melia, Tom
2015-12-16
Recently, Johansson and Ochirov conjectured the form of a new colour decomposition for QCD tree-level amplitudes. This note provides a proof of that conjecture. The proof is based on ‘Mario World’ Feynman diagrams, which exhibit the hierarchical Dyck structure previously found to be very useful when dealing with multi-quark amplitudes.
A Tensor-Train accelerated solver for integral equations in complex geometries
NASA Astrophysics Data System (ADS)
Corona, Eduardo; Rahimian, Abtin; Zorin, Denis
2017-04-01
We present a framework using the Quantized Tensor Train (QTT) decomposition to accurately and efficiently solve volume and boundary integral equations in three dimensions. We describe how the QTT decomposition can be used as a hierarchical compression and inversion scheme for matrices arising from the discretization of integral equations. For a broad range of problems, computational and storage costs of the inversion scheme are extremely modest, O(log N), and once the inverse is computed, it can be applied in O(N log N). We analyze the QTT ranks for hierarchically low-rank matrices and discuss its relationship to commonly used hierarchical compression techniques such as FMM and HSS. We prove that the QTT ranks are bounded for translation-invariant systems and argue that this behavior extends to non-translation-invariant volume and boundary integrals. For volume integrals, the QTT decomposition provides an efficient direct solver requiring significantly less memory compared to other fast direct solvers. We present results demonstrating the remarkable performance of the QTT-based solver when applied to both translation and non-translation invariant volume integrals in 3D. For boundary integral equations, we demonstrate that using a QTT decomposition to construct preconditioners for a Krylov subspace method leads to an efficient and robust solver with a small memory footprint. We test the QTT preconditioners in the iterative solution of an exterior elliptic boundary value problem (Laplace) formulated as a boundary integral equation in complex, multiply connected geometries.
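A very small TT-SVD sketch conveys how a tensor is compressed into a train of low-rank cores; this shows only the generic tensor-train idea (not the quantized variant or the inversion scheme of the paper), and the test tensor and rank cap are assumptions:

# Tensor-Train (TT) decomposition by sequential truncated SVDs (illustration).
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose `tensor` into TT cores with ranks capped at `max_rank`."""
    dims = tensor.shape
    d = len(dims)
    cores, r_prev = [], 1
    mat = tensor.reshape(r_prev * dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))   # k-th TT core
        mat = s[:r, None] * Vt[:r, :]
        r_prev = r
        if k < d - 2:
            mat = mat.reshape(r_prev * dims[k + 1], -1)
    cores.append(mat.reshape(r_prev, dims[-1], 1))           # last core
    return cores

def tt_to_full(cores):
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full[0, ..., 0]

# Test tensor with low TT rank: T[i,j,k,l] = sin(i + j + k + l)
T = np.sin(np.indices((8, 8, 8, 8)).sum(axis=0))
cores = tt_svd(T, max_rank=4)
err = np.linalg.norm(T - tt_to_full(cores)) / np.linalg.norm(T)
print("TT ranks:", [c.shape[2] for c in cores[:-1]], "| relative error:", f"{err:.1e}")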
Optimization by nonhierarchical asynchronous decomposition
NASA Technical Reports Server (NTRS)
Shankar, Jayashree; Ribbens, Calvin J.; Haftka, Raphael T.; Watson, Layne T.
1992-01-01
Large scale optimization problems are tractable only if they are somehow decomposed. Hierarchical decompositions are inappropriate for some types of problems and do not parallelize well. Sobieszczanski-Sobieski has proposed a nonhierarchical decomposition strategy for nonlinear constrained optimization that is naturally parallel. Despite some successes on engineering problems, the algorithm as originally proposed fails on simple two dimensional quadratic programs. The algorithm is carefully analyzed for quadratic programs, and a number of modifications are suggested to improve its robustness.
ERIC Educational Resources Information Center
Ceulemans, Eva; Van Mechelen, Iven; Leenen, Iwin
2007-01-01
Hierarchical classes models are quasi-order retaining Boolean decomposition models for N-way N-mode binary data. To fit these models to data, rationally started alternating least squares (or, equivalently, alternating least absolute deviations) algorithms have been proposed. Extensive simulation studies showed that these algorithms succeed quite…
Geometrical and topological issues in octree based automatic meshing
NASA Technical Reports Server (NTRS)
Saxena, Mukul; Perucchio, Renato
1987-01-01
Finite element meshes derived automatically from solid models through recursive spatial subdivision schemes (octrees) can be made to inherit the hierarchical structure and the spatial addressability intrinsic to the underlying grid. These two properties, together with the geometric regularity that can also be built into the mesh, make octree based meshes ideally suited for efficient analysis and self-adaptive remeshing and reanalysis. The element decomposition of the octal cells that intersect the boundary of the domain is discussed. The problem, central to octree based meshing, is solved by combining template mapping and element extraction into a procedure that utilizes both constructive solid geometry and boundary representation techniques. Boundary cells that are not intersected by the edge of the domain boundary are easily mapped to predefined element topology. Cells containing edges (and vertices) are first transformed into a planar polyhedron and then triangulated via an element extractor. The modeling environments required for the derivation of planar polyhedra and for element extraction are analyzed.
Octree based automatic meshing from CSG models
NASA Technical Reports Server (NTRS)
Perucchio, Renato
1987-01-01
Finite element meshes derived automatically from solid models through recursive spatial subdivision schemes (octrees) can be made to inherit the hierarchical structure and the spatial addressability intrinsic to the underlying grid. These two properties, together with the geometric regularity that can also be built into the mesh, make octree based meshes ideally suited for efficient analysis and self-adaptive remeshing and reanalysis. The element decomposition of the octal cells that intersect the boundary of the domain is emphasized. The problem, central to octree based meshing, is solved by combining template mapping and element extraction into a procedure that utilizes both constructive solid geometry and boundary representation techniques. Boundary cells that are not intersected by the edge of the domain boundary are easily mapped to predefined element topology. Cells containing edges (and vertices) are first transformed into a planar polyhedron and then triangulated via element extractors. The modeling environments required for the derivation of planar polyhedra and for element extraction are analyzed.
Finding Hierarchical and Overlapping Dense Subgraphs using Nucleus Decompositions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seshadhri, Comandur; Pinar, Ali; Sariyuce, Ahmet Erdem
Finding dense substructures in a graph is a fundamental graph mining operation, with applications in bioinformatics, social networks, and visualization, to name a few. Yet most standard formulations of this problem (like clique, quasi-clique, k-densest subgraph) are NP-hard. Furthermore, the goal is rarely to find the "true optimum", but to identify many (if not all) dense substructures, understand their distribution in the graph, and ideally determine a hierarchical structure among them. Current dense subgraph finding algorithms usually optimize some objective and only find a few such subgraphs without providing any hierarchy. It is also not clear how to account for overlaps in dense substructures. We define the nucleus decomposition of a graph, which represents the graph as a forest of nuclei. Each nucleus is a subgraph where smaller cliques are present in many larger cliques. The forest of nuclei is a hierarchy by containment, where the edge density increases as we proceed towards leaf nuclei. Sibling nuclei can have limited intersections, which allows for discovery of overlapping dense subgraphs. With the right parameters, the nucleus decomposition generalizes the classic notions of k-cores and k-trusses. We give provably efficient algorithms for nucleus decompositions, and empirically evaluate their behavior in a variety of real graphs. The tree of nuclei consistently gives a global, hierarchical snapshot of dense substructures, and outputs dense subgraphs of higher quality than other state-of-the-art solutions. Our algorithm can process graphs with tens of millions of edges in less than an hour.
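As a point of reference for the generalization claim, the classic k-core decomposition, one of the special cases that nucleus decomposition subsumes, can be computed directly with networkx; the benchmark graph below is an assumption, and this is not the nucleus algorithm itself:

# k-core decomposition of a small benchmark graph.
import networkx as nx
from collections import Counter

G = nx.karate_club_graph()          # small standard test graph (assumption)
core = nx.core_number(G)            # largest k such that each node lies in a k-core
print("core-number distribution:", Counter(core.values()))
k_max = max(core.values())
nodes = [v for v, k in core.items() if k == k_max]
print(f"innermost {k_max}-core has {len(nodes)} nodes")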
Approximation, abstraction and decomposition in search and optimization
NASA Technical Reports Server (NTRS)
Ellman, Thomas
1992-01-01
In this paper, I discuss four different areas of my research. One portion of my research has focused on automatic synthesis of search control heuristics for constraint satisfaction problems (CSPs). I have developed techniques for automatically synthesizing two types of heuristics for CSPs: Filtering functions are used to remove portions of a search space from consideration. Another portion of my research is focused on automatic synthesis of hierarchic algorithms for solving constraint satisfaction problems (CSPs). I have developed a technique for constructing hierarchic problem solvers based on numeric interval algebra. Another portion of my research is focused on automatic decomposition of design optimization problems. We are using the design of racing yacht hulls as a testbed domain for this research. Decomposition is especially important in the design of complex physical shapes such as yacht hulls. Another portion of my research is focused on intelligent model selection in design optimization. The model selection problem results from the difficulty of using exact models to analyze the performance of candidate designs.
2007-11-04
[Figure residue from Tadmor, Nezzar & Vese, hierarchical decomposition of images: original image f, restoration u_RO with residual v_RO + 128 (rmse = 0.1066; RO parameters λ = 2000, h = 1, Δt = 0.1), and successive hierarchical recoveries of the partial sums of u_λi.]
Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots.
Hagiwara, Yoshinobu; Inoue, Masakazu; Kobayashi, Hiroyoshi; Taniguchi, Tadahiro
2018-01-01
In this paper, we propose a hierarchical spatial concept formation method based on the Bayesian generative model with multimodal information e.g., vision, position and word information. Since humans have the ability to select an appropriate level of abstraction according to the situation and describe their position linguistically, e.g., "I am in my home" and "I am in front of the table," a hierarchical structure of spatial concepts is necessary in order for human support robots to communicate smoothly with users. The proposed method enables a robot to form hierarchical spatial concepts by categorizing multimodal information using hierarchical multimodal latent Dirichlet allocation (hMLDA). Object recognition results using convolutional neural network (CNN), hierarchical k-means clustering result of self-position estimated by Monte Carlo localization (MCL), and a set of location names are used, respectively, as features in vision, position, and word information. Experiments in forming hierarchical spatial concepts and evaluating how the proposed method can predict unobserved location names and position categories are performed using a robot in the real world. Results verify that, relative to comparable baseline methods, the proposed method enables a robot to predict location names and position categories closer to predictions made by humans. As an application example of the proposed method in a home environment, a demonstration in which a human support robot moves to an instructed place based on human speech instructions is achieved based on the formed hierarchical spatial concept.
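One ingredient of the pipeline, hierarchical k-means over estimated 2-D self-positions yielding coarse areas that split into finer places, can be sketched on its own; the synthetic positions and cluster counts are assumptions, and the full multimodal hMLDA model is not reproduced:

# Hierarchical k-means over 2-D self-position estimates (single feature channel).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic self-position estimates around four rooms of a home (assumed layout)
rooms = np.array([[0, 0], [5, 0], [0, 5], [5, 5]], dtype=float)
positions = np.vstack([c + 0.4 * rng.standard_normal((80, 2)) for c in rooms])

coarse = KMeans(n_clusters=2, n_init=10, random_state=0).fit(positions)
for area in range(2):
    pts = positions[coarse.labels_ == area]
    fine = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pts)
    print(f"area {area}: {len(pts)} samples, place centres:\n{fine.cluster_centers_}")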
Hierarchical clustering using correlation metric and spatial continuity constraint
Stork, Christopher L.; Brewer, Luke N.
2012-10-02
Large data sets are analyzed by hierarchical clustering using correlation as a similarity measure. This provides results that are superior to those obtained using a Euclidean distance similarity measure. A spatial continuity constraint may be applied in hierarchical clustering analysis of images.
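A minimal sketch of the approach claimed above, hierarchical clustering with a correlation-based distance, which groups rows by shape rather than amplitude; the synthetic signals are an assumption:

# Hierarchical clustering with a correlation metric (1 - Pearson correlation).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
base1 = np.sin(np.linspace(0, 4 * np.pi, 100))
base2 = np.cos(np.linspace(0, 4 * np.pi, 100))
# Two families of signals differing in shape (correlation), not amplitude
data = np.vstack([a * base1 + 0.05 * rng.standard_normal(100) for a in (1, 5, 10)] +
                 [a * base2 + 0.05 * rng.standard_normal(100) for a in (1, 5, 10)])

d = pdist(data, metric="correlation")          # correlation-based distance
Z = linkage(d, method="average")               # hierarchical clustering tree
print(fcluster(Z, t=2, criterion="maxclust"))  # two shape-based groups, e.g. [1 1 1 2 2 2]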
Accelerated decomposition techniques for large discounted Markov decision processes
NASA Astrophysics Data System (ADS)
Larach, Abdelhadi; Chafik, S.; Daoui, C.
2017-12-01
Many hierarchical techniques to solve large Markov decision processes (MDPs) are based on the partition of the state space into strongly connected components (SCCs) that can be classified into levels. In each level, smaller problems named restricted MDPs are solved, and then these partial solutions are combined to obtain the global solution. In this paper, we first propose a novel algorithm, a variant of Tarjan's algorithm, that simultaneously finds the SCCs and their levels. Second, a new definition of the restricted MDPs is presented to improve some hierarchical solutions in discounted MDPs using the value iteration (VI) algorithm based on a list of state-action successors. Finally, a robotic motion-planning example and the experimental results are presented to illustrate the benefit of the proposed decomposition algorithms.
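The preprocessing step can be sketched with networkx in place of the paper's combined Tarjan variant: compute the strongly connected components of the transition graph and assign each one a level in the condensation DAG. The toy graph is an assumption:

# SCCs of a transition graph and their levels in the condensation DAG.
import networkx as nx

G = nx.DiGraph([(0, 1), (1, 0), (1, 2), (2, 3), (3, 2), (3, 4)])
C = nx.condensation(G)                      # DAG of SCCs; node attribute "members"

level = {}
for n in nx.topological_sort(C):            # predecessors are processed first
    preds = list(C.predecessors(n))
    level[n] = 0 if not preds else 1 + max(level[p] for p in preds)

for n, data in C.nodes(data=True):
    print(f"SCC {sorted(data['members'])} -> level {level[n]}")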
Design of a structural and functional hierarchy for planning and control of telerobotic systems
NASA Technical Reports Server (NTRS)
Acar, Levent; Ozguner, Umit
1989-01-01
Hierarchical structures offer numerous advantages over conventional structures for the control of telerobotic systems. A hierarchically organized system can be controlled via undetailed task assignments and can easily adapt to changing circumstances. The distributed and modular structure of these systems also enables the fast response needed in most telerobotic applications. On the other hand, most of the hierarchical structures proposed in the literature are based on functional properties of a system. These structures work best for a few given functions of a large class of systems. In telerobotic applications, all functions of a single system need to be explored. This approach requires a hierarchical organization based on the physical properties of a system, and such a hierarchical organization is introduced. The decomposition, organization, and control of the hierarchical structure are considered, and a system with two robot arms and a camera is presented.
Heuristic decomposition for non-hierarchic systems
NASA Technical Reports Server (NTRS)
Bloebaum, Christina L.; Hajela, P.
1991-01-01
Design and optimization is substantially more complex in multidisciplinary and large-scale engineering applications due to the existing inherently coupled interactions. The paper introduces a quasi-procedural methodology for multidisciplinary optimization that is applicable for nonhierarchic systems. The necessary decision-making support for the design process is provided by means of an embedded expert systems capability. The method employs a decomposition approach whose modularity allows for implementation of specialized methods for analysis and optimization within disciplines.
Generic interpreters and microprocessor verification
NASA Technical Reports Server (NTRS)
Windley, Phillip J.
1990-01-01
The following topics are covered in viewgraph form: (1) generic interpreters; (2) Viper microprocessors; (3) microprocessor verification; (4) determining correctness; (5) hierarchical decomposition; (6) interpreter theory; (7) AVM-1; (8) phase-level specification; and future work.
Zhang, Mingyi; Shao, Changlu; Guo, Zengcai; Zhang, Zhenyi; Mu, Jingbo; Zhang, Peng; Cao, Tieping; Liu, Yichun
2011-07-01
Hierarchical tetranitro copper phthalocyanine (TNCuPc) hollow spheres were fabricated by a simple solvothermal method. The formation mechanism was proposed based on the evolution of morphology as a function of solvothermal time, which involved the initial formation of nanoparticles followed by their self-aggregation into microspheres and transformation into hierarchical hollow spheres by Ostwald ripening. Furthermore, the hierarchical TNCuPc hollow spheres exhibited high adsorption capacity and, simultaneously, excellent visible-light-driven photocatalytic performance for Rhodamine B (RB). A possible mechanism for the "aqueous-solid phase transfer and in situ photocatalysis" was suggested. Repetitive tests showed that the hierarchical TNCuPc hollow spheres maintained high catalytic activity over several cycles and showed good regeneration capability under mild conditions.
Hierarchical Bayesian spatial models for multispecies conservation planning and monitoring
Carlos Carroll; Devin S. Johnson; Jeffrey R. Dunk; William J. Zielinski
2010-01-01
Biologists who develop and apply habitat models are often familiar with the statistical challenges posed by their data's spatial structure but are unsure of whether the use of complex spatial models will increase the utility of model results in planning. We compared the relative performance of nonspatial and hierarchical Bayesian spatial models for three vertebrate and...
ERIC Educational Resources Information Center
Xu, Chang; LeFevre, Jo-Anne
2016-01-01
Are there differential benefits of training sequential number knowledge versus spatial skills for children's numerical and spatial performance? Three- to five-year-old children (N = 84) participated in 1 session of either sequential training (e.g., what comes before and after the number 5?) or non-numerical spatial training (i.e., decomposition of…
Knowledge-based approach to system integration
NASA Technical Reports Server (NTRS)
Blokland, W.; Krishnamurthy, C.; Biegl, C.; Sztipanovits, J.
1988-01-01
To solve complex problems one can often use the decomposition principle. However, a problem is seldom decomposable into completely independent subproblems. System integration deals with the problem of resolving the interdependencies and integrating the subsolutions. A natural method of decomposition is the hierarchical one: high-level specifications are broken down into lower-level specifications until they can be transformed into solutions relatively easily. By automating the hierarchical decomposition and solution generation, an integrated system is obtained in which the declaration of high-level specifications is enough to solve the problem. We offer a knowledge-based approach to integrating the development and building of control systems. Process modeling is supported by graphic editors. The user selects and connects icons that represent subprocesses and may refer to prewritten programs. The graphical editor assists the user in selecting parameters for each subprocess and allows the testing of a specific configuration. Next, from the definitions created by the graphical editor, the actual control program is built. Fault-diagnosis routines are generated automatically as well. Since the user is not required to write program code and knowledge about the process is present in the development system, the user need not have expertise in many fields.
NASA Astrophysics Data System (ADS)
Li, Gang; Bai, Weiyang
2018-04-01
Hierarchical flower-like cobalt tetroxide (Co3O4) was successfully synthesized via a facile precipitation method in combination with heat treatment of the cobalt oxalate precursor. The samples were systematically characterized by thermogravimetric and derivative thermogravimetric analysis (TGA-DTG), X-ray powder diffraction (XRD), field-emission scanning electron microscopy (FESEM), transmission electron microscopy (TEM) and N2 adsorption-desorption measurements. The results indicate that the as-fabricated Co3O4 exhibits uniform flower-like morphologies with diameters of 8-12 μm, which are constructed from one-dimensional nanowires. Furthermore, the catalytic effect of this hierarchical porous Co3O4 on ammonium perchlorate (AP) pyrolysis was investigated using differential scanning calorimetry (DSC) techniques. It is found that the pyrolysis temperature of AP shifts 142 °C downward with a 2 wt% addition of Co3O4. Meanwhile, the addition of Co3O4 results in a dramatic reduction of the apparent activation energy of AP pyrolysis from 216 kJ mol-1 to 152 kJ mol-1, determined by the Kissinger correlation. The results endorse this material as a potential catalyst in AP decomposition.
A hierarchical approach to forest landscape pattern characterization.
Wang, Jialing; Yang, Xiaojun
2012-01-01
Landscape spatial patterns have increasingly been considered to be essential for environmental planning and resources management. In this study, we proposed a hierarchical approach for landscape classification and evaluation by characterizing landscape spatial patterns across different hierarchical levels. The case study site is the Red Hills region of northern Florida and southwestern Georgia, well known for its biodiversity, historic resources, and scenic beauty. We used one Landsat Enhanced Thematic Mapper image to extract land-use/-cover information. Then, we employed principal-component analysis to help identify key class-level landscape metrics for forests at different hierarchical levels, namely, open pine, upland pine, and forest as a whole. We found that the key class-level landscape metrics varied across different hierarchical levels. Compared with forest as a whole, open pine forest is much more fragmented. The landscape metric, such as CONTIG_MN, which measures whether pine patches are contiguous or not, is more important to characterize the spatial pattern of pine forest than to forest as a whole. This suggests that different metric sets should be used to characterize landscape patterns at different hierarchical levels. We further used these key metrics, along with the total class area, to classify and evaluate subwatersheds through cluster analysis. This study demonstrates a promising approach that can be used to integrate spatial patterns and processes for hierarchical forest landscape planning and management.
Modular and hierarchical structure of social contact networks
NASA Astrophysics Data System (ADS)
Ge, Yuanzheng; Song, Zhichao; Qiu, Xiaogang; Song, Hongbin; Wang, Yong
2013-10-01
Social contact networks exhibit overlapping communities, hierarchical structure and a spatially correlated nature. We propose a mixing pattern of modular and growing hierarchical structures to reconstruct social contact networks by using an individual’s geospatial distribution information in the real world. The hierarchical structure of social contact networks is defined based on the spatial distance between individuals, and edges among individuals are added in turn from the modular layer to the highest layer. It is a gradual process to construct the hierarchical structure: from the basic modular model up to the global network. The proposed model not only shows hierarchically increasing degree distribution and large clustering coefficients in communities, but also exhibits spatial clustering features of individual distributions. As an evaluation of the method, we reconstruct a hierarchical contact network based on the investigation data of a university. Transmission experiments of influenza H1N1 are carried out on the generated social contact networks, and results show that the constructed network efficiently reproduces the dynamic process of an outbreak and supports the evaluation of interventions. The reproduced spread process shows that the spatial clustering of infection accords with the clustering of the network topology. Moreover, the effect of individual topological character on the spread of influenza is analyzed, and the experimental results indicate that the spread is limited by individual daily contact patterns and local clustering topology rather than individual degree.
Outline for a theory of intelligence
NASA Technical Reports Server (NTRS)
Albus, James S.
1991-01-01
Intelligence is defined as that which produces successful behavior. Intelligence is assumed to result from natural selection. A model is proposed that integrates knowledge from research in both natural and artificial systems. The model consists of a hierarchical system architecture wherein: (1) control bandwidth decreases about an order of magnitude at each higher level, (2) perceptual resolution of spatial and temporal patterns contracts about an order-of-magnitude at each higher level, (3) goals expand in scope and planning horizons expand in space and time about an order-of-magnitude at each higher level, and (4) models of the world and memories of events expand their range in space and time by about an order-of-magnitude at each higher level. At each level, functional modules perform behavior generation (task decomposition planning and execution), world modeling, sensory processing, and value judgment. Sensory feedback control loops are closed at every level.
Nanoclusters first: a hierarchical phase transformation in a novel Mg alloy
NASA Astrophysics Data System (ADS)
Okuda, Hiroshi; Yamasaki, Michiaki; Kawamura, Yoshihito; Tabuchi, Masao; Kimizuka, Hajime
2015-09-01
The Mg-Y-Zn ternary alloy system contains a series of novel structures known as long-period stacking ordered (LPSO) structures. The formation process and its key concept from the viewpoint of phase transitions are not yet clear. The current study reveals that the phase transformation process is not a traditional spinodal decomposition or structural transformation but rather a novel hierarchical phase transformation. In this transformation, clustering occurs first, and the spatial rearrangement of the clusters induces a secondary phase transformation that eventually leads to two-dimensional ordering of the clusters. The formation process was examined using in situ synchrotron radiation small-angle X-ray scattering (SAXS). Rapid quenching from the liquid alloy into thin ribbons yielded strongly supersaturated amorphous samples. The samples were heated at a constant rate of 10 K/min, and the scattering patterns were acquired. The SAXS analysis indicated that small clusters grew to sizes of 0.2 nm after they crystallized. The clusters, distributed randomly in space, grew and eventually transformed into a microstructure with two well-defined cluster-cluster distances, one for the segregation periodicity of the LPSO structure and the other for the in-plane ordering in the segregated layer. This transformation into the LPSO structure concomitantly introduces the periodic stacking faults required for the 18R structures.
A Hierarchical Algorithm for Fast Debye Summation with Applications to Small Angle Scattering
Gumerov, Nail A.; Berlin, Konstantin; Fushman, David; Duraiswami, Ramani
2012-01-01
Debye summation, which involves the summation of sinc functions of distances between all pairs of atoms in three-dimensional space, arises in computations performed in crystallography, small/wide angle X-ray scattering (SAXS/WAXS) and small angle neutron scattering (SANS). Direct evaluation of the Debye summation has quadratic complexity, which results in a computational bottleneck when determining crystal properties, or running structure refinement protocols that involve SAXS or SANS, even for moderately sized molecules. We present a fast approximation algorithm that efficiently computes the summation to any prescribed accuracy ε in linear time. The algorithm is similar to the fast multipole method (FMM), and is based on a hierarchical spatial decomposition of the molecule coupled with local harmonic expansions and translation of these expansions. An even more efficient implementation is possible when the scattering profile is all that is required, as in small angle scattering (SAS) reconstruction of macromolecules. We examine the relationship of the proposed algorithm to existing approximate methods for profile computations, and show that these methods may result in inaccurate profile computations, unless an error bound derived in this paper is used. Our theoretical and computational results show orders of magnitude improvement in computational complexity over existing methods, while maintaining prescribed accuracy. PMID:22707386
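For orientation, the following sketch evaluates the direct O(N^2) Debye sum that the hierarchical algorithm is designed to replace; atom positions, form factors, and q values are synthetic, and the FMM-like acceleration itself is not reproduced here.

```python
import numpy as np

def debye_profile_direct(coords, f, q):
    """Direct O(N^2) Debye sum: I(q) = sum_ij f_i f_j sin(q r_ij) / (q r_ij)."""
    diff = coords[:, None, :] - coords[None, :, :]
    r = np.linalg.norm(diff, axis=-1)                 # pairwise distances, shape (N, N)
    ff = np.outer(f, f)                               # products of form factors
    I = np.empty_like(q)
    for k, qk in enumerate(q):
        # np.sinc(x) = sin(pi x)/(pi x), so sinc(q r / pi) = sin(q r)/(q r), with sinc(0) = 1.
        I[k] = np.sum(ff * np.sinc(qk * r / np.pi))
    return I

rng = np.random.default_rng(1)
coords = rng.normal(scale=10.0, size=(200, 3))        # toy "atom" positions
f = np.ones(200)                                      # constant form factors for illustration
q = np.linspace(0.01, 0.5, 50)                        # scattering vector magnitudes
print(debye_profile_direct(coords, f, q)[:5])
```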
3D reconstruction from non-uniform point clouds via local hierarchical clustering
NASA Astrophysics Data System (ADS)
Yang, Jiaqi; Li, Ruibo; Xiao, Yang; Cao, Zhiguo
2017-07-01
Raw scanned 3D point clouds are usually irregularly distributed due to inherent shortcomings of laser sensors, which poses a great challenge for high-quality 3D surface reconstruction. This paper tackles this problem by proposing a local hierarchical clustering (LHC) method to improve the consistency of the point distribution. Specifically, LHC consists of two steps: 1) adaptive octree-based decomposition of 3D space, and 2) hierarchical clustering. The former aims at reducing the computational complexity and the latter transforms the non-uniform point set into a uniform one. Experimental results on real-world scanned point clouds validate the effectiveness of our method from both qualitative and quantitative aspects.
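A much-simplified stand-in for the idea, assuming a fixed cell size rather than the paper's adaptive octree and local hierarchical clustering, is sketched below: points are grouped by cubic cell and each group is replaced by its centroid, which evens out the sampling density.

```python
import numpy as np

def voxel_uniformize(points, cell_size):
    """Replace all points falling in the same cubic cell by their centroid.

    A crude stand-in for adaptive octree decomposition plus local clustering:
    it evens out point density at the resolution of `cell_size`.
    """
    keys = np.floor(points / cell_size).astype(np.int64)          # integer cell index per point
    order = np.lexsort((keys[:, 2], keys[:, 1], keys[:, 0]))      # group identical cells together
    keys, points = keys[order], points[order]
    boundaries = np.any(np.diff(keys, axis=0) != 0, axis=1)
    groups = np.split(points, np.flatnonzero(boundaries) + 1)
    return np.array([g.mean(axis=0) for g in groups])

rng = np.random.default_rng(2)
dense = rng.normal(0.0, 0.05, size=(5000, 3))                     # over-sampled patch
sparse = rng.uniform(-1.0, 1.0, size=(500, 3))                    # sparsely sampled region
cloud = np.vstack([dense, sparse])
print(cloud.shape, voxel_uniformize(cloud, cell_size=0.1).shape)
```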
Parallelization of PANDA discrete ordinates code using spatial decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humbert, P.
2006-07-01
We present the parallel method, based on spatial domain decomposition, implemented in the 2D and 3D versions of the discrete ordinates code PANDA. The spatial mesh is orthogonal and the spatial domain decomposition is Cartesian. For 3D problems a 3D Cartesian domain topology is created and the parallel method is based on a domain diagonal plane ordered sweep algorithm. The parallel efficiency of the method is improved by directions and octants pipelining. The implementation of the algorithm is straightforward using MPI blocking point to point communications. The efficiency of the method is illustrated by an application to the 3D-Ext C5G7 benchmark of the OECD/NEA. (authors)
Progressive Precision Surface Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duchaineau, M; Joy, KJ
2002-01-11
We introduce a novel wavelet decomposition algorithm that makes a number of powerful new surface design operations practical. Wavelets, and hierarchical representations generally, have held promise to facilitate a variety of design tasks in a unified way by approximating results very precisely, thus avoiding a proliferation of undergirding mathematical representations. However, traditional wavelet decomposition is defined from fine to coarse resolution, thus limiting its efficiency for highly precise surface manipulation when attempting to create new non-local editing methods. Our key contribution is the progressive wavelet decomposition algorithm, a general-purpose coarse-to-fine method for hierarchical fitting, based in this paper on an underlying multiresolution representation called dyadic splines. The algorithm requests input via a generic interval query mechanism, allowing a wide variety of non-local operations to be quickly implemented. The algorithm performs work proportionate to the tiny compressed output size, rather than to some arbitrarily high resolution that would otherwise be required, thus increasing performance by several orders of magnitude. We describe several design operations that are made tractable because of the progressive decomposition. Free-form pasting is a generalization of the traditional control-mesh edit, but for which the shape of the change is completely general and where the shape can be placed using a free-form deformation within the surface domain. Smoothing and roughening operations are enhanced so that an arbitrary loop in the domain specifies the area of effect. Finally, the sculpting effect of moving a tool shape along a path is simulated.
The organisation of spatial and temporal relations in memory.
Rondina, Renante; Curtiss, Kaitlin; Meltzer, Jed A; Barense, Morgan D; Ryan, Jennifer D
2017-04-01
Episodic memories are comprised of details of "where" and "when"; spatial and temporal relations, respectively. However, evidence from behavioural, neuropsychological, and neuroimaging studies has provided mixed interpretations about how memories for spatial and temporal relations are organised-they may be hierarchical, fully interactive, or independent. In the current study, we examined the interaction of memory for spatial and temporal relations. Using explicit reports and eye-tracking, we assessed younger and older adults' memory for spatial and temporal relations of objects that were presented singly across time in unique spatial locations. Explicit change detection of spatial relations was affected by a change in temporal relations, but explicit change detection of temporal relations was not affected by a change in spatial relations. Younger and older adults showed eye movement evidence of incidental memory for temporal relations, but only younger adults showed eye movement evidence of incidental memory for spatial relations. Together, these findings point towards a hierarchical organisation of relational memory. The implications of these findings are discussed in the context of the neural mechanisms that may support such a hierarchical organisation of memory.
Sato, Naoyuki; Yamaguchi, Yoko
2009-06-01
The human cognitive map is known to be hierarchically organized, consisting of a set of perceptually clustered landmarks. Patient studies have demonstrated that these cognitive maps are maintained by the hippocampus, while the neural dynamics are still poorly understood. The authors have shown that the neural dynamic "theta phase precession" observed in the rodent hippocampus may be capable of forming hierarchical cognitive maps in humans. In the model, a visual input sequence consisting of object and scene features in the central and peripheral visual fields, respectively, results in the formation of a hierarchical cognitive map for object-place associations. Surprisingly, it is possible for such a complex memory structure to be formed in a few seconds. In this paper, we evaluate the memory retrieval of object-place associations in the hierarchical network formed by theta phase precession. The results show that multiple object-place associations can be retrieved with the initial cue of a scene input. Importantly, owing to the wide-to-narrow unidirectional connections among scene units, the spatial area for object-place retrieval can be controlled by the spatial area of the initial cue input. These results indicate that the hierarchical cognitive maps have computational advantages for the spatial-area-selective retrieval of multiple object-place associations. Theta phase precession dynamics is suggested as a fundamental neural mechanism of the human cognitive map.
We introduce a hierarchical optimization framework for spatially targeting green infrastructure (GI) incentive policies in order to meet objectives related to cost and environmental effectiveness. The framework explicitly simulates the interaction between multiple levels of polic...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Haiqing; Liu, Xiaoyan; Huang, Jianguo, E-mail: jghuang@zju.edu.cn
Bio-inspired, tubular structured hierarchical mesoporous titania material was designed and fabricated employing natural cellulosic substance (cotton) as hard template and cetyltrimethylammonium bromide (CTAB) surfactant as soft template by a one-pot sol-gel method. The titania material faithfully records the hierarchical structure of the cellulosic template, possesses a large specific surface area (40.23 m2/g), and shows high photocatalytic activity in the photodegradation of methylene blue under UV light irradiation.
A spatial analysis of hierarchical waste transport structures under growing demand.
Tanguy, Audrey; Glaus, Mathias; Laforest, Valérie; Villot, Jonathan; Hausler, Robert
2016-10-01
The design of waste management systems rarely accounts for the spatio-temporal evolution of the demand. However, recent studies suggest that this evolution affects the planning of waste management activities like the choice and location of treatment facilities. As a result, the transport structure could also be affected by these changes. The objective of this paper is to study the influence of the spatio-temporal evolution of the demand on the strategic planning of a waste transport structure. More particularly, this study aims at evaluating the effect of varying spatial parameters on the economic performance of hierarchical structures (with one transfer station). To this end, three consecutive generations of three different spatial distributions were tested for hierarchical and non-hierarchical transport structures based on cost minimization. Results showed that a hierarchical structure is economically viable for large and clustered spatial distributions. The distance parameter was decisive, but the loading ratio of trucks and the formation of clusters of sources also impacted the attractiveness of the transfer station. Thus the territory's morphology should influence strategies regarding the installation of transfer stations. Spatially explicit tools that take into account the territory's evolution, such as the transport model presented in this work, are needed to help waste managers in the strategic planning of waste transport structures. © The Author(s) 2016.
Fei-Hai Yu; Martin Schutz; Deborah S. Page-Dumroese; Bertil O. Krusi; Jakob Schneller; Otto Wildi; Anita C. Risch
2011-01-01
Tussocks of graminoids can induce spatial heterogeneity in soil properties in dry areas with discontinuous vegetation cover, but little is known about the situation in areas with continuous vegetation and no study has tested whether tussocks can induce spatial heterogeneity in litter decomposition. In a subalpine grassland in the Central Alps where vegetation cover is...
Exploring Galaxy Formation and Evolution via Structural Decomposition
NASA Astrophysics Data System (ADS)
Kelvin, Lee; Driver, Simon; Robotham, Aaron; Hill, David; Cameron, Ewan
2010-06-01
The Galaxy And Mass Assembly (GAMA) structural decomposition pipeline (GAMA-SIGMA: Structural Investigation of Galaxies via Model Analysis) will provide multi-component information for a sample of ~12,000 galaxies across 9 bands ranging from near-UV to near-IR. This will allow the relationship between structural properties and broadband, optical-to-near-IR, spectral energy distributions of bulge, bar, and disk components to be explored, revealing clues as to the history of baryonic mass assembly within a hierarchical clustering framework. Data are initially taken from the SDSS & UKIDSS-LAS surveys to test the robustness of our automated decomposition pipeline. This will eventually be replaced with data from the forthcoming higher-resolution VST & VISTA surveys, expanding the sample to ~30,000 galaxies.
Ray tracing a three-dimensional scene using a hierarchical data structure
Wald, Ingo; Boulos, Solomon; Shirley, Peter
2012-09-04
Ray tracing a three-dimensional scene made up of geometric primitives that are spatially partitioned into a hierarchical data structure. One example embodiment is a method for ray tracing a three-dimensional scene made up of geometric primitives that are spatially partitioned into a hierarchical data structure. In this example embodiment, the hierarchical data structure includes at least a parent node and a corresponding plurality of child nodes. The method includes a first act of determining that a first active ray in the packet hits the parent node and a second act of descending to each of the plurality of child nodes.
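A minimal sketch of the claimed traversal rule is given below, assuming axis-aligned bounding boxes and a dictionary-based node layout (both illustrative): the packet descends into a node's children as soon as the first active ray hits the node, and leaves report their primitives as candidate hits.

```python
import numpy as np

def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: does the ray origin + t*direction (t >= 0) intersect the box?"""
    with np.errstate(divide='ignore', invalid='ignore'):
        t1 = (box_min - origin) / direction
        t2 = (box_max - origin) / direction
    t_near = np.max(np.minimum(t1, t2))
    t_far = np.min(np.maximum(t1, t2))
    return t_far >= max(t_near, 0.0)

def traverse(node, origins, directions, active):
    """Descend into a node's children as soon as one active ray of the packet hits the node."""
    if not any(ray_hits_aabb(origins[i], directions[i], node['min'], node['max'])
               for i in active):
        return []                                    # whole packet misses this subtree
    if 'children' not in node:                       # leaf: report its primitives as candidates
        return list(node['primitives'])
    hits = []
    for child in node['children']:
        hits += traverse(child, origins, directions, active)
    return hits

# Tiny two-level hierarchy over axis-aligned boxes (illustrative data, not a real scene).
leaf_a = {'min': np.zeros(3), 'max': np.ones(3), 'primitives': ['triangle-0']}
leaf_b = {'min': np.array([2.0, 0, 0]), 'max': np.array([3.0, 1, 1]), 'primitives': ['triangle-1']}
root = {'min': np.zeros(3), 'max': np.array([3.0, 1, 1]), 'children': [leaf_a, leaf_b]}

origins = np.array([[-1.0, 0.5, 0.5], [-1.0, 0.5, 0.5]])
directions = np.array([[1.0, 0.0, 0.0], [1.0, 0.001, 0.0]])   # a small packet of two rays
print(traverse(root, origins, directions, active=[0, 1]))
```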
NASA Astrophysics Data System (ADS)
Liu, Ruiping; Ren, Feng; Yang, Jinlin; Su, Weiming; Sun, Zhiming; Zhang, Lei; Wang, Chang-an
2016-03-01
Hierarchically porous hybrid TiO2 hollow spheres were solvothermally synthesized successfully by using tetrabutyl titanate as titanium precursor and hydrated metal sulfates as soft templates. The as-prepared TiO2 spheres with hierarchically pore structures and high specific surface area and pore volume consisted of highly crystallized anatase TiO2 nanocrystals hybridized with a small amount of metal oxide from the hydrated sulfate. The proposed hydrated-sulfate assisted solvothermal (HAS) synthesis strategy was demonstrated to be widely applicable to various systems. Evaluation of the hybrid TiO2 hollow spheres for the photo-decomposition of methyl orange (MO) under visible-light irradiation revealed that they exhibited excellent photocatalytic activity and durability.
NASA Astrophysics Data System (ADS)
Gan, Chee Kwan; Challacombe, Matt
2003-05-01
Recently, early onset linear scaling computation of the exchange-correlation matrix has been achieved using hierarchical cubature [J. Chem. Phys. 113, 10037 (2000)]. Hierarchical cubature differs from other methods in that the integration grid is adaptive and purely Cartesian, which allows for a straightforward domain decomposition in parallel computations; the volume enclosing the entire grid may be simply divided into a number of nonoverlapping boxes. In our data parallel approach, each box requires only a fraction of the total density to perform the necessary numerical integrations due to the finite extent of Gaussian-orbital basis sets. This inherent data locality may be exploited to reduce communications between processors as well as to avoid memory and copy overheads associated with data replication. Although the hierarchical cubature grid is Cartesian, naive boxing leads to irregular work loads due to strong spatial variations of the grid and the electron density. In this paper we describe equal time partitioning, which employs time measurement of the smallest sub-volumes (corresponding to the primitive cubature rule) to load balance grid-work for the next self-consistent-field iteration. After start-up from a heuristic center of mass partitioning, equal time partitioning exploits smooth variation of the density and grid between iterations to achieve load balance. With the 3-21G basis set and a medium quality grid, equal time partitioning applied to taxol (62 heavy atoms) attained a speedup of 61 out of 64 processors, while for a 110 molecule water cluster at standard density it achieved a speedup of 113 out of 128. The efficiency of equal time partitioning applied to hierarchical cubature improves as the grid work per processor increases. With a fine grid and the 6-311G(df,p) basis set, calculations on the 26 atom molecule α-pinene achieved a parallel efficiency better than 99% with 64 processors. For more coarse grained calculations, superlinear speedups are found to result from reduced computational complexity associated with data parallelism.
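The load-balancing idea can be illustrated with a small sketch, assuming the per-box integration times measured in the previous SCF iteration are available as an array; a greedy sweep then closes each processor's chunk once it reaches the ideal share. This is a simplified stand-in for the described equal time partitioning, not the actual implementation.

```python
import numpy as np

def equal_time_partition(box_times, n_procs):
    """Split an ordered list of boxes into n_procs contiguous chunks of ~equal total time.

    Greedy sweep: close the current chunk once its accumulated time reaches the ideal share.
    """
    target = box_times.sum() / n_procs
    chunks, start, acc = [], 0, 0.0
    for i, t in enumerate(box_times):
        acc += t
        remaining_boxes = len(box_times) - (i + 1)
        remaining_chunks = n_procs - len(chunks) - 1
        if acc >= target and remaining_boxes >= remaining_chunks and remaining_chunks > 0:
            chunks.append((start, i + 1)); start, acc = i + 1, 0.0
    chunks.append((start, len(box_times)))
    return chunks

rng = np.random.default_rng(3)
# Hypothetical measured integration times per box from the previous iteration (non-uniform).
times = rng.lognormal(mean=0.0, sigma=1.5, size=200)
parts = equal_time_partition(times, n_procs=8)
print([round(times[a:b].sum(), 1) for a, b in parts])   # per-processor workloads, roughly equal
```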
Improving 3D Wavelet-Based Compression of Hyperspectral Images
NASA Technical Reports Server (NTRS)
Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh
2009-01-01
Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, "images" signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a manner similar to that of a baseline hyperspectral-image-compression method. The mean values are encoded in the compressed bit stream and added back to the data at the appropriate decompression step. The overhead incurred by encoding the mean values (only a few bits per spectral band) is negligible with respect to the huge size of a typical hyperspectral data set. The other method is denoted modified decomposition. This method is so named because it involves a modified version of a commonly used multiresolution wavelet decomposition, known in the art as the 3D Mallat decomposition, in which (a) the first of multiple stages of a 3D wavelet transform is applied to the entire dataset and (b) subsequent stages are applied only to the horizontally-, vertically-, and spectrally-low-pass subband from the preceding stage. In the modified decomposition, in stages after the first, not only is the spatially-low-pass, spectrally-low-pass subband further decomposed, but also spatially-low-pass, spectrally-high-pass subbands are further decomposed spatially. Either method can be used alone to improve the quality of a reconstructed image. Alternatively, the two methods can be combined by first performing modified decomposition, then subtracting the mean values from spatial planes of spatially-low-pass subbands.
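The mean-subtraction step can be sketched in a few lines, assuming the spatially-low-pass subband is available as a (bands x rows x cols) array; the means removed here are the small per-band side information that would be written to the bit stream and added back on decompression.

```python
import numpy as np

def subtract_plane_means(lowpass_subband):
    """Remove the mean of each spatial plane (one per spectral band) of a low-pass subband.

    Returns zero-mean data plus the means, which would be stored in the compressed
    bit stream (a few values per band) and added back after decompression.
    """
    means = lowpass_subband.mean(axis=(1, 2), keepdims=True)   # one mean per spectral plane
    return lowpass_subband - means, means

rng = np.random.default_rng(4)
# Toy spatially-low-pass subband: 16 spectral bands, 32x32 spatial samples, large per-band offsets.
subband = rng.normal(size=(16, 32, 32)) + rng.uniform(50, 500, size=(16, 1, 1))
zero_mean, means = subtract_plane_means(subband)
print(np.allclose(zero_mean + means, subband), np.abs(zero_mean.mean(axis=(1, 2))).max())
```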
Task Decomposition Module For Telerobot Trajectory Generation
NASA Astrophysics Data System (ADS)
Wavering, Albert J.; Lumia, Ron
1988-10-01
A major consideration in the design of trajectory generation software for a Flight Telerobotic Servicer (FTS) is that the FTS will be called upon to perform tasks which require a diverse range of manipulator behaviors and capabilities. In a hierarchical control system where tasks are decomposed into simpler and simpler subtasks, the task decomposition module which performs trajectory planning and execution should therefore be able to accommodate a wide range of algorithms. In some cases, it will be desirable to plan a trajectory for an entire motion before manipulator motion commences, as when optimizing over the entire trajectory. Many FTS motions, however, will be highly sensory-interactive, such as moving to attain a desired position relative to a non-stationary object whose position is periodically updated by a vision system. In this case, the time-varying nature of the trajectory may be handled either by frequent replanning using updated sensor information, or by using an algorithm which creates a less specific state-dependent plan that determines the manipulator path as the trajectory is executed (rather than a priori). This paper discusses a number of trajectory generation techniques from these categories and how they may be implemented in a task decomposition module of a hierarchical control system. The structure, function, and interfaces of the proposed trajectory generation module are briefly described, followed by several examples of how different algorithms may be performed by the module. The proposed task decomposition module provides a logical structure for trajectory planning and execution, and supports a large number of published trajectory generation techniques.
The Variance of Intraclass Correlations in Three- and Four-Level Models
ERIC Educational Resources Information Center
Hedges, Larry V.; Hedberg, E. C.; Kuyper, Arend M.
2012-01-01
Intraclass correlations are used to summarize the variance decomposition in populations with multilevel hierarchical structure. There has recently been considerable interest in estimating intraclass correlations from surveys or designed experiments to provide design parameters for planning future large-scale randomized experiments. The large…
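As a worked illustration (with hypothetical variance components, and using the common convention that each level's intraclass correlation is its variance share of the total), the snippet below computes level-specific intraclass correlations for a three-level design.

```python
def intraclass_correlations(variance_components):
    """Level-specific intraclass correlations for a multilevel (hierarchical) model.

    Each ICC is taken here as that level's variance component divided by the total
    variance (one common convention; names and numbers are illustrative).
    """
    total = sum(variance_components.values())
    return {level: var / total for level, var in variance_components.items()}

# Hypothetical three-level design: students nested in classrooms nested in schools.
components = {'school': 0.15, 'classroom': 0.10, 'student': 0.75}
print(intraclass_correlations(components))   # e.g. school ICC = 0.15 / 1.00 = 0.15
```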
Rodhouse, T.J.; Irvine, K.M.; Vierling, K.T.; Vierling, L.A.
2011-01-01
Monitoring programs that evaluate restoration and inform adaptive management are important for addressing environmental degradation. These efforts may be well served by spatially explicit hierarchical approaches to modeling because of unavoidable spatial structure inherited from past land use patterns and other factors. We developed Bayesian hierarchical models to estimate trends from annual density counts observed in a spatially structured wetland forb (Camassia quamash [camas]) population following the cessation of grazing and mowing on the study area, and in a separate reference population of camas. The restoration site was bisected by roads and drainage ditches, resulting in distinct subpopulations ("zones") with different land use histories. We modeled this spatial structure by fitting zone-specific intercepts and slopes. We allowed spatial covariance parameters in the model to vary by zone, as in stratified kriging, accommodating anisotropy and improving computation and biological interpretation. Trend estimates provided evidence of a positive effect of passive restoration, and the strength of evidence was influenced by the amount of spatial structure in the model. Allowing trends to vary among zones and accounting for topographic heterogeneity increased precision of trend estimates. Accounting for spatial autocorrelation shifted parameter coefficients in ways that varied among zones depending on strength of statistical shrinkage, autocorrelation and topographic heterogeneity-a phenomenon not widely described. Spatially explicit estimates of trend from hierarchical models will generally be more useful to land managers than pooled regional estimates and provide more realistic assessments of uncertainty. The ability to grapple with historical contingency is an appealing benefit of this approach.
The MIL-88A-Derived Fe3O4-Carbon Hierarchical Nanocomposites for Electrochemical Sensing
Wang, Li; Zhang, Yayun; Li, Xia; Xie, Yingzhen; He, Juan; Yu, Jie; Song, Yonghai
2015-01-01
Metal or metal oxide/carbon nanocomposites with hierarchical superstructures have become some of the most promising functional materials in sensors, catalysis, energy conversion, etc. In this work, novel hierarchical Fe3O4/carbon superstructures have been fabricated based on a metal-organic framework (MOF)-derived method. Three kinds of Fe-MOFs (MIL-88A) with different morphologies were prepared beforehand as templates, and then pyrolyzed to fabricate the corresponding novel hierarchical Fe3O4/carbon superstructures. Systematic studies on the thermal decomposition process of the three kinds of MIL-88A and the effect of template morphology on the products were carried out in detail. Scanning electron microscopy, transmission electron microscopy, X-ray powder diffraction, X-ray photoelectron spectroscopy and thermal analysis were employed to investigate the hierarchical Fe3O4/carbon superstructures. Based on these resulting hierarchical Fe3O4/carbon superstructures, a novel and sensitive nonenzymatic N-acetyl cysteine sensor was developed. The porous and hierarchical superstructures and large surface area of the as-formed Fe3O4/carbon superstructures contributed to the good electrocatalytic activity of the prepared sensor towards the oxidation of N-acetyl cysteine. The proposed preparation method of the hierarchical Fe3O4/carbon superstructures is simple, efficient, cheap, and amenable to mass production. It might open up a new way to prepare hierarchical superstructures. PMID:26387535
NASA Astrophysics Data System (ADS)
Li, Bin
Spatial control behaviors account for a large proportion of human everyday activities, from normal daily tasks, such as reaching for objects, to specialized tasks, such as driving, surgery, or operating equipment. These behaviors involve intensive interactions within internal processes (i.e., cognitive, perceptual, and motor control) and with the physical world. This dissertation builds on the concept of an interaction pattern and a hierarchical functional model. An interaction pattern represents a type of behavior synergy through which humans coordinate cognitive, perceptual, and motor control processes. It contributes to the construction of the hierarchical functional model, which delineates human spatial control behaviors as the coordination of three functional subsystems: planning, guidance, and tracking/pursuit. This dissertation formalizes and validates these two theories and extends them to the investigation of human spatial control skills, encompassing development and assessment. Specifically, the dissertation first presents an overview of studies of human spatial control skills, covering definition, characteristics, development, and assessment, to provide theoretical evidence for the concept of interaction pattern and the hierarchical functional model. Next, the human experiments for collecting motion and gaze data, together with techniques to register and classify gaze data, are described. The dissertation then elaborates and mathematically formalizes the hierarchical functional model and the concept of interaction pattern. These theories enable the construction of a succinct simulation model that can reproduce a variety of human performance with a minimal set of hypotheses, which validates the hierarchical functional model as a normative framework for interpreting human spatial control behaviors. The dissertation then investigates human skill development and captures the emergence of interaction patterns. The final part of the dissertation applies the hierarchical functional model to skill assessment and introduces techniques to capture interaction patterns both from the top down, using their geometric features, and from the bottom up, using their dynamical characteristics. The validity and generality of the skill assessment are illustrated using two experiments: remote-control flight and laparoscopic surgical training.
The Hierarchical Database Decomposition Approach to Database Concurrency Control.
1984-12-01
approach, we postulate a model of transaction behavior under two phase locking as shown in Figure 39(a) and a model of that under multiversion ...transaction put in the block queue until it is reactivated. Under multiversion timestamping, however, the request is always granted. Once the request
Improvement in Recursive Hierarchical Segmentation of Data
NASA Technical Reports Server (NTRS)
Tilton, James C.
2006-01-01
A further modification has been made in the algorithm and implementing software reported in Modified Recursive Hierarchical Segmentation of Data (GSC-14681-1), NASA Tech Briefs, Vol. 30, No. 6 (June 2006), page 51. That software performs recursive hierarchical segmentation of data having spatial characteristics (e.g., spectral-image data). The output of a prior version of the software contained artifacts, including spurious segmentation-image regions bounded by processing-window edges. The modification for suppressing the artifacts, mentioned in the cited article, was addition of a subroutine that analyzes data in the vicinities of seams to find pairs of regions that tend to lie adjacent to each other on opposite sides of the seams. Within each such pair, pixels in one region that are more similar to pixels in the other region are reassigned to the other region. The present modification provides for a parameter ranging from 0 to 1 for controlling the relative priority of merges between spatially adjacent and spatially non-adjacent regions. At 1, spatially-adjacent-/spatially-non-adjacent-region merges have equal priority. At 0, only spatially-adjacent-region merges (no spectral clustering) are allowed. Between 0 and 1, spatially-adjacent-region merges have priority over spatially-non-adjacent ones.
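One plausible way to realize the described priority parameter, sketched below with illustrative names, is to scale the dissimilarity of spatially non-adjacent (spectral-clustering) merge candidates by the weight: at 1 the two kinds of merges compete equally, and at 0 non-adjacent merges are never selected. This is a conceptual sketch, not the actual RHSEG implementation.

```python
def merge_priority(dissimilarity, spatially_adjacent, weight):
    """Effective merge cost under a spatial/spectral priority weight in [0, 1].

    weight = 1.0 -> adjacent and non-adjacent merges compete on equal terms;
    weight = 0.0 -> non-adjacent (spectral-clustering) merges are disallowed.
    """
    if spatially_adjacent:
        return dissimilarity
    if weight == 0.0:
        return float('inf')            # spectral clustering switched off
    return dissimilarity / weight      # smaller weight -> non-adjacent merges deprioritized

# Two hypothetical merge candidates: an adjacent pair and a slightly more similar non-adjacent pair.
candidates = [
    {'pair': ('A', 'B'), 'dissimilarity': 2.0, 'adjacent': True},
    {'pair': ('A', 'C'), 'dissimilarity': 1.5, 'adjacent': False},
]
for w in (1.0, 0.5, 0.0):
    best = min(candidates, key=lambda c: merge_priority(c['dissimilarity'], c['adjacent'], w))
    print(w, best['pair'])             # the non-adjacent pair wins only at weight 1.0
```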
NASA Astrophysics Data System (ADS)
León, Madeleine; Escalante-Ramirez, Boris
2013-11-01
Knee osteoarthritis (OA) is characterized by the morphological degeneration of cartilage. Efficient segmentation of cartilage is important for cartilage damage diagnosis and to support therapeutic responses. We present a method for knee cartilage segmentation in magnetic resonance images (MRI). Our method incorporates the Hermite Transform to obtain a hierarchical decomposition of contours which describe knee cartilage shapes. Then, we compute a statistical model of the contour of interest from a set of training images. Thereby, our Hierarchical Active Shape Model (HASM) captures a large range of shape variability even from a small group of training samples, improving segmentation accuracy. The method was trained with a set of 16 knee MRI scans and evaluated with the leave-one-out method.
A hierarchical spatial framework for forest landscape planning.
Pete Bettinger; Marie Lennette; K. Norman Johnson; Thomas A. Spies
2005-01-01
A hierarchical spatial framework for large-scale, long-term forest landscape planning is presented along with example policy analyses for a 560,000 ha area of the Oregon Coast Range. The modeling framework suggests utilizing the detail provided by satellite imagery to track forest vegetation condition and for representation of fine-scale features, such as riparian...
NASA Astrophysics Data System (ADS)
Azarnova, T. V.; Titova, I. A.; Barkalov, S. A.
2018-03-01
The article presents an algorithm for obtaining an integral assessment of the quality of an organization from the perspective of customers, based on the method of aggregating linguistic information over a multilevel hierarchical system of quality assessment. The algorithm is constructive: it not only yields an integral evaluation but also supports the development of a quality improvement strategy based on the method of linguistic decomposition, which identifies the minimum set of areas of work with clients whose quality changes will achieve the required level of the integrated quality assessment.
Morphological decomposition of 2-D binary shapes into convex polygons: a heuristic algorithm.
Xu, J
2001-01-01
In many morphological shape decomposition algorithms, either a shape can only be decomposed into shape components of extremely simple forms or a time consuming search process is employed to determine a decomposition. In this paper, we present a morphological shape decomposition algorithm that decomposes a two-dimensional (2-D) binary shape into a collection of convex polygonal components. A single convex polygonal approximation for a given image is first identified. This first component is determined incrementally by selecting a sequence of basic shape primitives. These shape primitives are chosen based on shape information extracted from the given shape at different scale levels. Additional shape components are identified recursively from the difference image between the given image and the first component. Simple operations are used to repair certain concavities caused by the set difference operation. The resulting hierarchical structure provides descriptions for the given shape at different detail levels. The experiments show that the decomposition results produced by the algorithm seem to be in good agreement with the natural structures of the given shapes. The computational cost of the algorithm is significantly lower than that of an earlier search-based convex decomposition algorithm. Compared to nonconvex decomposition algorithms, our algorithm allows accurate approximations for the given shapes at low coding costs.
Scale-dependent intrinsic entropies of complex time series.
Yeh, Jia-Rong; Peng, Chung-Kang; Huang, Norden E
2016-04-13
Multi-scale entropy (MSE) was developed as a measure of complexity for complex time series, and it has been applied widely in recent years. The MSE algorithm is based on the assumption that biological systems possess the ability to adapt and function in an ever-changing environment, and these systems need to operate across multiple temporal and spatial scales, such that their complexity is also multi-scale and hierarchical. Here, we present a systematic approach to apply the empirical mode decomposition algorithm, which can detrend time series on various time scales, prior to analysing a signal's complexity by measuring the irregularity of its dynamics on multiple time scales. Simulated time series of fractal Gaussian noise and human heartbeat time series were used to study the performance of this new approach. We show that our method can successfully quantify the fractal properties of the simulated time series and can accurately distinguish modulations in human heartbeat time series in health and disease. © 2016 The Author(s).
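For reference, the two standard ingredients of multi-scale entropy, coarse-graining and sample entropy, are sketched below in a compact (unoptimized) form on a synthetic white-noise signal; the empirical-mode-decomposition detrending proposed in the paper is only noted, not implemented.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r): negative log of the conditional probability that sequences
    matching for m points (within tolerance r) also match for m + 1 points.
    Compact O(N^2) implementation for illustration only."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=-1)
        return np.sum(d <= r) - len(templates)        # drop self-matches on the diagonal
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def coarse_grain(x, scale):
    """Non-overlapping averages of length `scale` (the classic MSE coarse-graining)."""
    n = len(x) // scale
    return np.asarray(x[:n * scale]).reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(5)
signal = rng.normal(size=1000)                        # white noise: entropy typically drops with scale
mse_curve = [sample_entropy(coarse_grain(signal, s)) for s in range(1, 6)]
print(np.round(mse_curve, 2))
```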
Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models
NASA Astrophysics Data System (ADS)
Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.
2017-12-01
Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluations of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicates improved prediction accuracies (median of 10-50%) but primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially-heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream measurements.
Hierarchical modeling for reliability analysis using Markov models. B.S./M.S. Thesis - MIT
NASA Technical Reports Server (NTRS)
Fagundo, Arturo
1994-01-01
Markov models represent an extremely attractive tool for the reliability analysis of many systems. However, Markov model state space grows exponentially with the number of components in a given system. Thus, for very large systems Markov modeling techniques alone become intractable in both memory and CPU time. Often a particular subsystem can be found within some larger system where the dependence of the larger system on the subsystem is of a particularly simple form. This simple dependence can be used to decompose such a system into one or more subsystems. A hierarchical technique is presented which can be used to evaluate these subsystems in such a way that their reliabilities can be combined to obtain the reliability for the full system. This hierarchical approach is unique in that it allows the subsystem model to pass multiple aggregate state information to the higher level model, allowing more general systems to be evaluated. Guidelines are developed to assist in the system decomposition. An appropriate method for determining subsystem reliability is also developed. This method gives rise to some interesting numerical issues. Numerical error due to roundoff and integration are discussed at length. Once a decomposition is chosen, the remaining analysis is straightforward but tedious. However, an approach is developed for simplifying the recombination of subsystem reliabilities. Finally, a real world system is used to illustrate the use of this technique in a more practical context.
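A toy illustration of the decomposition idea, under strong simplifying assumptions (a single aggregate number is passed up, and the higher level is just a series combination), is sketched below: a small subsystem CTMC is solved for its probability of being operational at the mission time, and that aggregate feeds the higher-level reliability calculation.

```python
import numpy as np
from scipy.linalg import expm

def subsystem_reliability(Q, p0, up_states, t):
    """Probability that a subsystem CTMC (generator Q) is in an 'up' state at time t."""
    p_t = p0 @ expm(Q * t)                # transient state probabilities
    return p_t[list(up_states)].sum()

# Toy subsystem: redundant pair with per-unit failure rate lam; states = 2 up, 1 up, 0 up.
lam = 1e-3
Q_sub = np.array([[-2 * lam, 2 * lam, 0.0],
                  [0.0,      -lam,    lam],
                  [0.0,       0.0,    0.0]])
R_sub = subsystem_reliability(Q_sub, np.array([1.0, 0.0, 0.0]), up_states=(0, 1), t=1000.0)

# The higher-level model (here simply a series combination) consumes the aggregate result
# instead of carrying the subsystem's full state space.
R_rest = np.exp(-5e-4 * 1000.0)           # hypothetical reliability of the remaining components
print(round(R_sub, 4), round(R_sub * R_rest, 4))
```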
3D tensor-based blind multispectral image decomposition for tumor demarcation
NASA Astrophysics Data System (ADS)
Kopriva, Ivica; Peršin, Antun
2010-03-01
Blind decomposition of a multi-spectral fluorescent image for tumor demarcation is formulated by exploiting the tensorial structure of the image. The first contribution of the paper is the identification of the matrix of spectral responses and the 3D tensor of spatial distributions of the materials present in the image from Tucker3 or PARAFAC models of the 3D image tensor. The second contribution is a clustering-based estimation of the number of materials present in the image as well as of the matrix of their spectral profiles. The 3D tensor of the spatial distributions of the materials is recovered through 3-mode multiplication of the multi-spectral image tensor and the inverse of the matrix of spectral profiles. Tensor representation of the multi-spectral image preserves its local spatial structure, which is lost, due to the vectorization process, when matrix factorization-based decomposition methods (such as non-negative matrix factorization and independent component analysis) are used. Superior performance of the tensor-based image decomposition over matrix factorization-based decompositions is demonstrated on an experimental red-green-blue (RGB) image with known ground truth as well as on RGB fluorescent images of skin tumor (basal cell carcinoma).
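The recovery step described here can be sketched directly in NumPy, assuming the matrix of spectral profiles has already been estimated; on a noise-free synthetic tensor, the 3-mode product with the pseudo-inverse of that matrix returns the spatial distributions exactly. Names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
rows, cols, bands, materials = 64, 64, 3, 2

# Synthetic ground truth: per-material spatial maps and spectral profiles (columns of S).
maps = rng.uniform(size=(rows, cols, materials))
S = np.array([[0.9, 0.1],
              [0.5, 0.6],
              [0.1, 0.9]])                       # bands x materials, assumed known/estimated

# Multispectral image tensor: mode-3 (spectral) mixing of the spatial maps.
X = np.einsum('rcm,bm->rcb', maps, S)

# Recovery step: 3-mode product of the image tensor with the pseudo-inverse of the
# spectral-profile matrix, i.e. maps_hat[r, c, :] = pinv(S) @ X[r, c, :].
maps_hat = np.einsum('rcb,mb->rcm', X, np.linalg.pinv(S))
print(np.allclose(maps_hat, maps))               # exact in this noise-free toy example
```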
Conversion of methanol to propylene over hierarchical HZSM-5: the effect of Al spatial distribution.
Li, Jianwen; Ma, Hongfang; Chen, Yan; Xu, Zhiqiang; Li, Chunzhong; Ying, Weiyong
2018-06-08
Different silicon sources gave rise to different spatial distributions of Al in HZSM-5, which in turn affected the hierarchical structures and catalytic performance of the desilicated zeolites. After being treated with 0.1 M NaOH, HZSM-5 zeolites synthesized with silica sol exhibited relatively widely distributed mesopores and channels, and possessed highly improved propylene selectivity and activity stability.
Sharon E. Clarke; Sandra A. Bryce
1997-01-01
This document presents two spatial scales of a hierarchical, ecoregional framework and provides a connection to both larger and smaller scale ecological classifications. The two spatial scales are subregions (1:250,000) and landscape-level ecoregions (1:100,000), or Level IV and Level V ecoregions. Level IV ecoregions were developed by the Environmental Protection...
Shpotyuk, Oleh; Ingram, Adam; Bujňáková, Zdenka; Baláž, Peter
2017-12-01
A hierarchical microstructure model considering free-volume elements at the level of interacting crystallites (non-spherical approximation) and of agglomerates of these crystallites (spherical approximation) was developed to describe free-volume evolution in mechanochemically milled As 4 S 4 /ZnS composites, employing positron annihilation spectroscopy in a lifetime measuring mode. Positron lifetime spectra were reconstructed from an unconstrained three-term decomposition procedure and further subjected to parameterization using an x3-x2-coupling decomposition algorithm. Intrinsic inhomogeneities due to coarse-grained As 4 S 4 and fine-grained ZnS nanoparticles were adequately described in terms of substitution trapping in positron and positronium (Ps, a bound positron-electron pair) states arising from interfacial triple junctions between contacting particles and from intrinsic free-volume defects in the boundary compounds. Compositionally dependent nanostructurization in the As 4 S 4 /ZnS nanocomposite system was interpreted as a conversion from o-Ps trapping sites to positron traps. The calculated trapping parameters were shown to be useful for adequately characterizing the nanospace filling in As 4 S 4 /ZnS composites.
van der Ham, Joris L
2016-05-19
Forensic entomologists can use carrion communities' ecological succession data to estimate the postmortem interval (PMI). Permutation tests of hierarchical cluster analyses of these data provide a conceptual method to estimate part of the PMI, the post-colonization interval (post-CI). This multivariate approach produces a baseline of statistically distinct clusters that reflect changes in the carrion community composition during the decomposition process. Carrion community samples of unknown post-CIs are compared with these baseline clusters to estimate the post-CI. In this short communication, I use data from previously published studies to demonstrate the conceptual feasibility of this multivariate approach. Analyses of these data produce series of significantly distinct clusters, which represent carrion communities during 1- to 20-day periods of the decomposition process. For 33 carrion community samples, collected over an 11-day period, this approach correctly estimated the post-CI within an average range of 3.1 days. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America.
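A conceptual sketch of the workflow, with synthetic abundance data and without the permutation tests, is given below: baseline samples indexed by decomposition day are clustered hierarchically into succession stages, and an unknown sample is assigned to the nearest stage, whose day range serves as the post-CI estimate.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import cdist

rng = np.random.default_rng(7)
days = np.arange(1, 21)                                   # baseline sampling days
# Toy community data: four taxa whose abundances drift smoothly through succession.
baseline = np.array([[np.exp(-((d - c) / 4.0) ** 2) for c in (3, 8, 14, 19)] for d in days])
baseline += rng.normal(scale=0.05, size=baseline.shape)

# Step 1: hierarchical clustering of the baseline samples into succession stages.
Z = linkage(baseline, method='average', metric='euclidean')
stages = fcluster(Z, t=4, criterion='maxclust')

# Step 2: assign an unknown sample to the nearest stage centroid and report its day range.
unknown = baseline[9] + rng.normal(scale=0.05, size=4)    # pretend this came from ~day 10
centroids = np.array([baseline[stages == s].mean(axis=0) for s in np.unique(stages)])
nearest = np.unique(stages)[cdist(unknown[None, :], centroids).argmin()]
print('estimated post-CI window (days):',
      days[stages == nearest].min(), '-', days[stages == nearest].max())
```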
NASA Astrophysics Data System (ADS)
Xiao, Yuanhua; Zhang, Aiqin; Liu, Shaojun; Zhao, Jihong; Fang, Shaoming; Jia, Dianzeng; Li, Feng
2012-12-01
Free-standing, porous hierarchical nanoarchitectures constructed from cobalt cobaltite (Co3O4) nanowalls have been successfully synthesized on a large scale by calcining three-dimensional (3D) hierarchical nanostructures consisting of single-crystalline cobalt carbonate hydroxide hydrate - Co(CO3)0.5(OH)·0.11H2O - nanowalls prepared with a solvothermal method. The step-by-step decomposition of the precursor generates porous Co3O4 nanowalls with a BET surface area of 88.34 m2 g-1. The as-prepared Co3O4 nanoarchitectures show specific capacitance superior to most Co3O4 supercapacitor electrode materials reported to date. After 1000 continuous charge-discharge cycles at 4 A g-1, the supercapacitors retain ca. 92.3% of their original specific capacitance. The excellent performance of the devices can be attributed to the porous, hierarchical 3D nanostructure of the materials.
Curtis, Tyler E; Roeder, Ryan K
2017-10-01
Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in magnitude by comparison. The material basis matrix calibration was more sensitive to changes in the calibration methods than the scaling factor calibration. The material basis matrix calibration significantly influenced both the quantitative and spatial accuracy of material decomposition, while the scaling factor calibration influenced quantitative but not spatial accuracy. Importantly, the median RMSE of material decomposition was as low as ~1.5 mM (~0.24 mg/mL gadolinium), which was similar in magnitude to that measured by optical spectroscopy on the same samples. The accuracy of quantitative material decomposition in photon-counting spectral CT was significantly influenced by calibration methods which must therefore be carefully considered for the intended diagnostic imaging application. © 2017 American Association of Physicists in Medicine.
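The calibration-then-decomposition workflow described above can be illustrated with a hedged sketch: ordinary least squares stands in for the paper's maximum a posteriori estimator, the five-bin basis matrix and the gadolinium/calcium/water concentrations are synthetic, and all names are hypothetical.

```python
import numpy as np

def calibrate_basis(measured, concentrations):
    """Least-squares estimate of the material basis matrix M such that
    measured ~= M @ concentrations.

    measured:       (n_bins, n_samples) attenuation per energy bin.
    concentrations: (n_materials, n_samples) known phantom compositions.
    """
    M, *_ = np.linalg.lstsq(concentrations.T, measured.T, rcond=None)
    return M.T                                    # (n_bins, n_materials)

def decompose(image, M):
    """Per-pixel material decomposition of an (n_bins, H, W) spectral image."""
    n_bins, H, W = image.shape
    pixels = image.reshape(n_bins, -1)            # (n_bins, H*W)
    conc, *_ = np.linalg.lstsq(M, pixels, rcond=None)
    return conc.reshape(-1, H, W)                 # (n_materials, H, W)

# Toy example with 5 energy bins and 3 basis materials.
rng = np.random.default_rng(1)
M_true = rng.uniform(0.1, 1.0, size=(5, 3))
calib_conc = rng.uniform(0.0, 90.0, size=(3, 12))          # 12 calibration inserts
calib_meas = M_true @ calib_conc + rng.normal(0, 0.01, (5, 12))
M_est = calibrate_basis(calib_meas, calib_conc)

sample = (M_true @ rng.uniform(0, 60, size=(3, 16 * 16))).reshape(5, 16, 16)
maps = decompose(sample, M_est)
print(maps.shape)   # (3, 16, 16) material concentration maps
```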
Ficken, Cari D; Wright, Justin P
2017-01-01
Litter quality and soil environmental conditions are well-studied drivers influencing decomposition rates, but the role played by disturbance legacy, such as fire history, in mediating these drivers is not well understood. Fire history may impact decomposition directly, through changes in soil conditions that impact microbial function, or indirectly, through shifts in plant community composition and litter chemistry. Here, we compared early-stage decomposition rates across longleaf pine forest blocks managed with varying fire frequencies (annual burns, triennial burns, fire-suppression). Using a reciprocal transplant design, we examined how litter chemistry and soil characteristics independently and jointly influenced litter decomposition. We found that both litter chemistry and soil environmental conditions influenced decomposition rates, but only the former was affected by historical fire frequency. Litter from annually burned sites had higher nitrogen content than litter from triennially burned and fire suppression sites, but this was correlated with only a modest increase in decomposition rates. Soil environmental conditions had a larger impact on decomposition than litter chemistry. Across the landscape, decomposition differed more along soil moisture gradients than across fire management regimes. These findings suggest that fire frequency has a limited effect on litter decomposition in this ecosystem, and encourage extending current decomposition frameworks into disturbed systems. However, litter from different species lost different masses due to fire, suggesting that fire may impact decomposition through the preferential combustion of some litter types. Overall, our findings also emphasize the important role of spatial variability in soil environmental conditions, which may be tied to fire frequency across large spatial scales, in driving decomposition rates in this system.
NASA Technical Reports Server (NTRS)
Chien, S.
1994-01-01
This paper describes work on the Multimission VICAR Planner (MVP) system to automatically construct executable image processing procedures for custom image processing requests for the JPL Multimission Image Processing Lab (MIPL). This paper focuses on two issues. First, large search spaces caused by complex plans required the use of hand encoded control information. In order to address this in a manner similar to that used by human experts, MVP uses a decomposition-based planner to implement hierarchical/skeletal planning at the higher level and then uses a classical operator based planner to solve subproblems in contexts defined by the high-level decomposition.
Content-based multiple bitstream image transmission over noisy channels.
Cao, Lei; Chen, Chang Wen
2002-01-01
In this paper, we propose a novel combined source and channel coding scheme for image transmission over noisy channels. The main feature of the proposed scheme is a systematic decomposition of image sources so that unequal error protection can be applied according to not only bit error sensitivity but also visual content importance. The wavelet transform is adopted to hierarchically decompose the image. The association between the wavelet coefficients and what they represent spatially in the original image is fully exploited so that wavelet blocks are classified based on their corresponding image content. The classification produces wavelet blocks in each class with similar content and statistics, therefore enables high performance source compression using the set partitioning in hierarchical trees (SPIHT) algorithm. To combat the channel noise, an unequal error protection strategy with rate-compatible punctured convolutional/cyclic redundancy check (RCPC/CRC) codes is implemented based on the bit contribution to both peak signal-to-noise ratio (PSNR) and visual quality. At the receiving end, a postprocessing method making use of the SPIHT decoding structure and the classification map is developed to restore the degradation due to the residual error after channel decoding. Experimental results show that the proposed scheme is indeed able to provide protection both for the bits that are more sensitive to errors and for the more important visual content under a noisy transmission environment. In particular, the reconstructed images illustrate consistently better visual quality than using the single-bitstream-based schemes.
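A hedged sketch of the hierarchical wavelet decomposition step described above, using PyWavelets on a synthetic image: coefficients in coarser subbands summarize larger spatial regions of the original image, and grouping them by local energy is a simplified stand-in for the paper's content-based classification of wavelet blocks. The SPIHT coding and RCPC/CRC channel protection are not shown.

```python
import numpy as np
import pywt

# Hypothetical 64x64 image; a 3-level wavelet decomposition yields a
# hierarchy of subbands (coarser levels correspond to larger spatial regions).
image = np.random.rand(64, 64)
coeffs = pywt.wavedec2(image, wavelet="db2", level=3)

approx = coeffs[0]
print("approximation subband:", approx.shape)
for lvl, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
    # Simple content descriptor per subband: energy of the detail coefficients.
    energy = cH**2 + cV**2 + cD**2
    print(f"detail set {lvl}: subband shape {cH.shape}, "
          f"mean coefficient energy {energy.mean():.4f}")
```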
NASA Astrophysics Data System (ADS)
Shimojo, Fuyuki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya
2008-02-01
A linear-scaling algorithm based on a divide-and-conquer (DC) scheme has been designed to perform large-scale molecular-dynamics (MD) simulations, in which interatomic forces are computed quantum mechanically in the framework of the density functional theory (DFT). Electronic wave functions are represented on a real-space grid, which is augmented with a coarse multigrid to accelerate the convergence of iterative solutions and with adaptive fine grids around atoms to accurately calculate ionic pseudopotentials. Spatial decomposition is employed to implement the hierarchical-grid DC-DFT algorithm on massively parallel computers. The largest benchmark tests include an 11.8×10⁶-atom (1.04×10¹² electronic degrees of freedom) calculation on 131,072 IBM BlueGene/L processors. The DC-DFT algorithm has well-defined parameters to control the data locality, with which the solutions converge rapidly. Also, the total energy is well conserved during the MD simulation. We perform first-principles MD simulations based on the DC-DFT algorithm, in which large system sizes bring in excellent agreement with x-ray scattering measurements for the pair-distribution function of liquid Rb and allow the description of low-frequency vibrational modes of graphene. The band gap of a CdSe nanorod calculated by the DC-DFT algorithm agrees well with the available conventional DFT results. With the DC-DFT algorithm, the band gap is calculated for larger system sizes until the result reaches the asymptotic value.
Wang, Fei; Qin, Zhihao; Li, Wenjuan; Song, Caiying; Karnieli, Arnon; Zhao, Shuhe
2014-12-25
Land surface temperature (LST) images retrieved from the thermal infrared (TIR) band data of the Moderate Resolution Imaging Spectroradiometer (MODIS) have much lower spatial resolution than the MODIS visible and near-infrared (VNIR) band data. The coarse pixel scale of MODIS LST images (1000 m at nadir) limits their applicability to many studies that require high spatial resolution, in contrast to the MODIS VNIR band data with a pixel scale of 250-500 m. In this paper we develop an efficient pixel decomposition approach to increase the spatial resolution of the MODIS LST image using the VNIR band data as assistance. The unique feature of this approach is that the thermal radiance of parent pixels in the MODIS LST image remains unchanged after they are decomposed into the sub-pixels of the resulting image. There are two important steps in the decomposition: initial temperature estimation and final temperature determination. The approach can therefore be termed double-step pixel decomposition (DSPD). Both steps involve a series of procedures to achieve the final decomposed LST image, including classification of the surface patterns, establishment of LST change with the normalized difference vegetation index (NDVI) and the normalized difference building index (NDBI), reversion of LST into thermal radiance through the Planck equation, and computation of weights for the sub-pixels of the resulting image. Since the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), with much higher spatial resolution than MODIS, was on board the same platform (Terra) as MODIS for Earth observation, an experiment was conducted to validate the accuracy and efficiency of our approach for pixel decomposition. The ASTER LST image was used as the reference for comparison with the decomposed LST image. The result showed that the spatial distribution of the decomposed LST image was very similar to that of the ASTER LST image, with a root mean square error (RMSE) of 2.7 K for the entire image. Comparison with the evaluation DisTrad (E-DisTrad) and re-sampling methods for pixel decomposition also indicates that our DSPD has the lowest RMSE in all cases, including urban regions, water bodies, and natural terrain. The increase in spatial resolution substantially improves the capability of the coarse MODIS LST images to highlight the details of LST variation. It can therefore be concluded that, in spite of its complicated procedures, the proposed DSPD approach provides an alternative way to improve the spatial resolution of MODIS LST images and hence expand their applicability to the real world.
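A hedged sketch of the radiance-conservation idea at the core of the decomposition: LST is converted to radiance through the Planck equation, the parent radiance is redistributed to sub-pixels with weights whose mean is one (so the parent radiance is preserved), and the weighted radiances are inverted back to sub-pixel temperatures. The 11 µm wavelength, the specific weights, and the function names are illustrative assumptions, not the paper's actual NDVI/NDBI-based weighting.

```python
import numpy as np

PLANCK_H = 6.626e-34   # Planck constant (J s)
LIGHT_C = 2.998e8      # speed of light (m s-1)
BOLTZ_K = 1.381e-23    # Boltzmann constant (J K-1)

def planck_radiance(T, lam=11e-6):
    """Blackbody spectral radiance at temperature T (K) and wavelength lam (m)."""
    return (2 * PLANCK_H * LIGHT_C**2) / (
        lam**5 * (np.exp(PLANCK_H * LIGHT_C / (lam * BOLTZ_K * T)) - 1.0))

def inverse_planck(L, lam=11e-6):
    """Brightness temperature corresponding to spectral radiance L at wavelength lam."""
    return (PLANCK_H * LIGHT_C) / (
        lam * BOLTZ_K * np.log(1.0 + 2 * PLANCK_H * LIGHT_C**2 / (lam**5 * L)))

def decompose_pixel(parent_T, weights, lam=11e-6):
    """Split one coarse LST pixel into sub-pixel temperatures while conserving
    the parent-pixel radiance. `weights` are hypothetical relative radiance
    weights for the sub-pixels; they are rescaled so their mean is 1, which
    keeps the mean sub-pixel radiance equal to the parent radiance."""
    w = np.asarray(weights, dtype=float)
    w = w * w.size / w.sum()
    L_parent = planck_radiance(parent_T, lam)
    return inverse_planck(w * L_parent, lam)

sub_T = decompose_pixel(300.0, weights=[0.90, 1.00, 1.05, 1.08])
print(np.round(sub_T, 2))   # sub-pixel brightness temperatures (K)
```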
An accessible method for implementing hierarchical models with spatio-temporal abundance data
Ross, Beth E.; Hooten, Melvin B.; Koons, David N.
2012-01-01
A common goal in ecology and wildlife management is to determine the causes of variation in population dynamics over long periods of time and across large spatial scales. Many assumptions must nevertheless be overcome to make appropriate inference about spatio-temporal variation in population dynamics, such as autocorrelation among data points, excess zeros, and observation error in count data. To address these issues, many scientists and statisticians have recommended the use of Bayesian hierarchical models. Unfortunately, hierarchical statistical models remain somewhat difficult to use because of the necessary quantitative background needed to implement them, or because of the computational demands of using Markov Chain Monte Carlo algorithms to estimate parameters. Fortunately, new tools have recently been developed that make it more feasible for wildlife biologists to fit sophisticated hierarchical Bayesian models (i.e., Integrated Nested Laplace Approximation, ‘INLA’). We present a case study using two important game species in North America, the lesser and greater scaup, to demonstrate how INLA can be used to estimate the parameters in a hierarchical model that decouples observation error from process variation, and accounts for unknown sources of excess zeros as well as spatial and temporal dependence in the data. Ultimately, our goal was to make unbiased inference about spatial variation in population trends over time.
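The kind of data structure such a hierarchical model is meant to decouple can be illustrated with a short, hedged simulation (NumPy only; all parameters are invented): a latent abundance process with site effects and temporally autocorrelated year effects, structural zeros, and Poisson observation error. Fitting the corresponding model with INLA or MCMC is outside the scope of this sketch.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sites, n_years = 50, 20

# Latent (log) abundance: site-level random effects plus an AR(1) year effect,
# representing "process variation".
site_effect = rng.normal(0.0, 0.5, size=n_sites)
year_effect = np.zeros(n_years)
for t in range(1, n_years):
    year_effect[t] = 0.7 * year_effect[t - 1] + rng.normal(0.0, 0.2)
log_lambda = 2.0 + site_effect[:, None] + year_effect[None, :]

# Excess zeros: some site-years produce structural zeros regardless of abundance.
occupied = rng.random((n_sites, n_years)) > 0.2

# Observation model: Poisson counts given the latent intensity ("observation error").
counts = rng.poisson(np.exp(log_lambda)) * occupied

print(counts.shape, "proportion of zeros:", round(float((counts == 0).mean()), 2))
```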
NASA Astrophysics Data System (ADS)
Wang, Guoxu; Liu, Meng; Du, Juan; Liu, Lei; Yu, Yifeng; Sha, Jitong; Chen, Aibing
2018-03-01
Membrane carbon materials with a hierarchical porous architecture are attractive because they provide more channels for ion transport and shorten the ion transport path. Herein, we develop a facile route based on "confined nanospace deposition" to fabricate an N-doped three-dimensional hierarchical porous membrane carbon material (N-THPMC) by coating nickel nitrate, silicate oligomers and the triblock copolymer P123 onto the branches of a commercial polyamide membrane (PAM). During high-temperature treatment, the mesoporous silica layer and the Ni species serve as a "confined nanospace" and a catalyst, respectively, both indispensable for formation of the carbon framework, and the gas-phase carbon precursors derived from the decomposition of PAM are deposited into the "confined nanospace" to form the carbon framework. The N-THPMC, with its hierarchical macro/meso/microporous structure, N-doping (2.9%) and large specific surface area (994 m2 g-1), inherits the membrane morphology and hierarchical porous structure of PAM. As a binder-free electrode, the N-THPMC exhibits a specific capacitance of 252 F g-1 at a current density of 1 A g-1 in 6 M KOH electrolyte and excellent cycling stability of 92.7% even after 5000 cycles.
Marker-Based Hierarchical Segmentation and Classification Approach for Hyperspectral Imagery
NASA Technical Reports Server (NTRS)
Tarabalka, Yuliya; Tilton, James C.; Benediktsson, Jon Atli; Chanussot, Jocelyn
2011-01-01
The Hierarchical SEGmentation (HSEG) algorithm, which is a combination of hierarchical step-wise optimization and spectral clustering, has given good performances for hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. First, pixelwise classification is performed and the most reliably classified pixels are selected as markers, with the corresponding class labels. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. The experimental results show that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for hyperspectral image analysis.
Maragoudakis, Manolis; Lymberopoulos, Dimitrios; Fakotakis, Nikos; Spiropoulos, Kostas
2008-01-01
The present paper extends work on an existing computer-based Decision Support System (DSS) that aims to provide assistance to physicians as regards to pulmonary diseases. The extension deals with allowing for a hierarchical decomposition of the task, at different levels of domain granularity, using a novel approach, i.e. Hierarchical Bayesian Networks. The proposed framework uses data from various networking appliances such as mobile phones and wireless medical sensors to establish a ubiquitous environment for medical treatment of pulmonary diseases. Domain knowledge is encoded at the upper levels of the hierarchy, thus making the process of generalization easier to accomplish. The experimental results were carried out under the Pulmonary Department, University Regional Hospital Patras, Patras, Greece. They have supported our initial beliefs about the ability of Bayesian networks to provide an effective, yet semantically-oriented, means of prognosis and reasoning under conditions of uncertainty.
NASA Astrophysics Data System (ADS)
Marston, B. K.; Bishop, M. P.; Shroder, J. F.
2009-12-01
Digital terrain analysis of mountain topography is widely utilized for mapping landforms, assessing the role of surface processes in landscape evolution, and estimating the spatial variation of erosion. Numerous geomorphometry techniques exist to characterize terrain surface parameters, although their utility for characterizing the spatial hierarchical structure of the topography and permitting an assessment of the erosion/tectonic impact on the landscape is very limited due to scale and data integration issues. To address this problem, we apply scale-dependent geomorphometric and object-oriented analyses to characterize the hierarchical spatial structure of mountain topography. Specifically, we utilized a high-resolution digital elevation model to characterize complex topography in the Shimshal Valley in the Western Himalaya of Pakistan. To accomplish this, we generate terrain objects (geomorphological features and landforms) including valley floors and walls, drainage basins, the drainage network, the ridge network, slope facets, and elemental forms based upon curvature. Object-oriented analysis was used to characterize object properties accounting for object size, shape, and morphometry. The spatial overlay and integration of terrain objects at various scales defines the nature of the hierarchical organization. Our results indicate that variations in the spatial complexity of the terrain's hierarchical organization are related to the spatio-temporal influence of surface processes and landscape evolution dynamics. Terrain segmentation and the integration of multi-scale terrain information permit further assessment of process domains and erosion, tectonic impact potential, and natural hazard potential. We demonstrate this with landform mapping and geomorphological assessment examples.
Hierarchical Bayesian spatial models for multispecies conservation planning and monitoring.
Carroll, Carlos; Johnson, Devin S; Dunk, Jeffrey R; Zielinski, William J
2010-12-01
Biologists who develop and apply habitat models are often familiar with the statistical challenges posed by their data's spatial structure but are unsure of whether the use of complex spatial models will increase the utility of model results in planning. We compared the relative performance of nonspatial and hierarchical Bayesian spatial models for three vertebrate and invertebrate taxa of conservation concern (Church's sideband snails [Monadenia churchi], red tree voles [Arborimus longicaudus], and Pacific fishers [Martes pennanti pacifica]) that provide examples of a range of distributional extents and dispersal abilities. We used presence-absence data derived from regional monitoring programs to develop models with both landscape and site-level environmental covariates. We used Markov chain Monte Carlo algorithms and a conditional autoregressive or intrinsic conditional autoregressive model framework to fit spatial models. The fit of Bayesian spatial models was between 35 and 55% better than the fit of nonspatial analogue models. Bayesian spatial models outperformed analogous models developed with maximum entropy (Maxent) methods. Although the best spatial and nonspatial models included similar environmental variables, spatial models provided estimates of residual spatial effects that suggested how ecological processes might structure distribution patterns. Spatial models built from presence-absence data improved fit most for localized endemic species with ranges constrained by poorly known biogeographic factors and for widely distributed species suspected to be strongly affected by unmeasured environmental variables or population processes. By treating spatial effects as a variable of interest rather than a nuisance, hierarchical Bayesian spatial models, especially when they are based on a common broad-scale spatial lattice (here the national Forest Inventory and Analysis grid of 24 km(2) hexagons), can increase the relevance of habitat models to multispecies conservation planning. Journal compilation © 2010 Society for Conservation Biology. No claim to original US government works.
Štursová, Martina; Bárta, Jiří; Šantrůčková, Hana; Baldrian, Petr
2016-12-01
Forests are recognised as spatially heterogeneous ecosystems. However, knowledge of the small-scale spatial variation in microbial abundance, community composition and activity is limited. Here, we aimed to describe the heterogeneity of environmental properties, namely vegetation, soil chemical composition, fungal and bacterial abundance and community composition, and enzymatic activity, in the topsoil in a small area (36 m 2 ) of a highly heterogeneous regenerating temperate natural forest, and to explore the relationships among these variables. The results demonstrated a high level of spatial heterogeneity in all properties and revealed differences between litter and soil. Fungal communities had substantially higher beta-diversity than bacterial communities, which were more uniform and less spatially autocorrelated. In litter, fungal communities were affected by vegetation and appeared to be more involved in decomposition. In the soil, chemical composition affected both microbial abundance and the rates of decomposition, whereas the effect of vegetation was small. Importantly, decomposition appeared to be concentrated in hotspots with increased activity of multiple enzymes. Overall, forest topsoil should be considered a spatially heterogeneous environment in which the mean estimates of ecosystem-level processes and microbial community composition may confound the existence of highly specific microenvironments. © FEMS 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, T; Dong, X; Petrongolo, M
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical value. Existing de-noising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. We propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. It includes the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Performance is evaluated using an evaluation phantom (Catphan 600) and an anthropomorphic head phantom. Results are compared to those generated using direct matrix inversion with no noise suppression, a de-noising method applied on the decomposed images, and an existing algorithm with similar formulation but with an edge-preserving regularization term. Results: On the Catphan phantom, our method retains the same spatial resolution as the CT images before decomposition while reducing the noise standard deviation of decomposed images by over 98%. The other methods either degrade spatial resolution or achieve less low-contrast detectability. Also, our method yields lower electron density measurement error than direct matrix inversion and reduces error variation by over 97%. On the head phantom, it reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusion: We propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. The proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability. This work is supported by a Varian MRA grant.
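A hedged sketch of the formulation described above: a covariance-weighted data-fidelity term (penalty weight equal to the inverse variance-covariance matrix of the decomposed images) plus a simple quadratic smoothness penalty, minimized by plain gradient descent. The spatially constant covariance, the toy phantom, the step size and all names are assumptions for illustration; the paper's actual optimizer and edge handling are not reproduced.

```python
import numpy as np

def smoothness_grad(x):
    """Gradient of the quadratic smoothness penalty, i.e. the sum over
    neighbouring pixel pairs of (x_i - x_j)^2, for one material image."""
    g = np.zeros_like(x)
    g[:-1, :] += x[:-1, :] - x[1:, :]
    g[1:, :] += x[1:, :] - x[:-1, :]
    g[:, :-1] += x[:, :-1] - x[:, 1:]
    g[:, 1:] += x[:, 1:] - x[:, :-1]
    return 2.0 * g

def iterative_decomposition(x_direct, cov_inv, beta=1.0, lr=0.005, n_iter=300):
    """Gradient-descent minimization of
        (x - x_direct)^T Sigma^{-1} (x - x_direct) + beta * smoothness(x),
    where x_direct holds the material images from direct matrix inversion and
    cov_inv is the (here spatially constant) inverse variance-covariance
    matrix of the decomposed images."""
    x = x_direct.copy()
    for _ in range(n_iter):
        resid = x - x_direct                                     # (2, H, W)
        grad_fid = 2.0 * np.einsum("ij,jhw->ihw", cov_inv, resid)
        grad_reg = np.stack([smoothness_grad(m) for m in x])
        x -= lr * (grad_fid + beta * grad_reg)
    return x

# Toy example: a flat two-material region corrupted by correlated noise.
rng = np.random.default_rng(3)
cov = np.array([[0.04, -0.03], [-0.03, 0.04]])
truth = np.stack([np.ones((32, 32)), 0.5 * np.ones((32, 32))])
noise = rng.multivariate_normal([0.0, 0.0], cov, size=(32, 32)).transpose(2, 0, 1)
noisy = truth + noise
denoised = iterative_decomposition(noisy, np.linalg.inv(cov))
print(np.abs(noisy - truth).mean(), "->", np.abs(denoised - truth).mean())
```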
López-Carr, David; Davis, Jason; Jankowska, Marta; Grant, Laura; López-Carr, Anna Carla; Clark, Matthew
2013-01-01
The relative role of space and place has long been debated in geography. Yet modeling efforts applied to coupled human-natural systems seemingly favor models assuming continuous spatial relationships. We examine the relative importance of place-based hierarchical versus spatial clustering influences in tropical land use/cover change (LUCC). Guatemala was chosen as our study site given its high rural population growth and deforestation in recent decades. We test predictors of 2009 forest cover and forest cover change from 2001-2009 across Guatemala's 331 municipalities and 22 departments using spatial and multi-level statistical models. Our results indicate the emergence of several socio-economic predictors of LUCC regardless of model choice. Hierarchical model results suggest that significant differences exist at the municipal and departmental levels but largely maintain the magnitude and direction of single-level model coefficient estimates. They are also intervention-relevant since policies tend to be applicable to distinct political units rather than to continuous space. Spatial models complement hierarchical approaches by indicating where and to what magnitude significant negative and positive clustering associations emerge. Appreciating the comparative advantages and limitations of spatial and nested models enhances a holistic approach to geographical analysis of tropical LUCC and human-environment interactions. PMID:24013908
An improved spatial contour tree constructed method
NASA Astrophysics Data System (ADS)
Zheng, Yi; Zhang, Ling; Guilbert, Eric; Long, Yi
2018-05-01
Contours are important data for delineating landforms on a map. A contour tree provides an object-oriented description of landforms and can be used to enrich the topological information. The traditional contour tree stores topological relationships between contours in a hierarchical structure and allows eminences and depressions to be identified as sets of nested contours. This research proposes an improved contour tree, the so-called spatial contour tree, which contains not only topological but also geometric information. It can be regarded as a terrain skeleton in three dimensions, and it is established from the spatial nodes of contours, which carry latitude, longitude and elevation information. The spatial contour tree is built by connecting spatial nodes from low to high elevation for a positive landform, and from high to low elevation for a negative landform, to form a hierarchical structure. The connection between two spatial nodes provides the real distance and direction as a Euclidean vector in three dimensions. In this paper, the construction method is tested experimentally and the results are discussed. The proposed hierarchical structure is three-dimensional and can represent the skeleton inside a terrain. Because all of its nodes carry geographic information, the structure can be used to distinguish different landforms and can be applied to contour generalization with consideration of geographic characteristics.
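A minimal sketch of the data structure described above, with hypothetical names: each contour is reduced to a spatial node holding latitude, longitude and elevation, children are linked from low to high elevation for a positive landform, and the link between two nodes yields an approximate 3-D distance and direction (a planar-degree approximation is assumed here rather than a proper geodetic calculation).

```python
from dataclasses import dataclass, field
from math import sqrt
from typing import List, Optional

@dataclass
class SpatialNode:
    """A contour represented by a single spatial node (lat, lon, elevation)."""
    lat: float
    lon: float
    elev: float
    parent: Optional["SpatialNode"] = None
    children: List["SpatialNode"] = field(default_factory=list)

    def add_child(self, child: "SpatialNode") -> None:
        child.parent = self
        self.children.append(child)

    def vector_to(self, other: "SpatialNode", metres_per_degree: float = 111_320.0):
        """Approximate 3-D vector (dx, dy, dz) and its length to another node,
        treating degrees as locally planar (a deliberate simplification)."""
        dx = (other.lon - self.lon) * metres_per_degree
        dy = (other.lat - self.lat) * metres_per_degree
        dz = other.elev - self.elev
        return dx, dy, dz, sqrt(dx * dx + dy * dy + dz * dz)

# Build a tiny positive landform: connect nodes from low to high elevation.
base = SpatialNode(45.0000, 7.0000, 100.0)
mid = SpatialNode(45.0010, 7.0010, 150.0)
peak = SpatialNode(45.0015, 7.0012, 210.0)
base.add_child(mid)
mid.add_child(peak)
print(mid.vector_to(peak))
```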
The Design Manager's Aid for Intelligent Decomposition (DeMAID)
NASA Technical Reports Server (NTRS)
Rogers, James L.
1994-01-01
Before the design of new complex systems such as large space platforms can begin, the possible interactions among subsystems and their parts must be determined. Once this is completed, the proposed system can be decomposed to identify its hierarchical structure. The design manager's aid for intelligent decomposition (DeMAID) is a knowledge based system for ordering the sequence of modules and identifying a possible multilevel structure for design. Although DeMAID requires an investment of time to generate and refine the list of modules for input, it could save considerable money and time in the total design process, particularly in new design problems where the ordering of the modules has not been defined.
Lien, Mei-Ching; Ruthruff, Eric
2004-05-01
This study examined how task switching is affected by hierarchical task organization. Traditional task-switching studies, which use a constant temporal and spatial distance between each task element (defined as a stimulus requiring a response), promote a flat task structure. Using this approach, Experiment 1 revealed a large switch cost of 238 ms. In Experiments 2-5, adjacent task elements were grouped temporally and/or spatially (forming an ensemble) to create a hierarchical task organization. Results indicate that the effect of switching at the ensemble level dominated the effect of switching at the element level. Experiments 6 and 7, using an ensemble of 3 task elements, revealed that the element-level switch cost was virtually absent between ensembles but was large within an ensemble. The authors conclude that the element-level task repetition benefit is fragile and can be eliminated in a hierarchical task organization.
The Spatial Variability of Organic Matter and Decomposition Processes at the Marsh Scale
NASA Astrophysics Data System (ADS)
Yousefi Lalimi, Fateme; Silvestri, Sonia; D'Alpaos, Andrea; Roner, Marcella; Marani, Marco
2017-04-01
Coastal salt marshes sequester carbon as they respond to the local Rate of Relative Sea Level Rise (RRSLR), and their accretion rate is governed by inorganic soil deposition, organic soil production, and soil organic matter (SOM) decomposition. It is generally recognized that SOM plays a central role in marsh vertical dynamics, but while the limited existing observations and modelling results suggest that SOM varies widely at the marsh scale, we lack systematic observations aimed at understanding how SOM production is modulated spatially as a result of biomass productivity and decomposition rate. Marsh topography and distance to the creek can affect biomass and SOM production, while a higher topographic elevation increases drainage, evapotranspiration, and aeration, thereby likely inducing higher SOM decomposition rates. Data collected in salt marshes in the northern Venice Lagoon (Italy) show that, even though plant productivity decreases in the lower areas of a marsh located farther away from channel edges, the relative contribution of organic soil production to the overall vertical soil accretion tends to remain constant as the distance from the channel increases. These observations suggest that the competing effects of biomass production and aeration/decomposition determine a contribution of organic soil to total accretion which remains approximately constant with distance from the creek, in spite of the declining plant productivity. Here we test this hypothesis using new observations of SOM and decomposition rates from marshes in North Carolina. The objective is to fill the gap in our understanding of the spatial distribution, at the marsh scale, of the organic and inorganic contributions to marsh accretion in response to RRSLR.
Wang, Bowen; Zhang, Weigang; Wang, Lei; Wei, Jiake; Bai, Xuedong; Liu, Jingyue; Zhang, Guanhua; Duan, Huigao
2018-07-06
Design and synthesis of integrated, interconnected porous structures are critical to the development of high-performance supercapacitors. We develop a novel and facile synthesis technique to construct three-dimensional carbon-bubble foams with a hierarchical pore geometry. The carbon-bubble foams are fabricated by conformally coating, via catalytic decomposition of ethanol, a carbon layer onto the surfaces of pre-formed ZnO foams and then removing the ZnO template by a reduction-evaporation process. Both the wall thickness and the pore size can be tuned by adjusting the catalytic decomposition time and temperature. The as-synthesized carbon-bubble foam electrode retains 90.3% of its initial capacitance even after 70 000 continuous cycles at a high current density of 20 A g-1, demonstrating excellent long-term electrochemical and cycling stability. The symmetric device displays a rate-capability retention of 81.8% as the current density increases from 0.4 to 20 A g-1. These electrochemical performances originate from the unique structural design of the carbon-bubble foams, which provide not only abundant transport channels for electrons and ions but also a high active surface area accessible to the electrolyte ions.
Liu, Hao; Zhu, Lili; Bai, Shuming; Shi, Qiang
2014-04-07
We investigated applications of the hierarchical equation of motion (HEOM) method to perform high order perturbation calculations of reduced quantum dynamics for a harmonic bath with arbitrary spectral densities. Three different schemes are used to decompose the bath spectral density into analytical forms that are suitable to the HEOM treatment: (1) The multiple Lorentzian mode model that can be obtained by numerically fitting the model spectral density. (2) The combined Debye and oscillatory Debye modes model that can be constructed by fitting the corresponding classical bath correlation function. (3) A new method that uses undamped harmonic oscillator modes explicitly in the HEOM formalism. Methods to extract system-bath correlations were investigated for the above bath decomposition schemes. We also show that HEOM in the undamped harmonic oscillator modes can give detailed information on the partial Wigner transform of the total density operator. Theoretical analysis and numerical simulations of the spin-Boson dynamics and the absorption line shape of molecular dimers show that the HEOM formalism for high order perturbations can serve as an important tool in studying the quantum dissipative dynamics in the intermediate coupling regime.
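A hedged sketch of decomposition scheme (1) above: a model spectral density (here an Ohmic form with exponential cutoff, chosen only for illustration) is numerically fitted with a sum of Lorentzian modes using scipy.optimize.curve_fit. The number of modes, the target density, and the initial guesses are assumptions, and the subsequent HEOM propagation is not shown.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian_modes(w, *params):
    """Sum of Lorentzian modes: J(w) = sum_k c_k * w * g_k / (w^2 + g_k^2),
    with parameters packed as (c_1, g_1, c_2, g_2, ...)."""
    J = np.zeros_like(w)
    for c, g in zip(params[0::2], params[1::2]):
        J += c * w * g / (w**2 + g**2)
    return J

# Target: an Ohmic spectral density with exponential cutoff (illustrative only).
w = np.linspace(0.01, 10.0, 400)
J_target = w * np.exp(-w / 2.0)

# Fit with three Lorentzian modes (6 parameters: coefficient + width per mode).
p0 = [1.0, 0.5, 1.0, 2.0, 1.0, 5.0]
popt, _ = curve_fit(lorentzian_modes, w, J_target, p0=p0, maxfev=20000)
print("fitted (c_k, g_k) pairs:", np.round(popt, 3))
print("max fit error:", np.abs(lorentzian_modes(w, *popt) - J_target).max())
```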
Attribute And-Or Grammar for Joint Parsing of Human Pose, Parts and Attributes.
Park, Seyoung; Nie, Xiaohan; Zhu, Song-Chun
2017-07-25
This paper presents an attribute and-or grammar (A-AOG) model for jointly inferring human body pose and human attributes in a parse graph with attributes augmented to nodes in the hierarchical representation. In contrast to other popular methods in the current literature that train separate classifiers for poses and individual attributes, our method explicitly represents the decomposition and articulation of body parts, and accounts for the correlations between poses and attributes. The A-AOG model is an amalgamation of three traditional grammar formulations: (i) a phrase structure grammar representing the hierarchical decomposition of the human body from whole to parts; (ii) a dependency grammar modeling the geometric articulation by a kinematic graph of the body pose; and (iii) an attribute grammar accounting for the compatibility relations between different parts in the hierarchy so that their appearances follow a consistent style. The parse graph simultaneously outputs human detection, pose estimation, and attribute prediction, and is intuitive and interpretable. We conduct experiments on two tasks on two datasets, and the experimental results demonstrate the advantage of joint modeling in comparison with computing poses and attributes independently. Furthermore, our model obtains better performance than existing methods for both the pose estimation and attribute prediction tasks.
NASA Astrophysics Data System (ADS)
Western, A. W.; Lintern, A.; Liu, S.; Ryu, D.; Webb, J. A.; Leahy, P.; Wilson, P.; Waters, D.; Bende-Michl, U.; Watson, M.
2016-12-01
Many streams, lakes and estuaries are experiencing increasing concentrations and loads of nutrient and sediments. Models that can predict the spatial and temporal variability in water quality of aquatic systems are required to help guide the management and restoration of polluted aquatic systems. We propose that a Bayesian hierarchical modelling framework could be used to predict water quality responses over varying spatial and temporal scales. Stream water quality data and spatial data of catchment characteristics collected throughout Victoria and Queensland (in Australia) over two decades will be used to develop this Bayesian hierarchical model. In this paper, we present the preliminary exploratory data analysis required for the development of the Bayesian hierarchical model. Specifically, we present the results of exploratory data analysis of Total Nitrogen (TN) concentrations in rivers in Victoria (in South-East Australia) to illustrate the catchment characteristics that appear to be influencing spatial variability in (1) mean concentrations of TN; and (2) the relationship between discharge and TN throughout the state. These important catchment characteristics were identified using: (1) monthly TN concentrations measured at 28 water quality gauging stations and (2) climate, land use, topographic and geologic characteristics of the catchments of these 28 sites. Spatial variability in TN concentrations had a positive correlation to fertiliser use in the catchment and average temperature. There were negative correlations between TN concentrations and catchment forest cover, annual runoff, runoff perenniality, soil erosivity and catchment slope. The relationship between discharge and TN concentrations showed spatial variability, possibly resulting from climatic and topographic differences between the sites. The results of this study will feed into the hierarchical Bayesian model of river water quality.
Learning Low-Rank Decomposition for Pan-Sharpening With Spatial-Spectral Offsets.
Yang, Shuyuan; Zhang, Kai; Wang, Min
2017-08-25
Finding accurate injection components is the key issue in pan-sharpening methods. In this paper, a low-rank pan-sharpening (LRP) model is developed from the new perspective of offset learning. Two offsets are defined to represent, respectively, the spatial and spectral differences between low-resolution multispectral and high-resolution multispectral (HRMS) images. In order to reduce spatial and spectral distortions, spatial equalization and spectral proportion constraints are designed and imposed on the offsets, yielding a spatially and spectrally constrained stable low-rank decomposition algorithm solved via the augmented Lagrange multiplier method. Through this modeling and heuristic learning, our method can simultaneously reduce spatial and spectral distortions in the fused HRMS images. Moreover, by exploiting the low-rank and sparse characteristics of the data, our method can efficiently deal with noise and outliers in the source images. Extensive experiments on several image data sets demonstrate the efficiency of the proposed LRP.
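The central building block of such stable low-rank decompositions can be illustrated with a hedged, generic sketch: singular-value soft-thresholding (the proximal operator of the nuclear norm) alternated with element-wise soft-thresholding splits a matrix into low-rank and sparse parts. This robust-PCA-style toy is not the paper's constrained ALM algorithm, and the thresholds, sizes, and names are assumptions.

```python
import numpy as np

def svd_soft_threshold(M, tau):
    """Singular-value soft-thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def low_rank_sparse_split(M, tau=1.0, lam=0.3, n_iter=100):
    """Block-coordinate minimization of
        tau*||L||_* + lam*||S||_1 + 0.5*||M - L - S||_F^2
    (a simplified sketch, not the paper's constrained ALM solver)."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        L = svd_soft_threshold(M - S, tau)
        S = np.sign(M - L) * np.maximum(np.abs(M - L) - lam, 0.0)
    return L, S

# Toy data: a rank-4 matrix plus a few large sparse outliers.
rng = np.random.default_rng(7)
low_rank = rng.normal(size=(60, 4)) @ rng.normal(size=(4, 80))
outliers = (rng.random((60, 80)) < 0.02) * rng.normal(0, 10, (60, 80))
L, S = low_rank_sparse_split(low_rank + outliers)
print("leading singular values of L:",
      np.round(np.linalg.svd(L, compute_uv=False)[:6], 2))
print("relative error of low-rank part:",
      np.linalg.norm(L - low_rank) / np.linalg.norm(low_rank))
```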
Hierarchical drivers of reef-fish metacommunity structure.
MacNeil, M Aaron; Graham, Nicholas A J; Polunin, Nicholas V C; Kulbicki, Michel; Galzin, René; Harmelin-Vivien, Mireille; Rushton, Steven P
2009-01-01
Coral reefs are highly complex ecological systems, where multiple processes interact across scales in space and time to create assemblages of exceptionally high biodiversity. Despite the increasing frequency of hierarchically structured sampling programs used in coral-reef science, little progress has been made in quantifying the relative importance of processes operating across multiple scales. The vast majority of reef studies are conducted, or at least analyzed, at a single spatial scale, ignoring the implicitly hierarchical structure of the overall system in favor of small-scale experiments or large-scale observations. Here we demonstrate how alpha (mean local number of species), beta diversity (degree of species dissimilarity among local sites), and gamma diversity (overall species richness) vary with spatial scale, and using a hierarchical, information-theoretic approach, we evaluate the relative importance of site-, reef-, and atoll-level processes driving the fish metacommunity structure among 10 atolls in French Polynesia. Process-based models, representing well-established hypotheses about drivers of reef-fish community structure, were assembled into a candidate set of 12 hierarchical linear models. Variation in fish abundance, biomass, and species richness were unevenly distributed among transect, reef, and atoll levels, establishing the relative contribution of variation at these spatial scales to the structure of the metacommunity. Reef-fish biomass, species richness, and the abundance of most functional-groups corresponded primarily with transect-level habitat diversity and atoll-lagoon size, whereas detritivore and grazer abundances were largely correlated with potential covariates of larval dispersal. Our findings show that (1) within-transect and among-atoll factors primarily drive the relationship between alpha and gamma diversity in this reef-fish metacommunity; (2) habitat is the primary correlate with reef-fish metacommunity structure at multiple spatial scales; and (3) inter-atoll connectedness was poorly correlated with the nonrandom clustering of reef-fish species. These results demonstrate the importance of modeling hierarchical data and processes in understanding reef-fish metacommunity structure.
Bio-inspired approach to multistage image processing
NASA Astrophysics Data System (ADS)
Timchenko, Leonid I.; Pavlov, Sergii V.; Kokryatskaya, Natalia I.; Poplavska, Anna A.; Kobylyanska, Iryna M.; Burdenyuk, Iryna I.; Wójcik, Waldemar; Uvaysova, Svetlana; Orazbekov, Zhassulan; Kashaganova, Gulzhan
2017-08-01
Multistage integration of visual information in the brain allows people to respond quickly to most significant stimuli while preserving the ability to recognize small details in the image. Implementation of this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing, described in this paper, comprises main types of cortical multistage convergence. One of these types occurs within each visual pathway and the other between the pathways. This approach maps input images into a flexible hierarchy which reflects the complexity of the image data. The procedures of temporal image decomposition and hierarchy formation are described in mathematical terms. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image which encapsulates, in a computer manner, structure on different hierarchical levels in the image. At each processing stage a single output result is computed to allow a very quick response from the system. The result is represented as an activity pattern, which can be compared with previously computed patterns on the basis of the closest match.
Optical ranked-order filtering using threshold decomposition
Allebach, Jan P.; Ochoa, Ellen; Sweeney, Donald W.
1990-01-01
A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed.
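The threshold-decomposition trick described above is easy to verify numerically: each binary threshold slice is passed through a linear averaging filter (the space-invariant step the optical correlator performs), re-binarized by a point-wise majority threshold, and the slices are summed. The sketch below (SciPy/NumPy, on a random 8-bit image) checks that this reproduces an ordinary 3x3 median filter; the optical implementation itself is of course not modeled.

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter

def median_via_threshold_decomposition(image, size=3, levels=256):
    """Median filter implemented by threshold decomposition: filter each
    binary threshold slice with a linear averaging kernel, re-binarize by
    majority vote, and sum the slices to reconstruct the ranked-order result."""
    image = image.astype(np.int64)
    out = np.zeros_like(image)
    for t in range(1, levels):
        slice_t = (image >= t).astype(float)
        averaged = uniform_filter(slice_t, size=size, mode="nearest")
        out += (averaged >= 0.5).astype(np.int64)   # majority of the window
    return out

img = np.random.default_rng(5).integers(0, 256, size=(32, 32))
ours = median_via_threshold_decomposition(img, size=3)
reference = median_filter(img, size=3, mode="nearest")
print(np.array_equal(ours, reference))   # True: both give the 3x3 median
```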
Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan
2013-01-01
Analysis of bone strength in radiographic images is an important component of estimation of bone quality in diseases such as osteoporosis. Conventional radiographic femur bone images are used to analyze its architecture using bi-dimensional empirical mode decomposition method. Surface interpolation of local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition method and the choice of appropriate interpolation depends on specific structure of the problem. In this work, two interpolation methods of bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular femur bone architecture of radiographic images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard condition are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as Radial basis function multiquadratic and hierarchical b-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent architectural variations of femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study seems to be clinically useful.
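A hedged sketch of one sifting step of bi-dimensional empirical mode decomposition with multiquadric radial-basis-function envelope interpolation, run on a synthetic texture: local extrema are detected, upper and lower envelope surfaces are interpolated through them, and the mean envelope is subtracted. The stopping criteria, boundary treatment, and the hierarchical B-spline alternative are omitted, and all names are hypothetical.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter
from scipy.interpolate import Rbf

def sift_once(image):
    """One sifting step of bi-dimensional EMD: interpolate the upper and lower
    envelopes through local extrema with multiquadric radial basis functions
    and subtract the mean envelope (illustrative sketch only)."""
    rows, cols = np.indices(image.shape)
    is_max = image == maximum_filter(image, size=3)
    is_min = image == minimum_filter(image, size=3)

    upper = Rbf(rows[is_max], cols[is_max], image[is_max],
                function="multiquadric")(rows, cols)
    lower = Rbf(rows[is_min], cols[is_min], image[is_min],
                function="multiquadric")(rows, cols)
    mean_env = 0.5 * (upper + lower)
    return image - mean_env, mean_env      # candidate IMF, residual trend

# Synthetic texture: an oscillatory pattern plus a slow linear trend.
y, x = np.mgrid[0:48, 0:48]
texture = np.sin(0.6 * x) * np.cos(0.5 * y) + 0.02 * (x + y)
imf1, trend = sift_once(texture)
print(imf1.shape, float(np.abs(imf1).mean()), float(trend.mean()))
```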
Analysis of Self-Excited Combustion Instabilities Using Decomposition Techniques
2016-07-05
Proper orthogonal decomposition and dynamic mode decomposition are evaluated for the study of self-excited longitudinal combustion instabilities in laboratory-scaled single-element gas turbine and rocket ... (DOI: 10.2514/1.J054557). In addition, we also evaluate the capabilities of the methods to deal with data sets of different spatial extents and temporal resolution.
Multifractality in Cardiac Dynamics
NASA Astrophysics Data System (ADS)
Ivanov, Plamen Ch.; Rosenblum, Misha; Stanley, H. Eugene; Havlin, Shlomo; Goldberger, Ary
1997-03-01
Wavelet decomposition is used to analyze the fractal scaling properties of heart beat time series. The singularity spectrum D(h) of the variations in the beat-to-beat intervals is obtained from the wavelet transform modulus maxima which contain information on the hierarchical distribution of the singularities in the signal. Multifractal behavior is observed for healthy cardiac dynamics while pathologies are associated with loss of support in the singularity spectrum.
Liu, Zhengyan; Mao, Xianqiang; Song, Peng
2017-01-01
Temporal index decomposition analysis and spatial index decomposition analysis were applied to understand the driving forces of the emissions embodied in China's exports and net exports during 2002-2011, respectively. The accumulated emissions embodied in exports accounted for approximately 30% of the total emissions in China; although the contribution of the sectoral total emissions intensity (technique effect) declined, the scale effect was largely responsible for the mounting emissions associated with export, and the composition effect played a largely insignificant role. Calculations of the emissions embodied in net exports suggest that China is generally in an environmentally inferior position compared with its major trade partners. The differences in the economy-wide emission intensities between China and its major trade partners were the biggest contribution to this reality, and the trade balance effect played a less important role. However, a lower degree of specialization in pollution intensive products in exports than in imports helped to reduce slightly the emissions embodied in net exports. The temporal index decomposition analysis results suggest that China should take effective measures to optimize export and supply-side structure and reduce the total emissions intensity. According to spatial index decomposition analysis, it is suggested that a more aggressive import policy was useful for curbing domestic and global emissions, and the transfer of advanced production technologies and emission control technologies from developed to developing countries should be a compulsory global environmental policy option to mitigate the possible leakage of pollution emissions caused by international trade.
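Index decomposition analysis splits a change in emissions into scale, composition and technique contributions. The abstract does not give its formulas, so the following sketch uses a generic additive LMDI-style decomposition (not necessarily the variant used by the authors) on hypothetical two-sector data.

```python
import numpy as np

def log_mean(a, b):
    """Logarithmic mean L(a, b), handling the a == b limit."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        out = np.where(np.isclose(a, b), a, (a - b) / (np.log(a) - np.log(b)))
    return out

def lmdi_decomposition(Q0, s0, e0, QT, sT, eT):
    """Additive LMDI for emissions C_i = Q * s_i * e_i per sector i.

    Returns the scale, composition and technique contributions to the
    total change in emissions between period 0 and period T.
    """
    C0, CT = Q0 * s0 * e0, QT * sT * eT
    w = log_mean(CT, C0)
    scale = np.sum(w * np.log(QT / Q0))
    composition = np.sum(w * np.log(sT / s0))
    technique = np.sum(w * np.log(eT / e0))
    return scale, composition, technique

# Two-sector toy example (hypothetical numbers, not the paper's data)
scale, comp, tech = lmdi_decomposition(
    Q0=100.0, s0=np.array([0.6, 0.4]), e0=np.array([2.0, 5.0]),
    QT=160.0, sT=np.array([0.55, 0.45]), eT=np.array([1.5, 4.0]))
print(scale + comp + tech)  # equals the total emissions change C_T - C_0
```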
Heterogeneous fractionation profiles of meta-analytic coactivation networks.
Laird, Angela R; Riedel, Michael C; Okoe, Mershack; Jianu, Radu; Ray, Kimberly L; Eickhoff, Simon B; Smith, Stephen M; Fox, Peter T; Sutherland, Matthew T
2017-04-01
Computational cognitive neuroimaging approaches can be leveraged to characterize the hierarchical organization of distributed, functionally specialized networks in the human brain. To this end, we performed large-scale mining across the BrainMap database of coordinate-based activation locations from over 10,000 task-based experiments. Meta-analytic coactivation networks were identified by jointly applying independent component analysis (ICA) and meta-analytic connectivity modeling (MACM) across a wide range of model orders (i.e., d=20-300). We then iteratively computed pairwise correlation coefficients for consecutive model orders to compare spatial network topologies, ultimately yielding fractionation profiles delineating how "parent" functional brain systems decompose into constituent "child" sub-networks. Fractionation profiles differed dramatically across canonical networks: some exhibited complex and extensive fractionation into a large number of sub-networks across the full range of model orders, whereas others exhibited little to no decomposition as model order increased. Hierarchical clustering was applied to evaluate this heterogeneity, yielding three distinct groups of network fractionation profiles: high, moderate, and low fractionation. BrainMap-based functional decoding of resultant coactivation networks revealed a multi-domain association regardless of fractionation complexity. Rather than emphasize a cognitive-motor-perceptual gradient, these outcomes suggest the importance of inter-lobar connectivity in functional brain organization. We conclude that high fractionation networks are complex and comprised of many constituent sub-networks reflecting long-range, inter-lobar connectivity, particularly in fronto-parietal regions. In contrast, low fractionation networks may reflect persistent and stable networks that are more internally coherent and exhibit reduced inter-lobar communication. Copyright © 2017 Elsevier Inc. All rights reserved.
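A minimal sketch of the fractionation idea: spatial correlations between component maps at consecutive model orders yield a per-network fractionation profile, and hierarchical clustering groups the profiles. The threshold, sizes and data below are placeholders, not values from the study.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def fractionation_count(parent_map, child_maps, r_thresh=0.4):
    """Count child components whose spatial maps correlate with a parent map.

    parent_map : (n_voxels,) component map at a low model order
    child_maps : (n_components, n_voxels) maps at a higher model order
    """
    r = np.array([np.corrcoef(parent_map, c)[0, 1] for c in child_maps])
    return int(np.sum(r > r_thresh))

# Toy fractionation profiles: matched sub-network counts per model order
rng = np.random.default_rng(1)
profiles = rng.integers(1, 12, size=(20, 15)).astype(float)  # 20 networks x 15 orders

# Group networks into high / moderate / low fractionation by hierarchical clustering
Z = linkage(profiles, method="ward")
groups = fcluster(Z, t=3, criterion="maxclust")
print(groups)
```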
Heterogeneous fractionation profiles of meta-analytic coactivation networks
Laird, Angela R.; Riedel, Michael C.; Okoe, Mershack; Jianu, Radu; Ray, Kimberly L.; Eickhoff, Simon B.; Smith, Stephen M.; Fox, Peter T.; Sutherland, Matthew T.
2017-01-01
Computational cognitive neuroimaging approaches can be leveraged to characterize the hierarchical organization of distributed, functionally specialized networks in the human brain. To this end, we performed large-scale mining across the BrainMap database of coordinate-based activation locations from over 10,000 task-based experiments. Meta-analytic coactivation networks were identified by jointly applying independent component analysis (ICA) and meta-analytic connectivity modeling (MACM) across a wide range of model orders (i.e., d = 20 to 300). We then iteratively computed pairwise correlation coefficients for consecutive model orders to compare spatial network topologies, ultimately yielding fractionation profiles delineating how “parent” functional brain systems decompose into constituent “child” sub-networks. Fractionation profiles differed dramatically across canonical networks: some exhibited complex and extensive fractionation into a large number of sub-networks across the full range of model orders, whereas others exhibited little to no decomposition as model order increased. Hierarchical clustering was applied to evaluate this heterogeneity, yielding three distinct groups of network fractionation profiles: high, moderate, and low fractionation. BrainMap-based functional decoding of resultant coactivation networks revealed a multi-domain association regardless of fractionation complexity. Rather than emphasize a cognitive-motor-perceptual gradient, these outcomes suggest the importance of inter-lobar connectivity in functional brain organization. We conclude that high fractionation networks are complex and comprised of many constituent sub-networks reflecting long-range, inter-lobar connectivity, particularly in fronto-parietal regions. In contrast, low fractionation networks may reflect persistent and stable networks that are more internally coherent and exhibit reduced inter-lobar communication. PMID:28222386
Royle, J. Andrew; Converse, Sarah J.
2014-01-01
Capture–recapture studies are often conducted on populations that are stratified by space, time or other factors. In this paper, we develop a Bayesian spatial capture–recapture (SCR) modelling framework for stratified populations – when sampling occurs within multiple distinct spatial and temporal strata. We describe a hierarchical model that integrates distinct models for both the spatial encounter history data from capture–recapture sampling, and also for modelling variation in density among strata. We use an implementation of data augmentation to parameterize the model in terms of a latent categorical stratum or group membership variable, which provides a convenient implementation in popular BUGS software packages. We provide an example application to an experimental study involving small-mammal sampling on multiple trapping grids over multiple years, where the main interest is in modelling a treatment effect on population density among the trapping grids. Many capture–recapture studies involve some aspect of spatial or temporal replication that requires some attention to modelling variation among groups or strata. We propose a hierarchical model that allows explicit modelling of group or strata effects. Because the model is formulated for individual encounter histories and is easily implemented in the BUGS language and other free software, it also provides a general framework for modelling individual effects, such as are present in SCR models.
Ecoregions of the conterminous United States: evolution of a hierarchical spatial framework
Omernik, James M.; Griffith, Glenn E.
2014-01-01
A map of ecological regions of the conterminous United States, first published in 1987, has been greatly refined and expanded into a hierarchical spatial framework in response to user needs, particularly by state resource management agencies. In collaboration with scientists and resource managers from numerous agencies and institutions in the United States, Mexico, and Canada, the framework has been expanded to cover North America, and the original ecoregions (now termed Level III) have been refined, subdivided, and aggregated to identify coarser as well as more detailed spatial units. The most generalized units (Level I) define 10 ecoregions in the conterminous U.S., while the finest-scale units (Level IV) identify 967 ecoregions. In this paper, we explain the logic underpinning the approach, discuss the evolution of the regional mapping process, and provide examples of how the ecoregions were distinguished at each hierarchical level. The variety of applications of the ecoregion framework illustrates its utility in resource assessment and management.
Ecoregions of the Conterminous United States: Evolution of a Hierarchical Spatial Framework
NASA Astrophysics Data System (ADS)
Omernik, James M.; Griffith, Glenn E.
2014-12-01
A map of ecological regions of the conterminous United States, first published in 1987, has been greatly refined and expanded into a hierarchical spatial framework in response to user needs, particularly by state resource management agencies. In collaboration with scientists and resource managers from numerous agencies and institutions in the United States, Mexico, and Canada, the framework has been expanded to cover North America, and the original ecoregions (now termed Level III) have been refined, subdivided, and aggregated to identify coarser as well as more detailed spatial units. The most generalized units (Level I) define 10 ecoregions in the conterminous U.S., while the finest-scale units (Level IV) identify 967 ecoregions. In this paper, we explain the logic underpinning the approach, discuss the evolution of the regional mapping process, and provide examples of how the ecoregions were distinguished at each hierarchical level. The variety of applications of the ecoregion framework illustrates its utility in resource assessment and management.
Hierarchical acquisition of visual specificity in spatial contextual cueing.
Lie, Kin-Pou
2015-01-01
Spatial contextual cueing refers to the improvement in visual search performance when invariant associations between target locations and distractor spatial configurations are learned incidentally. Using the instance theory of automatization and the reverse hierarchy theory of visual perceptual learning, this study explores the acquisition of visual specificity in spatial contextual cueing. Two experiments in which detailed visual features were irrelevant for distinguishing between spatial contexts found that spatial contextual cueing was visually generic in difficult trials when the trials were not preceded by easy trials (Experiment 1) but that spatial contextual cueing progressed to visual specificity when difficult trials were preceded by easy trials (Experiment 2). These findings support reverse hierarchy theory, which predicts that even when detailed visual features are irrelevant for distinguishing between spatial contexts, spatial contextual cueing can progress to visual specificity if the stimuli remain constant, the task is difficult, and difficult trials are preceded by easy trials. However, these findings are inconsistent with instance theory, which predicts that when detailed visual features are irrelevant for distinguishing between spatial contexts, spatial contextual cueing will not progress to visual specificity. This study concludes that the acquisition of visual specificity in spatial contextual cueing is more plausibly hierarchical, rather than instance-based.
NASA Astrophysics Data System (ADS)
Balguri, Praveen Kumar; Harris Samuel, D. G.; Thumu, Udayabhaskararao
2017-09-01
In this work, we present the potential of monodispersed 3D hierarchical α-Fe2O3 nanoflowers as a reinforcement for epoxy polymer. The nanoflowers are synthesized through the thermal decomposition of an iron alkoxide precursor in ethylene glycol. α-Fe2O3/epoxy nanocomposites (0.1 wt% of α-Fe2O3) show 109%, 59%, 13%, and 15% enhancement in impact (un-notched), impact (notched), flexural and tensile properties, respectively. The α-Fe2O3 nanoflowers uniformly embedded in the epoxy polymer not only provide mechanical strength but also impart a magnetic character to the nanocomposite, as observed by scanning electron microscopy and vibrating sample magnetometry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Chao; Pouransari, Hadi; Rajamanickam, Sivasankaran
We present a parallel hierarchical solver for general sparse linear systems on distributed-memory machines. For large-scale problems, this fully algebraic algorithm is faster and more memory-efficient than sparse direct solvers because it exploits the low-rank structure of fill-in blocks. Depending on the accuracy of low-rank approximations, the hierarchical solver can be used either as a direct solver or as a preconditioner. The parallel algorithm is based on data decomposition and requires only local communication for updating boundary data on every processor. Moreover, the computation-to-communication ratio of the parallel algorithm is approximately the volume-to-surface-area ratio of the subdomain owned by every processor. We also provide various numerical results to demonstrate the versatility and scalability of the parallel algorithm.
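The low-rank compression that such hierarchical solvers rely on can be illustrated with a truncated SVD of a fill-in block. The sketch below is a generic NumPy example with an assumed tolerance, not the cited solver's implementation.

```python
import numpy as np

def compress_block(B, tol=1e-6):
    """Replace a dense fill-in block by a low-rank factorization U @ V.

    The rank is chosen so the dropped singular values fall below `tol`
    relative to the largest one, mimicking the accuracy knob that lets a
    hierarchical solver act either as a direct solver or as a preconditioner.
    """
    U, S, Vt = np.linalg.svd(B, full_matrices=False)
    rank = int(np.sum(S > tol * S[0]))
    return U[:, :rank] * S[:rank], Vt[:rank, :]

# A numerically low-rank block: smooth interaction between two far-apart clusters
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(10.0, 11.0, 200)
B = 1.0 / np.abs(x[:, None] - y[None, :])
Uf, Vf = compress_block(B, tol=1e-8)
print(Uf.shape[1], np.linalg.norm(B - Uf @ Vf) / np.linalg.norm(B))
```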
Optical ranked-order filtering using threshold decomposition
Allebach, J.P.; Ochoa, E.; Sweeney, D.W.
1987-10-09
A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed. 3 figs.
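A software analogue of this optical pipeline follows directly from the description: decompose the image into binary threshold slices, apply a linear local-sum filter to each slice, threshold point-wise at the desired rank, and sum the slices back. The sketch below is an illustrative NumPy/SciPy version assuming small non-negative integer grey levels; it is not the patented optical implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ranked_order_filter(image, size=3, rank=None):
    """Ranked-order filtering via threshold decomposition.

    Each grey level defines a binary slice; a linear local-sum filter plus a
    point-wise comparison on each slice reproduces the nonlinear ranking.
    rank = 0 gives the maximum, size*size - 1 the minimum, and
    (size*size - 1) // 2 (the default) the median.
    """
    n = size * size
    if rank is None:
        rank = (n - 1) // 2
    out = np.zeros_like(image, dtype=int)
    for t in range(1, int(image.max()) + 1):
        slice_t = (image >= t).astype(float)               # threshold decomposition
        counts = np.rint(uniform_filter(slice_t, size) * n)  # linear, space-invariant step
        out += (counts >= rank + 1).astype(int)             # point-wise comparison
    return out

img = np.random.randint(0, 8, size=(32, 32))
median_img = ranked_order_filter(img, size=3)
```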
An intelligent decomposition approach for efficient design of non-hierarchic systems
NASA Technical Reports Server (NTRS)
Bloebaum, Christina L.
1992-01-01
The design process associated with large engineering systems requires an initial decomposition of the complex systems into subsystem modules which are coupled through transference of output data. The implementation of such a decomposition approach assumes the ability exists to determine what subsystems and interactions exist and what order of execution will be imposed during the analysis process. Unfortunately, this is quite often an extremely complex task which may be beyond human ability to efficiently achieve. Further, in optimizing such a coupled system, it is essential to be able to determine which interactions figure prominently enough to significantly affect the accuracy of the optimal solution. The ability to determine 'weak' versus 'strong' coupling strengths would aid the designer in deciding which couplings could be permanently removed from consideration or which could be temporarily suspended so as to achieve computational savings with minimal loss in solution accuracy. An approach that uses normalized sensitivities to quantify coupling strengths is presented. The approach is applied to a coupled system composed of analysis equations for verification purposes.
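The coupling-strength idea can be illustrated with finite-difference normalized sensitivities S[i, j] = (dy_i/dx_j) * x_j / y_i and a cutoff below which couplings are treated as weak. The function names, test system and threshold below are assumptions for illustration, not the paper's test problem.

```python
import numpy as np

def normalized_sensitivities(f, x, eps=1e-6):
    """Finite-difference normalized sensitivities S[i, j] = (dy_i/dx_j) * x_j / y_i."""
    x = np.asarray(x, dtype=float)
    y0 = np.asarray(f(x), dtype=float)
    S = np.zeros((y0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps * max(abs(x[j]), 1.0)
        dy = (np.asarray(f(xp)) - y0) / (xp[j] - x[j])
        S[:, j] = dy * x[j] / y0
    return S

# Two coupled analysis equations (hypothetical subsystem outputs)
def analyses(x):
    y1 = 2.0 * x[0] + 0.01 * x[1]     # weakly coupled to x[1]
    y2 = x[0] * x[1]
    return np.array([y1, y2])

S = normalized_sensitivities(analyses, np.array([3.0, 4.0]))
weak = np.abs(S) < 0.05               # candidate couplings to suspend or remove
print(S, weak, sep="\n")
```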
NASA Astrophysics Data System (ADS)
Parlett, Christopher M. A.; Isaacs, Mark A.; Beaumont, Simon K.; Bingham, Laura M.; Hondow, Nicole S.; Wilson, Karen; Lee, Adam F.
2016-02-01
The chemical functionality within porous architectures dictates their performance as heterogeneous catalysts; however, synthetic routes to control the spatial distribution of individual functions within porous solids are limited. Here we report the fabrication of spatially orthogonal bifunctional porous catalysts, through the stepwise template removal and chemical functionalization of an interconnected silica framework. Selective removal of polystyrene nanosphere templates from a lyotropic liquid crystal-templated silica sol-gel matrix, followed by extraction of the liquid crystal template, affords a hierarchical macroporous-mesoporous architecture. Decoupling of the individual template extractions allows independent functionalization of macropore and mesopore networks on the basis of chemical and/or size specificity. Spatial compartmentalization of, and directed molecular transport between, chemical functionalities affords control over the reaction sequence in catalytic cascades; herein illustrated by the Pd/Pt-catalysed oxidation of cinnamyl alcohol to cinnamic acid. We anticipate that our methodology will prompt further design of multifunctional materials comprising spatially compartmentalized functions.
Wimmer, Klaus; Compte, Albert; Roxin, Alex; Peixoto, Diogo; Renart, Alfonso; de la Rocha, Jaime
2015-01-01
Neuronal variability in sensory cortex predicts perceptual decisions. This relationship, termed choice probability (CP), can arise from sensory variability biasing behaviour and from top-down signals reflecting behaviour. To investigate the interaction of these mechanisms during the decision-making process, we use a hierarchical network model composed of reciprocally connected sensory and integration circuits. Consistent with monkey behaviour in a fixed-duration motion discrimination task, the model integrates sensory evidence transiently, giving rise to a decaying bottom-up CP component. However, the dynamics of the hierarchical loop recruits a concurrently rising top-down component, resulting in sustained CP. We compute the CP time-course of neurons in the middle temporal area (MT) and find an early transient component and a separate late contribution reflecting decision build-up. The stability of individual CPs and the dynamics of noise correlations further support this decomposition. Our model provides a unified understanding of the circuit dynamics linking neural and behavioural variability. PMID:25649611
A hierarchical model for spatial capture-recapture data
Royle, J. Andrew; Young, K.V.
2008-01-01
Estimating density is a fundamental objective of many animal population studies. Application of methods for estimating population size from ostensibly closed populations is widespread, but ineffective for estimating absolute density because most populations are subject to short-term movements or so-called temporary emigration. This phenomenon invalidates the resulting estimates because the effective sample area is unknown. A number of methods involving the adjustment of estimates based on heuristic considerations are in widespread use. In this paper, a hierarchical model of spatially indexed capture recapture data is proposed for sampling based on area searches of spatial sample units subject to uniform sampling intensity. The hierarchical model contains explicit models for the distribution of individuals and their movements, in addition to an observation model that is conditional on the location of individuals during sampling. Bayesian analysis of the hierarchical model is achieved by the use of data augmentation, which allows for a straightforward implementation in the freely available software WinBUGS. We present results of a simulation study that was carried out to evaluate the operating characteristics of the Bayesian estimator under variable densities and movement patterns of individuals. An application of the model is presented for survey data on the flat-tailed horned lizard (Phrynosoma mcallii) in Arizona, USA.
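The observation model at the heart of spatial capture-recapture can be sketched by simulating encounters with a half-normal detection function around latent activity centres. The following is an illustrative simulation with made-up trap locations and parameters, not the Bayesian WinBUGS implementation described in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Trap grid (J traps) and simulated activity centres (N individuals) in a unit square
traps = np.array([(x, y) for x in np.linspace(0.2, 0.8, 5)
                          for y in np.linspace(0.2, 0.8, 5)])
N, K = 30, 5                      # individuals, sampling occasions
centres = rng.uniform(0, 1, size=(N, 2))

p0, sigma = 0.3, 0.1              # baseline detection probability and movement scale
d2 = ((centres[:, None, :] - traps[None, :, :]) ** 2).sum(-1)
p = p0 * np.exp(-d2 / (2 * sigma ** 2))          # half-normal detection function

# Encounter histories: y[i, j] = number of detections of individual i at trap j
y = rng.binomial(K, p)
print("individuals detected at least once:", int((y.sum(axis=1) > 0).sum()))
```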
Characteristic-eddy decomposition of turbulence in a channel
NASA Technical Reports Server (NTRS)
Moin, Parviz; Moser, Robert D.
1989-01-01
Lumley's proper orthogonal decomposition technique is applied to the turbulent flow in a channel. Coherent structures are extracted by decomposing the velocity field into characteristic eddies with random coefficients. A generalization of the shot-noise expansion is used to determine the characteristic eddies in homogeneous spatial directions. Three different techniques are used to determine the phases of the Fourier coefficients in the expansion: (1) one based on the bispectrum, (2) a spatial compactness requirement, and (3) a functional continuity argument. Similar results are found from each of these techniques.
Background/Questions/Methods Threats to marine and estuarine species operate over many spatial scales, from nutrient enrichment at the watershed/estuary scale to climate change at a global scale. To address this range of environmental issues, we developed a hierarchical framewor...
Shang, Yizi; Lu, Shibao; Gong, Jiaguo; Shang, Ling; Li, Xiaofei; Wei, Yongping; Shi, Hongwang
2017-12-01
A recent study decomposed the changes in industrial water use into three hierarchies (output, technology, and structure) using a refined Laspeyres decomposition model, and found monotonous and exclusive trends in the output and technology hierarchies. Based on that research, this study proposes a hierarchical prediction approach to forecast future industrial water demand. Three water demand scenarios (high, medium, and low) were then established based on potential future industrial structural adjustments, and used to predict water demand for the structural hierarchy. The predictive results of this approach were compared with results from a grey prediction model (GPM (1, 1)). The comparison shows that the results of the two approaches were basically identical, differing by less than 10%. Taking Tianjin, China, as a case, and using data from 2003-2012, this study predicts that industrial water demand will continuously increase, reaching 580 million m³, 776.4 million m³, and approximately 1.09 billion m³ by the years 2015, 2020 and 2025, respectively. It is concluded that Tianjin will soon face another water crisis if no immediate measures are taken. This study recommends that Tianjin adjust its industrial structure with water savings as the main objective, and actively seek new sources of water to increase its supply.
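For reference, the grey prediction benchmark mentioned in the abstract is commonly implemented as the standard GM(1,1) model. The sketch below follows that textbook procedure on made-up water-use figures (not Tianjin's actual data) and may differ in detail from the authors' GPM(1,1).

```python
import numpy as np

def gm11_forecast(x0, n_ahead=3):
    """Standard GM(1,1) grey model: fit on series x0 and forecast n_ahead steps."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # grey development/input parameters
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # time-response function
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
    return x0_hat[len(x0):]

# Hypothetical annual industrial water use (10^6 m^3), for illustration only
water_use = [420, 438, 455, 471, 490, 505, 521, 540, 552, 566]
print(gm11_forecast(water_use, n_ahead=3))
```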
A variance-decomposition approach to investigating multiscale habitat associations
Lawler, J.J.; Edwards, T.C.
2006-01-01
The recognition of the importance of spatial scale in ecology has led many researchers to take multiscale approaches to studying habitat associations. However, few of the studies that investigate habitat associations at multiple spatial scales have considered the potential effects of cross-scale correlations in measured habitat variables. When cross-scale correlations in such studies are strong, conclusions drawn about the relative strength of habitat associations at different spatial scales may be inaccurate. Here we adapt and demonstrate an analytical technique based on variance decomposition for quantifying the influence of cross-scale correlations on multiscale habitat associations. We used the technique to quantify the variation in nest-site locations of Red-naped Sapsuckers (Sphyrapicus nuchalis) and Northern Flickers (Colaptes auratus) associated with habitat descriptors at three spatial scales. We demonstrate how the method can be used to identify components of variation that are associated only with factors at a single spatial scale as well as shared components of variation that represent cross-scale correlations. Despite the fact that no explanatory variables in our models were highly correlated (r < 0.60), we found that shared components of variation reflecting cross-scale correlations accounted for roughly half of the deviance explained by the models. These results highlight the importance of both conducting habitat analyses at multiple spatial scales and of quantifying the effects of cross-scale correlations in such analyses. Given the limits of conventional analytical techniques, we recommend alternative methods, such as the variance-decomposition technique demonstrated here, for analyzing habitat associations at multiple spatial scales. © The Cooper Ornithological Society 2006.
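The variance-decomposition logic can be sketched by comparing the explained variance of single-scale and combined models, attributing the shared component to cross-scale correlation. The example below uses ordinary least squares and synthetic data as a stand-in for the deviance-based partitioning in the paper.

```python
import numpy as np

def r2(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

def partition_two_scales(X_local, X_landscape, y):
    """Pure and shared components of explained variance for two spatial scales."""
    r_both = r2(np.column_stack([X_local, X_landscape]), y)
    r_local, r_land = r2(X_local, y), r2(X_landscape, y)
    pure_local = r_both - r_land
    pure_land = r_both - r_local
    shared = r_both - pure_local - pure_land     # cross-scale correlation component
    return pure_local, pure_land, shared

rng = np.random.default_rng(3)
common = rng.standard_normal(200)                         # cross-scale correlated driver
X_local = np.column_stack([common + 0.5 * rng.standard_normal(200)])
X_land = np.column_stack([common + 0.5 * rng.standard_normal(200)])
y = common + 0.3 * rng.standard_normal(200)
print(partition_two_scales(X_local, X_land, y))
```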
García-Palacios, Pablo; Maestre, Fernando T.; Kattge, Jens; Wall, Diana H.
2015-01-01
Climate and litter quality have been identified as major drivers of litter decomposition at large spatial scales. However, the role played by soil fauna remains largely unknown, despite its importance for litter fragmentation and microbial activity. We synthesized litterbag studies to quantify the effect sizes of soil fauna on litter decomposition rates at the global and biome scales, and to assess how climate, litter quality and soil fauna interact to determine such rates. Soil fauna consistently enhanced litter decomposition at both global and biome scales (average increment ~27%). However, climate and litter quality differently modulated the effects of soil fauna on decomposition rates between biomes, from climate-driven biomes to those where climate effects were mediated by changes in litter quality. Our results advocate for the inclusion of biome-specific soil fauna effects on litter decomposition as a means to reduce the unexplained variation in large-scale decomposition models. PMID:23763716
Liu, Zhengyan; Mao, Xianqiang; Song, Peng
2017-01-01
Temporal index decomposition analysis and spatial index decomposition analysis were applied to understand the driving forces of the emissions embodied in China’s exports and net exports during 2002–2011, respectively. The accumulated emissions embodied in exports accounted for approximately 30% of the total emissions in China; although the contribution of the sectoral total emissions intensity (technique effect) declined, the scale effect was largely responsible for the mounting emissions associated with export, and the composition effect played a largely insignificant role. Calculations of the emissions embodied in net exports suggest that China is generally in an environmentally inferior position compared with its major trade partners. The differences in the economy-wide emission intensities between China and its major trade partners were the biggest contribution to this reality, and the trade balance effect played a less important role. However, a lower degree of specialization in pollution intensive products in exports than in imports helped to reduce slightly the emissions embodied in net exports. The temporal index decomposition analysis results suggest that China should take effective measures to optimize export and supply-side structure and reduce the total emissions intensity. According to spatial index decomposition analysis, it is suggested that a more aggressive import policy was useful for curbing domestic and global emissions, and the transfer of advanced production technologies and emission control technologies from developed to developing countries should be a compulsory global environmental policy option to mitigate the possible leakage of pollution emissions caused by international trade. PMID:28441399
Frelat, Romain; Lindegren, Martin; Denker, Tim Spaanheden; Floeter, Jens; Fock, Heino O; Sguotti, Camilla; Stäbler, Moritz; Otto, Saskia A; Möllmann, Christian
2017-01-01
Understanding spatio-temporal dynamics of biotic communities containing large numbers of species is crucial to guide ecosystem management and conservation efforts. However, traditional approaches usually focus on studying community dynamics either in space or in time, often failing to fully account for interlinked spatio-temporal changes. In this study, we demonstrate and promote the use of tensor decomposition for disentangling spatio-temporal community dynamics in long-term monitoring data. Tensor decomposition builds on traditional multivariate statistics (e.g. Principal Component Analysis) but extends it to multiple dimensions. This extension allows for the synchronized study of multiple ecological variables measured repeatedly in time and space. We applied this comprehensive approach to explore the spatio-temporal dynamics of 65 demersal fish species in the North Sea, a marine ecosystem strongly altered by human activities and climate change. Our case study demonstrates how tensor decomposition can successfully (i) characterize the main spatio-temporal patterns and trends in species abundances, (ii) identify sub-communities of species that share similar spatial distribution and temporal dynamics, and (iii) reveal external drivers of change. Our results revealed a strong spatial structure in fish assemblages persistent over time and linked to differences in depth, primary production and seasonality. Furthermore, we simultaneously characterized important temporal distribution changes related to the low frequency temperature variability inherent in the Atlantic Multidecadal Oscillation. Finally, we identified six major sub-communities composed of species sharing similar spatial distribution patterns and temporal dynamics. Our case study demonstrates the application and benefits of using tensor decomposition for studying complex community data sets usually derived from large-scale monitoring programs.
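Tensor decomposition of a species × space × time array can be illustrated with a truncated higher-order SVD built from mode unfoldings. This is a generic sketch on a synthetic abundance tensor and not necessarily the specific decomposition used in the study.

```python
import numpy as np

def unfold(T, mode):
    """Matricize a 3-way tensor along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated higher-order SVD: one factor matrix per mode plus a core tensor."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        # Contract mode m of the core with the transposed factor matrix
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, m, 0), axes=1), 0, m)
    return core, factors

# Toy abundance tensor: 65 species x 40 spatial cells x 30 years
rng = np.random.default_rng(7)
T = rng.poisson(3.0, size=(65, 40, 30)).astype(float)
core, (species_f, space_f, time_f) = hosvd(T, ranks=(4, 3, 3))
print(core.shape, species_f.shape, space_f.shape, time_f.shape)
```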
An MPI+X implementation of contact global search using Kokkos
Hansen, Glen A.; Xavier, Patrick G.; Mish, Sam P.; ...
2015-10-05
This paper describes an approach that seeks to parallelize the spatial search associated with computational contact mechanics. In contact mechanics, the purpose of the spatial search is to find “nearest neighbors,” which is the prelude to an imprinting search that resolves the interactions between the external surfaces of contacting bodies. In particular, we are interested in the contact global search portion of the spatial search associated with this operation on domain-decomposition-based meshes. Specifically, we describe an implementation that combines standard domain-decomposition-based MPI-parallel spatial search with thread-level parallelism (MPI-X) available on advanced computer architectures (those with GPU coprocessors). Our goal is to demonstrate the efficacy of the MPI-X paradigm in the overall contact search. Standard MPI-parallel implementations typically use a domain decomposition of the external surfaces of bodies within the domain in an attempt to efficiently distribute computational work. This decomposition may or may not be the same as the volume decomposition associated with the host physics. The parallel contact global search phase is then employed to find and distribute surface entities (nodes and faces) that are needed to compute contact constraints between entities owned by different MPI ranks without further inter-rank communication. Key steps of the contact global search include computing bounding boxes, building surface entity (node and face) search trees and finding and distributing entities required to complete on-rank (local) spatial searches. To enable source-code portability and performance across a variety of different computer architectures, we implemented the algorithm using the Kokkos hardware abstraction library. While we targeted development towards machines with a GPU accelerator per MPI rank, we also report performance results for OpenMP with a conventional multi-core compute node per rank. Results here demonstrate a 47% decrease in the time spent within the global search algorithm, comparing the reference ACME algorithm with the GPU implementation, on an 18M face problem using four MPI ranks. As a result, while further work remains to maximize performance on the GPU, this result illustrates the potential of the proposed implementation.
Hierarchical Bayesian Model (HBM)-Derived Estimates of Air Quality for 2004 - Annual Report
This report describes EPA's Hierarchical Bayesian model-generated (HBM) estimates of O3 and PM2.5 concentrations throughout the continental United States during the 2004 calendar year. HBM estimates provide the spatial and temporal variance of O3 ...
Organizational and Spatial Dynamics of Attentional Focusing in Hierarchically Structured Objects
ERIC Educational Resources Information Center
Yeari, Menahem; Goldsmith, Morris
2011-01-01
Is the focusing of visual attention object-based, space-based, both, or neither? Attentional focusing latencies in hierarchically structured compound-letter objects were examined, orthogonally manipulating global size (larger vs. smaller) and organizational complexity (two-level structure vs. three-level structure). In a dynamic focusing task,…
Modal decomposition of turbulent supersonic cavity
NASA Astrophysics Data System (ADS)
Soni, R. K.; Arya, N.; De, A.
2018-06-01
Self-sustained oscillations in a Mach 3 supersonic cavity with a length-to-depth ratio of three are investigated using wall-modeled large eddy simulation methodology for Re_D = 3.39 × 10⁵. The unsteady data obtained through computation are utilized to investigate the spatial and temporal evolution of the flow field, especially the second invariant of the velocity gradient tensor, while the phase-averaged data are analyzed over a feedback cycle to study the spatial structures. This analysis is accompanied by the proper orthogonal decomposition (POD) data, which reveals the presence of discrete vortices along the shear layer. The POD analysis is performed in both the spanwise and streamwise planes to extract the coherence in flow structures. Finally, dynamic mode decomposition is performed on the data sequence to obtain the dynamic information and deeper insight into the self-sustained mechanism.
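The dynamic mode decomposition step can be sketched with the standard projected-DMD algorithm (SVD of the first snapshot matrix, low-rank propagator, eigendecomposition). The example below uses random data as a placeholder for the cavity fields and an assumed truncation rank; it is not the authors' code.

```python
import numpy as np

def dmd(snapshots, rank=10):
    """Projected dynamic mode decomposition of a sequence of flow snapshots.

    snapshots : (n_points, n_times) array sampled at a constant time step.
    Returns the discrete-time DMD eigenvalues and the spatial modes.
    """
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    U, S, Vt = U[:, :rank], S[:rank], Vt[:rank, :]
    A_tilde = U.conj().T @ Y @ Vt.conj().T / S       # low-rank linear propagator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vt.conj().T / S @ W                  # exact DMD modes
    return eigvals, modes

rng = np.random.default_rng(0)
data = rng.standard_normal((2000, 200))              # stand-in for cavity pressure fields
lam, phi = dmd(data, rank=12)
growth_rates = np.log(np.abs(lam))                   # > 0 indicates growing modes
```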
Three-dimensional empirical mode decomposition analysis apparatus, method and article of manufacture
NASA Technical Reports Server (NTRS)
Gloersen, Per (Inventor)
2004-01-01
An apparatus and method of analysis for three-dimensional (3D) physical phenomena. The physical phenomena may include any varying 3D phenomena such as time-varying polar ice flows. A representation of the 3D phenomena is passed through a Hilbert transform to convert the data into complex form. A spatial variable is separated from the complex representation by producing a time-based covariance matrix. The temporal parts of the principal components are produced by applying Singular Value Decomposition (SVD). Based on the rapidity with which the eigenvalues decay, the first 3-10 complex principal components (CPC) are selected for Empirical Mode Decomposition into intrinsic modes. The intrinsic modes produced are filtered in order to reconstruct the spatial part of the CPC. Finally, a filtered time series may be reconstructed from the first 3-10 filtered complex principal components.
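The first stages of this pipeline (Hilbert transform to complex form, time-based covariance, SVD for complex principal components) can be sketched as follows. The later empirical-mode-decomposition filtering of the CPCs is omitted, and the field, sizes and number of retained components are placeholders, not the patented implementation.

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic stand-in for a 3D time-varying field flattened to (n_times, n_space)
rng = np.random.default_rng(5)
field = rng.standard_normal((120, 500))

analytic = hilbert(field, axis=0)                       # complex (analytic) representation
cov_t = analytic @ analytic.conj().T / field.shape[1]   # time-based covariance matrix

# SVD of the temporal covariance gives the temporal principal components;
# projecting the data onto them recovers the spatial parts.
U, S, _ = np.linalg.svd(cov_t)
n_keep = 5                                      # first few CPCs, chosen by eigenvalue decay
temporal_cpc = U[:, :n_keep]
spatial_cpc = temporal_cpc.conj().T @ analytic  # (n_keep, n_space)
print(S[:n_keep] / S.sum())
```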
Hierarchical spatial models for predicting pygmy rabbit distribution and relative abundance
Wilson, T.L.; Odei, J.B.; Hooten, M.B.; Edwards, T.C.
2010-01-01
Conservationists routinely use species distribution models to plan conservation, restoration and development actions, while ecologists use them to infer process from pattern. These models tend to work well for common or easily observable species, but are of limited utility for rare and cryptic species. This may be because honest accounting of known observation bias and spatial autocorrelation are rarely included, thereby limiting statistical inference of resulting distribution maps. We specified and implemented a spatially explicit Bayesian hierarchical model for a cryptic mammal species (pygmy rabbit Brachylagus idahoensis). Our approach used two levels of indirect sign that are naturally hierarchical (burrows and faecal pellets) to build a model that allows for inference on regression coefficients as well as spatially explicit model parameters. We also produced maps of rabbit distribution (occupied burrows) and relative abundance (number of burrows expected to be occupied by pygmy rabbits). The model demonstrated statistically rigorous spatial prediction by including spatial autocorrelation and measurement uncertainty. We demonstrated flexibility of our modelling framework by depicting probabilistic distribution predictions using different assumptions of pygmy rabbit habitat requirements. Spatial representations of the variance of posterior predictive distributions were obtained to evaluate heterogeneity in model fit across the spatial domain. Leave-one-out cross-validation was conducted to evaluate the overall model fit. Synthesis and applications. Our method draws on the strengths of previous work, thereby bridging and extending two active areas of ecological research: species distribution models and multi-state occupancy modelling. Our framework can be extended to encompass both larger extents and other species for which direct estimation of abundance is difficult. © 2010 The Authors. Journal compilation © 2010 British Ecological Society.
Spatial scaling of non-native fish richness across the United States
Qinfeng Guo; Julian D. Olden
2014-01-01
A major goal and challenge of invasion ecology is to describe and interpret spatial and temporal patterns of species invasions. Here, we examined fish invasion patterns at four spatially structured and hierarchically nested scales across the contiguous United States (i.e., from large to small: region, basin, watershed, and sub-watershed). All spatial relationships in...
Pita, Ricardo; Lambin, Xavier; Mira, António; Beja, Pedro
2016-09-01
According to ecological theory, the coexistence of competitors in patchy environments may be facilitated by hierarchical spatial segregation along axes of environmental variation, but empirical evidence is limited. Cabrera and water voles show a metapopulation-like structure in Mediterranean farmland, where they are known to segregate along space, habitat, and time axes within habitat patches. Here, we assess whether segregation also occurs among and within landscapes, and how this is influenced by patch-network and matrix composition. We surveyed 75 landscapes, each covering 78 ha, where we mapped all habitat patches potentially suitable for Cabrera and water voles, and the area effectively occupied by each species (extent of occupancy). The relatively large water vole tended to be the sole occupant of landscapes with high habitat amount but relatively low patch density (i.e., with a few large patches), and with a predominantly agricultural matrix, whereas landscapes with high patch density (i.e., many small patches) and low agricultural cover, tended to be occupied exclusively by the small Cabrera vole. The two species tended to co-occur in landscapes with intermediate patch-network and matrix characteristics, though their extents of occurrence were negatively correlated after controlling for environmental effects. In combination with our previous studies on the Cabrera-water vole system, these findings illustrated empirically the occurrence of hierarchical spatial segregation, ranging from within-patches to among-landscapes. Overall, our study suggests that recognizing the hierarchical nature of spatial segregation patterns and their major environmental drivers should enhance our understanding of species coexistence in patchy environments.
Pan, Mei; Zhu, Yi-Xuan; Wu, Kai; Chen, Ling; Hou, Ya-Jun; Yin, Shao-Yun; Wang, Hai-Ping; Fan, Ya-Nan; Su, Cheng-Yong
2017-11-13
Core-shell or striped heteroatomic lanthanide metal-organic framework hierarchical single crystals were obtained by liquid-phase anisotropic epitaxial growth, maintaining identical periodic organization while simultaneously exhibiting spatially segregated structure. Different types of domain and orientation-controlled multicolor photophysical models are presented, which show either visually distinguishable or visible/near infrared (NIR) emissive colors. This provides a new bottom-up strategy toward the design of hierarchical molecular systems, offering high-throughput and multiplexed luminescence color tunability and readability. The unique capability of combining spectroscopic coding with 3D (three-dimensional) microscale spatial coding is established, providing potential applications in anti-counterfeiting, color barcoding, and other types of integrated and miniaturized optoelectronic materials and devices. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
3D hierarchical spatial representation and memory of multimodal sensory data
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Dow, Paul A.; Huber, David J.
2009-04-01
This paper describes an efficient method and system for representing, processing and understanding multi-modal sensory data. More specifically, it describes a computational method and system for how to process and remember multiple locations in multimodal sensory space (e.g., visual, auditory, somatosensory, etc.). The multimodal representation and memory is based on a biologically-inspired hierarchy of spatial representations implemented with novel analogues of real representations used in the human brain. The novelty of the work is in the computationally efficient and robust spatial representation of 3D locations in multimodal sensory space as well as an associated working memory for storage and recall of these representations at the desired level for goal-oriented action. We describe (1) A simple and efficient method for human-like hierarchical spatial representations of sensory data and how to associate, integrate and convert between these representations (head-centered coordinate system, body-centered coordinate, etc.); (2) a robust method for training and learning a mapping of points in multimodal sensory space (e.g., camera-visible object positions, location of auditory sources, etc.) to the above hierarchical spatial representations; and (3) a specification and implementation of a hierarchical spatial working memory based on the above for storage and recall at the desired level for goal-oriented action(s). This work is most useful for any machine or human-machine application that requires processing of multimodal sensory inputs, making sense of it from a spatial perspective (e.g., where is the sensory information coming from with respect to the machine and its parts) and then taking some goal-oriented action based on this spatial understanding. A multi-level spatial representation hierarchy means that heterogeneous sensory inputs (e.g., visual, auditory, somatosensory, etc.) can map onto the hierarchy at different levels. When controlling various machine/robot degrees of freedom, the desired movements and action can be computed from these different levels in the hierarchy. The most basic embodiment of this machine could be a pan-tilt camera system, an array of microphones, a machine with arm/hand like structure or/and a robot with some or all of the above capabilities. We describe the approach, system and present preliminary results on a real-robotic platform.
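The hierarchical spatial representation described here rests on converting sensed locations between coordinate frames. A minimal sketch using homogeneous transforms is shown below; the eye/head/body offsets and angles are hypothetical values for illustration, not parameters from the system described.

```python
import numpy as np

def transform(rotation_z_deg, translation):
    """Homogeneous 4x4 transform: rotation about z followed by a translation."""
    a = np.radians(rotation_z_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0],
                 [np.sin(a),  np.cos(a), 0],
                 [0,          0,         1]]
    T[:3, 3] = translation
    return T

# Hypothetical frames: camera (eye) -> head -> body
head_from_eye = transform(0.0, [0.0, 0.07, 0.10])    # camera offset on the head
body_from_head = transform(30.0, [0.0, 0.0, 0.45])   # head pan angle and neck height

target_eye = np.array([0.2, 0.0, 1.5, 1.0])          # visually sensed 3D point (homogeneous)
target_body = body_from_head @ head_from_eye @ target_eye
print(target_body[:3])                                # same point, body-centered coordinates
```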
Chatterjee, Nivedita; Sinha, Sitabhra
2008-01-01
The nervous system of the nematode C. elegans provides a unique opportunity to understand how behavior ('mind') emerges from activity in the nervous system ('brain') of an organism. The hermaphrodite worm has only 302 neurons, all of whose connections (synaptic and gap junctional) are known. Recently, many of the functional circuits that make up its behavioral repertoire have begun to be identified. In this paper, we investigate the hierarchical structure of the nervous system through k-core decomposition and find it to be intimately related to the set of all known functional circuits. Our analysis also suggests a vital role for the lateral ganglion in processing information, providing an essential connection between the sensory and motor components of the C. elegans nervous system.
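k-core decomposition is available directly in standard graph libraries. The sketch below uses networkx on a random stand-in graph rather than the actual C. elegans wiring data.

```python
import networkx as nx

# Toy stand-in for the C. elegans wiring diagram (the real network has 302 neurons)
G = nx.erdos_renyi_graph(100, 0.08, seed=1)

core_number = nx.core_number(G)          # maximal k such that each node lies in the k-core
k_max = max(core_number.values())
innermost = nx.k_core(G, k=k_max)        # candidate "processing core" of the network
print(k_max, innermost.number_of_nodes())
```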
Attention to Hierarchical Level Influences Attentional Selection of Spatial Scale
ERIC Educational Resources Information Center
Flevaris, Anastasia V.; Bentin, Shlomo; Robertson, Lynn C.
2011-01-01
Ample evidence suggests that global perception may involve low spatial frequency (LSF) processing and that local perception may involve high spatial frequency (HSF) processing (Shulman, Sullivan, Gish, & Sakoda, 1986; Shulman & Wilson, 1987; Robertson, 1996). It is debated whether SF selection is a low-level mechanism associating global…
We studied the spatial and temporal patterns of decomposition of roots of a desert sub-shrub, a herbaceous annual, and four species of perennial grasses at several locations on nitrogen fertilized and unfertilized transects on a Chihuahuan Desert watershed for 3.5 years. There we...
Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes
NASA Astrophysics Data System (ADS)
Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt; Stuehn, Torsten
2017-11-01
Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a continual development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation, from its genesis. Here, we introduce the heterogeneous domain decomposition approach, which combines a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, the theoretical modeling and scaling laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also show the capabilities of the new approach by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and compared to the heterogeneous domain decomposition proposed in this work. These two systems comprise an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.
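The a priori rearrangement of subdomain walls can be illustrated in one dimension: given an estimated per-cell force-computation cost, walls are placed so that cumulative cost is balanced across ranks. The cost profile and domain count below are assumptions for illustration, not the scheme's actual cost model.

```python
import numpy as np

def rearrange_walls(cell_cost, n_domains):
    """Place subdomain walls so that cumulative estimated cost is balanced.

    cell_cost : per-cell force-computation cost estimate along one axis
    Returns interior wall positions as cell indices (n_domains - 1 walls).
    """
    cum = np.cumsum(cell_cost)
    targets = cum[-1] * np.arange(1, n_domains) / n_domains
    return np.searchsorted(cum, targets)

# Inhomogeneous system: an atomistic (expensive) region inside a coarse-grained box
cost = np.ones(1000)
cost[400:600] = 8.0                      # higher resolution => higher cost per cell
walls = rearrange_walls(cost, n_domains=4)
print(walls)                             # walls crowd around the expensive region
```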
NASA Astrophysics Data System (ADS)
Alexander, R. B.; Boyer, E. W.; Schwarz, G. E.; Smith, R. A.
2013-12-01
Estimating water and material stores and fluxes in watershed studies is frequently complicated by uncertainties in quantifying hydrological and biogeochemical effects of factors such as land use, soils, and climate. Although these process-related effects are commonly measured and modeled in separate catchments, researchers are especially challenged by their complexity across catchments and diverse environmental settings, leading to a poor understanding of how model parameters and prediction uncertainties vary spatially. To address these concerns, we illustrate the use of Bayesian hierarchical modeling techniques with a dynamic version of the spatially referenced watershed model SPARROW (SPAtially Referenced Regression On Watershed attributes). The dynamic SPARROW model is designed to predict streamflow and other water cycle components (e.g., evapotranspiration, soil and groundwater storage) for monthly varying hydrological regimes, using mechanistic functions, mass conservation constraints, and statistically estimated parameters. In this application, the model domain includes nearly 30,000 NHD (National Hydrologic Data) stream reaches and their associated catchments in the Susquehanna River Basin. We report the results of our comparisons of alternative models of varying complexity, including models with different explanatory variables as well as hierarchical models that account for spatial and temporal variability in model parameters and variance (error) components. The model errors are evaluated for changes with season and catchment size and correlations in time and space. The hierarchical models consist of a two-tiered structure in which climate forcing parameters are modeled as random variables, conditioned on watershed properties. Quantification of spatial and temporal variations in the hydrological parameters and model uncertainties in this approach leads to more efficient (lower variance) and less biased model predictions throughout the river network. Moreover, predictions of water-balance components are reported according to probabilistic metrics (e.g., percentiles, prediction intervals) that include both parameter and model uncertainties. These improvements in predictions of streamflow dynamics can inform the development of more accurate predictions of spatial and temporal variations in biogeochemical stores and fluxes (e.g., nutrients and carbon) in watersheds.
An operational modal analysis method in frequency and spatial domain
NASA Astrophysics Data System (ADS)
Wang, Tong; Zhang, Lingmi; Tamura, Yukio
2005-12-01
A frequency and spatial domain decomposition method (FSDD) for operational modal analysis (OMA) is presented in this paper, which is an extension of the complex mode indicator function (CMIF) method for experimental modal analysis (EMA). The theoretical background of the FSDD method is clarified. Singular value decomposition is adopted to separate the signal space from the noise space. Finally, an enhanced power spectrum density (PSD) is proposed to obtain more accurate modal parameters by curve fitting in the frequency domain. Moreover, a simulation case and an application case are used to validate this method.
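The core of the CMIF/FSDD idea, a singular value decomposition of the output spectral matrix at each frequency line, can be sketched as follows. The channel count, sampling rate and synthetic responses are placeholders, and the enhanced-PSD curve-fitting stage of FSDD is not shown.

```python
import numpy as np
from scipy.signal import csd

def cmif(responses, fs, nperseg=256):
    """Complex mode indicator function from multi-channel output-only data.

    responses : (n_channels, n_samples) array of measured responses.
    Returns frequencies and the singular values of the spectral matrix.
    """
    n_ch = responses.shape[0]
    f, _ = csd(responses[0], responses[0], fs=fs, nperseg=nperseg)
    G = np.zeros((len(f), n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, G[:, i, j] = csd(responses[i], responses[j], fs=fs, nperseg=nperseg)
    sv = np.linalg.svd(G, compute_uv=False)      # singular values per frequency line
    return f, sv                                  # peaks in sv[:, 0] indicate modes

rng = np.random.default_rng(2)
y = rng.standard_normal((4, 8192))                # stand-in for ambient-response data
freqs, singvals = cmif(y, fs=100.0)
```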
Hierarchical Goal Network Planning: Initial Results
2011-05-31
Structure and information in spatial segregation
2017-01-01
Ethnoracial residential segregation is a complex, multiscalar phenomenon with immense moral and economic costs. Modeling the structure and dynamics of segregation is a pressing problem for sociology and urban planning, but existing methods have limitations. In this paper, we develop a suite of methods, grounded in information theory, for studying the spatial structure of segregation. We first advance existing profile and decomposition methods by posing two related regionalization methods, which allow for profile curves with nonconstant spatial scale and decomposition analysis with nonarbitrary areal units. We then formulate a measure of local spatial scale, which may be used for both detailed, within-city analysis and intercity comparisons. These methods highlight detailed insights in the structure and dynamics of urban segregation that would be otherwise easy to miss or difficult to quantify. They are computationally efficient, applicable to a broad range of study questions, and freely available in open source software. PMID:29078323
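One widely used entropy-based segregation measure on which such information-theoretic decompositions build is Theil's information theory index. The sketch below computes it for a hypothetical two-group city; it is a generic illustration, not the paper's regionalization method.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a proportion vector (zero entries ignored)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def theil_H(counts):
    """Theil information theory segregation index.

    counts : (n_units, n_groups) population counts per areal unit and group.
    Ranges from 0 (no segregation) to 1 (complete segregation).
    """
    counts = np.asarray(counts, dtype=float)
    unit_tot = counts.sum(axis=1)
    T = counts.sum()
    E = entropy(counts.sum(axis=0) / T)                 # citywide entropy
    E_j = np.array([entropy(row / row.sum()) for row in counts])
    return np.sum(unit_tot * (E - E_j)) / (T * E)

# Hypothetical two-group city with four tracts
city = [[900, 100],
        [850, 150],
        [120, 880],
        [ 80, 920]]
print(round(theil_H(city), 3))
```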
Structure and information in spatial segregation.
Chodrow, Philip S
2017-10-31
Ethnoracial residential segregation is a complex, multiscalar phenomenon with immense moral and economic costs. Modeling the structure and dynamics of segregation is a pressing problem for sociology and urban planning, but existing methods have limitations. In this paper, we develop a suite of methods, grounded in information theory, for studying the spatial structure of segregation. We first advance existing profile and decomposition methods by posing two related regionalization methods, which allow for profile curves with nonconstant spatial scale and decomposition analysis with nonarbitrary areal units. We then formulate a measure of local spatial scale, which may be used for both detailed, within-city analysis and intercity comparisons. These methods highlight detailed insights in the structure and dynamics of urban segregation that would be otherwise easy to miss or difficult to quantify. They are computationally efficient, applicable to a broad range of study questions, and freely available in open source software. Published under the PNAS license.
Modeling spatial variation in avian survival and residency probabilities
Saracco, James F.; Royle, J. Andrew; DeSante, David F.; Gardner, Beth
2010-01-01
The importance of understanding spatial variation in processes driving animal population dynamics is widely recognized. Yet little attention has been paid to spatial modeling of vital rates. Here we describe a hierarchical spatial autoregressive model to provide spatially explicit year-specific estimates of apparent survival (phi) and residency (pi) probabilities from capture-recapture data. We apply the model to data collected on a declining bird species, Wood Thrush (Hylocichla mustelina), as part of a broad-scale bird-banding network, the Monitoring Avian Productivity and Survivorship (MAPS) program. The Wood Thrush analysis showed variability in both phi and pi among years and across space. Spatial heterogeneity in residency probability was particularly striking, suggesting the importance of understanding the role of transients in local populations. We found broad-scale spatial patterning in Wood Thrush phi and pi that lend insight into population trends and can direct conservation and research. The spatial model developed here represents a significant advance over approaches to investigating spatial pattern in vital rates that aggregate data at coarse spatial scales and do not explicitly incorporate spatial information in the model. Further development and application of hierarchical capture-recapture models offers the opportunity to more fully investigate spatiotemporal variation in the processes that drive population changes.
HiPS - Hierarchical Progressive Survey Version 1.0
NASA Astrophysics Data System (ADS)
Fernique, Pierre; Allen, Mark; Boch, Thomas; Donaldson, Tom; Durand, Daniel; Ebisawa, Ken; Michel, Laurent; Salgado, Jesus; Stoehr, Felix; Fernique, Pierre
2017-05-01
This document presents HiPS, a hierarchical scheme for the description, storage and access of sky survey data. The system is based on hierarchical tiling of sky regions at finer and finer spatial resolution which facilitates a progressive view of a survey, and supports multi-resolution zooming and panning. HiPS uses the HEALPix tessellation of the sky as the basis for the scheme and is implemented as a simple file structure with a direct indexing scheme that leads to practical implementations.
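A small sketch of the underlying tiling scheme, assuming the healpy package is available: a HiPS tile at order k is a HEALPix pixel in the NESTED scheme with nside = 2**k, so zooming in corresponds to indexing the same sky position at progressively deeper orders.

```python
import healpy as hp

def hips_tile_index(lon_deg, lat_deg, order):
    """HEALPix (NESTED) tile index of a sky position at a given HiPS order."""
    nside = 2 ** order
    return hp.ang2pix(nside, lon_deg, lat_deg, nest=True, lonlat=True)

# The same sky position mapped through progressively finer tiling levels
ra, dec = 83.63, 22.01        # example coordinates in degrees (hypothetical target)
for order in range(3, 8):
    print(order, hips_tile_index(ra, dec, order))
```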
De Sá Teixeira, Nuno Alexandre
2014-12-01
Given its conspicuous nature, gravity has been acknowledged by several research lines as a prime factor in structuring the spatial perception of one's environment. One such line of enquiry has focused on errors in spatial localization aimed at the vanishing location of moving objects - it has been systematically reported that humans mislocalize spatial positions forward, in the direction of motion (representational momentum) and downward in the direction of gravity (representational gravity). Moreover, spatial localization errors were found to evolve dynamically with time in a pattern congruent with an anticipated trajectory (representational trajectory). The present study attempts to ascertain the degree to which vestibular information plays a role in these phenomena. Human observers performed a spatial localization task while tilted to varying degrees and referring to the vanishing locations of targets moving along several directions. A Fourier decomposition of the obtained spatial localization errors revealed that although spatial errors were increased "downward" mainly along the body's longitudinal axis (idiotropic dominance), the degree of misalignment between the latter and physical gravity modulated the time course of the localization responses. This pattern is surmised to reflect increased uncertainty about the internal model when faced with conflicting cues regarding the perceived "downward" direction.
High-Dimensional Bayesian Geostatistics
Banerjee, Sudipto
2017-01-01
With the growing capabilities of Geographic Information Systems (GIS) and user-friendly software, statisticians today routinely encounter geographically referenced data containing observations from a large number of spatial locations and time points. Over the last decade, hierarchical spatiotemporal process models have become widely deployed statistical tools for researchers to better understand the complex nature of spatial and temporal variability. However, fitting hierarchical spatiotemporal models often involves expensive matrix computations with complexity increasing in cubic order for the number of spatial locations and temporal points. This renders such models unfeasible for large data sets. This article offers a focused review of two methods for constructing well-defined highly scalable spatiotemporal stochastic processes. Both these processes can be used as “priors” for spatiotemporal random fields. The first approach constructs a low-rank process operating on a lower-dimensional subspace. The second approach constructs a Nearest-Neighbor Gaussian Process (NNGP) that ensures sparse precision matrices for its finite realizations. Both processes can be exploited as a scalable prior embedded within a rich hierarchical modeling framework to deliver full Bayesian inference. These approaches can be described as model-based solutions for big spatiotemporal datasets. The models ensure that the algorithmic complexity is ~n floating point operations (flops) per iteration, where n is the number of spatial locations. We compare these methods and provide some insight into their methodological underpinnings. PMID:29391920
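The NNGP's sparsity comes from conditioning each location only on a small set of nearest previously ordered neighbors. The brute-force sketch below builds those conditioning sets; the ordering, neighbor count and coordinates are placeholders, and the subsequent construction of the sparse precision matrix is omitted.

```python
import numpy as np

def nngp_neighbor_sets(coords, m=10):
    """Nearest-neighbor conditioning sets for an NNGP.

    Locations are taken in their given order; location i conditions only on
    (at most) its m nearest neighbors among locations 0..i-1, which is what
    makes the precision matrix of the resulting joint density sparse.
    """
    n = coords.shape[0]
    neighbors = [np.array([], dtype=int)]
    for i in range(1, n):
        d = np.linalg.norm(coords[:i] - coords[i], axis=1)
        k = min(m, i)
        neighbors.append(np.argsort(d)[:k])
    return neighbors

rng = np.random.default_rng(11)
coords = rng.uniform(0, 100, size=(2000, 2))       # spatial locations
nsets = nngp_neighbor_sets(coords, m=10)
print(len(nsets), len(nsets[1500]))                # at most m "parents" per location
```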
Sponge-like silver obtained by decomposition of silver nitrate hexamethylenetetramine complex
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afanasiev, Pavel, E-mail: pavel.afanasiev@ircelyon.univ-lyon.fr
2016-07-15
Silver nitrate hexamethylenetetramine [Ag(NO3)·N4(CH2)6] coordination compound has been prepared via an aqueous route and characterized by chemical analysis, XRD and electron microscopy. Decomposition of [Ag(NO3)·N4(CH2)6] under hydrogen and under inert atmosphere has been studied by thermal analysis and mass spectrometry. Thermal decomposition of [Ag(NO3)·N4(CH2)6] proceeds in the range 200–250 °C as a self-propagating rapid redox process accompanied by the release of multiple gases. The decomposition leads to the formation of sponge-like silver having a hierarchical open pore system with pore sizes spanning from 10 µm to 10 nm. The as-obtained silver sponges exhibited favorable activity toward H2O2 electrochemical reduction, making them potentially interesting as non-enzymatic hydrogen peroxide sensors. - Graphical abstract: Thermal decomposition of the silver nitrate hexamethylenetetramine coordination compound [Ag(NO3)·N4(CH2)6] leads to sponge-like silver that possesses an open porous structure and demonstrates interesting properties as an electrochemical hydrogen peroxide sensor. - Highlights: • The [Ag(NO3)·N4(CH2)6] orthorhombic phase was prepared and characterized. • Decomposition of [Ag(NO3)·N4(CH2)6] leads to a metallic silver sponge with open porosity. • The Ag sponge showed promising properties as a material for hydrogen peroxide sensors.
Principles of Temporal Processing Across the Cortical Hierarchy.
Himberger, Kevin D; Chien, Hsiang-Yun; Honey, Christopher J
2018-05-02
The world is richly structured on multiple spatiotemporal scales. In order to represent spatial structure, many machine-learning models repeat a set of basic operations at each layer of a hierarchical architecture. These iterated spatial operations - including pooling, normalization and pattern completion - enable these systems to recognize and predict spatial structure while remaining robust to changes in the spatial scale, contrast and noisiness of the input signal. Because our brains also process temporal information that is rich and occurs across multiple time scales, might the brain employ an analogous set of operations for temporal information processing? Here we define a candidate set of temporal operations, and we review evidence that they are implemented in the mammalian cerebral cortex in a hierarchical manner. We conclude that multiple consecutive stages of cortical processing can be understood to perform temporal pooling, temporal normalization and temporal pattern completion. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Freyre-González, Julio A; Alonso-Pavón, José A; Treviño-Quintanilla, Luis G; Collado-Vides, Julio
2008-10-27
Previous studies have used different methods in an effort to extract the modular organization of transcriptional regulatory networks. However, these approaches are not natural, as they try to cluster strongly connected genes into a module or locate known pleiotropic transcription factors in lower hierarchical layers. Here, we unravel the transcriptional regulatory network of Escherichia coli by separating it into its key elements, thus revealing its natural organization. We also present a mathematical criterion, based on the topological features of the transcriptional regulatory network, to classify the network elements into one of two possible classes: hierarchical or modular genes. We found that modular genes are clustered into physiologically correlated groups validated by a statistical analysis of the enrichment of the functional classes. Hierarchical genes encode transcription factors responsible for coordinating module responses based on general interest signals. Hierarchical elements correlate highly with the previously studied global regulators, suggesting that this could be the first mathematical method to identify global regulators. We identified a new element in transcriptional regulatory networks never described before: intermodular genes. These are structural genes that integrate, at the promoter level, signals coming from different modules, and therefore from different physiological responses. Using the concept of pleiotropy, we have reconstructed the hierarchy of the network and discuss the role of feedforward motifs in shaping the hierarchical backbone of the transcriptional regulatory network. This study sheds new light on the design principles underpinning the organization of transcriptional regulatory networks, showing a novel nonpyramidal architecture composed of independent modules globally governed by hierarchical transcription factors, whose responses are integrated by intermodular genes.
Lindegren, Martin; Denker, Tim Spaanheden; Floeter, Jens; Fock, Heino O.; Sguotti, Camilla; Stäbler, Moritz; Otto, Saskia A.; Möllmann, Christian
2017-01-01
Understanding spatio-temporal dynamics of biotic communities containing large numbers of species is crucial to guide ecosystem management and conservation efforts. However, traditional approaches usually focus on studying community dynamics either in space or in time, often failing to fully account for interlinked spatio-temporal changes. In this study, we demonstrate and promote the use of tensor decomposition for disentangling spatio-temporal community dynamics in long-term monitoring data. Tensor decomposition builds on traditional multivariate statistics (e.g. Principal Component Analysis) but extends it to multiple dimensions. This extension allows for the synchronized study of multiple ecological variables measured repeatedly in time and space. We applied this comprehensive approach to explore the spatio-temporal dynamics of 65 demersal fish species in the North Sea, a marine ecosystem strongly altered by human activities and climate change. Our case study demonstrates how tensor decomposition can successfully (i) characterize the main spatio-temporal patterns and trends in species abundances, (ii) identify sub-communities of species that share similar spatial distribution and temporal dynamics, and (iii) reveal external drivers of change. Our results revealed a strong spatial structure in fish assemblages persistent over time and linked to differences in depth, primary production and seasonality. Furthermore, we simultaneously characterized important temporal distribution changes related to the low frequency temperature variability inherent in the Atlantic Multidecadal Oscillation. Finally, we identified six major sub-communities composed of species sharing similar spatial distribution patterns and temporal dynamics. Our case study demonstrates the application and benefits of using tensor decomposition for studying complex community data sets usually derived from large-scale monitoring programs. PMID:29136658
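The tensor approach described above can be illustrated with a minimal three-way CP (PARAFAC) decomposition fitted by alternating least squares. This is a generic NumPy sketch assuming a species × sites × years abundance array; the function name, rank and toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cp_als(X, rank, n_iter=200, seed=0):
    """Minimal 3-way CP (PARAFAC) decomposition by alternating least squares.

    X : ndarray of shape (I, J, K), e.g. species x sites x years abundances.
    Returns factor matrices A (I x R), B (J x R), C (K x R).
    """
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iter):
        # Each update solves a linear least-squares problem with the other
        # two factor matrices held fixed (normal-equation form).
        A = np.einsum('ijk,jr,kr->ir', X, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Toy usage: a rank-2 synthetic "abundance" tensor plus a little noise.
rng = np.random.default_rng(1)
A0, B0, C0 = rng.random((30, 2)), rng.random((12, 2)), rng.random((20, 2))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0) + 0.01 * rng.standard_normal((30, 12, 20))
A, B, C = cp_als(X, rank=2)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print('relative reconstruction error:', np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```

The columns of the three factor matrices play the role of the spatial patterns, temporal trends and species loadings discussed in the abstract.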
Repeated decompositions reveal the stability of infomax decomposition of fMRI data
Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott
2010-01-01
In this study, we decomposed 12 fMRI data sets from six subjects each 101 times using the infomax algorithm. The first decomposition was taken as a reference decomposition; the others were used to form a component matrix of 100 by 100 components. Equivalence relations between components in this matrix, defined as maximum spatial correlations to the components of the reference decomposition, were found by the Hungarian sorting method and used to form 100 equivalence classes for each data set. We then tested the reproducibility of the matched components in the equivalence classes using uncertainty measures based on component distributions, time courses, and ROC curves. Infomax ICA rarely failed to derive nearly the same components in different decompositions. Very few components per data set were poorly reproduced, even using vector angle uncertainty measures stricter than correlation and detection theory measures. PMID:17281453
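A minimal sketch of the matching step described above: components from two decompositions are paired by maximal absolute spatial correlation using the Hungarian algorithm, assuming each decomposition is stored as an (n_components, n_voxels) array. The helper name and the toy data are hypothetical, not taken from the study.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_components(ref_maps, new_maps):
    """Pair components of two decompositions by maximal |spatial correlation|.

    ref_maps, new_maps : arrays of shape (n_components, n_voxels).
    Returns (pairing, correlations), where pairing[i] is the index in new_maps
    matched to component i of ref_maps.
    """
    n = ref_maps.shape[0]
    # Cross-correlation matrix between all pairs of component maps.
    r = np.corrcoef(ref_maps, new_maps)[:n, n:]
    # The Hungarian algorithm maximizes total |correlation| (minimize the negative).
    rows, cols = linear_sum_assignment(-np.abs(r))
    return cols, np.abs(r[rows, cols])

# Toy usage: the second decomposition is a permuted, sign-flipped copy plus noise.
rng = np.random.default_rng(0)
ref = rng.standard_normal((10, 500))
perm = rng.permutation(10)
new = -ref[perm] + 0.1 * rng.standard_normal((10, 500))
pairing, corrs = match_components(ref, new)
print(np.all(perm[pairing] == np.arange(10)), round(float(corrs.min()), 3))
```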
Graves, T.A.; Kendall, Katherine C.; Royle, J. Andrew; Stetz, J.B.; Macleod, A.C.
2011-01-01
Few studies link habitat to grizzly bear Ursus arctos abundance and these have not accounted for the variation in detection or spatial autocorrelation. We collected and genotyped bear hair in and around Glacier National Park in northwestern Montana during the summer of 2000. We developed a hierarchical Markov chain Monte Carlo model that extends the existing occupancy and count models by accounting for (1) spatially explicit variables that we hypothesized might influence abundance; (2) separate sub-models of detection probability for two distinct sampling methods (hair traps and rub trees) targeting different segments of the population; (3) covariates to explain variation in each sub-model of detection; (4) a conditional autoregressive term to account for spatial autocorrelation; (5) weights to identify most important variables. Road density and per cent mesic habitat best explained variation in female grizzly bear abundance; spatial autocorrelation was not supported. More female bears were predicted in places with lower road density and with more mesic habitat. Detection rates of females increased with rub tree sampling effort. Road density best explained variation in male grizzly bear abundance and spatial autocorrelation was supported. More male bears were predicted in areas of low road density. Detection rates of males increased with rub tree and hair trap sampling effort and decreased over the sampling period. We provide a new method to (1) incorporate multiple detection methods into hierarchical models of abundance; (2) determine whether spatial autocorrelation should be included in final models. Our results suggest that the influence of landscape variables is consistent between habitat selection and abundance in this system.
Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features.
Lu, Xiaoqiang; Chen, Yaxiong; Li, Xuelong
Hashing has been an important and effective technology in image retrieval due to its computational efficiency and fast search speed. Traditional hashing methods usually learn hash functions to obtain binary codes by exploiting hand-crafted features, which cannot optimally represent the information in the sample. Recently, deep learning methods have achieved better performance, since deep learning architectures can learn more effective image representation features. However, these methods only use semantic features to generate hash codes by shallow projection and ignore texture details. In this paper, we propose a novel hashing method, namely hierarchical recurrent neural hashing (HRNH), that exploits a hierarchical recurrent neural network to generate effective hash codes. This paper makes three contributions. First, a deep hashing method is proposed to extensively exploit both spatial details and semantic information, in which we leverage hierarchical convolutional features to construct an image pyramid representation. Second, the proposed deep network can directly exploit convolutional feature maps as input to preserve their spatial structure. Finally, we propose a new loss function that considers the quantization error of binarizing the continuous embeddings into discrete binary codes, and simultaneously maintains the semantic similarity and balanceable property of the hash codes. Experimental results on four widely used data sets demonstrate that the proposed HRNH achieves superior performance over other state-of-the-art hashing methods.
Sari C. Saunders; Jiquan Chen; Thomas D. Drummer; Thomas R. Crow; Kimberley D. Brosofske; Eric J. Gustafson
2002-01-01
Understanding landscape organization across scales is vital for determining the impacts of management and retaining structurally and functionally diverse ecosystems. We studied the relationships of a functional variable, decomposition, to microclimatic, vegetative and structural features at multiple scales in two distinct landscapes of northern Wisconsin, USA. We hoped...
Fine root dynamics across a chronosequence of upland temperate deciduous forests
Travis W. Idol; Phillip E. Pope; Felix Jr. Ponder
2000-01-01
Following a major disturbance event in forests that removes most of the standing vegetation, patterns of fine root growth, mortality, and decomposition may be altered from the pre-disturbance conditions. The objective of this study was to describe the changes in the seasonal and spatial dynamics of fine root growth, mortality, and decomposition that occur following...
Immobilization and mineralization of N and P by heterotrophic microbes during leaf decomposition
Beth Cheever; Erika Kratzer; Jackson Webster
2012-01-01
According to theory, the rate and stoichiometry of microbial mineralization depend, in part, on nutrient availability. For microbes associated with leaves in streams, nutrients are available from both the water column and the leaf. Therefore, microbial nutrient cycling may change with nutrient availability and during leaf decomposition. We explored spatial and temporal...
García-Palacios, Pablo; Maestre, Fernando T; Kattge, Jens; Wall, Diana H
2013-08-01
Climate and litter quality have been identified as major drivers of litter decomposition at large spatial scales. However, the role played by soil fauna remains largely unknown, despite its importance for litter fragmentation and microbial activity. We synthesised litterbag studies to quantify the effect sizes of soil fauna on litter decomposition rates at the global and biome scales, and to assess how climate, litter quality and soil fauna interact to determine such rates. Soil fauna consistently enhanced litter decomposition at both global and biome scales (average increment ~ 37%). However, climate and litter quality differently modulated the effects of soil fauna on decomposition rates between biomes, from climate-driven biomes to those where climate effects were mediated by changes in litter quality. Our results advocate for the inclusion of biome-specific soil fauna effects on litter decomposition as a means to reduce the unexplained variation in large-scale decomposition models. © 2013 John Wiley & Sons Ltd/CNRS.
Zhang, Rui; Zhou, Tingting; Wang, Lili; Zhang, Tong
2018-03-21
Highly sensitive and stable gas sensors have attracted much attention because they are key to innovations in the fields of environment, health, energy savings and security. Sensing materials, which determine practical sensing performance, are the crucial components of gas sensors. Metal-organic frameworks (MOFs) are considered alluring sensing materials for gas sensors because they possess high specific surface areas, unique morphologies, abundant metal sites, and functional linkers. Herein, four kinds of porous hierarchical Co 3 O 4 structures have been selectively prepared by optimizing the thermal decomposition conditions (temperature, rate, and atmosphere) using ZIF-67 as the precursor, obtained by coprecipitation of a cobalt salt and 2-methylimidazole in methanol. These hierarchical Co 3 O 4 structures, with controllable cross-linked channels, meso-/micropores, and adjustable surface area, are efficient catalytic materials for gas sensing. Benefiting from these structural advantages, core-shell and porous core-shell Co 3 O 4 exhibit enhanced sensing performance toward acetone gas compared with porous popcorn and nanoparticle Co 3 O 4 . These novel MOF-templated Co 3 O 4 hierarchical structures are therefore promising sensing materials for the development of low-temperature operating gas sensors.
NASA Technical Reports Server (NTRS)
Caines, P. E.
1999-01-01
The work in this research project has been focused on the construction of a hierarchical hybrid control theory which is applicable to flight management systems. The motivation and underlying philosophical position for this work has been that the scale, inherent complexity and the large number of agents (aircraft) involved in an air traffic system imply that a hierarchical modelling and control methodology is required for its management and real time control. In the current work the complex discrete or continuous state space of a system with a small number of agents is aggregated in such a way that discrete (finite state machine or supervisory automaton) controlled dynamics are abstracted from the system's behaviour. High level control may then be either directly applied at this abstracted level, or, if this is in itself of significant complexity, further layers of abstractions may be created to produce a system with an acceptable degree of complexity at each level. By the nature of this construction, high level commands are necessarily realizable at lower levels in the system.
Hierarchical Model for the Analysis of Scattering Data of Complex Materials
Oyedele, Akinola; Mcnutt, Nicholas W.; Rios, Orlando; ...
2016-05-16
Interpreting the results of scattering data for complex materials with a hierarchical structure in which at least one phase is amorphous presents a significant challenge. Often the interpretation relies on the use of large-scale molecular dynamics (MD) simulations, in which a structure is hypothesized and from which a radial distribution function (RDF) can be extracted and directly compared against an experimental RDF. This computationally intensive approach presents a bottleneck in the efficient characterization of the atomic structure of new materials. Here, we propose and demonstrate an approach for a hierarchical decomposition of the RDF in which MD simulations are replaced by a combination of tractable models and theory at the atomic scale and the mesoscale, which when combined yield the RDF. We apply the procedure to a carbon composite, in which graphitic nanocrystallites are distributed in an amorphous domain. We compare the model with the RDF from both MD simulation and neutron scattering data. Ultimately, this procedure is applicable for understanding the fundamental processing-structure-property relationships in complex magnetic materials.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Xue; Niu, Tianye; Zhu, Lei, E-mail: leizhu@gatech.edu
2014-05-15
Purpose: Dual-energy CT (DECT) is being increasingly used for its capability of material decomposition and energy-selective imaging. A generic problem of DECT, however, is that the decomposition process is unstable in the sense that the relative magnitude of decomposed signals is reduced due to signal cancellation while the image noise is accumulating from the two CT images of independent scans. Direct image decomposition, therefore, leads to severe degradation of signal-to-noise ratio on the resultant images. Existing noise suppression techniques are typically implemented in DECT with the procedures of reconstruction and decomposition performed independently, which do not explore the statistical properties of decomposed images during the reconstruction for noise reduction. In this work, the authors propose an iterative approach that combines the reconstruction and the signal decomposition procedures to minimize the DECT image noise without noticeable loss of resolution. Methods: The proposed algorithm is formulated as an optimization problem, which balances the data fidelity and total variation of decomposed images in one framework, and the decomposition step is carried out iteratively together with reconstruction. The noise in the CT images from the proposed algorithm becomes well correlated even though the noise of the raw projections is independent between the two CT scans. Due to this feature, the proposed algorithm avoids noise accumulation during the decomposition process. The authors evaluate the method performance on noise suppression and spatial resolution using phantom studies and compare the algorithm with conventional denoising approaches as well as combined iterative reconstruction methods with different forms of regularization. Results: On the Catphan 600 phantom, the proposed method outperforms the existing denoising methods on preserving spatial resolution at the same level of noise suppression, i.e., a reduction of noise standard deviation by one order of magnitude. This improvement is mainly attributed to the high noise correlation in the CT images reconstructed by the proposed algorithm. Iterative reconstruction using different regularization, including quadratic or q-generalized Gaussian Markov random field regularization, achieves similar noise suppression from high noise correlation. However, the proposed TV regularization obtains a better edge preserving performance. Studies of electron density measurement also show that our method reduces the average estimation error from 9.5% to 7.1%. On the anthropomorphic head phantom, the proposed method suppresses the noise standard deviation of the decomposed images by a factor of ∼14 without blurring the fine structures in the sinus area. Conclusions: The authors propose a practical method for DECT imaging reconstruction, which combines the image reconstruction and material decomposition into one optimization framework. Compared to the existing approaches, our method achieves a superior performance on DECT imaging with respect to decomposition accuracy, noise reduction, and spatial resolution.
1993-03-30
Massachusetts Institute of Technology, Cambridge, MA 02139. The decomposition of polymeric SiC precursors (polysilanes) is described. The precursors were prepared as soluble polymeric solids, and pyrolysis of these polymers in argon yielded a TiC/Al2O3 composite at 1250 °C, illustrating an approach for synthesizing Al2O3/SiC and TiC/Al2O3 composites.
Hierarchical Porous Carbon Spheres for High-Performance Na-O2 Batteries.
Sun, Bing; Kretschmer, Katja; Xie, Xiuqiang; Munroe, Paul; Peng, Zhangquan; Wang, Guoxiu
2017-12-01
As a new family member of room-temperature aprotic metal-O 2 batteries, Na-O 2 batteries are attracting growing attention because of their relatively high theoretical specific energy and particularly their uncompromised round-trip efficiency. Here, a hierarchical porous carbon sphere (PCS) electrode with outstanding properties for realizing Na-O 2 batteries with excellent electrochemical performance is reported. The controlled porosity of the PCS electrode, with macropores formed between PCSs and nanopores inside each PCS, enables effective formation/decomposition of NaO 2 by facilitating the electrolyte impregnation and oxygen diffusion to the inner part of the oxygen electrode. In addition, the discharge product of NaO 2 is deposited on the surface of individual PCSs with an unusual conformal film-like morphology, which can be more easily decomposed than the commonly observed microsized NaO 2 cubes in Na-O 2 batteries. A combination of coulometry, X-ray diffraction, and in situ differential electrochemical mass spectrometry provides compelling evidence that the operation of the PCS-based Na-O 2 battery is underpinned by the formation and decomposition of NaO 2 . This work demonstrates that employing nanostructured carbon materials to control the porosity, pore-size distribution of the oxygen electrodes, and the morphology of the discharged NaO 2 is a promising strategy to develop high-performance Na-O 2 batteries. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Song, Zhichao; Ge, Yuanzheng; Luo, Lei; Duan, Hong; Qiu, Xiaogang
2015-12-01
Social contact between individuals is the chief factor in airborne epidemic transmission through a crowd. Social contact networks, which describe the contact relationships among individuals, typically exhibit overlapping communities, hierarchical structure and spatial correlation. We find that the traditional global targeted immunization strategy loses its superiority in controlling epidemic propagation in social contact networks with modular and hierarchical structure. Therefore, we propose a hierarchical targeted immunization strategy to address this problem. In this novel strategy, the importance of the hierarchical structure is considered. Transmission control experiments for influenza H1N1 are carried out based on a modular and hierarchical network model. The results indicate that the hierarchical structure of the network is more critical than the degrees of the immunized targets and that the modular network layer is the most important for controlling epidemic propagation. Finally, the efficacy and stability of this novel immunization strategy are validated as well.
Aerospace engineering design by systematic decomposition and multilevel optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Barthelemy, J. F. M.; Giles, G. L.
1984-01-01
A method is described for the systematic analysis and optimization of large engineering systems by decomposition of a large task into a set of smaller subtasks that are solved concurrently. The subtasks may be arranged in hierarchical levels. Analyses are carried out in each subtask using inputs received from other subtasks, and are followed by optimizations carried out from the bottom up. Each optimization at the lower levels is augmented by an analysis of its sensitivity to the inputs received from other subtasks, to account for the couplings among the subtasks in a formal manner. The analysis and optimization operations alternate iteratively until they converge to a system design whose performance is maximized with all constraints satisfied. The method, which is still under development, is tentatively validated by test cases in structural applications and an aircraft configuration optimization.
Towards Interactive Construction of Topical Hierarchy: A Recursive Tensor Decomposition Approach
Wang, Chi; Liu, Xueqing; Song, Yanglei; Han, Jiawei
2015-01-01
Automatic construction of user-desired topical hierarchies over large volumes of text data is a highly desirable but challenging task. This study proposes to give users freedom to construct topical hierarchies via interactive operations such as expanding a branch and merging several branches. Existing hierarchical topic modeling techniques are inadequate for this purpose because (1) they cannot consistently preserve the topics when the hierarchy structure is modified; and (2) the slow inference prevents swift response to user requests. In this study, we propose a novel method, called STROD, that allows efficient and consistent modification of topic hierarchies, based on a recursive generative model and a scalable tensor decomposition inference algorithm with theoretical performance guarantee. Empirical evaluation shows that STROD reduces the runtime of construction by several orders of magnitude, while generating consistent and quality hierarchies. PMID:26705505
An Active Learning Framework for Hyperspectral Image Classification Using Hierarchical Segmentation
NASA Technical Reports Server (NTRS)
Zhang, Zhou; Pasolli, Edoardo; Crawford, Melba M.; Tilton, James C.
2015-01-01
Augmenting spectral data with spatial information for image classification has recently gained significant attention, as classification accuracy can often be improved by extracting spatial information from neighboring pixels. In this paper, we propose a new framework in which active learning (AL) and hierarchical segmentation (HSeg) are combined for spectral-spatial classification of hyperspectral images. The spatial information is extracted from a best segmentation obtained by pruning the HSeg tree using a new supervised strategy. The best segmentation is updated at each iteration of the AL process, thus taking advantage of informative labeled samples provided by the user. The proposed strategy incorporates spatial information in two ways: 1) concatenating the extracted spatial features and the original spectral features into a stacked vector and 2) extending the training set using a self-learning-based semi-supervised learning (SSL) approach. Finally, the two strategies are combined within an AL framework. The proposed framework is validated with two benchmark hyperspectral datasets. Higher classification accuracies are obtained by the proposed framework with respect to five other state-of-the-art spectral-spatial classification approaches. Moreover, the effectiveness of the proposed pruning strategy is also demonstrated relative to the approaches based on a fixed segmentation.
ERIC Educational Resources Information Center
Casey, Beth M.; Lombardi, Caitlin McPherran; Pollock, Amanda; Fineman, Bonnie; Pezaris, Elizabeth
2017-01-01
This study investigated longitudinal pathways leading from early spatial skills in first-grade girls to their fifth-grade analytical math reasoning abilities (N = 138). First-grade assessments included spatial skills, verbal skills, addition/subtraction skills, and frequency of choice of a decomposition or retrieval strategy on the…
Pseudospectral reverse time migration based on wavefield decomposition
NASA Astrophysics Data System (ADS)
Du, Zengli; Liu, Jianjun; Xu, Feng; Li, Yongzhang
2017-05-01
The accuracy of seismic numerical simulations and the effectiveness of imaging conditions are important in reverse time migration studies. Using the pseudospectral method, the precision of the calculated spatial derivative of the seismic wavefield can be improved, increasing the vertical resolution of images. Low-frequency background noise, generated by the zero-lag cross-correlation of mismatched forward-propagated and backward-propagated wavefields at the impedance interfaces, can be eliminated effectively by using the imaging condition based on the wavefield decomposition technique. The computation complexity can be reduced when imaging is performed in the frequency domain. Since the Fourier transformation in the z-axis may be derived directly as one of the intermediate results of the spatial derivative calculation, the computation load of the wavefield decomposition can be reduced, improving the computation efficiency of imaging. Comparison of the results for a pulse response in a constant-velocity medium indicates that, compared with the finite difference method, the peak frequency of the Ricker wavelet can be increased by 10-15 Hz for avoiding spatial numerical dispersion, when the second-order spatial derivative of the seismic wavefield is obtained using the pseudospectral method. The results for the SEG/EAGE and Sigsbee2b models show that the signal-to-noise ratio of the profile and the imaging quality of the boundaries of the salt dome migrated using the pseudospectral method are better than those obtained using the finite difference method.
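As a rough illustration of the pseudospectral idea, the sketch below computes a second spatial derivative by multiplying the Fourier transform of a smooth pulse by (ik)^2 and compares it with a second-order centred finite difference. It is a generic NumPy example rather than the migration code, and the grid and pulse parameters are arbitrary assumptions.

```python
import numpy as np

# Second spatial derivative of a 1-D "wavefield" sample u(z) computed two ways:
# pseudospectrally (multiply by (ik)^2 in the wavenumber domain) and with a
# second-order centred finite difference, to illustrate the accuracy gap.
n, L = 256, 2000.0                      # grid points, model depth in metres
dz = L / n
z = np.arange(n) * dz
a = 80.0                                # pulse width
u = np.exp(-((z - L / 2) / a) ** 2)     # smooth pulse, ~0 at the boundaries

k = 2.0 * np.pi * np.fft.fftfreq(n, d=dz)              # angular wavenumbers
d2u_spec = np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(u)))

d2u_fd = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dz ** 2

# Analytic second derivative of the Gaussian pulse for reference.
d2u_exact = (4.0 * ((z - L / 2) / a ** 2) ** 2 - 2.0 / a ** 2) * u
print('pseudospectral max error:', np.max(np.abs(d2u_spec - d2u_exact)))
print('finite-difference max error:', np.max(np.abs(d2u_fd - d2u_exact)))
```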
Ranging through Gabor logons-a consistent, hierarchical approach.
Chang, C; Chatterjee, S
1993-01-01
In this work, the correspondence problem in stereo vision is handled by matching two sets of dense feature vectors. Inspired by biological evidence, these feature vectors are generated by a correlation between a bank of Gabor sensors and the intensity image. The sensors consist of two-dimensional Gabor filters at various scales (spatial frequencies) and orientations, which bear close resemblance to the receptive field profiles of simple V1 cells in visual cortex. A hierarchical, stochastic relaxation method is then used to obtain the dense stereo disparities. Unlike traditional hierarchical methods for stereo, feature based hierarchical processing yields consistent disparities. To avoid false matchings due to static occlusion, a dual matching, based on the imaging geometry, is used.
Gray, B.R.; Haro, R.J.; Rogala, J.T.; Sauer, J.S.
2005-01-01
1. Macroinvertebrate count data often exhibit nested or hierarchical structure. Examples include multiple measurements along each of a set of streams, and multiple synoptic measurements from each of a set of ponds. With data exhibiting hierarchical structure, outcomes at both sampling (e.g. within-stream) and aggregated (e.g. stream) scales are often of interest. Unfortunately, methods for modelling hierarchical count data have received little attention in the ecological literature. 2. We demonstrate the use of hierarchical count models using fingernail clam (Family: Sphaeriidae) count data and habitat predictors derived from sampling and aggregated spatial scales. The sampling scale corresponded to that of a standard Ponar grab (0.052 m(2)) and the aggregated scale to impounded and backwater regions within 38-197 km reaches of the Upper Mississippi River. Impounded and backwater regions were resampled annually for 10 years. Consequently, measurements on clams were nested within years. Counts were treated as negative binomial random variates, and means from each resampling event as random departures from the impounded and backwater region grand means. 3. Clam models were improved by the addition of covariates that varied at both the sampling and regional scales. Substrate composition varied at the sampling scale and was associated with model improvements, and reductions (for a given mean) in variance at the sampling scale. Inorganic suspended solids (ISS) levels, measured in the summer preceding sampling, also yielded model improvements and were associated with reductions in variances at the regional rather than sampling scales. ISS levels were negatively associated with mean clam counts. 4. Hierarchical models allow hierarchically structured data to be modelled without ignoring information specific to levels of the hierarchy. In addition, information at each hierarchical level may be modelled as functions of covariates that themselves vary by and within levels. As a result, hierarchical models provide researchers and resource managers with a method for modelling hierarchical data that explicitly recognises both the sampling design and the information contained in the corresponding data.
ERIC Educational Resources Information Center
Matthews, Allison Jane; Martin, Frances Heritage
2009-01-01
Previous research suggests a relationship between spatial attention and phonological decoding in developmental dyslexia. The aim of this study was to examine differences between good and poor phonological decoders in the allocation of spatial attention to global and local levels of hierarchical stimuli. A further aim was to investigate the…
A hierarchical structure for automatic meshing and adaptive FEM analysis
NASA Technical Reports Server (NTRS)
Kela, Ajay; Saxena, Mukul; Perucchio, Renato
1987-01-01
A new algorithm for generating automatically, from solid models of mechanical parts, finite element meshes that are organized as spatially addressable quaternary trees (for 2-D work) or octal trees (for 3-D work) is discussed. Because such meshes are inherently hierarchical as well as spatially addressable, they permit efficient substructuring techniques to be used for both global analysis and incremental remeshing and reanalysis. The global and incremental techniques are summarized and some results from an experimental closed loop 2-D system in which meshing, analysis, error evaluation, and remeshing and reanalysis are done automatically and adaptively are presented. The implementation of 3-D work is briefly discussed.
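A minimal sketch of the kind of spatially addressable quaternary (quad) tree described above: cells are subdivided recursively wherever they contain points sampled from a part boundary, up to a maximum depth. The refinement criterion, class names and toy geometry are illustrative assumptions, not the paper's algorithm.

```python
import math
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class QuadCell:
    """One cell of a spatially addressable quadtree (2-D)."""
    x: float          # lower-left corner
    y: float
    size: float
    depth: int
    children: Optional[List["QuadCell"]] = None

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px < self.x + self.size and self.y <= py < self.y + self.size

    def refine(self, points, max_depth=6):
        """Subdivide recursively wherever the cell contains a boundary point."""
        if self.depth >= max_depth or not any(self.contains(px, py) for px, py in points):
            return
        h = self.size / 2.0
        self.children = [QuadCell(self.x + dx * h, self.y + dy * h, h, self.depth + 1)
                         for dx in (0, 1) for dy in (0, 1)]
        for child in self.children:
            child.refine(points, max_depth)

    def leaves(self):
        if self.children is None:
            return [self]
        return [leaf for child in self.children for leaf in child.leaves()]

# Toy usage: refine toward a quarter-circle "part boundary" inside a unit square.
boundary = [(math.cos(t), math.sin(t)) for t in [i * math.pi / 200 for i in range(101)]]
root = QuadCell(0.0, 0.0, 1.0, 0)
root.refine(boundary, max_depth=5)
print('leaf cells:', len(root.leaves()))
```

Because every cell knows its corner and size, leaf cells are directly addressable in space, which is the property the adaptive remeshing scheme relies on.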
Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt
Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation and from its genesis. Here, we introduce the heterogeneous domain decomposition approach, which is a combination of a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, theoretical modeling and scaling laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also show the capabilities of the new approach by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and used for this comparison: an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.
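A minimal one-dimensional sketch in the spirit of the heterogeneity-sensitive decomposition described above: subdomain walls are placed so that an estimated per-particle cost, rather than the raw particle count, is balanced across domains. The cost model, function name and toy data are assumptions for illustration; a production scheme would work in three dimensions and rearrange walls dynamically during the run.

```python
import numpy as np

def place_walls(positions, cost, n_domains, box_length):
    """Place subdomain walls along x so each domain gets ~equal estimated work.

    positions : particle x-coordinates in [0, box_length)
    cost      : per-particle work estimate (e.g. higher in the fine-grained region)
    Returns wall positions, including the two box edges.
    """
    order = np.argsort(positions)
    x, w = positions[order], cost[order]
    cum = np.cumsum(w)
    targets = np.arange(1, n_domains) * cum[-1] / n_domains
    # Wall i sits at the particle where cumulative work first passes target i.
    idx = np.searchsorted(cum, targets)
    return np.concatenate(([0.0], x[idx], [box_length]))

# Toy usage: the left half of the box is "high resolution" (4x per-particle cost),
# so the walls crowd into the expensive left half.
rng = np.random.default_rng(2)
pos = rng.uniform(0.0, 10.0, size=20000)
cost = np.where(pos < 5.0, 4.0, 1.0)
print(np.round(place_walls(pos, cost, n_domains=4, box_length=10.0), 2))
```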
Yu, Wenxi; Liu, Yang; Ma, Zongwei; Bi, Jun
2017-08-01
Using satellite-based aerosol optical depth (AOD) measurements and statistical models to estimate ground-level PM 2.5 is a promising way to fill in areas that are not covered by ground PM 2.5 monitors. The statistical models used in previous studies are primarily Linear Mixed Effects (LME) and Geographically Weighted Regression (GWR) models. In this study, we developed a new regression model between PM 2.5 and AOD using Gaussian processes in a Bayesian hierarchical setting. Gaussian processes model the stochastic nature of the spatial random effects, where the mean surface and the covariance function are specified. The spatial stochastic process is incorporated under the Bayesian hierarchical framework to explain the variation of PM 2.5 concentrations together with other factors, such as AOD, spatial and non-spatial random effects. We evaluate the results of our model and compare them with those of other, conventional statistical models (GWR and LME) by within-sample model fitting and out-of-sample validation (cross validation, CV). The results show that our model achieves a CV R 2 of 0.81, reflecting higher accuracy than GWR and LME (0.74 and 0.48, respectively). Our results indicate that Gaussian process models have the potential to improve the accuracy of satellite-based PM 2.5 estimates.
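A simplified, non-Bayesian stand-in for the model described above: a Gaussian-process regression of PM 2.5 on AOD and spatial coordinates using scikit-learn, fitted to synthetic data. It illustrates the idea of a spatial stochastic process absorbing residual variation; it is not the authors' hierarchical formulation, and all data and kernel settings are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

# Synthetic stand-in data: PM2.5 depends linearly on AOD plus a smooth spatial field.
rng = np.random.default_rng(0)
n = 300
lonlat = rng.uniform(0.0, 1.0, size=(n, 2))
aod = rng.uniform(0.1, 1.5, size=n)
spatial = np.sin(3.0 * lonlat[:, 0]) + np.cos(2.0 * lonlat[:, 1])
pm25 = 20.0 * aod + 10.0 * spatial + rng.normal(0.0, 2.0, size=n)

X = np.column_stack([aod, lonlat])   # covariate + coordinates as GP inputs
kernel = ConstantKernel() * RBF(length_scale=[1.0, 0.3, 0.3]) + WhiteKernel()
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X[:250], pm25[:250])

# Hold-out check, analogous in spirit to the cross-validation in the study.
pred, sd = gp.predict(X[250:], return_std=True)
rmse = np.sqrt(np.mean((pred - pm25[250:]) ** 2))
print('hold-out RMSE:', round(float(rmse), 2))
```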
Use of space-time models to investigate the stability of patterns of disease.
Abellan, Juan Jose; Richardson, Sylvia; Best, Nicky
2008-08-01
The use of Bayesian hierarchical spatial models has become widespread in disease mapping and ecologic studies of health-environment associations. In this type of study, the data are typically aggregated over an extensive time period, thus neglecting the time dimension. The output of purely spatial disease mapping studies is therefore the average spatial pattern of risk over the period analyzed, but the results do not inform about, for example, whether a high average risk was sustained over time or changed over time. We investigated how including the time dimension in disease-mapping models strengthens the epidemiologic interpretation of the overall pattern of risk. We discuss a class of Bayesian hierarchical models that simultaneously characterize and estimate the stable spatial and temporal patterns as well as departures from these stable components. We show how useful rules for classifying areas as stable can be constructed based on the posterior distribution of the space-time interactions. We carry out a simulation study to investigate the sensitivity and specificity of the decision rules we propose, and we illustrate our approach in a case study of congenital anomalies in England. Our results confirm that extending hierarchical disease-mapping models to models that simultaneously consider space and time leads to a number of benefits in terms of interpretation and potential for detection of localized excesses.
Climate fails to predict wood decomposition at regional scales
NASA Astrophysics Data System (ADS)
Bradford, Mark A.; Warren, Robert J., II; Baldrian, Petr; Crowther, Thomas W.; Maynard, Daniel S.; Oldfield, Emily E.; Wieder, William R.; Wood, Stephen A.; King, Joshua R.
2014-07-01
Decomposition of organic matter strongly influences ecosystem carbon storage. In Earth-system models, climate is a predominant control on the decomposition rates of organic matter. This assumption is based on the mean response of decomposition to climate, yet there is a growing appreciation in other areas of global change science that projections based on mean responses can be irrelevant and misleading. We test whether climate controls on the decomposition rate of dead wood--a carbon stock estimated to represent 73 +/- 6 Pg carbon globally--are sensitive to the spatial scale from which they are inferred. We show that the common assumption that climate is a predominant control on decomposition is supported only when local-scale variation is aggregated into mean values. Disaggregated data instead reveal that local-scale factors explain 73% of the variation in wood decomposition, and climate only 28%. Further, the temperature sensitivity of decomposition estimated from local versus mean analyses is 1.3-times greater. Fundamental issues with mean correlations were highlighted decades ago, yet mean climate-decomposition relationships are used to generate simulations that inform management and adaptation under environmental change. Our results suggest that to predict accurately how decomposition will respond to climate change, models must account for local-scale factors that control regional dynamics.
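The aggregation effect discussed above can be reproduced with simulated data in which local-scale factors dominate decay rates: regressing site means on temperature yields a much higher R² than regressing the disaggregated plot-level observations. All numbers below are illustrative assumptions, not the study's data.

```python
import numpy as np

# Simulated decay rates: a weak climate (temperature) signal plus strong
# local-scale variation, sampled at many plots within each site.
rng = np.random.default_rng(3)
n_sites, plots = 25, 40
site_temp = rng.uniform(5.0, 25.0, size=n_sites)                  # site mean temperature
local = rng.normal(0.0, 1.0, size=(n_sites, plots))               # local-scale factors
decay = 0.02 * site_temp[:, None] + 0.15 * local + rng.normal(0.0, 0.05, (n_sites, plots))

def r_squared(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var()

# Mean-aggregated analysis: one point per site.
r2_mean = r_squared(site_temp, decay.mean(axis=1))
# Disaggregated analysis: every plot enters individually.
r2_all = r_squared(np.repeat(site_temp, plots), decay.ravel())
print(f'R2 using site means: {r2_mean:.2f}   R2 using all plots: {r2_all:.2f}')
```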
Background: Bilevel optimization has been recognized as a 2-player Stackelberg game where players are represented as leaders and followers and each pursues their own set of objectives. Hierarchical optimization problems, which are a generalization of bilevel, are especially difficu...
Chad Babcock; Andrew O. Finley; John B. Bradford; Randy Kolka; Richard Birdsey; Michael G. Ryan
2015-01-01
Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both...
ERIC Educational Resources Information Center
Lien, Mei-Ching; Ruthruff, Eric
2004-01-01
This study examined how task switching is affected by hierarchical task organization. Traditional task-switching studies, which use a constant temporal and spatial distance between each task element (defined as a stimulus requiring a response), promote a flat task structure. Using this approach, Experiment 1 revealed a large switch cost of 238 ms.…
Dorazio, R.M.; Jelks, H.L.; Jordan, F.
2005-01-01
A statistical modeling framework is described for estimating the abundances of spatially distinct subpopulations of animals surveyed using removal sampling. To illustrate this framework, hierarchical models are developed using the Poisson and negative-binomial distributions to model variation in abundance among subpopulations and using the beta distribution to model variation in capture probabilities. These models are fitted to the removal counts observed in a survey of a federally endangered fish species. The resulting estimates of abundance have similar or better precision than those computed using the conventional approach of analyzing the removal counts of each subpopulation separately. Extension of the hierarchical models to include spatial covariates of abundance is straightforward and may be used to identify important features of an animal's habitat or to predict the abundance of animals at unsampled locations.
Distributed intelligence for supervisory control
NASA Technical Reports Server (NTRS)
Wolfe, W. J.; Raney, S. D.
1987-01-01
Supervisory control systems must deal with various types of intelligence distributed throughout the layers of control. Typical layers are real-time servo control, off-line planning and reasoning subsystems and finally, the human operator. Design methodologies must account for the fact that the majority of the intelligence will reside with the human operator. Hierarchical decompositions and feedback loops as conceptual building blocks that provide a common ground for man-machine interaction are discussed. Examples of types of parallelism and parallel implementation on several classes of computer architecture are also discussed.
Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach
NASA Technical Reports Server (NTRS)
Mak, Victor W. K.
1986-01-01
Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.
Hierarchical image coding with diamond-shaped sub-bands
NASA Technical Reports Server (NTRS)
Li, Xiaohui; Wang, Jie; Bauer, Peter; Sauer, Ken
1992-01-01
We present a sub-band image coding/decoding system using a diamond-shaped pyramid frequency decomposition to more closely match visual sensitivities than conventional rectangular bands. Filter banks are composed of simple, low order IIR components. The coder is especially designed to function in a multiple resolution reconstruction setting, in situations such as variable capacity channels or receivers, where images must be reconstructed without the entire pyramid of sub-bands. We use a nonlinear interpolation technique for lost subbands to compensate for loss of aliasing cancellation.
Ecological systems are generally considered among the most complex because they are characterized by a large number of diverse components, nonlinear interactions, scale multiplicity, and spatial heterogeneity. Hierarchy theory, as well as empirical evidence, suggests that comp...
In situ analysis of the organic framework in the prismatic layer of mollusc shell.
Tong, Hua; Hu, Jiming; Ma, Wentao; Zhong, Guirong; Yao, Songnian; Cao, Nianxing
2002-06-01
A novel in situ analytical approach was constructed by means of ion sputtering, decalcification and deproteinization techniques combined with scanning electron microscopy (SEM) and transmission electron microscopy (TEM) ultrastructural analysis. The method was employed to determine the spatial distribution of the organic framework outside and inside the crystals, and the spatial geometrical relationship of the organic/inorganic interface, in the prismatic layer of Cristaria plicata (Leach). The results show that there is a substructure of organic matrix in the intracrystalline region. The prismatic layer forms according to a strict hierarchical configuration of regular pattern. Each unit of the organic template of the prismatic layer uniquely determines the column crystal growth direction, spatial orientation and size. Cavity templates are responsible for supporting, limiting size and shape and determining the crystal growth spatial orientation, while the intracrystalline organic matrix is responsible for providing nucleation points and inducing the nucleation process of calcite. The stereo hierarchical fabrication of the prismatic layer was elucidated for the first time.
Hierarchical Nearest-Neighbor Gaussian Process Models for Large Geostatistical Datasets.
Datta, Abhirup; Banerjee, Sudipto; Finley, Andrew O; Gelfand, Alan E
2016-01-01
Spatial process models for analyzing geostatistical data entail computations that become prohibitive as the number of spatial locations become large. This article develops a class of highly scalable nearest-neighbor Gaussian process (NNGP) models to provide fully model-based inference for large geostatistical datasets. We establish that the NNGP is a well-defined spatial process providing legitimate finite-dimensional Gaussian densities with sparse precision matrices. We embed the NNGP as a sparsity-inducing prior within a rich hierarchical modeling framework and outline how computationally efficient Markov chain Monte Carlo (MCMC) algorithms can be executed without storing or decomposing large matrices. The floating point operations (flops) per iteration of this algorithm is linear in the number of spatial locations, thereby rendering substantial scalability. We illustrate the computational and inferential benefits of the NNGP over competing methods using simulation studies and also analyze forest biomass from a massive U.S. Forest Inventory dataset at a scale that precludes alternative dimension-reducing methods. Supplementary materials for this article are available online.
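A minimal sketch of the neighbor-set construction that underlies the NNGP's sparsity: after ordering the locations, each one conditions on at most m nearest previously ordered neighbors. Ordering by a single coordinate and the brute-force distance search are simplifications chosen for illustration; they are not the authors' implementation, which would also build the sparse Cholesky factor and run MCMC.

```python
import numpy as np

def nngp_neighbor_sets(coords, m=10):
    """Conditioning sets for an NNGP-style sparse factorization.

    Each ordered location depends on at most m nearest *previously ordered*
    locations, which is what yields a sparse precision matrix.
    coords : (n, 2) array of spatial locations.
    Returns the ordering and a list of neighbor-index arrays (into the ordered coords).
    """
    order = np.argsort(coords[:, 0])          # simple coordinate ordering
    s = coords[order]
    neighbors = [np.array([], dtype=int)]
    for i in range(1, len(s)):
        d = np.linalg.norm(s[:i] - s[i], axis=1)
        neighbors.append(np.argsort(d)[:m])   # m nearest among earlier locations
    return order, neighbors

# Toy usage: nonzeros grow linearly in n instead of quadratically.
rng = np.random.default_rng(4)
coords = rng.uniform(0.0, 1.0, size=(2000, 2))
order, nbrs = nngp_neighbor_sets(coords, m=10)
nnz = sum(len(a) + 1 for a in nbrs)           # nonzeros of the sparse factor
print('sparse nonzeros:', nnz, 'vs dense:', 2000 * 2001 // 2)
```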
Selecting habitat to survive: the impact of road density on survival in a large carnivore.
Basille, Mathieu; Van Moorter, Bram; Herfindal, Ivar; Martin, Jodie; Linnell, John D C; Odden, John; Andersen, Reidar; Gaillard, Jean-Michel
2013-01-01
Habitat selection studies generally assume that animals select habitat and food resources at multiple scales to maximise their fitness. However, animals sometimes prefer habitats of apparently low quality, especially when considering the costs associated with spatially heterogeneous human disturbance. We used spatial variation in human disturbance, and its consequences on lynx survival, a direct fitness component, to test the Hierarchical Habitat Selection hypothesis from a population of Eurasian lynx Lynx lynx in southern Norway. Data from 46 lynx monitored with telemetry indicated that a high proportion of forest strongly reduced the risk of mortality from legal hunting at the home range scale, while increasing road density strongly increased such risk at the finer scale within the home range. We found hierarchical effects of the impact of human disturbance, with a higher road density at a large scale reinforcing its negative impact at a fine scale. Conversely, we demonstrated that lynx shifted their habitat selection to avoid areas with the highest road densities within their home ranges, thus supporting a compensatory mechanism at fine scale enabling lynx to mitigate the impact of large-scale disturbance. Human impact, positively associated with high road accessibility, was thus a stronger driver of lynx space use at a finer scale, with home range characteristics nevertheless constraining habitat selection. Our study demonstrates the truly hierarchical nature of habitat selection, which aims at maximising fitness by selecting against limiting factors at multiple spatial scales, and indicates that scale-specific heterogeneity of the environment is driving individual spatial behaviour, by means of trade-offs across spatial scales.
Characteristic eddy decomposition of turbulence in a channel
NASA Technical Reports Server (NTRS)
Moin, Parviz; Moser, Robert D.
1991-01-01
The proper orthogonal decomposition technique (Lumley's decomposition) is applied to the turbulent flow in a channel to extract coherent structures by decomposing the velocity field into characteristic eddies with random coefficients. In the homogeneous spatial directions, a generalization of the shot-noise expansion is used to determine the characteristic eddies. In this expansion, the Fourier coefficients of the characteristic eddy cannot be obtained from the second-order statistics. Three different techniques are used to determine the phases of these coefficients. They are based on: (1) the bispectrum, (2) a spatial compactness requirement, and (3) a functional continuity argument. Results from these three techniques are found to be similar in most respects. The implications of these techniques and the shot-noise expansion are discussed. The dominant eddy is found to contribute as much as 76 percent to the turbulent kinetic energy. In both 2D and 3D, the characteristic eddies consist of an ejection region straddled by streamwise vortices that leave the wall in the very short streamwise distance of about 100 wall units.
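For readers unfamiliar with the decomposition itself, the snippet below sketches a plain snapshot POD via the SVD, which is the linear-algebra core of extracting energy-ranked spatial modes; the shot-noise expansion used in the homogeneous directions and the phase-recovery techniques discussed above are not reproduced, and the data are synthetic stand-ins for channel-flow snapshots.

```python
import numpy as np

rng = np.random.default_rng(1)
n_points, n_snapshots = 256, 64
snapshots = rng.standard_normal((n_points, n_snapshots))   # stand-in velocity snapshots

fluct = snapshots - snapshots.mean(axis=1, keepdims=True)  # fluctuating field
U, s, Vt = np.linalg.svd(fluct, full_matrices=False)

modes = U                                  # spatial POD modes ("characteristic eddies")
energy = s**2 / np.sum(s**2)               # energy fraction carried by each mode
print(f"leading mode carries {100 * energy[0]:.1f}% of the fluctuation energy")
```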
Foley, Alana E; Vasilyeva, Marina; Laski, Elida V
2017-06-01
This study examined the mediating role of children's use of decomposition strategies in the relation between visuospatial memory (VSM) and arithmetic accuracy. Children (N = 78; Age M = 9.36) completed assessments of VSM, arithmetic strategies, and arithmetic accuracy. Consistent with previous findings, VSM predicted arithmetic accuracy in children. Extending previous findings, the current study showed that the relation between VSM and arithmetic performance was mediated by the frequency of children's use of decomposition strategies. Identifying the role of arithmetic strategies in this relation has implications for increasing the math performance of children with lower VSM. Statement of contribution What is already known on this subject? The link between children's visuospatial working memory and arithmetic accuracy is well documented. Frequency of decomposition strategy use is positively related to children's arithmetic accuracy. Children's spatial skill positively predicts the frequency with which they use decomposition. What does this study add? Short-term visuospatial memory (VSM) positively relates to the frequency of children's decomposition use. Decomposition use mediates the relation between short-term VSM and arithmetic accuracy. Children with limited short-term VSM may struggle to use decomposition, decreasing accuracy. © 2016 The British Psychological Society.
Jacob, Benjamin J; Krapp, Fiorella; Ponce, Mario; Gottuzzo, Eduardo; Griffith, Daniel A; Novak, Robert J
2010-05-01
Spatial autocorrelation is problematic for classical hierarchical cluster detection tests commonly used in multi-drug resistant tuberculosis (MDR-TB) analyses as considerable random error can occur. Therefore, when MDR-TB clusters are spatially autocorrelated the assumption that the clusters are independently random is invalid. In this research, a product moment correlation coefficient (i.e., Moran's coefficient) was used to quantify local spatial variation in multiple clinical and environmental predictor variables sampled in San Juan de Lurigancho, Lima, Peru. Initially, QuickBird 0.61 m data, encompassing visible bands and the near infra-red bands, were selected to synthesize images of land cover attributes of the study site. Data of residential addresses of individual patients with smear-positive MDR-TB were geocoded, prevalence rates calculated and then digitally overlaid onto the satellite data within a 2 km buffer of 31 georeferenced health centers, using a 10 m2 grid-based algorithm. Geographical information system (GIS)-gridded measurements of each health center were generated based on preliminary base maps of the georeferenced data aggregated to block groups and census tracts within each buffered area. A three-dimensional model of the study site was constructed based on a digital elevation model (DEM) to determine terrain covariates associated with the sampled MDR-TB covariates. Pearson's correlation was used to evaluate the linear relationship between the DEM and the sampled MDR-TB data. A SAS/GIS® module was then used to calculate univariate statistics and to perform linear and non-linear regression analyses using the sampled predictor variables. The estimates generated from a global autocorrelation analysis were then spatially decomposed into empirical orthogonal bases using a negative binomial regression with a non-homogeneous mean. Results of the DEM analyses indicated a statistically non-significant, linear relationship between georeferenced health centers and the sampled covariate elevation. The data exhibited positive spatial autocorrelation, and the decomposition of Moran's coefficient into uncorrelated, orthogonal map pattern components revealed global spatial heterogeneities necessary to capture latent autocorrelation in the MDR-TB model. It was thus shown that Poisson regression analyses and spatial eigenvector mapping can elucidate the mechanics of MDR-TB transmission by prioritizing clinical and environmental-sampled predictor variables for identifying high risk populations.
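For concreteness, a minimal version of the product moment statistic referred to above (Moran's I) can be computed as below; the inverse-distance weight matrix and the toy data are illustrative assumptions, not the spatial weighting or covariates used in the study.

```python
import numpy as np

def morans_i(values, coords):
    """Moran's I for a set of values at point locations."""
    z = values - values.mean()
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    w = np.where(d > 0, 1.0 / np.maximum(d, 1e-12), 0.0)   # inverse-distance weights
    n, s0 = len(values), w.sum()
    return (n / s0) * (z @ w @ z) / (z @ z)

rng = np.random.default_rng(2)
coords = rng.uniform(size=(100, 2))
# a smooth north-south trend yields positive spatial autocorrelation
values = coords[:, 1] + 0.1 * rng.standard_normal(100)
print("Moran's I =", round(morans_i(values, coords), 3))
```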
NASA Astrophysics Data System (ADS)
Jerez-Hanckes, Carlos; Pérez-Arancibia, Carlos; Turc, Catalin
2017-12-01
We present Nyström discretizations of multitrace/singletrace formulations and non-overlapping Domain Decomposition Methods (DDM) for the solution of Helmholtz transmission problems for bounded composite scatterers with piecewise constant material properties. We investigate the performance of DDM with both classical Robin and optimized transmission boundary conditions. The optimized transmission boundary conditions incorporate square root Fourier multiplier approximations of Dirichlet to Neumann operators. While the multitrace/singletrace formulations as well as the DDM that use classical Robin transmission conditions are not particularly well suited for Krylov subspace iterative solutions of high-contrast high-frequency Helmholtz transmission problems, we provide ample numerical evidence that DDM with optimized transmission conditions constitute efficient computational alternatives for these types of applications. In the case of large numbers of subdomains with different material properties, we show that the associated DDM linear system can be efficiently solved via hierarchical Schur complement elimination.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spogen, L.R.; Cleland, L.L.
An approach to the development of performance based regulations (PBRs) is described. Initially, a framework is constructed that consists of a function hierarchy and associated measures. The function at the top of the hierarchy is described in terms of societal objectives. Decomposition of this function into subordinate functions and their subsequent decompositions yield the function hierarchy. "Bottom" functions describe the roles of system components. When measures are identified for the performance of each function and means of aggregating performances to higher levels are established, the framework may be employed for developing PBRs. Considerations of system flexibility and performance uncertainty guide the choice of the hierarchical level at which regulations are formulated. Ease of testing compliance is also a factor. To show the viability of the approach, the framework developed by Lawrence Livermore Laboratory for the Nuclear Regulatory Commission for evaluation of material control systems at fixed facilities is presented.
CROSS-SCALE CORRELATIONS AND THE DESIGN AND ANALYSIS OF AVIAN HABITAT SELECTION STUDIES
It has long been suggested that birds select habitat hierarchically, progressing from coarser to finer spatial scales. This hypothesis, in conjunction with the realization that many organisms likely respond to environmental patterns at multiple spatial scales, has led to a large ...
NASA Technical Reports Server (NTRS)
Tarabalka, Y.; Tilton, J. C.; Benediktsson, J. A.; Chanussot, J.
2012-01-01
The Hierarchical SEGmentation (HSEG) algorithm, which combines region object finding with region object clustering, has given good performance for multi- and hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. Two classification-based approaches for automatic marker selection are adapted and compared for this purpose. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. Three different implementations of the M-HSEG method are proposed and their performances in terms of classification accuracies are compared. The experimental results, presented for three hyperspectral airborne images, demonstrate that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for remote sensing image analysis.
Multilevel Hierarchical Kernel Spectral Clustering for Real-Life Large Scale Complex Networks
Mall, Raghvendra; Langone, Rocco; Suykens, Johan A. K.
2014-01-01
Kernel spectral clustering corresponds to a weighted kernel principal component analysis problem in a constrained optimization framework. The primal formulation leads to an eigen-decomposition of a centered Laplacian matrix at the dual level. The dual formulation allows building a model on a representative subgraph of the large scale network in the training phase, and the model parameters are estimated in the validation stage. The KSC model has a powerful out-of-sample extension property which allows cluster affiliation for the unseen nodes of the big data network. In this paper we exploit the structure of the projections in the eigenspace during the validation stage to automatically determine a set of increasing distance thresholds. We use these distance thresholds in the test phase to obtain multiple levels of hierarchy for the large scale network. The hierarchical structure in the network is determined in a bottom-up fashion. We empirically showcase that real-world networks have multilevel hierarchical organization which cannot be detected efficiently by several state-of-the-art large scale hierarchical community detection techniques like the Louvain, OSLOM and Infomap methods. We show that a major advantage of our proposed approach is the ability to locate good quality clusters at both the finer and coarser levels of hierarchy using internal cluster quality metrics on 7 real-life networks. PMID:24949877
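The eigen-decomposition step at the heart of such methods can be illustrated with a bare-bones spectral clustering sketch: an RBF affinity is normalized, its leading eigenvectors are taken as projections, and the projections are clustered. The KSC training/validation split, out-of-sample extension, and the distance thresholds that yield the hierarchy are not reproduced here; the blob data and parameter choices are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# toy data: three Gaussian blobs standing in for node embeddings
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in ((0, 0), (3, 0), (0, 3))])

gamma = 1.0
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq)                                    # RBF affinity matrix
d = K.sum(1)
L_sym = (K / np.sqrt(d)[:, None]) / np.sqrt(d)[None, :]    # symmetrically normalized affinity
vals, vecs = np.linalg.eigh(L_sym)
proj = vecs[:, -3:]                                        # leading eigenvectors as projections
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(proj)
print(np.bincount(labels))                                 # cluster sizes
```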
Reducing the complexity of the software design process with object-oriented design
NASA Technical Reports Server (NTRS)
Schuler, M. P.
1991-01-01
Designing software is a complex process. How object-oriented design (OOD), coupled with formalized documentation and tailored object diagraming techniques, can reduce the complexity of the software design process is described and illustrated. The described OOD methodology uses a hierarchical decomposition approach in which parent objects are decomposed into layers of lower level child objects. A method of tracking the assignment of requirements to design components is also included. Increases in the reusability, portability, and maintainability of the resulting products are also discussed. This method was built on a combination of existing technology, teaching experience, consulting experience, and feedback from design method users. The discussed concepts are applicable to hierarchical OOD processes in general. Emphasis is placed on improving the design process by documenting the details of the procedures involved and incorporating improvements into those procedures as they are developed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Dong; Xu, RuiXue; Zheng, Xiao, E-mail: xz58@ustc.edu.cn
2015-03-14
Several recent advancements for the hierarchical equations of motion (HEOM) approach are reported. First, we propose an a priori estimate for the optimal number of basis functions for the reservoir memory decomposition. Second, we make use of the sparsity of auxiliary density operators (ADOs) and propose two ansatzes to screen out all the intrinsic zero ADO elements. Third, we propose a new truncation scheme by utilizing the time derivatives of higher-tier ADOs. These novel techniques greatly reduce the memory cost of the HEOM approach, and thus enhance its efficiency and applicability. The improved HEOM approach is applied to simulate the coherent dynamics of Aharonov–Bohm double quantum dot interferometers. Quantitatively accurate dynamics is obtained for both noninteracting and interacting quantum dots. The crucial role of the quantum phase for the magnitude of quantum coherence and quantum entanglement is revealed.
Law, Jane
2016-01-01
Intrinsic conditional autoregressive modeling in a Bayesian hierarchical framework has been increasingly applied in small-area ecological studies. This study explores the specifications of spatial structure in this Bayesian framework in two aspects: adjacency, i.e., the set of neighbor(s) for each area; and (spatial) weight for each pair of neighbors. Our analysis was based on a small-area study of falling injuries among people age 65 and older in Ontario, Canada, that aimed to estimate risks and identify risk factors of such falls. In the case study, we observed incorrect adjacency information caused by deficiencies in the digital map itself. Further, when equal weights were replaced by weights based on a variable of expected count, the range of estimated risks increased, the number of areas with a probability of estimated risk greater than one increased at different probability thresholds, and model fit improved. More importantly, the significance of a risk factor diminished. Further research to thoroughly investigate different methods of variable weights; quantify the influence of specifications of spatial weights; and develop strategies for better defining the spatial structure of a map in small-area analysis in Bayesian hierarchical spatial modeling is recommended. PMID:29546147
NASA Astrophysics Data System (ADS)
Wang, Shaojun; Jiang, Lan; Han, Weina; Hu, Jie; Li, Xiaowei; Wang, Qingsong; Lu, Yongfeng
2018-05-01
We realize hierarchical laser-induced periodic surface structures (LIPSSs) on the surface of a ZnO thin film in a single step by the irradiation of femtosecond laser pulses. The structures are characterized by the high-spatial-frequency LIPSSs (HSFLs) formed on the abnormal bumped low-spatial-frequency LIPSSs (LSFLs). Localized electric-field enhancement based on the initially formed LSFLs is proposed as a potential mechanism for the formation of HSFLs. The simulation results through the finite-difference time-domain method show good agreement with experiments. Furthermore, the crucial role of the LSFLs in the formation of HSFLs is validated by an elaborate experimental design with preprocessed HSFLs.
NASA Astrophysics Data System (ADS)
Mao, J.; Chen, N.; Harmon, M. E.; Li, Y.; Cao, X.; Chappell, M.
2012-12-01
Advanced 13C solid-state NMR techniques were employed to study the chemical structural changes of litter decomposition across broad spatial and long time scales. The fresh and decomposed litter samples of four species (Acer saccharum (ACSA), Drypetes glauca (DRGL), Pinus resinosa (PIRE), and Thuja plicata (THPL)) incubated for up to 10 years at four sites under different climatic conditions (from Arctic to tropical forest) were examined. Decomposition generally led to an enrichment of cutin and surface wax materials, and a depletion of carbohydrates, causing the overall composition to become more similar compared with the original litters. However, the changes in the main constituents were inconsistent among the four litters, which followed different decomposition pathways at the same site. As decomposition proceeded, waxy materials decreased at the early stage and then gradually increased in PIRE; DRGL showed a significant depletion of lignin and tannin, while the changes in lignin and tannin were relatively small and inconsistent for ACSA and THPL. In addition, the NCH groups, which could be associated with either fungal cell wall chitin or bacterial wall peptidoglycan, were enriched in all litters except THPL. Contrary to the classic lignin-enrichment hypothesis, DRGL with low-quality C substrate had the highest degree of composition changes. Furthermore, some samples had more "advanced" compositional changes in the intermediate stage of decomposition than in the highly-decomposed stage. This pattern might be attributed to the formation of new cross-linking structures that rendered substrates more complex and difficult for enzymes to attack. Finally, litter quality overrode climate and time factors as a control of long-term changes of chemical composition.
Charles H. (Hobie) Perry; Kevin J. Horn; R. Quinn Thomas; Linda H. Pardo; Erica A.H. Smithwick; Doug Baldwin; Gregory B. Lawrence; Scott W. Bailey; Sabine Braun; Christopher M. Clark; Mark Fenn; Annika Nordin; Jennifer N. Phelan; Paul G. Schaberg; Sam St. Clair; Richard Warby; Shaun Watmough; Steven S. Perakis
2015-01-01
The abundance of temporally and spatially consistent Forest Inventory and Analysis data facilitates hierarchical/multilevel analysis to investigate factors affecting tree growth, scaling from plot-level to continental scales. Herein we use FIA tree and soil inventories in conjunction with various spatial climate and soils data to estimate species-specific responses of...
A Village Study with Middle School Spatial Organisation.
ERIC Educational Resources Information Center
Mitchell, El
1985-01-01
Demonstrates how elements of a built environment can be introduced to middle school students. Describes activities that address the concept of spatial organization in a small scale urban environment, suggesting that hierarchical arrangements of settlements, the central place theory, and land use zoning can be taught at the elementary level. (ML)
Managing the world’s largest and most complex freshwater ecosystem, the Laurentian Great Lakes, requires a spatially hierarchical basin-wide database of ecological and socioeconomic information that is comparable across the region. To meet such a need, we developed a hierarchi...
A structure adapted multipole method for electrostatic interactions in protein dynamics
NASA Astrophysics Data System (ADS)
Niedermeier, Christoph; Tavan, Paul
1994-07-01
We present an algorithm for rapid approximate evaluation of electrostatic interactions in molecular dynamics simulations of proteins. Traditional algorithms require computational work of the order O(N²) for a system of N particles. Truncation methods which try to avoid that effort entail intolerably large errors in forces, energies and other observables. Hierarchical multipole expansion algorithms, which can account for the electrostatics to numerical accuracy, scale with O(N log N) or even with O(N) if they become augmented by a sophisticated scheme for summing up forces. To further reduce the computational effort we propose an algorithm that also uses a hierarchical multipole scheme but considers only the first two multipole moments (i.e., charges and dipoles). Our strategy is based on the consideration that numerical accuracy may not be necessary to reproduce protein dynamics with sufficient correctness. As opposed to previous methods, our scheme for hierarchical decomposition is adjusted to structural and dynamical features of the particular protein considered rather than chosen rigidly as a cubic grid. As compared to truncation methods we manage to reduce errors in the computation of electrostatic forces by a factor of 10 with only marginal additional effort.
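The idea of truncating the expansion at charges and dipoles can be sketched as follows: far groups of charges are replaced by their total charge and dipole moment about the group center, while near charges are summed exactly. The uniform grid grouping and the numerical settings below are assumptions; the paper's structure-adapted decomposition is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
pos = rng.uniform(0.0, 20.0, size=(400, 3))           # particle positions
q = rng.choice([-1.0, 1.0], size=400)                 # partial charges

# group particles into coarse cells; precompute monopole and dipole per cell
cell = np.floor(pos / 5.0).astype(int)
keys = {tuple(c) for c in cell}
groups = {k: np.where((cell == k).all(1))[0] for k in keys}
moments = {}
for k, idx in groups.items():
    center = pos[idx].mean(0)
    Q = q[idx].sum()                                  # monopole (total charge)
    p = ((pos[idx] - center) * q[idx, None]).sum(0)   # dipole moment about the center
    moments[k] = (center, Q, p)

def potential(x, cutoff=6.0):
    """Coulomb potential at x: exact sum for near cells, charge+dipole for far cells."""
    phi = 0.0
    for k, idx in groups.items():
        center, Q, p = moments[k]
        r_vec = x - center
        r = np.linalg.norm(r_vec)
        if r < cutoff:                                # near field: exact pairwise sum
            d = np.maximum(np.linalg.norm(pos[idx] - x, axis=1), 1e-9)
            phi += np.sum(q[idx] / d)
        else:                                         # far field: truncated expansion
            phi += Q / r + (p @ r_vec) / r**3
    return phi

print(potential(np.array([2.0, 2.0, 2.0])))
```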
Iterative image-domain decomposition for dual-energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, Tianye; Dong, Xue; Petrongolo, Michael
2014-04-15
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical values of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the proposed method but with an edge-preserving regularization term. Results: On the Catphan phantom, the method maintains the same spatial resolution on the decomposed images as that of the CT images before decomposition (8 pairs/cm) while significantly reducing their noise standard deviation. Compared to that obtained by direct matrix inversion, the noise standard deviation in the images decomposed by the proposed algorithm is reduced by over 98%. Without considering the noise correlation properties in the formulation, the denoising scheme degrades the spatial resolution to 6 pairs/cm for the same level of noise suppression. Compared to the edge-preserving algorithm, the method achieves better low-contrast detectability. A quantitative study is performed on the contrast-rod slice of the Catphan phantom. The proposed method achieves lower electron density measurement error as compared to that by direct matrix inversion, and significantly reduces the error variation by over 97%. On the head phantom, the method reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusions: The authors propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. By exploring the full variance-covariance properties of the decomposed images and utilizing edge pre-detection, the proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability.
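A toy version of the kind of regularized least-squares decomposition described above is sketched below: a per-pixel 2x2 mixing matrix maps two material images to two measured images, a diagonal noise weighting stands in for the inverse variance-covariance penalty, and a finite-difference term enforces smoothness. The direct sparse solve replaces the paper's conjugate-gradient iteration, no edge pre-detection is included, and all numbers are illustrative.

```python
import numpy as np
from scipy.sparse import kron, eye, diags, vstack, csr_matrix
from scipy.sparse.linalg import spsolve

n = 32                                              # image is n x n
N = n * n
A_pix = csr_matrix(np.array([[1.0, 0.6],            # assumed 2x2 mixing matrix per pixel
                             [0.4, 1.0]]))

rng = np.random.default_rng(5)
x1 = np.zeros((n, n))
x1[:, : n // 2] = 1.0                                # material 1 in the left half
x2 = np.zeros((n, n))
x2[:, n // 2:] = 1.0                                 # material 2 in the right half
x_true = np.concatenate([x1.ravel(), x2.ravel()])

A = kron(A_pix, eye(N), format="csr")                # per-pixel mixing on the stacked unknown
sigma = 0.05
y = A @ x_true + sigma * rng.standard_normal(2 * N)  # noisy dual-energy measurements

# diagonal noise weights standing in for the inverse variance-covariance matrix
w = 1.0 / sigma**2
W = kron(diags([w, w]), eye(N), format="csr")

# first-difference operators on the 2D grid, applied to each material image
d1 = diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
D2d = vstack([kron(eye(n), d1), kron(d1, eye(n))])
D = kron(eye(2), D2d, format="csr")

lam = 0.5
x_hat = spsolve((A.T @ W @ A + lam * D.T @ D).tocsc(), A.T @ W @ y)
print("RMSE of decomposed material images:",
      round(float(np.sqrt(np.mean((x_hat - x_true) ** 2))), 4))
```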
On the use of a PM2.5 exposure simulator to explain birthweight
Berrocal, Veronica J.; Gelfand, Alan E.; Holland, David M.; Burke, Janet; Miranda, Marie Lynn
2010-01-01
In relating pollution to birth outcomes, maternal exposure has usually been described using monitoring data. Such characterization provides a misrepresentation of exposure as it (i) does not take into account the spatial misalignment between an individual’s residence and monitoring sites, and (ii) ignores the fact that individuals spend most of their time indoors and typically in more than one location. In this paper, we break with previous studies by using a stochastic simulator to describe personal exposure (to particulate matter) and then relate simulated exposures at the individual level to the health outcome (birthweight) rather than aggregating to a selected spatial unit. We propose a hierarchical model that, at the first stage, specifies a linear relationship between birthweight and personal exposure, adjusting for individual risk factors, and introduces random spatial effects for the census tract of maternal residence. At the second stage, our hierarchical model specifies the distribution of each individual’s personal exposure using the empirical distribution yielded by the stochastic simulator as well as a model for the spatial random effects. We have applied our framework to analyze birthweight data from 14 counties in North Carolina in years 2001 and 2002. We investigate whether there are certain aspects and time windows of exposure that are more detrimental to birthweight by building different exposure metrics which we incorporate, one by one, in our hierarchical model. To assess the difference in relating ambient exposure to birthweight versus personal exposure to birthweight, we compare estimates of the effect of air pollution obtained from hierarchical models that linearly relate ambient exposure and birthweight versus those obtained from our modeling framework. Our analysis does not show a significant effect of PM2.5 on birthweight for reasons which we discuss. However, our modeling framework serves as a template for analyzing the relationship between personal exposure and longer term health endpoints. PMID:21691413
Community Turnover of Wood-Inhabiting Fungi across Hierarchical Spatial Scales
Abrego, Nerea; García-Baquero, Gonzalo; Halme, Panu; Ovaskainen, Otso; Salcedo, Isabel
2014-01-01
For efficient use of conservation resources it is important to determine how species diversity changes across spatial scales. In many poorly known species groups little is known about the spatial scales at which conservation efforts should be focused. Here we examined how the community turnover of wood-inhabiting fungi is realised at three hierarchical levels, and how much of community variation is explained by variation in resource composition and spatial proximity. The hierarchical study design consisted of management type (fixed factor), forest site (random factor, nested within management type) and study plots (randomly placed plots within each study site). To examine how species richness varied across the three hierarchical scales, randomized species accumulation curves and additive partitioning of species richness were applied. To analyse variation in wood-inhabiting species and dead wood composition at each scale, linear and Permanova modelling approaches were used. Wood-inhabiting fungal communities were dominated by rare and infrequent species. The similarity of fungal communities was higher within sites and within management categories than among sites or between the two management categories, and it decreased with increasing distance among the sampling plots and with decreasing similarity of dead wood resources. However, only a small part of community variation could be explained by these factors. The species present in managed forests were to a large extent a subset of those species present in natural forests. Our results suggest that in particular the protection of rare species requires a large total area. As managed forests have only little additional value complementing the diversity of natural forests, the conservation of natural forests is the key to ecologically effective conservation. As the dissimilarity of fungal communities increases with distance, the conserved natural forest sites should be broadly distributed in space, yet the individual conserved areas should be large enough to ensure local persistence. PMID:25058128
NASA Astrophysics Data System (ADS)
WANG, P. T.
2015-12-01
Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Due to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. Hydrogeological properties are assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed. Therefore, statistical sampling plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure from LHS with simulation by LU decomposition to form LULHS. Both conditional and unconditional simulations of LULHS were developed. The simulation efficiency and spatial correlation of LULHS are compared to three other simulation methods. The results show that for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort. Fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
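The combination described above can be illustrated in a few lines: Latin Hypercube samples are mapped through the standard normal quantile function and colored with the lower-triangular factor of a spatial covariance matrix, giving stratified yet spatially correlated realizations. The exponential covariance, the grid, and the use of a Cholesky factor as the LU-type decomposition are assumptions of this unconditional sketch; the conditional variant is not shown.

```python
import numpy as np
from scipy.stats import norm, qmc

n_side = 20
xx, yy = np.meshgrid(np.linspace(0, 1, n_side), np.linspace(0, 1, n_side))
coords = np.column_stack([xx.ravel(), yy.ravel()])
n = len(coords)

# exponential covariance and its lower-triangular factor
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
C = np.exp(-d / 0.3) + 1e-8 * np.eye(n)
L = np.linalg.cholesky(C)

# Latin Hypercube samples -> standard normals -> spatially correlated fields
n_realizations = 50
sampler = qmc.LatinHypercube(d=n, seed=6)
u = sampler.random(n_realizations)            # stratified uniforms in (0, 1)
z = norm.ppf(u)                               # independent standard normals
fields = z @ L.T                              # each row is one correlated random field
print(fields.shape)                           # (50, 400)
```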
Kinetics of the cellular decomposition of supersaturated solid solutions
NASA Astrophysics Data System (ADS)
Ivanov, M. A.; Naumuk, A. Yu.
2014-09-01
A consistent description is given of the kinetics of the cellular decomposition of supersaturated solid solutions with the development of a spatially periodic structure of lamellar (plate-like) type, which consists of alternating phases of precipitates based on the impurity component and of the depleted initial solid solution. One of the equations, which determines the relationship between the parameters that describe the process of decomposition, has been obtained from a comparison of two approaches to determining the rate of change in the free energy of the system. The other kinetic parameters can be described with the use of a variational method, namely, by the maximum velocity of motion of the decomposition boundary at a given temperature. It is shown that the mutual directions of growth of the lamellae of different phases are determined by the minimum value of the interphase surface energy. To determine the parameters of the decomposition, a simple thermodynamic model of states with a parabolic dependence of the free energy on the concentrations has been used. As a result, expressions that describe the decomposition rate, the interlamellar distance, and the concentration of impurities in the phase that remains after the decomposition have been derived. This concentration proves to be equal to the half-sum of the initial concentration and the equilibrium concentration corresponding to the decomposition temperature.
Fast Time-Varying Volume Rendering Using Time-Space Partition (TSP) Tree
NASA Technical Reports Server (NTRS)
Shen, Han-Wei; Chiang, Ling-Jen; Ma, Kwan-Liu
1999-01-01
We present a new algorithm for rapid rendering of time-varying volumes. A new hierarchical data structure that is capable of capturing both the temporal and the spatial coherence is proposed. Conventional hierarchical data structures such as octrees are effective in characterizing the homogeneity of the field values existing in the spatial domain. However, when treating time merely as another dimension for a time-varying field, difficulties frequently arise due to the discrepancy between the field's spatial and temporal resolutions. In addition, treating spatial and temporal dimensions equally often prevents the possibility of detecting the coherence that is unique in the temporal domain. Using the proposed data structure, our algorithm can meet the following goals. First, both spatial and temporal coherence are identified and exploited for accelerating the rendering process. Second, our algorithm allows the user to supply the desired error tolerances at run time for the purpose of image-quality/rendering-speed trade-off. Third, the amount of data that are required to be loaded into main memory is reduced, and thus the I/O overhead is minimized. This low I/O overhead makes our algorithm suitable for out-of-core applications.
Brand, John; Johnson, Aaron P.
2014-01-01
In four experiments, we investigated how attention to local and global levels of hierarchical Navon figures affected the selection of diagnostic spatial scale information used in scene categorization. We explored this issue by asking observers to classify hybrid images (i.e., images that contain the low spatial frequency (LSF) content of one image and the high spatial frequency (HSF) content of a second image) immediately following global and local Navon tasks. Hybrid images can be classified according to either their LSF or HSF content, making them ideal for investigating diagnostic spatial scale preference. Although observers were sensitive to both spatial scales (Experiment 1), they overwhelmingly preferred to classify hybrids based on LSF content (Experiment 2). In Experiment 3, we demonstrated that LSF-based hybrid categorization was faster following global Navon tasks, suggesting that LSF processing associated with global Navon tasks primed the selection of LSFs in hybrid images. Experiment 4 examined this hypothesis by replicating Experiment 3 while suppressing the LSF information in the Navon letters by contrast balancing the stimuli. As in Experiment 3, observers preferred to classify hybrids based on LSF content; in contrast, however, LSF-based hybrid categorization was slower following global than local Navon tasks. PMID:25520675
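For readers unfamiliar with the stimuli, a hybrid image of the kind used here can be constructed roughly as below, by adding the Gaussian low-pass content of one image to the high-pass residual of another; the synthetic images and cutoff are placeholders, not the actual scene photographs or filter settings used in the experiments.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(7)
scene_a = rng.random((256, 256))      # stands in for the first scene photograph
scene_b = rng.random((256, 256))      # stands in for the second scene photograph

sigma = 6.0                           # assumed cutoff between LSF and HSF content
lsf_a = gaussian_filter(scene_a, sigma)            # low-pass (LSF) of scene A
hsf_b = scene_b - gaussian_filter(scene_b, sigma)  # high-pass (HSF) of scene B
hybrid = lsf_a + hsf_b                # classifiable by either its LSF or HSF content
print(hybrid.shape, float(hybrid.mean()))
```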
NASA Astrophysics Data System (ADS)
He, Yujie; Yang, Jinyan; Zhuang, Qianlai; Harden, Jennifer W.; McGuire, Anthony D.; Liu, Yaling; Wang, Gangsheng; Gu, Lianhong
2015-12-01
Soil carbon dynamics of terrestrial ecosystems play a significant role in the global carbon cycle. Microbial-based decomposition models have seen much growth recently for quantifying this role, yet dormancy as a common strategy used by microorganisms has not usually been represented and tested in these models against field observations. Here we developed an explicit microbial-enzyme decomposition model and examined model performance with and without representation of microbial dormancy at six temperate forest sites of different forest types. We then extrapolated the model to global temperate forest ecosystems to investigate biogeochemical controls on soil heterotrophic respiration and microbial dormancy dynamics at different temporal-spatial scales. The dormancy model consistently produced a better match with field-observed heterotrophic soil CO2 efflux (RH) than the no dormancy model. Our regional modeling results further indicated that models with dormancy were able to produce more realistic magnitudes of microbial biomass (<2% of soil organic carbon) and soil RH (7.5 ± 2.4 Pg C yr-1). Spatial correlation analysis showed that soil organic carbon content was the dominating factor (correlation coefficient = 0.4-0.6) in the simulated spatial pattern of soil RH with both models. In contrast to strong temporal and local controls of soil temperature and moisture on microbial dormancy, our modeling results showed that soil carbon-to-nitrogen ratio (C:N) was a major regulating factor at regional scales (correlation coefficient = -0.43 to -0.58), indicating scale-dependent biogeochemical controls on microbial dynamics. Our findings suggest that incorporating microbial dormancy could improve the realism of microbial-based decomposition models and enhance the integration of soil experiments and mechanistically based modeling.
Grouping individual independent BOLD effects: a new way to ICA group analysis
NASA Astrophysics Data System (ADS)
Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott
2009-04-01
A new group analysis method to summarize task-related BOLD responses based on independent component analysis (ICA) is presented. In contrast to the previously proposed group ICA (gICA) method, which first combines multi-subject fMRI data in either the temporal or the spatial domain and applies ICA decomposition only once to the combined fMRI data to extract the task-related BOLD effects, the method presented here applies ICA decomposition to the individual subjects' fMRI data to first find the independent BOLD effects specific to each individual subject. Then, the task-related independent BOLD component is selected among the resulting independent components from the single-subject ICA decomposition and grouped across subjects to derive the group inference. In this new ICA group analysis (ICAga) method, one does not need to assume that the task-related BOLD time courses are identical across brain areas and subjects, as is assumed in the grand ICA decomposition of spatially concatenated fMRI data. Neither does one need to assume that, after spatial normalization, the voxels at the same coordinates represent exactly the same functional or structural brain anatomy across different subjects. These two assumptions have been problematic given recent BOLD activation evidence. Further, since the independent BOLD effects are obtained from each individual subject, the ICAga method can better account for individual differences in the task-related BOLD effects, unlike the gICA approach, in which the task-related BOLD effects can only be accounted for by a single unified BOLD model across multiple subjects. As a result, the newly proposed method, ICAga, is able to better fit the task-related BOLD effects at the individual level and thus allows grouping more appropriate multisubject BOLD effects in the group analysis.
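A minimal sketch of the per-subject route described above might look as follows: ICA is run separately on each subject's time-by-voxel matrix, the component whose time course correlates best with a task regressor is selected, and the selected spatial maps are averaged for a group summary. FastICA, the synthetic data, and the simple correlation-based selection are assumptions, not the authors' ICAga pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(8)
n_time, n_vox, n_subj, n_comp = 120, 500, 5, 10
task = np.sin(2 * np.pi * np.arange(n_time) / 30.0)      # assumed task regressor

group_maps = []
for s in range(n_subj):
    # synthetic subject data: task-locked signal in the first 50 voxels plus noise
    data = rng.standard_normal((n_time, n_vox))
    data[:, :50] += np.outer(task, np.ones(50))
    ica = FastICA(n_components=n_comp, random_state=s, max_iter=1000)
    sources = ica.fit_transform(data)                     # (time, components)
    maps = ica.components_                                # (components, voxels)
    r = [abs(np.corrcoef(task, sources[:, k])[0, 1]) for k in range(n_comp)]
    group_maps.append(maps[int(np.argmax(r))])            # task-related component per subject
group_map = np.mean(group_maps, axis=0)                   # simple group summary
print(group_map.shape)
```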
Spectral-Spatial Classification of Hyperspectral Images Using Hierarchical Optimization
NASA Technical Reports Server (NTRS)
Tarabalka, Yuliya; Tilton, James C.
2011-01-01
A new spectral-spatial method for hyperspectral data classification is proposed. For a given hyperspectral image, probabilistic pixelwise classification is first applied. Then, a hierarchical step-wise optimization algorithm is performed, iteratively merging the neighboring regions with the smallest Dissimilarity Criterion (DC) and recomputing class labels for the new regions. The DC is computed by comparing the region mean vectors, class labels, and the number of pixels in the two regions under consideration. The algorithm converges when all the pixels have been involved in the region-merging procedure. Experimental results are presented on two remote sensing hyperspectral images acquired by the AVIRIS and ROSIS sensors. The proposed approach improves classification accuracies and provides maps with more homogeneous regions, when compared to previously proposed classification techniques.
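The merging loop at the core of such hierarchical step-wise optimization can be sketched as below: starting from single-pixel regions, the adjacent pair with the smallest dissimilarity is merged and the region statistics are updated until a target number of regions remains. The size-weighted squared mean difference used as the DC here is a simplified stand-in for the criterion in the paper, and the probabilistic pixelwise classification step is omitted.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 16
img = np.zeros((n, n))
img[:, n // 2:] = 1.0
img += 0.05 * rng.standard_normal((n, n))                  # two noisy regions

labels = np.arange(n * n).reshape(n, n)                    # start: one region per pixel
stats = {int(l): (float(img.flat[l]), 1) for l in labels.ravel()}  # region mean, size

def adjacent_pairs(labels):
    """All pairs of distinct region labels that touch horizontally or vertically."""
    pairs = set()
    a, b = labels[:, :-1], labels[:, 1:]
    pairs |= {tuple(sorted(p)) for p in zip(a.ravel(), b.ravel()) if p[0] != p[1]}
    a, b = labels[:-1, :], labels[1:, :]
    pairs |= {tuple(sorted(p)) for p in zip(a.ravel(), b.ravel()) if p[0] != p[1]}
    return pairs

def dc(r1, r2):
    """Simplified dissimilarity criterion: size-weighted squared mean difference."""
    (m1, s1), (m2, s2) = stats[r1], stats[r2]
    return (s1 * s2) / (s1 + s2) * (m1 - m2) ** 2

target = 2
while len(stats) > target:
    r1, r2 = min(adjacent_pairs(labels), key=lambda p: dc(*p))
    (m1, s1), (m2, s2) = stats[r1], stats[r2]
    stats[r1] = ((m1 * s1 + m2 * s2) / (s1 + s2), s1 + s2)  # merged region statistics
    del stats[r2]
    labels[labels == r2] = r1
print("final regions:", sorted(stats))
```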
A spectral-spatial-dynamic hierarchical Bayesian (SSD-HB) model for estimating soybean yield
NASA Astrophysics Data System (ADS)
Kazama, Yoriko; Kujirai, Toshihiro
2014-10-01
A method called a "spectral-spatial-dynamic hierarchical-Bayesian (SSD-HB) model," which can deal with many parameters (such as spectral and weather information all together) by reducing the occurrence of multicollinearity, is proposed. Experiments conducted on soybean yields in Brazilian fields with a RapidEye satellite image indicate that the proposed SSD-HB model can predict soybean yield with a higher degree of accuracy than other estimation methods commonly used in remote-sensing applications. In the case of the SSD-HB model, the mean absolute error between the estimated yield of the target area and the actual yield is 0.28 t/ha, compared to 0.34 t/ha when conventional PLS regression was applied, showing the potential effectiveness of the proposed model.
Hierarchical spatial models for predicting tree species assemblages across large domains
Andrew O. Finley; Sudipto Banerjee; Ronald E. McRoberts
2009-01-01
Spatially explicit data layers of tree species assemblages, referred to as forest types or forest type groups, are a key component in large-scale assessments of forest sustainability, biodiversity, timber biomass, carbon sinks and forest health monitoring. This paper explores the utility of coupling georeferenced national forest inventory (NFI) data with readily...
Modeling stream network-scale variation in Coho salmon overwinter survival and smolt size
Joseph L. Ebersole; Mike E. Colvin; Parker J. Wigington; Scott G. Leibowitz; Joan P. Baker; Jana E. Compton; Bruce A. Miller; Michael A. Carins; Bruce P. Hansen; Henry R. La Vigne
2009-01-01
We used multiple regression and hierarchical mixed-effects models to examine spatial patterns of overwinter survival and size at smolting in juvenile coho salmon Oncorhynchus kisutch in relation to habitat attributes across an extensive stream network in southwestern Oregon over 3 years. Contributing basin area explained the majority of spatial...
Hierarchical Spatio-temporal Visual Analysis of Cluster Evolution in Electrocorticography Data
Murugesan, Sugeerth; Bouchard, Kristofer; Chang, Edward; ...
2016-10-02
Here, we present ECoG ClusterFlow, a novel interactive visual analysis tool for the exploration of high-resolution Electrocorticography (ECoG) data. Our system detects and visualizes dynamic high-level structures, such as communities, using the time-varying spatial connectivity network derived from the high-resolution ECoG data. ECoG ClusterFlow provides a multi-scale visualization of the spatio-temporal patterns underlying the time-varying communities using two views: 1) an overview summarizing the evolution of clusters over time and 2) a hierarchical glyph-based technique that uses data aggregation and small multiples techniques to visualize the propagation of clusters in their spatial domain. ECoG ClusterFlow makes it possible 1) to compare the spatio-temporal evolution patterns across various time intervals, 2) to compare the temporal information at varying levels of granularity, and 3) to investigate the evolution of spatial patterns without occluding the spatial context information. Lastly, we present case studies done in collaboration with neuroscientists on our team for both simulated and real epileptic seizure data aimed at evaluating the effectiveness of our approach.
Systems view on spatial planning and perception based on invariants in agent-environment dynamics
Mettler, Bérénice; Kong, Zhaodan; Li, Bin; Andersh, Jonathan
2015-01-01
Modeling agile and versatile spatial behavior remains a challenging task, due to the intricate coupling of planning, control, and perceptual processes. Previous results have shown that humans plan and organize their guidance behavior by exploiting patterns in the interactions between agent or organism and the environment. These patterns, described under the concept of Interaction Patterns (IPs), capture invariants arising from equivalences and symmetries in the interaction with the environment, as well as effects arising from intrinsic properties of human control and guidance processes, such as perceptual guidance mechanisms. The paper takes a systems' perspective, considering the IP as a unit of organization, and builds on its properties to present a hierarchical model that delineates the planning, control, and perceptual processes and their integration. The model's planning process is further elaborated by showing that the IP can be abstracted, using spatial time-to-go functions. The perceptual processes are elaborated from the hierarchical model. The paper provides experimental support for the model's ability to predict the spatial organization of behavior and the perceptual processes. PMID:25628524
The Voronoi spatio-temporal data structure
NASA Astrophysics Data System (ADS)
Mioc, Darka
2002-04-01
Current GIS models cannot integrate the temporal dimension of spatial data easily. Indeed, current GISs do not support incremental (local) addition and deletion of spatial objects, and they can not support the temporal evolution of spatial data. Spatio-temporal facilities would be very useful in many GIS applications: harvesting and forest planning, cadastre, urban and regional planning, and emergency planning. The spatio-temporal model that can overcome these problems is based on a topological model---the Voronoi data structure. Voronoi diagrams are irregular tessellations of space, that adapt to spatial objects and therefore they are a synthesis of raster and vector spatial data models. The main advantage of the Voronoi data structure is its local and sequential map updates, which allows us to automatically record each event and performed map updates within the system. These map updates are executed through map construction commands that are composed of atomic actions (geometric algorithms for addition, deletion, and motion of spatial objects) on the dynamic Voronoi data structure. The formalization of map commands led to the development of a spatial language comprising a set of atomic operations or constructs on spatial primitives (points and lines), powerful enough to define the complex operations. This resulted in a new formal model for spatio-temporal change representation, where each update is uniquely characterized by the numbers of newly created and inactivated Voronoi regions. This is used for the extension of the model towards the hierarchical Voronoi data structure. In this model, spatio-temporal changes induced by map updates are preserved in a hierarchical data structure that combines events and corresponding changes in topology. This hierarchical Voronoi data structure has an implicit time ordering of events visible through changes in topology, and it is equivalent to an event structure that can support temporal data without precise temporal information. This formal model of spatio-temporal change representation is currently applied to retroactive map updates and visualization of map evolution. It offers new possibilities in the domains of temporal GIS, transaction processing, spatio-temporal queries, spatio-temporal analysis, map animation and map visualization.
The formal verification of generic interpreters
NASA Technical Reports Server (NTRS)
Windley, P.; Levitt, K.; Cohen, G. C.
1991-01-01
Task assignment 3 of the design and validation of digital flight control systems suitable for fly-by-wire applications is studied. Task 3 is associated with the formal verification of embedded systems. In particular, results are presented that provide a methodological approach to microprocessor verification. A hierarchical decomposition strategy for specifying microprocessors is also presented. A theory of generic interpreters is presented that can be used to model microprocessor behavior. The generic interpreter theory abstracts away the details of instruction functionality, leaving a general model of what an interpreter does.
Dynamic mode decomposition for plasma diagnostics and validation.
Taylor, Roy; Kutz, J Nathan; Morgan, Kyle; Nelson, Brian A
2018-05-01
We demonstrate the application of the Dynamic Mode Decomposition (DMD) for the diagnostic analysis of the nonlinear dynamics of a magnetized plasma in resistive magnetohydrodynamics. The DMD method is an ideal spatio-temporal matrix decomposition that correlates spatial features of computational or experimental data while simultaneously associating the spatial activity with periodic temporal behavior. DMD can produce low-rank, reduced order surrogate models that can be used to reconstruct the state of the system with high fidelity. This allows for a reduction in the computational cost and, at the same time, accurate approximations of the problem, even if the data are sparsely sampled. We demonstrate the use of the method on both numerical and experimental data, showing that it is a successful mathematical architecture for characterizing the helicity injected torus with steady inductive (HIT-SI) magnetohydrodynamics. Importantly, the DMD produces interpretable, dominant mode structures, including a stationary mode consistent with our understanding of a HIT-SI spheromak accompanied by a pair of injector-driven modes. In combination, the 3-mode DMD model produces excellent dynamic reconstructions across the domain of analyzed data.
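For reference, the core of the exact DMD algorithm alluded to above can be written in a few lines: split the snapshots into X and X', form a reduced operator from a truncated SVD, and eigen-decompose it to obtain spatial modes and their frequencies. The synthetic data and the rank-3 truncation are illustrative assumptions standing in for the HIT-SI simulation or experimental measurements.

```python
import numpy as np

rng = np.random.default_rng(10)
n_space, n_time, dt = 200, 120, 0.01
t = np.arange(n_time) * dt
x = np.linspace(0, 1, n_space)
# synthetic data: a stationary structure plus an oscillating structure, plus noise
data = (np.outer(np.sin(np.pi * x), np.ones(n_time))
        + np.outer(np.sin(3 * np.pi * x), np.cos(2 * np.pi * 25 * t))
        + 0.01 * rng.standard_normal((n_space, n_time)))

X, Xp = data[:, :-1], data[:, 1:]                 # snapshot pairs shifted by one step
r = 3                                             # rank / number of DMD modes kept
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Ur, Sr, Vr = U[:, :r], np.diag(s[:r]), Vt[:r].T
Atilde = Ur.T @ Xp @ Vr @ np.linalg.inv(Sr)       # reduced linear operator
evals, W = np.linalg.eig(Atilde)
modes = Xp @ Vr @ np.linalg.inv(Sr) @ W           # exact DMD spatial modes
freqs = np.log(evals).imag / (2 * np.pi * dt)     # associated mode frequencies in Hz
print(np.round(freqs, 1))                         # ~0 Hz stationary mode and a +/-25 Hz pair
```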
Hierarchical layered and semantic-based image segmentation using ergodicity map
NASA Astrophysics Data System (ADS)
Yadegar, Jacob; Liu, Xiaoqing
2010-04-01
Image segmentation plays a foundational role in image understanding and computer vision. Although great strides have been made and progress achieved on automatic/semi-automatic image segmentation algorithms, designing a generic, robust, and efficient image segmentation algorithm is still challenging. Human vision is still far superior compared to computer vision, especially in interpreting semantic meanings/objects in images. We present a hierarchical/layered semantic image segmentation algorithm that can automatically and efficiently segment images into hierarchical layered/multi-scaled semantic regions/objects with contextual topological relationships. The proposed algorithm bridges the gap between high-level semantics and low-level visual features/cues (such as color, intensity, edge, etc.) through utilizing a layered/hierarchical ergodicity map, where ergodicity is computed based on a space filling fractal concept and used as a region dissimilarity measurement. The algorithm applies a highly scalable, efficient, and adaptive Peano-Cesaro triangulation/tiling technique to decompose the given image into a set of similar/homogeneous regions based on low-level visual cues in a top-down manner. The layered/hierarchical ergodicity map is built through a bottom-up region dissimilarity analysis. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides efficient neighbor retrieval mechanisms for contextual topological object/region relationship generation. Experiments have been conducted within the maritime image environment where the segmented layered semantic objects include the basic level objects (i.e. sky/land/water) and deeper level objects in the sky/land/water surfaces. Experimental results demonstrate the proposed algorithm has the capability to robustly and efficiently segment images into layered semantic objects/regions with contextual topological relationships.
Spatial occupancy models for large data sets
Johnson, Devin S.; Conn, Paul B.; Hooten, Mevin B.; Ray, Justina C.; Pond, Bruce A.
2013-01-01
Since its development, occupancy modeling has become a popular and useful tool for ecologists wishing to learn about the dynamics of species occurrence over time and space. Such models require presence–absence data to be collected at spatially indexed survey units. However, only recently have researchers recognized the need to correct for spatially induced overdispersion by explicitly accounting for spatial autocorrelation in occupancy probability. Previous efforts to incorporate such autocorrelation have largely focused on logit-normal formulations for occupancy, with spatial autocorrelation induced by a random effect within a hierarchical modeling framework. Although useful, computational time generally limits such an approach to relatively small data sets, and there are often problems with algorithm instability, yielding unsatisfactory results. Further, recent research has revealed a hidden form of multicollinearity in such applications, which may lead to parameter bias if not explicitly addressed. Combining several techniques, we present a unifying hierarchical spatial occupancy model specification that is particularly effective over large spatial extents. This approach employs a probit mixture framework for occupancy and can easily accommodate a reduced-dimensional spatial process to resolve issues with multicollinearity and spatial confounding while improving algorithm convergence. Using open-source software, we demonstrate this new model specification using a case study involving occupancy of caribou (Rangifer tarandus) over a set of 1080 survey units spanning a large contiguous region (108 000 km²) in northern Ontario, Canada. Overall, the combination of a more efficient specification and open-source software allows for a facile and stable implementation of spatial occupancy models for large data sets.
Yu, Qingzhao; Li, Bin; Scribner, Richard Allen
2009-06-30
Previous studies have suggested a link between alcohol outlets and assaults. In this paper, we explore the effects of alcohol availability on assaults at the census tract level over time. In addition, we use a natural experiment to check whether a sudden loss of alcohol outlets is associated with a steeper decrease in assault violence. Several features of the data raise statistical challenges: (1) the association between covariates (for example, the alcohol outlet density of each census tract) and the assault rates may be complex and therefore cannot be described using a linear model without covariate transformation, (2) the covariates may be highly correlated with each other, (3) there are a number of observations that have missing inputs, and (4) there is spatial association in assault rates at the census tract level. We propose a hierarchical additive model, where the nonlinear correlations and the complex interaction effects are modeled using multiple additive regression trees and the residual spatial association in the assault rates that cannot be explained by the model is smoothed using a conditional autoregressive (CAR) method. We develop a two-stage algorithm that connects the nonparametric trees with CAR to look for important covariates associated with the assault rates, while taking into account the spatial association of assault rates in adjacent census tracts. The proposed method is applied to the Los Angeles assault data (1990-1999). To assess the efficiency of the method, the results are compared with those obtained from a hierarchical linear model. Copyright (c) 2009 John Wiley & Sons, Ltd.
Masciullo, Cecilia; Dell'Anna, Rossana; Tonazzini, Ilaria; Böettger, Roman; Pepponi, Giancarlo; Cecchini, Marco
2017-10-12
Periodic ripples are a variety of anisotropic nanostructures that can be realized by ion beam irradiation on a wide range of solid surfaces. Only a few authors have investigated these surfaces for tuning the response of biological systems, probably because it is challenging to directly produce them in materials that well sustain long-term cellular cultures. Here, hierarchical rippled nanotopographies with a lateral periodicity of ∼300 nm are produced from a gold-irradiated germanium mold in polyethylene terephthalate (PET), a biocompatible polymer approved by the US Food and Drug Administration for clinical applications, by a novel three-step embossing process. The effects of nano-ripples on Schwann Cells (SCs) are studied in view of their possible use for nerve-repair applications. The data demonstrate that nano-ripples can enhance short-term SC adhesion and proliferation (3-24 h after seeding), drive their actin cytoskeleton spatial organization and sustain long-term cell growth. Notably, SCs are oriented perpendicularly with respect to the nanopattern lines. These results provide information about the possible use of hierarchical nano-rippled elements for nerve-regeneration protocols.
Thogmartin, W.E.; Knutson, M.G.
2007-01-01
Much of what is known about avian species-habitat relations has been derived from studies of birds at local scales. It is entirely unclear whether the relations observed at these scales translate to the larger landscape in a predictable linear fashion. We derived habitat models and mapped predicted abundances for three forest bird species of eastern North America using bird counts, environmental variables, and hierarchical models applied at three spatial scales. Our purpose was to understand habitat associations at multiple spatial scales and create predictive abundance maps for purposes of conservation planning at a landscape scale given the constraint that the variables used in this exercise were derived from local-level studies. Our models indicated a substantial influence of landscape context for all species, many of which were counter to reported associations at finer spatial extents. We found land cover composition provided the greatest contribution to the relative explained variance in counts for all three species; spatial structure was second in importance. No single spatial scale dominated any model, indicating that these species are responding to factors at multiple spatial scales. For purposes of conservation planning, areas of predicted high abundance should be investigated to evaluate the conservation potential of the landscape in their general vicinity. In addition, the models and spatial patterns of abundance among species suggest locations where conservation actions may benefit more than one species. © 2006 Springer Science+Business Media B.V.
Babcock, Chad; Finley, Andrew O.; Bradford, John B.; Kolka, Randall K.; Birdsey, Richard A.; Ryan, Michael G.
2015-01-01
Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both residual spatial dependence and non-stationarity of model covariates through the introduction of spatial random effects. We explored this objective using four forest inventory datasets that are part of the North American Carbon Program, each comprising point-referenced measures of above-ground forest biomass and discrete LiDAR. For each dataset, we considered at least five regression model specifications of varying complexity. Models were assessed based on goodness of fit criteria and predictive performance using a 10-fold cross-validation procedure. Results showed that the addition of spatial random effects to the regression model intercept improved fit and predictive performance in the presence of substantial residual spatial dependence. Additionally, in some cases, allowing either some or all regression slope parameters to vary spatially, via the addition of spatial random effects, further improved model fit and predictive performance. In other instances, models showed improved fit but decreased predictive performance—indicating over-fitting and underscoring the need for cross-validation to assess predictive ability. The proposed Bayesian modeling framework provided access to pixel-level posterior predictive distributions that were useful for uncertainty mapping, diagnosing spatial extrapolation issues, revealing missing model covariates, and discovering locally significant parameters.
Multidimensional Compressed Sensing MRI Using Tensor Decomposition-Based Sparsifying Transform
Yu, Yeyang; Jin, Jin; Liu, Feng; Crozier, Stuart
2014-01-01
Compressed Sensing (CS) has been applied in dynamic Magnetic Resonance Imaging (MRI) to accelerate the data acquisition without noticeably degrading the spatial-temporal resolution. A suitable sparsity basis is one of the key components to successful CS applications. Conventionally, a multidimensional dataset in dynamic MRI is treated as a series of two-dimensional matrices, and then various matrix/vector transforms are used to explore the image sparsity. Traditional methods typically sparsify the spatial and temporal information independently. In this work, we propose a novel concept of tensor sparsity for the application of CS in dynamic MRI, and present the Higher-order Singular Value Decomposition (HOSVD) as a practical example. Applications presented on three- and four-dimensional MRI data demonstrate that HOSVD simultaneously exploited the correlations within spatial and temporal dimensions. Validations based on cardiac datasets indicate that the proposed method achieved comparable reconstruction accuracy with the low-rank matrix recovery methods and outperformed the conventional sparse recovery methods. PMID:24901331
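As a rough illustration of the tensor-sparsity idea, the following NumPy sketch computes a truncated higher-order SVD of a 3-D array by unfolding it along each mode; it is not the authors' CS reconstruction algorithm, and the helper names (unfold, hosvd, mode_dot) and the truncation ranks are illustrative assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Truncated higher-order SVD: per-mode bases plus a small core tensor."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_dot(core, U.T, m)            # project onto each per-mode basis
    return core, factors

def reconstruct(core, factors):
    T = core
    for m, U in enumerate(factors):
        T = mode_dot(T, U, m)
    return T

# toy 3-D "dynamic image" (x, y, time): low-rank structure plus a little noise
rng = np.random.default_rng(0)
T = np.einsum('i,j,k->ijk', rng.normal(size=16), rng.normal(size=16), rng.normal(size=8))
T = T + 0.01 * rng.normal(size=T.shape)
core, factors = hosvd(T, ranks=(4, 4, 4))
print(np.linalg.norm(T - reconstruct(core, factors)) / np.linalg.norm(T))
```

Because the core is computed jointly over all modes, spatial and temporal correlations are exploited simultaneously, which is the property the abstract contrasts with transforms applied to each dimension independently.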
NASA Astrophysics Data System (ADS)
Oh, Kwang Jin; Kang, Ji Hoon; Myung, Hun Joo
2012-02-01
We have revised the general-purpose parallel molecular dynamics simulation program mm_par using object-oriented programming. We parallelized the revised version using a hierarchical scheme in order to utilize more processors for a given system size. The benchmark results are presented here.
New version program summary
Program title: mm_par2.0
Catalogue identifier: ADXP_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXP_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 2 390 858
No. of bytes in distributed program, including test data, etc.: 25 068 310
Distribution format: tar.gz
Programming language: C++
Computer: Any system operated by Linux or Unix
Operating system: Linux
Classification: 7.7
External routines: Wrappers are provided for FFTW [1], the Intel MKL library [2] FFT routine, the Numerical Recipes [3] FFT, random number generator, and eigenvalue solver routines, the SPRNG [4] random number generator, the Mersenne Twister [5] random number generator, and a space-filling curve routine.
Catalogue identifier of previous version: ADXP_v1_0
Journal reference of previous version: Comput. Phys. Comm. 174 (2006) 560
Does the new version supersede the previous version?: Yes
Nature of problem: Structural, thermodynamic, and dynamical properties of fluids and solids from microscopic to mesoscopic scales.
Solution method: Molecular dynamics simulation in the NVE, NVT, and NPT ensembles, Langevin dynamics simulation, and dissipative particle dynamics simulation.
Reasons for new version: First, object-oriented programming has been used, which is known to be open for extension and closed for modification, and to be better for maintenance. Second, version 1.0 was based on atom decomposition and domain decomposition schemes [6] for parallelization. However, atom decomposition is not popular due to its poor scalability, and while the domain decomposition scheme scales better, it is limited in utilizing the large number of cores on recent petascale computers by the requirement that the domain size be larger than the potential cutoff distance. To go beyond this limitation, a hierarchical parallelization scheme has been adopted in the new version and implemented using MPI [7] and OpenMP [8].
Summary of revisions: (1) Object-oriented programming has been used. (2) A hierarchical parallelization scheme has been adopted. (3) The SPME routine has been fully parallelized with a parallel 3D FFT using a volumetric decomposition scheme [9]. K.J.O. thanks Mr. Seung Min Lee for useful discussion on programming and debugging.
Running time: Running time depends on system size and methods used. For a test system containing a protein (PDB id: 5DHFR) with the CHARMM22 force field [10] and 7023 TIP3P [11] waters in a simulation box of dimension 62.23 Å × 62.23 Å × 62.23 Å, the benchmark results are given in Fig. 1. Here the potential cutoff distance was set to 12 Å and the switching function was applied from 10 Å for the force calculation in real space. For the SPME [12] calculation, the three reciprocal-space grid dimensions were set to 64 and the interpolation order was set to 4. The fast Fourier transform used the Intel MKL library. All bonds including hydrogen atoms were constrained using the SHAKE/RATTLE algorithms [13,14]. The code was compiled using Intel compiler version 11.1 and mvapich2 version 1.5. Fig. 2 shows performance gains from using the CUDA-enabled version [15] of mm_par for the 5DHFR simulation in water on an Intel Core2Quad 2.83 GHz and a GeForce GTX 580. Even though mm_par2.0 has not yet been ported to GPU, these performance data are useful for estimating mm_par2.0 performance on GPU.
Figure captions: Fig. 1, timing results for 1000 MD steps (1, 2, 4, and 8 denote the number of OpenMP threads); Fig. 2, timing results for 1000 MD steps from a double-precision simulation on CPU, a single-precision simulation on GPU, and a double-precision simulation on GPU.
Incidental Learning of Links during Navigation: The Role of Visuo-Spatial Capacity
ERIC Educational Resources Information Center
Rouet, Jean-Francois; Voros, Zsofia; Pleh, Csaba
2012-01-01
We investigated the impact of readers' visuo-spatial (VS) capacity on their incidental learning of page links during the exploration of simple hierarchical hypertextual documents. Forty-three university students were asked to explore a series of hypertexts for a limited period of time. Then the participants were asked to recall the layout and the…
Susan E. Gresens; Kenneth T. Belt; Jamie A. Tang; Daniel C. Gwinn; Patricia A. Banks
2007-01-01
In a longitudinal study of two streams whose lower reaches received unattenuated urban stormwater runoff, physical disturbance by stormflow was less important than the persistent unidentified chemical impacts of urban stormwater in limiting the distribution of Chironomidae, and Ephemeroptera, Trichoptera and Plecoptera (EPT). A hierarchical spatial analysis showed that...
NASA Astrophysics Data System (ADS)
Jung, Tobias
In 1922, Franz Selety, university-bred philosopher and self-educated physicist and cosmologist, developed a molecular hierarchical, spatially infinite, Newtonian cosmological model. His considerations were based on his earlier philosophical work published in 1914 as well as on the early correspondence with Einstein in 1917. Historically, the roots of hierarchical models can be seen in 18th century investigations by Thomas Wright of Durham, Immanuel Kant and Johann Heinrich Lambert. Those investigations were taken up by Edmund Fournier d'Albe and Carl Charlier at the beginning of the 20th century. Selety's cosmological model was criticized by Einstein mainly due to its spatial infiniteness which in Einstein's opinion seemed to contradict Mach's principle. This criticism sheds light on Einstein's conviction that with his first cosmological model, namely the static, spatially infinite, though unbounded Einstein Universe of 1917, the appropriate cosmological theory already had been established.
Bayesian Hierarchical Model Characterization of Model Error in Ocean Data Assimilation and Forecasts
2013-09-30
Proof-of-concept results are presented comparing a BHM surface wind ensemble with the increments in the surface momentum flux control vector in a four-dimensional... Surface momentum flux ensembles from summaries of BHM winds (Mediterranean) include the ocean current effect... Bayesian Hierarchical Model to provide surface momentum flux ensembles. (Figure 2 caption: domain of interest; squares indicate spatial locations where...)
Hagerty, Christina H; Anderson, Nicole P; Mundt, Christopher C
2017-03-01
Fungicide resistance can cause disease control failure in agricultural systems, and is particularly concerning with Zymoseptoria tritici, the causal agent of Septoria tritici blotch of wheat. In North America, the first quinone outside inhibitor resistance in Z. tritici was discovered in the Willamette Valley of Oregon in 2012, which prompted this hierarchical survey of commercial winter wheat fields to monitor azoxystrobin- and propiconazole-resistant Z. tritici. Surveys were conducted in June 2014, January 2015, May 2015, and January 2016. The survey was organized in a hierarchical scheme: regions within the Willamette Valley, fields within the region, transects within the field, and samples within the transect. Overall, frequency of azoxystrobin-resistant isolates increased from 63 to 93% from June 2014 to January 2016. Resistance to azoxystrobin increased over time even within fields receiving no strobilurin applications. Propiconazole sensitivity varied over the course of the study but, overall, did not significantly change. Sensitivity to both fungicides showed no regional aggregation within the Willamette Valley. Greater than 80% of spatial variation in fungicide sensitivity was at the smallest hierarchical scale (within the transect) of the survey for both fungicides, and the resistance phenotypes were randomly distributed within sampled fields. Results suggest a need for a better understanding of the dynamics of fungicide resistance at the landscape level.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, J.P., E-mail: chengjp@zju.edu.cn; Chen, X.; Ma, R.
Flower-like Co₃O₄ hierarchical microspheres composed of self-assembled porous nanoplates have been prepared by a two-step method without employing templates. The first step involves the synthesis of flower-like Co(OH)₂ microspheres by a solution route at low temperatures. The second step includes the calcination of the as-prepared Co(OH)₂ microspheres at 200 °C for 1 h, causing their decomposition to form porous Co₃O₄ microspheres without destruction of their original morphology. The samples were characterized by scanning electron microscopy, transmission electron microscopy, X-ray diffractometry and Fourier transform infrared spectroscopy. The influence of experimental factors, including solution temperature and surfactant, on the morphologies of the final products has been investigated. The magnetic properties of the Co₃O₄ microspheres were also investigated. Graphical Abstract: Flower-like Co₃O₄ microspheres are composed of self-assembled nanoplates and these nanoplates appear to be closely packed in the microspheres. The nanoplates consist of a large number of nanocrystallites less than 5 nm in size with a porous structure, in which the connection between nanocrystallites is random. Research Highlights: Flower-like Co₃O₄ hierarchical microspheres composed of self-assembled porous nanoplates have been prepared by a two-step method without employing templates. Layered Co(OH)₂ microspheres were prepared with an appropriate approach at low temperatures with a 1 h reaction. Calcination caused Co(OH)₂ decomposition to form porous Co₃O₄ microspheres without destruction of their original morphology.
Use of zerotree coding in a high-speed pyramid image multiresolution decomposition
NASA Astrophysics Data System (ADS)
Vega-Pineda, Javier; Cabrera, Sergio D.; Lucero, Aldo
1995-03-01
A Zerotree (ZT) coding scheme is applied as a post-processing stage to avoid transmitting zero data in the High-Speed Pyramid (HSP) image compression algorithm. This algorithm has features that increase the capability of the ZT coding to give very high compression rates. In this paper the impact of the ZT coding scheme is analyzed and quantified. The HSP algorithm creates a discrete-time multiresolution analysis based on a hierarchical decomposition technique that is a subsampling pyramid. The filters used to create the image residues and expansions can be related to wavelet representations. According to the pixel coordinates and the level in the pyramid, N² different wavelet basis functions of various sizes and rotations are linearly combined. The HSP algorithm is computationally efficient because of the simplicity of the required operations, and as a consequence, it can be very easily implemented with VLSI hardware. This is the HSP's principal advantage over other compression schemes. The ZT coding technique transforms the different quantized image residual levels created by the HSP algorithm into a bit stream. The use of ZT's compresses even further the already compressed image taking advantage of parent-child relationships (trees) between the pixels of the residue images at different levels of the pyramid. Zerotree coding uses the links between zeros along the hierarchical structure of the pyramid to avoid transmission of those that form branches of all zeros. Compression performance and algorithm complexity of the combined HSP-ZT method are compared with those of the JPEG standard technique.
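A minimal sketch of the zerotree idea itself (not the HSP-ZT coder): coefficients are assumed to be organized as a quadtree in which the parent at level l, position (i, j) has four children at level l + 1, and a coefficient whose entire descendant branch is insignificant is flagged as a zerotree root so nothing below it needs to be transmitted. The three-symbol labeling and the toy pyramid are illustrative simplifications.

```python
import numpy as np

def children(level, i, j, n_levels):
    """Quadtree children of coefficient (i, j) one level finer, if any."""
    if level + 1 >= n_levels:
        return []
    return [(level + 1, 2 * i + di, 2 * j + dj) for di in (0, 1) for dj in (0, 1)]

def is_zerotree_root(pyramid, level, i, j, thr):
    """True if this coefficient and every descendant are insignificant w.r.t. thr."""
    if abs(pyramid[level][i, j]) >= thr:
        return False
    return all(is_zerotree_root(pyramid, *c, thr)
               for c in children(level, i, j, len(pyramid)))

def symbol_map(pyramid, thr):
    """Label the coarsest level: 'S' significant, 'ZTR' zerotree root, 'IZ' isolated zero."""
    coarse = pyramid[0]
    labels = np.empty(coarse.shape, dtype=object)
    for i in range(coarse.shape[0]):
        for j in range(coarse.shape[1]):
            if abs(coarse[i, j]) >= thr:
                labels[i, j] = 'S'
            elif is_zerotree_root(pyramid, 0, i, j, thr):
                labels[i, j] = 'ZTR'   # whole branch of zeros: transmit nothing below
            else:
                labels[i, j] = 'IZ'
    return labels

# toy 3-level pyramid of residue coefficients (2x2 coarse, then 4x4, then 8x8)
rng = np.random.default_rng(1)
pyr = [rng.normal(scale=s, size=(2 * 2**k, 2 * 2**k)) for k, s in enumerate((8.0, 2.0, 0.5))]
print(symbol_map(pyr, thr=4.0))
```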
NASA Astrophysics Data System (ADS)
Spencer, Todd J.; Chen, Yu-Chun; Saha, Rajarshi; Kohl, Paul A.
2011-06-01
Incorporation of copper ions into poly(propylene carbonate) (PPC) films cast from γ-butyrolactone (GBL), trichloroethylene (TCE) or methylene chloride (MeCl) solutions containing a photo-acid generator is shown to stabilize the PPC from thermal decomposition. Copper ions were introduced into the PPC mixtures by bringing the polymer mixture into contact with copper metal. The metal was oxidized and dissolved into the PPC mixture. The dissolved copper interferes with the decomposition mechanism of PPC, raising its decomposition temperature. Thermogravimetric analysis shows that copper ions make PPC more stable by up to 50°C. Spectroscopic analysis indicates that copper ions may stabilize terminal carboxylic acid groups, inhibiting PPC decomposition. The change in thermal stability based on PPC exposure to patterned copper substrates was used to provide a self-aligned patterning method for PPC on copper traces without the need for an additional photopatterning registration step. Thermal decomposition of PPC is then used to create air isolation regions around the copper traces. The spatial resolution of the self-patterning PPC process is limited by the lateral diffusion of the copper ions within the PPC. The concentration profiles of copper within the PPC, patterning resolution, and temperature effects on the PPC decomposition have been studied.
Best Merge Region Growing with Integrated Probabilistic Classification for Hyperspectral Imagery
NASA Technical Reports Server (NTRS)
Tarabalka, Yuliya; Tilton, James C.
2011-01-01
A new method for spectral-spatial classification of hyperspectral images is proposed. The method is based on the integration of probabilistic classification within the hierarchical best merge region growing algorithm. For this purpose, preliminary probabilistic support vector machines classification is performed. Then, a hierarchical step-wise optimization algorithm is applied, by iteratively merging regions with the smallest Dissimilarity Criterion (DC). The main novelty of this method consists in defining a DC between regions as a function of region statistical and geometrical features along with classification probabilities. Experimental results are presented on a 200-band AVIRIS image of Northwestern Indiana's vegetation area and compared with those obtained by recently proposed spectral-spatial classification techniques. The proposed method improves classification accuracies when compared to other classification approaches.
Unifying models of dialect spread and extinction using surface tension dynamics
2018-01-01
We provide a unified mathematical explanation of two classical forms of spatial linguistic spread. The wave model describes the radiation of linguistic change outwards from a central focus. Changes can also jump between population centres in a process known as hierarchical diffusion. It has recently been proposed that the spatial evolution of dialects can be understood using surface tension at linguistic boundaries. Here we show that the inclusion of long-range interactions in the surface tension model generates both wave-like spread, and hierarchical diffusion, and that it is surface tension that is the dominant effect in deciding the stable distribution of dialect patterns. We generalize the model to allow population mixing which can induce shrinkage of linguistic domains, or destroy dialect regions from within. PMID:29410847
Modeling trends from North American Breeding Bird Survey data: a spatially explicit approach
Bled, Florent; Sauer, John R.; Pardieck, Keith L.; Doherty, Paul; Royle, J. Andy
2013-01-01
Population trends, defined as interval-specific proportional changes in population size, are often used to help identify species of conservation interest. Efficient modeling of such trends depends on the consideration of the correlation of population changes with key spatial and environmental covariates. This can provide insights into causal mechanisms and allow spatially explicit summaries at scales that are of interest to management agencies. We expand the hierarchical modeling framework used in the North American Breeding Bird Survey (BBS) by developing a spatially explicit model of temporal trend using a conditional autoregressive (CAR) model. By adopting a formal spatial model for abundance, we produce spatially explicit abundance and trend estimates. Analyses based on large-scale geographic strata such as Bird Conservation Regions (BCR) can suffer from basic imbalances in spatial sampling. Our approach addresses this issue by providing an explicit weighting based on the fundamental sample allocation unit of the BBS. We applied the spatial model to three species from the BBS. Species have been chosen based upon their well-known population change patterns, which allows us to evaluate the quality of our model and the biological meaning of our estimates. We also compare our results with the ones obtained for BCRs using a nonspatial hierarchical model (Sauer and Link 2011). Globally, estimates for mean trends are consistent between the two approaches but spatial estimates provide much more precise trend estimates in regions on the edges of species ranges that were poorly estimated in non-spatial analyses. Incorporating a spatial component in the analysis not only allows us to obtain relevant and biologically meaningful estimates for population trends, but also enables us to provide a flexible framework in order to obtain trend estimates for any area.
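For reference, a common proper conditional autoregressive (CAR) specification for spatially indexed random effects is sketched below; the exact parameterization used in the BBS analysis may differ, and the symbols are generic rather than the authors' notation.

```latex
% eta = (eta_1, ..., eta_S): spatial random effects over S strata/sample-allocation units
% W: binary neighborhood matrix with row sums w_{i+}, D = diag(w_{1+}, ..., w_{S+})
% rho: spatial dependence parameter, tau^2: conditional variance
\boldsymbol{\eta} \sim \mathrm{N}\!\left( \mathbf{0},\; \tau^{2}\,(D - \rho W)^{-1} \right),
\qquad
\eta_i \mid \boldsymbol{\eta}_{-i} \sim
\mathrm{N}\!\left( \rho \sum_{j \sim i} \tfrac{w_{ij}}{w_{i+}}\, \eta_j,\; \tfrac{\tau^{2}}{w_{i+}} \right).
```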
Graichen, Uwe; Eichardt, Roland; Fiedler, Patrique; Strohmeier, Daniel; Zanow, Frank; Haueisen, Jens
2015-01-01
Important requirements for the analysis of multichannel EEG data are efficient techniques for signal enhancement, signal decomposition, feature extraction, and dimensionality reduction. We propose a new approach for spatial harmonic analysis (SPHARA) that extends the classical spatial Fourier analysis to EEG sensors positioned non-uniformly on the surface of the head. The proposed method is based on the eigenanalysis of the discrete Laplace-Beltrami operator defined on a triangular mesh. We present several ways to discretize the continuous Laplace-Beltrami operator and compare the properties of the resulting basis functions computed using these discretization methods. We apply SPHARA to somatosensory evoked potential data from eleven volunteers and demonstrate the ability of the method for spatial data decomposition, dimensionality reduction and noise suppression. When employing SPHARA for dimensionality reduction, a significantly more compact representation can be achieved using the FEM approach, compared to the other discretization methods. Using FEM, to recover 95% and 99% of the total energy of the EEG data, on average only 35% and 58% of the coefficients are necessary. The capability of SPHARA for noise suppression is shown using artificial data. We conclude that SPHARA can be used for spatial harmonic analysis of multi-sensor data at arbitrary positions and can be utilized in a variety of other applications.
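As a simplified stand-in for the Laplace-Beltrami discretizations compared in the paper, the sketch below builds an unweighted graph Laplacian from the edges of a triangular sensor mesh and uses its eigenvectors as a spatial harmonic basis; the FEM and cotangent-weighted variants the authors favor are not reproduced, and the function names and toy mesh are illustrative.

```python
import numpy as np

def graph_laplacian_basis(n_vertices, triangles):
    """Eigenbasis of the combinatorial graph Laplacian built from mesh edges."""
    W = np.zeros((n_vertices, n_vertices))
    for a, b, c in triangles:                    # connect vertices sharing a triangle edge
        for i, j in ((a, b), (b, c), (a, c)):
            W[i, j] = W[j, i] = 1.0
    L = np.diag(W.sum(axis=1)) - W
    eigvals, eigvecs = np.linalg.eigh(L)         # ascending: low spatial frequency first
    return eigvals, eigvecs

def low_pass(signal, basis, k):
    """Dimensionality reduction: keep only the k lowest spatial-frequency coefficients."""
    coeffs = basis.T @ signal
    return basis[:, :k] @ coeffs[:k]

# toy mesh: a strip of 6 "sensors" triangulated into 4 triangles
tris = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (3, 4, 5)]
vals, harmonics = graph_laplacian_basis(6, tris)
x = np.array([1.0, 1.2, 0.9, 1.1, 1.0, 0.8])    # e.g. one EEG sample across sensors
print(low_pass(x, harmonics, k=3))
```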
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Ruixue; Chen, Kezheng, E-mail: dxb@sdu.edu.cn; Liao, Zhongmiao
Highlights: Hydroxyapatite hierarchical microstructures have been synthesized by a facile method. The morphology and size of the building units of the 3D structures can be controlled. The hydroxyapatite with 3D structure is morphologically and structurally stable up to 800 °C. Abstract: Hydroxyapatite (HAp) hierarchical microstructures with novel 3D morphology were prepared through a template- and surfactant-free hydrothermal homogeneous precipitation method. Field emission scanning electron microscopy (FESEM), high-resolution transmission electron microscopy (HRTEM), and X-ray diffraction (XRD) were used to characterize the morphology and composition of the synthesized products. Interestingly, the obtained HAp with 3D structure is composed of one-dimensional (1D) nanorods or two-dimensional (2D) nanoribbons, and the length and morphology of these building blocks can be controlled through controlling the pH of the reaction. The building blocks are single crystalline and have different preferential orientation growth under different pH conditions. At low pH values, an octacalcium phosphate (OCP) phase formed first and then transformed into the HAp phase due to the increased pH value caused by the decomposition of urea. The investigation of the thermal stability reveals that the prepared HAp hierarchical microstructures are morphologically and structurally stable up to 800 °C.
Quantifying loopy network architectures.
Katifori, Eleni; Magnasco, Marcelo O
2012-01-01
Biology presents many examples of planar distribution and structural networks having dense sets of closed loops. An archetype of this form of network organization is the vasculature of dicotyledonous leaves, which showcases a hierarchically-nested architecture containing closed loops at many different levels. Although a number of approaches have been proposed to measure aspects of the structure of such networks, a robust metric to quantify their hierarchical organization is still lacking. We present an algorithmic framework, the hierarchical loop decomposition, that allows mapping loopy networks to binary trees, preserving in the connectivity of the trees the architecture of the original graph. We apply this framework to investigate computer generated graphs, such as artificial models and optimal distribution networks, as well as natural graphs extracted from digitized images of dicotyledonous leaves and vasculature of rat cerebral neocortex. We calculate various metrics based on the asymmetry, the cumulative size distribution and the Strahler bifurcation ratios of the corresponding trees and discuss the relationship of these quantities to the architectural organization of the original graphs. This algorithmic framework decouples the geometric information (exact location of edges and nodes) from the metric topology (connectivity and edge weight) and it ultimately allows us to perform a quantitative statistical comparison between predictions of theoretical models and naturally occurring loopy graphs.
Hierarchical clustering of EMD based interest points for road sign detection
NASA Astrophysics Data System (ADS)
Khan, Jesmin; Bhuiyan, Sharif; Adhami, Reza
2014-04-01
This paper presents an automatic road traffic signs detection and recognition system based on hierarchical clustering of interest points and joint transform correlation. The proposed algorithm consists of the three following stages: interest points detection, clustering of those points, and similarity search. At the first stage, good discriminative, rotation and scale invariant interest points are selected from the image edges based on the 1-D empirical mode decomposition (EMD). We propose a two-step unsupervised clustering technique, which is adaptive and based on two criteria. In this context, the detected points are initially clustered based on the stable local features related to the brightness and color, which are extracted using a Gabor filter. Then points belonging to each partition are reclustered depending on the dispersion of the points in the initial cluster using a position feature. This two-step hierarchical clustering yields the possible candidate road signs or the regions of interest (ROIs). Finally, a fringe-adjusted joint transform correlation (JTC) technique is used for matching the unknown signs with the existing known reference road signs stored in the database. The presented framework provides a novel way to detect a road sign from natural scenes and the results demonstrate the efficacy of the proposed technique, which yields a very low false hit rate.
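A minimal sketch of the two-step grouping, assuming scikit-learn is available: points are first clustered on appearance features and each group is then re-clustered on position, so every final cluster is a candidate ROI. The 1-D EMD interest-point detection, the Gabor feature extraction, the adaptive criteria, and the joint transform correlation stage are omitted, and all parameter values here are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans, DBSCAN

def two_step_clusters(positions, appearance, n_appearance_clusters=3, eps=20.0, min_samples=3):
    """Step 1: group interest points by appearance (brightness/color-like) features.
    Step 2: split each group spatially; each surviving cluster is a candidate ROI."""
    step1 = KMeans(n_clusters=n_appearance_clusters, n_init=10,
                   random_state=0).fit_predict(appearance)
    rois = []
    for label in np.unique(step1):
        members = np.where(step1 == label)[0]
        spatial = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(positions[members])
        for s in np.unique(spatial):
            if s == -1:                          # DBSCAN noise points are discarded
                continue
            rois.append(members[spatial == s])
    return rois                                  # list of index arrays, one per candidate ROI

# toy data: 2-D point positions and 4-D appearance descriptors for three sign-like blobs
rng = np.random.default_rng(2)
pts = np.vstack([rng.normal(c, 5.0, size=(30, 2)) for c in ((50, 50), (200, 60), (120, 180))])
feats = np.vstack([rng.normal(m, 0.1, size=(30, 4)) for m in (0.2, 0.5, 0.8)])
print([len(r) for r in two_step_clusters(pts, feats)])
```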
Reducing Design Cycle Time and Cost Through Process Resequencing
NASA Technical Reports Server (NTRS)
Rogers, James L.
2004-01-01
In today's competitive environment, companies are under enormous pressure to reduce the time and cost of their design cycle. One method for reducing both time and cost is to develop an understanding of the flow of the design processes and the effects of the iterative subcycles that are found in complex design projects. Once these aspects are understood, the design manager can make decisions that take advantage of decomposition, concurrent engineering, and parallel processing techniques to reduce the total time and the total cost of the design cycle. One software tool that can aid in this decision-making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). The DeMAID software minimizes the feedback couplings that create iterative subcycles, groups processes into iterative subcycles, and decomposes the subcycles into a hierarchical structure. The real benefits of producing the best design in the least time and at a minimum cost are obtained from sequencing the processes in the subcycles.
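As a toy illustration of the underlying idea (not DeMAID's algorithm), the sketch below reorders a small design structure matrix by exhaustive search so that as few couplings as possible become feedbacks; the example matrix and the convention that dsm[i][j] = 1 means process i consumes output of process j are illustrative assumptions.

```python
from itertools import permutations

import numpy as np

def feedback_count(dsm, order):
    """Count couplings that become feedbacks (producer scheduled after its consumer)."""
    pos = {task: k for k, task in enumerate(order)}
    n = len(dsm)
    return sum(1 for i in range(n) for j in range(n) if dsm[i][j] and pos[j] > pos[i])

def best_sequence(dsm):
    """Exhaustive search (fine for small problems) for the order with fewest feedbacks."""
    return min(permutations(range(len(dsm))), key=lambda order: feedback_count(dsm, order))

# toy 5-process design cycle: dsm[i][j] = 1 if process i needs output from process j
dsm = np.array([[0, 0, 1, 0, 0],
                [1, 0, 0, 0, 0],
                [0, 1, 0, 0, 1],
                [1, 0, 1, 0, 0],
                [0, 0, 0, 1, 0]])
order = best_sequence(dsm)
print(order, feedback_count(dsm, order))   # remaining feedbacks mark iterative subcycles
```

Any couplings that cannot be eliminated by reordering identify the iterative subcycles that, per the abstract, are then grouped and decomposed into a hierarchical structure.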
Sánchez, Óscar J; Cardona, Carlos A
2012-01-01
In this work, the hierarchical decomposition methodology was used to conceptually design the production of fuel ethanol from sugarcane. The decomposition of the process into six levels of analysis was carried out. Several options of technological configurations were assessed in each level considering economic and environmental criteria. The most promising alternatives were chosen rejecting the ones with a least favorable performance. Aspen Plus was employed for simulation of each one of the technological configurations studied. Aspen Icarus was used for economic evaluation of each configuration, and WAR algorithm was utilized for calculation of the environmental criterion. The results obtained showed that the most suitable synthesized flowsheet involves the continuous cultivation of Zymomonas mobilis with cane juice as substrate and including cell recycling and the ethanol dehydration by molecular sieves. The proposed strategy demonstrated to be a powerful tool for conceptual design of biotechnological processes considering both techno-economic and environmental indicators. Copyright © 2011 Elsevier Ltd. All rights reserved.
Chen, Jing; Tang, Yuan Yan; Chen, C L Philip; Fang, Bin; Lin, Yuewei; Shang, Zhaowei
2014-12-01
Protein subcellular location prediction aims to predict the location where a protein resides within a cell using computational methods. Considering the main limitations of the existing methods, we propose a hierarchical multi-label learning model FHML for both single-location proteins and multi-location proteins. The latent concepts are extracted through feature space decomposition and label space decomposition under the nonnegative data factorization framework. The extracted latent concepts are used as the codebook to indirectly connect the protein features to their annotations. We construct dual fuzzy hypergraphs to capture the intrinsic high-order relations embedded in not only feature space, but also label space. Finally, the subcellular location annotation information is propagated from the labeled proteins to the unlabeled proteins by performing dual fuzzy hypergraph Laplacian regularization. The experimental results on the six protein benchmark datasets demonstrate the superiority of our proposed method by comparing it with the state-of-the-art methods, and illustrate the benefit of exploiting both feature correlations and label correlations.
Microbial ecological succession during municipal solid waste decomposition.
Staley, Bryan F; de Los Reyes, Francis L; Wang, Ling; Barlaz, Morton A
2018-04-28
The decomposition of landfilled refuse proceeds through distinct phases, each defined by varying environmental factors such as volatile fatty acid concentration, pH, and substrate quality. The succession of microbial communities in response to these changing conditions was monitored in a laboratory-scale simulated landfill to minimize measurement difficulties experienced at field scale. 16S rRNA gene sequences retrieved at separate stages of decomposition showed significant succession in both Bacteria and methanogenic Archaea. A majority of Bacteria sequences in landfilled refuse belong to members of the phylum Firmicutes, while Proteobacteria levels fluctuated and Bacteroidetes levels increased as decomposition proceeded. Roughly 44% of archaeal sequences retrieved under conditions of low pH and high acetate were strictly hydrogenotrophic (Methanomicrobiales, Methanobacteriales). Methanosarcina was present at all stages of decomposition. Correspondence analysis showed bacterial population shifts were attributed to carboxylic acid concentration and solids hydrolysis, while archaeal populations were affected to a higher degree by pH. T-RFLP analysis showed specific taxonomic groups responded differently and exhibited unique responses during decomposition, suggesting that species composition and abundance within Bacteria and Archaea are highly dynamic. This study shows landfill microbial demographics are highly variable across both spatial and temporal transects.
Cai, Andong; Liang, Guopeng; Zhang, Xubo; Zhang, Wenju; Li, Ling; Rui, Yichao; Xu, Minggang; Luo, Yiqi
2018-05-01
Understanding drivers of straw decomposition is essential for adopting appropriate management practices to improve soil fertility and promote carbon (C) sequestration in agricultural systems. However, predicting straw decomposition and characteristics is difficult because of the interactions between many factors related to straw properties, soil properties, and climate, especially under future climate change conditions. This study investigated the driving factors of straw decomposition for six types of crop straw, including wheat, maize, rice, soybean, rape, and other straw, by synthesizing 1642 paired data from 98 published papers at spatial and temporal scales across China. All data were derived from field experiments using litter bags over twelve years. Overall, despite large differences in climatic and soil properties, the remaining straw carbon (C, %) could be accurately represented by a three-exponent equation with thermal time (accumulative temperature). The lignin/nitrogen and lignin/phosphorus ratios of straw can be used to define the size of the labile, intermediate, and recalcitrant C pools. The remaining C for an individual type of straw in the mild-temperature zone was higher than that in the warm-temperature and subtropical zones within one calendar year. The remaining straw C after one thermal year was 40.28%, 37.97%, 37.77%, 34.71%, 30.87%, and 27.99% for rice, soybean, rape, wheat, maize, and other straw, respectively. Soil available nitrogen and phosphorus influenced the remaining straw C at different decomposition stages. For one calendar year, the total amount of remaining straw C was estimated to be 29.41 Tg, and a future temperature increase of 2 °C could reduce the remaining straw C by 1.78 Tg. These findings confirm that long-term straw decomposition is mainly driven by temperature and straw quality and can be quantitatively predicted by thermal time with the three-exponent equation for a wide array of straw types at spatial and temporal scales in the agro-ecosystems of China. Copyright © 2018 Elsevier B.V. All rights reserved.
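The abstract does not state the equation explicitly, so the following is a hedged, generic three-pool form of exponential decay in thermal time, with pool fractions that the lignin/N and lignin/P ratios would determine:

```latex
% C_rem(tau): remaining straw C (as a fraction) after accumulated thermal time tau (degree-days)
% f_L, f_I, f_R: labile, intermediate and recalcitrant pool fractions; k_L, k_I, k_R: decay rates
C_{\mathrm{rem}}(\tau) = f_{L}\, e^{-k_{L}\tau} + f_{I}\, e^{-k_{I}\tau} + f_{R}\, e^{-k_{R}\tau},
\qquad f_{L} + f_{I} + f_{R} = 1 .
```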
A Multi-modal, Discriminative and Spatially Invariant CNN for RGB-D Object Labeling.
Asif, Umar; Bennamoun, Mohammed; Sohel, Ferdous
2017-08-30
While deep convolutional neural networks have shown remarkable success in image classification, the problems of inter-class similarities, intra-class variances, the effective combination of multimodal data, and the spatial variability in images of objects remain major challenges. To address these problems, this paper proposes a novel framework to learn a discriminative and spatially invariant classification model for object and indoor scene recognition using multimodal RGB-D imagery. This is achieved through three postulates: 1) spatial invariance - this is achieved by combining a spatial transformer network with a deep convolutional neural network to learn features which are invariant to spatial translations, rotations, and scale changes, 2) high discriminative capability - this is achieved by introducing Fisher encoding within the CNN architecture to learn features which have small inter-class similarities and large intra-class compactness, and 3) multimodal hierarchical fusion - this is achieved through the regularization of semantic segmentation to a multi-modal CNN architecture, where class probabilities are estimated at different hierarchical levels (i.e., image- and pixel-levels), and fused into a Conditional Random Field (CRF)-based inference hypothesis, the optimization of which produces consistent class labels in RGB-D images. Extensive experimental evaluations on RGB-D object and scene datasets, and live video streams (acquired from Kinect), show that our framework produces superior object and scene classification results compared to the state-of-the-art methods.
On the decentralized control of large-scale systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chong, C.
1973-01-01
The decentralized control of stochastic large-scale systems was considered. Particular emphasis was given to control strategies which utilize decentralized information and can be computed in a decentralized manner. The deterministic constrained optimization problem is generalized to the stochastic case when each decision variable depends on different information and the constraint is only required to be satisfied on the average. For problems with a particular structure, a hierarchical decomposition is obtained. For the stochastic control of dynamic systems with different information sets, a new kind of optimality is proposed which exploits the coupled nature of the dynamic system. The subsystems are assumed to be uncoupled and then certain constraints are required to be satisfied, either in an off-line or an on-line fashion. For off-line coordination, a hierarchical approach to solving the problem is obtained. The lower level problems are all uncoupled. For on-line coordination, a distinction is made between open-loop feedback optimal coordination and closed-loop optimal coordination.
Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fattebert, J.-L.; Richards, D.F.; Glosli, J.N.
2012-12-01
We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing Voronoi sites associated with each processor/sub-domain along steepest descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440·10⁶ particles on 65,536 MPI tasks.
When mechanism matters: Bayesian forecasting using models of ecological diffusion
Hefley, Trevor J.; Hooten, Mevin B.; Russell, Robin E.; Walsh, Daniel P.; Powell, James A.
2017-01-01
Ecological diffusion is a theory that can be used to understand and forecast spatio-temporal processes such as dispersal, invasion, and the spread of disease. Hierarchical Bayesian modelling provides a framework to make statistical inference and probabilistic forecasts, using mechanistic ecological models. To illustrate, we show how hierarchical Bayesian models of ecological diffusion can be implemented for large data sets that are distributed densely across space and time. The hierarchical Bayesian approach is used to understand and forecast the growth and geographic spread in the prevalence of chronic wasting disease in white-tailed deer (Odocoileus virginianus). We compare statistical inference and forecasts from our hierarchical Bayesian model to phenomenological regression-based methods that are commonly used to analyse spatial occurrence data. The mechanistic statistical model based on ecological diffusion led to important ecological insights, obviated a commonly ignored type of collinearity, and was the most accurate method for forecasting.
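For orientation, the ecological-diffusion process model commonly used in this setting can be written as below, with u the prevalence (or population intensity), mu a spatially varying motility, and alpha a growth rate; the authors' specific data model and priors are not reproduced here.

```latex
% Ecological diffusion with growth, serving as the process layer of the hierarchy
\frac{\partial u(\mathbf{s},t)}{\partial t}
  = \left( \frac{\partial^{2}}{\partial s_{1}^{2}} + \frac{\partial^{2}}{\partial s_{2}^{2}} \right)
    \bigl[ \mu(\mathbf{s})\, u(\mathbf{s},t) \bigr]
  \; + \; \alpha(\mathbf{s})\, u(\mathbf{s},t)
```

Placing the diffusivity inside the second derivatives is what distinguishes ecological diffusion from plain Fickian diffusion: individuals accumulate where motility is low, which is the mechanistic behavior the hierarchical Bayesian model exploits.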
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Wei; Wang, Jin, E-mail: jin.wang.1@stonybrook.edu; State Key Laboratory of Electroanalytical Chemistry, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, 130022 Changchun, China and College of Physics, Jilin University, 130021 Changchun
We have established a general non-equilibrium thermodynamic formalism consistently applicable to both spatially homogeneous and, more importantly, spatially inhomogeneous systems, governed by the Langevin and Fokker-Planck stochastic dynamics with multiple state transition mechanisms, using the potential-flux landscape framework as a bridge connecting stochastic dynamics with non-equilibrium thermodynamics. A set of non-equilibrium thermodynamic equations, quantifying the relations of the non-equilibrium entropy, entropy flow, entropy production, and other thermodynamic quantities, together with their specific expressions, is constructed from a set of dynamical decomposition equations associated with the potential-flux landscape framework. The flux velocity plays a pivotal role on both the dynamic and thermodynamic levels. On the dynamic level, it represents a dynamic force breaking detailed balance, entailing the dynamical decomposition equations. On the thermodynamic level, it represents a thermodynamic force generating entropy production, manifested in the non-equilibrium thermodynamic equations. The Ornstein-Uhlenbeck process and more specific examples, the spatial stochastic neuronal model, in particular, are studied to test and illustrate the general theory. This theoretical framework is particularly suitable to study the non-equilibrium (thermo)dynamics of spatially inhomogeneous systems abundant in nature. This paper is the second of a series.
A General-purpose Framework for Parallel Processing of Large-scale LiDAR Data
NASA Astrophysics Data System (ADS)
Li, Z.; Hodgson, M.; Li, W.
2016-12-01
Light detection and ranging (LiDAR) technologies have proven efficient for quickly obtaining very detailed Earth surface data over a large spatial extent. Such data are important for scientific discovery in the Earth and ecological sciences as well as for natural-disaster and environmental applications. However, handling LiDAR data poses grand geoprocessing challenges due to both data intensity and computational intensity. Previous studies achieved notable success in parallel processing of LiDAR data to address these challenges. However, these studies either relied on high performance computers and specialized hardware (GPUs) or focused mostly on finding customized solutions for specific algorithms. We developed a general-purpose scalable framework coupled with a sophisticated data decomposition and parallelization strategy to efficiently handle big LiDAR data. Specifically, 1) a tile-based spatial index is proposed to manage big LiDAR data in the scalable and fault-tolerant Hadoop distributed file system, 2) two spatial decomposition techniques are developed to enable efficient parallelization of different types of LiDAR processing tasks, and 3) by coupling existing LiDAR processing tools with Hadoop, this framework is able to conduct a variety of LiDAR data processing tasks in parallel in a highly scalable distributed computing environment. The performance and scalability of the framework are evaluated with a series of experiments conducted on a real LiDAR dataset using a proof-of-concept prototype system. The results show that the proposed framework 1) is able to handle massive LiDAR data more efficiently than standalone tools; and 2) provides almost linear scalability in terms of either increased workload (data volume) or increased computing nodes with both spatial decomposition strategies. We believe that the proposed framework provides valuable references for developing a collaborative cyberinfrastructure for processing big earth science data in a highly scalable environment.
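A minimal sketch of a tile-based spatial key of the kind such a framework might use to group LiDAR returns for parallel processing; the tile size, key format, and function names are illustrative assumptions rather than the framework's actual API.

```python
def tile_key(x, y, tile_size=100.0):
    """Map a point to the (column, row) key of the fixed-size tile containing it."""
    return int(x // tile_size), int(y // tile_size)

def partition_points(points, tile_size=100.0):
    """Group (x, y, z) LiDAR returns by tile so each tile can be processed independently."""
    tiles = {}
    for x, y, z in points:
        tiles.setdefault(tile_key(x, y, tile_size), []).append((x, y, z))
    return tiles

# toy point cloud: each tile's list could become one split/record in a distributed store
pts = [(12.3, 40.1, 5.0), (150.0, 40.2, 6.1), (12.9, 41.0, 5.2), (260.4, 310.7, 2.4)]
for key, members in sorted(partition_points(pts).items()):
    print(key, len(members))
```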
A general framework of noise suppression in material decomposition for dual-energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrongolo, Michael; Dong, Xue; Zhu, Lei, E-mail: leizhu@gatech.edu
Purpose: As a general problem of dual-energy CT (DECT), noise amplification in material decomposition severely reduces the signal-to-noise ratio on the decomposed images compared to that on the original CT images. In this work, the authors propose a general framework of noise suppression in material decomposition for DECT. The method is based on an iterative algorithm recently developed in their group for image-domain decomposition of DECT, with an extension to include nonlinear decomposition models. The generalized framework of iterative DECT decomposition enables beam-hardening correction with simultaneous noise suppression, which improves the clinical benefits of DECT. Methods: The authors propose to suppress noise on the decomposed images of DECT using convex optimization, which is formulated in the form of least-squares estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance–covariance matrix of the decomposed images as the penalty weight in the least-squares term. Analytical formulas are derived to compute the variance–covariance matrix for decomposed images with general-form numerical or analytical decomposition. As a demonstration, the authors implement the proposed algorithm on phantom data using an empirical polynomial function of decomposition measured on a calibration scan. The polynomial coefficients are determined from the projection data acquired on a wedge phantom, and the signal decomposition is performed in the projection domain. Results: On the Catphan® 600 phantom, the proposed noise suppression method reduces the average noise standard deviation of basis material images by one to two orders of magnitude, with a superior performance on spatial resolution as shown in comparisons of line-pair images and modulation transfer function measurements. On the synthesized monoenergetic CT images, the noise standard deviation is reduced by a factor of 2–3. By using nonlinear decomposition on projections, the authors' method effectively suppresses the streaking artifacts of beam hardening and obtains more uniform images than their previous approach based on a linear model. Similar performance of noise suppression is observed in the results of an anthropomorphic head phantom and a pediatric chest phantom generated by the proposed method. With beam-hardening correction enabled by their approach, the image spatial nonuniformity on the head phantom is reduced from around 10% on the original CT images to 4.9% on the synthesized monoenergetic CT image. On the pediatric chest phantom, their method suppresses image noise standard deviation by a factor of around 7.5, and compared with linear decomposition, it reduces the estimation error of electron densities from 33.3% to 8.6%. Conclusions: The authors propose a general framework of noise suppression in material decomposition for DECT. Phantom studies have shown the proposed method improves the image uniformity and the accuracy of electron density measurements by effective beam-hardening correction and reduces noise level without noticeable resolution loss.
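In generic form, the estimator described in the abstract can be written as a penalized, covariance-weighted least-squares problem; the symbols below are a hedged paraphrase rather than the authors' exact notation.

```latex
% x: noise-suppressed material images, \hat{x}: noisy decomposed images,
% \Sigma: estimated variance-covariance of \hat{x}, R: smoothness penalty, \beta: its weight
x^{\ast} \;=\; \arg\min_{x}\;
\left( x - \hat{x} \right)^{\mathsf{T}} \Sigma^{-1} \left( x - \hat{x} \right)
\;+\; \beta\, R(x)
```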
Random forest wetland classification using ALOS-2 L-band, RADARSAT-2 C-band, and TerraSAR-X imagery
NASA Astrophysics Data System (ADS)
Mahdianpari, Masoud; Salehi, Bahram; Mohammadimanesh, Fariba; Motagh, Mahdi
2017-08-01
Wetlands are important ecosystems around the world, although they are being degraded by both anthropogenic and natural processes. Newfoundland is among the richest Canadian provinces in terms of different wetland classes. Herbaceous wetlands cover extensive areas of the Avalon Peninsula, which are the habitat of a number of animal and plant species. In this study, a novel hierarchical object-based Random Forest (RF) classification approach is proposed for discriminating between different wetland classes in a sub-region located in the northeastern portion of the Avalon Peninsula. In particular, multi-polarization and multi-frequency SAR data, including X-band TerraSAR-X single polarized (HH), L-band ALOS-2 dual polarized (HH/HV), and C-band RADARSAT-2 fully polarized images, were applied in different classification levels. First, a SAR backscatter analysis of different land cover types was performed on training data and used in Level-I classification to separate water from non-water classes. This was followed by Level-II classification, wherein the water class was further divided into shallow- and deep-water classes, and the non-water class was partitioned into herbaceous and non-herbaceous classes. In Level-III classification, the herbaceous class was further divided into bog, fen, and marsh classes, while the non-herbaceous class was subsequently partitioned into urban, upland, and swamp classes. In Level-II and -III classifications, different polarimetric decomposition approaches, including Cloude-Pottier, Freeman-Durden, and Yamaguchi decompositions, and Kennaugh matrix elements were extracted to aid the RF classifier. The overall accuracy and kappa coefficient were determined at each classification level to evaluate the classification results. The importance of input features was also determined using the variable importance obtained by RF. It was found that the Kennaugh matrix elements, Yamaguchi, and Freeman-Durden decompositions were the most important parameters for wetland classification in this study. Using this new hierarchical RF classification approach, an overall accuracy of up to 94% was obtained for classifying different land cover types in the study area.
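A minimal scikit-learn sketch of the hierarchical idea, training one random forest per level and branch on synthetic stand-in features; the object-based segmentation, the actual SAR decomposition features, and the tuning used in the paper are not reproduced, and every name and parameter here is illustrative (so the printed prediction is not meaningful).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

def train_level(X, y):
    """One RF per hierarchy level/branch."""
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# synthetic stand-ins for per-object SAR features (e.g. Kennaugh/decomposition channels)
n, d = 600, 8
X = rng.normal(size=(n, d))
labels = rng.choice(["bog", "fen", "marsh", "urban", "upland", "swamp",
                     "shallow_water", "deep_water"], size=n)
water = np.isin(labels, ["shallow_water", "deep_water"])
herb = np.isin(labels, ["bog", "fen", "marsh"])

level1 = train_level(X, water)                                  # Level I: water vs non-water
level2_water = train_level(X[water], labels[water])             # Level II: shallow vs deep
level2_land = train_level(X[~water], np.where(herb[~water], "herbaceous", "non-herbaceous"))
level3_herb = train_level(X[~water][herb[~water]], labels[~water][herb[~water]])
level3_other = train_level(X[~water][~herb[~water]], labels[~water][~herb[~water]])

def predict(x):
    """Route one feature vector down the hierarchy to a final class label."""
    x = x.reshape(1, -1)
    if level1.predict(x)[0]:
        return level2_water.predict(x)[0]
    if level2_land.predict(x)[0] == "herbaceous":
        return level3_herb.predict(x)[0]
    return level3_other.predict(x)[0]

print(predict(X[0]), labels[0])
```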
Hierarchical decomposition of burn body diagram based on cutaneous functional units and its utility.
Richard, Reg; Jones, John A; Parshley, Philip
2015-01-01
A burn body diagram (BBD) is a common feature used in the delivery of burn care for estimating the TBSA burn as well as calculating fluid resuscitation and nutritional requirements, wound healing, and rehabilitation intervention. However, little change has occurred for over seven decades in the configuration of the BBD. The purpose of this project was to develop a computerized model using hierarchical decomposition (HD) to more precisely determine the percentage burn within a BBD based on cutaneous functional units (CFUs). HD is a process by which a system is broken down into smaller parts that can be used with greater precision. CFUs are fields of the skin previously identified as being involved in range of motion. A standard Lund/Browder (LB) BBD template was used as the starting point to apply the CFU segments. LB body divisions were parceled down into smaller body area divisions through an HD process based on the CFU concept. A numerical pattern schema was used to label the various segments in a cephalo/caudal, anterior/posterior, medial/lateral manner. Hand/fingers were divided based on anatomical landmarks and known cutaneokinematic function. The face was considered using aesthetic units. Computer code was written to apply the numeric hierarchical schema to CFUs within the context of the surface area graphic evaluation BBD program. Each segmented CFU was coded to express 100% of itself. The CFU/HD method refined the standard LB diagram from 13 body segments and 33 subdivisions into 182 isolated CFUs. Associated CFUs were reconstituted into 219 various surface area combinations, totaling 401 possible surface segments. The CFU/HD schema of body surface mapping is applicable to measuring and calculating percent wound healing in a more precise manner. It eliminates subjective assessment of the percentage wound healing and the need for additional devices such as planimetry. The development of the CFU/HD body mapping schema has rendered a technologically advanced system to depict body burns. The process has led to a more precise estimation of the segmented body areas while preserving the overall TBSA information. Clinical application to date has demonstrated its worthwhile utility.
Spatial patterns of breeding success of grizzly bears derived from hierarchical multistate models.
Fisher, Jason T; Wheatley, Matthew; Mackenzie, Darryl
2014-10-01
Conservation programs often manage populations indirectly through the landscapes in which they live. Empirically, linking reproductive success with landscape structure and anthropogenic change is a first step in understanding and managing the spatial mechanisms that affect reproduction, but this link is not sufficiently informed by data. Hierarchical multistate occupancy models can forge these links by estimating spatial patterns of reproductive success across landscapes. To illustrate, we surveyed the occurrence of grizzly bears (Ursus arctos) in the Canadian Rocky Mountains, Alberta, Canada. We deployed camera traps for 6 weeks at 54 survey sites in different types of land cover. We used hierarchical multistate occupancy models to estimate probability of detection, grizzly bear occupancy, and probability of reproductive success at each site. Grizzly bear occupancy varied among cover types and was greater in herbaceous alpine ecotones than in low-elevation wetlands or mid-elevation conifer forests. The conditional probability of reproductive success given grizzly bear occupancy was 30% (SE = 0.14). Grizzly bears with cubs had a higher probability of detection than grizzly bears without cubs, but sites were correctly classified as being occupied by breeding females only 49% of the time based on raw data and thus would have been underestimated by half. Repeated surveys and multistate modeling reduced the probability of misclassifying sites occupied by breeders as unoccupied to <2%. The probability of breeding grizzly bear occupancy varied across the landscape. Those patches with highest probabilities of breeding occupancy-herbaceous alpine ecotones-were small and highly dispersed and are projected to shrink as treelines advance due to climate warming. Understanding spatial correlates in breeding distribution is a key requirement for species conservation in the face of climate change and can help identify priorities for landscape management and protection. © 2014 Society for Conservation Biology.
Socio-Ecological Risk Factors for Prime-Age Adult Death in Two Coastal Areas of Vietnam
Kim, Deok Ryun; Ali, Mohammad; Thiem, Vu Dinh; Wierzba, Thomas F.
2014-01-01
Background: Hierarchical spatial models enable the geographic and ecological analysis of health data, thereby providing useful information for designing effective health interventions. In this study, we used a Bayesian hierarchical spatial model to evaluate mortality data in Vietnam. The model enabled identification of socio-ecological risk factors and generation of risk maps to better understand the causes and geographic implications of prime-age (15 to less than 45 years) adult death. Methods and Findings: The study was conducted in two sites: Nha Trang and Hue in Vietnam. The study areas were split into 500×500 meter cells to define neighborhoods. We first extracted socio-demographic data from population databases of the two sites, and then aggregated the data by neighborhood. We used a spatial hierarchical model that borrows strength from neighbors to evaluate risk factors and to create a spatially smoothed risk map after adjusting for neighborhood-level covariates. The Markov chain Monte Carlo procedure was used to estimate the parameters. Male mortality was more than twice the female mortality. The rates also varied by age and sex. The most frequent cause of mortality was traffic accidents and drowning for men and traffic accidents and suicide for women. Lower education of household heads in the neighborhood was an important risk factor for increased mortality. The mortality was highly variable in space and the socio-ecological risk factors are sensitive to study site and sex. Conclusion: Our study suggests that lower education of the household head is an important predictor for prime-age adult mortality. Variability in socio-ecological risk factors and in risk areas by sex makes it challenging to design appropriate intervention strategies aimed at decreasing prime-age adult deaths in Vietnam. PMID:24587031
Hierarchical Bayesian spatial models for alcohol availability, drug "hot spots" and violent crime.
Zhu, Li; Gorman, Dennis M; Horel, Scott
2006-12-07
Ecologic studies have shown a relationship between alcohol outlet densities, illicit drug use and violence. The present study examined this relationship in the City of Houston, Texas, using a sample of 439 census tracts. Neighborhood sociostructural covariates, alcohol outlet density, drug crime density and violent crime data were collected for the year 2000, and analyzed using hierarchical Bayesian models. Model selection was accomplished by applying the Deviance Information Criterion. The counts of violent crime in each census tract were modelled as having a conditional Poisson distribution. Four neighbourhood explanatory variables were identified using principal component analysis. The best-fitting model was the one that included both unstructured and spatially dependent random effects. The results showed that drug-law violation explained a greater amount of variance in violent crime rates than alcohol outlet densities. The relative risk for drug-law violation was 2.49 and that for alcohol outlet density was 1.16. Of the neighbourhood sociostructural covariates, males aged 15 to 24 showed an effect on violence, with a 16% decrease in relative risk for each standard-deviation increase in that covariate. Both the unstructured heterogeneity random effect and spatial dependence needed to be included in the model. The analysis presented suggests that activity around illicit drug markets is more strongly associated with violent crime than is alcohol outlet density. Unique among the ecological studies in this field, the present study not only shows the direction and magnitude of the impact of neighbourhood sociostructural covariates as well as alcohol and illicit drug activities in a neighbourhood, it also reveals the importance of applying hierarchical Bayesian models in this research field, as both spatial dependence and heterogeneity random effects need to be considered simultaneously.
Formal verification of a set of memory management units
NASA Technical Reports Server (NTRS)
Schubert, E. Thomas; Levitt, K.; Cohen, Gerald C.
1992-01-01
This document describes the verification of a set of memory management units (MMU). The verification effort demonstrates the use of hierarchical decomposition and abstract theories. The MMUs can be organized into a complexity hierarchy. Each new level in the hierarchy adds a few significant features or modifications to the lower level MMU. The units described include: (1) a page check translation look-aside module (TLM); (2) a page check TLM with supervisor line; (3) a base bounds MMU; (4) a virtual address translation MMU; and (5) a virtual address translation MMU with memory resident segment table.
NASA Technical Reports Server (NTRS)
Nashman, Marilyn; Chaconas, Karen J.
1988-01-01
The sensory processing system for the NASA/NBS Standard Reference Model (NASREM) for telerobotic control is described. This control system architecture was adopted by NASA for the Flight Telerobotic Servicer. The control system is hierarchically designed and consists of three parallel systems: task decomposition, world modeling, and sensory processing. The Sensory Processing System is examined; in particular, the image processing hardware and software used to extract features at low levels of sensory processing are described for tasks representative of those envisioned for the Space Station, such as assembly and maintenance.
Fast Decentralized Averaging via Multi-scale Gossip
NASA Astrophysics Data System (ADS)
Tsianos, Konstantinos I.; Rabbat, Michael G.
We are interested in the problem of computing the average consensus in a distributed fashion on random geometric graphs. We describe a new algorithm called Multi-scale Gossip which employs a hierarchical decomposition of the graph to partition the computation into tractable sub-problems. Using only pairwise messages of fixed size that travel at most O(n^{1/3}) hops, our algorithm is robust and has a communication cost of O(n log log n log(1/ε)) transmissions, which is order-optimal up to the logarithmic factor in n. Simulated experiments verify the good expected performance on graphs of many thousands of nodes.
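For intuition, the sketch below runs plain randomized pairwise gossip on a random geometric graph; it shows the consensus mechanism the paper builds on but not the multi-scale hierarchical decomposition itself, and the graph size and connection radius are arbitrary choices.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
n = 200
G = nx.random_geometric_graph(n, radius=0.15, seed=2)   # assumed connected for this toy run
x = rng.normal(size=n)                                   # initial node values
target = x.mean()

edges = list(G.edges())
for _ in range(50_000):                                  # plain pairwise gossip, not the multi-scale scheme
    i, j = edges[rng.integers(len(edges))]
    x[i] = x[j] = 0.5 * (x[i] + x[j])

print("max deviation from the true average:", float(np.abs(x - target).max()))
```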
Trading strategy based on dynamic mode decomposition: Tested in Chinese stock market
NASA Astrophysics Data System (ADS)
Cui, Ling-xiao; Long, Wen
2016-11-01
Dynamic mode decomposition (DMD) is an effective method to capture the intrinsic dynamical modes of a complex system. In this work, we adopt the DMD method to discover evolutionary patterns in the stock market and apply it to the Chinese A-share stock market. We design two strategies based on the DMD algorithm. The strategy that considers only the timing problem can make reliable profits in a choppy market with no prominent trend, but fails to beat the benchmark moving-average strategy in a bull market. After considering the spatial information from the spatial-temporal coherent structure of DMD modes, we improved the trading strategy remarkably. The profitability of the DMD strategies is then quantitatively evaluated by performing the SPA test to correct for the data-snooping effect. The results further prove that the DMD algorithm can model market patterns well in a sideways market.
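The exact-DMD computation underlying such strategies can be written in a few lines of linear algebra; the sketch below applies it to a synthetic snapshot matrix rather than real market data, and the truncation rank is an arbitrary choice.

```python
import numpy as np

def exact_dmd(X, r):
    """Exact DMD of a snapshot matrix X (state dimension x time); r is the truncation rank."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T / s          # reduced linear evolution operator
    evals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.conj().T / s @ W                     # exact DMD modes
    return evals, modes

# Toy snapshots: a few correlated oscillatory "price" series (placeholder for real market data).
rng = np.random.default_rng(3)
t = np.arange(120)
X = np.vstack([np.cos(0.2 * t), np.sin(0.2 * t), 0.5 * np.cos(0.2 * t + 1.0)])
X = X + 0.01 * rng.normal(size=X.shape)

evals, modes = exact_dmd(X, r=2)
print("DMD eigenvalue magnitudes:", np.abs(evals).round(3))   # ~1 indicates a sustained oscillatory mode
```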
Lęski, Szymon; Kublik, Ewa; Swiejkowski, Daniel A; Wróbel, Andrzej; Wójcik, Daniel K
2010-12-01
Local field potentials have good temporal resolution but are blurred due to the slow spatial decay of the electric field. For simultaneous recordings on regular grids one can reconstruct efficiently the current sources (CSD) using the inverse Current Source Density method (iCSD). It is possible to decompose the resultant spatiotemporal information about the current dynamics into functional components using Independent Component Analysis (ICA). We show on test data modeling recordings of evoked potentials on a grid of 4 × 5 × 7 points that meaningful results are obtained with spatial ICA decomposition of reconstructed CSD. The components obtained through decomposition of CSD are better defined and allow easier physiological interpretation than the results of similar analysis of corresponding evoked potentials in the thalamus. We show that spatiotemporal ICA decompositions can perform better for certain types of sources but it does not seem to be the case for the experimental data studied. Having found the appropriate approach to decomposing neural dynamics into functional components we use the technique to study the somatosensory evoked potentials recorded on a grid spanning a large part of the forebrain. We discuss two example components associated with the first waves of activation of the somatosensory thalamus. We show that the proposed method brings up new, more detailed information on the time and spatial location of specific activity conveyed through various parts of the somatosensory thalamus in the rat.
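A minimal sketch of spatial ICA in the sense used above (independent spatial maps mixed by time courses) is shown below with scikit-learn's FastICA on a synthetic space-time array standing in for the reconstructed CSD; it is not the authors' processing pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
n_time, n_space = 300, 4 * 5 * 7                         # 4 x 5 x 7 grid flattened to one spatial axis

# Two synthetic "sources": fixed spatial maps driven by independent time courses.
maps = rng.normal(size=(2, n_space))
tcs = np.vstack([np.sin(0.1 * np.arange(n_time)), rng.laplace(size=n_time)])
csd = tcs.T @ maps + 0.1 * rng.normal(size=(n_time, n_space))

# Spatial ICA: spatial locations are the samples, time points the features.
ica = FastICA(n_components=2, random_state=0)
spatial_maps = ica.fit_transform(csd.T)                  # (n_space, n_components): independent maps
time_courses = ica.mixing_                               # (n_time, n_components): associated dynamics
print(spatial_maps.shape, time_courses.shape)
```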
Scalable domain decomposition solvers for stochastic PDEs in high performance computing
Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...
2017-09-21
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
Computationally Efficient Adaptive Beamformer for Ultrasound Imaging Based on QR Decomposition.
Park, Jongin; Wi, Seok-Min; Lee, Jin S
2016-02-01
Adaptive beamforming methods for ultrasound imaging have been studied to improve image resolution and contrast. The most common approach is the minimum variance (MV) beamformer, which minimizes the power of the beamformed output while maintaining a constant response from the direction of interest. The method achieves higher resolution and better contrast than the delay-and-sum (DAS) beamformer, but it suffers from high computational cost. This cost is mainly due to the computation of the spatial covariance matrix and its inverse, which requires O(L^3) computations, where L denotes the subarray size. In this study, we propose a computationally efficient MV beamformer based on QR decomposition. The idea behind our approach is to transform the spatial covariance matrix into a scalar matrix σI and subsequently obtain the apodization weights and the beamformed output without computing the matrix inverse. To do this, the QR decomposition algorithm is used, which can be executed at low cost, and therefore the computational complexity is reduced to O(L^2). In addition, our approach is mathematically equivalent to the conventional MV beamformer, thereby showing equivalent performance. The simulation and experimental results support the validity of our approach.
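The following sketch illustrates, in generic form, how MV apodization weights can be obtained from a QR factorization of the (diagonally loaded) snapshot matrix using only triangular solves, i.e. without forming or inverting the covariance matrix explicitly. The subarray size, loading level, and steering vector are assumptions for illustration, and this is not necessarily the exact scheme of the paper.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

rng = np.random.default_rng(5)
L, K = 16, 64                                            # subarray size, number of snapshots
a = np.ones(L, dtype=complex)                            # steering vector after focusing delays
Y = (rng.normal(size=(L, K)) + 1j * rng.normal(size=(L, K))) / np.sqrt(2)

# Diagonal loading folded into the snapshot matrix: R_loaded = (Y Y^H + delta*K*I) / K.
delta = 0.1
Y_aug = np.hstack([Y, np.sqrt(delta * K) * np.eye(L, dtype=complex)])

# Economy QR of Y_aug^H gives R_loaded = Rr^H Rr / K, so R_loaded^{-1} a needs only
# two triangular solves -- no explicit covariance matrix and no explicit inverse.
_, Rr = qr(Y_aug.conj().T, mode='economic')
z = solve_triangular(Rr, a, trans='C', lower=False)      # solve Rr^H z = a
u = solve_triangular(Rr, z, lower=False)                 # solve Rr  u = z  (u is proportional to R^{-1} a)
w = u / (a.conj() @ u)                                   # MV weights: R^{-1} a / (a^H R^{-1} a)

R_loaded = Y_aug @ Y_aug.conj().T / K                    # explicit check against the textbook formula
w_ref = np.linalg.solve(R_loaded, a)
w_ref /= a.conj() @ w_ref
print("matches explicit-inverse MV weights:", np.allclose(w, w_ref))
```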
Crimmins, Shawn M.; Walleser, Liza R.; Hertel, Dan R.; McKann, Patrick C.; Rohweder, Jason J.; Thogmartin, Wayne E.
2016-01-01
There is growing need to develop models of spatial patterns in animal abundance, yet comparatively few examples of such models exist. This is especially true in situations where the abundance of one species may inhibit that of another, such as the intensively-farmed landscape of the Prairie Pothole Region (PPR) of the central United States, where waterfowl production is largely constrained by mesocarnivore nest predation. We used a hierarchical Bayesian approach to relate the distribution of various land-cover types to the relative abundances of four mesocarnivores in the PPR: coyote Canis latrans, raccoon Procyon lotor, red fox Vulpes vulpes, and striped skunk Mephitis mephitis. We developed models for each species at multiple spatial resolutions (41.4 km2, 10.4 km2, and 2.6 km2) to address different ecological and management-related questions. Model results for each species were similar irrespective of resolution. We found that the amount of row-crop agriculture was nearly ubiquitous in our best models, exhibiting a positive relationship with relative abundance for each species. The amount of native grassland land-cover was positively associated with coyote and raccoon relative abundance, but generally absent from models for red fox and skunk. Red fox and skunk were positively associated with each other, suggesting potential niche overlap. We found no evidence that coyote abundance limited that of other mesocarnivore species, as might be expected under a hypothesis of mesopredator release. The relationships between relative abundance and land-cover types were similar across spatial resolutions. Our results indicated that mesocarnivores in the PPR are most likely to occur in portions of the landscape with large amounts of agricultural land-cover. Further, our results indicated that track-survey data can be used in a hierarchical framework to gain inferences regarding spatial patterns in animal relative abundance.
Coates, Peter S.; Prochazka, Brian G.; Ricca, Mark A.; Wann, Gregory T.; Aldridge, Cameron L.; Hanser, Steven E.; Doherty, Kevin E.; O'Donnell, Michael S.; Edmunds, David R.; Espinosa, Shawn P.
2017-08-10
Population ecologists have long recognized the importance of ecological scale in understanding processes that guide observed demographic patterns for wildlife species. However, directly incorporating spatial and temporal scale into monitoring strategies that detect whether trajectories are driven by local or regional factors is challenging and rarely implemented. Identifying the appropriate scale is critical to the development of management actions that can attenuate or reverse population declines. We describe a novel example of a monitoring framework for estimating annual rates of population change for greater sage-grouse (Centrocercus urophasianus) within a hierarchical and spatially nested structure. Specifically, we conducted Bayesian analyses on a 17-year dataset (2000–2016) of lek counts in Nevada and northeastern California to estimate annual rates of population change, and compared trends across nested spatial scales. We identified leks and larger scale populations in immediate need of management, based on the occurrence of two criteria: (1) crossing of a destabilizing threshold designed to identify significant rates of population decline at a particular nested scale; and (2) crossing of decoupling thresholds designed to identify rates of population decline at smaller scales that decouple from rates of population change at a larger spatial scale. This approach establishes how declines affected by local disturbances can be separated from those operating at larger scales (for example, broad-scale wildfire and region-wide drought). Given the threshold output from our analysis, this adaptive management framework can be implemented readily and annually to facilitate responsive and effective actions for sage-grouse populations in the Great Basin. The rules of the framework can also be modified to identify populations responding positively to management action or demonstrating strong resilience to disturbance. Similar hierarchical approaches might be beneficial for other species occupying landscapes with heterogeneous disturbance and climatic regimes.
NASA Astrophysics Data System (ADS)
Lafare, Antoine E. A.; Peach, Denis W.; Hughes, Andrew G.
2016-02-01
The daily groundwater level (GWL) response in the Permo-Triassic Sandstone aquifers in the Eden Valley, England (UK), has been studied using the seasonal-trend decomposition by LOESS (STL) technique. The hydrographs from 18 boreholes in the Permo-Triassic Sandstone were decomposed into three components: seasonality, general trend and remainder. The decomposition was analysed first visually, then using tools involving a variance ratio, time-series hierarchical clustering and correlation analysis. Differences and similarities in decomposition pattern were explained using the physical and hydrogeological information associated with each borehole. The Penrith Sandstone exhibits vertical and horizontal heterogeneity, whereas the groundwater hydrographs of the more homogeneous St Bees Sandstone are characterized by a well-defined seasonality; however, exceptions can be identified. A stronger trend component is obtained in the silicified parts of the northern Penrith Sandstone, while the southern Penrith, containing the Brockram (breccias) Formation, shows a greater relative variability of the seasonal component. Other boreholes drilled as shallow/deep pairs show differences in responses, revealing potential vertical heterogeneities within the Penrith Sandstone. The differences in bedrock characteristics between and within the Penrith and St Bees Sandstone formations appear to influence the GWL response. The de-seasonalized and de-trended GWL time series were then used to characterize the response, for example in terms of memory effect (autocorrelation analysis). By applying the STL method, it is possible to analyse GWL hydrographs in a way that leads to a better conceptual understanding of groundwater flow. Thus, variation in groundwater response can be used to gain insight into the aquifer's physical properties and to understand differences in groundwater behaviour.
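A minimal STL decomposition of a synthetic daily hydrograph, in the spirit of the analysis above, can be run with statsmodels; the series, the annual period, and the simple seasonal variance ratio below are illustrative stand-ins rather than the study's data or exact metrics.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

rng = np.random.default_rng(6)
idx = pd.date_range("2000-01-01", periods=6 * 365, freq="D")
t = np.arange(len(idx))
gwl = 50 + 0.002 * t + 1.5 * np.sin(2 * np.pi * t / 365.25) + 0.3 * rng.normal(size=len(idx))
series = pd.Series(gwl, index=idx, name="GWL")           # synthetic stand-in for a borehole hydrograph

res = STL(series, period=365, robust=True).fit()         # seasonal, trend and remainder components
seasonal_share = res.seasonal.var() / series.var()       # crude "strength of seasonality" style ratio
print("seasonal variance share:", round(float(seasonal_share), 3))
```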
Hilt, Pauline M.; Delis, Ioannis; Pozzo, Thierry; Berret, Bastien
2018-01-01
The modular control hypothesis suggests that motor commands are built from precoded modules whose specific combined recruitment can allow the performance of virtually any motor task. Despite considerable experimental support, this hypothesis remains tentative as classical findings of reduced dimensionality in muscle activity may also result from other constraints (biomechanical couplings, data averaging or low dimensionality of motor tasks). Here we assessed the effectiveness of modularity in describing muscle activity in a comprehensive experiment comprising 72 distinct point-to-point whole-body movements during which the activity of 30 muscles was recorded. To identify invariant modules of a temporal and spatial nature, we used a space-by-time decomposition of muscle activity that has been shown to encompass classical modularity models. To examine the decompositions, we focused not only on the amount of variance they explained but also on whether the task performed on each trial could be decoded from the single-trial activations of modules. For the sake of comparison, we confronted these scores to the scores obtained from alternative non-modular descriptions of the muscle data. We found that the space-by-time decomposition was effective in terms of data approximation and task discrimination at comparable reduction of dimensionality. These findings show that few spatial and temporal modules give a compact yet approximate representation of muscle patterns carrying nearly all task-relevant information for a variety of whole-body reaching movements. PMID:29666576
NASA Astrophysics Data System (ADS)
Zhang, Ying; Zhou, Jiabin; Cai, Weiquan; Zhou, Jun; Li, Zhen
2018-02-01
In this study, hierarchical double-shelled NiO/ZnO hollow-sphere heterojunctions were prepared by calcination in air of metal-organic frameworks (MOFs) used as a sacrificial template, following a one-step solvothermal method. Additionally, the photocatalytic activity of the as-prepared samples for the degradation of Rhodamine B (RhB) under UV-vis light irradiation was also investigated. Each NiO/ZnO microsphere comprised a core and a shell with a unique hierarchically porous structure. The photocatalytic results showed that the NiO/ZnO hollow spheres exhibited excellent catalytic activity for RhB degradation, causing complete decomposition of RhB (200 mL of 10 g/L) under UV-vis light irradiation within 3 h. Furthermore, a degradation pathway was proposed on the basis of the intermediates observed during the photodegradation process using liquid chromatography coupled with mass spectrometry (LC-MS). The improvement in photocatalytic performance could be attributed to the p-n heterojunction in the NiO/ZnO hollow spheres with hierarchically porous structure and the strong double-shell binding interaction, which enhance adsorption of the dye molecules on the catalyst surface and facilitate electron/hole transfer within the framework. The degradation mechanism of the pollutant is ascribed to hydroxyl radicals (·OH), which are the main oxidative species in the photocatalytic degradation of RhB. This work provides a facile and effective approach for the fabrication of porous metal oxide heterojunctions with high photocatalytic activity that can potentially be used in environmental purification.
Royle, J. Andrew; Dorazio, Robert M.
2008-01-01
A guide to data collection, modeling and inference strategies for biological survey data using Bayesian and classical statistical methods. This book describes a general and flexible framework for modeling and inference in ecological systems based on hierarchical models, with a strict focus on the use of probability models and parametric inference. Hierarchical models represent a paradigm shift in the application of statistics to ecological inference problems because they combine explicit models of ecological system structure or dynamics with models of how ecological systems are observed. The principles of hierarchical modeling are developed and applied to problems in population, metapopulation, community, and metacommunity systems. The book provides the first synthetic treatment of many recent methodological advances in ecological modeling and unifies disparate methods and procedures. The authors apply principles of hierarchical modeling to ecological problems, including * occurrence or occupancy models for estimating species distribution * abundance models based on many sampling protocols, including distance sampling * capture-recapture models with individual effects * spatial capture-recapture models based on camera trapping and related methods * population and metapopulation dynamic models * models of biodiversity, community structure and dynamics.
Distinctive signatures of recursion.
Martins, Maurício Dias
2012-07-19
Although recursion has been hypothesized to be a necessary capacity for the evolution of language, the multiplicity of definitions being used has undermined the broader interpretation of empirical results. I propose that only a definition focused on representational abilities allows the prediction of specific behavioural traits that enable us to distinguish recursion from non-recursive iteration and from hierarchical embedding: only subjects able to represent recursion, i.e. to represent different hierarchical dependencies (related by parenthood) with the same set of rules, are able to generalize and produce new levels of embedding beyond those specified a priori (in the algorithm or in the input). The ability to use such representations may be advantageous in several domains: action sequencing, problem-solving, spatial navigation, social navigation and for the emergence of conventionalized communication systems. The ability to represent contiguous hierarchical levels with the same rules may lead subjects to expect unknown levels and constituents to behave similarly, and this prior knowledge may bias learning positively. Finally, a new paradigm to test for recursion is presented. Preliminary results suggest that the ability to represent recursion in the spatial domain recruits both visual and verbal resources. Implications regarding language evolution are discussed.
Bittig, Arne T; Uhrmacher, Adelinde M
2017-01-01
Spatio-temporal dynamics of cellular processes can be simulated at different levels of detail, from (deterministic) partial differential equations via the spatial Stochastic Simulation Algorithm to tracking Brownian trajectories of individual particles. We present a spatial simulation approach for multi-level rule-based models, which includes dynamically and hierarchically nested cellular compartments and entities. Our approach, ML-Space, combines discrete compartmental dynamics, stochastic spatial approaches in discrete space, and particles moving in continuous space. The rule-based specification language of ML-Space supports concise and compact descriptions of models and makes it easy to adapt their spatial resolution.
NASA Astrophysics Data System (ADS)
Rathinasamy, Maheswaran; Bindhu, V. M.; Adamowski, Jan; Narasimhan, Balaji; Khosa, Rakesh
2017-10-01
An investigation of the scaling characteristics of vegetation and temperature data derived from LANDSAT data was undertaken for a heterogeneous area in Tamil Nadu, India. A wavelet-based multiresolution technique decomposed the data into large-scale mean vegetation and temperature fields and fluctuations in horizontal, diagonal, and vertical directions at hierarchical spatial resolutions. In this approach, the wavelet coefficients were used to investigate whether the normalized difference vegetation index (NDVI) and land surface temperature (LST) fields exhibited self-similar scaling behaviour. In this study, l-moments were used instead of conventional simple moments to understand scaling behaviour. Using the first six moments of the wavelet coefficients through five levels of dyadic decomposition, the NDVI data were shown to be statistically self-similar, with a slope of approximately -0.45 in each of the horizontal, vertical, and diagonal directions of the image, over scales ranging from 30 to 960 m. The temperature data were also shown to exhibit self-similarity with slopes ranging from -0.25 in the diagonal direction to -0.20 in the vertical direction over the same scales. These findings can help develop appropriate up- and down-scaling schemes of remotely sensed NDVI and LST data for various hydrologic and environmental modelling applications. A sensitivity analysis was also undertaken to understand the effect of mother wavelets on the scaling characteristics of LST and NDVI images.
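The scaling analysis can be sketched with a 2-D wavelet decomposition of a synthetic field: compute a moment of the detail coefficients at each level and fit a slope against scale. The sketch uses ordinary second moments for brevity, whereas the study used l-moments, and the random field is a placeholder for the LANDSAT-derived NDVI/LST images.

```python
import numpy as np
import pywt

rng = np.random.default_rng(7)
field = np.cumsum(np.cumsum(rng.normal(size=(512, 512)), axis=0), axis=1)   # crude correlated surface

coeffs = pywt.wavedec2(field, 'haar', level=5)           # [cA5, (cH5,cV5,cD5), ..., (cH1,cV1,cD1)]
detail_levels = list(range(5, 0, -1))                    # coeffs[1:] run from coarsest (5) to finest (1)

log_scale, log_moment = [], []
for lev, (cH, cV, cD) in zip(detail_levels, coeffs[1:]):
    log_scale.append(lev * np.log(2.0))                  # spatial scale ~ 2**lev pixels
    log_moment.append(np.log(np.mean(cH ** 2)))          # second moment of the horizontal details
slope = np.polyfit(log_scale, log_moment, 1)[0]
print("scaling slope (horizontal details):", round(float(slope), 2))
```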
A Metascalable Computing Framework for Large Spatiotemporal-Scale Atomistic Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nomura, K; Seymour, R; Wang, W
2009-02-17
A metascalable (or 'design once, scale on new architectures') parallel computing framework has been developed for large spatiotemporal-scale atomistic simulations of materials based on spatiotemporal data locality principles, which is expected to scale on emerging multipetaflops architectures. The framework consists of: (1) an embedded divide-and-conquer (EDC) algorithmic framework based on spatial locality to design linear-scaling algorithms for high complexity problems; (2) a space-time-ensemble parallel (STEP) approach based on temporal locality to predict long-time dynamics, while introducing multiple parallelization axes; and (3) a tunable hierarchical cellular decomposition (HCD) parallelization framework to map these O(N) algorithms onto a multicore cluster based on a hybrid implementation combining message passing and critical section-free multithreading. The EDC-STEP-HCD framework exposes maximal concurrency and data locality, thereby achieving: (1) inter-node parallel efficiency well over 0.95 for 218 billion-atom molecular-dynamics and 1.68 trillion electronic-degrees-of-freedom quantum-mechanical simulations on 212,992 IBM BlueGene/L processors (superscalability); (2) high intra-node, multithreading parallel efficiency (nanoscalability); and (3) nearly perfect time/ensemble parallel efficiency (eon-scalability). The spatiotemporal scale covered by MD simulation on a sustained petaflops computer per day (i.e. petaflops·day of computing) is estimated as NT = 2.14 (e.g. N = 2.14 million atoms for T = 1 microsecond).
Combination of geodetic measurements by means of a multi-resolution representation
NASA Astrophysics Data System (ADS)
Goebel, G.; Schmidt, M. G.; Börger, K.; List, H.; Bosch, W.
2010-12-01
Recent and in particular current satellite gravity missions provide important contributions for global Earth gravity models, and these global models can be refined by airborne and terrestrial gravity observations. The most common representation of a gravity field model in terms of spherical harmonics has the disadvantages that it is difficult to represent small spatial details and cannot handle data gaps appropriately. An adequate modeling using a multi-resolution representation (MRP) is necessary in order to exploit the highest degree of information out of all these mentioned measurements. The MRP provides a simple hierarchical framework for identifying the properties of a signal. The procedure starts from the measurements, performs the decomposition into frequency-dependent detail signals by applying a pyramidal algorithm and allows for data compression and filtering, i.e. data manipulations. Since different geodetic measurement types (terrestrial, airborne, spaceborne) cover different parts of the frequency spectrum, it seems reasonable to calculate the detail signals of the lower levels mainly from satellite data, the detail signals of medium levels mainly from airborne and the detail signals of the higher levels mainly from terrestrial data. A concept is presented how these different measurement types can be combined within the MRP. In this presentation the basic principles on strategies and concepts for the generation of MRPs will be shown. Examples of regional gravity field determination are presented.
The Hierarchical Distribution of the Young Stellar Clusters in Six Local Star-forming Galaxies
NASA Astrophysics Data System (ADS)
Grasha, K.; Calzetti, D.; Adamo, A.; Kim, H.; Elmegreen, B. G.; Gouliermis, D. A.; Dale, D. A.; Fumagalli, M.; Grebel, E. K.; Johnson, K. E.; Kahre, L.; Kennicutt, R. C.; Messa, M.; Pellerin, A.; Ryon, J. E.; Smith, L. J.; Shabani, F.; Thilker, D.; Ubeda, L.
2017-05-01
We present a study of the hierarchical clustering of the young stellar clusters in six local (3-15 Mpc) star-forming galaxies using Hubble Space Telescope broadband WFC3/UVIS UV and optical images from the Treasury Program LEGUS (Legacy ExtraGalactic UV Survey). We identified 3685 likely clusters and associations, each visually classified by their morphology, and we use the angular two-point correlation function to study the clustering of these stellar systems. We find that the spatial distribution of the young clusters and associations are clustered with respect to each other, forming large, unbound hierarchical star-forming complexes that are in general very young. The strength of the clustering decreases with increasing age of the star clusters and stellar associations, becoming more homogeneously distributed after ˜40-60 Myr and on scales larger than a few hundred parsecs. In all galaxies, the associations exhibit a global behavior that is distinct and more strongly correlated from compact clusters. Thus, populations of clusters are more evolved than associations in terms of their spatial distribution, traveling significantly from their birth site within a few tens of Myr, whereas associations show evidence of disruption occurring very quickly after their formation. The clustering of the stellar systems resembles that of a turbulent interstellar medium that drives the star formation process, correlating the components in unbound star-forming complexes in a hierarchical manner, dispersing shortly after formation, suggestive of a single, continuous mode of star formation across all galaxies.
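An angular two-point correlation function of the kind used above can be estimated with simple pair counts and the Landy-Szalay estimator; the sketch below uses a uniform toy catalogue in a flat unit field (so w(θ) ≈ 0), purely to illustrate the mechanics rather than to reproduce the LEGUS measurement.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(8)
n_d, n_r = 2000, 20000
data = rng.random((n_d, 2))                     # "cluster" positions in a flat unit field (toy catalogue)
rand = rng.random((n_r, 2))                     # random comparison catalogue with the same geometry

edges = np.logspace(-2.3, -0.7, 13)             # separation bin edges, in field units

def pair_counts(a, b, edges, same=False):
    """Pair counts per separation bin between point sets a and b."""
    cum = cKDTree(a).count_neighbors(cKDTree(b), edges).astype(float)
    if same:
        cum = (cum - len(a)) / 2.0              # drop self pairs, count each pair once
    return np.diff(cum)

DD = pair_counts(data, data, edges, same=True)
RR = pair_counts(rand, rand, edges, same=True)
DR = pair_counts(data, rand, edges)

nd, nr = float(n_d), float(n_r)
DDn = DD / (nd * (nd - 1) / 2)
RRn = RR / (nr * (nr - 1) / 2)
DRn = DR / (nd * nr)
w = (DDn - 2 * DRn + RRn) / RRn                 # Landy-Szalay estimator of the correlation function
print(np.round(w, 3))                           # ~0 in every bin for this unclustered toy field
```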
Paul F. Hessburg; Bradley G. Smith; R. Brion Salter
1999-01-01
Using hierarchical clustering techniques, we grouped subwatersheds on the eastern slope of the Cascade Range in Washington State into ecological subregions by similarity of area in potential vegetation and climate attributes. We then built spatially continuous historical and current vegetation maps for 48 randomly selected subwatersheds from interpretations of 1938-49...
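A minimal sketch of the grouping step (hierarchical clustering of subwatersheds by vegetation and climate attributes) is shown below with SciPy; the attribute table is synthetic and the number of subregions is an arbitrary choice, not the study's result.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(9)
n_sub = 48                                               # hypothetical number of subwatersheds
veg = rng.dirichlet(np.ones(4), size=n_sub)              # area fractions of four potential vegetation types
clim = rng.normal(size=(n_sub, 2))                       # e.g. standardized temperature and precipitation
X = np.hstack([veg, clim])

Z = linkage(X, method="ward")                            # agglomerative hierarchical clustering
subregion = fcluster(Z, t=5, criterion="maxclust")       # cut the tree into five ecological subregions
print("subwatersheds per subregion:", np.bincount(subregion)[1:])
```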
Leblond, Frederic; Tichauer, Kenneth M.; Pogue, Brian W.
2010-01-01
The spatial resolution and recovered contrast of images reconstructed from diffuse fluorescence tomography data are limited by the high scattering properties of light propagation in biological tissue. As a result, the image reconstruction process can be exceedingly vulnerable to inaccurate prior knowledge of tissue optical properties and stochastic noise. In light of these limitations, the optimal source-detector geometry for a fluorescence tomography system is non-trivial, requiring analytical methods to guide design. Analysis of the singular value decomposition of the matrix to be inverted for image reconstruction is one potential approach, providing key quantitative metrics, such as singular image mode spatial resolution and singular data mode frequency as a function of singular mode. In the present study, these metrics are used to analyze the effects of different sources of noise and model errors as related to image quality in the form of spatial resolution and contrast recovery. The image quality is demonstrated to be inherently noise-limited even when detection geometries were increased in complexity to allow maximal tissue sampling, suggesting that detection noise characteristics outweigh detection geometry for achieving optimal reconstructions. PMID:21258566
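The singular-value analysis described above can be sketched by decomposing a forward (sensitivity) matrix and counting the modes that survive a given noise floor. The matrix below is a synthetic distance-based kernel and the noise level is an assumption, used only to illustrate how noise limits the usable singular modes.

```python
import numpy as np

rng = np.random.default_rng(10)
n_meas, n_vox = 128, 900
meas_pos = rng.random((n_meas, 2))                  # hypothetical source-detector sampling positions
vox_pos = rng.random((n_vox, 2))                    # hypothetical image voxel positions

# Broad, overlapping sensitivity functions give a strongly ill-conditioned forward matrix,
# loosely mimicking diffuse light propagation (purely illustrative, not a diffusion model).
d = np.linalg.norm(meas_pos[:, None, :] - vox_pos[None, :, :], axis=-1)
J = np.exp(-d / 0.05)

U, s, Vt = np.linalg.svd(J, full_matrices=False)
noise_floor = 1e-3 * s[0]                           # assumed relative detection-noise level
usable = int(np.sum(s > noise_floor))
print("singular modes above the noise floor:", usable, "of", len(s))
# The first `usable` rows of Vt are the image-space modes still recoverable at this noise level.
```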
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sitaraman, Hariswaran; Grout, Ray
2015-10-30
The load balancing strategies for hybrid solvers that couple a grid-based partial differential equation solution with particle tracking are presented in this paper. A typical Message Passing Interface (MPI) based parallelization of the grid-based solve is done using a spatial domain decomposition, while particle tracking is primarily done using one of two techniques. One technique is to distribute the particles to the MPI ranks whose grid they belong to, while the other is to share the particles equally among all ranks, irrespective of their spatial location. The former technique provides spatial locality for field interpolation but cannot assure load balance in terms of the number of particles, which is achieved by the latter. The two techniques are compared for a case of particle tracking in a homogeneous isotropic turbulence box as well as a turbulent jet case. We performed a strong scaling study on more than 32,000 cores, which results in particle densities representative of anticipated exascale machines. The use of alternative implementations of MPI collectives and efficient load equalization strategies is studied to reduce data communication overheads.
Graichen, Uwe; Eichardt, Roland; Fiedler, Patrique; Strohmeier, Daniel; Zanow, Frank; Haueisen, Jens
2015-01-01
Important requirements for the analysis of multichannel EEG data are efficient techniques for signal enhancement, signal decomposition, feature extraction, and dimensionality reduction. We propose a new approach for spatial harmonic analysis (SPHARA) that extends the classical spatial Fourier analysis to EEG sensors positioned non-uniformly on the surface of the head. The proposed method is based on the eigenanalysis of the discrete Laplace-Beltrami operator defined on a triangular mesh. We present several ways to discretize the continuous Laplace-Beltrami operator and compare the properties of the resulting basis functions computed using these discretization methods. We apply SPHARA to somatosensory evoked potential data from eleven volunteers and demonstrate the ability of the method for spatial data decomposition, dimensionality reduction and noise suppression. When employing SPHARA for dimensionality reduction, a significantly more compact representation can be achieved using the FEM approach, compared to the other discretization methods. Using FEM, to recover 95% and 99% of the total energy of the EEG data, on average only 35% and 58% of the coefficients are necessary. The capability of SPHARA for noise suppression is shown using artificial data. We conclude that SPHARA can be used for spatial harmonic analysis of multi-sensor data at arbitrary positions and can be utilized in a variety of other applications. PMID:25885290
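The idea of a spatial harmonic basis from the eigendecomposition of a Laplacian defined over irregularly placed sensors can be sketched as below. For brevity the sketch uses a k-nearest-neighbour graph Laplacian instead of the FEM discretization of the Laplace-Beltrami operator on a triangular mesh, and the sensor layout and data are synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(11)
n_sens = 64
pos = rng.normal(size=(n_sens, 3))
pos /= np.linalg.norm(pos, axis=1, keepdims=True)   # sensors scattered on a unit sphere ("head")

# k-nearest-neighbour graph Laplacian as a simple stand-in for a mesh Laplace-Beltrami operator.
k = 6
_, idx = cKDTree(pos).query(pos, k=k + 1)           # idx[:, 0] is the sensor itself
adj = np.zeros((n_sens, n_sens))
for i, neighbours in enumerate(idx[:, 1:]):
    adj[i, neighbours] = 1.0
adj = np.maximum(adj, adj.T)                        # symmetrize
lap = np.diag(adj.sum(axis=1)) - adj

eigvals, basis = np.linalg.eigh(lap)                # columns = spatial harmonics, low to high frequency

# Dimensionality reduction / spatial smoothing: keep only the lowest-frequency harmonics.
eeg = rng.normal(size=(n_sens, 500))                # placeholder multichannel data
n_keep = 20
coeffs = basis[:, :n_keep].T @ eeg
eeg_smooth = basis[:, :n_keep] @ coeffs
print("retained energy:", round(float(np.sum(eeg_smooth**2) / np.sum(eeg**2)), 3))
```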
How does spatial extent of fMRI datasets affect independent component analysis decomposition?
Aragri, Adriana; Scarabino, Tommaso; Seifritz, Erich; Comani, Silvia; Cirillo, Sossio; Tedeschi, Gioacchino; Esposito, Fabrizio; Di Salle, Francesco
2006-09-01
Spatial independent component analysis (sICA) of functional magnetic resonance imaging (fMRI) time series can generate meaningful activation maps and associated descriptive signals, which are useful to evaluate datasets of the entire brain or selected portions of it. Besides computational implications, variations in the input dataset combined with the multivariate nature of ICA may lead to different spatial or temporal readouts of brain activation phenomena. By reducing and increasing a volume of interest (VOI), we applied sICA to different datasets from real activation experiments with multislice acquisition and single or multiple sensory-motor task-induced blood oxygenation level-dependent (BOLD) signal sources with different spatial and temporal structure. Using receiver operating characteristics (ROC) methodology for accuracy evaluation and multiple regression analysis as benchmark, we compared sICA decompositions of reduced and increased VOI fMRI time-series containing auditory, motor and hemifield visual activation occurring separately or simultaneously in time. Both approaches yielded valid results; however, the results of the increased VOI approach were spatially more accurate compared to the results of the decreased VOI approach. This is consistent with the capability of sICA to take advantage of extended samples of statistical observations and suggests that sICA is more powerful with extended rather than reduced VOI datasets to delineate brain activity. (c) 2006 Wiley-Liss, Inc.
Liu, Huanjun; Huffman, Ted; Liu, Jiangui; Li, Zhe; Daneshfar, Bahram; Zhang, Xinle
2015-01-01
Understanding agricultural ecosystems and their complex interactions with the environment is important for improving agricultural sustainability and environmental protection. Developing the necessary understanding requires approaches that integrate multi-source geospatial data and interdisciplinary relationships at different spatial scales. In order to identify and delineate landscape units representing relatively homogenous biophysical properties and eco-environmental functions at different spatial scales, a hierarchical system of uniform management zones (UMZ) is proposed. The UMZ hierarchy consists of seven levels of units at different spatial scales, namely site-specific, field, local, regional, country, continent, and globe. Relatively few studies have focused on the identification of the two middle levels of units in the hierarchy, namely the local UMZ (LUMZ) and the regional UMZ (RUMZ), which prevents true eco-environmental studies from being carried out across the full range of scales. This study presents a methodology to delineate LUMZ and RUMZ spatial units using land cover, soil, and remote sensing data. A set of objective criteria were defined and applied to evaluate the within-zone homogeneity and between-zone separation of the delineated zones. The approach was applied in a farming and forestry region in southeastern Ontario, Canada, and the methodology was shown to be objective, flexible, and applicable with commonly available spatial data. The hierarchical delineation of UMZs can be used as a tool to organize the spatial structure of agricultural landscapes, to understand spatial relationships between cropping practices and natural resources, and to target areas for application of specific environmental process models and place-based policy interventions.
Safavynia, Seyed A.
2012-01-01
Recent evidence suggests that complex spatiotemporal patterns of muscle activity can be explained with a low-dimensional set of muscle synergies or M-modes. While it is clear that both spatial and temporal aspects of muscle coordination may be low dimensional, constraints on spatial versus temporal features of muscle coordination likely involve different neural control mechanisms. We hypothesized that the low-dimensional spatial and temporal features of muscle coordination are independent of each other. We further hypothesized that in reactive feedback tasks, spatially fixed muscle coordination patterns—or muscle synergies—are hierarchically recruited via time-varying neural commands based on delayed task-level feedback. We explicitly compared the ability of spatially fixed (SF) versus temporally fixed (TF) muscle synergies to reconstruct the entire time course of muscle activity during postural responses to anterior-posterior support-surface translations. While both SF and TF muscle synergies could account for EMG variability in a postural task, SF muscle synergies produced more consistent and physiologically interpretable results than TF muscle synergies during postural responses to perturbations. Moreover, a majority of SF muscle synergies were consistent in structure when extracted from epochs throughout postural responses. Temporal patterns of SF muscle synergy recruitment were well-reconstructed by delayed feedback of center of mass (CoM) kinematics and reproduced EMG activity of multiple muscles. Consistent with the idea that independent and hierarchical low-dimensional neural control structures define spatial and temporal patterns of muscle activity, our results suggest that CoM kinematics are a task variable used to recruit SF muscle synergies for feedback control of balance. PMID:21957219
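Spatially fixed muscle synergies of this kind are commonly extracted with non-negative matrix factorization; the sketch below runs scikit-learn's NMF on a synthetic EMG matrix and reports variance accounted for. The number of synergies and the data are illustrative assumptions, not the study's recordings or algorithm.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(12)
n_time, n_musc, n_syn = 400, 16, 4

# Synthetic non-negative EMG built from spatially fixed synergies and time-varying recruitment.
W_true = rng.random((n_time, n_syn)) ** 2                # recruitment coefficients over time
H_true = rng.random((n_syn, n_musc))                     # spatially fixed muscle weightings
emg = W_true @ H_true + 0.05 * rng.random((n_time, n_musc))

model = NMF(n_components=n_syn, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(emg)                             # temporal recruitment of each synergy
H = model.components_                                    # spatially fixed synergy structure
vaf = 1.0 - np.sum((emg - W @ H) ** 2) / np.sum(emg ** 2)
print("variance accounted for:", round(float(vaf), 3))
```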
Near-lossless multichannel EEG compression based on matrix and tensor decompositions.
Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej
2013-05-01
A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
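A minimal "lossy plus residual" sketch of the compression principle described above: a truncated SVD across channels forms the lossy layer, and uniform quantization of the residual bounds the maximum absolute error (near-losslessness); the entropy-coding stage is omitted. The rank, error bound, and data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(13)
eeg = rng.normal(size=(32, 2048))                        # channels x samples, placeholder for MC-EEG

# Lossy layer: rank-r truncated SVD exploits inter-channel (spatial) correlation.
r = 8
U, s, Vt = np.linalg.svd(eeg, full_matrices=False)
lossy = U[:, :r] * s[:r] @ Vt[:r]

# Residual layer: uniform quantization with step 2*eps bounds the reconstruction error by eps;
# the integer residual q would then be entropy coded (e.g. arithmetic coding).
eps = 0.01
q = np.round((eeg - lossy) / (2 * eps)).astype(np.int32)
recon = lossy + q * (2 * eps)
print("max abs error:", float(np.abs(recon - eeg).max()), "(bound:", eps, ")")
```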
Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.
2017-09-04
In this paper, we present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support our construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Lastly, our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.
An integrated analysis-synthesis array system for spatial sound fields.
Bai, Mingsian R; Hua, Yi-Hsin; Kuo, Chia-Hao; Hsieh, Yu-Hao
2015-03-01
An integrated recording and reproduction array system for spatial audio is presented within a generic framework akin to the analysis-synthesis filterbanks in discrete time signal processing. In the analysis stage, a microphone array "encodes" the sound field by using the plane-wave decomposition. Direction of arrival of plane-wave components that comprise the sound field of interest are estimated by multiple signal classification. Next, the source signals are extracted by using a deconvolution procedure. In the synthesis stage, a loudspeaker array "decodes" the sound field by reconstructing the plane-wave components obtained in the analysis stage. This synthesis stage is carried out by pressure matching in the interior domain of the loudspeaker array. The deconvolution problem is solved by truncated singular value decomposition or convex optimization algorithms. For high-frequency reproduction that suffers from the spatial aliasing problem, vector panning is utilized. Listening tests are undertaken to evaluate the deconvolution method, vector panning, and a hybrid approach that combines both methods to cover frequency ranges below and above the spatial aliasing frequency. Localization and timbral attributes are considered in the subjective evaluation. The results show that the hybrid approach performs the best in overall preference. In addition, there is a trade-off between reproduction performance and the external radiation.
NASA Technical Reports Server (NTRS)
John, Bonnie; Vera, Alonso; Matessa, Michael; Freed, Michael; Remington, Roger
2002-01-01
CPM-GOMS is a modeling method that combines the task decomposition of a GOMS analysis with a model of human resource usage at the level of cognitive, perceptual, and motor operations. CPM-GOMS models have made accurate predictions about skilled user behavior in routine tasks, but developing such models is tedious and error-prone. We describe a process for automatically generating CPM-GOMS models from a hierarchical task decomposition expressed in a cognitive modeling tool called Apex. Resource scheduling in Apex automates the difficult task of interleaving the cognitive, perceptual, and motor resources underlying common task operators (e.g. mouse move-and-click). Apex's UI automatically generates PERT charts, which allow modelers to visualize a model's complex parallel behavior. Because interleaving and visualization are now automated, it is feasible to construct arbitrarily long sequences of behavior. To demonstrate the process, we present a model of automated teller interactions in Apex and discuss implications for user modeling. Of the modeling methods available to model human users, the Goals, Operators, Methods, and Selection (GOMS) method [6, 21] has been the most widely used, providing accurate, often zero-parameter, predictions of the routine performance of skilled users in a wide range of procedural tasks [6, 13, 15, 27, 28]. GOMS is meant to model routine behavior. The user is assumed to have methods that apply sequences of operators to achieve a goal. Selection rules are applied when there is more than one method to achieve a goal. Many routine tasks lend themselves well to such decomposition. Decomposition produces a representation of the task as a set of nested goal states that include an initial state and a final state. The iterative decomposition into goals and nested subgoals can terminate in primitives of any desired granularity, with the choice of level of detail dependent on the predictions required. Although GOMS has proven useful in HCI, tools to support the construction of GOMS models have not yet come into general use.
Travelling waves and spatial hierarchies in measles epidemics
NASA Astrophysics Data System (ADS)
Grenfell, B. T.; Bjørnstad, O. N.; Kappey, J.
2001-12-01
Spatio-temporal travelling waves are striking manifestations of predator-prey and host-parasite dynamics. However, few systems are well enough documented both to detect repeated waves and to explain their interaction with spatio-temporal variations in population structure and demography. Here, we demonstrate recurrent epidemic travelling waves in an exhaustive spatio-temporal data set for measles in England and Wales. We use wavelet phase analysis, which allows for dynamical non-stationarity-a complication in interpreting spatio-temporal patterns in these and many other ecological time series. In the pre-vaccination era, conspicuous hierarchical waves of infection moved regionally from large cities to small towns; the introduction of measles vaccination restricted but did not eliminate this hierarchical contagion. A mechanistic stochastic model suggests a dynamical explanation for the waves-spread via infective `sparks' from large `core' cities to smaller `satellite' towns. Thus, the spatial hierarchy of host population structure is a prerequisite for these infection waves.
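Wavelet phase analysis of the kind described here compares the phase of an epidemic cycle between places at a fixed period. The following NumPy sketch computes the phase at one scale with a complex Morlet wavelet and takes the phase difference between two synthetic case series; the biweekly series, the two-year period, and the four-biweek lag are invented for illustration and are not the measles data.

import numpy as np

def morlet_phase(x, period, dt=1.0, w0=6.0):
    """Instantaneous phase of series x at one wavelet scale (complex Morlet, centre
    frequency w0), computed by correlation in the Fourier domain."""
    n = len(x)
    scale = period * w0 / (2 * np.pi)          # scale giving (approximately) the requested period
    t = (np.arange(n) - n // 2) * dt
    wavelet = np.pi ** -0.25 * np.exp(1j * w0 * t / scale) * np.exp(-0.5 * (t / scale) ** 2)
    conv = np.fft.ifft(np.fft.fft(x - np.mean(x)) * np.conj(np.fft.fft(wavelet)))
    return np.angle(np.fft.fftshift(conv))      # shift so output[k] aligns with x[k]

# Hypothetical biweekly case counts for a 'core' city and a 'satellite' town.
t = np.arange(520)                              # 20 years of biweekly data
core = 100 + 80 * np.sin(2 * np.pi * t / 52)    # two-year cycle (52 biweeks)
sat = 10 + 8 * np.sin(2 * np.pi * (t - 4) / 52) # same cycle, lagging by 4 biweeks
lag = np.angle(np.exp(1j * (morlet_phase(core, 52) - morlet_phase(sat, 52))))
print("mean phase lead of core over satellite (radians):", lag.mean())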
Scheel, Ida; Ferkingstad, Egil; Frigessi, Arnoldo; Haug, Ola; Hinnerichsen, Mikkel; Meze-Hausken, Elisabeth
2013-01-01
Climate change will affect the insurance industry. We develop a Bayesian hierarchical statistical approach to explain and predict insurance losses due to weather events at a local geographic scale. The number of weather-related insurance claims is modelled by combining generalized linear models with spatially smoothed variable selection. Using Gibbs sampling and reversible jump Markov chain Monte Carlo methods, this model is fitted on daily weather and insurance data from each of the 319 municipalities which constitute southern and central Norway for the period 1997–2006. Precise out-of-sample predictions validate the model. Our results show interesting regional patterns in the effect of different weather covariates. In addition to being useful for insurance pricing, our model can be used for short-term predictions based on weather forecasts and for long-term predictions based on downscaled climate models. PMID:23396890
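The core of the claim-count model is a generalized linear model for daily counts driven by weather covariates; the spatially smoothed variable selection and MCMC fitting used in the paper are beyond a short sketch. Below is a minimal Poisson-regression fit by iteratively reweighted least squares on simulated data; the covariates (precipitation, snow melt) and coefficients are hypothetical.

import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Fit a Poisson GLM with log link by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(np.clip(X @ beta, -20, 20))   # mean claim count per day
        W = mu                                    # Poisson working weights
        z = X @ beta + (y - mu) / mu              # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# Hypothetical daily data for one municipality: intercept, precipitation, snow melt.
rng = np.random.default_rng(1)
n = 3650
X = np.column_stack([np.ones(n), rng.gamma(2.0, 2.0, n), rng.gamma(1.5, 1.0, n)])
true_beta = np.array([-1.0, 0.15, 0.05])
y = rng.poisson(np.exp(X @ true_beta))
print("estimated coefficients:", poisson_irls(X, y))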
The Analysis of Image Segmentation Hierarchies with a Graph-based Knowledge Discovery System
NASA Technical Reports Server (NTRS)
Tilton, James C.; Cooke, Diane J.; Ketkar, Nikhil; Aksoy, Selim
2008-01-01
Currently available pixel-based analysis techniques do not effectively extract the information content from the increasingly available high spatial resolution remotely sensed imagery data. A general consensus is that object-based image analysis (OBIA) is required to effectively analyze this type of data. OBIA is usually a two-stage process: image segmentation followed by an analysis of the segmented objects. We are exploring an approach to OBIA in which hierarchical image segmentations provided by the Recursive Hierarchical Segmentation (RHSEG) software developed at NASA GSFC are analyzed by the Subdue graph-based knowledge discovery system developed by a team at Washington State University. In this paper we discuss our initial approach to representing the RHSEG-produced hierarchical image segmentations in a graphical form understandable by Subdue, and provide results on real and simulated data. We also discuss planned improvements designed to more effectively and completely convey the hierarchical segmentation information to Subdue and to improve processing efficiency.
NASA Astrophysics Data System (ADS)
Du, Shihong; Guo, Luo; Wang, Qiao; Qin, Qimin
The extended 9-intersection matrix is used to formalize topological relations between uncertain regions, but it is designed to satisfy requirements at the conceptual level and treats complex regions with broad boundaries (CBBRs) as a whole, without considering their hierarchical structures. In contrast to simple regions with broad boundaries, CBBRs have complex hierarchical structures. It is therefore necessary to take this hierarchical structure into account and to represent the topological relations between all regions in CBBRs as a relation matrix, rather than determining topological relations with the extended 9-intersection matrix alone. In this study, a tree model is first used to represent the intrinsic configuration of CBBRs hierarchically. Then, reasoning tables are presented for deriving topological relations between child, parent and sibling regions from the relations between two given regions in CBBRs. Finally, based on this reasoning, efficient methods are proposed to compute and derive the topological relation matrix. The proposed methods can be incorporated into spatial databases to facilitate geometry-oriented applications.
The traveling salesman problem: a hierarchical model.
Graham, S M; Joshi, A; Pizlo, Z
2000-10-01
Our review of prior literature on spatial information processing in perception, attention, and memory indicates that these cognitive functions involve similar mechanisms based on a hierarchical architecture. The present study extends the application of hierarchical models to the area of problem solving. First, we report results of an experiment in which human subjects were tested on a Euclidean traveling salesman problem (TSP) with 6 to 30 cities. The subjects' solutions were either optimal or near-optimal in length and were produced in a time that was, on average, a linear function of the number of cities. Next, the performance of the subjects is compared with that of five representative artificial intelligence and operations research algorithms that produce approximate solutions for Euclidean problems. None of these algorithms was found to be an adequate psychological model. Finally, we present a new algorithm for solving the TSP, which is based on a hierarchical pyramid architecture. The performance of this new algorithm is quite similar to the performance of the subjects.
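To make the coarse-to-fine idea concrete, here is a hedged sketch of a generic cluster-first, route-second heuristic in the spirit of pyramid/hierarchical TSP solvers: cities are grouped, the group centres are ordered, and each group is then toured locally. This is an illustrative stand-in, not the authors' pyramid algorithm; the cluster count and city coordinates are arbitrary.

import numpy as np

def nn_tour(points, start=0):
    """Greedy nearest-neighbour tour over a small set of points."""
    unvisited = list(range(len(points)))
    tour = [unvisited.pop(start)]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: np.linalg.norm(points[i] - last))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

def hierarchical_tsp(cities, n_clusters=5, rng=None):
    """Coarse-to-fine heuristic: order cluster centres first, then order the
    cities inside each cluster, visiting clusters in the coarse order."""
    rng = rng or np.random.default_rng(0)
    centres = cities[rng.choice(len(cities), n_clusters, replace=False)]
    for _ in range(10):                      # crude k-means-style clustering
        labels = np.argmin(((cities[:, None, :] - centres[None]) ** 2).sum(-1), axis=1)
        centres = np.array([cities[labels == k].mean(0) if np.any(labels == k)
                            else centres[k] for k in range(n_clusters)])
    coarse_order = nn_tour(centres)
    tour = []
    for k in coarse_order:
        members = np.where(labels == k)[0]
        if len(members):
            tour += list(members[nn_tour(cities[members])])
    return tour

cities = np.random.default_rng(2).uniform(0, 100, size=(30, 2))
tour = hierarchical_tsp(cities)
length = sum(np.linalg.norm(cities[tour[i]] - cities[tour[(i + 1) % len(tour)]])
             for i in range(len(tour)))
print("tour length:", round(length, 1))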
A hierarchical modeling methodology for the definition and selection of requirements
NASA Astrophysics Data System (ADS)
Dufresne, Stephane
This dissertation describes the development of a requirements analysis methodology that takes into account the concept of operations and the hierarchical decomposition of aerospace systems. At the core of the methodology, the Analytic Network Process (ANP) is used to ensure the traceability between the qualitative and quantitative information present in the hierarchical model. The proposed methodology is applied to the requirements definition of a hurricane tracker Unmanned Aerial Vehicle. Three research objectives are identified in this work: (1) improve the requirements mapping process by matching the stakeholder expectations with the concept of operations, systems and available resources; (2) reduce the epistemic uncertainty surrounding the requirements and requirements mapping; and (3) improve the requirements down-selection process by taking into account the level of importance of the criteria and the available resources. Several challenges are associated with the identification and definition of requirements. The complexity of the system implies that a large number of requirements are needed to define the systems. These requirements are defined early in conceptual design, where the level of knowledge is relatively low and the level of uncertainty is large. The proposed methodology intends to increase the level of knowledge and reduce the level of uncertainty by guiding the design team through a structured process. To address these challenges, a new methodology is created to flow down the requirements from the stakeholder expectations to the systems alternatives. A taxonomy of requirements is created to classify the information gathered during the problem definition. Subsequently, the operational and systems functions and measures of effectiveness are integrated into a hierarchical model to allow the traceability of the information. Monte Carlo methods are used to evaluate the variations of the hierarchical model elements and consequently reduce the epistemic uncertainty. The proposed methodology is applied to the design of a hurricane tracker Unmanned Aerial Vehicle to demonstrate the origin and impact of requirements on the concept of operations and systems alternatives. This research demonstrates that the hierarchical modeling methodology provides a traceable flow-down of the requirements from the problem definition to the systems alternatives phases of conceptual design.
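The numerical heart of ANP is raising a column-stochastic weighted supermatrix to successive powers until it converges, which yields the limiting (global) priorities used to trace and rank elements. The sketch below shows only that step on a small, made-up supermatrix; the network, its four elements, and the weights are hypothetical and unrelated to the dissertation's model.

import numpy as np

def anp_priorities(supermatrix, tol=1e-10, max_iter=10000):
    """Raise a column-stochastic weighted supermatrix to powers until the columns
    converge; the stable columns give the limiting (global) priorities."""
    M = supermatrix.copy()
    for _ in range(max_iter):
        M_next = M @ supermatrix
        if np.max(np.abs(M_next - M)) < tol:
            break
        M = M_next
    return M.mean(axis=1)   # average over columns in case of slight residual cycling

# Hypothetical 4-element network (2 requirements, 2 system alternatives) with
# made-up pairwise-derived weights; every column sums to 1.
W = np.array([[0.0, 0.2, 0.6, 0.5],
              [0.4, 0.0, 0.4, 0.5],
              [0.3, 0.5, 0.0, 0.0],
              [0.3, 0.3, 0.0, 0.0]])
print("limiting priorities:", np.round(anp_priorities(W), 3))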
NASA Astrophysics Data System (ADS)
Sun, Benyuan; Yue, Shihong; Cui, Ziqiang; Wang, Huaxiang
2015-12-01
As a non-radiative, non-intrusive, fast-response, and low-cost measurement technique, electrical tomography (ET) has developed rapidly in recent decades. The imaging algorithm plays an important role in the ET imaging process. Linear back projection (LBP) is the most widely used ET algorithm due to its advantages of a dynamic imaging process, real-time response, and easy realization. However, the LBP algorithm has low spatial resolution because of the inherent 'soft field' effect and the ill-posedness of the inverse problem, so its range of application is greatly limited. In this paper, an original data decomposition method is proposed in which each ET measurement is decomposed into two independent new measurements based on the positive and negative sensing areas of that measurement. Consequently, the total number of measurements is doubled, which effectively mitigates the ill-posedness. In addition, an index to quantify the 'soft field' effect is proposed. The index shows that the decomposed data can distinguish the different contributions of the various units (pixels) to any ET measurement, and can efficiently reduce the 'soft field' effect in the ET imaging process. Based on the data decomposition method, a new linear back projection algorithm is proposed to improve the spatial resolution of the ET image. A series of simulations and experiments validate the proposed algorithm in terms of real-time performance and improved spatial resolution.
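For reference, classic LBP forms the image by projecting the boundary measurements back through a normalized sensitivity matrix. The NumPy sketch below shows that baseline step only, on a toy random sensitivity matrix; it does not implement the paper's data-decomposition variant, and the measurement count and grid size are arbitrary.

import numpy as np

def linear_back_projection(S, b):
    """Classic LBP: project boundary measurements b back through the normalized
    sensitivity matrix S (measurements x pixels) to get a grey-level image."""
    g = S.T @ b
    norm = S.T @ np.ones(S.shape[0])   # per-pixel sum of sensitivities
    return g / np.maximum(norm, 1e-12)

# Toy problem: 104 measurements (16-electrode ERT-like protocol), 32x32 pixel grid.
rng = np.random.default_rng(3)
S = np.abs(rng.standard_normal((104, 32 * 32)))
true_image = np.zeros(32 * 32)
true_image[500:520] = 1.0
b = S @ true_image
recon = linear_back_projection(S, b).reshape(32, 32)
print("reconstruction shape:", recon.shape, " max value:", round(float(recon.max()), 3))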
Gopalaswamy, Arjun M.; Royle, J. Andrew; Hines, James E.; Singh, Pallavi; Jathanna, Devcharan; Kumar, N. Samba; Karanth, K. Ullas
2012-01-01
1. The advent of spatially explicit capture-recapture models is changing the way ecologists analyse capture-recapture data. However, the advantages offered by these new models are not fully exploited because they can be difficult to implement. 2. To address this need, we developed a user-friendly software package, created within the R programming environment, called SPACECAP. This package implements Bayesian spatially explicit hierarchical models to analyse spatial capture-recapture data. 3. Given that a large number of field biologists prefer software with graphical user interfaces for analysing their data, SPACECAP is particularly useful as a tool to increase the adoption of Bayesian spatially explicit capture-recapture methods in practice.
Hierarchical Bayesian method for mapping biogeochemical hot spots using induced polarization imaging
Wainwright, Haruko M.; Flores Orozco, Adrian; Bucker, Matthias; ...
2016-01-29
In floodplain environments, a naturally reduced zone (NRZ) is considered to be a common biogeochemical hot spot, having distinct microbial and geochemical characteristics. Although important for understanding their role in mediating floodplain biogeochemical processes, mapping the subsurface distribution of NRZs over the dimensions of a floodplain is challenging, as conventional wellbore data are typically spatially limited and the distribution of NRZs is heterogeneous. In this work, we present an innovative methodology for the probabilistic mapping of NRZs within a three-dimensional (3-D) subsurface domain using induced polarization imaging, which is a noninvasive geophysical technique. Measurements consist of surface geophysical surveys and drilling-recovered sediments at the U.S. Department of Energy field site near Rifle, CO (USA). Inversion of surface time domain-induced polarization (TDIP) data yielded 3-D images of the complex electrical resistivity, in terms of magnitude and phase, which are associated with mineral precipitation and other lithological properties. By extracting the TDIP data values colocated with wellbore lithological logs, we found that the NRZs have a different distribution of resistivity and polarization from the other aquifer sediments. To estimate the spatial distribution of NRZs, we developed a Bayesian hierarchical model to integrate the geophysical and wellbore data. In addition, the resistivity images were used to estimate hydrostratigraphic interfaces under the floodplain. Validation results showed that the integration of electrical imaging and wellbore data using a Bayesian hierarchical model was capable of mapping spatially heterogeneous interfaces and NRZ distributions thereby providing a minimally invasive means to parameterize a hydrobiogeochemical model of the floodplain.
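As a much-simplified illustration of the mapping idea, the sketch below classifies voxels as NRZ or background from colocated geophysical attributes using Gaussian class-conditional densities and Bayes' rule; the resistivity/phase features, their distributions, and the prior are invented, and the paper's actual hierarchical model is far richer (spatial priors, hydrostratigraphic interfaces, MCMC inference).

import numpy as np

def fit_gaussian(features):
    return features.mean(axis=0), np.cov(features, rowvar=False)

def log_gauss(x, mean, cov):
    d = x - mean
    inv = np.linalg.inv(cov)
    return -0.5 * (np.sum(d @ inv * d, axis=1)
                   + np.log(np.linalg.det(cov)) + len(mean) * np.log(2 * np.pi))

# Hypothetical wellbore-colocated training data: [log resistivity, phase (mrad)].
rng = np.random.default_rng(4)
nrz = np.column_stack([rng.normal(1.5, 0.2, 80), rng.normal(12, 3, 80)])
background = np.column_stack([rng.normal(2.2, 0.3, 200), rng.normal(5, 2, 200)])
prior_nrz = len(nrz) / (len(nrz) + len(background))

m1, c1 = fit_gaussian(nrz)
m0, c0 = fit_gaussian(background)
grid = np.column_stack([rng.normal(1.9, 0.4, 10), rng.normal(8, 4, 10)])  # new voxels
log_post1 = log_gauss(grid, m1, c1) + np.log(prior_nrz)
log_post0 = log_gauss(grid, m0, c0) + np.log(1 - prior_nrz)
p_nrz = 1.0 / (1.0 + np.exp(log_post0 - log_post1))
print("P(NRZ) for 10 hypothetical voxels:", np.round(p_nrz, 2))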
Zhou, Chao; Zhao, Yufei; Bian, Tong; Shang, Lu; Yu, Huijun; Wu, Li-Zhu; Tung, Chen-Ho; Zhang, Tierui
2013-10-28
Hierarchical Sn2Nb2O7 hollow spheres were prepared for the first time via a facile hydrothermal route using bubbles generated in situ from the decomposition of urea as soft templates. The as-obtained hollow spheres with a large specific surface area of 58.3 m(2) g(-1) show improved visible-light-driven photocatalytic H2 production activity in lactic acid aqueous solutions, about 4 times higher than that of the bulk Sn2Nb2O7 sample prepared by a conventional high temperature solid state reaction method.
NASA Technical Reports Server (NTRS)
Houlahan, Padraig; Scalo, John
1992-01-01
A new method of image analysis is described, in which images partitioned into 'clouds' are represented by simplified skeleton images, called structure trees, that preserve the spatial relations of the component clouds while disregarding information concerning their sizes and shapes. The method can be used to discriminate between images of projected hierarchical (multiply nested) and random three-dimensional simulated collections of clouds constructed on the basis of observed interstellar properties, and even intermediate systems formed by combining random and hierarchical simulations. For a given structure type, the method can distinguish between different subclasses of models with different parameters and reliably estimate their hierarchical parameters: average number of children per parent, scale reduction factor per level of hierarchy, density contrast, and number of resolved levels. An application to a column density image of the Taurus complex constructed from IRAS data is given. Moderately strong evidence for a hierarchical structural component is found, and parameters of the hierarchy, as well as the average volume filling factor and mass efficiency of fragmentation per level of hierarchy, are estimated. The existence of nested structure contradicts models in which large molecular clouds are supposed to fragment, in a single stage, into roughly stellar-mass cores.
Gao, Pu-Xian; Shimpi, Paresh; Gao, Haiyong; Liu, Caihong; Guo, Yanbing; Cai, Wenjie; Liao, Kuo-Ting; Wrobel, Gregory; Zhang, Zhonghua; Ren, Zheng; Lin, Hui-Jan
2012-01-01
Composite nanoarchitectures represent a class of nanostructured entities that integrates various dissimilar nanoscale building blocks including nanoparticles, nanowires, and nanofilms toward realizing multifunctional characteristics. A broad array of composite nanoarchitectures can be designed and fabricated, involving generic materials such as metals, ceramics, and polymers in nanoscale form. In this review, we will highlight the latest progress on composite nanostructures in our research group, particularly on various metal oxides including binary semiconductors, ABO3-type perovskites, A2BO4 spinels and quaternary dielectric hydroxyl metal oxides (AB(OH)6) with diverse application potential. Through a generic template strategy in conjunction with various synthetic approaches (such as hydrothermal decomposition, colloidal deposition, physical sputtering, thermal decomposition and thermal oxidation), semiconductor oxide alloy nanowires, metal oxide/perovskite (spinel) composite nanowires, stannate-based nanocomposites, as well as semiconductor heterojunction arrays and networks have been self-assembled in large scale and are being developed as promising classes of composite nanoarchitectures, which may open a new array of advanced nanotechnologies in solid state lighting, solar absorption, photocatalysis and battery, auto-emission control, and chemical sensing. PMID:22837702
Formal and heuristic system decomposition methods in multidisciplinary synthesis. Ph.D. Thesis, 1991
NASA Technical Reports Server (NTRS)
Bloebaum, Christina L.
1991-01-01
The multidisciplinary interactions which exist in large scale engineering design problems provide a unique set of difficulties. These difficulties are associated primarily with unwieldy numbers of design variables and constraints, and with the interdependencies of the discipline analysis modules. Such obstacles require design techniques which account for the inherent disciplinary couplings in the analyses and optimizations. The objective of this work was to develop an efficient holistic design synthesis methodology that takes advantage of the synergistic nature of integrated design. A general decomposition approach for optimization of large engineering systems is presented. The method is particularly applicable for multidisciplinary design problems which are characterized by closely coupled interactions among discipline analyses. The advantage of subsystem modularity allows for implementation of specialized methods for analysis and optimization, computational efficiency, and the ability to incorporate human intervention and decision making in the form of an expert systems capability. The resulting approach is not a method applicable to only a specific situation, but rather, a methodology which can be used for a large class of engineering design problems in which the system is non-hierarchic in nature.
Pantazatos, Spiro P.; Li, Jianrong; Pavlidis, Paul; Lussier, Yves A.
2009-01-01
An approach towards heterogeneous neuroscience dataset integration is proposed that uses Natural Language Processing (NLP) and a knowledge-based phenotype organizer system (PhenOS) to link ontology-anchored terms to underlying data from each database, and then maps these terms based on a computable model of disease (SNOMED CT®). The approach was implemented using sample datasets from fMRIDC, GEO, The Whole Brain Atlas and Neuronames, and allowed for complex queries such as “List all disorders with a finding site of brain region X, and then find the semantically related references in all participating databases based on the ontological model of the disease or its anatomical and morphological attributes”. Precision of the NLP-derived coding of the unstructured phenotypes in each dataset was 88% (n = 50), and precision of the semantic mapping between these terms across datasets was 98% (n = 100). To our knowledge, this is the first example of the use of both semantic decomposition of disease relationships and hierarchical information found in ontologies to integrate heterogeneous phenotypes across clinical and molecular datasets. PMID:20495688
Aerospace engineering design by systematic decomposition and multilevel optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Giles, G. L.; Barthelemy, J.-F. M.
1984-01-01
This paper describes a method for systematic analysis and optimization of large engineering systems, e.g., aircraft, by decomposition of a large task into a set of smaller, self-contained subtasks that can be solved concurrently. The subtasks may be arranged in many hierarchical levels with the assembled system at the top level. Analyses are carried out in each subtask using inputs received from other subtasks, and are followed by optimizations carried out from the bottom up. Each optimization at the lower levels is augmented by analysis of its sensitivity to the inputs received from other subtasks to account for the couplings among the subtasks in a formal manner. The analysis and optimization operations alternate iteratively until they converge to a system design whose performance is maximized with all constraints satisfied. The method, which is still under development, is tentatively validated by test cases in structural applications and an aircraft configuration optimization. It is pointed out that the method is intended to be compatible with the typical engineering organization and the modern technology of distributed computing.
Completing the land resource hierarchy
USDA-ARS?s Scientific Manuscript database
The Land Resource Hierarchy of the NRCS is a hierarchical landscape classification consisting of resource areas which represent both conceptual and spatially discrete landscape units stratifying agency programs and practices. The Land Resource Hierarchy (LRH) scales from discrete points (soil pedon an...
The Hierarchical Distribution of the Young Stellar Clusters in Six Local Star-forming Galaxies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grasha, K.; Calzetti, D.; Adamo, A.
We present a study of the hierarchical clustering of the young stellar clusters in six local (3–15 Mpc) star-forming galaxies using Hubble Space Telescope broadband WFC3/UVIS UV and optical images from the Treasury Program LEGUS (Legacy ExtraGalactic UV Survey). We identified 3685 likely clusters and associations, each visually classified by their morphology, and we use the angular two-point correlation function to study the clustering of these stellar systems. We find that the young clusters and associations are clustered with respect to each other, forming large, unbound hierarchical star-forming complexes that are in general very young. The strength of the clustering decreases with increasing age of the star clusters and stellar associations, becoming more homogeneously distributed after ∼40–60 Myr and on scales larger than a few hundred parsecs. In all galaxies, the associations exhibit a global behavior that is distinct from that of the compact clusters and more strongly correlated. Thus, populations of clusters are more evolved than associations in terms of their spatial distribution, traveling significantly from their birth site within a few tens of Myr, whereas associations show evidence of disruption occurring very quickly after their formation. The clustering of the stellar systems resembles that of a turbulent interstellar medium that drives the star formation process, correlating the components in unbound star-forming complexes in a hierarchical manner, dispersing shortly after formation, suggestive of a single, continuous mode of star formation across all galaxies.
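The angular two-point correlation function used above measures clustering by comparing the pair-separation distribution of the observed positions with that of a random catalogue covering the same footprint. Below is a hedged NumPy sketch using the simple 'natural' estimator DD/RR - 1 (a real analysis would typically use a more robust estimator such as Landy-Szalay and the actual survey geometry); the cluster positions, footprint, and scales are synthetic.

import numpy as np

def pair_counts(xy, bins):
    """Histogram of pairwise separations for a set of 2-D positions."""
    d = np.sqrt(((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1))
    iu = np.triu_indices(len(xy), k=1)
    return np.histogram(d[iu], bins=bins)[0].astype(float)

def two_point_corr(data, n_random=1000, bins=None, rng=None):
    """Natural estimator 1 + xi(r) ~ DD(r)/RR(r) against a uniform random catalogue
    spanning the same bounding box."""
    rng = rng or np.random.default_rng(5)
    bins = bins if bins is not None else np.linspace(1, 200, 21)
    lo, hi = data.min(0), data.max(0)
    rand = rng.uniform(lo, hi, size=(n_random, 2))
    dd = pair_counts(data, bins) / (len(data) * (len(data) - 1) / 2)
    rr = pair_counts(rand, bins) / (n_random * (n_random - 1) / 2)
    return bins, dd / np.maximum(rr, 1e-12) - 1.0

# Hypothetical clustered "cluster/association" positions (pc) in a galaxy footprint.
rng = np.random.default_rng(6)
centres = rng.uniform(0, 1000, size=(20, 2))
positions = np.vstack([c + rng.normal(0, 30, size=(30, 2)) for c in centres])
bins, xi = two_point_corr(positions)
print("xi at the smallest separations:", np.round(xi[:3], 2))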
NASA Astrophysics Data System (ADS)
Montillo, Albert; Song, Qi; Das, Bipul; Yin, Zhye
2015-03-01
Parsing volumetric computed tomography (CT) into 10 or more salient organs simultaneously is a challenging task with many applications such as personalized scan planning and dose reporting. In the clinic, pre-scan data can come in the form of very low dose volumes acquired just prior to the primary scan or from an existing primary scan. To localize organs in such diverse data, we propose a new learning-based framework that we call hierarchical pictorial structures (HPS), which builds multiple levels of models in a tree-like hierarchy that mirrors the natural decomposition of human anatomy from gross structures to finer structures. Each node of our hierarchical model learns (1) the local appearance and shape of structures, and (2) a generative global model that learns probabilistic structural arrangement. Our main contribution is twofold. First, we embed the pictorial structures approach in a hierarchical framework, which reduces test-time image interpretation and allows for the incorporation of additional geometric constraints that robustly guide model fitting in the presence of noise. Second, we guide our HPS framework with probabilistic cost maps extracted using random decision forests on volumetric 3D HOG features, which makes our model fast to train, fast to apply to novel test data, and highly invariant to shape distortion and imaging artifacts. All steps require approximately 3 minutes to compute, and all organs are located with suitably high accuracy for our clinical applications such as personalized scan planning for radiation dose reduction. We assess our method using a database of volumetric CT scans from 81 subjects with widely varying age and pathology and with simulated ultra-low dose cadaver pre-scan data.
Sun, Xiaoxia; Wang, Kunpeng; Shu, Yu; Zou, Fangdong; Zhang, Boxing; Sun, Guangwu; Uyama, Hiroshi; Wang, Xinhou
2017-01-01
In this study, novel photocatalyst monolith materials were successfully fabricated by a non-solvent induced phase separation (NIPS) technique. By adding a certain amount of ethyl acetate (as non-solvent) into a cellulose/LiCl/N,N-dimethylacetamide (DMAc) solution, and successively adding titanium dioxide (TiO2) nanoparticles (NPs), cellulose/TiO2 composite monoliths with hierarchically porous structures were easily formed. The obtained composite monoliths possessed mesopores and two kinds of macropores. Scanning Electron Microscopy (SEM), Energy Dispersive Spectroscopy (EDS), Fourier Transform Infrared Spectroscopy (FT-IR), X-ray Diffraction (XRD), Brunauer-Emmett-Teller (BET), and Ultraviolet-visible Spectroscopy (UV-Vis) measurements were adopted to characterize the cellulose/TiO2 composite monolith. The cellulose/TiO2 composite monoliths showed highly efficient photocatalytic activity in the decomposition of methylene blue dye, which was decomposed up to 99% within 60 min under UV light. Moreover, the composite monoliths could retain 90% of the photodegradation efficiency after 10 cycles. The novel NIPS technique has great potential for fabricating recyclable photocatalysts with high efficiency. PMID:28772734
Mohammed, Abdul-Wahid; Xu, Yang; Hu, Haixiao; Agyemang, Brighter
2016-09-21
In novel collaborative systems, cooperative entities collaborate on services to achieve local and global objectives. With the growing pervasiveness of cyber-physical systems, however, such collaboration is hampered by differences in the operations of the cyber and physical objects, and the dynamic formation of collaborative functionality from high-level system goals has become a practical need. In this paper, we propose a cross-layer automation and management model for cyber-physical systems. This model represents the dynamic formation of collaborative services pursuing laid-down system goals as an ontology-oriented hierarchical task network. Ontological intelligence provides the semantic technology of this model, and through semantic reasoning, primitive tasks can be dynamically composed from high-level system goals. To deal with uncertainty, we further propose a novel bridge between hierarchical task networks and Markov logic networks, called the Markov task network. This leverages the efficient inference algorithms of Markov logic networks to reduce both computational and inferential loads in task decomposition. The results of our experiments show that high-precision service composition under uncertainty can be achieved using this approach.
The Effects of City Streets on an Urban Disease Vector
Barbu, Corentin M.; Hong, Andrew; Manne, Jennifer M.; Small, Dylan S.; Quintanilla Calderón, Javier E.; Sethuraman, Karthik; Quispe-Machaca, Víctor; Ancca-Juárez, Jenny; Cornejo del Carpio, Juan G.; Málaga Chavez, Fernando S.; Náquira, César; Levy, Michael Z.
2013-01-01
With increasing urbanization vector-borne diseases are quickly developing in cities, and urban control strategies are needed. If streets are shown to be barriers to disease vectors, city blocks could be used as a convenient and relevant spatial unit of study and control. Unfortunately, existing spatial analysis tools do not allow for assessment of the impact of an urban grid on the presence of disease agents. Here, we first propose a method to test for the significance of the impact of streets on vector infestation based on a decomposition of Moran's spatial autocorrelation index; and second, develop a Gaussian Field Latent Class model to finely describe the effect of streets while controlling for cofactors and imperfect detection of vectors. We apply these methods to cross-sectional data of infestation by the Chagas disease vector Triatoma infestans in the city of Arequipa, Peru. Our Moran's decomposition test reveals that the distribution of T. infestans in this urban environment is significantly constrained by streets (p<0.05). With the Gaussian Field Latent Class model we confirm that streets provide a barrier against infestation and further show that greater than 90% of the spatial component of the probability of vector presence is explained by the correlation among houses within city blocks. The city block is thus likely to be an appropriate spatial unit to describe and control T. infestans in an urban context. Characteristics of the urban grid can influence the spatial dynamics of vector borne disease and should be considered when designing public health policies. PMID:23341756
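The decomposition idea behind the streets test can be illustrated by computing Moran's I twice, once over house pairs that are adjacent within a city block and once over adjacent pairs separated by a street. The sketch below does this on a synthetic grid of houses; the grid, block layout, and infestation probabilities are invented, and the authors' actual test statistic and its significance assessment differ.

import numpy as np

def morans_i(x, W):
    """Moran's spatial autocorrelation index for values x under weight matrix W."""
    z = x - x.mean()
    return len(x) / W.sum() * (z @ W @ z) / (z @ z)

# Hypothetical houses on a 10x10 grid split into four 5x5 city blocks.
rng = np.random.default_rng(7)
coords = np.array([(i, j) for i in range(10) for j in range(10)])
block = coords[:, 0] // 5 * 2 + coords[:, 1] // 5
# infestation more likely in two of the four blocks
x = rng.binomial(1, np.where(np.isin(block, [0, 3]), 0.6, 0.1)).astype(float)

dist = np.abs(coords[:, None, :] - coords[None, :, :]).sum(-1)
adjacent = (dist == 1)
same_block = block[:, None] == block[None, :]
W_within = (adjacent & same_block).astype(float)   # neighbours on the same block
W_across = (adjacent & ~same_block).astype(float)  # neighbours across a street
print("Moran's I within blocks :", round(morans_i(x, W_within), 3))
print("Moran's I across streets:", round(morans_i(x, W_across), 3))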
Singh, Baneshwar; Minick, Kevan J.; Strickland, Michael S.; Wickings, Kyle G.; Crippen, Tawni L.; Tarone, Aaron M.; Benbow, M. Eric; Sufrin, Ness; Tomberlin, Jeffery K.; Pechal, Jennifer L.
2018-01-01
As vertebrate carrion decomposes, there is a release of nutrient-rich fluids into the underlying soil, which can impact associated biological community structure and function. How these changes alter soil biogeochemical cycles is relatively unknown and may prove useful in the identification of carrion decomposition islands that have long lasting, focal ecological effects. This study investigated the spatial (0, 1, and 5 m) and temporal (3–732 days) dynamics of human cadaver decomposition on soil bacterial and arthropod community structure and microbial function. We observed strong evidence of a predictable response to cadaver decomposition that varies over space for soil bacterial and arthropod community structure, carbon (C) mineralization and microbial substrate utilization patterns. In the presence of a cadaver (i.e., 0 m samples), the relative abundance of Bacteroidetes and Firmicutes was greater, while the relative abundance of Acidobacteria, Chloroflexi, Gemmatimonadetes, and Verrucomicrobia was lower when compared to samples at 1 and 5 m. Micro-arthropods were more abundant (15 to 17-fold) in soils collected at 0 m compared to either 1 or 5 m, but overall, micro-arthropod community composition was unrelated to either bacterial community composition or function. Bacterial community structure and microbial function also exhibited temporal relationships, whereas arthropod community structure did not. Cumulative precipitation was more effective in predicting temporal variations in bacterial abundance and microbial activity than accumulated degree days. In the presence of the cadaver (i.e., 0 m samples), the relative abundance of Actinobacteria increased significantly with cumulative precipitation. Furthermore, soil bacterial communities and C mineralization were sensitive to the introduction of human cadavers as they diverged from baseline levels and did not recover completely in approximately 2 years. These data are valuable for understanding ecosystem function surrounding carrion decomposition islands and can be applicable to environmental bio-monitoring and forensic sciences. PMID:29354106
High performance computation of radiative transfer equation using the finite element method
NASA Astrophysics Data System (ADS)
Badri, M. A.; Jolivet, P.; Rousseau, B.; Favennec, Y.
2018-05-01
This article deals with an efficient strategy for numerically simulating radiative transfer phenomena using distributed computing. The finite element method alongside the discrete ordinate method is used for spatio-angular discretization of the monochromatic steady-state radiative transfer equation in an anisotropically scattering medium. Two very different methods of parallelization, angular and spatial decomposition, are presented. To do so, the finite element method is used in a vectorial way. A detailed comparison of scalability, performance, and efficiency on thousands of processors is established for two- and three-dimensional heterogeneous test cases. Timings show that both algorithms scale well when using proper preconditioners. It is also observed that our angular decomposition scheme outperforms our domain decomposition method. Overall, we perform numerical simulations at scales that were previously unattainable by standard radiative transfer equation solvers.
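Angular decomposition exploits the fact that, within a source iteration, the discrete-ordinate directions are independent, so blocks of directions can be solved on different processes and the scalar flux assembled by a reduction. The sketch below illustrates only that idea, with a trivial 1-D attenuation "solve" standing in for the per-direction finite element solve; the grid, absorption coefficient, quadrature order, and block count are arbitrary, and a real implementation would replace the final sum with an MPI allreduce.

import numpy as np

def solve_direction(mu, kappa, x, boundary=1.0):
    """Stand-in transport solve for one ordinate: pure attenuation along a 1-D slab
    (a placeholder for the per-direction finite element solve)."""
    path = np.where(mu > 0, x, x[-1] - x) / np.abs(mu)
    return boundary * np.exp(-kappa * path)

x = np.linspace(0.0, 1.0, 101)                    # spatial grid
kappa = 5.0                                       # absorption coefficient
mu, w = np.polynomial.legendre.leggauss(16)       # 16 ordinates and quadrature weights

# Angular decomposition: split the ordinate set into independent blocks that could be
# assigned to different processes; each block contributes a partial scalar flux.
blocks = np.array_split(np.arange(len(mu)), 4)
partial_flux = [sum(w[i] * solve_direction(mu[i], kappa, x) for i in b) for b in blocks]
scalar_flux = np.sum(partial_flux, axis=0)        # reduction step (MPI allreduce in practice)
print("scalar flux at slab centre:", round(float(scalar_flux[50]), 4))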
The neural basis of novelty and appropriateness in processing of creative chunk decomposition.
Huang, Furong; Fan, Jin; Luo, Jing
2015-06-01
Novelty and appropriateness have been recognized as the fundamental features of creative thinking. However, the brain mechanisms underlying these features remain largely unknown. In this study, we used event-related functional magnetic resonance imaging (fMRI) to dissociate these mechanisms in a revised creative chunk decomposition task in which participants were required to perform different types of chunk decomposition that systematically varied in novelty and appropriateness. We found that novelty processing involved functional areas for procedural memory (caudate), mental rewarding (substantia nigra, SN), and visual-spatial processing, whereas appropriateness processing was mediated by areas for declarative memory (hippocampus), emotional arousal (amygdala), and orthography recognition. These results indicate that non-declarative and declarative memory systems may jointly contribute to the two fundamental features of creative thinking. Copyright © 2015 Elsevier Inc. All rights reserved.
Jones, Leslie A.; Muhlfeld, Clint C.; Marshall, Lucy A.; McGlynn, Brian L.; Kershner, Jeffrey L.
2013-01-01
Understanding the vulnerability of aquatic species and habitats under climate change is critical for conservation and management of freshwater systems. Climate warming is predicted to increase water temperatures in freshwater ecosystems worldwide, yet few studies have developed spatially explicit modelling tools for understanding the potential impacts. We parameterized a nonspatial model, a spatial flow-routed model, and a spatial hierarchical model to predict August stream temperatures (22-m resolution) throughout the Flathead River Basin, USA and Canada. Model comparisons showed that the spatial models performed significantly better than the nonspatial model, explaining the spatial autocorrelation found between sites. The spatial hierarchical model explained 82% of the variation in summer mean (August) stream temperatures and was used to estimate thermal regimes for threatened bull trout (Salvelinus confluentus) habitats, one of the most thermally sensitive coldwater species in western North America. The model estimated summer thermal regimes of spawning and rearing habitats at <13 °C and foraging, migrating, and overwintering habitats at <14 °C. To illustrate the useful application of such a model, we simulated climate warming scenarios to quantify potential loss of critical habitats under forecasted climatic conditions. As air and water temperatures continue to increase, our model simulations show that lower portions of the Flathead River Basin drainage (foraging, migrating, and overwintering habitat) may become thermally unsuitable and headwater streams (spawning and rearing) may become isolated because of increasing thermal fragmentation during summer. Model results can be used to focus conservation and management efforts on populations of concern, by identifying critical habitats and assessing thermal changes at a local scale.
Schubert, Walter
2013-01-01
Understanding biological systems at the level of their relational (emergent) molecular properties in functional protein networks relies on imaging methods, able to spatially resolve a tissue or a cell as a giant, non-random, topologically defined collection of interacting supermolecules executing myriads of subcellular mechanisms. Here, the development and findings of parameter-unlimited functional super-resolution microscopy are described—a technology based on the fluorescence imaging cycler (IC) principle capable of co-mapping thousands of distinct biomolecular assemblies at high spatial resolution and differentiation (<40 nm distances). It is shown that the subcellular and transcellular features of such supermolecules can be described at the compositional and constitutional levels; that the spatial connection, relational stoichiometry, and topology of supermolecules generate hitherto unrecognized functional self-segmentation of biological tissues; that hierarchical features, common to thousands of simultaneously imaged supermolecules, can be identified; and how the resulting supramolecular order relates to spatial coding of cellular functionalities in biological systems. A large body of observations with IC molecular systems microscopy collected over 20 years have disclosed principles governed by a law of supramolecular segregation of cellular functionalities. This pervades phenomena, such as exceptional orderliness, functional selectivity, combinatorial and spatial periodicity, and hierarchical organization of large molecular systems, across all species investigated so far. This insight is based on the high degree of specificity, selectivity, and sensitivity of molecular recognition processes for fluorescence imaging beyond the spectral resolution limit, using probe libraries controlled by ICs. © 2013 The Authors. Journal of Molecular Recognition published by John Wiley & Sons, Ltd. PMID:24375580
Using Matrix and Tensor Factorizations for the Single-Trial Analysis of Population Spike Trains.
Onken, Arno; Liu, Jian K; Karunasekara, P P Chamanthi R; Delis, Ioannis; Gollisch, Tim; Panzeri, Stefano
2016-11-01
Advances in neuronal recording techniques are leading to ever larger numbers of simultaneously monitored neurons. This poses the important analytical challenge of how to capture compactly all sensory information that neural population codes carry in their spatial dimension (differences in stimulus tuning across neurons at different locations), in their temporal dimension (temporal neural response variations), or in their combination (temporally coordinated neural population firing). Here we investigate the utility of tensor factorizations of population spike trains along space and time. These factorizations decompose a dataset of single-trial population spike trains into spatial firing patterns (combinations of neurons firing together), temporal firing patterns (temporal activation of these groups of neurons) and trial-dependent activation coefficients (strength of recruitment of such neural patterns on each trial). We validated various factorization methods on simulated data and on populations of ganglion cells simultaneously recorded in the salamander retina. We found that single-trial tensor space-by-time decompositions provided low-dimensional data-robust representations of spike trains that capture efficiently both their spatial and temporal information about sensory stimuli. Tensor decompositions with orthogonality constraints were the most efficient in extracting sensory information, whereas non-negative tensor decompositions worked well even on non-independent and overlapping spike patterns, and retrieved informative firing patterns expressed by the same population in response to novel stimuli. Our method showed that populations of retinal ganglion cells carried information in their spike timing on the ten-milliseconds-scale about spatial details of natural images. This information could not be recovered from the spike counts of these cells. First-spike latencies carried the majority of information provided by the whole spike train about fine-scale image features, and supplied almost as much information about coarse natural image features as firing rates. Together, these results highlight the importance of spike timing, and particularly of first-spike latencies, in retinal coding.
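The space-by-time idea can be illustrated, in a much reduced form, with plain non-negative matrix factorization applied to a trial-concatenated count matrix: the left factor plays the role of spatial firing patterns (groups of neurons firing together) and the right factor their activation over time and trials. The sketch below uses simulated Poisson spike counts with two planted patterns; the neuron, bin, and trial counts are arbitrary, and the paper's full tensor decompositions with constraints along each mode are not reproduced here.

import numpy as np

def nmf(V, k, n_iter=300, rng=None):
    """Non-negative matrix factorization V ~ W @ H via multiplicative updates."""
    rng = rng or np.random.default_rng(8)
    W = rng.random((V.shape[0], k))
    H = rng.random((k, V.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Hypothetical spike counts: 20 neurons x 50 time bins x 40 trials, built from two
# ground-truth spatial patterns activated with different temporal profiles.
rng = np.random.default_rng(9)
spatial = np.abs(rng.random((20, 2)))
temporal = np.abs(np.stack([np.sin(np.linspace(0, np.pi, 50)),
                            np.linspace(1, 0, 50)]))
trials = np.stack([rng.poisson(4 * spatial @ (temporal * rng.random((2, 1))))
                   for _ in range(40)])            # shape (40, 20, 50)
V = trials.transpose(1, 0, 2).reshape(20, -1)      # neurons x (trials * time)
W, H = nmf(V.astype(float), k=2)
print("recovered spatial patterns (neurons x 2):", W.shape)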
Representation of Muscle Synergies in the Primate Brain.
Overduin, Simon A; d'Avella, Andrea; Roh, Jinsook; Carmena, Jose M; Bizzi, Emilio
2015-09-16
Evidence suggests that the CNS uses motor primitives to simplify movement control, but whether it actually stores primitives instead of computing solutions on the fly to satisfy task demands is a controversial and still-unanswered possibility. Also in contention is whether these primitives take the form of time-invariant muscle coactivations ("spatial" synergies) or time-varying muscle commands ("spatiotemporal" synergies). Here, we examined forelimb muscle patterns and motor cortical spiking data in rhesus macaques (Macaca mulatta) handling objects of variable shape and size. From these data, we extracted both spatiotemporal and spatial synergies using non-negative decomposition. Each spatiotemporal synergy represents a sequence of muscular or neural activations that appeared to recur frequently during the animals' behavior. Key features of the spatiotemporal synergies (including their dimensionality, timing, and amplitude modulation) were independently observed in the muscular and neural data. In addition, both at the muscular and neural levels, these spatiotemporal synergies could be readily reconstructed as sequential activations of spatial synergies (a subset of those extracted independently from the task data), suggestive of a hierarchical relationship between the two levels of synergies. The possibility that motor cortex may execute even complex skill using spatiotemporal synergies has novel implications for the design of neuroprosthetic devices, which could gain computational efficiency by adopting the discrete and low-dimensional control that these primitives imply. We studied the motor cortical and forearm muscular activity of rhesus macaques (Macaca mulatta) as they reached, grasped, and carried objects of varied shape and size. We applied non-negative matrix factorization separately to the cortical and muscular data to reduce their dimensionality to a smaller set of time-varying "spatiotemporal" synergies. Each synergy represents a sequence of cortical or muscular activity that recurred frequently during the animals' behavior. Salient features of the synergies (including their dimensionality, timing, and amplitude modulation) were observed at both the cortical and muscular levels. The possibility that the brain may execute even complex behaviors using spatiotemporal synergies has implications for neuroprosthetic algorithm design, which could become more computationally efficient by adopting the discrete and low-dimensional control that they afford. Copyright © 2015 the authors 0270-6474/15/3512615-10$15.00/0.
Biologically-inspired robust and adaptive multi-sensor fusion and active control
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Dow, Paul A.; Huber, David J.
2009-04-01
In this paper, we describe a method and system for robust and efficient goal-oriented active control of a machine (e.g., a robot) based on processing, hierarchical spatial understanding, representation and memory of multimodal sensory inputs. This work assumes that a high-level plan or goal is known a priori or is provided by an operator interface, which translates into an overall perceptual processing strategy for the machine. Its analogy to the human brain is the download of plans and decisions from the pre-frontal cortex into various perceptual working memories as a perceptual plan that then guides the sensory data collection and processing. For example, a goal might be to look for specific colored objects in a scene while also looking for specific sound sources. This paper combines three key ideas and methods into a single closed-loop active control system. (1) Use the high-level plan or goal to determine and prioritize spatial locations or waypoints (targets) in multimodal sensory space; (2) collect/store information about these spatial locations at the appropriate hierarchy and representation in a spatial working memory. This includes invariant learning of these spatial representations and how to convert between them; and (3) execute actions based on ordered retrieval of these spatial locations from hierarchical spatial working memory and using the "right" level of representation that can efficiently translate into motor actions. In its most specific form, the active control is described for a vision system (such as a pan-tilt-zoom camera system mounted on a robotic head and neck unit) which finds and then fixates on high-saliency visual objects. We also describe the approach where the goal is to turn towards and sequentially foveate on salient multimodal cues that include both visual and auditory inputs.
Spatial patterns in vegetation fires in the Indian region.
Vadrevu, Krishna Prasad; Badarinath, K V S; Anuradha, Eaturu
2008-12-01
In this study, we used fire count datasets derived from the Along Track Scanning Radiometer (ATSR) satellite to characterize spatial patterns in fire occurrences across highly diverse geographical, vegetation and topographic gradients in the Indian region. For characterizing the spatial patterns of fire occurrences, observed fire point patterns were tested against the hypothesis of complete spatial randomness (CSR) using three different techniques: quadrat analysis, nearest neighbor analysis and Ripley's K function. A hierarchical nearest neighbor clustering technique was used to depict the 'hotspots' of fire incidents. Of the different states, the highest fire counts were recorded in Madhya Pradesh (14.77%) followed by Gujarat (10.86%), Maharashtra (9.92%), Mizoram (7.66%), Jharkhand (6.41%), etc. With respect to the vegetation categories, the highest number of fires was recorded in agricultural regions (40.26%) followed by tropical moist deciduous vegetation (12.72%), dry deciduous vegetation (11.40%), abandoned slash-and-burn secondary forests (9.04%), tropical montane forests (8.07%) and others. Analysis of fire counts based on elevation and slope suggested that the maximum number of fires occurred in low and medium elevation classes and in very low to low slope categories. Results from the three spatial techniques suggested a clustered pattern in fire events rather than CSR. Most importantly, results from Ripley's K statistic suggested that fire events are highly clustered at a lag distance of 125 miles. The hierarchical nearest neighbor clustering technique identified significant clusters of fire 'hotspots' in different states in northeast and central India. The implications of these results for fire management and mitigation are discussed. This study also highlights the potential of spatial point pattern statistics in environmental monitoring and assessment studies, with special reference to fire events in the Indian region.
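Ripley's K, one of the three point-pattern statistics used above, compares the average number of neighbours within distance r of each event with the value expected under complete spatial randomness (pi r^2 for a homogeneous planar process). The following sketch computes an uncorrected K on synthetic clustered "fire" locations; the window, hotspot layout, and scales are invented, and a real analysis would add edge correction and Monte Carlo significance envelopes.

import numpy as np

def ripleys_k(points, r, area):
    """Ripley's K(r) without edge correction: average number of neighbours within
    distance r, scaled by the point density."""
    n = len(points)
    d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)
    lam = n / area
    return np.array([(d < ri).sum() / (n * lam) for ri in r])

# Hypothetical fire locations clustered around a few "hotspots" in a 100x100 window.
rng = np.random.default_rng(10)
hot = rng.uniform(0, 100, size=(5, 2))
fires = np.vstack([h + rng.normal(0, 3, size=(40, 2)) for h in hot])
r = np.linspace(1, 20, 20)
k_obs = ripleys_k(fires, r, area=100 * 100)
k_csr = np.pi * r ** 2          # expectation under complete spatial randomness
print("K(r) observed vs CSR at r=10:", round(float(k_obs[9])), round(float(k_csr[9])))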
Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe
2013-01-01
Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detected against background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources, ranging from 3 cm2 to 30 cm2, whatever the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered. PMID:23418485
Reactive Goal Decomposition Hierarchies for On-Board Autonomy
NASA Astrophysics Data System (ADS)
Hartmann, L.
2002-01-01
As our experience grows, space missions and systems are expected to address ever more complex and demanding requirements with fewer resources (e.g., mass, power, budget). One approach to accommodating these higher expectations is to increase the level of autonomy to improve the capabilities and robustness of on- board systems and to simplify operations. The goal decomposition hierarchies described here provide a simple but powerful form of goal-directed behavior that is relatively easy to implement for space systems. A goal corresponds to a state or condition that an operator of the space system would like to bring about. In the system described here goals are decomposed into simpler subgoals until the subgoals are simple enough to execute directly. For each goal there is an activation condition and a set of decompositions. The decompositions correspond to different ways of achieving the higher level goal. Each decomposition contains a gating condition and a set of subgoals to be "executed" sequentially or in parallel. The gating conditions are evaluated in order and for the first one that is true, the corresponding decomposition is executed in order to achieve the higher level goal. The activation condition specifies global conditions (i.e., for all decompositions of the goal) that need to hold in order for the goal to be achieved. In real-time, parameters and state information are passed between goals and subgoals in the decomposition; a termination indication (success, failure, degree) is passed up when a decomposition finishes executing. The lowest level decompositions include servo control loops and finite state machines for generating control signals and sequencing i/o. Semaphores and shared memory are used to synchronize and coordinate decompositions that execute in parallel. The goal decomposition hierarchy is reactive in that the generated behavior is sensitive to the real-time state of the system and the environment. That is, the system is able to react to state and environment and in general can terminate the execution of a decomposition and attempt a new decomposition at any level in the hierarchy. This goal decomposition system is suitable for workstation, microprocessor and fpga implementation and thus is able to support the full range of prototyping activities, from mission design in the laboratory to development of the fpga firmware for the flight system. This approach is based on previous artificial intelligence work including (1) Brooks' subsumption architecture for robot control, (2) Firby's Reactive Action Package System (RAPS) for mediating between high level automated planning and low level execution and (3) hierarchical task networks for automated planning. Reactive goal decomposition hierarchies can be used for a wide variety of on-board autonomy applications including automating low level operation sequences (such as scheduling prerequisite operations, e.g., heaters, warm-up periods, monitoring power constraints), coordinating multiple spacecraft as in formation flying and constellations, robot manipulator operations, rendez-vous, docking, servicing, assembly, on-orbit maintenance, planetary rover operations, solar system and interstellar probes, intelligent science data gathering and disaster early warning. Goal decomposition hierarchies can support high level fault tolerance. 
Given models of on-board resources and goals to accomplish, the decomposition hierarchy could allocate resources to goals, taking existing faults into account and reallocating resources in real time as new faults arise. Resources to be modeled include memory (e.g., ROM, FPGA configuration memory, processor memory, payload instrument memory), processors, on-board and interspacecraft network nodes and links, sensors, actuators (e.g., attitude determination and control, guidance and navigation) and payload instruments. A goal decomposition hierarchy could be defined to map mission goals and tasks to available on-board resources. As faults occur and are detected the resource allocation is modified to avoid using the faulty resource. Goal decomposition hierarchies can implement variable autonomy (in which the operator chooses to command the system at a high or low level), mixed initiative planning (in which the system is able to interact with the operator, e.g., to request operator intervention when a working envelope is exceeded) and distributed control (in which, for example, multiple spacecraft cooperate to accomplish a task without a fixed master). The full paper will describe in greater detail how goal decompositions work, how they can be implemented, techniques for implementing a candidate application and the current state of the FPGA implementation.
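A minimal sketch of the goal/decomposition structure described above is given below; the class and method names are illustrative rather than taken from the paper, and parameter passing, parallel execution, and fault handling are omitted.

```python
# Minimal sketch of a reactive goal decomposition hierarchy (illustrative names).

class Decomposition:
    def __init__(self, gating_condition, subgoals):
        self.gating_condition = gating_condition  # callable(state) -> bool
        self.subgoals = subgoals                  # ordered list of Goal objects

class Goal:
    def __init__(self, name, activation_condition, decompositions, primitive=None):
        self.name = name
        self.activation_condition = activation_condition  # callable(state) -> bool
        self.decompositions = decompositions               # ordered list of Decomposition
        self.primitive = primitive                         # leaf-level action, if any

    def execute(self, state):
        """Return 'success' or 'failure'; reactive in the sense that conditions
        are re-evaluated against the current state on every call."""
        if not self.activation_condition(state):
            return "failure"
        if self.primitive is not None:            # leaf goal: directly executable
            return self.primitive(state)
        for decomp in self.decompositions:        # first decomposition whose gate holds
            if decomp.gating_condition(state):
                for sub in decomp.subgoals:       # sequential execution shown here
                    if sub.execute(state) == "failure":
                        return "failure"          # a fuller system could retry another decomposition
                return "success"
        return "failure"

# Example: a trivially decomposed goal with one gated decomposition.
always = lambda state: True
warm_up = Goal("warm_up", always, [], primitive=lambda s: "success")
acquire = Goal("acquire_data", always, [], primitive=lambda s: "success")
observe = Goal("observe", always, [Decomposition(always, [warm_up, acquire])])
print(observe.execute({}))   # -> "success"
```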
Fleury, Guillaume; Steele, Julian A; Gerber, Iann C; Jolibois, F; Puech, P; Muraoka, Koki; Keoh, Sye Hoe; Chaikittisilp, Watcharop; Okubo, Tatsuya; Roeffaers, Maarten B J
2018-04-05
The direct synthesis of hierarchically intergrown silicalite-1 can be achieved using a specific diquaternary ammonium agent. However, the location of these molecules in the zeolite framework, which is critical to understand the formation of the material, remains unclear. Where traditional characterization tools have previously failed, herein we use polarized stimulated Raman scattering (SRS) microscopy to resolve molecular organization inside few-micron-sized crystals. Through a combination of experiment and first-principles calculations, our investigation reveals the preferential location of the templating agent inside the linear pores of the MFI framework. Besides illustrating the attractiveness of SRS microscopy in the field of material science to study and spatially resolve local molecular distribution as well as orientation, these results can be exploited in the design of new templating agents for the preparation of hierarchical zeolites.
NASA Astrophysics Data System (ADS)
Wang, Bin; Wang, Haojiang; Zhang, Fengwei; Sun, Tijian
2018-06-01
A facile and efficient strategy is presented for the encapsulation of Ag NPs within hierarchically porous silicalite-1. The physicochemical properties of the resultant catalyst are characterized by TEM, XRD, FTIR, and N2 adsorption-desorption analytical techniques. The Ag NPs are well distributed in the MFI zeolite framework, which possesses hierarchical porosity (pore sizes of 1.75 and 3.96 nm), and the specific surface area is as high as 243 m2 · g-1. More importantly, such a catalyst can rapidly transform 4-nitrophenol to 4-aminophenol in aqueous solution at room temperature, and a quantitative conversion is still obtained after being reused 10 times. This performance can be attributed to the fast mass transfer, large surface area, and spatial confinement effect of the advanced support.
Spatial and temporal characteristics of elevated temperatures in municipal solid waste landfills.
Jafari, Navid H; Stark, Timothy D; Thalhamer, Todd
2017-01-01
Elevated temperatures in waste containment facilities can pose health, environmental, and safety risks because they generate toxic gases, pressures, leachate, and heat. In particular, MSW landfills undergo changes in behavior that typically follow a progression of indicators, e.g., elevated temperatures, changes in gas composition, elevated gas pressures, increased leachate migration, slope movement, and unusual and rapid surface settlement. This paper presents two MSW landfill case studies that show the spatial and time-lapse movements of these indicators and identify four zones that illustrate the transition of normal MSW decomposition to the region of elevated temperatures. The spatial zones are gas front, temperature front, and smoldering front. The gas wellhead temperature and the ratio of CH4 to CO2 are used to delineate the boundaries between normal MSW decomposition, gas front, and temperature front. The ratio of CH4 to CO2 and carbon monoxide concentrations along with settlement strain rates and subsurface temperatures are used to delineate the smoldering front. In addition, downhole temperatures can be used to estimate the rate of movement of elevated temperatures, which is important for isolating and containing the elevated temperature in a timely manner. Copyright © 2016 Elsevier Ltd. All rights reserved.
Spatial Distribution of Resonance in the Velocity Field for Transonic Flow over a Rectangular Cavity
Beresh, Steven J.; Wagner, Justin L.; Casper, Katya M.; ...
2017-07-27
Pulse-burst particle image velocimetry (PIV) has been used to acquire time-resolved data at 37.5 kHz of the flow over a finite-width rectangular cavity at Mach 0.8. Power spectra of the PIV data reveal four resonance modes that match the frequencies detected simultaneously using high-frequency wall pressure sensors but whose magnitudes exhibit spatial dependence throughout the cavity. Spatio-temporal cross-correlations of velocity to pressure were calculated after bandpass filtering for specific resonance frequencies. Cross-correlation magnitudes express the distribution of resonance energy, revealing local maxima and minima at the edges of the shear layer attributable to wave interference between downstream- and upstream-propagating disturbances. Turbulence intensities were calculated using a triple decomposition and are greatest in the core of the shear layer for higher modes, where resonant energies ordinarily are lower. Most of the energy for the lowest mode lies in the recirculation region and results principally from turbulence rather than resonance. Together, the velocity-pressure cross-correlations and the triple-decomposition turbulence intensities explain the sources of energy identified in the spatial distributions of power spectra amplitudes.
Strecker, Angela L; Casselman, John M; Fortin, Marie-Josée; Jackson, Donald A; Ridgway, Mark S; Abrams, Peter A; Shuter, Brian J
2011-07-01
Species present in communities are affected by the prevailing environmental conditions, and the traits that these species display may be sensitive indicators of community responses to environmental change. However, interpretation of community responses may be confounded by environmental variation at different spatial scales. Using a hierarchical approach, we assessed the spatial and temporal variation of traits in coastal fish communities in Lake Huron over a 5-year time period (2001-2005) in response to biotic and abiotic environmental factors. The association of environmental and spatial variables with trophic, life-history, and thermal traits at two spatial scales (regional basin-scale, local site-scale) was quantified using multivariate statistics and variation partitioning. We defined these two scales (regional, local) on which to measure variation and then applied this measurement framework identically in all 5 study years. With this framework, we found that there was no change in the spatial scales of fish community traits over the course of the study, although there were small inter-annual shifts in the importance of regional basin- and local site-scale variables in determining community trait composition (e.g., life-history, trophic, and thermal). The overriding effects of regional-scale variables may be related to inter-annual variation in average summer temperature. Additionally, drivers of fish community traits were highly variable among study years, with some years dominated by environmental variation and others dominated by spatially structured variation. The influence of spatial factors on trait composition was dynamic, which suggests that spatial patterns in fish communities over large landscapes are transient. Air temperature and vegetation were significant variables in most years, underscoring the importance of future climate change and shoreline development as drivers of fish community structure. Overall, a trait-based hierarchical framework may be a useful conservation tool, as it highlights the multi-scaled interactive effect of variables over a large landscape.
NASA Astrophysics Data System (ADS)
Feigin, Alexander; Gavrilov, Andrey; Loskutov, Evgeny; Mukhin, Dmitry
2015-04-01
Proper decomposition of a complex system into well separated "modes" is a way to reveal and understand the mechanisms governing the system's behaviour, as well as to discover essential feedbacks and nonlinearities. Decomposition is also a natural procedure for constructing adequate, and at the same time maximally simple, models both of the corresponding sub-systems and of the system as a whole. In recent works two new methods of decomposition of the Earth's climate system into well separated modes were discussed. The first method [1-3] is based on MSSA (Multichannel Singular Spectral Analysis) [4] for linear expansion of vector (space-distributed) time series and makes allowance for delayed correlations between processes recorded at spatially separated points. The second [5-7] allows the construction of nonlinear dynamical modes but neglects correlation delays. It was demonstrated [1-3] that the first method provides effective separation of different time scales but prevents correct reduction of the data dimension: the variance spectrum of the spatio-temporal empirical orthogonal functions that serve as the "structural material" for linear spatio-temporal modes is too flat. The second method overcomes this problem: the variance spectrum of the nonlinear modes falls off much more sharply [5-7]. However, neglecting time-lagged correlations introduces a mode-selection error that is uncontrolled and grows with the mode time scale. In this report we combine the two methods so that the resulting algorithm constructs nonlinear spatio-temporal modes. The algorithm is applied to the decomposition of (i) several hundred years of globally distributed data generated by the INM RAS Coupled Climate Model [8], and (ii) a 156-year time series of SST anomalies distributed over the globe [9]. We compare the efficiency of the different decomposition methods and discuss the ability of nonlinear spatio-temporal modes to support the construction of adequate yet maximally simple ("optimal") models of climate systems. 1. Feigin A.M., Mukhin D., Gavrilov A., Volodin E.M., and Loskutov E.M. (2013) "Separation of spatial-temporal patterns ("climatic modes") by combined analysis of really measured and generated numerically vector time series", AGU 2013 Fall Meeting, Abstract NG33A-1574. 2. Alexander Feigin, Dmitry Mukhin, Andrey Gavrilov, Evgeny Volodin, and Evgeny Loskutov (2014) "Approach to analysis of multiscale space-distributed time series: separation of spatio-temporal modes with essentially different time scales", Geophysical Research Abstracts, Vol. 16, EGU2014-6877. 3. Dmitry Mukhin, Dmitri Kondrashov, Evgeny Loskutov, Andrey Gavrilov, Alexander Feigin, and Michael Ghil (2014) "Predicting critical transitions in ENSO models, Part II: Spatially dependent models", Journal of Climate (accepted, doi: 10.1175/JCLI-D-14-00240.1). 4. Ghil, M., R. M. Allen, M. D. Dettinger, K. Ide, D. Kondrashov, et al. (2002) "Advanced spectral methods for climatic time series", Rev. Geophys. 40(1), 3.1-3.41. 5. Dmitry Mukhin, Andrey Gavrilov, Evgeny M Loskutov and Alexander M Feigin (2014) "Nonlinear Decomposition of Climate Data: a New Method for Reconstruction of Dynamical Modes", AGU 2014 Fall Meeting, Abstract NG43A-3752. 6. Andrey Gavrilov, Dmitry Mukhin, Evgeny Loskutov, and Alexander Feigin (2015) "Empirical decomposition of climate data into nonlinear dynamic modes", Geophysical Research Abstracts, Vol. 17, EGU2015-627. 7.
Dmitry Mukhin, Andrey Gavrilov, Evgeny Loskutov, Alexander Feigin, and Juergen Kurths (2015) "Reconstruction of principal dynamical modes from climatic variability: nonlinear approach", Geophysical Research Abstracts, Vol. 17, EGU2015-5729. 8. http://83.149.207.89/GCM_DATA_PLOTTING/GCM_INM_DATA_XY_en.htm. 9. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/.
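To make the linear (MSSA-style) step concrete, the sketch below delay-embeds a multichannel series into a trajectory matrix and takes its SVD to obtain spatio-temporal empirical orthogonal functions; variable names and the toy data are illustrative only, and the nonlinear-mode construction of the combined algorithm is not reproduced here.

```python
import numpy as np

def mssa_modes(X, window):
    """Multichannel SSA sketch: X has shape (time, channels).
    Returns the singular spectrum and the spatio-temporal EOFs of the
    delay-embedded (trajectory) matrix."""
    T, C = X.shape
    K = T - window + 1
    # Each row stacks a length-`window` lagged segment from every channel,
    # so delayed (lagged) correlations enter the decomposition.
    traj = np.empty((K, window * C))
    for k in range(K):
        traj[k] = X[k:k + window].ravel()
    U, s, Vt = np.linalg.svd(traj - traj.mean(axis=0), full_matrices=False)
    return s, Vt   # singular values and spatio-temporal EOFs

# Example: two noisy channels sharing one lagged oscillation.
t = np.arange(500)
X = np.c_[np.sin(0.1 * t), np.sin(0.1 * t - 1.0)] + 0.1 * np.random.randn(500, 2)
s, eofs = mssa_modes(X, window=60)
print(s[:5] / s.sum())   # the leading pair of modes captures the shared oscillation
```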
NASA Astrophysics Data System (ADS)
Shao, X. H.; Zheng, S. J.; Chen, D.; Jin, Q. Q.; Peng, Z. Z.; Ma, X. L.
2016-07-01
The high hardness or yield strength of an alloy is known to benefit from the presence of small-scale precipitation, whose hardening effect is extensively applied in various engineering materials. Stability of the precipitates is of critical importance in maintaining the high performance of a material under mechanical loading. The long period stacking ordered (LPSO) structures play an important role in tuning the mechanical properties of an Mg-alloy. Here, we report that deformation twinning induces decomposition of lamellar LPSO structures and their re-precipitation in an Mg-Zn-Y alloy. Using atomic resolution scanning transmission electron microscopy (STEM), we directly illustrate that the misfit dislocations at the interface between the lamellar LPSO structure and the deformation twin correspond to the decomposition and re-precipitation of the LPSO structure, owing to the effect of the dislocations on the redistribution of Zn/Y atoms. This finding demonstrates that deformation twinning could destabilize complex precipitates. This decomposition and re-precipitation, which alters the spatial distribution of the precipitates under plastic loading, may significantly affect precipitation strengthening.
Pochikalov, A V; Karelin, D V
2014-01-01
Although many recently published original papers and reviews deal with plant matter decomposition rates and their controls, our understanding of these processes in boreal and high-latitude plant communities, especially in the permafrost areas of our planet, is still very limited. First and foremost, this holds true for the winter period. Here, we present the results of 2-year field observations in southern taiga and southern shrub tundra ecosystems in European Russia. We pioneered the simultaneous application of two independent methods: classic mass-loss estimation by the litter-bag technique, and direct measurement of the CO2 emission (respiration) of the same litter bags containing different types of dead plant matter. This approach allowed us to reconstruct the intra-seasonal dynamics of decomposition rates of the main tundra litter fractions with high temporal resolution, to estimate the relative roles of different seasons and of fragmentation in plant matter decomposition, and to determine its controlling factors at different temporal scales.
Dong, Jianwu; Chen, Feng; Zhou, Dong; Liu, Tian; Yu, Zhaofei; Wang, Yi
2017-03-01
The existence of low-SNR regions and rapid phase variations poses challenges to spatial phase unwrapping algorithms. Global optimization-based phase unwrapping methods are widely used, but are significantly slower than greedy methods. In this paper, dual decomposition acceleration is introduced to speed up a three-dimensional graph cut-based phase unwrapping algorithm. The phase unwrapping problem is formulated as a global discrete energy minimization problem, and the technique of dual decomposition is used to increase the computational efficiency by splitting the full problem into overlapping subproblems and enforcing the congruence of the overlapping variables. Using three-dimensional (3D) multiecho gradient echo images from an agarose phantom and five brain hemorrhage patients, we compared the proposed method with an unaccelerated graph cut-based method. Experimental results show up to 18-fold acceleration in computation time. Dual decomposition significantly improves the computational efficiency of 3D graph cut-based phase unwrapping algorithms. Magn Reson Med 77:1353-1358, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
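The dual-decomposition idea can be illustrated on a toy problem; the sketch below (not the authors' graph-cut implementation) gives each of two subproblems its own copy of a shared variable and drives the copies to agreement by subgradient updates on a Lagrange multiplier.

```python
# Toy dual decomposition: minimize f1(x) + f2(x) with
# f1(x) = (x - a)^2 and f2(x) = (x - b)^2, by splitting into copies
# x1, x2 and enforcing x1 == x2 through a multiplier lam.
a, b = 1.0, 3.0
lam = 0.0            # Lagrange multiplier for the constraint x1 - x2 = 0
step = 0.2
for it in range(100):
    x1 = a - lam / 2.0        # closed-form argmin of (x - a)^2 + lam * x
    x2 = b + lam / 2.0        # closed-form argmin of (x - b)^2 - lam * x
    lam += step * (x1 - x2)   # subgradient ascent on the dual
print(x1, x2)   # both copies converge to the consensus solution (a + b) / 2 = 2
```

In the accelerated phase unwrapping setting, the subproblems are overlapping 3D blocks solved by graph cuts, and the multiplier updates enforce congruence of the overlapping unwrapped-phase variables.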
Inundation, vegetation, and sediment effects on litter decomposition in Pacific Coast tidal marshes
Janousek, Christopher; Buffington, Kevin J.; Guntenspergen, Glenn R.; Thorne, Karen M.; Dugger, Bruce D.; Takekawa, John Y.
2017-01-01
The cycling and sequestration of carbon are important ecosystem functions of estuarine wetlands that may be affected by climate change. We conducted experiments across a latitudinal and climate gradient of tidal marshes in the northeast Pacific to evaluate the effects of climate- and vegetation-related factors on litter decomposition. We manipulated tidal exposure and litter type in experimental mesocosms at two sites and used variation across marsh landscapes at seven sites to test for relationships between decomposition and marsh elevation, soil temperature, vegetation composition, litter quality, and sediment organic content. A greater than tenfold increase in manipulated tidal inundation resulted in small increases in decomposition of roots and rhizomes of two species, but no significant change in decay rates of shoots of three other species. In contrast, across the latitudinal gradient, decomposition rates of Salicornia pacifica litter were greater in high marsh than in low marsh. Rates were not correlated with sediment temperature or organic content, but were associated with plant assemblage structure including above-ground cover, species composition, and species richness. Decomposition rates also varied by litter type; at two sites in the Pacific Northwest, the grasses Deschampsia cespitosa and Distichlis spicata decomposed more slowly than the forb S. pacifica. Our data suggest that elevation gradients and vegetation structure in tidal marshes both affect rates of litter decay, potentially leading to complex spatial patterns in sediment carbon dynamics. Climate change may thus have direct effects on rates of decomposition through increased inundation from sea-level rise and indirect effects through changing plant community composition.
Mode Analyses of Gyrokinetic Simulations of Plasma Microturbulence
NASA Astrophysics Data System (ADS)
Hatch, David R.
This thesis presents analysis of the excitation and role of damped modes in gyrokinetic simulations of plasma microturbulence. In order to address this question, mode decompositions are used to analyze gyrokinetic simulation data. A mode decomposition can be constructed by projecting a nonlinearly evolved gyrokinetic distribution function onto a set of linear eigenmodes, or alternatively by constructing a proper orthogonal decomposition of the distribution function. POD decompositions are used to examine the role of damped modes in saturating ion temperature gradient driven turbulence. In order to identify the contribution of different modes to the energy sources and sinks, numerical diagnostics for a gyrokinetic energy quantity were developed for the GENE code. The use of these energy diagnostics in conjunction with POD mode decompositions demonstrates that ITG turbulence saturates largely through dissipation by damped modes at the same perpendicular spatial scales as those of the driving instabilities. This defines a picture of turbulent saturation that is very different from both traditional hydrodynamic scenarios and also many common theories for the saturation of plasma turbulence. POD mode decompositions are also used to examine the role of subdominant modes in causing magnetic stochasticity in electromagnetic gyrokinetic simulations. It is shown that the magnetic stochasticity, which appears to be ubiquitous in electromagnetic microturbulence, is caused largely by subdominant modes with tearing parity. The application of higher-order singular value decomposition (HOSVD) to the full distribution function from gyrokinetic simulations is presented. This is an effort to demonstrate the ability to characterize and extract insight from a very large, complex, and high-dimensional data-set - the 5-D (plus time) gyrokinetic distribution function.
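As a rough illustration of the proper orthogonal decomposition step (independent of the GENE diagnostics described above), the sketch below builds POD modes from a snapshot matrix via the SVD; the array names and synthetic data are illustrative only.

```python
import numpy as np

def pod_modes(snapshots):
    """POD sketch: `snapshots` has shape (n_points, n_times); columns are
    flattened fields at successive times. The SVD returns spatial modes
    ranked by their share of the fluctuation energy."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    energy_fraction = s**2 / np.sum(s**2)
    return U, energy_fraction, Vt   # spatial modes, energies, time coefficients

# Example with synthetic data: one growing and one damped structure.
x = np.linspace(0, 2 * np.pi, 128)
t = np.linspace(0, 10, 200)
field = (np.outer(np.sin(x), np.exp(0.1 * t) * np.cos(3 * t))
         + 0.3 * np.outer(np.sin(2 * x), np.exp(-0.5 * t) * np.sin(5 * t)))
modes, energy, coeffs = pod_modes(field)
print(energy[:3])   # the first two modes carry nearly all of the energy
```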
MO-FG-204-01: Improved Noise Suppression for Dual-Energy CT Through Entropy Minimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrongolo, M; Zhu, L
2015-06-15
Purpose: In dual energy CT (DECT), noise amplification during signal decomposition significantly limits the utility of basis material images. Since clinically relevant objects contain a limited number of materials, we propose to suppress noise for DECT based on image entropy minimization. An adaptive weighting scheme is employed during noise suppression to improve decomposition accuracy with limited effect on spatial resolution and image texture preservation. Methods: From decomposed images, we first generate a 2D plot of scattered data points, using basis material densities as coordinates. Data points representing the same material generate a highly asymmetric cluster. We orient an axis by minimizing the entropy in a 1D histogram of these points projected onto the axis. To suppress noise, we replace pixel values of decomposed images with center-of-mass values in the direction perpendicular to the optimal axis. To limit errors due to cluster overlap, we weight each data point’s contribution based on its high and low energy CT values and location within the image. The proposed method’s performance is assessed on physical phantom studies. Electron density is used as the quality metric for decomposition accuracy. Our results are compared to those without noise suppression and with a recently developed iterative method. Results: The proposed method reduces noise standard deviations of the decomposed images by at least one order of magnitude. On the Catphan phantom, this method greatly preserves the spatial resolution and texture of the CT images and limits induced error in measured electron density to below 1.2%. In the head phantom study, the proposed method performs the best in retaining fine, intricate structures. Conclusion: The entropy minimization based algorithm with adaptive weighting substantially reduces DECT noise while preserving image spatial resolution and texture. Future investigations will include extensive investigations on material decomposition accuracy that go beyond the current electron density calculations. This work was supported in part by the National Institutes of Health (NIH) under Grant Number R21 EB012700.
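The axis-orientation step of the entropy-minimization idea can be sketched as follows; this is a simplified illustration on assumed synthetic data, not the published implementation, and it omits the adaptive weighting and the center-of-mass replacement.

```python
import numpy as np

def projection_entropy(points, theta, edges):
    """Shannon entropy of the histogram of `points` (N x 2 basis-material
    densities, already centered) projected onto an axis at angle `theta`.
    Fixed bin `edges` are shared by all angles so entropies are comparable."""
    axis = np.array([np.cos(theta), np.sin(theta)])
    counts, _ = np.histogram(points @ axis, bins=edges)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def best_axis(points, n_angles=180, n_bins=256):
    """Brute-force search for the axis orientation minimizing projection
    entropy, mimicking (in spirit) the orientation step described above."""
    pts = points - points.mean(axis=0)
    limit = np.linalg.norm(pts, axis=1).max()
    edges = np.linspace(-limit, limit, n_bins + 1)
    angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    entropies = [projection_entropy(pts, th, edges) for th in angles]
    return angles[int(np.argmin(entropies))]

# Synthetic cluster elongated along x (the noisy, anti-correlated direction).
rng = np.random.default_rng(0)
cluster = rng.normal(size=(5000, 2)) * np.array([3.0, 0.3])
theta = best_axis(cluster)
print(np.degrees(theta))   # ~90 deg: the low-entropy axis is perpendicular to the spread
```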
Andrew Birt
2011-01-01
The population dynamics of the southern pine beetle (SPB) exhibit characteristic fluctuations between relatively long endemic and shorter outbreak periods. Populations exhibit complex and hierarchical spatial structure with beetles and larvae aggregating within individual trees, infestations with multiple infested trees, and regional outbreaks that comprise a large...
The Role of Discrete Global Grid Systems in the Global Statistical Geospatial Framework
NASA Astrophysics Data System (ADS)
Purss, M. B. J.; Peterson, P.; Minchin, S. A.; Bermudez, L. E.
2016-12-01
The United Nations Committee of Experts on Global Geospatial Information Management (UN-GGIM) has proposed the development of a Global Statistical Geospatial Framework (GSGF) as a mechanism for the establishment of common analytical systems that enable the integration of statistical and geospatial information. Conventional coordinate reference systems address the globe with a continuous field of points suitable for repeatable navigation and analytical geometry. While this continuous field is represented on a computer in a digitized and discrete fashion by tuples of fixed-precision floating point values, it is a non-trivial exercise to relate point observations spatially referenced in this way to areal coverages on the surface of the Earth. The GSGF states the need to move to gridded data delivery and the importance of using common geographies and geocoding. The challenges associated with meeting these goals are not new and there has been a significant effort within the geospatial community to develop nested gridding standards to tackle these issues over many years. These efforts have recently culminated in the development of a Discrete Global Grid Systems (DGGS) standard which has been developed under the auspices of the Open Geospatial Consortium (OGC). DGGS provide a fixed areal based geospatial reference frame for the persistent location of measured Earth observations, feature interpretations, and modelled predictions. DGGS address the entire planet by partitioning it into a discrete hierarchical tessellation of progressively finer resolution cells, which are referenced by a unique index that facilitates rapid computation, query and analysis. The geometry and location of the cell are the principal aspects of a DGGS. Data integration, decomposition, and aggregation are optimised in the DGGS hierarchical structure and can be exploited for efficient multi-source data processing, storage, discovery, transmission, visualization, computation, analysis, and modelling. During the 6th Session of the UN-GGIM in August 2016 the role of DGGS in the context of the GSGF was formally acknowledged. This paper proposes to highlight the synergies and role of DGGS in the Global Statistical Geospatial Framework and to show examples of the use of DGGS to combine geospatial statistics with traditional geoscientific data.
SDSS-IV MaNGA: bulge-disc decomposition of IFU data cubes (BUDDI)
NASA Astrophysics Data System (ADS)
Johnston, Evelyn J.; Häußler, Boris; Aragón-Salamanca, Alfonso; Merrifield, Michael R.; Bamford, Steven; Bershady, Matthew A.; Bundy, Kevin; Drory, Niv; Fu, Hai; Law, David; Nitschelm, Christian; Thomas, Daniel; Roman Lopes, Alexandre; Wake, David; Yan, Renbin
2017-02-01
With the availability of large integral field unit (IFU) spectral surveys of nearby galaxies, there is now the potential to extract spectral information from across the bulges and discs of galaxies in a systematic way. This information can address questions such as how these components built up with time, how galaxies evolve and whether their evolution depends on other properties of the galaxy such as its mass or environment. We present bulge-disc decomposition of IFU data cubes (BUDDI), a new approach to fit the two-dimensional light profiles of galaxies as a function of wavelength to extract the spectral properties of these galaxies' discs and bulges. The fitting is carried out using GALFITM, a modified form of GALFIT which can fit multiwaveband images simultaneously. The benefit of this technique over traditional multiwaveband fits is that the stellar populations of each component can be constrained using knowledge over the whole image and spectrum available. The decomposition has been developed using commissioning data from the Sloan Digital Sky Survey-IV Mapping Nearby Galaxies at APO (MaNGA) survey with redshifts z < 0.14 and coverage of at least 1.5 effective radii for a spatial resolution of 2.5 arcsec full width at half-maximum and field of view of > 22 arcsec, but can be applied to any IFU data of a nearby galaxy with similar or better spatial resolution and coverage. We present an overview of the fitting process, the results from our tests, and we finish with example stellar population analyses of early-type galaxies from the MaNGA survey to give an indication of the scientific potential of applying bulge-disc decomposition to IFU data.
A novel multiple description scalable coding scheme for mobile wireless video transmission
NASA Astrophysics Data System (ADS)
Zheng, Haifeng; Yu, Lun; Chen, Chang Wen
2005-03-01
In this paper we propose a novel multiple description scalable coding (MDSC) scheme based on the in-band motion compensation temporal filtering (IBMCTF) technique in order to achieve high video coding performance and robust video transmission. The input video sequence is first split into equal-sized groups of frames (GOFs). Within a GOF, each frame is hierarchically decomposed by the discrete wavelet transform. Since there is a direct relationship between wavelet coefficients and what they represent in the image content after wavelet decomposition, we are able to reorganize the spatial orientation trees to generate multiple bit-streams and employ the SPIHT algorithm to achieve high coding efficiency. We show that multiple bit-stream transmission is very effective in combating error propagation in both Internet video streaming and mobile wireless video. Furthermore, we adopt the IBMCTF scheme to remove inter-frame redundancy along the temporal direction using motion compensated temporal filtering, so that high coding performance and flexible scalability can be provided. In order to make the compressed video resilient to channel errors and to guarantee robust transmission over mobile wireless channels, we add redundancy to each bit-stream and apply an error concealment strategy for lost motion vectors. Unlike traditional multiple description schemes, the integration of these techniques enables us to generate more than two bit-streams, which may be more appropriate for multiple-antenna transmission of compressed video. Simulation results on standard video sequences show that the proposed scheme provides a flexible tradeoff between coding efficiency and error resilience.
THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mignone, A.; Tzeferacos, P.; Zanni, C.
We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.
Visual target modulation of functional connectivity networks revealed by self-organizing group ICA.
van de Ven, Vincent; Bledowski, Christoph; Prvulovic, David; Goebel, Rainer; Formisano, Elia; Di Salle, Francesco; Linden, David E J; Esposito, Fabrizio
2008-12-01
We applied a data-driven analysis based on self-organizing group independent component analysis (sogICA) to fMRI data from a three-stimulus visual oddball task. SogICA is particularly suited to the investigation of the underlying functional connectivity and does not rely on a predefined model of the experiment, which overcomes some of the limitations of hypothesis-driven analysis. Unlike most previous applications of ICA in functional imaging, our approach allows the analysis of the data at the group level, which is of particular interest in high order cognitive studies. SogICA is based on the hierarchical clustering of spatially similar independent components, derived from single subject decompositions. We identified four main clusters of components, centered on the posterior cingulate, bilateral insula, bilateral prefrontal cortex, and right posterior parietal and prefrontal cortex, consistently across all participants. Post hoc comparison of time courses revealed that insula, prefrontal cortex and right fronto-parietal components showed higher activity for targets than for distractors. Activation for distractors was higher in the posterior cingulate cortex, where deactivation was observed for targets. While our results conform to previous neuroimaging studies, they also complement conventional results by showing functional connectivity networks with unique contributions to the task that were consistent across subjects. SogICA can thus be used to probe functional networks of active cognitive tasks at the group-level and can provide additional insights to generate new hypotheses for further study. Copyright 2007 Wiley-Liss, Inc.
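A minimal sketch of the clustering idea behind sogICA is given below, using scikit-learn's FastICA and SciPy's hierarchical clustering as stand-ins for the authors' implementation; the function name, parameters, and synthetic data are illustrative.

```python
import numpy as np
from sklearn.decomposition import FastICA
from scipy.cluster.hierarchy import linkage, fcluster

def group_components(subject_data, n_components=5, n_clusters=4):
    """Sketch of the sogICA idea (illustrative, not the authors' code):
    run spatial ICA per subject, then hierarchically cluster the resulting
    spatial maps by similarity. Each element of `subject_data` is an
    (n_timepoints, n_voxels) array."""
    maps = []
    for X in subject_data:
        ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
        # Spatial ICA: treat voxels as samples so the extracted sources are
        # spatial maps of shape (n_voxels, n_components).
        spatial = ica.fit_transform(X.T)
        maps.append(spatial.T)                      # (n_components, n_voxels)
    maps = np.vstack(maps)
    # Distance between any two maps: 1 - |spatial correlation|.
    corr = np.corrcoef(maps)
    dist = 1.0 - np.abs(corr[np.triu_indices_from(corr, k=1)])
    labels = fcluster(linkage(dist, method='average'),
                      n_clusters, criterion='maxclust')
    return maps, labels   # labels group spatially similar components across subjects

# Example with synthetic data: 3 subjects, 100 timepoints, 500 voxels.
rng = np.random.default_rng(0)
subjects = [rng.standard_normal((100, 500)) for _ in range(3)]
maps, labels = group_components(subjects)
print(labels)
```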
Metascalable molecular dynamics simulation of nano-mechano-chemistry
NASA Astrophysics Data System (ADS)
Shimojo, F.; Kalia, R. K.; Nakano, A.; Nomura, K.; Vashishta, P.
2008-07-01
We have developed a metascalable (or 'design once, scale on new architectures') parallel application-development framework for first-principles based simulations of nano-mechano-chemical processes on emerging petaflops architectures based on spatiotemporal data locality principles. The framework consists of (1) an embedded divide-and-conquer (EDC) algorithmic framework based on spatial locality to design linear-scaling algorithms, (2) a space-time-ensemble parallel (STEP) approach based on temporal locality to predict long-time dynamics, and (3) a tunable hierarchical cellular decomposition (HCD) parallelization framework to map these scalable algorithms onto hardware. The EDC-STEP-HCD framework exposes and expresses maximal concurrency and data locality, thereby achieving parallel efficiency as high as 0.99 for 1.59-billion-atom reactive force field molecular dynamics (MD) and 17.7-million-atom (1.56 trillion electronic degrees of freedom) quantum mechanical (QM) MD in the framework of the density functional theory (DFT) on adaptive multigrids, in addition to 201-billion-atom nonreactive MD, on 196 608 IBM BlueGene/L processors. We have also used the framework for automated execution of adaptive hybrid DFT/MD simulation on a grid of six supercomputers in the US and Japan, in which the number of processors changed dynamically on demand and tasks were migrated according to unexpected faults. The paper presents the application of the framework to the study of nanoenergetic materials: (1) combustion of an Al/Fe2O3 thermite and (2) shock initiation and reactive nanojets at a void in an energetic crystal.
Alles, E. J.; Zhu, Y.; van Dongen, K. W. A.; McGough, R. J.
2013-01-01
The fast nearfield method, when combined with time-space decomposition, is a rapid and accurate approach for calculating transient nearfield pressures generated by ultrasound transducers. However, the standard time-space decomposition approach is only applicable to certain analytical representations of the temporal transducer surface velocity that, when applied to the fast nearfield method, are expressed as a finite sum of products of separate temporal and spatial terms. To extend time-space decomposition such that accelerated transient field simulations are enabled in the nearfield for an arbitrary transducer surface velocity, a new transient simulation method, frequency domain time-space decomposition (FDTSD), is derived. With this method, the temporal transducer surface velocity is transformed into the frequency domain, and then each complex-valued term is processed separately. Further improvements are achieved by spectral clipping, which reduces the number of terms and the computation time. Trade-offs between speed and accuracy are established for FDTSD calculations, and pressure fields obtained with the FDTSD method for a circular transducer are compared to those obtained with Field II and the impulse response method. The FDTSD approach, when combined with the fast nearfield method and spectral clipping, consistently achieves smaller errors in less time and requires less memory than Field II or the impulse response method. PMID:23160476
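The decomposition-plus-clipping step of FDTSD can be sketched generically as below; `single_frequency_field` is a hypothetical placeholder for a true single-frequency fast nearfield calculation, and all parameter values are illustrative.

```python
import numpy as np

def fdtsd_terms(v, dt, clip_fraction=0.01):
    """Frequency-domain time-space decomposition sketch: transform an arbitrary
    surface-velocity waveform v(t) into complex frequency components and apply
    spectral clipping, keeping only terms above `clip_fraction` of the peak
    magnitude. Each surviving (frequency, amplitude) pair would then be fed to
    a single-frequency spatial field routine."""
    spectrum = np.fft.rfft(v)
    freqs = np.fft.rfftfreq(len(v), dt)
    keep = np.abs(spectrum) >= clip_fraction * np.abs(spectrum).max()
    return freqs[keep], spectrum[keep], keep.sum()

def single_frequency_field(freq, r):
    """Hypothetical placeholder for a single-frequency spatial response at range r."""
    c = 1500.0                        # assumed sound speed in water, m/s
    k = 2.0 * np.pi * freq / c
    return np.exp(-1j * k * r) / max(r, 1e-6)

# Example: a 3-cycle, 1 MHz tone burst sampled at 50 MHz.
fs, f0 = 50e6, 1e6
t = np.arange(0, 3 / f0, 1 / fs)
v = np.sin(2 * np.pi * f0 * t) * np.hanning(len(t))
freqs, amps, n_terms = fdtsd_terms(v, 1 / fs)
# Schematic superposition of single-frequency responses at one field point.
pressure = sum(a * single_frequency_field(f, 0.03) for f, a in zip(freqs, amps))
print(n_terms, abs(pressure))   # clipping keeps only a subset of the len(v)//2 + 1 terms
```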
Feng, Liang; Yuan, Shuai; Zhang, Liang-Liang; Tan, Kui; Li, Jia-Luo; Kirchon, Angelo; Liu, Ling-Mei; Zhang, Peng; Han, Yu; Chabal, Yves J; Zhou, Hong-Cai
2018-02-14
Sufficient pore size, appropriate stability, and hierarchical porosity are three prerequisites for open frameworks designed for drug delivery, enzyme immobilization, and catalysis involving large molecules. Herein, we report a powerful and general strategy, linker thermolysis, to construct ultrastable hierarchically porous metal-organic frameworks (HP-MOFs) with tunable pore size distribution. Linker instability, usually an undesirable trait of MOFs, was exploited to create mesopores by generating crystal defects throughout a microporous MOF crystal via thermolysis. The crystallinity and stability of HP-MOFs remain after thermolabile linkers are selectively removed from multivariate metal-organic frameworks (MTV-MOFs) through a decarboxylation process. A domain-based linker spatial distribution was found to be critical for creating hierarchical pores inside MTV-MOFs. Furthermore, linker thermolysis promotes the formation of ultrasmall metal oxide nanoparticles immobilized in an open framework that exhibits high catalytic activity for Lewis acid-catalyzed reactions. Most importantly, this work provides fresh insights into the connection between linker apportionment and vacancy distribution, which may shed light on probing the disordered linker apportionment in multivariate systems, a long-standing challenge in the study of MTV-MOFs.
Hierarchical models of animal abundance and occurrence
Royle, J. Andrew; Dorazio, R.M.
2006-01-01
Much of animal ecology is devoted to studies of abundance and occurrence of species, based on surveys of spatially referenced sample units. These surveys frequently yield sparse counts that are contaminated by imperfect detection, making direct inference about abundance or occurrence based on observational data infeasible. This article describes a flexible hierarchical modeling framework for estimation and inference about animal abundance and occurrence from survey data that are subject to imperfect detection. Within this framework, we specify models of abundance and detectability of animals at the level of the local populations defined by the sample units. Information at the level of the local population is aggregated by specifying models that describe variation in abundance and detection among sites. We describe likelihood-based and Bayesian methods for estimation and inference under the resulting hierarchical model. We provide two examples of the application of hierarchical models to animal survey data, the first based on removal counts of stream fish and the second based on avian quadrat counts. For both examples, we provide a Bayesian analysis of the models using the software WinBUGS.
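One concrete instance of such a hierarchical abundance model is the N-mixture formulation, sketched below with an assumed Poisson abundance layer and binomial detection layer; this is an illustrative likelihood, not the WinBUGS code used in the paper.

```python
import numpy as np
from scipy.stats import poisson, binom

def nmixture_nll(params, counts, n_max=200):
    """Negative log-likelihood of a simple N-mixture model: local abundance
    N_i ~ Poisson(lam) and repeated counts y_ij ~ Binomial(N_i, p) reflect
    imperfect detection. `counts` has shape (n_sites, n_visits)."""
    log_lam, logit_p = params
    lam, p = np.exp(log_lam), 1.0 / (1.0 + np.exp(-logit_p))
    N = np.arange(n_max + 1)                      # marginalize the latent abundance
    prior = poisson.pmf(N, lam)
    nll = 0.0
    for y in counts:                               # one site at a time
        like_N = prior.copy()
        for obs in y:                              # visits are conditionally independent
            like_N *= binom.pmf(obs, N, p)
        nll -= np.log(like_N.sum() + 1e-300)
    return nll

# Example: simulate 50 sites with 3 visits, then evaluate the likelihood.
rng = np.random.default_rng(1)
N_true = rng.poisson(4.0, size=50)
counts = rng.binomial(N_true[:, None], 0.5, size=(50, 3))
print(nmixture_nll((np.log(4.0), 0.0), counts))
```

The negative log-likelihood could be passed to a numerical optimizer for maximum-likelihood estimation, or the same two-level structure could be specified in a Bayesian sampler, as in the paper's WinBUGS analyses.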
On hierarchical solutions to the BBGKY hierarchy
NASA Technical Reports Server (NTRS)
Hamilton, A. J. S.
1988-01-01
It is thought that the gravitational clustering of galaxies in the universe may approach a scale-invariant, hierarchical form in the small separation, large-clustering regime. Past attempts to solve the Born-Bogoliubov-Green-Kirkwood-Yvon (BBGKY) hierarchy in this regime have assumed a certain separable hierarchical form for the higher order correlation functions of galaxies in phase space. It is shown here that such separable solutions to the BBGKY equations must satisfy the condition that the clustered component of the solution has cluster-cluster correlations equal to galaxy-galaxy correlations to all orders. The solutions also admit the presence of an arbitrary unclustered component, which plays no dynamical role in the large-clustering regime. These results are a particular property of the specific separable model assumed for the correlation functions in phase space, not an intrinsic property of spatially hierarchical solutions to the BBGKY hierarchy. The observed distribution of galaxies does not satisfy the required conditions. The disagreement between theory and observation may be traced, at least in part, to initial conditions which, if Gaussian, already have cluster correlations greater than galaxy correlations.
NASA Astrophysics Data System (ADS)
Xie, Hang; Jiang, Feng; Tian, Heng; Zheng, Xiao; Kwok, Yanho; Chen, Shuguang; Yam, ChiYung; Yan, YiJing; Chen, Guanhua
2012-07-01
Based on our hierarchical equations of motion for time-dependent quantum transport [X. Zheng, G. H. Chen, Y. Mo, S. K. Koo, H. Tian, C. Y. Yam, and Y. J. Yan, J. Chem. Phys. 133, 114101 (2010), 10.1063/1.3475566], we develop an efficient and accurate numerical algorithm to solve the Liouville-von-Neumann equation. We solve the real-time evolution of the reduced single-electron density matrix at the tight-binding level. Calculations are carried out to simulate the transient current through a linear chain of atoms, with each represented by a single orbital. The self-energy matrix is expanded in terms of multiple Lorentzian functions, and the Fermi distribution function is evaluated via the Padé spectrum decomposition. This Lorentzian-Padé decomposition scheme is employed to simulate the transient current. With sufficient Lorentzian functions used to fit the self-energy matrices, we show that the lead spectral function and the dynamic response can be treated accurately. Compared to conventional master equation approaches, our method is much more efficient as the computational time scales cubically with the system size and linearly with the simulation time. As a result, simulations of the transient currents through systems containing up to one hundred atoms have been carried out. As density functional theory is also an effective one-particle theory, the Lorentzian-Padé decomposition scheme developed here can be generalized for first-principles simulation of realistic systems.
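Schematically, the two expansions named above take the following forms, with the coefficients, poles, and widths treated as fitted or tabulated parameters; conventions differ between papers, so this is a notational sketch rather than the authors' exact expressions.

```latex
% Lorentzian expansion of a lead linewidth (self-energy) function:
\Lambda_{\alpha}(\epsilon) \approx \sum_{d=1}^{N_d}
    \frac{\eta_{\alpha d}}{(\epsilon - \Omega_{\alpha d})^{2} + W_{\alpha d}^{2}},
% Pade spectrum decomposition of the Fermi function, with x = \beta(\epsilon - \mu):
f(x) \approx \frac{1}{2} - \sum_{p=1}^{N_p} \frac{2\,\eta_{p}\, x}{x^{2} + \xi_{p}^{2}}.
```

Roughly speaking, each Lorentzian pole and each Padé pole contributes one exponential term to the reservoir correlation functions, which is what lets the hierarchical equations of motion close with a finite set of auxiliary density matrices.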
Fiber optic sensing technology for detecting gas hydrate formation and decomposition.
Rawn, C J; Leeman, J R; Ulrich, S M; Alford, J E; Phelps, T J; Madden, M E
2011-02-01
A fiber optic-based distributed sensing system (DSS) has been integrated with a large volume (72 l) pressure vessel providing high spatial resolution, time-resolved, 3D measurement of hybrid temperature-strain (TS) values within experimental sediment-gas hydrate systems. Areas of gas hydrate formation (exothermic) and decomposition (endothermic) can be characterized through this proxy by time series analysis of discrete data points collected along the length of optical fibers placed within a sediment system. Data are visualized as an animation of TS values along the length of each fiber over time. Experiments conducted in the Seafloor Process Simulator at Oak Ridge National Laboratory clearly indicate hydrate formation and dissociation events at expected pressure-temperature conditions given the thermodynamics of the CH(4)-H(2)O system. The high spatial resolution achieved with fiber optic technology makes the DSS a useful tool for visualizing time-resolved formation and dissociation of gas hydrates in large-scale sediment experiments.
Separation of distinct photoexcitation species in femtosecond transient absorption microscopy
Xiao, Kai; Ma, Ying -Zhong; Simpson, Mary Jane; ...
2016-02-03
Femtosecond transient absorption microscopy is a novel chemical imaging capability with simultaneous high spatial and temporal resolution. Although several powerful data analysis approaches have been developed and successfully applied to separate distinct chemical species in such images, the application of such analysis to distinguish different photoexcited species is rare. In this paper, we demonstrate a combined approach based on phasor and linear decomposition analysis on a microscopic level that allows us to separate the contributions of both the excitons and free charge carriers in the observed transient absorption response of a composite organometallic lead halide perovskite film. We found spatial regions where the transient absorption response was predominately a result of excitons and others where it was predominately due to charge carriers, and regions consisting of signals from both contributors. Lastly, quantitative decomposition of the transient absorption response curves further enabled us to reveal the relative contribution of each photoexcitation to the measured response at spatially resolved locations in the film.
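The linear decomposition step can be illustrated as a per-pixel least-squares unmixing against reference kinetics; the reference traces below are hypothetical stand-ins for measured exciton and free-carrier responses.

```python
import numpy as np

def linear_decomposition(traces, basis):
    """Least-squares linear decomposition sketch: express each measured
    transient-absorption trace as a weighted sum of reference kinetics
    (e.g. an exciton-like and a free-carrier-like response).
    `traces` is (n_pixels, n_delays); `basis` is (n_species, n_delays)."""
    coeffs, *_ = np.linalg.lstsq(basis.T, traces.T, rcond=None)
    return coeffs.T                # (n_pixels, n_species) relative contributions

# Example with two hypothetical reference kinetics.
t = np.linspace(0, 5, 200)                       # delay axis (arbitrary units)
exciton = np.exp(-t / 0.5)                       # fast decay (assumed)
carrier = np.exp(-t / 3.0)                       # slow decay (assumed)
basis = np.vstack([exciton, carrier])
mixed = 0.7 * exciton + 0.3 * carrier + 0.01 * np.random.randn(200)
print(linear_decomposition(mixed[None, :], basis))   # ~[0.7, 0.3]
```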
Spatial Variability in Decomposition of Organic Carbon Along a Meandering River Floodplain
NASA Astrophysics Data System (ADS)
Sutfin, N. A.; Rowland, J. C.; Tfaily, M. M.; Bingol, A. K.; Washton, N.
2017-12-01
Rivers are an important component of the terrestrial carbon cycle and floodplains can provide significant storage of organic carbon. Quantification of long-term storage, however, requires determination of the residence time of sediment and the decomposition rate of organic carbon in floodplains. We use Fourier transform ion cyclotron resonance (FTICR) mass spectrometry to examine the organic carbon compounds present in sediment within three floodplain settings: point bars, cutbanks, and abandoned channels. We define decomposition of organic carbon in floodplain sediment as the ratio of protein-like to lignin-like compounds, which serve as proxies for microbial-derived and terrestrial-derived organic carbon, respectively. Samples were collected at 0-5 cm, 5-15 cm, and 15-30 cm depth along four transects that span a longitudinal valley distance of 8 km on the East River near Crested Butte, CO. Although no significant longitudinal trends in decomposition ratio exist among the four transects, floodplain settings exhibit significant differences. At shallow depths (0-5 cm), there are no significant differences among settings, with the exception of gravel portions of point bars below bankfull flow, where the highest decomposition is present. Conversely, cutbanks contain significantly lower decomposition ratios compared with point bars, gravel bars, and abandoned channels when considering all depth intervals. Point bars exhibit significantly greater protein vs. lignin at the surface compared to greater depth. Higher decomposition ratios along abandoned channels and point bars suggest that frequent wetting and drying periods, abundant oxygen, and continuous downstream movement and decomposition of organic matter occur within the channel. Lower decomposition ratios and consistent trends with depth along cutbanks suggest that these stable surfaces serve as organic carbon reservoirs that could become an increased source of carbon to the channel with increasing bank erosion. Detailed differences in the organic carbon compounds in sediments of cutbanks, point bars, and abandoned channels will be examined in September 2017 using nuclear magnetic resonance (NMR).
A hierarchical model for probabilistic independent component analysis of multi-subject fMRI studies
Tang, Li
2014-01-01
Summary An important goal in fMRI studies is to decompose the observed series of brain images to identify and characterize underlying brain functional networks. Independent component analysis (ICA) has been shown to be a powerful computational tool for this purpose. Classic ICA has been successfully applied to single-subject fMRI data. The extension of ICA to group inferences in neuroimaging studies, however, is challenging due to the unavailability of a pre-specified group design matrix. Existing group ICA methods generally concatenate observed fMRI data across subjects on the temporal domain and then decompose multi-subject data in a similar manner to single-subject ICA. The major limitation of existing methods is that they ignore between-subject variability in spatial distributions of brain functional networks in group ICA. In this paper, we propose a new hierarchical probabilistic group ICA method to formally model subject-specific effects in both temporal and spatial domains when decomposing multi-subject fMRI data. The proposed method provides model-based estimation of brain functional networks at both the population and subject level. An important advantage of the hierarchical model is that it provides a formal statistical framework to investigate similarities and differences in brain functional networks across subjects, e.g., subjects with mental disorders or neurodegenerative diseases such as Parkinson’s as compared to normal subjects. We develop an EM algorithm for model estimation where both the E-step and M-step have explicit forms. We compare the performance of the proposed hierarchical model with that of two popular group ICA methods via simulation studies. We illustrate our method with application to an fMRI study of Zen meditation. PMID:24033125
Zhou, Rong; Basile, Franco
2017-09-05
A method based on plasmon surface resonance absorption and heating was developed to perform a rapid on-surface protein thermal decomposition and digestion suitable for imaging mass spectrometry (MS) and/or profiling. This photothermal process or plasmonic thermal decomposition/digestion (plasmonic-TDD) method incorporates a continuous wave (CW) laser excitation and gold nanoparticles (Au-NPs) to induce known thermal decomposition reactions that cleave peptides and proteins specifically at the C-terminus of aspartic acid and at the N-terminus of cysteine. These thermal decomposition reactions are induced by heating a solid protein sample to temperatures between 200 and 270 °C for a short period of time (10-50 s per 200 μm segment) and are reagentless and solventless, and thus are devoid of sample product delocalization. In the plasmonic-TDD setup the sample is coated with Au-NPs and irradiated with 532 nm laser radiation to induce thermoplasmonic heating and bring about site-specific thermal decomposition on solid peptide/protein samples. In this manner the Au-NPs act as nanoheaters that result in a highly localized thermal decomposition and digestion of the protein sample that is independent of the absorption properties of the protein, making the method universally applicable to all types of proteinaceous samples (e.g., tissues or protein arrays). Several experimental variables were optimized to maximize product yield, and they include heating time, laser intensity, size of Au-NPs, and surface coverage of Au-NPs. Using optimized parameters, proof-of-principle experiments confirmed the ability of the plasmonic-TDD method to induce both C-cleavage and D-cleavage on several peptide standards and the protein lysozyme by detecting their thermal decomposition products with matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS). The high spatial specificity of the plasmonic-TDD method was demonstrated by using a mask to digest designated sections of the sample surface with the heating laser and MALDI-MS imaging to map the resulting products. The solventless nature of the plasmonic-TDD method enabled the nonenzymatic on-surface digestion of proteins to proceed with undetectable delocalization of the resulting products from their precursor protein location. The advantages of this novel plasmonic-TDD method include short reaction times (<30 s/200 μm), compatibility with MALDI, universal sample compatibility, high spatial specificity, and localization of the digestion products. These advantages point to potential applications of this method for on-tissue protein digestion and MS-imaging/profiling for the identification of proteins, high-fidelity MS imaging of high molecular weight (>30 kDa) proteins, and the rapid analysis of formalin-fixed paraffin-embedded (FFPE) tissue samples.
Precoded spatial multiplexing MIMO system with spatial component interleaver.
Gao, Xiang; Wu, Zhanji
In this paper, the performance of precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with spatial component interleaver based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of coded bits is derived. Based on the PEP analysis, the optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the limited feedback precoded proposed scheme with linear zero forcing (ZF) receiver, in order to minimize a bound on the average probability of a symbol vector error, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed. Based on the average mutual information (AMI)-maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with spatial component interleaver can achieve significant performance advantages compared to the conventional spatial multiplexing MIMO system.
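For the ideal full-CSI case described above, SVD precoding reduces the MIMO channel to parallel subchannels; a minimal sketch (with illustrative BPSK symbols and noise level, and with no component interleaver or channel coding) is given below.

```python
import numpy as np

# SVD precoding sketch for an ideal spatial-multiplexing link:
# with H = U S V^H, transmitting V @ s and applying U^H at the receiver
# turns the MIMO channel into parallel scalar subchannels.
rng = np.random.default_rng(0)
nt = nr = 4
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
U, S, Vh = np.linalg.svd(H)

s = (rng.integers(0, 2, nt) * 2 - 1).astype(complex)   # illustrative BPSK symbols
x = Vh.conj().T @ s                                     # precode with V
noise = 0.01 * (rng.standard_normal(nr) + 1j * rng.standard_normal(nr))
y = H @ x + noise
r = U.conj().T @ y                                      # receive shaping with U^H
print(np.round((r / S).real))   # per-subchannel equalization recovers the symbols
```

The limited-feedback schemes analyzed in the paper replace the exact V by a precoder chosen from a codebook, which is where the proposed effective-SNR-based selection criterion enters.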
NASA Technical Reports Server (NTRS)
Li, Jing; Carlson, Barbara E.; Lacis, Andrew A.
2014-01-01
Moderate Resolution Imaging SpectroRadiometer (MODIS) and Multi-angle Imaging SpectroRadiometer (MISR) provide regular aerosol observations with global coverage. It is essential to examine the coherency between space- and ground-measured aerosol parameters in representing aerosol spatial and temporal variability, especially in the climate forcing and model validation context. In this paper, we introduce Maximum Covariance Analysis (MCA), also known as Singular Value Decomposition analysis, as an effective way to compare correlated aerosol spatial and temporal patterns between satellite measurements and AERONET data. This technique not only successfully extracts the variability of major aerosol regimes but also allows the simultaneous examination of the aerosol variability both spatially and temporally. More importantly, it readily accommodates the sparsely distributed AERONET data, for which other spectral decomposition methods, such as Principal Component Analysis, do not yield satisfactory results. The comparison shows overall good agreement between MODIS/MISR and AERONET AOD variability. The correlations between the first three modes of the MCA results for both MODIS/AERONET and MISR/AERONET are above 0.8 for the full data set and above 0.75 for the AOD anomaly data. The correlations between MODIS and MISR modes are also quite high (greater than 0.9). We also examine the extent of spatial agreement between satellite and AERONET AOD data at the selected stations. Some sites with disagreements in the MCA results, such as Kanpur, also have low spatial coherency. This should be associated partly with high AOD spatial variability and partly with uncertainties in satellite retrievals due to the seasonally varying aerosol types and surface properties.
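A minimal sketch of the MCA computation described here, assuming two synthetic anomaly data sets in place of the gridded MODIS/MISR retrievals and the sparse AERONET records: the coupled modes are the singular vectors of the temporal cross-covariance matrix, and the correlations of their expansion coefficients are the quantities reported in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
nt, nx, ny = 120, 50, 20            # time steps, satellite grid points, AERONET sites (illustrative)
X = rng.standard_normal((nt, nx))   # stand-in for satellite AOD anomalies (time x space)
Y = rng.standard_normal((nt, ny))   # stand-in for AERONET AOD anomalies (time x sites)

# Remove temporal means so the cross-covariance is centered
Xa = X - X.mean(axis=0)
Ya = Y - Y.mean(axis=0)

# Maximum Covariance Analysis: SVD of the cross-covariance matrix
C = Xa.T @ Ya / (nt - 1)
U, s, Vh = np.linalg.svd(C, full_matrices=False)

# Expansion coefficients (time series) of the leading coupled mode
a1 = Xa @ U[:, 0]
b1 = Ya @ Vh[0, :]
print("squared covariance fraction of mode 1:", s[0] ** 2 / np.sum(s ** 2))
print("correlation of mode-1 coefficients:", np.corrcoef(a1, b1)[0, 1])
```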
2009-01-01
Background: The isotopic composition of generalist consumers may be expected to vary in space as a consequence of spatial heterogeneity in isotope ratios, the abundance of resources, and competition. We aim to account for the spatial variation in the carbon and nitrogen isotopic composition of a generalized predatory species across a 500 ha tropical rain forest landscape. We test competing models to account for the relative influence of resources and competitors on the carbon and nitrogen isotopic enrichment of gypsy ants (Aphaenogaster araneoides), taking into account site-specific differences in baseline isotope ratios. Results: We found that 75% of the variance in the fraction of 15N in the tissue of A. araneoides was accounted for by one environmental parameter, the concentration of soil phosphorus. After taking into account landscape-scale variation in baseline resources, the most parsimonious model indicated that colony growth and leaf litter biomass accounted for nearly all of the variance in the δ15N discrimination factor, whereas the δ13C discrimination factor was most parsimoniously associated with colony size and the rate of leaf litter decomposition. There was no indication that competitor density or diversity accounted for spatial differences in the isotopic composition of gypsy ants. Conclusion: Across a 500 ha landscape, soil phosphorus accounted for spatial variation in baseline nitrogen isotope ratios. The δ15N discrimination factor of a higher order consumer in this food web was structured by bottom-up influences - the quantity and decomposition rate of leaf litter. Stable isotope studies on the trophic biology of consumers may benefit from explicit spatial design to account for edaphic properties that alter the baseline at fine spatial grains. PMID:19930701
NASA Astrophysics Data System (ADS)
Lyakh, Dmitry I.
2018-03-01
A novel reduced-scaling, general-order coupled-cluster approach is formulated by exploiting hierarchical representations of many-body tensors, combined with the recently suggested formalism of scale-adaptive tensor algebra. Inspired by the hierarchical techniques from the renormalisation group approach, H/H2-matrix algebra and fast multipole method, the computational scaling reduction in our formalism is achieved via coarsening of quantum many-body interactions at larger interaction scales, thus imposing a hierarchical structure on many-body tensors of coupled-cluster theory. In our approach, the interaction scale can be defined on any appropriate Euclidean domain (spatial domain, momentum-space domain, energy domain, etc.). We show that the hierarchically resolved many-body tensors can reduce the storage requirements to O(N), where N is the number of simulated quantum particles. Subsequently, we prove that any connected many-body diagram consisting of a finite number of arbitrary-order tensors, e.g. an arbitrary coupled-cluster diagram, can be evaluated in O(NlogN) floating-point operations. On top of that, we suggest an additional approximation to further reduce the computational complexity of higher order coupled-cluster equations, i.e. equations involving higher than double excitations, which otherwise would introduce a large prefactor into formal O(NlogN) scaling.
Within outlying mean indexes: refining the OMI analysis for the realized niche decomposition.
Karasiewicz, Stéphane; Dolédec, Sylvain; Lefebvre, Sébastien
2017-01-01
The ecological niche concept has regained interest under environmental change (e.g., climate change, eutrophication, and habitat destruction), especially to study the impacts on niche shift and conservatism. Here, we propose the within outlying mean indexes (WitOMI), which refine the outlying mean index (OMI) analysis by using its properties in combination with the K-select analysis species marginality decomposition. The purpose is to decompose the ecological niche into subniches associated with the experimental design, i.e., taking into account temporal and/or spatial subsets. WitOMI emphasize the habitat conditions that contribute (1) to the definition of species' niches using all available conditions and, at the same time, (2) to the delineation of species' subniches according to given subsets of dates or sites. The latter aspect allows addressing niche dynamics by highlighting the influence of atypical habitat conditions on species at a given time and/or space. Then, (3) the biological constraint exerted on the species subniche becomes observable within Euclidean space as the difference between the existing fundamental subniche and the realized subniche. We illustrate the decomposition of published OMI analyses, using spatial and temporal examples. The species assemblage's subniches are comparable to the same environmental gradient, producing a more accurate and precise description of the assemblage niche distribution under environmental change. The WitOMI calculations are available in the open-access R package "subniche."
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pau, G. S. H.; Bisht, G.; Riley, W. J.
2014-09-17
Existing land surface models (LSMs) describe physical and biological processes that occur over a wide range of spatial and temporal scales. For example, biogeochemical and hydrological processes responsible for carbon (CO2, CH4) exchanges with the atmosphere range from the molecular scale (pore-scale O2 consumption) to tens of kilometers (vegetation distribution, river networks). Additionally, many processes within LSMs are nonlinearly coupled (e.g., methane production and soil moisture dynamics), and therefore simple linear upscaling techniques can result in large prediction error. In this paper we applied a reduced-order modeling (ROM) technique known as "proper orthogonal decomposition mapping method" that reconstructs temporally resolved fine-resolution solutions based on coarse-resolution solutions. We developed four different methods and applied them to four study sites in a polygonal tundra landscape near Barrow, Alaska. Coupled surface–subsurface isothermal simulations were performed for summer months (June–September) at fine (0.25 m) and coarse (8 m) horizontal resolutions. We used simulation results from three summer seasons (1998–2000) to build ROMs of the 4-D soil moisture field for the study sites individually (single-site) and aggregated (multi-site). The results indicate that the ROM produced a significant computational speedup (> 10^3) with very small relative approximation error (< 0.1%) for 2 validation years not used in training the ROM. We also demonstrate that our approach: (1) efficiently corrects for coarse-resolution model bias and (2) can be used for polygonal tundra sites not included in the training data set with relatively good accuracy (< 1.7% relative error), thereby allowing for the possibility of applying these ROMs across a much larger landscape. By coupling the ROMs constructed at different scales together hierarchically, this method has the potential to efficiently increase the resolution of land models for coupled climate simulations to spatial scales consistent with mechanistic physical process representation.
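The core of such a ROM is a proper orthogonal decomposition of fine-resolution training snapshots. The sketch below (synthetic data, illustrative names) shows that step and the projection onto the reduced basis; the paper's mapping from coarse-resolution solutions to the POD coefficients is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
n_fine, n_snap = 4096, 90            # fine-resolution cells, training snapshots (illustrative)
snapshots = rng.standard_normal((n_fine, n_snap))   # stand-in for 4-D soil moisture fields

# Proper orthogonal decomposition of the mean-removed snapshot matrix
mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, Vh = np.linalg.svd(snapshots - mean_field, full_matrices=False)

# Retain enough modes to capture 99.9% of the snapshot energy
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.999)) + 1
basis = U[:, :r]

# A new fine field is approximated by projecting onto the reduced basis;
# in the ROM described above, the coefficients would instead be predicted
# from the coarse-resolution simulation.
new_field = rng.standard_normal(n_fine)
coeffs = basis.T @ (new_field - mean_field[:, 0])
reconstruction = mean_field[:, 0] + basis @ coeffs
```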
NASA Astrophysics Data System (ADS)
Wendler, Th.; Meyer-Ebrecht, D.
1982-01-01
Picture archiving and communication systems, especially those for medical applications, will offer the potential to integrate the various image sources of different nature. A major problem, however, is the incompatibility of the different matrix sizes and data formats. This may be overcome by a novel hierarchical coding process, which could lead to a unified picture format standard. A picture coding scheme is described, which decomposes a given (2^n)^2 picture matrix into a basic (2^m)^2 coarse information matrix (representing lower spatial frequencies) and a set of n-m detail matrices, containing information of increasing spatial resolution. Thus, the picture is described by an ordered set of data blocks rather than by a full resolution matrix of pixels. The blocks of data are transferred and stored using data formats, which have to be standardized throughout the system. Picture sources, which produce pictures of different resolution, will provide the coarse-matrix data block and additionally only those detail matrices that correspond to their required resolution. Correspondingly, only those detail-matrix blocks need to be retrieved from the picture base that are actually required for softcopy or hardcopy output. Thus, picture sources and retrieval terminals of diverse nature and retrieval processes for diverse purposes are easily made compatible. Furthermore, this approach will yield an economic use of storage space and transmission capacity: in contrast to fixed formats, redundant data blocks are always skipped. The user will get a coarse representation even of a high-resolution picture almost instantaneously with gradually added details, and may abort transmission at any desired detail level. The coding scheme applies the S-transform, which is a simple add/subtract algorithm basically derived from the Hadamard transform. Thus, an additional data compression can easily be achieved, especially for high-resolution pictures, by applying appropriate non-linear and/or adaptive quantizing.
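A simplified sketch of the average/difference principle behind such a pyramid decomposition is given below; it uses floating-point block means rather than the exact integer S-transform of the paper, but it illustrates how a coarse matrix plus detail matrices reconstruct the original picture losslessly.

```python
import numpy as np

def decompose_once(img):
    """One level of an add/subtract (S-transform-style) pyramid decomposition.

    Splits a 2^n x 2^n image into a coarse 2^(n-1) x 2^(n-1) matrix of 2x2 block
    means and a detail array holding the remaining differences. Integer rounding
    and the exact S-transform normalization are omitted for clarity.
    """
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    coarse = (a + b + c + d) / 4.0                       # lower-spatial-frequency content
    detail = np.stack([a - coarse, b - coarse, c - coarse], axis=0)
    return coarse, detail

def reconstruct_once(coarse, detail):
    """Invert decompose_once exactly (the fourth pixel follows from the block mean)."""
    a = detail[0] + coarse
    b = detail[1] + coarse
    c = detail[2] + coarse
    d = 4.0 * coarse - a - b - c
    img = np.empty((2 * coarse.shape[0], 2 * coarse.shape[1]))
    img[0::2, 0::2], img[0::2, 1::2], img[1::2, 0::2], img[1::2, 1::2] = a, b, c, d
    return img

img = np.arange(64, dtype=float).reshape(8, 8)
coarse, detail = decompose_once(img)
assert np.allclose(reconstruct_once(coarse, detail), img)   # lossless round trip
```

Applying decompose_once recursively to the coarse matrix yields the ordered set of data blocks described above, from which transmission can stop at any desired detail level.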
Modeling stream network-scale variation in coho salmon overwinter survival and smolt size
We used multiple regression and hierarchical mixed-effects models to examine spatial patterns of overwinter survival and size at smolting in juvenile coho salmon Oncorhynchus kisutch in relation to habitat attributes across an extensive stream network in southwestern Oregon over ...
Hierarchical patch-based co-registration of differently stained histopathology slides
NASA Astrophysics Data System (ADS)
Yigitsoy, Mehmet; Schmidt, Günter
2017-03-01
Over the past decades, digital pathology has emerged as an alternative way of looking at the tissue at subcellular level. It enables multiplexed analysis of different cell types at micron level. Information about cell types can be extracted by staining sections of a tissue block using different markers. However, robust fusion of structural and functional information from different stains is necessary for reproducible multiplexed analysis. Such a fusion can be obtained via image co-registration by establishing spatial correspondences between tissue sections. Spatial correspondences can then be used to transfer various statistics about cell types between sections. However, the multi-modal nature of images and sparse distribution of interesting cell types pose several challenges for the registration of differently stained tissue sections. In this work, we propose a co-registration framework that efficiently addresses such challenges. We present a hierarchical patch-based registration of intensity normalized tissue sections. Preliminary experiments demonstrate the potential of the proposed technique for the fusion of multi-modal information from differently stained digital histopathology sections.
Tan, Wui Siew; Lewis, Christina L; Horelik, Nicholas E; Pregibon, Daniel C; Doyle, Patrick S; Yi, Hyunmin
2008-11-04
We demonstrate hierarchical assembly of tobacco mosaic virus (TMV)-based nanotemplates with hydrogel-based encoded microparticles via nucleic acid hybridization. TMV nanotemplates possess a highly defined structure and a genetically engineered high density thiol functionality. The encoded microparticles are produced in a high throughput microfluidic device via stop-flow lithography (SFL) and consist of spatially discrete regions containing encoded identity information, an internal control, and capture DNAs. For the hybridization-based assembly, partially disassembled TMVs were programmed with linker DNAs that contain sequences complementary to both the virus 5' end and a selected capture DNA. Fluorescence microscopy, atomic force microscopy (AFM), and confocal microscopy results clearly indicate facile assembly of TMV nanotemplates onto microparticles with high spatial and sequence selectivity. We anticipate that our hybridization-based assembly strategy could be employed to create multifunctional viral-synthetic hybrid materials in a rapid and high-throughput manner. Additionally, we believe that these viral-synthetic hybrid microparticles may find broad applications in high capacity, multiplexed target sensing.
Vangestel, C; Mergeay, J; Dawson, D A; Callens, T; Vandomme, V; Lens, L
2012-01-01
House sparrow (Passer domesticus) populations have suffered major declines in urban as well as rural areas, while remaining relatively stable in suburban ones. Yet, to date no exhaustive attempt has been made to examine how, and to what extent, spatial variation in population demography is reflected in genetic population structuring along contemporary urbanization gradients. Here we use putatively neutral microsatellite loci to study if and how genetic variation can be partitioned in a hierarchical way among different urbanization classes. Principal coordinate analyses did not support the hypothesis that urban/suburban and rural populations comprise two distinct genetic clusters. Comparison of FST values at different hierarchical scales revealed drift as an important force of population differentiation. Redundancy analyses revealed that genetic structure was strongly affected by both spatial variation and level of urbanization. The results shown here can be used as baseline information for future genetic monitoring programmes and provide additional insights into contemporary house sparrow dynamics along urbanization gradients. PMID:22588131
Aguirre-Salado, Alejandro Ivan; Vaquera-Huerta, Humberto; Aguirre-Salado, Carlos Arturo; Reyes-Mora, Silvia; Olvera-Cervantes, Ana Delia; Lancho-Romero, Guillermo Arturo; Soubervielle-Montalvo, Carlos
2017-07-06
We implemented a spatial model for analysing PM10 maxima across the Mexico City metropolitan area during the period 1995-2016. We assumed that these maxima follow a non-identical generalized extreme value (GEV) distribution and modeled the trend by introducing multivariate smoothing spline functions into the probability GEV distribution. A flexible, three-stage hierarchical Bayesian approach was developed to analyse the distribution of the PM10 maxima in space and time. We evaluated the statistical model's performance by using a simulation study. The results showed strong evidence of a positive correlation between the PM10 maxima and the longitude and latitude. The relationship between time and the PM10 maxima was negative, indicating a decreasing trend over time. Finally, a high risk of PM10 maxima presenting levels above 1000 μg/m3 (return period: 25 yr) was observed in the northwestern region of the study area.
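For orientation, a stationary GEV fit and the associated 25-year return level can be computed with SciPy as sketched below, using synthetic annual maxima; the model above additionally lets the GEV parameters vary smoothly in space and time within a hierarchical Bayesian framework.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)
# Stand-in for a station's annual PM10 maxima (µg/m³); real data would come
# from the Mexico City monitoring network, 1995-2016.
annual_maxima = genextreme.rvs(c=-0.1, loc=300.0, scale=60.0, size=22, random_state=rng)

# Stationary maximum-likelihood GEV fit (shape, location, scale)
c_hat, loc_hat, scale_hat = genextreme.fit(annual_maxima)

# 25-year return level: the value exceeded with probability 1/25 in a given year
return_level_25 = genextreme.isf(1.0 / 25.0, c_hat, loc=loc_hat, scale=scale_hat)
print(f"estimated 25-yr return level: {return_level_25:.0f} µg/m³")
```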
Hierarchically Ordered Nanopatterns for Spatial Control of Biomolecules
2015-01-01
The development and study of a benchtop, high-throughput, and inexpensive fabrication strategy to obtain hierarchical patterns of biomolecules with sub-50 nm resolution is presented. A diblock copolymer of polystyrene-b-poly(ethylene oxide), PS-b-PEO, is synthesized with biotin capping the PEO block and 4-bromostyrene copolymerized within the polystyrene block at 5 wt %. These two handles allow thin films of the block copolymer to be postfunctionalized with biotinylated biomolecules of interest and to obtain micropatterns of nanoscale-ordered films via photolithography. The design of this single polymer further allows access to two distinct superficial nanopatterns (lines and dots), where the PEO cylinders are oriented parallel or perpendicular to the substrate. Moreover, we present a strategy to obtain hierarchical mixed morphologies: a thin-film coating of cylinders both parallel and perpendicular to the substrate can be obtained by tuning the solvent annealing and irradiation conditions. PMID:25363506
Hierarchical species distribution models
Hefley, Trevor J.; Hooten, Mevin B.
2016-01-01
Determining the distribution pattern of a species is important to increase scientific knowledge, inform management decisions, and conserve biodiversity. To infer spatial and temporal patterns, species distribution models have been developed for use with many sampling designs and types of data. Recently, it has been shown that count, presence-absence, and presence-only data can be conceptualized as arising from a point process distribution. Therefore, it is important to understand properties of the point process distribution. We examine how the hierarchical species distribution modeling framework has been used to incorporate a wide array of regression and theory-based components while accounting for the data collection process and making use of auxiliary information. The hierarchical modeling framework allows us to demonstrate how several commonly used species distribution models can be derived from the point process distribution, highlight areas of potential overlap between different models, and suggest areas where further research is needed.
unmarked: An R package for fitting hierarchical models of wildlife occurrence and abundance
Fiske, Ian J.; Chandler, Richard B.
2011-01-01
Ecological research uses data collection techniques that are prone to substantial and unique types of measurement error to address scientific questions about species abundance and distribution. These data collection schemes include a number of survey methods in which unmarked individuals are counted, or determined to be present, at spatially referenced sites. Examples include site occupancy sampling, repeated counts, distance sampling, removal sampling, and double observer sampling. To appropriately analyze these data, hierarchical models have been developed to separately model explanatory variables of both a latent abundance or occurrence process and a conditional detection process. Because these models have a straightforward interpretation paralleling mechanisms under which the data arose, they have recently gained immense popularity. The common hierarchical structure of these models is well-suited for a unified modeling interface. The R package unmarked provides such a unified modeling framework, including tools for data exploration, model fitting, model criticism, post-hoc analysis, and model comparison.
Hierarchically Ordered Nanopatterns for Spatial Control of Biomolecules
Tran, Helen; Ronaldson, Kacey; Bailey, Nevette A.; ...
2014-11-04
We present the development and study of a benchtop, high-throughput, and inexpensive fabrication strategy to obtain hierarchical patterns of biomolecules with sub-50 nm resolution. A diblock copolymer of polystyrene-b-poly(ethylene oxide), PS-b-PEO, is synthesized with biotin capping the PEO block and 4-bromostyrene copolymerized within the polystyrene block at 5 wt %. These two handles allow thin films of the block copolymer to be postfunctionalized with biotinylated biomolecules of interest and to obtain micropatterns of nanoscale-ordered films via photolithography. The design of this single polymer further allows access to two distinct superficial nanopatterns (lines and dots), where the PEO cylinders are oriented parallel or perpendicular to the substrate. Moreover, we present a strategy to obtain hierarchical mixed morphologies: a thin-film coating of cylinders both parallel and perpendicular to the substrate can be obtained by tuning the solvent annealing and irradiation conditions.
NASA Astrophysics Data System (ADS)
Hudson, Zachary M.; Boott, Charlotte E.; Robinson, Matthew E.; Rupar, Paul A.; Winnik, Mitchell A.; Manners, Ian
2014-10-01
Recent advances in the self-assembly of block copolymers have enabled the precise fabrication of hierarchical nanostructures using low-cost solution-phase protocols. However, the preparation of well-defined and complex planar nanostructures in which the size is controlled in two dimensions (2D) has remained a challenge. Using a series of platelet-forming block copolymers, we have demonstrated through quantitative experiments that the living crystallization-driven self-assembly (CDSA) approach can be extended to growth in 2D. We used 2D CDSA to prepare uniform lenticular platelet micelles of controlled size and to construct precisely concentric lenticular micelles composed of spatially distinct functional regions, as well as complex structures analogous to nanoscale single- and double-headed arrows and spears. These methods represent a route to hierarchical nanostructures that can be tailored in 2D, with potential applications as diverse as liquid crystals, diagnostic technology and composite reinforcement.
Soil Moisture fusion across scales using a multiscale nonstationary Spatial Hierarchical Model
NASA Astrophysics Data System (ADS)
Kathuria, D.; Mohanty, B.; Katzfuss, M.
2017-12-01
Soil moisture (SM) datasets from remote sensing (RS) platforms (such as SMOS and SMAP) and reanalysis products from land surface models are typically available on a coarse spatial granularity of several square km. Ground-based sensors, on the other hand, provide observations on a finer spatial scale (meter scale or less) but are sparsely available. SM is affected by high variability due to complex interactions between geologic, topographic, vegetation and atmospheric variables, and these interactions change dynamically with footprint scales. Past literature has largely focused on the scale-specific effect of these covariates on soil moisture. The present study proposes a robust Multiscale-Nonstationary Spatial Hierarchical Model (MN-SHM) which can assimilate SM from point to RS footprints. The spatial structure of SM across footprints is modeled by a class of scalable covariance functions whose nonstationarity depends on atmospheric forcings (such as precipitation) and surface physical controls (such as topography, soil texture and vegetation). The proposed model is applied to fuse point and airborne (~1.5 km) SM data obtained during the SMAPVEX12 campaign in the Red River watershed in Southern Manitoba, Canada with SMOS (~30 km) data. It is observed that precipitation, soil texture and vegetation are the dominant factors which affect the SM distribution across various footprint scales (750 m, 1.5 km, 3 km, 9 km, 15 km and 30 km). We conclude that MN-SHM handles the change-of-support problem easily while retaining reasonable predictive accuracy across multiple spatial resolutions in the presence of surface heterogeneity. The MN-SHM can be considered as a complex non-stationary extension of traditional geostatistical prediction methods (such as kriging) for fusing multi-platform multi-scale datasets.
De Lillo, Carlo; Kirby, Melissa; Poole, Daniel
2016-01-01
Immediate serial spatial recall measures the ability to retain sequences of locations in short-term memory and is considered the spatial equivalent of digit span. It is tested by requiring participants to reproduce sequences of movements performed by an experimenter or displayed on a monitor. Different organizational factors dramatically affect serial spatial recall but they are often confounded or underspecified. Untangling them is crucial for the characterization of working-memory models and for establishing the contribution of structure and memory capacity to spatial span. We report five experiments assessing the relative role and independence of factors that have been reported in the literature. Experiment 1 disentangled the effects of spatial clustering and path-length by manipulating the distance of items displayed on a touchscreen monitor. Long-path sequences segregated by spatial clusters were compared with short-path sequences not segregated by clusters. Recall was more accurate for sequences segregated by clusters independently from path-length. Experiment 2 featured conditions where temporal pauses were introduced between or within cluster boundaries during the presentation of sequences with the same paths. Thus, the temporal structure of the sequences was either consistent or inconsistent with a hierarchical representation based on segmentation by spatial clusters but the effect of structure could not be confounded with effects of path-characteristics. Pauses at cluster boundaries yielded more accurate recall, as predicted by a hierarchical model. In Experiment 3, the systematic manipulation of sequence structure, path-length, and presence of path-crossings of sequences showed that structure explained most of the variance, followed by the presence/absence of path-crossings, and path-length. Experiments 4 and 5 replicated the results of the previous experiments in immersive virtual reality navigation tasks where the viewpoint of the observer changed dynamically during encoding and recall. This suggested that the effects of structure in spatial span are not dependent on perceptual grouping processes induced by the aerial view of the stimulus array typically afforded by spatial recall tasks. These results demonstrate the independence of coding strategies based on structure from effects of path characteristics and perceptual grouping in immediate serial spatial recall. PMID:27891101
Cholinergic stimulation enhances Bayesian belief updating in the deployment of spatial attention.
Vossel, Simone; Bauer, Markus; Mathys, Christoph; Adams, Rick A; Dolan, Raymond J; Stephan, Klaas E; Friston, Karl J
2014-11-19
The exact mechanisms whereby the cholinergic neurotransmitter system contributes to attentional processing remain poorly understood. Here, we applied computational modeling to psychophysical data (obtained from a spatial attention task) under a psychopharmacological challenge with the cholinesterase inhibitor galantamine (Reminyl). This allowed us to characterize the cholinergic modulation of selective attention formally, in terms of hierarchical Bayesian inference. In a placebo-controlled, within-subject, crossover design, 16 healthy human subjects performed a modified version of Posner's location-cueing task in which the proportion of validly and invalidly cued targets (percentage of cue validity, % CV) changed over time. Saccadic response speeds were used to estimate the parameters of a hierarchical Bayesian model to test whether cholinergic stimulation affected the trial-wise updating of probabilistic beliefs that underlie the allocation of attention or whether galantamine changed the mapping from those beliefs to subsequent eye movements. Behaviorally, galantamine led to a greater influence of probabilistic context (% CV) on response speed than placebo. Crucially, computational modeling suggested this effect was due to an increase in the rate of belief updating about cue validity (as opposed to the increased sensitivity of behavioral responses to those beliefs). We discuss these findings with respect to cholinergic effects on hierarchical cortical processing and in relation to the encoding of expected uncertainty or precision.
NASA Astrophysics Data System (ADS)
Villaverde, Eduardo Lopez; Robert, Sébastien; Prada, Claire
2017-02-01
In the present work, the Total Focusing Method (TFM) is used to image defects in a High Density Polyethylene (HDPE) pipe. The viscoelastic attenuation of this material corrupts the images with a high electronic noise. In order to improve the image quality, the Decomposition of the Time Reversal Operator (DORT) filtering is combined with spatial Walsh-Hadamard coded transmissions before calculating the images. Experiments on a complex HDPE joint demonstrate that this method improves the signal-to-noise ratio by more than 40 dB in comparison with the conventional TFM.
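A bare-bones delay-and-sum TFM kernel is sketched below for reference (function and variable names are illustrative); it omits the DORT filtering and Walsh-Hadamard coded transmissions that the paper combines with TFM to recover signal-to-noise in the attenuating HDPE, and in practice the analytic (Hilbert-transformed) signals would be summed before taking the envelope.

```python
import numpy as np

def tfm_image(fmc, t, elem_x, grid_x, grid_z, c):
    """Minimal delay-and-sum Total Focusing Method on full-matrix-capture data.

    fmc     : array (n_elem, n_elem, n_t) of A-scans for every transmit/receive pair
    t       : array (n_t,) of sample times
    elem_x  : array (n_elem,) of element positions along the array (z = 0)
    grid_x, grid_z : 1-D arrays defining the image grid
    c       : wave speed in the medium
    """
    n_elem = len(elem_x)
    image = np.zeros((len(grid_z), len(grid_x)))
    dt, t0 = t[1] - t[0], t[0]
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            # one-way time of flight from each element to this pixel
            tof = np.sqrt((elem_x - x) ** 2 + z ** 2) / c
            for tx in range(n_elem):
                delays = tof[tx] + tof                      # transmit + receive paths
                idx = np.clip(((delays - t0) / dt).astype(int), 0, len(t) - 1)
                image[iz, ix] += fmc[tx, np.arange(n_elem), idx].sum()
    return np.abs(image)
```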
Kim, Jeong-Im; Humphreys, Glyn W
2010-08-01
Previous research has shown that stimuli held in working memory (WM) can influence spatial attention. Using Navon stimuli, we explored whether and how items in WM affect the perception of visual targets at local and global levels in compound letters. Participants looked for a target letter presented at a local or global level while holding a regular block letter as a memory item. An effect of holding the target's identity in WM was found. When memory items and targets were the same, performance was better than in a neutral condition when the memory item did not appear in the hierarchical letter (a benefit from valid cuing). When the memory item matched the distractor in the hierarchical stimulus, performance was worse than in the neutral baseline (a cost on invalid trials). These effects were greatest when the WM cue matched the global level of the hierarchical stimulus, suggesting that WM biases attention to the global level of form. Interestingly, in a no-memory priming condition, target perception was faster in the invalid condition than in the neutral baseline, reversing the effect in the WM condition. A further control experiment ruled out the effects of WM being due to participants' refreshing their memory from the hierarchical stimulus display. The data show that information in WM biases the selection of hierarchical forms, whereas priming does not. Priming alters the perceptual processing of repeated stimuli without biasing attention.
Jung, Minju; Hwang, Jungsik; Tani, Jun
2015-01-01
It is well known that the visual cortex efficiently processes high-dimensional spatial information by using a hierarchical structure. Recently, computational models that were inspired by the spatial hierarchy of the visual cortex have shown remarkable performance in image recognition. Up to now, however, most biological and computational modeling studies have mainly focused on the spatial domain and do not discuss temporal domain processing of the visual cortex. Several studies on the visual cortex and other brain areas associated with motor control support that the brain also uses its hierarchical structure as a processing mechanism for temporal information. Based on the success of previous computational models using spatial hierarchy and temporal hierarchy observed in the brain, the current report introduces a novel neural network model for the recognition of dynamic visual image patterns based solely on the learning of exemplars. This model is characterized by the application of both spatial and temporal constraints on local neural activities, resulting in the self-organization of a spatio-temporal hierarchy necessary for the recognition of complex dynamic visual image patterns. The evaluation with the Weizmann dataset in recognition of a set of prototypical human movement patterns showed that the proposed model is significantly robust in recognizing dynamically occluded visual patterns compared to other baseline models. Furthermore, an evaluation test for the recognition of concatenated sequences of those prototypical movement patterns indicated that the model is endowed with a remarkable capability for the contextual recognition of long-range dynamic visual image patterns. PMID:26147887
Bayesian Hierarchical Modeling for Big Data Fusion in Soil Hydrology
NASA Astrophysics Data System (ADS)
Mohanty, B.; Kathuria, D.; Katzfuss, M.
2016-12-01
Soil moisture datasets from remote sensing (RS) platforms (such as SMOS and SMAP) and reanalysis products from land surface models are typically available on a coarse spatial granularity of several square km. Ground based sensors on the other hand provide observations on a finer spatial scale (meter scale or less) but are sparsely available. Soil moisture is affected by high variability due to complex interactions between geologic, topographic, vegetation and atmospheric variables. Hydrologic processes usually occur at a scale of 1 km or less and therefore spatially ubiquitous and temporally periodic soil moisture products at this scale are required to aid local decision makers in agriculture, weather prediction and reservoir operations. Past literature has largely focused on downscaling RS soil moisture for a small extent of a field or a watershed and hence the applicability of such products has been limited. The present study employs a spatial Bayesian Hierarchical Model (BHM) to derive soil moisture products at a spatial scale of 1 km for the state of Oklahoma by fusing point scale Mesonet data and coarse scale RS data for soil moisture and its auxiliary covariates such as precipitation, topography, soil texture and vegetation. It is seen that the BHM model handles change of support problems easily while performing accurate uncertainty quantification arising from measurement errors and imperfect retrieval algorithms. The computational challenge arising due to the large number of measurements is tackled by utilizing basis function approaches and likelihood approximations. The BHM model can be considered as a complex Bayesian extension of traditional geostatistical prediction methods (such as Kriging) for large datasets in the presence of uncertainties.
The Partition of Multi-Resolution LOD Based on Qtm
NASA Astrophysics Data System (ADS)
Hou, M.-L.; Xing, H.-Q.; Zhao, X.-S.; Chen, J.
2011-08-01
The partition hierarchy of the Quaternary Triangular Mesh (QTM) determines the accuracy of spatial analysis and applications based on QTM. In order to resolve the problem that the partition hierarchy of QTM is limited by the capability of the computer hardware, a new method of multi-resolution LOD (Level of Detail) based on QTM is discussed in this paper. This method allows the resolution of the cells to vary with the viewpoint position by partitioning the QTM cells, selecting the particular area according to the viewpoint, and dealing with the cracks caused by different subdivision levels; it thereby satisfies the requirement of effectively unlimited partitioning of local regions.
Construction of Matryoshka-type structures from supercharged protein nanocages.
Beck, Tobias; Tetter, Stephan; Künzle, Matthias; Hilvert, Donald
2015-01-12
Designing nanoscaled hierarchical structures with increasing levels of complexity is challenging. Here we show that electrostatic interactions between two complementarily supercharged protein nanocages can be effectively utilized to create nested Matryoshka-type structures. Cage-within-cage complexes containing spatially ordered iron oxide nanoparticles spontaneously self-assemble upon mixing positively supercharged ferritin compartments with AaLS-13, a larger shell-forming protein with a negatively supercharged lumen. Exploiting engineered Coulombic interactions and protein dynamics in this way opens up new avenues for creating hierarchically organized supramolecular assemblies for application as delivery vehicles, reaction chambers, and artificial organelles.
Multi-scale habitat selection modeling: A review and outlook
Kevin McGarigal; Ho Yi Wan; Kathy A. Zeller; Brad C. Timm; Samuel A. Cushman
2016-01-01
Scale is the lens that focuses ecological relationships. Organisms select habitat at multiple hierarchical levels and at different spatial and/or temporal scales within each level. Failure to properly address scale dependence can result in incorrect inferences in multi-scale habitat selection modeling studies.
Metamodeling Techniques to Aid in the Aggregation Process of Large Hierarchical Simulation Models
2008-08-01
[Extraction residue from a report figure listing campaign-level model, campaign-level outputs, aggregation, metamodeling, and complexity (spatial, temporal, etc.). Surviving text fragment: techniques of this type, used for variance reduction, are called variance reduction techniques (VRT) [Law, 2006]; the implementation of some type of VRT can prove to be a very valuable tool.]
Framework for computing the spatial coherence effects of polycapillary x-ray optics
Zysk, Adam M.; Schoonover, Robert W.; Xu, Qiaofeng; Anastasio, Mark A.
2012-01-01
Despite the extensive use of polycapillary x-ray optics for focusing and collimating applications, there remains a significant need for characterization of the coherence properties of the output wavefield. In this work, we present the first quantitative computational method for calculation of the spatial coherence effects of polycapillary x-ray optical devices. This method employs the coherent mode decomposition of an extended x-ray source, geometric optical propagation of individual wavefield modes through a polycapillary device, output wavefield calculation by ray data resampling onto a uniform grid, and the calculation of spatial coherence properties by way of the spectral degree of coherence. PMID:22418154
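For reference, the spectral degree of coherence and the coherent-mode expansion of the cross-spectral density used in this kind of analysis are conventionally written as follows (standard statistical-optics definitions, not equations quoted from the paper):

```latex
\mu(\mathbf{r}_1,\mathbf{r}_2,\omega)
  = \frac{W(\mathbf{r}_1,\mathbf{r}_2,\omega)}
         {\sqrt{S(\mathbf{r}_1,\omega)\,S(\mathbf{r}_2,\omega)}},
\qquad
W(\mathbf{r}_1,\mathbf{r}_2,\omega)
  = \sum_n \lambda_n(\omega)\,\phi_n^{*}(\mathbf{r}_1,\omega)\,\phi_n(\mathbf{r}_2,\omega),
```

where W is the cross-spectral density, S(r, ω) = W(r, r, ω) is the spectral density, and φ_n, λ_n are the coherent modes and their eigenvalues; each mode is what is propagated geometrically through the polycapillary optic before the modal sum is reassembled on the output grid.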
Landsat analysis of tropical forest succession employing a terrain model
NASA Technical Reports Server (NTRS)
Barringer, T. H.; Robinson, V. B.; Coiner, J. C.; Bruce, R. C.
1980-01-01
Landsat multispectral scanner (MSS) data have yielded a dual classification of rain forest and shadow in an analysis of a semi-deciduous forest on Mindoro Island, Philippines. Both a spatial terrain model, using a fifth-order polynomial trend surface analysis for quantitatively estimating the general spatial variation in the data set, and a spectral terrain model, based on the MSS data, have been set up. A discriminant analysis, using both sets of data, has suggested that shadowing effects may be due primarily to local variations in the spectral regions and can therefore be compensated for through the decomposition of the spatial variation in both elevation and MSS data.
Using home buyers' revealed preferences to define the urban rural fringe
NASA Astrophysics Data System (ADS)
Lesage, James P.; Charles, Joni S.
2008-03-01
The location of new homes defines the urban-rural fringe and determines many facets of the urban-rural interaction set in motion by construction of new homes in previously rural areas. Home, neighborhood and school district characteristics play a crucial role in determining the spatial location of new residential construction, which in turn defines the boundary and spatial extent of the urban-rural fringe. We develop and apply a spatial hedonic variant of the Blinder (J Hum Resour 8:436-455, 1973) and Oaxaca (Int Econ Rev 9:693-709, 1973) price decomposition to newer versus older home sales in the Columbus, Ohio metropolitan area during the year 2000. The preferences of buyers of newer homes are compared to those who purchased the nearest neighboring older home located in the same census block group, during the same year. Use of the nearest older home purchased in the same location represents a methodology to control for various neighborhood, socioeconomic-demographic and school district characteristics that influence home prices. Since newer homes reflect current preferences for home characteristics while older homes reflect past preferences for these characteristics, we use the price differentials between newer and older home sales in the Blinder-Oaxaca decomposition to assess the relative significance of various house characteristics to home buyers.
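A minimal non-spatial version of the Blinder-Oaxaca price decomposition can be written as below (synthetic data, hypothetical characteristic names); the spatial hedonic variant described above adds spatial dependence to the underlying regressions.

```python
import numpy as np

def oaxaca_blinder(X_new, y_new, X_old, y_old):
    """Two-fold Blinder-Oaxaca decomposition of the mean (log) price gap.

    Splits mean(y_new) - mean(y_old) into an 'endowments' part (differences in
    average characteristics, valued at the new-home coefficients) and a
    'coefficients' part (differences in implicit prices). Illustrative only.
    """
    def ols(X, y):
        Xd = np.column_stack([np.ones(len(y)), X])      # add intercept
        beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
        return beta, Xd.mean(axis=0)

    beta_new, xbar_new = ols(X_new, y_new)
    beta_old, xbar_old = ols(X_old, y_old)

    endowments = (xbar_new - xbar_old) @ beta_new
    coefficients = xbar_old @ (beta_new - beta_old)
    total = y_new.mean() - y_old.mean()                 # = endowments + coefficients
    return total, endowments, coefficients

# Tiny synthetic example: two characteristics (e.g. floor area, lot size)
rng = np.random.default_rng(4)
Xn, Xo = rng.normal(1.0, 0.3, (200, 2)), rng.normal(0.8, 0.3, (200, 2))
yn = 12.0 + Xn @ np.array([0.5, 0.2]) + rng.normal(0, 0.1, 200)
yo = 11.8 + Xo @ np.array([0.4, 0.2]) + rng.normal(0, 0.1, 200)
print(oaxaca_blinder(Xn, yn, Xo, yo))
```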
Investigation of coherent structures in a superheated jet using decomposition methods
NASA Astrophysics Data System (ADS)
Sinha, Avick; Gopalakrishnan, Shivasubramanian; Balasubramanian, Sridhar
2016-11-01
A superheated turbulent jet, commonly encountered in many engineering flows, is a complex two-phase mixture of liquid and vapor. The superposition of temporally and spatially evolving coherent vortical motions, known as coherent structures (CS), governs the dynamics of such a jet. Both proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD) are employed to analyze such vortical motions. Particle image velocimetry (PIV) data are used in conjunction with the decomposition methods to analyze the CS in the flow. The experiments were conducted using water emanating into a tank containing homogeneous fluid at ambient conditions. Three inlet pressures were employed in the study, all at a fixed inlet temperature. 90% of the total kinetic energy in the mean flow is contained within the first five modes. The scatterplot for any two POD coefficients predominantly showed a circular distribution, representing a strong connection between the two modes. We speculate that the velocity and vorticity contours of the spatial POD basis functions show the presence of Kelvin-Helmholtz (K-H) instability in the flow. From the DMD, eigenvalues away from the origin are observed for all cases, indicating the presence of a non-oscillatory structure. Spatial structures are also obtained from the DMD. The authors are grateful to the Confederation of Indian Industry and General Electric India Pvt. Ltd. for partial funding of this project.
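For readers unfamiliar with DMD, the standard "exact DMD" algorithm applied to a snapshot sequence is sketched below (illustrative code, not the authors' implementation); a purely real eigenvalue corresponds to a non-oscillatory structure, as noted in the abstract, while complex-conjugate pairs describe oscillating coherent structures.

```python
import numpy as np

def dmd(snapshots, r=None):
    """Exact dynamic mode decomposition of a sequence of (PIV-like) snapshots.

    snapshots : array (n_points, n_times) of velocity fields sampled at a fixed time step.
    Returns the DMD eigenvalues (growth/oscillation of each mode) and the spatial modes.
    """
    X1, X2 = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    if r is not None:                                   # optional rank truncation
        U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T / s         # low-rank linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.conj().T / s @ W                    # exact DMD modes
    return eigvals, modes

# Usage on synthetic data standing in for PIV velocity fields
rng = np.random.default_rng(5)
eigvals, modes = dmd(rng.standard_normal((2000, 60)), r=10)
```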
What Matters in Education: A Decomposition of Educational Outcomes with Multiple Measures
ERIC Educational Resources Information Center
Li, Jinjing; Miranti, Riyana; Vidyattama, Yogi
2017-01-01
Significant variations in educational outcomes across both the spatial and socioeconomic spectra in Australia have been widely debated by policymakers in recent years. This paper examines these variations and decomposes educational outcomes into 3 major input factors: availability of school resources, socioeconomic background, and a latent factor…
Zhang, Peng; Li, Houqiang; Wang, Honghui; Wong, Stephen T C; Zhou, Xiaobo
2011-01-01
Peak detection is one of the most important steps in mass spectrometry (MS) analysis. However, the detection result is greatly affected by severe spectrum variations. Unfortunately, most current peak detection methods are neither flexible enough to revise false detection results nor robust enough to resist spectrum variations. To improve flexibility, we introduce peak tree to represent the peak information in MS spectra. Each tree node is a peak judgment on a range of scales, and each tree decomposition, as a set of nodes, is a candidate peak detection result. To improve robustness, we combine peak detection and common peak alignment into a closed-loop framework, which finds the optimal decomposition via both peak intensity and common peak information. The common peak information is derived and loopily refined from the density clustering of the latest peak detection result. Finally, we present an improved ant colony optimization biomarker selection method to build a whole MS analysis system. Experiment shows that our peak detection method can better resist spectrum variations and provide higher sensitivity and lower false detection rates than conventional methods. The benefits from our peak-tree-based system for MS disease analysis are also proved on real SELDI data.
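As a point of contrast with the closed-loop, peak-tree approach described above, a conventional single-pass detector fixes its thresholds up front, as in the short SciPy sketch below (synthetic spectrum, arbitrary parameters); spectrum-to-spectrum variation then produces the false or missed peaks the authors set out to reduce.

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(6)
mz = np.linspace(1000, 10000, 5000)
spectrum = rng.normal(0, 1, mz.size) ** 2             # synthetic noisy baseline
for center, height in [(2500, 40), (4700, 25), (8100, 15)]:
    spectrum += height * np.exp(-0.5 * ((mz - center) / 5.0) ** 2)

# Single-pass detection: prominence and width thresholds are fixed in advance,
# so spectrum variation easily yields false or missed peaks.
peaks, props = find_peaks(spectrum, prominence=5.0, width=2)
print(mz[peaks])
```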
Su, Guijin; Liu, Yexuan; Huang, Linyan; Lu, Huijie; Liu, Sha; Li, Liewu; Zheng, Minghui
2014-03-01
An ethylene-glycol (EG) mediated self-assembly process was first developed to synthesize micrometer-sized nanostructured Mg-doped Fe3O4 composite oxides to decompose hexachlorobenzene (HCB) at 300°C. The synthesized samples were characterized by scanning electron microscopy, transmission electron microscopy, X-ray diffraction, energy dispersive X-ray spectroscopy and inductively coupled plasma optical emission spectrometry. The morphology and composition of the composite oxide precursor were regulated by the molar ratio of the magnesium acetate and ferric nitrate used as the reactants. Calcination of the precursor particles, prepared with different molar ratios of the metal salts, under a reducing nitrogen atmosphere generated three kinds of Mg-doped Fe3O4 composite oxide micro/nano materials. Their reactivity toward HCB decomposition was likely influenced by the material morphology and the content of Mg dopants. Ball-like MgFe2O4-Fe3O4 composite oxide micro/nano material showed superior HCB dechlorination efficiencies when compared with pure Fe3O4 micro/nano material prepared under similar experimental conditions, thus highlighting the benefits of doping Mg into Fe3O4 matrices.
Closed-form solution of decomposable stochastic models
NASA Technical Reports Server (NTRS)
Sjogren, Jon A.
1990-01-01
Markov and semi-Markov processes are increasingly being used in the modeling of complex reconfigurable systems (fault tolerant computers). The estimation of the reliability (or some measure of performance) of the system reduces to solving the process for its state probabilities. Such a model may exhibit numerous states and complicated transition distributions, contributing to an expensive and numerically delicate solution procedure. Thus, when a system exhibits a decomposition property, either structurally (autonomous subsystems), or behaviorally (component failure versus reconfiguration), it is desirable to exploit this decomposition in the reliability calculation. In interesting cases there can be failure states which arise from non-failure states of the subsystems. Equations are presented which allow the computation of failure probabilities of the total (combined) model without requiring a complete solution of the combined model. This material is presented within the context of closed-form functional representation of probabilities as utilized in the Symbolic Hierarchical Automated Reliability and Performance Evaluator (SHARPE) tool. The techniques adopted enable one to compute such probability functions for a much wider class of systems at a reduced computational cost. Several examples show how the method is used, especially in enhancing the versatility of the SHARPE tool.
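As a toy illustration of solving such a process for its state probabilities (not one of the SHARPE examples, and without the decomposition machinery), the transient probabilities of a small continuous-time Markov model follow directly from the matrix exponential of its generator:

```python
import numpy as np
from scipy.linalg import expm

# A 3-state continuous-time Markov model of a duplex reconfigurable system (illustrative):
#   state 0: both units up, state 1: one unit up after reconfiguration, state 2: system failed.
lam = 1e-3                                  # per-unit failure rate (per hour)
Q = np.array([[-2 * lam, 2 * lam, 0.0],
              [0.0,     -lam,     lam],
              [0.0,      0.0,     0.0]])    # generator matrix (rows sum to zero)

p0 = np.array([1.0, 0.0, 0.0])              # start with both units up
t = 1000.0                                  # mission time in hours
p_t = p0 @ expm(Q * t)                      # transient state probabilities p(t) = p(0) exp(Qt)
print("unreliability at t:", p_t[2])
```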
Filho, Humberto A; Machicao, Jeaneth; Bruno, Odemir M
2018-01-01
Modeling the basic structure of metabolic machinery is a challenge for modern biology. Some models based on complex networks have provided important information regarding this machinery. In this paper, we constructed metabolic networks of 17 plants covering unicellular organisms to more complex dicotyledonous plants. The metabolic networks were built based on the substrate-product model and a topological percolation was performed using the k-core decomposition. The distribution of metabolites across the percolation layers showed correlations between the metabolic integration hierarchy and the network topology. We show that metabolites concentrated in the internal network (maximum k-core) only comprise molecules of the primary basal metabolism. Moreover, we found a high proportion of a set of common metabolites, among the 17 plants, centered at the inner k-core layers. Meanwhile, the metabolites recognized as participants in the secondary metabolism of plants are concentrated in the outermost layers of the network. These data suggest that the metabolites in the central layer form a basic molecular module in which the whole plant metabolism is anchored. The elements from this central core participate in almost all plant metabolic reactions, which suggests that plant metabolic networks follow a centralized topology.
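The percolation step described here can be reproduced on a toy graph with NetworkX, whose core_number and k_core routines implement the k-core decomposition; the metabolite names below are placeholders, not the networks built in the paper.

```python
import networkx as nx

# Toy substrate-product graph: a densely connected primary-metabolism core
# plus a peripheral chain ending in a secondary metabolite.
G = nx.Graph()
G.add_edges_from([
    ("acetyl-CoA", "citrate"), ("citrate", "isocitrate"),
    ("isocitrate", "oxaloacetate"), ("oxaloacetate", "acetyl-CoA"),
    ("acetyl-CoA", "isocitrate"), ("citrate", "oxaloacetate"),   # complete core
    ("pyruvate", "acetyl-CoA"), ("alanine", "pyruvate"),
    ("flavonoid", "alanine"),                                    # peripheral branch
])

core_number = nx.core_number(G)           # k-core index ("percolation layer") per metabolite
k_max = max(core_number.values())
inner_core = nx.k_core(G, k=k_max)        # maximum k-core: candidate central module
print(sorted(core_number.items(), key=lambda kv: -kv[1]))
print("innermost-core metabolites:", sorted(inner_core.nodes))
```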
Density-cluster NMA: A new protein decomposition technique for coarse-grained normal mode analysis.
Demerdash, Omar N A; Mitchell, Julie C
2012-07-01
Normal mode analysis has emerged as a useful technique for investigating protein motions on long time scales. This is largely due to the advent of coarse-graining techniques, particularly Hooke's Law-based potentials and the rotational-translational blocking (RTB) method for reducing the size of the force-constant matrix, the Hessian. Here we present a new method for domain decomposition for use in RTB that is based on hierarchical clustering of atomic density gradients, which we call Density-Cluster RTB (DCRTB). The method reduces the number of degrees of freedom by 85-90% compared with the standard blocking approaches. We compared the normal modes from DCRTB against standard RTB using 1-4 residues in sequence in a single block, with good agreement between the two methods. We also show that Density-Cluster RTB and standard RTB perform well in capturing the experimentally determined direction of conformational change. Significantly, we report superior correlation of DCRTB with B-factors compared with 1-4 residue per block RTB. Finally, we show significant reduction in computational cost for Density-Cluster RTB that is nearly 100-fold for many examples.
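A minimal sketch of the clustering step is shown below, assuming atom coordinates as a stand-in for the density-gradient features actually used by DCRTB; it only illustrates how an agglomerative tree is cut into blocks and how the RTB degree-of-freedom count shrinks.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(7)
# Stand-in atomic features: 3-D coordinates of a small synthetic protein;
# DCRTB clusters atoms on density-gradient-derived features instead.
coords = rng.normal(size=(300, 3)) * 5.0

Z = linkage(coords, method="ward")                  # agglomerative hierarchical clustering
blocks = fcluster(Z, t=30, criterion="maxclust")    # cut the tree into 30 rigid blocks

# Each block contributes 6 rigid-body degrees of freedom
# (3 translations + 3 rotations) to the reduced RTB Hessian.
n_dof_full = 3 * len(coords)
n_dof_rtb = 6 * blocks.max()
print(f"reduction: {n_dof_full} -> {n_dof_rtb} degrees of freedom")
```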
Can an unbroken flavour symmetry provide an approximate description of lepton masses and mixing?
NASA Astrophysics Data System (ADS)
Reyimuaji, Y.; Romanino, A.
2018-03-01
We provide a complete answer to the following question: what are the flavour groups and representations providing, in the symmetric limit, an approximate description of lepton masses and mixings? We assume that neutrino masses are described by the Weinberg operator. We show that the pattern of lepton masses and mixings only depends on the dimension, type (real, pseudoreal, complex), and equivalence of the irreducible components of the flavour representation, and we find only six viable cases. In all cases the neutrinos are either anarchical or have an inverted hierarchical spectrum. In the context of SU(5) unification, only the anarchical option is allowed. Therefore, if the hint of a normal hierarchical spectrum were confirmed, we would conclude (under the above assumption) that symmetry breaking effects must play a leading order role in the understanding of neutrino flavour observables. In order to obtain the above results, we develop a simple algorithm to determine the form of the lepton masses and mixings directly from the structure of the decomposition of the flavour representation in irreducible components, without the need to specify the form of the lepton mass matrices.
SU-E-QI-14: Quantitative Variogram Detection of Mild, Unilateral Disease in Elastase-Treated Rats
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacob, R; Carson, J
2014-06-15
Purpose: Determining the presence of mild or early disease in the lungs can be challenging and subjective. We present a rapid and objective method for evaluating lung damage in a rat model of unilateral mild emphysema based on a new approach to heterogeneity assessment. We combined octree decomposition (used in three-dimensional (3D) computer graphics) with variograms (used in geostatistics to assess spatial relationships) to evaluate 3D computed tomography (CT) lung images for disease. Methods: Male, Sprague-Dawley rats (232 ± 7 g) were intratracheally dosed with 50 U/kg of elastase dissolved in 200 μL of saline to a single lobe (n=6) or with saline only (n=5). After four weeks, 3D micro-CT images were acquired at end expiration on mechanically ventilated rats using prospective gating. Images were masked, and lungs were decomposed to homogeneous blocks of 2×2×2, 4×4×4, and 8×8×8 voxels using octree decomposition. The spatial variance (the square of the difference in signal intensity) between all pairs of the 8×8×8 blocks was calculated. Variograms (graphs of distance vs. variance) were made, and the data were fit to a power law and the exponent determined. The mean HU values, coefficient of variation (CoV), and emphysema index (EI) were calculated and compared to the variograms. Results: The variogram analysis showed that significant differences between groups existed (p<0.01), whereas the mean HU (p=0.07), CoV (p=0.24), and EI (p=0.08) did not. Calculation time for the variogram for a typical 1000-block decomposition was ∼6 seconds, and octree decomposition took ∼2 minutes. Decomposing the images prior to variogram calculation resulted in a ∼700-fold reduction in computation time compared with other published approaches. Conclusions: Our results suggest that the approach combining octree decomposition and variogram analysis may be a rapid, non-subjective, and sensitive imaging-based biomarker for quantitative characterization of lung disease.
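A minimal sketch of the variogram step is given below, assuming the octree stage has already produced block centroids and mean intensities (random stand-ins here): it computes pairwise semivariances, bins them by distance, and fits a power-law exponent in log-log space.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import linregress

rng = np.random.default_rng(1)
centers = rng.uniform(0, 50, size=(1000, 3))          # hypothetical block centroids (mm)
values = rng.normal(-700, 50, size=1000)              # hypothetical mean HU per block

dist = pdist(centers)                                  # all pairwise centroid distances
gamma = 0.5 * pdist(values[:, None], metric="sqeuclidean")   # pairwise semivariances

# Bin by distance, average within bins, and fit variance ~ a * distance**b in log space.
bins = np.linspace(dist.min(), dist.max(), 30)
idx = np.digitize(dist, bins)
d_mean = np.array([dist[idx == i].mean() for i in range(1, len(bins)) if np.any(idx == i)])
g_mean = np.array([gamma[idx == i].mean() for i in range(1, len(bins)) if np.any(idx == i)])

fit = linregress(np.log(d_mean), np.log(g_mean))
print("fitted power-law exponent:", round(fit.slope, 3))
```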
NASA Astrophysics Data System (ADS)
Biton, Yaacov; Rabinovitch, Avinoam; Braunstein, Doron; Aviram, Ira; Campbell, Katherine; Mironov, Sergey; Herron, Todd; Jalife, José; Berenfeld, Omer
2018-01-01
Cardiac fibrillation is a major clinical and societal burden. Rotors may drive fibrillation in many cases, but their role and patterns are often masked by complex propagation. We used Singular Value Decomposition (SVD), which ranks patterns of activation hierarchically, together with Wiener-Granger causality analysis (WGCA), which analyses direction of information among observations, to investigate the role of rotors in cardiac fibrillation. We hypothesized that combining SVD analysis with WGCA should reveal whether rotor activity is the dominant driving force of fibrillation even in cases of high complexity. Optical mapping experiments were conducted in neonatal rat cardiomyocyte monolayers (diameter, 35 mm), which were genetically modified to overexpress the delayed rectifier K+ channel IKr only in one half of the monolayer. Such monolayers have been shown previously to sustain fast rotors confined to the IKr overexpressing half and driving fibrillatory-like activity in the other half. SVD analysis of the optical mapping movies revealed a hierarchical pattern in which the primary modes corresponded to rotor activity in the IKr overexpressing region and the secondary modes corresponded to fibrillatory activity elsewhere. We then applied WGCA to evaluate the directionality of influence between modes in the entire monolayer using clear and noisy movies of activity. We demonstrated that the rotor modes influence the secondary fibrillatory modes, but influence was detected also in the opposite direction. To more specifically delineate the role of the rotor in fibrillation, we decomposed separately the respective SVD modes of the rotor and fibrillatory domains. In this case, WGCA yielded more information from the rotor to the fibrillatory domains than in the opposite direction. In conclusion, SVD analysis reveals that rotors can be the dominant modes of an experimental model of fibrillation. Wiener-Granger causality on modes of the rotor domains confirms their preferential driving influence on fibrillatory modes.
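The two analysis stages lend themselves to a compact sketch: an SVD of the optical-mapping movie (frames by pixels) followed by a Wiener-Granger causality test between the temporal coefficients of two modes. The code below uses synthetic noise in place of recordings and statsmodels' Granger test as a stand-in for the authors' WGCA implementation.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)
movie = rng.normal(size=(500, 32 * 32))            # frames x pixels (synthetic)

# SVD ranks spatial activation patterns hierarchically by singular value.
U, S, Vt = np.linalg.svd(movie - movie.mean(axis=0), full_matrices=False)
mode1 = U[:, 0] * S[0]                             # temporal coefficient of mode 1
mode2 = U[:, 1] * S[1]                             # temporal coefficient of mode 2

# Test whether mode 1 (e.g. the rotor domain) Granger-causes mode 2:
# statsmodels tests whether the second column causes the first.
data = np.column_stack([mode2, mode1])
result = grangercausalitytests(data, maxlag=5, verbose=False)
print("p-value (lag 5):", result[5][0]["ssr_ftest"][1])
```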
Level III Ecoregions of the Mississippi Alluvial Plain
Ecoregions for the Mississippi Alluvial Plain were extracted from the seamless national shapefile. Ecoregions denote areas of general similarity in ecosystems and in the type, quality, and quantity of environmental resources. They are designed to serve as a spatial framework for the research, assessment, management, and monitoring of ecosystems and ecosystem components. By recognizing the spatial differences in the capacities and potentials of ecosystems, ecoregions stratify the environment by its probable response to disturbance (Bryce and others, 1999). These general purpose regions are critical for structuring and implementing ecosystem management strategies across federal agencies, state agencies, and non-government organizations that are responsible for different types of resources within the same geographical areas (Omernik and others, 2000). The approach used to compile this map is based on the premise that ecological regions can be identified through the analysis of the spatial patterns and the composition of biotic and abiotic phenomena that affect or reflect differences in ecosystem quality and integrity (Wiken, 1986; Omernik, 1987, 1995). These phenomena include geology, physiography, vegetation, climate, soils, land use, wildlife, and hydrology. The relative importance of each characteristic varies from one ecological region to another regardless of the hierarchical level. A Roman numeral hierarchical scheme has been adopted for different levels for ecological regions.
Level IV Ecoregions of the Mississippi Alluvial Plain
Ecoregions for the Mississippi Alluvial Plain were extracted from the seamless national shapefile. Ecoregions denote areas of general similarity in ecosystems and in the type, quality, and quantity of environmental resources. They are designed to serve as a spatial framework for the research, assessment, management, and monitoring of ecosystems and ecosystem components. By recognizing the spatial differences in the capacities and potentials of ecosystems, ecoregions stratify the environment by its probable response to disturbance (Bryce and others, 1999). These general purpose regions are critical for structuring and implementing ecosystem management strategies across federal agencies, state agencies, and non-government organizations that are responsible for different types of resources within the same geographical areas (Omernik and others, 2000). The approach used to compile this map is based on the premise that ecological regions can be identified through the analysis of the spatial patterns and the composition of biotic and abiotic phenomena that affect or reflect differences in ecosystem quality and integrity (Wiken, 1986; Omernik, 1987, 1995). These phenomena include geology, physiography, vegetation, climate, soils, land use, wildlife, and hydrology. The relative importance of each characteristic varies from one ecological region to another regardless of the hierarchical level. A Roman numeral hierarchical scheme has been adopted for different levels for ecological regions.
Kim, Misun; Maguire, Eleanor A
2018-05-01
Humans commonly operate within 3D environments such as multifloor buildings and yet there is a surprising dearth of studies that have examined how these spaces are represented in the brain. Here, we had participants learn the locations of paintings within a virtual multilevel gallery building and then used behavioral tests and fMRI repetition suppression analyses to investigate how this 3D multicompartment space was represented, and whether there was a bias in encoding vertical and horizontal information. We found faster response times for within-room egocentric spatial judgments and behavioral priming effects of visiting the same room, providing evidence for a compartmentalized representation of space. At the neural level, we observed a hierarchical encoding of 3D spatial information, with left anterior hippocampus representing local information within a room, while retrosplenial cortex, parahippocampal cortex, and posterior hippocampus represented room information within the wider building. Of note, both our behavioral and neural findings showed that vertical and horizontal location information was similarly encoded, suggesting an isotropic representation of 3D space even in the context of a multicompartment environment. These findings provide much-needed information about how the human brain supports spatial memory and navigation in buildings with numerous levels and rooms.
Improved Forecasting of Next Day Ozone Concentrations in the Eastern U.S.
There is an urgent need to provide accurate air quality information and forecasts to the general public. A hierarchical space-time model is used to forecast next day spatial patterns of daily maximum 8-hr ozone concentrations. The model combines ozone monitoring data and gridded...
1985-05-15
…example, Hughes & Zimba (1985) have argued that attention acts simply by inhibiting the hemifield to which one is… and control of attention. Brain 104, 1981, 861-872. Hughes, H.C. & Zimba, L.D. Spatial maps of directed attention. Paper presented to the Psychonomics
Psycho-Social Determinants of Gender Prejudice in Science, Technology, Engineering and Mathematics
ERIC Educational Resources Information Center
Nnachi, N. O.; Okpube, M. N.
2015-01-01
This work focused on the "Psycho-social Determinants of Gender Prejudice in Science, Technology, Engineering and Mathematics (STEM)". The females were found to be underrepresented in STEM fields. The under-representation results from gender stereotype, differences in spatial skills, hierarchical and territorial segregations and…
NASA Technical Reports Server (NTRS)
Tilton, James C. (Inventor)
2010-01-01
A method, computer readable storage, and apparatus for implementing recursive segmentation of data with spatial characteristics into regions including splitting-remerging of pixels with contiguous region designations and a user controlled parameter for providing a preference for merging adjacent regions to eliminate window artifacts.
Environmental science and management are fed by individual studies of pollution effects, often focused on single locations. Data are encountered data, typically from multiple sources and on different time and spatial scales. Statistical issues including publication bias and m...
A spatial model of bird abundance as adjusted for detection probability
Gorresen, P.M.; Mcmillan, G.P.; Camp, R.J.; Pratt, T.K.
2009-01-01
Modeling the spatial distribution of animals can be complicated by spatial and temporal effects (i.e. spatial autocorrelation and trends in abundance over time) and other factors such as imperfect detection probabilities and observation-related nuisance variables. Recent advances in modeling have demonstrated various approaches that handle most of these factors but which require a degree of sampling effort (e.g. replication) not available to many field studies. We present a two-step approach that addresses these challenges to spatially model species abundance. Habitat, spatial and temporal variables were handled with a Bayesian approach which facilitated modeling hierarchically structured data. Predicted abundance was subsequently adjusted to account for imperfect detection and the area effectively sampled for each species. We provide examples of our modeling approach for two endemic Hawaiian nectarivorous honeycreepers: 'i'iwi Vestiaria coccinea and 'apapane Himatione sanguinea. © 2009 Ecography.
Pi2 detection using Empirical Mode Decomposition (EMD)
NASA Astrophysics Data System (ADS)
Mieth, Johannes Z. D.; Frühauff, Dennis; Glassmeier, Karl-Heinz
2017-04-01
Empirical Mode Decomposition has been used as an alternative method to wavelet transformation to identify onset times of Pi2 pulsations in data sets of the Scandinavian Magnetometer Array (SMA). Pi2 pulsations are magnetohydrodynamic waves occurring during magnetospheric substorms. They are almost always observed at substorm onset at mid to low latitudes on Earth's nightside, and they are fed by the magnetic energy release caused by dipolarization processes. Their periods lie between 40 and 150 seconds. Usually, Pi2 are detected using wavelet transformation. Here, Empirical Mode Decomposition (EMD) is presented as an alternative to the traditional procedure. EMD is a relatively recent signal decomposition method designed for nonlinear and non-stationary time series. It provides an adaptive, data-driven, and complete decomposition of time series into slow and fast oscillations. An optimized version using Monte-Carlo-type noise assistance is used here. When the results are displayed in time-frequency space, a characteristic frequency modulation is observed. This frequency modulation can be correlated with the onset of Pi2 pulsations. A basic algorithm to find the onset is presented. Finally, the results are compared to classical wavelet-based analysis. The use of different SMA stations furthermore allows spatial analysis of Pi2 onset times. EMD mostly finds application in the fields of engineering and medicine; this work demonstrates the applicability of the method to geomagnetic time series.
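A minimal sketch of the idea, assuming the third-party PyEMD package (distributed on PyPI as EMD-signal) and a synthetic magnetometer trace in place of SMA data: decompose the signal into IMFs and flag the mode whose mean period falls in the Pi2 band (40-150 s). The Monte-Carlo noise-assisted optimization mentioned above is not reproduced here.

```python
import numpy as np
from PyEMD import EMD

dt = 1.0                                           # 1 s sampling, hypothetical
t = np.arange(0.0, 3600.0, dt)
rng = np.random.default_rng(3)
signal = (np.sin(2 * np.pi * t / 80.0) * (t > 1800)    # Pi2-like burst after "onset"
          + 0.3 * np.sin(2 * np.pi * t / 600.0)         # slow background variation
          + 0.1 * rng.normal(size=t.size))

imfs = EMD()(signal)                               # intrinsic mode functions, fast to slow

# Crude band selection: flag IMFs whose mean period falls in the Pi2 range (40-150 s).
for k, imf in enumerate(imfs):
    zero_crossings = np.count_nonzero(np.diff(np.sign(imf)))
    mean_period = 2.0 * t[-1] / max(zero_crossings, 1)
    if 40.0 <= mean_period <= 150.0:
        print(f"IMF {k}: mean period ~{mean_period:.0f} s -> candidate Pi2 mode")
```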
Carlton, Connor D; Mitchell, Samantha; Lewis, Patrick
2018-01-01
Over the past decade, Structure from Motion (SfM) has increasingly been used as a means of digital preservation and for documenting archaeological excavations, architecture, and cultural material. However, few studies have tapped the potential of using SfM to document and analyze taphonomic processes affecting burials for forensic science purposes. This project utilizes SfM models to elucidate specific post-depositional events that affected a series of three human cadavers deposited at the South East Texas Applied Forensic Science Facility (STAFS). The aim of this research was to test the ability of untrained researchers to employ spatial software and photogrammetry for data collection purposes. Over a period of three months, a single-lens reflex (SLR) camera was used to capture a series of overlapping images at periodic stages in the decomposition process of each cadaver. These images were processed with photogrammetric software that creates a 3D model that can be measured, manipulated, and viewed. The project used photogrammetric and geospatial software to map changes in decomposition and movement of the body from the original deposition points. Project results indicate that SfM and GIS are useful tools for documenting decomposition and taphonomic processes, and that photogrammetry is an efficient, relatively simple, and affordable tool for the documentation of decomposition. Copyright © 2017 Elsevier B.V. All rights reserved.
A TV-constrained decomposition method for spectral CT
NASA Astrophysics Data System (ADS)
Guo, Xiaoyue; Zhang, Li; Xing, Yuxiang
2017-03-01
Spectral CT is attracting more and more attention in medicine, industrial nondestructive testing, and security inspection. Material decomposition is an important step in spectral CT for discriminating materials. Because of the spectral overlap of energy channels, as well as the correlation of basis functions, it is well acknowledged that the decomposition step in spectral CT imaging causes noise amplification and artifacts in the component coefficient images. In this work, we propose a material decomposition based on an optimization method to improve the quality of the decomposed coefficient images. Starting from a general optimization problem, total variation minimization is imposed on the coefficient images in our overall objective function with adjustable weights. We solve this constrained optimization problem within the framework of ADMM. Validation is performed on both a simulated dental phantom and a physical pig-leg phantom scanned on a practical CT system with dual-energy imaging. Both numerical and physical experiments give visibly better reconstructions than a general direct inversion method. SNR and SSIM are adopted to quantitatively evaluate the image quality of the decomposed component coefficients. All results demonstrate that the TV-constrained decomposition method performs well in reducing noise without losing spatial resolution, thereby improving image quality. The method can easily be incorporated into different types of spectral imaging modalities, as well as cases with more than two energy channels.
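The following sketch is not the paper's ADMM solver but a simplified proximal-gradient stand-in for the same idea: alternate a data-fidelity gradient step on a two-material, dual-energy model with a total-variation denoising step on each coefficient image (via scikit-image). The mixing matrix, phantoms, and weights are invented for illustration.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(4)
A = np.array([[0.8, 0.3],                          # hypothetical effective attenuation of the
              [0.4, 0.9]])                         # two basis materials in two energy channels

truth = np.zeros((2, 128, 128))
truth[0, 32:96, 32:96] = 1.0                       # material 1 region
truth[1, 48:80, 48:80] = 0.5                       # material 2 insert
meas = np.einsum("ij,jkl->ikl", A, truth) + 0.05 * rng.normal(size=(2, 128, 128))

coeff = np.einsum("ij,jkl->ikl", np.linalg.inv(A), meas)   # direct inversion as initialization
step = 0.5                                                  # below 2 / L for this A
for _ in range(20):                                         # proximal-gradient iterations
    resid = np.einsum("ij,jkl->ikl", A, coeff) - meas       # data-fidelity residual
    grad = np.einsum("ji,jkl->ikl", A, resid)               # A^T applied channel-wise
    coeff = coeff - step * grad
    coeff = np.stack([denoise_tv_chambolle(c, weight=0.05) for c in coeff])  # TV "prox"

print("RMSE per material:", np.sqrt(((coeff - truth) ** 2).mean(axis=(1, 2))))
```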
Data decomposition method for parallel polygon rasterization considering load balancing
NASA Astrophysics Data System (ADS)
Zhou, Chen; Chen, Zhenjie; Liu, Yongxue; Li, Feixue; Cheng, Liang; Zhu, A.-xing; Li, Manchun
2015-12-01
It is essential to adopt parallel computing technology to rapidly rasterize massive polygon data. In parallel rasterization, it is difficult to design an effective data decomposition method. Conventional methods ignore the load balancing of polygon complexity in parallel rasterization and thus fail to achieve high parallel efficiency. In this paper, a novel data decomposition method based on polygon complexity (DMPC) is proposed. First, four factors that potentially affect rasterization efficiency were investigated. Then, a metric combining the number of boundary points and the number of raster pixels in the minimum bounding rectangle was developed to calculate the complexity of each polygon. Using this metric, polygons were allocated according to their complexity so that each process received a balanced load of polygon complexity. To validate the efficiency of DMPC, it was used to parallelize different polygon rasterization algorithms and tested on different datasets. Experimental results showed that DMPC could effectively parallelize polygon rasterization algorithms. Furthermore, the implemented parallel algorithms with DMPC achieved good speedup ratios of at least 15.69 and generally outperformed conventional decomposition methods in terms of parallel efficiency and load balancing. In addition, the results showed that DMPC exhibited consistently better performance for different spatial distributions of polygons.
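A hedged sketch of the load-balancing idea follows: score each polygon by a complexity metric built from its boundary vertex count and the raster pixel count of its minimum bounding rectangle, then greedily assign polygons to the least-loaded process. The equal weighting of the two terms, the synthetic polygons, and the process count are assumptions, not details from the paper.

```python
import heapq
import math
import random
from shapely.geometry import Polygon

random.seed(5)
pixel_size = 1.0
n_procs = 4

def random_polygon(cx, cy, r, n_vertices):
    """Regular polygon used as a stand-in for real vector features."""
    return Polygon([(cx + r * math.cos(2 * math.pi * k / n_vertices),
                     cy + r * math.sin(2 * math.pi * k / n_vertices))
                    for k in range(n_vertices)])

polygons = [random_polygon(random.uniform(0, 100), random.uniform(0, 100),
                           random.uniform(1, 10), random.randint(3, 60))
            for _ in range(200)]

def complexity(poly):
    boundary_pts = len(poly.exterior.coords)                 # boundary point count
    minx, miny, maxx, maxy = poly.bounds                     # minimum bounding rectangle
    mbr_pixels = ((maxx - minx) / pixel_size) * ((maxy - miny) / pixel_size)
    return boundary_pts + mbr_pixels                         # assumed equal weighting

# Greedy "largest first" assignment to the currently least-loaded process.
scored = sorted(((complexity(p), k) for k, p in enumerate(polygons)), reverse=True)
heap = [(0.0, proc) for proc in range(n_procs)]
assignment = {proc: [] for proc in range(n_procs)}
loads = {proc: 0.0 for proc in range(n_procs)}
for score, k in scored:
    load, proc = heapq.heappop(heap)
    assignment[proc].append(k)
    loads[proc] = load + score
    heapq.heappush(heap, (load + score, proc))

print({proc: (len(ids), round(loads[proc])) for proc, ids in assignment.items()})
```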
NASA Astrophysics Data System (ADS)
Love, C. A.; Skahill, B. E.; AghaKouchak, A.; Karlovits, G. S.; England, J. F.; Duren, A. M.
2017-12-01
We compare gridded extreme precipitation return levels obtained using spatial Bayesian hierarchical modeling (BHM) with their respective counterparts from a traditional regional frequency analysis (RFA) using the same set of extreme precipitation data. Our study area is the 11,478 square mile Willamette River basin (WRB) located in northwestern Oregon, a major tributary of the Columbia River whose 187-mile-long main stem, the Willamette River, flows northward between the Coastal and Cascade Ranges. The WRB contains approximately two thirds of Oregon's population and 20 of the 25 most populous cities in the state. The U.S. Army Corps of Engineers (USACE) Portland District operates thirteen dams, and extreme precipitation estimates are required to support risk-informed hydrologic analyses as part of the USACE Dam Safety Program. Our intent is to profile for the USACE an alternative methodology to an RFA that was developed in 2008 due to the lack of an official NOAA Atlas 14 update for the state of Oregon. We analyze 24-hour annual precipitation maxima data for the WRB utilizing the spatial BHM R package "spatial.gev.bma", which has been shown to be efficient in developing coherent maps of extreme precipitation by return level. Our BHM modeling analysis involved application of leave-one-out cross validation (LOO-CV), which supported not only model selection but also a comprehensive assessment of location-specific model performance. The LOO-CV results will provide a basis for the BHM-RFA comparison.
NASA Astrophysics Data System (ADS)
Rupa, Chandra; Mujumdar, Pradeep
2016-04-01
In urban areas, quantification of extreme precipitation is important in the design of storm water drains and other infrastructure. Intensity Duration Frequency (IDF) relationships are generally used to obtain the design return level for a given duration and return period. Because extreme precipitation data are rarely available for a sufficiently large number of years, estimating the probability of extreme events is difficult. Typically, data from a single station are used to obtain the design return levels for various durations and return periods, which are then used in the design of urban infrastructure for the entire city. In an urban setting, the spatial variation of precipitation can be high; precipitation amounts and patterns often vary within short distances of less than 5 km. It is therefore crucial to study the uncertainties in the spatial variation of return levels for various durations. In this work, extreme precipitation is modeled spatially using Bayesian hierarchical analysis and the spatial variation of return levels is studied. The analysis is carried out with a Block Maxima approach for defining the extreme precipitation, using the Generalized Extreme Value (GEV) distribution, for Bangalore city, Karnataka state, India. Daily data for nineteen stations in and around Bangalore city are considered in the study. The analysis is carried out for the summer maxima (March - May), monsoon maxima (June - September), and annual maxima rainfall. In the hierarchical analysis, the statistical model is specified in three layers. The data layer models the block maxima, pooling the extreme precipitation from all the stations. In the process layer, the latent spatial process that drives the extreme precipitation is modeled, characterized by geographical and climatological covariates (latitude-longitude, elevation, mean temperature, etc.), and in the prior layer, the prior distributions that govern the latent process are modeled. A Markov Chain Monte Carlo (MCMC) algorithm (a Metropolis-Hastings algorithm within a Gibbs sampler) is used to obtain samples of the parameters from their posterior distribution. The spatial maps of return levels for specified return periods, along with the associated uncertainties, are obtained for the summer, monsoon, and annual maxima rainfall. Considering various covariates, the best-fit model is selected using the Deviance Information Criterion. It is observed that the geographical covariates (latitude and longitude) outweigh the climatological covariates for the monsoon maxima rainfall. The best covariates for the summer maxima and annual maxima rainfall are mean summer precipitation and mean monsoon precipitation respectively, with elevation included in both cases. Scale invariance theory, which states that the statistical properties of a process observed at various scales are governed by the same relationship, is used to disaggregate the daily rainfall to hourly scales, and spatial maps of the scale parameter are obtained for the study area. The spatial maps of IDF relationships thus generated are useful in storm water design, adequacy analysis, and identifying areas vulnerable to flooding.
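As a minimal illustration of the block-maxima and MCMC machinery, the sketch below fits a GEV distribution to one synthetic station's annual maxima with a random-walk Metropolis sampler and derives a 100-year return level from the posterior draws. The actual study is hierarchical and spatial, with covariates and a Gibbs structure; none of that is reproduced here, and flat priors are an assumption.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(6)
maxima = genextreme.rvs(c=-0.1, loc=60.0, scale=15.0, size=40, random_state=rng)

def log_post(theta):
    shape, loc, log_scale = theta                   # flat (improper) priors assumed
    return genextreme.logpdf(maxima, c=shape, loc=loc, scale=np.exp(log_scale)).sum()

theta = np.array([0.0, maxima.mean(), np.log(maxima.std())])
lp = log_post(theta)
samples = []
for _ in range(20000):                              # random-walk Metropolis
    prop = theta + rng.normal(scale=[0.05, 1.0, 0.05])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

draws = np.array(samples[5000:])                    # discard burn-in
return_levels = genextreme.ppf(1 - 1 / 100, c=draws[:, 0],
                               loc=draws[:, 1], scale=np.exp(draws[:, 2]))
print("posterior median 100-year return level:", round(float(np.median(return_levels)), 1))
```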
Collective helicity switching of a DNA-coat assembly
NASA Astrophysics Data System (ADS)
Kim, Yongju; Li, Huichang; He, Ying; Chen, Xi; Ma, Xiaoteng; Lee, Myongsoo
2017-07-01
Hierarchical assemblies of biomolecular subunits can carry out versatile tasks at the cellular level with remarkable spatial and temporal precision. As an example, the collective motion and mutual cooperation between complex protein machines mediate essential functions for life, such as replication, synthesis, degradation, repair and transport. Nucleic acid molecules are far less dynamic than proteins and need to bind to specific proteins to form hierarchical structures. The simplest example of these nucleic acid-based structures is provided by a rod-shaped tobacco mosaic virus, which consists of genetic material surrounded by coat proteins. Inspired by the complexity and hierarchical assembly of viruses, a great deal of effort has been devoted to design similarly constructed artificial viruses. However, such a wrapping approach makes nucleic acid dynamics insensitive to environmental changes. This limitation generally restricts, for example, the amplification of the conformational dynamics between the right-handed B form to the left-handed Z form of double-stranded deoxyribonucleic acid (DNA). Here we report a virus-like hierarchical assembly in which the native DNA and a synthetic coat undergo repeated collective helicity switching triggered by pH change under physiological conditions. We also show that this collective helicity inversion occurs during translocation of the DNA-coat assembly into intracellular compartments. Translating DNA conformational dynamics into a higher level of hierarchical dynamics may provide an approach to create DNA-based nanomachines.
Towards a hierarchical optimization modeling framework for ...
Background: Bilevel optimization has been recognized as a 2-player Stackelberg game where players are represented as leaders and followers and each pursue their own set of objectives. Hierarchical optimization problems, which are a generalization of bilevel, are especially difficult because the optimization is nested, meaning that the objectives of one level depend on solutions to the other levels. We introduce a hierarchical optimization framework for spatially targeting multiobjective green infrastructure (GI) incentive policies under uncertainties related to policy budget, compliance, and GI effectiveness. We demonstrate the utility of the framework using a hypothetical urban watershed, where the levels are characterized by multiple levels of policy makers (e.g., local, regional, national) and policy followers (e.g., landowners, communities), and objectives include minimization of policy cost, implementation cost, and risk; reduction of combined sewer overflow (CSO) events; and improvement in environmental benefits such as reduced nutrient run-off and water availability. Conclusions: While computationally expensive, this hierarchical optimization framework explicitly simulates the interaction between multiple levels of policy makers (e.g., local, regional, national) and policy followers (e.g., landowners, communities) and is especially useful for constructing and evaluating environmental and ecological policy. Using the framework with a hypothetical urba
Tensorial extensions of independent component analysis for multisubject FMRI analysis.
Beckmann, C F; Smith, S M
2005-03-01
We discuss model-free analysis of multisubject or multisession FMRI data by extending the single-session probabilistic independent component analysis model (PICA; Beckmann and Smith, 2004. IEEE Trans. on Medical Imaging, 23 (2) 137-152) to higher dimensions. This results in a three-way decomposition that represents the different signals and artefacts present in the data in terms of their temporal, spatial, and subject-dependent variations. The technique is derived from and compared with parallel factor analysis (PARAFAC; Harshman and Lundy, 1984. In Research methods for multimode data analysis, chapter 5, pages 122-215. Praeger, New York). Using simulated data as well as data from multisession and multisubject FMRI studies we demonstrate that the tensor PICA approach is able to efficiently and accurately extract signals of interest in the spatial, temporal, and subject/session domain. The final decompositions improve upon PARAFAC results in terms of greater accuracy, reduced interference between the different estimated sources (reduced cross-talk), robustness (against deviations of the data from modeling assumptions and against overfitting), and computational speed. On real FMRI 'activation' data, the tensor PICA approach is able to extract plausible activation maps, time courses, and session/subject modes as well as provide a rich description of additional processes of interest such as image artefacts or secondary activation patterns. The resulting data decomposition gives simple and useful representations of multisubject/multisession FMRI data that can aid the interpretation and optimization of group FMRI studies beyond what can be achieved using model-based analysis techniques.
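For comparison, a plain three-way trilinear (PARAFAC) decomposition of a synthetic space x time x subject tensor can be written in a few lines with TensorLy (an assumed dependency); the tensor PICA model itself is not implemented here.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(7)
space = rng.normal(size=(500, 3))                  # 500 voxels, 3 latent sources
time = rng.normal(size=(120, 3))                   # 120 time points
subject = rng.normal(size=(10, 3))                 # 10 subjects/sessions

# Build a rank-3 trilinear tensor and add noise.
tensor = tl.cp_to_tensor((np.ones(3), [space, time, subject]))
tensor = tensor + 0.1 * rng.normal(size=tensor.shape)

weights, factors = parafac(tl.tensor(tensor), rank=3, n_iter_max=200)
spatial_maps, time_courses, subject_modes = factors
print([f.shape for f in factors])
```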
Hierarchical spatial models of abundance and occurrence from imperfect survey data
Royle, J. Andrew; Kery, M.; Gautier, R.; Schmid, Hans
2007-01-01
Many estimation and inference problems arising from large-scale animal surveys are focused on developing an understanding of patterns in abundance or occurrence of a species based on spatially referenced count data. One fundamental challenge, then, is that it is generally not feasible to completely enumerate ('census') all individuals present in each sample unit. This observation bias may consist of several components, including spatial coverage bias (not all individuals in the population are exposed to sampling) and detection bias (exposed individuals may go undetected). Thus, observations are biased for the state variable (abundance, occupancy) that is the object of inference. Moreover, data are often sparse for most observation locations, requiring consideration of methods for spatially aggregating or otherwise combining sparse data among sample units. The development of methods that unify spatial statistical models with models accommodating non-detection is necessary to resolve important spatial inference problems based on animal survey data. In this paper, we develop a novel hierarchical spatial model for estimation of abundance and occurrence from survey data wherein detection is imperfect. Our application is focused on spatial inference problems in the Swiss Survey of Common Breeding Birds. The observation model for the survey data is specified conditional on the unknown quadrat population size, N(s). We augment the observation model with a spatial process model for N(s), describing the spatial variation in abundance of the species. The model includes explicit sources of variation in habitat structure (forest, elevation) and latent variation in the form of a correlated spatial process. This provides a model-based framework for combining the spatially referenced samples while at the same time yielding a unified treatment of estimation problems involving both abundance and occurrence. We provide a Bayesian framework for analysis and prediction based on the integrated likelihood, and we use the model to obtain estimates of abundance and occurrence maps for the European Jay (Garrulus glandarius), a widespread, elusive forest bird. The naive national abundance estimate ignoring imperfect detection and incomplete quadrat coverage was 77 766 territories. Accounting for imperfect detection added approximately 18 000 territories, and adjusting for coverage bias added another 131 000 territories to yield a fully corrected estimate of the national total of about 227 000 territories. This is approximately three times as high as previous estimates that assume every territory is detected in each quadrat.
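The core of the observation model, counts conditional on an unknown true abundance N with imperfect detection, can be illustrated with a simple binomial-Poisson N-mixture likelihood for one quadrat, marginalizing N up to a truncation bound. The sketch below uses made-up counts and a crude grid search; the spatial process model and Bayesian machinery of the paper are omitted.

```python
import numpy as np
from scipy.stats import binom, poisson

counts = np.array([3, 5, 2])                        # hypothetical repeat counts, one quadrat

def nmixture_loglik(lam, p, counts, n_max=200):
    n = np.arange(counts.max(), n_max + 1)          # candidate true abundances N
    log_prior = poisson.logpmf(n, lam)              # abundance model
    log_obs = binom.logpmf(counts[:, None], n, p).sum(axis=0)   # detection model
    terms = log_prior + log_obs                     # joint, then marginalize N (log-sum-exp)
    m = terms.max()
    return m + np.log(np.exp(terms - m).sum())

# Crude grid search over (lambda, detection probability).
grid = [(lam, p) for lam in np.linspace(2, 20, 19) for p in np.linspace(0.1, 0.9, 9)]
lam_hat, p_hat = max(grid, key=lambda t: nmixture_loglik(t[0], t[1], counts))
print(f"crude MLE: lambda ~ {lam_hat:.1f}, detection p ~ {p_hat:.1f}")
```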
Error analysis of multipoint flux domain decomposition methods for evolutionary diffusion problems
NASA Astrophysics Data System (ADS)
Arrarás, A.; Portero, L.; Yotov, I.
2014-01-01
We study space and time discretizations for mixed formulations of parabolic problems. The spatial approximation is based on the multipoint flux mixed finite element method, which reduces to an efficient cell-centered pressure system on general grids, including triangles, quadrilaterals, tetrahedra, and hexahedra. The time integration is performed by using a domain decomposition time-splitting technique combined with multiterm fractional step diagonally implicit Runge-Kutta methods. The resulting scheme is unconditionally stable and computationally efficient, as it reduces the global system to a collection of uncoupled subdomain problems that can be solved in parallel without the need for Schwarz-type iteration. Convergence analysis for both the semidiscrete and fully discrete schemes is presented.
Osborn, Sarah; Zulian, Patrick; Benson, Thomas; ...
2018-01-30
This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction–diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on an embedded domain with a structured mesh, and then the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data from the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·10⁹ unknowns.
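A toy version of the SPDE sampling idea, on a small structured 2D grid rather than the paper's mixed finite element discretization with mesh embedding: draw spatial white noise and solve a reaction-diffusion (Helmholtz-like) system with finite differences to obtain a correlated Gaussian field. Grid size, kappa, and the noise scaling are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 128                                            # grid points per side
h = 1.0 / n                                        # grid spacing on the unit square
kappa = 10.0                                       # inverse correlation length (assumed)

# 2D Laplacian on a structured grid via Kronecker products of the 1D stencil.
lap1d = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
eye = sp.identity(n)
laplacian = sp.kron(lap1d, eye) + sp.kron(eye, lap1d)
operator = (kappa ** 2) * sp.identity(n * n) - laplacian

rng = np.random.default_rng(8)
white_noise = rng.normal(size=n * n) / h           # scaled spatial white noise
field = spsolve(operator.tocsc(), white_noise).reshape(n, n)
print("sample field: mean %.3f, std %.3f" % (field.mean(), field.std()))
```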
Fabrication of hierarchical hybrid structures using bio-enabled layer-by-layer self-assembly.
Hnilova, Marketa; Karaca, Banu Taktak; Park, James; Jia, Carol; Wilson, Brandon R; Sarikaya, Mehmet; Tamerler, Candan
2012-05-01
Development of versatile and flexible assembly systems for fabrication of functional hybrid nanomaterials with well-defined hierarchical and spatial organization is of significant importance in practical nanobiotechnology applications. Here we demonstrate a bio-enabled self-assembly technique for fabrication of multi-layered protein and nanometallic assemblies utilizing a modular gold-binding (AuBP1) fusion tag. To accomplish the bottom-up assembly we first genetically fused the AuBP1 peptide sequence to the C'-terminus of maltose-binding protein (MBP) using two different linkers to produce MBP-AuBP1 hetero-functional constructs. Using various spectroscopic techniques, surface plasmon resonance (SPR) and localized surface plasmon resonance (LSPR), we verified the exceptional binding and self-assembly characteristics of AuBP1 peptide. The AuBP1 peptide tag can direct the organization of recombinant MBP protein on various gold surfaces through an efficient control of the organic-inorganic interface at the molecular level. Furthermore using a combination of soft-lithography, self-assembly techniques and advanced AuBP1 peptide tag technology, we produced spatially and hierarchically controlled protein multi-layered assemblies on gold nanoparticle arrays with high molecular packing density and patterning efficiency in simple, reproducible steps. This model system offers layer-by-layer assembly capability based on specific AuBP1 peptide tag and constitutes novel biological routes for biofabrication of various protein arrays, plasmon-active nanometallic assemblies and devices with controlled organization, packing density and architecture. Copyright © 2011 Wiley Periodicals, Inc.
Mykrä, Heikki; Heino, Jani; Muotka, Timo
2004-09-01
Streams are naturally hierarchical systems, and their biota are affected by factors effective at regional to local scales. However, there have been only a few attempts to quantify variation in ecological attributes across multiple spatial scales. We examined the variation in several macroinvertebrate metrics and environmental variables at three hierarchical scales (ecoregions, drainage systems, streams) in boreal headwater streams. In nested analyses of variance, significant spatial variability was observed for most of the macroinvertebrate metrics and environmental variables examined. For most metrics, ecoregions explained more variation than did drainage systems. There was, however, much variation attributable to residuals, suggesting high among-stream variation in macroinvertebrate assemblage characteristics. Nonmetric multidimensional scaling (NMDS) and multiresponse permutation procedure (MRPP) showed that assemblage composition differed significantly among both drainage systems and ecoregions. The associated R-statistics were, however, very low, indicating wide variation among sites within the defined landscape classifications. Regional delineations explained most of the variation in stream water chemistry, ecoregions being clearly more influential than drainage systems. For physical habitat characteristics, by contrast, the among-stream component was the major source of variation. Distinct differences attributable to stream size were observed for several metrics, especially total number of taxa and abundance of algae-scraping invertebrates. Although ecoregions clearly account for a considerable amount of variation in macroinvertebrate assemblage characteristics, we suggest that a three-tiered classification system (stratification through ecoregion and habitat type, followed by assemblage prediction within these ecologically meaningful units) will be needed for effective bioassessment of boreal running waters.
Chalmers, Eric; Luczak, Artur; Gruber, Aaron J.
2016-01-01
The mammalian brain is thought to use a version of Model-based Reinforcement Learning (MBRL) to guide “goal-directed” behavior, wherein animals consider goals and make plans to acquire desired outcomes. However, conventional MBRL algorithms do not fully explain animals' ability to rapidly adapt to environmental changes, or learn multiple complex tasks. They also require extensive computation, suggesting that goal-directed behavior is cognitively expensive. We propose here that key features of processing in the hippocampus support a flexible MBRL mechanism for spatial navigation that is computationally efficient and can adapt quickly to change. We investigate this idea by implementing a computational MBRL framework that incorporates features inspired by computational properties of the hippocampus: a hierarchical representation of space, “forward sweeps” through future spatial trajectories, and context-driven remapping of place cells. We find that a hierarchical abstraction of space greatly reduces the computational load (mental effort) required for adaptation to changing environmental conditions, and allows efficient scaling to large problems. It also allows abstract knowledge gained at high levels to guide adaptation to new obstacles. Moreover, a context-driven remapping mechanism allows learning and memory of multiple tasks. Simulating dorsal or ventral hippocampal lesions in our computational framework qualitatively reproduces behavioral deficits observed in rodents with analogous lesions. The framework may thus embody key features of how the brain organizes model-based RL to efficiently solve navigation and other difficult tasks. PMID:28018203
Hierarchical human action recognition around sleeping using obscured posture information
NASA Astrophysics Data System (ADS)
Kudo, Yuta; Sashida, Takehiko; Aoki, Yoshimitsu
2015-04-01
This paper presents a new approach for recognizing human actions around sleeping using the locations of human body parts and the positional relationship between the human and the sleeping environment. Body parts are estimated from the depth image obtained by a time-of-flight (TOF) sensor using oriented 3D normal vectors. The challenges in recognizing actions in a sleeping situation are the need to operate in darkness and the occlusion of the human body by duvets, which make the extraction of image features difficult because color and edge features are obscured by the covers. Thus, in our method, the positions of four parts of the body (head, torso, thigh, and lower leg) are first estimated using a shape model of the body surface constructed from oriented 3D normal vectors. This shape model represents the rough surface shape of the body and enables robust posture estimation even when the body is hidden under duvets. Then, an action descriptor is extracted from the positions of the body parts; it includes the temporal variation of each part and the spatial vectors between the parts and the bed. Furthermore, this paper proposes hierarchical action classes and classifiers to improve the classification of indistinct actions. The classifier is composed of two layers that recognize human action using the action descriptor: the first layer focuses on the spatial descriptor and classifies actions coarsely, while the second layer focuses on the temporal descriptor and classifies them finely. This approach achieves robust recognition of an obscured human body by combining posture information with hierarchical action recognition.
Araki, Kiwako S; Kubo, Takuya; Kudoh, Hiroshi
2017-01-01
In sessile organisms such as plants, spatial genetic structures of populations show long-lasting patterns. These structures have been analyzed across diverse taxa to understand the processes that determine the genetic makeup of organismal populations. For many sessile organisms that mainly propagate via clonal spread, epigenetic status can vary between clonal individuals in the absence of genetic changes. However, fewer previous studies have explored the epigenetic properties in comparison to the genetic properties of natural plant populations. Here, we report the simultaneous evaluation of the spatial structure of genetic and epigenetic variation in a natural population of the clonal plant Cardamine leucantha. We applied a hierarchical Bayesian model to evaluate the effects of membership of a genet (a group of individuals clonally derived from a single seed) and vegetation cover on the epigenetic variation between ramets (clonal plants that are physiologically independent individuals). We sampled 332 ramets in a 20 m × 20 m study plot that contained 137 genets (identified using eight SSR markers). We detected epigenetic variation in DNA methylation at 24 methylation-sensitive amplified fragment length polymorphism (MS-AFLP) loci. There were significant genet effects at all 24 MS-AFLP loci in the distribution of subepiloci. Vegetation cover had no statistically significant effect on variation in the majority of MS-AFLP loci. The spatial aggregation of epigenetic variation is therefore largely explained by the aggregation of ramets that belong to the same genets. By applying hierarchical Bayesian analyses, we successfully identified a number of genet-specific changes in epigenetic status within a natural plant population in a complex context, where genotypes and environmental factors are unevenly distributed. This finding suggests that further studies of the spatial epigenetic structure of natural populations are needed for diverse organisms, particularly sessile clonal species.
Time Frequency Analysis and Spatial Filtering in the Evaluation of Beta ERS After Finger Movement
2001-10-25
IRCCS Fondazione Santa Lucia, via Ardeatina 306, Roma, Italy. Fig. 1: Scheme of the Wavelet Packet decomposition; the gray boxes represent… surface splines. J. Aircraft, 1972, 9: 189-191. [8] Maceri, B., Magnone, S., Bianchi, A., Cerutti, S. Study of the wavelet decomposition of signals [Studio della decomposizione wavelet dei segnali]
A novel method of identifying motor primitives using wavelet decomposition*
Popov, Anton; Olesh, Erienne V.; Yakovenko, Sergiy; Gritsenko, Valeriya
2018-01-01
This study reports a new technique for extracting muscle synergies using the continuous wavelet transform. The method quantifies coincident activation of muscle groups caused by physiological processes of fixed duration, thus enabling the extraction of wavelet modules for arbitrary groups of muscles. Hierarchical clustering and identification of wavelet modules that repeat across subjects and across movements were used to identify consistent muscle synergies. Results indicate that the most frequently repeated wavelet modules comprised combinations of two muscles that are not traditional agonists and that span different joints. We also found that these wavelet modules were flexibly combined across different movement directions in a pattern resembling directional tuning. This method is extendable to multiple frequency domains and signal modalities.
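A hedged sketch of the pipeline: continuous wavelet transform of each EMG channel (via PyWavelets), a wavelet-energy time course per muscle, and hierarchical clustering of muscles whose energy profiles covary. The synthetic EMG, the Morlet wavelet, the scale range, and the two-module cut are all assumptions for illustration.

```python
import numpy as np
import pywt
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(9)
fs, n_muscles, n_samples = 1000, 8, 2000
burst = np.exp(-((np.arange(n_samples) - 1000) / 150.0) ** 2)    # shared activation burst
active = rng.random((n_muscles, 1)) > 0.5                        # which muscles join the burst
emg = rng.normal(size=(n_muscles, n_samples)) * (1 + 3 * burst * active)

scales = np.arange(2, 64)
envelopes = []
for channel in emg:
    coeffs, freqs = pywt.cwt(channel, scales, "morl", sampling_period=1 / fs)
    envelopes.append(np.abs(coeffs).mean(axis=0))                # wavelet-energy time course

# Cluster muscles whose wavelet-energy time courses covary (candidate modules).
Z = linkage(np.array(envelopes), method="average", metric="correlation")
modules = fcluster(Z, t=2, criterion="maxclust")
print("module membership per muscle:", modules)
```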
Visual feature extraction from voxel-weighted averaging of stimulus images in 2 fMRI studies.
Hart, Corey B; Rose, William J
2013-11-01
Multiple studies have provided evidence for distributed object representation in the brain, with several recent experiments leveraging basis function estimates for partial image reconstruction from fMRI data. Using a novel combination of statistical decomposition, generalized linear models, and stimulus averaging on previously examined image sets and Bayesian regression of recorded fMRI activity during presentation of these data sets, we identify a subset of relevant voxels that appear to code for covarying object features. Using a technique we term "voxel-weighted averaging," we isolate image filters that these voxels appear to implement. The results, though very cursory, appear to have significant implications for hierarchical and deep-learning-type approaches toward the understanding of neural coding and representation.
NASA Astrophysics Data System (ADS)
Smilowitz, L.; Henson, B. F.; Romero, J. J.; Asay, B. W.; Saunders, A.; Merrill, F. E.; Morris, C. L.; Kwiatkowski, K.; Grim, G.; Mariam, F.; Schwartz, C. L.; Hogan, G.; Nedrow, P.; Murray, M. M.; Thompson, T. N.; Espinoza, C.; Lewis, D.; Bainbridge, J.; McNeil, W.; Rightley, P.; Marr-Lyon, M.
2012-05-01
We report proton transmission images obtained during direct heating of a sample of PBX 9501 (a plastic bonded formulation of the explosive nitramine octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX)) prior to the ignition of a thermal explosion. We describe the application of proton radiography using the 800 MeV proton accelerator at Los Alamos National Laboratory to obtain transmission images in these thermal explosion experiments. We have obtained images at two spatial magnifications and viewing both the radial and the transverse axes of a solid cylindrical sample encased in aluminum. During heating we observe the slow evolution of proton transmission through the samples, with particular detail during material flow associated with the HMX β-δ phase transition. We also directly observe the loss of solid density to decomposition associated with elevated temperatures in the volume defining the ignition location in these experiments. We measure a diameter associated with this volume of 1-2 mm, in agreement with previous estimations of the diameter using spatially resolved fast thermocouples.
NASA Astrophysics Data System (ADS)
Lu, Lei; Yan, Jihong; Chen, Wanqun; An, Shi
2018-03-01
This paper proposes a novel spatial frequency analysis method for the investigation of potassium dihydrogen phosphate (KDP) crystal surfaces based on an improved bidimensional empirical mode decomposition (BEMD) method. To eliminate end effects of the BEMD method and improve the intrinsic mode functions (IMFs) for efficient identification of texture features, a denoising process was embedded in the sifting iteration of the BEMD method. By removing redundant information from the decomposed sub-components of the KDP crystal surface, the middle spatial frequencies of the cutting and feeding processes were identified. A comparative study with the power spectral density method, the two-dimensional wavelet transform (2D-WT), and the traditional BEMD method demonstrated that the method developed in this paper can efficiently extract texture features and reveal the gradient development of the KDP crystal surface. Furthermore, the proposed method is a self-adaptive, data-driven technique that requires no prior knowledge, overcoming shortcomings of the 2D-WT model such as parameter selection. Additionally, the proposed method is a promising tool for online monitoring and optimal control of precision machining processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volkoff, T. J., E-mail: adidasty@gmail.com
We motivate and introduce a class of “hierarchical” quantum superposition states of N coupled quantum oscillators. Unlike other well-known multimode photonic Schrödinger-cat states such as entangled coherent states, the hierarchical superposition states are characterized as two-branch superpositions of tensor products of single-mode Schrödinger-cat states. In addition to analyzing the photon statistics and quasiprobability distributions of prominent examples of these nonclassical states, we consider their usefulness for high-precision quantum metrology of nonlinear optical Hamiltonians and quantify their mode entanglement. We propose two methods for generating hierarchical superpositions in N = 2 coupled microwave cavities, exploiting currently existing quantum optical technology for generating entanglement between spatially separated electromagnetic field modes.
Spatially coupled catalytic ignition of CO oxidation on Pt: mesoscopic versus nano-scale
Spiel, C.; Vogel, D.; Schlögl, R.; Rupprechter, G.; Suchorski, Y.
2015-01-01
Spatial coupling during catalytic ignition of CO oxidation on μm-sized Pt(hkl) domains of a polycrystalline Pt foil has been studied in situ by PEEM (photoemission electron microscopy) in the 10−5 mbar pressure range. The same reaction has been examined under similar conditions by FIM (field ion microscopy) on nm-sized Pt(hkl) facets of a Pt nanotip. Proper orthogonal decomposition (POD) of the digitized FIM images has been employed to analyze spatiotemporal dynamics of catalytic ignition. The results show the essential role of the sample size and of the morphology of the domain (facet) boundary in the spatial coupling in CO oxidation. PMID:26021411
The spatial equity principle in the administrative division of the Central European countries
Klapka, Pavel; Bačík, Vladimír; Klobučník, Michal
2017-01-01
The paper generally builds on the concept of justice in social science. It attempts to interpret this concept in a geographical and particularly in a spatial context. The paper uses the concept of accessibility to define the principle of spatial equity. The main objective of the paper is to propose an approach with which to assess the level of spatial equity in the administrative division of a territory. In order to fulfil this objective the paper theoretically discusses the concept of spatial equity and relates it to other relevant concepts, such as spatial efficiency. The paper proposes some measures of spatial equity and uses the territory of four Central European countries (Austria, the Czech Republic, Hungary, Slovakia) as example of the application of the proposed measures and the corroboration of the proposed approach. The analysis is based on the administrative division of four countries and is carried out at different hierarchical levels as defined by the Nomenclature of Units for Territorial Statistics (NUTS). PMID:29091953
Mathematical Methods of System Analysis in Construction Materials
NASA Astrophysics Data System (ADS)
Garkina, Irina; Danilov, Alexander
2017-10-01
The system attributes of construction materials are defined: the complexity of the object, the integrity of its set of elements, the existence of essential and stable relations between elements that define the integrative properties of the system, the existence of structure, and so on. On the basis of cognitive modelling of materials as complex systems (intensive and extensive properties; operating parameters) and the construction of a cognitive map, a hierarchical modular structure of quality criteria is built. This structure serves as the basis for the specification used in material development (the required organization and properties). Proceeding from the modern paradigm of materials development (a model for the statement of problems and their solution), levels and modules are identified within the structure of the material. Applying the principles of systems analysis, the technological process can then be treated as a complex system composed of elements at the chosen level of specification, from the atomic level up to individual process steps. Depending on the objective, each element of the system is itself considered a system with more detailed levels of decomposition. Semantic and qualitative analyses of the object are carried out: the research objective, the decomposition levels, the separate elements, and the relations between them are identified. The available knowledge is then formalized in the form of mathematical models (structural identification), and the relations between input and output parameters are determined (parametric identification). Hierarchical structures of quality criteria are constructed for each level, and on this basis the corresponding hierarchical structures of the system (the material) are built. Regularities of structure formation and property development are considered at levels ranging from the micro- to the macrostructure. The mathematical model of the material is represented as a set of models corresponding to partial criteria, which define the separate modules and their levels (mathematical description, solution algorithm). Adequacy is established as the agreement of modelling results with experimental data; it is determined by the level of knowledge of the process and the validity of the accepted assumptions. The global quality criterion of the material is treated as a set of partial criteria (properties). Synthesis of the material is carried out through single-criterion optimization for each of the chosen partial criteria, and the results of the single-criterion optimizations are then used in multicriteria optimization. Methods for developing materials as single-purpose and multi-purpose systems, including systems with conflicting requirements, are indicated. A scheme for the synthesis of composite materials as complex systems is developed. This system approach has been used effectively in the synthesis of composite materials with special properties.
Kristen K. Cecala; John C. Maerz; Brian J. Halstead; John R. Frisch; Ted L. Gragson; Jeffrey Hepinstall-Cymerman; David S. Leigh; C. Rhett Jackson; James T. Peterson; Catherine M. Pringle
2018-01-01
Understanding how factors that vary in spatial scale relate to population abundance is vital to forecasting species responses to environmental change. Stream and river ecosystems are inherently hierarchical, potentially resulting in organismal responses to fine-scale changes in patch characteristics that are conditional on the watershed context. Here, we...
Adding a landscape ecology perspective to conservation and management planning
Kathryn E. Freemark; John R. Probst; John B. Dunning; Sallie J. Hejl
1993-01-01
We briefly review concepts in landscape ecology and discuss their relevance to the conservation and management of neotropical migrant landbirds. We then integrate a landscape perspective into a spatially-hierarchical framework for conservation and management planning for neotropical migrant landbirds (and other biota). The framework outlines a comprehensive approach by...
Background/Question/Methods Many environmental factors influence human mortality simultaneously. However, assessing their cumulative effects remains a challenging task. In this study we used the Environmental Quality Index (EQI), developed by the U.S. EPA, as a measure of overall...
Hierarchical analysis of species distributions and abundance across environmental gradients
Jeffery Diez; Ronald H. Pulliam
2007-01-01
Abiotic and biotic processes operate at multiple spatial and temporal scales to shape many ecological processes, including species distributions and demography. Current debate about the relative roles of niche-based and stochastic processes in shaping species distributions and community composition reflects, in part, the challenge of understanding how these processes...
Bryce A. Richardson; Ned B. Klopfenstein; Steven J. Brunsfeld
2002-01-01
Maternally inherited mitochondrial DNA haplotypes in whitebark pine (Pinus albicaulis Engelm.) were used to examine the maternal genetic structure at three hierarchical spatial scales: fine scale, coarse scale, and interpopulation. These data were used to draw inferences into Clark's nutcracker (Nucifraga columbiana Wilson)...
Disturbance patterns in a socio-ecological system at multiple scales
G. Zurlini; Kurt H. Riitters; N. Zaccarelli; I. Petrosillo; K.B. Jones; L. Rossi
2006-01-01
Ecological systems with hierarchical organization and non-equilibrium dynamics require multiple-scale analyses to comprehend how a system is structured and to formulate hypotheses about regulatory mechanisms. Characteristic scales in real landscapes are determined by, or at least reflect, the spatial patterns and scales of constraining human interactions with the...
Processes occurring within small areas (patch-scale) that influence species richness and spatial heterogeneity of larger areas (landscape-scale) have long been an interest of ecologists. This research focused on the role of patch-scale deterministic chaos arising in phytoplankton...
Predicting plant species diversity in a longleaf pine landscape
L. Katherine Kirkman; P. Charles Goebel; Brian J. Palik; Larry T. West
2004-01-01
In this study, we used a hierarchical, multifactor ecological classification system to examine how spatial patterns of biodiversity develop in one of the most species-rich ecosystems in North America, the fire-maintained longleaf pine-wiregrass ecosystem and associated depressional wetlands and riparian forests. Our goal was to determine which landscape features are...
Drawing the Line between Constituent Structure and Coherence Relations in Visual Narratives
ERIC Educational Resources Information Center
Cohn, Neil; Bender, Patrick
2017-01-01
Theories of visual narrative understanding have often focused on the changes in meaning across a sequence, like shifts in characters, spatial location, and causation, as cues for breaks in the structure of a discourse. In contrast, the theory of visual narrative grammar posits that hierarchic "grammatical" structures operate at the…
In this article we describe an approach for predicting average hourly concentrations of ambient PM10 in Vancouver. We know our solution also applies to hourly ozone fields and believe it may be quite generally applicable. We use a hierarchical Bayesian approach. At the primary ...
Martín-Loeches, M; Hinojosa, J A; Rubia, F J
1999-11-01
The temporal and hierarchical relationships between the dorsal and the ventral streams in selective attention are known only in relation to the use of spatial location as the attentional cue mediated by the dorsal stream. To improve this state of affairs, event-related brain potentials were recorded while subjects attended simultaneously to motion direction (mediated by the dorsal stream) and to a property mediated by the ventral stream (color or shape). At about the same time, a selection positivity (SP) started for attention mediated by both streams. However, the SP for color and shape peaked about 60 ms later than motion SP. Subsequently, a selection negativity (SN) followed by a late positive component (LPC) were found simultaneously for attention mediated by both streams. A hierarchical relationship between the two streams was not observed, but neither SN nor LPC for one property was completely insensitive to the values of the other property.
Unmarked: An R package for fitting hierarchical models of wildlife occurrence and abundance
Fiske, I.J.; Chandler, R.B.
2011-01-01
Ecological research uses data collection techniques that are prone to substantial and unique types of measurement error to address scientific questions about species abundance and distribution. These data collection schemes include a number of survey methods in which unmarked individuals are counted, or determined to be present, at spatially-referenced sites. Examples include site occupancy sampling, repeated counts, distance sampling, removal sampling, and double observer sampling. To appropriately analyze these data, hierarchical models have been developed to separately model explanatory variables of both a latent abundance or occurrence process and a conditional detection process. Because these models have a straightforward interpretation paralleling mechanisms under which the data arose, they have recently gained immense popularity. The common hierarchical structure of these models is well-suited for a unified modeling interface. The R package unmarked provides such a unified modeling framework, including tools for data exploration, model fitting, model criticism, post-hoc analysis, and model comparison.
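As a rough illustration of the hierarchical structure these models share, the sketch below fits a constant-occupancy, constant-detection single-season occupancy model by maximum likelihood in plain Python; it is not the unmarked package itself, and the detection-history matrix is made up.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # inverse-logit

# y: sites x visits detection histories (1 = detected, 0 = not detected); made up.
y = np.array([[1, 0, 1], [0, 0, 0], [1, 1, 0], [0, 0, 0]])

def neg_log_lik(params, y):
    """Negative log-likelihood of a constant-psi, constant-p occupancy model."""
    psi, p = expit(params)          # occupancy and detection probabilities
    det = y.sum(axis=1)             # detections per site
    k = y.shape[1]                  # visits per site
    # Sites with >=1 detection are occupied; all-zero sites are either occupied
    # but never detected, or truly unoccupied (the hierarchical mixture).
    # The binomial coefficient is omitted; it is constant in the parameters.
    lik = np.where(det > 0,
                   psi * p**det * (1 - p)**(k - det),
                   psi * (1 - p)**k + (1 - psi))
    return -np.sum(np.log(lik))

fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(y,))
print("psi, p =", expit(fit.x))
```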
NASA Astrophysics Data System (ADS)
Zeng, Yi; Shen, Zhong-Hui; Shen, Yang; Lin, Yuanhua; Nan, Ce-Wen
2018-03-01
Flexible dielectric polymer films with high energy storage density and high charge-discharge efficiency have been considered as promising materials for electrical power applications. Here, we design hierarchically structured nanocomposite films using the nonlinear polymer poly(vinylidene fluoride-HFP) [P(VDF-HFP)] with inorganic hexagonal boron nitride (h-BN) nanosheets by electrospinning and hot-pressing methods. Our results show that the addition of h-BN nanosheets and the design of the hierarchical multilayer structure in the nanocomposites can remarkably enhance the charge-discharge efficiency and energy density. A high charge-discharge efficiency of 78% and an energy density of 21 J/cm³ can be realized in the 12-layered PVDF/h-BN nanocomposite films. Phase-field simulation results reveal that the spatial distribution of the electric field in these hierarchically structured films affects the charge-discharge efficiency and energy density. This work provides a feasible route, i.e., structure modulation, to improve the energy storage performance of nanocomposite films.
Analysis and visualization of single-trial event-related potentials
NASA Technical Reports Server (NTRS)
Jung, T. P.; Makeig, S.; Westerfield, M.; Townsend, J.; Courchesne, E.; Sejnowski, T. J.
2001-01-01
In this study, a linear decomposition technique, independent component analysis (ICA), is applied to single-trial multichannel EEG data from event-related potential (ERP) experiments. Spatial filters derived by ICA blindly separate the input data into a sum of temporally independent and spatially fixed components arising from distinct or overlapping brain or extra-brain sources. Both the data and their decomposition are displayed using a new visualization tool, the "ERP image," that can clearly characterize single-trial variations in the amplitudes and latencies of evoked responses, particularly when sorted by a relevant behavioral or physiological variable. These tools were used to analyze data from a visual selective attention experiment on 28 control subjects plus 22 neurological patients whose EEG records were heavily contaminated with blink and other eye-movement artifacts. Results show that ICA can separate artifactual, stimulus-locked, response-locked, and non-event-related background EEG activities into separate components, a taxonomy not obtained from conventional signal averaging approaches. This method allows: (1) removal of pervasive artifacts of all types from single-trial EEG records, (2) identification and segregation of stimulus- and response-locked EEG components, (3) examination of differences in single-trial responses, and (4) separation of temporally distinct but spatially overlapping EEG oscillatory activities with distinct relationships to task events. The proposed methods also allow the interaction between ERPs and the ongoing EEG to be investigated directly. We studied the between-subject component stability of ICA decomposition of single-trial EEG epochs by clustering components with similar scalp maps and activation power spectra. Components accounting for blinks, eye movements, temporal muscle activity, event-related potentials, and event-modulated alpha activities were largely replicated across subjects. Applying ICA and ERP image visualization to the analysis of sets of single trials from event-related EEG (or MEG) experiments can increase the information available from ERP (or ERF) data. Copyright 2001 Wiley-Liss, Inc.
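A minimal sketch of the general ICA workflow described here, using scikit-learn's FastICA on synthetic data as a stand-in (the study used a different ICA implementation); the channel count, the back-projection step, and the artifact component index are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Synthetic stand-in for concatenated single-trial EEG: (time points, channels).
rng = np.random.default_rng(0)
n_samples, n_channels = 5000, 32
X = rng.standard_normal((n_samples, n_channels))

# FastICA is used here only as a stand-in for the ICA variant used in the study.
ica = FastICA(n_components=n_channels, random_state=0, max_iter=1000)
S = ica.fit_transform(X)        # temporally independent component activations
A = ica.mixing_                 # columns = fixed spatial patterns (scalp maps)

# Remove an artifact component (e.g., blinks) by zeroing it and back-projecting.
artifact_idx = 0                # hypothetical index identified by inspection
S_clean = S.copy()
S_clean[:, artifact_idx] = 0.0
X_clean = S_clean @ A.T + ica.mean_   # artifact-corrected channel data
```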
Validation of Distributed Soil Moisture: Airborne Polarimetric SAR vs. Ground-based Sensor Networks
NASA Astrophysics Data System (ADS)
Jagdhuber, T.; Kohling, M.; Hajnsek, I.; Montzka, C.; Papathanassiou, K. P.
2012-04-01
The knowledge of spatially distributed soil moisture is highly desirable for an enhanced hydrological modeling in terms of flood prevention and for yield optimization in combination with precision farming. Especially in mid-latitudes, the growing agricultural vegetation results in an increasing soil coverage along the crop cycle. For a remote sensing approach, this vegetation influence has to be separated from the soil contribution within the resolution cell to extract the actual soil moisture. Therefore a hybrid decomposition was developed for estimation of soil moisture under vegetation cover using fully polarimetric SAR data. The novel polarimetric decomposition combines a model-based decomposition, separating the volume component from the ground components, with an eigen-based decomposition of the two ground components into a surface and a dihedral scattering contribution. Hence, this hybrid decomposition, which is based on [1,2], establishes an innovative way to retrieve soil moisture under vegetation. The developed inversion algorithm for soil moisture under vegetation cover is applied on fully polarimetric data of the TERENO campaign, conducted in May and June 2011 for the Rur catchment within the Eifel/Lower Rhine Valley Observatory. The fully polarimetric SAR data were acquired in high spatial resolution (range: 1.92m, azimuth: 0.6m) by DLR's novel F-SAR sensor at L-band. The inverted soil moisture product from the airborne SAR data is validated with corresponding distributed ground measurements for a quality assessment of the developed algorithm. The in situ measurements were obtained on the one hand by mobile FDR probes from agricultural fields near the towns of Merzenhausen and Selhausen incorporating different crop types and on the other hand by distributed wireless sensor networks (SoilNet clusters) from a grassland test site (near the town of Rollesbroich) and from a forest stand (within the Wüstebach sub-catchment). Each SoilNet cluster incorporates around 150 wireless measuring devices on a grid of approximately 30ha for distributed soil moisture sensing. Finally, the comparison of both distributed soil moisture products results in a discussion on potentials and limitations for obtaining soil moisture under vegetation cover with high resolution fully polarimetric SAR. [1] S.R. Cloude, Polarisation: applications in remote sensing. Oxford, Oxford University Press, 2010. [2] Jagdhuber, T., Hajnsek, I., Papathanassiou, K.P. and Bronstert, A.: A Hybrid Decomposition for Soil Moisture Estimation under Vegetation Cover Using Polarimetric SAR. Proc. of the 5th International Workshop on Science and Applications of SAR Polarimetry and Polarimetric Interferometry, ESA-ESRIN, Frascati, Italy, January 24-28, 2011, p.1-6.
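A heavily simplified sketch of the hybrid idea described above: an assumed random-volume contribution is subtracted from a Pauli-basis coherency matrix, and the ground remainder is eigendecomposed into two mechanisms. The volume power f_v, the volume model, and the example matrix are assumptions, and the actual inversion to soil moisture is omitted.

```python
import numpy as np

def hybrid_ground_decomposition(T, f_v):
    """Remove an assumed random-volume contribution from a 3x3 Pauli coherency
    matrix, then eigendecompose the ground remainder into two mechanisms."""
    T_vol = (f_v / 4.0) * np.diag([2.0, 1.0, 1.0])   # assumed random-volume model
    T_ground = T - T_vol
    w, V = np.linalg.eigh(T_ground)                  # ascending eigenvalues
    # The two dominant eigenvectors stand in for surface and dihedral scattering.
    return w[::-1][:2], V[:, ::-1][:, :2]

# Hypothetical measured coherency matrix (Hermitian, positive semidefinite).
T = np.array([[0.6, 0.1 + 0.05j, 0.0],
              [0.1 - 0.05j, 0.3, 0.0],
              [0.0, 0.0, 0.2]])
ground_powers, ground_mechanisms = hybrid_ground_decomposition(T, f_v=0.4)
print(ground_powers)
```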
NASA Astrophysics Data System (ADS)
Pisana, Francesco; Henzler, Thomas; Schönberg, Stefan; Klotz, Ernst; Schmidt, Bernhard; Kachelrieß, Marc
2017-03-01
Dynamic CT perfusion acquisitions are intrinsically high-dose examinations, due to repeated scanning. To keep radiation dose under control, relatively noisy images are acquired. Noise is then further amplified during the extraction of functional parameters from the post-processing of the time attenuation curves (TACs) of the voxels, and normally some smoothing filter must be employed to better visualize any perfusion abnormality, at the cost of spatial resolution. In this study we propose a new method to detect perfusion abnormalities that preserves both high spatial resolution and high CNR. To do this, we first perform a singular value decomposition (SVD) of the original noisy spatio-temporal data matrix to extract basis functions of the TACs. We then iteratively cluster the voxels based on a smoothed version of the three most significant singular vectors. Finally, we create high-spatial-resolution 3D volumes in which each voxel is assigned a distance from the centroid of each cluster, showing how functionally similar each voxel is to the others. The method was tested on three noisy clinical datasets: one brain perfusion case with an occlusion of the left internal carotid artery, one healthy brain perfusion case, and one liver case with an enhancing lesion. Our method successfully detected all perfusion abnormalities with higher spatial precision than the functional maps obtained with a commercially available software package. We conclude that this method could be employed to give a rapid qualitative indication of functional abnormalities in low-dose dynamic CT perfusion datasets. The method appears to be robust with respect to both spatial and temporal noise and does not require any special a priori assumption. While more robust to noise and offering higher spatial resolution and CNR than the functional maps, our method is not quantitative; a potential use in clinical routine could be as a second reader to assist in the evaluation of the maps, or to guide smoothing of the dataset before the modeling step.
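A simplified sketch of the pipeline described above (SVD of the voxel-by-time matrix, clustering on smoothed leading components, and per-voxel distances to the cluster centroids); the synthetic data, the 1-D smoothing stand-in, and the cluster count are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.cluster import KMeans

# Hypothetical dynamic CT perfusion data: V voxels x T time points (TACs).
rng = np.random.default_rng(1)
V, T = 2000, 30
tacs = rng.standard_normal((V, T))

# Step 1: SVD of the spatio-temporal matrix; keep the 3 leading components.
U, s, Vt = np.linalg.svd(tacs, full_matrices=False)
scores = U[:, :3] * s[:3]                 # per-voxel loadings on the top 3 bases

# Step 2: smooth the per-voxel scores (a simple 1-D stand-in for the spatial
# smoothing described in the paper) and cluster the voxels on the smoothed scores.
scores_smooth = gaussian_filter1d(scores, sigma=2, axis=0)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores_smooth)

# Step 3: distance of each (unsmoothed) voxel score to every cluster centroid,
# giving full-resolution "functional similarity" maps.
centroids = np.array([scores[labels == k].mean(axis=0) for k in range(4)])
dist_maps = np.linalg.norm(scores[:, None, :] - centroids[None, :, :], axis=2)
print(dist_maps.shape)   # (V, n_clusters)
```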
Spatial distribution of enzyme activities along the root and in the rhizosphere of different plants
NASA Astrophysics Data System (ADS)
Razavi, Bahar S.; Zarebanadkouki, Mohsen; Blagodatskaya, Evgenia; Kuzyakov, Yakov
2015-04-01
Extracellular enzymes are important for the decomposition of many biological macromolecules abundant in soil, such as cellulose, hemicelluloses and proteins. Activities of enzymes produced by both plant roots and microbes are the primary biological drivers of organic matter decomposition and nutrient cycling. So far, the acquisition of in situ data on the local activity of different enzymes in soil has been challenging, which is why there is an urgent need for spatially explicit methods such as 2-D zymography to determine the variation of enzyme activities along the roots of different plants. Here, we further developed the zymography technique in order to quantitatively visualize enzyme activities (Spohn and Kuzyakov, 2013) at a better spatial resolution. We grew maize (Zea mays L.) and lentil (Lens culinaris) in rhizoboxes under optimum conditions for 21 days to study the spatial distribution of enzyme activity in soil and along roots. We visualized the 2D distribution of the activity of three enzymes, β-glucosidase, leucine aminopeptidase and phosphatase, using fluorogenically labelled substrates. The spatial resolution of the fluorescent images was improved by direct application of a substrate-saturated membrane to the soil-root system. The newly developed direct zymography revealed different patterns of the spatial distribution of enzyme activity along the roots and in the soil of the two plants. We observed a uniform distribution of enzyme activities along the root system of lentil, whereas the root system of maize showed inhomogeneous enzyme activities, with the apical part of an individual root (the root tip) showing the highest activity. The activity of all enzymes was highest in the vicinity of the roots and decreased towards the bulk soil. Spatial patterns of enzyme activities as a function of distance from the root surface were enzyme specific, with the largest extent for phosphatase. We conclude that improved zymography is a promising in situ technique to analyze, visualize and quantify the spatial distribution of enzyme activities in rhizosphere hotspots. References: Spohn, M., Kuzyakov, Y., 2013. Phosphorus mineralization can be driven by microbial need for carbon. Soil Biology & Biochemistry 61: 69-75.
The Semantic Retrieval of Spatial Data Service Based on Ontology in SIG
NASA Astrophysics Data System (ADS)
Sun, S.; Liu, D.; Li, G.; Yu, W.
2011-08-01
Research on SIG (Spatial Information Grid) mainly addresses the problem of how to connect different computing resources so that users can use all the resources in the Grid transparently and seamlessly. In SIG, spatial data services are described by several kinds of specifications, which use different meta-information for each kind of service. This kind of standardization cannot resolve the problem of semantic heterogeneity, which may prevent users from obtaining the required resources. This paper attempts to resolve two kinds of semantic heterogeneity (name heterogeneity and structure heterogeneity) in spatial data service retrieval based on an ontology; in addition, using the hierarchical subsumption relationships among concepts in the ontology, query terms can be expanded so that more resources can be matched and found for the user. These applications of ontology to spatial data resource retrieval help improve keyword matching and find more related resources.
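A toy sketch of subsumption-based query expansion over a hypothetical concept hierarchy; a real SIG catalogue would use a formal ontology (e.g., OWL) and service metadata rather than this dictionary.

```python
# Toy concept hierarchy (parent -> children); names are illustrative only.
ontology = {
    "spatial data service": ["map service", "coverage service"],
    "map service": ["WMS"],
    "coverage service": ["WCS"],
}

def expand_query(term, ontology):
    """Expand a query term with all concepts it subsumes (transitive closure)."""
    expanded, stack = {term}, [term]
    while stack:
        concept = stack.pop()
        for child in ontology.get(concept, []):
            if child not in expanded:
                expanded.add(child)
                stack.append(child)
    return expanded

print(expand_query("spatial data service", ontology))
# {'spatial data service', 'map service', 'coverage service', 'WMS', 'WCS'}
```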
Spatial Harmonic Decomposition as a tool for unsteady flow phenomena analysis
NASA Astrophysics Data System (ADS)
Duparchy, A.; Guillozet, J.; De Colombel, T.; Bornard, L.
2014-03-01
Hydropower is already the largest single source of renewable electricity today, but its further development will face new deployment constraints, such as large-scale projects in emerging economies and the growth of intermittent renewable energy technologies. The potential role of hydropower as a grid stabilizer leads to operating hydro power plants in "off-design" zones. As a result, new methods for analyzing the associated unsteady phenomena are needed to improve the design of hydraulic turbines. The key idea of the development is to compute a spatial description of a phenomenon from a combination of several sensor signals. Spatial harmonic decomposition (SHD) extends the concept of so-called synchronous and asynchronous pulsations by projecting sensor signals onto a linearly independent set of modal shapes. This mathematical approach is very generic, as it can be applied to any distribution of a scalar quantity defined along a closed curve. After a mathematical description of SHD, this paper discusses the impact of instrumentation and provides tools to interpret SHD signals. Then, as an example of a practical application, SHD is applied to model test measurements in order to capture and describe dynamic pressure fields. In particular, the spatial description of the phenomena provides new tools to separate the part of the pressure fluctuations that contributes to output power instability or mechanical stresses. The study of machine stability in the partial-load operating range in turbine mode and the comparison between the gap pressure field and the radial thrust behavior during turbine-brake operation are both relevant illustrations of the contribution of SHD.
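A minimal sketch of the SHD idea: signals from pressure sensors placed around a closed curve are projected, in a least-squares sense, onto spatial harmonics e^{ikθ}, with mode k = 0 corresponding to the synchronous pulsation and |k| > 0 to asynchronous/rotating patterns. The sensor layout and the synthetic k = 2 pressure pattern are assumptions.

```python
import numpy as np

# Hypothetical ring of pressure sensors at angles theta_j (radians) on a closed
# curve, each recording p_j(t); p has shape (n_sensors, n_times).
n_sensors, n_times = 8, 1024
theta = 2 * np.pi * np.arange(n_sensors) / n_sensors
t = np.linspace(0.0, 1.0, n_times)
p = np.cos(2 * theta)[:, None] * np.sin(2 * np.pi * 30 * t)[None, :]  # a k=2 pattern

def shd(p, theta, k_max):
    """Least-squares projection of sensor signals onto spatial harmonics e^{ik*theta}."""
    ks = np.arange(-k_max, k_max + 1)
    basis = np.exp(1j * np.outer(theta, ks))             # (n_sensors, n_modes)
    coeffs, *_ = np.linalg.lstsq(basis, p, rcond=None)   # (n_modes, n_times)
    return ks, coeffs

ks, coeffs = shd(p, theta, k_max=3)
dominant = ks[np.argmax(np.abs(coeffs).mean(axis=1))]
print(dominant)   # prints -2 or 2: the cos(2*theta) pattern splits over k = ±2
```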