DOE Office of Scientific and Technical Information (OSTI.GOV)
Perdikaris, Paris, E-mail: parisp@mit.edu; Grinberg, Leopold, E-mail: leopoldgrinberg@us.ibm.com; Karniadakis, George Em, E-mail: george-karniadakis@brown.edu
The aim of this work is to present an overview of recent advances in multi-scale modeling of brain blood flow. In particular, we present some approaches that enable the in silico study of multi-scale and multi-physics phenomena in the cerebral vasculature. We discuss the formulation of continuum and atomistic modeling approaches, present a consistent framework for their concurrent coupling, and list some of the challenges that one needs to overcome in achieving a seamless and scalable integration of heterogeneous numerical solvers. The effectiveness of the proposed framework is demonstrated in a realistic case involving modeling the thrombus formation process taking place on the wall of a patient-specific cerebral aneurysm. This highlights the ability of multi-scale algorithms to resolve important biophysical processes that span several spatial and temporal scales, potentially yielding new insight into the key aspects of brain blood flow in health and disease. Finally, we discuss open questions in multi-scale modeling and emerging topics of future research.
Multiscale modeling and simulation of brain blood flow
NASA Astrophysics Data System (ADS)
Perdikaris, Paris; Grinberg, Leopold; Karniadakis, George Em
2016-02-01
The aim of this work is to present an overview of recent advances in multi-scale modeling of brain blood flow. In particular, we present some approaches that enable the in silico study of multi-scale and multi-physics phenomena in the cerebral vasculature. We discuss the formulation of continuum and atomistic modeling approaches, present a consistent framework for their concurrent coupling, and list some of the challenges that one needs to overcome in achieving a seamless and scalable integration of heterogeneous numerical solvers. The effectiveness of the proposed framework is demonstrated in a realistic case involving modeling the thrombus formation process taking place on the wall of a patient-specific cerebral aneurysm. This highlights the ability of multi-scale algorithms to resolve important biophysical processes that span several spatial and temporal scales, potentially yielding new insight into the key aspects of brain blood flow in health and disease. Finally, we discuss open questions in multi-scale modeling and emerging topics of future research.
NASA Astrophysics Data System (ADS)
Fei, Peng; Lee, Juhyun; Packard, René R. Sevag; Sereti, Konstantina-Ioanna; Xu, Hao; Ma, Jianguo; Ding, Yichen; Kang, Hanul; Chen, Harrison; Sung, Kevin; Kulkarni, Rajan; Ardehali, Reza; Kuo, C.-C. Jay; Xu, Xiaolei; Ho, Chih-Ming; Hsiai, Tzung K.
2016-03-01
Light Sheet Fluorescence Microscopy (LSFM) enables multi-dimensional and multi-scale imaging by illuminating specimens with a separate thin sheet of laser light. It allows rapid plane illumination for reduced photo-damage and superior axial resolution and contrast. We hereby demonstrate cardiac LSFM (c-LSFM) imaging to assess the functional architecture of zebrafish embryos with a retrospective cardiac synchronization algorithm for four-dimensional reconstruction (3-D space + time). By combining our approach with tissue clearing techniques, we reveal the entire cardiac structures and hypertrabeculation of adult zebrafish hearts in response to doxorubicin treatment. By integrating the resolution enhancement technique with c-LSFM to increase the resolving power under a large field-of-view, we demonstrate the use of a low-power objective to resolve the entire architecture of large-scale neonatal mouse hearts, revealing the helical orientation of individual myocardial fibers. Therefore, our c-LSFM imaging approach provides multi-scale visualization of architecture and function to drive cardiovascular research with translational implications in congenital heart diseases.
Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media
NASA Astrophysics Data System (ADS)
Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo
2016-04-01
The proposed methodology was originally developed by our scientific team in Split, who designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multi-resolution approach are: 1) the computational capabilities of Fup basis functions with compact support, capable of resolving all spatial and temporal scales; 2) multi-resolution representation of heterogeneity as well as of all other input and output variables; 3) an accurate, adaptive and efficient strategy; and 4) semi-analytical properties which increase our understanding of the usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is analyzed separately, and the adaptive and multi-scale nature of the methodology enables not only computational efficiency and accuracy, but also a description of subsurface processes closely tied to their physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we show recent improvements within the proposed methodology. Since "state of the art" multiresolution approaches usually use the method of lines and only a spatially adaptive procedure, the temporal approximation has rarely been treated as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step; the algorithm uses smaller time steps only in lines where solution changes are intensive. The use of Fup basis functions enables continuous time approximation, simple interpolation across different temporal lines, and local time stepping control. A critical aspect of time-integration accuracy is the construction of the spatial stencil for accurate calculation of spatial derivatives. Since the common approach applied to wavelets and splines uses a finite difference operator, we develop here a collocation operator that includes solution values and the differential operator. In this way, the new improved algorithm is adaptive in space and time, enabling accurate solutions of groundwater flow problems, especially in highly heterogeneous porous media with large lnK variances and different correlation length scales. In addition, differences between the collocation and finite volume approaches are discussed. Finally, results show the application of the methodology to groundwater flow problems in highly heterogeneous confined and unconfined aquifers.
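The adaptive idea above, separately finding the minimum number of basis functions needed to represent each variable to a given accuracy, can be illustrated compactly. The sketch below uses standard Daubechies wavelets from the `pywt` package as a stand-in for the Fup basis functions (which are not publicly available); the synthetic field, tolerance, and decomposition depth are illustrative assumptions.

```python
# Illustrative sketch: adaptive multiresolution compression of a 1-D
# heterogeneous lnK field, keeping only the coefficients needed to meet a
# target accuracy. Standard wavelets stand in for the Fup basis.
import numpy as np
import pywt

rng = np.random.default_rng(0)
lnK = np.cumsum(rng.normal(0.0, 0.5, 1024))      # synthetic correlated lnK field

coeffs = pywt.wavedec(lnK, 'db4', level=6)       # multiresolution decomposition

tol = 0.05 * np.max(np.abs(lnK))                 # assumed accuracy target
kept, total = 0, 0
thresholded = []
for c in coeffs:
    mask = np.abs(c) >= tol                      # retain only significant basis functions
    kept += int(mask.sum()); total += c.size
    thresholded.append(np.where(mask, c, 0.0))

approx = pywt.waverec(thresholded, 'db4')[: lnK.size]
err = np.max(np.abs(approx - lnK))
print(f"retained {kept}/{total} coefficients, max error {err:.3f}")
```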
Scalable Visual Analytics of Massive Textual Datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnan, Manoj Kumar; Bohn, Shawn J.; Cowley, Wendy E.
2007-04-01
This paper describes the first scalable implementation of a text processing engine used in visual analytics tools. These tools aid information analysts in interacting with and understanding large textual information content through visual interfaces. By developing a parallel implementation of the text processing engine, we enabled visual analytics tools to exploit cluster architectures and handle massive datasets. The paper describes the key elements of our parallelization approach and demonstrates virtually linear scaling when processing multi-gigabyte data sets such as PubMed. This approach enables interactive analysis of large datasets beyond the capabilities of existing state-of-the-art visual analytics tools.
A critical review of nanotechnologies for composite aerospace structures
NASA Astrophysics Data System (ADS)
Kostopoulos, Vassilis; Masouras, Athanasios; Baltopoulos, Athanasios; Vavouliotis, Antonios; Sotiriadis, George; Pambaguian, Laurent
2017-03-01
Over the past decade, extensive efforts have been invested in understanding the nano-scale and revealing the capabilities offered by nanotechnology products to structural materials. Integration of nano-particles into fiber composites leads to multi-scale reinforced composites and has opened up a wide new range of multi-functional materials in industry. In this direction, a variety of carbon-based nano-fillers has been proposed and employed, individually or in combination in hybrid forms, to approach the desired performance. Nevertheless, a major issue, faced more seriously of late owing to industrial interest, is how to incorporate these nano-species into the final composite structure through existing manufacturing processes and infrastructure. This interest originates from several industrial application needs that require the development of new multi-functional materials combining enhanced mechanical, electrical and thermal properties. In this work, an attempt is made to review the most representative processes and related performances reported in the literature, and the experience obtained with nano-enabling technologies for fiber composite materials. The review focuses on the two main composite manufacturing technologies used by the aerospace industry: Prepreg/Autoclave and Resin Transfer technologies. It addresses several approaches for nano-enabling of composites along these two routes and reports the latest results on the performance of nano-enabled fiber reinforced composites extracted from the literature. Finally, this review identifies the gap between available nano-technology integration routes and established industrial composite manufacturing techniques, and the challenges in raising the Technology Readiness Level to meet the demands of aerospace industry applications.
Spatial scaling and multi-model inference in landscape genetics: Martes americana in northern Idaho
Tzeidle N. Wasserman; Samuel A. Cushman; Michael K. Schwartz; David O. Wallin
2010-01-01
Individual-based analyses relating landscape structure to genetic distances across complex landscapes enable rigorous evaluation of multiple alternative hypotheses linking landscape structure to gene flow. We utilize two extensions to increase the rigor of the individual-based causal modeling approach to inferring relationships between landscape patterns and gene flow...
Metric Evaluation Pipeline for 3D Modeling of Urban Scenes
NASA Astrophysics Data System (ADS)
Bosch, M.; Leichtman, A.; Chilcott, D.; Goldberg, H.; Brown, M.
2017-05-01
Publicly available benchmark data and metric evaluation approaches have been instrumental in enabling research to advance state-of-the-art methods for remote sensing applications in urban 3D modeling. Most publicly available benchmark datasets have consisted of high-resolution airborne imagery and lidar suitable for 3D modeling on a relatively modest scale. To enable research in larger-scale 3D mapping, we have recently released a public benchmark dataset with multi-view commercial satellite imagery and metrics to compare 3D point clouds with lidar ground truth. We now define a more complete metric evaluation pipeline, developed as publicly available open source software, to assess semantically labeled 3D models of complex urban scenes derived from multi-view commercial satellite imagery. Evaluation metrics in our pipeline include horizontal and vertical accuracy and completeness, volumetric completeness and correctness, perceptual quality, and model simplicity. Sources of ground truth include airborne lidar and overhead imagery, and we demonstrate a semi-automated process for producing accurate ground truth shape files to characterize building footprints. We validate our current metric evaluation pipeline using 3D models produced using open source multi-view stereo methods. Data and software are made publicly available to enable further research and planned benchmarking activities.
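For a flavor of how completeness and correctness metrics of this kind are typically computed against lidar ground truth, consider the nearest-neighbor sketch below. It is a generic illustration, not the authors' released pipeline, and the distance tolerance is an assumed value.

```python
# Hedged sketch: point-cloud completeness and correctness against lidar
# ground truth via nearest-neighbor distances within a tolerance.
import numpy as np
from scipy.spatial import cKDTree

def completeness_correctness(model_pts, truth_pts, tol=1.0):
    """Fraction of truth points near the model (completeness) and of
    model points near the truth (correctness), within `tol` meters."""
    d_truth, _ = cKDTree(model_pts).query(truth_pts)   # truth -> model
    d_model, _ = cKDTree(truth_pts).query(model_pts)   # model -> truth
    return (d_truth <= tol).mean(), (d_model <= tol).mean()

rng = np.random.default_rng(1)
truth = rng.uniform(0, 100, (5000, 3))                 # synthetic lidar points
model = truth + rng.normal(0, 0.3, truth.shape)        # noisy reconstruction
print(completeness_correctness(model, truth, tol=1.0))
```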
3D Printed Bionic Nanodevices.
Kong, Yong Lin; Gupta, Maneesh K; Johnson, Blake N; McAlpine, Michael C
2016-06-01
The ability to three-dimensionally interweave biological and functional materials could enable the creation of bionic devices possessing unique and compelling geometries, properties, and functionalities. Indeed, interfacing high performance active devices with biology could impact a variety of fields, including regenerative bioelectronic medicines, smart prosthetics, medical robotics, and human-machine interfaces. Biology, from the molecular scale of DNA and proteins, to the macroscopic scale of tissues and organs, is three-dimensional, often soft and stretchable, and temperature sensitive. This renders most biological platforms incompatible with the fabrication and materials processing methods that have been developed and optimized for functional electronics, which are typically planar, rigid and brittle. A number of strategies have been developed to overcome these dichotomies. One particularly novel approach is the use of extrusion-based multi-material 3D printing, which is an additive manufacturing technology that offers a freeform fabrication strategy. This approach addresses the dichotomies presented above by (1) using 3D printing and imaging for customized, hierarchical, and interwoven device architectures; (2) employing nanotechnology as an enabling route for introducing high performance materials, with the potential for exhibiting properties not found in the bulk; and (3) 3D printing a range of soft and nanoscale materials to enable the integration of a diverse palette of high quality functional nanomaterials with biology. Further, 3D printing is a multi-scale platform, allowing for the incorporation of functional nanoscale inks, the printing of microscale features, and ultimately the creation of macroscale devices. This blending of 3D printing, novel nanomaterial properties, and 'living' platforms may enable next-generation bionic systems. In this review, we highlight this synergistic integration of the unique properties of nanomaterials with the versatility of extrusion-based 3D printing technologies to interweave nanomaterials and fabricate novel bionic devices.
Kong, Yong Lin; Gupta, Maneesh K.; Johnson, Blake N.; McAlpine, Michael C.
2016-01-01
The ability to three-dimensionally interweave biological and functional materials could enable the creation of bionic devices possessing unique and compelling geometries, properties, and functionalities. Indeed, interfacing high performance active devices with biology could impact a variety of fields, including regenerative bioelectronic medicines, smart prosthetics, medical robotics, and human-machine interfaces. Biology, from the molecular scale of DNA and proteins, to the macroscopic scale of tissues and organs, is three-dimensional, often soft and stretchable, and temperature sensitive. This renders most biological platforms incompatible with the fabrication and materials processing methods that have been developed and optimized for functional electronics, which are typically planar, rigid and brittle. A number of strategies have been developed to overcome these dichotomies. One particularly novel approach is the use of extrusion-based multi-material 3D printing, which is an additive manufacturing technology that offers a freeform fabrication strategy. This approach addresses the dichotomies presented above by (1) using 3D printing and imaging for customized, hierarchical, and interwoven device architectures; (2) employing nanotechnology as an enabling route for introducing high performance materials, with the potential for exhibiting properties not found in the bulk; and (3) 3D printing a range of soft and nanoscale materials to enable the integration of a diverse palette of high quality functional nanomaterials with biology. Further, 3D printing is a multi-scale platform, allowing for the incorporation of functional nanoscale inks, the printing of microscale features, and ultimately the creation of macroscale devices. This blending of 3D printing, novel nanomaterial properties, and ‘living’ platforms may enable next-generation bionic systems. In this review, we highlight this synergistic integration of the unique properties of nanomaterials with the versatility of extrusion-based 3D printing technologies to interweave nanomaterials and fabricate novel bionic devices. PMID:27617026
Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D
2015-05-08
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
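The fusion rule itself is compact once mode-aligned IMFs are available. The sketch below assumes the output of a multivariate EMD step (MEMD is not part of standard Python libraries, so the input array here is a placeholder for that decomposition) and fuses the aligned scales with an illustrative maximum-local-energy rule.

```python
# Conceptual sketch of the fusion rule only. The `imfs` array stands in for
# the output of a MEMD implementation and is assumed to hold mode-aligned
# IMFs of shape (n_imfs, n_images, H, W).
import numpy as np

def fuse_memd(imfs):
    """Pixel-wise fusion: at each scale keep the IMF sample from the image
    with the largest local energy, then sum the fused scales."""
    n_imfs, n_img, H, W = imfs.shape
    fused = np.zeros((H, W))
    for k in range(n_imfs):
        energy = imfs[k] ** 2                  # (n_img, H, W)
        winner = np.argmax(energy, axis=0)     # winning source image per pixel
        rows, cols = np.indices((H, W))
        fused += imfs[k][winner, rows, cols]
    return fused

rng = np.random.default_rng(0)
imfs = rng.normal(size=(4, 3, 32, 32))         # 4 scales, 3 input images
print(fuse_memd(imfs).shape)
```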
Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P.; McDonald-Maier, Klaus D.
2015-01-01
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences. PMID:26007714
NASA Astrophysics Data System (ADS)
Pokhrel, A.; El Hannach, M.; Orfino, F. P.; Dutta, M.; Kjeang, E.
2016-10-01
X-ray computed tomography (XCT), a non-destructive technique, is proposed for three-dimensional, multi-length scale characterization of complex failure modes in fuel cell electrodes. Comparative tomography data sets are acquired for a conditioned beginning of life (BOL) and a degraded end of life (EOL) membrane electrode assembly subjected to cathode degradation by voltage cycling. Micro length scale analysis shows a five-fold increase in crack size and a 57% thickness reduction in the EOL cathode catalyst layer, indicating widespread action of carbon corrosion. Complementary nano length scale analysis shows a significant reduction in porosity, increased pore size, and dramatically reduced effective diffusivity within the remaining porous structure of the catalyst layer at EOL. Collapse of the structure is evident from the combination of thinning and reduced porosity, as uniquely determined by the multi-length scale approach. Additionally, a novel image processing based technique developed for nano scale segregation of pore, ionomer, and Pt/C dominated voxels shows an increase in ionomer volume fraction, Pt/C agglomerates, and severe carbon corrosion at the catalyst layer/membrane interface at EOL. In summary, XCT-based multi-length scale analysis provides the detailed information needed for a comprehensive understanding of the complex failure modes observed in fuel cell electrodes.
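As a minimal illustration of the voxel-level metrics reported above, the sketch below computes porosity from an already-segmented label volume; the label convention, random volumes, and the Bruggeman diffusivity approximation are assumptions, and real XCT segmentation is far more involved.

```python
# Minimal sketch, assuming the catalyst layer has been segmented into labeled
# voxels (0 = pore, 1 = ionomer, 2 = Pt/C).
import numpy as np

def porosity(labels):
    return float(np.mean(labels == 0))

def eff_diffusivity_ratio(eps):
    return eps ** 1.5            # Bruggeman approximation, D_eff / D_bulk

rng = np.random.default_rng(2)
bol = rng.choice([0, 1, 2], size=(64, 64, 64), p=[0.45, 0.25, 0.30])
eol = rng.choice([0, 1, 2], size=(64, 64, 64), p=[0.30, 0.35, 0.35])
print(f"BOL porosity {porosity(bol):.2f} -> EOL porosity {porosity(eol):.2f}")
print(f"D_eff ratio drops to {eff_diffusivity_ratio(porosity(eol)):.2f}")
```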
Improving resolution of dynamic communities in human brain networks through targeted node removal
Turner, Benjamin O.; Miller, Michael B.; Carlson, Jean M.
2017-01-01
Current approaches to dynamic community detection in complex networks can fail to identify multi-scale community structure, or to resolve key features of community dynamics. We propose a targeted node removal technique to improve the resolution of community detection. Using synthetic oscillator networks with well-defined “ground truth” communities, we quantify the community detection performance of a common modularity maximization algorithm. We show that the performance of the algorithm on communities of a given size deteriorates when these communities are embedded in multi-scale networks with communities of different sizes, compared to the performance in a single-scale network. We demonstrate that targeted node removal during community detection improves performance on multi-scale networks, particularly when removing the most functionally cohesive nodes. Applying this approach to network neuroscience, we compare dynamic functional brain networks derived from fMRI data taken during both repetitive single-task and varied multi-task experiments. After the removal of regions in visual cortex, the most coherent functional brain area during the tasks, community detection is better able to resolve known functional brain systems into communities. In addition, node removal enables the algorithm to distinguish clear differences in brain network dynamics between these experiments, revealing task-switching behavior that was not identified with the visual regions present in the network. These results indicate that targeted node removal can improve spatial and temporal resolution in community detection, and they demonstrate a promising approach for comparison of network dynamics between neuroscientific data sets with different resolution parameters. PMID:29261662
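A minimal sketch of the targeted-removal idea, using `networkx` and a stock modularity algorithm on a stand-in weighted graph; node "cohesiveness" is proxied here by weighted degree, which is an assumption rather than the paper's exact criterion.

```python
# Sketch (not the authors' code): drop the most strongly connected nodes
# before running modularity-based community detection on a weighted network.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def detect_after_removal(G, n_remove=3):
    strength = dict(G.degree(weight="weight"))          # weighted degree proxy
    drop = sorted(strength, key=strength.get, reverse=True)[:n_remove]
    H = G.copy()
    H.remove_nodes_from(drop)
    return drop, list(greedy_modularity_communities(H, weight="weight"))

G = nx.les_miserables_graph()        # stand-in for an fMRI-derived network
removed, comms = detect_after_removal(G)
print("removed:", removed)
print(f"{len(comms)} communities after removal")
```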
Quintero, Juliana; García-Betancourt, Tatiana; Caprara, Andrea; Basso, Cesar; Garcia da Rosa, Elsa; Manrique-Saide, Pablo; Coelho, Giovanini; Sánchez-Tejeda, Gustavo; Dzul-Manzanilla, Felipe; García, Diego Alejandro; Carrasquilla, Gabriel; Alfonso-Sierra, Eduardo; Monteiro Vasconcelos Motta, Cyntia; Sommerfeld, Johannes; Kroeger, Axel
2017-09-01
Prior to the current public health emergency following the emergence of chikungunya and Zika virus disease in the Americas during 2014 and 2015, multi-country research conducted between 2011 and 2013 investigated the efficacy of novel Aedes aegypti intervention packages through cluster randomised controlled trials in four Latin American cities: Fortaleza (Brazil), Girardot (Colombia), Acapulco (Mexico) and Salto (Uruguay). Results from the trials led to an effort to scale up the interventions at the city level. Scaling up refers to deliberate efforts to increase the impact of successfully tested health interventions so as to benefit more people and foster policy and program development in a sustainable way. The different scenarios represent examples of a 'vertical approach' and a 'horizontal approach'. This paper presents the analysis of a preliminary process evaluation of the scaling-up efforts in the cities mentioned, with a focus on the challenges and enabling factors encountered by the research teams, analysing the main social, political, administrative, financial and acceptance factors.
Systems Proteomics for Translational Network Medicine
Arrell, D. Kent; Terzic, Andre
2012-01-01
Universal principles underlying network science, and their ever-increasing applications in biomedicine, underscore the unprecedented capacity of systems biology based strategies to synthesize and resolve massive high throughput generated datasets. Enabling previously unattainable comprehension of biological complexity, systems approaches have accelerated progress in elucidating disease prediction, progression, and outcome. Applied to the spectrum of states spanning health and disease, network proteomics establishes a collation, integration, and prioritization algorithm to guide mapping and decoding of proteome landscapes from large-scale raw data. Providing unparalleled deconvolution of protein lists into global interactomes, integrative systems proteomics enables objective, multi-modal interpretation at molecular, pathway, and network scales, merging individual molecular components, their plurality of interactions, and functional contributions for systems comprehension. As such, network systems approaches are increasingly exploited for objective interpretation of cardiovascular proteomics studies. Here, we highlight network systems proteomic analysis pipelines for integration and biological interpretation through protein cartography, ontological categorization, pathway and functional enrichment and complex network analysis. PMID:22896016
NASA Astrophysics Data System (ADS)
Liu, Q.; Jing, L.; Li, Y.; Tang, Y.; Li, H.; Lin, Q.
2016-04-01
For the purpose of forest management, high-resolution LiDAR and optical remote sensing imagery is used for treetop detection, tree crown delineation, and classification. The purpose of this study is to develop a self-adjusted dominant-scale calculation method and a new crown horizontal cutting method for the tree canopy height model (CHM) to detect and delineate tree crowns from LiDAR, under the hypothesis that a treetop is a radiometric or altitudinal maximum and tree crowns consist of multi-scale branches. The major concept of the method is to develop an automatic selection strategy for feature scales on the CHM, and a multi-scale morphological reconstruction-open crown decomposition (MRCD) to obtain morphological multi-scale features of the CHM by: cutting the CHM from treetop to ground; analysing and refining the dominant multiple scales with differential horizontal profiles to obtain treetops; and segmenting the LiDAR CHM using a watershed segmentation approach marked with MRCD treetops. This method solves the problem of false detections on CHM side-surfaces extracted by the traditional morphological opening canopy segment (MOCS) method. The novel MRCD delineates more accurate and quantitative multi-scale features of the CHM, and enables more accurate detection and segmentation of treetops and crowns. Besides, the MRCD method can also be extended to tree crown extraction from high-resolution optical remote sensing imagery. In an experiment on an aerial LiDAR CHM of a forest with multi-scale tree crowns, the proposed method yielded high-quality tree crown maps.
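A hedged sketch of the general treetop/crown workflow (local maxima on a smoothed CHM, then marker-controlled watershed) is given below; the paper's MRCD multi-scale morphological reconstruction is more elaborate than this, and the height threshold and window size are assumed values.

```python
# Generic treetop detection and crown delineation on a canopy height model:
# local maxima become markers for a watershed segmentation.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def crowns_from_chm(chm, min_height=2.0, window=5):
    smooth = ndi.gaussian_filter(chm, sigma=1.0)
    peaks = (smooth == ndi.maximum_filter(smooth, size=window)) & (smooth > min_height)
    markers, n_trees = ndi.label(peaks)                 # one marker per treetop
    labels = watershed(-smooth, markers, mask=smooth > min_height)
    return n_trees, labels

rng = np.random.default_rng(3)
chm = ndi.gaussian_filter(rng.uniform(0, 12, (200, 200)), sigma=6)  # toy CHM
print("treetops detected:", crowns_from_chm(chm)[0])
```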
Hallock, Michael J.; Stone, John E.; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida
2014-01-01
Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems. PMID:24882911
Hallock, Michael J; Stone, John E; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida
2014-05-01
Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems.
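The spatial-decomposition strategy can be illustrated without GPUs: split the lattice into slabs, exchange one-cell halos, and verify that the decomposed update reproduces the monolithic one. The sketch below is a single-process numpy analogue of that idea, not the MPD-RDME code; the real implementation uses CUDA peer-to-peer transfers and dynamic load balancing.

```python
# Conceptual sketch: slab decomposition with halo exchange for an explicit
# 1-D diffusion stencil along the decomposition axis (one slab per GPU in
# the real code). Boundaries are treated reflectively in both variants.
import numpy as np

def diffuse_decomposed(field, n_parts=4, d=0.2):
    slabs = np.array_split(field, n_parts, axis=0)
    padded = []
    for i, s in enumerate(slabs):
        top = slabs[i - 1][-1:] if i > 0 else s[:1]            # halo exchange
        bot = slabs[i + 1][:1] if i < n_parts - 1 else s[-1:]
        padded.append(np.vstack([top, s, bot]))
    out = [p[1:-1] + d * (p[:-2] - 2 * p[1:-1] + p[2:]) for p in padded]
    return np.vstack(out)

rng = np.random.default_rng(4)
u = rng.random((64, 64))
# Decomposed and monolithic updates agree to machine precision:
print(np.allclose(diffuse_decomposed(u), diffuse_decomposed(u, n_parts=1)))
```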
Scalable non-negative matrix tri-factorization.
Čopar, Andrej; Žitnik, Marinka; Zupan, Blaž
2017-01-01
Matrix factorization is a well established pattern discovery tool that has seen numerous applications in biomedical data analytics, such as gene expression co-clustering, patient stratification, and gene-disease association mining. Matrix factorization learns a latent data model that takes a data matrix and transforms it into a latent feature space enabling generalization, noise removal and feature discovery. However, factorization algorithms are numerically intensive, and hence there is a pressing challenge to scale current algorithms to work with large datasets. Our focus in this paper is matrix tri-factorization, a popular method that is not limited by the assumption of standard matrix factorization about data residing in one latent space. Matrix tri-factorization solves this by inferring a separate latent space for each dimension in a data matrix, and a latent mapping of interactions between the inferred spaces, making the approach particularly suitable for biomedical data mining. We developed a block-wise approach for latent factor learning in matrix tri-factorization. The approach partitions a data matrix into disjoint submatrices that are treated independently and fed into a parallel factorization system. An appealing property of the proposed approach is its mathematical equivalence with serial matrix tri-factorization. In a study on large biomedical datasets we show that our approach scales well on multi-processor and multi-GPU architectures. On a four-GPU system we demonstrate that our approach can be more than 100-times faster than its single-processor counterpart. A general approach for scaling non-negative matrix tri-factorization is proposed. The approach is especially useful for parallel matrix factorization implemented in a multi-GPU environment. We expect the new approach will be useful in emerging procedures for latent factor analysis, notably for data integration, where many large data matrices need to be collectively factorized.
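For reference, the serial tri-factorization that the block-wise scheme is equivalent to can be written in a few lines of numpy using standard multiplicative updates for X ≈ U S Vᵀ with non-negative factors; the ranks, iteration count, and data below are illustrative.

```python
# Minimal serial sketch of non-negative matrix tri-factorization via
# multiplicative updates derived from the gradient of ||X - U S V^T||^2.
import numpy as np

def nmtf(X, k1, k2, n_iter=200, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n, m = X.shape
    U = rng.random((n, k1)); S = rng.random((k1, k2)); V = rng.random((m, k2))
    for _ in range(n_iter):
        U *= (X @ V @ S.T) / (U @ S @ V.T @ V @ S.T + eps)
        S *= (U.T @ X @ V) / (U.T @ U @ S @ V.T @ V + eps)
        V *= (X.T @ U @ S) / (V @ S.T @ U.T @ U @ S + eps)
    return U, S, V

X = np.abs(np.random.default_rng(1).random((60, 40)))
U, S, V = nmtf(X, k1=5, k2=4)
print("relative error:", np.linalg.norm(X - U @ S @ V.T) / np.linalg.norm(X))
```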
Extreme Cost Reductions with Multi-Megawatt Centralized Inverter Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwabe, Ulrich; Fishman, Oleg
2015-03-20
The objective of this project was to fully develop, demonstrate, and commercialize a new type of utility-scale PV system. Based on patented technology, this includes the development of a truly centralized inverter system with capacities up to 100MW, and a high-voltage, distributed harvesting approach. This system promises to greatly improve the energy yield from large-scale PV systems, by reducing losses and increasing yield from mismatched arrays, and to reduce overall system costs through very cost-effective conversion and BOS cost reductions enabled by higher-voltage operation.
Tebani, Abdellah; Afonso, Carlos; Marret, Stéphane; Bekri, Soumeya
2016-01-01
The rise of technologies that simultaneously measure thousands of data points represents the heart of systems biology. These technologies have had a huge impact on the discovery of next-generation diagnostics, biomarkers, and drugs in the precision medicine era. Systems biology aims to achieve systemic exploration of complex interactions in biological systems. Driven by high-throughput omics technologies and the computational surge, it enables multi-scale and insightful overviews of cells, organisms, and populations. Precision medicine capitalizes on these conceptual and technological advancements and stands on two main pillars: data generation and data modeling. High-throughput omics technologies allow the retrieval of comprehensive and holistic biological information, whereas computational capabilities enable high-dimensional data modeling and, therefore, accessible and user-friendly visualization. Furthermore, bioinformatics has enabled comprehensive multi-omics and clinical data integration for insightful interpretation. Despite their promise, the translation of these technologies into clinically actionable tools has been slow. In this review, we present state-of-the-art multi-omics data analysis strategies in a clinical context. The challenges of omics-based biomarker translation are discussed. Perspectives regarding the use of multi-omics approaches for inborn errors of metabolism (IEM) are presented by introducing a new paradigm shift in addressing IEM investigations in the post-genomic era. PMID:27649151
Tebani, Abdellah; Afonso, Carlos; Marret, Stéphane; Bekri, Soumeya
2016-09-14
The rise of technologies that simultaneously measure thousands of data points represents the heart of systems biology. These technologies have had a huge impact on the discovery of next-generation diagnostics, biomarkers, and drugs in the precision medicine era. Systems biology aims to achieve systemic exploration of complex interactions in biological systems. Driven by high-throughput omics technologies and the computational surge, it enables multi-scale and insightful overviews of cells, organisms, and populations. Precision medicine capitalizes on these conceptual and technological advancements and stands on two main pillars: data generation and data modeling. High-throughput omics technologies allow the retrieval of comprehensive and holistic biological information, whereas computational capabilities enable high-dimensional data modeling and, therefore, accessible and user-friendly visualization. Furthermore, bioinformatics has enabled comprehensive multi-omics and clinical data integration for insightful interpretation. Despite their promise, the translation of these technologies into clinically actionable tools has been slow. In this review, we present state-of-the-art multi-omics data analysis strategies in a clinical context. The challenges of omics-based biomarker translation are discussed. Perspectives regarding the use of multi-omics approaches for inborn errors of metabolism (IEM) are presented by introducing a new paradigm shift in addressing IEM investigations in the post-genomic era.
Progress in the Phase 0 Model Development of a STAR Concept for Dynamics and Control Testing
NASA Technical Reports Server (NTRS)
Woods-Vedeler, Jessica A.; Armand, Sasan C.
2003-01-01
The paper describes progress in the development of a lightweight, deployable passive Synthetic Thinned Aperture Radiometer (STAR). The spacecraft concept presented will enable the realization of 10 km resolution global soil moisture and ocean salinity measurements at 1.41 GHz. The focus of this work was on the definition of an approximately 1/3-scaled, 5-meter Phase 0 test article for concept demonstration and dynamics and control testing. Design requirements, parameters and a multi-parameter, hybrid scaling approach for the dynamically scaled test model were established. The EI scaling approach that was established allows designers freedom to define the cross section of scaled, lightweight structural components in whatever way is most convenient for manufacturing when the mass of the component is small compared to the overall system mass. Static and dynamic response analysis was conducted on analytical models to evaluate system-level performance and to optimize panel geometry for optimal tension load distribution.
Telescopic multi-resolution augmented reality
NASA Astrophysics Data System (ADS)
Jenkins, Jeffrey; Frenchi, Christopher; Szu, Harold
2014-05-01
To ensure a self-consistent scaling approximation, the underlying microscopic fluctuation components can naturally influence macroscopic means, which may give rise to emergent observable phenomena. In this paper, we describe a consistent macroscopic (cm-scale), mesoscopic (micron-scale), and microscopic (nano-scale) approach to introduce Telescopic Multi-Resolution (TMR) into current Augmented Reality (AR) visualization technology. We propose to couple TMR-AR by introducing an energy-matter interaction engine framework that is based on known physics, biology, and chemistry principles. An immediate payoff of TMR-AR is a self-consistent approximation of the interaction between microscopic observables and their direct effect on the macroscopic system that is driven by real-world measurements. Such an interdisciplinary approach enables us to achieve not only multi-scale, telescopic visualization of real and virtual information, but also the conduct of thought experiments through AR. As a result of this consistency, the framework allows us to explore a large-dimensionality parameter space of measured and unmeasured regions. Towards this direction, we explore how to build learnable libraries of biological, physical, and chemical mechanisms. Fusing analytical sensors with TMR-AR libraries provides a robust framework to optimize testing and evaluation through data-driven or virtual synthetic simulations. Visualizing mechanisms of interactions requires identification of observable image features that can indicate the presence of information in multiple spatial and temporal scales of analog data. The AR methodology was originally developed to enhance pilot training as well as 'make believe' entertainment industries in a user-friendly digital environment. We believe TMR-AR can someday help us conduct thought experiments scientifically, to be pedagogically visualized in zoom-in-and-out, consistent, multi-scale approximations.
Design of a Minimum Surface-Effect Three Degree-of-Freedom Micromanipulator
NASA Technical Reports Server (NTRS)
Goldfarb, Michael; Speich, John E.
1997-01-01
This paper describes the fundamental physical motivations for small-scale minimum surface-effect design, and presents a three degree-of-freedom micromanipulator design that incorporates a minimum surface-effect approach. The primary focus of the design is the split-tube flexure, a unique small-scale revolute joint that exhibits a considerably larger range of motion and significantly better multi-axis revolute joint characteristics than a conventional flexure. The development of this joint enables the implementation of a small-scale spatially-loaded revolute joint-based manipulator with well-behaved kinematic characteristics and without the backlash and stick-slip behavior that would otherwise prevent precision control.
Adaptive Multi-scale Prognostics and Health Management for Smart Manufacturing Systems
Choo, Benjamin Y.; Adams, Stephen C.; Weiss, Brian A.; Marvel, Jeremy A.; Beling, Peter A.
2017-01-01
The Adaptive Multi-scale Prognostics and Health Management (AM-PHM) is a methodology designed to enable PHM in smart manufacturing systems. In application, PHM information is not yet fully utilized in higher-level decision-making in manufacturing systems. AM-PHM leverages and integrates lower-level PHM information, such as from a machine or component, with hierarchical relationships across the component, machine, work cell, and assembly line levels in a manufacturing system. The AM-PHM methodology enables the creation of actionable prognostic and diagnostic intelligence up and down the manufacturing process hierarchy. Decisions are then made with knowledge of the current and projected health state of the system at decision points along the nodes of the hierarchical structure. To overcome the issue of the exponential explosion of complexity associated with describing a large manufacturing system, the AM-PHM methodology takes a hierarchical Markov Decision Process (MDP) approach to describing the system and solving for an optimized policy. A description of the AM-PHM methodology is followed by a simulated industry-inspired example to demonstrate the effectiveness of AM-PHM. PMID:28736651
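As a toy illustration of casting PHM decisions as an MDP, the sketch below solves a single-level machine-health model by value iteration; AM-PHM composes such models across component/machine/cell/line levels. The states, transition probabilities, and rewards are invented for illustration, not taken from the paper.

```python
# Toy machine-health MDP solved by value iteration.
import numpy as np

states = ["healthy", "degraded", "failed"]
actions = ["run", "maintain"]
# P[a][s, s'] : health-state transition probabilities under each action
P = {"run":      np.array([[0.90, 0.09, 0.01],
                           [0.00, 0.80, 0.20],
                           [0.00, 0.00, 1.00]]),
     "maintain": np.array([[1.00, 0.00, 0.00],
                           [0.90, 0.10, 0.00],
                           [0.70, 0.00, 0.30]])}
R = {"run": np.array([10.0, 6.0, -50.0]),      # production reward / failure cost
     "maintain": np.array([-2.0, -5.0, -20.0])}

V = np.zeros(3); gamma = 0.95
for _ in range(500):                            # value iteration
    V = np.max([R[a] + gamma * P[a] @ V for a in actions], axis=0)
policy = [actions[int(np.argmax([R[a][s] + gamma * P[a][s] @ V for a in actions]))]
          for s in range(3)]
print(dict(zip(states, policy)))
```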
A statistical characterization of the Galileo-to-GPS inter-system bias
NASA Astrophysics Data System (ADS)
Gioia, Ciro; Borio, Daniele
2016-11-01
Global navigation satellite systems operate using independent time scales, and thus inter-system time offsets have to be determined to enable multi-constellation navigation solutions. The GPS/Galileo inter-system bias and drift are evaluated here using different types of receivers: two mass-market and two professional receivers. Moreover, three different approaches are considered for inter-system bias determination: in the first, the broadcast Galileo-to-GPS time offset is used to align the GPS and Galileo time scales. In the second, the inter-system bias is included in the multi-constellation navigation solution and is estimated using the available measurements. Finally, an enhanced algorithm using constraints on the inter-system bias time evolution is proposed. The inter-system bias estimates obtained with the different approaches are analysed and their stability is experimentally evaluated using the Allan deviation. The impact of the inter-system bias on the position-velocity-time solution is also considered, and the performance of the approaches analysed is evaluated in terms of standard deviation and mean errors for both horizontal and vertical components. From the experiments, it emerges that the inter-system bias is very stable and that the use of constraints modelling the GPS/Galileo inter-system bias behaviour significantly improves the performance of multi-constellation navigation.
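The second approach, estimating the inter-system bias as an additional unknown, amounts to adding one column to the least-squares design matrix, set to 1 for Galileo measurements only. The sketch below uses synthetic geometry and made-up clock/bias values to illustrate the idea.

```python
# Hedged sketch: Gauss-Newton pseudorange solution with a fifth unknown for
# the GPS/Galileo inter-system bias (ISB). All values are synthetic.
import numpy as np

rng = np.random.default_rng(5)
n_gps, n_gal = 6, 5
sats = rng.uniform(-1, 1, (n_gps + n_gal, 3))
sats = 26_000e3 * sats / np.linalg.norm(sats, axis=1, keepdims=True)
x_true = np.array([4.0e6, 1.0e6, 4.5e6]); clk, isb = 150.0, 30.0   # meters

rho = np.linalg.norm(sats - x_true, axis=1) + clk
rho[n_gps:] += isb                                  # Galileo ranges carry the ISB
rho += rng.normal(0, 0.5, rho.size)                 # measurement noise

x = np.zeros(3)
for _ in range(8):                                  # Gauss-Newton iterations
    r = np.linalg.norm(sats - x, axis=1)
    u = (x - sats) / r[:, None]                     # unit line-of-sight rows
    H = np.hstack([u, np.ones((rho.size, 1)),
                   np.vstack([np.zeros((n_gps, 1)), np.ones((n_gal, 1))])])
    dx, *_ = np.linalg.lstsq(H, rho - r, rcond=None)
    x += dx[:3]
print("position error [m]:", np.linalg.norm(x - x_true), " ISB est [m]:", dx[4])
```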
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preston, Benjamin L.; King, Anthony W.; Ernst, Kathleen M.
Human agency is an essential determinant of the dynamics of agroecosystems. However, the manner in which agency is represented within different approaches to agroecosystem modeling is largely contingent on the scales of analysis and the conceptualization of the system of interest. While appropriate at times, narrow conceptualizations of agroecosystems can preclude consideration for how agency manifests at different scales, thereby marginalizing processes, feedbacks, and constraints that would otherwise affect model results. Modifications to the existing modeling toolkit may therefore enable more holistic representations of human agency. Model integration can assist with the development of multi-scale agroecosystem modeling frameworks that capture different aspects of agency. In addition, expanding the use of socioeconomic scenarios and stakeholder participation can assist in explicitly defining context-dependent elements of scale and agency. Finally, such approaches should be accompanied by greater recognition of the meta agency of model users and the need for more critical evaluation of model selection and application.
NASA Astrophysics Data System (ADS)
Veitinger, Jochen; Purves, Ross Stuart; Sovilla, Betty
2016-10-01
Avalanche hazard assessment requires a very precise estimation of the release area, which still depends, to a large extent, on expert judgement of avalanche specialists. Therefore, a new algorithm for automated identification of potential avalanche release areas was developed. It overcomes some of the limitations of previous tools, which are currently not often applied in hazard mitigation practice. By introducing a multi-scale roughness parameter, fine-scale topography and its attenuation under snow influence is captured. This allows the assessment of snow influence on terrain morphology and, consequently, potential release area size and location. The integration of a wind shelter index enables the user to define release area scenarios as a function of the prevailing wind direction or single storm events. A case study illustrates the practical usefulness of this approach for the definition of release area scenarios under varying snow cover and wind conditions. A validation with historical data demonstrated an improved estimation of avalanche release areas. Our method outperforms a slope-based approach, in particular for more frequent avalanches; however, the application of the algorithm as a forecasting tool remains limited, as snowpack stability is not integrated. Future research activity should therefore focus on the coupling of the algorithm with snowpack conditions.
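A rough sketch of the ingredients described above (multi-scale roughness plus snow attenuation) follows; the attenuation model, window sizes, and threshold are assumptions, much simplified relative to the paper's formulation.

```python
# Illustrative sketch: terrain roughness as the local standard deviation of a
# DEM at several window sizes, with fine-scale roughness attenuated by snow
# depth before thresholding candidate release cells.
import numpy as np
from scipy import ndimage as ndi

def multiscale_roughness(dem, scales=(3, 9, 27)):
    layers = []
    for w in scales:
        mean = ndi.uniform_filter(dem, size=w)
        var = ndi.uniform_filter(dem ** 2, size=w) - mean ** 2
        layers.append(np.sqrt(np.clip(var, 0, None)))
    return np.stack(layers)

def release_candidates(dem, snow_depth_m, rough_max=0.5):
    rough = multiscale_roughness(dem)
    smooth_factor = np.exp(-snow_depth_m)     # assumed attenuation model
    effective = rough[0] * smooth_factor      # snow mainly smooths fine scales
    return effective < rough_max              # smooth-enough cells can release

dem = ndi.gaussian_filter(np.random.default_rng(6).random((200, 200)) * 50, 3)
print("candidate fraction:", release_candidates(dem, snow_depth_m=1.5).mean())
```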
Jaiswal, Astha; Godinez, William J; Eils, Roland; Lehmann, Maik Jorg; Rohr, Karl
2015-11-01
Automatic fluorescent particle tracking is an essential task to study the dynamics of a large number of biological structures at a sub-cellular level. We have developed a probabilistic particle tracking approach based on multi-scale detection and two-step multi-frame association. The multi-scale detection scheme allows coping with particles in close proximity. For finding associations, we have developed a two-step multi-frame algorithm, which is based on a temporally semiglobal formulation as well as spatially local and global optimization. In the first step, reliable associations are determined for each particle individually in local neighborhoods. In the second step, the global spatial information over multiple frames is exploited jointly to determine optimal associations. The multi-scale detection scheme and the multi-frame association finding algorithm have been combined with a probabilistic tracking approach based on the Kalman filter. We have successfully applied our probabilistic tracking approach to synthetic as well as real microscopy image sequences of virus particles and quantified the performance. We found that the proposed approach outperforms previous approaches.
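The Kalman-filter backbone of such a tracker is compact. The sketch below pairs a constant-velocity model with greedy nearest-neighbor association, which stands in for the paper's multi-scale detection and two-step multi-frame association; all noise parameters are assumed values.

```python
# Minimal constant-velocity Kalman filter with greedy association.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
Hm = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4); Rm = 0.25 * np.eye(2)

def step(x, P, detections):
    x, P = F @ x, F @ P @ F.T + Q                  # predict
    z = min(detections, key=lambda d: np.linalg.norm(d - Hm @ x))  # associate
    S = Hm @ P @ Hm.T + Rm
    K = P @ Hm.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (z - Hm @ x)                       # update
    P = (np.eye(4) - K @ Hm) @ P
    return x, P

x, P = np.array([0.0, 0.0, 1.0, 0.5]), np.eye(4)
for t in range(1, 6):
    dets = [np.array([t, 0.5 * t]) + np.random.default_rng(t).normal(0, 0.1, 2),
            np.array([5.0, -3.0])]                 # true particle + clutter
    x, P = step(x, P, dets)
print("state estimate:", np.round(x, 2))
```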
A multi-scale convolutional neural network for phenotyping high-content cellular images.
Godinez, William J; Hossain, Imtiaz; Lazic, Stanley E; Davies, John W; Zhang, Xian
2017-07-01
Identifying phenotypes based on high-content cellular images is challenging. Conventional image analysis pipelines for phenotype identification comprise multiple independent steps, with each step requiring method customization and adjustment of multiple parameters. Here, we present an approach based on a multi-scale convolutional neural network (M-CNN) that classifies, in a single cohesive step, cellular images into phenotypes by using directly and solely the images' pixel intensity values. The only parameters in the approach are the weights of the neural network, which are automatically optimized based on training images. The approach requires no a priori knowledge or manual customization, and is applicable to single- or multi-channel images displaying single or multiple cells. We evaluated the classification performance of the approach on eight diverse benchmark datasets. The approach yielded overall a higher classification accuracy compared with state-of-the-art results, including those of other deep CNN architectures. In addition to using the network to simply obtain a yes-or-no prediction for a given phenotype, we use the probability outputs calculated by the network to quantitatively describe the phenotypes. This study shows that these probability values correlate with chemical treatment concentrations. This finding further validates our approach and enables chemical treatment potency estimation via CNNs. Availability: the network specifications and solver definitions are provided in Supplementary Software 1; supplementary data are available at Bioinformatics online. Contact: william_jose.godinez_navarro@novartis.com or xian-1.zhang@novartis.com.
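A schematic of what a multi-scale CNN of this kind can look like is given below in PyTorch: parallel convolution towers over downsampled copies of the image, concatenated before a classification head. Tower sizes and scales are illustrative, not the paper's exact M-CNN configuration.

```python
# Schematic multi-scale CNN: each tower sees the image at a different scale.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleCNN(nn.Module):
    def __init__(self, n_classes=8, scales=(1, 2, 4), in_ch=1):
        super().__init__()
        self.scales = scales
        self.towers = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1))
            for _ in scales])
        self.head = nn.Linear(32 * len(scales), n_classes)

    def forward(self, x):
        feats = []
        for s, tower in zip(self.scales, self.towers):
            xi = F.avg_pool2d(x, s) if s > 1 else x   # build the image pyramid
            feats.append(tower(xi).flatten(1))
        return self.head(torch.cat(feats, dim=1))     # class logits

model = MultiScaleCNN()
logits = model(torch.randn(2, 1, 128, 128))
probs = logits.softmax(dim=1)                         # per-phenotype probabilities
print(probs.shape)
```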
Scale and the representation of human agency in the modeling of agroecosystems
Preston, Benjamin L.; King, Anthony W.; Ernst, Kathleen M.; ...
2015-07-17
Human agency is an essential determinant of the dynamics of agroecosystems. However, the manner in which agency is represented within different approaches to agroecosystem modeling is largely contingent on the scales of analysis and the conceptualization of the system of interest. While appropriate at times, narrow conceptualizations of agroecosystems can preclude consideration for how agency manifests at different scales, thereby marginalizing processes, feedbacks, and constraints that would otherwise affect model results. Modifications to the existing modeling toolkit may therefore enable more holistic representations of human agency. Model integration can assist with the development of multi-scale agroecosystem modeling frameworks that capture different aspects of agency. In addition, expanding the use of socioeconomic scenarios and stakeholder participation can assist in explicitly defining context-dependent elements of scale and agency. Finally, such approaches should be accompanied by greater recognition of the meta agency of model users and the need for more critical evaluation of model selection and application.
Sanga, Sandeep; Frieboes, Hermann B.; Zheng, Xiaoming; Gatenby, Robert; Bearer, Elaine L.; Cristini, Vittorio
2007-01-01
Empirical evidence and theoretical studies suggest that the phenotype, i.e., cellular- and molecular-scale dynamics, including proliferation rate and adhesiveness due to microenvironmental factors and gene expression that govern tumor growth and invasiveness, also determine gross tumor-scale morphology. It has been difficult to quantify the relative effect of these links on disease progression and prognosis using conventional clinical and experimental methods and observables. As a result, successful individualized treatment of highly malignant and invasive cancers, such as glioblastoma, via surgical resection and chemotherapy cannot be offered and outcomes are generally poor. What is needed is a deterministic, quantifiable method to enable understanding of the connections between phenotype and tumor morphology. Here, we critically review advantages and disadvantages of recent computational modeling efforts (e.g., continuum, discrete, and cellular automata models) that have pursued this understanding. Based on this assessment, we propose and discuss a multi-scale, i.e., from the molecular to the gross tumor scale, mathematical and computational “first-principle” approach based on mass conservation and other physical laws, such as those employed in reaction-diffusion systems. Model variables describe known characteristics of tumor behavior, and parameters and functional relationships across scales are informed from in vitro, in vivo and ex vivo biology. We demonstrate that this methodology, once coupled to tumor imaging and tumor biopsy or cell culture data, should enable prediction of tumor growth and therapy outcome through quantification of the relation between the underlying dynamics and morphological characteristics. In particular, morphologic stability analysis of this mathematical model reveals that tumor cell patterning at the tumor-host interface is regulated by cell proliferation, adhesion and other phenotypic characteristics: histopathology information of the tumor boundary can be input to the mathematical model and used as a phenotype-diagnostic tool and thus to predict collective and individual tumor cell invasion of surrounding host. This approach further provides a means to deterministically test effects of novel and hypothetical therapy strategies on tumor behavior. PMID:17629503
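As a worked example of the "first-principle" reaction-diffusion ingredient mentioned above, the sketch below integrates a 1-D Fisher-KPP equation, du/dt = D d²u/dx² + ρ u (1 - u), with explicit finite differences. The coefficients and initial condition are illustrative only; the paper's multi-scale model is far richer.

```python
# Toy 1-D Fisher-KPP tumor-front model, explicit finite differences.
import numpy as np

D, rho = 1e-3, 0.5
nx, dx, dt, n_steps = 200, 0.01, 0.01, 2000    # D*dt/dx^2 = 0.1, stable
u = np.zeros(nx); u[:10] = 1.0                 # initial tumor cell density

for _ in range(n_steps):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx ** 2
    lap[0] = lap[1]; lap[-1] = lap[-2]         # crude no-flux boundaries
    u = u + dt * (D * lap + rho * u * (1 - u))

front = np.argmax(u < 0.5) * dx                # invasion front position
print(f"front at x = {front:.2f} after t = {n_steps * dt:.0f}")
```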
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Xuebing; Chen, Ting; Qi, Xintong
In this study, we developed a new method for in-situ pressure determination in multi-anvil, high-pressure apparatus using an acoustic travel time approach within the framework of acoustoelasticity. The ultrasonic travel times of polycrystalline Al2O3 were calibrated against the NaCl pressure scale up to 15 GPa and 900 °C in a Kawai-type double-stage multi-anvil apparatus in conjunction with synchrotron X-radiation, thereby providing a convenient and reliable gauge for pressure determination at ambient and high temperatures. The pressures derived from this new travel time method are in excellent agreement with those from the fixed-point methods. Application of this new pressure gauge in an offline experiment revealed a remarkable agreement of the densities of coesite with those from previous single-crystal compression studies under hydrostatic conditions, thus providing strong validation for the current travel time pressure scale. The travel time approach not only can be used for continuous in-situ pressure determination at room temperature, at high temperatures, and during compression and decompression, but also bears a unique capability that none of the previous scales can deliver, i.e., simultaneous pressure and temperature determination with high accuracy (±0.16 GPa in pressure and ±17 °C in temperature). Therefore, the new in-situ Al2O3 pressure gauge is expected to enable new and expanded opportunities for offline laboratory studies of solid and liquid materials under high pressure and high temperature in multi-anvil apparatus.
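Conceptually, simultaneous P-T determination is possible because two independently calibrated travel-time surfaces can be inverted for two unknowns. The sketch below uses made-up linear calibrations (the paper's actual fits are not reproduced here) and a standard root finder.

```python
# Hedged sketch: invert two measured travel times for pressure and
# temperature given calibrated surfaces t_P(P,T) and t_S(P,T).
import numpy as np
from scipy.optimize import fsolve

def t_p(P, T):  # microseconds; placeholder linear calibration
    return 1.000 - 0.0040 * P + 0.00005 * T

def t_s(P, T):  # second, independent mode
    return 1.800 - 0.0030 * P + 0.00012 * T

def residual(x, t_p_meas, t_s_meas):
    P, T = x
    return [t_p(P, T) - t_p_meas, t_s(P, T) - t_s_meas]

t_meas = (t_p(10.0, 600.0), t_s(10.0, 600.0))    # synthetic "measurement"
P_est, T_est = fsolve(residual, x0=[5.0, 300.0], args=t_meas)
print(f"P = {P_est:.2f} GPa, T = {T_est:.0f} C")
```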
NASA Astrophysics Data System (ADS)
Walther, Christian; Frei, Michaela
2017-04-01
Mining of so-called "conflict minerals" is often associated with small-scale mining activities. The activities discussed here are located in forested areas of the eastern DRC, which are often remote, difficult to access and insecure for traditional geological field inspection. To accelerate their CTC (Certified Trading Chain) certification process, remote sensing data are used for detecting and monitoring these small-scale mining operations. This requires a high image-acquisition frequency, owing to mining-site relocations and the need to compensate for year-round high cloud coverage, especially for optical data evaluation. Freely available medium-resolution optical data from Sentinel-2 and Landsat-8, as well as SAR data from Sentinel-1, are used to detect small mining targets with a minimum size of approximately 0.5 km2. The developed method enables robust multi-temporal detection of mining sites and monitoring of spatio-temporal mining-site relocations and environmental changes. Since qualitatively and quantitatively comparable results are generated, the change detection approach followed is objective and transparent and may push the certification process forward.
A multi-disciplinary approach for the structural monitoring of Cultural Heritages in a seismic area
NASA Astrophysics Data System (ADS)
Fabrizia Buongiorno, Maria; Musacchio, Massimo; Guerra, Ignazio; Porco, Giacinto; Stramondo, Salvatore; Casula, Giuseppe; Caserta, Arrigo; Speranza, Fabio; Doumaz, Fawzi; Giovanna Bianchi, Maria; Luzi, Guido; Ilaria Pannaccione Apa, Maria; Montuori, Antonio; Gaudiosi, Iolanda; Vecchio, Antonio; Gervasi, Anna; Bonali, Elena; Romano, Dolores; Falcone, Sergio; La Piana, Carmelo
2014-05-01
In recent years, the concepts of seismic risk vulnerability and structural health monitoring have become very important topics in both structural and civil engineering for the identification of appropriate risk indicators and risk assessment methodologies in Cultural Heritage monitoring. The latter, which includes objects, buildings and sites with historical, architectural and/or engineering relevance, concerns the management, preservation and maintenance of the heritage within its surrounding environmental context, in response to climate change and natural hazards (e.g. seismic, volcanic, landslide and flooding hazards). Within such a framework, the complexity and the great number of variables to be considered require a multi-disciplinary approach including strategies, methodologies and tools able to provide effective monitoring of Cultural Heritage from both scientific and operational viewpoints. Based on this rationale, in this study an advanced, technological and operationally oriented approach is presented and tested, which enables measuring and monitoring of the conservation state of Cultural Heritage and the geophysical/geological setting of the area, in order to mitigate the seismic risk to historical public goods at different spatial scales*. The integration of classical geophysical methods with new emerging sensing techniques enables multi-depth, multi-resolution, and multi-scale monitoring in both space and time. An integrated system of methodologies, instrumentation and data-processing approaches for non-destructive Cultural Heritage investigation is proposed, which concerns, in detail, the analysis of seismogenetic sources, the geological-geotechnical setting of the area and the evaluation of site seismic effects, proximal remote sensing techniques (e.g. terrestrial laser scanners, ground-based radar systems, thermal cameras), high-resolution aerial and satellite-based remote sensing methodologies (e.g. aeromagnetic surveys, synthetic aperture radar, optical, multispectral and panchromatic measurements), and static and dynamic structural health monitoring analysis (e.g. screening tests with georadar, sonic instruments, sclerometers and optic fibers). The final purpose of the proposed approach is the development of an investigation methodology for short- and long-term Cultural Heritage preservation in response to seismic stress, which has specific features of scalability, modularity and exportability for every possible monitoring configuration. Moreover, it allows gathering useful information to furnish guidelines for Institutions and local Administrations to plan consolidation actions and therefore prevention activity. Some preliminary results will be presented for the test site of the Calabria Region, where some architectural heritage sites have been properly selected as case studies for monitoring purposes. *The present work is supported and funded by the Ministero dell'Università, dell'Istruzione e della Ricerca (MIUR) under the research project PON01-02710 "MASSIMO" - "Monitoraggio in Area Sismica di Sistemi Monumentali".
Large-eddy simulation of wind turbine wake interactions on locally refined Cartesian grids
NASA Astrophysics Data System (ADS)
Angelidis, Dionysios; Sotiropoulos, Fotis
2014-11-01
Performing high-fidelity numerical simulations of turbulent flow in wind farms remains a challenging issue, mainly because of the large computational resources required to accurately simulate the turbine wakes and turbine/turbine interactions. The discretization of the governing equations on structured grids for mesoscale calculations may not be the most efficient approach for resolving the large disparity of spatial scales. A 3D Cartesian grid refinement method is presented, enabling the efficient coupling of the Actuator Line Model (ALM) with locally refined unstructured Cartesian grids adapted to accurately resolve tip vortices and multi-turbine interactions. Second-order schemes are employed for the discretization of the incompressible Navier-Stokes equations in a hybrid staggered/non-staggered formulation, coupled with a fractional step method that ensures the satisfaction of local mass conservation to machine zero. The current approach enables multi-resolution LES of turbulent flow in multi-turbine wind farms. The numerical simulations are in good agreement with experimental measurements and are able to resolve the rich dynamics of turbine wakes on grids containing only a small fraction of the grid nodes that would be required in simulations without local mesh refinement. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the National Science Foundation under Award Number NSF PFI:BIC 1318201.
NASA Astrophysics Data System (ADS)
Shume, E. B.; Komjathy, A.; Langley, R. B.; Verkhoglyadova, O. P.; Butala, M.; Mannucci, A. J.
2014-12-01
In this research, we report intermediate-scale plasma density irregularities in the high-latitude ionosphere inferred from high-resolution radio occultation (RO) measurements in the CASSIOPE (CAScade Smallsat and IOnospheric Polar Explorer)-GPS (Global Positioning System) satellite radio link. The high inclination of the CASSIOPE satellite and the high rate of signal reception by the occultation antenna of the GPS Attitude, Positioning and Profiling (GAP) instrument on the Enhanced Polar Outflow Probe platform on CASSIOPE enable a high temporal and spatial resolution investigation of the dynamics of the polar ionosphere, magnetosphere-ionosphere coupling, solar wind effects, etc., in unprecedented detail compared to what was possible in the past. We have carried out a high-spatial-resolution analysis, in altitude and geomagnetic latitude, of scintillation-producing plasma density irregularities in the polar ionosphere. Intermediate-scale, scintillation-producing plasma density irregularities, which correspond to 2 to 40 km spatial scales, were inferred by applying multi-scale spectral analysis to the RO phase delay measurements. Using our multi-scale spectral analysis approach and Polar Operational Environmental Satellites (POES) and Defense Meteorological Satellite Program (DMSP) observations, we infer that the irregularity scales and phase scintillations have distinct features in the auroral oval and polar cap regions. In specific terms, we found that larger length scales and more intense phase scintillations are prevalent in the auroral oval compared to the polar cap region. Hence, the irregularity scales and phase scintillation characteristics are a function of the solar wind and the magnetospheric forcing. Multi-scale analysis may become a powerful diagnostic tool for characterizing how the ionosphere is dynamically driven by these factors.
Multi-scale Modeling and Analysis of Nano-RFID Systems on HPC Setup
NASA Astrophysics Data System (ADS)
Pathak, Rohit; Joshi, Satyadhar
In this paper we have worked on some of the complex modeling aspects, such as multi-scale modeling and MATLAB Sugar-based modeling, and have shown the complexities involved in the analysis of Nano RFID (Radio Frequency Identification) systems. We have shown the modeling and simulation and demonstrated some novel ideas and library development for Nano RFID. Multi-scale modeling plays a very important role in nanotech-enabled devices, whose properties sometimes cannot be explained by abstraction-level theories. Reliability and packaging still remain among the major hindrances to practical implementation of Nano RFID-based devices, and modeling and simulation will play a very important role in addressing them. CNTs are the future low-power material that will replace CMOS, and their integration with CMOS and MEMS circuitry will play an important role in realizing the true power of Nano RFID systems. RFID based on innovations in nanotechnology has been shown. MEMS modeling of the antenna and sensors and their integration in the circuitry has been shown. Thus, incorporating this, we can design a Nano-RFID that can be used in areas like human implantation and complex banking applications. We have proposed modeling of RFID using the concept of multi-scale modeling to accurately predict its properties. We also give the modeling of recently proposed MEMS devices that may see application in RFID. We have also covered the applications and advantages of Nano RFID in various areas. RF MEMS has matured and its devices are being successfully commercialized, but taking it to the limits of the nano domain and integrating it with single-chip RFID needs a novel approach, which is being proposed. We have modeled a MEMS-based transponder and shown the distribution for multi-scale modeling for Nano RFID.
A multi-disciplinary approach for the integrated assessment of multiple risks in delta areas.
NASA Astrophysics Data System (ADS)
Sperotto, Anna; Torresan, Silvia; Critto, Andrea; Marcomini, Antonio
2016-04-01
The assessment of climate-change-related risks is notoriously difficult due to the complex and uncertain combinations of hazardous events that might happen, the multiplicity of physical processes involved, and the continuous changes and interactions of environmental and socio-economic systems. One important challenge lies in predicting and modelling cascades of natural and man-made hazard events which can be triggered by climate change, encompassing different spatial and temporal scales. Another regards the potentially difficult integration of environmental, social and economic disciplines in the multi-risk concept. Finally, effective interaction between scientists and stakeholders is essential to ensure that multi-risk knowledge is translated into efficient adaptation and management strategies. The assessment is even more complex at the scale of deltaic systems, which are particularly vulnerable to global environmental changes due to the fragile equilibrium between the presence of valuable natural ecosystems and relevant economic activities. Improving our capacity to assess the combined effects of multiple hazards (e.g. sea-level rise, storm surges, reduction in sediment load, local subsidence, saltwater intrusion) is therefore essential to identify timely opportunities for adaptation. A holistic multi-risk approach is here proposed to integrate terminology, metrics and methodologies from different research fields (i.e. environmental, social and economic sciences), thus creating shared knowledge areas to advance multi-risk assessment and management in delta regions. A first test of the approach, including the application of Bayesian network analysis for the assessment of impacts of climate change on key natural systems (e.g. wetlands, protected areas, beaches) and socio-economic activities (e.g. agriculture, tourism), is carried out in the Po river delta in Northern Italy. The approach is based on a bottom-up process involving local stakeholders early in the different stages of the multi-risk assessment process (i.e. identification of objectives, collection of data, definition of risk thresholds and indicators). The results of the assessment will allow the development of multi-risk scenarios enabling the evaluation and prioritization of risk management and adaptation options under changing climate conditions.
Xray: N-dimensional, labeled arrays for analyzing physical datasets in Python
NASA Astrophysics Data System (ADS)
Hoyer, S.
2015-12-01
Efficient analysis of geophysical datasets requires tools that both preserve and utilize metadata, and that transparently scale to process large datasets. Xray is such a tool, in the form of an open-source Python library for analyzing the labeled, multi-dimensional array (tensor) datasets that are ubiquitous in the Earth sciences. Xray's approach pairs Python data structures based on the data model of the netCDF file format with the proven design and user interface of pandas, the popular Python data analysis library for labeled tabular data. On top of the NumPy array, xray adds labeled dimensions (e.g., "time") and coordinate values (e.g., "2015-04-10"), which it uses to enable a host of operations powered by these labels: selection, aggregation, alignment, broadcasting, split-apply-combine, interoperability with pandas and serialization to netCDF/HDF5. Many of these operations are enabled by xray's tight integration with pandas. Finally, to allow for easy parallelism and to enable its labeled data operations to scale to datasets that do not fit into memory, xray integrates with the parallel processing library dask.
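For readers unfamiliar with the library, the following minimal sketch illustrates the label-based workflow described above (written against the package's later name, xarray, since the xray project was subsequently renamed; the data and file names are invented for illustration):

    import numpy as np
    import pandas as pd
    import xarray as xr  # the xray project described here was later renamed xarray

    # Build a labeled 3-D array: a (time, lat, lon) temperature field
    times = pd.date_range("2015-01-01", periods=365)
    data = xr.DataArray(
        np.random.rand(365, 90, 180),
        coords={"time": times,
                "lat": np.linspace(-89, 89, 90),
                "lon": np.linspace(-179, 179, 180)},
        dims=("time", "lat", "lon"),
        name="temperature",
    )

    april = data.sel(time="2015-04")                   # label-based selection
    zonal_mean = data.mean(dim="lon")                  # aggregation over a named dimension
    monthly = data.groupby("time.month").mean("time")  # split-apply-combine

    data.to_dataset().to_netcdf("temperature.nc")      # serialization to netCDF

Because operations address dimensions by name rather than by axis position, the same analysis code works unchanged when the dimension order of the underlying arrays differs.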
Grid-Enabled Quantitative Analysis of Breast Cancer
2010-10-01
large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer...research, we designed a pilot study utilizing large scale parallel Grid computing harnessing nationwide infrastructure for medical image analysis. Also
NASA Astrophysics Data System (ADS)
Buongiorno, M. F.; Silvestri, M.; Musacchio, M.
2017-12-01
In this work, a complete processing chain, from the detection of the beginning of an eruption to the estimation of lava flow temperature on active volcanoes using remote sensing data, is presented, showing the results for the Mt. Etna eruption of March 2017. The early detection of a new eruption is based on the potential ensured by geostationary, very low spatial resolution satellites (3x3 km in nadiral view); the hot spot/lava flow evolution is derived from Sentinel-2 polar medium/high spatial resolution data (20x20 m), while the surface temperature is estimated from polar medium/low spatial resolution sensors such as Landsat-8, ASTER and Sentinel-3 (from 90 m up to 1 km). This approach merges two outcomes: those derived from activity performed for monitoring purposes within INGV R&D activities, and the results obtained by the Geohazards Exploitation Platform (GEP), an ESA-funded project aimed at the development of a shared platform for providing services based on EO data. Because of the variety of phenomena to be analyzed, a multi-temporal, multi-scale approach has been used to implement suitable and robust algorithms for the different sensors. With the exception of Sentinel-2 (MSI) data, for which the algorithm used is based on NIR-SWIR bands, we exploit the MIR-TIR channels of Landsat-8, ASTER, Sentinel-3 and SEVIRI for automatically generating the surface thermal state analysis. The developed procedure produces time-series data and allows information to be extracted from each single co-registered pixel, to highlight variations of temperature within specific areas. The final goal is to implement an easy tool which enables scientists and users to extract valuable information from satellite time series at different scales produced by ESA and EUMETSAT in the frame of Europe's Copernicus program and other Earth observation satellite programs such as LANDSAT (USGS) and GOES (NOAA).
Balkányi, László
2002-01-01
To develop information systems (IS) in the changing environment of the health sector, a simple but thorough model, avoiding the techno-jargon of informatics, might be useful for top management. A platform-neutral, extensible, transparent conceptual model should be established. Limitations of current methods lead to a simple but comprehensive mapping, in the form of a three-dimensional cube. The three 'orthogonal' views are (a) organization functionality, (b) organizational structures and (c) information technology. Each of the cube's sides is described according to its nature. This approach makes it possible to define any kind of IS component as a certain point/layer/domain of the cube, and also enables management to label all IS components independently from any supplier(s) and/or any specific platform. The model handles changes in organization structure, business functionality and the serving info-system independently from each other. Practical application extends to (a) planning complex new ISs, (b) guiding development of multi-vendor, multi-site ISs, (c) supporting large-scale public procurement procedures and the contracting and implementation phases by establishing a platform-neutral reference, and (d) keeping an exhaustive inventory of an existing large-scale system that handles non-tangible aspects of the IS.
Sanders, Matthew R.; Kirby, James N.
2013-01-01
A consumer perspective can contribute much to enhancing the “ecological fit” of population level parenting interventions so they meet the needs of parents. This approach involves building relationships with consumer groups and soliciting consumer input into the relevance and acceptability of interventions, clarifying the enablers and barriers to engagement and involvement of parents, and clarifying variables that influence a parent’s program completion. The adoption of a more collaborative approach to working with consumers is important if meaningful population level change in the prevalence of serious social, emotional and behavioral problems in children and young people is to be achieved. Parents seeking assistance for their children’s behavior come from a diverse range of socioeconomic backgrounds, educational levels, cultures and languages. This paper examines consumer engagement strategies that can be employed throughout the process of program development, evaluation, training and dissemination and in “scaling up” the intervention. We argue that a multi-level public health approach to parenting intervention requires a strong consumer perspective to enable interventions to be more responsive to the preferences and needs of families and to ensure improved population reach of interventions. Examples from large scale dissemination trials are used to illustrate how consumer input can result in an increasingly differentiated suite of evidence-based parenting programs. PMID:22440062
NASA Astrophysics Data System (ADS)
Tabik, S.; Romero, L. F.; Mimica, P.; Plata, O.; Zapata, E. L.
2012-09-01
A broad area in astronomy focuses on simulating extragalactic objects based on Very Long Baseline Interferometry (VLBI) radio-maps. Several algorithms in this scope simulate what the observed radio-maps would be if emitted from a predefined extragalactic object. This work analyzes the performance and scaling of this kind of algorithm on multi-socket, multi-core architectures. In particular, we evaluate a sharing approach, a privatizing approach and a hybrid approach on systems with a complex memory hierarchy that includes a shared Last Level Cache (LLC). In addition, we investigate which manual processes can be systematized and then automated in future work. The experiments show that the data-privatizing model scales efficiently on medium-scale multi-socket, multi-core systems (up to 48 cores), while, regardless of algorithmic and scheduling optimizations, the sharing approach is unable to reach acceptable scalability on more than one socket. However, the hybrid model with a specific level of data-sharing provides the best scalability over all of the multi-socket, multi-core systems used.
Multi-scale heat and mass transfer modelling of cell and tissue cryopreservation
Xu, Feng; Moon, Sangjun; Zhang, Xiaohui; Shao, Lei; Song, Young Seok; Demirci, Utkan
2010-01-01
Cells and tissues undergo complex physical processes during cryopreservation. Understanding the underlying physical phenomena is critical to improve current cryopreservation methods and to develop new techniques. Here, we describe multi-scale approaches for modelling cell and tissue cryopreservation including heat transfer at macroscale level, crystallization, cell volume change and mass transport across cell membranes at microscale level. These multi-scale approaches allow us to study cell and tissue cryopreservation. PMID:20047939
A Multi-Faceted Formative Assessment Approach: Better Recognising the Learning Needs of Students
ERIC Educational Resources Information Center
Jenkins, James O.
2010-01-01
Students are increasingly subject to a series of learning pressures that prevent effective engagement in assessment. Thus, the aim of this study was to create a multi-faceted formative assessment approach that better enabled students to engage in the assessment process. A formative assessment approach, consisting of six key initiatives, is…
Towards Simpler Custom and OpenSearch Services for Voluminous NEWS Merged A-Train Data (Invited)
NASA Astrophysics Data System (ADS)
Hua, H.; Fetzer, E.; Braverman, A. J.; Lewis, S.; Henderson, M. L.; Guillaume, A.; Lee, S.; de La Torre Juarez, M.; Dang, H. T.
2010-12-01
To simplify access to large and complex satellite data sets for climate analysis and model verification, we developed web services that are used to study long-term and global-scale trends in climate, the water and energy cycle, and weather variability. A related NASA Energy and Water Cycle Study (NEWS) task has created a merged NEWS Level 2 data set from multiple instruments in NASA’s A-Train constellation of satellites. We used these data to enable the creation of climatologies that include correlations between observed temperature, water vapor and cloud properties from the A-Train sensors. Instead of imposing on the user an often rigid and limiting web-based analysis environment, we recognize the need for simple and well-designed services so that users can perform analysis in their own familiar computing environments. Custom on-demand services were developed to improve the accessibility of voluminous multi-sensor data. Services enabling geospatial, geographical, and multi-sensor parameter subsets of the data, as well as a custom time-averaged Level 3 service, will be presented. We will also show how a Level 3Q data reduction approach can be used to help “browse” the voluminous multi-sensor Level 2 data. An OpenSearch capability with full text + space + time search of data products will also be presented as an approach to facilitate interoperability with other data systems. We will present our experiences in improving usability as well as strategies for facilitating interoperability with other data systems.
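As a concrete illustration of the search interface described, a client request for a space/time subset might look like the sketch below; the endpoint URL and parameter names are hypothetical placeholders, not the actual NEWS service interface:

    import requests

    # Hypothetical OpenSearch endpoint and parameter names, for illustration only.
    params = {
        "q": "water vapor",                   # free-text search
        "bbox": "-130,20,-60,50",             # spatial subset (lon/lat bounding box)
        "startTime": "2007-01-01T00:00:00Z",  # temporal subset
        "endTime": "2007-12-31T23:59:59Z",
        "format": "atom",
    }
    resp = requests.get("https://example.nasa.gov/news/opensearch", params=params)
    resp.raise_for_status()
    print(resp.text[:500])  # Atom feed listing matching granules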
The Emergence of Dominant Design(s) in Large Scale Cyber-Infrastructure Systems
ERIC Educational Resources Information Center
Diamanti, Eirini Ilana
2012-01-01
Cyber-infrastructure systems are integrated large-scale IT systems designed with the goal of transforming scientific practice by enabling multi-disciplinary, cross-institutional collaboration. Their large scale and socio-technical complexity make design decisions for their underlying architecture practically irreversible. Drawing on three…
Multi-dimensional photonic states from a quantum dot
NASA Astrophysics Data System (ADS)
Lee, J. P.; Bennett, A. J.; Stevenson, R. M.; Ellis, D. J. P.; Farrer, I.; Ritchie, D. A.; Shields, A. J.
2018-04-01
Quantum states superposed across multiple particles or degrees of freedom offer an advantage in the development of quantum technologies. Creating these states deterministically and with high efficiency is an ongoing challenge. A promising approach is the repeated excitation of multi-level quantum emitters, which have been shown to naturally generate light with quantum statistics. Here we describe how to create one class of higher-dimensional quantum state, a so-called W-state, which is superposed across multiple time bins. We do this by repeated Raman scattering of photons from a charged quantum dot in a pillar microcavity. We show this method can be scaled to larger dimensions with no reduction in coherence or single-photon character. We explain how to extend this work to enable the deterministic creation of arbitrary time-bin-encoded qudits.
A new multi-scale method to reveal hierarchical modular structures in biological networks.
Jiao, Qing-Ju; Huang, Yan; Shen, Hong-Bin
2016-11-15
Biological networks are effective tools for studying molecular interactions. Modular structure, in which genes or proteins may tend to be associated with functional modules or protein complexes, is a remarkable feature of biological networks. Mining modular structure from biological networks enables us to focus on a set of potentially important nodes, which provides a reliable guide to future biological experiments. The first fundamental challenge in mining modular structure from biological networks is that the quality of the observed network data is usually low owing to noise and incompleteness in the obtained networks. The second problem that poses a challenge to existing approaches to the mining of modular structure is that the organization of both functional modules and protein complexes in networks is far more complicated than was ever thought. For instance, the sizes of different modules vary considerably from each other and they often form multi-scale hierarchical structures. To solve these problems, we propose a new multi-scale protocol for mining modular structure (named ISIMB) driven by a node similarity metric, which works in an iteratively converged space to reduce the effects of the low data quality of the observed network data. The multi-scale node similarity metric couples both the local and the global topology of the network with a resolution regulator. By varying this resolution regulator to give different weightings to the local and global terms in the metric, the ISIMB method is able to fit the shape of modules and to detect them on different scales. Experiments on protein-protein interaction and genetic interaction networks show that our method can not only mine functional modules and protein complexes successfully, but can also predict functional modules from specific to general and reveal the hierarchical organization of protein complexes.
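The abstract does not give the metric's closed form, but a toy version of a resolution-regulated mixture of local and global topological similarity, in the spirit described, might look like the following (an illustrative stand-in, not the published ISIMB metric):

    import numpy as np

    def multiscale_similarity(A, alpha):
        # A: binary adjacency matrix (n x n numpy array)
        # alpha: resolution regulator in [0, 1]; larger values weight
        #        local structure more heavily. Illustrative stand-in only.
        A = A.astype(float)
        n = A.shape[0]
        # Local term: Jaccard similarity of neighborhoods (common neighbors)
        inter = A @ A
        deg = A.sum(axis=1)
        union = deg[:, None] + deg[None, :] - inter
        local = np.divide(inter, union, out=np.zeros((n, n)), where=union > 0)
        # Global term: Katz-like diffusion series (I - bA)^-1 - I
        b = 0.5 / np.linalg.norm(A, 2)      # keep the series convergent
        glob = np.linalg.inv(np.eye(n) - b * A) - np.eye(n)
        glob /= glob.max() if glob.max() > 0 else 1.0
        return alpha * local + (1 - alpha) * glob

Varying alpha sweeps the metric from purely global (alpha = 0) to purely local (alpha = 1), which is the kind of dial that lets module detection operate on different scales.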
Fritz London and the scale of quantum mechanisms
NASA Astrophysics Data System (ADS)
Monaldi, Daniela
2017-11-01
Fritz London's seminal idea of "quantum mechanisms of macroscopic scale", first articulated in 1946, was the unanticipated result of two decades of research, during which London pursued quantum-mechanical explanations of various kinds of systems of particles at different scales. He started at the microphysical scale with the hydrogen molecule, generalized his approach to chemical bonds and intermolecular forces, then turned to macrophysical systems like superconductors and superfluid helium. Along this path, he formulated a set of concepts (the quantum mechanism of exchange, the rigidity of the wave function, the role of quantum statistics in multi-particle systems, the possibility of order in momentum space) that eventually coalesced into a new conception of systems of equal particles. In particular, it was London's clarification of Bose-Einstein condensation that enabled him to formulate the notion of superfluids, and led him to the recognition that quantum mechanics was not, as it was commonly assumed, relevant exclusively as a micromechanics.
Rotation, scale, and translation invariant pattern recognition using feature extraction
NASA Astrophysics Data System (ADS)
Prevost, Donald; Doucet, Michel; Bergeron, Alain; Veilleux, Luc; Chevrette, Paul C.; Gingras, Denis J.
1997-03-01
A rotation-, scale- and translation-invariant pattern recognition technique is proposed. It is based on Fourier-Mellin Descriptors (FMD). Each FMD is taken as an independent feature of the object, and a set of those features forms a signature. FMDs are naturally rotation invariant. Translation invariance is achieved through pre-processing. A proper normalization of the FMDs gives the scale invariance property. This approach offers the double advantage of providing invariant signatures of the objects and a dramatic reduction of the amount of data to process. The compressed invariant feature signature is next presented to a multi-layered perceptron neural network. This final step provides some robustness to the classification of the signatures, enabling good recognition behavior under anamorphically scaled distortion. We also present an original feature extraction technique, adapted to optical calculation of the FMDs. A prototype optical set-up was built, and experimental results are presented.
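A minimal numerical sketch of the classical Fourier-Mellin construction underlying such descriptors (the authors' optical implementation differs; this digital version is for illustration only):

    import numpy as np
    from scipy.ndimage import map_coordinates

    def fourier_mellin_signature(img, n_r=64, n_theta=64):
        # Translation invariance: the magnitude of the 2-D FFT discards phase.
        mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
        cy, cx = np.array(mag.shape) / 2.0
        # Log-polar resampling turns rotation and scaling of the input
        # into circular shifts along the theta and log-r axes.
        r = np.logspace(0, np.log10(min(cy, cx) - 1), n_r)
        t = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
        rr, tt = np.meshgrid(r, t, indexing="ij")
        coords = np.array([cy + rr * np.sin(tt), cx + rr * np.cos(tt)])
        logpolar = map_coordinates(mag, coords, order=1)
        # A second FFT magnitude removes those shifts, leaving a
        # rotation-, scale- and translation-invariant signature.
        sig = np.abs(np.fft.fft2(logpolar))
        return sig / np.linalg.norm(sig)

A flattened, truncated version of such a signature is the kind of compressed feature vector that can be fed to a multi-layered perceptron classifier, as the abstract describes.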
Sookhak Lari, Kaveh; Johnston, Colin D; Rayner, John L; Davis, Greg B
2018-03-05
Remediation of subsurface systems, including groundwater, soil and soil gas, contaminated with light non-aqueous phase liquids (LNAPLs) is challenging. Field-scale pilot trials of multi-phase remediation were undertaken at a site to determine the effectiveness of recovery options. Sequential LNAPL skimming and vacuum-enhanced skimming, with and without water table drawdown, were trialled over 78 days, in total extracting over 5 m³ of LNAPL. For the first time, a multi-component simulation framework (including the multi-phase multi-component code TMVOC-MP and processing codes) was developed and applied to simulate the broad range of multi-phase remediation and recovery methods used in the field trials. This framework was validated against the sequential pilot trials by comparing predicted and measured LNAPL mass removal rates and compositional changes. The framework was tested on both a Cray supercomputer and a cluster. Simulations mimicked trends in LNAPL recovery rates (from 0.14 to 3 mL/s) across all remediation techniques, each operating over periods of 4-14 days during the 78-day trial. The code also approximated order-of-magnitude compositional changes of hazardous chemical concentrations in extracted gas during vacuum-enhanced recovery. The verified framework enables longer-term prediction of the effectiveness of remediation approaches, allowing better determination of remediation endpoints and long-term risks. Copyright © 2017 Commonwealth Scientific and Industrial Research Organisation. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong
2017-11-01
Multi-scale modeling of localized groundwater flow problems in a large-scale aquifer has been extensively investigated in the context of the cost-benefit controversy. An alternative is to couple parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from deficiencies in the coupling methods, as well as from inadequacies in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme, chosen for its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at the parent scale is delivered downward onto the child boundary nodes by means of spatial and temporal head interpolation approaches. The efficiency of the coupled model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by an adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising for handling multi-scale groundwater flow problems with complex stresses and heterogeneity.
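A minimal sketch of the downward head transfer described above, with linear interpolation in time and bilinear interpolation in space (array names and shapes are illustrative assumptions):

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    def child_boundary_heads(parent_x, parent_y, parent_t, parent_h, bnd_xy, t_child):
        # parent_h: heads on the parent grid, shape (nt, ny, nx)
        # bnd_xy:   (n_bnd, 2) x/y coordinates of child-model boundary nodes
        # t_child:  child-model time levels (may be finer than parent_t)
        out = np.empty((len(t_child), len(bnd_xy)))
        for k, t in enumerate(t_child):
            # Temporal interpolation between bracketing parent time levels
            i1 = np.clip(np.searchsorted(parent_t, t), 1, len(parent_t) - 1)
            i0 = i1 - 1
            w = (t - parent_t[i0]) / (parent_t[i1] - parent_t[i0])
            h_t = (1 - w) * parent_h[i0] + w * parent_h[i1]
            # Spatial interpolation onto the child boundary nodes
            f = RegularGridInterpolator((parent_y, parent_x), h_t)
            out[k] = f(bnd_xy[:, ::-1])  # interpolator expects (y, x) order
        return out

The returned head series can then be imposed as a specified-head (Dirichlet) boundary condition on the child model at each of its time steps.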
NASA Astrophysics Data System (ADS)
Smith, R. C.; Collins, G. S.; Hill, J.; Piggott, M. D.; Mouradian, S. L.
2015-12-01
Numerical modelling informs risk assessment of tsunami generated by submarine slides; however, for large-scale slides modelling can be complex and computationally challenging. Many previous numerical studies have approximated slides as rigid blocks that moved according to prescribed motion. However, wave characteristics are strongly dependent on the motion of the slide and previous work has recommended that more accurate representation of slide dynamics is needed. We have used the finite-element, adaptive-mesh CFD model Fluidity, to perform multi-material simulations of deformable submarine slide-generated waves at real world scales for a 2D scenario in the Gulf of Mexico. Our high-resolution approach represents slide dynamics with good accuracy, compared to other numerical simulations of this scenario, but precludes tracking of wave propagation over large distances. To enable efficient modelling of further propagation of the waves, we investigate an approach to extract information about the slide evolution from our multi-material simulations in order to drive a single-layer wave propagation model, also using Fluidity, which is much less computationally expensive. The extracted submarine slide geometry and position as a function of time are parameterised using simple polynomial functions. The polynomial functions are used to inform a prescribed velocity boundary condition in a single-layer simulation, mimicking the effect the submarine slide motion has on the water column. The approach is verified by successful comparison of wave generation in the single-layer model with that recorded in the multi-material, multi-layer simulations. We then extend this approach to 3D for further validation of this methodology (using the Gulf of Mexico scenario proposed by Horrillo et al., 2013) and to consider the effect of lateral spreading. This methodology is then used to simulate a series of hypothetical submarine slide events in the Arctic Ocean (based on evidence of historic slides) and examine the hazard posed to the UK coast.
Multi-scale modelling of rubber-like materials and soft tissues: an appraisal
Puglisi, G.
2016-01-01
We survey, in a partial way, multi-scale approaches for the modelling of rubber-like and soft tissues and compare them with classical macroscopic phenomenological models. Our aim is to show how it is possible to obtain practical mathematical models for the mechanical behaviour of these materials incorporating mesoscopic (network scale) information. Multi-scale approaches are crucial for the theoretical comprehension and prediction of the complex mechanical response of these materials. Moreover, such models are fundamental in the perspective of the design, through manipulation at the micro- and nano-scales, of new polymeric and bioinspired materials with exceptional macroscopic properties. PMID:27118927
Brad C. Timm; Kevin McGarigal; Samuel A. Cushman; Joseph L. Ganey
2016-01-01
Efficacy of future habitat selection studies will benefit by taking a multi-scale approach. In addition to potentially providing increased explanatory power and predictive capacity, multi-scale habitat models enhance our understanding of the scales at which species respond to their environment, which is critical knowledge required to implement effective...
Herrgård, Markus; Sukumara, Sumesh; Campodonico, Miguel; Zhuang, Kai
2015-12-01
In recent years, bio-based chemicals have gained interest as a renewable alternative to petrochemicals. However, there is a significant need to assess the technological, biological, economic and environmental feasibility of bio-based chemicals, particularly during the early research phase. Recently, the Multi-scale framework for Sustainable Industrial Chemicals (MuSIC) was introduced to address this issue by integrating modelling approaches at different scales ranging from cellular to ecological scales. This framework can be further extended by incorporating modelling of the petrochemical value chain and the de novo prediction of metabolic pathways connecting existing host metabolism to desirable chemical products. This multi-scale, multi-disciplinary framework for quantitative assessment of bio-based chemicals will play a vital role in supporting engineering, strategy and policy decisions as we progress towards a sustainable chemical industry. © 2015 Authors; published by Portland Press Limited.
Shared worlds: multi-sited ethnography and nursing research.
Molloy, Luke; Walker, Kim; Lakeman, Richard
2017-03-22
Background Ethnography, originally developed for the study of supposedly small-scale societies, is now faced with an increasingly mobile, changing and globalised world. Cultural identities can exist without reference to a specific location and extend beyond regional and national boundaries. It is therefore no longer imperative that the sole object of the ethnographer's practice should be a geographically bounded site. Aim To present a critical methodological review of multi-sited ethnography. Discussion Understanding that it can no longer be taken with any certainty that location alone determines culture, multi-sited ethnography provides a method of contextualising multi-sited social phenomena. The method enables researchers to examine social phenomena that are simultaneously produced in different locations. It has been used to undertake cultural analysis of diverse areas such as organ trafficking, global organisations, technologies and anorexia. Conclusion The authors contend that multi-sited ethnography is particularly suited to nursing research as it provides researchers with an ethnographic method that is more relevant to the interconnected world of health and healthcare services. Implications for practice Multi-sited ethnography provides nurse researchers with an approach to cultural analysis in areas such as the social determinants of health, healthcare services and the effects of health policies across multiple locations.
Gamell, Marc; Teranishi, Keita; Mayo, Jackson; ...
2017-04-24
Obtaining multi-process hard-failure resilience at the application level is a key challenge that must be overcome before the promise of exascale can be fully realized. Previous work has shown that online global recovery can dramatically reduce the overhead of failures when compared to the more traditional approach of terminating the job and restarting it from the last stored checkpoint. If online recovery is performed in a local manner, further scalability is enabled, not only due to the intrinsically lower costs of recovering locally, but also due to derived effects when using some application types. In this paper we model one such effect, namely multiple failure masking, which manifests when running Stencil parallel computations in an environment where failures are recovered locally. First, the delay propagation shape of one or multiple failures recovered locally is modeled to enable several analyses of the probability of different levels of failure masking under certain Stencil application behaviors. These results indicate that failure masking is an extremely desirable effect at scale, whose manifestation becomes more evident and beneficial as the machine size or the failure rate increases.
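A toy Monte Carlo illustration of the masking idea, assuming a 1-D ring of ranks in which a locally recovered failure delays its rank and the delay spreads to neighbors one rank per step (an invented toy model, not the paper's analysis):

    import numpy as np

    def masking_probability(n_ranks=1024, n_steps=10000, rate=1e-6,
                            recovery=50, trials=200, seed=0):
        # A failure at (step s1, rank r1) creates a delay cone; a later
        # failure at (s2, r2) is counted as masked if it lands inside a
        # cone whose delay has not yet drained at rank r2.
        rng = np.random.default_rng(seed)
        masked = total = 0
        for _ in range(trials):
            n_fail = rng.poisson(rate * n_ranks * n_steps)
            ranks = rng.integers(0, n_ranks, n_fail)
            steps = rng.integers(0, n_steps, n_fail)
            order = np.argsort(steps)
            events = list(zip(steps[order], ranks[order]))
            for i, (s2, r2) in enumerate(events):
                for s1, r1 in events[:i]:
                    dist = min(abs(r2 - r1), n_ranks - abs(r2 - r1))
                    if s1 + dist <= s2 < s1 + dist + recovery:
                        masked += 1
                        break
            total += n_fail
        return masked / max(total, 1)

    print(masking_probability())

Raising the failure rate or the machine size in this toy increases the fraction of masked failures, mirroring the paper's qualitative conclusion.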
Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale
NASA Astrophysics Data System (ADS)
Kreibich, Heidi; Schröter, Kai; Merz, Bruno
2016-05-01
Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, even more so in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities, which were affected during the 2002 flood by the River Mulde in Saxony, Germany, by comparison to official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling on the meso-scale as well. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like the BT-FLEMO model used in this study, which inherently provide uncertainty information, is the way forward.
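To make the contrast concrete, the sketch below places a uni-variable stage-damage curve next to a multi-variable statistical loss model; the curve, the feature set and the data are synthetic illustrations (BT-FLEMO itself is a different, tree-based probabilistic model):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Uni-variable stage-damage function: loss ratio from water depth alone.
    def stage_damage(depth_m):
        # Illustrative square-root-type depth-damage curve, not an official one
        return np.clip(0.27 * np.sqrt(depth_m), 0.0, 1.0)

    # Multi-variable model: loss ratio from depth plus further damage drivers.
    # Columns: depth [m], duration [h], contamination (0/1), precaution score
    X = np.array([
        [0.5, 12, 0, 2],
        [1.2, 48, 1, 0],
        [2.0, 96, 1, 1],
        [0.3,  6, 0, 3],
    ])
    y = np.array([0.10, 0.45, 0.70, 0.05])  # synthetic observed loss ratios

    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    print(stage_damage(np.array([1.2])), model.predict([[1.2, 48, 1, 0]]))

Up-scaling then amounts to estimating each of these additional predictors area-wide for every land-use unit, which is exactly where the extra uncertainty discussed above enters.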
NASA Astrophysics Data System (ADS)
Ajadi, O. A.; Meyer, F. J.
2014-12-01
Automatic oil spill detection and tracking from Synthetic Aperture Radar (SAR) images is a difficult task, due in large part to the inhomogeneous properties of the sea surface, the high level of speckle inherent in SAR data, the complexity and highly non-Gaussian nature of the amplitude information, and the low temporal sampling that is often achieved with SAR systems. This research presents a promising new oil spill detection and tracking method that is based on time series of SAR images. Through the combination of a number of advanced image processing techniques, the developed approach is able to mitigate some of these previously mentioned limitations of SAR-based oil-spill detection and enables fully automatic spill detection and tracking across a wide range of spatial scales. The method combines an initial automatic texture analysis with a consecutive change detection approach based on multi-scale image decomposition. The first step of the approach, a texture transformation of the original SAR images, is performed in order to normalize the ocean background and enhance the contrast between oil-covered and oil-free ocean surfaces. The Lipschitz regularity (LR), a local texture parameter, is used here due to its proven ability to normalize the reflectivity properties of ocean water and maximize the visibility of oil in water. To calculate LR, the images are decomposed using the two-dimensional continuous wavelet transform (2D-CWT) and transformed into Hölder space to measure LR. After texture transformation, the now-normalized images are inserted into our multi-temporal change detection algorithm. The multi-temporal change detection approach is a two-step procedure including (1) data enhancement and filtering and (2) multi-scale automatic change detection. The performance of the developed approach is demonstrated by an application to oil spill areas in the Gulf of Mexico. In this example, areas affected by oil spills were identified from a series of ALOS PALSAR images acquired in 2010. The comparison showed exceptional performance of our method. This method can be applied to emergency management and decision support systems with a need for real-time data, and it shows great potential for rapid data analysis in other areas, including volcano detection, flood boundaries, forest health, and wildfires.
Sablah, Mawuli; Baker, Shawn K; Badham, Jane; De Zayas, Alfred
2013-11-01
The scaling up nutrition (SUN) policy framework requires extensive public–private partnership (PPP). Malnutrition is multi-dimensional and should engage multi-sectoral platforms. The SUN policy, however, did not fully embrace the dynamics of harnessing PPP. The objectives of the present paper are to highlight the reasons for the apprehension around PPP and to illustrate how effective coordination of PPP in West Africa has contributed to implementing large-scale food fortification with micronutrients as a complementary nutrition intervention. The experience of Helen Keller International (HKI) in scaling up food fortification is emphasised, with attention to the factors contributing to the international community's indifference to private-sector contributions to SUN. The roles of the different stakeholders in a PPP are elucidated and the process linked to who, why and how to engage. The private sector provides direct nutrition services, while the public sector creates the enabling environment for the private sector to thrive on social values. Through this approach, fortified vegetable oil and wheat flour are now reaching over 70% of the population in West Africa. As a neutral broker, HKI coordinated and facilitated dialogue among the different stakeholders. The core competencies of each stakeholder were harnessed and each partner was held accountable. The paper concludes that multi-sectoral relationships must be transparent, equitable and based on shared mutual interests. The rules and values of PPP offer opportunities for SUN.
Modularization of gradient-index optical design using wavefront matching enabled optimization.
Nagar, Jogender; Brocker, Donovan E; Campbell, Sawyer D; Easum, John A; Werner, Douglas H
2016-05-02
This paper proposes a new design paradigm which allows for a modular approach to replacing a homogeneous optical lens system with a higher-performance GRadient-INdex (GRIN) lens system using a WaveFront Matching (WFM) method. In multi-lens GRIN systems, a full-system-optimization approach can be challenging due to the large number of design variables. The proposed WFM design paradigm enables optimization of each component independently by explicitly matching the WaveFront Error (WFE) of the original homogeneous component at the exit pupil, resulting in an efficient design procedure for complex multi-lens systems.
Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri
2014-01-01
In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
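A skeleton of the orchestration pattern described above, combining multi-process execution with logging; the processing steps are trivial numerical stand-ins for the FARSIGHT C++ modules, and all names are illustrative:

    import logging
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    logging.basicConfig(filename="pipeline.log", level=logging.INFO,
                        format="%(asctime)s %(message)s")

    def process_channel(seed: int) -> int:
        # Load one channel (synthetic here), denoise, segment, report a count.
        logging.info("start channel %d", seed)
        rng = np.random.default_rng(seed)
        img = rng.random((512, 512, 64))           # stand-in for a 3-D channel
        smoothed = (img + np.roll(img, 1, 0)) / 2  # toy denoising step
        n_voxels = int((smoothed > 0.99).sum())    # toy 'segmentation' statistic
        logging.info("done channel %d: %d voxels", seed, n_voxels)
        return n_voxels

    if __name__ == "__main__":
        channels = range(5)  # e.g. the 5 fluorescent channels mentioned above
        with ProcessPoolExecutor(max_workers=5) as pool:
            for n in pool.map(process_channel, channels):
                print(n)

In the real pipeline each worker would invoke the compiled FARSIGHT modules on a mosaicked image tile, and the executor's worker count would match the server's core count.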
Modelling and Dosimetry for Alpha-Particle Therapy
Sgouros, George; Hobbs, Robert F.; Song, Hong
2015-01-01
As a consequence of the high potency and short range of alpha-particles, radiopharmaceutical therapy with alpha-particle emitting radionuclides is a promising treatment approach that is under active pre-clinical and clinical investigation. To understand and predict the biological effects of alpha-particle radiopharmaceuticals, dosimetry is required at the micro or multi-cellular scale level. At such a scale, highly non-uniform irradiation of the target volume may be expected and the utility of a single absorbed dose value to predict biological effects comes into question. It is not currently possible to measure the pharmacokinetic input required for micro scale dosimetry in humans. Accordingly, pre-clinical studies are required to provide the pharmacokinetic data for dosimetry calculations. The translation of animal data to the human requires a pharmacokinetic model that links macro- and micro-scale pharmacokinetics thereby enabling the extrapolation of micro-scale kinetics from macroscopic measurements. These considerations along with a discussion of the appropriate physical quantity and related units for alpha-particle radiopharmaceutical therapy are examined in this review. PMID:22201712
Multi-scale multi-point observation of dipolarization in the near-Earth's magnetotail
NASA Astrophysics Data System (ADS)
Nakamura, R.; Varsani, A.; Genestreti, K.; Nakamura, T.; Baumjohann, W.; Birn, J.; Le Contel, O.; Nagai, T.
2017-12-01
We report on the evolution of dipolarization in the near-Earth plasma sheet during two intense substorms, based on observations when the four spacecraft of the Magnetospheric Multiscale (MMS) mission, together with GOES and Geotail, were located in the near-Earth magnetotail. These multiple spacecraft, together with ground-based magnetograms, enabled us to obtain the location of the large-scale substorm current wedge (SCW) and the overall changes in the plasma sheet configuration. MMS was located in the southern hemisphere in the outer plasma sheet and observed fast flow disturbances associated with dipolarizations. The high-time-resolution measurements from MMS enable us to detect the rapid motion of the field structures and the flow disturbances separately and to resolve signatures below the ion scales. We found small-scale transient field-aligned current sheets associated with upward-streaming cold plasmas and Hall-current layers in the fast flow shear region. Observations of these current structures are compared with simulations of reconnection jets.
Calibration of a distributed hydrologic model for six European catchments using remote sensing data
NASA Astrophysics Data System (ADS)
Stisen, S.; Demirel, M. C.; Mendiguren González, G.; Kumar, R.; Rakovec, O.; Samaniego, L. E.
2017-12-01
While observed streamflow has been the single reference for most conventional hydrologic model calibration exercises, the availability of spatially distributed remote sensing observations provides new possibilities for multi-variable calibration assessing both the spatial and temporal variability of different hydrologic processes. In this study, we first identify the key transfer parameters of the mesoscale Hydrologic Model (mHM) controlling both the discharge and the spatial distribution of actual evapotranspiration (AET) across six central European catchments (Elbe, Main, Meuse, Moselle, Neckar and Vienne). These catchments are selected based on their limited topographical and climatic variability, which enables us to evaluate the effect of spatial parameterization on the simulated evapotranspiration patterns. We develop a European-scale remote-sensing-based actual evapotranspiration dataset at a 1 km grid scale, driven primarily by land surface temperature observations from MODIS using the TSEB approach. Using the observed AET maps, we analyze the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mHM model. This model allows calibrating one basin at a time or all basins together using its unique structure and multi-parameter regionalization approach. Results will indicate any tradeoffs between spatial-pattern and discharge simulation during model calibration and through validation against independent internal discharge locations. Moreover, the added value for internal water balances will be analyzed.
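One plausible way to combine the two calibration targets, discharge fit and AET spatial-pattern fit, into a single objective is sketched below; the weighting and the choice of pattern metric are assumptions for illustration, not necessarily the study's formulation:

    import numpy as np

    def kge(sim, obs):
        # Kling-Gupta efficiency for the discharge time series
        r = np.corrcoef(sim, obs)[0, 1]
        alpha = sim.std() / obs.std()
        beta = sim.mean() / obs.mean()
        return 1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

    def pattern_corr(sim_map, obs_map):
        # Spatial correlation between simulated and MODIS-derived AET maps
        s, o = sim_map.ravel(), obs_map.ravel()
        ok = np.isfinite(s) & np.isfinite(o)
        return np.corrcoef(s[ok], o[ok])[0, 1]

    def objective(q_sim, q_obs, aet_sim, aet_obs, w=0.5):
        # Weighted compromise to be minimized by the calibration algorithm
        return (w * (1 - kge(q_sim, q_obs))
                + (1 - w) * (1 - pattern_corr(aet_sim, aet_obs)))

Sweeping the weight w exposes exactly the kind of tradeoff between discharge performance and spatial-pattern performance that the study sets out to quantify.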
Brimblecombe, J; Bailie, R; van den Boogaard, C; Wood, B; Liberato, S C; Ferguson, M; Coveney, J; Jaenke, R; Ritchie, J
2017-12-01
Food insecurity underlies and compounds many of the development issues faced by remote Indigenous communities in Australia. Multi-sector approaches offer promise to improve food security. We assessed the feasibility of a novel multi-sector approach to enhance community food security in remote Indigenous Australia. A longitudinal comparative multi-site case study, the Good Food Systems Good Food for All Project, was conducted (2009-2013) with four Aboriginal communities. Continuous improvement meetings were held in each community. Data from project documents and store sales were used to assess feasibility according to engagement, uptake and sustainability of action, and impact on community diet, as well as identifying conditions facilitating or hindering these. Engagement was established where: the community perceived a need for the approach; where trust was developed between the community and facilitators; where there was community stability; and where flexibility was applied in the timing of meetings. The approach enabled stakeholders in each community to collectively appraise the community food system and plan action. Actions that could be directly implemented within available resources resulted from developing collaborative capacity. Actions requiring advocacy, multi-sectoral involvement, commitment or further resources were less frequently used. Positive shifts in community diet were associated with key areas where actions were implemented. A multi-sector participatory approach seeking continuous improvement engaged committed Aboriginal and non-Aboriginal stakeholders and was shown to have potential to shift community diet. Provision of clear mechanisms to link this approach with higher level policy and decision-making structures, clarity of roles and responsibilities, and processes to prioritise and communicate actions across sectors should further strengthen capacity for food security improvement. Integrating this approach enabling local decision-making into community governance structures with adequate resourcing is an imperative.
Towards large scale multi-target tracking
NASA Astrophysics Data System (ADS)
Vo, Ba-Ngu; Vo, Ba-Tuong; Reuter, Stephan; Lam, Quang; Dietmayer, Klaus
2014-06-01
Multi-target tracking is intrinsically an NP-hard problem, and the complexity of multi-target tracking solutions usually does not scale gracefully with problem size. Multi-target tracking for on-line applications involving a large number of targets is extremely challenging. This article demonstrates the capability of the random finite set approach to provide large-scale multi-target tracking algorithms. In particular, it is shown that an approximate filter known as the labeled multi-Bernoulli filter can simultaneously track one thousand five hundred targets in clutter on a standard laptop computer.
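The full random-finite-set machinery is involved, but the per-track bookkeeping of a (labeled) multi-Bernoulli filter is compact: each hypothesized track carries an existence probability r alongside a Kalman state, and the filter maintains many such labeled components. A minimal sketch of the standard prediction step and missed-detection update for one component (constant-velocity model and parameter values are illustrative, not from the paper):

```python
import numpy as np

dt, p_survive, p_detect = 1.0, 0.99, 0.9
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
Q = 0.1 * np.eye(2)                      # process noise covariance

def predict(r, m, P):
    """Bernoulli prediction: existence decays by survival prob., state by Kalman."""
    return p_survive * r, F @ m, F @ P @ F.T + Q

def update_missed(r, m, P):
    """Bayes update of the existence probability when no measurement is associated."""
    r_new = r * (1 - p_detect) / (1 - r * p_detect)
    return r_new, m, P                   # state unchanged without a detection

r, m, P = 0.8, np.array([0.0, 1.0]), np.eye(2)
r, m, P = predict(r, m, P)
r, m, P = update_missed(r, m, P)
print(f"existence probability after a missed scan: {r:.3f}")
```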
Non-Gaussian Multi-resolution Modeling of Magnetosphere-Ionosphere Coupling Processes
NASA Astrophysics Data System (ADS)
Fan, M.; Paul, D.; Lee, T. C. M.; Matsuo, T.
2016-12-01
The most dynamic coupling between the magnetosphere and ionosphere occurs in the Earth's polar atmosphere. Our objective is to model scale-dependent stochastic characteristics of high-latitude ionospheric electric fields that originate from solar wind magnetosphere-ionosphere interactions. The Earth's high-latitude ionospheric electric field exhibits considerable variability, with increasing non-Gaussian characteristics at decreasing spatio-temporal scales. Accurately representing the underlying stochastic physical process through random field modeling is crucial not only for scientific understanding of the energy, momentum and mass exchanges between the Earth's magnetosphere and ionosphere, but also for modern technological systems including telecommunication, navigation, positioning and satellite tracking. While considerable effort has been made to characterize the large-scale variability of the electric field in the context of Gaussian processes, no attempt has so far been made to model the small-scale non-Gaussian stochastic process observed in the high-latitude ionosphere. We construct a novel random field model using spherical needlets as building blocks. The double localization of spherical needlets in the spatial and frequency domains enables the model to capture the non-Gaussian and multi-resolution characteristics of the small-scale variability. The estimation procedure is computationally feasible thanks to the use of an adaptive Gibbs sampler. We apply the proposed methodology to computational simulation output from the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) magnetosphere model. Our non-Gaussian multi-resolution model characterizes significantly more energy associated with small-scale ionospheric electric field variability than Gaussian models. By accurately representing unaccounted-for additional energy and momentum sources to the Earth's upper atmosphere, our novel random field modeling approach will provide a viable remedy for the systematic biases of current numerical models that result from the underestimation of high-latitude energy and momentum sources.
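Needlet analysis amounts to band-passing the field's spherical harmonic coefficients with localized windows b(l/B^j). The sketch below illustrates such a decomposition with healpy on a synthetic Gaussian test field; the raised-cosine window, bandwidth parameter B, and power spectrum are placeholders, not the authors' construction.

```python
import numpy as np
import healpy as hp

nside, lmax, B = 64, 128, 2.0
cl = 1.0 / (np.arange(lmax + 1) + 1.0) ** 2     # toy red power spectrum
field = hp.synfast(cl, nside, lmax=lmax)         # synthetic Gaussian test field
alm = hp.map2alm(field, lmax=lmax)

def window(ell, j, B):
    """Crude raised-cosine stand-in for a needlet window b(l / B**j)."""
    x = ell / B ** j
    out = np.zeros_like(x)
    band = (x > 1.0 / B) & (x < B)               # support of scale-j window
    out[band] = np.cos(0.5 * np.pi * np.log(x[band]) / np.log(B)) ** 2
    return out

ells = np.arange(lmax + 1, dtype=float)
scales = [hp.alm2map(hp.almxfl(alm, window(ells, j, B)), nside)
          for j in range(1, 7)]                  # one map per needlet scale j
print([f"{m.std():.2e}" for m in scales])        # variability captured per scale
```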
Processing Approaches for DAS-Enabled Continuous Seismic Monitoring
NASA Astrophysics Data System (ADS)
Dou, S.; Wood, T.; Freifeld, B. M.; Robertson, M.; McDonald, S.; Pevzner, R.; Lindsey, N.; Gelvin, A.; Saari, S.; Morales, A.; Ekblaw, I.; Wagner, A. M.; Ulrich, C.; Daley, T. M.; Ajo Franklin, J. B.
2017-12-01
Distributed Acoustic Sensing (DAS) is creating a "field as laboratory" capability for seismic monitoring of subsurface changes. By providing unprecedented spatial and temporal sampling at a relatively low cost, DAS enables field-scale seismic monitoring to have durations and temporal resolutions that are comparable to those of laboratory experiments. Here we report on seismic processing approaches developed during data analyses of three case studies, all using DAS-enabled seismic monitoring, with applications ranging from shallow permafrost to deep reservoirs: (1) 10-hour downhole monitoring of cement curing at Otway, Australia; (2) 2-month surface monitoring of controlled permafrost thaw at Fairbanks, Alaska; (3) multi-month downhole and surface monitoring of carbon sequestration at Decatur, Illinois. We emphasize the data management and processing components relevant to DAS-based seismic monitoring, which include scalable approaches to data management, pre-processing, denoising, filtering, and wavefield decomposition. DAS has dramatically increased the data volume, to the extent that terabyte-per-day data loads are now typical, straining conventional approaches to data storage and processing. To achieve more efficient use of disk space and network bandwidth, we explore improved file structures and data compression schemes. Because the noise floor of DAS measurements is higher than that of conventional sensors, optimal processing workflows involving advanced denoising, deconvolution (of the source signatures), and stacking approaches are being established to maximize the signal content of DAS data. The resulting workflow of data management and processing could accelerate the broader adoption of DAS for continuous monitoring of critical processes.
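As a concrete illustration of the storage problem, lossy quantization followed by a general-purpose compressor is one of the simplest schemes one might explore; the sketch below (pure numpy/zlib, with made-up scaling, not the project's actual file format) shows the idea on a synthetic chunk.

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)
chunk = rng.standard_normal((1000, 512)).astype(np.float32)  # time x channel strain rate

scale = np.abs(chunk).max() / 32767.0           # per-chunk scale factor for int16
codes = np.round(chunk / scale).astype(np.int16)
packed = zlib.compress(codes.tobytes(), level=6)

ratio = chunk.nbytes / len(packed)
restored = codes.astype(np.float32) * scale
err = np.abs(restored - chunk).max()
print(f"compression ratio {ratio:.1f}x, max abs quantization error {err:.2e}")
# coherent field data compress far better than this white-noise worst case
```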
Control of Thermo-Acoustics Instabilities: The Multi-Scale Extended Kalman Approach
NASA Technical Reports Server (NTRS)
Le, Dzu K.; DeLaat, John C.; Chang, Clarence T.
2003-01-01
"Multi-Scale Extended Kalman" (MSEK) is a novel model-based control approach recently found to be effective for suppressing combustion instabilities in gas turbines. A control law formulated in this approach for fuel modulation demonstrated steady suppression of a high-frequency combustion instability (less than 500Hz) in a liquid-fuel combustion test rig under engine-realistic conditions. To make-up for severe transport-delays on control effect, the MSEK controller combines a wavelet -like Multi-Scale analysis and an Extended Kalman Observer to predict the thermo-acoustic states of combustion pressure perturbations. The commanded fuel modulation is composed of a damper action based on the predicted states, and a tones suppression action based on the Multi-Scale estimation of thermal excitations and other transient disturbances. The controller performs automatic adjustments of the gain and phase of these actions to minimize the Time-Scale Averaged Variances of the pressures inside the combustion zone and upstream of the injector. The successful demonstration of Active Combustion Control with this MSEK controller completed an important NASA milestone for the current research in advanced combustion technologies.
Multi-Scale Sizing of Lightweight Multifunctional Spacecraft Structural Components
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.
2005-01-01
This document is the final report for the project entitled "Multi-Scale Sizing of Lightweight Multifunctional Spacecraft Structural Components," funded under the NRA entitled "Cross-Enterprise Technology Development Program" issued by the NASA Office of Space Science in 2000. The project was funded in 2001 and spanned a four-year period from March 2001 to February 2005. Through enhancements to and synthesis of unique, state-of-the-art structural mechanics and micromechanics analysis software, a new multi-scale tool has been developed that enables design, analysis, and sizing of advanced lightweight composite and smart materials and structures from the full-vehicle, to the stiffened-structure, to the micro (fiber and matrix) scales. The new software tool has broad, cross-cutting value to current and future NASA missions that will rely on advanced composite and smart materials and structures.
Bayesian approach for three-dimensional aquifer characterization at the Hanford 300 Area
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murakami, Haruko; Chen, X.; Hahn, Melanie S.
2010-10-21
This study presents a stochastic, three-dimensional characterization of a heterogeneous hydraulic conductivity field within DOE's Hanford 300 Area site, Washington, by assimilating large-scale, constant-rate injection test data with small-scale, three-dimensional electromagnetic borehole flowmeter (EBF) measurement data. We first inverted the injection test data to estimate the transmissivity field, using zeroth-order temporal moments of pressure buildup curves. We applied a newly developed Bayesian geostatistical inversion framework, the method of anchored distributions (MAD), to obtain a joint posterior distribution of geostatistical parameters and local log-transmissivities at multiple locations. The unique aspects of MAD that make it suitable for this purpose are its ability to integrate multi-scale, multi-type data within a Bayesian framework and to compute a nonparametric posterior distribution. After we combined the distribution of transmissivities with depth-discrete relative-conductivity profiles from EBF data, we inferred the three-dimensional geostatistical parameters of the log-conductivity field using Bayesian model-based geostatistics. Such consistent use of the Bayesian approach throughout the procedure enabled us to systematically incorporate data uncertainty into the final posterior distribution. The method was tested in a synthetic study and validated using actual data that were not part of the estimation. Results showed broader and skewed posterior distributions of geostatistical parameters except for the mean, which suggests the importance of inferring the entire distribution to quantify the parameter uncertainty.
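The moment-based pre-processing step is simple enough to sketch: the zeroth temporal moment of the pressure buildup carries the transmissivity information, and the ratio of first to zeroth moments gives a characteristic response time. A minimal illustration on a synthetic buildup curve (numbers invented):

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal integration (avoids numpy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

t = np.linspace(0.0, 3600.0, 2000)                 # time since pumping started [s]
buildup = 2.5 * (1.0 - np.exp(-t / 600.0))         # synthetic pressure buildup [m]

m0 = trapz(buildup, t)                             # zeroth temporal moment
m1 = trapz(t * buildup, t)                         # first temporal moment
print(f"m0 = {m0:.3e} m*s, characteristic time m1/m0 = {m1 / m0:.1f} s")
```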
BactoGeNIE: A large-scale comparative genome visualization for big displays
Aurisano, Jillian; Reda, Khairi; Johnson, Andrew; ...
2015-08-13
The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. In conclusion, BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics.
NASA Astrophysics Data System (ADS)
El-Wardany, Tahany; Lynch, Mathew; Gu, Wenjiong; Hsu, Arthur; Klecka, Michael; Nardi, Aaron; Viens, Daniel
This paper proposes an optimization framework enabling the integration of multi-scale / multi-physics simulation codes to perform structural optimization design for additively manufactured components. Cold spray was selected as the additive manufacturing (AM) process and its constraints were identified and included in the optimization scheme. The developed framework first utilizes topology optimization to maximize stiffness for conceptual design. The subsequent step applies shape optimization to refine the design for stress-life fatigue. The component weight was reduced by 20% while stresses were reduced by 75% and the rigidity was improved by 37%. The framework and analysis codes were implemented using Altair software as well as an in-house loading code. The optimized design was subsequently produced by the cold spray process.
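The two-stage workflow (stiffness-driven topology optimization, then fatigue-driven shape refinement) rests on standard building blocks. For instance, the density update at the heart of SIMP-style topology optimization is the optimality-criteria step sketched below, in its generic textbook form rather than the Altair implementation used in the paper.

```python
import numpy as np

def oc_update(x, dc, dv, volfrac, move=0.2):
    """Optimality-criteria density update for SIMP topology optimization.

    x: element densities; dc: compliance sensitivities (negative);
    dv: volume sensitivities; volfrac: target volume fraction.
    """
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l2 + l1) > 1e-4:            # bisect the Lagrange multiplier
        lmid = 0.5 * (l1 + l2)
        xnew = np.clip(x * np.sqrt(np.maximum(-dc, 0.0) / (dv * lmid)),
                       np.maximum(0.0, x - move), np.minimum(1.0, x + move))
        if xnew.mean() > volfrac:
            l1 = lmid
        else:
            l2 = lmid
    return xnew

# toy call: 100 elements, uniform start at the 30% volume target
x = np.full(100, 0.3)
dc = -np.linspace(2.0, 0.1, 100)                    # fake compliance sensitivities
xnew = oc_update(x, dc, np.ones(100), volfrac=0.3)
print(f"new mean density: {xnew.mean():.3f}")
```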
Cross-scale interactions: Quantifying multi-scaled cause–effect relationships in macrosystems
Soranno, Patricia A.; Cheruvelil, Kendra S.; Bissell, Edward G.; Bremigan, Mary T.; Downing, John A.; Fergus, Carol E.; Filstrup, Christopher T.; Henry, Emily N.; Lottig, Noah R.; Stanley, Emily H.; Stow, Craig A.; Tan, Pang-Ning; Wagner, Tyler; Webster, Katherine E.
2014-01-01
Ecologists are increasingly discovering that ecological processes are made up of components that are multi-scaled in space and time. Some of the most complex of these processes are cross-scale interactions (CSIs), which occur when components interact across scales. When undetected, such interactions may cause errors in extrapolation from one region to another. CSIs, particularly those that include a regional scaled component, have not been systematically investigated or even reported because of the challenges of acquiring data at sufficiently broad spatial extents. We present an approach for quantifying CSIs and apply it to a case study investigating one such interaction, between local and regional scaled land-use drivers of lake phosphorus. Ultimately, our approach for investigating CSIs can serve as a basis for efforts to understand a wide variety of multi-scaled problems such as climate change, land-use/land-cover change, and invasive species.
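In statistical terms, a CSI of the kind studied here is often quantified as a cross-level interaction in a hierarchical (mixed-effects) model: a region-scale covariate modifies the slope of a local-scale driver. A minimal sketch with statsmodels on synthetic data (variable names and effect sizes are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_regions, lakes_per = 20, 40
region = np.repeat(np.arange(n_regions), lakes_per)
ag_regional = np.repeat(rng.uniform(0, 1, n_regions), lakes_per)  # regional land use
ag_local = rng.uniform(0, 1, region.size)                         # local land use
# cross-scale interaction: the local slope depends on the regional context
lake_tp = 0.5 + (0.8 + 1.5 * ag_regional) * ag_local + rng.normal(0, 0.2, region.size)

df = pd.DataFrame(dict(lake_tp=lake_tp, ag_local=ag_local,
                       ag_regional=ag_regional, region=region))
fit = smf.mixedlm("lake_tp ~ ag_local * ag_regional", df, groups=df["region"]).fit()
print(fit.params["ag_local:ag_regional"])   # recovers the interaction (~1.5)
```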
Using Evaluation Research as a Means for Policy Analysis in a "New" Mission-Oriented Policy Context
ERIC Educational Resources Information Center
Amanatidou, Effie; Cunningham, Paul; Gök, Abdullah; Garefi, Ioanna
2014-01-01
Grand challenges stress the importance of multi-disciplinary research, a multi-actor approach in examining the current state of affairs and exploring possible solutions, multi-level governance and policy coordination across geographical boundaries and policy areas, and a policy environment for enabling change both in science and technology and in…
A Multidisciplinary Approach to Assessing the Causal Components of Climate Change
NASA Astrophysics Data System (ADS)
Gosnold, W. D.; Todhunter, P. E.; Dong, X.; Rundquist, B.; Majorowicz, J.; Blackwell, D. D.
2004-05-01
Separation of climate forcing by anthropogenic greenhouse gases from natural radiative climate forcing is difficult because the composite temperature signal in the meteorological and multi-proxy temperature records cannot be resolved directly into radiative forcing components. To address this problem, we have initiated a large-scale, multidisciplinary project to test coherence between ground surface temperatures (GST) reconstructed from borehole T-z profiles, surface air temperatures (SAT), soil temperatures, and solar radiation. Our hypothesis is that radiative heating and heat exchange between the ground and the air directly control the ground surface temperature. Consequently, borehole T-z measurements at multi-year intervals spanning time periods when solar radiation and soil and air temperatures have been recorded should enable comparison of the thermal energy stored in the ground with these quantities. If coherence between energy storage, solar radiation, GST, SAT and multi-proxy temperature data can be discerned on a one- or two-decade scale, synthesis of GST and multi-proxy data over the past several centuries may enable us to separately determine the anthropogenic and natural forcings of climate change. The data we are acquiring include: (1) new T-z measurements in boreholes previously used in paleoclimate and heat flow research in Canada and the United States from the 1970s to the present; (2) meteorological data from the US Historical Climatology Network, the Automated Weather Data Network of the High Plains Regional Climate Center, and Environment Canada; and (3) direct and remotely sensed data on land use, environment, and soil properties at selected borehole and meteorological sites for the periods between borehole observations. The project addresses three related questions: What is the coherence between the GST, SAT, soil temperatures and solar radiation? Have microclimate changes at borehole sites and climate stations affected temperature trends? If good coherence is obtained, can the coherence between thermal energy stored in the ground and radiative forcing during the time between T-z measurements be extended several centuries into the past?
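The physics linking a GST history to a present-day T-z profile is one-dimensional heat conduction in a half-space: a step change ΔT at the surface t years ago contributes ΔT·erfc(z/(2√(κt))) at depth z. A minimal forward model along these lines (illustrative values):

```python
import numpy as np
from scipy.special import erfc

kappa = 1e-6 * 3.15e7             # thermal diffusivity: m^2/s -> ~31.5 m^2/yr
z = np.linspace(0.0, 500.0, 100)  # depth [m]

# hypothetical GST history: step changes [K] occurring t years before present
steps = [(0.5, 150.0), (0.4, 50.0)]
perturbation = sum(dT * erfc(z / (2.0 * np.sqrt(kappa * t))) for dT, t in steps)

background = 8.0 + 0.025 * z      # mean surface temperature + geothermal gradient
T_profile = background + perturbation
print(f"warming signal at 20 m depth: {perturbation[np.argmin(np.abs(z - 20))]:.2f} K")
```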
NASA Astrophysics Data System (ADS)
Brendryen, J.; Hannisdal, B.; Haaga, K. A.; Haflidason, H.; Castro, D. D.; Grasmo, K. J.; Sejrup, H. P.; Edwards, R. L.; Cheng, H.; Kelly, M. J.; Lu, Y.
2016-12-01
Abrupt millennial-scale climatic events known as Dansgaard-Oeschger events are a defining feature of Quaternary climate system dynamics in the North Atlantic and beyond. We present a high-resolution multi-proxy record of ocean-ice sheet interactions in the Norwegian Sea spanning the interval between 50 and 150 ka BP. A comparison with low-latitude records indicates a very close connection between high-northern-latitude ocean-ice sheet interactions and large-scale changes in low-latitude atmospheric circulation and hydrology, even on sub-millennial scales. The records are placed on a common, precise radiometric chronology based on correlations to U/Th-dated speleothem records from China and the Alps. This enables comparison of the records with orbital and other climatically important parameters, such as U/Th-dated sea-level data from corals and speleothems. We explore the drive-response relationships in these coupled systems with the information transfer (IT) and convergent cross mapping (CCM) analytical techniques. These methods employ conceptually different approaches to detect the relative strength and directionality of coupling in potentially chaotic and nonlinearly coupled systems. IT is a non-parametric measure of information transfer between data records based on transfer entropy, while CCM relies on delay reconstructions using Takens' theorem. This approach enables us to address how climate system processes interact and how this interaction is affected by external forcing from, for example, greenhouse gases and orbital variability.
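CCM tests whether the delay-embedded history of one record can locally reconstruct the other, with reconstruction skill read as evidence of coupling. A compact numpy sketch of the cross-mapping step (embedding parameters and the toy coupled pair are arbitrary choices, not the paper's settings):

```python
import numpy as np

def embed(x, E, tau):
    """Delay embedding: rows are [x_t, x_{t-tau}, ..., x_{t-(E-1)tau}]."""
    n = len(x) - (E - 1) * tau
    return np.column_stack([x[(E - 1 - i) * tau:(E - 1 - i) * tau + n]
                            for i in range(E)])

def ccm_skill(x, y, E=3, tau=1):
    """Correlation between y and its prediction cross-mapped from x's manifold."""
    Mx = embed(x, E, tau)
    y_al = y[(E - 1) * tau:]                          # align y with embedded rows
    preds = np.empty(len(Mx))
    for i, pt in enumerate(Mx):
        d = np.linalg.norm(Mx - pt, axis=1)
        d[i] = np.inf                                 # exclude self-match
        nn = np.argsort(d)[:E + 1]                    # E+1 nearest neighbours
        w = np.exp(-d[nn] / max(d[nn][0], 1e-12))     # simplex-style weights
        preds[i] = np.sum(w * y_al[nn]) / w.sum()
    return np.corrcoef(preds, y_al)[0, 1]

# toy unidirectionally coupled pair: x drives y with a short lag
rng = np.random.default_rng(2)
x = np.sin(np.linspace(0, 60, 1500)) + 0.1 * rng.standard_normal(1500)
y = np.roll(x, 3) + 0.1 * rng.standard_normal(1500)
print(f"cross-map skill (x manifold -> y): {ccm_skill(x, y):.2f}")
```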
Multiscale Modeling in the Clinic: Drug Design and Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clancy, Colleen E.; An, Gary; Cannon, William R.
A wide range of length and time scales is relevant to pharmacology, especially in drug development, drug design and drug delivery. Therefore, multi-scale computational modeling and simulation methods and paradigms that advance the linkage of phenomena occurring at these multiple scales have become increasingly important. Multi-scale approaches present in silico opportunities to advance laboratory research to bedside clinical applications in pharmaceuticals research. This is achievable through the capability of modeling to reveal phenomena occurring across multiple spatial and temporal scales, which are not otherwise readily accessible to experimentation. The resultant models, when validated, are capable of making testable predictions to guide drug design and delivery. In this review we describe the goals, methods, and opportunities of multi-scale modeling in drug design and development. We demonstrate the impact of multiple scales of modeling in this field. We indicate the common mathematical techniques employed for multi-scale modeling approaches used in pharmacology and present several examples illustrating the current state of the art regarding drug development for: Excitable Systems (Heart); Cancer (Metastasis and Differentiation); Cancer (Angiogenesis and Drug Targeting); Metabolic Disorders; and Inflammation and Sepsis. We conclude with a focus on barriers to successful clinical translation of drug development, drug design and drug delivery multi-scale models.
Grid-Enabled Quantitative Analysis of Breast Cancer
2009-10-01
large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer ... pilot study to utilize large-scale parallel Grid computing to harness the nationwide cluster infrastructure for optimization of medical image analysis parameters. Additionally, we investigated the use of cutting-edge data analysis/mining techniques as applied to Ultrasound, FFDM, and DCE-MRI Breast
Multi-paradigm simulation at nanoscale: Methodology and application to functional carbon material
NASA Astrophysics Data System (ADS)
Su, Haibin
2012-12-01
Multiparadigm methods that span the scales from quantum mechanics to the practical issues of functional nanoassembly and nanofabrication are enabling first-principles predictions to guide and complement experimental developments, by computationally designing and optimizing material compositions and structures to assemble nanoscale systems with the requisite properties. In this talk, we employ multi-paradigm approaches to investigate functional carbon materials of versatile character, including fullerene, carbon nanotube (CNT), graphene, and related hybrid structures, which have already created an enormous impact on next-generation nanodevices. The topics cover the reaction dynamics of C60 dimerization and the more challenging complex tubular fullerene formation process in peapod structures; the computational design of a new generation of peapod nano-oscillators; the predicted magnetic state in nanobuds; opto-electronic properties of graphene nanoribbons; and disorder/vibronic effects on transport in carbon-rich materials.
NASA Astrophysics Data System (ADS)
Yan, Wentao; Lin, Stephen; Kafka, Orion L.; Lian, Yanping; Yu, Cheng; Liu, Zeliang; Yan, Jinhui; Wolff, Sarah; Wu, Hao; Ndip-Agbor, Ebot; Mozaffar, Mojtaba; Ehmann, Kornel; Cao, Jian; Wagner, Gregory J.; Liu, Wing Kam
2018-05-01
Additive manufacturing (AM) possesses appealing potential for manipulating material compositions, structures and properties in end-use products with arbitrary shapes without the need for specialized tooling. Since the physical process is difficult to experimentally measure, numerical modeling is a powerful tool to understand the underlying physical mechanisms. This paper presents our latest work in this regard based on comprehensive material modeling of process-structure-property relationships for AM materials. The numerous influencing factors that emerge from the AM process motivate the need for novel rapid design and optimization approaches. For this, we propose data-mining as an effective solution. Such methods—used in the process-structure, structure-properties and the design phase that connects them—would allow for a design loop for AM processing and materials. We hope this article will provide a road map to enable AM fundamental understanding for the monitoring and advanced diagnostics of AM processing.
Contaminant source and release history identification in groundwater: A multi-step approach
NASA Astrophysics Data System (ADS)
Gzyl, G.; Zanini, A.; Frączek, R.; Kura, K.
2014-02-01
The paper presents a new multi-step approach aimed at source identification and release-history estimation. The new approach consists of three steps: performing integral pumping tests, identifying sources, and recovering the release history by means of a geostatistical approach. The present paper shows the results obtained from the application of the approach to a complex case study in Poland in which several areal sources were identified. The investigated site is situated in the vicinity of a former chemical plant in southern Poland, in the city of Jaworzno in the valley of the Wąwolnica River; the plant has been in operation since the First World War, producing various chemicals. From an environmental point of view the most relevant activity was the production of pesticides, especially lindane. The application of the multi-step approach enabled a significant increase in the knowledge of contamination at the site. Some suspected contamination sources have been shown to have only a minor effect on the overall contamination, while others have proven to be of key significance. Some areas not previously taken into consideration have now been identified as key sources. The method also enabled estimation of the magnitude of the sources, and a list of priority reclamation actions will be drawn up as a result. The multi-step approach has proven to be effective and may be applied to other complicated contamination cases. Moreover, the paper shows the capability of the geostatistical approach to manage a complex real case study.
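The geostatistical step recovers a release history s(t) from downstream concentration data z = Hs + v as a Bayesian linear inverse problem with a correlated prior on s. The sketch below is a stripped-down version (it omits the trend term of the full geostatistical approach, and the transfer matrix, covariance kernel and noise levels are invented):

```python
import numpy as np

n, m = 120, 40                                     # release steps, observations
t = np.linspace(0, 1, n)

# toy causal transfer matrix H: each observation smooths and delays the release
obs_t = np.linspace(0.3, 1.0, m)
H = np.exp(-((obs_t[:, None] - t[None, :]) / 0.05) ** 2) * (obs_t[:, None] >= t[None, :])
H /= H.sum(axis=1, keepdims=True)

s_true = np.exp(-((t - 0.4) / 0.08) ** 2)          # hidden release pulse
rng = np.random.default_rng(3)
z = H @ s_true + 0.01 * rng.standard_normal(m)

Q = np.exp(-np.abs(t[:, None] - t[None, :]) / 0.1)  # exponential prior covariance
R = 0.01 ** 2 * np.eye(m)                            # measurement-error covariance
s_hat = Q @ H.T @ np.linalg.solve(H @ Q @ H.T + R, z)  # posterior mean estimate
print(f"recovered peak at t = {t[np.argmax(s_hat)]:.2f} (true 0.40)")
```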
Finegan, Donal P; Scheel, Mario; Robinson, James B; Tjaden, Bernhard; Di Michiel, Marco; Hinds, Gareth; Brett, Dan J L; Shearing, Paul R
2016-11-16
Catastrophic failure of lithium-ion batteries occurs across multiple length scales and over very short time periods. A combination of high-speed operando tomography, thermal imaging and electrochemical measurements is used to probe the degradation mechanisms leading up to overcharge-induced thermal runaway of a LiCoO2 pouch cell, through its interrelated dynamic structural, thermal and electrical responses. Failure mechanisms across multiple length scales are explored using a post-mortem multi-scale tomography approach, revealing significant morphological and phase changes in the LiCoO2 electrode microstructure and location-dependent degradation. This combined operando and multi-scale X-ray computed tomography (CT) technique is demonstrated as a comprehensive approach to understanding battery degradation and failure.
NASA Astrophysics Data System (ADS)
Terzopoulos, Demetri; Qureshi, Faisal Z.
Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.
de Vargas Roditi, Laura; Claassen, Manfred
2015-08-01
Novel technological developments enable single cell population profiling with respect to their spatial and molecular setup. These include single cell sequencing, flow cytometry and multiparametric imaging approaches and open unprecedented possibilities to learn about the heterogeneity, dynamics and interplay of the different cell types which constitute tissues and multicellular organisms. Statistical and dynamic systems theory approaches have been applied to quantitatively describe a variety of cellular processes, such as transcription and cell signaling. Machine learning approaches have been developed to define cell types, their mutual relationships, and differentiation hierarchies shaping heterogeneous cell populations, yielding insights into topics such as, for example, immune cell differentiation and tumor cell type composition. This combination of experimental and computational advances has opened perspectives towards learning predictive multi-scale models of heterogeneous cell populations.
NASA Astrophysics Data System (ADS)
Sabeur, Zoheir; Chakravarthy, Ajay; Bashevoy, Maxim; Modafferi, Stefano
2013-04-01
The rapid increase in environmental observations conducted by small-to-medium-enterprise communities and volunteers using affordable in situ sensors at various scales, in addition to the more established observatories set up by environmental and space agencies using airborne and space-borne sensing technologies, is generating serious amounts of big data at ever increasing speeds. Furthermore, the emergence of Future Internet technologies and the urgent requirement to deploy specific enablers for the delivery of processed environmental knowledge to citizens in real time, with advanced situation awareness, has reached paramount importance. Specifically, it has become highly critical to build and provide services which automate the aggregation of data from various sources while surmounting the semantic gaps, conflicts and heterogeneity among the data sources. Early-stage aggregation enables the pre-processing of data from multiple sources while reconciling the temporal gaps in measurement time series and aligning their respective asynchronicities. This low-level type of data fusion needs to be automated and chained to higher-level data fusion services specializing in observation forecasts at spaces where sensing is not deployed, or at time slices where sensing has not yet taken place. As a result, multi-level fusion services are required among the families of specific enablers for monitoring environments and spaces in the Future Internet. These have been initially deployed and piloted in the ongoing ENVIROFI project of the FI-PPP programme [1]. Automated fusion and modelling of in situ and remote sensing data has been set up, and experimentation was successfully conducted using RBF networks for the spatial fusion of water quality parameter measurements from satellites and stationary buoys in the Irish Sea. The RBF network method scales for the spatial data fusion of multiple types of observation sources. This approach provides a strong basis for the delivery of environmental observations at desired spatial and temporal scales to multiple users with various needs in spatial and temporal resolution. It has also led to building robust Future Internet specific enablers for data fusion, which can be used in multiple usage areas above and beyond the environmental domains of the Future Internet. In this paper, data and processing workflow scenarios are described. The functionalities of the multi-level fusion services are demonstrated and made accessible to the wider communities of the Future Internet. [1] The Environmental Observation Web and its Service Applications within the Future Internet. ENVIROFI IP. FP7-2011-ICT-IF Pr.No: 284898 http://www.envirofi.eu/
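RBF-based spatial fusion of the kind piloted here can be prototyped in a few lines. The sketch below uses SciPy's RBFInterpolator as a stand-in for the RBF networks mentioned above, fusing scattered buoy readings into a gridded water-quality field; coordinates and values are synthetic.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(4)
buoys = rng.uniform(0, 100, size=(25, 2))          # scattered station coords [km]
chl = 5.0 + np.sin(buoys[:, 0] / 15.0) \
      + 0.1 * rng.standard_normal(25)              # e.g. chlorophyll readings

fuse = RBFInterpolator(buoys, chl, kernel="thin_plate_spline", smoothing=0.01)

gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = fuse(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
print(f"fused field range: {grid.min():.2f} to {grid.max():.2f}")
```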
Multi-fidelity methods for uncertainty quantification in transport problems
NASA Astrophysics Data System (ADS)
Tartakovsky, G.; Yang, X.; Tartakovsky, A. M.; Barajas-Solano, D. A.; Scheibe, T. D.; Dai, H.; Chen, X.
2016-12-01
We compare several multi-fidelity approaches for uncertainty quantification in flow and transport simulations that have a lower computational cost than the standard Monte Carlo method. The cost reduction is achieved by combining a small number of high-resolution (high-fidelity) simulations with a large number of low-resolution (low-fidelity) simulations. We propose a new method, a re-scaled Multi Level Monte Carlo (rMLMC) method. The rMLMC method is based on the idea that the statistics of quantities of interest depend on scale/resolution. We compare rMLMC with existing multi-fidelity methods such as Multi Level Monte Carlo (MLMC) and reduced-basis methods, and discuss the advantages of each approach.
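For orientation, standard MLMC, the baseline that the rMLMC variant re-scales, estimates E[Q] by a telescoping sum over resolution levels, spending many cheap coarse samples and few expensive fine ones. A toy sketch (the level-dependent model and sample counts are invented):

```python
import numpy as np

rng = np.random.default_rng(5)

def Q(u, level):
    """Toy quantity of interest: level-l approximation with a 2**-l error term."""
    return np.sin(u) + 2.0 ** (-level) * np.cos(3 * u)

levels = 5
n_per_level = [40000, 10000, 2500, 625, 160]       # geometric decay of sample counts
estimate = 0.0
for l in range(levels):
    u = rng.uniform(0, np.pi, n_per_level[l])
    if l == 0:
        estimate += Q(u, 0).mean()                 # coarsest level: plain MC
    else:
        estimate += (Q(u, l) - Q(u, l - 1)).mean() # correction term, shared samples
print(f"MLMC estimate: {estimate:.4f} (exact E[sin U] = {2 / np.pi:.4f})")
```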
Satellite Imagery Analysis for Automated Global Food Security Forecasting
NASA Astrophysics Data System (ADS)
Moody, D.; Brumby, S. P.; Chartrand, R.; Keisler, R.; Mathis, M.; Beneke, C. M.; Nicholaeff, D.; Skillman, S.; Warren, M. S.; Poehnelt, J.
2017-12-01
The recent computing performance revolution has driven improvements in sensor, communication, and storage technology. Multi-decadal remote sensing datasets at the petabyte scale are now available in commercial clouds, with new satellite constellations generating petabytes/year of daily high-resolution global coverage imagery. Cloud computing and storage, combined with recent advances in machine learning, are enabling understanding of the world at a scale and at a level of detail never before feasible. We present results from an ongoing effort to develop satellite imagery analysis tools that aggregate temporal, spatial, and spectral information and that can scale with the high-rate and dimensionality of imagery being collected. We focus on the problem of monitoring food crop productivity across the Middle East and North Africa, and show how an analysis-ready, multi-sensor data platform enables quick prototyping of satellite imagery analysis algorithms, from land use/land cover classification and natural resource mapping, to yearly and monthly vegetative health change trends at the structural field level.
Fusion of multi-source remote sensing data for agriculture monitoring tasks
NASA Astrophysics Data System (ADS)
Skakun, S.; Franch, B.; Vermote, E.; Roger, J. C.; Becker Reshef, I.; Justice, C. O.; Masek, J. G.; Murphy, E.
2016-12-01
Remote sensing data is an essential source of information enabling the monitoring and quantification of crop state at global and regional scales. Crop mapping, state assessment, area estimation and yield forecasting are the main tasks addressed within GEO-GLAM. The efficiency of agriculture monitoring can be improved when heterogeneous multi-source remote sensing datasets are integrated. Here, we present several case studies of utilizing MODIS, Landsat-8 and Sentinel-2 data along with meteorological data (growing degree days - GDD) for winter wheat yield forecasting, mapping and area estimation. Archived coarse spatial resolution data, such as MODIS, VIIRS and AVHRR, can provide daily global observations that, coupled with statistical data on crop yield, enable the development of empirical models for timely yield forecasting at the national level. With the availability of high-temporal and high-spatial-resolution Landsat-8 and Sentinel-2A imagery, coarse-resolution empirical yield models can be downscaled to provide yield estimates at regional and field scale. In particular, we present a case study of downscaling the MODIS CMG based generalized winter wheat yield forecasting model to high spatial resolution data sets, namely the harmonized Landsat-8 - Sentinel-2A surface reflectance product (HLS). Since the yield model requires corresponding in-season crop masks, we propose an automatic approach to extract winter crop maps from MODIS NDVI and MERRA-2 derived GDD using a Gaussian mixture model (GMM). Validation for the state of Kansas (US) and for Ukraine showed that the approach can yield accuracies > 90% without using reference (ground truth) data sets. Another application of the yearly derived winter crop maps is their use for stratification within area-frame sampling for crop area estimation. In particular, one can simulate the dependence of the error (coefficient of variation) on the number of samples and the strata size. This approach was used for estimating the area of winter crops in Ukraine for 2013-2016. The GMM-GDD approach is further extended to HLS data to provide automatic winter crop mapping at 30 m resolution for the crop yield model and area estimation. In case of persistent cloudiness, the addition of Sentinel-1A synthetic aperture radar (SAR) images is explored for automatic winter crop mapping.
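The unsupervised GMM step exploits the fact that winter crops are already green at low accumulated GDD, so they separate from spring crops in NDVI feature space, and a two-component mixture can label the winter-crop mode without ground truth. A sketch with scikit-learn on synthetic features (the feature definitions and cluster geometry are invented):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
# features per pixel: [early-season NDVI, NDVI at ~500 growing degree days]
winter = rng.normal([0.65, 0.70], 0.05, size=(500, 2))   # green early in the season
spring = rng.normal([0.25, 0.35], 0.05, size=(500, 2))   # still bare at low GDD
X = np.vstack([winter, spring])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)
winter_cluster = np.argmax(gmm.means_[:, 0])             # higher early-NDVI component
winter_mask = labels == winter_cluster
print(f"pixels flagged as winter crop: {winter_mask.mean():.1%}")
```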
Multi-scale modeling of multi-component reactive transport in geothermal aquifers
NASA Astrophysics Data System (ADS)
Nick, Hamidreza M.; Raoof, Amir; Wolf, Karl-Heinz; Bruhn, David
2014-05-01
In deep geothermal systems, heat and chemical stresses can cause physical alterations which may have a significant effect on flow and reaction rates; as a consequence, mineral precipitation and dissolution lead to changes in the permeability and porosity of the formations. Large-scale modeling of reactive transport in such systems is still challenging. A large area of uncertainty is the way in which the pore-scale information controlling flow and reaction behaves at a larger scale. A possible choice is to use constitutive relationships relating, for example, the permeability and porosity evolution to changes in the pore geometry. While determining such relationships through laboratory experiments may be of limited feasibility, pore-network modeling provides an alternative. In this work, we introduce a new workflow in which a hybrid finite-element finite-volume method [1,2] and a pore-network modeling approach [3] are employed. Using the pore-scale model, relevant constitutive relations are developed. These relations are then embedded in the continuum-scale model. This approach enables us to study non-isothermal reactive transport in porous media while accounting for micro-scale features under realistic conditions. The performance and applicability of the proposed model are explored for different flow and reaction regimes. References: 1. Matthäi, S.K., et al.: Simulation of solute transport through fractured rock: a higher-order accurate finite-element finite-volume method permitting large time steps. Transport in Porous Media 83.2 (2010): 289-318. 2. Nick, H.M., et al.: Reactive dispersive contaminant transport in coastal aquifers: numerical simulation of a reactive Henry problem. Journal of Contaminant Hydrology 145 (2012): 90-104. 3. Raoof, A., et al.: PoreFlow: a complex pore-network model for simulation of reactive transport in variably saturated porous media. Computers & Geosciences 61 (2013): 160-174.
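The continuum-scale closure supplied by such a workflow can be as simple as a permeability-porosity power law. The sketch below updates permeability with the classical Kozeny-Carman relation as a generic stand-in for a pore-network-derived constitutive relation (parameter values illustrative):

```python
import numpy as np

def kozeny_carman(k0, phi0, phi):
    """Permeability update k(phi) from the Kozeny-Carman relation."""
    return k0 * (phi / phi0) ** 3 * ((1 - phi0) / (1 - phi)) ** 2

k0, phi0 = 1e-13, 0.20           # initial permeability [m^2] and porosity
precip_rate = 0.002              # porosity lost per step to mineral precipitation

phi = phi0
for step in range(10):
    phi -= precip_rate           # each reaction step tightens the pore space
    k = kozeny_carman(k0, phi0, phi)
print(f"porosity {phi:.3f} -> permeability {k:.2e} m^2 ({k / k0:.2f} of initial)")
```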
Data fusion of multi-scale representations for structural damage detection
NASA Astrophysics Data System (ADS)
Guo, Tian; Xu, Zili
2018-01-01
Despite extensive research into structural health monitoring (SHM) in the past decades, few methods can detect multiple sites of slight damage in noisy environments. Here, we introduce a new hybrid method that utilizes multi-scale space theory and a data fusion approach for multiple damage detection in beams and plates. A cascade filtering approach provides a multi-scale space for noisy mode shapes and filters the fluctuations caused by measurement noise. In multi-scale space, a series of amplification and data fusion algorithms is used to search for damage features across all possible scales. We verify the effectiveness of the method by numerical simulation using damaged beams and plates with various types of boundary conditions. Monte Carlo simulations are conducted to illustrate the effectiveness and noise immunity of the proposed method. The applicability is further validated via laboratory case studies focusing on different damage scenarios. Both sets of results demonstrate that the proposed method has superior noise tolerance, as well as damage sensitivity, without requiring knowledge of material properties or boundary conditions.
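The scale-space idea reduces to a few lines: smooth the measured mode shape with Gaussian kernels of increasing width, compute a curvature-based damage index at each scale, and fuse across scales so that only features persisting over many scales survive the noise. A hedged sketch in which the damage index and fusion rule are simplified placeholders for the paper's algorithms:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(7)
x = np.linspace(0, 1, 400)
mode = np.sin(np.pi * x)                  # first bending mode of a simple beam
mode[120:126] -= 0.01                     # small local stiffness-loss signature
noisy = mode + 0.002 * rng.standard_normal(x.size)

scales, indices = [2, 4, 8], []
for s in scales:
    curv = np.gradient(np.gradient(gaussian_filter1d(noisy, sigma=s)))
    # local curvature irregularity: remove the smooth global trend
    anomaly = np.abs(curv - gaussian_filter1d(curv, sigma=6 * s))
    indices.append(anomaly / anomaly.max())

fused = np.prod(indices, axis=0)          # features must persist across scales
print(f"fused index peaks at x = {x[np.argmax(fused)]:.3f} (damage near x = 0.30)")
```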
USDA-ARS?s Scientific Manuscript database
The important questions about agriculture, climate, and sustainability have become increasingly complex and require a coordinated, multi-faceted approach for developing new knowledge and understanding. A multi-state, transdisciplinary project was begun in 2011 to study the potential for both mitigat...
Wu, Chi; Xie, Zuowei; Zhang, Guangzhao; Zi, Guofu; Tu, Yingfeng; Yang, Yali; Cai, Ping; Nie, Ting
2002-12-07
A combination of polymer physics and synthetic chemistry has enabled us to develop self-assembly assisted polymerization (SAAP), leading to the preparation of long multi-block copolymers with an ordered chain sequence and controllable block lengths.
Impact of materials engineering on edge placement error (Conference Presentation)
NASA Astrophysics Data System (ADS)
Freed, Regina; Mitra, Uday; Zhang, Ying
2017-04-01
Transistor scaling has transitioned from wavelength scaling to multi-patterning techniques, due to the resolution limits of immersion lithography. Deposition and etch have enabled scaling by means of SADP and SAQP (self-aligned double and quadruple patterning). Spacer-based patterning enables extremely small linewidths, sufficient for several future generations of transistors. However, aligning layers in the Z-direction, as well as aligning cut and via patterning layers, is becoming a roadblock due to global and local feature variation and fidelity. This presentation will highlight the impact of deposition and etch on this feature alignment (edge placement error, EPE) and illustrate potential paths toward lowering EPE using materials engineering.
Modeling Impact-induced Failure of Polysilicon MEMS: A Multi-scale Approach.
Mariani, Stefano; Ghisi, Aldo; Corigliano, Alberto; Zerbini, Sarah
2009-01-01
Failure of packaged polysilicon micro-electro-mechanical systems (MEMS) subjected to impacts involves phenomena occurring at several length-scales. In this paper we present a multi-scale finite element approach to properly allow for: (i) the propagation of stress waves inside the package; (ii) the dynamics of the whole MEMS; (iii) the spreading of micro-cracking in the failing part(s) of the sensor. Through Monte Carlo simulations, some effects of polysilicon micro-structure on the failure mode are elucidated.
NASA Astrophysics Data System (ADS)
Lin, Shian-Jiann; Harris, Lucas; Chen, Jan-Huey; Zhao, Ming
2014-05-01
A multi-scale High-Resolution Atmosphere Model (HiRAM) is being developed at NOAA/Geophysical Fluid Dynamics Laboratory. The model's dynamical framework is the non-hydrostatic extension of the vertically Lagrangian finite-volume dynamical core (Lin 2004, Monthly Wea. Rev.) constructed on a stretchable (via Schmidt transformation) cubed-sphere grid. Physical parametrizations originally designed for IPCC-type climate predictions are in the process of being modified and made more "scale-aware", in an effort to make the model suitable for multi-scale weather-climate applications, with horizontal resolution ranging from 1 km (near the target high-resolution region) to as coarse as 400 km (near the antipodal point). One of the main goals of this development is to enable the simulation of high-impact weather phenomena (such as tornadoes, thunderstorms, and category-5 hurricanes) within an IPCC-class climate modeling system, previously thought impossible. We will present preliminary results covering a very wide spectrum of temporal-spatial scales, ranging from simulations of tornado genesis (hours) and Madden-Julian Oscillations (intra-seasonal) to tropical cyclones (seasonal) and Quasi-Biennial Oscillations (intra-decadal), using the same global multi-scale modeling system.
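The Schmidt transformation behind the stretchable grid is a one-line conformal map of latitude toward a chosen target point (rotated to the transformed pole). Below is a sketch of the standard Schmidt (1977) latitude mapping for stretch factor c; the sign convention for which pole c > 1 refines toward depends on the implementation.

```python
import numpy as np

def schmidt_latitude(lat, c):
    """Conformal Schmidt stretching of latitude (radians) with stretch factor c."""
    s = np.sin(lat)
    d = (1.0 - c ** 2) / (1.0 + c ** 2)
    return np.arcsin((d + s) / (1.0 + d * s))

lat = np.deg2rad(np.array([-60.0, -30.0, 0.0, 30.0, 60.0]))
for c in (1.0, 2.5):                     # c = 1 is the identity map
    print(c, np.round(np.rad2deg(schmidt_latitude(lat, c)), 1))
```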
NASA Astrophysics Data System (ADS)
Rusu, Mirabela; Wang, Haibo; Golden, Thea; Gow, Andrew; Madabhushi, Anant
2013-03-01
Mouse lung models facilitate the investigation of conditions such as chronic inflammation which are associated with common lung diseases. The multi-scale manifestation of lung inflammation prompted us to use multi-scale imaging - both in vivo and ex vivo MRI along with ex vivo histology - for its study in a new quantitative way. Some imaging modalities, such as MRI, are non-invasive and capture macroscopic features of the pathology, while others, e.g. ex vivo histology, depict detailed structures. Registering such multi-modal data to the same spatial coordinates allows the construction of a comprehensive 3D model to enable the multi-scale study of diseases. Moreover, it may facilitate the identification and definition of quantitative in vivo imaging signatures for diseases and pathologic processes. We introduce a quantitative image-analytic framework to integrate in vivo MR images of the entire mouse with ex vivo histology of the lung alone, using ex vivo MRI of the lung as a conduit to facilitate their co-registration. In our framework, we first align the MR images by registering the in vivo and ex vivo MRI of the lung using an interactive rigid registration approach. Then we reconstruct the 3D volume of the ex vivo histological specimen by efficient groupwise registration of the 2D slices. The resulting 3D histologic volume is subsequently registered to the MRI volumes by interactive rigid registration - directly to the ex vivo MRI, and implicitly to the in vivo MRI. Qualitative evaluation of the registration framework was performed by comparing airway tree structures in ex vivo MRI and ex vivo histology, where airways are visible and may be annotated. We present a use case evaluating our co-registration framework in the context of studying chronic inflammation in a diseased mouse.
Asteroid Redirect Mission Concept: A Bold Approach for Utilizing Space Resources
NASA Technical Reports Server (NTRS)
Mazanek, Daniel D.; Merrill, Raymond G.; Brophy, John R.; Mueller, Robert P.
2014-01-01
The utilization of natural resources from asteroids is an idea that is older than the Space Age. The technologies are now available to transform this endeavour from an idea into reality. The Asteroid Redirect Mission (ARM) is a mission concept which includes the goal of robotically returning a small Near-Earth Asteroid (NEA) or a multi-ton boulder from a large NEA to cislunar space in the mid-2020s using an advanced Solar Electric Propulsion (SEP) vehicle and currently available technologies. The paradigm shift enabled by the ARM concept would allow in-situ resource utilization (ISRU) to be used at the human mission departure location (i.e., cislunar space) versus exclusively at the deep-space mission destination. This approach drastically reduces the barriers associated with utilizing ISRU for human deep-space missions. The successful testing of ISRU techniques and associated equipment could enable large-scale commercial ISRU operations to become a reality and enable a future space-based economy utilizing processed asteroidal materials. This paper provides an overview of the ARM concept and discusses the mission objectives, key technologies, and capabilities associated with the mission, as well as how the ARM and associated operations would benefit humanity's quest for the exploration and settlement of space.
Field Scale Optimization for Long-Term Sustainability of Best Management Practices in Watersheds
NASA Astrophysics Data System (ADS)
Samuels, A.; Babbar-Sebens, M.
2012-12-01
Agricultural and urban land use changes have led to disruption of natural hydrologic processes and impairment of streams and rivers. Multiple previous studies have evaluated Best Management Practices (BMPs) as means for restoring existing hydrologic conditions and reducing impairment of water resources. However, planning of these practices has relied on watershed-scale hydrologic models that identify locations and types of practices at scales much coarser than the actual field scale at which landowners have to plan, design and implement the practices. Field-scale hydrologic modeling provides a means for identifying relationships between BMP type, spatial location, and the interaction between BMPs at the finer farm/field scale that is usually more relevant to the decision maker (i.e. the landowner). This study focuses on the development of a simulation-optimization approach for field-scale planning of BMPs in the School Branch stream system of Eagle Creek Watershed, Indiana, USA. The Agricultural Policy Environmental Extender (APEX) tool is used as the field-scale hydrologic model, and a multi-objective optimization algorithm is used to search for optimal alternatives. Multiple climate scenarios downscaled to the watershed scale are used to test the long-term performance of these alternatives under extreme weather conditions. The effectiveness of these BMPs under multiple weather conditions is included within the simulation-optimization approach as a criterion/goal to assist landowners in identifying sustainable designs of practices. The results from these scenarios will further enable efficient BMP planning for current and future usage.
NASA Astrophysics Data System (ADS)
Shelestov, Andrii; Lavreniuk, Mykola; Kussul, Nataliia; Novikov, Alexei; Skakun, Sergii
2017-02-01
Many applied problems arising in agricultural monitoring and food security require reliable crop maps at national or global scale. Large-scale crop mapping requires the processing and management of large amounts of heterogeneous satellite imagery acquired by various sensors, which consequently leads to a “Big Data” problem. The main objective of this study is to explore the efficiency of using the Google Earth Engine (GEE) platform when classifying multi-temporal satellite imagery, with the potential to apply the platform at a larger scale (e.g. country level) and to multiple sensors (e.g. Landsat-8 and Sentinel-2). In particular, multiple state-of-the-art classifiers available in the GEE platform are compared to produce a high-resolution (30 m) crop classification map for a large territory (28,100 km2, including 1.0 M ha of cropland). Though this study does not involve large volumes of data, it does address the efficiency of the GEE platform in executing the complex workflows of satellite data processing required by large-scale applications such as crop mapping. The study discusses strengths and weaknesses of the classifiers, assesses the accuracies that can be achieved with different classifiers for the Ukrainian landscape, and compares them to a benchmark neural network classifier developed in our previous studies. The study is carried out for the Joint Experiment of Crop Assessment and Monitoring (JECAM) test site in Ukraine covering the Kyiv region (north of Ukraine) in 2013. We found that GEE provides very good performance in terms of enabling access to remote sensing products through the cloud platform and providing pre-processing; however, in terms of classification accuracy, the neural-network-based approach outperformed the support vector machine (SVM), decision tree and random forest classifiers available in GEE.
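The classifier comparison itself is easy to reproduce outside GEE. The sketch below benchmarks the same three classifier families with scikit-learn on synthetic multi-temporal pixel features; it illustrates the comparison protocol only, not the GEE runs or the authors' neural network.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# stand-in for per-pixel multi-temporal spectral features, 6 crop classes
X, y = make_classification(n_samples=3000, n_features=24, n_informative=12,
                           n_classes=6, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf", C=10.0)),
                  ("decision tree", DecisionTreeClassifier(max_depth=12)),
                  ("random forest", RandomForestClassifier(n_estimators=200))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()  # 5-fold overall accuracy
    print(f"{name:14s} overall accuracy: {acc:.3f}")
```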
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jablonowski, Christiane
The research investigates and advances strategies for bridging the scale discrepancies between local, regional and global phenomena in climate models without the prohibitive computational costs of global cloud-resolving simulations. In particular, the research explores new frontiers in computational geoscience by introducing high-order Adaptive Mesh Refinement (AMR) techniques into climate research. AMR and statically adapted variable-resolution approaches represent an emerging trend for atmospheric models and are likely to become the new norm in future-generation weather and climate models. The research advances the understanding of multi-scale interactions in the climate system and showcases a pathway to modeling these interactions effectively with advanced computational tools, like the Chombo AMR library developed at the Lawrence Berkeley National Laboratory. The research is interdisciplinary and combines applied mathematics, scientific computing and the atmospheric sciences. In this research project, a hierarchy of high-order atmospheric models on cubed-sphere computational grids has been developed that serves as an algorithmic prototype for the finite-volume solution-adaptive Chombo-AMR approach. The foci of the investigations have been the characteristics of both static mesh adaptations and dynamically adaptive grids that can capture flow fields of interest, like tropical cyclones. Six research themes were chosen. These are (1) the introduction of adaptive mesh refinement techniques into the climate sciences, (2) advanced algorithms for nonhydrostatic atmospheric dynamical cores, (3) an assessment of the interplay between resolved-scale dynamical motions and subgrid-scale physical parameterizations, (4) evaluation techniques for atmospheric model hierarchies, (5) the comparison of AMR refinement strategies and (6) tropical cyclone studies with a focus on multi-scale interactions and variable-resolution modeling. The results of this research project demonstrate significant advances in all six research areas. The major conclusions are that statically adaptive variable-resolution modeling is currently becoming mature in the climate sciences, and that AMR holds outstanding promise for future-generation weather and climate models on high-performance computing architectures.
A Decade-Long European-Scale Convection-Resolving Climate Simulation on GPUs
NASA Astrophysics Data System (ADS)
Leutwyler, D.; Fuhrer, O.; Ban, N.; Lapillonne, X.; Lüthi, D.; Schar, C.
2016-12-01
Convection-resolving models have proven to be very useful tools in numerical weather prediction and in climate research. However, due to their extremely demanding computational requirements, they have so far been limited to short simulations and/or small computational domains. Innovations in the supercomputing domain have led to new supercomputer designs that involve conventional multi-core CPUs and accelerators such as graphics processing units (GPUs). One of the first atmospheric models that has been fully ported to GPUs is the Consortium for Small-Scale Modeling weather and climate model COSMO. This new version allows us to expand the size of the simulation domain to areas spanning continents and the time period up to one decade. We present results from a decade-long, convection-resolving climate simulation over Europe using the GPU-enabled COSMO version on a computational domain with 1536x1536x60 gridpoints. The simulation is driven by the ERA-Interim reanalysis. The results illustrate how the approach allows for the representation of interactions between synoptic-scale and meso-scale atmospheric circulations at scales ranging from 1000 to 10 km. We discuss some of the advantages and prospects of using GPUs, and focus on the performance of the convection-resolving modeling approach on the European scale. Specifically, we investigate the organization of convective clouds and validate hourly rainfall distributions against various high-resolution data sets.
Mih, Nathan; Brunk, Elizabeth; Bordbar, Aarash; Palsson, Bernhard O.
2016-07-01
Progress in systems medicine brings promise to addressing patient heterogeneity and individualized therapies. Recently, genome-scale models of metabolism have been shown to provide insight into the mechanistic link between drug therapies and systems-level off-target effects while being expanded to explicitly include the three-dimensional structure of proteins. The integration of these molecular-level details, such as the physical, structural, and dynamical properties of proteins, notably expands the computational description of biochemical network-level properties and the possibility of understanding and predicting whole cell phenotypes. In this study, we present a multi-scale modeling framework that describes biological processes which range in scale from atomistic details to an entire metabolic network. Using this approach, we can understand how genetic variation, which impacts the structure and reactivity of a protein, influences both native and drug-induced metabolic states. As a proof-of-concept, we study three enzymes (catechol-O-methyltransferase, glucose-6-phosphate dehydrogenase, and glyceraldehyde-3-phosphate dehydrogenase) and their respective genetic variants which have clinically relevant associations. Using all-atom molecular dynamics simulations enables the sampling of long-timescale conformational dynamics of the proteins (and their mutant variants) in complex with their respective native metabolites or drug molecules. We find that changes in a protein's structure due to a mutation influence protein binding affinity to metabolites and/or drug molecules, and inflict large-scale changes in metabolism. PMID:27467583
Multi-Attribute Consensus Building Tool
ERIC Educational Resources Information Center
Shyyan, Vitaliy; Christensen, Laurene; Thurlow, Martha; Lazarus, Sheryl
2013-01-01
The Multi-Attribute Consensus Building (MACB) method is a quantitative approach for determining a group's opinion about the importance of each item (strategy, decision, recommendation, policy, priority, etc.) on a list (Vanderwood & Erickson, 1994). This process enables a small or large group of participants to generate and discuss a set…
75 FR 20085 - Subpart B-Advanced Biofuel Payment Program
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-16
... biofuels industry is very capital intensive, the Agency is proposing multi-year contracts to give advanced biofuel producers the assurance of a multi-year revenue stream. This approach is consistent with the goal of creating a stable industry. Finally, the Agency is proposing a two-tiered payment...
A multi-scale approach to designing therapeutics for tuberculosis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linderman, Jennifer J.; Cilfone, Nicholas A.; Pienaar, Elsje
2015-04-20
Approximately one third of the world's population is infected with Mycobacterium tuberculosis. Limited information about how the immune system fights M. tuberculosis and what constitutes protection from the bacteria impact our ability to develop effective therapies for tuberculosis. We present an in vivo systems biology approach that integrates data from multiple model systems and over multiple length and time scales into a comprehensive multi-scale and multi-compartment view of the in vivo immune response to M. tuberculosis. Lastly, we describe computational models that can be used to study (a) immunomodulation with the cytokines tumor necrosis factor and interleukin 10, (b) oral and inhaled antibiotics, and (c) the effect of vaccination.
Gas-Liquid Processing in Microchannels
DOE Office of Scientific and Technical Information (OSTI.GOV)
TeGrotenhuis, Ward E.; Stenkamp, Victoria S.; Twitchell, Alvin
Processing gases and liquids together in microchannels having at least one dimension <1 mm has unique advantages for rapid heat and mass transfer. One approach for managing the two phases is to use porous structures as wicks within microchannels to segregate the liquid phase from the gas phase. Gas-liquid processing is accomplished by providing a gas flow path and inducing flow of the liquid phase through or along the wick under an induced pressure gradient. A variety of unit operations are enabled, including phase separation, partial condensation, absorption, desorption, and distillation. Results are reported of an investigation of microchannel phase separation in a transparent, single-channel device. Next, heat exchange is integrated with the microchannel wick approach to create a partial condenser that also separates the condensate. Finally, the scale-up to a multi-channel phase separator is described.
NASA Astrophysics Data System (ADS)
West, Ruth G.; Margolis, Todd; Prudhomme, Andrew; Schulze, Jürgen P.; Mostafavi, Iman; Lewis, J. P.; Gossmann, Joachim; Singh, Rajvikram
2014-02-01
Scalable Metadata Environments (MDEs) are an artistic approach for designing immersive environments for large-scale data exploration in which users interact with data by forming multiscale patterns that they alternately disrupt and reform. Developed and prototyped as part of an art-science research collaboration, we define an MDE as a 4D virtual environment structured by quantitative and qualitative metadata describing multidimensional data collections. Entire data sets (e.g. tens of millions of records) can be visualized and sonified at multiple scales and at different levels of detail so they can be explored interactively in real-time within MDEs. They are designed to reflect similarities and differences in the underlying data or metadata such that patterns can be visually/aurally sorted in an exploratory fashion by an observer who is not familiar with the details of the mapping from data to visual, auditory or dynamic attributes. While many approaches for visual and auditory data mining exist, MDEs are distinct in that they utilize qualitative and quantitative data and metadata to construct multiple interrelated conceptual coordinate systems. These "regions" function as conceptual lattices for scalable auditory and visual representations within virtual environments computationally driven by multi-GPU CUDA-enabled fluid dynamics systems.
Bernstein, Jacob G.; Allen, Brian D.; Guerra, Alexander A.; Boyden, Edward S.
2016-01-01
Optogenetics enables light to be used to control the activity of genetically targeted cells in the living brain. Optical fibers can be used to deliver light to deep targets, and LEDs can be spatially arranged to enable patterned light delivery. In combination, arrays of LED-coupled optical fibers can enable patterned light delivery to deep targets in the brain. Here we describe the process flow for making LED arrays and LED-coupled optical fiber arrays, explaining key optical, electrical, thermal, and mechanical design principles to enable the manufacturing, assembly, and testing of such multi-site targetable optical devices. We also explore accessory strategies such as surgical automation approaches as well as innovations to enable low-noise concurrent electrophysiology. PMID:26798482
Wang, Bao-Zhen; Chen, Zhi
2013-01-01
This article presents a GIS-based multi-source and multi-box modeling approach (GMSMB) to predict the spatial concentration distributions of airborne pollutants on local and regional scales. In this method, an extended multi-box model combined with a multi-source and multi-grid Gaussian model is developed within the GIS framework to examine the contributions from both point- and area-source emissions. By using GIS, a large amount of data required for air quality modeling, including emission sources, air quality monitoring, meteorological data, and spatial location information, is brought into an integrated modeling environment. This allows finer details of the spatial variation in source distributions and meteorological conditions to be quantitatively analyzed. The developed modeling approach has been applied to predict the spatial concentration distributions of four air pollutants (CO, NO2, SO2, and PM2.5) for the State of California. The modeling results are compared with the monitoring data. Good agreement is obtained, demonstrating that the developed modeling approach can deliver an effective air pollution assessment on both regional and local scales to support air pollution control and management planning.
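The point-source building block in such an approach is the steady-state Gaussian plume kernel. The following is a minimal sketch of that kernel; the function and the Briggs rural class-D dispersion fits are illustrative assumptions, not the GMSMB implementation:

```python
import numpy as np

def gaussian_plume(Q, u, x, y, z, H):
    """Concentration downwind (x > 0, in m) of a continuous point source of
    strength Q (g/s) in a wind of speed u (m/s) with effective release
    height H (m). Dispersion widths use the Briggs rural class-D fits
    (an assumption; any stability parameterization could be substituted)."""
    sy = 0.08 * x / np.sqrt(1 + 0.0001 * x)   # horizontal spread (m)
    sz = 0.06 * x / np.sqrt(1 + 0.0015 * x)   # vertical spread (m)
    return (Q / (2 * np.pi * u * sy * sz)
            * np.exp(-y**2 / (2 * sy**2))
            * (np.exp(-(z - H)**2 / (2 * sz**2))
               + np.exp(-(z + H)**2 / (2 * sz**2))))  # ground-reflection term
```

Summing such a kernel over the point sources on a GIS grid, and adding the box-model contribution for area sources, yields a combined field of the kind the article describes.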
Plant trait detection with multi-scale spectrometry
NASA Astrophysics Data System (ADS)
Gamon, J. A.; Wang, R.
2017-12-01
Proximal and remote sensing using imaging spectrometry offers new opportunities for detecting plant traits, with benefits for phenotyping, productivity estimation, stress detection, and biodiversity studies. Using proximal and airborne spectrometry, we evaluated variation in plant optical properties at various spatial and spectral scales with the goal of identifying optimal scales for distinguishing plant traits related to photosynthetic function. Using directed approaches based on physiological vegetation indices, and statistical approaches based on spectral information content, we explored alternate ways of distinguishing plant traits with imaging spectrometry. With both leaf traits and canopy structure contributing to the signals, results exhibit a strong scale dependence. Our results demonstrate the benefits of multi-scale experimental approaches within a clear conceptual framework when applying remote sensing methods to plant trait detection for phenotyping, productivity, and biodiversity studies.
NASA Astrophysics Data System (ADS)
Lee, Chang-Chun; Huang, Pei-Chen
2018-05-01
The long-term reliability of multi-stacked coatings under bending or rolling loads is a severe challenge for extending the lifespan of such structures. In addition, the adhesive strength between dissimilar materials is regarded as the major mechanical reliability concern among multi-stacked films. However, the significant scale mismatch, from several nanometers to micrometers, among the multi-stacked coatings causes numerical accuracy and convergence issues in fracture-based simulation approaches. For these reasons, this study proposed an FEA-based multi-level submodeling and multi-point constraint (MPC) technique to overcome this scale-mismatch issue. The results indicated that a properly chosen region for first- and second-order submodeling can achieve a small error of 1.27% compared with the experimental result while significantly reducing the mesh density and computing time. Moreover, the MPC method adopted in the FEA simulation showed only a 0.54% error when the boundary of the selected local region was away from the critical region of concern, in accordance with the Saint-Venant principle. In this investigation, two FEA-based approaches were used to overcome the evident scale-mismatch issue when the adhesive strengths of micro- and nano-scale multi-stacked coatings were taken into account.
Mocanu, Decebal Constantin; Mocanu, Elena; Stone, Peter; Nguyen, Phuong H; Gibescu, Madeleine; Liotta, Antonio
2018-06-19
Through the success of deep learning in various domains, artificial neural networks are currently among the most used artificial intelligence methods. Taking inspiration from the network properties of biological neural networks (e.g. sparsity, scale-freeness), we argue that (contrary to general practice) artificial neural networks, too, should not have fully-connected layers. Here we propose sparse evolutionary training of artificial neural networks, an algorithm which evolves an initial sparse topology (Erdős-Rényi random graph) of two consecutive layers of neurons into a scale-free topology during learning. Our method replaces the fully-connected layers of artificial neural networks with sparse ones before training, quadratically reducing the number of parameters, with no decrease in accuracy. We demonstrate our claims on restricted Boltzmann machines, multi-layer perceptrons, and convolutional neural networks for unsupervised and supervised learning on 15 datasets. Our approach has the potential to enable artificial neural networks to scale up beyond what is currently possible.
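The prune-and-regrow cycle at the heart of such sparse evolutionary training can be sketched in a few lines. The Erdős-Rényi initialization below follows the scheme named in the abstract; the helper names and the rewiring fraction zeta are illustrative, and the sketch operates on one layer's weight mask only:

```python
import numpy as np

def erdos_renyi_mask(n_in, n_out, epsilon=11):
    """Sparse Erdos-Renyi initialization: each connection exists with
    probability epsilon*(n_in+n_out)/(n_in*n_out), so sparsity grows with
    layer size."""
    p = epsilon * (n_in + n_out) / (n_in * n_out)
    return np.random.rand(n_in, n_out) < p

def evolve_mask(weights, mask, zeta=0.3):
    """One evolutionary step: prune the zeta fraction of smallest-magnitude
    active weights, then regrow the same number of links at random
    currently-empty positions."""
    mag = np.where(mask, np.abs(weights), np.inf)   # inactive links sort last
    k = int(zeta * mask.sum())
    mask.flat[np.argsort(mag, axis=None)[:k]] = False        # prune weakest
    empty = np.flatnonzero(~mask.ravel())
    mask.flat[np.random.choice(empty, k, replace=False)] = True  # regrow
    return mask
```

In a full training loop, `evolve_mask` would run after each epoch, with regrown weights re-initialized to small random values.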
Biodiversity conservation in Swedish forests: ways forward for a 30-year-old multi-scaled approach.
Gustafsson, Lena; Perhans, Karin
2010-12-01
A multi-scaled model for biodiversity conservation in forests was introduced in Sweden 30 years ago, which makes it a pioneer example of an integrated ecosystem approach. Trees are set aside for biodiversity purposes at multiple scale levels varying from individual trees to areas of thousands of hectares, with landowner responsibility at the lowest level and with increasing state involvement at higher levels. Ecological theory supports the multi-scaled approach, and retention efforts at every harvest occasion stimulate landowners' interest in conservation. We argue that the model has large advantages but that in a future with intensified forestry and global warming, development based on more progressive thinking is necessary to maintain and increase biodiversity. Suggestions for the future include joint planning for several forest owners, consideration of cost-effectiveness, accepting opportunistic work models, adjusting retention levels to stand and landscape composition, introduction of temporary reserves, creation of "receiver habitats" for species escaping climate change, and protection of young forests.
NASA Astrophysics Data System (ADS)
Candy, Adam S.; Pietrzak, Julie D.
2018-01-01
The approaches taken to describe and develop spatial discretisations of the domains required for geophysical simulation models are commonly ad hoc, model- or application-specific, and under-documented. This is particularly acute for simulation models that are flexible in their use of multi-scale, anisotropic, fully unstructured meshes where a relatively large number of heterogeneous parameters are required to constrain their full description. As a consequence, it can be difficult to reproduce simulations, to ensure a provenance in model data handling and initialisation, and a challenge to conduct model intercomparisons rigorously. This paper takes a novel approach to spatial discretisation, considering it much like a numerical simulation model problem of its own. It introduces a generalised, extensible, self-documenting approach to describe carefully, and necessarily fully, the constraints over the heterogeneous parameter space that determine how a domain is spatially discretised. This additionally provides a method to accurately record these constraints, using high-level natural language based abstractions that enable full accounts of provenance, sharing, and distribution. Together with this description, a generalised consistent approach to unstructured mesh generation for geophysical models is developed that is automated, robust and repeatable, quick-to-draft, rigorously verified, and consistent with the source data throughout. This interprets the description above to execute a self-consistent spatial discretisation process, which is automatically validated to expected discrete characteristics and metrics. Library code, verification tests, and examples are available in the repository at https://github.com/shingleproject/Shingle. Further details of the project are presented at http://shingleproject.org.
Yan, Zheng; Zhang, Fan; Wang, Jiechen; Liu, Fei; Guo, Xuelin; Nan, Kewang; Lin, Qing; Gao, Mingye; Xiao, Dongqing; Shi, Yan; Qiu, Yitao; Luan, Haiwen; Kim, Jung Hwan; Wang, Yiqi; Luo, Hongying; Han, Mengdi; Huang, Yonggang; Zhang, Yihui; Rogers, John A
2016-04-25
Origami is a topic of rapidly growing interest in both the scientific and engineering research communities due to its promising potential in a broad range of applications. Previous assembly approaches of origami structures at the micro/nanoscale are constrained by the applicable classes of materials, topologies and/or capability of control over the transformation. Here, we introduce an approach that exploits controlled mechanical buckling for autonomic origami assembly of 3D structures across material classes from soft polymers to brittle inorganic semiconductors, and length scales from nanometers to centimeters. This approach relies on a spatial variation of thickness in the initial 2D structures as an effective strategy to produce engineered folding creases during the compressive buckling process. The elastic nature of the assembly scheme enables active, deterministic control over intermediate states in the 2D to 3D transformation in a continuous and reversible manner. Demonstrations include a broad set of 3D structures formed through unidirectional, bidirectional, and even hierarchical folding, with examples ranging from half cylindrical columns and fish scales, to cubic boxes, pyramids, starfish, paper fans, skew tooth structures, and to amusing system-level examples of soccer balls, model houses, cars, and multi-floor textured buildings.
Three-Component Reaction Discovery Enabled by Mass Spectrometry of Self-Assembled Monolayers
Montavon, Timothy J.; Li, Jing; Cabrera-Pardo, Jaime R.; Mrksich, Milan; Kozmin, Sergey A.
2011-01-01
Multi-component reactions have been extensively employed in many areas of organic chemistry. Despite significant progress, the discovery of such enabling transformations remains challenging. Here, we present the development of a parallel, label-free reaction-discovery platform, which can be used for identification of new multi-component transformations. Our approach is based on the parallel mass spectrometric screening of interfacial chemical reactions on arrays of self-assembled monolayers. This strategy enabled the identification of a simple organic phosphine that can catalyze a previously unknown condensation of siloxy alkynes, aldehydes and amines to produce 3-hydroxy amides with high efficiency and diastereoselectivity. The reaction was further optimized using solution phase methods. PMID:22169871
A study of the parallel algorithm for large-scale DC simulation of nonlinear systems
NASA Astrophysics Data System (ADS)
Cortés Udave, Diego Ernesto; Ogrodzki, Jan; Gutiérrez de Anda, Miguel Angel
Newton-Raphson DC analysis of large-scale nonlinear circuits may be an extremely time-consuming process, even if sparse matrix techniques and bypassing of nonlinear model calculation are used. A slight decrease in the time required for this task may be achieved on multi-core, multithreaded computers if the calculation of the mathematical models for the nonlinear elements, as well as the stamp management of the sparse matrix entries, is handled through concurrent processes. This numerical complexity can be further reduced via circuit decomposition and the parallel solution of blocks, taking the BBD matrix structure as a departure point. This block-parallel approach may give a considerable profit, though it is strongly dependent on the system topology and, of course, on the processor type. This contribution presents an easily-parallelizable decomposition-based algorithm for DC simulation and provides a detailed study of its effectiveness.
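For reference, the inner Newton-Raphson iteration being parallelized can be sketched as below. The toy one-node diode circuit, the dense solve, and the starting guess are illustrative stand-ins for a real simulator's sparse stamped BBD matrix and junction-voltage step limiting:

```python
import numpy as np

def dc_newton(f, J, v0, tol=1e-9, max_iter=100):
    """Newton-Raphson DC operating point: solve f(v) = 0 for node voltages,
    where f stacks the KCL residuals and J(v) is the circuit Jacobian."""
    v = v0.copy()
    for _ in range(max_iter):
        dv = np.linalg.solve(J(v), -f(v))
        v += dv
        if np.linalg.norm(dv, np.inf) < tol:
            return v
    raise RuntimeError("Newton iteration did not converge")

# Toy network: source Vs through resistor R into a diode to ground.
Vs, R, Is, Vt = 5.0, 1e3, 1e-14, 0.025
f = lambda v: np.array([(v[0] - Vs) / R + Is * (np.exp(v[0] / Vt) - 1)])
J = lambda v: np.array([[1 / R + (Is / Vt) * np.exp(v[0] / Vt)]])
# Start near the expected diode drop; real simulators limit the step instead.
print(dc_newton(f, J, np.array([0.7])))   # converges to about 0.67 V
```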
Flowthrough Reductive Catalytic Fractionation of Biomass
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Eric M.; Stone, Michael L.; Katahira, Rui
2017-11-01
Reductive catalytic fractionation (RCF) has emerged as a leading biomass fractionation and lignin valorization strategy. Here, flowthrough reactors were used to investigate RCF of poplar. Most RCF studies to date have been conducted in batch, but a flow-based process enables the acquisition of intrinsic kinetic and mechanistic data essential to accelerate the design, optimization, and scale-up of RCF processes. Time-resolved product distributions and yields obtained from experiments with different catalyst loadings were used to identify and deconvolute events during solvolysis and hydrogenolysis. Multi-bed RCF experiments provided unique insights into catalyst deactivation, showing that leaching, sintering, and surface poisoning are causes for decreased catalyst performance. The onset of catalyst deactivation resulted in higher concentrations of unsaturated lignin intermediates and increased occurrence of repolymerization reactions, producing high-molecular-weight species. Overall, this study demonstrates the concept of flowthrough RCF, which will be vital for realistic scale-up of this promising approach.
Bio-inspired Murray materials for mass transfer and activity
NASA Astrophysics Data System (ADS)
Zheng, Xianfeng; Shen, Guofang; Wang, Chao; Li, Yu; Dunphy, Darren; Hasan, Tawfique; Brinker, C. Jeffrey; Su, Bao-Lian
2017-04-01
Both plants and animals possess analogous tissues containing hierarchical networks of pores, with pore size ratios that have evolved to maximize mass transport and rates of reactions. The underlying physical principles of this optimized hierarchical design are embodied in Murray's law. However, we are yet to realize the benefit of mimicking nature's Murray networks in synthetic materials due to the challenges in fabricating vascularized structures. Here we emulate optimum natural systems following Murray's law using a bottom-up approach. Such bio-inspired materials, whose pore sizes decrease across multiple scales and finally terminate in size-invariant units like plant stems, leaf veins and vascular and respiratory systems provide hierarchical branching and precise diameter ratios for connecting multi-scale pores from macro to micro levels. Our Murray material mimics enable highly enhanced mass exchange and transfer in liquid-solid, gas-solid and electrochemical reactions and exhibit enhanced performance in photocatalysis, gas sensing and as Li-ion battery electrodes.
Multi-Level Adaptation in End-User Development of 3D Virtual Chemistry Experiments
ERIC Educational Resources Information Center
Liu, Chang; Zhong, Ying
2014-01-01
Multi-level adaptation in end-user development (EUD) is an effective way to enable non-technical end users such as educators to gradually introduce more functionality with increasing complexity to 3D virtual learning environments developed by themselves using EUD approaches. Parameterization, integration, and extension are three levels of…
Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Secchi, Simone; Tumeo, Antonino; Villa, Oreste
Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped on a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has been proved to be a promising approach to achieve good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.
Plis, Sergey M; Sarwate, Anand D; Wood, Dylan; Dieringer, Christopher; Landis, Drew; Reed, Cory; Panta, Sandeep R; Turner, Jessica A; Shoemaker, Jody M; Carter, Kim W; Thompson, Paul; Hutchison, Kent; Calhoun, Vince D
2016-01-01
The field of neuroimaging has embraced the need for sharing and collaboration. Data sharing mandates from public funding agencies and major journal publishers have spurred the development of data repositories and neuroinformatics consortia. However, efficient and effective data sharing still faces several hurdles. For example, open data sharing is on the rise but is not suitable for sensitive data that are not easily shared, such as genetics. Current approaches can be cumbersome (such as negotiating multiple data sharing agreements). There are also significant data transfer, organization and computational challenges. Centralized repositories only partially address the issues. We propose a dynamic, decentralized platform for large scale analyses called the Collaborative Informatics and Neuroimaging Suite Toolkit for Anonymous Computation (COINSTAC). The COINSTAC solution can include data missing from central repositories, allows pooling of both open and "closed" repositories by developing privacy-preserving versions of widely-used algorithms, and incorporates the tools within an easy-to-use platform enabling distributed computation. We present an initial prototype system which we demonstrate on two multi-site data sets, without aggregating the data. In addition, by iterating across sites, the COINSTAC model enables meta-analytic solutions to converge to "pooled-data" solutions (i.e., as if the entire data were in hand). More advanced approaches such as feature generation, matrix factorization models, and preprocessing can be incorporated into such a model. In sum, COINSTAC enables access to the many currently unavailable data sets, a user friendly privacy enabled interface for decentralized analysis, and a powerful solution that complements existing data sharing solutions. PMID:27594820
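The decentralized pattern described here can be illustrated with a toy: each site computes a gradient on its local data only, and a coordinator averages those gradients, so iterated exchange converges toward the pooled-data solution without raw data leaving any site. This sketch is illustrative only; ridge regression stands in for COINSTAC's privacy-preserving algorithm suite, and the names are assumptions:

```python
import numpy as np

def decentralized_ridge(sites, lam=1.0, lr=0.01, iters=500):
    """Toy decentralized regression: `sites` is a list of local (X, y) pairs
    that never leave their owners; only per-site gradients (aggregate
    statistics) travel to the coordinator each iteration."""
    d = sites[0][0].shape[1]
    w = np.zeros(d)
    for _ in range(iters):
        grads = [X.T @ (X @ w - y) / len(y) + lam * w for X, y in sites]
        w -= lr * np.mean(grads, axis=0)   # coordinator averages, broadcasts
    return w
```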
Huang, Chun; Zhang, Jin; Young, Neil P; Snaith, Henry J; Grant, Patrick S
2016-05-10
Supercapacitors are in demand for short-term electrical charge and discharge applications. Unlike conventional supercapacitors, solid-state versions have no liquid electrolyte and do not require robust, rigid packaging for containment. Consequently they can be thinner, lighter and more flexible. However, solid-state supercapacitors suffer from lower power density and where new materials have been developed to improve performance, there remains a gap between promising laboratory results that usually require nano-structured materials and fine-scale processing approaches, and current manufacturing technology that operates at large scale. We demonstrate a new, scalable capability to produce discrete, multi-layered electrodes with a different material and/or morphology in each layer, and where each layer plays a different, critical role in enhancing the dynamics of charge/discharge. This layered structure allows efficient utilisation of each material and enables conservative use of hard-to-obtain materials. The layered electrode shows amongst the highest combinations of energy and power densities for solid-state supercapacitors. Our functional design and spray manufacturing approach to heterogeneous electrodes provide a new way forward for improved energy storage devices.
Heider, Katharina; Lopez, Juan Miguel Rodriguez; Scheffran, Jürgen
2018-03-14
Due to the availability of Web 2.0 technologies, volunteered geographic information (VGI) is on the rise. This new type of data is available on many topics and on different scales. Thus, it has become interesting for research. This article deals with the collective potential of VGI and remote sensing to detect peri-urbanization in the conservation zone of Mexico City. On the one hand, remote sensing identifies horizontal urban expansion, and on the other hand, VGI of ecological complaints provides data about informal settlements. This enables the combination of top-down approaches (remote sensing) and bottom-up approaches (ecological complaints). Within the analysis, we identify areas of high urbanization as well as complaint densities and bring them together in a multi-scale analysis using Geographic Information Systems (GIS). Furthermore, we investigate the influence of settlement patterns and main roads on the peri-urbanization process in Mexico City using OpenStreetMap. Peri-urbanization is detected especially in the transition zone between the urban and rural (conservation) area and near main roads as well as settlements.
Construction of multi-scale consistent brain networks: methods and applications.
Ge, Bao; Tian, Yin; Hu, Xintao; Chen, Hanbo; Zhu, Dajiang; Zhang, Tuo; Han, Junwei; Guo, Lei; Liu, Tianming
2015-01-01
Mapping human brain networks provides a basis for studying brain function and dysfunction, and thus has gained significant interest in recent years. However, modeling human brain networks still faces several challenges including constructing networks at multiple spatial scales and finding common corresponding networks across individuals. As a consequence, many previous methods were designed for a single resolution or scale of brain network, though the brain networks are multi-scale in nature. To address this problem, this paper presents a novel approach to constructing multi-scale common structural brain networks from DTI data via an improved multi-scale spectral clustering applied on our recently developed and validated DICCCOLs (Dense Individualized and Common Connectivity-based Cortical Landmarks). Since the DICCCOL landmarks possess intrinsic structural correspondences across individuals and populations, we employed the multi-scale spectral clustering algorithm to group the DICCCOL landmarks and their connections into sub-networks, meanwhile preserving the intrinsically-established correspondences across multiple scales. Experimental results demonstrated that the proposed method can generate multi-scale consistent and common structural brain networks across subjects, and its reproducibility has been verified by multiple independent datasets. As an application, these multi-scale networks were used to guide the clustering of multi-scale fiber bundles and to compare the fiber integrity in schizophrenia and healthy controls. In general, our methods offer a novel and effective framework for brain network modeling and tract-based analysis of DTI data.
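A bare-bones sketch of the multi-scale grouping step follows, using scikit-learn's spectral clustering on a precomputed landmark affinity matrix. The cross-scale and cross-subject consistency constraints of the paper's improved algorithm are omitted, and the random affinity in the usage example is purely illustrative:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def multiscale_networks(affinity, scales=(4, 8, 16)):
    """Group landmark nodes into sub-networks at several scales by spectral
    clustering of a symmetric, non-negative connectivity affinity matrix."""
    labels = {}
    for k in scales:
        sc = SpectralClustering(n_clusters=k, affinity="precomputed",
                                random_state=0)
        labels[k] = sc.fit_predict(affinity)
    return labels

# Example: a random symmetric affinity over 358 DICCCOL-like landmarks.
A = np.random.rand(358, 358)
A = (A + A.T) / 2
np.fill_diagonal(A, 1.0)
nets = multiscale_networks(A)   # dict: scale -> cluster label per landmark
```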
NASA Astrophysics Data System (ADS)
Senarathna, Janaka; Hadjiabadi, Darian; Gil, Stacy; Thakor, Nitish V.; Pathak, Arvind P.
2017-02-01
Different brain regions exhibit complex information processing even at rest. Therefore, assessing temporal correlations between regions permits task-free visualization of their `resting state connectivity'. Although functional MRI (fMRI) is widely used for mapping resting state connectivity in the human brain, it is not well suited for `microvascular scale' imaging in rodents because of its limited spatial resolution. Moreover, co-registered cerebral blood flow (CBF) and total hemoglobin (HbT) data are often unavailable in conventional fMRI experiments. Therefore, we built a customized system that combines laser speckle contrast imaging (LSCI), intrinsic optical signal (IOS) imaging and fluorescence imaging (FI) to generate multi-contrast functional connectivity maps at a spatial resolution of 10 μm. This system consisted of three illumination sources: a 632 nm HeNe laser (for LSCI), a 570 nm ± 5 nm filtered white light source (for IOS), and a 473 nm blue laser (for FI), as well as a sensitive CCD camera operating at 10 frames per second for image acquisition. The acquired data enabled visualization of changes in resting state neurophysiology at microvascular spatial scales. Moreover, concurrent mapping of CBF and HbT-based temporal correlations enabled in vivo mapping of how resting brain regions were linked in terms of their hemodynamics. Additionally, we complemented this approach by exploiting the transit times of a fluorescent tracer (Dextran-FITC) to distinguish arterial from venous perfusion. Overall, we demonstrated the feasibility of wide-area mapping of resting state connectivity at microvascular resolution and created a new toolbox for interrogating neurovascular function.
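At their core, such resting-state maps reduce to seed-based temporal correlation of an image time series, the same computation for LSCI, IOS (HbT), or fluorescence stacks. A minimal sketch, with illustrative function names:

```python
import numpy as np

def connectivity_map(frames, seed_mask):
    """Seed-based resting-state connectivity: Pearson correlation of every
    pixel's time course against the mean time course of a seed region.
    `frames` has shape (T, H, W); `seed_mask` is a boolean (H, W) array."""
    T = frames.shape[0]
    pix = frames.reshape(T, -1)
    seed = pix[:, seed_mask.ravel()].mean(axis=1)
    pz = (pix - pix.mean(0)) / (pix.std(0) + 1e-12)   # z-score per pixel
    sz = (seed - seed.mean()) / (seed.std() + 1e-12)  # z-score of seed
    r = (pz * sz[:, None]).mean(axis=0)               # Pearson r per pixel
    return r.reshape(frames.shape[1:])
```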
Chen, Wen Hao; Yang, Sam Y. S.; Xiao, Ti Qiao; Mayo, Sherry C.; Wang, Yu Dan; Wang, Hai Peng
2014-01-01
Quantifying three-dimensional spatial distributions of pores and material compositions in samples is a key materials characterization challenge, particularly in samples where compositions are distributed across a range of length scales, and where such compositions have similar X-ray absorption properties, such as in coal. Consequently, obtaining detailed information within sub-regions of a multi-length-scale sample by conventional approaches may not provide the resolution and level of detail one might desire. Herein, an approach for quantitative high-definition determination of material compositions from X-ray local computed tomography combined with a data-constrained modelling method is proposed. The approach is capable of dramatically improving the spatial resolution, revealing significantly finer details within a region of interest of a sample larger than the field of view than conventional techniques can. A coal sample containing distributions of porosity and several mineral compositions is employed to demonstrate the approach. The optimal experimental parameters are pre-analyzed. The quantitative results demonstrated that the approach can reveal significantly finer details of compositional distributions in the sample region of interest. The elevated spatial resolution is crucial for coal-bed methane reservoir evaluation and understanding the transformation of the minerals during coal processing. The method is generic and can be applied for three-dimensional compositional characterization of other materials. PMID:24763649
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, William D; Johansen, Hans; Evans, Katherine J
We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that these approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.
Mazari-Hiriart, Marisa; Cruz-Bello, Gustavo; Bojórquez-Tapia, Luis A; Juárez-Marusich, Lourdes; Alcantar-López, Georgina; Marín, Luis E; Soto-Galera, Ernesto
2006-03-01
This study was based on a groundwater vulnerability assessment approach implemented for the Mexico City Metropolitan Area (MCMA). The approach is based on a fuzzy multi-criteria procedure integrated in a geographic information system. The approach combined the potential contaminant sources with the permeability of geological materials. Initially, contaminant sources were ranked by experts through the Analytic Hierarchy Process. An aggregated contaminant sources map layer was obtained through the simple additive weighting method, using a scalar multiplication of criteria weights and binary maps showing the location of each source. A permeability map layer was obtained through the reclassification of a geology map using the respective hydraulic conductivity values, followed by a linear normalization of these values against a compatible scale. A fuzzy logic procedure was then applied to transform and combine the two map layers, resulting in a groundwater vulnerability map layer of five classes: very low, low, moderate, high, and very high. Results provided a more coherent assessment of the policy-making priorities considered when discussing the vulnerability of groundwater to organic compounds. The very high and high vulnerability areas covered a relatively small area (71 km² or 1.5% of the total study area), allowing the identification of the more critical locations. The advantage of a fuzzy logic procedure is that it enables the best possible use to be made of the information available regarding groundwater vulnerability in the MCMA.
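A compact sketch of the described map-algebra chain follows. The fuzzy gamma overlay is one common fuzzy-GIS operator and an assumption here; the paper's exact combination rule, and the weights themselves, come from the AHP expert elicitation:

```python
import numpy as np

def vulnerability(source_layers, weights, permeability, gamma=0.9):
    """Sketch: simple additive weighting of binary contaminant-source layers,
    min-max normalization of hydraulic conductivity, a fuzzy gamma overlay
    of the two normalized layers, and quantile slicing into five classes
    (0 = very low ... 4 = very high). All arrays share one grid shape."""
    sources = sum(w * L for w, L in zip(weights, source_layers))
    sources = (sources - sources.min()) / (sources.ptp() or 1)
    perm = (permeability - permeability.min()) / (permeability.ptp() or 1)
    fuzzy_sum = 1 - (1 - sources) * (1 - perm)     # fuzzy algebraic sum
    fuzzy_prod = sources * perm                    # fuzzy algebraic product
    v = fuzzy_sum**gamma * fuzzy_prod**(1 - gamma) # gamma operator
    return np.digitize(v, np.quantile(v, [0.2, 0.4, 0.6, 0.8]))
```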
Multi-Scale Modeling of Liquid Phase Sintering Affected by Gravity: Preliminary Analysis
NASA Technical Reports Server (NTRS)
Olevsky, Eugene; German, Randall M.
2012-01-01
A multi-scale simulation concept taking into account the impact of gravity on liquid phase sintering is described. The gravity influence can be included at both the micro- and macro-scales. At the micro-scale, the diffusion mass-transport is directionally modified in the framework of kinetic Monte-Carlo simulations to include the impact of gravity. The micro-scale simulations can provide the values of the constitutive parameters for macroscopic sintering simulations. At the macro-scale, we are attempting to embed a continuum model of sintering into a finite-element framework that includes the gravity forces and substrate friction. If successful, the finite-element analysis will enable predictions relevant to space-based processing, including size, shape, and property predictions. Model experiments are underway to support the models via extraction of viscosity moduli versus composition, particle size, heating rate, temperature and time.
Cilfone, Nicholas A.; Kirschner, Denise E.; Linderman, Jennifer J.
2015-01-01
Biologically related processes operate across multiple spatiotemporal scales. For computational modeling methodologies to mimic this biological complexity, individual scale models must be linked in ways that allow for dynamic exchange of information across scales. A powerful methodology is to combine a discrete modeling approach, agent-based models (ABMs), with continuum models to form hybrid models. Hybrid multi-scale ABMs have been used to simulate emergent responses of biological systems. Here, we review two aspects of hybrid multi-scale ABMs: linking individual scale models and efficiently solving the resulting model. We discuss the computational choices associated with aspects of linking individual scale models while simultaneously maintaining model tractability. We demonstrate implementations of existing numerical methods in the context of hybrid multi-scale ABMs. Using an example model describing Mycobacterium tuberculosis infection, we show relative computational speeds of various combinations of numerical methods. Efficient linking and solution of hybrid multi-scale ABMs is key to model portability, modularity, and their use in understanding biological phenomena at a systems level. PMID:26366228
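The linking pattern reviewed here, operator splitting between a continuum solver and a discrete agent update, can be sketched as follows. A deliberately simplified, per-cell well-mixed ODE stands in for a diffusion PDE, and all names are illustrative rather than drawn from the reviewed models:

```python
import numpy as np
from scipy.integrate import solve_ivp

def hybrid_step(agents, grid, dt):
    """One operator-split update of a hybrid multi-scale model: advance the
    continuum scale (a cytokine field with agent secretion and first-order
    decay) with an ODE solver, then let discrete agents react to the
    updated field. `agents` are dicts with 'pos' (grid index tuple) and
    'secretion'; `grid` is a 2D concentration array."""
    src = np.zeros_like(grid)
    for a in agents:
        src[a["pos"]] += a["secretion"]            # agents feed the continuum
    sol = solve_ivp(lambda t, c: src.ravel() - 0.1 * c,
                    (0, dt), grid.ravel(), method="LSODA")
    grid = sol.y[:, -1].reshape(grid.shape)
    for a in agents:
        a["activated"] = grid[a["pos"]] > 1.0      # continuum feeds the agents
    return agents, grid
```

The review's point about efficiency shows up even here: the ODE solve and the agent loop run on different clocks, so the choice of dt and of the inner solver dominates run time.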
Management and assimilation of diverse, distributed watershed datasets
NASA Astrophysics Data System (ADS)
Varadharajan, C.; Faybishenko, B.; Versteeg, R.; Agarwal, D.; Hubbard, S. S.; Hendrix, V.
2016-12-01
The U.S. Department of Energy's (DOE) Watershed Function Scientific Focus Area (SFA) seeks to determine how perturbations to mountainous watersheds (e.g., floods, drought, early snowmelt) impact the downstream delivery of water, nutrients, carbon, and metals over seasonal to decadal timescales. We are building a software platform that enables integration of diverse and disparate field, laboratory, and simulation datasets, of various types including hydrological, geological, meteorological, geophysical, geochemical, ecological and genomic datasets across a range of spatial and temporal scales within the Rifle floodplain and the East River watershed, Colorado. We are using agile data management and assimilation approaches to enable web-based integration of heterogeneous, multi-scale datasets. Sensor-based observations of water level, vadose zone and groundwater temperature, water quality, and meteorology, as well as biogeochemical analyses of soil and groundwater samples, have been curated and archived in federated databases. Quality Assurance and Quality Control (QA/QC) are performed on priority datasets needed for on-going scientific analyses, and hydrological and geochemical modeling. Automated QA/QC methods are used to identify and flag issues in the datasets. Data integration is achieved via a brokering service that dynamically integrates data from distributed databases via web services, based on user queries. The integrated results are presented to users in a portal that enables intuitive search, interactive visualization and download of integrated datasets. The concepts, approaches and codes being used are shared across various data science components of various large DOE-funded projects such as the Watershed Function SFA, Next Generation Ecosystem Experiment (NGEE) Tropics, Ameriflux/FLUXNET, and Advanced Simulation Capability for Environmental Management (ASCEM), and together contribute towards DOE's cyberinfrastructure for data management and model-data integration.
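Automated QA/QC of the kind described can be as simple as vectorized range and spike checks over each sensor series. A minimal sketch; the flag codes and thresholds are illustrative assumptions, not the SFA's actual rules:

```python
import numpy as np

def qaqc_flags(values, lo, hi, max_jump):
    """Flag out-of-range values and sudden spikes in a sensor time series.
    `lo`, `hi`, and `max_jump` are per-variable thresholds supplied by the
    data curator. Flag codes: 0 = pass, 1 = range check failed,
    2 = suspected spike."""
    flags = np.zeros(len(values), dtype=int)
    flags[(values < lo) | (values > hi)] = 1
    jump = np.abs(np.diff(values, prepend=values[0]))
    flags[(jump > max_jump) & (flags == 0)] = 2
    return flags
```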
Multi-layer plastic/glass microfluidic systems containing electrical and mechanical functionality.
Han, Arum; Wang, Olivia; Graff, Mason; Mohanty, Swomitra K; Edwards, Thayne L; Han, Ki-Ho; Bruno Frazier, A
2003-08-01
This paper describes an approach for fabricating multi-layer microfluidic systems from a combination of glass and plastic materials. Methods and characterization results for the microfabrication technologies underlying the process flow are presented. The approach is used to fabricate and characterize multi-layer plastic/glass microfluidic systems containing electrical and mechanical functionality. Hot embossing, heat staking of plastics, injection molding, microstenciling of electrodes, and stereolithography were combined with conventional MEMS fabrication techniques to realize the multi-layer systems. The approach enabled the integration of multiple plastic/glass materials into a single monolithic system, provided a solution for the integration of electrical functionality throughout the system, provided a mechanism for the inclusion of microactuators such as micropumps/valves, and provided an interconnect technology for interfacing fluids and electrical components between the micro system and the macro world.
Development of a Renormalization Group Approach to Multi-Scale Plasma Physics Computation
2012-03-28
Final Technical Report (STTR Phase II), reporting period beginning 29-12-2008.
Beyond Low Rank + Sparse: Multi-scale Low Rank Matrix Decomposition
Ong, Frank; Lustig, Michael
2016-01-01
We present a natural generalization of the recent low rank + sparse matrix decomposition and consider the decomposition of matrices into components of multiple scales. Such decomposition is well motivated in practice as data matrices often exhibit local correlations at multiple scales. Concretely, we propose a multi-scale low rank modeling that represents a data matrix as a sum of block-wise low rank matrices with increasing scales of block sizes. We then consider the inverse problem of decomposing the data matrix into its multi-scale low rank components and approach the problem via a convex formulation. Theoretically, we show that under various incoherence conditions, the convex program recovers the multi-scale low rank components either exactly or approximately. Practically, we provide guidance on selecting the regularization parameters and incorporate cycle spinning to reduce blocking artifacts. Experimentally, we show that the multi-scale low rank decomposition provides a more intuitive decomposition than conventional low rank methods and demonstrate its effectiveness in four applications, including illumination normalization for face images, motion separation for surveillance videos, multi-scale modeling of dynamic contrast-enhanced magnetic resonance imaging, and collaborative filtering exploiting age information. PMID:28450978
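The computational building block of such a decomposition is singular-value soft-thresholding applied block-wise at each scale, i.e. the proximal operator of the block-wise nuclear norm. A sketch of that operator follows; the full method couples one such term per scale inside an iterative solver (e.g. ADMM), which is omitted here:

```python
import numpy as np

def block_svt(X, block, lam):
    """Apply singular-value soft-thresholding independently to each
    block x block tile of X: the prox of the block-wise nuclear norm at
    one scale. Edge tiles are simply smaller."""
    out = np.zeros_like(X)
    b = block
    for i in range(0, X.shape[0], b):
        for j in range(0, X.shape[1], b):
            U, s, Vt = np.linalg.svd(X[i:i+b, j:j+b], full_matrices=False)
            out[i:i+b, j:j+b] = U @ np.diag(np.maximum(s - lam, 0)) @ Vt
    return out
```

Running `block_svt` with block sizes 1 (entry-wise, i.e. the sparse term), an intermediate size, and the full matrix recovers the multiple scales the abstract describes.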
Final Report, “Exploiting Global View for Resilience”
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chien, Andrew
2017-03-29
Final technical report for the "Exploiting Global View for Resilience" project. The GVR project aims to create a new approach to portable, resilient applications. The GVR approach builds on a global view data model, adding versioning (multi-version), user control of timing and rate (multi-stream), and flexible cross-layer error signalling and recovery. With a versioned array as a portable abstraction, GVR enables application programmers to exploit deep scientific and application code insights to manage resilience (and its overhead) in a flexible, portable fashion.
Integrated microsystems packaging approach with LCP
NASA Astrophysics Data System (ADS)
Jaynes, Paul; Shacklette, Lawrence W.
2006-05-01
Within the government communication market there is an increasing push to further miniaturize systems with the use of chip-scale packages, flip-chip bonding, and other advances over traditional packaging techniques. Harris' approach to miniaturization includes these traditional packaging advances, but goes beyond this level of miniaturization by combining the functional and structural elements of a system, thus creating a Multi-Functional Structural Circuit (MFSC). An emerging high-frequency, near hermetic, thermoplastic electronic substrate material, Liquid Crystal Polymer (LCP), is the material that will enable the combination of the electronic circuit and the physical structure of the system. The first embodiment of this vision for Harris is the development of a battlefield acoustic sensor module. This paper will introduce LCP and its advantages for MFSC, present an example of the work that Harris has performed, and speak to LCP MFSCs' potential benefits to miniature communications modules and sensor platforms.
A dynamic multi-scale Markov model based methodology for remaining life prediction
NASA Astrophysics Data System (ADS)
Yan, Jihong; Guo, Chaozhong; Wang, Xing
2011-05-01
The ability to accurately predict the remaining life of partially degraded components is crucial in prognostics. In this paper, a performance degradation index is designed using multi-feature fusion techniques to represent the deterioration severity of facilities. Based on this indicator, an improved Markov model is proposed for remaining life prediction. The Fuzzy C-Means (FCM) algorithm is employed to perform state division for the Markov model in order to avoid the uncertainty of state division caused by hard division approaches. Considering the influence of both historical and real-time data, a dynamic prediction method is introduced into the Markov model by a weighted coefficient. Multi-scale theory is employed to solve the state-division problem of multi-sample prediction. Consequently, a dynamic multi-scale Markov model is constructed. An experiment based on a Bently-RK4 rotor testbed was designed to validate the dynamic multi-scale Markov model; experimental results illustrate the effectiveness of the methodology.
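Stripped of the FCM state division and the dynamic historical/real-time weighting, the Markov backbone of such remaining-life prediction reduces to an absorbing-chain calculation: estimate a transition matrix over degradation states, then compute the expected number of steps until the failure state absorbs. A minimal sketch with illustrative names, assuming the failure state is reachable from every kept state:

```python
import numpy as np

def fit_markov(states, n):
    """Count-based estimate of an n-state transition matrix from an observed
    integer state sequence."""
    P = np.zeros((n, n))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1
    return P / np.maximum(P.sum(axis=1, keepdims=True), 1)

def remaining_life(P, start, failure):
    """Expected number of transitions to absorption in `failure`, via the
    fundamental matrix N = (I - Q)^-1 of the absorbing chain."""
    keep = [s for s in range(len(P)) if s != failure]
    Q = P[np.ix_(keep, keep)]
    N = np.linalg.inv(np.eye(len(Q)) - Q)
    t = N.sum(axis=1)            # expected steps from each transient state
    return t[keep.index(start)]
```

The result is in units of sampling intervals; the paper's dynamic weighting would re-estimate P as new condition-monitoring data arrive.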
Multi-level multi-task learning for modeling cross-scale interactions in nested geospatial data
Yuan, Shuai; Zhou, Jiayu; Tan, Pang-Ning; Fergus, Emi; Wagner, Tyler; Sorrano, Patricia
2017-01-01
Predictive modeling of nested geospatial data is a challenging problem as the models must take into account potential interactions among variables defined at different spatial scales. These cross-scale interactions, as they are commonly known, are particularly important to understand relationships among ecological properties at macroscales. In this paper, we present a novel, multi-level multi-task learning framework for modeling nested geospatial data in the lake ecology domain. Specifically, we consider region-specific models to predict lake water quality from multi-scaled factors. Our framework enables distinct models to be developed for each region using both its local and regional information. The framework also allows information to be shared among the region-specific models through their common set of latent factors. Such information sharing helps to create more robust models especially for regions with limited or no training data. In addition, the framework can automatically determine cross-scale interactions between the regional variables and the local variables that are nested within them. Our experimental results show that the proposed framework outperforms all the baseline methods in at least 64% of the regions for 3 out of 4 lake water quality datasets evaluated in this study. Furthermore, the latent factors can be clustered to obtain a new set of regions that is more aligned with the response variables than the original regions that were defined a priori from the ecology domain.
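A toy rendering of the information-sharing idea follows: region r's weights are factored as U @ V[:, r], with U shared across all regions and V region-specific, so regions with little data borrow strength through the common latent factors. The cross-scale interaction terms of the actual framework are omitted, and all names are illustrative:

```python
import numpy as np

def fit_regions(Xs, ys, k=5, lam=0.1, lr=0.01, iters=200):
    """Alternating gradient descent on ridge-regularized squared loss for a
    set of regional tasks. Xs/ys are per-region design matrices and targets;
    U (d x k) holds shared latent factors, V (k x R) the regional loadings."""
    d = Xs[0].shape[1]
    rng = np.random.default_rng(0)
    U = 0.01 * rng.standard_normal((d, k))
    V = 0.01 * rng.standard_normal((k, len(Xs)))
    for _ in range(iters):
        for r, (X, y) in enumerate(zip(Xs, ys)):
            err = X @ (U @ V[:, r]) - y
            g = X.T @ err / len(y)                    # gradient w.r.t. w_r
            U -= lr * (np.outer(g, V[:, r]) + lam * U)
            V[:, r] -= lr * (U.T @ g + lam * V[:, r])
    return U, V
```

Clustering the columns of V would give the data-driven regionalization the paper reports.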
NASA Astrophysics Data System (ADS)
Dee, S. G.; Parsons, L. A.; Loope, G. R.; Overpeck, J. T.; Ault, T. R.; Emile-Geay, J.
2017-10-01
The spectral characteristics of paleoclimate observations spanning the last millennium suggest the presence of significant low-frequency (multi-decadal to centennial scale) variability in the climate system. Since this low-frequency climate variability is critical for climate predictions on societally-relevant scales, it is essential to establish whether General Circulation Models (GCMs) are able to simulate it faithfully. Recent studies find large discrepancies between models and paleoclimate data at low frequencies, prompting concerns surrounding the ability of GCMs to predict long-term, high-magnitude variability under greenhouse forcing (Laepple and Huybers, 2014a, 2014b). However, efforts to ground climate model simulations directly in paleoclimate observations are impeded by fundamental differences between models and the proxy data: proxy systems often record a multivariate and/or nonlinear response to climate, precluding a direct comparison to GCM output. In this paper we bridge this gap via a forward proxy modeling approach, coupled to an isotope-enabled GCM. This allows us to disentangle the various contributions to signals embedded in ice cores, speleothem calcite, coral aragonite, tree-ring width, and tree-ring cellulose. The paper addresses the following questions: (1) do forward-modeled "pseudoproxies" exhibit variability comparable to proxy data? (2) if not, which processes alter the shape of the spectrum of simulated climate variability, and are these processes broadly distinguishable from climate? We apply our method to representative case studies, and broaden these insights with an analysis of the PAGES2k database (PAGES2K Consortium, 2013). We find that current proxy system models (PSMs) can help resolve model-data discrepancies on interannual to decadal timescales, but cannot account for the mismatch in variance on multi-decadal to centennial timescales. We conclude that, specific to this set of PSMs and isotope-enabled model, the paleoclimate record may exhibit larger low-frequency variability than GCMs currently simulate, indicative of incomplete physics and/or forcings.
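As a concrete illustration of the forward-modeling step, even a deliberately minimal proxy system model, smoothing the climate input and adding measurement noise, is enough to reshape the recorded spectrum. The form and parameters below are assumptions, far simpler than the multivariate, nonlinear PSMs used in the paper:

```python
import numpy as np

def pseudoproxy(temperature, snr=0.5, tau=2.0, seed=0):
    """Minimal forward proxy system model: a one-sided exponential smoothing
    with e-folding scale `tau` (years) stands in for archive integration
    (growth, bioturbation); white noise is scaled so the signal-to-noise
    ratio of standard deviations equals `snr`."""
    rng = np.random.default_rng(seed)
    w = np.exp(-np.arange(0, 5 * tau) / tau)
    w /= w.sum()
    smoothed = np.convolve(temperature, w, mode="same")
    noise = rng.standard_normal(len(smoothed)) * smoothed.std() / snr
    return smoothed + noise
```

Comparing the power spectrum of `pseudoproxy(T)` against that of `T` itself shows the two effects the paper separates: smoothing suppresses high frequencies, while noise inflates variance at all frequencies.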
A multi-frequency receiver function inversion approach for crustal velocity structure
NASA Astrophysics Data System (ADS)
Li, Xuelei; Li, Zhiwei; Hao, Tianyao; Wang, Sheng; Xing, Jian
2017-05-01
To better constrain crustal velocity structures, we developed a new nonlinear inversion approach based on multi-frequency receiver function waveforms. With the global optimization algorithm of Differential Evolution (DE), low-frequency receiver function waveforms primarily constrain large-scale velocity structures, while high-frequency receiver function waveforms show advantages in recovering small-scale velocity structures. Synthetic tests with multi-frequency receiver function waveforms show that the proposed approach can constrain both the long- and short-wavelength characteristics of the crustal velocity structure simultaneously. Inversions with real data are also conducted for the seismic stations KMNB in southeast China and HYB on the Indian continent, where crustal structures have been well studied by previous researchers. Comparisons between the velocity models inverted in previous studies and ours show good consistency, while our approach achieves better waveform fits with fewer model parameters. Comprehensive tests with synthetic and real data suggest that the proposed multi-frequency receiver function inversion approach is effective and robust for inverting crustal velocity structures.
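The inversion loop can be sketched schematically (the forward model below is a placeholder, not the paper's receiver-function synthesizer, and scipy's stock differential evolution stands in for the authors' DE implementation):

    import numpy as np
    from scipy.optimize import differential_evolution

    def synth_rf(model, freq):
        """Placeholder forward model: maps layer velocities to a waveform."""
        t = np.linspace(0, 10, 200)
        return sum(v * np.exp(-freq * (t - i) ** 2) for i, v in enumerate(model))

    # "Observed" waveforms at a low and a high Gaussian filter frequency.
    observed = {0.5: synth_rf([2.8, 3.5, 4.0], 0.5),
                2.0: synth_rf([2.8, 3.5, 4.0], 2.0)}

    def misfit(model):
        # Low frequencies constrain long-wavelength structure, high
        # frequencies short-wavelength details; the joint misfit uses both.
        return sum(np.sum((synth_rf(model, f) - obs) ** 2)
                   for f, obs in observed.items())

    bounds = [(2.0, 5.0)] * 3        # Vs bounds for three crustal layers, km/s
    result = differential_evolution(misfit, bounds, seed=0, maxiter=100)
    print(result.x)                  # recovers the assumed layer velocities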
NASA Astrophysics Data System (ADS)
Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.
2017-12-01
The increasing model resolution in the development of comprehensive Earth System Models is rapidly leading to very large climate simulation outputs that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2 PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large-scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. At the time this contribution is being written, the proposed testbed represents the first implementation of a distributed, large-scale, multi-model experiment in the ESGF/CMIP context, joining together server-side approaches for scientific data analysis, HPDA frameworks, end-to-end workflow management, and cloud computing.
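For orientation, the kind of multi-model computation the testbed executes server-side can be sketched client-side with xarray (a rough analogue only; this is not the INDIGO/Ophidia software stack, and the file names are hypothetical):

    import xarray as xr

    # Ensemble mean of near-surface temperature across two CMIP5 models.
    files = ["tas_Amon_modelA_rcp85.nc", "tas_Amon_modelB_rcp85.nc"]
    members = [xr.open_dataset(f)["tas"] for f in files]

    # Align on the common time axis, stack into an ensemble dimension, average.
    ensemble = xr.concat(xr.align(*members, join="inner"), dim="model")
    ens_mean = ensemble.mean(dim="model")
    ens_mean.to_netcdf("tas_ensemble_mean.nc")

The server-side paradigm described above performs this reduction next to the data at each ESGF site, moving only the small aggregated product.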
Flat nonlinear optics: metasurfaces for efficient frequency mixing
NASA Astrophysics Data System (ADS)
Nookala, Nishant; Lee, Jongwon; Liu, Yingnan; Bishop, Wells; Tymchenko, Mykhailo; Gomez-Diaz, J. Sebastian; Demmerle, Frederic; Boehm, Gerhard; Amann, Markus-Christian; Wolf, Omri; Brener, Igal; Alu, Andrea; Belkin, Mikhail A.
2017-02-01
Gradient metasurfaces, or ultrathin optical components with engineered transverse impedance gradients along the surface, are able to locally control the phase and amplitude of the scattered fields over subwavelength scales, enabling a broad range of linear components in a flat, integrable platform [1-4]. By contrast, due to the weakness of their nonlinear optical responses, conventional nonlinear optical components are inherently bulky, with stringent phase-matching requirements and poor control over the phase and amplitude of the generated beam. Nonlinear metasurfaces have recently been proposed to enable frequency conversion in thin films without phase-matching constraints and with subwavelength control of the local nonlinear phase [5-8]. However, the associated optical nonlinearities are far too small to produce significant nonlinear conversion efficiency and compete with conventional nonlinear components for pump intensities below the material damage threshold. Here, we report multi-quantum-well based gradient nonlinear metasurfaces with second-order nonlinear susceptibility over 10^6 pm/V for second harmonic generation at a fundamental pump wavelength of 10 μm, 5-6 orders of magnitude larger than traditional crystals. Further, we demonstrate the efficacy of this approach to designing metasurfaces optimized for frequency conversion over a large range of wavelengths, by reporting multi-quantum-well and metasurface structures optimized for a pump wavelength of 6.7 μm. Finally, we demonstrate how the phase of this nonlinearly generated light can be locally controlled well below the diffraction limit using the Pancharatnam-Berry phase approach [5,7,9], opening a new paradigm for ultrathin, flat nonlinear optical components.
Training Systems Modelers through the Development of a Multi-scale Chagas Disease Risk Model
NASA Astrophysics Data System (ADS)
Hanley, J.; Stevens-Goodnight, S.; Kulkarni, S.; Bustamante, D.; Fytilis, N.; Goff, P.; Monroy, C.; Morrissey, L. A.; Orantes, L.; Stevens, L.; Dorn, P.; Lucero, D.; Rios, J.; Rizzo, D. M.
2012-12-01
The goal of our NSF-sponsored Division of Behavioral and Cognitive Sciences grant is to create a multidisciplinary approach to develop spatially explicit models of vector-borne disease risk, using Chagas disease as our model. Chagas disease is a parasitic disease endemic to Latin America that afflicts an estimated 10 million people. The causative agent (Trypanosoma cruzi) is most commonly transmitted to humans by blood-feeding triatomine insect vectors. Our objectives are to: (1) advance knowledge of the multiple interacting factors affecting the transmission of Chagas disease, and (2) provide next-generation genomic and spatial analysis tools applicable to the study of other vector-borne diseases worldwide. The funded work is a collaborative effort between the RSENR (UVM), the School of Engineering (UVM), the Department of Biology (UVM), the Department of Biological Sciences (Loyola, New Orleans) and the Laboratory of Applied Entomology and Parasitology (Universidad de San Carlos). Throughout this five-year study, groups at multiple educational levels (i.e., high school, undergraduate, graduate, and postdoctoral) will be trained in systems modeling. This systems approach challenges students to incorporate environmental, social, and economic as well as technical aspects, and enables modelers to simulate and visualize topics that would be too expensive, complex or difficult to study directly (Yasar and Landau 2003). We launch this research by developing a set of multi-scale, epidemiological models of Chagas disease risk using STELLA® software v.9.1.3 (isee systems, inc., Lebanon, NH). We use this particular system dynamics software as a starting point because of its simple graphical user interface (e.g., behavior-over-time graphs, stock/flow diagrams, and causal loops). To date, high school and undergraduate students have created a set of multi-scale (i.e., homestead, village, and regional) disease models. Modeling the system at multiple spatial scales forces recognition that the system's structure generates its behavior, and STELLA®'s graphical interface allows researchers at multiple educational levels to observe patterns and trends as the system changes over time. Graduate students and postdoctoral researchers will build on these initial models to more efficiently communicate and transfer knowledge across disciplines before generating more novel and complex disease risk models. The hope is that these models will improve causal understanding of system patterns and of how best to mitigate disease risk across multiple spatial scales. Yasar O, Landau RH (2003) Elements of computational science and engineering education. SIAM Review 45(4): 787-805.
Multi-scale functional mapping of tidal marsh vegetation for restoration monitoring
NASA Astrophysics Data System (ADS)
Tuxen Bettman, Karin
2007-12-01
Nearly half of the world's natural wetlands have been destroyed or degraded, and in recent years there have been significant endeavors to restore wetland habitat throughout the world. Detailed mapping of restoring wetlands can offer valuable information about changes in vegetation and geomorphology, which can inform the restoration process and ultimately help to improve the chances of restoration success. I studied six tidal marshes in the San Francisco Estuary, CA, US, between 2003 and 2004 in order to develop techniques for mapping tidal marshes at multiple scales, incorporating specific restoration objectives for improved long-term monitoring. I explored a "pixel-based" remote sensing image analysis method for mapping vegetation in restored and natural tidal marshes, describing the benefits and limitations of this type of approach (Chapter 2). I also performed a multi-scale analysis of vegetation pattern metrics for a recently restored tidal marsh in order to target the metrics that are consistent across scales and will be robust measures of marsh vegetation change (Chapter 3). Finally, I performed an "object-based" image analysis using the same remotely sensed imagery, which maps vegetation type and specific wetland functions at multiple scales (Chapter 4). The combined results of my work highlight important trends and management implications for monitoring wetland restoration using remote sensing, and will better enable restoration ecologists to use remote sensing for tidal marsh monitoring. Several findings important for tidal marsh restoration monitoring were made. Overall results showed that pixel-based methods are effective at quantifying landscape changes in composition and diversity in recently restored marshes, but are limited in their use for quantifying smaller, more fine-scale changes. While pattern metrics can highlight small but important changes in vegetation composition and configuration across years, scientists should exercise caution when using metrics in their studies or to validate restoration management decisions, and multi-scale analyses should be performed before metrics are used in restoration science for important management decisions. Lastly, restoration objectives, ecosystem function, and scale can each be integrated into monitoring techniques using remote sensing for improved restoration monitoring.
Continuous micron-scaled rope engineering using a rotating multi-nozzle electrospinning emitter
NASA Astrophysics Data System (ADS)
Zhang, Chunchen; Gao, Chengcheng; Chang, Ming-Wei; Ahmad, Zeeshan; Li, Jing-Song
2016-10-01
Electrospinning (ES) enables the simple production of fibers for broad applications (e.g., biomedical engineering, energy storage, and electronics). However, the resulting structures are predominantly random, displaying significant disordered fiber entanglement, which inevitably gives rise to structural variation and poor reproducibility on the micron scale. Surface and structural features on this scale are critical for biomaterials, tissue engineering, and the pharmaceutical sciences. In this letter, a modified ES technique using a rotating multi-nozzle emitter is developed and utilized to fabricate continuous micron-scaled polycaprolactone (PCL) ropes, providing control over fiber intercalation (twist) and structural order. Micron-scaled ropes comprising 312 twists per millimeter are generated, and rope diameter and pitch length are regulated using polymer concentration and process parameters. Electric field simulations confirm the vector and distribution mechanisms that influence fiber orientation and deposition during the process. The modified fabrication system provides much-needed control over reproducibility and fiber entanglement, which is crucial for electrospun biomedical materials.
Donovan, Sarah-Louise; Salmon, Paul M; Lenné, Michael G; Horberry, Tim
2017-10-01
Safety leadership is an important factor in supporting safety in high-risk industries. This article contends that applying systems-thinking methods to examine safety leadership can support improved learning from incidents. A case study analysis was undertaken of a large-scale mining landslide incident in which no injuries or fatalities were incurred. A multi-method approach was adopted, in which the Critical Decision Method, Rasmussen's Risk Management Framework and Accimap method were applied to examine the safety leadership decisions and actions which enabled the safe outcome. The approach enabled Rasmussen's predictions regarding safety and performance to be examined in the safety leadership context, with findings demonstrating the distribution of safety leadership across leader and system levels, and the presence of vertical integration as key to supporting the successful safety outcome. In doing so, the findings also demonstrate the usefulness of applying systems-thinking methods to examine and learn from incidents in terms of what 'went right'. The implications, including future research directions, are discussed. Practitioner Summary: This paper presents a case study analysis, in which systems-thinking methods are applied to the examination of safety leadership decisions and actions during a large-scale mining landslide incident. The findings establish safety leadership as a systems phenomenon, and furthermore, demonstrate the usefulness of applying systems-thinking methods to learn from incidents in terms of what 'went right'. Implications, including future research directions, are discussed.
NASA Astrophysics Data System (ADS)
Rosen, P. A.; Gurrola, E. M.; Sacco, G. F.; Agram, P. S.; Lavalle, M.; Zebker, H. A.
2014-12-01
The NASA ESTO-developed InSAR Scientific Computing Environment (ISCE) provides a computing framework for geodetic image processing for InSAR sensors that is modular, flexible, and extensible, enabling scientists to reduce measurements directly from a diverse array of radar satellites and aircraft to new geophysical products. ISCE can serve as the core of a centralized processing center to bring Level-0 raw radar data up to Level-3 data products, but is adaptable to alternative processing approaches for science users interested in new and different ways to exploit mission data. This is accomplished through rigorous componentization of processing codes, abstraction and generalization of data models, and an XML-based input interface with multi-level prioritized control of the component configurations depending on the science processing context. The proposed NASA-ISRO SAR (NISAR) mission would deliver data of unprecedented quantity and quality, making possible global-scale studies in climate research, natural hazards, and Earth's ecosystems. ISCE is planned to become a key element in processing projected NISAR data into higher-level data products, enabling a new class of analyses that take greater advantage of the long time and large spatial scales of these new data than current approaches. NISAR would be but one mission in a constellation of radar satellites in the future delivering such data. ISCE has been incorporated into two prototype cloud-based systems that have demonstrated its elasticity in addressing larger data processing problems in a "production" context and its ability to be controlled by individual science users on the cloud for large data problems.
NASA Astrophysics Data System (ADS)
Ji, Peng; Song, Aiguo; Song, Zimo; Liu, Yuqing; Jiang, Guohua; Zhao, Guopu
2017-02-01
In this paper, we describe a heading direction correction algorithm for a tracked mobile robot. To conserve hardware resources, the mobile robot's wrist camera, rotated to face the stairs, is used as the only sensor. An ensemble heading deviation detector is proposed to help the mobile robot correct its heading direction. To improve generalization ability, a multi-scale Gabor filter is first used to preprocess the input image. The final deviation estimate is obtained by applying a majority-vote strategy to the outputs of all the classifiers. The experimental results show that our detector enables the mobile robot to adaptively correct its heading direction while climbing stairs.
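The ensemble scheme can be sketched as follows (a toy illustration: the per-scale classifier, the filter frequencies and the random image are assumptions, not the paper's trained classifiers):

    import numpy as np
    from collections import Counter
    from skimage.filters import gabor

    def deviation_vote(response):
        # Toy per-scale classifier: compare filter energy in the left and
        # right image halves; the real system uses trained classifiers.
        half = response.shape[1] // 2
        left = np.abs(response[:, :half]).sum()
        right = np.abs(response[:, half:]).sum()
        return "left" if left > right else "right"

    image = np.random.rand(120, 160)        # stand-in for a camera frame
    votes = []
    for frequency in (0.1, 0.2, 0.4):       # multi-scale Gabor bank
        real, imag = gabor(image, frequency=frequency)
        votes.append(deviation_vote(real + 1j * imag))

    # Majority vote across scales gives the final heading-deviation decision.
    decision = Counter(votes).most_common(1)[0][0]
    print(decision)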
Application Perspective of 2D+SCALE Dimension
NASA Astrophysics Data System (ADS)
Karim, H.; Rahman, A. Abdul
2016-09-01
Different applications or users need different abstractions of spatial models, dimensionalities and specifications of their datasets due to variations in the required analysis and output. Various approaches, data models and data structures are now available to support most current application models in Geographic Information Systems (GIS). One focus of the GIS multi-dimensional research community is the implementation of a scale dimension alongside spatial datasets to suit various scale-dependent application needs. In this paper, 2D spatial datasets whose scale is treated as the third dimension are referred to as 2D+scale (or 3D-scale) datasets. Various data structures, data models, approaches, schemas, and formats have been proposed as the best approaches to support a variety of applications and dimensionalities in 3D topology. However, only a few of them consider the element of scale as their targeted dimension. As far as the scale dimension is concerned, the implementation approach can be either multi-scale or vario-scale (with any available data structure and format), depending on the application requirements (topology, semantics and function). This paper discusses current and potential new applications that could be integrated upon the 3D-scale dimension approach. Previous and current work on the scale dimension, the requirements to be preserved for any given application, implementation issues and potential future applications form the major discussion of this paper.
An IT-enabled supply chain model: a simulation study
NASA Astrophysics Data System (ADS)
Cannella, Salvatore; Framinan, Jose M.; Barbosa-Póvoa, Ana
2014-11-01
During the last decades, supply chain collaboration practices and the underlying enabling technologies have evolved from the classical electronic data interchange (EDI) approach to web-based and radio frequency identification (RFID)-enabled collaboration. In this field, most of the literature has focused on the study of optimal parameters for reducing the total cost of suppliers, by adopting operational research (OR) techniques. Herein we are interested in showing that the considered information technology (IT)-enabled structure is resilient, that is, it works well across a reasonably broad range of parameter settings. By adopting a methodological approach based on system dynamics, we study a multi-tier collaborative supply chain. Results show that the IT-enabled supply chain improves operational performance and customer service level. Nonetheless, the benefits for geographically dispersed networks are of smaller magnitude.
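A minimal system-dynamics-style sketch of the setting (the parameters, ordering policy and lead times are illustrative assumptions, not the paper's model): each tier follows an order-up-to policy, and IT-enabled demand sharing lets every tier forecast from end-customer demand rather than from the distorted orders of the tier below.

    import numpy as np

    rng = np.random.default_rng(2)
    T, tiers = 52, 3
    demand = 100 + rng.normal(scale=10, size=T)     # end-customer demand
    inventory = np.full(tiers, 200.0)
    orders = np.zeros((tiers, T))

    for t in range(1, T):
        downstream = demand[t]
        for k in range(tiers):
            # IT-enabled: every tier forecasts from shared customer demand.
            forecast = demand[max(0, t - 4):t].mean()
            target = 2 * forecast                   # order-up-to level
            orders[k, t] = max(0.0, target - inventory[k] + downstream)
            inventory[k] += orders[k, t - 1] - downstream  # one-period lag
            downstream = orders[k, t]               # propagate upstream

    # Crude bullwhip measure: order-variance amplification from customer
    # demand to the top tier; values near 1 indicate damping.
    print(orders[-1, 4:].var() / demand[4:].var())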
NASA Astrophysics Data System (ADS)
Zhang, Ying; Feng, Yuanming; Wang, Wei; Yang, Chengwen; Wang, Ping
2017-03-01
A novel and versatile "bottom-up" approach is developed to estimate the radiobiological effect of clinical radiotherapy. The model consists of multi-scale Monte Carlo simulations from the organ level down to the cell level. At the cellular level, accumulated damages are computed using a spectrum-based accumulation algorithm and a predefined cellular damage database. The damage repair mechanism is modeled by an expanded reaction-rate two-lesion kinetic model, which was calibrated by replicating a radiobiological experiment. Multi-scale modeling is then performed on a lung cancer patient under conventional fractionated irradiation. The cell killing effects of two representative voxels (the isocenter and a peripheral voxel of the tumor) are computed and compared. At the microscopic level, the nucleus dose and damage yields vary among the nuclei within each voxel. A slightly larger percentage of cDSB yield is observed for the peripheral voxel (55.0%) compared to the isocenter one (52.5%). For the isocenter voxel, the survival fraction increases monotonically as the oxygen level is reduced. Under an extreme anoxic condition (0.001%), the survival fraction is calculated to be 80% and the hypoxia reduction factor reaches a maximum value of 2.24. In conclusion, with biologically-related variations, the proposed multi-scale approach is more versatile than existing approaches for evaluating personalized radiobiological effects in radiotherapy.
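For reference, a hypoxia reduction factor of this kind is commonly defined as a dose ratio at equal cell kill (our notation; the paper may state it slightly differently):

    \mathrm{HRF} = \left. \frac{D_{\text{hypoxic}}}{D_{\text{oxic}}} \right|_{\text{equal survival}}

so a maximum value of 2.24 means that, under the extreme anoxic condition, roughly 2.24 times the dose would be required to match the cell killing achieved in a well-oxygenated voxel.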
Multi-phase SPH modelling of violent hydrodynamics on GPUs
NASA Astrophysics Data System (ADS)
Mokos, Athanasios; Rogers, Benedict D.; Stansby, Peter K.; Domínguez, José M.
2015-11-01
This paper presents the acceleration of multi-phase smoothed particle hydrodynamics (SPH) using a graphics processing unit (GPU), enabling large numbers of particles (10-20 million) to be simulated on just a single GPU card. With novel hardware architectures such as a GPU, the optimum approach to implementing a multi-phase scheme presents some new challenges. Many more particles must be included in the calculation, and there are very different speeds of sound in each phase, with the largest speed of sound determining the time step. This requires efficient computation. To take full advantage of the hardware acceleration provided by a single GPU for a multi-phase simulation, four different algorithms are investigated: conditional statements, binary operators, separate particle lists and an intermediate global function. Runtime results show that the optimum approach needs to employ separate cell and neighbour lists for each phase. The profiler shows that this approach leads to a reduction in both memory transactions and arithmetic operations, giving significant runtime gains. The four different algorithms are compared to the efficiency of the optimised single-phase GPU code, DualSPHysics, for 2-D and 3-D simulations, which indicates that the multi-phase functionality has a significant computational overhead. A comparison with an optimised CPU code shows a speed-up of an order of magnitude over an OpenMP simulation with 8 threads and two orders of magnitude over a single-thread simulation. A demonstration of the multi-phase SPH GPU code is provided by a 3-D dam break case impacting an obstacle. This shows better agreement with experimental results than an equivalent single-phase code. The multi-phase GPU code enables a convergence study to be undertaken on a single GPU with a large number of particles that would otherwise have required large high-performance computing resources.
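The winning "separate lists" strategy can be illustrated in a few lines (a CPU-side sketch with scipy, not the CUDA implementation in DualSPHysics; the particle counts and smoothing length are arbitrary):

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(3)
    h = 0.05                                  # smoothing length
    positions = {"water": rng.random((20000, 3)),
                 "air":   rng.random((5000, 3))}
    # One spatial search structure per phase, instead of branching on a
    # per-particle phase flag inside a single global loop.
    trees = {phase: cKDTree(pos) for phase, pos in positions.items()}

    # Neighbour lists per phase pair; each list can then be processed by a
    # kernel specialized for that pair (different densities, sound speeds).
    neighbours = {}
    for pa, tree_a in trees.items():
        for pb, tree_b in trees.items():
            neighbours[(pa, pb)] = tree_a.query_ball_tree(tree_b, r=2 * h)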
Chakraborty, Pritam; Zhang, Yongfeng; Tonks, Michael R.
2015-12-07
The fracture behavior of brittle materials is strongly influenced by their underlying microstructure, which needs explicit consideration for accurate prediction of fracture properties and the associated scatter. In this work, a hierarchical multi-scale approach is pursued to model microstructure-sensitive brittle fracture. A quantitative phase-field based fracture model is utilized to capture the complex crack growth behavior in the microstructure, and the related parameters are calibrated from lower-length-scale atomistic simulations instead of engineering-scale experimental data. The workability of this approach is demonstrated by performing porosity-dependent intergranular fracture simulations in UO2 and comparing the predictions with experiments.
NASA Technical Reports Server (NTRS)
Engin, Doruk; Mathason, Brian; Stephen, Mark; Yu, Anthony; Cao, He; Fouron, Jean-Luc; Storm, Mark
2016-01-01
Accurate global measurements of tropospheric CO2 mixing ratios are needed to study CO2 emissions and CO2 exchange with the land and oceans. NASA Goddard Space Flight Center (GSFC) is developing a pulsed lidar approach for an integrated path differential absorption (IPDA) lidar to allow global measurements of atmospheric CO2 column densities from space. Our group has developed, and successfully flown, an airborne pulsed lidar instrument that uses two tunable pulsed laser transmitters allowing simultaneous measurement of a single CO2 absorption line in the 1570 nm band, absorption of an O2 line pair in the oxygen A-band (765 nm), range, and atmospheric backscatter profiles in the same path. Both lasers are pulsed at 10 kHz, and the two absorption line regions are sampled at typically a 300 Hz rate. A space-based version of this lidar must have a much larger lidar power-area product due to the roughly 40x longer range and faster along-track velocity compared to the airborne instrument. An initial link budget analysis indicated that for a 400 km orbit, a 1.5 m diameter telescope and a 10 second integration time, a 2 mJ laser energy is required to attain the precision needed for each measurement. To meet this energy requirement, we have pursued parallel power scaling efforts to enable space-based lidar measurement of CO2 concentrations. These included a multiple-aperture approach consisting of multi-element large-mode-area fiber amplifiers and a single-aperture approach consisting of a multi-pass Er:Yb:phosphate glass based planar waveguide amplifier (PWA). In this paper we will present our laser amplifier design approaches and preliminary results.
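The scaling behind the factor-of-40 range argument follows from the standard hard-target lidar link equation (written in our notation; the actual GSFC link budget includes additional terms):

    E_r = E_t \,\rho\, \frac{A}{\pi R^2}\, T_{\mathrm{atm}}^2\, \eta

where E_t is the transmitted pulse energy, \rho the surface reflectance, A the telescope aperture area, R the range, T_atm the one-way atmospheric transmission, and \eta the total system efficiency. Because the received energy scales as E_t A / R^2, going from roughly 10 km airborne altitudes to a 400 km orbit requires a far larger energy-aperture product, which is what motivates the 2 mJ, 1.5 m telescope design point quoted above.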
Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A
2016-09-06
Analytical and numerical methods have been used to extract essential engineering parameters such as elastic modulus, Poisson's ratio, permeability and diffusion coefficient from experimental data in various types of biological tissues. The major limitation associated with analytical techniques is that they are often only applicable to problems with simplified assumptions. Numerical multi-physics methods, on the other hand, minimize the need for simplifying assumptions but require substantial computational expertise, which is not always available. In this paper, we propose a novel approach that combines inverse and forward artificial neural networks (ANNs) and enables fast and accurate estimation of the diffusion coefficient of cartilage without any need for computational modeling. In this approach, an inverse ANN is trained using our multi-zone biphasic-solute finite-bath computational model of diffusion in cartilage to estimate the diffusion coefficient of the various zones of cartilage given the concentration-time curves. Robust estimation of the diffusion coefficients, however, requires introducing certain levels of stochastic variation during the training process. The required level of stochastic variation is determined by coupling the inverse ANN with a forward ANN that receives the diffusion coefficient as input and returns the concentration-time curve as output. Combined together, forward-inverse ANNs enable computationally inexperienced users to obtain fast and accurate estimates of the diffusion coefficients of cartilage zones. The diffusion coefficients estimated using the proposed approach are compared with those determined by direct scanning of the parameter space as the optimization approach. Both approaches are shown to yield comparable results.
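The forward-inverse pairing can be sketched with off-the-shelf networks (a toy: the saturating-exponential surrogate, the architectures and the noise level are assumptions standing in for the biphasic-solute finite element model):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(4)
    t = np.linspace(0.1, 10, 20)

    def concentration_curve(D):
        # Stand-in for the finite element model: a saturating uptake curve
        # whose rate depends on the diffusion coefficient D.
        return 1.0 - np.exp(-np.outer(D, t))

    D_train = rng.uniform(0.05, 1.0, size=2000)
    curves = concentration_curve(D_train)
    # Stochastic variation added during training, as described above.
    curves_noisy = curves + rng.normal(scale=0.02, size=curves.shape)

    # Inverse ANN: concentration-time curve -> diffusion coefficient.
    inverse_net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                               random_state=0).fit(curves_noisy, D_train)
    # Forward ANN: diffusion coefficient -> curve, used to tune that noise.
    forward_net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                               random_state=0).fit(D_train.reshape(-1, 1), curves)

    print(inverse_net.predict(concentration_curve(np.array([0.3]))))  # ~0.3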
ERIC Educational Resources Information Center
Gotwals, John K.; Dunn, John G. H.
2009-01-01
This article presents a chronology of three empirical studies that outline the measurement process by which two new subscales ("Doubts about Actions" and "Organization") were developed and integrated into a revised version of Dunn, Causgrove Dunn, and Syrotuik's (2002) "Sport Multidimensional Perfectionism Scale"…
A multi-objective constraint-based approach for modeling genome-scale microbial ecosystems.
Budinich, Marko; Bourdon, Jérémie; Larhlimi, Abdelhalim; Eveillard, Damien
2017-01-01
Interplay within microbial communities impacts ecosystems on several scales, and elucidating the consequent effects is a difficult task in ecology. In particular, the integration of genome-scale data within quantitative models of microbial ecosystems remains elusive. This study advocates the use of constraint-based modeling to build predictive models from recent high-resolution -omics datasets. Following recent studies that have demonstrated the accuracy of constraint-based models (CBMs) for simulating single-strain metabolic networks, we sought to study microbial ecosystems as a combination of single-strain metabolic networks that exchange nutrients. This study presents two multi-objective extensions of CBMs for modeling communities: multi-objective flux balance analysis (MO-FBA) and multi-objective flux variability analysis (MO-FVA). Both methods were applied to a hot spring mat model ecosystem. As a result, multiple trade-offs between nutrients and growth rates, as well as thermodynamically favorable relative abundances at the community level, were emphasized. We expect this approach to be used for integrating genomic information in microbial ecosystems. The resulting models will provide insights into behaviors (including diversity) that take place at the ecosystem scale.
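At its core, each community member's model is a linear program; a tiny sketch (toy stoichiometry, not the hot-spring mat reconstruction) with a weighted-sum scan that gestures at the MO-FBA trade-off front:

    import numpy as np
    from scipy.optimize import linprog

    # Steady-state mass balance S v = 0 with flux bounds; maximize a
    # "biomass" flux v3 by minimizing its negative.
    S = np.array([[ 1, -1,  0, -1],   # metabolite A: made by v1, drained by v2, v4
                  [ 0,  1, -1,  0]])  # metabolite B: made by v2, drained by v3
    bounds = [(0, 10), (0, None), (0, None), (0, None)]
    c = np.zeros(4); c[2] = -1.0
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    print(res.x, -res.fun)

    # Weighted-sum scan over two objectives (v2 vs v3) sketches the kind of
    # trade-off front MO-FBA computes between community members' objectives.
    for w in np.linspace(0, 1, 5):
        c = np.zeros(4); c[1], c[2] = -w, -(1 - w)
        pt = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
        print(w, pt.x[1], pt.x[2])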
NASA Astrophysics Data System (ADS)
Gliese, U.; Avanov, L. A.; Barrie, A.; Kujawski, J. T.; Mariano, A. J.; Tucker, C. J.; Chornay, D. J.; Cao, N. T.; Zeuch, M.; Pollock, C. J.; Jacques, A. D.
2013-12-01
The Fast Plasma Investigation (FPI) of the NASA Magnetospheric Multiscale (MMS) mission employs 16 Dual Electron Spectrometers (DESs) and 16 Dual Ion Spectrometers (DISs), with 4 of each type on each of 4 spacecraft, to enable fast (30 ms for electrons; 150 ms for ions) and spatially differentiated measurements of the full 3D particle velocity distributions. This approach presents a new and challenging aspect to the calibration and operation of these instruments on the ground and in flight. The response uniformity, the reliability of their calibration, and the approach to handling any temporal evolution of these calibrated characteristics all assume enhanced importance in this application, where we attempt to understand the meaning of particle distributions within the ion and electron diffusion regions. Traditionally, the micro-channel plate (MCP) based detection systems for electrostatic particle spectrometers have been calibrated by setting a fixed detection threshold and, subsequently, measuring a detection system count rate plateau curve to determine the MCP voltage that ensures the count rate has reached a constant value independent of further variation in the MCP voltage. This is achieved when most of the MCP pulse height distribution (PHD) is located at higher values (larger pulses) than the detection amplifier threshold. This method is adequate in single-channel detection systems and in multi-channel detection systems with very low crosstalk between channels. However, in dense multi-channel systems, it can be inadequate. Furthermore, it fails to fully and individually characterize each of the fundamental parameters of the detection system. We present a new detection system calibration method that enables accurate and repeatable measurement and calibration of MCP gain, MCP efficiency, signal loss due to variation in gain and efficiency, crosstalk from effects both above and below the MCP, noise margin, and stability margin in one single measurement. The fundamental concepts of this method, named the threshold scan, will be presented. It will be shown how to derive all the individual detection system parameters. This new method has been successfully applied to achieve a highly accurate calibration of the 16 Dual Electron Spectrometers and 16 Dual Ion Spectrometers of the MMS mission. The practical application of the method will be presented together with the achieved calibration results and their significance. Finally, it will be shown how this method will be applied to ensure the best possible in-flight calibration during the mission.
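Why a threshold scan is richer than a plateau curve can be seen numerically: the count rate versus discriminator threshold is the integral of the pulse height distribution, so differentiating the scan recovers the PHD and hence the gain (a numerical illustration in our notation, not the FPI flight procedure; the Gaussian PHD and noise term are assumptions):

    import numpy as np
    from scipy.special import erf

    thresholds = np.linspace(0, 4e6, 200)     # discriminator setting, electrons
    gain, width = 2e6, 0.6e6                  # assumed MCP gain and PHD width

    # Measured rate = fraction of the PHD above threshold, plus a
    # low-amplitude noise contribution at small thresholds.
    rate = 0.5 * (1 - erf((thresholds - gain) / (np.sqrt(2) * width))) \
           + 0.05 * np.exp(-thresholds / 2e5)

    phd = -np.gradient(rate, thresholds)      # PHD recovered by differentiation
    print(thresholds[np.argmax(phd)])         # peak sits near the assumed gain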
Multicolor Super-Resolution Fluorescence Imaging via Multi-Parameter Fluorophore Detection
Bates, Mark; Dempsey, Graham T; Chen, Kok Hao; Zhuang, Xiaowei
2012-01-01
Understanding the complexity of the cellular environment will benefit from the ability to unambiguously resolve multiple cellular components, simultaneously and with nanometer-scale spatial resolution. Multicolor super-resolution fluorescence microscopy techniques have been developed to achieve this goal, yet challenges remain in terms of the number of targets that can be simultaneously imaged and the crosstalk between color channels. Herein, we demonstrate multicolor stochastic optical reconstruction microscopy (STORM) based on a multi-parameter detection strategy, which uses both the fluorescence activation wavelength and the emission color to discriminate between photo-activatable fluorescent probes. First, we obtained two-color super-resolution images using the near-infrared cyanine dye Alexa 750 in conjunction with a red cyanine dye Alexa 647, and quantified color crosstalk levels and image registration accuracy. Combinatorial pairing of these two switchable dyes with fluorophores which enhance photo-activation enabled multi-parameter detection of six different probes. Using this approach, we obtained six-color super-resolution fluorescence images of a model sample. The combination of multiple fluorescence detection parameters for improved fluorophore discrimination promises to substantially enhance our ability to visualize multiple cellular targets with sub-diffraction-limit resolution. PMID:22213647
Determining the Supply of Material Resources for High-Rise Construction: Scenario Approach
NASA Astrophysics Data System (ADS)
Minnullina, Anna; Vasiliev, Vladimir
2018-03-01
This article presents a multi-criteria approach to determining the supply of material resources for high-rise construction under conditions of certainty and uncertainty, which enables the integration of a number of existing models into a fairly compact generalised economic and mathematical model developed for two extreme scenarios.
Peng, Huan-Kai; Marculescu, Radu
2015-01-01
Social media exhibit rich yet distinct temporal dynamics which cover a wide range of different scales. In order to study this complex dynamics, two fundamental questions revolve around (1) the signatures of social dynamics at different time scales, and (2) the way in which these signatures interact and form higher-level meanings. In this paper, we propose the Recursive Convolutional Bayesian Model (RCBM) to address both of these fundamental questions. The key idea behind our approach consists of constructing a deep-learning framework using specialized convolution operators that are designed to exploit the inherent heterogeneity of social dynamics. RCBM's runtime and convergence properties are guaranteed by formal analyses. Experimental results show that the proposed method outperforms the state-of-the-art approaches both in terms of solution quality and computational efficiency. Indeed, by applying the proposed method on two social network datasets, Twitter and Yelp, we are able to identify the compositional structures that can accurately characterize the complex social dynamics from these two social media. We further show that identifying these patterns can enable new applications such as anomaly detection and improved social dynamics forecasting. Finally, our analysis offers new insights on understanding and engineering social media dynamics, with direct applications to opinion spreading and online content promotion.
Single Cell Multi-Omics Technology: Methodology and Application.
Hu, Youjin; An, Qin; Sheu, Katherine; Trejo, Brandon; Fan, Shuxin; Guo, Ying
2018-01-01
In the era of precision medicine, multi-omics approaches enable the integration of data from diverse omics platforms, providing multi-faceted insight into the interrelation of these omics layers on disease processes. Single cell sequencing technology can dissect the genotypic and phenotypic heterogeneity of bulk tissue and promises to deepen our understanding of the underlying mechanisms governing both health and disease. Through modification and combination of single cell assays available for transcriptome, genome, epigenome, and proteome profiling, single cell multi-omics approaches have been developed to simultaneously and comprehensively study not only the unique genotypic and phenotypic characteristics of single cells, but also the combined regulatory mechanisms evident only at single cell resolution. In this review, we summarize the state-of-the-art single cell multi-omics methods and discuss their applications, challenges, and future directions.
Assessing Inter-Sectoral Climate Change Risks: The Role of ISIMIP
NASA Technical Reports Server (NTRS)
Rosenzweig, Cynthia; Arnell, Nigel W.; Ebi, Kristie L.; Lotze-Campen, Hermann; Raes, Frank; Rapley, Chris; Smith, Mark Stafford; Cramer, Wolfgang; Frieler, Katja; Reyer, Christopher P. O.;
2017-01-01
The aims of the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP) are to provide a framework for the intercomparison of global and regional-scale risk models within and across multiple sectors and to enable coordinated multi-sectoral assessments of different risks and their aggregated effects. The overarching goal is to use the knowledge gained to support adaptation and mitigation decisions that require regional or global perspectives within the context of facilitating transformations to enable sustainable development, despite inevitable climate shifts and disruptions. ISIMIP uses community-agreed sets of scenarios with standardized climate variables and socioeconomic projections as inputs for projecting future risks and associated uncertainties, within and across sectors. The results are consistent multi-model assessments of sectoral risks and opportunities that enable studies that integrate across sectors, providing support for implementation of the Paris Agreement under the United Nations Framework Convention on Climate Change.
Sparsity enabled cluster reduced-order models for control
NASA Astrophysics Data System (ADS)
Kaiser, Eurika; Morzyński, Marek; Daviller, Guillaume; Kutz, J. Nathan; Brunton, Bingni W.; Brunton, Steven L.
2018-01-01
Characterizing and controlling nonlinear, multi-scale phenomena are central goals in science and engineering. Cluster-based reduced-order modeling (CROM) was introduced to exploit the underlying low-dimensional dynamics of complex systems. CROM builds a data-driven discretization of the Perron-Frobenius operator, resulting in a probabilistic model for ensembles of trajectories. A key advantage of CROM is that it embeds nonlinear dynamics in a linear framework, which enables the application of standard linear techniques to the nonlinear system. CROM is typically computed on high-dimensional data; however, access to and computations on this full-state data limit the online implementation of CROM for prediction and control. Here, we address this key challenge by identifying a small subset of critical measurements to learn an efficient CROM, referred to as sparsity-enabled CROM. In particular, we leverage compressive measurements to faithfully embed the cluster geometry and preserve the probabilistic dynamics. Further, we show how to identify fewer optimized sensor locations tailored to a specific problem that outperform random measurements. Both of these sparsity-enabled sensing strategies significantly reduce the burden of data acquisition and processing for low-latency in-time estimation and control. We illustrate this unsupervised learning approach on three different high-dimensional nonlinear dynamical systems from fluids with increasing complexity, with one application in flow control. Sparsity-enabled CROM is a critical facilitator for real-time implementation on high-dimensional systems where full-state information may be inaccessible.
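The CROM backbone is compact enough to sketch (toy snapshot data; the sparsity-enabled variant described above would first compress each snapshot to a few optimized sensor readings before clustering):

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(5)
    T, n_state, k = 2000, 50, 5
    snapshots = np.sin(np.outer(np.arange(T) * 0.1, np.linspace(0, 3, n_state)))
    snapshots += 0.1 * rng.normal(size=snapshots.shape)

    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(snapshots)

    # Row-stochastic transition matrix between clusters: a discrete, linear
    # (Perron-Frobenius-like) model that propagates ensembles of trajectories.
    P = np.zeros((k, k))
    for i, j in zip(labels[:-1], labels[1:]):
        P[i, j] += 1
    P /= np.maximum(P.sum(axis=1, keepdims=True), 1)
    print(np.round(P, 2))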
Dimitrova, N; Nagaraj, A B; Razi, A; Singh, S; Kamalakaran, S; Banerjee, N; Joseph, P; Mankovich, A; Mittal, P; DiFeo, A; Varadan, V
2017-04-27
Characterizing the complex interplay of cellular processes in cancer would enable the discovery of key mechanisms underlying its development and progression. Published approaches to deciphering driver mechanisms do not explicitly model tissue-specific changes in pathway networks or the regulatory disruptions related to genomic aberrations in cancers. We therefore developed InFlo, a novel systems biology approach for characterizing complex biological processes using a unique multidimensional framework integrating transcriptomic, genomic and/or epigenomic profiles for any given cancer sample. We show that InFlo robustly characterizes tissue-specific differences in the activities of signalling networks on a genome scale using unique probabilistic models of molecular interactions on a per-sample basis. Using large-scale multi-omics cancer datasets, we show that InFlo exhibits higher sensitivity and specificity in detecting pathway networks associated with specific disease states when compared to published pathway network modelling approaches. Furthermore, InFlo's ability to infer the activity of unmeasured signalling network components was also validated using orthogonal gene expression signatures. We then evaluated multi-omics profiles of primary high-grade serous ovarian cancer tumours (N=357) to delineate mechanisms underlying resistance to frontline platinum-based chemotherapy. InFlo was the only algorithm to identify hyperactivation of the cAMP-CREB1 axis as a key mechanism associated with resistance to platinum-based therapy, a finding that we subsequently validated experimentally. We confirmed that inhibition of CREB1 phosphorylation potently sensitized resistant cells to platinum therapy and was effective in killing ovarian cancer stem cells that contribute to both platinum resistance and tumour recurrence. Thus, we propose InFlo as a scalable, widely applicable and robust integrative network modelling framework for the discovery of evidence-based biomarkers and therapeutic targets.
Multi-scale computational modeling of developmental biology.
Setty, Yaki
2012-08-01
Normal development of multicellular organisms is regulated by a highly complex process in which a set of precursor cells proliferate, differentiate and move, forming over time a functioning tissue. To handle their complexity, developmental systems can be studied over distinct scales. The dynamics of each scale is determined by the collective activity of entities at the scale below it. I describe a multi-scale computational approach for modeling developmental systems and detail the methodology through a synthetic example of a developmental system that retains key features of real developmental systems. I discuss the simulation of the system as it emerges from cross-scale and intra-scale interactions and describe how an in silico study can be carried out by modifying these interactions in a way that mimics in vivo experiments. I highlight biological features of the results through a comparison with findings in Caenorhabditis elegans germline development, and finally discuss the application of the approach to real developmental systems and propose future extensions. The source code of the model of the synthetic developmental system can be found at www.wisdom.weizmann.ac.il/~yaki/MultiScaleModel.
A multi-scale Q1/P0 approach to Lagrangian shock hydrodynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shashkov, Mikhail; Love, Edward; Scovazzi, Guglielmo
A new multi-scale, stabilized method for Q1/P0 finite element computations of Lagrangian shock hydrodynamics is presented. Instabilities (of hourglass type) are controlled by a stabilizing operator derived using the variational multi-scale analysis paradigm. The resulting stabilizing term takes the form of a pressure correction. With respect to currently implemented hourglass control approaches, the novelty of the method resides in its residual-based character. The stabilizing residual has a definite physical meaning, since it embeds a discrete form of the Clausius-Duhem inequality. Effectively, the proposed stabilization samples and acts to counter the production of entropy due to numerical instabilities. The proposed technique is applicable to materials with no shear strength, for which there exists a caloric equation of state. The stabilization operator is incorporated into a mid-point, predictor/multi-corrector time integration algorithm, which conserves mass, momentum and total energy. Encouraging numerical results in the context of compressible gas dynamics confirm the potential of the method.
Multi-scale imaging and informatics pipeline for in situ pluripotent stem cell analysis.
Gorman, Bryan R; Lu, Junjie; Baccei, Anna; Lowry, Nathan C; Purvis, Jeremy E; Mangoubi, Rami S; Lerou, Paul H
2014-01-01
Human pluripotent stem (hPS) cells are a potential source of cells for medical therapy and an ideal system in which to study fate decisions in early development. However, hPS cells cultured in vitro exhibit a high degree of heterogeneity, presenting an obstacle to clinical translation. hPS cells grow in spatially patterned colony structures, necessitating quantitative single-cell image analysis. We offer a tool for analyzing the spatial population context of hPS cells that integrates automated fluorescence microscopy with an analysis pipeline. It enables high-throughput detection of colonies at low resolution, with single-cell and sub-cellular analysis at high resolutions, generating seamless in situ maps of single-cell data organized by colony. We demonstrate the tool's utility by analyzing inter- and intra-colony heterogeneity of hPS cell cycle regulation and pluripotency marker expression. We measured the heterogeneity within individual colonies by analyzing cell cycle as a function of distance. Cells loosely associated with the outside of the colony are more likely to be in G1, reflecting a less pluripotent state, while cells within the first pluripotent layer are more likely to be in G2, possibly reflecting a G2/M block. Our multi-scale analysis tool groups colony regions into density classes, and cells belonging to those classes have distinct distributions of pluripotency markers and respond differently to DNA damage induction. Lastly, we demonstrate that our pipeline can robustly handle high-content, high-resolution single-molecule mRNA FISH data by using novel image processing techniques. Overall, the imaging informatics pipeline presented offers a novel approach to the analysis of hPS cells that includes not only single-cell features but also colony-wide, and more generally, multi-scale spatial configuration.
A Matter of Scale: Multi-Scale Ethnographic Research on Education in the United States
ERIC Educational Resources Information Center
Eisenhart, Margaret
2017-01-01
In recent years, cultural anthropologists conducting educational ethnographies in the US have pursued some new methodological approaches. These new approaches can be attributed to advances in cultural theory, evolving norms of research practice, and the affordances of new technologies. In this article, I review three such approaches under the…
Zeller, Katherine A; Vickers, T Winston; Ernest, Holly B; Boyce, Walter M
2017-01-01
The importance of examining multiple hierarchical levels when modeling resource use for wildlife has been acknowledged for decades. Multi-level resource selection functions have recently been promoted as a method to synthesize resource use across nested organizational levels into a single predictive surface. Analyzing multiple scales of selection within each hierarchical level further strengthens multi-level resource selection functions. We extend this multi-level, multi-scale framework to modeling resistance for wildlife by combining multi-scale resistance surfaces from two data types, genetic and movement. Resistance estimation has typically been conducted with one of these data types, or compared between the two. However, we contend it is not an either/or issue and that resistance may be better-modeled using a combination of resistance surfaces that represent processes at different hierarchical levels. Resistance surfaces estimated from genetic data characterize temporally broad-scale dispersal and successful breeding over generations, whereas resistance surfaces estimated from movement data represent fine-scale travel and contextualized movement decisions. We used telemetry and genetic data from a long-term study on pumas (Puma concolor) in a highly developed landscape in southern California to develop a multi-level, multi-scale resource selection function and a multi-level, multi-scale resistance surface. We used these multi-level, multi-scale surfaces to identify resource use patches and resistant kernel corridors. Across levels, we found puma avoided urban, agricultural areas, and roads and preferred riparian areas and more rugged terrain. For other landscape features, selection differed among levels, as did the scales of selection for each feature. With these results, we developed a conservation plan for one of the most isolated puma populations in the U.S. Our approach captured a wide spectrum of ecological relationships for a population, resulted in effective conservation planning, and can be readily applied to other wildlife species.
The LSST Data Mining Research Agenda
NASA Astrophysics Data System (ADS)
Borne, K.; Becla, J.; Davidson, I.; Szalay, A.; Tyson, J. A.
2008-12-01
We describe features of the LSST science database that are amenable to scientific data mining, object classification, outlier identification, anomaly detection, image quality assurance, and survey science validation. The data mining research agenda includes: scalability (at petabyte scales) of existing machine learning and data mining algorithms; development of grid-enabled parallel data mining algorithms; designing a robust system for brokering classifications from the LSST event pipeline (which may produce 10,000 or more event alerts per night); multi-resolution methods for exploration of petascale databases; indexing of multi-attribute, multi-dimensional astronomical databases (beyond spatial indexing) for rapid querying of petabyte databases; and more.
Advancing working and learning through critical action research: creativity and constraints.
Bellman, Loretta; Bywood, Catherine; Dale, Susan
2003-12-01
Continuous professional development is an essential component within many health care 'Learning Organisations'. The paper describes the first phase of an initiative to develop a professional practice development framework for nurses in an NHS general hospital. The project was undertaken within a critical action research methodology. A tripartite arrangement between the hospital, a university and a professional nursing organisation enabled clinical, educational and research support for the nurses (co-researchers) engaged in the project. Initial challenges came from some managers, educationalists and the ethics committee, who did not appear to understand the action research process. A multi-method approach to data collection was undertaken to capture the change process from different stakeholders' perceptions, and the data were triangulated. Despite organisational constraints, transformational leadership and peer support enabled the co-researchers to identify and initiate three patient-focused initiatives. The change process for the co-researchers included: an enlightening personal journey, exploring the research-practice gap, enhancing personal and professional knowledge, evolving cultural change and collaborative working, and empowering and disempowering messages. A hospital merger and corporate staff changes directly impacted on the project. A more flexible time-scale and longer-term funding are required to enable continuity for trust-wide projects undertaken in dynamic clinical settings.
NASA Astrophysics Data System (ADS)
Wei, Hongqiang; Zhou, Guiyun; Zhou, Junjie
2018-04-01
The classification of leaf and wood points is an essential preprocessing step for extracting inventory measurements and canopy characterizations of trees from terrestrial laser scanning (TLS) data. The geometry-based approach is one of the most widely used classification methods. In geometry-based methods, it is common practice to extract salient features at one single scale before the features are used for classification. It remains unclear how the scale(s) used affect classification accuracy and efficiency. To assess the scale effect on classification accuracy and efficiency, we extracted single-scale and multi-scale salient features from the point clouds of two oak trees of different sizes and classified the points as leaf or wood. Our experimental results show that the balanced accuracy of the multi-scale method is higher than the average balanced accuracy of the single-scale method by about 10% for both trees. The average speed-up ratio of the single-scale classifiers over the multi-scale classifier exceeds 30 for each tree.
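The multi-scale geometric features described above can be sketched as follows; this is an illustrative reconstruction, not the authors' code, and the radii, the saliency definitions (linearity/planarity/scattering from local covariance eigenvalues) and the function names are assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def saliency_features(points, radii=(0.05, 0.10, 0.25)):
    """Per-point features stacked over neighborhood scales, shape (n, 3*len(radii))."""
    tree = cKDTree(points)
    feats = []
    for r in radii:
        f = np.zeros((len(points), 3))
        for i, p in enumerate(points):
            idx = tree.query_ball_point(p, r)
            if len(idx) < 3:
                continue  # too few neighbors at this scale; leave zeros
            # Eigenvalues of the local 3x3 covariance, sorted descending
            w = np.linalg.eigvalsh(np.cov(points[idx].T))[::-1]
            w = w / w.sum()
            # Classic linearity / planarity / scattering saliencies
            f[i] = [w[0] - w[1], w[1] - w[2], w[2]]
        feats.append(f)
    return np.hstack(feats)
```

Stacking the per-scale features gives a classifier the scale-dependent context that a single-scale extraction lacks, at the cost of extra neighborhood queries per point.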
Multi-scale modelling of elastic moduli of trabecular bone
Hamed, Elham; Jasiuk, Iwona; Yoo, Andrew; Lee, YikHan; Liszka, Tadeusz
2012-01-01
We model trabecular bone as a nanocomposite material with hierarchical structure and predict its elastic properties at different structural scales. The analysis involves a bottom-up multi-scale approach, starting with nanoscale (mineralized collagen fibril) and moving up the scales to sub-microscale (single lamella), microscale (single trabecula) and mesoscale (trabecular bone) levels. Continuum micromechanics methods, composite materials laminate theory and finite-element methods are used in the analysis. Good agreement is found between theoretical and experimental results. PMID:22279160
May, Christian P; Kolokotroni, Eleni; Stamatakos, Georgios S; Büchler, Philippe
2011-10-01
Modeling of tumor growth has been performed according to various approaches addressing different biocomplexity levels and spatiotemporal scales. Mathematical treatments range from partial differential equation based diffusion models to rule-based cellular level simulators, aiming at both improving our quantitative understanding of the underlying biological processes and, in the mid- and long term, constructing reliable multi-scale predictive platforms to support patient-individualized treatment planning and optimization. The aim of this paper is to establish a multi-scale and multi-physics approach to tumor modeling taking into account both the cellular and the macroscopic mechanical level. Therefore, an already developed biomodel of clinical tumor growth and response to treatment is self-consistently coupled with a biomechanical model. Results are presented for the free growth case of the imageable component of an initially point-like glioblastoma multiforme tumor. The composite model leads to significant tumor shape corrections that are achieved through the utilization of environmental pressure information and the application of biomechanical principles. Using the ratio of smallest to largest moment of inertia of the tumor material to quantify the effect of our coupled approach, we have found a tumor shape correction of 20% by coupling biomechanics to the cellular simulator as compared to a cellular simulation without preferred growth directions. We conclude that the integration of the two models provides additional morphological insight into realistic tumor growth behavior. Therefore, it might be used for the development of an advanced oncosimulator focusing on tumor types for which morphology plays an important role in surgical and/or radio-therapeutic treatment planning. Copyright © 2011 Elsevier Ltd. All rights reserved.
A multi-scale framework to link remotely sensed metrics with socioeconomic data
NASA Astrophysics Data System (ADS)
Watmough, Gary; Svenning, Jens-Christian; Palm, Cheryl; Sullivan, Clare; Danylo, Olha; McCallum, Ian
2017-04-01
There is increasing interest in the use of remotely sensed satellite data for estimating human poverty, as it can bridge data gaps that prevent fine-scale monitoring of development goals across large areas. The ways in which metrics derived from satellite imagery are linked with socioeconomic data are crucial for accurate estimation of poverty. Yet, to date, approaches in the literature linking satellite metrics with socioeconomic data are poorly characterized. Typically, studies use a GIS construct such as a circular buffer zone around a village or household, or an administrative boundary such as a district or census enumeration area. These polygons are then used to extract environmental data from satellite imagery, which are related to the socioeconomic data in statistical analyses. The use of a single polygon to link environmental and socioeconomic data is inappropriate in coupled human-natural systems, as processes operate over multiple scales. Human interactions with the environment occur at multiple levels, from individual (household) access to agricultural plots adjacent to homes, to communal access to common pool resources (CPR) such as forests at the village level. Here, we present a multi-scale framework that explicitly considers how people use the landscape. The framework is presented along with a case study example in Kenya. The multi-scale approach could enhance the modelling of human-environment interactions, which will have important consequences for monitoring the sustainable development goals for human livelihoods and biodiversity conservation.
A metadata reporting framework (FRAMES) for synthesis of ecohydrological observations
Christianson, Danielle S.; Varadharajan, Charuleka; Christoffersen, Bradley; ...
2017-06-20
Metadata describe the ancillary information needed for data interpretation, comparison across heterogeneous datasets, and quality control and quality assessment (QA/QC). Metadata enable the synthesis of diverse ecohydrological and biogeochemical observations, an essential step in advancing a predictive understanding of earth systems. Environmental observations can be taken across a wide range of spatiotemporal scales in a variety of measurement settings and approaches, and saved in multiple formats. Thus, well-organized, consistent metadata are required to produce usable data products from diverse observations collected in disparate field sites. However, existing metadata reporting protocols do not support the complex data synthesis needs of interdisciplinary earth system research. We developed a metadata reporting framework (FRAMES) to enable predictive understanding of carbon cycling in tropical forests under global change. FRAMES adheres to best practices for data and metadata organization, enabling consistent data reporting and thus compatibility with a variety of standardized data protocols. We used an iterative scientist-centered design process to develop FRAMES. The resulting modular organization streamlines metadata reporting and can be expanded to incorporate additional data types. The flexible data reporting format incorporates existing field practices to maximize data-entry efficiency. With FRAMES's multi-scale measurement position hierarchy, data can be reported at observed spatial resolutions and then easily aggregated and linked across measurement types to support model-data integration. FRAMES is in early use by both data providers and users. In this article, we describe FRAMES, identify lessons learned, and discuss areas of future development.
NASA Astrophysics Data System (ADS)
Tamayo-Mas, Elena; Bianchi, Marco; Mansour, Majdi
2018-03-01
This study investigates the impact of model complexity and multi-scale prior hydrogeological data on the interpretation of pumping test data in a dual-porosity aquifer (the Chalk aquifer in England, UK). To characterize the hydrogeological properties, different approaches ranging from a traditional analytical solution (the Theis approach) to more sophisticated numerical models with automatically calibrated input parameters are applied. Comparisons of results from the different approaches show that neither traditional analytical solutions nor a numerical model assuming a homogeneous and isotropic aquifer can adequately explain the observed drawdowns. A better reproduction of the observed drawdowns in all seven monitoring locations is instead achieved when medium- and local-scale prior information about the vertical hydraulic conductivity (K) distribution is used to constrain the model calibration process. In particular, the integration of medium-scale vertical K variations based on flowmeter measurements led to an improvement in the goodness-of-fit of the simulated drawdowns of about 30%. Further improvements (up to 70%) were observed when a simple upscaling approach was used to integrate small-scale K data to constrain the automatic calibration process of the numerical model. Although the analysis focuses on a specific case study, these results provide insights into the representativeness of estimates of hydrogeological properties based on different interpretations of pumping test data, and promote the integration of multi-scale data for the characterization of heterogeneous aquifers in complex hydrogeological settings.
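As a rough illustration of the kind of upscaling mentioned above, small-scale K measurements are often combined via geometric or harmonic means depending on flow orientation relative to layering; the study's exact scheme is not reproduced here, and the sample values are invented:

```python
import numpy as np

# Illustrative small-scale hydraulic conductivity samples (m/s)
k_samples = np.array([2.1e-6, 8.4e-5, 3.0e-7, 5.2e-6])

# Common rules of thumb for an effective layer value:
k_geometric = np.exp(np.mean(np.log(k_samples)))       # flow parallel to layering
k_harmonic = len(k_samples) / np.sum(1.0 / k_samples)  # flow perpendicular to layering
print(f"geometric: {k_geometric:.2e} m/s, harmonic: {k_harmonic:.2e} m/s")
```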
Multi-Scale Computational Models for Electrical Brain Stimulation
Seo, Hyeon; Jun, Sung C.
2017-01-01
Electrical brain stimulation (EBS) is an appealing method to treat neurological disorders. To achieve optimal stimulation effects and a better understanding of the underlying brain mechanisms, neuroscientists have pursued computational modeling studies for a decade. Recently, multi-scale models that combine a volume conductor head model and multi-compartmental models of cortical neurons have been developed to predict stimulation effects on the macroscopic and microscopic levels more precisely. As the need for better computational models continues to increase, we review here recent multi-scale modeling studies, focusing on approaches that couple a simplified or high-resolution volume conductor head model with multi-compartmental models of cortical neurons and that construct realistic fiber models using diffusion tensor imaging (DTI). Further implications for achieving better precision in estimating cellular responses are discussed. PMID:29123476
NASA Astrophysics Data System (ADS)
Queiros-Conde, D.; Foucher, F.; Mounaïm-Rousselle, C.; Kassem, H.; Feidt, M.
2008-12-01
Multi-scale features of turbulent flames near a wall display two kinds of scale-dependent fractal features. In scale-space, a unique fractal dimension cannot be defined, and the fractal dimension of the front is scale-dependent. Moreover, when the front approaches the wall, this dependency changes: the fractal dimension also depends on the wall-distance. Our aim here is to propose a general geometrical framework that makes it possible to integrate these two cases, in order to describe the multi-scale structure of turbulent flames interacting with a wall. Based on the scale-entropy quantity, which is simply linked to the roughness of the front, we introduce a general scale-entropy diffusion equation. We define the notion of “scale-evolutivity”, which characterises the deviation of a multi-scale system from pure fractal behaviour. The specific case of a constant scale-evolutivity over the scale-range is studied. In this case, called “parabolic scaling”, the fractal dimension is a linear function of the logarithm of scale. The case of a constant scale-evolutivity in the wall-distance space implies that the fractal dimension depends linearly on the logarithm of the wall-distance. We then verified experimentally that parabolic scaling represents a good approximation of the real multi-scale features of turbulent flames near a wall.
NASA Astrophysics Data System (ADS)
Hamann, S.; Börner, K.; Burlacov, I.; Spies, H.-J.; Strämke, M.; Strämke, S.; Röpcke, J.
2015-12-01
A laboratory-scale plasma nitriding monitoring reactor (PLANIMOR) has been designed to study the basics of active screen plasma nitriding (ASPN) processes. PLANIMOR consists of a tube reactor vessel, made of borosilicate glass, enabling optical emission spectroscopy (OES) and infrared absorption spectroscopy. The linear setup of the electrode system of the reactor has the advantage that each part of the plasma process can be probed separately by the diagnostic approaches. Furthermore, possible changes of the electrical field and of the heat generation, as they could appear in down-scaled cylindrical ASPN reactors, are avoided. PLANIMOR has been used for the nitriding of steel samples, achieving similar results as in an industrial-scale ASPN reactor. A compact spectrometer using an external cavity quantum cascade laser combined with an optical multi-pass cell has been applied for the detection of molecular reaction products. This allowed the determination of the concentrations of four stable molecular species (CH4, C2H2, HCN, and NH3). With the help of OES, the rotational temperature of the screen plasma could be determined.
GPU-accelerated Red Blood Cells Simulations with Transport Dissipative Particle Dynamics.
Blumers, Ansel L; Tang, Yu-Hang; Li, Zhen; Li, Xuejin; Karniadakis, George E
2017-08-01
Mesoscopic numerical simulations provide a unique approach for the quantification of chemical influences on red blood cell functionalities. The transport Dissipative Particle Dynamics (tDPD) method can enable such effective multiscale simulations due to its ability to simultaneously capture mesoscopic advection, diffusion, and reaction. In this paper, we present a GPU-accelerated red blood cell simulation package based on a tDPD adaptation of our red blood cell model, which can correctly recover the cell membrane viscosity, elasticity, bending stiffness, and cross-membrane chemical transport. The package processes essentially all computational workloads in parallel on the GPU, and it incorporates multi-stream scheduling and non-blocking MPI communications to improve inter-node scalability. Our code is validated for accuracy and compared against the CPU counterpart for speed. Strong scaling and weak scaling results are also presented to characterize scalability. We observe a speedup of 10.1 on one GPU over all 16 cores within a single node, and a weak scaling efficiency of 91% across 256 nodes. The program enables quick-turnaround and high-throughput numerical simulations for investigating chemical-driven red blood cell phenomena and disorders.
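For readers unfamiliar with the scaling metrics quoted above, a minimal sketch of the standard definitions follows; the timings are placeholders chosen only to reproduce the reported figures:

```python
def speedup(t_ref, t_new):
    """Ratio of reference time to new time."""
    return t_ref / t_new

def strong_scaling_eff(t1, tn, n):
    """Ideal strong scaling: fixed total work, time drops as 1/n."""
    return t1 / (n * tn)

def weak_scaling_eff(t1, tn):
    """Ideal weak scaling: work grows with n, time stays constant."""
    return t1 / tn

print(speedup(16.0, 1.58))               # ~10.1x, e.g. one GPU vs 16 CPU cores
print(weak_scaling_eff(1.0, 1.0 / 0.91)) # ~0.91, e.g. 91% across 256 nodes
```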
NASA Astrophysics Data System (ADS)
Ghosh, Sayantan; Manimaran, P.; Panigrahi, Prasanta K.
2011-11-01
We make use of the wavelet transform to study the multi-scale, self-similar behavior, and deviations thereof, in the stock prices of large companies belonging to different economic sectors. The stock market returns exhibit multi-fractal characteristics, with some of the companies showing deviations at small and large scales. The fact that wavelets of the Daubechies (Db) basis enable one to isolate local polynomial trends of different degrees plays the key role in isolating fluctuations at different scales. One of the primary motivations of this work is to study the emergence of the k^-3 behavior [X. Gabaix, P. Gopikrishnan, V. Plerou, H. Stanley, A theory of power law distributions in financial market fluctuations, Nature 423 (2003) 267-270] of the fluctuations starting with high-frequency fluctuations. We make use of the Db4 and Db6 basis sets to isolate, respectively, local linear and quadratic trends at different scales in order to study the statistical characteristics of these financial time series. The fluctuations reveal fat-tailed non-Gaussian behavior and unstable periodic modulations at finer scales, from which the characteristic k^-3 power-law behavior emerges at sufficiently large scales. We further identify stable periodic behavior through the continuous Morlet wavelet.
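A hedged sketch of the scale-wise decomposition idea, using PyWavelets on a synthetic return series; note that PyWavelets' dbN naming counts vanishing moments, so the correspondence to the paper's Db4/Db6 labels is an assumption:

```python
import numpy as np
import pywt

# Synthetic stand-in for a price series and its log-returns
prices = np.cumsum(np.random.randn(4096)) + 100.0
returns = np.diff(np.log(prices))

for basis in ("db4", "db6"):
    coeffs = pywt.wavedec(returns, basis, level=6)
    # coeffs[0] is the coarse approximation; coeffs[1:] are detail
    # (fluctuation) coefficients from the largest to the smallest scale.
    for lvl, d in enumerate(coeffs[1:], start=1):
        print(basis, "level", lvl, "fluctuation std:", d.std())
```

The per-level detail coefficients isolate fluctuations after the local polynomial trend (linear or quadratic, depending on the basis) has been removed at that scale.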
NASA Technical Reports Server (NTRS)
Sullivan, Steven J.
2014-01-01
"Rocket University" is an exciting new initiative at Kennedy Space Center led by NASA's Engineering and Technology Directorate. This hands-on experience has been established to develop, refine & maintain targeted flight engineering skills to enable the Agency and KSC strategic goals. Through "RocketU", KSC is developing a nimble, rapid flight engineering life cycle systems knowledge base. Ongoing activities in RocketU develop and test new technologies and potential customer systems through small scale vehicles, build and maintain flight experience through balloon and small-scale rocket missions, and enable a revolving fresh perspective of engineers with hands on expertise back into the large scale NASA programs, providing a more experienced multi-disciplined set of systems engineers. This overview will define the Program, highlight aspects of the training curriculum, and identify recent accomplishments and activities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-09-25
The Megatux platform enables the emulation of large scale (multi-million node) distributed systems. In particular, it allows for the emulation of large-scale networks interconnecting a very large number of emulated computer systems. It does this by leveraging virtualization and associated technologies to allow hundreds of virtual computers to be hosted on a single moderately sized server or workstation. Virtualization technology provided by modern processors allows for multiple guest OSs to run at the same time, sharing the hardware resources. The Megatux platform can be deployed on a single PC, a small cluster of a few boxes or a large cluster of computers. With a modest cluster, the Megatux platform can emulate complex organizational networks. By using virtualization, we emulate the hardware, but run actual software enabling large scale without sacrificing fidelity.
NASA Astrophysics Data System (ADS)
Barrineau, C. P.; Dobreva, I. D.; Bishop, M. P.; Houser, C.
2014-12-01
Aeolian systems are ideal natural laboratories for examining self-organization in patterned landscapes, as certain wind regimes generate certain morphologies. Topographic information and scale-dependent analysis offer the opportunity to study such systems and characterize process-form relationships. A statistically based methodology for differentiating aeolian features would enable the quantitative association of certain surface characteristics with certain morphodynamic regimes. We conducted a multi-resolution analysis of LiDAR elevation data to assess scale-dependent morphometric variations in an aeolian landscape in South Texas. For each pixel, mean elevation values are calculated along concentric circles moving outward at 100-meter intervals (i.e. 500 m, 600 m, 700 m from the pixel). The average elevation values, plotted against distance from the pixel of interest as curves, are used to differentiate multi-scalar variations in elevation across the landscape. It is hypothesized that these curves may be used to quantitatively differentiate certain morphometries from others, much as a spectral signature may be used to classify paved surfaces from natural vegetation, for example. After generating multi-resolution curves for all the pixels in a selected area of interest (AOI), a Principal Components Analysis is used to highlight commonalities and singularities between the curves generated from pixels across the AOI. Our findings suggest that the resulting components could be used for identification of discrete aeolian features like open sands, trailing ridges and active dune crests, and, in particular, zones of deflation. This new approach to landscape characterization not only works to mitigate bias introduced when researchers must select training pixels for morphometric investigations, but can also reveal patterning in aeolian landscapes that would not be as obvious without quantitative characterization.
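The ring-averaged elevation curves and the PCA step can be sketched as below; the grid spacing, ring counts and stand-in DEM are illustrative assumptions, not the study's parameters:

```python
import numpy as np
from sklearn.decomposition import PCA

def ring_curve(dem, row, col, cell=10.0, r0=500.0, dr=100.0, nrings=8):
    """Mean elevation on concentric rings (width dr) centered on one pixel."""
    rows, cols = np.indices(dem.shape)
    dist = np.hypot(rows - row, cols - col) * cell  # distance in meters
    curve = []
    for k in range(nrings):
        r = r0 + k * dr
        ring = (dist >= r - dr / 2) & (dist < r + dr / 2)
        curve.append(dem[ring].mean())
    return np.array(curve)

dem = np.random.rand(200, 200) * 5.0           # stand-in elevation grid
pix = [(r, c) for r in range(50, 150, 10) for c in range(50, 150, 10)]
X = np.vstack([ring_curve(dem, r, c) for r, c in pix])
pcs = PCA(n_components=3).fit_transform(X)     # components separate curve shapes
```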
Tools and Approaches for the Construction of Knowledge Models from the Neuroscientific Literature
Burns, Gully A. P. C.; Khan, Arshad M.; Ghandeharizadeh, Shahram; O’Neill, Mark A.; Chen, Yi-Shin
2015-01-01
Within this paper, we describe a neuroinformatics project (called “NeuroScholar,” http://www.neuroscholar.org/) that enables researchers to examine, manage, manipulate, and use the information contained within the published neuroscientific literature. The project is built within a multi-level, multi-component framework constructed with the use of software engineering methods that themselves provide code-building functionality for neuroinformaticians. We describe the different software layers of the system. First, we present a hypothetical usage scenario illustrating how NeuroScholar permits users to address large-scale questions in a way that would otherwise be impossible. We do this by applying NeuroScholar to a “real-world” neuroscience question: How is stress-related information processed in the brain? We then explain how the overall design of NeuroScholar enables the system to work and illustrate different components of the user interface. We then describe the knowledge management strategy we use to store interpretations. Finally, we describe the software engineering framework we have devised (called the “View-Primitive-Data Model framework,” [VPDMf]) to provide an open-source, accelerated software development environment for the project. We believe that NeuroScholar will be useful to experimental neuroscientists by helping them interact with the primary neuroscientific literature in a meaningful way, and to neuroinformaticians by providing them with useful, affordable software engineering tools. PMID:15055395
Chen, Hai; Liang, Xiaoying; Li, Rui
2013-01-01
Multi-Agent Systems (MAS) offer a conceptual approach for including multi-actor decision making in models of land use change. Through MAS-based simulation, this paper demonstrates the application of MAS to micro-scale land use and cover change (LUCC) and reveals the mechanisms of transformation across scales. The paper starts with a description of the context of MAS research. It then adopts the Nested Spatial Choice (NSC) method to construct a multi-scale LUCC decision-making model, and reports a case study for Mengcha village, Mizhi County, Shaanxi Province. Finally, the potentials and drawbacks of the approach are discussed. From our design and implementation of the MAS in a multi-scale model, a number of observations and conclusions can be drawn on the implementation and future research directions. (1) The use of the LUCC decision-making and multi-scale transformation framework provides, in our view, a more realistic modeling of the multi-scale decision-making process. (2) Using continuous rather than discrete functions to model household decision-making reflects the relevant effects more realistically. (3) This paper attempts a quantitative analysis of household interactions, providing the premise and foundation for researching communication and learning among households. (4) The scale transformation architecture constructed in this paper helps to accumulate theory and experience for research on the interaction between micro land use decision-making and the macro land use landscape pattern. Our future research will focus on: (1) how to rationally use the risk-aversion principle and incorporate the rule on rotation between household parcels into the model; (2) exploring methods for studying household decision-making over long periods, allowing us to bridge long-term LUCC data and short-term household decision-making; and (3) developing quantitative methods and models, especially scenario analysis models that reflect the interactions among different household types.
Multi-User Hardware Solutions to Combustion Science ISS Research
NASA Technical Reports Server (NTRS)
Otero, Angel M.
2001-01-01
In response to the budget environment and to expand on the International Space Station (ISS) Fluids and Combustion Facility (FCF) Combustion Integrated Rack (CIR) common hardware approach, the NASA Combustion Science Program shifted focus in 1999 from single-investigator, PI (Principal Investigator)-specific hardware to multi-user 'mini-facilities'. These mini-facilities would take the CIR common hardware philosophy to the next level. The approach that was developed re-arranged all the investigations in the program into sub-fields of research; common requirements within these sub-fields were then used to develop a common system complemented by a few PI-specific components. The sub-fields of research selected were droplet combustion, solids and fire safety, and gaseous fuels. From these research areas three mini-facilities have sprung: the Multi-user Droplet Combustion Apparatus (MDCA) for droplet research, the Flow Enclosure for Novel Investigations in Combustion of Solids (FEANICS) for solids and fire safety, and the Multi-user Gaseous Fuels Apparatus (MGFA) for gaseous fuels. These mini-facilities will develop common Chamber Insert Assemblies (CIA) and diagnostics for the respective investigators, complementing the capability provided by CIR. Presently there are four investigators for MDCA, six for FEANICS, and four for MGFA. The goal of these multi-user facilities is to drive the cost per PI down after the initial development investment is made. Each of these mini-facilities will become a fixture of future Combustion Science NASA Research Announcements (NRAs), enabling investigators to propose against an existing capability. Additionally, an investigation is provided the opportunity to enhance the existing capability to bridge the gap between the capability and their specific science requirements. This multi-user development approach will enable the Combustion Science Program to drive cost per investigation down while drastically reducing the time required to go from selection to space flight.
Multi-fluid Dynamics for Supersonic Jet-and-Crossflows and Liquid Plug Rupture
NASA Astrophysics Data System (ADS)
Hassan, Ezeldin A.
Multi-fluid dynamics simulations require appropriate numerical treatments based on the main flow characteristics, such as flow speed, turbulence, thermodynamic state, and time and length scales. In this thesis, two distinct problems are investigated: supersonic jet and crossflow interactions, and liquid plug propagation and rupture in an airway. Simulation of a gaseous non-reactive ethylene jet in an air crossflow represents essential physics for fuel injection in SCRAMJET engines. The regime is highly unsteady, involving shocks, turbulent mixing, and large-scale vortical structures. An eddy-viscosity-based multi-scale turbulence model is proposed to resolve turbulent structures consistent with grid resolution and turbulence length scales. Prediction of the time-averaged fuel concentration from the multi-scale model is improved over Reynolds-averaged Navier-Stokes models originally derived from stationary flow. The benefit of the multi-scale model alone is, however, limited in cases where the vortical structures are small and scattered, which would require prohibitively expensive grids to resolve the flow field accurately. Statistical information related to turbulent fluctuations is utilized to estimate an effective turbulent Schmidt number, which is shown to be highly variable in space. Accordingly, an adaptive turbulent Schmidt number approach is proposed, by allowing the resolved field to adaptively influence the value of the turbulent Schmidt number in the multi-scale turbulence model. The proposed model estimates a time-averaged turbulent Schmidt number adapted to the computed flowfield, instead of the constant value common to eddy-viscosity-based Navier-Stokes models. This approach is assessed using a grid-refinement study for the normal injection case, and tested with 30-degree injection, showing improved results over the constant turbulent Schmidt model in both the mean and variance of fuel concentration predictions. For the incompressible liquid plug propagation and rupture study, numerical simulations are conducted using an Eulerian-Lagrangian approach with a continuous-interface method. A reconstruction scheme is developed to allow topological changes during plug rupture by altering the connectivity information of the interface mesh. Rupture time is shown to be delayed as the initial precursor film thickness increases. During the plug rupture process, a sudden increase of mechanical stresses on the tube wall is recorded, which can cause tissue damage.
Huang, Chun; Zhang, Jin; Young, Neil P.; Snaith, Henry J.; Grant, Patrick S.
2016-01-01
Supercapacitors are in demand for short-term electrical charge and discharge applications. Unlike conventional supercapacitors, solid-state versions have no liquid electrolyte and do not require robust, rigid packaging for containment. Consequently they can be thinner, lighter and more flexible. However, solid-state supercapacitors suffer from lower power density and where new materials have been developed to improve performance, there remains a gap between promising laboratory results that usually require nano-structured materials and fine-scale processing approaches, and current manufacturing technology that operates at large scale. We demonstrate a new, scalable capability to produce discrete, multi-layered electrodes with a different material and/or morphology in each layer, and where each layer plays a different, critical role in enhancing the dynamics of charge/discharge. This layered structure allows efficient utilisation of each material and enables conservative use of hard-to-obtain materials. The layered electrode shows amongst the highest combinations of energy and power densities for solid-state supercapacitors. Our functional design and spray manufacturing approach to heterogeneous electrodes provide a new way forward for improved energy storage devices. PMID:27161379
The Automated Geospatial Watershed Assessment (AGWA) tool is a desktop application that uses widely available standardized spatial datasets to derive inputs for multi-scale hydrologic models (Miller et al., 2007). The required data sets include topography (DEM data), soils, clima...
What if we took a global look?
NASA Astrophysics Data System (ADS)
Ouellet Dallaire, C.; Lehner, B.
2014-12-01
Freshwater resources are facing unprecedented pressures. In the hope of coping with these, Environmental Hydrology, Freshwater Biology, and Fluvial Geomorphology have defined conceptual approaches such as "environmental flow requirements", "instream flow requirements" or the "normative flow regime" to define the flow regime appropriate to maintain a given ecological status. These advances in freshwater resources management are asking scientists to create bridges across disciplines. Holistic and multi-scale approaches are becoming more and more common in water sciences research. The intrinsic nature of river systems demands that these approaches account for the upstream-downstream links of watersheds. Before recent technological developments, large-scale analyses were cumbersome and, often, the necessary data were unavailable. However, new technologies, both for information collection and computing capacity, enable a high-resolution look at the global scale. For rivers around the world, this new outlook is facilitated by the hydrologically relevant geo-spatial database HydroSHEDS. This database now offers more than 24 million kilometers of rivers, some never mapped before, at the click of a fingertip. Large and even global scale assessments can now be used to compare rivers around the world. A river classification framework called GloRiC (Global River Classification) was developed using HydroSHEDS. This framework advocates a holistic approach to river systems by using sub-classifications drawn from six disciplines related to river sciences: Hydrology, Physiography and climate, Geomorphology, Chemistry, Biology and Human impact. Each of these disciplines brings complementary information on rivers that is relevant at different scales. A first version of a global river reach classification was produced at the 500 m resolution. The variables used in the classification influence processes at different scales (e.g. topography index vs. pH); however, all variables are computed at the same high spatial resolution. This way, we can take a global look at local phenomena.
Yasir, Muhammad Naveed; Koh, Bong-Hwan
2018-01-01
This paper presents the local mean decomposition (LMD) integrated with multi-scale permutation entropy (MPE), also known as LMD-MPE, to investigate rolling element bearing (REB) fault diagnosis from measured vibration signals. First, the LMD decomposes the vibration data or acceleration measurement into separate product functions that are composed of both amplitude and frequency modulation. MPE then calculates the statistical permutation entropy of the product functions to extract nonlinear features with which to assess and classify the condition of healthy and damaged REB systems. Comparative experimental results of the conventional LMD-based multi-scale entropy and MPE are presented to verify the authenticity of the proposed technique. The study found that the integrated LMD-MPE approach provides reliable, damage-sensitive features when analyzing the bearing condition. The results on the REB experimental datasets show that the proposed approach yields more robust outcomes than existing methods. PMID:29690526
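A minimal sketch of multi-scale permutation entropy, assuming the usual coarse-graining plus ordinal-pattern definition; the embedding order, scale range and stand-in signal are typical choices rather than the paper's settings:

```python
import numpy as np
from math import factorial
from collections import Counter

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy in [0, 1] from ordinal patterns."""
    patterns = [tuple(np.argsort(x[i:i + order * delay:delay]))
                for i in range(len(x) - (order - 1) * delay)]
    counts = np.array(list(Counter(patterns).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum() / np.log(factorial(order))

def multiscale_pe(x, max_scale=10, order=3):
    """Coarse-grain the signal at scales 1..max_scale, then compute PE at each."""
    out = []
    for s in range(1, max_scale + 1):
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)
        out.append(permutation_entropy(coarse, order=order))
    return np.array(out)

vib = np.random.randn(20000)   # stand-in for a bearing vibration signal
print(multiscale_pe(vib))      # healthy vs damaged bearings yield different curves
```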
A Multi-Scale Comparative Study of Shape and Sprawl in Metropolitan Regions of the United States
ERIC Educational Resources Information Center
Kugler, Tracy A.
2012-01-01
This dissertation constitutes a multi-scale quantitative and qualitative investigation of patterns of urban development in metropolitan regions of the United States. This work has generated a comprehensive data set on spatial patterns of metropolitan development in the U.S. and an approach to the study of such patterns that can be used to further…
Robert S. Arkle; David S. Pilliod; Steven E. Hanser; Matthew L. Brooks; Jeanne C. Chambers; James B. Grace; Kevin C. Knutson; David A. Pyke; Justin L. Welty; Troy A. Wirth
2014-01-01
A recurrent challenge in the conservation of wide-ranging, imperiled species is understanding which habitats to protect and whether we are capable of restoring degraded landscapes. For Greater Sage-grouse (Centrocercus urophasianus), a species of conservation concern in the western United States, we approached this problem by developing multi-scale empirical models of...
Collaborative mining and interpretation of large-scale data for biomedical research insights.
Tsiliki, Georgia; Karacapilidis, Nikos; Christodoulou, Spyros; Tzagarakis, Manolis
2014-01-01
Biomedical research is becoming increasingly interdisciplinary and collaborative in nature. Researchers need to efficiently and effectively collaborate and make decisions by meaningfully assembling, mining and analyzing available large-scale volumes of complex multi-faceted data residing in different sources. In line with related research directives revealing that, in spite of the recent advances in data mining and computational analysis, humans can easily detect patterns that computer algorithms may have difficulty finding, this paper reports on the practical use of an innovative web-based collaboration support platform in a biomedical research context. Arguing that dealing with data-intensive and cognitively complex settings is not a technical problem alone, the proposed platform adopts a hybrid approach that builds on the synergy between machine and human intelligence to facilitate the underlying sense-making and decision-making processes. User experience shows that the platform enables more informed and quicker decisions by displaying aggregated information according to users' needs, while also exploiting the associated human intelligence.
Bio-inspired Murray materials for mass transfer and activity
Zheng, Xianfeng; Shen, Guofang; Wang, Chao; Li, Yu; Dunphy, Darren; Hasan, Tawfique; Brinker, C. Jeffrey; Su, Bao-Lian
2017-01-01
Both plants and animals possess analogous tissues containing hierarchical networks of pores, with pore size ratios that have evolved to maximize mass transport and rates of reactions. The underlying physical principles of this optimized hierarchical design are embodied in Murray's law. However, we are yet to realize the benefit of mimicking nature's Murray networks in synthetic materials, due to the challenges in fabricating vascularized structures. Here we emulate optimum natural systems following Murray's law using a bottom-up approach. Such bio-inspired materials, whose pore sizes decrease across multiple scales and finally terminate in size-invariant units like plant stems, leaf veins and vascular and respiratory systems, provide hierarchical branching and precise diameter ratios for connecting multi-scale pores from macro to micro levels. Our Murray material mimics enable highly enhanced mass exchange and transfer in liquid–solid, gas–solid and electrochemical reactions, and exhibit enhanced performance in photocatalysis, gas sensing and as Li-ion battery electrodes. PMID:28382972
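Murray's law itself is compact enough to state in a few lines: for an optimal branching, the cube of the parent channel radius equals the sum of the cubes of the child radii. A small sketch for a symmetric bifurcation:

```python
# Murray's law: r_parent**3 == sum(r_child**3) for an optimal branching network.
parent = 1.0
n_children = 2
child = (parent**3 / n_children) ** (1.0 / 3.0)  # ~0.794 x parent for a symmetric split
assert abs(parent**3 - n_children * child**3) < 1e-12
print(f"each child radius: {child:.3f} x parent")
```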
VIRTIS on Venus Express: retrieval of real surface emissivity on global scales
NASA Astrophysics Data System (ADS)
Arnold, Gabriele E.; Kappel, David; Haus, Rainer; Telléz Pedroza, Laura; Piccioni, Giuseppe; Drossart, Pierre
2015-09-01
The extraction of surface emissivity data provides the basis for surface composition analyses and enables evaluation of Venus' geology. The Visible and InfraRed Thermal Imaging Spectrometer (VIRTIS) aboard ESA's Venus Express mission measured, inter alia, the nightside thermal emission of Venus in the near-infrared atmospheric windows between 1.0 and 1.2 μm. These data can be used to derive information about surface properties on global scales. This requires a sophisticated approach to understand and consider the effects and interferences of the different atmospheric and surface parameters influencing the retrieved values. In the present work, results of a new technique for retrieval of the 1.0-1.2 μm surface emissivity are summarized. It includes a Multi-Window retrieval Technique (MWT), a Multi-Spectrum Retrieval technique (MSR), and a detailed reliability analysis. MWT is based on a detailed radiative transfer model making simultaneous use of information from different atmospheric windows of an individual spectrum. MSR regularizes the retrieval by incorporating available a priori mean values and standard deviations as well as spatial-temporal correlations of the parameters to be retrieved. The capability of this method is shown for a selected surface target area, and implications for geologic investigations are discussed. Based on these results, the work draws conclusions for future Venus surface composition analyses on global scales using spectral remote sensing techniques. In that context, requirements for observational scenarios and instrumental performance are investigated, and recommendations are derived to optimize spectral measurements for Venus surface studies.
NASA Astrophysics Data System (ADS)
Shafii, Mahyar; Basu, Nandita; Schiff, Sherry; Van Cappellen, Philippe
2017-04-01
The dramatic increase in nitrogen circulating in the biosphere due to anthropogenic activities has resulted in the impairment of water quality in groundwater and surface water, causing eutrophication in coastal regions. Understanding the fate and transport of nitrogen from the landscape to coastal areas requires exploring the drivers of nitrogen processes in both time and space, as well as the identification of appropriate flow pathways. Conceptual models can be used as diagnostic tools to provide insights into such controls. However, diagnostic evaluation of coupled hydrological-biogeochemical models is challenging. This research proposes a top-down methodology utilizing hydrochemical signatures to develop conceptual models for simulating the integrated streamflow and nitrate responses while taking into account the dominant controls on nitrate variability (e.g., climate, soil water content, etc.). Our main objective is to seek the appropriate model complexity that sufficiently reproduces multiple hydrological and nitrate signatures. Having developed a suitable conceptual model for a given watershed, we employ it in sensitivity studies to demonstrate the dominant process controls that contribute to the nitrate response at the scales of interest. We apply the proposed approach to nitrate simulation in a range of small to large sub-watersheds in the Grand River Watershed (GRW) located in Ontario. Such a multi-basin modeling experiment enables us to address process scaling and investigate the consequences of lumping processes in terms of the models' predictive capability. The proposed methodology can be applied to the development of large-scale models that can support decision-making for nutrient management at the regional scale.
Magesa, Stephen M; Lengeler, Christian; deSavigny, Don; Miller, Jane E; Njau, Ritha JA; Kramer, Karen; Kitua, Andrew; Mwita, Alex
2005-01-01
Introduction Malaria is the largest cause of health services attendance, hospital admissions and child deaths in Tanzania. At the Abuja Summit in April 2000 Tanzania committed itself to protect 60% of its population at high risk of malaria by 2005. The country is, therefore, determined to ensure that sustainable malaria control using insecticide-treated nets is carried out on a national scale. Case description Tanzania has been involved for two decades in the research process for developing insecticide-treated nets as a malaria control tool, from testing insecticides and net types, to assessing their efficacy and effectiveness, and exploring new ways of distribution. Since 2000, the emphasis has changed from a project approach to that of a concerted multi-stakeholder action for taking insecticide-treated nets to national scale (NATNETS). This means creating conditions that make insecticide-treated nets accessible and affordable to all those at risk of malaria in the country. This paper describes Tanzania's experience in (1) creating an enabling environment for insecticide-treated nets scale-up, (2) promoting the development of a commercial sector for insecticide-treated nets, and (3) targeting pregnant women with highly subsidized insecticide-treated nets through a national voucher scheme. As a result, nearly 2 million insecticide-treated nets and 2.2 million re-treatment kits were distributed in 2004. Conclusion National upscaling of insecticide-treated nets is possible when the programme is well designed, coordinated and supported by committed stakeholders; the Abuja target of protecting 60% of those at high risk is feasible, even for large endemic countries. PMID:16042780
Multi-resource and multi-scale approaches for meeting the challenge of managing multiple species
Frank R. Thompson; Deborah M. Finch; John R. Probst; Glen D. Gaines; David S. Dobkin
1999-01-01
The large number of Neotropical migratory bird (NTMB) species and their diverse habitat requirements create conflicts and difficulties for land managers and conservationists. We provide examples of assessments or conservation efforts that attempt to address the problem of managing for multiple NTMB species. We advocate approaches at a variety of spatial and geographic...
SDSM-DC: A smarter approach to downscaling for decision-making? (Invited)
NASA Astrophysics Data System (ADS)
Wilby, R. L.; Dawson, C. W.
2013-12-01
General Circulation Model (GCM) output has been used for downscaling and impact assessments for at least 25 years. Downscaling methods raise awareness about risks posed by climate variability and change to human and natural systems. However, there are relatively few instances where these analyses have translated into actionable information for adaptation. One reason is that conventional 'top down' downscaling typically yields very large uncertainty bounds in projected impacts at regional and local scales. Consequently, there are growing calls to use downscaling tools in smarter ways that refocus attention on the decision problem rather than on the climate modelling per se. The talk begins with an overview of various applications of the Statistical DownScaling Model (SDSM) over the last decade. This sample offers insights into downscaling practice in terms of regions and sectors of interest, modes of application and adaptation outcomes. The decision-centred rationale and functionality of the latest version of SDSM is then explained. This new downscaling tool does not require GCM input but enables the user to generate plausible daily weather scenarios that may be informed by climate model and/or palaeoenvironmental information. Importantly, the tool is intended for stress-testing adaptation options rather than for exhaustive analysis of uncertainty components. The approach is demonstrated by downscaling multi-basin, multi-elevation temperature and precipitation scenarios for the Upper Colorado River Basin. These scenarios are used alongside other narratives of future conditions that might potentially affect the security of water supplies, and for evaluating steps that can be taken to manage these risks.
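The scenario-generation idea can be illustrated with a toy weather generator of the kind such tools build on: a two-state Markov chain for wet/dry occurrence with gamma-distributed wet-day amounts. The parameters are invented and this is not SDSM-DC's implementation:

```python
import numpy as np

rng = np.random.default_rng(42)
p_wd, p_ww = 0.3, 0.6     # P(wet | dry), P(wet | wet): illustrative transition probs
shape, scale = 0.8, 8.0   # gamma parameters for wet-day precipitation (mm), illustrative

def daily_precip(ndays=365, wet=False):
    out = np.zeros(ndays)
    for t in range(ndays):
        wet = rng.random() < (p_ww if wet else p_wd)
        if wet:
            out[t] = rng.gamma(shape, scale)
    return out

series = daily_precip()
# A "treatment" for stress-testing adaptation options: scale amounts by +10%
# to probe a plausible wetter scenario without invoking a GCM.
wetter = series * 1.10
```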
Chemical Engineering of Photoactivity in Heterometallic Titanium-Organic Frameworks by Metal Doping.
Castells-Gil, Javier; Padial, Natalia M; Almora-Barrios, Neyvis; Albero, Josep; Ruiz-Salvador, A Rabdel; González-Platas, Javier; García, Hermenegildo; Martí-Gastaldo, Carlos
2018-06-06
We report a new family of titanium-organic frameworks that enlarges the limited number of crystalline, porous materials available for this metal. They are chemically robust and can be prepared as single crystals at multi-gram scale from multiple precursors. Their heterometallic structure enables engineering of their photoactivity by metal doping rather than by linker functionalization. Compared to other methodologies based on the post-synthetic metallation of MOFs, our approach is well-fitted for controlling the positioning of dopants at an atomic level to gain more precise control over the band-gap and electronic properties of the porous solid. Changes in the band-gap are also rationalized with computational modelling and experimentally confirmed by photocatalytic H2 production. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Simulating multiprimary LCDs on standard tri-stimulus LC displays
NASA Astrophysics Data System (ADS)
Lebowsky, Fritz; Vonneilich, Katrin; Bonse, Thomas
2008-01-01
Large-scale, direct-view TV screens, in particular those based on liquid crystal technology, are beginning to use subpixel structures with more than three subpixels to implement a multi-primary display with up to six primaries. Since their input color space is likely to remain tri-stimulus RGB, we first focus on some fundamental constraints. Among them, we elaborate simplified gamut mapping architectures as well as color filter geometry, transparency, and chromaticity coordinates in color space. Based on a 'display centric' RGB color space tetrahedrization combined with linear interpolation, we describe a simulation framework which enables optimization for up to 7 primaries. We evaluated the performance by mapping the multi-primary design back onto an RGB LC display gamut without building a prototype multi-primary display. As long as we kept the RGB-equivalent output signal within the display gamut, we could analyze all desirable multi-primary configurations with regard to colorimetric variance and visually perceived quality. Not only does our simulation tool enable us to verify a novel concept, it also demonstrates how carefully one needs to design a multi-primary display for LCD TV applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Li; He, Ya-Ling; Kang, Qinjun
2013-12-15
A coupled (hybrid) simulation strategy spatially combining the finite volume method (FVM) and the lattice Boltzmann method (LBM), called CFVLBM, is developed to simulate coupled multi-scale multi-physicochemical processes. In the CFVLBM, the computational domain of multi-scale problems is divided into two sub-domains, i.e., an open, free fluid region and a region filled with porous materials. The FVM and LBM are used for these two regions, respectively, with information exchanged at the interface between the two sub-domains. A general reconstruction operator (RO) is proposed to derive the distribution functions in the LBM from the corresponding macro scalar, the governing equation of which obeys the convection–diffusion equation. The CFVLBM and the RO are validated in several typical physicochemical problems and then are applied to simulate complex multi-scale coupled fluid flow, heat transfer, mass transport, and chemical reaction in a wall-coated micro reactor. The maximum ratio of the grid size between the FVM and LBM regions is explored and discussed. -- Highlights: •A coupled simulation strategy for simulating multi-scale phenomena is developed. •Finite volume method and lattice Boltzmann method are coupled. •A reconstruction operator is derived to transfer information at the sub-domains interface. •Coupled multi-scale multiple physicochemical processes in micro reactor are simulated. •Techniques to save computational resources and improve the efficiency are discussed.
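The reconstruction operator's role can be illustrated with the standard D2Q9 equilibrium for a convection-diffusion scalar: distribution functions are rebuilt at the interface from the macroscopic value and the local velocity. This is a generic sketch, not necessarily the paper's exact operator, which may include non-equilibrium corrections:

```python
import numpy as np

W = np.array([4/9] + [1/9]*4 + [1/36]*4)           # D2Q9 weights
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])  # lattice velocities
CS2 = 1.0 / 3.0                                     # lattice speed of sound squared

def reconstruct(phi, u):
    """Equilibrium distributions g_i for a scalar phi advected by velocity u."""
    eu = E @ u
    return W * phi * (1.0 + eu / CS2)

g = reconstruct(phi=1.0, u=np.array([0.05, 0.0]))
assert abs(g.sum() - 1.0) < 1e-12                   # zeroth moment recovers phi
```

The zeroth moment of the reconstructed populations returns the FVM-side scalar exactly, which is what lets the two solvers exchange consistent information at the sub-domain interface.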
Sub-seasonal-to-seasonal Reservoir Inflow Forecast using Bayesian Hierarchical Hidden Markov Model
NASA Astrophysics Data System (ADS)
Mukhopadhyay, S.; Arumugam, S.
2017-12-01
Sub-seasonal-to-seasonal (S2S) (15-90 days) streamflow forecasting is an emerging area of research that provides seamless information for reservoir operation from weather time scales to seasonal time scales. From an operational perspective, sub-seasonal inflow forecasts are highly valuable as these enable water managers to decide short-term releases (15-30 days), while holding water for seasonal needs (e.g., irrigation and municipal supply) and meeting end-of-the-season target storage at a desired level. We propose a Bayesian Hierarchical Hidden Markov Model (BHHMM) to develop S2S inflow forecasts for the Tennessee Valley Area (TVA) reservoir system. Here, the hidden states are predicted by relevant indices that influence the inflows at the S2S time scale. The hidden Markov model also captures both the spatial and temporal hierarchy in predictors that operate at the S2S time scale, with model parameters estimated as a posterior distribution using a Bayesian framework. We present our work in two steps, namely a single-site model and a multi-site model. For proof of concept, we consider inflows to Douglas Dam, Tennessee, in the single-site model. For the multi-site model, we consider reservoirs in the upper Tennessee valley. Streamflow forecasts are issued and updated continuously every day at the S2S time scale. We considered precipitation forecasts obtained from the NOAA Climate Forecast System (CFSv2) GCM as predictors for developing S2S streamflow forecasts, along with relevant indices for predicting hidden states. Spatial dependence of the inflow series of the reservoirs is also preserved in the multi-site model. To circumvent the non-normality of the data, we consider the HMM in a Generalized Linear Model setting. Skill of the proposed approach is tested using split-sample validation against a traditional multi-site canonical correlation model developed using the same set of predictors. From the posterior distribution of the inflow forecasts, we also highlight different system behavior under varied global- and local-scale climatic influences from the developed BHHMM.
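The authors' model is Bayesian and hierarchical; as a rough, non-hierarchical stand-in, a maximum-likelihood Gaussian HMM (here via the hmmlearn package, an assumption of this sketch) illustrates the state-based forecasting idea on a synthetic inflow series:

    import numpy as np
    from hmmlearn.hmm import GaussianHMM  # pip install hmmlearn

    # Synthetic stand-in for a daily inflow series (m^3/s); real inputs would be
    # observed inflows plus climate predictors such as CFSv2 precipitation forecasts.
    rng = np.random.default_rng(0)
    inflow = np.concatenate([rng.normal(50, 5, 300),     # dry regime
                             rng.normal(120, 15, 300)])  # wet regime
    X = inflow.reshape(-1, 1)

    # Two hidden hydrologic states (e.g., wet/dry), fit by maximum likelihood.
    model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=100,
                        random_state=0).fit(X)

    # One-step-ahead forecast: propagate the last decoded state through the
    # transition matrix and average the state-conditional means.
    last_state = model.predict(X)[-1]
    state_probs = model.transmat_[last_state]      # P(s_{t+1} | s_t)
    forecast = float(state_probs @ model.means_.ravel())
    print(f"expected next-day inflow: {forecast:.1f}")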
NASA Astrophysics Data System (ADS)
Piqué, Alberto; Auyeung, Raymond C. Y.; Kim, Heungsoo; Charipar, Nicholas A.; Mathews, Scott A.
2016-06-01
Laser-based materials processing techniques are gaining widespread use in micro-manufacturing applications. The use of laser microfabrication techniques enables the processing of micro- and nanostructures from a wide range of materials and geometries without the need for masking and etching steps commonly associated with photolithography. This review aims to describe the broad applications space covered by laser-based micro- and nanoprocessing techniques and the benefits offered by the use of lasers in micro-manufacturing processes. Given their non-lithographic nature, these processes are also referred to as laser direct-write and constitute some of the earliest demonstrations of 3D printing or additive manufacturing at the microscale. As this review will show, the use of lasers enables precise control of the various types of processing steps—from subtractive to additive—over a wide range of scales with an extensive materials palette. Overall, laser-based direct-write techniques offer multiple modes of operation including the removal (via ablative processes) and addition (via photopolymerization or printing) of most classes of materials using the same equipment in many cases. The versatility provided by these multi-function, multi-material and multi-scale laser micro-manufacturing processes cannot be matched by photolithography nor by other direct-write microfabrication techniques, and offers unique opportunities for current and future 3D micro-manufacturing applications.
2013-01-01
Background Zoonoses are a growing international threat interacting at the human-animal-environment interface and call for transdisciplinary and multi-sectoral approaches in order to achieve effective disease management. The recent emergence of Lyme disease in Quebec, Canada is a good example of a complex health issue for which the public health sector must find protective interventions. Traditional preventive and control interventions can have important environmental, social and economic impacts and as a result, decision-making requires a systems approach capable of integrating these multiple aspects of interventions. This paper presents the results from a study of a multi-criteria decision analysis (MCDA) approach for the management of Lyme disease in Quebec, Canada. MCDA methods allow a comparison of interventions or alternatives based on multiple criteria. Methods MCDA models were developed to assess various prevention and control decision criteria pertinent to a comprehensive management of Lyme disease: a first model was developed for surveillance interventions and a second was developed for control interventions. Multi-criteria analyses were conducted under two epidemiological scenarios: a disease emergence scenario and an epidemic scenario. Results In general, we observed a good level of agreement between stakeholders. For the surveillance model, the three preferred interventions were: active surveillance of vectors by flagging or dragging, active surveillance of vectors by trapping of small rodents and passive surveillance of vectors of human origin. For the control interventions model, basic preventive communications, human vaccination and small-scale landscaping were the three preferred interventions. Scenarios were found to have only a small effect on the group ranking of interventions in the control model. Conclusions MCDA was used to structure key decision criteria and capture the complexity of Lyme disease management. This facilitated the identification of gaps in the scientific literature and enabled a clear identification of complementary interventions that could be used to improve the relevance and acceptability of the proposed prevention and control strategy. Overall, MCDA presents itself as an interesting systematic approach for public health planning and zoonoses management with a “One Health” perspective. PMID:24079303
Aenishaenslin, Cécile; Hongoh, Valérie; Cissé, Hassane Djibrilla; Hoen, Anne Gatewood; Samoura, Karim; Michel, Pascal; Waaub, Jean-Philippe; Bélanger, Denise
2013-09-30
Zoonoses are a growing international threat interacting at the human-animal-environment interface and call for transdisciplinary and multi-sectoral approaches in order to achieve effective disease management. The recent emergence of Lyme disease in Quebec, Canada is a good example of a complex health issue for which the public health sector must find protective interventions. Traditional preventive and control interventions can have important environmental, social and economic impacts and as a result, decision-making requires a systems approach capable of integrating these multiple aspects of interventions. This paper presents the results from a study of a multi-criteria decision analysis (MCDA) approach for the management of Lyme disease in Quebec, Canada. MCDA methods allow a comparison of interventions or alternatives based on multiple criteria. MCDA models were developed to assess various prevention and control decision criteria pertinent to a comprehensive management of Lyme disease: a first model was developed for surveillance interventions and a second was developed for control interventions. Multi-criteria analyses were conducted under two epidemiological scenarios: a disease emergence scenario and an epidemic scenario. In general, we observed a good level of agreement between stakeholders. For the surveillance model, the three preferred interventions were: active surveillance of vectors by flagging or dragging, active surveillance of vectors by trapping of small rodents and passive surveillance of vectors of human origin. For the control interventions model, basic preventive communications, human vaccination and small-scale landscaping were the three preferred interventions. Scenarios were found to have only a small effect on the group ranking of interventions in the control model. MCDA was used to structure key decision criteria and capture the complexity of Lyme disease management. This facilitated the identification of gaps in the scientific literature and enabled a clear identification of complementary interventions that could be used to improve the relevance and acceptability of the proposed prevention and control strategy. Overall, MCDA presents itself as an interesting systematic approach for public health planning and zoonoses management with a "One Health" perspective.
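The study's elicitation and aggregation procedures are more elaborate, but the core of a weighted-sum MCDA ranking can be sketched in a few lines (all weights and scores below are hypothetical):

    import numpy as np

    # Hypothetical criteria weights (sum to 1) and 0-1 performance scores;
    # in the study these would be elicited from stakeholders.
    criteria = ["effectiveness", "cost", "environmental impact", "acceptability"]
    weights = np.array([0.40, 0.25, 0.15, 0.20])
    interventions = {
        "active surveillance (flagging/dragging)": [0.8, 0.6, 0.9, 0.7],
        "active surveillance (rodent trapping)":   [0.7, 0.5, 0.8, 0.6],
        "passive surveillance (human origin)":     [0.6, 0.9, 1.0, 0.8],
    }

    # Weighted-sum aggregation, ranked from best to worst.
    for name, scores in sorted(interventions.items(),
                               key=lambda kv: -float(weights @ np.array(kv[1]))):
        print(f"{float(weights @ np.array(scores)):.3f}  {name}")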
A scale-based approach to interdisciplinary research and expertise in sports.
Ibáñez-Gijón, Jorge; Buekers, Martinus; Morice, Antoine; Rao, Guillaume; Mascret, Nicolas; Laurin, Jérome; Montagne, Gilles
2017-02-01
More than 20 years after the introduction of ecological and dynamical approaches in sports research, their promising opportunity for interdisciplinary research has not yet been fulfilled. The complexity of the research process and the theoretical and empirical difficulties associated with an integrated ecological-dynamical approach have been the major factors hindering the generalisation of interdisciplinary projects in sports sciences. To facilitate this generalisation, we integrate the major concepts from the ecological and dynamical approaches to study behaviour as a multi-scale process. Our integration gravitates around the distinction between functional (ecological) and execution (organic) scales, and their reciprocal intra- and inter-scale constraints. We propose an (epistemological) scale-based definition of constraints that accounts for the concept of synergies as emergent coordinative structures. To illustrate how we can operationalise the notion of multi-scale synergies, we use an interdisciplinary model of locomotor pointing. To conclude, we show the value of this approach for interdisciplinary research in sport sciences, as we discuss two examples of task-specific dimensionality reduction techniques in the context of an ongoing project that aims to unveil the determinants of expertise in basketball free throw shooting. These techniques provide relevant empirical evidence to help bootstrap the challenging modelling efforts required in sport sciences.
Biocultural approaches to well-being and sustainability indicators across scales
Eleanor J. Sterling; Christopher Filardi; Anne Toomey; Amanda Sigouin; Erin Betley; Nadav Gazit; Jennifer Newell; Simon Albert; Diana Alvira; Nadia Bergamini; Mary Blair; David Boseto; Kate Burrows; Nora Bynum; Sophie Caillon; Jennifer E. Caselle; Joachim Claudet; Georgina Cullman; Rachel Dacks; Pablo B. Eyzaguirre; Steven Gray; James Herrera; Peter Kenilorea; Kealohanuiopuna Kinney; Natalie Kurashima; Suzanne Macey; Cynthia Malone; Senoveva Mauli; Joe McCarter; Heather McMillen; Pua’ala Pascua; Patrick Pikacha; Ana L. Porzecanski; Pascale de Robert; Matthieu Salpeteur; Myknee Sirikolo; Mark H. Stege; Kristina Stege; Tamara Ticktin; Ron Vave; Alaka Wali; Paige West; Kawika B. Winter; Stacy D. Jupiter
2017-01-01
Monitoring and evaluation are central to ensuring that innovative, multi-scale, and interdisciplinary approaches to sustainability are effective. The development of relevant indicators for local sustainable management outcomes, and the ability to link these to broader national and international policy targets, are key challenges for resource managers, policymakers, and...
Multi-scale symbolic transfer entropy analysis of EEG
NASA Astrophysics Data System (ADS)
Yao, Wenpo; Wang, Jun
2017-10-01
From both global and local perspectives, we symbolize two kinds of EEG and analyze their dynamic and asymmetrical information using multi-scale transfer entropy. A multi-scale process with scale factors from 1 to 199 and a step size of 2 is applied to EEG of healthy people and epileptic patients, and then permutation with an embedding dimension of 3 and a global approach are used to symbolize the sequences. The forward and reverse symbol sequences are taken as the inputs of transfer entropy. The scale factor intervals in which the two kinds of EEG have satisfactory entropy distinctions are (37, 57) for permutation and (65, 85) for the global approach. At a scale factor of 67, the transfer entropies of the healthy and epileptic subjects under permutation symbolization, 0.1137 and 0.1028, show the largest difference; the corresponding values for global symbolization, 0.0641 and 0.0601, occur at a scale factor of 165. Research results show that permutation, which takes account of the contribution of local information, has better discrimination and is more effectively applied to our multi-scale transfer entropy analysis of EEG.
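For illustration, a sketch of the two preprocessing steps the abstract describes, coarse-graining by scale factor and permutation (ordinal) symbolization with embedding dimension 3; the transfer entropy estimator itself is omitted:

    import numpy as np
    from itertools import permutations

    def coarse_grain(x, scale):
        """Average consecutive non-overlapping windows of length `scale`."""
        n = len(x) // scale
        return np.asarray(x[:n * scale]).reshape(n, scale).mean(axis=1)

    def permutation_symbols(x, dim=3):
        """Map each length-`dim` window to the index of its ordinal pattern
        (the ranking of values within the window, i.e. local information)."""
        patterns = {p: i for i, p in enumerate(permutations(range(dim)))}
        windows = np.lib.stride_tricks.sliding_window_view(x, dim)
        return np.array([patterns[tuple(np.argsort(w))] for w in windows])

    rng = np.random.default_rng(1)
    eeg = rng.standard_normal(4000)        # placeholder for an EEG channel
    for scale in (1, 37, 67):              # scale factors within the study's range
        sym = permutation_symbols(coarse_grain(eeg, scale), dim=3)
        print(scale, np.bincount(sym, minlength=6))  # ordinal pattern counts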
Modelling strategies to predict the multi-scale effects of rural land management change
NASA Astrophysics Data System (ADS)
Bulygina, N.; Ballard, C. E.; Jackson, B. M.; McIntyre, N.; Marshall, M.; Reynolds, B.; Wheater, H. S.
2011-12-01
Changes to the rural landscape due to agricultural land management are ubiquitous, yet predicting the multi-scale effects of land management change on hydrological response remains an important scientific challenge. Much empirical research has been of little generic value due to inadequate design and funding of monitoring programmes, while the modelling issues challenge the capability of data-based, conceptual and physics-based modelling approaches. In this paper we report on a major UK research programme, motivated by a national need to quantify effects of agricultural intensification on flood risk. Working with a consortium of farmers in upland Wales, a multi-scale experimental programme (from experimental plots to 2nd order catchments) was developed to address issues of upland agricultural intensification. This provided data support for a multi-scale modelling programme, in which highly detailed physics-based models were conditioned on the experimental data and used to explore effects of potential field-scale interventions. A meta-modelling strategy was developed to represent detailed modelling in a computationally-efficient manner for catchment-scale simulation; this allowed catchment-scale quantification of potential management options. For more general application to data-sparse areas, alternative approaches were needed. Physics-based models were developed for a range of upland management problems, including the restoration of drained peatlands, afforestation, and changing grazing practices. Their performance was explored using literature and surrogate data; although subject to high levels of uncertainty, important insights were obtained, of practical relevance to management decisions. In parallel, regionalised conceptual modelling was used to explore the potential of indices of catchment response, conditioned on readily-available catchment characteristics, to represent ungauged catchments subject to land management change. Although based in part on speculative relationships, significant predictive power was derived from this approach. Finally, using a formal Bayesian procedure, these different sources of information were combined with local flow data in a catchment-scale conceptual model application, i.e. using small-scale physical properties, regionalised signatures of flow and available flow measurements.
NASA Astrophysics Data System (ADS)
Ji, X.; Shen, C.
2017-12-01
Flood inundation presents substantial societal hazards and also changes biogeochemistry for systems like the Amazon. It is often expensive to simulate high-resolution flood inundation and propagation in a long-term watershed-scale model. Due to the Courant-Friedrichs-Lewy (CFL) restriction, high resolution and large local flow velocity both demand prohibitively small time steps even for parallel codes. Here we develop a parallel surface-subsurface process-based model enhanced by multi-resolution meshes that are adaptively switched on or off. The high-resolution overland flow meshes are enabled only when the flood wave invades the floodplains. This model applies a semi-implicit, semi-Lagrangian (SISL) scheme in solving the dynamic wave equations, and with the assistance of the multi-mesh method, it also adaptively chooses the dynamic wave equation only in the area of deep inundation. Therefore, the model achieves a balance between accuracy and computational cost.
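The CFL restriction that motivates the adaptive scheme can be made concrete with a short sketch (illustrative numbers only): the stable explicit time step shrinks with cell size and flow speed, which is why uniformly fine meshes are prohibitive:

    import numpy as np

    def cfl_time_step(dx, velocity, courant=0.9):
        """Largest stable explicit time step under the CFL condition
        dt <= C * dx / max|u|."""
        return courant * dx / np.max(np.abs(velocity))

    # 10 m floodplain cells with a 3 m/s flood wave vs. 100 m cells at low flow:
    print(cfl_time_step(dx=10.0, velocity=np.array([0.5, 3.0])))   # 3.0 s
    print(cfl_time_step(dx=100.0, velocity=np.array([0.2, 0.5])))  # 180.0 s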
Reaching Mars: multi-criteria R&D portfolio selection for Mars exploration technology planning
NASA Technical Reports Server (NTRS)
Smith, J. H.; Dolgin, B. P.; Weisbin, C. R.
2003-01-01
The exploration of Mars has been the focus of increasing scientific interest about the planet and its relationship to Earth. A multi-criteria decision-making approach was developed to address the question: given a Mars program composed of mission concepts dependent on a variety of alternative technology development programs, which combination of technologies would enable missions to maximize science return under a constrained budget?
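One simple formalization of this question (an assumption of this note, not necessarily the authors' method) is a 0/1 knapsack: choose the technology subset that maximizes science return within the budget. A sketch with hypothetical costs and returns:

    def select_portfolio(costs, returns, budget):
        """0/1 knapsack DP: the technology subset maximizing total science
        return subject to a budget constraint (costs in integer budget units)."""
        n = len(costs)
        best = [[0.0] * (budget + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for b in range(budget + 1):
                best[i][b] = best[i - 1][b]
                if costs[i - 1] <= b:
                    take = best[i - 1][b - costs[i - 1]] + returns[i - 1]
                    best[i][b] = max(best[i][b], take)
        chosen, b = [], budget          # backtrack to recover the chosen set
        for i in range(n, 0, -1):
            if best[i][b] != best[i - 1][b]:
                chosen.append(i - 1)
                b -= costs[i - 1]
        return best[n][budget], sorted(chosen)

    # Hypothetical technologies: (cost in $M, science-return score).
    costs, returns = [40, 25, 60, 30], [7.0, 4.5, 9.0, 5.5]
    print(select_portfolio(costs, returns, budget=100))  # (17.0, [0, 1, 3])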
Multiscale modeling of a low magnetostrictive Fe-27wt%Co-0.5wt%Cr alloy
NASA Astrophysics Data System (ADS)
Savary, M.; Hubert, O.; Helbert, A. L.; Baudin, T.; Batonnet, R.; Waeckerlé, T.
2018-05-01
The present paper deals with the improvement of a multi-scale approach describing the magneto-mechanical coupling of a Fe-27wt%Co-0.5wt%Cr alloy. The magnetostriction behavior proves very different (low magnetostriction vs. high magnetostriction) when this material is subjected to two different final annealing conditions after cold rolling. The numerical data obtained from the multi-scale approach are in accordance with the experimental data for the high-magnetostriction material. A bi-domain structure hypothesis is employed to explain the low magnetostriction behavior, in accordance with the effect of an applied tensile stress. A modification of the multi-scale approach is proposed to match this result.
NASA Astrophysics Data System (ADS)
Önal, Orkun; Ozmenci, Cemre; Canadinc, Demircan
2014-09-01
A multi-scale modeling approach was applied to predict the impact response of a strain rate sensitive high-manganese austenitic steel. The roles of texture, geometry and strain rate sensitivity were successfully taken into account all at once by coupling crystal plasticity and finite element (FE) analysis. Specifically, crystal plasticity was utilized to obtain the multi-axial flow rule at different strain rates based on the experimental deformation response under uniaxial tensile loading. The equivalent stress - equivalent strain response was then incorporated into the FE model for the sake of a more representative hardening rule under impact loading. The current results demonstrate that reliable predictions can be obtained by proper coupling of crystal plasticity and FE analysis even if the experimental flow rule of the material is acquired under uniaxial loading and at moderate strain rates that are significantly slower than those attained during impact loading. Furthermore, the current findings also demonstrate the need for an experiment-based multi-scale modeling approach for the sake of reliable predictions of the impact response.
Optical interconnect for large-scale systems
NASA Astrophysics Data System (ADS)
Dress, William
2013-02-01
This paper presents a switchless optical interconnect module that serves as a node in a network of identical distribution modules for large-scale systems. Thousands to millions of hosts or endpoints may be interconnected by a network of such modules, avoiding the need for multi-level switches. Several common network topologies are reviewed and their scaling properties assessed. The concept of message-flow routing is discussed in conjunction with the unique properties enabled by the optical distribution module, where it is shown how top-down software control (global routing tables, spanning-tree algorithms) may be avoided.
Large-scale delamination of multi-layers transition metal carbides and carbonitrides “MXenes”
Naguib, Michael; Unocic, Raymond R.; Armstrong, Beth L.; ...
2015-04-17
Herein we report on a general approach to delaminate multi-layered MXenes using an organic base to induce swelling that in turn weakens the bonds between the MX layers. Simple agitation or mild sonication of the swollen MXene in water resulted in the large-scale delamination of the MXene layers. The delamination method is demonstrated for vanadium carbide and titanium carbonitride MXenes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orphan, Victoria; Tyson, Gene; Meile, Christof
The global biological CH4 cycle is largely controlled through coordinated and often intimate microbial interactions between archaea and bacteria, the majority of which are still unknown or have been only cursorily identified. Members of the methanotrophic archaea, aka ‘ANME’, are believed to play a major role in the cycling of methane in anoxic environments coupled to sulfate, nitrate, and possibly iron and manganese oxides, frequently forming diverse physical and metabolic partnerships with a range of bacteria. The thermodynamic challenges overcome by the ANME and their bacterial partners and corresponding slow rates of growth are common characteristics in anaerobic ecosystems, and, in stark contrast to most cultured microorganisms, this type of energy and resource limited microbial lifestyle is likely the norm in the environment. While we have gained an in-depth systems level understanding of fast-growing, energy-replete microorganisms, comparatively little is known about the dynamics of cell respiration, growth, protein turnover, gene expression, and energy storage in the slow-growing microbial majority. These fundamental properties, combined with the observed metabolic and symbiotic versatility of methanotrophic ANME, make these cooperative microbial systems a relevant (albeit challenging) system to study and for which to develop and optimize culture-independent methodologies, which enable a systems-level understanding of microbial interactions and metabolic networks. We used an integrative systems biology approach to study anaerobic sediment microcosms and methane-oxidizing bioreactors and expanded our understanding of the methanotrophic ANME archaea, their interactions with physically-associated bacteria, ecophysiological characteristics, and underlying genetic basis for cooperative microbial methane-oxidation linked with different terminal electron acceptors. Our approach is inherently multi-disciplinary and multi-scaled, combining transcriptional and proteomic analyses with high resolution microscopy techniques, and stable isotopic and chemical analyses that span community level ‘omics investigations (cm scale) to interspecies consortia (µm scale), to the individual cell and its subcellular components (nm scale). We have organized our methodological approach into three broad categories, RNA-based, Protein-targeted and Geochemical, each encompassing a range of scales, with many techniques and resulting datasets that are highly complementary with one another, and together, offer a unique systems-level perspective of methane-based microbial interactions.
Bosse, Stefan
2015-01-01
Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strongly heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques. PMID:25690550
Bosse, Stefan
2015-02-16
Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strongly heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques.
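A toy sketch of the activity-transition-graph idea (code morphing, migration and the hardware stack machine are omitted; all names and structure here are illustrative only):

    # Toy ATG agent: activities are functions over the agent's data state;
    # transition edges fire when their condition on that state holds.
    class ATGAgent:
        def __init__(self, activities, transitions, start, data):
            self.activities = activities    # name -> function(state) mutating state
            self.transitions = transitions  # name -> [(condition(state), next_name)]
            self.current, self.data = start, data

        def step(self):
            self.activities[self.current](self.data)
            for condition, nxt in self.transitions.get(self.current, []):
                if condition(self.data):
                    self.current = nxt
                    break

    # Minimal sensing agent: sample five values, then aggregate them.
    agent = ATGAgent(
        activities={
            "sample":    lambda d: d["samples"].append(d["samples"][-1] + 1 if d["samples"] else 0),
            "aggregate": lambda d: d.update(mean=sum(d["samples"]) / len(d["samples"])),
        },
        transitions={
            "sample":    [(lambda d: len(d["samples"]) >= 5, "aggregate"),
                          (lambda d: True, "sample")],
            "aggregate": [],
        },
        start="sample",
        data={"samples": []},
    )
    for _ in range(6):
        agent.step()
    print(agent.data["mean"])  # 2.0 for samples [0, 1, 2, 3, 4]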
An Integrated Assessment Approach to Address Artisanal and Small-Scale Gold Mining in Ghana.
Basu, Niladri; Renne, Elisha P; Long, Rachel N
2015-09-17
Artisanal and small-scale gold mining (ASGM) is growing in many regions of the world including Ghana. The problems in these communities are complex and multi-faceted. To help increase understanding of such problems, and to enable consensus-building and effective translation of scientific findings to stakeholders, help inform policies, and ultimately improve decision making, we utilized an Integrated Assessment approach to study artisanal and small-scale gold mining activities in Ghana. Though Integrated Assessments have been used in the fields of environmental science and sustainable development, their use in addressing specific matters in public health, and in particular environmental and occupational health, is quite limited despite their many benefits. The aim of the current paper was to describe specific activities undertaken and how they were organized, and the outputs and outcomes of our activity. In brief, three disciplinary workgroups (Natural Sciences, Human Health, Social Sciences and Economics) were formed, with 26 researchers from a range of Ghanaian institutions plus international experts. The workgroups conducted activities in order to address the following question: What are the causes, consequences and correctives of small-scale gold mining in Ghana? More specifically: What alternatives are available in resource-limited settings in Ghana that allow for gold-mining to occur in a manner that maintains ecological health and human health without hindering near- and long-term economic prosperity? Several response options were identified and evaluated, and are currently being disseminated to various stakeholders within Ghana and internationally.
An Integrated Assessment Approach to Address Artisanal and Small-Scale Gold Mining in Ghana
Basu, Niladri; Renne, Elisha P.; Long, Rachel N.
2015-01-01
Artisanal and small-scale gold mining (ASGM) is growing in many regions of the world including Ghana. The problems in these communities are complex and multi-faceted. To help increase understanding of such problems, and to enable consensus-building and effective translation of scientific findings to stakeholders, help inform policies, and ultimately improve decision making, we utilized an Integrated Assessment approach to study artisanal and small-scale gold mining activities in Ghana. Though Integrated Assessments have been used in the fields of environmental science and sustainable development, their use in addressing specific matters in public health, and in particular environmental and occupational health, is quite limited despite their many benefits. The aim of the current paper was to describe specific activities undertaken and how they were organized, and the outputs and outcomes of our activity. In brief, three disciplinary workgroups (Natural Sciences, Human Health, Social Sciences and Economics) were formed, with 26 researchers from a range of Ghanaian institutions plus international experts. The workgroups conducted activities in order to address the following question: What are the causes, consequences and correctives of small-scale gold mining in Ghana? More specifically: What alternatives are available in resource-limited settings in Ghana that allow for gold-mining to occur in a manner that maintains ecological health and human health without hindering near- and long-term economic prosperity? Several response options were identified and evaluated, and are currently being disseminated to various stakeholders within Ghana and internationally. PMID:26393627
Towards an eco-phylogenetic framework for infectious disease ecology.
Fountain-Jones, Nicholas M; Pearse, William D; Escobar, Luis E; Alba-Casals, Ana; Carver, Scott; Davies, T Jonathan; Kraberger, Simona; Papeş, Monica; Vandegrift, Kurt; Worsley-Tonks, Katherine; Craft, Meggan E
2018-05-01
Identifying patterns and drivers of infectious disease dynamics across multiple scales is a fundamental challenge for modern science. There is growing awareness that it is necessary to incorporate multi-host and/or multi-parasite interactions to understand and predict current and future disease threats better, and new tools are needed to help address this task. Eco-phylogenetics (phylogenetic community ecology) provides one avenue for exploring multi-host multi-parasite systems, yet the incorporation of eco-phylogenetic concepts and methods into studies of host-pathogen dynamics has lagged behind. Eco-phylogenetics is a transformative approach that uses evolutionary history to infer present-day dynamics. Here, we present an eco-phylogenetic framework to reveal insights into parasite communities and infectious disease dynamics across spatial and temporal scales. We illustrate how eco-phylogenetic methods can help untangle the mechanisms of host-parasite dynamics from individual (e.g. co-infection) to landscape scales (e.g. parasite/host community structure). An improved ecological understanding of multi-host and multi-pathogen dynamics across scales will increase our ability to predict disease threats. © 2017 Cambridge Philosophical Society.
Multi-atlas learner fusion: An efficient segmentation approach for large-scale data.
Asman, Andrew J; Huo, Yuankai; Plassard, Andrew J; Landman, Bennett A
2015-12-01
We propose multi-atlas learner fusion (MLF), a framework for rapidly and accurately replicating the highly accurate, yet computationally expensive, multi-atlas segmentation framework based on fusing local learners. In the largest whole-brain multi-atlas study yet reported, multi-atlas segmentations are estimated for a training set of 3464 MR brain images. Using these multi-atlas estimates we (1) estimate a low-dimensional representation for selecting locally appropriate example images, and (2) build AdaBoost learners that map a weak initial segmentation to the multi-atlas segmentation result. Thus, to segment a new target image we project the image into the low-dimensional space, construct a weak initial segmentation, and fuse the trained, locally selected, learners. The MLF framework cuts the runtime on a modern computer from 36 h down to 3-8 min - a 270× speedup - by completely bypassing the need for deformable atlas-target registrations. Additionally, we (1) describe a technique for optimizing the weak initial segmentation and the AdaBoost learning parameters, (2) quantify the ability to replicate the multi-atlas result with mean accuracies approaching the multi-atlas intra-subject reproducibility on a testing set of 380 images, (3) demonstrate significant increases in the reproducibility of intra-subject segmentations when compared to a state-of-the-art multi-atlas framework on a separate reproducibility dataset, (4) show that, under the MLF framework, the large-scale data model significantly improves the segmentation over the small-scale model, and (5) indicate that the MLF framework has comparable performance to state-of-the-art multi-atlas segmentation algorithms without using non-local information. Copyright © 2015 Elsevier B.V. All rights reserved.
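A minimal sketch of the two MLF ingredients on synthetic data (scikit-learn stand-ins, not the authors' code): a boosted learner that maps a weak initial segmentation to the multi-atlas target, and a low-dimensional projection for selecting example images:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import AdaBoostClassifier

    rng = np.random.default_rng(0)

    # Synthetic per-voxel features: image intensity plus a noisy weak initial
    # segmentation label; the target is the (expensive) multi-atlas labelling.
    n = 5000
    intensity = rng.normal(0, 1, n)
    weak_label = (intensity + rng.normal(0, 0.8, n) > 0).astype(float)
    X = np.column_stack([intensity, weak_label])
    y = (intensity > 0).astype(int)  # stand-in for multi-atlas labels

    # Core MLF idea: a boosted learner maps the cheap weak segmentation (plus
    # image features) to the multi-atlas segmentation result.
    learner = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X[:4000], y[:4000])
    print("replication accuracy:", learner.score(X[4000:], y[4000:]))

    # The low-dimensional representation used to select locally appropriate
    # example images can be sketched with PCA over flattened training images.
    images = rng.normal(size=(20, 256))  # 20 hypothetical flattened images
    print("embedding shape:", PCA(n_components=3).fit_transform(images).shape)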
A Goddard Multi-Scale Modeling System with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, W.K.; Anderson, D.; Atlas, R.; Chern, J.; Houser, P.; Hou, A.; Lang, S.; Lau, W.; Peters-Lidard, C.; Kakar, R.;
2008-01-01
Numerical cloud-resolving models (CRMs), which are based on the non-hydrostatic equations of motion, have been extensively applied to cloud-scale and mesoscale processes during the past four decades. Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that CRMs agree with observations in simulating various types of clouds and cloud systems from different geographic locations. Cloud-resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that Numerical Weather Prediction (NWP) and regional-scale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign data can provide initial conditions as well as validation through utilizing the Earth Satellite simulators. At Goddard, we have developed a multi-scale modeling system with unified physics. The modeling system consists of a coupled GCM-CRM (or MMF), a state-of-the-art Weather Research and Forecasting (WRF) model and a cloud-resolving model (the Goddard Cumulus Ensemble model). In these models, the same microphysical schemes (2ICE, several 3ICE), radiation (including explicitly calculated cloud optical properties), and surface models are applied. In addition, a comprehensive unified Earth Satellite simulator has been developed at GSFC, which is designed to fully utilize the multi-scale modeling system. A brief review of the multi-scale modeling system with unified physics/simulator and examples is presented in this article.
NASA Astrophysics Data System (ADS)
Joyce, Steven; Hartley, Lee; Applegate, David; Hoek, Jaap; Jackson, Peter
2014-09-01
Forsmark in Sweden has been proposed as the site of a geological repository for spent high-level nuclear fuel, to be located at a depth of approximately 470 m in fractured crystalline rock. The safety assessment for the repository has required a multi-disciplinary approach to evaluate the impact of hydrogeological and hydrogeochemical conditions close to the repository and in a wider regional context. Assessing the consequences of potential radionuclide releases requires quantitative site-specific information concerning the details of groundwater flow on the scale of individual waste canister locations (1-10 m) as well as details of groundwater flow and composition on the scale of groundwater pathways between the facility and the surface (500 m to 5 km). The purpose of this article is to provide an illustration of multi-scale modeling techniques and the results obtained when combining aspects of local-scale flows in fractures around a potential contaminant source with regional-scale groundwater flow and transport subject to natural evolution of the system. The approach set out is novel, as it incorporates both different scales of model and different levels of detail, combining discrete fracture network and equivalent continuous porous medium representations of fractured bedrock.
Programmable snapping composites with bio-inspired architecture.
Schmied, Jascha U; Le Ferrand, Hortense; Ermanni, Paolo; Studart, André R; Arrieta, Andres F
2017-03-13
The development of programmable self-shaping materials enables the onset of new and innovative functionalities in many application fields. Commonly, shape adaptation is achieved by exploiting diffusion-driven swelling or nano-scale phase transition, limiting the change of shape to slow motion predominantly determined by the environmental conditions and/or the material's specificity. To address these shortcomings, we report shape adaptable programmable shells that undergo morphing via a snap-through mechanism inspired by the Dionaea muscipula leaf, known as the Venus fly trap. The presented shells are composite materials made of epoxy reinforced by stiff anisotropic alumina micro-platelets oriented in specific directions. By tailoring the microstructure via magnetically-driven alignment of the platelets, we locally tune the pre-strain and stiffness anisotropy of the composite. This novel approach enables the fabrication of complex shapes showing non-orthotropic curvatures and stiffness gradients, radically extending the design space when compared to conventional long-fibre reinforced multi-stable composites. The rare combination of large stresses, short actuation times and complex shapes results in hinge-free artificial shape adaptable systems with large design freedom for a variety of morphing applications.
Multi-scale graph-cut algorithm for efficient water-fat separation.
Berglund, Johan; Skorpil, Mikael
2017-09-01
To improve the accuracy and robustness to noise in water-fat separation by unifying the multi-scale and graph-cut based approaches to B0 correction. A previously proposed water-fat separation algorithm that corrects for B0 field inhomogeneity in 3D by a single quadratic pseudo-Boolean optimization (QPBO) graph cut was incorporated into a multi-scale framework, where field map solutions are propagated from coarse to fine scales for voxels that are not resolved by the graph cut. The accuracy of the single-scale and multi-scale QPBO algorithms was evaluated against benchmark reference datasets. The robustness to noise was evaluated by adding noise to the input data prior to water-fat separation. Both algorithms achieved the highest accuracy when compared with seven previously published methods, while computation times were acceptable for implementation in clinical routine. The multi-scale algorithm was more robust to noise than the single-scale algorithm, while causing only a small increase (+10%) of the reconstruction time. The proposed 3D multi-scale QPBO algorithm offers accurate water-fat separation, robustness to noise, and fast reconstruction. The software implementation is freely available to the research community. Magn Reson Med 78:941-949, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
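The coarse-to-fine propagation step can be sketched as follows (the QPBO graph cut itself is not shown; this only illustrates how unresolved voxels inherit the upsampled coarse field map):

    import numpy as np
    from scipy.ndimage import zoom

    def propagate_field_map(coarse_fmap, fine_fmap, resolved_mask):
        """Coarse-to-fine propagation: voxels resolved at the fine scale keep
        their value; unresolved voxels inherit the upsampled coarse solution."""
        factors = [fs / cs for fs, cs in zip(fine_fmap.shape, coarse_fmap.shape)]
        upsampled = zoom(coarse_fmap, factors, order=1)  # (bi/tri)linear upsampling
        return np.where(resolved_mask, fine_fmap, upsampled)

    # Toy 2D example: 4x4 coarse field map, 8x8 fine grid, left half resolved.
    coarse = np.arange(16, dtype=float).reshape(4, 4)
    fine = np.zeros((8, 8))
    mask = np.zeros((8, 8), dtype=bool)
    mask[:, :4] = True
    print(propagate_field_map(coarse, fine, mask))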
Multi-petascale highly efficient parallel supercomputer
Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng
2015-07-14
A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, and that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.
Mirrored continuum and molecular scale simulations of the ignition of high-pressure phases of RDX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Kibaek; Stewart, D. Scott, E-mail: santc@illinois.edu, E-mail: dss@illinois.edu; Joshi, Kaushik
2016-05-14
We present a mirrored atomistic and continuum framework that is used to describe the ignition of energetic materials, and a high-pressure phase of RDX in particular. The continuum formulation uses meaningful averages of thermodynamic properties obtained from the atomistic simulation and a simplification of enormously complex reaction kinetics. In particular, components are identified based on molecular weight bin averages and our methodology assumes that both the averaged atomistic and continuum simulations are represented on the same time and length scales. The atomistic simulations of thermally initiated ignition of RDX are performed using reactive molecular dynamics (RMD). The continuum model is based on multi-component thermodynamics and uses a kinetics scheme that describes observed chemical changes of the averaged atomistic simulations. Thus the mirrored continuum simulations mimic the rapid change in pressure, temperature, and average molecular weight of species in the reactive mixture. This mirroring enables a new technique to simplify the chemistry obtained from reactive MD simulations while retaining the observed features and spatial and temporal scales from both the RMD and continuum model. The primary benefit of this approach is a potentially powerful, but familiar way to interpret the atomistic simulations and understand the chemical events and reaction rates. The approach is quite general and thus can provide a way to model chemistry based on atomistic simulations and extend the reach of those simulations.
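The molecular-weight-bin averaging used to define continuum components can be illustrated with a toy snapshot (species weights and populations below are hypothetical):

    import numpy as np

    def bin_species_by_weight(weights, counts, bin_edges):
        """Lump species from a reactive-MD snapshot into molecular-weight bins,
        returning per-bin total counts and count-weighted mean weights, the kind
        of averaging used to define continuum 'components'."""
        weights = np.asarray(weights, float)
        counts = np.asarray(counts, float)
        idx = np.digitize(weights, bin_edges)      # bin index for each species
        nbins = len(bin_edges) + 1
        totals = np.bincount(idx, weights=counts, minlength=nbins)
        mass = np.bincount(idx, weights=counts * weights, minlength=nbins)
        mean_w = np.where(totals > 0, mass / np.maximum(totals, 1e-12), np.nan)
        return totals, mean_w

    # Hypothetical snapshot: species molecular weights (g/mol) and populations.
    w = [18.0, 30.0, 46.0, 222.1, 120.0]   # e.g. H2O, small fragments, RDX, ...
    n = [500, 300, 200, 50, 80]
    print(bin_species_by_weight(w, n, bin_edges=[50.0, 150.0]))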
Techniques for Mapping Synthetic Aperture Radar Processing Algorithms to Multi-GPU Clusters
2012-12-01
Experimental results were generated with 10 nVidia Tesla C2050 GPUs having maximum throughput of 972 Gflop/s. Our approach scales well for output...
Probabilistic, meso-scale flood loss modelling
NASA Astrophysics Data System (ADS)
Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno
2016-04-01
Flood risk analyses are an important basis for decisions on flood risk management and adaptation. However, such analyses are associated with significant uncertainty, even more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention during the last years, they are still not standard practice for flood risk assessments, and even less so for flood loss modelling. State of the art in flood loss modelling is still the use of simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood loss models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we demonstrate and evaluate the upscaling of the approach to the meso-scale, namely on the basis of land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany (Botto et al. submitted). The application of bagging decision tree based loss models provides a probability distribution of estimated loss per municipality. Validation is undertaken on the one hand via a comparison with eight deterministic loss models including stage-damage functions as well as multi-variate models. On the other hand, the results are compared with official loss data provided by the Saxon Relief Bank (SAB). The results show that uncertainties of loss estimation remain high. Thus, the significant advantage of this probabilistic flood loss estimation approach is that it inherently provides quantitative information about the uncertainty of the prediction. References: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64. Botto A, Kreibich H, Merz B, Schröter K (submitted) Probabilistic, multi-variable flood loss modelling on the meso-scale with BT-FLEMO. Risk Analysis.
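The principle behind such probabilistic, tree-based loss models can be sketched with bagged regression trees, where the spread of per-tree predictions supplies the loss distribution (synthetic data; BT-FLEMO itself is not reproduced here):

    import numpy as np
    from sklearn.ensemble import BaggingRegressor

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for loss predictors (water depth, duration, precaution)
    # and relative building loss in [0, 1]; real models use survey data.
    n = 1000
    X = np.column_stack([rng.uniform(0, 3, n),       # water depth [m]
                         rng.uniform(0, 120, n),     # inundation duration [h]
                         rng.integers(0, 2, n)])     # precautionary measures (0/1)
    y = np.clip(0.2 * X[:, 0] + 0.001 * X[:, 1] - 0.1 * X[:, 2]
                + rng.normal(0, 0.05, n), 0, 1)

    # Bagged regression trees (scikit-learn's default base estimator is a tree);
    # each tree sees a different bootstrap sample of the training data.
    model = BaggingRegressor(n_estimators=200, random_state=0).fit(X, y)

    # For one flooded unit, the spread of per-tree predictions supplies a
    # predictive loss distribution instead of a single deterministic value.
    unit = np.array([[2.0, 48.0, 0.0]])
    per_tree = np.array([t.predict(unit)[0] for t in model.estimators_])
    print(f"median {np.median(per_tree):.3f}, 90% interval "
          f"[{np.quantile(per_tree, 0.05):.3f}, {np.quantile(per_tree, 0.95):.3f}]")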
High-resolution time-frequency representation of EEG data using multi-scale wavelets
NASA Astrophysics Data System (ADS)
Li, Yang; Cui, Wei-Gang; Luo, Mei-Lin; Li, Ke; Wang, Lina
2017-09-01
An efficient time-varying autoregressive (TVAR) modelling scheme that expands the time-varying parameters onto multi-scale wavelet basis functions is presented for modelling nonstationary signals, with applications to time-frequency analysis (TFA) of electroencephalogram (EEG) signals. In the new parametric modelling framework, the time-dependent parameters of the TVAR model are locally represented using a novel multi-scale wavelet decomposition scheme, which can capture the smooth trends as well as track the abrupt changes of time-varying parameters simultaneously. A forward orthogonal least squares (FOLS) algorithm, aided by mutual information criteria, is then applied for sparse model term selection and parameter estimation. Two simulation examples illustrate that the proposed multi-scale wavelet basis functions outperform single-scale wavelet basis functions and the Kalman filter algorithm for many nonstationary processes. Furthermore, an application of the proposed method to a real EEG signal demonstrates that the new approach can provide highly time-dependent spectral resolution capability.
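A minimal sketch of the core idea, expanding a time-varying AR(1) coefficient on a multi-scale (Haar-like) basis and solving by ordinary least squares; the FOLS term selection and mutual-information criteria are omitted:

    import numpy as np

    def haar_basis(n, levels=3):
        """Constant term plus Haar-like square waves at dyadic scales."""
        t = np.arange(n) / n
        cols = [np.ones(n)]
        for j in range(levels):
            for k in range(2 ** j):
                lo, mid, hi = k / 2 ** j, (k + 0.5) / 2 ** j, (k + 1) / 2 ** j
                cols.append(np.where((t >= lo) & (t < mid), 1.0,
                            np.where((t >= mid) & (t < hi), -1.0, 0.0)))
        return np.column_stack(cols)

    # Simulate a TVAR(1) process whose coefficient drifts and then jumps.
    n = 1024
    a = np.where(np.arange(n) < n // 2, 0.5 + 0.3 * np.arange(n) / n, -0.6)
    rng = np.random.default_rng(0)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = a[t] * x[t - 1] + rng.normal(0, 0.1)

    # a(t) = B(t) @ c, so x[t] = (B[t] * x[t-1]) @ c + e[t]: linear in c.
    B = haar_basis(n, levels=4)
    design = B[1:] * x[:-1, None]
    c, *_ = np.linalg.lstsq(design, x[1:], rcond=None)
    a_hat = B @ c
    print("mean |a - a_hat|:", np.abs(a - a_hat).mean())  # tracks drift and jump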
A graph-based approach for the retrieval of multi-modality medical images.
Kumar, Ashnil; Kim, Jinman; Wen, Lingfeng; Fulham, Michael; Feng, Dagan
2014-02-01
In this paper, we address the retrieval of multi-modality medical volumes, which consist of two different imaging modalities, acquired sequentially, from the same scanner. One such example, positron emission tomography and computed tomography (PET-CT), provides physicians with complementary functional and anatomical features as well as spatial relationships and has led to improved cancer diagnosis, localisation, and staging. The challenge of multi-modality volume retrieval for cancer patients lies in representing the complementary geometric and topologic attributes between tumours and organs. These attributes and relationships, which are used for tumour staging and classification, can be formulated as a graph. It has been demonstrated that graph-based methods have high accuracy for retrieval by spatial similarity. However, naïvely representing all relationships on a complete graph obscures the structure of the tumour-anatomy relationships. We propose a new graph structure derived from complete graphs that structurally constrains the edges connected to tumour vertices based upon the spatial proximity of tumours and organs. This enables retrieval on the basis of tumour localisation. We also present a similarity matching algorithm that accounts for different feature sets for graph elements from different imaging modalities. Our method emphasises the relationships between a tumour and related organs, while still modelling patient-specific anatomical variations. Constraining tumours to related anatomical structures improves the discrimination potential of graphs, making it easier to retrieve similar images based on tumour location. We evaluated our retrieval methodology on a dataset of clinical PET-CT volumes. Our results showed that our method enabled the retrieval of multi-modality images using spatial features. Our graph-based retrieval algorithm achieved a higher precision than several other retrieval techniques: gray-level histograms as well as state-of-the-art methods such as visual words using the scale-invariant feature transform (SIFT) and relational matrices representing the spatial arrangements of objects. Copyright © 2013 Elsevier B.V. All rights reserved.
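The proximity-constrained graph construction can be sketched as follows (illustrative coordinates and threshold; the paper's feature sets and matching algorithm are not reproduced):

    import itertools
    import numpy as np
    import networkx as nx

    def build_constrained_graph(tumours, organs, max_dist=60.0):
        """Graph of tumour/organ regions in which tumour edges are kept only
        to spatially proximate structures, instead of a complete graph."""
        g = nx.Graph()
        for name, pos in {**tumours, **organs}.items():
            g.add_node(name, pos=np.asarray(pos, float),
                       kind="tumour" if name in tumours else "organ")
        # Organ-organ edges keep the anatomical scaffold.
        for a, b in itertools.combinations(organs, 2):
            g.add_edge(a, b)
        # Tumour edges are constrained by spatial proximity (e.g. in mm).
        for t, tp in tumours.items():
            for o, op in organs.items():
                d = float(np.linalg.norm(np.asarray(tp, float) - np.asarray(op, float)))
                if d <= max_dist:
                    g.add_edge(t, o, dist=d)
        return g

    tumours = {"tumour_1": (10, 20, 30)}
    organs = {"lung_R": (15, 25, 40), "liver": (80, 90, 10)}
    g = build_constrained_graph(tumours, organs)
    print(sorted(g.edges()))  # tumour_1 links to lung_R only; liver is too far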
Semi-automated extraction of landslides in Taiwan based on SPOT imagery and DEMs
NASA Astrophysics Data System (ADS)
Eisank, Clemens; Hölbling, Daniel; Friedl, Barbara; Chen, Yi-Chin; Chang, Kang-Tsung
2014-05-01
The vast availability and improved quality of optical satellite data and digital elevation models (DEMs), as well as the need for complete and up-to-date landslide inventories at various spatial scales, have fostered the development of semi-automated landslide recognition systems. Among the tested approaches for designing such systems, object-based image analysis (OBIA) stood out as a highly promising methodology. OBIA offers a flexible, spatially enabled framework for effective landslide mapping. Most object-based landslide mapping systems, however, have been tailored to specific, mainly small-scale study areas or even to single landslides only. Even though reported mapping accuracies tend to be higher than for pixel-based approaches, accuracy values are still relatively low and depend on the particular study. There is still room to improve the applicability and objectivity of object-based landslide mapping systems. The presented study aims at developing a knowledge-based landslide mapping system implemented in an OBIA environment, i.e. Trimble eCognition. In comparison to previous knowledge-based approaches, the classification of segmentation-derived multi-scale image objects relies on digital landslide signatures. These signatures hold the common operational knowledge on digital landslide mapping, as reported by 25 Taiwanese landslide experts during personal semi-structured interviews. Specifically, the signatures include information on commonly used data layers, spectral and spatial features, and feature thresholds. The signatures guide the selection and implementation of mapping rules that were finally encoded in Cognition Network Language (CNL). Multi-scale image segmentation is optimized by using the improved Estimation of Scale Parameter (ESP) tool. The approach described above is developed and tested for mapping landslides in a sub-region of the Baichi catchment in Northern Taiwan based on SPOT imagery and a high-resolution DEM. An object-based accuracy assessment is conducted by quantitatively comparing extracted landslide objects with landslide polygons that were visually interpreted by local experts. The applicability and transferability of the mapping system are evaluated by comparing initial accuracies with those achieved for the following two tests: first, usage of a SPOT image from the same year, but for a different area within the Baichi catchment; second, usage of SPOT images from multiple years for the same region. The integration of the common knowledge via digital landslide signatures is new in object-based landslide studies. In combination with strategies to optimize image segmentation this may lead to a more objective, transferable and stable knowledge-based system for the mapping of landslides from optical satellite data and DEMs.
MEMS-Based Communications Systems for Space-Based Applications
NASA Technical Reports Server (NTRS)
DeLosSantos, Hector J.; Brunner, Robert A.; Lam, Juan F.; Hackett, Le Roy H.; Lohr, Ross F., Jr.; Larson, Lawrence E.; Loo, Robert Y.; Matloubian, Mehran; Tangonan, Gregory L.
1995-01-01
As user demand for higher capacity and flexibility in communications satellites increases, new ways to cope with the inherent limitations posed by the prohibitive mass and power consumption, needed to satisfy those requirements, are under investigation. Recent studies suggest that while new satellite architectures are necessary to enable multi-user, multi-data rate, multi-location satellite links, these new architectures will inevitably increase power consumption, and in turn, spacecraft mass, to such an extent that their successful implementation will demand novel lightweight/low power hardware approaches. In this paper, following a brief introduction to the fundamentals of communications satellites, we address the impact of micro-electro-mechanical systems (MEMS) technology, in particular micro-electro-mechanical (MEM) switches to mitigate the above mentioned problems and show that low-loss/wide bandwidth MEM switches will go a long way towards enabling higher capacity and flexibility space-based communications systems.
Dabbah, M A; Graham, J; Petropoulos, I N; Tavakoli, M; Malik, R A
2011-10-01
Diabetic peripheral neuropathy (DPN) is one of the most common long term complications of diabetes. Corneal confocal microscopy (CCM) image analysis is a novel non-invasive technique which quantifies corneal nerve fibre damage and enables diagnosis of DPN. This paper presents an automatic analysis and classification system for detecting nerve fibres in CCM images based on a multi-scale adaptive dual-model detection algorithm. The algorithm exploits the curvilinear structure of the nerve fibres and adapts itself to the local image information. Detected nerve fibres are then quantified and used as feature vectors for classification using random forest (RF) and neural networks (NNT) classifiers. We show, in a comparative study with other well known curvilinear detectors, that the best performance is achieved by the multi-scale dual model in conjunction with the NNT classifier. An evaluation of clinical effectiveness shows that the performance of the automated system matches that of ground-truth defined by expert manual annotation. Copyright © 2011 Elsevier B.V. All rights reserved.
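As a generic stand-in for the paper's dual-model detector (which is not reproduced here), a multi-scale curvilinear-structure pipeline can be sketched with scikit-image's Frangi vesselness filter feeding a random forest:

    import numpy as np
    from skimage.filters import frangi
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Synthetic CCM-like image: one bright curvilinear "nerve fibre" on noise.
    img = rng.normal(0, 0.1, (128, 128))
    rr = np.arange(128)
    cc = (64 + 20 * np.sin(rr / 15.0)).astype(int)
    img[rr, cc] += 1.0
    truth = np.zeros((128, 128), dtype=int)
    truth[rr, cc] = 1

    # Multi-scale curvilinear response (bright ridges) as per-pixel features.
    feats = np.stack([frangi(img, sigmas=[s], black_ridges=False)
                      for s in (1, 2, 4)], axis=-1).reshape(-1, 3)

    # Classify pixels as fibre / background from the multi-scale responses.
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(feats, truth.ravel())
    print("training accuracy:", clf.score(feats, truth.ravel()))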
NASA Astrophysics Data System (ADS)
Luo, Aiwen; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Huang, Zunkai; Jürgen Mattausch, Hans
2018-04-01
Feature extraction techniques are a cornerstone of object detection in computer-vision-based applications. The detection performance of vision-based detection systems is often degraded by, e.g., changes in the illumination intensity of the light source, foreground-background contrast variations or automatic gain control from the camera. In order to avoid such degradation effects, we present a block-based L1-norm-circuit architecture which is configurable for different image-cell sizes, cell-based feature descriptors and image resolutions according to customization parameters from the circuit input. The incorporated flexibility in both the image resolution and the cell size for multi-scale image pyramids leads to lower computational complexity and power consumption. Additionally, an object-detection prototype for performance evaluation in 65 nm CMOS implements the proposed L1-norm circuit together with a histogram of oriented gradients (HOG) descriptor and a support vector machine (SVM) classifier. The proposed parallel architecture with high hardware efficiency enables real-time processing, high detection robustness, small chip-core area as well as low power consumption for multi-scale object detection.
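A software analogue of the computation such a circuit accelerates can be sketched with a HOG descriptor using L1 block normalization; the cell and block sizes are the configurable parameters (the values below are illustrative, not the chip's):

```python
# Software analogue of block-based L1 normalization in a HOG-style descriptor
# (a sketch of the computation the paper maps to hardware, not their circuit).
import numpy as np
from skimage import data
from skimage.feature import hog

image = data.camera()  # sample grayscale image shipped with scikit-image

# HOG with L1 block normalization; cell size is a configurable parameter,
# mirroring the circuit's configurable image-cell sizes.
descriptor = hog(image, orientations=9, pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2), block_norm='L1')
print(descriptor.shape)

# The L1 normalization itself, applied to one block of cell histograms:
def l1_normalize(block, eps=1e-5):
    return block / (np.abs(block).sum() + eps)
```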
Remote focusing for programmable multi-layer differential multiphoton microscopy
Hoover, Erich E.; Young, Michael D.; Chandler, Eric V.; Luo, Anding; Field, Jeffrey J.; Sheetz, Kraig E.; Sylvester, Anne W.; Squier, Jeff A.
2010-01-01
We present the application of remote focusing to multiphoton laser scanning microscopy and utilize this technology to demonstrate simultaneous, programmable multi-layer imaging. Remote focusing is used to independently control the axial location of multiple focal planes that can be simultaneously imaged with single element detection. This facilitates volumetric multiphoton imaging in scattering specimens and can be practically scaled to a large number of focal planes. Further, it is demonstrated that the remote focusing control can be synchronized with the lateral scan directions, enabling imaging in orthogonal scan planes. PMID:21326641
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, Qiang
The rational design of materials, the development of accurate and efficient material simulation algorithms, and the determination of the response of materials to environments and loads occurring in practice all require an understanding of mechanics at disparate spatial and temporal scales. The project addresses mathematical and numerical analyses for material problems for which relevant scales range from those usually treated by molecular dynamics all the way up to those most often treated by classical elasticity. The prevalent approach towards developing a multiscale material model couples two or more well known models, e.g., molecular dynamics and classical elasticity, each of which is useful at a different scale, creating a multiscale multi-model. However, the challenges behind such a coupling are formidable and largely arise because the atomistic and continuum models employ nonlocal and local models of force, respectively. The project focuses on a multiscale analysis of the peridynamics materials model. Peridynamics can be used as a transition between molecular dynamics and classical elasticity so that the difficulties encountered when directly coupling those two models are mitigated. In addition, in some situations, peridynamics can be used all by itself as a material model that accurately and efficiently captures the behavior of materials over a wide range of spatial and temporal scales. Peridynamics is well suited to these purposes because it employs a nonlocal model of force, analogous to that of molecular dynamics; furthermore, at sufficiently large length scales and assuming smooth deformation, peridynamics can be approximated by classical elasticity. The project will extend the emerging mathematical and numerical analysis of peridynamics. One goal is to develop a peridynamics-enabled multiscale multi-model that potentially provides a new and more extensive mathematical basis for coupling classical elasticity and molecular dynamics, thus enabling next generation atomistic-to-continuum multiscale simulations. In addition, a rigorous study of finite element discretizations of peridynamics will be considered. Using the fact that peridynamics is spatially derivative free, we will also characterize the space of admissible peridynamic solutions and carry out systematic analyses of the models, in particular rigorously showing how peridynamics encompasses fracture and other failure phenomena. Additional aspects of the project include the mathematical and numerical analysis of peridynamics applied to stochastic peridynamics models. In summary, the project will make feasible mathematically consistent multiscale models for the analysis and design of advanced materials.
Sigala, Rodrigo; Haufe, Sebastian; Roy, Dipanjan; Dinse, Hubert R.; Ritter, Petra
2014-01-01
During the past two decades, growing evidence has indicated that brain oscillations in the alpha band (~10 Hz) not only reflect an “idle” state of cortical activity, but also take a more active role in the generation of complex cognitive functions. A recent study shows that more than 60% of the observed inter-subject variability in perceptual learning can be ascribed to ongoing alpha activity. This evidence indicates a significant role of alpha oscillations in perceptual learning and hence motivates exploration of the potential underlying mechanisms. It is therefore the purpose of this review to highlight existing evidence that ascribes to intrinsic alpha oscillations a role in shaping our ability to learn. In the review, we disentangle the alpha rhythm into different neural signatures that control information processing within individual functional building blocks of perceptual learning. We further highlight computational studies that shed light on potential mechanisms regarding how alpha oscillations may modulate information transfer and connectivity changes relevant for learning. To enable testing of those model-based hypotheses, we emphasize the need for multidisciplinary approaches combining assessment of behavior and multi-scale neuronal activity, active modulation of ongoing brain states, and computational modeling to reveal the mathematical principles of the complex neuronal interactions. In particular, we highlight the relevance of multi-scale modeling frameworks such as the one currently being developed by “The Virtual Brain” project. PMID:24772077
Yorgancıoğlu, Ayşe Arzu; Kalaycı, Ömer; Cingi, Cemal; Gemicioğlu, Bilun; Kalyoncu, Ali Fuat; Agache, Ioana; Bachert, Claus; Bedbrook, Anna; Canonica, George Walter; Casale, Thomas; Cruz, Alvaro; Fokkens, Wytske J; Hellings, Peter; Samolinski, Boleslaw; Bousquet, Jean
2017-03-01
The Allergic Rhinitis and its Impact on Asthma (ARIA) initiative commenced during a World Health Organization (WHO) workshop in 1999. The initial goals were (i) to propose a new allergic rhinitis classification, (ii) to promote the concept of multi-morbidity in asthma and rhinitis and (iii) to develop guidelines with all stakeholders for global use in all countries and populations. ARIA, disseminated and implemented in over 70 countries globally, is now focusing on the implementation of emerging technologies for individualized and predictive medicine. MASK (MACVIA (Contre les MAladies Chroniques pour un VIeillissement Actif)-ARIA Sentinel NetworK) uses mobile technology to develop care pathways in order to enable the management of rhinitis and asthma by a multi-disciplinary group or by patients themselves. An App (Android and iOS) is available in 20 countries and 15 languages. It uses a visual analogue scale to assess symptom control and work productivity as well as a clinical decision support system. It is associated with an inter-operable tablet for physicians and other health care professionals. The scaling-up strategy uses the recommendations of the European Innovation Partnership on Active and Healthy Ageing. The aim of the novel ARIA approach is to provide an active and healthy life to rhinitis sufferers, whatever their age, sex or socio-economic status, in order to reduce health and social inequalities incurred by the disease.
Knoch, Tobias A; Wachsmuth, Malte; Kepper, Nick; Lesnussa, Michael; Abuseiris, Anis; Ali Imam, A M; Kolovos, Petros; Zuin, Jessica; Kockx, Christel E M; Brouwer, Rutger W W; van de Werken, Harmen J G; van IJcken, Wilfred F J; Wendt, Kerstin S; Grosveld, Frank G
2016-01-01
The dynamic three-dimensional chromatin architecture of genomes and its co-evolutionary connection to its function (the storage, expression, and replication of genetic information) is still one of the central issues in biology. Here, we describe the much debated 3D architecture of the human and mouse genomes from the nucleosomal to the megabase-pair level by a novel approach combining selective high-throughput high-resolution chromosomal interaction capture (T2C), polymer simulations, and scaling analysis of the 3D architecture and the DNA sequence. The genome is compacted into a chromatin quasi-fibre with ~5 ± 1 nucleosomes/11 nm, folded into stable ~30-100 kbp loops forming stable loop aggregates/rosettes connected by similar-sized linkers. Minor but significant variations in the architecture are seen between cell types and functional states. The architecture and the DNA sequence show very similar fine-structured multi-scaling behaviour, confirming their co-evolution and the above. This architecture, its dynamics, and its accessibility balance stability and flexibility, ensuring genome integrity and variation, and enabling gene expression/regulation by self-organization of (in)active units already in proximity. Our results agree with the heuristics of the field and allow "architectural sequencing" at a genome mechanics level to understand the inseparable systems genomic properties.
MultiMetEval: Comparative and Multi-Objective Analysis of Genome-Scale Metabolic Models
Gevorgyan, Albert; Kierzek, Andrzej M.; Breitling, Rainer; Takano, Eriko
2012-01-01
Comparative metabolic modelling is emerging as a novel field, supported by the development of reliable and standardized approaches for constructing genome-scale metabolic models in high throughput. New software solutions are needed to allow efficient comparative analysis of multiple models in the context of multiple cellular objectives. Here, we present the user-friendly software framework Multi-Metabolic Evaluator (MultiMetEval), built upon SurreyFBA, which allows the user to compose collections of metabolic models that together can be subjected to flux balance analysis. Additionally, MultiMetEval implements functionalities for multi-objective analysis by calculating the Pareto front between two cellular objectives. Using a previously generated dataset of 38 actinobacterial genome-scale metabolic models, we show how these approaches can lead to exciting novel insights. Firstly, after incorporating several pathways for the biosynthesis of natural products into each of these models, comparative flux balance analysis predicted that species like Streptomyces that harbour the highest diversity of secondary metabolite biosynthetic gene clusters in their genomes do not necessarily have the metabolic network topology most suitable for compound overproduction. Secondly, multi-objective analysis of biomass production and natural product biosynthesis in these actinobacteria shows that the well-studied occurrence of discrete metabolic switches during the change of cellular objectives is inherent to their metabolic network architecture. Comparative and multi-objective modelling can lead to insights that could not be obtained by normal flux balance analyses. MultiMetEval provides a powerful platform that makes these analyses straightforward for biologists. Sources and binaries of MultiMetEval are freely available from https://github.com/PiotrZakrzewski/MetEval/downloads. PMID:23272111
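The core computation in this kind of tool is a linear program over the stoichiometric matrix. A toy flux balance analysis in that spirit (a sketch with scipy, not SurreyFBA or MultiMetEval code; the three-reaction network is invented for illustration):

```python
# Toy flux balance analysis (a sketch, not SurreyFBA/MultiMetEval):
# maximize an objective flux subject to steady state S v = 0 and flux bounds.
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake -> A -> B -> biomass; S is metabolites x reactions.
S = np.array([[1, -1,  0],    # A: produced by uptake, consumed by r2
              [0,  1, -1]])   # B: produced by r2, consumed by biomass rxn
bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10 units

# linprog minimizes, so negate the biomass flux (third reaction) to maximize.
c = np.array([0, 0, -1])
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("max biomass flux:", -res.fun)       # limited by the uptake bound: 10.0
```

Sweeping the bound on one objective while optimizing the other, in the same way, traces out the Pareto front between two cellular objectives.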
Acceleration of objects to high velocity by electromagnetic forces
Post, Richard F
2017-02-28
Two exemplary approaches to the acceleration of projectiles are provided. Both approaches can utilize concepts associated with the Inductrack maglev system. Either of them provides an effective means of accelerating multi-kilogram projectiles to velocities of several kilometers per second, using launchers of order 10 meters in length, thus enabling the acceleration of projectiles to high velocities by electromagnetic forces.
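A quick consistency check of those figures under a constant-acceleration assumption (the 3 km/s and 10 m values below are taken from the quoted orders of magnitude):

```python
# Back-of-envelope check of the quoted figures: constant acceleration
# v^2 = 2*a*L  =>  a = v^2 / (2*L)
v = 3000.0   # m/s, "several kilometers per second"
L = 10.0     # m, launcher "of order 10 meters in length"
a = v**2 / (2 * L)
print(f"required acceleration: {a:.2e} m/s^2 ({a / 9.81:.0f} g)")
# ~4.5e5 m/s^2, i.e. roughly 46,000 g sustained over the launch
```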
Multi-Scale Approach to Understanding Source-Sink Dynamics of Amphibians
2015-12-01
…spotted salamander, A. maculatum) at Fort Leonard Wood (FLW), Missouri. We used a multi-faceted approach in which we combined intensive ecological field studies, genetic analyses, and spatial demographic networks to identify optimal locations for wetland construction and restoration. Ecological Applications. Walls, S. C., Ball, L. C…
Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator.
Hahne, Jan; Dahmen, David; Schuecker, Jannis; Frommer, Andreas; Bolten, Matthias; Helias, Moritz; Diesmann, Markus
2017-01-01
Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.
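The time-continuous interactions described here amount to integrating coupled rate equations alongside event-based communication. A minimal stand-alone sketch of such rate dynamics (plain Euler on a toy linear network; not the simulator's actual numerics or its waveform-relaxation machinery):

```python
# Minimal sketch of time-continuous rate dynamics of the kind the framework
# integrates (plain Euler here; not the simulator's actual implementation).
import numpy as np

rng = np.random.default_rng(1)
n, dt, tau, steps = 10, 0.1, 10.0, 1000    # units of ms
W = rng.normal(0, 0.2, (n, n))             # recurrent coupling matrix
drive = rng.normal(0, 1.0, n)              # constant external input
x = np.zeros(n)                            # rate variables

for _ in range(steps):
    # tau * dx/dt = -x + W @ x + drive  (instantaneous interactions)
    x += dt / tau * (-x + W @ x + drive)

print(np.round(x, 3))
```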
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiali, E-mail: j.zhang@mpie.de; Morsdorf, Lutz, E-mail: l.morsdorf@mpie.de; Tasan, Cemal Cem, E-mail: c.tasan@mpie.de
In-situ scanning electron microscopy observations of the microstructure evolution during heat treatments are increasingly demanded due to the growing number of alloys with complex microstructures. Post-mortem characterization of the as-processed microstructures rarely provides sufficient insight into the exact route of the microstructure formation. On the other hand, in-situ SEM approaches are often limited due to the challenges arising with an in-situ heating setup, e.g. in (i) employing different detectors, (ii) preventing specimen surface degradation, or (iii) controlling and measuring the temperature precisely. Here, we explore and expand the capabilities of the “mid-way” solution by step-wise microstructure tracking, ex-situ, at selected steps of the heat treatment. This approach circumvents the limitations above, as it involves an atmosphere- and temperature-controlled dilatometer, and high resolution microstructure characterization (using electron channeling contrast imaging, electron backscatter diffraction, atom probe tomography, etc.). We demonstrate the capabilities of this approach by focusing on three cases: (i) nano-scale carbide precipitation during low-temperature tempering of martensitic steels, (ii) formation of transformation-induced geometrically necessary dislocations in a dual-phase steel during intercritical annealing, and (iii) the partial recrystallization of a metastable β-Ti alloy. - Highlights: • A multi-probe method to track microstructures during heat treatment is developed. • It enables the analysis of various complex phenomena, even those at atomistic scale. • It circumvents some of the free surface effects of classical in-situ experiments.
Multi-level molecular modelling for plasma medicine
NASA Astrophysics Data System (ADS)
Bogaerts, Annemie; Khosravian, Narjes; Van der Paal, Jonas; Verlackt, Christof C. W.; Yusupov, Maksudbek; Kamaraj, Balu; Neyts, Erik C.
2016-02-01
Modelling at the molecular or atomic scale can be very useful for obtaining a better insight into plasma medicine. This paper gives an overview of different atomic/molecular scale modelling approaches that can be used to study the direct interaction of plasma species with biomolecules or the consequences of these interactions for the biomolecules on a somewhat longer time-scale. These approaches include density functional theory (DFT), density functional based tight binding (DFTB), classical reactive and non-reactive molecular dynamics (MD) and united-atom or coarse-grained MD, as well as hybrid quantum mechanics/molecular mechanics (QM/MM) methods. Specific examples will be given for three important types of biomolecules present in human cells, i.e. proteins, DNA and the phospholipids found in the cell membrane. The results show that each of these modelling approaches has its specific strengths and limitations, and is particularly useful for certain applications. A multi-level approach is therefore most suitable for obtaining a global picture of the plasma-biomolecule interactions.
Multi-scale computation methods: Their applications in lithium-ion battery research and development
NASA Astrophysics Data System (ADS)
Siqi, Shi; Jian, Gao; Yue, Liu; Yan, Zhao; Qu, Wu; Wangwei, Ju; Chuying, Ouyang; Ruijuan, Xiao
2016-01-01
Based upon advances in theoretical algorithms, modeling and simulations, and computer technologies, the rational design of materials, cells, devices, and packs in the field of lithium-ion batteries is being realized incrementally and will at some point trigger a paradigm revolution by combining calculations and experiments linked by a big shared database, enabling accelerated development of the whole industrial chain. Theory and multi-scale modeling and simulation, as supplements to experimental efforts, can help greatly to close some of the current experimental and technological gaps, as well as predict path-independent properties and help to fundamentally understand path-independent performance in multiple spatial and temporal scales. Project supported by the National Natural Science Foundation of China (Grant Nos. 51372228 and 11234013), the National High Technology Research and Development Program of China (Grant No. 2015AA034201), and Shanghai Pujiang Program, China (Grant No. 14PJ1403900).
Wang, Guochao; Tan, Lilong; Yan, Shuhua
2018-02-07
We report on a frequency-comb-referenced absolute interferometer which instantly measures long distance by integrating multi-wavelength interferometry with direct synthetic wavelength interferometry. The reported interferometer utilizes four different wavelengths, simultaneously calibrated to the frequency comb of a femtosecond laser, to implement subwavelength distance measurement, while direct synthetic wavelength interferometry is elaborately introduced by launching a fifth wavelength to extend a non-ambiguous range for meter-scale measurement. A linearity test performed comparatively with a He-Ne laser interferometer shows a residual error of less than 70.8 nm in peak-to-valley over a 3 m distance, and a 10 h distance comparison is demonstrated to gain fractional deviations of ~3 × 10 -8 versus 3 m distance. Test results reveal that the presented absolute interferometer enables precise, stable, and long-term distance measurements and facilitates absolute positioning applications such as large-scale manufacturing and space missions.
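The non-ambiguous-range extension rests on simple synthetic-wavelength arithmetic: two wavelengths lambda1 and lambda2 beat at Lambda = lambda1*lambda2/|lambda1 - lambda2|, and the distance is unambiguous within Lambda/2. A sketch with illustrative wavelength values (not the interferometer's actual set):

```python
# Synthetic-wavelength arithmetic behind the extended non-ambiguous range.
# Wavelength values are illustrative, not the instrument's actual set.
def synthetic_wavelength(lam1_nm: float, lam2_nm: float) -> float:
    """Return the synthetic wavelength in meters for two optical wavelengths."""
    lam1, lam2 = lam1_nm * 1e-9, lam2_nm * 1e-9
    return lam1 * lam2 / abs(lam1 - lam2)

lam_s = synthetic_wavelength(1550.00, 1550.80)   # lines ~0.8 nm apart
print(f"synthetic wavelength: {lam_s * 1e3:.1f} mm, "
      f"non-ambiguous range: {lam_s / 2 * 1e3:.1f} mm")
# ~3.0 mm synthetic wavelength, ~1.5 mm non-ambiguous range
```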
Zapata, Luis E.
2004-12-21
The average power output of a laser is scaled, to first order, by increasing the transverse dimension of the gain medium while increasing the thickness of an index matched light guide proportionately. Strategic facets cut at the edges of the laminated gain medium provide a method by which the pump light introduced through edges of the composite structure is trapped and passes through the gain medium repeatedly. Spontaneous emission escapes the laser volume via these facets. A multi-faceted disk geometry with grooves cut into the thickness of the gain medium is optimized to passively reject spontaneous emission generated within the laser material, which would otherwise be trapped and amplified within the high index composite disk. Such geometry allows the useful size of the laser aperture to be increased, enabling the average laser output power to be scaled.
2012-01-01
Background: Tremendous progress has been made in the last ten years in reducing morbidity and mortality caused by malaria, in part because of increases in global funding for malaria control and elimination. Today, many countries are striving for malaria elimination. However, a major challenge is the neglect of cross-border and regional initiatives in malaria control and elimination. This paper seeks to better understand Global Fund support for multi-country initiatives. Methods: Documents and proposals were extracted and reviewed from two main sources, the Global Fund website and Aidspan.org. Documents and reports from the Global Fund Technical Review Panel, Board, and Secretariat documents such as guidelines and proposal templates were reviewed to establish the type of policies enacted and guidance provided from the Global Fund on multi-country initiatives and applications. From reviewing this information, the researchers created 29 variables according to eight dimensions to use in a review of Round 10 applications. All Round 10 multi-country applications (for HIV, malaria and tuberculosis) and all malaria multi-country applications (6) from Rounds 1 – 10 were extracted from the Global Fund website. A blind review was conducted of Round 10 applications using the 29 variables as a framework, followed by a review of four of the six successful malaria multi-country grant applications from Rounds 1 – 10. Findings: During Rounds 3 – 10 of the Global Fund, only 5.8% of grants submitted were for multi-country initiatives. Out of 83 multi-country proposals submitted, 25.3% were approved by the Technical Review Panel (TRP) for funding, compared to 44.9% of single-country applications. The majority of approved multi-country applications were for HIV (76.2%), followed by malaria (19.0%), then tuberculosis (4.8%). TRP recommendations resulted in improvements to application forms, although guidance was generally vague. The in-depth review of Round 10 multi-country proposals showed that applicants described their projects in one of two ways: a regional ‘network approach’ by which benefits are derived from economies of scale or from enhanced opportunities for mutual support and learning or the development of common policies and approaches; or a ‘cross-border’ approach for enabling activities to be more effectively delivered towards border-crossing populations or vectors. In Round 10, only those with a ‘network approach’ were recommended for funding. The Global Fund has only ever approved six malaria multi-country applications. Four approved applications stated strong arguments for a multi-country initiative, combining both ‘cross-border’ and ‘network’ approaches. Conclusion: With the cancellation of Round 11 and the proposal that the Global Fund adopt a more targeted and strategic approach to funding, the time is opportune for the Global Fund to develop a clear consensus about the key factors and criteria for funding malaria-specific multi-country initiatives. This study found a lack of guidance on the key features a multi-country proposal needs in order to be approved, and that applications directed towards the ‘network’ approach were most successful in Round 10. This type of multi-country proposal may favour other diseases such as HIV, whereas the need for malaria control and elimination is different, focusing on cross-border coordination and delivery of interventions to specific groups.
The Global Fund should seek to address these issues, give better guidance to countries and regions, and investigate disease-specific calls for multi-country and regional applications. PMID:23057734
NASA Astrophysics Data System (ADS)
Slaughter, A. E.; Permann, C.; Peterson, J. W.; Gaston, D.; Andrs, D.; Miller, J.
2014-12-01
The Idaho National Laboratory (INL)-developed Multiphysics Object Oriented Simulation Environment (MOOSE; www.mooseframework.org), is an open-source, parallel computational framework for enabling the solution of complex, fully implicit multiphysics systems. MOOSE provides a set of computational tools that scientists and engineers can use to create sophisticated multiphysics simulations. Applications built using MOOSE have computed solutions for chemical reaction and transport equations, computational fluid dynamics, solid mechanics, heat conduction, mesoscale materials modeling, geomechanics, and others. To facilitate the coupling of diverse and highly-coupled physical systems, MOOSE employs the Jacobian-free Newton-Krylov (JFNK) method when solving the coupled nonlinear systems of equations arising in multiphysics applications. The MOOSE framework is written in C++, and leverages other high-quality, open-source scientific software packages such as LibMesh, Hypre, and PETSc. MOOSE uses a "hybrid parallel" model which combines both shared memory (thread-based) and distributed memory (MPI-based) parallelism to ensure efficient resource utilization on a wide range of computational hardware. MOOSE-based applications are inherently modular, which allows for simulation expansion (via coupling of additional physics modules) and the creation of multi-scale simulations. Any application developed with MOOSE supports running (in parallel) any other MOOSE-based application. Each application can be developed independently, yet easily communicate with other applications (e.g., conductivity in a slope-scale model could be a constant input, or a complete phase-field micro-structure simulation) without additional code being written. This method of development has proven effective at INL and expedites the development of sophisticated, sustainable, and collaborative simulation tools.
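The key trick JFNK exploits is that Krylov methods never need the assembled Jacobian, only its action on a vector, which a single extra residual evaluation approximates. A minimal sketch (toy residual; not MOOSE/PETSc code):

```python
# The finite-difference Jacobian action at the heart of JFNK (a sketch,
# not MOOSE/PETSc code): Krylov solvers only need J @ v, never J itself.
import numpy as np

def residual(u):
    """Toy nonlinear residual F(u) = 0 standing in for coupled physics."""
    return np.array([u[0]**2 + u[1] - 3.0,
                     u[0] + u[1]**2 - 5.0])

def jacobian_action(F, u, v, eps=1e-7):
    """Approximate J(u) @ v via a directional finite difference."""
    return (F(u + eps * v) - F(u)) / eps

u = np.array([1.0, 2.0])
v = np.array([1.0, 0.0])
print(jacobian_action(residual, u, v))   # ~[2., 1.] for this u and direction
```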
Non-destructive controlled single-particle light scattering measurement
NASA Astrophysics Data System (ADS)
Maconi, G.; Penttilä, A.; Kassamakov, I.; Gritsevich, M.; Helander, P.; Puranen, T.; Salmi, A.; Hæggström, E.; Muinonen, K.
2018-01-01
We present a set of light scattering data measured from a millimeter-sized extraterrestrial rock sample. The data were acquired by our novel scatterometer, which enables accurate multi-wavelength measurements of single-particle samples whose position and orientation are controlled by ultrasonic levitation. The measurements demonstrate a non-destructive approach to derive optical properties of small mineral samples. This enables research on valuable materials, such as those returned from space missions or rare meteorites.
Automated Cooperative Trajectories for a More Efficient and Responsive Air Transportation System
NASA Technical Reports Server (NTRS)
Hanson, Curt
2015-01-01
The NASA Automated Cooperative Trajectories project is developing a prototype avionics system that enables multi-vehicle cooperative control by integrating 1090 MHz ES ADS-B digital communications with onboard autopilot systems. This cooperative control capability will enable meta-aircraft operations for enhanced airspace utilization, as well as improved vehicle efficiency through wake surfing. This briefing describes the objectives and approach to a flight evaluation of this system planned for 2016.
NASA Astrophysics Data System (ADS)
Hussein, I.; Wilkins, M.; Roscoe, C.; Faber, W.; Chakravorty, S.; Schumacher, P.
2016-09-01
Finite Set Statistics (FISST) is a rigorous Bayesian multi-hypothesis management tool for the joint detection, classification and tracking of multi-sensor, multi-object systems. Implicit within the approach are solutions to the data association and target label-tracking problems. The full FISST filtering equations, however, are intractable. While FISST-based methods such as the PHD and CPHD filters are tractable, they require heavy moment approximations to the full FISST equations that result in a significant loss of information contained in the collected data. In this paper, we review Smart Sampling Markov Chain Monte Carlo (SSMCMC) that enables FISST to be tractable while avoiding moment approximations. We study the effect of tuning key SSMCMC parameters on tracking quality and computation time. The study is performed on a representative space object catalog with varying numbers of RSOs. The solution is implemented in the Scala computing language at the Maui High Performance Computing Center (MHPCC) facility.
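Although the smart-sampling proposal design is the paper's contribution, the underlying machinery is standard Markov chain Monte Carlo. A generic Metropolis-Hastings sketch on a toy one-dimensional posterior (not the SSMCMC implementation):

```python
# Generic Metropolis-Hastings step, illustrating the MCMC machinery that
# SSMCMC builds on (the smart-sampling proposals are beyond this sketch).
import numpy as np

def metropolis_hastings(log_post, x0, steps=5000, scale=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(steps):
        prop = x + scale * rng.normal()          # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        samples.append(x)
    return np.array(samples)

# Example: sample a unit Gaussian posterior.
samples = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0)
print(samples.mean(), samples.std())             # ~0, ~1
```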
On the formalization of multi-scale and multi-science processes for integrative biology
Díaz-Zuccarini, Vanessa; Pichardo-Almarza, César
2011-01-01
The aim of this work is to introduce the general concept of ‘Bond Graph’ (BG) techniques applied in the context of multi-physics and multi-scale processes. BG modelling has a natural place in these developments. BGs are inherently coherent as the relationships defined between the ‘elements’ of the graph are strictly defined by causality rules and power (energy) conservation. BGs clearly show how power flows between components of the systems they represent. The ‘effort’ and ‘flow’ variables enable bidirectional information flow in the BG model. When the power level of a system is low, BGs degenerate into signal flow graphs in which information is mainly one-dimensional and power is minimal, i.e. they find a natural limitation when dealing with populations of individuals or purely kinetic models, as the concept of energy conservation in these systems is no longer relevant. The aim of this work is twofold: on the one hand, we will introduce the general concept of BG techniques applied in the context of multi-science and multi-scale models and, on the other hand, we will highlight some of the most promising features in the BG methodology by comparing with examples developed using well-established modelling techniques/software that could suggest developments or refinements to the current state-of-the-art tools, by providing a consistent framework from a structural and energetic point of view. PMID:22670211
Scale-dependent intrinsic entropies of complex time series.
Yeh, Jia-Rong; Peng, Chung-Kang; Huang, Norden E
2016-04-13
Multi-scale entropy (MSE) was developed as a measure of complexity for complex time series, and it has been applied widely in recent years. The MSE algorithm is based on the assumption that biological systems possess the ability to adapt and function in an ever-changing environment, and these systems need to operate across multiple temporal and spatial scales, such that their complexity is also multi-scale and hierarchical. Here, we present a systematic approach to apply the empirical mode decomposition algorithm, which can detrend time series on various time scales, prior to analysing a signal's complexity by measuring the irregularity of its dynamics on multiple time scales. Simulated time series of fractal Gaussian noise and human heartbeat time series were used to study the performance of this new approach. We show that our method can successfully quantify the fractal properties of the simulated time series and can accurately distinguish modulations in human heartbeat time series in health and disease.
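The first stage of the standard MSE procedure, which the proposed EMD-based detrending replaces, is coarse-graining the series at each scale factor. A minimal sketch (standard coarse-graining only; not the authors' new algorithm):

```python
# Coarse-graining step of standard multi-scale entropy (the baseline that the
# paper's EMD-based detrending extends; not the authors' new algorithm).
import numpy as np

def coarse_grain(x: np.ndarray, scale: int) -> np.ndarray:
    """Average consecutive non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(0)
signal = rng.normal(size=1000)             # stand-in for a heartbeat series
for tau in (1, 2, 5, 10):
    y = coarse_grain(signal, tau)
    print(tau, len(y), round(y.std(), 3))  # white noise: std falls ~1/sqrt(tau)
```

Sample entropy is then computed on each coarse-grained series to build the entropy-versus-scale curve.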
Daniel Murphy; Carina Wyborn; Laurie Yung; Daniel R. Williams; Cory Cleveland; Lisa Eby; Solomon Dobrowski; Erin Towler
2016-01-01
Current projections of future climate change foretell potentially transformative ecological changes that threaten communities globally. Using two case studies from the United States Intermountain West, this article highlights the ways in which a better articulation between theory and methods in research design can generate proactive applied tools that enable...
Delmotte, Sylvestre; Lopez-Ridaura, Santiago; Barbier, Jean-Marc; Wery, Jacques
2013-11-15
Evaluating the impacts of the development of alternative agricultural systems, such as organic or low-input cropping systems, in the context of an agricultural region requires the use of specific tools and methodologies. They should allow a prospective (using scenarios), multi-scale (taking into account the field, farm and regional level), integrated (notably multicriteria) and participatory assessment, abbreviated PIAAS (for Participatory Integrated Assessment of Agricultural System). In this paper, we compare the possible contribution to PIAAS of three modeling approaches i.e. Bio-Economic Modeling (BEM), Agent-Based Modeling (ABM) and statistical Land-Use/Land Cover Change (LUCC) models. After a presentation of each approach, we analyze their advantages and drawbacks, and identify their possible complementarities for PIAAS. Statistical LUCC modeling is a suitable approach for multi-scale analysis of past changes and can be used to start discussion about the futures with stakeholders. BEM and ABM approaches have complementary features for scenarios assessment at different scales. While ABM has been widely used for participatory assessment, BEM has been rarely used satisfactorily in a participatory manner. On the basis of these results, we propose to combine these three approaches in a framework targeted to PIAAS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamann, S., E-mail: hamann@inp-greifswald.de; Röpcke, J.; Börner, K.
2015-12-15
A laboratory scale plasma nitriding monitoring reactor (PLANIMOR) has been designed to study the basics of active screen plasma nitriding (ASPN) processes. PLANIMOR consists of a tube reactor vessel, made of borosilicate glass, enabling optical emission spectroscopy (OES) and infrared absorption spectroscopy. The linear setup of the reactor's electrode system has the advantage that the diagnostic approaches can be applied to each part of the plasma process separately. Furthermore, possible changes of the electrical field and of the heat generation, as they could appear in down-scaled cylindrical ASPN reactors, are avoided. PLANIMOR has been used for the nitriding of steel samples, achieving similar results as in an industrial scale ASPN reactor. A compact spectrometer using an external cavity quantum cascade laser combined with an optical multi-pass cell has been applied for the detection of molecular reaction products. This allowed the determination of the concentrations of four stable molecular species (CH₄, C₂H₂, HCN, and NH₃). With the help of OES, the rotational temperature of the screen plasma could be determined.
A self-scaling, distributed information architecture for public health, research, and clinical care.
McMurry, Andrew J; Gilbert, Clint A; Reis, Ben Y; Chueh, Henry C; Kohane, Isaac S; Mandl, Kenneth D
2007-01-01
This study sought to define a scalable architecture to support the National Health Information Network (NHIN). This architecture must concurrently support a wide range of public health, research, and clinical care activities. The architecture fulfils five desiderata: (1) adopt a distributed approach to data storage to protect privacy, (2) enable strong institutional autonomy to engender participation, (3) provide oversight and transparency to ensure patient trust, (4) allow variable levels of access according to investigator needs and institutional policies, (5) define a self-scaling architecture that encourages voluntary regional collaborations that coalesce to form a nationwide network. Our model has been validated by a large-scale, multi-institution study involving seven medical centers for cancer research. It is the basis of one of four open architectures developed under funding from the Office of the National Coordinator for Health Information Technology, fulfilling the biosurveillance use case defined by the American Health Information Community. The model supports broad applicability for regional and national clinical information exchanges. This model shows the feasibility of an architecture wherein the requirements of care providers, investigators, and public health authorities are served by a distributed model that grants autonomy, protects privacy, and promotes participation.
Semantic Representation and Scale-Up of Integrated Air Traffic Management Data
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Ranjan, Shubha; Wei, Mei Y.; Eshow, Michelle M.
2016-01-01
Each day, the global air transportation industry generates a vast amount of heterogeneous data from air carriers, air traffic control providers, and secondary aviation entities handling baggage, ticketing, catering, fuel delivery, and other services. Generally, these data are stored in isolated data systems, separated from each other by significant political, regulatory, economic, and technological divides. These realities aside, integrating aviation data into a single, queryable, big data store could enable insights leading to major efficiency, safety, and cost advantages. In this paper, we describe an implemented system for combining heterogeneous air traffic management data using semantic integration techniques. The system transforms data from its original disparate source formats into a unified semantic representation within an ontology-based triple store. Our initial prototype stores only a small sliver of air traffic data covering one day of operations at a major airport. The paper also describes our analysis of difficulties ahead as we prepare to scale up data storage to accommodate successively larger quantities of data -- eventually covering all US commercial domestic flights over an extended multi-year timeframe. We review several approaches to mitigating scale-up related query performance concerns.
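The core idea, transforming heterogeneous records into triples and querying them uniformly, can be sketched with rdflib; the ontology namespace, class, and properties below are invented for illustration and are not the paper's actual ontology:

```python
# Hypothetical sketch of the triple-store idea using rdflib (not the paper's
# actual ontology or system); every name below is made up for illustration.
from rdflib import Graph, Literal, Namespace, RDF

ATM = Namespace("http://example.org/atm#")   # hypothetical ATM ontology
g = Graph()

flight = ATM["flight/UA123-20160401"]
g.add((flight, RDF.type, ATM.Flight))
g.add((flight, ATM.departureAirport, Literal("SFO")))
g.add((flight, ATM.gateDelayMinutes, Literal(12)))

# SPARQL over the unified representation, regardless of the source format.
q = """
SELECT ?f ?d WHERE {
  ?f a <http://example.org/atm#Flight> ;
     <http://example.org/atm#gateDelayMinutes> ?d .
}"""
for row in g.query(q):
    print(row.f, row.d)
```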
USDA-ARS?s Scientific Manuscript database
A continuous monitoring of daily evapotranspiration (ET) at field scale can be achieved by combining thermal infrared remote sensing data information from multiple satellite platforms. Here, an integrated approach to field scale ET mapping is described, combining multi-scale surface energy balance e...
Patterson, James J; Smith, Carl; Bellamy, Jennifer
2013-10-15
Nonpoint source (NPS) water pollution in catchments is a 'wicked' problem that threatens water quality, water security, ecosystem health and biodiversity, and thus the provision of ecosystem services that support human livelihoods and wellbeing from local to global scales. However, it is a difficult problem to manage because water catchments are linked human and natural systems that are complex, dynamic, multi-actor, and multi-scalar in nature. This in turn raises questions about understanding and influencing change across multiple levels of planning, decision-making and action. A key challenge in practice is enabling implementation of local management action, which can be influenced by a range of factors across multiple levels. This paper reviews and synthesises important 'enabling' capacities that can influence implementation of local management action, and develops a conceptual framework for understanding and analysing these in practice. Important enabling capacities identified include: history and contingency; institutional arrangements; collaboration; engagement; vision and strategy; knowledge building and brokerage; resourcing; entrepreneurship and leadership; and reflection and adaptation. Furthermore, local action is embedded within multi-scalar contexts and therefore, is highly contextual. The findings highlight the need for: (1) a systemic and integrative perspective for understanding and influencing change for managing the wicked problem of NPS water pollution; and (2) 'enabling' social and institutional arenas that support emergent and adaptive management structures, processes and innovations for addressing NPS water pollution in practice. These findings also have wider relevance to other 'wicked' natural resource management issues facing similar implementation challenges. Copyright © 2013 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kucharik, M.; Scovazzi, Guglielmo; Shashkov, Mikhail Jurievich
Hourglassing is a well-known pathological numerical artifact affecting the robustness and accuracy of Lagrangian methods. There exist a large number of hourglass control/suppression strategies. In the community of the staggered compatible Lagrangian methods, the approach of sub-zonal pressure forces is among the most widely used. However, this approach is known to add numerical strength to the solution, which can cause potential problems in certain types of simulations, for instance in simulations of various instabilities. To avoid this complication, we have adapted the multi-scale residual-based stabilization typically used in the finite element approach for staggered compatible framework. In this study, wemore » describe two discretizations of the new approach and demonstrate their properties and compare with the method of sub-zonal pressure forces on selected numerical problems.« less
Kucharik, M.; Scovazzi, Guglielmo; Shashkov, Mikhail Jurievich; ...
2017-10-28
Hourglassing is a well-known pathological numerical artifact affecting the robustness and accuracy of Lagrangian methods. There exist a large number of hourglass control/suppression strategies. In the community of the staggered compatible Lagrangian methods, the approach of sub-zonal pressure forces is among the most widely used. However, this approach is known to add numerical strength to the solution, which can cause potential problems in certain types of simulations, for instance in simulations of various instabilities. To avoid this complication, we have adapted the multi-scale residual-based stabilization typically used in the finite element approach for staggered compatible framework. In this study, wemore » describe two discretizations of the new approach and demonstrate their properties and compare with the method of sub-zonal pressure forces on selected numerical problems.« less
Lessons Learned while Exploring Cloud-Native Architectures for NASA EOSDIS Applications and Systems
NASA Astrophysics Data System (ADS)
Pilone, D.
2016-12-01
As new, high data rate missions begin collecting data, NASA's Earth Observing System Data and Information System (EOSDIS) archive is projected to grow roughly 20x, to over 300 PB, by 2025. To prepare for the dramatic increase in data and enable broad scientific inquiry into larger time series and datasets, NASA has been exploring the impact of applying cloud technologies throughout EOSDIS. In this talk we will provide an overview of NASA's prototyping and lessons learned in applying cloud architectures to: highly scalable and extensible ingest and archive of EOSDIS data; going "all-in" on cloud-based application architectures, including "serverless" data processing pipelines, and evaluating approaches to vendor lock-in; rethinking data distribution and approaches to analysis in a cloud environment; and incorporating and enforcing security controls while minimizing the barrier for research efforts to deploy to NASA-compliant, operational environments. NASA's Earth Observing System (EOS) is a coordinated series of satellites for long term global observations. EOSDIS is a multi-petabyte-scale archive of environmental data that supports global climate change research by providing end-to-end services from EOS instrument data collection to science data processing to full access to EOS and other earth science data. On a daily basis, the EOSDIS ingests, processes, archives and distributes over 3 terabytes of data from NASA's Earth Science missions, representing over 6000 data products spanning a range of science disciplines. EOSDIS has continually evolved to improve the discoverability, accessibility, and usability of high-impact NASA data across its multi-petabyte-scale archive of Earth science data products.
Multi-source Geospatial Data Analysis with Google Earth Engine
NASA Astrophysics Data System (ADS)
Erickson, T.
2014-12-01
The Google Earth Engine platform is a cloud computing environment for data analysis that combines a public data catalog with a large-scale computational facility optimized for parallel processing of geospatial data. The data catalog is a multi-petabyte archive of georeferenced datasets that include images from Earth observing satellite and airborne sensors (examples: USGS Landsat, NASA MODIS, USDA NAIP), weather and climate datasets, and digital elevation models. Earth Engine supports both a just-in-time computation model that enables real-time preview and debugging during algorithm development for open-ended data exploration, and a batch computation mode for applying algorithms over large spatial and temporal extents. The platform automatically handles many traditionally-onerous data management tasks, such as data format conversion, reprojection, and resampling, which facilitates writing algorithms that combine data from multiple sensors and/or models. Although the primary use of Earth Engine, to date, has been the analysis of large Earth observing satellite datasets, the computational platform is generally applicable to a wide variety of use cases that require large-scale geospatial data analyses. This presentation will focus on how Earth Engine facilitates the analysis of geospatial data streams that originate from multiple separate sources (and often communities) and how it enables collaboration during algorithm development and data exploration. The talk will highlight current projects/analyses that are enabled by this functionality. https://earthengine.google.org
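A minimal sketch of what such a multi-source analysis looks like in the Earth Engine Python API (the dataset IDs, band names, and the NDVI/elevation combination are illustrative assumptions, not from the talk; running it requires authenticated access):

```python
# A minimal Earth Engine Python API sketch of a multi-source analysis
# (dataset IDs and band names are assumptions; check the current catalog).
import ee
ee.Initialize()  # assumes prior authentication with the EE service

# Median Landsat 8 composite over one year.
l8 = (ee.ImageCollection("LANDSAT/LC08/C02/T1_TOA")
        .filterDate("2014-01-01", "2014-12-31"))
composite = l8.median()

ndvi = composite.normalizedDifference(["B5", "B4"])   # NIR, red bands
highlands = ee.Image("USGS/SRTMGL1_003").gt(1000)     # elevation > 1000 m

# Combine sensors/models in one expression; computation runs server-side.
masked_ndvi = ndvi.updateMask(highlands)
print(masked_ndvi.getInfo()["bands"])
```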
Anticipating flash-floods: Multi-scale aspects of the social response
NASA Astrophysics Data System (ADS)
Lutoff, Céline; Creutin, Jean-Dominique; Ruin, Isabelle; Borga, Marco
2016-10-01
This paper explores the anticipation phase before a flash flood, corresponding to the time between the first climatic signs and the peak flow. We focus the analysis on people's behaviors, observing how they use this period to organize themselves to face the event. The analysis is made through the definition of three specific scales: the timeliness scale, an analytical scale of anticipatory actions, and the scale of the human response network. A cross-scale and cross-level analysis makes it possible to define different phases in the anticipation period, in which different kinds of environmental precursors are mobilized by the actors in order to make sense of the situation and adapt. Three main points deserve attention: firstly, the concepts of timeliness, anticipatory-action, and crisis-network scales make it possible to understand differently what happens both physically and socially during an extreme event; secondly, analyzing the precursors shows that each level of the crisis network uses different kinds of signs for estimating the situation, organizing and reacting; thirdly, there is potential for improving observation of both social and physical processes at different scales, for verifying the theory of the anticipatory phases.
NASA Astrophysics Data System (ADS)
Pritchard, M. S.; Kooperman, G. J.; Zhao, Z.; Wang, M.; Russell, L. M.; Somerville, R. C.; Ghan, S. J.
2011-12-01
Evaluating the fidelity of new aerosol physics in climate models is confounded by uncertainties in source emissions, systematic error in cloud parameterizations, and inadequate sampling of long-range plume concentrations. To explore the degree to which cloud parameterizations distort aerosol processing and scavenging, the Pacific Northwest National Laboratory (PNNL) Aerosol-Enabled Multi-Scale Modeling Framework (AE-MMF), a superparameterized branch of the Community Atmosphere Model Version 5 (CAM5), is applied to represent the unusually active and well sampled North American wildfire season in 2004. In the AE-MMF approach, the evolution of double moment aerosols in the exterior global resolved scale is linked explicitly to convective statistics harvested from an interior cloud resolving scale. The model is configured in retroactive nudged mode to observationally constrain synoptic meteorology, and Arctic wildfire activity is prescribed at high space/time resolution using data from the Global Fire Emissions Database. Comparisons against standard CAM5 bracket the effect of superparameterization to isolate the role of capturing rainfall intermittency on the bulk characteristics of 2004 Arctic plume transport. Ground based lidar and in situ aircraft wildfire plume constraints from the International Consortium for Atmospheric Research on Transport and Transformation field campaign are used as a baseline for model evaluation.
Bouayed, Naila; Dietrich, Nicolas; Lafforgue, Christine; Lee, Chung-Hak; Guigui, Christelle
2016-01-01
Quorum Quenching (QQ) has been developed over the last few years to overcome practical issues related to membrane biofouling, which is currently the major difficulty thwarting the extensive development of membrane bioreactors (MBRs). QQ is the disruption of Quorum Sensing (QS), cell-to-cell communication enabling the bacteria to harmonize their behavior. The production of biofilm, which is recognized as a major part of the biocake formed on a membrane surface, and which leads to biofouling, has been found to be one of the bacterial behaviors controlled by QS. Since the enzymatic disruption of QS was reported to be efficient as a membrane biofouling mitigation technique in MBRs, the application of QQ to lab-scale MBRs has been the subject of much research using different approaches under different operating conditions. This paper gives an overview of the effectiveness of QQ in mitigating membrane biofouling in MBRs. It is based on the results of previous studies, using two microbial strains, Rhodococcus sp. BH4 and Pseudomonas sp. 1A1. The effect of bacterial QQ on the physical phenomena of the MBR process is analyzed, adopting an original multi-scale approach. Finally, the potential influence of the MBR operating conditions on QQ effectiveness is discussed. PMID:27983578
Multi-scale image segmentation and numerical modeling in carbonate rocks
NASA Astrophysics Data System (ADS)
Alves, G. C.; Vanorio, T.
2016-12-01
Numerical methods based on computational simulations can be an important tool in estimating the physical properties of rocks. These can complement experimental results, especially when time constraints and sample availability are a problem. However, computational models created at different scales can yield conflicting results with respect to the physical laboratory. This problem is exacerbated in carbonate rocks due to their heterogeneity at all scales. We developed a multi-scale approach performing segmentation of the rock images and numerical modeling across several scales, accounting for those heterogeneities. As a first step, we measured the porosity and the elastic properties of a group of carbonate samples with varying micrite content. Then, samples were imaged by Scanning Electron Microscope (SEM) as well as optical microscope at different magnifications. We applied three different image segmentation techniques to create numerical models from the SEM images and performed numerical simulations of the elastic wave equation. Our results show that a multi-scale approach can efficiently account for micro-porosities in tight micrite-supported samples, yielding acoustic velocities comparable to those obtained experimentally. Nevertheless, in high-porosity samples characterized by a larger grain/micrite ratio, results show that SEM-scale images tend to overestimate velocities, mostly due to their inability to capture macro- and/or intragranular porosity. This suggests that, for high-porosity carbonate samples, optical microscope images are better suited for numerical simulations.
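One elementary building block of such a workflow, segmenting an image into pore and solid before building a numerical model, can be sketched as follows (Otsu thresholding of a synthetic stand-in image; the study's three actual segmentation techniques are not specified here):

```python
# One building block of an image-to-model workflow: threshold segmentation
# of a synthetic stand-in "SEM image" into pore vs. solid, plus a porosity
# estimate. Not the study's actual segmentation techniques.
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(2)
# Synthetic bimodal image: dark pores (~0.2) in a brighter matrix (~0.7).
sem = np.where(rng.random((256, 256)) < 0.15,
               rng.normal(0.2, 0.05, (256, 256)),
               rng.normal(0.7, 0.05, (256, 256)))

t = threshold_otsu(sem)
pores = sem < t                  # binary pore mask
porosity = pores.mean()          # pore fraction of the imaged area
print(f"threshold={t:.2f}, estimated porosity={porosity:.3f}")
```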
Patterns of Risk Using an Integrated Spatial Multi-Hazard Model (PRISM Model)
Multi-hazard risk assessment has long centered on small scale needs, whereby a single community or group of communities’ exposures are assessed to determine potential mitigation strategies. While this approach has advanced the understanding of hazard interactions, it is li...
NASA Astrophysics Data System (ADS)
Haavisto, Sanna; Cardona, Maria J.; Salmela, Juha; Powell, Robert L.; McCarthy, Michael J.; Kataja, Markku; Koponen, Antti I.
2017-11-01
A hybrid multi-scale velocimetry method utilizing Doppler optical coherence tomography in combination with either magnetic resonance imaging or ultrasound velocity profiling is used to investigate pipe flow of four rheologically different working fluids under varying flow regimes. These fluids include water, an aqueous xanthan gum solution, a softwood fiber suspension, and a microfibrillated cellulose suspension. The measurement setup enables not only the analysis of the rheological (bulk) behavior of the studied fluid but also gives simultaneous information on its wall-layer dynamics, both of which are needed for analyzing and solving practical fluid flow-related problems. Preliminary novel results on the rheological and boundary-layer flow properties of the working fluids are reported, and the potential of the hybrid measurement setup is demonstrated.
Boos, J; Meineke, A; Rubbert, C; Heusch, P; Lanzman, R S; Aissa, J; Antoch, G; Kröpil, P
2016-03-01
To implement automated CT dose data monitoring using the DICOM-Structured Report (DICOM-SR) in order to monitor dose-related CT data with regard to national diagnostic reference levels (DRLs). We used a novel in-house co-developed software tool based on the DICOM-SR to automatically monitor dose-related data from CT examinations. The DICOM-SR for each CT examination performed between 09/2011 and 03/2015 was automatically anonymized and sent from the CT scanners to a cloud server. Data were automatically analyzed in accordance with body region, patient age and the corresponding DRLs for volumetric computed tomography dose index (CTDIvol) and dose length product (DLP). Data from 36,523 examinations (131,527 scan series) performed on three different CT scanners and one PET/CT were analyzed. The overall mean CTDIvol and DLP were 51.3% and 52.8% of the national DRLs, respectively. CTDIvol and DLP reached 43.8% and 43.1% of the national DRLs for abdominal CT (n=10,590), 66.6% and 69.6% for cranial CT (n=16,098), and 37.8% and 44.0% for chest CT (n=10,387), respectively. Overall, the CTDIvol exceeded national DRLs in 1.9% of the examinations, while the DLP exceeded national DRLs in 2.9% of the examinations. Between different CT protocols for the same body region, radiation exposure varied by up to 50% of the DRLs. The implemented cloud-based CT dose monitoring based on the DICOM-SR enables automated benchmarking against national DRLs. Overall, the local dose exposure from CT reached approximately 50% of these DRLs, indicating that updated as well as protocol-specific DRLs are desirable. The cloud-based approach enables multi-center dose monitoring and offers great potential to further optimize radiation exposure in radiological departments. • The newly developed software based on the DICOM-Structured Report enables large-scale cloud-based CT dose monitoring • The implemented software solution enables automated benchmarking against national DRLs • The local radiation exposure from CT reached approximately 50 % of the national DRLs • The cloud-based approach offers great potential for multi-center dose analysis. © Georg Thieme Verlag KG Stuttgart · New York.
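The benchmarking logic described above is straightforward to sketch. The following Python fragment compares per-examination CTDIvol and DLP values against a DRL table; the DRL numbers are placeholders rather than the official national values, and the record layout is hypothetical, not the authors' DICOM-SR schema.

DRLS = {  # body region -> (CTDIvol DRL in mGy, DLP DRL in mGy*cm); placeholders
    "cranial": (65.0, 950.0),
    "chest":   (12.0, 400.0),
    "abdomen": (20.0, 900.0),
}

def benchmark(exams):
    """exams: iterable of dicts with keys 'region', 'ctdi_vol', 'dlp'."""
    exams = list(exams)
    exceed = 0
    for e in exams:
        drl_ctdi, drl_dlp = DRLS[e["region"]]
        frac_ctdi = e["ctdi_vol"] / drl_ctdi
        frac_dlp = e["dlp"] / drl_dlp
        if frac_ctdi > 1.0 or frac_dlp > 1.0:
            exceed += 1
        print(f"{e['region']}: {100 * frac_ctdi:.1f}% of CTDIvol DRL, "
              f"{100 * frac_dlp:.1f}% of DLP DRL")
    print(f"{exceed} of {len(exams)} examinations exceeded a DRL")

benchmark([{"region": "chest", "ctdi_vol": 5.1, "dlp": 180.0},
           {"region": "cranial", "ctdi_vol": 68.0, "dlp": 990.0}])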
Catley, Christina; McGregor, Carolyn; Percival, Jennifer; Curry, Joanne; James, Andrew
2008-01-01
This paper presents a multi-dimensional approach to knowledge translation, enabling results obtained from a survey evaluating the uptake of Information Technology within Neonatal Intensive Care Units to be translated into knowledge, in the form of health informatics capacity audits. Survey data, which span multiple roles, patient care scenarios, levels, and hospitals, are translated into patient journey models using a structured data modeling approach. The data model is defined such that users can develop queries to generate patient journey models based on the pre-defined Patient Journey Model architecture (PaJMa). PaJMa models are then analyzed to build capacity audits. Capacity audits offer a sophisticated view of health informatics usage, providing not only details of what IT solutions a hospital utilizes, but also answering the questions when, how and why: by determining when the IT solutions are integrated into the patient journey, how they support the patient information flow, and why they improve the patient journey.
Multi-focal multiphoton lithography.
Ritschdorff, Eric T; Nielson, Rex; Shear, Jason B
2012-03-07
Multiphoton lithography (MPL) provides unparalleled capabilities for creating high-resolution, three-dimensional (3D) materials from a broad spectrum of building blocks and with few limitations on geometry, qualities that have been key to the design of chemically, mechanically, and biologically functional microforms. Unfortunately, the reliance of MPL on laser scanning limits the speed at which fabrication can be performed, making it impractical in many instances to produce large-scale, high-resolution objects such as complex micromachines and 3D microfluidics. Previously, others have demonstrated the possibility of using multiple laser foci to perform MPL simultaneously at numerous sites in parallel, but the use of a stage-scanning system to specify fabrication coordinates resulted in the production of identical features at each focal position. As a more general solution to this bottleneck, we demonstrate here the feasibility of performing multi-focal MPL using a dynamic mask to differentially modulate foci, an approach that enables each fabrication site to create independent (uncorrelated) features within a larger, integrated microform. In this proof-of-concept study, two simultaneously scanned foci produced the expected two-fold decrease in fabrication time, and this approach could be readily extended to many scanning foci by using a more powerful laser. Finally, we show that the use of multiple foci in MPL can be exploited to assign heterogeneous properties (such as differential swelling) to micromaterials at distinct positions within a fabrication zone.
Situating Green Infrastructure in Context: A Framework for Adaptive Socio-Hydrology in Cities.
Schifman, L A; Herrmann, D L; Shuster, W D; Ossola, A; Garmestani, A; Hopton, M E
2017-12-01
Management of urban hydrologic processes using green infrastructure (GI) has largely focused on stormwater management. Thus, design and implementation of GI usually rely on physical site characteristics and local rainfall patterns, and do not typically account for human or social dimensions. This traditional approach leads to highly centralized stormwater management in a disconnected urban landscape, and can deemphasize the additional benefits that GI offers, such as increased property value, greenspace aesthetics, heat island amelioration, carbon sequestration, and habitat for biodiversity. We propose a Framework for Adaptive Socio-Hydrology (FrASH) in which GI planning and implementation moves from a purely hydrology-driven perspective to an integrated socio-hydrological approach. This allows for an iterative, multifaceted decision-making process that enables a network of stakeholders to collaboratively set a dynamic, context-guided project plan for the installation of GI, rather than a 'one-size-fits-all' installation. We explain how different sectors (e.g., governance, non-governmental organizations, academia, and industry) can create a connected network of organizations that work towards a common goal. Through a graphical Chambered Nautilus model, FrASH is experimentally applied to contrasting GI case studies and shows that this multi-stakeholder, connected, decentralized network with a co-evolving decision-making project plan results in enhanced multi-functionality, potentially allowing for the management of resilience in urban systems at multiple scales.
Prytkova, Vera; Heyden, Matthias; Khago, Domarin; Freites, J Alfredo; Butts, Carter T; Martin, Rachel W; Tobias, Douglas J
2016-08-25
We present a novel multi-conformation Monte Carlo simulation method that enables the modeling of protein-protein interactions and aggregation in crowded protein solutions. This approach is relevant to a molecular-scale description of realistic biological environments, including the cytoplasm and the extracellular matrix, which are characterized by high concentrations of biomolecular solutes (e.g., 300-400 mg/mL for proteins and nucleic acids in the cytoplasm of Escherichia coli). Simulation of such environments necessitates the inclusion of a large number of protein molecules. Therefore, computationally inexpensive methods, such as rigid-body Brownian dynamics (BD) or Monte Carlo simulations, can be particularly useful. However, as we demonstrate herein, the rigid-body representation typically employed in simulations of many-protein systems gives rise to certain artifacts in protein-protein interactions. Our approach allows us to incorporate molecular flexibility in Monte Carlo simulations at low computational cost, thereby eliminating ambiguities arising from structure selection in rigid-body simulations. We benchmark and validate the methodology using simulations of hen egg white lysozyme in solution, a well-studied system for which extensive experimental data, including osmotic second virial coefficients, small-angle scattering structure factors, and multiple structures determined by X-ray and neutron crystallography and solution NMR, as well as rigid-body BD simulation results, are available for comparison.
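As an illustration of the move set such a method implies, here is a minimal Python sketch of a Monte Carlo step that mixes rigid-body translations with swaps among precomputed conformers under Metropolis acceptance. The energy function and parameters are toy stand-ins, not the interaction model of the paper.

import math, random

def metropolis(delta_e, kT=0.593):           # kT in kcal/mol near 298 K
    return delta_e <= 0 or random.random() < math.exp(-delta_e / kT)

def mc_step(state, n_conformers, energy):
    """state: dict with 'pos' (x, y, z) and 'conf' (conformer index)."""
    trial = dict(state)
    if random.random() < 0.5:                # rigid-body translation move
        trial["pos"] = tuple(x + random.uniform(-0.5, 0.5) for x in state["pos"])
    else:                                    # internal-flexibility move
        trial["conf"] = random.randrange(n_conformers)
    return trial if metropolis(energy(trial) - energy(state)) else state

# Toy usage: one particle in a harmonic well with two conformer energies.
conf_energy = [0.0, 1.0]
energy = lambda s: sum(x * x for x in s["pos"]) + conf_energy[s["conf"]]
state = {"pos": (1.0, 0.0, 0.0), "conf": 0}
for _ in range(1000):
    state = mc_step(state, len(conf_energy), energy)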
NASA Astrophysics Data System (ADS)
Bye, B. L.; Godøy, Ø.
2014-12-01
Environmental and climate changes are important elements of our global challenges. They are observed at a global scale and in particular in the Arctic. In order to give better estimates of future changes, the Arctic has to be monitored and analyzed by a multi-disciplinary observation system that will improve Earth System Models. The best chance to achieve significant results within a relatively short time frame is found in regions with a large natural climate gradient, and where processes sensitive to the expected changes are particularly important. Svalbard and the surrounding ocean areas fulfil all these criteria. The vision for SIOS is to be a regional observational system for long-term acquisition and proliferation of fundamental knowledge on global environmental change within an Earth System Science perspective in and around Svalbard. SIOS will systematically develop and implement methods for how observational networks are to be constructed. The distributed SIOS data management system (SDMS) will be implemented through a combination of technologies tailored to the multi-disciplinary nature of Arctic data. One of these technologies is the Brokering approach or "Framework". The Brokering approach provides a series of services such as discovery, access, transformation and semantics support to enable translation from one discipline or culture to another. These are exactly the challenges the SDMS will have to handle, and thus the Brokering approach is integrated in the design of the system. A description of the design strategy for the SDMS that includes the Brokering approach will be presented. The design and implementation plans for the SDMS are based on research done in the EU-funded ESFRI project SIOS, and examples of solutions for interoperable systems producing Arctic datasets and products coordinated through SIOS will be showcased. The reported experience from the SIOS brokering approach will feed into the process of developing sustainable brokering governance in the framework of the Research Data Alliance. It will also support the Global Earth Observation System of Systems (GEOSS). This is a contribution to increasing our global capacity to create interoperable systems that provide multi-disciplinary datasets and products.
NASA Astrophysics Data System (ADS)
Weltzin, J. F.
2016-12-01
Earth observations from a variety of platforms and across a range of scales are required to support research, natural resource management, and policy- and decision-making in a changing world. Integrated earth observation data provide multi-faceted information critical to decision support, vulnerability and change detection, risk assessments, early warning, and modeling, simulation and forecasting in the natural resource societal benefit area. The USA National Phenology Network (USA-NPN; www.usanpn.org) is a national-scale science and monitoring initiative focused on phenology - the study of seasonal life-cycle events such as leafing, flowering, reproduction, and migration - as a tool to understand the response of biodiversity to environmental variation and change. USA-NPN provides a hierarchical, national monitoring framework that enables other organizations to leverage the capacity of the Network for their own applications - minimizing investment and duplication of effort - while promoting interoperability and sustainability. Over the last decade, the network has focused on the development of a centralized database for in-situ (ground-based) observations of plants and animals, now with 8 million records for the period 1954-present. More recently, we have developed a workflow for the production and validation of spatially gridded phenology products based on models that couple the organismal data with climatological and meteorological data at daily time steps and relatively fine spatial resolutions (~2.5 km to 4 km). These gridded data are now ripe for integration with other modeled or earth observation gridded data, e.g., indices of drought impact or land surface reflectance. This greatly broadens the capacity to scale organismal observational data to landscapes and regions, and enables novel investigations of biophysical interactions at unprecedented scales, e.g., continental-scale migrations. Sustainability emerges from identification of stakeholder needs, segmentation of target audiences (e.g., data contributors, data consumers), documentation of all aspects of the data production and delivery process, development of collaborative partnerships, enterprise approaches to information management, and excellent customer service.
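As a sketch of how organismal and meteorological data can be coupled on such a grid, the following Python fragment accumulates growing degree days per pixel and thresholds them to predict a leaf-out day of year. The base temperature and threshold are illustrative assumptions, not the network's calibrated model parameters.

import numpy as np

def leafout_day(tmin, tmax, base=0.0, threshold=200.0):
    """tmin, tmax: arrays of shape (days, ny, nx), daily temperatures in deg C.
    Returns the 1-based day of year when accumulated GDD crosses the threshold."""
    gdd_daily = np.clip((tmin + tmax) / 2.0 - base, 0.0, None)
    agdd = np.cumsum(gdd_daily, axis=0)        # accumulated growing degree days
    crossed = agdd >= threshold
    day = crossed.argmax(axis=0) + 1           # first crossing per pixel
    day[~crossed.any(axis=0)] = -1             # threshold never reached
    return day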
Reliability-Based Control Design for Uncertain Systems
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.
2005-01-01
This paper presents a robust control design methodology for systems with probabilistic parametric uncertainty. Control design is carried out by solving a reliability-based multi-objective optimization problem where the probability of violating design requirements is minimized. Simultaneously, failure domains are optimally enlarged to enable global improvements in the closed-loop performance. To enable an efficient numerical implementation, a hybrid approach for estimating reliability metrics is developed. This approach, which integrates deterministic sampling and asymptotic approximations, greatly reduces the numerical burden associated with complex probabilistic computations without compromising the accuracy of the results. Examples using output-feedback and full-state feedback with state estimation are used to demonstrate the ideas proposed.
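A minimal sketch of the underlying reliability metric, assuming a toy first-order plant rather than the paper's examples: estimate by plain Monte Carlo the probability that a closed-loop requirement is violated over the parametric uncertainty. The paper's hybrid estimator replaces this brute-force sampling with deterministic sampling plus asymptotic approximations.

import numpy as np

rng = np.random.default_rng(0)

def failure_probability(gain, n=100_000):
    """Plant 1/(p s + 1) under proportional feedback with the given gain;
    require the closed-loop time constant p / (1 + gain) to stay below 0.2 s."""
    p = rng.normal(loc=1.0, scale=0.2, size=n)   # uncertain plant parameter
    tau = p / (1.0 + gain)
    return float(np.mean(tau > 0.2))             # probability of violation

print(failure_probability(gain=6.0))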
An Integrated Scale for Measuring an Organizational Learning System
ERIC Educational Resources Information Center
Jyothibabu, C.; Farooq, Ayesha; Pradhan, Bibhuti Bhusan
2010-01-01
Purpose: The purpose of this paper is to develop an integrated measurement scale for an organizational learning system by capturing the learning enablers, learning results and performance outcome in an organization. Design/methodology/approach: A new measurement scale was developed by integrating and modifying two existing scales, identified…
Towards a macrosystems approach for successful coastal management
Managing coastal resources for resiliency and sustainability often requires integrative, multi-disciplinary approaches across varying spatial and temporal scales to engage stakeholders and inform decision-makers. We discuss case studies integrating wetland ecology, economics, soc...
The MPLEx Protocol for Multi-omic Analyses of Soil Samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nicora, Carrie D.; Burnum-Johnson, Kristin E.; Nakayasu, Ernesto S.
Mass spectrometry (MS)-based integrated metaproteomic, metabolomic and lipidomic (multi-omic) studies are transforming our ability to understand and characterize microbial communities in environmental and biological systems. These measurements are even enabling enhanced analyses of complex soil microbial communities, which are the most complex microbial systems known to date. Multi-omic analyses, however, do have sample preparation challenges since separate extractions are typically needed for each omic study, thereby greatly amplifying the preparation time and amount of sample required. To address this limitation, a 3-in-1 method for simultaneous metabolite, protein, and lipid extraction (MPLEx) from the exact same soil sample was created by adapting a solvent-based approach. This MPLEx protocol has proven to be simple yet robust for many sample types and even when utilized for limited quantities of complex soil samples. The MPLEx method also greatly enabled the rapid multi-omic measurements needed to gain a better understanding of the members of each microbial community, while evaluating the changes taking place upon biological and environmental perturbations.
The Potential for Double-Loop Learning to Enable Landscape Conservation Efforts
NASA Astrophysics Data System (ADS)
Petersen, Brian; Montambault, Jensen; Koopman, Marni
2014-10-01
As conservation increases its emphasis on implementing change at landscape-level scales, multi-agency, cross-boundary, and multi-stakeholder networks become more important. These elements complicate traditional notions of learning. To investigate this further, we examined structures of learning in the Landscape Conservation Cooperatives (LCCs), which include the entire US and its territories, as well as parts of Canada, Mexico, and Caribbean and Pacific island states. We used semi-structured interviews, transcribed and analyzed using NVivo, as well as a charrette-style workshop to understand the difference between the original stated goals of individual LCCs and the values and purposes expressed as the collaboration matured. We suggest double-loop learning as a theoretical framework appropriate to landscape-scale conservation, recognizing that concerns about accountability are among the valid points of view that must be considered in multi-stakeholder collaborations. Methods from the social sciences and public health sectors provide insights on how such learning might be actualized.
NASA Astrophysics Data System (ADS)
Fritts, Dave; Wang, Ling; Balsley, Ben; Lawrence, Dale
2013-04-01
A number of sources contribute to intermittent small-scale turbulence in the stable boundary layer (SBL). These include Kelvin-Helmholtz instability (KHI), gravity wave (GW) breaking, and fluid intrusions, among others. Indeed, such sources arise naturally in response to even very simple "multi-scale" superpositions of larger-scale GWs and smaller-scale GWs, mean flows, or fine structure (FS) throughout the atmosphere and the oceans. We describe here results of two direct numerical simulations (DNS) of these GW-FS interactions performed at high resolution and high Reynolds number that allow exploration of these turbulence sources and the character and effects of the turbulence that arises in these flows. Results include episodic turbulence generation, a broad range of turbulence scales and intensities, PDFs of dissipation fields exhibiting quasi-log-normal and more complex behavior, local turbulent mixing, and "sheet and layer" structures in potential temperature that closely resemble high-resolution measurements. Importantly, such multi-scale dynamics differ from their larger-scale, quasi-monochromatic gravity wave or quasi-horizontally homogeneous shear flow instabilities in significant ways. The ability to quantify such multi-scale dynamics with new, very high-resolution measurements is also advancing rapidly. New in-situ sensors on small, unmanned aerial vehicles (UAVs), balloons, or tethered systems are enabling definition of SBL (and deeper) environments and turbulence structure and dissipation fields with high spatial and temporal resolution and precision. These new measurement and modeling capabilities promise significant advances in understanding small-scale instability and turbulence dynamics, in quantifying their roles in mixing, transport, and evolution of the SBL environment, and in contributing to improved parameterizations of these dynamics in mesoscale, numerical weather prediction, climate, and general circulation models. We expect such measurement and modeling capabilities to also aid in the design of new and more comprehensive future SBL measurement programs.
Scale effect challenges in urban hydrology highlighted with a distributed hydrological model
NASA Astrophysics Data System (ADS)
Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe; Ten Veldhuis, Marie-Claire
2018-01-01
Hydrological models are extensively used in urban water management, development and evaluation of future scenarios and research activities. There is a growing interest in the development of fully distributed and grid-based models. However, some complex questions related to scale effects are not yet fully understood and still remain open issues in urban hydrology. In this paper we propose a two-step investigation framework to illustrate the extent of scale effects in urban hydrology. First, fractal tools are used to highlight the scale dependence observed within distributed data input into urban hydrological models. Then an intensive multi-scale modelling work is carried out to understand scale effects on hydrological model performance. Investigations are conducted using a fully distributed and physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model is implemented at 17 spatial resolutions ranging from 100 to 5 m. Results clearly exhibit scale effect challenges in urban hydrology modelling. The applicability of fractal concepts highlights the scale dependence observed within distributed data. Patterns of geophysical data change when the size of the observation pixel changes. The multi-scale modelling investigation confirms scale effects on hydrological model performance. Results are analysed over three ranges of scales identified in the fractal analysis and confirmed through modelling. This work also discusses some remaining issues in urban hydrology modelling related to the availability of high-quality data at high resolutions, model numerical instabilities, and computation time requirements. The main findings of this paper enable the replacement of traditional methods of model calibration by innovative methods of model resolution alteration, based on the spatial data variability and the scaling of flows in urban hydrology.
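The fractal step can be illustrated with a box-counting estimate of the fractal dimension of a binary raster (for instance, an impervious-area mask input to the model); this is a generic sketch, not the specific fractal toolbox used in the paper.

import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """mask: 2-D boolean array. Returns the slope of log N(s) vs log(1/s)."""
    counts = []
    for s in sizes:
        ny, nx = mask.shape[0] // s, mask.shape[1] // s
        blocks = mask[:ny * s, :nx * s].reshape(ny, s, nx, s)
        counts.append(blocks.any(axis=(1, 3)).sum())   # occupied boxes
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

mask = np.random.rand(256, 256) > 0.7                  # stand-in raster
print(f"estimated fractal dimension: {box_counting_dimension(mask):.2f}")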
Haspel, Nurit; Geisbrecht, Brian V; Lambris, John; Kavraki, Lydia
2010-03-01
We present a novel multi-level methodology to explore and characterize the low energy landscape and the thermodynamics of proteins. Traditional conformational search methods typically explore only a small portion of the conformational space of proteins and are hard to apply to large proteins due to the large number of calculations required. In our multi-scale approach, we first provide an initial characterization of the equilibrium state ensemble of a protein using an efficient computational conformational sampling method. We then enrich the obtained ensemble by performing short Molecular Dynamics (MD) simulations using selected conformations from the ensembles as starting points. To facilitate the analysis of the results, we project the resulting conformations onto a low-dimensional landscape to efficiently focus on important interactions and examine low energy regions. This methodology provides a more extensive sampling of the low energy landscape than an MD simulation starting from a single crystal structure, as it explores multiple trajectories of the protein. This gives us a broader view of the dynamics of proteins and can help in understanding complex binding and in improving docking results. In this work, we apply the methodology to provide an extensive characterization of the bound complexes of the C3d fragment of human Complement component C3 and one of its powerful bacterial inhibitors, the inhibitory domain of Staphylococcus aureus extra-cellular fibrinogen-binding domain (Efb-C), and two of its mutants. We characterize several important interactions along the binding interface and define low free energy regions in the three complexes. Proteins 2010. © 2009 Wiley-Liss, Inc.
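The projection step can be sketched as follows, assuming the conformations are already superposed; principal component analysis of the flattened coordinates is one standard choice for such a low-dimensional landscape, though the paper's exact projection may differ.

import numpy as np

def project_2d(coords):
    """coords: array of shape (n_conformations, 3 * n_atoms), pre-aligned."""
    centered = coords - coords.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T      # coordinates on the first two PCs

ensemble = np.random.rand(500, 3 * 120)   # toy ensemble of 120-atom structures
xy = project_2d(ensemble)                 # 2-D landscape coordinates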
Christodoulidis, Argyrios; Hurtut, Thomas; Tahar, Houssem Ben; Cheriet, Farida
2016-09-01
Segmenting the retinal vessels from fundus images is a prerequisite for many CAD systems for the automatic detection of diabetic retinopathy lesions. So far, research efforts have concentrated mainly on the accurate localization of the large- to medium-diameter vessels. However, failure to detect the smallest vessels at the segmentation step can lead to false positive lesion counts in a subsequent lesion analysis stage. In this study, a new hybrid method for the segmentation of the smallest vessels is proposed. Line detection and perceptual organization techniques are combined in a multi-scale scheme. Small vessels are reconstructed from the perceptual-based approach via tracking and pixel painting. The segmentation was validated in a high-resolution fundus image database including healthy and diabetic subjects, using pixel-based as well as perceptual-based measures. The proposed method achieves an 85.06% sensitivity rate, while the original multi-scale line detection method achieves an 81.06% sensitivity rate on the corresponding images (p<0.05). The improvement in the sensitivity rate for the database is 6.47% when only the smallest vessels are considered (p<0.05). For the perceptual-based measure, the proposed method improves the detection of the vasculature by 7.8% over the original multi-scale line detection method (p<0.05). Copyright © 2016 Elsevier Ltd. All rights reserved.
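For orientation, here is a minimal Python sketch of the basic multi-scale line-detector response the authors build on, with normalization details simplified: at each scale, the response at a pixel is the maximum over orientations of the mean intensity along a line through it, minus the mean of the surrounding window, and responses are averaged across scales.

import numpy as np
from scipy.ndimage import rotate, uniform_filter

def line_response(img, length, n_angles=12):
    """img: 2-D float array (e.g., inverted green channel of a fundus image)."""
    window_mean = uniform_filter(img, size=length)
    best = np.full(img.shape, -np.inf)
    for angle in np.linspace(0.0, 180.0, n_angles, endpoint=False):
        rot = rotate(img, angle, reshape=False, order=1)
        line_mean = uniform_filter(rot, size=(1, length))  # mean along the line
        best = np.maximum(best, rotate(line_mean, -angle, reshape=False, order=1))
    return best - window_mean

def multi_scale_response(img, lengths=(5, 9, 13)):
    return np.mean([line_response(img, L) for L in lengths], axis=0)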
One-Year stable perovskite solar cells by 2D/3D interface engineering
NASA Astrophysics Data System (ADS)
Grancini, G.; Roldán-Carmona, C.; Zimmermann, I.; Mosconi, E.; Lee, X.; Martineau, D.; Narbey, S.; Oswald, F.; de Angelis, F.; Graetzel, M.; Nazeeruddin, Mohammad Khaja
2017-06-01
Despite impressive photovoltaic performance, with power conversion efficiencies beyond 22%, perovskite solar cells are poorly stable under operation, falling far short of market requirements. Various technological approaches have been proposed to overcome the instability problem, which, while delivering appreciable incremental improvements, are still far from a market-proof solution. Here we show one-year-stable perovskite devices by engineering an ultra-stable 2D/3D (HOOC(CH2)4NH3)2PbI4/CH3NH3PbI3 perovskite junction. The 2D/3D forms an exceptional gradually organized multi-dimensional interface that yields up to 12.9% efficiency in a carbon-based architecture, and 14.6% in standard mesoporous solar cells. To demonstrate the up-scale potential of our technology, we fabricate 10 × 10 cm2 solar modules by a fully printable industrial-scale process, delivering 11.2% efficiency, stable for >10,000 h with zero loss in performance measured under controlled standard conditions. This innovative, stable and low-cost architecture will enable the timely commercialization of perovskite solar cells.
GIANT 2.0: genome-scale integrated analysis of gene networks in tissues.
Wong, Aaron K; Krishnan, Arjun; Troyanskaya, Olga G
2018-05-25
GIANT2 (Genome-wide Integrated Analysis of gene Networks in Tissues) is an interactive web server that enables biomedical researchers to analyze their proteins and pathways of interest and generate hypotheses in the context of genome-scale functional maps of human tissues. The precise actions of genes are frequently dependent on their tissue context, yet direct assay of tissue-specific protein function and interactions remains infeasible in many normal human tissues and cell types. With GIANT2, researchers can explore predicted tissue-specific functional roles of genes and reveal changes in those roles across tissues, all through interactive multi-network visualizations and analyses. Additionally, the NetWAS approach available through the server uses tissue-specific/cell-type networks predicted by GIANT2 to re-prioritize statistical associations from GWAS studies and identify disease-associated genes. GIANT2 predicts tissue-specific interactions by integrating diverse functional genomics data from over 61,400 experiments for 283 diverse tissues and cell types. GIANT2 does not require any registration or installation and is freely available for use at http://giant-v2.princeton.edu.
Interface-Resolving Simulation of Collision Efficiency of Cloud Droplets
NASA Astrophysics Data System (ADS)
Wang, Lian-Ping; Peng, Cheng; Rosa, Bogdan; Onishi, Ryo
2017-11-01
Small-scale air turbulence could enhance the geometric collision rate of cloud droplets, while large-scale air turbulence could augment the diffusional growth of cloud droplets. Air turbulence could also enhance the collision efficiency of cloud droplets. Accurate simulation of collision efficiency, however, requires capturing the multi-scale droplet-turbulence and droplet-droplet interactions, which has only been partially achieved in the recent past using the hybrid direct numerical simulation (HDNS) approach, in which a Stokes disturbance flow is assumed. The HDNS approach has two major drawbacks: (1) the short-range droplet-droplet interaction is not treated rigorously; (2) the finite-Reynolds-number correction to the collision efficiency is not included. In this talk, using two independent numerical methods, we will develop an interface-resolved simulation approach in which the disturbance flows are directly resolved numerically, combined with a rigorous lubrication correction model for near-field droplet-droplet interaction. This multi-scale approach is first used to study the effect of finite flow Reynolds numbers on the droplet collision efficiency in still air. Our simulation results show a significant finite-Re effect on collision efficiency when the droplets are of similar sizes. Preliminary results on integrating this approach into a turbulent flow laden with droplets will also be presented. This work is partially supported by the National Science Foundation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaffner, Michael
2014-06-01
The current downward trend in funding for U.S. defense systems seems to be on a collision course with the state of the practice in systems engineering, which typically results in the increased pace and scale of capabilities and resultantly increased cost of complex national defense systems. Recent advances in the state of the art in systems engineering methodology can be leveraged to address this growing challenge. The present work leverages advanced constructs and methods for early-phase conceptual design of complex systems, when committed costs are still low and management influence is still high. First, a literature review is presented of the topics relevant to this work, including approaches to the design of affordable systems, assumptions and methods of exploratory modeling, and enabling techniques to help mitigate the computational challenges involved. The types, purposes, and limits of early-phase, exploratory models are then elucidated. The RSC-based Method for Affordable Concept Selection (RMACS) is described, which comprises nine processes in the three main thrusts of information gathering, evaluation, and analysis. The method is then applied to a naval ship case example, described as the Next-Generation Combat Ship, with representational information outputs and discussions of affordability with respect to each process. The ninth process, Multi-Era Analysis (MERA), is introduced and explicated, including required and optional informational components, temporal and change-related considerations, required and optional activities involved, and the potential types of outputs from the process. The MERA process is then applied to a naval ship case example similar to that of the RMACS application, but with discrete change options added to enable a tradespace network. The seven activities of the MERA process are demonstrated, with the salient outputs of each given and discussed. Additional thoughts are presented on MERA and RMACS, and 8 distinct areas are identified for further research in the MERA process, along with a brief description of the directions that such research might take. It is concluded that the affordability of complex systems can be better enabled through a conceptual design method that incorporates MERA as well as metrics such as Multi-Attribute Expense, Max Expense, and Expense Stability. It is also found that affordability of changeable systems can be better enabled through the use of existing path-planning algorithms in efficient evaluation and analysis of long-term strategies. Finally, it is found that MERA enables the identification and analysis of path-dependent considerations related to designs, epochs, strategies, and change options, in many possible futures.
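The long-term strategy analysis can be pictured as path planning over a network whose nodes are designs and whose weighted edges are change options; a shortest-path search then yields the cheapest strategy. The following is a generic Dijkstra sketch on a toy network, not the RMACS/MERA implementation.

import heapq

def cheapest_strategy(graph, start, goal):
    """graph: {design: [(next_design, change_cost), ...]}"""
    queue, visited = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, c in graph.get(node, []):
            heapq.heappush(queue, (cost + c, nxt, path + [nxt]))
    return float("inf"), []

network = {"design_A": [("design_B", 3.0), ("design_C", 7.0)],
           "design_B": [("design_C", 2.0)]}
print(cheapest_strategy(network, "design_A", "design_C"))  # 5.0 via design_B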
Multiscale recurrence analysis of spatio-temporal data
NASA Astrophysics Data System (ADS)
Riedl, M.; Marwan, N.; Kurths, J.
2015-12-01
The description and analysis of spatio-temporal dynamics is a crucial task in many scientific disciplines. In this work, we propose a method which uses the mapogram as a similarity measure between spatially distributed data instances at different time points. The resulting similarity values of the pairwise comparison are used to construct a recurrence plot in order to benefit from established tools of recurrence quantification analysis and recurrence network analysis. In contrast to other recurrence tools for this purpose, the mapogram approach allows a specific focus on different spatial scales, which can be used in a multi-scale analysis of spatio-temporal dynamics. We illustrate this approach by application to mixed dynamics, such as traveling parallel wave fronts with additive noise, as well as to more complicated examples: pseudo-random numbers and coupled map lattices with a semi-logistic mapping rule. The complicated examples in particular show the usefulness of the multi-scale consideration for taking spatial patterns of different scales and with different rhythms into account. This mapogram approach thus promises new insights into problems of climatology, ecology, and medicine.
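The construction can be sketched in a few lines: given any similarity measure between spatial snapshots, the recurrence matrix marks pairs whose similarity exceeds a threshold. Here a simple histogram intersection stands in for the mapogram.

import numpy as np

def histogram_similarity(a, b, bins=32):
    """Crude stand-in for the mapogram: 1.0 means identical histograms."""
    ha, _ = np.histogram(a, bins=bins, range=(0.0, 1.0))
    hb, _ = np.histogram(b, bins=bins, range=(0.0, 1.0))
    return np.minimum(ha, hb).sum() / max(ha.sum(), 1)

def recurrence_plot(snapshots, threshold=0.8):
    n = len(snapshots)
    R = np.zeros((n, n), dtype=np.uint8)
    for i in range(n):
        for j in range(i, n):
            s = histogram_similarity(snapshots[i], snapshots[j])
            R[i, j] = R[j, i] = s >= threshold
    return R

frames = [np.random.rand(64, 64) for _ in range(100)]   # toy spatio-temporal data
R = recurrence_plot(frames)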
NGS2: a focal plane array upgrade for the GeMS multiple tip-tilt wavefront sensor
NASA Astrophysics Data System (ADS)
Rigaut, François; Price, Ian; d'Orgeville, Céline; Bennet, Francis; Herrald, Nick; Paulin, Nicolas; Uhlendorf, Kristina; Garrel, Vincent; Sivo, Gaetano; Montes, Vanessa; Trujillo, Chad
2016-07-01
NGS2 is an upgrade of the multi-natural-guide-star tip-tilt & plate scale wavefront sensor for GeMS (the Gemini Multi-Conjugate Adaptive Optics system). It uses a single Nüvü HNü-512 electron-multiplied CCD array that spans the entire GeMS wavefront sensor focal plane. Multiple small regions of interest are used to enable frame rates up to 800 Hz. This setup will improve the optical throughput with respect to the current wavefront sensor, as well as streamline acquisition and allow for distortion compensation.
Earth Studies Using L-band Synthetic Aperture Radar
NASA Technical Reports Server (NTRS)
Rosen, Paul A.
1999-01-01
L-band SAR has played an important role in studies of the Earth by revealing the nature of larger-scale (decimeter) surface features. JERS-1, by supplying multi-seasonal coverage of much of the Earth, has demonstrated the importance of L-band SARs. Future L-band SARs such as ALOS and LightSAR will pave the way for science missions that use SAR instruments. As technology develops to enable lower-cost SAR instruments, missions will evolve to each have a unique science focus. International coordination of multi-parameter constellations and campaigns will maximize science return.
Recent Results from NASA's Morphing Project
NASA Technical Reports Server (NTRS)
McGowan, Anna-Maria R.; Washburn, Anthony E.; Horta, Lucas G.; Bryant, Robert G.; Cox, David E.; Siochi, Emilie J.; Padula, Sharon L.; Holloway, Nancy M.
2002-01-01
The NASA Morphing Project seeks to develop and assess advanced technologies and integrated component concepts to enable efficient, multi-point adaptability in air and space vehicles. In the context of the project, the word "morphing" is defined as "efficient, multi-point adaptability" and may include macro, micro, structural and/or fluidic approaches. The project includes research on smart materials, adaptive structures, micro flow control, biomimetic concepts, optimization and controls. This paper presents an updated overview of the content of the Morphing Project including highlights of recent research results.
Alternating-Offers Protocol for Multi-issue Bilateral Negotiation in Semantic-Enabled Marketplaces
NASA Astrophysics Data System (ADS)
Ragone, Azzurra; di Noia, Tommaso; di Sciascio, Eugenio; Donini, Francesco M.
We present a semantic-based approach to multi-issue bilateral negotiation for e-commerce. We use Description Logics to model advertisements, and relations among issues as axioms in a TBox. We then introduce a logic-based alternating-offers protocol, able to handle conflicting information, that merges non-standard reasoning services in Description Logics with utility theory to find the most suitable agreements. We illustrate and motivate the theoretical framework, the logical language, and the negotiation protocol.
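A minimal sketch of the alternating-offers loop, abstracting the Description Logic machinery into a feasible predicate and using simple additive utilities; the agents, weights, and concession rule below are hypothetical stand-ins, not the paper's protocol details.

def negotiate(agents, feasible, max_rounds=20):
    """agents: two objects with propose(round), utility(offer), reserve(round)."""
    for r in range(max_rounds):
        proposer, responder = agents[r % 2], agents[(r + 1) % 2]
        offer = proposer.propose(r)
        if not feasible(offer):              # e.g., inconsistent with TBox axioms
            continue
        if responder.utility(offer) >= responder.reserve(r):
            return offer                     # agreement reached
    return None                              # negotiation failed

class SimpleAgent:
    def __init__(self, weights, fixed_offer):
        self.weights, self.fixed_offer = weights, fixed_offer
    def propose(self, r):
        return self.fixed_offer              # a fixed offer, for brevity
    def utility(self, offer):
        return sum(self.weights[k] * v for k, v in offer.items())
    def reserve(self, r):
        return 0.8 * 0.95 ** r               # concedes as rounds pass

buyer = SimpleAgent({"price": -1.0, "warranty": 1.0}, {"price": 0.3, "warranty": 0.9})
seller = SimpleAgent({"price": 1.0, "warranty": 0.2}, {"price": 0.7, "warranty": 0.4})
print(negotiate([buyer, seller], feasible=lambda offer: True))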
Wang, Ruifei; Unrean, Pornkamol; Franzén, Carl Johan
2016-01-01
High content of water-insoluble solids (WIS) is required for simultaneous saccharification and co-fermentation (SSCF) operations to reach the high ethanol concentrations that meet the techno-economic requirements of industrial-scale production. The fundamental challenges of such processes are related to the high viscosity and inhibitor content of the medium. Poor mass transfer and inhibition of the yeast lead to decreased ethanol yield, titre and productivity. In the present work, high-solid SSCF of pre-treated wheat straw was carried out by multi-feed SSCF, a fed-batch process with additions of substrate, enzymes and cells, integrated with yeast propagation and adaptation on the pre-treatment liquor. The combined feeding strategies were systematically compared and optimized using experiments and simulations. For a high-solid SSCF process of SO2-catalyzed steam pre-treated wheat straw, the boosted solubilisation of WIS achieved by loading all enzyme at the beginning of the process is crucial for increased rates of both enzymatic hydrolysis and SSCF. A kinetic model was adapted to simulate the release of sugars during separate hydrolysis as well as during SSCF. Feeding of solid substrate to reach an instantaneous WIS content of 13% (w/w) was carried out when 60% of the cellulose had been hydrolysed, according to simulation results. With this approach, accumulated WIS additions reached more than 20% (w/w) without encountering mixing problems in a standard bioreactor. Feeding fresh cells to the SSCF reactor maintained the fermentation activity, which otherwise ceased when the ethanol concentration reached 40-45 g/L. At lab scale, the optimized multi-feed SSCF produced 57 g/L ethanol in 72 h. The process was reproducible and resulted in 52 g/L ethanol at 10 m3 scale at the SP Biorefinery Demo Plant. SSCF at WIS contents up to 22% (w/w) is reproducible and scalable with the multi-feed SSCF configuration and model-aided process design. For simultaneous saccharification and fermentation, the overall efficiency relies on balanced rates of substrate feeding and conversion. Multi-feed SSCF provides the possibility to balance interdependent rates by systematic optimization of the feeding strategies. The optimization routine presented in this work can easily be adapted for optimization of other lignocellulose-based fermentation systems.
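The model-aided feeding rule can be sketched with a first-order hydrolysis model integrated in time, triggering the next substrate feed once 60% of the cellulose loaded so far has been hydrolysed. The rate constant and feed sizes are illustrative, not the fitted values of the study.

def simulate_feeding(feeds, k=0.08, dt=0.1, t_end=72.0):
    """feeds: substrate masses (g cellulose); the first is the initial charge."""
    feeds = list(feeds)
    cellulose = loaded = feeds.pop(0)
    hydrolysed, t = 0.0, 0.0
    while t < t_end:
        d = k * cellulose * dt               # first-order hydrolysis step
        cellulose -= d
        hydrolysed += d
        if feeds and hydrolysed / loaded >= 0.6:
            add = feeds.pop(0)               # feed triggered at 60% conversion
            cellulose += add
            loaded += add
            print(f"t = {t:5.1f} h: fed {add} g")
        t += dt

simulate_feeding([50.0, 30.0, 30.0])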
Entangled time in flocking: Multi-time-scale interaction reveals emergence of inherent noise
Murakami, Hisashi
2018-01-01
Collective behaviors that seem highly ordered and result in collective alignment, such as schooling by fish and flocking by birds, arise from seamless shuffling (such as super-diffusion) and bustling inside groups (such as Lévy walks). However, such noisy behavior inside groups appears to preclude collective behavior: intuitively, we expect noisy behavior to destabilize the group and break it into small subgroups, and high alignment seems to preclude the shuffling of neighbors. Although statistical modeling approaches with extrinsic noise, such as the maximum entropy approach, have provided some reasonable descriptions, they ignore the cognitive perspective of the individuals. In this paper, we try to explain how the group tendency, that is, high alignment, and highly noisy individual behavior can coexist in a single framework. The key aspect of our approach is a multi-time-scale interaction emerging from the existence of an interaction radius that reflects short-term and long-term predictions. This multi-time-scale interaction is a natural extension of the attraction and alignment concepts in many flocking models. When we apply this approach in a two-dimensional model, various flocking behaviors, such as swarming, milling, and schooling, emerge. The approach also explains the appearance of super-diffusion, Lévy walks in groups, and local equilibria. At the end of this paper, we discuss future developments, including extending our model to three dimensions.
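A minimal two-dimensional sketch of such a multi-time-scale rule (not the paper's exact model): each agent aligns with neighbors within a short-range radius, standing in for short-term prediction, and is attracted toward the centroid of neighbors within a longer-range radius, standing in for long-term prediction, with added noise.

import numpy as np

rng = np.random.default_rng(1)

def step(pos, vel, r_short=1.0, r_long=3.0, w_align=0.6, w_attract=0.1,
         noise=0.2, speed=0.05):
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    new_vel = vel.copy()
    for i in range(len(pos)):
        near, far = d[i] < r_short, d[i] < r_long
        heading = (w_align * vel[near].mean(axis=0)                # short scale
                   + w_attract * (pos[far].mean(axis=0) - pos[i])  # long scale
                   + noise * rng.standard_normal(2))
        new_vel[i] = heading / (np.linalg.norm(heading) + 1e-12)
    return pos + speed * new_vel, new_vel

pos = rng.uniform(0.0, 5.0, size=(50, 2))
vel = rng.standard_normal((50, 2))
vel /= np.linalg.norm(vel, axis=1, keepdims=True)
for _ in range(200):
    pos, vel = step(pos, vel)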
Sexual networks: measuring sexual selection in structured, polyandrous populations.
McDonald, Grant C; James, Richard; Krause, Jens; Pizzari, Tommaso
2013-03-05
Sexual selection is traditionally measured at the population level, assuming that populations lack structure. However, increasing evidence undermines this approach, indicating that intrasexual competition in natural populations often displays complex patterns of spatial and temporal structure. This complexity is due in part to the degree and mechanisms of polyandry within a population, which can influence the intensity and scale of both pre- and post-copulatory sexual competition. Attempts to measure selection at the local and global scale have been made through multi-level selection approaches. However, definitions of local scale are often based on physical proximity, providing a rather coarse measure of local competition, particularly in polyandrous populations where the local scale of pre- and post-copulatory competition may differ drastically from each other. These limitations can be solved by social network analysis, which allows us to define a unique sexual environment for each member of a population: 'local scale' competition, therefore, becomes an emergent property of a sexual network. Here, we first propose a novel quantitative approach to measure pre- and post-copulatory sexual selection, which integrates multi-level selection with information on local scale competition derived as an emergent property of networks of sexual interactions. We then use simple simulations to illustrate the ways in which polyandry can impact estimates of sexual selection. We show that for intermediate levels of polyandry, the proposed network-based approach provides substantially more accurate measures of sexual selection than the more traditional population-level approach. We argue that the increasing availability of fine-grained behavioural datasets provides exciting new opportunities to develop network approaches to study sexual selection in complex societies.
Multi-GNSS PPP-RTK: From Large- to Small-Scale Networks.
Nadarajah, Nandakumaran; Khodabandeh, Amir; Wang, Kan; Choudhury, Mazher; Teunissen, Peter J G
2018-04-03
Precise point positioning (PPP) and its integer ambiguity resolution-enabled variant, PPP-RTK (real-time kinematic), can benefit enormously from the integration of multiple global navigation satellite systems (GNSS). In such a multi-GNSS landscape, the positioning convergence time is expected to be reduced considerably as compared to the one obtained by a single-GNSS setup. It is therefore the goal of the present contribution to provide numerical insights into the role taken by the multi-GNSS integration in delivering fast and high-precision positioning solutions (sub-decimeter and centimeter levels) using PPP-RTK. To that end, we employ the Curtin PPP-RTK platform and process data-sets of GPS, BeiDou Navigation Satellite System (BDS) and Galileo in stand-alone and combined forms. The data-sets are collected by various receiver types, ranging from high-end multi-frequency geodetic receivers to low-cost single-frequency mass-market receivers. The corresponding stations form a large-scale (Australia-wide) network as well as a small-scale network with inter-station distances less than 30 km. In case of the Australia-wide GPS-only ambiguity-float setup, 90% of the horizontal positioning errors (kinematic mode) are shown to become less than five centimeters after 103 min. The stated required time is reduced to 66 min for the corresponding GPS + BDS + Galileo setup. The time is further reduced to 15 min by applying single-receiver ambiguity resolution. The outcomes are supported by the positioning results of the small-scale network.
The Impact of Large, Multi-Function/Multi-Site Competitions
2003-08-01
this approach generates larger savings and improved service quality , and is less expensive to implement. Moreover, it is a way to meet the President s...of the study is to assess the degree to which large-scale competitions completed have resulted in increased savings and service quality and decreased
A scale space feature based registration technique for fusion of satellite imagery
NASA Technical Reports Server (NTRS)
Raghavan, Srini; Cromp, Robert F.; Campbell, William C.
1997-01-01
Feature-based registration is one of the most reliable methods to register multi-sensor images (both active and passive imagery), since features are often more reliable than intensity or radiometric values. The only situation where a feature-based approach will fail is when the scene is completely homogeneous or densely textural, in which case a combination of feature- and intensity-based methods may yield better results. In this paper, we present some preliminary results of testing our scale-space feature-based registration technique, a modified version of the feature-based method developed earlier for classification of multi-sensor imagery. The proposed approach removes the sensitivity in parameter selection experienced in the earlier version, as explained later.
Automatic image enhancement based on multi-scale image decomposition
NASA Astrophysics Data System (ADS)
Feng, Lu; Wu, Zhuangzhi; Pei, Luo; Long, Xiong
2014-01-01
In image processing and computational photography, automatic image enhancement is one of the long-standing objectives. Recent automatic image enhancement methods take into account not only global semantics, such as correcting color hue and brightness imbalances, but also the local content of the image, such as human faces and the sky in landscapes. In this paper we describe a new scheme for automatic image enhancement that considers both the global semantics and the local content of the image. Our method employs a multi-scale edge-aware image decomposition approach to detect underexposed regions and enhance the detail of the salient content. The experimental results demonstrate the effectiveness of our approach compared to existing automatic enhancement methods.
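A minimal sketch of enhancement by multi-scale decomposition, with a plain Gaussian pyramid standing in for the edge-aware decomposition used in the paper: the image is split into a coarse base layer plus detail layers, the base is brightened for underexposed regions, and the details are boosted. All parameters are illustrative.

import numpy as np
from scipy.ndimage import gaussian_filter

def enhance(img, sigmas=(1.0, 4.0, 16.0), detail_gain=1.5, base_gamma=0.8):
    """img: 2-D float array scaled to [0, 1]."""
    details, current = [], img
    for s in sigmas:
        blurred = gaussian_filter(current, sigma=s)
        details.append(current - blurred)     # detail lost at this scale
        current = blurred                     # residual passed to coarser scale
    base = current ** base_gamma              # gamma-lift the base layer
    return np.clip(base + detail_gain * sum(details), 0.0, 1.0)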
Montgomery, David S.
2016-04-14
Our understanding of laser-plasma instability (LPI) physics has improved dramatically over the past two decades through advancements in experimental techniques, diagnostics, and theoretical and modeling approaches. We have progressed from single-beam experiments—ns pulses with ~kJ energy incident on hundred-micron-scale target plasmas with ~keV electron temperatures—to ones involving nearly 2 MJ energy in 192 beams onto multi-mm-scale plasmas with temperatures ~4 keV. At the same time, we have also been able to use smaller-scale laser facilities to substantially improve our understanding of LPI physics and evaluate novel approaches to their control. These efforts have led to a change in paradigm for LPI research, ushering in an era of engineering LPI to accomplish specific objectives, from tuning capsule implosion symmetry to fixing nonlinear saturation of LPI processes at acceptable levels to enable the exploration of high energy density physics in novel plasma regimes. A tutorial is provided that reviews the progress in the field from the vantage of the foundational LPI experimental results. The pedagogical framework of the simplest models of LPI will be employed, but attention will also be paid to settings where more sophisticated models are needed to understand the observations. Prospects for the application of our improved understanding for inertial fusion (both indirect- and direct-drive) and other applications will also be discussed.
Diagnosis of sustainable collaboration in health promotion – a case study
Leurs, Mariken TW; Mur-Veeman, Ingrid M; van der Sar, Rosalie; Schaalma, Herman P; de Vries, Nanne K
2008-01-01
Background Collaborations are important to health promotion in addressing multi-party problems. Interest in collaborative processes in health promotion is rising, but monitoring instruments are still lacking. The authors developed the DIagnosis of Sustainable Collaboration (DISC) model to enable comprehensive monitoring of public health collaboratives. The model focuses on opportunities and impediments for collaborative change, based on evidence from interorganizational collaboration, organizational behavior and planned organizational change. To illustrate and assess the DISC model, its 2003/2004 application to the Dutch whole-school health promotion collaboration is described. Methods The study combined quantitative research, using a cross-sectional survey, with qualitative research using personal interviews and document analysis. A DISC-based survey was sent to 55 stakeholders in whole-school health promotion in one Dutch region. The survey consisted of 22 scales with 3 to 8 items each. Only scales with a reliability score of at least 0.60 were accepted. The analysis provided for comparisons between stakeholders from education, public service and public health. The survey was followed by approaching 14 stakeholders for a semi-structured DISC-based interview. As the interviews were timed after the survey, they were also used to clarify unexpected and unclear outcomes of the survey. Additionally, a DISC-based document analysis was conducted, including minutes of meetings, project descriptions and correspondence with schools and municipalities. Results The response rate was 77% for the survey and 86% for the interviews. Significant differences between respondents from different domains were found for the following scales: organizational characteristics, change strategies, network development, project management, willingness to commit, and innovative actions and adaptations. The interviews provided a more specific picture of the state of the art of the studied collaboration regarding the DISC constructs. Conclusion The DISC model is more than just the sum of the different parameters provided in the literature on interorganizational collaboration, organizational change, networking and setting approaches. Monitoring a collaboration based on the DISC model yields insight into windows of opportunity and current impediments for collaborative change. DISC-based monitoring is a promising strategy enabling project managers and social entrepreneurs to plan change management strategies systematically.
Addressing data privacy in matched studies via virtual pooling.
Saha-Chaudhuri, P; Weinberg, C R
2017-09-07
Data confidentiality and shared use of research data are two desirable but sometimes conflicting goals in research with multi-center studies and distributed data. While a single combined dataset would be ideal for straightforward analysis, confidentiality restrictions forbid creation of a single dataset that includes covariate information of all participants. Current approaches such as aggregate data sharing, distributed regression, meta-analysis and score-based methods can have important limitations. We propose a novel application of an existing epidemiologic tool, specimen pooling, to enable confidentiality-preserving analysis of data arising from a matched case-control, multi-center design. Instead of pooling specimens prior to assay, we apply the methodology to virtually pool (aggregate) covariates within nodes. Such virtual pooling retains most of the information used in an analysis with individual data, and since individual participant data are not shared externally, within-node virtual pooling preserves data confidentiality. We show that aggregated covariate levels can be used in a conditional logistic regression model to estimate individual-level odds ratios of interest. The parameter estimates from the standard conditional logistic regression are compared to the estimates based on a conditional logistic regression model with aggregated data. The parameter estimates are shown to be similar to those without pooling and to have comparable standard errors and confidence interval coverage. Virtual data pooling can be used to maintain the confidentiality of data from a multi-center study and can be particularly useful in research with large-scale distributed data.
NASA Astrophysics Data System (ADS)
Hixson, J.; Ward, A. S.; Schmadel, N.
2015-12-01
The exchange of water and solutes across the stream-hyporheic-riparian-hillslope continuum is controlled by the interaction of dynamic hydrological processes with the underlying geological setting. Our current understanding of exchange processes is primarily based on field observations collected during baseflow conditions, with few studies considering time-variable stream-aquifer interactions during storm events. We completed ten sets of four in-stream tracer slug injections during and after a large storm event in a headwater catchment at the H.J. Andrews Experimental Forest, Oregon. The injections were performed in three adjacent 50-meter study reaches, enabling comparison of spatial heterogeneity in transport processes. Reach-scale data demonstrate apparent trends with discharge in both transient storage and long-term storage (commonly assessed via a "channel water balance"). Comparison of flowpath-scale observations from a network of monitoring wells with reach-scale observations showed that the advective timescale changed with discharge, making it difficult to infer process from simple, reach-scale tracer studies. Overall, our results highlight the opportunities and challenges for interpretation of multi-scale solute tracer data along the stream-hyporheic-riparian-hillslope continuum.
Penalized spline estimation for functional coefficient regression models.
Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z; Yu, Yan
2010-04-01
Functional coefficient regression models assume that the regression coefficients vary with some "threshold" variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called "curse of dimensionality" in multivariate nonparametric estimation. We first investigate the estimation, inference, and forecasting for functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge-regression shrinkage-type global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for a fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulations. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection in a mixed model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different smoothness for different functional coefficients, enabled by assigning a different penalty λ to each. We demonstrate the proposed approach with both simulation examples and a real data application.
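As a rough illustration of the P-spline machinery this abstract describes, the sketch below fits a ridge-type penalized B-spline to noisy data; the basis construction, knot choices, and toy data are assumptions of this illustration, not the authors' code.

```python
# Minimal P-spline sketch: B-spline design matrix plus a difference penalty,
# solved as a ridge regression.
import numpy as np
from scipy.interpolate import BSpline

def bspline_design(x, n_knots=20, degree=3):
    """B-spline design matrix on equally spaced knots spanning x."""
    xl, xr = x.min(), x.max()
    inner = np.linspace(xl, xr, n_knots)
    step = inner[1] - inner[0]
    # pad the knot vector so every x is covered by a full set of basis functions
    knots = np.concatenate([xl - step * np.arange(degree, 0, -1),
                            inner,
                            xr + step * np.arange(1, degree + 1)])
    n_basis = len(knots) - degree - 1
    B = np.empty((len(x), n_basis))
    for j in range(n_basis):
        c = np.zeros(n_basis); c[j] = 1.0
        B[:, j] = BSpline(knots, c, degree)(x)
    return B

def pspline_fit(x, y, lam=1.0, n_knots=20, degree=3, diff_order=2):
    """Ridge-type P-spline: minimize ||y - B a||^2 + lam * ||D a||^2."""
    B = bspline_design(x, n_knots, degree)
    D = np.diff(np.eye(B.shape[1]), n=diff_order, axis=0)  # difference penalty
    a = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    return B @ a, a

# toy example: smooth a noisy functional coefficient
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.3 * np.random.default_rng(0).normal(size=x.size)
yhat, _ = pspline_fit(x, y, lam=5.0)
```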
Dudley, Joel T; Listgarten, Jennifer; Stegle, Oliver; Brenner, Steven E; Parts, Leopold
2015-01-01
Advances in molecular profiling and sensor technologies are expanding the scope of personalized medicine beyond genotypes, providing new opportunities for developing richer and more dynamic multi-scale models of individual health. Recent studies demonstrate the value of scoring high-dimensional microbiome, immune, and metabolic traits from individuals to inform personalized medicine. Efforts to integrate multiple dimensions of clinical and molecular data towards predictive multi-scale models of individual health and wellness are already underway. Improved methods for mining and discovery of clinical phenotypes from electronic medical records and technological developments in wearable sensor technologies present new opportunities for mapping and exploring the critical yet poorly characterized "phenome" and "envirome" dimensions of personalized medicine. There are ambitious new projects underway to collect multi-scale molecular, sensor, clinical, behavioral, and environmental data streams from large population cohorts longitudinally to enable more comprehensive and dynamic models of individual biology and personalized health. Personalized medicine stands to benefit from inclusion of rich new sources and dimensions of data. However, realizing these improvements in care relies upon novel informatics methodologies, tools, and systems to make full use of these data to advance both the science and translational applications of personalized medicine.
NASA Astrophysics Data System (ADS)
Hagedorn, J.; Zhu, Q.; Davidson, E. A.; Castro, M.
2017-12-01
Managing resources wisely while reducing environmental impact is the backbone of agricultural sustainability. Agriculture must develop strategies to effectively reduce nutrient runoff from farmed lands. Preliminary research suggests that one such strategy is drainage water management, by which water levels are intentionally elevated following fertilization to favor subsoil denitrification and thereby reduce nitrogen leaching into groundwater and streams. Despite documented success in nitrate reduction, this best management practice (BMP) has not been widely adopted, in part because users are not aware of its potential. But before extension agencies begin promoting this practice, potential unintended consequences must be evaluated. There is a risk that by elevating water levels for the purpose of creating suitable conditions for denitrification, more potent greenhouse gases such as nitrous oxide (N2O) and methane (CH4) could be produced, in which case the practice would be swapping one form of pollution for another. A multi-scale experimental design, using soil chambers and a tower-based gradient method, was implemented in a drainage-water-managed corn-soybean system on the Eastern Shore of Maryland. Emissions, soil moisture content, and soil nitrate measurements have been collected and analyzed to evaluate differences between treatment and control plots as standard farm management practices, such as fertilization, occur. Preliminary results based on monthly sampling of transects of stationary soil chambers characterize the spatial heterogeneity of the fields and reveal detectable differences in N2O and CH4 emissions between fields. There are also significant relationships between soil moisture, soil nitrate content and N2O emissions. The tower-based gradient method with micrometeorological measurements provides high temporal resolution at the full field scale that complements the soil chamber work. This multi-scale resolution balance enables us to more accurately quantify this pollution-swapping concern and to weigh the efficacy of reducing nutrient runoff against the risk of increased greenhouse gas emissions for a BMP that has transformative potential for sustainable agriculture.
An integrated GIS-based, multi-attribute decision model deployed in a web-based platform is presented enabling an iterative, spatially explicit and collaborative analysis of relevant and available information for repurposing vacant land. The process incorporated traditional and ...
A fuzzy MCDM framework based on fuzzy measure and fuzzy integral for agile supplier evaluation
NASA Astrophysics Data System (ADS)
Dursun, Mehtap
2017-06-01
Supply chains need to be agile in order to respond quickly to changes in today's competitive environment. The success of an agile supply chain depends on the firm's ability to select the most appropriate suppliers. This study proposes a multi-criteria decision-making technique for conducting an analysis based on a multi-level hierarchical structure and fuzzy logic for the evaluation of agile suppliers. The ideal and anti-ideal solutions are taken into consideration simultaneously in the developed approach. The proposed decision approach enables decision-makers to use linguistic terms and thus reduces their cognitive burden in the evaluation process. Furthermore, a hierarchy of evaluation criteria and their related sub-criteria is employed in order to conduct a more effective analysis.
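For concreteness, the sketch below scores suppliers against ideal and anti-ideal solutions in the TOPSIS style; the crisp decision matrix and weights stand in for the paper's fuzzy linguistic terms and fuzzy-integral aggregation, so this is a simplified stand-in rather than the proposed method.

```python
# TOPSIS-style scoring: closeness to the ideal and distance from the anti-ideal.
import numpy as np

def topsis_scores(X, weights, benefit):
    """X: (suppliers x criteria) matrix; benefit[j] True if larger is better."""
    R = X / np.linalg.norm(X, axis=0)          # vector-normalize each criterion
    V = R * weights
    ideal      = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti_ideal = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus  = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti_ideal, axis=1)
    return d_minus / (d_plus + d_minus)        # closeness coefficient: 1 = best

X = np.array([[7., 8., 3.],   # supplier A: agility, quality, cost (illustrative)
              [9., 6., 4.],
              [6., 9., 2.]])
scores = topsis_scores(X, weights=np.array([0.5, 0.3, 0.2]),
                       benefit=np.array([True, True, False]))
print(scores.argmax())        # index of the preferred supplier
```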
Innovative architecture design for high performance organic and hybrid multi-junction solar cells
NASA Astrophysics Data System (ADS)
Li, Ning; Spyropoulos, George D.; Brabec, Christoph J.
2017-08-01
The multi-junction concept is especially attractive for the photovoltaic (PV) research community owing to its potential to overcome the Shockley-Queisser limit of single-junction solar cells. Tremendous research interest is now focused on the development of high-performance absorbers and novel device architectures for emerging PV technologies, such as organic and perovskite PVs. It has been predicted that the multi-junction concept can push the organic and perovskite PV technologies toward the 20% and 30% benchmarks, respectively, pointing to a bright future for commercialization of these emerging PV technologies. In this contribution, we will demonstrate innovative architecture design for solution-processed, highly functional organic and hybrid multi-junction solar cells. A simple but elegant approach to fabricating organic and hybrid multi-junction solar cells will be introduced. By laminating single organic/hybrid solar cells together through an intermediate layer, the manufacturing cost and complexity of large-scale multi-junction solar cells can be significantly reduced. This smart approach to balancing the photocurrents as well as the open-circuit voltages in multi-junction solar cells will be demonstrated and discussed in detail.
MUSIC: MUlti-Scale Initial Conditions
NASA Astrophysics Data System (ADS)
Hahn, Oliver; Abel, Tom
2013-11-01
MUSIC generates multi-scale initial conditions with multiple levels of refinement for cosmological ‘zoom-in’ simulations. The code uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel together with an adaptive multi-grid Poisson solver to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). MUSIC achieves rms relative errors of the order of 10⁻⁴ for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier-space-induced interference ringing.
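The core idea, convolving Gaussian white noise with a transfer-function kernel, can be sketched in a few lines; the toy power-law spectrum below is an assumption of this illustration and is not MUSIC's cosmological transfer function or source code.

```python
# Generate a Gaussian random field with a prescribed power spectrum by
# filtering white noise in Fourier space.
import numpy as np

def gaussian_field(n=128, spectral_index=-2.0, seed=1):
    rng = np.random.default_rng(seed)
    white = rng.normal(size=(n, n))            # unit white noise
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.sqrt(kx**2 + ky**2)
    k[0, 0] = np.inf                           # suppress the DC mode
    kernel = k ** (spectral_index / 2.0)       # sqrt of the power spectrum
    return np.fft.ifft2(np.fft.fft2(white) * kernel).real

delta = gaussian_field()                       # toy overdensity field
```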
Alsmadi, Othman M K; Abo-Hammour, Zaer S
2015-01-01
A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA) with the advantage of obtaining a reduced-order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady-state error. The reduction process is performed by obtaining an upper-triangular transformed matrix of the system state matrix defined in state-space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing a fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, and simulation results show the potential and advantages of the new approach.
A Langevin approach to multi-scale modeling
Hirvijoki, Eero
2018-04-13
In plasmas, distribution functions often demonstrate long anisotropic tails or otherwise significant deviations from local Maxwellians. The tails, especially if they are pulled out from the bulk, pose a serious challenge for numerical simulations, as resolving both the bulk and the tail on the same mesh is often challenging. A multi-scale approach, providing evolution equations for the bulk and the tail individually, could offer a resolution in the sense that both populations could be treated on separate meshes or different reduction techniques applied to the bulk and the tail population. In this paper, we propose a multi-scale method which allows us to split a distribution function into a bulk and a tail so that both populations remain genuine, non-negative distribution functions and may carry density, momentum, and energy. The proposed method is based on the observation that the motion of an individual test particle in a plasma obeys a stochastic differential equation, also referred to as a Langevin equation. Finally, this allows us to define transition probabilities between the bulk and the tail and to provide evolution equations for both populations separately.
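A minimal sketch of the underlying picture, assuming a toy Ornstein-Uhlenbeck Langevin equation and an arbitrary speed threshold between "bulk" and "tail", estimates per-step transition probabilities between the two populations; none of the coefficients come from the paper.

```python
# Euler-Maruyama integration of dv = -nu*v dt + sigma dW for an ensemble of
# test particles, tagging each as bulk (|v| <= vth) or tail (|v| > vth) and
# accumulating the empirical transition matrix between the two populations.
import numpy as np

rng = np.random.default_rng(0)
nu, sigma, dt, vth = 1.0, 1.5, 0.01, 2.0
v = rng.normal(size=100_000)                     # initial velocities
in_tail = np.abs(v) > vth

counts = np.zeros((2, 2))                        # [from, to]: 0 = bulk, 1 = tail
for _ in range(1000):
    v += -nu * v * dt + sigma * np.sqrt(dt) * rng.normal(size=v.size)
    now_tail = np.abs(v) > vth
    for a in (0, 1):
        for b in (0, 1):
            counts[a, b] += np.sum((in_tail == a) & (now_tail == b))
    in_tail = now_tail

P = counts / counts.sum(axis=1, keepdims=True)   # per-step transition matrix
print(P)
```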
Kassab, Ghassan S.; An, Gary; Sander, Edward A.; Miga, Michael; Guccione, Julius M.; Ji, Songbai; Vodovotz, Yoram
2016-01-01
In this era of tremendous technological capabilities and increased focus on improving clinical outcomes, decreasing costs, and increasing precision, there is a need for a more quantitative approach to the field of surgery. Multiscale computational modeling has the potential to bridge the gap to the emerging paradigms of Precision Medicine and Translational Systems Biology, in which quantitative metrics and data guide patient care through improved stratification, diagnosis, and therapy. Achievements by multiple groups have demonstrated the potential for 1) multiscale computational modeling, at a biological level, of diseases treated with surgery and the surgical procedure process at the level of the individual and the population; along with 2) patient-specific, computationally-enabled surgical planning, delivery, and guidance and robotically-augmented manipulation. In this perspective article, we discuss these concepts, and cite emerging examples from the fields of trauma, wound healing, and cardiac surgery. PMID:27015816
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, G., E-mail: gchen@lanl.gov; Chacón, L.; Leibs, C.A.
2014-02-01
A recent proof-of-principle study proposes an energy- and charge-conserving, nonlinearly implicit electrostatic particle-in-cell (PIC) algorithm in one dimension [9]. The algorithm in the reference employs an unpreconditioned Jacobian-free Newton–Krylov method, which ensures nonlinear convergence at every timestep (resolving the dynamical timescale of interest). Kinetic enslavement, which is one key component of the algorithm, not only enables fully implicit PIC as a practical approach, but also allows preconditioning the kinetic solver with a fluid approximation. This study proposes such a preconditioner, in which the linearized moment equations are closed with moments computed from particles. Effective acceleration of the linear GMRES solve is demonstrated, on both uniform and non-uniform meshes. The algorithm performance is largely insensitive to the electron–ion mass ratio. Numerical experiments are performed on a 1D multi-scale ion acoustic wave test problem.
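To make the solver layer concrete, the sketch below applies SciPy's Jacobian-free Newton-Krylov routine to a small Bratu-type nonlinear boundary-value problem; the PIC equations and the fluid preconditioner of the study are not reproduced here.

```python
# Jacobian-free Newton-Krylov solve of phi'' + exp(phi) = 0, phi(0)=phi(1)=0,
# as a stand-in for the nonlinear field solve in implicit PIC.
import numpy as np
from scipy.optimize import newton_krylov

n = 64
h = 1.0 / (n - 1)

def residual(phi):
    """Nonlinear residual on a uniform 1D grid with pinned boundaries."""
    F = np.empty_like(phi)
    F[1:-1] = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / h**2 + np.exp(phi[1:-1])
    F[0], F[-1] = phi[0], phi[-1]     # enforce Dirichlet boundary values
    return F

phi = newton_krylov(residual, np.zeros(n), f_tol=1e-10)
```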
Chen, Wei-Hua; van Noort, Vera; Lluch-Senar, Maria; Hennrich, Marco L.; Wodke, Judith A. H.; Yus, Eva; Alibés, Andreu; Roma, Guglielmo; Mende, Daniel R.; Pesavento, Christina; Typas, Athanasios; Gavin, Anne-Claude; Serrano, Luis; Bork, Peer
2016-01-01
We developed a comprehensive resource for the genome-reduced bacterium Mycoplasma pneumoniae comprising 1748 consistently generated ‘-omics’ data sets, and used it to quantify the power of antisense non-coding RNAs (ncRNAs), lysine acetylation, and protein phosphorylation in predicting protein abundance (11%, 24% and 8%, respectively). These factors taken together are four times more predictive of the proteome abundance than of mRNA abundance. In bacteria, post-translational modifications (PTMs) and ncRNA transcription were both found to increase with decreasing genomic GC-content and genome size. Thus, the evolutionary forces constraining genome size and GC-content modify the relative contributions of the different regulatory layers to proteome homeostasis, and impact more genomic and genetic features than previously appreciated. Indeed, these scaling principles will enable us to develop more informed approaches when engineering minimal synthetic genomes. PMID:26773059
Laser Amplifier Development for the Remote Sensing of CO2 from Space
NASA Technical Reports Server (NTRS)
Yu, Anthony W.; Abshire, James B.; Storm, Mark; Betin, Alexander
2015-01-01
Accurate global measurements of tropospheric CO2 mixing ratios are needed to study CO2 emissions and CO2 exchange with the land and oceans. NASA Goddard Space Flight Center (GSFC) is developing a pulsed lidar approach for an integrated path differential absorption (IPDA) lidar to allow global measurements of atmospheric CO2 column densities from space. Our group has developed, and successfully flown, an airborne pulsed lidar instrument that uses two tunable pulsed laser transmitters allowing simultaneous measurement of a single CO2 absorption line in the 1570 nm band, absorption of an O2 line pair in the oxygen A-band (765 nm), range, and atmospheric backscatter profiles in the same path. Both lasers are pulsed at 10 kHz, and the two absorption line regions are sampled at typically a 300 Hz rate. A space-based version of this lidar must have a much larger lidar power-area product due to the approximately 40× longer range and faster along-track velocity compared to the airborne instrument. Initial link budget analysis indicated that for a 400 km orbit, a 1.5 m diameter telescope and a 10 second integration time, approximately 2 mJ of laser energy is required to attain the precision needed for each measurement. To meet this energy requirement, we have pursued parallel power-scaling efforts to enable space-based lidar measurement of CO2 concentrations. These included a multiple-aperture approach consisting of multi-element large-mode-area fiber amplifiers and a single-aperture approach consisting of a multi-pass Er:Yb:phosphate-glass planar waveguide amplifier (PWA). In this paper we will present our laser amplifier design approaches and preliminary results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, L.H., E-mail: Luhui.Han@tum.de; Hu, X.Y., E-mail: Xiangyu.Hu@tum.de; Adams, N.A., E-mail: Nikolaus.Adams@tum.de
In this paper we present a scale separation approach for multi-scale modeling of free-surface and two-phase flows with complex interface evolution. By performing a stimulus-response operation on the level-set function representing the interface, separation of resolvable and non-resolvable interface scales is achieved efficiently. Uniform positive and negative shifts of the level-set function are used to determine non-resolvable interface structures. Non-resolved interface structures are separated from the resolved ones and can be treated by a mixing model or a Lagrangian-particle model in order to preserve mass. Resolved interface structures are treated by the conservative sharp-interface model. Since the proposed scale separation approach does not rely on topological information, unlike in previous work, it can be implemented in a straightforward fashion into a given level-set-based interface model. A number of two- and three-dimensional numerical tests demonstrate that the proposed method is able to cope with complex interface variations accurately and significantly increases robustness against underresolved interface structures.
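A minimal sketch of the shift-based stimulus-response idea follows: shifting the level set up by h and back down acts as a morphological opening, so interface structures thinner than roughly 2h are flagged as non-resolvable. The distance-transform re-signing is a convenience of this illustration, not the paper's exact procedure.

```python
# Scale separation on a discretized signed-distance level-set field phi
# (phi < 0 inside the phase), using two uniform shifts of size h.
import numpy as np
from scipy import ndimage

def signed_distance(mask):
    """Signed distance field, negative inside the phase (mask == True)."""
    return (ndimage.distance_transform_edt(~mask)
            - ndimage.distance_transform_edt(mask))

def separate_scales(phi, h):
    mask = phi < 0                                # phase interior
    eroded = phi + h < 0                          # shift up: thin structures vanish
    reopened = signed_distance(eroded) - h < 0    # shift back down (dilation)
    resolved = reopened                           # survives the opening cycle
    unresolved = mask & ~reopened                 # what the opening removed
    return resolved, unresolved
```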
Mesoscale Effective Property Simulations Incorporating Conductive Binder
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trembacki, Bradley L.; Noble, David R.; Brunini, Victor E.
Lithium-ion battery electrodes are composed of active material particles, binder, and conductive additives that form an electrolyte-filled porous particle composite. The mesoscale (particle-scale) interplay of electrochemistry, mechanical deformation, and transport through this tortuous multi-component network dictates the performance of a battery at the cell level. Effective electrode properties connect mesoscale phenomena with computationally feasible battery-scale simulations. We utilize published tomography data to reconstruct a large subsection (1000+ particles) of an NMC333 cathode into a computational mesh and extract electrode-scale effective properties from finite element continuum-scale simulations. We present a novel method to preferentially place a composite binder phase throughout the mesostructure, a necessary approach due to the difficulty of distinguishing between non-active phases in tomographic data. We compare stress generation and effective thermal, electrical, and ionic conductivities across several binder placement approaches. Isotropic lithiation-dependent mechanical swelling of the NMC particles and the consideration of strain-dependent composite binder conductivity significantly impact the resulting effective property trends and stresses generated. Lastly, our results suggest that composite binder location significantly affects mesoscale behavior, indicating that a binder coating on active particles is not sufficient and that more accurate approaches should be used when calculating effective properties that will inform battery-scale models in this inherently multi-scale battery simulation challenge.
A multi-species exchange model for fully fluctuating polymer field theory simulations.
Düchs, Dominik; Delaney, Kris T; Fredrickson, Glenn H
2014-11-07
Field-theoretic models have been used extensively to study the phase behavior of inhomogeneous polymer melts and solutions, both in self-consistent mean-field calculations and in numerical simulations of the full theory capturing composition fluctuations. The models commonly used can be grouped into two categories, namely, species models and exchange models. Species models involve integrations of functionals that explicitly depend on fields originating both from species density operators and their conjugate chemical potential fields. In contrast, exchange models retain only linear combinations of the chemical potential fields. In the two-component case, development of exchange models has been instrumental in enabling stable complex Langevin (CL) simulations of the full complex-valued theory. No comparable stable CL approach has yet been established for field theories of the species type. Here, we introduce an extension of the exchange model to an arbitrary number of components, namely, the multi-species exchange (MSE) model, which greatly expands the classes of soft material systems that can be accessed by the complex Langevin simulation technique. We demonstrate the stability and accuracy of the MSE-CL sampling approach using numerical simulations of triblock and tetrablock terpolymer melts, and tetrablock quaterpolymer melts. This method should enable studies of a wide range of fluctuation phenomena in multiblock/multi-species polymer blends and composites.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McNab, W; Ezzedine, S; Detwiler, R
2007-02-26
Industrial organic solvents such as trichloroethylene (TCE) and tetrachloroethylene (PCE) constitute a principal class of groundwater contaminants. Cleanup of groundwater plume source areas associated with these compounds is problematic, in part, because the compounds often exist in the subsurface as dense nonaqueous phase liquids (DNAPLs). Ganglia (or 'blobs') of DNAPL serve as persistent sources of contaminants that are difficult to locate and remediate (e.g. Fenwick and Blunt, 1998). Current understanding of the physical and chemical processes associated with dissolution of DNAPLs in the subsurface is incomplete and yet is critical for evaluating long-term behavior of contaminant migration, groundwater cleanup, and the efficacy of source area cleanup technologies. As such, a goal of this project has been to contribute to this critical understanding by investigating the multi-phase, multi-component physics of DNAPL dissolution using state-of-the-art experimental and computational techniques. Through this research, we have explored efficient and accurate conceptual and numerical models for source area contaminant transport that can be used to better inform the modeling of source area contaminants, including those at the LLNL Superfund sites, to re-evaluate existing remediation technologies, and to inspire or develop new remediation strategies. The problem of DNAPL dissolution in natural porous media must be viewed in the context of several scales (Khachikian and Harmon, 2000), including the microscopic level at which capillary forces, viscous forces, and gravity/buoyancy forces are manifested at the scale of individual pores (Wilson and Conrad, 1984; Chatzis et al., 1988), the mesoscale where dissolution rates are strongly influenced by the local hydrodynamics, and the field-scale. Historically, the physico-chemical processes associated with DNAPL dissolution have been addressed through the use of lumped mass transfer coefficients which attempt to quantify the dissolution rate in response to local dissolved-phase concentrations distributed across the source area using a volume-averaging approach (Figure 1). The fundamental problem with the lumped mass transfer parameter is that its value is typically derived empirically through column-scale experiments that combine the effects of pore-scale flow, diffusion, and pore-scale geometry in a manner that does not provide a robust theoretical basis for upscaling. In our view, upscaling processes from the pore-scale to the field-scale requires new computational approaches (Held and Celia, 2001) that are directly linked to experimental studies of dissolution at the pore scale. As such, our investigation has been multi-pronged, combining theory, experiments, numerical modeling, new data analysis approaches, and a synthesis of previous studies (e.g. Glass et al, 2001; Keller et al., 2002) aimed at quantifying how the mechanisms controlling dissolution at the pore-scale control the long-term dissolution of source areas at larger scales.
Multi-atlas segmentation enables robust multi-contrast MRI spleen segmentation for splenomegaly
NASA Astrophysics Data System (ADS)
Huo, Yuankai; Liu, Jiaqi; Xu, Zhoubing; Harrigan, Robert L.; Assad, Albert; Abramson, Richard G.; Landman, Bennett A.
2017-02-01
Non-invasive spleen volume estimation is essential in detecting splenomegaly. Magnetic resonance imaging (MRI) has been used to facilitate splenomegaly diagnosis in vivo. However, achieving accurate spleen volume estimation from MR images is challenging given the great inter-subject variance of human abdomens and wide variety of clinical images/modalities. Multi-atlas segmentation has been shown to be a promising approach to handle heterogeneous data and difficult anatomical scenarios. In this paper, we propose to use multi-atlas segmentation frameworks for MRI spleen segmentation for splenomegaly. To the best of our knowledge, this is the first work that integrates multi-atlas segmentation for splenomegaly as seen on MRI. To address the particular concerns of spleen MRI, automated and novel semi-automated atlas selection approaches are introduced. The automated approach iteratively selects a subset of atlases using the selective and iterative method for performance level estimation (SIMPLE). To further control outliers, a semi-automated craniocaudal-length-based SIMPLE atlas selection (L-SIMPLE) is proposed to introduce a spatial prior that guides the iterative atlas selection. A dataset from a clinical trial containing 55 MRI volumes (28 T1-weighted and 27 T2-weighted) was used to evaluate the different methods. Both automated and semi-automated methods achieved median DSC > 0.9. Outliers were alleviated by L-SIMPLE (≈1 min of manual effort per scan), which achieved a Pearson correlation of 0.9713 with the manual segmentation. The results demonstrate that multi-atlas segmentation is able to achieve accurate spleen segmentation from multi-contrast splenomegaly MRI scans.
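A SIMPLE-style selection loop can be sketched as iterative fusion and rescoring; the majority-vote fusion and Dice threshold below are illustrative assumptions, not the authors' implementation.

```python
# Iterative atlas selection: fuse candidate segmentations, score each atlas
# against the consensus with the Dice coefficient, drop poor performers, repeat.
import numpy as np

def dice(a, b):
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def simple_select(segs, thresh=0.80, max_iter=10):
    """segs: list of boolean label volumes already registered to the target."""
    keep = list(range(len(segs)))
    for _ in range(max_iter):
        fused = np.mean([segs[i] for i in keep], axis=0) >= 0.5   # majority vote
        scores = {i: dice(segs[i], fused) for i in keep}
        survivors = [i for i in keep if scores[i] >= thresh]
        if survivors == keep or not survivors:
            break
        keep = survivors
    return keep, fused
```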
A scalable multi-photon coincidence detector based on superconducting nanowires.
Zhu, Di; Zhao, Qing-Yuan; Choi, Hyeongrak; Lu, Tsung-Ju; Dane, Andrew E; Englund, Dirk; Berggren, Karl K
2018-06-04
Coincidence detection of single photons is crucial in numerous quantum technologies and usually requires multiple time-resolved single-photon detectors. However, the electronic readout becomes a major challenge when the measurement basis scales to large numbers of spatial modes. Here, we address this problem by introducing a two-terminal coincidence detector that enables scalable readout of an array of detector segments based on superconducting nanowire microstrip transmission line. Exploiting timing logic, we demonstrate a sixteen-element detector that resolves all 136 possible single-photon and two-photon coincidence events. We further explore the pulse shapes of the detector output and resolve up to four-photon events in a four-element device, giving the detector photon-number-resolving capability. This new detector architecture and operating scheme will be particularly useful for multi-photon coincidence detection in large-scale photonic integrated circuits.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schunert, Sebastian; Schwen, Daniel; Ghassemi, Pedram
This work presents a multi-physics, multi-scale approach to modeling the Transient Test Reactor (TREAT) currently being prepared for restart at the Idaho National Laboratory. TREAT fuel is made up of microscopic fuel grains (r ≈ 20 µm) dispersed in a graphite matrix. The novelty of this work is in coupling a binary collision Monte-Carlo (BCMC) model to the finite-element-based code Moose for solving a microscopic heat-conduction problem whose driving source is provided by the BCMC model tracking fission fragment energy deposition. This microscopic model is driven by a transient, engineering-scale neutronics model coupled to an adiabatic heating model. The macroscopic model provides local power densities and neutron energy spectra to the microscopic model. Currently, no feedback from the microscopic to the macroscopic model is considered. TREAT transient 15 is used to exemplify the capabilities of the multi-physics, multi-scale model, and it is found that the average fuel grain temperature differs from the average graphite temperature by 80 K despite the low-power transient. The large temperature difference has strong implications for the Doppler feedback a potential LEU TREAT core would see, and it underpins the need for multi-physics, multi-scale modeling of a TREAT LEU core.
Bush Encroachment Mapping for Africa - Multi-Scale Analysis with Remote Sensing and GIS
NASA Astrophysics Data System (ADS)
Graw, V. A. M.; Oldenburg, C.; Dubovyk, O.
2015-12-01
Bush encroachment, the spread of inedible invasive woody species at the expense of grassland, is a global problem that particularly affects savanna ecosystems in Africa. Livestock production is directly affected by the resulting loss of grazing land, and for many small-scale farmers in developing countries livestock represents a type of insurance in times of crop failure or drought. Bush encroachment is also a problem for crop production. Studies mapping bush encroachment have so far focused on small scales using high-resolution data and rarely provide information beyond the national level. Therefore a process chain was developed using a multi-scale approach to detect bush encroachment for the whole of Africa. The bush encroachment map is calibrated with ground-truth data provided by experts in Southern, Eastern and Western Africa. By up-scaling location-specific information across different levels of remote sensing imagery (30 m with Landsat images and 250 m with MODIS data), a map is created showing potential and actual areas of bush encroachment on the African continent, thereby providing an innovative approach to mapping bush encroachment at the regional scale. A classification approach links GPS-located expert observations to the respective pixels in the remote sensing imagery. Supervised classification is used, with actual bush encroachment observations serving as training samples for the up-scaling. The classification technique is based on Random Forests and regression trees, a machine-learning classification approach. Working on multiple scales and with the help of field data, an innovative approach can be presented showing areas affected by bush encroachment on the African continent. This information can help prevent further grassland decline and identify regions where land management strategies are most important to sustain livestock keeping and thereby secure livelihoods in rural areas.
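The supervised up-scaling step might look like the following sketch, in which a random forest trained on expert-labeled pixels classifies a raster stack; the file names, band layout, and hyperparameters are hypothetical.

```python
# Train on pixels sampled at expert GPS points, then classify every pixel
# of a (rows, cols, n_bands) image stack.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X_train = np.load("training_pixels.npy")   # hypothetical (n_pixels, n_bands)
y_train = np.load("training_labels.npy")   # hypothetical: 1 = encroached, 0 = not

clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)

stack = np.load("modis_stack.npy")         # hypothetical raster stack
rows, cols, bands = stack.shape
pred = clf.predict(stack.reshape(-1, bands)).reshape(rows, cols)
```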
NASA Astrophysics Data System (ADS)
Sweeney, C.; Kort, E. A.; Rella, C.; Conley, S. A.; Karion, A.; Lauvaux, T.; Frankenberg, C.
2015-12-01
Along with a boom in oil and natural gas production in the US, there has been a substantial effort to understand the true environmental impact of these operations on air and water quality, as well as net radiation balance. This multi-institution effort funded by both governmental and non-governmental agencies has provided a case study for identification and verification of emissions using a multi-scale, top-down approach. This approach leverages a combination of remote sensing to identify areas that need specific focus and airborne in-situ measurements to quantify both regional and large- to mid-size single-point emitters. Ground-based networks of mobile and stationary measurements provide the bottom tier of measurements from which process-level information can be gathered to better understand the specific sources and temporal distribution of the emitters. The motivation for this type of approach is largely driven by recent work in the Barnett Shale region in Texas as well as the San Juan Basin in New Mexico and Colorado; these studies suggest that relatively few single-point emitters dominate the regional emissions of CH4.
Adapt Design: A Methodology for Enabling Modular Design for Mission Specific SUAS
2016-08-24
Fisher, Zachary C.; Locascio, David; Cooksey, K. Daniel ... vehicle's small scale. This paper considers a different approach to SUAS design aimed at addressing this issue. In this approach, a hybrid modular and ... Two types of platforms have been identified: scalable platforms, where variants are produced by varying scalable design variables, and modular ...
Wavelet-enabled progressive data Access and Storage Protocol (WASP)
NASA Astrophysics Data System (ADS)
Clyne, J.; Frank, L.; Lesperance, T.; Norton, A.
2015-12-01
Current practices for storing numerical simulation outputs hail from an era when the disparity between compute and I/O performance was not as great as it is today. The memory contents for every sample, computed at every grid point location, are simply saved at some prescribed temporal frequency. Though straightforward, this approach fails to take advantage of the coherency in neighboring grid points that invariably exists in numerical solutions to mathematical models. Exploiting such coherence is essential to digital multimedia; DVD-Video, digital cameras, streaming movies and audio are all possible today because of transform-based compression schemes that make substantial reductions in data possible by taking advantage of the strong correlation between adjacent samples in both space and time. Such methods can also be exploited to enable progressive data refinement in a manner akin to that used in ubiquitous digital mapping applications: views from far away are shown in coarsened detail to provide context, and can be progressively refined as the user zooms in on a localized region of interest. The NSF funded WASP project aims to provide a common, NetCDF-compatible software framework for supporting wavelet-based, multi-scale, progressive data, enabling interactive exploration of large data sets for the geoscience communities. This presentation will provide an overview of this work in progress to develop community cyber-infrastructure for the efficient analysis of very large data sets.
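The progressive-refinement idea can be demonstrated with an off-the-shelf wavelet library: decompose once, then reconstruct previews from progressively more detail bands. The sketch below uses PyWavelets as a stand-in and does not reflect WASP's NetCDF storage layout.

```python
# Multi-level 2D wavelet decomposition with coarse-to-fine reconstruction.
import numpy as np
import pywt

field = np.random.default_rng(0).normal(size=(256, 256))  # stand-in for model output
coeffs = pywt.wavedec2(field, "db2", level=3)              # archived once

def preview(coeffs, keep_levels):
    """Reconstruct using only the coarsest `keep_levels` detail bands."""
    c = [coeffs[0]]
    for i, detail in enumerate(coeffs[1:], start=1):       # coarsest detail first
        if i <= keep_levels:
            c.append(detail)
        else:
            c.append(tuple(np.zeros_like(d) for d in detail))
    return pywt.waverec2(c, "db2")

coarse = preview(coeffs, keep_levels=0)   # far-away view for context
full   = preview(coeffs, keep_levels=3)   # fully refined region of interest
```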
NASA Astrophysics Data System (ADS)
Christianson, D. S.; Varadharajan, C.; Detto, M.; Faybishenko, B.; Gimenez, B.; Jardine, K.; Negron Juarez, R. I.; Pastorello, G.; Powell, T.; Warren, J.; Wolfe, B.; McDowell, N. G.; Kueppers, L. M.; Chambers, J.; Agarwal, D.
2016-12-01
The U.S. Department of Energy's (DOE) Next Generation Ecosystem Experiment (NGEE) Tropics project aims to develop a process-rich tropical forest ecosystem model that is parameterized and benchmarked by field observations. Thus, data synthesis, quality assurance and quality control (QA/QC), and data product generation of a diverse and complex set of ecohydrological observations, including sapflux, leaf surface temperature, soil water content, and leaf gas exchange from sites across the Tropics, are required to support model simulations. We have developed a metadata reporting framework, implemented in conjunction with the NGEE Tropics Data Archive tool, to enable cross-site and cross-method comparison, data interpretability, and QA/QC. We employed a modified User-Centered Design approach, which involved short development cycles based on user-identified needs, and iterative testing with data providers and users. The metadata reporting framework currently has been implemented for sensor-based observations and leverages several existing metadata protocols. The framework consists of templates that define a multi-scale measurement position hierarchy, descriptions of measurement settings, and details about data collection and data file organization. The framework also enables data providers to define data-access permission settings, provenance, and referencing to enable appropriate data usage, citation, and attribution. In addition to describing the metadata reporting framework, we discuss tradeoffs and impressions from both data providers and users during the development process, focusing on the scalability, usability, and efficiency of the framework.
NASA Astrophysics Data System (ADS)
Hussein, Rafid M.; Chandrashekhara, K.
2017-11-01
A multi-scale modeling approach is presented to simulate and validate thermo-oxidation shrinkage and cracking damage of a high-temperature polymer composite. The multi-scale approach couples transient diffusion-reaction and static structural analyses from the macro- to the micro-scale. The micro-scale shrinkage deformation and cracking damage are simulated and validated using 2D and 3D simulations. Localized shrinkage displacement boundary conditions for the micro-scale simulations are determined from the respective meso- and macro-scale simulations, conducted for a cross-ply laminate. The meso-scale geometrical domain and the micro-scale geometry and mesh are developed using the object-oriented finite element (OOF) software. The macro-scale shrinkage and weight loss are measured using unidirectional coupons and used to build the macro-shrinkage model. The cross-ply coupons are used to validate the macro-shrinkage model against shrinkage profiles acquired from scanning electron images of the cracked surface. The macro-shrinkage model deformation shows a discrepancy when the micro-scale image-based cracking is computed. The local maximum shrinkage strain is therefore assumed to be 13 times the maximum macro-shrinkage strain of 2.5 × 10⁻⁵, which minimizes the discrepancy. The microcrack damage of the composite is modeled using a static elastic analysis with the extended finite element method and cohesive surfaces, accounting for the spatial evolution of the modulus. The 3D shrinkage displacements are fed to the model using node-wise boundary/domain conditions of the respective oxidized region. The simulated microcrack length, meander, and opening closely match the crack in the area of interest in the scanning electron images.
Visualizing Phenology and Climate Data at the National Scale
NASA Astrophysics Data System (ADS)
Rosemartin, A.; Marsh, L.
2013-12-01
Nature's Notebook is the USA National Phenology Network's national-scale plant and animal phenology observation program, designed to address the challenges posed by global change and its impacts on ecosystems and human health. Since its inception in 2009, 2,500 participants in Nature's Notebook have submitted 2.3 million records on the phenology of 17,000 organisms across the United States. An information architecture has been developed to facilitate collaboration and participatory data collection and digitization. Browser-based and mobile applications support data submission, and a MySQL/Drupal multi-site infrastructure enables data storage, access and discovery. Web services are available for both input and export of data resources. In this presentation we will focus on a tool for visualizing phenology data at the national scale. Effective data exploration for this multi-dimensional dataset requires the ability to plot sites, select species and phenophases, graph organismal phenology through time, and view integrated precipitation and temperature data. We will demonstrate the existing tool's capacity, discuss future directions and solicit feedback from the community.
Tuncer, Necibe; Gulbudak, Hayriye; Cannataro, Vincent L; Martcheva, Maia
2016-09-01
In this article, we discuss the structural and practical identifiability of a nested immuno-epidemiological model of arbovirus diseases, where host-vector transmission rate, host recovery, and disease-induced death rates are governed by the within-host immune system. We incorporate the newest ideas and the most up-to-date features of numerical methods to fit multi-scale models to multi-scale data. For an immunological model, we use Rift Valley Fever Virus (RVFV) time-series data obtained from livestock under laboratory experiments, and for an epidemiological model we incorporate a human compartment to the nested model and use the number of human RVFV cases reported by the CDC during the 2006-2007 Kenya outbreak. We show that the immunological model is not structurally identifiable for the measurements of time-series viremia concentrations in the host. Thus, we study the non-dimensionalized and scaled versions of the immunological model and prove that both are structurally globally identifiable. After fixing estimated parameter values for the immunological model derived from the scaled model, we develop a numerical method to fit observable RVFV epidemiological data to the nested model for the remaining parameter values of the multi-scale system. For the given (CDC) data set, Monte Carlo simulations indicate that only three parameters of the epidemiological model are practically identifiable when the immune model parameters are fixed. Alternatively, we fit the multi-scale data to the multi-scale model simultaneously. Monte Carlo simulations for the simultaneous fitting suggest that the parameters of the immunological model and the parameters of the immuno-epidemiological model are practically identifiable. We suggest that analytic approaches for studying the structural identifiability of nested models are a necessity, so that identifiable parameter combinations can be derived to reparameterize the nested model to obtain an identifiable one. This is a crucial step in developing multi-scale models which explain multi-scale data.
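The Monte Carlo practical-identifiability check described here can be sketched generically: fit an ODE model, refit against noise-perturbed replicates, and inspect the spread of the estimates. The logistic growth model below is a stand-in for the immuno-epidemiological system, and all parameter values are illustrative.

```python
# Practical identifiability via Monte Carlo refitting of an ODE model.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t_data = np.linspace(0, 10, 25)

def model(t, r, K, v0):
    """Logistic growth v' = r v (1 - v/K), a toy stand-in for viremia dynamics."""
    sol = solve_ivp(lambda t, v: r * v * (1 - v / K), (0, t[-1]), [v0],
                    t_eval=t, rtol=1e-8)
    return sol.y[0]

true = (1.2, 100.0, 1.0)
clean = model(t_data, *true)

def fit(y):
    res = least_squares(lambda p: model(t_data, *p) - y,
                        x0=[1.0, 80.0, 2.0], bounds=(1e-6, np.inf))
    return res.x

rng = np.random.default_rng(0)
estimates = np.array([fit(clean + rng.normal(0, 2.0, clean.size))
                      for _ in range(200)])
# wide or strongly correlated spreads flag practically non-identifiable parameters
print(estimates.mean(axis=0), estimates.std(axis=0))
```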
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, George; Marquez, Andres; Choudhury, Sutanay
2012-09-01
Triadic analysis encompasses a useful set of graph mining methods that is centered on the concept of a triad, which is a subgraph of three nodes and the configuration of directed edges across the nodes. Such methods are often applied in the social sciences as well as many other diverse fields. Triadic methods commonly operate on a triad census that counts the number of triads of every possible edge configuration in a graph. Like other graph algorithms, triadic census algorithms do not scale well when graphs reach tens of millions to billions of nodes. To enable the triadic analysis of large-scale graphs, we developed and optimized a triad census algorithm to efficiently execute on shared memory architectures. We will retrace the development and evolution of a parallel triad census algorithm. Over the course of several versions, we continually adapted the code’s data structures and program logic to expose more opportunities to exploit parallelism on shared memory that would translate into improved computational performance. We will recall the critical steps and modifications that occurred during code development and optimization. Furthermore, we will compare the performances of triad census algorithm versions on three specific systems: Cray XMT, HP Superdome, and AMD multi-core NUMA machine. These three systems have shared memory architectures but with markedly different hardware capabilities to manage parallelism.
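For orientation, a deliberately naive serial triad census is sketched below; it enumerates all node triples, which is exactly the cubic cost that motivates the parallel shared-memory algorithm. The raw 6-bit edge codes used here do not collapse isomorphic triads into the standard 16 classes.

```python
# Brute-force triad census for a small directed graph: each unordered triple
# of nodes gets a code built from its six possible directed edges.
from itertools import combinations
from collections import Counter

def triad_census(nodes, edges):
    adj = set(edges)                       # directed edges as (u, v) pairs
    census = Counter()
    for u, v, w in combinations(nodes, 3):
        code = 0
        for bit, e in enumerate([(u, v), (v, u), (u, w),
                                 (w, u), (v, w), (w, v)]):
            if e in adj:
                code |= 1 << bit
        census[code] += 1                  # raw configuration code, not isomorphism class
    return census

print(triad_census([1, 2, 3, 4], [(1, 2), (2, 3), (3, 1), (1, 4)]))
```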
Multi-scale simulations of space problems with iPIC3D
NASA Astrophysics Data System (ADS)
Lapenta, Giovanni; Bettarini, Lapo; Markidis, Stefano
The implicit Particle-in-Cell method for the computer simulation of space plasma, and its implementation in a three-dimensional parallel code, called iPIC3D, are presented. The implicit integration in time of the Vlasov-Maxwell system removes the numerical stability constraints and enables kinetic plasma simulations at magnetohydrodynamics scales. Simulations of magnetic reconnection in plasma are presented to show the effectiveness of the algorithm. In particular we will show a number of simulations done for large-scale 3D systems using the physical mass ratio for Hydrogen. Most notably, one simulation treats kinetically a box of tens of Earth radii in each direction and was conducted using about 16000 processors of the Pleiades NASA computer. The work is conducted in collaboration with the MMS-IDS theory team from the University of Colorado (M. Goldman, D. Newman and L. Andersson). Reference: Stefano Markidis, Giovanni Lapenta, Rizwan-uddin, "Multi-scale simulations of plasma with iPIC3D", Mathematics and Computers in Simulation, available online 17 October 2009, http://dx.doi.org/10.1016/j.matcom.2009.08.038
SChloro: directing Viridiplantae proteins to six chloroplastic sub-compartments.
Savojardo, Castrense; Martelli, Pier Luigi; Fariselli, Piero; Casadio, Rita
2017-02-01
Chloroplasts are organelles found in plants and involved in several important cell processes. Similarly to other compartments in the cell, chloroplasts have an internal structure comprising several sub-compartments, where different proteins are targeted to perform their functions. Given the relation between protein function and localization, the availability of effective computational tools to predict protein sub-organelle localizations is crucial for large-scale functional studies. In this paper we present SChloro, a novel machine-learning approach to predict protein sub-chloroplastic localization, based on targeting signal detection and membrane protein information. The proposed approach performs multi-label predictions discriminating six chloroplastic sub-compartments that include inner membrane, outer membrane, stroma, thylakoid lumen, plastoglobule and thylakoid membrane. In comparative benchmarks, the proposed method outperforms current state-of-the-art methods in both single- and multi-compartment predictions, with an overall multi-label accuracy of 74%. The results demonstrate the relevance of the approach, which is eligible as a good candidate for integration into more general large-scale annotation pipelines of protein subcellular localization. The method is available as a web server at http://schloro.biocomp.unibo.it (contact: gigi@biocomp.unibo.it).
An automated approach for extracting Barrier Island morphology from digital elevation models
NASA Astrophysics Data System (ADS)
Wernette, Phillipe; Houser, Chris; Bishop, Michael P.
2016-06-01
The response and recovery of a barrier island to extreme storms depend on the elevation of the dune base and crest, both of which can vary considerably alongshore and through time. Quantifying the response to and recovery from storms requires that we first identify and differentiate the dune(s) from the beach and back-barrier, which in turn depends on accurate identification and delineation of the dune toe, crest and heel. The purpose of this paper is to introduce a multi-scale automated approach for extracting beach, dune (dune toe, dune crest and dune heel), and barrier island morphology. The automated approach introduced here extracts the shoreline and back-barrier shoreline based on elevation thresholds, and extracts the dune toe, dune crest and dune heel based on the average relative relief (RR) across multiple spatial scales of analysis. The multi-scale automated RR approach is more objective than traditional approaches because every pixel is analyzed across multiple computational scales and features are identified from the calculated RR values. The RR approach outperformed contemporary approaches and represents a fast, objective means to define important beach and dune features for predicting barrier island response to storms. The RR method also does not require that the dune toe, crest, or heel be spatially continuous, which is important because dune morphology is naturally variable alongshore.
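A sketch of the relative-relief computation follows: RR at one scale is (z - local min)/(local max - local min) over a moving window, averaged across several window sizes. The window sizes and input grid are illustrative assumptions, not the paper's parameters.

```python
# Multi-scale relative relief (RR) over a digital elevation model.
import numpy as np
from scipy import ndimage

def relative_relief(dem, size):
    zmin = ndimage.minimum_filter(dem, size=size)
    zmax = ndimage.maximum_filter(dem, size=size)
    rng = zmax - zmin
    # guard against flat windows to avoid division by zero
    return np.where(rng > 0, (dem - zmin) / np.where(rng > 0, rng, 1.0), 0.0)

def multiscale_rr(dem, sizes=(5, 11, 21)):
    return np.mean([relative_relief(dem, s) for s in sizes], axis=0)

dem = np.load("island_dem.npy")   # hypothetical (rows, cols) elevation grid
rr = multiscale_rr(dem)           # high RR flags crest-like features per pixel
```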
NASA Astrophysics Data System (ADS)
Villegas, J. C.; Salazar, J. F.; Arias, P. A.; León, J. D.
2017-12-01
Land cover transformation is currently one of the most important challenges in tropical South America. These transformations occur both because of climate-related ecological perturbations and in response to ongoing socio-economic processes. A fundamental difference between those two drivers is the spatial and temporal scale at which they operate. However, when considered in a larger context, both drivers affect the ability of ecosystems to provide fundamental services to society. In this work, we use a multi-scale approach to identify key mechanisms through which land cover transformation significantly affects ecological, hydrological and ecoclimatological dynamics, potentially leading to the loss of societally-critical regulation services. We present a suite of examples spanning multiple spatial and temporal scales that illustrate the effects of land cover transformations on ecological, hydrological, biogeochemical and climatic functions in tropical South America. These examples highlight important management challenges arising from global change effects, as well as the need to consider the feedbacks and interactions between multi-scale processes.
Multiscale approach to contour fitting for MR images
NASA Astrophysics Data System (ADS)
Rueckert, Daniel; Burger, Peter
1996-04-01
We present a new multiscale contour fitting process which combines information about the image and the contour of the object at different levels of scale. The algorithm is based on energy minimizing deformable models but avoids some of the problems associated with these models. The segmentation algorithm starts by constructing a linear scale-space of an image through convolution of the original image with a Gaussian kernel at different levels of scale, where the scale corresponds to the standard deviation of the Gaussian kernel. At high levels of scale, large-scale features of the objects are preserved, while small-scale features, such as object details and noise, are suppressed. In order to maximize the accuracy of the segmentation, the contour of the object of interest is then tracked in scale-space from coarse to fine scales. We propose a hybrid multi-temperature simulated annealing (SA) optimization to minimize the energy of the deformable model. At high levels of scale the SA optimization is started at high temperatures, enabling the SA optimization to find a globally optimal solution. At lower levels of scale the SA optimization is started at lower temperatures (at the lowest level the temperature is close to 0). This enforces a more deterministic behavior of the SA optimization at lower scales and leads to an increasingly local optimization as high energy barriers cannot be crossed. The performance and robustness of the algorithm have been tested on spin-echo MR images of the cardiovascular system. The task was to segment the ascending and descending aorta in 15 datasets of different individuals in order to measure regional aortic compliance. The results show that the algorithm is able to provide more accurate segmentation results than the classic contour fitting process and is at the same time very robust to noise and initialization.
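A compact sketch of this coarse-to-fine scheme, assuming a toy contour energy (edge attraction plus a smoothness penalty) and a linear cooling schedule; both are illustrative stand-ins for the paper's hybrid formulation:

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def energy(contour, edge_map, alpha=0.1):
    # contour: integer pixel coordinates, shape (K, 2)
    ext = -edge_map[contour[:, 0], contour[:, 1]].sum()                  # attract to edges
    internal = alpha * np.sum((np.roll(contour, -1, 0) - contour) ** 2)  # smoothness
    return ext + internal

def anneal(contour, edge_map, t0, iters=2000, rng=np.random.default_rng(0)):
    e = energy(contour, edge_map)
    for k in range(iters):
        t = t0 * (1 - k / iters) + 1e-6                 # linear cooling
        cand = contour.copy()
        i = rng.integers(len(cand))
        cand[i] += rng.integers(-1, 2, size=2)          # perturb one vertex
        cand = np.clip(cand, 0, np.array(edge_map.shape) - 1)
        de = energy(cand, edge_map) - e
        if de < 0 or rng.random() < np.exp(-de / t):    # Metropolis acceptance
            contour, e = cand, e + de
    return contour

def fit_contour(image, contour, sigmas=(8, 4, 2, 1)):
    for sigma in sigmas:                                # coarse-to-fine in scale
        edges = gaussian_gradient_magnitude(image.astype(float), sigma)
        contour = anneal(contour, edges, t0=sigma)      # high T only at coarse scale
    return contour
```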
Gholami, Somayeh; Kompany-Zareh, Mohsen
2013-07-01
Actinomycin D (Act D), an oncogenic c-Myc promoter binder, interferes with the action of RNA polymerase. There is great demand for high-throughput technology able to monitor the activity of DNA-binding drugs. To this end, binding of 7-aminoactinomycin D (7AAD) to the duplex c-Myc promoter was investigated by use of 2D-photoluminescence emission (2D-PLE), and the resulting data were subjected to analysis by use of convenient and powerful multi-way approaches. Fluorescence measurements were performed by use of the quantum dot (QD)-conjugated c-Myc promoter. Intercalation of 7AAD within duplex base pairs resulted in efficient energy transfer from drug to QD via fluorescence resonance energy transfer (FRET). Multi-way analysis of the three-way data array obtained from titration experiments was performed by use of restricted Tucker3 and hard trilinear decomposition (HTD). These techniques enable analysis of high-dimensional and complex data from nanobiological systems which include several spectrally overlapped structures. It was almost impossible to obtain robust and meaningful information about the FRET process for such highly overlapped data by use of classical analysis. The soft approach had the important advantage over univariate classical methods of enabling us to investigate the source of variance in the fluorescence signal of the DNA-drug complex. It was established that hard trilinear decomposition analysis of FRET-measured data overcomes the problem of rank deficiency, enabling calculation of concentration profiles and pure spectra for all species, including non-fluorophores. The hard modeling approach was also used for determination of equilibrium constants for the hybridization and intercalation equilibria, using nonlinear fit data analysis. The intercalation constant of 3.6 × 10⁶ mol⁻¹ L and hybridization stability of 1.0 × 10⁸ mol⁻¹ L obtained were in good agreement with values reported in the literature. The analytical concentration of the QD-labeled DNA was determined by use of nonlinear fitting, without using external standard calibration samples. This study was a successful application of multi-way chemometric methods to investigation of nano-biotechnological systems where several overlapped species coexist in solution.
Energy Efficient Real-Time Scheduling Using DPM on Mobile Sensors with a Uniform Multi-Cores
Kim, Youngmin; Lee, Chan-Gun
2017-01-01
In wireless sensor networks (WSNs), sensor nodes are deployed for collecting and analyzing data. These nodes use limited-energy batteries for easy deployment and low cost, and battery capacity is closely tied to the lifetime of the sensor nodes. Efficient energy management is therefore important for extending node lifetime. Most efforts to improve power efficiency in tiny sensor nodes have focused mainly on reducing the power consumed during data transmission. However, the recent emergence of sensor nodes equipped with multiple cores strongly requires attention to the problem of reducing power consumption in multi-core processors. In this paper, we propose an energy efficient scheduling method for sensor nodes with uniform multi-core processors. We extend T-Ler plane based scheduling, a global optimal scheduling approach for uniform multi-core and multi-processor systems, to enable power management using dynamic power management (DPM). In the proposed approach, a processor selection and task-to-processor mapping method is proposed to efficiently utilize DPM. Experiments show the effectiveness of the proposed approach compared to other existing methods. PMID:29240695
Scale invariant texture descriptors for classifying celiac disease
Hegenbart, Sebastian; Uhl, Andreas; Vécsei, Andreas; Wimmer, Georg
2013-01-01
Scale invariant texture recognition methods are applied for the computer assisted diagnosis of celiac disease. In particular, emphasis is given to techniques enhancing the scale invariance of multi-scale and multi-orientation wavelet transforms and methods based on fractal analysis. After fine-tuning to specific properties of our celiac disease imagery database, which consists of endoscopic images of the duodenum, some scale invariant (and often even viewpoint invariant) methods provide classification results improving the current state of the art. However, not all of the investigated scale invariant methods can be applied successfully to our dataset. Therefore, the scale invariance of the employed approaches is explicitly assessed, and it is found that many of the analyzed methods are not as scale invariant as they theoretically should be. The results imply that scale invariance is not a key feature required for successful classification of our celiac disease dataset. PMID:23481171
A Multi-Functional Imaging Approach to High-Content Protein Interaction Screening
Matthews, Daniel R.; Fruhwirth, Gilbert O.; Weitsman, Gregory; Carlin, Leo M.; Ofo, Enyinnaya; Keppler, Melanie; Barber, Paul R.; Tullis, Iain D. C.; Vojnovic, Borivoj; Ng, Tony; Ameer-Beg, Simon M.
2012-01-01
Functional imaging can provide a level of quantification that is not possible in what might be termed traditional high-content screening. This is because the current state-of-the-art high-content screening systems take the approach of scaling-up single cell assays, and are therefore based on essentially pictorial measures as assay indicators. Such phenotypic analyses have become extremely sophisticated, advancing screening enormously, but this approach can still be somewhat subjective. We describe the development, and validation, of a prototype high-content screening platform that combines steady-state fluorescence anisotropy imaging with fluorescence lifetime imaging (FLIM). This functional approach allows objective, quantitative screening of small molecule libraries in protein-protein interaction assays. We discuss the development of the instrumentation, the process by which information on fluorescence resonance energy transfer (FRET) can be extracted from wide-field, acceptor fluorescence anisotropy imaging, and the cross-checking of this modality using lifetime imaging by time-correlated single-photon counting. Imaging of cells expressing protein constructs where eGFP and mRFP1 are linked with amino-acid chains of various lengths (7, 19 and 32 amino acids) shows the two methodologies to be highly correlated. We validate our approach using a small-scale inhibitor screen of a Cdc42 FRET biosensor probe expressed in epidermoid cancer cells (A431) in a 96 microwell-plate format. We also show that acceptor fluorescence anisotropy can be used to measure variations in hetero-FRET in protein-protein interactions. We demonstrate this using a screen of inhibitors of internalization of the transmembrane receptor, CXCR4. These assays enable us to demonstrate all the capabilities of the instrument, image processing and analytical techniques that have been developed. Direct correlation between acceptor anisotropy and donor FLIM is observed for FRET assays, providing an opportunity to rapidly screen proteins, interacting on the nanometer scale, using wide-field imaging. PMID:22506000
Meeting the memory challenges of brain-scale network simulation.
Kunkel, Susanne; Potjans, Tobias C; Eppler, Jochen M; Plesser, Hans Ekkehard; Morrison, Abigail; Diesmann, Markus
2011-01-01
The development of high-performance simulation software is crucial for studying the brain connectome. Using connectome data to generate neurocomputational models requires software capable of coping with models on a variety of scales: from the microscale, investigating plasticity, and dynamics of circuits in local networks, to the macroscale, investigating the interactions between distinct brain regions. Prior to any serious dynamical investigation, the first task of network simulations is to check the consistency of data integrated in the connectome and constrain ranges for yet unknown parameters. Thanks to distributed computing techniques, it is possible today to routinely simulate local cortical networks of around 10⁵ neurons with up to 10⁹ synapses on clusters and multi-processor shared-memory machines. However, brain-scale networks are orders of magnitude larger than such local networks, in terms of numbers of neurons and synapses as well as in terms of computational load. Such networks have been investigated in individual studies, but the underlying simulation technologies have neither been described in sufficient detail to be reproducible nor made publicly available. Here, we discover that as the network model sizes approach the regime of meso- and macroscale simulations, memory consumption on individual compute nodes becomes a critical bottleneck. This is especially relevant on modern supercomputers such as the Blue Gene/P architecture where the available working memory per CPU core is rather limited. We develop a simple linear model to analyze the memory consumption of the constituent components of neuronal simulators as a function of network size and the number of cores used. This approach has multiple benefits. The model enables identification of key contributing components to memory saturation and prediction of the effects of potential improvements to code before any implementation takes place. As a consequence, development cycles can be shorter and less expensive. Applying the model to our freely available Neural Simulation Tool (NEST), we identify the software components dominant at different scales, and develop general strategies for reducing the memory consumption, in particular by using data structures that exploit the sparseness of the local representation of the network. We show that these adaptations enable our simulation software to scale up to the order of 10,000 processors and beyond. As memory consumption issues are likely to be relevant for any software dealing with complex connectome data on such architectures, our approach and our findings should be useful for researchers developing novel neuroinformatics solutions to the challenges posed by the connectome project.
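An illustrative toy version of such a linear per-core memory model (not NEST's actual decomposition; all coefficients below are made-up placeholders):

```python
def memory_per_core(N, M, m0=200e6, b_local=2e3, b_global=8.0):
    """Bytes per core for N neurons distributed over M cores:
    fixed base + cost per locally hosted neuron + cost scaling with total N
    (e.g. global data structures that every core must hold)."""
    return m0 + b_local * (N / M) + b_global * N

# The model makes the bottleneck visible: the b_global * N term does not
# shrink with more cores, so per-core memory eventually saturates.
for M in (1024, 8192, 65536):
    print(M, memory_per_core(N=1e9, M=M) / 2**30, "GiB/core")
```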
NASA Astrophysics Data System (ADS)
Michener, W.
2010-12-01
Addressing grand environmental science challenges requires unprecedented access to easily understood data that cross the breadth of temporal, spatial, and thematic scales. From a scientist’s perspective, the big challenges lie in discovering the relevant data, dealing with extreme data heterogeneity, and converting data to information and knowledge. Addressing these challenges requires new approaches for managing, preserving, analyzing, and sharing data. DataONE is designed to be the foundation of new innovative environmental research that addresses questions of relevance to science and society. DataONE will ensure preservation and access to multi-scale, multi-discipline, and multi-national data. Operationally, DataONE encompasses a distributed global network of Member Nodes (i.e., data repositories) that provide open and persistent access to well-described and easily discovered Earth observational data. In addition, a smaller number of Coordinating Nodes (i.e., metadata repositories and service centers) support network-wide services such as data replication and access to an array of enabling tools. DataONE’s objectives are to: make biological data available from the genome to the ecosystem; make environmental data available from atmospheric, ecological, hydrological, and oceanographic sources; provide secure and long-term preservation and access; and engage scientists, land-managers, policy makers, students, educators, and the public through logical access and intuitive visualizations. The foundation for excellence of DataONE is the established collaboration among participating organizations that have multi-decade expertise in a wide range of fields that includes: existing archive initiatives, libraries, environmental observing systems and research networks, data and information management, science synthesis centers, and professional societies. DataONE is a means to serve a broad range of science domains directly and indirectly through interoperability with partnering networks. DataONE engages its community of partners through working groups focused on identifying, describing, and implementing the DataONE cyberinfrastructure, governance, and sustainability models. These working groups, which consist of a diverse group of graduate students, educators, government and industry representatives, and leading computer, information, and library scientists: (1) perform computer science, informatics, and social science research related to all stages of the data life cycle; (2) develop DataONE interfaces and prototypes; (3) adopt/adapt interoperability standards; (4) create value-added technologies (e.g., semantic mediation, scientific workflow, and visualization) that facilitate data integration, analysis, and understanding; (5) address socio-cultural barriers to sustainable data preservation and data sharing; and (6) promote the adoption of best practices for managing the full data life cycle.
Detection of crossover time scales in multifractal detrended fluctuation analysis
NASA Astrophysics Data System (ADS)
Ge, Erjia; Leung, Yee
2013-04-01
Fractal analysis is employed in this paper as a scale-based method for the identification of the scaling behavior of time series. Many spatial and temporal processes exhibiting complex multi(mono)-scaling behaviors are fractals. One of the important concepts in fractals is the crossover time scale(s) that separates distinct regimes having different fractal scaling behaviors. A common method for characterizing such behavior is multifractal detrended fluctuation analysis (MF-DFA). The detection of crossover time scale(s) is, however, relatively subjective, since it has typically been made without rigorous statistical procedures, by eyeballing or subjective observation. Crossover time scales so determined may be spurious and problematic, and may not reflect the genuine underlying scaling behavior of a time series. The purpose of this paper is to propose a statistical procedure to model complex fractal scaling behaviors and reliably identify the crossover time scales under MF-DFA. The scaling-identification regression model, grounded on a solid statistical foundation, is first proposed to describe multi-scaling behaviors of fractals. Through the regression analysis and statistical inference, we can (1) identify the crossover time scales that cannot be detected by eyeballing observation, (2) determine the number and locations of the genuine crossover time scales, (3) give confidence intervals for the crossover time scales, and (4) establish the statistically significant regression model depicting the underlying scaling behavior of a time series. To substantiate our argument, the regression model is applied to analyze the multi-scaling behaviors of avian-influenza outbreaks, water consumption, daily mean temperature, and rainfall of Hong Kong. Through the proposed model, we can have a deeper understanding of fractals in general and a statistical approach to identify multi-scaling behavior under MF-DFA in particular.
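A toy illustration of statistically locating a crossover, assuming a two-segment least-squares fit over the log-log fluctuation function; the paper's scaling-identification regression model adds formal inference on top of this basic idea:

```python
import numpy as np

def find_crossover(log_s, log_F):
    """Scan breakpoints of a two-segment fit to log F(s) vs log s and
    return the candidate crossover scale minimizing the residual SSE."""
    best = (np.inf, None)
    for k in range(3, len(log_s) - 3):            # candidate breakpoints
        sse = 0.0
        for seg in (slice(None, k), slice(k, None)):
            coef = np.polyfit(log_s[seg], log_F[seg], 1)
            resid = log_F[seg] - np.polyval(coef, log_s[seg])
            sse += (resid ** 2).sum()
        if sse < best[0]:
            best = (sse, k)
    return log_s[best[1]]                          # crossover scale (log units)
```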
Multi-scale Material Appearance
NASA Astrophysics Data System (ADS)
Wu, Hongzhi
Modeling and rendering the appearance of materials is important for a diverse range of applications of computer graphics - from automobile design to movies and cultural heritage. The appearance of materials varies considerably at different scales, posing significant challenges due to the sheer complexity of the data, as well as the need to maintain inter-scale consistency constraints. This thesis presents a series of studies around the modeling, rendering and editing of multi-scale material appearance. To efficiently render material appearance at multiple scales, we develop an object-space precomputed adaptive sampling method, which precomputes a hierarchy of view-independent points that preserve multi-level appearance. To support bi-scale material appearance design, we propose a novel reflectance filtering algorithm, which rapidly computes the large-scale appearance from small-scale details, by exploiting the low-rank structures of Bidirectional Visible Normal Distribution Functions and pre-rotated Bidirectional Reflectance Distribution Functions in the matrix formulation of the rendering algorithm. This approach can guide the physical realization of appearance, as well as the modeling of real-world materials using very sparse measurements. Finally, we present a bi-scale-inspired high-quality general representation for material appearance described by Bidirectional Texture Functions. Our representation is at once compact, easily editable, and amenable to efficient rendering.
NASA Astrophysics Data System (ADS)
Dündar, Furkan Semih
2018-01-01
We provide a theory of n-scales, previously called n-dimensional time scales. In previous approaches to the theory of time scales, multi-dimensional scales were taken as the product space of two time scales [1, 2]. n-scales make the mathematical structure more flexible and appropriate for real-world applications in physics and related fields. Here we define an n-scale as an arbitrary closed subset of ℝⁿ. Modified forward and backward jump operators, Δ-derivatives and Δ-integrals on n-scales are defined.
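For orientation, the classical single-time-scale definitions that such operators generalize (standard Hilger calculus, included here as background rather than the paper's modified n-scale versions) are:

```latex
% Forward and backward jump operators on a time scale T (a closed subset of R):
\sigma(t) = \inf\{\, s \in \mathbb{T} : s > t \,\}, \qquad
\rho(t)   = \sup\{\, s \in \mathbb{T} : s < t \,\}
% Delta-derivative of f at a right-scattered point t (sigma(t) > t):
f^{\Delta}(t) = \frac{f(\sigma(t)) - f(t)}{\sigma(t) - t}.
```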
Highly Resolved Intravital Striped-illumination Microscopy of Germinal Centers
Andresen, Volker; Sporbert, Anje
2014-01-01
Monitoring cellular communication by intravital deep-tissue multi-photon microscopy is the key to understanding the fate of immune cells within thick tissue samples and organs in health and disease. By controlling the scanning pattern in multi-photon microscopy and applying appropriate numerical algorithms, we developed a striped-illumination approach, which enabled us to achieve 3-fold better axial resolution and improved signal-to-noise ratio, i.e. contrast, at more than 100 µm tissue depth within highly scattering tissue of lymphoid organs as compared to standard multi-photon microscopy. The acquisition speed as well as photobleaching and photodamage effects were similar to those of the standard photomultiplier-based technique, whereas the imaging depth was slightly lower due to the use of field detectors. By using the striped-illumination approach, we are able to observe the dynamics of immune complex deposits on secondary follicular dendritic cells, at the level of a few protein molecules, in germinal centers. PMID:24748007
NASA Astrophysics Data System (ADS)
Huang, Yanhui; Zhao, He; Wang, Yixing; Ratcliff, Tyree; Breneman, Curt; Brinson, L. Catherine; Chen, Wei; Schadler, Linda S.
2017-08-01
It has been found that doping dielectric polymers with a small amount of nanofiller or molecular additive can stabilize the material under a high field and lead to increased breakdown strength and lifetime. Choosing appropriate fillers is critical to optimizing the material performance, but current research largely relies on experimental trial and error. The employment of computer simulations for nanodielectric design is rarely reported. In this work, we propose a multi-scale modeling approach that employs ab initio, Monte Carlo, and continuum scales to predict the breakdown strength and lifetime of polymer nanocomposites based on the charge trapping effect of the nanofillers. The charge transfer, charge energy relaxation, and space charge effects are modeled in respective hierarchical scales by distinctive simulation techniques, and these models are connected together for high fidelity and robustness. The preliminary results show good agreement with the experimental data, suggesting its promise for use in the computer aided material design of high performance dielectrics.
Zhang, Mingyuan; Velasco, Ferdinand T.; Musser, R. Clayton; Kawamoto, Kensaku
2013-01-01
Enabling clinical decision support (CDS) across multiple electronic health record (EHR) systems has been a desired but largely unattained aim of clinical informatics, especially in commercial EHR systems. A potential opportunity for enabling such scalable CDS is to leverage vendor-supported, Web-based CDS development platforms along with vendor-supported application programming interfaces (APIs). Here, we propose a potential staged approach for enabling such scalable CDS, starting with the use of custom EHR APIs and moving towards standardized EHR APIs to facilitate interoperability. We analyzed three commercial EHR systems for their capabilities to support the proposed approach, and we implemented prototypes in all three systems. Based on these analyses and prototype implementations, we conclude that the approach proposed is feasible, already supported by several major commercial EHR vendors, and potentially capable of enabling cross-platform CDS at scale. PMID:24551426
Modeling small-scale dairy farms in central Mexico using multi-criteria programming.
Val-Arreola, D; Kebreab, E; France, J
2006-05-01
Milk supply from Mexican dairy farms does not meet demand and small-scale farms can contribute toward closing the gap. Two multi-criteria programming techniques, goal programming and compromise programming, were used in a study of small-scale dairy farms in central Mexico. To build the goal and compromise programming models, 4 ordinary linear programming models were also developed, which had objective functions to maximize metabolizable energy for milk production, to maximize margin of income over feed costs, to maximize metabolizable protein for milk production, and to minimize purchased feedstuffs. Neither multi-criteria approach was significantly better than the other; however, by applying both models it was possible to perform a more comprehensive analysis of these small-scale dairy systems. The multi-criteria programming models affirm findings from previous work and suggest that a forage strategy based on alfalfa, ryegrass, and corn silage would meet nutrient requirements of the herd. Both models suggested that there is an economic advantage in rescheduling the calving season to the second and third calendar quarters to better synchronize higher demand for nutrients with the period of high forage availability.
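A toy goal-programming sketch in the spirit of the models described, assuming hypothetical crops, coefficients, targets and a land constraint (none taken from the paper):

```python
import numpy as np
from scipy.optimize import linprog

energy = np.array([120.0, 90.0, 150.0])   # ME per hectare: alfalfa, ryegrass, corn silage (hypothetical)
cost   = np.array([300.0, 200.0, 250.0])  # cost per hectare (hypothetical)
target_E, target_C, land = 1500.0, 3500.0, 15.0

# x = [a, r, c, dE_minus, dE_plus, dC_minus, dC_plus]
c = np.array([0, 0, 0, 1, 0, 0, 1])       # minimize energy shortfall + cost overrun
A_eq = np.array([
    np.r_[energy,  1, -1, 0,  0],         # energy + dE- - dE+ = target_E
    np.r_[cost,    0,  0, 1, -1],         # cost   + dC- - dC+ = target_C
])
b_eq = np.array([target_E, target_C])
A_ub = np.array([[1, 1, 1, 0, 0, 0, 0]])  # total hectares <= land
b_ub = np.array([land])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 7)
print(res.x[:3], "hectares per crop")     # a feasible plan with zero deviations exists here
```

Compromise programming would instead minimize a distance to the ideal point of the individual objectives; the deviation-variable structure above is the goal-programming half of the pair.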
NASA Astrophysics Data System (ADS)
Kostopoulos, Vassilis; Vavouliotis, Antonios; Baltopoulos, Athanasios; Sotiririadis, George; Masouras, Athanasios; Pambaguian, Laurent
2014-06-01
Over the past decade, extensive efforts have been invested in understanding the nano-scale and revealing the capabilities offered by nanotechnology products to structural materials. Nevertheless, a major issue that has lately been faced more seriously, owing to industry interest, is how to incorporate these nano-species into the final composite structure through existing manufacturing processes and infrastructure. In this work, we present the experience obtained from the latest nanotechnology research activities supported by ESA. The paper focuses on prepreg composite manufacturing technology and addresses: approaches for nano-enabling of composites; up-scaling strategies towards final structures; and the latest results on the performance of nano-enabled fiber reinforced composites. Several approaches for the utilization of nanotechnology products in structural composite structures have been proposed and are reviewed in short, along with the respective achieved results. A variety of nano-fillers has been proposed and employed, individually or in combination in hybrid forms, to approach the desired performance. A major part of the work deals with the up-scaling routes of these technologies to reach final products and industrial scales and processes while meeting end-user performance.
NASA Astrophysics Data System (ADS)
Pathiraja, S. D.; van Leeuwen, P. J.
2017-12-01
Model uncertainty quantification remains one of the central challenges of effective Data Assimilation (DA) in complex, partially observed non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid scale processes. Such approaches generally require some knowledge of the true sub-grid scale process or rely on full observations of the larger-scale resolved process. We present a methodology for estimating the statistics of sub-grid scale processes using only partial observations of the resolved process. It finds model error realisations over a training period by minimizing their conditional variance, constrained by available observations. A distinctive feature is that, during the minimization process, these realisations are binned conditional on the previous model state, allowing for the recovery of complex error structures. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz '96 model. We consider different parameterizations of the model with both small and large time scale separations between slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and shown to provide improved analyses and forecasts.
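A toy sketch of the binning idea on synthetic data (the state-dependent error model and bin edges are illustrative, not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
prev_state = rng.uniform(-10, 10, size=5000)
errors = 0.3 * np.tanh(prev_state) + 0.1 * rng.normal(size=5000)  # state-dependent errors

edges = np.linspace(-10, 10, 21)
which = np.digitize(prev_state, edges)
mean_err = np.array([errors[which == b].mean() for b in range(1, len(edges))])
std_err  = np.array([errors[which == b].std()  for b in range(1, len(edges))])
# mean_err/std_err now give per-bin error statistics, usable as a stochastic
# parameterization conditioned on the previous model state.
```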
Local variance for multi-scale analysis in geomorphometry.
Drăguţ, Lucian; Eisank, Clemens; Strasser, Thomas
2011-07-15
Increasing availability of high resolution Digital Elevation Models (DEMs) is leading to a paradigm shift regarding scale issues in geomorphometry, prompting new solutions to cope with multi-scale analysis and detection of characteristic scales. We tested the suitability of the local variance (LV) method, originally developed for image analysis, for multi-scale analysis in geomorphometry. The method consists of: 1) up-scaling land-surface parameters derived from a DEM; 2) calculating LV as the average standard deviation (SD) within a 3 × 3 moving window for each scale level; 3) calculating the rate of change of LV (ROC-LV) from one level to another, and 4) plotting values so obtained against scale levels. We interpreted peaks in the ROC-LV graphs as markers of scale levels where cells or segments match types of pattern elements characterized by (relatively) equal degrees of homogeneity. The proposed method has been applied to LiDAR DEMs in two test areas different in terms of roughness: low relief and mountainous, respectively. For each test area, scale levels for slope gradient, plan, and profile curvatures were produced at constant increments with either resampling (cell-based) or image segmentation (object-based). Visual assessment revealed homogeneous areas that convincingly associate into patterns of land-surface parameters well differentiated across scales. We found that the LV method performed better on scale levels generated through segmentation as compared to up-scaling through resampling. The results indicate that coupling multi-scale pattern analysis with delineation of morphometric primitives is possible. This approach could be further used for developing hierarchical classifications of landform elements.
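A minimal sketch of steps 1)-4), assuming simple bilinear resampling for the up-scaling step and illustrative scale factors:

```python
import numpy as np
from scipy.ndimage import generic_filter, zoom

def local_variance(layer):
    """LV: mean of the standard deviation inside a 3 x 3 moving window."""
    return generic_filter(layer, np.std, size=3).mean()

def roc_lv(param, scale_factors=(1, 2, 4, 8, 16)):
    """Up-scale a land-surface parameter, compute LV per level, then ROC-LV."""
    lv = [local_variance(zoom(param, 1.0 / f, order=1)) for f in scale_factors]
    roc = np.diff(lv) / np.array(lv[:-1])   # rate of change between levels
    return lv, roc                           # peaks in roc mark characteristic scales
```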
Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering.
Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus
2014-12-01
This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
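A conceptual sketch of why shading from a pdf stays consistent across resolutions while shading the averaged intensity does not; the data layout and bin count here are illustrative, not the paper's sparse 4D representation:

```python
import numpy as np

def shade_consistent(hist, bin_centers, transfer):
    """Expected response: sum_b pdf(b) * TF(intensity_b)."""
    pdf = hist / hist.sum()
    return (pdf * transfer(bin_centers)).sum()

def shade_naive(values, transfer):
    return transfer(values.mean())            # TF of the average: resolution-dependent

values = np.array([0.05, 0.05, 0.95, 0.95])  # bimodal fine-scale voxel block
tf = lambda v: (v > 0.5).astype(float)       # step transfer function
hist, edges = np.histogram(values, bins=8, range=(0, 1))
centers = 0.5 * (edges[:-1] + edges[1:])
print(shade_naive(values, tf), shade_consistent(hist, centers, tf))  # 0.0 vs 0.5
```

The naive down-sampled voxel loses the bright half entirely; the pdf-based voxel preserves the mixture, matching what fine-resolution rendering would average to.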
NASA Astrophysics Data System (ADS)
Baumgart, M.; Druml, N.; Consani, M.
2018-05-01
This paper presents a simulation approach for Time-of-Flight (ToF) cameras to estimate sensor performance and accuracy, as well as to help understand experimentally discovered effects. The main scope is the detailed simulation of the optical signals. We use a raytracing-based approach with the optical path length as the master parameter for depth calculations. The procedure is described in detail with references to our implementation in Zemax OpticStudio and Python. Our simulation approach supports multiple and extended light sources and allows accounting for all effects within the geometrical optics model. In particular, multi-object reflection/scattering ray-paths, translucent objects, and aberration effects (e.g. distortion caused by the ToF lens) are supported. The optical path length approach also enables the implementation of different ToF sensor types and transient imaging evaluations. The main features are demonstrated on a simple 3D test scene.
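A hedged sketch of the optical-path-length bookkeeping for a generic continuous-wave ToF model (the standard phase-depth relation, not necessarily the paper's exact implementation):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def ray_opl(segment_lengths, refractive_indices):
    """Optical path length accumulated along a ray: sum of n_i * d_i."""
    return np.sum(np.asarray(segment_lengths) * np.asarray(refractive_indices))

def tof_depth(opl, f_mod=20e6):
    phase = (2 * np.pi * f_mod * opl / C) % (2 * np.pi)   # measured modulation phase
    unambiguous = C / (2 * f_mod)                          # range before phase wrap
    depth = phase * C / (4 * np.pi * f_mod)                # half the round-trip path
    return depth, unambiguous

d, rng = tof_depth(ray_opl([3.0, 3.0], [1.0, 1.0]))        # 6 m round trip -> 3 m
print(d, rng)
```

Multi-path returns would simply contribute additional OPL values per pixel, which is what makes the OPL formulation convenient for multi-object reflection effects.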
NASA Astrophysics Data System (ADS)
Cartier, V.; Claret, C.; Garnier, R.; Fayolle, S.; Franquet, E.
2010-03-01
The complexity of the relationships between environmental factors and organisms can be revealed by sampling designs which consider the contribution to variability of different temporal and spatial scales, compared to total variability. From a management perspective, a multi-scale approach can lead to time-saving. Identifying environmental patterns that help maintain patchy distribution is fundamental in studying coastal lagoons, transition zones between continental and marine waters characterised by great environmental variability on spatial and temporal scales. They often present organic enrichment inducing decreased species richness and increased densities of opportunist species like Chironomus salinarius, a common species that tends to swarm and thus constitutes a nuisance for human populations. This species is dominant in the Bolmon lagoon, a French Mediterranean coastal lagoon undergoing eutrophication. Our objective was to quantify variability due to both spatial and temporal scales and identify the contribution of different environmental factors to this variability. The population of C. salinarius was sampled from June 2007 to June 2008 every two months at 12 sites located in two areas of the Bolmon lagoon, at two different depths, with three sites per area-depth combination. Environmental factors (temperature, dissolved oxygen both in sediment and under the water surface, sediment organic matter content and grain size) and microbial activities (i.e. hydrolase activities) were also considered as explanatory factors of chironomid densities and distribution. ANOVA reveals significant spatial differences in the distribution of chironomid larvae at the area and depth scales and in their interaction. The spatial effect is also revealed for dissolved oxygen (water), salinity and fine particles (area scale), and for water column depth. All factors but water column depth show a temporal effect. Spearman's correlations highlight the seasonal effect (temperature, dissolved oxygen in sediment and water) as well as the effect of microbial activities on chironomid larvae. Our results show that a multi-scale approach identifies patchy distribution, even when there is relative environmental homogeneity.
NASA Astrophysics Data System (ADS)
Doering, Robert
In the early 1980s, the semiconductor industry faced the related challenges of "scaling through the one-micron barrier" and converting single-level-metal NMOS integrated circuits to multi-level-metal CMOS. Multiple advances in lithography technology and device materials/process integration led the way toward the deep-sub-micron transistors and interconnects that characterize today's electronic chips. In the 1990s, CMOS scaling advanced at an accelerated pace enabled by rapid advances in many aspects of optical lithography. However, the industry also needed to continue the progress in manufacturing on ever-larger silicon wafers to maintain economy-of-scale trends. Simultaneously, the increasing complexity and absolute-precision requirements of manufacturing compounded the necessity for new processes, tools, and control methodologies. This talk presents a personal perspective on some of the approaches that addressed the aforementioned challenges. In particular, early work on integrating silicides, lightly-doped-drain FETs, shallow recessed isolation, and double-level metal will be discussed. In addition, some pioneering efforts in deep-UV lithography and single-wafer processing will be covered. The latter will be mainly based on results from the MMST Program - a $100M+, 5-year R&D effort, funded by DARPA, the U.S. Air Force, and Texas Instruments, that developed a wide range of new technologies for advanced semiconductor manufacturing. The major highlight of the program was the demonstration of a sub-3-day cycle time for manufacturing 350-nm CMOS integrated circuits in 1993. This was principally enabled by the development of: (1) 100% single-wafer processing, including rapid-thermal processing (RTP), and (2) computer-integrated manufacturing (CIM), including real-time, in-situ process control.
ERIC Educational Resources Information Center
Rajbhandari, Mani Man Singh; Loock, Coert; Du Plessis, Pierre; Rajbhandari, Smriti
2014-01-01
In educational settings, leadership flexibility and mobility are essential factors for leadership readiness. This incorporates both situational needs and followership situational readiness. Leadership in education requires multi-faceted approaches that enable educational leaders to fill in the gaps and reduce…
DOT National Transportation Integrated Search
2016-12-19
The efforts of this project aim to capture and engage these potentials through a design-research method that combines a top-down, data-driven approach with bottom-up stakeholder perspectives to develop prototypical scenario-based design solutions...
Demuzere, M; Orru, K; Heidrich, O; Olazabal, E; Geneletti, D; Orru, H; Bhave, A G; Mittal, N; Feliu, E; Faehnle, M
2014-12-15
In order to develop climate resilient urban areas and reduce emissions, several opportunities exist, starting from the conscious planning and design of green (and blue) spaces in these landscapes. Green urban infrastructure has been regarded as beneficial, e.g., by balancing water flows and providing thermal comfort. This article explores the existing evidence on the contribution of green spaces to climate change mitigation and adaptation services. We suggest a framework of ecosystem services for systematizing the evidence on the provision of bio-physical benefits (e.g. CO2 sequestration) as well as social and psychological benefits (e.g. improved health) that enable coping with (adaptation) or reducing the adverse effects (mitigation) of climate change. The multi-functional and multi-scale nature of green urban infrastructure complicates the categorization of services and benefits, since in reality the interactions between various benefits are manifold and appear on different scales. We show the relevance of the benefits from green urban infrastructure on three spatial scales (i.e., city, neighborhood and site-specific scales). We further report on co-benefits and trade-offs between the various services, indicating that a benefit could in turn be detrimental in relation to other functions. The manuscript identifies avenues for further research on the role of green urban infrastructure in different types of cities, climates and social contexts. Our systematic understanding of the bio-physical and social processes defining various services allows targeting stressors that may hamper the provision of green urban infrastructure services in individual behavior as well as in wider planning and environmental management in urban areas.
NASA Astrophysics Data System (ADS)
Aouabdi, Salim; Taibi, Mahmoud; Bouras, Slimane; Boutasseta, Nadir
2017-06-01
This paper describes an approach for identifying localized gear tooth defects, such as pitting, using phase currents measured from an induction machine driving the gearbox. A new anomaly detection tool is introduced, based on the multi-scale entropy (MSE) algorithm with sample entropy (SampEn), which allows correlations in signals to be identified over multiple time scales. Motor current signature analysis (MCSA) is used in conjunction with principal component analysis (PCA), and observed values are compared with those predicted from a model built using nominally healthy data. Simulation results show that the proposed method is able to detect gear tooth pitting in current signals.
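A small sketch of the MSE computation, assuming the usual coarse-graining and SampEn parameters (m = 2, r = 0.2·SD); decision thresholds for flagging pitting would be application-specific:

```python
import numpy as np

def coarse_grain(x, tau):
    """Average non-overlapping windows of length tau (scale tau)."""
    n = len(x) // tau
    return x[: n * tau].reshape(n, tau).mean(axis=1)

def sampen(x, m=2, r_factor=0.2):
    """Sample entropy: -ln(A/B), A and B counting template matches of
    length m+1 and m under a Chebyshev tolerance r. O(K^2) memory: sketch only."""
    r = r_factor * x.std()
    def count_matches(mm):
        templates = np.array([x[i : i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return (np.sum(d <= r) - len(templates)) / 2   # exclude self-matches
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 11)):
    return [sampen(coarse_grain(x, t)) for t in scales]
```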
NASA Astrophysics Data System (ADS)
Ravi, Sathish Kumar; Gawad, Jerzy; Seefeldt, Marc; Van Bael, Albert; Roose, Dirk
2017-10-01
A numerical multi-scale model is being developed to predict the anisotropic macroscopic material response of multi-phase steel. The embedded microstructure is given by a meso-scale Representative Volume Element (RVE), which holds the most relevant features, such as phase distribution, grain orientation and morphology, in sufficient detail to describe the multi-phase behavior of the material. A Finite Element (FE) mesh of the RVE is constructed using statistical information from the individual phases, such as grain size distribution and ODF. The material response of the RVE is obtained for selected loading/deformation modes through numerical FE simulations in Abaqus. For the elasto-plastic response of the individual grains, single crystal plasticity based plastic potential functions are proposed as Abaqus material definitions. The plastic potential functions are derived using the Facet method for the individual phases in the microstructure at the level of single grains. The proposed method is a new modeling framework; the results presented here, in terms of macroscopic flow curves, are based on the building blocks of the approach, and the model will eventually facilitate the construction of an anisotropic yield locus of the underlying multi-phase microstructure derived from a crystal plasticity based framework.
NASA Astrophysics Data System (ADS)
Field, C. B.
2012-12-01
Modeling climate change impacts is challenging for a variety of reasons. Some of these are related to causation. A weather or climate event is rarely the sole cause of an impact, and, for many impacts, social, economic, cultural, or ecological factors may play a larger role than climate. Other challenges are related to outcomes. Consequences of an event are often most severe when several kinds of responses interact, typically in unexpected ways. Many kinds of consequences are difficult to quantify, especially when they include a mix of market, cultural, personal, and ecological values. In addition, scale can be tremendously important. Modest impacts over large areas present very different challenges than severe but very local impacts. Finally, impacts may respond non-linearly to forcing, with behavior that changes qualitatively at one or more thresholds and with unexpected outcomes in extremes. Modeling these potentially complex interactions between drivers and impacts presents one set of challenges. Evaluating the models presents another. At least five kinds of approaches can contribute to the evaluation of impact models designed to provide insights in multi-driver, multi-responder, multi-scale, and extreme-driven contexts, even though none of these approaches is a complete or "silver-bullet" solution. The starting point for much of the evaluation in this space is case studies. Case studies can help illustrate links between processes and scales. They can highlight factors that amplify or suppress sensitivity to climate drivers, and they can suggest the consequences of intervening at different points. While case studies rarely provide concrete evidence about mechanisms, they can help move a mechanistic case from circumstantial to sound. Novel approaches to data collection, including crowd sourcing, can potentially provide tools and the number of relevant examples to develop case studies as statistically robust data sources. A critical condition for progress in this area is the ability to utilize data of uneven quality and standards. Novel approaches to meta-analysis provide other options for taking advantage of diverse case studies. Techniques for summarizing responses across impacts, drivers, and scales can play a huge role in increasing the value of information from case studies. In some cases, expert elicitation may provide alternatives for identifying mechanisms or for interpreting multi-factor drivers or responses. Especially when designed to focus on a well-defined set of observations, a sophisticated elicitation can establish formal confidence limits on responses that are otherwise difficult to constrain. A final possible approach involves a focus on the mechanisms contributing to an impact, rather than the impact itself. Approaches based on quantified mechanisms are especially appealing in the context of models where the number of interactions makes it difficult to intuitively understand the chain of connections from cause to effect, when actors differ in goals or sensitivities, or when scale affects parts of the system differently. With all of these approaches, useful evidence may not conform to traditional levels of statistical confidence. Some of the biggest challenges in taking advantage of the potential tools will involve defining what constitutes a meaningful evaluation.
Blazing Signature Filter: a library for fast pairwise similarity comparisons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Joon-Yong; Fujimoto, Grant M.; Wilson, Ryan
Identifying similarities between datasets is a fundamental task in data mining and has become an integral part of modern scientific investigation. Whether the task is to identify co-expressed genes in large-scale expression surveys or to predict combinations of gene knockouts which would elicit a similar phenotype, the underlying computational task is often a multi-dimensional similarity test. As datasets continue to grow, improvements to the efficiency, sensitivity or specificity of such computation will have broad impacts, as they allow scientists to more completely explore the wealth of scientific data. A significant practical drawback of large-scale data mining is that the vast majority of pairwise comparisons are unlikely to be relevant, meaning that they do not share a signature of interest. It is therefore essential to identify these unproductive comparisons as rapidly as possible and exclude them from more time-intensive similarity calculations. The Blazing Signature Filter (BSF) is a highly efficient pairwise similarity algorithm which enables extensive data mining within a reasonable amount of time. The algorithm transforms datasets into binary metrics, allowing it to utilize computationally efficient bit operators and provide a coarse measure of similarity. As a result, the BSF can scale to high dimensionality and rapidly filter out unproductive pairwise comparisons. Two bioinformatics applications of the tool are presented to demonstrate the ability to scale to billions of pairwise comparisons and the usefulness of this approach.
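A sketch of the core trick, assuming a simple above-median binarization (the actual binary metrics used by the BSF may differ):

```python
import numpy as np

def to_signature(profile):
    """Binarize a profile (1 = 'high' feature) and pack into uint8 words."""
    bits = profile > np.median(profile)
    return np.packbits(bits)

def shared_bits(sig_a, sig_b):
    """Popcount of the bitwise AND: a coarse similarity between two signatures."""
    return int(np.unpackbits(np.bitwise_and(sig_a, sig_b)).sum())

rng = np.random.default_rng(1)
genes = rng.normal(size=(10_000, 512))            # 10k expression profiles
sigs = np.packbits(genes > np.median(genes, axis=1, keepdims=True), axis=1)
# Screen one query against all rows with vectorized bit operations:
counts = np.unpackbits(np.bitwise_and(sigs, sigs[0]), axis=1).sum(axis=1)
candidates = np.flatnonzero(counts > 300)          # only these go to full scoring
```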
NASA Astrophysics Data System (ADS)
Zhang, Z.; Tian, F.; Hu, H.; Yang, P.
2014-03-01
A multi-scale, multi-technique study was conducted to measure evapotranspiration and its components in a cotton field under mulched drip irrigation conditions in northwestern China. Three measurement techniques at different scales were used: a photosynthesis system (leaf scale), sap flow (plant scale), and eddy covariance (field scale). The experiment was conducted from July to September 2012. To upscale the evapotranspiration from the leaf to plant scale, an approach that incorporated the canopy structure and the relationships between sunlit and shaded leaves was proposed. To upscale the evapotranspiration from the plant to field scale, an approach based on the transpiration per unit leaf area was adopted and modified to incorporate the temporal variability in the relationship between leaf area and stem diameter. At the plant scale, the estimate of the transpiration based on the photosynthesis system with upscaling was slightly higher (18%) than that obtained by sap flow. At the field scale, the estimates of transpiration derived from sap flow with upscaling and eddy covariance showed reasonable consistency during the cotton's open-boll growth stage, during which soil evaporation can be neglected. The results indicate that the proposed upscaling approaches are reasonable and valid. Based on the measurements and upscaling approaches, evapotranspiration components were analyzed for a cotton field under mulched drip irrigation. During the two analyzed sub-periods in July and August, evapotranspiration rates were 3.94 and 4.53 mm day⁻¹, respectively. The fraction of transpiration to evapotranspiration reached 87.1% before drip irrigation and 82.3% after irrigation. The high fraction of transpiration over evapotranspiration was principally due to the mulched film above the drip pipe, low soil water content in the inter-film zone, well-closed canopy, and high water requirement of the crop.
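A minimal sketch of the plant-to-field step, assuming a hypothetical allometric leaf-area-from-stem-diameter relation in place of the paper's time-varying calibration:

```python
import numpy as np

def leaf_area_from_stem(diam_mm, a=50.0, b=1.8):
    """Hypothetical allometry: leaf area (cm^2) from stem diameter (mm)."""
    return a * diam_mm ** b

def field_transpiration(sap_flow_g_h, diam_mm, lai):
    """Field transpiration (mm/h) from per-plant sap flow (g/h)."""
    la_cm2 = leaf_area_from_stem(np.asarray(diam_mm))
    per_leaf_area = np.asarray(sap_flow_g_h) / la_cm2     # g h^-1 cm^-2 of leaf
    mean_rate = per_leaf_area.mean()
    # 1 g of water spread over 1 cm^2 equals 10 mm of depth; LAI converts
    # leaf-area-based flux to ground-area-based flux.
    return mean_rate * lai * 10.0

print(field_transpiration([150, 180, 160], [9, 11, 10], lai=3.2))
```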
NASA Astrophysics Data System (ADS)
Zhang, Z.; Tian, F.; Hu, H. C.; Hu, H. P.
2013-11-01
A multi-scale, multi-technique study was conducted to measure evapotranspiration and its components in a cotton field under mulched drip irrigation conditions in northwestern China. Three measurement techniques at different scales were used: photosynthesis system (leaf scale), sap flow (plant scale), and eddy covariance (field scale). The experiment was conducted from July to September 2012. To upscale the evapotranspiration from the leaf to the plant scale, an approach that incorporated the canopy structure and the relationships between sunlit and shaded leaves was proposed. To upscale the evapotranspiration from the plant to the field scale, an approach based on the transpiration per unit leaf area was adopted and modified to incorporate the temporal variability in the relationships between leaf area and stem diameter. At the plant scale, the estimate of the transpiration based on the photosynthesis system with upscaling was slightly higher (18%) than that obtained by sap flow. At the field scale, the estimates of transpiration derived from sap flow with upscaling and eddy covariance showed reasonable consistency during the cotton's open-boll growth stage, when soil evaporation can be neglected. The results indicate that the upscaling approaches are reasonable and valid. Based on the measurements and upscaling approaches, evapotranspiration components were analyzed under mulched drip irrigation. During the two analyzed sub-periods in July and August, evapotranspiration rates were 3.94 and 4.53 mm day⁻¹, respectively. The fraction of transpiration to evapotranspiration reached 87.1% before drip irrigation and 82.3% after irrigation. The high fraction of transpiration over evapotranspiration was principally due to the mulched film above the drip pipe, low soil water content in the inter-film zone, well-closed canopy, and high water requirement of the crop.
A multi-scale spatial approach to address environmental effects of small hydropower development.
McManamay, Ryan A; Samu, Nicole; Kao, Shih-Chieh; Bevelhimer, Mark S; Hetrick, Shelaine C
2015-01-01
Hydropower development continues to grow worldwide in developed and developing countries. While the ecological and physical responses to dam construction have been well documented, translating this information into planning for hydropower development is extremely difficult. Very few studies have conducted environmental assessments to guide site-specific or widespread hydropower development. Herein, we propose a spatial approach for estimating the environmental effects of hydropower development at multiple scales, as opposed to individual site-by-site assessments (e.g., environmental impact assessment). Because the complex, process-driven effects of future hydropower development may be uncertain or, at best, limited by available information, we invested considerable effort in describing novel approaches to represent environmental concerns using spatial data and in developing the spatial footprint of hydropower infrastructure. We then use two case studies in the US, one at the scale of the conterminous US and another within two adjoining river basins, to examine how environmental concerns can be identified and related to areas of varying energy capacity. We use combinations of reserve-design planning and multi-metric ranking to visualize tradeoffs among environmental concerns and potential energy capacity. Spatial frameworks, like the one presented, are not meant to replace more in-depth environmental assessments, but to identify information gaps and measure the sustainability of multi-development scenarios so as to inform policy decisions at the basin or national level. Most importantly, the approach should foster discussions among environmental scientists and stakeholders regarding solutions to optimize energy development and environmental sustainability.
Su, Xianli; Wei, Ping; Li, Han; Liu, Wei; Yan, Yonggao; Li, Peng; Su, Chuqi; Xie, Changjun; Zhao, Wenyu; Zhai, Pengcheng; Zhang, Qingjie; Tang, Xinfeng; Uher, Ctirad
2017-05-01
Considering that only about one third of the world's energy consumption is effectively utilized for functional uses, with the remainder dissipated as waste heat, thermoelectric (TE) materials, which offer a direct and clean thermal-to-electric conversion pathway, have generated tremendous worldwide interest. The last two decades have witnessed remarkable development in TE materials. This Review summarizes the efforts devoted to the study of non-equilibrium synthesis of TE materials with multi-scale structures, their transport behavior, and areas of application. Studies that work towards the ultimate goal of developing highly efficient TE materials possessing multi-scale architectures are highlighted, encompassing the optimization of TE performance via engineering the structures with different dimensional aspects spanning from the atomic and molecular scales, to nanometer sizes, and to the mesoscale. In consideration of the practical applications of high-performance TE materials, the non-equilibrium approaches offer fast and controllable fabrication of multi-scale microstructures, and their scale-up to industrial-size manufacturing is emphasized here. Finally, the design of two integrated power-generating TE systems is described, a solar thermoelectric-photovoltaic hybrid system and a vehicle waste heat harvesting system, which represent perhaps the most important applications of thermoelectricity in the energy conversion area.
Multi-level discriminative dictionary learning with application to large scale image classification.
Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua
2015-10-01
The sparse coding technique has shown flexibility and capability in image representation and analysis, and is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of the task (such as discrimination for classification tasks) into dictionary learning is effective for improving accuracy. However, traditional supervised dictionary learning methods suffer from high computational complexity when dealing with a large number of categories, making them less satisfactory in large-scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large-scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture information at different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large-scale image classification.
A Multi-Scale Algorithm for Graffito Advertisement Detection from Images of Real Estate
NASA Astrophysics Data System (ADS)
Yang, Jun; Zhu, Shi-Jiao
There is a significant need to detect and extract graffito advertisements embedded in housing images automatically. However, it is hard to separate the advertisement region well, since housing images generally have complex backgrounds. In this paper, a detection algorithm which uses multi-scale Gabor filters to identify graffito regions is proposed. Firstly, multi-scale Gabor filters with different orientations are applied to the housing images. The approach then uses the resulting frequency data to find likely graffito regions via the relationships between different channels, exploiting the ability of the different filters to solve the detection problem with low computational effort. Lastly, the method is tested on several real estate images with embedded graffito advertisements to verify its robustness and efficiency. The experiments demonstrate that graffito regions can be detected quite well.
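A minimal sketch of a multi-scale, multi-orientation Gabor filter bank with OpenCV; the kernel sizes, filter parameters, input path and the response-energy heuristic for flagging candidate regions are illustrative assumptions, not the paper's exact algorithm:

```python
# Multi-scale Gabor filter bank: three scales x four orientations.
import cv2
import numpy as np

img = cv2.imread("house.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)

responses = []
for ksize in (11, 21, 31):                        # three spatial scales
    for theta in np.arange(0, np.pi, np.pi / 4):  # four orientations
        kern = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=theta,
                                  lambd=10.0, gamma=0.5, psi=0)
        responses.append(cv2.filter2D(img, cv2.CV_32F, kern))

# Combine channels: pixels with high joint filter energy are candidates.
energy = np.mean([r ** 2 for r in responses], axis=0)
mask = energy > energy.mean() + 2 * energy.std()
```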
Time-lapse photogrammetry in geomorphic studies
NASA Astrophysics Data System (ADS)
Eltner, Anette; Kaiser, Andreas
2017-04-01
Image based approaches to reconstruct the earth surface (Structure from Motion - SfM) are becoming established as a standard technology for high resolution topographic data. This is due, among other advantages, to the comparative ease of use and flexibility of data generation. Furthermore, the increased spatial resolution has led to its implementation in a vast range of applications from sub-mm to tens-of-km scale. Almost fully automatic calculation of referenced digital elevation models allows for a significant increase of temporal resolution as well, potentially up to sub-second scales. For this, the setup of a time-lapse multi-camera system is necessary, and different aspects need to be considered: The camera array has to be temporarily stable, or potential movements need to be compensated by temporarily stable reference targets/areas. The stability of the internal camera geometry has to be considered due to a usually significantly lower number of images of the scene, and thus less redundancy for parameter estimation, compared to more common SfM applications. Depending on the speed of surface change, synchronisation has to be very accurate. Due to the usual application in the field, changing environmental conditions important for lighting and visual range are also crucial factors to keep in mind. Besides these important considerations, time-lapse photogrammetry holds much potential. The integration of multi-sensor systems, e.g. using thermal cameras, enables the potential detection of processes not visible in RGB images alone. Furthermore, the implementation of low-cost sensors allows for a significant increase of areal coverage and their setup at locations where a loss of the system cannot be ruled out. The usage of micro-computers offers smart camera triggering, e.g. acquiring images with increased frequency controlled by a rainfall-triggered sensor. In addition, these micro-computers can enable on-site data processing, e.g. recognition of increased surface movement, and thus might be used as a warning system in the case of natural hazards. A large variety of applications are suited to time-lapse photogrammetry, i.e. change detection of all sorts, e.g. volumetric alterations, movement tracking or roughness changes. The multi-camera systems can be used for slope investigations, soil studies, glacier observation, snow cover measurement, volcanic surveillance or plant growth monitoring. A conceptual workflow is introduced highlighting the limits and potentials of time-lapse photogrammetry.
EPA RESEARCH HIGHLIGHTS -- MODELS-3/CMAQ OFFERS COMPREHENSIVE APPROACH TO AIR QUALITY MODELING
Regional and global coordinated efforts are needed to address air quality problems that are growing in complexity and scope. Models-3 CMAQ contains a community multi-scale air quality modeling system for simulating urban to regional scale pollution problems relating to troposphe...
Transition between inverse and direct energy cascades in multiscale optical turbulence.
Malkin, V M; Fisch, N J
2018-03-01
Multiscale turbulence naturally develops and plays an important role in many fluid, gas, and plasma phenomena. Statistical models of multiscale turbulence usually employ the Kolmogorov hypotheses of spectral locality of interactions (meaning that interactions primarily occur between pulsations of comparable scales) and scale-invariance of turbulent pulsations. However, optical turbulence described by the nonlinear Schrödinger equation exhibits breaking of both Kolmogorov locality and scale-invariance. A weaker form of spectral locality that holds for multi-scale optical turbulence enables a derivation of simplified evolution equations that reduce the problem to single-scale modeling. We present the derivation of these equations for Kerr media with random inhomogeneities. Then, we find the analytical solution that exhibits a transition between inverse and direct energy cascades in optical turbulence.
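For reference, a standard dimensionless form of the cubic nonlinear Schrödinger equation for a Kerr medium with a random potential modelling the inhomogeneities is (the precise normalisation and sign conventions used by the authors may differ):

\[ i\,\partial_t \psi + \nabla^2 \psi + |\psi|^2 \psi = V(\mathbf{x})\,\psi, \]

where \(\psi\) is the complex field envelope, \(t\) is the evolution (propagation) variable, and \(V(\mathbf{x})\) represents the random inhomogeneities.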
Urban structure analysis of mega city Mexico City using multisensoral remote sensing data
NASA Astrophysics Data System (ADS)
Taubenböck, H.; Esch, T.; Wurm, M.; Thiel, M.; Ullmann, T.; Roth, A.; Schmidt, M.; Mehl, H.; Dech, S.
2008-10-01
The mega city Mexico City is ranked the third largest urban agglomeration to date around the globe. Its large extent, together with dynamic urban transformation and sprawl processes, leads to a lack of up-to-date, area-wide data and information to measure, monitor, and understand the urban situation. This paper focuses on the capabilities of multisensoral remotely sensed data to provide a broad range of products derived from one scientific field - remote sensing - to support urban management and planning. Therefore, optical data sets from the Landsat and Quickbird sensors as well as radar data from the Shuttle Radar Topography Mission (SRTM) and the TerraSAR-X sensor are utilised. The analyses based on these multi-sensoral data sets are scale-dependent. On the one hand, change detection at the city level using the derived urban footprints enables monitoring and assessment of spatiotemporal urban transformation, the areal dimension of urban sprawl, its direction, and the built-up density distribution over time. On the other hand, structural characteristics of an urban landscape - the alignment and types of buildings, streets and open spaces - provide insight into the very detailed physical pattern of urban morphology at a finer scale. The results show high accuracies for the derived multi-scale products. The multi-scale analysis allows urban processes to be quantified, leading to an assessment and interpretation of urban trends.
NASA Astrophysics Data System (ADS)
Sreekanth, J.; Moore, Catherine
2018-04-01
The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins is typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models, as well as from parent models to child models, in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that, while the salient small-scale features influencing larger-scale predictions are transferred back to the larger scale, this does not require the live coupling of models. The method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models built for the appropriate spatial and temporal scales, within a stochastic framework, while removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large-scale aquifer injection scheme in Australia.
NASA Astrophysics Data System (ADS)
Wu, Min
2016-07-01
The development of anti-fibrotic therapies has recently become more and more urgent in a diversity of diseases, such as pulmonary, renal and liver fibrosis [1,2], as well as in malignant tumor growth [3]. As reviewed by Ben Amar and Bianca [4], various theoretical, experimental and in-silico models have been developed to understand the fibrosis process, where the implications for therapeutic strategies have also been frequently demonstrated (e.g., [5-7]). In [4], these models are analyzed and sorted according to their approaches, and at the end of [4] a unified multi-scale approach is proposed to understand fibrosis. Since one of the major purposes of extensive modeling of fibrosis is to shed light on therapeutic strategies, theoretical, experimental and in-silico studies of anti-fibrosis therapies should be conducted more intensively.
Systematic review of empowerment measures in health promotion.
Cyril, Sheila; Smith, Ben J; Renzaho, Andre M N
2016-12-01
Empowerment, a multi-level construct comprising individual, community and organizational domains, is a fundamental value and goal in health promotion. While a range of scales have been developed for the measurement of empowerment, the qualities of these have not been rigorously assessed. The aim of this study was to evaluate the measurement properties of quantitative empowerment scales and their applicability in health promotion programs. A systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines was done to evaluate empowerment scales across three dimensions: item development, reliability and validity. This was followed by assessment of measurement properties using a ratings scale with criteria addressing an a priori explicit theoretical framework, assessment of content validity, internal consistency and factor analysis to test structural validity. Of the 20 studies included in this review, only 8 (40%) used literature reviews, expert panels and empirical studies to develop scale items and 9 (45%) of studies fulfilled ≥5 criteria on the ratings scale. Two studies (10%) measured community empowerment and one study measured organizational empowerment, the rest (85%) measured individual empowerment. This review highlights important gaps in the measurement of community and organizational domains of empowerment using quantitative scales. A priority for future empowerment research is to investigate and explore approaches such as mixed methods to enable adequate measurement of empowerment across all three domains. This would help health promotion practitioners to effectively measure empowerment as a driver of change and an outcome in health promotion programs. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Zimoń, M. J.; Prosser, R.; Emerson, D. R.; Borg, M. K.; Bray, D. J.; Grinberg, L.; Reese, J. M.
2016-11-01
Filtering of particle-based simulation data can lead to reduced computational costs and enable more efficient information transfer in multi-scale modelling. This paper compares the effectiveness of various signal processing methods to reduce numerical noise and capture the structures of nano-flow systems. In addition, a novel combination of these algorithms is introduced, showing the potential of hybrid strategies to further improve the de-noising performance for time-dependent measurements. The methods were tested on velocity and density fields obtained from simulations performed with molecular dynamics and dissipative particle dynamics. Comparisons between the algorithms are given in terms of performance, quality of the results and sensitivity to the choice of input parameters. The results provide useful insights into strategies for the analysis of particle-based data and the reduction of computational costs in obtaining ensemble solutions.
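As a toy illustration of the kind of smoothing compared in the paper (several signal-processing methods and hybrids are evaluated there; the Savitzky-Golay filter and all parameters below are illustrative assumptions, not the authors' choices):

```python
# De-noise a particle-averaged velocity signal with a Savitzky-Golay filter.
import numpy as np
from scipy.signal import savgol_filter

t = np.linspace(0.0, 1.0, 500)
u_noisy = (np.sin(2 * np.pi * t)
           + 0.3 * np.random.default_rng(1).normal(size=t.size))

# Smooth the time-dependent measurement while preserving low-order structure.
u_smooth = savgol_filter(u_noisy, window_length=51, polyorder=3)
```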
Voluntary EMG-to-force estimation with a multi-scale physiological muscle model
2013-01-01
Background: EMG-to-force estimation based on muscle models for voluntary contraction has many applications in human motion analysis. The so-called Hill model is recognized as a standard model for this practical use. However, it is a phenomenological model whereby muscle activation, force-length and force-velocity properties are considered independently. Perreault reported that Hill modeling errors were large for different firing frequencies, levels of activation and speeds of contraction, possibly due to the lack of coupling between activation and force-velocity properties. In this paper, we discuss EMG-force estimation with a multi-scale physiology-based model, which has a link to underlying cross-bridge dynamics. Unlike the Hill model, the proposed method provides dual dynamics of recruitment and calcium activation. Methods: The ankle torque was measured during plantar flexion along with EMG measurements of the medial gastrocnemius (GAS) and soleus (SOL). In addition to a Hill representation of the passive elements, three models of the contractile parts were compared. Using common EMG signals during isometric contraction in four able-bodied subjects, torque was estimated by the linear Hill model, the nonlinear Hill model and the multi-scale physiological model that refers to Huxley theory. The comparison was made on a normalized scale relative to maximum voluntary contraction. Results: The estimates obtained with the multi-scale model showed the best performance in both fast-short and slow-long contractions in randomized tests for all four subjects. The RMS errors were improved with the nonlinear Hill model compared to the linear Hill model; however, it showed limitations in accounting for different speeds of contraction. The average error was 16.9% with the linear Hill model and 9.3% with the modified Hill model. In contrast, the error of the multi-scale model was 6.1%, while maintaining a uniform estimation performance in both fast and slow contraction schemes. Conclusions: We introduced a novel approach to EMG-force estimation based on a multi-scale physiology model integrating a Hill approach for the passive elements and microscopic cross-bridge representations for the contractile element. The experimental evaluation highlights estimation improvements, especially over a larger range of contraction conditions, by integrating the neural activation frequency property and the force-velocity relationship through cross-bridge dynamics. PMID:24007560
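For orientation, a minimal sketch of the linear Hill-type baseline that the multi-scale model is compared against (not the cross-bridge model itself); the activation filtering, gain and sampling rate are illustrative assumptions:

```python
# Linear Hill-type isometric EMG-to-torque estimate:
# rectify EMG, low-pass filter into an activation signal, scale to torque.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                                        # EMG sampling rate [Hz]
emg = np.random.default_rng(2).normal(size=5000)   # stand-in EMG trace

b, a = butter(2, 2.0 / (fs / 2))                   # 2 Hz low-pass -> activation dynamics
activation = filtfilt(b, a, np.abs(emg))           # rectify, then filter

torque_max = 100.0                                 # torque at maximum voluntary contraction
torque_est = activation / activation.max() * torque_max
```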
NASA Astrophysics Data System (ADS)
Saverskiy, Aleksandr Y.; Dinca, Dan-Cristian; Rommel, J. Martin
The Intra-Pulse Multi-Energy (IPME) method of material discrimination mitigates the main disadvantages of the traditional "interlaced" approach: ambiguity caused by sampling different regions of the cargo, and reduction of the effective scanning speed. A novel concept of creating multi-energy probing pulses using a standing-wave structure allows a constant energy spectrum to be maintained while changing the time duration of each sub-pulse, and thus enables adaptive cargo inspection. Depending on the cargo density, the dose delivered to the inspected object is optimized for best material discrimination, maximum material penetration, or lowest dose to cargo. A model based on Monte Carlo simulation and experimental reference points was developed for the optimization of inspection conditions.
Engineering Specifications derived from Science Requirements
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Arnold, William; Bevan, Ryan M.; Smith, W. Scott; Kirk, Charles S.; Postman, Marc
2013-01-01
Advanced Mirror Technology Development (AMTD) is a multi-year effort to systematically mature to TRL-6 the critical technologies needed to produce 4-m or larger flight-qualified UVOIR mirrors by 2018 so that a viable mission can be considered by the 2020 Decadal Review. This technology must enable missions capable of both general astrophysics & ultra-high contrast observations of exoplanets. To accomplish our objective, we use a science-driven systems engineering approach. We mature technologies required to enable the highest priority science AND result in a high-performance low-cost low-risk system.
Biointerface dynamics--Multi scale modeling considerations.
Pajic-Lijakovic, Ivana; Levic, Steva; Nedovic, Viktor; Bugarski, Branko
2015-08-01
The irreversible nature of matrix structural changes around immobilized cell aggregates caused by cell expansion is considered within Ca-alginate microbeads. It is related to various effects: (1) cell-bulk surface effects (cell-polymer mechanical interactions) and cell surface-polymer surface effects (cell-polymer electrostatic interactions) at the bio-interface, (2) polymer-bulk volume effects (polymer-polymer mechanical and electrostatic interactions) within the perturbed boundary layers around the cell aggregates, (3) cumulative surface and volume effects within parts of the microbead, and (4) macroscopic effects within the microbead as a whole, based on multi-scale modeling approaches. All modeling levels are discussed at two time scales, i.e. a long time scale (cell growth time) and a short time scale (cell rearrangement time). Matrix structural changes result in resistance stress generation, which has a feedback impact on: (1) single and collective cell migrations, (2) cell deformation and orientation, (3) decrease of cell-to-cell separation distances, and (4) cell growth. Herein, an attempt is made to discuss and connect the various multi-scale modeling approaches over a range of time and space scales which have been proposed in the literature, in order to shed further light on this complex cause-consequence phenomenon which induces the anomalous nature of energy dissipation during the structural changes of cell aggregates and matrix, quantified by the damping coefficients (the orders of the fractional derivatives). Deeper insight into the partial disintegration of the matrix within the boundary layers is useful for understanding and minimizing polymer matrix resistance stress generation within the interface and, on that basis, optimizing cell growth. Copyright © 2015 Elsevier B.V. All rights reserved.
Dilts, Thomas E.; Weisberg, Peter J.; Leitner, Phillip; Matocq, Marjorie D.; Inman, Richard D.; Nussear, Ken E.; Esque, Todd C.
2016-01-01
Conservation planning and biodiversity management require information on landscape connectivity across a range of spatial scales from individual home ranges to large regions. Reduction in landscape connectivity due to changes in land-use or development is expected to act synergistically with alterations to habitat mosaic configuration arising from climate change. We illustrate a multi-scale connectivity framework to aid habitat conservation prioritization in the context of changing land use and climate. Our approach, which builds upon the strengths of multiple landscape connectivity methods including graph theory, circuit theory and least-cost path analysis, is here applied to the conservation planning requirements of the Mohave ground squirrel. The distribution of this California threatened species, as for numerous other desert species, overlaps with the proposed placement of several utility-scale renewable energy developments in the American Southwest. Our approach uses information derived at three spatial scales to forecast potential changes in habitat connectivity under various scenarios of energy development and climate change. By disentangling the potential effects of habitat loss and fragmentation across multiple scales, we identify priority conservation areas for both core habitat and critical corridor or stepping stone habitats. This approach is a first step toward applying graph theory to analyze habitat connectivity for species with continuously-distributed habitat, and should be applicable across a broad range of taxa.
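A minimal sketch of the least-cost-path ingredient of such a framework, using networkx on a toy resistance raster (the surface and endpoints are assumptions; the paper's framework also combines graph- and circuit-theoretic measures not shown here):

```python
# Least-cost path across a habitat resistance grid via Dijkstra.
import networkx as nx
import numpy as np

resistance = np.random.default_rng(3).uniform(1.0, 10.0, size=(50, 50))

G = nx.grid_2d_graph(50, 50)           # 4-connected raster graph
for u, v in G.edges():
    # Cost of moving between neighbouring cells = mean cell resistance.
    G[u][v]["weight"] = 0.5 * (resistance[u] + resistance[v])

path = nx.shortest_path(G, source=(0, 0), target=(49, 49), weight="weight")
cost = nx.shortest_path_length(G, (0, 0), (49, 49), weight="weight")
```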
Hybrid Particle-Continuum Numerical Methods for Aerospace Applications
2011-01-01
may require kinetic analysis. Another possible option that will enable high-mass Mars missions is supersonic retro-propulsion [17], where a jet is...exploration missions [15]. 2.3 Plumes Another class of multi-scale flows of interest is rocket exhaust plumes. Efficient and accurate predictions of...atmospheric exhaust plumes at high altitudes are necessary to ensure that the chemical rocket maintains efficiency while also assuring that the vehicle heating
NASA Astrophysics Data System (ADS)
Arshadi, Amir
Image-based simulation of complex materials is a very important tool for understanding their mechanical behavior and an effective tool for the successful design of composite materials. In this thesis an image-based multi-scale finite element approach is developed to predict the mechanical properties of asphalt mixtures. In this approach the "up-scaling" and homogenization of each scale to the next is carefully designed to improve accuracy. In addition to this multi-scale efficiency, this study introduces an approach for the consideration of particle contacts at each of the scales in which mineral particles exist. One of the most important pavement distresses, which seriously affects pavement performance, is fatigue cracking. As this cracking generally takes place in the binder phase of the asphalt mixture, the binder fatigue behavior is assumed to be one of the main factors influencing the overall pavement fatigue performance. It is also known that aggregate gradation, mixture volumetric properties, and filler type and concentration can affect damage initiation and progression in asphalt mixtures. This study was conducted to develop a tool to characterize the damage properties of asphalt mixtures at all scales. In the present study the viscoelastic continuum damage model is implemented in the well-known finite element software ABAQUS via the user material subroutine (UMAT) in order to simulate the state of damage in the binder phase under repeated uniaxial sinusoidal loading. The inputs are based on experimentally derived measurements of the binder properties. For the mastic and mortar scales, artificial 2-dimensional images were generated and used to characterize the properties of those scales. Finally, 2D scanned images of asphalt mixtures are used to study the asphalt mixture fatigue behavior under loading. In order to validate the proposed model, the experimental test results and the simulation results were compared. Indirect tensile fatigue tests were conducted on asphalt mixture samples. A comparison between the experimental results and the simulation results shows that the model developed in this study is capable of predicting the effect of asphalt binder properties and aggregate micro-structure on the mechanical behavior of asphalt concrete under loading.
Large-scale deep learning for robotically gathered imagery for science
NASA Astrophysics Data System (ADS)
Skinner, K.; Johnson-Roberson, M.; Li, J.; Iscar, E.
2016-12-01
With the explosion of computing power, the intelligence and capability of mobile robotics have dramatically increased over the last two decades. Today, we can deploy autonomous robots to achieve observations in a variety of environments ripe for scientific exploration. These platforms are capable of gathering a volume of data previously unimaginable. Additionally, optical cameras, driven by mobile phones and consumer photography, have rapidly improved in size, power consumption, and quality, making their deployment cheaper and easier. Finally, in parallel we have seen the rise of large-scale machine learning approaches, particularly deep neural networks (DNNs), increasing the quality of the semantic understanding that can be automatically extracted from optical imagery. In concert, these advances enable new science using a combination of machine learning and robotics. This work will discuss the application of new low-cost high-performance computing approaches and the associated software frameworks to enable scientists to rapidly extract useful science data from millions of robotically gathered images. The automated analysis of imagery on this scale opens up new avenues of inquiry unavailable using more traditional manual or semi-automated approaches. We will use a large archive of millions of benthic images gathered with an autonomous underwater vehicle to demonstrate how these tools enable new scientific questions to be posed.
Application of Open Source Technologies for Oceanographic Data Analysis
NASA Astrophysics Data System (ADS)
Huang, T.; Gangl, M.; Quach, N. T.; Wilson, B. D.; Chang, G.; Armstrong, E. M.; Chin, T. M.; Greguska, F.
2015-12-01
NEXUS is a data-intensive analysis solution developed with a new approach for handling science data that enables large-scale data analysis by leveraging open source technologies such as Apache Cassandra, Apache Spark, Apache Solr, and Webification. NEXUS has been selected to provide on-the-fly time-series and histogram generation for the Soil Moisture Active Passive (SMAP) mission for Level 2 and Level 3 Active, Passive, and Active Passive products. It also provides an on-the-fly data subsetting capability. NEXUS is designed to scale horizontally, enabling it to handle massive amounts of data in parallel. It takes a new approach to managing time- and geo-referenced array data by dividing data artifacts into chunks and storing them in an industry-standard, horizontally scaled NoSQL database. This approach enables the development of scalable data analysis services that can infuse and leverage the elastic computing infrastructure of the Cloud. It is equipped with a high-performance geospatial and indexed data search solution, coupled with a high-performance data Webification solution free from file I/O bottlenecks, as well as a high-performance, in-memory data analysis engine. In this talk, we will focus on the recently funded AIST 2014 project using NEXUS as the core for an oceanographic anomaly detection service and web portal. We call it OceanXtremes.
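A minimal sketch of the tile-and-aggregate idea behind on-the-fly time-series generation (chunks of a gridded field that could be stored as independent records, then averaged per query); the chunk size and toy field are assumptions, not NEXUS's actual API:

```python
# Split a (time, lat, lon) array into spatial tiles and assemble an
# area-averaged time series from the per-tile aggregates.
import numpy as np

sst = np.random.default_rng(4).normal(size=(365, 180, 360))  # toy daily field

def chunk_means(data, tile=30):
    """Yield ((lat0, lon0), per-tile daily mean) for each spatial tile."""
    _, nlat, nlon = data.shape
    for i in range(0, nlat, tile):
        for j in range(0, nlon, tile):
            yield (i, j), data[:, i:i + tile, j:j + tile].mean(axis=(1, 2))

# A bounding-box time series is just an average over the covered tiles.
series = np.mean([s for _, s in chunk_means(sst)], axis=0)
```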
NASA Astrophysics Data System (ADS)
Hinnrichs, Michele
2011-06-01
Recent advances in micro-optical element fabrication using gray scale technology have opened up the opportunity to create simultaneous multi-spectral imaging with fine-structure diffractive lenses. This paper discusses an approach that uses diffractive optical lenses configured in an array (lenslet array) and placed in close proximity to the focal plane array, which enables a small, compact, simultaneous multispectral imaging camera [1]. The lenslet array is designed so that all lenslets have a common focal length, with each lenslet tuned for a different wavelength. The number of simultaneous spectral images is determined by the number of individually configured lenslets in the array. The number of spectral images can be increased by a factor of 2 when using the array with a dual-band focal plane array (MWIR/LWIR) by exploiting multiple diffraction orders. In addition, modulation of the focal length of the lenslet array with piezoelectric actuation will enable spectral bin fill-in, allowing additional spectral coverage while giving up simultaneity. Different lenslet array spectral imaging concept designs are presented in this paper along with a unique concept for prefiltering the radiation focused on the detector. This approach to spectral imaging has applications in the detection of chemical agents in both aerosolized form and as a liquid on a surface. It can also be applied to the detection of weaponized biological agents and to IED detection in various forms, from manufacturing to deployment and post-detection during forensic analysis.
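As context for the wavelength tuning of the lenslets, the focal length of a diffractive lens scales inversely with wavelength, a standard property (the specific design values of this camera are not given here):

\[ f(\lambda) = \frac{f_0 \lambda_0}{\lambda}, \]

so the zone geometry of each lenslet can be chosen such that its focal length at its own design wavelength matches the common value shared across the array.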
A Multi-Agent Framework for Packet Routing in Wireless Sensor Networks
Ye, Dayong; Zhang, Minjie; Yang, Yun
2015-01-01
Wireless sensor networks (WSNs) have been widely investigated in recent years. One of the fundamental issues in WSNs is packet routing, because in many application domains, packets have to be routed from source nodes to destination nodes as quickly and as energy-efficiently as possible. To address this issue, a large number of routing approaches have been proposed. Although every existing routing approach has advantages, each also has disadvantages. In this paper, a multi-agent framework is proposed that can assist existing routing approaches to improve their routing performance. This framework enables each sensor node to build a cooperative neighbour set based on past routing experience. Such cooperative neighbours, in turn, can help the sensor to effectively relay packets in the future. This framework is independent of existing routing approaches and can be used to assist many of them. Simulation results demonstrate the good performance of this framework in terms of four metrics: average delivery latency, successful delivery ratio, number of live nodes and total sensing coverage. PMID:25928063
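A minimal sketch of maintaining a cooperative-neighbour set from past routing experience; the moving-average scoring rule and threshold are illustrative assumptions, not the paper's exact mechanism:

```python
# Each node scores neighbours by observed delivery success and keeps
# the well-scoring ones as its cooperative-neighbour set.
from collections import defaultdict

class Node:
    def __init__(self):
        self.score = defaultdict(float)   # neighbour id -> cooperation score

    def record(self, neighbour, delivered, alpha=0.2):
        # Exponential moving average of delivery success (0 or 1).
        self.score[neighbour] += alpha * (float(delivered) - self.score[neighbour])

    def cooperative_neighbours(self, threshold=0.6):
        return {n for n, s in self.score.items() if s >= threshold}

n = Node()
n.record("s7", True); n.record("s7", True); n.record("s3", False)
print(n.cooperative_neighbours())
```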
A self-consistent first-principle based approach to model carrier mobility in organic materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meded, Velimir; Friederich, Pascal; Symalla, Franz
2015-12-31
Transport through thin organic amorphous films, utilized in OLEDs and OPVs, has been a challenge to model by using ab-initio methods. Charge carrier mobility depends strongly on the disorder strength and reorganization energy, both of which are significantly affected by the details of the environment of each molecule. Here we present a multi-scale approach to describe carrier mobility in which the materials morphology is generated using DEPOSIT, a Monte Carlo based atomistic simulation approach, or, alternatively, by molecular dynamics calculations performed with GROMACS. From this morphology we extract the material-specific hopping rates, as well as the on-site energies, using a fully self-consistent embedding approach to compute the electronic structure parameters, which are then used in an analytic expression for the carrier mobility. We apply this strategy to compute the carrier mobility for a set of widely studied molecules and obtain good agreement between experiment and theory varying over several orders of magnitude in the mobility without any freely adjustable parameters. The work focuses on the quantum mechanical step of the multi-scale workflow, explains the concept along with the recently published workflow optimization, which combines density functional with semi-empirical tight binding approaches. This is followed by a discussion of the analytic formula and its agreement with established percolation fits as well as kinetic Monte Carlo numerical approaches. Finally, we sketch a unified multi-disciplinary approach that integrates materials science simulation and high performance computing, developed within the EU project MMM@HPC.
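For context, hopping transport in amorphous organics is commonly described with a Marcus-type rate; whether this exact expression matches the paper's workflow is an assumption, and the parameter values below are illustrative:

```python
# Marcus hopping rate between two sites, SI units throughout.
import numpy as np

HBAR = 1.054571817e-34   # J s
KB = 1.380649e-23        # J/K

def marcus_rate(J, lam, dE, T=300.0):
    """Rate for electronic coupling J, reorganization energy lam and
    site-energy difference dE (all in joules) at temperature T."""
    return (2 * np.pi / HBAR) * J**2 / np.sqrt(4 * np.pi * lam * KB * T) * np.exp(
        -(dE + lam) ** 2 / (4 * lam * KB * T)
    )

eV = 1.602176634e-19
print(marcus_rate(J=1e-3 * eV, lam=0.3 * eV, dE=0.05 * eV))
```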
Confirmatory Factor Analysis of the Hewitt-Multidimensional Perfectionism Scale
ERIC Educational Resources Information Center
Barut, Yasar
2015-01-01
Various studies on the conceptual framework of perfectionism construct use Hewitt Multi-dimensional Perfectionism Scale (HMPS), as a basic approach. The measure has a prominent role with respect to the theoretical considerations of perfectionism dimensions. This study aimed to evaluate the psychometric properties of the Turkish version of the…
The temporal relationships between the measurements of terrestrial water storage (TWS), groundwater, and stream discharge were analyzed at three different scales in the Columbia River Basin (CRB) for water years 2004 - 2012. Our nested watershed approach examined the Snake River ...
Weatherill, John; Krause, Stefan; Voyce, Kevin; Drijfhout, Falko; Levy, Amir; Cassidy, Nigel
2014-03-01
Integrated approaches for the identification of pollutant linkages between aquifers and streams are of crucial importance for evaluating the environmental risks posed by industrial contaminants like trichloroethene (TCE). This study presents a systematic, multi-scale approach to characterising groundwater TCE discharge to a 'gaining' UK lowland stream receiving baseflow from a major Permo-Triassic sandstone aquifer. Beginning with a limited number of initial monitoring points, we aim to provide a 'first pass' mechanistic understanding of the plume's fate at the aquifer/stream interface using a novel combination of streambed diffusion samplers, riparian monitoring wells and drive-point mini-piezometers in a spatially nested sampling configuration. Our results indicate the potential discharge zone of the plume to extend along a stream reach of 120 m in length, delineated by a network of 60 in-situ diffusion samplers. Within this section, a 40 m long sub-reach of higher concentration (>10 μg L-1) was identified, centred on a meander bend in the floodplain. 25 multi-level mini-piezometers installed to target this down-scaled reach revealed even higher TCE concentrations (20-40 μg L-1), significantly above alluvial groundwater samples (<6 μg L-1) from 15 riparian monitoring wells. Significant lateral and vertical spatial heterogeneity in TCE concentrations within the top 1 m of the streambed was observed with the decimetre-scale vertical resolution provided by the multi-level mini-piezometers. It appears that the distribution of fine-grained material in the Holocene deposits of the riparian floodplain and below the channel is exerting significant local-scale geological controls on the location and magnitude of the TCE discharge. Large-scale in-situ biodegradation of the plume was not evident during the monitoring campaigns. However, detections of cis-1,2-dichloroethene and vinyl chloride in discrete sections of the sediment profile indicate that shallow (e.g., <20 cm) TCE transformation may be significant at a local scale in the streambed deposits. Our findings highlight the need for efficient multi-scale monitoring strategies in geologically heterogeneous lowland stream/aquifer systems in order to more adequately quantify the risk to surface water ecological receptors posed by point-source groundwater contaminants like TCE. Copyright © 2013 Elsevier B.V. All rights reserved.
Phase transformations in steels: Processing, microstructure, and performance
Gibbs, Paul J.
2014-04-03
In this study, contemporary steel research is revealing new processing avenues to tailor microstructure and properties that, until recently, could only be imagined. Much of the technological versatility facilitating this development is provided by the understanding and utilization of the complex phase transformation sequences available in ferrous alloys. Today we have the opportunity to explore the diverse phenomena displayed by steels with specialized analytical and experimental tools. Advances in multi-scale characterization techniques provide a fresh perspective on microstructural relationships at the macro- and micro-scale, enabling a fundamental understanding of the role of phase transformations during processing and subsequent deformation.
An alternative to Rasch analysis using triadic comparisons and multi-dimensional scaling
NASA Astrophysics Data System (ADS)
Bradley, C.; Massof, R. W.
2016-11-01
Rasch analysis is a principled approach for estimating the magnitude of some shared property of a set of items when a group of people assign ordinal ratings to them. In the general case, Rasch analysis not only estimates person and item measures on the same invariant scale, but also estimates the average thresholds used by the population to define rating categories. However, Rasch analysis fails when there is insufficient variance in the observed responses because it assumes a probabilistic relationship between person measures, item measures and the rating assigned by a person to an item. When only a single person is rating all items, there may be cases where the person assigns the same rating to many items no matter how many times he rates them. We introduce an alternative to Rasch analysis for precisely these situations. Our approach leverages multi-dimensional scaling (MDS) and requires only rank orderings of items and rank orderings of pairs of distances between items to work. Simulations show one variant of this approach - triadic comparisons with non-metric MDS - provides highly accurate estimates of item measures in realistic situations.
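A minimal sketch of the non-metric MDS step with scikit-learn: recovering one-dimensional item measures (up to scale and sign) from an ordinal dissimilarity matrix. The toy dissimilarities stand in for the triadic-comparison data, whose collection procedure is not reproduced here:

```python
# Non-metric MDS on a precomputed, rank-consistent dissimilarity matrix.
import numpy as np
from sklearn.manifold import MDS

true = np.array([0.0, 1.0, 2.5, 4.0, 7.0])     # latent item measures (toy)
D = np.abs(true[:, None] - true[None, :])      # pairwise dissimilarities

mds = MDS(n_components=1, metric=False, dissimilarity="precomputed",
          random_state=0)
est = mds.fit_transform(D).ravel()             # recovered up to scale/sign
```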
NASA Astrophysics Data System (ADS)
Montero, Marc Villa; Barjasteh, Ehsan; Baid, Harsh K.; Godines, Cody; Abdi, Frank; Nikbin, Kamran
A multi-scale micromechanics approach, along with a finite element (FE) predictive model, is developed to analyze the low-energy impact damage footprint and compression-after-impact (CAI) behavior of composite laminates, and is tested and verified with experimental data. Effective fiber and matrix properties were reverse-engineered from lamina properties using an optimization algorithm and used to assess damage at the micro-level during impact and post-impact FE simulations. Progressive failure dynamic analysis (PFDA) was performed for a two-step process simulation. Damage mechanisms at the micro-level were continuously evaluated during the analyses. The contribution of each failure mode was tracked during the simulations, and the damage and delamination footprint size and shape were predicted to understand when, where and why failure occurred during both impact and CAI events. The composite laminate was manufactured by vacuum infusion of an aero-grade toughened Benzoxazine system into the fabric preform. The delamination footprint was measured using C-scan data from the impacted panels and compared with the predicted values obtained from the proposed multi-scale micromechanics approach coupled with FE analysis. Furthermore, the residual strength was predicted from the load-displacement curve and compared with the experimental values.
Multi-Hazard Interactions in Guatemala
NASA Astrophysics Data System (ADS)
Gill, Joel; Malamud, Bruce D.
2017-04-01
In this paper, we combine physical and social science approaches to develop a multi-scale regional framework for natural hazard interactions in Guatemala. The identification and characterisation of natural hazard interactions is an important input for comprehensive multi-hazard approaches to disaster risk reduction at a regional level. We use five transdisciplinary evidence sources to organise and populate our framework: (i) internationally-accessible literature; (ii) civil protection bulletins; (iii) field observations; (iv) stakeholder interviews (hazard and civil protection professionals); and (v) stakeholder workshop results. These five evidence sources are synthesised to determine an appropriate natural hazard classification scheme for Guatemala (6 hazard groups, 19 hazard types, and 37 hazard sub-types). For a national spatial extent (Guatemala), we construct and populate a "21×21" hazard interaction matrix, identifying 49 possible interactions between 21 hazard types. For a sub-national spatial extent (Southern Highlands, Guatemala), we construct and populate a "33×33" hazard interaction matrix, identifying 112 possible interactions between 33 hazard sub-types. Evidence sources are also used to constrain anthropogenic processes that could trigger natural hazards in Guatemala, and characterise possible networks of natural hazard interactions (cascades). The outcomes of this approach are among the most comprehensive interaction frameworks for national and sub-national spatial scales in the published literature. These can be used to support disaster risk reduction and civil protection professionals in better understanding natural hazards and potential disasters at a regional scale.
Enhancement of global flood damage assessments using building material based vulnerability curves
NASA Astrophysics Data System (ADS)
Englhardt, Johanna; de Ruiter, Marleen; de Moel, Hans; Aerts, Jeroen
2017-04-01
This study discusses the development of an enhanced approach for flood damage and risk assessments using vulnerability curves that are based on building material information. The approach draws upon common practices in earthquake vulnerability assessments, and is an alternative to the land-use or building-occupancy approach in flood risk assessment models. The approach is of particular importance for studies where there is large variation in building material, such as large scale studies or studies in developing countries. A case study of Ethiopia is used to demonstrate the impact of the different methodological approaches on direct damage assessments due to flooding. Generally, flood damage assessments use damage curves for different land-use or occupancy types (i.e. urban or residential and commercial classes). However, these categories do not necessarily relate directly to vulnerability to damage by flood waters. For this, the construction type and building material may be more important, as is used in earthquake risk assessments. For this study, we use building material classification data of the PAGER1 project to define new building material based vulnerability classes for flood damage. This approach will be compared to the widely applied land-use based vulnerability curves such as those used by De Moel et al. (2011). The case of Ethiopia demonstrates and compares the feasibility of this novel flood vulnerability method on a country level, which holds the potential to be scaled up to a global level. The study shows that flood vulnerability based on building material also allows for better differentiation between flood damage in urban and rural settings, opening doors to better links to poverty studies when such exposure data are available. Furthermore, this new approach paves the way to the enhancement of multi-risk assessments, as the method enables the comparison of vulnerability across different natural hazard types that also use material-based vulnerability curves. Finally, this approach allows for more accuracy in estimating losses as a result of direct damages. 1 http://earthquake.usgs.gov/data/pager/
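A minimal sketch of the material-based vulnerability idea: a damage fraction interpolated from water depth per building-material class. The curve values are invented placeholders, not calibrated vulnerability data:

```python
# Depth-damage lookup per building-material class via linear interpolation.
import numpy as np

DEPTHS = np.array([0.0, 0.5, 1.0, 2.0, 4.0])         # water depth [m]
CURVES = {
    "mud_brick": np.array([0.0, 0.4, 0.7, 0.95, 1.0]),
    "concrete":  np.array([0.0, 0.1, 0.25, 0.5, 0.8]),
}

def damage_fraction(depth_m, material):
    return float(np.interp(depth_m, DEPTHS, CURVES[material]))

loss = damage_fraction(1.5, "mud_brick") * 20000.0   # exposure value [USD]
```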
NASA Astrophysics Data System (ADS)
Chung, C.; Nagol, J. R.; Tao, X.; Anand, A.; Dempewolf, J.
2015-12-01
Increasing agricultural production while at the same time preserving the environment has become a challenging task. There is a need for new approaches to the use of multi-scale and multi-source remote sensing data, as well as ground based measurements, for mapping and monitoring crop and ecosystem state to support decision making by governmental and non-governmental organizations for sustainable agricultural development. High resolution sub-meter imagery plays an important role in such an integrative framework of landscape monitoring. It helps link the ground based data to more easily available coarser resolution data, facilitating calibration and validation of derived remote sensing products. Here we present a hierarchical Object Based Image Analysis (OBIA) approach to classify sub-meter imagery. The primary reason for choosing OBIA is to accommodate pixel sizes smaller than the object or class of interest. Especially in the non-homogeneous savannah regions of Tanzania this is an important concern, and the traditional pixel based spectral signature approach often fails. Ortho-rectified, calibrated, pan-sharpened 0.5 meter resolution data acquired from DigitalGlobe's WorldView-2 satellite sensor were used for this purpose. Multi-scale hierarchical segmentation was performed using a multi-resolution segmentation approach to facilitate the use of texture, neighborhood context, and the relationship between super- and sub-objects for training and classification. eCognition, a commonly used OBIA software program, was used for this purpose. Both decision tree and random forest approaches to classification were tested. The Kappa index of agreement for both algorithms surpassed 85%. The results demonstrate that hierarchical OBIA can effectively and accurately discriminate classes even at the LCCS-3 legend level.
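The classification itself was performed in eCognition; purely as an illustration of the random-forest step on per-object features, a scikit-learn sketch with synthetic data (the feature set and class count are assumptions):

```python
# Random-forest classification of segmented image objects.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 12))      # 12 object features (band means, texture, ...)
y = rng.integers(0, 6, size=1000)    # 6 land-cover classes

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(Xtr, ytr)
print("accuracy:", rf.score(Xte, yte))
```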
Beyond RGB: Very high resolution urban remote sensing with multimodal deep networks
NASA Astrophysics Data System (ADS)
Audebert, Nicolas; Le Saux, Bertrand; Lefèvre, Sébastien
2018-06-01
In this work, we investigate various methods to deal with semantic labeling of very high resolution multi-modal remote sensing data. In particular, we study how deep fully convolutional networks can be adapted to deal with multi-modal and multi-scale remote sensing data for semantic labeling. Our contributions are threefold: (a) we present an efficient multi-scale approach to leverage both a large spatial context and the high resolution data, (b) we investigate early and late fusion of Lidar and multispectral data, (c) we validate our methods on two public datasets with state-of-the-art results. Our results indicate that late fusion makes it possible to recover from errors stemming from ambiguous data, while early fusion allows for better joint feature learning, but at the cost of higher sensitivity to missing data.
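A minimal sketch of the late-fusion idea for two modalities; the toy encoders, channel counts and averaging rule are illustrative assumptions, not the paper's architecture:

```python
# Late fusion: separate per-modality encoders, predictions combined at the end.
import torch
import torch.nn as nn

def encoder(in_ch, n_classes):
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, n_classes, 1),        # per-pixel class scores
    )

class LateFusion(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.optical = encoder(4, n_classes)   # e.g., a multispectral stack
        self.lidar = encoder(1, n_classes)     # e.g., a Lidar-derived nDSM

    def forward(self, opt, lid):
        return 0.5 * (self.optical(opt) + self.lidar(lid))

model = LateFusion()
out = model(torch.randn(2, 4, 64, 64), torch.randn(2, 1, 64, 64))
```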
Ibrahim, Mohamed; Wickenhauser, Patrick; Rautek, Peter; Reina, Guido; Hadwiger, Markus
2018-01-01
Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.
Ngendahimana, David K.; Fagerholm, Cara L.; Sun, Jiayang; Bruckman, Laura S.
2017-01-01
Accelerated weathering exposures were performed on poly(ethylene-terephthalate) (PET) films. Longitudinal multi-level predictive models as a function of PET grades and exposure types were developed for the change in yellowness index (YI) and haze (%). Exposures with similar change in YI were modeled using a linear fixed-effects modeling approach. Due to the complex nature of haze formation, measurement uncertainty, and the differences in the samples’ responses, the change in haze (%) depended on individual samples’ responses and a linear mixed-effects modeling approach was used. When compared to fixed-effects models, the addition of random effects in the haze formation models significantly increased the variance explained. For both modeling approaches, diagnostic plots confirmed independence and homogeneity with normally distributed residual errors. Predictive R2 values for true prediction error and predictive power of the models demonstrated that the models were not subject to over-fitting. These models enable prediction under pre-defined exposure conditions for a given exposure time (or photo-dosage in case of UV light exposure). PET degradation under cyclic exposures combining UV light and condensing humidity is caused by photolytic and hydrolytic mechanisms causing yellowing and haze formation. Quantitative knowledge of these degradation pathways enable cross-correlation of these lab-based exposures with real-world conditions for service life prediction. PMID:28498875
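As an illustration of the two modelling choices described above (a fixed-effects fit versus a mixed-effects fit with per-sample random effects), a statsmodels sketch on synthetic data; the variable names and data are assumptions, not the study's measurements:

```python
# Fixed-effects vs. mixed-effects longitudinal models of a degradation response.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
df = pd.DataFrame({
    "time": np.tile(np.arange(10), 12),
    "sample": np.repeat([f"s{i}" for i in range(12)], 10),
})
offsets = {f"s{i}": rng.normal(0, 2) for i in range(12)}   # per-sample effects
df["haze"] = 0.5 * df["time"] + df["sample"].map(offsets) \
    + rng.normal(0, 1, len(df))

fixed = smf.ols("haze ~ time", data=df).fit()                       # fixed effects only
mixed = smf.mixedlm("haze ~ time", data=df, groups="sample").fit()  # + random intercept
print(mixed.summary())
```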
NASA Astrophysics Data System (ADS)
Wheatland, Jonathan; Bushby, Andy; Droppo, Ian; Carr, Simon; Spencer, Kate
2015-04-01
Suspended estuarine sediments form flocs that are compositionally complex, fragile and irregularly shaped. The fate and transport of suspended particulate matter (SPM) is determined by the size, shape, density, porosity and stability of these flocs, and prediction of SPM transport requires accurate measurements of these three-dimensional (3D) physical properties. However, the multi-scaled nature of flocs, in addition to their fragility, makes their characterisation in 3D problematic. Correlative microscopy is a strategy involving the spatial registration of information collected at different scales using several imaging modalities. Previously, conventional optical microscopy (COM) and transmission electron microscopy (TEM) have enabled 2-dimensional (2D) floc characterisation at the gross (> 1 µm) and sub-micron scales respectively. Whilst this has proven insightful, there remains a critical spatial and dimensional gap preventing the accurate measurement of geometric properties and an understanding of how structures at different scales are related. Within the life sciences, volumetric imaging techniques such as 3D micro-computed tomography (3D µCT) and focused ion beam scanning electron microscopy [FIB-SEM (or FIB-tomography)] have been combined to characterise materials at the centimetre to micron scale. Combining these techniques with TEM enables an advanced correlative study, allowing material properties across multiple spatial and dimensional scales to be visualised. The aims of this study are: 1) to formulate an advanced correlative imaging strategy combining 3D µCT, FIB-tomography and TEM; 2) to acquire 3D datasets; 3) to produce a model allowing their co-visualisation; 4) to interpret 3D floc structure. To reduce the chance of structural alterations during analysis, samples were first 'fixed' in 2.5% glutaraldehyde/2% formaldehyde before being embedded in Durcupan resin. Intermediate steps were implemented to improve contrast and remove pore water, achieved by the addition of heavy metal stains and washing samples in a series of ethanol solutions and acetone. Gross-scale characterisation involved scanning samples using a Nikon Metrology HM X 225 µCT. For micro-scale analysis, a working surface was revealed by microtoming the sample. Ultrathin sections were then collected and analysed using a JEOL 1200 Ex II TEM, and FIB-tomography datasets were obtained using an FEI Quanta 3D FIB-SEM. Finally, to locate the surface and relate the TEM and FIB-tomography datasets to the original floc, samples were rescanned using the µCT. Image processing was initially conducted in ImageJ. Following this, datasets were imported into Amira 5.5, where pixel intensity thresholding allowed particle-matrix boundaries to be defined. Using 'landmarks', datasets were then registered to enable their co-visualisation in 3D models. Analysis of the registered datasets reveals the complex non-fractal nature of flocs, whose properties span several orders of magnitude. Primary particles are organised into discrete 'bundles', the arrangement of which directly influences their gross morphology. This strategy, which allows the co-visualisation of spatially registered multi-scale 3D datasets, provides unique insights into the true nature of flocs which would otherwise have been impossible to obtain.
Intrinsic Multi-Scale Dynamic Behaviors of Complex Financial Systems.
Ouyang, Fang-Yan; Zheng, Bo; Jiang, Xiong-Fei
2015-01-01
The empirical mode decomposition is applied to analyze the intrinsic multi-scale dynamic behaviors of complex financial systems. In this approach, the time series of price returns of each stock is decomposed into a small number of intrinsic mode functions, which represent the price motion from high frequency to low frequency. These intrinsic mode functions are then grouped into three modes, i.e., the fast mode, medium mode and slow mode. The probability distribution of returns and the auto-correlation of volatilities for the fast and medium modes exhibit behaviors similar to those of the full time series, i.e., these characteristics are rather robust across multiple time scales. However, the cross-correlation between individual stocks and the return-volatility correlation are time-scale dependent. The structure of business sectors is mainly governed by the fast mode when returns are sampled over a couple of days, and by the medium mode when returns are sampled over dozens of days. More importantly, the leverage and anti-leverage effects are dominated by the medium mode.
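A minimal sketch of the decomposition step using the open-source PyEMD package (published as EMD-signal on PyPI); the grouping of IMFs into fast/medium/slow modes mirrors the paper's idea, but the exact index boundaries below are assumptions:

```python
# Empirical mode decomposition of a return series into IMFs, then grouping.
import numpy as np
from PyEMD import EMD

r = np.random.default_rng(7).normal(size=2048)   # stand-in return series
imfs = EMD().emd(r)                              # intrinsic mode functions

fast = imfs[:2].sum(axis=0)      # highest-frequency IMFs
medium = imfs[2:5].sum(axis=0)
slow = imfs[5:].sum(axis=0)      # remaining IMFs, including the residue
```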
Vakalis, Stergios; Patuzzi, Francesco; Baratieri, Marco
2016-04-01
Modeling can be a powerful tool for designing and optimizing gasification systems. Modeling applications for small-scale/fixed-bed biomass gasifiers have attracted interest due to their growing commercial deployment. Fixed-bed gasifiers are characterized by a wide range of operational conditions and are multi-zoned processes. The reactants are distributed in different phases, and the products from each zone influence the following process steps and thus the composition of the final products. The present study aims to improve conventional 'black-box' thermodynamic modeling by developing multiple intermediate 'boxes' that calculate two-phase (solid-vapor) equilibria in small-scale gasifiers. The model is therefore named 'Multi-Box'. Experimental data from a small-scale gasifier have been used for the validation of the model. The returned results are significantly closer to the actual case-study measurements than those of single-stage thermodynamic modeling. Copyright © 2016 Elsevier Ltd. All rights reserved.
Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stander, Nielen; Basudhar, Anirban; Basu, Ushnish
2015-09-14
Ever-tightening regulations on fuel economy, and the likely future regulation of carbon emissions, demand persistent innovation in vehicle design to reduce vehicle mass. Classical methods for computational mass reduction include sizing, shape and topology optimization. One of the few remaining options for weight reduction can be found in materials engineering and material design optimization. Apart from considering different types of materials, by adding material diversity and composite materials, an appealing option in automotive design is to engineer steel alloys for the purpose of reducing plate thickness while retaining the strength and ductility required for durability and safety. A project to develop computational material models for advanced high strength steel is currently being executed under the auspices of the United States Automotive Materials Partnership (USAMP), funded by the US Department of Energy. Under this program, new Third Generation Advanced High Strength Steels (i.e., 3GAHSS) are being designed, tested and integrated with the remaining design variables of a benchmark vehicle finite element model. The objectives of the project are to integrate atomistic, microstructural, forming and performance models to create an integrated computational materials engineering (ICME) toolkit for 3GAHSS. The mechanical properties of Advanced High Strength Steels (AHSS) are controlled by many factors, including phase composition and distribution in the overall microstructure, volume fraction, size and morphology of phase constituents, as well as the stability of the metastable retained austenite phase. The complex phase transformation and deformation mechanisms in these steels make the well-established traditional techniques obsolete, and a multi-scale microstructure-based modeling approach following the ICME [0] strategy was therefore chosen in this project. Multi-scale modeling as a major area of research and development is an outgrowth of the Comprehensive Test Ban Treaty of 1996, which banned surface testing of nuclear devices [1]. This had the effect that experimental work was reduced from large scale tests to multi-scale experiments to provide material models with validation at different length scales. In the subsequent years industry realized that multi-scale modeling and simulation-based design were transferable to the design optimization of any structural system. Horstemeyer [1] lists a number of advantages of the use of multi-scale modeling. Among these are: the reduction of product development time by alleviating costly trial-and-error iterations, as well as the reduction of product costs through innovations in material, product and process designs. Multi-scale modeling can reduce the number of costly large scale experiments and can increase product quality by providing more accurate predictions. Research tends to be focussed on each particular length scale, which enhances accuracy in the long term. This paper serves as an introduction to the LS-OPT and LS-DYNA methodology for multi-scale modeling. It mainly focuses on an approach to integrate material identification using material models of different length scales. As an example, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a homogenized State Variable (SV) model, is discussed, and the parameter identification of the individual material models of different length scales is demonstrated.
The paper concludes with thoughts on integrating the multi-scale methodology into the overall vehicle design.« less
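The parameter identification loop described above can be pictured as fitting a homogenized material law to data produced at a finer scale. The following is a minimal, hedged Python sketch of that idea only; the record does not give the actual CP and SV model forms or the LS-OPT interfaces, so the Voce hardening law and all names here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def voce_stress(params, strain):
    """Voce hardening law, sigma = s0 + R*(1 - exp(-b*eps)); used here
    only as a stand-in for a homogenized State Variable (SV) model."""
    s0, R, b = params
    return s0 + R * (1.0 - np.exp(-b * strain))

def identify(strain, stress_target, x0=(200.0, 300.0, 10.0)):
    """Least-squares calibration of the SV stand-in against a target
    stress-strain curve, e.g. one produced by a finer-scale Crystal
    Plasticity (CP) simulation."""
    res = least_squares(lambda p: voce_stress(p, strain) - stress_target, x0)
    return res.x

# illustrative round trip with synthetic "CP" data
eps = np.linspace(0.0, 0.2, 50)
target = voce_stress([250.0, 400.0, 15.0], eps) \
         + np.random.default_rng(0).normal(0.0, 2.0, 50)
print(identify(eps, target))  # recovers roughly (250, 400, 15)
```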
Control advances for achieving the ITER baseline scenario on KSTAR
NASA Astrophysics Data System (ADS)
Eidietis, N. W.; Barr, J.; Hahn, S. H.; Humphreys, D. A.; in, Y. K.; Jeon, Y. M.; Lanctot, M. J.; Mueller, D.; Walker, M. L.
2017-10-01
Control methodologies developed to enable successful production of ITER baseline scenario (IBS) plasmas on the superconducting KSTAR tokamak are presented: decoupled vertical control (DVC), real-time feedforward (rtFF) calculation, and multi-input multi-output (MIMO) X-point control. DVC provides fast vertical control with the in-vessel control coils (IVCC) while sharing slow vertical control with the poloidal field (PF) coils to avoid IVCC saturation. rtFF compensates for inaccuracies in offline PF current feedforward programming, allowing reduction or removal of integral gain (and its detrimental phase lag) from the shape controller. Finally, MIMO X-point control provides accurate positioning of the X-point despite low controllability due to the large distance between coils and plasma. Combined, these techniques enabled achievement of IBS parameters (q95 = 3.2, βN = 2) with a scaled ITER shape on KSTAR. The n = 2 RMP response displays a strong dependence upon this shaping. Work supported by the US DOE under Award DE-SC0010685 and the KSTAR project.
Multi-photon microfabrication of three-dimensional capillary-scale vascular networks
NASA Astrophysics Data System (ADS)
Skylar-Scott, Mark A.; Liu, Man-Chi; Wu, Yuelong; Yanik, Mehmet Fatih
2017-02-01
Biomimetic models of microvasculature could enable assays of complex cellular behavior at the capillary level, and enable efficient nutrient perfusion for the maintenance of tissues. However, existing three-dimensional printing methods for generating perfusable microvasculature have insufficient resolution to recapitulate the microscale geometry of capillaries. Here, we present a collection of multiphoton microfabrication methods that enable the production of precise, three-dimensional, branched microvascular networks in collagen. When endothelial cells are added to the channels, they form perfusable lumens with diameters as small as 10 μm. Using a similar photochemistry, we also demonstrate the micropatterning of proteins embedded in microfabricated collagen scaffolds, producing hybrid scaffolds with both defined microarchitecture and integrated gradients of chemical cues. We provide examples of how these hybrid microfabricated scaffolds could be used in angiogenesis and cell homing assays. Finally, we describe a new method for increasing the micropatterning speed by synchronous laser and stage scanning. Using these technologies, we are working towards large-scale (>1 cm), high-resolution (~1 μm) scaffolds with both microarchitecture and embedded protein cues, with applications in three-dimensional assays of cellular behavior.
Biofuel feedstock production in the United States (US) is an emergent environmental nutrient management issue, whose exploration can benefit from a multi-scale and multimedia systems modeling approach that explicitly addresses diverging stakeholder interests. In the present anal...
NASA Astrophysics Data System (ADS)
Rössler, Ole; Keller, Denise; Fischer, Andreas
2016-04-01
In 2011, new climate change scenarios were released in Switzerland by the national consortium C2SM, comprising a comprehensive data set of temperature and precipitation changes under climate change conditions for a large network of meteorological stations as well as for regions across Switzerland. These climate change signals were generated for three emission scenarios and three future time periods, based on a delta change approach. The data set proved very successful, as many different users (researchers, private companies, and societal stakeholders) were able to use and interpret it, and a range of applications all based on the same climate data enabled a comparable view on climate change impacts across several disciplines. The main limitation of, and criticism directed at, this data set was the use of the delta change approach for downscaling, which underestimates changes in extreme values and neglects changes in variability, be it year-to-year or day-to-day, as well as changes in temporal sequences. One way to overcome the latter limitations is the use of stochastic weather generators in a downscaling context. A common limitation of most weather generators, however, is the absence of spatial correlation in the generated daily time series, so that areal means over several stations are often low-biased. A realistic representation of the inter-station correlation in the downscaled time series is of particular importance for hydrological impact studies. A multi-site weather generator was therefore developed for Switzerland and tested for downscaling purposes in a previous study (Keller et al. 2014). The weather generator is of Richardson type and is run with spatially correlated random number streams to ensure spatial consistency; multi-site weather generators are more complex to develop, but they are a promising alternative downscaling technique. In this study, we tested this multi-site weather generator against the "standard" delta-change-derived data in a hydrological impact assessment focused on runoff in the meso-scale catchment of the river Thur. Two hydrological models of different complexity were run with both data sets under present (1980-2009) and future (2070-2099) conditions, assuming the SRES A1B emission scenario. Eight meteorological stations were used to interpolate a meteorological field that served as input to calibrate and validate the two hydrological models against runoff. The downscaling intercomparison was carried out for 10 GCM-RCM combinations of the ENSEMBLES project for which change factors for each station (delta change approach) were available; with the weather generator, which allows multiple synthetic realizations, we generated 25 realizations of multi-site weather for each climate model projection.
Results show that the delta-change-driven data constitute only one realization within the bandwidth of runoff projections yielded by the multi-site weather generator data. On average, the differences between the two approaches are small. Runoff values at both extremes are, however, better reproduced with the weather-generator-driven data set; the stochastic representation of multi-day rainfall events is considered the main reason. Hence, there is a clear yet small added value over the delta change approach, which in turn performs rather well. Although these small differences might call into question the considerable effort needed to construct a multi-site weather generator, its potential and the possibilities for its further development are undoubted.
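For readers unfamiliar with the delta change approach discussed above, a minimal sketch follows. The additive convention for temperature, the multiplicative one for precipitation, and all variable names are our assumptions for illustration, not details taken from the study.

```python
import numpy as np

def delta_change(obs_temp, obs_precip, dT, f_P):
    """Apply delta change factors to observed daily series.

    dT  : additive temperature change signal from one GCM-RCM chain
    f_P : multiplicative precipitation change factor

    Note that the scaled series inherits the observed day-to-day
    variability and temporal sequencing unchanged, which is exactly
    the limitation the multi-site weather generator addresses.
    """
    return obs_temp + dT, obs_precip * f_P

# illustrative use with synthetic observations
rng = np.random.default_rng(0)
t_obs = 10 + 8 * rng.standard_normal(365)   # daily temperature [deg C]
p_obs = rng.gamma(0.5, 4.0, size=365)       # daily precipitation [mm]
t_fut, p_fut = delta_change(t_obs, p_obs, dT=2.1, f_P=0.9)
```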
Data processing, multi-omic pathway mapping, and metabolite activity analysis using XCMS Online
Forsberg, Erica M; Huan, Tao; Rinehart, Duane; Benton, H Paul; Warth, Benedikt; Hilmers, Brian; Siuzdak, Gary
2018-01-01
Systems biology is the study of complex living organisms, and as such, analysis on a systems-wide scale involves the collection of information-dense data sets that are representative of an entire phenotype. To uncover dynamic biological mechanisms, bioinformatics tools have become essential to facilitating data interpretation in large-scale analyses. Global metabolomics is one such method for performing systems biology, as metabolites represent the downstream functional products of ongoing biological processes. We have developed XCMS Online, a platform that enables online metabolomics data processing and interpretation. A systems biology workflow recently implemented within XCMS Online enables rapid metabolic pathway mapping using raw metabolomics data for investigating dysregulated metabolic processes. In addition, this platform supports integration of multi-omic (such as genomic and proteomic) data to garner further systems-wide mechanistic insight. Here, we provide an in-depth procedure showing how to effectively navigate and use the systems biology workflow within XCMS Online without a priori knowledge of the platform, including uploading liquid chromatography (LC)–mass spectrometry (MS) data from metabolite-extracted biological samples, defining the job parameters to identify features, correcting for retention time deviations, conducting statistical analysis of features between sample classes and performing predictive metabolic pathway analysis. Additional multi-omics data can be uploaded and overlaid with previously identified pathways to enhance systems-wide analysis of the observed dysregulations. We also describe unique visualization tools to assist in elucidation of statistically significant dysregulated metabolic pathways. Parameter input takes 5–10 min, depending on user experience; data processing typically takes 1–3 h, and data analysis takes ~30 min. PMID:29494574
3D printing via ambient reactive extrusion
Rios, Orlando; Carter, William G.; Post, Brian K.; ...
2018-03-14
Here, Additive Manufacturing (AM) has the potential to offer many benefits over traditional manufacturing methods in the fabrication of complex parts with advantages such as low weight, complex geometry, and embedded functionality. In practice, today’s AM technologies are limited by their slow speed and highly directional properties. To address both issues, we have developed a reactive mixture deposition approach that can enable 3D printing of polymer materials at over 100X the volumetric deposition rate, enabled by a greater than 10X reduction in print head mass compared to existing large-scale thermoplastic deposition methods, with material chemistries that can be tuned for specific properties. Additionally, the reaction kinetics and transient rheological properties are specifically designed for the target deposition rates, enabling the synchronized development of increasing shear modulus and extensive cross linking across the printed layers. This ambient cure eliminates the internal stresses and bulk distortions that typically hamper AM of large parts, and yields a printed part with inter-layer covalent bonds that significantly improve the strength of the part along the build direction. The fast cure kinetics combined with the fine-tuned viscoelastic properties of the mixture enable rapid vertical builds that are not possible using other approaches. Through rheological characterization of mixtures that were capable of printing in this process as well as materials that have sufficient structural integrity for layer-on-layer printing, a “printability” rheological phase diagram has been developed, and is presented here. We envision this approach implemented as a deployable manufacturing system, where manufacturing is done on-site using the efficiently-shipped polymer, locally-sourced fillers, and a small, deployable print system. Unlike existing additive manufacturing approaches which require larger and slower print systems and complex thermal management strategies as scale increases, liquid reactive polymers decouple performance and print speed from the scale of the part, enabling a new class of cost-effective, fuel-efficient additive manufacturing.
NASA Astrophysics Data System (ADS)
Weigand, Maximilian; Kemna, Andreas
2017-02-01
A better understanding of root-soil interactions and associated processes is essential in achieving progress in crop breeding and management, prompting the need for high-resolution and non-destructive characterization methods. To date, such methods are still lacking or restricted by technical constraints, in particular for the characterization and monitoring of root growth and function in the field. A promising technique in this respect is electrical impedance tomography (EIT), which utilizes low-frequency (< 1 kHz) electrical conduction and polarization properties in an imaging framework. It is well established that cells and cell clusters exhibit an electrical polarization response in alternating electric-current fields due to electrical double layers which form at cell membranes. This double layer is directly related to the electrical surface properties of the membrane, which in turn are influenced by nutrient dynamics (fluxes and concentrations on both sides of the membranes). Therefore, it can be assumed that the electrical polarization properties of roots are inherently related to ion uptake and translocation processes in the root systems. We hereby propose broadband (mHz to hundreds of Hz) multi-frequency EIT as a non-invasive methodological approach for the monitoring and physiological, i.e., functional, characterization of crop root systems. The approach combines the spatial-resolution capability of an imaging method with the diagnostic potential of electrical impedance spectroscopy. The capability of multi-frequency EIT to characterize and monitor crop root systems was investigated in a laboratory experiment in which the root system of oilseed plants was monitored in a water-filled rhizotron, that is, in a nutrient-deprived environment. We found a low-frequency polarization response of the root system, which enabled the successful delineation of its spatial extension. The magnitude of the overall polarization response decreased along with the physiological decay of the root system due to the stress situation. Spectral polarization parameters, as derived from a pixel-based Debye decomposition analysis of the multi-frequency imaging results, reveal systematic changes in the spatial and spectral electrical response of the root system. In particular, quantified mean relaxation times (of the order of 10 ms) indicate changes in the length scales on which the polarization processes took place in the root system, as a response to the prolonged induced stress situation. Our results demonstrate that broadband EIT is a capable, non-invasive method to image root system extension as well as to monitor changes associated with root physiological processes. Given its applicability on both laboratory and field scales, our results suggest an enormous potential of the method for the structural and functional imaging of root systems in various applications. This particularly holds for the field scale, where corresponding methods are highly desired but to date lacking.
Fricke, Jens; Pohlmann, Kristof; Jonescheit, Nils A; Ellert, Andree; Joksch, Burkhard; Luttmann, Reiner
2013-06-01
The identification of optimal expression conditions for state-of-the-art production of pharmaceutical proteins is a very time-consuming and expensive process. In this report a method for rapid and reproducible optimization of protein expression in an in-house designed small-scale BIOSTAT® multi-bioreactor plant is described. A newly developed BioPAT® MFCS/win Design of Experiments (DoE) module (Sartorius Stedim Systems, Germany) connects the process control system MFCS/win and the DoE software MODDE® (Umetrics AB, Sweden) and therefore enables the implementation of fully automated optimization procedures. As a proof of concept, a commercial Pichia pastoris strain KM71H was transformed for the expression of potential malaria vaccines. The DoE optimization procedure allowed a doubling of intact protein secretion productivity compared to initial cultivation results. In a next step, robustness to process parameter variability around the determined optimum was demonstrated. Thereby, a significantly improved pharmaceutical production process was established within seven 24-hour cultivation cycles. Specifically, regarding the regulatory demands pointed out in the process analytical technology (PAT) initiative of the United States Food and Drug Administration (FDA), the combination of a highly instrumented, fully automated multi-bioreactor platform with proper cultivation strategies and extended DoE software solutions opens up promising benefits and opportunities for pharmaceutical protein production. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Plasma jets in the near-Earth's magnetotail (Julius Bartels Medal Lecture)
NASA Astrophysics Data System (ADS)
Nakamura, Rumi
2014-05-01
The Earth's magnetosphere is formed as a consequence of the interaction between the magnetized solar wind and the terrestrial magnetic field. While the large-scale and average (>hours) properties of the Earth's magnetotail current sheet can be well described by the overall solar wind-magnetosphere interaction, the most dramatic energy conversion processes take place in an explosive manner involving transient (up to several minutes) and localized (up to a few RE) phenomena in the plasma sheet/current sheet regions. Among the clearest observables of such processes are localized, transient plasma jets called bursty bulk flows (BBFs), embedding velocity peaks of 1-min duration called flow bursts. This talk reviews the current understanding of these plasma jets, highlighting results from multi-spacecraft observations by the Cluster and THEMIS missions. The first four-spacecraft mission, Cluster, crossed the near-Earth plasma sheet with inter-spacecraft distances of about 250 km to 10000 km, ideal for studying the local structure of flow bursts. The five-spacecraft THEMIS mission, with larger inter-spacecraft separations, succeeded in monitoring the large-scale evolution of the fast flows from the mid-tail to the inner magnetosphere. Multi-point observations of BBFs have established the importance of measuring local gradients of the fields and the plasma to understand BBF structures, such as the spatial scales and 3D structure of localized Earthward convecting flux tubes. Among others, the magnetic field disturbance forming at the front of a BBF, called the dipolarization front (DF), has been intensively studied. From the propagation properties of DFs relative to the flows, and by comparison with ionospheric data, the evolution of the fast flows in terms of magnetosphere-ionosphere coupling through field-aligned currents has been established. An important aspect of BBFs is the interaction of the Earthward plasma jets with the Earth's dipole field. Multi-point observations combined with ground-based observations made it possible to resolve how BBFs are braked, diverted, or bounced back at the high-pressure gradient region. The multi-point capabilities in space enabled study of the BBF structure as well as the large-scale evolution of BBFs. These are also universal processes in space plasmas, associated, for example, with the reconnection process during solar flares or with auroral phenomena at different planets.
Negotiating designs of multi-purpose reservoir systems in international basins
NASA Astrophysics Data System (ADS)
Geressu, Robel; Harou, Julien
2016-04-01
Given increasing agricultural and energy demands, coordinated management of multi-reservoir systems could help increase production without further stressing available water resources. However, regional or international disputes about water-use rights pose a challenge to the efficient expansion and management of many large reservoir systems. Even when projects are likely to benefit all stakeholders, agreeing on the design, operation, financing, and benefit sharing can be challenging. This is due to the difficulty of considering multiple stakeholder interests in the design of projects and of understanding the benefit trade-offs that designs imply. Incommensurate performance metrics, incomplete knowledge of system requirements, lack of objectivity in managing conflict and difficulty communicating complex issues exacerbate the problem. This work proposes a multi-step hybrid multi-objective optimization and multi-criteria ranking approach for supporting negotiation in water resource systems. The approach uses many-objective optimization to generate alternative efficient designs and reveal the trade-offs between conflicting objectives. This enables informed elicitation of criteria weights for further multi-criteria ranking of alternatives. An ideal design would be ranked as best by all stakeholders. Resource-sharing mechanisms such as power trade and/or cost sharing may help competing stakeholders arrive at designs acceptable to all. Many-objective optimization helps suggest efficient designs (reservoir site, storage size and operating rule) and coordination levels considering the perspectives of multiple stakeholders simultaneously. We apply the proposed approach to a proof-of-concept study of the expansion of the Blue Nile transboundary reservoir system.
Distributing Planning and Control for Teams of Cooperating Mobile Robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, L.E.
2004-07-19
This CRADA project involved the cooperative research of investigators in ORNL's Center for Engineering Science Advanced Research (CESAR) with researchers at Caterpillar, Inc. The subject of the research was the development of cooperative control strategies for autonomous vehicles performing applications of interest to Caterpillar customers. The project involved three Phases of research, conducted over the time period of November 1998 through December 2001. This project led to the successful development of several technologies and demonstrations in realistic simulation that illustrated the effectiveness of our control approaches for distributed planning and cooperation in multi-robot teams. The primary objectives of this research project were to: (1) Develop autonomous control technologies to enable multiple vehicles to work together cooperatively, (2) Provide the foundational capabilities for a human operator to exercise oversight and guidance during the multi-vehicle task execution, and (3) Integrate these capabilities to the ALLIANCE-based autonomous control approach for multi-robot teams. These objectives have been successfully met with the results implemented and demonstrated in a near real-time multi-vehicle simulation of up to four vehicles performing mission-relevant tasks.
Systematic methods for defining coarse-grained maps in large biomolecules.
Zhang, Zhiyong
2015-01-01
Large biomolecules are involved in many important biological processes. It would be difficult to use large-scale atomistic molecular dynamics (MD) simulations to study the functional motions of these systems because of the computational expense. Therefore various coarse-grained (CG) approaches have attracted rapidly growing interest, which enable simulations of large biomolecules over longer effective timescales than all-atom MD simulations. The first issue in CG modeling is to construct CG maps from atomic structures. In this chapter, we review the recent development of a novel and systematic method for constructing CG representations of arbitrarily complex biomolecules, in order to preserve large-scale and functionally relevant essential dynamics (ED) at the CG level. In this ED-CG scheme, the essential dynamics can be characterized by principal component analysis (PCA) on a structural ensemble, or elastic network model (ENM) of a single atomic structure. Validation and applications of the method cover various biological systems, such as multi-domain proteins, protein complexes, and even biomolecular machines. The results demonstrate that the ED-CG method may serve as a very useful tool for identifying functional dynamics of large biomolecules at the CG level.
NASA Technical Reports Server (NTRS)
Brown, Christopher A.
1993-01-01
The approach of the project is to base the design of multi-function, reflective topographies on the theory that topographically dependent phenomena react with surfaces and interfaces at certain scales. The first phase of the project emphasizes the development of methods for understanding the sizes of topographic features which influence reflectivity. Subsequent phases, if necessary, will address the scales of interaction for adhesion and manufacturing processes. A simulation of the interaction of electromagnetic radiation, or light, with a reflective surface is performed using specialized software. Reflectivity of the surface as a function of scale is evaluated and the results from the simulation are compared with reflectivity measurements made on multi-function, reflective surfaces.
Uncertainty-Based Multi-Objective Optimization of Groundwater Remediation Design
NASA Astrophysics Data System (ADS)
Singh, A.; Minsker, B.
2003-12-01
Management of groundwater contamination is a cost-intensive undertaking filled with conflicting objectives and substantial uncertainty. A critical source of this uncertainty in groundwater remediation design problems comes from the hydraulic conductivity values for the aquifer, upon which the predictions of flow and transport of contaminants depend. For a remediation solution to be reliable in practice it is important that it is robust to the potential error in the model predictions. This work focuses on incorporating such uncertainty within a multi-objective optimization framework, to obtain solutions that are both reliable and Pareto-optimal. Previous research has shown that small amounts of sampling within a single-objective genetic algorithm can produce highly reliable solutions. With multiple objectives, however, the noise can interfere with the basic operations of a multi-objective solver, such as determining non-domination of individuals, diversity preservation, and elitism. This work proposes several approaches to improve the performance of noisy multi-objective solvers. These include a simple averaging approach, taking samples across the population (which we call extended averaging), and a stochastic optimization approach. All the approaches are tested on standard multi-objective benchmark problems and a hypothetical groundwater remediation case study; the best-performing approach is then tested on a field-scale case at Umatilla Army Depot.
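As a hedged illustration of the 'simple averaging' idea (not the authors' implementation), the sketch below averages each noisy objective over several hydraulic conductivity realizations before the genetic algorithm compares individuals; `simulate` and all other names are hypothetical placeholders.

```python
import numpy as np

def averaged_objectives(design, simulate, realizations, n_samples=5, seed=None):
    """Average the objective vector of one candidate design over a random
    subset of conductivity realizations, damping the noise that otherwise
    corrupts non-domination sorting, diversity preservation and elitism.
    Requires n_samples <= len(realizations)."""
    rng = np.random.default_rng(seed)
    picks = rng.choice(len(realizations), size=n_samples, replace=False)
    scores = np.array([simulate(design, realizations[k]) for k in picks])
    return scores.mean(axis=0)   # e.g. (cost, residual contaminant mass)
```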
Frasch, Martin G; Lobmaier, Silvia M; Stampalija, Tamara; Desplats, Paula; Pallarés, María Eugenia; Pastor, Verónica; Brocco, Marcela A; Wu, Hau-Tieng; Schulkin, Jay; Herry, Christophe L; Seely, Andrew J E; Metz, Gerlinde A S; Louzoun, Yoram; Antonelli, Marta C
2018-05-30
Prenatal stress (PS) impacts early postnatal behavioural and cognitive development. This process of 'fetal programming' is mediated by the effects of the prenatal experience on the developing hypothalamic-pituitary-adrenal (HPA) axis and autonomic nervous system (ANS). We derive a multi-scale multi-species approach to devising preclinical and clinical studies to identify early non-invasively available pre- and postnatal biomarkers of PS. The multiple scales include brain epigenome, metabolome, microbiome and the ANS activity gauged via an array of advanced non-invasively obtainable properties of fetal heart rate fluctuations. The proposed framework has the potential to reveal mechanistic links between maternal stress during pregnancy and changes across these physiological scales. Such biomarkers may hence be useful as early and non-invasive predictors of neurodevelopmental trajectories influenced by the PS as well as follow-up indicators of success of therapeutic interventions to correct such altered neurodevelopmental trajectories. PS studies must be conducted on multiple scales derived from concerted observations in multiple animal models and human cohorts performed in an interactive and iterative manner and deploying machine learning for data synthesis, identification and validation of the best non-invasive detection and follow-up biomarkers, a prerequisite for designing effective therapeutic interventions. Copyright © 2018 Elsevier Ltd. All rights reserved.
Papanikolaou, Yannis; Tsoumakas, Grigorios; Laliotis, Manos; Markantonatos, Nikos; Vlahavas, Ioannis
2017-09-22
In this paper we present the approach that we employed to deal with large-scale multi-label semantic indexing of biomedical papers. This work was mainly implemented within the context of the BioASQ challenge (2013-2017), a challenge concerned with biomedical semantic indexing and question answering. Our main contribution is a MUlti-Label Ensemble method (MULE) that incorporates a McNemar statistical significance test in order to validate the combination of the constituent machine learning algorithms. Secondary contributions include a study on the temporal aspects of the BioASQ corpus (observations apply also to BioASQ's super-set, the PubMed articles collection) and the proper parametrization of the algorithms used to deal with this challenging classification task. The ensemble method that we developed is compared to other approaches in experimental scenarios with subsets of the BioASQ corpus, with positive results. In our participation in the BioASQ challenge we obtained the first place in 2013 and the second place in the four following years, steadily outperforming MTI, the indexing system of the National Library of Medicine (NLM). The results of our experimental comparisons suggest that employing a statistical significance test to validate the ensemble method's choices is the optimal approach for ensembling multi-label classifiers, especially in contexts with many rare labels.
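The validation step at the heart of MULE relies on McNemar's test. The sketch below shows that test as it might be applied to the predictions of two constituent classifiers on one label; it is our illustration of the statistic, not the authors' code.

```python
import numpy as np
from scipy.stats import chi2

def mcnemar(y_true, pred_a, pred_b):
    """McNemar test (with continuity correction) on the disagreements of
    two classifiers. A small p-value means the classifiers err on
    significantly different cases, so combining them is more likely to
    pay off than combining near-identical models."""
    y_true, pred_a, pred_b = map(np.asarray, (y_true, pred_a, pred_b))
    a_ok, b_ok = pred_a == y_true, pred_b == y_true
    b = int(np.sum(a_ok & ~b_ok))   # A right, B wrong
    c = int(np.sum(~a_ok & b_ok))   # A wrong, B right
    if b + c == 0:
        return 0.0, 1.0             # identical error patterns
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)
```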
Visualization of Spatio-Temporal Relations in Movement Event Using Multi-View
NASA Astrophysics Data System (ADS)
Zheng, K.; Gu, D.; Fang, F.; Wang, Y.; Liu, H.; Zhao, W.; Zhang, M.; Li, Q.
2017-09-01
Spatio-temporal relations among movement events extracted from temporally varying trajectory data can provide useful information about the evolution of individual or collective movers, as well as their interactions with their spatial and temporal contexts. However, the pure statistical tools commonly used by analysts pose many difficulties, due to the large number of attributes embedded in multi-scale and multi-semantic trajectory data. The need for models that operate at multiple scales to search for relations at different locations within time and space, as well as intuitively interpret what these relations mean, also presents challenges. Since analysts do not know where or when these relevant spatio-temporal relations might emerge, these models must compute statistical summaries of multiple attributes at different granularities. In this paper, we propose a multi-view approach to visualize the spatio-temporal relations among movement events. We describe a method for visualizing movement events and spatio-temporal relations that uses multiple displays. A visual interface is presented, and the user can interactively select or filter spatial and temporal extents to guide the knowledge discovery process. We also demonstrate how this approach can help analysts to derive and explain the spatio-temporal relations of movement events from taxi trajectory data.
Modeling Drug- and Chemical-Induced Hepatotoxicity with Systems Biology Approaches
Bhattacharya, Sudin; Shoda, Lisl K.M.; Zhang, Qiang; Woods, Courtney G.; Howell, Brett A.; Siler, Scott Q.; Woodhead, Jeffrey L.; Yang, Yuching; McMullen, Patrick; Watkins, Paul B.; Andersen, Melvin E.
2012-01-01
We provide an overview of computational systems biology approaches as applied to the study of chemical- and drug-induced toxicity. The concept of “toxicity pathways” is described in the context of the 2007 US National Academies of Science report, “Toxicity testing in the 21st Century: A Vision and A Strategy.” Pathway mapping and modeling based on network biology concepts are a key component of the vision laid out in this report for a more biologically based analysis of dose-response behavior and the safety of chemicals and drugs. We focus on toxicity of the liver (hepatotoxicity) – a complex phenotypic response with contributions from a number of different cell types and biological processes. We describe three case studies of complementary multi-scale computational modeling approaches to understand perturbation of toxicity pathways in the human liver as a result of exposure to environmental contaminants and specific drugs. One approach involves development of a spatial, multicellular “virtual tissue” model of the liver lobule that combines molecular circuits in individual hepatocytes with cell–cell interactions and blood-mediated transport of toxicants through hepatic sinusoids, to enable quantitative, mechanistic prediction of hepatic dose-response for activation of the aryl hydrocarbon receptor toxicity pathway. Simultaneously, methods are being developed to extract quantitative maps of intracellular signaling and transcriptional regulatory networks perturbed by environmental contaminants, using a combination of gene expression and genome-wide protein-DNA interaction data. A predictive physiological model (DILIsym™) to understand drug-induced liver injury (DILI), the most common adverse event leading to termination of clinical development programs and regulatory actions on drugs, is also described. The model initially focuses on reactive metabolite-induced DILI in response to administration of acetaminophen, and spans multiple biological scales. PMID:23248599
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dombroski, M; Melius, C; Edmunds, T
2008-09-24
This study uses the Multi-scale Epidemiologic Simulation and Analysis (MESA) system developed for foreign animal diseases to assess consequences of nationwide human infectious disease outbreaks. A literature review identified the state of the art in both small-scale regional models and large-scale nationwide models and characterized key aspects of a nationwide epidemiological model. The MESA system offers computational advantages over existing epidemiological models and enables a broader array of stochastic analyses of model runs to be conducted because of those computational advantages. However, it has only been demonstrated on foreign animal diseases. This paper applied the MESA modeling methodology to human epidemiology. The methodology divided 2000 US Census data at the census tract level into school-bound children, work-bound workers, elderly, and stay-at-home individuals. The model simulated mixing among these groups by incorporating schools, workplaces, households, and long-distance travel via airports. A baseline scenario with fixed input parameters was run for a nationwide influenza outbreak using relatively simple social distancing countermeasures. Analysis from the baseline scenario showed one of three possible results: (1) the outbreak burned itself out before it had a chance to spread regionally, (2) the outbreak spread regionally and lasted a relatively long time, although constrained geography enabled it to eventually be contained without affecting a disproportionately large number of people, or (3) the outbreak spread through air travel and lasted a long time with unconstrained geography, becoming a nationwide pandemic. These results are consistent with empirical influenza outbreak data. The results showed that simply scaling up a regional small-scale model is unlikely to account for all the complex variables and their interactions involved in a nationwide outbreak. There are several limitations of the methodology that should be explored in future work, including validating the model against reliable historical disease data; improving contact rates, spread methods, and disease parameters through discussions with epidemiological experts; and incorporating realistic behavioral assumptions.
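The three outcome regimes described above (local burnout, contained regional spread, nationwide pandemic) emerge naturally from stochastic transmission dynamics. As a toy stand-in for a single mixing group of a system such as MESA (parameter values are illustrative assumptions, not taken from the study), a chain-binomial SIR sketch:

```python
import numpy as np

def stochastic_sir(n=10_000, i0=10, beta=0.3, gamma=0.1, days=200, seed=1):
    """Daily chain-binomial SIR for one well-mixed group (a school,
    workplace or household pool). Repeated runs with identical parameters
    reproduce the burnout-versus-spread variability noted above."""
    rng = np.random.default_rng(seed)
    s, i, r = n - i0, i0, 0
    history = [(s, i, r)]
    for _ in range(days):
        p_inf = 1.0 - np.exp(-beta * i / n)          # per-susceptible risk
        new_i = rng.binomial(s, p_inf)
        new_r = rng.binomial(i, 1.0 - np.exp(-gamma))
        s, i, r = s - new_i, i + new_i - new_r, r + new_r
        history.append((s, i, r))
        if i == 0:                                   # outbreak burned out
            break
    return history
```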
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-14
... appropriate prevention and treatment approaches to methamphetamine abuse and suicide in a community driven... implemented and executed via an integrated and comprehensive approach through collaborations across multi... experience providing education and outreach on a national scale. This limitation ensures that the awardee...
A Mission Planning Approach for Precision Farming Systems Based on Multi-Objective Optimization.
Zhai, Zhaoyu; Martínez Ortega, José-Fernán; Lucas Martínez, Néstor; Rodríguez-Molina, Jesús
2018-06-02
As the demand for food grows continuously, intelligent agriculture has drawn much attention due to its capability of producing great quantities of food efficiently. The main purpose of intelligent agriculture is to plan agricultural missions properly and use limited resources reasonably with minor human intervention. This paper proposes a Precision Farming System (PFS) as a Multi-Agent System (MAS). Components of the PFS are treated as agents with different functionalities. These agents can form several coalitions to complete complex agricultural missions cooperatively. In a PFS, mission planning should consider several criteria, like expected benefit, energy consumption or equipment loss. Hence, mission planning can be treated as a Multi-objective Optimization Problem (MOP). In order to solve the MOP, an improved algorithm, MP-PSOGA, is proposed, taking advantage of Genetic Algorithms and Particle Swarm Optimization. A simulation, called the precise pesticide spraying mission, is performed to verify the feasibility of the proposed approach. Simulation results illustrate that the proposed approach works properly. This approach enables the PFS to plan missions and allocate scarce resources efficiently. The theoretical analysis and simulation are a good foundation for future work. Once the proposed approach is applied to a real scenario, it is expected to bring significant economic improvement.
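At the core of any MOP solver, MP-PSOGA included, is the Pareto dominance relation used to compare candidate mission plans. A minimal sketch with illustrative objective vectors (all values hypothetical):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(plans):
    """Non-dominated subset of mission plans scored on, e.g.,
    (energy consumption, equipment loss, -expected benefit)."""
    return [p for p in plans if not any(dominates(q, p) for q in plans if q != p)]

plans = [(3.0, 2.0, -5.0), (2.5, 2.5, -4.0), (4.0, 1.0, -5.0), (3.5, 2.5, -3.0)]
print(pareto_front(plans))   # the last plan is dominated and drops out
```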
NASA Astrophysics Data System (ADS)
Sehgal, V.; Lakhanpal, A.; Maheswaran, R.; Khosa, R.; Sridhar, Venkataramana
2018-01-01
This study proposes a wavelet-based multi-resolution modeling approach for statistical downscaling of GCM variables to mean monthly precipitation for five locations in the Krishna Basin, India. Climatic data from NCEP are used for training the proposed models (Jan. '69 to Dec. '94), which are then applied to corresponding CanCM4 GCM variables to simulate precipitation for the validation (Jan. '95-Dec. '05) and forecast (Jan. '06-Dec. '35) periods. The observed precipitation data are obtained from the India Meteorological Department (IMD) gridded precipitation product at 0.25 degree spatial resolution. This paper proposes a novel Multi-Scale Wavelet Entropy (MWE) based approach for clustering climatic variables into suitable clusters using the k-means methodology. Principal Component Analysis (PCA) is used to obtain representative Principal Components (PC) explaining 90-95% of the variance for each cluster. A multi-resolution non-linear approach combining Discrete Wavelet Transform (DWT) and Second Order Volterra (SoV) is used to model the representative PCs to obtain the downscaled precipitation for each downscaling location (W-P-SoV model). The results establish that wavelet-based multi-resolution SoV models perform significantly better than traditional Multiple Linear Regression (MLR) and Artificial Neural Network (ANN) based frameworks. It is observed that the proposed MWE-based clustering and subsequent PCA help reduce the dimensionality of the input climatic variables while capturing more variability compared to stand-alone k-means (no MWE). The proposed models perform better in estimating the number of precipitation events during the non-monsoon periods, whereas the models with clustering without MWE over-estimate the rainfall during the dry season.
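One plausible reading of the MWE feature (the study's exact definition may differ) is the Shannon entropy of the relative wavelet energy across decomposition scales, used as the input for k-means. The sketch below assumes the PyWavelets and scikit-learn libraries and synthetic predictor series:

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans

def multiscale_wavelet_entropy(x, wavelet="db4", level=4):
    """Shannon entropy of the relative wavelet energy across scales:
    low values flag series whose variance concentrates at few scales."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    energy = np.array([np.sum(c ** 2) for c in coeffs])
    p = energy / energy.sum()
    return -np.sum(p * np.log(p + 1e-12))

# cluster climate predictor series by their multi-scale entropy signature
rng = np.random.default_rng(0)
series = rng.standard_normal((30, 512))            # 30 synthetic predictors
features = np.array([[multiscale_wavelet_entropy(s)] for s in series])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
```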
Wardrop, N. A.; Jochem, W. C.; Bird, T. J.; Chamberlain, H. R.; Clarke, D.; Kerr, D.; Bengtsson, L.; Juran, S.; Seaman, V.; Tatem, A. J.
2018-01-01
Population numbers at local levels are fundamental data for many applications, including the delivery and planning of services, election preparation, and response to disasters. In resource-poor settings, recent and reliable demographic data at subnational scales can often be lacking. National population and housing census data can be outdated, inaccurate, or missing key groups or areas, while registry data are generally lacking or incomplete. Moreover, at local scales accurate boundary data are often limited, and high rates of migration and urban growth make existing data quickly outdated. Here we review past and ongoing work aimed at producing spatially disaggregated local-scale population estimates, and discuss how new technologies are now enabling robust and cost-effective solutions. Recent advances in the availability of detailed satellite imagery, geopositioning tools for field surveys, statistical methods, and computational power are enabling the development and application of approaches that can estimate population distributions at fine spatial scales across entire countries in the absence of census data. We outline the potential of such approaches as well as their limitations, emphasizing the political and operational hurdles for acceptance and sustainable implementation of new approaches, and the continued importance of traditional sources of national statistical data. PMID:29555739
Adaptation of ATI-R Scale to Turkish Samples: Validity and Reliability Analyses
ERIC Educational Resources Information Center
Tezci, Erdogan
2017-01-01
Teachers' teaching approaches have become an important issue in the search of quality in education and teaching because of their effect on students' learning. Improvements in teachers' knowledge and awareness of their own teaching approaches enable them to adopt teaching process in accordance with their students' learning styles. The Approaches to…
NASA Astrophysics Data System (ADS)
Liu, Yushi; Poh, Hee Joo
2014-11-01
Computational Fluid Dynamics analysis has become increasingly important in modern urban planning aimed at creating highly livable cities. This paper presents a multi-scale modeling methodology which couples the Weather Research and Forecasting (WRF) Model with the open-source CFD simulation tool OpenFOAM. This coupling enables the simulation of wind flow and pollutant dispersion in urban built-up areas with a high-resolution mesh. In this methodology the meso-scale model WRF provides the boundary conditions for the micro-scale CFD model OpenFOAM. The advantage is that realistic weather conditions are taken into account in the CFD simulation and the complexity of the building layout can be handled with ease by the meshing utility of OpenFOAM. The result is validated against the Joint Urban 2003 Tracer Field Tests in Oklahoma City, and there is reasonably good agreement between the CFD simulation and the field observations. The coupling of WRF and OpenFOAM provides urban planners with a reliable environmental modeling tool for actual urban built-up areas, and it can be further extended to future weather conditions for scenario studies on climate change impact.
Circadian clock and cardiac vulnerability: A time stamp on multi-scale neuroautonomic regulation
NASA Astrophysics Data System (ADS)
Ivanov, Plamen Ch.
2005-03-01
Cardiovascular vulnerability displays a 24-hour pattern with a peak between 9AM and 11AM. This daily pattern in cardiac risk is traditionally attributed to external factors including activity levels and sleep-wake cycles. However, influences from the endogenous circadian pacemaker independent from behaviors may also affect cardiac control. We investigate heartbeat dynamics in healthy subjects recorded throughout a 10-day protocol wherein the sleep/wake and behavior cycles are desynchronized from the endogenous circadian cycle, enabling assessment of circadian factors while controlling for behavior-related factors. We demonstrate that the scaling exponent characterizing temporal correlations in heartbeat dynamics over multiple time scales does exhibit a significant circadian rhythm with a sharp peak at the circadian phase corresponding to the period 9-11AM, and that this rhythm is independent from scheduled behaviors and mean heart rate. Our findings of strong circadian rhythms in the multi-scale heartbeat dynamics of healthy young subjects indicate that the underlying mechanism of cardiac regulation is strongly influenced by the endogenous circadian pacemaker. A similar circadian effect in vulnerable individuals with underlying cardiovascular disease would contribute to the morning peak of adverse cardiac events observed in epidemiological studies.
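The scaling exponent referred to above is commonly estimated with detrended fluctuation analysis (DFA); this abstract does not specify the estimator, so the following compact first-order DFA sketch is our illustration only (it assumes a reasonably long interbeat-interval series):

```python
import numpy as np

def dfa_alpha(x, scales=None):
    """First-order DFA: slope of log fluctuation versus log window size
    for an integrated, locally detrended series (e.g. heartbeat
    intervals); alpha near 1 indicates 1/f-like long-range correlations."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                    # integrated profile
    if scales is None:
        scales = np.unique(np.logspace(1, np.log10(len(x) // 4), 20).astype(int))
    flucts = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        resid = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segs]
        flucts.append(np.sqrt(np.mean(np.square(resid))))
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha
```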
NASA Astrophysics Data System (ADS)
Hunziker, Jürg; Laloy, Eric; Linde, Niklas
2016-04-01
Deterministic inversion procedures can often explain field data, but they only deliver one final subsurface model that depends on the initial model and regularization constraints. This leads to poor insights about the uncertainties associated with the inferred model properties. In contrast, probabilistic inversions can provide an ensemble of model realizations that accurately span the range of possible models that honor the available calibration data and prior information allowing a quantitative description of model uncertainties. We reconsider the problem of inferring the dielectric permittivity (directly related to radar velocity) structure of the subsurface by inversion of first-arrival travel times from crosshole ground penetrating radar (GPR) measurements. We rely on the DREAM(ZS) algorithm that is a state-of-the-art Markov chain Monte Carlo (MCMC) algorithm. Such algorithms need several orders of magnitude more forward simulations than deterministic algorithms and often become infeasible in high parameter dimensions. To enable high-resolution imaging with MCMC, we use a recently proposed dimensionality reduction approach that allows reproducing 2D multi-Gaussian fields with far fewer parameters than a classical grid discretization. We consider herein a dimensionality reduction from 5000 to 257 unknowns. The first 250 parameters correspond to a spectral representation of random and uncorrelated spatial fluctuations while the remaining seven geostatistical parameters are (1) the standard deviation of the data error, (2) the mean and (3) the variance of the relative electric permittivity, (4) the integral scale along the major axis of anisotropy, (5) the anisotropy angle, (6) the ratio of the integral scale along the minor axis of anisotropy to the integral scale along the major axis of anisotropy and (7) the shape parameter of the Matérn function. The latter essentially defines the type of covariance function (e.g., exponential, Whittle, Gaussian). We present an improved formulation of the dimensionality reduction, and numerically show how it reduces artifacts in the generated models and provides better posterior estimation of the subsurface geostatistical structure. We next show that the results of the method compare very favorably against previous deterministic and stochastic inversion results obtained at the South Oyster Bacterial Transport Site in Virginia, USA. The long-term goal of this work is to enable MCMC-based full waveform inversion of crosshole GPR data.
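The spirit of that dimensionality reduction, namely replacing a per-cell parametrization by a handful of geostatistical parameters plus a stream of standard-normal deviates, can be illustrated with an FFT-based multi-Gaussian field generator. This is a sketch of the idea under assumed forms (an exponential-covariance-type spectrum, illustrative parameter values), not the authors' algorithm:

```python
import numpy as np

def gaussian_field_2d(n, integral_scale, angle=0.0, aniso=1.0, seed=0):
    """Anisotropic 2D multi-Gaussian field: a few geostatistical
    parameters plus white noise define the whole n-by-n model."""
    rng = np.random.default_rng(seed)
    kx, ky = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n))
    c, s = np.cos(angle), np.sin(angle)
    kr = np.hypot((c * kx + s * ky) * integral_scale,
                  (-s * kx + c * ky) * integral_scale * aniso)
    # spectrum of an exponential-type covariance in 2D (up to a constant)
    power = (1.0 + (2 * np.pi * kr) ** 2) ** -1.5
    noise = rng.standard_normal((n, n))
    field = np.real(np.fft.ifft2(np.sqrt(power) * np.fft.fft2(noise)))
    return (field - field.mean()) / field.std()

# e.g. a synthetic relative-permittivity model for a forward solver
perm = 5.0 + 0.3 * gaussian_field_2d(100, integral_scale=10.0, aniso=0.3, angle=0.4)
```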
Evaluation of Scale Reliability with Binary Measures Using Latent Variable Modeling
ERIC Educational Resources Information Center
Raykov, Tenko; Dimitrov, Dimiter M.; Asparouhov, Tihomir
2010-01-01
A method for interval estimation of scale reliability with discrete data is outlined. The approach is applicable with multi-item instruments consisting of binary measures, and is developed within the latent variable modeling methodology. The procedure is useful for evaluation of consistency of single measures and of sum scores from item sets…
2016-06-22
... this assumption in a large-scale, 2-week military training exercise. We conducted a social network analysis of email communications among the multi... exponential random graph models challenge the aforementioned assumption, as increased email output was associated with lower individual situation... email links were more commonly formed among members of the command staff with both similar functions and levels of situation awareness, than between ...
NASA Astrophysics Data System (ADS)
Khan, Faisal; Enzmann, Frieder; Kersten, Michael
2016-03-01
Image processing of X-ray-computed polychromatic cone-beam micro-tomography (μXCT) data of geological samples mainly involves artefact reduction and phase segmentation. For the former, the main beam-hardening (BH) artefact is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. A Matlab code for this approach is provided in the Appendix. The final BH-corrected image is extracted from the residual data, or from the difference between the surface elevation values and the original grey-scale values. For the segmentation, we propose a novel least-squares support vector machine (LS-SVM, an algorithm for pixel-based multi-phase classification) approach. A receiver operating characteristic (ROC) analysis was performed on BH-corrected and uncorrected samples to show that BH correction is in fact an important prerequisite for accurate multi-phase classification. The combination of the two approaches was thus used to successfully classify three multi-phase rock core samples of varying complexity.
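A Python analogue of the best-fit quadratic surface step might look as follows (the paper's reference implementation is the Matlab code in its appendix; this sketch only mirrors the stated idea of subtracting a least-squares quadratic surface from a reconstructed slice):

```python
import numpy as np

def bh_correct(img):
    """Fit z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2 to a reconstructed
    slice by linear least squares and return the residual image, i.e.
    grey values with the smooth beam-hardening offset removed."""
    ny, nx = img.shape
    x, y = np.meshgrid(np.arange(nx, dtype=float), np.arange(ny, dtype=float))
    A = np.column_stack([np.ones(img.size), x.ravel(), y.ravel(),
                         x.ravel() ** 2, (x * y).ravel(), y.ravel() ** 2])
    coef, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    surface = (A @ coef).reshape(ny, nx)
    return img - surface
```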
Ontology-based approach for in vivo human connectomics: the medial Brodmann area 6 case study
Moreau, Tristan; Gibaud, Bernard
2015-01-01
Different non-invasive neuroimaging modalities and multi-level analysis of human connectomics datasets yield a great amount of heterogeneous data which are hard to integrate into a unified representation. Biomedical ontologies can provide a suitable integrative framework for domain knowledge as well as a tool to facilitate information retrieval, data sharing and data comparisons across scales, modalities and species. In particular, there is an urgent need to fill the gap between neurobiology and in vivo human connectomics in order to better take into account the reality highlighted by Magnetic Resonance Imaging (MRI) and relate it to existing brain knowledge. The aim of this study was to create a neuroanatomical ontology, called the “Human Connectomics Ontology” (HCO), in order to represent macroscopic gray matter regions connected with fiber bundles assessed by diffusion tractography and to annotate MRI connectomics datasets acquired in the living human brain. First a neuroanatomical “view” called NEURO-DL-FMA was extracted from the reference ontology Foundational Model of Anatomy (FMA) in order to construct a gross anatomy ontology of the brain. HCO extends NEURO-DL-FMA by introducing entities (such as “MR_Node” and “MR_Route”) and object properties (such as “tracto_connects”) pertaining to MR connectivity. The Web Ontology Language Description Logics (OWL DL) formalism was used in order to enable reasoning with common reasoning engines. Moreover, experimental work was carried out in order to demonstrate how the HCO can be effectively used to address complex queries concerning in vivo MRI connectomics datasets. Neuroimaging datasets of five healthy subjects were annotated with terms of the HCO, and a multi-level analysis of the connectivity patterns assessed by diffusion tractography of the right medial Brodmann Area 6 was performed using a set of queries. This approach can facilitate the comparison of data across scales, modalities and species. PMID:25914640
Spatial vision processes: From the optical image to the symbolic structures of contour information
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.
1988-01-01
The significance of machine and natural vision is discussed together with the need for a general approach to image acquisition and processing aimed at recognition. An exploratory scheme is proposed which encompasses the definition of spatial primitives, intrinsic image properties and sampling, 2-D edge detection at the smallest scale, the construction of spatial primitives from edges, and the isolation of contour information from textural information. Concepts drawn from or suggested by natural vision at both perceptual and physiological levels are relied upon heavily to guide the development of the overall scheme. The scheme is intended to provide a larger context in which to place the emerging technology of detector array focal-plane processors. The approach differs from many recent efforts in edge detection and image coding by emphasizing smallest scale edge detection as a foundation for multi-scale symbolic processing while diminishing somewhat the importance of image convolutions with multi-scale edge operators. Cursory treatments of information theory illustrate that the direct application of this theory to structural information in images could not be realized.
Highly efficient spatial data filtering in parallel using the opensource library CPPPO
NASA Astrophysics Data System (ADS)
Municchi, Federico; Goniva, Christoph; Radl, Stefan
2016-10-01
CPPPO is a compilation of parallel data processing routines developed with the aim of creating a library for "scale bridging" (i.e. connecting different scales by means of closure models) in a multi-scale approach. CPPPO features a number of parallel filtering algorithms designed for use with structured and unstructured Eulerian meshes, as well as Lagrangian data sets. In addition, data can be processed on the fly, allowing the collection of relevant statistics without saving individual snapshots of the simulation state. Our library is provided with an interface to the widely-used CFD solver OpenFOAM®, and can be easily connected to any other software package via interface modules. Also, we introduce a novel, extremely efficient approach to parallel data filtering, and show that our algorithms scale super-linearly on multi-core clusters. Furthermore, we provide a guideline for choosing the optimal Eulerian cell selection algorithm depending on the number of CPU cores used. Finally, we demonstrate the accuracy and the parallel scalability of CPPPO in a showcase focusing on heat and mass transfer from a dense bed of particles.
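The kind of operation CPPPO parallelizes can be stated compactly. Below is a deliberately serial top-hat filter sketch for scattered (Lagrangian or cell-centered) data; it is illustrative only, and CPPPO's actual cell-selection algorithms and interfaces differ:

```python
import numpy as np

def top_hat_filter(points, values, centers, radius):
    """For each sampling center, average all data points lying within
    `radius` -- the basic filtering kernel that libraries like CPPPO
    evaluate in parallel and on the fly during a simulation."""
    out = np.full(len(centers), np.nan)
    for i, c in enumerate(centers):
        mask = np.linalg.norm(points - c, axis=1) <= radius
        if mask.any():
            out[i] = values[mask].mean()
    return out

# illustrative use on random 3-D particle data
rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, (5000, 3))                 # particle positions
temp = 300 + 20 * pts[:, 2]                        # e.g. a temperature field
centers = rng.uniform(0.2, 0.8, (10, 3))
print(top_hat_filter(pts, temp, centers, radius=0.1))
```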
Research highlights: Microtechnologies for engineering the cellular environment.
Tseng, Peter; Kunze, Anja; Kittur, Harsha; Di Carlo, Dino
2014-04-07
In this issue we highlight recent microtechnology-enabled approaches to control the physical and biomolecular environment around cells: (1) developing micropatterned surfaces to quantify cell affinity choices between two adhesive patterns, (2) controlling topographical cues to align cells and improve reprogramming to a pluripotent state, and (3) controlling gradients of biomolecules to maintain pluripotency in embryonic stem cells. Quantitative readouts of cell-surface affinity in environments with several cues should open up avenues in tissue engineering, where self-assembly of complex multi-cellular structures becomes possible by precisely engineering relative adhesive cues in three-dimensional constructs. Methods of simple and local epigenetic modification of chromatin structure with microtopography and biomolecular gradients should also be of use in regenerative medicine, as well as in high-throughput quantitative analysis of the external signals that impact and can be used to control cells. Overall, approaches to engineer the cellular environment will continue to be an area of growth in the microfluidic and lab-on-a-chip community, as the scale of the technologies seamlessly matches that of biological systems. However, because of regulations and other complexities with tissue-engineered therapies, these micro-engineering approaches will likely first impact organ-on-a-chip technologies that are poised to improve drug discovery pipelines.
[Robotic fundoplication for gastro-oesophageal reflux disease].
Costi, Renato; Himpens, Jacques; Iusco, Domenico; Sarli, Leopoldo; Violi, Vincenzo; Roncoroni, Luigi; Cadière, Guy Bernard
2004-01-01
Presented as a possible "second revolution" in general surgery after the introduction of laparoscopy in recent years, the robotic approach to minimally invasive surgery has not yet seen wide, large-scale diffusion among general surgeons and is still considered experimental. In general surgery, the laparoscopic treatment of gastro-oesophageal reflux is the second most frequently performed robot-assisted procedure after cholecystectomy. A review of the literature and an analysis of the costs allow a preliminary evaluation of the pros and cons of robotic fundoplication, which may then be applicable to other general surgery procedures. Eleven articles report 91 cases of robotic fundoplication (75 Nissen, 9 Thal, 7 Toupet). To date, there is no evidence of benefit in terms of duration of surgery, rate of complications or hospital stay. Moreover, robotic fundoplication is more expensive than the traditional laparoscopic approach (the additional cost per procedure due to robotics is 1,882.97 euros). Only further technological upgrades and advances will make the use of robotics competitive in general surgery. The development of multi-functional instruments and of tactile feedback at the console, enlargement of the three-dimensional laparoscopic view, and specific team training will enable robotic surgery to be extended to increasingly difficult procedures and to non-specialised environments.
Practical system for the generation of pulsed quantum frequency combs.
Roztocki, Piotr; Kues, Michael; Reimer, Christian; Wetzel, Benjamin; Sciara, Stefania; Zhang, Yanbing; Cino, Alfonso; Little, Brent E; Chu, Sai T; Moss, David J; Morandotti, Roberto
2017-08-07
The on-chip generation of large and complex optical quantum states will enable low-cost and accessible advances for quantum technologies, such as secure communications and quantum computation. Integrated frequency combs are on-chip light sources with a broad spectrum of evenly spaced frequency modes, commonly generated by four-wave mixing in optically excited nonlinear micro-cavities; their recent use for quantum state generation has provided a solution for scalable and multi-mode quantum light sources. Pulsed quantum frequency combs are of particular interest, since they allow the generation of single-frequency-mode photons, required for scaling state complexity towards, e.g., multi-photon states, and for quantum information applications. However, generation schemes for such pulsed combs have to date relied on micro-cavity excitation by lasers external to the sources, an approach that is neither versatile nor power-efficient and is impractical for scalable realizations of quantum technologies. Here, we introduce an actively modulated, nested-cavity configuration that exploits the resonance pass-band characteristic of the micro-cavity to enable mode-locked and energy-efficient excitation. We demonstrate that the scheme allows the generation of high-purity photons at large coincidence-to-accidental ratios (CAR). Furthermore, by increasing the repetition rate of the excitation field via harmonic mode-locking (i.e., driving the cavity modulation at harmonics of the fundamental repetition rate), we increased the pair production rate (i.e., the source efficiency) while maintaining a high CAR and photon purity. Our approach represents a significant step towards the realization of fully on-chip, stable, and versatile sources of pulsed quantum frequency combs, crucial for the development of accessible quantum technologies.
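The coincidence-to-accidental ratio (CAR) quoted above is typically estimated by counting coincidences in a narrow window and re-counting after offsetting one channel. The sketch below shows one such estimate from photon time tags; the window width, offset, and counting scheme are illustrative assumptions, not the authors' analysis code.

```python
# Illustrative CAR estimate from photon time tags (not the authors' code).
import numpy as np

def car(signal_tags, idler_tags, window=1e-9, offset=100e-9):
    """CAR ~ coincidences / accidentals, counted in equal windows.

    Accidentals are estimated by delaying one channel by an offset
    much larger than the coincidence window and re-counting.
    """
    def coincidences(a, b):
        b = np.sort(b)
        idx = np.searchsorted(b, a)
        left = np.abs(b[np.clip(idx - 1, 0, len(b) - 1)] - a) <= window
        right = np.abs(b[np.clip(idx, 0, len(b) - 1)] - a) <= window
        return np.count_nonzero(left | right)

    true_cc = coincidences(signal_tags, idler_tags)
    accidentals = coincidences(signal_tags + offset, idler_tags)
    return true_cc / max(accidentals, 1)
```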
High-Speed Real-Time Resting-State fMRI Using Multi-Slab Echo-Volumar Imaging
Posse, Stefan; Ackley, Elena; Mutihac, Radu; Zhang, Tongsheng; Hummatov, Ruslan; Akhtari, Massoud; Chohan, Muhammad; Fisch, Bruce; Yonas, Howard
2013-01-01
We recently demonstrated that ultra-high-speed real-time fMRI using multi-slab echo-volumar imaging (MEVI) significantly increases sensitivity for mapping task-related activation and resting-state networks (RSNs) compared to echo-planar imaging (Posse et al., 2012). In the present study we characterize the sensitivity of MEVI for mapping RSN connectivity dynamics, comparing independent component analysis (ICA) and a novel seed-based connectivity analysis (SBCA) that combines sliding-window correlation analysis with meta-statistics. This SBCA approach is shown to minimize the effects of confounds, such as movement and CSF and white matter signal changes, and enables real-time monitoring of RSN dynamics at time scales of tens of seconds. We demonstrate highly sensitive mapping of eloquent cortex in the vicinity of brain tumors and arterio-venous malformations, and detection of abnormal resting-state connectivity in epilepsy. In patients with motor impairment, resting-state fMRI provided focal localization of sensorimotor cortex, compared with more diffuse activation in task-based fMRI. The fast acquisition speed of MEVI enabled segregation of cardiac-related signal pulsation using ICA, which revealed distinct regional differences in pulsation amplitude and waveform, elevated signal pulsation in patients with arterio-venous malformations, and a trend toward reduced pulsatility in the gray matter of patients compared with healthy controls. Mapping cardiac pulsation in cortical gray matter may carry important functional information that distinguishes healthy from diseased tissue vasculature. This novel fMRI methodology is particularly promising for mapping eloquent cortex in patients with neurological disease, who show variable degrees of cooperation in task-based fMRI. In conclusion, ultra-high-speed real-time fMRI enhances the sensitivity of mapping the dynamics of resting-state connectivity and cerebro-vascular pulsatility for clinical and neuroscience research applications. PMID:23986677
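A simplified analogue of the sliding-window correlation stage of SBCA can be sketched as follows; the window length, step size, and the omission of the meta-statistics and confound-handling stages are simplifications for illustration.

```python
# Sketch of sliding-window seed-based connectivity (the meta-statistics
# and confound handling of the paper's SBCA are omitted here).
import numpy as np

def sliding_window_connectivity(seed_ts, voxel_ts, window=30, step=5):
    """Per-window Pearson correlation of a seed with every voxel.

    seed_ts  : (T,)   seed time course
    voxel_ts : (T, V) voxel time courses
    """
    maps = []
    for start in range(0, len(seed_ts) - window + 1, step):
        s = seed_ts[start:start + window]
        v = voxel_ts[start:start + window]
        s_z = (s - s.mean()) / s.std()
        v_z = (v - v.mean(axis=0)) / v.std(axis=0)
        maps.append(s_z @ v_z / window)    # Pearson r per voxel
    return np.array(maps)                  # (n_windows, V)
```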
Numerical models for fluid-grains interactions: opportunities and limitations
NASA Astrophysics Data System (ADS)
Esteghamatian, Amir; Rahmani, Mona; Wachs, Anthony
2017-06-01
In the framework of a multi-scale approach, we develop numerical models for suspension flows. At the micro-scale level, we perform particle-resolved numerical simulations using a Distributed Lagrange Multiplier/Fictitious Domain approach. At the meso-scale level, we use a two-way Euler/Lagrange approach with a Gaussian filtering kernel to model fluid-solid momentum transfer. At both levels, particles are individually tracked in a Lagrangian way and all inter-particle collisions are computed by a Discrete Element/Soft-Sphere method. These numerical models have been extended to handle particles of arbitrary shape (non-spherical, angular and even non-convex) as well as to treat heat and mass transfer. All simulation tools are fully MPI-parallel with standard domain decomposition and run on supercomputers with satisfactory scalability on up to a few thousand cores. The main asset of multi-scale analysis is the ability to extend our comprehension of the dynamics of suspension flows based on the knowledge acquired from high-fidelity micro-scale simulations, and to use that knowledge to improve the meso-scale model. We illustrate how we can benefit from this strategy for a fluidized bed, where we introduce a stochastic drag force model derived from micro-scale simulations to recover the proper level of particle fluctuations. Conversely, we discuss the limitations of such modelling tools, such as their limited ability to capture lubrication forces and boundary layers in highly inertial flows. We suggest ways to overcome these limitations in order to further enhance the capabilities of the numerical models.
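The Gaussian filtering kernel mentioned for the meso-scale Euler/Lagrange coupling can be illustrated with a minimal sketch: a particle's force on the fluid is spread over nearby cells with normalised Gaussian weights. The grid, kernel width, and force below are placeholder values, not the authors' settings.

```python
# Illustrative Gaussian spreading of a Lagrangian particle force onto
# Eulerian cells (two-way coupling); numbers are placeholder values.
import numpy as np

def gaussian_weights(particle_pos, cell_centers, sigma):
    """Normalised Gaussian weights of one particle over nearby cells."""
    r2 = np.sum((cell_centers - particle_pos) ** 2, axis=1)
    w = np.exp(-r2 / (2.0 * sigma ** 2))
    return w / w.sum()

cells = np.array([[x, y, z] for x in range(4)
                  for y in range(4) for z in range(4)], dtype=float)
drag = np.array([0.0, 0.0, -1.0])           # placeholder drag force
w = gaussian_weights(np.array([1.5, 1.5, 1.5]), cells, sigma=0.8)
force_on_fluid = np.outer(w, drag)          # (n_cells, 3) per-cell forces
```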
Approaches for advancing scientific understanding of macrosystems
Levy, Ofir; Ball, Becky A.; Bond-Lamberty, Ben; Cheruvelil, Kendra S.; Finley, Andrew O.; Lottig, Noah R.; Punyasena, Surangi W.; Xiao, Jingfeng; Zhou, Jizhong; Buckley, Lauren B.; Filstrup, Christopher T.; Keitt, Tim H.; Kellner, James R.; Knapp, Alan K.; Richardson, Andrew D.; Tcheng, David; Toomey, Michael; Vargas, Rodrigo; Voordeckers, James W.; Wagner, Tyler; Williams, John W.
2014-01-01
The emergence of macrosystems ecology (MSE), which focuses on regional- to continental-scale ecological patterns and processes, builds upon a history of long-term and broad-scale studies in ecology. Scientists face the difficulty of integrating the many elements that make up macrosystems, which consist of hierarchical processes at interacting spatial and temporal scales. Researchers must also identify the most relevant scales and variables to be considered, the required data resources, and the appropriate study design to provide the proper inferences. The large volumes of multi-thematic data often associated with macrosystem studies typically require validation, standardization, and assimilation. Finally, analytical approaches need to describe how cross-scale and hierarchical dynamics and interactions relate to macroscale phenomena. Here, we elaborate on some key methodological challenges of MSE research and discuss existing and novel approaches to meet them.
Brain vascular image segmentation based on fuzzy local information C-means clustering
NASA Astrophysics Data System (ADS)
Hu, Chaoen; Liu, Xia; Liang, Xiao; Hui, Hui; Yang, Xin; Tian, Jie
2017-02-01
Light sheet fluorescence microscopy (LSFM) is a powerful optical fluorescence microscopy technique that enables observation of the mouse brain vascular network at cellular resolution. However, micro-vessel structures exhibit intensity inhomogeneity in LSFM images, which makes it difficult to extract line structures. In this work, we developed a vascular image segmentation method that enhances vessel details, which should be useful for estimating statistics such as micro-vessel density. Since the eigenvalues of the Hessian matrix and their signs describe different geometric structures in images, they can be used to construct a vascular similarity function and enhance line signals; the main idea of our method is therefore to cluster the pixel values of the enhanced image. Our method comprises three steps: 1) calculate the multi-scale gradients and the differences between the eigenvalues of the Hessian matrix; 2) train a feed-forward neural network on 2.26 million pixels to model the correlations between the multi-scale gradients and the eigenvalue differences, generating the enhanced micro-vessel structures; 3) apply fuzzy local information c-means clustering (FLICM) to cluster the pixel values of the enhanced image. To verify the feasibility and effectiveness of this method, mouse brain vascular images were acquired with a commercial light-sheet microscope in our lab. In these experiments the segmentation method reached a Dice similarity coefficient of up to 85%. The results illustrate that our approach to extracting line structures of blood vessels dramatically improves the vascular image and enables accurate extraction of blood vessels in LSFM images.
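Step 1 of the pipeline, the multi-scale Hessian eigenvalue differences, can be sketched as follows; the 2-D simplification and the choice of scales are assumptions, and the network training and FLICM stages are omitted.

```python
# Sketch of step 1: multi-scale Hessian eigenvalue differences, which are
# large for line-like (vessel) structures. 2-D case; scales are examples.
import numpy as np
from scipy import ndimage

def hessian_eigen_difference(image, sigma):
    """Eigenvalue difference of the 2-D Hessian at scale sigma."""
    hxx = ndimage.gaussian_filter(image, sigma, order=(0, 2))
    hyy = ndimage.gaussian_filter(image, sigma, order=(2, 0))
    hxy = ndimage.gaussian_filter(image, sigma, order=(1, 1))
    # for a symmetric 2x2 matrix, lambda_1 - lambda_2 has this closed form
    return np.sqrt((hxx - hyy) ** 2 + 4.0 * hxy ** 2)

img = np.random.rand(128, 128)        # stand-in for an LSFM image slice
features = [hessian_eigen_difference(img, s) for s in (1.0, 2.0, 4.0)]
```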
Reconstruction of late Holocene climate based on tree growth and mechanistic hierarchical models
Tipton, John; Hooten, Mevin B.; Pederson, Neil; Tingley, Martin; Bishop, Daniel
2016-01-01
Reconstruction of pre-instrumental, late Holocene climate is important for understanding how climate has changed in the past and how climate might change in the future. Statistical prediction of paleoclimate from tree ring widths is challenging because tree ring widths are a one-dimensional summary of annual growth that represents a multi-dimensional set of climatic and biotic influences. We develop a Bayesian hierarchical framework using a nonlinear, biologically motivated tree ring growth model to jointly reconstruct temperature and precipitation in the Hudson Valley, New York. Using a common growth function to describe the response of a tree to climate, we allow for species-specific parameterizations of the growth response. To enable predictive backcasts, we model the climate variables with a vector autoregressive process on an annual timescale coupled with a multivariate conditional autoregressive process that accounts for temporal correlation and cross-correlation between temperature and precipitation on a monthly scale. Our multi-scale temporal model allows for flexibility in the climate response through time at different temporal scales and predicts reasonable climate scenarios given tree ring width data.
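The annual-scale climate model described above is a vector autoregression; a minimal sketch of simulating such a VAR(1) process for coupled temperature and precipitation follows. The coefficient matrix and noise covariance are illustrative, and the paper's monthly conditional autoregressive layer is omitted.

```python
# Minimal VAR(1) simulation of coupled temperature/precipitation anomalies
# (illustrative parameters; the paper's monthly CAR layer is omitted).
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.60, 0.10],            # temp depends on past temp & precip
              [0.05, 0.50]])           # precip likewise
Sigma = np.array([[1.0, 0.3],          # cross-correlated innovations
                  [0.3, 1.0]])

def simulate_var1(n_years, A, Sigma):
    x = np.zeros((n_years, 2))
    for t in range(1, n_years):
        x[t] = A @ x[t - 1] + rng.multivariate_normal(np.zeros(2), Sigma)
    return x

climate = simulate_var1(500, A, Sigma)  # columns: temperature, precipitation
```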
Multi-level systems modeling and optimization for novel aircraft
NASA Astrophysics Data System (ADS)
Subramanian, Shreyas Vathul
This research combines the disciplines of system-of-systems (SoS) modeling, platform-based design, optimization and evolving design spaces to achieve a novel capability for designing solutions to key aeronautical mission challenges. A central innovation in this approach is the confluence of multi-level modeling (from sub-systems to the aircraft system to aeronautical systems-of-systems) in a way that coordinates the appropriate problem formulations at each level and enables parametric search in design libraries for solutions that satisfy level-specific objectives. The work here addresses the topic of SoS optimization; it discusses problem formulation, solution strategy, and the need for new algorithms that address special features of this problem type, and demonstrates these concepts on two example application problems: a surveillance UAV swarm problem, and the design of noise-optimal aircraft and approach procedures. This topic is critical, since most new capabilities in aeronautics will be provided not by a single air vehicle but by aeronautical systems of systems. At the same time, many new aircraft concepts are pressing the boundaries of cyber-physical complexity through the myriad dynamic and adaptive sub-systems that are rising up the TRL (Technology Readiness Level) scale. This compositional approach is envisioned to be active at three levels: validated sub-systems are integrated to form conceptual aircraft, which are further connected with others to perform a challenging mission capability at the SoS level. While these multiple levels represent layers of physical abstraction, each discipline is associated with tools of varying fidelity, forming strata of 'analysis abstraction'. Further, the design (composition) is guided by a suitable hierarchical complexity metric formulated for the management of complexity in both the problem (as part of the generative procedure and the selection of fidelity level) and the product (i.e., is the mission best achieved by a large collection of interacting simple systems, or by relatively few highly capable, complex air vehicles?). The vastly unexplored area of optimization in evolving design spaces is studied and incorporated into the SoS optimization framework. We envision a framework that resembles a multi-level, multi-fidelity, multi-disciplinary assemblage of optimization problems. The challenge is not simply one of scaling up to a new level (the SoS), but of recognizing that the aircraft sub-systems and the integrated vehicle are now intensely cyber-physical, with hardware and software components interacting in complex ways that give rise to new and improved capabilities. The work presented here is a step closer to modeling the information flow that exists in realistic SoS optimization problems between sub-contractors, contractors and the SoS architect.
Three dimensional fabrication at small size scales
Leong, Timothy G.; Zarafshar, Aasiyeh M.; Gracias, David H.
2010-01-01
Despite the fact that we live in a three-dimensional (3D) world and macroscale engineering is 3D, conventional sub-mm scale engineering is inherently two-dimensional (2D). New fabrication and patterning strategies are needed to enable truly three-dimensionally-engineered structures at small size scales. Here, we review strategies that have been developed over the last two decades that seek to enable such millimeter to nanoscale 3D fabrication and patterning. A focus of this review is the strategy of self-assembly, specifically in a biologically inspired, more deterministic form known as self-folding. Self-folding methods can leverage the strengths of lithography to enable the construction of precisely patterned 3D structures and “smart” components. This self-assembling approach is compared with other 3D fabrication paradigms, and its advantages and disadvantages are discussed. PMID:20349446
A Cloud-based Approach to Medical NLP
Chard, Kyle; Russell, Michael; Lussier, Yves A.; Mendonça, Eneida A; Silverstein, Jonathan C.
2011-01-01
Natural Language Processing (NLP) enables access to deep content embedded in medical texts. To date, NLP has not fulfilled its promise of enabling robust clinical encoding, clinical use, quality improvement, and research. We submit that this is in part due to poor accessibility, scalability, and flexibility of NLP systems. We describe here an approach and system which leverages cloud-based approaches such as virtual machines and Representational State Transfer (REST) to extract, process, synthesize, mine, compare/contrast, explore, and manage medical text data in a flexibly secure and scalable architecture. Available architectures in which our Smntx (pronounced as semantics) system can be deployed include: virtual machines in a HIPAA-protected hospital environment, brought up to run analysis over bulk data and destroyed in a local cloud; a commercial cloud for a large complex multi-institutional trial; and within other architectures such as caGrid, i2b2, or NHIN. PMID:22195072
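As a hypothetical illustration of the REST interaction style described, a client might submit text for annotation as below; the URL, route, payload, and response fields are invented for illustration and are not Smntx's documented API.

```python
# Hypothetical REST client for a cloud NLP service (invented endpoint
# and JSON shape; not the documented Smntx API).
import requests

resp = requests.post(
    "https://nlp.example.org/api/annotate",   # hypothetical endpoint
    json={"text": "Patient denies chest pain or dyspnea."},
    timeout=30,
)
resp.raise_for_status()
for concept in resp.json().get("concepts", []):  # hypothetical field
    print(concept)
```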