Jaiswal, Astha; Godinez, William J; Eils, Roland; Lehmann, Maik Jorg; Rohr, Karl
2015-11-01
Automatic fluorescent particle tracking is an essential task for studying the dynamics of a large number of biological structures at the sub-cellular level. We have developed a probabilistic particle tracking approach based on multi-scale detection and two-step multi-frame association. The multi-scale detection scheme allows coping with particles in close proximity. For finding associations, we have developed a two-step multi-frame algorithm, which is based on a temporally semi-global formulation as well as spatially local and global optimization. In the first step, reliable associations are determined for each particle individually in local neighborhoods. In the second step, the global spatial information over multiple frames is exploited jointly to determine optimal associations. The multi-scale detection scheme and the multi-frame association finding algorithm have been combined with a probabilistic tracking approach based on the Kalman filter. We have successfully applied our probabilistic tracking approach to synthetic as well as real microscopy image sequences of virus particles and quantified the performance. We found that the proposed approach outperforms previous approaches.
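The abstract above builds its tracker on a Kalman filter. As a minimal illustration of that building block only (not the authors' multi-scale detection or two-step multi-frame association), the sketch below runs one predict/update cycle for a single particle with a constant-velocity motion model; the matrices F, H, Q, R and all numerical values are assumptions chosen for the example.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for one particle in 2-D.
# State x = [px, py, vx, vy]; measurement z = [px, py].
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)      # state transition (assumed model)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)      # observation model
Q = 0.01 * np.eye(4)                           # process noise (assumed)
R = 0.50 * np.eye(2)                           # measurement noise (assumed)

def predict(x, P):
    """Propagate state and covariance one frame ahead."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with a detected particle position z."""
    y = z - H @ x                              # innovation
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

# Example: one predict/update cycle for a particle detected at (5.2, 3.9).
x, P = np.array([5.0, 4.0, 0.1, -0.1]), np.eye(4)
x, P = predict(x, P)
x, P = update(x, P, np.array([5.2, 3.9]))
print(x)
```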
NASA Astrophysics Data System (ADS)
Tabik, S.; Romero, L. F.; Mimica, P.; Plata, O.; Zapata, E. L.
2012-09-01
A broad area in astronomy focuses on simulating extragalactic objects based on Very Long Baseline Interferometry (VLBI) radio-maps. Several algorithms in this scope simulate what the observed radio-maps would be if emitted from a predefined extragalactic object. This work analyzes the performance and scaling of this kind of algorithm on multi-socket, multi-core architectures. In particular, we evaluate a sharing approach, a privatizing approach and a hybrid approach on systems with a complex memory hierarchy that includes a shared Last Level Cache (LLC). In addition, we investigate which manual processes can be systematized and then automated in future work. The experiments show that the data-privatizing model scales efficiently on medium-scale multi-socket, multi-core systems (up to 48 cores), while the sharing approach, regardless of algorithmic and scheduling optimizations, is unable to reach acceptable scalability on more than one socket. However, the hybrid model with a specific level of data-sharing provides the best scalability over all the multi-socket, multi-core systems used.
Multi-scale heat and mass transfer modelling of cell and tissue cryopreservation
Xu, Feng; Moon, Sangjun; Zhang, Xiaohui; Shao, Lei; Song, Young Seok; Demirci, Utkan
2010-01-01
Cells and tissues undergo complex physical processes during cryopreservation. Understanding the underlying physical phenomena is critical to improve current cryopreservation methods and to develop new techniques. Here, we describe multi-scale approaches for modelling cell and tissue cryopreservation including heat transfer at macroscale level, crystallization, cell volume change and mass transport across cell membranes at microscale level. These multi-scale approaches allow us to study cell and tissue cryopreservation. PMID:20047939
Multi-scale modelling of rubber-like materials and soft tissues: an appraisal
Puglisi, G.
2016-01-01
We survey, in a partial way, multi-scale approaches for the modelling of rubber-like and soft tissues and compare them with classical macroscopic phenomenological models. Our aim is to show how it is possible to obtain practical mathematical models for the mechanical behaviour of these materials incorporating mesoscopic (network scale) information. Multi-scale approaches are crucial for the theoretical comprehension and prediction of the complex mechanical response of these materials. Moreover, such models are fundamental in the perspective of the design, through manipulation at the micro- and nano-scales, of new polymeric and bioinspired materials with exceptional macroscopic properties. PMID:27118927
Brad C. Timm; Kevin McGarigal; Samuel A. Cushman; Joseph L. Ganey
2016-01-01
The efficacy of future habitat selection studies will benefit from a multi-scale approach. In addition to potentially providing increased explanatory power and predictive capacity, multi-scale habitat models enhance our understanding of the scales at which species respond to their environment, which is critical knowledge required to implement effective...
Herrgård, Markus; Sukumara, Sumesh; Campodonico, Miguel; Zhuang, Kai
2015-12-01
In recent years, bio-based chemicals have gained interest as a renewable alternative to petrochemicals. However, there is a significant need to assess the technological, biological, economic and environmental feasibility of bio-based chemicals, particularly during the early research phase. Recently, the Multi-scale framework for Sustainable Industrial Chemicals (MuSIC) was introduced to address this issue by integrating modelling approaches at different scales ranging from cellular to ecological scales. This framework can be further extended by incorporating modelling of the petrochemical value chain and the de novo prediction of metabolic pathways connecting existing host metabolism to desirable chemical products. This multi-scale, multi-disciplinary framework for quantitative assessment of bio-based chemicals will play a vital role in supporting engineering, strategy and policy decisions as we progress towards a sustainable chemical industry. © 2015 Authors; published by Portland Press Limited.
Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale
NASA Astrophysics Data System (ADS)
Kreibich, Heidi; Schröter, Kai; Merz, Bruno
2016-05-01
Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, even more so in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities that were affected during the 2002 flood by the River Mulde in Saxony, Germany, by comparison to official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling also on the meso-scale. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like the BT-FLEMO model used in this study, which inherently provide uncertainty information, is the way forward.
Towards large scale multi-target tracking
NASA Astrophysics Data System (ADS)
Vo, Ba-Ngu; Vo, Ba-Tuong; Reuter, Stephan; Lam, Quang; Dietmayer, Klaus
2014-06-01
Multi-target tracking is intrinsically an NP-hard problem, and the complexity of multi-target tracking solutions usually does not scale gracefully with problem size. Multi-target tracking for on-line applications involving a large number of targets is extremely challenging. This article demonstrates the capability of the random finite set approach to provide large-scale multi-target tracking algorithms. In particular, it is shown that an approximate filter known as the labeled multi-Bernoulli filter can simultaneously track one thousand five hundred targets in clutter on a standard laptop computer.
Multiscale modeling and simulation of brain blood flow
NASA Astrophysics Data System (ADS)
Perdikaris, Paris; Grinberg, Leopold; Karniadakis, George Em
2016-02-01
The aim of this work is to present an overview of recent advances in multi-scale modeling of brain blood flow. In particular, we present some approaches that enable the in silico study of multi-scale and multi-physics phenomena in the cerebral vasculature. We discuss the formulation of continuum and atomistic modeling approaches, present a consistent framework for their concurrent coupling, and list some of the challenges that one needs to overcome in achieving a seamless and scalable integration of heterogeneous numerical solvers. The effectiveness of the proposed framework is demonstrated in a realistic case involving modeling the thrombus formation process taking place on the wall of a patient-specific cerebral aneurysm. This highlights the ability of multi-scale algorithms to resolve important biophysical processes that span several spatial and temporal scales, potentially yielding new insight into the key aspects of brain blood flow in health and disease. Finally, we discuss open questions in multi-scale modeling and emerging topics of future research.
Control of Thermo-Acoustics Instabilities: The Multi-Scale Extended Kalman Approach
NASA Technical Reports Server (NTRS)
Le, Dzu K.; DeLaat, John C.; Chang, Clarence T.
2003-01-01
"Multi-Scale Extended Kalman" (MSEK) is a novel model-based control approach recently found to be effective for suppressing combustion instabilities in gas turbines. A control law formulated in this approach for fuel modulation demonstrated steady suppression of a high-frequency combustion instability (less than 500Hz) in a liquid-fuel combustion test rig under engine-realistic conditions. To make-up for severe transport-delays on control effect, the MSEK controller combines a wavelet -like Multi-Scale analysis and an Extended Kalman Observer to predict the thermo-acoustic states of combustion pressure perturbations. The commanded fuel modulation is composed of a damper action based on the predicted states, and a tones suppression action based on the Multi-Scale estimation of thermal excitations and other transient disturbances. The controller performs automatic adjustments of the gain and phase of these actions to minimize the Time-Scale Averaged Variances of the pressures inside the combustion zone and upstream of the injector. The successful demonstration of Active Combustion Control with this MSEK controller completed an important NASA milestone for the current research in advanced combustion technologies.
Cross-scale interactions: Quantifying multi-scaled cause–effect relationships in macrosystems
Soranno, Patricia A.; Cheruvelil, Kendra S.; Bissell, Edward G.; Bremigan, Mary T.; Downing, John A.; Fergus, Carol E.; Filstrup, Christopher T.; Henry, Emily N.; Lottig, Noah R.; Stanley, Emily H.; Stow, Craig A.; Tan, Pang-Ning; Wagner, Tyler; Webster, Katherine E.
2014-01-01
Ecologists are increasingly discovering that ecological processes are made up of components that are multi-scaled in space and time. Some of the most complex of these processes are cross-scale interactions (CSIs), which occur when components interact across scales. When undetected, such interactions may cause errors in extrapolation from one region to another. CSIs, particularly those that include a regional scaled component, have not been systematically investigated or even reported because of the challenges of acquiring data at sufficiently broad spatial extents. We present an approach for quantifying CSIs and apply it to a case study investigating one such interaction, between local and regional scaled land-use drivers of lake phosphorus. Ultimately, our approach for investigating CSIs can serve as a basis for efforts to understand a wide variety of multi-scaled problems such as climate change, land-use/land-cover change, and invasive species.
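One common way to quantify a cross-scale interaction of the kind described above is a multilevel (mixed-effects) model in which a local predictor interacts with a regional predictor. The sketch below, using synthetic data and made-up effect sizes, fits such a model with statsmodels; it is only a generic illustration of the idea, not the authors' analysis of the lake-phosphorus case study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data: lake phosphorus driven by local agriculture, with the
# strength of that local effect varying with regional agriculture -- a
# cross-scale interaction (all values and effect sizes are invented).
rng = np.random.default_rng(0)
n_regions, lakes_per_region = 20, 30
region = np.repeat(np.arange(n_regions), lakes_per_region)
regional_ag = np.repeat(rng.uniform(0, 1, n_regions), lakes_per_region)
local_ag = rng.uniform(0, 1, region.size)
phosphorus = (10 + 5 * local_ag + 3 * regional_ag
              + 8 * local_ag * regional_ag        # the cross-scale interaction
              + rng.normal(0, 1, region.size))
data = pd.DataFrame(dict(phosphorus=phosphorus, local_ag=local_ag,
                         regional_ag=regional_ag, region=region))

# Multilevel model with lakes nested in regions; the local_ag:regional_ag
# term quantifies the cross-scale interaction.
model = smf.mixedlm("phosphorus ~ local_ag * regional_ag", data,
                    groups=data["region"]).fit()
print(model.summary())
```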
Multiscale Modeling in the Clinic: Drug Design and Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clancy, Colleen E.; An, Gary; Cannon, William R.
A wide range of length and time scales are relevant to pharmacology, especially in drug development, drug design and drug delivery. Therefore, multi-scale computational modeling and simulation methods and paradigms that advance the linkage of phenomena occurring at these multiple scales have become increasingly important. Multi-scale approaches present in silico opportunities to advance laboratory research to bedside clinical applications in pharmaceuticals research. This is achievable through the capability of modeling to reveal phenomena occurring across multiple spatial and temporal scales, which are not otherwise readily accessible to experimentation. The resultant models, when validated, are capable of making testable predictions to guide drug design and delivery. In this review we describe the goals, methods, and opportunities of multi-scale modeling in drug design and development. We demonstrate the impact of multiple scales of modeling in this field. We indicate the common mathematical techniques employed for multi-scale modeling approaches used in pharmacology and present several examples illustrating the current state-of-the-art regarding drug development for: Excitable Systems (Heart); Cancer (Metastasis and Differentiation); Cancer (Angiogenesis and Drug Targeting); Metabolic Disorders; and Inflammation and Sepsis. We conclude with a focus on barriers to successful clinical translation of drug development, drug design and drug delivery multi-scale models.
Finegan, Donal P; Scheel, Mario; Robinson, James B; Tjaden, Bernhard; Di Michiel, Marco; Hinds, Gareth; Brett, Dan J L; Shearing, Paul R
2016-11-16
Catastrophic failure of lithium-ion batteries occurs across multiple length scales and over very short time periods. A combination of high-speed operando tomography, thermal imaging and electrochemical measurements is used to probe the degradation mechanisms leading up to overcharge-induced thermal runaway of a LiCoO2 pouch cell, through its interrelated dynamic structural, thermal and electrical responses. Failure mechanisms across multiple length scales are explored using a post-mortem multi-scale tomography approach, revealing significant morphological and phase changes in the LiCoO2 electrode microstructure and location dependent degradation. This combined operando and multi-scale X-ray computed tomography (CT) technique is demonstrated as a comprehensive approach to understanding battery degradation and failure.
Multi-fidelity methods for uncertainty quantification in transport problems
NASA Astrophysics Data System (ADS)
Tartakovsky, G.; Yang, X.; Tartakovsky, A. M.; Barajas-Solano, D. A.; Scheibe, T. D.; Dai, H.; Chen, X.
2016-12-01
We compare several multi-fidelity approaches for uncertainty quantification in flow and transport simulations that have a lower computational cost than the standard Monte Carlo method. The cost reduction is achieved by combining a small number of high-resolution (high-fidelity) simulations with a large number of low-resolution (low-fidelity) simulations. We propose a new method, the re-scaled Multi Level Monte Carlo (rMLMC) method, which is based on the idea that the statistics of the quantities of interest depend on scale/resolution. We compare rMLMC with existing multi-fidelity methods such as Multi Level Monte Carlo (MLMC) and reduced basis methods and discuss the advantages of each approach.
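For readers unfamiliar with the baseline the abstract extends, the sketch below shows the telescoping-sum structure of a plain Multi Level Monte Carlo estimator, E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], with a toy "solver" standing in for a flow/transport simulation; it does not implement the re-scaled rMLMC variant proposed above, and all functions and sample counts are illustrative assumptions.

```python
import numpy as np

def simulate(level, xi):
    """Toy solver at resolution 2**(level+2); xi is a shared random input.
    Stands in for a flow/transport simulation of a random quantity of interest."""
    n = 2 ** (level + 2)
    t = np.linspace(0.0, 1.0, n)
    return np.trapz(np.exp(-t) * (1.0 + 0.1 * xi), t)

def mlmc_estimate(max_level, samples_per_level, seed=0):
    """Telescoping MLMC estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
    Fine and coarse solves at each level share the same random input xi,
    which is what keeps the level variances (and hence the cost) small."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for level, n_samples in zip(range(max_level + 1), samples_per_level):
        diffs = []
        for _ in range(n_samples):
            xi = rng.standard_normal()
            fine = simulate(level, xi)
            coarse = simulate(level - 1, xi) if level > 0 else 0.0
            diffs.append(fine - coarse)
        total += np.mean(diffs)
    return total

# Many cheap coarse samples, few expensive fine ones.
print(mlmc_estimate(max_level=3, samples_per_level=[2000, 500, 100, 20]))
```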
Data fusion of multi-scale representations for structural damage detection
NASA Astrophysics Data System (ADS)
Guo, Tian; Xu, Zili
2018-01-01
Despite extensive research into structural health monitoring (SHM) in the past decades, there are few methods that can detect multiple instances of slight damage in noisy environments. Here, we introduce a new hybrid method that utilizes multi-scale space theory and a data fusion approach for multiple damage detection in beams and plates. A cascade filtering approach provides a multi-scale space for noisy mode shapes and filters the fluctuations caused by measurement noise. In multi-scale space, a series of amplification and data fusion algorithms is utilized to search for damage features across all possible scales. We verify the effectiveness of the method by numerical simulation using damaged beams and plates with various types of boundary conditions. Monte Carlo simulations are conducted to illustrate the effectiveness and noise immunity of the proposed method. The applicability is further validated via laboratory case studies focusing on different damage scenarios. Both results demonstrate that the proposed method has a superior noise-tolerance ability, as well as damage sensitivity, without knowledge of material properties or boundary conditions.
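A minimal sketch of the general idea, under assumptions of my own (Gaussian scale-space smoothing, a curvature-based damage indicator, and simple averaging as the fusion rule, none of which are claimed to match the authors' cascade filter or fusion algorithms):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def fused_damage_index(mode_shape, sigmas):
    """Curvature-based damage indicator fused over several smoothing scales:
    coarse scales suppress measurement noise, fine scales keep localization."""
    indices = []
    for sigma in sigmas:
        smooth = gaussian_filter1d(mode_shape, sigma)     # one scale-space level
        curvature = np.gradient(np.gradient(smooth))      # discrete 2nd derivative
        indices.append(np.abs(curvature) / (np.abs(curvature).max() + 1e-12))
    return np.mean(indices, axis=0)                       # simple averaging fusion

# Synthetic noisy fundamental mode shape of a beam with a small local anomaly.
x = np.linspace(0.0, 1.0, 200)
rng = np.random.default_rng(1)
mode = np.sin(np.pi * x) + 0.005 * np.exp(-((x - 0.6) / 0.02) ** 2)
noisy = mode + 0.0005 * rng.standard_normal(x.size)
index = fused_damage_index(noisy, sigmas=(1, 2, 4, 8))
# Peaks in `index` flag candidate damage locations for further inspection.
```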
Modeling Impact-induced Failure of Polysilicon MEMS: A Multi-scale Approach.
Mariani, Stefano; Ghisi, Aldo; Corigliano, Alberto; Zerbini, Sarah
2009-01-01
Failure of packaged polysilicon micro-electro-mechanical systems (MEMS) subjected to impacts involves phenomena occurring at several length-scales. In this paper we present a multi-scale finite element approach to properly allow for: (i) the propagation of stress waves inside the package; (ii) the dynamics of the whole MEMS; (iii) the spreading of micro-cracking in the failing part(s) of the sensor. Through Monte Carlo simulations, some effects of polysilicon micro-structure on the failure mode are elucidated.
A multi-scale approach to designing therapeutics for tuberculosis
Linderman, Jennifer J.; Cilfone, Nicholas A.; Pienaar, Elsje; ...
2015-04-20
Approximately one third of the world’s population is infected with Mycobacterium tuberculosis. Limited information about how the immune system fights M. tuberculosis and what constitutes protection from the bacteria impact our ability to develop effective therapies for tuberculosis. We present an in vivo systems biology approach that integrates data from multiple model systems and over multiple length and time scales into a comprehensive multi-scale and multi-compartment view of the in vivo immune response to M. tuberculosis. Lastly, we describe computational models that can be used to study (a) immunomodulation with the cytokines tumor necrosis factor and interleukin 10, (b) oral and inhaled antibiotics, and (c) the effect of vaccination.
Wang, Bao-Zhen; Chen, Zhi
2013-01-01
This article presents a GIS-based multi-source and multi-box modeling approach (GMSMB) to predict the spatial concentration distributions of airborne pollutants on local and regional scales. In this method, an extended multi-box model combined with a multi-source and multi-grid Gaussian model is developed within the GIS framework to examine the contributions from both point- and area-source emissions. By using GIS, a large amount of data including emission sources, air quality monitoring, meteorological data, and spatial location information required for air quality modeling is brought into an integrated modeling environment. This allows more details of the spatial variation in source distribution and meteorological conditions to be quantitatively analyzed. The developed modeling approach has been examined to predict the spatial concentration distribution of four air pollutants (CO, NO2, SO2 and PM2.5) for the State of California. The modeling results are compared with the monitoring data. Good agreement is achieved, which demonstrates that the developed modeling approach can deliver an effective air pollution assessment on both regional and local scales to support air pollution control and management planning.
Plant trait detection with multi-scale spectrometry
NASA Astrophysics Data System (ADS)
Gamon, J. A.; Wang, R.
2017-12-01
Proximal and remote sensing using imaging spectrometry offers new opportunities for detecting plant traits, with benefits for phenotyping, productivity estimation, stress detection, and biodiversity studies. Using proximal and airborne spectrometry, we evaluated variation in plant optical properties at various spatial and spectral scales with the goal of identifying optimal scales for distinguishing plant traits related to photosynthetic function. Using directed approaches based on physiological vegetation indices, and statistical approaches based on spectral information content, we explored alternate ways of distinguishing plant traits with imaging spectrometry. With both leaf traits and canopy structure contributing to the signals, results exhibit a strong scale dependence. Our results demonstrate the benefits of multi-scale experimental approaches within a clear conceptual framework when applying remote sensing methods to plant trait detection for phenotyping, productivity, and biodiversity studies.
NASA Astrophysics Data System (ADS)
Lee, Chang-Chun; Huang, Pei-Chen
2018-05-01
The long-term reliability of multi-stacked coatings subjected to bending or rolling loads is a severe challenge for extending the lifespan of such structures. In addition, the adhesive strength of dissimilar materials is regarded as a major mechanical reliability concern among multi-stacked films. However, the significant scale mismatch, from several nanometers to micrometers, among the multi-stacked coatings causes numerical accuracy and convergence issues in fracture-based simulation approaches. For these reasons, this study proposed an FEA-based multi-level submodeling and multi-point constraint (MPC) technique to overcome the foregoing scale-mismatch issue. The results indicated that a properly chosen region for first- and second-order submodeling can achieve a small error of 1.27% compared with the experimental result while significantly reducing mesh density and computing time. Moreover, the MPC method adopted in the FEA simulation showed only a 0.54% error when the boundary of the selected local region was away from the critical region of concern, following the Saint-Venant principle. In this investigation, two FEA-based approaches were used to overcome the evident scale-mismatch issue when the adhesive strengths of micro- and nano-scale multi-stacked coatings were taken into account.
Biodiversity conservation in Swedish forests: ways forward for a 30-year-old multi-scaled approach.
Gustafsson, Lena; Perhans, Karin
2010-12-01
A multi-scaled model for biodiversity conservation in forests was introduced in Sweden 30 years ago, which makes it a pioneer example of an integrated ecosystem approach. Trees are set aside for biodiversity purposes at multiple scale levels varying from individual trees to areas of thousands of hectares, with landowner responsibility at the lowest level and with increasing state involvement at higher levels. Ecological theory supports the multi-scaled approach, and retention efforts at every harvest occasion stimulate landowners' interest in conservation. We argue that the model has large advantages but that in a future with intensified forestry and global warming, development based on more progressive thinking is necessary to maintain and increase biodiversity. Suggestions for the future include joint planning for several forest owners, consideration of cost-effectiveness, accepting opportunistic work models, adjusting retention levels to stand and landscape composition, introduction of temporary reserves, creation of "receiver habitats" for species escaping climate change, and protection of young forests.
NASA Astrophysics Data System (ADS)
Fei, Peng; Lee, Juhyun; Packard, René R. Sevag; Sereti, Konstantina-Ioanna; Xu, Hao; Ma, Jianguo; Ding, Yichen; Kang, Hanul; Chen, Harrison; Sung, Kevin; Kulkarni, Rajan; Ardehali, Reza; Kuo, C.-C. Jay; Xu, Xiaolei; Ho, Chih-Ming; Hsiai, Tzung K.
2016-03-01
Light Sheet Fluorescence Microscopy (LSFM) enables multi-dimensional and multi-scale imaging by illuminating specimens with a separate thin sheet of laser light. It allows rapid plane illumination for reduced photo-damage and superior axial resolution and contrast. We hereby demonstrate cardiac LSFM (c-LSFM) imaging to assess the functional architecture of zebrafish embryos with a retrospective cardiac synchronization algorithm for four-dimensional reconstruction (3-D space + time). By combining our approach with tissue clearing techniques, we reveal the entire cardiac structure and hypertrabeculation of adult zebrafish hearts in response to doxorubicin treatment. By integrating a resolution enhancement technique with c-LSFM to increase the resolving power under a large field-of-view, we demonstrate the use of a low-power objective to resolve the entire architecture of large-scale neonatal mouse hearts, revealing the helical orientation of individual myocardial fibers. Therefore, our c-LSFM imaging approach provides multi-scale visualization of architecture and function to drive cardiovascular research with translational implications in congenital heart diseases.
Construction of multi-scale consistent brain networks: methods and applications.
Ge, Bao; Tian, Yin; Hu, Xintao; Chen, Hanbo; Zhu, Dajiang; Zhang, Tuo; Han, Junwei; Guo, Lei; Liu, Tianming
2015-01-01
Mapping human brain networks provides a basis for studying brain function and dysfunction, and thus has gained significant interest in recent years. However, modeling human brain networks still faces several challenges including constructing networks at multiple spatial scales and finding common corresponding networks across individuals. As a consequence, many previous methods were designed for a single resolution or scale of brain network, though the brain networks are multi-scale in nature. To address this problem, this paper presents a novel approach to constructing multi-scale common structural brain networks from DTI data via an improved multi-scale spectral clustering applied on our recently developed and validated DICCCOLs (Dense Individualized and Common Connectivity-based Cortical Landmarks). Since the DICCCOL landmarks possess intrinsic structural correspondences across individuals and populations, we employed the multi-scale spectral clustering algorithm to group the DICCCOL landmarks and their connections into sub-networks, meanwhile preserving the intrinsically-established correspondences across multiple scales. Experimental results demonstrated that the proposed method can generate multi-scale consistent and common structural brain networks across subjects, and its reproducibility has been verified by multiple independent datasets. As an application, these multi-scale networks were used to guide the clustering of multi-scale fiber bundles and to compare the fiber integrity in schizophrenia and healthy controls. In general, our methods offer a novel and effective framework for brain network modeling and tract-based analysis of DTI data.
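As a simplified stand-in for the multi-scale spectral clustering step, the sketch below clusters one toy connectivity matrix at several numbers of sub-networks with scikit-learn; unlike the authors' method it does not enforce consistency across scales or correspondence across individuals (the DICCCOL landmarks provide that in the paper), and the connectivity matrix is synthetic.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Toy symmetric "connectivity" matrix among landmarks (illustrative only).
rng = np.random.default_rng(0)
n = 60
block = np.kron(np.eye(3), np.ones((n // 3, n // 3)))     # 3 planted modules
W = 0.8 * block + 0.2 * rng.random((n, n))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)

# Cluster the same connectivity at several scales (numbers of sub-networks).
multi_scale_labels = {}
for k in (3, 6, 12):
    model = SpectralClustering(n_clusters=k, affinity="precomputed",
                               random_state=0)
    multi_scale_labels[k] = model.fit_predict(W)

for k, labels in multi_scale_labels.items():
    print(f"scale k={k}: cluster sizes {np.bincount(labels)}")
```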
Cilfone, Nicholas A.; Kirschner, Denise E.; Linderman, Jennifer J.
2015-01-01
Biologically related processes operate across multiple spatiotemporal scales. For computational modeling methodologies to mimic this biological complexity, individual scale models must be linked in ways that allow for dynamic exchange of information across scales. A powerful methodology is to combine a discrete modeling approach, agent-based models (ABMs), with continuum models to form hybrid models. Hybrid multi-scale ABMs have been used to simulate emergent responses of biological systems. Here, we review two aspects of hybrid multi-scale ABMs: linking individual scale models and efficiently solving the resulting model. We discuss the computational choices associated with aspects of linking individual scale models while simultaneously maintaining model tractability. We demonstrate implementations of existing numerical methods in the context of hybrid multi-scale ABMs. Using an example model describing Mycobacterium tuberculosis infection, we show relative computational speeds of various combinations of numerical methods. Efficient linking and solution of hybrid multi-scale ABMs is key to model portability, modularity, and their use in understanding biological phenomena at a systems level. PMID:26366228
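A toy illustration of the linking pattern described above (a discrete agent layer coupled to a continuum field advanced by an explicit diffusion step each tick); all rates, rules and grid sizes are invented for the example and are unrelated to the M. tuberculosis model.

```python
import numpy as np

# Minimal hybrid ABM/continuum sketch: discrete agents secrete a chemical
# into a continuum field; the field is advanced by an explicit diffusion
# step between agent updates, and agents respond to the local concentration.
rng = np.random.default_rng(0)
grid = 50
field = np.zeros((grid, grid))                                  # continuum scale (PDE)
agents = [tuple(rng.integers(0, grid, 2)) for _ in range(20)]   # discrete scale (ABM)

D, dt, secretion, threshold = 0.2, 1.0, 1.0, 5.0                # made-up parameters

for step in range(100):
    # -- continuum step: explicit finite-difference diffusion (5-point stencil)
    lap = (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
           np.roll(field, 1, 1) + np.roll(field, -1, 1) - 4 * field)
    field += dt * D * lap
    # -- agent step: secrete locally, move randomly until locally "activated"
    new_agents = []
    for (i, j) in agents:
        field[i, j] += secretion
        if field[i, j] < threshold:
            i = (i + rng.integers(-1, 2)) % grid
            j = (j + rng.integers(-1, 2)) % grid
        new_agents.append((i, j))
    agents = new_agents

print("mean field concentration after 100 steps:", field.mean())
```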
Development of a Renormalization Group Approach to Multi-Scale Plasma Physics Computation
2012-03-28
Final Technical Report (STTR Phase II), 29-12-2008 to 2011.
Beyond Low Rank + Sparse: Multi-scale Low Rank Matrix Decomposition
Ong, Frank; Lustig, Michael
2016-01-01
We present a natural generalization of the recent low rank + sparse matrix decomposition and consider the decomposition of matrices into components of multiple scales. Such a decomposition is well motivated in practice as data matrices often exhibit local correlations at multiple scales. Concretely, we propose a multi-scale low rank modeling that represents a data matrix as a sum of block-wise low rank matrices with increasing scales of block sizes. We then consider the inverse problem of decomposing the data matrix into its multi-scale low rank components and approach the problem via a convex formulation. Theoretically, we show that under various incoherence conditions, the convex program recovers the multi-scale low rank components either exactly or approximately. Practically, we provide guidance on selecting the regularization parameters and incorporate cycle spinning to reduce blocking artifacts. Experimentally, we show that the multi-scale low rank decomposition provides a more intuitive decomposition than conventional low rank methods and demonstrate its effectiveness in four applications, including illumination normalization for face images, motion separation for surveillance videos, multi-scale modeling of dynamic contrast-enhanced magnetic resonance imaging and collaborative filtering exploiting age information. PMID:28450978
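The synthesis model underlying the decomposition can be sketched directly: a data matrix written as a sum of block-wise low-rank components with increasing block sizes. The snippet below only constructs such a matrix (the forward model); it does not solve the convex recovery program analyzed in the abstract, and the sizes and ranks are arbitrary.

```python
import numpy as np

def blockwise_lowrank(shape, block, rank, rng):
    """One component: independent rank-`rank` blocks of size `block` x `block`."""
    M = np.zeros(shape)
    n_rows, n_cols = shape
    for i in range(0, n_rows, block):
        for j in range(0, n_cols, block):
            r, c = min(block, n_rows - i), min(block, n_cols - j)
            M[i:i + r, j:j + c] = (rng.standard_normal((r, rank))
                                   @ rng.standard_normal((rank, c)))
    return M

# A multi-scale low-rank matrix: a globally low-rank term plus locally
# low-rank terms with progressively smaller block sizes.
rng = np.random.default_rng(0)
X = sum(blockwise_lowrank((64, 64), block, rank=1, rng=rng) for block in (64, 16, 4))
print(X.shape)
```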
A dynamic multi-scale Markov model based methodology for remaining life prediction
NASA Astrophysics Data System (ADS)
Yan, Jihong; Guo, Chaozhong; Wang, Xing
2011-05-01
The ability to accurately predict the remaining life of partially degraded components is crucial in prognostics. In this paper, a performance degradation index is designed using multi-feature fusion techniques to represent the deterioration severity of facilities. Based on this indicator, an improved Markov model is proposed for remaining life prediction. The Fuzzy C-Means (FCM) algorithm is employed to perform state division for the Markov model in order to avoid the uncertainty of state division caused by hard division approaches. Considering the influence of both historical and real-time data, a dynamic prediction method is introduced into the Markov model via a weighted coefficient. Multi-scale theory is employed to solve the state division problem of multi-sample prediction. Consequently, a dynamic multi-scale Markov model is constructed. An experiment based on a Bently-RK4 rotor testbed is designed to validate the dynamic multi-scale Markov model; the experimental results illustrate the effectiveness of the methodology.
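A minimal sketch of the Markov layer of such a method, assuming the degradation index has already been discretized into states (the paper uses Fuzzy C-Means for that step, and adds dynamic weighting and multi-scale handling that are not reproduced here): estimate a transition matrix from an observed state sequence and compute the expected remaining life as the expected absorption time into the failure state.

```python
import numpy as np

def transition_matrix(states, n_states):
    """Maximum-likelihood estimate of P from an observed state sequence."""
    P = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1
    row_sums = P.sum(axis=1, keepdims=True)
    return np.divide(P, row_sums, out=np.zeros_like(P), where=row_sums > 0)

def expected_remaining_life(P, current_state, failure_state):
    """Expected number of steps to absorption in the failure state,
    via the fundamental matrix N = (I - Q)^-1 of the absorbing chain."""
    transient = [s for s in range(P.shape[0]) if s != failure_state]
    Q = P[np.ix_(transient, transient)]
    N = np.linalg.inv(np.eye(len(transient)) - Q)
    steps = N.sum(axis=1)                      # expected absorption times
    return steps[transient.index(current_state)]

# Example: degradation states 0 (healthy) .. 3 (failed) from a run-to-failure test.
sequence = [0, 0, 0, 1, 1, 0, 1, 2, 2, 1, 2, 2, 3]
P = transition_matrix(sequence, n_states=4)
P[3] = np.array([0, 0, 0, 1.0])               # make the failure state absorbing
print("expected remaining life from state 1:", expected_remaining_life(P, 1, 3))
```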
A multi-frequency receiver function inversion approach for crustal velocity structure
NASA Astrophysics Data System (ADS)
Li, Xuelei; Li, Zhiwei; Hao, Tianyao; Wang, Sheng; Xing, Jian
2017-05-01
In order to better constrain crustal velocity structures, we developed a new nonlinear inversion approach based on multi-frequency receiver function waveforms. With the global optimization algorithm of Differential Evolution (DE), low-frequency receiver function waveforms primarily constrain large-scale velocity structures, while high-frequency receiver function waveforms show advantages in recovering small-scale velocity structures. Based on synthetic tests with multi-frequency receiver function waveforms, the proposed approach can constrain both the long- and short-wavelength characteristics of the crustal velocity structures simultaneously. Inversions with real data are also conducted for the seismic stations KMNB in southeast China and HYB in the Indian continent, where crustal structures have been well studied by former researchers. Comparisons of the velocity models inverted in previous studies and in ours show good consistency, but our proposed approach achieves better waveform fit with fewer model parameters. Comprehensive tests with synthetic and real data suggest that the proposed inversion approach with multi-frequency receiver functions is effective and robust in inverting crustal velocity structures.
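A schematic of the inversion loop, using SciPy's differential_evolution and a placeholder forward operator; the forward_rf function below is an invented stand-in (a real receiver-function computation would come from a seismological synthetics code), and the bounds, frequency bands and model parameterization are illustrative only.

```python
import numpy as np
from scipy.optimize import differential_evolution

def forward_rf(model, freq_band):
    """Placeholder forward operator: returns a synthetic receiver-function-like
    waveform for a crude layered model at a given low-pass frequency."""
    thickness, vs_upper, vs_lower = model
    t = np.linspace(0.0, 30.0, 301)
    delay = 2.0 * thickness * (1.0 / vs_upper - 1.0 / vs_lower)   # crude converted-phase delay
    width = 1.0 / freq_band                   # lower frequency -> broader pulse
    return (np.exp(-((t - 5.0) / width) ** 2)
            + 0.5 * np.exp(-((t - 5.0 - delay) / width) ** 2))

def misfit(model, observed, freq_bands):
    """Sum of squared residuals over all frequency bands, so that low
    frequencies constrain long wavelengths and high frequencies the details."""
    return sum(np.sum((forward_rf(model, f) - obs) ** 2)
               for f, obs in zip(freq_bands, observed))

freq_bands = [0.5, 1.0, 2.0]                  # Hz (illustrative)
true_model = (30.0, 3.5, 4.5)                 # km, km/s, km/s (illustrative)
observed = [forward_rf(true_model, f) for f in freq_bands]

bounds = [(10, 60), (2.5, 4.5), (3.5, 5.0)]
result = differential_evolution(misfit, bounds, args=(observed, freq_bands),
                                seed=1, maxiter=200)
print(result.x)
```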
Application Perspective of 2D+SCALE Dimension
NASA Astrophysics Data System (ADS)
Karim, H.; Rahman, A. Abdul
2016-09-01
Different applications or users need different abstractions of spatial models, dimensionalities and specifications of their datasets due to variations in the required analysis and output. Various approaches, data models and data structures are now available to support most current application models in Geographic Information System (GIS). One trending focus in the GIS multi-dimensional research community is the implementation of a scale dimension with spatial datasets to suit various scale application needs. In this paper, 2D spatial datasets that have been scaled up with scale as the third dimension are addressed as 2D+scale (or 3D-scale) dimension. Nowadays, various data structures, data models, approaches, schemas, and formats have been proposed as the best approaches to support a variety of applications and dimensionalities in 3D topology. However, only a few of them consider the element of scale as their targeted dimension. As far as the scale dimension is concerned, the implementation approach can be either multi-scale or vario-scale (with any available data structure and format) depending on application requirements (topology, semantics and function). This paper attempts to discuss the current and new potential applications which could positively be integrated upon the 3D-scale dimension approach. The previous and current works on the scale dimension, as well as the requirements to be preserved for any given application, implementation issues and future potential applications, form the major discussion of this paper.
NASA Astrophysics Data System (ADS)
Zhang, Ying; Feng, Yuanming; Wang, Wei; Yang, Chengwen; Wang, Ping
2017-03-01
A novel and versatile “bottom-up” approach is developed to estimate the radiobiological effect of clinical radiotherapy. The model consists of multi-scale Monte Carlo simulations from organ to cell levels. At the cellular level, accumulated damage is computed using a spectrum-based accumulation algorithm and a predefined cellular damage database. The damage repair mechanism is modeled by an expanded reaction-rate two-lesion kinetic model, which was calibrated by replicating a radiobiological experiment. Multi-scale modeling is then performed on a lung cancer patient under conventional fractionated irradiation. The cell-killing effects of two representative voxels (the isocenter and a peripheral voxel of the tumor) are computed and compared. At the microscopic level, the nucleus dose and damage yields vary among the nuclei within the voxels. A slightly larger percentage of cDSB yield is observed for the peripheral voxel (55.0%) compared to the isocenter one (52.5%). For the isocenter voxel, the survival fraction increases monotonically in reduced-oxygen environments. Under an extreme anoxic condition (0.001%), the survival fraction is calculated to be 80% and the hypoxia reduction factor reaches a maximum value of 2.24. In conclusion, with biologically related variations, the proposed multi-scale approach is more versatile than existing approaches for evaluating personalized radiobiological effects in radiotherapy.
Chakraborty, Pritam; Zhang, Yongfeng; Tonks, Michael R.
2015-12-07
The fracture behavior of brittle materials is strongly influenced by their underlying microstructure, which needs explicit consideration for accurate prediction of fracture properties and the associated scatter. In this work, a hierarchical multi-scale approach is pursued to model microstructure-sensitive brittle fracture. A quantitative phase-field based fracture model is utilized to capture the complex crack growth behavior in the microstructure, and the related parameters are calibrated from lower-length-scale atomistic simulations instead of engineering-scale experimental data. The workability of this approach is demonstrated by performing porosity-dependent intergranular fracture simulations in UO2 and comparing the predictions with experiments.
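For context, regularized phase-field fracture models are typically built on an energy functional of the following (AT2-type) form; this is shown only as the generic template such models share, not as the specific quantitative model or calibration used by the authors.

```latex
% u: displacement field, d: damage (phase) field, \ell: regularization length,
% G_c: critical energy release rate, \psi_e: elastic energy density.
% Generic AT2-type functional; the authors' quantitative model may differ.
\mathcal{E}(\mathbf{u}, d) \;=\; \int_\Omega (1-d)^2\, \psi_e\!\big(\boldsymbol{\varepsilon}(\mathbf{u})\big)\, \mathrm{d}V
\;+\; \frac{G_c}{2} \int_\Omega \left( \frac{d^{2}}{\ell} + \ell\, |\nabla d|^2 \right) \mathrm{d}V .
```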
ERIC Educational Resources Information Center
Gotwals, John K.; Dunn, John G. H.
2009-01-01
This article presents a chronology of three empirical studies that outline the measurement process by which two new subscales ("Doubts about Actions" and "Organization") were developed and integrated into a revised version of Dunn, Causgrove Dunn, and Syrotuik's (2002) "Sport Multidimensional Perfectionism Scale"…
A multi-objective constraint-based approach for modeling genome-scale microbial ecosystems.
Budinich, Marko; Bourdon, Jérémie; Larhlimi, Abdelhalim; Eveillard, Damien
2017-01-01
Interplay within microbial communities impacts ecosystems on several scales, and elucidation of the consequent effects is a difficult task in ecology. In particular, the integration of genome-scale data within quantitative models of microbial ecosystems remains elusive. This study advocates the use of constraint-based modeling to build predictive models from recent high-resolution -omics datasets. Following recent studies that have demonstrated the accuracy of constraint-based models (CBMs) for simulating single-strain metabolic networks, we sought to study microbial ecosystems as a combination of single-strain metabolic networks that exchange nutrients. This study presents two multi-objective extensions of CBMs for modeling communities: multi-objective flux balance analysis (MO-FBA) and multi-objective flux variability analysis (MO-FVA). Both methods were applied to a hot spring mat model ecosystem. As a result, multiple trade-offs between nutrients and growth rates, as well as thermodynamically favorable relative abundances at the community level, were emphasized. We expect this approach to be used for integrating genomic information in microbial ecosystems. The resulting models will provide insights into behaviors (including diversity) that take place at the ecosystem scale.
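The single-strain building block that MO-FBA and MO-FVA extend is ordinary flux balance analysis, a linear program over steady-state fluxes. The sketch below solves FBA for a three-reaction toy network with SciPy; the stoichiometry, bounds and objective are illustrative, and the community-level multi-objective treatment is not shown.

```python
import numpy as np
from scipy.optimize import linprog

# Toy metabolic network (illustrative, not a genome-scale model):
#   R1: -> A (uptake, capped at 10), R2: A -> B, R3: B -> biomass (objective)
S = np.array([[1, -1, 0],     # metabolite A mass balance
              [0,  1, -1]])   # metabolite B mass balance
bounds = [(0, 10), (0, 1000), (0, 1000)]
c = [0, 0, -1]                # linprog minimizes, so maximize v_R3 via -v_R3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal flux distribution:", res.x)    # expected: [10, 10, 10]
```

MO-FBA generalizes this single linear program by treating several such objectives (for example, one per community member plus community-level criteria) simultaneously and characterizing the trade-offs between them as a Pareto front.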
Multi-scale computational modeling of developmental biology.
Setty, Yaki
2012-08-01
Normal development of multicellular organisms is regulated by a highly complex process in which a set of precursor cells proliferate, differentiate and move, forming over time a functioning tissue. To handle their complexity, developmental systems can be studied over distinct scales. The dynamics of each scale is determined by the collective activity of entities at the scale below it. I describe a multi-scale computational approach for modeling developmental systems and detail the methodology through a synthetic example of a developmental system that retains key features of real developmental systems. I discuss the simulation of the system as it emerges from cross-scale and intra-scale interactions and describe how an in silico study can be carried out by modifying these interactions in a way that mimics in vivo experiments. I highlight biological features of the results through a comparison with findings in Caenorhabditis elegans germline development and finally discuss about the applications of the approach in real developmental systems and propose future extensions. The source code of the model of the synthetic developmental system can be found in www.wisdom.weizmann.ac.il/~yaki/MultiScaleModel. yaki.setty@gmail.com Supplementary data are available at Bioinformatics online.
A multi-scale Q1/P0 approach to Lagrangian shock hydrodynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shashkov, Mikhail; Love, Edward; Scovazzi, Guglielmo
A new multi-scale, stabilized method for Q1/P0 finite element computations of Lagrangian shock hydrodynamics is presented. Instabilities (of hourglass type) are controlled by a stabilizing operator derived using the variational multi-scale analysis paradigm. The resulting stabilizing term takes the form of a pressure correction. With respect to currently implemented hourglass control approaches, the novelty of the method resides in its residual-based character. The stabilizing residual has a definite physical meaning, since it embeds a discrete form of the Clausius-Duhem inequality. Effectively, the proposed stabilization samples and acts to counter the production of entropy due to numerical instabilities. The proposed technique is applicable to materials with no shear strength, for which there exists a caloric equation of state. The stabilization operator is incorporated into a mid-point, predictor/multi-corrector time integration algorithm, which conserves mass, momentum and total energy. Encouraging numerical results in the context of compressible gas dynamics confirm the potential of the method.
A Matter of Scale: Multi-Scale Ethnographic Research on Education in the United States
ERIC Educational Resources Information Center
Eisenhart, Margaret
2017-01-01
In recent years, cultural anthropologists conducting educational ethnographies in the US have pursued some new methodological approaches. These new approaches can be attributed to advances in cultural theory, evolving norms of research practice, and the affordances of new technologies. In this article, I review three such approaches under the…
Zeller, Katherine A; Vickers, T Winston; Ernest, Holly B; Boyce, Walter M
2017-01-01
The importance of examining multiple hierarchical levels when modeling resource use for wildlife has been acknowledged for decades. Multi-level resource selection functions have recently been promoted as a method to synthesize resource use across nested organizational levels into a single predictive surface. Analyzing multiple scales of selection within each hierarchical level further strengthens multi-level resource selection functions. We extend this multi-level, multi-scale framework to modeling resistance for wildlife by combining multi-scale resistance surfaces from two data types, genetic and movement. Resistance estimation has typically been conducted with one of these data types, or compared between the two. However, we contend it is not an either/or issue and that resistance may be better-modeled using a combination of resistance surfaces that represent processes at different hierarchical levels. Resistance surfaces estimated from genetic data characterize temporally broad-scale dispersal and successful breeding over generations, whereas resistance surfaces estimated from movement data represent fine-scale travel and contextualized movement decisions. We used telemetry and genetic data from a long-term study on pumas (Puma concolor) in a highly developed landscape in southern California to develop a multi-level, multi-scale resource selection function and a multi-level, multi-scale resistance surface. We used these multi-level, multi-scale surfaces to identify resource use patches and resistant kernel corridors. Across levels, we found puma avoided urban, agricultural areas, and roads and preferred riparian areas and more rugged terrain. For other landscape features, selection differed among levels, as did the scales of selection for each feature. With these results, we developed a conservation plan for one of the most isolated puma populations in the U.S. Our approach captured a wide spectrum of ecological relationships for a population, resulted in effective conservation planning, and can be readily applied to other wildlife species.
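A common way to fit a resource selection function at a given level is logistic regression of used versus available locations on covariates extracted at several scales. The sketch below, on simulated data with invented covariate names and effect signs, shows that pattern and notes how predictions can be rescaled into a resistance surface; it is not the multi-level, multi-scale model or the genetic-resistance component of the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal resource selection function (RSF) sketch: used locations (1) vs
# available locations (0) modelled with logistic regression on covariates
# extracted at several scales (e.g. % urban within 500 m and within 5 km).
rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.uniform(0, 1, n),        # urban_500m (hypothetical covariate)
    rng.uniform(0, 1, n),        # urban_5km (hypothetical covariate)
    rng.uniform(0, 1, n),        # ruggedness_1km (hypothetical covariate)
])
# Simulated selection: avoid urban at both scales, prefer rugged terrain.
logit = -2.0 * X[:, 0] - 1.0 * X[:, 1] + 1.5 * X[:, 2]
used = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

rsf = LogisticRegression().fit(X, used)
print("coefficients (urban_500m, urban_5km, ruggedness_1km):", rsf.coef_)
# Predicted relative selection can then be rescaled into a resistance surface,
# e.g. resistance ~ 1 - predicted probability of use.
```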
NASA Astrophysics Data System (ADS)
Wei, Hongqiang; Zhou, Guiyun; Zhou, Junjie
2018-04-01
The classification of leaf and wood points is an essential preprocessing step for extracting inventory measurements and canopy characterization of trees from terrestrial laser scanning (TLS) data. The geometry-based approach is one of the widely used classification methods. In geometry-based methods, it is common practice to extract salient features at one single scale before the features are used for classification. It remains unclear how the scale(s) used affect classification accuracy and efficiency. To assess the scale effect on classification accuracy and efficiency, we extracted single-scale and multi-scale salient features from the point clouds of two oak trees of different sizes and conducted leaf and wood classification. Our experimental results show that the balanced accuracy of the multi-scale method is higher than the average balanced accuracy of the single-scale method by about 10% for both trees. The average speed-up ratio of the single-scale classifiers over the multi-scale classifier is higher than 30 for each tree.
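A sketch of one widely used family of geometry-based features for this task: eigenvalue-based descriptors (linearity, planarity, scattering) of each point's neighborhood, computed at several radii and stacked into a multi-scale feature vector. The radii, feature set and toy data below are assumptions, not the authors' configuration.

```python
import numpy as np
from scipy.spatial import cKDTree

def eigen_features(points, radius):
    """Linearity, planarity and scattering from the local covariance
    eigenvalues of each point's neighborhood at one scale (radius)."""
    tree = cKDTree(points)
    feats = np.zeros((len(points), 3))
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, radius)
        if len(idx) < 3:
            continue                            # too few neighbors at this scale
        evals = np.linalg.eigvalsh(np.cov(points[idx].T))[::-1]  # l1 >= l2 >= l3
        l1, l2, l3 = np.maximum(evals, 1e-12)
        feats[i] = [(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1]
    return feats

def multi_scale_features(points, radii=(0.1, 0.2, 0.4)):
    """Stack single-scale features into one multi-scale descriptor per point."""
    return np.hstack([eigen_features(points, r) for r in radii])

# These descriptors would then feed a supervised classifier
# (e.g. a random forest) trained on labelled leaf/wood points.
points = np.random.default_rng(0).random((2000, 3))
X = multi_scale_features(points)
print(X.shape)   # (2000, 9): three features at three scales
```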
Multi-scale modelling of elastic moduli of trabecular bone
Hamed, Elham; Jasiuk, Iwona; Yoo, Andrew; Lee, YikHan; Liszka, Tadeusz
2012-01-01
We model trabecular bone as a nanocomposite material with hierarchical structure and predict its elastic properties at different structural scales. The analysis involves a bottom-up multi-scale approach, starting with nanoscale (mineralized collagen fibril) and moving up the scales to sub-microscale (single lamella), microscale (single trabecula) and mesoscale (trabecular bone) levels. Continuum micromechanics methods, composite materials laminate theory and finite-element methods are used in the analysis. Good agreement is found between theoretical and experimental results. PMID:22279160
May, Christian P; Kolokotroni, Eleni; Stamatakos, Georgios S; Büchler, Philippe
2011-10-01
Modeling of tumor growth has been performed according to various approaches addressing different biocomplexity levels and spatiotemporal scales. Mathematical treatments range from partial differential equation based diffusion models to rule-based cellular level simulators, aiming at both improving our quantitative understanding of the underlying biological processes and, in the mid- and long term, constructing reliable multi-scale predictive platforms to support patient-individualized treatment planning and optimization. The aim of this paper is to establish a multi-scale and multi-physics approach to tumor modeling taking into account both the cellular and the macroscopic mechanical level. Therefore, an already developed biomodel of clinical tumor growth and response to treatment is self-consistently coupled with a biomechanical model. Results are presented for the free growth case of the imageable component of an initially point-like glioblastoma multiforme tumor. The composite model leads to significant tumor shape corrections that are achieved through the utilization of environmental pressure information and the application of biomechanical principles. Using the ratio of smallest to largest moment of inertia of the tumor material to quantify the effect of our coupled approach, we have found a tumor shape correction of 20% by coupling biomechanics to the cellular simulator as compared to a cellular simulation without preferred growth directions. We conclude that the integration of the two models provides additional morphological insight into realistic tumor growth behavior. Therefore, it might be used for the development of an advanced oncosimulator focusing on tumor types for which morphology plays an important role in surgical and/or radio-therapeutic treatment planning. Copyright © 2011 Elsevier Ltd. All rights reserved.
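The shape metric quoted above (the ratio of smallest to largest moment of inertia) can be computed directly from a voxelized tumor mask. A small sketch follows, assuming uniform voxel mass and using an illustrative synthetic mask rather than patient data.

```python
import numpy as np

def inertia_ratio(mask, spacing=(1.0, 1.0, 1.0)):
    """Ratio of smallest to largest principal moment of inertia of a 3-D
    binary mask (1 = tumor voxel), assuming unit mass per voxel."""
    coords = np.argwhere(mask) * np.asarray(spacing)
    r = coords - coords.mean(axis=0)                   # center-of-mass frame
    x, y, z = r[:, 0], r[:, 1], r[:, 2]
    I = np.array([[np.sum(y**2 + z**2), -np.sum(x*y),        -np.sum(x*z)],
                  [-np.sum(x*y),         np.sum(x**2 + z**2), -np.sum(y*z)],
                  [-np.sum(x*z),        -np.sum(y*z),          np.sum(x**2 + y**2)]])
    moments = np.linalg.eigvalsh(I)                    # principal moments, ascending
    return moments[0] / moments[-1]                    # close to 1 for sphere-like shapes

# Example: an ellipsoidal mask is less isotropic than a spherical one.
zz, yy, xx = np.mgrid[-20:21, -20:21, -20:21]
sphere = (xx**2 + yy**2 + zz**2) <= 15**2
ellipsoid = (xx**2 / 18**2 + yy**2 / 12**2 + zz**2 / 8**2) <= 1.0
print(inertia_ratio(sphere), inertia_ratio(ellipsoid))
```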
A multi-scale framework to link remotely sensed metrics with socioeconomic data
NASA Astrophysics Data System (ADS)
Watmough, Gary; Svenning, Jens-Christian; Palm, Cheryl; Sullivan, Clare; Danylo, Olha; McCallum, Ian
2017-04-01
There is increasing interest in the use of remotely sensed satellite data for estimating human poverty as it can bridge data gaps that prevent fine-scale monitoring of development goals across large areas. The ways in which metrics derived from satellite imagery are linked with socioeconomic data are crucial for accurate estimation of poverty. Yet, to date, the approaches in the literature linking satellite metrics with socioeconomic data are poorly characterized. Typically, studies use a GIS construct such as a circular buffer zone around a village or household, or an administrative boundary such as a district or census enumeration area. These polygons are then used to extract environmental data from satellite imagery, which are related to the socioeconomic data in statistical analyses. The use of a single polygon to link environmental and socioeconomic data is inappropriate in coupled human-natural systems, as processes operate over multiple scales. Human interactions with the environment occur at multiple levels, from individual (household) access to agricultural plots adjacent to homes, to communal access to common pool resources (CPR) such as forests at the village level. Here, we present a multi-scale framework that explicitly considers how people use the landscape. The framework is presented along with a case study example in Kenya. The multi-scale approach could enhance the modelling of human-environment interactions, which will have important consequences for monitoring the sustainable development goals for human livelihoods and biodiversity conservation.
NASA Astrophysics Data System (ADS)
Tamayo-Mas, Elena; Bianchi, Marco; Mansour, Majdi
2018-03-01
This study investigates the impact of model complexity and multi-scale prior hydrogeological data on the interpretation of pumping test data in a dual-porosity aquifer (the Chalk aquifer in England, UK). In order to characterize the hydrogeological properties, different approaches ranging from a traditional analytical solution (the Theis approach) to more sophisticated numerical models with automatically calibrated input parameters are applied. Comparisons of results from the different approaches show that neither traditional analytical solutions nor a numerical model assuming a homogeneous and isotropic aquifer can adequately explain the observed drawdowns. A better reproduction of the observed drawdowns in all seven monitoring locations is instead achieved when medium- and local-scale prior information about the vertical hydraulic conductivity (K) distribution is used to constrain the model calibration process. In particular, the integration of medium-scale vertical K variations based on flowmeter measurements led to an improvement in the goodness-of-fit of the simulated drawdowns of about 30%. Further improvements (up to 70%) were observed when a simple upscaling approach was used to integrate small-scale K data to constrain the automatic calibration process of the numerical model. Although the analysis focuses on a specific case study, these results provide insights into the representativeness of estimates of hydrogeological properties based on different interpretations of pumping test data, and promote the integration of multi-scale data for the characterization of heterogeneous aquifers in complex hydrogeological settings.
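For reference, the traditional analytical solution mentioned above is the Theis drawdown, which is straightforward to evaluate with SciPy's exponential integral; the parameter values below are illustrative, not those of the Chalk aquifer study.

```python
import numpy as np
from scipy.special import exp1

def theis_drawdown(r, t, Q, T, S):
    """Theis (1935) drawdown s = Q/(4*pi*T) * W(u), with u = r^2*S/(4*T*t)
    and well function W(u) = exp1(u)."""
    u = r ** 2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Illustrative values: pumping rate 500 m3/d, transmissivity 250 m2/d,
# storativity 1e-3, observation well 100 m away, after 0.5 day of pumping.
print(theis_drawdown(r=100.0, t=0.5, Q=500.0, T=250.0, S=1e-3))
```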
Multi-Scale Computational Models for Electrical Brain Stimulation
Seo, Hyeon; Jun, Sung C.
2017-01-01
Electrical brain stimulation (EBS) is an appealing method to treat neurological disorders. To achieve optimal stimulation effects and a better understanding of the underlying brain mechanisms, neuroscientists have proposed computational modeling studies for a decade. Recently, multi-scale models that combine a volume conductor head model and multi-compartmental models of cortical neurons have been developed to predict stimulation effects on the macroscopic and microscopic levels more precisely. As the need for better computational models continues to increase, we overview here recent multi-scale modeling studies; we focused on approaches that coupled a simplified or high-resolution volume conductor head model and multi-compartmental models of cortical neurons, and constructed realistic fiber models using diffusion tensor imaging (DTI). Further implications for achieving better precision in estimating cellular responses are discussed. PMID:29123476
NASA Astrophysics Data System (ADS)
Queiros-Conde, D.; Foucher, F.; Mounaïm-Rousselle, C.; Kassem, H.; Feidt, M.
2008-12-01
Multi-scale features of turbulent flames near a wall display two kinds of scale-dependent fractal features. In scale-space, a unique fractal dimension cannot be defined and the fractal dimension of the front is scale-dependent. Moreover, when the front approaches the wall, this dependency changes: the fractal dimension also depends on the wall-distance. Our aim here is to propose a general geometrical framework that makes it possible to integrate these two cases, in order to describe the multi-scale structure of turbulent flames interacting with a wall. Based on the scale-entropy quantity, which is simply linked to the roughness of the front, we thus introduce a general scale-entropy diffusion equation. We define the notion of “scale-evolutivity”, which characterises the deviation of a multi-scale system from pure fractal behaviour. The specific case of a constant “scale-evolutivity” over the scale-range is studied. In this case, called “parabolic scaling”, the fractal dimension is a linear function of the logarithm of scale. The case of a constant scale-evolutivity in the wall-distance space implies that the fractal dimension depends linearly on the logarithm of the wall-distance. We then verified experimentally that parabolic scaling represents a good approximation of the real multi-scale features of turbulent flames near a wall.
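The "parabolic scaling" idea, a fractal dimension that varies linearly with the logarithm of scale, can be illustrated numerically. The sketch below box-counts a synthetic rough front and fits the local, scale-dependent dimension against log(scale); the synthetic profile, box sizes, and fitting choices are illustrative assumptions, not the flame data of the study.

```python
import numpy as np

# Synthetic rough front: a 1D profile rasterised onto a binary image.
rng = np.random.default_rng(1)
n = 1024
profile = np.cumsum(rng.standard_normal(n))            # Brownian-like roughness
profile = (profile - profile.min()) / np.ptp(profile)  # normalise to [0, 1]
img = np.zeros((n, n), dtype=bool)
img[(profile * (n - 1)).astype(int), np.arange(n)] = True   # the "front"

def box_count(image, s):
    """Number of s x s boxes containing at least one front pixel."""
    h, w = image.shape
    view = image[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
    return view.any(axis=(1, 3)).sum()

scales = 2 ** np.arange(1, 8)                 # box sizes 2 ... 128 pixels
counts = np.array([box_count(img, s) for s in scales])

# Local (scale-dependent) fractal dimension from successive scale pairs.
d_local = -np.diff(np.log(counts)) / np.diff(np.log(scales))
log_mid = 0.5 * (np.log(scales[1:]) + np.log(scales[:-1]))

# "Parabolic scaling" hypothesis: D is a linear function of log(scale).
slope, intercept = np.polyfit(log_mid, d_local, 1)
print("scale-evolutivity (slope):", slope, " D at unit scale:", intercept)
```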
Chen, Hai; Liang, Xiaoying; Li, Rui
2013-01-01
Multi-Agent Systems (MAS) offer a conceptual approach for including multi-actor decision making in models of land use change. Through MAS-based simulation, this paper demonstrates the application of MAS to micro-scale land use and cover change (LUCC) and examines the mechanism of transformation across scales. The paper starts with a description of the context of MAS research. It then adopts the Nested Spatial Choice (NSC) method to construct a multi-scale LUCC decision-making model, and reports a case study of Mengcha village, Mizhi County, Shaanxi Province. Finally, the potentials and drawbacks of the approach are discussed. From our design and implementation of the MAS in the multi-scale model, a number of observations and conclusions can be drawn. (1) The LUCC decision-making and multi-scale transformation framework provides, in our view, a more realistic modelling of the multi-scale decision-making process. (2) Using a continuous function, rather than a discrete one, to represent household decision-making reflects its effects more realistically. (3) Attempts have been made to quantify household interactions, which provides a premise and foundation for studying communication and learning among households. (4) The scale-transformation architecture constructed in this paper helps to accumulate theory and experience for studying the interaction between micro-scale land use decision-making and the macro-scale land use landscape pattern. Our future research will focus on: (1) how to apply the risk-aversion principle appropriately and incorporate rules on rotation between household parcels into the model; (2) methods for studying household decision-making over long periods, which would allow us to bridge long-term LUCC data and short-term household decisions; and (3) quantitative methods and models, especially scenario-analysis models that can reflect the interactions among different household types.
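Point (2) above, replacing a discrete decision rule with a continuous one, can be illustrated with a toy household model in which the probability of converting a parcel is a logistic function of household attributes and prices. Everything below (attributes, coefficients, the two land-use options) is hypothetical and only demonstrates the modelling choice, not the NSC model itself.

```python
import numpy as np

rng = np.random.default_rng(2)
n_households, n_years = 50, 20

# Hypothetical household attributes (available labour, slope of their parcel).
labour = rng.uniform(0.5, 2.0, n_households)
slope = rng.uniform(0.0, 1.0, n_households)

def p_convert(labour, slope, grain_price, beta=(-1.0, 1.5, -2.0, 2.5)):
    """Continuous decision rule: probability of converting a parcel from
    cropland to orchard, as a logistic function of household and price terms."""
    b0, b1, b2, b3 = beta
    u = b0 + b1 * labour + b2 * slope + b3 * (1.0 - grain_price)
    return 1.0 / (1.0 + np.exp(-u))

landuse = np.zeros((n_years, n_households), dtype=int)   # 0 = cropland, 1 = orchard
for t in range(1, n_years):
    grain_price = 0.5 + 0.3 * np.sin(2 * np.pi * t / 10)  # hypothetical price cycle
    p = p_convert(labour, slope, grain_price)
    convert = rng.random(n_households) < p
    landuse[t] = np.maximum(landuse[t - 1], convert.astype(int))  # conversion persists

# Macro-scale pattern emerging from micro-scale decisions.
print("orchard share per year:", landuse.mean(axis=1).round(2))
```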
Multi-fluid Dynamics for Supersonic Jet-and-Crossflows and Liquid Plug Rupture
NASA Astrophysics Data System (ADS)
Hassan, Ezeldin A.
Multi-fluid dynamics simulations require appropriate numerical treatments based on the main flow characteristics, such as flow speed, turbulence, thermodynamic state, and time and length scales. In this thesis, two distinct problems are investigated: supersonic jet and crossflow interactions; and liquid plug propagation and rupture in an airway. Gaseous non-reactive ethylene jet and air crossflow simulation represents essential physics for fuel injection in SCRAMJET engines. The regime is highly unsteady, involving shocks, turbulent mixing, and large-scale vortical structures. An eddy-viscosity-based multi-scale turbulence model is proposed to resolve turbulent structures consistent with grid resolution and turbulence length scales. Predictions of the time-averaged fuel concentration from the multi-scale model are improved over Reynolds-averaged Navier-Stokes models originally derived from stationary flow. The improvement from the multi-scale model alone is, however, limited in cases where the vortical structures are small and scattered, which would require prohibitively expensive grids to resolve the flow field accurately. Statistical information related to turbulent fluctuations is utilized to estimate an effective turbulent Schmidt number, which is shown to be highly varying in space. Accordingly, an adaptive turbulent Schmidt number approach is proposed, allowing the resolved field to adaptively influence the value of the turbulent Schmidt number in the multi-scale turbulence model. The proposed model estimates a time-averaged turbulent Schmidt number adapted to the computed flowfield, instead of the constant value common to the eddy-viscosity-based Navier-Stokes models. This approach is assessed using a grid-refinement study for the normal injection case, and tested with 30-degree injection, showing improved results over the constant turbulent Schmidt number model in both the mean and the variance of the fuel concentration predictions. For the incompressible liquid plug propagation and rupture study, numerical simulations are conducted using an Eulerian-Lagrangian approach with a continuous-interface method. A reconstruction scheme is developed to allow topological changes during plug rupture by altering the connectivity information of the interface mesh. Rupture time is shown to be delayed as the initial precursor film thickness increases. During the plug rupture process, a sudden increase of mechanical stresses on the tube wall is recorded, which can cause tissue damage.
Yasir, Muhammad Naveed; Koh, Bong-Hwan
2018-01-01
This paper presents the local mean decomposition (LMD) integrated with multi-scale permutation entropy (MPE), also known as LMD-MPE, to investigate rolling element bearing (REB) fault diagnosis from measured vibration signals. First, LMD decomposes the vibration or acceleration measurements into separate product functions that are composed of both amplitude and frequency modulation. MPE then calculates the statistical permutation entropy of the product functions to extract the nonlinear features used to assess and classify the condition of healthy and damaged REB systems. Comparative experimental results for the conventional LMD-based multi-scale entropy and MPE are presented to verify the validity of the proposed technique. The study found that the integrated LMD-MPE approach provides reliable, damage-sensitive features when analyzing the bearing condition. The results on the REB experimental datasets show that the proposed approach yields more robust outcomes than existing methods. PMID:29690526
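For readers unfamiliar with the MPE half of the method, the sketch below coarse-grains a signal and computes normalised permutation entropy at several scales. It assumes the LMD step has already been carried out by a separate implementation and that the product functions are available; the embedding dimension, scales, and test signals are illustrative only.

```python
import numpy as np
from math import factorial
from itertools import permutations

def coarse_grain(x, scale):
    """Non-overlapping averages of length `scale` (standard multi-scale step)."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def permutation_entropy(x, m=3):
    """Normalised permutation entropy with embedding dimension m."""
    patterns = {p: 0 for p in permutations(range(m))}
    for i in range(len(x) - m + 1):
        patterns[tuple(np.argsort(x[i:i + m]).tolist())] += 1
    counts = np.array([c for c in patterns.values() if c > 0], dtype=float)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum() / np.log(factorial(m))

def mpe(x, scales=range(1, 11), m=3):
    return [permutation_entropy(coarse_grain(x, s), m) for s in scales]

# Toy comparison: white noise (irregular) versus a sine (regular) stands in for
# product functions of healthy versus damaged bearings.
rng = np.random.default_rng(3)
noise = rng.standard_normal(4096)
tone = np.sin(2 * np.pi * 25 * np.arange(4096) / 1024)
print("MPE noise:", np.round(mpe(noise), 2))
print("MPE tone :", np.round(mpe(tone), 2))
```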
A Multi-Scale Comparative Study of Shape and Sprawl in Metropolitan Regions of the United States
ERIC Educational Resources Information Center
Kugler, Tracy A.
2012-01-01
This dissertation constitutes a multi-scale quantitative and qualitative investigation of patterns of urban development in metropolitan regions of the United States. This work has generated a comprehensive data set on spatial patterns of metropolitan development in the U.S. and an approach to the study of such patterns that can be used to further…
Robert S. Arkle; David S. Pilliod; Steven E. Hanser; Matthew L. Brooks; Jeanne C. Chambers; James B. Grace; Kevin C. Knutson; David A. Pyke; Justin L. Welty; Troy A. Wirth
2014-01-01
A recurrent challenge in the conservation of wide-ranging, imperiled species is understanding which habitats to protect and whether we are capable of restoring degraded landscapes. For Greater Sage-grouse (Centrocercus urophasianus), a species of conservation concern in the western United States, we approached this problem by developing multi-scale empirical models of...
Multi-resource and multi-scale approaches for meeting the challenge of managing multiple species
Frank R. Thompson; Deborah M. Finch; John R. Probst; Glen D. Gaines; David S. Dobkin
1999-01-01
The large number of Neotropical migratory bird (NTMB) species and their diverse habitat requirements create conflicts and difficulties for land managers and conservationists. We provide examples of assessments or conservation efforts that attempt to address the problem of managing for multiple NTMB species. We advocate approaches at a variety of spatial and geographic...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Li; He, Ya-Ling; Kang, Qinjun
2013-12-15
A coupled (hybrid) simulation strategy spatially combining the finite volume method (FVM) and the lattice Boltzmann method (LBM), called CFVLBM, is developed to simulate coupled multi-scale multi-physicochemical processes. In the CFVLBM, the computational domain of multi-scale problems is divided into two sub-domains, i.e., an open, free fluid region and a region filled with porous materials. The FVM and LBM are used for these two regions, respectively, with information exchanged at the interface between the two sub-domains. A general reconstruction operator (RO) is proposed to derive the distribution functions in the LBM from the corresponding macro scalar, the governing equation of which obeys the convection–diffusion equation. The CFVLBM and the RO are validated in several typical physicochemical problems and then are applied to simulate complex multi-scale coupled fluid flow, heat transfer, mass transport, and chemical reaction in a wall-coated micro reactor. The maximum ratio of the grid size between the FVM and LBM regions is explored and discussed. -- Highlights: •A coupled simulation strategy for simulating multi-scale phenomena is developed. •Finite volume method and lattice Boltzmann method are coupled. •A reconstruction operator is derived to transfer information at the sub-domain interface. •Coupled multi-scale multiple physicochemical processes in a micro reactor are simulated. •Techniques to save computational resources and improve the efficiency are discussed.
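The reconstruction operator is the key coupling ingredient: at the FVM/LBM interface, the LBM distribution functions must be rebuilt from a macroscopic scalar known only on the FVM side. The sketch below shows the generic near-equilibrium way of doing this for a D2Q9 convection-diffusion lattice; it is a stand-in for the paper's general RO, and the lattice choice, scalar value, and velocity are assumptions.

```python
import numpy as np

# D2Q9 lattice directions and weights (standard lattice Boltzmann constants).
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
W = np.array([4/9] + [1/9]*4 + [1/36]*4)
CS2 = 1.0 / 3.0   # lattice sound speed squared

def reconstruct_distributions(phi, u):
    """Rebuild the nine distribution functions of a convection-diffusion scalar
    from its macroscopic value `phi` and the local velocity `u = (ux, uy)`,
    assuming the populations are at (linearised) equilibrium. This is the generic
    near-equilibrium reconstruction idea, used here as a stand-in for the
    reconstruction operator at an FVM/LBM interface."""
    eu = E @ np.asarray(u)                     # e_i . u for each direction
    return W * phi * (1.0 + eu / CS2)

# Sanity check: the reconstructed populations must return the macroscopic
# scalar (zeroth moment) and its advective flux (first moment).
phi, u = 2.5, np.array([0.05, -0.02])
f = reconstruct_distributions(phi, u)
print("sum f =", f.sum(), "(should equal phi)")
print("flux  =", f @ E, "(should equal phi*u)")
```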
A scale-based approach to interdisciplinary research and expertise in sports.
Ibáñez-Gijón, Jorge; Buekers, Martinus; Morice, Antoine; Rao, Guillaume; Mascret, Nicolas; Laurin, Jérome; Montagne, Gilles
2017-02-01
After more than 20 years since the introduction of ecological and dynamical approaches in sports research, their promising opportunity for interdisciplinary research has not been fulfilled yet. The complexity of the research process and the theoretical and empirical difficulties associated with an integrated ecological-dynamical approach have been the major factors hindering the generalisation of interdisciplinary projects in sports sciences. To facilitate this generalisation, we integrate the major concepts from the ecological and dynamical approaches to study behaviour as a multi-scale process. Our integration gravitates around the distinction between functional (ecological) and execution (organic) scales, and their reciprocal intra- and inter-scale constraints. We propose an (epistemological) scale-based definition of constraints that accounts for the concept of synergies as emergent coordinative structures. To illustrate how we can operationalise the notion of multi-scale synergies we use an interdisciplinary model of locomotor pointing. To conclude, we show the value of this approach for interdisciplinary research in sport sciences, as we discuss two examples of task-specific dimensionality reduction techniques in the context of an ongoing project that aims to unveil the determinants of expertise in basketball free throw shooting. These techniques provide relevant empirical evidence to help bootstrap the challenging modelling efforts required in sport sciences.
Biocultural approaches to well-being and sustainability indicators across scales
Eleanor J. Sterling; Christopher Filardi; Anne Toomey; Amanda Sigouin; Erin Betley; Nadav Gazit; Jennifer Newell; Simon Albert; Diana Alvira; Nadia Bergamini; Mary Blair; David Boseto; Kate Burrows; Nora Bynum; Sophie Caillon; Jennifer E. Caselle; Joachim Claudet; Georgina Cullman; Rachel Dacks; Pablo B. Eyzaguirre; Steven Gray; James Herrera; Peter Kenilorea; Kealohanuiopuna Kinney; Natalie Kurashima; Suzanne Macey; Cynthia Malone; Senoveva Mauli; Joe McCarter; Heather McMillen; Pua’ala Pascua; Patrick Pikacha; Ana L. Porzecanski; Pascale de Robert; Matthieu Salpeteur; Myknee Sirikolo; Mark H. Stege; Kristina Stege; Tamara Ticktin; Ron Vave; Alaka Wali; Paige West; Kawika B. Winter; Stacy D. Jupiter
2017-01-01
Monitoring and evaluation are central to ensuring that innovative, multi-scale, and interdisciplinary approaches to sustainability are effective. The development of relevant indicators for local sustainable management outcomes, and the ability to link these to broader national and international policy targets, are key challenges for resource managers, policymakers, and...
Multi-scale symbolic transfer entropy analysis of EEG
NASA Astrophysics Data System (ADS)
Yao, Wenpo; Wang, Jun
2017-10-01
From both global and local perspectives, we symbolize two kinds of EEG and analyze their dynamic and asymmetric information using multi-scale transfer entropy. A multi-scale process with scale factors from 1 to 199 in steps of 2 is applied to EEG from healthy people and epileptic patients, and the sequences are then symbolized either by permutation with embedding dimension 3 or by a global approach. The forward and reverse symbol sequences are taken as the inputs of transfer entropy. The scale-factor intervals over which the two kinds of EEG show satisfactory entropy distinctions are (37, 57) for the permutation symbolization and (65, 85) for the global one. At scale factor 67, the transfer entropies of the healthy and epileptic subjects under permutation symbolization, 0.1137 and 0.1028, show the biggest difference; the corresponding values for the global symbolization, 0.0641 and 0.0601, occur at scale factor 165. The results show that permutation symbolization, which takes the contribution of local information into account, gives better discrimination and is more effectively applied in our multi-scale transfer entropy analysis of EEG.
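A compact sketch of the processing chain described here (coarse-graining, permutation symbolization with embedding dimension 3, and a plug-in transfer entropy estimate between two symbol sequences) is given below. It uses surrogate signals rather than EEG, and the forward/reverse construction and the global symbolization variant of the paper are not reproduced.

```python
import numpy as np
from collections import Counter

def coarse_grain(x, scale):
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def permutation_symbols(x, m=3):
    """Map each length-m window to the index of its ordinal pattern."""
    return np.array([int("".join(map(str, np.argsort(x[i:i + m]).tolist())), base=m + 1)
                     for i in range(len(x) - m + 1)])

def transfer_entropy(x_sym, y_sym):
    """Plug-in estimate of TE(X -> Y) for two aligned symbol sequences."""
    n = min(len(x_sym), len(y_sym)) - 1
    triples = Counter(zip(y_sym[1:n + 1], y_sym[:n], x_sym[:n]))
    pairs_yy = Counter(zip(y_sym[1:n + 1], y_sym[:n]))
    pairs_yx = Counter(zip(y_sym[:n], x_sym[:n]))
    singles = Counter(y_sym[:n])
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p = c / n
        te += p * np.log2((c * singles[y0]) / (pairs_yy[(y1, y0)] * pairs_yx[(y0, x0)]))
    return te

# Toy example: y follows x with a lag of one coarse step, so the estimated
# TE(x -> y) should typically exceed TE(y -> x).
rng = np.random.default_rng(4)
scale = 3
x = rng.standard_normal(5000)
y = np.roll(x, scale) + 0.5 * rng.standard_normal(5000)
xs, ys = (permutation_symbols(coarse_grain(s, scale)) for s in (x, y))
print("TE x->y:", round(transfer_entropy(xs, ys), 3),
      " TE y->x:", round(transfer_entropy(ys, xs), 3))
```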
Modelling strategies to predict the multi-scale effects of rural land management change
NASA Astrophysics Data System (ADS)
Bulygina, N.; Ballard, C. E.; Jackson, B. M.; McIntyre, N.; Marshall, M.; Reynolds, B.; Wheater, H. S.
2011-12-01
Changes to the rural landscape due to agricultural land management are ubiquitous, yet predicting the multi-scale effects of land management change on hydrological response remains an important scientific challenge. Much empirical research has been of little generic value due to inadequate design and funding of monitoring programmes, while the modelling issues challenge the capability of data-based, conceptual and physics-based modelling approaches. In this paper we report on a major UK research programme, motivated by a national need to quantify effects of agricultural intensification on flood risk. Working with a consortium of farmers in upland Wales, a multi-scale experimental programme (from experimental plots to 2nd order catchments) was developed to address issues of upland agricultural intensification. This provided data support for a multi-scale modelling programme, in which highly detailed physics-based models were conditioned on the experimental data and used to explore effects of potential field-scale interventions. A meta-modelling strategy was developed to represent detailed modelling in a computationally-efficient manner for catchment-scale simulation; this allowed catchment-scale quantification of potential management options. For more general application to data-sparse areas, alternative approaches were needed. Physics-based models were developed for a range of upland management problems, including the restoration of drained peatlands, afforestation, and changing grazing practices. Their performance was explored using literature and surrogate data; although subject to high levels of uncertainty, important insights were obtained, of practical relevance to management decisions. In parallel, regionalised conceptual modelling was used to explore the potential of indices of catchment response, conditioned on readily-available catchment characteristics, to represent ungauged catchments subject to land management change. Although based in part on speculative relationships, significant predictive power was derived from this approach. Finally, using a formal Bayesian procedure, these different sources of information were combined with local flow data in a catchment-scale conceptual model application , i.e. using small-scale physical properties, regionalised signatures of flow and available flow measurements.
Multiscale modeling of a low magnetostrictive Fe-27wt%Co-0.5wt%Cr alloy
NASA Astrophysics Data System (ADS)
Savary, M.; Hubert, O.; Helbert, A. L.; Baudin, T.; Batonnet, R.; Waeckerlé, T.
2018-05-01
The present paper deals with the improvement of a multi-scale approach describing the magneto-mechanical coupling of Fe-27wt%Co-0.5wt%Cr alloy. The magnetostriction behavior is demonstrated as very different (low magnetostriction vs. high magnetostriction) when this material is submitted to two different final annealing conditions after cold rolling. The numerical data obtained from a multi-scale approach are in accordance with experimental data corresponding to the high magnetostriction level material. A bi-domain structure hypothesis is employed to explain the low magnetostriction behavior, in accordance with the effect of an applied tensile stress. A modification of the multiscale approach is proposed to match this result.
NASA Astrophysics Data System (ADS)
Önal, Orkun; Ozmenci, Cemre; Canadinc, Demircan
2014-09-01
A multi-scale modeling approach was applied to predict the impact response of a strain rate sensitive high-manganese austenitic steel. The roles of texture, geometry and strain rate sensitivity were successfully taken into account all at once by coupling crystal plasticity and finite element (FE) analysis. Specifically, crystal plasticity was utilized to obtain the multi-axial flow rule at different strain rates based on the experimental deformation response under uniaxial tensile loading. The equivalent stress - equivalent strain response was then incorporated into the FE model for the sake of a more representative hardening rule under impact loading. The current results demonstrate that reliable predictions can be obtained by proper coupling of crystal plasticity and FE analysis even if the experimental flow rule of the material is acquired under uniaxial loading and at moderate strain rates that are significantly slower than those attained during impact loading. Furthermore, the current findings also demonstrate the need for an experiment-based multi-scale modeling approach for the sake of reliable predictions of the impact response.
Large-scale delamination of multi-layers transition metal carbides and carbonitrides “MXenes”
Naguib, Michael; Unocic, Raymond R.; Armstrong, Beth L.; ...
2015-04-17
Herein we report on a general approach to delaminate multi-layered MXenes using an organic base to induce swelling that in turn weakens the bonds between the MX layers. Simple agitation or mild sonication of the swollen MXene in water resulted in the large-scale delamination of the MXene layers. The delamination method is demonstrated for vanadium carbide and titanium carbonitride MXenes.
Towards an eco-phylogenetic framework for infectious disease ecology.
Fountain-Jones, Nicholas M; Pearse, William D; Escobar, Luis E; Alba-Casals, Ana; Carver, Scott; Davies, T Jonathan; Kraberger, Simona; Papeş, Monica; Vandegrift, Kurt; Worsley-Tonks, Katherine; Craft, Meggan E
2018-05-01
Identifying patterns and drivers of infectious disease dynamics across multiple scales is a fundamental challenge for modern science. There is growing awareness that it is necessary to incorporate multi-host and/or multi-parasite interactions to understand and predict current and future disease threats better, and new tools are needed to help address this task. Eco-phylogenetics (phylogenetic community ecology) provides one avenue for exploring multi-host multi-parasite systems, yet the incorporation of eco-phylogenetic concepts and methods into studies of host pathogen dynamics has lagged behind. Eco-phylogenetics is a transformative approach that uses evolutionary history to infer present-day dynamics. Here, we present an eco-phylogenetic framework to reveal insights into parasite communities and infectious disease dynamics across spatial and temporal scales. We illustrate how eco-phylogenetic methods can help untangle the mechanisms of host-parasite dynamics from individual (e.g. co-infection) to landscape scales (e.g. parasite/host community structure). An improved ecological understanding of multi-host and multi-pathogen dynamics across scales will increase our ability to predict disease threats. © 2017 Cambridge Philosophical Society.
Multi-atlas learner fusion: An efficient segmentation approach for large-scale data.
Asman, Andrew J; Huo, Yuankai; Plassard, Andrew J; Landman, Bennett A
2015-12-01
We propose multi-atlas learner fusion (MLF), a framework for rapidly and accurately replicating the highly accurate, yet computationally expensive, multi-atlas segmentation framework based on fusing local learners. In the largest whole-brain multi-atlas study yet reported, multi-atlas segmentations are estimated for a training set of 3464 MR brain images. Using these multi-atlas estimates we (1) estimate a low-dimensional representation for selecting locally appropriate example images, and (2) build AdaBoost learners that map a weak initial segmentation to the multi-atlas segmentation result. Thus, to segment a new target image we project the image into the low-dimensional space, construct a weak initial segmentation, and fuse the trained, locally selected, learners. The MLF framework cuts the runtime on a modern computer from 36 h down to 3-8 min - a 270× speedup - by completely bypassing the need for deformable atlas-target registrations. Additionally, we (1) describe a technique for optimizing the weak initial segmentation and the AdaBoost learning parameters, (2) quantify the ability to replicate the multi-atlas result with mean accuracies approaching the multi-atlas intra-subject reproducibility on a testing set of 380 images, (3) demonstrate significant increases in the reproducibility of intra-subject segmentations when compared to a state-of-the-art multi-atlas framework on a separate reproducibility dataset, (4) show that, under the MLF framework, the large-scale data model significantly improves the segmentation over the small-scale model, and (5) indicate that the MLF framework has performance comparable to state-of-the-art multi-atlas segmentation algorithms without using non-local information. Copyright © 2015 Elsevier B.V. All rights reserved.
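A toy version of the two MLF ingredients, a low-dimensional representation for selecting locally appropriate training images and a boosted learner that maps a weak initial segmentation to the target labels, is sketched below with scikit-learn (PCA and AdaBoostClassifier). The synthetic "images", features, and atlas-selection rule are stand-ins and do not reproduce the authors' pipeline or its registration-free speedup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(5)

# Synthetic "images": flattened intensity vectors with two tissue classes.
def make_image(n_vox=400):
    labels = (np.linspace(0, 1, n_vox) + 0.1 * rng.standard_normal(n_vox) > 0.5).astype(int)
    intensity = labels + 0.3 * rng.standard_normal(n_vox)
    return intensity, labels

train = [make_image() for _ in range(30)]
target_int, target_lab = make_image()

# (1) Low-dimensional representation of the training images for atlas selection.
pca = PCA(n_components=5).fit(np.array([im for im, _ in train]))
codes = pca.transform(np.array([im for im, _ in train]))
target_code = pca.transform(target_int[None, :])
nearest = np.argsort(np.linalg.norm(codes - target_code, axis=1))[:10]  # 10 closest "atlases"

# (2) Boosted learner mapping (intensity, weak initial segmentation) -> labels.
def features(intensity):
    weak = (intensity > intensity.mean()).astype(float)    # crude initial segmentation
    return np.column_stack([intensity, weak])

X = np.vstack([features(train[i][0]) for i in nearest])
y = np.concatenate([train[i][1] for i in nearest])
clf = AdaBoostClassifier(n_estimators=50).fit(X, y)

pred = clf.predict(features(target_int))
print("voxel accuracy on the target image:", (pred == target_lab).mean())
```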
NASA Astrophysics Data System (ADS)
Joyce, Steven; Hartley, Lee; Applegate, David; Hoek, Jaap; Jackson, Peter
2014-09-01
Forsmark in Sweden has been proposed as the site of a geological repository for spent high-level nuclear fuel, to be located at a depth of approximately 470 m in fractured crystalline rock. The safety assessment for the repository has required a multi-disciplinary approach to evaluate the impact of hydrogeological and hydrogeochemical conditions close to the repository and in a wider regional context. Assessing the consequences of potential radionuclide releases requires quantitative site-specific information concerning the details of groundwater flow on the scale of individual waste canister locations (1-10 m) as well as details of groundwater flow and composition on the scale of groundwater pathways between the facility and the surface (500 m to 5 km). The purpose of this article is to provide an illustration of multi-scale modeling techniques and the results obtained when combining aspects of local-scale flows in fractures around a potential contaminant source with regional-scale groundwater flow and transport subject to natural evolution of the system. The approach set out is novel, as it incorporates both different scales of model and different levels of detail, combining discrete fracture network and equivalent continuous porous medium representations of fractured bedrock.
Multi-scale graph-cut algorithm for efficient water-fat separation.
Berglund, Johan; Skorpil, Mikael
2017-09-01
The aim was to improve the accuracy and robustness to noise of water-fat separation by unifying the multi-scale and graph-cut-based approaches to B0 correction. A previously proposed water-fat separation algorithm that corrects for B0 field inhomogeneity in 3D by a single quadratic pseudo-Boolean optimization (QPBO) graph cut was incorporated into a multi-scale framework, where field map solutions are propagated from coarse to fine scales for voxels that are not resolved by the graph cut. The accuracy of the single-scale and multi-scale QPBO algorithms was evaluated against benchmark reference datasets. The robustness to noise was evaluated by adding noise to the input data prior to water-fat separation. Both algorithms achieved the highest accuracy when compared with seven previously published methods, while computation times were acceptable for implementation in clinical routine. The multi-scale algorithm was more robust to noise than the single-scale algorithm, while causing only a small increase (+10%) in the reconstruction time. The proposed 3D multi-scale QPBO algorithm offers accurate water-fat separation, robustness to noise, and fast reconstruction. The software implementation is freely available to the research community. Magn Reson Med 78:941-949, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Techniques for Mapping Synthetic Aperture Radar Processing Algorithms to Multi-GPU Clusters
2012-12-01
Probabilistic, meso-scale flood loss modelling
NASA Astrophysics Data System (ADS)
Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno
2016-04-01
Flood risk analyses are an important basis for decisions on flood risk management and adaptation. However, such analyses are associated with significant uncertainty, all the more if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention in recent years, they are still not standard practice for flood risk assessments, and even less so for flood loss modelling. The state of the art in flood loss modelling is still the use of simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood loss models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we demonstrate and evaluate the upscaling of the approach to the meso-scale, namely on the basis of land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany (Botto et al. submitted). The application of bagging decision tree based loss models provides a probability distribution of estimated loss per municipality. Validation is undertaken on the one hand via a comparison with eight deterministic loss models, including stage-damage functions as well as multi-variate models. On the other hand, the results are compared with official loss data provided by the Saxon Relief Bank (SAB). The results show that uncertainties of loss estimation remain high. Thus, the significant advantage of this probabilistic flood loss estimation approach is that it inherently provides quantitative information about the uncertainty of the prediction. References: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64. Botto A, Kreibich H, Merz B, Schröter K (submitted) Probabilistic, multi-variable flood loss modelling on the meso-scale with BT-FLEMO. Risk Analysis.
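The core of the bagging-decision-tree idea, an ensemble of regression trees whose per-tree predictions form a loss distribution rather than a point estimate, can be sketched with scikit-learn (version 1.2 or later assumed for the `estimator` keyword). The predictor variables, training data, and scenario below are invented; they mimic the cited BT-FLEMO approach rather than reproduce it.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(6)

# Hypothetical flood-loss training data: water depth (m), duration (h),
# building quality index  ->  relative loss in [0, 1].
n = 2000
X = np.column_stack([rng.uniform(0, 3, n), rng.uniform(1, 96, n), rng.uniform(0, 1, n)])
loss = np.clip(0.25 * X[:, 0] + 0.002 * X[:, 1] - 0.2 * X[:, 2]
               + 0.05 * rng.standard_normal(n), 0, 1)

model = BaggingRegressor(estimator=DecisionTreeRegressor(max_depth=6),
                         n_estimators=200, random_state=0).fit(X, loss)

# One land-use unit / municipality scenario: the per-tree predictions give a
# probability distribution of estimated loss, not just a point value.
scenario = np.array([[1.8, 48.0, 0.4]])
per_tree = np.array([t.predict(scenario)[0] for t in model.estimators_])
print("median loss:", np.median(per_tree).round(3),
      "  5-95% range:", np.percentile(per_tree, [5, 95]).round(3))
```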
High-resolution time-frequency representation of EEG data using multi-scale wavelets
NASA Astrophysics Data System (ADS)
Li, Yang; Cui, Wei-Gang; Luo, Mei-Lin; Li, Ke; Wang, Lina
2017-09-01
An efficient time-varying autoregressive (TVAR) modelling scheme that expands the time-varying parameters onto the multi-scale wavelet basis functions is presented for modelling nonstationary signals and with applications to time-frequency analysis (TFA) of electroencephalogram (EEG) signals. In the new parametric modelling framework, the time-dependent parameters of the TVAR model are locally represented by using a novel multi-scale wavelet decomposition scheme, which can allow the capability to capture the smooth trends as well as track the abrupt changes of time-varying parameters simultaneously. A forward orthogonal least square (FOLS) algorithm aided by mutual information criteria are then applied for sparse model term selection and parameter estimation. Two simulation examples illustrate that the performance of the proposed multi-scale wavelet basis functions outperforms the only single-scale wavelet basis functions or Kalman filter algorithm for many nonstationary processes. Furthermore, an application of the proposed method to a real EEG signal demonstrates the new approach can provide highly time-dependent spectral resolution capability.
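The general mechanics, expanding time-varying AR coefficients onto a small multi-scale basis and estimating the expansion weights by regression, can be sketched compactly. Below, simple piecewise-constant (Haar-style) basis functions stand in for the paper's wavelets and plain least squares stands in for the FOLS/mutual-information term selection; the test signal and AR order are illustrative.

```python
import numpy as np

# Test signal: a sinusoid whose frequency jumps halfway through, plus noise.
rng = np.random.default_rng(7)
N, fs = 1024, 256.0
t = np.arange(N) / fs
y = np.where(t < 2.0, np.sin(2*np.pi*20*t), np.sin(2*np.pi*60*t)) + 0.1*rng.standard_normal(N)

# Multi-scale piecewise-constant basis: one function over the whole record,
# then dyadic halves and quarters (redundant; lstsq returns the minimum-norm fit).
def basis(N, levels=3):
    funcs = []
    for lev in range(levels):
        for k in range(2 ** lev):
            phi = np.zeros(N)
            phi[k * N // 2**lev:(k + 1) * N // 2**lev] = 1.0
            funcs.append(phi)
    return np.array(funcs)            # shape (J, N)

p, PHI = 2, basis(N)                  # AR order and basis functions
J = PHI.shape[0]

# Regression y(t) = sum_i sum_j c_ij * phi_j(t) * y(t-i) + e(t), solved by LS.
rows = np.arange(p, N)
D = np.column_stack([PHI[j, rows] * y[rows - i]
                     for i in range(1, p + 1) for j in range(J)])
c, *_ = np.linalg.lstsq(D, y[rows], rcond=None)
C = c.reshape(p, J)

# Time-varying AR coefficients and the resulting time-frequency surface.
A = C @ PHI                            # shape (p, N): a_i(t)
freqs = np.linspace(0, fs / 2, 128)
z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, p + 1)) / fs)  # (F, p)
tf = 1.0 / np.abs(1.0 - z @ A) ** 2    # (F, N) TVAR spectrum (unnormalised)
print("dominant frequency before/after the jump:",
      freqs[tf[:, 200].argmax()], freqs[tf[:, 800].argmax()])
```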
MultiMetEval: Comparative and Multi-Objective Analysis of Genome-Scale Metabolic Models
Gevorgyan, Albert; Kierzek, Andrzej M.; Breitling, Rainer; Takano, Eriko
2012-01-01
Comparative metabolic modelling is emerging as a novel field, supported by the development of reliable and standardized approaches for constructing genome-scale metabolic models in high throughput. New software solutions are needed to allow efficient comparative analysis of multiple models in the context of multiple cellular objectives. Here, we present the user-friendly software framework Multi-Metabolic Evaluator (MultiMetEval), built upon SurreyFBA, which allows the user to compose collections of metabolic models that together can be subjected to flux balance analysis. Additionally, MultiMetEval implements functionalities for multi-objective analysis by calculating the Pareto front between two cellular objectives. Using a previously generated dataset of 38 actinobacterial genome-scale metabolic models, we show how these approaches can lead to exciting novel insights. Firstly, after incorporating several pathways for the biosynthesis of natural products into each of these models, comparative flux balance analysis predicted that species like Streptomyces that harbour the highest diversity of secondary metabolite biosynthetic gene clusters in their genomes do not necessarily have the metabolic network topology most suitable for compound overproduction. Secondly, multi-objective analysis of biomass production and natural product biosynthesis in these actinobacteria shows that the well-studied occurrence of discrete metabolic switches during the change of cellular objectives is inherent to their metabolic network architecture. Comparative and multi-objective modelling can lead to insights that could not be obtained by normal flux balance analyses. MultiMetEval provides a powerful platform that makes these analyses straightforward for biologists. Sources and binaries of MultiMetEval are freely available from https://github.com/PiotrZakrzewski/MetEval/downloads. PMID:23272111
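The Pareto-front computation between two cellular objectives can be reproduced in miniature with an epsilon-constraint series of flux balance analysis linear programs. The sketch below uses a four-reaction toy network and SciPy's linprog as a stand-in for SurreyFBA; it is not an actinobacterial genome-scale model.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric model (rows: metabolites A, B; columns: reactions).
#   R1: -> A (uptake)   R2: A -> B   R3: B -> biomass   R4: A -> product
S = np.array([[1, -1, 0, -1],
              [0,  1, -1, 0]], dtype=float)
bounds = [(0, 10), (0, None), (0, None), (0, None)]   # uptake capped at 10
BIOMASS, PRODUCT = 2, 3

def fba(objective_idx, extra_eq=None):
    """Maximise one flux subject to S v = 0 (linprog minimises, hence the -1)."""
    c = np.zeros(S.shape[1])
    c[objective_idx] = -1.0
    A_eq, b_eq = S, np.zeros(S.shape[0])
    if extra_eq is not None:                      # fix another flux to a value
        row, val = extra_eq
        A_eq = np.vstack([A_eq, np.eye(S.shape[1])[row]])
        b_eq = np.append(b_eq, val)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.x

# Epsilon-constraint sweep: fix the product flux at increasing levels and
# maximise biomass at each level -> points on the Pareto front.
max_product = fba(PRODUCT)[PRODUCT]
for level in np.linspace(0, max_product, 6):
    v = fba(BIOMASS, extra_eq=(PRODUCT, level))
    print(f"product flux {level:5.2f}  ->  max biomass {v[BIOMASS]:5.2f}")
```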
Multi-Scale Approach to Understanding Source-Sink Dynamics of Amphibians
2015-12-01
...spotted salamander, A. maculatum) at Fort Leonard Wood (FLW), Missouri. We used a multi-faceted approach in which we combined intensive ecological field studies, genetic analyses, and spatial demographic networks to identify optimal locations for wetland construction and restoration.
Multi-level molecular modelling for plasma medicine
NASA Astrophysics Data System (ADS)
Bogaerts, Annemie; Khosravian, Narjes; Van der Paal, Jonas; Verlackt, Christof C. W.; Yusupov, Maksudbek; Kamaraj, Balu; Neyts, Erik C.
2016-02-01
Modelling at the molecular or atomic scale can be very useful for obtaining a better insight in plasma medicine. This paper gives an overview of different atomic/molecular scale modelling approaches that can be used to study the direct interaction of plasma species with biomolecules or the consequences of these interactions for the biomolecules on a somewhat longer time-scale. These approaches include density functional theory (DFT), density functional based tight binding (DFTB), classical reactive and non-reactive molecular dynamics (MD) and united-atom or coarse-grained MD, as well as hybrid quantum mechanics/molecular mechanics (QM/MM) methods. Specific examples will be given for three important types of biomolecules, present in human cells, i.e. proteins, DNA and phospholipids found in the cell membrane. The results show that each of these modelling approaches has its specific strengths and limitations, and is particularly useful for certain applications. A multi-level approach is therefore most suitable for obtaining a global picture of the plasma-biomolecule interactions.
Scale-dependent intrinsic entropies of complex time series.
Yeh, Jia-Rong; Peng, Chung-Kang; Huang, Norden E
2016-04-13
Multi-scale entropy (MSE) was developed as a measure of complexity for complex time series, and it has been applied widely in recent years. The MSE algorithm is based on the assumption that biological systems possess the ability to adapt and function in an ever-changing environment, and these systems need to operate across multiple temporal and spatial scales, such that their complexity is also multi-scale and hierarchical. Here, we present a systematic approach to apply the empirical mode decomposition algorithm, which can detrend time series on various time scales, prior to analysing a signal's complexity by measuring the irregularity of its dynamics on multiple time scales. Simulated time series of fractal Gaussian noise and human heartbeat time series were used to study the performance of this new approach. We show that our method can successfully quantify the fractal properties of the simulated time series and can accurately distinguish modulations in human heartbeat time series in health and disease. © 2016 The Author(s).
Delmotte, Sylvestre; Lopez-Ridaura, Santiago; Barbier, Jean-Marc; Wery, Jacques
2013-11-15
Evaluating the impacts of the development of alternative agricultural systems, such as organic or low-input cropping systems, in the context of an agricultural region requires the use of specific tools and methodologies. They should allow a prospective (using scenarios), multi-scale (taking into account the field, farm and regional level), integrated (notably multicriteria) and participatory assessment, abbreviated PIAAS (for Participatory Integrated Assessment of Agricultural System). In this paper, we compare the possible contribution to PIAAS of three modeling approaches i.e. Bio-Economic Modeling (BEM), Agent-Based Modeling (ABM) and statistical Land-Use/Land Cover Change (LUCC) models. After a presentation of each approach, we analyze their advantages and drawbacks, and identify their possible complementarities for PIAAS. Statistical LUCC modeling is a suitable approach for multi-scale analysis of past changes and can be used to start discussion about the futures with stakeholders. BEM and ABM approaches have complementary features for scenarios assessment at different scales. While ABM has been widely used for participatory assessment, BEM has been rarely used satisfactorily in a participatory manner. On the basis of these results, we propose to combine these three approaches in a framework targeted to PIAAS. Copyright © 2013 Elsevier Ltd. All rights reserved.
USDA-ARS?s Scientific Manuscript database
A continuous monitoring of daily evapotranspiration (ET) at field scale can be achieved by combining thermal infrared remote sensing data information from multiple satellite platforms. Here, an integrated approach to field scale ET mapping is described, combining multi-scale surface energy balance e...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kucharik, M.; Scovazzi, Guglielmo; Shashkov, Mikhail Jurievich
2017-10-28
Hourglassing is a well-known pathological numerical artifact affecting the robustness and accuracy of Lagrangian methods. There exist a large number of hourglass control/suppression strategies. In the community of the staggered compatible Lagrangian methods, the approach of sub-zonal pressure forces is among the most widely used. However, this approach is known to add numerical strength to the solution, which can cause potential problems in certain types of simulations, for instance in simulations of various instabilities. To avoid this complication, we have adapted the multi-scale residual-based stabilization typically used in the finite element approach for the staggered compatible framework. In this study, we describe two discretizations of the new approach, demonstrate their properties, and compare them with the method of sub-zonal pressure forces on selected numerical problems.
Multi-scale image segmentation and numerical modeling in carbonate rocks
NASA Astrophysics Data System (ADS)
Alves, G. C.; Vanorio, T.
2016-12-01
Numerical methods based on computational simulations can be an important tool in estimating physical properties of rocks. These can complement experimental results, especially when time constraints and sample availability are a problem. However, computational models created at different scales can yield conflicting results with respect to the physical laboratory. This problem is exacerbated in carbonate rocks due to their heterogeneity at all scales. We developed a multi-scale approach performing segmentation of the rock images and numerical modeling across several scales, accounting for those heterogeneities. As a first step, we measured the porosity and the elastic properties of a group of carbonate samples with varying micrite content. Then, samples were imaged by Scanning Electron Microscope (SEM) as well as optical microscope at different magnifications. We applied three different image segmentation techniques to create numerical models from the SEM images and performed numerical simulations of the elastic wave-equation. Our results show that a multi-scale approach can efficiently account for micro-porosities in tight micrite-supported samples, yielding acoustic velocities comparable to those obtained experimentally. Nevertheless, in high-porosity samples characterized by larger grain/micrite ratio, results show that SEM scale images tend to overestimate velocities, mostly due to their inability to capture macro- and/or intragranular- porosity. This suggests that, for high-porosity carbonate samples, optical microscope images would be more suited for numerical simulations.
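The bookkeeping behind combining scales is simple: macroporosity and the micrite fraction come from the coarse (optical-scale) segmentation, the microporosity of the micrite comes from the fine (SEM-scale) segmentation, and the two are combined volumetrically. The sketch below uses synthetic images and plain thresholding as a stand-in for the three segmentation techniques compared in the study.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic "optical-scale" image: pores (dark), micrite matrix (mid), grains (bright).
optical = rng.choice([0.05, 0.5, 0.9], size=(512, 512), p=[0.10, 0.45, 0.45])
# Synthetic "SEM-scale" image of the micrite matrix only, with micro-pores.
sem_micrite = rng.choice([0.05, 0.6], size=(512, 512), p=[0.20, 0.80])

# Simple threshold segmentation (stands in for the segmentation techniques compared).
macro_pore = optical < 0.2                      # resolvable pores at optical scale
micrite = (optical >= 0.2) & (optical < 0.7)    # micrite matrix pixels
micro_phi = (sem_micrite < 0.2).mean()          # micro-porosity inside micrite

phi_macro = macro_pore.mean()
f_micrite = micrite.mean()
# Total porosity: macro pores plus micro pores hosted in the micrite fraction.
phi_total = phi_macro + f_micrite * micro_phi
print(f"macro {phi_macro:.3f}, micrite fraction {f_micrite:.3f}, "
      f"micro {micro_phi:.3f} -> total {phi_total:.3f}")
```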
Improving resolution of dynamic communities in human brain networks through targeted node removal
Turner, Benjamin O.; Miller, Michael B.; Carlson, Jean M.
2017-01-01
Current approaches to dynamic community detection in complex networks can fail to identify multi-scale community structure, or to resolve key features of community dynamics. We propose a targeted node removal technique to improve the resolution of community detection. Using synthetic oscillator networks with well-defined “ground truth” communities, we quantify the community detection performance of a common modularity maximization algorithm. We show that the performance of the algorithm on communities of a given size deteriorates when these communities are embedded in multi-scale networks with communities of different sizes, compared to the performance in a single-scale network. We demonstrate that targeted node removal during community detection improves performance on multi-scale networks, particularly when removing the most functionally cohesive nodes. Applying this approach to network neuroscience, we compare dynamic functional brain networks derived from fMRI data taken during both repetitive single-task and varied multi-task experiments. After the removal of regions in visual cortex, the most coherent functional brain area during the tasks, community detection is better able to resolve known functional brain systems into communities. In addition, node removal enables the algorithm to distinguish clear differences in brain network dynamics between these experiments, revealing task-switching behavior that was not identified with the visual regions present in the network. These results indicate that targeted node removal can improve spatial and temporal resolution in community detection, and they demonstrate a promising approach for comparison of network dynamics between neuroscientific data sets with different resolution parameters. PMID:29261662
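A minimal version of the targeted-removal experiment can be run with NetworkX: detect communities by greedy modularity maximization on a planted multi-scale network, remove the nodes with the highest within-community degree (a crude proxy for "most functionally cohesive"), and detect again. The synthetic network and removal budget are assumptions, and whether resolution improves depends on the realization.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Synthetic multi-scale network: two large and four small planted communities.
sizes = [40, 40, 10, 10, 10, 10]
probs = [[0.30 if i == j else 0.02 for j in range(len(sizes))] for i in range(len(sizes))]
G = nx.stochastic_block_model(sizes, probs, seed=0)

def detect(graph):
    return list(greedy_modularity_communities(graph))

before = detect(G)

# Rank nodes by within-community degree and remove the most cohesive ones
# before re-running community detection.
membership = {n: i for i, com in enumerate(before) for n in com}
within_deg = {n: sum(membership[m] == membership[n] for m in G[n]) for n in G}
targets = sorted(within_deg, key=within_deg.get, reverse=True)[:10]

after = detect(G.subgraph(set(G) - set(targets)).copy())
print("communities before removal:", len(before))
print("communities after targeted removal:", len(after))
```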
Patterns of Risk Using an Integrated Spatial Multi-Hazard Model (PRISM Model)
Multi-hazard risk assessment has long centered on small scale needs, whereby a single community or group of communities’ exposures are assessed to determine potential mitigation strategies. While this approach has advanced the understanding of hazard interactions, it is li...
Towards a macrosystems approach for successful coastal management
Managing coastal resources for resiliency and sustainability often requires integrative, multi-disciplinary approaches across varying spatial and temporal scales to engage stakeholders and inform decision-makers. We discuss case studies integrating wetland ecology, economics, soc...
Scalable non-negative matrix tri-factorization.
Čopar, Andrej; Žitnik, Marinka; Zupan, Blaž
2017-01-01
Matrix factorization is a well-established pattern discovery tool that has seen numerous applications in biomedical data analytics, such as gene expression co-clustering, patient stratification, and gene-disease association mining. Matrix factorization learns a latent data model that takes a data matrix and transforms it into a latent feature space enabling generalization, noise removal and feature discovery. However, factorization algorithms are numerically intensive, and hence there is a pressing challenge to scale current algorithms to work with large datasets. Our focus in this paper is matrix tri-factorization, a popular method that is not limited by the assumption of standard matrix factorization about data residing in one latent space. Matrix tri-factorization solves this by inferring a separate latent space for each dimension in a data matrix, and a latent mapping of interactions between the inferred spaces, making the approach particularly suitable for biomedical data mining. We developed a block-wise approach for latent factor learning in matrix tri-factorization. The approach partitions a data matrix into disjoint submatrices that are treated independently and fed into a parallel factorization system. An appealing property of the proposed approach is its mathematical equivalence with serial matrix tri-factorization. In a study on large biomedical datasets we show that our approach scales well on multi-processor and multi-GPU architectures. On a four-GPU system we demonstrate that our approach can be more than 100 times faster than its single-processor counterpart. A general approach for scaling non-negative matrix tri-factorization is proposed. The approach is especially useful for parallel matrix factorization implemented in a multi-GPU environment. We expect the new approach will be useful in emerging procedures for latent factor analysis, notably for data integration, where many large data matrices need to be collectively factorized.
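The parallel block-wise scheme itself is not reproduced here, but the computation it is stated to be mathematically equivalent to, non-negative matrix tri-factorization X ≈ U S Vᵀ, can be sketched with the standard multiplicative updates for the squared Frobenius error; the toy block-structured matrix and ranks are illustrative.

```python
import numpy as np

def nmtf(X, k1, k2, n_iter=300, eps=1e-9, seed=0):
    """Non-negative matrix tri-factorization  X ~ U @ S @ V.T  with standard
    multiplicative updates minimising the squared Frobenius error."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    U = rng.random((n, k1))
    S = rng.random((k1, k2))
    V = rng.random((m, k2))
    for _ in range(n_iter):
        U *= (X @ V @ S.T) / (U @ S @ V.T @ V @ S.T + eps)
        V *= (X.T @ U @ S) / (V @ S.T @ U.T @ U @ S + eps)
        S *= (U.T @ X @ V) / (U.T @ U @ S @ V.T @ V + eps)
    return U, S, V

# Toy block-structured data matrix (e.g. genes x diseases with co-clusters).
rng = np.random.default_rng(1)
blocks = np.kron(np.eye(3), np.ones((20, 30)))            # 3 planted co-clusters
X = blocks + 0.05 * rng.random(blocks.shape)
U, S, V = nmtf(X, k1=3, k2=3)
err = np.linalg.norm(X - U @ S @ V.T) / np.linalg.norm(X)
print("relative reconstruction error:", round(err, 3))
```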
Christodoulidis, Argyrios; Hurtut, Thomas; Tahar, Houssem Ben; Cheriet, Farida
2016-09-01
Segmenting the retinal vessels from fundus images is a prerequisite for many CAD systems for the automatic detection of diabetic retinopathy lesions. So far, research efforts have concentrated mainly on the accurate localization of the large to medium diameter vessels. However, failure to detect the smallest vessels at the segmentation step can lead to false positive lesion detection counts in a subsequent lesion analysis stage. In this study, a new hybrid method for the segmentation of the smallest vessels is proposed. Line detection and perceptual organization techniques are combined in a multi-scale scheme. Small vessels are reconstructed from the perceptual-based approach via tracking and pixel painting. The segmentation was validated in a high resolution fundus image database including healthy and diabetic subjects using pixel-based as well as perceptual-based measures. The proposed method achieves 85.06% sensitivity rate, while the original multi-scale line detection method achieves 81.06% sensitivity rate for the corresponding images (p<0.05). The improvement in the sensitivity rate for the database is 6.47% when only the smallest vessels are considered (p<0.05). For the perceptual-based measure, the proposed method improves the detection of the vasculature by 7.8% against the original multi-scale line detection method (p<0.05). Copyright © 2016 Elsevier Ltd. All rights reserved.
Interface-Resolving Simulation of Collision Efficiency of Cloud Droplets
NASA Astrophysics Data System (ADS)
Wang, Lian-Ping; Peng, Cheng; Rosa, Bodgan; Onishi, Ryo
2017-11-01
Small-scale air turbulence could enhance the geometric collision rate of cloud droplets while large-scale air turbulence could augment the diffusional growth of cloud droplets. Air turbulence could also enhance the collision efficiency of cloud droplets. Accurate simulation of collision efficiency, however, requires capture of the multi-scale droplet-turbulence and droplet-droplet interactions, which has only been partially achieved in the recent past using the hybrid direct numerical simulation (HDNS) approach, in which a Stokes disturbance flow is assumed. The HDNS approach has two major drawbacks: (1) the short-range droplet-droplet interaction is not treated rigorously; (2) the finite-Reynolds number correction to the collision efficiency is not included. In this talk, using two independent numerical methods, we will develop an interface-resolved simulation approach in which the disturbance flows are directly resolved numerically, combined with a rigorous lubrication correction model for near-field droplet-droplet interaction. This multi-scale approach is first used to study the effect of finite flow Reynolds numbers on the droplet collision efficiency in still air. Our simulation results show a significant finite-Re effect on collision efficiency when the droplets are of similar sizes. Preliminary results on integrating this approach in a turbulent flow laden with droplets will also be presented. This work is partially supported by the National Science Foundation.
Multiscale recurrence analysis of spatio-temporal data
NASA Astrophysics Data System (ADS)
Riedl, M.; Marwan, N.; Kurths, J.
2015-12-01
The description and analysis of spatio-temporal dynamics is a crucial task in many scientific disciplines. In this work, we propose a method which uses the mapogram as a similarity measure between spatially distributed data instances at different time points. The resulting similarity values of the pairwise comparison are used to construct a recurrence plot in order to benefit from established tools of recurrence quantification analysis and recurrence network analysis. In contrast to other recurrence tools for this purpose, the mapogram approach allows a specific focus on different spatial scales that can be used in a multi-scale analysis of spatio-temporal dynamics. We illustrate this approach by application to mixed dynamics, such as traveling parallel wave fronts with additive noise, as well as more complicated examples, pseudo-random numbers and coupled map lattices with a semi-logistic mapping rule. Especially the more complicated examples show the usefulness of the multi-scale consideration in order to take spatial patterns of different scales and with different rhythms into account. So, this mapogram approach promises new insights into problems of climatology, ecology, or medicine.
Entangled time in flocking: Multi-time-scale interaction reveals emergence of inherent noise
Murakami, Hisashi
2018-01-01
Collective behaviors that seem highly ordered and result in collective alignment, such as schooling by fish and flocking by birds, arise from seamless shuffling (such as super-diffusion) and bustling inside groups (such as Lévy walks). However, such noisy behavior inside groups appears to preclude the collective behavior: intuitively, we expect that noisy behavior would lead to the group being destabilized and broken into small sub groups, and high alignment seems to preclude shuffling of neighbors. Although statistical modeling approaches with extrinsic noise, such as the maximum entropy approach, have provided some reasonable descriptions, they ignore the cognitive perspective of the individuals. In this paper, we try to explain how the group tendency, that is, high alignment, and highly noisy individual behavior can coexist in a single framework. The key aspect of our approach is multi-time-scale interaction emerging from the existence of an interaction radius that reflects short-term and long-term predictions. This multi-time-scale interaction is a natural extension of the attraction and alignment concept in many flocking models. When we apply this method in a two-dimensional model, various flocking behaviors, such as swarming, milling, and schooling, emerge. The approach also explains the appearance of super-diffusion, the Lévy walk in groups, and local equilibria. At the end of this paper, we discuss future developments, including extending our model to three dimensions. PMID:29689074
Sexual networks: measuring sexual selection in structured, polyandrous populations.
McDonald, Grant C; James, Richard; Krause, Jens; Pizzari, Tommaso
2013-03-05
Sexual selection is traditionally measured at the population level, assuming that populations lack structure. However, increasing evidence undermines this approach, indicating that intrasexual competition in natural populations often displays complex patterns of spatial and temporal structure. This complexity is due in part to the degree and mechanisms of polyandry within a population, which can influence the intensity and scale of both pre- and post-copulatory sexual competition. Attempts to measure selection at the local and global scale have been made through multi-level selection approaches. However, definitions of local scale are often based on physical proximity, providing a rather coarse measure of local competition, particularly in polyandrous populations where the local scale of pre- and post-copulatory competition may differ drastically from each other. These limitations can be solved by social network analysis, which allows us to define a unique sexual environment for each member of a population: 'local scale' competition, therefore, becomes an emergent property of a sexual network. Here, we first propose a novel quantitative approach to measure pre- and post-copulatory sexual selection, which integrates multi-level selection with information on local scale competition derived as an emergent property of networks of sexual interactions. We then use simple simulations to illustrate the ways in which polyandry can impact estimates of sexual selection. We show that for intermediate levels of polyandry, the proposed network-based approach provides substantially more accurate measures of sexual selection than the more traditional population-level approach. We argue that the increasing availability of fine-grained behavioural datasets provides exciting new opportunities to develop network approaches to study sexual selection in complex societies.
The Impact of Large, Multi-Function/Multi-Site Competitions
2003-08-01
this approach generates larger savings and improved service quality, and is less expensive to implement. Moreover, it is a way to meet the President's...of the study is to assess the degree to which large-scale competitions completed have resulted in increased savings and service quality and decreased
A scale space feature based registration technique for fusion of satellite imagery
NASA Technical Reports Server (NTRS)
Raghavan, Srini; Cromp, Robert F.; Campbell, William C.
1997-01-01
Feature-based registration is one of the most reliable methods to register multi-sensor images (both active and passive imagery), since features are often more reliable than intensity or radiometric values. The only situation where a feature-based approach will fail is when the scene is completely homogeneous or densely textured, in which case a combination of feature- and intensity-based methods may yield better results. In this paper, we present some preliminary results of testing our scale space feature based registration technique, a modified version of the feature-based method developed earlier for classification of multi-sensor imagery. The proposed approach removes the sensitivity to parameter selection experienced in the earlier version, as explained later.
Automatic image enhancement based on multi-scale image decomposition
NASA Astrophysics Data System (ADS)
Feng, Lu; Wu, Zhuangzhi; Pei, Luo; Long, Xiong
2014-01-01
In image processing and computational photography, automatic image enhancement is one of the long-range objectives. Recent automatic image enhancement methods take into account not only global semantics, such as correcting color hue and brightness imbalances, but also the local content of the image, such as human faces or the sky in a landscape. In this paper we describe a new scheme for automatic image enhancement that considers both the global semantics and the local content of the image. Our automatic image enhancement method employs a multi-scale edge-aware image decomposition approach to detect underexposed regions and enhance the detail of the salient content. The experimental results demonstrate the effectiveness of our approach compared to existing automatic enhancement methods.
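The base/detail idea behind such enhancement can be sketched with a single edge-aware filter. The following Python snippet uses a bilateral filter as a stand-in for the paper's multi-scale edge-aware decomposition; the file name, filter parameters, gamma value, and detail gain are assumptions, not the published settings.

# Detail enhancement via an edge-aware base/detail decomposition (sketch).
# A single bilateral filter stands in for the paper's multi-scale decomposition;
# file name, filter parameters and gains are illustrative assumptions.
import cv2
import numpy as np

img = cv2.imread("photo.jpg").astype(np.float32) / 255.0   # hypothetical input
base = cv2.bilateralFilter(img, d=9, sigmaColor=0.1, sigmaSpace=7)  # coarse layer
detail = img - base                                          # fine-scale layer

# brighten under-exposed regions of the base layer with a simple gamma curve,
# then boost local detail before recombining
enhanced = np.clip(np.power(base, 0.8) + 1.5 * detail, 0.0, 1.0)
cv2.imwrite("enhanced.jpg", (enhanced * 255).astype(np.uint8))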
NASA Astrophysics Data System (ADS)
Liu, Q.; Jing, L.; Li, Y.; Tang, Y.; Li, H.; Lin, Q.
2016-04-01
For the purpose of forest management, high-resolution LiDAR and optical remote sensing imagery are used for treetop detection, tree crown delineation, and classification. The purpose of this study is to develop a self-adjusted dominant-scale calculation method and a new crown horizontal cutting method for the tree canopy height model (CHM) to detect and delineate tree crowns from LiDAR, under the hypothesis that a treetop is a radiometric or altitudinal maximum and that tree crowns consist of multi-scale branches. The major concept of the method is to develop an automatic feature-scale selection strategy on the CHM, and a multi-scale morphological reconstruction-open crown decomposition (MRCD) that extracts morphological multi-scale features of the CHM by: cutting the CHM from treetop to ground; analysing and refining the dominant multiple scales with differential horizontal profiles to obtain treetops; and segmenting the LiDAR CHM using a watershed segmentation approach marked with the MRCD treetops. This method solves the problem of false detections on CHM side surfaces produced by the traditional morphological opening canopy segment (MOCS) method. The novel MRCD delineates more accurate and quantitative multi-scale features of the CHM, and enables more accurate detection and segmentation of treetops and crowns. Besides, the MRCD method can also be extended to tree crown extraction from high-resolution optical remote sensing imagery. In an experiment on an aerial LiDAR CHM of a forest with multi-scale tree crowns, the proposed method yielded high-quality tree crown maps.
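For orientation, a much simpler baseline for the same task (treetop markers followed by crown segmentation on a CHM) can be written in a few lines of Python. This is a local-maximum plus marker-controlled watershed baseline, not the MRCD algorithm; the CHM file name, smoothing sigma, minimum peak distance, and 2 m height threshold are assumptions.

# Baseline treetop detection and crown segmentation on a canopy height model
# (CHM): local maxima as treetop markers, then marker-controlled watershed.
# Not the MRCD method; thresholds and file name are illustrative assumptions.
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

chm = np.load("chm.npy")                      # hypothetical CHM raster (metres)
smoothed = ndimage.gaussian_filter(chm, sigma=1.0)

# treetops: local height maxima above a minimum tree height
tops = peak_local_max(smoothed, min_distance=3, threshold_abs=2.0)
markers = np.zeros_like(chm, dtype=int)
markers[tuple(tops.T)] = np.arange(1, len(tops) + 1)

# crowns: watershed on the inverted CHM, restricted to vegetated pixels
crowns = watershed(-smoothed, markers, mask=chm > 2.0)
print("trees detected:", len(tops))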
Innovative architecture design for high performance organic and hybrid multi-junction solar cells
NASA Astrophysics Data System (ADS)
Li, Ning; Spyropoulos, George D.; Brabec, Christoph J.
2017-08-01
The multi-junction concept is especially attractive for the photovoltaic (PV) research community owing to its potential to overcome the Shockley-Queisser limit of single-junction solar cells. Tremendous research interest is now focused on the development of high-performance absorbers and novel device architectures for emerging PV technologies, such as organic and perovskite PVs. It has been predicted that the multi-junction concept can push organic and perovskite PV technologies towards the 20% and 30% benchmarks, respectively, pointing to a bright future for the commercialization of these emerging PV technologies. In this contribution, we will demonstrate innovative architecture designs for solution-processed, highly functional organic and hybrid multi-junction solar cells. A simple but elegant approach to fabricating organic and hybrid multi-junction solar cells will be introduced. By laminating single organic/hybrid solar cells together through an intermediate layer, the manufacturing cost and complexity of large-scale multi-junction solar cells can be significantly reduced. This smart approach to balancing the photocurrents as well as the open-circuit voltages in multi-junction solar cells will be demonstrated and discussed in detail.
MUSIC: MUlti-Scale Initial Conditions
NASA Astrophysics Data System (ADS)
Hahn, Oliver; Abel, Tom
2013-11-01
MUSIC generates multi-scale initial conditions with multiple levels of refinement for cosmological ‘zoom-in’ simulations. The code uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel, together with an adaptive multi-grid Poisson solver, to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). MUSIC achieves rms relative errors of the order of 10^-4 for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier space-induced interference ringing.
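The core ingredient of such initial-conditions generators, white noise convolved with a transfer function kernel, can be sketched at a single resolution level as follows. This Python snippet does not reproduce MUSIC's adaptive multi-grid refinement or the Lagrangian perturbation theory step; the grid size, box size, and power-law transfer function are illustrative assumptions.

# Single-level sketch: Gaussian white noise convolved with a transfer function
# (done in Fourier space) to produce a correlated overdensity field.
# MUSIC's adaptive multi-grid machinery and 2LPT are not reproduced here.
import numpy as np

n, boxsize = 128, 100.0                       # grid cells, box size (assumed units)
rng = np.random.default_rng(42)
noise = rng.standard_normal((n, n, n))        # white noise field

k = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
kmag = np.sqrt(kx**2 + ky**2 + kz**2)
kmag[0, 0, 0] = 1.0                           # avoid division by zero at k = 0

transfer = kmag ** (-1.5)                     # toy transfer function ~ sqrt(P(k))
transfer[0, 0, 0] = 0.0                       # remove the mean mode

delta = np.fft.ifftn(np.fft.fftn(noise) * transfer).real
print(delta.std())                            # rms of the resulting overdensity field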
Alsmadi, Othman M K; Abo-Hammour, Zaer S
2015-01-01
A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have a significant influence on the overall system behavior. The new approach uses genetic algorithms (GA), with the advantages of obtaining a reduced-order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady-state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing a fitness function related to the response deviation between the full- and reduced-order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, and simulation results show the potential and advantages of the new approach.
A Langevin approach to multi-scale modeling
Hirvijoki, Eero
2018-04-13
In plasmas, distribution functions often demonstrate long anisotropic tails or otherwise significant deviations from local Maxwellians. The tails, especially if they are pulled out from the bulk, pose a serious challenge for numerical simulations, as resolving both the bulk and the tail on the same mesh is often challenging. A multi-scale approach, providing evolution equations for the bulk and the tail individually, could offer a resolution in the sense that both populations could be treated on separate meshes or different reduction techniques applied to the bulk and the tail population. In this paper, we propose a multi-scale method which allows us to split a distribution function into a bulk and a tail so that both populations remain genuine, non-negative distribution functions and may carry density, momentum, and energy. The proposed method is based on the observation that the motion of an individual test particle in a plasma obeys a stochastic differential equation, also referred to as a Langevin equation. Finally, this allows us to define transition probabilities between the bulk and the tail and to provide evolution equations for both populations separately.
A Langevin approach to multi-scale modeling
NASA Astrophysics Data System (ADS)
Hirvijoki, Eero
2018-04-01
In plasmas, distribution functions often demonstrate long anisotropic tails or otherwise significant deviations from local Maxwellians. The tails, especially if they are pulled out from the bulk, pose a serious challenge for numerical simulations as resolving both the bulk and the tail on the same mesh is often challenging. A multi-scale approach, providing evolution equations for the bulk and the tail individually, could offer a resolution in the sense that both populations could be treated on separate meshes or different reduction techniques applied to the bulk and the tail population. In this letter, we propose a multi-scale method which allows us to split a distribution function into a bulk and a tail so that both populations remain genuine, non-negative distribution functions and may carry density, momentum, and energy. The proposed method is based on the observation that the motion of an individual test particle in a plasma obeys a stochastic differential equation, also referred to as a Langevin equation. This allows us to define transition probabilities between the bulk and the tail and to provide evolution equations for both populations separately.
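The Langevin viewpoint can be illustrated with a one-dimensional caricature: integrate a drag-plus-noise stochastic differential equation for test-particle velocities with Euler-Maruyama and estimate bulk-to-tail transition probabilities from a speed threshold. The drag and diffusion coefficients and the threshold below are assumptions, not the paper's kinetic formulation.

# 1-D Langevin caricature: Ornstein-Uhlenbeck-type velocity dynamics integrated
# with Euler-Maruyama; bulk<->tail transition probabilities estimated from an
# (assumed) speed threshold. Not the paper's full multi-scale method.
import numpy as np

rng = np.random.default_rng(1)
nu, D, dt, steps, npart = 1.0, 1.0, 1e-3, 10000, 5000   # drag, diffusion (assumed)
v = rng.normal(0.0, 1.0, npart)
v_thresh = 2.5                                           # bulk/tail split (assumed)

counts = np.zeros((2, 2))                                # [from, to]: 0 = bulk, 1 = tail
for _ in range(steps):
    before = (np.abs(v) > v_thresh).astype(int)
    v += -nu * v * dt + np.sqrt(2 * D * dt) * rng.standard_normal(npart)
    after = (np.abs(v) > v_thresh).astype(int)
    np.add.at(counts, (before, after), 1)

# row-normalised transition probabilities per time step
print(counts / counts.sum(axis=1, keepdims=True))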
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, L.H., E-mail: Luhui.Han@tum.de; Hu, X.Y., E-mail: Xiangyu.Hu@tum.de; Adams, N.A., E-mail: Nikolaus.Adams@tum.de
In this paper we present a scale separation approach for multi-scale modeling of free-surface and two-phase flows with complex interface evolution. By performing a stimulus-response operation on the level-set function representing the interface, separation of resolvable and non-resolvable interface scales is achieved efficiently. Uniform positive and negative shifts of the level-set function are used to determine non-resolvable interface structures. Non-resolved interface structures are separated from the resolved ones and can be treated by a mixing model or a Lagrangian-particle model in order to preserve mass. Resolved interface structures are treated by the conservative sharp-interface model. Since the proposed scale separation approach does not rely on topological information, unlike in previous work, it can be implemented in a straightforward fashion into a given level-set based interface model. A number of two- and three-dimensional numerical tests demonstrate that the proposed method is able to cope with complex interface variations accurately and significantly increases robustness against underresolved interface structures.
Mesoscale Effective Property Simulations Incorporating Conductive Binder
Trembacki, Bradley L.; Noble, David R.; Brunini, Victor E.; ...
2017-07-26
Lithium-ion battery electrodes are composed of active material particles, binder, and conductive additives that form an electrolyte-filled porous particle composite. The mesoscale (particle-scale) interplay of electrochemistry, mechanical deformation, and transport through this tortuous multi-component network dictates the performance of a battery at the cell level. Effective electrode properties connect mesoscale phenomena with computationally feasible battery-scale simulations. We utilize published tomography data to reconstruct a large subsection (1000+ particles) of an NMC333 cathode into a computational mesh and extract electrode-scale effective properties from finite element continuum-scale simulations. We present a novel method to preferentially place a composite binder phase throughout the mesostructure, a necessary approach due to the difficulty of distinguishing between non-active phases in tomographic data. We compare stress generation and effective thermal, electrical, and ionic conductivities across several binder placement approaches. Isotropic lithiation-dependent mechanical swelling of the NMC particles and the consideration of strain-dependent composite binder conductivity significantly impact the resulting effective property trends and the stresses generated. Lastly, our results suggest that composite binder location significantly affects mesoscale behavior, indicating that a binder coating on active particles is not sufficient and that more accurate approaches should be used when calculating effective properties that will inform battery-scale models in this inherently multi-scale battery simulation challenge.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McNab, W; Ezzedine, S; Detwiler, R
2007-02-26
Industrial organic solvents such as trichloroethylene (TCE) and tetrachloroethylene (PCE) constitute a principal class of groundwater contaminants. Cleanup of groundwater plume source areas associated with these compounds is problematic, in part, because the compounds often exist in the subsurface as dense nonaqueous phase liquids (DNAPLs). Ganglia (or 'blobs') of DNAPL serve as persistent sources of contaminants that are difficult to locate and remediate (e.g. Fenwick and Blunt, 1998). Current understanding of the physical and chemical processes associated with dissolution of DNAPLs in the subsurface is incomplete and yet is critical for evaluating long-term behavior of contaminant migration, groundwater cleanup, and the efficacy of source area cleanup technologies. As such, a goal of this project has been to contribute to this critical understanding by investigating the multi-phase, multi-component physics of DNAPL dissolution using state-of-the-art experimental and computational techniques. Through this research, we have explored efficient and accurate conceptual and numerical models for source area contaminant transport that can be used to better inform the modeling of source area contaminants, including those at the LLNL Superfund sites, to re-evaluate existing remediation technologies, and to inspire or develop new remediation strategies. The problem of DNAPL dissolution in natural porous media must be viewed in the context of several scales (Khachikian and Harmon, 2000), including the microscopic level at which capillary forces, viscous forces, and gravity/buoyancy forces are manifested at the scale of individual pores (Wilson and Conrad, 1984; Chatzis et al., 1988), the mesoscale where dissolution rates are strongly influenced by the local hydrodynamics, and the field-scale. Historically, the physico-chemical processes associated with DNAPL dissolution have been addressed through the use of lumped mass transfer coefficients which attempt to quantify the dissolution rate in response to local dissolved-phase concentrations distributed across the source area using a volume-averaging approach. The fundamental problem with the lumped mass transfer parameter is that its value is typically derived empirically through column-scale experiments that combine the effects of pore-scale flow, diffusion, and pore-scale geometry in a manner that does not provide a robust theoretical basis for upscaling. In our view, upscaling processes from the pore-scale to the field-scale requires new computational approaches (Held and Celia, 2001) that are directly linked to experimental studies of dissolution at the pore scale. As such, our investigation has been multi-pronged, combining theory, experiments, numerical modeling, new data analysis approaches, and a synthesis of previous studies (e.g. Glass et al., 2001; Keller et al., 2002) aimed at quantifying how the mechanisms controlling dissolution at the pore-scale control the long-term dissolution of source areas at larger scales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schunert, Sebastian; Schwen, Daniel; Ghassemi, Pedram
This work presents a multi-physics, multi-scale approach to modeling the Transient Test Reactor (TREAT) currently prepared for restart at the Idaho National Laboratory. TREAT fuel is made up of microscopic fuel grains (r ≈ 20 µm) dispersed in a graphite matrix. The novelty of this work is in coupling a binary collision Monte-Carlo (BCMC) model to the Finite Element based code Moose for solving a microscopic heat-conduction problem whose driving source is provided by the BCMC model tracking fission fragment energy deposition. This microscopic model is driven by a transient, engineering-scale neutronics model coupled to an adiabatic heating model. The macroscopic model provides local power densities and neutron energy spectra to the microscopic model. Currently, no feedback from the microscopic to the macroscopic model is considered. TREAT transient 15 is used to exemplify the capabilities of the multi-physics, multi-scale model, and it is found that the average fuel grain temperature differs from the average graphite temperature by 80 K despite the low-power transient. The large temperature difference has strong implications for the Doppler feedback a potential LEU TREAT core would see, and it underpins the need for multi-physics, multi-scale modeling of a TREAT LEU core.
Bush Encroachment Mapping for Africa - Multi-Scale Analysis with Remote Sensing and GIS
NASA Astrophysics Data System (ADS)
Graw, V. A. M.; Oldenburg, C.; Dubovyk, O.
2015-12-01
Bush encroachment describes a global problem which especially affects the savanna ecosystems of Africa. Livestock is directly affected by decreasing grassland and the spread of inedible invasive species, which defines the process of bush encroachment. For many small-scale farmers in developing countries, livestock represents a type of insurance in times of crop failure or drought. Beyond that, bush encroachment is also a problem for crop production. Studies on the mapping of bush encroachment have so far focused on small scales using high-resolution data and rarely provide information beyond the national level. Therefore, a process chain was developed using a multi-scale approach to detect bush encroachment for the whole of Africa. The bush encroachment map is calibrated with ground truth data provided by experts in Southern, Eastern and Western Africa. By up-scaling location-specific information to different levels of remote sensing imagery - 30 m with Landsat images and 250 m with MODIS data - a map is created showing potential and actual areas of bush encroachment on the African continent, thereby providing an innovative approach to map bush encroachment at the regional scale. A classification approach links location data based on GPS information from experts to the respective pixels in the remote sensing imagery. Supervised classification is used, with actual bush encroachment information providing the training samples for the up-scaling. The classification technique is based on Random Forests and regression trees, a machine learning classification approach. Working on multiple scales and with the help of field data, an innovative approach is presented that shows areas affected by bush encroachment on the African continent. This information can help to prevent further grassland decrease and identify those regions where land management strategies are of high importance to sustain livestock keeping and thereby also secure livelihoods in rural areas.
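The supervised up-scaling step described here can be sketched in a few lines of Python: GPS-referenced expert labels are matched to pixel feature vectors and a Random Forest maps the remaining imagery. The file names, column names, and number of trees are hypothetical, not the study's actual configuration.

# Sketch of supervised up-scaling with a Random Forest classifier.
# Training pixels carry expert GPS-based labels; the fitted model is then
# applied to all pixels of a tile. File and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

samples = pd.read_csv("training_pixels.csv")          # hypothetical field data
features = ["b1", "b2", "b3", "ndvi"]                 # assumed predictor columns
clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(samples[features], samples["encroached"])     # 1 = bush encroachment, 0 = not

scene = pd.read_csv("scene_pixels.csv")               # all pixels of one tile
scene["encroached"] = clf.predict(scene[features])
print(scene["encroached"].mean())                     # fraction of tile affected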
NASA Astrophysics Data System (ADS)
Pokhrel, A.; El Hannach, M.; Orfino, F. P.; Dutta, M.; Kjeang, E.
2016-10-01
X-ray computed tomography (XCT), a non-destructive technique, is proposed for three-dimensional, multi-length scale characterization of complex failure modes in fuel cell electrodes. Comparative tomography data sets are acquired for a conditioned beginning of life (BOL) and a degraded end of life (EOL) membrane electrode assembly subjected to cathode degradation by voltage cycling. Micro length scale analysis shows a five-fold increase in crack size and 57% thickness reduction in the EOL cathode catalyst layer, indicating widespread action of carbon corrosion. Complementary nano length scale analysis shows a significant reduction in porosity, increased pore size, and dramatically reduced effective diffusivity within the remaining porous structure of the catalyst layer at EOL. Collapsing of the structure is evident from the combination of thinning and reduced porosity, as uniquely determined by the multi-length scale approach. Additionally, a novel image processing based technique developed for nano scale segregation of pore, ionomer, and Pt/C dominated voxels shows an increase in ionomer volume fraction, Pt/C agglomerates, and severe carbon corrosion at the catalyst layer/membrane interface at EOL. In summary, XCT based multi-length scale analysis enables detailed information needed for comprehensive understanding of the complex failure modes observed in fuel cell electrodes.
NASA Astrophysics Data System (ADS)
Sweeney, C.; Kort, E. A.; Rella, C.; Conley, S. A.; Karion, A.; Lauvaux, T.; Frankenberg, C.
2015-12-01
Along with a boom in oil and natural gas production in the US, there has been a substantial effort to understand the true environmental impact of these operations on air and water quality, as well as the net radiation balance. This multi-institution effort, funded by both governmental and non-governmental agencies, has provided a case study for the identification and verification of emissions using a multi-scale, top-down approach. This approach leverages a combination of remote sensing to identify areas that need specific focus and airborne in-situ measurements to quantify both regional and large- to mid-size single-point emitters. Ground-based networks of mobile and stationary measurements provide the bottom tier of measurements, from which process-level information can be gathered to better understand the specific sources and temporal distribution of the emitters. The motivation for this type of approach is largely driven by recent work in the Barnett Shale region in Texas as well as the San Juan Basin in New Mexico and Colorado; these studies suggest that relatively few single-point emitters dominate the regional emissions of CH4.
NASA Astrophysics Data System (ADS)
Hussein, Rafid M.; Chandrashekhara, K.
2017-11-01
A multi-scale modeling approach is presented to simulate and validate thermo-oxidation shrinkage and cracking damage of a high-temperature polymer composite. The multi-scale approach couples transient diffusion-reaction and static structural analyses from the macro- to the micro-scale. The micro-scale shrinkage deformation and cracking damage are simulated and validated using 2D and 3D simulations. Localized shrinkage displacement boundary conditions for the micro-scale simulations are determined from the respective meso- and macro-scale simulations, conducted for a cross-ply laminate. The meso-scale geometrical domain and the micro-scale geometry and mesh are developed using the object-oriented finite element (OOF) software. The macro-scale shrinkage and weight loss are measured using unidirectional coupons and used to build the macro-shrinkage model. The cross-ply coupons are used to validate the macro-shrinkage model against shrinkage profiles acquired from scanning electron images at the cracked surface. The macro-shrinkage model deformation shows a discrepancy when the micro-scale image-based cracking is computed. The local maximum shrinkage strain is assumed to be 13 times the maximum macro-shrinkage strain of 2.5 × 10^-5, upon which the discrepancy is minimized. The microcrack damage of the composite is modeled using a static elastic analysis with the extended finite element method and cohesive surfaces, considering the spatial evolution of the modulus. The 3D shrinkage displacements are fed to the model using node-wise boundary/domain conditions of the respective oxidized region. The simulated microcrack length, meander, and opening closely match the crack in the area of interest in the scanning electron images.
Tuncer, Necibe; Gulbudak, Hayriye; Cannataro, Vincent L; Martcheva, Maia
2016-09-01
In this article, we discuss the structural and practical identifiability of a nested immuno-epidemiological model of arbovirus diseases, where host-vector transmission rate, host recovery, and disease-induced death rates are governed by the within-host immune system. We incorporate the newest ideas and the most up-to-date features of numerical methods to fit multi-scale models to multi-scale data. For an immunological model, we use Rift Valley Fever Virus (RVFV) time-series data obtained from livestock under laboratory experiments, and for an epidemiological model we incorporate a human compartment to the nested model and use the number of human RVFV cases reported by the CDC during the 2006-2007 Kenya outbreak. We show that the immunological model is not structurally identifiable for the measurements of time-series viremia concentrations in the host. Thus, we study the non-dimensionalized and scaled versions of the immunological model and prove that both are structurally globally identifiable. After fixing estimated parameter values for the immunological model derived from the scaled model, we develop a numerical method to fit observable RVFV epidemiological data to the nested model for the remaining parameter values of the multi-scale system. For the given (CDC) data set, Monte Carlo simulations indicate that only three parameters of the epidemiological model are practically identifiable when the immune model parameters are fixed. Alternatively, we fit the multi-scale data to the multi-scale model simultaneously. Monte Carlo simulations for the simultaneous fitting suggest that the parameters of the immunological model and the parameters of the immuno-epidemiological model are practically identifiable. We suggest that analytic approaches for studying the structural identifiability of nested models are a necessity, so that identifiable parameter combinations can be derived to reparameterize the nested model to obtain an identifiable one. This is a crucial step in developing multi-scale models which explain multi-scale data.
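The Monte Carlo practical-identifiability procedure referred to here can be illustrated on a toy problem: simulate noisy data from known parameters, refit many times, and report the average relative estimation error (ARE) per parameter. The two-parameter decay model, noise level, and number of runs below are assumptions standing in for the nested RVFV model.

# Generic Monte Carlo practical-identifiability check (sketch).
# Small ARE values suggest a parameter is practically identifiable at the
# chosen noise level; the toy model and settings are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def model(t, v0, delta):                      # toy within-host decay curve
    return v0 * np.exp(-delta * t)

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 25)
true = np.array([1e4, 0.7])
runs, noise_level = 500, 0.1

estimates = []
for _ in range(runs):
    y = model(t, *true) * (1 + noise_level * rng.standard_normal(t.size))
    popt, _ = curve_fit(model, t, y, p0=[5e3, 0.3])
    estimates.append(popt)

are = 100 * np.mean(np.abs(np.array(estimates) - true) / true, axis=0)
print("average relative error (%):", are)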
SChloro: directing Viridiplantae proteins to six chloroplastic sub-compartments.
Savojardo, Castrense; Martelli, Pier Luigi; Fariselli, Piero; Casadio, Rita
2017-02-01
Chloroplasts are organelles found in plants and involved in several important cell processes. Similarly to other compartments in the cell, chloroplasts have an internal structure comprising several sub-compartments, where different proteins are targeted to perform their functions. Given the relation between protein function and localization, the availability of effective computational tools to predict protein sub-organelle localization is crucial for large-scale functional studies. In this paper we present SChloro, a novel machine-learning approach to predict protein sub-chloroplastic localization, based on targeting signal detection and membrane protein information. The proposed approach performs multi-label predictions discriminating six chloroplastic sub-compartments that include inner membrane, outer membrane, stroma, thylakoid lumen, plastoglobule and thylakoid membrane. In comparative benchmarks, the proposed method outperforms current state-of-the-art methods in both single- and multi-compartment predictions, with an overall multi-label accuracy of 74%. The results demonstrate the relevance of the approach, which is a good candidate for integration into more general large-scale annotation pipelines for protein subcellular localization. The method is available as a web server at http://schloro.biocomp.unibo.it. Contact: gigi@biocomp.unibo.it
An automated approach for extracting Barrier Island morphology from digital elevation models
NASA Astrophysics Data System (ADS)
Wernette, Phillipe; Houser, Chris; Bishop, Michael P.
2016-06-01
The response and recovery of a barrier island to extreme storms depend on the elevation of the dune base and crest, both of which can vary considerably alongshore and through time. Quantifying the response to and recovery from storms requires that we first identify and differentiate the dune(s) from the beach and back-barrier, which in turn depends on accurate identification and delineation of the dune toe, crest and heel. The purpose of this paper is to introduce a multi-scale automated approach for extracting beach, dune (dune toe, dune crest and dune heel), and barrier island morphology. The automated approach introduced here extracts the shoreline and back-barrier shoreline based on elevation thresholds, and extracts the dune toe, dune crest and dune heel based on the average relative relief (RR) across multiple spatial scales of analysis. The multi-scale automated RR approach to extracting the dune toe, dune crest, and dune heel is more objective than traditional approaches because every pixel is analyzed across multiple computational scales and the identification of features is based on the calculated RR values. The RR approach outperformed contemporary approaches and represents a fast, objective means to define important beach and dune features for predicting barrier island response to storms. The RR method also does not require that the dune toe, crest, or heel be spatially continuous, which is important because dune morphology is naturally variable alongshore.
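The multi-scale relative relief computation itself is compact: for each window size, RR = (z - local minimum) / (local maximum - local minimum), with the per-pixel values then averaged across scales. The following Python sketch assumes a DEM file name and a set of window sizes that are illustrative, not the paper's settings.

# Multi-scale relative relief (RR) sketch: compute RR per window size with
# moving min/max filters, then average across scales. File name and window
# sizes are illustrative assumptions.
import numpy as np
from scipy import ndimage

dem = np.load("island_dem.npy")               # hypothetical barrier-island DEM
windows = [3, 5, 9, 17]                       # assumed computational scales (pixels)

rr_stack = []
for w in windows:
    zmin = ndimage.minimum_filter(dem, size=w)
    zmax = ndimage.maximum_filter(dem, size=w)
    rng_ = np.where(zmax > zmin, zmax - zmin, 1.0)   # guard against flat windows
    rr_stack.append((dem - zmin) / rng_)

mean_rr = np.mean(rr_stack, axis=0)           # average relative relief per pixel
# high mean RR flags dune crests; low RR flags beach and back-barrier flats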
NASA Astrophysics Data System (ADS)
Villegas, J. C.; Salazar, J. F.; Arias, P. A.; León, J. D.
2017-12-01
Land cover transformation is currently one of the most important challenges in tropical South America. These transformations occur both because of climate-related ecological perturbations and in response to ongoing socio-economic processes. A fundamental difference between those two drivers is the spatial and temporal scale at which they operate. However, when considered in a larger context, both drivers affect the ability of ecosystems to provide fundamental services to society. In this work, we use a multi-scale approach to identify key mechanisms through which land cover transformation significantly affects ecological, hydrological and ecoclimatological dynamics, potentially leading to the loss of societally critical regulation services. We propose a suite of examples spanning multiple spatial and temporal scales that illustrate the effects of land cover transformations on ecological, hydrological, biogeochemical and climatic functions in tropical South America. These examples highlight important management challenges related to global-change effects, as well as the need to consider the feedbacks and interactions between multi-scale processes.
Scale invariant texture descriptors for classifying celiac disease
Hegenbart, Sebastian; Uhl, Andreas; Vécsei, Andreas; Wimmer, Georg
2013-01-01
Scale invariant texture recognition methods are applied for the computer-assisted diagnosis of celiac disease. In particular, emphasis is given to techniques enhancing the scale invariance of multi-scale and multi-orientation wavelet transforms and to methods based on fractal analysis. After fine-tuning to specific properties of our celiac disease imagery database, which consists of endoscopic images of the duodenum, some scale invariant (and often even viewpoint invariant) methods provide classification results improving the current state of the art. However, not every investigated scale invariant method can be applied successfully to our dataset. Therefore, the scale invariance of the employed approaches is explicitly assessed, and it is found that many of the analyzed methods are not as scale invariant as they theoretically should be. The results imply that scale invariance is not a key feature required for successful classification of our celiac disease dataset. PMID:23481171
Detection of crossover time scales in multifractal detrended fluctuation analysis
NASA Astrophysics Data System (ADS)
Ge, Erjia; Leung, Yee
2013-04-01
Fractal analysis is employed in this paper as a scale-based method for identifying the scaling behavior of time series. Many spatial and temporal processes exhibiting complex multi(mono)-scaling behaviors are fractals. One of the important concepts in fractals is the crossover time scale(s) that separates distinct regimes having different fractal scaling behaviors. A common method is multifractal detrended fluctuation analysis (MF-DFA). The detection of crossover time scale(s) is, however, relatively subjective, since it has been made without rigorous statistical procedures and has generally been determined by eyeballing or subjective observation. Crossover time scales determined in this way may be spurious and problematic and may not reflect the genuine underlying scaling behavior of a time series. The purpose of this paper is to propose a statistical procedure to model complex fractal scaling behaviors and reliably identify the crossover time scales under MF-DFA. The scaling-identification regression model, grounded on a solid statistical foundation, is first proposed to describe multi-scaling behaviors of fractals. Through the regression analysis and statistical inference, we can (1) identify crossover time scales that cannot be detected by eyeballing, (2) determine the number and locations of the genuine crossover time scales, (3) give confidence intervals for the crossover time scales, and (4) establish a statistically significant regression model depicting the underlying scaling behavior of a time series. To substantiate our argument, the regression model is applied to analyze the multi-scaling behaviors of avian-influenza outbreaks, water consumption, daily mean temperature, and rainfall in Hong Kong. Through the proposed model, we can gain a deeper understanding of fractals in general and a statistical approach to identifying multi-scaling behavior under MF-DFA in particular.
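A stripped-down version of the idea, detecting a single crossover in a log-log fluctuation curve with a continuous two-segment least-squares fit, can be sketched in Python. This is not the paper's full scaling-identification regression model (no inference or confidence intervals); the synthetic input with slopes 0.5 and 1.0 is an assumption for demonstration.

# Detect one crossover scale in log F(s) vs log s by scanning breakpoints of a
# continuous piecewise-linear fit and keeping the one with the smallest RSS.
import numpy as np

def find_crossover(log_s, log_f):
    best = (np.inf, None)
    for i in range(2, len(log_s) - 2):               # candidate breakpoints
        b = log_s[i]
        # design matrix: intercept, slope, and slope change activated beyond b
        X = np.column_stack([np.ones_like(log_s), log_s,
                             np.maximum(log_s - b, 0.0)])
        coef, *_ = np.linalg.lstsq(X, log_f, rcond=None)
        rss = np.sum((log_f - X @ coef) ** 2)
        if rss < best[0]:
            best = (rss, b)
    return best[1]                                   # log of the crossover scale

# synthetic two-regime scaling: slope 0.5 below the crossover, 1.0 above it
log_s = np.linspace(1, 4, 30)
log_f = np.where(log_s < 2.5, 0.5 * log_s, 1.0 * log_s - 1.25)
log_f += 0.02 * np.random.default_rng(0).standard_normal(log_s.size)
print(find_crossover(log_s, log_f))                  # should be close to 2.5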
Multi-scale Material Appearance
NASA Astrophysics Data System (ADS)
Wu, Hongzhi
Modeling and rendering the appearance of materials is important for a diverse range of computer graphics applications - from automobile design to movies and cultural heritage. The appearance of materials varies considerably at different scales, posing significant challenges due to the sheer complexity of the data, as well as the need to maintain inter-scale consistency constraints. This thesis presents a series of studies around the modeling, rendering and editing of multi-scale material appearance. To efficiently render material appearance at multiple scales, we develop an object-space precomputed adaptive sampling method, which precomputes a hierarchy of view-independent points that preserve multi-level appearance. To support bi-scale material appearance design, we propose a novel reflectance filtering algorithm, which rapidly computes the large-scale appearance from small-scale details by exploiting the low-rank structures of Bidirectional Visible Normal Distribution Functions and pre-rotated Bidirectional Reflectance Distribution Functions in the matrix formulation of the rendering algorithm. This approach can guide the physical realization of appearance, as well as the modeling of real-world materials using very sparse measurements. Finally, we present a bi-scale-inspired high-quality general representation for material appearance described by Bidirectional Texture Functions. Our representation is at once compact, easily editable, and amenable to efficient rendering.
NASA Astrophysics Data System (ADS)
Dündar, Furkan Semih
2018-01-01
We provide a theory of n-scales, previously called n-dimensional time scales. In previous approaches to the theory of time scales, multi-dimensional scales were taken to be the product space of two time scales [1, 2]. n-scales make the mathematical structure more flexible and better suited to real-world applications in physics and related fields. Here we define an n-scale as an arbitrary closed subset of ℝ^n. Modified forward and backward jump operators, Δ-derivatives and Δ-integrals on n-scales are defined.
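The basic time-scale calculus objects generalized here can be illustrated on a one-dimensional example: the forward jump operator sigma and the Delta-derivative, which reduces to an ordinary difference quotient on isolated points. The finite grid below is an assumed toy time scale, not an n-scale from the paper.

# Toy 1-D illustration of time-scale calculus: forward jump operator and
# Delta-derivative on an assumed finite time scale (closed subset of R).
import numpy as np

T = np.array([0.0, 0.1, 0.2, 0.5, 1.0, 2.0])       # assumed (finite) time scale

def sigma(t):                                      # forward jump operator
    later = T[T > t]
    return later.min() if later.size else t

def delta_derivative(f, t):                        # f Delta-differentiated at t
    s = sigma(t)
    if s == t:                                     # right-dense point: classical limit
        h = 1e-6
        return (f(t + h) - f(t)) / h
    return (f(s) - f(t)) / (s - t)

print(delta_derivative(lambda t: t ** 2, 0.5))     # (1.0**2 - 0.5**2) / 0.5 = 1.5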
NASA Astrophysics Data System (ADS)
Huang, Yanhui; Zhao, He; Wang, Yixing; Ratcliff, Tyree; Breneman, Curt; Brinson, L. Catherine; Chen, Wei; Schadler, Linda S.
2017-08-01
It has been found that doping dielectric polymers with a small amount of nanofiller or molecular additive can stabilize the material under a high field and lead to increased breakdown strength and lifetime. Choosing appropriate fillers is critical to optimizing the material performance, but current research largely relies on experimental trial and error. The employment of computer simulations for nanodielectric design is rarely reported. In this work, we propose a multi-scale modeling approach that employs ab initio, Monte Carlo, and continuum scales to predict the breakdown strength and lifetime of polymer nanocomposites based on the charge trapping effect of the nanofillers. The charge transfer, charge energy relaxation, and space charge effects are modeled in respective hierarchical scales by distinctive simulation techniques, and these models are connected together for high fidelity and robustness. The preliminary results show good agreement with the experimental data, suggesting its promise for use in the computer aided material design of high performance dielectrics.
Modeling small-scale dairy farms in central Mexico using multi-criteria programming.
Val-Arreola, D; Kebreab, E; France, J
2006-05-01
Milk supply from Mexican dairy farms does not meet demand and small-scale farms can contribute toward closing the gap. Two multi-criteria programming techniques, goal programming and compromise programming, were used in a study of small-scale dairy farms in central Mexico. To build the goal and compromise programming models, 4 ordinary linear programming models were also developed, which had objective functions to maximize metabolizable energy for milk production, to maximize margin of income over feed costs, to maximize metabolizable protein for milk production, and to minimize purchased feedstuffs. Neither multi-criteria approach was significantly better than the other; however, by applying both models it was possible to perform a more comprehensive analysis of these small-scale dairy systems. The multi-criteria programming models affirm findings from previous work and suggest that a forage strategy based on alfalfa, ryegrass, and corn silage would meet nutrient requirements of the herd. Both models suggested that there is an economic advantage in rescheduling the calving season to the second and third calendar quarters to better synchronize higher demand for nutrients with the period of high forage availability.
Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D
2015-05-08
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
NASA Astrophysics Data System (ADS)
Pathiraja, S. D.; van Leeuwen, P. J.
2017-12-01
Model uncertainty quantification remains one of the central challenges of effective data assimilation (DA) in complex, partially observed non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid scale processes. Such approaches generally require some knowledge of the true sub-grid scale process or rely on full observations of the larger-scale resolved process. We present a methodology for estimating the statistics of sub-grid scale processes using only partial observations of the resolved process. It finds model error realisations over a training period by minimizing their conditional variance, constrained by the available observations. A special feature is that these realisations are binned conditional on the previous model state during the minimization, allowing the recovery of complex error structures. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz '96 model. We consider different parameterizations of the model, with both small and large time-scale separations between slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and are shown to provide improved analyses and forecasts.
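The two-scale Lorenz '96 system used as the testbed here is standard, so a reference implementation is easy to write down. The parameter values (F, h, b, c), the integrator, and the step size in the following Python sketch are common conventional choices, assumed here since the abstract does not list the exact configuration.

# Two-scale Lorenz '96: K slow variables X_k, each coupled to J fast variables
# Y_{j,k}; fixed-step RK4 integration. Parameter values are assumptions.
import numpy as np

K, J, F, h, b, c = 8, 32, 20.0, 1.0, 10.0, 10.0

def tendencies(X, Y):
    dX = (np.roll(X, 1) * (np.roll(X, -1) - np.roll(X, 2))
          - X + F - (h * c / b) * Y.reshape(K, J).sum(axis=1))
    dY = (c * b * np.roll(Y, -1) * (np.roll(Y, 1) - np.roll(Y, -2))
          - c * Y + (h * c / b) * np.repeat(X, J))
    return dX, dY

def rk4_step(X, Y, dt):
    k1 = tendencies(X, Y)
    k2 = tendencies(X + 0.5 * dt * k1[0], Y + 0.5 * dt * k1[1])
    k3 = tendencies(X + 0.5 * dt * k2[0], Y + 0.5 * dt * k2[1])
    k4 = tendencies(X + dt * k3[0], Y + dt * k3[1])
    return (X + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            Y + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

X = F + 0.1 * np.random.default_rng(0).standard_normal(K)
Y = 0.1 * np.random.default_rng(1).standard_normal(K * J)
for _ in range(5000):
    X, Y = rk4_step(X, Y, 0.001)
print(X)   # slow (resolved) variables after spin-up; Y holds the fast scales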
Local variance for multi-scale analysis in geomorphometry.
Drăguţ, Lucian; Eisank, Clemens; Strasser, Thomas
2011-07-15
Increasing availability of high resolution Digital Elevation Models (DEMs) is leading to a paradigm shift regarding scale issues in geomorphometry, prompting new solutions to cope with multi-scale analysis and detection of characteristic scales. We tested the suitability of the local variance (LV) method, originally developed for image analysis, for multi-scale analysis in geomorphometry. The method consists of: 1) up-scaling land-surface parameters derived from a DEM; 2) calculating LV as the average standard deviation (SD) within a 3 × 3 moving window for each scale level; 3) calculating the rate of change of LV (ROC-LV) from one level to another, and 4) plotting values so obtained against scale levels. We interpreted peaks in the ROC-LV graphs as markers of scale levels where cells or segments match types of pattern elements characterized by (relatively) equal degrees of homogeneity. The proposed method has been applied to LiDAR DEMs in two test areas different in terms of roughness: low relief and mountainous, respectively. For each test area, scale levels for slope gradient, plan, and profile curvatures were produced at constant increments with either resampling (cell-based) or image segmentation (object-based). Visual assessment revealed homogeneous areas that convincingly associate into patterns of land-surface parameters well differentiated across scales. We found that the LV method performed better on scale levels generated through segmentation as compared to up-scaling through resampling. The results indicate that coupling multi-scale pattern analysis with delineation of morphometric primitives is possible. This approach could be further used for developing hierarchical classifications of landform elements.
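The cell-based variant of this workflow is compact enough to sketch directly: up-scale one land-surface parameter by block averaging, compute the mean 3 × 3 standard deviation (LV) at each level, then the rate of change of LV between consecutive levels (ROC-LV). The slope raster file name and the range of scale levels in the Python sketch below are assumptions; the paper's object-based (segmentation) variant is not reproduced.

# Local variance (LV) and ROC-LV across scale levels for one land-surface
# parameter, using block-average resampling. Peaks in ROC-LV mark candidate
# characteristic scales. Input file and level range are assumptions.
import numpy as np
from scipy import ndimage

def local_variance(grid):
    mean = ndimage.uniform_filter(grid, size=3)
    sq_mean = ndimage.uniform_filter(grid ** 2, size=3)
    return np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0)).mean()

def upscale(grid, factor):                     # block-average resampling
    h = (grid.shape[0] // factor) * factor
    w = (grid.shape[1] // factor) * factor
    return grid[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

slope = np.load("slope.npy")                   # hypothetical slope-gradient raster
levels = range(1, 21)
lv = [local_variance(upscale(slope, f)) for f in levels]
roc_lv = 100 * np.diff(lv) / np.array(lv[:-1])  # percent change between levels
print(np.argmax(roc_lv) + 2)                   # level with the strongest LV break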
NASA Astrophysics Data System (ADS)
Cartier, V.; Claret, C.; Garnier, R.; Fayolle, S.; Franquet, E.
2010-03-01
The complexity of the relationships between environmental factors and organisms can be revealed by sampling designs which consider the contribution to variability of different temporal and spatial scales, compared to total variability. From a management perspective, a multi-scale approach can lead to time-saving. Identifying environmental patterns that help maintain patchy distribution is fundamental in studying coastal lagoons, transition zones between continental and marine waters characterised by great environmental variability on spatial and temporal scales. They often present organic enrichment inducing decreased species richness and increased densities of opportunist species like Chironomus salinarius, a common species that tends to swarm and thus constitutes a nuisance for human populations. This species is dominant in the Bolmon lagoon, a French Mediterranean coastal lagoon under eutrophication. Our objective was to quantify variability due to both spatial and temporal scales and identify the contribution of different environmental factors to this variability. The population of C. salinarius was sampled from June 2007 to June 2008 every two months at 12 sites located in two areas of the Bolmon lagoon, at two different depths, with three sites per area-depth combination. Environmental factors (temperature, dissolved oxygen both in sediment and under water surface, sediment organic matter content and grain size) and microbial activities (i.e. hydrolase activities) were also considered as explanatory factors of chironomid densities and distribution. ANOVA analysis reveals significant spatial differences regarding the distribution of chironomid larvae for the area and the depth scales and their interaction. The spatial effect is also revealed for dissolved oxygen (water), salinity and fine particles (area scale), and for water column depth. All factors but water column depth show a temporal effect. Spearman's correlations highlight the seasonal effect (temperature, dissolved oxygen in sediment and water) as well as the effect of microbial activities on chironomid larvae. Our results show that a multi-scale approach identifies patchy distribution, even when there is relative environmental homogeneity.
NASA Astrophysics Data System (ADS)
Aouabdi, Salim; Taibi, Mahmoud; Bouras, Slimane; Boutasseta, Nadir
2017-06-01
This paper describes an approach for identifying localized gear tooth defects, such as pitting, using phase currents measured from an induction machine driving the gearbox. A new anomaly detection tool is introduced, based on the multi-scale entropy (MSE) algorithm SampEn, which allows correlations in signals to be identified over multiple time scales. The approach combines motor current signature analysis (MCSA) with principal component analysis (PCA) and the comparison of observed values with those predicted from a model built using nominally healthy data. The simulation results show that the proposed method is able to detect gear tooth pitting in current signals.
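The multi-scale entropy computation itself follows a standard recipe: coarse-grain the signal at increasing scale factors and compute sample entropy at each scale. The Python sketch below uses a simple SampEn implementation; the signal file, embedding dimension m, tolerance r, and analysis length are assumptions, and a shift of the entropy curve at some scales can (plausibly) flag the periodic signature of a tooth defect.

# Multi-scale entropy (MSE) sketch: coarse-grain the stator current at scale
# factors 1..10 and compute sample entropy (SampEn) at each scale.
# Signal file, m, r and the 1500-sample analysis window are assumptions.
import numpy as np

def sampen(x, m=2, r=0.2):
    r *= np.std(x)
    def count(mm):                                   # matched template pairs of length mm
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=-1)
        return ((d <= r).sum() - len(templ)) / 2     # exclude self-matches
    b, a = count(m), count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def coarse_grain(x, tau):                            # non-overlapping block means
    n = (len(x) // tau) * tau
    return x[:n].reshape(-1, tau).mean(axis=1)

current = np.load("phase_current.npy")[:1500]        # hypothetical stator current
mse = [sampen(coarse_grain(current, tau)) for tau in range(1, 11)]
print(mse)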
NASA Astrophysics Data System (ADS)
Ravi, Sathish Kumar; Gawad, Jerzy; Seefeldt, Marc; Van Bael, Albert; Roose, Dirk
2017-10-01
A numerical multi-scale model is being developed to predict the anisotropic macroscopic material response of multi-phase steel. The embedded microstructure is given by a meso-scale Representative Volume Element (RVE), which holds the most relevant features like phase distribution, grain orientation, morphology etc., in sufficient detail to describe the multi-phase behavior of the material. A Finite Element (FE) mesh of the RVE is constructed using statistical information from individual phases such as grain size distribution and ODF. The material response of the RVE is obtained for selected loading/deformation modes through numerical FE simulations in Abaqus. For the elasto-plastic response of the individual grains, single crystal plasticity based plastic potential functions are proposed as Abaqus material definitions. The plastic potential functions are derived using the Facet method for individual phases in the microstructure at the level of single grains. The proposed method is a new modeling framework and the results presented in terms of macroscopic flow curves are based on the building blocks of the approach, while the model would eventually facilitate the construction of an anisotropic yield locus of the underlying multi-phase microstructure derived from a crystal plasticity based framework.
NASA Astrophysics Data System (ADS)
Field, C. B.
2012-12-01
Modeling climate change impacts is challenging for a variety of reasons. Some of these are related to causation. A weather or climate event is rarely the sole cause of an impact, and, for many impacts, social, economic, cultural, or ecological factors may play a larger role than climate. Other challenges are related to outcomes. Consequences of an event are often most severe when several kinds of responses interact, typically in unexpected ways. Many kinds of consequences are difficult to quantify, especially when they include a mix of market, cultural, personal, and ecological values. In addition, scale can be tremendously important. Modest impacts over large areas present very different challenges than severe but very local impacts. Finally, impacts may respond non-linearly to forcing, with behavior that changes qualitatively at one or more thresholds and with unexpected outcomes in extremes. Modeling these potentially complex interactions between drivers and impacts presents one set of challenges. Evaluating the models presents another. At least five kinds of approaches can contribute to the evaluation of impact models designed to provide insights in multi-driver, multi-responder, multi-scale, and extreme-driven contexts, even though none of these approaches is a complete or "silver-bullet" solution. The starting point for much of the evaluation in this space is case studies. Case studies can help illustrate links between processes and scales. They can highlight factors that amplify or suppress sensitivity to climate drivers, and they can suggest the consequences of intervening at different points. While case studies rarely provide concrete evidence about mechanisms, they can help move a mechanistic case from circumstantial to sound. Novel approaches to data collection, including crowd sourcing, can potentially provide tools and the number of relevant examples to develop case studies as statistically robust data sources. A critical condition for progress in this area is the ability to utilize data of uneven quality and standards. Novel approaches to meta-analysis provide other options for taking advantage of diverse case studies. Techniques for summarizing responses across impacts, drivers, and scales can play a huge role in increasing the value of information from case studies. In some cases, expert elicitation may provide alternatives for identifying mechanisms or for interpreting multi-factor drivers or responses. Especially when designed to focus on a well-defined set of observations, a sophisticated elicitation can establish formal confidence limits on responses that are otherwise difficult to constrain. A final possible approach involves a focus on the mechanisms contributing to an impact, rather than the impact itself. Approaches based on quantified mechanisms are especially appealing in the context of models where the number of interactions makes it difficult to intuitively understand the chain of connections from cause to effect, when actors differ in goals or sensitivities, or when scale affects parts of the system differently. With all of these approaches, useful evidence may not conform to traditional levels of statistical confidence. Some of the biggest challenges in taking advantage of the potential tools will involve defining what constitutes a meaningful evaluation.
NASA Astrophysics Data System (ADS)
Zhang, Z.; Tian, F.; Hu, H.; Yang, P.
2014-03-01
A multi-scale, multi-technique study was conducted to measure evapotranspiration and its components in a cotton field under mulched drip irrigation conditions in northwestern China. Three measurement techniques at different scales were used: a photosynthesis system (leaf scale), sap flow (plant scale), and eddy covariance (field scale). The experiment was conducted from July to September 2012. To upscale the evapotranspiration from the leaf to plant scale, an approach that incorporated the canopy structure and the relationships between sunlit and shaded leaves was proposed. To upscale the evapotranspiration from the plant to field scale, an approach based on the transpiration per unit leaf area was adopted and modified to incorporate the temporal variability in the relationship between leaf area and stem diameter. At the plant scale, the estimate of the transpiration based on the photosynthesis system with upscaling was slightly higher (18%) than that obtained by sap flow. At the field scale, the estimates of transpiration derived from sap flow with upscaling and eddy covariance showed reasonable consistency during the cotton's open-boll growth stage, during which soil evaporation can be neglected. The results indicate that the proposed upscaling approaches are reasonable and valid. Based on the measurements and upscaling approaches, evapotranspiration components were analyzed for a cotton field under mulched drip irrigation. During the two analyzed sub-periods in July and August, evapotranspiration rates were 3.94 and 4.53 mm day-1, respectively. The fraction of transpiration to evapotranspiration reached 87.1% before drip irrigation and 82.3% after irrigation. The high fraction of transpiration over evapotranspiration was principally due to the mulched film above the drip pipe, low soil water content in the inter-film zone, well-closed canopy, and high water requirement of the crop.
NASA Astrophysics Data System (ADS)
Zhang, Z.; Tian, F.; Hu, H. C.; Hu, H. P.
2013-11-01
A multi-scale, multi-technique study was conducted to measure evapotranspiration and its components in a cotton field under mulched drip irrigation conditions in northwestern China. Three measurement techniques at different scales were used: a photosynthesis system (leaf scale), sap flow (plant scale), and eddy covariance (field scale). The experiment was conducted from July to September 2012. To upscale the evapotranspiration from the leaf to the plant scale, an approach that incorporated the canopy structure and the relationships between sunlit and shaded leaves was proposed. To upscale the evapotranspiration from the plant to the field scale, an approach based on the transpiration per unit leaf area was adopted and modified to incorporate the temporal variability in the relationship between leaf area and stem diameter. At the plant scale, the estimate of the transpiration based on the photosynthesis system with upscaling was slightly higher (18%) than that obtained by sap flow. At the field scale, the estimates of transpiration derived from sap flow with upscaling and eddy covariance showed reasonable consistency during the cotton open-boll growth stage, when soil evaporation can be neglected. The results indicate that the upscaling approaches are reasonable and valid. Based on the measurements and upscaling approaches, evapotranspiration components were analyzed under mulched drip irrigation. During the two analyzed sub-periods in July and August, evapotranspiration rates were 3.94 and 4.53 mm day-1, respectively. The fraction of transpiration to evapotranspiration reached 87.1% before drip irrigation and 82.3% after irrigation. The high fraction of transpiration over evapotranspiration was principally due to the mulched film above the drip pipe, low soil water content in the inter-film zone, well-closed canopy, and high water requirement of the crop.
A multi-scale spatial approach to address environmental effects of small hydropower development.
McManamay, Ryan A; Samu, Nicole; Kao, Shih-Chieh; Bevelhimer, Mark S; Hetrick, Shelaine C
2015-01-01
Hydropower development continues to grow worldwide in developed and developing countries. While the ecological and physical responses to dam construction have been well documented, translating this information into planning for hydropower development is extremely difficult. Very few studies have conducted environmental assessments to guide site-specific or widespread hydropower development. Herein, we propose a spatial approach for estimating the environmental effects of hydropower development at multiple scales, as opposed to individual site-by-site assessments (e.g., environmental impact assessment). Because the complex, process-driven effects of future hydropower development may be uncertain or, at best, limited by available information, we invested considerable effort in describing novel approaches to represent environmental concerns using spatial data and in developing the spatial footprint of hydropower infrastructure. We then use two case studies in the US, one at the scale of the conterminous US and another within two adjoining river basins, to examine how environmental concerns can be identified and related to areas of varying energy capacity. We use combinations of reserve-design planning and multi-metric ranking to visualize tradeoffs among environmental concerns and potential energy capacity. Spatial frameworks, like the one presented, are not meant to replace more in-depth environmental assessments, but to identify information gaps and measure the sustainability of multi-development scenarios so as to inform policy decisions at the basin or national level. Most importantly, the approach should foster discussions among environmental scientists and stakeholders regarding solutions to optimize energy development and environmental sustainability.
Su, Xianli; Wei, Ping; Li, Han; Liu, Wei; Yan, Yonggao; Li, Peng; Su, Chuqi; Xie, Changjun; Zhao, Wenyu; Zhai, Pengcheng; Zhang, Qingjie; Tang, Xinfeng; Uher, Ctirad
2017-05-01
Considering that only about one third of the world's energy consumption is effectively utilized for functional uses, while the remainder is dissipated as waste heat, thermoelectric (TE) materials, which offer a direct and clean thermal-to-electric conversion pathway, have generated tremendous worldwide interest. The last two decades have witnessed a remarkable development in TE materials. This Review summarizes the efforts devoted to the study of non-equilibrium synthesis of TE materials with multi-scale structures, their transport behavior, and areas of applications. Studies that work towards the ultimate goal of developing highly efficient TE materials possessing multi-scale architectures are highlighted, encompassing the optimization of TE performance via engineering of structures with different dimensional aspects spanning from the atomic and molecular scales, to nanometer sizes, and to the mesoscale. In consideration of the practical applications of high-performance TE materials, the non-equilibrium approaches offer fast and controllable fabrication of multi-scale microstructures, and their scale-up to industrial-size manufacturing is emphasized here. Finally, the designs of two integrated power-generating TE systems are described - a solar thermoelectric-photovoltaic hybrid system and a vehicle waste heat harvesting system - that represent perhaps the most important applications of thermoelectricity in the energy conversion area. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Multi-level discriminative dictionary learning with application to large scale image classification.
Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua
2015-10-01
The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of the task (such as discrimination for classification tasks) into dictionary learning is effective for improving accuracy. However, traditional supervised dictionary learning methods suffer from high computational complexity when dealing with a large number of categories, making them less satisfactory in large scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learned to capture information at different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large scale image classification.
A Multi-Scale Algorithm for Graffito Advertisement Detection from Images of Real Estate
NASA Astrophysics Data System (ADS)
Yang, Jun; Zhu, Shi-Jiao
There is a significant need to automatically detect and extract graffito advertisements embedded in housing images. However, separating the advertisement region is difficult because housing images generally have complex backgrounds. In this paper, a detection algorithm that uses multi-scale Gabor filters to identify graffito regions is proposed. Firstly, multi-scale Gabor filters with different orientations are applied to the housing images; the approach then uses these frequency data to find likely graffito regions based on the relationships between the different channels, exploiting the complementary responses of the different filters to solve the detection problem with low computational effort. Lastly, the method is tested on several real estate images in which graffito advertisements are embedded, to verify its robustness and efficiency. The experiments demonstrate that graffito regions can be detected reliably.
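As an illustration of the kind of multi-scale, multi-orientation Gabor filter bank described above (a generic sketch, not the authors' implementation; the input file name and the threshold rule are hypothetical), using scikit-image:

```python
import numpy as np
from skimage import io, color
from skimage.filters import gabor

def gabor_energy_map(gray, frequencies=(0.1, 0.2, 0.4),
                     thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Average Gabor filter energy across scales (frequencies) and orientations.

    Regions with consistently high energy across channels are candidate
    text/graffito regions; the combination rule here is purely illustrative.
    """
    energy = np.zeros_like(gray, dtype=float)
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(gray, frequency=f, theta=t)
            energy += np.sqrt(real ** 2 + imag ** 2)
    return energy / (len(frequencies) * len(thetas))

image = color.rgb2gray(io.imread("house.jpg"))        # hypothetical input image
energy = gabor_energy_map(image)
mask = energy > energy.mean() + 2 * energy.std()      # crude candidate-region mask
```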
EPA RESEARCH HIGHLIGHTS -- MODELS-3/CMAQ OFFERS COMPREHENSIVE APPROACH TO AIR QUALITY MODELING
Regional and global coordinated efforts are needed to address air quality problems that are growing in complexity and scope. Models-3 CMAQ contains a community multi-scale air quality modeling system for simulating urban to regional scale pollution problems relating to troposphe...
NASA Astrophysics Data System (ADS)
Wu, Min
2016-07-01
The development of anti-fibrotic therapies for a diversity of diseases has recently become increasingly urgent, for example in pulmonary, renal and liver fibrosis [1,2], as well as in malignant tumor growth [3]. As reviewed by Ben Amar and Bianca [4], various theoretical, experimental and in-silico models have been developed to understand the fibrosis process, and their implications for therapeutic strategies have been frequently demonstrated (e.g., [5-7]). In [4], these models are analyzed and sorted according to their approaches, and at the end of [4] a unified multi-scale approach is proposed to understand fibrosis. Since one of the major purposes of the extensive modeling of fibrosis is to shed light on therapeutic strategies, theoretical, experimental and in-silico studies of anti-fibrosis therapies should be conducted more intensively.
Voluntary EMG-to-force estimation with a multi-scale physiological muscle model
2013-01-01
Background EMG-to-force estimation based on muscle models for voluntary contraction has many applications in human motion analysis. The so-called Hill model is recognized as a standard model for this practical use. However, it is a phenomenological model whereby muscle activation, force-length and force-velocity properties are considered independently. Perreault reported that Hill modeling errors were large for different firing frequencies, levels of activation and speeds of contraction. This may be due to the lack of coupling between activation and force-velocity properties. In this paper, we discuss EMG-force estimation with a multi-scale physiology-based model, which has a link to the underlying cross-bridge dynamics. Unlike the Hill model, the proposed method provides dual dynamics of recruitment and calcium activation. Methods The ankle torque was measured for plantar flexion along with EMG measurements of the medial gastrocnemius (GAS) and soleus (SOL). In addition to the Hill representation of the passive elements, three models of the contractile parts were compared. Using common EMG signals during isometric contraction in four able-bodied subjects, torque was estimated by the linear Hill model, the nonlinear Hill model and the multi-scale physiological model that refers to Huxley theory. The comparison was made on a normalized scale relative to the case of maximum voluntary contraction. Results The estimation results obtained with the multi-scale model showed the best performance in both fast-short and slow-long contractions in randomized tests for all four subjects. The RMS errors were improved with the nonlinear Hill model compared to the linear Hill model; however, it showed limitations in accounting for different speeds of contraction. The average error was 16.9% with the linear Hill model and 9.3% with the modified Hill model. In contrast, the error with the multi-scale model was 6.1%, while maintaining a uniform estimation performance in both fast and slow contraction schemes. Conclusions We introduced a novel approach that allows EMG-force estimation based on a multi-scale physiology model integrating the Hill approach for the passive elements and microscopic cross-bridge representations for the contractile element. The experimental evaluation highlights estimation improvements, especially over a larger range of contraction conditions, through integration of the neural activation frequency property and the force-velocity relationship via consideration of cross-bridge dynamics. PMID:24007560
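For contrast with the multi-scale model above, the phenomenological Hill structure it is compared against treats activation, force-length and force-velocity as independent multiplicative factors. A minimal sketch of that baseline (curve shapes and constants are illustrative placeholders, not the authors' fitted model):

```python
import numpy as np

def hill_muscle_force(a, l_norm, v_norm, f_max=1.0):
    """Force = activation * force-length * force-velocity, all factors independent.

    a      : activation in [0, 1] derived from processed EMG
    l_norm : fibre length / optimal fibre length
    v_norm : shortening velocity / max shortening velocity (>= 0, concentric only)
    """
    f_l = np.exp(-((l_norm - 1.0) / 0.45) ** 2)   # bell-shaped force-length relation
    f_v = (1.0 - v_norm) / (1.0 + 4.0 * v_norm)   # hyperbolic force-velocity relation
    return f_max * a * f_l * np.clip(f_v, 0.0, None)

# Example: isometric contraction (v = 0) at optimal length with 60% activation
print(hill_muscle_force(a=0.6, l_norm=1.0, v_norm=0.0))   # -> 0.6 * f_max
```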
Biointerface dynamics--Multi scale modeling considerations.
Pajic-Lijakovic, Ivana; Levic, Steva; Nedovic, Viktor; Bugarski, Branko
2015-08-01
The irreversible nature of matrix structural changes around immobilized cell aggregates caused by cell expansion is considered within Ca-alginate microbeads. It is related to various effects: (1) cell-bulk surface effects (cell-polymer mechanical interactions) and cell surface-polymer surface effects (cell-polymer electrostatic interactions) at the bio-interface, (2) polymer-bulk volume effects (polymer-polymer mechanical and electrostatic interactions) within the perturbed boundary layers around the cell aggregates, (3) cumulative surface and volume effects within parts of the microbead, and (4) macroscopic effects within the microbead as a whole, based on multi-scale modeling approaches. All modeling levels are discussed at two time scales, i.e. a long time scale (cell growth time) and a short time scale (cell rearrangement time). Matrix structural changes result in the generation of resistance stress, which has a feedback impact on: (1) single and collective cell migrations, (2) cell deformation and orientation, (3) the decrease of cell-to-cell separation distances, and (4) cell growth. Herein, an attempt is made to discuss and connect the various multi-scale modeling approaches, over a range of time and space scales, which have been proposed in the literature, in order to shed further light on this complex cause-consequence phenomenon, which induces the anomalous nature of energy dissipation during the structural changes of cell aggregates and matrix, quantified by the damping coefficients (the orders of the fractional derivatives). Deeper insight into the partial disintegration of the matrix within the boundary layers is useful for understanding and minimizing the generation of polymer matrix resistance stress within the interface and, on that basis, optimizing cell growth. Copyright © 2015 Elsevier B.V. All rights reserved.
Telescopic multi-resolution augmented reality
NASA Astrophysics Data System (ADS)
Jenkins, Jeffrey; Frenchi, Christopher; Szu, Harold
2014-05-01
To ensure a self-consistent scaling approximation, the underlying microscopic fluctuation components can naturally influence macroscopic means, which may give rise to emergent observable phenomena. In this paper, we describe a consistent macroscopic (cm-scale), mesoscopic (micron-scale), and microscopic (nano-scale) approach to introduce Telescopic Multi-Resolution (TMR) into current Augmented Reality (AR) visualization technology. We propose to couple TMR-AR by introducing an energy-matter interaction engine framework that is based on known principles of physics, biology, and chemistry. An immediate payoff of TMR-AR is a self-consistent approximation of the interaction between microscopic observables and their direct effect on the macroscopic system that is driven by real-world measurements. Such an interdisciplinary approach enables us not only to achieve multi-scale, telescopic visualization of real and virtual information but also to conduct thought experiments through AR. As a result of this consistency, the framework allows us to explore a large-dimensionality parameter space of measured and unmeasured regions. Toward this goal, we explore how to build learnable libraries of biological, physical, and chemical mechanisms. Fusing analytical sensors with TMR-AR libraries provides a robust framework to optimize testing and evaluation through data-driven or virtual synthetic simulations. Visualizing mechanisms of interactions requires identification of observable image features that can indicate the presence of information at multiple spatial and temporal scales of analog data. The AR methodology was originally developed to enhance pilot training as well as 'make believe' entertainment in a user-friendly digital environment. We believe TMR-AR can someday help us conduct thought experiments scientifically, pedagogically visualized in zoom-in-and-out, consistent, multi-scale approximations.
Dilts, Thomas E.; Weisberg, Peter J.; Leitner, Phillip; Matocq, Marjorie D.; Inman, Richard D.; Nussear, Ken E.; Esque, Todd C.
2016-01-01
Conservation planning and biodiversity management require information on landscape connectivity across a range of spatial scales from individual home ranges to large regions. Reduction in landscape connectivity due to changes in land use or development is expected to act synergistically with alterations to habitat mosaic configuration arising from climate change. We illustrate a multi-scale connectivity framework to aid habitat conservation prioritization in the context of changing land use and climate. Our approach, which builds upon the strengths of multiple landscape connectivity methods including graph theory, circuit theory and least-cost path analysis, is here applied to the conservation planning requirements of the Mohave ground squirrel. The distribution of this California threatened species, as for numerous other desert species, overlaps with the proposed placement of several utility-scale renewable energy developments in the American Southwest. Our approach uses information derived at three spatial scales to forecast potential changes in habitat connectivity under various scenarios of energy development and climate change. By disentangling the potential effects of habitat loss and fragmentation across multiple scales, we identify priority conservation areas for both core habitat and critical corridor or stepping-stone habitats. This approach is a first step toward applying graph theory to analyze habitat connectivity for species with continuously distributed habitat, and should be applicable across a broad range of taxa.
NASA Astrophysics Data System (ADS)
Arshadi, Amir
Image-based simulation of complex materials is a very important tool for understanding their mechanical behavior and an effective tool for the successful design of composite materials. In this thesis, an image-based multi-scale finite element approach is developed to predict the mechanical properties of asphalt mixtures. In this approach, the "up-scaling" and homogenization of each scale to the next is critically designed to improve accuracy. In addition to this multi-scale efficiency, this study introduces an approach for the consideration of particle contacts at each of the scales in which mineral particles exist. One of the most important pavement distresses, which seriously affects pavement performance, is fatigue cracking. As this cracking generally takes place in the binder phase of the asphalt mixture, the binder fatigue behavior is assumed to be one of the main factors influencing the overall pavement fatigue performance. It is also known that aggregate gradation, mixture volumetric properties, and filler type and concentration can affect damage initiation and progression in asphalt mixtures. This study was conducted to develop a tool to characterize the damage properties of asphalt mixtures at all scales. In the present study, the viscoelastic continuum damage model is implemented in the finite element software ABAQUS via a user material subroutine (UMAT) in order to simulate the state of damage in the binder phase under repeated uniaxial sinusoidal loading. The inputs are based on experimentally derived measurements of the binder properties. For the mastic and mortar scales, artificial 2-dimensional images were generated and used to characterize the properties of those scales. Finally, 2D scanned images of asphalt mixtures are used to study the asphalt mixture fatigue behavior under loading. In order to validate the proposed model, the experimental test results and the simulation results were compared. Indirect tensile fatigue tests were conducted on asphalt mixture samples. A comparison between the experimental results and the results from the simulation shows that the model developed in this study is capable of predicting the effect of asphalt binder properties and aggregate micro-structure on the mechanical behavior of asphalt concrete under loading.
A self-consistent first-principle based approach to model carrier mobility in organic materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meded, Velimir; Friederich, Pascal; Symalla, Franz
2015-12-31
Transport through thin organic amorphous films, utilized in OLEDs and OPVs, has been a challenge to model using ab-initio methods. Charge carrier mobility depends strongly on the disorder strength and reorganization energy, both of which are significantly affected by the details of the environment of each molecule. Here we present a multi-scale approach to describe carrier mobility in which the materials morphology is generated using DEPOSIT, a Monte Carlo based atomistic simulation approach, or, alternatively, by molecular dynamics calculations performed with GROMACS. From this morphology we extract the material-specific hopping rates, as well as the on-site energies, using a fully self-consistent embedding approach to compute the electronic structure parameters, which are then used in an analytic expression for the carrier mobility. We apply this strategy to compute the carrier mobility for a set of widely studied molecules and obtain good agreement between experiment and theory, varying over several orders of magnitude in the mobility, without any freely adjustable parameters. The work focuses on the quantum mechanical step of the multi-scale workflow and explains the concept along with the recently published workflow optimization, which combines density functional with semi-empirical tight binding approaches. This is followed by a discussion of the analytic formula and its agreement with established percolation fits as well as kinetic Monte Carlo numerical approaches. Finally, we sketch a unified multi-disciplinary approach that integrates materials science simulation and high performance computing, developed within the EU project MMM@HPC.
Confirmatory Factor Analysis of the Hewitt-Multidimensional Perfectionism Scale
ERIC Educational Resources Information Center
Barut, Yasar
2015-01-01
Various studies on the conceptual framework of the perfectionism construct use the Hewitt Multidimensional Perfectionism Scale (HMPS) as a basic approach. The measure has a prominent role with respect to the theoretical considerations of perfectionism dimensions. This study aimed to evaluate the psychometric properties of the Turkish version of the…
The temporal relationships between the measurements of terrestrial water storage (TWS), groundwater, and stream discharge were analyzed at three different scales in the Columbia River Basin (CRB) for water years 2004 - 2012. Our nested watershed approach examined the Snake River ...
Weatherill, John; Krause, Stefan; Voyce, Kevin; Drijfhout, Falko; Levy, Amir; Cassidy, Nigel
2014-03-01
Integrated approaches for the identification of pollutant linkages between aquifers and streams are of crucial importance for evaluating the environmental risks posed by industrial contaminants like trichloroethene (TCE). This study presents a systematic, multi-scale approach to characterising groundwater TCE discharge to a 'gaining' UK lowland stream receiving baseflow from a major Permo-Triassic sandstone aquifer. Beginning with a limited number of initial monitoring points, we aim to provide a 'first pass' mechanistic understanding of the plume's fate at the aquifer/stream interface using a novel combination of streambed diffusion samplers, riparian monitoring wells and drive-point mini-piezometers in a spatially nested sampling configuration. Our results indicate the potential discharge zone of the plume to extend along a stream reach of 120 m in length, delineated by a network of 60 in-situ diffusion samplers. Within this section, a 40 m long sub-reach of higher concentration (>10 μg L(-1)) was identified; centred on a meander bend in the floodplain. 25 multi-level mini-piezometers installed to target this down-scaled reach revealed even higher TCE concentrations (20-40 μg L(-1)), significantly above alluvial groundwater samples (<6 μg L(-1)) from 15 riparian monitoring wells. Significant lateral and vertical spatial heterogeneity in TCE concentrations within the top 1m of the streambed was observed with the decimetre-scale vertical resolution provided by multi-level mini-piezometers. It appears that the distribution of fine-grained material in the Holocene deposits of the riparian floodplain and below the channel is exerting significant local-scale geological controls on the location and magnitude of the TCE discharge. Large-scale in-situ biodegradation of the plume was not evident during the monitoring campaigns. However, detections of cis-1,2-dichloroethene and vinyl chloride in discrete sections of the sediment profile indicate that shallow (e.g., <20 cm) TCE transformation may be significant at a local scale in the streambed deposits. Our findings highlight the need for efficient multi-scale monitoring strategies in geologically heterogeneous lowland stream/aquifer systems in order to more adequately quantify the risk to surface water ecological receptors posed by point-source groundwater contaminants like TCE. Copyright © 2013 Elsevier B.V. All rights reserved.
An alternative to Rasch analysis using triadic comparisons and multi-dimensional scaling
NASA Astrophysics Data System (ADS)
Bradley, C.; Massof, R. W.
2016-11-01
Rasch analysis is a principled approach for estimating the magnitude of some shared property of a set of items when a group of people assign ordinal ratings to them. In the general case, Rasch analysis not only estimates person and item measures on the same invariant scale, but also estimates the average thresholds used by the population to define rating categories. However, Rasch analysis fails when there is insufficient variance in the observed responses because it assumes a probabilistic relationship between person measures, item measures and the rating assigned by a person to an item. When only a single person is rating all items, there may be cases where the person assigns the same rating to many items no matter how many times he rates them. We introduce an alternative to Rasch analysis for precisely these situations. Our approach leverages multi-dimensional scaling (MDS) and requires only rank orderings of items and rank orderings of pairs of distances between items to work. Simulations show that one variant of this approach - triadic comparisons with non-metric MDS - provides highly accurate estimates of item measures in realistic situations.
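A minimal sketch of the non-metric MDS step, assuming scikit-learn and a toy ordinal dissimilarity matrix aggregated from triadic comparisons (the data values and the one-dimensional embedding choice are illustrative, not the authors' pipeline):

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical ordinal dissimilarities between 5 items, e.g. aggregated from
# triadic comparisons ("which two of these three are most alike?").  Only the
# rank order of the entries matters for non-metric MDS.
D = np.array([[0, 1, 2, 3, 4],
              [1, 0, 1, 2, 3],
              [2, 1, 0, 1, 2],
              [3, 2, 1, 0, 1],
              [4, 3, 2, 1, 0]], dtype=float)

mds = MDS(n_components=1, metric=False, dissimilarity="precomputed",
          n_init=10, random_state=0)
item_measures = mds.fit_transform(D).ravel()   # 1-D "item measure" estimates
print(np.round(item_measures, 3))
```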
NASA Astrophysics Data System (ADS)
Montero, Marc Villa; Barjasteh, Ehsan; Baid, Harsh K.; Godines, Cody; Abdi, Frank; Nikbin, Kamran
A multi-scale micromechanics approach, along with a finite element (FE) predictive modeling tool, is developed to analyze the low-energy impact damage footprint and compression-after-impact (CAI) behavior of composite laminates, and is tested and verified with experimental data. Effective fiber and matrix properties were reverse-engineered from lamina properties using an optimization algorithm and used to assess damage at the micro-level during impact and post-impact FE simulations. Progressive failure dynamic analysis (PFDA) was performed for a two-step process simulation. Damage mechanisms at the micro-level were continuously evaluated during the analyses. The contribution of each failure mode was tracked during the simulations, and the damage and delamination footprint size and shape were predicted to understand when, where and why failure occurred during both impact and CAI events. The composite laminate was manufactured by the vacuum infusion of an aero-grade toughened Benzoxazine system into the fabric preform. The delamination footprint was measured using C-scan data from the impacted panels and compared with the predicted values obtained from the proposed multi-scale micromechanics approach coupled with FE analysis. Furthermore, the residual strength was predicted from the load-displacement curve and compared with the experimental values as well.
Multi-Hazard Interactions in Guatemala
NASA Astrophysics Data System (ADS)
Gill, Joel; Malamud, Bruce D.
2017-04-01
In this paper, we combine physical and social science approaches to develop a multi-scale regional framework for natural hazard interactions in Guatemala. The identification and characterisation of natural hazard interactions is an important input for comprehensive multi-hazard approaches to disaster risk reduction at a regional level. We use five transdisciplinary evidence sources to organise and populate our framework: (i) internationally-accessible literature; (ii) civil protection bulletins; (iii) field observations; (iv) stakeholder interviews (hazard and civil protection professionals); and (v) stakeholder workshop results. These five evidence sources are synthesised to determine an appropriate natural hazard classification scheme for Guatemala (6 hazard groups, 19 hazard types, and 37 hazard sub-types). For a national spatial extent (Guatemala), we construct and populate a "21×21" hazard interaction matrix, identifying 49 possible interactions between 21 hazard types. For a sub-national spatial extent (Southern Highlands, Guatemala), we construct and populate a "33×33" hazard interaction matrix, identifying 112 possible interactions between 33 hazard sub-types. Evidence sources are also used to constrain anthropogenic processes that could trigger natural hazards in Guatemala, and characterise possible networks of natural hazard interactions (cascades). The outcomes of this approach are among the most comprehensive interaction frameworks for national and sub-national spatial scales in the published literature. These can be used to support disaster risk reduction and civil protection professionals in better understanding natural hazards and potential disasters at a regional scale.
NASA Astrophysics Data System (ADS)
Chung, C.; Nagol, J. R.; Tao, X.; Anand, A.; Dempewolf, J.
2015-12-01
Increasing agricultural production while at the same time preserving the environment has become a challenging task. There is a need for new approaches to the use of multi-scale and multi-source remote sensing data, as well as ground-based measurements, for mapping and monitoring crop and ecosystem state to support decision making by governmental and non-governmental organizations for sustainable agricultural development. High resolution sub-meter imagery plays an important role in such an integrative framework of landscape monitoring. It helps link the ground-based data to more easily available coarser resolution data, facilitating calibration and validation of derived remote sensing products. Here we present a hierarchical Object Based Image Analysis (OBIA) approach to classify sub-meter imagery. The primary reason for choosing OBIA is to accommodate pixel sizes smaller than the object or class of interest. Especially in the non-homogeneous savannah regions of Tanzania, this is an important concern, and the traditional pixel-based spectral signature approach often fails. Ortho-rectified, calibrated, pan-sharpened 0.5 m resolution data acquired from DigitalGlobe's WorldView-2 satellite sensor were used for this purpose. Multi-scale hierarchical segmentation was performed using a multi-resolution segmentation approach to facilitate the use of texture, neighborhood context, and the relationship between super- and sub-objects for training and classification. eCognition, a commonly used OBIA software program, was used for this purpose. Both decision tree and random forest approaches to classification were tested. The Kappa index of agreement for both algorithms surpassed 85%. The results demonstrate that hierarchical OBIA can effectively and accurately discriminate classes even at the LCCS-3 legend level.
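A minimal sketch of the segment-level classification and Kappa assessment step, assuming scikit-learn and a hypothetical per-segment feature table exported from the OBIA segmentation (file names and split are placeholders, not the authors' eCognition workflow):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-segment feature table: columns could be mean band
# reflectances, texture measures, shape indices, and parent-object attributes.
X = np.load("segment_features.npy")     # shape (n_segments, n_features), placeholder file
y = np.load("segment_labels.npy")       # training labels digitized by an analyst

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=42, stratify=y)
rf = RandomForestClassifier(n_estimators=500, random_state=42)
rf.fit(X_tr, y_tr)
print("Kappa:", cohen_kappa_score(y_te, rf.predict(X_te)))
```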
Beyond RGB: Very high resolution urban remote sensing with multimodal deep networks
NASA Astrophysics Data System (ADS)
Audebert, Nicolas; Le Saux, Bertrand; Lefèvre, Sébastien
2018-06-01
In this work, we investigate various methods to deal with semantic labeling of very high resolution multi-modal remote sensing data. In particular, we study how deep fully convolutional networks can be adapted to deal with multi-modal and multi-scale remote sensing data for semantic labeling. Our contributions are threefold: (a) we present an efficient multi-scale approach to leverage both a large spatial context and the high resolution data, (b) we investigate early and late fusion of Lidar and multispectral data, (c) we validate our methods on two public datasets with state-of-the-art results. Our results indicate that late fusion makes it possible to recover errors stemming from ambiguous data, while early fusion allows for better joint-feature learning but at the cost of higher sensitivity to missing data.
Intrinsic Multi-Scale Dynamic Behaviors of Complex Financial Systems.
Ouyang, Fang-Yan; Zheng, Bo; Jiang, Xiong-Fei
2015-01-01
The empirical mode decomposition is applied to analyze the intrinsic multi-scale dynamic behaviors of complex financial systems. In this approach, the time series of the price returns of each stock is decomposed into a small number of intrinsic mode functions, which represent the price motion from high frequency to low frequency. These intrinsic mode functions are then grouped into three modes, i.e., the fast mode, medium mode and slow mode. The probability distribution of returns and the auto-correlation of volatilities for the fast and medium modes exhibit similar behaviors to those of the full time series, i.e., these characteristics are rather robust across multiple time scales. However, the cross-correlation between individual stocks and the return-volatility correlation are time-scale dependent. The structure of business sectors is mainly governed by the fast mode when returns are sampled over a couple of days, and by the medium mode when returns are sampled over dozens of days. More importantly, the leverage and anti-leverage effects are dominated by the medium mode.
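A minimal sketch of the decomposition-and-grouping step, assuming the PyEMD package and a hypothetical return series; the number of intrinsic mode functions and the fast/medium/slow split indices below are illustrative only, not the authors' grouping:

```python
import numpy as np
from PyEMD import EMD   # assumes the PyEMD package ("EMD-signal") is installed

returns = np.loadtxt("stock_returns.txt")   # hypothetical daily return series
imfs = EMD().emd(returns)                   # IMFs ordered from high to low frequency

# Group IMFs into fast / medium / slow modes (split indices are illustrative)
fast   = imfs[:3].sum(axis=0)
medium = imfs[3:6].sum(axis=0)
slow   = imfs[6:].sum(axis=0)               # includes the residual trend
```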
Vakalis, Stergios; Patuzzi, Francesco; Baratieri, Marco
2016-04-01
Modeling can be a powerful tool for designing and optimizing gasification systems. Modeling applications for small-scale/fixed-bed biomass gasifiers are of particular interest due to their increasing commercial deployment. Fixed-bed gasifiers are characterized by a wide range of operational conditions and are multi-zoned processes. The reactants are distributed in different phases, and the products from each zone influence the following process steps and thus the composition of the final products. The present study aims to improve conventional 'black-box' thermodynamic modeling by developing multiple intermediate 'boxes' that calculate two-phase (solid-vapor) equilibria in small-scale gasifiers. The model is therefore named 'Multi-Box'. Experimental data from a small-scale gasifier have been used for the validation of the model. The returned results are significantly closer to the actual case-study measurements than those of single-stage thermodynamic modeling. Copyright © 2016 Elsevier Ltd. All rights reserved.
Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stander, Nielen; Basudhar, Anirban; Basu, Ushnish
2015-09-14
Ever-tightening regulations on fuel economy, and the likely future regulation of carbon emissions, demand persistent innovation in vehicle design to reduce vehicle mass. Classical methods for computational mass reduction include sizing, shape and topology optimization. One of the few remaining options for weight reduction can be found in materials engineering and material design optimization. Apart from considering different types of materials, by adding material diversity and composite materials, an appealing option in automotive design is to engineer steel alloys for the purpose of reducing plate thickness while retaining the strength and ductility required for durability and safety. A project to develop computational material models for advanced high strength steel is currently being executed under the auspices of the United States Automotive Materials Partnership (USAMP), funded by the US Department of Energy. Under this program, new Third Generation Advanced High Strength Steels (3GAHSS) are being designed, tested and integrated with the remaining design variables of a benchmark vehicle Finite Element model. The objectives of the project are to integrate atomistic, microstructural, forming and performance models to create an integrated computational materials engineering (ICME) toolkit for 3GAHSS. The mechanical properties of Advanced High Strength Steels (AHSS) are controlled by many factors, including phase composition and distribution in the overall microstructure, volume fraction, size and morphology of phase constituents, as well as the stability of the metastable retained austenite phase. The complex phase transformation and deformation mechanisms in these steels make the well-established traditional techniques obsolete, and a multi-scale microstructure-based modeling approach following the ICME strategy was therefore chosen in this project. Multi-scale modeling as a major area of research and development is an outgrowth of the Comprehensive Test Ban Treaty of 1996, which banned surface testing of nuclear devices [1]. This had the effect that experimental work was reduced from large scale tests to multi-scale experiments to provide material models with validation at different length scales. In the subsequent years, industry realized that multi-scale modeling and simulation-based design were transferable to the design optimization of any structural system. Horstemeyer [1] lists a number of advantages of the use of multi-scale modeling. Among these are: the reduction of product development time by alleviating costly trial-and-error iterations, as well as the reduction of product costs through innovations in material, product and process designs. Multi-scale modeling can reduce the number of costly large scale experiments and can increase product quality by providing more accurate predictions. Research tends to be focused on each particular length scale, which enhances accuracy in the long term. This paper serves as an introduction to the LS-OPT and LS-DYNA methodology for multi-scale modeling. It mainly focuses on an approach to integrate material identification using material models of different length scales. As an example, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a homogenized State Variable (SV) model, is discussed, and the parameter identification of the individual material models of different length scales is demonstrated.
The paper concludes with thoughts on integrating the multi-scale methodology into the overall vehicle design.
Biofuel feedstock production in the United States (US) is an emergent environmental nutrient management issue, whose exploration can benefit from a multi-scale and multimedia systems modeling approach that explicitly addresses diverging stakeholder interests. In the present anal...
NASA Technical Reports Server (NTRS)
Brown, Christopher A.
1993-01-01
The approach of the project is to base the design of multi-function, reflective topographies on the theory that topographically dependent phenomena react with surfaces and interfaces at certain scales. The first phase of the project emphasizes the development of methods for understanding the sizes of topographic features which influence reflectivity. Subsequent phases, if necessary, will address the scales of interaction for adhesion and manufacturing processes. A simulation of the interaction of electromagnetic radiation, or light, with a reflective surface is performed using specialized software. Reflectivity of the surface as a function of scale is evaluated and the results from the simulation are compared with reflectivity measurements made on multi-function, reflective surfaces.
Uncertainty-Based Multi-Objective Optimization of Groundwater Remediation Design
NASA Astrophysics Data System (ADS)
Singh, A.; Minsker, B.
2003-12-01
Management of groundwater contamination is a cost-intensive undertaking filled with conflicting objectives and substantial uncertainty. A critical source of this uncertainty in groundwater remediation design problems comes from the hydraulic conductivity values for the aquifer, upon which the prediction of flow and transport of contaminants are dependent. For a remediation solution to be reliable in practice it is important that it is robust over the potential error in the model predictions. This work focuses on incorporating such uncertainty within a multi-objective optimization framework, to get reliable as well as Pareto optimal solutions. Previous research has shown that small amounts of sampling within a single-objective genetic algorithm can produce highly reliable solutions. However with multiple objectives the noise can interfere with the basic operations of a multi-objective solver, such as determining non-domination of individuals, diversity preservation, and elitism. This work proposes several approaches to improve the performance of noisy multi-objective solvers. These include a simple averaging approach, taking samples across the population (which we call extended averaging), and a stochastic optimization approach. All the approaches are tested on standard multi-objective benchmark problems and a hypothetical groundwater remediation case-study; the best-performing approach is then tested on a field-scale case at Umatilla Army Depot.
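The "simple averaging" strategy mentioned above can be sketched as follows; the `simulate` flow-and-transport call, the objective tuple and the pool of hydraulic-conductivity realizations are hypothetical placeholders, not the solver used in the study:

```python
import numpy as np

def averaged_objectives(design, realizations, simulate, n_samples=5, rng=None):
    """Average noisy objective values (e.g., cost and residual contaminant mass)
    over sampled hydraulic-conductivity fields before non-domination sorting.

    design       : candidate remediation design (e.g., well locations and rates)
    realizations : pool of conductivity fields representing model uncertainty
    simulate     : hypothetical function (design, k_field) -> tuple of objectives
    """
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(realizations), size=n_samples, replace=False)
    samples = np.array([simulate(design, realizations[i]) for i in idx])
    return samples.mean(axis=0)   # averaged objective vector passed to the MO solver
```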
Frasch, Martin G; Lobmaier, Silvia M; Stampalija, Tamara; Desplats, Paula; Pallarés, María Eugenia; Pastor, Verónica; Brocco, Marcela A; Wu, Hau-Tieng; Schulkin, Jay; Herry, Christophe L; Seely, Andrew J E; Metz, Gerlinde A S; Louzoun, Yoram; Antonelli, Marta C
2018-05-30
Prenatal stress (PS) impacts early postnatal behavioural and cognitive development. This process of 'fetal programming' is mediated by the effects of the prenatal experience on the developing hypothalamic-pituitary-adrenal (HPA) axis and autonomic nervous system (ANS). We derive a multi-scale multi-species approach to devising preclinical and clinical studies to identify early non-invasively available pre- and postnatal biomarkers of PS. The multiple scales include brain epigenome, metabolome, microbiome and the ANS activity gauged via an array of advanced non-invasively obtainable properties of fetal heart rate fluctuations. The proposed framework has the potential to reveal mechanistic links between maternal stress during pregnancy and changes across these physiological scales. Such biomarkers may hence be useful as early and non-invasive predictors of neurodevelopmental trajectories influenced by the PS as well as follow-up indicators of success of therapeutic interventions to correct such altered neurodevelopmental trajectories. PS studies must be conducted on multiple scales derived from concerted observations in multiple animal models and human cohorts performed in an interactive and iterative manner and deploying machine learning for data synthesis, identification and validation of the best non-invasive detection and follow-up biomarkers, a prerequisite for designing effective therapeutic interventions. Copyright © 2018 Elsevier Ltd. All rights reserved.
Papanikolaou, Yannis; Tsoumakas, Grigorios; Laliotis, Manos; Markantonatos, Nikos; Vlahavas, Ioannis
2017-09-22
In this paper we present the approach that we employed to deal with large scale multi-label semantic indexing of biomedical papers. This work was mainly implemented within the context of the BioASQ challenge (2013-2017), a challenge concerned with biomedical semantic indexing and question answering. Our main contribution is a MUlti-Label Ensemble method (MULE) that incorporates a McNemar statistical significance test in order to validate the combination of the constituent machine learning algorithms. Some secondary contributions include a study on the temporal aspects of the BioASQ corpus (observations that also apply to BioASQ's super-set, the PubMed articles collection) and the proper parametrization of the algorithms used to deal with this challenging classification task. The ensemble method that we developed is compared to other approaches in experimental scenarios with subsets of the BioASQ corpus, giving positive results. In our participation in the BioASQ challenge we obtained first place in 2013 and second place in the four following years, steadily outperforming MTI, the indexing system of the National Library of Medicine (NLM). The results of our experimental comparisons suggest that employing a statistical significance test to validate the ensemble method's choices is the optimal approach for ensembling multi-label classifiers, especially in contexts with many rare labels.
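A minimal sketch of a per-label McNemar check between two candidate classifiers, assuming the statsmodels package; the acceptance rule shown is illustrative and not necessarily the exact criterion used by MULE:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def significantly_different(pred_a, pred_b, truth, alpha=0.05):
    """Return True if classifiers A and B disagree significantly on one label.

    pred_a, pred_b, truth : binary arrays of per-document predictions / gold labels
    The 2x2 table counts documents where A and B are each right or wrong.
    """
    a_right = pred_a == truth
    b_right = pred_b == truth
    table = [[np.sum( a_right &  b_right), np.sum( a_right & ~b_right)],
             [np.sum(~a_right &  b_right), np.sum(~a_right & ~b_right)]]
    return mcnemar(table, exact=True).pvalue < alpha
```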
Hallock, Michael J.; Stone, John E.; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida
2014-01-01
Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems. PMID:24882911
Hallock, Michael J; Stone, John E; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida
2014-05-01
Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems.
Visualization of Spatio-Temporal Relations in Movement Event Using Multi-View
NASA Astrophysics Data System (ADS)
Zheng, K.; Gu, D.; Fang, F.; Wang, Y.; Liu, H.; Zhao, W.; Zhang, M.; Li, Q.
2017-09-01
Spatio-temporal relations among movement events extracted from temporally varying trajectory data can provide useful information about the evolution of individual or collective movers, as well as their interactions with their spatial and temporal contexts. However, the pure statistical tools commonly used by analysts pose many difficulties, due to the large number of attributes embedded in multi-scale and multi-semantic trajectory data. The need for models that operate at multiple scales to search for relations at different locations within time and space, as well as intuitively interpret what these relations mean, also presents challenges. Since analysts do not know where or when these relevant spatio-temporal relations might emerge, these models must compute statistical summaries of multiple attributes at different granularities. In this paper, we propose a multi-view approach to visualize the spatio-temporal relations among movement events. We describe a method for visualizing movement events and spatio-temporal relations that uses multiple displays. A visual interface is presented, and the user can interactively select or filter spatial and temporal extents to guide the knowledge discovery process. We also demonstrate how this approach can help analysts to derive and explain the spatio-temporal relations of movement events from taxi trajectory data.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-14
... appropriate prevention and treatment approaches to methamphetamine abuse and suicide in a community driven... implemented and executed via an integrated and comprehensive approach through collaborations across multi... experience providing education and outreach on a national scale. This limitation ensures that the awardee...
NASA Astrophysics Data System (ADS)
Sehgal, V.; Lakhanpal, A.; Maheswaran, R.; Khosa, R.; Sridhar, Venkataramana
2018-01-01
This study proposes a wavelet-based multi-resolution modeling approach for statistical downscaling of GCM variables to mean monthly precipitation for five locations in the Krishna Basin, India. The climatic dataset from NCEP is used for training the proposed models (Jan. '69 to Dec. '94), which are then applied to the corresponding CanCM4 GCM variables to simulate precipitation for the validation (Jan. '95-Dec. '05) and forecast (Jan. '06-Dec. '35) periods. The observed precipitation data are obtained from the India Meteorological Department (IMD) gridded precipitation product at 0.25 degree spatial resolution. This paper proposes a novel Multi-Scale Wavelet Entropy (MWE) based approach for clustering climatic variables into suitable clusters using the k-means methodology. Principal Component Analysis (PCA) is used to obtain the representative Principal Components (PC) explaining 90-95% of the variance for each cluster. A multi-resolution non-linear approach combining Discrete Wavelet Transform (DWT) and Second Order Volterra (SoV) is used to model the representative PCs to obtain the downscaled precipitation for each downscaling location (W-P-SoV model). The results establish that the wavelet-based multi-resolution SoV models perform significantly better than traditional Multiple Linear Regression (MLR) and Artificial Neural Network (ANN) based frameworks. It is observed that the proposed MWE-based clustering and subsequent PCA help reduce the dimensionality of the input climatic variables while capturing more variability compared to stand-alone k-means (no MWE). The proposed models perform better in estimating the number of precipitation events during the non-monsoon periods, whereas the models with clustering but without MWE over-estimate the rainfall during the dry season.
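A minimal sketch of the wavelet-entropy-plus-k-means idea used for grouping the climatic predictors, assuming the PyWavelets and scikit-learn packages; the wavelet, decomposition level, entropy definition and cluster count are illustrative choices, not necessarily those of the authors:

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans

def multiscale_wavelet_entropy(series, wavelet="db4", level=4):
    """Shannon entropy of the relative wavelet energy across decomposition levels."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()
    return -np.sum(p * np.log(p + 1e-12))

# X: (n_variables, n_months) array of standardized predictor series (placeholder data)
X = np.random.default_rng(1).standard_normal((30, 312))
features = np.array([[multiscale_wavelet_entropy(x)] for x in X])
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
```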
Evaluation of Scale Reliability with Binary Measures Using Latent Variable Modeling
ERIC Educational Resources Information Center
Raykov, Tenko; Dimitrov, Dimiter M.; Asparouhov, Tihomir
2010-01-01
A method for interval estimation of scale reliability with discrete data is outlined. The approach is applicable with multi-item instruments consisting of binary measures, and is developed within the latent variable modeling methodology. The procedure is useful for evaluation of consistency of single measures and of sum scores from item sets…
NASA Astrophysics Data System (ADS)
Khan, Faisal; Enzmann, Frieder; Kersten, Michael
2016-03-01
Image processing of X-ray-computed polychromatic cone-beam micro-tomography (μXCT) data of geological samples mainly involves artefact reduction and phase segmentation. For the former, the main beam-hardening (BH) artefact is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. A Matlab code for this approach is provided in the Appendix. The final BH-corrected image is extracted from the residual data or from the difference between the surface elevation values and the original grey-scale values. For the segmentation, we propose a novel least-squares support vector machine (LS-SVM, an algorithm for pixel-based multi-phase classification) approach. A receiver operating characteristic (ROC) analysis was performed on BH-corrected and uncorrected samples to show that BH correction is in fact an important prerequisite for accurate multi-phase classification. The combination of the two approaches was then used to successfully classify three multi-phase rock core samples of varying complexity.
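The reference implementation is the Matlab code in the paper's appendix; a rough Python equivalent of the quadratic-surface step described above (fit a 2-D quadratic to the slice by least squares and keep the residual) might look like this:

```python
import numpy as np

def remove_beam_hardening(slice_img):
    """Fit z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2 to the slice and return the residual."""
    ny, nx = slice_img.shape
    y, x = np.mgrid[0:ny, 0:nx]
    x, y, z = x.ravel(), y.ravel(), slice_img.ravel().astype(float)
    A = np.column_stack([np.ones_like(x), x, y, x ** 2, x * y, y ** 2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    surface = (A @ coeffs).reshape(ny, nx)
    return slice_img.astype(float) - surface   # BH-corrected slice (residual from the fit)
```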
Spatial vision processes: From the optical image to the symbolic structures of contour information
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.
1988-01-01
The significance of machine and natural vision is discussed together with the need for a general approach to image acquisition and processing aimed at recognition. An exploratory scheme is proposed which encompasses the definition of spatial primitives, intrinsic image properties and sampling, 2-D edge detection at the smallest scale, the construction of spatial primitives from edges, and the isolation of contour information from textural information. Concepts drawn from or suggested by natural vision at both perceptual and physiological levels are relied upon heavily to guide the development of the overall scheme. The scheme is intended to provide a larger context in which to place the emerging technology of detector array focal-plane processors. The approach differs from many recent efforts in edge detection and image coding by emphasizing smallest scale edge detection as a foundation for multi-scale symbolic processing while diminishing somewhat the importance of image convolutions with multi-scale edge operators. Cursory treatments of information theory illustrate that the direct application of this theory to structural information in images could not be realized.
Quintero, Juliana; García-Betancourt, Tatiana; Caprara, Andrea; Basso, Cesar; Garcia da Rosa, Elsa; Manrique-Saide, Pablo; Coelho, Giovanini; Sánchez-Tejeda, Gustavo; Dzul-Manzanilla, Felipe; García, Diego Alejandro; Carrasquilla, Gabriel; Alfonso-Sierra, Eduardo; Monteiro Vasconcelos Motta, Cyntia; Sommerfeld, Johannes; Kroeger, Axel
2017-09-01
Prior to the current public health emergency following the emergence of chikungunya and Zika virus disease in the Americas during 2014 and 2015, multi-country research conducted between 2011 and 2013 investigated the efficacy of novel Aedes aegypti intervention packages through cluster randomised controlled trials in four Latin American cities: Fortaleza (Brazil), Girardot (Colombia), Acapulco (Mexico) and Salto (Uruguay). Results from the trials led to an effort to scale up the interventions at the city level. Scaling up refers to deliberate efforts to increase the impact of successfully tested health interventions to benefit more people and foster policy and program development in a sustainable way. The different scenarios represent examples of a 'vertical approach' and a 'horizontal approach'. This paper presents a preliminary process evaluation of the scaling-up efforts in these cities, focusing on the challenges and enabling factors encountered by the research teams and analysing the main social, political, administrative, financial and acceptance factors.
Highly efficient spatial data filtering in parallel using the opensource library CPPPO
NASA Astrophysics Data System (ADS)
Municchi, Federico; Goniva, Christoph; Radl, Stefan
2016-10-01
CPPPO is a compilation of parallel data processing routines developed with the aim of creating a library for "scale bridging" (i.e., connecting different scales by means of closure models) in a multi-scale approach. CPPPO features a number of parallel filtering algorithms designed for use with structured and unstructured Eulerian meshes, as well as Lagrangian data sets. In addition, data can be processed on the fly, allowing the collection of relevant statistics without saving individual snapshots of the simulation state. Our library is provided with an interface to the widely-used CFD solver OpenFOAM®, and can be easily connected to any other software package via interface modules. Also, we introduce a novel, extremely efficient approach to parallel data filtering, and show that our algorithms scale super-linearly on multi-core clusters. Furthermore, we provide a guideline for choosing the optimal Eulerian cell selection algorithm depending on the number of CPU cores used. Finally, we demonstrate the accuracy and the parallel scalability of CPPPO in a showcase focusing on heat and mass transfer from a dense bed of particles.
Numerical models for fluid-grains interactions: opportunities and limitations
NASA Astrophysics Data System (ADS)
Esteghamatian, Amir; Rahmani, Mona; Wachs, Anthony
2017-06-01
In the framework of a multi-scale approach, we develop numerical models for suspension flows. At the micro-scale level, we perform particle-resolved numerical simulations using a Distributed Lagrange Multiplier/Fictitious Domain approach. At the meso-scale level, we use a two-way Euler/Lagrange approach with a Gaussian filtering kernel to model fluid-solid momentum transfer. At both the micro and meso scale levels, particles are individually tracked in a Lagrangian way and all inter-particle collisions are computed by a Discrete Element/Soft-sphere method. These numerical models have been extended to handle particles of arbitrary shape (non-spherical, angular and even non-convex) as well as to treat heat and mass transfer. All simulation tools are fully MPI-parallel with standard domain decomposition and run on supercomputers with satisfactory scalability on up to a few thousand cores. The main asset of multi-scale analysis is the ability to extend our comprehension of the dynamics of suspension flows based on the knowledge acquired from the high-fidelity micro-scale simulations and to use that knowledge to improve the meso-scale model. We illustrate how we can benefit from this strategy for a fluidized bed, where we introduce a stochastic drag force model derived from micro-scale simulations to recover the proper level of particle fluctuations. Conversely, we discuss the limitations of such modelling tools, such as their limited ability to capture lubrication forces and boundary layers in highly inertial flows. We suggest ways to overcome these limitations in order to further enhance the capabilities of the numerical models.
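To make the Discrete Element/Soft-sphere ingredient above concrete, here is a minimal sketch of a linear spring-dashpot normal contact force between two particles; the stiffness, damping and particle states are illustrative assumptions, not the authors' calibrated model.

```python
# Soft-sphere (linear spring-dashpot) normal contact force between two particles.
import numpy as np

def normal_contact_force(x1, x2, v1, v2, r1, r2, k=1.0e4, eta=5.0):
    """Spring-dashpot normal force acting on particle 1 due to contact with particle 2."""
    d = x1 - x2
    dist = np.linalg.norm(d)
    overlap = (r1 + r2) - dist
    if overlap <= 0.0:
        return np.zeros(3)               # no contact
    n = d / dist                         # unit normal pointing towards particle 1
    rel_vn = np.dot(v1 - v2, n)          # normal relative velocity (positive = separating)
    return (k * overlap - eta * rel_vn) * n

f = normal_contact_force(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.9]),
                         np.array([0.0, 0.0, -0.1]), np.zeros(3), 0.5, 0.5)
print("contact force on particle 1:", f)
```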
Approaches for advancing scientific understanding of macrosystems
Levy, Ofir; Ball, Becky A.; Bond-Lamberty, Ben; Cheruvelil, Kendra S.; Finley, Andrew O.; Lottig, Noah R.; Surangi W. Punyasena,; Xiao, Jingfeng; Zhou, Jizhong; Buckley, Lauren B.; Filstrup, Christopher T.; Keitt, Tim H.; Kellner, James R.; Knapp, Alan K.; Richardson, Andrew D.; Tcheng, David; Toomey, Michael; Vargas, Rodrigo; Voordeckers, James W.; Wagner, Tyler; Williams, John W.
2014-01-01
The emergence of macrosystems ecology (MSE), which focuses on regional- to continental-scale ecological patterns and processes, builds upon a history of long-term and broad-scale studies in ecology. Scientists face the difficulty of integrating the many elements that make up macrosystems, which consist of hierarchical processes at interacting spatial and temporal scales. Researchers must also identify the most relevant scales and variables to be considered, the required data resources, and the appropriate study design to provide the proper inferences. The large volumes of multi-thematic data often associated with macrosystem studies typically require validation, standardization, and assimilation. Finally, analytical approaches need to describe how cross-scale and hierarchical dynamics and interactions relate to macroscale phenomena. Here, we elaborate on some key methodological challenges of MSE research and discuss existing and novel approaches to meet them.
NASA Astrophysics Data System (ADS)
Hosenfeld, Fabian; Horst, Fabian; Iñíguez, Benjamín; Lime, François; Kloes, Alexander
2017-11-01
Source-to-drain (SD) tunneling decreases the device performance in MOSFETs falling below the 10 nm channel length. Modeling quantum mechanical effects including SD tunneling has gained more importance, especially for compact model developers. The non-equilibrium Green's function (NEGF) has become a state-of-the-art method for nano-scaled device simulation in recent years. In the sense of a multi-scale simulation approach, it is necessary to bridge the gap between compact models, with their fast and efficient calculation of the device current, and numerical device models which consider quantum effects of nano-scaled devices. In this work, an NEGF-based analytical model for nano-scaled double-gate (DG) MOSFETs is introduced. The model consists of a closed-form potential solution of a classical compact model and a 1D NEGF formalism for calculating the device current, taking into account quantum mechanical effects. The potential calculation omits the iterative coupling and allows a straightforward current calculation. The model is based on a ballistic NEGF approach whereby backscattering effects are considered as a second-order effect in closed form. The accuracy and scalability of the non-iterative DG MOSFET model are inspected in comparison with numerical NanoMOS TCAD data for various channel lengths. With the help of this model, investigations of short-channel and temperature effects are performed.
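For readers unfamiliar with the NEGF machinery, the following is a generic 1D ballistic sketch: Landauer transmission through a tight-binding chain with a rectangular barrier, computed from the retarded Green's function and analytic lead self-energies. This is not the authors' closed-form DG MOSFET model; the hopping energy, barrier height and chain length are illustrative.

```python
# Generic 1D ballistic NEGF transmission through a tight-binding barrier.
import numpy as np

t = 1.0                                  # hopping energy (band spans [-2t, 2t])
n = 40                                   # number of device sites
onsite = np.zeros(n)
onsite[15:25] = 0.4                      # a 10-site potential barrier
H = (np.diag(onsite) + np.diag(-t * np.ones(n - 1), 1)
     + np.diag(-t * np.ones(n - 1), -1))

def transmission(E, eta=1e-9):
    """T(E) = Tr[Gamma_L G Gamma_R G^dagger] with analytic 1D lead self-energies."""
    # Semi-infinite 1D lead: Sigma = -t * exp(i k a), with E = -2 t cos(k a) in the band.
    ka = np.arccos(np.clip(-E / (2 * t), -1.0, 1.0))
    sigma = -t * np.exp(1j * ka)
    Sigma_L = np.zeros((n, n), complex); Sigma_L[0, 0] = sigma
    Sigma_R = np.zeros((n, n), complex); Sigma_R[-1, -1] = sigma
    G = np.linalg.inv((E + 1j * eta) * np.eye(n) - H - Sigma_L - Sigma_R)
    Gamma_L = 1j * (Sigma_L - Sigma_L.conj().T)
    Gamma_R = 1j * (Sigma_R - Sigma_R.conj().T)
    return float(np.real(np.trace(Gamma_L @ G @ Gamma_R @ G.conj().T)))

for E in (-1.5, -0.5, 0.0, 0.5):
    print(f"T({E:+.1f}) = {transmission(E):.3f}")
```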
Kirsch, Joseph; Peterson, James T.
2014-01-01
There is considerable uncertainty about the relative roles of stream habitat and landscape characteristics in structuring stream-fish assemblages. We evaluated the relative importance of environmental characteristics on fish occupancy at the local and landscape scales within the upper Little Tennessee River basin of Georgia and North Carolina. Fishes were sampled using a quadrat sample design at 525 channel units within 48 study reaches during two consecutive years. We evaluated species–habitat relationships (local and landscape factors) by developing hierarchical, multispecies occupancy models. Modeling results suggested that fish occupancy within the Little Tennessee River basin was primarily influenced by stream topology and topography, urban land coverage, and channel unit types. Landscape scale factors (e.g., urban land coverage and elevation) largely controlled the fish assemblage structure at a stream-reach level, and local-scale factors (i.e., channel unit types) influenced fish distribution within stream reaches. Our study demonstrates the utility of a multi-scaled approach and the need to account for hierarchy and the interscale interactions of factors influencing assemblage structure prior to monitoring fish assemblages, developing biological management plans, or allocating management resources throughout a stream system.
Simultaneous Multi-Scale Diffusion Estimation and Tractography Guided by Entropy Spectrum Pathways
Galinsky, Vitaly L.; Frank, Lawrence R.
2015-01-01
We have developed a method for the simultaneous estimation of local diffusion and the global fiber tracts based upon the information entropy flow that computes the maximum entropy trajectories between locations and depends upon the global structure of the multi-dimensional and multi-modal diffusion field. Computation of the entropy spectrum pathways requires only solving a simple eigenvector problem for the probability distribution, for which efficient numerical routines exist, and a straightforward integration of the probability conservation through ray tracing of the convective modes guided by a global structure of the entropy spectrum coupled with small-scale local diffusion. The intervoxel diffusion is sampled by multi-b-shell, multi-q-angle DWI data expanded in spherical waves. This novel approach to fiber tracking incorporates global information about multiple fiber crossings in every individual voxel and ranks it in the most scientifically rigorous way. This method has potential significance for a wide range of applications, including studies of brain connectivity. PMID:25532167
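As a toy illustration of the eigenvector step, the snippet below builds maximum-entropy transition probabilities on a small coupling graph from the dominant eigenpair, P_ij = A_ij psi_j / (lambda psi_i). Whether this matches the paper's exact entropy-spectrum-pathways construction is not asserted; the chain graph is purely illustrative.

```python
# Maximum-entropy transition matrix from the dominant eigenpair of a coupling matrix.
import numpy as np

n = 6
A = np.zeros((n, n))
for i in range(n - 1):                          # nearest-neighbour couplings
    A[i, i + 1] = A[i + 1, i] = 1.0

eigvals, eigvecs = np.linalg.eigh(A)
lam, psi = eigvals[-1], np.abs(eigvecs[:, -1])  # dominant (Perron) eigenpair

P = A * psi[None, :] / (lam * psi[:, None])     # maximum-entropy transition matrix
print("row sums (should all be 1):", np.round(P.sum(axis=1), 6))
```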
Schuster, Richard; Römer, Heinrich; Germain, Ryan R
2013-01-01
Roads are a major cause of habitat fragmentation that can negatively affect many mammal populations. Mitigation measures such as crossing structures are a proposed method to reduce the negative effects of roads on wildlife, but the best methods for determining where such structures should be implemented, and how their effects might differ between species in mammal communities is largely unknown. We investigated the effects of a major highway through south-eastern British Columbia, Canada on several mammal species to determine how the highway may act as a barrier to animal movement, and how species may differ in their crossing-area preferences. We collected track data of eight mammal species across two winters, along both the highway and pre-marked transects, and used a multi-scale modeling approach to determine the scale at which habitat characteristics best predicted preferred crossing sites for each species. We found evidence for a severe barrier effect on all investigated species. Freely-available remotely-sensed habitat landscape data were better than more costly, manually-digitized microhabitat maps in supporting models that identified preferred crossing sites; however, models using both types of data were better yet. Further, in 6 of 8 cases models which incorporated multiple spatial scales were better at predicting preferred crossing sites than models utilizing any single scale. While each species differed in terms of the landscape variables associated with preferred/avoided crossing sites, we used a multi-model inference approach to identify locations along the highway where crossing structures may benefit all of the species considered. By specifically incorporating both highway and off-highway data and predictions we were able to show that landscape context plays an important role for maximizing mitigation measurement efficiency. Our results further highlight the need for mitigation measures along major highways to improve connectivity between mammal populations, and illustrate how multi-scale data can be used to identify preferred crossing sites for different species within a mammal community.
NASA Astrophysics Data System (ADS)
Zhang, Z.; Tian, F.; Hu, H.
2013-12-01
A multi-scale, multi-technique study was conducted to measure evapotranspiration and its components in a cotton field under mulched drip irrigation conditions in northwestern China. Three measurement techniques at different scales were used: photosynthesis system (leaf scale), sap flow (plant scale), and eddy covariance (field scale). The experiment was conducted from July to September 2012. For upscaling the evapotranspiration from the leaf to the plant scale, an approach that incorporated the canopy structure and the relationships between sunlit and shaded leaves was proposed. For upscaling the evapotranspiration from the plant to the field scale, an approach based on the transpiration per unit leaf area was adopted and modified to incorporate the temporal variability in the relationships between the leaf area and the stem diameter. At the plant scale, the estimate of the transpiration based on the photosynthesis system with upscaling is slightly higher (18%) than that obtained by sap flow. At the field scale, the estimate of the transpiration obtained by upscaling the estimate based on sap flow measurements is also systematically higher (10%) compared to that obtained through eddy covariance during the cotton open boll growth stage when soil evaporation can be neglected. Nevertheless, the results derived from these three distinct methods show reasonable consistency at the field scale, which indicates that the upscaling approaches are reasonable and valid. Based on the measurements and the upscaling approaches, the evapotranspiration components were analyzed under mulched drip irrigation. During the cotton flower and bolling stages in July and August, evapotranspiration is 3.94 and 4.53 mm day-1, respectively. The proportion of transpiration to evapotranspiration reaches 87.1% before drip irrigation and 82.3% after irrigation. The high water use efficiency is principally due to the mulched film above the drip pipe, the low soil water content in the inter-film zone, the well-closed canopy, and the high water requirement of the crop.
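A back-of-the-envelope illustration of the plant-to-field upscaling idea (transpiration per unit leaf area scaled by leaf area index) follows; all numbers are made up and the study's stem-diameter correction is not included.

```python
# Plant-to-field upscaling via transpiration per unit leaf area (illustrative numbers).
sap_flow_per_plant = 0.55     # kg water per plant per day (sap-flow gauge)
leaf_area_per_plant = 0.45    # m^2 leaf per plant
lai = 3.2                     # m^2 leaf per m^2 ground (leaf area index)

t_per_leaf_area = sap_flow_per_plant / leaf_area_per_plant   # kg m^-2 leaf d^-1
field_transpiration = t_per_leaf_area * lai                  # kg m^-2 ground d^-1
print(f"field-scale transpiration ~ {field_transpiration:.2f} mm/day")  # 1 kg m^-2 = 1 mm
```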
Ashraf, Sania; Moore, Carolyn; Gupta, Vaibhav; Chowdhury, Anir; Azad, Abul K; Singh, Neelu; Hagan, David; Labrique, Alain B
2015-12-09
National level policymaking and implementation includes multiple stakeholders with varied interests and priorities. Multi-stakeholder dialogues (MSDs) can facilitate consensus building through collective identification of challenges, recognition of shared goals and interests, and creation of solution pathways. This can shape joint planning and implementation for long-term efficiency in health and other sectors. Scaling up the effective use of information and communication technologies (ICTs) requires cohesive strategic planning towards a shared goal. In Bangladesh, the government and partners convened an MSD in March 2015 to increase stakeholder engagement in policymaking and implementation of a national ICT or electronic or mobile health (eHealth or mHealth) strategy, which seeks to incorporate ICTs into the national health system, aligning with the Digital Bangladesh Vision 2021. Relevant stakeholders were identified and key priorities and challenges were mapped through key informant interviews. An MSD was conducted with key stakeholders in Dhaka, Bangladesh. The MSD included presentations, group option generation, agreement and prioritization of barriers to scaling up ICTs. The MSD approach to building consensus on key priorities highlights the value of dialogue and collaboration with relevant stakeholders to encourage country ownership of nationwide efforts such as ICT scale-up. This MSD showed the dynamic context in which stakeholders operate, including those from academia, donors and foundations, healthcare professionals, associations, multilateral organizations, non-governmental organizations, partner countries and the private sector. Through this MSD, participants improved understanding of each other's contributions and interests, identified existing relationships, and agreed on policy and implementation gaps that needed to be filled. Collaboration among stakeholders in ICT efforts and research can promote a cohesive approach to scaling up, as well as improve policymaking by integrating interests and feedback of different key cross sectoral actors. MSDs can align stakeholders to identify challenges and solution pathways, and lead to coordinated action and accountability for resources and results. In addition, the MSD template and approach has been useful to guide ICT scale up in Bangladesh and could be replicated in other contexts to facilitate multi-constituency, multi-sector collaboration.
Kebede, Abiy S; Nicholls, Robert J; Allan, Andrew; Arto, Iñaki; Cazcarro, Ignacio; Fernandes, Jose A; Hill, Chris T; Hutton, Craig W; Kay, Susan; Lázár, Attila N; Macadam, Ian; Palmer, Matthew; Suckall, Natalie; Tompkins, Emma L; Vincent, Katharine; Whitehead, Paul W
2018-09-01
To better anticipate potential impacts of climate change, diverse information about the future is required, including climate, society and economy, and adaptation and mitigation. To address this need, a global RCP (Representative Concentration Pathways), SSP (Shared Socio-economic Pathways), and SPA (Shared climate Policy Assumptions) (RCP-SSP-SPA) scenario framework has been developed by the Intergovernmental Panel on Climate Change Fifth Assessment Report (IPCC-AR5). Application of this full global framework at sub-national scales introduces two key challenges: added complexity in capturing the multiple dimensions of change, and issues of scale. Perhaps for this reason, there are few such applications of this new framework. Here, we present an integrated multi-scale hybrid scenario approach that combines both expert-based and participatory methods. The framework has been developed and applied within the DECCMA 1 project with the purpose of exploring migration and adaptation in three deltas across West Africa and South Asia: (i) the Volta delta (Ghana), (ii) the Mahanadi delta (India), and (iii) the Ganges-Brahmaputra-Meghna (GBM) delta (Bangladesh/India). Using a climate scenario that encompasses a wide range of impacts (RCP8.5) combined with three SSP-based socio-economic scenarios (SSP2, SSP3, SSP5), we generate highly divergent and challenging scenario contexts across multiple scales against which robustness of the human and natural systems within the deltas are tested. In addition, we consider four distinct adaptation policy trajectories: Minimum intervention, Economic capacity expansion, System efficiency enhancement, and System restructuring, which describe alternative future bundles of adaptation actions/measures under different socio-economic trajectories. The paper highlights the importance of multi-scale (combined top-down and bottom-up) and participatory (joint expert-stakeholder) scenario methods for addressing uncertainty in adaptation decision-making. The framework facilitates improved integrated assessments of the potential impacts and plausible adaptation policy choices (including migration) under uncertain future changing conditions. The concept, methods, and processes presented are transferable to other sub-national socio-ecological settings with multi-scale challenges. Copyright © 2018. Published by Elsevier B.V.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kant, Deepender, E-mail: dkc@ceeri.ernet.in; Joshi, L. M.; Janyani, Vijay
The klystron is a well-known microwave amplifier which uses the kinetic energy of an electron beam for amplification of the RF signal. There are some limitations of the conventional single beam klystron, such as high operating voltage, low efficiency and bulky size at higher power levels, which are very effectively handled in the Multi Beam Klystron (MBK) that uses multiple low-perveance electron beams for RF interaction. Each beam propagates along its individual transit path through a resonant cavity structure. Multi-beam klystron cavity design is a critical task due to the asymmetric cavity structure and can be simulated by 3D codes only. The present paper discusses the design of multi-beam RF cavities for klystrons operating at 2856 MHz (S-band) and 5 GHz (C-band), respectively. The design approach uses scaling laws for finding the electron beam parameters of the multi-beam device from their single-beam counterparts. The scaled beam parameters are then used for finding the design parameters of the multi-beam cavities. The design of the desired multi-beam cavity can be optimized through iterative simulations in CST Microwave Studio.
ERIC Educational Resources Information Center
Lin, Tzung-Jin; Tsai, Chin-Chung
2013-01-01
In the past, students' science learning self-efficacy (SLSE) was usually measured by questionnaires that consisted of only a single scale, which might be insufficient to fully understand their SLSE. In this study, a multi-dimensional instrument, the SLSE instrument, was developed and validated to assess students' SLSE based on the previous…
Bean, William T.; Stafford, Robert; Butterfield, H. Scott; Brashares, Justin S.
2014-01-01
Species distributions are known to be limited by biotic and abiotic factors at multiple temporal and spatial scales. Species distribution models, however, frequently assume a population at equilibrium in both time and space. Studies of habitat selection have repeatedly shown the difficulty of estimating resource selection if the scale or extent of analysis is incorrect. Here, we present a multi-step approach to estimate the realized and potential distribution of the endangered giant kangaroo rat. First, we estimate the potential distribution by modeling suitability at a range-wide scale using static bioclimatic variables. We then examine annual changes in extent at the population level. We define "available" habitat based on the total suitable potential distribution at the range-wide scale. Then, within the available habitat, we model changes in population extent driven by multiple measures of resource availability. By modeling distributions for a population with robust estimates of population extent through time, and ecologically relevant predictor variables, we improved the predictive ability of SDMs and revealed an unanticipated relationship between population extent and precipitation at multiple scales. At a range-wide scale, the best model indicated the giant kangaroo rat was limited to areas that received little to no precipitation in the summer months. In contrast, the best model for shorter time scales showed a positive relation with resource abundance, driven by precipitation, in the current and previous year. These results suggest that the distribution of the giant kangaroo rat was limited to the wettest parts of the drier areas within the study region. This multi-step approach reinforces the differing relationships species may have with environmental variables at different scales, provides a novel method for defining "available" habitat in habitat selection studies, and suggests a way to create distribution models at spatial and temporal scales relevant to theoretical and applied ecologists. PMID:25237807
Price, Owen F; Penman, Trent; Bradstock, Ross; Borah, Rittick
2016-10-01
Wildfires are complex adaptive systems, and have been hypothesized to exhibit scale-dependent transitions in the drivers of fire spread. Among other things, this makes the prediction of final fire size from conditions at ignition difficult. We test this hypothesis by conducting a multi-scale statistical modelling of the factors determining whether fires reached 10 ha, then 100 ha, then 1000 ha, and the final size of fires >1000 ha. At each stage, the predictors were measures of weather, fuels, topography and fire suppression. The objectives were to identify differences among the models indicative of scale transitions, assess the accuracy of the multi-step method for predicting fire size (compared to predicting final size from initial conditions) and to quantify the importance of the predictors. The data were 1116 fires that occurred in the eucalypt forests of New South Wales between 1985 and 2010. The models were similar at the different scales, though there were subtle differences. For example, the presence of roads affected whether fires reached 10 ha but not larger scales. Weather was the most important predictor overall, though fuel load, topography and ease of suppression all showed effects. Overall, there was no evidence that fires have scale-dependent transitions in behaviour. The models had predictive accuracies of 73%, 66%, 72% and 53% at the 10 ha, 100 ha, 1000 ha and final-size scales. When these steps were combined, the overall accuracy for predicting the size of fires was 62%, while the accuracy of the one-step model was only 20%. Thus, the multi-scale approach was an improvement on the single-scale approach, even though the predictive accuracy was probably insufficient for use as an operational tool. The analysis has also provided further evidence of the important role of weather, compared to fuel, suppression and topography, in driving fire behaviour. Copyright © 2016. Published by Elsevier Ltd.
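A hedged sketch of the multi-step idea follows: separate logistic models for whether a fire reaches 10 ha, 100 ha and 1000 ha, each fitted only on fires that passed the previous threshold. The data, predictors and effect sizes below are synthetic placeholders, not the New South Wales fire records.

```python
# Staged logistic models on synthetic fire data (weather-dominated by construction).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 1116
X = np.column_stack([
    rng.normal(size=n),        # weather severity index
    rng.normal(size=n),        # fuel load
    rng.normal(size=n),        # terrain ruggedness
    rng.integers(0, 2, n),     # road present near ignition (0/1)
])
log_size = 1.5 + 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=n)
size = np.exp(log_size)        # synthetic final fire sizes (ha)

mask = np.ones(n, dtype=bool)
for threshold in (10, 100, 1000):
    y = (size[mask] >= threshold).astype(int)
    model = LogisticRegression().fit(X[mask], y)
    print(f">= {threshold:>4} ha: {y.mean():.2f} of eligible fires, "
          f"in-sample accuracy {model.score(X[mask], y):.2f}")
    mask &= size >= threshold  # next stage only considers fires that got this far
```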
NASA Astrophysics Data System (ADS)
Wang, Guanxi; Tie, Yun; Qi, Lin
2017-07-01
In this paper, we propose a novel approach that computes Multi-Scale Histograms of Oriented Gradient (MSHOG) from sequences of depth maps to recognize actions. Each depth frame in a depth video sequence is projected onto three orthogonal Cartesian planes. Under each projection view, the absolute difference between two consecutive projected maps is accumulated through the depth video sequence to form a Depth Motion Trail Image (DMTI). The MSHOG is then computed from these accumulated maps to represent an action. In addition, we apply L2-Regularized Collaborative Representation (L2-CRC) to classify actions. We evaluate the proposed approach on the MSR Action3D and MSRGesture3D datasets. Promising experimental results demonstrate the effectiveness of the proposed method.
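The snippet below sketches only the accumulation step: each depth frame is projected onto three orthogonal views and absolute frame-to-frame differences are summed over the sequence. The projections and the random "depth video" are simplified placeholders rather than the paper's exact DMTI construction, and the MSHOG/L2-CRC stages are not included.

```python
# Accumulated depth-motion maps from three orthogonal projections of a depth video.
import numpy as np

rng = np.random.default_rng(4)
depth_video = rng.random((30, 240, 320)) * 4.0          # (frames, rows, cols), metres

def project(depth_frame, n_bins=64, d_max=4.0):
    """Front (x-y), side (y-z) and top (x-z) views of one depth frame."""
    rows, cols = depth_frame.shape
    z = np.clip((depth_frame / d_max * (n_bins - 1)).astype(int), 0, n_bins - 1)
    front = depth_frame
    side = np.zeros((rows, n_bins))
    top = np.zeros((cols, n_bins))
    r_idx, c_idx = np.indices(depth_frame.shape)
    side[r_idx.ravel(), z.ravel()] = 1.0                # occupancy per (row, depth bin)
    top[c_idx.ravel(), z.ravel()] = 1.0                 # occupancy per (col, depth bin)
    return front, side, top

def depth_motion_maps(video):
    maps, prev = None, project(video[0])
    for frame in video[1:]:
        cur = project(frame)
        diffs = [np.abs(c - p) for c, p in zip(cur, prev)]
        maps = diffs if maps is None else [m + d for m, d in zip(maps, diffs)]
        prev = cur
    return maps                                         # one accumulated map per view

front_map, side_map, top_map = depth_motion_maps(depth_video)
print("accumulated map shapes:", front_map.shape, side_map.shape, top_map.shape)
```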
Anti-arrhythmic strategies for atrial fibrillation
Grandi, Eleonora; Maleckar, Mary M.
2016-01-01
Atrial fibrillation (AF), the most common cardiac arrhythmia, is associated with increased risk of cerebrovascular stroke, and with several other pathologies, including heart failure. Current therapies for AF are targeted at reducing risk of stroke (anticoagulation) and tachycardia-induced cardiomyopathy (rate or rhythm control). Rate control, typically achieved by atrioventricular nodal blocking drugs, is often insufficient to alleviate symptoms. Rhythm control approaches include antiarrhythmic drugs, electrical cardioversion, and ablation strategies. Here, we offer several examples of how computational modeling can provide a quantitative framework for integrating multi-scale data to: (a) gain insight into multi-scale mechanisms of AF; (b) identify and test pharmacological and electrical therapies and interventions; and (c) support clinical decisions. We review how modeling approaches have evolved and contributed to the research pipeline and preclinical development and discuss future directions and challenges in the field. PMID:27612549
Integrating Climate Projections into Multi-Level City Planning: A Texas Case Study
NASA Astrophysics Data System (ADS)
Hayhoe, K.; Gelca, R.; Baumer, Z.; Gold, G.
2016-12-01
Climate change impacts on energy and water are a serious concern for many cities across the United States. Regional projections from the National Assessment process, or state-specific efforts as in California and Delaware, are typically used to quantify impacts at the regional scale. However, these are often insufficient to provide information at the scale of decision-making for an individual city. Here, we describe a multi-level approach to developing and integrating usable climate information into planning, using a case study from the City of Austin in Texas, a state where few official climate resources are available. Spearheaded by the Office of Sustainability in collaboration with Austin Water, the first step was to characterize observed trends and future projections of how global climate change might affect Austin's current climate. The City then assembled a team of city experts, consulting engineers, and climate scientists to develop a methodology to assess impacts on regional hydrology as part of its Integrated Water Resource Plan, Austin's 100-year water supply and demand planning effort, an effort which included calculating a range of climate indicators and developing and evaluating a new approach to generating climate inputs - including daily streamflow and evaporation - for existing water availability models. This approach, which brings together a range of public, private, and academic experts to support a stakeholder-initiated planning effort, provides concrete insights into the critical importance of multi-level, long-term engagement for development and application of actionable climate science at the local to regional scale.
NASA Astrophysics Data System (ADS)
Kamiran, N.; Sarker, M. L. R.
2014-02-01
Land use/land cover transformation in Malaysia is enormous due to oil palm plantations, which have provided huge economic benefits but also raised serious concerns about carbon emissions and biodiversity. Accurate information about oil palm plantations and their age is important for sustainable production, estimation of carbon storage capacity, biodiversity and climate modelling. The problem is that this information cannot be extracted easily because the spectral signatures of forest and of different oil palm age groups are similar. Therefore, a novel "multi-scale and multi-texture" approach was used for mapping vegetation and different age groups of oil palm plantation from a high-resolution panchromatic image (WorldView-1), given that panchromatic imagery has the potential for more detailed and accurate mapping when combined with effective image processing. Seven second-order Grey Level Co-occurrence Matrix (GLCM) texture algorithms with different window scales (from 3×3 to 39×39) were used for texture generation. All texture parameters were classified step by step using a robust Artificial Neural Network (ANN) classifier. Results indicate that the single spectral band alone was unable to provide a good result (overall accuracy = 34.92%), while higher overall classification accuracies (73.48%, 84.76% and 93.18%) were obtained when textural information from the multi-scale and multi-texture approach was used in the classification algorithm.
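A hedged sketch of the multi-scale, multi-texture idea follows: second-order GLCM statistics computed over windows of increasing size around one pixel. It assumes scikit-image >= 0.19 (graycomatrix/graycoprops); the toy image, grey-level count and the particular texture properties are assumptions, not the paper's exact set of seven measures.

```python
# GLCM texture features at several window scales around one pixel.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(5)
pan = (rng.random((256, 256)) * 63).astype(np.uint8)    # toy panchromatic image, 64 levels

props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation", "ASM"]
row, col = 128, 128
features = []
for w in (3, 7, 15, 31, 39):                            # window sizes ("scales")
    half = w // 2
    window = pan[row - half:row + half + 1, col - half:col + half + 1]
    glcm = graycomatrix(window, distances=[1], angles=[0, np.pi / 2],
                        levels=64, symmetric=True, normed=True)
    features.extend(float(graycoprops(glcm, p).mean()) for p in props)

print(f"{len(features)} texture features for pixel ({row}, {col})")
# Per-pixel feature vectors like this would then feed an ANN classifier
# (e.g. sklearn.neural_network.MLPClassifier) trained on labelled samples.
```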
Multi-Scale Distributed Representation for Deep Learning and its Application to b-Jet Tagging
NASA Astrophysics Data System (ADS)
Lee, Jason Sang Hun; Park, Inkyu; Park, Sangnam
2018-06-01
Recently, machine learning algorithms based on deep layered artificial neural networks (DNNs) have been applied to a wide variety of high energy physics problems such as jet tagging or event classification. We explore a simple but effective preprocessing step which transforms each real-valued observational quantity or input feature into a binary number with a fixed number of digits. Each binary digit represents the quantity or magnitude at a different scale. We show that this approach significantly improves the performance of DNNs for some specific tasks without any further complication in feature engineering. We apply this multi-scale distributed binary representation to deep learning on b-jet tagging using daughter particles' momenta and vertex information.
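A minimal sketch of the preprocessing step is given below: each real-valued feature is quantized and expanded into a fixed number of binary digits, each digit encoding the magnitude at a different scale. The value ranges and digit count are illustrative choices, not the paper's configuration.

```python
# Multi-scale distributed binary representation of real-valued features.
import numpy as np

def to_binary_digits(x, lo, hi, n_bits=8):
    """Quantize x in [lo, hi] to n_bits and return the digits as 0/1 floats."""
    x = np.asarray(x, dtype=float)
    q = (np.clip((x - lo) / (hi - lo), 0.0, 1.0) * (2**n_bits - 1)).astype(np.uint32)
    digits = ((q[..., None] >> np.arange(n_bits - 1, -1, -1)) & 1).astype(np.float32)
    return digits                        # shape (..., n_bits), most significant digit first

pt = np.array([13.7, 48.2, 250.0])       # e.g. daughter-particle momenta in GeV
binary_inputs = to_binary_digits(pt, lo=0.0, hi=500.0).reshape(-1)
print(binary_inputs)                     # 3 features -> 24 binary network inputs
```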
An uneven-aged management strategy: lessons learned
Mark T. Smith; John D. Exline
2002-01-01
Use of an ecosystem approach at a landscape scale to program and guide accomplishments of multi-resource and social objectives has been discussed between researchers and natural resource managers for many years. Presently, great interest exists in the applicability of uneven-aged management practices for such an approach in conifer forests of the Sierra Nevada of...
We propose multi-faceted research to enhance our understanding of NH3 emissions from livestock feeding operations. A process-based emissions modeling approach will be used, and we will investigate ammonia emissions from the scale of the individual farm out to impacts on region...
USDA-ARS?s Scientific Manuscript database
A time-scale-free approach was developed for estimation of water fluxes at boundaries of monitoring soil profile using water content time series. The approach uses the soil water budget to compute soil water budget components, i.e. surface-water excess (Sw), infiltration less evapotranspiration (I-E...
NASA Astrophysics Data System (ADS)
El-Etriby, Ahmed E.; Abdel-Meguid, Mohamed E.; Hatem, Tarek M.; Bahei-El-Din, Yehia A.
2014-03-01
Ambient vibrations are a major source of wasted energy; properly exploited, such vibrations can be converted into useful energy and harvested to power devices such as electronics. Accordingly, energy harvesting using smart structures with active piezoelectric ceramics has gained wide interest over the past few years as a method for converting such wasted energy. This paper provides numerical and experimental analyses of piezoelectric fiber-based composites for energy harvesting applications, proposing a multi-scale modeling approach coupled with experimental verification. The suggested multi-scale approach for predicting the behavior of piezoelectric fiber-based composites uses a micromechanical model based on Transformation Field Analysis (TFA) to calculate the overall material properties of the electrically active composite structure. Capitalizing on the calculated properties, a single-phase analysis of a homogeneous structure is conducted using the finite element method. The experimental work involves running dynamic tests on piezoelectric fiber-based composites to simulate the mechanical vibrations experienced by subway train floor tiles. The experimental results agree well with the numerical results for both static and dynamic tests.
NASA Astrophysics Data System (ADS)
Newig, Jens; Schulz, Daniel; Jager, Nicolas W.
2016-12-01
This article attempts to shed new light on prevailing puzzles of spatial scales in multi-level, participatory governance as regards the democratic legitimacy and environmental effectiveness of governance systems. We focus on the governance re-scaling by the European Water Framework Directive, which introduced new governance scales (mandated river basin management) and demands consultation of citizens and encourages 'active involvement' of stakeholders. This allows us to examine whether and how re-scaling through deliberate governance interventions impacts on democratic legitimacy and effective environmental policy delivery. To guide the enquiry, this article organizes existing, partly contradictory claims on the relation of scale, democratic legitimacy, and environmental effectiveness into three clusters of mechanisms, integrating insights from multi-level governance, social-ecological systems, and public participation. We empirically examine Water Framework Directive implementation in a comparative case study of multi-level systems in the light of the suggested mechanisms. We compare two planning areas in Germany: North Rhine Westphalia and Lower Saxony. Findings suggest that the Water Framework Directive did have some impact on institutionalizing hydrological scales and participation. Local participation appears generally both more effective and legitimate than on higher levels, pointing to the need for yet more tailored multi-level governance approaches, depending on whether environmental knowledge or advocacy is sought. We find mixed results regarding the potential of participation to bridge spatial 'misfits' between ecological and administrative scales of governance, depending on the historical institutionalization of governance on ecological scales. Polycentricity, finally, appeared somewhat favorable in effectiveness terms with some distinct differences regarding polycentricity in planning vs. polycentricity in implementation.
Coupling lattice Boltzmann and continuum equations for flow and reactive transport in porous media.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coon, Ethan; Porter, Mark L.; Kang, Qinjun
2012-06-18
In spatially and temporally localized instances, capturing sub-reservoir-scale information is necessary; capturing it everywhere is neither necessary nor computationally possible. We use the lattice Boltzmann method (LBM) for solving pore-scale systems: at the pore scale, LBM provides an extremely scalable, efficient way of solving the Navier-Stokes equations on complex geometries. Pore-scale and continuum-scale systems are coupled via domain decomposition. By leveraging the interpolations implied by the pore-scale and continuum-scale discretizations, overlapping Schwarz domain decomposition is used to ensure continuity of pressure and flux. This approach is demonstrated on a fractured medium, in which the Navier-Stokes equations are solved within the fracture while Darcy's equation is solved away from the fracture. Coupling reactive transport to pore-scale flow simulators allows such hybrid approaches to be extended to solve multi-scale reactive transport.
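To illustrate the coupling mechanism alone (not the LBM/Darcy hybrid itself), here is a minimal 1D overlapping Schwarz sketch: two subdomain solvers exchange boundary values on an overlap region until the pressure field is continuous. Both subdomains use the same Poisson/Darcy-type operator here; in the hybrid scheme one region would be a pore-scale solver.

```python
# Alternating overlapping Schwarz iteration on a 1D pressure problem.
import numpy as np

n = 101                                    # global grid points on [0, 1]
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.ones(n)                             # source term; p(0)=1, p(1)=0 Dirichlet BCs

def solve_poisson(n_pts, left_bc, right_bc, src):
    """Direct solve of -p'' = src on a uniform grid with Dirichlet end values."""
    A = (np.diag(2.0 * np.ones(n_pts)) - np.diag(np.ones(n_pts - 1), 1)
         - np.diag(np.ones(n_pts - 1), -1)) / h**2
    b = src.copy()
    A[0, :], A[-1, :] = 0.0, 0.0
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = left_bc, right_bc
    return np.linalg.solve(A, b)

i1, i2 = 60, 40                            # subdomains 0..60 and 40..100 (overlap 40..60)
p = np.linspace(1.0, 0.0, n)               # initial guess
for _ in range(30):
    p[: i1 + 1] = solve_poisson(i1 + 1, 1.0, p[i1], f[: i1 + 1])   # left subdomain
    p[i2:] = solve_poisson(n - i2, p[i2], 0.0, f[i2:])             # right subdomain

exact = 1.0 - 0.5 * x - 0.5 * x**2         # analytic solution of -p'' = 1 with these BCs
print("max error after Schwarz iterations:", float(np.abs(p - exact).max()))
```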
Linking the Grain Scale to Experimental Measurements and Other Scales
NASA Astrophysics Data System (ADS)
Vogler, Tracy
2017-06-01
A number of physical processes occur at the scale of grains that can have a profound influence on the behavior of materials under shock loading. Examples include inelastic deformation, pore collapse, fracture, friction, and internal wave reflections. In some cases, such as the initiation of energetics and brittle fracture, these processes can have first-order effects on the behavior of materials: the emergent behavior from the grain scale is the dominant one. In other cases, many aspects of the bulk behavior can be described by a continuum description, but some details of the behavior are missed by continuum descriptions. The multi-scale model paradigm envisions flow of information from smaller scales (atomic, dislocation, etc.) to the grain or mesoscale and then up to the continuum scale. A significant challenge in this approach is the need to validate each step. For the grain scale, diagnosing behavior is challenging because of the small spatial and temporal scales involved. Spatially resolved diagnostics have begun to shed light on these processes, and, more recently, advanced light sources have started to be used to probe behavior at the grain scale. In this talk, I will discuss some interesting phenomena that occur at the grain scale in shock loading, experimental approaches to probe the grain scale, and efforts to link the grain scale to smaller and larger scales. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE.
A multi-disciplinary approach for the integrated assessment of multiple risks in delta areas.
NASA Astrophysics Data System (ADS)
Sperotto, Anna; Torresan, Silvia; Critto, Andrea; Marcomini, Antonio
2016-04-01
The assessment of climate change-related risks is notoriously difficult due to the complex and uncertain combinations of hazardous events that might happen, the multiplicity of physical processes involved, and the continuous changes and interactions of environmental and socio-economic systems. One important challenge lies in predicting and modelling cascades of natural and man-made hazard events which can be triggered by climate change, encompassing different spatial and temporal scales. Another regards the potentially difficult integration of environmental, social and economic disciplines in the multi-risk concept. Finally, the effective interaction between scientists and stakeholders is essential to ensure that multi-risk knowledge is translated into efficient adaptation and management strategies. The assessment is even more complex at the scale of deltaic systems, which are particularly vulnerable to global environmental changes due to the fragile equilibrium between the presence of valuable natural ecosystems and relevant economic activities. Improving our capacity to assess the combined effects of multiple hazards (e.g. sea-level rise, storm surges, reduction in sediment load, local subsidence, saltwater intrusion) is therefore essential to identify timely opportunities for adaptation. A holistic multi-risk approach is here proposed to integrate terminology, metrics and methodologies from different research fields (i.e. environmental, social and economic sciences), thus creating shared knowledge areas to advance multi-risk assessment and management in delta regions. A first test of the approach, including the application of Bayesian network analysis for the assessment of impacts of climate change on key natural systems (e.g. wetlands, protected areas, beaches) and socio-economic activities (e.g. agriculture, tourism), is carried out in the Po river delta in northern Italy. The approach is based on a bottom-up process involving local stakeholders early in the different stages of the multi-risk assessment process (i.e. identification of objectives, collection of data, definition of risk thresholds and indicators). The results of the assessment will allow the development of multi-risk scenarios enabling the evaluation and prioritization of risk management and adaptation options under changing climate conditions.
ERIC Educational Resources Information Center
Rutherford, Teomara; Kibrick, Melissa; Burchinal, Margaret; Richland, Lindsey; Conley, AnneMarie; Osborne, Keara; Schneider, Stephanie; Duran, Lauren; Coulson, Andrew; Antenore, Fran; Daniels, Abby; Martinez, Michael E.
2010-01-01
This paper describes the background, methodology, preliminary findings, and anticipated future directions of a large-scale multi-year randomized field experiment addressing the efficacy of ST Math [Spatial-Temporal Math], a fully-developed math curriculum that uses interactive animated software. ST Math's unique approach minimizes the use of…
A multi-scalar approach to theorizing socio-ecological dynamics of urban residential landscapes
Rinku Roy Chowdhury; Kelli Larson; Morgan Grove; Colin Polsky; Elizabeth Cook; Jeffrey Onsted; Laura Ogden
2011-01-01
Urban residential expansion increasingly drives land use, land cover and ecological changes worldwide, yet social science theories explaining such change remain under-developed. Existing theories often focus on processes occurring at one scale, while ignoring other scales. Emerging evidence from four linked U.S. research sites suggests it is essential to examine...
A multi-scale method of mapping urban influence
Timothy G. Wade; James D. Wickham; Nicola Zacarelli; Kurt H. Riitters
2009-01-01
Urban development can impact environmental quality and ecosystem services well beyond urban extent. Many methods to map urban areas have been developed and used in the past, but most have simply tried to map existing extent of urban development, and all have been single-scale techniques. The method presented here uses a clustering approach to look beyond the extant...
Geomorphic analysis of large alluvial rivers
NASA Astrophysics Data System (ADS)
Thorne, Colin R.
2002-05-01
Geomorphic analysis of a large river presents particular challenges and requires a systematic and organised approach because of the spatial scale and system complexity involved. This paper presents a framework and blueprint for geomorphic studies of large rivers developed in the course of basic, strategic and project-related investigations of a number of large rivers. The framework demonstrates the need to begin geomorphic studies early in the pre-feasibility stage of a river project and carry them through to implementation and post-project appraisal. The blueprint breaks down the multi-layered and multi-scaled complexity of a comprehensive geomorphic study into a number of well-defined and semi-independent topics, each of which can be performed separately to produce a clearly defined, deliverable product. Geomorphology increasingly plays a central role in multi-disciplinary river research and the importance of effective quality assurance makes it essential that audit trails and quality checks are hard-wired into study design. The structured approach presented here provides output products and production trails that can be rigorously audited, ensuring that the results of a geomorphic study can stand up to the closest scrutiny.
Large area sub-micron chemical imaging of magnesium in sea urchin teeth.
Masic, Admir; Weaver, James C
2015-03-01
The heterogeneous and site-specific incorporation of inorganic ions can profoundly influence the local mechanical properties of damage tolerant biological composites. Using the sea urchin tooth as a research model, we describe a multi-technique approach to spatially map the distribution of magnesium in this complex multiphase system. Through the combined use of 16-bit backscattered scanning electron microscopy, multi-channel energy dispersive spectroscopy elemental mapping, and diffraction-limited confocal Raman spectroscopy, we demonstrate a new set of high throughput, multi-spectral, high resolution methods for the large scale characterization of mineralized biological materials. In addition, instrument hardware and data collection protocols can be modified such that several of these measurements can be performed on irregularly shaped samples with complex surface geometries and without the need for extensive sample preparation. Using these approaches, in conjunction with whole animal micro-computed tomography studies, we have been able to spatially resolve micron and sub-micron structural features across macroscopic length scales on entire urchin tooth cross-sections and correlate these complex morphological features with local variability in elemental composition. Copyright © 2015 Elsevier Inc. All rights reserved.
Multidirectional Image Sensing for Microscopy Based on a Rotatable Robot.
Shen, Yajing; Wan, Wenfeng; Zhang, Lijun; Yong, Li; Lu, Haojian; Ding, Weili
2015-12-15
Image sensing at a small scale is essentially important in many fields, including microsample observation, defect inspection, material characterization and so on. However, nowadays, multi-directional micro object imaging is still very challenging due to the limited field of view (FOV) of microscopes. This paper reports a novel approach for multi-directional image sensing in microscopes by developing a rotatable robot. First, a robot with endless rotation ability is designed and integrated with the microscope. Then, the micro object is aligned to the rotation axis of the robot automatically based on the proposed forward-backward alignment strategy. After that, multi-directional images of the sample can be obtained by rotating the robot within one revolution under the microscope. To demonstrate the versatility of this approach, we view various types of micro samples from multiple directions in both optical microscopy and scanning electron microscopy, and panoramic images of the samples are processed as well. The proposed method paves a new way for the microscopy image sensing, and we believe it could have significant impact in many fields, especially for sample detection, manipulation and characterization at a small scale.
Rogers, Donna R B; Ei, Sue; Rogers, Kim R; Cross, Chad L
2007-05-01
This pilot study examines the use of guided visualizations that incorporate both cognitive and behavioral techniques with vibroacoustic therapy and cranial electrotherapy stimulation to form a multi-component therapeutic approach. This multi-component approach to cognitive-behavioral therapy (CBT) was used to treat patients presenting with a range of symptoms including anxiety, depression, and relationship difficulties. Clients completed a pre- and post-session symptom severity scale and CBT skills practice survey. The program consisted of 16 guided visualizations incorporating CBT techniques that were accompanied by vibroacoustic therapy and cranial electrotherapy stimulation. Significant reduction in symptom severity was observed in pre- and post-session scores for anxiety symptoms, relationship difficulties, and depressive symptoms. The majority of the clients (88%) reported use of CBT techniques learned in the guided visualizations at least once per week outside of the sessions.
Cullis, B R; Smith, A B; Beeck, C P; Cowling, W A
2010-11-01
Exploring and exploiting variety by environment (V × E) interaction is one of the major challenges facing plant breeders. In paper I of this series, we presented an approach to modelling V × E interaction in the analysis of complex multi-environment trials using factor analytic models. In this paper, we develop a range of statistical tools which explore V × E interaction in this context. These tools include graphical displays such as heat-maps of genetic correlation matrices as well as so-called E-scaled uniplots that are a more informative alternative to the classical biplot for large plant breeding multi-environment trials. We also present a new approach to prediction for multi-environment trials that include pedigree information. This approach allows meaningful selection indices to be formed either for potential new varieties or potential parents.
NASA Astrophysics Data System (ADS)
Casadei, F.; Ruzzene, M.
2011-04-01
This work illustrates the possibility of extending the field of application of the Multi-Scale Finite Element Method (MsFEM) to structural mechanics problems that involve localized geometrical discontinuities like cracks or notches. The main idea is to construct finite elements with an arbitrary number of edge nodes that describe the actual geometry of the damage, with shape functions that are defined as local solutions of the differential operator of the specific problem according to the MsFEM approach. The small-scale information is then brought to the large-scale model through the coupling of the global system matrices, which are assembled using classical finite element procedures. The efficiency of the method is demonstrated through selected numerical examples that constitute classical problems of great interest to the structural health monitoring community.
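A 1D illustration of the MsFEM idea is sketched below: on one coarse element, the multiscale shape function solves the local problem (a(x) phi')' = 0 with nodal values 0 and 1, giving phi(x) = int_{xL}^{x} dt/a(t) / int_{xL}^{xR} dt/a(t) instead of the standard linear hat. The oscillatory coefficient a(x) is a toy stand-in for fine-scale heterogeneity or locally degraded stiffness, not the paper's 2D crack problems.

```python
# Multiscale shape function on one 1D coarse element with an oscillatory coefficient.
import numpy as np

xL, xR = 0.0, 0.1                               # one coarse element
x = np.linspace(xL, xR, 401)                    # fine-scale resolution inside it
a = 1.0 + 0.9 * np.sin(2 * np.pi * x / 0.013)   # rapidly varying coefficient

inv_a = 1.0 / a
cum = np.concatenate(([0.0], np.cumsum(0.5 * (inv_a[1:] + inv_a[:-1]) * np.diff(x))))
phi_ms = cum / cum[-1]                          # multiscale shape function (0 -> 1)
phi_lin = (x - xL) / (xR - xL)                  # classical linear shape function

print("max deviation from the linear hat:", float(np.abs(phi_ms - phi_lin).max()))
```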
Intrinsic Multi-Scale Dynamic Behaviors of Complex Financial Systems
Ouyang, Fang-Yan; Zheng, Bo; Jiang, Xiong-Fei
2015-01-01
The empirical mode decomposition is applied to analyze the intrinsic multi-scale dynamic behaviors of complex financial systems. In this approach, the time series of the price returns of each stock is decomposed into a small number of intrinsic mode functions, which represent the price motion from high frequency to low frequency. These intrinsic mode functions are then grouped into three modes, i.e., the fast mode, medium mode and slow mode. The probability distribution of returns and auto-correlation of volatilities for the fast and medium modes exhibit behaviors similar to those of the full time series, i.e., these characteristics are rather robust across multiple time scales. However, the cross-correlation between individual stocks and the return-volatility correlation are time-scale dependent. The structure of business sectors is mainly governed by the fast mode when returns are sampled at intervals of a couple of days, and by the medium mode when returns are sampled at intervals of dozens of days. More importantly, the leverage and anti-leverage effects are dominated by the medium mode. PMID:26427063
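The snippet below illustrates the decomposition step, assuming the PyEMD package ("pip install EMD-signal") and its EMD().emd() interface. The grouping of IMFs into fast/medium/slow modes is a simple illustrative choice, not necessarily the paper's exact criterion, and the return series is synthetic.

```python
# Empirical mode decomposition of a synthetic return series into three mode groups.
import numpy as np
from PyEMD import EMD

rng = np.random.default_rng(6)
returns = rng.standard_t(df=4, size=2000) * 0.01        # synthetic daily returns

imfs = EMD().emd(returns)                               # shape: (n_imfs, len(returns))
n_imfs = imfs.shape[0]
fast = imfs[:2].sum(axis=0)                             # highest-frequency IMFs
medium = imfs[2:max(3, n_imfs - 2)].sum(axis=0)         # intermediate IMFs
slow = imfs[max(3, n_imfs - 2):].sum(axis=0)            # lowest-frequency IMFs + trend

for name, mode in (("fast", fast), ("medium", medium), ("slow", slow)):
    print(f"{name:>6} mode std: {mode.std():.4f}")
print("reconstruction error:", float(np.abs(imfs.sum(axis=0) - returns).max()))
```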
A multi-scale convolutional neural network for phenotyping high-content cellular images.
Godinez, William J; Hossain, Imtiaz; Lazic, Stanley E; Davies, John W; Zhang, Xian
2017-07-01
Identifying phenotypes based on high-content cellular images is challenging. Conventional image analysis pipelines for phenotype identification comprise multiple independent steps, with each step requiring method customization and adjustment of multiple parameters. Here, we present an approach based on a multi-scale convolutional neural network (M-CNN) that classifies, in a single cohesive step, cellular images into phenotypes by using directly and solely the images' pixel intensity values. The only parameters in the approach are the weights of the neural network, which are automatically optimized based on training images. The approach requires no a priori knowledge or manual customization, and is applicable to single- or multi-channel images displaying single or multiple cells. We evaluated the classification performance of the approach on eight diverse benchmark datasets. The approach yielded an overall higher classification accuracy compared with state-of-the-art results, including those of other deep CNN architectures. In addition to using the network to simply obtain a yes-or-no prediction for a given phenotype, we use the probability outputs calculated by the network to quantitatively describe the phenotypes. This study shows that these probability values correlate with chemical treatment concentrations. This finding further validates our approach and enables chemical treatment potency estimation via CNNs. The network specifications and solver definitions are provided in Supplementary Software 1. william_jose.godinez_navarro@novartis.com or xian-1.zhang@novartis.com. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
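A rough sketch of a multi-scale CNN of this flavour follows: parallel convolutional branches operate on progressively downsampled copies of the input and their pooled features are concatenated and classified. The layer sizes, scales and channel counts are illustrative and not the published M-CNN configuration.

```python
# Multi-scale CNN sketch with parallel branches over downsampled inputs (PyTorch).
import torch
import torch.nn as nn

class MultiScaleCNN(nn.Module):
    def __init__(self, in_channels=3, n_phenotypes=8, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),                 # global pooling -> (B, 32, 1, 1)
            )
            for _ in scales
        ])
        self.classifier = nn.Linear(32 * len(scales), n_phenotypes)

    def forward(self, x):
        feats = []
        for scale, branch in zip(self.scales, self.branches):
            xs = x if scale == 1 else nn.functional.avg_pool2d(x, scale)
            feats.append(branch(xs).flatten(1))          # (B, 32) per scale
        return self.classifier(torch.cat(feats, dim=1))

model = MultiScaleCNN()
logits = model(torch.randn(4, 3, 256, 256))              # a batch of 4 images
probs = torch.softmax(logits, dim=1)                     # per-phenotype probabilities
print(probs.shape)                                       # torch.Size([4, 8])
```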
Probabilistic flood damage modelling at the meso-scale
NASA Astrophysics Data System (ADS)
Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno
2014-05-01
Decisions on flood risk management and adaptation are usually based on risk analyses. Such analyses are associated with significant uncertainty, even more if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention during the last years, they are still not standard practice for flood risk assessments. Most damage models have in common that complex damaging processes are described by simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood damage models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we show how the model BT-FLEMO (Bagging decision Tree based Flood Loss Estimation MOdel) can be applied on the meso-scale, namely on the basis of ATKIS land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany. The application of BT-FLEMO provides a probability distribution of estimated damage to residential buildings per municipality. Validation is undertaken on the one hand via a comparison with eight other damage models including stage-damage functions as well as multi-variate models. On the other hand the results are compared with official damage data provided by the Saxon Relief Bank (SAB). The results show, that uncertainties of damage estimation remain high. Thus, the significant advantage of this probabilistic flood loss estimation model BT-FLEMO is that it inherently provides quantitative information about the uncertainty of the prediction. Reference: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64.
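A hedged sketch of the bagging-decision-tree idea behind BT-FLEMO follows: each bootstrapped tree yields one damage estimate, and the spread of per-tree predictions is read as a simple predictive distribution. The data and predictors are synthetic, and scikit-learn's BaggingRegressor (whose default base learner is a decision tree) stands in for the published model.

```python
# Per-tree damage predictions from a bagging ensemble as a crude predictive distribution.
import numpy as np
from sklearn.ensemble import BaggingRegressor

rng = np.random.default_rng(7)
n = 500
X = np.column_stack([
    rng.uniform(0.0, 3.0, n),      # water depth (m)
    rng.uniform(0.0, 120.0, n),    # inundation duration (h)
    rng.uniform(0.0, 1.0, n),      # precaution indicator
])
loss = np.clip(20000 * X[:, 0] + 50 * X[:, 1] - 10000 * X[:, 2]
               + rng.normal(0, 5000, n), 0, None)

model = BaggingRegressor(n_estimators=100, random_state=0).fit(X, loss)

unit = np.array([[1.5, 48.0, 0.2]])                       # one land-use unit
per_tree = np.array([t.predict(unit)[0] for t in model.estimators_])
print(f"median damage estimate: {np.median(per_tree):,.0f}")
print(f"10th-90th percentile range: {np.percentile(per_tree, 10):,.0f} "
      f"to {np.percentile(per_tree, 90):,.0f}")
```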
A statistical characterization of the Galileo-to-GPS inter-system bias
NASA Astrophysics Data System (ADS)
Gioia, Ciro; Borio, Daniele
2016-11-01
Global navigation satellite systems operate using independent time scales, and thus inter-system time offsets have to be determined to enable multi-constellation navigation solutions. The GPS/Galileo inter-system bias and drift are evaluated here using different types of receivers: two mass-market and two professional receivers. Moreover, three different approaches are considered for the inter-system bias determination: in the first one, the broadcast Galileo-to-GPS time offset is used to align the GPS and Galileo time scales. In the second, the inter-system bias is included in the multi-constellation navigation solution and is estimated using the available measurements. Finally, an enhanced algorithm using constraints on the inter-system bias time evolution is proposed. The inter-system bias estimates obtained with the different approaches are analysed and their stability is experimentally evaluated using the Allan deviation. The impact of the inter-system bias on the position velocity time solution is also considered, and the performance of the approaches analysed is evaluated in terms of standard deviation and mean errors for both horizontal and vertical components. From the experiments, it emerges that the inter-system bias is very stable and that the use of constraints modelling the GPS/Galileo inter-system bias behaviour significantly improves the performance of multi-constellation navigation.
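A minimal sketch of the second approach (treating the inter-system bias as an additional unknown in a single-epoch least-squares navigation solution) is given below; the satellite geometry, pseudoranges, and noise levels are entirely synthetic, and the simple Gauss-Newton setup is a simplification of a real PVT estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
c = 299792458.0
x_true = np.array([4027894.0, 307045.0, 4919905.0])   # receiver position, ECEF [m]
dt_gps, isb = 1e-4, 2e-8                               # receiver clock bias and GPS-Galileo ISB [s]

# Synthetic satellites on a GPS-like shell (first four GPS, last four Galileo).
dirs = rng.normal(size=(8, 3))
sats = 2.66e7 * dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
is_gal = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
rho = (np.linalg.norm(sats - x_true, axis=1)
       + c * (dt_gps + isb * is_gal) + rng.normal(0, 2.0, 8))   # pseudoranges [m]

# Iterative least squares for [x, y, z, c*dt_gps, c*isb].
state = np.zeros(5)
for _ in range(8):
    r = np.linalg.norm(sats - state[:3], axis=1)
    pred = r + state[3] + state[4] * is_gal
    H = np.column_stack([(state[:3] - sats) / r[:, None],   # line-of-sight partials
                         np.ones(8), is_gal])                # clock and ISB columns
    dx, *_ = np.linalg.lstsq(H, rho - pred, rcond=None)
    state += dx

print("position error [m]:", np.linalg.norm(state[:3] - x_true))
print("estimated ISB [ns]:", state[4] / c * 1e9)
```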
Collaborative Multi-Scale 3d City and Infrastructure Modeling and Simulation
NASA Astrophysics Data System (ADS)
Breunig, M.; Borrmann, A.; Rank, E.; Hinz, S.; Kolbe, T.; Schilcher, M.; Mundani, R.-P.; Jubierre, J. R.; Flurl, M.; Thomsen, A.; Donaubauer, A.; Ji, Y.; Urban, S.; Laun, S.; Vilgertshofer, S.; Willenborg, B.; Menninghaus, M.; Steuer, H.; Wursthorn, S.; Leitloff, J.; Al-Doori, M.; Mazroobsemnani, N.
2017-09-01
Computer-aided collaborative and multi-scale 3D planning are challenges for complex railway and subway track infrastructure projects in the built environment. Many legal, economic, environmental, and structural requirements have to be taken into account. The stringent use of 3D models in the different phases of the planning process facilitates communication and collaboration between stakeholders such as civil engineers, geological engineers, and decision makers. This paper presents concepts, developments, and experiences gained by an interdisciplinary research group from civil engineering informatics and geo-informatics, combining the skills of both the Building Information Modeling and the 3D GIS worlds. New approaches, including the development of a collaborative platform and 3D multi-scale modelling, are proposed for collaborative planning and simulation to improve the digital 3D planning of subway tracks and other infrastructure. Experiences during this research and lessons learned are presented, as well as an outlook on future research focusing on Building Information Modeling and 3D GIS applications for cities of the future.
Sartas, Murat; Schut, Marc; Hermans, Frans; Asten, Piet van; Leeuwis, Cees
2018-01-01
Multi-stakeholder platforms (MSPs) have been playing an increasing role in interventions aiming to generate and scale innovations in agricultural systems. However, the contribution of MSPs in achieving innovations and scaling has been varied, and many factors have been reported to be important for their performance. This paper aims to provide evidence on the contribution of MSPs to innovation and scaling by focusing on three developing country cases in Burundi, Democratic Republic of Congo, and Rwanda. Through social network analysis and logistic models, the paper studies the changes in the characteristics of multi-stakeholder innovation networks targeted by MSPs and identifies factors that play significant roles in triggering these changes. The results demonstrate that MSPs do not necessarily expand and decentralize innovation networks but can lead to contraction and centralization in the initial years of implementation. They show that some of the intended next users of interventions with MSPs-local-level actors-left the innovation networks, whereas the lead organization controlling resource allocation in the MSPs substantially increased its centrality. They also indicate that not all the factors of change in innovation networks are country specific. Initial conditions of innovation networks and funding provided by the MSPs are common factors explaining changes in innovation networks across countries and across different network functions. The study argues that investigating multi-stakeholder innovation network characteristics targeted by the MSP using a network approach in early implementation can contribute to better performance in generating and scaling innovations, and that funding can be an effective implementation tool in developing country contexts.
Multi-scale modeling of spin transport in organic semiconductors
NASA Astrophysics Data System (ADS)
Hemmatiyan, Shayan; Souza, Amaury; Kordt, Pascal; McNellis, Erik; Andrienko, Denis; Sinova, Jairo
In this work, we present our theoretical framework to simulate simultaneously spin and charge transport in amorphous organic semiconductors. By combining several techniques, e.g. molecular dynamics, density functional theory and kinetic Monte Carlo, we are able to study spin transport in the presence of anisotropy, thermal effects, and magnetic and electric field effects in realistic morphologies of amorphous organic systems. We apply our multi-scale approach to investigate spin transport in amorphous Alq3 (Tris(8-hydroxyquinolinato)aluminum) and address the underlying spin relaxation mechanism in this system as a function of temperature, bias voltage, magnetic field and sample thickness.
Chang, Hang; Han, Ju; Zhong, Cheng; Snijders, Antoine M.; Mao, Jian-Hua
2017-01-01
The capabilities of (I) learning transferable knowledge across domains; and (II) fine-tuning the pre-learned base knowledge towards tasks with considerably smaller data scale are extremely important. Many of the existing transfer learning techniques are supervised approaches, among which deep learning has the demonstrated power of learning domain-transferable knowledge with large-scale networks trained on massive amounts of labeled data. However, in many biomedical tasks, both the data and the corresponding labels can be very limited, where the unsupervised transfer learning capability is urgently needed. In this paper, we propose a novel multi-scale convolutional sparse coding (MSCSC) method that (I) automatically learns filter banks at different scales in a joint fashion with enforced scale-specificity of learned patterns; and (II) provides an unsupervised solution for learning transferable base knowledge and fine-tuning it towards target tasks. Extensive experimental evaluation demonstrates the effectiveness of the proposed MSCSC in both regular and transfer learning tasks in various biomedical domains. PMID:28129148
Forest height Mapping using the fusion of Lidar and MULTI-ANGLE spectral data
NASA Astrophysics Data System (ADS)
Pang, Y.; Li, Z.
2016-12-01
Characterizing the structure of forest ecosystems over large areas is highly complex. Light Detection and Ranging (LiDAR) approaches have demonstrated a high capacity to accurately estimate forest structural parameters. A number of satellite mission concepts have been proposed to fuse LiDAR with other optical imagery, allowing multi-angle spectral observations to be captured using the Bidirectional Reflectance Distribution Function (BRDF) characteristics of forests. China is developing the concept of a Chinese Terrestrial Carbon Mapping Satellite, with a multi-beam waveform LiDAR as the main sensor and a multi-angle imaging system as the spatial mapping sensor. In this study, we explore the fusion potential of LiDAR and multi-angle spectral data to estimate forest height across different scales. We flew intensive airborne LiDAR and multi-angle hyperspectral campaigns at the Genhe Forest Ecological Research Station, Northeast China, and then extended the spatial scale with several long transect flights to cover more forest structures. Forest height derived from the airborne LiDAR data was used as reference data, and the multi-angle hyperspectral data were used as model inputs. Our results demonstrate that the multi-angle spectral data can be used to estimate forest height with an RMSE of 1.1 m and an R2 of approximately 0.8.
Multi-Scale Approach for Predicting Fish Species Distributions across Coral Reef Seascapes
Pittman, Simon J.; Brown, Kerry A.
2011-01-01
Two of the major limitations to effective management of coral reef ecosystems are a lack of information on the spatial distribution of marine species and a paucity of data on the interacting environmental variables that drive distributional patterns. Advances in marine remote sensing, together with the novel integration of landscape ecology and advanced niche modelling techniques provide an unprecedented opportunity to reliably model and map marine species distributions across many kilometres of coral reef ecosystems. We developed a multi-scale approach using three-dimensional seafloor morphology and across-shelf location to predict spatial distributions for five common Caribbean fish species. Seascape topography was quantified from high resolution bathymetry at five spatial scales (5–300 m radii) surrounding fish survey sites. Model performance and map accuracy were assessed for two high performing machine-learning algorithms: Boosted Regression Trees (BRT) and Maximum Entropy Species Distribution Modelling (MaxEnt). The three most important predictors were geographical location across the shelf, followed by a measure of topographic complexity. Predictor contribution differed among species, yet rarely changed across spatial scales. BRT provided ‘outstanding’ model predictions (AUC > 0.9) for three of five fish species. MaxEnt provided ‘outstanding’ model predictions for two of five species, with the remaining three models considered ‘excellent’ (AUC = 0.8–0.9). In contrast, MaxEnt spatial predictions were markedly more accurate (92% map accuracy) than BRT (68% map accuracy). We demonstrate that reliable spatial predictions for a range of key fish species can be achieved by modelling the interaction between the geographical location across the shelf and the topographic heterogeneity of seafloor structure. This multi-scale, analytic approach is an important new cost-effective tool to accurately delineate essential fish habitat and support conservation prioritization in marine protected area design, zoning in marine spatial planning, and ecosystem-based fisheries management. PMID:21637787
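The modelling strategy — multi-scale terrain predictors feeding a boosted-tree classifier evaluated by AUC — can be illustrated with the sketch below. It uses scikit-learn's GradientBoostingClassifier as a stand-in for the BRT implementation used in the paper, and the predictors and presence/absence data are synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Placeholder predictors: cross-shelf position plus a topographic complexity
# measure computed at several radii (5, 25, 100, 300 m) around each survey site.
X = np.column_stack([rng.uniform(0, 1, n)] +
                    [rng.gamma(2.0, 1.0, n) for _ in (5, 25, 100, 300)])
p = 1 / (1 + np.exp(-(3 * X[:, 0] + 0.5 * X[:, 2] - 2.5)))   # synthetic habitat suitability
y = rng.binomial(1, p)                                        # presence/absence

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
brt = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05,
                                 max_depth=3).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, brt.predict_proba(X_te)[:, 1]))
print("relative influence:", brt.feature_importances_)
```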
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fix, N. J.
The purpose of the project is to conduct research at an Integrated Field-Scale Research Challenge Site in the Hanford Site 300 Area, CERCLA OU 300-FF-5 (Figure 1), to investigate multi-scale mass transfer processes associated with a subsurface uranium plume impacting both the vadose zone and groundwater. The project will investigate a series of science questions posed for research related to the effect of spatial heterogeneities, the importance of scale, coupled interactions between biogeochemical, hydrologic, and mass transfer processes, and measurements/approaches needed to characterize a mass-transfer dominated system. The research will be conducted by evaluating three (3) different hypotheses focused on multi-scale mass transfer processes in the vadose zone and groundwater, their influence on field-scale U(VI) biogeochemistry and transport, and their implications to natural systems and remediation. The project also includes goals to 1) provide relevant materials and field experimental opportunities for other ERSD researchers and 2) generate a lasting, accessible, and high-quality field experimental database that can be used by the scientific community for testing and validation of new conceptual and numerical models of subsurface reactive transport.
NASA Astrophysics Data System (ADS)
Huo, Chengyu; Huang, Xiaolin; Zhuang, Jianjun; Hou, Fengzhen; Ni, Huangjing; Ning, Xinbao
2013-09-01
The Poincaré plot is one of the most important approaches in human cardiac rhythm analysis. However, further investigations are still needed to concentrate on techniques that can characterize the dispersion of the points displayed by a Poincaré plot. Based on a modified Poincaré plot, we provide a novel measurement named distribution entropy (DE) and propose a quadrantal multi-scale distribution entropy analysis (QMDE) for the quantitative description of the scatter distribution patterns in various regions and temporal scales. We apply this method to heartbeat interval series derived from healthy subjects and congestive heart failure (CHF) sufferers, respectively, and find that the discriminations between them are most significant in the first quadrant, which implies significant impacts on vagal regulation brought about by CHF. We also investigate the day-night differences of young healthy people, and it is shown that the results present a clear circadian rhythm, especially in the first quadrant. In addition, the multi-scale analysis indicates that the results of healthy subjects and CHF sufferers fluctuate in different trends with variation of the scale factor. The same phenomenon also appears in circadian rhythm investigations of young healthy subjects, which implies that the cardiac dynamic system is affected differently at various temporal scales by physiological or pathological factors.
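The exact definition of DE is given in the paper; the sketch below only illustrates the general recipe under stated assumptions — coarse-grain the RR series at each temporal scale, build the Poincaré point cloud, split it into quadrants about the mean point, and compute a Shannon entropy of the scatter within each quadrant. The bin count, the entropy form, and the synthetic RR data are assumptions, not the published formulation.

```python
import numpy as np

def coarse_grain(rr, scale):
    """Non-overlapping averages of the RR series at the given temporal scale."""
    n = len(rr) // scale
    return rr[:n * scale].reshape(n, scale).mean(axis=1)

def quadrant_distribution_entropy(rr, scale, bins=16):
    x = coarse_grain(np.asarray(rr, float), scale)
    pts = np.column_stack([x[:-1], x[1:]])            # Poincaré points (RR_n, RR_{n+1})
    cx, cy = pts.mean(axis=0)                         # centre used to define quadrants
    masks = {1: (pts[:, 0] >= cx) & (pts[:, 1] >= cy),
             2: (pts[:, 0] <  cx) & (pts[:, 1] >= cy),
             3: (pts[:, 0] <  cx) & (pts[:, 1] <  cy),
             4: (pts[:, 0] >= cx) & (pts[:, 1] <  cy)}
    out = {}
    for q, m in masks.items():
        h, _, _ = np.histogram2d(pts[m, 0], pts[m, 1], bins=bins)
        p = h[h > 0] / h.sum() if h.sum() else np.array([1.0])
        out[q] = -(p * np.log(p)).sum()               # Shannon entropy of the scatter
    return out

rr = 0.8 + 0.05 * np.random.default_rng(0).standard_normal(5000)   # synthetic RR intervals [s]
for s in (1, 2, 5, 10):
    print(s, quadrant_distribution_entropy(rr, s))
```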
Discontinuities, cross-scale patterns, and the organization of ecosystems
Nash, Kirsty L.; Allen, Craig R.; Angeler, David G.; Barichievy, Chris; Eason, Tarsha; Garmestani, Ahjond S.; Graham, Nicholas A.J.; Granholm, Dean; Knutson, Melinda; Nelson, R. John; Nystrom, Magnus; Stow, Craig A.; Sandstrom, Shana M.
2014-01-01
Ecological structures and processes occur at specific spatiotemporal scales, and interactions that occur across multiple scales mediate scale-specific (e.g., individual, community, local, or regional) responses to disturbance. Despite the importance of scale, explicitly incorporating a multi-scale perspective into research and management actions remains a challenge. The discontinuity hypothesis provides a fertile avenue for addressing this problem by linking measurable proxies to inherent scales of structure within ecosystems. Here we outline the conceptual framework underlying discontinuities and review the evidence supporting the discontinuity hypothesis in ecological systems. Next, we explore the utility of this approach for understanding cross-scale patterns and the organization of ecosystems by describing recent advances for examining nonlinear responses to disturbance and phenomena such as extinctions, invasions, and resilience. To stimulate new research, we present methods for performing discontinuity analysis, detail outstanding knowledge gaps, and discuss potential approaches for addressing these gaps.
Multi-site precipitation downscaling using a stochastic weather generator
NASA Astrophysics Data System (ADS)
Chen, Jie; Chen, Hua; Guo, Shenglian
2018-03-01
Statistical downscaling is an efficient way to solve the spatiotemporal mismatch between climate model outputs and the data requirements of hydrological models. However, the most commonly used downscaling methods only produce climate change scenarios for a specific site or a watershed average, which is unable to drive distributed hydrological models to study the spatial variability of climate change impacts. By coupling a single-site downscaling method and a multi-site weather generator, this study proposes a multi-site downscaling approach for hydrological climate change impact studies. Multi-site downscaling is done in two stages. The first stage involves spatially downscaling climate model-simulated monthly precipitation from the grid scale to a specific site using a quantile mapping method, and the second stage involves temporally disaggregating monthly precipitation to daily values by adjusting the parameters of a multi-site weather generator. The inter-station correlation is specifically considered using a distribution-free approach along with an iterative algorithm. The performance of the downscaling approach is illustrated using a 10-station watershed as an example. The precipitation time series derived from the National Centers for Environmental Prediction (NCEP) reanalysis dataset is used as the climate model simulation. The precipitation time series of each station is divided into 30 odd-numbered years for calibration and 29 even-numbered years for validation. Several metrics, including the frequencies of wet and dry spells and statistics of the daily, monthly and annual precipitation, are used as criteria to evaluate the multi-site downscaling approach. The results show that the frequencies of wet and dry spells are well reproduced for all stations. In addition, the multi-site downscaling approach performs well with respect to reproducing precipitation statistics, especially at monthly and annual timescales. The remaining biases mainly result from the non-stationarity of NCEP precipitation. Overall, the proposed approach is efficient for generating multi-site climate change scenarios that can be used to investigate the spatial variability of climate change impacts on hydrology.
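The first (spatial downscaling) stage relies on quantile mapping between model and observed monthly precipitation; a minimal empirical version is sketched below. The synthetic series and the simple empirical-CDF matching are assumptions — the paper's exact implementation and the weather-generator stage are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
obs_monthly = rng.gamma(2.0, 40.0, 360)    # observed monthly precipitation at a site [mm]
gcm_monthly = rng.gamma(2.0, 55.0, 360)    # climate-model precipitation for the same grid cell

def quantile_map(values, model_ref, obs_ref):
    """Map model values onto the observed distribution by matching empirical quantiles."""
    q = np.linspace(0.0, 1.0, 101)
    model_q = np.quantile(model_ref, q)
    obs_q = np.quantile(obs_ref, q)
    # For each value: find its quantile in the model climate, read off the observed value.
    ranks = np.interp(values, model_q, q)
    return np.interp(ranks, q, obs_q)

future_gcm = rng.gamma(2.0, 60.0, 120)     # e.g. a future scenario from the model
downscaled = quantile_map(future_gcm, gcm_monthly, obs_monthly)
print(obs_monthly.mean(), future_gcm.mean(), downscaled.mean())
```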
Time-marching multi-grid seismic tomography
NASA Astrophysics Data System (ADS)
Tong, P.; Yang, D.; Liu, Q.
2016-12-01
From the classic ray-based traveltime tomography to the state-of-the-art full waveform inversion, because of the nonlinearity of seismic inverse problems, a good starting model is essential for preventing the convergence of the objective function toward local minima. With a focus on building high-accuracy starting models, we propose the so-called time-marching multi-grid seismic tomography method in this study. The new seismic tomography scheme consists of a temporal time-marching approach and a spatial multi-grid strategy. We first divide the recording period of seismic data into a series of time windows. Sequentially, the subsurface properties in each time window are iteratively updated starting from the final model of the previous time window. There are at least two advantages of the time-marching approach: (1) the information included in the seismic data of previous time windows has been explored to build the starting models of later time windows; (2) seismic data of later time windows could provide extra information to refine the subsurface images. Within each time window, we use a multi-grid method to decompose the scale of the inverse problem. Specifically, the unknowns of the inverse problem are sampled on a coarse mesh to capture the macro-scale structure of the subsurface at the beginning. Because of the low dimensionality, it is much easier to reach the global minimum on a coarse mesh. After that, finer meshes are introduced to recover the micro-scale properties. That is to say, the subsurface model is iteratively updated on multi-grid in every time window. We expect that high-accuracy starting models should be generated for the second and later time windows. We will test this time-marching multi-grid method by using our newly developed eikonal-based traveltime tomography software package tomoQuake. Real application results in the 2016 Kumamoto earthquake (Mw 7.0) region in Japan will be demonstrated.
NASA Astrophysics Data System (ADS)
Romine, William L.; Schaffer, Dane L.; Barrow, Lloyd
2015-11-01
We describe the development and validation of a three-tiered diagnostic test of the water cycle (DTWC) and use it to evaluate the impact of prior learning experiences on undergraduates' misconceptions. While most approaches to instrument validation take a positivist perspective using singular criteria such as reliability and fit with a measurement model, we extend this to a multi-tiered approach which supports multiple interpretations. Using a sample of 130 undergraduate students from two colleges, we utilize the Rasch model to place students and items along traditional one-, two-, and three-tiered scales as well as a misconceptions scale. In the three-tiered and misconceptions scales, high confidence was indicative of mastery. In the latter scale, a 'misconception' was defined as mastery of an incorrect concept. We found that integrating confidence into mastery did little to change item functioning; however, three-tiered usage resulted in higher reliability and lower student ability estimates than two-tiered usage. The misconceptions scale showed high efficacy in predicting items on which particular students were likely to express misconceptions, and revealed several tenacious misconceptions that all students were likely to express regardless of ability. Previous coursework on the water cycle did little to change the prevalence of undergraduates' misconceptions.
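For reference, the dichotomous Rasch model underlying this kind of scaling expresses the probability that person n answers item i correctly (or, on the misconceptions scale, endorses the target conception) in terms of a person ability θ_n and an item difficulty δ_i; this is the textbook form, stated here only as background:

```latex
P(X_{ni} = 1 \mid \theta_n, \delta_i) \;=\; \frac{\exp(\theta_n - \delta_i)}{1 + \exp(\theta_n - \delta_i)}
```

Persons and items are thereby placed on a common logit scale, which is what allows student abilities and misconception difficulties to be compared directly.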
Multi-scale diffuse interface modeling of multi-component two-phase flow with partial miscibility
NASA Astrophysics Data System (ADS)
Kou, Jisheng; Sun, Shuyu
2016-08-01
In this paper, we introduce a diffuse interface model to simulate multi-component two-phase flow with partial miscibility based on a realistic equation of state (e.g. the Peng-Robinson equation of state). Because of partial miscibility, thermodynamic relations are used to model not only interfacial properties but also bulk properties, including density, composition, pressure, and realistic viscosity. As far as we know, this is the first effort to use diffuse interface modeling based on an equation of state for multi-component two-phase flow with partial miscibility. In numerical simulation, the key issue is to resolve the high contrast of scales from the microscopic interface composition to the macroscale bulk fluid motion, since the interface has only a nanoscale thickness. To efficiently solve this challenging problem, we develop a multi-scale simulation method. At the microscopic scale, we deduce a reduced interfacial equation under reasonable assumptions, and then we propose a formulation of capillary pressure, which is consistent with macroscale flow equations. Moreover, we show that the Young-Laplace equation is an approximation of this capillarity formulation, and this formulation is also consistent with the concept of the Tolman length, which is a correction of the Young-Laplace equation. At the macroscopic scale, the interfaces are treated as discontinuous surfaces separating two phases of fluids. Our approach differs from conventional sharp-interface two-phase flow models in that we use the capillary pressure directly instead of a combination of surface tension and the Young-Laplace equation, because capillarity can be calculated from our proposed capillarity formulation. A compatible condition is also derived for the pressure in the flow equations. Furthermore, based on the proposed capillarity formulation, we design an efficient numerical method for directly computing the capillary pressure between two fluids composed of multiple components. Finally, numerical tests are carried out to verify the effectiveness of the proposed multi-scale method.
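For orientation, the classical Young-Laplace relation and the Tolman-length correction referred to above can be written as follows; the notation here is generic and does not reproduce the paper's own capillarity formulation, of which these are approximations:

```latex
p_c \;=\; p_{\mathrm{in}} - p_{\mathrm{out}} \;=\; \frac{2\,\sigma_\infty}{R},
\qquad
\sigma(R) \;\approx\; \sigma_\infty \left( 1 - \frac{2\,\delta}{R} + \cdots \right)
```

where σ∞ is the planar surface tension, R the radius of curvature, and δ the Tolman length.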
Gradient design for liquid chromatography using multi-scale optimization.
López-Ureña, S; Torres-Lapasió, J R; Donat, R; García-Alvarez-Coque, M C
2018-01-26
In reversed-phase liquid chromatography, the usual solution to the "general elution problem" is the application of gradient elution with programmed changes of organic solvent (or other properties). A correct quantification of chromatographic peaks in liquid chromatography requires well resolved signals in a proper analysis time. When the complexity of the sample is high, the gradient program should be accommodated to the local resolution needs of each analyte. This makes the optimization of such situations rather troublesome, since enhancing the resolution for a given analyte may imply a collateral worsening of the resolution of other analytes. The aim of this work is to design multi-linear gradients that maximize the resolution, while fulfilling some restrictions: all peaks should be eluted before a given maximal time, the gradient should be flat or increasing, and sudden changes close to eluting peaks are penalized. Consequently, an equilibrated baseline resolution for all compounds is sought. This goal is achieved by splitting the optimization problem in a multi-scale framework. In each scale κ, an optimization problem is solved with N_κ ≈ 2^κ variables that are used to build the gradients. The N_κ variables define cubic splines written in terms of a B-spline basis. This allows expressing gradients as polygonals of M points approximating the splines. The cubic splines are built using subdivision schemes, a technique for the fast generation of smooth curves, compatible with the multi-scale framework. Owing to the nature of the problem and the presence of multiple local maxima, the algorithm used in the optimization problem of each scale κ should be "global", such as the pattern-search algorithm. The multi-scale optimization approach is successfully applied to find the best multi-linear gradient for resolving a mixture of amino acid derivatives.
Improving the distinguishable cluster results: spin-component scaling
NASA Astrophysics Data System (ADS)
Kats, Daniel
2018-06-01
The spin-component scaling is employed in the energy evaluation to improve the distinguishable cluster approach. SCS-DCSD reaction energies reproduce reference values with a root-mean-squared deviation well below 1 kcal/mol, the interaction energies are three to five times more accurate than DCSD, and molecular systems with a large amount of static electron correlation are still described reasonably well. SCS-DCSD represents a pragmatic approach to achieve chemical accuracy with a simple method without triples, which can also be applied to multi-configurational molecular systems.
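In general terms, spin-component scaling separates the correlation energy into opposite-spin and same-spin parts and reweights them; the generic form is

```latex
E_{\mathrm{corr}}^{\mathrm{SCS}} \;=\; c_{\mathrm{os}}\,E_{\mathrm{os}} \;+\; c_{\mathrm{ss}}\,E_{\mathrm{ss}}
```

where E_os and E_ss are the opposite-spin and same-spin correlation energies. The scaling factors c_os and c_ss are method-specific; the values used for SCS-DCSD are defined in the paper and are not assumed here.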
NASA Astrophysics Data System (ADS)
Datta, Sujit Sankar
2015-11-01
Filtering water and brewing coffee are familiar examples of forcing a fluid through a porous material. Such flows are also crucial to many technological applications, including oil recovery, groundwater remediation, waste CO2 sequestration, and even transporting nutrients through mammalian tissues. I will present an experimental approach by which we directly visualize flow within a disordered 3D porous medium over a broad range of length scales, from the scale of individual pores to that of the entire medium. I will describe how we use this approach to learn about fluctuations and instabilities in single-phase and multi-phase flows.
NASA Astrophysics Data System (ADS)
Formosa, F.; Fréchette, L. G.
2015-12-01
An electrical circuit equivalent (ECE) approach has been set up allowing elementary oscillatory microengine components to be modelled. They cover gas channel/chamber thermodynamics, viscosity and thermal effects, mechanical structure and electromechanical transducers. The proposed tool has been validated on a centimetre-scale free-piston membrane Stirling engine [1]. We propose here new developments taking scaling effects into account to establish models suitable for any microengine. They are based on simplifications derived from comparing the hydraulic radius with the viscous and thermal penetration depths, respectively.
Module Degradation Mechanisms Studied by a Multi-Scale Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, Steve; Al-Jassim, Mowafak; Hacke, Peter
2016-11-21
A key pathway to meeting the Department of Energy SunShot 2020 goals is to reduce financing costs by improving investor confidence through improved photovoltaic (PV) module reliability. A comprehensive approach to further understand and improve PV reliability includes characterization techniques and modeling from module to atomic scale. Imaging techniques, which include photoluminescence, electroluminescence, and lock-in thermography, are used to locate localized defects responsible for module degradation. Small area samples containing such defects are prepared using coring techniques and are then suitable and available for microscopic study and specific defect modeling and analysis.
Michez, Adrien; Piégay, Hervé; Lisein, Jonathan; Claessens, Hugues; Lejeune, Philippe
2016-03-01
Riparian forests are critically endangered by many anthropogenic pressures and natural hazards. The importance of riparian zones has been acknowledged by European Directives, involving multi-scale monitoring. The use of very-high-resolution and hyperspatial imagery in a multi-temporal approach is an emerging topic. The trend is reinforced by the recent and rapid growth of the use of unmanned aerial systems (UAS), which has prompted the development of innovative methodology. Our study proposes a methodological framework to explore how a set of multi-temporal images acquired during a vegetative period can differentiate some of the deciduous riparian forest species and their health conditions. More specifically, the developed approach intends to identify, through a process of variable selection, which variables derived from UAS imagery and which scale of image analysis are the most relevant to our objectives. The methodological framework is applied to two study sites to describe the riparian forest through two fundamental characteristics: the species composition and the health condition. These characteristics were selected not only because of their use as proxies for the ecological integrity of the riparian zone but also because of their use for river management. The comparison of various scales of image analysis identified the smallest object-based image analysis (OBIA) objects (ca. 1 m(2)) as the most relevant scale. Variables derived from spectral information (band ratios) were identified as the most appropriate, followed by variables related to the vertical structure of the forest. Classification results show good overall accuracies for the species composition of the riparian forest (five classes, 79.5 and 84.1% for site 1 and site 2). The classification scenario regarding the health condition of the black alders of site 1 performed the best (90.6%). The quality of the classification models developed with a UAS-based, cost-effective, and semi-automatic approach competes successfully with those developed using more expensive imagery, such as multi-spectral and hyperspectral airborne imagery. The high overall accuracy obtained in the classification of the diseased alders opens the door to applications dedicated to monitoring the health condition of riparian forests. Our methodological framework will allow UAS users to manage the large metric datasets derived from such dense image time series.
NASA Astrophysics Data System (ADS)
Lim, D. S.; Brady, A. L.; Cardman, Z.; Cowie, B. R.; Forrest, A.; Marinova, M.; Shepard, R.; Laval, B.; Slater, G. F.; Gernhardt, M.; Andersen, D. T.; Hawes, I.; Sumner, D. Y.; Trembanis, A. C.; McKay, C. P.
2009-12-01
Microbialites can be metre-scale or larger discrete structures that cover kilometre-scale regions, for example in Pavilion Lake, British Columbia, Canada, while the organisms associated with their growth and development are much smaller (less than millimetre scale). As such, a multi-scaled approach to understanding their provenance, maintenance and morphological characteristics is required. Research members of the Pavilion Lake Research Project (PLRP) (www.pavilionlake.com) have been working to understand microbialite morphogenesis in Pavilion Lake, B.C., Canada and the potential for biosignature preservation in these carbonate rocks using a combination of field and lab based techniques. PLRP research participants have been: (1) exploring the physical and chemical limnological properties of the lake, especially as these characteristics pertain to microbialite formation, (2) using geochemical and molecular tools to test the hypothesized biological origin of the microbialites and the associated meso-scale processes, and (3) using geochemical and microscopic tools to characterize potential biosignature preservation in the microbialites on the micro scale. To address these goals, PLRP identified the need to (a) map Pavilion Lake to gain a contextual understanding of microbialite distribution and possible correlation between their lake-wide distribution and the ambient growth conditions, and (b) sample the microbialites, including those from the deepest regions of the lake (60 m). Initial assessments showed that PLRP science diving operations did not prove adequate for mapping and sample recovery in the large and deep (0.8 km x 5.7 km; 65 m max depth) lake. As such, the DeepWorker Science and Exploration (DSE) program was established by the PLRP. At the heart of this program are two DeepWorker (DW) submersibles, single-person vehicles that offer Scientist-Pilots (SP) an opportunity to study the lake in a 1 atm pressurized environment. In addition, the use of Autonomous Underwater Vehicles (AUVs) for landscape level geophysical mapping (side-scan and multibeam) provides an additional large-scale context of the microbialite associations. The multi-scaled approach undertaken by the PLRP team members has created an opportunity to weave together a comprehensive understanding of the modern microbialites in Pavilion Lake, and their relevance to interpreting ancient carbonate fabrics. An overview of the team’s findings to date and on-going research will be presented.
Land use/land cover mapping using multi-scale texture processing of high resolution data
NASA Astrophysics Data System (ADS)
Wong, S. N.; Sarker, M. L. R.
2014-02-01
Land use/land cover (LULC) maps are useful for many purposes, and for a long time remote sensing techniques have been used for LULC mapping using different types of data and image processing techniques. In this research, high resolution satellite data from IKONOS were used to perform land use/land cover mapping in Johor Bahru city and adjacent areas (Malaysia). Spatial image processing was carried out using six texture algorithms (mean, variance, contrast, homogeneity, entropy, and GLDV angular second moment) with five different window sizes (from 3×3 to 11×11). Three different classifiers, i.e. Maximum Likelihood Classifier (MLC), Artificial Neural Network (ANN) and Support Vector Machine (SVM), were used to classify the texture parameters of different spectral bands individually and all bands together, using the same training and validation samples. Results indicated that texture parameters of all bands together generally showed a better performance (overall accuracy = 90.10%) for LULC mapping; however, a single spectral band could only achieve an overall accuracy of 72.67%. This research also found an improvement of the overall accuracy (OA) using a single-texture multi-scale approach (OA = 89.10%) and a single-scale multi-texture approach (OA = 90.10%) compared with all original bands (OA = 84.02%), because of the complementary information from different bands and different texture algorithms. On the other hand, all three classifiers showed high accuracy when using the different texture approaches, but SVM generally showed higher accuracy (90.10%) compared to MLC (89.10%) and ANN (89.67%), especially for complex classes such as urban and road.
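GLCM-derived texture measures of the kind listed above can be computed with scikit-image (version ≥ 0.19 naming is assumed); the sketch below derives contrast, homogeneity, angular second moment, and an entropy from the normalized co-occurrence matrix of a single 8-bit window, with mean and variance taken directly from the window. Sweeping the window over the image and the classification step are omitted.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
window = rng.integers(0, 256, (11, 11), dtype=np.uint8)   # one 11x11 image window

# Symmetric, normalized co-occurrence matrix for a 1-pixel offset at 0 degrees.
glcm = graycomatrix(window, distances=[1], angles=[0],
                    levels=256, symmetric=True, normed=True)

features = {
    "mean": window.mean(),
    "variance": window.var(),
    "contrast": graycoprops(glcm, "contrast")[0, 0],
    "homogeneity": graycoprops(glcm, "homogeneity")[0, 0],
    "ASM": graycoprops(glcm, "ASM")[0, 0],
}
p = glcm[:, :, 0, 0]                                       # normalized co-occurrence probabilities
features["entropy"] = -np.sum(p[p > 0] * np.log(p[p > 0]))
print(features)
```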
On the calibration and use of Dual Electron Sensors for NASA's Magnetospheric MultiScale mission
NASA Astrophysics Data System (ADS)
Avanov, L. A.; Gliese, U.; Pollock, C. J.; Barrie, A.; Mariano, A. J.; Tucker, C. J.; Jacques, A. D.; Zeuch, M.; Shields, N.; Christian, K. D.
2013-12-01
The scientific target of NASA's Magnetospheric MultiScale (MMS) mission is to study the fundamentally important phenomenon of magnetic reconnection. Theoretical models of this process predict a small (order of ten kilometers) size for the diffusion region where electrons are demagnetized at the dayside magnetopause. Yet, the region may typically sweep over the spacecraft at relatively high speeds of 50 km/s. That is why the Fast Plasma Investigation (FPI) instrument suite must have extremely high time resolution for measurements of the 3D particle distribution functions. The Dual Electron Spectrometers (DESs) provide fast (30 ms) 3D electron velocity distributions, from 10 eV to 30,000 eV, as part of the FPI on the MMS mission. This is accomplished by combining the measurements from eight different spectrometers (packaged in four dual sets) on each MMS spacecraft to produce each full distribution. This approach presents a new and challenging aspect to the calibration and operation of these instruments. The response uniformity among the spectrometer set, the consistency and reliability of their calibration in both sensitivity and their phase space selectivity (energy and angle), and the approach to handling any temporal evolution of these calibrated characteristics all assume enhanced importance in this application. In this paper, we will present brief descriptions of the spectrometers and our approach to their ground calibration, trended results of those calibrations, and our plans to detect, track, and respond to any temporal evolution in instrument performance through the life of the mission.
Multi-scaling modelling in financial markets
NASA Astrophysics Data System (ADS)
Liu, Ruipeng; Aste, Tomaso; Di Matteo, T.
2007-12-01
In recent years, a new wave of interest has spurred the application of complexity science to finance, which might provide a guideline for understanding the mechanisms of financial markets, and researchers with different backgrounds have made increasing contributions, introducing new techniques and methodologies. In this paper, Markov-switching multifractal (MSM) models are briefly reviewed and the multi-scaling properties of different financial data are analyzed by computing the scaling exponents by means of the generalized Hurst exponent H(q). In particular, we have considered H(q) for price data, absolute returns and squared returns of different empirical financial time series. We have computed H(q) for data simulated from the MSM models with Binomial and Lognormal distributions of the volatility components. The results demonstrate the capacity of the multifractal (MF) models to capture the stylized facts in finance, and the ability of the generalized Hurst exponent approach to detect the scaling features of financial time series.
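The generalized Hurst exponent H(q) is estimated from the scaling of the q-th order moments of the increments, E[|X(t+τ) − X(t)|^q] ~ τ^{qH(q)}; a minimal estimator along these lines is sketched below on a synthetic random walk. The choice of time lags and the simple log-log regression are assumptions.

```python
import numpy as np

def generalized_hurst(x, q_values=(1, 2, 3), taus=range(1, 20)):
    """Estimate H(q) from the scaling of q-th order moments of the increments."""
    x = np.asarray(x, float)
    H = {}
    for q in q_values:
        logm = [np.log(np.mean(np.abs(x[tau:] - x[:-tau]) ** q)) for tau in taus]
        slope = np.polyfit(np.log(list(taus)), logm, 1)[0]   # slope = q * H(q)
        H[q] = slope / q
    return H

rng = np.random.default_rng(0)
prices = np.cumsum(rng.standard_normal(10000))   # synthetic log-price series
print(generalized_hurst(prices))                  # ~0.5 for an uncorrelated random walk
```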
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Xuebing; Chen, Ting; Qi, Xintong
In this study, we developed a new method for in-situ pressure determination in multi-anvil, high-pressure apparatus using an acoustic travel time approach within the framework of acoustoelasticity. The ultrasonic travel times of polycrystalline Al2O3 were calibrated against the NaCl pressure scale up to 15 GPa and 900 °C in a Kawai-type double-stage multi-anvil apparatus in conjunction with synchrotron X-radiation, thereby providing a convenient and reliable gauge for pressure determination at ambient and high temperatures. The pressures derived from this new travel time method are in excellent agreement with those from the fixed-point methods. Application of this new pressure gauge in an offline experiment revealed a remarkable agreement of the densities of coesite with those from previous single crystal compression studies under hydrostatic conditions, thus providing strong validation for the current travel time pressure scale. The travel time approach not only can be used for continuous in-situ pressure determination at room temperature, at high temperatures, and during compression and decompression, but also bears a unique capability that none of the previous scales can deliver, i.e., simultaneous pressure and temperature determination with high accuracy (±0.16 GPa in pressure and ±17 °C in temperature). Therefore, the new in-situ Al2O3 pressure gauge is expected to enable new and expanded opportunities for offline laboratory studies of solid and liquid materials under high pressure and high temperature in multi-anvil apparatus.
NASA Astrophysics Data System (ADS)
Noh, S. J.; Rakovec, O.; Kumar, R.; Samaniego, L. E.
2015-12-01
Accurate and reliable streamflow prediction is essential to mitigate the social and economic damage caused by water-related disasters such as floods and droughts. Sequential data assimilation (DA) may facilitate improved streamflow prediction by using real-time observations to correct internal model states. In conventional DA methods such as state updating, parametric uncertainty is often ignored, mainly due to practical limitations of methodology to specify modeling uncertainty with limited ensemble members. However, if parametric uncertainty related to routing and runoff components is not incorporated properly, the predictive uncertainty of the model ensemble may be insufficient to capture the dynamics of the observations, which may degrade predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using the same set of model parameters without losing much of the model performance. The MPR method incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm) can effectively represent and control the uncertainty of high-dimensional parameters in a distributed model using global parameters. In this study, we evaluate the impacts of streamflow data assimilation over European river basins. In particular, a multi-parametric ensemble approach is tested to consider the effects of parametric uncertainty in DA. Because augmentation of parameters is not required within an assimilation window, the approach can be more stable with limited ensemble members and has potential for operational use. To consider the response times and non-Gaussian characteristics of internal hydrologic processes, lagged particle filtering is utilized. The presentation will focus on the gains and limitations of streamflow data assimilation and the multi-parametric ensemble method over large-scale basins.
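A bare-bones bootstrap particle filter of the kind underlying such state updating is sketched below for a toy one-dimensional storage model; the model equations, noise levels, and Gaussian observation likelihood are assumptions, and the lagged filtering and mHM/MPR specifics of the actual study are not reproduced. Keeping one parameter value per particle gives a crude flavour of the multi-parametric ensemble idea.

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps = 500, 50

# Toy model: storage S evolves with rainfall forcing; observation y = k*S + noise.
k_true, obs_sigma = 0.2, 0.5
S_true, S = 10.0, rng.uniform(5.0, 15.0, n_particles)
k = rng.uniform(0.05, 0.5, n_particles)        # one parameter value carried per particle
weights = np.full(n_particles, 1.0 / n_particles)

for t in range(n_steps):
    rain = max(0.0, rng.normal(2.0, 1.0))
    S_true = S_true + rain - k_true * S_true
    y_obs = k_true * S_true + rng.normal(0.0, obs_sigma)

    S = S + rain - k * S + rng.normal(0.0, 0.2, n_particles)       # propagate particles
    lik = np.exp(-0.5 * ((y_obs - k * S) / obs_sigma) ** 2)        # Gaussian likelihood
    weights = weights * lik + 1e-300
    weights /= weights.sum()

    if 1.0 / np.sum(weights ** 2) < n_particles / 2:               # resample when degenerate
        idx = rng.choice(n_particles, n_particles, p=weights)
        S, k = S[idx], k[idx]
        weights = np.full(n_particles, 1.0 / n_particles)

print("true storage:", round(S_true, 2), " filtered mean:", round(float(np.sum(weights * S)), 2))
```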
2013-01-01
Based Micropolar Single Crystal Plasticity: Comparison of Multi- and Single Criterion Theories. J. Mech. Phys. Solids 2011, 59, 398–422. ALE3D ... element boundaries in a multi-step constitutive evaluation (Becker, 2011). The results showed the desired effects of smoothing the deformation field ... Implementation: The model was implemented in the large-scale parallel, explicit finite element code ALE3D (2012). The crystal plasticity
Biology meets physics: Reductionism and multi-scale modeling of morphogenesis.
Green, Sara; Batterman, Robert
2017-02-01
A common reductionist assumption is that macro-scale behaviors can be described "bottom-up" if only sufficient details about lower-scale processes are available. The view that an "ideal" or "fundamental" physics would be sufficient to explain all macro-scale phenomena has been met with criticism from philosophers of biology. Specifically, scholars have pointed to the impossibility of deducing biological explanations from physical ones, and to the irreducible nature of distinctively biological processes such as gene regulation and evolution. This paper takes a step back in asking whether bottom-up modeling is feasible even when modeling simple physical systems across scales. By comparing examples of multi-scale modeling in physics and biology, we argue that the "tyranny of scales" problem presents a challenge to reductive explanations in both physics and biology. The problem refers to the scale-dependency of physical and biological behaviors that forces researchers to combine different models relying on different scale-specific mathematical strategies and boundary conditions. Analyzing the ways in which different models are combined in multi-scale modeling also has implications for the relation between physics and biology. Contrary to the assumption that physical science approaches provide reductive explanations in biology, we exemplify how inputs from physics often reveal the importance of macro-scale models and explanations. We illustrate this through an examination of the role of biomechanical modeling in developmental biology. In such contexts, the relation between models at different scales and from different disciplines is neither reductive nor completely autonomous, but interdependent.
A multi-scale approach for near-surface pavement cracking and failure mechanisms
DOT National Transportation Integrated Search
2010-10-31
Near-surface cracking is one of the predominant distress types in flexible pavements. The occurrence of near-surface cracking, also sometimes referred to as top-down cracking, has increased in recent years with the increased construction of...
A holistic approach for large-scale derived flood frequency analysis
NASA Astrophysics Data System (ADS)
Dung Nguyen, Viet; Apel, Heiko; Hundecha, Yeshewatesfa; Guse, Björn; Sergiy, Vorogushyn; Merz, Bruno
2017-04-01
Spatial consistency, which has usually been disregarded because of reported methodological difficulties, is increasingly demanded in regional flood hazard (and risk) assessments. This study aims at developing a holistic approach for consistently deriving flood frequency at large scales. A large-scale two-component model has been established for simulating very long-term, multi-site synthetic meteorological fields and flood flows at many gauged and ungauged locations, hence reflecting the inherent spatial heterogeneity. The model has been applied to a region of nearly half a million km2 including Germany and parts of neighbouring countries. The model performance has been examined multi-objectively, with a focus on extremes. With this continuous simulation approach, flood quantiles for the studied region have been derived successfully and provide useful input for a comprehensive flood risk study.
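Once long synthetic discharge series are available, derived flood frequency reduces to estimating quantiles of the annual maxima; one common route, assumed here purely for illustration, is to fit a GEV distribution and read off return levels.

```python
import numpy as np
from scipy.stats import genextreme

# Stand-in for simulated annual maximum discharge at one location [m3/s].
annual_max = genextreme.rvs(c=-0.1, loc=300.0, scale=80.0, size=10000, random_state=0)

shape, loc, scale = genextreme.fit(annual_max)
for T in (10, 50, 100, 500):
    # Return level: the discharge exceeded with probability 1/T in any given year.
    q = genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
    print(f"{T:4d}-year flood: {q:7.1f} m3/s")
```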
Römer, Heinrich; Germain, Ryan R.
2013-01-01
Roads are a major cause of habitat fragmentation that can negatively affect many mammal populations. Mitigation measures such as crossing structures are a proposed method to reduce the negative effects of roads on wildlife, but the best methods for determining where such structures should be implemented, and how their effects might differ between species in mammal communities is largely unknown. We investigated the effects of a major highway through south-eastern British Columbia, Canada on several mammal species to determine how the highway may act as a barrier to animal movement, and how species may differ in their crossing-area preferences. We collected track data of eight mammal species across two winters, along both the highway and pre-marked transects, and used a multi-scale modeling approach to determine the scale at which habitat characteristics best predicted preferred crossing sites for each species. We found evidence for a severe barrier effect on all investigated species. Freely-available remotely-sensed habitat landscape data were better than more costly, manually-digitized microhabitat maps in supporting models that identified preferred crossing sites; however, models using both types of data were better yet. Further, in 6 of 8 cases models which incorporated multiple spatial scales were better at predicting preferred crossing sites than models utilizing any single scale. While each species differed in terms of the landscape variables associated with preferred/avoided crossing sites, we used a multi-model inference approach to identify locations along the highway where crossing structures may benefit all of the species considered. By specifically incorporating both highway and off-highway data and predictions we were able to show that landscape context plays an important role for maximizing mitigation measurement efficiency. Our results further highlight the need for mitigation measures along major highways to improve connectivity between mammal populations, and illustrate how multi-scale data can be used to identify preferred crossing sites for different species within a mammal community. PMID:24244912
NASA Astrophysics Data System (ADS)
Vallianatos, Filippos; Kouli, Maria; Kalisperi, Despina
2018-03-01
The essential goals of this paper are to test the transient electromagnetic (TEM) response in a fractured, complex geological medium and to better understand the physics introduced by associating a roughness parameter β with the geological formation. An anomalous fractional diffusion approach is incorporated to describe the electromagnetic induction in rough multi-scaled geological structures. The multi-scaling characteristics of the Geropotamos basin in Crete are revealed through the analysis of the transient step-off response of an EM loop antenna. The semi-empirical parameters derived from late-time TEM measurements are correlated with the multi-scale heterogeneities of the medium. Certain interesting properties of the late-time slope γ(β) and the power law of the near-surface resistivity distribution, as extracted from TEM inversion at different depths, are presented. The analysis of the parameter γ(β), which scales the induced voltage in the loop in the late stage of the electromagnetic response, leads to a different view of EM geophysical data interpretation. We show that it is strongly correlated with areas of high fracture density within the geological formations of the Geropotamos area. For that reason, it is proposed as a local multi-scaling empirical index. The results of this paper suggest that anomalous diffusion could be a viable physical mechanism for the fractal transport of charge carriers, explaining the observed late-time TEM responses across a variety of natural geological settings.
Multi-Scale Modeling and the Eddy-Diffusivity/Mass-Flux (EDMF) Parameterization
NASA Astrophysics Data System (ADS)
Teixeira, J.
2015-12-01
Turbulence and convection play a fundamental role in many key weather and climate science topics. Unfortunately, current atmospheric models cannot explicitly resolve most turbulent and convective flow. Because of this fact, turbulence and convection in the atmosphere have to be parameterized - i.e. equations describing the dynamical evolution of the statistical properties of turbulent and convective motions have to be devised. Recently, a variety of different models have been developed that attempt to simulate the atmosphere using variable resolution. A key problem, however, is that parameterizations are in general not explicitly aware of the resolution - the scale-awareness problem. In this context, we will present and discuss a specific approach, the Eddy-Diffusivity/Mass-Flux (EDMF) parameterization, that not only is in itself a multi-scale parameterization but is also particularly well suited to deal with the scale-awareness problems that plague current variable-resolution models. It does so by representing small-scale turbulence using a classic Eddy-Diffusivity (ED) method, and the larger-scale (boundary layer and tropospheric-scale) eddies as a variety of plumes using the Mass-Flux (MF) concept.
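In its standard form, the EDMF closure writes the turbulent flux of a conserved variable φ as the sum of a local eddy-diffusivity term and a non-local mass-flux term; this is the generic textbook expression, not a new result of the cited work:

```latex
\overline{w'\phi'} \;=\; -K\,\frac{\partial \bar{\phi}}{\partial z} \;+\; M\left(\phi_u - \bar{\phi}\right)
```

where K is the eddy diffusivity, M the convective mass flux, and φ_u the value of φ inside the updraft plumes.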
NASA Astrophysics Data System (ADS)
Veitinger, Jochen; Purves, Ross Stuart; Sovilla, Betty
2016-10-01
Avalanche hazard assessment requires a very precise estimation of the release area, which still depends, to a large extent, on the expert judgement of avalanche specialists. Therefore, a new algorithm for the automated identification of potential avalanche release areas was developed. It overcomes some of the limitations of previous tools, which are currently not often applied in hazard mitigation practice. By introducing a multi-scale roughness parameter, fine-scale topography and its attenuation under snow influence are captured. This allows the assessment of snow influence on terrain morphology and, consequently, on potential release area size and location. The integration of a wind shelter index enables the user to define release area scenarios as a function of the prevailing wind direction or single storm events. A case study illustrates the practical usefulness of this approach for the definition of release area scenarios under varying snow cover and wind conditions. A validation with historical data demonstrated an improved estimation of avalanche release areas. Our method outperforms a slope-based approach, in particular for more frequent avalanches; however, the application of the algorithm as a forecasting tool remains limited, as snowpack stability is not integrated. Future research should therefore focus on coupling the algorithm with snowpack conditions.
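One simple way to see how a multi-scale roughness measure responds to snow is sketched below: residual-elevation roughness is computed in windows of several sizes on a bare and a crudely "snow-smoothed" toy terrain. The window-based standard-deviation roughness, the synthetic terrain and the smoothing step are illustrative assumptions, not the roughness parameter defined in the paper.

```python
import numpy as np
from scipy import ndimage

def roughness(dem, window):
    """Standard deviation of elevation within a square moving window
    (a simple proxy for a terrain roughness parameter)."""
    mean = ndimage.uniform_filter(dem, size=window)
    mean_sq = ndimage.uniform_filter(dem ** 2, size=window)
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

rng = np.random.default_rng(0)
dem = np.cumsum(rng.normal(size=(200, 200)), axis=0)        # toy terrain (m)
snow = ndimage.gaussian_filter(rng.random((200, 200)), 8)   # hypothetical snow depth (m)

# Snow attenuates fine-scale topography: crudely modelled here by smoothing the DEM
dem_snow = ndimage.gaussian_filter(dem, sigma=1.5) + snow

for w in (3, 9, 27):                                        # window sizes in cells
    print(f"window {w:2d}: bare {roughness(dem, w).mean():.2f}   "
          f"snow-covered {roughness(dem_snow, w).mean():.2f}")
```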
A Multi-Scale Approach to Assess and Restore Ecosystems in a Watershed Context
2013-09-01
The US Army Engineer Research and Development Center (ERDC) describes a watershed assessment procedure that can be used to evaluate existing ecological conditions as well as proposed changes. The approach employs indicators …
Retinex Image Processing: Improved Fidelity To Direct Visual Observation
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.; Rahman, Zia-Ur; Woodell, Glenn A.
1996-01-01
Recorded color images differ from direct human viewing by the lack of dynamic range compression and color constancy. Research is summarized which develops the center/surround retinex concept originated by Edwin Land through a single scale design to a multi-scale design with color restoration (MSRCR). The MSRCR synthesizes dynamic range compression, color constancy, and color rendition and, thereby, approaches fidelity to direct observation.
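The single-scale center/surround retinex takes the difference between the log image and the log of a Gaussian-blurred surround; the multi-scale version sums this over several surround scales, and MSRCR multiplies the result by a color-restoration term. The Python sketch below illustrates that structure; the surround scales, gain constants and the random stand-in image are conventional or invented choices, not the exact MSRCR parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(channel, sigmas=(15, 80, 250), eps=1.0):
    """Center/surround retinex averaged over several surround scales."""
    channel = channel.astype(float)
    out = np.zeros_like(channel)
    for sigma in sigmas:
        surround = gaussian_filter(channel, sigma)
        out += (np.log(channel + eps) - np.log(surround + eps)) / len(sigmas)
    return out

def color_restoration(rgb, alpha=125.0, beta=46.0, eps=1.0):
    """Simple per-channel color-restoration factor (hedged form of the MSRCR idea)."""
    total = rgb.sum(axis=2, keepdims=True) + eps
    return beta * (np.log(alpha * rgb + eps) - np.log(total))

rgb = np.random.default_rng(1).random((128, 128, 3)) * 255.0   # stand-in for a recorded image
msr = np.stack([multi_scale_retinex(rgb[..., c]) for c in range(3)], axis=2)
msrcr = color_restoration(rgb) * msr
print("MSRCR output shape:", msrcr.shape, " mean:", round(float(msrcr.mean()), 3))
```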
ERIC Educational Resources Information Center
Zacamy, Jenna; Newman, Denis; Lazarev, Valeriy; Lin, Li
2015-01-01
This paper reports findings from a multi-year study of the scale-up of Reading Apprenticeship (RA), an approach to improve academic literacy by helping teachers provide the support students need to be successful readers in the content areas. WestEd's Strategic Literacy Initiative (SLI) began developing the program in 1995 and has since reached…
A multi-scaled approach for simulating chemical reaction systems.
Burrage, Kevin; Tian, Tianhai; Burrage, Pamela
2004-01-01
In this paper we give an overview of some very recent work, as well as presenting a new approach, on the stochastic simulation of multi-scaled systems involving chemical reactions. In many biological systems (such as genetic regulation and cellular dynamics) there is a mix between small numbers of key regulatory proteins, and medium and large numbers of molecules. In addition, it is important to be able to follow the trajectories of individual molecules by taking proper account of the randomness inherent in such a system. We describe different types of simulation techniques (including the stochastic simulation algorithm, Poisson Runge-Kutta methods and the balanced Euler method) for treating simulations in the three different reaction regimes: slow, medium and fast. We then review some recent techniques on the treatment of coupled slow and fast reactions for stochastic chemical kinetics and present a new approach which couples the three regimes mentioned above. We then apply this approach to a biologically inspired problem involving the expression and activity of LacZ and LacY proteins in E. coli, and conclude with a discussion on the significance of this work. Copyright 2004 Elsevier Ltd.
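For reference, the exact stochastic simulation algorithm (SSA) mentioned above fits in a few lines. The sketch below applies it to a toy birth-death reaction system rather than the LacZ/LacY model of the paper; species, rate constants and the time horizon are invented.

```python
import numpy as np

def gillespie_ssa(x0, stoich, propensity, t_end, rng):
    """Exact SSA for a well-mixed reaction system."""
    t, x = 0.0, np.array(x0, dtype=int)
    times, states = [t], [x.copy()]
    while t < t_end:
        a = propensity(x)
        a0 = a.sum()
        if a0 <= 0:
            break
        t += rng.exponential(1.0 / a0)       # waiting time to the next reaction
        j = rng.choice(len(a), p=a / a0)     # index of the reaction that fires
        x += stoich[j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# Toy birth-death system: 0 -> P at rate k1, P -> 0 at rate k2 * [P]
k1, k2 = 10.0, 0.1
stoich = np.array([[+1], [-1]])
propensity = lambda x: np.array([k1, k2 * x[0]])

times, states = gillespie_ssa([0], stoich, propensity, t_end=100.0,
                              rng=np.random.default_rng(1))
print("final copy number:", states[-1, 0], " (stationary mean k1/k2 =", k1 / k2, ")")
```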
Markwalter, Chester E; Prud'homme, Robert K
2018-05-14
Flash NanoPrecipitation (FNP) is a scalable approach to generate polymeric nanoparticles using rapid micromixing in specially-designed geometries such as a confined impinging jet (CIJ) mixer or a Multi-Inlet Vortex Mixer (MIVM). A major limitation of formulation screening using the MIVM is that a single run requires tens of milligrams of the therapeutic. To overcome this, we have developed a scaled-down version of the MIVM, requiring as little as 0.2 mg of therapeutic, for formulation screening. The redesigned mixer can then be attached to pumps for scale-up of the identified formulation. It was shown that the Reynolds number allowed accurate scaling between the two MIVM designs. The utility of the small-scale MIVM for formulation development was demonstrated through the encapsulation of a number of hydrophilic macromolecules using inverse Flash NanoPrecipitation with target loadings as high as 50% by mass. Copyright © 2018. Published by Elsevier Inc.
Upadhyay, Manas V.; Patra, Anirban; Wen, Wei; ...
2018-05-08
In this paper, we propose a multi-scale modeling approach that can simulate the microstructural and mechanical behavior of metal or alloy parts with complex geometries subjected to multi-axial load path changes. The model is used to understand the biaxial load path change behavior of 316L stainless steel cruciform samples. At the macroscale, a finite element approach is used to simulate the cruciform geometry and numerically predict the gauge stresses, which are difficult to obtain analytically. At each material point in the finite element mesh, the anisotropic viscoplastic self-consistent model is used to simulate the role of texture evolution on the mechanical response. At the single crystal level, a dislocation density based hardening law that appropriately captures the role of multi-axial load path changes on slip activity is used. The combined approach is experimentally validated using cruciform samples subjected to uniaxial load and unload followed by different biaxial reloads in the angular range [27°, 90°]. Polycrystalline yield surfaces before and after load path changes are generated using the full-field elasto-viscoplastic fast Fourier transform model to study the influence of the deformation history and reloading direction on the mechanical response, including the Bauschinger effect, of these cruciform samples. Results reveal that the Bauschinger effect is strongly dependent on the first loading direction and strain, intergranular and macroscopic residual stresses after first load, and the reloading angle. The microstructural origins of the mechanical response are discussed.
Towards a Multi-Resolution Model of Seismic Risk in Central Asia. Challenge and perspectives
NASA Astrophysics Data System (ADS)
Pittore, M.; Wieland, M.; Bindi, D.; Parolai, S.
2011-12-01
Assessing seismic risk, defined as the probability of occurrence of economic and social losses as a consequence of an earthquake, both at regional and at local scale, is a challenging, multi-disciplinary task. In order to provide a reliable estimate, diverse information must be gathered by seismologists, geologists, engineers and civil authorities, and carefully integrated taking into account the different levels of uncertainty. The research towards an integrated methodology, able to seamlessly describe seismic risk at different spatial scales, is challenging, but discloses new application perspectives, particularly in those countries which face significant seismic hazard but do not have the resources for a standard assessment. Central Asian countries in particular, which exhibit some of the highest seismic hazard in the world, are experiencing steady demographic growth, often accompanied by informal settlement and urban sprawl. A reliable evaluation of how these factors affect seismic risk, together with a realistic assessment of the assets exposed to seismic hazard and their structural vulnerability, is of particular importance in order to undertake proper mitigation actions and to react promptly and efficiently to a catastrophic event. New strategies are needed to efficiently cope with systematic lack of information and uncertainties. An original approach is presented to assess seismic risk based on the integration of information coming from remote sensing and ground-based panoramic imaging, in situ measurements, expert knowledge and already available data. Efficient sampling strategies based on freely available medium-resolution multi-spectral satellite images are adopted to optimize data collection and validation, in a multi-scale approach. Panoramic imaging is also considered as a valuable ground-based visual data collection technique, suitable both for manual and automatic analysis. A fully probabilistic framework based on Bayesian networks is proposed to integrate the available information, taking into account both aleatory and epistemic uncertainties. An improved risk model for the capital of the Kyrgyz Republic, Bishkek, has been developed following this approach and tested on different earthquake scenarios. Preliminary results will be presented and discussed.
NASA Astrophysics Data System (ADS)
Buongiorno, M. F.; Silvestri, M.; Musacchio, M.
2017-12-01
In this work a complete processing chain, from the detection of the beginning of an eruption to the estimation of lava flow temperature on active volcanoes using remote sensing data, is presented, showing the results for the Mt. Etna eruption of March 2017. The early detection of a new eruption is based on the capabilities of geostationary, very low spatial resolution satellites (3×3 km at nadir); the hot spot/lava flow evolution is derived from S2 polar-orbiting medium/high spatial resolution data (20×20 m), while the surface temperature is estimated from polar-orbiting medium/low spatial resolution data such as L8, ASTER and S3 (from 90 m up to 1 km). This approach merges outcomes derived from monitoring activities performed within INGV R&D and the results obtained by the ESA-funded Geohazards Exploitation Platform (GEP) project, aimed at the development of a shared platform providing services based on EO data. Because of the variety of phenomena to be analyzed, a multi-temporal, multi-scale approach has been used to implement suitable and robust algorithms for the different sensors. With the exception of Sentinel-2 (MSI) data, for which the algorithm used is based on the NIR-SWIR bands, we exploit the MIR-TIR channels of L8, ASTER, S3 and SEVIRI to generate the surface thermal state analysis automatically. The developed procedure produces time series data and allows information to be extracted from each co-registered pixel, highlighting temperature variations within specific areas. The final goal is to implement an easy tool which enables scientists and users to extract valuable information from satellite time series at different scales produced by ESA and EUMETSAT in the frame of Europe's Copernicus program and other Earth observation satellite programs such as LANDSAT (USGS) and GOES (NOAA).
Multi-scale Modeling and Analysis of Nano-RFID Systems on HPC Setup
NASA Astrophysics Data System (ADS)
Pathak, Rohit; Joshi, Satyadhar
In this paper we address some of the complex modeling aspects, such as multi-scale modeling and MATLAB Sugar-based modeling, and show the complexities involved in the analysis of Nano RFID (Radio Frequency Identification) systems. We present modeling and simulation results and demonstrate some novel ideas and library development for Nano RFID. Multi-scale modeling plays a very important role for nanotech-enabled devices, whose properties sometimes cannot be explained by abstraction-level theories. Reliability and packaging still remain among the major hindrances to the practical implementation of Nano RFID based devices, and modeling and simulation will play a very important role in addressing them. CNTs are the future low-power material that will replace CMOS, and their integration with CMOS and MEMS circuitry will play an important role in realizing the true power of Nano RFID systems. RFID based on innovations in nanotechnology is presented. MEMS modeling of the antenna and sensors and their integration in the circuitry is shown. Incorporating this, we can design a Nano RFID which can be used in areas like human implantation and complex banking applications. We propose modeling of RFID using the concept of multi-scale modeling to accurately predict its properties. We also give the modeling of recently proposed MEMS devices that may find application in RFID, and we cover the applications and advantages of Nano RFID in various areas. RF MEMS has matured and its devices are being successfully commercialized, but taking it to the limits of the nano domain and integrating it with single-chip RFID requires a novel approach, which is proposed here. We have modeled a MEMS-based transponder and shown the distribution for multi-scale modeling of Nano RFID.
Sage-grouse habitat assessment framework: multi-scale habitat assessment tool
USDA-ARS?s Scientific Manuscript database
This document provides policymakers, resource managers, and specialists with a comprehensive framework for assessing sage-grouse habitat in the sagebrush ecosystem. Four pillars form the foundation for the success of this approach: science, effective conservation policy, implementation, and adapti...
A hybrid approach for efficient anomaly detection using metaheuristic methods
Ghanem, Tamer F.; Elkilani, Wail S.; Abdul-kader, Hatem M.
2014-01-01
Network intrusion detection based on anomaly detection techniques has a significant role in protecting networks and systems against harmful activities. Different metaheuristic techniques have been used for anomaly detector generation. Yet, the reported literature has not studied the use of the multi-start metaheuristic method for detector generation. This paper proposes a hybrid approach for anomaly detection in large scale datasets using detectors generated based on the multi-start metaheuristic method and genetic algorithms. The proposed approach takes some inspiration from negative selection-based detector generation. The evaluation of this approach is performed using the NSL-KDD dataset, which is a modified version of the widely used KDD CUP 99 dataset. The results show its effectiveness in generating a suitable number of detectors with an accuracy of 96.1% compared to competing machine learning algorithms. PMID:26199752
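The negative-selection idea that the hybrid approach takes inspiration from can be sketched briefly: random candidate detectors are kept only if they do not lie close to any "self" (normal) sample, and a record is flagged when it falls inside a detector's radius. The Python sketch below shows only this baseline on invented 2-D features; radii, sample counts and the toy data are assumptions, and the paper's multi-start/GA detector generation is not reproduced here.

```python
import numpy as np

def generate_detectors(self_samples, n_detectors, self_radius, dim, rng):
    """Plain negative selection: keep random detectors far from all self samples."""
    detectors = []
    while len(detectors) < n_detectors:
        cand = rng.random(dim)
        if np.linalg.norm(self_samples - cand, axis=1).min() > self_radius:
            detectors.append(cand)
    return np.array(detectors)

def is_anomalous(x, detectors, detector_radius):
    """A record is anomalous if it falls within the radius of any detector."""
    return bool(np.linalg.norm(detectors - x, axis=1).min() < detector_radius)

rng = np.random.default_rng(42)
normal_traffic = rng.normal(0.5, 0.05, size=(500, 2)).clip(0, 1)   # toy "self" records
detectors = generate_detectors(normal_traffic, 300, self_radius=0.2, dim=2, rng=rng)

print("normal-looking record flagged:", is_anomalous(normal_traffic[0], detectors, 0.15))
print("outlying record flagged:      ", is_anomalous(np.array([0.9, 0.1]), detectors, 0.15))
```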
Deep Visual Attention Prediction
NASA Astrophysics Data System (ADS)
Wang, Wenguan; Shen, Jianbing
2018-05-01
In this work, we aim to predict human eye fixation with view-free scenes based on an end-to-end deep learning architecture. Although Convolutional Neural Networks (CNNs) have made substantial improvements on human attention prediction, there is still a need to improve CNN-based attention models by efficiently leveraging multi-scale features. Our visual attention network is proposed to capture hierarchical saliency information, from deep, coarse layers with global saliency information to shallow, fine layers with local saliency response. Our model is based on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various receptive fields. Final saliency prediction is achieved via the cooperation of those global and local predictions. Our model is learned in a deep supervision manner, where supervision is directly fed into multi-level layers, instead of previous approaches of providing supervision only at the output layer and propagating this supervision back to earlier layers. Our model thus incorporates multi-level saliency predictions within a single network, which significantly decreases the redundancy of previous approaches of learning multiple network streams with different input scales. Extensive experimental analysis on various challenging benchmark datasets demonstrates that our method yields state-of-the-art performance with competitive inference time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, J.; Miki, K.; Uzawa, K.
2006-11-30
During the past years the understanding of multi-scale interaction problems has increased significantly. However, at present there exists a variety of different analytical models for investigating multi-scale interactions and hardly any specific comparisons have been performed among these models. In this work two different models for the generation of zonal flows from ion-temperature-gradient (ITG) background turbulence are discussed and compared. The methods used are the coherent mode coupling model and the wave kinetic equation (WKE) model. It is shown that the two models give qualitatively the same results even though the assumption on the spectral difference is used in the WKE approach.
Multiscale Analysis of Delamination of Carbon Fiber-Epoxy Laminates with Carbon Nanotubes
NASA Technical Reports Server (NTRS)
Riddick, Jaret C.; Frankland, SJV; Gates, TS
2006-01-01
A multi-scale analysis is presented to parametrically describe the Mode I delamination of a carbon fiber/epoxy laminate. In the midplane of the laminate, carbon nanotubes are included for the purpose of selectively enhancing the fracture toughness of the laminate. To analyze the carbon fiber/epoxy/carbon nanotube laminate, the multi-scale methodology presented here links a series of parameterizations taken at various length scales ranging from the atomistic through the micromechanical to the structural level. At the atomistic scale, molecular dynamics simulations are performed in conjunction with an equivalent continuum approach to develop constitutive properties for representative volume elements of the molecular structure of components of the laminate. The molecular-level constitutive results are then used in Mori-Tanaka micromechanics to develop bulk properties for the epoxy-carbon nanotube matrix system. In order to demonstrate a possible application of this multi-scale methodology, a double cantilever beam (DCB) specimen is modeled. An existing analysis is employed which uses discrete springs to model the fiber bridging effect during delamination propagation. In the absence of empirical data or a damage mechanics model describing the effect of CNTs on fracture toughness, several traction laws are postulated, linking CNT volume fraction to fiber bridging in a DCB specimen. Results from this demonstration are presented in terms of DCB specimen load-displacement responses.
NASA Astrophysics Data System (ADS)
Dietrich, Jörg
2016-05-01
In integrated river basin management, measures for reaching the environmental objectives can be evaluated at different scales and according to multiple criteria of different nature (e.g. ecological, economic, social). Decision makers, including responsible authorities and stakeholders, follow different interests regarding criteria and scales. With a bottom-up approach, the multi-criteria assessment could produce a different outcome than with a top-down approach. The first assigns more power to the local community, which is a common principle of IWRM. On the other hand, the development of an overall catchment strategy could potentially make use of synergetic effects of the measures, which fulfils the cost-efficiency requirement at the basin scale but compromises local interests. Within a joint research project for the 5500 km² Werra river basin in central Germany, measures have been planned to reach the environmental objectives of the European Water Framework Directive (WFD) regarding ecological continuity and nutrient loads. The main criteria for the evaluation of the measures were costs of implementation, reduction of nutrients, ecological benefit and social acceptance. The multi-criteria evaluation of the catchment strategies showed compensation between positive and negative performance of criteria within the catchment, which in the end reduced the discriminative power of the different strategies. Furthermore, benefit criteria are partially computed for the whole basin only. Both ecological continuity and nutrient load show upstream-downstream effects in opposite directions. The principles of "polluter pays" and "overall cost efficiency" can be followed for the reduction of nutrient losses when financial compensations between upstream and downstream users are made, similar to concepts of emission trading.
Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media
NASA Astrophysics Data System (ADS)
Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo
2016-04-01
The proposed methodology was originally developed by our scientific team in Split, who designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multi-resolution approach are: 1) computational capabilities of Fup basis functions with compact support, capable of resolving all spatial and temporal scales, 2) multi-resolution presentation of heterogeneity as well as of all other input and output variables, 3) an accurate, adaptive and efficient strategy and 4) semi-analytical properties which increase our understanding of usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is separately analyzed, and the adaptive and multi-scale nature of the methodology enables not only computational efficiency and accuracy, but also describes subsurface processes closely related to their understood physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we show recent improvements within the proposed methodology. Since "state of the art" multiresolution approaches usually use the method of lines and only a spatially adaptive procedure, the temporal approximation has rarely been considered as multi-scale. Therefore, a novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step. This means that the algorithm uses smaller time steps only in lines where solution changes are intensive. Application of Fup basis functions enables continuous time approximation, simple interpolation calculations across different temporal lines and local time stepping control. A critical aspect of time integration accuracy is the construction of the spatial stencil for accurate calculation of spatial derivatives. Since the common approach applied for wavelets and splines uses a finite difference operator, we developed here a collocation one including solution values and the differential operator. In this way, the new improved algorithm is adaptive in space and time, enabling accurate solutions of groundwater flow problems, especially in highly heterogeneous porous media with large lnK variances and different correlation length scales. In addition, differences between collocation and finite volume approaches are discussed. Finally, results show the application of the methodology to groundwater flow problems in highly heterogeneous confined and unconfined aquifers.
Wang, Guohua; Liu, Qiong
2015-01-01
Far-infrared pedestrian detection approaches for advanced driver-assistance systems based on high-dimensional features fail to simultaneously achieve robust and real-time detection. We propose a robust and real-time pedestrian detection system characterized by novel candidate filters, novel pedestrian features and multi-frame approval matching in a coarse-to-fine fashion. Firstly, we design two filters based on the pedestrians’ head and the road to select the candidates after applying a pedestrian segmentation algorithm to reduce false alarms. Secondly, we propose a novel feature encapsulating both the relationship of oriented gradient distribution and the code of oriented gradient to deal with the enormous variance in pedestrians’ size and appearance. Thirdly, we introduce a multi-frame approval matching approach utilizing the spatiotemporal continuity of pedestrians to increase the detection rate. Large-scale experiments indicate that the system works in real time and the accuracy has improved about 9% compared with approaches based on high-dimensional features only. PMID:26703611
A Visual Analytics Approach for Station-Based Air Quality Data
Du, Yi; Ma, Cuixia; Wu, Chao; Xu, Xiaowei; Guo, Yike; Zhou, Yuanchun; Li, Jianhui
2016-01-01
With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among those visual methods, map-based visual methods are used to display the locations of interest, and the calendar and the trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt the changes of data size and granularity in trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision making support. PMID:28029117
Multi-scale model for the hierarchical architecture of native cellulose hydrogels.
Martínez-Sanz, Marta; Mikkelsen, Deirdre; Flanagan, Bernadine; Gidley, Michael J; Gilbert, Elliot P
2016-08-20
The structure of protiated and deuterated cellulose hydrogels has been investigated using a multi-technique approach combining small-angle scattering with diffraction, spectroscopy and microscopy. A model for the multi-scale structure of native cellulose hydrogels is proposed which highlights the essential role of water at different structural levels characterised by: (i) the existence of cellulose microfibrils containing an impermeable crystalline core surrounded by a partially hydrated paracrystalline shell, (ii) the creation of a strong network of cellulose microfibrils held together by hydrogen bonding to form cellulose ribbons and (iii) the differential behaviour of tightly bound water held within the ribbons compared to bulk solvent. Deuterium labelling provides an effective platform on which to further investigate the role of different plant cell wall polysaccharides in cellulose composite formation through the production of selectively deuterated cellulose composite hydrogels. Copyright © 2016 Elsevier Ltd. All rights reserved.
Lim, Hansaim; Gray, Paul; Xie, Lei; Poleksic, Aleksandar
2016-01-01
The conventional one-drug-one-gene approach has had limited success in modern drug discovery. Polypharmacology, which focuses on searching for multi-targeted drugs to perturb disease-causing networks instead of designing selective ligands to target individual proteins, has emerged as a new drug discovery paradigm. Although many methods for single-target virtual screening have been developed to improve the efficiency of drug discovery, few of these algorithms are designed for polypharmacology. Here, we present a novel theoretical framework and a corresponding algorithm for genome-scale multi-target virtual screening based on the one-class collaborative filtering technique. Our method overcomes the sparseness of the protein-chemical interaction data by means of interaction matrix weighting and dual regularization from both chemicals and proteins. While the statistical foundation behind our method is general enough to encompass genome-wide drug off-target prediction, the program is specifically tailored to find protein targets for new chemicals with little to no available interaction data. We extensively evaluate our method using a number of the most widely accepted gene-specific and cross-gene family benchmarks and demonstrate that our method outperforms other state-of-the-art algorithms for predicting the interaction of new chemicals with multiple proteins. Thus, the proposed algorithm may provide a powerful tool for multi-target drug design. PMID:27958331
NASA Astrophysics Data System (ADS)
Siegert, Stefan
2017-04-01
Initialised climate forecasts on seasonal time scales, run several months or even years ahead, are now an integral part of the battery of products offered by climate services world-wide. The availability of seasonal climate forecasts from various modeling centres gives rise to multi-model ensemble forecasts. Post-processing such seasonal-to-decadal multi-model forecasts is challenging: 1) because the cross-correlation structure between multiple models and observations can be complicated, 2) because the amount of training data to fit the post-processing parameters is very limited, and 3) because the forecast skill of numerical models tends to be low on seasonal time scales. In this talk I will review new statistical post-processing frameworks for multi-model ensembles. I will focus particularly on Bayesian hierarchical modelling approaches, which are flexible enough to capture commonly made assumptions about collective and model-specific biases of multi-model ensembles. Despite the advances in statistical methodology, it turns out to be very difficult to outperform the simplest post-processing method, which just recalibrates the multi-model ensemble mean by linear regression. I will discuss reasons for this, which are closely linked to the specific characteristics of seasonal multi-model forecasts. I will explore possible directions for improvements, for example using informative priors on the post-processing parameters, and jointly modelling forecasts and observations.
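The baseline mentioned above (recalibrating the multi-model ensemble mean against observations by linear regression over the hindcast period) can be written in a few lines. The sketch below uses synthetic hindcasts and observations as stand-ins; the number of models, years, biases and noise levels are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_years, n_models = 25, 5

# Synthetic hindcasts: each model sees a damped version of the truth plus biased noise
truth = rng.normal(0.0, 1.0, n_years)
forecasts = 0.6 * truth[:, None] + rng.normal(0.2, 0.8, (n_years, n_models))
ens_mean = forecasts.mean(axis=1)

# Recalibration: fit obs ~ a + b * ensemble mean on the hindcast period
b, a = np.polyfit(ens_mean, truth, 1)
recalibrated = a + b * ens_mean

rmse = lambda x, y: float(np.sqrt(np.mean((x - y) ** 2)))
print("raw ensemble-mean RMSE:", round(rmse(ens_mean, truth), 3))
print("recalibrated RMSE     :", round(rmse(recalibrated, truth), 3))
```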
NASA Astrophysics Data System (ADS)
Shume, E. B.; Komjathy, A.; Langley, R. B.; Verkhoglyadova, O. P.; Butala, M.; Mannucci, A. J.
2014-12-01
In this research, we report intermediate-scale plasma density irregularities in the high-latitude ionosphere inferred from high-resolution radio occultation (RO) measurements in the CASSIOPE (CAScade Smallsat and IOnospheric Polar Explorer) - GPS (Global Positioning System) satellite radio link. The high inclination of the CASSIOPE satellite and the high rate of signal reception by the occultation antenna of the GPS Attitude, Positioning and Profiling (GAP) instrument on the Enhanced Polar Outflow Probe platform on CASSIOPE enable a high temporal and spatial resolution investigation of the dynamics of the polar ionosphere, magnetosphere-ionosphere coupling, solar wind effects, etc. in unprecedented detail compared to what was possible in the past. We have carried out a high spatial resolution analysis in altitude and geomagnetic latitude of scintillation-producing plasma density irregularities in the polar ionosphere. Intermediate-scale, scintillation-producing plasma density irregularities, which correspond to spatial scales of 2 to 40 km, were inferred by applying multi-scale spectral analysis to the RO phase delay measurements. Using our multi-scale spectral analysis approach and Polar Operational Environmental Satellites (POES) and Defense Meteorological Satellite Program (DMSP) observations, we infer that the irregularity scales and phase scintillations have distinct features in the auroral oval and polar cap regions. In specific terms, we found that larger length scales and more intense phase scintillations are prevalent in the auroral oval compared to the polar cap region. Hence, the irregularity scales and phase scintillation characteristics are a function of the solar wind and the magnetospheric forcing. Multi-scale analysis may become a powerful diagnostic tool for characterizing how the ionosphere is dynamically driven by these factors.
From seconds to months: an overview of multi-scale dynamics of mobile telephone calls
NASA Astrophysics Data System (ADS)
Saramäki, Jari; Moro, Esteban
2015-06-01
Big Data on electronic records of social interactions allow approaching human behaviour and sociality from a quantitative point of view with unforeseen statistical power. Mobile telephone Call Detail Records (CDRs), automatically collected by telecom operators for billing purposes, have proven especially fruitful for understanding one-to-one communication patterns as well as the dynamics of social networks that are reflected in such patterns. We present an overview of empirical results on the multi-scale dynamics of social behaviour and networks inferred from mobile telephone calls. We begin with the shortest timescales and fastest dynamics, such as the burstiness of call sequences between individuals, and "zoom out" towards longer temporal and larger structural scales, from temporal motifs formed by correlated calls between multiple individuals to the long-term dynamics of social groups. We conclude this overview with a future outlook.
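Burstiness of call sequences is often quantified with the coefficient B = (sigma - mu) / (sigma + mu) of the inter-event times, a measure commonly used in this literature (B is near 0 for a memoryless process and approaches 1 for very bursty activity). The sketch below compares a memoryless caller with a heavy-tailed, bursty one on synthetic inter-call gaps; the distributions and parameters are illustrative assumptions.

```python
import numpy as np

def burstiness(inter_event_times):
    """Burstiness coefficient B = (sigma - mu) / (sigma + mu) of inter-event times."""
    tau = np.asarray(inter_event_times, dtype=float)
    mu, sigma = tau.mean(), tau.std()
    return (sigma - mu) / (sigma + mu)

rng = np.random.default_rng(7)
poisson_gaps = rng.exponential(1.0, 10_000)     # memoryless calling activity
bursty_gaps = rng.pareto(1.5, 10_000) + 1e-3    # heavy-tailed inter-call gaps

print("Poisson-like caller: B =", round(float(burstiness(poisson_gaps)), 2))
print("bursty caller:       B =", round(float(burstiness(bursty_gaps)), 2))
```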
Multi-scale structures of turbulent magnetic reconnection
NASA Astrophysics Data System (ADS)
Nakamura, T. K. M.; Nakamura, R.; Narita, Y.; Baumjohann, W.; Daughton, W.
2016-05-01
We have analyzed data from a series of 3D fully kinetic simulations of turbulent magnetic reconnection with a guide field. A new concept of the guide field reconnection process has recently been proposed, in which the secondary tearing instability and the resulting formation of oblique, small scale flux ropes largely disturb the structure of the primary reconnection layer and lead to 3D turbulent features [W. Daughton et al., Nat. Phys. 7, 539 (2011)]. In this paper, we further investigate the multi-scale physics in this turbulent, guide field reconnection process by introducing a wave number band-pass filter (k-BPF) technique in which modes for the small scale (less than ion scale) fluctuations and the background large scale (more than ion scale) variations are separately reconstructed from the wave number domain to the spatial domain in the inverse Fourier transform process. Combining with the Fourier based analyses in the wave number domain, we successfully identify spatial and temporal development of the multi-scale structures in the turbulent reconnection process. When considering a strong guide field, the small scale tearing mode and the resulting flux ropes develop over a specific range of oblique angles mainly along the edge of the primary ion scale flux ropes and reconnection separatrix. The rapid merging of these small scale modes leads to a smooth energy spectrum connecting ion and electron scales. When the guide field is sufficiently weak, the background current sheet is strongly kinked and oblique angles for the small scale modes are widely scattered at the kinked regions. Similar approaches handling both the wave number and spatial domains will be applicable to the data from multipoint, high-resolution spacecraft observations such as the NASA Magnetospheric Multiscale (MMS) mission.
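The core of a wave-number band-pass filter is straightforward: Fourier-transform the field, keep only modes inside a chosen |k| band, and transform back. The sketch below illustrates this on a synthetic 2-D scalar field standing in for the 3-D simulation data; the field, grid and ion-scale cutoff wavenumber are invented for illustration.

```python
import numpy as np

def k_band(field, k_low, k_high, dx):
    """Keep Fourier modes with k_low <= |k| < k_high and transform back."""
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    kmag = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    mask = (kmag >= k_low) & (kmag < k_high)
    return np.real(np.fft.ifft2(np.fft.fft2(field) * mask))

# Toy field: one large-scale wave plus broadband small-scale noise
n, dx = 256, 2 * np.pi / 256
x = np.arange(n) * dx
X, Y = np.meshgrid(x, x)
field = np.sin(X) * np.cos(2 * Y) + 0.3 * np.random.default_rng(5).normal(size=(n, n))

k_ion = 8.0                                       # assumed ion-scale cutoff wavenumber
background = k_band(field, 0.0, k_ion, dx)        # larger-than-ion-scale variations
fluctuations = k_band(field, k_ion, np.inf, dx)   # smaller-than-ion-scale fluctuations

# The two bands are complementary and reconstruct the original field
print("max reconstruction error:", float(np.abs(background + fluctuations - field).max()))
print("variance split:", round(background.var(), 3), "+", round(fluctuations.var(), 3),
      "vs total", round(field.var(), 3))
```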
Munayer, Salim J; Horenczyk, Gabriel
2014-10-01
Grounded in a contextual approach to acculturation of minorities, this study examines changes in acculturation orientations among Palestinian Christian Arab adolescents in Israel following the "lost decade of Arab-Jewish coexistence." Multi-group acculturation orientations among 237 respondents were assessed vis-à-vis two majorities--Muslim Arabs and Israeli Jews--and compared to 1998 data. Separation was the strongest endorsed orientation towards both majority groups. Comparisons with the 1998 data also show a weakening of the Integration attitude towards Israeli Jews, and also distancing from Muslim Arabs. For the examination of the "Westernisation" hypothesis, multi-dimensional scaling (MDS) analyses of perceptions of Self and group values clearly showed that, after 10 years, Palestinian Christian Arabs perceive Israeli Jewish culture as less close to Western culture, and that Self and the Christian Arab group have become much closer, suggesting an increasing identification of Palestinian Christian Arab adolescents with their ethnoreligious culture. We discuss the value of a multi-group, multi-method, and multi-wave approach to the examination of the role of the political context in acculturation processes. © 2014 International Union of Psychological Science.
HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.
Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye
2017-02-09
In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to achieve more effective accomplishment of the coarse-to-fine tasks for hierarchical visual recognition. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. Our hierarchical deep multi-task learning (HD-MTL) algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it provides an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and a more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to new training images and new object classes. Our experimental results have demonstrated that our HD-MTL algorithm can achieve very competitive results on improving the accuracy rates for large-scale visual recognition.
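A visual tree of the kind described above is frequently built by clustering per-class feature representations so that visually similar classes share a group. The sketch below shows one simple, hypothetical way to do this with agglomerative (Ward) clustering on random stand-in class-mean features; the feature dimension, number of groups and cut rule are assumptions, not the paper's procedure.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_classes, feat_dim, n_groups = 40, 128, 5

# Stand-ins for per-class mean deep-CNN features (one row per atomic object class)
class_means = rng.normal(size=(n_classes, feat_dim))
class_means[:20] += 2.0        # crude "super-category" structure for illustration

# Build the tree by agglomerative clustering, then cut it into groups of similar classes
tree = linkage(class_means, method="ward")
groups = fcluster(tree, t=n_groups, criterion="maxclust")

for g in range(1, n_groups + 1):
    members = np.where(groups == g)[0]
    print(f"group {g}: {len(members)} classes, e.g. {members[:5].tolist()}")
```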
NASA Astrophysics Data System (ADS)
Ajadi, O. A.; Meyer, F. J.
2014-12-01
Automatic oil spill detection and tracking from Synthetic Aperture Radar (SAR) images is a difficult task, due in large part to the inhomogeneous properties of the sea surface, the high level of speckle inherent in SAR data, the complexity and the highly non-Gaussian nature of amplitude information, and the low temporal sampling that is often achieved with SAR systems. This research presents a promising new oil spill detection and tracking method that is based on time series of SAR images. Through the combination of a number of advanced image processing techniques, the developed approach is able to mitigate some of these previously mentioned limitations of SAR-based oil-spill detection and enables fully automatic spill detection and tracking across a wide range of spatial scales. The method combines an initial automatic texture analysis with a consecutive change detection approach based on multi-scale image decomposition. The first step of the approach, a texture transformation of the original SAR images, is performed in order to normalize the ocean background and enhance the contrast between oil-covered and oil-free ocean surfaces. The Lipschitz regularity (LR), a local texture parameter, is used here due to its proven ability to normalize the reflectivity properties of ocean water and maximize the visibility of oil in water. To calculate LR, the images are decomposed using the two-dimensional continuous wavelet transform (2D-CWT) and transformed into Hölder space to measure LR. After texture transformation, the now normalized images are inserted into our multi-temporal change detection algorithm. The multi-temporal change detection approach is a two-step procedure including (1) data enhancement and filtering and (2) multi-scale automatic change detection. The performance of the developed approach is demonstrated by an application to oil spill areas in the Gulf of Mexico. In this example, areas affected by oil spills were identified from a series of ALOS PALSAR images acquired in 2010. The comparison showed exceptional performance of our method. This method can be applied to emergency management and decision support systems with a need for real-time data, and it shows great potential for rapid data analysis in other areas, including volcano detection, flood boundaries, forest health, and wildfires.
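To illustrate just the multi-scale change-detection step, the sketch below flags darkened regions between two synthetic speckled images by fusing the log-ratio across several smoothing scales. The median despeckling, Gaussian scales, global threshold and simulated slick are all illustrative assumptions; the paper's pipeline instead uses the Lipschitz-regularity texture transform and its own decomposition and filtering.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def multi_scale_change(img_before, img_after, sigmas=(1, 2, 4, 8), k=2.0):
    """Log-ratio change detection fused across several smoothing scales."""
    lr = np.log(median_filter(img_after, 3) + 1e-6) - \
         np.log(median_filter(img_before, 3) + 1e-6)          # despeckle, then log-ratio
    score = np.mean([np.abs(gaussian_filter(lr, s)) for s in sigmas], axis=0)
    return score > score.mean() + k * score.std()              # simple global threshold

rng = np.random.default_rng(11)
sea_before = rng.gamma(4.0, 0.25, (200, 200))                  # speckled open water
sea_after = sea_before.copy()
sea_after[80:140, 60:150] *= 0.3                               # hypothetical oil slick darkening

mask = multi_scale_change(sea_before, sea_after)
print("pixels flagged as changed:", int(mask.sum()), "of", mask.size)
```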
A Scalable and Robust Multi-Agent Approach to Distributed Optimization
NASA Technical Reports Server (NTRS)
Tumer, Kagan
2005-01-01
Modularizing a large optimization problem so that the solutions to the subproblems provide a good overall solution is a challenging problem. In this paper we present a multi-agent approach to this problem based on aligning the agent objectives with the system objectives, obviating the need to impose external mechanisms to achieve collaboration among the agents. This approach naturally addresses scaling and robustness issues by ensuring that the agents do not rely on the reliable operation of other agents. We test this approach in the difficult distributed optimization problem of imperfect device subset selection [Challet and Johnson, 2002]. In this problem, there are n devices, each of which has a "distortion", and the task is to find the subset of those n devices that minimizes the average distortion. Our results show that in large systems (1000 agents) the proposed approach provides improvements of over an order of magnitude over both traditional optimization methods and traditional multi-agent methods. Furthermore, the results show that even in extreme cases of agent failures (i.e., half the agents fail midway through the simulation) the system remains coordinated and still outperforms a failure-free and centralized optimization algorithm.
NASA Astrophysics Data System (ADS)
Guo, Tian; Xu, Zili
2018-03-01
Measurement noise is inevitable in practice; thus, it is difficult to identify defects, cracks or damage in a structure while suppressing noise simultaneously. In this work, a novel method is introduced to detect multiple damage in noisy environments. Based on multi-scale space analysis for discrete signals, a method for extracting damage characteristics from the measured displacement mode shape is illustrated. Moreover, the proposed method incorporates a data fusion algorithm to further eliminate measurement noise-based interference. The effectiveness of the method is verified numerically and experimentally for different structural types. The results demonstrate that the proposed method has two advantages. First, damage features are extracted from the differences of the multi-scale representation, so that interference from noise amplification is avoided. Second, a data fusion technique applied to the proposed method provides a global decision, which retains the damage features while maximally eliminating the uncertainty. Monte Carlo simulations are utilized to validate that the proposed method has a higher accuracy in damage detection.
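The multi-scale idea sketched in the abstract can be illustrated on a synthetic beam mode shape: smooth the measured shape at several scales, use the differences between neighbouring scales as damage features, and fuse the normalized features across scales. Everything below (the mode shape, damage amplitudes, noise level, scales and peak-picking) is an invented stand-in for illustration, not the paper's algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks

def damage_indicator(mode_shape, sigmas=(1, 2, 4)):
    """Difference-of-scales features, normalized per scale and fused by averaging."""
    indicators = []
    for s in sigmas:
        fine = gaussian_filter1d(mode_shape, s)
        coarse = gaussian_filter1d(mode_shape, 2 * s)
        d = np.abs(fine - coarse)
        indicators.append((d - d.mean()) / (d.std() + 1e-12))
    return np.mean(indicators, axis=0)

# Synthetic first bending mode with two small local distortions (hypothetical damage)
x = np.linspace(0.0, 1.0, 501)
mode = np.sin(np.pi * x)
for loc, amp in [(0.3, 0.012), (0.7, 0.008)]:
    mode += amp * np.exp(-((x - loc) / 0.02) ** 2)
noisy = mode + np.random.default_rng(2).normal(0.0, 3e-4, x.size)   # measurement noise

fused = damage_indicator(noisy)
peaks, props = find_peaks(fused, prominence=1.0)
top = peaks[np.argsort(props["prominences"])[-2:]]
print("suspected damage locations near x =", np.round(np.sort(x[top]), 2))
```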
From Single-Cell Dynamics to Scaling Laws in Oncology
NASA Astrophysics Data System (ADS)
Chignola, Roberto; Sega, Michela; Stella, Sabrina; Vyshemirsky, Vladislav; Milotti, Edoardo
We are developing a biophysical model of tumor biology. We follow a strictly quantitative approach where each step of model development is validated by comparing simulation outputs with experimental data. While this strategy may slow down our advancements, at the same time it provides an invaluable reward: we can trust simulation outputs and use the model to explore territories of cancer biology where current experimental techniques fail. Here, we review our multi-scale biophysical modeling approach and show how a description of cancer at the cellular level has led us to general laws obeyed by both in vitro and in vivo tumors.
NASA Astrophysics Data System (ADS)
Meneveau, C. V.; Bai, K.; Katz, J.
2011-12-01
The vegetation canopy has a significant impact on various physical and biological processes such as forest microclimate, rainfall evaporation distribution and climate change. Most scaled laboratory experimental studies have used canopy element models that consist of rigid vertical strips or cylindrical rods that can typically be represented through only one or a few characteristic length scales, for example the diameter and height for cylindrical rods. However, most natural canopies and vegetation are highly multi-scale, with branches and sub-branches covering a wide range of length scales. Fractals provide a convenient idealization of multi-scale objects, since their multi-scale properties can be described in simple ways (Mandelbrot 1982). While fractal aspects of turbulence have been studied in several works in the past decades, research on turbulence generated by fractal objects started more recently. We present an experimental study of boundary layer flow over fractal tree-like objects. Detailed Particle Image Velocimetry (PIV) measurements are carried out in the near-wake of a fractal-like tree. The tree is a pre-fractal with five generations, with three branches and a scale reduction factor of 1/2 at each generation. Its similarity fractal dimension (Mandelbrot 1982) is D ~ 1.58. Detailed mean velocity and turbulence stress profiles are documented, as well as their downstream development. We then turn attention to the turbulence mixing properties of the flow, specifically to the question whether a mixing length scale can be identified in this flow, and if so, how it relates to the geometric length scales in the pre-fractal object. Scatter plots of mean velocity gradient (shear) and Reynolds shear stress exhibit a good linear relation at all locations in the flow. Therefore, in the transverse direction of the wake evolution, the Boussinesq eddy viscosity concept is appropriate to describe the mixing. We find that the measured mixing length increases with increasing streamwise location. Conversely, the measured eddy viscosity and mixing length decrease with increasing elevation, which differs from the eddy viscosity and mixing length behaviors of traditional boundary layers or canopies studied before. In order to find an appropriate length scale for the flow, several models based on the notion of superposition of scales are proposed and examined. One approach is based on spectral distributions. Another, more practical approach is based on length-scale distributions evaluated using fractal geometry tools. These proposed models agree well with the measured mixing length. The results indicate that information about the multi-scale clustering of branches as it occurs in fractals has to be incorporated into models of the mixing length for flows through canopies with multiple scales. The research is supported by National Science Foundation grants ATM-0621396 and AGS-1047550.
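The eddy viscosity and mixing length discussed above follow from the measured profiles as nu_t = -<u'v'> / (dU/dy) and l_m = sqrt(nu_t / |dU/dy|). The sketch below evaluates these definitions on synthetic profiles (a log-law mean velocity with a Reynolds stress consistent with l_m = kappa*y); the profiles are stand-ins for PIV data, not measurements from the study.

```python
import numpy as np

def eddy_viscosity_and_mixing_length(y, U, uv):
    """nu_t = -<u'v'> / (dU/dy) and l_m = sqrt(nu_t / |dU/dy|) from measured profiles."""
    dUdy = np.gradient(U, y)
    nu_t = -uv / dUdy
    l_m = np.sqrt(np.maximum(nu_t, 0.0) / np.abs(dUdy))
    return nu_t, l_m

# Synthetic stand-in profiles: log-law mean velocity, stress built from l_m = kappa*y
kappa, u_tau, y0 = 0.41, 0.2, 0.001
y = np.linspace(0.01, 0.5, 50)                 # wall-normal distance (m)
U = (u_tau / kappa) * np.log(y / y0)           # mean streamwise velocity (m/s)
dUdy_exact = u_tau / (kappa * y)
uv = -((kappa * y) ** 2) * dUdy_exact ** 2     # Reynolds shear stress <u'v'>

nu_t, l_m = eddy_viscosity_and_mixing_length(y, U, uv)
print("recovered l_m / (kappa*y):", np.round(l_m[5:45:10] / (kappa * y[5:45:10]), 3))
```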
Subgrid-scale stresses and scalar fluxes constructed by the multi-scale turnover Lagrangian map
NASA Astrophysics Data System (ADS)
AL-Bairmani, Sukaina; Li, Yi; Rosales, Carlos; Xie, Zheng-tong
2017-04-01
The multi-scale turnover Lagrangian map (MTLM) [C. Rosales and C. Meneveau, "Anomalous scaling and intermittency in three-dimensional synthetic turbulence," Phys. Rev. E 78, 016313 (2008)] uses nested multi-scale Lagrangian advection of fluid particles to distort a Gaussian velocity field and, as a result, generate non-Gaussian synthetic velocity fields. Passive scalar fields can be generated with the procedure when the fluid particles carry a scalar property [C. Rosales, "Synthetic three-dimensional turbulent passive scalar fields via the minimal Lagrangian map," Phys. Fluids 23, 075106 (2011)]. The synthetic fields have been shown to possess highly realistic statistics characterizing small scale intermittency, geometrical structures, and vortex dynamics. In this paper, we present a study of the synthetic fields using the filtering approach. This approach, which has not been pursued so far, provides insights on the potential applications of the synthetic fields in large eddy simulations and subgrid-scale (SGS) modelling. The MTLM method is first generalized to model scalar fields produced by an imposed linear mean profile. We then calculate the subgrid-scale stress, SGS scalar flux, SGS scalar variance, as well as related quantities from the synthetic fields. Comparison with direct numerical simulations (DNSs) shows that the synthetic fields reproduce the probability distributions of the SGS energy and scalar dissipation rather well. Related geometrical statistics also display close agreement with DNS results. The synthetic fields slightly under-estimate the mean SGS energy dissipation and slightly over-predict the mean SGS scalar variance dissipation. In general, the synthetic fields tend to slightly under-estimate the probability of large fluctuations for most quantities we have examined. Small scale anisotropy in the scalar field originated from the imposed mean gradient is captured. The sensitivity of the synthetic fields on the input spectra is assessed by using truncated spectra or model spectra as the input. Analyses show that most of the SGS statistics agree well with those from MTLM fields with DNS spectra as the input. For the mean SGS energy dissipation, some significant deviation is observed. However, it is shown that the deviation can be parametrized by the input energy spectrum, which demonstrates the robustness of the MTLM procedure.
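As a minimal sketch of the filtering approach discussed above (a generic box-filter calculation on a synthetic periodic field, not the MTLM code or its spectra), the snippet below computes one component of the SGS stress, tau_12 = filter(u v) - filter(u) filter(v), and a corresponding SGS dissipation proxy.

import numpy as np
from scipy.ndimage import uniform_filter

def box_filter(f, width):
    # Top-hat (box) filter of the given width in grid points, periodic domain.
    return uniform_filter(f, size=width, mode="wrap")

# Synthetic velocity components on a periodic grid (stand-in for an MTLM field).
rng = np.random.default_rng(0)
n, width = 64, 8
u, v, w = (rng.standard_normal((n, n, n)) for _ in range(3))

# SGS stress component tau_12 = filter(u*v) - filter(u)*filter(v)
tau12 = box_filter(u * v, width) - box_filter(u, width) * box_filter(v, width)

# SGS dissipation proxy for this component: -tau_12 * d(filtered u)/dy
dudy = np.gradient(box_filter(u, width), axis=1)
print("mean tau_12:", tau12.mean(), "  mean -tau_12*dU/dy:", (-tau12 * dudy).mean())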
Monson, Daniel H.; Bowen, Lizabeth
2015-01-01
Overall, a variety of indices used to measure population status throughout the sea otter’s range have provided insights for understanding the mechanisms driving the trajectory of various sea otter populations, which a single index could not, and we suggest using multiple methods to measure a population’s status at multiple spatial and temporal scales. The work described here also illustrates the usefulness of long-term data sets and/or approaches that can be used to assess population status retrospectively, providing information otherwise not available. While not all systems will be as amenable to using all the approaches presented here, we expect innovative researchers could adapt analogous multi-scale methods to a broad range of habitats and species including apex predators occupying the top trophic levels, which are often of conservation concern.
NASA Astrophysics Data System (ADS)
Manfredi, Sabato
2016-06-01
Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology and environmental monitoring to sensor networks and power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics/interactions, and they require increasingly demanding computational methods for analysis and control design as the network size and node/interaction complexity grow. It is therefore a challenging problem to find scalable computational methods for distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (briefly MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and linear matrix inequality (LMI) techniques, new conditions are given for the distributed control design of large-scale MASs that can be easily solved with standard MATLAB toolboxes. The stabilisability of each node dynamic is a sufficient assumption to design a global stabilising distributed control. The proposed approach improves some of the existing LMI-based results on MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in terms of computational requirements in the case of weakly heterogeneous MASs, which is a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows a move from a centralised towards a distributed computing architecture, so that the expensive computational workload spent to solve LMIs may be shared among processors located at the networked nodes, thus improving the scalability of the approach with the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.
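To illustrate the LMI machinery only (this is not the paper's distributed MAS conditions), the sketch below solves a single-node state-feedback LMI: find P > 0 and Y such that A P + P A^T + B Y + Y^T B^T < 0, then recover K = Y P^-1. It assumes cvxpy with an SDP-capable solver (e.g. SCS) is installed; the system matrices are an invented two-state example.

import numpy as np
import cvxpy as cp

# Hypothetical node dynamics x' = A x + B u (a small unstable example).
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
n, m = A.shape[0], B.shape[1]

P = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))
Q = cp.Variable((n, n), symmetric=True)   # symmetric slack so the semidefinite
                                          # constraint acts on a symmetric variable
eps = 1e-3
constraints = [Q == A @ P + P @ A.T + B @ Y + Y.T @ B.T,
               P >> eps * np.eye(n),
               Q << -eps * np.eye(n)]

cp.Problem(cp.Minimize(0), constraints).solve()

K = Y.value @ np.linalg.inv(P.value)      # stabilising gain u = K x
print("closed-loop eigenvalues:", np.linalg.eigvals(A + B @ K))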
Multi-time Scale Coordination of Distributed Energy Resources in Isolated Power Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayhorn, Ebony; Xie, Le; Butler-Purry, Karen
2016-03-31
In isolated power systems, including microgrids, distributed assets, such as renewable energy resources (e.g. wind, solar) and energy storage, can be actively coordinated to reduce dependency on fossil fuel generation. The key challenge of such coordination arises from significant uncertainty and variability occurring at small time scales associated with increased penetration of renewables. Specifically, the problem is with ensuring economic and efficient utilization of DERs, while also meeting operational objectives such as adequate frequency performance. One possible solution is to reduce the time step at which tertiary controls are implemented and to ensure feedback and look-ahead capability are incorporated to handle variability and uncertainty. However, reducing the time step of tertiary controls necessitates investigating time-scale coupling with primary controls so as not to exacerbate system stability issues. In this paper, an optimal coordination (OC) strategy, which considers multiple time-scales, is proposed for isolated microgrid systems with a mix of DERs. This coordination strategy is based on an online moving horizon optimization approach. The effectiveness of the strategy was evaluated in terms of economics, technical performance, and computation time by varying key parameters that significantly impact performance. The illustrative example with realistic scenarios on a simulated isolated microgrid test system suggests that the proposed approach is generalizable towards designing multi-time scale optimal coordination strategies for isolated power systems.
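The sketch below is a toy receding-horizon (moving-horizon) dispatch of a battery and a fossil generator against a net-load forecast, shown only to illustrate the look-ahead idea; the function name, parameter values, and cost figure are invented and have nothing to do with the paper's test system. It assumes cvxpy is available and formulates the step as a small convex program.

import numpy as np
import cvxpy as cp

def mpc_step(soc0, net_load_forecast, dt=0.25, cap=100.0, p_max=25.0, fuel_cost=50.0):
    """One moving-horizon step: dispatch battery + fossil unit over the forecast
    horizon, return only the first-interval setpoints (receding-horizon principle)."""
    H = len(net_load_forecast)
    p_batt = cp.Variable(H)                # battery power, + = discharge (kW)
    p_gen = cp.Variable(H, nonneg=True)    # fossil generation (kW)
    soc = soc0 - dt * cp.cumsum(p_batt)    # simple energy balance (kWh)

    constraints = [p_gen + p_batt == net_load_forecast,   # supply = demand
                   cp.abs(p_batt) <= p_max,
                   soc >= 0, soc <= cap]
    cp.Problem(cp.Minimize(fuel_cost * cp.sum(p_gen) * dt), constraints).solve()
    return p_batt.value[0], p_gen.value[0]

# Hypothetical 2-hour forecast of load minus renewables at 15-min resolution.
forecast = np.array([30, 35, 20, -5, -10, 15, 40, 45], dtype=float)
print(mpc_step(soc0=50.0, net_load_forecast=forecast))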
ERIC Educational Resources Information Center
Rojahn, Johannes; Schroeder, Stephen R.; Mayo-Ortega, Liliana; Oyama-Ganiko, Rosao; LeBlanc, Judith; Marquis, Janet; Berke, Elizabeth
2013-01-01
Reliable and valid assessment of aberrant behaviors is essential in empirically verifying prevention and intervention for individuals with intellectual or developmental disabilities (IDD). Few instruments exist which assess behavior problems in infants. The current longitudinal study examined the performance of three behavior-rating scales for…
2018-02-15
models and approaches are also valid using other invasive and non-invasive technologies. Finally, we illustrate and experimentally evaluate this... 2017 project outline: pattern formation diversity in wild microbial societies; experimental and mathematical analysis methodology; skeleton... chemotaxis, nutrient degradation, and the exchange of amino acids between cells. Using both quantitative experimental methods and several theoretical
NASA Astrophysics Data System (ADS)
Bonetti, Rita M.; Reinfelds, Ivars V.; Butler, Gavin L.; Walsh, Chris T.; Broderick, Tony J.; Chisholm, Laurie A.
2016-05-01
Natural barriers such as waterfalls, cascades, rapids and riffles limit the dispersal and in-stream range of migratory fish, yet little is known of the interplay between these gradient dependent landforms, their hydraulic characteristics and flow rates that facilitate fish passage. The resurgence of dam construction in numerous river basins world-wide provides impetus to the development of robust techniques for assessment of the effects of downstream flow regime changes on natural fish passage barriers and associated consequences as to the length of rivers available to migratory species. This paper outlines a multi-scale technique for quantifying the relative magnitude of natural fish passage barriers in river systems and flow rates that facilitate passage by fish. First, a GIS-based approach is used to quantify channel gradients for the length of river or reach under investigation from a high resolution DEM, setting the magnitude of identified passage barriers in a longer context (tens to hundreds of km). Second, LiDAR, topographic and bathymetric survey-based hydrodynamic modelling is used to assess flow rates that can be regarded as facilitating passage across specific barriers identified by the river to reach scale gradient analysis. Examples of multi-scale approaches to fish passage assessment for flood-flow and low-flow passage issues are provided from the Clarence and Shoalhaven Rivers, NSW, Australia. In these river systems, passive acoustic telemetry data on actual movements and migrations by Australian bass (Macquaria novemaculeata) provide a means of validating modelled assessments of flow rates associated with successful fish passage across natural barriers. Analysis of actual fish movements across passage barriers in these river systems indicates that two dimensional hydraulic modelling can usefully quantify flow rates associated with the facilitation of fish passage across natural barriers by a majority of individual fishes for use in management decisions regarding environmental or instream flows.
NASA Astrophysics Data System (ADS)
Breuillard, H.; Aunai, N.; Le Contel, O.; Catapano, F.; Alexandrova, A.; Retino, A.; Cozzani, G.; Gershman, D. J.; Giles, B. L.; Khotyaintsev, Y. V.; Lindqvist, P. A.; Ergun, R.; Strangeway, R. J.; Russell, C. T.; Magnes, W.; Plaschke, F.; Nakamura, R.; Fuselier, S. A.; Turner, D. L.; Schwartz, S. J.; Torbert, R. B.; Burch, J.
2017-12-01
Transient and localized jets of hot plasma, also known as Bursty Bulk Flows (BBFs), play a crucial role in Earth's magnetotail dynamics because the energy input from the solar wind is partly dissipated in their vicinity, notably in their embedded dipolarization front (DF). This dissipation is in the form of strong low-frequency waves that can heat and accelerate energetic particles up to the high-latitude plasma sheet. The ion-scale dynamics of BBFs have been revealed by the Cluster and THEMIS multi-spacecraft missions. However, the dynamics of BBF propagation in the magnetotail are still under debate due to instrumental limitations and spacecraft separation distances, as well as simulation limitations. The NASA/MMS fleet, which features unprecedented high time resolution instruments and four spacecraft separated by kinetic-scale distances, has also recently shown that the DF normal dynamics and its associated emissions are below the ion gyroradius scale in this region. Large variations in the dawn-dusk direction were also observed. However, most large-scale simulations use the MHD approach and assume a 2D geometry in the XZ plane. Thus, in this study we take advantage of both multi-spacecraft observations by MMS and large-scale 3D hybrid simulations to investigate the 3D dynamics of BBFs and their associated emissions at ion scales in Earth's magnetotail, and their impact on particle heating and acceleration.
NASA Astrophysics Data System (ADS)
Dekavalla, Maria; Argialas, Demetre
2017-07-01
The analysis of undersea topography and geomorphological features provides necessary information to related disciplines and many applications. The development of an automated knowledge-based classification approach for undersea topography and geomorphological features is challenging due to their multi-scale nature. The aim of the study is to develop and evaluate an automated knowledge-based OBIA approach to: i) decompose the global undersea topography into multi-scale regions of distinct morphometric properties, and ii) assign the derived regions to characteristic geomorphological features. First, the global undersea topography was decomposed, using the SRTM30_PLUS bathymetry data, into so-called morphometric objects of discrete morphometric properties and spatial scales defined by data-driven methods (local variance graphs and nested means) and multi-scale analysis. The derived morphometric objects were combined with additional relative topographic position information, computed with a self-adaptive pattern recognition method (geomorphons), and with auxiliary data, and were assigned to characteristic undersea geomorphological feature classes through a knowledge base developed from standard definitions. The decomposition of the SRTM30_PLUS data to morphometric objects was considered successful for the requirements of maximizing intra-object and inter-object heterogeneity, based on the near-zero values of Moran's I and the low values of the weighted variance index. The knowledge-based classification approach was tested for its transferability in six case studies of various tectonic settings and achieved the efficient extraction of 11 undersea geomorphological feature classes. The classification results for the six case studies were compared with the digital global seafloor geomorphic features map (GSFM). The 11 undersea feature classes and their producer's accuracies with respect to the corresponding GSFM areas were Basin (95%), Continental Shelf (94.9%), Trough (88.4%), Plateau (78.9%), Continental Slope (76.4%), Trench (71.2%), Abyssal Hill (62.9%), Abyssal Plain (62.4%), Ridge (49.8%), Seamount (48.8%) and Continental Rise (25.4%). The knowledge-based OBIA classification approach was considered transferable since the percentages of spatial and thematic agreement between most of the classified undersea feature classes and the GSFM exhibited low deviations across the six case studies.
Shi, Yajuan; Wang, Ruoshi; Lu, Yonglong; Song, Shuai; Johnson, Andrew C; Sweetman, Andrew; Jones, Kevin
2016-09-01
Ecological risk assessment (ERA) has been widely applied in characterizing the risk of chemicals to organisms and ecosystems. The paucity of toxicity data on local biota living in the different compartments of an ecosystem and the absence of a suitable methodology for multi-compartment spatial risk assessment at the regional scale has held back this field. The major objective of this study was to develop a methodology to quantify and distinguish the spatial distribution of risk to ecosystems at a regional scale. A framework for regional multi-compartment probabilistic ecological risk assessment (RMPERA) was constructed and corroborated using a bioassay of a local species. The risks from cadmium (Cd) pollution in river water, river sediment, coastal water, coastal surface sediment and soil in northern Bohai Rim were examined. The results indicated that the local organisms in soil, river, coastal water, and coastal sediment were affected by Cd. The greatest impacts from Cd were identified in the Tianjin and Huludao areas. The overall multi-compartment risk was 31.4% in the region. The methodology provides a new approach for regional multi-compartment ecological risk assessment. Copyright © 2016 Elsevier Ltd. All rights reserved.
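As a generic, hedged sketch of one way a probabilistic risk value can be computed for a single compartment (the probability that a lognormal exposure distribution exceeds a lognormal species sensitivity distribution, estimated by Monte Carlo), the snippet below uses invented distribution parameters and is not the paper's specific RMPERA workflow.

import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical lognormal exposure and species-sensitivity distributions for Cd
# in one compartment (units: ug/L); medians and spreads are made up.
exposure = rng.lognormal(mean=np.log(0.8), sigma=0.9, size=n)   # environmental conc.
ssd_hc = rng.lognormal(mean=np.log(5.0), sigma=1.1, size=n)     # species effect conc.

# Joint probabilistic risk: chance a random exposure exceeds a random species threshold.
risk = np.mean(exposure > ssd_hc)
print(f"probabilistic risk for this compartment: {risk:.1%}")

# A multi-compartment overall risk could then combine per-compartment values,
# e.g. assuming independence: 1 - prod(1 - risk_i).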
Saura, Santiago; Rondinini, Carlo
2016-01-01
One of the biggest challenges in large-scale conservation is quantifying connectivity at broad geographic scales and for a large set of species. Because connectivity analyses can be computationally intensive, and the planning process quite complex when multiple taxa are involved, assessing connectivity at large spatial extents for many species often turns out to be intractable. This limitation means that assessments are often either partial, focusing on a few key species only, or generic, considering a range of dispersal distances and a fixed set of areas to connect that are not directly linked to the actual spatial distribution or mobility of particular species. By using a graph theory framework, here we propose an approach to reduce computational effort and effectively consider large assemblages of species in obtaining multi-species connectivity priorities. We demonstrate the potential of the approach by identifying defragmentation priorities in the Italian road network focusing on medium and large terrestrial mammals. We show that by combining probabilistic species graphs prior to conducting the network analysis (i) it is possible to analyse connectivity once for all species simultaneously, obtaining conservation or restoration priorities that apply to the entire species assemblage; and that (ii) those priorities are well aligned with the ones that would be obtained by aggregating the results of separate connectivity analyses for each of the individual species. This approach offers great opportunities to extend connectivity assessments to large assemblages of species and broad geographic scales. PMID:27768718
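The sketch below shows, in a simplified and hedged form, the idea of combining per-species probabilistic habitat graphs into one multi-species graph before any network analysis: edge dispersal probabilities are averaged across species and links are then ranked by the combined value. It uses networkx; the node names, probabilities, and the crude ranking are invented stand-ins for the probability-of-connectivity style metrics used in such studies.

import networkx as nx

# Hypothetical per-species graphs: same habitat nodes, species-specific
# edge attribute "p" = probability of successful dispersal along that link.
def species_graph(edge_probs):
    g = nx.Graph()
    for (u, v), p in edge_probs.items():
        g.add_edge(u, v, p=p)
    return g

g_wolf = species_graph({("A", "B"): 0.8, ("B", "C"): 0.4, ("A", "C"): 0.1})
g_deer = species_graph({("A", "B"): 0.6, ("B", "C"): 0.7, ("A", "C"): 0.3})

# Combine prior to analysis: average dispersal probability across the assemblage.
combined = nx.Graph()
for g in (g_wolf, g_deer):
    for u, v, data in g.edges(data=True):
        if combined.has_edge(u, v):
            combined[u][v]["p_sum"] += data["p"]
            combined[u][v]["n"] += 1
        else:
            combined.add_edge(u, v, p_sum=data["p"], n=1)
for u, v, data in combined.edges(data=True):
    data["p"] = data["p_sum"] / data["n"]

# Rank links (e.g. road crossings to defragment) by combined dispersal probability.
ranked = sorted(combined.edges(data="p"), key=lambda e: e[2], reverse=True)
print(ranked)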
Surface chemistry and microscopy of food powders
NASA Astrophysics Data System (ADS)
Burgain, Jennifer; Petit, Jeremy; Scher, Joël; Rasch, Ron; Bhandari, Bhesh; Gaiani, Claire
2017-12-01
Despite high industrial and scientific interest, a comprehensive review of the surface science of food powders is still lacking. There is a real gap between scientific concerns of the field and accessible reviews on the subject. The global description of the surface of food powders by multi-scale microscopy approaches seems to be essential in order to investigate their complexity and take advantage of their high innovation potential. Links between these techniques and the interest in developing a multi-analytical approach to investigate scientific questions dealing with powder functionality are discussed in the second part of the review. Finally, some techniques used in other fields and showing promising possibilities in the food powder domain will be highlighted.
Assessing Wetland Anthropogenic Stress using GIS; a Multi-scale Watershed Approach
Watersheds are widely recognized as essential summary units for ecosystem research and management, particularly in aquatic systems. As the drainage basin in which surface water drains toward a lake, stream, river, or wetland at a lower elevation, watersheds represent spatially e...
Epithelial perturbation by inhaled chlorine: Multi-scale mechanistic modeling in rats and humans
Chlorine is a high-production volume, hazardous air pollutant and irritant gas of interest to homeland security. Thus, scenarios of interest for risk characterization range from acute high-level exposures to lower-level chronic exposures. Risk assessment approaches to estimate ...
A multi-scale approach for near-surface pavement cracking and failure mechanisms.
DOT National Transportation Integrated Search
2010-11-30
Near-surface cracking is one of the predominant distress types in flexible pavements. The occurrence of near-surface cracking, also sometimes referred to as top-down cracking, has increased in recent years with the increased construction of...
Understanding the source of multifractality in financial markets
NASA Astrophysics Data System (ADS)
Barunik, Jozef; Aste, Tomaso; Di Matteo, T.; Liu, Ruipeng
2012-09-01
In this paper, we use the generalized Hurst exponent approach to study the multi-scaling behavior of different financial time series. We show that this approach is robust and powerful in detecting different types of multi-scaling. We observe a puzzling phenomenon where an apparent increase in multifractality is measured in time series generated from shuffled returns, where all time-correlations are destroyed, while the return distributions are conserved. This effect is robust and is reproduced in several sets of real financial data including stock market indices, exchange rates and interest rates. In order to understand the origin of this effect we investigate different simulated time series by means of the Markov switching multifractal model, autoregressive fractionally integrated moving average processes with stable innovations, fractional Brownian motion and Levy flights. Overall we conclude that the multifractality observed in financial time series is mainly a consequence of the characteristic fat-tailed distribution of the returns, and that time-correlations act to decrease the measured multifractality.
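A toy sketch of the generalized Hurst exponent H(q) is given below: it estimates H(q) from the scaling of q-th order absolute increments, E|x(t+tau)-x(t)|^q ~ tau^(qH(q)), applied to a synthetic random walk. It illustrates the idea only and is not the authors' exact estimator.

import numpy as np

def generalized_hurst(x, q=2, taus=range(1, 20)):
    """Estimate H(q) from the scaling of q-th order increments:
    E|x(t+tau)-x(t)|^q ~ tau^(q*H(q))."""
    taus = np.asarray(list(taus))
    moments = np.array([np.mean(np.abs(x[tau:] - x[:-tau]) ** q) for tau in taus])
    slope = np.polyfit(np.log(taus), np.log(moments), 1)[0]
    return slope / q

rng = np.random.default_rng(1)
prices = np.cumsum(rng.standard_normal(10_000))    # Brownian-like toy "price" series

for q in (1, 2, 3):
    print(f"H({q}) ~ {generalized_hurst(prices, q):.3f}")   # ~0.5 for a random walk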
Dutta, Achintya Kumar; Vaval, Nayana; Pal, Sourav
2015-01-28
We propose a new, elegant strategy to implement a third-order triples correction, in the light of many-body perturbation theory, in the Fock-space multi-reference coupled cluster method for the ionization problem. The computational scaling as well as the storage requirement is of key concern in any many-body calculation. Our proposed approach scales as N^6, does not require the storage of triples amplitudes, and gives superior agreement compared with all previous attempts. This approach is capable of calculating multiple roots in a single calculation, in contrast to the inclusion of perturbative triples in the equation-of-motion variant of coupled cluster theory, where each root needs to be computed in a state-specific way and requires both the left and right state vectors together. The performance of the newly implemented scheme is tested by applying it to methylene, the boron nitride (B2N) anion, nitrogen, water, carbon monoxide, acetylene, formaldehyde, and thymine monomer, a DNA base.
New insights into the multi-scale climatic drivers of the "Karakoram anomaly"
NASA Astrophysics Data System (ADS)
Collier, S.; Moelg, T.; Nicholson, L. I.; Maussion, F.; Scherer, D.; Bush, A. B.
2012-12-01
Glacier behaviour in the Karakoram region of the northwestern Himalaya shows strong spatial and temporal heterogeneity and, in some basins, anomalous trends compared with glaciers elsewhere in High Asia. Our knowledge of the mass balance fluctuations of Karakoram glaciers as well as of the important driving factors and interactions between them is limited by a scarcity of in-situ measurements and other studies. Here we employ a novel approach to simulating atmosphere-cryosphere interactions - coupled high-resolution atmospheric and physically-based surface mass balance modelling - to examine the surface energy and mass fluxes of glaciers in this region. We discuss the mesoscale climatic drivers behind surface mass balance fluctuations as well as the influence of local forcing factors, such as debris cover and feedbacks from the glacier surface to the atmosphere. The coupled modelling approach therefore provides an innovative, multi-scale solution to the paucity of information we have to date on the much-debated "Karakoram anomaly."
Multi-criteria decision making--an approach to setting priorities in health care.
Nobre, F F; Trotta, L T; Gomes, L F
1999-12-15
The objective of this paper is to present a multi-criteria decision making (MCDM) approach to support public health decision making that takes into consideration the fuzziness of the decision goals and the behavioural aspect of the decision maker. The approach is used to analyse the process of health technology procurement in a University Hospital in Rio de Janeiro, Brazil. The method, known as TODIM, relies on evaluating alternatives with a set of decision criteria assessed using an ordinal scale. Fuzziness in generating criteria scores and weights or conflicts caused by dealing with different viewpoints of a group of decision makers (DMs) are solved using fuzzy set aggregation rules. The results suggested that MCDM models, incorporating fuzzy set approaches, should form a set of tools for public health decision making analysis, particularly when there are polarized opinions and conflicting objectives from the DM group. Copyright 1999 John Wiley & Sons, Ltd.
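A compact sketch of the core TODIM dominance computation (value function with gain/loss asymmetry and an attenuation factor theta) is given below. The alternatives, criterion scores, and weights are invented, and the fuzzy aggregation of group judgements described above is not reproduced.

import numpy as np

def todim(scores, weights, theta=1.0):
    """Minimal TODIM ranking. scores: (n_alternatives, n_criteria) matrix of
    normalized criterion scores; weights: criterion weights; theta: loss
    attenuation factor. Returns global values in [0, 1] (higher is better)."""
    w = np.asarray(weights, dtype=float)
    wr = w / w.max()                      # weights relative to the reference criterion
    z = np.asarray(scores, dtype=float)
    n = z.shape[0]
    delta = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            for c in range(z.shape[1]):
                d = z[i, c] - z[j, c]
                if d > 0:                                   # gain
                    delta[i, j] += np.sqrt(wr[c] * d / wr.sum())
                elif d < 0:                                 # loss (attenuated)
                    delta[i, j] -= np.sqrt(wr.sum() * (-d) / wr[c]) / theta
    xi = delta.sum(axis=1)
    return (xi - xi.min()) / (xi.max() - xi.min())

# Hypothetical procurement example: 3 technologies scored on 3 criteria.
scores = [[0.8, 0.6, 0.4],
          [0.5, 0.9, 0.7],
          [0.6, 0.5, 0.9]]
print(todim(scores, weights=[0.5, 0.3, 0.2], theta=2.5))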
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pradhan, V.R.; Lee, L.K.; Stalzer, R.H.
1995-12-31
The development of Catalytic Multi-Stage Liquefaction (CMSL) at HTI has focused on both bituminous and sub-bituminous coals using laboratory, bench and PDU scale operations. The crude oil equivalent cost of liquid fuels from coal has been curtailed to about $30 per barrel, thus achieving over 30% reduction in the price that was evaluated for the liquefaction technologies demonstrated in the late seventies and early eighties. Contrary to common belief, the new generation of catalytic multistage coal liquefaction processes is environmentally very benign and can produce clean, premium distillates with a very low (<10 ppm) heteroatom content. The HTI staff has been involved over the years in process development and has made significant improvements in the CMSL processing of coals. A 24-month program (extended to September 30, 1995) to study novel concepts, using a continuous bench-scale Catalytic Multi-Stage unit (30 kg coal/day), has been under way since December 1992. This program consists of ten bench-scale operations supported by laboratory studies, modelling, process simulation and economic assessments. The Catalytic Multi-Stage Liquefaction work is a continuation of the second-generation effort, using a low/high temperature approach. This paper covers work performed between October 1994 and August 1995, especially results obtained from the microautoclave support activities and the bench-scale operations for runs CMSL-08 and CMSL-09, during which coal and the plastic components of municipal solid waste (MSW) such as high density polyethylene (HDPE), polypropylene (PP), polystyrene (PS), and polyethylene terephthalate (PET) were coprocessed.
NASA Astrophysics Data System (ADS)
Liu, Jianfei; Dubra, Alfredo; Tam, Johnny
2016-03-01
Cone photoreceptors are highly specialized cells responsible for the origin of vision in the human eye. Their inner segments can be noninvasively visualized using adaptive optics scanning light ophthalmoscopes (AOSLOs) with nonconfocal split detection capabilities. Monitoring the number of cones can lead to more precise metrics for real-time diagnosis and assessment of disease progression. Cell identification in split detection AOSLO images is hindered by cell regions with heterogeneous intensity arising from shadowing effects and low contrast boundaries due to overlying blood vessels. Here, we present a multi-scale circular voting approach to overcome these challenges through the novel combination of: 1) iterative circular voting to identify candidate cells based on their circular structures, 2) a multi-scale strategy to identify the optimal circular voting response, and 3) clustering to improve robustness while removing false positives. We acquired images from three healthy subjects at various locations on the retina and manually labeled cell locations to create ground-truth for evaluating the detection accuracy. The images span a large range of cell densities. The overall recall, precision, and F1 score were 91±4%, 84±10%, and 87±7% (Mean±SD). Results showed that our method for the identification of cone photoreceptor inner segments performs well even with low contrast cell boundaries and vessel obscuration. These encouraging results demonstrate that the proposed approach can robustly and accurately identify cells in split detection AOSLO images.
Image Description with Local Patterns: An Application to Face Recognition
NASA Astrophysics Data System (ADS)
Zhou, Wei; Ahrary, Alireza; Kamata, Sei-Ichiro
In this paper, we propose a novel approach for representing the local features of a digital image using 1D Local Patterns by Multi-Scans (1DLPMS). We also consider extensions and simplifications of the proposed approach for facial image analysis. The proposed approach consists of three steps. In the first step, the gray values of pixels in the image are represented as a vector giving the local neighborhood intensity distributions of the pixels. Then, multi-scans are applied to capture different spatial information on the image, with the advantage of less computation than traditional methods such as Local Binary Patterns (LBP). The second step encodes the local features based on different encoding rules using 1D local patterns. This transformation is expected to be less sensitive to illumination variations while preserving the appearance of images embedded in the original gray scale. In the final step, Grouped 1D Local Patterns by Multi-Scans (G1DLPMS) is applied to make the proposed approach computationally simpler and easy to extend. Next, we further formulate a boosted algorithm to extract the most discriminant local features. The evaluation results demonstrate that the proposed approach outperforms conventional approaches in terms of accuracy in applications of face recognition, gender estimation and facial expression recognition.
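To convey the spirit of encoding 1D local patterns along an image scan (threshold each pixel's 1D neighbours against the centre and pack the bits into a code, then pool codes into a histogram descriptor), the toy sketch below uses a single horizontal raster scan. The multi-scan combinations, encoding rules, and grouping step of the paper are not reproduced, and the "face" input is a random stand-in.

import numpy as np

def local_pattern_1d(row, radius=2):
    """Encode each pixel of a 1D scan by comparing its 2*radius neighbours
    with the centre value (1 if neighbour >= centre), packed into an integer."""
    codes = []
    for i in range(radius, len(row) - radius):
        neighbours = np.r_[row[i - radius:i], row[i + 1:i + 1 + radius]]
        bits = (neighbours >= row[i]).astype(int)
        codes.append(int("".join(map(str, bits)), 2))
    return np.array(codes)

def descriptor(image, radius=2):
    """Histogram of 1D codes collected over a horizontal scan of every row."""
    n_bins = 2 ** (2 * radius)
    codes = np.concatenate([local_pattern_1d(r, radius) for r in image])
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / hist.sum()

rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(32, 32))       # stand-in for a grayscale face crop
print(descriptor(face)[:8])                       # first few histogram bins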
Progress in the Phase 0 Model Development of a STAR Concept for Dynamics and Control Testing
NASA Technical Reports Server (NTRS)
Woods-Vedeler, Jessica A.; Armand, Sasan C.
2003-01-01
The paper describes progress in the development of a lightweight, deployable passive Synthetic Thinned Aperture Radiometer (STAR). The spacecraft concept presented will enable the realization of 10 km resolution global soil moisture and ocean salinity measurements at 1.41 GHz. The focus of this work was on definition of an approximately 1/3-scaled, 5-meter Phase 0 test article for concept demonstration and dynamics and control testing. Design requirements, parameters and a multi-parameter, hybrid scaling approach for the dynamically scaled test model were established. The El Scaling Approach that was established allows designers freedom to define the cross section of scaled, lightweight structural components that is most convenient for manufacturing when the mass of the component is small compared to the overall system mass. Static and dynamic response analysis was conducted on analytical models to evaluate system level performance and to optimize panel geometry for optimal tension load distribution.
Wildhaber, Mark L.; Wikle, Christopher K.; Anderson, Christopher J.; Franz, Kristie J.; Moran, Edward H.; Dey, Rima; Mader, Helmut; Kraml, Julia
2012-01-01
Climate change operates over a broad range of spatial and temporal scales. Understanding its effects on ecosystems requires multi-scale models. To understand effects on fish populations in riverine ecosystems, climate predicted by coarse-resolution Global Climate Models must be downscaled to Regional Climate Models, to watersheds, to river hydrology, and finally to population response. An additional challenge is quantifying sources of uncertainty given the highly nonlinear nature of interactions between climate variables and community-level processes. We present a modeling approach for understanding and accommodating uncertainty by applying multi-scale climate models and a hierarchical Bayesian modeling framework to Midwest fish population dynamics and by linking models for system components together by formal rules of probability. The proposed hierarchical modeling approach will account for sources of uncertainty in forecasts of community or population response. The goal is to evaluate the potential distributional changes in an ecological system, given distributional changes implied by a series of linked climate and system models under various emissions/use scenarios. This understanding will aid evaluation of management options for coping with global climate change. In our initial analyses, we found that predicted pallid sturgeon population responses were dependent on the climate scenario considered.
Multi-scale genetic dynamic modelling I : an algorithm to compute generators.
Kirkilionis, Markus; Janus, Ulrich; Sbano, Luca
2011-09-01
We present a new approach, or framework, to model dynamic regulatory genetic activity. The framework uses a multi-scale analysis based upon generic assumptions on the relative time scales attached to the different transitions of molecular states defining the genetic system. At the micro-level such systems are regulated by the interaction of two kinds of molecular players: macro-molecules like DNA or polymerases, and smaller molecules acting as transcription factors. The proposed genetic model then represents the larger, less abundant molecules with a finite discrete state space, for example describing different conformations of these molecules. This is in contrast to the representation of the transcription factors, which are, as in classical reaction kinetics, represented by their particle number only. We illustrate the method by considering the genetic activity associated with certain configurations of interacting genes that are fundamental to modelling (synthetic) genetic clocks. A largely unknown question is how different molecular details incorporated via this more realistic modelling approach lead to different macroscopic regulatory genetic models, whose dynamical behaviour might, in general, differ between model choices. The theory will be applied to a real synthetic clock in a second accompanying article (Kirkilionis et al., Theory Biosci, 2011).
Tschakert, P.; Tappan, G.
2004-01-01
This paper presents the results of a multi-scale investigation of environmental change in the Old Peanut Basin of Senegal throughout the 20th century. Based on historical accounts, ethnographies, aerial photos, satellite images, field and household surveys as well as various participatory research activities with farmers in selected villages, the study attempts to make explicit layered scales of analysis, both temporally and spatially. It shows that, despite some general trends of resource degradation in the Old Peanut Basin, local farming systems have embarked on different pathways of change to adapt to their evolving environment. It also illustrates that high diversity with respect to soil fertility management exists at the farm and household level. Finally, the paper proposes a farmer-oriented approach to carbon sequestration in order to integrate recommended technical options more efficiently into the complex and dynamic livelihoods of smallholders in dryland environments. This approach includes pathway-specific land use and management options at the level of farming systems and, at the level of individual households, a basket of possible practices from which farmers can choose depending on their multiple needs, capacities, and adaptive strategies to cope with risk and uncertainty.
The 300 Area Integrated Field Research Challenge Quality Assurance Project Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fix, N. J.
Pacific Northwest National Laboratory and a group of expert collaborators are using the U.S. Department of Energy Hanford Site 300 Area uranium plume within the footprint of the 300-FF-5 groundwater operable unit as a site for an Integrated Field-Scale Subsurface Research Challenge (IFRC). The IFRC is entitled Multi-Scale Mass Transfer Processes Controlling Natural Attenuation and Engineered Remediation: An IFRC Focused on the Hanford Site 300 Area Uranium Plume Project. The theme is investigation of multi-scale mass transfer processes. A series of forefront science questions on mass transfer are posed for research that relate to the effect of spatial heterogeneities; the importance of scale; coupled interactions between biogeochemical, hydrologic, and mass transfer processes; and measurements/approaches needed to characterize and model a mass transfer-dominated system. This Quality Assurance Project Plan provides the quality assurance requirements and processes that will be followed by the 300 Area IFRC Project. This plan is designed to be used exclusively by project staff.
Nonlinear power spectrum from resummed perturbation theory: a leap beyond the BAO scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anselmi, Stefano; Pietroni, Massimo, E-mail: anselmi@ieec.uab.es, E-mail: massimo.pietroni@pd.infn.it
2012-12-01
A new computational scheme for the nonlinear cosmological matter power spectrum (PS) is presented. Our method is based on evolution equations in time, which can be cast in a form extremely convenient for fast numerical evaluations. A nonlinear PS is obtained in a time comparable to that needed for a simple 1-loop computation, and the numerical implementation is very simple. Our results agree with N-body simulations at the percent level in the BAO range of scales, and at the few-percent level up to k ≅ 1 h/Mpc at z ≳ 0.5, thereby opening the possibility of applying this tool to scales interesting for weak lensing. We clarify the approximations inherent to this approach as well as its relations to previous ones, such as the Time Renormalization Group and the multi-point propagator expansion. We discuss possible lines of improvement of the method and its intrinsic limitations by multi-streaming at small scales and low redshifts.
Organizing phenological data resources to inform natural resource conservation
Rosemartin, Alyssa H.; Crimmins, Theresa M.; Enquist, Carolyn A.F.; Gerst, Katharine L.; Kellermann, Jherime L.; Posthumus, Erin E.; Denny, Ellen G.; Guertin, Patricia; Marsh, Lee; Weltzin, Jake F.
2014-01-01
Changes in the timing of plant and animal life cycle events, in response to climate change, are already happening across the globe. The impacts of these changes may affect biodiversity via disruption to mutualisms, trophic mismatches, invasions and population declines. To understand the nature, causes and consequences of changed, varied or static phenologies, new data resources and tools are being developed across the globe. The USA National Phenology Network is developing a long-term, multi-taxa phenological database, together with a customizable infrastructure, to support conservation and management needs. We present current and potential applications of the infrastructure, across scales and user groups. The approaches described here are congruent with recent trends towards multi-agency, large-scale research and action.
NASA Astrophysics Data System (ADS)
Rainville, L.; Farrar, J. T.; Shcherbina, A.; Centurioni, L. R.
2017-12-01
The Salinity Processes in the Upper-ocean Regional Study (SPURS) is a program aimed at understanding the patterns and variability of sea surface salinity. Following the first SPURS program in an evaporation-dominated region (2012-2013), the SPURS-2 program targeted a wide range of spatial and temporal scales associated with processes controlling salinity in the rain-dominated Eastern Pacific Fresh Pool. Autonomous instruments were delivered by research vessels in August and September 2016 and conducted observations over one complete annual cycle. The SPURS-2 field program used coordinated observations from many different autonomous platforms, and a mix of Lagrangian and Eulerian approaches. Here we discuss the motivation, implementation, and early results of SPURS-2.
W49A: A Massive Molecular Cloud Forming a Massive Star Cluster in the Galactic Disk
NASA Astrophysics Data System (ADS)
Galvan-Madrid, Roberto; Liu, Hauyu Baobab; Pineda, Jaime E.; Zhang, Zhi-Yu; Ginsburg, Adam; Roman-Zuñiga, Carlos; Peters, Thomas
2015-08-01
I summarize our current results of the MUSCLE survey of W49A, the most luminous star formation region in the Milky Way. Our approach emphasizes multi-scale, multi-resolution imaging in dust, ionized-, and molecular gas, to trace the multiple gas components from <0.1 pc (core scale) all the way up to the scale of the entire giant molecular cloud (GMC), ~100 pc. The 10^6 M⊙ GMC is structured in a radial network of filaments that converges toward the central 'hub' with ~2x10^5 M⊙, which contains within a few pc a deeply embedded young massive cluster (YMC) of stellar mass ~5x10^4 M⊙. We also discuss the dynamics of the filamentary network, the role of turbulence in the formation of this YMC, and how objects like W49A can link Milky Way and extragalactic star formation relations.
NASA Astrophysics Data System (ADS)
Li, Yongbo; Yang, Yuantao; Li, Guoyan; Xu, Minqiang; Huang, Wenhu
2017-07-01
Health condition identification of planetary gearboxes is crucial to reduce the downtime and maximize productivity. This paper aims to develop a novel fault diagnosis method based on modified multi-scale symbolic dynamic entropy (MMSDE) and minimum redundancy maximum relevance (mRMR) to identify the different health conditions of planetary gearbox. MMSDE is proposed to quantify the regularity of time series, which can assess the dynamical characteristics over a range of scales. MMSDE has obvious advantages in the detection of dynamical changes and computation efficiency. Then, the mRMR approach is introduced to refine the fault features. Lastly, the obtained new features are fed into the least square support vector machine (LSSVM) to complete the fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault types of planetary gearboxes.
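The sketch below captures the flavour of a multi-scale symbolic entropy feature: coarse-grain the signal at several scales, map each coarse-grained series to symbols, and take the Shannon entropy of the symbol distribution. The exact MMSDE formulation (modified coarse-graining and state-transition terms) and the mRMR/LSSVM stages are not reproduced, and the vibration signals are synthetic stand-ins.

import numpy as np

def coarse_grain(x, scale):
    # Non-overlapping averages of length `scale` (classic multi-scale step).
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def symbol_entropy(x, n_symbols=5):
    # Map samples to equal-width symbols, then take the Shannon entropy of symbols.
    edges = np.linspace(x.min(), x.max(), n_symbols + 1)
    symbols = np.clip(np.digitize(x, edges) - 1, 0, n_symbols - 1)
    p = np.bincount(symbols, minlength=n_symbols) / len(symbols)
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def multi_scale_entropy(x, max_scale=10, n_symbols=5):
    return [symbol_entropy(coarse_grain(x, s), n_symbols) for s in range(1, max_scale + 1)]

rng = np.random.default_rng(3)
healthy = rng.standard_normal(4096)                                    # stand-in signal
faulty = healthy + 0.8 * np.sin(2 * np.pi * 0.05 * np.arange(4096))    # added periodic component
print(np.round(multi_scale_entropy(healthy), 3))
print(np.round(multi_scale_entropy(faulty), 3))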
James, Joseph; Murukeshan, Vadakke Matham; Woh, Lye Sun
2014-07-01
The structural and molecular heterogeneities of biological tissues demand the interrogation of samples with multiple energy sources and visualization capabilities at varying spatial resolution and depth scales to obtain complementary diagnostic information. A novel multi-modal imaging approach that uses optical and acoustic energies to perform photoacoustic, ultrasound and fluorescence imaging at multiple resolution scales from the tissue surface and depth is proposed in this paper. The system comprises two distinct forms of hardware-level integration so as to form an integrated imaging system under a single instrumentation set-up. The experimental studies show that the system is capable of mapping high resolution fluorescence signatures from the surface, and optical absorption and acoustic heterogeneities along the depth (>2 cm) of the tissue, at multi-scale resolution (<1 µm to <0.5 mm).
Improvement and Extension of Shape Evaluation Criteria in Multi-Scale Image Segmentation
NASA Astrophysics Data System (ADS)
Sakamoto, M.; Honda, Y.; Kondo, A.
2016-06-01
Over the last decade, multi-scale image segmentation has attracted particular interest and is now widely used for object-based image analysis. In this study, we address issues in multi-scale image segmentation, especially improving the validity of merging and the variety of derived region shapes. First, we introduce constraints on the application of the spectral criterion that suppress excessive merging between dissimilar regions. Second, we extend the evaluation of the smoothness criterion by modifying the definition of the object's extent, which controls shape diversity. Third, we develop a new shape criterion, the aspect ratio. This criterion improves how well the reproduced object shapes match the actual objects of interest by constraining the aspect ratio of the object's bounding box while keeping the properties controlled by the conventional shape criteria. These improvements and extensions lead to more accurate, flexible, and diverse segmentation results according to the shape characteristics of the targets of interest. Furthermore, we also investigate a technique for quantitative and automatic parameterization in multi-scale image segmentation. The approach compares the segmentation result with a training area specified in advance, either maximizing the average area of the derived objects or satisfying the F-measure evaluation index. This makes it possible to automate a parameterization suited to the objectives, especially with respect to shape reproducibility.
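Two of the quantitative ingredients mentioned above can be illustrated with a small sketch: the aspect ratio of a segment's bounding box and the F-measure between a segmentation and a training polygon. The segment masks here are toy arrays, and the paper's actual merging criteria and parameter search are not reproduced.

import numpy as np

def bounding_box_aspect_ratio(mask):
    """Aspect ratio (long side / short side) of the bounding box of a binary segment."""
    rows, cols = np.nonzero(mask)
    h = rows.max() - rows.min() + 1
    w = cols.max() - cols.min() + 1
    return max(h, w) / min(h, w)

def f_measure(segment, training):
    """F-measure between a candidate segment and a training-area mask."""
    tp = np.logical_and(segment, training).sum()
    precision = tp / segment.sum()
    recall = tp / training.sum()
    return 2 * precision * recall / (precision + recall) if tp else 0.0

# Toy example: an elongated segment versus a square training area.
segment = np.zeros((20, 20), dtype=bool); segment[8:12, 2:18] = True
training = np.zeros((20, 20), dtype=bool); training[6:14, 6:14] = True
print("aspect ratio:", bounding_box_aspect_ratio(segment))
print("F-measure:", round(f_measure(segment, training), 3))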
Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stander, Nielen; Basudhar, Anirban; Basu, Ushnish
2015-06-15
Ever-tightening regulations on fuel economy and carbon emissions demand continual innovation in finding ways for reducing vehicle mass. Classical methods for computational mass reduction include sizing, shape and topology optimization. One of the few remaining options for weight reduction can be found in materials engineering and material design optimization. Apart from considering different types of materials by adding material diversity, an appealing option in automotive design is to engineer steel alloys for the purpose of reducing thickness while retaining sufficient strength and ductility required for durability and safety. Such a project was proposed and is currently being executed under the auspices of the United States Automotive Materials Partnership (USAMP) funded by the Department of Energy. Under this program, new steel alloys (Third Generation Advanced High Strength Steel or 3GAHSS) are being designed, tested and integrated with the remaining design variables of a benchmark vehicle Finite Element model. In this project the principal phases identified are (i) material identification, (ii) formability optimization and (iii) multi-disciplinary vehicle optimization. This paper serves as an introduction to the LS-OPT methodology and therefore mainly focuses on the first phase, namely an approach to integrate material identification using material models of different length scales. For this purpose, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a Homogenized State Variable (SV) model, is discussed and demonstrated. The paper concludes with proposals for integrating the multi-scale methodology into the overall vehicle design.
Joseph D. Holbrook; John R. Squires; Lucretia E. Olson; Rick L. Lawrence; Shannon L. Savage
2016-01-01
Snowshoe hares (Lepus americanus) are an ecologically important herbivore because they modify vegetation through browsing and serve as a prey resource for multiple predators. We implemented a multiscale approach to characterize habitat relationships for snowshoe hares across the mixed conifer landscape of the northern Rocky Mountains, USA. Our objectives were to (1)...
A Multi-scale Cognitive Approach to Intrusion Detection and Response
2015-12-28
the behavior of the traffic on the network, either by using mathematical formulas or by replaying packet streams. As a result, simulators depend...large scale. Summary of the most important results: We obtained a powerful machine, which has 768 cores and 1.25 TB memory. RBG has been...time. Each client is configured with 1 GB memory, 10 GB disk space, and one 100M Ethernet interface. The server nodes include web servers
Towards efficient multi-scale methods for monitoring sugarcane aphid infestations in sorghum
USDA-ARS?s Scientific Manuscript database
We discuss approaches and issues involved with developing optimal monitoring methods for sugarcane aphid infestations (SCA) in grain sorghum. We discuss development of sequential sampling methods that allow for estimation of the number of aphids per sample unit, and statistical decision making rela...
A Re-Unification of Two Competing Models for Document Retrieval.
ERIC Educational Resources Information Center
Bodoff, David
1999-01-01
Examines query-oriented versus document-oriented information retrieval and feedback learning. Highlights include a reunification of the two approaches for probabilistic document retrieval and for vector space model (VSM) retrieval; learning in VSM and in probabilistic models; multi-dimensional scaling; and ongoing field studies. (LRW)
Jeanne C. Chambers; Jeffrey L. Beck; Steve Campbell; John Carlson; Thomas J. Christiansen; Karen J. Clause; Jonathan B. Dinkins; Kevin E. Doherty; Kathleen A. Griffin; Douglas W. Havlina; Kenneth F. Henke; Jacob D. Hennig; Laurie L. Kurth; Jeremy D. Maestas; Mary Manning; Kenneth E. Mayer; Brian A. Mealor; Clinton McCarthy; Marco A. Perea; David A. Pyke
2016-01-01
This report provides a strategic approach developed by a Western Association of Fish and Wildlife Agencies interagency working group for conservation of sagebrush ecosystems, Greater sage-grouse, and Gunnison sage-grouse. It uses information on (1) factors that influence sagebrush ecosystem resilience to disturbance and resistance to nonnative invasive annual grasses...
2016-07-01
characteristics and to examine the sensitivity of using such techniques for evaluating microstructure. In addition to the GUI tool, a manual describing its use has... Evaluating Local Primary Dendrite Arm Spacing Characterization Techniques Using Synthetic Directionally Solidified Dendritic Microstructures, Metallurgical and...driven approach for quantifying materials uncertainty in creep deformation and failure of aerospace materials, Multi-scale Structural Mechanics and
Multi-Scale Models for the Scale Interaction of Organized Tropical Convection
NASA Astrophysics Data System (ADS)
Yang, Qiu
Assessing the upscale impact of organized tropical convection from small spatial and temporal scales is a research imperative, not only for having a better understanding of the multi-scale structures of dynamical and convective fields in the tropics, but also for eventually helping in the design of new parameterization strategies to improve the next-generation global climate models. Here self-consistent multi-scale models are derived systematically by following the multi-scale asymptotic methods and used to describe the hierarchical structures of tropical atmospheric flows. The advantages of using these multi-scale models lie in isolating the essential components of multi-scale interaction and providing assessment of the upscale impact of the small-scale fluctuations onto the large-scale mean flow through eddy flux divergences of momentum and temperature in a transparent fashion. Specifically, this thesis includes three research projects about multi-scale interaction of organized tropical convection, involving tropical flows at different scaling regimes and utilizing different multi-scale models correspondingly. Inspired by the observed variability of tropical convection on multiple temporal scales, including daily and intraseasonal time scales, the goal of the first project is to assess the intraseasonal impact of the diurnal cycle on the planetary-scale circulation such as the Hadley cell. As an extension of the first project, the goal of the second project is to assess the intraseasonal impact of the diurnal cycle over the Maritime Continent on the Madden-Julian Oscillation. In the third project, the goals are to simulate the baroclinic aspects of the ITCZ breakdown and assess its upscale impact on the planetary-scale circulation over the eastern Pacific. These simple multi-scale models should be useful to understand the scale interaction of organized tropical convection and help improve the parameterization of unresolved processes in global climate models.
NASA Astrophysics Data System (ADS)
Smith, R. C.; Collins, G. S.; Hill, J.; Piggott, M. D.; Mouradian, S. L.
2015-12-01
Numerical modelling informs risk assessment of tsunami generated by submarine slides; however, for large-scale slides modelling can be complex and computationally challenging. Many previous numerical studies have approximated slides as rigid blocks that moved according to prescribed motion. However, wave characteristics are strongly dependent on the motion of the slide and previous work has recommended that more accurate representation of slide dynamics is needed. We have used the finite-element, adaptive-mesh CFD model Fluidity, to perform multi-material simulations of deformable submarine slide-generated waves at real world scales for a 2D scenario in the Gulf of Mexico. Our high-resolution approach represents slide dynamics with good accuracy, compared to other numerical simulations of this scenario, but precludes tracking of wave propagation over large distances. To enable efficient modelling of further propagation of the waves, we investigate an approach to extract information about the slide evolution from our multi-material simulations in order to drive a single-layer wave propagation model, also using Fluidity, which is much less computationally expensive. The extracted submarine slide geometry and position as a function of time are parameterised using simple polynomial functions. The polynomial functions are used to inform a prescribed velocity boundary condition in a single-layer simulation, mimicking the effect the submarine slide motion has on the water column. The approach is verified by successful comparison of wave generation in the single-layer model with that recorded in the multi-material, multi-layer simulations. We then extend this approach to 3D for further validation of this methodology (using the Gulf of Mexico scenario proposed by Horrillo et al., 2013) and to consider the effect of lateral spreading. This methodology is then used to simulate a series of hypothetical submarine slide events in the Arctic Ocean (based on evidence of historic slides) and examine the hazard posed to the UK coast.
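A minimal sketch of the parameterisation step described above is given below: fit low-order polynomials to the slide front position extracted from the multi-material run, then differentiate to obtain the velocity that would drive the prescribed-velocity boundary condition of the single-layer model. The sample positions are invented, and the coupling into Fluidity itself is not shown.

import numpy as np

# Hypothetical slide-front position (m) versus time (s), as would be extracted
# from the multi-material simulation output.
t = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
x_front = np.array([0.0, 350.0, 1200.0, 2300.0, 3300.0, 4000.0])

# Low-order polynomial parameterisation x(t), then u(t) = dx/dt analytically.
coeffs = np.polyfit(t, x_front, deg=3)
x_poly = np.poly1d(coeffs)
u_poly = x_poly.deriv()

# Tabulate the prescribed-velocity boundary condition for the single-layer model.
for ti in np.linspace(0.0, 100.0, 6):
    print(f"t = {ti:6.1f} s   x = {x_poly(ti):8.1f} m   u = {u_poly(ti):6.2f} m/s")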
NASA Astrophysics Data System (ADS)
Marconi, S.; Collalti, A.; Santini, M.; Valentini, R.
2013-12-01
3D-CMCC-Forest Ecosystem Model is a process-based model formerly developed for complex forest ecosystems to estimate growth, water and carbon cycles, phenology and competition processes on a daily/monthly time scale. The model integrates some characteristics of functional-structural tree models with the robustness of the light use efficiency approach. It treats different heights, ages and species as discrete classes, in competition for light (vertical structure) and space (horizontal structure). The present work evaluates the results of the recently developed daily version of 3D-CMCC-FEM for two neighboring, even-aged, mono-specific study cases. The former is a heterogeneous Pedunculate oak forest (Quercus robur L.), the latter a more homogeneous Scots pine forest (Pinus sylvestris L.). The multi-layer approach has been evaluated against a series of simplified versions to determine whether the increased model complexity in canopy structure definition improves its predictive ability. Results show that a more complex structure (three height layers) should be preferable for simulating heterogeneous scenarios (Pedunculate oak stand), where the height distribution within the canopy justifies the distinction between dominant, dominated and sub-dominated layers. On the contrary, it seems that using a multi-layer approach for more homogeneous stands (Scots pine stand) may be disadvantageous. Forcing the structure of a homogeneous stand into a multi-layer approach may in fact increase sources of uncertainty. On the other hand, forcing complex forests into a simplified mono-layer model may cause an increase in mortality and a reduction in average DBH and height. Compared with measured CO2 flux data, model results show good ability in estimating carbon sequestration trends on both monthly/seasonal and daily time scales. Moreover, the model simulates quite well leaf phenology and the combined effects of the two different forest stands on CO2 fluxes.
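The light-use-efficiency core mentioned above boils down to GPP ≈ ε · fAPAR · PAR, with modifiers for temperature and water stress. The sketch below shows that relation with invented daily inputs and a made-up maximum efficiency; it is not the 3D-CMCC-FEM parameter set or layer logic.

import numpy as np

def daily_gpp(par, fapar, epsilon_max=1.8, f_temp=1.0, f_water=1.0):
    """Light-use-efficiency GPP (gC m-2 day-1).
    par: incident photosynthetically active radiation (MJ m-2 day-1),
    fapar: fraction of PAR absorbed by the canopy layer,
    epsilon_max: maximum light-use efficiency (gC MJ-1), reduced by the
    dimensionless temperature and water modifiers."""
    return epsilon_max * f_temp * f_water * fapar * par

# Invented one-week inputs for a single canopy layer.
par = np.array([8.0, 9.5, 7.2, 10.1, 6.4, 9.0, 8.8])
fapar = np.full(7, 0.65)
print("daily GPP:", np.round(daily_gpp(par, fapar, f_temp=0.9, f_water=0.8), 2))
print("weekly GPP:", round(daily_gpp(par, fapar, f_temp=0.9, f_water=0.8).sum(), 1))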
NASA Astrophysics Data System (ADS)
Schildgen, T. F.; Robinson, R. A. J.; Savi, S.; Bookhagen, B.; Tofelde, S.; Strecker, M. R.
2014-12-01
NASA Astrophysics Data System (ADS)
Balaji, V.; Benson, Rusty; Wyman, Bruce; Held, Isaac
2016-10-01
Climate models represent a large variety of processes on a variety of timescales and space scales, a canonical example of multi-physics multi-scale modeling. Current hardware trends, such as Graphical Processing Units (GPUs) and Many Integrated Core (MIC) chips, are based on, at best, marginal increases in clock speed, coupled with vast increases in concurrency, particularly at the fine grain. Multi-physics codes face particular challenges in achieving fine-grained concurrency, as different physics and dynamics components have different computational profiles, and universal solutions are hard to come by. We propose here one approach for multi-physics codes. These codes are typically structured as components interacting via software frameworks. The component structure of a typical Earth system model consists of a hierarchical and recursive tree of components, each representing a different climate process or dynamical system. This recursive structure generally encompasses a modest level of concurrency at the highest level (e.g., atmosphere and ocean on different processor sets) with serial organization underneath. We propose to extend concurrency much further by running more and more lower- and higher-level components in parallel with each other. Each component can further be parallelized on the fine grain, potentially offering a major increase in the scalability of Earth system models. We present here first results from this approach, called coarse-grained component concurrency, or CCC. Within the Geophysical Fluid Dynamics Laboratory (GFDL) Flexible Modeling System (FMS), the atmospheric radiative transfer component has been configured to run in parallel with a composite component consisting of every other atmospheric component, including the atmospheric dynamics and all other atmospheric physics components. We will explore the algorithmic challenges involved in such an approach, and present results from such simulations. Plans to achieve even greater levels of coarse-grained concurrency by extending this approach within other components, such as the ocean, will be discussed.
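A minimal sketch of the coarse-grained component concurrency idea, assuming a simple lagged coupling in which radiation and a composite "everything else" component both advance from the previous coupling state before their updates are merged. The component functions are stand-ins, not FMS code.

```python
from concurrent.futures import ThreadPoolExecutor

def radiation_step(state):
    # Stand-in for the radiative transfer component
    return {"heating_rate": 0.1 * state["temperature"]}

def rest_of_atmosphere_step(state):
    # Stand-in for dynamics plus all other atmospheric physics
    return {"temperature": state["temperature"] + state["heating_rate"]}

state = {"temperature": 288.0, "heating_rate": 0.0}

with ThreadPoolExecutor(max_workers=2) as pool:
    for _ in range(3):                              # coupling steps
        f_rad = pool.submit(radiation_step, dict(state))
        f_dyn = pool.submit(rest_of_atmosphere_step, dict(state))
        # Both components see the state from the previous coupling step;
        # their updates are merged before the next step begins.
        state.update(f_dyn.result())
        state.update(f_rad.result())

print(state)
```

The lag of one coupling interval between radiation and the rest of the physics is the price paid for letting the two components run concurrently on separate resources.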
NASA Astrophysics Data System (ADS)
Liu, C.; Yang, X.; Bailey, V. L.; Bond-Lamberty, B. P.; Hinkle, C.
2013-12-01
Mathematical representations of hydrological and biogeochemical processes in soil, plant, aquatic, and atmospheric systems vary with scale. Process-rich models are typically used to describe hydrological and biogeochemical processes at the pore and small scales, while empirical, correlation approaches are often used at the watershed and regional scales. A major challenge for multi-scale modeling is that water flow, biogeochemical processes, and reactive transport are described using different physical laws and/or expressions at the different scales. For example, the flow is governed by the Navier-Stokes equations at the pore scale in soils, by Darcy's law in soil columns and aquifers, and by the Navier-Stokes equations again in open water bodies (ponds, lakes, rivers) and the atmospheric surface layer. This research explores whether the physical laws at the different scales and in different physical domains can be unified into a single framework, the unified multi-scale model (UMSM), to systematically investigate the cross-scale, cross-domain behavior of fundamental processes at different scales. This presentation will discuss our research on the concept, mathematical equations, and numerical execution of the UMSM. Three-dimensional, multi-scale hydrological processes at the Disney Wilderness Preservation (DWP) site, Florida, will be used as an example for demonstrating the application of the UMSM. In this research, the UMSM was used to simulate hydrological processes in rooting zones at the pore and small scales, including water migration in soils under saturated and unsaturated conditions, root-induced hydrological redistribution, and the role of rooting-zone biogeochemical properties (e.g., root exudates and microbial mucilage) in water storage and wetting/draining. The small-scale simulation results were used to estimate effective water retention properties in soil columns that were superimposed on the bulk soil water retention properties at the DWP site. The UMSM parameterized from the smaller-scale simulations was then used to simulate coupled flow and moisture migration in soils in saturated and unsaturated zones, surface and groundwater exchange, and surface water flow in streams and lakes at the DWP site under dynamic precipitation conditions. Laboratory measurements of soil hydrological and biogeochemical properties are used to parameterize the UMSM at the small scales, and field measurements are used to evaluate the UMSM.
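The abstract does not state the UMSM equations. One commonly used bridging form between free flow and porous-media flow, given here purely for orientation, is a Brinkman-type momentum equation:

```latex
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right)
  = -\nabla p + \mu_e \nabla^{2}\mathbf{u} - \frac{\mu}{K}\,\mathbf{u} + \rho\,\mathbf{g}
```

In open water the permeability K is effectively infinite, so the Darcy drag term vanishes and the Navier-Stokes form is recovered; in soil the drag term dominates and the equation reduces to Darcy's law. Whether the UMSM uses this particular closure is not stated in the abstract.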
Progress in fast, accurate multi-scale climate simulations
Collins, W. D.; Johansen, H.; Evans, K. J.; ...
2015-06-01
We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.
Adaptive multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model
NASA Astrophysics Data System (ADS)
Navarro, Cristóbal A.; Huang, Wei; Deng, Youjin
2016-08-01
This work presents an adaptive multi-GPU Exchange Monte Carlo approach for the simulation of the 3D Random Field Ising Model (RFIM). The design is based on a two-level parallelization. The first level, spin-level parallelism, maps the parallel computation as optimal 3D thread-blocks that simulate blocks of spins in shared memory with minimal halo surface, assuming a constant block volume. The second level, replica-level parallelism, uses multi-GPU computation to handle the simulation of an ensemble of replicas. CUDA's concurrent kernel execution feature is used in order to fill the occupancy of each GPU with many replicas, providing a performance boost that is most noticeable at the smallest values of L. In addition to the two-level parallel design, the work proposes an adaptive multi-GPU approach that dynamically builds a proper temperature set free of exchange bottlenecks. The strategy is based on mid-point insertions at the temperature gaps where the exchange rate is most compromised. The extra work generated by the insertions is balanced across the GPUs independently of where the mid-point insertions were performed. Performance results show that spin-level performance is approximately two orders of magnitude faster than a single-core CPU version and one order of magnitude faster than a parallel multi-core CPU version running on 16 cores. Multi-GPU performance scales very well in a weak-scaling setting, reaching up to 99% efficiency as long as the number of GPUs and L increase together. The combination of the adaptive approach with the parallel multi-GPU design has extended the range of accessible system sizes to L = 32, 64 for a workstation with two GPUs. Sizes beyond L = 64 can eventually be studied using larger multi-GPU systems.
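The adaptive temperature-set construction can be sketched in a few lines; this is a toy Python/numpy version with an assumed acceptance-rate threshold, not the authors' CUDA implementation.

```python
import numpy as np

def insert_midpoints(temps, exchange_rates, threshold=0.2):
    """Insert mid-point temperatures where replica-exchange rates are poor.

    temps          -- sorted temperature set, one entry per replica
    exchange_rates -- measured acceptance rate for each adjacent pair
    threshold      -- minimum acceptable exchange rate (assumed value)
    """
    new_temps = [temps[0]]
    for t_lo, t_hi, rate in zip(temps[:-1], temps[1:], exchange_rates):
        if rate < threshold:                       # bottleneck: add a mid-point
            new_temps.append(0.5 * (t_lo + t_hi))
        new_temps.append(t_hi)
    return np.array(new_temps)

temps = np.array([1.0, 1.5, 2.0, 2.5])
rates = np.array([0.45, 0.08, 0.30])               # the middle gap is a bottleneck
print(insert_midpoints(temps, rates))              # -> [1.   1.5  1.75 2.   2.5 ]
```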
About the bears and the bees: Adaptive responses to asymmetric warfare
NASA Astrophysics Data System (ADS)
Ryan, Alex
Conventional military forces are organised to generate large scale effects against similarly structured adversaries. Asymmetric warfare is a 'game' between a conventional military force and a weaker adversary that is unable to match the scale of effects of the conventional force. In asymmetric warfare, an insurgent's strategy can be understood using a multi-scale perspective: by generating and exploiting fine-scale complexity, insurgents prevent the conventional force from acting at the scale it is designed for. This paper presents a complex systems approach to the problem of asymmetric warfare, which shows how future force structures can be designed to adapt to environmental complexity at multiple scales and achieve full spectrum dominance.
A multi-tissue type genome-scale metabolic network for analysis of whole-body systems physiology
2011-01-01
Background Genome-scale metabolic reconstructions provide a biologically meaningful mechanistic basis for the genotype-phenotype relationship. The global human metabolic network, termed Recon 1, has recently been reconstructed, allowing the systems analysis of human metabolic physiology and pathology. Utilizing high-throughput data, Recon 1 has recently been tailored to different cells and tissues, including the liver, kidney, brain, and alveolar macrophage. These models have shown utility in the study of systems medicine. However, no integrated analysis between human tissues has been done. Results To describe tissue-specific functions, Recon 1 was tailored to describe metabolism in three human cells: adipocytes, hepatocytes, and myocytes. These cell-specific networks were manually curated and validated based on known cellular metabolic functions. To study intercellular interactions, a novel multi-tissue type modeling approach was developed to integrate the metabolic functions for the three cell types, and subsequently used to simulate known integrated metabolic cycles. In addition, the multi-tissue model was used to study diabetes: a pathology with systemic properties. High-throughput data was integrated with the network to determine differential metabolic activity between obese and type II diabetic obese gastric bypass patients in a whole-body context. Conclusion The multi-tissue type modeling approach presented provides a platform to study integrated metabolic states. As more cell and tissue-specific models are released, it is critical to develop a framework in which to study their interdependencies. PMID:22041191
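The workhorse behind such reconstructions is flux balance analysis, a linear program over the stoichiometric matrix. A self-contained toy example with scipy (two metabolites, three reactions) is sketched below; it illustrates the technique only and is not Recon 1 or the multi-tissue model itself.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (metabolites x reactions); columns are
# uptake of A, conversion A -> B, and B -> biomass
S = np.array([[ 1, -1,  0],    # metabolite A
              [ 0,  1, -1]])   # metabolite B
bounds = [(0, 10), (0, 1000), (0, 1000)]   # flux bounds; uptake capped at 10

# Maximize the biomass flux (last column); linprog minimizes, hence the sign flip
c = np.array([0, 0, -1])
res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
print("biomass flux:", -res.fun, "fluxes:", res.x)
```

A multi-tissue formulation stacks several such networks and adds exchange reactions (a shared blood compartment, for instance) linking them, so that the fluxes of all cell types are solved simultaneously.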
Schlüter, Daniela K; Ramis-Conde, Ignacio; Chaplain, Mark A J
2015-02-06
Studying the biophysical interactions between cells is crucial to understanding how normal tissue develops, how it is structured and also when malfunctions occur. Traditional experiments try to infer events at the tissue level after observing the behaviour of and interactions between individual cells. This approach assumes that cells behave in the same biophysical manner in isolated experiments as they do within colonies and tissues. In this paper, we develop a multi-scale multi-compartment mathematical model that accounts for the principal biophysical interactions and adhesion pathways not only at a cell-cell level but also at the level of cell colonies (in contrast to the traditional approach). Our results suggest that adhesion/separation forces between cells may be lower in cell colonies than traditional isolated single-cell experiments infer. As a consequence, isolated single-cell experiments may be insufficient to deduce important biological processes such as single-cell invasion after detachment from a solid tumour. The simulations further show that kinetic rates and cell biophysical characteristics such as pressure-related cell-cycle arrest have a major influence on cell colony patterns and can allow for the development of protrusive cellular structures as seen in invasive cancer cell lines independent of expression levels of pro-invasion molecules.
Tracing Multi-Scale Climate Change at Low Latitude from Glacier Shrinkage
NASA Astrophysics Data System (ADS)
Moelg, T.; Cullen, N. J.; Hardy, D. R.; Kaser, G.
2009-12-01
Significant shrinkage of glaciers on top of Africa's highest mountain (Kilimanjaro, 5895 m a.s.l.) has been observed between the late 19th century and the present. Multi-year data from our automatic weather station on the largest remaining slope glacier at 5873 m allow us to force and verify a process-based distributed glacier mass balance model. This generates insights into energy and mass fluxes at the glacier-atmosphere interface, their feedbacks, and how they are linked to atmospheric conditions. By means of numerical atmospheric modeling and global climate model simulations, we explore the linkages of the local climate in Kilimanjaro's summit zone to larger-scale climate dynamics - which suggests a causal connection between Indian Ocean dynamics, mesoscale mountain circulation, and glacier mass balance. Based on this knowledge, the verified mass balance model is used for backward modeling of the steady-state glacier extent observed in the 19th century, which yields the characteristics of local climate change between that time and the present (30-45% less precipitation, 0.1-0.3 hPa less water vapor pressure, 2-4 percentage units less cloud cover at present). Our multi-scale approach provides an important contribution, from a cryospheric viewpoint, to the understanding of how large-scale climate change propagates to the tropical free troposphere. Ongoing work in this context targets the millennium-scale relation between large-scale climate and glacier behavior (by downscaling precipitation), and the possible effects of regional anthropogenic activities (land use change) on glacier mass balance.
Going Deeper With Contextual CNN for Hyperspectral Image Classification.
Lee, Hyungtae; Kwon, Heesung
2017-10-01
In this paper, we describe a novel deep convolutional neural network (CNN) that is deeper and wider than other existing deep networks for hyperspectral image classification. Unlike current state-of-the-art approaches in CNN-based hyperspectral image classification, the proposed network, called contextual deep CNN, can optimally explore local contextual interactions by jointly exploiting local spatio-spectral relationships of neighboring individual pixel vectors. The joint exploitation of the spatio-spectral information is achieved by a multi-scale convolutional filter bank used as an initial component of the proposed CNN pipeline. The initial spatial and spectral feature maps obtained from the multi-scale filter bank are then combined together to form a joint spatio-spectral feature map. The joint feature map representing rich spectral and spatial properties of the hyperspectral image is then fed through a fully convolutional network that eventually predicts the corresponding label of each pixel vector. The proposed approach is tested on three benchmark data sets: the Indian Pines data set, the Salinas data set, and the University of Pavia data set. Performance comparison shows enhanced classification performance of the proposed approach over the current state-of-the-art on the three data sets.
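The multi-scale filter bank can be pictured as parallel convolutions of different kernel sizes whose outputs are concatenated into a joint spatio-spectral feature map. The PyTorch sketch below follows that reading; layer widths and kernel sizes are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MultiScaleFilterBank(nn.Module):
    """Toy multi-scale filter bank: parallel 1x1, 3x3 and 5x5 convolutions over
    the spectral bands, concatenated into a joint spatio-spectral feature map."""
    def __init__(self, bands, width=32):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(bands, width, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5)
        ])

    def forward(self, x):                        # x: (batch, bands, H, W)
        return torch.cat([b(x) for b in self.branches], dim=1)

x = torch.randn(2, 200, 16, 16)                  # e.g. 200 spectral bands
print(MultiScaleFilterBank(200)(x).shape)        # torch.Size([2, 96, 16, 16])
```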
Panepinto, D; Zanetti, M C
2018-03-01
This study proposes a multi-step approach to evaluating the environmental and economic aspects of a thermal treatment plant with an energy-recovery configuration. In order to validate the proposed approach, the Turin incineration plant was analyzed, and the potential of the incinerator and several different possible connections to the district heating network were then considered. Both local and global environmental balances were defined. The global-scale results provided information on carbon dioxide emissions, while the local-scale results were used as reference values for the implementation of a Gaussian model that could evaluate the actual concentrations of pollutants released into the atmosphere. The economic aspects were then analyzed, and a correspondence between the environmental and economic advantages was defined. The results showed a high energy efficiency for the combined production of heat and electricity, and the opportunity to minimize environmental impacts by including cogeneration in a district heating scheme. This scheme showed an environmental advantage, whereas the electricity-only configuration showed an economic advantage. A change in the thermal energy price (specifically, to 40 €/MWh), however, would make it possible to obtain both environmental and economic advantages.
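The local-scale step relies on a Gaussian dispersion model. The standard Gaussian plume formula with ground reflection can be sketched as follows; the inputs are illustrative placeholders, not the plant's actual emission or meteorological data.

```python
import numpy as np

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Gaussian plume concentration (g m-3) at a receptor.

    q        -- emission rate (g/s)
    u        -- mean wind speed (m/s)
    y, z     -- crosswind offset and receptor height (m)
    h        -- effective stack height (m)
    sigma_y, sigma_z -- dispersion coefficients at the downwind distance of interest (m)
    """
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2 * sigma_z**2))
                + np.exp(-(z + h)**2 / (2 * sigma_z**2)))   # image source = ground reflection
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative numbers only: 10 g/s source, 4 m/s wind, receptor on the plume axis
print(gaussian_plume(q=10.0, u=4.0, y=0.0, z=1.5, h=60.0, sigma_y=80.0, sigma_z=40.0))
```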
The CLUVA project: Climate-change scenarios and their impact on urban areas in Africa
NASA Astrophysics Data System (ADS)
Di Ruocco, Angela; Weets, Guy; Gasparini, Paolo; Jørgensen, Gertrud; Lindley, Sarah; Pauleit, Stephan; Vahed, Anwar; Schiano, Pasquale; Kabisch, Sigrun; Vedeld, Trond; Coly, Adrien; Tonye, Emmanuel; Touré, Hamidou; Kombe, Wilbard; Yeshitela, Kumelachew
2013-04-01
CLUVA (CLimate change and Urban Vulnerability in Africa; http://www.cluva.eu/) is a 3-year project, funded by the European Commission in 2010. Its main objective is to estimate the impacts of climate change over the next 40 years at the urban scale in Africa. The mission of CLUVA is to develop methods and knowledge to assess risks cascading from climate change. It downscales IPCC climate projections to evaluate threats to selected African test cities, mainly floods, sea-level rise, droughts, heat waves and desertification. The project evaluates and links: social vulnerability; vulnerability of in-town ecosystems and urban-rural interfaces; vulnerability of the urban built environment and lifelines; and related institutional and governance dimensions of adaptation. A multi-scale, multi-disciplinary, quantitative and probabilistic modelling approach is applied. CLUVA brings together climate experts, risk management experts, urban planners and social scientists with their African counterparts in an integrated research effort focusing on improving the capacity of scientific institutions, local councils and civil society to cope with climate change. The CLUVA approach was set up in the first year of the project and developed as follows: an ensemble of eight global projections of climate change is produced for east and west Africa until 2050, considering the new IPCC (Intergovernmental Panel on Climate Change; http://www.ipcc.ch/) scenarios. These are then downscaled to the urban level, where territorial modeling is required to compute hazard effects on the vulnerable physical system (urban ecosystems, informal settlements, lifelines such as transportation and sewer networks) as well as on the social context, in defined time frames, and risk analysis is then employed to assess expected consequences. An investigation of the existing urban planning and governance systems and their interface with climate risks is performed. With the aid of the African partners, the developed approach is currently being applied to selected African case studies: Addis Ababa (Ethiopia), Dar es Salaam (Tanzania), Douala (Cameroon), Ouagadougou (Burkina Faso) and St. Louis (Senegal). The poster will illustrate CLUVA's framework to assess climate-change-related risks at an urban scale in Africa, and will report on the progress of selected case studies to demonstrate the feasibility of a multi-scale and multi-risk quantitative approach for risk management.
Biocultural approaches to well-being and sustainability indicators across scales.
Sterling, Eleanor J; Filardi, Christopher; Toomey, Anne; Sigouin, Amanda; Betley, Erin; Gazit, Nadav; Newell, Jennifer; Albert, Simon; Alvira, Diana; Bergamini, Nadia; Blair, Mary; Boseto, David; Burrows, Kate; Bynum, Nora; Caillon, Sophie; Caselle, Jennifer E; Claudet, Joachim; Cullman, Georgina; Dacks, Rachel; Eyzaguirre, Pablo B; Gray, Steven; Herrera, James; Kenilorea, Peter; Kinney, Kealohanuiopuna; Kurashima, Natalie; Macey, Suzanne; Malone, Cynthia; Mauli, Senoveva; McCarter, Joe; McMillen, Heather; Pascua, Pua'ala; Pikacha, Patrick; Porzecanski, Ana L; de Robert, Pascale; Salpeteur, Matthieu; Sirikolo, Myknee; Stege, Mark H; Stege, Kristina; Ticktin, Tamara; Vave, Ron; Wali, Alaka; West, Paige; Winter, Kawika B; Jupiter, Stacy D
2017-12-01
Monitoring and evaluation are central to ensuring that innovative, multi-scale, and interdisciplinary approaches to sustainability are effective. The development of relevant indicators for local sustainable management outcomes, and the ability to link these to broader national and international policy targets, are key challenges for resource managers, policymakers, and scientists. Sets of indicators that capture both ecological and social-cultural factors, and the feedbacks between them, can underpin cross-scale linkages that help bridge local and global scale initiatives to increase resilience of both humans and ecosystems. Here we argue that biocultural approaches, in combination with methods for synthesizing across evidence from multiple sources, are critical to developing metrics that facilitate linkages across scales and dimensions. Biocultural approaches explicitly start with and build on local cultural perspectives - encompassing values, knowledges, and needs - and recognize feedbacks between ecosystems and human well-being. Adoption of these approaches can encourage exchange between local and global actors, and facilitate identification of crucial problems and solutions that are missing from many regional and international framings of sustainability. Resource managers, scientists, and policymakers need to be thoughtful about not only what kinds of indicators are measured, but also how indicators are designed, implemented, measured, and ultimately combined to evaluate resource use and well-being. We conclude by providing suggestions for translating between local and global indicator efforts.
Multi-scale variability and long-range memory in indoor Radon concentrations from Coimbra, Portugal
NASA Astrophysics Data System (ADS)
Donner, Reik V.; Potirakis, Stelios; Barbosa, Susana
2014-05-01
The presence or absence of long-range correlations in the variations of indoor Radon concentrations has recently attracted considerable interest. As Radon is a radioactive gas naturally emitted from the ground in certain geological settings, understanding the environmental factors controlling its concentrations and their dynamics is important for estimating its effect on human health and the efficiency of possible measures for reducing the corresponding exposure. In this work, we re-analyze two high-resolution records of indoor Radon concentrations from Coimbra, Portugal, each of which spans several months of continuous measurements. In order to evaluate the presence of long-range correlations and fractal scaling, we utilize a multiplicity of complementary methods, including power spectral analysis, ARFIMA modeling, classical and multi-fractal detrended fluctuation analysis, and two different estimators of the signals' fractal dimensions. Power spectra and fluctuation functions reveal some complex behavior with qualitatively different properties on different time-scales: white noise in the high-frequency part, indications of some long-range correlated process dominating time scales of several hours to days, and pronounced low-frequency variability associated with tidal and/or meteorological forcing. In order to further decompose these different scales of variability, we apply two different approaches. On the one hand, applying multi-resolution analysis based on the discrete wavelet transform allows us to separately study contributions on different time scales and to characterize their specific correlation and scaling properties. On the other hand, singular system analysis (SSA) provides a reconstruction of the essential modes of variability. Specifically, by considering only the first leading SSA modes, we achieve an efficient de-noising of our environmental signals, highlighting the low-frequency variations together with some distinct scaling on sub-daily time-scales resembling the properties of a long-range correlated process.
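For readers unfamiliar with detrended fluctuation analysis, a compact first-order DFA is sketched below in numpy. It is a generic implementation, not the one used in the study, and the test signal is synthetic white noise, for which the scaling exponent should come out near 0.5.

```python
import numpy as np

def dfa(x, scales):
    """First-order detrended fluctuation analysis of a 1-D signal."""
    y = np.cumsum(x - np.mean(x))                 # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        sq_res = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrend
            sq_res.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(sq_res)))
    return np.array(F)

x = np.random.randn(20000)                        # white-noise test signal
scales = np.array([16, 32, 64, 128, 256, 512])
alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
print("scaling exponent ~", round(alpha, 2))      # ~0.5 for white noise
```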
Multi-scale Visualization of Molecular Architecture Using Real-Time Ambient Occlusion in Sculptor.
Wahle, Manuel; Wriggers, Willy
2015-10-01
The modeling of large biomolecular assemblies relies on an efficient rendering of their hierarchical architecture across a wide range of spatial levels of detail. We describe a paradigm shift currently under way in computer graphics towards the use of more realistic global illumination models, and we apply the so-called ambient occlusion approach to our open-source multi-scale modeling program, Sculptor. While there are many other higher quality global illumination approaches going all the way up to full GPU-accelerated ray tracing, they do not provide size-specificity of the features they shade. Ambient occlusion is an aspect of global lighting that offers great visual benefits and powerful user customization. By estimating how other molecular shape features affect the reception of light at some surface point, it effectively simulates indirect shadowing. This effect occurs between molecular surfaces that are close to each other, or in pockets such as protein or ligand binding sites. By adding ambient occlusion, large macromolecular systems look much more natural, and the perception of characteristic surface features is strongly enhanced. In this work, we present a real-time implementation of screen space ambient occlusion that delivers realistic cues about tunable spatial scale characteristics of macromolecular architecture. Heretofore, the visualization of large biomolecular systems, comprising e.g. hundreds of thousands of atoms or Mega-Dalton-size electron microscopy maps, did not take into account the length scales of interest or the spatial resolution of the data. Our approach has been uniquely customized with shading that is tuned for pockets and cavities of a user-defined size, making it useful for visualizing molecular features at multiple scales of interest. This is a feature that none of the conventional ambient occlusion approaches provide. Actual Sculptor screen shots illustrate how our implementation supports the size-dependent rendering of molecular surface features.
Embodied Anticipation: A Neurodevelopmental Interpretation
ERIC Educational Resources Information Center
Kinsbourne, Marcel; Jordan, J. Scott
2009-01-01
This article proposes an approach to the brain's role in communication that treats the brain as the vehicle of a multi-scale embodiment of anticipation. Instead of conceptualizing anticipation as something a brain is able to do when circumstances seem to require it, this study proposes that anticipation is continuous and ongoing because to…
NASA Astrophysics Data System (ADS)
Khuwaileh, Bassam
High fidelity simulation of nuclear reactors entails large scale applications characterized by high dimensionality and tremendous complexity where various physics models are integrated in the form of coupled models (e.g. neutronic with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing efficient Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved by identifying the important/influential degrees of freedom (DoF) via subspace analysis, such that the required analysis can be recast by considering the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. Then the reduced subspace is used to solve realistic, large scale forward (UQ) and inverse problems (DA and TAA). Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty for single physics models is extended for large scale multi-physics coupled problems with feedback effect. Moreover, a non-linear surrogate based UQ approach is developed, used and compared to the performance of the KL approach and a brute-force Monte Carlo (MC) approach. On the other hand, an efficient Data Assimilation (DA) algorithm is developed to assess information about the model's parameters: nuclear data cross-sections and thermal-hydraulics parameters. Two improvements are introduced in order to perform DA on the high dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT - COBRA-TF - ORIGEN). Second, approximating the complex and high dimensional solution space with a lower dimensional subspace makes the sampling process necessary for DA possible for high dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from sources of uncertainty; and experimental effort can be subsequently directed to further reduce the uncertainty associated with these sources.
In this dissertation, a subspace-based, gradient-free and nonlinear algorithm for inverse uncertainty quantification, namely the Target Accuracy Assessment (TAA), has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated with the SCALE6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models). Ultimately, the algorithms proposed here were applied to perform UQ and DA for assembly-level (CASL progression problem number 6) and core-wide problems representing Watts Bar Nuclear 1 (WBN1) for cycle 1 of depletion (CASL Progression Problem Number 9), modeled and simulated using VERA-CS, which consists of several coupled multi-physics models. The analysis and algorithms developed in this dissertation were encoded and implemented in a newly developed toolkit, the Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).
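The subspace construction at the heart of such reduced-order approaches is often a snapshot SVD (POD-style basis). The generic numpy sketch below illustrates the idea and is not the dissertation's specific algorithm; the energy threshold is an assumed value.

```python
import numpy as np

def build_reduced_subspace(snapshots, energy=0.999):
    """Low-dimensional basis from model snapshots via the SVD.

    snapshots -- (n_dof, n_samples) matrix of model responses
    energy    -- fraction of variance the retained basis must capture (assumed)
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r]                               # columns span the reduced subspace

# Synthetic snapshots with an intrinsic rank of ~3 plus small noise
A = np.random.randn(500, 3) @ np.random.randn(3, 60) + 1e-3 * np.random.randn(500, 60)
basis = build_reduced_subspace(A)
print("retained dimensions:", basis.shape[1])     # typically 3
```

Once such a basis is available, forward UQ sampling and inverse analyses can be carried out in the handful of retained coordinates instead of the full high-dimensional parameter space.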
Towards Omni-Tomography—Grand Fusion of Multiple Modalities for Simultaneous Interior Tomography
Wang, Ge; Zhang, Jie; Gao, Hao; Weir, Victor; Yu, Hengyong; Cong, Wenxiang; Xu, Xiaochen; Shen, Haiou; Bennett, James; Furth, Mark; Wang, Yue; Vannier, Michael
2012-01-01
We recently elevated interior tomography from its origin in computed tomography (CT) to a general tomographic principle, and proved its validity for other tomographic modalities including SPECT, MRI, and others. Here we propose “omni-tomography”, a novel concept for the grand fusion of multiple tomographic modalities for simultaneous data acquisition in a region of interest (ROI). Omni-tomography can be instrumental when physiological processes under investigation are multi-dimensional, multi-scale, multi-temporal and multi-parametric. Both preclinical and clinical studies now depend on in vivo tomography, often requiring separate evaluations by different imaging modalities. Over the past decade, two approaches have been used for multimodality fusion: software-based image registration and hybrid scanners such as PET-CT, PET-MRI, and SPECT-CT among others. While there are intrinsic limitations with both approaches, the main obstacle to the seamless fusion of multiple imaging modalities has been the bulkiness of each individual imager and the conflict of their physical (especially spatial) requirements. To address this challenge, omni-tomography is now unveiled as an emerging direction for biomedical imaging and systems biomedicine. PMID:22768108
NASA Astrophysics Data System (ADS)
Heitlager, Ilja; Helms, Remko; Brinkkemper, Sjaak
Information Technology Outsourcing practice and research mainly consider the outsourcing phenomenon as a generic fulfilment of the IT function by external parties. Inspired by the logic of commodity, core competencies and economies of scale; assets, existing departments and IT functions are transferred to external parties. Although the generic approach might work for desktop outsourcing, where standardisation is the dominant factor, it does not work for the management of mission-critical applications. Managing mission-critical applications requires a different approach, where building relationships is critical. The relationships involve inter- and intra-organisational parties in a multi-sourcing arrangement, called an IT service chain, consisting of multiple (specialist) parties that have to collaborate closely to deliver high-quality services.
Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network
Qu, Xiaobo; He, Yifan
2018-01-01
Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited to exploit multi-scale contextual information for image reconstruction due to the fixed convolutional kernel in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build up a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernel provides the multi-context for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the performance of the proposed network outperforms the state-of-the-art methods. PMID:29509666
2012-01-20
…ultrasonic Lamb waves to plastic strain and fatigue life. Theory was developed and validated to predict second harmonic generation for specific mode… Fatigue and damage generation and progression are processes consisting of a series of interrelated events that span large scales of space and time… strain and fatigue life: A set of experiments was completed that worked to relate the acoustic nonlinearity measured with Lamb waves to both the…
NASA Astrophysics Data System (ADS)
Walther, Christian; Frei, Michaela
2017-04-01
Mining of so-called "conflict minerals" is often related with small-scale mining activities. The here discussed activities are located in forested areas in the eastern DRC, which are often remote, difficult to access and insecure for traditional geological field inspection. In order to accelerate their CTC (Certified Trading Chain)-certification process, remote sensing data are used for detection and monitoring of these small-scale mining operations. This requires a high image acquisition frequency due to mining site relocations and for compensation of year-round high cloud coverage, especially for optical data evaluation. Freely available medium resolution optical data of Sentinel-2 and Landsat-8 as well as SAR data of Sentinel-1 are used for detecting small mining targets with a minimum size of approximately 0.5 km2. The developed method enables a robust multi-temporal detection of mining sites, monitoring of mining site spatio-temporal relocations and environmental changes. Since qualitative and quantitative comparable results are generated, the followed change detection approach is objective and transparent and may push the certification process forward.
A new multi-scale method to reveal hierarchical modular structures in biological networks.
Jiao, Qing-Ju; Huang, Yan; Shen, Hong-Bin
2016-11-15
Biological networks are effective tools for studying molecular interactions. Modular structure, in which genes or proteins may tend to be associated with functional modules or protein complexes, is a remarkable feature of biological networks. Mining modular structure from biological networks enables us to focus on a set of potentially important nodes, which provides a reliable guide to future biological experiments. The first fundamental challenge in mining modular structure from biological networks is that the quality of the observed network data is usually low owing to noise and incompleteness in the obtained networks. The second problem that poses a challenge to existing approaches to the mining of modular structure is that the organization of both functional modules and protein complexes in networks is far more complicated than was ever thought. For instance, the sizes of different modules vary considerably from each other and they often form multi-scale hierarchical structures. To solve these problems, we propose a new multi-scale protocol for mining modular structure (named ISIMB) driven by a node similarity metric, which works in an iteratively converged space to reduce the effects of the low data quality of the observed network data. The multi-scale node similarity metric couples both the local and the global topology of the network with a resolution regulator. By varying this resolution regulator to give different weightings to the local and global terms in the metric, the ISIMB method is able to fit the shape of modules and to detect them on different scales. Experiments on protein-protein interaction and genetic interaction networks show that our method can not only mine functional modules and protein complexes successfully, but can also predict functional modules from specific to general and reveal the hierarchical organization of protein complexes.
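A toy version of a node similarity that blends local and global topology through a resolution regulator might look like the following numpy sketch (Jaccard-style local term, Katz-style global term). The exact ISIMB metric is not given in the abstract, so this functional form is an assumption for illustration only.

```python
import numpy as np

def multi_scale_similarity(A, alpha=0.5, beta=0.05):
    """Blend a local and a global topological similarity for every node pair.

    A     -- binary adjacency matrix
    alpha -- resolution regulator: 1 = purely local, 0 = purely global (assumed form)
    beta  -- attenuation of long paths in the Katz-style global term
    """
    deg = A.sum(axis=1)
    common = A @ A                                                    # shared neighbours
    local = common / np.maximum(np.add.outer(deg, deg) - common, 1)   # Jaccard-like
    n = A.shape[0]
    global_sim = np.linalg.inv(np.eye(n) - beta * A) - np.eye(n)      # Katz index
    global_sim /= global_sim.max()
    return alpha * local + (1 - alpha) * global_sim

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(np.round(multi_scale_similarity(A), 2))
```

Sweeping alpha from 1 toward 0 shifts the weighting from tightly knit local groups toward broader, coarser communities, which is the sense in which a single metric can expose modules at different scales.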
Zhuang, Kai H; Herrgård, Markus J
2015-09-01
In recent years, bio-based chemicals have gained traction as a sustainable alternative to petrochemicals. However, despite rapid advances in metabolic engineering and synthetic biology, there remain significant economic and environmental challenges. In order to maximize the impact of research investment in a new bio-based chemical industry, there is a need for assessing the technological, economic, and environmental potentials of combinations of biomass feedstocks, biochemical products, bioprocess technologies, and metabolic engineering approaches in the early phase of development of cell factories. To address this issue, we have developed a comprehensive Multi-scale framework for modeling Sustainable Industrial Chemicals production (MuSIC), which integrates modeling approaches for cellular metabolism, bioreactor design, upstream/downstream processes and economic impact assessment. We demonstrate the use of the MuSIC framework in a case study where two major polymer precursors (1,3-propanediol and 3-hydroxypropionic acid) are produced from two biomass feedstocks (corn-based glucose and soy-based glycerol) through 66 proposed biosynthetic pathways in two host organisms (Escherichia coli and Saccharomyces cerevisiae). The MuSIC framework allows exploration of tradeoffs and interactions between economy-scale objectives (e.g. profit maximization, emission minimization), constraints (e.g. land-use constraints) and process- and cell-scale technology choices (e.g. strain design or oxygenation conditions). We demonstrate that economy-scale assessment can be used to guide specific strain design decisions in metabolic engineering, and that these design decisions can be affected by non-intuitive dependencies across multiple scales.
Jeanne C. Chambers; David A. Pyke; Jeremy D. Maestas; Mike Pellant; Chad S. Boyd; Steven B. Campbell; Shawn Espinosa; Douglas W. Havlina; Kenneth E. Mayer; Amarina Wuenschel
2014-01-01
This Report provides a strategic approach for conservation of sagebrush ecosystems and Greater Sage-Grouse (sage-grouse) that focuses specifically on habitat threats caused by invasive annual grasses and altered fire regimes. It uses information on factors that influence (1) sagebrush ecosystem resilience to disturbance and resistance to invasive annual grasses and (2...
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary
2013-01-01
With the wide availability of affordable multiple-core parallel supercomputers, next-generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most currently available algorithms and computational fluid dynamics codes provide. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) a high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across the cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mai, Sebastian; Marquetand, Philipp; González, Leticia
2014-08-21
An efficient perturbational treatment of spin-orbit coupling within the framework of high-level multi-reference techniques has been implemented in the most recent version of the COLUMBUS quantum chemistry package, extending the existing fully variational two-component (2c) multi-reference configuration interaction singles and doubles (MRCISD) method. The proposed scheme follows related implementations of quasi-degenerate perturbation theory (QDPT) model space techniques. Our model space is built either from uncontracted, large-scale scalar relativistic MRCISD wavefunctions or based on the scalar-relativistic solutions of the linear-response-theory-based multi-configurational averaged quadratic coupled cluster method (LRT-MRAQCC). The latter approach allows for a consistent, approximately size-consistent and size-extensive treatment of spin-orbit coupling. The approach is described in detail and compared to a number of related techniques. The inherent accuracy of the QDPT approach is validated by comparing cuts of the potential energy surfaces of acrolein and its S, Se, and Te analogues with the corresponding data obtained from matching fully variational spin-orbit MRCISD calculations. The conceptual availability of approximate analytic gradients with respect to geometrical displacements is an attractive feature of the 2c-QDPT-MRCISD and 2c-QDPT-LRT-MRAQCC methods for structure optimization and ab initio molecular dynamics simulations.
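In generic terms (not the specific COLUMBUS working equations), a QDPT treatment builds and diagonalizes an effective Hamiltonian in the basis of the scalar-relativistic model-space states,

```latex
H^{\mathrm{eff}}_{IJ} \;=\; \delta_{IJ}\,E_I \;+\; \langle \Psi_I \,\vert\, \hat{H}_{\mathrm{SO}} \,\vert\, \Psi_J \rangle ,
```

where the E_I are the scalar-relativistic MRCISD or LRT-MRAQCC energies and H_SO is the spin-orbit operator; diagonalizing H^eff yields the spin-orbit-coupled states without a fully variational two-component calculation.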
NASA Astrophysics Data System (ADS)
Harfst, S.; Portegies Zwart, S.; McMillan, S.
2008-12-01
We present MUSE, a software framework for combining existing computational tools from different astrophysical domains into a single multi-physics, multi-scale application. MUSE facilitates the coupling of existing codes written in different languages by providing inter-language tools and by specifying an interface between each module and the framework that represents a balance between generality and computational efficiency. This approach allows scientists to use combinations of codes to solve highly-coupled problems without the need to write new codes for other domains or significantly alter their existing codes. MUSE currently incorporates the domains of stellar dynamics, stellar evolution and stellar hydrodynamics for studying generalized stellar systems. We have now reached a ``Noah's Ark'' milestone, with (at least) two available numerical solvers for each domain. MUSE can treat multi-scale and multi-physics systems in which the time- and size-scales are well separated, like simulating the evolution of planetary systems, small stellar associations, dense stellar clusters, galaxies and galactic nuclei. In this paper we describe two examples calculated using MUSE: the merger of two galaxies and an N-body simulation with live stellar evolution. In addition, we demonstrate an implementation of MUSE on a distributed computer which may also include special-purpose hardware, such as GRAPEs or GPUs, to accelerate computations. The current MUSE code base is publicly available as open source at http://muse.li.
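The coupling pattern that such a framework standardizes can be caricatured in a few lines of Python; the class and method names below are invented for illustration and do not correspond to the actual MUSE interface.

```python
class StellarEvolution:
    """Stand-in module: tracks a star's mass loss over time."""
    def __init__(self, mass):
        self.mass, self.time = mass, 0.0
    def evolve_until(self, t):
        self.mass *= (1.0 - 1e-4) ** (t - self.time)   # toy wind mass loss per unit time
        self.time = t

class StellarDynamics:
    """Stand-in module: needs up-to-date masses to advance the orbits."""
    def __init__(self, masses):
        self.masses, self.time = masses, 0.0
    def evolve_until(self, t, masses):
        self.masses = masses                           # refresh masses from evolution
        self.time = t                                  # (orbit integration omitted)

stars = [StellarEvolution(m) for m in (1.0, 5.0, 20.0)]
dyn = StellarDynamics([s.mass for s in stars])

for t in (1.0, 2.0, 3.0):                              # shared coupling times
    for s in stars:
        s.evolve_until(t)                              # domain 1: stellar evolution
    dyn.evolve_until(t, [s.mass for s in stars])       # domain 2: stellar dynamics

print(dyn.masses)
```

The value of the framework lies in fixing this minimal evolve/exchange interface, so that any compliant solver for a domain can be swapped in without rewriting the others.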
Cross-cultural adaptation of instruments assessing breastfeeding determinants: a multi-step approach
2014-01-01
Background Cross-cultural adaptation is a necessary process to effectively use existing instruments in other cultural and language settings. The process of cross-culturally adapting existing instruments, including translation, is considered a critical step in establishing a meaningful instrument for use in another setting. Using a multi-step approach is considered best practice in achieving cultural and semantic equivalence of the adapted version. We aimed to ensure the content validity of our instruments in the cultural context of KwaZulu-Natal, South Africa. Methods The Iowa Infant Feeding Attitudes Scale, Breastfeeding Self-Efficacy Scale-Short Form and additional items comprise our consolidated instrument, which was cross-culturally adapted utilizing a multi-step approach during August 2012. Cross-cultural adaptation was achieved through steps to maintain content validity and attain semantic equivalence in the target version. Specifically, Lynn’s recommendation to apply an item-level content validity index score was followed. The revised instrument was translated and back-translated. To ensure semantic equivalence, Brislin’s back-translation approach was utilized, followed by a committee review to address any discrepancies that emerged from translation. Results Our consolidated instrument was adapted to be culturally relevant and translated to yield more reliable and valid results for use in our larger research study to measure infant feeding determinants effectively in our target cultural context. Conclusions Undertaking rigorous steps to effectively ensure cross-cultural adaptation increases our confidence that the conclusions we make based on our self-report instrument(s) will be stronger. In this way, our aim to achieve strong cross-cultural adaptation of our consolidated instruments was achieved while also providing a clear framework for other researchers choosing to utilize existing instruments for work in other cultural, geographic and population settings. PMID:25285151
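The item-level content validity index referred to here is simply the proportion of experts rating an item as relevant (3 or 4 on a 4-point scale). A one-line Python sketch with hypothetical ratings:

```python
def item_cvi(ratings, relevant=(3, 4)):
    """Item-level content validity index: share of experts rating the item relevant."""
    return sum(r in relevant for r in ratings) / len(ratings)

# Six hypothetical expert ratings for one attitude item
print(round(item_cvi([4, 3, 4, 2, 4, 3]), 2))   # 0.83
```

Items falling below the acceptance threshold recommended for the given panel size are then revised or dropped before translation and back-translation.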
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Supinski, B.; Caliga, D.
2017-09-28
The primary objective of this project was to develop memory optimization technology to efficiently deliver data to, and distribute data within, the SRC-6's Field Programmable Gate Array ("FPGA")-based Multi-Adaptive Processors (MAPs). The hardware/software approach was to explore efficient MAP configurations and generate the compiler technology to exploit those configurations. This memory accessing technology represents an important step towards making reconfigurable symmetric multi-processor (SMP) architectures a cost-effective solution for large-scale scientific computing.
A multi-scale study of the adsorption of lanthanum on the (110) surface of tungsten
NASA Astrophysics Data System (ADS)
Samin, Adib J.; Zhang, Jinsuo
2016-07-01
In this study, we utilize a multi-scale approach to studying lanthanum adsorption on the (110) plane of tungsten. The energy of the system is described from density functional theory calculations within the framework of the cluster expansion method. It is found that including two-body figures up to the sixth nearest neighbor yielded a reasonable agreement with density functional theory calculations as evidenced by the reported cross validation score. The results indicate that the interaction between the adsorbate atoms in the adlayer is important and cannot be ignored. The parameterized cluster expansion expression is used in a lattice gas Monte Carlo simulation in the grand canonical ensemble at 773 K and the adsorption isotherm is recorded. Implications of the obtained results for the pyroprocessing application are discussed.
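The lattice-gas grand canonical Monte Carlo step can be illustrated with a toy nearest-neighbour model. The code below is a generic Metropolis sketch with placeholder energies and chemical potential; it is not the parameterized cluster expansion of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcmc_coverage(L=16, steps=100_000, eps=-0.05, mu=-0.10, kT=0.0666):
    """Toy grand-canonical Monte Carlo on an L x L lattice.

    eps -- nearest-neighbour pair energy (eV, assumed); mu -- chemical potential (eV);
    kT  -- thermal energy, ~773 K expressed in eV.
    """
    occ = np.zeros((L, L), dtype=int)
    for _ in range(steps):
        i, j = rng.integers(L, size=2)
        nn = (occ[(i + 1) % L, j] + occ[(i - 1) % L, j]
              + occ[i, (j + 1) % L] + occ[i, (j - 1) % L])
        # Energy change for inserting (site empty) or removing (site occupied) an adatom
        dE = eps * nn - mu if occ[i, j] == 0 else -(eps * nn - mu)
        if dE <= 0 or rng.random() < np.exp(-dE / kT):
            occ[i, j] ^= 1                     # accept: flip occupation
    return occ.mean()                          # fractional coverage

print("coverage:", round(gcmc_coverage(), 3))
```

Sweeping the chemical potential and recording the resulting coverage produces the adsorption isotherm referred to in the abstract.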
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preston, Benjamin L.; King, Anthony W.; Ernst, Kathleen M.
Human agency is an essential determinant of the dynamics of agroecosystems. However, the manner in which agency is represented within different approaches to agroecosystem modeling is largely contingent on the scales of analysis and the conceptualization of the system of interest. While appropriate at times, narrow conceptualizations of agroecosystems can preclude consideration for how agency manifests at different scales, thereby marginalizing processes, feedbacks, and constraints that would otherwise affect model results. Modifications to the existing modeling toolkit may therefore enable more holistic representations of human agency. Model integration can assist with the development of multi-scale agroecosystem modeling frameworks that capturemore » different aspects of agency. In addition, expanding the use of socioeconomic scenarios and stakeholder participation can assist in explicitly defining context-dependent elements of scale and agency. Finally, such approaches, however, should be accompanied by greater recognition of the meta agency of model users and the need for more critical evaluation of model selection and application.« less
A Multi-Scale Energy Food Systems Modeling Framework For Climate Adaptation
NASA Astrophysics Data System (ADS)
Siddiqui, S.; Bakker, C.; Zaitchik, B. F.; Hobbs, B. F.; Broaddus, E.; Neff, R.; Haskett, J.; Parker, C.
2016-12-01
Our goal is to understand coupled system dynamics across scales in a manner that allows us to quantify the sensitivity of critical human outcomes (nutritional satisfaction, household economic well-being) to development strategies and to climate or market induced shocks in sub-Saharan Africa. We adopt both bottom-up and top-down multi-scale modeling approaches focusing our efforts on food, energy, water (FEW) dynamics to define, parameterize, and evaluate modeled processes nationally as well as across climate zones and communities. Our framework comprises three complementary modeling techniques spanning local, sub-national and national scales to capture interdependencies between sectors, across time scales, and on multiple levels of geographic aggregation. At the center is a multi-player micro-economic (MME) partial equilibrium model for the production, consumption, storage, and transportation of food, energy, and fuels, which is the focus of this presentation. We show why such models can be very useful for linking and integrating across time and spatial scales, as well as a wide variety of models including an agent-based model applied to rural villages and larger population centers, an optimization-based electricity infrastructure model at a regional scale, and a computable general equilibrium model, which is applied to understand FEW resources and economic patterns at national scale. The MME is based on aggregating individual optimization problems for relevant players in an energy, electricity, or food market and captures important food supply chain components of trade and food distribution accounting for infrastructure and geography. Second, our model considers food access and utilization by modeling food waste and disaggregating consumption by income and age. Third, the model is set up to evaluate the effects of seasonality and system shocks on supply, demand, infrastructure, and transportation in both energy and food.
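At its simplest, the partial-equilibrium core of such a multi-player market model reduces to finding the price at which supply meets demand. A toy scipy sketch with made-up linear supply and demand curves, included only to fix ideas:

```python
from scipy.optimize import brentq

def demand(p):
    """Toy food demand (tonnes) at price p."""
    return 120.0 - 2.0 * p

def supply(p):
    """Toy aggregate supply, including a transport-cost mark-up (assumed values)."""
    transport_cost = 5.0
    return max(0.0, 3.0 * (p - transport_cost))

# Partial equilibrium: the price at which the market clears
p_star = brentq(lambda p: supply(p) - demand(p), 1.0, 100.0)
print("clearing price:", round(p_star, 2), "quantity:", round(demand(p_star), 1))
```

The full MME replaces these single curves with the aggregated optimization problems of many producers, consumers, storage and transport players, and disaggregates consumption by income and age, but the equilibrium logic is the same.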
NASA Astrophysics Data System (ADS)
Harvey, J. W.; Packman, A. I.
2010-12-01
Surface water and groundwater flow interact with the channel geomorphology and sediments in ways that determine how material is transported, stored, and transformed in stream corridors. Solute and sediment transport affect important ecological processes such as carbon and nutrient dynamics and stream metabolism, processes that are fundamental to stream health and function. Many individual mechanisms of transport and storage of solute and sediment have been studied, including surface water exchange between the main channel and side pools, hyporheic flow through shallow and deep subsurface flow paths, and sediment transport during both baseflow and floods. A significant challenge arises from non-linear and scale-dependent transport resulting from natural, fractal fluvial topography and associated broad, multi-scale hydrologic interactions. Connections between processes and linkages across scales are not well understood, imposing significant limitations on system predictability. The whole-stream tracer experimental approach is popular because of the spatial averaging of heterogeneous processes; however, the tracer results, implemented alone and analyzed using typical models, cannot usually predict transport beyond the very specific conditions of the experiment. Furthermore, the results of whole-stream tracer experiments tend to be biased due to unavoidable limitations associated with sampling frequency, measurement sensitivity, and experiment duration. We recommend that whole-stream tracer additions be augmented with hydraulic and topographic measurements and also with additional tracer measurements made directly in storage zones. We present examples of measurements that encompass interactions across spatial and temporal scales and models that are transferable to a wide range of flow and geomorphic conditions. These results show how the competitive effects between the different forces driving hyporheic flow, operating at different spatial scales, create a situation where hyporheic fluxes cannot be accurately estimated without considering multi-scale effects. Our modeling captures the dominance of small-scale features such as bedforms that drive the majority of hyporheic flow, but it also captures how hyporheic flow is substantially modified by relatively small changes in streamflow or groundwater flow. The additional field measurements add sensitivity and power to whole-stream tracer additions by improving resolution of the relative importance of storage at different scales (e.g. bar-scale versus bedform-scale). This information is critical in identifying hot spots where important biogeochemical reactions occur. In summary, interpreting multi-scale interactions in streams requires models that are physically based and that incorporate non-linear process dynamics. Such models can take advantage of increasingly comprehensive field data to integrate transport processes across spatially variable flow and geomorphic conditions. The most useful field and modeling approaches will be those that are simple enough to be easily implemented by users from various disciplines but comprehensive enough to produce meaningful predictions for a wide range of flow and geomorphic scenarios. This capability is needed to support improved strategies for protecting stream ecological health in the face of accelerating land use and climate change.
Volis, Sergei; Ormanbekova, Danara; Yermekbayev, Kanat; Song, Minshu; Shulgina, Irina
2015-01-01
Detecting local adaptation and its spatial scale is one of the most important questions of evolutionary biology. However, recognizing the effect of local selection can be challenging when there is considerable environmental variation across distances spanning the whole species range. We analyzed patterns of local adaptation in emmer wheat, Triticum dicoccoides, at two spatial scales, small (inter-population distance less than one km) and large (inter-population distance more than 50 km), using several approaches. Plants originating from four distinct habitats at two geographic scales (cold edge, arid edge and two topographically dissimilar core locations) were reciprocally transplanted, and their success over time was measured as 1) lifetime fitness in a year of planting, and 2) population growth four years after planting. In addition, we analyzed molecular (SSR) and quantitative trait variation and calculated the QST/FST ratio. No home advantage was detected at the small spatial scale. At the large spatial scale, home advantage was detected for the core population and the cold edge population in the year of introduction via measuring life-time plant performance. However, superior performance of the arid edge population in its own environment was evident only after several generations, via measuring experimental population growth rate through SSR genotyping, which allowed the number of plants and seeds per introduced genotype per site to be counted. These results highlight the importance of multi-generation surveys of population growth rate in local adaptation testing. Despite predominant self-fertilization of T. dicoccoides and the associated high degree of structuring of genetic variation, the results of the QST-FST comparison were in general agreement with the pattern of local adaptation at the two spatial scales detected by reciprocal transplanting.
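For readers unfamiliar with the QST/FST comparison mentioned above, the following is a minimal sketch of how QST might be computed from variance components of a quantitative trait and compared against a neutral-marker FST. The trait values are simulated, the FST value is a placeholder, and the formula shown is the standard formulation for outcrossing species (adjustments exist for predominantly selfing species such as T. dicoccoides).

    import numpy as np

    # Q_ST = V_between / (V_between + 2 * V_within) for a quantitative trait.
    rng = np.random.default_rng(10)
    pop_means = [10.0, 12.5, 9.0, 11.0]                  # four hypothetical populations
    traits = [m + rng.normal(0, 1.0, size=30) for m in pop_means]

    v_within = np.mean([t.var(ddof=1) for t in traits])
    v_between = np.var([t.mean() for t in traits], ddof=1)
    q_st = v_between / (v_between + 2.0 * v_within)

    f_st = 0.15                                          # placeholder neutral-marker F_ST (e.g. from SSRs)
    print(f"Q_ST = {q_st:.3f}, F_ST = {f_st:.3f}")
    print("Q_ST > F_ST suggests divergent selection" if q_st > f_st
          else "Q_ST <= F_ST suggests drift or uniform selection")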
Correlates of a Single-Item Indicator Versus a Multi-Item Scale of Outness About Same-Sex Attraction
Noor, Syed W.; Galos, Dylan L.; Simon Rosser, B. R.
2017-01-01
In this study, we investigated if a single-item indicator measured the degree to which people were open about their same-sex attraction (“out”) as accurately as a multi-item scale. For the multi-item scale, we used the Outness Inventory, which includes three subscales: family, world, and religion. We examined correlations between the single- and multi-item measures; between the single-item indicator and the subscales of the multi-item scale; and between the measures and internalized homonegativity, social attitudes towards homosexuality, and depressive symptoms. In addition, we calculated Tjur’s R2 as a measure of predictive power of the single-item indicator, multi-item scale, and subscales of the multi-item scale in predicting two health-related outcomes: depressive symptoms and condomless anal sex with multiple partners. There was a strong correlation between the single- and multi-item measures (r = 0.73). Furthermore, there were strong correlations between the single-item indicator and each subscale of the multi-item scale: family (r = 0.70), world (r = 0.77), and religion (r = 0.50). In addition, the correlations between the single-item indicator and internalized homonegativity (r = −0.63), social attitudes towards homosexuality (r = −0.38), and depression (r = −0.14) were higher than those between the multi-item scale and internalized homonegativity (r = −0.55), social attitudes towards homosexuality (r = −0.21), and depression (r = −0.13). Contrary to the premise that multi-item measures are superior to single-item measures, our collective findings indicate that the single-item indicator of outness performs better than the multi-item scale of outness. PMID:26292840
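Tjur's R-squared, used above as the measure of predictive power, is simply the mean fitted probability among cases minus the mean fitted probability among non-cases. The sketch below illustrates the calculation with simulated data and a logistic regression; the predictor, outcome, and model choice are assumptions for illustration, not the study's data or analysis.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    outness = rng.uniform(1, 7, size=500)                     # hypothetical single-item predictor
    p_true = 1 / (1 + np.exp(-(0.8 * outness - 3.0)))
    y = rng.binomial(1, p_true)                               # hypothetical binary health outcome

    model = LogisticRegression().fit(outness.reshape(-1, 1), y)
    p_hat = model.predict_proba(outness.reshape(-1, 1))[:, 1]
    # Tjur's R^2 (coefficient of discrimination).
    tjur_r2 = p_hat[y == 1].mean() - p_hat[y == 0].mean()
    print(f"Tjur's R^2 = {tjur_r2:.3f}")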
NASA Astrophysics Data System (ADS)
Choi, H.; Kim, S.
2012-12-01
Most hydrologic models have generally been used to describe and represent the spatio-temporal variability of hydrological processes at the watershed scale. Although hydrological responses are clearly time varying, optimal values of model parameters have normally been treated as time-invariant constants in most cases. The recent paper of Choi and Beven (2007) presents a multi-period and multi-criteria model conditioning approach. The approach is based on the equifinality thesis within the Generalised Likelihood Uncertainty Estimation (GLUE) framework. In their application, the behavioural TOPMODEL parameter sets are determined by several performance measures for global (annual) and short (30-day) periods, clustered using a fuzzy c-means algorithm into 15 types representing different hydrological conditions. Their study shows good performance in the calibration of a rainfall-runoff model in a forest catchment, and also gives strong indications that it is uncommon to find model realizations that are behavioural over all multi-periods and all performance measures, and that the multi-period model conditioning approach may become a new and effective tool for predicting hydrological processes in ungauged catchments. This study follows up on Choi and Beven's (2007) model conditioning approach to test how effective the approach is for predicting rainfall-runoff responses in ungauged catchments. To this end, six small forest catchments are selected among the hydrological experimental catchments operated by the Korea Forest Research Institute. In each catchment, long-term hydrological time series spanning 10 to 30 years are available. The areas of the selected catchments range from 13.6 to 37.8 ha, and all areas are covered by coniferous or broad-leaved forests. The selected catchments are located from the southern coastal area to the northern part of South Korea. The bedrock is granite gneiss, granite, or limestone. The study proceeds as follows. First, the hydrological time series of each catchment are sampled and clustered into multi-periods with distinctly different temporal characteristics; second, behavioural parameter distributions are determined for each multi-period based on the specification of multi-criteria model performance measures. Finally, the behavioural parameter sets of each multi-period of a single catchment are applied to the corresponding period of the other catchments, and cross-validations are conducted in this manner for all catchments. The multi-period model conditioning approach is clearly effective in reducing the width of the prediction limits, giving better model performance against the temporal variability of hydrological characteristics, and has sufficient potential to become an effective prediction tool for ungauged catchments. However, further studies are needed to expand the application of this approach to the prediction of hydrological responses in ungauged catchments.
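The core GLUE step referred to above is the retention of "behavioural" parameter sets whose performance exceeds a threshold. The following is a minimal sketch of that step only, using a dummy two-parameter model (not TOPMODEL), synthetic forcing, Nash-Sutcliffe efficiency as the assumed performance measure, and an arbitrary threshold of 0.7; the multi-period clustering and cross-validation steps of the study are not shown.

    import numpy as np

    def nash_sutcliffe(obs, sim):
        """Nash-Sutcliffe efficiency, a common GLUE performance measure."""
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def toy_model(params, forcing):
        """Placeholder rainfall-runoff model (NOT TOPMODEL); a, b are hypothetical."""
        a, b = params
        return a * forcing + b * np.roll(forcing, 1)

    rng = np.random.default_rng(2)
    rain = rng.gamma(2.0, 1.0, size=365)                 # synthetic forcing
    obs = toy_model((0.6, 0.3), rain) + rng.normal(0, 0.2, size=365)

    # Monte Carlo sampling of parameter sets; keep those above a behavioural threshold.
    samples = rng.uniform([0.0, 0.0], [1.0, 1.0], size=(5000, 2))
    scores = np.array([nash_sutcliffe(obs, toy_model(p, rain)) for p in samples])
    behavioural = samples[scores > 0.7]                  # threshold is an assumption
    print(f"{len(behavioural)} behavioural sets out of {len(samples)}")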
Portet, Anaïs; Pinaud, Silvain; Tetreau, Guillaume; Galinier, Richard; Cosseau, Céline; Duval, David; Grunau, Christoph; Mitta, Guillaume; Gourbal, Benjamin
2017-10-01
The freshwater snail Biomphalaria glabrata is one of the vectors of the trematode pathogen Schistosoma mansoni, which is one of the agents responsible for human schistosomiasis. In this host-parasite interaction, the co-evolutionary dynamics result in an infectivity mosaic known as compatibility polymorphism. Integrative approaches, including large-scale molecular approaches, have been conducted in recent years to improve our understanding of the mechanisms underlying compatibility. This review presents the combination of integrated multi-omic approaches leading to the discovery of two repertoires of polymorphic and/or diversified interacting molecules: the parasite antigens S. mansoni polymorphic mucins (SmPoMucs) and the B. glabrata immune receptors fibrinogen-related proteins (FREPs). We argue that their interactions may be major components for defining the compatible/incompatible status of a specific snail/schistosome combination. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Cao, Zhanning; Li, Xiangyang; Sun, Shaohan; Liu, Qun; Deng, Guangxiao
2018-04-01
Aiming at the prediction of carbonate fractured-vuggy reservoirs, we put forward an integrated approach based on seismic and well data. We divide a carbonate fracture-cave system into four scales for study: micro-scale fracture, meso-scale fracture, macro-scale fracture and cave. Firstly, we analyze anisotropic attributes of prestack azimuth gathers based on multi-scale rock physics forward modeling. We select the frequency attenuation gradient attribute to calculate azimuth anisotropy intensity, and we constrain the result with Formation MicroScanner image data and trial production data to predict the distribution of both micro-scale and meso-scale fracture sets. Then, poststack seismic attributes, variance, curvature and ant algorithms are used to predict the distribution of macro-scale fractures. We also constrain the results with trial production data for accuracy. Next, the distribution of caves is predicted by the amplitude corresponding to the instantaneous peak frequency of the seismic imaging data. Finally, the meso-scale fracture sets, macro-scale fractures and caves are combined to obtain an integrated result. This integrated approach is applied to a real field in the Tarim Basin in western China for the prediction of fracture-cave reservoirs. The results indicate that this approach can explain the spatial distribution of carbonate reservoirs well. It can solve the problem of non-uniqueness and improve fracture prediction accuracy.
NASA Astrophysics Data System (ADS)
Engel, Dave W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David L.; Thompson, Sandra E.
2016-05-01
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
Microphysics in Multi-scale Modeling System with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2012-01-01
Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-land surface interaction processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance for the multi-scale modeling system will be presented.
Multi-scale curvature for automated identification of glaciated mountain landscapes
NASA Astrophysics Data System (ADS)
Prasicek, Günther; Otto, Jan-Christoph; Montgomery, David R.; Schrott, Lothar
2014-03-01
Erosion by glacial and fluvial processes shapes mountain landscapes in a long-recognized and characteristic way. Upland valleys incised by fluvial processes typically have a V-shaped cross-section with uniform and moderately steep slopes, whereas glacial valleys tend to have a U-shaped profile with a changing slope gradient. We present a novel regional approach to automatically differentiate between fluvial and glacial mountain landscapes based on the relation of multi-scale curvature and drainage area. Sample catchments are delineated and multiple moving window sizes are used to calculate per-cell curvature over a variety of scales ranging from the vicinity of the flow path at the valley bottom to catchment sections fully including valley sides. Single-scale curvature can take similar values for glaciated and non-glaciated catchments but a comparison of multi-scale curvature leads to different results according to the typical cross-sectional shapes. To adapt these differences for automated classification of mountain landscapes into areas with V- and U-shaped valleys, curvature values are correlated with drainage area and a new and simple morphometric parameter, the Difference of Minimum Curvature (DMC), is developed. At three study sites in the western United States the DMC thresholds determined from catchment analysis are used to automatically identify 5 × 5 km quadrats of glaciated and non-glaciated landscapes and the distinctions are validated by field-based geological and geomorphological maps. Our results demonstrate that DMC is a good predictor of glacial imprint, allowing automated delineation of glacially and fluvially incised mountain landscapes.
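The following is a crude sketch of the multi-scale curvature idea described above: a curvature proxy is computed from a DEM smoothed over a small and a large moving window, and their difference serves as a DMC-like indicator. The Laplacian-of-smoothed-DEM proxy, the window sizes, and the synthetic U- and V-shaped valleys are all illustrative assumptions, not the study's minimum-curvature algorithm or thresholds.

    import numpy as np
    from scipy.ndimage import uniform_filter, laplace

    def curvature_proxy(dem, window):
        """Crude per-cell curvature proxy: Laplacian of the DEM smoothed over
        a moving window of the given size (in cells)."""
        return laplace(uniform_filter(dem, size=window))

    def dmc(dem, small_window=3, large_window=31):
        """Difference of (proxy) curvature between a coarse and a fine scale.
        Window sizes are hypothetical, not the values used in the study."""
        return curvature_proxy(dem, large_window) - curvature_proxy(dem, small_window)

    # Synthetic example: a U-shaped (parabolic) and a V-shaped valley cross-section.
    x = np.linspace(-1, 1, 200)
    u_valley = np.tile(x ** 2, (200, 1))         # U-shaped: curvature spread across the floor
    v_valley = np.tile(np.abs(x), (200, 1))      # V-shaped: uniform side slopes, sharp crease
    print("mean DMC, U-shaped:", dmc(u_valley).mean())
    print("mean DMC, V-shaped:", dmc(v_valley).mean())

In the study itself, such scale-dependent curvature differences are correlated with drainage area and thresholded to separate glacially and fluvially incised quadrats.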
Quality and Utility of the Multi-Tiered Instruction Self-Efficacy Scale
ERIC Educational Resources Information Center
Barnes, Susan K.; Burchard, Melinda S.
2011-01-01
Response to Intervention (RTI) is an educational approach that integrates ongoing assessment of individual student progress with targeted instruction. Administrators and teachers in P-12 schools expressed a need for colleagues in higher education to provide training to general education pre-service and in-service teachers in selecting appropriate…
The U.S. EPA National Health and Environmental Effects Research Laboratory's (NHEERL) Wildlife Research Strategy was developed to provide methods, models and data to address concerns related to toxic chemicals and habitat alteration in the context of wildlife risk assessment and ...
Introduction to Multilevel Item Response Theory Analysis: Descriptive and Explanatory Models
ERIC Educational Resources Information Center
Sulis, Isabella; Toland, Michael D.
2017-01-01
Item response theory (IRT) models are the main psychometric approach for the development, evaluation, and refinement of multi-item instruments and scaling of latent traits, whereas multilevel models are the primary statistical method when considering the dependence between person responses when primary units (e.g., students) are nested within…
ERIC Educational Resources Information Center
Kappler, Ulrike; Rowland, Susan L.; Pedwell, Rhianna K.
2017-01-01
Systems biology is frequently taught with an emphasis on mathematical modeling approaches. This focus effectively excludes most biology, biochemistry, and molecular biology students, who are not mathematics majors. The mathematical focus can also present a misleading picture of systems biology, which is a multi-disciplinary pursuit requiring…
Like most air quality modeling systems, CMAQ divides the treatment of meteorological and chemical/transport processes into separate models run sequentially. A potential drawback to this approach is that it creates the illusion that these processes are minimally interdependent an...
Spatial scaling and multi-model inference in landscape genetics: Martes americana in northern Idaho
Tzeidle N. Wasserman; Samuel A. Cushman; Michael K. Schwartz; David O. Wallin
2010-01-01
Individual-based analyses relating landscape structure to genetic distances across complex landscapes enable rigorous evaluation of multiple alternative hypotheses linking landscape structure to gene flow. We utilize two extensions to increase the rigor of the individual-based causal modeling approach to inferring relationships between landscape patterns and gene flow...
NASA Astrophysics Data System (ADS)
Peigney, B. E.; Larroche, O.; Tikhonchuk, V.
2014-12-01
In this article, we study the hydrodynamics and burn of the thermonuclear fuel in inertial confinement fusion pellets at the ion kinetic level. The analysis is based on a two-velocity-scale Vlasov-Fokker-Planck kinetic model that is specially tailored to treat fusion products (suprathermal α-particles) in a self-consistent manner with the thermal bulk. The model assumes spherical symmetry in configuration space and axial symmetry in velocity space around the mean flow velocity. A typical hot-spot ignition design is considered. Compared with fluid simulations where a multi-group diffusion scheme is applied to model α transport, the full ion-kinetic approach reveals significant non-local effects on the transport of energetic α-particles. This has a direct impact on hydrodynamic spatial profiles during combustion: the hot spot reactivity is reduced, while the inner dense fuel layers are pre-heated by the escaping α-suprathermal particles, which are transported farther out of the hot spot. We show how the kinetic transport enhancement of fusion products leads to a significant reduction of the fusion yield.
Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations
NASA Technical Reports Server (NTRS)
Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.
2015-01-01
Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of the flows have a large range of length and time scales, requiring large computational grids, implicit time integration, and large solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly-coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is overviewed. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations are presented to illustrate some of the novel features of the code.
An Integrated Assessment of Location-Dependent Scaling for Microalgae Biofuel Production Facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coleman, Andre M.; Abodeely, Jared; Skaggs, Richard
Successful development of a large-scale microalgae-based biofuels industry requires comprehensive analysis and understanding of the feedstock supply chain—from facility siting/design through processing/upgrading of the feedstock to a fuel product. The evolution from pilot-scale production facilities to energy-scale operations presents many multi-disciplinary challenges, including a sustainable supply of water and nutrients, operational and infrastructure logistics, and economic competitiveness with petroleum-based fuels. These challenges are addressed in part by applying the Integrated Assessment Framework (IAF)—an integrated multi-scale modeling, analysis, and data management suite—to address key issues in developing and operating an open-pond facility by analyzing how variability and uncertainty in space and time affect algal feedstock production rates, and determining the site-specific “optimum” facility scale to minimize capital and operational expenses. This approach explicitly and systematically assesses the interdependence of biofuel production potential, associated resource requirements, and production system design trade-offs. The IAF was applied to a set of sites previously identified as having the potential to cumulatively produce 5 billion-gallons/year in the southeastern U.S. and results indicate costs can be reduced by selecting the most effective processing technology pathway and scaling downstream processing capabilities to fit site-specific growing conditions, available resources, and algal strains.
Cui, Tianxiang; Wang, Yujie; Sun, Rui; Qiao, Chen; Fan, Wenjie; Jiang, Guoqing; Hao, Lvyuan; Zhang, Lei
2016-01-01
Estimating gross primary production (GPP) and net primary production (NPP) is of significant importance in studying carbon cycles. Using models driven by multi-source and multi-scale data is a promising approach to estimate GPP and NPP at regional and global scales. With a focus on data that are openly accessible, this paper presents a GPP and NPP model driven by remotely sensed data and meteorological data with spatial resolutions varying from 30 m to 0.25 degree and temporal resolutions ranging from 3 hours to 1 month, by integrating remote sensing techniques and eco-physiological process theories. Our model is also designed as part of the Multi-source data Synergized Quantitative (MuSyQ) Remote Sensing Production System. In the presented MuSyQ-NPP algorithm, daily GPP for a 10-day period was calculated as a product of incident photosynthetically active radiation (PAR) and its fraction absorbed by vegetation (FPAR) using a light use efficiency (LUE) model. The autotrophic respiration (Ra) was determined using eco-physiological process theories and the daily NPP was obtained as the balance between GPP and Ra. To test its feasibility at regional scales, our model was applied to an arid and semi-arid region of the Heihe River Basin, China, to generate daily GPP and NPP during the growing season of 2012. The results indicated that both GPP and NPP exhibit clear spatial and temporal patterns in their distribution over the Heihe River Basin during the growing season due to the temperature, water and solar influx conditions. After validation against ground-based measurements, the MODIS GPP product (MOD17A2H), and results reported in the recent literature, we found that the MuSyQ-NPP algorithm could yield an RMSE of 2.973 gC m^-2 d^-1 and an R of 0.842 when compared with ground-based GPP, whereas an RMSE of 8.010 gC m^-2 d^-1 and an R of 0.682 were achieved for MODIS GPP; the estimated NPP values were also well within the range of the previous literature, which supports the reliability of our modelling results. This research suggests that the utilization of multi-source, multi-scale data helps to establish an appropriate model for calculating GPP and NPP at regional scales with relatively high spatial and temporal resolution. PMID:27088356
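The core balance described above (GPP from a light-use-efficiency model, NPP as GPP minus autotrophic respiration) is simple enough to sketch directly. In the snippet below the maximum LUE, the temperature scalar, the fixed respiration fraction, and all input values are illustrative assumptions, not MuSyQ-NPP parameters, and the "tower" reference data are synthetic.

    import numpy as np

    epsilon_max = 1.8                          # gC MJ^-1, hypothetical maximum light-use efficiency
    par = np.array([8.5, 9.1, 7.6, 10.2])      # MJ m^-2 d^-1, incident PAR (synthetic)
    fpar = np.array([0.42, 0.55, 0.61, 0.58])  # fraction of PAR absorbed by vegetation (synthetic)
    t_scalar = np.array([0.9, 0.95, 1.0, 0.85])  # temperature down-regulation (synthetic)

    gpp = epsilon_max * t_scalar * par * fpar  # gC m^-2 d^-1
    ra = 0.45 * gpp                            # autotrophic respiration, assumed constant fraction
    npp = gpp - ra
    print("daily GPP:", np.round(gpp, 2))
    print("daily NPP:", np.round(npp, 2))

    # Accuracy metrics of the kind reported above (RMSE and correlation R),
    # here computed against synthetic reference GPP for illustration only.
    ref_gpp = gpp + np.random.default_rng(3).normal(0, 0.5, size=gpp.size)
    rmse = np.sqrt(np.mean((gpp - ref_gpp) ** 2))
    r = np.corrcoef(gpp, ref_gpp)[0, 1]
    print(f"RMSE = {rmse:.2f} gC m^-2 d^-1, R = {r:.2f}")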
Scale and the representation of human agency in the modeling of agroecosystems
Preston, Benjamin L.; King, Anthony W.; Ernst, Kathleen M.; ...
2015-07-17
Human agency is an essential determinant of the dynamics of agroecosystems. However, the manner in which agency is represented within different approaches to agroecosystem modeling is largely contingent on the scales of analysis and the conceptualization of the system of interest. While appropriate at times, narrow conceptualizations of agroecosystems can preclude consideration for how agency manifests at different scales, thereby marginalizing processes, feedbacks, and constraints that would otherwise affect model results. Modifications to the existing modeling toolkit may therefore enable more holistic representations of human agency. Model integration can assist with the development of multi-scale agroecosystem modeling frameworks that capture different aspects of agency. In addition, expanding the use of socioeconomic scenarios and stakeholder participation can assist in explicitly defining context-dependent elements of scale and agency. Such approaches should, however, be accompanied by greater recognition of the meta agency of model users and the need for more critical evaluation of model selection and application.
Orion Flight Test Architecture Benefits of MBSE Approach
NASA Technical Reports Server (NTRS)
Reed, Don; Simpson, Kim
2012-01-01
Exploration Flight Test 1 (EFT-1) is the unmanned first orbital flight test of the Multi-Purpose Crew Vehicle (MPCV). The mission's purpose is to test Orion's ascent, on-orbit, and entry capabilities, monitor critical activities, and provide ground control in support of contingency scenarios. This requires the development of a large-scale, end-to-end information system network architecture. To effectively communicate the scope of the end-to-end system, a model-based systems engineering approach was chosen.
Epitrochoid Power-Law Nozzle Rapid Prototype Build/Test Project (Briefing Charts)
2015-02-01
[Briefing chart excerpt] Epitrochoid Power-Law Nozzle rapid prototype build/test, building on the multi-engine approach of the SpaceX Falcon 9 (Merlin 1D engines on the Falcon 9 v1.1) to utilize advances in high-performance engines and the economies of scale of a multi-engine approach. Approved for public release; distribution is unlimited (PA clearance # 15122).
Gorsevski, Pece V; Donevska, Katerina R; Mitrovski, Cvetko D; Frizado, Joseph P
2012-02-01
This paper presents a GIS-based multi-criteria decision analysis approach for evaluating the suitability for landfill site selection in the Polog Region, Macedonia. The multi-criteria decision framework considers environmental and economic factors which are standardized by fuzzy membership functions and combined by integration of analytical hierarchy process (AHP) and ordered weighted average (OWA) techniques. The AHP is used for the elicitation of attribute weights while the OWA operator function is used to generate a wide range of decision alternatives for addressing uncertainty associated with interaction between multiple criteria. The usefulness of the approach is illustrated by different OWA scenarios that report landfill suitability on a scale between 0 and 1. The OWA scenarios are intended to quantify the level of risk taking (i.e., optimistic, pessimistic, and neutral) and to facilitate a better understanding of patterns that emerge from decision alternatives involved in the decision making process. Copyright © 2011 Elsevier Ltd. All rights reserved.
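As a minimal sketch of how AHP criterion weights and OWA order weights can be combined for one location, the snippet below follows the Malczewski-style weighted OWA commonly used in GIS multi-criteria analysis; it assumes the criterion values are already standardized to [0, 1] by fuzzy membership functions, and the weights, order weights, and site scores are illustrative values rather than those of the study.

    import numpy as np

    ahp_weights = np.array([0.5, 0.3, 0.2])        # criterion weights from a (hypothetical) AHP
    order_weights = np.array([0.2, 0.3, 0.5])      # order weights; skewing toward low values
                                                   # gives a more pessimistic (risk-averse) scenario

    def owa(values, ahp_w, order_w):
        """Weighted OWA for one location: sort the standardized criterion values in
        descending order, reorder the criterion weights accordingly, and combine
        them with the order weights."""
        idx = np.argsort(values)[::-1]             # descending order of criterion values
        z = values[idx]
        u = ahp_w[idx]
        v = (u * order_w) / np.sum(u * order_w)    # combined, renormalized weights
        return float(np.sum(v * z))

    # Three hypothetical candidate sites, each scored on three standardized criteria.
    sites = np.array([[0.9, 0.4, 0.7],
                      [0.6, 0.8, 0.5],
                      [0.3, 0.9, 0.8]])
    for k, s in enumerate(sites):
        print(f"site {k}: suitability = {owa(s, ahp_weights, order_weights):.3f}")

Varying the order weights between extremes (all weight on the minimum versus the maximum criterion value) generates the range of pessimistic-to-optimistic decision scenarios described above.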
Ashraf, Muhammad Tahir; Schmidt, Jens Ejbye
2018-02-01
A biorefinery based on multi-feedstock lignocellulose can be viable where a sustainable supply of a single substrate is limited, for example in arid regions. Processing of mixed feedstocks has been studied at lab scale; however, its economics are less well studied. In this study, an economic comparison was made between separate and combined (mixed) processing approaches for multi-feedstock lignocellulose for the production of monomeric sugars. This modular approach of focusing on the sugar platform makes the results applicable to many applications using the sugars as feedstock. The feedstocks considered in this study were the green and woody lignocellulose residues: Bermuda grass, Jasmine hedges, and date palm fronds. Results showed that, at an identical total feed rate, combined processing was more advantageous than separate processing. A further sensitivity analysis of mixed combined processing showed that the cellulase enzyme price and the feed price are the two major factors affecting the production cost. Copyright © 2017 Elsevier Ltd. All rights reserved.
A multi-objective approach to improve SWAT model calibration in alpine catchments
NASA Astrophysics Data System (ADS)
Tuo, Ye; Marcolini, Giorgia; Disse, Markus; Chiogna, Gabriele
2018-04-01
Multi-objective hydrological model calibration can represent a valuable solution to reduce model equifinality and parameter uncertainty. The Soil and Water Assessment Tool (SWAT) model is widely applied to investigate water quality and water management issues in alpine catchments. However, the model calibration is generally based on discharge records only, and most of the previous studies have defined a unique set of snow parameters for an entire basin. Only a few studies have considered snow observations to validate model results or have taken into account the possible variability of snow parameters for different subbasins. This work presents and compares three possible calibration approaches. The first two procedures are single-objective calibration procedures, for which all parameters of the SWAT model were calibrated according to river discharge alone. Procedures I and II differ from each other by the assumption used to define snow parameters: The first approach assigned a unique set of snow parameters to the entire basin, whereas the second approach assigned different subbasin-specific sets of snow parameters to each subbasin. The third procedure is a multi-objective calibration, in which we considered snow water equivalent (SWE) information at two different spatial scales (i.e. subbasin and elevation band), in addition to discharge measurements. We tested these approaches in the Upper Adige river basin where a dense network of snow depth measurement stations is available. Only the set of parameters obtained with this multi-objective procedure provided an acceptable prediction of both river discharge and SWE. These findings offer the large community of SWAT users a strategy to improve SWAT modeling in alpine catchments.
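A simple way to picture the multi-objective idea above is an aggregate score that rewards a parameter set only if it reproduces both discharge and snow water equivalent (SWE). The sketch below uses equally weighted Nash-Sutcliffe efficiencies for the two variables as a stand-in; the weighting, the metric choice, and the synthetic data are assumptions for illustration, not the authors' exact formulation.

    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency."""
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def multi_objective_score(q_obs, q_sim, swe_obs, swe_sim, w_q=0.5, w_swe=0.5):
        """Aggregate calibration score combining discharge and SWE performance."""
        return w_q * nse(q_obs, q_sim) + w_swe * nse(swe_obs, swe_sim)

    # Synthetic example: the aggregate score penalizes a parameter set that fits
    # discharge well but reproduces SWE poorly.
    rng = np.random.default_rng(4)
    q_obs = rng.gamma(3.0, 2.0, 365)
    swe_obs = np.clip(np.sin(np.linspace(0, np.pi, 365)), 0, None) * 200
    q_sim = q_obs + rng.normal(0, 1.0, 365)
    swe_sim_bad = swe_obs * 0.4
    print("score with poor SWE:", round(multi_objective_score(q_obs, q_sim, swe_obs, swe_sim_bad), 3))
    print("score with good SWE:", round(multi_objective_score(q_obs, q_sim, swe_obs, swe_obs * 0.95), 3))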
On the cooperativity of association and reference energy scales in thermodynamic perturbation theory
NASA Astrophysics Data System (ADS)
Marshall, Bennett D.
2016-11-01
Equations of state for hydrogen-bonding fluids are typically described by two energy scales: a short-range, highly directional hydrogen-bonding energy scale, and a reference energy scale which accounts for dispersion and orientationally averaged multi-pole attractions. These energy scales are always treated independently. In recent years, extensive first-principles quantum mechanics calculations on small water clusters have shown that both the hydrogen bond and reference energy scales depend on the number of incident hydrogen bonds of the water molecule. In this work, we propose a new methodology to couple the reference energy scale to the degree of hydrogen bonding in the fluid. We demonstrate the utility of the new approach by showing that it gives improved predictions of water-hydrocarbon mutual solubilities.
NASA Astrophysics Data System (ADS)
Cowles, G. W.; Hakim, A.; Churchill, J. H.
2016-02-01
Tidal in-stream energy conversion (TISEC) facilities provide a highly predictable and dependable source of energy. Given the economic and social incentives to migrate towards renewable energy sources, there has been tremendous interest in the technology. Key challenges to the design process stem from the wide range of problem scales extending from device to array. In the present work we apply a multi-model approach to bridge the scales of interest and select optimal device geometries to estimate the technical resource for several realistic sites in the coastal waters of Massachusetts, USA. The approach links two computational models. To establish flow conditions at site scales (~10 m), a barotropic setup of the unstructured grid ocean model FVCOM is employed. The model is validated using shipboard and fixed ADCP as well as pressure data. For the device scale, the structured multiblock flow solver SUmb is selected. A large ensemble of simulations of 2D cross-flow tidal turbines is used to construct a surrogate design model. The surrogate model is then queried using velocity profiles extracted from the tidal model to determine the optimal geometry for the conditions at each site. After device selection, the annual technical yield of the array is evaluated with FVCOM using a linear momentum actuator disk approach to model the turbines. Results for several key Massachusetts sites, including comparison with theoretical approaches, will be presented.
2D deblending using the multi-scale shaping scheme
NASA Astrophysics Data System (ADS)
Li, Qun; Ban, Xingan; Gong, Renbin; Li, Jinnuo; Ge, Qiang; Zu, Shaohuan
2018-01-01
Deblending can be posed as an inversion problem, which is ill-posed and requires constraints to obtain a unique and stable solution. In a blended record, the signal is coherent, whereas the interference is incoherent in some domains (e.g., the common receiver domain and the common offset domain). Due to their different sparsity, the coefficients of signal and interference are located in different curvelet scale domains and have different amplitudes. Taking these two differences into account, we propose a 2D multi-scale shaping scheme that constrains the sparsity to separate the blended record. In the domains where the signal concentrates, the multi-scale scheme passes all the coefficients representing signal, while in the domains where the interference focuses, the multi-scale scheme suppresses the coefficients representing interference. Because the interference is evidently suppressed at each iteration, the constraints of the multi-scale shaping operator in all scale domains can be weak, which guarantees the convergence of the algorithm. We evaluate the performance of the multi-scale shaping scheme and the traditional global shaping scheme using two synthetic examples and one field data example.
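The following is a crude sketch of scale-dependent shaping of the kind described above. A 2D DCT is used as a stand-in for the curvelet transform, "scales" are approximated by radial frequency bands, and the thresholds, blending model, and iteration constant are illustrative assumptions; the details differ from the paper's scheme.

    import numpy as np
    from scipy.fft import dctn, idctn

    def multi_scale_shaping(record, thresholds=(0.0, 0.5, 2.0)):
        """Soft-threshold the 2D DCT coefficients of `record`, with a weaker
        threshold on low-frequency bands (where coherent signal concentrates) and
        a stronger one on high-frequency bands (where incoherent interference spreads)."""
        coeffs = dctn(record, norm="ortho")
        n0, n1 = coeffs.shape
        # Radial "scale" index of each coefficient, binned into len(thresholds) bands.
        r = np.hypot(*np.meshgrid(np.arange(n0) / n0, np.arange(n1) / n1, indexing="ij"))
        band = np.minimum((r * len(thresholds)).astype(int), len(thresholds) - 1)
        t = np.asarray(thresholds)[band]
        shaped = np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
        return idctn(shaped, norm="ortho")

    def deblend(blended, n_iter=20):
        """Iterative separation: re-insert the residual and re-shape at each step,
        the basic structure of shaping-regularized deblending."""
        estimate = np.zeros_like(blended)
        for _ in range(n_iter):
            estimate = multi_scale_shaping(estimate + 0.5 * (blended - estimate))
        return estimate

    rng = np.random.default_rng(5)
    signal = np.sin(np.linspace(0, 8 * np.pi, 128))[:, None] * np.ones((1, 64))
    interference = 5.0 * (rng.random((128, 64)) < 0.01) * rng.normal(size=(128, 64))
    recovered = deblend(signal + interference)
    print("residual energy ratio:", np.sum((recovered - signal) ** 2) / np.sum(signal ** 2))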
Xiao, Xiaolin; Moreno-Moral, Aida; Rotival, Maxime; Bottolo, Leonardo; Petretto, Enrico
2014-01-01
Recent high-throughput efforts such as ENCODE have generated a large body of genome-scale transcriptional data in multiple conditions (e.g., cell-types and disease states). Leveraging these data is especially important for network-based approaches to human disease, for instance to identify coherent transcriptional modules (subnetworks) that can inform functional disease mechanisms and pathological pathways. Yet, genome-scale network analysis across conditions is significantly hampered by the paucity of robust and computationally-efficient methods. Building on the Higher-Order Generalized Singular Value Decomposition, we introduce a new algorithmic approach for efficient, parameter-free and reproducible identification of network-modules simultaneously across multiple conditions. Our method can accommodate weighted (and unweighted) networks of any size and can similarly use co-expression or raw gene expression input data, without hinging upon the definition and stability of the correlation used to assess gene co-expression. In simulation studies, we demonstrated distinctive advantages of our method over existing methods, which was able to recover accurately both common and condition-specific network-modules without entailing ad-hoc input parameters as required by other approaches. We applied our method to genome-scale and multi-tissue transcriptomic datasets from rats (microarray-based) and humans (mRNA-sequencing-based) and identified several common and tissue-specific subnetworks with functional significance, which were not detected by other methods. In humans we recapitulated the crosstalk between cell-cycle progression and cell-extracellular matrix interactions processes in ventricular zones during neocortex expansion and further, we uncovered pathways related to development of later cognitive functions in the cortical plate of the developing brain which were previously unappreciated. Analyses of seven rat tissues identified a multi-tissue subnetwork of co-expressed heat shock protein (Hsp) and cardiomyopathy genes (Bag3, Cryab, Kras, Emd, Plec), which was significantly replicated using separate failing heart and liver gene expression datasets in humans, thus revealing a conserved functional role for Hsp genes in cardiovascular disease.
Effective surface and boundary conditions for heterogeneous surfaces with mixed boundary conditions
NASA Astrophysics Data System (ADS)
Guo, Jianwei; Veran-Tissoires, Stéphanie; Quintard, Michel
2016-01-01
To deal with multi-scale problems involving transport from a heterogeneous and rough surface characterized by a mixed boundary condition, an effective surface theory is developed, which replaces the original surface by a homogeneous and smooth surface with specific boundary conditions. A typical example corresponds to a laminar flow over a soluble salt medium which contains insoluble material. To develop the concept of effective surface, a multi-domain decomposition approach is applied. In this framework, velocity and concentration at micro-scale are estimated with an asymptotic expansion of deviation terms with respect to macro-scale velocity and concentration fields. Closure problems for the deviations are obtained and used to define the effective surface position and the related boundary conditions. The evolution of some effective properties and the impact of surface geometry, Péclet, Schmidt and Damköhler numbers are investigated. Finally, comparisons are made between the numerical results obtained with the effective models and those from direct numerical simulations with the original rough surface, for two kinds of configurations.
Hengsbach, Stefan; Lantada, Andrés Díaz
2014-08-01
The possibility of designing and manufacturing biomedical microdevices with multiple length-scale geometries can help to promote special interactions both with their environment and with surrounding biological systems. These interactions aim to enhance biocompatibility and overall performance by using biomimetic approaches. In this paper, we present a design and manufacturing procedure for obtaining multi-scale biomedical microsystems based on the combination of two additive manufacturing processes: a conventional laser writer to manufacture the overall device structure, and a direct-laser writer based on two-photon polymerization to yield finer details. The process excels in its versatility, accuracy and manufacturing speed, and allows for the manufacture of microsystems and implants with overall sizes up to several millimeters and with details down to sub-micrometric structures. As an application example, we have focused on manufacturing a biomedical microsystem to analyze the impact of microtextured surfaces on cell motility. This process yielded a relevant increase in precision and manufacturing speed when compared with more conventional rapid prototyping procedures.
A mixed parallel strategy for the solution of coupled multi-scale problems at finite strains
NASA Astrophysics Data System (ADS)
Lopes, I. A. Rodrigues; Pires, F. M. Andrade; Reis, F. J. P.
2018-02-01
A mixed parallel strategy for the solution of homogenization-based multi-scale constitutive problems undergoing finite strains is proposed. The approach aims to reduce the computational time and memory requirements of non-linear coupled simulations that use finite element discretization at both scales (FE^2). In the first level of the algorithm, a non-conforming domain decomposition technique, based on the FETI method combined with a mortar discretization at the interface of macroscopic subdomains, is employed. A master-slave scheme, which distributes tasks by macroscopic element and adopts dynamic scheduling, is then used for each macroscopic subdomain composing the second level of the algorithm. This strategy allows the parallelization of FE^2 simulations in computers with either shared memory or distributed memory architectures. The proposed strategy preserves the quadratic rates of asymptotic convergence that characterize the Newton-Raphson scheme. Several examples are presented to demonstrate the robustness and efficiency of the proposed parallel strategy.
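As a rough illustration of the second-level master-slave idea only (one micro-scale problem per macroscopic element, dispatched to workers with dynamic scheduling), the sketch below uses Python multiprocessing as a stand-in. The "micro problem" is a dummy function, not an RVE finite element solve, and the first-level FETI/mortar domain decomposition is not shown.

    import numpy as np
    from multiprocessing import Pool

    def solve_micro_problem(task):
        """Dummy stand-in for a micro-scale (RVE) solve at one macroscopic element:
        returns a fake 'homogenized stress' as a nonlinear function of the macro strain."""
        element_id, macro_strain = task
        stress = np.tanh(macro_strain) * (1.0 + 0.1 * (element_id % 3))
        return element_id, stress

    def macro_iteration(macro_strains, n_workers=4):
        """Master side: distribute one task per macroscopic element to the workers."""
        tasks = list(enumerate(macro_strains))
        with Pool(processes=n_workers) as pool:
            # chunksize=1 gives dynamic (first-free-worker) scheduling across elements.
            results = dict(pool.imap_unordered(solve_micro_problem, tasks, chunksize=1))
        return np.array([results[i] for i in range(len(macro_strains))])

    if __name__ == "__main__":
        strains = np.linspace(0.0, 0.05, 200)     # one macro strain per element (synthetic)
        stresses = macro_iteration(strains)
        print("homogenized stresses computed for", stresses.size, "elements")

Dynamic scheduling matters here because micro problems at different elements can take very different times to converge, so a static split of elements across workers would leave some processes idle.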
Summarizing the evidence on the international trade in illegal wildlife.
Rosen, Gail Emilia; Smith, Katherine F
2010-08-01
The global trade in illegal wildlife is a multi-billion dollar industry that threatens biodiversity and acts as a potential avenue for invasive species and disease spread. Despite the broad-sweeping implications of illegal wildlife sales, scientists have yet to describe the scope and scale of the trade. Here, we provide the most thorough and current description of the illegal wildlife trade using 12 years of seizure records compiled by TRAFFIC, the wildlife trade monitoring network. These records comprise 967 seizures including massive quantities of ivory, tiger skins, live reptiles, and other endangered wildlife and wildlife products. Most seizures originate in Southeast Asia, a recently identified hotspot for future emerging infectious diseases. To date, regulation and enforcement have been insufficient to effectively control the global trade in illegal wildlife at national and international scales. Effective control will require a multi-pronged approach including community-scale education and empowering local people to value wildlife, coordinated international regulation, and a greater allocation of national resources to on-the-ground enforcement.
Automatic co-registration of 3D multi-sensor point clouds
NASA Astrophysics Data System (ADS)
Persad, Ravi Ancil; Armenakis, Costas
2017-08-01
We propose an approach for the automatic coarse alignment of 3D point clouds which have been acquired from various platforms. The method is based on 2D keypoint matching performed on height map images of the point clouds. Initially, a multi-scale wavelet keypoint detector is applied, followed by adaptive non-maxima suppression. A scale, rotation and translation-invariant descriptor is then computed for all keypoints. The descriptor is built using the log-polar mapping of Gabor filter derivatives in combination with the so-called Rapid Transform. In the final step, source and target height map keypoint correspondences are determined using a bi-directional nearest neighbour similarity check, together with a threshold-free modified-RANSAC. Experiments with urban and non-urban scenes are presented and results show scale errors ranging from 0.01 to 0.03, 3D rotation errors in the order of 0.2° to 0.3° and 3D translation errors from 0.09 m to 1.1 m.
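The final matching step above, the bi-directional nearest-neighbour similarity check, is easy to sketch in isolation. In the snippet below the descriptors are random placeholders (in the paper they are log-polar Gabor/Rapid Transform descriptors), and the threshold-free modified RANSAC stage that follows is not shown.

    import numpy as np

    def mutual_nearest_neighbours(desc_src, desc_tgt):
        """Return index pairs (i, j) such that target j is the nearest neighbour of
        source i AND source i is the nearest neighbour of target j."""
        # Pairwise Euclidean distances between the two descriptor sets.
        d = np.linalg.norm(desc_src[:, None, :] - desc_tgt[None, :, :], axis=2)
        nn_src_to_tgt = d.argmin(axis=1)      # for each source keypoint, best target
        nn_tgt_to_src = d.argmin(axis=0)      # for each target keypoint, best source
        return [(i, j) for i, j in enumerate(nn_src_to_tgt) if nn_tgt_to_src[j] == i]

    rng = np.random.default_rng(6)
    src = rng.normal(size=(40, 32))                                  # 40 source descriptors, 32-D
    tgt = np.vstack([src[:25] + 0.05 * rng.normal(size=(25, 32)),    # 25 perturbed true matches
                     rng.normal(size=(15, 32))])                     # 15 distractors
    matches = mutual_nearest_neighbours(src, tgt)
    print(f"{len(matches)} mutual matches found (expected about 25)")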
Aerosol and Surface Parameter Retrievals for a Multi-Angle, Multiband Spectrometer
NASA Technical Reports Server (NTRS)
Broderick, Daniel
2012-01-01
This software retrieves the surface and atmosphere parameters of multi-angle, multiband spectra. The synthetic spectra are generated by applying the modified Rahman-Pinty-Verstraete Bidirectional Reflectance Distribution Function (BRDF) model and a single-scattering-dominated atmosphere model to surface reflectance data from the Multiangle Imaging SpectroRadiometer (MISR). The aerosol physical model uses a single-scattering approximation with Rayleigh-scattering molecules and Henyey-Greenstein aerosols. The surface and atmosphere parameters of the models are retrieved using the Levenberg-Marquardt algorithm. The software can retrieve the surface and atmosphere parameters at two different scales. The surface parameters are retrieved pixel-by-pixel, while the atmosphere parameters are retrieved for a group of pixels to which the same atmosphere model parameters are applied. This two-scale approach allows one to select the natural scale of the atmosphere properties relative to surface properties. The software also takes advantage of an intelligent initial condition given by the solution of the neighboring pixels.
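The retrieval step itself can be illustrated with a toy Levenberg-Marquardt fit. The model below is only a scaled Henyey-Greenstein phase function, not the modified RPV surface plus single-scattering atmosphere model of the software described above; the nine viewing geometries loosely echo MISR's nine cameras, and all parameter values are hypothetical.

    import numpy as np
    from scipy.optimize import least_squares

    def hg_model(params, scatter_angle):
        """Amplitude-scaled Henyey-Greenstein phase function."""
        amp, g = params
        cos_t = np.cos(scatter_angle)
        return amp * (1 - g ** 2) / (1 + g ** 2 - 2 * g * cos_t) ** 1.5

    def residuals(params, scatter_angle, observed):
        return hg_model(params, scatter_angle) - observed

    rng = np.random.default_rng(7)
    angles = np.deg2rad(np.linspace(60, 180, 9))    # nine viewing geometries (synthetic)
    truth = (0.3, -0.2)                             # hypothetical amplitude and asymmetry
    obs = hg_model(truth, angles) + rng.normal(0, 0.002, angles.size)

    fit = least_squares(residuals, x0=[0.2, 0.0], args=(angles, obs), method="lm")
    print("retrieved (amplitude, asymmetry):", np.round(fit.x, 3))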
Precipitation and Phase Transformations in 2101 Lean Duplex Stainless Steel During Isothermal Aging
NASA Astrophysics Data System (ADS)
Maetz, Jean-Yves; Cazottes, Sophie; Verdu, Catherine; Kleber, Xavier
2016-01-01
The effect of isothermal aging at 963 K (690 °C) on the microstructure of a 2101 lean duplex stainless steel, with the composition Fe-21.5Cr-5Mn-1.6Ni-0.22N-0.3Mo, was investigated using a multi-technique and multi-scale approach. The kinetics of phase transformation and precipitation was followed from a few minutes to thousands of hours using thermoelectric power measurements; based on these results, certain aging states were selected for electron microscopy characterization. Scanning electron microscopy, electron back-scattered diffraction, and transmission electron microscopy were used to quantitatively describe the microstructural evolution through crystallographic analysis, chemical analysis, and volume fraction measurements from the macroscopic scale down to the nanometric scale. During aging, the precipitation of M23C6 carbides, Cr2N nitrides, and σ phase as well as the transformation of ferrite into austenite and austenite into martensite was observed. These complex microstructural changes are controlled by Cr volume diffusion. The precipitation and phase transformation mechanisms are described.
NASA Astrophysics Data System (ADS)
Li, Yifan; Liang, Xihui; Lin, Jianhui; Chen, Yuejian; Liu, Jianxin
2018-02-01
This paper presents a novel signal processing scheme, a feature-selection-based multi-scale morphological filter (MMF), for train axle bearing fault detection. In this scheme, more than 30 feature indicators of vibration signals are calculated for axle bearings in different conditions, and the features which reflect fault characteristics most effectively and representatively are selected using the max-relevance and min-redundancy principle. Then, a filtering scale selection approach for the MMF based on feature selection and grey relational analysis is proposed. The feature-selection-based MMF method is tested on the diagnosis of artificially created damage to rolling bearings of railway trains. Experimental results show that the proposed method has superior performance in extracting fault features of defective train axle bearings. In addition, comparisons are performed with the kurtosis-criterion-based MMF and the spectral-kurtosis-criterion-based MMF. The proposed feature-selection-based MMF method outperforms these two methods in the detection of train axle bearing faults.
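A basic multi-scale morphological filter can be sketched as the weighted combination of open-closing and close-opening operations at several flat structuring-element lengths. In the sketch below the scale weights are simply uniform, standing in for the feature-selection and grey-relational weighting of the paper, and the bearing-like signal is synthetic.

    import numpy as np
    from scipy.ndimage import grey_opening, grey_closing

    def mmf(signal, scales=(3, 5, 7, 9), weights=None):
        """Multi-scale morphological filter: average open-closing and close-opening
        at each scale, then combine scales with the given weights (uniform here)."""
        if weights is None:
            weights = np.full(len(scales), 1.0 / len(scales))
        out = np.zeros_like(signal, dtype=float)
        for w, s in zip(weights, scales):
            oc = grey_closing(grey_opening(signal, size=s), size=s)   # open-closing
            co = grey_opening(grey_closing(signal, size=s), size=s)   # close-opening
            out += w * 0.5 * (oc + co)
        return out

    # Synthetic bearing-like signal: periodic impulses (fault signature) plus noise.
    rng = np.random.default_rng(8)
    signal = rng.normal(0, 0.3, 4096)
    signal[::256] += 3.0                      # an impulse every 256 samples
    filtered = mmf(signal)
    print("output std / input std:", filtered.std() / signal.std())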
Beyond Darcy's law: The role of phase topology and ganglion dynamics for two-fluid flow
Armstrong, Ryan T.; McClure, James E.; Berrill, Mark A.; ...
2016-10-27
Relative permeability quantifies the ease with which immiscible phases flow through porous rock and is one of the most well-known constitutive relationships for petroleum engineers. However, it exhibits troubling dependencies on experimental conditions and is not a unique function of phase saturation, as commonly accepted in industry practice. The problem lies in the multi-scale nature of the problem, where underlying disequilibrium processes create anomalous macroscopic behavior. Here we show that relative permeability rate dependencies are explained by ganglion dynamic flow. We utilize fast X-ray micro-tomography and pore-scale simulations to identify unique flow regimes during the fractional flow of immiscible phases and quantify the contribution of ganglion flux to the overall flux of the non-wetting phase. We anticipate our approach to be the starting point for the development of sophisticated multi-scale flow models that directly link pore-scale parameters to macro-scale behavior. Such models will have a major impact on how we recover hydrocarbons from the subsurface, store sequestered CO2 in geological formations, and remove non-aqueous environmental hazards from the vadose zone.
The Biot coefficient for a low permeability heterogeneous limestone
NASA Astrophysics Data System (ADS)
Selvadurai, A. P. S.
2018-04-01
This paper presents the experimental and theoretical developments used to estimate the Biot coefficient for the heterogeneous Cobourg Limestone, which is characterized by its very low permeability. The coefficient forms an important component of the Biot poroelastic model that is used to examine coupled hydro-mechanical and thermo-hydro-mechanical processes in the fluid-saturated Cobourg Limestone. The constraints imposed by both the heterogeneous fabric and its extremely low intact permeability (K in the range of 10^-23 to 10^-20 m^2) require the development of alternative approaches to estimate the Biot coefficient. Large specimen bench-scale triaxial tests (150 mm diameter and 300 mm long) that account for the scale of the heterogeneous fabric are complemented by results for the volume fraction-based mineralogical composition derived from XRD measurements. The compressibility of the solid phase is based on theoretical developments proposed in the mechanics of multi-phasic elastic materials. An appeal to the theory of multi-phasic elastic solids is the only feasible approach for examining the compressibility of the solid phase. The presence of a number of mineral species necessitates the use of the theories of Voigt, Reuss and Hill along with the theories proposed by Hashin and Shtrikman for developing bounds for the compressibility of the multi-phasic geologic material composing the skeletal fabric. The analytical estimates for the Biot coefficient for the Cobourg Limestone are compared with results for similar low permeability rocks reported in the literature.
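The Voigt, Reuss and Hill estimates mentioned above, and the resulting Biot coefficient alpha = 1 - K_drained / K_s, can be sketched directly from volume fractions and mineral bulk moduli. The mineral set, moduli, fractions, and drained modulus below are generic textbook-style values for illustration, not the Cobourg Limestone data; the Hashin-Shtrikman bounds used in the paper are not shown.

    import numpy as np

    minerals = {           # bulk modulus in GPa : volume fraction (hypothetical)
        "calcite":  (70.0, 0.60),
        "dolomite": (95.0, 0.25),
        "quartz":   (37.0, 0.10),
        "clay":     (25.0, 0.05),
    }
    K = np.array([k for k, _ in minerals.values()])
    f = np.array([v for _, v in minerals.values()])

    K_voigt = np.sum(f * K)              # upper bound (iso-strain average)
    K_reuss = 1.0 / np.sum(f / K)        # lower bound (iso-stress average)
    K_hill = 0.5 * (K_voigt + K_reuss)   # Voigt-Reuss-Hill estimate for the solid-phase modulus K_s

    K_drained = 20.0                     # GPa, hypothetical drained bulk modulus from triaxial tests
    for name, Ks in [("Voigt", K_voigt), ("Reuss", K_reuss), ("Hill", K_hill)]:
        print(f"{name}: K_s = {Ks:5.1f} GPa, Biot alpha = {1.0 - K_drained / Ks:.3f}")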
Multi-objects recognition for distributed intelligent sensor networks
NASA Astrophysics Data System (ADS)
He, Haibo; Chen, Sheng; Cao, Yuan; Desai, Sachi; Hohil, Myron E.
2008-04-01
This paper proposes an innovative approach for multi-objects recognition for homeland security and defense based intelligent sensor networks. Unlike the conventional way of information analysis, data mining in such networks is typically characterized with high information ambiguity/uncertainty, data redundancy, high dimensionality and real-time constraints. Furthermore, since a typical military-based network normally includes multiple mobile sensor platforms, ground forces, fortified tanks, combat flights, and other resources, it is critical to develop intelligent data mining approaches to fuse different information resources to understand dynamic environments, to support decision making processes, and finally to achieve the goals. This paper aims to address these issues with a focus on multi-objects recognition. Instead of classifying a single object as in the traditional image classification problems, the proposed method can automatically learn multiple objectives simultaneously. Image segmentation techniques are used to identify the interesting regions in the field, which correspond to multiple objects such as soldiers or tanks. Since different objects will come with different feature sizes, we propose a feature scaling method to represent each object in the same number of dimensions. This is achieved by linear/nonlinear scaling and sampling techniques. Finally, support vector machine (SVM) based learning algorithms are developed to learn and build the associations for different objects, and such knowledge will be adaptively accumulated for objects recognition in the testing stage. We test the effectiveness of proposed method in different simulated military environments.
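The "scale features to a common dimension, then classify with an SVM" idea can be sketched as follows. Linear interpolation to a fixed length of 64 stands in for the paper's linear/nonlinear scaling and sampling techniques, and the two synthetic object classes and their features are purely illustrative.

    import numpy as np
    from sklearn.svm import SVC

    def rescale_features(features, target_len=64):
        """Linearly resample a 1D feature vector of arbitrary length to target_len."""
        x_old = np.linspace(0.0, 1.0, len(features))
        x_new = np.linspace(0.0, 1.0, target_len)
        return np.interp(x_new, x_old, features)

    rng = np.random.default_rng(9)
    X, y = [], []
    for label, base_freq in enumerate((2.0, 5.0)):           # two hypothetical object classes
        for _ in range(50):
            n = rng.integers(40, 120)                        # region-dependent feature length
            raw = np.sin(base_freq * np.linspace(0, 2 * np.pi, n)) + 0.2 * rng.normal(size=n)
            X.append(rescale_features(raw))
            y.append(label)
    X, y = np.array(X), np.array(y)
    perm = rng.permutation(len(X))                           # shuffle before the train/test split
    X, y = X[perm], y[perm]

    clf = SVC(kernel="rbf").fit(X[:80], y[:80])
    print("held-out accuracy:", clf.score(X[80:], y[80:]))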
NASA Astrophysics Data System (ADS)
Dai, Xiaoyu; Haussener, Sophia
2018-02-01
A multi-scale methodology for the radiative transfer analysis of heterogeneous media composed of morphologically-complex components on two distinct scales is presented. The methodology incorporates the exact morphology at the various scales and utilizes volume-averaging approaches with the corresponding effective properties to couple the scales. At the continuum level, the volume-averaged coupled radiative transfer equations are solved utilizing (i) effective radiative transport properties obtained by direct Monte Carlo simulations at the pore level, and (ii) averaged bulk material properties obtained at the particle level by Lorenz-Mie theory or discrete dipole approximation calculations. This model is applied to a soot-contaminated snow layer, and is experimentally validated with reflectance measurements of such layers. A quantitative and decoupled understanding of the morphological effect on the radiative transport is achieved, and a significant influence of the dual-scale morphology on the macroscopic optical behavior is observed. Our results show that with a small amount of soot particles, of the order of 1 ppb in volume fraction, the reduction in reflectance of a snow layer with large ice grains can reach up to 77% (at a wavelength of 0.3 μm). Soot impurities modeled as compact agglomerates yield a 2-3% lower reduction of the reflectance in a thick snow layer compared to snow with soot impurities modeled as chain-like agglomerates. Soot impurities modeled as equivalent spherical particles underestimate the reflectance reduction by 2-8%. This study implies that the morphology of the heterogeneities in a medium significantly affects the macroscopic optical behavior and, specifically for soot-contaminated snow, indicates the non-negligible role of soot in the absorption behavior of snow layers. The approach can equally be used in technical applications for the assessment and optimization of optical performance in multi-scale media.
Hyder, Adnan A; Allen, Katharine A; Peters, David H; Chandran, Aruna; Bishai, David
2013-01-01
The growing burden of road traffic injuries, which kill over 1.2 million people yearly, falls mostly on low- and middle-income countries (LMICs). Despite this, evidence generation on the effectiveness of road safety interventions in LMIC settings remains scarce. This paper explores a scientific approach for evaluating road safety programmes in LMICs and introduces such a road safety multi-country initiative, the Road Safety in 10 Countries Project (RS-10). By building on existing evaluation frameworks, we develop a scientific approach for evaluating large-scale road safety programmes in LMIC settings. This also draws on '13 lessons' of large-scale programme evaluation: defining the evaluation scope; selecting study sites; maintaining objectivity; developing an impact model; utilising multiple data sources; using multiple analytic techniques; maximising external validity; ensuring an appropriate time frame; the importance of flexibility and a stepwise approach; continuous monitoring; providing feedback to implementers, policy-makers; promoting the uptake of evaluation results; and understanding evaluation costs. The use of relatively new approaches for evaluation of real-world programmes allows for the production of relevant knowledge. The RS-10 project affords an important opportunity to scientifically test these approaches for a real-world, large-scale road safety evaluation and generate new knowledge for the field of road safety.
Lee, Barrett A.; Reardon, Sean F.; Firebaugh, Glenn; Farrell, Chad R.; Matthews, Stephen A.; O'Sullivan, David
2014-01-01
The census tract-based residential segregation literature rests on problematic assumptions about geographic scale and proximity. We pursue a new tract-free approach that combines explicitly spatial concepts and methods to examine racial segregation across egocentric local environments of varying size. Using 2000 census data for the 100 largest U.S. metropolitan areas, we compute a spatially modified version of the information theory index H to describe patterns of black-white, Hispanic-white, Asian-white, and multi-group segregation at different scales. The metropolitan structural characteristics that best distinguish micro-segregation from macro-segregation for each group combination are identified, and their effects are decomposed into portions due to racial variation occurring over short and long distances. A comparison of our results to those from tract-based analyses confirms the value of the new approach. PMID:25324575
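A minimal sketch of the aspatial multi-group information theory index H, computed from a units-by-groups count table, is shown below. The paper's spatially modified version, which replaces tracts with egocentric local environments of varying radius, is not reproduced here; the count matrix is a toy example.

```python
import numpy as np

def entropy(proportions):
    """Shannon entropy of a composition, ignoring zero shares."""
    p = np.asarray(proportions, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def information_theory_H(counts):
    """Multi-group information theory index H from a units-by-groups count
    matrix: 1 minus the population-weighted average of local entropies
    relative to the metro-wide entropy."""
    counts = np.asarray(counts, dtype=float)
    unit_totals = counts.sum(axis=1)
    total = counts.sum()
    E = entropy(counts.sum(axis=0) / total)                      # metro-wide entropy
    E_units = np.array([entropy(row / row.sum()) for row in counts])
    return 1.0 - np.sum(unit_totals * E_units) / (total * E)

# Toy example: 3 local environments x 4 racial/ethnic groups.
counts = [[800, 100,  50,  50],
          [100, 700, 150,  50],
          [300, 300, 200, 200]]
print(f"H = {information_theory_H(counts):.3f}")   # 0 = no segregation, 1 = complete
```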
NASA Astrophysics Data System (ADS)
Zhao, An; Jin, Ning-de; Ren, Ying-yu; Zhu, Lei; Yang, Xia
2016-01-01
In this article we apply an approach to identify oil-gas-water three-phase flow patterns in a vertical upward 20 mm inner-diameter pipe based on conductance fluctuation signals. We use the approach to analyse signals with long-range correlations by decomposing the signal increment series into magnitude and sign series and extracting their scaling properties. We find that the magnitude series relates to nonlinear properties of the original time series, whereas the sign series relates to the linear properties. The research shows that the oil-gas-water three-phase flows (slug flow, churn flow, bubble flow) can be classified by a combination of the scaling exponents of the magnitude and sign series. This study provides a new way of characterising the linear and nonlinear properties embedded in oil-gas-water three-phase flows.
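The decomposition itself is straightforward to sketch. The code below splits an increment series into magnitude and sign series and estimates a scaling exponent for each with a first-order detrended fluctuation analysis; the synthetic signal and scale range are placeholders for the measured conductance signals.

```python
import numpy as np

def magnitude_sign_series(x):
    """Split the increment series of x into magnitude and sign series."""
    dx = np.diff(np.asarray(x, dtype=float))
    return np.abs(dx), np.sign(dx)

def dfa_exponent(series, scales):
    """First-order detrended fluctuation analysis: slope of log F(s) vs log s."""
    y = np.cumsum(series - np.mean(series))            # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)                # local linear trend
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        F.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

# Toy signal standing in for a conductance fluctuation record.
rng = np.random.default_rng(1)
signal = np.cumsum(rng.standard_normal(5000))
mag, sgn = magnitude_sign_series(signal)
scales = np.array([16, 32, 64, 128, 256])
print("alpha(magnitude) =", round(dfa_exponent(mag, scales), 3))
print("alpha(sign)      =", round(dfa_exponent(sgn, scales), 3))
```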
Wafer integrated micro-scale concentrating photovoltaics
NASA Astrophysics Data System (ADS)
Gu, Tian; Li, Duanhui; Li, Lan; Jared, Bradley; Keeler, Gordon; Miller, Bill; Sweatt, William; Paap, Scott; Saavedra, Michael; Das, Ujjwal; Hegedus, Steve; Tauke-Pedretti, Anna; Hu, Juejun
2017-09-01
Recent development of a novel micro-scale PV/CPV technology is presented. The Wafer Integrated Micro-scale PV approach (WPV) seamlessly integrates multijunction micro-cells with a multi-functional silicon platform that provides optical micro-concentration, hybrid photovoltaics, and mechanical micro-assembly. The wafer-embedded micro-concentrating elements are shown to considerably improve the concentration-acceptance-angle product, potentially leading to dramatically reduced module materials and fabrication costs, sufficient angular tolerance for low-cost trackers, and an ultra-compact optical architecture, which makes the WPV module compatible with commercial flat panel infrastructures. The PV/CPV hybrid architecture further allows the collection of both direct and diffuse sunlight, thus extending the geographic and market domains for cost-effective PV system deployment. The WPV approach can potentially benefit from both the high performance of multijunction cells and the low cost of flat plate Si PV systems.
Foundations of modelling of nonequilibrium low-temperature plasmas
NASA Astrophysics Data System (ADS)
Alves, L. L.; Bogaerts, A.; Guerra, V.; Turner, M. M.
2018-02-01
This work explains the need for plasma models, introduces arguments for choosing the type of model that better fits the purpose of each study, and presents the basics of the most common nonequilibrium low-temperature plasma models and the information available from each one, along with an extensive list of references for complementary in-depth reading. The paper presents the following models, organised according to the level of multi-dimensional description of the plasma: kinetic models, based on either a statistical particle-in-cell/Monte-Carlo approach or the solution to the Boltzmann equation (in the latter case, special focus is given to the description of the electron kinetics); multi-fluid models, based on the solution to the hydrodynamic equations; global (spatially-average) models, based on the solution to the particle and energy rate-balance equations for the main plasma species, usually including a very complete reaction chemistry; mesoscopic models for plasma-surface interaction, adopting either a deterministic approach or a stochastic dynamical Monte-Carlo approach. For each plasma model, the paper puts forward the physics context, introduces the fundamental equations, presents advantages and limitations, also from a numerical perspective, and illustrates its application with some examples. Whenever pertinent, the interconnection between models is also discussed, in view of multi-scale hybrid approaches.
Increasing CAD system efficacy for lung texture analysis using a convolutional network
NASA Astrophysics Data System (ADS)
Tarando, Sebastian Roberto; Fetita, Catalin; Faccinetto, Alex; Brillet, Pierre-Yves
2016-03-01
The infiltrative lung diseases are a class of irreversible, non-neoplastic lung pathologies requiring regular follow-up with CT imaging. Quantifying the evolution of the patient status imposes the development of automated classification tools for lung texture. For the large majority of CAD systems, such classification relies on a two-dimensional analysis of axial CT images. In a previously developed CAD system, we proposed a fully-3D approach exploiting a multi-scale morphological analysis which showed good performance in detecting diseased areas, but with the major drawback of sometimes overestimating the pathological areas and mixing different types of lung patterns. This paper proposes a combination of the existing CAD system with the classification outcome provided by a convolutional network, specifically tuned, in order to increase the specificity of the classification and the confidence in the diagnosis. The advantage of using a deep learning approach is a better regularization of the classification output (because of a deeper insight into a given pathological class over a large series of samples) where the previous system is over-sensitive due to the multi-scale response to patient-specific, localized patterns. In a preliminary evaluation, the combined approach was tested on a 10-patient database of various lung pathologies, showing a sharp increase in true detections.
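As a rough illustration of the kind of convolutional classifier involved, the PyTorch sketch below defines a small network for fixed-size 2-D texture patches and runs a forward pass on random data. The architecture, patch size and class count are assumptions; the paper's specifically tuned network is not described here.

```python
import torch
import torch.nn as nn

class TexturePatchCNN(nn.Module):
    """Small convolutional classifier for fixed-size lung texture patches.
    The architecture is illustrative, not the network described in the paper."""
    def __init__(self, n_classes=3, patch_size=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # after two 2x2 poolings the spatial size is patch_size // 4
        self.classifier = nn.Linear(32 * (patch_size // 4) ** 2, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, start_dim=1))

# Forward pass on a dummy batch of 8 single-channel 32x32 CT patches.
model = TexturePatchCNN()
logits = model(torch.randn(8, 1, 32, 32))
print(logits.shape)   # torch.Size([8, 3])
```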
NASA Astrophysics Data System (ADS)
Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen
2018-01-01
Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by selectively fusing valued information from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, we use several techniques to construct a data-driven model of industrial process parameters from mechanical vibration and acoustic frequency spectra, based on the selective fusion of multi-condition samples and multi-source features. A multi-layer SEN (MLSEN) strategy is used to simulate the domain expert cognitive process. A genetic algorithm and kernel partial least squares are used to construct the inside-layer SEN sub-model based on each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine the outputs of the inside-layer SEN sub-models. Then, the outside-layer SEN is constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, thereby realizing a selective information fusion process based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.
NASA Astrophysics Data System (ADS)
Chatterjee, Subhasri; Das, Nandan K.; Kumar, Satish; Mohapatra, Sonali; Pradhan, Asima; Panigrahi, Prasanta K.; Ghosh, Nirmalya
2013-02-01
Multi-resolution analysis on the spatial refractive index inhomogeneities in the connective tissue regions of human cervix reveals clear signature of multifractality. We have thus developed an inverse analysis strategy for extraction and quantification of the multifractality of spatial refractive index fluctuations from the recorded light scattering signal. The method is based on Fourier domain pre-processing of light scattering data using Born approximation, and its subsequent analysis through Multifractal Detrended Fluctuation Analysis model. The method has been validated on several mono- and multi-fractal scattering objects whose self-similar properties are user controlled and known a-priori. Following successful validation, this approach has initially been explored for differentiating between different grades of precancerous human cervical tissues.
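A compact sketch of multifractal detrended fluctuation analysis (MFDFA) is shown below: it computes q-order fluctuation functions F_q(s) and the generalized Hurst exponents h(q), whose variation with q is the signature of multifractality. The white-noise test series and parameter choices are placeholders, and the Fourier-domain Born-approximation pre-processing described in the abstract is not reproduced.

```python
import numpy as np

def mfdfa_hq(series, scales, qs, order=1):
    """Multifractal DFA: generalized Hurst exponents h(q) from the q-order
    fluctuation functions F_q(s). A q-dependent h(q) signals multifractality."""
    y = np.cumsum(np.asarray(series, float) - np.mean(series))   # integrated profile
    Fq = np.zeros((len(qs), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        # variance of the detrended profile in each segment
        var = np.array([np.mean((seg - np.polyval(np.polyfit(t, seg, order), t)) ** 2)
                        for seg in segs])
        for i, q in enumerate(qs):
            if q == 0:
                Fq[i, j] = np.exp(0.5 * np.mean(np.log(var)))
            else:
                Fq[i, j] = np.mean(var ** (q / 2.0)) ** (1.0 / q)
    return np.array([np.polyfit(np.log(scales), np.log(Fq[i]), 1)[0]
                     for i in range(len(qs))])

# Toy series standing in for refractive-index fluctuations
# (white noise: h(q) should stay close to 0.5 for all q).
rng = np.random.default_rng(2)
series = rng.standard_normal(8192)
qs = [-4, -2, 2, 4]
hq = mfdfa_hq(series, scales=[16, 32, 64, 128, 256, 512], qs=qs)
print(dict(zip(qs, np.round(hq, 3))))
```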
NASA Astrophysics Data System (ADS)
Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.
2015-12-01
Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we contend that creating believable soil carbon predictions requires a robust, transparent, and community-available benchmarking framework. I will present an ILAMB evaluation of several of the above-mentioned approaches in ACME, and attempt to motivate community adoption of this evaluation approach.
This paper proposes a general procedure to link meteorological data with air quality models, such as U.S. EPA's Models-3 Community Multi-scale Air Quality (CMAQ) modeling system. CMAQ is intended to be used for studying multi-scale (urban and regional) and multi-pollutant (ozon...
Understanding Prairie Fen Hydrology - a Hierarchical Multi-Scale Groundwater Modeling Approach
NASA Astrophysics Data System (ADS)
Sampath, P.; Liao, H.; Abbas, H.; Ma, L.; Li, S.
2012-12-01
Prairie fens provide critical habitat to more than 50 rare species and significantly contribute to the biodiversity of the upper Great Lakes region. The sustainability of these globally unique ecosystems, however, requires that they be fed by a steady supply of pristine, calcareous groundwater. Understanding the hydrology that supports the existence of such fens is essential in preserving these valuable habitats. This research uses process-based multi-scale groundwater modeling for this purpose. Two fen-sites, MacCready Fen and Ives Road Fen, in Southern Michigan were systematically studied. A hierarchy of nested steady-state models was built for each fen-site to capture the system's dynamics at spatial scales ranging from the regional groundwater-shed to the local fens. The models utilize high-resolution Digital Elevation Models (DEM), National Hydrologic Datasets (NHD), a recently-assembled water-well database, and results from a state-wide groundwater mapping project to represent the complex hydro-geological and stress framework. The modeling system simulates both shallow glacial and deep bedrock aquifers as well as the interaction between surface water and groundwater. Aquifer heterogeneities were explicitly simulated with multi-scale transition probability geo-statistics. A two-way hydraulic head feedback mechanism was set up between the nested models, such that the parent models provided boundary conditions to the child models, and in turn the child models provided local information to the parent models. A hierarchical mass budget analysis was performed to estimate the seepage fluxes at the surface water/groundwater interfaces and to assess the relative importance of the processes at multiple scales that contribute water to the fens. The models were calibrated using observed base-flows at stream gauging stations and/or static water levels at wells. Three-dimensional particle tracking was used to predict the sources of water to the fens. We observed from the multi-scale simulations that the water system that supports the fens is a much larger, more connected, and more complex one than expected. The water in the fen can be traced back to a network of sources, including lakes and wetlands at different elevations, which are connected to a regional mound through a "cascade delivery mechanism". This "master recharge area" is the ultimate source of water not only to the fens in its vicinity, but also for many major rivers and aquifers. The implication of this finding is that prairie fens must be managed as part of a much larger, multi-scale groundwater system and we must consider protection of the shorter and long-term water sources. This will require a fundamental reassessment of our current approach to fen conservation, which is primarily based on protection of individual fens and their immediate surroundings. Clearly, in the future we must plan for conservation of the broad recharge areas and the multiple fen complexes they support.
Multi-scale seismic tomography of the Merapi-Merbabu volcanic complex, Indonesia
NASA Astrophysics Data System (ADS)
Mujid Abdullah, Nur; Valette, Bernard; Potin, Bertrand; Ramdhan, Mohamad
2017-04-01
The Merapi-Merbabu volcanic complex is the most active volcano located on Java Island, Indonesia, where the Indian plate subducts beneath the Eurasian plate. We present a preliminary study of a multi-scale seismic tomography of the substructures of the volcanic complex. The main objective of our study is to image the feeding paths of the volcanic complex at an intermediate scale by using data from the dense network (about 5 km spacing) constituted by 53 stations of the French-Indonesian DOMERAPI experiment, complemented by data from the German-Indonesian MERAMEX project (134 stations) and from the Indonesia Tsunami Early Warning System (InaTEWS) stations located in the vicinity of the complex. The inversion was performed using the INSIGHT algorithm, which follows a non-linear least squares approach based on a stochastic description of data and model. In total, 1883 events and 41846 phases (26647 P and 15199 S) have been processed, and a two-scale approach was adopted. The model obtained at regional scale is consistent with previous studies. We selected the most reliable regional model as a prior model for the local tomography performed with a variant of the INSIGHT code. The algorithm of this code is based on the fact that inverting differences of data while transporting the errors in probability is equivalent to inverting the initial data while introducing specific correlation terms in the data covariance matrix. The local tomography provides images of the substructure of the volcanic complex with sufficiently good resolution to allow identification of a probable magma chamber at about 20 km.
Hierarchical algorithms for modeling the ocean on hierarchical architectures
NASA Astrophysics Data System (ADS)
Hill, C. N.
2012-12-01
This presentation will describe an approach to using accelerator/co-processor technology that maps hierarchical, multi-scale modeling techniques to an underlying hierarchical hardware architecture. The focus of this work is on making effective use of both CPU and accelerator/co-processor parts of a system, for large scale ocean modeling. In the work, a lower resolution basin scale ocean model is locally coupled to multiple, "embedded", limited area higher resolution sub-models. The higher resolution models execute on co-processor/accelerator hardware and do not interact directly with other sub-models. The lower resolution basin scale model executes on the system CPU(s). The result is a multi-scale algorithm that aligns with hardware designs in the co-processor/accelerator space. We demonstrate this approach being used to substitute explicit process models for standard parameterizations. Code for our sub-models is implemented through a generic abstraction layer, so that we can target multiple accelerator architectures with different programming environments. We will present two application and implementation examples. One uses the CUDA programming environment and targets GPU hardware. This example employs a simple non-hydrostatic two dimensional sub-model to represent vertical motion more accurately. The second example uses a highly threaded three-dimensional model at high resolution. This targets a MIC/Xeon Phi like environment and uses sub-models as a way to explicitly compute sub-mesoscale terms. In both cases the accelerator/co-processor capability provides extra compute cycles that allow improved model fidelity for little or no extra wall-clock time cost.
A multi-scale study of the adsorption of lanthanum on the (110) surface of tungsten
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samin, Adib J.; Zhang, Jinsuo
In this study, we utilize a multi-scale approach to studying lanthanum adsorption on the (110) plane of tungsten. The energy of the system is described from density functional theory calculations within the framework of the cluster expansion method. It is found that including two-body figures up to the sixth nearest neighbor yielded a reasonable agreement with density functional theory calculations as evidenced by the reported cross validation score. The results indicate that the interaction between the adsorbate atoms in the adlayer is important and cannot be ignored. The parameterized cluster expansion expression is used in a lattice gas Monte Carlo simulation in the grand canonical ensemble at 773 K and the adsorption isotherm is recorded. Implications of the obtained results for the pyroprocessing application are discussed.
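To illustrate the final step of this workflow, the sketch below runs a grand-canonical lattice-gas Monte Carlo simulation at 773 K on a 2-D periodic lattice. A single nearest-neighbour pair interaction stands in for the paper's cluster expansion (which includes two-body figures up to the sixth nearest neighbour), and the interaction energy, chemical potential and lattice size are illustrative placeholders.

```python
import numpy as np

def gcmc_lattice_gas(L=32, steps=200_000, T=773.0, mu=-0.05, eps=-0.02, seed=0):
    """Grand-canonical Monte Carlo for a lattice gas on an L x L periodic lattice.
    A single nearest-neighbour pair energy eps (eV) stands in for a full cluster
    expansion; mu is the chemical potential (eV), T the temperature in kelvin."""
    kB = 8.617e-5                        # Boltzmann constant, eV/K
    rng = np.random.default_rng(seed)
    occ = np.zeros((L, L), dtype=int)    # 0 = empty site, 1 = adsorbed atom

    def n_neighbors(i, j):
        return (occ[(i + 1) % L, j] + occ[(i - 1) % L, j]
                + occ[i, (j + 1) % L] + occ[i, (j - 1) % L])

    coverage_samples = []
    for step in range(steps):
        i, j = rng.integers(0, L, size=2)
        dn = 1 - 2 * occ[i, j]                        # +1 insertion, -1 deletion
        dE = dn * (eps * n_neighbors(i, j) - mu)      # change in grand potential
        if dE <= 0 or rng.random() < np.exp(-dE / (kB * T)):
            occ[i, j] += dn
        if step > steps // 2 and step % 100 == 0:     # sample after burn-in
            coverage_samples.append(occ.mean())
    return np.mean(coverage_samples)

print(f"mean coverage at 773 K: {gcmc_lattice_gas():.3f}")
```

Sweeping the chemical potential and recording the mean coverage at each value would trace out an adsorption isotherm analogous to the one reported in the study.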
NASA Astrophysics Data System (ADS)
Kebede, Abiy S.; Nicholls, Robert J.; Allan, Andrew; Arto, Inaki; Cazcarro, Ignacio; Fernandes, Jose A.; Hill, Chris T.; Hutton, Craig W.; Kay, Susan; Lawn, Jon; Lazar, Attila N.; Whitehead, Paul W.
2017-04-01
Coastal deltas are home for over 500 million people globally, and they have been identified as one of the most vulnerable coastal environments during the 21st century. They are susceptible to multiple climatic (e.g., sea-level rise, storm surges, change in temperature and precipitation) and socio-economic (e.g., human-induced subsidence, population and urbanisation changes, GDP growth) drivers of change. These drivers also operate at multiple scales, ranging from local to global and short- to long-term. This highlights the complex challenges deltas face in terms of both their long-term sustainability as well as the well-being of their residents and the health of ecosystems that support the livelihood of large (often very poor) population under uncertain changing conditions. A holistic understanding of these challenges and the potential impacts of future climate and socio-economic changes is central for devising robust adaptation policies. Scenario analysis has long been identified as a strategic management tool to explore future climate change and its impacts for supporting robust decision-making under uncertainty. This work presents the overall scenario framework, methodology, and processes adopted for the development of scenarios in the DECCMA* project. DECCMA is analysing the future of three deltas in South Asia and West Africa: (i) the Ganges-Brahmaputra-Meghna (GBM) delta (Bangladesh/India), (ii) the Mahanadi delta (India), and (iii) the Volta delta (Ghana). This includes comparisons between these three deltas. Hence, the scenario framework comprises a multi-scale hybrid approach, with six levels of scenario considerations: (i) global (climate change, e.g., sea-level rise, temperature change; and socio-economic assumptions, e.g., population and urbanisation changes, GDP growth); (ii) regional catchments (e.g., river flow modelling), (iii) regional seas (e.g., fisheries modelling), (iv) regional politics (e.g., transboundary disputes), (v) national (e.g., socio-economic factors), and (vi) delta-scale (e.g., future adaptation and migration policies) scenarios. The framework includes and combines expert-based and participatory approaches and provides improved specification of the role of scenarios to analyse the future state of adaptation and migration across the three deltas. It facilitates the development of appropriate and consistent endogenous and exogenous scenario futures: (i) at the delta-scale, (ii) across all deltas, and (iii) with wider climate change, environmental change, and adaptation & migration research. Key words: Coastal deltas, sea-level rise, migration and adaptation, multi-scale scenarios, participatory approach *DECCMA (Deltas, Vulnerability & Climate Change: Migration & Adaptation) project is part of the Collaborative ADAPTATION Research Initiative in Africa and Asia (CARIAA), with financial support from the UK Government's Department for International Development (DFID) and the International Development Research Centre (IDRC), Canada.
Alternative transitions between existing representations in multi-scale maps
NASA Astrophysics Data System (ADS)
Dumont, Marion; Touya, Guillaume; Duchêne, Cécile
2018-05-01
Map users may have difficulty achieving multi-scale navigation tasks, as cartographic objects may have various representations across scales. We assume that adding intermediate representations could be one way to reduce the differences between existing representations and to ease the transitions across scales. We consider an existing multi-scale map covering the range from 1:25k to 1:100k. Based on hypotheses about the design of intermediate representations, we build custom multi-scale maps with alternative transitions. We will next conduct a user evaluation to compare the efficiency of these alternative maps for multi-scale navigation. This paper discusses the hypotheses and production process of these alternative maps.
Simulations of ecosystem hydrological processes using a unified multi-scale model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiaofan; Liu, Chongxuan; Fang, Yilin
2015-01-01
This paper presents a unified multi-scale model (UMSM) that we developed to simulate hydrological processes in an ecosystem containing both surface water and groundwater. The UMSM approach modifies the Navier–Stokes equation by adding a Darcy force term to formulate a single set of equations to describe fluid momentum and uses a generalized equation to describe fluid mass balance. The advantage of the approach is that the single set of equations can describe hydrological processes in both surface water and groundwater, where different models are traditionally required to simulate fluid flow. This feature of the UMSM significantly facilitates modelling of hydrological processes in ecosystems, especially at locations where soil/sediment may be frequently inundated and drained in response to precipitation, regional hydrological and climate changes. In this paper, the UMSM was benchmarked using WASH123D, a model commonly used for simulating coupled surface water and groundwater flow. The Disney Wilderness Preserve (DWP) site at Kissimmee, Florida, where active field monitoring and measurements are ongoing to understand hydrological and biogeochemical processes, was then used as an example to illustrate the UMSM modelling approach. The simulation results demonstrated that the DWP site is subject to frequent changes in soil saturation, the geometry and volume of surface water bodies, and groundwater and surface water exchange. All the hydrological phenomena in surface water and groundwater components, including inundation and draining, river bank flow, groundwater table change, soil saturation, hydrological interactions between groundwater and surface water, and the migration of surface water and groundwater interfaces, can be simultaneously simulated using the UMSM. Overall, the UMSM offers a cross-scale approach that is particularly suitable for simulating coupled surface and ground water flow in ecosystems with strong surface water and groundwater interactions.
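The abstract's central idea, a single momentum equation valid in both open water and porous media, can be sketched as follows; the notation and exact form below are assumptions for illustration and are not taken from the paper (the paper's generalized mass-balance equation is not reproduced).

```latex
% Darcy-modified momentum balance (illustrative sketch, not the UMSM's exact form):
% the resistance term -(\mu/k)\,\mathbf{u} is negligible in open water (k \to \infty)
% and recovers Darcy-type behaviour in the porous subsurface (small k).
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right)
  = -\nabla p + \mu\,\nabla^{2}\mathbf{u} - \frac{\mu}{k}\,\mathbf{u} + \rho\,\mathbf{g}
```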
Sookhak Lari, Kaveh; Johnston, Colin D; Rayner, John L; Davis, Greg B
2018-03-05
Remediation of subsurface systems, including groundwater, soil and soil gas, contaminated with light non-aqueous phase liquids (LNAPLs) is challenging. Field-scale pilot trials of multi-phase remediation were undertaken at a site to determine the effectiveness of recovery options. Sequential LNAPL skimming and vacuum-enhanced skimming, with and without water table drawdown, were trialled over 78 days; in total extracting over 5 m³ of LNAPL. For the first time, a multi-component simulation framework (including the multi-phase multi-component code TMVOC-MP and processing codes) was developed and applied to simulate the broad range of multi-phase remediation and recovery methods used in the field trials. This framework was validated against the sequential pilot trials by comparing predicted and measured LNAPL mass removal rates and compositional changes. The framework was tested on both a Cray supercomputer and a cluster. Simulations mimicked trends in LNAPL recovery rates (from 0.14 to 3 mL/s) across all remediation techniques, each operating over periods of 4-14 days over the 78-day trial. The code also approximated order-of-magnitude compositional changes of hazardous chemical concentrations in extracted gas during vacuum-enhanced recovery. The verified framework enables longer term prediction of the effectiveness of remediation approaches, allowing better determination of remediation endpoints and long-term risks. Copyright © 2017 Commonwealth Scientific and Industrial Research Organisation. Published by Elsevier B.V. All rights reserved.
Correlations of stock price fluctuations under multi-scale and multi-threshold scenarios
NASA Astrophysics Data System (ADS)
Sui, Guo; Li, Huajiao; Feng, Sida; Liu, Xueyong; Jiang, Meihui
2018-01-01
The multi-scale method is widely used in analyzing time series of financial markets and it can provide market information for different economic entities who focus on different periods. Through constructing multi-scale networks of price fluctuation correlation in the stock market, we can detect the topological relationship between each time series. Previous research has not addressed the problem that the original fluctuation correlation networks are fully connected networks and that more information exists within these networks than is currently being utilized. Here we use listed coal companies as a case study. First, we decompose the original stock price fluctuation series into different time scales. Second, we construct the stock price fluctuation correlation networks at different time scales. Third, we delete the edges of the network based on thresholds and analyze the network indicators. Through combining the multi-scale method with the multi-threshold method, we bring to light the implicit information of fully connected networks.
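A minimal sketch of the thresholding step is given below: it builds a fully connected correlation network from log-return series and reports how the edge density falls as the threshold rises. The synthetic price series stand in for the listed coal companies, and the paper's preceding decomposition of each series into time scales is omitted.

```python
import numpy as np

def threshold_networks(price_series, thresholds):
    """Build fluctuation-correlation networks at several thresholds.
    For each threshold, keep edges whose |correlation| exceeds it and
    report the resulting adjacency matrix and edge density."""
    returns = np.diff(np.log(np.asarray(price_series, dtype=float)), axis=1)
    corr = np.corrcoef(returns)                  # fully connected weighted network
    n = corr.shape[0]
    results = {}
    for th in thresholds:
        adj = (np.abs(corr) >= th) & ~np.eye(n, dtype=bool)
        density = adj.sum() / (n * (n - 1))
        results[th] = (adj, density)
    return corr, results

# Toy data: 6 hypothetical stock price series over 500 trading days.
rng = np.random.default_rng(3)
common = rng.standard_normal(501)                # shared market factor
prices = np.exp(np.cumsum(0.6 * common + 0.8 * rng.standard_normal((6, 501)), axis=1))
_, nets = threshold_networks(prices, thresholds=[0.2, 0.4, 0.6])
for th, (_, density) in nets.items():
    print(f"threshold {th:.1f}: edge density {density:.2f}")
```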
Cross-scale integration of knowledge for predicting species ranges: a metamodeling framework
Talluto, Matthew V.; Boulangeat, Isabelle; Ameztegui, Aitor; Aubin, Isabelle; Berteaux, Dominique; Butler, Alyssa; Doyon, Frédérik; Drever, C. Ronnie; Fortin, Marie-Josée; Franceschini, Tony; Liénard, Jean; McKenney, Dan; Solarik, Kevin A.; Strigul, Nikolay; Thuiller, Wilfried; Gravel, Dominique
2016-01-01
Aim: Current interest in forecasting changes to species ranges has resulted in a multitude of approaches to species distribution models (SDMs). However, most approaches include only a small subset of the available information, and many ignore smaller-scale processes such as growth, fecundity, and dispersal. Furthermore, different approaches often produce divergent predictions with no simple method to reconcile them. Here, we present a flexible framework for integrating models at multiple scales using hierarchical Bayesian methods. Location: Eastern North America (as an example). Methods: Our framework builds a metamodel that is constrained by the results of multiple sub-models and provides probabilistic estimates of species presence. We applied our approach to a simulated dataset to demonstrate the integration of a correlative SDM with a theoretical model. In a second example, we built an integrated model combining the results of a physiological model with presence-absence data for sugar maple (Acer saccharum), an abundant tree native to eastern North America. Results: For both examples, the integrated models successfully included information from all data sources and substantially improved the characterization of uncertainty. For the second example, the integrated model outperformed the source models with respect to uncertainty when modelling the present range of the species. When projecting into the future, the model provided a consensus view of two models that differed substantially in their predictions. Uncertainty was reduced where the models agreed and was greater where they diverged, providing a more realistic view of the state of knowledge than either source model. Main conclusions: We conclude by discussing the potential applications of our method and its accessibility to applied ecologists. In ideal cases, our framework can be easily implemented using off-the-shelf software. The framework has wide potential for use in species distribution modelling and can drive better integration of multi-source and multi-scale data into ecological decision-making. PMID:27499698
Cross-scale integration of knowledge for predicting species ranges: a metamodeling framework.
Talluto, Matthew V; Boulangeat, Isabelle; Ameztegui, Aitor; Aubin, Isabelle; Berteaux, Dominique; Butler, Alyssa; Doyon, Frédérik; Drever, C Ronnie; Fortin, Marie-Josée; Franceschini, Tony; Liénard, Jean; McKenney, Dan; Solarik, Kevin A; Strigul, Nikolay; Thuiller, Wilfried; Gravel, Dominique
2016-02-01
Current interest in forecasting changes to species ranges has resulted in a multitude of approaches to species distribution models (SDMs). However, most approaches include only a small subset of the available information, and many ignore smaller-scale processes such as growth, fecundity, and dispersal. Furthermore, different approaches often produce divergent predictions with no simple method to reconcile them. Here, we present a flexible framework for integrating models at multiple scales using hierarchical Bayesian methods. Eastern North America (as an example). Our framework builds a metamodel that is constrained by the results of multiple sub-models and provides probabilistic estimates of species presence. We applied our approach to a simulated dataset to demonstrate the integration of a correlative SDM with a theoretical model. In a second example, we built an integrated model combining the results of a physiological model with presence-absence data for sugar maple (Acer saccharum), an abundant tree native to eastern North America. For both examples, the integrated models successfully included information from all data sources and substantially improved the characterization of uncertainty. For the second example, the integrated model outperformed the source models with respect to uncertainty when modelling the present range of the species. When projecting into the future, the model provided a consensus view of two models that differed substantially in their predictions. Uncertainty was reduced where the models agreed and was greater where they diverged, providing a more realistic view of the state of knowledge than either source model. We conclude by discussing the potential applications of our method and its accessibility to applied ecologists. In ideal cases, our framework can be easily implemented using off-the-shelf software. The framework has wide potential for use in species distribution modelling and can drive better integration of multi-source and multi-scale data into ecological decision-making.
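A minimal sketch of the integration idea, under simplifying assumptions, is shown below: presence-absence observations are combined with the output of an independent sub-model (here a made-up "physiological suitability" score) inside a Bayesian logistic metamodel sampled with a plain Metropolis algorithm. This is far simpler than the paper's hierarchical framework, and all data are synthetic.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_posterior(theta, climate, phys_suitability, presence):
    """Logistic metamodel: presence probability combines a climate covariate
    with a physiological sub-model's suitability score; weak normal priors."""
    a, b, c = theta
    p = sigmoid(a + b * climate + c * phys_suitability)
    loglik = np.sum(presence * np.log(p + 1e-12) + (1 - presence) * np.log(1 - p + 1e-12))
    logprior = -0.5 * np.sum(np.asarray(theta) ** 2 / 10.0)   # N(0, sqrt(10)) priors
    return loglik + logprior

def metropolis(logpost, args, n_iter=5000, step=0.1, seed=4):
    rng = np.random.default_rng(seed)
    theta = np.zeros(3)
    current = logpost(theta, *args)
    samples = []
    for _ in range(n_iter):
        proposal = theta + step * rng.standard_normal(3)
        cand = logpost(proposal, *args)
        if np.log(rng.random()) < cand - current:
            theta, current = proposal, cand
        samples.append(theta.copy())
    return np.array(samples)

# Synthetic sites: climate covariate, sub-model suitability, observed presence.
rng = np.random.default_rng(5)
climate = rng.standard_normal(300)
phys = sigmoid(1.5 * climate + rng.standard_normal(300))      # sub-model output
presence = (rng.random(300) < sigmoid(-0.5 + 1.0 * climate + 2.0 * phys)).astype(int)

samples = metropolis(log_posterior, (climate, phys, presence))
print("posterior means (a, b, c):", samples[2500:].mean(axis=0).round(2))
```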
Multi-scale approaches for high-speed imaging and analysis of large neural populations
Ahrens, Misha B.; Yuste, Rafael; Peterka, Darcy S.; Paninski, Liam
2017-01-01
Progress in modern neuroscience critically depends on our ability to observe the activity of large neuronal populations with cellular spatial and high temporal resolution. However, two bottlenecks constrain efforts towards fast imaging of large populations. First, the resulting large video data is challenging to analyze. Second, there is an explicit tradeoff between imaging speed, signal-to-noise, and field of view: with current recording technology we cannot image very large neuronal populations with simultaneously high spatial and temporal resolution. Here we describe multi-scale approaches for alleviating both of these bottlenecks. First, we show that spatial and temporal decimation techniques based on simple local averaging provide order-of-magnitude speedups in spatiotemporally demixing calcium video data into estimates of single-cell neural activity. Second, once the shapes of individual neurons have been identified at fine scale (e.g., after an initial phase of conventional imaging with standard temporal and spatial resolution), we find that the spatial/temporal resolution tradeoff shifts dramatically: after demixing we can accurately recover denoised fluorescence traces and deconvolved neural activity of each individual neuron from coarse scale data that has been spatially decimated by an order of magnitude. This offers a cheap method for compressing this large video data, and also implies that it is possible to either speed up imaging significantly, or to “zoom out” by a corresponding factor to image order-of-magnitude larger neuronal populations with minimal loss in accuracy or temporal resolution. PMID:28771570
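The decimation idea is simple enough to sketch directly: block-average the movie over space and time before demixing. The factors and movie dimensions below are arbitrary choices for illustration.

```python
import numpy as np

def decimate_video(video, spatial_factor=4, temporal_factor=4):
    """Spatial and temporal decimation by simple local averaging.
    video has shape (frames, height, width); dimensions are cropped so they
    divide evenly by the decimation factors."""
    t, h, w = video.shape
    t, h, w = (t // temporal_factor * temporal_factor,
               h // spatial_factor * spatial_factor,
               w // spatial_factor * spatial_factor)
    v = video[:t, :h, :w]
    v = v.reshape(t // temporal_factor, temporal_factor,
                  h // spatial_factor, spatial_factor,
                  w // spatial_factor, spatial_factor)
    return v.mean(axis=(1, 3, 5))

# Toy calcium-imaging movie: 1000 frames of 256 x 256 pixels.
movie = np.random.default_rng(6).random((1000, 256, 256)).astype(np.float32)
small = decimate_video(movie, spatial_factor=4, temporal_factor=4)
print(movie.shape, "->", small.shape)     # (1000, 256, 256) -> (250, 64, 64)
```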
Quantification of pulmonary vessel diameter in low-dose CT images
NASA Astrophysics Data System (ADS)
Rudyanto, Rina D.; Ortiz de Solórzano, Carlos; Muñoz-Barrutia, Arrate
2015-03-01
Accurate quantification of vessel diameter in low-dose Computed Tomography (CT) images is important to study pulmonary diseases, in particular for the diagnosis of vascular diseases and the characterization of morphological vascular remodeling in Chronic Obstructive Pulmonary Disease (COPD). In this study, we objectively compare several vessel diameter estimation methods using a physical phantom. Five solid tubes of differing diameters (from 0.898 to 3.980 mm) were embedded in foam, simulating vessels in the lungs. To measure the diameters, we first extracted the vessels using either of two approaches: vessel enhancement using multi-scale Hessian matrix computation, or explicit segmentation using an intensity threshold. We implemented six methods to quantify the diameter: three estimating diameter as a function of the scale used to calculate the Hessian matrix; two calculating an equivalent diameter from the cross-section area obtained by thresholding the intensity and vesselness response, respectively; and finally, one estimating the diameter of the object using the Full Width at Half Maximum (FWHM). We find that the accuracy of frequently used methods estimating vessel diameter from the multi-scale vesselness filter depends on the range and the number of scales used. Moreover, these methods still yield a significant error margin on the challenging estimation of the smallest diameter (on the order of or below the size of the CT point spread function). Obviously, the performance of the thresholding-based methods depends on the value of the threshold. Finally, we observe that a simple adaptive thresholding approach can achieve a robust and accurate estimation of the smallest vessel diameters.
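The FWHM estimator mentioned in the list of methods can be sketched as follows; the sub-pixel interpolation of the half-maximum crossings and the Gaussian test profile are illustrative choices, not the exact implementation used in the study.

```python
import numpy as np

def fwhm(profile, pixel_spacing=1.0):
    """Full Width at Half Maximum of a 1-D intensity profile across a vessel,
    with linear interpolation of the half-maximum crossings on both flanks."""
    y = np.asarray(profile, dtype=float)
    baseline, peak = y.min(), y.max()
    half = baseline + 0.5 * (peak - baseline)
    above = np.where(y >= half)[0]
    i_left, i_right = above[0], above[-1]
    # interpolate the exact half-maximum crossing positions
    left = i_left - (y[i_left] - half) / (y[i_left] - y[i_left - 1]) if i_left > 0 else i_left
    right = i_right + (y[i_right] - half) / (y[i_right] - y[i_right + 1]) if i_right < len(y) - 1 else i_right
    return (right - left) * pixel_spacing

# Gaussian-blurred vessel profile: for a Gaussian, FWHM = 2*sqrt(2*ln 2)*sigma.
x = np.arange(-20, 21, dtype=float)
sigma = 3.0
profile = np.exp(-x**2 / (2 * sigma**2))
print(f"estimated FWHM: {fwhm(profile, pixel_spacing=0.5):.2f} mm "
      f"(expected {2.355 * sigma * 0.5:.2f} mm)")
```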
Thermooptic two-mode interference device for reconfigurable quantum optic circuits
NASA Astrophysics Data System (ADS)
Sahu, Partha Pratim
2018-06-01
Reconfigurable large-scale integrated quantum optic circuits require compact components capable of accurately manipulating quantum entanglement for quantum communication and information processing applications. Here, a thermooptic two-mode interference (TMI) coupler is introduced as a compact component for the generation of reconfigurable complex multi-photon quantum interference. Both theoretical and experimental approaches are used to demonstrate two-photon and four-photon quantum entanglement manipulated with the thermooptic phase change in the TMI region. Our results demonstrate complex multi-photon quantum interference with high fabrication tolerance and quantum fidelity in a smaller footprint than previous thermooptic Mach-Zehnder implementations.
No-Reference Image Quality Assessment by Wide-Perceptual-Domain Scorer Ensemble Method.
Liu, Tsung-Jung; Liu, Kuan-Hsien
2018-03-01
A no-reference (NR) learning-based approach to assess image quality is presented in this paper. The devised features are extracted from wide perceptual domains, including brightness, contrast, color, distortion, and texture. These features are used to train a model (scorer) which can predict scores. The scorer selection algorithms are utilized to help simplify the proposed system. In the final stage, the ensemble method is used to combine the prediction results from selected scorers. Two multiple-scale versions of the proposed approach are also presented along with the single-scale one. They turn out to have better performances than the original single-scale method. Because of having features from five different domains at multiple image scales and using the outputs (scores) from selected score prediction models as features for multi-scale or cross-scale fusion (i.e., ensemble), the proposed NR image quality assessment models are robust with respect to more than 24 image distortion types. They also can be used on the evaluation of images with authentic distortions. The extensive experiments on three well-known and representative databases confirm the performance robustness of our proposed model.
NASA Astrophysics Data System (ADS)
Taitano, W. T.; Chacón, L.; Simakov, A. N.; Molvig, K.
2015-09-01
In this study, we demonstrate a fully implicit algorithm for the multi-species, multidimensional Rosenbluth-Fokker-Planck equation which is exactly mass-, momentum-, and energy-conserving, and which preserves positivity. Unlike most earlier studies, we base our development on the Rosenbluth (rather than Landau) form of the Fokker-Planck collision operator, which reduces complexity while allowing for an optimal fully implicit treatment. Our discrete conservation strategy employs nonlinear constraints that force the continuum symmetries of the collision operator to be satisfied upon discretization. We converge the resulting nonlinear system iteratively using Jacobian-free Newton-Krylov methods, effectively preconditioned with multigrid methods for efficiency. Single- and multi-species numerical examples demonstrate the advertised accuracy properties of the scheme, and the superior algorithmic performance of our approach. In particular, the discretization approach is numerically shown to be second-order accurate in time and velocity space and to exhibit manifestly positive entropy production. That is, H-theorem behavior is indicated for all the examples we have tested. The solution approach is demonstrated to scale optimally with respect to grid refinement (with CPU time growing linearly with the number of mesh points), and timestep (showing very weak dependence of CPU time with time-step size). As a result, the proposed algorithm delivers several orders-of-magnitude speedup vs. explicit algorithms.
Multi-scale remote sensing of coral reefs
Andréfouët, Serge; Hochberg, E.J.; Chevillon, Christophe; Muller-Karger, Frank E.; Brock, John C.; Hu, Chuanmin
2005-01-01
In this chapter we present how both direct and indirect remote sensing can be integrated to address two major coral reef applications - coral bleaching and assessment of biodiversity. This approach reflects the current non-linear integration of remote sensing for environmental assessment of coral reefs, resulting from a rapid increase in available sensors, processing methods and interdisciplinary collaborations (Andréfouët and Riegl, 2004). Moreover, this approach has greatly benefited from recent collaborations of once independent investigations (e.g., benthic ecology, remote sensing, and numerical modeling).
Andrew Fall; B. Sturtevant; M.-J. Fortin; M. Papaik; F. Doyon; D. Morgan; K. Berninger; C. Messier
2010-01-01
The complexity and multi-scaled nature of forests poses significant challenges to understanding and management. Models can provide useful insights into process and their interactions, and implications of alternative management options. Most models, particularly scientific models, focus on a relatively small set of processes and are designed to operate within a...
A Smart Partnership: Integrating Educational Technology for Underserved Children in India
ERIC Educational Resources Information Center
Charania, Amina; Davis, Niki
2016-01-01
This paper explores the evolution of a large multi-stakeholder partnership that has grown since 2011 to scale deep engagement with learning through technology and decrease the digital divide for thousands of underserved school children in India. Using as its basis a case study of an initiative called integrated approach to technology in education…
A Multi-Dimensional Approach to Measuring News Media Literacy
ERIC Educational Resources Information Center
Vraga, Emily; Tully, Melissa; Kotcher, John E.; Smithson, Anne-Bennett; Broeckelman-Post, Melissa
2015-01-01
Measuring news media literacy is important in order for it to thrive in a variety of educational and civic contexts. This research builds on existing measures of news media literacy and two new scales are presented that measure self-perceived media literacy (SPML) and perceptions of the value of media literacy (VML). Research with a larger sample…
ERIC Educational Resources Information Center
Denny, Joanna Hope; Hallam, Rena; Homer, Karen
2012-01-01
Research Findings: A statewide study of preschool classroom quality was conducted using 3 distinct classroom observation measures in order to inform a statewide quality rating system. Findings suggested that Tennessee preschool classrooms were approaching "good" quality on the Early Childhood Environment Rating Scale-Revised (ECERS-R)…
The impact of climate change on surface level ozone is examined through a multi-scale modeling effort that linked global and regional climate models to drive air quality model simulations. Results are quantified in terms of the Relative Response Factor (RRFE), which es...
Math Snacks: Using Animations and Games to Fill the Gaps in Mathematics
ERIC Educational Resources Information Center
Valdiz, Alfred; Trujillo, Karen; Wiburg, Karin
2013-01-01
Math Snacks animations and support materials were developed for use on the web and mobile technologies to teach ratio, proportion, scale factor, and number line concepts using a multi-modal approach. Included in Math Snacks are: Animations which promote the visualization of a concept image; written lessons which provide cognitive complexity for…
Basin of Mexico: A history of watershed mismanagement
Luis A. Bojorquez Tapia; Exequiel Ezcurra; Marisa Mazari-Hiriart; Salomon Diaz; Paola Gomez; Georgina Alcantar; Daniela Megarejo
2000-01-01
Mexico City Metropolitan Zone (MCMZ) is located within the Basin of Mexico. Because of its large population and demand for natural resources, several authors have questioned the viability of the city, especially in terms of water resources. These are reviewed at the regional and the local scales. It is concluded that a multi-basin management approach is necessary to...
Lin, Yu-Ching; Yu, Nan-Ying; Jiang, Ching-Fen; Chang, Shao-Hsia
2018-06-02
In this paper, we introduce a newly developed multi-scale wavelet model for the interpretation of surface electromyography (SEMG) signals and validate the model's capability to characterize changes in neuromuscular activation in cases with myofascial pain syndrome (MPS) via machine learning methods. The SEMG data collected from normal (N = 30; 27 women, 3 men) and MPS subjects (N = 26; 22 women, 4 men) were adopted for this retrospective analysis. SEMGs were measured from the taut-band loci on both sides of the trapezius muscle on the upper back while each subject conducted a cyclic bilateral backward shoulder extension movement within 1 min. Classification accuracy of the SEMG model in differentiating MPS patients from normal subjects was 77% using template matching and 60% using K-means clustering. Classification consistency between the two machine learning methods was 87% in the normal group and 93% in the MPS group. The 2D feature graphs derived from the proposed multi-scale model revealed distinct patterns between normal subjects and MPS patients. The classification consistency using template matching and K-means clustering suggests the potential of using the proposed model to characterize interference pattern changes induced by MPS. Copyright © 2018. Published by Elsevier Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engel, David W.; Reichardt, Thomas A.; Kulp, Thomas J.
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
A multi-disciplinary approach for the structural monitoring of Cultural Heritages in a seismic area
NASA Astrophysics Data System (ADS)
Fabrizia Buongiorno, Maria; Musacchio, Massimo; Guerra, Ignazio; Porco, Giacinto; Stramondo, Salvatore; Casula, Giuseppe; Caserta, Arrigo; Speranza, Fabio; Doumaz, Fawzi; Giovanna Bianchi, Maria; Luzi, Guido; Ilaria Pannaccione Apa, Maria; Montuori, Antonio; Gaudiosi, Iolanda; Vecchio, Antonio; Gervasi, Anna; Bonali, Elena; Romano, Dolores; Falcone, Sergio; La Piana, Carmelo
2014-05-01
In the recent years, the concepts of seismic risk vulnerability and structural health monitoring have become very important topics in the field of both structural and civil engineering for the identification of appropriate risk indicators and risk assessment methodologies in Cultural Heritages monitoring. The latter, which includes objects, building and sites with historical, architectural and/or engineering relevance, concerns the management, the preservation and the maintenance of the heritages within their surrounding environmental context, in response to climate changes and natural hazards (e.g. seismic, volcanic, landslides and flooding hazards). Within such a framework, the complexity and the great number of variables to be considered require a multi-disciplinary approach including strategies, methodologies and tools able to provide an effective monitoring of Cultural Heritages form both scientific and operational viewpoints. Based on this rationale, in this study, an advanced, technological and operationally-oriented approach is presented and tested, which enables measuring and monitoring Cultural Heritage conservation state and geophysical/geological setting of the area, in order to mitigate the seismic risk of the historical public goods at different spatial scales*. The integration between classical geophysical methods with new emerging sensing techniques enables a multi-depth, multi-resolution, and multi-scale monitoring in both space and time. An integrated system of methodologies, instrumentation and data-processing approaches for non-destructive Cultural Heritage investigations is proposed, which concerns, in detail, the analysis of seismogenetic sources, the geological-geotechnical setting of the area and site seismic effects evaluation, proximal remote sensing techniques (e.g. terrestrial laser scanner, ground-based radar systems, thermal cameras), high-resolution aerial and satellite-based remote sensing methodologies (e.g. aeromagnetic surveys, synthetic aperture radar, optical, multispectral and panchromatic measurements), static and dynamic structural health monitoring analysis (e.g. screening tests with georadar, sonic instruments, sclerometers and optic fibers). The final purpose of the proposed approach is the development of an investigation methodology for short- and long-term Cultural Heritages preservation in response to seismic stress, which has specific features of scalability, modularity and exportability for every possible monitoring configuration. Moreover, it allows gathering useful information to furnish guidelines for Institution and local Administration to plan consolidation actions and therefore prevention activity. Some preliminary results will be presented for the test site of Calabria Region, where some architectural heritages have been properly selected as case studies for monitoring purposes. *The present work is supported and funded by Ministero dell'Università, dell'Istruzione e della Ricerca (MIUR) under the research project PON01-02710 "MASSIMO" - "Monitoraggio in Area Sismica di Sistemi Monumentali".
Large scale modulation of high frequency acoustic waves in periodic porous media.
Boutin, Claude; Rallu, Antoine; Hans, Stephane
2012-12-01
This paper deals with the description of the modulation at large scale of high frequency acoustic waves in gas saturated periodic porous media. High frequencies mean local dynamics at the pore scale and therefore absence of scale separation in the usual sense of homogenization. However, although the pressure is spatially varying in the pores (according to periodic eigenmodes), the mode amplitude can present a large scale modulation, thereby introducing another type of scale separation to which the asymptotic multi-scale procedure applies. The approach is first presented on a periodic network of inter-connected Helmholtz resonators. The equations governing the modulations carried by periodic eigenmodes, at frequencies close to their eigenfrequency, are derived. The number of cells on which the carrying periodic mode is defined is therefore a parameter of the modeling. In a second part, the asymptotic approach is developed for periodic porous media saturated by a perfect gas. Using the "multicells" periodic condition, one obtains the family of equations governing the amplitude modulation at large scale of high frequency waves. The significant difference between modulations of simple and multiple mode are evidenced and discussed. The features of the modulation (anisotropy, width of frequency band) are also analyzed.
Multi-omics and metabolic modelling pipelines: challenges and tools for systems microbiology.
Fondi, Marco; Liò, Pietro
2015-02-01
Integrated -omics approaches are quickly spreading across microbiology research labs, leading to (i) the possibility of detecting previously hidden features of microbial cells like multi-scale spatial organization and (ii) tracing molecular components across multiple cellular functional states. This promises to reduce the knowledge gap between genotype and phenotype and poses new challenges for computational microbiologists. We underline how the capability to unravel the complexity of microbial life will strongly depend on the integration of the huge and diverse amount of information that can be derived today from -omics experiments. In this work, we present opportunities and challenges of multi-omics data integration in current systems biology pipelines. We here discuss which layers of biological information are important for biotechnological and clinical purposes, with a special focus on bacterial metabolism and modelling procedures. A general review of the most recent computational tools for performing large-scale datasets integration is also presented, together with a possible framework to guide the design of systems biology experiments by microbiologists. Copyright © 2015. Published by Elsevier GmbH.
A brain MRI bias field correction method created in the Gaussian multi-scale space
NASA Astrophysics Data System (ADS)
Chen, Mingsheng; Qin, Mingxin
2017-07-01
A pre-processing step is needed to correct for the bias field signal before submitting corrupted MR images to image-processing algorithms. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by the convolution of the inhomogeneous MR image with a two-dimensional Gaussian function. In this Gaussian multi-scale space, the method retrieves the image details from the difference between the original image and the convolution image. Then, it obtains an image whose inhomogeneity is eliminated by the weighted sum of the image details in each layer of the space. Next, the bias field-corrected MR image is retrieved after the γ (gamma) correction, which enhances the contrast and brightness of the inhomogeneity-eliminated MR image. We have tested the approach on T1 MRI and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software have demonstrated superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
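A minimal sketch of the multi-scale detail-recovery idea described above is given below (Python/NumPy). The smoothing scales, the equal layer weights and the gamma value are illustrative assumptions, not the authors' settings.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def correct_bias_field(image, sigmas=(2, 4, 8, 16), weights=None, gamma=0.8):
        """Sketch: recover image details across a Gaussian multi-scale space,
        recombine them with weights, then apply a gamma mapping."""
        img = image.astype(float)
        if weights is None:
            weights = np.ones(len(sigmas)) / len(sigmas)   # equal weights (assumption)
        details = np.zeros_like(img)
        for w, s in zip(weights, sigmas):
            blurred = gaussian_filter(img, sigma=s)        # one layer of the scale space
            details += w * (img - blurred)                 # detail = original - convolution
        detail_img = details - details.min()
        detail_img /= detail_img.max() + 1e-12             # normalise to [0, 1]
        return detail_img ** gamma                         # gamma correction for contrast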
Empirical behavior of a world stock index from intra-day to monthly time scales
NASA Astrophysics Data System (ADS)
Breymann, W.; Lüthi, D. R.; Platen, E.
2009-10-01
Most of the papers that study the distributional and fractal properties of financial instruments focus on stock prices or foreign exchange rates. This typically leads to mixed results concerning the distributions of log-returns and some multi-fractal properties of exchange rates, stock prices, and regional indices. This paper uses a well diversified world stock index as the central object of analysis. Such an index approximates the growth optimal portfolio; as demonstrated under the benchmark approach, it is the ideal reference unit for studying basic securities. When denominating this world index in units of a given currency, one measures the movements of the currency against the entire market. This provides a least disturbed observation of the currency dynamics. In this manner, one can expect to disentangle, e.g., the superposition of the two currencies involved in an exchange rate. This benchmark approach to the empirical analysis of financial data allows us to establish remarkable stylized facts. Most important is the observation that the repeatedly documented multi-fractal appearance of financial time series is very weak and much less pronounced than the deviation of the mono-scaling properties from Brownian-motion type scaling. The generalized Hurst exponent H(2) assumes typical values between 0.55 and 0.6. Accordingly, autocorrelations of log-returns decay according to a power law, and the quadratic variation vanishes when going to vanishing observation time step size. Furthermore, one can identify the Student t distribution as the log-return distribution of a well-diversified world stock index for long time horizons when a long enough data series is used for estimation. The study of dependence properties, finally, reveals that jumps at the daily horizon originate primarily in the stock market, while at the 5 min horizon they originate in the foreign exchange market. The principal message of the empirical analysis is that there is evidence that a diffusion model without multi-scaling could reasonably well model the dynamics of a broadly diversified world stock index.
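For orientation, an illustrative estimator of the generalized Hurst exponent H(q) from the scaling of q-th order structure functions is sketched below; it is not the authors' exact estimator, and the lag range is an arbitrary choice.

    import numpy as np

    def generalized_hurst(x, q=2, lags=range(2, 64)):
        """Estimate H(q) from S_q(tau) = <|x(t+tau) - x(t)|^q> ~ tau^(q H(q)).
        'x' should be the integrated series (e.g. cumulative log-prices)."""
        lags = np.asarray(list(lags))
        s_q = np.array([np.mean(np.abs(x[lag:] - x[:-lag]) ** q) for lag in lags])
        slope, _ = np.polyfit(np.log(lags), np.log(s_q), 1)   # slope = q * H(q)
        return slope / q

    # usage: log_price = np.cumsum(log_returns); H2 = generalized_hurst(log_price, q=2)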
Multi-Mounted X-Ray Computed Tomography.
Fu, Jian; Liu, Zhenzhong; Wang, Jingzheng
2016-01-01
Most existing X-ray computed tomography (CT) techniques work in single-mounted mode and need to scan the inspected objects one by one. This is time-consuming and not acceptable for large-scale inspection. In this paper, we report a multi-mounted CT method and its first engineering implementation. It consists of a multi-mounted scanning geometry and the corresponding algebraic iterative reconstruction algorithm. This approach permits the CT rotation scanning of multiple objects simultaneously without increasing the penetration thickness or introducing signal crosstalk. Compared with the conventional single-mounted methods, it has the potential to improve the imaging efficiency and suppress the artifacts from beam hardening and scatter. This work comprises a numerical study of the method and its experimental verification using a dataset measured with a developed multi-mounted X-ray CT prototype system. We believe that this technique is of particular interest for pushing the engineering applications of X-ray CT.
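The multi-mounted system matrix itself is not reproduced here; the sketch below only shows a generic Kaczmarz-type algebraic iterative reconstruction step on a toy system, which is the family of algorithm the abstract refers to.

    import numpy as np

    def art_reconstruct(A, p, n_iter=50, relax=0.5):
        """Generic Kaczmarz/ART iteration: x <- x + relax*(p_i - a_i.x)/||a_i||^2 * a_i.
        A is the projection (system) matrix, p the measured projections; the
        multi-mounted scanning geometry would determine how A is assembled."""
        x = np.zeros(A.shape[1])
        row_norms = np.einsum('ij,ij->i', A, A)           # squared row norms
        for _ in range(n_iter):
            for i in range(A.shape[0]):
                if row_norms[i] > 0:
                    x += relax * (p[i] - A[i] @ x) / row_norms[i] * A[i]
        return x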
NASA Astrophysics Data System (ADS)
Nogueira, Miguel
2018-02-01
Spectral analysis of global-mean precipitation, P, evaporation, E, precipitable water, W, and surface temperature, Ts, revealed significant variability from sub-daily to multi-decadal time-scales, superposed on high-amplitude diurnal and yearly peaks. Two distinct regimes emerged from a transition in the spectral exponents, β: the weather regime, covering time-scales < 10 days with β ≥ 1, and the macroweather regime, extending from a few months to a few decades with 0 < β < 1. Additionally, the spectra showed a generally good statistical agreement amongst several different model- and satellite-based datasets. Detrended cross-correlation analysis (DCCA) revealed three important results which are robust across all datasets: (1) the Clausius-Clapeyron (C-C) relationship is the dominant mechanism of W non-periodic variability at multi-year time-scales; (2) C-C is not the dominant control of W, P or E non-periodic variability at time-scales below about 6 months, where the weather regime is approached and other mechanisms become important; (3) C-C is not a dominant control for P or E over land throughout the entire time-scale range considered. Furthermore, it is suggested that the atmosphere and oceans start to act as a single coupled system at time-scales > 1-2 years, while at time-scales < 6 months they are not the dominant drivers of each other. For global-ocean and full-globe averages, ρDCCA showed a large spread in the importance of C-C for P and E variability amongst different datasets at multi-year time-scales, ranging from negligible (< 0.3) to high (≈0.6-0.8) values. Hence, state-of-the-art climate datasets have significant uncertainties in the representation of macroweather precipitation and evaporation variability and its governing mechanisms.
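A compact, illustrative implementation of the detrended cross-correlation coefficient ρ_DCCA used above is given below, with non-overlapping boxes and linear detrending; the box sizes and other details differ from the study's exact setup.

    import numpy as np

    def rho_dcca(x, y, box_size):
        """Detrended cross-correlation coefficient for one time scale (box size)."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        X = np.cumsum(x - x.mean())                         # integrated profiles
        Y = np.cumsum(y - y.mean())
        n_boxes = len(X) // box_size
        t = np.arange(box_size)
        fxy = fxx = fyy = 0.0
        for b in range(n_boxes):
            xs = X[b * box_size:(b + 1) * box_size]
            ys = Y[b * box_size:(b + 1) * box_size]
            rx = xs - np.polyval(np.polyfit(t, xs, 1), t)   # local linear detrending
            ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
            fxy += np.mean(rx * ry)
            fxx += np.mean(rx * rx)
            fyy += np.mean(ry * ry)
        return fxy / np.sqrt(fxx * fyy)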
Design of a Minimum Surface-Effect Three Degree-of-Freedom Micromanipulator
NASA Technical Reports Server (NTRS)
Goldfarb, Michael; Speich, John E.
1997-01-01
This paper describes the fundamental physical motivations for small-scale minimum surface-effect design, and presents a three degree-of-freedom micromanipulator design that incorporates a minimum surface-effect approach. The primary focus of the design is the split-tube flexure, a unique small-scale revolute joint that exhibits a considerably larger range of motion and significantly better multi-axis revolute joint characteristics than a conventional flexure. The development of this joint enables the implementation of a small-scale spatially-loaded revolute joint-based manipulator with well-behaved kinematic characteristics and without the backlash and stick-slip behavior that would otherwise prevent precision control.
Uniform competency-based local feature extraction for remote sensing images
NASA Astrophysics Data System (ADS)
Sedaghat, Amin; Mohammadi, Nazila
2018-01-01
Local feature detectors are widely used in many photogrammetry and remote sensing applications. The quantity and distribution of the local features play a critical role in the quality of the image matching process, particularly for multi-sensor high resolution remote sensing image registration. However, conventional local feature detectors cannot extract desirable matched features either in terms of the number of correct matches or the spatial and scale distribution in multi-sensor remote sensing images. To address this problem, this paper proposes a novel method for uniform and robust local feature extraction for remote sensing images, which is based on a novel competency criterion and scale and location distribution constraints. The proposed method, called uniform competency (UC) local feature extraction, can be easily applied to any local feature detector for various kinds of applications. The proposed competency criterion is based on a weighted ranking process using three quality measures, including robustness, spatial saliency and scale parameters, which is performed in a multi-layer gridding schema. For evaluation, five state-of-the-art local feature detector approaches, namely, scale-invariant feature transform (SIFT), speeded up robust features (SURF), scale-invariant feature operator (SFOP), maximally stable extremal region (MSER) and Hessian-affine, are used. The proposed UC-based feature extraction algorithms were successfully applied to match various synthetic and real satellite image pairs, and the results demonstrate their capability to increase matching performance and to improve the spatial distribution. The code to carry out the UC feature extraction is available at https://www.researchgate.net/publication/317956777_UC-Feature_Extraction.
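The weighted-ranking details of the UC criterion are not reproduced here; the sketch below only illustrates the general idea of ranking keypoints inside grid cells by a weighted score of quality measures and keeping the best ones per cell. The weights, column layout and cell counts are placeholders, not the published parameters.

    import numpy as np

    def uniform_select(keypoints, image_shape, grid=(8, 8), per_cell=5,
                       w_resp=0.5, w_scale=0.25, w_salience=0.25):
        """Keep the top-ranked keypoints in each grid cell. 'keypoints' is an array
        with columns (x, y, scale, response, salience); weights are illustrative."""
        h, w = image_shape
        gy, gx = grid
        cell_x = (keypoints[:, 0] * gx / w).astype(int).clip(0, gx - 1)
        cell_y = (keypoints[:, 1] * gy / h).astype(int).clip(0, gy - 1)
        kept = []
        for cy in range(gy):
            for cx in range(gx):
                idx = np.where((cell_x == cx) & (cell_y == cy))[0]
                if idx.size == 0:
                    continue
                score = (w_resp * keypoints[idx, 3] +
                         w_scale * keypoints[idx, 2] +
                         w_salience * keypoints[idx, 4])
                kept.extend(idx[np.argsort(score)[::-1][:per_cell]])
        return keypoints[sorted(kept)]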
Multi-scale curvature for automated identification of glaciated mountain landscapes
Prasicek, Günther; Otto, Jan-Christoph; Montgomery, David R.; Schrott, Lothar
2014-01-01
Erosion by glacial and fluvial processes shapes mountain landscapes in a long-recognized and characteristic way. Upland valleys incised by fluvial processes typically have a V-shaped cross-section with uniform and moderately steep slopes, whereas glacial valleys tend to have a U-shaped profile with a changing slope gradient. We present a novel regional approach to automatically differentiate between fluvial and glacial mountain landscapes based on the relation of multi-scale curvature and drainage area. Sample catchments are delineated and multiple moving window sizes are used to calculate per-cell curvature over a variety of scales ranging from the vicinity of the flow path at the valley bottom to catchment sections fully including valley sides. Single-scale curvature can take similar values for glaciated and non-glaciated catchments but a comparison of multi-scale curvature leads to different results according to the typical cross-sectional shapes. To adapt these differences for automated classification of mountain landscapes into areas with V- and U-shaped valleys, curvature values are correlated with drainage area and a new and simple morphometric parameter, the Difference of Minimum Curvature (DMC), is developed. At three study sites in the western United States the DMC thresholds determined from catchment analysis are used to automatically identify 5 × 5 km quadrats of glaciated and non-glaciated landscapes and the distinctions are validated by field-based geological and geomorphological maps. Our results demonstrate that DMC is a good predictor of glacial imprint, allowing automated delineation of glacially and fluvially incised mountain landscapes. PMID:24748703
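The published DMC algorithm fits curvature over moving windows and works per catchment; the sketch below is only a simplified stand-in for the multi-scale-curvature idea, approximating per-cell minimum curvature by the smallest Hessian eigenvalue of a Gaussian-smoothed DEM at two analysis scales. The smoothing scales are arbitrary placeholders.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def minimum_curvature(dem, sigma, cell_size=1.0):
        """Smallest eigenvalue of the Hessian of a Gaussian-smoothed DEM,
        a simple proxy for per-cell minimum curvature at one scale."""
        hxx = gaussian_filter(dem, sigma, order=(2, 0)) / cell_size ** 2
        hyy = gaussian_filter(dem, sigma, order=(0, 2)) / cell_size ** 2
        hxy = gaussian_filter(dem, sigma, order=(1, 1)) / cell_size ** 2
        mean_c = 0.5 * (hxx + hyy)
        diff_c = np.sqrt(0.25 * (hxx - hyy) ** 2 + hxy ** 2)
        return mean_c - diff_c

    def difference_of_minimum_curvature(dem, small_sigma=2, large_sigma=20, cell_size=1.0):
        """DMC-like field: difference between minimum curvature at a large and a small scale."""
        return (minimum_curvature(dem, large_sigma, cell_size)
                - minimum_curvature(dem, small_sigma, cell_size))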
A New Approach to Modeling Densities and Equilibria of Ice and Gas Hydrate Phases
NASA Astrophysics Data System (ADS)
Zyvoloski, G.; Lucia, A.; Lewis, K. C.
2011-12-01
The Gibbs-Helmholtz Constrained (GHC) equation is a new cubic equation of state that was recently derived by Lucia (2010) and Lucia et al. (2011) by constraining the energy parameter in the Soave form of the Redlich-Kwong equation to satisfy the Gibbs-Helmholtz equation. The key attributes of the GHC equation are: 1) It is a multi-scale equation because it uses the internal energy of departure, UD, as a natural bridge between the molecular and bulk phase length scales. 2) It does not require acentric factors, volume translation, regression of parameters to experimental data, binary (kij) interaction parameters, or other forms of empirical correlations. 3) It is a predictive equation of state because it uses a database of values of UD determined from NTP Monte Carlo simulations. 4) It can readily account for differences in molecular size and shape. 5) It has been successfully applied to non-electrolyte mixtures as well as weak and strong aqueous electrolyte mixtures over wide ranges of temperature, pressure and composition to predict liquid density and phase equilibrium with up to four phases. 6) It has been extensively validated with experimental data. 7) The AAD% error between predicted and experimental liquid density is 1% while the AAD% error in phase equilibrium predictions is 2.5%. 8) It has been used successfully within the subsurface flow simulation program FEHM. In this work we describe recent extensions of the multi-scale predictive GHC equation to modeling the phase densities and equilibrium behavior of hexagonal ice and gas hydrates. In particular, we show that radial distribution functions, which can be determined by NTP Monte Carlo simulations, can be used to establish correct standard state fugacities of Ih ice and gas hydrates. From this, it is straightforward to determine both the phase density of ice or gas hydrates as well as any equilibrium involving ice and/or hydrate phases. A number of numerical results for mixtures of N2, O2, CH4, CO2, water, and NaCl in permafrost conditions are presented to illustrate the predictive capabilities of the multi-scale GHC equation. In particular, we show that the GHC equation correctly predicts 1) The density of Ih ice and methane hydrate to within 1%. 2) The melting curve for hexagonal ice. 3) The hydrate-gas phase co-existence curve. 4) Various phase equilibria involving ice and hydrate phases. We also show that the GHC equation approach can be readily incorporated into subsurface flow simulation programs like FEHM to predict the behavior of permafrost and other reservoirs where ice and/or hydrates are present. Many geometric illustrations are used to elucidate key concepts. References: A. Lucia, A Multi-Scale Gibbs Helmholtz Constrained Cubic Equation of State. J. Thermodynamics: Special Issue on Advances in Gas Hydrate Thermodynamics and Transport Properties. Available on-line [doi:10.1155/2010/238365]. A. Lucia, B.M. Bonk, A. Roy and R.R. Waterman, A Multi-Scale Framework for Multi-Phase Equilibrium Flash. Comput. Chem. Engng. In press.
Goal-oriented robot navigation learning using a multi-scale space representation.
Llofriu, M; Tejera, G; Contreras, M; Pelc, T; Fellous, J M; Weitzenfeld, A
2015-12-01
There has been extensive research in recent years on the multi-scale nature of hippocampal place cell and entorhinal grid cell encoding, which has led to many speculations on their role in spatial cognition. In this paper we focus on the multi-scale nature of place cells and how they contribute to faster learning during goal-oriented navigation when compared to a spatial cognition system composed of single scale place cells. The task consists of a circular arena with a fixed goal location, in which a robot is trained to find the shortest path to the goal after a number of learning trials. Synaptic connections are modified using a reinforcement learning paradigm adapted to the place cells' multi-scale architecture. The model is evaluated in both simulation and physical robots. We find that larger scale and combined multi-scale representations favor goal-oriented navigation task learning. Copyright © 2015 Elsevier Ltd. All rights reserved.
Non-Gaussian Multi-resolution Modeling of Magnetosphere-Ionosphere Coupling Processes
NASA Astrophysics Data System (ADS)
Fan, M.; Paul, D.; Lee, T. C. M.; Matsuo, T.
2016-12-01
The most dynamic coupling between the magnetosphere and ionosphere occurs in the Earth's polar atmosphere. Our objective is to model scale-dependent stochastic characteristics of high-latitude ionospheric electric fields that originate from solar wind magnetosphere-ionosphere interactions. The Earth's high-latitude ionospheric electric field exhibits considerable variability, with increasing non-Gaussian characteristics at decreasing spatio-temporal scales. Accurately representing the underlying stochastic physical process through random field modeling is crucial not only for scientific understanding of the energy, momentum and mass exchanges between the Earth's magnetosphere and ionosphere, but also for modern technological systems including telecommunication, navigation, positioning and satellite tracking. While considerable effort has been made to characterize the large-scale variability of the electric field in the context of Gaussian processes, no attempt has been made so far to model the small-scale non-Gaussian stochastic process observed in the high-latitude ionosphere. We construct a novel random field model using spherical needlets as building blocks. The double localization of spherical needlets in both spatial and frequency domains enables the model to capture the non-Gaussian and multi-resolutional characteristics of the small-scale variability. The estimation procedure is computationally feasible due to the utilization of an adaptive Gibbs sampler. We apply the proposed methodology to the computational simulation output from the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) magnetosphere model. Our non-Gaussian multi-resolution model results in characterizing significantly more energy associated with the small-scale ionospheric electric field variability in comparison to Gaussian models. By accurately representing unaccounted-for additional energy and momentum sources to the Earth's upper atmosphere, our novel random field modeling approach will provide a viable remedy to the current numerical models' systematic biases resulting from the underestimation of high-latitude energy and momentum sources.
Crowe, A S; Booty, W G
1995-05-01
A multi-level pesticide assessment methodology has been developed to permit regulatory personnel to undertake a variety of assessments on the potential for pesticides used in agricultural areas to contaminate the groundwater regime at an increasingly detailed geographical scale of investigation. A multi-level approach accounts for the variety of assessment objectives and the level of detail required in the assessment, the restrictions on the availability and accuracy of data, the time available to undertake the assessment, and the expertise of the decision maker. The level 1: regional scale is designed to prioritize districts having a potentially high risk for groundwater contamination from the application of a specific pesticide for a particular crop. The level 2: local scale is used to identify critical areas for groundwater contamination, at a soil polygon scale, within a district. The level 3: soil profile scale allows the user to evaluate specific factors influencing pesticide leaching and persistence, and to determine the extent and timing of leaching, through the simulation of the migration of a pesticide within a soil profile. Because of the scale of investigation, the limited amount of data required, and the qualitative nature of the assessment results, the level 1 and level 2 assessments are designed primarily for quick and broad guidance related to management practices. A level 3 assessment is more complex, requires considerably more data and expertise on the part of the user, and hence is designed to verify the potential for contamination identified during the level 1 or 2 assessment. The system combines environmental modelling, geographical information systems, extensive databases, data management systems, expert systems, and pesticide assessment models, to form an environmental information system for assessing the potential for pesticides to contaminate groundwater.
NASA Astrophysics Data System (ADS)
Du, Wenbo
A common attribute of electric-powered aerospace vehicles and systems such as unmanned aerial vehicles, hybrid- and fully-electric aircraft, and satellites is that their performance is usually limited by the energy density of their batteries. Although lithium-ion batteries offer distinct advantages such as high voltage and low weight over other battery technologies, they are a relatively new development, and thus significant gaps in the understanding of the physical phenomena that govern battery performance remain. As a result of this limited understanding, batteries must often undergo a cumbersome design process involving many manual iterations based on rules of thumb and ad-hoc design principles. A systematic study of the relationship between operational, geometric, morphological, and material-dependent properties and performance metrics such as energy and power density is non-trivial due to the multiphysics, multiphase, and multiscale nature of the battery system. To address these challenges, two numerical frameworks are established in this dissertation: a process for analyzing and optimizing several key design variables using surrogate modeling tools and gradient-based optimizers, and a multi-scale model that incorporates more detailed microstructural information into the computationally efficient but limited macro-homogeneous model. In the surrogate modeling process, multi-dimensional maps for the cell energy density with respect to design variables such as the particle size, ion diffusivity, and electron conductivity of the porous cathode material are created. A combined surrogate- and gradient-based approach is employed to identify optimal values for cathode thickness and porosity under various operating conditions, and quantify the uncertainty in the surrogate model. The performance of multiple cathode materials is also compared by defining dimensionless transport parameters. The multi-scale model makes use of detailed 3-D FEM simulations conducted at the particle-level. A monodisperse system of ellipsoidal particles is used to simulate the effective transport coefficients and interfacial reaction current density within the porous microstructure. Microscopic simulation results are shown to match well with experimental measurements, while differing significantly from homogenization approximations used in the macroscopic model. Global sensitivity analysis and surrogate modeling tools are applied to couple the two length scales and complete the multi-scale model.
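A toy illustration of the combined surrogate/gradient-based approach described above: sample a synthetic energy-density response over cathode thickness and porosity, fit a radial-basis-function surrogate, then run a gradient-based optimizer on the surrogate. The objective function, bounds and sample counts are made up for illustration and do not come from the dissertation.

    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from scipy.optimize import minimize

    def energy_density(x):
        """Synthetic, made-up response surface (energy density vs. thickness, porosity)."""
        thickness, porosity = x
        return 250.0 - (thickness - 90.0) ** 2 / 400.0 - 900.0 * (porosity - 0.32) ** 2

    rng = np.random.default_rng(0)
    samples = np.column_stack([rng.uniform(40, 160, 40),    # thickness (um)
                               rng.uniform(0.2, 0.5, 40)])  # porosity (-)
    values = np.array([energy_density(s) for s in samples])

    surrogate = RBFInterpolator(samples, values, smoothing=1e-6)

    res = minimize(lambda x: -surrogate(x.reshape(1, -1))[0],   # maximize via negation
                   x0=np.array([100.0, 0.35]),
                   bounds=[(40, 160), (0.2, 0.5)],
                   method='L-BFGS-B')                           # gradient-based optimizer
    print("surrogate-optimal thickness/porosity:", res.x)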
A Coastal Risk Assessment Framework Tool to Identify Hotspots at the Regional Scale
NASA Astrophysics Data System (ADS)
Van Dongeren, A.; Viavattene, C.; Jimenez, J. A.; Ferreira, O.; Bolle, A.; Owen, D.; Priest, S.
2016-02-01
Extreme events in combination with an increasing population on the coast, future sea level rise and the deterioration of coastal defences can lead to catastrophic consequences for coastal communities and their activities. The Resilience-Increasing Strategies for Coasts toolkit (RISC-KIT) FP7 EU project is producing a set of EU-coherent open-source and open-access tools in support of coastal managers and decision-makers. This paper presents one of these tools, the Coastal Risk Assessment Framework (CRAF), which assesses coastal risk at a regional scale to identify potential impact hotspots for more detailed assessment. Applying a suite of complex models at a full and detailed regional scale remains difficult and may not be efficient; therefore, a 2-phase approach is adopted. CRAF Phase 1 is a screening process based on a coastal-index approach that delimits several hotspots in alongshore extent by assessing the potential exposure for every kilometre along the coast. CRAF Phase 2 uses a suite of more complex modelling processes (including XBeach 1D, an inundation model, impact assessment and a Multi-Criteria Analysis approach) to analyse and compare the risks among the identified hotspots. Results of its application are compared for three European case studies: the highly protected, low-lying Flemish coastal plain with substantial urbanization and harbors; a Portuguese coastal lagoon protected by a multi-inlet barrier system; and the highly urbanized Catalonian coast with tourism activities under threat. The flexibility of the tool allows the comparative analysis to be tailored to these different contexts and adapted to the quality of the available resources and data. Key lessons will be presented.
Verger, Eric O.; Perignon, Marlene; El Ati, Jalila; Darmon, Nicole; Dop, Marie-Claude; Drogué, Sophie; Dury, Sandrine; Gaillard, Cédric; Sinfort, Carole; Amiot, Marie-Josèphe
2018-01-01
Mediterranean countries are undergoing dietary and nutritional changes that affect their inhabitants' health, while facing massive environmental challenges. The increasing demand of water in agriculture, the capacity to maintain local food production, and the growing dependence on food imports are interconnected issues that must be addressed to ensure food security and nutrition in the Mediterranean region. Here, we present the conceptual framework and methodologies developed by the MEDINA-Study Group for rethinking food systems toward sustainable consumption and production modes. Based on its multidisciplinary expertise, the MEDINA-Study Group designed a “fork-to-farm” multi-scale approach, stemming from current dietary habits and examining how some options to nutritionally improve these habits might affect the food systems. This approach was developed for research activities in the South of France and Tunisia, two areas with very different diet-agriculture-environment nexuses. The conceptual framework is based on the analysis of elements of the food systems (from consumption to production) at different levels (individual, household, regional and national levels). The methods include: (i) modeling options of dietary changes at different scales, in order to nutritionally optimize food consumption-production without increasing the environmental impact, (ii) translating the best-choice changes into possible policy actions, (iii) testing the acceptability and feasibility of these actions with several stakeholders, and (iv) producing guidelines for sustainable food choices and production. The MEDINA-Study Group identified additional issues that could be included in a future framework to help design ambitious agricultural, food and health policies in the Mediterranean region. PMID:29872660
Alimohammadi, Mona; Pichardo-Almarza, Cesar; Agu, Obiekezie; Díaz-Zuccarini, Vanessa
2016-01-01
Vascular calcification results in stiffening of the aorta and is associated with hypertension and atherosclerosis. Atherogenesis is a complex, multifactorial, and systemic process; the result of a number of factors, each operating simultaneously at several spatial and temporal scales. The ability to predict sites of atherogenesis would be of great use to clinicians in order to improve diagnostic and treatment planning. In this paper, we present a mathematical model as a tool to understand why atherosclerotic plaque and calcifications occur in specific locations. This model is then used to analyze vascular calcification and atherosclerotic areas in an aortic dissection patient using a mechanistic, multi-scale modeling approach, coupling patient-specific, fluid-structure interaction simulations with a model of endothelial mechanotransduction. A number of hemodynamic factors based on state-of-the-art literature are used as inputs to the endothelial permeability model, in order to investigate plaque and calcification distributions, which are compared with clinical imaging data. A significantly improved correlation between elevated hydraulic conductivity or volume flux and the presence of calcification and plaques was achieved by using a shear index comprising both mean and oscillatory shear components (HOLMES) and a non-Newtonian viscosity model as inputs, as compared to widely used hemodynamic indicators. The proposed approach shows promise as a predictive tool. The improvements obtained using the combined biomechanical/biochemical modeling approach highlight the benefits of mechanistic modeling as a powerful tool to understand complex phenomena and provides insight into the relative importance of key hemodynamic parameters. PMID:27445834
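A sketch of how time-averaged wall shear stress (TAWSS), the oscillatory shear index (OSI) and a HOLMES-type combined index could be computed from a nodal wall-shear-stress time series is shown below. The combined-index formula HOLMES = TAWSS·(0.5 − OSI) is an assumption and should be checked against the original reference before use.

    import numpy as np

    def wss_indices(wss, dt):
        """wss: array of shape (n_steps, 3), instantaneous wall shear stress vector
        at one wall node over one cardiac cycle; dt: time step size."""
        T = dt * (len(wss) - 1)
        mean_vec = np.trapz(wss, dx=dt, axis=0) / T              # time-averaged WSS vector
        tawss = np.trapz(np.linalg.norm(wss, axis=1), dx=dt) / T # time-averaged WSS magnitude
        osi = 0.5 * (1.0 - np.linalg.norm(mean_vec) / tawss)     # oscillatory shear index
        holmes = tawss * (0.5 - osi)                             # assumed HOLMES definition
        return tawss, osi, holmes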
Gao, Jie; Roan, Esra; Williams, John L.
2015-01-01
The physis, or growth plate, is a complex disc-shaped cartilage structure that is responsible for longitudinal bone growth. In this study, a multi-scale computational approach was undertaken to better understand how physiological loads are experienced by chondrocytes embedded inside chondrons when subjected to moderate strain under instantaneous compressive loading of the growth plate. Models of representative samples of compressed bone/growth-plate/bone from a 0.67 mm thick 4-month old bovine proximal tibial physis were subjected to a prescribed displacement equal to 20% of the growth plate thickness. At the macroscale level, the applied compressive deformation resulted in an overall compressive strain across the proliferative-hypertrophic zone of 17%. The microscale model predicted that chondrocytes sustained compressive height strains of 12% and 6% in the proliferative and hypertrophic zones, respectively, in the interior regions of the plate. This pattern was reversed within the outer 300 μm region at the free surface where cells were compressed by 10% in the proliferative and 26% in the hypertrophic zones, in agreement with experimental observations. This work provides a new approach to study growth plate behavior under compression and illustrates the need for combining computational and experimental methods to better understand the chondrocyte mechanics in the growth plate cartilage. While the current model is relevant to fast dynamic events, such as heel strike in walking, we believe this approach provides new insight into the mechanical factors that regulate bone growth at the cell level and provides a basis for developing models to help interpret experimental results at varying time scales. PMID:25885547
NASA Astrophysics Data System (ADS)
Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong
2017-11-01
Multi-scale modeling of localized groundwater flow problems in a large-scale aquifer has been extensively investigated in the context of the cost-benefit trade-off. An alternative is to couple the parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from deficiencies in the coupling methods, as well as from inadequacies in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme given its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at parent scale is delivered downward onto the child boundary nodes by means of spatial and temporal head interpolation approaches. The efficiency of the coupling model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by the adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising for handling multi-scale groundwater flow problems with complex stresses and heterogeneity.
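A minimal sketch of the one-way coupling step described above: parent-model heads, stored on a coarse grid at coarse time levels, are interpolated in time (linearly) and space (bilinearly) onto child-model boundary nodes. The grid layout, interpolation orders and argument names are illustrative assumptions.

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    def child_boundary_heads(parent_heads, parent_x, parent_y, parent_times,
                             boundary_xy, t_child):
        """parent_heads: array (n_times, nx, ny) of parent-model heads;
        boundary_xy: (n_nodes, 2) coordinates of child boundary nodes;
        t_child: child-model time at which boundary heads are needed."""
        k = np.searchsorted(parent_times, t_child) - 1
        k = np.clip(k, 0, len(parent_times) - 2)
        w = (t_child - parent_times[k]) / (parent_times[k + 1] - parent_times[k])
        head_t = (1 - w) * parent_heads[k] + w * parent_heads[k + 1]   # temporal interpolation
        interp = RegularGridInterpolator((parent_x, parent_y), head_t)
        return interp(boundary_xy)                                     # spatial interpolation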
An integrated assessment of location-dependent scaling for microalgae biofuel production facilities
Coleman, André M.; Abodeely, Jared M.; Skaggs, Richard L.; ...
2014-06-19
Successful development of a large-scale microalgae-based biofuels industry requires comprehensive analysis and understanding of the feedstock supply chain—from facility siting and design through processing and upgrading of the feedstock to a fuel product. The evolution from pilot-scale production facilities to energy-scale operations presents many multi-disciplinary challenges, including a sustainable supply of water and nutrients, operational and infrastructure logistics, and economic competitiveness with petroleum-based fuels. These challenges are partially addressed by applying the Integrated Assessment Framework (IAF) – an integrated multi-scale modeling, analysis, and data management suite – to key issues in developing and operating an open-pond microalgae production facility. This is done by analyzing how variability and uncertainty over space and through time affect feedstock production rates, and determining the site-specific “optimum” facility scale to minimize capital and operational expenses. This approach explicitly and systematically assesses the interdependence of biofuel production potential, associated resource requirements, and production system design trade-offs. To provide a baseline analysis, the IAF was applied in this paper to a set of sites in the southeastern U.S. with the potential to cumulatively produce 5 billion gallons per year. Finally, the results indicate costs can be reduced by scaling downstream processing capabilities to fit site-specific growing conditions, available and economically viable resources, and specific microalgal strains.
Extreme Cost Reductions with Multi-Megawatt Centralized Inverter Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwabe, Ulrich; Fishman, Oleg
2015-03-20
The objective of this project was to fully develop, demonstrate, and commercialize a new type of utility-scale PV system. Based on patented technology, this includes the development of a truly centralized inverter system with capacities up to 100 MW, and a high voltage, distributed harvesting approach. This system promises both to increase the energy yield from large-scale PV systems, by reducing losses and increasing yield from mismatched arrays, and to reduce overall system costs through very cost-effective conversion and BOS cost reductions enabled by higher-voltage operation.
NASA Astrophysics Data System (ADS)
Potapov, A. A.
2017-11-01
The main purpose of this work is to interpret the main directions of radio physics, radio engineering and radiolocation in a “fractal” language, which opens new ways and generalizations for future promising radio systems. We introduce a new kind of modern radiolocation: fractal-scaling or scale-invariant radiolocation. New topological signatures and methods for detecting low-contrast objects against a high-intensity noise background are presented. This leads to basic changes in the theoretical structure of radiolocation itself and also in its mathematical apparatus. The fractal radio systems concept, sampling topology, the global fractal-scaling approach and the fractal paradigm underlie the scientific direction established by the author in Russia and worldwide.
Zhu, Lin; Dai, Zhenxue; Gong, Huili; ...
2015-06-12
Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with different optimized integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
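The continuous-lag Markov-chain form that underlies transition-probability geostatistics can be written as T(h) = expm(R·h). In the sketch below, diagonal rate entries are set by the facies mean lengths and off-diagonals are spread in proportion to the volumetric proportions of the other facies, a common simplifying closure; this is not necessarily the zone-specific model estimated in the study.

    import numpy as np
    from scipy.linalg import expm

    def transition_probabilities(mean_lengths, proportions, lag):
        """Transition probability matrix T(lag) = expm(R * lag) for one direction.
        Diagonal rates are -1/L_k; off-diagonal rates follow the proportions of
        the remaining facies (illustrative closure assumption)."""
        L = np.asarray(mean_lengths, float)
        p = np.asarray(proportions, float)
        n = len(L)
        R = np.zeros((n, n))
        for k in range(n):
            R[k, k] = -1.0 / L[k]
            for j in range(n):
                if j != k:
                    R[k, j] = (1.0 / L[k]) * p[j] / (1.0 - p[k])
        return expm(R * lag)

    # e.g. three hydrofacies with mean lengths 2, 5, 3 m and proportions 0.2, 0.5, 0.3:
    # T = transition_probabilities([2.0, 5.0, 3.0], [0.2, 0.5, 0.3], lag=1.0)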
Understanding protected area resilience: a multi-scale, social-ecological approach.
Cumming, Graeme S; Allen, Craig R; Ban, Natalie C; Biggs, Duan; Biggs, Harry C; Cumming, David H M; De Vos, Alta; Epstein, Graham; Etienne, Michel; Maciejewski, Kristine; Mathevet, Raphaël; Moore, Christine; Nenadovic, Mateja; Schoon, Michael
2015-03-01
Protected areas (PAs) remain central to the conservation of biodiversity. Classical PAs were conceived as areas that would be set aside to maintain a natural state with minimal human influence. However, global environmental change and growing cross-scale anthropogenic influences mean that PAs can no longer be thought of as ecological islands that function independently of the broader social-ecological system in which they are located. For PAs to be resilient (and to contribute to broader social-ecological resilience), they must be able to adapt to changing social and ecological conditions over time in a way that supports the long-term persistence of populations, communities, and ecosystems of conservation concern. We extend Ostrom's social-ecological systems framework to consider the long-term persistence of PAs, as a form of land use embedded in social-ecological systems, with important cross-scale feedbacks. Most notably, we highlight the cross-scale influences and feedbacks on PAs that exist from the local to the global scale, contextualizing PAs within multi-scale social-ecological functional landscapes. Such functional landscapes are integral to understand and manage individual PAs for long-term sustainability. We illustrate our conceptual contribution with three case studies that highlight cross-scale feedbacks and social-ecological interactions in the functioning of PAs and in relation to regional resilience. Our analysis suggests that while ecological, economic, and social processes are often directly relevant to PAs at finer scales, at broader scales, the dominant processes that shape and alter PA resilience are primarily social and economic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kornreich, Drew E; Vaidya, Rajendra U; Ammerman, Curtt N
Integrated Computational Materials Engineering (ICME) is a novel overarching approach to bridge length and time scales in computational materials science and engineering. This approach integrates all elements of multi-scale modeling (including various empirical and science-based models) with materials informatics to provide users the opportunity to tailor material selections based on stringent application needs. Typically, materials engineering has focused on structural requirements (stress, strain, modulus, fracture toughness, etc.) while multi-scale modeling has been science focused (mechanical threshold strength models, grain-size models, solid-solution strengthening models, etc.). Materials informatics (mechanical property inventories), on the other hand, is extensively data focused. All of these elements are combined within the framework of ICME to create an architecture for the development, selection and design of new composite materials for challenging environments. We propose development of the foundations for applying ICME to composite materials development for nuclear and high-radiation environments (including nuclear-fusion energy reactors, nuclear-fission reactors, and accelerators). We expect to combine all elements of current material models (including thermo-mechanical and finite-element models) into the ICME framework. This will be accomplished through the use of various mathematical modeling constructs. These constructs will allow the integration of constituent models, which in turn would allow us to use the adaptive strengths of a combinatorial scheme (fabrication and computational) for creating new composite materials. A sample problem where these concepts are used is provided in this summary.
Sablah, Mawuli; Baker, Shawn K; Badham, Jane; De Zayas, Alfred
2013-11-01
The scaling up nutrition (SUN) policy framework requires extensive public–private partnership (PPP). Malnutrition is multi-dimensional and should engage multi-sectoral platforms. The SUN policy, however, did not fully embrace the dynamics of harnessing PPP. The objectives of the present paper are to highlight the reasons for the apprehension around PPP and to illustrate how effective coordination of PPP in West Africa has contributed to implementing large-scale food fortification with micronutrients as a complementary nutrition intervention. The experience of Helen Keller International (HKI) in scaling up food fortification is emphasised, with attention to the factors contributing to indifference by the international community to private sector contributions to SUN. The roles of different stakeholders in a PPP are elucidated and the process linked to who, why and how to engage. The private sector provides direct nutrition services while the public sector creates the enabling environment for the private sector to thrive on social values. Through this approach fortified vegetable oil and wheat flour are now reaching over 70% of the population in West Africa. As a neutral broker, HKI coordinated and facilitated dialogue among the different stakeholders. The core competencies of each stakeholder were harnessed and each partner was held accountable. The paper concludes that multi-sectoral relationships must be transparent, equitable and based on shared mutual interests. The rules and values of PPP offer opportunities for SUN.
An adaptive framework to differentiate receiving water quality impacts on a multi-scale level.
Blumensaat, F; Tränckner, J; Helm, B; Kroll, S; Dirckx, G; Krebs, P
2013-01-01
The paradigm shift in recent years towards sustainable and coherent water resources management on a river basin scale has changed the subject of investigations to a multi-scale problem representing a great challenge for all actors participating in the management process. In this regard, planning engineers often face an inherent conflict to provide reliable decision support for complex questions with a minimum of effort. This trend inevitably increases the risk to base decisions upon uncertain and unverified conclusions. This paper proposes an adaptive framework for integral planning that combines several concepts (flow balancing, water quality monitoring, process modelling, multi-objective assessment) to systematically evaluate management strategies for water quality improvement. As key element, an S/P matrix is introduced to structure the differentiation of relevant 'pressures' in affected regions, i.e. 'spatial units', which helps in handling complexity. The framework is applied to a small, but typical, catchment in Flanders, Belgium. The application to the real-life case shows: (1) the proposed approach is adaptive, covers problems of different spatial and temporal scale, efficiently reduces complexity and finally leads to a transparent solution; and (2) water quality and emission-based performance evaluation must be done jointly as an emission-based performance improvement does not necessarily lead to an improved water quality status, and an assessment solely focusing on water quality criteria may mask non-compliance with emission-based standards. Recommendations derived from the theoretical analysis have been put into practice.
The Structure of Borders in a Small World
Thiemann, Christian; Theis, Fabian; Grady, Daniel; Brune, Rafael; Brockmann, Dirk
2010-01-01
Territorial subdivisions and geographic borders are essential for understanding phenomena in sociology, political science, history, and economics. They influence the interregional flow of information and cross-border trade and affect the diffusion of innovation and technology. However, it is unclear if existing administrative subdivisions that typically evolved decades ago still reflect the most plausible organizational structure of today. The complexity of modern human communication, the ease of long-distance movement, and increased interaction across political borders complicate the operational definition and assessment of geographic borders that optimally reflect the multi-scale nature of today's human connectivity patterns. What border structures emerge directly from the interplay of scales in human interactions is an open question. Based on a massive proxy dataset, we analyze a multi-scale human mobility network and compute effective geographic borders inherent to human mobility patterns in the United States. We propose two computational techniques for extracting these borders and for quantifying their strength. We find that effective borders only partially overlap with existing administrative borders, and show that some of the strongest mobility borders exist in unexpected regions. We show that the observed structures cannot be generated by gravity models for human traffic. Finally, we introduce the concept of link significance that clarifies the observed structure of effective borders. Our approach represents a novel type of quantitative, comparative analysis framework for spatially embedded multi-scale interaction networks in general and may yield important insight into a multitude of spatiotemporal phenomena generated by human activity. PMID:21124970
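The paper's own border-extraction techniques are not reproduced here; the sketch below only illustrates the general idea of partitioning a weighted mobility network into modules and flagging links that cross module boundaries as candidate "effective borders", using networkx's modularity-based community detection as a stand-in.

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    def effective_border_links(mobility_counts):
        """mobility_counts: dict {(region_a, region_b): trips} describing an
        undirected mobility network. Returns links that cross community
        boundaries, a rough proxy for effective borders (illustrative only)."""
        G = nx.Graph()
        for (a, b), w in mobility_counts.items():
            G.add_edge(a, b, weight=float(w))
        communities = greedy_modularity_communities(G, weight="weight")
        label = {node: i for i, comm in enumerate(communities) for node in comm}
        return {(a, b) for a, b in G.edges if label[a] != label[b]}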
Field Trials of the Multi-Source Approach for Resistivity and Induced Polarization Data Acquisition
NASA Astrophysics Data System (ADS)
LaBrecque, D. J.; Morelli, G.; Fischanger, F.; Lamoureux, P.; Brigham, R.
2013-12-01
Implementing systems of distributed receivers and transmitters for resistivity and induced polarization data is an almost inevitable result of the availability of wireless data communication modules and GPS modules offering precise timing and instrument locations. Such systems have a number of advantages; for example, they can be deployed around obstacles such as rivers, canyons, or mountains, which would be difficult with traditional 'hard-wired' systems. However, deploying a system of identical, small, battery powered transceivers, each capable of injecting a known current and measuring the induced potential, has an additional and less obvious advantage in that multiple units can inject current simultaneously. The original purpose for using multiple simultaneous current sources (multi-source) was to increase signal levels. In traditional systems, to double the received signal you inject twice the current, which requires you to apply twice the voltage and thus four times the power. Alternatively, one approach to increasing signal levels for large-scale surveys collected using small, battery powered transceivers is to allow multiple units to transmit in parallel. In theory, using four 400 watt transmitters on separate, parallel dipoles yields roughly the same signal as a single 6400 watt transmitter. Furthermore, implementing the multi-source approach creates the opportunity to apply more complex current flow patterns than simple, parallel dipoles. For a perfect, noise-free system, multi-sourcing adds no new information to a data set that contains a comprehensive set of data collected using single sources. However, for realistic, noisy systems, it appears that multi-source data can substantially impact survey results. In preliminary model studies, the multi-source data produced such startling improvements in subsurface images that even the authors questioned their veracity. Between December of 2012 and July of 2013, we completed multi-source surveys at five sites with depths of exploration ranging from 150 to 450 m. The sites included shallow geothermal sites near Reno Nevada, Pomarance Italy, and Volterra Italy; a mineral exploration site near Timmins Quebec; and a landslide investigation near Vajont Dam in northern Italy. These sites provided a series of challenges in survey design and deployment including some extremely difficult terrain and a broad range of background resistivity and induced polarization values. Despite these challenges, comparison of multi-source results to resistivity and induced polarization data collected with more traditional methods supports the thesis that the multi-source approach is capable of providing substantial improvements in both depth of penetration and resolution over conventional approaches.
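The power-scaling argument above can be made explicit; the short derivation below assumes the individual dipole signals sum coherently at the receiver and a fixed load resistance.

    Received signal amplitude is proportional to the injected current $I$, while transmitter
    power is $P = I^2 R$ for a fixed load $R$. One transmitter of power $P_0$ drives
    $I_0 = \sqrt{P_0/R}$, so $N$ transmitters on parallel dipoles give an effective source
    current $I_{\mathrm{eff}} = N I_0$. A single transmitter producing the same signal needs
    \[
        P_{\mathrm{single}} = (N I_0)^2 R = N^2 P_0, \qquad
        N = 4,\ P_0 = 400\,\mathrm{W} \;\Rightarrow\;
        P_{\mathrm{single}} = 16 \times 400\,\mathrm{W} = 6400\,\mathrm{W}.
    \]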
Cracks dynamics under tensional stress - a DEM approach
NASA Astrophysics Data System (ADS)
Debski, Wojciech; Klejment, Piotr; Kosmala, Alicja; Foltyn, Natalia; Szpindler, Maciej
2017-04-01
Breaking and fragmentation of solid materials is an extremely complex process involving scales ranging from the atomic scale (breaking inter-atomic bonds) up to thousands of kilometers in the case of catastrophic earthquakes (in energy it ranges from a single eV up to 10^24 J). Such a large span of scales in breaking processes opens up many questions, for example the scaling of breaking processes, the existence of factors controlling the final size of the broken area, the existence of precursors, and the dynamics of fragmentation, to name a few. The classical approach to studying breaking processes at seismological scales, i.e., physical processes in earthquake foci, is essentially based on two factors: seismic data (mostly) and continuum mechanics (including linear fracture mechanics). This approach has been greatly successful in developing kinematic (first) and dynamic (recently) models of seismic rupture and explaining many earthquake features observed all around the globe. However, it will sooner or later face a limitation due to the limited information content of seismic data and the inherent limitations of fracture mechanics principles. A way of avoiding this expected limitation is to turn attention towards a method well established in physics: computational simulation, a powerful branch of contemporary physics. In this presentation we discuss preliminary results of an analysis of fracturing dynamics under external tensional forces using the Discrete Element Method approach. We demonstrate that even under very simplified tensional conditions, the fragmentation dynamics is a very complex process, including multi-fracturing, spontaneous fracture generation and healing, etc. We also emphasize the role of material heterogeneity in the fragmentation process.
Multi-scale signed envelope inversion
NASA Astrophysics Data System (ADS)
Chen, Guo-Xin; Wu, Ru-Shan; Wang, Yu-Qing; Chen, Sheng-Chang
2018-06-01
Envelope inversion based on the modulation signal model was proposed to reconstruct large-scale structures of underground media. To overcome the shortcomings of conventional envelope inversion, multi-scale envelope inversion was proposed, using a new envelope Fréchet derivative and a multi-scale inversion strategy to invert strong-contrast models. In multi-scale envelope inversion, amplitude demodulation is used to extract the low-frequency information from the envelope data. However, using the amplitude demodulation method alone discards the wavefield polarity information, which increases the possibility of the inversion converging to multiple solutions. In this paper we propose a new demodulation method that retains both the amplitude and the polarity information of the envelope data. We then introduce this demodulation method into multi-scale envelope inversion and propose a new misfit functional: multi-scale signed envelope inversion. In the numerical tests, we applied the new inversion method to a salt layer model and the SEG/EAGE 2-D Salt model using a low-cut source (frequency components below 4 Hz were truncated). The results of the numerical tests demonstrate the effectiveness of this method.
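One plausible construction of a signed envelope, sketched below purely for illustration (the paper's demodulation operator may differ), combines the conventional Hilbert-transform envelope with the polarity of the original trace:

import numpy as np
from scipy.signal import hilbert

def signed_envelope(trace):
    """Envelope carrying the polarity of the trace (illustrative construction only)."""
    env = np.abs(hilbert(trace))   # conventional, non-negative envelope
    return np.sign(trace) * env    # re-attach the wavefield polarity

# Example: a wavelet pair whose second arrival has reversed polarity
t = np.linspace(0.0, 1.0, 1000)
trace = np.sin(2 * np.pi * 30 * t) * np.exp(-40 * (t - 0.3) ** 2) \
      - np.sin(2 * np.pi * 30 * t) * np.exp(-40 * (t - 0.7) ** 2)
e_signed = signed_envelope(trace)  # keeps the sign flip that |hilbert(trace)| discards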
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jablonowski, Christiane
The research investigates and advances strategies for bridging the scale discrepancies between local, regional and global phenomena in climate models without the prohibitive computational costs of global cloud-resolving simulations. In particular, the research explores new frontiers in computational geoscience by introducing high-order Adaptive Mesh Refinement (AMR) techniques into climate research. AMR and statically-adapted variable-resolution approaches represent an emerging trend for atmospheric models and are likely to become the new norm in future-generation weather and climate models. The research advances the understanding of multi-scale interactions in the climate system and showcases a pathway for modeling these interactions effectively with advanced computational tools, such as the Chombo AMR library developed at the Lawrence Berkeley National Laboratory. The research is interdisciplinary and combines applied mathematics, scientific computing and the atmospheric sciences. In this research project, a hierarchy of high-order atmospheric models on cubed-sphere computational grids has been developed that serves as an algorithmic prototype for the finite-volume solution-adaptive Chombo-AMR approach. The investigations have focused on the characteristics of both static mesh adaptations and dynamically-adaptive grids that can capture flow fields of interest such as tropical cyclones. Six research themes have been chosen. These are (1) the introduction of adaptive mesh refinement techniques into the climate sciences, (2) advanced algorithms for nonhydrostatic atmospheric dynamical cores, (3) an assessment of the interplay between resolved-scale dynamical motions and subgrid-scale physical parameterizations, (4) evaluation techniques for atmospheric model hierarchies, (5) the comparison of AMR refinement strategies and (6) tropical cyclone studies with a focus on multi-scale interactions and variable-resolution modeling. The results of this research project demonstrate significant advances in all six research areas. The major conclusions are that statically-adaptive variable-resolution modeling is currently becoming mature in the climate sciences, and that AMR holds outstanding promise for future-generation weather and climate models on high-performance computing architectures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, Michael P.; Goldsmith, C. Franklin; Klippenstein, Stephen J.
2015-07-16
We have developed a multi-scale approach (Burke, M. P.; Klippenstein, S. J.; Harding, L. B. Proc. Combust. Inst. 2013, 34, 547–555.) to kinetic model formulation that directly incorporates elementary kinetic theories as a means to provide reliable, physics-based extrapolation to unexplored conditions. Here, we extend and generalize the multi-scale modeling strategy to treat systems of considerable complexity – involving multi-well reactions, potentially missing reactions, non-statistical product branching ratios, and non-Boltzmann (i.e. non-thermal) reactant distributions. The methodology is demonstrated here for a subsystem of low-temperature propane oxidation, as a representative system for low-temperature fuel oxidation. A multi-scale model is assembled and informed by a wide variety of targets that include ab initio calculations of molecular properties, rate constant measurements of isolated reactions, and complex systems measurements. Active model parameters are chosen to accommodate both “parametric” and “structural” uncertainties. Theoretical parameters (e.g. barrier heights) are included as active model parameters to account for parametric uncertainties in the theoretical treatment; experimental parameters (e.g. initial temperatures) are included to account for parametric uncertainties in the physical models of the experiments. RMG software is used to assess potential structural uncertainties due to missing reactions. Additionally, branching ratios among product channels are included as active model parameters to account for structural uncertainties related to difficulties in modeling sequences of multiple chemically activated steps. The approach is demonstrated here for interpreting time-resolved measurements of OH, HO2, n-propyl, i-propyl, propene, oxetane, and methyloxirane from photolysis-initiated low-temperature oxidation of propane at pressures from 4 to 60 Torr and temperatures from 300 to 700 K. In particular, the multi-scale informed model provides a consistent quantitative explanation of both ab initio calculations and time-resolved species measurements. The present results show that interpretations of OH measurements are significantly more complicated than previously thought – in addition to barrier heights for key transition states considered previously, OH profiles also depend on additional theoretical parameters for R + O2 reactions, secondary reactions, QOOH + O2 reactions, and treatment of non-Boltzmann reaction sequences. Extraction of physically rigorous information from those measurements may require more sophisticated treatment of all of those model aspects, as well as additional experimental data under more conditions, to discriminate among possible interpretations and ensure model reliability. Keywords: Optimization, Uncertainty quantification, Chemical mechanism, Low-Temperature Oxidation, Non-Boltzmann
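To make "theoretical parameters as active model parameters" concrete, the hypothetical sketch below perturbs a transition-state barrier height within an assumed uncertainty and propagates the effect to a modified-Arrhenius rate constant; all numbers are placeholders, not values from the study:

import numpy as np

R = 8.314e-3            # gas constant, kJ mol^-1 K^-1
A, n_exp = 1.0e12, 0.0  # placeholder pre-exponential factor (s^-1) and temperature exponent

def rate_constant(T, barrier_kJ):
    """Modified-Arrhenius rate constant with the barrier height treated as uncertain."""
    return A * T**n_exp * np.exp(-barrier_kJ / (R * T))

rng = np.random.default_rng(1)
T = 600.0                                  # K
barriers = rng.normal(100.0, 4.0, 10_000)  # barrier ~ 100 +/- 4 kJ/mol (assumed uncertainty)
k_samples = rate_constant(T, barriers)
print(f"k spans roughly a factor of {k_samples.max() / k_samples.min():.1f} at {T:.0f} K")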
NASA Astrophysics Data System (ADS)
Li, Yongbo; Li, Guoyan; Yang, Yuantao; Liang, Xihui; Xu, Minqiang
2018-05-01
The fault diagnosis of planetary gearboxes is crucial to reduce maintenance costs and economic losses. This paper proposes a novel fault diagnosis method based on an adaptive multi-scale morphological filter (AMMF) and modified hierarchical permutation entropy (MHPE) to identify the different health conditions of planetary gearboxes. In this method, AMMF is first adopted to remove fault-unrelated components and enhance the fault characteristics. Second, MHPE is utilized to extract fault features from the denoised vibration signals. Third, the Laplacian score (LS) approach is employed to refine the fault features. Finally, the obtained features are fed into a binary tree support vector machine (BT-SVM) to accomplish fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault categories of planetary gearboxes.
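The MHPE features at the core of this pipeline build on ordinary permutation entropy; a minimal sketch of that base quantity is given below (the modified hierarchical variant additionally decomposes the signal into hierarchical sub-bands before computing it):

import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Shannon entropy of ordinal patterns in a 1-D signal (base feature of (M)HPE)."""
    x = np.asarray(x, dtype=float)
    n_windows = len(x) - (order - 1) * delay
    # Embed the signal and map each window to its ordinal pattern (argsort ranks)
    patterns = np.array([np.argsort(x[i:i + order * delay:delay])
                         for i in range(n_windows)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    pe = -np.sum(p * np.log2(p))
    return pe / np.log2(factorial(order)) if normalize else pe

# Example: noise has near-maximal permutation entropy, a monotone ramp has zero
rng = np.random.default_rng(0)
print(permutation_entropy(rng.standard_normal(2000)))  # close to 1
print(permutation_entropy(np.arange(2000.0)))          # 0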
Advances in multi-scale modeling of solidification and casting processes
NASA Astrophysics Data System (ADS)
Liu, Baicheng; Xu, Qingyan; Jing, Tao; Shen, Houfa; Han, Zhiqiang
2011-04-01
The development of the aviation, energy and automobile industries requires advanced integrated product/process R&D systems that can optimize both product and process design. Integrated computational materials engineering (ICME) is a promising approach to fulfill this requirement and make product and process development efficient, economic, and environmentally friendly. Advances in multi-scale modeling of solidification and casting processes, including mathematical models as well as engineering applications, are presented in this paper. Dendrite morphology of magnesium and aluminum alloys during solidification, simulated using phase field and cellular automaton methods, mathematical models of segregation in large steel ingots, and microstructure models of unidirectionally solidified turbine blade castings are studied and discussed. In addition, some engineering case studies, including microstructure simulation of aluminum castings for the automobile industry, segregation of large steel ingots for the energy industry, and microstructure simulation of unidirectionally solidified turbine blade castings for the aviation industry, are discussed.
Two-probe STM experiments at the atomic level.
Kolmer, Marek; Olszowski, Piotr; Zuzak, Rafal; Godlewski, Szymon; Joachim, Christian; Szymonski, Marek
2017-11-08
Direct characterization of planar atomic- or molecular-scale devices and circuits on a supporting surface by multi-probe measurements requires unprecedented stability of single-atom contacts and manipulation of scanning probes over large, nanometer-scale areas with atomic precision. In this work, we describe the full methodology behind atomically defined two-probe scanning tunneling microscopy (STM) experiments performed on a model system: a dangling-bond dimer wire supported on a hydrogenated germanium (0 0 1) surface. We show that a 70 nm long atomic wire can be simultaneously approached by two independent STM scanners with the probe-to-probe distance reaching down to 30 nm. This allows direct wire characterization by two-probe I-V characteristics at distances below 50 nm. The technical results presented in this work open a new area for multi-probe research, which can now be performed with a precision so far accessible only in single-probe scanning probe microscopy (SPM) experiments.
Logarithmic profile mapping multi-scale Retinex for restoration of low illumination images
NASA Astrophysics Data System (ADS)
Shi, Haiyan; Kwok, Ngaiming; Wu, Hongkun; Li, Ruowei; Liu, Shilong; Lin, Ching-Feng; Wong, Chin Yeow
2018-04-01
Images are valuable information sources for many scientific and engineering applications. However, images captured in poor illumination conditions have a large portion of dark regions that can heavily degrade image quality. In order to improve the quality of such images, a restoration algorithm is developed here that transforms the low input brightness to higher values using a modified Multi-Scale Retinex approach. The algorithm is further improved by an entropy-based weighting of the input and the processed results to refine the necessary amplification in regions of low brightness. Moreover, fine details in the image are preserved by applying the Retinex principles to extract and then re-insert object edges to obtain an enhanced image. Results from experiments using low and normal illumination images show satisfactory performance with regard to the improvement of information content and the mitigation of viewing artifacts.
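For reference, a bare-bones Multi-Scale Retinex, before the logarithmic profile mapping, entropy-based weighting and edge re-insertion refinements described above, can be sketched as the average over scales of log-ratios between the image and Gaussian-blurred surrounds:

import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(img, sigmas=(15, 80, 250), eps=1.0):
    """Plain MSR: mean over scales of log(image) - log(Gaussian surround)."""
    img = img.astype(float) + eps                 # avoid log(0)
    out = np.zeros_like(img)
    for sigma in sigmas:
        surround = gaussian_filter(img, sigma) + eps
        out += np.log(img) - np.log(surround)     # single-scale Retinex at this sigma
    out /= len(sigmas)
    out = (out - out.min()) / (out.max() - out.min() + 1e-12)  # rescale for display
    return (255 * out).astype(np.uint8)

# Usage (grayscale array): enhanced = multi_scale_retinex(dark_image)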
Hansen, Matt; Stehman, Steve; Loveland, Tom; Vogelmann, Jim; Cochrane, Mark
2009-01-01
Quantifying rates of forest-cover change is important for improved carbon accounting and climate change modeling, management of forestry and agricultural resources, and biodiversity monitoring. A practical solution to examining trends in forest-cover change at global scale is to employ remotely sensed data. Satellite-based monitoring of forest cover can be implemented consistently across large regions at annual and inter-annual intervals. This work extends previous research on global forest-cover dynamics and land-cover change estimation to establish a robust, operational forest monitoring and assessment system. The approach integrates both MODIS and Landsat data to provide timely biome-scale forest change estimation. This is achieved by using annual MODIS change indicator maps to stratify biomes into low, medium and high change categories. Landsat image pairs can then be sampled within these strata and analyzed to estimate the area of forest cleared.
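The stratified design described above leads to standard stratified estimators; the sketch below, with made-up stratum sizes and sample proportions, shows how an estimate of the cleared-forest fraction and its standard error would be assembled from Landsat samples grouped by the MODIS-based low/medium/high change strata:

import numpy as np

# Hypothetical strata from a MODIS change-indicator map: total blocks per stratum (N_h),
# blocks sampled and interpreted with Landsat (n_h), and the sample mean proportion of
# forest cleared in each stratum (p_h). All numbers are illustrative.
N_h = np.array([5000, 1500, 500])   # low, medium, high change strata
n_h = np.array([50, 50, 50])
p_h = np.array([0.01, 0.08, 0.35])

W_h = N_h / N_h.sum()               # stratum weights
p_strat = np.sum(W_h * p_h)         # stratified estimate of the cleared proportion
# Variance of a stratified mean of proportions (with finite population correction)
var = np.sum(W_h**2 * (1 - n_h / N_h) * p_h * (1 - p_h) / (n_h - 1))
print(f"estimated cleared fraction = {p_strat:.4f} +/- {1.96 * np.sqrt(var):.4f}")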
Preparation of biomimetic nano-structured films with multi-scale roughness
NASA Astrophysics Data System (ADS)
Shelemin, A.; Nikitin, D.; Choukourov, A.; Kylián, O.; Kousal, J.; Khalakhan, I.; Melnichuk, I.; Slavínská, D.; Biederman, H.
2016-06-01
Biomimetic nano-structured films are valuable materials for various applications. In this study we introduce a fully vacuum-based approach for the fabrication of such films. The method combines deposition of nanoparticles (NPs) by a gas aggregation source with deposition of an overcoat thin film that fixes the nanoparticles on the surface. This leads to the formation of nanorough surfaces which, depending on the chemical nature of the overcoat, may range from superhydrophilic to superhydrophobic. In addition, it is shown that by proper adjustment of the amount of NPs it is possible to tailor the adhesive force on superhydrophobic surfaces. Finally, the ability to produce NPs over a wide size range (45-240 nm in this study) makes it possible to create surfaces not only with single-scale roughness, but also with bi-modal or even multi-modal character. Such surfaces were found to be superhydrophobic with negligible water contact angle hysteresis and hence truly slippery.
NASA Astrophysics Data System (ADS)
Zhu, Aichun; Wang, Tian; Snoussi, Hichem
2018-03-01
This paper addresses the problems of graphical-model-based human pose estimation in still images, including the diversity of appearances and confounding background clutter. We present a new architecture for estimating human pose using a Convolutional Neural Network (CNN). Firstly, a Relative Mixture Deformable Model (RMDM) is defined for each pair of connected parts to compute the relative spatial information in the graphical model. Secondly, a Local Multi-Resolution Convolutional Neural Network (LMR-CNN) is proposed to train and learn the multi-scale representation of each body part by combining different levels of part context. Thirdly, an LMR-CNN based hierarchical model is defined to explore the context information of limb parts. Finally, the experimental results demonstrate the effectiveness of the proposed deep learning approach for human pose estimation.
Ambient and smartphone sensor assisted ADL recognition in multi-inhabitant smart environments.
Roy, Nirmalya; Misra, Archan; Cook, Diane
2016-02-01
Activity recognition in smart environments is an evolving research problem due to the advancement and proliferation of sensing, monitoring and actuation technologies that make large-scale, real-world deployment possible. Activities in a smart home are interleaved, complex and volatile, and the number of inhabitants in the environment is also dynamic. A key challenge in designing robust smart home activity recognition approaches is to exploit the users' spatiotemporal behavior and location, take advantage of the multitude of devices capable of providing different dimensions of information, and fulfill the underpinning need to scale the system beyond a single user or a single home environment. In this paper, we propose a hybrid approach for recognizing complex activities of daily living (ADL) that lies in between the two extremes of intensive use of body-worn sensors and the use of ambient sensors. Our approach harnesses the power of simple ambient sensors (e.g., motion sensors) to provide additional 'hidden' context (e.g., room-level location) of an individual, and then combines this context with smartphone-based sensing of micro-level postural/locomotive states. The major novelty is our focus on multi-inhabitant environments, where we show how the use of spatiotemporal constraints along with a multitude of data sources can significantly improve the accuracy and reduce the computational overhead of traditional activity recognition approaches such as coupled hidden Markov models. Experimental results on two separate smart home datasets demonstrate that this approach improves the accuracy of complex ADL classification by over 30% compared to pure smartphone-based solutions.
Ambient and smartphone sensor assisted ADL recognition in multi-inhabitant smart environments
Misra, Archan; Cook, Diane
2016-01-01
Activity recognition in smart environments is an evolving research problem due to the advancement and proliferation of sensing, monitoring and actuation technologies that make large-scale, real-world deployment possible. Activities in a smart home are interleaved, complex and volatile, and the number of inhabitants in the environment is also dynamic. A key challenge in designing robust smart home activity recognition approaches is to exploit the users' spatiotemporal behavior and location, take advantage of the multitude of devices capable of providing different dimensions of information, and fulfill the underpinning need to scale the system beyond a single user or a single home environment. In this paper, we propose a hybrid approach for recognizing complex activities of daily living (ADL) that lies in between the two extremes of intensive use of body-worn sensors and the use of ambient sensors. Our approach harnesses the power of simple ambient sensors (e.g., motion sensors) to provide additional 'hidden' context (e.g., room-level location) of an individual, and then combines this context with smartphone-based sensing of micro-level postural/locomotive states. The major novelty is our focus on multi-inhabitant environments, where we show how the use of spatiotemporal constraints along with a multitude of data sources can significantly improve the accuracy and reduce the computational overhead of traditional activity recognition approaches such as coupled hidden Markov models. Experimental results on two separate smart home datasets demonstrate that this approach improves the accuracy of complex ADL classification by over 30% compared to pure smartphone-based solutions. PMID:27042240
Multi-Objective Approach for Energy-Aware Workflow Scheduling in Cloud Computing Environments
Kadima, Hubert; Granado, Bertrand
2013-01-01
We address the problem of scheduling workflow applications on heterogeneous computing systems such as cloud computing infrastructures. In general, cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and to present a hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate at different voltage supply levels by sacrificing clock frequency; the use of multiple voltage levels involves a compromise between schedule quality and energy consumption. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach. PMID:24319361
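The energy/performance trade-off exploited by DVFS can be illustrated with the common dynamic-power model P ≈ C·V²·f: lowering the voltage/frequency pair stretches a task's runtime but reduces the energy it consumes. The sketch below uses hypothetical voltage/frequency levels, not values from the paper:

# Illustrative DVFS trade-off using the common dynamic-power model P = C * V^2 * f.
# Voltage/frequency pairs are hypothetical; `cycles` is one task's work in clock cycles.
C = 1.0e-9                                           # effective switched capacitance (F), placeholder
levels = [(1.2, 2.0e9), (1.0, 1.5e9), (0.8, 1.0e9)]  # (volts, Hz)
cycles = 3.0e9

for V, f in levels:
    runtime = cycles / f             # seconds: lower f -> longer schedule
    energy = C * V**2 * f * runtime  # = C * V^2 * cycles: lower V -> less energy
    print(f"V={V:.1f} V, f={f/1e9:.1f} GHz: t={runtime:.2f} s, E={energy:.3f} J")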
NASA Astrophysics Data System (ADS)
Vallam, P.; Qin, X. S.
2017-07-01
Flooding risk is increasing in many parts of the world and may worsen under climate change conditions. The accuracy of predicting flooding risk relies on reasonable projection of meteorological data (especially rainfall) at the local scale. The current statistical downscaling approaches face the difficulty of projecting multi-site climate information for future conditions while conserving spatial information. This study presents a combined Long Ashton Research Station Weather Generator (LARS-WG) stochastic weather generator and multi-site rainfall simulator RainSim (CLWRS) approach to investigate flow regimes under future conditions in the Kootenay Watershed, Canada. To understand the uncertainty effect stemming from different scenarios, the climate output is fed into a hydrologic model. The results showed different variation trends of annual peak flows (in 2080-2099) based on different climate change scenarios and demonstrated that the hydrological impact would be driven by the interaction between snowmelt and peak flows. The proposed CLWRS approach is useful where there is a need for projection of potential climate change scenarios.
Multi-objective approach for energy-aware workflow scheduling in cloud computing environments.
Yassa, Sonia; Chelouah, Rachid; Kadima, Hubert; Granado, Bertrand
2013-01-01
We address the problem of scheduling workflow applications on heterogeneous computing systems such as cloud computing infrastructures. In general, cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and to present a hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate at different voltage supply levels by sacrificing clock frequency; the use of multiple voltage levels involves a compromise between schedule quality and energy consumption. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach.
This synthetic, multi-scale approach will generate a sequence of spatially explicit maps that will provide science guidance to support strategic decision-making regarding the spatially-distributed risk of, and possible adaptation to, the spread of invasive species at local to ...
Putting Ourselves in the Big Picture: A Sustainable Approach to Project Management for e-Learning
ERIC Educational Resources Information Center
Buchan, Janet
2010-01-01
In a case study of a large Australian university the metaphor of panarchy is used as a means of describing and understanding the complex interrelationships of multi-scale institutional projects and the influences of a variety of factors on the potential success of e-learning initiatives. The concept of para-analysis is introduced as a management…
Integrating multi-scale data to create a virtual physiological mouse heart.
Land, Sander; Niederer, Steven A; Louch, William E; Sejersted, Ole M; Smith, Nicolas P
2013-04-06
While the virtual physiological human (VPH) project has made great advances in human modelling, many of the tools and insights developed as part of this initiative are also applicable for facilitating mechanistic understanding of the physiology of a range of other species. This process, in turn, has the potential to provide human relevant insights via a different scientific path. Specifically, the increasing use of mice in experimental research, not yet fully complemented by a similar increase in computational modelling, is currently missing an important opportunity for using and interpreting this growing body of experimental data to improve our understanding of cardiac function. This overview describes our work to address this issue by creating a virtual physiological mouse model of the heart. We describe the similarities between human- and mouse-focused modelling, including the reuse of VPH tools, and the development of methods for investigating parameter sensitivity that are applicable across species. We show how previous results using this approach have already provided important biological insights, and how these can also be used to advance VPH heart models. Finally, we show an example application of this approach to test competing multi-scale hypotheses by investigating variations in length-dependent properties of cardiac muscle.
Integrating multi-scale data to create a virtual physiological mouse heart
Land, Sander; Niederer, Steven A.; Louch, William E.; Sejersted, Ole M.; Smith, Nicolas P.
2013-01-01
While the virtual physiological human (VPH) project has made great advances in human modelling, many of the tools and insights developed as part of this initiative are also applicable for facilitating mechanistic understanding of the physiology of a range of other species. This process, in turn, has the potential to provide human relevant insights via a different scientific path. Specifically, the increasing use of mice in experimental research, not yet fully complemented by a similar increase in computational modelling, is currently missing an important opportunity for using and interpreting this growing body of experimental data to improve our understanding of cardiac function. This overview describes our work to address this issue by creating a virtual physiological mouse model of the heart. We describe the similarities between human- and mouse-focused modelling, including the reuse of VPH tools, and the development of methods for investigating parameter sensitivity that are applicable across species. We show how previous results using this approach have already provided important biological insights, and how these can also be used to advance VPH heart models. Finally, we show an example application of this approach to test competing multi-scale hypotheses by investigating variations in length-dependent properties of cardiac muscle. PMID:24427525
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tome, Carlos N; Caro, J A; Lebensohn, R A
2010-01-01
Advancing the performance of Light Water Reactors, Advanced Nuclear Fuel Cycles, and Advanced Reactors, such as the Next Generation Nuclear Power Plants, requires enhancing our fundamental understanding of fuel and materials behavior under irradiation. The capability to accurately model nuclear fuel systems in order to develop predictive tools is critical. Not only are fabrication and performance models needed to understand specific aspects of the nuclear fuel, but fully coupled fuel simulation codes are also required to achieve licensing of specific nuclear fuel designs for operation. The backbone of these codes, models, and simulations is a fundamental understanding of, and predictive capability for simulating, the phase and microstructural behavior of the nuclear fuel system materials and matrices. In this paper we review the current status of advanced modeling and simulation of nuclear reactor cladding, with emphasis on what is available and what is to be developed at each scale of the project, how we propose to pass information from one scale to the next, and what experimental information is required for benchmarking and advancing the modeling at each scale level.
Where the Wild Things Are: Observational Constraints on Black Holes' Growth
NASA Astrophysics Data System (ADS)
Merloni, Andrea
2009-12-01
The physical and evolutionary relation between growing supermassive black holes (AGN) and their host galaxies is currently the subject of intense research activity. Nevertheless, a deep theoretical understanding of such a relation is hampered by the unique multi-scale nature of the combined AGN-galaxy system, which defies any purely numerical or semi-analytic approach. Various physical processes active on different physical scales have signatures in different parts of the electromagnetic spectrum; thus, observations at different wavelengths and theoretical ideas can all contribute towards a "large dynamic range" view of the AGN phenomenon, capable of conceptually "resolving" the many scales involved. As an example, I will focus in this review on two major recent observational results on the cosmic evolution of supermassive black holes, highlighting the novel contribution given to the field by the COSMOS survey. First of all, I will discuss the evidence for so-called "downsizing" in the AGN population as derived from large X-ray surveys. I will then present new constraints on the evolution of the black hole-galaxy scaling relation at 1
Near-Earth object hazardous impact: A Multi-Criteria Decision Making approach.
Sánchez-Lozano, J M; Fernández-Martínez, M
2016-11-16
The impact of a near-Earth object (NEO) may release large amounts of energy and cause serious damage. Several NEO hazard studies conducted over the past few years provide forecasts, impact probabilities and assessment ratings, such as the Torino and Palermo scales. These high-risk NEO assessments involve several criteria, including impact energy, mass, and absolute magnitude. The main objective of this paper is to provide the first Multi-Criteria Decision Making (MCDM) approach to classify hazardous NEOs. Our approach applies a combination of two methods from a widely utilized decision making theory. Specifically, the Analytic Hierarchy Process (AHP) methodology is employed to determine the criteria weights, which influence the decision making, and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is used to obtain a ranking of alternatives (potentially hazardous NEOs). In addition, NEO datasets provided by the NASA Near-Earth Object Program are utilized. This approach allows the classification of NEOs in descending order of their TOPSIS ratio, a single quantity that contains all of the relevant information for each object.
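For illustration, a compact TOPSIS ranking over a small, made-up set of NEOs and criteria (the weights stand in for AHP-derived weights and are not those of the study) can be written as:

import numpy as np

# Decision matrix: rows = candidate NEOs, columns = criteria
# (e.g. impact energy, mass, absolute magnitude). Values and weights are illustrative.
X = np.array([[5.2e3, 1.0e10, 21.5],
              [1.1e4, 6.0e10, 19.8],
              [8.0e2, 2.0e9,  24.1]])
w = np.array([0.5, 0.3, 0.2])            # e.g. weights from an AHP pairwise comparison
benefit = np.array([True, True, False])  # smaller absolute magnitude = larger object = more hazardous

V = w * X / np.linalg.norm(X, axis=0)            # vector-normalised, weighted matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_plus = np.linalg.norm(V - ideal, axis=1)       # distance to the ideal solution
d_minus = np.linalg.norm(V - anti, axis=1)       # distance to the anti-ideal solution
closeness = d_minus / (d_plus + d_minus)         # TOPSIS ratio used for ranking
print(np.argsort(closeness)[::-1])               # NEOs ordered by descending hazard score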
NASA Astrophysics Data System (ADS)
Abedi, S.; Mashhadian, M.; Noshadravan, A.
2015-12-01
Increasing the efficiency and sustainability of hydrocarbon recovery from organic-rich shales requires a fundamental understanding of the chemomechanical properties of organic-rich shales. This understanding is manifested in the form of physics-based predictive models capable of capturing the highly heterogeneous and multi-scale structure of organic-rich shale materials. In this work we present a framework of experimental characterization, micromechanical modeling, and uncertainty quantification that spans from the nanoscale to the macroscale. The application of experiments such as coupled grid nano-indentation and energy dispersive x-ray spectroscopy, together with micromechanical modeling attributing the role of organic maturity to the texture of the material, allows us to identify unique clay mechanical properties among different samples that are independent of the maturity of shale formations and total organic content. The results can then be used to inform a physically-based multiscale model for organic-rich shales consisting of three levels that span from the scale of the elementary building blocks (e.g. clay minerals in clay-dominated formations) of organic-rich shales to the scale of the macroscopic inorganic/organic hard/soft inclusion composite. Although this approach is powerful in capturing the effective properties of organic-rich shale in an average sense, it does not account for the uncertainty in compositional and mechanical model parameters. Thus, we take this model one step further by systematically incorporating the main sources of uncertainty in modeling the multiscale behavior of organic-rich shales. In particular, we account for the uncertainty in the main model parameters at different scales, such as porosity, elastic properties and mineralogy mass percent. To that end, we use the Maximum Entropy Principle and random matrix theory to construct probabilistic descriptions of model inputs based on available information. Monte Carlo simulation is then carried out to propagate the uncertainty and consequently construct probabilistic descriptions of properties at multiple length scales. The combination of experimental characterization and stochastic multi-scale modeling presented in this work improves the robustness of the prediction of essential subsurface parameters at the engineering scale.
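As a schematic of the uncertainty-propagation step only (not the authors' micromechanical model), the sketch below draws porosity and constituent moduli from assumed distributions and pushes them through a simple Voigt-Reuss-Hill average with a crude porosity correction, yielding a probabilistic description of a macroscale stiffness:

import numpy as np

rng = np.random.default_rng(42)
n = 50_000

# Assumed (illustrative) input uncertainties
phi = rng.uniform(0.05, 0.20, n)       # porosity
K_clay = rng.normal(25.0, 3.0, n)      # clay bulk modulus, GPa
K_org = rng.normal(5.0, 1.0, n)        # organic matter bulk modulus, GPa
f_org = rng.uniform(0.05, 0.15, n)     # organic volume fraction of the solid

# Solid-phase Voigt-Reuss-Hill average, then a crude dry-porosity correction
K_voigt = (1 - f_org) * K_clay + f_org * K_org
K_reuss = 1.0 / ((1 - f_org) / K_clay + f_org / K_org)
K_solid = 0.5 * (K_voigt + K_reuss)
K_dry = K_solid * (1 - phi) ** 3       # placeholder porosity scaling, not a rigorous model

print(f"K_dry: mean = {K_dry.mean():.1f} GPa, 95% interval = "
      f"[{np.percentile(K_dry, 2.5):.1f}, {np.percentile(K_dry, 97.5):.1f}] GPa")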
A critical review of nanotechnologies for composite aerospace structures
NASA Astrophysics Data System (ADS)
Kostopoulos, Vassilis; Masouras, Athanasios; Baltopoulos, Athanasios; Vavouliotis, Antonios; Sotiriadis, George; Pambaguian, Laurent
2017-03-01
Over the past decade, extensive efforts have been invested in understanding the nano-scale and revealing the capabilities offered by nanotechnology products to structural materials. Integration of nano-particles into fiber composites leads to multi-scale reinforced composites and has opened up a new, wide range of multi-functional materials in industry. In this direction, a variety of carbon-based nano-fillers has been proposed and employed, individually or in combination in hybrid forms, to approach the desired performance. Nevertheless, a major issue, faced more seriously of late due to industrial interest, is how to incorporate these nano-species into the final composite structure through existing manufacturing processes and infrastructure. This interest originates from the needs of several industrial applications that require the development of new multi-functional materials combining enhanced mechanical, electrical and thermal properties. In this work, an attempt is made to review the most representative processes and related performances reported in the literature and the experience obtained with nano-enabling technologies for fiber composite materials. This review focuses on the two main composite manufacturing technologies used by the aerospace industry: Prepreg/Autoclave and Resin Transfer technologies. It addresses several approaches for nano-enabling of composites along these two routes and reports the latest results on the performance of nano-enabled fiber reinforced composites extracted from the literature. Finally, this review identifies the gap between available nanotechnology integration routes and established industrial composite manufacturing techniques, and the challenges in raising the Technology Readiness Level to meet the demands of aerospace industry applications.
A Bayesian method for assessing multi-scale species-habitat relationships
Stuber, Erica F.; Gruber, Lutz F.; Fontaine, Joseph J.
2017-01-01
Context: Scientists face several theoretical and methodological challenges in appropriately describing fundamental wildlife-habitat relationships in models. The spatial scales of habitat relationships are often unknown, and are expected to follow a multi-scale hierarchy. Typical frequentist or information theoretic approaches often suffer under collinearity in multi-scale studies, fail to converge when models are complex or represent an intractable computational burden when candidate model sets are large. Objectives: Our objective was to implement an automated, Bayesian method for inference on the spatial scales of habitat variables that best predict animal abundance. Methods: We introduce Bayesian latent indicator scale selection (BLISS), a Bayesian method to select spatial scales of predictors using latent scale indicator variables that are estimated with reversible-jump Markov chain Monte Carlo sampling. BLISS does not suffer from collinearity, and substantially reduces the computation time of studies. We present a simulation study to validate our method and apply our method to a case study of land cover predictors for ring-necked pheasant (Phasianus colchicus) abundance in Nebraska, USA. Results: Our method returns accurate descriptions of the explanatory power of multiple spatial scales, and unbiased and precise parameter estimates under commonly encountered data limitations including spatial scale autocorrelation, effect size, and sample size. BLISS outperforms commonly used model selection methods including stepwise and AIC, and reduces runtime by 90%. Conclusions: Given the pervasiveness of scale-dependency in ecology, and the implications of mismatches between the scales of analyses and ecological processes, identifying the spatial scales over which species are integrating habitat information is an important step in understanding species-habitat relationships. BLISS is a widely applicable method for identifying important spatial scales, propagating scale uncertainty, and testing hypotheses of scaling relationships.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Xinping, E-mail: exping@126.com
Stochastic multiscale modeling has become a necessary approach to quantify uncertainty and characterize multiscale phenomena for many practical problems such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging owing to the existence of complex uncertainty and multiple physical scales in the models. To deal with this difficulty efficiently, we construct a reduced computational model. To this end, we propose a multi-element least square high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains, and a local least square HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least square methods. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-square HDMR, which improves both efficiency and approximation accuracy under certain conditions. To effectively treat heterogeneity properties and multiscale features in the models, we integrate multiscale finite element methods with multi-element least-square HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach, and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques. - Highlights: • Multi-element least square HDMR is proposed to treat stochastic models. • The random domain is adaptively decomposed into subdomains to obtain an adaptive multi-element HDMR. • Least-square reduced HDMR is proposed to enhance computational efficiency and approximation accuracy under certain conditions. • Integrating MsFEM and multi-element least square HDMR can significantly reduce computational complexity.
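To fix ideas, the sketch below fits a first-order least-squares HDMR surrogate f(x) ≈ f0 + Σi fi(xi) to a toy function on a single (sub)domain, expanding each component in a few Legendre polynomials and solving for the coefficients by least squares; the multi-element decomposition and the multiscale finite element coupling of the paper are omitted:

import numpy as np
from numpy.polynomial import legendre

def model(x):  # toy "full model" standing in for a stochastic PDE output
    return np.exp(0.3 * x[:, 0]) + np.sin(np.pi * x[:, 1]) + 0.5 * x[:, 2] ** 2

d, p, n = 3, 4, 400                 # stochastic dimension, polynomial order, samples
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (n, d))  # samples in one (sub)domain mapped to [-1, 1]^d
y = model(X)

# Design matrix: constant term plus Legendre polynomials P_1..P_p in each single variable
cols = [np.ones(n)]
for i in range(d):
    for k in range(1, p + 1):
        cols.append(legendre.legval(X[:, i], [0.0] * k + [1.0]))
A = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares HDMR coefficients
y_hat = A @ coef
print("relative L2 error of the first-order HDMR surrogate:",
      np.linalg.norm(y - y_hat) / np.linalg.norm(y))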