An approach for heterogeneous and loosely coupled geospatial data distributed computing
NASA Astrophysics Data System (ADS)
Chen, Bin; Huang, Fengru; Fang, Yu; Huang, Zhou; Lin, Hui
2010-07-01
Most GIS (Geographic Information System) applications involve heterogeneous and autonomous geospatial information resources whose availability is unpredictable and dynamic in a distributed computing environment. To harness these local resources jointly for larger, globally scoped geospatial information processing problems, this paper proposes, with the support of peer-to-peer computing technologies, a geospatial data distributed computing mechanism built on loosely coupled geospatial resource directories and a concept termed the Equivalent Distributed Program of global geospatial queries, addressing geospatial distributed computing under heterogeneous GIS environments. First, we present a geospatial query processing schema for distributed computing, together with a method for equivalently transforming a global geospatial query into distributed local queries at the SQL (Structured Query Language) level, which solves the coordination problem among heterogeneous resources. Second, peer-to-peer technologies are used to maintain a loosely coupled network of autonomous geospatial information resources, achieving decentralized and consistent synchronization among global geospatial resource directories and carrying out distributed transaction management of local queries. Finally, based on the developed prototype system, example applications of simple and complex geospatial data distributed queries illustrate the procedure of global geospatial information processing.
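A minimal sketch of the equivalence idea, using in-memory SQLite databases in place of real autonomous GIS resources: a global query over union-compatible peers is equivalent to the union of the identical query evaluated locally at each peer. The table name, peers, and data below are invented for illustration, not the paper's actual schema or API.

```python
# Sketch: a global geospatial query decomposed into equivalent local queries,
# with the partial results merged (a union-compatible aggregation).
# Table/peer names and data are hypothetical illustrations.
import sqlite3

def run_distributed_query(peers, local_sql):
    """Send the same SQL to every autonomous peer and union the results."""
    merged = []
    for conn in peers:
        merged.extend(conn.execute(local_sql).fetchall())
    return merged

# Two "peers", each an autonomous database holding part of the data.
peers = [sqlite3.connect(":memory:") for _ in range(2)]
for i, conn in enumerate(peers):
    conn.execute("CREATE TABLE roads (id INTEGER, length REAL)")
    conn.executemany("INSERT INTO roads VALUES (?, ?)",
                     [(i * 10 + k, 1.5 * k) for k in range(3)])
    conn.commit()

# The global query "SELECT ... WHERE length > 1.0" equals the union of the
# identical query evaluated independently at each peer.
rows = run_distributed_query(peers, "SELECT id, length FROM roads WHERE length > 1.0")
print(len(rows))  # → 4
```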
Arcade: A Web-Java Based Framework for Distributed Computing
NASA Technical Reports Server (NTRS)
Chen, Zhikai; Maly, Kurt; Mehrotra, Piyush; Zubair, Mohammad; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
Distributed heterogeneous environments are being increasingly used to execute a variety of large size simulations and computational problems. We are developing Arcade, a web-based environment to design, execute, monitor, and control distributed applications. These targeted applications consist of independent heterogeneous modules which can be executed on a distributed heterogeneous environment. In this paper we describe the overall design of the system and discuss the prototype implementation of the core functionalities required to support such a framework.
Heterogeneous Distributed Computing for Computational Aerosciences
NASA Technical Reports Server (NTRS)
Sunderam, Vaidy S.
1998-01-01
The research supported under this award focuses on heterogeneous distributed computing for high-performance applications, with particular emphasis on computational aerosciences. The overall goal of this project was to investigate issues in, and develop solutions to, the efficient execution of computational aeroscience codes in heterogeneous concurrent computing environments. In particular, we worked in the context of the PVM [1] system and, following detailed conversion efforts and performance benchmarking, devised novel techniques to increase the efficacy of heterogeneous networked environments for computational aerosciences. Our work has been based on the NAS Parallel Benchmark suite, and has recently expanded in scope to include the NAS I/O benchmarks as specified in the NHT-1 document. In this report we summarize our research accomplishments under the auspices of the grant.
Jungle Computing: Distributed Supercomputing Beyond Clusters, Grids, and Clouds
NASA Astrophysics Data System (ADS)
Seinstra, Frank J.; Maassen, Jason; van Nieuwpoort, Rob V.; Drost, Niels; van Kessel, Timo; van Werkhoven, Ben; Urbani, Jacopo; Jacobs, Ceriel; Kielmann, Thilo; Bal, Henri E.
In recent years, the application of high-performance and distributed computing in scientific practice has become increasingly widespread. Among the platforms most widely available to scientists are clusters, grids, and cloud systems. Such infrastructures are currently undergoing revolutionary change due to the integration of many-core technologies, which provide orders-of-magnitude speed improvements for selected compute kernels. With high-performance and distributed computing systems thus becoming more heterogeneous and hierarchical, programming complexity is vastly increased. Further complexities arise because the urgent desire for scalability, together with issues such as data distribution, software heterogeneity, and ad hoc hardware availability, commonly forces scientists into simultaneous use of multiple platforms (e.g., clusters, grids, and clouds used concurrently): a true computing jungle.
Automation of multi-agent control for complex dynamic systems in heterogeneous computational network
NASA Astrophysics Data System (ADS)
Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan
2017-01-01
The rapid progress of high-performance computing entails new challenges related to solving large scientific problems in various subject domains within a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in the field of parallel and distributed computing pay special attention to the scalability of applications for problem solving, yet effective management of a scalable application in a heterogeneous distributed computing environment remains a non-trivial issue; control systems that operate in networks are especially affected. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework for automating problem solving. The advantages of the proposed approach are demonstrated on the example of parametric synthesis of a static linear regulator for complex dynamic systems. The benefits of the scalable application for solving this problem include automated multi-agent control of the systems in parallel mode, at various degrees of detail.
Characterizing the heterogeneity of tumor tissues from spatially resolved molecular measures
Zavodszky, Maria I.
2017-01-01
Background Tumor heterogeneity can manifest itself by sub-populations of cells having distinct phenotypic profiles expressed as diverse molecular, morphological and spatial distributions. This inherent heterogeneity poses challenges in terms of diagnosis, prognosis and efficient treatment. Consequently, tools and techniques are being developed to properly characterize and quantify tumor heterogeneity. Multiplexed immunofluorescence (MxIF) is one such technology that offers molecular insight into both inter-individual and intratumor heterogeneity. It enables the quantification of both the concentration and spatial distribution of 60+ proteins across a tissue section. Upon bioimage processing, protein expression data can be generated for each cell from a tissue field of view. Results The Multi-Omics Heterogeneity Analysis (MOHA) tool was developed to compute tissue heterogeneity metrics from MxIF spatially resolved tissue imaging data. This technique computes the molecular state of each cell in a sample based on a pathway or gene set. Spatial states are then computed based on the spatial arrangements of the cells as distinguished by their respective molecular states. MOHA computes tissue heterogeneity metrics from the distributions of these molecular and spatially defined states. A colorectal cancer cohort of approximately 700 subjects with MxIF data is presented to demonstrate the MOHA methodology. Within this dataset, statistically significant correlations were found between the intratumor AKT pathway state diversity and cancer stage and histological tumor grade. Furthermore, intratumor spatial diversity metrics were found to correlate with cancer recurrence. Conclusions MOHA provides a simple and robust approach to characterize molecular and spatial heterogeneity of tissues. Research projects that generate spatially resolved tissue imaging data can take full advantage of this useful technique. 
The MOHA algorithm is implemented as a freely available R script (see supplementary information). PMID:29190747
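The state-diversity idea behind such heterogeneity metrics can be illustrated with a Shannon-entropy measure over discrete per-cell molecular states. This is an illustrative stand-in, not the exact metric MOHA computes, and the state labels are invented:

```python
# Sketch: Shannon-entropy diversity over discrete per-cell states, in the
# spirit of MOHA's intratumor heterogeneity metrics (illustrative only).
from collections import Counter
from math import log

def shannon_diversity(states):
    """Shannon entropy (in nats) of the distribution of cell states."""
    counts = Counter(states)
    n = len(states)
    return 0.0 - sum((c / n) * log(c / n) for c in counts.values())

# A homogeneous tissue sample scores 0; a mixed one scores higher.
homogeneous = ["AKT-high"] * 8
mixed = ["AKT-high"] * 4 + ["AKT-low"] * 4
print(shannon_diversity(homogeneous))  # → 0.0
print(shannon_diversity(mixed))        # log(2) ≈ 0.693
```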
NASA Astrophysics Data System (ADS)
Manfredi, Sabato
2016-06-01
Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology and environment monitoring to sensor networks and power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics and interactions, and they demand increasingly computation-intensive methods for analysis and control design as the network size and the complexity of node dynamics and interactions grow. It is therefore a challenging problem to find scalable computational methods for the distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (briefly, MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved with the MATLAB toolbox. Stabilisability of each node dynamic is a sufficient assumption for designing a globally stabilising distributed control. The proposed approach improves some of the existing LMI-based results on MASs, both by overcoming their computational limits and by extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in computational requirement in the case of weakly heterogeneous MASs, a common scenario in real applications where the network nodes and links are affected by parameter uncertainties.
One of the main advantages of the proposed approach is that it allows moving from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving LMIs may be shared among processors located at the network nodes, improving the scalability of the approach with respect to network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity compared with existing approaches.
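The per-node stabilisability assumption can be checked numerically with a Lyapunov-equation certificate, the scalar building block beneath such LMI conditions. A sketch with an arbitrary stable node matrix, not one taken from the paper:

```python
# Sketch: certify asymptotic stability of a node dynamic by solving the
# Lyapunov equation A^T P + P A = -Q and checking P > 0.
# The matrices here are an arbitrary illustration, not from the paper.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # a stable node dynamic (eigenvalues -1, -2)
Q = np.eye(2)

# solve_continuous_lyapunov(A.T, -Q) solves A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)
eigs = np.linalg.eigvalsh(P)
print(bool(np.all(eigs > 0)))  # True: a valid Lyapunov certificate exists
```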
Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki
2014-12-01
As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baumann, K; Weber, U; Simeonov, Y
2015-06-15
Purpose: The aim of this study was to analyze the modulating, broadening effect on the Bragg peak due to heterogeneous geometries, such as multi-wire chambers, in the beam path of a particle therapy beam line. The effect was described by a mathematical model implemented in the Monte Carlo code FLUKA via user routines, in order to reduce the computation time of the simulations. Methods: The depth-dose curve of 80 MeV/u C-12 ions in a water phantom was calculated using the Monte Carlo code FLUKA (reference curve). The modulating effect on this dose distribution behind eleven mesh-like foils (periodicity ~80 microns), as occur in a typical set of multi-wire and dose chambers, was described mathematically by optimizing a normal distribution such that the reference curve convolved with this distribution equals the modulated dose curve. This distribution describes a displacement in water and was transformed into a probability distribution of the thickness of the eleven foils using the water-equivalent thickness of the foil material. From this distribution, the thickness distribution of a single foil was determined inversely. In FLUKA, the heterogeneous foils were replaced by homogeneous foils, and a user routine was programmed that varies the thickness of the homogeneous foils for each simulated particle according to this distribution. Results: Using the mathematical model and user routine in FLUKA, the broadening effect could be reproduced exactly when replacing the heterogeneous foils by homogeneous ones, and the computation time was reduced by 90 percent. Conclusion: In this study, the broadening effect on the Bragg peak due to heterogeneous structures was analyzed, described by a mathematical model, and implemented in FLUKA via user routines. Applying these routines reduced the computing time by 90 percent. The developed tool can be used for any heterogeneous structure with dimensions from microns to millimeters, in principle even for organic materials such as lung tissue.
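The core of the model, convolving a reference depth-dose curve with a normal distribution of water-equivalent displacements to reproduce the broadened curve, can be sketched as follows. The toy Gaussian "Bragg peak" and the value of sigma are illustrative, not FLUKA output:

```python
# Sketch of the convolution model: reference depth-dose curve convolved with
# a normal distribution of water-equivalent displacements gives the broadened
# (modulated) curve. The toy peak is NOT a real depth-dose curve.
import numpy as np

dz = 0.01                                   # depth step, cm
depth = np.arange(0.0, 6.0, dz)
reference = np.exp(-0.5 * ((depth - 4.0) / 0.05) ** 2)   # toy "Bragg peak"

sigma = 0.08                                # cm, displacement spread (illustrative)
kernel_z = np.arange(-0.5, 0.5 + dz, dz)
kernel = np.exp(-0.5 * (kernel_z / sigma) ** 2)
kernel /= kernel.sum()                      # normalize to unit area

modulated = np.convolve(reference, kernel, mode="same")

def fwhm(curve):
    """Full width at half maximum, in cm, on the discrete depth grid."""
    above = np.flatnonzero(curve >= curve.max() / 2)
    return (above[-1] - above[0]) * dz

print(fwhm(reference) < fwhm(modulated))    # broadening: True
```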
Graph Partitioning for Parallel Applications in Heterogeneous Grid Environments
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Kumar, Shailendra; Das, Sajal K.; Biegel, Bryan (Technical Monitor)
2002-01-01
The problem of partitioning irregular graphs and meshes for parallel computations on homogeneous systems has been extensively studied. However, these partitioning schemes fail when the target system architecture exhibits heterogeneity in resource characteristics. With the emergence of technologies such as the Grid, it is imperative to study the partitioning problem taking into consideration the differing capabilities of such distributed heterogeneous systems. In our model, the heterogeneous system consists of processors with varying processing power and an underlying non-uniform communication network. We present in this paper a novel multilevel partitioning scheme for irregular graphs and meshes that takes into account issues pertinent to Grid computing environments. Our partitioning algorithm, called MiniMax, generates and maps partitions onto a heterogeneous system with the objective of minimizing the maximum execution time of the parallel distributed application. For an experimental performance study, we have considered both a realistic mesh problem from NASA and synthetic workloads. Simulation results demonstrate that MiniMax generates high-quality partitions for various classes of applications targeted for parallel execution in a distributed heterogeneous environment.
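The minimax objective can be illustrated with a simple longest-processing-time heuristic on processors of unequal speed. This is a sketch of the objective only, not the multilevel MiniMax algorithm itself, and the workloads and speeds are invented:

```python
# Sketch: map partition workloads onto processors of unequal speed so as to
# minimize the maximum (bottleneck) execution time — the minimax objective.
# A greedy longest-task-first heuristic, not the paper's multilevel scheme.

def minimax_map(partition_work, proc_speeds):
    """Assign each partition to the processor that would finish it earliest."""
    finish = [0.0] * len(proc_speeds)
    mapping = {}
    # Place the heaviest partitions first (classic LPT heuristic).
    for pid, work in sorted(enumerate(partition_work), key=lambda x: -x[1]):
        p = min(range(len(proc_speeds)),
                key=lambda i: finish[i] + work / proc_speeds[i])
        finish[p] += work / proc_speeds[p]
        mapping[pid] = p
    return mapping, max(finish)

# Three processors, one twice as fast as the others.
mapping, makespan = minimax_map([8.0, 6.0, 4.0, 4.0, 2.0], [2.0, 1.0, 1.0])
print(makespan)  # → 6.0
```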
Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi
NASA Astrophysics Data System (ADS)
Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad
2015-05-01
Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance, and energy efficiency, and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).
Scattering Properties of Heterogeneous Mineral Particles with Absorbing Inclusions
NASA Technical Reports Server (NTRS)
Dlugach, Janna M.; Mishchenko, Michael I.
2015-01-01
We analyze the results of numerically exact computer modeling of the scattering and absorption properties of randomly oriented polydisperse heterogeneous particles obtained by placing microscopic absorbing grains randomly on the surfaces of much larger spherical mineral hosts or by embedding them randomly inside the hosts. These computations are paralleled by those for heterogeneous particles obtained by fully encapsulating fractal-like absorbing clusters in the mineral hosts. All computations are performed using the superposition T-matrix method. In the case of randomly distributed inclusions, the results are compared with the outcome of Lorenz-Mie computations for an external mixture of the mineral hosts and absorbing grains. We conclude that internal aggregation can strongly affect both the integral radiometric and the differential scattering characteristics of the heterogeneous particle mixtures.
Measuring the effects of heterogeneity on distributed systems
NASA Technical Reports Server (NTRS)
El-Toweissy, Mohamed; Zeineldine, Osman; Mukkamala, Ravi
1991-01-01
Distributed computer systems in daily use are becoming more and more heterogeneous, yet much of the design and analysis of such systems assumes homogeneity, an assumption driven mainly by the resulting simplicity of modeling and analysis. A simulation study is presented that investigated the effects of heterogeneity on scheduling algorithms for hard real-time distributed systems. In contrast to previous results indicating that random scheduling may be as good as a more complex scheduler, the scheduler studied here is shown to be consistently better than a random scheduler. This conclusion is more pronounced at high workloads as well as at high levels of heterogeneity.
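The study's comparison can be mimicked in a toy simulation: on a strongly heterogeneous set of processors, a load-aware scheduler beats random placement. The workloads, speeds, and the greedy rule below are illustrative assumptions, not the paper's model:

```python
# Sketch: random placement vs. a load-aware scheduler on heterogeneous
# processors. Workload values and speeds are invented.
import random

def simulate(tasks, speeds, pick):
    """Run the task list through a placement rule; return the makespan."""
    finish = [0.0] * len(speeds)
    for t in tasks:
        i = pick(finish, speeds, t)
        finish[i] += t / speeds[i]
    return max(finish)   # time when the last processor finishes

random.seed(1)
tasks = [random.uniform(1, 10) for _ in range(200)]
speeds = [4.0, 2.0, 1.0, 1.0]            # a strongly heterogeneous system

rand_pick = lambda finish, speeds, t: random.randrange(len(speeds))
greedy_pick = lambda finish, speeds, t: min(
    range(len(speeds)), key=lambda i: finish[i] + t / speeds[i])

print(simulate(tasks, speeds, greedy_pick) <= simulate(tasks, speeds, rand_pick))
```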
A uniform approach for programming distributed heterogeneous computing systems
Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas
2014-01-01
Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater’s performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations. PMID:25844015
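The dependency bookkeeping such a runtime performs, deriving a DAG of commands from event dependencies and replaying them in a legal order, can be sketched with Python's `graphlib`. Command and event names are invented for illustration:

```python
# Sketch: build a DAG of commands from event dependencies and produce a
# legal execution order — the bookkeeping a libWater-style runtime performs.
from graphlib import TopologicalSorter

# Each command lists the commands (via their completion events) it waits on.
deps = {
    "write_A":  [],
    "write_B":  [],
    "kernel_1": ["write_A", "write_B"],   # needs both input buffers
    "read_C":   ["kernel_1"],             # reads the kernel's output
}

order = list(TopologicalSorter(deps).static_order())
print(order.index("kernel_1") > order.index("write_A"))  # dependencies respected
```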
ERIC Educational Resources Information Center
da Silveira, Pedro Rodrigo Castro
2014-01-01
This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures--the Virtual Laboratory for Earth and Planetary Materials--VLab. VLab was developed to leverage the aggregated computational power of grid systems to solve…
Improving the Aircraft Design Process Using Web-Based Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)
2000-01-01
Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.
System for Performing Single Query Searches of Heterogeneous and Dispersed Databases
NASA Technical Reports Server (NTRS)
Maluf, David A. (Inventor); Okimura, Takeshi (Inventor); Gurram, Mohana M. (Inventor); Tran, Vu Hoang (Inventor); Knight, Christopher D. (Inventor); Trinh, Anh Ngoc (Inventor)
2017-01-01
The present invention is a distributed computer system of heterogeneous databases joined in an information grid and configured with Application Programming Interface hardware that includes a search engine component for performing user-structured queries on multiple heterogeneous databases in real time. This invention reduces the overhead associated with the impedance mismatch that commonly occurs in heterogeneous database queries.
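The single-query idea can be sketched as a fan-out with per-source adapters that translate one uniform query for each heterogeneous schema. The schemas, field names, and data below are invented and do not reflect the patented interfaces:

```python
# Sketch: one user query dispatched to databases with differing schemas,
# with per-source SQL translations mapping rows into a uniform shape.
import sqlite3

db1 = sqlite3.connect(":memory:")
db1.execute("CREATE TABLE missions (name TEXT, launched INTEGER)")
db1.execute("INSERT INTO missions VALUES ('Voyager 1', 1977)")

db2 = sqlite3.connect(":memory:")
db2.execute("CREATE TABLE flights (title TEXT, yr INTEGER)")
db2.execute("INSERT INTO flights VALUES ('Cassini', 1997)")

# Each source registers its own translation of the uniform query.
sources = [
    (db1, "SELECT name, launched FROM missions WHERE launched >= ?"),
    (db2, "SELECT title, yr FROM flights WHERE yr >= ?"),
]

def single_query(year):
    """One structured query, answered by every heterogeneous source."""
    results = []
    for conn, sql in sources:
        results.extend({"name": n, "year": y} for n, y in conn.execute(sql, (year,)))
    return results

print(len(single_query(1970)))  # both sources answer: 2
```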
Thermal Coefficient of Linear Expansion Modified by Dendritic Segregation in Nickel-Iron Alloys
NASA Astrophysics Data System (ADS)
Ogorodnikova, O. M.; Maksimova, E. V.
2018-05-01
The paper presents investigations of the thermal properties of Fe-Ni and Fe-Ni-Co casting alloys as affected by the heterogeneous distribution of their chemical elements. It is shown that nickel dendritic segregation has a negative effect on the properties of the studied invar alloys. A mathematical model is proposed to explore the influence of nickel dendritic segregation on the thermal coefficient of linear expansion (TCLE) of the alloy. A computer simulation of the TCLE of Fe-Ni-Co superinvars is performed, accounting for the heterogeneous distribution of their chemical elements over the whole volume. The ProLigSol computer software application is developed for processing the data array and the results of the computer simulation.
Heterogeneous Systems for Information-Variable Environments (HIVE)
2017-05-01
ARL-TR-8027 ● May 2017. US Army Research Laboratory, Computational and Information Sciences Directorate, ARL. By Amar… Approved for public release; distribution is unlimited.
Efficient Use of Distributed Systems for Scientific Applications
NASA Technical Reports Server (NTRS)
Taylor, Valerie; Chen, Jian; Canfield, Thomas; Richard, Jacques
2000-01-01
Distributed computing has been regarded as the future of high performance computing. Nationwide high-speed networks such as vBNS are becoming widely available to interconnect high-speed computers, virtual environments, scientific instruments, and large data sets. One of the major issues to be addressed with distributed systems is the development of computational tools that facilitate the efficient execution of parallel applications on such systems. These tools must exploit the heterogeneous resources (networks and compute nodes) in distributed systems. This paper presents a tool, called PART, which addresses this issue for mesh partitioning. PART takes advantage of the following heterogeneous system features: (1) processor speed; (2) number of processors; (3) local network performance; and (4) wide area network performance. Further, the different finite element applications under consideration may have different computational complexities, different communication patterns, and different element types, which also must be taken into consideration when partitioning. PART uses parallel simulated annealing to partition the domain, taking into consideration network and processor heterogeneity. The results of using PART for an explicit finite element application executing on two IBM SPs (located at Argonne National Laboratory and the San Diego Supercomputer Center) indicate an increase in efficiency by up to 36% as compared to METIS, a widely used mesh partitioning tool. The input to METIS was modified to take into consideration heterogeneous processor performance; METIS does not take into consideration heterogeneous networks. The execution times for these applications were reduced by up to 30% as compared to METIS. These results are given in Figure 1 for four irregular meshes, ranging in size from 11,451 elements for the Barth4 mesh to 30,269 elements for the Barth5 mesh.
Future work with PART entails applying the tool to an integrated application requiring distributed systems. In particular, this application, described in this document, entails an integration of finite element and fluid dynamics simulations to address the cooling of the turbine blades in a gas turbine engine design. It is not uncommon to encounter high-temperature, film-cooled turbine airfoils with millions of degrees of freedom; this results from the complexity of the various components of the airfoils, which requires fine-grained meshing for accuracy. Additional information is contained in the original.
Using PVM to host CLIPS in distributed environments
NASA Technical Reports Server (NTRS)
Myers, Leonard; Pohl, Kym
1994-01-01
It is relatively easy to enhance CLIPS (C Language Integrated Production System) to support multiple expert systems running in a distributed environment with heterogeneous machines. The task is minimized by using the PVM (Parallel Virtual Machine) code from Oak Ridge Labs to provide the distributed utility. PVM is a library of C and FORTRAN subprograms that supports distributed computing on many different UNIX platforms. A PVM daemon is easily installed on each CPU that enters the virtual machine environment. Any user with rsh or rexec access to a machine can use the one PVM daemon to obtain a generous set of distributed facilities. The ready availability of both CLIPS and PVM makes the combination of software particularly attractive for budget-conscious experimentation in heterogeneous distributed computing with multiple CLIPS executables. This paper presents a design that is sufficient to provide essential message-passing functions in CLIPS and enable the full range of PVM facilities.
Regional gas transport in the heterogeneous lung during oscillatory ventilation
Herrmann, Jacob; Tawhai, Merryn H.
2016-01-01
Regional ventilation in the injured lung is heterogeneous and frequency dependent, making it difficult to predict how an oscillatory flow waveform at a specified frequency will be distributed throughout the periphery. To predict the impact of mechanical heterogeneity on regional ventilation distribution and gas transport, we developed a computational model of distributed gas flow and CO2 elimination during oscillatory ventilation from 0.1 to 30 Hz. The model consists of a three-dimensional airway network of a canine lung, with heterogeneous parenchymal tissues to mimic effects of gravity and injury. Model CO2 elimination during single frequency oscillation was validated against previously published experimental data (Venegas JG, Hales CA, Strieder DJ, J Appl Physiol 60: 1025–1030, 1986). Simulations of gas transport demonstrated a critical transition in flow distribution at the resonant frequency, where the reactive components of mechanical impedance due to airway inertia and parenchymal elastance were equal. For frequencies above resonance, the distribution of ventilation became spatially clustered and frequency dependent. These results highlight the importance of oscillatory frequency in managing the regional distribution of ventilation and gas exchange in the heterogeneous lung. PMID:27763872
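The stated resonance condition, inertial reactance equal to elastic reactance, gives a closed-form resonant frequency for a single pathway with impedance Z(ω) = R + j(ωI − E/ω). The parameter values below are illustrative, not those of the canine model:

```python
# Sketch: resonant frequency at which inertial and elastic reactances cancel
# for a single pathway, Z(w) = R + j(wI - E/w). Parameter values are illustrative.
from math import pi, sqrt

R = 1.0     # resistance, cmH2O·s/L
I = 0.01    # inertance,  cmH2O·s²/L
E = 20.0    # elastance,  cmH2O/L

f_res = sqrt(E / I) / (2 * pi)      # reactance wI - E/w crosses zero here

def reactance(f):
    w = 2 * pi * f
    return w * I - E / w

print(round(f_res, 2))              # ≈ 7.12 Hz for these values
print(abs(reactance(f_res)) < 1e-9) # purely resistive at resonance: True
```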
Page, Andrew J.; Keane, Thomas M.; Naughton, Thomas J.
2010-01-01
We present a multi-heuristic evolutionary task allocation algorithm to dynamically map tasks to processors in a heterogeneous distributed system. It utilizes a genetic algorithm, combined with eight common heuristics, in an effort to minimize the total execution time. It operates on batches of unmapped tasks and can preemptively remap tasks to processors. The algorithm has been implemented on a Java distributed system and evaluated with a set of six problems from the areas of bioinformatics, biomedical engineering, computer science and cryptography. Experiments using up to 150 heterogeneous processors show that the algorithm achieves better efficiency than other state-of-the-art heuristic algorithms. PMID:20862190
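The abstract does not list its eight heuristics, but a hedged sketch of one classic mapping heuristic such an allocator might seed its genetic algorithm with is "min-min": repeatedly assign the task that can complete earliest to the processor that achieves that completion. Task costs and processor speeds below are invented:

```python
# Sketch of the min-min heuristic for mapping tasks to heterogeneous
# processors (one plausible seed for a GA; not the paper's exact algorithm).
def min_min(task_costs, speeds):
    """task_costs: base cost per task; speeds: relative speed per processor.
    Returns (assignment, ready) where assignment[t] = processor index."""
    ready = [0.0] * len(speeds)        # time at which each processor is free
    assignment = {}
    remaining = set(range(len(task_costs)))
    while remaining:
        # Earliest possible completion time over all (task, processor) pairs.
        finish, t, p = min(
            (ready[p] + task_costs[t] / speeds[p], t, p)
            for t in remaining for p in range(len(speeds))
        )
        assignment[t] = p
        ready[p] = finish
        remaining.remove(t)
    return assignment, ready

assignment, ready = min_min([4.0, 2.0, 6.0, 2.0], [1.0, 2.0])
print(max(ready))   # makespan of the greedy schedule
```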
Campus-Wide Computing: Early Results Using Legion at the University of Virginia
2006-01-01
A Debugger for Computational Grid Applications
NASA Technical Reports Server (NTRS)
Hood, Robert; Jost, Gabriele
2000-01-01
The p2d2 project at NAS has built a debugger for applications running on heterogeneous computational grids. It employs a client-server architecture to simplify the implementation. Its user interface has been designed to provide process control and state examination functions on a computation containing a large number of processes. It can find processes participating in distributed computations even when those processes were not created under debugger control. These process identification techniques work on conventional distributed executions as well as on a computational grid.
NASA Technical Reports Server (NTRS)
Townsend, James C.; Weston, Robert P.; Eidson, Thomas M.
1993-01-01
The Framework for Interdisciplinary Design Optimization (FIDO) is a general programming environment for automating the distribution of complex computing tasks over a networked system of heterogeneous computers. For example, instead of manually passing a complex design problem between its diverse specialty disciplines, the FIDO system provides for automatic interactions between the discipline tasks and facilitates their communications. The FIDO system networks all the computers involved into a distributed heterogeneous computing system, so they have access to centralized data and can work on their parts of the total computation simultaneously in parallel whenever possible. Thus, each computational task can be done by the most appropriate computer. Results can be viewed as they are produced and variables changed manually for steering the process. The software is modular in order to ease migration to new problems: different codes can be substituted for each of the current code modules with little or no effect on the others. The potential for commercial use of FIDO rests in the capability it provides for automatically coordinating diverse computations on a networked system of workstations and computers. For example, FIDO could provide the coordination required for the design of vehicles or electronics or for modeling complex systems.
NASA Astrophysics Data System (ADS)
Liu, Peng; Ju, Yang; Gao, Feng; Ranjith, Pathegama G.; Zhang, Qianbing
2018-03-01
Understanding and characterization of the three-dimensional (3-D) propagation and distribution of hydrofracturing cracks in heterogeneous rock are key for enhancing the stimulation of low-permeability petroleum reservoirs. In this study, we investigated the propagation and distribution characteristics of hydrofracturing cracks by conducting true triaxial hydrofracturing tests and computed tomography on artificial heterogeneous rock specimens. Silica sand, Portland cement, and aedelforsite were mixed to create artificial heterogeneous rock specimens using data on mineral composition, coarse gravel distribution, and mechanical properties measured from natural heterogeneous glutenite cores. To probe the effects of material heterogeneity on hydrofracturing cracks, artificial homogeneous specimens were created using the same matrix composition as the heterogeneous rock specimens and then fractured for comparison. The effects of the horizontal geostress ratio on the 3-D growth and distribution of cracks during hydrofracturing were examined. A fractal-based method was proposed to characterize the complexity of fractures and the efficiency of hydrofracturing stimulation of heterogeneous media. The material heterogeneity and horizontal geostress ratio were found to significantly influence the 3-D morphology, growth, and distribution of hydrofracturing cracks. A horizontal geostress ratio of 1.7 appears to be the upper limit for the occurrence of multiple cracks, and higher ratios cause a single crack perpendicular to the minimum horizontal geostress component. The fracturing efficiency is associated with not only the fractured volume but also the complexity of the crack network.
Dynamic Load-Balancing for Distributed Heterogeneous Computing of Parallel CFD Problems
NASA Technical Reports Server (NTRS)
Ecer, A.; Chien, Y. P.; Boenisch, T.; Akay, H. U.
2000-01-01
The developed methodology is aimed at improving the efficiency of executing block-structured algorithms on parallel, distributed, heterogeneous computers. The basic approach of these algorithms is to divide the flow domain into many sub-domains called blocks, and solve the governing equations over these blocks. The dynamic load-balancing problem is defined as the efficient distribution of the blocks among the available processors over a period of several hours of computations. In environments with computers of different architecture, operating systems, CPU speed, memory size, load, and network speed, balancing the loads and managing the communication between processors becomes crucial. Load-balancing software tools for mutually dependent parallel processes have been created to efficiently utilize an advanced computation environment and algorithms. These tools are dynamic in nature because of the changes in the computer environment during execution time. More recently, these tools were extended to a second operating system: Windows NT. In this paper, the problems associated with this application will be discussed. Also, the developed algorithms were combined with the load-sharing capability of LSF to efficiently utilize workstation clusters for parallel computing. Finally, results will be presented on running a NASA-based code, ADPAC, to demonstrate the developed tools for dynamic load balancing.
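The core balancing step described above can be sketched as a greedy placement: assign the largest blocks first, always to the processor whose projected finish time is lowest given its speed. This is a generic illustration with invented block costs and CPU speeds, not the authors' tool:

```python
# Minimal sketch of block-to-processor load balancing for heterogeneous CPUs:
# classic longest-processing-time (LPT) placement onto the least-loaded host.
import heapq

def balance_blocks(block_costs, cpu_speeds):
    """Assign each solver block to the processor with the lowest projected load."""
    heap = [(0.0, p) for p in range(len(cpu_speeds))]   # (projected_time, cpu)
    heapq.heapify(heap)
    placement = [None] * len(block_costs)
    # Place big blocks first so small ones can fill the gaps.
    for b in sorted(range(len(block_costs)), key=lambda b: -block_costs[b]):
        t, p = heapq.heappop(heap)
        placement[b] = p
        heapq.heappush(heap, (t + block_costs[b] / cpu_speeds[p], p))
    return placement, max(t for t, _ in heap)

placement, makespan = balance_blocks([8, 6, 4, 4, 2], [2.0, 1.0])
print(makespan)
```

A dynamic balancer would rerun this placement periodically as measured CPU loads and network speeds change.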
Dome: Distributed Object Migration Environment
1994-05-01
AD-A281 134. Dome: Distributed Object Migration Environment. Adam Beguelin, Erik Seligman, Michael Starkey. May 1994. CMU-CS-94-153, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213. Abstract: ... Linda [4], Isis [2], and Express [6] allow a programmer to treat a heterogeneous network of computers as a parallel machine. These tools allow the ...
Theoretical foundation for measuring the groundwater age distribution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, William Payton; Arnold, Bill Walter
2014-01-01
In this study, we use PFLOTRAN, a highly scalable, parallel flow and reactive transport code, to simulate the concentrations of 3H, 3He, CFC-11, CFC-12, CFC-113, SF6, 39Ar, 81Kr, and 4He, and the mean groundwater age, in heterogeneous fields on grids of more than 10 million nodes. We utilize this computational platform to simulate the concentration of multiple tracers in high-resolution, heterogeneous 2-D and 3-D domains and calculate tracer-derived ages. Tracer-derived ages show systematic biases toward younger ages when the groundwater age distribution contains water older than the maximum tracer age. The deviation of the tracer-derived age distribution from the true groundwater age distribution increases with increasing heterogeneity of the system. However, the effect of heterogeneity is diminished as the mean travel time approaches the tracer age limit. Age distributions in 3-D domains differ significantly from those in 2-D domains: 3-D simulations show decreased mean age and less variance in the age distribution for identical heterogeneity statistics. High-performance computing allows for investigation of tracer and groundwater age systematics in high-resolution domains, providing a platform for understanding and utilizing environmental tracer and groundwater age information in heterogeneous 3-D systems. Groundwater environmental tracers can provide important constraints for the calibration of groundwater flow models. Direct simulation of environmental tracer concentrations in models has the additional advantage of avoiding assumptions associated with using calculated groundwater age values. This study quantifies the model uncertainty reduction resulting from the addition of environmental tracer concentration data. The analysis uses a synthetic heterogeneous aquifer and the calibration of a flow and transport model using the pilot point method.
Results indicate a significant reduction in the uncertainty in permeability with the addition of environmental tracer data, relative to the use of hydraulic measurements alone. Anthropogenic tracers and their decay products, such as CFC-11, 3H, and 3He, provide significant constraints on input permeability values in the model. Tracer data for 39Ar provide even more complete information on the heterogeneity of permeability and the variability of the flow system than the anthropogenic tracers, leading to greater parameter uncertainty reduction.
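The young bias in tracer-derived ages can be shown with a toy calculation (not the PFLOTRAN study itself): for a radioactive tracer, a mixed sample's concentration is the average of exp(-lam*age) over the age distribution, so the apparent age -ln(mean)/lam always undershoots the true mean age when old water is present. All numbers below are invented:

```python
# Toy demonstration of the young bias in tracer-derived ages for a mixed
# sample containing water older than the tracer's effective range.
import math
import random

random.seed(1)
lam = math.log(2) / 269.0          # decay constant for a 269-year half-life (39Ar)

# Bimodal age distribution: young water mixed with very old water.
ages = [random.uniform(10, 50) for _ in range(5000)] + \
       [random.uniform(2000, 4000) for _ in range(5000)]

mean_age = sum(ages) / len(ages)
mean_conc = sum(math.exp(-lam * a) for a in ages) / len(ages)
tracer_age = -math.log(mean_conc) / lam    # age inferred from the mixed sample

print(mean_age > tracer_age)   # the tracer-derived age is biased young
```

The young water dominates the measured concentration, so the inferred age sits far below the true mean age of the mixture.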
Pape-Haugaard, Louise; Frank, Lars
2011-01-01
A major obstacle to ensuring ubiquitous information is the use of heterogeneous systems in eHealth. The objective of this paper is to illustrate how an architecture for distributed eHealth databases can be designed without losing the characteristic features of traditional sustainable databases. The approach is first to explain traditional architecture in central and homogeneous distributed database computing, followed by a possible approach that uses an architectural framework to obtain sustainability across disparate systems, i.e. heterogeneous databases, concluding with a discussion. It is shown that, through a method using relaxed ACID properties on a service-oriented architecture, it is possible to achieve the data consistency that is essential for sustainable interoperability.
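The relaxed-ACID idea can be sketched as compensation-based updating: rather than holding global locks across heterogeneous databases, each local update is applied independently and undone by a compensating update if a later step fails, so the system converges to a consistent state. Everything below is an invented illustration, not the paper's architecture:

```python
# Hedged sketch of relaxed ACID via compensating transactions ("saga" style)
# across several autonomous databases.
class LocalDB:
    def __init__(self, name):
        self.name, self.rows = name, {}
    def apply(self, key, value):
        self.rows[key] = value
    def compensate(self, key):
        self.rows.pop(key, None)          # compensating transaction: undo

def saga_update(dbs, key, value, fail_at=None):
    """Apply the update to every database; on failure, compensate the ones
    already updated instead of holding a global two-phase-commit lock."""
    done = []
    for i, db in enumerate(dbs):
        if i == fail_at:                  # simulate a failure at step i
            for d in done:
                d.compensate(key)
            return False
        db.apply(key, value)
        done.append(db)
    return True

dbs = [LocalDB("hospital"), LocalDB("lab"), LocalDB("registry")]
assert saga_update(dbs, "patient42", "updated")           # all three updated
assert not saga_update(dbs, "patient43", "new", fail_at=2)  # rolled back
print([db.rows for db in dbs])
```

Between a failure and its compensation the databases are temporarily inconsistent, which is exactly the isolation property being relaxed.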
Object-oriented Tools for Distributed Computing
NASA Technical Reports Server (NTRS)
Adler, Richard M.
1993-01-01
Distributed computing systems are proliferating, owing to the availability of powerful, affordable microcomputers and inexpensive communication networks. A critical problem in developing such systems is getting application programs to interact with one another across a computer network. Remote interprogram connectivity is particularly challenging across heterogeneous environments, where applications run on different kinds of computers and operating systems. NetWorks! (trademark) is an innovative software product that provides an object-oriented messaging solution to these problems. This paper describes the design and functionality of NetWorks! and illustrates how it is being used to build complex distributed applications for NASA and in the commercial sector.
Regional gas transport in the heterogeneous lung during oscillatory ventilation.
Herrmann, Jacob; Tawhai, Merryn H; Kaczka, David W
2016-12-01
Job Scheduling in a Heterogeneous Grid Environment
NASA Technical Reports Server (NTRS)
Shan, Hong-Zhang; Smith, Warren; Oliker, Leonid; Biswas, Rupak
2004-01-01
Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this potential can be realized. One problem that is critical to effective utilization of computational grids is the efficient scheduling of jobs. This work addresses this problem by describing and evaluating a grid scheduling architecture and three job migration algorithms. The architecture is scalable and does not assume control of local site resources. The job migration policies use the availability and performance of computer systems, the network bandwidth available between systems, and the volume of input and output data associated with each job. An extensive performance comparison is presented using real workloads from leading computational centers. The results, based on several key metrics, demonstrate that the performance of our distributed migration algorithms is significantly greater than that of a local scheduling framework and comparable to a non-scalable global scheduling approach.
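The migration policies described above can be illustrated with a simple turnaround model: estimated completion at a site is queue wait plus run time scaled by machine speed plus the time to move the job's data over the available bandwidth. All numbers below are invented and the scoring function is a generic sketch, not the paper's exact policy:

```python
# Sketch of a grid job-migration decision: choose the site minimizing
# estimated turnaround = wait + runtime/speed + data transfer time.
def best_site(job, sites):
    """job: dict with base_runtime (s) and data_gb; sites: dicts with
    wait (s), speed (relative), bandwidth_gbps (to the job's data)."""
    def turnaround(s):
        transfer = job["data_gb"] * 8 / s["bandwidth_gbps"]   # seconds
        return s["wait"] + job["base_runtime"] / s["speed"] + transfer
    return min(range(len(sites)), key=lambda i: turnaround(sites[i]))

sites = [
    {"wait": 3600, "speed": 1.0, "bandwidth_gbps": 10.0},  # home site, long queue
    {"wait": 60,   "speed": 2.0, "bandwidth_gbps": 1.0},   # fast but remote
    {"wait": 600,  "speed": 0.5, "bandwidth_gbps": 10.0},  # idle but slow
]
job = {"base_runtime": 1800, "data_gb": 50}
print(best_site(job, sites))
```

With these numbers the remote fast site wins despite the 400-second data transfer, which is the trade-off the paper's migration algorithms weigh.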
Approximate Bayesian computation for spatial SEIR(S) epidemic models.
Brown, Grant D; Porter, Aaron T; Oleson, Jacob J; Hinman, Jessica A
2018-02-01
Approximate Bayesian Computation (ABC) provides an attractive approach to estimation in complex Bayesian inferential problems for which evaluation of the kernel of the posterior distribution is impossible or computationally expensive. These highly parallelizable techniques have been successfully applied to many fields, particularly in cases where more traditional approaches such as Markov chain Monte Carlo (MCMC) are impractical. In this work, we demonstrate the application of approximate Bayesian inference to spatially heterogeneous Susceptible-Exposed-Infectious-Removed (SEIR) stochastic epidemic models. These models have a tractable posterior distribution; however, MCMC techniques become computationally infeasible for moderately sized problems. We discuss the practical implementation of these techniques via the open source ABSEIR package for R. The performance of ABC relative to traditional MCMC methods in a small problem is explored under simulation, as well as in the spatially heterogeneous context of the 2014 epidemic of Chikungunya in the Americas. Copyright © 2017 Elsevier Ltd. All rights reserved.
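The core ABC idea is simple enough to sketch: draw a candidate parameter from the prior, simulate the epidemic forward, and accept the draw only if the simulated summary is close to the observed one. The real ABSEIR package (in R) implements far more, including sequential variants; the toy discrete-time model and all numbers below are invented:

```python
# Hedged sketch of ABC rejection sampling against a crude discrete-time
# SIR-type epidemic (a stand-in for the paper's spatial SEIR models).
import random

random.seed(7)

def simulate_final_size(beta, n=500, i0=5, gamma=0.2, steps=100):
    """Very crude deterministic discrete-time SIR; returns total ever infected."""
    s, i, r = n - i0, i0, 0
    for _ in range(steps):
        new_inf = min(s, int(beta * s * i / n))
        new_rec = int(gamma * i)
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return n - s

observed = simulate_final_size(0.35)       # pretend this is the observed data
# ABC rejection: keep prior draws whose simulated final size is near observed.
accepted = [b for b in (random.uniform(0.05, 0.8) for _ in range(2000))
            if abs(simulate_final_size(b) - observed) <= 25]

print(len(accepted) > 0)
```

The accepted draws approximate the posterior of the transmission rate without ever evaluating a likelihood, which is what makes the method attractive when the posterior kernel is intractable.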
NASA Astrophysics Data System (ADS)
Negrut, Dan; Lamb, David; Gorsich, David
2011-06-01
This paper describes a software infrastructure made up of tools and libraries designed to assist developers in implementing computational dynamics applications running on heterogeneous and distributed computing environments. Together, these tools and libraries compose a so-called Heterogeneous Computing Template (HCT). The heterogeneous and distributed computing hardware infrastructure is assumed herein to be made up of a combination of CPUs and Graphics Processing Units (GPUs). The computational dynamics applications targeted to execute on such a hardware topology include many-body dynamics, smoothed-particle hydrodynamics (SPH) fluid simulation, and fluid-solid interaction analysis. The underlying theme of the solution approach embraced by HCT is that of partitioning the domain of interest into a number of subdomains that are each managed by a separate core/accelerator (CPU/GPU) pair. Four components at the core of HCT enable the envisioned distributed computing approach to large-scale dynamical system simulation: (a) the ability to partition the problem according to the one-to-one mapping (i.e., the spatial subdivision discussed above) during pre-processing; (b) a protocol for passing data between any two co-processors; (c) algorithms for element proximity computation; and (d) the ability to carry out post-processing in a distributed fashion. In this contribution, components (a) and (b) of the HCT are demonstrated via the example of the Discrete Element Method (DEM) for rigid body dynamics with friction and contact. The collision detection task required in frictional-contact dynamics (task (c) above) is shown to gain two orders of magnitude in efficiency on the GPU when compared to traditional sequential implementations. Note: Reference herein to any specific commercial products, process, or service by trade name, trademark, manufacturer, or otherwise does not imply its endorsement, recommendation, or favoring by the United States Army.
The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Army, and shall not be used for advertising or product endorsement purposes.
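Components (a) and (b) described above can be sketched schematically: bodies are binned into spatial subdomains, and each subdomain shares the bodies lying near a shared boundary with its neighbor before proximity computation, just as each CPU/GPU pair would exchange halo data. The 1-D slab decomposition and all coordinates below are invented simplifications:

```python
# Schematic sketch of HCT-style spatial subdivision (a) and neighbor data
# exchange (b), reduced to 1-D slabs for clarity.
def partition(bodies, cuts):
    """Split bodies (x-coordinates) into len(cuts)+1 slab subdomains."""
    domains = [[] for _ in range(len(cuts) + 1)]
    for x in bodies:
        idx = sum(x >= c for c in cuts)      # which slab the body falls in
        domains[idx].append(x)
    return domains

def exchange_halo(domains, cuts, width):
    """Each domain receives neighbor bodies within `width` of the shared cut."""
    halos = [[] for _ in domains]
    for i, c in enumerate(cuts):
        halos[i] += [x for x in domains[i + 1] if x < c + width]    # from right
        halos[i + 1] += [x for x in domains[i] if x >= c - width]   # from left
    return halos

bodies = [0.1, 0.9, 1.05, 1.8, 2.2, 2.95]
cuts = [1.0, 2.0]
domains = partition(bodies, cuts)
halos = exchange_halo(domains, cuts, width=0.2)
print(domains, halos)
```

After the exchange, each subdomain can run collision detection locally over its own bodies plus its halo, without further communication during the step.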
CAD/CAE Integration Enhanced by New CAD Services Standard
NASA Technical Reports Server (NTRS)
Claus, Russell W.
2002-01-01
A Government-industry team led by the NASA Glenn Research Center has developed a computer interface standard for accessing data from computer-aided design (CAD) systems. The Object Management Group, an international computer standards organization, has adopted this CAD services standard. The new standard allows software (e.g., computer-aided engineering (CAE) and computer-aided manufacturing (CAM) software) to access multiple CAD systems through one programming interface. The interface is built on top of a distributed computing system called the Common Object Request Broker Architecture (CORBA). CORBA allows the CAD services software to operate in a distributed, heterogeneous computing environment.
Heterogeneous distributed query processing: The DAVID system
NASA Technical Reports Server (NTRS)
Jacobs, Barry E.
1985-01-01
The objective of the Distributed Access View Integrated Database (DAVID) project is the development of an easy to use computer system with which NASA scientists, engineers and administrators can uniformly access distributed heterogeneous databases. Basically, DAVID will be a database management system that sits alongside already existing database and file management systems. Its function is to enable users to access the data in other languages and file systems without having to learn the data manipulation languages. Given here is an outline of a talk on the DAVID project and several charts.
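The talk outline gives no implementation detail, but the mediator pattern a system like DAVID implies can be sketched speculatively: one logical query is translated into each backend's own access method and the answers are merged, so users never touch the underlying data manipulation languages. Every class and record below is invented for illustration:

```python
# Speculative sketch of a DAVID-style mediator over heterogeneous sources:
# each backend adapts the same logical query to its own access method.
class SQLBackend:
    def __init__(self, rows):
        self.rows = rows                  # stand-in for a relational DBMS
    def query(self, field, value):        # a real adapter would emit SQL
        return [r for r in self.rows if r.get(field) == value]

class FlatFileBackend:
    def __init__(self, lines):
        self.lines = lines                # "name,project" records in a file
    def query(self, field, value):
        idx = {"name": 0, "project": 1}[field]
        return [dict(zip(("name", "project"), l.split(",")))
                for l in self.lines if l.split(",")[idx] == value]

def global_query(backends, field, value):
    """Fan the same logical query out to every backend and merge the answers."""
    out = []
    for b in backends:
        out.extend(b.query(field, value))
    return out

backends = [SQLBackend([{"name": "ada", "project": "x"}]),
            FlatFileBackend(["bob,x", "cyd,y"])]
print(global_query(backends, "project", "x"))
```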
The future of PanDA in ATLAS distributed computing
NASA Astrophysics Data System (ADS)
De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.
2015-12-01
Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favour of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addition to new challenges of scale, heterogeneity and increasing user base. PanDA will need to handle rapidly changing computing infrastructure, will require factorization of code for easier deployment, will need to incorporate additional information sources including network metrics in decision making, be able to control network circuits, handle dynamically sized workload processing, provide improved visualization, and face many other challenges. In this talk we will focus on the new features, planned or recently implemented, that are relevant to the next decade of distributed computing workload management using PanDA.
Modeling and comparative study of fluid velocities in heterogeneous rocks
NASA Astrophysics Data System (ADS)
Hingerl, Ferdinand F.; Romanenko, Konstantin; Pini, Ronny; Balcom, Bruce; Benson, Sally
2013-04-01
Detailed knowledge of the distribution of effective porosity and fluid velocities in heterogeneous rock samples is crucial for understanding and predicting spatially resolved fluid residence times and kinetic reaction rates of fluid-rock interactions. The applicability of conventional MRI techniques to sedimentary rocks is limited by internal magnetic field gradients and short spin relaxation times. The approach developed at the UNB MRI Centre combines the 13-interval Alternating-Pulsed-Gradient Stimulated-Echo (APGSTE) scheme and three-dimensional Single Point Ramped Imaging with T1 Enhancement (SPRITE). These methods were designed to reduce the errors due to effects of background gradients and fast transverse relaxation. SPRITE is largely immune to time-evolution effects resulting from background gradients, paramagnetic impurities, and chemical shift. Using these techniques, developed at the MRI Centre at UNB, we measured quantitative 3D porosity maps as well as single-phase fluid velocity fields in sandstone core samples. We then evaluated the applicability of the Kozeny-Carman relationship for modeling the measured fluid velocity distributions in sandstone samples showing meso-scale heterogeneities, using two different modeling approaches; the MRI maps served as reference points for both. For the first approach, we applied the Kozeny-Carman relationship to the porosity distributions and computed the corresponding permeability maps, which in turn provided input for a CFD simulation, using the Stanford CFD code GPRS, to compute averaged velocity maps. The latter were then compared to the measured velocity maps. For the second approach, the measured velocity distributions were used as input for inversely computing permeabilities using the GPRS CFD code.
The computed permeabilities were then correlated with those based on the porosity maps and the Kozeny-Carman relationship. The findings of the comparative modeling study are discussed, and its potential impact on the modeling of fluid residence times and kinetic reaction rates of fluid-rock interactions in rocks containing meso-scale heterogeneities is reviewed.
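The first modeling step above, converting a porosity map into a permeability map, can be written out directly. The Kozeny-Carman relationship has the form k = C * phi^3 / (1 - phi)^2, where the prefactor C lumps grain-size and tortuosity terms; the value of C and the porosity values below are invented:

```python
# Worked sketch of the Kozeny-Carman step: porosity map -> permeability map.
def kozeny_carman(porosity, c=1.0e-12):
    """Permeability (m^2) from porosity via k = c * phi^3 / (1 - phi)^2."""
    return [c * phi**3 / (1.0 - phi)**2 for phi in porosity]

porosity_map = [0.10, 0.15, 0.20, 0.25]
k_map = kozeny_carman(porosity_map)
# Permeability grows faster than phi^3 with porosity:
print(all(k2 > k1 for k1, k2 in zip(k_map, k_map[1:])))
```

Applied cell by cell to an MRI porosity map, this yields the permeability field that feeds the CFD velocity simulation.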
Dunlop, R; Arbona, A; Rajasekaran, H; Lo Iacono, L; Fingberg, J; Summers, P; Benkner, S; Engelbrecht, G; Chiarini, A; Friedrich, C M; Moore, B; Bijlenga, P; Iavindrasana, J; Hose, R D; Frangi, A F
2008-01-01
This paper presents an overview of computerised decision support for clinical practice. The concept of computer-interpretable guidelines is introduced in the context of the @neurIST project, which aims at supporting the research and treatment of asymptomatic unruptured cerebral aneurysms by bringing together heterogeneous data, computing and complex processing services. The architecture is generic enough to adapt it to the treatment of other diseases beyond cerebral aneurysms. The paper reviews the generic requirements of the @neurIST system and presents the innovative work in distributing executable clinical guidelines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venkata, Manjunath Gorentla; Aderholdt, William F
The pre-exascale systems are expected to have a significant amount of hierarchical and heterogeneous on-node memory, and this trend in system architecture is expected to continue into the exascale era. Along with hierarchical-heterogeneous memory, such a system typically has a high-performing network and a compute accelerator. This system architecture is effective not only for running traditional High Performance Computing (HPC) applications (Big-Compute), but also for running data-intensive HPC applications and Big-Data applications. As a consequence, there is a growing desire to have a single system serve the needs of both Big-Compute and Big-Data applications. Though the system architecture supports the convergence of Big-Compute and Big-Data, the programming models and software layer have yet to evolve to support either hierarchical-heterogeneous memory systems or this convergence. This work presents a programming abstraction to address this problem. The programming abstraction is implemented as a software library and runs on pre-exascale and exascale systems supporting current and emerging system architectures. Using distributed data structures as a central concept, it provides (1) a simple, usable, and portable abstraction for hierarchical-heterogeneous memory and (2) a unified programming abstraction for Big-Compute and Big-Data applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Zhifeng; Liu, Chongxuan; Liu, Yuanyuan
Biofilms are critical locations for biogeochemical reactions in the subsurface environment. The occurrence and distribution of biofilms at microscale, as well as their impacts on macroscopic biogeochemical reaction rates, are still poorly understood. This paper investigated the formation and distributions of biofilms in heterogeneous sediments using multiscale models, and evaluated the effects of biofilm heterogeneity on local and macroscopic biogeochemical reaction rates. Sediment pore structures derived from X-ray computed tomography were used to simulate the microscale flow dynamics and biofilm distribution in the sediment column. The response of biofilm formation and distribution to the variations in hydraulic and chemical properties was first examined. One representative biofilm distribution was then utilized to evaluate its effects on macroscopic reaction rates using nitrate reduction as an example. The results revealed that microorganisms primarily grew on the surfaces of grains and aggregates near preferential flow paths where both electron donor and acceptor were readily accessible, leading to the heterogeneous distribution of biofilms in the sediments. The heterogeneous biofilm distribution decreased the macroscopic rate of biogeochemical reactions as compared with those in homogeneous cases. Operationally considering the heterogeneous biofilm distribution in macroscopic reactive transport models, such as using the dual porosity domain concept, can significantly improve the prediction of biogeochemical reaction rates. Overall, this study provided important insights into biofilm formation and distribution in soils and sediments as well as their impacts on the macroscopic manifestation of reaction rates.
SU-E-T-422: Fast Analytical Beamlet Optimization for Volumetric Intensity-Modulated Arc Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, Kenny S K; Lee, Louis K Y; Xing, L
2015-06-15
Purpose: To implement a fast optimization algorithm on a CPU/GPU heterogeneous computing platform and to obtain an optimal fluence for a given target dose distribution from the pre-calculated beamlets in an analytical approach. Methods: The 2D target dose distribution was modeled as an n-dimensional vector and estimated by a linear combination of independent basis vectors. The basis set was composed of the pre-calculated beamlet dose distributions at every 6 degrees of gantry angle, and the cost function was set as the magnitude square of the vector difference between the target and the estimated dose distribution. The optimal weighting of the basis, which corresponds to the optimal fluence, was obtained analytically by the least-squares method. Those basis vectors with a positive weighting were selected for entering into the next level of optimization. In total, 7 levels of optimization were implemented in the study. Ten head-and-neck and ten prostate carcinoma cases were selected for the study and mapped to a round water phantom with a diameter of 20 cm. The Matlab computation was performed in a heterogeneous programming environment with an Intel i7 CPU and an NVIDIA GeForce 840M GPU. Results: In all selected cases, the estimated dose distribution was in good agreement with the given target dose distribution, and their correlation coefficients were found to be in the range of 0.9992 to 0.9997. Their root-mean-square error was monotonically decreasing and converging after 7 cycles of optimization. The computation took only about 10 seconds, and the optimal fluence maps at each gantry angle throughout an arc were quickly obtained. Conclusion: An analytical approach is derived for finding the optimal fluence for a given target dose distribution, and a fast optimization algorithm implemented on the CPU/GPU heterogeneous computing environment greatly reduces the optimization time.
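The optimization loop described in the Methods can be sketched as iterated least squares with pruning: fit all beamlet weights analytically, discard the negative ones, and re-solve on the survivors. The abstract's implementation used Matlab on a CPU/GPU platform; the NumPy version and toy matrix sizes below are a hedged illustration:

```python
# Hedged sketch of multi-level least-squares fluence optimization: solve,
# keep only positive beamlet weights, re-solve, up to `levels` times.
import numpy as np

def fluence_optimize(beamlets, target, levels=7):
    """beamlets: (n_voxels, n_beamlets) dose per unit weight; target: (n_voxels,)."""
    active = np.arange(beamlets.shape[1])
    sol = np.zeros(0)
    for _ in range(levels):
        sol, *_ = np.linalg.lstsq(beamlets[:, active], target, rcond=None)
        keep = sol > 0                      # only positive weights are physical
        active, sol = active[keep], sol[keep]
        if keep.all():
            break
    w = np.zeros(beamlets.shape[1])
    w[active] = sol
    return w

rng = np.random.default_rng(0)
A = rng.random((40, 10))                    # toy beamlet dose matrix
true_w = np.array([1.0, 0, 2.0, 0, 0, 0.5, 0, 0, 0, 0])
target = A @ true_w
w = fluence_optimize(A, target)
print(np.allclose(A @ w, target, atol=1e-8))
```

Each level is a direct (analytical) solve, which is what keeps the whole scheme fast compared with iterative gradient-based fluence optimization.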
Shibuta, Yasushi; Sakane, Shinji; Miyoshi, Eisuke; Okita, Shin; Takaki, Tomohiro; Ohno, Munekazu
2017-04-05
Can completely homogeneous nucleation occur? Large-scale molecular dynamics simulations performed on a graphics-processing-unit-rich supercomputer can shed light on this long-standing issue. Here, a billion-atom molecular dynamics simulation of homogeneous nucleation from an undercooled iron melt reveals that some satellite-like small grains surrounding previously formed large grains exist in the middle of the nucleation process, and these are not distributed uniformly. At the same time, grains with a twin boundary are formed by heterogeneous nucleation from the surface of the previously formed grains. The local heterogeneity in the distribution of grains is caused by the local accumulation of the icosahedral structure in the undercooled melt near the previously formed grains. This insight is mainly attributable to the multi-graphics-processing-unit parallel computation combined with the rapid progress in high-performance computational environments. Nucleation is a fundamental physical process; however, it is a long-standing issue whether completely homogeneous nucleation can occur. Here the authors reveal, via a billion-atom molecular dynamics simulation, that local heterogeneity exists during homogeneous nucleation in an undercooled iron melt.
NASA Astrophysics Data System (ADS)
Yan, Zhifeng; Liu, Chongxuan; Liu, Yuanyuan; Bailey, Vanessa L.
2017-11-01
New security infrastructure model for distributed computing systems
NASA Astrophysics Data System (ADS)
Dubenskaya, J.; Kryukov, A.; Demichev, A.; Prikhodko, N.
2016-02-01
In this paper we propose a new approach to setting up a user-friendly and yet secure authentication and authorization procedure in a distributed computing system. The security concept of most heterogeneous distributed computing systems is based on a public key infrastructure along with proxy certificates, which are used for rights delegation. In practice, the contradiction between the limited lifetime of proxy certificates and the unpredictable duration of request processing is a major issue for the end users of the system. We propose to use hashes with unlimited lifetime, individual to each request, instead of proxy certificates. Our approach avoids the use of proxy certificates altogether, making the security infrastructure of a distributed computing system easier to develop, support, and use.
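The abstract does not specify how the per-request hashes are constructed. One plausible sketch uses a keyed HMAC derived from a server secret; the function names, token format, and verification flow here are assumptions for illustration, not the authors' design:

```python
import hashlib
import hmac
import secrets

def issue_request_token(server_key: bytes, user_id: str, request_id: str) -> str:
    """Derive an individual, non-expiring token for one request.

    Unlike a proxy certificate, the token carries no built-in lifetime:
    however long the request takes to process, the server can simply
    recompute and compare the token when the job completes.
    """
    msg = f"{user_id}:{request_id}".encode()
    return hmac.new(server_key, msg, hashlib.sha256).hexdigest()

def verify_request_token(server_key: bytes, user_id: str,
                         request_id: str, token: str) -> bool:
    expected = issue_request_token(server_key, user_id, request_id)
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(expected, token)

key = secrets.token_bytes(32)          # server-side secret
t = issue_request_token(key, "alice", "job-42")
assert verify_request_token(key, "alice", "job-42", t)
assert not verify_request_token(key, "alice", "job-43", t)
```

Because each token is bound to a single request, revocation reduces to deleting the request record, which sidesteps the certificate-lifetime problem the paper describes.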
DAI-CLIPS: Distributed, Asynchronous, Interacting CLIPS
NASA Technical Reports Server (NTRS)
Gagne, Denis; Garant, Alain
1994-01-01
DAI-CLIPS is a distributed computational environment within which each CLIPS is an active, independent computational entity with the ability to communicate freely with other CLIPS. Furthermore, new CLIPS can be created and others deleted, or their expertise modified, all dynamically, asynchronously, and independently during execution. The participating CLIPS are distributed over a network of heterogeneous processors, taking full advantage of the available processing power. We present the general framework encompassing DAI-CLIPS and discuss some of its advantages and potential applications.
2015-12-04
[Fragmentary record] Concerns simulations executing on mobile computing platforms, an area not widely studied to date in the distributed simulation research community. Initial studies focused on two conservative synchronization algorithms widely used in the distributed simulation field.
Burrowes, Kelly S; Hunter, Peter J; Tawhai, Merryn H
2005-11-01
A computational model of blood flow through the human pulmonary arterial tree has been developed to investigate the relative influence of branching structure and gravity on blood flow distribution in the human lung. Geometric models of the largest arterial vessels and lobar boundaries were first derived using multidetector row x-ray computed tomography (MDCT) scans. Further accompanying arterial vessels were generated from the MDCT vessel endpoints into the lobar volumes using a volume-filling branching algorithm. Equations governing the conservation of mass and momentum were solved within the geometric model to calculate pressure, velocity, and vessel radius. Blood flow results in the anatomically based model, with and without gravity, and in a symmetric geometric model were compared to investigate their relative contributions to blood flow heterogeneity. Results showed a persistent blood flow gradient and flow heterogeneity in the absence of gravitational forces in the anatomically based model. Comparison with flow results in the symmetric model revealed that the asymmetric vascular branching structure was largely responsible for producing this heterogeneity. Analysis of average results in varying slice thicknesses illustrated a clear flow gradient because of gravity in "lower resolution" data (thicker slices), but on examination of higher resolution data, a trend was less obvious. Results suggest that although gravity does influence flow distribution, the influence of the tree branching structure is also a dominant factor. These results are consistent with high-resolution experimental studies that have demonstrated gravity to be only a minor determinant of blood flow distribution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sunderam, Vaidy S.
2007-01-09
The Harness project has developed novel software frameworks for the execution of high-end simulations in a fault-tolerant manner on distributed resources. The H2O subsystem comprises the kernel of the Harness framework and controls the key functions of resource management across multiple administrative domains, especially issues of access and allocation. It is based on a “pluggable” architecture that enables the aggregated use of distributed heterogeneous resources for high performance computing. The major contributions of the Harness II project significantly enhance the overall computational productivity of high-end scientific applications by enabling robust, failure-resilient computations on cooperatively pooled resource collections.
Information Power Grid Posters
NASA Technical Reports Server (NTRS)
Vaziri, Arsi
2003-01-01
This document is a summary of the accomplishments of the Information Power Grid (IPG). Grids are an emerging technology that provide seamless and uniform access to the geographically dispersed computational, data storage, networking, instrument, and software resources needed for solving large-scale scientific and engineering problems. The goal of the NASA IPG is to use NASA's remotely located computing and data system resources to build distributed systems that can address problems that are too large or complex for a single site. The accomplishments outlined in this poster presentation are: access to distributed data, IPG heterogeneous computing, integration of a large-scale computing node into a distributed environment, remote access to high-data-rate instruments, and an exploratory grid environment.
Analytical effective tensor for flow-through composites
Sviercoski, Rosangela De Fatima [Los Alamos, NM
2012-06-19
A machine, method, and computer-usable medium for modeling the average flow of a substance through a composite material. The modeling includes an analytical calculation of an effective tensor K^a suitable for use with a variety of media. The analytical calculation corresponds to an approximation of the tensor K and proceeds by first computing the diagonal values and then identifying symmetries of the heterogeneity distribution. Additional calculations include determining the center of mass of the heterogeneous cell and its angle in a defined Cartesian system, then applying this angle in a rotation formula to compute the off-diagonal values and determine their sign.
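The rotation step described in the abstract can be illustrated in two dimensions: given diagonal values and the symmetry angle of the heterogeneity, the off-diagonal entries follow from K = R(θ) diag(k₁, k₂) R(θ)ᵀ. The values and angle below are invented for illustration; the patent's actual formula may differ:

```python
import math

def rotate_diagonal_tensor(k1: float, k2: float, theta: float):
    """Expand K = R(theta) @ diag(k1, k2) @ R(theta)^T in closed form.

    The off-diagonal entry (k1 - k2) * sin(theta) * cos(theta) carries
    the sign of the rotation angle, matching the abstract's point that
    the angle determines both the magnitude and sign of K_xy.
    """
    c, s = math.cos(theta), math.sin(theta)
    kxx = k1 * c * c + k2 * s * s
    kyy = k1 * s * s + k2 * c * c
    kxy = (k1 - k2) * s * c
    return [[kxx, kxy], [kxy, kyy]]

# hypothetical diagonal values 4.0 and 1.0, symmetry angle 45 degrees
K = rotate_diagonal_tensor(4.0, 1.0, math.pi / 4)
assert abs(K[0][1] - 1.5) < 1e-12   # off-diagonal is (k1 - k2)/2 at 45 deg
assert abs(K[0][0] - 2.5) < 1e-12
```

Note that the trace k₁ + k₂ is invariant under the rotation, which is a quick sanity check on any such reconstruction.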
2011-08-09
[Fragmentary record] Discusses the fastest 10 supercomputers in the world. Both systems rely on GPU co-processing, one using AMD cards and the second, called Nebulae, using NVIDIA Tesla hardware. Despite a peak capability of almost 3 petaflop/s, the highest in the TOP500, Nebulae holds only the No. 2 position on the TOP500 list.
de Beer, R; Graveron-Demilly, D; Nastase, S; van Ormondt, D
2004-03-01
Recently we have developed a Java-based heterogeneous distributed computing system for the field of magnetic resonance imaging (MRI). It is a software system for embedding the various image reconstruction algorithms that we have created for handling MRI data sets with sparse sampling distributions. Since these data sets may result from multi-dimensional MRI measurements, our system has to control the storage and manipulation of large amounts of data. In this paper we describe how we have employed the extensible markup language (XML) to realize this data handling in a highly structured way. To that end we have used Java packages, recently released by Sun Microsystems, to process XML documents and to compile pieces of XML code into Java classes. We have effectuated a flexible storage and manipulation approach for all kinds of data within the MRI system, such as data describing and containing multi-dimensional MRI measurements, data configuring image reconstruction methods, and data representing and visualizing the various services of the system. We have found that the object-oriented approach possible with the Java programming environment, combined with XML technology, is a convenient way of describing and handling various data streams in heterogeneous distributed computing systems.
Using an architectural approach to integrate heterogeneous, distributed software components
NASA Technical Reports Server (NTRS)
Callahan, John R.; Purtilo, James M.
1995-01-01
Many computer programs cannot be easily integrated because their components are distributed and heterogeneous, i.e., they are implemented in diverse programming languages, use different data representation formats, or their runtime environments are incompatible. In many cases, programs are integrated by modifying their components or interposing mechanisms that handle communication and conversion tasks. For example, remote procedure call (RPC) helps integrate heterogeneous, distributed programs. When configuring such programs, however, mechanisms like RPC must be used explicitly by software developers in order to integrate collections of diverse components. Each collection may require a unique integration solution. This paper describes improvements to the concepts of software packaging and some of our experiences in constructing complex software systems from a wide variety of components in different execution environments. Software packaging is a process that automatically determines how to integrate a diverse collection of computer programs based on the types of components involved and the capabilities of available translators and adapters in an environment. Software packaging provides a context that relates such mechanisms to software integration processes and reduces the cost of configuring applications whose components are distributed or implemented in different programming languages. Our software packaging tool subsumes traditional integration tools like UNIX make by providing a rule-based approach to software integration that is independent of execution environments.
An incremental database access method for autonomous interoperable databases
NASA Technical Reports Server (NTRS)
Roussopoulos, Nicholas; Sellis, Timos
1994-01-01
We investigated a number of design and performance issues of interoperable database management systems (DBMS's). The major results of our investigation were obtained in the areas of client-server database architectures for heterogeneous DBMS's, incremental computation models, buffer management techniques, and query optimization. We finished a prototype of an advanced client-server workstation-based DBMS which allows access to multiple heterogeneous commercial DBMS's. Experiments and simulations were then run to compare its performance with the standard client-server architectures. The focus of this research was on adaptive optimization methods for heterogeneous database systems. Adaptive buffer management accounts for the random and object-oriented access methods for which no known characterization of the access patterns exists. Adaptive query optimization means that value distributions and selectivities, which play the most significant role in query plan evaluation, are continuously refined to reflect the actual values as opposed to static ones that are computed off-line. Query feedback is a concept that was first introduced to the literature by our group. We employed query feedback for both adaptive buffer management and for computing value distributions and selectivities. For adaptive buffer management, we use the page faults of prior executions to achieve more 'informed' management decisions. For the estimation of the distributions of the selectivities, we use curve-fitting techniques, such as least squares and splines, for regressing on these values.
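The curve-fitting side of query feedback can be illustrated with a minimal least-squares sketch: observed selectivities from executed queries are regressed against the predicate constant, and the fitted curve replaces a static off-line estimate. The data and the linear model below are invented for illustration; the paper also mentions spline fits:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y ~ a*x + b, in closed form."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# query feedback: (predicate constant, observed true selectivity) pairs
# collected from queries that have already executed
observed = [(10, 0.11), (20, 0.19), (30, 0.31), (40, 0.42)]
a, b = fit_line([x for x, _ in observed], [y for _, y in observed])

# refined selectivity estimate for an as-yet-unseen constant
estimate = a * 25 + b
assert 0.2 < estimate < 0.3
```

Each newly executed query appends another (constant, selectivity) pair and the fit is redone, so the optimizer's estimates track the actual data distribution instead of a stale off-line histogram.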
Hussein, Esam M A; Agbogun, H M D; Al, Tom A
2015-03-01
A method is presented for interpreting the values of x-ray attenuation coefficients reconstructed in computed tomography of porous media, while overcoming the ambiguity caused by the multichromatic nature of x-rays, dilution by void, and material heterogeneity. The method enables determination of porosity without relying on calibration or image segmentation or thresholding to discriminate pores from solid material. It distinguishes between solution-accessible and inaccessible pores, and provides the spatial and frequency distributions of solid-matrix material in a heterogeneous medium. This is accomplished by matching an image of a sample saturated with a contrast solution with that saturated with a transparent solution. Voxels occupied with solid-material and inaccessible pores are identified by the fact that they maintain the same location and image attributes in both images, with voxels containing inaccessible pores appearing empty in both images. Fully porous and accessible voxels exhibit the maximum contrast, while the rest are porous voxels containing mixtures of pore solutions and solid. This matching process is performed with an image registration computer code, and image processing software that requires only simple subtraction and multiplication (scaling) processes. The process is demonstrated in dolomite (non-uniform void distribution, homogeneous solid matrix) and sandstone (nearly uniform void distribution, heterogeneous solid matrix) samples, and its overall performance is shown to compare favorably with a method based on calibration and thresholding. Copyright © 2014 Elsevier Ltd. All rights reserved.
A distributed data base management facility for the CAD/CAM environment
NASA Technical Reports Server (NTRS)
Balza, R. M.; Beaudet, R. W.; Johnson, H. R.
1984-01-01
Current/PAD research in the area of distributed data base management considers facilities for supporting CAD/CAM data management in a heterogeneous network of computers encompassing multiple data base managers supporting a variety of data models. These facilities include coordinated execution of multiple DBMSs to provide for administration of and access to data distributed across them.
Asynchronous Replica Exchange Software for Grid and Heterogeneous Computing.
Gallicchio, Emilio; Xia, Junchao; Flynn, William F; Zhang, Baofeng; Samlalsingh, Sade; Mentes, Ahmet; Levy, Ronald M
2015-11-01
Parallel replica exchange sampling is an extended ensemble technique often used to accelerate the exploration of the conformational ensemble of atomistic molecular simulations of chemical systems. Inter-process communication and coordination requirements have historically discouraged the deployment of replica exchange on distributed and heterogeneous resources. Here we describe the architecture of a software (named ASyncRE) for performing asynchronous replica exchange molecular simulations on volunteered computing grids and heterogeneous high performance clusters. The asynchronous replica exchange algorithm on which the software is based avoids centralized synchronization steps and the need for direct communication between remote processes. It allows molecular dynamics threads to progress at different rates and enables parameter exchanges among arbitrary sets of replicas independently from other replicas. ASyncRE is written in Python following a modular design conducive to extensions to various replica exchange schemes and molecular dynamics engines. Applications of the software for the modeling of association equilibria of supramolecular and macromolecular complexes on BOINC campus computational grids and on the CPU/MIC heterogeneous hardware of the XSEDE Stampede supercomputer are illustrated. They show the ability of ASyncRE to utilize large grids of desktop computers running the Windows, MacOS, and/or Linux operating systems as well as collections of high performance heterogeneous hardware devices.
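The step that ASyncRE decentralizes, pairwise parameter exchanges among whichever replicas happen to be idle, with no global synchronization barrier, can be sketched as follows. This is a toy illustration, not the ASyncRE API: the energies, temperature ladder, and idle-pair selection are all invented:

```python
import math
import random

def attempt_exchange(E_i, E_j, beta_i, beta_j, rng):
    """Metropolis acceptance test for swapping inverse temperatures
    between two replicas with potential energies E_i and E_j."""
    delta = (beta_i - beta_j) * (E_j - E_i)
    return delta >= 0 or rng.random() < math.exp(delta)

rng = random.Random(0)

# replicas finish their MD "legs" at different rates; any idle pair may
# attempt an exchange independently of the others
replicas = [{"beta": 1.0 / T, "E": rng.uniform(-50.0, 0.0)}
            for T in (300, 350, 400, 450)]

i, j = 0, 2   # suppose replicas 0 and 2 happen to be idle right now
if attempt_exchange(replicas[i]["E"], replicas[j]["E"],
                    replicas[i]["beta"], replicas[j]["beta"], rng):
    replicas[i]["beta"], replicas[j]["beta"] = \
        replicas[j]["beta"], replicas[i]["beta"]

# the temperature ladder is conserved whether or not the swap happened
assert ({round(r["beta"], 6) for r in replicas}
        == {round(1 / T, 6) for T in (300, 350, 400, 450)})
```

The key property, visible even in this sketch, is that no step requires all replicas to reach a barrier together, which is what makes deployment on volunteered grids with stragglers practical.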
McDonald, Scott A; Devleesschauwer, Brecht; Wallinga, Jacco
2016-12-08
Disease burden is not evenly distributed within a population; this uneven distribution can be due to individual heterogeneity in progression rates between disease stages. Composite measures of disease burden that are based on disease progression models, such as the disability-adjusted life year (DALY), are widely used to quantify the current and future burden of infectious diseases. Our goal was to investigate to what extent ignoring the presence of heterogeneity could bias DALY computation. Simulations using individual-based models for hypothetical infectious diseases with short and long natural histories were run assuming either "population-averaged" progression probabilities between disease stages, or progression probabilities that were influenced by an a priori defined individual-level frailty (i.e., heterogeneity in disease risk) distribution, and DALYs were calculated. Under the assumption of heterogeneity in transition rates and increasing frailty with age, the short natural history disease model predicted 14% fewer DALYs compared with the homogenous population assumption. Simulations of a long natural history disease indicated that assuming homogeneity in transition rates when heterogeneity was present could overestimate total DALYs, in the present case by 4% (95% quantile interval: 1-8%). The consequences of ignoring population heterogeneity should be considered when defining transition parameters for natural history models and when interpreting the resulting disease burden estimates.
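The mechanism the authors describe, heterogeneity in progression rates lowering the aggregate DALY estimate, follows from the concavity of cumulative progression risk in the per-year probability. It can be reproduced in a toy individual-based model; the disease parameters and frailty distribution below are invented, and the paper's models are far more detailed:

```python
import random

def dalys(progression_probs, years=10, dw=0.3, rng=None):
    """Toy individual-based burden model: each person may progress once
    to a chronic stage; DALYs = years lived in that stage * disability
    weight (mortality and discounting are ignored for simplicity)."""
    rng = rng or random.Random(1)
    total = 0.0
    for p in progression_probs:
        for y in range(years):
            if rng.random() < p:            # progresses this year
                total += (years - y) * dw   # remaining years with disability
                break
    return total

rng = random.Random(42)
n, p_mean = 20000, 0.05

homogeneous = [p_mean] * n
# frailty: same mean progression probability, spread across individuals
frail = [p_mean * rng.choice([0.2, 1.8]) for _ in range(n)]

d_hom = dalys(homogeneous, rng=random.Random(7))
d_het = dalys(frail, rng=random.Random(7))
assert d_het < d_hom   # heterogeneity lowers the aggregate DALY estimate
```

The direction of the bias matches the abstract: because the chance of ever progressing is concave in the yearly probability, a mean-preserving spread in individual risk yields fewer total DALYs than the population-averaged assumption.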
Constructing Scientific Applications from Heterogeneous Resources
NASA Technical Reports Server (NTRS)
Schichting, Richard D.
1995-01-01
A new model for high-performance scientific applications in which such applications are implemented as heterogeneous distributed programs or, equivalently, meta-computations, is investigated. The specific focus of this grant was a collaborative effort with researchers at NASA and the University of Toledo to test and improve Schooner, a software interconnection system, and to explore the benefits of increased user interaction with existing scientific applications.
NASA Astrophysics Data System (ADS)
Mahabadi, O. K.; Tatone, B. S. A.; Grasselli, G.
2014-07-01
This study investigates the influence of microscale heterogeneity and microcracks on the failure behavior and mechanical response of a crystalline rock. The thin section analysis for obtaining the microcrack density is presented. Using micro X-ray computed tomography (μCT) scanning of failed laboratory specimens, the influence of heterogeneity and, in particular, biotite grains on the brittle fracture of the specimens is discussed and various failure patterns are characterized. Three groups of numerical simulations are presented, which demonstrate the role of microcracks and the influence of μCT-based and stochastically generated phase distributions. The mechanical response, stress distribution, and fracturing process obtained by the numerical simulations are also discussed. The simulation results illustrate that heterogeneity and microcracks should be considered to accurately predict the tensile strength and failure behavior of the sample.
NASA Astrophysics Data System (ADS)
Niedermeier, Dennis; Ervens, Barbara; Clauss, Tina; Voigtländer, Jens; Wex, Heike; Hartmann, Susan; Stratmann, Frank
2014-01-01
In a recent study, the Soccer ball model (SBM) was introduced for modeling and/or parameterizing heterogeneous ice nucleation processes. The model applies classical nucleation theory. It allows for a consistent description of both apparently singular and stochastic ice nucleation behavior, by distributing contact angles over the nucleation sites of a particle population assuming a Gaussian probability density function. The original SBM utilizes the Monte Carlo technique, which hampers its usage in atmospheric models, as fairly time-consuming calculations must be performed to obtain statistically significant results. Thus, we have developed a simplified and computationally more efficient version of the SBM. We successfully used the new SBM to parameterize experimental nucleation data of, e.g., bacterial ice nucleation. Both SBMs give identical results; however, the new model is computationally less expensive as confirmed by cloud parcel simulations. Therefore, it is a suitable tool for describing heterogeneous ice nucleation processes in atmospheric models.
Superconcurrency: A Form of Distributed Heterogeneous Supercomputing
1991-05-01
[Fragmentary record] Cites 'An Overview of the PASM Parallel Processing System' (co-authored by Nathaniel J. Davis IV), in Computer Architecture, edited by D. D. Gajski, V. M. Milutinovic, et al.
Job Superscheduler Architecture and Performance in Computational Grid Environments
NASA Technical Reports Server (NTRS)
Shan, Hongzhang; Oliker, Leonid; Biswas, Rupak
2003-01-01
Computational grids hold great promise in utilizing geographically separated heterogeneous resources to solve large-scale complex scientific problems. However, a number of major technical hurdles, including distributed resource management and effective job scheduling, stand in the way of realizing these gains. In this paper, we propose a novel grid superscheduler architecture and three distributed job migration algorithms. We also model the critical interaction between the superscheduler and autonomous local schedulers. Extensive performance comparisons with ideal, central, and local schemes using real workloads from leading computational centers are conducted in a simulation environment. Additionally, synthetic workloads are used to perform a detailed sensitivity analysis of our superscheduler. Several key metrics demonstrate that substantial performance gains can be achieved via smart superscheduling in distributed computational grids.
A Latency-Tolerant Partitioner for Distributed Computing on the Information Power Grid
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biwas, Rupak; Kwak, Dochan (Technical Monitor)
2001-01-01
NASA's Information Power Grid (IPG) is an infrastructure designed to harness the power of geographically distributed computers, databases, and human expertise in order to solve large-scale realistic computational problems. This type of meta-computing environment is necessary to present a unified virtual machine to application developers that hides the intricacies of a highly heterogeneous environment and yet maintains adequate security. In this paper, we present a novel partitioning scheme, called MinEX, that dynamically balances processor workloads while minimizing data movement and runtime communication for applications that are executed in a parallel distributed fashion on the IPG. We also analyze the conditions that are required for the IPG to be an effective tool for such distributed computations. Our results show that MinEX is a viable load balancer provided the nodes of the IPG are connected by a high-speed asynchronous interconnection network.
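MinEX's twin objectives, balancing load while discouraging data movement, can be illustrated with a deliberately simplified greedy sketch. This is not the MinEX algorithm itself: the task weights, node names, and movement penalty are invented for illustration:

```python
def assign_tasks(tasks, nodes, current_owner, move_cost=0.3):
    """Greedy balancing sketch: place each task on the node with the
    lowest resulting cost, charging a penalty whenever the task must
    move away from the node that already holds its data."""
    load = {n: 0.0 for n in nodes}
    placement = {}
    # heaviest tasks first, so large items anchor the partition
    for task, weight in sorted(tasks.items(), key=lambda kv: -kv[1]):
        def cost(n):
            penalty = move_cost * weight if n != current_owner[task] else 0.0
            return load[n] + weight + penalty
        best = min(nodes, key=cost)
        placement[task] = best
        load[best] += weight
    return placement, load

tasks = {"t1": 4.0, "t2": 3.0, "t3": 2.0, "t4": 1.0}
owner = {"t1": "A", "t2": "A", "t3": "B", "t4": "B"}
placement, load = assign_tasks(tasks, ["A", "B"], owner)
assert max(load.values()) - min(load.values()) <= 1.0   # roughly balanced
```

Raising `move_cost` biases the result toward data locality at the expense of balance; a real partitioner such as MinEX makes this trade-off with far more care, including communication costs at runtime.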
Automated inverse computer modeling of borehole flow data in heterogeneous aquifers
NASA Astrophysics Data System (ADS)
Sawdey, J. R.; Reeve, A. S.
2012-09-01
A computer model has been developed to simulate borehole flow in heterogeneous aquifers where the vertical distribution of permeability may vary significantly. In crystalline fractured aquifers, flow into or out of a borehole occurs at discrete locations of fracture intersection. Under these circumstances, flow simulations are defined by independent variables of transmissivity and far-field heads for each flow contributing fracture intersecting the borehole. The computer program, ADUCK (A Downhole Underwater Computational Kit), was developed to automatically calibrate model simulations to collected flowmeter data providing an inverse solution to fracture transmissivity and far-field head. ADUCK has been tested in variable borehole flow scenarios, and converges to reasonable solutions in each scenario. The computer program has been created using open-source software to make the ADUCK model widely available to anyone who could benefit from its utility.
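The inverse problem ADUCK addresses can be illustrated in a noise-free toy version: each fracture contributes inflow q_k = T_k(h_k − h_b), and the flowmeter measures the cumulative flow along the borehole. The heads and transmissivities below are invented; the real code calibrates iteratively against noisy flowmeter data rather than inverting directly:

```python
def forward_profile(transmissivities, far_field_heads, borehole_head):
    """Forward model: inflow at each fracture q_k = T_k * (h_k - h_b);
    the flowmeter reads the cumulative flow past each depth."""
    inflows = [T * (h - borehole_head)
               for T, h in zip(transmissivities, far_field_heads)]
    profile, total = [], 0.0
    for q in inflows:
        total += q
        profile.append(total)
    return profile

def invert(measured_profile, far_field_heads, borehole_head):
    """Recover per-fracture transmissivities from the cumulative
    profile (the linear, noise-free model inverts directly)."""
    inflows = [measured_profile[0]] + [
        b - a for a, b in zip(measured_profile, measured_profile[1:])]
    return [q / (h - borehole_head)
            for q, h in zip(inflows, far_field_heads)]

heads = [12.0, 11.0, 10.5]          # hypothetical far-field heads per fracture
T_true = [2.0, 0.5, 1.0]            # hypothetical transmissivities
profile = forward_profile(T_true, heads, borehole_head=10.0)
T_est = invert(profile, heads, borehole_head=10.0)
assert all(abs(x - y) < 1e-9 for x, y in zip(T_est, T_true))
```

With measurement noise, or when the far-field heads are themselves unknowns, the direct inversion above is replaced by a misfit-minimizing calibration loop, which is the automated step the abstract describes.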
ERIC Educational Resources Information Center
Anderson, Greg; And Others
1996-01-01
Describes the Computer Science Technical Report Project, one of the earliest investigations into the system engineering of digital libraries which pioneered multiinstitutional collaborative research into technical, social, and legal issues related to the development and implementation of a large, heterogeneous, distributed digital library. (LRW)
Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing
NASA Astrophysics Data System (ADS)
Klimentov, A.; Buncic, P.; De, K.; Jha, S.; Maeno, T.; Mount, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Porter, R. J.; Read, K. F.; Vaniachine, A.; Wells, J. C.; Wenaus, T.
2015-05-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled ‘Next Generation Workload Management and Analysis System for Big Data’ (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to setup and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system.
We will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.
Random sphere packing model of heterogeneous propellants
NASA Astrophysics Data System (ADS)
Kochevets, Sergei Victorovich
It is well recognized that combustion of heterogeneous propellants is strongly dependent on the propellant morphology. Recent developments in computing systems make it possible to start three-dimensional modeling of heterogeneous propellant combustion. A key component of such large-scale computations is a realistic model of industrial propellants which retains the true morphology---a goal never achieved before. The research presented develops the Random Sphere Packing Model of heterogeneous propellants and generates numerical samples of actual industrial propellants. This is done by developing a sphere packing algorithm which randomly packs a large number of spheres with a polydisperse size distribution within a rectangular domain. First, the packing code is developed, optimized for performance, and parallelized using the OpenMP shared memory architecture. Second, the morphology and packing fraction of two simple cases of unimodal and bimodal packs are investigated computationally and analytically. It is shown that both the Loose Random Packing and Dense Random Packing limits are not well defined, and the growth rate of the spheres is identified as the key parameter controlling the efficiency of the packing. For a properly chosen growth rate, computational results are found to be in excellent agreement with experimental data. Third, two strategies are developed to define numerical samples of polydisperse heterogeneous propellants: the Deterministic Strategy and the Random Selection Strategy. Using these strategies, numerical samples of industrial propellants are generated. The packing fraction is investigated and it is shown that the experimental values of the packing fraction can be achieved computationally. It is strongly believed that this Random Sphere Packing Model of propellants is a major step forward in the realistic computational modeling of heterogeneous propellant combustion.
In addition, a method of analysis of the morphology of heterogeneous propellants is developed which uses the concept of multi-point correlation functions. A set of intrinsic length scales of local density fluctuations in random heterogeneous propellants is identified by performing a Monte-Carlo study of the correlation functions. This method of analysis shows great promise for understanding the origins of the combustion instability of heterogeneous propellants, and is believed to become a valuable tool for the development of safe and reliable rocket engines.
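The packing procedure described above can be sketched with a minimal random-sequential-addition loop for a bimodal (coarse plus fine) size distribution. This is a simplification: the thesis's algorithm grows spheres at a controlled rate, which this sketch omits, and the radii and box size here are invented:

```python
import math
import random

def pack_spheres(radii, box=10.0, max_tries=2000, seed=3):
    """Random sequential addition: try uniform random positions for each
    sphere, rejecting any that overlap already-placed spheres. A sphere
    that finds no home within max_tries is silently skipped."""
    rng = random.Random(seed)
    placed = []                              # (x, y, z, r)
    for r in radii:
        for _ in range(max_tries):
            x, y, z = (rng.uniform(r, box - r) for _ in range(3))
            if all((x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2
                   >= (r + pr) ** 2 for px, py, pz, pr in placed):
                placed.append((x, y, z, r))
                break
    return placed

def packing_fraction(placed, box=10.0):
    return sum(4 / 3 * math.pi * r ** 3 for *_, r in placed) / box ** 3

# bimodal pack: a few coarse particles plus many fines, coarse placed first
radii = [1.0] * 20 + [0.4] * 200
placed = pack_spheres(radii)
phi = packing_fraction(placed)
assert len(placed) == len(radii)   # everything fits at this dilute loading
```

Plain random sequential addition saturates well below the packing fractions of real propellants, which is precisely why the thesis introduces a controlled sphere growth rate to reach experimentally observed densities.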
DOE Office of Scientific and Technical Information (OSTI.GOV)
Porras-Chaverri, M; University of Costa Rica, San Jose; Galavis, P
Purpose: Evaluate mammographic mean glandular dose (MGD) coefficients for particular known tissue distributions using a novel formalism that incorporates the effect of the heterogeneous glandular tissue distribution, by comparing them with MGD coefficients derived from the corresponding anthropomorphic computer breast phantom. Methods: MGD coefficients were obtained using MCNP5 simulations with the currently used homogeneous assumption and with the heterogeneously-layered breast (HLB) geometry, and compared against those from the computer phantom (ground truth). The tissue distribution for the HLB geometry was estimated using glandularity map image pairs corrected for the presence of non-glandular fibrous tissue. Heterogeneity of the tissue distribution was quantified using the glandular tissue distribution index, Idist. The phantom had a 5 cm compressed breast thickness (MLO and CC views) and 29% whole-breast glandular percentage. Results: Differences as high as 116% were found between the MGD coefficients with the homogeneous breast core assumption and those from the corresponding ground truth. Higher differences were found for cases with more heterogeneous distributions of glandular tissue. The Idist for all cases was in the [−0.8, +0.3] range. The use of the methods presented in this work results in better agreement with ground truth, with an improvement as high as 105 pp. The decrease in difference across all phantom cases was in the [9, 105] pp range, depended on the distribution of glandular tissue, and was larger for the cases with the highest Idist values. Conclusion: Our results suggest that the use of corrected glandularity image pairs, as well as the HLB geometry, improves the estimates of MGD conversion coefficients by accounting for the distribution of glandular tissue within the breast. The accuracy of this approach with respect to ground truth is highly dependent on the particular glandular tissue distribution studied.
Predrag Bakic discloses current funding from NIH, NSF, and DoD, former funding from Real Time Tomography, LLC, and a current research collaboration with Barco and Hologic.
Guo, Xuesong; Zhou, Xin; Chen, Qiuwen; Liu, Junxin
2013-04-01
In the Orbal oxidation ditch, denitrification is primarily accomplished in the outer channel. However, the detailed characteristics of the flow field and dissolved oxygen (DO) distribution in the outer channel are not well understood. Therefore, in this study, the flow velocity and DO concentration in the outer channel of an Orbal oxidation ditch system in a wastewater treatment plant in Beijing (China) were monitored under actual operation conditions. The flow field and DO concentration distributions were analyzed by computed fluid dynamic modeling. In situ monitoring and modeling both showed that the flow velocity was heterogeneous in the outer channel. As a result, the DO was also heterogeneously distributed in the outer channel, with concentration gradients occurring along the flow direction as well as in the cross-section. This heterogeneous DO distribution created many anoxic and aerobic zones, which may have facilitated simultaneous nitrification-denitrification in the channel. These findings may provide supporting information for rational optimization of the performance of the Orbal oxidation ditch.
Faithful qubit transmission in a quantum communication network with heterogeneous channels
NASA Astrophysics Data System (ADS)
Chen, Na; Zhang, Lin Xi; Pei, Chang Xing
2018-04-01
Quantum communication networks enable long-distance qubit transmission and distributed quantum computation. In this paper, a quantum communication network with heterogeneous quantum channels is constructed. A faithful qubit transmission scheme is presented. Detailed calculations and performance analyses show that even in a low-quality quantum channel with serious decoherence, only a modest number of locally prepared target qubits is required to achieve near-deterministic qubit transmission.
Mi, Shichao; Han, Hui; Chen, Cailian; Yan, Jian; Guan, Xinping
2016-02-19
Heterogeneous wireless sensor networks (HWSNs) can achieve more tasks and prolong the network lifetime. However, they are vulnerable to attacks from the environment or malicious nodes. This paper is concerned with the issues of a consensus secure scheme in HWSNs consisting of two types of sensor nodes. Sensor nodes (SNs) have more computation power, while relay nodes (RNs) with low power can only transmit information for sensor nodes. To address the security issues of distributed estimation in HWSNs, we apply the heterogeneity of responsibilities between the two types of sensors and then propose a parameter adjusted-based consensus scheme (PACS) to mitigate the effect of the malicious node. Finally, the convergence property is proven to be guaranteed, and the simulation results validate the effectiveness and efficiency of PACS.
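The consensus machinery underlying such schemes can be illustrated with a minimal sketch of synchronous average consensus. The topology, weights, and step size below are invented for illustration and are not the PACS update rule from the paper; reducing weights[(i, j)] for a suspected malicious neighbor j is where a PACS-style parameter adjustment would act.

```python
# Minimal synchronous average-consensus iteration over a small sensor
# network. All values here (ring topology, eps, weights) are illustrative
# assumptions, not the PACS scheme itself.

def consensus_step(x, neighbors, weights, eps=0.2):
    """One synchronous consensus iteration: each node moves toward the
    weighted average of its neighbors' current estimates."""
    new_x = x[:]
    for i, nbrs in neighbors.items():
        new_x[i] = x[i] + eps * sum(weights[(i, j)] * (x[j] - x[i]) for j in nbrs)
    return new_x

# Four honest sensor nodes, each holding a local estimate of a common
# parameter, connected in a ring.
x = [1.0, 2.0, 3.0, 4.0]
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
weights = {(i, j): 1.0 for i in neighbors for j in neighbors[i]}

for _ in range(200):
    x = consensus_step(x, neighbors, weights)

# With symmetric weights the iteration converges to the true average, 2.5.
print([round(v, 3) for v in x])
```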
GANGA: A tool for computational-task management and easy access to Grid resources
NASA Astrophysics Data System (ADS)
Mościcki, J. T.; Brochu, F.; Ebke, J.; Egede, U.; Elmsheuser, J.; Harrison, K.; Jones, R. W. L.; Lee, H. C.; Liko, D.; Maier, A.; Muraru, A.; Patrick, G. N.; Pajchel, K.; Reece, W.; Samset, B. H.; Slater, M. W.; Soroko, A.; Tan, C. L.; van der Ster, D. C.; Williams, M.
2009-11-01
In this paper, we present the computational task-management tool GANGA, which allows for the specification, submission, bookkeeping and post-processing of computational tasks on a wide set of distributed resources. GANGA has been developed to solve a problem increasingly common in scientific projects, which is that researchers must regularly switch between different processing systems, each with its own command set, to complete their computational tasks. GANGA provides a homogeneous environment for processing data on heterogeneous resources. We give examples from High Energy Physics, demonstrating how an analysis can be developed on a local system and then transparently moved to a Grid system for processing of all available data. GANGA has an API that can be used via an interactive interface, in scripts, or through a GUI. Specific knowledge about types of tasks or computational resources is provided at run-time through a plugin system, making new developments easy to integrate. We give an overview of the GANGA architecture, give examples of current use, and demonstrate how GANGA can be used in many different areas of science. Catalogue identifier: AEEN_v1_0 Program summary URL:
Towards an Approach of Semantic Access Control for Cloud Computing
NASA Astrophysics Data System (ADS)
Hu, Luokai; Ying, Shi; Jia, Xiangyang; Zhao, Kai
With the development of cloud computing, mutual understandability among distributed Access Control Policies (ACPs) has become an important issue in the security field of cloud computing. Semantic Web technology provides a solution to the semantic interoperability of heterogeneous applications. In this paper, we analyze existing access control methods and present a new Semantic Access Control Policy Language (SACPL) for describing ACPs in cloud computing environments. An Access Control Oriented Ontology System (ACOOS) is designed as the semantic basis of SACPL. The ontology-based SACPL language can effectively solve the interoperability issue of distributed ACPs. This study enriches the research on applying semantic web technology in the field of security, and provides a new way of thinking about access control in cloud computing.
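The core idea of ontology-mediated policy interoperability can be sketched in a few lines: attributes from heterogeneous policies are mapped onto shared concepts before comparison. The ontology, policy structure, and vocabulary below are invented for this sketch; SACPL and ACOOS define far richer representations.

```python
# Toy illustration of ontology-mediated matching of access control
# attributes. All terms and structures here are hypothetical examples,
# not the actual SACPL/ACOOS vocabulary.

ONTOLOGY = {  # maps heterogeneous attribute terms onto shared concepts
    "physician": "clinician",
    "doctor": "clinician",
    "patient-record": "medical_record",
    "ehr": "medical_record",
}

def canon(term):
    """Resolve a local term to its shared ontology concept."""
    return ONTOLOGY.get(term, term)

def matches(request, rule):
    """A request satisfies a rule when role and resource resolve to the
    same concepts and the requested action is permitted."""
    return (canon(request["role"]) == canon(rule["role"])
            and canon(request["resource"]) == canon(rule["resource"])
            and request["action"] in rule["actions"])

rule = {"role": "physician", "resource": "patient-record", "actions": {"read"}}
req = {"role": "doctor", "resource": "ehr", "action": "read"}  # different vocabulary
print(matches(req, rule))  # True: the ontology reconciles the two vocabularies
```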
Paralex: An Environment for Parallel Programming in Distributed Systems
1991-12-07
distributed systems is comparable to assembly language programming for traditional sequential systems - the user must resort to low-level primitives ... to accomplish data encoding/decoding, communication, remote execution, synchronization, failure detection and recovery. It is our belief that ... synchronization. Finally, composing parallel programs by interconnecting sequential computations allows automatic support for heterogeneity and fault tolerance
Distributed parallel computing in stochastic modeling of groundwater systems.
Dong, Yanhui; Li, Guomin; Xu, Haizhen
2013-03-01
Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
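The batch-processing pattern described above can be sketched with Python's standard executor API standing in for the Java Parallel Processing Framework; the model function here is a hypothetical stand-in for a real MODFLOW realization, and the worker count is illustrative.

```python
# Sketch of batch-processing stochastic realizations in parallel, in the
# spirit of distributing model runs over a pool of workers. run_realization
# is an invented stand-in for a real groundwater model run.
from concurrent.futures import ThreadPoolExecutor
import random

def run_realization(seed):
    """One stochastic realization: draw a random hydraulic-conductivity
    field and return a summary statistic of it."""
    rng = random.Random(seed)
    field = [rng.lognormvariate(0.0, 1.0) for _ in range(1000)]
    return sum(field) / len(field)

seeds = range(500)  # 500 realizations, as in the capture-zone study
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(run_realization, seeds))

print(len(results))  # 500 independent realization summaries
```

In a distributed setting the workers would be processes on cluster nodes rather than local threads, but the dispatch-and-collect structure is the same.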
Message Efficient Checkpointing and Rollback Recovery in Heterogeneous Mobile Networks
NASA Astrophysics Data System (ADS)
Jaggi, Parmeet Kaur; Singh, Awadhesh Kumar
2016-06-01
Heterogeneous networks provide an appealing way of expanding the computing capability of mobile networks by combining infrastructure-less mobile ad-hoc networks with the infrastructure-based cellular mobile networks. The nodes in such a network range from low-power nodes to macro base stations and thus, vary greatly in their capabilities such as computation power and battery power. The nodes are susceptible to different types of transient and permanent failures and therefore, the algorithms designed for such networks need to be fault-tolerant. The article presents a checkpointing algorithm for the rollback recovery of mobile hosts in a heterogeneous mobile network. Checkpointing is a well established approach to provide fault tolerance in static and cellular mobile distributed systems. However, the use of checkpointing for fault tolerance in a heterogeneous environment remains to be explored. The proposed protocol is based on the results of zigzag paths and zigzag cycles by Netzer-Xu. Considering the heterogeneity prevalent in the network, an uncoordinated checkpointing technique is employed. Yet, useless checkpoints are avoided without causing a high message overhead.
Magdoom, Kulam Najmudeen; Pishko, Gregory L.; Rice, Lori; Pampo, Chris; Siemann, Dietmar W.; Sarntinoranont, Malisa
2014-01-01
Systemic drug delivery to solid tumors involving macromolecular therapeutic agents is challenging for many reasons. Amongst them is their chaotic microvasculature which often leads to inadequate and uneven uptake of the drug. Localized drug delivery can circumvent such obstacles and convection-enhanced delivery (CED) - controlled infusion of the drug directly into the tissue - has emerged as a promising delivery method for distributing macromolecules over larger tissue volumes. In this study, a three-dimensional MR image-based computational porous media transport model accounting for realistic anatomical geometry and tumor leakiness was developed for predicting the interstitial flow field and distribution of albumin tracer following CED into the hind-limb tumor (KHT sarcoma) in a mouse. Sensitivity of the model to changes in infusion flow rate, catheter placement and tissue hydraulic conductivity were investigated. The model predictions suggest that 1) tracer distribution is asymmetric due to heterogeneous porosity; 2) tracer distribution volume varies linearly with infusion volume within the whole leg, and exponentially within the tumor reaching a maximum steady-state value; 3) infusion at the center of the tumor with high flow rates leads to maximum tracer coverage in the tumor with minimal leakage outside; and 4) increasing the tissue hydraulic conductivity lowers the tumor interstitial fluid pressure and decreases the tracer distribution volume within the whole leg and tumor. The model thus predicts that the interstitial fluid flow and drug transport is sensitive to porosity and changes in extracellular space. This image-based model thus serves as a potential tool for exploring the effects of transport heterogeneity in tumors. PMID:24619021
An approach for drag correction based on the local heterogeneity for gas-solid flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Tingwen; Wang, Limin; Rogers, William
2016-09-22
The drag models typically used for gas-solids interaction are mainly developed based on homogeneous systems of flow passing a fixed particle assembly. It has been shown that the heterogeneous structures, i.e., clusters and bubbles in fluidized beds, need to be resolved to account for their effect in the numerical simulations. Since the heterogeneity is essentially captured through the local concentration gradient in the computational cells, this study proposes a simple approach to account for the non-uniformity of solids spatial distribution inside a computational cell and its effect on the interaction between gas and solid phases. Finally, to validate this approach, the predicted drag coefficient has been compared to the results from direct numerical simulations. In addition, the need to account for this type of heterogeneity is discussed for a periodic riser flow simulation with highly resolved numerical grids, and the impact of the proposed correction for drag is demonstrated.
HERA: A New Platform for Embedding Agents in Heterogeneous Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Alonso, Ricardo S.; de Paz, Juan F.; García, Óscar; Gil, Óscar; González, Angélica
Ambient Intelligence (AmI) based systems require the development of innovative solutions that integrate distributed intelligent systems with context-aware technologies. In this sense, Multi-Agent Systems (MAS) and Wireless Sensor Networks (WSN) are two key technologies for developing distributed systems based on AmI scenarios. This paper presents the new HERA (Hardware-Embedded Reactive Agents) platform, which allows the use of dynamic and self-adaptable heterogeneous WSNs in which agents are embedded directly on the wireless nodes. This approach facilitates the inclusion of context-aware capabilities in AmI systems to gather data from their surrounding environments, achieving a higher level of ubiquitous and pervasive computing.
Benkner, Siegfried; Arbona, Antonio; Berti, Guntram; Chiarini, Alessandro; Dunlop, Robert; Engelbrecht, Gerhard; Frangi, Alejandro F; Friedrich, Christoph M; Hanser, Susanne; Hasselmeyer, Peer; Hose, Rod D; Iavindrasana, Jimison; Köhler, Martin; Iacono, Luigi Lo; Lonsdale, Guy; Meyer, Rodolphe; Moore, Bob; Rajasekaran, Hariharan; Summers, Paul E; Wöhrer, Alexander; Wood, Steven
2010-11-01
The increasing volume of data describing human disease processes and the growing complexity of understanding, managing, and sharing such data presents a huge challenge for clinicians and medical researchers. This paper presents the @neurIST system, which provides an infrastructure for biomedical research while aiding clinical care, by bringing together heterogeneous data and complex processing and computing services. Although @neurIST targets the investigation and treatment of cerebral aneurysms, the system's architecture is generic enough that it could be adapted to the treatment of other diseases. Innovations in @neurIST include confining the patient data pertaining to aneurysms inside a single environment that offers clinicians the tools to analyze and interpret patient data and make use of knowledge-based guidance in planning their treatment. Medical researchers gain access to a critical mass of aneurysm related data due to the system's ability to federate distributed information sources. A semantically mediated grid infrastructure ensures that both clinicians and researchers are able to seamlessly access and work on data that is distributed across multiple sites in a secure way in addition to providing computing resources on demand for performing computationally intensive simulations for treatment planning and research.
A scalable approach to solving dense linear algebra problems on hybrid CPU-GPU systems
Song, Fengguang; Dongarra, Jack
2014-10-01
Aiming to fully exploit the computing power of all CPUs and all graphics processing units (GPUs) on hybrid CPU-GPU systems to solve dense linear algebra problems, in this paper we design a class of heterogeneous tile algorithms to maximize the degree of parallelism, to minimize the communication volume, and to accommodate the heterogeneity between CPUs and GPUs. The new heterogeneous tile algorithms are executed upon our decentralized dynamic scheduling runtime system, which schedules a task graph dynamically and transfers data between compute nodes automatically. The runtime system uses a new distributed task assignment protocol to solve data dependencies between tasks without any coordination between processing units. By overlapping computation and communication through dynamic scheduling, we are able to attain scalable performance for the double-precision Cholesky factorization and QR factorization. Finally, our approach demonstrates a performance comparable to Intel MKL on shared-memory multicore systems and better performance than both vendor (e.g., Intel MKL) and open source libraries (e.g., StarPU) in the following three environments: heterogeneous clusters with GPUs, conventional clusters without GPUs, and shared-memory systems with multiple GPUs.
Open Source Live Distributions for Computer Forensics
NASA Astrophysics Data System (ADS)
Giustini, Giancarlo; Andreolini, Mauro; Colajanni, Michele
Current distributions of open source forensic software provide digital investigators with a large set of heterogeneous tools. Their use is not always focused on the target and requires high technical expertise. We present a new GNU/Linux live distribution, named CAINE (Computer Aided INvestigative Environment) that contains a collection of tools wrapped up into a user friendly environment. The CAINE forensic framework introduces novel important features, aimed at filling the interoperability gap across different forensic tools. Moreover, it provides a homogeneous graphical interface that drives digital investigators during the acquisition and analysis of electronic evidence, and it offers a semi-automatic mechanism for the creation of the final report.
Spiking network simulation code for petascale computers.
Kunkel, Susanne; Schmidt, Maximilian; Eppler, Jochen M; Plesser, Hans E; Masumoto, Gen; Igarashi, Jun; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus; Helias, Moritz
2014-01-01
Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today.
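The distributed storage principle described above (a synapse consumes memory only on the compute node harboring its target neuron) can be illustrated with a minimal sketch. The node layout and data structures here are invented for illustration; NEST's actual structures are far more elaborate.

```python
# Minimal illustration of target-node synapse storage: a synapse is
# recorded only on the compute node that hosts its target neuron.
# Round-robin placement and the dict-of-lists store are assumptions
# made for this sketch.

N_NODES = 4

def host_of(neuron_id):
    """Round-robin assignment of neurons to compute nodes."""
    return neuron_id % N_NODES

# Per-node synapse store: target neuron -> list of (source, weight).
stores = [dict() for _ in range(N_NODES)]

def create_synapse(source, target, weight):
    node = host_of(target)  # only the target's node stores the synapse
    stores[node].setdefault(target, []).append((source, weight))

create_synapse(0, 5, 0.5)
create_synapse(1, 5, 0.7)
create_synapse(2, 6, 0.1)

# Neuron 5 lives on node 1, so both of its incoming synapses sit there.
print(stores[host_of(5)][5])  # [(0, 0.5), (1, 0.7)]
```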
NASA Astrophysics Data System (ADS)
Noda, H.; Lapusta, N.; Kanamori, H.
2010-12-01
Static stress drop is often estimated using the seismic moment and rupture area based on a model for uniform stress drop distribution; we denote this estimate by Δσ_M. Δσ_M is sometimes interpreted as the spatial average of stress change over the ruptured area, denoted here as Δσ_A, and used accordingly, for example, to discuss the relation between recurrence interval and the healing of the frictional surface in a system with one degree of freedom [e.g., Marone, 1998]. Δσ_M is also used to estimate available energy (defined as the strain energy change computed using the final stress state as the reference one) and radiation efficiency [e.g., Venkataraman and Kanamori, 2004]. In this work, we define a stress drop measure, Δσ_E, that would enter the exact computation of available energy and radiation efficiency. The three stress drop measures - Δσ_M that can be estimated from observations, Δσ_A, and Δσ_E - are equal if the static stress change is spatially uniform, and that motivates substituting Δσ_M for the other two quantities in applications. However, finite source inversions suggest that the stress change is heterogeneous in natural earthquakes [e.g., Bouchon, 1997]. Since Δσ_M is the average of stress change weighted by slip distribution due to a uniform stress drop [Madariaga, 1979], Δσ_E is the average of stress change weighted by actual slip distribution in the event (this work), and Δσ_A is the simple spatial average of stress change, the three measures should, in general, be different. Here, we investigate the effect of heterogeneity aiming to understand how to use the seismological estimates of stress drop appropriately. We create heterogeneous slip distributions for both circular and rectangular planar ruptures using the approach motivated by Liu-Zeng et al. [2005] and Lavallée et al. [2005]. We find that, indeed, the three stress drop measures differ in our scenarios.
In particular, heterogeneity increases Δσ_E and thus the available energy when the seismic moment (and hence Δσ_M) is preserved. So using Δσ_M instead of Δσ_E would underestimate available energy and hence overestimate radiation efficiency. For a range of parameters, Δσ_E is well-approximated by the seismic estimate Δσ_M if the latter is computed using a modified (decreased) rupture area that excludes low-slipped regions; a qualitatively similar procedure is already being used in practice [Somerville et al., 1999].
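The distinction between the simple spatial average Δσ_A and the slip-weighted average Δσ_E can be made concrete on a toy discretized fault; the slip and stress-change values below are invented purely for illustration.

```python
# Toy comparison of the simple spatial average of stress change (dsig_A)
# and the slip-weighted average (dsig_E) on a discretized fault with
# equal-area patches. Numbers are invented for illustration.

stress_drop = [1.0, 4.0, 1.0, 4.0]  # stress change on each patch (MPa)
slip        = [0.1, 1.0, 0.1, 1.0]  # heterogeneous slip (m); large slip
                                    # coincides with large stress drop

dsig_A = sum(stress_drop) / len(stress_drop)
dsig_E = sum(d * s for d, s in zip(stress_drop, slip)) / sum(slip)

print(dsig_A)             # 2.5 MPa: every patch counts equally
print(round(dsig_E, 3))   # 3.727 MPa: high-slip patches dominate the average
```

Because slip and stress drop are positively correlated here, the slip-weighted measure exceeds the simple spatial average, mirroring the qualitative effect of heterogeneity described in the abstract.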
Dynamic resource allocation scheme for distributed heterogeneous computer systems
NASA Technical Reports Server (NTRS)
Liu, Howard T. (Inventor); Silvester, John A. (Inventor)
1991-01-01
This invention relates to resource allocation in computer systems, and more particularly, to a method and associated apparatus for shortening response time and improving efficiency of a heterogeneous distributed networked computer system by reallocating the jobs queued up for busy nodes to idle, or less-busy nodes. In accordance with the algorithm (SIDA for short), the load-sharing is initiated by the server device in a manner such that extra overhead is not imposed on the system during heavily-loaded conditions. The algorithm employed in the present invention uses a dual-mode, server-initiated approach. Jobs are transferred from heavily burdened nodes (i.e., over a high threshold limit) to low burdened nodes at the initiation of the receiving node when: (1) a job finishes at a node which is burdened below a pre-established threshold level, or (2) a node is idle for a period of time as established by a wakeup timer at the node. The invention uses a combination of the local queue length and the local service rate ratio at each node as the workload indicator.
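The dual-mode, receiver-initiated transfer decision can be sketched as follows. The thresholds and the workload indicator here are simplified stand-ins for the patent's combined queue-length and service-rate measure, and all numbers are invented.

```python
# Sketch of a receiver-initiated load-sharing decision: a node requests
# work when it drops below a low load threshold (a job just finished) or
# when its idle wakeup timer fires; the donor is the most heavily
# burdened node above the high threshold. Thresholds and the indicator
# are illustrative assumptions.

LOW_THRESHOLD = 2.0
HIGH_THRESHOLD = 6.0

def workload(queue_len, service_rate):
    """Simplified workload indicator: queued jobs normalized by speed."""
    return queue_len / service_rate

def should_request_jobs(queue_len, service_rate, idle_timer_fired):
    """Mode 1: load fell below the low threshold. Mode 2: wakeup timer."""
    return workload(queue_len, service_rate) < LOW_THRESHOLD or idle_timer_fired

def pick_donor(nodes):
    """Choose the most burdened node above the high threshold, if any."""
    busy = [n for n in nodes if workload(*nodes[n]) > HIGH_THRESHOLD]
    return max(busy, key=lambda n: workload(*nodes[n])) if busy else None

nodes = {"A": (16, 2.0), "B": (3, 1.0), "C": (1, 1.0)}  # (queue, rate)
if should_request_jobs(1, 1.0, idle_timer_fired=False):  # node C finishes a job
    print(pick_donor(nodes))  # "A": workload 8.0 exceeds the high threshold
```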
Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing
Klimentov, A.; Buncic, P.; De, K.; ...
2015-05-22
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, O(10^3) users, and ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system.
Finally, we will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.
NASA Technical Reports Server (NTRS)
Lawrence, Charles; Putt, Charles W.
1997-01-01
The Visual Computing Environment (VCE) is a NASA Lewis Research Center project to develop a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis. The objectives of VCE are to (1) develop a visual computing environment for controlling the execution of individual simulation codes that are running in parallel and are distributed on heterogeneous host machines in a networked environment, (2) develop numerical coupling algorithms for interchanging boundary conditions between codes with arbitrary grid matching and different levels of dimensionality, (3) provide a graphical interface for simulation setup and control, and (4) provide tools for online visualization and plotting. VCE was designed to provide a distributed, object-oriented environment. Mechanisms are provided for creating and manipulating objects, such as grids, boundary conditions, and solution data. This environment includes parallel virtual machine (PVM) for distributed processing. The research teams are the only ones that have the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.
Users can interactively select and couple any set of codes that have been modified to run in a parallel distributed fashion on a cluster of heterogeneous workstations. A scripting facility allows users to dictate the sequence of events that make up the particular simulation.
Additional Security Considerations for Grid Management
NASA Technical Reports Server (NTRS)
Eidson, Thomas M.
2003-01-01
The use of Grid computing environments is growing in popularity. A Grid computing environment is primarily a wide area network that encompasses multiple local area networks, where some of the local area networks are managed by different organizations. A Grid computing environment also includes common interfaces for distributed computing software so that the heterogeneous set of machines that make up the Grid can be used more easily. The other key feature of a Grid is that the distributed computing software includes appropriate security technology. The focus of most Grid software is on the security involved with application execution, file transfers, and other remote computing procedures. However, there are other important security issues related to the management of a Grid and the users who use that Grid. This note discusses these additional security issues and makes several suggestions as how they can be managed.
A Domain-Specific Language for Aviation Domain Interoperability
ERIC Educational Resources Information Center
Comitz, Paul
2013-01-01
Modern information systems require a flexible, scalable, and upgradeable infrastructure that allows communication and collaboration between heterogeneous information processing and computing environments. Aviation systems from different organizations often use differing representations and distribution policies for the same data and messages,…
NASA's Information Power Grid: Large Scale Distributed Computing and Data Management
NASA Technical Reports Server (NTRS)
Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)
2001-01-01
Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require aggregation of widely distributed computing, data, and intellectual resources. Such simulations - e.g. whole system aircraft simulation and whole system living cell simulation - require integrating applications and data that are developed by different teams of researchers, frequently in different locations. The research teams are the only ones that have the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.
DNET: A communications facility for distributed heterogeneous computing
NASA Technical Reports Server (NTRS)
Tole, John; Nagappan, S.; Clayton, J.; Ruotolo, P.; Williamson, C.; Solow, H.
1989-01-01
This document describes DNET, a heterogeneous data communications networking facility. DNET allows programs operating on hosts on dissimilar networks to communicate with one another without concern for computer hardware, network protocol, or operating system differences. The overall DNET network is defined as the collection of host machines/networks on which the DNET software is operating. Each underlying network is considered a DNET 'domain'. Data communications service is provided between any two processes on any two hosts on any of the networks (domains) that may be reached via DNET. DNET provides protocol-transparent, reliable, streaming data transmission between hosts (restricted initially to DECnet and TCP/IP networks). DNET also provides variable length datagram service with optional return receipts.
NASA Astrophysics Data System (ADS)
DeBeer, Chris M.; Pomeroy, John W.
2017-10-01
The spatial heterogeneity of mountain snow cover and ablation is important in controlling patterns of snow cover depletion (SCD), meltwater production, and runoff, yet is not well-represented in most large-scale hydrological models and land surface schemes. Analyses were conducted in this study to examine the influence of various representations of snow cover and melt energy heterogeneity on both simulated SCD and stream discharge from a small alpine basin in the Canadian Rocky Mountains. Simulations were performed using the Cold Regions Hydrological Model (CRHM), where point-scale snowmelt computations were made using a snowpack energy balance formulation and applied to spatial frequency distributions of snow water equivalent (SWE) on individual slope-, aspect-, and landcover-based hydrological response units (HRUs) in the basin. Hydrological routines were added to represent the vertical and lateral transfers of water through the basin and channel system. From previous studies it is understood that the heterogeneity of late winter SWE is a primary control on patterns of SCD. The analyses here showed that spatial variation in applied melt energy, mainly due to differences in net radiation, has an important influence on SCD at multiple scales and on basin discharge, and cannot be neglected without serious error in the prediction of these variables. A single basin SWE distribution using the basin-wide mean SWE and coefficient of variation (CV; standard deviation/mean) was found to represent the fine-scale spatial heterogeneity of SWE sufficiently well. Simulations that accounted for differences in mean SWE among HRUs but neglected the sub-HRU heterogeneity of SWE were found to yield discharge results similar to those of simulations that included this heterogeneity, while SCD was poorly represented, even at the basin level.
Finally, applying point-scale snowmelt computations based on a single SWE depth for each HRU (thereby neglecting spatial differences in internal snowpack energetics over the distributions) was found to yield SCD and discharge results similar to those of simulations that resolved internal energy differences. Spatial/internal snowpack melt energy effects are more pronounced earlier in spring, before the main period of snowmelt and SCD, as shown in previously published work. The paper discusses the importance of these findings as they apply to the warranted complexity of snowmelt process simulation in cold mountain environments, and shows how the end-of-winter SWE distribution represents an effective means of resolving snow cover heterogeneity at multiple scales for modelling, even in steep and complex terrain.
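The contrast the abstract draws — SWE heterogeneity versus melt-energy heterogeneity — can be illustrated with a minimal sketch. Assuming a lognormal SWE distribution parameterized by its mean and CV (a common modelling choice, not necessarily the paper's exact one), applying a spatially uniform melt yields a snow-cover depletion curve; the function name and numbers are illustrative:

```python
import math
import random

def snow_cover_depletion(mean_swe, cv, melt_steps, n=10000, seed=1):
    """Snow-covered area fraction after each cumulative melt amount is
    applied uniformly to a lognormal SWE distribution (given mean and CV)."""
    sigma2 = math.log(1.0 + cv ** 2)              # lognormal moments from mean/CV
    mu = math.log(mean_swe) - sigma2 / 2.0
    rng = random.Random(seed)
    swe = [rng.lognormvariate(mu, math.sqrt(sigma2)) for _ in range(n)]
    return [sum(1 for s in swe if s > melt) / n for melt in melt_steps]

# SWE in mm; uniform melt deliberately neglects the melt-energy
# heterogeneity whose importance the study demonstrates
sca = snow_cover_depletion(mean_swe=200.0, cv=0.5,
                           melt_steps=[0.0, 100.0, 200.0, 400.0])
print(sca[0])  # no melt -> full snow cover
```

Resolving melt-energy heterogeneity would replace the single `melt` value with a per-area melt distribution, which changes the shape of the depletion curve.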
Distributed MRI reconstruction using Gadgetron-based cloud computing.
Xue, Hui; Inati, Souheil; Sørensen, Thomas Sangild; Kellman, Peter; Hansen, Michael S
2015-03-01
To expand the open source Gadgetron reconstruction framework to support distributed computing and to demonstrate that a multinode version of the Gadgetron can be used to provide nonlinear reconstruction with clinically acceptable latency. The Gadgetron framework was extended with new software components that enable an arbitrary number of Gadgetron instances to collaborate on a reconstruction task. This cloud-enabled version of the Gadgetron was deployed on three different distributed computing platforms ranging from a heterogeneous collection of commodity computers to the commercial Amazon Elastic Compute Cloud. The Gadgetron cloud was used to provide nonlinear, compressed sensing reconstruction on a clinical scanner with low reconstruction latency (e.g., cardiac and neuroimaging applications). The proposed setup was able to handle acquisition and ℓ1-SPIRiT reconstruction of nine high temporal resolution real-time, cardiac short axis cine acquisitions, covering the ventricles for functional evaluation, in under 1 min. A three-dimensional high-resolution brain acquisition with 1 mm³ isotropic pixel size was acquired and reconstructed with nonlinear reconstruction in less than 5 min. A distributed computing enabled Gadgetron provides a scalable way to improve reconstruction performance using commodity cluster computing. Nonlinear, compressed sensing reconstruction can be deployed clinically with low image reconstruction latency. © 2014 Wiley Periodicals, Inc.
Programming model for distributed intelligent systems
NASA Technical Reports Server (NTRS)
Sztipanovits, J.; Biegl, C.; Karsai, G.; Bogunovic, N.; Purves, B.; Williams, R.; Christiansen, T.
1988-01-01
A programming model and architecture which was developed for the design and implementation of complex, heterogeneous measurement and control systems is described. The Multigraph Architecture integrates artificial intelligence techniques with conventional software technologies, offers a unified framework for distributed and shared memory based parallel computational models and supports multiple programming paradigms. The system can be implemented on different hardware architectures and can be adapted to strongly different applications.
CQPSO scheduling algorithm for heterogeneous multi-core DAG task model
NASA Astrophysics Data System (ADS)
Zhai, Wenzheng; Hu, Yue-Li; Ran, Feng
2017-07-01
Efficient task scheduling is critical to achieve high performance in a heterogeneous multi-core computing environment. The paper focuses on the heterogeneous multi-core directed acyclic graph (DAG) task model and proposes a novel task scheduling method based on an improved chaotic quantum-behaved particle swarm optimization (CQPSO) algorithm. A task priority scheduling list was built. The processor with the minimum cumulative earliest finish time (EFT) was chosen for the first task assignment. The task precedence relationships were satisfied and the total execution time of all tasks was minimized. The experimental results show that the proposed algorithm has strong optimization ability, is simple and feasible, converges quickly, and can be applied to task scheduling optimization in other heterogeneous and distributed environments.
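The CQPSO optimizer itself is not reproduced here, but the EFT-based assignment it builds on can be sketched as a plain greedy list scheduler that places each task, in precedence order, on the processor with the minimum earliest finish time. The function name and the toy task graph are illustrative, not from the paper:

```python
def schedule_min_eft(tasks, deps, cost):
    """Greedy list scheduling: assign each task (given in topological order)
    to the heterogeneous processor with minimum earliest finish time (EFT)."""
    n_proc = len(next(iter(cost.values())))
    proc_free = [0.0] * n_proc           # time at which each processor is idle
    finish = {}                          # task -> finish time
    assignment = {}                      # task -> processor index
    for t in tasks:
        # a task is ready once all its predecessors have finished
        ready = max((finish[d] for d in deps.get(t, ())), default=0.0)
        efts = [max(proc_free[p], ready) + cost[t][p] for p in range(n_proc)]
        p_best = min(range(n_proc), key=lambda p: efts[p])
        finish[t] = efts[p_best]
        proc_free[p_best] = finish[t]
        assignment[t] = p_best
    return assignment, max(finish.values())

# 3 tasks on 2 heterogeneous processors; "c" depends on "a" and "b"
cost = {"a": [2.0, 3.0], "b": [4.0, 2.0], "c": [1.0, 2.0]}
deps = {"c": ["a", "b"]}
assignment, makespan = schedule_min_eft(["a", "b", "c"], deps, cost)
print(assignment, makespan)
```

A metaheuristic such as CQPSO searches over the task priority ordering fed into a scheduler of this kind, rather than fixing a single topological order.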
NEXUS - Resilient Intelligent Middleware
NASA Astrophysics Data System (ADS)
Kaveh, N.; Hercock, R. Ghanea
Service-oriented computing, a composition of distributed-object, component-based, and Web-based computing concepts, is becoming the widespread choice for developing dynamic heterogeneous software assets available as services across a network. One of the major strengths of service-oriented technologies is the high abstraction layer and large granularity level at which software assets are viewed compared to traditional object-oriented technologies. Collaboration through encapsulated and separately defined service interfaces creates a service-oriented environment, whereby multiple services can be linked together through their interfaces to compose a functional system. This approach enables better integration of legacy and non-legacy services, via wrapper interfaces, and allows for service composition at a more abstract level, especially in cases such as vertical market stacks. The heterogeneous nature of service-oriented technologies and the granularity of their software components make them a suitable computing model in the pervasive domain.
Explorative search of distributed bio-data to answer complex biomedical questions
2014-01-01
Background The huge amount of biomedical-molecular data increasingly produced is providing scientists with potentially valuable information. Yet, such data quantity makes it difficult to find and extract the data that are most reliable and most related to the biomedical questions to be answered, which are increasingly complex and often involve many different biomedical-molecular aspects. Such questions can be addressed only by comprehensively searching and exploring different types of data, which frequently are ordered and provided by different data sources. Search Computing has been proposed for the management and integration of ranked results from heterogeneous search services. Here, we present its novel application to the explorative search of distributed biomedical-molecular data and the integration of the search results to answer complex biomedical questions. Results A set of available bioinformatics search services has been modelled and registered in the Search Computing framework, and a Bioinformatics Search Computing application (Bio-SeCo) using such services has been created and made publicly available at http://www.bioinformatics.deib.polimi.it/bio-seco/seco/. It offers an integrated environment which eases search, exploration and ranking-aware combination of heterogeneous data provided by the available registered services, and supplies global results that can support answering complex multi-topic biomedical questions. Conclusions By using Bio-SeCo, scientists can explore the very large and very heterogeneous biomedical-molecular data available. They can easily make different explorative search attempts, inspect obtained results, select the most appropriate, expand or refine them, and move forward and backward in the construction of a global complex biomedical query on multiple distributed sources that could eventually find the most relevant results.
Thus, it provides extremely useful automated support for exploratory integrated bio-search, which is fundamental for Life Science data-driven knowledge discovery. PMID:24564278
Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing
NASA Technical Reports Server (NTRS)
Ozguner, Fusun
1996-01-01
Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall execution time T_par of the application is dependent on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
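The observation that inherently sequential code segments cap the achievable speedup is Amdahl's law. A one-function sketch (illustrative, not from the report):

```python
def amdahl_speedup(serial_fraction, n_processors):
    """Upper bound on parallel speedup when serial_fraction of the
    work must execute sequentially (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

# With 20% inherently sequential code, even 64 processors cap speedup below 5x.
print(round(amdahl_speedup(0.2, 64), 2))
```

The asymptotic limit as the processor count grows is simply 1 / serial_fraction, which is why the report's coarse-grained applications showed poor speedup despite added hardware.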
A hydrological emulator for global applications - HE v1.0.0
NASA Astrophysics Data System (ADS)
Liu, Yaling; Hejazi, Mohamad; Li, Hongyi; Zhang, Xuesong; Leng, Guoyong
2018-03-01
While global hydrological models (GHMs) are very useful in exploring water resources and interactions between the Earth and human systems, their use often requires numerous model inputs, complex model calibration, and high computation costs. To overcome these challenges, we construct an efficient open-source and ready-to-use hydrological emulator (HE) that can mimic complex GHMs at a range of spatial scales (e.g., basin, region, globe). More specifically, we construct both a lumped and a distributed scheme of the HE based on the monthly abcd model to explore the tradeoff between computational cost and model fidelity. Model predictability and computational efficiency are evaluated in simulating global runoff from 1971 to 2010 with both the lumped and distributed schemes. The results are compared against the runoff product from the widely used Variable Infiltration Capacity (VIC) model. Our evaluation indicates that the lumped and distributed schemes present comparable results regarding annual total quantity, spatial pattern, and temporal variation of the major water fluxes (e.g., total runoff, evapotranspiration) across the global 235 basins (e.g., correlation coefficient r between the annual total runoff from either of these two schemes and the VIC is > 0.96), except for several cold (e.g., Arctic, interior Tibet), dry (e.g., North Africa) and mountainous (e.g., Argentina) regions. Compared against the monthly total runoff product from the VIC (aggregated from daily runoff), the global mean Kling-Gupta efficiencies are 0.75 and 0.79 for the lumped and distributed schemes, respectively, with the distributed scheme better capturing spatial heterogeneity. Notably, the computation efficiency of the lumped scheme is 2 orders of magnitude higher than the distributed one and 7 orders more efficient than the VIC model. 
A case study of uncertainty analysis for the 16 world basins with the largest annual streamflow is conducted using 100 000 model simulations, and it demonstrates the lumped scheme's extraordinary advantage in computational efficiency. Our results suggest that the revised lumped abcd model can serve as an efficient and reasonable HE for complex GHMs and is suitable for broad practical use, and the distributed scheme is also an efficient alternative if spatial heterogeneity is of more interest.
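The Kling-Gupta efficiency used for evaluation has a standard closed form, KGE = 1 - sqrt((r - 1)² + (α - 1)² + (β - 1)²), where r is the correlation between simulated and observed series, α the ratio of their standard deviations, and β the ratio of their means. A minimal implementation (the data values are illustrative):

```python
import math

def kling_gupta_efficiency(sim, obs):
    """KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2)."""
    n = len(sim)
    mean_s, mean_o = sum(sim) / n, sum(obs) / n
    cov = sum((s - mean_s) * (o - mean_o) for s, o in zip(sim, obs)) / n
    std_s = math.sqrt(sum((s - mean_s) ** 2 for s in sim) / n)
    std_o = math.sqrt(sum((o - mean_o) ** 2 for o in obs) / n)
    r = cov / (std_s * std_o)      # linear correlation
    alpha = std_s / std_o          # variability ratio
    beta = mean_s / mean_o         # bias ratio
    return 1.0 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = [10.0, 20.0, 30.0, 25.0, 15.0]
print(round(kling_gupta_efficiency(obs, obs), 3))  # perfect match -> 1.0
```

A KGE of 1 indicates a perfect match, so the reported global means of 0.75 (lumped) and 0.79 (distributed) quantify how closely each emulator scheme tracks the VIC runoff product.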
Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 1; Formulation
NASA Technical Reports Server (NTRS)
Walsh, J. L.; Townsend, J. C.; Salas, A. O.; Samareh, J. A.; Mukhopadhyay, V.; Barthelemy, J.-F.
2000-01-01
An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a highspeed civil transport configuration. The paper describes the engineering aspects of formulating the optimization by integrating these analysis codes and associated interface codes into the system. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture (CORBA) compliant software product. A companion paper presents currently available results.
NASA Technical Reports Server (NTRS)
Walsh, J. L.; Weston, R. P.; Samareh, J. A.; Mason, B. H.; Green, L. L.; Biedron, R. T.
2000-01-01
An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity finite-element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes both the preliminary results from implementing and validating the multidisciplinary analysis and the results from an aerodynamic optimization. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture compliant software product. A companion paper describes the formulation of the multidisciplinary analysis and optimization system.
NASA Astrophysics Data System (ADS)
Ibrahima, Fayadhoi; Meyer, Daniel; Tchelepi, Hamdi
2016-04-01
Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are crucial to explore possible scenarios and assess risks in subsurface problems. In particular, nonlinear two-phase flows in porous media are essential, yet challenging, in reservoir simulation and hydrology. Adding highly heterogeneous and uncertain input, such as the permeability and porosity fields, transforms the estimation of the flow response into a tough stochastic problem for which computationally expensive Monte Carlo (MC) simulations remain the preferred option. We propose an alternative approach to evaluate the probability distribution of the (water) saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. We give a computationally efficient and numerically accurate method to estimate the one-point probability density function (PDF) and cumulative distribution function (CDF) of the (water) saturation. The distribution method draws inspiration from a Lagrangian approach to the stochastic transport problem and expresses the saturation PDF and CDF essentially in terms of a deterministic mapping and the distribution and statistics of scalar random fields. In a large class of applications, these random fields can be estimated at low computational cost (few MC runs), thus making the distribution method attractive. Even though the method relies on a key assumption of fixed streamlines, we show that it performs well for high input variances, which is the case of interest. Once the saturation distribution is determined, any one-point statistics thereof can be obtained, especially the saturation average and standard deviation. Moreover, the probability of rare events and saturation quantiles (e.g. P10, P50 and P90) can be efficiently derived from the distribution method.
These statistics can then be used for risk assessment, as well as data assimilation and uncertainty reduction in the prior knowledge of input distributions. We provide various examples and comparisons with MC simulations to illustrate the performance of the method.
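The Monte Carlo baseline the method is compared against amounts to building an empirical CDF from sampled responses and reading off quantiles such as P10/P50/P90. A minimal sketch (nearest-rank quantiles; the quantile convention and the sample values are assumptions for illustration):

```python
def empirical_cdf_and_quantiles(samples, probs=(0.10, 0.50, 0.90)):
    """Empirical CDF of a scalar response (e.g. water saturation at a point)
    and its nearest-rank quantiles."""
    s = sorted(samples)
    n = len(s)
    def cdf(x):
        # fraction of samples at or below x
        return sum(1 for v in s if v <= x) / n
    quantiles = {p: s[min(n - 1, int(p * n))] for p in probs}
    return cdf, quantiles

# stand-in for saturations from many MC flow simulations
samples = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
cdf, q = empirical_cdf_and_quantiles(samples)
print(cdf(0.5), q[0.50])
```

The appeal of the distribution method is that it recovers comparable PDF/CDF estimates from only a few MC runs of the underlying scalar random fields, instead of the many full flow simulations this brute-force estimator needs.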
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCaskey, Alexander J.
There is a lack of state-of-the-art quantum computing simulation software that scales on heterogeneous systems like Titan. The Tensor Network Quantum Virtual Machine (TNQVM) provides a quantum simulator that uses a distributed network of GPUs to simulate quantum circuits, leveraging recent results from tensor network theory.
A global distributed storage architecture
NASA Technical Reports Server (NTRS)
Lionikis, Nemo M.; Shields, Michael F.
1996-01-01
NSA architects and planners have come to realize that to gain the maximum benefit from, and keep pace with, emerging technologies, we must move to a radically different computing architecture. The compute complex of the future will be a distributed heterogeneous environment, where, to a much greater extent than today, network-based services are invoked to obtain resources. Among the rewards of implementing the services-based view are that it insulates the user from much of the complexity of our multi-platform, networked, computer and storage environment and hides its diverse underlying implementation details. In this paper, we will describe one of the fundamental services being built in our envisioned infrastructure; a global, distributed archive with near-real-time access characteristics. Our approach for adapting mass storage services to this infrastructure will become clear as the service is discussed.
Transformation of OODT CAS to Perform Larger Tasks
NASA Technical Reports Server (NTRS)
Mattmann, Chris; Freeborn, Dana; Crichton, Daniel; Hughes, John; Ramirez, Paul; Hardman, Sean; Woollard, David; Kelly, Sean
2008-01-01
A computer program denoted OODT CAS has been transformed to enable performance of larger tasks that involve greatly increased data volumes and increasingly intensive processing of data on heterogeneous, geographically dispersed computers. Prior to the transformation, OODT CAS (also alternatively denoted, simply, 'CAS') [wherein 'OODT' signifies 'Object-Oriented Data Technology' and 'CAS' signifies 'Catalog and Archive Service'] was a proven software component used to manage scientific data from spaceflight missions. In the transformation, CAS was split into two separate components representing its canonical capabilities: file management and workflow management. In addition, CAS was augmented by addition of a resource-management component. This third component enables CAS to manage heterogeneous computing by use of diverse resources, including high-performance clusters of computers, commodity computing hardware, and grid computing infrastructures. CAS is now more easily maintainable, evolvable, and reusable. These components can be used separately or, taking advantage of synergies, can be used together. Other elements of the transformation included addition of a separate Web presentation layer that supports distribution of data products via Really Simple Syndication (RSS) feeds, and provision for full Resource Description Framework (RDF) exports of metadata.
Heterogeneous database integration in biomedicine.
Sujansky, W
2001-08-01
The rapid expansion of biomedical knowledge, reduction in computing costs, and spread of internet access have created an ocean of electronic data. The decentralized nature of our scientific community and healthcare system, however, has resulted in a patchwork of diverse, or heterogeneous, database implementations, making access to and aggregation of data across databases very difficult. The database heterogeneity problem applies equally to clinical data describing individual patients and biological data characterizing our genome. Specifically, databases are highly heterogeneous with respect to the data models they employ, the data schemas they specify, the query languages they support, and the terminologies they recognize. Heterogeneous database systems attempt to unify disparate databases by providing uniform conceptual schemas that resolve representational heterogeneities, and by providing querying capabilities that aggregate and integrate distributed data. Research in this area has applied a variety of database and knowledge-based techniques, including semantic data modeling, ontology definition, query translation, query optimization, and terminology mapping. Existing systems have addressed heterogeneous database integration in the realms of molecular biology, hospital information systems, and application portability.
Folding mechanism of β-hairpin trpzip2: heterogeneity, transition state and folding pathways.
Xiao, Yi; Chen, Changjun; He, Yi
2009-06-22
We review the studies on the folding mechanism of the beta-hairpin tryptophan zipper 2 (trpzip2) and present some additional computational results to refine the picture of folding heterogeneity and pathways. We show that trpzip2 can have a two-state or a multi-state folding pattern, depending on whether it folds within the native basin or through local state basins on the high-dimensional free energy surface; Trpzip2 can fold along different pathways according to the packing order of tryptophan pairs. We also point out some important problems related to the folding mechanism of trpzip2 that still need clarification, e.g., a wide distribution of the computed conformations for the transition state ensemble.
NASA Astrophysics Data System (ADS)
Toramatsu, Chie; Inaniwa, Taku
2016-12-01
In charged particle therapy with pencil beam scanning (PBS), localization of the dose in the Bragg peak makes dose distributions sensitive to lateral tissue heterogeneities. The sensitivity of a PBS plan to lateral tissue heterogeneities can be reduced by selecting appropriate beam angles. The purpose of this study is to develop a fast and accurate method of beam angle selection for PBS. The lateral tissue heterogeneity surrounding the path of the pencil beams at a given angle was quantified with the heterogeneity number, representing the variation of the Bragg peak depth across the cross section of the beams using the stopping power ratio of body tissues with respect to water. To shorten the computation time, one-dimensional dose optimization was conducted along the central axis of the pencil beams as they were directed by the scanning magnets. The heterogeneity numbers were derived for all possible beam angles for treatment. The angles leading to the minimum mean heterogeneity number were selected as the optimal beam angles. Three clinical cases of head and neck cancer were used to evaluate the developed method. Dose distributions and their robustness to setup and range errors were evaluated for all tested angles, and their relation to the heterogeneity numbers was investigated. The mean heterogeneity number varied from 1.2 mm to 10.6 mm in the evaluated cases. By selecting a field with a low mean heterogeneity number, target dose coverage and robustness against setup and range errors were improved. The developed method is simple, fast, accurate and applicable for beam angle selection in charged particle therapy with PBS.
NASA Astrophysics Data System (ADS)
Suzuki, K.; Takayama, T.; Fujii, T.
2016-12-01
We will present possible heterogeneity of pore-water salinity within the methane hydrate reservoir of Daini-Atsumi Knoll, on the basis of logging-while-drilling (LWD) data and several kinds of wire-line logging datasets. The LWD and wire-line logging were carried out during 2012 to 2013, before/after the first offshore gas production test from a marine methane-hydrate reservoir at Daini-Atsumi Knoll along the northeast Nankai Trough. Several data from the logging, especially data from the reservoir saturation tool (RST), gave us possible interpretations of the heterogeneous distribution of chlorinity within the methane-hydrate reservoir. The computed pore-water chlorinity could be interpreted as concentration of chlorinity during gas-hydrate formation. This year, we drilled several wells at Daini-Atsumi Knoll again for the next gas production test, and we have also found possible chlorinity heterogeneity from LWD data of the neutron-capture cross section (i.e., Sigma). The distribution of chlorinity within the gas-hydrate reservoir may improve our understanding of gas-hydrate crystallization and/or dissociation in the turbidite reservoir at Daini-Atsumi Knoll. This research is conducted as a part of the Research Consortium for Methane Hydrate Resource in Japan (MH21 Research Consortium).
Performance of a Heterogeneous Grid Partitioner for N-body Applications
NASA Technical Reports Server (NTRS)
Harvey, Daniel J.; Das, Sajal K.; Biswas, Rupak
2003-01-01
An important characteristic of distributed grids is that they allow geographically separated multicomputers to be tied together in a transparent virtual environment to solve large-scale computational problems. However, many of these applications require effective runtime load balancing for the resulting solutions to be viable. Recently, we developed a latency tolerant partitioner, called MinEX, specifically for use in distributed grid environments. This paper compares the performance of MinEX to that of METIS, a popular multilevel family of partitioners, using simulated heterogeneous grid configurations. A solver for the classical N-body problem is implemented to provide a framework for the comparisons. Experimental results show that MinEX provides superior quality partitions while being competitive to METIS in speed of execution.
Overview of the LINCS architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fletcher, J.G.; Watson, R.W.
1982-01-13
Computing at the Lawrence Livermore National Laboratory (LLNL) has evolved over the past 15 years into a computer network based resource sharing environment. The increasing use of low cost and high performance micro, mini and midi computers and commercially available local networking systems will accelerate this trend. Further, even the large scale computer systems, on which much of the LLNL scientific computing depends, are evolving into multiprocessor systems. It is our belief that the most cost effective use of this environment will depend on the development of application systems structured into cooperating concurrent program modules (processes) distributed appropriately over different nodes of the environment. A node is defined as one or more processors with a local (shared) high speed memory. Given the latter view, the environment can be characterized as consisting of: multiple nodes communicating over noisy channels with arbitrary delays and throughput, heterogeneous base resources and information encodings, no single administration controlling all resources, distributed system state, and no uniform time base. The system design problem is: how to turn the heterogeneous base hardware/firmware/software resources of this environment into a coherent set of resources that facilitate development of cost effective, reliable, and human engineered applications. We believe the answer lies in developing a layered, communication oriented distributed system architecture; layered and modular to support ease of understanding, reconfiguration, extensibility, and hiding of implementation or nonessential local details; communication oriented because that is a central feature of the environment. The Livermore Interactive Network Communication System (LINCS) is a hierarchical architecture designed to meet the above needs. While having characteristics in common with other architectures, it differs in several respects.
Distributed computing for membrane-based modeling of action potential propagation.
Porras, D; Rogers, J M; Smith, W M; Pollard, A E
2000-08-01
Action potential propagation simulations with physiologic membrane currents and macroscopic tissue dimensions are computationally expensive. We, therefore, analyzed distributed computing schemes to reduce execution time in workstation clusters by parallelizing solutions with message passing. Four schemes were considered in two-dimensional monodomain simulations with the Beeler-Reuter membrane equations. Parallel speedups measured with each scheme were compared to theoretical speedups, recognizing the relationship between speedup and code portions that executed serially. A data decomposition scheme based on total ionic current provided the best performance. Analysis of communication latencies in that scheme led to a load-balancing algorithm in which measured speedups at 89 +/- 2% and 75 +/- 8% of theoretical speedups were achieved in homogeneous and heterogeneous clusters of workstations. Speedups in this scheme with the Luo-Rudy dynamic membrane equations exceeded 3.0 with eight distributed workstations. Cluster speedups were comparable to those measured during parallel execution on a shared memory machine.
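The comparison above of measured against theoretical speedups rests on the fraction of the code that executes serially; Amdahl's law makes that relationship explicit. A minimal sketch (the serial fractions and worker counts here are illustrative, not the paper's measured values):

```python
def amdahl_speedup(serial_fraction: float, n_workers: int) -> float:
    """Theoretical parallel speedup for code with a fixed serial fraction
    (Amdahl's law): 1 / (s + (1 - s) / n)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

# Hypothetical example: a code that is 10% serial, run on 8 workstations,
# cannot exceed ~4.7x speedup no matter how well it is load-balanced.
s = amdahl_speedup(0.10, 8)
```

This is why the abstract reports speedups as percentages of the theoretical maximum rather than of the worker count.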
Towards Full-Waveform Ambient Noise Inversion
NASA Astrophysics Data System (ADS)
Sager, K.; Ermert, L. A.; Boehm, C.; Fichtner, A.
2016-12-01
Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green function between the two receivers. This assumption, however, is only met under specific conditions, e.g. wavefield diffusivity and equipartitioning, or the isotropic distribution of both mono- and dipolar uncorrelated noise sources. These assumptions are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations in order to constrain Earth structure and noise generation. To overcome this limitation, we attempt to develop a method that consistently accounts for the distribution of noise sources, 3D heterogeneous Earth structure and the full seismic wave propagation physics. This is intended to improve the resolution of tomographic images, to refine noise source location, and thereby to contribute to a better understanding of noise generation. We introduce an operator-based formulation for the computation of correlation functions and apply the continuous adjoint method that allows us to compute first and second derivatives of misfit functionals with respect to source distribution and Earth structure efficiently. Based on these developments we design an inversion scheme using a 2D finite-difference code. To enable a joint inversion for noise sources and Earth structure, we investigate the following aspects: The capability of different misfit functionals to image wave speed anomalies and source distribution. Possible source-structure trade-offs, especially to what extent unresolvable structure can be mapped into the inverted noise source distribution and vice versa. In anticipation of real-data applications, we present an extension of the open-source waveform modelling and inversion package Salvus, which allows us to compute correlation functions in 3D media with heterogeneous noise sources at the surface.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Song
CFD (Computational Fluid Dynamics) is a widely used technique in the engineering design field. It uses mathematical methods to simulate and predict flow characteristics in a given physical space. Since the numerical results of CFD computation are very hard to understand, VR (virtual reality) and data visualization techniques are introduced into CFD post-processing to improve the understandability and functionality of CFD computation. In many cases CFD datasets are very large (multi-gigabyte), and more and more interaction between the user and the datasets is required. For traditional VR applications, limited computing power is a major factor preventing large datasets from being visualized effectively. This thesis presents a new system designed to speed up traditional VR applications using parallel computing and distributed computing, along with the idea of using hand-held devices to enhance the interaction between a user and a VR CFD application. Techniques from different research areas, including scientific visualization, parallel computing, distributed computing and graphical user interface design, are used in the development of the final system. As a result, the new system can flexibly be built on heterogeneous computing environments, dramatically shortening the computation time.
Xia, Jun; Huang, Chao; Maslov, Konstantin; Anastasio, Mark A; Wang, Lihong V
2013-08-15
Photoacoustic computed tomography (PACT) is a hybrid technique that combines optical excitation and ultrasonic detection to provide high-resolution images in deep tissues. In the image reconstruction, a constant speed of sound (SOS) is normally assumed. This assumption, however, is often not strictly satisfied in deep tissue imaging, due to acoustic heterogeneities within the object and between the object and the coupling medium. If these heterogeneities are not accounted for, they will cause distortions and artifacts in the reconstructed images. In this Letter, we incorporated ultrasonic computed tomography (USCT), which measures the SOS distribution within the object, into our full-ring array PACT system. Without the need for ultrasonic transmitting electronics, USCT was performed using the same laser beam as for PACT measurement. By scanning the laser beam on the array surface, we can sequentially fire different elements. As a first demonstration of the system, we studied the effect of acoustic heterogeneities on photoacoustic vascular imaging. We verified that constant SOS is a reasonable approximation when the SOS variation is small. When the variation is large, distortion will be observed in the periphery of the object, especially in the tangential direction.
NASA Astrophysics Data System (ADS)
Wang, Y.; Pavlis, G. L.; Li, M.
2017-12-01
The amount of water in the Earth's deep mantle is critical for the evolution of the solid Earth and the atmosphere. Mineral physics studies have revealed that wadsleyite and ringwoodite in the mantle transition zone could store several times the volume of water in the ocean. However, the water content and its distribution in the transition zone remain enigmatic due to a lack of direct observations. Here we use seismic data from the full deployment of the EarthScope Transportable Array to produce a 3D image of P-to-S scattering of the mantle transition zone beneath the United States. We compute the image volume from 141,080 pairs of high-quality receiver functions defined by the EarthScope Automated Receiver Survey, reprocessed by the generalized iterative deconvolution method and imaged by the plane-wave migration method. We find that the transition zone is filled with previously unrecognized small-scale heterogeneities that produce pervasive, negative-polarity P-to-S conversions. Seismic synthetic modeling using a point-source simulation method suggests two possible structures for these objects: 1) a set of randomly distributed blobs differing slightly in size, and 2) near-vertical diapir structures from small-scale convection. Combined with geodynamic simulations, we interpret the observation as compositional heterogeneity from small-scale, low-velocity bodies that are water enriched. Our results indicate a heterogeneous distribution of water through the entire mantle transition zone beneath the contiguous United States.
Impact of distributions on the archetypes and prototypes in heterogeneous nanoparticle ensembles.
Fernandez, Michael; Wilson, Hugh F; Barnard, Amanda S
2017-01-05
The magnitude and complexity of the structural and functional data available on nanomaterials requires data analytics, statistical analysis and information technology to drive discovery. We demonstrate that multivariate statistical analysis can recognise the sets of truly significant nanostructures and their most relevant properties in heterogeneous ensembles with different probability distributions. The prototypical and archetypal nanostructures of five virtual ensembles of Si quantum dots (SiQDs) with Boltzmann, frequency, normal, Poisson and random distributions are identified using clustering and archetypal analysis, where we find that their diversity is defined by size and shape, regardless of the type of distribution. At the convex hull of the SiQD ensembles, simple configuration archetypes can efficiently describe a large number of SiQDs, whereas more complex shapes are needed to represent the average ordering of the ensembles. This approach provides a route towards the characterisation of computationally intractable virtual nanomaterial spaces, which can convert big data into smart data, and significantly reduce the workload to simulate experimentally relevant virtual samples.
Coordinating complex decision support activities across distributed applications
NASA Technical Reports Server (NTRS)
Adler, Richard M.
1994-01-01
Knowledge-based technologies have been applied successfully to automate planning and scheduling in many problem domains. Automation of decision support can be increased further by integrating task-specific applications with supporting database systems, and by coordinating interactions between such tools to facilitate collaborative activities. Unfortunately, the technical obstacles that must be overcome to achieve this vision of transparent, cooperative problem-solving are daunting. Intelligent decision support tools are typically developed for standalone use, rely on incompatible, task-specific representational models and application programming interfaces (APIs), and run on heterogeneous computing platforms. Getting such applications to interact freely calls for platform-independent capabilities for distributed communication, as well as tools for mapping information across disparate representations. Symbiotics is developing a layered set of software tools (called NetWorks!) for integrating and coordinating heterogeneous distributed applications. The top layer of tools consists of an extensible set of generic, programmable coordination services. Developers access these services via high-level APIs to implement the desired interactions between distributed applications.
Simplified Distributed Computing
NASA Astrophysics Data System (ADS)
Li, G. G.
2006-05-01
Distributed computing ranges from high-performance parallel computing and Grid computing to environments where idle CPU cycles and storage space of numerous networked systems are harnessed to work together through the Internet. In this work we focus on building an easy and affordable solution for computationally intensive problems in scientific applications, based on existing technology and hardware resources. This system consists of a series of controllers. When a job request is detected by a monitor or initialized by an end user, the job manager launches the specific job handler for that job. The job handler pre-processes the job, partitions it into relatively independent tasks, and distributes the tasks into the processing queue. The task handler picks up the related tasks, processes them, and puts the results back into the processing queue. The job handler also monitors and examines the tasks and the results, and assembles the task results into the overall solution for the job request when all tasks are finished for each job. A resource manager configures and monitors all participating nodes. A distributed agent is deployed on all participating nodes to manage software downloads and report status. The processing queue is the key to the success of this distributed system. We use BEA's WebLogic JMS queue in our implementation. It guarantees message delivery and has message priority and retry features so that tasks never get lost. The entire system is built on J2EE technology and can be deployed on heterogeneous platforms. It can handle algorithms and applications developed in any language on any platform. J2EE adapters are provided to manage and connect existing applications to the system so that applications and algorithms running on Unix, Linux and Windows can all work together. This system is easy and fast to develop based on the industry's well-adopted technology. It is highly scalable and heterogeneous.
It is an open system, and any number and type of machines can join to provide computational power. This asynchronous, message-based system can achieve response times on the order of seconds. For efficiency, communication between distributed tasks is usually done at the start and end of the tasks, but intermediate task status can also be provided.
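The job-handler/task-handler pipeline described above can be sketched with Python's standard-library queues standing in for the WebLogic JMS queue the authors used (so none of the delivery-guarantee or priority features carry over, and the squaring "task" is a hypothetical stand-in for real processing):

```python
import queue
import threading


def job_handler(job, task_queue, n_tasks):
    # Partition the job into relatively independent tasks and enqueue them.
    for i in range(n_tasks):
        task_queue.put((job, i))


def task_handler(task_queue, result_queue):
    # Pick up tasks, process them, and put the results back on a queue.
    while True:
        task = task_queue.get()
        if task is None:               # sentinel: shut this worker down
            task_queue.task_done()
            break
        job, i = task
        result_queue.put((job, i, i * i))   # stand-in for real processing
        task_queue.task_done()


def run_job(job, n_tasks=8, n_workers=4):
    task_queue, result_queue = queue.Queue(), queue.Queue()
    workers = [threading.Thread(target=task_handler,
                                args=(task_queue, result_queue))
               for _ in range(n_workers)]
    for w in workers:
        w.start()
    job_handler(job, task_queue, n_tasks)
    task_queue.join()                  # all tasks finished for this job
    for _ in workers:
        task_queue.put(None)           # release the workers
    for w in workers:
        w.join()
    # Assemble the task results into the overall solution for the job.
    return sorted(result_queue.get() for _ in range(n_tasks))


results = run_job("demo")
```

The in-process `queue.Queue` here only illustrates the control flow; the paper's point is precisely that a persistent, priority-aware message queue lets the same pattern span heterogeneous machines.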
What causes the spatial heterogeneity of bacterial flora in the intestine of zebrafish larvae?
Yang, Jinyou; Shimogonya, Yuji; Ishikawa, Takuji
2018-06-07
Microbial flora in the intestine has been thoroughly investigated, as it plays an important role in the health of the host. Jemielita et al. (2014) showed experimentally that Aeromonas bacteria in the intestine of zebrafish larvae have a heterogeneous spatial distribution. Although bacterial aggregation is important biologically and clinically, there is no mathematical model describing the phenomenon, and its mechanism remains largely unknown. In this study, we developed a computational model to describe the heterogeneous distribution of bacteria in the intestine of zebrafish larvae. The results showed that biological taxis could cause the bacterial aggregation. Intestinal peristalsis had the effect of reducing bacterial aggregation through its mixing function. Using a scaling argument, we showed that the taxis velocity of bacteria must be larger than the sum of the diffusive velocity and the background bulk flow velocity to induce bacterial aggregation. Our model and findings will be useful to further the scientific understanding of intestinal microbial flora. Copyright © 2018 Elsevier Ltd. All rights reserved.
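The scaling criterion stated above reduces to a comparison of velocity magnitudes; a minimal sketch (the velocity values are hypothetical, in arbitrary consistent units):

```python
def aggregation_expected(v_taxis: float, v_diffusive: float,
                         v_bulk: float) -> bool:
    """Scaling criterion from the abstract: bacterial aggregation is
    expected only when the taxis velocity exceeds the sum of the
    diffusive velocity and the background bulk-flow velocity."""
    return v_taxis > v_diffusive + v_bulk


# Hypothetical magnitudes: strong taxis overcomes mixing...
strong = aggregation_expected(v_taxis=3.0, v_diffusive=1.0, v_bulk=1.0)
# ...while weak taxis is dispersed by diffusion and bulk flow.
weak = aggregation_expected(v_taxis=1.0, v_diffusive=1.0, v_bulk=1.0)
```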
Behavior of susceptible-infected-susceptible epidemics on heterogeneous networks with saturation
NASA Astrophysics Data System (ADS)
Joo, Jaewook; Lebowitz, Joel L.
2004-06-01
We investigate saturation effects in susceptible-infected-susceptible models of the spread of epidemics in heterogeneous populations. The structure of interactions in the population is represented by networks with connectivity distribution P(k), including scale-free (SF) networks with power-law distributions P(k) ~ k^(-γ). Considering cases where the transmission of infection between nodes depends on their connectivity, we introduce a saturation function C(k) which reduces the infection transmission rate λ across an edge going from a node with high connectivity k. A mean-field approximation with the neglect of degree-degree correlations then leads to a finite threshold λc > 0 for SF networks with 2 < γ ⩽ 3. We also find, in this approximation, the fraction of infected individuals among those with degree k for λ close to λc. We investigate via computer simulation the contact process on a heterogeneous regular lattice and compare the results with those obtained from mean-field theory with and without neglect of degree-degree correlations.
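Without the saturation function C(k), the standard degree-based mean-field result for uncorrelated networks gives λc = ⟨k⟩/⟨k²⟩, which vanishes for 2 < γ ⩽ 3 as the degree cutoff grows; this is the behavior the saturation function is introduced to counteract. A numerical sketch of that baseline threshold (not the paper's C(k)-modified version):

```python
import numpy as np


def sis_threshold(gamma: float, k_max: int, k_min: int = 1) -> float:
    """Degree-based mean-field SIS epidemic threshold lambda_c = <k>/<k^2>
    for an uncorrelated network with P(k) proportional to k^(-gamma).
    This is the standard result WITHOUT the saturation function C(k)."""
    k = np.arange(k_min, k_max + 1, dtype=float)
    p = k ** (-gamma)
    p /= p.sum()                       # normalize the degree distribution
    return (k * p).sum() / (k ** 2 * p).sum()


# For 2 < gamma <= 3, <k^2> diverges with the cutoff, so the threshold
# shrinks toward zero as the maximum degree grows:
lc_small_cutoff = sis_threshold(2.5, 10**2)
lc_large_cutoff = sis_threshold(2.5, 10**4)
```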
NASA Astrophysics Data System (ADS)
Niño, Alfonso; Muñoz-Caro, Camelia; Reyes, Sebastián
2015-11-01
The last decade witnessed great development in the structural and dynamic study of complex systems described as networks of elements. In this view, a system can be described as a set of possibly heterogeneous entities or agents (the network nodes) interacting in possibly different ways (defining the network edges). In this context, it is of practical interest to model and handle not only static and homogeneous networks but also dynamic, heterogeneous ones. Depending on the size and type of the problem, these networks may require different computational approaches involving sequential, parallel or distributed systems, with or without the use of disk-based data structures. In this work, we develop an Application Programming Interface (APINetworks) for the modeling and treatment of general networks in arbitrary computational environments. To minimize dependency between components, we decouple the network structure from its function, using different packages to group sets of related tasks. The structural package, the one in charge of building and handling the network structure, is the core element of the system. In this work, we focus on this structural component of the API. We apply an object-oriented approach that makes use of inheritance and polymorphism. In this way, we can model static and dynamic networks with heterogeneous elements in the nodes and heterogeneous interactions in the edges. In addition, this approach permits a unified treatment of different computational environments. Tests performed on a C++11 version of the structural package show that, on current standard computers, the system can handle, in main memory, directed and undirected linear networks formed by tens of millions of nodes and edges. Our results compare favorably to those of existing tools.
Spatial intratumoral heterogeneity of proliferation in immunohistochemical images of solid tumors.
Valous, Nektarios A; Lahrmann, Bernd; Halama, Niels; Bergmann, Frank; Jäger, Dirk; Grabe, Niels
2016-06-01
The interactions of neoplastic cells with each other and the microenvironment are complex. To understand intratumoral heterogeneity, subtle differences should be quantified. Main factors contributing to heterogeneity include the gradient ischemic level within neoplasms, action of microenvironment, mechanisms of intercellular transfer of genetic information, and differential mechanisms of modifications of genetic material/proteins. This may reflect on the expression of biomarkers in the context of prognosis/stratification. Hence, a rigorous approach for assessing the spatial intratumoral heterogeneity of histological biomarker expression with accuracy and reproducibility is required, since patterns in immunohistochemical images can be challenging to identify and describe. A quantitative method that is useful for characterizing complex irregular structures is lacunarity; it is a multiscale technique that exhaustively samples the image, while the decay of its index as a function of window size follows characteristic patterns for different spatial arrangements. In histological images, lacunarity provides a useful measure for the spatial organization of a biomarker when a sampling scheme is employed and relevant features are computed. The proposed approach quantifies the segmented proliferative cells and not the textural content of the histological slide, thus providing a more realistic measure of heterogeneity within the sample space of the tumor region. The aim is to investigate in whole sections of primary pancreatic neuroendocrine neoplasms (pNENs), using whole-slide imaging and image analysis, the spatial intratumoral heterogeneity of Ki-67 immunostains. Unsupervised learning is employed to verify that the approach can partition the tissue sections according to distributional heterogeneity. The architectural complexity of histological images has shown that single measurements are often insufficient. 
Inhomogeneity of distribution depends not only on the percentage content of the proliferation phase but also on how the phase fills the space. Lacunarity curves demonstrate variations in the sampled image sections. Since the spatial distribution of proliferation in each case is different, the width of the curves changes too. Image sections that have smaller numerical variations in the computed features correspond to neoplasms with spatially homogeneous proliferation, while larger variations correspond to cases where proliferation shows various degrees of clumping. Grade 1 (uniform/nonuniform: 74%/26%) and grade 3 (uniform: 100%) pNENs demonstrate a more homogeneous proliferation, with grade 1 neoplasms being more variant, while grade 2 tumor regions render a more diverse landscape (50%/50%). Hence, some cases show an increased degree of spatial heterogeneity compared to others of similar grade. Whether this is a sign of different tumor biology and an association with a more benign/malignant clinical course needs to be investigated further. The extent and range of spatial heterogeneity has the potential to be evaluated as a prognostic marker. The association with tumor grade, as well as the rationale that the methodology reflects true tumor architecture, supports the technical soundness of the method. This reflects a general approach which is relevant to other solid tumors and biomarkers. Drawing upon the merits of computational biomedicine, the approach uncovers salient features for use in future studies of clinical relevance.
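Gliding-box lacunarity, the index whose decay with window size the method relies on, can be sketched on a binary mask as Λ(r) = ⟨M²⟩/⟨M⟩² over the masses M of all r×r boxes (a generic implementation, not the authors' specific sampling scheme or features):

```python
import numpy as np


def lacunarity(image: np.ndarray, box_size: int) -> float:
    """Gliding-box lacunarity of a 2D binary image at window size r:
    Lambda(r) = <M^2> / <M>^2 = var(M)/mean(M)^2 + 1 over box masses M."""
    n, m = image.shape
    masses = np.array([image[i:i + box_size, j:j + box_size].sum()
                       for i in range(n - box_size + 1)
                       for j in range(m - box_size + 1)], dtype=float)
    return float(masses.var() / masses.mean() ** 2 + 1.0)


# Two synthetic masks with the same pixel budget but different spatial
# arrangement: clumping raises the lacunarity, homogeneity lowers it.
rng = np.random.default_rng(0)
uniform = (rng.random((64, 64)) < 0.3).astype(int)   # scattered mass
clumped = np.zeros((64, 64), dtype=int)
clumped[:16, :16] = 1                                # mass in one corner
```

The decay of Λ(r) as r grows, computed over a range of box sizes, is what yields the curves discussed above.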
A hydrological emulator for global applications – HE v1.0.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yaling; Hejazi, Mohamad; Li, Hongyi
While global hydrological models (GHMs) are very useful in exploring water resources and interactions between the Earth and human systems, their use often requires numerous model inputs, complex model calibration, and high computation costs. To overcome these challenges, we construct an efficient open-source and ready-to-use hydrological emulator (HE) that can mimic complex GHMs at a range of spatial scales (e.g., basin, region, globe). More specifically, we construct both a lumped and a distributed scheme of the HE based on the monthly abcd model to explore the tradeoff between computational cost and model fidelity. Model predictability and computational efficiency are evaluated in simulating global runoff from 1971 to 2010 with both the lumped and distributed schemes. The results are compared against the runoff product from the widely used Variable Infiltration Capacity (VIC) model. Our evaluation indicates that the lumped and distributed schemes present comparable results regarding annual total quantity, spatial pattern, and temporal variation of the major water fluxes (e.g., total runoff, evapotranspiration) across the 235 global basins (e.g., the correlation coefficient r between the annual total runoff from either of these two schemes and the VIC is > 0.96), except for several cold (e.g., Arctic, interior Tibet), dry (e.g., North Africa) and mountainous (e.g., Argentina) regions. Compared against the monthly total runoff product from the VIC (aggregated from daily runoff), the global mean Kling–Gupta efficiencies are 0.75 and 0.79 for the lumped and distributed schemes, respectively, with the distributed scheme better capturing spatial heterogeneity. Notably, the computational efficiency of the lumped scheme is 2 orders of magnitude higher than that of the distributed one and 7 orders higher than that of the VIC model.
A case study of uncertainty analysis for the world's 16 basins with the highest annual streamflow is conducted using 100 000 model simulations, and it demonstrates the lumped scheme's extraordinary advantage in computational efficiency. Lastly, our results suggest that the revised lumped abcd model can serve as an efficient and reasonable HE for complex GHMs and is suitable for broad practical use, and the distributed scheme is also an efficient alternative if spatial heterogeneity is of more interest.
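The monthly abcd model underlying the emulator is a four-parameter water-balance recursion; one common formulation (Thomas, 1981) can be sketched as follows, with purely illustrative forcing and parameter values rather than anything calibrated in the paper:

```python
import math


def abcd_step(P, PET, S, G, a, b, c, d):
    """One monthly step of the abcd water-balance model (Thomas, 1981),
    in one common formulation. States: S = soil moisture, G = groundwater.
      a (0-1]: runoff propensity before soil saturation,
      b: upper limit on ET opportunity, c: groundwater recharge fraction,
      d: baseflow recession rate. All fluxes in consistent units (e.g. mm)."""
    W = P + S                                    # available water
    Y = (W + b) / (2 * a) - math.sqrt(
        ((W + b) / (2 * a)) ** 2 - W * b / a)    # ET opportunity (<= W, <= b)
    S_new = Y * math.exp(-PET / b)               # end-of-month soil moisture
    G_new = (G + c * (W - Y)) / (1 + d)          # groundwater storage
    Q = (1 - c) * (W - Y) + d * G_new            # direct runoff + baseflow
    return Q, S_new, G_new


# Illustrative spin-up: 12 identical months of 80 mm precipitation, 50 mm PET
Q, S, G = 0.0, 100.0, 50.0
for _ in range(12):
    Q, S, G = abcd_step(80.0, 50.0, S, G, a=0.98, b=400.0, c=0.4, d=0.3)
```

With a ⩽ 1 the square-root argument is always non-negative, so the recursion is well defined for any non-negative forcing.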
Emergence of energy dependence in the fragmentation of heterogeneous materials
NASA Astrophysics Data System (ADS)
Pál, Gergő; Varga, Imre; Kun, Ferenc
2014-12-01
The most important characteristic of the fragmentation of heterogeneous solids is that the mass (size) distribution of the pieces is described by a power-law functional form. The exponent of the distribution displays a high degree of universality, depending mainly on the dimensionality and on the brittle-ductile mechanical response of the system. Recently, experiments and computer simulations have reported an energy dependence of the exponent, increasing with the imparted energy. These novel findings question the phase-transition picture of fragmentation phenomena, and also have practical importance for industrial applications. Based on large-scale computer simulations, here we uncover a robust mechanism which leads to the emergence of energy dependence in fragmentation processes, resolving controversial issues on the problem: studying the impact-induced breakup of plate-like objects of varying thickness in three dimensions, we show that energy dependence occurs when a lower-dimensional fragmenting object is embedded into a higher-dimensional space. The reason is an underlying transition between two distinct fragmentation mechanisms controlled by the impact velocity at low plate thicknesses, which is hindered for three-dimensional bulk systems. The mass distributions of the subsets of fragments dominated by the two cracking mechanisms proved to have an astonishing robustness at all plate thicknesses, which implies that the nonuniversality of the complete mass distribution is a consequence of blending the contributions of universal partial processes.
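The power-law exponent of a fragment mass distribution, whose energy dependence is the paper's subject, is commonly estimated by maximum likelihood; a generic sketch on synthetic data (not the paper's simulation output):

```python
import numpy as np


def powerlaw_exponent(masses, m_min: float) -> float:
    """Continuous maximum-likelihood (Hill) estimator for the exponent tau
    of a power-law fragment mass distribution p(m) ~ m^(-tau), m >= m_min:
    tau_hat = 1 + n / sum(ln(m_i / m_min))."""
    m = np.asarray(masses, dtype=float)
    m = m[m >= m_min]
    return 1.0 + len(m) / np.log(m / m_min).sum()


# Synthetic fragment masses drawn from p(m) ~ m^(-2) with m_min = 1,
# via inverse-transform sampling: m = (1 - u)^(-1/(tau - 1)).
rng = np.random.default_rng(1)
u = rng.random(200_000)
masses = 1.0 / (1.0 - u)
tau_hat = powerlaw_exponent(masses, 1.0)
```

Fitting the exponent separately for fragment subsets, as the paper does for the two cracking mechanisms, is the same estimate applied to each subset.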
Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo
Herckenrath, Daan; Langevin, Christian D.; Doherty, John
2011-01-01
Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was tested for a synthetic saltwater intrusion model patterned after the Henry problem. Saltwater intrusion caused by a reduction in fresh groundwater discharge was simulated for 1000 randomly generated hydraulic conductivity distributions, representing a mildly heterogeneous aquifer. From these 1000 simulations, the hydraulic conductivity distribution giving rise to the most extreme case of saltwater intrusion was selected and was assumed to represent the "true" system. Head and salinity values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability of the NSMC method to encompass the true prediction value. The addition of intrapilot point heterogeneity to the NSMC process was also tested. According to a variogram comparison, this provided the same scale of heterogeneity that was used to generate the truth. However, incorporation of intrapilot point variability did not make a noticeable difference to the uncertainty of the prediction. 
With this higher level of heterogeneity, however, the computational burden of generating calibration-constrained parameter fields approximately doubled. Predictive uncertainty variance computed through the NSMC method was compared with that computed through linear analysis. The results were in good agreement, with the NSMC method estimate showing a slightly smaller range of prediction uncertainty than was calculated by the linear method. Copyright 2011 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Tian, Liang; Wilkinson, Richard; Yang, Zhibing; Power, Henry; Fagerlund, Fritjof; Niemi, Auli
2017-08-01
We explore the use of Gaussian process emulators (GPE) in the numerical simulation of CO2 injection into a deep heterogeneous aquifer. The model domain is a two-dimensional, log-normally distributed stochastic permeability field. We first estimate the cumulative distribution functions (CDFs) of the CO2 breakthrough time and the total CO2 mass using a computationally expensive Monte Carlo (MC) simulation. We then show that we can accurately reproduce these CDF estimates with a GPE, using only a small fraction of the computational cost required by traditional MC simulation. In order to build a GPE that can predict the simulator output from a permeability field consisting of thousands of values, we use a truncated Karhunen-Loève (K-L) expansion of the permeability field, which enables the application of the Bayesian functional regression approach. We perform a cross-validation exercise to give insight into the optimization of the experiment design for selected scenarios: we find that a training set of a few hundred samples is sufficient and that as few as 15 K-L components are adequate. Our work demonstrates that GPE with a truncated K-L expansion can be effectively applied to uncertainty analysis associated with modelling of multiphase flow and transport processes in heterogeneous media.
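A truncated K-L expansion such as the one used above compresses a correlated Gaussian field into a handful of coefficients via the leading eigenpairs of its covariance matrix; a 1D sketch with an assumed exponential covariance (the paper's permeability field is 2D, and all lengths and variances here are illustrative):

```python
import numpy as np


def kl_expansion(n_cells, corr_len, variance, n_modes, xi):
    """Truncated Karhunen-Loeve expansion of a 1D Gaussian log-permeability
    fluctuation field on [0, 1], assuming an exponential covariance
    C(x, y) = variance * exp(-|x - y| / corr_len).
    xi: vector of n_modes standard-normal K-L coefficients."""
    x = np.linspace(0.0, 1.0, n_cells)
    C = variance * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    eigval, eigvec = np.linalg.eigh(C)           # ascending eigenvalues
    eigval, eigvec = eigval[::-1], eigvec[:, ::-1]   # take the largest first
    modes = eigvec[:, :n_modes] * np.sqrt(np.maximum(eigval[:n_modes], 0.0))
    return modes @ xi                             # field realization


# One realization from 15 modes, mirroring the truncation level found
# adequate in the cross-validation above.
rng = np.random.default_rng(42)
field = kl_expansion(n_cells=200, corr_len=0.2, variance=1.0,
                     n_modes=15, xi=rng.standard_normal(15))
```

The emulator then regresses simulator outputs on the 15-dimensional `xi` rather than on the full grid of permeability values.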
WLCG Transfers Dashboard: a Unified Monitoring Tool for Heterogeneous Data Transfers
NASA Astrophysics Data System (ADS)
Andreeva, J.; Beche, A.; Belov, S.; Kadochnikov, I.; Saiz, P.; Tuckett, D.
2014-06-01
The Worldwide LHC Computing Grid provides resources for the four main virtual organizations. Along with data processing, data distribution is the key computing activity on the WLCG infrastructure. The scale of this activity is very large: the ATLAS virtual organization (VO) alone generates and distributes more than 40 PB of data in 100 million files per year. Another challenge is the heterogeneity of data transfer technologies. Currently there are two main alternatives for data transfers on the WLCG: the File Transfer Service and the XRootD protocol. Each LHC VO has its own monitoring system, which is limited to the scope of that particular VO. There is a need for a global system that would provide a complete cross-VO and cross-technology picture of all WLCG data transfers. We present a unified monitoring tool - the WLCG Transfers Dashboard - in which all the VOs and technologies coexist and are monitored together. The scale of the activity and the heterogeneity of the system raise a number of technical challenges: each technology comes with its own monitoring specificities, and some of the VOs use several of these technologies. This paper describes the implementation of the system, with particular focus on the design principles applied to ensure the necessary scalability and performance and to easily integrate any new technology providing additional functionality which might be specific to that technology.
MultiPhyl: a high-throughput phylogenomics webserver using distributed computing
Keane, Thomas M.; Naughton, Thomas J.; McInerney, James O.
2007-01-01
With the number of fully sequenced genomes increasing steadily, there is greater interest in performing large-scale phylogenomic analyses from large numbers of individual gene families. Maximum likelihood (ML) has been shown repeatedly to be one of the most accurate methods for phylogenetic reconstruction. Recently, there have been a number of algorithmic improvements in maximum-likelihood-based tree search methods. However, it can still take a long time to analyse the evolutionary history of many gene families using a single computer. Distributed computing refers to a method of combining the computing power of multiple computers in order to perform some larger overall calculation. In this article, we present the first high-throughput implementation of a distributed phylogenetics platform, MultiPhyl, capable of using the idle computational resources of many heterogeneous non-dedicated machines to form a phylogenetics supercomputer. MultiPhyl allows a user to upload hundreds or thousands of amino acid or nucleotide alignments simultaneously and perform computationally intensive tasks such as model selection, tree searching and bootstrapping of each of the alignments using many desktop machines. The program implements a set of 88 amino acid models and 56 nucleotide maximum likelihood models and a variety of statistical methods for choosing between alternative models. A MultiPhyl webserver is available for public use at: http://www.cs.nuim.ie/distributed/multiphyl.php. PMID:17553837
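The model-choice step can be illustrated with the standard AIC/BIC criteria commonly used for selecting among substitution models. The log-likelihood values below are hypothetical, not MultiPhyl output:

```python
from math import log

def aic(lnL, k):
    """Akaike information criterion for a model with k free parameters."""
    return 2.0 * k - 2.0 * lnL

def bic(lnL, k, n):
    """Bayesian information criterion for an alignment of n sites."""
    return k * log(n) - 2.0 * lnL

# Hypothetical maximized log-likelihoods (lnL, k) for three nucleotide
# models fitted to the same alignment (illustrative numbers only).
candidates = {"JC69": (-4210.3, 1), "HKY85": (-4102.8, 5), "GTR+G": (-4095.1, 10)}
n_sites = 500

best_aic = min(candidates, key=lambda m: aic(*candidates[m]))
best_bic = min(candidates, key=lambda m: bic(*candidates[m], n_sites))
print(best_aic, best_bic)  # BIC penalizes extra parameters more heavily
```

The two criteria can disagree, as here: the richer model wins under AIC while BIC's stronger parameter penalty favors the intermediate one.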
Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures
2017-10-04
Report: Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures (Chapel Hill). The project developed algorithms for scientific and geometric computing by exploiting the power and performance efficiency of heterogeneous shared memory architectures.
NASA Astrophysics Data System (ADS)
Myre, Joseph M.
Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special-purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large-scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general-purpose processors and special-purpose accelerators, the speed and problem size of many simulations can be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH - a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes.
Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that this environment provides scientists and engineers with means to reduce the programmatic complexity of their applications, to perform geophysical inversions for characterizing physical systems, and to determine high-performing run-time configurations of heterogeneous computing systems using a run-time autotuner.
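The run-time autotuning component can be illustrated with a timing-based search over candidate configurations. This is a minimal sketch; the kernel and configuration values are hypothetical, not CUSH code:

```python
import time

def _timed(kernel, cfg):
    """Wall-clock time of one kernel run under a given configuration."""
    t0 = time.perf_counter()
    kernel(cfg)
    return time.perf_counter() - t0

def autotune(kernel, configs, repeats=3):
    """Return the configuration with the best measured run time,
    taking the minimum over a few repeats to damp timing noise."""
    best_cfg, best_t = None, float("inf")
    for cfg in configs:
        t = min(_timed(kernel, cfg) for _ in range(repeats))
        if t < best_t:
            best_cfg, best_t = cfg, t
    return best_cfg

# Toy "kernel": blocked summation whose speed depends on the block size,
# standing in for a GPU kernel parameterized by work-group size.
data = list(range(200_000))
def blocked_sum(block):
    total = 0
    for i in range(0, len(data), block):
        total += sum(data[i:i + block])
    return total

best = autotune(blocked_sum, configs=[64, 1024, 16384])
print("best block size:", best)
```

A real autotuner would also cache results per device and problem size, so the search cost is paid once per configuration space rather than per run.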
Bertalan, Tom; Wu, Yan; Laing, Carlo; Gear, C. William; Kevrekidis, Ioannis G.
2017-01-01
Finding accurate reduced descriptions for large, complex, dynamically evolving networks is a crucial enabler to their simulation, analysis, and ultimately design. Here, we propose and illustrate a systematic and powerful approach to obtaining good collective coarse-grained observables—variables successfully summarizing the detailed state of such networks. Finding such variables can naturally lead to successful reduced dynamic models for the networks. The main premise enabling our approach is the assumption that the behavior of a node in the network depends (after a short initial transient) on the node identity: a set of descriptors that quantify the node properties, whether intrinsic (e.g., parameters in the node evolution equations) or structural (imparted to the node by its connectivity in the particular network structure). The approach creates a natural link with modeling and “computational enabling technology” developed in the context of Uncertainty Quantification. In our case, however, we will not focus on ensembles of different realizations of a problem, each with parameters randomly selected from a distribution. We will instead study many coupled heterogeneous units, each characterized by randomly assigned (heterogeneous) parameter value(s). One could then coin the term Heterogeneity Quantification for this approach, which we illustrate through a model dynamic network consisting of coupled oscillators with one intrinsic heterogeneity (oscillator individual frequency) and one structural heterogeneity (oscillator degree in the undirected network). The computational implementation of the approach, its shortcomings and possible extensions are also discussed. PMID:28659781
Analysis and Modeling of Realistic Compound Channels in Transparent Relay Transmissions
Kanjirathumkal, Cibile K.; Mohammed, Sameer S.
2014-01-01
Analytical approaches for the characterisation of the compound channels in transparent multihop relay transmissions over independent fading channels are considered in this paper. Compound channels with homogeneous links are considered first. Using the Mellin transform technique, exact expressions are derived for the moments of cascaded Weibull distributions. Subsequently, two performance metrics, namely the coefficient of variation and the amount of fade, are derived using the computed moments. These metrics quantify the possible variations of the channel gain and signal-to-noise ratio from their respective average values and can be used to characterise the achievable receiver performance. This approach is suitable for analysing more realistic compound channel models that capture the scattering density variations of the environment experienced in multihop relay transmissions. The performance metrics for such heterogeneous compound channels, having a distinct distribution in each hop, are computed and compared with those having identical constituent component distributions. The moments and the coefficient of variation computed are then used to develop computationally efficient estimators for the distribution parameters and the optimal hop count. The metrics and estimators proposed are complemented with numerical and simulation results to demonstrate the accuracy and practical impact of the approaches. PMID:24701175
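The Mellin-transform route to these metrics is direct: the n-th moment of a Weibull variable is its Mellin transform evaluated at n + 1, and the moments of a product of independent hop gains multiply. A minimal sketch with illustrative hop parameters:

```python
from math import gamma, sqrt

def weibull_moment(n, shape, scale=1.0):
    """n-th moment of a Weibull(shape k, scale lam) random variable:
    E[X^n] = lam**n * Gamma(1 + n/k)."""
    return scale ** n * gamma(1.0 + n / shape)

def cascade_moment(n, hops):
    """Moments of a product of independent hop gains multiply."""
    m = 1.0
    for shape, scale in hops:
        m *= weibull_moment(n, shape, scale)
    return m

def coefficient_of_variation(hops):
    m1, m2 = cascade_moment(1, hops), cascade_moment(2, hops)
    return sqrt(m2 / m1 ** 2 - 1.0)

def amount_of_fade(hops):
    """Amount of fade on the power gain X^2: Var(X^2) / E[X^2]^2."""
    m2, m4 = cascade_moment(2, hops), cascade_moment(4, hops)
    return m4 / m2 ** 2 - 1.0

# Heterogeneous 3-hop cascade with distinct per-hop parameters.
hops = [(2.0, 1.0), (2.5, 0.8), (3.0, 1.2)]
print(coefficient_of_variation(hops), amount_of_fade(hops))
```

For a single hop with shape 2 (Rayleigh-type gain) the formulas collapse to the classical values, which makes a convenient sanity check.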
An open, object-based modeling approach for simulating subsurface heterogeneity
NASA Astrophysics Data System (ADS)
Bennett, J.; Ross, M.; Haslauer, C. P.; Cirpka, O. A.
2017-12-01
Characterization of subsurface heterogeneity with respect to hydraulic and geochemical properties is critical in hydrogeology as their spatial distribution controls groundwater flow and solute transport. Many approaches of characterizing subsurface heterogeneity do not account for well-established geological concepts about the deposition of the aquifer materials; those that do (i.e. process-based methods) often require forcing parameters that are difficult to derive from site observations. We have developed a new method for simulating subsurface heterogeneity that honors concepts of sequence stratigraphy, resolves fine-scale heterogeneity and anisotropy of distributed parameters, and resembles observed sedimentary deposits. The method implements a multi-scale hierarchical facies modeling framework based on architectural element analysis, with larger features composed of smaller sub-units. The Hydrogeological Virtual Reality simulator (HYVR) simulates distributed parameter models using an object-based approach. Input parameters are derived from observations of stratigraphic morphology in sequence type-sections. Simulation outputs can be used for generic simulations of groundwater flow and solute transport, and for the generation of three-dimensional training images needed in applications of multiple-point geostatistics. The HYVR algorithm is flexible and easy to customize. The algorithm was written in the open-source programming language Python, and is intended to form a code base for hydrogeological researchers, as well as a platform that can be further developed to suit investigators' individual needs. This presentation will encompass the conceptual background and computational methods of the HYVR algorithm, the derivation of input parameters from site characterization, and the results of groundwater flow and solute transport simulations in different depositional settings.
NASA Astrophysics Data System (ADS)
Shi, X.; Zhang, G.
2013-12-01
Because of the extensive computational burden, parametric uncertainty analyses are rarely conducted for process-based multi-phase models of geological carbon sequestration (GCS). The difficulty of predictive uncertainty analysis for the CO2 plume migration in realistic GCS models is not only due to the spatial distribution of the caprock and reservoir (i.e. heterogeneous model parameters), but also because the GCS optimization estimation problem has multiple local minima arising from the complex nonlinear multi-phase (gas and aqueous), multi-component (water, CO2, salt) transport equations. The geological model built by Doughty and Pruess (2004) for the Frio pilot site (Texas) was selected and assumed to represent the 'true' system, which was composed of seven different facies (geological units) distributed among 10 layers. We chose to calibrate the permeabilities of these facies. Pressure and gas saturation values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. Each simulation of the model takes about 2 hours. In this study, we develop a new approach that improves the computational efficiency of Bayesian inference by constructing a surrogate system based on an adaptive sparse-grid stochastic collocation method. This surrogate response-surface global optimization algorithm is first used to calibrate the model parameters; the prediction uncertainty of the CO2 plume position, propagated from the parametric uncertainty, is then quantified in the numerical experiments and compared to the actual plume from the 'true' model. Results show that the approach is computationally efficient for multi-modal optimization and prediction uncertainty quantification with computationally expensive simulation models. Both our inverse methodology and findings are broadly applicable to GCS in heterogeneous storage formations.
WE-DE-201-12: Thermal and Dosimetric Properties of a Ferrite-Based Thermo-Brachytherapy Seed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warrell, G; Shvydka, D; Parsai, E I
Purpose: The novel thermo-brachytherapy (TB) seed provides a simple means of adding hyperthermia to LDR prostate permanent implant brachytherapy. The high blood perfusion rate (BPR) within the prostate motivates the use of the ferrite and conductive outer layer design for the seed cores. We describe the results of computational analyses of the thermal properties of this ferrite-based TB seed in modelled patient-specific anatomy, as well as studies of the interseed and scatter (ISA) effect. Methods: The anatomies (including the thermophysical properties of the main tissue types) and seed distributions of 6 prostate patients who had been treated with LDR brachytherapy seeds were modelled in the finite element analysis software COMSOL, using ferrite-based TB and additional hyperthermia-only (HT-only) seeds. The resulting temperature distributions were compared to those computed for patient-specific seed distributions, but in uniform anatomy with a constant blood perfusion rate. The ISA effect was quantified in the Monte Carlo software package MCNP5. Results: Compared with temperature distributions calculated in modelled uniform tissue, temperature distributions in the patient-specific anatomy were higher and more heterogeneous. Moreover, the maximum temperature to the rectal wall was typically ∼1 °C greater for patient-specific anatomy than for uniform anatomy. The ISA effect of the TB and HT-only seeds caused a reduction in D90 similar to that found for previously-investigated NiCu-based seeds, but of a slightly smaller magnitude. Conclusion: The differences between temperature distributions computed for uniform and patient-specific anatomy for ferrite-based seeds are significant enough that heterogeneous anatomy should be considered. Both types of modelling indicate that ferrite-based seeds provide sufficiently high and uniform hyperthermia to the prostate, without excessively heating surrounding tissues.
The ISA effect of these seeds is slightly less than that for the previously-presented NiCu-based seeds.
Data analysis environment (DASH2000) for the Subaru telescope
NASA Astrophysics Data System (ADS)
Mizumoto, Yoshihiko; Yagi, Masafumi; Chikada, Yoshihiro; Ogasawara, Ryusuke; Kosugi, George; Takata, Tadafumi; Yoshida, Michitoshi; Ishihara, Yasuhide; Yanaka, Hiroshi; Yamamoto, Tadahiro; Morita, Yasuhiro; Nakamoto, Hiroyuki
2000-06-01
A new framework for the data analysis system (DASH) has been developed for the SUBARU Telescope. It is designed using object-oriented methodology and adopts a restaurant model. DASH shares CPU and I/O load among distributed heterogeneous computers. The distributed object environment of the system is implemented with Java and CORBA. DASH has been evaluated through several prototypes. DASH2000 is the latest version, which will be released as the beta version of the data analysis system for the SUBARU Telescope.
Distribution of thermal neutrons in a temperature gradient
NASA Astrophysics Data System (ADS)
Molinari, V. G.; Pollachini, L.
A method to determine the spatial distribution of the thermal spectrum of neutrons in heterogeneous systems is presented. The method is based on diffusion concepts and has a simple mathematical structure which increases computing efficiency. The application of this theory to the neutron thermal diffusion induced by a temperature gradient, as found in nuclear reactors, is described. After introducing approximations, a nonlinear equation system representing the neutron temperature is given. Values of the equation parameters and their dependence on geometrical factors and media characteristics are discussed.
A Component-based Programming Model for Composite, Distributed Applications
NASA Technical Reports Server (NTRS)
Eidson, Thomas M.; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
The nature of scientific programming is evolving to larger, composite applications that are composed of smaller element applications. These composite applications are more frequently being targeted for distributed, heterogeneous networks of computers. They are most likely programmed by a group of developers. Software component technology and computational frameworks are being proposed and developed to meet the programming requirements of these new applications. Historically, programming systems have had a hard time being accepted by the scientific programming community. In this paper, a programming model is outlined that attempts to organize the software component concepts and fundamental programming entities into programming abstractions that will be better understood by the application developers. The programming model is designed to support computational frameworks that manage many of the tedious programming details, but also that allow sufficient programmer control to design an accurate, high-performance application.
Semi-automated quantification and neuroanatomical mapping of heterogeneous cell populations.
Mendez, Oscar A; Potter, Colin J; Valdez, Michael; Bello, Thomas; Trouard, Theodore P; Koshy, Anita A
2018-07-15
Our group studies the interactions between cells of the brain and the neurotropic parasite Toxoplasma gondii. Using an in vivo system that allows us to permanently mark and identify brain cells injected with Toxoplasma protein, we have identified that Toxoplasma-injected neurons (TINs) are heterogeneously distributed throughout the brain. Unfortunately, standard methods to quantify and map heterogeneous cell populations onto a reference brain atlas are time-consuming and prone to user bias. We developed a novel MATLAB-based semi-automated quantification and mapping program to allow the rapid and consistent mapping of heterogeneously distributed cells onto the Allen Institute Mouse Brain Atlas. The system uses two-threshold background subtraction to identify and quantify cells of interest. We demonstrate that we reliably quantify and neuroanatomically localize TINs with low intra- or inter-observer variability. In a follow-up experiment, we show that specific regions of the mouse brain are enriched with TINs. The procedure we use takes advantage of simple immunohistochemistry labeling techniques, use of a standard microscope with a motorized stage, and low-cost computing that can be readily obtained at a research institute. To our knowledge there is no other program that uses such readily available techniques and equipment for mapping heterogeneous populations of cells across the whole mouse brain. The quantification method described here allows reliable visualization, quantification, and mapping of heterogeneous cell populations in immunolabeled sections across whole mouse brains. Copyright © 2018 Elsevier B.V. All rights reserved.
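The two-threshold background subtraction step can be sketched as follows. The synthetic image and threshold choices are illustrative assumptions, not the published MATLAB parameters:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)

# Synthetic fluorescence image: dim background plus two bright "cells".
img = rng.normal(10.0, 2.0, size=(128, 128))
img[30:36, 40:46] += 60.0
img[90:95, 100:104] += 55.0

# Two-threshold scheme (hypothetical parameter choices): a high threshold
# seeds candidate cells, a low threshold defines their full extent.
hi = img.mean() + 10.0 * img.std()
lo = img.mean() + 3.0 * img.std()
seeds = img > hi
extent = img > lo

# Keep only low-threshold components that contain a high-threshold seed.
labels, n = ndimage.label(extent)
keep = np.unique(labels[seeds])
keep = keep[keep > 0]
cells = np.isin(labels, keep)

count = len(keep)
centroids = ndimage.center_of_mass(cells, labels, keep)
print(count, centroids)
```

The centroids would then be registered to atlas coordinates; that mapping step is specific to the atlas and omitted here.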
SU-F-BRD-09: A Random Walk Model Algorithm for Proton Dose Calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, W; Farr, J
2015-06-15
Purpose: To develop a random walk model algorithm for calculating proton dose with balanced computation burden and accuracy. Methods: The random walk (RW) model is sometimes referred to as a density Monte Carlo (MC) simulation. In MC proton dose calculation, the use of a Gaussian angular distribution of protons due to multiple Coulomb scatter (MCS) is convenient, but in RW the use of a Gaussian angular distribution requires extremely large computation and memory. Thus, our RW model adopts a spatial distribution derived from the angular one to accelerate the computation and decrease the memory usage. From the physics and from comparison with the MC simulations, we have determined and analytically expressed the critical variables affecting the dose accuracy in our RW model. Results: Besides variables such as MCS, stopping power, and the energy spectrum after energy absorption, which have been extensively discussed in the literature, the following variables were found to be critical in our RW model: (1) the inverse square law, which can significantly reduce the computation burden and memory; (2) the non-Gaussian spatial distribution after MCS; and (3) the mean direction of scatters at each voxel. In comparison to MC results, taken as reference, for a water phantom irradiated by mono-energetic proton beams from 75 MeV to 221.28 MeV, the gamma test pass rate was 100% for the 2%/2mm/10% criterion. For a highly heterogeneous phantom consisting of water embedded with a 10 cm cortical bone and a 10 cm lung in the Bragg peak region of the proton beam, the gamma test pass rate was greater than 98% for the 3%/3mm/10% criterion. Conclusion: We have determined the key variables in our RW model for proton dose calculation. Compared with commercial pencil beam algorithms, our RW model much improves the dose accuracy in heterogeneous regions, and is about 10 times faster than MC simulations.
Models@Home: distributed computing in bioinformatics using a screensaver based approach.
Krieger, Elmar; Vriend, Gert
2002-02-01
Due to the steadily growing computational demands in bioinformatics and related scientific disciplines, one is forced to make optimal use of the available resources. A straightforward solution is to build a network of idle computers and let each of them work on a small piece of a scientific challenge, as done by Seti@Home (http://setiathome.berkeley.edu), the world's largest distributed computing project. We developed a generally applicable distributed computing solution that uses a screensaver system similar to Seti@Home. The software exploits the coarse-grained nature of typical bioinformatics projects. Three major considerations for the design were: (1) often, many different programs are needed, while the time is lacking to parallelize them. Models@Home can run any program in parallel without modifications to the source code; (2) in contrast to the Seti project, bioinformatics applications are normally more sensitive to lost jobs. Models@Home therefore includes stringent control over job scheduling; (3) to allow use in heterogeneous environments, Linux and Windows based workstations can be combined with dedicated PCs to build a homogeneous cluster. We present three practical applications of Models@Home, running the modeling programs WHAT IF and YASARA on 30 PCs: force field parameterization, molecular dynamics docking, and database maintenance.
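The stringent job-scheduling control mentioned in point (2) can be illustrated with a lease-based queue that silently re-issues jobs whose clients vanish. This is a hypothetical sketch of the idea, not the actual Models@Home scheduler:

```python
import time

class LeaseQueue:
    """Jobs are leased to screensaver clients for a fixed period and
    re-queued if no result arrives before the lease expires."""

    def __init__(self, lease_seconds=2.0):
        self.lease = lease_seconds
        self.pending = []            # jobs waiting for a client
        self.leased = {}             # job -> lease expiry time
        self.done = set()

    def submit(self, job):
        self.pending.append(job)

    def checkout(self, now=None):
        now = time.monotonic() if now is None else now
        # Reclaim expired leases first (client went away mid-job).
        for job, expiry in list(self.leased.items()):
            if expiry <= now:
                del self.leased[job]
                self.pending.append(job)
        if not self.pending:
            return None
        job = self.pending.pop(0)
        self.leased[job] = now + self.lease
        return job

    def report(self, job):
        self.leased.pop(job, None)
        self.done.add(job)

q = LeaseQueue(lease_seconds=2.0)
for j in ("model-A", "model-B"):
    q.submit(j)

a = q.checkout(now=0.0)      # client 1 takes a job, then crashes
b = q.checkout(now=0.0)      # client 2 takes the other job
q.report(b)                  # ...and finishes it
retry = q.checkout(now=5.0)  # lease on the crashed job has expired
print(a, retry, a == retry)
```

Because a job may occasionally be completed twice under this scheme, results must be idempotent to record, which fits the coarse-grained, file-oriented jobs described above.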
A distributed program composition system
NASA Technical Reports Server (NTRS)
Brown, Robert L.
1989-01-01
A graphical technique for creating distributed computer programs is investigated and a prototype implementation is described which serves as a testbed for the concepts. The type of programs under examination is restricted to those comprising relatively heavyweight parts that intercommunicate by passing messages of typed objects. Such programs are often presented visually as a directed graph with computer program parts as the nodes and communication channels as the edges. This class of programs, called parts-based programs, is not well supported by existing computer systems; much manual work is required to describe the program to the system, establish the communication paths, accommodate the heterogeneity of data types, and locate the parts of the program on the various systems involved. The work described solves most of these problems by providing an interface for describing parts-based programs in a way that closely models the way programmers think about them: using sketches of digraphs. Program parts, the computational modules of the larger program system, are categorized in libraries and are accessed with browsers. The process of programming has the programmer draw the program graph interactively. Heterogeneity is automatically accommodated by the insertion of type translators where necessary between the parts. Many decisions are necessary in the creation of a comprehensive tool for interactive creation of programs in this class. Possibilities are explored and the issues behind such decisions are presented. An approach to program composition is described, not a carefully implemented programming environment. However, a prototype implementation is described that can demonstrate the ideas presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Y M; Bush, K; Han, B
Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4 based “localized Monte Carlo” (LMC) method that isolates MC dose calculations only to volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence and transported deterministically. By matching boundary conditions at both interfaces, the deterministic dose calculation accounts for dose perturbations “downstream” of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% could be observed in heterogeneous phantoms. The saving in computational time (a factor of ∼4–7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region.
Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of the deterministic method and the accuracy of MC, providing a practical tool for high-performance dose calculation in modern RT. The approach is generalizable to all modalities where heterogeneities play a large role, notably particle therapy.
NASA Astrophysics Data System (ADS)
Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.
2016-10-01
The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.
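The contrast between the single-Gaussian and the voxel-specific-Gaussian fluence models can be sketched directly. The sigma values and their lateral variation below are illustrative assumptions; in the method above the voxel-specific sigmas come from re-initializing the fluence deviation on the effective equal-energy surface:

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 81)            # lateral positions (cm)

def gaussian_fluence(x, sigma):
    """Normalized 1-D Gaussian lateral fluence profile."""
    return np.exp(-x ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)

# Single-Gaussian assumption: one sigma for every lateral voxel.
sigma_single = 0.5
phi_single = gaussian_fluence(x, sigma_single)

# Voxel-specific Gaussians: sigma varies with the medium each lateral
# voxel's ray traverses (hypothetical map: broader spread in the flanks,
# e.g. lung, than in the central water column).
sigma_voxel = np.where(np.abs(x) > 1.0, 0.8, 0.5)
phi_voxel = gaussian_fluence(x, sigma_voxel)

# The two models agree on-axis and diverge in the heterogeneous flanks,
# which is where the distal fall-off correction matters.
print(float(np.abs(phi_single - phi_voxel).max()))
```

The computational overhead is one extra sigma lookup per lateral voxel, consistent with the modest (∼20%) extra cost reported above.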
Accounting for small scale heterogeneity in ecohydrologic watershed models
NASA Astrophysics Data System (ADS)
Bhaskar, A.; Fleming, B.; Hogan, D. M.
2016-12-01
Spatially distributed ecohydrologic models are inherently constrained by the spatial resolution of their smallest units, below which land and processes are assumed to be homogenous. At coarse scales, heterogeneity is often accounted for by computing stores and fluxes of interest over a distribution of land cover types (or other sources of heterogeneity) within spatially explicit modeling units. However, this approach ignores spatial organization and the lateral transfer of water and materials downslope. The challenge is to account both for the role of flow network topology and for fine-scale heterogeneity. We present a new approach that defines two levels of spatial aggregation and integrates a spatially explicit network approach with a flexible representation of finer-scale aspatial heterogeneity. Critically, this solution does not simply increase the resolution of the smallest spatial unit, and so, by comparison, results in improved computational efficiency. The approach is demonstrated by adapting the Regional Hydro-Ecologic Simulation System (RHESSys), an ecohydrologic model widely used to simulate climate, land use, and land management impacts. We illustrate the utility of our approach by showing how the model can be used to better characterize forest thinning impacts on ecohydrology. Forest thinning is typically done at the scale of individual trees, and yet management responses of interest include impacts on watershed-scale hydrology and on downslope riparian vegetation. Our approach allows us to characterize the variability in tree size/carbon reduction and water transfers between neighboring trees while still capturing hillslope- to watershed-scale effects. Our illustrative example demonstrates that accounting for these fine-scale effects can substantially alter model estimates, in some cases shifting the impacts of thinning on downslope water availability from increases to decreases.
We conclude by describing other use cases that may benefit from this approach including characterizing urban vegetation and storm water management features and their impact on watershed scale hydrology and biogeochemical cycling.
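The two-level aggregation idea - spatially explicit units that route water laterally, each containing aspatial sub-units - can be sketched in a few lines. The structure and runoff coefficients are hypothetical, not RHESSys code:

```python
# Explicit patches route water downslope; within each patch, aspatial
# sub-units partition the patch inflow by areal fraction.
patches = [
    {"id": 0, "downslope": 1, "subunits": [("forest", 0.7), ("thinned", 0.3)]},
    {"id": 1, "downslope": None, "subunits": [("riparian", 1.0)]},
]

rain = 10.0                                  # mm falling on the top patch
runoff_coeff = {"forest": 0.2, "thinned": 0.4, "riparian": 0.1}

inflow = {0: rain, 1: 0.0}
for p in patches:                            # ordered upslope to downslope
    # Sub-units respond heterogeneously to the shared patch inflow...
    out = sum(frac * inflow[p["id"]] * runoff_coeff[name]
              for name, frac in p["subunits"])
    # ...but only the aggregate flux is routed laterally downslope,
    # keeping the routing network coarse and the computation cheap.
    if p["downslope"] is not None:
        inflow[p["downslope"]] += out

print(inflow[1])                             # water reaching the riparian patch
```

Changing the sub-unit fractions (e.g. more "thinned" area) changes the downslope delivery without refining the routing network, which is the efficiency argument made above.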
Accounting for small scale heterogeneity in ecohydrologic watershed models
NASA Astrophysics Data System (ADS)
Burke, W.; Tague, C.
2017-12-01
Spatially distributed ecohydrologic models are inherently constrained by the spatial resolution of their smallest units, below which land and processes are assumed to be homogeneous. At coarse scales, heterogeneity is often accounted for by computing stores and fluxes of interest over a distribution of land cover types (or other sources of heterogeneity) within spatially explicit modeling units. However, this approach ignores spatial organization and the lateral transfer of water and materials downslope. The challenge is to account for both the role of flow network topology and fine-scale heterogeneity. We present a new approach that defines two levels of spatial aggregation and integrates a spatially explicit network approach with a flexible representation of finer-scale aspatial heterogeneity. Critically, this solution does not simply increase the resolution of the smallest spatial unit, and so by comparison results in improved computational efficiency. The approach is demonstrated by adapting the Regional Hydro-Ecologic Simulation System (RHESSys), an ecohydrologic model widely used to simulate climate, land use, and land management impacts. We illustrate the utility of our approach by showing how the model can be used to better characterize forest thinning impacts on ecohydrology. Forest thinning is typically done at the scale of individual trees, and yet management responses of interest include impacts on watershed-scale hydrology and on downslope riparian vegetation. Our approach allows us to characterize the variability in tree size/carbon reduction and water transfers between neighboring trees while still capturing hillslope- to watershed-scale effects. Our illustrative example demonstrates that accounting for these fine-scale effects can substantially alter model estimates, in some cases shifting the impacts of thinning on downslope water availability from increases to decreases.
We conclude by describing other use cases that may benefit from this approach including characterizing urban vegetation and storm water management features and their impact on watershed scale hydrology and biogeochemical cycling.
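The two-level scheme described above lends itself to a compact sketch: vertical stores and fluxes are aggregated over aspatial patch types within each spatial unit, and only the spatial units participate in downslope routing. The fragment below is purely illustrative; the function names, area fractions, and transfer coefficient are invented and are not RHESSys code.

```python
# Hypothetical sketch of two-level aggregation: each spatially explicit
# unit holds aspatial patch types with area fractions; vertical fluxes
# are computed per type and area-weighted, while lateral routing
# operates only on the spatial units.

def unit_flux(patch_fluxes, fractions):
    """Area-weighted aggregate flux over aspatial patch types."""
    return sum(f * w for f, w in zip(patch_fluxes, fractions))

def route_downslope(unit_fluxes, transfer=0.3):
    """Pass a fraction of each unit's output to its downslope neighbor."""
    routed = list(unit_fluxes)
    for i in range(len(routed) - 1):
        moved = transfer * routed[i]
        routed[i] -= moved
        routed[i + 1] += moved
    return routed

# Two hillslope units, each a mix of thinned and unthinned patch types.
u1 = unit_flux([2.0, 5.0], [0.4, 0.6])   # upslope unit
u2 = unit_flux([1.0, 4.0], [0.5, 0.5])   # downslope unit
downslope = route_downslope([u1, u2])
```

Adding more aspatial types refines within-unit heterogeneity without adding units to the lateral routing network, which is the source of the computational savings claimed above.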
A survey of CPU-GPU heterogeneous computing techniques
Mittal, Sparsh; Vetter, Jeffrey S.
2015-07-04
As CPUs and GPUs are both employed in a wide range of applications, it has been acknowledged that each of these processing units (PUs) has unique features and strengths, and hence CPU-GPU collaboration is inevitable for achieving high-performance computing. This has motivated a significant amount of research on heterogeneous computing techniques, along with the design of CPU-GPU fused chips and petascale heterogeneous supercomputers. In this paper, we survey heterogeneous computing techniques (HCTs), such as workload partitioning, that enable utilizing both CPUs and GPUs to improve performance and/or energy efficiency. We review heterogeneous computing approaches at the runtime, algorithm, programming, compiler, and application levels. Further, we review both discrete and fused CPU-GPU systems and discuss benchmark suites designed for evaluating heterogeneous computing systems (HCSs). We believe that this paper will provide insights into the working and scope of applications of HCTs to researchers and motivate them to further harness the computational powers of CPUs and GPUs to achieve the goal of exascale performance.
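As a deliberately simplified illustration of the workload-partitioning HCTs surveyed here, a static split can assign work in proportion to each PU's measured throughput so both finish at roughly the same time. The function name and throughput figures below are hypothetical, not drawn from any surveyed system.

```python
# Hypothetical static CPU-GPU workload partitioning: split a batch of
# independent work items in proportion to each processing unit's
# measured throughput (items per unit time).

def partition_workload(n_items, cpu_rate, gpu_rate):
    """Return (cpu_share, gpu_share) so each PU's share divided by its
    rate, i.e. its running time, is approximately equal."""
    gpu_share = round(n_items * gpu_rate / (cpu_rate + gpu_rate))
    return n_items - gpu_share, gpu_share

# With a GPU four times faster than the CPU, it receives ~4x the work.
cpu_items, gpu_items = partition_workload(1000, cpu_rate=2.0, gpu_rate=8.0)
```

Dynamic schemes surveyed in such papers refine this by re-measuring rates at runtime; the static split is just the simplest instance of the idea.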
Pilots 2.0: DIRAC pilots for all the skies
NASA Astrophysics Data System (ADS)
Stagni, F.; Tsaregorodtsev, A.; McNab, A.; Luzzi, C.
2015-12-01
In the last few years, new types of computing infrastructures, such as IAAS (Infrastructure as a Service) and IAAC (Infrastructure as a Client), gained popularity. New resources may come as part of pledged resources, while others are opportunistic. Most of these new infrastructures are based on virtualization techniques. Meanwhile, some concepts, such as distributed queues, lost appeal, while still supporting a vast amount of resources. Virtual Organizations are therefore facing heterogeneity of the available resources, and the use of an Interware software like DIRAC to hide the diversity of underlying resources has become essential. The DIRAC WMS is based on the concept of pilot jobs, introduced back in 2004. A pilot is what creates the possibility to run jobs on a worker node. Within DIRAC, we developed a new generation of pilot jobs that we dubbed Pilots 2.0. Pilots 2.0 are not tied to a specific infrastructure; rather, they are generic, fully configurable, and extensible pilots. A Pilot 2.0 can be sent as a script to be run, or it can be fetched from a remote location. A Pilot 2.0 can run on every computing resource, e.g. on CREAM Computing Elements, on DIRAC Computing Elements, on Virtual Machines as part of the contextualization script, or on IAAC resources, provided that these machines are properly configured, hiding all the details of the Worker Nodes (WNs) infrastructure. Pilots 2.0 can be generated server- and client-side. Pilots 2.0 are the "pilots to fly in all the skies", aiming at easy use of computing power, in whatever form it is presented. Another aim is the unification and simplification of the monitoring infrastructure for all kinds of computing resources, by using pilots as a network of distributed sensors coordinated by a central resource monitoring system. Pilots 2.0 have been developed using the command pattern. VOs using DIRAC can tune Pilots 2.0 as they need, and extend or replace each and every pilot command in an easy way.
In this paper we describe how Pilots 2.0 work with distributed and heterogeneous resources, providing the necessary abstraction to deal with different kinds of computing resources.
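The command-pattern structure mentioned above can be sketched as follows. The class names and the three-step sequence are invented for illustration and do not reflect the actual DIRAC pilot command API.

```python
# Illustrative command-pattern sketch: a pilot is a configurable
# sequence of command objects, each of which a VO could replace or
# extend. All class names here are hypothetical.

class PilotCommand:
    def execute(self, context):
        raise NotImplementedError

class CheckEnvironment(PilotCommand):
    def execute(self, context):
        context["environment_checked"] = True

class InstallDIRAC(PilotCommand):
    def execute(self, context):
        context["dirac_installed"] = True

class LaunchJobAgent(PilotCommand):
    def execute(self, context):
        # the agent only starts if the earlier commands succeeded
        context["agent_running"] = context.get("dirac_installed", False)

def run_pilot(commands):
    context = {}
    for cmd in commands:
        cmd.execute(context)
    return context

state = run_pilot([CheckEnvironment(), InstallDIRAC(), LaunchJobAgent()])
```

Swapping, removing, or subclassing a command changes pilot behavior without touching the driver loop, which is the extensibility property the abstract emphasizes.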
A goodness-of-fit test for capture-recapture model M(t) under closure
Stanley, T.R.; Burnham, K.P.
1999-01-01
A new, fully efficient goodness-of-fit test for the time-specific closed-population capture-recapture model M(t) is presented. This test is based on the residual distribution of the capture history data given the maximum likelihood parameter estimates under model M(t), is partitioned into informative components, and uses chi-square statistics. Comparison of this test with Leslie's test (Leslie, 1958, Journal of Animal Ecology 27, 84-86) for model M(t), using Monte Carlo simulations, shows the new test generally outperforms Leslie's test. The new test is frequently computable when Leslie's test is not, has Type I error rates that are closer to nominal error rates than Leslie's test, and is sensitive to behavioral variation and heterogeneity in capture probabilities. Leslie's test is not sensitive to behavioral variation in capture probabilities but, when computable, has greater power to detect heterogeneity than the new test.
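The chi-square construction underlying such a test can be illustrated with a minimal sketch. The capture-history counts and expected values below are invented; the actual test additionally partitions the statistic into informative components as described above.

```python
# Minimal chi-square goodness-of-fit sketch: compare observed
# capture-history counts with expected counts under a fitted model.
# The counts here are made up for illustration.

def chi_square_statistic(observed, expected):
    """Sum of (O - E)^2 / E over capture-history categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts for four capture histories.
obs = [40, 25, 20, 15]
exp = [38.0, 27.0, 21.0, 14.0]
stat = chi_square_statistic(obs, exp)
```

In practice the statistic would be compared against a chi-square distribution whose degrees of freedom depend on the number of categories and fitted parameters.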
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wayne F. Boyer; Gurdeep S. Hura
2005-09-01
The problem of obtaining an optimal matching and scheduling of interdependent tasks in distributed heterogeneous computing (DHC) environments is well known to be NP-hard. In a DHC system, task execution time depends on the machine to which a task is assigned, and task precedence constraints are represented by a directed acyclic graph. Recent research in evolutionary techniques has shown that genetic algorithms usually obtain more efficient schedules than other known algorithms. We propose a non-evolutionary random scheduling (RS) algorithm for efficient matching and scheduling of interdependent tasks in a DHC system. RS is a succession of randomized task orderings and a heuristic mapping from task order to schedule. Randomized task ordering is effectively a topological sort where the outcome may be any possible task order for which the task precedence constraints are maintained. A detailed comparison to existing evolutionary techniques (GA and PSGA) shows the proposed algorithm is less complex than evolutionary techniques, computes schedules in less time, and requires less memory and fewer tuning parameters. Simulation results show that the average schedules produced by RS are approximately as efficient as PSGA schedules for all cases studied and clearly more efficient than PSGA for certain cases. The standard formulation for the scheduling problem addressed in this paper is Rm|prec|Cmax.
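A minimal sketch of the RS idea, randomized topological ordering followed by a heuristic earliest-finish mapping onto machine-dependent execution times, might look like the following. The DAG, cost table, and 50-iteration budget are illustrative assumptions, not the authors' implementation.

```python
import random

# Sketch of random scheduling (RS): draw random topological orders of
# the task DAG, map each order greedily onto heterogeneous machines
# (machine-dependent run times), and keep the best makespan.

def random_topological_order(succ, indegree, rng):
    indeg = dict(indegree)
    ready = [t for t, d in indeg.items() if d == 0]
    order = []
    while ready:
        t = ready.pop(rng.randrange(len(ready)))  # any eligible task
        order.append(t)
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return order

def schedule(order, cost, n_machines, succ):
    finish, free = {}, [0.0] * n_machines
    for t in order:
        ready_at = max([finish[p] for p, ss in succ.items() if t in ss] + [0.0])
        # place the task on the machine giving the earliest finish time
        best = min(range(n_machines),
                   key=lambda m: max(free[m], ready_at) + cost[t][m])
        finish[t] = max(free[best], ready_at) + cost[t][best]
        free[best] = finish[t]
    return max(finish.values())

rng = random.Random(7)
succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
indeg = {"a": 0, "b": 1, "c": 1, "d": 2}
cost = {"a": [2, 3], "b": [1, 4], "c": [4, 1], "d": [2, 2]}  # per machine
best = min(schedule(random_topological_order(succ, indeg, rng), cost, 2, succ)
           for _ in range(50))
```

Because the mapping heuristic is cheap, many random orders can be tried for the cost of a few evolutionary generations, which is the efficiency argument made above.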
Dinov, Ivo D
2016-01-01
Managing, processing and understanding big healthcare data is challenging, costly and demanding. Without a robust fundamental theory for representation, analysis and inference, a roadmap for uniform handling and analysis of such complex data remains elusive. In this article, we outline various big data challenges, opportunities, modeling methods and software techniques for blending complex healthcare data, advanced analytic tools, and distributed scientific computing. Using imaging, genetic and healthcare data we provide examples of processing heterogeneous datasets using distributed cloud services, automated and semi-automated classification techniques, and open-science protocols. Despite substantial advances, new innovative technologies need to be developed that enhance, scale and optimize the management and processing of large, complex and heterogeneous data. Stakeholder investments in data acquisition, research and development, computational infrastructure and education will be critical to realize the huge potential of big data, to reap the expected information benefits and to build lasting knowledge assets. Multi-faceted proprietary, open-source, and community developments will be essential to enable broad, reliable, sustainable and efficient data-driven discovery and analytics. Big data will affect every sector of the economy and its hallmark will be 'team science'.
Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation
NASA Astrophysics Data System (ADS)
Anisenkov, A. V.
2018-03-01
In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for efficient storage and processing of experimental data. Annually, the ATLAS experiment performed at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes of raw data from the recording electronics and several petabytes of data from the simulation system. For processing and storage of such super-large volumes of data, the computing model of the ATLAS experiment is based on a heterogeneous, geographically distributed computing environment, which includes the worldwide LHC computing grid (WLCG) infrastructure and is able to meet the requirements of the experiment for processing huge data sets and provide a high degree of their accessibility (hundreds of petabytes). The paper considers the ATLAS grid information system (AGIS) used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of computer centers, and to describe and store all possible parameters, control, configuration, and other auxiliary information required for the effective operation of the ATLAS distributed computing applications and services. The role of the AGIS system in the development of a unified description of the computing resources provided by grid sites, supercomputer centers, and cloud computing into a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and integrate supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).
Quantification of type I error probabilities for heterogeneity LOD scores.
Abreu, Paula C; Hodge, Susan E; Greenberg, David A
2002-02-01
Locus heterogeneity is a major confounding factor in linkage analysis. When no prior knowledge of linkage exists, and one aims to detect linkage and heterogeneity simultaneously, classical distribution theory of log-likelihood ratios does not hold. Despite some theoretical work on this problem, no generally accepted practical guidelines exist. Nor has anyone rigorously examined the combined effect of testing for linkage and heterogeneity while simultaneously maximizing over two genetic models (dominant, recessive). The effect of linkage phase represents another uninvestigated issue. Using computer simulation, we investigated the type I error (P value) of the "admixture" heterogeneity LOD (HLOD) score, i.e., the LOD score maximized over both the recombination fraction theta and the admixture parameter alpha, and we compared this with the P values when one maximizes only with respect to theta (i.e., the standard LOD score). We generated datasets of phase-known and -unknown nuclear families of sizes k = 2, 4, and 6 children, under fully penetrant autosomal dominant inheritance. We analyzed these datasets (1) assuming a single genetic model and maximizing the HLOD over theta and alpha; and (2) maximizing the HLOD additionally over two dominance models (dominant vs. recessive), then subtracting a 0.3 correction. For both (1) and (2), P values increased with family size k; rose less for phase-unknown families than for phase-known ones, with the former approaching the latter as k increased; and did not exceed those of the one-sided mixture distribution ξ = (1/2)χ²(1) + (1/2)χ²(2). Thus, maximizing the HLOD over theta and alpha appears to add considerably less than an additional degree of freedom to the associated χ²(1) distribution. We conclude with practical guidelines for linkage investigators.
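The admixture score being maximized has the form HLOD = max over alpha and theta of the sum over families of log10(alpha * LR_i(theta) + 1 - alpha), where LR_i(theta) is family i's likelihood ratio against theta = 1/2. A toy grid maximization, with made-up per-family likelihood-ratio curves, is sketched below.

```python
import math

# Toy grid maximization of the admixture HLOD. The three "families"
# below use invented likelihood-ratio curves: two peak at theta = 0
# ("linked"), one sits below 1 ("unlinked").

def hlod(lr_funcs, thetas, alphas):
    best = 0.0
    for theta in thetas:
        for alpha in alphas:
            score = sum(math.log10(alpha * lr(theta) + 1.0 - alpha)
                        for lr in lr_funcs)
            best = max(best, score)
    return best

lr_funcs = [lambda t: 4.0 - 6.0 * t,   # "linked" family
            lambda t: 3.0 - 4.0 * t,   # another "linked" family
            lambda t: 0.5]             # "unlinked" family
thetas = [i / 20.0 for i in range(11)]   # 0.0 .. 0.5
alphas = [i / 10.0 for i in range(11)]   # 0.0 .. 1.0
score = hlod(lr_funcs, thetas, alphas)
```

The simulations described above address precisely how this double maximization inflates type I error relative to maximizing over theta alone.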
NASA Astrophysics Data System (ADS)
Leskiw, Donald M.; Zhau, Junmei
2000-06-01
This paper reports on results from an ongoing project to develop methodologies for representing and managing multiple, concurrent levels of detail and enabling high performance computing using parallel arrays within distributed object-based simulation frameworks. At this time we present the methodology for representing and managing multiple, concurrent levels of detail and modeling accuracy by using a representation based on the Kalman approach for estimation. The Kalman System Model equations are used to represent model accuracy, Kalman Measurement Model equations provide transformations between heterogeneous levels of detail, and interoperability among disparate abstractions is provided using a form of the Kalman Update equations.
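The role of the Kalman Update equations in fusing levels of detail can be illustrated in one dimension: a coarse-level state estimate with a variance encoding model accuracy is updated by a value from a finer level of detail, treated as a measurement. The variances and values below are invented for the example.

```python
# 1-D Kalman measurement update: fuse a prior estimate (x, variance p)
# with a measurement (z, variance r). Smaller variance means higher
# accuracy, so the result is pulled toward the more accurate source.

def kalman_update(x, p, z, r):
    k = p / (p + r)            # Kalman gain
    x_new = x + k * (z - x)    # updated estimate
    p_new = (1.0 - k) * p      # reduced uncertainty
    return x_new, p_new

# Coarse-level estimate 10.0 (variance 4.0) fused with a fine-level
# observation 12.0 (variance 1.0).
x, p = kalman_update(10.0, 4.0, 12.0, 1.0)
```

In the framework described above, the analogous Measurement Model equations additionally transform between heterogeneous state representations before this fusion step.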
Pandey, Vaibhav; Saini, Poonam
2018-06-01
The MapReduce (MR) computing paradigm and its open-source implementation Hadoop have become a de facto standard for processing big data in a distributed environment. Initially, the Hadoop system was homogeneous in three significant aspects, namely, user, workload, and cluster (hardware). However, with the growing variety of MR jobs and the inclusion of differently configured nodes in existing clusters, heterogeneity has become an essential part of Hadoop systems. Heterogeneity factors adversely affect the performance of a Hadoop scheduler and limit the overall throughput of the system. To overcome this problem, various heterogeneous Hadoop schedulers have been proposed in the literature. Existing surveys in this area mostly cover homogeneous schedulers and classify them on the basis of the quality-of-service parameters they optimize. Hence, there is a need to study heterogeneous Hadoop schedulers on the basis of the heterogeneity factors they consider. In this survey article, we first discuss different heterogeneity factors that typically exist in a Hadoop system and then explore various challenges that arise while designing schedulers in the presence of such heterogeneity. Afterward, we present a comparative study of the heterogeneous scheduling algorithms available in the literature and classify them by the aforementioned heterogeneity factors. Lastly, we investigate the different methods and environments used for evaluating the discussed Hadoop schedulers.
Heterogeneous distributed databases: A case study
NASA Technical Reports Server (NTRS)
Stewart, Tracy R.; Mukkamala, Ravi
1991-01-01
Alternatives are reviewed for accessing distributed heterogeneous databases and a recommended solution is proposed. The current study is limited to the Automated Information Systems Center at the Naval Sea Combat Systems Engineering Station at Norfolk, VA. This center maintains two databases located on Digital Equipment Corporation's VAX computers running under the VMS operating system. The first database, ICMS, resides on a VAX 11/780 and has been implemented using VAX DBMS, a CODASYL-based system. The second database, CSA, resides on a VAX 6460 and has been implemented using the ORACLE relational database management system (RDBMS). Both databases are used for configuration management within the U.S. Navy. Different customer bases are supported by each database. ICMS tracks U.S. Navy ships and major systems (anti-sub, sonar, etc.). Even though the major systems on ships and submarines have totally different functions, some of the equipment within the major systems is common to both ships and submarines.
Methodologies and systems for heterogeneous concurrent computing
NASA Technical Reports Server (NTRS)
Sunderam, V. S.
1994-01-01
Heterogeneous concurrent computing is gaining increasing acceptance as an alternative or complementary paradigm to multiprocessor-based parallel processing as well as to conventional supercomputing. While algorithmic and programming aspects of heterogeneous concurrent computing are similar to their parallel processing counterparts, system issues, partitioning and scheduling, and performance aspects are significantly different. In this paper, we discuss critical design and implementation issues in heterogeneous concurrent computing, and describe techniques for enhancing its effectiveness. In particular, we highlight the system level infrastructures that are required, aspects of parallel algorithm development that most affect performance, system capabilities and limitations, and tools and methodologies for effective computing in heterogeneous networked environments. We also present recent developments and experiences in the context of the PVM system and comment on ongoing and future work.
NASA Astrophysics Data System (ADS)
Barrash, W.; Cardiff, M. A.; Kitanidis, P. K.
2012-12-01
The distribution of hydraulic conductivity (K) is a major control on groundwater flow and contaminant transport. Our limited ability to determine 3D heterogeneous distributions of K is a major reason for increased costs and uncertainties associated with virtually all aspects of groundwater contamination management (e.g., site investigations, risk assessments, remediation method selection/design/operation, monitoring system design/operation). Hydraulic tomography (HT) is an emerging method for directly estimating the spatially variable distribution of K - in a similar fashion to medical or geophysical imaging. Here we present results from 3D transient hydraulic tomography (3DTHT) field-scale experiments, which capture the heterogeneous K distribution in a permeable, moderately heterogeneous, coarse fluvial unconfined aquifer at the Boise Hydrogeophysical Research Site (BHRS). The results are verified against high-resolution K profiles from multi-level slug tests at BHRS wells. The 3DTHT field system for well instrumentation and data acquisition/feedback is fully modular and portable, and the in-well packer-and-port system is easily assembled and disassembled without expensive support equipment or need for gas pressurization. Tests are run for 15-20 min and the aquifer is allowed to recover while the pumping equipment is repositioned between tests. The tomographic modeling software we developed takes as input observations of temporal drawdown behavior from each of numerous zones isolated in numerous observation wells during a series of pumping tests conducted from numerous isolated intervals in one or more pumping wells. The software solves for distributed K (as well as storage parameters Ss and Sy, if desired) and estimates parameter uncertainties using: a transient 3D unconfined forward model in MODFLOW, the adjoint state method for calculating sensitivities (Clemo 2007), and the quasi-linear geostatistical inverse method (Kitanidis 1995) for the inversion. 
We solve for K at >100,000 sub-m3 (1m x 1m x 0.6m) locations in a 60m x 60m x 18m modeled volume of the BHRS, with the primary investigated volume approximately 12m x 8m x 16m. Computing times are reasonable on high-end desktop computers or small clusters; we are investigating additional efficiency improvements with massive parallelization. Results from complete coverage (1m-length zones) in one pumping well and five observation wells provide a basis for evaluating method resolution capabilities by comparing K statistics from solutions with all tests and observations against partial test and observation coverage, and against independent K measurements at wells with multi-level slug tests. From these analyses we show that 3DTHT compares well with slug test results, and high-resolution information on heterogeneity is lost rapidly with reduction in test or observation coverage.
Lima, Jakelyne; Cerdeira, Louise Teixeira; Bol, Erick; Schneider, Maria Paula Cruz; Silva, Artur; Azevedo, Vasco; Abelém, Antônio Jorge Gomes
2012-01-01
Improvements in genome sequencing techniques have resulted in the generation of huge volumes of data. As a consequence of this progress, the genome assembly stage demands even more computational power, since the incoming sequence files contain large amounts of data. To speed up the process, it is often necessary to distribute the workload among a group of machines. However, this requires hardware and software solutions specially configured for this purpose. Grid computing tries to simplify this process of aggregating resources, but does not always offer the best possible performance due to the heterogeneity and decentralized management of its resources. Thus, it is necessary to develop software that takes these peculiarities into account. To achieve this, we developed an algorithm that adapts the de novo assembly software ABySS to operate efficiently in grids. We ran ABySS with and without our algorithm in the grid simulator SimGrid. Tests showed that our algorithm is viable, flexible, and scalable even in a heterogeneous environment, improving genome assembly time in computational grids without changing the quality of the assembly. PMID:22461785
SciSpark's SRDD : A Scientific Resilient Distributed Dataset for Multidimensional Data
NASA Astrophysics Data System (ADS)
Palamuttam, R. S.; Wilson, B. D.; Mogrovejo, R. M.; Whitehall, K. D.; Mattmann, C. A.; McGibbney, L. J.; Ramirez, P.
2015-12-01
Remote sensing data and climate model output are multi-dimensional arrays of massive sizes locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF) making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We have developed SciSpark, a robust Big Data framework, that extends Apache Spark for scaling scientific computations. Apache Spark improves the map-reduce implementation in Apache Hadoop for parallel computing on a cluster, by emphasizing in-memory computation, "spilling" to disk only as needed, and relying on lazy evaluation. Central to Spark is the Resilient Distributed Dataset (RDD), an in-memory distributed data structure that extends the functional paradigm provided by the Scala programming language. However, RDDs are ideal for tabular or unstructured data, and not for highly dimensional data. The SciSpark project introduces the Scientific Resilient Distributed Dataset (sRDD), a distributed-computing array structure which supports iterative scientific algorithms for multidimensional data. SciSpark processes data stored in NetCDF and HDF files by partitioning them across time or space and distributing the partitions among a cluster of compute nodes. We show usability and extensibility of SciSpark by implementing distributed algorithms for geospatial operations on large collections of multi-dimensional grids. In particular we address the problem of scaling an automated method for finding Mesoscale Convective Complexes. SciSpark provides a tensor interface to support the pluggability of different matrix libraries. We evaluate performance of the various matrix libraries in distributed pipelines, such as Nd4j and Breeze. We detail the architecture and design of SciSpark, our efforts to integrate climate science algorithms, parallel ingest and partitioning (sharding) of A-Train satellite observations from model grids. 
These solutions are encompassed in SciSpark, an open-source software framework for distributed computing on scientific data.
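The time-partitioning step described above can be sketched without Spark itself; plain Python lists stand in below for NetCDF/HDF arrays and for the sRDD, and the partition count is an arbitrary assumption.

```python
# Sketch of partitioning a time-ordered stack of 2-D grids into
# contiguous chunks that a framework like Spark would distribute
# across compute nodes.

def partition_by_time(grids, n_partitions):
    """Split a time-ordered list of grids into n contiguous partitions,
    distributing any remainder one extra grid at a time."""
    size, rem = divmod(len(grids), n_partitions)
    parts, start = [], 0
    for i in range(n_partitions):
        end = start + size + (1 if i < rem else 0)
        parts.append(grids[start:end])
        start = end
    return parts

# 10 time steps of a tiny 2x2 grid, split across 3 workers.
grids = [[[t, t], [t, t]] for t in range(10)]
parts = partition_by_time(grids, 3)
```

Partitioning by space instead of time would slice within each grid rather than across the stack; the sRDD supports both, per the description above.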
Particle simulation on heterogeneous distributed supercomputers
NASA Technical Reports Server (NTRS)
Becker, Jeffrey C.; Dagum, Leonardo
1993-01-01
We describe the implementation and performance of a three dimensional particle simulation distributed between a Thinking Machines CM-2 and a Cray Y-MP. These are connected by a combination of two high-speed networks: a high-performance parallel interface (HIPPI) and an optical network (UltraNet). This is the first application to use this configuration at NASA Ames Research Center. We describe our experience implementing and using the application and report the results of several timing measurements. We show that the distribution of applications across disparate supercomputing platforms is feasible and has reasonable performance. In addition, several practical aspects of the computing environment are discussed.
NASA Astrophysics Data System (ADS)
Poudel, Joemini; Matthews, Thomas P.; Mitsuhashi, Kenji; Garcia-Uribe, Alejandro; Wang, Lihong V.; Anastasio, Mark A.
2017-03-01
Photoacoustic computed tomography (PACT) is an emerging computed imaging modality that exploits optical contrast and ultrasonic detection principles to form images of the photoacoustically induced initial pressure distribution within tissue. The PACT reconstruction problem corresponds to a time-domain inverse source problem, where the initial pressure distribution is recovered from the measurements recorded on an aperture outside the support of the source. A major challenge in transcranial PACT brain imaging is to compensate for aberrations in the measured data due to the propagation of the photoacoustic wavefields through the skull. To properly account for these effects, a wave equation-based inversion method should be employed that can model the heterogeneous elastic properties of the medium. In this study, an iterative image reconstruction method for 3D transcranial PACT is developed based on the elastic wave equation. To accomplish this, a forward model based on a finite-difference time-domain discretization of the elastic wave equation is established. Subsequently, gradient-based methods are employed for computing penalized least squares estimates of the initial source distribution that produced the measured photoacoustic data. The developed reconstruction algorithm is validated and investigated through computer-simulation studies.
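The penalized least-squares step can be illustrated with a toy linear forward operator in place of the elastic-wave model: gradient descent on ||H f - d||² + λ||f||². The matrix, data, and step size below are invented, and the actual method uses wave-equation forward solves and more sophisticated gradient-based optimizers.

```python
# Toy penalized least-squares reconstruction: minimize
# ||H f - d||^2 + lam * ||f||^2 by plain gradient descent, with a
# small dense matrix standing in for the elastic-wave forward model.

def gradient_descent(H, d, lam, steps=2000, lr=0.05):
    n = len(H[0])
    f = [0.0] * n
    for _ in range(steps):
        # residual r = H f - d
        r = [sum(H[i][j] * f[j] for j in range(n)) - d[i]
             for i in range(len(H))]
        # gradient = 2 H^T r + 2 lam f
        for j in range(n):
            g = 2.0 * sum(H[i][j] * r[i] for i in range(len(H))) \
                + 2.0 * lam * f[j]
            f[j] -= lr * g
    return f

H = [[1.0, 0.5], [0.0, 1.0]]
d = [2.0, 1.0]
f = gradient_descent(H, d, lam=0.0)
# With no penalty the iterates approach the exact solution of H f = d.
```

A nonzero λ trades data fit for a smaller-norm (regularized) source estimate, which is the role of the penalty in the reconstruction described above.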
Workflow Management Systems for Molecular Dynamics on Leadership Computers
NASA Astrophysics Data System (ADS)
Wells, Jack; Panitkin, Sergey; Oleynik, Danila; Jha, Shantenu
Molecular Dynamics (MD) simulations play an important role in a range of disciplines from Material Science to Biophysical systems and account for a large fraction of cycles consumed on computing resources. Increasingly science problems require the successful execution of "many" MD simulations as opposed to a single MD simulation. There is a need to provide scalable and flexible approaches to the execution of the workload. We present preliminary results on the Titan computer at the Oak Ridge Leadership Computing Facility that demonstrate a general capability to manage workload execution agnostic of a specific MD simulation kernel or execution pattern, and in a manner that integrates disparate grid-based and supercomputing resources. Our results build upon our extensive experience of distributed workload management in the high-energy physics ATLAS project using PanDA (Production and Distributed Analysis System), coupled with recent conceptual advances in our understanding of workload management on heterogeneous resources. We will discuss how we will generalize these initial capabilities towards a more production level service on DOE leadership resources. This research is sponsored by US DOE/ASCR and used resources of the OLCF computing facility.
Ensemble representations: effects of set size and item heterogeneity on average size perception.
Marchant, Alexander P; Simons, Daniel J; de Fockert, Jan W
2013-02-01
Observers can accurately perceive and evaluate the statistical properties of a set of objects, forming what is now known as an ensemble representation. The accuracy and speed with which people can judge the mean size of a set of objects have led to the proposal that ensemble representations of average size can be computed in parallel when attention is distributed across the display. Consistent with this idea, judgments of mean size show little or no decrement in accuracy when the number of objects in the set increases. However, the lack of a set size effect might result from the regularity of the item sizes used in previous studies. Here, we replicate these previous findings, but show that judgments of mean set size become less accurate when set size increases and the heterogeneity of the item sizes increases. This pattern can be explained by assuming that average size judgments are computed using a limited capacity sampling strategy, and it does not necessitate an ensemble representation computed in parallel across all items in a display.
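The limited-capacity sampling account suggested above is easy to simulate: an "observer" averages only a small random subset of items rather than all of them. The set sizes, item sizes, and four-item capacity below are illustrative assumptions, not the study's stimuli.

```python
import random

# Toy simulation of limited-capacity averaging: the judged mean is the
# mean of a small random sample of the displayed item sizes.

def judged_mean(sizes, capacity, rng):
    sample = rng.sample(sizes, min(capacity, len(sizes)))
    return sum(sample) / len(sample)

rng = random.Random(42)
homogeneous = [10.0] * 16          # low heterogeneity
heterogeneous = [4.0, 16.0] * 8    # same true mean, high spread
true_mean = 10.0

err_homog = abs(judged_mean(homogeneous, 4, rng) - true_mean)
err_heter = abs(judged_mean(heterogeneous, 4, rng) - true_mean)
```

A fixed-size sample is exact for the homogeneous set but noisy for the heterogeneous one, reproducing the interaction between set size and heterogeneity described above without any parallel ensemble computation.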
Calculation of absolute protein-ligand binding free energy using distributed replica sampling.
Rodinger, Tomas; Howell, P Lynne; Pomès, Régis
2008-10-21
Distributed replica sampling [T. Rodinger et al., J. Chem. Theory Comput. 2, 725 (2006)] is a simple and general scheme for Boltzmann sampling of conformational space by computer simulation in which multiple replicas of the system undergo a random walk in reaction coordinate or temperature space. Individual replicas are linked through a generalized Hamiltonian containing an extra potential energy term or bias which depends on the distribution of all replicas, thus enforcing the desired sampling distribution along the coordinate or parameter of interest regardless of free energy barriers. In contrast to replica exchange methods, efficient implementation of the algorithm does not require synchronicity of the individual simulations. The algorithm is inherently suited for large-scale simulations using shared or heterogeneous computing platforms such as a distributed network. In this work, we build on our original algorithm by introducing Boltzmann-weighted jumping, which allows moves of a larger magnitude and thus enhances sampling efficiency along the reaction coordinate. The approach is demonstrated using a realistic and biologically relevant application; we calculate the standard binding free energy of benzene to the L99A mutant of T4 lysozyme. Distributed replica sampling is used in conjunction with thermodynamic integration to compute the potential of mean force for extracting the ligand from protein and solvent along a nonphysical spatial coordinate. Dynamic treatment of the reaction coordinate leads to faster statistical convergence of the potential of mean force than a conventional static coordinate, which suffers from slow transitions on a rugged potential energy surface.
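The core mechanism, a bias that depends on the distribution of all replicas, can be caricatured on a discretized reaction coordinate. The energy profile, bias strength, and move set below are invented and far simpler than the published algorithm.

```python
import math
import random

# Caricature of distributed replica sampling: replicas random-walk on a
# discretized reaction coordinate, and a bias proportional to the number
# of replicas already occupying a bin pushes them apart, flattening the
# sampled distribution despite an energy barrier.

def run_replicas(energy, n_replicas, steps, bias, rng):
    n_bins = len(energy)
    pos = [rng.randrange(n_bins) for _ in range(n_replicas)]
    counts = [0] * n_bins
    for p in pos:
        counts[p] += 1
    for _ in range(steps):
        for i in range(n_replicas):
            new = max(0, min(n_bins - 1, pos[i] + rng.choice((-1, 1))))
            # effective energy adds the replica-distribution bias term
            delta = (energy[new] + bias * counts[new]) - \
                    (energy[pos[i]] + bias * (counts[pos[i]] - 1))
            if delta <= 0 or rng.random() < math.exp(-delta):
                counts[pos[i]] -= 1
                counts[new] += 1
                pos[i] = new
    return counts

rng = random.Random(1)
energy = [0.0, 1.0, 3.0, 1.0, 0.0]   # a barrier in the middle
counts = run_replicas(energy, n_replicas=5, steps=200, bias=0.5, rng=rng)
```

Because the bias couples replicas only through the shared occupancy counts, the individual walks need not run in lockstep, which is the asynchrony advantage over replica exchange noted above.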
Balachandar, Arjun; Prescott, Steven A
2018-05-01
Distinct spiking patterns may arise from qualitative differences in ion channel expression (i.e. when different neurons express distinct ion channels) and/or when quantitative differences in expression levels qualitatively alter the spike generation process. We hypothesized that spiking patterns in neurons of the superficial dorsal horn (SDH) of the spinal cord reflect both mechanisms. We reproduced SDH neuron spiking patterns by varying densities of KV1- and A-type potassium conductances. Plotting the spiking patterns that emerge from different density combinations revealed spiking-pattern regions separated by boundaries (bifurcations). This map suggests that certain spiking pattern combinations occur when the distribution of potassium channel densities straddles boundaries, whereas other spiking patterns reflect distinct patterns of ion channel expression. The former mechanism may explain why certain spiking patterns co-occur in genetically identified neuron types. We also present algorithms to predict spiking pattern proportions from ion channel density distributions, and vice versa. Neurons are often classified by spiking pattern. Yet, some neurons exhibit distinct patterns under subtly different test conditions, which suggests that they operate near an abrupt transition, or bifurcation. A set of such neurons may exhibit heterogeneous spiking patterns not because of qualitative differences in which ion channels they express, but rather because quantitative differences in expression levels cause neurons to operate on opposite sides of a bifurcation. Neurons in the spinal dorsal horn, for example, respond to somatic current injection with patterns that include tonic, single, gap, delayed and reluctant spiking. It is unclear whether these patterns reflect five cell populations (defined by distinct ion channel expression patterns), heterogeneity within a single population, or some combination thereof.
We reproduced all five spiking patterns in a computational model by varying the densities of a low-threshold (KV1-type) potassium conductance and an inactivating (A-type) potassium conductance and found that single, gap, delayed and reluctant spiking arise when the joint probability distribution of those channel densities spans two intersecting bifurcations that divide the parameter space into quadrants, each associated with a different spiking pattern. Tonic spiking likely arises from a separate distribution of potassium channel densities. These results argue in favour of two cell populations, one characterized by tonic spiking and the other by heterogeneous spiking patterns. We present algorithms to predict spiking pattern proportions based on ion channel density distributions and, conversely, to estimate ion channel density distributions based on spiking pattern proportions. The implications for classifying cells based on spiking pattern are discussed. © 2018 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.
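The central picture in this record is a conductance-density plane split into quadrants by two intersecting bifurcations. A hedged sketch of how pattern proportions could be predicted from a joint density distribution over such a plane (the boundary positions, the pattern-to-quadrant assignment and the normal parameters are all hypothetical, not fitted to the paper's model):

```python
import random

# Hypothetical bifurcation boundaries in conductance-density space
G_KV1_STAR, G_A_STAR = 5.0, 10.0

def spiking_pattern(g_kv1, g_a):
    """Classify by quadrant of the (g_KV1, g_A) plane, following the idea
    that two intersecting bifurcations split it into four pattern regions."""
    if g_kv1 < G_KV1_STAR:
        return "single" if g_a < G_A_STAR else "gap"
    return "delayed" if g_a < G_A_STAR else "reluctant"

def pattern_proportions(mu, sd, n=40000, seed=0):
    """Predict pattern proportions by sampling a joint (independent)
    normal distribution of the two channel densities."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n):
        p = spiking_pattern(rng.gauss(mu[0], sd[0]), rng.gauss(mu[1], sd[1]))
        counts[p] = counts.get(p, 0) + 1
    return {k: v / n for k, v in counts.items()}

# A density distribution centred on the intersection of the two
# bifurcations yields roughly equal proportions of all four patterns.
props = pattern_proportions(mu=(5.0, 10.0), sd=(1.0, 2.0))
```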
Optimization of light source parameters in the photodynamic therapy of heterogeneous prostate
NASA Astrophysics Data System (ADS)
Li, Jun; Altschuler, Martin D.; Hahn, Stephen M.; Zhu, Timothy C.
2008-08-01
The three-dimensional (3D) heterogeneous distributions of optical properties in a patient prostate can now be measured in vivo. Such data can be used to obtain a more accurate light-fluence kernel. (For specified sources and points, the kernel gives the fluence delivered to a point by a source of unit strength.) In turn, the kernel can be used to solve the inverse problem that determines the source strengths needed to deliver a prescribed photodynamic therapy (PDT) dose (or light-fluence) distribution within the prostate (assuming uniform drug concentration). We have developed and tested computational procedures to use the new heterogeneous data to optimize delivered light-fluence. New problems arise, however, in quickly obtaining an accurate kernel following the insertion of interstitial light sources and data acquisition. (1) The light-fluence kernel must be calculated in 3D and separately for each light source, which increases kernel size. (2) An accurate kernel for light scattering in a heterogeneous medium requires ray tracing and volume partitioning, thus significant calculation time. To address these problems, two different kernels were examined and compared for speed of creation and accuracy of dose. Kernels derived more quickly involve simpler algorithms. Our goal is to achieve optimal dose planning with patient-specific heterogeneous optical data applied through accurate kernels, all within clinical times. The optimization process is restricted to accepting the given (interstitially inserted) sources, and determining the best source strengths with which to obtain a prescribed dose. The Cimmino feasibility algorithm is used for this purpose. The dose distribution and source weights obtained for each kernel are analyzed. In clinical use, optimization will also be performed prior to source insertion to obtain initial source positions, source lengths and source weights, but with the assumption of homogeneous optical properties. 
For this reason, we compare the results from heterogeneous optical data with those obtained from average homogeneous optical properties. The optimized treatment plans are also compared with the reference clinical plan, defined as the plan with sources of equal strength, distributed regularly in space, which delivers a mean value of prescribed fluence at detector locations within the treatment region. The study suggests that comprehensive optimization of source parameters (i.e. strengths, lengths and locations) is feasible, thus allowing acceptable dose coverage in a heterogeneous prostate PDT within the time constraints of the PDT procedure.
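The Cimmino feasibility algorithm named above finds source strengths by simultaneous projection onto the dose constraints. A toy sketch under simplifying assumptions (a tiny dense kernel matrix, lower-bound dose constraints only, no clinical data):

```python
def cimmino(A, b, iters=500, relax=1.0):
    """Cimmino-style simultaneous projection for the feasibility problem
    A @ x >= b with x >= 0: at each sweep, project x onto the half-space
    of every violated constraint and average the projection steps."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        step = [0.0] * n
        for i in range(m):
            ai, bi = A[i], b[i]
            resid = bi - sum(a * xi for a, xi in zip(ai, x))
            if resid > 0:  # constraint violated: move toward its half-space
                norm2 = sum(a * a for a in ai)
                for j in range(n):
                    step[j] += resid / norm2 * ai[j]
        x = [max(0.0, xi + relax * s / m) for xi, s in zip(x, step)]
    return x

# Toy kernel: two detectors, two sources; entries are fluence per unit
# source strength, and b holds the prescribed minimum dose.
A = [[1.0, 0.2], [0.2, 1.0]]
b = [1.0, 1.0]
w = cimmino(A, b)
```

In the clinical setting described above, each row of the kernel matrix would hold the precomputed light-fluence contributions of all sources at one detector location.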
Wormlike Chain Theory and Bending of Short DNA
NASA Astrophysics Data System (ADS)
Mazur, Alexey K.
2007-05-01
The probability distributions for bending angles in double helical DNA obtained in all-atom molecular dynamics simulations are compared with theoretical predictions. The computed distributions agree remarkably well with the wormlike chain theory and qualitatively differ from predictions of the subelastic chain model. The computed data exhibit only small anomalies in the apparent flexibility of short DNA and cannot account for the recently reported AFM data. It is possible that the current atomistic DNA models miss some essential mechanisms of DNA bending on intermediate length scales. Analysis of bent DNA structures reveals, however, that the bending motion is structurally heterogeneous and directionally anisotropic on the length scales where the experimental anomalies were detected. These effects are essential for interpretation of the experimental data and can also be responsible for the apparent discrepancy.
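The wormlike-chain prediction the simulations are compared against can be sketched with the small-angle harmonic form of the bend-angle density (normalisation is done numerically; the persistence and contour lengths below are illustrative, not fitted to DNA):

```python
import math

def wlc_bend_pdf(contour_len, persistence_len, n=2000):
    """Numerically normalised wormlike-chain bend-angle density on (0, pi),
    using the small-angle (harmonic) 3D form
    P(theta) ~ sin(theta) * exp(-lp * theta**2 / (2 * L))."""
    thetas = [math.pi * (i + 0.5) / n for i in range(n)]
    w = [math.sin(t) * math.exp(-persistence_len * t * t / (2 * contour_len))
         for t in thetas]
    z = sum(w) * (math.pi / n)  # numerical normalisation constant
    return thetas, [wi / z for wi in w]

def mean_bend_angle(contour_len, persistence_len):
    thetas, pdf = wlc_bend_pdf(contour_len, persistence_len)
    dt = math.pi / len(thetas)
    return sum(t * p for t, p in zip(thetas, pdf)) * dt

# A stiffer chain (larger persistence length relative to contour length)
# concentrates probability at small bend angles.
mean_stiff = mean_bend_angle(contour_len=5.0, persistence_len=50.0)
mean_floppy = mean_bend_angle(contour_len=5.0, persistence_len=10.0)
```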
MONET: multidimensional radiative cloud scene model
NASA Astrophysics Data System (ADS)
Chervet, Patrick
1999-12-01
All cloud fields exhibit variable structures (bulges) and heterogeneities in water distributions. With the development of multidimensional radiative models by the atmospheric community, it is now possible to describe horizontal heterogeneities of the cloud medium and to study their influence on radiative quantities. We have developed a complete radiative cloud scene generator, called MONET (French acronym for: MOdelisation des Nuages En Tridim.), to compute radiative cloud scenes from visible to infrared wavelengths for various viewing and solar conditions, different spatial scales, and various locations on the Earth. MONET is composed of two parts: a cloud medium generator (CSSM -- Cloud Scene Simulation Model) developed by the Air Force Research Laboratory, and a multidimensional radiative code (SHDOM -- Spherical Harmonic Discrete Ordinate Method) developed at the University of Colorado by Evans. MONET computes images for several scenarios defined by user inputs: date, location, viewing angles, wavelength, spatial resolution, meteorological conditions (atmospheric profiles, cloud types)... For the same cloud scene, we can output different viewing conditions and/or various wavelengths. Shadowing effects on clouds or the ground are taken into account. This code is useful to study heterogeneity effects on satellite data for various cloud types and spatial resolutions, and to determine specifications of new imaging sensors.
Ariyasu, Aoi; Hattori, Yusuke; Otsuka, Makoto
2017-06-15
The coating layer thickness of enteric-coated tablets is a key factor that determines the drug dissolution rate from the tablet. Near-infrared spectroscopy (NIRS) enables non-destructive and quick measurement of the coating layer thickness, and thus allows the investigation of the relation between enteric coating layer thickness and drug dissolution rate. Two marketed products of aspirin enteric-coated tablets were used in this study, and the correlation between the predicted coating layer thickness and the obtained drug dissolution rate was investigated. Our results showed correlation for one product; the drug dissolution rate decreased with the increase in enteric coating layer thickness, whereas there was no correlation for the other product. Additional examination of the distribution of coating layer thickness by X-ray computed tomography (CT) showed homogeneous distribution of coating layer thickness for the former product, whereas the latter product exhibited heterogeneous distribution within the tablet, as well as an inconsistent trend in the thickness distribution between tablets. It was suggested that this heterogeneity and inconsistent trend in layer thickness distribution contributed to the absence of correlation between the layer thickness of the face and side regions of the tablets, which resulted in the loss of correlation between the coating layer thickness and drug dissolution rate. Therefore, the predictability of drug dissolution rate from enteric-coated tablets depended on the homogeneity of the coating layer thickness. In addition, the importance of micro analysis, X-ray CT in this study, was suggested even if the macro analysis, NIRS in this study, is finally applied for the measurement. Copyright © 2017 Elsevier B.V. All rights reserved.
A weighted U statistic for association analyses considering genetic heterogeneity.
Wei, Changshuai; Elston, Robert C; Lu, Qing
2016-07-20
Converging evidence suggests that common complex diseases with the same or similar clinical manifestations could have different underlying genetic etiologies. While current research interests have shifted toward uncovering rare variants and structural variations predisposing to human diseases, the impact of heterogeneity in genetic studies of complex diseases has been largely overlooked. Most of the existing statistical methods assume the disease under investigation has a homogeneous genetic effect and could, therefore, have low power if the disease undergoes heterogeneous pathophysiological and etiological processes. In this paper, we propose a heterogeneity-weighted U (HWU) method for association analyses considering genetic heterogeneity. HWU can be applied to various types of phenotypes (e.g., binary and continuous) and is computationally efficient for high-dimensional genetic data. Through simulations, we showed the advantage of HWU when the underlying genetic etiology of a disease was heterogeneous, as well as the robustness of HWU against different model assumptions (e.g., phenotype distributions). Using HWU, we conducted a genome-wide analysis of nicotine dependence from the Study of Addiction: Genetics and Environments dataset. The genome-wide analysis of nearly one million genetic markers took 7 h, identifying heterogeneous effects of two new genes (i.e., CYP3A5 and IKBKB) on nicotine dependence. Copyright © 2016 John Wiley & Sons, Ltd.
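The exact HWU weighting cannot be reconstructed from the abstract; the sketch below only illustrates the general structure of a similarity-weighted U statistic, in which a phenotype kernel is averaged over subject pairs with genetic-similarity weights (the kernel, the weights and the four-subject data are hypothetical):

```python
def weighted_u(phenotypes, weights, kernel=lambda a, b: a * b):
    """Schematic weighted U statistic: a phenotype-similarity kernel
    averaged over ordered subject pairs, weighted by a genetic-similarity
    matrix. (The published HWU weighting is more elaborate.)"""
    n = len(phenotypes)
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                num += weights[i][j] * kernel(phenotypes[i], phenotypes[j])
                den += abs(weights[i][j])
    return num / den if den else 0.0

# Four subjects in two phenotype groups; genetic similarity is high within
# groups and low across, so the weighted statistic detects the association,
# while uniform weights (no genetic signal) do not.
pheno = [1.0, 1.0, -1.0, -1.0]
sim_matched = [[0, 1, 0.1, 0.1], [1, 0, 0.1, 0.1],
               [0.1, 0.1, 0, 1], [0.1, 0.1, 1, 0]]
sim_uniform = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
u_matched = weighted_u(pheno, sim_matched)
u_uniform = weighted_u(pheno, sim_uniform)
```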
NASA Astrophysics Data System (ADS)
Gjetvaj, Filip; Russian, Anna; Gouze, Philippe; Dentz, Marco
2015-10-01
Both flow field heterogeneity and mass transfer between mobile and immobile domains have been studied separately for explaining observed anomalous transport. Here we investigate non-Fickian transport using high-resolution 3-D X-ray microtomographic images of Berea sandstone containing microporous cement with pore size below the setup resolution. Transport is computed for a set of representative elementary volumes and results from advection and diffusion in the resolved macroporosity (mobile domain) and diffusion in the microporous phase (immobile domain), where the effective diffusion coefficient is calculated from the measured local porosity using a phenomenological model that includes a porosity threshold (ϕθ) below which diffusion is null and an exponent n that characterizes the tortuosity-porosity power-law relationship. We show that both flow field heterogeneity and microporosity trigger anomalous transport. Breakthrough curve (BTC) tailing is positively correlated to microporosity volume and mobile-immobile interface area. The sensitivity analysis showed that the BTC tailing increases with the value of ϕθ, due to the increase of the diffusion path tortuosity, until the volume of the microporosity becomes negligible. Furthermore, increasing the value of n leads to an increase in the standard deviation of the distribution of effective diffusion coefficients, which in turn results in an increase of the BTC tailing. Finally, we propose a continuous time random walk upscaled model where the transition time is the sum of independently distributed random variables characterized by specific distributions. It allows modeling a 1-D equivalent macroscopic transport honoring both the control of the flow field heterogeneity and the multirate mass transfer between mobile and immobile domains.
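A sketch of the kind of phenomenological porosity-to-diffusivity map described above, with a percolation threshold ϕθ and a power-law exponent n (the functional form, D0 and the constants are assumptions for illustration, not the authors' calibration):

```python
def effective_diffusion(phi, d0=2.0e-9, n=2.0, phi_theta=0.05):
    """Schematic effective diffusion coefficient from local porosity:
    an Archie-like tortuosity-porosity power law with exponent n, and a
    threshold phi_theta below which diffusion is null."""
    if phi <= phi_theta:
        return 0.0
    return d0 * phi ** n

# Applying the map voxel by voxel to a porosity field spreads out the
# effective diffusivities; the study links that spread (which grows with
# n) to heavier breakthrough-curve tailing.
porosities = [0.02, 0.10, 0.20, 0.40]
d_eff = [effective_diffusion(p) for p in porosities]
```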
NASA Astrophysics Data System (ADS)
Jougnot, D.; Jimenez-Martinez, J.; Legendre, R.; Le Borgne, T.; Meheust, Y.; Linde, N.
2017-12-01
Time-lapse electrical resistivity tomography is widely used in environmental studies to remotely monitor water saturation and the migration of contaminant plumes. However, subsurface heterogeneities, and corresponding preferential transport paths, yield a potentially large anisotropy in the electrical properties of the subsurface. In order to study this effect, we have used a newly developed geoelectrical milli-fluidic experimental set-up with a flow cell that contains a 2D porous medium consisting of a single layer of cylindrical solid grains. We performed saline tracer tests under full and partial water saturation in that cell by jointly injecting air and aqueous solutions with different salinities. The flow cell is equipped with four electrodes to measure the bulk electrical resistivity at the cell's scale. The spatial distribution of the water/air phases and the saline solute concentration field in the water phase are captured simultaneously with a high-resolution camera by combining a fluorescent tracer with the saline solute. These data are used to compute the longitudinal and transverse effective electrical resistivity numerically from the measured spatial distributions of the fluid phases and the salinity field. This approach is validated as the computed longitudinal effective resistivities are in good agreement with the laboratory measurements. The anisotropy in electrical resistivity is then inferred from the computed longitudinal and transverse effective resistivities. We find that the spatial distribution of the saline tracer, and potentially the air phase, drives temporal changes in the effective resistivity through preferential paths or barriers for electrical current at the pore scale. The resulting heterogeneities in the solute concentrations lead to strong anisotropy of the effective bulk electrical resistivity, especially under partially saturated conditions.
Therefore, considering the electrical resistivity as a tensor could improve our understanding of transport properties from field-scale time-lapse ERT.
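A layered-medium idealization shows why heterogeneity makes the effective resistivity anisotropic: current along the layers sees resistors in parallel, current across them sees resistors in series. This is a textbook bound, not the paper's pore-scale computation, and the layer values below are illustrative:

```python
def effective_resistivities(layer_resistivities):
    """Effective resistivity of a stack of equal-thickness layers:
    longitudinal current sees the layers in parallel (harmonic-mean-like
    combination), transverse current sees them in series (arithmetic mean)."""
    n = len(layer_resistivities)
    longitudinal = n / sum(1.0 / r for r in layer_resistivities)
    transverse = sum(layer_resistivities) / n
    return longitudinal, transverse

def anisotropy_ratio(layer_resistivities):
    """Transverse over longitudinal effective resistivity: >= 1 always,
    and equal to 1 only for a homogeneous stack."""
    lo, tr = effective_resistivities(layer_resistivities)
    return tr / lo

# A saline-tracer finger acts like a conductive layer: the contrast
# between layers immediately produces anisotropy.
ratio_homogeneous = anisotropy_ratio([2.0, 2.0, 2.0])
ratio_heterogeneous = anisotropy_ratio([1.0, 10.0])
```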
Heterogeneity in Health Care Computing Environments
Sengupta, Soumitra
1989-01-01
This paper discusses issues of heterogeneity in computer systems, networks, databases, and presentation techniques, and the problems it creates in developing integrated medical information systems. The need for institutional, comprehensive goals is emphasized. Using the Columbia-Presbyterian Medical Center's computing environment as the case study, various steps to solve the heterogeneity problem are presented.
Yoshimoto, Junichiro; Shimizu, Yu; Okada, Go; Takamura, Masahiro; Okamoto, Yasumasa; Yamawaki, Shigeto; Doya, Kenji
2017-01-01
We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data. PMID:29049392
Radiation detection and situation management by distributed sensor networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frigo, Jan; Mielke, Angela; Cai, D. Michael
Detection of radioactive materials in an urban environment usually requires large, portal-monitor-style radiation detectors. However, this may not be a practical solution in many transport scenarios. Alternatively, a distributed sensor network (DSN) could complement portal-style detection of radiological materials through the implementation of arrays of low cost, small heterogeneous sensors with the ability to detect the presence of radioactive materials in a moving vehicle over a specific region. In this paper, we report on the use of a heterogeneous, wireless, distributed sensor network for traffic monitoring in a field demonstration. Through wireless communications, the energy spectra from different radiation detectors are combined to improve the detection confidence. In addition, the DSN exploits other sensor technologies and algorithms to provide additional information about the vehicle, such as its speed, location, class (e.g. car, truck), and license plate number. The sensors are in-situ and data is processed in real-time at each node. Relevant information from each node is sent to a base station computer which is used to assess the movement of radioactive materials.
NASA Astrophysics Data System (ADS)
Zhao, Yixin; Xue, Shanbin; Han, Songbai; Chen, Zhongwei; Liu, Shimin; Elsworth, Derek; He, Linfeng; Cai, Jianchao; Liu, Yuntao; Chen, Dongfeng
2017-07-01
Capillary imbibition in variably saturated porous media is important in defining displacement processes and transport in the vadose zone and in low-permeability barriers and reservoirs. Nonintrusive imaging in real time offers the potential to examine critical impacts of heterogeneity and surface properties on imbibition dynamics. Neutron radiography is applied as a powerful imaging tool to observe temporal changes in the spatial distribution of water in porous materials. We analyze water imbibition in both homogeneous and heterogeneous low-permeability sandstones. Dynamic observations of the advance of the imbibition front with time are compared with characterizations of microstructure (via high-resolution X-ray computed tomography (CT)), pore size distribution (Mercury Intrusion Porosimetry), and permeability of the contrasting samples. We use an automated method to detect the progress of the wetting front with time and link this to square-root-of-time behavior. These data are used to estimate the effect of microstructure on water sorptivity from a modified Lucas-Washburn equation. Moreover, a model is established to calculate the maximum capillary diameter by modifying the Hagen-Poiseuille and Young-Laplace equations based on fractal theory. Comparing the calculated maximum capillary diameter with the maximum pore diameter (from high-resolution CT) shows congruence between the two independent methods for the homogeneous silty sandstone but less so for the heterogeneous sandstone. Finally, we use these data to link observed response with the physical characteristics of the contrasting media—homogeneous versus heterogeneous—and to demonstrate the sensitivity of sorptivity expressly to tortuosity rather than porosity in low-permeability sandstones.
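The square-root-of-time analysis described above amounts to fitting a sorptivity S in x(t) = S√t. A minimal least-squares sketch on synthetic front positions (units and the value S = 0.5 are illustrative, not the paper's measurements):

```python
import math

def fit_sorptivity(times, front_positions):
    """Least-squares fit of x(t) = S * sqrt(t) through the origin:
    S = sum(x_i * sqrt(t_i)) / sum(t_i), the closed-form solution of the
    one-parameter regression on the sqrt-time imbibition law."""
    num = sum(x * math.sqrt(t) for t, x in zip(times, front_positions))
    den = sum(times)
    return num / den

# Synthetic wetting-front data generated with S = 0.5 (e.g. mm / s^0.5)
times = [1.0, 4.0, 9.0, 16.0]
fronts = [0.5 * math.sqrt(t) for t in times]
S = fit_sorptivity(times, fronts)
```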
Individual vision and peak distribution in collective actions
NASA Astrophysics Data System (ADS)
Lu, Peng
2017-06-01
People decide whether to participate in collective actions as participants or to stay out as free riders, and they do so with heterogeneous visions. Besides utility heterogeneity and cost heterogeneity, this work includes and investigates the effect of vision heterogeneity by constructing a decision model, i.e. the revised peak model of participants. In this model, potential participants make decisions under the joint influence of utility, cost, and vision heterogeneities. The outcomes of simulations indicate that vision heterogeneity reduces the values of peaks, and the relative variance of peaks is stable. Under normal distributions of vision heterogeneity and other factors, the peaks of participants are normally distributed as well. Therefore, it is necessary to predict distribution traits of peaks based on distribution traits of related factors such as vision heterogeneity. We predict the distribution of peaks with parameters of both mean and standard deviation, which provides confidence intervals and robust predictions of peaks. Besides, we validate the peak model via the Yuyuan Incident, a real case in China (2014), and the model works well in explaining the dynamics and predicting the peak of the real case.
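The proposal to predict peaks with a mean, a standard deviation and a confidence interval can be illustrated with a Monte Carlo sketch. The toy peak function below, including every coefficient, is hypothetical; it only mimics the reported qualitative effect that vision heterogeneity reduces peaks while the other heterogeneities add noise:

```python
import random
import statistics

def toy_peak(utility_sd, cost_sd, vision_sd, rng):
    """Hypothetical stand-in for the revised peak model: the simulated
    peak shrinks as vision heterogeneity grows and inherits noise from
    utility and cost heterogeneity. All coefficients are invented."""
    base = 1000.0
    return (base - 150.0 * vision_sd
            + rng.gauss(0.0, 30.0 * utility_sd)
            + rng.gauss(0.0, 30.0 * cost_sd))

def peak_interval(runs=4000, utility_sd=1.0, cost_sd=1.0, vision_sd=1.0, seed=7):
    """Monte Carlo prediction of the peak: mean, standard deviation and a
    normal-approximation 95% interval, as the abstract proposes."""
    rng = random.Random(seed)
    peaks = [toy_peak(utility_sd, cost_sd, vision_sd, rng) for _ in range(runs)]
    mu, sd = statistics.mean(peaks), statistics.pstdev(peaks)
    return mu, sd, (mu - 1.96 * sd, mu + 1.96 * sd)

mu, sd, (low, high) = peak_interval()
```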
Hayat, T.; Hussain, Zakir; Alsaedi, A.; Farooq, M.
2016-01-01
This article examines the effects of homogeneous-heterogeneous reactions and Newtonian heating in magnetohydrodynamic (MHD) flow of Powell-Eyring fluid by a stretching cylinder. The nonlinear partial differential equations of momentum, energy and concentration are reduced to nonlinear ordinary differential equations. Convergent solutions of the momentum, energy and reaction equations are developed by using the homotopy analysis method (HAM). This method is very efficient for the development of series solutions of highly nonlinear differential equations. It does not depend on any small or large parameter, unlike other methods such as the perturbation method and the δ-perturbation expansion method. We get more accurate results as we increase the order of approximations. Effects of different parameters on the velocity, temperature and concentration distributions are sketched and discussed. Comparison of the present study with previous published work is also made in the limiting sense. Numerical values of the skin friction coefficient and Nusselt number are also computed and analyzed. It is noticed that the flow accelerates for large values of the Powell-Eyring fluid parameter. Further, the temperature profile decreases and the concentration profile increases when the Powell-Eyring fluid parameter is enhanced. The concentration distribution is a decreasing function of the homogeneous reaction parameter, while the opposite influence appears for the heterogeneous reaction parameter. PMID:27280883
A Semantic Big Data Platform for Integrating Heterogeneous Wearable Data in Healthcare.
Mezghani, Emna; Exposito, Ernesto; Drira, Khalil; Da Silveira, Marcos; Pruski, Cédric
2015-12-01
Advances supported by emerging wearable technologies in healthcare promise patients a provision of high quality of care. Wearable computing systems represent one of the main thrust areas used to transform traditional healthcare systems into active systems able to continuously monitor and control patients' health in order to manage their care at an early stage. However, their proliferation creates challenges related to data management and integration. The diversity and variety of wearable data related to healthcare, their huge volume and their distribution make data processing and analytics more difficult. In this paper, we propose a generic semantic big data architecture based on the "Knowledge as a Service" approach to cope with heterogeneity and scalability challenges. Our main contribution focuses on enriching the NIST Big Data model with semantics in order to smartly understand the collected data, and generate more accurate and valuable information by correlating scattered medical data stemming from multiple wearable devices and/or from other distributed data sources. We have implemented and evaluated a Wearable KaaS platform to smartly manage heterogeneous data coming from wearable devices in order to assist physicians in supervising the patient's health evolution and keep the patient up-to-date about his/her status.
NASA Astrophysics Data System (ADS)
Zhang, Y.; Chen, W.; Li, J.
2013-12-01
Climate change may alter the spatial distribution, composition, structure, and functions of plant communities. Transitional zones between biomes, or ecotones, are particularly sensitive to climate change. Ecotones are usually heterogeneous with sparse trees. The dynamics of ecotones are mainly determined by the growth and competition of individual plants in the communities. Therefore it is necessary to calculate solar radiation absorbed by individual plants for understanding and predicting their responses to climate change. In this study, we developed an individual plant radiation model, IPR (version 1.0), to calculate solar radiation absorbed by individual plants in sparse heterogeneous woody plant communities. The model is developed based on geometrical optical relationships assuming crowns of woody plants are rectangular boxes with uniform leaf area density. The model calculates the fractions of sunlit and shaded leaf classes and the solar radiation absorbed by each class, including direct radiation from the sun, diffuse radiation from the sky, and scattered radiation from the plant community. The solar radiation received on the ground is also calculated. We tested the model by comparing with the analytical solutions of random distributions of plants. The tests show that the model results are very close to the averages of the random distributions. This model is efficient in computation, and is suitable for ecological models to simulate long-term transient responses of plant communities to climate change.
Spatial Metrics of Tumour Vascular Organisation Predict Radiation Efficacy in a Computational Model
Scott, Jacob G.
2016-01-01
Intratumoural heterogeneity is known to contribute to poor therapeutic response. Variations in oxygen tension in particular have been correlated with changes in radiation response in vitro and at the clinical scale with overall survival. Heterogeneity at the microscopic scale in tumour blood vessel architecture has been described, and is one source of the underlying variations in oxygen tension. We seek to determine whether histologic scale measures of the erratic distribution of blood vessels within a tumour can be used to predict differing radiation response. Using a two-dimensional hybrid cellular automaton model of tumour growth, we evaluate the effect of vessel distribution on cell survival outcomes of simulated radiation therapy. Using the standard equations for the oxygen enhancement ratio for cell survival probability under differing oxygen tensions, we calculate average radiation effect over a range of different vessel densities and organisations. We go on to quantify the vessel distribution heterogeneity and measure spatial organization using Ripley’s L function, a measure designed to detect deviations from complete spatial randomness. We find that under differing regimes of vessel density the correlation coefficient between the measure of spatial organization and radiation effect changes sign. This provides not only a useful way to understand the differences seen in radiation effect for tissues based on vessel architecture, but also an alternate explanation for the vessel normalization hypothesis. PMID:26800503
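Ripley's L function used in this study can be sketched directly. The naive estimator below omits the edge corrections a production analysis would apply, and the example vessel positions are synthetic:

```python
import math

def ripley_l(points, r, area):
    """Ripley's L for 2D points (naive, no edge correction):
    K(r) = area / (n * (n - 1)) * #{ordered pairs closer than r},
    L(r) = sqrt(K(r) / pi). Under complete spatial randomness L(r) is
    approximately r; L(r) > r indicates clustering at scale r."""
    n = len(points)
    close = sum(
        1
        for i, (xi, yi) in enumerate(points)
        for j, (xj, yj) in enumerate(points)
        if i != j and math.hypot(xi - xj, yi - yj) <= r
    )
    k = area * close / (n * (n - 1))
    return math.sqrt(k / math.pi)

# Two tight clumps of simulated vessel positions in a unit square:
# strongly clustered, so L(r) lies well above r at short range.
clumped = [(0.10, 0.10), (0.11, 0.10), (0.10, 0.11), (0.11, 0.11),
           (0.80, 0.80), (0.81, 0.80), (0.80, 0.81), (0.81, 0.81)]
l_clumped = ripley_l(clumped, r=0.1, area=1.0)
```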
A radiosity model for heterogeneous canopies in remote sensing
NASA Astrophysics Data System (ADS)
GarcíA-Haro, F. J.; Gilabert, M. A.; Meliá, J.
1999-05-01
A radiosity model has been developed to compute bidirectional reflectance from a heterogeneous canopy approximated by an arbitrary configuration of plants or clumps of vegetation, placed on the ground surface in a prescribed manner. Plants are treated as porous cylinders formed by aggregations of layers of leaves. This model explicitly computes the solar radiation leaving each individual surface, taking into account multiple scattering between leaves and soil and occlusion by neighboring plants. Canopy structural parameters adopted in this study have served to simplify the computation of the geometric factors of the radiosity equation, and the model has thus enabled us to simulate multispectral images of vegetation scenes. Simulated images have been shown to be valuable approximations of satellite data, and a sensitivity analysis to the dominant parameters of discontinuous canopies (plant density, leaf area index (LAI), leaf angle distribution (LAD), plant dimensions, soil optical properties, etc.) and of the scene (sun/view angles and atmospheric conditions) has therefore been undertaken. The radiosity model has given us deep insight into the radiative regime inside the canopy, showing it to be governed by occlusion of incoming irradiance, multiple scattering of radiation between canopy elements, and interception of upward radiance by leaves. Results have indicated that, unlike leaf distribution, other structural parameters such as LAI, LAD, and plant dimensions have a strong influence on canopy reflectance. In addition, concepts have been developed that are useful for understanding the reflectance behavior of the canopy, such as an effective LAI related to leaf inclination.
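At its core, any radiosity model solves a linear system in the surface radiosities, B = E + diag(rho) F B. A minimal sketch with a hypothetical two-surface scene and made-up form factors (the paper's porous-cylinder geometry and leaf-layer aggregation are not reproduced):

```python
import numpy as np

def solve_radiosity(E, rho, F):
    """Solve the discrete radiosity equation B = E + diag(rho) @ F @ B,
    i.e. (I - diag(rho) F) B = E, for the surface radiosities B.
    E: emitted/source flux, rho: reflectances, F: form-factor matrix."""
    n = len(E)
    A = np.eye(n) - np.diag(rho) @ F
    return np.linalg.solve(A, E)

# Hypothetical scene: surface 0 is directly lit, surface 1 only sees surface 0.
E = np.array([1.0, 0.0])
rho = np.array([0.6, 0.3])
F = np.array([[0.0, 0.5],
              [0.5, 0.0]])
B = solve_radiosity(E, rho, F)
```

The matrix solve accounts for all orders of multiple scattering at once, which is why radiosity models capture leaf-soil interreflection without explicit ray bounces.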
Lagerlöf, Jakob H; Bernhardt, Peter
2016-01-01
To develop a general model that utilises a stochastic method to generate a vessel tree based on experimental data, and an associated irregular, macroscopic tumour; these were used to evaluate two different methods for computing oxygen distribution. A vessel tree structure, and an associated tumour of 127 cm³, were generated using a stochastic method and Bresenham's line algorithm to develop trees on two different scales and fuse them together. The vessel dimensions were adjusted through convolution and thresholding, and each vessel voxel was assigned an oxygen value. Diffusion and consumption were modelled using a Green's function approach together with Michaelis-Menten kinetics. The computations were performed using a combined tree method (CTM) and an individual tree method (ITM). Five tumour sub-sections were compared to evaluate the methods. The oxygen distributions of the same tissue samples, computed using the different methods, were considerably less similar (root mean square deviation, RMSD ≈ 0.02) than the distributions of different samples using the CTM (0.001 < RMSD < 0.01). The deviations of the ITM from the CTM increase at lower oxygen values, so the ITM severely underestimates the level of hypoxia in the tumour. Kolmogorov-Smirnov (KS) tests showed that millimetre-scale samples may not represent the whole. The stochastic model managed to capture the heterogeneous nature of hypoxic fractions and, even though the simplified computation did not considerably alter the oxygen distribution, it led to an evident underestimation of tumour hypoxia, and thereby of radioresistance. For a trustworthy computation of tumour oxygenation, the interaction between adjacent microvessel trees must not be neglected, which is why evaluation should be performed at high resolution with the CTM applied to the entire tumour.
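The diffusion-consumption balance above can be illustrated in one dimension; this sketch uses Jacobi iteration on a finite-difference grid rather than the paper's Green's function approach, and every parameter value is illustrative rather than fitted to tissue data.

```python
import numpy as np

def oxygen_profile(n=101, length=200e-6, D=2e-9, vmax=0.1, km=0.01,
                   p_vessel=1.0, iters=20000):
    """Steady-state 1-D oxygen tension between two vessel walls:
        D * p'' = vmax * p / (km + p)   (Michaelis-Menten consumption)
    with p fixed at both boundaries, solved by Jacobi iteration on a
    finite-difference grid. All parameter values are illustrative."""
    h = length / (n - 1)
    p = np.full(n, p_vessel)
    for _ in range(iters):
        cons = vmax * p / (km + p)                         # local consumption rate
        p[1:-1] = 0.5 * (p[:-2] + p[2:] - h * h * cons[1:-1] / D)
        np.clip(p, 0.0, None, out=p)                       # oxygen cannot go negative
    return p
```

The dip at the mid-point between vessels is the 1-D analogue of the inter-tree interaction the paper shows must not be neglected: dropping a neighbouring source raises the apparent oxygenation everywhere in between.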
1991-01-01
Introduction: The SERENITY vision
NASA Astrophysics Data System (ADS)
Maña, Antonio; Spanoudakis, George; Kokolakis, Spyros
In this chapter we present an overview of the SERENITY approach. We describe the SERENITY model of secure and dependable applications and show how it addresses the challenge of developing, integrating and dynamically maintaining security and dependability mechanisms in open, dynamic, distributed and heterogeneous computing systems and in particular Ambient Intelligence scenarios. The chapter describes the basic concepts used in the approach and introduces the different processes supported by SERENITY, along with the tools provided.
A world-wide databridge supported by a commercial cloud provider
NASA Astrophysics Data System (ADS)
Tat Cheung, Kwong; Field, Laurence; Furano, Fabrizio
2017-10-01
Volunteer computing has the potential to provide significant additional computing capacity for the LHC experiments. One of the challenges with exploiting volunteer computing is to support a global community of volunteers that provides heterogeneous resources. However, high energy physics applications require more data input and output than the CPU-intensive applications that are typically used by other volunteer computing projects. While the so-called databridge has already been successfully proposed as a method to span the untrusted and trusted domains of volunteer computing and Grid computing respectively, globally transferring data between potentially poor-performing residential networks and CERN could be unreliable, leading to wasted resource usage. The expectation is that by placing a storage endpoint that is part of a wider, flexible geographical databridge deployment closer to the volunteers, the transfer success rate and the overall performance can be improved. This contribution investigates the provision of a globally distributed databridge implemented upon a commercial cloud provider.
Efficient Process Migration for Parallel Processing on Non-Dedicated Networks of Workstations
NASA Technical Reports Server (NTRS)
Chanchio, Kasidit; Sun, Xian-He
1996-01-01
This paper presents the design and preliminary implementation of MpPVM, a software system that supports process migration for PVM application programs in a non-dedicated heterogeneous computing environment. New concepts of migration point as well as migration point analysis and necessary data analysis are introduced. In MpPVM, process migrations occur only at previously inserted migration points. Migration point analysis determines appropriate locations to insert migration points; whereas necessary data analysis provides a minimum set of variables to be transferred at each migration point. A new methodology to perform reliable point-to-point data communications in a migration environment is also discussed. Finally, a preliminary implementation of MpPVM and its experimental results are presented, showing the correctness and promising performance of our process migration mechanism in a scalable non-dedicated heterogeneous computing environment. While MpPVM is developed on top of PVM, the process migration methodology introduced in this study is general and can be applied to any distributed software environment.
Strong Effects of Vs30 Heterogeneity on Physics-Based Scenario Ground-Shaking Computations
NASA Astrophysics Data System (ADS)
Louie, J. N.; Pullammanappallil, S. K.
2014-12-01
Hazard mapping and building codes worldwide use the vertically time-averaged shear-wave velocity between the surface and 30 meters depth, Vs30, as one predictor of earthquake ground shaking. Intensive field campaigns a decade ago in Reno, Los Angeles, and Las Vegas measured urban Vs30 transects with 0.3-km spacing. The Clark County, Nevada, Parcel Map includes urban Las Vegas and comprises over 10,000 site measurements over 1500 km2, completed in 2010. All of these data demonstrate fractal spatial statistics, with a fractal dimension of 1.5-1.8 at scale lengths from 0.5 km to 50 km. Vs measurements in boreholes up to 400 m deep show very similar statistics at 1 m to 200 m lengths. When included in physics-based earthquake-scenario ground-shaking computations, the highly heterogeneous Vs30 maps exhibit unexpectedly strong influence. In sensitivity tests (image below), low-frequency computations at 0.1 Hz display amplifications (as well as de-amplifications) of 20% due solely to Vs30. In 0.5-1.0 Hz computations, the amplifications are a factor of two or more. At 0.5 Hz and higher frequencies the amplifications can be larger than what the 1-d Building Code equations would predict from the Vs30 variations. Vs30 heterogeneities at one location have strong influence on amplifications at other locations, stretching out in the predominant direction of wave propagation for that scenario. The sensitivity tests show that shaking and amplifications are highly scenario-dependent. Animations of computed ground motions and how they evolve with time suggest that the fractal Vs30 variance acts to trap wave energy and increases the duration of shaking. Validations of the computations against recorded ground motions, possible in Las Vegas Valley due to the measurements of the Clark County Parcel Map, show that ground motion levels and amplifications match, while recorded shaking has longer duration than computed shaking. 
Several mechanisms may explain the amplification and increased duration of shaking in the presence of heterogeneous spatial distributions of Vs: conservation of wave energy across velocity changes; geometric focusing of waves by low-velocity lenses; vertical resonance and trapping; horizontal resonance and trapping; and multiple conversion of P- to S-wave energy.
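The fractal Vs30 statistics described above can be mimicked with spectral synthesis; this sketch generates a random field with a power-law spectrum P(k) ~ k^(-beta) purely as an illustration of that kind of heterogeneity, with a hypothetical beta, not the Clark County Parcel Map data.

```python
import numpy as np

def fractal_field(n=128, beta=2.0, seed=0):
    """Generate a 2-D random field with a power-law (fractal) amplitude
    spectrum by assigning random phases to k**(-beta/2) Fourier amplitudes.
    Larger beta gives smoother fields; the output is normalised to zero
    mean and unit standard deviation."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.sqrt(kx**2 + ky**2)
    k[0, 0] = np.inf                       # suppress the DC component
    amp = k ** (-beta / 2.0)
    phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))
    field = np.fft.ifft2(amp * np.exp(1j * phase)).real
    return (field - field.mean()) / field.std()
```

Scaling and shifting such a normalised field around a regional mean Vs30 would give a synthetic velocity map with the long-range correlated variance that the sensitivity tests found so influential.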
Pore Pressure and Stress Distributions Around a Hydraulic Fracture in Heterogeneous Rock
NASA Astrophysics Data System (ADS)
Gao, Qian; Ghassemi, Ahmad
2017-12-01
One of the most significant characteristics of unconventional petroleum bearing formations is their heterogeneity, which affects the stress distribution, hydraulic fracture propagation and also fluid flow. This study focuses on the stress and pore pressure redistributions during hydraulic stimulation in a heterogeneous poroelastic rock. Lognormal random distributions of Young's modulus and permeability are generated to simulate the heterogeneous distributions of material properties. A 3D fully coupled poroelastic model based on the finite element method is presented utilizing a displacement-pressure formulation. In order to verify the model, numerical results are compared with analytical solutions showing excellent agreements. The effects of heterogeneities on stress and pore pressure distributions around a penny-shaped fracture in poroelastic rock are then analyzed. Results indicate that the stress and pore pressure distributions are more complex in a heterogeneous reservoir than in a homogeneous one. The spatial extent of stress reorientation during hydraulic stimulations is a function of time and is continuously changing due to the diffusion of pore pressure in the heterogeneous system. In contrast to the stress distributions in homogeneous media, irregular distributions of stresses and pore pressure are observed. Due to the change of material properties, shear stresses and nonuniform deformations are generated. The induced shear stresses in heterogeneous rock cause the initial horizontal principal stresses to rotate out of horizontal planes.
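Lognormal property fields of the kind used above can be drawn as follows; this sketch matches only the marginal distribution (the mean and coefficient of variation are hypothetical inputs) and omits the spatial correlation a realistic heterogeneity model would impose.

```python
import numpy as np

def lognormal_field(shape, mean, cv, seed=0):
    """Draw a spatially uncorrelated lognormal property field (e.g. Young's
    modulus or permeability) with a target arithmetic mean and coefficient
    of variation cv = std/mean, via the standard moment-matching relations
    for the underlying normal distribution of log(X)."""
    sigma2 = np.log(1.0 + cv**2)                 # variance of log(X)
    mu = np.log(mean) - 0.5 * sigma2             # chosen so that E[X] = mean
    rng = np.random.default_rng(seed)
    return rng.lognormal(mu, np.sqrt(sigma2), shape)
```

Because the draw is strictly positive, it is safe to assign directly to stiffness or permeability without clamping, which is one reason the lognormal is the usual choice for such properties.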
2010-01-01
Background In bioinformatics it is common to search for a pattern of interest in a potentially large set of rather short sequences (upstream gene regions, proteins, exons, etc.). Although many methodological approaches allow practitioners to compute the distribution of a pattern count in a random sequence generated by a Markov source, no specific developments have taken into account the counting of occurrences in a set of independent sequences. We aim to address this problem by deriving efficient approaches and algorithms to perform these computations both for low and high complexity patterns in the framework of homogeneous or heterogeneous Markov models. Results The latest advances in the field allowed us to use a technique of optimal Markov chain embedding based on deterministic finite automata to introduce three innovative algorithms. Algorithm 1 is the only one able to deal with heterogeneous models. It also permits to avoid any product of convolution of the pattern distribution in individual sequences. When working with homogeneous models, Algorithm 2 yields a dramatic reduction in the complexity by taking advantage of previous computations to obtain moment generating functions efficiently. In the particular case of low or moderate complexity patterns, Algorithm 3 exploits power computation and binary decomposition to further reduce the time complexity to a logarithmic scale. All these algorithms and their relative interest in comparison with existing ones were then tested and discussed on a toy-example and three biological data sets: structural patterns in protein loop structures, PROSITE signatures in a bacterial proteome, and transcription factors in upstream gene regions. On these data sets, we also compared our exact approaches to the tempting approximation that consists in concatenating the sequences in the data set into a single sequence. 
Conclusions Our algorithms prove to be effective and able to handle real data sets with multiple sequences, as well as biological patterns of interest, even when the latter display a high complexity (PROSITE signatures for example). In addition, these exact algorithms allow us to avoid the edge effect observed under the single sequence approximation, which leads to erroneous results, especially when the marginal distribution of the model displays a slow convergence toward the stationary distribution. We end up with a discussion on our method and on its potential improvements. PMID:20205909
Nuel, Gregory; Regad, Leslie; Martin, Juliette; Camproux, Anne-Claude
2010-01-26
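The automaton-embedding idea behind these algorithms can be illustrated in its simplest setting, i.i.d. letters and a single pattern; the paper's methods generalise this to Markov sources, sets of sequences, and heavily optimised embeddings, none of which this sketch attempts.

```python
from collections import defaultdict

def pattern_count_distribution(pattern, n, probs):
    """Exact distribution of the number of (possibly overlapping) occurrences
    of `pattern` in an i.i.d. random sequence of length n, by propagating a
    joint distribution over (KMP automaton state, occurrence count)."""
    m = len(pattern)
    fail = [0] * (m + 1)                      # KMP failure function
    k = 0
    for i in range(1, m):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i + 1] = k

    def step(q, a):
        """Automaton transition; state m means 'pattern just matched'."""
        while q > 0 and (q == m or pattern[q] != a):
            q = fail[q]
        if q < m and pattern[q] == a:
            q += 1
        return q

    dist = {(0, 0): 1.0}                      # (state, count) -> probability
    for _ in range(n):
        nxt = defaultdict(float)
        for (q, c), p in dist.items():
            for a, pa in probs.items():
                q2 = step(q, a)
                nxt[(q2, c + (q2 == m))] += p * pa
        dist = nxt
    counts = defaultdict(float)
    for (q, c), p in dist.items():
        counts[c] += p
    return dict(counts)
```

Because the automaton encodes pattern overlaps exactly, overlapping occurrences (e.g. two copies of "AA" inside "AAA") are counted correctly, which naive sliding-window probability arguments get wrong.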
Latency Hiding in Dynamic Partitioning and Load Balancing of Grid Computing Applications
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak
2001-01-01
The Information Power Grid (IPG) concept developed by NASA aims to provide a metacomputing platform for large-scale distributed computations by hiding the intricacies of a highly heterogeneous environment while maintaining adequate security. In this paper, we propose a latency-tolerant partitioning scheme that dynamically balances processor workloads on the IPG and minimizes data movement and runtime communication. By simulating an unsteady adaptive mesh application on a wide area network, we study the performance of our load balancer under the Globus environment. The number of IPG nodes, the number of processors per node, and the interconnect speeds are parameterized to derive conditions under which the IPG would be suitable for parallel distributed processing of such applications. Experimental results demonstrate that effective solutions are achieved when the IPG nodes are connected by a high-speed asynchronous interconnection network.
S3DB core: a framework for RDF generation and management in bioinformatics infrastructures
2010-01-01
Background Biomedical research is set to greatly benefit from the use of semantic web technologies in the design of computational infrastructure. However, beyond well-defined research initiatives, substantial issues of data heterogeneity, source distribution, and privacy currently stand in the way of the personalization of medicine. Results A computational framework for bioinformatic infrastructure was designed to deal with the heterogeneous data sources and the sensitive mixture of public and private data that characterizes the biomedical domain. This framework consists of a logical model built with semantic web tools, coupled with a Markov process that propagates user operator states. An accompanying open-source prototype was developed to meet a series of applications that range from collaborative multi-institution data acquisition efforts to data analysis applications that need to quickly traverse complex data structures. This report describes the two abstractions underlying the S3DB-based infrastructure, logical and numerical, and discusses its generality beyond the immediate confines of existing implementations. Conclusions The emergence of the "web as a computer" requires a formal model for the different functionalities involved in reading from and writing to it. The S3DB core model proposed was found to address the design criteria of biomedical computational infrastructure, such as those supporting large-scale multi-investigator research, clinical trials, and molecular epidemiology. PMID:20646315
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Lin; Gong, Huili; Dai, Zhenxue
Alluvial fans are highly heterogeneous in hydraulic properties due to complex depositional processes, which make it difficult to characterize the spatial distribution of the hydraulic conductivity (K). An original methodology is developed to identify the spatial statistical parameters (mean, variance, correlation range) of the hydraulic conductivity in a three-dimensional (3-D) setting by using geological and geophysical data. More specifically, a large number of inexpensive vertical electric soundings are integrated with a facies model developed from borehole lithologic data to simulate the continuous log10(K) distributions in multiple-zone heterogeneous alluvial megafans. The Chaobai River alluvial fan in the Beijing Plain, China, is used as an example to test the proposed approach. Due to the non-stationary property of the K distribution in the alluvial fan, a multiple-zone parameterization approach is applied to analyze the conductivity statistical properties of different hydrofacies in the various zones. The composite variance in each zone is computed to describe the evolution of the conductivity along the flow direction. Consistently with the scales of the sedimentary transport energy, the results show that conductivity variances of fine sand, medium-coarse sand, and gravel decrease from the upper (zone 1) to the lower (zone 3) portion along the flow direction. In zone 1, sediments were moved by higher-energy flooding, which induces poor sorting and larger conductivity variances. The composite variance confirms this feature with statistically different facies from zone 1 to zone 3. Lastly, the results of this study provide insights to improve our understanding of conductivity heterogeneity and a method for characterizing the spatial distribution of K in alluvial fans.
Zhu, Lin; Gong, Huili; Dai, Zhenxue; ...
2017-02-03
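The composite variance referred to above is, on the usual reading, the law of total variance applied over facies within a zone; a sketch with hypothetical facies statistics (the proportions, means, and variances below are made up, not the Chaobai River values):

```python
def composite_variance(proportions, means, variances):
    """Composite (total) variance of log10(K) in a zone from per-facies
    statistics, via the law of total variance:
        var = sum_k p_k * var_k  +  sum_k p_k * (mu_k - mu_bar)**2
    where p_k are facies proportions summing to 1."""
    mu_bar = sum(p * m for p, m in zip(proportions, means))
    within = sum(p * v for p, v in zip(proportions, variances))
    between = sum(p * (m - mu_bar) ** 2 for p, m in zip(proportions, means))
    return within + between
```

Splitting the total into within-facies and between-facies terms is what lets such studies attribute a zone's variance to poor sorting inside facies versus contrast between facies.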
Ogawa, S.; Komini Babu, S.; Chung, H. T.; ...
2016-08-22
The nano/micro-scale geometry of polymer electrolyte fuel cell (PEFC) catalyst layers critically affects cell performance. The small length scales and complex structure of these composite layers make it challenging to analyze cell performance and physics at the particle scale by experiment. We present a computational method to simulate transport and chemical reaction phenomena at the pore/particle-scale and apply it to a PEFC cathode with platinum group metal free (PGM-free) catalyst. Here, we numerically solve the governing equations for the physics with heterogeneous oxygen diffusion coefficient and proton conductivity evaluated using the actual electrode structure and ionomer distribution obtained using nano-scale resolution X-ray computed tomography (nano-CT). Using this approach, the oxygen concentration and electrolyte potential distributions imposed by the oxygen reduction reaction are solved and the impact of the catalyst layer structure on performance is evaluated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogawa, S.; Komini Babu, S.; Chung, H. T.
NASA Astrophysics Data System (ADS)
Barreiro, F. H.; Borodin, M.; De, K.; Golubkov, D.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Padolski, S.; Wenaus, T.; ATLAS Collaboration
2017-10-01
The second generation of the ATLAS Production System, called ProdSys2, is a distributed workload manager that runs hundreds of thousands of jobs daily, from dozens of different ATLAS-specific workflows, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition based on many criteria, such as input and output size, memory requirements and CPU consumption, with manageable scheduling policies, and by supporting different kinds of computational resources, such as Grid, clouds, supercomputers and volunteer computers. The system dynamically assigns a group of jobs (a task) to a group of geographically distributed computing resources. Dynamic assignment and utilization of resources is one of the major features of the system; it did not exist in the earliest versions of the production system, where the Grid resource topology was predefined using national and/or geographical patterns. The Production System has a sophisticated job fault-recovery mechanism, which allows multi-terabyte tasks to run efficiently without human intervention. We have implemented a "train" model and open-ended production, which allow tasks to be submitted automatically as soon as a new set of data is available, and physics-group data processing and analysis to be chained with the experiment's central production. We present an overview of the ATLAS Production System and its major components, features and architecture: task definition, web user interface and monitoring. We describe the important design decisions and lessons learned from operational experience during the first year of LHC Run2. We also report the performance of the designed system and how various workflows, such as data (re)processing, Monte Carlo and physics group production, and user analysis, are scheduled and executed within one production system on heterogeneous computing resources.
Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce
Pratx, Guillem; Xing, Lei
2011-01-01
Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258× speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID:22191916
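The Map/Reduce split described above can be sketched in plain Python; this toy stands in for MC321 with a homogeneous medium, exponentially distributed absorption depths, and a made-up absorption coefficient, so only the partition-then-merge structure is faithful to the paper.

```python
import random
from collections import Counter
from functools import reduce

def map_task(n_photons, mu_a=0.1, seed=0):
    """One Map task: simulate n_photons absorption depths in a homogeneous
    medium (exponential free paths, absorption coefficient mu_a) and emit a
    partial histogram of 1-unit depth bins."""
    rng = random.Random(seed)
    hist = Counter()
    for _ in range(n_photons):
        depth = rng.expovariate(mu_a)
        hist[int(depth)] += 1
    return hist

def reduce_task(histograms):
    """The Reduce step: sum the partial histograms from every Map task."""
    return reduce(lambda a, b: a + b, histograms, Counter())

# Four independent "nodes", each with its own seed, then one merge.
partials = [map_task(10_000, seed=s) for s in range(4)]
total = reduce_task(partials)
```

Because each Map task is seeded independently and the Reduce step is a pure sum, losing a node only means re-running its Map task, which is the fault-tolerance property the paper exploits.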
Validation of cortical bone mineral density distribution using micro-computed tomography.
Mashiatulla, Maleeha; Ross, Ryan D; Sumner, D Rick
2017-06-01
Changes in the bone mineral density distribution (BMDD), due to disease or drugs, can alter whole-bone mechanical properties such as strength, stiffness and toughness. The methods currently available for assessing BMDD are destructive and two-dimensional. Micro-computed tomography (μCT) has been used extensively to quantify the three-dimensional geometry of bone and to measure the mean degree of mineralization, commonly called the tissue mineral density (TMD). The TMD measurement has been validated against ash density; however, parameters describing the frequency distribution of TMD have not yet been validated. In the current study we tested the ability of μCT to estimate six BMDD parameters (mean, heterogeneity (assessed by the full-width-at-half-maximum (FWHM) and the coefficient of variation (CoV)), the upper and lower 5% cutoffs of the frequency distribution, and peak mineralization) in rat-sized femoral cortical bone samples. We used backscatter scanning electron microscopy (bSEM) as the standard. Aluminum and hydroxyapatite phantoms were used to identify optimal scanner settings (70 kVp and 57 μA, with a 1500 ms integration time). When using hydroxyapatite samples that spanned a broad range of mineralization levels, high correlations were found between μCT and bSEM for all BMDD parameters (R² ≥ 0.92, p < 0.010). When using cortical bone samples from rats and various species machined to mimic rat cortical bone geometry, significant correlations between μCT and bSEM were found for mean mineralization (R² = 0.65, p < 0.001), peak mineralization (R² = 0.61, p < 0.001), the lower 5% cutoff (R² = 0.62, p < 0.001) and the upper 5% cutoff (R² = 0.33, p = 0.021), but not for heterogeneity, measured by FWHM (R² = 0.05, p = 0.412) and CoV (R² = 0.04, p = 0.469). Thus, while mean mineralization and most parameters used to characterize the BMDD can be assessed with μCT in rat-sized cortical bone samples, caution should be used when reporting the heterogeneity.
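The six BMDD parameters can be computed from a TMD sample roughly as follows; the bin count and the synthetic sample in the test are illustrative, and reading FWHM off a histogram is one plausible implementation, not necessarily the study's exact procedure.

```python
import numpy as np

def bmdd_parameters(tmd, bins=100):
    """Summarise a tissue-mineral-density sample with six BMDD parameters:
    mean, heterogeneity as CoV and histogram FWHM, the lower/upper 5%
    cutoffs of the frequency distribution, and the peak (mode)."""
    tmd = np.asarray(tmd, dtype=float)
    counts, edges = np.histogram(tmd, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    above = centers[counts >= counts.max() / 2.0]   # bins at/above half-maximum
    return {
        "mean": tmd.mean(),
        "cov": tmd.std() / tmd.mean(),
        "fwhm": above.max() - above.min(),
        "low5": np.percentile(tmd, 5),
        "high5": np.percentile(tmd, 95),
        "peak": centers[counts.argmax()],
    }
```

Note that the FWHM estimate inherits the histogram's bin width as its resolution, one concrete reason heterogeneity measures are noisier than the mean.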
Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment.
Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel
2016-08-30
Distributed computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., the Internet of Things, Cyber-Physical Systems, Big Data analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of its core components, MapReduce facilitates the allocation, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data for each task were collected and analyzed in a detailed report. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs.
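A "two-phase regression" can be read as a broken-stick fit over task progress: one linear segment before a breakpoint and another after it. The sketch below searches all breakpoints by least squares; it is an interpretation of the idea, not necessarily the paper's exact TPR method, and the test data are synthetic.

```python
import numpy as np

def two_phase_fit(x, y):
    """Broken-stick two-phase fit: try every interior breakpoint, fit each
    side by ordinary least squares, and keep the split with the lowest total
    squared error. Returns (break_index, left_coeffs, right_coeffs)."""
    best_sse, best = np.inf, None
    for b in range(2, len(x) - 1):              # each side needs >= 2 points
        left = np.polyfit(x[:b], y[:b], 1)
        right = np.polyfit(x[b:], y[b:], 1)
        sse = (np.sum((np.polyval(left, x[:b]) - y[:b]) ** 2)
               + np.sum((np.polyval(right, x[b:]) - y[b:]) ** 2))
        if sse < best_sse:
            best_sse, best = sse, (b, left, right)
    return best
```

Extrapolating along the second segment's line is then the natural finishing-time prediction for a task whose late phase runs at a different rate than its early phase.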
NASA Astrophysics Data System (ADS)
Furuichi, Mikito; Nishiura, Daisuke
2017-10-01
We developed dynamic load-balancing algorithms for Particle Simulation Methods (PSM) involving short-range interactions, such as Smoothed Particle Hydrodynamics (SPH), the Moving Particle Semi-implicit (MPS) method, and the Discrete Element Method (DEM). These are needed to handle billions of particles modeled in large distributed-memory computer systems. Our method utilizes flexible orthogonal domain decomposition, allowing the sub-domain boundaries in each column to differ from row to row. The imbalances in execution time between parallel logical processes are treated as a nonlinear residual. Load balancing is achieved by minimizing this residual within the framework of an iterative nonlinear solver, combined with a multigrid technique in the local smoother. Our iterative method is suitable for adjusting the sub-domains frequently by monitoring the performance of each computational process, because it is computationally cheaper in terms of communication and memory costs than non-iterative methods. Numerical tests demonstrated the ability of our approach to handle workload imbalances arising from a non-uniform particle distribution, differences in particle types, or heterogeneous computer architecture, which was difficult with previously proposed methods. We analyzed the parallel efficiency and scalability of our method using the Earth Simulator and K computer supercomputer systems.
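As a one-dimensional toy version of the iterative idea (not the authors' multigrid solver), the particle-count imbalance across each sub-domain boundary can play the role of the residual that repeated small boundary shifts drive toward zero; all names and parameters below are assumptions for illustration:

```python
import numpy as np

def balance_boundaries(x, n_sub, iters=200, relax=0.05):
    """Iteratively adjust 1-D sub-domain boundaries so each sub-domain
    holds a near-equal share of particles.  The count imbalance across
    each interior boundary is the residual driven toward zero."""
    x = np.sort(np.asarray(x, dtype=float))
    b = np.linspace(x[0], x[-1], n_sub + 1)   # start from a uniform split
    for _ in range(iters):
        counts = np.histogram(x, bins=b)[0]
        # Residual at each interior boundary: imbalance between its
        # two neighboring sub-domains, normalized by total count.
        residual = (counts[1:] - counts[:-1]) / len(x)
        # Move each boundary toward the heavier neighbor, scaled by
        # the local width, then keep the boundaries ordered.
        b[1:-1] = np.clip(b[1:-1] + relax * residual * (b[2:] - b[:-2]), b[0], b[-1])
        b[1:-1] = np.sort(b[1:-1])
    return b, np.histogram(x, bins=b)[0]

rng = np.random.default_rng(0)
x = rng.beta(2, 5, size=10000)   # clustered, non-uniform particle positions
b, counts = balance_boundaries(x, 4)
uniform_counts = np.histogram(x, bins=np.linspace(x.min(), x.max(), 5))[0]
```

Because each sweep is a cheap local update, the split can be re-tuned frequently as the particle distribution evolves, which is the property the abstract emphasizes over non-iterative repartitioning.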
Surviving the Glut: The Management of Event Streams in Cyberphysical Systems
NASA Astrophysics Data System (ADS)
Buchmann, Alejandro
Alejandro Buchmann is Professor in the Department of Computer Science, Technische Universität Darmstadt, where he heads the Databases and Distributed Systems Group. He received his MS (1977) and PhD (1980) from the University of Texas at Austin. He was an Assistant/Associate Professor at the Institute for Applied Mathematics and Systems (IIMAS/UNAM) in Mexico, doing research on databases for CAD, geographic information systems, and object-oriented databases. At Computer Corporation of America (later Xerox Advanced Information Systems) in Cambridge, Mass., he worked in the areas of active databases and real-time databases, and at GTE Laboratories, Waltham, in the areas of distributed object systems and the integration of heterogeneous legacy systems. In 1991 he returned to academia and joined TU Darmstadt. His current research interests are at the intersection of middleware, databases, event-based distributed systems, ubiquitous computing, and very large distributed systems (P2P, WSN). Much of the current research is concerned with guaranteeing quality of service and reliability properties in these systems, for example, scalability, performance, transactional behaviour, consistency, and end-to-end security. Many research projects involve collaboration with industry and cover a broad spectrum of application domains. Further information can be found at http://www.dvs.tu-darmstadt.de
Heterogeneity-induced large deviations in activity and (in some cases) entropy production
NASA Astrophysics Data System (ADS)
Gingrich, Todd R.; Vaikuntanathan, Suriyanarayanan; Geissler, Phillip L.
2014-10-01
We solve a simple model that supports a dynamic phase transition and show conditions for the existence of the transition. Using methods of large deviation theory we analytically compute the probability distribution for activity and entropy production rates of the trajectories on a large ring with a single heterogeneous link. The corresponding joint rate function demonstrates two dynamical phases—one localized and the other delocalized, but the marginal rate functions do not always exhibit the underlying transition. Symmetries in dynamic order parameters influence the observation of a transition, such that distributions for certain dynamic order parameters need not reveal an underlying dynamical bistability. Solution of our model system furthermore yields the form of the effective Markov transition matrices that generate dynamics in which the two dynamical phases are at coexistence. We discuss the implications of the transition for the response of bacterial cells to antibiotic treatment, arguing that even simple models of a cell cycle lacking an explicit bistability in configuration space will exhibit a bistability of dynamical phases.
Ekdawi, Sandra N; Stewart, James M P; Dunne, Michael; Stapleton, Shawn; Mitsakakis, Nicholas; Dou, Yannan N; Jaffray, David A; Allen, Christine
2015-06-10
Existing paradigms in nano-based drug delivery are currently being challenged. Assessment of bulk tumor accumulation has routinely been considered an indicative measure of nanomedicine potency. However, it is now recognized that the intratumoral distribution of nanomedicines also impacts their therapeutic effect. At this time, our understanding of the relationship between the bulk (i.e., macro-) tumor accumulation of nanocarriers and their intratumoral (i.e., micro-) distribution remains limited. Liposome-based drug formulations, in particular, suffer from diminished efficacy in vivo as a result of transport-limiting properties, combined with the heterogeneous nature of the tumor microenvironment. In this report, we perform a quantitative image-based assessment of the macro- and microdistribution of liposomes. Multi-scalar assessment of liposome distribution was enabled by a stable formulation which co-encapsulates an iodinated contrast agent and a near-infrared fluorescence probe, for computed tomography (CT) and optical microscopy, respectively. Spatio-temporal quantification of tumor uptake in orthotopic xenografts was performed using CT at the bulk tissue level and within defined sub-volumes of the tumor (i.e., rim, periphery and core). Tumor penetration and relative distribution of liposomes were assessed by fluorescence microscopy of whole tumor sections. Microdistribution analysis of whole tumor images exposed a heterogeneous distribution of both liposomes and tumor vasculature. The highest levels of liposome uptake were achieved and maintained in the well-vascularized tumor rim over the study period, corresponding to a positive correlation between liposome and microvascular density. Tumor penetration of liposomes was found to be time-dependent in all regions of the tumor, but independent of location within the tumor.
Importantly, a multi-scalar comparison of liposome distribution reveals that macro-accumulation in tissues (e.g., blood, whole tumor) may not reflect micro-accumulation levels present within specific regions of the tumor as a function of time. Copyright © 2015 Elsevier B.V. All rights reserved.
Towards full waveform ambient noise inversion
NASA Astrophysics Data System (ADS)
Sager, Korbinian; Ermert, Laura; Boehm, Christian; Fichtner, Andreas
2018-01-01
In this work we investigate fundamentals of a method—referred to as full waveform ambient noise inversion—that improves the resolution of tomographic images by extracting waveform information from interstation correlation functions that cannot be used without knowing the distribution of noise sources. The fundamental idea is to drop the principle of Green function retrieval and to establish correlation functions as self-consistent observables in seismology. This involves the following steps: (1) We introduce an operator-based formulation of the forward problem of computing correlation functions. It is valid for arbitrary distributions of noise sources in both space and frequency, and for any type of medium, including 3-D elastic, heterogeneous and attenuating media. In addition, the formulation allows us to keep the derivations independent of time and frequency domain and it facilitates the application of adjoint techniques, which we use to derive efficient expressions to compute first and also second derivatives. The latter are essential for a resolution analysis that accounts for intra- and interparameter trade-offs. (2) In a forward modelling study we investigate the effect of noise sources and structure on different observables. Traveltimes are hardly affected by heterogeneous noise source distributions. On the other hand, the amplitude asymmetry of correlations is at least to first order insensitive to unmodelled Earth structure. Energy and waveform differences are sensitive to both structure and the distribution of noise sources. (3) We design and implement an appropriate inversion scheme, where the extraction of waveform information is successively increased. We demonstrate that full waveform ambient noise inversion has the potential to go beyond ambient noise tomography based on Green function retrieval and to refine noise source location, which is essential for a better understanding of noise generation. 
Inherent trade-offs between source and structure are quantified using Hessian-vector products.
Wald, D.J.; Graves, R.W.
2001-01-01
Using numerical tests for a prescribed heterogeneous earthquake slip distribution, we examine the importance of accurate Green's functions (GF) for finite fault source inversions which rely on coseismic GPS displacements and leveling line uplift alone and in combination with near-source strong ground motions. The static displacements, while sensitive to the three-dimensional (3-D) structure, are less so than seismic waveforms and thus are an important contribution, particularly when used in conjunction with waveform inversions. For numerical tests of an earthquake source and data distribution modeled after the 1994 Northridge earthquake, a joint geodetic and seismic inversion allows for reasonable recovery of the heterogeneous slip distribution on the fault. In contrast, inaccurate 3-D GFs or multiple 1-D GFs allow only partial recovery of the slip distribution given strong motion data alone. Likewise, using just the GPS and leveling line data requires significant smoothing for inversion stability, and hence, only a blurred vision of the prescribed slip is recovered. Although the half-space approximation for computing the surface static deformation field is no longer justifiable based on the high level of accuracy for current GPS data acquisition and the computed differences between 3-D and half-space surface displacements, a layered 1-D approximation to 3-D Earth structure provides adequate representation of the surface displacement field. However, even with the half-space approximation, geodetic data can provide additional slip resolution in the joint seismic and geodetic inversion provided a priori fault location and geometry are correct. Nevertheless, the sensitivity of the static displacements to the Earth structure begs caution for interpretation of surface displacements, particularly those recorded at monuments located in or near basin environments. Copyright 2001 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Marinos, Alexandros; Briscoe, Gerard
Cloud Computing is rising fast, with its data centres growing at an unprecedented rate. However, this has come with concerns over privacy, efficiency at the expense of resilience, and environmental sustainability, because of the dependence on Cloud vendors such as Google, Amazon and Microsoft. Our response is an alternative model for the Cloud conceptualisation, providing a paradigm for Clouds in the community, utilising networked personal computers for liberation from the centralised vendor model. Community Cloud Computing (C3) offers an alternative architecture, created by combining the Cloud with paradigms from Grid Computing, principles from Digital Ecosystems, and sustainability from Green Computing, while remaining true to the original vision of the Internet. It is more technically challenging than Cloud Computing, having to deal with distributed computing issues, including heterogeneous nodes, varying quality of service, and additional security constraints. However, these are not insurmountable challenges, and with the need to retain control over our digital lives and the potential environmental consequences, it is a challenge we must pursue.
Heterogeneous game resource distributions promote cooperation in spatial prisoner's dilemma game
NASA Astrophysics Data System (ADS)
Cui, Guang-Hai; Wang, Zhen; Yang, Yan-Cun; Tian, Sheng-Wen; Yue, Jun
2018-01-01
In social networks, individual abilities to establish interactions are always heterogeneous and independent of the number of topological neighbors. We here study the influence of heterogeneous distributions of abilities on the evolution of individual cooperation in the spatial prisoner's dilemma game. First, we introduced a prisoner's dilemma game, taking into account individual heterogeneous abilities to establish games, which are determined by the owned game resources. Second, we studied three types of game resource distributions that follow the power-law property. Simulation results show that the heterogeneous distribution of individual game resources can promote cooperation effectively, and the heterogeneous level of resource distributions has a positive influence on the maintenance of cooperation. Extensive analysis shows that cooperators with large resource capacities can foster cooperator clusters around themselves. Furthermore, when the temptation to defect is high, cooperator clusters in which the central pure cooperators have larger game resource capacities are more stable than other cooperator clusters.
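For illustration, game-resource capacities following the power-law property studied here can be generated by inverse-transform sampling; the exponent, the lower cutoff and the function name below are assumed for the sketch, not taken from the paper:

```python
import numpy as np

def power_law_resources(n, alpha=2.5, r_min=1.0, seed=0):
    """Sample n game-resource capacities from a power law
    P(r) ~ r^(-alpha), r >= r_min, by inverse-transform sampling."""
    u = np.random.default_rng(seed).random(n)
    return r_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

resources = power_law_resources(10000)
# A player's ability to establish games grows with its resources, so a
# small number of resource-rich players interact far more than average,
# which is the heterogeneity the simulations exploit.
```

The heavy tail (mean well above the median) is what lets a few large-capacity cooperators anchor stable cooperator clusters.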
Intelligent Agents for the Digital Battlefield
1998-11-01
A specific outcome of our long-term research will be the development of a collaborative agent technology system, CATS, that will provide the underlying software infrastructure needed to build large, heterogeneous, distributed agent applications. CATS will provide a software environment through which multiple intelligent agents may interact with other agents, both human and computational. In addition, CATS will contain a number of intelligent agent components that will be useful for a wide variety of applications.
Differential subcellular distribution of ion channels and the diversity of neuronal function.
Nusser, Zoltan
2012-06-01
Following the astonishing molecular diversity of voltage-gated ion channels that was revealed in the past few decades, the ion channel repertoire expressed by neurons has been implicated as the major factor governing their functional heterogeneity. Although the molecular structure of ion channels is a key determinant of their biophysical properties, their subcellular distribution and densities on the surface of nerve cells are just as important for fulfilling functional requirements. Recent results obtained with high resolution quantitative localization techniques revealed complex, subcellular compartment-specific distribution patterns of distinct ion channels. Here I suggest that within a given neuron type every ion channel has a unique cell surface distribution pattern, with the functional consequence that this dramatically increases the computational power of nerve cells. Copyright © 2011 Elsevier Ltd. All rights reserved.
Kanematsu, Nobuyuki; Komori, Masataka; Yonai, Shunsuke; Ishizaki, Azusa
2009-04-07
The pencil-beam algorithm is valid only when elementary Gaussian beams are small enough compared to the lateral heterogeneity of a medium, which is not always true in actual radiotherapy with protons and ions. This work addresses a solution for the problem. We found approximate self-similarity of Gaussian distributions, with which Gaussian beams can split into narrower and deflecting daughter beams when their sizes have overreached lateral heterogeneity in the beam-transport calculation. The effectiveness was assessed in a carbon-ion beam experiment in the presence of steep range compensation, where the splitting calculation reproduced a detour effect amounting to about 10% in dose or as large as the lateral particle disequilibrium effect. The efficiency was analyzed in calculations for carbon-ion and proton radiations with a heterogeneous phantom model, where the beam splitting increased computing times by factors of 4.7 and 3.2. The present method generally improves the accuracy of the pencil-beam algorithm without severe inefficiency. It will therefore be useful for treatment planning and potentially other demanding applications.
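The self-similarity the authors exploit rests on the fact that a Gaussian is the convolution of two narrower Gaussians, so a parent beam can be replaced by a weighted grid of narrower daughter beams. A minimal numerical sketch of that identity follows; the grid size, widths and names are illustrative, not the paper's beam-transport code:

```python
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def split_beam(sigma, sigma_d, n=15):
    """Split a Gaussian beam of width sigma into n daughter beams of
    width sigma_d < sigma.  Since G_sigma = G_w * G_sigma_d (a
    convolution) with w = sqrt(sigma^2 - sigma_d^2), daughter centers
    sit on a grid weighted by a Gaussian envelope of width w."""
    w = np.sqrt(sigma**2 - sigma_d**2)
    centers = np.linspace(-4.0 * w, 4.0 * w, n)
    weights = gaussian(centers, 0.0, w)
    return centers, weights / weights.sum()  # renormalize truncated tails

sigma, sigma_d = 1.0, 0.4
centers, weights = split_beam(sigma, sigma_d)
x = np.linspace(-4.0, 4.0, 401)
parent = gaussian(x, 0.0, sigma)
daughters = sum(wk * gaussian(x, ck, sigma_d) for ck, wk in zip(centers, weights))
```

Each daughter is narrow enough to be transported individually through laterally heterogeneous media, while their weighted sum still reproduces the parent profile closely.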
NASA Astrophysics Data System (ADS)
Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian
2018-01-01
We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.
NASA Technical Reports Server (NTRS)
Johnston, William E.; Gannon, Dennis; Nitzberg, Bill; Feiereisen, William (Technical Monitor)
2000-01-01
The term "Grid" refers to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. The vision for NASA's Information Power Grid (IPG) - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information-based problem solving environments / frameworks that will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. IPG development and deployment is addressing requirements obtained by analyzing a number of different application areas, in particular from the NASA Aero-Space Technology Enterprise. This analysis has focused primarily on two types of users. The first is the scientist / design engineer whose primary interest is problem solving (e.g., determining wing aerodynamic characteristics in many different operating environments), and whose primary interface to IPG will be through various sorts of problem solving frameworks. The second type of user is the tool designer: the computational scientist who converts physics and mathematics into code that can simulate the physical world. These are the two primary users of IPG, and they have rather different requirements. This paper describes the current state of IPG (the operational testbed), the set of capabilities being put into place for the operational prototype IPG, as well as some of the longer term R&D tasks.
Experimental evidence of the role of pores on movement and distribution of bacteria in soil
NASA Astrophysics Data System (ADS)
Kravchenko, Alexandra N.; Rose, Joan B.; Marsh, Terence L.; Guber, Andrey K.
2014-05-01
It has been generally recognized that micro-scale heterogeneity in soil environments can have a substantial effect on the movement, fate, and survival of soil microorganisms. However, only recently has the development of tools for micro-scale soil analyses, including X-ray computed micro-tomography (μ-CT), enabled quantitative analyses of these effects. The long-term goal of our work is to explore how differences in micro-scale characteristics of pore structures influence movement, spatial distribution patterns, and activities of soil microorganisms. Using X-ray μ-CT we found that differences in land use and management practices lead to the development of contrasting patterns in pore size distributions within intact soil aggregates. Our experiments with Escherichia coli added to intact soil aggregates then demonstrated that the differences in pore structures can lead to substantial differences in bacterial redistribution and movement within the aggregates. Specifically, we observed more uniform E. coli redistribution in aggregates with homogeneously spread pores, while heterogeneous pore structures resulted in heterogeneous E. coli patterns. Water flow driven by capillary forces through intact aggregate pores appeared to be the main contributor to the movement patterns of the introduced bacteria. The influence of pore structure on E. coli distribution within the aggregates continued after the aggregates were subjected to saturated water flow. E. coli's resumed movement with saturated water flow and subsequent redistribution within the soil matrix were influenced by porosity, abundance of medium and large pores, pore tortuosity, and flow rates, indicating that greater flow accompanied by less convoluted pores facilitated E. coli transport within the intra-aggregate space. We also found that intra-aggregate heterogeneity of pore structures can have an effect on the spatial distribution patterns of indigenous microbial populations.
Preliminary analysis showed that in aggregates from an organic agricultural system with cover crops, characterized by greater intra-aggregate pore heterogeneity, bacteria of the Actinobacteria and Firmicutes groups were more abundant in the presence of large as compared to small pores. In contrast, no differences were observed in the aggregates from conventionally managed soil, which was overall characterized by homogeneous intra-aggregate pore patterns. Further research efforts are being directed towards quantification of pore structure effects on the activities and community composition of soil microorganisms.
A 3D object-based model to simulate highly-heterogeneous, coarse, braided river deposits
NASA Astrophysics Data System (ADS)
Huber, E.; Huggenberger, P.; Caers, J.
2016-12-01
There is a critical need in hydrogeological modeling for geologically more realistic representation of the subsurface. Indeed, widely used representations of subsurface heterogeneity based on smooth basis functions, such as cokriging or the pilot-point approach, fail to reproduce the connectivity of the highly permeable geological structures that control subsurface solute transport. To realistically model the connectivity of highly permeable structures of coarse, braided river deposits, multiple-point statistics and object-based models are promising alternatives. We therefore propose a new object-based model that, according to a sedimentological model, mimics the dominant processes of floodplain dynamics. Contrary to existing models, this object-based model possesses the following properties: (1) it is consistent with field observations (outcrops, ground-penetrating radar data, etc.), (2) it allows different sedimentological dynamics to be modeled that result in different subsurface heterogeneity patterns, (3) it is light in memory and computationally fast, and (4) it can be conditioned to geophysical data. In this model, the main sedimentological elements (scour fills with open-framework-bimodal gravel cross-beds, gravel sheet deposits, open-framework and sand lenses) and their internal structures are described by geometrical objects. Several spatial distributions are proposed that allow the horizontal position of the objects on the floodplain, as well as the net rate of sediment deposition, to be simulated. The model is grid-independent and any vertical section can be computed algebraically. Furthermore, model realizations can serve as training images for multiple-point statistics. The significance of this model is shown by its impact on the subsurface flow distribution, which strongly depends on the sedimentological dynamics modeled. The code will be provided as a free and open-source R-package.
Ecohydrological implications of aeolian sediment trapping by sparse vegetation in drylands
Gonzales, Howell B.; Ravi, Sujith; Li, Junran; Sankey, Joel B.
2018-01-01
Aeolian processes are important drivers of ecosystem dynamics in drylands, and important feedbacks exist among aeolian and hydrological processes and vegetation. The trapping of wind-borne sediments by vegetation may result in changes in soil properties beneath the vegetation, which, in turn, can alter hydrological and biogeochemical processes. Despite the relevance of aeolian transport to ecosystem dynamics, the interactions between aeolian transport and vegetation in shaping dryland landscapes, where sediment distribution is altered by relatively rapid changes in vegetation composition such as shrub encroachment, are not well understood. Here, we used a computational fluid dynamics (CFD) modeling framework to investigate the sediment trapping efficiencies of vegetation canopies commonly found in a shrub-grass ecotone in the Chihuahuan Desert (New Mexico, USA) and related the results to spatial heterogeneity in soil texture and infiltration measured in the field. A CFD open-source software package was used to simulate aeolian sediment movement through three-dimensional architectural depictions of Creosote shrub (Larrea tridentata) and Black Grama grass (Bouteloua eriopoda) vegetation types. The vegetation structures were created using computer-aided design software (Blender), with inherent canopy porosities, which were derived using LIDAR (Light Detection and Ranging) measurements of plant canopies. Results show that considerable heterogeneity in infiltration and soil grain size distribution exists between the microsites, with higher infiltration and coarser soil texture under shrubs. Numerical simulations also indicate that the differential trapping of canopies might contribute to the observed heterogeneity in soil texture.
In the early stages of encroachment, the shrub canopies, by trapping coarser particles more efficiently, might maintain higher infiltration rates leading to faster development of the microsites (among other factors) with enhanced ecological productivity, which might provide positive feedbacks to shrub encroachment.
Dai, D; Barranco, F T; Illangasekare, T H
2001-12-15
Research on the use of partitioning and interfacial tracers has led to the development of techniques for estimating subsurface NAPL amount and NAPL-water interfacial area. Although these techniques have been utilized with some success at field sites, current application is limited largely to NAPL at residual saturation, such as in post-remediation settings where mobile NAPL has been removed through product recovery. The goal of this study was to fundamentally evaluate partitioning and interfacial tracer behavior in controlled column-scale test cells for a range of entrapment configurations varying in NAPL saturation, with the results serving as a determinant of technique efficacy (and design protocol) for use with complexly distributed NAPLs, possibly at high saturation, in heterogeneous aquifers. Representative end members of the range of entrapment configurations observed under conditions of natural heterogeneity (an occurrence with residual NAPL saturation [discontinuous blobs] and an occurrence with high NAPL saturation [continuous free-phase LNAPL lens]) were evaluated. Study results indicated accurate prediction (using measured tracer retardation and equilibrium-based computational techniques) of NAPL amount and NAPL-water interfacial area for the case of residual NAPL saturation. For the high-saturation LNAPL lens, results indicated that NAPL-water interfacial area, but not NAPL amount (underpredicted by 35%), can be reasonably determined using conventional computation techniques. Underprediction of NAPL amount led to an erroneous prediction of NAPL distribution, as indicated by the NAPL morphology index. In light of these results, careful consideration should be given to technique design and critical assumptions before applying equilibrium-based partitioning tracer methodology to settings where NAPLs are complexly entrapped, such as in naturally heterogeneous subsurface formations.
Development of a model and computer code to describe solar grade silicon production processes
NASA Technical Reports Server (NTRS)
Srivastava, R.; Gould, R. K.
1979-01-01
Mathematical models, and computer codes based on these models were developed which allow prediction of the product distribution in chemical reactors in which gaseous silicon compounds are converted to condensed phase silicon. The reactors to be modeled are flow reactors in which silane or one of the halogenated silanes is thermally decomposed or reacted with an alkali metal, H2 or H atoms. Because the product of interest is particulate silicon, processes which must be modeled, in addition to mixing and reaction of gas-phase reactants, include the nucleation and growth of condensed Si via coagulation, condensation, and heterogeneous reaction.
Russo, Lucia; Russo, Paola; Siettos, Constantinos I.
2016-01-01
Based on complex network theory, we propose a computational methodology which addresses the spatial distribution of fuel breaks for the inhibition of the spread of wildland fires on heterogeneous landscapes. This is a two-level approach where the dynamics of fire spread are modeled as a random Markov field process on a directed network whose edge weights are determined by a Cellular Automata model that integrates detailed GIS, landscape and meteorological data. Within this framework, the spatial distribution of fuel breaks is reduced to the problem of finding network nodes (small land patches) which favour fire propagation. Here, this is accomplished by exploiting network centrality statistics. We illustrate the proposed approach through (a) an artificial forest of randomly distributed density of vegetation, and (b) a real-world case concerning the island of Rhodes in Greece whose major part of its forest was burned in 2008. Simulation results show that the proposed methodology outperforms the benchmark/conventional policy of fuel reduction as this can be realized by selective harvesting and/or prescribed burning based on the density and flammability of vegetation. Interestingly, our approach reveals that patches with sparse density of vegetation may act as hubs for the spread of the fire. PMID:27780249
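As a toy version of the centrality idea (not the authors' Rhodes case study or their Cellular Automata weighting), one can rank the patches of a small fire-spread network by eigenvector centrality and take the top-ranked nodes as fuel-break candidates; the weights and the hub construction below are invented for illustration:

```python
import numpy as np

def fuel_break_nodes(W, k, iters=100):
    """Rank patches by eigenvector centrality of the directed
    fire-spread network W (W[i, j] = propensity of fire to spread
    from patch i to patch j) and return the k top-ranked patches
    as candidate fuel-break locations."""
    c = np.ones(W.shape[0]) / W.shape[0]
    for _ in range(iters):          # power iteration on W^T
        c = W.T @ c
        c /= np.linalg.norm(c)
    return np.argsort(c)[::-1][:k]

# Toy landscape: 20 patches with weak random spread, plus one hub
# patch (node 0) that both feeds and receives fire strongly.
rng = np.random.default_rng(1)
W = 0.1 * rng.random((20, 20))
W[0, :] = 0.9
W[:, 0] = 0.9
np.fill_diagonal(W, 0.0)
breaks = fuel_break_nodes(W, 3)
```

The hub patch dominates the centrality ranking, mirroring the paper's observation that even sparsely vegetated patches can act as hubs for fire propagation and are therefore prime fuel-break candidates.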
Documentary of MFENET, a national computer network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shuttleworth, B.O.
1977-06-01
The national Magnetic Fusion Energy Computer Network (MFENET) is a newly operational star network of geographically separated heterogeneous hosts and a communications subnetwork of PDP-11 processors. Host processors interfaced to the subnetwork currently include a CDC 7600 at the Central Computer Center (CCC) and several DECsystem-10's at User Service Centers (USC's). The network was funded by a U.S. government agency (ERDA) to provide in an economical manner the needed computational resources to magnetic confinement fusion researchers. Phase I operation of MFENET distributed the processing power of the CDC 7600 among the USC's through the provision of file transport between any two hosts and remote job entry to the 7600. Extending the capabilities of Phase I, MFENET Phase II provided interactive terminal access to the CDC 7600 from the USC's. A file management system is maintained at the CCC for all network users. The history and development of MFENET are discussed, with emphasis on the protocols used to link the host computers and the USC software. Comparisons are made of MFENET versus ARPANET (Advanced Research Projects Agency Computer Network) and DECNET (Digital Distributed Network Architecture). DECNET and MFENET host-to-host, host-to-CCP, and link protocols are discussed in detail. The USC-CCP interface is described briefly. 43 figures, 2 tables.
NASA Astrophysics Data System (ADS)
Zhang, Y.; Chen, W.; Li, J.
2014-07-01
Climate change may alter the spatial distribution, composition, structure and functions of plant communities. Transitional zones between biomes, or ecotones, are particularly sensitive to climate change. Ecotones are usually heterogeneous with sparse trees. The dynamics of ecotones are mainly determined by the growth and competition of individual plants in the communities. Therefore it is necessary to calculate the solar radiation absorbed by individual plants in order to understand and predict their responses to climate change. In this study, we developed an individual plant radiation model, IPR (version 1.0), to calculate solar radiation absorbed by individual plants in sparse heterogeneous woody plant communities. The model is developed based on geometrical optical relationships assuming that crowns of woody plants are rectangular boxes with uniform leaf area density. The model calculates the fractions of sunlit and shaded leaf classes and the solar radiation absorbed by each class, including direct radiation from the sun, diffuse radiation from the sky, and scattered radiation from the plant community. The solar radiation received on the ground is also calculated. We tested the model by comparing with the results of random distribution of plants. The tests show that the model results are very close to the averages of the random distributions. This model is efficient in computation, and can be included in vegetation models to simulate long-term transient responses of plant communities to climate change. The code and a user's manual are provided as Supplement of the paper.
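A minimal numeric sketch of the kind of quantity IPR computes, assuming simple Beer's-law attenuation with an extinction coefficient k. The crown geometry and leaf-class bookkeeping of the actual model are not reproduced; k and the beam value are illustrative assumptions.

```python
import math

# Sunlit/shaded split for a single crown under Beer's-law attenuation:
# a stand-in for IPR's geometric-optics treatment, not its actual equations.

def sunlit_fraction(lai, k=0.5):
    """Fraction of leaf area receiving direct beam, averaged over canopy depth."""
    # f(L) = exp(-k L); averaging over 0..LAI gives (1 - exp(-k LAI)) / (k LAI)
    if lai == 0:
        return 1.0
    return (1.0 - math.exp(-k * lai)) / (k * lai)

def absorbed_direct(beam, lai, k=0.5):
    """Direct-beam radiation intercepted by the crown (the rest reaches the ground)."""
    return beam * (1.0 - math.exp(-k * lai))

beam = 800.0  # W m^-2 on a horizontal plane (assumed value)
for lai in (1.0, 3.0, 6.0):
    print(lai, round(sunlit_fraction(lai), 3), round(absorbed_direct(beam, lai), 1))
```

Denser crowns absorb more beam in total while a smaller fraction of their leaves stays sunlit, which is the trade-off the individual-plant model has to resolve per plant.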
A Method to Represent Heterogeneous Materials for Rapid Prototyping: The Matryoshka Approach.
Lei, Shuangyan; Frank, Matthew C; Anderson, Donald D; Brown, Thomas D
The purpose of this paper is to present a new method for representing heterogeneous materials using nested STL shells, based, in particular, on the density distributions of human bones. Nested STL shells, called Matryoshka models, are described, based on their namesake Russian nesting dolls. In this approach, polygonal models, such as STL shells, are "stacked" inside one another to represent different material regions. The Matryoshka model addresses the challenge of representing different densities and different types of bone when reverse engineering from medical images. The Matryoshka model is generated via an iterative process of thresholding the Hounsfield Unit (HU) data using computed tomography (CT), thereby delineating regions of progressively increasing bone density. These nested shells can represent regions starting with the medullary (bone marrow) canal, up through and including the outer surface of the bone. The Matryoshka approach introduced can be used to generate accurate models of heterogeneous materials in an automated fashion, avoiding the challenge of hand-creating an assembly model for input to multi-material additive or subtractive manufacturing. This paper presents a new method for describing heterogeneous materials: in this case, the density distribution in a human bone. The authors show how the Matryoshka model can be used to plan harvesting locations for creating custom rapid allograft bone implants from donor bone. An implementation of a proposed harvesting method is demonstrated, followed by a case study using subtractive rapid prototyping to harvest a bone implant from a human tibia surrogate.
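The iterative thresholding that produces nested shells can be illustrated on a one-dimensional toy "scan line"; the HU cutoffs and the synthetic values below are illustrative assumptions, not clinically calibrated ones.

```python
# Sketch of the Matryoshka idea: threshold HU values at increasing levels so
# each resulting mask nests inside the previous one, like Russian dolls.

def nested_masks(hu_values, cutoffs):
    """Return one boolean mask per cutoff; each later mask is a subset of the previous."""
    masks = []
    for c in sorted(cutoffs):
        masks.append([v >= c for v in hu_values])
    return masks

# toy scan line: marrow canal (low HU) flanked by dense cortical bone (high HU)
scan = [100, 400, 1200, 1800, 1200, 400, 100]
shells = nested_masks(scan, cutoffs=[300, 1000, 1500])
for m in shells:
    print(''.join('#' if x else '.' for x in m))
```

In 3-D, each mask's boundary would be exported as an STL shell; stacking them reproduces the density regions without hand-building a multi-material assembly.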
Exploiting the flexibility of a family of models for taxation and redistribution
NASA Astrophysics Data System (ADS)
Bertotti, M. L.; Modanese, G.
2012-08-01
We discuss a family of models expressed by nonlinear differential equation systems describing closed market societies in the presence of taxation and redistribution. We focus in particular on three example models obtained in correspondence to different parameter choices. We analyse the influence of the various choices on the long time shape of the income distribution. Several simulations suggest that behavioral heterogeneity among the individuals plays a definite role in the formation of fat tails of the asymptotic stationary distributions. This is in agreement with results found with different approaches and techniques. We also show that an excellent fit for the computational outputs of our models is provided by the κ-generalized distribution introduced by Kaniadakis in [Physica A 296, 405 (2001)].
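The κ-generalized fit rests on the Kaniadakis κ-exponential, exp_κ(x) = (sqrt(1 + κ²x²) + κx)^(1/κ). A small sketch, with parameter values chosen for illustration rather than fitted to the models' outputs:

```python
import math

# Kaniadakis kappa-exponential and the survival function of the
# kappa-generalized distribution, P(X > x) = exp_k(-beta * x**alpha).

def kappa_exp(x, k):
    if k == 0:
        return math.exp(x)
    return (math.sqrt(1.0 + k * k * x * x) + k * x) ** (1.0 / k)

def survival(x, alpha, beta, kappa):
    """P(X > x) for the kappa-generalized distribution."""
    return kappa_exp(-beta * x ** alpha, kappa)

# kappa -> 0 recovers the stretched-exponential (Weibull) survival;
# kappa > 0 produces the fat power-law tail seen in the income distributions.
print(round(survival(2.0, 1.5, 0.5, 1e-9), 6))
print(round(survival(2.0, 1.5, 0.5, 0.7), 6))
```

The fatter tail for κ > 0 is exactly the feature the paper uses to fit the asymptotic income distributions produced by heterogeneous-agent dynamics.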
Bernhardt, Peter
2016-01-01
Purpose: To develop a general model that utilises a stochastic method to generate a vessel tree based on experimental data, together with an associated irregular, macroscopic tumour, and to use these to evaluate two different methods for computing oxygen distribution. Methods: A vessel tree structure and an associated tumour of 127 cm3 were generated, using a stochastic method and Bresenham's line algorithm to develop trees on two different scales and fuse them together. The vessel dimensions were adjusted through convolution and thresholding, and each vessel voxel was assigned an oxygen value. Diffusion and consumption were modelled using a Green's function approach together with Michaelis-Menten kinetics. The computations were performed using a combined tree method (CTM) and an individual tree method (ITM). Five tumour sub-sections were compared to evaluate the methods. Results: The oxygen distributions of the same tissue samples, using different methods of computation, were considerably less similar (root mean square deviation, RMSD ≈ 0.02) than the distributions of different samples using the CTM (0.001 < RMSD < 0.01). The deviations of the ITM from the CTM increase with lower oxygen values, resulting in the ITM severely underestimating the level of hypoxia in the tumour. Kolmogorov-Smirnov (KS) tests showed that millimetre-scale samples may not represent the whole. Conclusions: The stochastic model managed to capture the heterogeneous nature of hypoxic fractions and, even though the simplified computation did not considerably alter the oxygen distribution, it leads to an evident underestimation of tumour hypoxia, and thereby radioresistance. For a trustworthy computation of tumour oxygenation, the interaction between adjacent microvessel trees must not be neglected, which is why evaluation should be performed at high resolution using the CTM applied to the entire tumour. PMID:27861529
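The two ingredients of the oxygen computation, diffusion from a vessel and Michaelis-Menten consumption, can be sketched with a 1-D explicit finite-difference relaxation. The grid, coefficients, and boundary values are assumptions; the paper works in 3-D with a Green's function approach.

```python
# 1-D sketch: oxygen diffuses from a vessel wall into tissue while being
# consumed at a Michaelis-Menten rate. All parameter values are illustrative.

def oxygen_profile(n=50, dx=1.0, d_coef=2.0, vmax=0.05, km=2.5,
                   p_vessel=40.0, iters=5000):
    p = [0.0] * n
    p[0] = p_vessel                       # fixed vessel-wall boundary
    for _ in range(iters):
        new = p[:]
        for i in range(1, n - 1):
            consumption = vmax * p[i] / (km + p[i])   # Michaelis-Menten sink
            new[i] = p[i] + 0.2 * (
                d_coef * (p[i - 1] - 2 * p[i] + p[i + 1]) / dx ** 2 - consumption)
        new[-1] = new[-2]                 # zero-flux far boundary
        p = new
    return p

prof = oxygen_profile()
print(round(prof[1], 2), round(prof[25], 2), round(prof[-1], 2))
```

The monotone fall-off with distance from the vessel is what makes neglecting neighbouring trees (the ITM simplification) bias the far-field, i.e. hypoxic, values most.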
Sato, Tatsuhiko; Masunaga, Shin-Ichiro; Kumada, Hiroaki; Hamada, Nobuyuki
2018-01-17
We here propose a new model for estimating the biological effectiveness for boron neutron capture therapy (BNCT) considering intra- and intercellular heterogeneity in 10B distribution. The new model was developed from our previously established stochastic microdosimetric kinetic model that determines the surviving fraction of cells irradiated with any radiation. In the model, the probability density of the absorbed doses in microscopic scales is the fundamental physical index for characterizing the radiation fields. A new computational method was established to determine the probability density for application to BNCT using the Particle and Heavy Ion Transport code System PHITS. The parameters used in the model were determined from the measured surviving fraction of tumor cells administered with two kinds of 10B compounds. The model quantitatively highlighted the indispensable need to consider the synergistic effect and the dose dependence of the biological effectiveness in the estimate of the therapeutic effect of BNCT. The model can predict the biological effectiveness of newly developed 10B compounds based on their intra- and intercellular distributions, and thus, it can play important roles not only in treatment planning but also in drug discovery research for future BNCT.
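Why the microscopic dose distribution matters can be shown with a toy calculation: because cell survival is a nonlinear function of dose, averaging survival over a heterogeneous dose distribution differs from evaluating it at the mean dose. The linear-quadratic form and all parameter values below are illustrative assumptions, not the paper's stochastic microdosimetric kinetic model.

```python
import math

# Survival averaged over a two-point dose distribution vs survival at the
# mean dose, for S(D) = exp(-alpha*D - beta*D^2). Heterogeneous 10B uptake
# is mimicked by giving half the cells a low dose and half a high dose.

def survival(d, alpha=0.2, beta=0.05):
    return math.exp(-alpha * d - beta * d * d)

doses = [1.0, 5.0]       # unequal per-cell doses from heterogeneous uptake
probs = [0.5, 0.5]
s_het = sum(p * survival(d) for p, d in zip(probs, doses))
s_mean = survival(sum(p * d for p, d in zip(probs, doses)))  # mean dose = 3
print(round(s_het, 4), round(s_mean, 4))  # 0.4421 0.3499
```

The heterogeneous population survives more than the mean-dose estimate suggests (Jensen's inequality for a convex survival curve), which is why per-cell dose probability densities, not mean doses, drive the model's effectiveness estimates.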
Carstens, Julienne L; Correa de Sampaio, Pedro; Yang, Dalu; Barua, Souptik; Wang, Huamin; Rao, Arvind; Allison, James P; LeBleu, Valerie S; Kalluri, Raghu
2017-04-27
The exact nature and dynamics of pancreatic ductal adenocarcinoma (PDAC) immune composition remain largely unknown. Desmoplasia is suggested to polarize PDAC immunity. Therefore, a comprehensive evaluation of the composition and distribution of desmoplastic elements and T-cell infiltration is necessary to delineate their roles. Here we develop a novel computational imaging technology for the simultaneous evaluation of eight distinct markers, allowing for spatial analysis of distinct populations within the same section. We report a heterogeneous population of infiltrating T lymphocytes. Spatial distribution of cytotoxic T cells in proximity to cancer cells correlates with increased overall patient survival. Collagen-I and αSMA+ fibroblasts do not correlate with paucity in T-cell accumulation, suggesting that PDAC desmoplasia may not be a simple physical barrier. Further exploration of this technology may improve our understanding of how specific stromal composition could impact T-cell activity, with potential impact on the optimization of immune-modulatory therapies.
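The proximity readout can be sketched as a nearest-neighbour distance summary. The coordinates below are invented for illustration, whereas the paper extracts cell positions from eight-marker multiplex images.

```python
import math

# For each cytotoxic T cell, distance to the nearest cancer cell,
# summarised as a median: a minimal version of a spatial-proximity metric.

def nearest_distance(cell, targets):
    return min(math.dist(cell, t) for t in targets)

def median(xs):
    s = sorted(xs)
    m = len(s) // 2
    return s[m] if len(s) % 2 else 0.5 * (s[m - 1] + s[m])

cancer = [(0, 0), (10, 0), (5, 8)]            # made-up cancer-cell positions
t_cells = [(1, 1), (9, 1), (5, 5), (20, 20)]  # made-up cytotoxic T cells
dists = [nearest_distance(c, cancer) for c in t_cells]
print(round(median(dists), 2))
```

A lower median distance corresponds to the "cytotoxic T cells in proximity to cancer cells" pattern the paper associates with longer survival.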
Lipid Vesicle Shape Analysis from Populations Using Light Video Microscopy and Computer Vision
Zupanc, Jernej; Drašler, Barbara; Boljte, Sabina; Kralj-Iglič, Veronika; Iglič, Aleš; Erdogmus, Deniz; Drobne, Damjana
2014-01-01
We present a method for giant lipid vesicle shape analysis that combines manually guided large-scale video microscopy and computer vision algorithms to enable analyzing vesicle populations. The method retains the benefits of light microscopy and enables non-destructive analysis of vesicles from suspensions containing up to several thousands of lipid vesicles (1–50 µm in diameter). For each sample, image analysis was employed to extract data on vesicle quantity and size distributions of their projected diameters and isoperimetric quotients (measure of contour roundness). This process enables a comparison of samples from the same population over time, or the comparison of a treated population to a control. Although vesicles in suspensions are heterogeneous in sizes and shapes and have distinctively non-homogeneous distribution throughout the suspension, this method allows for the capture and analysis of repeatable vesicle samples that are representative of the population inspected. PMID:25426933
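The isoperimetric quotient used as the roundness measure is Q = 4πA/P², equal to 1 for a circle and smaller for elongated contours. A minimal sketch (the pipeline's image-analysis stage, which extracts A and P from video frames, is not reproduced):

```python
import math

# Roundness measure per vesicle contour: Q = 4*pi*A / P^2.

def isoperimetric_quotient(area, perimeter):
    return 4.0 * math.pi * area / perimeter ** 2

r = 5.0
circle_q = isoperimetric_quotient(math.pi * r ** 2, 2 * math.pi * r)
square_q = isoperimetric_quotient(1.0, 4.0)   # unit square
print(round(circle_q, 3), round(square_q, 3))  # 1.0 0.785
```

Tracking the distribution of Q across a population, rather than per vesicle, is what lets treated and control suspensions be compared despite their heterogeneity.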
NASA Astrophysics Data System (ADS)
Tsakiroglou, C. D.; Aggelopoulos, C. A.; Sygouni, V.
2009-04-01
A hierarchical, network-type, dynamic simulator of the immiscible displacement of water by oil in heterogeneous porous media is developed to simulate the rate-controlled displacement of two fluids at the soil column scale. A cubic network is constructed, where each node is assigned a permeability which is chosen randomly from a distribution function. The intensity of heterogeneities is quantified by the width of the permeability distribution function. The capillary pressure at each node is calculated by combining a generalized Leverett J-function with a Corey type model. Information about the heterogeneity of soils at the pore network scale is obtained by combining mercury intrusion porosimetry (MIP) data with back-scattered scanning electron microscope (BSEM) images [1]. In order to estimate the two-phase flow properties of nodes (relative permeability and capillary pressure functions, permeability distribution function), immiscible and miscible displacement experiments are performed on undisturbed soil columns. The transient responses of measured variables (pressure drop, fluid saturation averaged over five successive segments, solute concentration averaged over three cross-sections) are fitted with models accounting for the preferential flow paths at the micro- (multi-region model) and macro-scale (multi flowpath model) because of multi-scale heterogeneities [2,3]. Simulating the immiscible displacement of water by oil (drainage) in a large network, at each time step, the fluid saturation and pressure of each node are calculated by formulating mass balances at each node, accounting for capillary, viscous and gravity forces, and solving the system of coupled equations. At each iteration of the algorithm, the pressure drop is so selected that the total flow rate of the injected fluid is kept constant.
The dynamic large-scale network simulator is used (1) to examine the sensitivity of the transient responses of the axial distribution of fluid saturation and total pressure drop across the network to the permeability distribution function, spatial correlations of permeability, and capillary number, and (2) to estimate the effective (up-scaled) relative permeability functions at the soil column scale. In an attempt to clarify potential effects of the permeability distribution and spatial permeability correlations on the transient responses of the pressure drop across a soil column, signal analysis with wavelets is performed [4] on experimental and simulated results. The transient variation of signal energy and frequency of pressure drop fluctuations at the wavelet domain are correlated with macroscopic properties such as the effective water and oil relative permeabilities of the porous medium, and microscopic properties such as the variation of the permeability distribution of oil-occupied nodes. Toward the solution of the inverse problem, a general procedure is suggested to identify macro-heterogeneities from the fast analysis of pressure drop signals. References 1. Tsakiroglou, C.D. and M.A. Ioannidis, "Dual porosity modeling of the pore structure and transport properties of a contaminated soil", Eur. J. Soil Sci., 59, 744-761 (2008). 2. Aggelopoulos, C.A., and C.D. Tsakiroglou, "Quantifying the Soil Heterogeneity from Solute Dispersion Experiments", Geoderma, 146, 412-424 (2008). 3. Aggelopoulos, C.A., and C.D. Tsakiroglou, "A multi-flow path approach to model immiscible displacement in undisturbed heterogeneous soil columns", J. Contam. Hydrol., in press (2009). 4. Sygouni, V., C.D. Tsakiroglou, and A.C. Payatakes, "Using wavelets to characterize the wettability of porous materials", Phys. Rev. E, 76, 056304 (2007).
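Assigning node capillary pressures from a Leverett J-function scaled by permeability, as described above, can be sketched as follows. The Corey-type exponent and all constants are illustrative assumptions, not the fitted soil values.

```python
import math

# Node capillary pressure from Pc = sigma * sqrt(phi/k) * J(Sw),
# with a Corey-type J(Sw) = a * Sw**(-1/lam). Constants are illustrative.

def capillary_pressure(sw, k_perm, phi=0.4, sigma=0.025, a=0.3, lam=2.0):
    j = a * sw ** (-1.0 / lam)                 # Corey-type J-function
    return sigma * math.sqrt(phi / k_perm) * j

# a lower-permeability node has a higher capillary pressure at the same Sw,
# which is how the permeability distribution imprints itself on drainage paths
print(capillary_pressure(0.5, 1e-12) > capillary_pressure(0.5, 1e-11))
```

In the simulator each node would get its own k from the permeability distribution, and these Pc curves then enter the per-node mass balances.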
A Geospatial Information Grid Framework for Geological Survey.
Wu, Liang; Xue, Lei; Li, Chaoling; Lv, Xia; Chen, Zhanlong; Guo, Mingqiang; Xie, Zhong
2015-01-01
The use of digital information in geological fields is becoming very important. Thus, informatization in geological surveys should not stagnate as a result of the level of data accumulation. The integration and sharing of distributed, multi-source, heterogeneous geological information is an open problem in geological domains. Applications and services use geological spatial data with many features, including being cross-region and cross-domain and requiring real-time updating. As a result of these features, desktop and web-based geographic information systems (GISs) experience difficulties in meeting the demand for geological spatial information. To facilitate the real-time sharing of data and services in distributed environments, a GIS platform that is open, integrative, reconfigurable, reusable and elastic would represent an indispensable tool. The purpose of this paper is to develop a geological cloud-computing platform for integrating and sharing geological information based on a cloud architecture. Thus, the geological cloud-computing platform defines geological ontology semantics; designs a standard geological information framework and a standard resource integration model; builds a peer-to-peer node management mechanism; achieves the description, organization, discovery, computing and integration of the distributed resources; and provides the distributed spatial meta service, the spatial information catalog service, the multi-mode geological data service and the spatial data interoperation service. The geological survey information cloud-computing platform has been implemented, and based on the platform, some geological data services and geological processing services were developed. Furthermore, an iron-mine resource forecast and evaluation service is introduced in this paper.
Performance related issues in distributed database systems
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
1991-01-01
The key elements of research performed during the year-long effort of this project are: Investigate the effects of heterogeneity in distributed real-time systems; Study the requirements of TRAC towards building a heterogeneous database system; Study the effects of performance modeling on distributed database performance; and Experiment with an ORACLE-based heterogeneous system.
2013-11-01
big data with R is relatively new. RHadoop is a mature product from Revolution Analytics that uses R with Hadoop Streaming [15] and provides...agnostic all-data summaries or computations, in which case we use MapReduce directly. 2.3 D&R Software Environment In this work, we use the Hadoop ...job scheduling and tracking, data distribution, system architecture, heterogeneity, and fault-tolerance. Hadoop also provides a distributed key-value
Research on distributed heterogeneous data PCA algorithm based on cloud platform
NASA Astrophysics Data System (ADS)
Zhang, Jin; Huang, Gang
2018-05-01
Principal component analysis (PCA) of distributed heterogeneous data sets can overcome the limited scalability of centralized processing. In order to reduce the generation of intermediate data and error components of distributed heterogeneous data sets, a principal component analysis algorithm for heterogeneous data sets under a cloud platform is proposed. The algorithm performs the eigenvalue computation using Householder tridiagonalization and QR factorization, and calculates the error component of the heterogeneous database associated with the public key to obtain the intermediate data set and the lost information. Experiments on distributed DBM heterogeneous datasets show that the method is feasible and reliable in terms of execution time and accuracy.
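The distributed flavour of such a computation can be sketched by merging per-site summary statistics exactly and then performing a single eigen-decomposition. NumPy's `eigh` here stands in for the paper's Householder tridiagonalization plus QR step, and the site data are made up.

```python
import numpy as np

# Distributed PCA sketch: each site reports (n, column sums, scatter matrix);
# the pooled covariance is reconstructed exactly from those summaries, so
# raw rows never need to be centralised.

def local_summary(x):
    return len(x), x.sum(axis=0), x.T @ x

def merged_covariance(summaries):
    n = sum(s[0] for s in summaries)
    total = sum(s[1] for s in summaries)
    scatter = sum(s[2] for s in summaries)
    mean = total / n
    return (scatter - n * np.outer(mean, mean)) / (n - 1)

rng = np.random.default_rng(0)
sites = [rng.normal(size=(40, 3)) for _ in range(3)]       # three "sites"
cov = merged_covariance([local_summary(x) for x in sites])
w, v = np.linalg.eigh(cov)            # principal components of pooled data
full = np.cov(np.vstack(sites), rowvar=False)
print(np.allclose(cov, full))         # merging is exact, up to float error
```

Only the small (d, sum, d×d scatter) summaries cross the network, which is the same intermediate-data reduction the abstract is after.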
A CFD Heterogeneous Parallel Solver Based on Collaborating CPU and GPU
NASA Astrophysics Data System (ADS)
Lai, Jianqi; Tian, Zhengyu; Li, Hua; Pan, Sha
2018-03-01
Since the Graphic Processing Unit (GPU) has a strong ability of floating-point computation and memory bandwidth for data parallelism, it has been widely used in common computing areas such as molecular dynamics (MD), computational fluid dynamics (CFD) and so on. The emergence of the compute unified device architecture (CUDA), which reduces the complexity of compiling programs, brings great opportunities to CFD. There are three different modes for the parallel solution of the NS equations: a parallel solver based on the CPU, a parallel solver based on the GPU, and a heterogeneous parallel solver based on collaborating CPU and GPU. GPUs are relatively rich in compute capacity but poor in memory capacity, and CPUs are the opposite. To make full use of both, a CFD heterogeneous parallel solver based on collaborating CPU and GPU has been established. Three cases are presented to analyse the solver's computational accuracy and heterogeneous parallel efficiency. The numerical results agree well with experimental results, which demonstrates that the heterogeneous parallel solver has high computational precision. The speedup on a single GPU is more than 40 for laminar flow; it decreases for turbulent flow, but can still reach more than 20. Moreover, the speedup increases as the grid size becomes larger.
Federated data storage system prototype for LHC experiments and data intensive science
NASA Astrophysics Data System (ADS)
Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Ryabinkin, E.; Zarochentsev, A.
2017-10-01
The rapid increase of data volume from the experiments running at the Large Hadron Collider (LHC) prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and university clusters scattered over a large area aim to unite their resources for future productive work, at the same time giving an opportunity to support large physics collaborations. In our project we address the fundamental problem of designing a computing architecture to integrate distributed storage resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. Studies include development and implementation of a federated data storage prototype for Worldwide LHC Computing Grid (WLCG) centres of different levels and university clusters within one National Cloud. The prototype is based on computing resources located in Moscow, Dubna, Saint Petersburg, Gatchina and Geneva. This project intends to implement a federated distributed storage for all kinds of operations such as read/write/transfer and access via WAN from Grid centres, university clusters, supercomputers, academic and commercial clouds. The efficiency and performance of the system are demonstrated using synthetic and experiment-specific tests including real data processing and analysis workflows from the ATLAS and ALICE experiments, as well as compute-intensive bioinformatics applications (PALEOMIX) running on supercomputers. We present the topology and architecture of the designed system, report performance and statistics for different access patterns and show how federated data storage can be used efficiently by physicists and biologists. We also describe how sharing data on a widely distributed storage system can lead to a new computing model and reformations of computing style, for instance how a bioinformatics program running on supercomputers can read/write data from the federated storage.
NASA Astrophysics Data System (ADS)
Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Hong, Yang; Zuo, Depeng; Ren, Minglei; Lei, Tianjie; Liang, Ke
2018-01-01
Hydrological model calibration has been a hot issue for decades. The shuffled complex evolution method developed at the University of Arizona (SCE-UA) has been proved to be an effective and robust optimization approach. However, its computational efficiency deteriorates significantly when the amount of hydrometeorological data increases. In recent years, the rise of heterogeneous parallel computing has brought hope for the acceleration of hydrological model calibration. This study proposed a parallel SCE-UA method and applied it to the calibration of a watershed rainfall-runoff model, the Xinanjiang model. The parallel method was implemented on heterogeneous computing systems using OpenMP and CUDA. Performance testing and sensitivity analysis were carried out to verify its correctness and efficiency. Comparison results indicated that heterogeneous parallel computing-accelerated SCE-UA converged much more quickly than the original serial version and possessed satisfactory accuracy and stability for the task of fast hydrological model calibration.
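The parallelisation pattern, independent objective evaluations farmed out to workers, can be sketched with a thread pool standing in for the paper's OpenMP/CUDA back ends. The quadratic objective replaces the Xinanjiang model and, like the population sizes, is purely illustrative.

```python
from concurrent.futures import ThreadPoolExecutor
import random

# SCE-UA evaluates many parameter sets independently within each shuffling
# step, so the expensive model runs can be scored in parallel.

def objective(params):
    # stand-in for a full rainfall-runoff simulation (Xinanjiang in the paper)
    return sum((p - 0.5) ** 2 for p in params)

def evolve(population, workers=4):
    """One selection step: score everyone in parallel, keep the best half."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(objective, population))
    ranked = [p for _, p in sorted(zip(scores, population))]
    return ranked[: len(ranked) // 2]

random.seed(42)
pop = [[random.random() for _ in range(2)] for _ in range(16)]
best = evolve(pop)[0]
print(round(objective(best), 4))
```

The real method adds complex partitioning, simplex reflection, and shuffling around this scoring loop; the point here is only that the scoring loop is embarrassingly parallel, which is where the reported speedups come from.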
Aspect-related Vegetation Differences Amplify Soil Moisture Variability in Semiarid Landscapes
NASA Astrophysics Data System (ADS)
Yetemen, O.; Srivastava, A.; Kumari, N.; Saco, P. M.
2017-12-01
Soil moisture variability (SMV) in semiarid landscapes is affected by vegetation, soil texture, climate, aspect, and topography. The heterogeneity in vegetation cover that results from the effects of microclimate, terrain attributes (slope gradient, aspect, drainage area, etc.), soil properties, and spatial variability in precipitation has been reported to act as the dominant factor modulating SMV in semiarid ecosystems. However, the role of hillslope aspect in SMV, though reported in many field studies, has not received the same degree of attention, probably due to the lack of extensive large datasets. Numerical simulations can then be used to elucidate the contribution of aspect-driven vegetation patterns to this variability. In this work, we perform a sensitivity analysis to study the variables driving SMV using the CHILD landscape evolution model equipped with a spatially-distributed solar-radiation component that couples vegetation dynamics and surface hydrology. To explore how aspect-driven vegetation heterogeneity contributes to the SMV, CHILD was run using a range of parameters selected to reflect different scenarios (from uniform to heterogeneous vegetation cover). Throughout the simulations, the spatial distribution of soil moisture and vegetation cover are computed to estimate the corresponding coefficients of variation. Under uniform spatial precipitation forcing and uniform soil properties, the factors affecting the spatial distribution of solar insolation are found to play a key role in the SMV through the emergence of aspect-driven vegetation patterns. Hence, factors such as catchment gradient, aspect, and latitude define water stress and vegetation growth, and in turn affect the available soil moisture content. Interestingly, changes in soil properties (porosity, root depth, and pore-size distribution) over the domain are not as effective as the other factors.
These findings show that the factors associated with aspect-related vegetation differences amplify the soil moisture variability of semiarid landscapes.
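The variability metric above is the coefficient of variation of soil moisture. A toy comparison of a uniform cover against an aspect-driven north/south contrast (the moisture values are invented, not CHILD output):

```python
import math

# Coefficient of variation (std/mean) of soil moisture for two scenarios:
# near-uniform cover vs an aspect-driven north/south moisture contrast.

def coeff_var(xs):
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return math.sqrt(var) / mean

uniform = [0.20, 0.21, 0.19, 0.20, 0.21, 0.19]
aspect_driven = [0.28, 0.27, 0.29, 0.12, 0.11, 0.13]  # N-facing wetter than S
print(round(coeff_var(uniform), 3), round(coeff_var(aspect_driven), 3))
```

The aspect-driven scenario has a far larger CV, which is the amplification effect the simulations attribute to insolation-controlled vegetation patterns.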
HeNCE: A Heterogeneous Network Computing Environment
Beguelin, Adam; Dongarra, Jack J.; Geist, George Al; ...
1994-01-01
Network computing seeks to utilize the aggregate resources of many networked computers to solve a single problem. In so doing it is often possible to obtain supercomputer performance from an inexpensive local area network. The drawback is that network computing is complicated and error prone when done by hand, especially if the computers have different operating systems and data formats and are thus heterogeneous. The heterogeneous network computing environment (HeNCE) is an integrated graphical environment for creating and running parallel programs over a heterogeneous collection of computers. It is built on a lower level package called parallel virtual machine (PVM). The HeNCE philosophy of parallel programming is to have the programmer graphically specify the parallelism of a computation and to automate, as much as possible, the tasks of writing, compiling, executing, debugging, and tracing the network computation. Key to HeNCE is a graphical language based on directed graphs that describe the parallelism and data dependencies of an application. Nodes in the graphs represent conventional Fortran or C subroutines and the arcs represent data and control flow. This article describes the present state of HeNCE, its capabilities, limitations, and areas of future research.
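HeNCE's directed-graph model implies an execution order given by a topological sort of the dependency arcs. A sketch on an invented four-node graph (the node names are illustrative, not one of HeNCE's demos):

```python
# Nodes stand for subroutines, arcs for data/control dependencies; a valid
# run order is any topological order of the dependency graph.

def topo_order(deps):
    """deps: node -> set of nodes it depends on; returns a valid run order."""
    order, done = [], set()
    def visit(n):
        for d in sorted(deps.get(n, ())):
            if d not in done:
                visit(d)
        if n not in done:
            done.add(n)
            order.append(n)
    for n in sorted(deps):
        visit(n)
    return order

graph = {"input": set(), "solve_a": {"input"}, "solve_b": {"input"},
         "combine": {"solve_a", "solve_b"}}
print(topo_order(graph))  # ['input', 'solve_a', 'solve_b', 'combine']
```

Nodes with no ordering between them, here `solve_a` and `solve_b`, are exactly the ones a system like HeNCE can dispatch to different machines concurrently.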
NASA Technical Reports Server (NTRS)
Johnston, William E.; Gannon, Dennis; Nitzberg, Bill
2000-01-01
We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) Comprehensive and consistent set of location independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location independent user authentication and authorization, and overall system security services. The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. 
Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to meteorological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation system); (3) Coupling large-scale computing and data systems to scientific and engineering instruments (e.g., real-time interaction with experiments through real-time data analysis and interpretation presented to the experimentalist in ways that allow direct interaction with the experiment, instead of just with instrument control); (4) Highly interactive, augmented reality and virtual reality remote collaborations (e.g., Ames / Boeing Remote Help Desk providing field maintenance use of coupled video and NDI to a remote, on-line airframe structures expert who uses this data to index into detailed design databases, and returns 3D internal aircraft geometry to the field); (5) Single computational problems too large for any single system (e.g., the rotorcraft reference calculation). Grids also have the potential to provide pools of resources that could be called on in extraordinary / rapid response situations (such as disaster response) because they can provide common interfaces and access mechanisms, standardized management, and uniform user authentication and authorization, for large collections of distributed resources (whether or not they normally function in concert). IPG development and deployment is addressing requirements obtained by analyzing a number of different application areas, in particular from the NASA Aero-Space Technology Enterprise. This analysis has focused primarily on two types of users: the scientist / design engineer whose primary interest is problem solving (e.g.
determining wing aerodynamic characteristics in many different operating environments), and whose primary interface to IPG will be through various sorts of problem solving frameworks. The second type of user is the tool designer: the computational scientist who converts physics and mathematics into code that can simulate the physical world. These are the two primary users of IPG, and they have rather different requirements. The results of the analysis of the needs of these two types of users provide a broad set of requirements that gives rise to a general set of required capabilities. The IPG project is intended to address all of these requirements. In some cases the required computing technology exists, and in some cases it must be researched and developed. The project is using available technology to provide a prototype set of capabilities in a persistent distributed computing testbed. Beyond this, there are required capabilities that are not immediately available, and whose development spans the range from near-term engineering development (one to two years) to much longer-term R&D (three to six years). Additional information is contained in the original.
Howell, Bryan; McIntyre, Cameron C
2016-06-01
Deep brain stimulation (DBS) is an adjunctive therapy that is effective in treating movement disorders and shows promise for treating psychiatric disorders. Computational models of DBS have begun to be utilized as tools to optimize the therapy. Despite advancements in the anatomical accuracy of these models, there is still uncertainty as to what level of electrical complexity is adequate for modeling the electric field in the brain and the subsequent neural response to the stimulation. We used magnetic resonance images to create an image-based computational model of subthalamic DBS. The complexity of the volume conductor model was increased by incrementally including heterogeneity, anisotropy, and dielectric dispersion in the electrical properties of the brain. We quantified changes in the load of the electrode, the electric potential distribution, and stimulation thresholds of descending corticofugal (DCF) axon models. Incorporation of heterogeneity altered the electric potentials and subsequent stimulation thresholds, but to a lesser degree than incorporation of anisotropy. Additionally, the results were sensitive to the choice of method for defining anisotropy, with stimulation thresholds of DCF axons changing by as much as 190%. Typical approaches for defining anisotropy underestimate the expected load of the stimulation electrode, which led to underestimation of the extent of stimulation. More accurate predictions of the electrode load were achieved with alternative approaches for defining anisotropy. The effects of dielectric dispersion were small compared to the effects of heterogeneity and anisotropy. The results of this study help delineate the level of detail that is required to accurately model electric fields generated by DBS electrodes.
NASA Astrophysics Data System (ADS)
Howell, Bryan; McIntyre, Cameron C.
2016-06-01
Objective. Deep brain stimulation (DBS) is an adjunctive therapy that is effective in treating movement disorders and shows promise for treating psychiatric disorders. Computational models of DBS have begun to be utilized as tools to optimize the therapy. Despite advancements in the anatomical accuracy of these models, there is still uncertainty as to what level of electrical complexity is adequate for modeling the electric field in the brain and the subsequent neural response to the stimulation. Approach. We used magnetic resonance images to create an image-based computational model of subthalamic DBS. The complexity of the volume conductor model was increased by incrementally including heterogeneity, anisotropy, and dielectric dispersion in the electrical properties of the brain. We quantified changes in the load of the electrode, the electric potential distribution, and stimulation thresholds of descending corticofugal (DCF) axon models. Main results. Incorporation of heterogeneity altered the electric potentials and subsequent stimulation thresholds, but to a lesser degree than incorporation of anisotropy. Additionally, the results were sensitive to the choice of method for defining anisotropy, with stimulation thresholds of DCF axons changing by as much as 190%. Typical approaches for defining anisotropy underestimate the expected load of the stimulation electrode, which led to underestimation of the extent of stimulation. More accurate predictions of the electrode load were achieved with alternative approaches for defining anisotropy. The effects of dielectric dispersion were small compared to the effects of heterogeneity and anisotropy. Significance. The results of this study help delineate the level of detail that is required to accurately model electric fields generated by DBS electrodes.
Heterogeneous concurrent computing with exportable services
NASA Technical Reports Server (NTRS)
Sunderam, Vaidy
1995-01-01
Heterogeneous concurrent computing, based on the traditional process-oriented model, is approaching its functionality and performance limits. An alternative paradigm, based on the concept of services, supporting data-driven computation, and built on a lightweight process infrastructure, is proposed to enhance the functional capabilities and the operational efficiency of heterogeneous network-based concurrent computing. TPVM is an experimental prototype system supporting exportable services, thread-based computation, and remote memory operations that is built as an extension of and an enhancement to the PVM concurrent computing system. TPVM offers a significantly different computing paradigm for network-based computing, while maintaining a close resemblance to the conventional PVM model in the interest of compatibility and ease of transition. Preliminary experiences have demonstrated that the TPVM framework presents a natural yet powerful concurrent programming interface, while being capable of delivering performance improvements of up to thirty percent.
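The service-oriented, data-driven paradigm described above can be illustrated with a minimal sketch. The class and method names below are hypothetical, chosen for illustration only (this is not TPVM's actual interface): named services are "exported" by registering callables, and invocations are data-driven requests dispatched to a lightweight thread.

```python
import queue
import threading

class ServiceHost:
    """Minimal sketch of an exportable-service model (hypothetical API,
    not TPVM's actual interface): services are named callables, and
    invocations are data-driven requests handled by a lightweight thread."""

    def __init__(self):
        self._services = {}
        self._requests = queue.Queue()
        self._results = {}
        threading.Thread(target=self._run, daemon=True).start()

    def export(self, name, func):
        self._services[name] = func          # make the service invokable by name

    def invoke(self, name, arg):
        done = threading.Event()
        self._requests.put((name, arg, done))
        done.wait()                          # block until the service thread finishes
        return self._results.pop(done)

    def _run(self):
        while True:
            name, arg, done = self._requests.get()
            self._results[done] = self._services[name](arg)
            done.set()

host = ServiceHost()
host.export("square", lambda x: x * x)
print(host.invoke("square", 7))  # -> 49
```

A real system would export services across machine boundaries; here a single process stands in for the network to keep the sketch self-contained.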
Simple, efficient allocation of modelling runs on heterogeneous clusters with MPI
Donato, David I.
2017-01-01
In scientific modelling and computation, the choice of an appropriate method for allocating tasks for parallel processing depends on the computational setting and on the nature of the computation. The allocation of independent but similar computational tasks, such as modelling runs or Monte Carlo trials, among the nodes of a heterogeneous computational cluster is a special case that has not been specifically evaluated previously. A simulation study shows that a method of on-demand (that is, worker-initiated) pulling from a bag of tasks in this case leads to reliably short makespans for computational jobs despite heterogeneity both within and between cluster nodes. A simple reference implementation in the C programming language with the Message Passing Interface (MPI) is provided.
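The on-demand (worker-initiated) bag-of-tasks method evaluated above can be sketched in a few lines. The paper's reference implementation uses C with MPI; the version below substitutes Python threads for MPI ranks to stay self-contained, and the simulated task costs are illustrative assumptions.

```python
import queue
import threading
import time

def pull_worker(tasks, results, speed):
    """Worker-initiated ('pull') scheduling: each worker fetches the next
    task from the shared bag as soon as it becomes idle, so faster nodes
    naturally end up doing more of the work."""
    while True:
        try:
            task = tasks.get_nowait()
        except queue.Empty:
            return                           # bag is empty: worker retires
        time.sleep(task / speed / 1000.0)    # simulate a run; cost scales inversely with speed
        results.append((task, speed))

tasks = queue.Queue()
for t in range(20):
    tasks.put(t)

results = []
# A heterogeneous two-node "cluster": one node three times faster than the other.
workers = [threading.Thread(target=pull_worker, args=(tasks, results, s))
           for s in (1.0, 3.0)]
for w in workers:
    w.start()
for w in workers:
    w.join()

fast_share = sum(1 for _, s in results if s == 3.0)
print(len(results), fast_share)  # all 20 tasks complete; the faster node typically takes more
```

Because no task is assigned in advance, a slow or busy node never becomes the bottleneck for work it has not yet claimed, which is the mechanism behind the reliably short makespans reported in the study.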
NASA Technical Reports Server (NTRS)
Hale, Mark A.; Craig, James I.; Mistree, Farrokh; Schrage, Daniel P.
1995-01-01
Computing architectures are being assembled that extend concurrent engineering practices by providing more efficient execution and collaboration on distributed, heterogeneous computing networks. Built on the successes of initial architectures, requirements for a next-generation design computing infrastructure can be developed. These requirements concentrate on those needed by a designer in decision-making processes from product conception to recycling and can be categorized in two areas: design process and design information management. A designer both designs and executes design processes throughout design time to achieve better product and process capabilities while expending fewer resources. In order to accomplish this, information, or more appropriately design knowledge, needs to be adequately managed during product and process decomposition as well as recomposition. A foundation has been laid that captures these requirements in a design architecture called DREAMS (Developing Robust Engineering Analysis Models and Specifications). In addition, a computing infrastructure, called IMAGE (Intelligent Multidisciplinary Aircraft Generation Environment), is being developed that satisfies design requirements defined in DREAMS and incorporates enabling computational technologies.
Sriperumbudur, Kiran Kumar; Pau, Hans Wilhelm; van Rienen, Ursula
2018-03-01
Electric stimulation of the auditory nerve by cochlear implants has been a successful clinical intervention to treat sensorineural deafness. In this pathological condition of the cochlea, type-1 spiral ganglion neurons in Rosenthal's canal play a vital role in action potential initiation. Various morphological studies of human temporal bones suggest that the spiral ganglion neurons are surrounded by heterogeneous structures formed by a variety of cells and tissues. However, the existing simulation models have not considered the tissue heterogeneity in Rosenthal's canal while studying the electric field interaction with spiral ganglion neurons. Unlike the existing models, we have implemented the tissue heterogeneity in Rosenthal's canal using a computationally inexpensive image-based method in a two-dimensional finite element model. Our simulation results suggest that the spatial heterogeneity of surrounding tissues influences the electric field distribution in Rosenthal's canal, and thereby alters the transmembrane potential of the spiral ganglion neurons. In addition to the academic interest, these results are especially useful for understanding how the latest tissue regeneration methods, such as gene therapy and drug-induced resprouting of peripheral axons, which probably modify the density of the tissues in Rosenthal's canal, affect cochlear implant functionality.
Mitsuhashi, Kenji; Poudel, Joemini; Matthews, Thomas P.; Garcia-Uribe, Alejandro; Wang, Lihong V.; Anastasio, Mark A.
2017-01-01
Photoacoustic computed tomography (PACT) is an emerging imaging modality that exploits optical contrast and ultrasonic detection principles to form images of the photoacoustically induced initial pressure distribution within tissue. The PACT reconstruction problem corresponds to an inverse source problem in which the initial pressure distribution is recovered from measurements of the radiated wavefield. A major challenge in transcranial PACT brain imaging is compensation for aberrations in the measured data due to the presence of the skull. Ultrasonic waves undergo absorption, scattering and longitudinal-to-shear wave mode conversion as they propagate through the skull. To properly account for these effects, a wave-equation-based inversion method should be employed that can model the heterogeneous elastic properties of the skull. In this work, a forward model based on a finite-difference time-domain discretization of the three-dimensional elastic wave equation is established and a procedure for computing the corresponding adjoint of the forward operator is presented. Massively parallel implementations of these operators employing multiple graphics processing units (GPUs) are also developed. The developed numerical framework is validated and investigated in computer-simulation and experimental phantom studies whose designs are motivated by transcranial PACT applications. PMID:29387291
Multiscale modeling and distributed computing to predict cosmesis outcome after a lumpectomy
NASA Astrophysics Data System (ADS)
Garbey, M.; Salmon, R.; Thanoon, D.; Bass, B. L.
2013-07-01
Surgery for early stage breast carcinoma is either total mastectomy (complete breast removal) or surgical lumpectomy (only tumor removal). The lumpectomy, or partial mastectomy, is intended to preserve a breast that satisfies the woman's cosmetic, emotional and physical needs. But in a fairly large number of cases the cosmetic outcome is not satisfactory. Today, predicting that surgery outcome is essentially based on heuristics. Modeling such a complex process must encompass multiple scales, in space from cells to tissue, as well as in time, from minutes for the tissue mechanics to months for healing. The goal of this paper is to present a first step in multiscale modeling of the long-time-scale prediction of breast shape after tumor resection. This task requires coupling very different mechanical and biological models with very different computing needs. We provide a simple illustration of the application of heterogeneous distributed computing and modular software design to speed up the model development. Our computational framework currently serves to test hypotheses on breast tissue healing in a pilot study with women who have elected to undergo BCT and are being treated at the Methodist Hospital in Houston, TX.
A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data.
Delussu, Giovanni; Lianas, Luca; Frexia, Francesca; Zanetti, Gianluigi
2016-01-01
This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR's formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called "Constant Load" and "Constant Number of Records", with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes.
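The driver-layer decoupling described above (one common interface, interchangeable backends) can be sketched as follows. The class and method names are illustrative, not PyEHR's actual API, and a trivial in-memory backend stands in for MongoDB or Elasticsearch.

```python
from abc import ABC, abstractmethod

class DriverInterface(ABC):
    """Hypothetical common driver interface in the spirit of PyEHR's
    driver layer (names are illustrative, not PyEHR's actual API):
    data-management code talks only to this interface, so a MongoDB
    or Elasticsearch backend can be swapped in without changing it."""

    @abstractmethod
    def add_record(self, record): ...

    @abstractmethod
    def get_records_by_value(self, field, value): ...

class InMemoryDriver(DriverInterface):
    """Trivial stand-in backend used here instead of a real NoSQL store."""

    def __init__(self):
        self._records = []

    def add_record(self, record):
        self._records.append(record)

    def get_records_by_value(self, field, value):
        return [r for r in self._records if r.get(field) == value]

driver = InMemoryDriver()
driver.add_record({"archetype": "openEHR-EHR-OBSERVATION.blood_pressure.v1", "systolic": 120})
driver.add_record({"archetype": "openEHR-EHR-OBSERVATION.body_weight.v1", "weight": 70})
matches = driver.get_records_by_value("systolic", 120)
print(len(matches))  # -> 1
```

The design choice is the usual one for persistence layers: queries are expressed against the interface, and each concrete driver translates them into its store's native query language.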
Mohammadi, M; Chen, P
2015-09-01
Solid tumors with different microvascular densities (MVD) have been shown to have different outcomes in clinical studies. Other studies have demonstrated the significant correlation between high MVD, elevated interstitial fluid pressure (IFP) and metastasis in cancers. Elevated IFP in solid tumors prevents drug macromolecules from reaching most cancerous cells. To overcome this barrier, antiangiogenesis drugs can reduce MVD within the tumor and lower IFP. A quantitative approach is essential to compute how much reduction in MVD is required for a specific tumor to reach a desired amount of IFP for drug delivery purposes. Here we provide a computational framework to investigate how IFP is affected by the tumor size, the MVD, and the location of vessels within the tumor. A general physiologically relevant tumor type with a heterogeneous vascular structure surrounded by normal tissue is utilized. Then the continuity equation, Darcy's law, and Starling's equation are applied in the continuum mechanics model, which can calculate IFP for different cases of solid tumors. High MVD causes IFP elevation in solid tumors, and the IFP distribution correlates with the microvascular distribution within tumor tissue. However, for tumors with constant MVD but different microvascular structures, the average values of IFP were found to be the same. Moreover, for a constant MVD and vascular distribution, an increase in tumor size leads to increased IFP.
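The qualitative trends reported above can be reproduced with a much simpler model than the paper's: for a uniformly perfused spherical tumor, combining the continuity equation, Darcy's law, and Starling's filtration law has a classical closed-form IFP profile (the Baxter-Jain result). This is a sketch under that simplifying assumption, not the authors' heterogeneous-vasculature model; the parameter `alpha` lumps together microvascular density and tissue hydraulic conductivity.

```python
import math

def tumor_ifp_profile(radius_fraction, alpha, p_eff=1.0):
    """Steady-state IFP at fractional radius r in a uniformly perfused
    spherical tumor (classic Baxter-Jain solution; a simplification of
    the paper's model). alpha lumps microvascular density and tissue
    conductivity; p_eff is the effective vascular pressure driving
    filtration, in normalised units."""
    r = radius_fraction
    if r == 0.0:
        # limit of sinh(alpha * r) / r as r -> 0 is alpha
        return p_eff * (1.0 - alpha / math.sinh(alpha))
    return p_eff * (1.0 - math.sinh(alpha * r) / (r * math.sinh(alpha)))

# IFP peaks at the tumor centre and falls to ~0 at the rim, and a denser
# microvasculature (larger alpha) raises the central IFP plateau.
low_mvd = tumor_ifp_profile(0.0, alpha=2.0)    # ~0.45 of vascular pressure
high_mvd = tumor_ifp_profile(0.0, alpha=6.0)   # ~0.97 of vascular pressure
print(round(low_mvd, 3), round(high_mvd, 3))
```

This simple profile already shows the paper's first finding (higher MVD elevates IFP); capturing the dependence on vessel placement requires the full heterogeneous model.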
Mixed-mode oscillations and population bursting in the pre-Bötzinger complex
Bacak, Bartholomew J; Kim, Taegyo; Smith, Jeffrey C; Rubin, Jonathan E; Rybak, Ilya A
2016-01-01
This study focuses on computational and theoretical investigations of neuronal activity arising in the pre-Bötzinger complex (pre-BötC), a medullary region generating the inspiratory phase of breathing in mammals. A progressive increase of neuronal excitability in medullary slices containing the pre-BötC produces mixed-mode oscillations (MMOs) characterized by large amplitude population bursts alternating with a series of small amplitude bursts. Using two different computational models, we demonstrate that MMOs emerge within a heterogeneous excitatory neural network because of progressive neuronal recruitment and synchronization. The MMO pattern depends on the distributed neuronal excitability, the density and weights of network interconnections, and the cellular properties underlying endogenous bursting. Critically, the latter should provide a reduction of spiking frequency within neuronal bursts with increasing burst frequency and a dependence of the after-burst recovery period on burst amplitude. Our study highlights a novel mechanism by which heterogeneity naturally leads to complex dynamics in rhythmic neuronal populations. DOI: http://dx.doi.org/10.7554/eLife.13403.001 PMID:26974345
A computer architecture for intelligent machines
NASA Technical Reports Server (NTRS)
Lefebvre, D. R.; Saridis, G. N.
1991-01-01
The Theory of Intelligent Machines proposes a hierarchical organization for the functions of an autonomous robot based on the Principle of Increasing Precision With Decreasing Intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed in recent years. A computer architecture that implements the lower two levels of the intelligent machine is presented. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Details of Execution Level controllers for motion and vision systems are addressed, as well as the Petri net transducer software used to implement Coordination Level functions. Extensions to UNIX and VxWorks operating systems which enable the development of a heterogeneous, distributed application are described. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.
NASA Astrophysics Data System (ADS)
Shi, X.
2015-12-01
As NSF indicated - "Theory and experimentation have for centuries been regarded as two fundamental pillars of science. It is now widely recognized that computational and data-enabled science forms a critical third pillar." Geocomputation is the third pillar of GIScience and the geosciences. With the exponential growth of geodata, the challenge of scalable and high performance computing for big data analytics becomes urgent, because many research activities are constrained by software and tools that cannot even complete the computation process. Heterogeneous geodata integration and analytics obviously magnify the complexity and the operational time frame. Many large-scale geospatial problems may not be processable at all if the computer system does not have sufficient memory or computational power. Emerging computer architectures, such as Intel's Many Integrated Core (MIC) Architecture and the Graphics Processing Unit (GPU), and advanced computing technologies provide promising solutions that employ massive parallelism and hardware resources to achieve scalability and high performance for data-intensive computing over large spatiotemporal and social media data. Exploring novel algorithms and deploying the solutions in massively parallel computing environments to achieve scalable data processing and analytics over large-scale, complex, and heterogeneous geodata with consistent quality and high performance has been the central theme of our research team in the Department of Geosciences at the University of Arkansas (UARK). New multi-core architectures combined with application accelerators hold the promise of achieving scalability and high performance by exploiting task- and data-level parallelism that is not supported by conventional computing systems.
Such a parallel or distributed computing environment is particularly suitable for large-scale geocomputation over big data, as demonstrated by our prior work, while the potential of such advanced infrastructure remains unexplored in this domain. Within this presentation, our prior and on-going initiatives will be summarized to exemplify how we exploit multicore CPUs, GPUs, and MICs, and clusters of CPUs, GPUs and MICs, to accelerate geocomputation in different applications.
Cellular burdens and biological effects on tissue level caused by inhaled radon progenies.
Madas, B G; Balásházy, I; Farkas, Á; Szoke, I
2011-02-01
In the case of radon exposure, the spatial distribution of deposited radioactive particles is highly inhomogeneous in the central airways. The objective of this research is to investigate the consequences of this heterogeneity for cellular burdens in the bronchial epithelium and to study the possible biological effects at the tissue level. Applying computational fluid and particle dynamics techniques, the deposition distribution of inhaled radon daughters has been determined in a bronchial airway model for 23 min of work in the New Mexico uranium mine, corresponding to 0.0129 WLM exposure. A numerical epithelium model based on experimental data has been utilised in order to quantify cellular hits and doses. Finally, a carcinogenesis model considering cell death-induced cell-cycle shortening has been applied to assess the biological responses. The present computations reveal that cellular dose may reach 1.5 Gy, which is several orders of magnitude higher than tissue dose. The results are in agreement with the histological finding that the uneven deposition distribution of radon progenies may lead to an inhomogeneous spatial distribution of tumours in the bronchial airways. In addition, at the macroscopic level, the relationship between cancer risk and radiation burden seems to be non-linear.
2005-06-01
virtualisation of distributed computing and data resources such as processing, network bandwidth, and storage capacity, to create a single system...and Simulation (M&S) will be integrated into this heterogeneous SOA. M&S functionality will be available in the form of operational M&S services. One...documents defining net centric warfare, the use of M&S functionality is a common theme. Alberts and Hayes give a good overview on net centric operations
Simulation of Etching in Chlorine Discharges Using an Integrated Feature Evolution-Plasma Model
NASA Technical Reports Server (NTRS)
Hwang, Helen H.; Bose, Deepak; Govindan, T. R.; Meyyappan, M.; Biegel, Bryan (Technical Monitor)
2002-01-01
To better utilize its vast collection of heterogeneous resources that are geographically distributed across the United States, NASA is constructing a computational grid called the Information Power Grid (IPG). This paper describes various tools and techniques that we are developing to measure and improve the performance of a broad class of NASA applications when run on the IPG. In particular, we are investigating the areas of grid benchmarking, grid monitoring, user-level application scheduling, and decentralized system-level scheduling.
Heterogenous database integration in a physician workstation.
Annevelink, J; Young, C Y; Tang, P C
1991-01-01
We discuss the integration of a variety of data and information sources in a Physician Workstation (PWS), focusing on the integration of data from DHCP, the Veterans Administration's Distributed Hospital Computer Program. We designed a logically centralized, object-oriented data schema, used by end users and applications to explore the data accessible through an object-oriented database using a declarative query language. We emphasize the use of procedural abstraction to transparently integrate a variety of information sources into the data schema.
Heterogenous database integration in a physician workstation.
Annevelink, J.; Young, C. Y.; Tang, P. C.
1991-01-01
We discuss the integration of a variety of data and information sources in a Physician Workstation (PWS), focusing on the integration of data from DHCP, the Veterans Administration's Distributed Hospital Computer Program. We designed a logically centralized, object-oriented data schema, used by end users and applications to explore the data accessible through an object-oriented database using a declarative query language. We emphasize the use of procedural abstraction to transparently integrate a variety of information sources into the data schema. PMID:1807624
AWE-WQ: fast-forwarding molecular dynamics using the accelerated weighted ensemble.
Abdul-Wahid, Badi'; Feng, Haoyun; Rajan, Dinesh; Costaouec, Ronan; Darve, Eric; Thain, Douglas; Izaguirre, Jesús A
2014-10-27
A limitation of traditional molecular dynamics (MD) is that reaction rates are difficult to compute. This is due to the rarity of observing transitions between metastable states, since high energy barriers trap the system in these states. Recently the weighted ensemble (WE) family of methods has emerged, which can flexibly and efficiently sample conformational space without being trapped and allows calculation of unbiased rates. However, while WE can sample correctly and efficiently, a scalable implementation applicable to interesting biomolecular systems is not available. We provide here a GPLv2 implementation called AWE-WQ of a WE algorithm using the master/worker distributed computing WorkQueue (WQ) framework. AWE-WQ is scalable to thousands of nodes and supports dynamic allocation of computer resources, heterogeneous resource usage (such as central processing units (CPUs) and graphics processing units (GPUs) concurrently), seamless heterogeneous cluster usage (i.e., campus grids and cloud providers), and arbitrary MD codes such as GROMACS, while ensuring that all statistics are unbiased. We applied AWE-WQ to a 34-residue protein which simulated 1.5 ms over 8 months with a peak aggregate performance of 1000 ns/h. Comparison was done with a 200 μs simulation collected on a GPU over a similar timespan. The folding and unfolding rates were of comparable accuracy.
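The core of any WE method is the per-bin split/merge resampling step that keeps walker counts uniform while conserving probability weight. The sketch below shows that one step in a heavily simplified form (it is not AWE-WQ's actual code, and the single-bin setting and walker representation are assumptions for illustration).

```python
import random

def we_resample(walkers, target=4):
    """One split/merge step of a weighted-ensemble iteration for a single
    bin (a simplified sketch of the WE idea behind AWE-WQ, not its code).
    walkers is a list of (state, weight) pairs; the step enforces `target`
    walkers per bin while conserving total probability weight."""
    walkers = list(walkers)
    # Split the heaviest walker until the bin is full; each clone
    # carries half the parent's weight, so total weight is unchanged.
    while len(walkers) < target:
        walkers.sort(key=lambda w: w[1], reverse=True)
        state, weight = walkers.pop(0)
        walkers += [(state, weight / 2.0), (state, weight / 2.0)]
    # Merge the two lightest walkers until at target, keeping one survivor
    # with probability proportional to its weight (keeps averages unbiased).
    while len(walkers) > target:
        walkers.sort(key=lambda w: w[1])
        (s1, w1), (s2, w2) = walkers.pop(0), walkers.pop(0)
        survivor = s1 if random.random() < w1 / (w1 + w2) else s2
        walkers.append((survivor, w1 + w2))
    return walkers

before = [("a", 0.5), ("b", 0.3), ("c", 0.2)]
after = we_resample(before, target=4)
print(len(after), round(sum(w for _, w in after), 6))  # -> 4 1.0
```

In AWE-WQ this resampling is done by the master, while the short MD segments between resampling steps are the independent tasks farmed out to workers.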
Heterogeneous scalable framework for multiphase flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, Karla Vanessa
2013-09-01
Two categories of challenges confront the developer of computational spray models: those related to the computation and those related to the physics. Regarding the computation, the trend towards heterogeneous, multi- and many-core platforms will require considerable re-engineering of codes written for the current supercomputing platforms. Regarding the physics, accurate methods for transferring mass, momentum and energy from the dispersed phase onto the carrier fluid grid have so far eluded modelers. Significant challenges also lie at the intersection between these two categories. To be competitive, any physics model must be expressible in a parallel algorithm that performs well on evolving computer platforms. This work created an application based on a software architecture where the physics and software concerns are separated in a way that adds flexibility to both. The developed spray-tracking package includes an application programming interface (API) that abstracts away the platform-dependent parallelization concerns, enabling the scientific programmer to write serial code that the API resolves into parallel processes and threads of execution. The project also developed the infrastructure required to provide similar APIs to other applications. The API allows object-oriented Fortran applications direct interaction with Trilinos to support memory management of distributed objects on central processing unit (CPU) and graphics processing unit (GPU) nodes for applications using C++.
AWE-WQ: Fast-Forwarding Molecular Dynamics Using the Accelerated Weighted Ensemble
2015-01-01
A limitation of traditional molecular dynamics (MD) is that reaction rates are difficult to compute. This is due to the rarity of observing transitions between metastable states, since high energy barriers trap the system in these states. Recently the weighted ensemble (WE) family of methods has emerged, which can flexibly and efficiently sample conformational space without being trapped and allows calculation of unbiased rates. However, while WE can sample correctly and efficiently, a scalable implementation applicable to interesting biomolecular systems is not available. We provide here a GPLv2 implementation called AWE-WQ of a WE algorithm using the master/worker distributed computing WorkQueue (WQ) framework. AWE-WQ is scalable to thousands of nodes and supports dynamic allocation of computer resources, heterogeneous resource usage (such as central processing units (CPUs) and graphics processing units (GPUs) concurrently), seamless heterogeneous cluster usage (i.e., campus grids and cloud providers), and arbitrary MD codes such as GROMACS, while ensuring that all statistics are unbiased. We applied AWE-WQ to a 34-residue protein which simulated 1.5 ms over 8 months with a peak aggregate performance of 1000 ns/h. Comparison was done with a 200 μs simulation collected on a GPU over a similar timespan. The folding and unfolding rates were of comparable accuracy. PMID:25207854
Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment
Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel
2016-01-01
Distributed computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collecting and analysis models, e.g., the Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of the core components, MapReduce facilitates allocating, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data of each task have drawn interest, with a detailed analysis report being made. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs. PMID:27589753
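A two-phase regression fits two separate lines to a task's progress trace, one per execution phase, with the breakpoint chosen to minimize total squared error. The sketch below illustrates that idea on a synthetic trace; the exhaustive breakpoint search and the variable choices are illustrative assumptions, not the authors' exact TPR formulation.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def two_phase_fit(xs, ys):
    """Sketch of a two-phase regression in the spirit of the paper's TPR
    estimator (the breakpoint search is illustrative): try every split
    point, fit a line to each phase, keep the split with least error."""
    best = None
    for k in range(2, len(xs) - 1):
        fits = [fit_line(xs[:k], ys[:k]), fit_line(xs[k:], ys[k:])]
        err = sum((a + b * x - y) ** 2
                  for (a, b), seg in zip(fits, (zip(xs[:k], ys[:k]),
                                                zip(xs[k:], ys[k:])))
                  for x, y in seg)
        if best is None or err < best[0]:
            best = (err, k, fits)
    return best[1], best[2]

# Synthetic task trace: progress is slow during setup, then speeds up.
xs = list(range(10))
ys = [1.0 * x for x in xs[:5]] + [5.0 + 3.0 * (x - 4) for x in xs[5:]]
split, (phase1, phase2) = two_phase_fit(xs, ys)
print(split, round(phase1[1], 2), round(phase2[1], 2))  # -> 5 1.0 3.0
```

Extrapolating the second-phase line to the task's total workload then yields the finishing-time estimate that the scheduler can use in place of Hadoop's default progress-rate heuristic.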
Gene Signal Distribution and HER2 Amplification in Gastroesophageal Cancer.
Jørgensen, Jan Trøst; Nielsen, Karsten Bork; Kjærsgaard, Gitte; Jepsen, Anna; Mollerup, Jens
2017-01-01
Background: HER2 serves as an important therapeutic target in gastroesophageal cancer. Differences in HER2 gene signal distribution patterns can be observed at the tissue level, but how they influence the HER2 amplification status has not been studied so far. Here, we investigated the link between HER2 amplification and the different types of gene signal distribution. Methods: Tumor samples from 140 patients with gastroesophageal adenocarcinoma were analyzed using the HER2 IQFISH pharmDx™ assay. Specimens covered non-amplified and amplified cases with a preselected high proportion of HER2-amplified cases. Based on the HER2/CEN-17 ratio, specimens were categorized into amplified or non-amplified. The signal distribution patterns were divided into homogeneous, heterogeneous focal or heterogeneous mosaic. The study was conducted on anonymized specimens with limited access to clinicopathological data. Results: Among the 140 analyzed specimens, 83 had a heterogeneous HER2 signal distribution, with 62 being focal and 21 of the mosaic type. The remaining 57 specimens had a homogeneous signal distribution. HER2 amplification was observed in 63 of the 140 specimens, and nearly all (93.7%) were found among specimens with a heterogeneous focal signal distribution (p<0.0001). The mean HER2/CEN-17 ratio for the focal heterogeneous group was 8.75 (CI95%: 6.87 - 10.63), compared to 1.53 (CI95%: 1.45 - 1.61) and 1.70 (CI95%: 1.22 - 2.18) for the heterogeneous mosaic and homogeneous groups, respectively (p<0.0001). Conclusions: A clear relationship between HER2 amplification and the focal heterogeneous signal distribution was demonstrated in tumor specimens from patients with gastroesophageal cancer. Furthermore, we raise the hypothesis that the signal distribution patterns observed with FISH might be related to different subpopulations of HER2-positive tumor cells.
Spagnolo, Daniel M; Gyanchandani, Rekha; Al-Kofahi, Yousef; Stern, Andrew M; Lezon, Timothy R; Gough, Albert; Meyer, Dan E; Ginty, Fiona; Sarachan, Brion; Fine, Jeffrey; Lee, Adrian V; Taylor, D Lansing; Chennubhotla, S Chakra
2016-01-01
Measures of spatial intratumor heterogeneity are potentially important diagnostic biomarkers for cancer progression, proliferation, and response to therapy. Spatial relationships among cells including cancer and stromal cells in the tumor microenvironment (TME) are key contributors to heterogeneity. We demonstrate how to quantify spatial heterogeneity from immunofluorescence pathology samples, using a set of 3 basic breast cancer biomarkers as a test case. We learn a set of dominant biomarker intensity patterns and map the spatial distribution of the biomarker patterns with a network. We then describe the pairwise association statistics for each pattern within the network using pointwise mutual information (PMI) and visually represent heterogeneity with a two-dimensional map. We found a salient set of 8 biomarker patterns to describe cellular phenotypes from a tissue microarray cohort containing 4 different breast cancer subtypes. After computing PMI for each pair of biomarker patterns in each patient and tumor replicate, we visualize the interactions that contribute to the resulting association statistics. Then, we demonstrate the potential for using PMI as a diagnostic biomarker, by comparing PMI maps and heterogeneity scores from patients across the 4 different cancer subtypes. Estrogen receptor positive invasive lobular carcinoma patient, AL13-6, exhibited the highest heterogeneity score among those tested, while estrogen receptor negative invasive ductal carcinoma patient, AL13-14, exhibited the lowest heterogeneity score. This paper presents an approach for describing intratumor heterogeneity, in a quantitative fashion (via PMI), which departs from the purely qualitative approaches currently used in the clinic. PMI is generalizable to highly multiplexed/hyperplexed immunofluorescence images, as well as spatial data from complementary in situ methods including FISSEQ and CyTOF, sampling many different components within the TME. 
We hypothesize that PMI will uncover key spatial interactions in the TME that contribute to disease proliferation and progression.
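A minimal sketch of the pointwise mutual information computation for pairwise pattern associations, assuming co-occurrence is observed as pairs of pattern labels (e.g. from neighboring cells in the spatial network); the function and input encoding are illustrative, not taken from the paper's code:

```python
import numpy as np

def pmi_map(labels_a, labels_b, n_patterns):
    """Pairwise pointwise mutual information between co-occurring
    biomarker-pattern labels (e.g. labels of neighboring cells)."""
    counts = np.zeros((n_patterns, n_patterns))
    for a, b in zip(labels_a, labels_b):
        counts[a, b] += 1
        counts[b, a] += 1          # symmetric association
    joint = counts / counts.sum()
    marg = joint.sum(axis=1)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(joint / np.outer(marg, marg))
    return pmi
```

Positive entries mark pattern pairs that co-occur more often than chance; negative entries mark mutual avoidance, which is what the two-dimensional PMI maps visualize.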
CBRAIN: a web-based, distributed computing platform for collaborative neuroimaging research
Sherif, Tarek; Rioux, Pierre; Rousseau, Marc-Etienne; Kassis, Nicolas; Beck, Natacha; Adalat, Reza; Das, Samir; Glatard, Tristan; Evans, Alan C.
2014-01-01
The Canadian Brain Imaging Research Platform (CBRAIN) is a web-based collaborative research platform developed in response to the challenges raised by data-heavy, compute-intensive neuroimaging research. CBRAIN offers transparent access to remote data sources, distributed computing sites, and an array of processing and visualization tools within a controlled, secure environment. Its web interface is accessible through any modern browser and uses graphical interface idioms to reduce the technical expertise required to perform large-scale computational analyses. CBRAIN's flexible meta-scheduling has allowed the incorporation of a wide range of heterogeneous computing sites, currently including nine national research High Performance Computing (HPC) centers in Canada, one in Korea, one in Germany, and several local research servers. CBRAIN leverages remote computing cycles and facilitates resource-interoperability in a transparent manner for the end-user. Compared with typical grid solutions available, our architecture was designed to be easily extendable and deployed on existing remote computing sites with no tool modification, administrative intervention, or special software/hardware configuration. As of October 2013, CBRAIN serves over 200 users spread across 53 cities in 17 countries. The platform is built as a generic framework that can accept data and analysis tools from any discipline. However, its current focus is primarily on neuroimaging research and studies of neurological diseases such as Autism, Parkinson's and Alzheimer's diseases, and Multiple Sclerosis, as well as on normal brain structure and development. This technical report presents the CBRAIN Platform, its current deployment and usage, and future directions. PMID:24904400
Gough, Albert; Shun, Tongying; Taylor, D. Lansing; Schurdak, Mark
2016-01-01
Heterogeneity is well recognized as a common property of cellular systems that impacts biomedical research and the development of therapeutics and diagnostics. Several studies have shown that analysis of heterogeneity gives insight into mechanisms of action of perturbagens, can be used to predict optimal combination therapies, and can quantify heterogeneity in tumors, where heterogeneity is believed to be associated with adaptation and resistance. Cytometry methods, including high content screening (HCS), high throughput microscopy, flow cytometry, mass spectrometry imaging, and digital pathology, capture cell-level data for populations of cells. However, it is often assumed that the population response is normally distributed and therefore that the average adequately describes the results. A deeper understanding of the results of the measurements, and more effective comparison of perturbagen effects, requires analysis that takes into account the distribution of the measurements, i.e., the heterogeneity. However, the reproducibility of heterogeneous data collected on different days, and in different plates/slides, has not previously been evaluated. Here we show that conventional assay quality metrics alone are not adequate for quality control of the heterogeneity in the data. To address this need, we demonstrate the use of the Kolmogorov-Smirnov statistic as a metric for monitoring the reproducibility of heterogeneity in an SAR screen, and describe a workflow for quality control in heterogeneity analysis. One major challenge in high throughput biology is the evaluation and interpretation of heterogeneity in thousands of samples, such as compounds in a cell-based screen. In this study we also demonstrate that three previously reported heterogeneity indices capture the shapes of the distributions and provide a means to filter and browse big data sets of cellular distributions in order to compare and identify distributions of interest.
These metrics and methods are presented as a workflow for analysis of heterogeneity in large-scale biology projects. PMID:26476369
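The two-sample Kolmogorov-Smirnov statistic used above as a reproducibility metric is simply the maximum distance between the two empirical CDFs. A generic numpy sketch (not the authors' implementation) follows:

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the empirical CDFs of two cell-level measurement sets,
    e.g. the same assay run on two different days."""
    x, y = np.sort(x), np.sort(y)
    grid = np.concatenate([x, y])
    cdf_x = np.searchsorted(x, grid, side="right") / len(x)
    cdf_y = np.searchsorted(y, grid, side="right") / len(y)
    return np.max(np.abs(cdf_x - cdf_y))
```

A value near 0 indicates the two cell-level distributions are reproducible; values near 1 indicate essentially disjoint distributions.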
A Method to Represent Heterogeneous Materials for Rapid Prototyping: The Matryoshka Approach
Lei, Shuangyan; Frank, Matthew C.; Anderson, Donald D.; Brown, Thomas D.
2015-01-01
Purpose: The purpose of this paper is to present a new method for representing heterogeneous materials using nested STL shells, based, in particular, on the density distributions of human bones. Design/methodology/approach: Nested STL shells, called Matryoshka models, are described, based on their namesake Russian nesting dolls. In this approach, polygonal models, such as STL shells, are “stacked” inside one another to represent different material regions. The Matryoshka model addresses the challenge of representing different densities and different types of bone when reverse engineering from medical images. The Matryoshka model is generated via an iterative process of thresholding the Hounsfield Unit (HU) data using computed tomography (CT), thereby delineating regions of progressively increasing bone density. These nested shells can represent regions starting with the medullary (bone marrow) canal, up through and including the outer surface of the bone. Findings: The Matryoshka approach introduced can be used to generate accurate models of heterogeneous materials in an automated fashion, avoiding the challenge of hand-creating an assembly model for input to multi-material additive or subtractive manufacturing. Originality/value: This paper presents a new method for describing heterogeneous materials: in this case, the density distribution in a human bone. The authors show how the Matryoshka model can be used to plan harvesting locations for creating custom rapid allograft bone implants from donor bone. An implementation of a proposed harvesting method is demonstrated, followed by a case study using subtractive rapid prototyping to harvest a bone implant from a human tibia surrogate. PMID:26120277
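The iterative HU thresholding step can be sketched as follows; the threshold values and the toy "CT slice" are illustrative assumptions, not values from the paper. Each successive mask is a subset of the previous one, mirroring the nested Matryoshka shells:

```python
import numpy as np

def nested_masks(hu, thresholds):
    """Binary masks from CT Hounsfield-unit data, one per threshold in
    ascending order; higher-density masks are nested subsets of
    lower-density ones, like Matryoshka shells."""
    return [hu >= t for t in sorted(thresholds)]

# toy 2x2 "CT slice"; thresholds are illustrative only
hu = np.array([[100, 300], [700, 1200]])
masks = nested_masks(hu, [200, 600, 1000])
```

In practice each mask's boundary would then be contoured and exported as an STL shell.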
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zambon, Ilaria, E-mail: ilaria.zambon@unitus.it; Colantoni, Andrea; Carlucci, Margherita
Land Degradation (LD) in socio-environmental systems negatively impacts sustainable development paths. This study proposes a framework for LD evaluation based on indicators of diversification in the spatial distribution of sensitive land. We hypothesize that conditions for spatial heterogeneity in a composite index of land sensitivity are more frequently associated with areas prone to LD than spatial homogeneity. Spatial heterogeneity is supposed to be associated with degraded areas that act as hotspots for future degradation processes. A diachronic analysis (1960–2010) was performed at the Italian agricultural district scale to identify environmental factors associated with spatial heterogeneity in the degree of land sensitivity to degradation based on the Environmentally Sensitive Area Index (ESAI). In 1960, diversification in the level of land sensitivity measured using two common indexes of entropy (Shannon's diversity and Pielou's evenness) increased significantly with the ESAI, indicating a high level of land sensitivity to degradation. In 2010, the surface area classified as “critical” to LD was the highest in districts with diversification in the spatial distribution of ESAI values, confirming the hypothesis formulated above. Entropy indexes, based on the observed alignment with the concept of LD, constitute a valuable basis to inform mitigation strategies against desertification. - Highlights: • Spatial heterogeneity is supposed to be associated with degraded areas. • Entropy indexes can inform mitigation strategies against desertification. • Assessing spatial diversification in the degree of land sensitivity to degradation. • Mediterranean rural areas have an evident diversity in agricultural systems. • A diachronic analysis carried out at the Italian agricultural district scale.
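The two entropy indexes named above can be sketched directly; the inputs are assumed to be class shares (e.g. area shares of ESAI sensitivity classes within a district), which is an illustrative reading of the paper's setup:

```python
import numpy as np

def shannon_diversity(shares):
    """Shannon diversity H' over class shares, e.g. area shares of
    ESAI sensitivity classes in one district."""
    p = np.asarray(shares, dtype=float)
    p = p[p > 0] / p.sum()               # drop empty classes, normalize
    return -np.sum(p * np.log(p))

def pielou_evenness(shares):
    """Pielou's evenness J = H' / ln(S); equals 1 when all S occupied
    classes have equal shares, and falls toward 0 with dominance."""
    p = np.asarray(shares, dtype=float)
    s = np.count_nonzero(p)
    return shannon_diversity(p) / np.log(s) if s > 1 else 0.0
```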
PBSM3D: A finite volume, scalar-transport blowing snow model for use with variable resolution meshes
NASA Astrophysics Data System (ADS)
Marsh, C.; Wayand, N. E.; Pomeroy, J. W.; Wheater, H. S.; Spiteri, R. J.
2017-12-01
Blowing snow redistribution results in heterogeneous snowcovers that are ubiquitous in cold, windswept environments. Capturing this spatial and temporal variability is important for melt and runoff simulations. Point-scale blowing snow transport models are difficult to apply in fully distributed hydrological models due to landscape heterogeneity and complex wind fields. Many existing distributed snow transport models have empirical wind flow and/or simplified wind direction algorithms that perform poorly in calculating snow redistribution where there are divergent wind flows, sharp topography, and large spatial extents. Herein, a steady-state scalar transport model is discretized using the finite volume method (FVM), with parameterizations from the Prairie Blowing Snow Model (PBSM). PBSM has been applied in hydrological response units and grids to prairie, arctic, glacier, and alpine terrain and has shown a good capability to represent snow redistribution over complex terrain. The FVM discretization takes advantage of the variable-resolution mesh in the Canadian Hydrological Model (CHM) to ensure efficient calculations over small and large spatial extents. Variable-resolution unstructured meshes preserve surface heterogeneity but result in fewer computational elements than high-resolution structured (raster) grids. Snowpack, soil moisture, and streamflow observations were used to evaluate CHM-modelled outputs in a sub-arctic and an alpine basin. Newly developed remotely sensed snowcover indices allowed for validation over large basins. CHM simulations of snow hydrology were improved by the inclusion of the blowing snow model. The results demonstrate the key role of snow transport processes in creating pre-melt snowcover heterogeneity and therefore governing post-melt soil moisture and runoff generation dynamics.
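As a much-simplified illustration of the finite-volume idea (1D structured cells with first-order upwind fluxes, rather than CHM's unstructured variable-resolution mesh; the velocity and grid values are illustrative):

```python
import numpy as np

def fvm_advect(c, u, dx, dt):
    """One first-order upwind finite-volume step for a transported
    scalar c (e.g. suspended snow concentration) with uniform face
    velocity u. Boundary cells are held fixed as a crude BC."""
    # flux at each interior face, upwinded on the sign of u
    flux = np.where(u > 0, u * c[:-1], u * c[1:])
    dcdt = np.zeros_like(c)
    dcdt[1:-1] = -(flux[1:] - flux[:-1]) / dx
    return c + dt * dcdt
```

Because each cell update is the difference of its face fluxes, the scheme conserves the transported mass in the interior, which is the property that makes FVM attractive for snow redistribution budgets.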
NASA Astrophysics Data System (ADS)
Land, Walker H., Jr.; Lewis, Michael; Sadik, Omowunmi; Wong, Lut; Wanekaya, Adam; Gonzalez, Richard J.; Balan, Arun
2004-04-01
This paper extends the classification approaches described in reference [1] in the following ways: (1) developing and evaluating a new method for evolving organophosphate nerve agent Support Vector Machine (SVM) classifiers using Evolutionary Programming, (2) conducting research experiments using a larger database of organophosphate nerve agents, and (3) upgrading the architecture to an object-based grid system for evaluating the classification of EP-derived SVMs. Due to the increased threat of chemical and biological weapons of mass destruction (WMD) posed by international terrorist organizations, a significant effort is underway to develop tools that can be used to detect and effectively combat biochemical warfare. This paper reports the integration of multi-array sensors with Support Vector Machines (SVMs) for the detection of organophosphate nerve agents using a grid computing system called Legion. Grid computing is the use of large collections of heterogeneous, distributed resources (including machines, databases, devices, and users) to support large-scale computations and wide-area data access. Finally, preliminary experiments using EP-derived support vector machines designed to operate on distributed systems have provided accurate classification results. In addition, distributed training architectures are 50 times faster when compared to standard iterative training methods.
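A toy sketch of the evolutionary-programming idea: a population of candidate classifiers is mutated and selected by fitness. A linear decision boundary stands in for the paper's SVM parameters, and the data, population size, and mutation scale are all illustrative assumptions:

```python
import numpy as np

def ep_train(X, y, pop=20, gens=60, seed=0):
    """Evolutionary programming over linear classifiers: mutate a
    population of weight vectors, keep the fittest by training
    accuracy (truncation selection). A stand-in for evolving SVM
    kernel/regularization parameters as in the paper."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias term
    P = rng.normal(size=(pop, Xb.shape[1]))

    def fitness(w):
        return np.mean(np.sign(Xb @ w) == y)

    for _ in range(gens):
        children = P + rng.normal(scale=0.3, size=P.shape)
        both = np.vstack([P, children])
        scores = np.array([fitness(w) for w in both])
        P = both[np.argsort(scores)[-pop:]]     # keep the best `pop`
    return max(P, key=fitness)
```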
Application-Level Interoperability Across Grids and Clouds
NASA Astrophysics Data System (ADS)
Jha, Shantenu; Luckow, Andre; Merzky, Andre; Erdely, Miklos; Sehgal, Saurabh
Application-level interoperability is defined as the ability of an application to utilize multiple distributed heterogeneous resources. Such interoperability is becoming increasingly important with increasing volumes of data, multiple sources of data as well as resource types. The primary aim of this chapter is to understand different ways in which application-level interoperability can be provided across distributed infrastructure. We achieve this by (i) using the canonical wordcount application, based on an enhanced version of MapReduce that scales-out across clusters, clouds, and HPC resources, (ii) establishing how SAGA enables the execution of wordcount application using MapReduce and other programming models such as Sphere concurrently, and (iii) demonstrating the scale-out of ensemble-based biomolecular simulations across multiple resources. We show user-level control of the relative placement of compute and data and also provide simple performance measures and analysis of SAGA-MapReduce when using multiple, different, heterogeneous infrastructures concurrently for the same problem instance. Finally, we discuss Azure and some of the system-level abstractions that it provides and show how it is used to support ensemble-based biomolecular simulations.
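The canonical wordcount pattern referenced above, reduced to a single-process sketch: a thread pool stands in for the distributed clusters, clouds, and HPC back-ends that SAGA-MapReduce scatters map tasks across, and the reduce phase merges the partial counts:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor  # stand-in for remote workers

def map_chunk(text):
    """Map phase: count words in one chunk of the input."""
    return Counter(text.split())

def wordcount(chunks):
    """Scatter chunks to workers, then reduce the partial counts."""
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(map_chunk, chunks))
    total = Counter()
    for p in partials:
        total += p
    return total
```

In the chapter's setting the interesting part is not this logic but where each `map_chunk` runs; the programming model stays fixed while the placement of compute relative to data varies.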
Collectives for Multiple Resource Job Scheduling Across Heterogeneous Servers
NASA Technical Reports Server (NTRS)
Tumer, K.; Lawson, J.
2003-01-01
Efficient management of large-scale, distributed data storage and processing systems is a major challenge for many computational applications. Many of these systems are characterized by multi-resource tasks processed across a heterogeneous network. Conventional approaches, such as load balancing, work well for centralized, single-resource problems, but break down in the more general case. In addition, most approaches are often based on heuristics that do not directly attempt to optimize the world utility. In this paper, we propose an agent-based control system using the theory of collectives. We configure the servers of our network with agents that make local job scheduling decisions. These decisions are based on local goals that are constructed to be aligned with the objective of optimizing the overall efficiency of the system. We demonstrate that multi-agent systems in which all the agents attempt to optimize the same global utility function (team game) only marginally outperform conventional load balancing. On the other hand, agents configured using collectives outperform both team games and load balancing (by up to four times for the latter), despite their distributed nature and their limited access to information.
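For concreteness, the conventional load-balancing baseline that the collectives approach is compared against can be sketched as a greedy least-loaded rule; the job tuples and scalar load model are illustrative assumptions, and a collectives-based agent would instead score actions by a local utility aligned with the world utility:

```python
def load_balance(jobs, servers):
    """Greedy baseline: route each (name, cpu, disk) job to the server
    with the lowest current total load. Collectives replace this global
    rule with per-server agents optimizing aligned local goals."""
    loads = {s: 0.0 for s in servers}
    assignment = {}
    for name, cpu, disk in jobs:
        target = min(loads, key=loads.get)   # least-loaded server
        loads[target] += cpu + disk          # crude scalar load model
        assignment[name] = target
    return assignment, loads
```

The breakdown the abstract mentions is visible even here: summing heterogeneous resources (CPU and disk) into one scalar hides multi-resource contention, which is exactly the general case where load balancing underperforms.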
Grid and Cloud for Developing Countries
NASA Astrophysics Data System (ADS)
Petitdidier, Monique
2014-05-01
The European Grid e-infrastructure has shown the capacity to connect geographically distributed heterogeneous compute resources in a secure way, taking advantage of a robust and fast REN (Research and Education Network). In many countries, as in Africa, the first step has been to implement a REN, with regional organizations such as Ubuntunet, WACREN, or ASREN coordinating the development and improvement of the network and its interconnections. Internet connectivity is still expanding rapidly in those countries. The second step has been to meet the computing needs of scientists. Even if many of them have their own laptops, multi-core or not, for more and more applications this is not enough, because they face intensive computing demands due to the large amounts of data to be processed and/or complex codes. So far, one solution has been to go abroad, to Europe or America, to run large applications, or simply not to participate in international communities. The Grid is very attractive for connecting geographically distributed heterogeneous resources, aggregating new ones, and creating new sites on the REN with secure access. All users have access to the same services even if they have no resources in their own institute. With faster and more robust Internet, they will be able to take advantage of the European Grid. There are different initiatives to provide resources and training, such as the UNESCO/HP Brain Gain initiative and EUMEDGrid. Nowadays, Cloud computing has become very attractive, and deployments are starting in some of these countries. In this talk, the challenges those countries face in implementing such e-infrastructures, and in developing, in parallel, scientific and technical research and education in the new technologies, will be presented and illustrated by examples.
Identification of transmissivity fields using a Bayesian strategy and perturbative approach
NASA Astrophysics Data System (ADS)
Zanini, Andrea; Tanda, Maria Giovanna; Woodbury, Allan D.
2017-10-01
The paper deals with the crucial problem of groundwater parameter estimation, which is the basis for efficient modeling and reclamation activities. A hierarchical Bayesian approach is developed: it uses Akaike's Bayesian Information Criterion in order to estimate the hyperparameters (related to the covariance model chosen) and to quantify the unknown noise variance. The transmissivity identification proceeds in two steps: the first, called empirical Bayesian interpolation, uses Y* (Y = lnT) observations to interpolate Y values on a specified grid; the second, called empirical Bayesian update, improves the previous Y estimate through the addition of hydraulic head observations. The relationship between the head and lnT has been linearized through a perturbative solution of the flow equation. In order to test the proposed approach, synthetic aquifers from the literature have been considered. The aquifers in question contain a variety of boundary conditions (both Dirichlet and Neumann type) and scales of heterogeneity (σY² = 1.0 and σY² = 5.3). The estimated transmissivity fields were compared to the true ones. The joint use of Y* and head measurements improves the estimation of Y for both degrees of heterogeneity. Even if the variance of the strongly heterogeneous transmissivity field can be considered high for the application of the perturbative approach, the results show the same order of approximation as the non-linear methods proposed in the literature. The procedure allows computing the posterior probability distribution of the target quantities and quantifying the uncertainty in the model prediction. Bayesian updating combines advantages of both Monte Carlo (MC) and non-MC approaches: like MC methods, it allows computing the posterior probability distribution of the target quantities directly, and like non-MC methods, it has computational times on the order of seconds.
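The second step, updating the lnT estimate with head observations, has the standard linear-Gaussian form once the flow equation has been linearized. This generic sketch assumes that linearization has already produced an observation matrix H relating heads to the log-transmissivity field; it is not the authors' code:

```python
import numpy as np

def bayes_update(mu, cov, H, obs, noise_var):
    """Linear-Gaussian Bayesian update of a log-transmissivity field:
    prior y ~ N(mu, cov), linearized heads obs = H @ y + noise."""
    R = noise_var * np.eye(len(obs))         # observation noise covariance
    S = H @ cov @ H.T + R                    # innovation covariance
    K = cov @ H.T @ np.linalg.inv(S)         # gain
    mu_post = mu + K @ (obs - H @ mu)
    cov_post = cov - K @ H @ cov
    return mu_post, cov_post
```

The posterior covariance is what supports the uncertainty quantification mentioned in the abstract, without any Monte Carlo sampling.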
Reduced-Order Biogeochemical Flux Model for High-Resolution Multi-Scale Biophysical Simulations
NASA Astrophysics Data System (ADS)
Smith, Katherine; Hamlington, Peter; Pinardi, Nadia; Zavatarelli, Marco
2017-04-01
Biogeochemical tracers and their interactions with upper ocean physical processes such as submesoscale circulations and small-scale turbulence are critical for understanding the role of the ocean in the global carbon cycle. These interactions can cause small-scale spatial and temporal heterogeneity in tracer distributions that can, in turn, greatly affect carbon exchange rates between the atmosphere and interior ocean. For this reason, it is important to take into account small-scale biophysical interactions when modeling the global carbon cycle. However, explicitly resolving these interactions in an earth system model (ESM) is currently infeasible due to the enormous associated computational cost. As a result, understanding and subsequently parameterizing how these small-scale heterogeneous distributions develop and how they relate to larger resolved scales is critical for obtaining improved predictions of carbon exchange rates in ESMs. In order to address this need, we have developed the reduced-order, 17 state variable Biogeochemical Flux Model (BFM-17) that follows the chemical functional group approach, which allows for non-Redfield stoichiometric ratios and the exchange of matter through units of carbon, nitrate, and phosphate. This model captures the behavior of open-ocean biogeochemical systems without substantially increasing computational cost, thus allowing the model to be combined with computationally-intensive, fully three-dimensional, non-hydrostatic large eddy simulations (LES). In this talk, we couple BFM-17 with the Princeton Ocean Model and show good agreement between predicted monthly-averaged results and Bermuda testbed area field data (including the Bermuda-Atlantic Time-series Study and Bermuda Testbed Mooring). Through these tests, we demonstrate the capability of BFM-17 to accurately model open-ocean biochemistry. 
Additionally, we discuss the use of BFM-17 within a multi-scale LES framework and outline how this will further our understanding of turbulent biophysical interactions in the upper ocean.
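As a drastically reduced illustration of the reduced-order idea (three state variables instead of BFM-17's seventeen; all rate constants are illustrative assumptions), a nutrient-phytoplankton-zooplankton box model whose fluxes conserve total mass:

```python
def npz_step(N, P, Z, dt, mu=1.0, g=0.5, kN=0.3, m=0.1):
    """One Euler step of a minimal NPZ box model, a toy stand-in for
    the 17-variable BFM-17 tracer system. Every flux leaves one pool
    and enters another, so N + P + Z is conserved exactly."""
    uptake = mu * N / (kN + N) * P      # Michaelis-Menten nutrient uptake
    grazing = g * P * Z                 # zooplankton grazing on P
    mortality = m * Z                   # Z losses remineralized back to N
    dN = -uptake + mortality
    dP = uptake - grazing
    dZ = grazing - mortality
    return N + dt * dN, P + dt * dP, Z + dt * dZ
```

In a coupled LES setting, a step like this would run in every grid cell, with the resolved flow stirring the tracers between cells; keeping the per-cell system small is what makes that affordable.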
Reduced-Order Biogeochemical Flux Model for High-Resolution Multi-Scale Biophysical Simulations
NASA Astrophysics Data System (ADS)
Smith, K.; Hamlington, P.; Pinardi, N.; Zavatarelli, M.; Milliff, R. F.
2016-12-01
Biogeochemical tracers and their interactions with upper ocean physical processes such as submesoscale circulations and small-scale turbulence are critical for understanding the role of the ocean in the global carbon cycle. These interactions can cause small-scale spatial and temporal heterogeneity in tracer distributions which can, in turn, greatly affect carbon exchange rates between the atmosphere and interior ocean. For this reason, it is important to take into account small-scale biophysical interactions when modeling the global carbon cycle. However, explicitly resolving these interactions in an earth system model (ESM) is currently infeasible due to the enormous associated computational cost. As a result, understanding and subsequently parametrizing how these small-scale heterogeneous distributions develop and how they relate to larger resolved scales is critical for obtaining improved predictions of carbon exchange rates in ESMs. In order to address this need, we have developed the reduced-order, 17 state variable Biogeochemical Flux Model (BFM-17). This model captures the behavior of open-ocean biogeochemical systems without substantially increasing computational cost, thus allowing the model to be combined with computationally-intensive, fully three-dimensional, non-hydrostatic large eddy simulations (LES). In this talk, we couple BFM-17 with the Princeton Ocean Model and show good agreement between predicted monthly-averaged results and Bermuda testbed area field data (including the Bermuda-Atlantic Time Series and Bermuda Testbed Mooring). Through these tests, we demonstrate the capability of BFM-17 to accurately model open-ocean biochemistry. Additionally, we discuss the use of BFM-17 within a multi-scale LES framework and outline how this will further our understanding of turbulent biophysical interactions in the upper ocean.
Seismic signal processing on heterogeneous supercomputers
NASA Astrophysics Data System (ADS)
Gokhberg, Alexey; Ermert, Laura; Fichtner, Andreas
2015-04-01
The processing of seismic signals - including the correlation of massive ambient noise data sets - represents an important part of a wide range of seismological applications. It is characterized by large data volumes as well as high computational input/output intensity. Development of efficient approaches towards seismic signal processing on emerging high performance computing systems is therefore essential. Heterogeneous supercomputing systems introduced in the recent years provide numerous computing nodes interconnected via high throughput networks, every node containing a mix of processing elements of different architectures, like several sequential processor cores and one or a few graphical processing units (GPU) serving as accelerators. A typical representative of such computing systems is "Piz Daint", a supercomputer of the Cray XC 30 family operated by the Swiss National Supercomputing Center (CSCS), which we used in this research. Heterogeneous supercomputers provide an opportunity for manifold application performance increase and are more energy-efficient, however they have much higher hardware complexity and are therefore much more difficult to program. The programming effort may be substantially reduced by the introduction of modular libraries of software components that can be reused for a wide class of seismology applications. The ultimate goal of this research is design of a prototype for such library suitable for implementing various seismic signal processing applications on heterogeneous systems. As a representative use case we have chosen an ambient noise correlation application. Ambient noise interferometry has developed into one of the most powerful tools to image and monitor the Earth's interior. Future applications will require the extraction of increasingly small details from noise recordings. To meet this demand, more advanced correlation techniques combined with very large data volumes are needed. 
This poses new computational problems that require dedicated HPC solutions. The chosen application uses a wide range of common signal processing methods, including various IIR filter designs, amplitude and phase correlation, computation of the analytic signal, and discrete Fourier transforms. Furthermore, various processing methods specific to seismology, such as the rotation of seismic traces, are used. Efficient implementation of all these methods on GPU-accelerated systems presents several challenges. In particular, it requires a careful distribution of work between the sequential processors and accelerators. Furthermore, since the application is designed to process very large volumes of data, special attention had to be paid to the efficient use of the available memory and networking hardware resources in order to reduce the intensity of data input and output. In our contribution we will explain the software architecture as well as the principal engineering decisions used to address these challenges. We will also describe the programming model based on C++ and CUDA that we used to develop the software. Finally, we will demonstrate the performance improvements achieved by using the heterogeneous computing architecture. This work was supported by a grant from the Swiss National Supercomputing Centre (CSCS) under project ID d26.
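One of the building blocks listed above, computing the analytic signal, can be sketched with a plain FFT. This is a generic CPU numpy version; the paper's point is precisely that such kernels must then be mapped carefully onto GPU accelerators:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT form of the Hilbert transform:
    zero the negative frequencies and double the positive ones.
    |result| is the envelope; its angle is the instantaneous phase."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0          # Nyquist bin kept once
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)
```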
Research into a distributed fault diagnosis system and its application
NASA Astrophysics Data System (ADS)
Qian, Suxiang; Jiao, Weidong; Lou, Yongjian; Shen, Xiaomei
2005-12-01
CORBA (Common Object Request Broker Architecture) is a solution for distributed computing over heterogeneous systems that establishes a communication protocol between distributed objects. It places great emphasis on realizing interoperation between distributed objects. However, only after developing suitable application approaches and practical technology for monitoring and diagnosis can customers share monitoring and diagnosis information, so that remote multi-expert cooperative diagnosis online can be realized. This paper aims at building an open fault monitoring and diagnosis platform combining CORBA, the Web, and agents. Heterogeneous diagnosis objects interoperate in independent threads through the CORBA soft-bus, realizing resource sharing and online multi-expert cooperative diagnosis, and overcoming disadvantages such as the lack of diagnosis knowledge, the narrowness of diagnostic techniques, and the incompleteness of analysis functions, so that more complicated and deeper diagnosis can be carried out. Taking a high-speed centrifugal air compressor set as an example, we demonstrate a distributed diagnosis based on CORBA. This shows that, by integrating CORBA, Web technology, and an agent framework model in complementary research, more efficient approaches can be found to settle problems such as real-time monitoring and diagnosis over the network and the decomposition of complicated tasks. In this system, a multi-diagnosis intelligent agent helps improve diagnosis efficiency. Besides, the system offers an open environment, which makes it easy for diagnosis objects to upgrade and for new diagnosis server objects to join in.
Long-range Ising model for credit portfolios with heterogeneous credit exposures
NASA Astrophysics Data System (ADS)
Kato, Kensuke
2016-11-01
We propose the finite-size long-range Ising model as a model for heterogeneous credit portfolios held by a financial institution, from the viewpoint of econophysics. The model expresses the heterogeneity of the default probability and the default correlation by dividing a credit portfolio into multiple sectors characterized by credit rating and industry. The model also expresses the heterogeneity of the credit exposure, which is difficult to evaluate analytically, by applying the replica exchange Monte Carlo method to numerically calculate the loss distribution. To analyze the characteristics of the loss distribution for credit portfolios with heterogeneous credit exposures, we apply this model to various credit portfolios and evaluate credit risk. As a result, we show that the tail of the loss distribution calculated by this model has characteristics that are different from the tail of the loss distribution of the standard models used in credit risk modeling. We also show that there is a possibility of different evaluations of credit risk according to the pattern of heterogeneity.
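The core construction can be sketched compactly. The code below is a minimal stand-in for the model above: a plain Metropolis chain on a finite, fully connected Ising model with log-normally distributed exposures, where the paper uses replica exchange Monte Carlo and rating/industry sectors. The coupling, field, and exposure parameters are illustrative, not values from the paper.

```python
import math
import random

def loss_distribution(n_obligors=50, coupling=0.02, field=-2.0,
                      n_sweeps=2000, seed=1):
    """Metropolis sampling of a finite-size, fully connected (long-range)
    Ising model in which spin +1 marks a defaulted obligor. Heterogeneous
    credit exposures are drawn log-normally; the portfolio loss is the
    sum of exposures of defaulted obligors."""
    rng = random.Random(seed)
    exposures = [rng.lognormvariate(0.0, 1.0) for _ in range(n_obligors)]
    s = [-1] * n_obligors                     # start with no defaults
    m = sum(s)                                # total magnetization
    losses = []
    for sweep in range(n_sweeps):
        for i in range(n_obligors):
            # energy change of flipping spin i under mean-field coupling
            dE = 2 * s[i] * (coupling * (m - s[i]) + field)
            if dE <= 0 or rng.random() < math.exp(-dE):
                m -= 2 * s[i]
                s[i] = -s[i]
        if sweep >= n_sweeps // 2:            # discard burn-in
            losses.append(sum(e for e, si in zip(exposures, s) if si == 1))
    return losses
```

Collecting the sampled losses into a histogram yields the loss distribution whose tail behavior the paper analyzes; replica exchange would simply run several such chains at different temperatures and swap configurations to improve mixing.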
Interstitial fluid flow and drug delivery in vascularized tumors: a computational model.
Welter, Michael; Rieger, Heiko
2013-01-01
Interstitial fluid is a solution that bathes and surrounds the human cells and provides them with nutrients and a way of waste removal. It is generally believed that elevated tumor interstitial fluid pressure (IFP) is partly responsible for the poor penetration and distribution of therapeutic agents in solid tumors, but the complex interplay of extravasation, permeabilities, vascular heterogeneities, and diffusive and convective drug transport remains poorly understood. Here we use a theoretical model to consider the tumor IFP, interstitial fluid flow (IFF), and their impact upon drug delivery within the tumor, depending on biophysical determinants such as vessel network morphology, permeabilities, and diffusive vs. convective transport. We developed a vascular tumor growth model, including vessel co-option, regression, and angiogenesis, that we extend here by the interstitium (represented by a porous medium obeying Darcy's law) and by sources (vessels) and sinks (lymphatics) for IFF. With it we compute the spatial variation of the IFP and IFF and determine their correlation with the vascular network morphology and physiological parameters like vessel wall permeability, tissue conductivity, and the distribution of lymphatics. We find that an increased vascular wall conductivity together with a reduction of lymph function leads to increased tumor IFP, but also that the latter does not necessarily imply a decreased extravasation rate: generally, the IF flow rate is positively correlated with the various conductivities in the system. The IFF field is then used to determine the drug distribution after an injection, via a convection-diffusion-reaction equation for intra- and extracellular concentrations with parameters guided by experimental data for the drug doxorubicin. We observe that the interplay of convective and diffusive drug transport can lead to quite unexpected effects in the presence of a heterogeneous, compartmentalized vasculature. Finally, we discuss various strategies to increase the drug exposure time of tumor cells.
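The qualitative mechanism, vessel sources everywhere but functional lymphatic sinks only in normal tissue, can be reproduced in one dimension. The sketch below solves -K p'' = q_v (p_v - p) - q_l p by Gauss-Seidel with p = 0 at both ends; all coefficients are illustrative, not the paper's fitted values, and the real model couples this to a 3-D vessel network.

```python
def tumor_ifp_1d(n=101, length=1.0, K=1.0, q_vessel=5000.0, p_vessel=1.0,
                 q_lymph=5000.0, n_iter=2000):
    """1-D sketch of interstitial fluid pressure (IFP): Darcy flow through
    a porous interstitium with vessel sources everywhere but lymphatic
    sinks only in normal tissue (left half = tumor, right half = normal).
    Returns the steady pressure profile."""
    dx = length / (n - 1)
    p = [0.0] * n
    for _ in range(n_iter):
        for i in range(1, n - 1):
            ql = 0.0 if i < n // 2 else q_lymph   # lymphatics absent in tumor
            diag = 2.0 * K / dx ** 2 + q_vessel + ql
            rhs = K * (p[i - 1] + p[i + 1]) / dx ** 2 + q_vessel * p_vessel
            p[i] = rhs / diag
    return p
```

In this toy setting the tumor half plateaus near the vascular pressure while the normal half is drained to a lower level, which is the elevated-tumor-IFP effect described above.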
NASA Astrophysics Data System (ADS)
Voutilainen, Mikko; Kekäläinen, Pekka; Siitari-Kauppi, Marja; Sardini, Paul; Muuri, Eveliina; Timonen, Jussi; Martin, Andrew
2017-11-01
Transport and retardation of cesium in Grimsel granodiorite, taking into account the heterogeneity of the mineral and pore structure, was studied using rock samples overcored from an in situ diffusion test at the Grimsel Test Site. The field test was part of the Long-Term Diffusion (LTD) project designed to characterize retardation properties (diffusion and distribution coefficients) under in situ conditions. Results of the LTD experiment for cesium showed that in-diffusion profiles and spatial concentration distributions were strongly influenced by the heterogeneous pore structure and mineral distribution. In order to study the effect of heterogeneity on the in-diffusion profile and spatial concentration distribution, a Time Domain Random Walk (TDRW) method was applied along with a feature for modeling chemical sorption in geological materials. A heterogeneous mineral structure of Grimsel granodiorite was constructed using X-ray microcomputed tomography (X-μCT), and the map was linked to previous results for mineral-specific porosities and distribution coefficients (Kd) that were determined using C-14-PMMA autoradiography and batch sorption experiments, respectively. The resulting heterogeneous structure contains information on local porosity and Kd in 3-D. It was found that the heterogeneity of the mineral structure on the micrometer scale significantly affects the diffusion and sorption of cesium in Grimsel granodiorite at the centimeter scale. Furthermore, the modeled in-diffusion profiles and spatial concentration distributions show shapes and patterns similar to those from the LTD experiment. It was concluded that the use of detailed structure characterization and quantitative data on heterogeneity can significantly improve the interpretation and evaluation of transport experiments.
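The TDRW idea, hop between cells instantly but spend a random residence time in each cell scaled by the local retardation, admits a compact 1-D sketch. The per-cell retardation draw below is a crude stand-in for a porosity/Kd map from X-μCT and autoradiography; all parameter values are illustrative.

```python
import random

def tdrw_profile(n_particles=2000, n_cells=60, t_max=500.0, D=1.0,
                 dx=1.0, seed=3):
    """Time Domain Random Walk sketch: particles enter at cell 0 of a 1-D
    lattice and hop to neighbors; the mean residence time per cell is
    scaled by a local retardation factor R, drawn per cell to mimic
    sorbing vs. non-sorbing minerals. Returns the particle count per cell
    at t_max (the in-diffusion profile)."""
    rng = random.Random(seed)
    R = [1.0 + rng.choice([0.0, 20.0]) for _ in range(n_cells)]
    tau0 = dx * dx / (2.0 * D)                # unretarded mean hop time
    profile = [0] * n_cells
    for _ in range(n_particles):
        i, t = 0, 0.0
        while True:
            t += rng.expovariate(1.0 / (tau0 * R[i]))  # retarded residence
            if t >= t_max:
                break
            i += rng.choice([-1, 1])
            i = min(max(i, 0), n_cells - 1)   # reflecting boundaries
        profile[i] += 1
    return profile
```

Because sorbing cells hold particles longer, the resulting profile is steeper and more irregular than for a homogeneous medium, which is the qualitative effect the study quantifies in 3-D.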
Time-resolved Sensing of Meso-scale Shock Compression with Multilayer Photonic Crystal Structures
NASA Astrophysics Data System (ADS)
Scripka, David; Lee, Gyuhyon; Summers, Christopher J.; Thadhani, Naresh
2017-06-01
Multilayer Photonic Crystal structures can provide spatially and temporally resolved data needed to validate theoretical and computational models relevant for understanding shock compression in heterogeneous materials. Two classes of 1-D photonic crystal multilayer structures were studied: optical microcavities (OMC) and distributed Bragg reflectors (DBR). These 0.5 to 5 micron thick structures were composed of SiO2, Al2O3, Ag, and PMMA layers fabricated primarily via e-beam evaporation. The multilayers have unique spectral signatures inherently linked to their time-resolved physical states. By observing shock-induced changes in these signatures, an optically based pressure sensor was developed. Results to date indicate that both OMCs and DBRs exhibit nanosecond-resolved spectral shifts of several to tens of nanometers under laser-driven shock compression loads of 0-10 GPa, with the magnitude of the shift strongly correlating with the shock load magnitude. Additionally, spatially and temporally resolved spectral shifts under heterogeneous laser-driven shock compression created by partial beam blocking have been successfully demonstrated. These results illustrate the potential for multilayer structures to serve as meso-scale sensors, capturing temporal and spatial pressure profile evolutions in shock-compressed heterogeneous materials, and revealing meso-scale pressure distributions across a shocked surface. Supported by DTRA Grant HDTRA1-12-1-005 and DoD, AFOSR, National Defense Science and Eng. Graduate Fellowship, 32 CFR 168a.
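The first-order physics behind the DBR sensor is the Bragg condition: compressing the layer thicknesses blueshifts the reflection peak. The sketch below illustrates this with illustrative indices and thicknesses; it ignores the pressure dependence of the refractive indices, which a real calibration would include.

```python
def bragg_wavelength_nm(n1, d1_nm, n2, d2_nm):
    """First-order Bragg condition for a two-material multilayer period:
    lambda_B = 2 * (n1*d1 + n2*d2), with thicknesses in nm."""
    return 2.0 * (n1 * d1_nm + n2 * d2_nm)

def shocked_shift_nm(n1, d1_nm, n2, d2_nm, strain):
    """Blueshift of the Bragg peak when a compressive strain reduces both
    layer thicknesses (index changes neglected)."""
    before = bragg_wavelength_nm(n1, d1_nm, n2, d2_nm)
    after = bragg_wavelength_nm(n1, d1_nm * (1.0 - strain),
                                n2, d2_nm * (1.0 - strain))
    return before - after
```

For an SiO2/Al2O3-like period (n of roughly 1.45 and 1.65, thicknesses near 100 nm), a 1% compressive strain already shifts the peak by several nanometers, consistent with the several-to-tens-of-nanometers shifts reported above for GPa-level loads.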
NASA Astrophysics Data System (ADS)
Hadjidoukas, P. E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.
2015-03-01
We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.
Multinomial mixture model with heterogeneous classification probabilities
Holland, M.D.; Gray, B.R.
2011-01-01
Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial and correct-classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
NASA Astrophysics Data System (ADS)
Xie, Jibo; Li, Guoqing
2015-04-01
Earth observation (EO) data obtained by airborne or spaceborne sensors are characterized by heterogeneity and geographically distributed storage. These data sources belong to different organizations or agencies whose data management and storage methods differ widely, and each source provides its own publishing platform or portal. As more remote sensing sensors are used for EO missions, space agencies have accumulated massive, distributed EO data archives. This distribution of archives and the heterogeneity of the underlying systems make it difficult to use geospatial data efficiently for many EO applications, such as hazard mitigation. To solve the interoperability problems between different EO data systems, this paper introduces an advanced architecture for a distributed geospatial data infrastructure that addresses the complexity of integrating and processing distributed, heterogeneous EO data on demand. The concept and architecture of a geospatial data service gateway (GDSG) is proposed to connect heterogeneous EO data sources so that EO data can be retrieved and accessed through unified interfaces. The GDSG consists of a set of tools and services that encapsulate heterogeneous geospatial data sources into homogeneous service modules. The GDSG modules include EO metadata harvesters and translators, adaptors for different types of data systems, unified data query and access interfaces, EO data cache management, and a gateway GUI. The GDSG framework is used to implement interoperability and synchronization between distributed EO data sources with heterogeneous architectures. An on-demand distributed EO data platform was developed to validate the GDSG architecture and implementation techniques, and several distributed EO data archives were used for testing. Flood and earthquake response serve as two scenarios for the use cases of distributed EO data integration and interoperability.
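The gateway pattern described above, adapters that translate a unified query into source-specific calls and normalize the results, can be sketched as follows. Class names, fields, and the two toy sources are hypothetical illustrations, not the GDSG's actual interfaces.

```python
class DataSourceAdapter:
    """Adapter contract: translate a unified (bbox, time-range) query
    into a source-specific call and return normalized records."""
    def search(self, bbox, start, end):
        raise NotImplementedError

class SourceA(DataSourceAdapter):
    def search(self, bbox, start, end):
        # stand-in for wrapping Source A's native catalog API
        return [{"id": "A-1", "bbox": bbox, "time": start, "source": "A"}]

class SourceB(DataSourceAdapter):
    def search(self, bbox, start, end):
        # stand-in for a second source with a different native interface
        return [{"id": "B-9", "bbox": bbox, "time": end, "source": "B"}]

class Gateway:
    """Fans a unified query out to all registered adapters and merges
    the normalized records into one result list."""
    def __init__(self, adapters):
        self.adapters = adapters

    def search(self, bbox, start, end):
        out = []
        for adapter in self.adapters:
            out.extend(adapter.search(bbox, start, end))
        return out
```

Adding a new archive then means writing one adapter, while every client keeps using the same unified query interface.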
Dähring, H; Grandke, J; Teichgräber, U; Hilger, I
2015-12-01
Heterogeneous magnetic nanoparticle (MNP) distributions within tumors can cause regions of temperature underdosage and reduce the therapeutic efficiency. Here, micro-computed tomography (micro-CT) imaging was used as a tool to determine the MNP distribution in vivo. The therapeutic success was evaluated based on tumor volume and temperature distribution. Tumor-bearing mice were intratumorally injected with iron oxide particles. MNP distribution was assessed by micro-CT with a low radiation dose protocol. MNPs were clearly visible, and the exact distribution to nontumor structures was detected by micro-CT. Knowledge of the intratumoral MNP distribution allowed the generation of higher temperatures within the tumor and led to higher temperature values after exposure to an alternating magnetic field (AMF). Consequently, the tumor size after 28 days was reduced to 14 and 73 % of the initial tumor volume for the MNP/AMF/CT and MNP/AMF groups, respectively. The MNP distribution pattern mainly governed the generated temperature spots in the tumor. Knowing the MNP distribution enabled individualized hyperthermia treatment and improved the overall therapeutic efficiency.
State-of-the-art in Heterogeneous Computing
Brodtkorb, Andre R.; Dyken, Christopher; Hagen, Trond R.; ...
2010-01-01
Node level heterogeneous architectures have become attractive during the last decade for several reasons: compared to traditional symmetric CPUs, they offer high peak performance and are energy and/or cost efficient. With the increase of fine-grained parallelism in high-performance computing, as well as the introduction of parallelism in workstations, there is an acute need for a good overview and understanding of these architectures. We give an overview of the state-of-the-art in heterogeneous computing, focusing on three commonly found architectures: the Cell Broadband Engine Architecture, graphics processing units (GPUs), and field programmable gate arrays (FPGAs). We present a review of hardware, available software tools, and an overview of state-of-the-art techniques and algorithms. Furthermore, we present a qualitative and quantitative comparison of the architectures, and give our view on the future of heterogeneous computing.
Linking Microstructural Changes to Bulk Behavior in Sheared Disordered Matter
NASA Astrophysics Data System (ADS)
Blair, Daniel
Soft and biological materials often exhibit disordered and heterogeneous microstructure. In most cases, the transmission and distribution of stresses through these complex materials reflects their inherent heterogeneity. Through the combination of rheology and 4D imaging we can directly alter and quantify the connection between microstructure and local stresses. We subject soft and biological materials to precise shear deformations while measuring real-space information about the distribution and redistribution of the applied stress. In this talk, I will focus on the flow behavior of two distinct but related disordered materials: a flowing compressed emulsion above its yield stress and a strained collagen network. In the emulsion system, I will present experimental and computational results on the dynamical response, at the level of individual droplets, that directly link the particle motion and deformation to the rheology. I will also present results that utilize boundary stress microscopy to quantify the spatial distribution of surface stresses that arise from sheared in-vitro collagen networks. I will outline our main conclusion, which is that the strain stiffening behavior observed in collagen networks can be parameterized by a single characteristic strain and associated stress. This characteristic rheological signature seems to describe both the strain stiffening regime and network yielding. NSF DMR: 0847490.
Anomalous dispersion in correlated porous media: a coupled continuous time random walk approach
NASA Astrophysics Data System (ADS)
Comolli, Alessandro; Dentz, Marco
2017-09-01
We study the causes of anomalous dispersion in Darcy-scale porous media characterized by spatially heterogeneous hydraulic properties. Spatial variability in hydraulic conductivity leads to spatial variability in the flow properties through Darcy's law and thus impacts on solute and particle transport. We consider purely advective transport in heterogeneity scenarios characterized by broad distributions of heterogeneity length scales and point values. Particle transport is characterized in terms of the stochastic properties of equidistantly sampled Lagrangian velocities, which are determined by the flow and conductivity statistics. The persistence length scales of flow and transport velocities are imprinted in the spatial disorder and reflect the distribution of heterogeneity length scales. Particle transitions over the velocity length scales are kinematically coupled with the transition time through velocity. We show that the average particle motion follows a coupled continuous time random walk (CTRW), which is fully parameterized by the distribution of flow velocities and the medium geometry in terms of the heterogeneity length scales. The coupled CTRW provides a systematic framework for the investigation of the origins of anomalous dispersion in terms of heterogeneity correlation and the distribution of conductivity point values. We derive analytical expressions for the asymptotic scaling of the moments of the spatial particle distribution and first arrival time distribution (FATD), and perform numerical particle tracking simulations of the coupled CTRW to capture the full average transport behavior. Broad distributions of heterogeneity point values and lengths scales may lead to very similar dispersion behaviors in terms of the spatial variance. 
Their mechanisms, however, are very different, and this manifests in the distributions of particle positions and arrival times, which play a central role in predicting the fate of dissolved substances in heterogeneous natural and engineered porous materials. Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
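The defining feature of a coupled CTRW, that each transition covers a heterogeneity length at a velocity drawn from a broad distribution so that the transition time t = l/v couples space and time, can be simulated in a few lines. The distributions and parameters below are illustrative choices, not those derived in the paper.

```python
import random

def coupled_ctrw(n_particles=1000, n_steps=100, alpha=0.8, seed=7):
    """Coupled CTRW sketch: each transition covers a heterogeneity length
    ell (exponentially distributed) at a velocity v drawn from a broad
    power-law distribution with abundant low values, P(v) ~ v^(alpha-1)
    on (0, 1]; the transition time t = ell / v kinematically couples the
    space and time increments. Returns final positions and times."""
    rng = random.Random(seed)
    positions, arrival_times = [], []
    for _ in range(n_particles):
        x = t = 0.0
        for _ in range(n_steps):
            ell = rng.expovariate(1.0)                 # heterogeneity length
            v = (1.0 - rng.random()) ** (1.0 / alpha)  # inverse-CDF draw in (0, 1]
            x += ell
            t += ell / v                               # coupled space-time step
        positions.append(x)
        arrival_times.append(t)
    return positions, arrival_times
```

For alpha < 1 the mean transition time diverges, so the arrival-time distribution develops the heavy tail responsible for anomalous (non-Fickian) dispersion; histogramming the returned positions and times gives toy analogues of the spatial particle distribution and FATD discussed above.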
Maltese, Matthew R; Margulies, Susan S
2016-11-01
The finite element (FE) brain model is used increasingly as a design tool for developing technology to mitigate traumatic brain injury. We developed an ultra-high-definition FE brain model (>4 million elements) from CT and MRI scans of a 2-month-old pre-adolescent piglet brain, and simulated rapid head rotations. Strain distributions in the thalamus, corona radiata, corpus callosum, cerebral cortex gray matter, brainstem and cerebellum were evaluated to determine the influence of employing homogeneous brain moduli, or distinct experimentally derived gray and white matter property representations, where some white matter regions are stiffer and others less stiff than gray matter. We find that constitutive heterogeneity significantly lowers white matter deformations in all regions compared with homogeneous properties, and should be incorporated in FE model injury prediction.
A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data
Lianas, Luca; Frexia, Francesca; Zanetti, Gianluigi
2016-01-01
This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR’s formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called “Constant Load” and “Constant Number of Records”, with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes. PMID:27936191
Average is Boring: How Similarity Kills a Meme's Success
NASA Astrophysics Data System (ADS)
Coscia, Michele
2014-09-01
Every day we are exposed to different ideas, or memes, competing with each other for our attention. Previous research explained popularity and persistence heterogeneity of memes by assuming them in competition for limited attention resources, distributed in a heterogeneous social network. Little has been said about what characteristics make a specific meme more likely to be successful. We propose a similarity-based explanation: memes with higher similarity to other memes have a significant disadvantage in their potential popularity. We employ a meme similarity measure based on semantic text analysis and computer vision to prove that a meme is more likely to be successful and to thrive if its characteristics make it unique. Our results show that indeed successful memes are located in the periphery of the meme similarity space and that our similarity measure is a promising predictor of a meme's success.
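The notion of "periphery of the similarity space" can be made concrete with a toy measure: score each meme by its mean dissimilarity to all others. The bag-of-words cosine below is a crude stand-in for the paper's semantic-text and computer-vision similarity; it only illustrates the geometry of the argument.

```python
import math
from collections import Counter

def cosine(a, b):
    """Bag-of-words cosine similarity between two text snippets."""
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def peripherality(memes):
    """Mean dissimilarity of each meme to all the others: memes in the
    periphery of the similarity space score close to 1, near-duplicates
    of other memes score close to 0."""
    return {m: 1.0 - sum(cosine(m, o) for o in memes if o != m)
                   / (len(memes) - 1)
            for m in memes}
```

Under the paper's hypothesis, memes with high peripherality scores are the ones with a popularity advantage, since they compete less directly for the same attention.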
NASA Astrophysics Data System (ADS)
Mathieu, Jean-Philippe; Inal, Karim; Berveiller, Sophie; Diard, Olivier
2010-11-01
Local approach to brittle fracture for low-alloyed steels is discussed in this paper. A bibliographical introduction highlights general trends and consensual points of the topic and evokes debatable aspects. French RPV steel 16MND5 (equivalent to ASTM A508 Cl.3) is then used as a model material to study the influence of temperature on brittle fracture. A micromechanical modelling of brittle fracture at the elementary volume scale, already used in previous work, is then recalled. It involves a multiscale modelling of microstructural plasticity which has been tuned on experimental inter-phase and inter-granular stress heterogeneity measurements. The fracture probability of the elementary volume can then be computed using a randomly attributed defect size distribution based on a realistic carbide repartition. This defect distribution is then deterministically correlated to stress heterogeneities simulated within the microstructure using a weakest-link hypothesis on the elementary volume, which results in a deterministic stress to fracture. Repeating the process allows the Weibull parameters to be computed on the elementary volume. This tool is then used to investigate the physical mechanisms that could explain the experimentally observed temperature dependence of Beremin's parameters for 16MND5 steel. It is shown that, assuming the hypotheses made in this work about cleavage micro-mechanisms are correct, the effective equivalent surface energy (i.e., the surface energy plus the energy dissipated plastically when blunting the crack tip) for propagating a crack has to be temperature dependent to explain the temperature evolution of Beremin's parameters.
Costa - Introduction to 2015 Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costa, James E.
Just as Sandia National Laboratories has two major locations (NM and CA), along with a number of smaller facilities across the nation, so too are its scientific, engineering, and computing resources distributed. As part of Sandia's Institutional Computing Program, CA site-based Sandia computer scientists and engineers have been providing mission and research staff with local CA-resident expertise on computing options while also focusing on two growing high performance computing research problems. The first is how to increase system resilience to failure as machines grow larger, more complex, and heterogeneous. The second is how to ensure that computer hardware and configurations are optimized for specialized data-analytical mission needs within the overall Sandia computing environment, including the HPC subenvironment. All of these activities support the larger Sandia effort in accelerating the development and integration of high performance computing into national security missions. Sandia continues both to promote national R&D objectives, including the recent Presidential Executive Order establishing the National Strategic Computing Initiative, and to work to ensure that the full range of computing services and capabilities is available for all mission responsibilities, from national security to energy to homeland defense.
Evidence for minimal oxygen heterogeneity in the healthy human pulmonary acinus
Tawhai, Merryn H.
2011-01-01
It has been suggested that the human pulmonary acinus operates at submaximal efficiency at rest due to substantial spatial heterogeneity in the oxygen partial pressure (Po2) in alveolar air within the acinus. Indirect measurements of alveolar air Po2 could theoretically mask significant heterogeneity if intra-acinar perfusion is well matched to Po2. To investigate the extent of intra-acinar heterogeneity, we developed a computational model with anatomically based structure and biophysically based equations for gas exchange. This model yields a quantitative prediction of the intra-acinar O2 distribution that cannot be measured directly. Temporal and spatial variations in Po2 in the intra-acinar air and blood are predicted with the model. The model, representative of a single average acinus, has an asymmetric multibranching respiratory airways geometry coupled to a symmetric branching conducting airways geometry. Advective and diffusive O2 transport through the airways and gas exchange into the capillary blood are incorporated. The gas exchange component of the model includes diffusion across the alveolar air-blood membrane and O2-hemoglobin binding. Contrary to previous modeling studies, simulations show that the acinus functions extremely effectively at rest, with only a small degree of intra-acinar Po2 heterogeneity. All regions of the model acinus, including the peripheral generations, maintain a Po2 >100 mmHg. Heterogeneity increases slightly when the acinus is stressed by exercise. However, even during exercise the acinus retains a reasonably homogeneous gas phase. PMID:21071589
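The balance at play, diffusive O2 supply along the acinar pathway against uptake into capillary blood, can be caricatured in one dimension. The sketch below is not the paper's anatomically based, asymmetrically branching model; all values are illustrative and in arbitrary units.

```python
def acinar_po2(n=50, po2_inlet=150.0, D=5.0, uptake=0.1, dx=1.0,
               n_iter=20000):
    """Steady-state 1-D caricature of intra-acinar O2: diffusion along an
    acinar pathway from the entrance with a uniform uptake sink into the
    capillary blood, D * p'' = uptake, with p fixed at the inlet and a
    zero-flux distal end. Solved by Gauss-Seidel relaxation."""
    p = [po2_inlet] * n
    sink = uptake * dx * dx / (2.0 * D)
    for _ in range(n_iter):
        for i in range(1, n):
            right = p[i + 1] if i + 1 < n else p[i - 1]  # mirror distal end
            p[i] = 0.5 * (p[i - 1] + right) - sink
    return p
```

With supply-dominated parameters the profile sags only mildly toward the distal end, a 1-D analogue of the paper's finding that even peripheral generations maintain a high Po2 at rest; raising the uptake (exercise) deepens the gradient.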
A Virtual Science Data Environment for Carbon Dioxide Observations
NASA Astrophysics Data System (ADS)
Verma, R.; Goodale, C. E.; Hart, A. F.; Law, E.; Crichton, D. J.; Mattmann, C. A.; Gunson, M. R.; Braverman, A. J.; Nguyen, H. M.; Eldering, A.; Castano, R.; Osterman, G. B.
2011-12-01
Climate science data are often distributed cross-institutionally and made available using heterogeneous interfaces. With respect to observational carbon-dioxide (CO2) records, these data span across national as well as international institutions and are typically distributed using a variety of data standards. Such an arrangement can yield challenges from a research perspective, as users often need to independently aggregate datasets as well as address the issue of data quality. To tackle this dispersion and heterogeneity of data, we have developed the CO2 Virtual Science Data Environment - a comprehensive approach to virtually integrating CO2 data and metadata from multiple missions and providing a suite of computational services that facilitate analysis, comparison, and transformation of that data. The Virtual Science Environment provides climate scientists with a unified web-based destination for discovering relevant observational data in context, and supports a growing range of online tools and services for analyzing and transforming the available data to suit individual research needs. It includes web-based tools to geographically and interactively search for CO2 observations collected from multiple airborne, space, as well as terrestrial platforms. Moreover, the data analysis services it provides over the Internet, including offering techniques such as bias estimation and spatial re-gridding, move computation closer to the data and reduce the complexity of performing these operations repeatedly and at scale. The key to enabling these services, as well as consolidating the disparate data into a unified resource, has been to focus on leveraging metadata descriptors as the foundation of our data environment. This metadata-centric architecture, which leverages the Dublin Core standard, forgoes the need to replicate remote datasets locally. 
Instead, the system relies upon an extensive, metadata-rich virtual data catalog allowing on-demand browsing and retrieval of CO2 records from multiple missions. In other words, key metadata information about remote CO2 records is stored locally while the data itself is preserved at its respective archive of origin. This strategy has been made possible by our method of encapsulating the heterogeneous sources of data using a common set of web-based services, including services provided by Jet Propulsion Laboratory's Climate Data Exchange (CDX). Furthermore, this strategy has enabled us to scale across missions, and to provide access to a broad array of CO2 observational data. Coupled with on-demand computational services and an intuitive web-portal interface, the CO2 Virtual Science Data Environment effectively transforms heterogeneous CO2 records from multiple sources into a unified resource for scientific discovery.
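The metadata-centric pattern described above, local Dublin Core-style records pointing at data that stays in its archive of origin, can be sketched minimally. Field names follow Dublin Core; the record identifiers and the fetch callback are hypothetical stand-ins for real archive identifiers and a data-exchange service.

```python
class VirtualCatalog:
    """Metadata-centric catalog sketch: Dublin Core-style records are
    stored locally, while the data itself stays at its archive of origin
    and is fetched only on demand via a user-supplied callback."""
    def __init__(self, fetch):
        self.records = []
        self.fetch = fetch              # callable(identifier) -> data

    def register(self, identifier, title, coverage, date, source):
        """Harvest one record's metadata into the local virtual catalog."""
        self.records.append({"identifier": identifier, "title": title,
                             "coverage": coverage, "date": date,
                             "source": source})

    def search(self, coverage=None):
        """Browse the catalog by metadata only; no remote access needed."""
        return [r for r in self.records
                if coverage is None or r["coverage"] == coverage]

    def retrieve(self, identifier):
        """On-demand retrieval from the origin archive."""
        return self.fetch(identifier)
```

Because searches touch only the local metadata, the catalog scales across missions without replicating remote datasets, which is the design rationale stated above.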
Radiation efficiency during slow crack propagation: an experimental study.
NASA Astrophysics Data System (ADS)
Jestin, Camille; Lengliné, Olivier; Schmittbuhl, Jean
2017-04-01
Creeping faults are known to host significant aseismic deformation. However, observations of micro-earthquake activity on creeping faults (e.g. the San Andreas Fault, the North Anatolian Fault) suggest strong lateral variability in the partitioning of energy between radiated and fracture energies. The ratio of seismic to aseismic slip is difficult to image over time and at depth because of observational limitations (spatial resolution, sufficiently broadband instruments, etc.). In this study, we aim to capture in great detail the energy partitioning during the slow propagation of a mode I fracture along a heterogeneous interface whose toughness varies strongly in space. We conducted laboratory-scale experiments on a rock analog material (PMMA) that enable precise monitoring of fracture pinning and depinning on local asperities in the brittle-creep regime. Optical imaging through the transparent material allows a high-resolution description of the fracture front position and velocity during propagation. At the same time, acoustic emissions are measured by accelerometers positioned around the rupture. Combining the acoustic records, the measurements of the crack front position, and the loading curve, we compute the total radiated energy and the fracture energy. From these we deduce the radiation efficiency, ηR, which characterizes the proportion of the available energy that is radiated in the form of seismic waves. We show that ηR increases with the crack rupture speed computed for each of our experiments in the sub-critical crack propagation domain. Our experimental estimates of ηR are larger than those of the theoretical model proposed by Freund, which states that the radiation efficiency of a crack propagating in a homogeneous medium is proportional to the crack velocity.
Our results agree with existing studies showing that the distribution of crack front velocities in a heterogeneous medium is well described by a power-law decay above the average fracture front speed, ⟨v⟩, which establishes a relation of the type ηR ∝ ⟨v⟩^0.55. These observations suggest that the radiation efficiency in heterogeneous media follows a power law with a lower exponent than the one predicted for a homogeneous medium, and is sensitive to the shape of the velocity distribution along the heterogeneous interface. Finally, for similar events observed in natural conditions, such as seismic swarms associated with slow slip along a fault, we find good agreement between our results and the radiation efficiency computed from these field data.
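The contrast between the two scalings can be illustrated numerically. A minimal sketch (the constants and variable names are mine, not the study's; speeds are normalized to an arbitrary reference):

```python
import numpy as np

# Freund's homogeneous-medium model predicts eta_R proportional to crack speed v,
# while the heterogeneous-interface result above scales as <v>**0.55.
v = np.logspace(-3, 0, 50)        # normalized crack speeds v / v_ref (assumed units)
eta_homogeneous = v               # eta_R ~ v (linear in speed, Freund)
eta_heterogeneous = v ** 0.55     # eta_R ~ v**0.55 (lower power-law exponent)

# In the slow (sub-critical) regime v < 1, the lower exponent implies a larger
# radiation efficiency than the homogeneous prediction, consistent with the
# experimental observation that measured eta_R exceeds Freund's model:
slow = v < 1
assert np.all(eta_heterogeneous[slow] > eta_homogeneous[slow])
```

For slow ruptures the two models diverge by orders of magnitude, which is why the exponent matters for interpreting creeping-fault seismicity.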
Vistica, Jennifer; Dam, Julie; Balbo, Andrea; Yikilmaz, Emine; Mariuzza, Roy A; Rouault, Tracey A; Schuck, Peter
2004-03-15
Sedimentation equilibrium is a powerful tool for the characterization of protein self-association and heterogeneous protein interactions. Frequently, it is applied in a configuration with relatively long solution columns and with equilibrium profiles being acquired sequentially at several rotor speeds. The present study proposes computational tools, implemented in the software SEDPHAT, for the global analysis of equilibrium data at multiple rotor speeds with multiple concentrations and multiple optical detection methods. The detailed global modeling of such equilibrium data can be a nontrivial computational problem. It was shown previously that mass conservation constraints can significantly improve and extend the analysis of heterogeneous protein interactions. Here, a method for using conservation of mass constraints for the macromolecular redistribution is proposed in which the effective loading concentrations are calculated from the sedimentation equilibrium profiles. The approach is similar to that described by Roark (Biophys. Chem. 5 (1976) 185-196), but its utility is extended by determining the bottom position of the solution columns from the macromolecular redistribution. For analyzing heterogeneous associations at multiple protein concentrations, additional constraints that relate the effective loading concentrations of the different components or their molar ratio in the global analysis are introduced. Equilibrium profiles at multiple rotor speeds also permit the algebraic determination of radial-dependent baseline profiles, which can govern interference optical ultracentrifugation data, but usually also occur, to a smaller extent, in absorbance optical data. Finally, the global analysis of equilibrium profiles at multiple rotor speeds with implicit mass conservation and computation of the bottom of the solution column provides an unbiased scale for determining molar mass distributions of noninteracting species. 
The properties of these tools are studied with theoretical and experimental data sets.
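The mass-conservation constraint described above can be sketched in a few lines. In a sector-shaped centerpiece the volume element grows linearly with radius, so the effective loading concentration is the radially weighted average of the equilibrium profile c(r) between meniscus and bottom. This illustrates the principle only, not SEDPHAT's implementation; the geometry and numbers are assumed:

```python
import numpy as np

def effective_loading_concentration(r, c):
    """Loading concentration implied by mass conservation in a sector-shaped cell:
    c0 = 2 * integral(c(r) * r dr) / (r_bottom**2 - r_meniscus**2)."""
    f = c * r
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))  # trapezoidal rule
    return 2.0 * integral / (r[-1] ** 2 - r[0] ** 2)

# Sanity check: a uniform equilibrium profile must return its own concentration.
r = np.linspace(6.0, 7.2, 200)   # radii in cm (typical analytical-cell range, assumed)
c0 = effective_loading_concentration(r, np.full_like(r, 0.5))
assert np.isclose(c0, 0.5)
```

In practice the profile c(r) is an exponential of the buoyant molar mass, and the bottom radius itself is refined as a fit parameter, as the abstract describes.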
Nakamura, Ryoji; Kachi, N; Suzuki, J-I
2010-05-01
We investigated the growth of, and soil exploration by, Lolium perenne in a heterogeneous environment before its roots reached a nutrient-rich patch. Temporal changes in the distribution of inorganic nitrogen, i.e., NO₃⁻-N and NH₄⁺-N, in the heterogeneous environment during the experimental period were also examined. The results showed that roots explored the soil randomly, irrespective of the patchy distribution of inorganic nitrogen and of differences in the chemical composition of inorganic nitrogen between heterogeneous and homogeneous environments. We also elucidated the potential effects of patch duration and inorganic nitrogen distribution on soil exploration by roots and thus on plant growth.
Porting AMG2013 to Heterogeneous CPU+GPU Nodes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samfass, Philipp
LLNL's future advanced technology system SIERRA will feature heterogeneous compute nodes that consist of IBM POWER9 CPUs and NVIDIA Volta GPUs. Conceptually, the motivation for such an architecture is straightforward: while GPUs are optimized for throughput on massively parallel workloads, CPUs strive to minimize latency for largely sequential operations. Yet making optimal use of heterogeneous architectures raises new challenges for the development of scalable parallel software, e.g., with respect to work distribution. Porting LLNL's parallel numerical libraries to upcoming heterogeneous CPU+GPU architectures is therefore a critical factor in ensuring LLNL's future success in fulfilling its national mission. One of these libraries, called HYPRE, provides parallel solvers and preconditioners for large, sparse linear systems of equations. In the context of this internship project, I consider AMG2013, a proxy application for major parts of HYPRE that implements a benchmark for setting up and solving different systems of linear equations. In the following, I describe in detail how I ported multiple parts of AMG2013 to the GPU (Section 2) and present results for different experiments that demonstrate a successful parallel implementation on the heterogeneous machines surface and ray (Section 3). In Section 4, I give guidelines on how my code should be used. Finally, I conclude and give an outlook on future work (Section 5).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, Hongkyu
The purpose of the project was to perform multiscale characterization of low-permeability rocks to determine the effect of physical and chemical heterogeneity on the poromechanical and flow responses of shales and carbonate rocks spanning a broad range of physical and chemical heterogeneity. An integrated multiscale imaging of shale and carbonate rocks from the nanometer to centimeter scales includes dual focused ion beam-scanning electron microscopy (FIB-SEM), micro computed tomography (micro-CT), optical and confocal microscopy, and 2D and 3D energy dispersive spectroscopy (EDS). In addition, mineralogical mapping and backscattered imaging with nanoindentation testing advanced the quantitative evaluation of the relationship between material heterogeneity and mechanical behavior. The spatial distribution of compositional heterogeneity, anisotropic bedding patterns, and mechanical anisotropy were employed as inputs for brittle fracture simulations using a phase field model. Comparison of experiments and numerical simulations revealed that proper incorporation of additional material information, such as bedding layer thickness and other geometrical attributes of the microstructures, can improve the numerical prediction of mesoscale fracture patterns and hence the macroscopic effective toughness. Overall, a comprehensive framework for evaluating the relationship between mechanical response and micro-lithofacial features allows more accurate prediction of reservoir performance by developing a multiscale understanding of the poromechanical response to coupled chemical and mechanical interactions in subsurface energy-related activities.
Dosimetric comparison of Acuros XB, AAA, and XVMC in stereotactic body radiotherapy for lung cancer.
Tsuruta, Yusuke; Nakata, Manabu; Nakamura, Mitsuhiro; Matsuo, Yukinori; Higashimura, Kyoji; Monzen, Hajime; Mizowaki, Takashi; Hiraoka, Masahiro
2014-08-01
To compare the dosimetric performance of Acuros XB (AXB), the anisotropic analytical algorithm (AAA), and x-ray voxel Monte Carlo (XVMC) in heterogeneous phantoms and lung stereotactic body radiotherapy (SBRT) plans. Water- and lung-equivalent phantoms were combined to evaluate the percentage depth dose and dose profile. The Novalis treatment machine (BrainLab AG, Feldkirchen, Germany) with an x-ray beam energy of 6 MV was used to calculate the doses in the composite phantom at a source-to-surface distance of 100 cm with a gantry angle of 0°. Subsequently, the clinical lung SBRT plans for 26 consecutive patients were transferred from iPlan (ver. 4.1; BrainLab AG) to the Eclipse treatment planning system (ver. 11.0.3; Varian Medical Systems, Palo Alto, CA). The doses were then recalculated with AXB and AAA while maintaining the XVMC-calculated monitor units and beam arrangement. The dose-volumetric data obtained using the three dose calculation algorithms were then compared. The results from AXB and XVMC agreed with measurements within ±3.0% for the lung-equivalent phantom with a 6 × 6 cm² field size, whereas AAA values were higher than measurements in the heterogeneous zone and near the boundary, with the greatest difference being 4.1%. AXB and XVMC agreed well with measurements in terms of the profile shape at the boundary of the heterogeneous zone. For the lung SBRT plans, AXB yielded lower values than XVMC for the maximum doses of the ITV and PTV; however, the differences were within ±3.0%. In addition to the dose-volumetric data, the dose distribution analysis showed that AXB yielded dose distributions closer to those of XVMC than did AAA. Mean ± standard deviation of the computation time was 221.6 ± 53.1 s (range, 124-358 s), 66.1 ± 16.0 s (range, 42-94 s), and 6.7 ± 1.1 s (range, 5-9 s) for XVMC, AXB, and AAA, respectively.
In the phantom evaluations, AXB and XVMC agreed better with measurements than did AAA. AXB/XVMC and AAA calculations differed in density-changing zones (material boundaries). In the lung SBRT cases, a comparative analysis of dose-volumetric data and dose distributions demonstrated that AXB agreed better with XVMC than did AAA. The computation time of AXB was shorter than that of XVMC; therefore, AXB offers a better balance of dosimetric performance and computation speed for clinical use than XVMC.
Accelerating Subsurface Transport Simulation on Heterogeneous Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villa, Oreste; Gawande, Nitin A.; Tumeo, Antonino
Reactive transport numerical models simulate chemical and microbiological reactions that occur along a flowpath. These models have to compute reactions for a large number of locations. They solve the set of ordinary differential equations (ODEs) that describes the reactions at each location with the Newton-Raphson technique. This technique involves computing a Jacobian matrix and a residual vector for each set of equations, and then iteratively solving the linearized system by Gaussian elimination and LU decomposition until convergence. STOMP, a well-known subsurface flow simulation tool, employs matrices with sizes on the order of 100x100 elements and, for numerical accuracy, LU factorization with full pivoting instead of the faster partial pivoting. Modern high-performance computing systems are heterogeneous machines whose nodes integrate both CPUs and GPUs, exposing unprecedented amounts of parallelism. To exploit all of their computational power, applications must use both types of processing elements. For subsurface flow simulation, this mainly requires implementing efficient batched LU-based solvers and identifying efficient solutions for load balancing among the different processors of the system. In this paper we discuss two approaches that allow scaling STOMP's performance on heterogeneous clusters. We initially identify the challenges in implementing batched LU-based solvers for small matrices on GPUs, and propose an implementation that fulfills STOMP's requirements. We compare this implementation to other existing solutions. Then, we combine the batched GPU solver with an OpenMP-based CPU solver, and present an adaptive load balancer that dynamically distributes the linear systems to solve between the two components inside a node.
We show how these approaches, integrated into the full application, provide speed-ups of 6 to 7 times on large problems, executed on up to 16 nodes of a cluster with two AMD Opteron 6272 CPUs and one Tesla M2090 GPU per node.
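The batched small-matrix solve at the core of this approach can be sketched on the CPU in a few dozen lines. This is a plain-Python illustration of Gaussian elimination with full (row and column) pivoting applied independently to each system in a batch; the matrix size and batch size are chosen for brevity (the paper's systems are on the order of 100x100), and a GPU version would map each system to its own thread block:

```python
import numpy as np

def solve_full_pivot(A, b):
    """Solve A x = b by Gaussian elimination with full pivoting."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = A.shape[0]
    col_perm = np.arange(n)                    # tracks column swaps to undo at the end
    for k in range(n):
        # full pivoting: pick the largest remaining entry in the trailing submatrix
        sub = np.abs(A[k:, k:])
        i, j = np.unravel_index(np.argmax(sub), sub.shape)
        i += k; j += k
        A[[k, i]] = A[[i, k]]; b[[k, i]] = b[[i, k]]    # row swap
        A[:, [k, j]] = A[:, [j, k]]                     # column swap
        col_perm[[k, j]] = col_perm[[j, k]]
        for r in range(k + 1, n):                       # eliminate below the pivot
            m = A[r, k] / A[k, k]
            A[r, k:] -= m * A[k, k:]
            b[r] -= m * b[k]
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):                      # back substitution
        x[k] = (b[k] - A[k, k + 1:] @ x[k + 1:]) / A[k, k]
    out = np.zeros(n)
    out[col_perm] = x                                   # undo the column permutation
    return out

rng = np.random.default_rng(0)
batch_A = rng.standard_normal((8, 4, 4))    # a "batch" of 8 independent small systems
batch_b = rng.standard_normal((8, 4))
batch_x = np.array([solve_full_pivot(A, b) for A, b in zip(batch_A, batch_b)])
assert np.allclose(np.einsum('nij,nj->ni', batch_A, batch_x), batch_b)
```

Full pivoting searches the entire trailing submatrix for the pivot rather than just the current column, which is the extra cost STOMP accepts for numerical accuracy.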
WebGIS based on semantic grid model and web services
NASA Astrophysics Data System (ADS)
Zhang, WangFei; Yue, CaiRong; Gao, JianGuo
2009-10-01
As the combination point of network technology and GIS technology, WebGIS has developed rapidly in recent years. Constrained by the Web and by the characteristics of GIS, however, traditional WebGIS has some prominent problems: for example, it cannot achieve interoperability across heterogeneous spatial databases, nor cross-platform data access. With the appearance of Web Services and Grid technology, the field of WebGIS has changed greatly. Web Services provide an interface that gives different sites the ability to share data and intercommunicate. The goal of Grid technology is to turn the Internet into one large supercomputer with which the overall sharing of computing resources, storage resources, data resources, information resources, knowledge resources, and expert resources can be implemented efficiently. For WebGIS, however, this achieves only the physical connection of data and information, which is far from enough. Because of different understandings of the world, and because they follow different professional regulations, policies, and habits, experts in different fields reach different conclusions when observing the same geographic phenomenon, and semantic heterogeneity arises: the same concept can differ greatly between fields. If a WebGIS is built without considering semantic heterogeneity, it will answer users' questions wrongly, or not at all. To solve this problem, this paper puts forward and tests an effective method of combining the semantic grid and Web Services technology to develop WebGIS.
In this paper, we studied how to construct an ontology and how to combine Grid technology with Web Services, and, with a detailed analysis of the computing characteristics and application model of data distribution, we designed an ontology-driven WebGIS query system based on Grid technology and Web Services.
Almendro, Vanessa; Cheng, Yu-Kang; Randles, Amanda; Itzkovitz, Shalev; Marusyk, Andriy; Ametller, Elisabet; Gonzalez-Farre, Xavier; Muñoz, Montse; Russnes, Hege G; Helland, Aslaug; Rye, Inga H; Borresen-Dale, Anne-Lise; Maruyama, Reo; van Oudenaarden, Alexander; Dowsett, Mitchell; Jones, Robin L; Reis-Filho, Jorge; Gascon, Pere; Gönen, Mithat; Michor, Franziska; Polyak, Kornelia
2014-02-13
Cancer therapy exerts a strong selection pressure that shapes tumor evolution, yet our knowledge of how tumors change during treatment is limited. Here, we report the analysis of cellular heterogeneity for genetic and phenotypic features and their spatial distribution in breast tumors pre- and post-neoadjuvant chemotherapy. We found that intratumor genetic diversity was tumor-subtype specific, and it did not change during treatment in tumors with partial or no response. However, lower pretreatment genetic diversity was significantly associated with pathologic complete response. In contrast, phenotypic diversity was different between pre- and posttreatment samples. We also observed significant changes in the spatial distribution of cells with distinct genetic and phenotypic features. We used these experimental data to develop a stochastic computational model to infer tumor growth patterns and evolutionary dynamics. Our results highlight the importance of integrated analysis of genotypes and phenotypes of single cells in intact tissues to predict tumor evolution. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Heterogeneous Compression of Large Collections of Evolutionary Trees.
Matthews, Suzanne J
2015-01-01
Compressing heterogeneous collections of trees is an open problem in computational phylogenetics. In a heterogeneous tree collection, each tree can contain a unique set of taxa. An ideal compression method would allow for the efficient archival of large tree collections and enable scientists to identify common evolutionary relationships over disparate analyses. In this paper, we extend TreeZip to compress heterogeneous collections of trees. TreeZip is the most efficient algorithm for compressing homogeneous tree collections. To the best of our knowledge, no other domain-based compression algorithm exists for large heterogeneous tree collections or enables their rapid analysis. Our experimental results indicate that TreeZip averages 89.03 percent (72.69 percent) space savings on unweighted (weighted) collections of trees when the level of heterogeneity in a collection is moderate. The organization of the TRZ file allows for efficient computations over heterogeneous data; for example, consensus trees can be computed in mere seconds. Lastly, combining the TreeZip compressed (TRZ) file with general-purpose compression yields average space savings of 97.34 percent (81.43 percent) on unweighted (weighted) collections of trees. Our results lead us to believe that TreeZip will prove invaluable for the efficient archival of tree collections and will enable scientists to develop novel methods for relating heterogeneous collections of trees.
A development framework for distributed artificial intelligence
NASA Technical Reports Server (NTRS)
Adler, Richard M.; Cottman, Bruce H.
1989-01-01
The authors describe distributed artificial intelligence (DAI) applications in which multiple organizations of agents solve multiple domain problems. They then describe work in progress on a DAI system development environment, called SOCIAL, which consists of three primary language-based components. The Knowledge Object Language defines models of knowledge representation and reasoning. The metaCourier language supplies the underlying functionality for interprocess communication and control access across heterogeneous computing environments. The metaAgents language defines models for agent organization coordination, control, and resource management. Application agents and agent organizations will be constructed by combining metaAgents and metaCourier building blocks with task-specific functionality such as diagnostic or planning reasoning. This architecture hides implementation details of communications, control, and integration in distributed processing environments, enabling application developers to concentrate on the design and functionality of the intelligent agents and agent networks themselves.
Fiacco, P. A.; Rice, W. H.
1991-01-01
Computerized medical record systems require structured database architectures for information processing. However, the data must be transferable across heterogeneous platforms and software systems. Client-server architecture allows for distributed processing of information among networked computers and provides the flexibility needed to link diverse systems together effectively. We have incorporated this client-server model, with a graphical user interface, into an outpatient medical record system, known as SuperChart, for the Department of Family Medicine at SUNY Health Science Center at Syracuse. SuperChart was developed using SuperCard and Oracle. SuperCard uses modern object-oriented programming to support a hypermedia environment. Oracle is a powerful relational database management system that incorporates a client-server architecture. This provides both a distributed database and distributed processing, which improve performance. PMID:1807732
Hielscher, Andreas H; Bartel, Sebastian
2004-02-01
Optical tomography (OT) is a fast-developing novel imaging modality that uses near-infrared (NIR) light to obtain cross-sectional views of optical properties inside the human body. A major challenge remains the time-consuming, computationally intensive image reconstruction problem that converts NIR transmission measurements into cross-sectional images. To increase the speed of the iterative image reconstruction schemes commonly applied for OT, we have developed and implemented several parallel algorithms on a cluster of workstations. Static process distribution as well as dynamic load balancing schemes suitable for heterogeneous clusters and varying machine performances are introduced and tested. The resulting algorithms are shown to accelerate the reconstruction process to various degrees, substantially reducing the computation times for clinically relevant problems.
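The static process distribution mentioned above can be sketched simply: split the work among workers in proportion to their measured relative speeds. The function name and numbers are illustrative, not from the paper; a dynamic load balancer would instead re-measure and re-assign shares at run time:

```python
# Minimal sketch of static work distribution for a heterogeneous cluster:
# give each worker a share of the tasks proportional to its measured speed.
def static_distribution(n_tasks, speeds):
    total = sum(speeds)
    shares = [int(n_tasks * s / total) for s in speeds]
    # hand the rounding remainder to the fastest workers first
    for i in sorted(range(len(speeds)), key=lambda i: -speeds[i]):
        if sum(shares) == n_tasks:
            break
        shares[i] += 1
    return shares

# 100 reconstruction tasks over three workers with relative speeds 1.0 : 2.0 : 1.5
print(static_distribution(100, [1.0, 2.0, 1.5]))  # → [22, 45, 33]
```

The weakness of the static scheme, and the motivation for the dynamic one, is that measured speeds drift when machines are shared or heterogeneous in memory as well as compute.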
NASA Technical Reports Server (NTRS)
Rorvig, Mark E.
1991-01-01
Vector-product information retrieval (IR) systems produce retrieval results superior to all other searching methods but presently have no commercial implementations beyond the personal computer environment. The NASA Electronic Library System (NELS) provides a ranked list of the most likely relevant objects in collections in response to a natural language query. Additionally, the system is constructed using standards and tools (Unix, X-Windows, Motif, and TCP/IP) that permit its operation in organizations that possess many different hosts, workstations, and platforms. There are no known commercial equivalents to this product at this time. The product has applications in all corporate management environments, particularly those that are information intensive, such as finance, manufacturing, biotechnology, and research and development.
Liu, Xin
2014-01-01
This study describes a deterministic method for simulating the first-order scattering in a medical computed tomography scanner. The method was developed based on a physics model of x-ray photon interactions with matter and a ray tracing technique. The results from simulated scattering were compared to the ones from an actual scattering measurement. Two phantoms with homogeneous and heterogeneous material distributions were used in the scattering simulation and measurement. It was found that the simulated scatter profile was in agreement with the measurement result, with an average difference of 25% or less. Finally, tomographic images with artifacts caused by scatter were corrected based on the simulated scatter profiles. The image quality improved significantly.
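The ray-tracing ingredient of such a simulation can be illustrated with a toy attenuation integral: the primary beam reaching a point is attenuated by the Beer-Lambert law applied to the line integral of the attenuation coefficient along the ray. This toy (uniform voxel grid, axis-aligned ray, made-up attenuation values) shows only the primary-attenuation step, not the paper's first-order scatter model itself:

```python
import numpy as np

# Toy phantom: a 64x64 attenuation map (1/mm) with a denser square insert.
mu = np.zeros((64, 64))
mu[20:40, 20:40] = 0.02            # assumed attenuation coefficient, 1/mm

dl = 1.0                           # voxel step length along the ray, mm
path = mu[32, :]                   # axis-aligned ray through the middle row
line_integral = np.sum(path * dl)  # sum(mu * dl) along the ray
primary = np.exp(-line_integral)   # Beer-Lambert transmitted primary fraction

print(round(primary, 4))           # → 0.6703, i.e. exp(-0.02/mm * 20 mm)
```

A full first-order scatter model would additionally trace rays from each interaction voxel to the detector, weighting each by the scattering cross-section at the interaction point.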
Turner, Rebecca M; Davey, Jonathan; Clarke, Mike J; Thompson, Simon G; Higgins, Julian PT
2012-01-01
Background Many meta-analyses contain only a small number of studies, which makes it difficult to estimate the extent of between-study heterogeneity. Bayesian meta-analysis allows incorporation of external evidence on heterogeneity, and offers advantages over conventional random-effects meta-analysis. To assist in this, we provide empirical evidence on the likely extent of heterogeneity in particular areas of health care. Methods Our analyses included 14 886 meta-analyses from the Cochrane Database of Systematic Reviews. We classified each meta-analysis according to the type of outcome, type of intervention comparison and medical specialty. By modelling the study data from all meta-analyses simultaneously, using the log odds ratio scale, we investigated the impact of meta-analysis characteristics on the underlying between-study heterogeneity variance. Predictive distributions were obtained for the heterogeneity expected in future meta-analyses. Results Between-study heterogeneity variances for meta-analyses in which the outcome was all-cause mortality were found to be on average 17% (95% CI 10–26) of variances for other outcomes. In meta-analyses comparing two active pharmacological interventions, heterogeneity was on average 75% (95% CI 58–95) of variances for non-pharmacological interventions. Meta-analysis size was found to have only a small effect on heterogeneity. Predictive distributions are presented for nine different settings, defined by type of outcome and type of intervention comparison. For example, for a planned meta-analysis comparing a pharmacological intervention against placebo or control with a subjectively measured outcome, the predictive distribution for heterogeneity is a log-normal(−2.13, 1.58²) distribution, which has a median value of 0.12. In an example of meta-analysis of six studies, incorporating external evidence led to a smaller heterogeneity estimate and a narrower confidence interval for the combined intervention effect.
Conclusions Meta-analysis characteristics were strongly associated with the degree of between-study heterogeneity, and predictive distributions for heterogeneity differed substantially across settings. The informative priors provided will be very beneficial in future meta-analyses including few studies. PMID:22461129
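The quoted median can be checked directly, since the median of a log-normal distribution is exp(μ) regardless of σ. A quick sketch using the parameters reported in the abstract (variable names are mine):

```python
import math

# Reported predictive prior for the between-study heterogeneity variance tau^2:
# log-normal with mu = -2.13 and sigma = 1.58.
mu, sigma = -2.13, 1.58
median_tau2 = math.exp(mu)      # median of a log-normal is exp(mu); sigma drops out
print(round(median_tau2, 2))    # → 0.12, matching the value quoted above
```

The large σ means the prior is heavily right-skewed, so its mean, exp(μ + σ²/2), is far above the median; quoting the median is the conventional summary for such priors.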
Practical management of heterogeneous neuroimaging metadata by global neuroimaging data repositories
Neu, Scott C.; Crawford, Karen L.; Toga, Arthur W.
2012-01-01
Rapidly evolving neuroimaging techniques are producing unprecedented quantities of digital data at the same time that many research studies are evolving into global, multi-disciplinary collaborations between geographically distributed scientists. While networked computers have made it almost trivial to transmit data across long distances, collecting and analyzing this data requires extensive metadata if the data is to be maximally shared. Though it is typically straightforward to encode text and numerical values into files and send content between different locations, it is often difficult to attach context and implicit assumptions to the content. As the number of and geographic separation between data contributors grows to national and global scales, the heterogeneity of the collected metadata increases and conformance to a single standardization becomes implausible. Neuroimaging data repositories must then not only accumulate data but must also consolidate disparate metadata into an integrated view. In this article, using specific examples from our experiences, we demonstrate how standardization alone cannot achieve full integration of neuroimaging data from multiple heterogeneous sources and why a fundamental change in the architecture of neuroimaging data repositories is needed instead. PMID:22470336
The Major Role of IK1 in Mechanisms of Rotor Drift in the Atria: A Computational Study
Berenfeld, Omer
2016-01-01
Maintenance of paroxysmal atrial fibrillation (AF) by fast rotors in the left atrium (LA) or at the pulmonary veins (PVs) is not fully understood. This review describes the role of the heterogeneous distribution of transmembrane currents in the PVs and LA junction (PV-LAJ) in the localization of rotors in the PVs. Experimentally observed heterogeneities in IK1, IKs, IKr, Ito, and ICaL in the PV-LAJ were incorporated into models of human atrial kinetics to simulate various conditions and investigate rotor drifting mechanisms. Spatial gradients in the currents resulted in shorter action potential duration, less negative minimum diastolic potential, slower upstroke and conduction velocity for rotors in the PV region than in the LA. Rotors under such conditions drifted toward the PV and stabilized at the less excitable region. Our simulations suggest that IK1 heterogeneity is dominant in determining the drift direction through its impact on the excitability gradient. These results provide a novel framework for understanding the complex dynamics of rotors in AF. PMID:28096699
Neuroimaging Study Designs, Computational Analyses and Data Provenance Using the LONI Pipeline
Dinov, Ivo; Lozev, Kamen; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Pierce, Jonathan; Zamanyan, Alen; Chakrapani, Shruthi; Van Horn, John; Parker, D. Stott; Magsipoc, Rico; Leung, Kelvin; Gutman, Boris; Woods, Roger; Toga, Arthur
2010-01-01
Modern computational neuroscience employs diverse software tools and multidisciplinary expertise to analyze heterogeneous brain data. The classical problems of gathering meaningful data, fitting specific models, and discovering appropriate analysis and visualization tools give way to a new class of computational challenges—management of large and incongruous data, integration and interoperability of computational resources, and data provenance. We designed, implemented and validated a new paradigm for addressing these challenges in the neuroimaging field. Our solution is based on the LONI Pipeline environment [3], [4], a graphical workflow environment for constructing and executing complex data processing protocols. We developed study-design, database and visual language programming functionalities within the LONI Pipeline that enable the construction of complete, elaborate and robust graphical workflows for analyzing neuroimaging and other data. These workflows facilitate open sharing and communication of data and metadata, concrete processing protocols, result validation, and study replication among different investigators and research groups. The LONI Pipeline features include distributed grid-enabled infrastructure, virtualized execution environment, efficient integration, data provenance, validation and distribution of new computational tools, automated data format conversion, and an intuitive graphical user interface. We demonstrate the new LONI Pipeline features using large scale neuroimaging studies based on data from the International Consortium for Brain Mapping [5] and the Alzheimer's Disease Neuroimaging Initiative [6]. User guides, forums, instructions and downloads of the LONI Pipeline environment are available at http://pipeline.loni.ucla.edu. PMID:20927408
NASA Astrophysics Data System (ADS)
Furuta, T.; Maeyama, T.; Ishikawa, K. L.; Fukunishi, N.; Fukasaku, K.; Takagi, S.; Noda, S.; Himeno, R.; Hayashi, S.
2015-08-01
In this research, we used a 135 MeV/nucleon carbon-ion beam to irradiate a biological sample composed of fresh chicken meat and bones, which was placed in front of a PAGAT gel dosimeter, and compared the measured and simulated transverse-relaxation-rate (R2) distributions in the gel dosimeter. We experimentally measured the three-dimensional R2 distribution, which records the dose induced by particles penetrating the sample, by using magnetic resonance imaging. The obtained R2 distribution reflected the heterogeneity of the biological sample. We also conducted Monte Carlo simulations using the PHITS code by reconstructing the elemental composition of the biological sample from its computed tomography images while taking into account the dependence of the gel response on the linear energy transfer. The simulation reproduced the experimental distal edge structure of the R2 distribution with an accuracy under about 2 mm, which is approximately the same as the voxel size currently used in treatment planning.
A distributed scheduling algorithm for heterogeneous real-time systems
NASA Technical Reports Server (NTRS)
Zeineldine, Osman; El-Toweissy, Mohamed; Mukkamala, Ravi
1991-01-01
Much of the previous work on load balancing and scheduling in distributed environments has been concerned with homogeneous systems and homogeneous loads. Several of the results indicated that random policies are as effective as other, more complex load allocation policies. Here, the effects of heterogeneity on scheduling algorithms for hard real-time systems are examined. A distributed scheduler designed specifically to handle heterogeneities in both nodes and node traffic is proposed. The performance of the algorithm is measured in terms of the percentage of jobs discarded. While random task allocation is very sensitive to heterogeneities, the proposed algorithm is shown to be robust to such non-uniformities in system components and load.
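The contrast between random and heterogeneity-aware allocation can be illustrated with a toy simulation. This is only a sketch under invented assumptions (node speeds, Poisson arrivals, uniform job sizes and deadline slacks are all placeholders), not the paper's scheduler:

```python
import random

def simulate(policy, n_jobs=500, seed=7):
    """Fraction of hard real-time jobs discarded under a given placement policy."""
    rng = random.Random(seed)
    speeds = [1.0, 2.0, 4.0]            # heterogeneous node service speeds
    free_at = [0.0] * len(speeds)       # time at which each node becomes idle
    discarded, t = 0, 0.0
    for _ in range(n_jobs):
        t += rng.expovariate(2.0)       # Poisson job arrivals
        size = rng.uniform(0.5, 1.5)    # work units per job
        deadline = t + rng.uniform(1.0, 3.0)
        if policy == "random":
            node = rng.randrange(len(speeds))
        else:                           # heterogeneity-aware: earliest finish time
            node = min(range(len(speeds)),
                       key=lambda i: max(free_at[i], t) + size / speeds[i])
        finish = max(free_at[node], t) + size / speeds[node]
        if finish > deadline:
            discarded += 1              # hard real-time: late jobs are dropped
        else:
            free_at[node] = finish
    return discarded / n_jobs
```

Under these made-up parameters, the earliest-finish-time policy, which accounts for node speed differences, should discard no more jobs than blind random placement, mirroring the abstract's qualitative finding.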
Turner, Rebecca M; Jackson, Dan; Wei, Yinghui; Thompson, Simon G; Higgins, Julian P T
2015-01-01
Numerous meta-analyses in healthcare research combine results from only a small number of studies, for which the variance representing between-study heterogeneity is estimated imprecisely. A Bayesian approach to estimation allows external evidence on the expected magnitude of heterogeneity to be incorporated. The aim of this paper is to provide tools that improve the accessibility of Bayesian meta-analysis. We present two methods for implementing Bayesian meta-analysis, using numerical integration and importance sampling techniques. Based on 14 886 binary outcome meta-analyses in the Cochrane Database of Systematic Reviews, we derive a novel set of predictive distributions for the degree of heterogeneity expected in 80 settings depending on the outcomes assessed and comparisons made. These can be used as prior distributions for heterogeneity in future meta-analyses. The two methods are implemented in R, for which code is provided. Both methods produce equivalent results to standard but more complex Markov chain Monte Carlo approaches. The priors are derived as log-normal distributions for the between-study variance, applicable to meta-analyses of binary outcomes on the log odds-ratio scale. The methods are applied to two example meta-analyses, incorporating the relevant predictive distributions as prior distributions for between-study heterogeneity. We have provided resources to facilitate Bayesian meta-analysis, in a form accessible to applied researchers, which allow relevant prior information on the degree of heterogeneity to be incorporated. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:25475839
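The importance-sampling approach described above can be sketched in Python (the paper provides R code; here Python stands in, and the log-normal prior parameters below are illustrative placeholders, not the published predictive distributions):

```python
import numpy as np

def posterior_tau2(y, s2, n_draws=100_000, prior_mu=-2.0, prior_sd=1.5, seed=0):
    """Posterior mean of the between-study variance tau^2 by importance
    sampling, using a log-normal prior on tau^2 as the proposal.
    Prior parameters here are illustrative only."""
    rng = np.random.default_rng(seed)
    tau2 = np.exp(rng.normal(prior_mu, prior_sd, n_draws))  # draws from the prior
    # Marginal log-likelihood of the data given tau^2, with the overall
    # effect mu integrated out under a flat prior: y_i ~ N(mu, s_i^2 + tau^2).
    w = 1.0 / (s2[None, :] + tau2[:, None])                 # precision weights
    W = w.sum(axis=1)
    mu_hat = (w * y[None, :]).sum(axis=1) / W
    loglik = 0.5 * (np.log(w).sum(axis=1) - np.log(W)
                    - (w * (y[None, :] - mu_hat[:, None]) ** 2).sum(axis=1))
    iw = np.exp(loglik - loglik.max())                      # importance weights
    iw /= iw.sum()
    return float((iw * tau2).sum())                         # posterior mean of tau^2

# Example: five studies, log odds-ratio estimates with within-study variances
y = np.array([0.2, 0.5, -0.1, 0.4, 0.3])
s2 = np.array([0.05, 0.10, 0.08, 0.06, 0.12])
```

Because the prior itself is the proposal, the importance weights are just the marginal likelihoods, which is what makes this scheme simpler than full MCMC while giving equivalent answers.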
Variable synaptic strengths control the firing rate distribution in feedforward neural networks.
Ly, Cheng; Marsat, Gary
2018-02-01
Heterogeneity of firing rate statistics is known to have severe consequences on neural coding. Recent experimental recordings in weakly electric fish indicate that the distribution width of superficial pyramidal cell firing rates (trial- and time-averaged) in the electrosensory lateral line lobe (ELL) depends on the stimulus, and also that network inputs can mediate changes in the firing rate distribution across the population. We previously developed theoretical methods to understand how two attributes (synaptic and intrinsic heterogeneity) interact and alter the firing rate distribution in a population of integrate-and-fire neurons with random recurrent coupling. Inspired by our experimental data, we extend these theoretical results to a delayed feedforward spiking network that qualitatively captures the changes of firing rate heterogeneity observed in in-vivo recordings. We demonstrate how heterogeneous neural attributes alter firing rate heterogeneity, accounting for the effects of various sensory stimuli. The model predicts how the strength of the effective network connectivity is related to intrinsic heterogeneity in such delayed feedforward networks: the strength of the feedforward input is positively correlated with excitability (threshold value for spiking) when firing rate heterogeneity is low and is negatively correlated with excitability when firing rate heterogeneity is high. We also show how our theory can be used to predict effective neural architecture. We demonstrate that neural attributes do not interact in a simple manner but rather in a complex stimulus-dependent fashion to control neural heterogeneity and ultimately shape population codes.
Examining the microtexture evolution in a hole-edge punched into 780 MPa grade hot-rolled steel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shin, J.H.; Kim, M.S.
The deformation behavior in the hole-edge of 780 MPa grade hot-rolled steel during the punching process was investigated via microstructure characterization and computational simulation. Microstructure characterization was conducted to observe the edges of punched holes through the thickness direction, and electron back-scattered diffraction (EBSD) was used to analyze the heterogeneity of the deformation. Finite element analysis (FEA) that could account for a ductile fracture criterion was conducted to simulate the deformation and fracture behaviors of 780 MPa grade hot-rolled steel during the punching process. Calculation of rotation rate fields at the edges of the punched holes during the punching process revealed that metastable orientations in Euler space were confined to specific orientation groups. Rotation-rate fields effectively explained the stability of the initial texture components in the hole-edge region during the punching process. A visco-plastic self-consistent (VPSC) polycrystal model was used to calculate the microtexture evolution in the hole-edge region during the punching process. FEA revealed that the heterogeneous effective strain was closely related to the heterogeneity of the Kernel average misorientation (KAM) distribution in the hole-edge region. A simulation of the deformation microtexture evolution in the hole-edge region using a VPSC model was in good agreement with the experimental results. - Highlights: •We analyzed the microstructure in a hole-edge punched in HR 780HB steel. •Rotation rate fields revealed the stability of the initial texture components. •Heterogeneous effective strain was closely related to the KAM distribution. •VPSC model successfully simulated the deformation microtexture evolution.
Schneider, Frank; Bludau, Frederic; Clausen, Sven; Fleckenstein, Jens; Obertacke, Udo; Wenz, Frederik
2017-05-01
To date, IORT has been performed eye- and hand-guided, without treatment planning or tissue heterogeneity correction. This limits the precision of the application and the documentation of the location and the deposited dose in the tissue. Here we present a set-up in which we use image guidance by intraoperative cone beam computed tomography (CBCT) for precise online Monte Carlo treatment planning including tissue heterogeneity correction. An IORT was performed during balloon kyphoplasty using a dedicated Needle Applicator. An intraoperative CBCT was registered with a pre-op CT. Treatment planning was performed in Radiance using a hybrid Monte Carlo algorithm simulating dose in homogeneous (MCwater) and heterogeneous media (MChet). Dose distributions on CBCT and pre-op CT were compared with each other. Spinal cord and metastasis doses were evaluated. The MCwater calculations showed a spherical dose distribution, as expected. The minimum target dose for the MChet simulations on pre-op CT was increased by 40%, while the maximum spinal cord dose was decreased by 35%. Due to the artefacts on the CBCT, the comparison between MChet simulations on CBCT and pre-op CT showed differences of up to 50% in dose. igIORT and online treatment planning improve the accuracy of IORT. However, the current set-up is limited by CT artefacts. Fusing an intraoperative CBCT with a pre-op CT allows the combination of an accurate dose calculation with knowledge of the correct source/applicator position. This method can also be used for pre-operative treatment planning followed by image-guided surgery. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Optimization of over-provisioned clouds
NASA Astrophysics Data System (ADS)
Balashov, N.; Baranov, A.; Korenkov, V.
2016-09-01
Modern applications running in cloud centers generate a huge variety of computational workloads. This causes uneven workload distribution and, as a result, leads to ineffective utilization of cloud centers' hardware. This article addresses possible ways to solve the issue and demonstrates the necessity of optimizing cloud centers' hardware utilization. As one possible way to solve the problem of inefficient resource utilization in heterogeneous cloud environments, an algorithm of dynamic re-allocation of virtual resources is suggested.
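A minimal sketch of such dynamic re-allocation (the article does not specify its algorithm; the greedy policy and the utilization thresholds below are invented for illustration):

```python
def rebalance(hosts, high=0.8):
    """Greedy sketch of dynamic VM re-allocation: for each over-utilized host,
    move its smallest VM onto the least-loaded host that can still take it
    without itself exceeding the `high` threshold. Loads are CPU fractions;
    the threshold value is an illustrative assumption."""
    migrations = []
    for src, vms in hosts.items():
        while sum(vms) > high and len(vms) > 1:
            vm = min(vms)
            # candidate targets: any other host that stays under `high`
            targets = [h for h, v in hosts.items()
                       if h != src and sum(v) + vm <= high]
            if not targets:
                break                     # nowhere to migrate; leave as-is
            dst = min(targets, key=lambda h: sum(hosts[h]))
            vms.remove(vm)
            hosts[dst].append(vm)
            migrations.append((vm, src, dst))
    return migrations
```

For example, with `hosts = {"h1": [0.5, 0.4, 0.2], "h2": [0.1], "h3": [0.2]}`, the overloaded host h1 sheds its two smallest VMs onto the idle hosts, leaving every host at or below the threshold.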
Vicini, P; Bonadonna, R C; Lehtovirta, M; Groop, L C; Cobelli, C
1998-01-01
Distributed models of blood-tissue exchange are widely used to measure kinetic events of various solutes from multiple tracer dilution experiments. Their use requires, however, a careful description of blood flow heterogeneity along the capillary bed. Since they have mostly been applied in animal studies, direct measurement of the heterogeneity distribution was possible, e.g., with the invasive microsphere method. Here we apply distributed modeling to a dual tracer experiment in humans, performed using an intravascular (indocyanine green dye, subject to distribution along the vascular tree and confined to the capillary bed) and an extracellular ([3H]-D-mannitol, tracing passive transcapillary transfer across the capillary membrane in the interstitial fluid) tracer. The goal is to measure relevant parameters of transcapillary exchange in human skeletal muscle. We show that assuming an accurate description of blood flow heterogeneity is crucial for modeling, and in particular that assuming for skeletal muscle the well-studied cardiac muscle blood flow heterogeneity is inappropriate. The same reason prevents the use of the common method of estimating the input function of the distributed model via deconvolution, which assumes a known blood flow heterogeneity, either defined from literature or measured, when possible. We present a novel approach for the estimation of blood flow heterogeneity in each individual from the intravascular tracer data. When this newly estimated blood flow heterogeneity is used, a more satisfactory model fit is obtained and it is possible to reliably measure parameters of capillary membrane permeability-surface product and interstitial fluid volume describing transcapillary transfer in vivo.
The energy-dependent electron loss model: backscattering and application to heterogeneous slab media
NASA Astrophysics Data System (ADS)
Lee, Tae Kyu; Sandison, George A.
2003-01-01
Electron backscattering has been incorporated into the energy-dependent electron loss (EL) model and the resulting algorithm is applied to predict dose deposition in slab heterogeneous media. This algorithm utilizes a reflection coefficient from the interface that is computed on the basis of Goudsmit-Saunderson theory and an average energy for the backscattered electrons based on Everhart's theory. Predictions of dose deposition in slab heterogeneous media are compared to the Monte Carlo based dose planning method (DPM) and a numerical discrete ordinates method (DOM). The slab media studied comprised water/Pb, water/Al, water/bone, water/bone/water, and water/lung/water, and incident electron beam energies of 10 MeV and 18 MeV. The predicted dose enhancement due to backscattering is accurate to within 3% of dose maximum even for lead as the backscattering medium. Dose discrepancies at large depths beyond the interface were as high as 5% of dose maximum and we speculate that this error may be attributed to the EL model assuming a Gaussian energy distribution for the electrons at depth. The computational cost is low compared to Monte Carlo simulations making the EL model attractive as a fast dose engine for dose optimization algorithms. The predictive power of the algorithm demonstrates that the small angle scattering restriction on the EL model can be overcome while retaining dose calculation accuracy and requiring only one free variable, χ, in the algorithm to be determined in advance of calculation.
NASA Astrophysics Data System (ADS)
España, Samuel; Paganetti, Harald
2011-07-01
Dose calculation for lung tumors can be challenging due to the low density and the fine structure of the geometry. The latter is not fully considered in the CT image resolution used in treatment planning, causing the prediction of a more homogeneous tissue distribution. In proton therapy, this could result in predicting an unrealistically sharp distal dose falloff, i.e. an underestimation of the distal dose falloff degradation. The goal of this work was the quantification of such effects. Two computational phantoms resembling a two-dimensional heterogeneous random lung geometry and a swine lung were considered, applying a variety of voxel sizes for dose calculation. Monte Carlo simulations were used to compare the dose distributions predicted with the voxel size typically used for the treatment planning procedure with those expected to be delivered using the finest resolution. The results show, for example, distal falloff position differences of up to 4 mm between planned and expected dose at the 90% level for the heterogeneous random lung (assuming a treatment plan on a 2 × 2 × 2.5 mm³ grid). For the swine lung, differences of up to 38 mm were seen when airways are present in the beam path when the treatment plan was done on a 0.8 × 0.8 × 2.4 mm³ grid. The two-dimensional heterogeneous random lung phantom apparently does not describe the impact of the geometry adequately because of the lack of heterogeneities in the axial direction. The differences observed in the swine lung between planned and expected dose are presumably due to the poor axial resolution of the CT images used in clinical routine. In conclusion, when assigning margins for treatment planning for lung cancer, proton range uncertainties due to the heterogeneous lung geometry and CT image resolution need to be considered.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mashouf, Shahram; Department of Radiation Oncology, Sunnybrook Odette Cancer Centre, Toronto, Ontario; Fleury, Emmanuelle
Purpose: The inhomogeneity correction factor (ICF) method provides heterogeneity correction for the fast-calculation TG43 formalism in seed brachytherapy. This study compared ICF-corrected plans to their standard TG43 counterparts, looking at their capacity to assess inadequate coverage and/or risk of skin toxicities for patients who received permanent breast seed implant (PBSI). Methods and Materials: Two-month postimplant computed tomography scans and plans of 140 PBSI patients were used to calculate dose distributions by using the TG43 and the ICF methods. Multiple dose-volume histogram (DVH) parameters of clinical target volume (CTV) and skin were extracted and compared for both ICF and TG43 dose distributions. Short-term (desquamation and erythema) and long-term (telangiectasia) skin toxicity data were available on 125 and 110 of the patients, respectively, at the time of the study. The predictive value of each DVH parameter of skin was evaluated using the area under the receiver operating characteristic (ROC) curve for each toxicity endpoint. Results: Dose-volume histogram parameters of CTV, calculated using the ICF method, showed an overall decrease compared to TG43, whereas those of skin showed an increase, confirming previously reported findings of the impact of heterogeneity with low-energy sources. The ICF methodology enabled us to distinguish patients for whom the CTV V100 and V90 are up to 19% lower compared to TG43, which could present a risk of recurrence not detected when heterogeneity is not accounted for. The ICF method also led to an increase in the prediction of desquamation, erythema, and telangiectasia for 91% of skin DVH parameters studied. Conclusions: The ICF methodology has the advantage of distinguishing any inadequate dose coverage of CTV due to breast heterogeneity, which can be missed by TG43. Use of ICF correction also led to an increase in prediction accuracy of skin toxicities in most cases.
Mashouf, Shahram; Fleury, Emmanuelle; Lai, Priscilla; Merino, Tomas; Lechtman, Eli; Kiss, Alex; McCann, Claire; Pignol, Jean-Philippe
2016-03-15
The inhomogeneity correction factor (ICF) method provides heterogeneity correction for the fast calculation TG43 formalism in seed brachytherapy. This study compared ICF-corrected plans to their standard TG43 counterparts, looking at their capacity to assess inadequate coverage and/or risk of any skin toxicities for patients who received permanent breast seed implant (PBSI). Two-month postimplant computed tomography scans and plans of 140 PBSI patients were used to calculate dose distributions by using the TG43 and the ICF methods. Multiple dose-volume histogram (DVH) parameters of clinical target volume (CTV) and skin were extracted and compared for both ICF and TG43 dose distributions. Short-term (desquamation and erythema) and long-term (telangiectasia) skin toxicity data were available on 125 and 110 of the patients, respectively, at the time of the study. The predictive value of each DVH parameter of skin was evaluated using the area under the receiver operating characteristic (ROC) curve for each toxicity endpoint. Dose-volume histogram parameters of CTV, calculated using the ICF method, showed an overall decrease compared to TG43, whereas those of skin showed an increase, confirming previously reported findings of the impact of heterogeneity with low-energy sources. The ICF methodology enabled us to distinguish patients for whom the CTV V100 and V90 are up to 19% lower compared to TG43, which could present a risk of recurrence not detected when heterogeneity is not accounted for. The ICF method also led to an increase in the prediction of desquamation, erythema, and telangiectasia for 91% of skin DVH parameters studied. The ICF methodology has the advantage of distinguishing any inadequate dose coverage of CTV due to breast heterogeneity, which can be missed by TG43. Use of ICF correction also led to an increase in prediction accuracy of skin toxicities in most cases. Copyright © 2016 Elsevier Inc. All rights reserved.
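The ROC analysis at the core of the toxicity evaluation reduces to the Mann-Whitney rank statistic, which can be sketched directly (the scores and labels below are invented for illustration, not patient data):

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney identity: the fraction
    of (positive, negative) pairs where the positive case scores higher;
    ties receive half credit."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Applied to a hypothetical skin DVH parameter per patient and a binary toxicity endpoint, `auc([10, 20, 30, 40, 50], [0, 0, 1, 0, 1])` gives 5/6, i.e. the parameter ranks toxic patients above non-toxic ones in 5 of the 6 possible pairs.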
Suitability of point kernel dose calculation techniques in brachytherapy treatment planning
Lakshminarayanan, Thilagam; Subbaiah, K. V.; Thayalan, K.; Kannan, S. E.
2010-01-01
A brachytherapy treatment planning system (TPS) is necessary to estimate the dose to the target volume and organs at risk (OAR). A TPS is always recommended to account for the effects of the tissue, applicator, and shielding-material heterogeneities that exist in applicators. However, most brachytherapy TPS software packages estimate the absorbed dose at a point taking into account only the contributions of the individual sources and the source distribution, neglecting the dose perturbations arising from the applicator design and construction. This leaves some degree of uncertainty in dose rate estimates under realistic clinical conditions. In this regard, an attempt is made to explore the suitability of point kernels for brachytherapy dose rate calculations and to develop a new interactive brachytherapy package, named BrachyTPS, to suit clinical conditions. BrachyTPS is an interactive point-kernel code package developed to perform independent dose rate calculations taking into account the effects of these heterogeneities, using the two-region build-up factors proposed by Kalos. The primary aim of this study is to validate the developed point-kernel code package, integrated with treatment planning computational systems, against Monte Carlo (MC) results. In the present work, three brachytherapy applicators commonly used in the treatment of uterine cervical carcinoma, namely (i) the Board of Radiation Isotope and Technology (BRIT) low dose rate (LDR) applicator, (ii) the Fletcher Green type LDR applicator and (iii) the Fletcher Williamson high dose rate (HDR) applicator, are studied to test the accuracy of the software. Dose rates computed using the developed code are compared with the relevant results of the MC simulations. Further, attempts are also made to study the dose rate distribution around a commercially available shielded vaginal applicator set (Nucletron).
The percentage deviations of BrachyTPS-computed dose rate values from the MC results are within ±5.5% for the BRIT LDR applicator, vary from 2.6 to 5.1% for the Fletcher Green type LDR applicator, and are up to −4.7% for the Fletcher-Williamson HDR applicator. The isodose distribution plots also show good agreement with previously published results. The isodose distributions around the shielded vaginal cylinder computed using the BrachyTPS code show better agreement with MC results in the unshielded region (less than 2% deviation) than in the shielded region, where deviations of up to 5% are observed. The present study implies that accurate and fast validation of complicated treatment planning calculations is possible with the point-kernel code package. PMID:20589118
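A generic point-kernel calculation of the kind BrachyTPS builds on can be sketched as follows. The attenuation coefficient and the Berger-form build-up coefficients below are illustrative placeholders, not the two-region Kalos factors the package actually uses:

```python
import math

def dose_rate(r_cm, strength=1.0, gamma=1.0, mu=0.11, a=1.0, b=0.05):
    """Point-kernel dose rate at distance r_cm from a point source:
    inverse-square geometry x exponential attenuation x a Berger-form
    build-up factor B(mu*r) = 1 + a*mu*r*exp(b*mu*r).
    All coefficients are illustrative, not calibrated data."""
    x = mu * r_cm
    buildup = 1.0 + a * x * math.exp(b * x)
    return strength * gamma * buildup * math.exp(-x) / r_cm ** 2

def line_source_dose(x_cm, y_cm, z0=-2.0, z1=2.0, n=100):
    """Superpose point kernels along a discretized line source, the way a
    TPS sums the contributions of individual source segments."""
    seg = (z1 - z0) / n
    total = 0.0
    for k in range(n):
        z = z0 + (k + 0.5) * seg
        r = math.sqrt(x_cm ** 2 + y_cm ** 2 + z ** 2)
        total += dose_rate(r, strength=seg)  # each segment carries `seg` of the strength
    return total
```

The dose rate falls off monotonically with distance, since the build-up term grows more slowly than the combined inverse-square and attenuation terms decay over clinically relevant distances.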
Uranium distribution and 'excessive' U-He ages in iron meteoritic troilite
NASA Technical Reports Server (NTRS)
Fisher, D. E.
1985-01-01
Fission-track techniques were used to measure the uranium distribution in meteoritic troilite and graphite. The fission-track data showed a heterogeneous distribution of tracks, with a significant portion of the track density present in the form of uranium clusters at least 10 microns in size. The matrix containing the clusters was also heterogeneous in composition, with U concentrations of about 0.2-4.7 ppb. U/He ages could not be estimated on the basis of these heterogeneous U distributions, so previously reported estimates of U/He ages in the presolar range are probably invalid.
NASA Technical Reports Server (NTRS)
DeBaca, Richard C.; Sarkissian, Edwin; Madatyan, Mariyetta; Shepard, Douglas; Gluck, Scott; Apolinski, Mark; McDuffie, James; Tremblay, Dennis
2006-01-01
TES L1B Subsystem is a computer program that performs several functions for the Tropospheric Emission Spectrometer (TES). The term "L1B" (an abbreviation of "level 1B"), refers to data, specific to the TES, on radiometric calibrated spectral radiances and their corresponding noise equivalent spectral radiances (NESRs), plus ancillary geolocation, quality, and engineering data. The functions performed by TES L1B Subsystem include shear analysis, monitoring of signal levels, detection of ice build-up, and phase correction and radiometric and spectral calibration of TES target data. Also, the program computes NESRs for target spectra, writes scientific TES level-1B data to hierarchical- data-format (HDF) files for public distribution, computes brightness temperatures, and quantifies interpixel signal variability for the purpose of first-order cloud and heterogeneous land screening by the level-2 software summarized in the immediately following article. This program uses an in-house-developed algorithm, called "NUSRT," to correct instrument line-shape factors.
Design and Development of ChemInfoCloud: An Integrated Cloud Enabled Platform for Virtual Screening.
Karthikeyan, Muthukumarasamy; Pandit, Deepak; Bhavasar, Arvind; Vyas, Renu
2015-01-01
The power of cloud computing and distributed computing has been harnessed to handle the vast and heterogeneous data that must be processed in any virtual screening protocol. A cloud computing platform, ChemInfoCloud, was built and integrated with several chemoinformatics and bioinformatics tools. The robust engine performs the core chemoinformatics tasks of lead generation, lead optimisation and property prediction in a fast and efficient manner. It also provides bioinformatics functionalities including sequence alignment, active site pose prediction and protein-ligand docking. Text mining, NMR chemical shift (1H, 13C) prediction and reaction fingerprint generation modules for efficient lead discovery are also implemented in this platform. We have developed an integrated problem-solving cloud environment for virtual screening studies that also provides workflow management, better usability and interaction with end users, built on container-based virtualization (OpenVZ).
Implementing Parquet equations using HPX
NASA Astrophysics Data System (ADS)
Kellar, Samuel; Wagle, Bibek; Yang, Shuxiang; Tam, Ka-Ming; Kaiser, Hartmut; Moreno, Juana; Jarrell, Mark
A new C++ runtime system (HPX) enables simulations of complex systems to run more efficiently on parallel and heterogeneous systems. This increased efficiency allows for solutions to larger simulations of the parquet approximation for a system with impurities. The relevancy of the parquet equations depends upon the ability to solve systems which require long runs and large amounts of memory. These limitations, in addition to numerical complications arising from the stability of the solutions, necessitate running on large distributed systems. As computational resources trend towards the exascale and the limitations they impose vanish, the efficiency of large-scale simulations becomes a focus. HPX facilitates efficient simulations through intelligent overlapping of computation and communication. Simulations such as the parquet equations, which require the transfer of large amounts of data, should benefit from HPX implementations. Supported by the NSF EPSCoR Cooperative Agreement No. EPS-1003897, with additional support from the Louisiana Board of Regents.
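The overlap of computation and communication that HPX provides can be illustrated with a generic future-based prefetch loop. Python futures stand in for HPX's C++ futures here, and the `transfer`/`compute` callables are hypothetical stand-ins for data movement and numerical work:

```python
from concurrent.futures import ThreadPoolExecutor

def process(blocks, transfer, compute):
    """Sketch of computation/communication overlap: while the current block
    is being computed on, the transfer of the next block is already in
    flight, so the worker is never idle waiting for data."""
    results = []
    with ThreadPoolExecutor(max_workers=2) as pool:
        nxt = pool.submit(transfer, blocks[0])   # start the first transfer
        for i in range(len(blocks)):
            cur = nxt.result()                   # wait only for the current block
            if i + 1 < len(blocks):
                nxt = pool.submit(transfer, blocks[i + 1])  # prefetch next block
            results.append(compute(cur))         # compute overlaps the transfer
    return results
```

With dummy callables, e.g. `process([1, 2, 3], transfer=lambda b: b * 2, compute=lambda b: b + 1)`, the pipeline behaves exactly like the sequential version but hides the transfer latency behind the compute phase.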
On salesmen and tourists: Two-step optimization in deterministic foragers
NASA Astrophysics Data System (ADS)
Maya, Miguel; Miramontes, Octavio; Boyer, Denis
2017-02-01
We explore a two-step optimization problem in random environments, the so-called restaurant-coffee shop problem, where a walker aims at visiting the nearest and better restaurant in an area and then moving to the nearest and better coffee shop. This is an extension of the Tourist Problem, a one-step optimization dynamics that can be viewed as a deterministic walk in a random medium. A certain amount of heterogeneity in the values of the resources to be visited causes the emergence of power-law distributions for the steps performed by the walker, similar to a Lévy flight. The fluctuations of the step lengths tend to decrease as a consequence of multiple-step planning, thus reducing the foraging uncertainty. We find that the first and second steps of each planned movement play very different roles in heterogeneous environments. The two-step process improves the foraging efficiency only slightly compared to the one-step optimization, at a much higher computational cost. We discuss the implications of these findings for animal and human mobility, in particular in relation to the computational effort that informed agents should deploy to solve search problems.
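The one-step (tourist) rule and the two-step (restaurant-coffee shop) rule can be sketched side by side. The site layout, qualities and brute-force pair search below are illustrative assumptions, not the paper's model parameters:

```python
import math
import random

def make_sites(n=200, seed=3):
    """Random sites (x, y, quality): stand-ins for restaurants/coffee shops."""
    rng = random.Random(seed)
    return [(rng.random(), rng.random(), rng.random()) for _ in range(n)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def one_step(sites, cur):
    """Tourist rule: move to the nearest site strictly better than the current one."""
    better = [s for s in sites if s[2] > cur[2]]
    return min(better, key=lambda s: dist(cur, s)) if better else None

def two_step(sites, cur):
    """Restaurant-coffee shop rule: plan two moves jointly, choosing the
    quality-increasing pair (r, c) that minimizes total travel distance."""
    best, best_d = None, float("inf")
    for r in (s for s in sites if s[2] > cur[2]):
        for c in sites:
            if c[2] > r[2]:
                d = dist(cur, r) + dist(r, c)
                if d < best_d:
                    best, best_d = (r, c), d
    return best
```

By construction, the jointly planned pair never travels farther than two greedy one-step moves chained together, which is the (modest) efficiency gain the abstract reports, paid for by the quadratic search over pairs.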
Federated data storage and management infrastructure
NASA Astrophysics Data System (ADS)
Zarochentsev, A.; Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Hristov, P.
2016-10-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. Computing models for the High Luminosity LHC era anticipate a growth of storage needs by orders of magnitude; this will require new approaches to data storage organization and data handling. In our project we address the fundamental problem of designing an architecture to integrate distributed heterogeneous disk resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. We have prototyped a federated storage for Russian T1 and T2 centers located in Moscow, St. Petersburg and Gatchina, as well as a Russian/CERN federation. We have conducted extensive tests of the underlying network infrastructure and storage endpoints with synthetic performance measurement tools as well as with HENP-specific workloads, including ones running on supercomputing platforms, cloud computing and Grid for the ALICE and ATLAS experiments. We will present our current accomplishments with running LHC data analysis remotely and locally to demonstrate our ability to efficiently use federated data storage experiment-wide within national academic facilities for high energy and nuclear physics as well as for other data-intensive science applications, such as bioinformatics.
Large scale cardiac modeling on the Blue Gene supercomputer.
Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U; Weiss, Daniel L; Seemann, Gunnar; Dössel, Olaf; Pitman, Michael C; Rice, John J
2008-01-01
Multi-scale, multi-physical heart models have not yet been able to include a high degree of accuracy and resolution with respect to model detail and spatial resolution, owing to computational limitations of current systems. We propose a framework to compute large scale cardiac models. Decomposition of anatomical data into segments to be distributed on a parallel computer is carried out by optimal recursive bisection (ORB). The algorithm takes into account a computational load parameter which has to be adjusted according to the cell models used. The diffusion term is realized by the monodomain equations. The anatomical data-set was given by both ventricles of the Visible Female data-set at 0.2 mm resolution. Heterogeneous anisotropy was included in the computation. Model weights as input for the decomposition and load balancing were set to (a) 1 for tissue and 0 for non-tissue elements; (b) 10 for tissue and 1 for non-tissue elements. Scaling results for 512, 1024, 2048, 4096 and 8192 computational nodes were obtained for 10 ms simulation time. The simulations were carried out on an IBM Blue Gene/L parallel computer. A 1 s simulation was then carried out on 2048 nodes for the optimal model load. Load balances did not differ significantly across computational nodes even if the number of data elements distributed to each node differed greatly. Since the ORB algorithm did not take into account the computational load due to communication cycles, the speedup is close to optimal for the computation time but not optimal overall because of the communication overhead. However, the simulation times were reduced from 87 minutes on 512 nodes to 11 minutes on 8192 nodes. This work demonstrates that it is possible to run simulations of the presented detailed cardiac model within hours for the simulation of a heart beat.
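The weighted decomposition step can be illustrated with a toy version of recursive bisection. This is a hedged sketch assuming a 1-D ordering of cells and the paper's case (b) weights (tissue 10, non-tissue 1); the real ORB operates on 3-D anatomical data:

```python
def bisect(cells, n_parts):
    """Recursive bisection (sketch): split a sequence of (id, weight) cells
    into n_parts groups of roughly equal total weight by recursively cutting
    where the cumulative weight balances the target."""
    if n_parts == 1:
        return [cells]
    left_parts = n_parts // 2
    target = sum(w for _, w in cells) * left_parts / n_parts
    acc, cut = 0.0, 0
    for i, (_, w) in enumerate(cells):
        if acc + w / 2 > target:       # cut where we overshoot least
            break
        acc += w
        cut = i + 1
    cut = max(1, min(cut, len(cells) - 1))
    return bisect(cells[:cut], left_parts) + bisect(cells[cut:], n_parts - left_parts)

# Tissue cells get weight 10, non-tissue weight 1, as in the paper's case (b).
cells = [(i, 10 if 100 <= i < 400 else 1) for i in range(512)]
parts = bisect(cells, 8)
loads = [sum(w for _, w in p) for p in parts]
```

Even though the number of elements per part differs greatly (tissue-heavy parts hold fewer cells), the per-part loads come out nearly equal, mirroring the load-balance observation above.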
PanDA: Exascale Federation of Resources for the ATLAS Experiment at the LHC
NASA Astrophysics Data System (ADS)
Barreiro Megino, Fernando; Caballero Bejar, Jose; De, Kaushik; Hover, John; Klimentov, Alexei; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Padolski, Siarhei; Panitkin, Sergey; Petrosyan, Artem; Wenaus, Torre
2016-02-01
After a scheduled maintenance and upgrade period, the world's largest and most powerful machine, the Large Hadron Collider (LHC), is about to enter its second run at unprecedented energies. In order to exploit the scientific potential of the machine, the experiments at the LHC face computational challenges with enormous data volumes that need to be analysed by thousands of physics users and compared to simulated data. Given diverse funding constraints, the computational resources for the LHC have been deployed in a worldwide mesh of data centres, connected to each other through Grid technologies. The PanDA (Production and Distributed Analysis) system was developed in 2005 for the ATLAS experiment on top of this heterogeneous infrastructure to seamlessly integrate the computational resources and give users the feel of a single system. Since its origins, PanDA has evolved together with upcoming computing paradigms in and outside HEP, such as changes in the networking model, Cloud Computing and HPC. It currently runs steadily at up to 200 thousand simultaneous cores (limited by the resources available to ATLAS) and up to two million aggregated jobs per day, and processes over an exabyte of data per year. The success of PanDA in ATLAS is triggering widespread adoption and testing by other experiments. In this contribution we give an overview of the PanDA components and focus on the new features and upcoming challenges that are relevant to the next decade of distributed computing workload management using PanDA.
NASA Astrophysics Data System (ADS)
Sapra, Karan; Gupta, Saurabh; Atchley, Scott; Anantharaj, Valentine; Miller, Ross; Vazhkudai, Sudharshan
2016-04-01
Efficient resource utilization is critical for improved end-to-end computing and workflow of scientific applications. Heterogeneous node architectures, such as the GPU-enabled Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), present us with further challenges. In many HPC applications on Titan, the accelerators are the primary compute engines while the CPUs orchestrate the offloading of work onto the accelerators and move the output back to main memory. In applications that do not exploit GPUs, on the other hand, CPU usage is dominant while the GPUs sit idle. We utilized the Heterogeneous Functional Partitioning (HFP) runtime framework, which can optimize the usage of resources on a compute node to expedite an application's end-to-end workflow. This approach differs from existing techniques for in-situ analyses in that it provides a framework for on-the-fly, on-node analysis by dynamically exploiting under-utilized resources. We have implemented in the Community Earth System Model (CESM) a new concurrent diagnostic processing capability enabled by the HFP framework. Various single-variate statistics, such as means and distributions, are computed in-situ by launching HFP tasks on the GPU via the node-local HFP daemon. Since our current configuration of CESM does not use GPU resources heavily, we can move these tasks to the GPU using the HFP framework. Each rank running the atmospheric model in CESM pushes the variables of interest via HFP function calls to the HFP daemon. This node-local daemon is responsible for receiving the data from the main program and launching the designated analytics tasks on the GPU. We have implemented these analytics tasks in C and use OpenACC directives to enable GPU acceleration. This methodology is also advantageous when executing GPU-enabled configurations of CESM, where the CPUs would otherwise be idle during portions of the runtime.
Our results demonstrate that it is more efficient to offload these tasks to GPUs through the HFP framework than to perform them in the main application. Using the HFP framework for the end-to-end workflow, we observe increased resource utilization and overall productivity.
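The push-to-daemon pattern described above can be mimicked in miniature with a background worker that consumes variables and maintains running statistics. This is a hypothetical sketch (a thread stands in for the node-local HFP daemon, and the variable name is invented), not the actual C/OpenACC implementation:

```python
import threading
import queue

class AnalyticsDaemon:
    """Sketch of a node-local analytics daemon in the spirit of HFP: the
    main simulation pushes variables of interest, and a background worker
    (standing in for GPU-side tasks) computes running statistics concurrently."""
    def __init__(self):
        self.inbox = queue.Queue()
        self.stats = {}                     # name -> (count, running mean)
        self.worker = threading.Thread(target=self._run, daemon=True)
        self.worker.start()

    def push(self, name, value):            # called from the main program
        self.inbox.put((name, value))

    def _run(self):
        while True:
            item = self.inbox.get()
            if item is None:
                return
            name, value = item
            n, mean = self.stats.get(name, (0, 0.0))
            self.stats[name] = (n + 1, mean + (value - mean) / (n + 1))

    def close(self):                        # drain and stop the worker
        self.inbox.put(None)
        self.worker.join()

daemon = AnalyticsDaemon()
for t in range(100):                        # stand-in for the model time loop
    daemon.push("surface_temp", 280.0 + 0.1 * t)
daemon.close()
count, mean = daemon.stats["surface_temp"]
```

The main loop never blocks on the statistics, which is the point of the design: diagnostics proceed concurrently on otherwise idle resources.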
NASA Astrophysics Data System (ADS)
WANG, Qingrong; ZHU, Changfeng
2017-06-01
Integration of distributed heterogeneous data sources is a key issue in big data applications. In this paper, the strategy of variable precision is introduced into the concept lattice, and a one-to-one mapping between the variable precision concept lattice and the ontology concept lattice is constructed, producing a local ontology by building a variable precision concept lattice for each subsystem. Drawing on the close relationship between concept lattices and ontology construction, a distributed generation algorithm for variable precision concept lattices over heterogeneous ontology databases is proposed. Finally, taking the main concept lattice generated from the existing heterogeneous databases as the standard, a case study is carried out to test the feasibility and validity of the algorithm, and the differences between the main concept lattice and the standard concept lattice are compared. The analysis shows that the algorithm can automatically carry out the construction of distributed concept lattices over heterogeneous data sources.
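Concept lattices are built from a binary context between objects and attributes: a formal concept is a pair (extent, intent) in which each side determines the other through the derivation operators. The toy enumeration below, over an invented context of three databases, sketches the standard construction; it is a naive illustration, not the paper's distributed variable-precision algorithm:

```python
from itertools import combinations

def concepts(objects, attrs, incidence):
    """Naive formal-concept enumeration: close every object subset under the
    two derivation operators and collect the distinct (extent, intent) pairs."""
    def intent(ext):
        return frozenset(a for a in attrs if all((o, a) in incidence for o in ext))
    def extent(itt):
        return frozenset(o for o in objects if all((o, a) in incidence for a in itt))
    found = set()
    for r in range(len(objects) + 1):
        for ext in combinations(objects, r):
            itt = intent(frozenset(ext))
            found.add((extent(itt), itt))   # closure of the candidate extent
    return found

# Invented heterogeneous-database context for illustration.
objects = ["db1", "db2", "db3"]
attrs = ["spatial", "relational", "distributed"]
incidence = {("db1", "spatial"), ("db1", "relational"),
             ("db2", "relational"), ("db2", "distributed"),
             ("db3", "spatial"), ("db3", "distributed")}
lattice = concepts(objects, attrs, incidence)
```

This brute-force closure is exponential in the number of objects; the point of distributed generation algorithms like the one proposed above is precisely to avoid enumerating the full context at a single site.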
A system for distributed intrusion detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snapp, S.R.; Brentano, J.; Dias, G.V.
1991-01-01
The study of providing security in computer networks is a rapidly growing area of interest because the network is the medium over which most attacks or intrusions on computer systems are launched. One approach to solving this problem is the intrusion-detection concept, whose basic premise is that abandoning the existing, huge infrastructure of possibly insecure computer and network systems is impossible, and that replacing them with totally secure systems may not be feasible or cost-effective. Previous work on intrusion-detection systems was performed on stand-alone hosts and on a broadcast local area network (LAN) environment. The focus of our present research is to extend our network intrusion-detection concept from the LAN environment to arbitrarily wider areas, with the network topology being arbitrary as well. The generalized distributed environment is heterogeneous, i.e., the network nodes can be hosts or servers from different vendors, and some of them could be LAN managers, such as the network security monitor (NSM) of our previous work. The proposed architecture for this distributed intrusion-detection system consists of the following components: a host manager in each host; a LAN manager for monitoring each LAN in the system; and a central manager, placed at a single secure location, which receives reports from the various host and LAN managers, processes and correlates these reports, and detects intrusions. 11 refs., 2 figs.
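The three-tier architecture can be sketched with a central manager that correlates reports from host and LAN managers. The names, event types, and threshold below are illustrative assumptions, not the system's actual rule set:

```python
from collections import defaultdict

class CentralManager:
    """Sketch of the paper's three-tier design: host and LAN managers submit
    reports, and the central manager correlates them, flagging a user seen
    failing logins on several distinct hosts (thresholds are illustrative)."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failed_hosts = defaultdict(set)   # user -> hosts with failures

    def report(self, manager_id, user, host, event):
        # A real system would correlate many event types; we track one.
        if event == "login_failure":
            self.failed_hosts[user].add(host)

    def intrusions(self):
        return {u for u, hosts in self.failed_hosts.items()
                if len(hosts) >= self.threshold}

cm = CentralManager(threshold=3)
for host in ("alpha", "beta", "gamma"):        # reports from a LAN manager
    cm.report("lan-1", "mallory", host, "login_failure")
cm.report("lan-1", "alice", "alpha", "login_failure")
alerts = cm.intrusions()
```

The design choice the paper motivates is visible even at this scale: no single host manager sees enough context to flag "mallory"; only the central correlation across hosts does.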
Distributed parameterization of complex terrain
NASA Astrophysics Data System (ADS)
Band, Lawrence E.
1991-03-01
This paper addresses the incorporation of high resolution topography, soils and vegetation information into the simulation of land surface processes in atmospheric circulation models (ACM). Recent work has concentrated on detailed representation of one-dimensional exchange processes, implicitly assuming surface homogeneity over the atmospheric grid cell. Two approaches that could be taken to incorporate heterogeneity are the integration of a surface model over distributed, discrete portions of the landscape, or over a distribution function of the model parameters. However, the computational burden and parameter-intensive nature of current land surface models in ACM limits the number of independent model runs and parameterizations that are feasible to accomplish for operational purposes. Therefore, simplifications in the representation of the vertical exchange processes may be necessary to incorporate the effects of landscape variability and horizontal divergence of energy and water. The strategy is then to trade off the detail and rigor of point exchange calculations for the ability to repeat those calculations over extensive, complex terrain. It is clear that the parameterization process for this approach must be automated such that large spatial databases collected from remotely sensed images, digital terrain models and digital maps can be efficiently summarized and transformed into the appropriate parameter sets. Ideally, the landscape should be partitioned into surface units that maximize between-unit variance while minimizing within-unit variance, although it is recognized that some level of surface heterogeneity will be retained at all scales. Therefore, the geographic data processing necessary to automate the distributed parameterization should be able to estimate or predict parameter distributional information within each surface unit.
Wang, Wen J; He, Hong S; Thompson, Frank R; Spetich, Martin A; Fraser, Jacob S
2018-09-01
Demographic processes (fecundity, dispersal, colonization, growth, and mortality) and their interactions with environmental changes are not well represented in current climate-distribution models (e.g., niche and biophysical process models) and constitute a large uncertainty in projections of future tree species distribution shifts. We investigate how species biological traits and environmental heterogeneity affect species distribution shifts. We used a species-specific, spatially explicit forest dynamics model, LANDIS PRO, which incorporates site-scale tree species demography and competition, landscape-scale dispersal and disturbances, and regional-scale abiotic controls, to simulate the distribution shifts of four representative tree species with distinct biological traits in the central hardwood forest region of the United States. Our results suggested that biological traits (e.g., dispersal capacity, maturation age) were important for determining tree species distribution shifts. Environmental heterogeneity, on average, reduced shift rates by 8% compared to perfect environmental conditions. The average distribution shift rates ranged from 24 to 200 m year-1 under climate change scenarios, implying that many tree species may not be able to keep up with climate change because of limited dispersal capacity, long generation time, and environmental heterogeneity. We suggest that climate-distribution models should include species demographic processes (e.g., fecundity, dispersal, colonization), biological traits (e.g., dispersal capacity, maturation age), and environmental heterogeneity (e.g., habitat fragmentation) to improve future predictions of species distribution shifts in response to changing climates. Copyright © 2018 Elsevier B.V. All rights reserved.
Accelerating the discovery of space-time patterns of infectious diseases using parallel computing.
Hohl, Alexander; Delmelle, Eric; Tang, Wenwu; Casas, Irene
2016-11-01
Infectious diseases have complex transmission cycles, and effective public health responses require the ability to monitor outbreaks in a timely manner. Space-time statistics facilitate the discovery of disease dynamics, including rate of spread and seasonal cyclic patterns, but are computationally demanding, especially for datasets of increasing size, diversity and availability. High-performance computing reduces the effort required to identify these patterns; however, heterogeneity in the data must be accounted for. We develop an adaptive space-time domain decomposition approach for parallel computation of the space-time kernel density. We apply our methodology to individual reported dengue cases from 2010 to 2011 in the city of Cali, Colombia. The parallel implementation achieves significant speedup over its sequential counterpart. Density values are visualized in an interactive 3D environment, which facilitates the identification and communication of the uneven space-time distribution of disease events. Our framework has the potential to enhance the timely monitoring of infectious diseases. Copyright © 2016 Elsevier Ltd. All rights reserved.
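The core computation, a space-time kernel density evaluated over decomposed domains in parallel, can be sketched as follows. The Epanechnikov product kernel and the simple round-robin chunking are assumptions for illustration; the paper's adaptive decomposition partitions the space-time domain itself, not just the query list:

```python
import math
from concurrent.futures import ThreadPoolExecutor

def stk_density(events, queries, hs, ht):
    """Space-time kernel density (sketch): separable Epanechnikov kernels
    in space and time, averaged over events at each query point."""
    def k(u):
        return 0.75 * (1 - u * u) if abs(u) < 1 else 0.0
    out = []
    for qx, qy, qt in queries:
        dens = 0.0
        for ex, ey, et in events:
            ds = math.hypot(qx - ex, qy - ey) / hs
            dt = (qt - et) / ht
            dens += (k(ds) / (hs * hs)) * (k(dt) / ht)   # product kernel
        out.append(dens / len(events))
    return out

# Domain decomposition: split the query points into chunks and evaluate
# them in parallel, mirroring the divide-and-conquer strategy.
events = [(float(i % 10), float(i // 10), float(i % 7)) for i in range(100)]
queries = [(x + 0.5, 4.5, 3.0) for x in range(8)]
chunks = [queries[i::4] for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = [d for part in pool.map(
        lambda c: stk_density(events, c, 2.0, 2.0), chunks) for d in part]
serial = [d for c in chunks for d in stk_density(events, c, 2.0, 2.0)]
```

Since chunks are independent, the parallel result matches the serial one exactly; the speedup reported above comes from balancing chunk workloads against the uneven event distribution.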
Community-driven computational biology with Debian Linux.
Möller, Steffen; Krabbenhöft, Hajo Nils; Tille, Andreas; Paleino, David; Williams, Alan; Wolstencroft, Katy; Goble, Carole; Holland, Richard; Belhachemi, Dominique; Plessy, Charles
2010-12-21
The Open Source movement and its technologies are popular in the bioinformatics community because they provide freely available tools and resources for research. In order to feed the steady demand for updates on software and associated data, a service infrastructure is required for sharing and providing these tools to heterogeneous computing environments. The Debian Med initiative provides ready and coherent software packages for medical informatics and bioinformatics. These packages can be used together in Taverna workflows via the UseCase plugin to manage execution on local or remote machines. If such packages are available in cloud computing environments, the underlying hardware and the analysis pipelines can be shared along with the software. Debian Med closes the gap between developers and users. It provides a simple method for offering new releases of software and data resources, thus provisioning a local infrastructure for computational biology. For geographically distributed teams it can ensure they are working on the same versions of tools, in the same conditions. This contributes to the world-wide networking of researchers.
Orchestrating Distributed Resource Ensembles for Petascale Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baldin, Ilya; Mandal, Anirban; Ruth, Paul
2014-04-24
Distributed, data-intensive computational science applications of interest to DOE scientific communities move large amounts of data for experiment data management, distributed analysis steps, remote visualization, and accessing scientific instruments. These applications need to orchestrate ensembles of resources from multiple resource pools and interconnect them with high-capacity multi-layered networks across multiple domains. It is highly desirable to design mechanisms that provide this type of resource provisioning capability to a broad class of applications, and it is also important to have coherent monitoring capabilities for such complex distributed environments. In this project, we addressed these problems by designing an abstract API, enabled by novel semantic resource descriptions, for provisioning complex and heterogeneous resources from multiple providers using their native provisioning mechanisms and control planes: computational, storage, and multi-layered high-speed network domains. We used an extensible resource representation based on semantic web technologies to afford maximum flexibility to applications in specifying their needs. We evaluated the effectiveness of provisioning using representative data-intensive applications. We also developed mechanisms for providing feedback about resource performance to the application, to enable closed-loop feedback control and dynamic adjustments to resource allocations (elasticity). This was enabled through the development of a novel persistent query framework that consumes disparate sources of monitoring data, including perfSONAR, and provides scalable distribution of asynchronous notifications.
Is a matrix exponential specification suitable for the modeling of spatial correlation structures?
Strauß, Magdalena E.; Mezzetti, Maura; Leorato, Samantha
2018-01-01
This paper investigates the adequacy of the matrix exponential spatial specifications (MESS) as an alternative to the widely used spatial autoregressive models (SAR). To provide as complete a picture as possible, we extend the analysis to all the main spatial models governed by matrix exponentials comparing them with their spatial autoregressive counterparts. We propose a new implementation of Bayesian parameter estimation for the MESS model with vague prior distributions, which is shown to be precise and computationally efficient. Our implementations also account for spatially lagged regressors. We further allow for location-specific heterogeneity, which we model by including spatial splines. We conclude by comparing the performances of the different model specifications in applications to a real data set and by running simulations. Both the applications and the simulations suggest that the spatial splines are a flexible and efficient way to account for spatial heterogeneities governed by unknown mechanisms. PMID:29492375
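The MESS replaces the SAR model's autoregressive inverse with a matrix exponential, exp(alpha W) y = X beta + eps, so that for a fixed alpha the transformed model is linear in beta. A minimal numerical sketch follows, assuming a row-standardized ring contiguity matrix and noise-free data for illustration; the paper's Bayesian estimation with spatial splines is far richer:

```python
import numpy as np

def expm(a, terms=30):
    """Truncated Taylor series for the matrix exponential; adequate here
    because alpha * W has small norm for a row-standardized W."""
    out = np.eye(a.shape[0])
    term = np.eye(a.shape[0])
    for k in range(1, terms):
        term = term @ a / k
        out = out + term
    return out

def mess_fit(y, x, w, alpha):
    """MESS: exp(alpha W) y = X beta + eps. Given alpha, beta has a
    closed-form least-squares solution on the transformed response."""
    sy = expm(alpha * w) @ y
    beta, *_ = np.linalg.lstsq(x, sy, rcond=None)
    return beta, sy

rng = np.random.default_rng(0)
n = 30
w = np.zeros((n, n))                     # ring contiguity, row-standardized
for i in range(n):
    w[i, (i - 1) % n] = w[i, (i + 1) % n] = 0.5
x = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true, alpha_true = np.array([1.0, 2.0]), -0.4
y = expm(-alpha_true * w) @ (x @ beta_true)   # noise-free for illustration
beta_hat, _ = mess_fit(y, x, w, alpha_true)
```

One appeal of MESS noted in this literature is that exp(alpha W) is always invertible (its inverse is exp(-alpha W)), so no parameter-space restriction analogous to the SAR stability condition on rho is needed.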
Trust Model to Enhance Security and Interoperability of Cloud Environment
NASA Astrophysics Data System (ADS)
Li, Wenjuan; Ping, Lingdi
Trust is one of the most important means to improve security and enable interoperability among current heterogeneous, independent cloud platforms. This paper first analyzes several trust models used in large distributed environments and then introduces a novel cloud trust model to solve security issues in cross-cloud environments, in which cloud customers can choose different providers' services and resources in heterogeneous domains can cooperate. The model is domain-based: it groups one cloud provider's resource nodes into a single domain and sets a trust agent. It distinguishes two different roles, cloud customer and cloud server, and designs different strategies for each. In our model, trust recommendation is treated as a type of cloud service, just like computation or storage. The model achieves both identity authentication and behavior authentication. The results of emulation experiments show that the proposed model can efficiently and safely construct trust relationships in cross-cloud environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, J.P.; Bangs, A.L.; Butler, P.L.
Hetero Helix is a programming environment which simulates shared memory on a heterogeneous network of distributed-memory computers. The machines in the network may vary with respect to their native operating systems and internal representation of numbers. Hetero Helix presents a simple programming model to developers, and also considers the needs of designers, system integrators, and maintainers. The key software technology underlying Hetero Helix is the use of a "compiler" which analyzes the data structures in shared memory and automatically generates code which translates data representations from the format native to each machine into a common format, and vice versa. The design of Hetero Helix was motivated in particular by the requirements of robotics applications. Hetero Helix has been used successfully in an integration effort involving 27 CPUs in a heterogeneous network and a body of software totaling roughly 100,000 lines of code. 25 refs., 6 figs.
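The translation idea, converting each machine's native representation to one common wire format, can be sketched with Python's struct module and an explicit big-endian layout. The record fields below are invented for illustration; the paper's generated translators handle arbitrary shared-memory structures:

```python
import struct

# Common format every node agrees on: big-endian int32, float64, 16-byte name.
# Each machine packs its native values into this layout and unpacks on read,
# so byte order and padding differences between nodes never leak across.
RECORD = ">i d 16s"

def to_common(joint_id, angle, name):
    """Translate native values to the shared, machine-independent format."""
    return struct.pack(RECORD, joint_id, angle, name.encode().ljust(16, b"\0"))

def from_common(blob):
    """Translate the common format back to native values."""
    joint_id, angle, raw = struct.unpack(RECORD, blob)
    return joint_id, angle, raw.rstrip(b"\0").decode()

blob = to_common(7, 1.5708, "elbow")       # a robotics-flavored toy record
roundtrip = from_common(blob)
```

With ">" the layout has no padding, so the record is exactly 28 bytes on every platform, which is the property a shared-memory simulation across heterogeneous nodes depends on.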
Average is Boring: How Similarity Kills a Meme's Success
Coscia, Michele
2014-01-01
Every day we are exposed to different ideas, or memes, competing with each other for our attention. Previous research explained the heterogeneity of meme popularity and persistence by assuming that memes compete for limited attention resources, distributed in a heterogeneous social network. Little has been said about what characteristics make a specific meme more likely to be successful. We propose a similarity-based explanation: memes with higher similarity to other memes have a significant disadvantage in their potential popularity. We employ a meme similarity measure based on semantic text analysis and computer vision to show that a meme is more likely to be successful and to thrive if its characteristics make it unique. Our results show that successful memes are indeed located in the periphery of the meme similarity space and that our similarity measure is a promising predictor of a meme's success. PMID:25257730
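The similarity-based explanation can be illustrated with a bag-of-words cosine similarity: score each meme by one minus its highest similarity to any other meme, so unique (peripheral) memes score high. The texts and scoring below are toy assumptions; the paper uses semantic text analysis and computer-vision features:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counter vectors."""
    num = sum(a[t] * b[t] for t in a)
    return num / (math.sqrt(sum(v * v for v in a.values())) *
                  math.sqrt(sum(v * v for v in b.values())))

def peripherality(memes):
    """1 minus the highest similarity to any other meme: higher scores
    mean more unique (peripheral) memes."""
    bags = {m: Counter(text.lower().split()) for m, text in memes.items()}
    return {m: 1 - max(cosine(bags[m], bags[o]) for o in bags if o != m)
            for m in bags}

memes = {
    "a": "cat wearing a hat",
    "b": "cat wearing a tiny hat",
    "c": "quarterly report on soybean futures",
}
scores = peripherality(memes)
```

Meme "c", sharing no vocabulary with the others, sits in the periphery of this toy similarity space and scores highest, mirroring the paper's finding that uniqueness predicts success.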
NASA Astrophysics Data System (ADS)
Urata, Yumi; Kuge, Keiko; Kase, Yuko
2008-11-01
To understand the role of fluid in earthquake rupture processes, we investigated the effects of thermal pressurization on the spatial variation of dynamic rupture by computing spontaneous rupture propagation on a rectangular fault. We found that thermal pressurization can cause heterogeneity of rupture even on a fault with uniform properties. On drained faults, tractions drop linearly with increasing slip in the same way everywhere. However, under undrained conditions, the slip-weakening curves become non-linear and depend on the location on faults with small shear zone thickness w, and the dynamic frictional stresses vary spatially and temporally. Consequently, the super-shear transition fault length decreases for small w, and the final slip distribution can have several peaks regardless of w, especially on undrained faults. These effects should be taken into account when determining dynamic rupture parameters and modeling earthquake cycles where the presence of fluid is suggested in the source regions.
NASA Astrophysics Data System (ADS)
Ruthven, R. C.; Ketcham, R. A.; Kelly, E. D.
2015-12-01
Three-dimensional textural analysis of garnet porphyroblasts and electron microprobe analyses can, in concert, be used to pose novel tests that challenge and ultimately increase our understanding of metamorphic crystallization mechanisms. Statistical analysis of high-resolution X-ray computed tomography (CT) data of garnet porphyroblasts tells us the degree of ordering or randomness of garnets, which can be used to distinguish the rate-limiting factors behind their nucleation and growth. Electron microprobe data for cores, rims, and core-to-rim traverses are used as proxies to ascertain porphyroblast nucleation and growth rates, and the evolution of sample composition during crystallization. MnO concentrations in garnet cores serve as a proxy for the relative timing of nucleation, and rim concentrations test the hypothesis that MnO is in equilibrium sample-wide during the final stages of crystallization, and that concentrations have not been greatly altered by intracrystalline diffusion. Crystal size distributions combined with compositional data can be used to quantify the evolution of nucleation rates and sample composition during crystallization. This study focuses on quartzite schists from the Picuris Mountains with heterogeneous garnet distributions consisting of dense and sparse layers. 3D data shows that the sparse layers have smaller, less euhedral garnets, and petrographic observations show that sparse layers have more quartz and less mica than dense layers. Previous studies on rocks with homogeneously distributed garnet have shown that crystallization rates are diffusion-controlled, meaning that they are limited by diffusion of nutrients to growth and nucleation sites. This research extends this analysis to heterogeneous rocks to determine nucleation and growth rates, and test the assumption of rock-wide equilibrium for some major elements, among a set of compositionally distinct domains evolving in mm- to cm-scale proximity under identical P-T conditions.
FAST: framework for heterogeneous medical image computing and visualization.
Smistad, Erik; Bozorgi, Mohammadmehdi; Lindseth, Frank
2015-11-01
Computer systems are becoming increasingly heterogeneous in the sense that they consist of different processors, such as multi-core CPUs and graphics processing units. As the amount of medical image data increases, it is crucial to exploit the computational power of these processors. However, this is currently difficult due to several factors, such as driver errors, processor differences, and the need for low-level memory handling. This paper presents a novel FrAmework for heterogeneouS medical image compuTing and visualization (FAST). The framework aims to make it easier to simultaneously process and visualize medical images efficiently on heterogeneous systems. FAST uses common image processing programming paradigms and hides the details of memory handling from the user, while enabling the use of all processors and cores on a system. The framework is open-source, cross-platform and available online. Code examples and performance measurements are presented to show the simplicity and efficiency of FAST. The results are compared to the Insight Toolkit (ITK) and the Visualization Toolkit (VTK) and show that the presented framework is faster, with up to 20 times speedup on several common medical imaging algorithms. FAST enables efficient medical image computing and visualization on heterogeneous systems. Code examples and performance evaluations have demonstrated that the toolkit is both easy to use and performs better than existing frameworks, such as ITK and VTK.
Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming
2011-02-01
High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and the different Green's functions representing the flux distribution in the media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to obtain reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual-Opteron CPU nodes. Because of the MC simulation's advantages of high accuracy and suitability for 3-D heterogeneous media with refractive-index-mismatched boundaries, the GPU-cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.
XML-Based Visual Specification of Multidisciplinary Applications
NASA Technical Reports Server (NTRS)
Al-Theneyan, Ahmed; Jakatdar, Amol; Mehrotra, Piyush; Zubair, Mohammad
2001-01-01
The advancements in the Internet and Web technologies have fueled a growing interest in developing a web-based distributed computing environment. We have designed and developed Arcade, a web-based environment for designing, executing, monitoring, and controlling distributed heterogeneous applications, which is easy to use and access, portable, and provides support through all phases of the application development and execution. A major focus of the environment is the specification of heterogeneous, multidisciplinary applications. In this paper we focus on the visual and script-based specification interface of Arcade. The web/browser-based visual interface is designed to be intuitive to use and can also be used for visual monitoring during execution. The script specification is based on XML to: (1) make it portable across different frameworks, and (2) make the development of our tools easier by using the existing freely available XML parsers and editors. There is a one-to-one correspondence between the visual and script-based interfaces allowing users to go back and forth between the two. To support this we have developed translators that translate a script-based specification to a visual-based specification, and vice-versa. These translators are integrated with our tools and are transparent to users.
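A script-based specification of the kind described might look like the following minimal XML, parsed here with Python's standard library. The element and attribute names (application, module, link) are invented for illustration and are not Arcade's actual schema:

```python
import xml.etree.ElementTree as ET

# A hypothetical multidisciplinary-application spec: two modules on two
# hosts, connected by one data link.
SPEC = """
<application name="wing-design">
  <module name="flow-solver" host="hpc1"/>
  <module name="optimizer" host="hpc2"/>
  <link from="flow-solver" to="optimizer"/>
</application>
"""

def parse_spec(text):
    """Read the application name, module placement, and data links from a
    script-based XML specification (sketch)."""
    root = ET.fromstring(text)
    modules = {m.get("name"): m.get("host") for m in root.iter("module")}
    links = [(l.get("from"), l.get("to")) for l in root.iter("link")]
    return root.get("name"), modules, links

app, modules, links = parse_spec(SPEC)
```

Because the parsed structure is a plain graph of modules and links, a translator can walk it to lay out the visual representation, and serialize the visual graph back to XML, which is the round-trip property the paper's translators provide.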
Kharche, Sanjay R.; So, Aaron; Salerno, Fabio; Lee, Ting-Yim; Ellis, Chris; Goldman, Daniel; McIntyre, Christopher W.
2018-01-01
Dialysis prolongs life but augments cardiovascular mortality. Imaging data suggests that dialysis increases myocardial blood flow (BF) heterogeneity, but its causes remain poorly understood. A biophysical model of human coronary vasculature was used to explain the imaging observations and highlight causes of coronary BF heterogeneity. Post-dialysis CT images from patients under control, pharmacological stress (adenosine), therapy (cooled dialysate), and combined adenosine and cooled dialysate conditions were obtained. The data presented disparate phenotypes. To dissect vascular mechanisms, a 3D human vasculature model based on known experimental coronary morphometry and a space filling algorithm was implemented. Steady state simulations were performed to investigate the effects of altered aortic pressure and blood vessel diameters on myocardial BF heterogeneity. Imaging showed that stress and therapy potentially increased mean and total BF, while reducing heterogeneity. BF histograms of one patient showed multi-modality. Using the model, it was found that total coronary BF increased as coronary perfusion pressure was increased. BF heterogeneity was differentially affected by large or small vessel blocking, and was found to be inversely related to small blood vessel diameters. Simulation of large artery stenosis indicated that BF became heterogeneous (increased relative dispersion) and gave multi-modal histograms. The total transmural BF as well as transmural BF heterogeneity were reduced by large artery stenosis, generating large patches of very low BF regions downstream. Blocking of arteries at various orders showed that blocking larger arteries results in multi-modal BF histograms and large patches of low BF, whereas smaller artery blocking results in augmented relative dispersion and fractal dimension. Transmural heterogeneity was also affected.
Finally, the effects of augmented aortic pressure in the presence of blood vessel blocking show differential effects on BF heterogeneity as well as transmural BF. Improved aortic blood pressure may improve total BF. Stress and therapy may be effective if they dilate small vessels. The observed complex BF distributions (multi-modal BF histograms) may indicate existing large vessel stenosis. The intuitive BF heterogeneity methods used here can be readily applied in clinical studies. Further development of the model and methods will permit personalized assessment of patient BF status. PMID:29867555
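The heterogeneity index referred to above, relative dispersion, is simply the standard deviation of regional flows divided by their mean. A sketch with illustrative (not patient-derived) flow values:

```python
import statistics

def relative_dispersion(flows):
    """Relative dispersion (RD = standard deviation / mean), the blood-flow
    heterogeneity index discussed above. Illustrative values only."""
    return statistics.pstdev(flows) / statistics.fmean(flows)

# A nearly uniform vascular bed versus one with patches of very low flow
# downstream of a simulated stenosis (toy numbers, arbitrary units).
uniform_bed = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95]
stenosed_bed = [1.4, 1.5, 0.1, 0.2, 1.3, 0.1]
rd_uniform = relative_dispersion(uniform_bed)
rd_stenosed = relative_dispersion(stenosed_bed)
```

The patchy low-flow regions both raise the spread and lower the mean, so RD rises sharply, which is why the index separates the stenosed from the uniform phenotype in the simulations above.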
Tsunoda, Tomonori; Kachi, Naoki; Suzuki, Jun-Ichirou
2014-01-01
We examined how the volume and temporal heterogeneity of water supply changed the vertical distribution and mortality of a belowground herbivore, and consequently affected plant biomass. Plantago lanceolata (Plantaginaceae) seedlings were grown at one per pot under different combinations of water volume (large or small volume) and heterogeneity (homogeneous water conditions, watered every day; heterogeneous conditions, watered every 4 days) in the presence or absence of a larva of the belowground herbivorous insect, Anomala cuprea (Coleoptera: Scarabaeidae). The larva was confined to one of three vertical feeding zones: top (top treatment), middle (middle treatment), or bottom (bottom treatment); alternatively, no larva was introduced (control treatment) or larval movement was not confined (free treatment). A three-way interaction between water volume, heterogeneity, and the herbivore significantly affected plant biomass. With a large water volume, plant biomass was lower in the free treatment than in the control treatment regardless of heterogeneity. Plant biomass in the free treatment was as low as in the top treatment. With a small water volume and in the free treatment, plant biomass was low (similar to that under the top treatment) under homogeneous water conditions but high under heterogeneous ones (similar to that under the middle or bottom treatment). Therefore, there was little effect of belowground herbivory on plant growth under heterogeneous water conditions. In the other watering regimes, herbivores were distributed in the shallow soil and reduced root biomass. Herbivore mortality was high with homogeneous application of a large volume or heterogeneous application of a small water volume. Under the large water volume, plant biomass was high in pots in which the herbivore had died. Thus, the combinations of water volume and heterogeneity affected plant growth via changes in the distribution and mortality of the belowground herbivore.
A Disciplined Architectural Approach to Scaling Data Analysis for Massive, Scientific Data
NASA Astrophysics Data System (ADS)
Crichton, D. J.; Braverman, A. J.; Cinquini, L.; Turmon, M.; Lee, H.; Law, E.
2014-12-01
Data collections across remote sensing and ground-based instruments in astronomy, Earth science, and planetary science are outpacing scientists' ability to analyze them. Furthermore, the distribution, structure, and heterogeneity of the measurements themselves pose challenges that limit the scalability of data analysis using traditional approaches. Methods for developing science data processing pipelines, distributing scientific datasets, and performing analysis will require innovative approaches that integrate cyber-infrastructure, algorithms, and data into more systematic frameworks that can more efficiently compute and reduce data, particularly distributed data. This requires the integration of computer science, machine learning, statistics and domain expertise to identify scalable architectures for data analysis. The size of data returned from Earth science observing satellites and the magnitude of data from climate model output are predicted to grow into the tens of petabytes, challenging current data analysis paradigms. This same kind of growth is present in astronomy and planetary science data. One of the major challenges in data science and related disciplines is defining new approaches to scaling systems and analysis in order to increase scientific productivity and yield. Specific needs include: 1) identification of optimized system architectures for analyzing massive, distributed data sets; 2) algorithms for systematic analysis of massive data sets in distributed environments; and 3) the development of software infrastructures that are capable of performing massive, distributed data analysis across a comprehensive data science framework. NASA/JPL has begun an initiative in data science to address these challenges.
Our goal is to evaluate how scientific productivity can be improved through optimized architectural topologies that identify how to deploy and manage the access, distribution, computation, and reduction of massive, distributed data, while managing the uncertainties of scientific conclusions derived from such capabilities. This talk will provide an overview of JPL's efforts in developing a comprehensive architectural approach to data science.
Paoletti, Claudia; Esbensen, Kim H
2015-01-01
Material heterogeneity influences the effectiveness of sampling procedures. Most sampling guidelines used for assessment of food and/or feed commodities are based on classical statistical distribution requirements (the normal, binomial, and Poisson distributions) and almost universally rely on the assumption of randomness. However, this is unrealistic. The scientific food and feed community recognizes a strong preponderance of non-random distribution within commodity lots, which should be a more realistic prerequisite for definition of effective sampling protocols. Nevertheless, these heterogeneity issues are overlooked as the prime focus is often placed only on financial, time, equipment, and personnel constraints instead of mandating acquisition of documented representative samples under realistic heterogeneity conditions. This study shows how the principles promulgated in the Theory of Sampling (TOS) and practically tested over 60 years provide an effective framework for dealing with the complete set of adverse aspects of both compositional and distributional heterogeneity (material sampling errors), as well as with the errors incurred by the sampling process itself. The results of an empirical European Union study on genetically modified soybean heterogeneity (Kernel Lot Distribution Assessment) are summarized, as they have a strong bearing on the issue of proper sampling protocol development. TOS principles apply universally in the food and feed realm and must therefore be considered the only basis for development of valid sampling protocols free from distributional constraints.
NASA Astrophysics Data System (ADS)
Zhu, J.; Winter, C. L.; Wang, Z.
2015-08-01
Computational experiments are performed to evaluate the effects of locally heterogeneous conductivity fields on regional exchanges of water between stream and aquifer systems in the Middle Heihe River Basin (MHRB) of northwestern China. The effects are found to be nonlinear in the sense that simulated discharges from aquifers to streams are systematically lower than discharges produced by a base model parameterized with relatively coarse effective conductivity. A similar, but weaker, effect is observed for stream leakage. The study is organized around three hypotheses: (H1) small-scale spatial variations of conductivity significantly affect regional exchanges of water between streams and aquifers in river basins, (H2) aggregating small-scale heterogeneities into regional effective parameters systematically biases estimates of stream-aquifer exchanges, and (H3) the biases result from slow-paths in groundwater flow that emerge due to small-scale heterogeneities. The hypotheses are evaluated by comparing stream-aquifer fluxes produced by the base model to fluxes simulated using realizations of the MHRB characterized by local (grid-scale) heterogeneity. Levels of local heterogeneity are manipulated as control variables by adjusting coefficients of variation. All models are implemented using the MODFLOW simulation environment, and the PEST tool is used to calibrate effective conductivities defined over 16 zones within the MHRB. The effective parameters are also used as expected values to develop log-normally distributed conductivity (K) fields on local grid scales. Stream-aquifer exchanges are simulated with K fields at both scales and then compared. Results show that small-scale heterogeneities significantly influence stream-aquifer exchanges, with simulations based on local-scale heterogeneities always producing discharges that are less than those produced by the base model.
Although aquifer heterogeneities are uncorrelated at local scales, they appear to induce coherent slow-paths in groundwater fluxes that in turn reduce aquifer-stream exchanges. Since surface water-groundwater exchanges are critical hydrologic processes in basin-scale water budgets, these results also have implications for water resources management.
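The local-field construction described above can be sketched as follows. The function name and numbers are illustrative, and unlike the study's zone-based fields this draws independent cell values, but it shows how a target effective conductivity and coefficient of variation pin down the log-normal parameters:

```python
import math
import random

def lognormal_K_field(n_cells, K_eff, cv, seed=0):
    """Draw grid-scale conductivities whose expected value equals the
    zone's effective conductivity K_eff and whose coefficient of
    variation (sd/mean) is the heterogeneity control variable."""
    rng = random.Random(seed)
    sigma2 = math.log(1.0 + cv ** 2)        # log-variance implied by the CV
    mu = math.log(K_eff) - 0.5 * sigma2     # shift so that E[K] = K_eff
    return [rng.lognormvariate(mu, math.sqrt(sigma2)) for _ in range(n_cells)]

field = lognormal_K_field(10_000, K_eff=5.0, cv=0.5)
mean_K = sum(field) / len(field)            # close to K_eff = 5.0
```

Raising `cv` widens the distribution while leaving the expected value at the calibrated effective conductivity, which is exactly the manipulation the study performs.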
Spagnolo, Daniel M.; Gyanchandani, Rekha; Al-Kofahi, Yousef; Stern, Andrew M.; Lezon, Timothy R.; Gough, Albert; Meyer, Dan E.; Ginty, Fiona; Sarachan, Brion; Fine, Jeffrey; Lee, Adrian V.; Taylor, D. Lansing; Chennubhotla, S. Chakra
2016-01-01
Background: Measures of spatial intratumor heterogeneity are potentially important diagnostic biomarkers for cancer progression, proliferation, and response to therapy. Spatial relationships among cells including cancer and stromal cells in the tumor microenvironment (TME) are key contributors to heterogeneity. Methods: We demonstrate how to quantify spatial heterogeneity from immunofluorescence pathology samples, using a set of 3 basic breast cancer biomarkers as a test case. We learn a set of dominant biomarker intensity patterns and map the spatial distribution of the biomarker patterns with a network. We then describe the pairwise association statistics for each pattern within the network using pointwise mutual information (PMI) and visually represent heterogeneity with a two-dimensional map. Results: We found a salient set of 8 biomarker patterns to describe cellular phenotypes from a tissue microarray cohort containing 4 different breast cancer subtypes. After computing PMI for each pair of biomarker patterns in each patient and tumor replicate, we visualize the interactions that contribute to the resulting association statistics. Then, we demonstrate the potential for using PMI as a diagnostic biomarker, by comparing PMI maps and heterogeneity scores from patients across the 4 different cancer subtypes. Estrogen receptor positive invasive lobular carcinoma patient, AL13-6, exhibited the highest heterogeneity score among those tested, while estrogen receptor negative invasive ductal carcinoma patient, AL13-14, exhibited the lowest heterogeneity score. Conclusions: This paper presents an approach for describing intratumor heterogeneity, in a quantitative fashion (via PMI), which departs from the purely qualitative approaches currently used in the clinic. 
PMI is generalizable to highly multiplexed/hyperplexed immunofluorescence images, as well as spatial data from complementary in situ methods including FISSEQ and CyTOF, sampling many different components within the TME. We hypothesize that PMI will uncover key spatial interactions in the TME that contribute to disease proliferation and progression. PMID:27994939
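The pairwise PMI statistic can be sketched from co-occurrence counts of neighboring pattern labels. This is a simplified estimator with made-up pairs; the paper's network construction and exact normalization may differ:

```python
import math
from collections import Counter

def pmi_map(pairs):
    """Pointwise mutual information for each observed pair of biomarker
    patterns: pmi(a, b) = log(p(a, b) / (p(a) * p(b))).
    Pairs are assumed unordered and stored with sorted labels."""
    pair_counts = Counter(pairs)
    marginal = Counter()
    for a, b in pairs:
        marginal[a] += 1
        marginal[b] += 1
    n_pairs = len(pairs)
    n_single = sum(marginal.values())
    pmi = {}
    for (a, b), c in pair_counts.items():
        p_ab = c / n_pairs
        p_a = marginal[a] / n_single
        p_b = marginal[b] / n_single
        pmi[(a, b)] = math.log(p_ab / (p_a * p_b))
    return pmi

# Hypothetical neighbor pairs of pattern labels from a cell network:
pairs = [(1, 2), (1, 2), (1, 3), (2, 3), (1, 2)]
scores = pmi_map(pairs)
```

Patterns 1 and 2 co-occur more often than chance predicts, so their PMI is highest; a map of such scores is what gets visualized per patient.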
ADAPTIVE-GRID SIMULATION OF GROUNDWATER FLOW IN HETEROGENEOUS AQUIFERS. (R825689C068)
The prediction of contaminant transport in porous media requires the computation of the flow velocity. This work presents a methodology for high-accuracy computation of flow in a heterogeneous isotropic formation, employing a dual-flow formulation and adaptive...
Zhang, Renduo; Wood, A Lynn; Enfield, Carl G; Jeong, Seung-Woo
2003-01-01
Stochastical analysis was performed to assess the effect of soil spatial variability and heterogeneity on the recovery of denser-than-water nonaqueous phase liquids (DNAPL) during the process of surfactant-enhanced remediation. UTCHEM, a three-dimensional, multicomponent, multiphase, compositional model, was used to simulate water flow and chemical transport processes in heterogeneous soils. Soil spatial variability and heterogeneity were accounted for by considering the soil permeability as a spatial random variable and a geostatistical method was used to generate random distributions of the permeability. The randomly generated permeability fields were incorporated into UTCHEM to simulate DNAPL transport in heterogeneous media and stochastical analysis was conducted based on the simulated results. From the analysis, an exponential relationship between average DNAPL recovery and soil heterogeneity (defined as the standard deviation of log of permeability) was established with a coefficient of determination (r2) of 0.991, which indicated that DNAPL recovery decreased exponentially with increasing soil heterogeneity. Temporal and spatial distributions of relative saturations in the water phase, DNAPL, and microemulsion in heterogeneous soils were compared with those in homogeneous soils and related to soil heterogeneity. Cleanup time and uncertainty to determine DNAPL distributions in heterogeneous soils were also quantified. The study would provide useful information to design strategies for the characterization and remediation of nonaqueous phase liquid-contaminated soils with spatial variability and heterogeneity.
A Distributed Prognostic Health Management Architecture
NASA Technical Reports Server (NTRS)
Bhaskar, Saha; Saha, Sankalita; Goebel, Kai
2009-01-01
This paper introduces a generic distributed prognostic health management (PHM) architecture with specific application to the electrical power systems domain. Current state-of-the-art PHM systems are mostly centralized in nature, where all the processing is reliant on a single processor. This can lead to loss of functionality in case of a crash of the central processor or monitor. Furthermore, with increases in the volume of sensor data as well as the complexity of algorithms, traditional centralized systems become unsuitable for successful deployment, and efficient distributed architectures are required. A distributed architecture, though, is not effective unless there is an algorithmic framework to take advantage of its unique abilities. The health management paradigm envisaged here incorporates a heterogeneous set of system components monitored by a varied suite of sensors and a particle filtering (PF) framework that has the power and the flexibility to adapt to the different diagnostic and prognostic needs. Both the diagnostic and prognostic tasks are formulated as a particle filtering problem in order to explicitly represent and manage uncertainties; however, typically the complexity of the prognostic routine exceeds the computational power of one computational element (CE). Individual CEs run diagnostic routines until the system variable being monitored crosses beyond a nominal threshold, upon which the CE coordinates with other networked CEs to run the prognostic routine in a distributed fashion. Implementation results from a network of distributed embedded devices monitoring a prototypical aircraft electrical power system are presented, where the CEs are Sun Microsystems Small Programmable Object Technology (SPOT) devices.
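A minimal bootstrap particle filter of the kind such a PF framework builds on can be sketched as follows. The upward-drift degradation model, rates, and measurements are all illustrative, not the paper's electrical power system models:

```python
import math
import random

def gauss_pdf(x, mu, sd):
    # Gaussian measurement likelihood.
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def pf_update(particles, z, rng, drift=0.08, process_sd=0.03, meas_sd=0.1):
    """One bootstrap particle-filter update for a scalar damage state:
    propagate with an assumed drifting degradation model, weight by the
    measurement likelihood, then resample."""
    moved = [p + drift + rng.gauss(0.0, process_sd) for p in particles]
    weights = [gauss_pdf(z, p, meas_sd) for p in moved]
    total = sum(weights)
    weights = [w / total for w in weights]
    return rng.choices(moved, weights=weights, k=len(moved))  # resample

rng = random.Random(1)
particles = [0.0] * 500
for z in [0.05, 0.12, 0.21, 0.33]:   # hypothetical monitored variable
    particles = pf_update(particles, z, rng)
estimate = sum(particles) / len(particles)  # tracks the latest measurement
```

The particle cloud explicitly represents uncertainty about the damage state; in the distributed setting, propagating and weighting subsets of particles is what gets farmed out across CEs.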
Zhou, Lian; Zhu, Shanan
2014-01-01
Magnetoacoustic tomography with Magnetic Induction (MAT-MI) is a noninvasive electrical conductivity imaging approach that measures the ultrasound wave induced by magnetic stimulation, for reconstructing the distribution of electrical impedance in biological tissue. Existing reconstruction algorithms for MAT-MI are based on the assumption that the acoustic properties in the tissue are homogeneous. However, the tissue in most parts of the human body has heterogeneous acoustic properties, which leads to potential distortion and blurring of small buried objects in the impedance images. In the present study, we proposed a new algorithm for MAT-MI to image the impedance distribution in tissues with inhomogeneous acoustic speed distributions. With a computer head model constructed from MR images of a human subject, a series of numerical simulation experiments were conducted. The present results indicate that the inhomogeneous acoustic properties of tissues in terms of speed variation can be incorporated in MAT-MI imaging. PMID:24845284
Connecting micro dynamics and population distributions in system dynamics models
Rahmandad, Hazhir; Chen, Hsin-Jen; Xue, Hong; Wang, Youfa
2014-01-01
Researchers use system dynamics models to capture the mean behavior of groups of indistinguishable population elements (e.g., people) aggregated in stock variables. Yet, many modeling problems require capturing the heterogeneity across elements with respect to some attribute(s) (e.g., body weight). This paper presents a new method to connect the micro-level dynamics associated with elements in a population with the macro-level population distribution along an attribute of interest without the need to explicitly model every element. We apply the proposed method to model the distribution of Body Mass Index and its changes over time in a sample population of American women obtained from the U.S. National Health and Nutrition Examination Survey. Comparing the results with those obtained from an individual-based model that captures the same phenomena shows that our proposed method delivers accurate results with less computation than the individual-based model. PMID:25620842
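One way to connect a micro rule to macro-level population moments without simulating every element, in the spirit of the method described (though not the paper's actual algorithm), is to propagate the closed-form mean and variance updates implied by a linear micro rule and check them against an individual-based run:

```python
import random
import statistics

# Hypothetical micro rule: each person's BMI relaxes toward a set point
# with rate K, plus independent noise (purely illustrative parameters).
K, SET_POINT, NOISE_SD, STEPS = 0.1, 27.0, 0.3, 50

def individual_based(n=5000, seed=0):
    """Brute force: simulate every element of the population."""
    rng = random.Random(seed)
    bmi = [rng.gauss(24.0, 4.0) for _ in range(n)]
    for _ in range(STEPS):
        bmi = [b + K * (SET_POINT - b) + rng.gauss(0.0, NOISE_SD) for b in bmi]
    return statistics.mean(bmi), statistics.stdev(bmi)

def aggregate_moments():
    """Macro level: the linear micro rule implies closed-form updates for
    the population mean and variance, so no elements are simulated."""
    mean, var = 24.0, 16.0
    for _ in range(STEPS):
        mean += K * (SET_POINT - mean)
        var = (1.0 - K) ** 2 * var + NOISE_SD ** 2
    return mean, var ** 0.5

m_ib, s_ib = individual_based()
m_ag, s_ag = aggregate_moments()
```

The two approaches agree closely, while the moment recursion does STEPS arithmetic updates instead of STEPS x N element updates, which is the computational saving the paper's comparison highlights.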
Maintenance of ventricular fibrillation in heterogeneous ventricle.
Arevalo, Hamenegild J; Trayanova, Natalia A
2006-01-01
Although ventricular fibrillation (VF) is the prevalent cause of sudden cardiac death, the mechanisms that underlie VF remain elusive. One possible explanation is that VF is driven by a single robust rotor that is the source of wavefronts that break up due to functional heterogeneities. Previous 2D computer simulations have proposed that a heterogeneity in background potassium current (IK1) can serve as the substrate for the formation of mother rotor activity. This study incorporates IK1 heterogeneity between the left and right ventricle in a realistic 3D rabbit ventricle model to examine its effects on the organization of VF. Computer simulations show that the IK1 heterogeneity contributes to the initiation and maintenance of VF by providing regions of different refractoriness, which serve as sites of wave break and rotor formation. A single rotor that drives the fibrillatory activity in the ventricle is not found in this study. Instead, multiple sites of reentry are recorded throughout the ventricle. Calculation of dominant frequencies for each myocardial node yields no significant difference between the dominant frequency of the LV and the RV. The 3D computer simulations suggest that IK1 spatial heterogeneity alone cannot lead to the formation of a stable rotor.
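Computing a dominant frequency per node amounts to locating the largest DFT magnitude of that node's activity trace. A stdlib-only sketch on a synthetic signal (the sampling rate and trace are made up):

```python
import cmath
import math

def dominant_frequency(signal, fs):
    """Dominant frequency of a real-valued trace: the positive frequency
    with the largest DFT magnitude, DC component excluded."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]          # drop the DC component
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        coeff = sum(centered[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k * fs / n

# 1 s of a synthetic 8 Hz "rotor-like" signal sampled at 128 Hz:
sig = [math.sin(2 * math.pi * 8 * t / 128) for t in range(128)]
print(dominant_frequency(sig, fs=128))  # 8.0
```

Running this per myocardial node and comparing the LV and RV distributions is the kind of analysis the study reports; in practice an FFT would replace this O(n^2) DFT.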
NASA Astrophysics Data System (ADS)
Blasi, Thomas; Buettner, Florian; Strasser, Michael K.; Marr, Carsten; Theis, Fabian J.
2017-06-01
Accessing gene expression at a single-cell level has unraveled often large heterogeneity among seemingly homogeneous cells, which remains obscured when using traditional population-based approaches. The computational analysis of single-cell transcriptomics data, however, still poses unresolved challenges with respect to normalization, visualization and modeling the data. One such issue is differences in cell size, which introduce additional variability into the data and for which appropriate normalization techniques are needed. Otherwise, these differences in cell size may obscure genuine heterogeneities among cell populations and lead to overdispersed steady-state distributions of mRNA transcript numbers. We present cgCorrect, a statistical framework to correct for differences in cell size that are due to cell growth in single-cell transcriptomics data. We derive the probability for the cell-growth-corrected mRNA transcript number given the measured, cell size-dependent mRNA transcript number, based on the assumption that the average number of transcripts in a cell increases proportionally to the cell’s volume during the cell cycle. cgCorrect can be used both for data normalization and for analysis of the steady-state distributions used to infer the gene expression mechanism. We demonstrate its applicability on simulated data, on single-cell quantitative real-time polymerase chain reaction (PCR) data from mouse blood stem and progenitor cells, and on quantitative single-cell RNA-sequencing data obtained from mouse embryonic stem cells. We show that correcting for differences in cell size affects the interpretation of the data obtained by typically performed computational analysis.
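A naive version of the cell-size correction, a simplified stand-in for cgCorrect's probabilistic model, rescales each measured count by the cell's relative volume. The simulation parameters below are invented for illustration:

```python
import random
import statistics

def cg_correct_naive(count, relative_volume):
    """Naive cell-growth correction: rescale a measured transcript count
    by the cell's volume relative to a reference, under the assumption
    that transcript number grows proportionally to cell volume."""
    return count / relative_volume

rng = random.Random(0)
# Simulate cells at uniform cell-cycle stages: volume doubles over the
# cycle, and expected transcript counts scale with volume.
true_rate = 100.0
cells = []
for _ in range(2000):
    vol = 1.0 + rng.random()              # relative volume in [1, 2)
    count = rng.gauss(true_rate * vol, 5.0)
    cells.append((count, vol))

raw = [c for c, v in cells]
corrected = [cg_correct_naive(c, v) for c, v in cells]
print(statistics.stdev(corrected) < statistics.stdev(raw))  # True
```

The volume-driven component of the variability disappears after correction, which is the sense in which cell-size differences would otherwise produce overdispersed transcript distributions.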
Integration of a CAD System Into an MDO Framework
NASA Technical Reports Server (NTRS)
Townsend, J. C.; Samareh, J. A.; Weston, R. P.; Zorumski, W. E.
1998-01-01
NASA Langley has developed a heterogeneous distributed computing environment, called the Framework for Inter-disciplinary Design Optimization, or FIDO. Its purpose has been to demonstrate framework technical feasibility and usefulness for optimizing the preliminary design of complex systems and to provide a working environment for testing optimization schemes. Its initial implementation has been for a simplified model of preliminary design of a high-speed civil transport. Upgrades being considered for the FIDO system include a more complete geometry description, required by high-fidelity aerodynamics and structures codes and based on a commercial Computer Aided Design (CAD) system. This report presents the philosophy behind some of the decisions that have shaped the FIDO system and gives a brief case study of the problems and successes encountered in integrating a CAD system into the FIDO framework.
Reproducible Large-Scale Neuroimaging Studies with the OpenMOLE Workflow Management System.
Passerat-Palmbach, Jonathan; Reuillon, Romain; Leclaire, Mathieu; Makropoulos, Antonios; Robinson, Emma C; Parisot, Sarah; Rueckert, Daniel
2017-01-01
OpenMOLE is a scientific workflow engine with a strong emphasis on workload distribution. Workflows are designed using a high level Domain Specific Language (DSL) built on top of Scala. It exposes natural parallelism constructs to easily delegate the workload resulting from a workflow to a wide range of distributed computing environments. OpenMOLE hides the complexity of designing complex experiments thanks to its DSL. Users can embed their own applications and scale their pipelines from a small prototype running on their desktop computer to a large-scale study harnessing distributed computing infrastructures, simply by changing a single line in the pipeline definition. The construction of the pipeline itself is decoupled from the execution context. The high-level DSL abstracts the underlying execution environment, contrary to classic shell-script based pipelines. These two aspects allow pipelines to be shared and studies to be replicated across different computing environments. Workflows can be run as traditional batch pipelines or coupled with OpenMOLE's advanced exploration methods in order to study the behavior of an application, or perform automatic parameter tuning. In this work, we briefly present the strong assets of OpenMOLE and detail recent improvements targeting re-executability of workflows across various Linux platforms. We have tightly coupled OpenMOLE with CARE, a standalone containerization solution that allows re-executing on a Linux host any application that has been packaged on another Linux host previously. The solution is evaluated against a Python-based pipeline involving packages such as scikit-learn as well as binary dependencies. All were packaged and re-executed successfully on various HPC environments, with identical numerical results (here prediction scores) obtained on each environment. Our results show that the pair formed by OpenMOLE and CARE is a reliable solution to generate reproducible results and re-executable pipelines. 
A demonstration of the flexibility of our solution showcases three neuroimaging pipelines harnessing distributed computing environments as heterogeneous as local clusters or the European Grid Infrastructure (EGI).
Heterogeneity of D-Serine Distribution in the Human Central Nervous System
Suzuki, Masataka; Imanishi, Nobuaki; Mita, Masashi; Hamase, Kenji; Aiso, Sadakazu; Sasabe, Jumpei
2017-01-01
D-serine is an endogenous ligand for N-methyl-D-aspartate glutamate receptors. Accumulating evidence, including genetic associations of D-serine metabolism with neurological or psychiatric diseases, suggests that D-serine is crucial in human neurophysiology. However, distribution and regulation of D-serine in humans are not well understood. Here, we found that D-serine is heterogeneously distributed in the human central nervous system (CNS). The cerebrum contains the highest level of D-serine among the areas in the CNS. There is heterogeneity in its distribution in the cerebrum and even within the cerebral neocortex. The neocortical heterogeneity is associated with Brodmann or functional areas but is unrelated to basic patterns of cortical layer structure or regional expressional variation of metabolic enzymes for D-serine. Such D-serine distribution may reflect functional diversity of glutamatergic neurons in the human CNS, which may serve as a basis for clinical and pharmacological studies on D-serine modulation. PMID:28604057
Interconnecting heterogeneous database management systems
NASA Technical Reports Server (NTRS)
Gligor, V. D.; Luckenbaugh, G. L.
1984-01-01
It is pointed out that there is still a great need for the development of improved communication between remote, heterogeneous database management systems (DBMS). Problems regarding the effective communication between distributed DBMSs are primarily related to significant differences between local data managers, local data models and representations, and local transaction managers. A system of interconnected DBMSs which exhibit such differences is called a network of distributed, heterogeneous DBMSs. In order to achieve effective interconnection of remote, heterogeneous DBMSs, the users must have uniform, integrated access to the different DBMSs. The present investigation is mainly concerned with an analysis of the existing approaches to interconnecting heterogeneous DBMSs, taking into account four experimental DBMS projects.
Discriminative Random Field Models for Subsurface Contamination Uncertainty Quantification
NASA Astrophysics Data System (ADS)
Arshadi, M.; Abriola, L. M.; Miller, E. L.; De Paolis Kaluza, C.
2017-12-01
Application of flow and transport simulators for prediction of the release, entrapment, and persistence of dense non-aqueous phase liquids (DNAPLs) and associated contaminant plumes is a computationally intensive process that requires specification of a large number of material properties and hydrologic/chemical parameters. Given its computational burden, this direct simulation approach is particularly ill-suited for quantifying both the expected performance and uncertainty associated with candidate remediation strategies under real field conditions. Prediction uncertainties primarily arise from limited information about contaminant mass distributions, as well as the spatial distribution of subsurface hydrologic properties. Application of direct simulation to quantify uncertainty would, thus, typically require simulating multiphase flow and transport for a large number of permeability and release scenarios to collect statistics associated with remedial effectiveness, a computationally prohibitive process. The primary objective of this work is to develop and demonstrate a methodology that employs measured field data to produce equi-probable stochastic representations of a subsurface source zone that capture the spatial distribution and uncertainty associated with key features that control remediation performance (i.e., permeability and contamination mass). Here we employ probabilistic models known as discriminative random fields (DRFs) to synthesize stochastic realizations of initial mass distributions consistent with known, and typically limited, site characterization data. Using a limited number of full scale simulations as training data, a statistical model is developed for predicting the distribution of contaminant mass (e.g., DNAPL saturation and aqueous concentration) across a heterogeneous domain. Monte-Carlo sampling methods are then employed, in conjunction with the trained statistical model, to generate realizations conditioned on measured borehole data. 
Performance of the statistical model is illustrated through comparisons of generated realizations with the "true" numerical simulations. Finally, we demonstrate how these realizations can be used to determine statistically optimal locations for further interrogation of the subsurface.
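The conditioning step can be illustrated with the bluntest possible Monte-Carlo scheme, rejection sampling against the borehole data; the actual work uses trained discriminative random fields rather than this toy prior:

```python
import random

def unconditional_field(n, rng):
    """Hypothetical prior for a 1-D log-permeability profile: a random
    level and slope plus cell-wise noise (a stand-in for draws from a
    trained discriminative random field)."""
    level = rng.gauss(0.0, 1.0)
    slope = rng.gauss(0.0, 0.1)
    return [level + slope * i + rng.gauss(0.0, 0.1) for i in range(n)]

def conditioned_realizations(boreholes, n=20, n_draws=20000, tol=0.3, seed=0):
    """Monte-Carlo conditioning by rejection: keep only realizations that
    reproduce the borehole measurements within a tolerance."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n_draws):
        field = unconditional_field(n, rng)
        if all(abs(field[i] - v) <= tol for i, v in boreholes.items()):
            kept.append(field)
    return kept

# Two hypothetical borehole measurements at cells 2 and 15:
reals = conditioned_realizations({2: 0.5, 15: 1.2})
```

The surviving realizations form an equi-probable ensemble consistent with the measurements, away from the boreholes they still disagree, and that residual spread is the prediction uncertainty the paper sets out to quantify.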
Zhong, Qing; Rüschoff, Jan H.; Guo, Tiannan; Gabrani, Maria; Schüffler, Peter J.; Rechsteiner, Markus; Liu, Yansheng; Fuchs, Thomas J.; Rupp, Niels J.; Fankhauser, Christian; Buhmann, Joachim M.; Perner, Sven; Poyet, Cédric; Blattner, Miriam; Soldini, Davide; Moch, Holger; Rubin, Mark A.; Noske, Aurelia; Rüschoff, Josef; Haffner, Michael C.; Jochum, Wolfram; Wild, Peter J.
2016-01-01
Recent large-scale genome analyses of human tissue samples have uncovered a high degree of genetic alterations and tumour heterogeneity in most tumour entities, independent of morphological phenotypes and histopathological characteristics. Assessment of genetic copy-number variation (CNV) and tumour heterogeneity by fluorescence in situ hybridization (ISH) provides additional tissue morphology at single-cell resolution, but it is labour intensive with limited throughput and high inter-observer variability. We present an integrative method combining bright-field dual-colour chromogenic and silver ISH assays with an image-based computational workflow (ISHProfiler), for accurate detection of molecular signals, high-throughput evaluation of CNV, expressive visualization of multi-level heterogeneity (cellular, inter- and intra-tumour heterogeneity), and objective quantification of heterogeneous genetic deletions (PTEN) and amplifications (19q12, HER2) in diverse human tumours (prostate, endometrial, ovarian and gastric), using various tissue sizes and different scanners, with unprecedented throughput and reproducibility. PMID:27052161
RSTensorFlow: GPU Enabled TensorFlow for Deep Learning on Commodity Android Devices
Alzantot, Moustafa; Wang, Yingnan; Ren, Zhengshuang; Srivastava, Mani B.
2018-01-01
Mobile devices have become an essential part of our daily lives. By virtue of both their increasing computing power and the recent progress made in AI, mobile devices have evolved to act as intelligent assistants in many tasks rather than a mere way of making phone calls. However, popular and commonly used tools and frameworks for machine intelligence still lack the ability to make proper use of the available heterogeneous computing resources on mobile devices. In this paper, we study the benefits of utilizing the heterogeneous (CPU and GPU) computing resources available on commodity Android devices while running deep learning models. We leveraged the heterogeneous computing framework RenderScript to accelerate the execution of deep learning models on commodity Android devices. Our system is implemented as an extension to the popular open-source framework TensorFlow. By integrating our acceleration framework tightly into TensorFlow, machine learning engineers can now easily take advantage of the heterogeneous computing resources on mobile devices without the need for any extra tools. We evaluate our system on different Android phone models to study the trade-offs of running different neural network operations on the GPU. We also compare the performance of running different model architectures, such as convolutional and recurrent neural networks, on the CPU only versus using heterogeneous computing resources. Our results show that GPUs on these phones offer substantial performance gains in matrix multiplication; consequently, models that involve multiplication of large matrices can run much faster (approx. 3 times faster in our experiments) with GPU support. PMID:29629431
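The core idea of routing a framework op to whichever backend kernel is available can be sketched abstractly. The registry below is purely illustrative: it is not the TensorFlow or RenderScript API, and the BLAS-backed path merely stands in for a GPU kernel.

```python
import numpy as np

# Toy op registry: maps (op name, backend) -> kernel implementation.
KERNELS = {}

def register(op_name, backend):
    def deco(fn):
        KERNELS[(op_name, backend)] = fn
        return fn
    return deco

@register("matmul", "cpu")
def matmul_cpu(a, b):
    # Naive triple loop: stands in for an unaccelerated reference kernel.
    n, k = a.shape
    _, m = b.shape
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            out[i, j] = sum(a[i, p] * b[p, j] for p in range(k))
    return out

@register("matmul", "gpu")
def matmul_gpu(a, b):
    # BLAS-backed matmul stands in for the accelerated GPU path.
    return a @ b

def run(op_name, *args, backend="cpu"):
    """Dispatch an op to the requested backend's kernel."""
    return KERNELS[(op_name, backend)](*args)

a = np.arange(6.0).reshape(2, 3)
b = np.arange(12.0).reshape(3, 4)
print(np.allclose(run("matmul", a, b, backend="cpu"),
                  run("matmul", a, b, backend="gpu")))  # both paths agree
```

The point of such a design is that callers never change: the same `run("matmul", ...)` call transparently benefits when a faster kernel is registered for the op, which mirrors why integrating acceleration into the framework spares engineers any extra tooling.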
Dose and scatter characteristics of a novel cone beam CT system for musculoskeletal extremities
NASA Astrophysics Data System (ADS)
Zbijewski, W.; Sisniega, A.; Vaquero, J. J.; Muhit, A.; Packard, N.; Senn, R.; Yang, D.; Yorkston, J.; Carrino, J. A.; Siewerdsen, J. H.
2012-03-01
A novel cone-beam CT (CBCT) system has been developed with promising capabilities for musculoskeletal imaging (e.g., weight-bearing extremities and combined radiographic / volumetric imaging). The prototype system demonstrates diagnostic-quality imaging performance, while the compact geometry and short scan orbit raise new considerations for scatter management and dose characterization that challenge conventional methods. The compact geometry leads to elevated, heterogeneous x-ray scatter distributions - even for small anatomical sites (e.g., knee or wrist), and the short scan orbit results in a non-uniform dose distribution. These complex dose and scatter distributions were investigated via experimental measurements and GPU-accelerated Monte Carlo (MC) simulation. The combination provided a powerful basis for characterizing dose distributions in patient-specific anatomy, investigating the benefits of an antiscatter grid, and examining distinct contributions of coherent and incoherent scatter in artifact correction. Measurements with a 16 cm CTDI phantom show that the dose from the short-scan orbit (0.09 mGy/mAs at isocenter) varies from 0.16 to 0.05 mGy/mAs at various locations on the periphery (all obtained at 80 kVp). MC estimation agreed with dose measurements within 10-15%. Dose distribution in patient-specific anatomy was computed with MC, confirming such heterogeneity and highlighting the elevated energy deposition in bone (factor of ~5-10) compared to soft-tissue. Scatter-to-primary ratio (SPR) up to ~1.5-2 was evident in some regions of the knee. A 10:1 antiscatter grid was found earlier to result in significant improvement in soft-tissue imaging performance without increase in dose. The results of MC simulations elucidated the mechanism behind scatter reduction in the presence of a grid. 
A ~3-fold reduction in average SPR was found in the MC simulations; however, a linear grid was found to impart additional heterogeneity in the scatter distribution, mainly due to the increase in the contribution of coherent scatter with increased spatial variation. Scatter correction using MC-generated scatter distributions demonstrated significant improvement in cupping and streak artifacts. Physical experimentation combined with GPU-accelerated MC simulation provided a sophisticated yet practical approach to identifying low-dose acquisition techniques, optimizing scatter correction methods, and evaluating patient-specific dose.
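The scatter-to-primary ratios quoted above come from full Monte Carlo photon transport; a toy 1-D slab model conveys the basic mechanism. All coefficients below are invented, only single scatter is tracked, and the forward-escape assumption is a crude simplification, not the authors' GPU-accelerated MC.

```python
import numpy as np

def slab_spr(mu_total, scatter_fraction, thickness, n_photons=200_000, seed=1):
    """Toy pencil-beam Monte Carlo: tally primary vs scattered photons at the
    exit plane of a homogeneous 1-D slab.
    mu_total: total attenuation coefficient [1/cm];
    scatter_fraction: share of interactions that scatter (rest absorb)."""
    rng = np.random.default_rng(seed)
    # Depth of first interaction, sampled from the exponential free path
    path = rng.exponential(1.0 / mu_total, n_photons)
    primary = np.count_nonzero(path >= thickness)      # cross without interacting
    n_interact = np.count_nonzero(path < thickness)
    scattered = np.count_nonzero(rng.random(n_interact) < scatter_fraction)
    # Crude assumption: half of the scattered photons still reach the exit plane
    return (0.5 * scattered) / primary

# Thicker anatomy -> lower primary transmission -> higher SPR
spr_thin = slab_spr(mu_total=0.2, scatter_fraction=0.8, thickness=5.0)
spr_thick = slab_spr(mu_total=0.2, scatter_fraction=0.8, thickness=15.0)
print(spr_thin < spr_thick)  # True
```

Even this caricature reproduces the qualitative point above: SPR grows rapidly with object size, which is why a compact geometry with the detector close to even a small anatomical site sees elevated scatter.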
Liu, Gang; Mac Gabhann, Feilim; Popel, Aleksander S.
2012-01-01
The process of oxygen delivery from capillary to muscle fiber is essential for a tissue with variable oxygen demand, such as skeletal muscle. Oxygen distribution in exercising skeletal muscle is regulated by convective oxygen transport in the blood vessels and by oxygen diffusion and consumption in the tissue. Spatial heterogeneities in oxygen supply, such as microvascular architecture and hemodynamic variables, have been observed experimentally, and their marked effects on oxygen exchange have been confirmed using mathematical models. In this study, we investigate the effects of heterogeneities in oxygen demand on the tissue oxygenation distribution using a multiscale oxygen transport model. Muscles are composed of different ratios of the various fiber types. Each fiber type has characteristic values of several parameters, including fiber size, oxygen consumption, myoglobin concentration, and oxygen diffusivity. Using experimentally measured parameters for different fiber types and applying them to the rat extensor digitorum longus muscle, we evaluated the effects of heterogeneous fiber size and fiber type properties on the oxygen distribution profile. Our simulation results suggest a marked increase in the spatial heterogeneity of oxygen due to the fiber size distribution in a mixed muscle. They also suggest that the combined effects of fiber type properties other than size do not contribute significantly to the tissue oxygen spatial heterogeneity. However, incorporating the difference in oxygen consumption rates of different fiber types alone causes higher oxygen heterogeneity compared with control cases with uniform fiber properties, whereas incorporating variation in other fiber type-specific properties, such as myoglobin concentration, causes little change in spatial tissue oxygenation profiles. PMID:23028531
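A one-dimensional diffusion-consumption toy problem illustrates the kind of transport calculation such models perform. This is not the authors' multiscale model: it assumes a single fiber with uniform consumption M and diffusivity D between a capillary at x=0 and a zero-flux boundary at x=L, and all parameter values (and units) are illustrative.

```python
import numpy as np

def tissue_po2(p0, M, D, L, n=200):
    """Steady-state 1-D oxygen tension: solves D * P'' = M by finite
    differences, with P(0) = p0 (capillary) and P'(L) = 0 (zero flux)."""
    x = np.linspace(0.0, L, n)
    h = x[1] - x[0]
    A = np.zeros((n, n))
    b = np.full(n, M * h * h / D)
    A[0, 0] = 1.0            # Dirichlet node at the capillary
    b[0] = p0
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
    A[n - 1, n - 2], A[n - 1, n - 1] = 2.0, -2.0   # ghost-node zero-flux closure
    return x, np.linalg.solve(A, b)

x, p = tissue_po2(p0=40.0, M=8.0e-4, D=5.0e-2, L=50.0)
# Closed-form solution of the same problem: p0 - (M/D) * (L*x - x^2/2)
exact = 40.0 - (8.0e-4 / 5.0e-2) * (50.0 * x - x ** 2 / 2.0)
print(float(np.max(np.abs(p - exact))) < 1e-6)  # FD matches the quadratic
```

Raising M for a "fast" fiber type steepens the PO2 profile and lowers the tension at the far boundary, which is the single-fiber version of the consumption-driven heterogeneity the abstract reports.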
Mathematical models of tumor heterogeneity and drug resistance
NASA Astrophysics Data System (ADS)
Greene, James
In this dissertation we develop mathematical models of tumor heterogeneity and drug resistance in cancer chemotherapy. Resistance to chemotherapy is one of the major causes of the failure of cancer treatment. Furthermore, recent experimental evidence suggests that drug resistance is a complex biological phenomenon, with many influences that interact nonlinearly. Here we study the influence of such heterogeneity on treatment outcomes, both in general frameworks and under specific mechanisms. We begin by developing a mathematical framework for describing multi-drug resistance in cancer. Heterogeneity is reflected by a continuous parameter, which can either describe a single resistance mechanism (such as the expression of P-gp in the cellular membrane) or account for the cumulative effect of several mechanisms and factors. The model is written as a system of integro-differential equations, structured by the continuous "trait," and includes density effects as well as mutations. We study the limiting behavior of the model, both analytically and numerically, and apply it to study treatment protocols. We next study a specific mechanism of tumor heterogeneity and its influence on cell growth: the cell cycle. We derive two novel mathematical models, a stochastic agent-based model and an integro-differential equation model, each of which describes the growth of cancer cells as a dynamic transition between proliferative and quiescent states. By examining the role all parameters play in the evolution of intrinsic tumor heterogeneity, and the sensitivity of the population growth to parameter values, we show that the cell-cycle length has the most significant effect on the growth dynamics. In addition, we demonstrate that the agent-based model can be approximated well by the more computationally efficient integro-differential equations when the number of cells is large.
The model is closely tied to experimental data of cell growth, and includes a novel implementation of transition rates as a function of global density. Finally, we extend the model of cell-cycle heterogeneity to include spatial variables. Cells are modeled as soft spheres and exhibit attraction/repulsion/random forces. A fundamental hypothesis is that cell-cycle length increases with local density, thus producing a distribution of observed division lengths. Apoptosis occurs primarily through an extended period of unsuccessful proliferation, and the explicit mechanism of the drug (Paclitaxel) is modeled as an increase in cell-cycle duration. We show that the distribution of cell-cycle lengths is highly time-dependent, with close time-averaged agreement with the distribution used in the previous work. Furthermore, survival curves are calculated and shown to qualitatively agree with experimental data in different densities and geometries, thus relating the cellular microenvironment to drug resistance.
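The proliferative/quiescent transition structure described above can be caricatured in a few lines of ODE integration. This sketch is not the dissertation's integro-differential or agent-based model: it collapses the continuous trait to two compartments, and every parameter value is invented for illustration, including the density dependence of the quiescence rate.

```python
def grow(t_end=200.0, dt=0.01, beta=0.05, k_pq0=0.01, k_qp=0.005,
         delta=0.001, K=1.0e4):
    """Two-compartment caricature: proliferative cells P divide at rate beta
    and enter quiescence Q at a rate that rises with total density N, echoing
    the global-density-dependent transition rates in the abstract."""
    P, Q = 100.0, 0.0
    for _ in range(int(t_end / dt)):
        N = P + Q
        k_pq = k_pq0 * (1.0 + 10.0 * N / K)   # density-dependent quiescence
        dP = (beta - k_pq) * P + k_qp * Q     # division, exit to Q, re-entry
        dQ = k_pq * P - (k_qp + delta) * Q    # entry from P, re-entry, death
        P += dt * dP
        Q += dt * dQ
    return P, Q

P, Q = grow()
print(P > 0 and Q > 0)  # both compartments populated at the end
```

As the population grows, an increasing share of cells sits in the quiescent compartment, which slows net growth; this is the qualitative mechanism by which density-dependent cell-cycle lengthening produces the saturating growth curves the models above are fit to.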
NASA Astrophysics Data System (ADS)
Kuge, K.; Kase, Y.; Urata, Y.; Campos, J.; Perez, A.
2008-12-01
The physical mechanism of intermediate-depth earthquakes remains unresolved; dehydration embrittlement in subducting plates is one candidate. An earthquake of Mw 7.8 occurred at a depth of 115 km beneath Tarapaca, Chile. In this study, we suggest that the earthquake rupture can be attributed to heterogeneous fluid distribution across the subducting plate. The distribution of aftershocks suggests that the earthquake occurred on a subhorizontal fault plane. By modeling regional waveforms, we determined the spatiotemporal distribution of moment release on the fault plane, testing different suites of velocity models and hypocenters. Two patches of high slip were robustly obtained, although their geometry tends to vary. We tested the results separately by computing synthetic teleseismic P and pP waveforms. Observed P waveforms are generally well modeled, whereas the two pulses of observed pP require that the two patches lie in the WNW-ESE direction. From the selected moment-release evolution, a dynamic rupture model was constructed by means of the method of Mikumo et al. (1998). The model shows two patches of high dynamic stress drop. Notable is a region of negative stress drop between the two patches, required so that the region could lack wave radiation yet still propagate rupture from the first patch to the second. We found from teleseismic P that the radiation efficiency of the earthquake is relatively small, which can support the existence of negative stress drop during the rupture. The heterogeneous distribution of stress drop that we found can be caused by fluid. The T-P condition of dehydration explains the locations of double seismic zones (e.g. Hacker et al., 2003). The distance between the two patches of high stress drop agrees with the distance between the upper and lower layers of the double seismic zone observed to the south (Rietbrock and Waldhauser, 2004).
The two patches can be parts of the double seismic zone, indicating the existence of fluid from dehydration, whereas the region of negative stress drop is in the absence of fluid. In the background environment of negative stress drop, fluid can change the negative stress drop to positive, due to pore pressure variation (e.g. thermal pressurization).
Monte Carlo Estimation of Absorbed Dose Distributions Obtained from Heterogeneous 106Ru Eye Plaques.
Zaragoza, Francisco J; Eichmann, Marion; Flühs, Dirk; Sauerwein, Wolfgang; Brualla, Lorenzo
2017-09-01
The distribution of the emitter substance in 106Ru eye plaques is usually assumed to be homogeneous for treatment planning purposes. However, this distribution is never homogeneous, and it differs widely from plaque to plaque due to manufacturing factors. By Monte Carlo simulation of radiation transport, we study the absorbed dose distribution obtained from the specific CCA1364 and CCB1256 106Ru plaques, whose actual emitter distributions were measured. The idealized, homogeneous CCA and CCB plaques are also simulated. The largest discrepancy in depth dose distribution observed between the heterogeneous and the homogeneous plaques was 7.9 and 23.7% for the CCA and CCB plaques, respectively. In terms of isodose lines, the line referring to 100% of the reference dose penetrates 0.2 and 1.8 mm deeper in the case of the heterogeneous CCA and CCB plaques, respectively, with respect to their homogeneous counterparts. The observed differences in absorbed dose distributions obtained from heterogeneous and homogeneous plaques are clinically irrelevant if the plaques are used with a lateral safety margin of at least 2 mm. However, these differences may be relevant if the plaques are used in eccentric positioning.
Chemical and seismological constraints on mantle heterogeneity.
Helffrich, George
2002-11-15
Recent seismological studies that use scattered waves to detect heterogeneities in the mantle reveal the presence of a small, distributed elastic heterogeneity in the lower mantle which does not appear to be thermal in nature. The characteristic size of these heterogeneities appears to be ca. 8 km, suggesting that they represent subducted recycled oceanic crust. With this stimulus, old ideas that the mantle is heterogeneous in structure, rather than stratified, are reinterpreted and a simple, end-member model for the heterogeneity structure is proposed. The volumetrically largest components in the model are recycled oceanic crust, which contains the heat-producing elements, and mantle depleted of these and other incompatible trace elements. About 10% of the mantle's mass is made up of recycled oceanic crust, which is associated with the observed small-scale seismic heterogeneity. The way this heterogeneity is distributed is in convectively stretched and thinned bodies ranging downwards in size from 8 km. With the present techniques to detect small bodies through scattering, only ca. 55% of the mantle's small-scale heterogeneities are detectable seismically.
An Experimental Framework for Executing Applications in Dynamic Grid Environments
NASA Technical Reports Server (NTRS)
Huedo, Eduardo; Montero, Ruben S.; Llorente, Ignacio M.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The Grid opens up opportunities for resource-starved scientists and engineers to harness highly distributed computing resources. A number of Grid middleware projects are currently available to support the simultaneous exploitation of heterogeneous resources distributed in different administrative domains. However, efficient job submission and management continue to be far from accessible to ordinary scientists and engineers due to the dynamic and complex nature of the Grid. This report describes a new Globus-based framework that allows an easier and more efficient execution of jobs in a 'submit and forget' fashion. Adaptation to dynamic Grid conditions is achieved by supporting automatic application migration in response to performance degradation, discovery of a 'better' resource, requirement changes, owner decisions, or remote resource failure. The report also includes experimental results on the behavior of our framework on the TRGP testbed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel
Coupled-cluster methods provide highly accurate models of molecular structure through explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix–matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy-efficient manner. We achieve up to 240× speedup compared with the optimized shared memory implementation of Libtensor. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 and XC40, and IBM Blue Gene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from compute-bound DGEMMs to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load imbalance: tasking and bulk synchronous models. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.
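The mapping of tensor contractions onto matrix-matrix multiplication that such libraries exploit can be illustrated with NumPy. The dimensions and the particular contraction are toy choices, and this is plain `einsum`/GEMM, not the Libtensor API or its symmetry-packed storage.

```python
import numpy as np

rng = np.random.default_rng(0)
no, nv = 4, 6   # occupied / virtual dimensions (toy sizes)
t2 = rng.standard_normal((no, no, nv, nv))   # amplitudes t[i,j,c,d]
v = rng.standard_normal((nv, nv, nv, nv))    # integrals  v[a,b,c,d]

# A typical CC-style contraction: r[i,j,a,b] = sum_{c,d} v[a,b,c,d] * t[i,j,c,d]
r_einsum = np.einsum("abcd,ijcd->ijab", v, t2)

# The same contraction cast as a single matrix-matrix multiply, which is how
# tensor libraries map contractions onto DGEMM: fuse (a,b) and (c,d) indices.
v_mat = v.reshape(nv * nv, nv * nv)    # rows (ab), cols (cd)
t_mat = t2.reshape(no * no, nv * nv)   # rows (ij), cols (cd)
r_gemm = (t_mat @ v_mat.T).reshape(no, no, nv, nv)

print(np.allclose(r_einsum, r_gemm))  # True
```

The reshape-to-GEMM trick is what lets a contraction run at BLAS speed; the symmetry exploitation mentioned in the abstract goes further by storing and multiplying only symmetry-unique blocks of `v` and `t2`.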
Rhodes, Kirsty M; Turner, Rebecca M; Higgins, Julian P T
2015-01-01
Estimation of between-study heterogeneity is problematic in small meta-analyses. Bayesian meta-analysis is beneficial because it allows incorporation of external evidence on heterogeneity. To facilitate this, we provide empirical evidence on the likely heterogeneity between studies in meta-analyses relating to specific research settings. Our analyses included 6,492 continuous-outcome meta-analyses within the Cochrane Database of Systematic Reviews. We investigated the influence of meta-analysis settings on heterogeneity by modeling study data from all meta-analyses on the standardized mean difference scale. Meta-analysis setting was described according to outcome type, intervention comparison type, and medical area. Predictive distributions for between-study variance expected in future meta-analyses were obtained, which can be used directly as informative priors. Among outcome types, heterogeneity was found to be lowest in meta-analyses of obstetric outcomes. Among intervention comparison types, heterogeneity was lowest in meta-analyses comparing two pharmacologic interventions. Predictive distributions are reported for different settings. In two example meta-analyses, incorporating external evidence led to a more precise heterogeneity estimate. Heterogeneity was influenced by meta-analysis characteristics. Informative priors for between-study variance were derived for each specific setting. Our analyses thus assist the incorporation of realistic prior information into meta-analyses including few studies. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
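The workflow the abstract describes — estimating between-study variance and then stabilizing it with an informative prior — can be sketched as follows. The study data and the log-normal prior parameters below are placeholders, not values from the paper, and the grid approximation is a simplification of a full Bayesian meta-analysis.

```python
import numpy as np

def dl_tau2(y, v):
    """DerSimonian-Laird moment estimator of the between-study variance."""
    w = 1.0 / v
    ybar = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - ybar) ** 2)
    C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, float((Q - (len(y) - 1)) / C))

def posterior_tau2_mean(y, v, mu_log, sd_log,
                        grid=np.linspace(1e-4, 1.0, 2000)):
    """Posterior mean of tau^2 under a log-normal prior, by grid approximation
    of prior x random-effects marginal likelihood."""
    log_post = -np.log(grid) - (np.log(grid) - mu_log) ** 2 / (2 * sd_log ** 2)
    for i, t2 in enumerate(grid):
        w = 1.0 / (v + t2)
        mu_hat = np.sum(w * y) / np.sum(w)
        log_post[i] += 0.5 * np.sum(np.log(w)) - 0.5 * np.sum(w * (y - mu_hat) ** 2)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    return float(np.sum(grid * post))

y = np.array([0.10, 0.60, 1.10])   # standardized mean differences (invented)
v = np.array([0.04, 0.06, 0.05])   # within-study variances (invented)
print(round(dl_tau2(y, v), 3))     # moment estimate from 3 studies: 0.226
print(0.0 < posterior_tau2_mean(y, v, mu_log=-2.5, sd_log=1.0) < 1.0)
```

With only three studies the moment estimate is very unstable; multiplying in a predictive prior of the kind the paper derives pulls the estimate toward values typical of the relevant research setting, which is exactly the "more precise heterogeneity estimate" effect reported above.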
Heterogeneous Distribution of Chromium on Mercury
NASA Astrophysics Data System (ADS)
Nittler, L. R.; Boujibar, A.; Crapster-Pregont, E.; Frank, E. A.; McCoy, T. J.; McCubbin, F. M.; Starr, R. D.; Vander Kaaden, K. E.; Vorburger, A.; Weider, S. Z.
2018-05-01
Mercury's surface has an average Cr/Si ratio of 0.003 (Cr 800 ppm), with at least a factor of 2 systematic uncertainty. Cr is heterogeneously distributed and correlated with Mg, Ca, S, and Fe and anti-correlated with Al.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, Imad, E-mail: iali@ouhsc.edu; Ahmad, Salahuddin
2013-10-01
To compare the doses calculated using the BrainLAB pencil beam (PB) and Monte Carlo (MC) algorithms for tumors located in various sites including the lung, and to evaluate quality assurance procedures required for the verification of the accuracy of dose calculation. The dose-calculation accuracy of PB and MC was also assessed quantitatively with measurements using an ionization chamber and Gafchromic films placed in solid water and heterogeneous phantoms. The dose was calculated using the PB convolution and MC algorithms in the iPlan treatment planning system from BrainLAB. The dose calculation was performed on the patients' computed tomography images with lesions in various treatment sites including 5 lung, 5 prostate, 4 brain, 2 head and neck, and 2 paraspinal cases. A combination of conventional, conformal, and intensity-modulated radiation therapy plans was used in dose calculation. The leaf sequences from intensity-modulated radiation therapy plans or beam shapes from conformal plans, monitor units, and other planning parameters calculated by PB were identical to those used for calculating dose with MC. Heterogeneity correction was considered in both PB and MC dose calculations. Dose-volume parameters such as V95 (volume covered by 95% of the prescription dose), dose distributions, and gamma analysis were used to evaluate the doses calculated by PB and MC. The doses measured by ionization chamber and EBT GAFCHROMIC film in solid water and heterogeneous phantoms were used to quantitatively assess the accuracy of the doses calculated by PB and MC. The dose-volume histograms and dose distributions calculated by PB and MC in the brain, prostate, paraspinal, and head and neck were in good agreement with one another (within 5%) and provided acceptable planning target volume coverage. However, dose distributions of the patients with lung cancer had large discrepancies.
For a plan optimized with PB, the dose coverage appeared clinically acceptable, whereas in reality, MC showed a systematic lack of dose coverage. The dose calculated by PB for lung tumors was overestimated by up to 40%. An interesting observation is that despite large discrepancies in dose-volume histogram coverage of the planning target volume between PB and MC, the point doses at the isocenter (center of the lesions) calculated by the two algorithms agreed within 7%, even for lung cases. The dose distributions measured with EBT GAFCHROMIC films in heterogeneous phantoms were nearly 15% lower than PB predictions at interfaces between heterogeneous media, and these lower measured doses agreed with those calculated by MC. The doses (V95) calculated by MC and PB agreed within 5% for treatment sites with small tissue heterogeneities such as the prostate, brain, head and neck, and paraspinal tumors. Considerable discrepancies, up to 40%, were observed in the dose-volume coverage between MC and PB in lung tumors, which may affect clinical outcomes. The discrepancies between MC and PB increased for 15 MV compared with 6 MV, indicating the importance of implementing accurate dose-calculation algorithms such as MC in clinical treatment planning. The comparison of point doses is not representative of the discrepancies in dose coverage and might be misleading in evaluating the accuracy of dose calculation between PB and MC. Thus, clinical quality assurance procedures intended to verify the accuracy of dose calculation using PB and MC need to consider measurements of 2- and 3-dimensional dose distributions, rather than single point measurements, using heterogeneous phantoms instead of homogeneous water-equivalent phantoms.
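The 2-D/3-D distribution comparisons recommended above are commonly quantified with a gamma analysis. A minimal 1-D version is sketched below; the profiles, the 3%/3 mm criteria, and the brute-force search are illustrative, not the clinical tool or the authors' implementation.

```python
import numpy as np

def gamma_index(dose_ref, dose_eval, x, dta=3.0, dd=0.03):
    """1-D global gamma analysis. dd: dose-difference criterion as a fraction
    of the reference maximum; dta: distance-to-agreement in units of x.
    gamma <= 1 at a point means the evaluated distribution passes there."""
    d_norm = dd * dose_ref.max()
    gam = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xi) / dta) ** 2          # spatial term vs every point
        dose2 = ((dose_eval - di) / d_norm) ** 2  # dose term vs every point
        gam[i] = np.sqrt(np.min(dist2 + dose2))
    return gam

x = np.linspace(0.0, 100.0, 101)                 # positions in mm
ref = np.exp(-((x - 50.0) / 20.0) ** 2)          # reference dose profile
ev = 1.02 * np.exp(-((x - 51.0) / 20.0) ** 2)    # 2% scaled, 1 mm shifted
gam = gamma_index(ref, ev, x)
pass_rate = float(np.mean(gam <= 1.0))
print(pass_rate > 0.95)  # small perturbations pass 3%/3 mm almost everywhere
```

A point-dose check is the degenerate dta=0 comparison at a single location, which is exactly why the abstract warns that isocenter agreement can mask large coverage discrepancies elsewhere in the distribution.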
Hadad, K; Zohrevand, M; Faghihi, R; Sedighi Pashaki, A
2015-03-01
HDR brachytherapy is one of the most common methods of nasopharyngeal cancer treatment. In this method, depending on how advanced the tumor is, a dose of 2 to 6 Gy is prescribed as intracavitary brachytherapy. Due to the high dose rate and the tumor location, accuracy evaluation of the treatment planning system (TPS) is particularly important. Common methods used in TPS dosimetry are based on computations in a homogeneous phantom. Heterogeneous phantoms, especially patient-specific voxel phantoms, can increase dosimetric accuracy. In this study, using CT images taken from a patient and ctcreate (a part of the DOSXYZnrc computational code), a patient-specific phantom was made. The dose distribution was plotted by DOSXYZnrc and compared with that of the TPS. Also, by extracting the absorbed dose of the voxels in the treatment volume, dose-volume histograms (DVHs) were plotted and compared with Oncentra™ TPS DVHs. The results from the calculations were compared with data from the Oncentra™ treatment planning system, and it was observed that the TPS calculation predicts a lower dose in areas near the source and a higher dose in areas far from the source relative to the MC code. Absorbed dose values in the voxels also showed that the D90 value reported by the TPS is 40% higher than that from the Monte Carlo method. Today, most treatment planning systems use the TG-43 protocol. This protocol may result in errors such as neglecting tissue heterogeneity, scattered radiation, and applicator attenuation. Due to these errors, the AAPM emphasized departing from the TG-43 protocol and moving toward the new brachytherapy protocol TG-186, in which a patient-specific phantom is used and heterogeneities are taken into account in dosimetry.
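The DVH and D90 quantities compared above are straightforward to compute once per-voxel doses have been extracted. The sketch below uses synthetic voxel doses, not patient data or the DOSXYZnrc/Oncentra™ outputs.

```python
import numpy as np

def dvh(doses, n_bins=100):
    """Cumulative dose-volume histogram: for each dose level, the fraction of
    the volume receiving at least that dose."""
    levels = np.linspace(0.0, doses.max(), n_bins)
    volume = np.array([(doses >= d).mean() for d in levels])
    return levels, volume

def d90(doses):
    """D90: minimum dose received by the hottest 90% of the volume, i.e. the
    10th percentile of the voxel-dose distribution."""
    return float(np.percentile(doses, 10.0))

# Illustrative voxel doses (Gy) in a target volume -- synthetic, not measured
rng = np.random.default_rng(3)
doses = rng.normal(4.0, 0.5, 10_000).clip(min=0.0)
levels, volume = dvh(doses)
print(volume[0] == 1.0)  # the whole volume receives at least zero dose
```

Comparing two dose engines then reduces to running `dvh` and `d90` on each engine's voxel doses for the same treatment volume; a 40% disagreement in D90, as reported above, shows up directly in this comparison.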
NASA Astrophysics Data System (ADS)
Gonzales, H. B.; Ravi, S.; Li, J. J.; Sankey, J. B.
2016-12-01
Hydrological and aeolian processes control the redistribution of soil and nutrients in arid and semi-arid environments, thereby contributing to the formation of heterogeneous patchy landscapes with nutrient-rich resource islands surrounded by nutrient-depleted bare soil patches. The differential trapping of soil particles by vegetation canopies may result in textural changes beneath the vegetation, which, in turn, can alter hydrological processes such as infiltration and runoff. We conducted infiltration experiments and soil grain size analysis of several shrub (Larrea tridentata) and grass (Bouteloua eriopoda) microsites in a heterogeneous landscape in the Chihuahuan desert (New Mexico, USA). Our results indicate heterogeneity in soil texture and infiltration patterns under grass and shrub microsites. We assessed the trapping effectiveness of vegetation canopies using a novel computational fluid dynamics (CFD) approach. The open-source software OpenFOAM was used to validate the data gathered from particle size distribution (PSD) analysis of soil within the shrub and grass microsites and their porosities (91% for shrub and 68% for grass) determined using terrestrial LiDAR surveys. Three-dimensional architectures of the shrub and grass were created using the open-source computer-aided design (CAD) software Blender. The readily available solvers within the OpenFOAM architecture were modified to test the validity of, and optimize, input parameters in assessing the trapping efficiencies of sparse vegetation against aeolian sediment flux. The results from the numerical simulations explained the observed textural changes under grass and shrub canopies and highlighted the role of sediment trapping by canopies in structuring patch-scale hydrological processes.
Hadad, K.; Zohrevand, M.; Faghihi, R.; Sedighi Pashaki, A.
2015-01-01
Background HDR brachytherapy is one of the most common methods of nasopharyngeal cancer treatment. In this method, depending on how advanced the tumor is, a dose of 2 to 6 Gy is prescribed as intracavitary brachytherapy. Due to the high dose rate and the tumor location, accuracy evaluation of the treatment planning system (TPS) is particularly important. Common methods used in TPS dosimetry are based on computations in a homogeneous phantom. Heterogeneous phantoms, especially patient-specific voxel phantoms, can increase dosimetric accuracy. Materials and Methods In this study, using CT images taken from a patient and ctcreate, which is part of the DOSXYZnrc computational code, a patient-specific phantom was made. The dose distribution was plotted by DOSXYZnrc and compared with that of the TPS. Also, by extracting the absorbed dose of the voxels in the treatment volume, dose-volume histograms (DVHs) were plotted and compared with the Oncentra™ TPS DVHs. Results The calculations were compared with data from the Oncentra™ treatment planning system, and it was observed that relative to the MC code, the TPS calculation predicts lower dose in areas near the source and higher dose in areas far from the source. Absorbed dose values in the voxels also showed that the D90 value reported by the TPS is 40% higher than that of the Monte Carlo method. Conclusion Today, most treatment planning systems use the TG-43 protocol. This protocol may result in errors such as neglecting tissue heterogeneity, scattered radiation, and applicator attenuation. Due to these errors, the AAPM has emphasized moving from the TG-43 protocol toward the new brachytherapy protocol TG-186, in which patient-specific phantoms are used and heterogeneities are accounted for in dosimetry. PMID:25973408
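The DVH comparison described above can be illustrated with a minimal sketch of how a cumulative DVH and the D90 metric are computed from voxel doses (illustrative only; the Gaussian voxel doses are invented, not the paper's data):

```python
import numpy as np

def cumulative_dvh(doses, n_bins=100):
    """Cumulative DVH: fraction of the volume receiving at least each dose level."""
    doses = np.asarray(doses, dtype=float)
    levels = np.linspace(0.0, doses.max(), n_bins)
    volume_fraction = np.array([(doses >= d).mean() for d in levels])
    return levels, volume_fraction

def d90(doses):
    """D90: dose received by at least 90% of the volume (10th percentile of voxel doses)."""
    return np.percentile(doses, 10.0)

rng = np.random.default_rng(0)
voxel_doses = rng.normal(4.0, 0.5, size=10_000).clip(min=0)  # toy voxel doses (Gy)
levels, vf = cumulative_dvh(voxel_doses)
```

A 40% disagreement in D90, as reported above, would appear as an offset between the two cumulative curves at the 90% volume level.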
Norton, Kerri-Ann; Jin, Kideok; Popel, Aleksander S
2018-05-08
A hallmark of breast tumors is their spatial heterogeneity, which includes not only the distribution of cancer stem cells and progenitor cells but also heterogeneity in the tumor microenvironment. In this study we focus on the contributions of stromal cells, specifically macrophages, fibroblasts, and endothelial cells, to tumor progression. We develop a computational model of triple-negative breast cancer based on our previous work and expand it to include macrophage infiltration, fibroblasts, and angiogenesis. In vitro studies have shown that the secretomes of tumor-educated macrophages and fibroblasts increase both the migration and proliferation rates of triple-negative breast cancer cells. In vivo studies also demonstrated that blocking signaling of selected secreted factors inhibits tumor growth and metastasis in mouse xenograft models. We investigate the influences of increased migration and proliferation rates on tumor growth, the effect of the presence of fibroblasts or macrophages on growth and morphology, and the contributions of macrophage infiltration to tumor growth. We find that while the presence of macrophages increases overall tumor growth, the increase in macrophage infiltration does not substantially increase tumor growth and can even stifle tumor growth at excessive rates. Copyright © 2018. Published by Elsevier Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Lin; Dai, Zhenxue; Gong, Huili
Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with different optimized integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
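The analytical transition-probability solution mentioned above can be sketched as a matrix exponential of a transition-rate matrix built from mean lengths and volumetric proportions (a minimal illustration under a common "background-facies" closure; the three-facies parameters are invented, not the Chaobai River values):

```python
import numpy as np
from scipy.linalg import expm

def rate_matrix(mean_lengths, proportions):
    """Transition-rate matrix R: r_kk = -1/L_k and, under the 'background
    facies' closure, r_kj = p_j / ((1 - p_k) * L_k) for j != k, so that
    every row sums to zero (proportions must sum to 1)."""
    L = np.asarray(mean_lengths, float)
    p = np.asarray(proportions, float)
    n = len(L)
    R = np.empty((n, n))
    for k in range(n):
        for j in range(n):
            R[k, j] = -1.0 / L[k] if j == k else p[j] / ((1.0 - p[k]) * L[k])
    return R

def transition_probabilities(lag, mean_lengths, proportions):
    """Analytical transition-probability matrix T(h) = expm(R * h)."""
    return expm(rate_matrix(mean_lengths, proportions) * lag)

# three invented hydrofacies: mean lengths (m) and volumetric proportions
T = transition_probabilities(5.0, [10.0, 4.0, 6.0], [0.5, 0.2, 0.3])
```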
Hydrologic and geochemical data assimilation at the Hanford 300 Area
NASA Astrophysics Data System (ADS)
Chen, X.; Hammond, G. E.; Murray, C. J.; Zachara, J. M.
2012-12-01
In modeling the uranium migration within the Integrated Field Research Challenge (IFRC) site at the Hanford 300 Area, uncertainties arise from both hydrologic and geochemical sources. The hydrologic uncertainty includes the transient flow boundary conditions induced by dynamic variations in Columbia River stage and the underlying heterogeneous hydraulic conductivity field, while the geochemical uncertainty is a result of limited knowledge of the geochemical reaction processes and parameters, as well as heterogeneity in uranium source terms. In this work, multiple types of data, including the results from constant-injection tests, borehole flowmeter profiling, and conservative tracer tests, are sequentially assimilated across scales within a Bayesian framework to reduce the hydrologic uncertainty. The hydrologic data assimilation is then followed by geochemical data assimilation, where the goal is to infer the heterogeneous distribution of uranium sources using uranium breakthrough curves from a desorption test that took place at a high spring water table. We demonstrate in our study that ensemble-based data assimilation techniques (the ensemble Kalman filter and smoother) are efficient in integrating multiple types of data sequentially for uncertainty reduction. The computational demand is managed by using the multi-realization capability within the parallel PFLOTRAN simulator.
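The ensemble Kalman filter update at the core of such sequential assimilation can be sketched as follows (a generic stochastic-EnKF analysis step with perturbed observations, not the PFLOTRAN workflow; the toy state and observation are invented):

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """Stochastic EnKF analysis step with perturbed observations.
    X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observations;
    H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs-error covariance."""
    n_obs, n_ens = len(y), X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)         # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                     # sample state covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
    return X + K @ (Y - H @ X)                    # analysis ensemble

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(3, 200))  # toy prior: 3 states, 200 members
H = np.array([[1.0, 0.0, 0.0]])          # observe the first state only
Xa = enkf_update(X, np.array([2.0]), H, np.array([[0.1]]), rng)
```

After the update, the observed state's ensemble mean moves toward the observation and its spread shrinks, which is the uncertainty reduction the abstract refers to.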
NASA Astrophysics Data System (ADS)
Lucas, Charles E.; Walters, Eric A.; Jatskevich, Juri; Wasynczuk, Oleg; Lamm, Peter T.
2003-09-01
In this paper, a new technique useful for the numerical simulation of large-scale systems is presented. This approach enables the overall system simulation to be formed by the dynamic interconnection of the various interdependent simulations, each representing a specific component or subsystem such as control, electrical, mechanical, hydraulic, or thermal. Each simulation may be developed separately using possibly different commercial-off-the-shelf simulation programs, thereby allowing the most suitable language or tool to be used based on the design/analysis needs. These subsystems communicate the required interface variables at specific time intervals. A discussion concerning the selection of appropriate communication intervals is presented herein. For the purpose of demonstration, this technique is applied to a detailed simulation of a representative aircraft power system, such as that found on the Joint Strike Fighter (JSF). This system comprises ten component models, each developed using MATLAB/Simulink, EASY5, or ACSL. When the ten component simulations were distributed across just four personal computers (PCs), a greater than 15-fold improvement in simulation speed (compared to the single-computer implementation) was achieved.
Liu, Huolong; Li, Mingzhong
2014-11-20
In this work a two-compartmental population balance model (TCPBM) was proposed to model pulsed top-spray fluidized bed granulation. The proposed TCPBM captured the spatially heterogeneous granulation mechanisms of granule growth by dividing the granulator into two perfectly mixed zones: a wetting compartment, in which the aggregation mechanism was assumed, and a drying compartment, in which the breakage mechanism was considered. The sizes of the wetting and drying compartments were constant in the TCPBM, with 30% of the bed forming the wetting compartment and 70% the drying compartment. The exchange rate of particles between the wetting and drying compartments was determined by the details of the flow properties and distribution of particles predicted by computational fluid dynamics (CFD) simulation. Experimental validation showed that the proposed TCPBM can accurately predict the evolution of granule size and distribution within the granulator under different binder spray operating conditions. Copyright © 2014 Elsevier B.V. All rights reserved.
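The inter-compartment particle exchange in such a two-zone model can be sketched with a simple explicit-Euler exchange between two well-mixed compartments (illustrative only; it omits the aggregation and breakage kernels, and the rates are invented, chosen so the steady state holds 30% of the particles in the wetting zone):

```python
import numpy as np

def simulate_exchange(n_wet, n_dry, k_wd, k_dw, dt, steps):
    """Particle exchange between two well-mixed compartments (explicit Euler).
    Outflow from each compartment is proportional to its current content;
    at steady state n_wet / n_dry = k_dw / k_wd."""
    hist = []
    for _ in range(steps):
        flux_wd = k_wd * n_wet * dt   # wetting -> drying
        flux_dw = k_dw * n_dry * dt   # drying -> wetting
        n_wet += flux_dw - flux_wd
        n_dry += flux_wd - flux_dw
        hist.append((n_wet, n_dry))
    return np.array(hist)

# invented rates giving a 30/70 wetting/drying split at steady state
hist = simulate_exchange(5000.0, 5000.0, k_wd=0.7, k_dw=0.3, dt=0.01, steps=2000)
```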
Integration science and distributed networks
NASA Astrophysics Data System (ADS)
Landauer, Christopher; Bellman, Kirstie L.
2002-07-01
Our work on integration of data and knowledge sources is based on a common theoretical treatment of 'Integration Science', which leads to systematic processes for combining formal logical and mathematical systems, computational and physical systems, and human systems and organizations. The theory is based on the processing of explicit meta-knowledge about the roles played by the different knowledge sources and the methods of analysis and semantic implications of the different data values, together with information about the context in which and the purpose for which they are being combined. The research treatment is primarily mathematical, and though this kind of integration mathematics is still under development, some applicable common threads have emerged already. Instead of describing the current state of the mathematical investigations, since they are not yet crystallized enough for formalisms, we describe our applications of the approach in several different areas, including our focus area of 'Constructed Complex Systems', which are complex heterogeneous systems managed or mediated by computing systems. In this context, it is important to remember that all systems are embedded, all systems are autonomous, and all systems are distributed networks.
Modeling Invasion Dynamics with Spatial Random-Fitness Due to Micro-Environment
Manem, V. S. K.; Kaveh, K.; Kohandel, M.; Sivaloganathan, S.
2015-01-01
Numerous experimental studies have demonstrated that the microenvironment is a key regulator influencing the proliferative and migrative potentials of species. Spatial and temporal disturbances lead to adverse and hazardous microenvironments for cellular systems, which is reflected in the phenotypic heterogeneity within the system. In this paper, we study the effect of the microenvironment on the invasive capability of species, or mutants, on structured grids (in particular, square lattices) under the influence of site-dependent random proliferation in addition to a migration potential. We discuss both continuous and discrete fitness distributions. Our results suggest that the invasion probability is negatively correlated with the variance of the fitness distribution of mutants (for both advantageous and neutral mutants) in the absence of migration of both types of cells. A similar behaviour is observed even in the presence of a random fitness distribution of host cells in the system with a neutral fitness rate. In the case of a bimodal distribution, we observe zero invasion probability until the system reaches a (specific) proportion of advantageous phenotypes. Also, we find that the migrative potential amplifies the invasion probability as the variance of fitness of mutants increases in the system, the exact opposite of the behaviour observed in the absence of migration. Our computational framework captures harsh microenvironmental conditions through quenched random fitness distributions and migration of cells, and our analysis shows that they play an important role in the invasion dynamics of several biological systems such as bacterial micro-habitats, epithelial dysplasia, and metastasis. We believe that our results may lead to more experimental studies, which can in turn provide further insights into the role and impact of heterogeneous environments on invasion dynamics. PMID:26509572
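The reported negative correlation between invasion probability and fitness variance can be illustrated in a much simpler setting than the paper's lattice model: averaging the well-mixed Moran fixation probability over a random fitness draw (quenched disorder). Because the fixation probability is concave in fitness for advantageous mutants, Jensen's inequality makes added variance lower the average invasion probability (all parameters below are invented):

```python
import numpy as np

def fixation_prob(r, n):
    """Moran-process fixation probability of a single mutant with relative fitness r."""
    if np.isclose(r, 1.0):
        return 1.0 / n
    return (1.0 - 1.0 / r) / (1.0 - r ** (-n))

def mean_invasion_prob(r_mean, r_std, n=100, n_samples=20_000, seed=0):
    """Invasion probability averaged over a random (quenched) mutant fitness draw."""
    rng = np.random.default_rng(seed)
    r = np.clip(rng.normal(r_mean, r_std, n_samples), 1e-3, None)
    return float(np.mean([fixation_prob(ri, n) for ri in r]))

p_homog = fixation_prob(1.1, 100)        # fixed advantageous fitness
p_heter = mean_invasion_prob(1.1, 0.05)  # same mean fitness, nonzero variance
```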
Interstitial Fluid Flow and Drug Delivery in Vascularized Tumors: A Computational Model
Welter, Michael; Rieger, Heiko
2013-01-01
Interstitial fluid is a solution that bathes and surrounds the human cells and provides them with nutrients and a means of waste removal. It is generally believed that elevated tumor interstitial fluid pressure (IFP) is partly responsible for the poor penetration and distribution of therapeutic agents in solid tumors, but the complex interplay of extravasation, permeabilities, vascular heterogeneities, and diffusive and convective drug transport remains poorly understood. Here we consider, with the help of a theoretical model, the tumor IFP and interstitial fluid flow (IFF) and their impact upon drug delivery within the tumor, depending on biophysical determinants such as vessel network morphology, permeabilities, and diffusive vs. convective transport. We developed a vascular tumor growth model, including vessel co-option, regression, and angiogenesis, that we extend here by the interstitium (represented by a porous medium obeying Darcy's law) and by sources (vessels) and sinks (lymphatics) for IFF. With it we compute the spatial variation of the IFP and IFF and determine their correlation with the vascular network morphology and physiological parameters like vessel wall permeability, tissue conductivity, and the distribution of lymphatics. We find that an increased vascular wall conductivity together with a reduction of lymph function leads to increased tumor IFP, but also that the latter does not necessarily imply a decreased extravasation rate: generally, the IF flow rate is positively correlated with the various conductivities in the system. The IFF field is then used to determine the drug distribution after an injection, via a convection-diffusion-reaction equation for intra- and extracellular concentrations with parameters guided by experimental data for the drug Doxorubicin. We observe that the interplay of convective and diffusive drug transport can lead to quite unexpected effects in the presence of a heterogeneous, compartmentalized vasculature.
Finally we discuss various strategies to increase drug exposure time of tumor cells. PMID:23940570
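The Darcy-law interstitium with vessel sources and lymphatic sinks can be sketched in one dimension: a tumor region without lymphatics develops an elevated IFP plateau relative to normal tissue (a minimal finite-difference illustration; all conductivities and the geometry are invented, not the paper's parameters):

```python
import numpy as np

def solve_ifp(n=200, length=1.0, K=1.0, lp_vessel=50.0, p_vessel=1.0, lp_lymph=50.0):
    """Steady 1-D Darcy interstitium: K p'' + lp_vessel*(p_vessel - p) - lp_lymph*p = 0,
    with lymphatic drainage switched off inside the 'tumor' (central third)
    and p = 0 fixed at the far boundaries."""
    dx = length / (n - 1)
    x = np.linspace(0.0, length, n)
    tumor = (x > length / 3) & (x < 2 * length / 3)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(1, n - 1):
        lymph = 0.0 if tumor[i] else lp_lymph
        A[i, i - 1] = A[i, i + 1] = K / dx**2
        A[i, i] = -2.0 * K / dx**2 - lp_vessel - lymph
        b[i] = -lp_vessel * p_vessel
    A[0, 0] = A[-1, -1] = 1.0  # p = 0 boundary conditions
    return x, np.linalg.solve(A, b)

x, p = solve_ifp()
```

The solution plateaus near the vessel pressure inside the tumor and settles at the vessel/lymphatic balance outside, mimicking the elevated-IFP finding above.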
Sadeh, Sadra; Rotter, Stefan
2014-01-01
Neurons in the primary visual cortex are more or less selective for the orientation of a light bar used for stimulation. A broad distribution of individual grades of orientation selectivity has in fact been reported in all species. A possible reason for emergence of broad distributions is the recurrent network within which the stimulus is being processed. Here we compute the distribution of orientation selectivity in randomly connected model networks that are equipped with different spatial patterns of connectivity. We show that, for a wide variety of connectivity patterns, a linear theory based on firing rates accurately approximates the outcome of direct numerical simulations of networks of spiking neurons. Distance dependent connectivity in networks with a more biologically realistic structure does not compromise our linear analysis, as long as the linearized dynamics, and hence the uniform asynchronous irregular activity state, remain stable. We conclude that linear mechanisms of stimulus processing are indeed responsible for the emergence of orientation selectivity and its distribution in recurrent networks with functionally heterogeneous synaptic connectivity. PMID:25469704
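The linear firing-rate theory referred to above amounts to solving r = (I - W)^{-1} h for the steady-state rates and reading off each neuron's selectivity; a minimal sketch with random Gaussian connectivity (all parameters invented; rates are clipped at zero before computing the selectivity index):

```python
import numpy as np

def osi(tuning, stim_angles):
    """Per-neuron orientation selectivity: modulus of the circular mean of the
    tuning curve at twice the stimulus angle (0 = untuned, 1 = perfectly tuned)."""
    z = tuning @ np.exp(2j * stim_angles)
    return np.abs(z) / np.maximum(tuning.sum(axis=1), 1e-12)

rng = np.random.default_rng(2)
n, n_stim = 400, 24
prefs = np.linspace(0, np.pi, n, endpoint=False)       # input preferred orientations
stims = np.linspace(0, np.pi, n_stim, endpoint=False)  # stimulus orientations
h = 1.0 + 0.2 * np.cos(2 * (prefs[:, None] - stims[None, :]))  # tuned feedforward drive
W = rng.normal(0.0, 0.5 / np.sqrt(n), (n, n))          # random recurrent weights (stable)
rates = np.linalg.solve(np.eye(n) - W, h)              # linear steady state, one column per stimulus
selectivity = osi(np.clip(rates, 0.0, None), stims)
```

With W = 0 every neuron has the same selectivity; the random recurrence broadens the distribution, which is the paper's central observation.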
Particle seeding enhances interconnectivity in polymeric scaffolds foamed using supercritical CO2.
Collins, Niki J; Bridson, Rachel H; Leeke, Gary A; Grover, Liam M
2010-03-01
Foaming using supercritical CO2 is a well-known process for the production of polymeric scaffolds for tissue engineering. However, this method typically leads to scaffolds with low pore interconnectivity, resulting in insufficient mass transport and a heterogeneous distribution of cells. In this study, microparticulate silica was added to the polymer during processing and the effects of this particulate seeding on the interconnectivity of the pore structure and pore size distribution were investigated. Scaffolds comprising polylactide and a range of silica contents (0-50 wt.%) were produced by foaming with supercritical CO2. Scaffold structure, pore size distributions and interconnectivity were assessed using X-ray computed microtomography. Interconnectivity was also determined through physical measurements. It was found that incorporation of increasing quantities of silica particles increased the interconnectivity of the scaffold pore structure. The pore size distribution was also reduced through the addition of silica, while total porosity was found to be largely independent of silica content. Physical measurements and those derived from X-ray computed microtomography were comparable. The conclusion drawn was that the architecture of foamed polymeric scaffolds can be advantageously manipulated through the incorporation of silica microparticles. The findings of this study further establish supercritical fluid foaming as an important tool in scaffold production and show how a previous limitation can be overcome. Copyright 2009 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
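One simple way to quantify pore interconnectivity from a binary micro-CT volume, in the spirit of the X-ray microtomography analysis above, is the fraction of pore voxels in the largest connected cluster (an assumed metric for illustration, not necessarily the one the authors used):

```python
import numpy as np
from scipy import ndimage

def interconnectivity(pores):
    """Fraction of pore voxels in the largest connected pore cluster
    (binary volume, True = pore; 6-connectivity in 3-D)."""
    labels, n_clusters = ndimage.label(pores)
    if n_clusters == 0:
        return 0.0
    sizes = np.bincount(labels.ravel())[1:]  # drop background label 0
    return sizes.max() / pores.sum()

# toy volume: one large spanning channel plus one small isolated pore
vol = np.zeros((20, 20, 20), dtype=bool)
vol[5:15, 5:15, :] = True   # 10 x 10 x 20 connected channel
vol[0:2, 0:2, 0:2] = True   # 2 x 2 x 2 isolated pore
ic = interconnectivity(vol)
```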
DataFed: A Federated Data System for Visualization and Analysis of Spatio-Temporal Air Quality Data
NASA Astrophysics Data System (ADS)
Husar, R. B.; Hoijarvi, K.
2017-12-01
DataFed is a distributed web-services-based computing environment for accessing, processing, and visualizing atmospheric data in support of air quality science and management. The flexible, adaptive environment facilitates the access and flow of atmospheric data from provider to users by enabling the creation of user-driven data processing/visualization applications. DataFed `wrapper' components non-intrusively wrap heterogeneous, distributed datasets for access by standards-based GIS web services. The mediator components (also web services) map the heterogeneous data into a spatio-temporal data model. Chained web services provide homogeneous data views (e.g., geospatial and time views) using a global multi-dimensional data model. In addition to data access and rendering, the data processing component services can be programmed for filtering, aggregation, and fusion of multidimensional data. Complete applications are written in a custom-made data flow language. Currently, the federated data pool consists of over 50 datasets originating from globally distributed data providers delivering surface-based air quality measurements, satellite observations, and emissions data, as well as regional and global-scale air quality models. The web browser-based user interface allows point-and-click navigation and browsing of the XYZT multi-dimensional data space. The key applications of DataFed are exploring spatial patterns of pollutants and their seasonal, weekly, and diurnal cycles and frequency distributions for exploratory air quality research. Since 2008, DataFed has been used to support EPA in the implementation of the Exceptional Event Rule. The data system is also used at universities in the US, Europe and Asia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, C.; Yu, G.; Wang, K.
The physical design of new-concept reactors, which have complex structures, various materials, and wide neutron energy spectra, has greatly increased the demands on calculation methods and the corresponding computing hardware. Along with widely used parallel algorithms, heterogeneous platform architectures have been introduced into numerical computation in reactor physics. Because of their natural parallel characteristics, CPU-FPGA architectures are often used to accelerate numerical computation. This paper studies the application and features of this kind of heterogeneous platform in numerical calculations of reactor physics through practical examples. A neutron diffusion module designed on the CPU-FPGA architecture achieved an 11.2× speedup, demonstrating that it is feasible to apply this kind of heterogeneous platform to reactor physics. (authors)
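The neutron diffusion module mentioned above solves, at its core, a diffusion equation; a one-group, 1-D finite-difference sketch (illustrative cross sections and geometry, unrelated to the paper's FPGA implementation):

```python
import numpy as np

def solve_diffusion_1d(n=101, length=100.0, diff=1.0, sigma_a=0.02, source=1.0):
    """One-group, 1-D neutron diffusion with a uniform source:
    -D*phi'' + Sigma_a*phi = S, with phi = 0 at both boundaries
    (second-order finite differences, direct solve)."""
    dx = length / (n - 1)
    A = np.zeros((n, n))
    b = np.full(n, source)
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = -diff / dx**2
        A[i, i] = 2.0 * diff / dx**2 + sigma_a
    A[0, 0] = A[-1, -1] = 1.0
    b[0] = b[-1] = 0.0  # phi = 0 at the boundaries
    return np.linalg.solve(A, b)

phi = solve_diffusion_1d()
```

Far from the boundaries the flux approaches the infinite-medium value S/Sigma_a, here 50.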
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moignier, C; Huet, C; Barraux, V
Purpose: Advanced stereotactic radiotherapy (SRT) treatments require accurate dose calculation for treatment planning, especially for treatment sites involving heterogeneous patient anatomy. The purpose of this study was to evaluate the accuracy of the dose calculation algorithms, Raytracing and Monte Carlo (MC), implemented in the MultiPlan treatment planning system (TPS) in the presence of heterogeneities. Methods: First, the LINAC of a CyberKnife radiotherapy facility was modeled with the PENELOPE MC code. A protocol for the measurement of dose distributions with EBT3 films was established and validated through comparison between experimental dose distributions and dose distributions calculated with the MultiPlan Raytracing and MC algorithms, as well as with the PENELOPE MC model, for treatments planned with the homogeneous Easycube phantom. Finally, bone and lung inserts were used to set up a heterogeneous Easycube phantom. Treatment plans with 10, 7.5, or 5 mm field sizes were generated in the MultiPlan TPS with different tumor localizations (in the lung and at the lung/bone/soft tissue interface). Experimental dose distributions were compared to the PENELOPE MC and MultiPlan calculations using the gamma index method. Results: Regarding the experiment in the homogeneous phantom, 100% of the points passed the 3%/3mm tolerance criteria. These criteria include the global error of the method (CT-scan resolution, EBT3 dosimetry, LINAC positioning, …), and were used afterwards to estimate the accuracy of the MultiPlan algorithms in heterogeneous media. Comparison of the dose distributions obtained in the heterogeneous phantom is in progress. Conclusion: This work has led to the development of numerical and experimental dosimetric tools for small beam dosimetry. The Raytracing and MC algorithms implemented in the MultiPlan TPS were evaluated in heterogeneous media.
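The gamma index method used for the comparisons can be sketched in one dimension (a simplified global-gamma implementation on invented dose profiles; clinical implementations interpolate the evaluated distribution more carefully):

```python
import numpy as np

def gamma_index(dose_ref, dose_eval, dx, dta=3.0, dd=0.03):
    """1-D global gamma: for each reference point, the minimum over evaluated
    points of sqrt((dist/dta)^2 + (dose_diff/(dd*Dmax))^2); a point passes if <= 1."""
    pos = np.arange(len(dose_ref)) * dx
    d_max = dose_ref.max()
    gam = np.empty(len(dose_ref))
    for i in range(len(dose_ref)):
        dist2 = ((pos - pos[i]) / dta) ** 2
        diff2 = ((dose_eval - dose_ref[i]) / (dd * d_max)) ** 2
        gam[i] = np.sqrt((dist2 + diff2).min())
    return gam

x = np.linspace(-20.0, 20.0, 201)            # position (mm), 0.2 mm grid
ref = np.exp(-x**2 / 100.0)                  # toy measured profile
ev = 1.01 * np.exp(-(x - 0.5) ** 2 / 100.0)  # 1% scaling plus a 0.5 mm shift
g = gamma_index(ref, ev, dx=0.2, dta=3.0, dd=0.03)
```

A small perturbation like the 1%/0.5 mm one above passes a 3%/3 mm criterion everywhere, which is how a 100% pass rate such as the one reported is read.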
Impacts of Streambed Heterogeneity and Anisotropy on Residence Time of Hyporheic Zone.
Liu, Suning; Chui, Ting Fong May
2018-05-01
The hyporheic zone (HZ), which is the region beneath or alongside a streambed, plays an important role in the stream's ecology. The duration that a water molecule or a solute remains within the HZ, or residence time (RT), is one of the most common metrics used to evaluate the function of the HZ. The RT is greatly influenced by the streambed's hydraulic conductivity (K), which is intrinsically difficult to characterize due to its heterogeneity and anisotropy. Many laboratory and numerical studies of the HZ have simplified the streambed K to a constant, thus producing RT values that may differ from those gathered from the field. Some studies have considered the heterogeneity of the HZ, but very few have accounted for anisotropy or the natural K distributions typically found in real streambeds. This study developed numerical models in MODFLOW to examine the influence of heterogeneity and anisotropy, and that of the natural K distribution in a streambed, on the RT of the HZ. Heterogeneity and anisotropy were both found to shorten the mean and median RTs while increasing the range of the RTs. Moreover, heterogeneous K fields arranged in a more orderly pattern had longer RTs than those with random K distributions. These results could facilitate the design of streambed K values and distributions to achieve the desired RT during river restoration. They could also assist the translation of results from the more commonly considered homogeneous and/or isotropic conditions into heterogeneous and anisotropic field situations. © 2017, National Ground Water Association.
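The effect of streambed heterogeneity on the RT distribution can be caricatured with a toy series-of-cells model in which each flow path crosses cells of lognormal hydraulic conductivity: raising the ln-K variance visibly widens the RT distribution. This 1-D toy reproduces only the widened range reported above, not the shortened mean, which requires flow focusing in 2-D/3-D; all parameters are invented:

```python
import numpy as np

def residence_times(n_paths=2000, n_cells=50, var_lnk=1.0, seed=0):
    """Toy hyporheic path: time through each of n_cells is dx/v with
    v ~ lognormal(0, var_lnk) (v taken proportional to K under a unit
    gradient, dx = 1 per cell); var_lnk -> 0 recovers a homogeneous bed."""
    rng = np.random.default_rng(seed)
    lnk = rng.normal(0.0, np.sqrt(var_lnk), (n_paths, n_cells))
    return (1.0 / np.exp(lnk)).sum(axis=1)

rt_homog = residence_times(var_lnk=1e-12)
rt_heter = residence_times(var_lnk=1.0)
```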
GSHR-Tree: a spatial index tree based on dynamic spatial slot and hash table in grid environments
NASA Astrophysics Data System (ADS)
Chen, Zhanlong; Wu, Xin-cai; Wu, Liang
2008-12-01
Computation Grids enable the coordinated sharing of large-scale distributed heterogeneous computing resources that can be used to solve computationally intensive problems in science, engineering, and commerce. Grid spatial applications are made possible by high-speed networks and a new generation of Grid middleware that resides between networks and traditional GIS applications. The key problems to resolve in the development of Grid GIS are the integration of multi-source, heterogeneous spatial information, the management of distributed spatial resources, and the sharing and cooperative use of spatial data and Grid services. The spatial index mechanism is a key technology of Grid GIS and spatial databases, and its performance affects the overall performance of a GIS in Grid environments. In order to improve the efficiency of parallel processing of massive spatial data in a distributed parallel grid computing environment, this paper presents GSHR-Tree, a new grid slot hash parallel spatial index. Based on a hash table and dynamic spatial slots, it improves on the structure of the classical parallel R-tree index and makes full use of the good qualities of the R-tree and hash data structures, yielding a parallel spatial index that meets the needs of parallel grid computing over massive spatial data in distributed networks. The algorithm splits space into multiple slots by repeated subdivision and maps these slots to sites in the distributed parallel system. Each site organizes the spatial objects in its slot into an R-tree. On the basis of this tree structure, the index data are distributed among multiple nodes in the grid network using the large-node R-tree method. Load imbalance during processing can be quickly corrected by a dynamic adjustment algorithm. 
The tree structure accounts for the distribution, replication, and transfer of spatial index operations in the grid environment, and the design of GSHR-Tree ensures load balance in parallel computation, making the structure well suited to parallel processing of spatial information in distributed network environments. Instead of the recursive comparison of spatial objects used in the original R-tree, the algorithm builds the spatial index using binary code operations, which computers execute more efficiently, with an extended dynamic hash code for bit comparison. In GSHR-Tree, a new server is assigned to the network whenever a full node must be split. We describe a more flexible allocation protocol that copes with a temporary shortage of storage resources. It uses a distributed balanced binary spatial tree that scales with insertions to potentially any number of storage servers through splits of the overloaded ones. An application manipulates the GSHR-Tree structure from a node in the grid environment. The node addresses the tree through an image that splits can make outdated; this may generate addressing errors, which are resolved by forwarding among the servers. We propose a spatial index data distribution algorithm that limits the number of servers, improving storage utilization at the cost of additional messages. This grid spatial index is intended to meet the needs of new applications using ever larger sets of spatial data. Our proposal constitutes a flexible storage allocation method for a distributed spatial index: the insertion policy can be tuned dynamically to cope with periods of storage shortage, in which case storage balancing should be favored for better space utilization, at the price of extra message exchanges between servers. 
The structure strikes a compromise between updating the duplicated index and transferring spatial index data. GSHR-Tree is flexible enough to meet the needs of grid computing and to accommodate new requirements in the future, providing R-tree capabilities for large spatial datasets stored over interconnected servers. The analysis, including the experiments, confirmed the efficiency of our design choices, and the scheme should fit the needs of new applications using ever larger spatial datasets. Using the system response time of parallel spatial range query processing as the performance metric, simulation experiments confirm the soundness of the design and the high performance of the proposed indexing structure.
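The slot-splitting and slot-to-site mapping described above can be sketched with a Z-order (bit-interleaving) slot code plus a hash onto servers (an illustrative reconstruction of the general idea, not the GSHR-Tree algorithm itself):

```python
import hashlib

def slot_for(x, y, n_splits):
    """Z-order spatial slot id from the first n_splits binary digits of each
    normalized coordinate (x, y assumed in [0, 1))."""
    code = 0
    for _ in range(n_splits):
        x *= 2.0
        y *= 2.0
        bx, by = int(x), int(y)
        x -= bx
        y -= by
        code = (code << 2) | (bx << 1) | by
    return code

def server_for(slot, n_servers):
    """Hash a slot id onto one of n_servers sites (the slot-to-site mapping)."""
    digest = hashlib.sha256(str(slot).encode()).digest()
    return int.from_bytes(digest[:4], "big") % n_servers
```

Coarser splits keep nearby objects in the same slot (preserving spatial locality), while the hash spreads slots across servers; each server would then hold an R-tree over its slot's objects.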
PanDA for ATLAS distributed computing in the next decade
NASA Astrophysics Data System (ADS)
Barreiro Megino, F. H.; De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wenaus, T.; ATLAS Collaboration
2017-10-01
The Production and Distributed Analysis (PanDA) system has been developed to meet ATLAS production and analysis requirements for a data-driven workload management system capable of operating at the Large Hadron Collider (LHC) data processing scale. Heterogeneous resources used by the ATLAS experiment are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, dozens of scientific applications are supported, while data processing requires more than a few billion hours of computing usage per year. PanDA performed very well over the last decade including the LHC Run 1 data taking period. However, it was decided to upgrade the whole system concurrently with the LHC’s first long shutdown in order to cope with rapidly changing computing infrastructure. After two years of reengineering efforts, PanDA has embedded capabilities for fully dynamic and flexible workload management. The static batch job paradigm was discarded in favor of a more automated and scalable model. Workloads are dynamically tailored for optimal usage of resources, with the brokerage taking network traffic and forecasts into account. Computing resources are partitioned based on dynamic knowledge of their status and characteristics. The pilot has been re-factored around a plugin structure for easier development and deployment. Bookkeeping is handled with both coarse and fine granularities for efficient utilization of pledged or opportunistic resources. An in-house security mechanism authenticates the pilot and data management services in off-grid environments such as volunteer computing and private local clusters. The PanDA monitor has been extensively optimized for performance and extended with analytics to provide aggregated summaries of the system as well as drill-down to operational details. 
Many other features are planned or have recently been implemented, and the system has been adopted by non-LHC experiments, with bioinformatics groups, for example, successfully running the Paleomix (microbial genome and metagenome) payload on supercomputers. In this paper we will focus on the new and planned features that are most important to the next decade of distributed computing workload management.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel W.
Coupled-cluster methods provide highly accurate models of molecular structure by explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix-matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy efficient manner. We achieve up to 240× speedup compared with the best optimized shared memory implementation. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 & XC40, BlueGene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from compute-bound DGEMMs to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load imbalance. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.
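The kind of tensor contraction these libraries accelerate can be written either as an einsum or, as the abstract notes, reduced to a single matrix-matrix multiplication (DGEMM); a toy ladder-type contraction with invented dimensions:

```python
import numpy as np

rng = np.random.default_rng(3)
n_occ, n_vir = 4, 6  # toy occupied / virtual orbital counts
T = rng.normal(size=(n_occ, n_occ, n_vir, n_vir))  # amplitude-like tensor
V = rng.normal(size=(n_vir, n_vir, n_vir, n_vir))  # integral-like tensor

# ladder-type contraction Z[i,j,a,b] = sum_{c,d} T[i,j,c,d] * V[a,b,c,d]
Z = np.einsum("ijcd,abcd->ijab", T, V)

# the same contraction reshaped into one matrix-matrix multiply (DGEMM form)
Z_mm = (T.reshape(n_occ * n_occ, n_vir * n_vir)
        @ V.reshape(n_vir * n_vir, n_vir * n_vir).T).reshape(n_occ, n_occ, n_vir, n_vir)
```

The reshaped form is why optimized BLAS dominates the compute-bound regime before communication takes over at scale.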
Impact of mechanical heterogeneity on joint density in a welded ignimbrite
NASA Astrophysics Data System (ADS)
Soden, A. M.; Lunn, R. J.; Shipton, Z. K.
2016-08-01
Joints are conduits for groundwater, hydrocarbons and hydrothermal fluids. Robust fluid flow models rely on accurate characterisation of joint networks, in particular joint density. It is generally assumed that the predominant factor controlling joint density in layered stratigraphy is the thickness of the mechanical layer where the joints occur. Mechanical heterogeneity within the layer is considered a lesser influence on joint formation. We analysed the frequency and distribution of joints within a single 12-m thick ignimbrite layer to identify the controls on joint geometry and distribution. The observed joint distribution is not related to the thickness of the ignimbrite layer. Rather, joint initiation, propagation and termination are controlled by the shape, spatial distribution and mechanical properties of fiamme, which are present within the ignimbrite. The observations and analysis presented here demonstrate that models of joint distribution, particularly in thicker layers, that do not fully account for mechanical heterogeneity are likely to underestimate joint density, the spatial variability of joint distribution and the complex joint geometries that result. Consequently, we recommend that characterisation of a layer's compositional and material properties improves predictions of subsurface joint density in rock layers that are mechanically heterogeneous.
Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks
Lam, William H. K.; Li, Qingquan
2017-01-01
Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks. PMID:29210978
Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks.
Shi, Chaoyang; Chen, Bi Yu; Lam, William H K; Li, Qingquan
2017-12-06
Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.
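The fusion step described above uses Dempster-Shafer evidence theory. A minimal sketch of Dempster's rule of combination in its generic textbook form is shown below; the travel-time bins and the basic probability masses are invented for illustration and are not the authors' data or implementation.

```python
# Dempster's rule of combination for two basic probability assignments
# (BPAs) over the same frame of discernment, here discrete travel-time
# bins. Generic illustration of the fusion step, not the paper's code.

def combine_dempster(m1, m2):
    """Combine two BPAs given as {frozenset_of_bins: mass} dicts."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to contradictory evidence
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    # Normalize by the non-conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Point-detector vs. interval-detector evidence over three illustrative
# travel-time bins: "s" (short), "m" (medium), "l" (long).
short, medium, lng = frozenset({"s"}), frozenset({"m"}), frozenset({"l"})
theta = short | medium | lng            # "don't know" mass on the full frame
m_point = {short: 0.6, medium: 0.3, theta: 0.1}
m_interval = {medium: 0.5, short: 0.3, theta: 0.2}

fused = combine_dempster(m_point, m_interval)
print({tuple(sorted(k)): round(v, 3) for k, v in fused.items()})
```

Conflicting mass (point says "short" while interval says "medium", and vice versa) is discarded and the remainder renormalized, which is why agreeing evidence sharpens the fused distribution.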
Applying a cloud computing approach to storage architectures for spacecraft
NASA Astrophysics Data System (ADS)
Baldor, Sue A.; Quiroz, Carlos; Wood, Paul
As sensor technologies, processor speeds, and memory densities increase, spacecraft command, control, processing, and data storage systems have grown in complexity to take advantage of these improvements and expand the possible missions of spacecraft. Spacecraft systems engineers are increasingly looking for novel ways to address this growth in complexity and mitigate associated risks. Looking to conventional computing, many solutions have been developed to address the problems of complexity and heterogeneity in systems. In particular, the cloud-based paradigm provides a solution for distributing applications and storage capabilities across multiple platforms. In this paper, we propose utilizing a cloud-like architecture to provide a scalable mechanism for providing mass storage in spacecraft networks that can be reused on multiple spacecraft systems. By presenting a consistent interface to applications and devices that request data to be stored, complex systems designed by multiple organizations may be more readily integrated. Behind the abstraction, the cloud storage capability would manage wear-leveling, power consumption, and other attributes related to the physical memory devices, critical components in any mass storage solution for spacecraft. Our approach employs SpaceWire networks and SpaceWire-capable devices, although the concept could easily be extended to non-heterogeneous networks consisting of multiple spacecraft and potentially the ground segment.
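The idea of a consistent storage interface hiding device-level concerns such as wear-leveling can be sketched in a few lines. All class and method names below are illustrative stand-ins, not the paper's design or any SpaceWire API.

```python
# Storage-abstraction sketch: applications use one consistent put/get
# interface while the backend spreads writes across devices, a crude
# stand-in for wear-leveling. Names and policy are hypothetical.
class MemoryDevice:
    """One physical memory bank with a simple erase-count tally."""
    def __init__(self, capacity):
        self.blocks = {}
        self.capacity = capacity
        self.erase_count = 0

    def free(self):
        return self.capacity - len(self.blocks)

class CloudStore:
    """Front end over several devices; each write goes to the
    least-worn device that still has space."""
    def __init__(self, devices):
        self.devices = devices
        self.index = {}            # key -> device holding it

    def put(self, key, data):
        candidates = [d for d in self.devices if d.free() > 0]
        target = min(candidates, key=lambda d: d.erase_count)
        target.blocks[key] = data
        target.erase_count += 1
        self.index[key] = target

    def get(self, key):
        return self.index[key].blocks[key]

store = CloudStore([MemoryDevice(4), MemoryDevice(4)])
for i in range(6):
    store.put(f"pkt{i}", bytes([i]))
print(store.get("pkt3"), [d.erase_count for d in store.devices])
```

The point of the sketch is the separation of concerns: callers never see which device holds their data, so the wear-leveling policy can change without touching the applications.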
NASA Astrophysics Data System (ADS)
Huang, Ching-Sheng; Yeh, Hund-Der
2016-11-01
This study introduces an analytical approach to estimate drawdown induced by well extraction in a heterogeneous confined aquifer with an irregular outer boundary. The aquifer domain is divided into a number of zones according to the zonation method for representing the spatial distribution of a hydraulic parameter field. The lateral boundary of the aquifer can be considered under the Dirichlet, Neumann or Robin condition at different parts of the boundary. Flow across the interface between two zones satisfies the continuities of drawdown and flux. Source points, each of which has an unknown volumetric rate representing the boundary effect on the drawdown, are allocated around the boundary of each zone. The solution of drawdown in each zone is expressed as a series in terms of the Theis equation with unknown volumetric rates from the source points. The rates are then determined based on the aquifer boundary conditions and the continuity requirements. The estimated aquifer drawdown by the present approach agrees well with a finite element solution developed based on the Mathematica function NDSolve. As compared with the existing numerical approaches, the present approach has a merit of directly computing the drawdown at any given location and time and therefore takes much less computing time to obtain the required results in engineering applications.
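The building block that the approach superposes is the Theis solution, s(r, t) = Q/(4πT)·W(u) with u = r²S/(4Tt), where the well function W(u) equals the exponential integral E1(u). A minimal sketch of that superposition follows; the source locations, rates, and aquifer parameters are illustrative, not values from the paper.

```python
# Theis drawdown and superposition of source points, the core of the
# analytical approach described in the abstract. Parameters invented.
import math
from scipy.special import exp1  # well function W(u) = E1(u)

def theis_drawdown(r, t, Q, T, S):
    """Drawdown [m] at radius r [m] and time t [s] for pumping rate
    Q [m^3/s], transmissivity T [m^2/s], storativity S [-]."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * exp1(u)

def superposed_drawdown(x, y, t, sources, T, S):
    """Superpose Theis terms from sources given as (xw, yw, Q) tuples,
    as the method does with its boundary source points."""
    s = 0.0
    for xw, yw, Q in sources:
        r = math.hypot(x - xw, y - yw)
        s += theis_drawdown(r, t, Q, T, S)
    return s

sources = [(0.0, 0.0, 0.01), (50.0, 0.0, 0.005)]
print(superposed_drawdown(10.0, 0.0, t=86400.0, sources=sources,
                          T=1e-3, S=1e-4))
```

Because each term is analytical, the drawdown can be evaluated directly at any point and time, which is the efficiency advantage the abstract claims over grid-based solvers.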
NASA Astrophysics Data System (ADS)
Contreras Quintana, S. H.; Werne, J. P.; Brown, E. T.; Halbur, J.; Sinninghe Damsté, J.; Schouten, S.; Correa-Metrio, A.; Fawcett, P. J.
2014-12-01
Branched glycerol dialkyl glycerol tetraethers (GDGTs) are recently discovered bacterial membrane lipids, ubiquitously present in peat bogs and soils, as well as in rivers, lakes and lake sediments. Their distribution appears to be controlled mainly by soil pH and annual mean air temperature (MAT) and they have been increasingly used as paleoclimate proxies in sedimentary records. In order to validate their application as paleoclimate proxies, it is essential to evaluate the influence of small-scale environmental variability on their distribution. Initial application of the original soil-based branched GDGT distribution proxy to lacustrine sediments from Valles Caldera, New Mexico (NM) was promising, producing a viable temperature record spanning two glacial/interglacial cycles. In this study, we assess the influence of analytical and spatial soil heterogeneity on the concentration and distribution of 9 branched GDGTs in soils from Valles Caldera, and show how this variability is propagated to MAT and pH estimates using multiple soil-based branched GDGT transfer functions. Our results show that significant differences in the abundance and distribution of branched GDGTs in soil can be observed even within a small area such as Valles Caldera. Although the original MBT-CBT calibration appears to give robust MAT estimates and the newest calibration provides pH estimates in better agreement with modern local soils in Valles Caldera, the environmental heterogeneity (e.g. vegetation type and soil moisture) appears to affect the precision of MAT and pH estimates. Furthermore, the heterogeneity of soils leads to significant variability among samples taken even from within a square meter.
While such soil heterogeneity is not unknown (and is typically controlled for by combining multiple samples), this study quantifies heterogeneity relative to branched GDGT-based proxies for the first time, indicating that care must be taken with samples from heterogeneous soils in MAT and pH reconstructions.
NASA Astrophysics Data System (ADS)
Chen, Xingyuan; Murakami, Haruko; Hahn, Melanie S.; Hammond, Glenn E.; Rockhold, Mark L.; Zachara, John M.; Rubin, Yoram
2012-06-01
Tracer tests performed under natural or forced gradient flow conditions can provide useful information for characterizing subsurface properties, through monitoring, modeling, and interpretation of the tracer plume migration in an aquifer. Nonreactive tracer experiments were conducted at the Hanford 300 Area, along with constant-rate injection tests and electromagnetic borehole flowmeter tests. A Bayesian data assimilation technique, the method of anchored distributions (MAD) (Rubin et al., 2010), was applied to assimilate the experimental tracer test data with the other types of data and to infer the three-dimensional heterogeneous structure of the hydraulic conductivity in the saturated zone of the Hanford formation. In this study, the Bayesian prior information on the underlying random hydraulic conductivity field was obtained from previous field characterization efforts using constant-rate injection and borehole flowmeter test data. The posterior distribution of the conductivity field was obtained by further conditioning the field on the temporal moments of tracer breakthrough curves at various observation wells. MAD was implemented with the massively parallel three-dimensional flow and transport code PFLOTRAN to cope with the highly transient flow boundary conditions at the site and to meet the computational demands of MAD. A synthetic study proved that the proposed method could effectively invert tracer test data to capture the essential spatial heterogeneity of the three-dimensional hydraulic conductivity field. Application of MAD to actual field tracer data at the Hanford 300 Area demonstrates that inverting for spatial heterogeneity of hydraulic conductivity under transient flow conditions is challenging and more work is needed.
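The temporal moments of a breakthrough curve C(t) that the inversion conditions on are simple integrals: the zeroth moment is the recovered mass, the normalized first moment is the mean arrival time, and the second central moment measures the spread. A minimal sketch with a synthetic curve (not field data):

```python
# Temporal moments of a tracer breakthrough curve C(t).
# Synthetic Gaussian-shaped curve only; units are arbitrary.
import numpy as np

def _trap(y, x):
    """Trapezoidal integral, written out for NumPy-version safety."""
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2.0)

def temporal_moments(t, c):
    m0 = _trap(c, t)                            # zeroth moment (mass)
    t_mean = _trap(t * c, t) / m0               # mean arrival time
    var = _trap((t - t_mean) ** 2 * c, t) / m0  # temporal spread
    return m0, t_mean, var

t = np.linspace(0.0, 100.0, 2001)
c = np.exp(-0.5 * ((t - 40.0) / 8.0) ** 2)      # synthetic breakthrough
m0, t_mean, var = temporal_moments(t, c)
print(round(t_mean, 2), round(var ** 0.5, 2))
```

Conditioning on a few moments rather than the full curve is what keeps the inverse problem tractable: each observation well contributes only two or three scalars instead of a full time series.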
NASA Astrophysics Data System (ADS)
Memon, Shahbaz; Vallot, Dorothée; Zwinger, Thomas; Neukirchen, Helmut
2017-04-01
Scientific communities generate complex simulations through orchestration of semi-structured analysis pipelines, which involves execution of large workflows on multiple, distributed and heterogeneous computing and data resources. Modeling ice dynamics of glaciers requires workflows consisting of many non-trivial, computationally expensive processing tasks which are coupled to each other. From this domain, we present an e-Science use case, a workflow, which requires the execution of a continuum ice flow model and a discrete element based calving model in an iterative manner. Apart from the execution, this workflow also contains data format conversion tasks that support the execution of ice flow and calving by means of transition through sequential, nested and iterative steps. Thus, the management and monitoring of all the processing tasks, including data management and transfer, becomes more complex. From the implementation perspective, this workflow model was initially developed as a set of scripts using static data input and output references. In the course of application usage, as more scripts were added or modifications introduced to meet user requirements, debugging and validating results became increasingly cumbersome. To address these problems, we identified the need for a high-level scientific workflow tool through which all the above-mentioned processes can be handled in an efficient and usable manner. We decided to make use of the e-Science middleware UNICORE (Uniform Interface to Computing Resources), which allows seamless and automated access to different heterogeneous and distributed resources and is supported by a scientific workflow engine. Based on this, we developed a high-level scientific workflow model for coupling of massively parallel High-Performance Computing (HPC) jobs: a continuum ice sheet model (Elmer/Ice) and a discrete element calving and crevassing model (HiDEM).
In our talk we present how the use of a high-level scientific workflow middleware makes reproducing results more convenient and also provides a reusable and portable workflow template that can be deployed across different computing infrastructures. Acknowledgements: This work was kindly supported by NordForsk as part of the Nordic Center of Excellence (NCoE) eSTICC (eScience Tools for Investigating Climate Change at High Northern Latitudes) and the Top-level Research Initiative NCoE SVALI (Stability and Variation of Arctic Land Ice).
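The iterative coupling of a continuum flow step, a format conversion, and a discrete calving step can be reduced to a plain orchestration loop. The step functions below are trivial arithmetic stand-ins for the Elmer/Ice and HiDEM jobs, invented purely to show the control flow a workflow engine automates.

```python
# Orchestration-loop sketch of the iterative flow/conversion/calving
# coupling. All step functions and numbers are illustrative stand-ins.
def ice_flow_step(front):
    """Stand-in for the continuum model: the glacier front advances."""
    return front + 120.0            # metres per coupling interval

def convert(front):
    """Stand-in for the data format conversion between the two codes."""
    return {"front_m": front}

def calving_step(state):
    """Stand-in for the discrete calving model: part of the advance is
    lost as icebergs."""
    return state["front_m"] - 80.0

front = 0.0
for step in range(5):               # sequential, nested, iterative steps
    front = calving_step(convert(ice_flow_step(front)))
print("net front change after 5 couplings [m]:", front)
```

In the real workflow each of these calls is a submitted HPC job with staged inputs and outputs; the value of a workflow middleware is that this loop, its data transfers, and its monitoring are declared once instead of hand-scripted.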
Evolutionary dynamics of social dilemmas in structured heterogeneous populations.
Santos, F C; Pacheco, J M; Lenaerts, Tom
2006-02-28
Real populations have been shown to be heterogeneous, in which some individuals have many more contacts than others. This fact contrasts with the traditional homogeneous setting used in studies of evolutionary game dynamics. We incorporate heterogeneity in the population by studying games on graphs, in which the variability in connectivity ranges from single-scale graphs, for which heterogeneity is small and associated degree distributions exhibit a Gaussian tail, to scale-free graphs, for which heterogeneity is large with degree distributions exhibiting a power-law behavior. We study the evolution of cooperation, modeled in terms of the most popular dilemmas of cooperation. We show that, for all dilemmas, increasing heterogeneity favors the emergence of cooperation, such that long-term cooperative behavior easily resists short-term noncooperative behavior. Moreover, we show how cooperation depends on the intricate ties between individuals in scale-free populations.
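The degree heterogeneity the study varies can be seen in a stdlib-only sketch of preferential attachment (the standard mechanism generating scale-free graphs): the maximum degree far exceeds the mean, whereas on a ring lattice (single-scale) every node would have the same degree. This is a generic illustration, not the authors' simulation code.

```python
# Grow a preferential-attachment ("scale-free") graph and inspect its
# degree distribution. Stdlib only; minor multi-edge collisions are
# simply skipped, which is fine for a sketch.
import random

def barabasi_albert(n, m, seed=42):
    """Each new node attaches to m existing nodes chosen with
    probability proportional to their current degree."""
    rng = random.Random(seed)
    targets = list(range(m))      # initial target nodes
    repeated = []                 # node list weighted by degree
    edges = []
    for new in range(m, n):
        for t in set(targets):    # drop duplicate picks
            edges.append((new, t))
        repeated.extend(targets)
        repeated.extend([new] * m)
        targets = [rng.choice(repeated) for _ in range(m)]
    return edges

edges = barabasi_albert(2000, 2)
deg = {}
for a, b in edges:
    deg[a] = deg.get(a, 0) + 1
    deg[b] = deg.get(b, 0) + 1
degrees = list(deg.values())
mean_k = sum(degrees) / len(degrees)
print("mean degree:", round(mean_k, 2), "max degree:", max(degrees))
```

The hubs (nodes with degree far above the mean) are exactly the highly connected individuals whose presence the paper shows to favor cooperation.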
Pinter-Wollman, Noa; Wollman, Roy; Guetz, Adam; Holmes, Susan; Gordon, Deborah M.
2011-01-01
Social insects exhibit coordinated behaviour without central control. Local interactions among individuals determine their behaviour and regulate the activity of the colony. Harvester ants are recruited for outside work, using networks of brief antennal contacts, in the nest chamber closest to the nest exit: the entrance chamber. Here, we combine empirical observations, image analysis and computer simulations to investigate the structure and function of the interaction network in the entrance chamber. Ant interactions were distributed heterogeneously in the chamber, with an interaction hot-spot at the entrance leading further into the nest. The distribution of the total interactions per ant followed a right-skewed distribution, indicating the presence of highly connected individuals. Numbers of ant encounters observed positively correlated with the duration of observation. Individuals varied in interaction frequency, even after accounting for the duration of observation. An ant's interaction frequency was explained by its path shape and location within the entrance chamber. Computer simulations demonstrate that variation among individuals in connectivity accelerates information flow to an extent equivalent to an increase in the total number of interactions. Individual variation in connectivity, arising from variation among ants in location and spatial behaviour, creates interaction centres, which may expedite information flow. PMID:21490001
High performance computation of radiative transfer equation using the finite element method
NASA Astrophysics Data System (ADS)
Badri, M. A.; Jolivet, P.; Rousseau, B.; Favennec, Y.
2018-05-01
This article deals with an efficient strategy for numerically simulating radiative transfer phenomena using distributed computing. The finite element method alongside the discrete ordinate method is used for spatio-angular discretization of the monochromatic steady-state radiative transfer equation in an anisotropically scattering media. Two very different methods of parallelization, angular and spatial decomposition methods, are presented. To do so, the finite element method is used in a vectorial way. A detailed comparison of scalability, performance, and efficiency on thousands of processors is established for two- and three-dimensional heterogeneous test cases. Timings show that both algorithms scale well when using proper preconditioners. It is also observed that our angular decomposition scheme outperforms our domain decomposition method. Overall, we perform numerical simulations at scales that were previously unattainable by standard radiative transfer equation solvers.
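In the discrete ordinate method the scalar flux is a quadrature sum over ordinate directions, so angular decomposition amounts to partitioning the directions across workers and reducing the partial sums. The sketch below shows that equivalence serially (no MPI); the equal-weight quadrature and array shapes are assumptions for illustration.

```python
# Angular decomposition sketch for the discrete ordinate method:
# partial quadrature sums over direction subsets reduce to the same
# scalar flux as the full sum. Synthetic intensities only.
import numpy as np

n_dir, n_cells = 16, 1000
rng = np.random.default_rng(2)
intensity = rng.random((n_dir, n_cells))        # I(direction, cell)
weights = np.full(n_dir, 4.0 * np.pi / n_dir)   # equal-weight quadrature

# Full quadrature on one worker:
phi_serial = (weights[:, None] * intensity).sum(axis=0)

# Same quadrature with directions split over 4 "workers", then reduced:
parts = [(weights[s][:, None] * intensity[s]).sum(axis=0)
         for s in np.array_split(np.arange(n_dir), 4)]
phi_parallel = np.sum(parts, axis=0)
print(np.allclose(phi_serial, phi_parallel))
```

Because the reduction is a single sum per cell, angular decomposition needs far less communication than a spatial domain decomposition with halo exchanges, consistent with the scaling advantage the abstract reports.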
Towards Full-Waveform Ambient Noise Inversion
NASA Astrophysics Data System (ADS)
Sager, Korbinian; Ermert, Laura; Afanasiev, Michael; Boehm, Christian; Fichtner, Andreas
2017-04-01
Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green function between the two receivers. This assumption, however, is only met under specific conditions, e.g. wavefield diffusivity and equipartitioning, or the isotropic distribution of both mono- and dipolar uncorrelated noise sources. These assumptions are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations in order to constrain Earth structure and noise generation. To overcome this limitation, we attempt to develop a method that consistently accounts for the distribution of noise sources, 3D heterogeneous Earth structure and the full seismic wave propagation physics. This is intended to improve the resolution of tomographic images, to refine noise source distribution, and thereby to contribute to a better understanding of both Earth structure and noise generation. First, we develop an inversion strategy based on a 2D finite-difference code using adjoint techniques. To enable a joint inversion for noise sources and Earth structure, we investigate the following aspects: i) the capability of different misfit functionals to image wave speed anomalies and source distribution and ii) possible source-structure trade-offs, especially to what extent unresolvable structure can be mapped into the inverted noise source distribution and vice versa. In anticipation of real-data applications, we present an extension of the open-source waveform modelling and inversion package Salvus (http://salvus.io). It allows us to compute correlation functions in 3D media with heterogeneous noise sources at the surface and the corresponding sensitivity kernels for the distribution of noise sources and Earth structure. 
By studying the effect of noise sources on correlation functions in 3D, we validate the aforementioned inversion strategy and prepare the workflow necessary for the first application of full waveform ambient noise inversion to a global dataset, for which a model for the distribution of noise sources is already available.
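The basic observable behind all of this, the inter-station noise correlation, can be sketched with synthetic data: a common random signal arriving at the second receiver with a delay produces a correlation peak at the inter-station travel time. Sampling rate, delay, and noise level below are invented for illustration.

```python
# Frequency-domain cross-correlation of two synthetic noise records.
# The correlation of a delayed common signal peaks at the delay,
# mimicking the travel-time information in ambient noise correlations.
import numpy as np

rng = np.random.default_rng(0)
fs = 100.0                       # sampling rate [Hz]
n = 4096
delay = int(1.0 * fs)            # 1 s propagation delay, in samples

src = rng.standard_normal(n)                       # common "source"
rec_a = src + 0.1 * rng.standard_normal(n)
rec_b = np.roll(src, delay) + 0.1 * rng.standard_normal(n)

# Cross-correlation via FFT: C = ifft(conj(fft(a)) * fft(b))
spec = np.conj(np.fft.rfft(rec_a)) * np.fft.rfft(rec_b)
xcorr = np.fft.irfft(spec, n=n)
lag = int(np.argmax(xcorr))      # circular lag of the maximum
print("recovered delay [s]:", lag / fs)
```

The paper's point is that interpreting such a peak as a Green function is only valid for idealized source distributions, which is why the inversion treats the noise sources as unknowns rather than assuming them away.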
NASA Astrophysics Data System (ADS)
Olasz, A.; Nguyen Thai, B.; Kristóf, D.
2016-06-01
Within recent years, several new approaches and solutions for Big Data processing have been developed. The geospatial world is still facing the lack of well-established distributed processing solutions tailored to the amount and heterogeneity of geodata, especially when fast data processing is a must. The goal of such systems is to improve processing time by distributing data transparently across processing (and/or storage) nodes. These types of methodology are based on the concept of divide and conquer. Nevertheless, in the context of geospatial processing, most of the distributed computing frameworks have important limitations regarding both data distribution and data partitioning methods. Moreover, flexibility and extensibility for handling various data types (often in binary formats) are also strongly required. This paper presents a concept for tiling, stitching and processing of big geospatial data. The system is based on the IQLib concept (https://github.com/posseidon/IQLib/) developed in the frame of the IQmulus EU FP7 research and development project (http://www.iqmulus.eu). The data distribution framework has no limitations on programming language environment and can execute scripts (and workflows) written in different development frameworks (e.g. Python, R or C#). It is capable of processing raster, vector and point cloud data. The above-mentioned prototype is presented through a case study dealing with country-wide processing of raster imagery. Further investigations on algorithmic and implementation details are in focus for the near future.
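The tiling-and-stitching concept can be sketched in a few lines: split a raster into fixed-size tiles, apply a per-tile function independently (the step that would be distributed), and stitch the results back. This is a generic sketch of the divide-and-conquer idea, not IQLib's API.

```python
# Tile / process / stitch sketch for raster data.
import numpy as np

def tile(raster, tile_size):
    """Yield ((row, col), tile_view) blocks covering the raster."""
    h, w = raster.shape
    for r in range(0, h, tile_size):
        for c in range(0, w, tile_size):
            yield (r, c), raster[r:r + tile_size, c:c + tile_size]

def process_tiled(raster, tile_size, func):
    """Apply a per-tile function and stitch results into one array.
    In a distributed setting each func(block) call would run on a
    separate processing node."""
    out = np.empty_like(raster, dtype=float)
    for (r, c), block in tile(raster, tile_size):
        out[r:r + block.shape[0], c:c + block.shape[1]] = func(block)
    return out

raster = np.arange(100.0).reshape(10, 10)
doubled = process_tiled(raster, tile_size=4, func=lambda b: b * 2.0)
print(np.array_equal(doubled, raster * 2.0))
```

The hard parts the paper addresses start where this sketch stops: tiles whose operations need neighboring pixels (halos), heterogeneous formats, and vector or point-cloud data that have no natural grid partition.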
NASA Astrophysics Data System (ADS)
Indra, Sandipa; Guchhait, Biswajit; Biswas, Ranjit
2016-03-01
We have performed steady state UV-visible absorption and time-resolved fluorescence measurements and computer simulations to explore the cosolvent mole fraction induced changes in structural and dynamical properties of water/dioxane (Diox) and water/tetrahydrofuran (THF) binary mixtures. Diox is a quadrupolar solvent whereas THF is a dipolar one, although both are cyclic molecules and represent cycloethers. The focus here is on whether these cycloethers can induce stiffening and transition of water H-bond network structure and, if they do, whether such structural modification differentiates the chemical nature (dipolar or quadrupolar) of the cosolvent molecules. Composition dependent measured fluorescence lifetimes and rotation times of a dissolved dipolar solute (Coumarin 153, C153) suggest cycloether mole-fraction (XTHF/Diox) induced structural transition for both of these aqueous binary mixtures in the 0.1 ≤ XTHF/Diox ≤ 0.2 regime with no specific dependence on the chemical nature. Interestingly, absorption measurements reveal stiffening of water H-bond structure in the presence of both the cycloethers at a nearly equal mole-fraction, XTHF/Diox ≈ 0.05. Measurements near the critical solution temperature or concentration indicate no role for the solution criticality on the anomalous structural changes. Evidence for cycloether aggregation at very dilute concentrations has been found. Simulated radial distribution functions reflect abrupt changes in respective peak heights at those mixture compositions around which fluorescence measurements revealed structural transition. Simulated water coordination numbers (for a dissolved C153) and number of H-bonds also exhibit minima around these cosolvent concentrations. In addition, several dynamic heterogeneity parameters have been simulated for both the mixtures to explore the effects of structural transition and chemical nature of cosolvent on heterogeneous dynamics of these systems.
Simulated four-point dynamic susceptibility suggests formation of clusters inducing local heterogeneity in the solution structure.
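The radial distribution function g(r), the structural diagnostic used above, can be computed from particle coordinates as a histogram of pair distances normalized by the ideal-gas expectation. The sketch below uses random (uncorrelated) positions in a periodic box, for which g(r) ≈ 1; it is a generic illustration, not the paper's simulation analysis.

```python
# Radial distribution function g(r) for particles in a cubic periodic
# box. Random positions only, so g(r) should hover around 1.
import numpy as np

def rdf(pos, box, r_max, n_bins):
    n = len(pos)
    # Minimum-image pair separations.
    diff = pos[:, None, :] - pos[None, :, :]
    diff -= box * np.round(diff / box)
    dist = np.sqrt((diff ** 2).sum(-1))
    dist = dist[np.triu_indices(n, k=1)]        # unique pairs only
    hist, edges = np.histogram(dist, bins=n_bins, range=(0.0, r_max))
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    density = n / box ** 3
    ideal = shell * density * n / 2.0           # expected pair counts
    return 0.5 * (edges[1:] + edges[:-1]), hist / ideal

rng = np.random.default_rng(1)
box = 10.0
pos = rng.uniform(0.0, box, size=(800, 3))
r, g = rdf(pos, box, r_max=4.0, n_bins=40)
print(round(float(g[10:].mean()), 2))
```

In a structured liquid, peaks in g(r) mark solvation shells; it is abrupt changes in those peak heights with composition that the abstract uses as a signature of the structural transition.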
Multi-scale Pore Imaging Techniques to Characterise Heterogeneity Effects on Flow in Carbonate Rock
NASA Astrophysics Data System (ADS)
Shah, S. M.
2017-12-01
Digital rock analysis and pore-scale studies have become an essential tool in the oil and gas industry to understand and predict the petrophysical and multiphase flow properties for the assessment and exploitation of hydrocarbon reserves. Carbonate reservoirs, accounting for the majority of the world's hydrocarbon reserves, are well known for their heterogeneity and multiscale pore characteristics. The pore sizes in carbonate rock can vary over orders of magnitude, and the geometry and topology parameters of pores at different scales have a great impact on flow properties. A pore-scale study is often comprised of two key procedures: 3D pore-scale imaging and numerical modelling techniques. The fundamental problem in pore-scale imaging and modelling is how to represent and model the different range of scales encountered in porous media, from the pore scale to macroscopic petrophysical and multiphase flow properties. However, due to the restrictions of image size vs. resolution, the desired detail is rarely captured at the relevant length scales using any single imaging technique. Similarly, direct simulations of transport properties in heterogeneous rocks with broad pore size distributions are prohibitively expensive computationally. In this study, we present the advances and review the practical limitations of different imaging techniques varying from core scale (1 mm) using Medical Computed Tomography (CT) to pore scale (10 nm-50 µm) using Micro-CT, Confocal Laser Scanning Microscopy (CLSM) and Focussed Ion Beam (FIB) to characterise the complex pore structure in Ketton carbonate rock. The effect of pore structure and connectivity on the flow properties is investigated using the obtained pore-scale images of Ketton carbonate using Pore Network and Lattice-Boltzmann simulation methods in comparison with experimental data.
We also shed new light on the existence and size of the Representative Element of Volume (REV) capturing the different scales of heterogeneity from the pore-scale imaging.
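The Representative Elementary Volume idea mentioned above can be demonstrated numerically: porosity measured on growing subvolumes fluctuates for small windows and converges to the bulk value once the window exceeds the REV. The sketch uses a synthetic uncorrelated medium, so its REV is tiny; real carbonates with multiscale heterogeneity converge far more slowly, which is the paper's point.

```python
# REV check: porosity of centred subvolumes of increasing size in a
# synthetic binary porous medium (True = pore voxel).
import numpy as np

rng = np.random.default_rng(7)
target_porosity = 0.3
medium = rng.random((128, 128, 128)) < target_porosity

def subvolume_porosity(medium, size):
    """Porosity of a size^3 subvolume centred in the sample."""
    c = medium.shape[0] // 2
    h = size // 2
    sub = medium[c - h:c + h, c - h:c + h, c - h:c + h]
    return float(sub.mean())

for size in (4, 8, 16, 32, 64, 128):
    print(size, round(subvolume_porosity(medium, size), 3))
```

Plotting such a porosity-versus-size curve for each imaging modality is one standard way of deciding whether an image volume is large enough to be representative.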
Folding Proteins at 500 ns/hour with Work Queue.
Abdul-Wahid, Badi'; Yu, Li; Rajan, Dinesh; Feng, Haoyun; Darve, Eric; Thain, Douglas; Izaguirre, Jesús A
2012-10-01
Molecular modeling is a field that traditionally has large computational costs. Until recently, most simulation techniques relied on long trajectories, which inherently have poor scalability. A new class of methods is proposed that requires only a large number of short calculations, and for which minimal communication between computer nodes is required. We considered one of the more accurate variants called Accelerated Weighted Ensemble Dynamics (AWE) and for which distributed computing can be made efficient. We implemented AWE using the Work Queue framework for task management and applied it to an all atom protein model (Fip35 WW domain). We can run with excellent scalability by simultaneously utilizing heterogeneous resources from multiple computing platforms such as clouds (Amazon EC2, Microsoft Azure), dedicated clusters, grids, on multiple architectures (CPU/GPU, 32/64bit), and in a dynamic environment in which processes are regularly added or removed from the pool. This has allowed us to achieve an aggregate sampling rate of over 500 ns/hour. As a comparison, a single process typically achieves 0.1 ns/hour.
Folding Proteins at 500 ns/hour with Work Queue
Abdul-Wahid, Badi’; Yu, Li; Rajan, Dinesh; Feng, Haoyun; Darve, Eric; Thain, Douglas; Izaguirre, Jesús A.
2014-01-01
Molecular modeling is a field that traditionally has large computational costs. Until recently, most simulation techniques relied on long trajectories, which inherently have poor scalability. A new class of methods is proposed that requires only a large number of short calculations, and for which minimal communication between computer nodes is required. We considered one of the more accurate variants called Accelerated Weighted Ensemble Dynamics (AWE) and for which distributed computing can be made efficient. We implemented AWE using the Work Queue framework for task management and applied it to an all atom protein model (Fip35 WW domain). We can run with excellent scalability by simultaneously utilizing heterogeneous resources from multiple computing platforms such as clouds (Amazon EC2, Microsoft Azure), dedicated clusters, grids, on multiple architectures (CPU/GPU, 32/64bit), and in a dynamic environment in which processes are regularly added or removed from the pool. This has allowed us to achieve an aggregate sampling rate of over 500 ns/hour. As a comparison, a single process typically achieves 0.1 ns/hour. PMID:25540799
Community-driven computational biology with Debian Linux
2010-01-01
Background The Open Source movement and its technologies are popular in the bioinformatics community because they provide freely available tools and resources for research. In order to feed the steady demand for updates on software and associated data, a service infrastructure is required for sharing and providing these tools to heterogeneous computing environments. Results The Debian Med initiative provides ready and coherent software packages for medical informatics and bioinformatics. These packages can be used together in Taverna workflows via the UseCase plugin to manage execution on local or remote machines. If such packages are available in cloud computing environments, the underlying hardware and the analysis pipelines can be shared along with the software. Conclusions Debian Med closes the gap between developers and users. It provides a simple method for offering new releases of software and data resources, thus provisioning a local infrastructure for computational biology. For geographically distributed teams it can ensure they are working on the same versions of tools, in the same conditions. This contributes to the world-wide networking of researchers. PMID:21210984
The emergence of spatial cyberinfrastructure.
Wright, Dawn J; Wang, Shaowen
2011-04-05
Cyberinfrastructure integrates advanced computer, information, and communication technologies to empower computation-based and data-driven scientific practice and improve the synthesis and analysis of scientific data in a collaborative and shared fashion. As such, it now represents a paradigm shift in scientific research that has facilitated easy access to computational utilities and streamlined collaboration across distance and disciplines, thereby enabling scientific breakthroughs to be reached more quickly and efficiently. Spatial cyberinfrastructure seeks to resolve longstanding complex problems of handling and analyzing massive and heterogeneous spatial datasets as well as the necessity and benefits of sharing spatial data flexibly and securely. This article provides an overview and potential future directions of spatial cyberinfrastructure. The remaining four articles of the special feature are introduced and situated in the context of providing empirical examples of how spatial cyberinfrastructure is extending and enhancing scientific practice for improved synthesis and analysis of both physical and social science data. The primary focus of the articles is spatial analyses using distributed and high-performance computing, sensor networks, and other advanced information technology capabilities to transform massive spatial datasets into insights and knowledge.
The emergence of spatial cyberinfrastructure
Wright, Dawn J.; Wang, Shaowen
2011-01-01
Cyberinfrastructure integrates advanced computer, information, and communication technologies to empower computation-based and data-driven scientific practice and improve the synthesis and analysis of scientific data in a collaborative and shared fashion. As such, it now represents a paradigm shift in scientific research that has facilitated easy access to computational utilities and streamlined collaboration across distance and disciplines, thereby enabling scientific breakthroughs to be reached more quickly and efficiently. Spatial cyberinfrastructure seeks to resolve longstanding complex problems of handling and analyzing massive and heterogeneous spatial datasets as well as the necessity and benefits of sharing spatial data flexibly and securely. This article provides an overview and potential future directions of spatial cyberinfrastructure. The remaining four articles of the special feature are introduced and situated in the context of providing empirical examples of how spatial cyberinfrastructure is extending and enhancing scientific practice for improved synthesis and analysis of both physical and social science data. The primary focus of the articles is spatial analyses using distributed and high-performance computing, sensor networks, and other advanced information technology capabilities to transform massive spatial datasets into insights and knowledge. PMID:21467227
Absolute comparison of simulated and experimental protein-folding dynamics
NASA Astrophysics Data System (ADS)
Snow, Christopher D.; Nguyen, Houbi; Pande, Vijay S.; Gruebele, Martin
2002-11-01
Protein folding is difficult to simulate with classical molecular dynamics. Secondary structure motifs such as α-helices and β-hairpins can form in 0.1-10 µs (ref. 1), whereas small proteins have been shown to fold completely in tens of microseconds. The longest folding simulation to date is a single 1-µs simulation of the villin headpiece; however, such single runs may miss many features of the folding process, as it is a heterogeneous reaction involving an ensemble of transition states. Here, we have used a distributed computing implementation to produce tens of thousands of 5-20-ns trajectories (700 µs in aggregate) to simulate mutants of the designed mini-protein BBA5. The fast relaxation dynamics predicted by these simulations were compared with the results of laser temperature-jump experiments. Our computational predictions are in excellent agreement with the experimentally determined mean folding times and equilibrium constants. The rapid folding of BBA5 is due to the swift formation of secondary structure. The convergence of experimentally and computationally accessible timescales will allow the comparison of absolute quantities characterizing in vitro and in silico (computed) protein folding.
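The many-short-trajectories strategy behind this result can be sketched as a first-passage estimate: under two-state (single-exponential) kinetics, the folding rate follows from the fraction of short runs that fold within the simulated window. A minimal sketch, with all numbers synthetic (not taken from the paper):

```python
import math
import random

def folding_rate(n_folded, n_total, t_sim_ns):
    """Estimate a single-exponential folding rate k from the fraction f of short
    trajectories that reached the folded state within t_sim_ns:
    P(fold by t) = 1 - exp(-k*t)  =>  k = -ln(1 - f) / t."""
    f = n_folded / n_total
    return -math.log(1.0 - f) / t_sim_ns

# Synthetic check: draw first-passage times from a known rate and recover it
# from many short runs, mimicking the distributed-trajectory strategy.
random.seed(1)
k_true = 1.0 / 5000.0      # true mean folding time: 5000 ns (5 microseconds)
t_sim = 20.0               # each distributed trajectory covers only 20 ns
n = 100000                 # tens of thousands of independent trajectories
folded = sum(1 for _ in range(n) if random.expovariate(k_true) <= t_sim)
k_est = folding_rate(folded, n, t_sim)
print(1.0 / k_est)         # recovered mean folding time (ns)
```

The estimate is only meaningful while k·t_sim is small and the kinetics are genuinely single-exponential, which is the regime probed here.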
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radtke, M.A.
This paper chronicles the activity at Wisconsin Public Service Corporation (WPSC) that resulted in the complete migration of a traditional, late-1970s-vintage Energy Management System (EMS). The new environment includes networked microcomputers, minicomputers, and the corporate mainframe, and provides on-line access to employees outside the energy control center and some WPSC customers. In the late 1980s, WPSC was forecasting an EMS computer upgrade or replacement to address both capacity and technology needs. Reasoning that access to diverse computing resources would best position the company to accommodate the uncertain needs of the energy industry in the 90s, WPSC chose to investigate an in-place migration to a network of computers able to support heterogeneous hardware and operating systems. The system was developed in a modular fashion, with individual modules being deployed as soon as they were completed. The functional and technical specification was continuously enhanced as operating experience was gained from each operational module. With the migration off the original EMS computers complete, the networked system, called DEMAXX (Distributed Energy Management Architecture with eXtensive eXpandability), has exceeded expectations in the areas of cost, performance, flexibility, and reliability.
Van Epps, J Scott; Chew, Douglas W; Vorp, David A
2009-10-01
Certain arteries (e.g., coronary, femoral, etc.) are exposed to cyclic flexure due to their tethering to surrounding tissue beds. It is believed that such stimuli result in a spatially variable biomechanical stress distribution, which has been implicated as a key modulator of remodeling associated with atherosclerotic lesion localization. In this study we utilized a combined ex vivo experimental/computational methodology to address the hypothesis that local variations in shear and mural stress associated with cyclic flexure influence the distribution of early markers of atherogenesis. Bilateral porcine femoral arteries were surgically harvested and perfused ex vivo under pulsatile arterial conditions. One of the paired vessels was exposed to cyclic flexure (0-0.7 cm(-1)) at 1 Hz for 12 h. During the last hour, the perfusate was supplemented with Evans blue dye-labeled albumin. A custom tissue processing protocol was used to determine the spatial distribution of endothelial permeability, apoptosis, and proliferation. Finite element and computational fluid dynamics techniques were used to determine the mural and shear stress distributions, respectively, for each perfused segment. Biological data obtained experimentally and mechanical stress data estimated computationally were combined in an experiment-specific manner using multiple linear regression analyses. Arterial segments exposed to cyclic flexure had significant increases in intimal and medial apoptosis (3.42+/-1.02 fold, p=0.029) with concomitant increases in permeability (1.14+/-0.04 fold, p=0.026). Regression analyses revealed that specific mural stress measures, including circumferential stress at systole and longitudinal pulse stress, were quantitatively correlated with the distribution of permeability and apoptosis.
The results demonstrated that local variation in mechanical stress in arterial segments subjected to cyclic flexure indeed influence the extent and spatial distribution of the early atherogenic markers. In addition, the importance of including mural stresses in the investigation of vascular mechanopathobiology was highlighted. Specific example results were used to describe a potential mechanism by which systemic risk factors can lead to a heterogeneous disease.
NASA Astrophysics Data System (ADS)
Savre, J.; Ekman, A. M. L.
2015-05-01
A new parameterization for heterogeneous ice nucleation constrained by laboratory data and based on classical nucleation theory is introduced. Key features of the parameterization include the following: a consistent and modular modeling framework for treating condensation/immersion and deposition freezing, the possibility to consider various potential ice nucleating particle types (e.g., dust, black carbon, and bacteria), and the possibility to account for an aerosol size distribution. The ice nucleating ability of each aerosol type is described using a contact angle (θ) probability density function (PDF). A new modeling strategy is described to allow the θ PDF to evolve in time so that the most efficient ice nuclei (associated with the lowest θ values) are progressively removed as they nucleate ice. A computationally efficient quasi Monte Carlo method is used to integrate the computed ice nucleation rates over both size and contact angle distributions. The parameterization is employed in a parcel model, forced by an ensemble of Lagrangian trajectories extracted from a three-dimensional simulation of a springtime low-level Arctic mixed-phase cloud, in order to evaluate the accuracy and convergence of the method using different settings. The same model setup is then employed to examine the importance of various parameters for the simulated ice production. Modeling the time evolution of the θ PDF is found to be particularly crucial; assuming a time-independent θ PDF significantly overestimates the ice nucleation rates. It is stressed that the capacity of black carbon (BC) to form ice in the condensation/immersion freezing mode is highly uncertain, in particular at temperatures warmer than -20°C. In its current version, the parameterization most likely overestimates ice initiation by BC.
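The quasi-Monte Carlo integration of nucleation rates over a contact-angle distribution can be sketched as follows. Everything here is a toy: the Gaussian θ PDF, the barrier constant, and the use of a base-2 van der Corput sequence are assumptions of this sketch, not the parameterization's actual functions or fitted values; only the geometric compatibility factor f(θ) is the standard classical-nucleation-theory form.

```python
import math

def halton(i, base=2):
    """i-th element (i >= 1) of the base-2 van der Corput sequence on [0, 1)."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def contact_angle_pdf(theta, mu=2.0, sigma=0.3):
    """Toy Gaussian contact-angle PDF (radians), a stand-in for the fitted theta PDF."""
    return math.exp(-0.5 * ((theta - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def nucleation_rate(theta, barrier=8.0):
    """Toy CNT-like rate: barrier scaled by the compatibility factor
    f(theta) = (2 + cos)(1 - cos)^2 / 4."""
    c = math.cos(theta)
    f_compat = (2.0 + c) * (1.0 - c) ** 2 / 4.0
    return math.exp(-barrier * f_compat)

# Quasi-Monte Carlo estimate of the PDF-weighted rate over theta in [0, pi].
N = 4096
qmc = (math.pi / N) * sum(
    nucleation_rate(math.pi * halton(i)) * contact_angle_pdf(math.pi * halton(i))
    for i in range(1, N + 1))

# Dense trapezoidal reference on the same interval.
M = 20000
h = math.pi / M
vals = [nucleation_rate(j * h) * contact_angle_pdf(j * h) for j in range(M + 1)]
trap = h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])
print(abs(qmc - trap) / trap)  # relative difference between QMC and trapezoid
```

For smooth integrands like this one, low-discrepancy points converge much faster than pseudo-random sampling, which is the motivation for the quasi-Monte Carlo choice.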
Accurate and efficient calculation of response times for groundwater flow
NASA Astrophysics Data System (ADS)
Carr, Elliot J.; Simpson, Matthew J.
2018-03-01
We study measures of the amount of time required for transient flow in heterogeneous porous media to effectively reach steady state, also known as the response time. Here, we develop a new approach that extends the concept of mean action time. Previous applications of the theory of mean action time to estimate the response time use the first two central moments of the probability density function associated with the transition from the initial condition, at t = 0, to the steady state condition that arises in the long time limit, as t → ∞. This previous approach leads to a computationally convenient estimation of the response time, but the accuracy can be poor. Here, we outline a powerful extension using the first k raw moments, showing how to produce an extremely accurate estimate by making use of asymptotic properties of the cumulative distribution function. Results are validated using an existing laboratory-scale data set describing flow in a homogeneous porous medium. In addition, we demonstrate how the results also apply to flow in heterogeneous porous media. Overall, the new method is: (i) extremely accurate; and (ii) computationally inexpensive. In fact, the computational cost of the new method is orders of magnitude less than the computational effort required to study the response time by solving the transient flow equation. Furthermore, the approach provides a rigorous mathematical connection with the heuristic argument that the response time for flow in a homogeneous porous medium is proportional to L²/D, where L is a relevant length scale, and D is the aquifer diffusivity. Here, we extend such heuristic arguments by providing a clear mathematical definition of the proportionality constant.
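The L²/D scaling mentioned above can be checked numerically with the brute-force approach that the paper's method is designed to avoid: solve the transient flow (diffusion) equation and record when the solution is within a tolerance of steady state. A sketch under simple assumptions (1-D homogeneous medium, fixed-head boundaries, explicit finite differences; none of this is the authors' code):

```python
def response_time(L, D, nx=51, tol=0.01):
    """Explicit finite-difference solve of h_t = D h_xx on [0, L] with
    h(0)=1, h(L)=0, h(x,0)=0; return the first time at which every node is
    within tol of the linear steady state (a brute-force 'response time')."""
    dx = L / (nx - 1)
    dt = 0.4 * dx * dx / D            # stable explicit step (r = 0.4 <= 0.5)
    r = D * dt / (dx * dx)
    h = [0.0] * nx
    h[0] = 1.0
    steady = [1.0 - i / (nx - 1) for i in range(nx)]
    t = 0.0
    while max(abs(h[i] - steady[i]) for i in range(nx)) > tol:
        new = h[:]
        for i in range(1, nx - 1):
            new[i] = h[i] + r * (h[i + 1] - 2.0 * h[i] + h[i - 1])
        h = new
        t += dt
    return t

t1 = response_time(1.0, 1.0)
t2 = response_time(2.0, 1.0)   # doubling L should quadruple the response time
print(t2 / t1)
```

With the same dimensionless grid, the ratio t2/t1 comes out at 4, illustrating the L²/D proportionality that the moment-based method formalizes, at a tiny fraction of this transient solve's cost.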
Flexible services for the support of research.
Turilli, Matteo; Wallom, David; Williams, Chris; Gough, Steve; Curran, Neal; Tarrant, Richard; Bretherton, Dan; Powell, Andy; Johnson, Matt; Harmer, Terry; Wright, Peter; Gordon, John
2013-01-28
Cloud computing has been increasingly adopted by users and providers to promote flexible, scalable and tailored access to computing resources. Nonetheless, the consolidation of this paradigm has uncovered some of its limitations. Initially devised by corporations with direct control over large amounts of computational resources, cloud computing is now being endorsed by organizations with limited resources or with a more articulated, less direct control over these resources. The challenge for these organizations is to leverage the benefits of cloud computing while dealing with limited and often widely distributed computing resources. This study focuses on the adoption of cloud computing by higher education institutions and addresses two main issues: flexible and on-demand access to a large amount of storage resources, and scalability across a heterogeneous set of cloud infrastructures. The proposed solutions leverage a federated approach to cloud resources in which users access multiple and largely independent cloud infrastructures through a highly customizable broker layer. This approach allows for a uniform authentication and authorization infrastructure, a fine-grained policy specification and the aggregation of accounting and monitoring. Within a loosely coupled federation of cloud infrastructures, users can access vast amounts of data without copying them across cloud infrastructures and can scale their resource provisions when the local cloud resources become insufficient.
NASA Astrophysics Data System (ADS)
Wei, T. B.; Chen, Y. L.; Lin, H. R.; Huang, S. Y.; Yeh, T. C. J.; Wen, J. C.
2016-12-01
In groundwater studies, hydraulic tomography (HT) based on field-site pumping tests has been widely used to estimate the heterogeneous spatial distribution of hydraulic properties, and such work has shown that most field-site aquifers exhibit heterogeneous spatial distributions of their hydrogeological parameters. Among the HT approaches proposed, Huang et al. [2011] applied a non-redundant verification analysis on a steady-state model, with the pumping wells changed and the observation wells fixed for both the inverse and forward analyses, to demonstrate the feasibility of estimating the heterogeneous spatial distribution of hydraulic properties of a field-site aquifer. The existing literature, however, covers only steady-state, non-redundant verification with changed pumping-well locations and fixed observation wells for the inverse and forward analyses. The various other combinations, pumping wells fixed or changed with observation wells fixed (redundant verification) or with observation wells changed (non-redundant verification), have not yet been explored for their influence on the HT method. In this study, we carried out both redundant and non-redundant verification in forward analyses to examine these influences on hydraulic tomography under transient conditions. We apply the approaches to an actual case at the NYUST campus site to demonstrate the effectiveness of hydraulic tomography and to confirm, from the analysis results, the feasibility of the inverse and forward analyses. Keywords: Hydraulic Tomography, Redundant Verification, Heterogeneous, Inverse, Forward
Hyporheic Zone Residence Time Distributions in Regulated River Corridors
NASA Astrophysics Data System (ADS)
Song, X.; Chen, X.; Shuai, P.; Gomez-Velez, J. D.; Ren, H.; Hammond, G. E.
2017-12-01
Regulated rivers exhibit stage fluctuations at multiple frequencies due to both natural processes (e.g., seasonal cycle) and anthropogenic activities (e.g., dam operation). The interaction between the dynamic river flow conditions and the heterogeneous aquifer properties results in complex hydrologic exchange pathways that are ubiquitous in free-flowing and regulated river corridors. The dynamic nature of the exchange flow is reflected in the residence time distribution (RTD) of river water within the groundwater system, which is a key metric that links river corridor biogeochemical processes with the hydrologic exchange. Understanding the dynamics of RTDs is critical to gain the mechanistic understanding of hydrologic exchange fluxes and propose new parsimonious models for river corridors, yet it is understudied primarily due to the high computational demands. In this study, we developed parallel particle tracking algorithms to reveal how river flow variations affect the RTD of river water in the alluvial aquifer. Particle tracking was conducted using the velocity outputs generated by three-dimensional groundwater flow simulations of PFLOTRAN in a 1600 x 800 x 20m model domain within the DOE Hanford Site. Long-term monitoring data of inland well water levels and river stage were used for eight years of flow simulation. Nearly a half million particles were continually released along the river boundary to calculate the RTDs. Spectral analysis of the river stage data revealed high-frequency (sub-daily to weekly) river stage fluctuations caused by dam operations. The higher frequencies of stage variation were progressively filtered to generate multiple sets of flow boundary conditions. A series of flow simulations were performed by using the filtered flow boundary conditions and various degrees of subsurface heterogeneity to study the relative contribution of flow dynamics and physical heterogeneity on river water RTD. 
Our results revealed multimodal RTDs of river water as a result of the highly variable exchange pathways driven by interactions between dynamic flow and aquifer heterogeneity. A relationship between the RTD and frequency of flow variation was built for each heterogeneity structure, which can be used to assess the potential ecological consequences of dam operations in regulated rivers.
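The core of a residence-time-distribution calculation is advecting particles through a velocity field and recording their travel times. A deliberately reduced sketch (1-D flow paths with lognormal cell velocities standing in for the 3-D PFLOTRAN velocity fields; all values synthetic):

```python
import math
import random

def residence_time(velocities, dx):
    """Advective travel time of one particle through cells of length dx with
    piecewise-constant velocities: a 1-D stand-in for full particle tracking."""
    return sum(dx / v for v in velocities)

random.seed(7)
n_cells, dx = 50, 1.0
rtd = []
for _ in range(2000):
    # Each particle samples one heterogeneous flow path (lognormal velocities).
    vels = [math.exp(random.gauss(0.0, 1.0)) for _ in range(n_cells)]
    rtd.append(residence_time(vels, dx))

rtd.sort()
median_t = rtd[len(rtd) // 2]
mean_t = sum(rtd) / len(rtd)
# A heavy right tail (mean > median) reflects slow pathways, qualitatively
# like the skewed/multimodal RTDs described in the abstract.
print(mean_t, median_t)
```

In the real problem the velocities come from transient 3-D flow solutions, so each particle's path (and hence the RTD) also depends on when it is released relative to the dam-driven stage fluctuations.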
The application of the pilot points in groundwater numerical inversion model
NASA Astrophysics Data System (ADS)
Hu, Bin; Teng, Yanguo; Cheng, Lirong
2015-04-01
Numerical inversion modeling has been widely applied in groundwater studies. Compared with traditional forward modeling, inverse modeling leaves more room for investigation. Zonation and cell-by-cell inversion are the conventional approaches, and the pilot-point method sits between them. Traditional inverse modeling typically uses software to divide the model into a few zones, so that only a small number of parameters needs to be inverted; however, such a distribution is usually too simple, and the simulation results deviate accordingly. Cell-by-cell inversion would, in theory, yield the most realistic parameter distribution, but it greatly increases the computational burden and requires large amounts of survey data for the geostatistical simulation of the area. In contrast, the pilot-point method distributes a set of points throughout the model domains for parameter estimation; property values are assigned to model cells by kriging, which preserves the parameter heterogeneity within geological units. It reduces the demand for geostatistical data about the simulated area and bridges the gap between the two conventional methods. Pilot points can save computation time and improve the goodness of fit, and they also reduce the numerical instability caused by large numbers of parameters, among other advantages. In this paper, we apply pilot points to a field in which the structural formation is heterogeneous and the hydraulic parameters are unknown. We compare the inversion results of the zonation and pilot-point methods and, through comparative analysis, explore the characteristics of pilot points in groundwater inverse modeling. First, the modeler generates an initial spatially correlated field from a geostatistical model based on the description of the case site, using the software Groundwater Vistas 6.
Second, kriging is defined to obtain the values of the field functions (hydraulic conductivity) over the model domain from their values at the measurement and pilot-point locations; the pilot points are then assigned to the interpolated field, which has been divided into four zones, and a range of disturbance values is added to the inversion targets to calculate the hydraulic conductivity. Third, through the inversion calculation (PEST), the interpolated field minimizes an objective function measuring the misfit between calculated and measured data; finding the optimum parameter values is an optimization problem. From the inversion modeling, the following major conclusions can be drawn: (1) in a field whose structural formation is heterogeneous, the pilot-point method gives more realistic results: a better fit of the parameters and a more stable numerical simulation (stable residual distribution); compared with zonation, it better reflects the heterogeneity of the study field. (2) The pilot-point method ensures that each parameter is sensitive and not entirely dependent on the other parameters, which guarantees the relative independence and authenticity of the parameter estimation results; however, it takes more computation time than zonation. Key words: groundwater; pilot point; inverse model; heterogeneity; hydraulic conductivity
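The pilot-point idea, sparse parameter locations interpolated onto every model cell, can be sketched with inverse-distance weighting used as a simplified, easily checked stand-in for the kriging step (the pilot values, coordinates, and weighting exponent below are all made up for illustration):

```python
def idw_field(pilot_points, grid, power=2.0):
    """Interpolate log-conductivity from pilot points onto model cells.
    Inverse-distance weighting is a simplified stand-in for kriging: kriging
    would instead derive the weights from a fitted variogram model."""
    field = []
    for (gx, gy) in grid:
        num = den = 0.0
        exact = None
        for (px, py, val) in pilot_points:
            d2 = (gx - px) ** 2 + (gy - py) ** 2
            if d2 == 0.0:
                exact = val          # cell coincides with a pilot point
                break
            w = 1.0 / d2 ** (power / 2.0)
            num += w * val
            den += w
        field.append(exact if exact is not None else num / den)
    return field

# Three hypothetical pilot points (x, y, log10 K) and a coarse cell grid.
pilots = [(0.0, 0.0, -2.0), (10.0, 0.0, 1.0), (5.0, 8.0, 0.0)]
grid = [(x, y) for y in range(0, 9, 2) for x in range(0, 11, 2)]
logk = idw_field(pilots, grid)
print(min(logk), max(logk))
```

In a real workflow, an optimizer such as PEST perturbs the pilot-point values, re-interpolates the field, and reruns the flow model until the misfit objective is minimized.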
Integrating mean and variance heterogeneities to identify differentially expressed genes.
Ouyang, Weiwei; An, Qiang; Zhao, Jinying; Qin, Huaizhen
2016-12-06
In functional genomics studies, tests on mean heterogeneity have been widely employed to identify differentially expressed genes with distinct mean expression levels under different experimental conditions. Variance heterogeneity (i.e., the difference between condition-specific variances) of gene expression levels is simply neglected or calibrated for as an impediment. The mean heterogeneity in the expression level of a gene reflects one aspect of its distribution alteration, and variance heterogeneity induced by condition change may reflect another aspect. Change in condition may alter both the mean and some higher-order characteristics of the distributions of expression levels of susceptible genes. In this report, we put forth a conception of mean-variance differentially expressed (MVDE) genes, whose expression means and variances are sensitive to the change in experimental condition. We mathematically proved the null independence of existent mean heterogeneity tests and variance heterogeneity tests. Based on this independence, we proposed an integrative mean-variance test (IMVT) to combine gene-wise mean heterogeneity and variance heterogeneity induced by condition change. The IMVT outperformed its competitors under comprehensive simulations of normality and Laplace settings. For moderate samples, the IMVT well controlled type I error rates, as did the existent mean heterogeneity tests (i.e., the Welch t test (WT) and the moderated Welch t test (MWT)) and the procedure of separate tests on mean and variance heterogeneities (SMVT), whereas the likelihood ratio test (LRT) severely inflated type I error rates. In the presence of variance heterogeneity, the IMVT appeared noticeably more powerful than all the valid mean heterogeneity tests. Application to the gene profiles of peripheral circulating B cells raised solid evidence of informative variance heterogeneity.
After adjusting for background data structure, the IMVT replicated previous discoveries and identified novel experiment-wide significant MVDE genes. Our results indicate tremendous potential gain of integrating informative variance heterogeneity after adjusting for global confounders and background data structure. The proposed informative integration test better summarizes the impacts of condition change on expression distributions of susceptible genes than do the existent competitors. Therefore, particular attention should be paid to explicitly exploit the variance heterogeneity induced by condition change in functional genomics analysis.
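The key ingredient above, null independence of the mean and variance tests, is what licenses combining their p-values. A hedged sketch of that idea (not the published IMVT): large-sample normal approximations for a Welch-type mean test and a log-variance test, combined by Fisher's method, whose statistic is chi-square with 4 degrees of freedom (the survival function has the closed form e^(-x/2)(1 + x/2)).

```python
import math
import random

def normal_sf(z):
    """Standard normal survival function via erfc."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def mean_var(xs):
    n = len(xs)
    m = sum(xs) / n
    return m, sum((x - m) ** 2 for x in xs) / (n - 1)

def mean_variance_tests(a, b):
    """Welch-type z test on means, z test on log sample variances
    (large-sample approximations), and their Fisher combination."""
    na, nb = len(a), len(b)
    ma, va = mean_var(a)
    mb, vb = mean_var(b)
    z_mean = (ma - mb) / math.sqrt(va / na + vb / nb)
    p_mean = 2.0 * normal_sf(abs(z_mean))
    z_var = math.log(va / vb) / math.sqrt(2.0 / (na - 1) + 2.0 / (nb - 1))
    p_var = 2.0 * normal_sf(abs(z_var))
    x = -2.0 * (math.log(p_mean) + math.log(p_var))
    p_comb = math.exp(-x / 2.0) * (1.0 + x / 2.0)  # chi-square sf, 4 d.o.f.
    return p_mean, p_var, p_comb

# Two synthetic 'conditions' with equal means but different variances:
# a pure mean test should miss this gene, the combined test should not.
random.seed(3)
a = [random.gauss(0.0, 1.0) for _ in range(200)]
b = [random.gauss(0.0, 2.0) for _ in range(200)]
p_mean, p_var, p_comb = mean_variance_tests(a, b)
print(p_mean, p_var, p_comb)
```

Fisher's method is one of several valid combinations given independence; the published IMVT uses its own integration, so treat this only as an illustration of why the independence result matters.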
Spatially explicit spectral analysis of point clouds and geospatial data
Buscombe, Daniel D.
2015-01-01
The increasing use of spatially explicit analyses of high-resolution spatially distributed data (imagery and point clouds) for the purposes of characterising spatial heterogeneity in geophysical phenomena necessitates the development of custom analytical and computational tools. In recent years, such analyses have become the basis of, for example, automated texture characterisation and segmentation, roughness and grain size calculation, and feature detection and classification, from a variety of data types. In this work, much use has been made of statistical descriptors of localised spatial variations in amplitude variance (roughness); however, the horizontal scale (wavelength) and spacing of roughness elements is rarely considered. This is despite the fact that the ratio of characteristic vertical to horizontal scales is not constant and can yield important information about physical scaling relationships. Spectral analysis is a hitherto under-utilised but powerful means to acquire statistical information about relevant amplitude and wavelength scales, simultaneously and with computational efficiency. Further, quantifying spatially distributed data in the frequency domain lends itself to the development of stochastic models for probing the underlying mechanisms which govern the spatial distribution of geological and geophysical phenomena. The software package PySESA (Python program for Spatially Explicit Spectral Analysis) has been developed for generic analyses of spatially distributed data in both the spatial and frequency domains. Developed predominantly in Python, it accesses libraries written in Cython and C++ for efficiency. It is open source and modular, therefore readily incorporated into, and combined with, other data analysis tools and frameworks with particular utility for supporting research in the fields of geomorphology, geophysics, hydrography, photogrammetry and remote sensing.
The analytical and computational structure of the toolbox is described, and its functionality illustrated with an example of high-resolution bathymetric point-cloud data collected with a multibeam echosounder.
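The spectral idea, recovering characteristic wavelengths that amplitude-variance statistics miss, can be illustrated with a naive one-dimensional periodogram on a synthetic roughness profile (this is a generic sketch, not PySESA's implementation):

```python
import cmath
import math

def periodogram(signal):
    """Naive DFT power spectrum of a detrended elevation profile; returns
    power at integer wavenumbers k = 0 .. n//2 (cycles per record length)."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]
    power = []
    for k in range(n // 2 + 1):
        c = sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
        power.append(abs(c) ** 2 / n)
    return power

# Synthetic roughness: a dominant wavelength of 16 samples (wavenumber 8 in a
# 128-sample record) plus a weaker component at wavenumber 20.
n = 128
profile = [math.sin(2 * math.pi * 8 * j / n) + 0.3 * math.sin(2 * math.pi * 20 * j / n)
           for j in range(n)]
power = periodogram(profile)
k_dom = max(range(1, len(power)), key=lambda k: power[k])
print(k_dom, n // k_dom)  # dominant wavenumber and its wavelength in samples
```

Both sinusoids have the same pointwise variance contribution structure that a roughness statistic would blur together, yet the spectrum separates their wavelengths and relative amplitudes immediately; real tools use FFTs and windowing rather than this O(n²) DFT.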
Extending the granularity of representation and control for the MIL-STD CAIS 1.0 node model
NASA Technical Reports Server (NTRS)
Rogers, Kathy L.
1986-01-01
The Common APSE (Ada Program Support Environment) Interface Set (CAIS) (DoD85) node model provides an excellent baseline for interfaces in a single-host development environment. To encompass the entire spectrum of computing, however, the CAIS model should be extended in four areas. It should provide the interface between the engineering workstation and the host system throughout the entire lifecycle of the system. It should provide a basis for communication and integration functions needed by distributed host environments. It should provide common interfaces for communications mechanisms to and among target processors. It should provide facilities for integration, validation, and verification of test beds extending to distributed systems on geographically separate processors with heterogeneous instruction set architectures (ISAs). Additions to the PROCESS NODE model to extend the CAIS into these four areas are proposed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sullivan, M.; Anderson, D.P.
1988-01-01
Marionette is a system for distributed parallel programming in an environment of networked heterogeneous computer systems. It is based on a master/slave model. The master process can invoke worker operations (asynchronous remote procedure calls to single slaves) and context operations (updates to the state of all slaves). The master and slaves also interact through shared data structures that can be modified only by the master. The master and slave processes are programmed in a sequential language. The Marionette runtime system manages slave process creation, propagates shared data structures to slaves as needed, queues and dispatches worker and context operations, and manages recovery from slave processor failures. The Marionette system also includes tools for automated compilation of program binaries for multiple architectures, and for distributing binaries to remote file systems. A UNIX-based implementation of Marionette is described.
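The master/slave pattern described, asynchronous worker operations plus master-only updates to shared state, can be sketched with Python's thread pool (a generic illustration of the pattern, not the Marionette API):

```python
from concurrent.futures import ThreadPoolExecutor

def worker_op(shared, item):
    """A 'worker operation': a function of master-owned shared state and one
    work item. Workers read shared state but never mutate it."""
    return shared["scale"] * item * item

# The master owns the shared data structure, dispatches asynchronous calls,
# and is the only party that updates shared state.
shared = {"scale": 1}
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(worker_op, shared, i) for i in range(10)]  # async RPC-like calls
    results = [f.result() for f in futures]   # gather in submission order

shared["scale"] = 2  # a 'context operation': master updates state between rounds
print(results)
```

Because only the master writes `shared` and results are gathered in submission order, the outcome is deterministic even though the worker calls run concurrently, which is the same discipline Marionette enforces across machines.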
Semantic integration of data on transcriptional regulation
Baitaluk, Michael; Ponomarenko, Julia
2010-01-01
Motivation: Experimental and predicted data concerning gene transcriptional regulation are distributed among many heterogeneous sources. However, there are no resources to integrate these data automatically or to provide a ‘one-stop shop’ experience for users seeking information essential for deciphering and modeling gene regulatory networks. Results: IntegromeDB, a semantic graph-based ‘deep-web’ data integration system that automatically captures, integrates and manages publicly available data concerning transcriptional regulation, as well as other relevant biological information, is proposed in this article. The problems associated with data integration are addressed by ontology-driven data mapping, multiple data annotation and heterogeneous data querying, also enabling integration of the user's data. IntegromeDB integrates over 100 experimental and computational data sources relating to genomics, transcriptomics, genetics, and functional and interaction data concerning gene transcriptional regulation in eukaryotes and prokaryotes. Availability: IntegromeDB is accessible through the integrated research environment BiologicalNetworks at http://www.BiologicalNetworks.org Contact: baitaluk@sdsc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20427517
Linear functional minimization for inverse modeling
Barajas-Solano, David A.; Wohlberg, Brendt Egon; Vesselinov, Velimir Valentinov; ...
2015-06-01
In this paper, we present a novel inverse modeling strategy to estimate spatially distributed parameters of nonlinear models. The maximum a posteriori (MAP) estimators of these parameters are based on a likelihood functional, which contains spatially discrete measurements of the system parameters and spatiotemporally discrete measurements of the transient system states. The piecewise continuity prior for the parameters is expressed via Total Variation (TV) regularization. The MAP estimator is computed by minimizing a nonquadratic objective equipped with the TV operator. We apply this inversion algorithm to estimate hydraulic conductivity of a synthetic confined aquifer from measurements of conductivity and hydraulic head. The synthetic conductivity field is composed of a low-conductivity heterogeneous intrusion into a high-conductivity heterogeneous medium. Our algorithm accurately reconstructs the location, orientation, and extent of the intrusion from the steady-state data only. Finally, addition of transient measurements of hydraulic head improves the parameter estimation, accurately reconstructing the conductivity field in the vicinity of observation locations.
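Why a TV prior preserves sharp intrusion boundaries can be shown on a toy problem: an identity forward model (direct noisy observations of the parameter field) with a smoothed 1-D TV penalty, minimized by plain gradient descent. This is a stand-in for the paper's MAP inversion, not its algorithm; all fields and constants are synthetic.

```python
import math
import random

def tv_denoise(d, lam=0.5, eps=1e-2, step=0.05, iters=3000):
    """Gradient descent on 0.5*||m - d||^2 + lam * sum_i sqrt((m[i+1]-m[i])^2 + eps):
    a smoothed total-variation objective with an identity forward model."""
    m = list(d)
    n = len(m)
    for _ in range(iters):
        g = [m[i] - d[i] for i in range(n)]          # data-misfit gradient
        for i in range(n - 1):                        # smoothed-TV gradient
            diff = m[i + 1] - m[i]
            t = lam * diff / math.sqrt(diff * diff + eps)
            g[i] -= t
            g[i + 1] += t
        m = [m[i] - step * g[i] for i in range(n)]
    return m

random.seed(5)
truth = [0.0] * 20 + [1.0] * 20                       # piecewise-constant 'conductivity'
data = [t + random.gauss(0.0, 0.3) for t in truth]    # noisy observations
model = tv_denoise(data)
rms_data = math.sqrt(sum((d - t) ** 2 for d, t in zip(data, truth)) / len(truth))
rms_model = math.sqrt(sum((m - t) ** 2 for m, t in zip(model, truth)) / len(truth))
print(rms_data, rms_model)
```

Unlike a quadratic (smoothness) prior, the TV term penalizes the total jump rather than its square, so it suppresses noise in the flat segments while keeping the single large step, which is exactly the behavior needed to recover a sharp intrusion boundary.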
Lung volume reduction of pulmonary emphysema: the radiologist task.
Milanese, Gianluca; Silva, Mario; Sverzellati, Nicola
2016-03-01
Several lung volume reduction (LVR) techniques have been increasingly evaluated in patients with advanced pulmonary emphysema, especially in the last decade. The radiologist plays a pivotal role in the characterization of parenchymal damage and, thus, in the assessment of eligibility criteria. This review aims to discuss the most common LVR techniques, namely LVR surgery, endobronchial valves, and LVR coils, with emphasis on the role of computed tomography (CT). Several trials have recently highlighted the importance of regional quantification of emphysema by computerized CT-based segmentation of hyperlucent parenchyma, which is strongly recommended for candidates for any LVR treatment. In particular, the emphysema distribution pattern and fissure integrity are evaluated to tailor the choice of the most appropriate LVR technique. Furthermore, a number of CT measures have been tested for the personalization of treatment, according to imaging-detected heterogeneity of parenchymal disease. CT characterization of heterogeneous parenchymal abnormalities provides criteria for selection of the preferable treatment in each patient and improves the outcome of LVR, as reflected by better quality of life, higher exercise tolerance, and lower mortality.
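Regional emphysema quantification on CT is commonly reported as the percentage of lung voxels below an attenuation threshold (%LAA, often at -950 HU). A toy sketch of that index and of a regional heterogeneity comparison; the voxel values are invented, and the -950 HU threshold is a common convention rather than something specified in this review:

```python
def laa_percent(lung_hu, threshold=-950):
    """Percentage of (already lung-masked) voxels below the attenuation
    threshold: the low-attenuation-area index %LAA."""
    n_low = sum(1 for v in lung_hu if v < threshold)
    return 100.0 * n_low / len(lung_hu)

# Synthetic HU samples from two regions of the same lung.
upper = [-980, -960, -970, -940, -990, -955, -930, -975]
lower = [-890, -910, -940, -870, -920, -905, -860, -930]
upper_laa = laa_percent(upper)
lower_laa = laa_percent(lower)
print(upper_laa, lower_laa)   # an upper-lobe-predominant (heterogeneous) pattern
```

A large upper-lower %LAA difference like this one indicates heterogeneous, upper-lobe-predominant disease, the kind of regional information used to tailor the choice among LVR techniques.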
Genovese, Katia; Leeflang, Sander; Zadpoor, Amir A
2017-05-01
A custom-designed micro-digital image correlation system was used to track the evolution of the full-surface three-dimensional strain field of Ti6Al4V additively manufactured lattice samples under mechanical loading. The high-magnification capabilities of the method made it possible to resolve the strain distribution down to the strut level and disclosed a highly heterogeneous mechanical response of the lattice structure, with local strain concentrations well above the nominal global strain level. In particular, we quantified that strain heterogeneity appears at a very early stage of the deformation process and increases with load, showing a strain accumulation pattern with a clear correlation to the later onset of fracture. The obtained results suggest that the unique opportunities offered by the proposed experimental method, in conjunction with analytical and computational models, could provide novel and important information for the rational design of additively manufactured porous biomaterials. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Tessler, Alexander; DiSciuva, Marco; Gherlone, Marco
2010-01-01
The Refined Zigzag Theory (RZT) for homogeneous, laminated composite, and sandwich plates is presented from a multi-scale formalism, starting with the in-plane displacement field expressed as a superposition of coarse and fine contributions. The coarse kinematic field is that of first-order shear-deformation theory, whereas the fine kinematic field has a piecewise-linear zigzag distribution through the thickness. The condition of limiting homogeneity of transverse-shear properties is proposed and yields four distinct sets of zigzag functions. By examining elastostatic solutions for highly heterogeneous sandwich plates, the best-performing zigzag functions are identified. The predictive capabilities of RZT for modeling homogeneous and highly heterogeneous sandwich plates are critically assessed, demonstrating its superior efficiency, accuracy, and wide range of applicability. The present theory, which is derived from the virtual work principle, is well-suited for developing computationally efficient C0-continuous finite elements, and is thus appropriate for the analysis and design of high-performance load-bearing aerospace structures.
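The coarse-plus-fine superposition described above can be written compactly. This is a sketch in commonly cited RZT notation; the symbols are assumed here, not quoted from the paper:

```latex
u_\alpha(x,y,z) \;=\; u_\alpha(x,y) \;+\; z\,\theta_\alpha(x,y)
  \;+\; \phi_\alpha^{(k)}(z)\,\psi_\alpha(x,y), \qquad \alpha = 1,2
```

where the first two terms are the coarse first-order shear-deformation kinematics (midplane displacement and rotation), and the fine contribution is the product of a piecewise-linear zigzag function \(\phi_\alpha^{(k)}(z)\), defined layerwise in layer \(k\), with a zigzag amplitude \(\psi_\alpha(x,y)\).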
Confronting the Paradox of Enrichment to the Metacommunity Perspective
Hauzy, Céline; Nadin, Grégoire; Canard, Elsa; Gounand, Isabelle; Mouquet, Nicolas; Ebenman, Bo
2013-01-01
Resource enrichment can potentially destabilize predator-prey dynamics. This phenomenon, historically referred to as the "paradox of enrichment", has mostly been explored in spatially homogeneous environments. However, many predator-prey communities exchange organisms within spatially heterogeneous networks called metacommunities. This heterogeneity can result from the uneven distribution of resources among communities and can thus lead to the spreading of local enrichment within metacommunities. Here, we adapted the original Rosenzweig-MacArthur predator-prey model, built to study the paradox of enrichment, to investigate the effect of regional enrichment and of its spatial distribution on predator-prey dynamics in metacommunities. We found that the potential for destabilization depended on the connectivity among communities and the spatial distribution of enrichment. On the one hand, at low dispersal, regional enrichment destabilized predator-prey dynamics, and this destabilizing effect was more pronounced when the enrichment was unevenly distributed among communities. On the other hand, high dispersal could stabilize the predator-prey dynamics when the enrichment was spatially heterogeneous. Our results illustrate that the destabilizing effect of enrichment can be dampened when the spatial scale of resource enrichment is smaller than that of organisms' movements (heterogeneous enrichment). From a conservation perspective, our results illustrate that spatial heterogeneity could decrease the regional extinction risk of species involved in specialized trophic interactions. From the perspective of biological control, our results show that a heterogeneous distribution of pest resources could favor or dampen outbreaks of pests and of their natural enemies, depending on the spatial scale of heterogeneity. PMID:24358242
Xu, Maoqi; Chen, Liang
2018-01-01
The individual sample heterogeneity is one of the biggest obstacles in biomarker identification for complex diseases such as cancers. Current statistical models to identify differentially expressed genes between disease and control groups often overlook the substantial human sample heterogeneity. Meanwhile, traditional nonparametric tests lose detailed data information and sacrifice analysis power, although they are distribution free and robust to heterogeneity. Here, we propose an empirical likelihood ratio test with a mean-variance relationship constraint (ELTSeq) for the differential expression analysis of RNA sequencing (RNA-seq). As a distribution-free nonparametric model, ELTSeq handles individual heterogeneity by estimating an empirical probability for each observation without making any assumption about the read-count distribution. It also incorporates a constraint for the read-count overdispersion, which is widely observed in RNA-seq data. ELTSeq demonstrates a significant improvement over existing methods such as edgeR, DESeq, t-tests, Wilcoxon tests and the classic empirical likelihood-ratio test when handling heterogeneous groups. It will significantly advance the transcriptomics studies of cancers and other complex diseases. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
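ELTSeq adds a mean-variance constraint, but the classic empirical likelihood ratio test that the abstract uses as a baseline can be sketched for the simplest case, a one-sample test of the mean. This is the standard Owen-style construction, shown for illustration only; it is not the authors' ELTSeq method:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_ratio_test_mean(x, mu0):
    """Classic one-sample empirical likelihood ratio test for the mean:
    each observation carries its own empirical weight, and the likelihood
    ratio statistic is asymptotically chi-squared with 1 df (one constraint)."""
    x = np.asarray(x, dtype=float)
    z = x - mu0
    if z.min() >= 0 or z.max() <= 0:     # mu0 outside the convex hull of the data
        return np.inf, 0.0
    # Lagrange condition: sum z_i / (1 + lam * z_i) = 0, with all weights positive
    lo = -1.0 / z.max() + 1e-10
    hi = -1.0 / z.min() - 1e-10
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
    stat = 2.0 * np.sum(np.log1p(lam * z))  # -2 log(empirical likelihood ratio)
    return stat, chi2.sf(stat, df=1)
```

Like the methods in the abstract, this puts an empirical probability on each observation rather than assuming a parametric read-count distribution.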
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mashouf, S; Lai, P; Karotki, A
2014-06-01
Purpose: Seed brachytherapy is currently used for adjuvant radiotherapy of early stage prostate and breast cancer patients. The current standard for calculation of dose surrounding the brachytherapy seeds is based on the American Association of Physicists in Medicine Task Group No. 43 (TG-43) formalism, which generates the dose in a homogeneous water medium. Recently, AAPM Task Group No. 186 emphasized the importance of accounting for tissue heterogeneities. This can be done using Monte Carlo (MC) methods, but it requires knowing the source structure and tissue atomic composition accurately. In this work we describe an efficient analytical dose inhomogeneity correction algorithm, implemented using the MIM Symphony treatment planning platform, to calculate dose distributions in heterogeneous media. Methods: An Inhomogeneity Correction Factor (ICF) is introduced as the ratio of absorbed dose in tissue to that in a water medium. ICF is a function of tissue properties and independent of source structure. The ICF is extracted using CT images, and the absorbed dose in tissue can then be calculated by multiplying the dose as calculated by the TG-43 formalism by the ICF. To evaluate the methodology, we compared our results with Monte Carlo simulations as well as experiments in phantoms with known density and atomic compositions. Results: The dose distributions obtained by applying the ICF to the TG-43 protocol agreed very well with those of Monte Carlo simulations as well as experiments in all phantoms. In all cases, the mean relative error was reduced by at least 50% when the ICF correction was applied to the TG-43 protocol. Conclusion: We have developed a new analytical dose calculation method which enables personalized dose calculations in heterogeneous media. The advantages over stochastic methods are computational efficiency and ease of integration into the clinical setting, as detailed source structure and tissue segmentation are not needed.
University of Toronto, Natural Sciences and Engineering Research Council of Canada.
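The correction described in the abstract is multiplicative per voxel: the TG-43 water-medium dose times the ICF grid. A minimal sketch (array shapes and helper names are illustrative, and the actual extraction of the ICF from CT images is not shown):

```python
import numpy as np

def apply_icf(dose_tg43, icf):
    """Heterogeneity-corrected dose: TG-43 water-medium dose multiplied,
    voxel by voxel, by the Inhomogeneity Correction Factor (ICF)."""
    return np.asarray(dose_tg43, float) * np.asarray(icf, float)

def mean_relative_error(dose_est, dose_ref):
    """Mean relative error against a reference dose grid (e.g. Monte Carlo),
    the kind of figure of merit used to compare corrected and uncorrected
    protocols in the abstract."""
    est, ref = np.asarray(dose_est, float), np.asarray(dose_ref, float)
    mask = ref > 0
    return float(np.mean(np.abs(est[mask] - ref[mask]) / ref[mask]))
```

Because the ICF depends only on tissue properties, the expensive source-specific work stays inside the precomputed TG-43 dose, which is what makes the correction cheap relative to a full Monte Carlo recalculation.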
NASA Astrophysics Data System (ADS)
Chen, X.; Murakami, H.; Hahn, M. S.; Hammond, G. E.; Rockhold, M. L.; Rubin, Y.
2010-12-01
Tracer testing under natural or forced gradient flow provides useful information for characterizing subsurface properties, by monitoring and modeling the tracer plume migration in a heterogeneous aquifer. At the Hanford 300 Area, non-reactive tracer experiments, in addition to constant-rate injection tests and electromagnetic borehole flowmeter (EBF) profiling, were conducted to characterize the heterogeneous hydraulic conductivity field. A Bayesian data assimilation technique, method of anchored distributions (MAD), is applied to assimilate the experimental tracer test data and to infer the three-dimensional heterogeneous structure of the hydraulic conductivity in the saturated zone of the Hanford formation. In this study, the prior information of the underlying random hydraulic conductivity field was obtained from previous field characterization efforts using the constant-rate injection tests and the EBF data. The posterior distribution of the random field is obtained by further conditioning the field on the temporal moments of tracer breakthrough curves at various observation wells. The parallel three-dimensional flow and transport code PFLOTRAN is implemented to cope with the highly transient flow boundary conditions at the site and to meet the computational demand of the proposed method. The validation results show that the field conditioned on the tracer test data better reproduces the tracer transport behavior compared to the field characterized previously without the tracer test data. A synthetic study proves that the proposed method can effectively assimilate tracer test data to capture the essential spatial heterogeneity of the three-dimensional hydraulic conductivity field. These characterization results will improve conceptual models developed for the site, including reactive transport models. The study successfully demonstrates the capability of MAD to assimilate multi-scale multi-type field data within a consistent Bayesian framework. 
The MAD framework can potentially be applied to combine geophysical data with other types of data in site characterization.
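Conditioning on temporal moments of tracer breakthrough curves, as described above, starts from moments such as the mean arrival time at each observation well. A minimal sketch using trapezoidal integration (the Bayesian assimilation step itself is not shown):

```python
import numpy as np

def _trapz(y, x):
    # version-safe trapezoidal rule (np.trapz was removed in NumPy 2.0)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def temporal_moments(t, conc):
    """Zeroth temporal moment (recovered mass proxy) and normalized first
    temporal moment (mean arrival time) of a tracer breakthrough curve."""
    t, conc = np.asarray(t, float), np.asarray(conc, float)
    m0 = _trapz(conc, t)
    m1 = _trapz(t * conc, t) / m0
    return m0, m1
```

Reducing each breakthrough curve to a few such moments is what makes the inversion tractable: the likelihood compares simulated and observed moments rather than full time series.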
Pore-scale Simulation and Imaging of Multi-phase Flow and Transport in Porous Media (Invited)
NASA Astrophysics Data System (ADS)
Crawshaw, J.; Welch, N.; Daher, I.; Yang, J.; Shah, S.; Grey, F.; Boek, E.
2013-12-01
We combine multi-scale imaging and computer simulation of multi-phase flow and reactive transport in rock samples to enhance our fundamental understanding of long term CO2 storage in rock formations. The imaging techniques include Confocal Laser Scanning Microscopy (CLSM), micro-CT and medical CT scanning, with spatial resolutions ranging from sub-micron to mm, respectively. First, we report a new sample preparation technique to study micro-porosity in carbonates using CLSM in 3 dimensions. Second, we use micro-CT scanning to generate high resolution 3D pore space images of carbonate and cap rock samples. In addition, we employ micro-CT to image the processes of evaporation in fractures and cap rock degradation due to exposure to CO2 flow. Third, we use medical CT scanning to image spontaneous imbibition in carbonate rock samples. Our imaging studies are complemented by computer simulations of multi-phase flow and transport, using the 3D pore space images obtained from the scanning experiments. We have developed a massively parallel lattice-Boltzmann (LB) code to calculate the single phase flow field in these pore space images. The resulting flow fields are then used to calculate hydrodynamic dispersion using a novel scheme to predict probability distributions for molecular displacements using the LB method and a streamline algorithm, modified for optimal solid boundary conditions. We calculate solute transport on pore-space images of rock cores with increasing degree of heterogeneity: a bead pack, Bentheimer sandstone and Portland carbonate. We observe that for homogeneous rock samples, such as bead packs, the displacement distribution remains Gaussian as time increases. In the more heterogeneous rocks, on the other hand, the displacement distribution develops a stagnant part.
We observe that the fraction of trapped solute increases from the bead pack (0%) to Bentheimer sandstone (1.5%) to Portland carbonate (8.1%), in excellent agreement with PFG-NMR experiments. We then use our preferred multi-phase model to directly calculate flow in pore space images of two different sandstones and observe excellent agreement with experimental relative permeabilities. We also calculate cluster size distributions in good agreement with experimental studies. Our analysis shows that the simulations are able to predict both multi-phase flow and transport properties directly on large 3D pore space images of real rocks. Figure: pore space images (left) and velocity distributions (right) (Yang and Boek, 2013).
Web-GIS platform for monitoring and forecasting of regional climate and ecological changes
NASA Astrophysics Data System (ADS)
Gordov, E. P.; Krupchatnikov, V. N.; Lykosov, V. N.; Okladnikov, I.; Titov, A. G.; Shulgina, T. M.
2012-12-01
The growing volume of environmental data from sensors and model outputs makes the development of a software infrastructure, based on modern information and telecommunication technologies, for the support of integrated scientific research in the Earth sciences an urgent and important task (Gordov et al, 2012; van der Wel, 2005). It should be considered that the inherent heterogeneity of datasets obtained from different sources and institutions not only hampers the interchange of data and analysis results but also complicates their intercomparison, reducing the reliability of analysis results. However, modern geophysical data processing techniques allow different technological solutions to be combined when organizing such information resources. It has become generally accepted that an information-computational infrastructure should rely on the combined use of web and GIS technologies for creating applied information-computational web systems (Titov et al, 2009; Gordov et al, 2010; Gordov, Okladnikov and Titov, 2011). Using these approaches to develop internet-accessible thematic information-computational systems, and arranging data and knowledge interchange between them, is a promising way to create a distributed information-computation environment supporting multidisciplinary regional and global research in the Earth sciences, including analysis of climate changes and their impact on the spatial-temporal distribution and state of vegetation.
We present an experimental software and hardware platform that supports the operation of a web-oriented production and research center for regional climate change investigations, combining a modern Web 2.0 approach, GIS functionality, and capabilities for running climate and meteorological models, processing large geophysical datasets, visualization, joint software development by distributed research groups, scientific analysis, and the education of undergraduate and post-graduate students. The platform software (Shulgina et al, 2012; Okladnikov et al, 2012) includes dedicated modules for numerical processing of regional and global modeling results for subsequent analysis and visualization. Data preprocessing, runs, and visualization of results for the WRF and «Planet Simulator» models integrated into the platform are also provided. All functions of the center are accessible to users through a web portal from a common graphical web browser, via an interactive graphical user interface that provides, in particular, visualization of processing results, selection of a geographical region of interest (pan and zoom), and data-layer manipulation (ordering, enabling/disabling, feature extraction). The platform provides users with capabilities for heterogeneous geophysical data analysis, including high-resolution data, and for discovering tendencies in climatic and ecosystem changes in the framework of different multidisciplinary research efforts (Shulgina et al, 2011). Even a user without specific expertise can perform computational processing and visualization of large meteorological, climatological and satellite monitoring datasets through the unified graphical web interface.
The Integration of CloudStack and OCCI/OpenNebula with DIRAC
NASA Astrophysics Data System (ADS)
Méndez Muñoz, Víctor; Fernández Albor, Víctor; Graciani Diaz, Ricardo; Casajús Ramo, Adriàn; Fernández Pena, Tomás; Merino Arévalo, Gonzalo; José Saborido Silva, Juan
2012-12-01
The increasing availability of Cloud resources is arising as a realistic alternative to the Grid as a paradigm for enabling scientific communities to access large distributed computing resources. The DIRAC framework for distributed computing provides efficient access to resources from both systems. This paper explains the integration of DIRAC with two open-source Cloud Managers: OpenNebula (taking advantage of the OCCI standard) and CloudStack. These are computing tools that manage the complexity and heterogeneity of distributed data center infrastructures, allowing virtual clusters to be created on demand across public, private and hybrid clouds. This approach required developing an extension to the previous DIRAC Virtual Machine engine, originally written for Amazon EC2, to allow connection with these new cloud managers. In the OpenNebula case, the development was based on the CernVM Virtual Software Appliance with appropriate contextualization, while in the case of CloudStack the infrastructure was kept more general, permitting other Virtual Machine sources and operating systems to be used. In both cases, the CernVM File System is used to facilitate software distribution to the computing nodes. With the resulting infrastructure, cloud resources are transparent to users through a friendly interface such as the DIRAC Web Portal. The main purpose of this integration is a system that can manage cloud and grid resources at the same time. This particular feature pushes DIRAC toward a new conceptual denomination as "interware", integrating different middleware. Users from different communities need not care about the installation of the standard software available at the nodes, nor about the operating system of the host machine, which is transparent to the user. This paper presents an analysis of the overhead of the virtual layer, with tests comparing the proposed approach with the existing Grid solution.
License Notice: Published under licence in Journal of Physics: Conference Series by IOP Publishing Ltd.
Exploration of Heterogeneity in Distributed Research Network Drug Safety Analyses
ERIC Educational Resources Information Center
Hansen, Richard A.; Zeng, Peng; Ryan, Patrick; Gao, Juan; Sonawane, Kalyani; Teeter, Benjamin; Westrich, Kimberly; Dubois, Robert W.
2014-01-01
Distributed data networks representing large diverse populations are an expanding focus of drug safety research. However, interpreting results is difficult when treatment effect estimates vary across datasets (i.e., heterogeneity). In a previous study, risk estimates were generated for selected drugs and potential adverse outcomes. Analyses were…
Heterogeneous Integration Technology
2017-05-19
Distribution A. Approved for public release; distribution unlimited. (APRS-RY-17-0383)
Heterogeneous distribution of metabolites across plant species
NASA Astrophysics Data System (ADS)
Takemoto, Kazuhiro; Arita, Masanori
2009-07-01
We investigate the distribution of flavonoids, a major category of plant secondary metabolites, across species. Flavonoids are known to show high species specificity, and were once considered chemical markers for understanding adaptive evolution and the characterization of living organisms. We investigate the distribution among species using bipartite networks, and find that two heterogeneous distributions are conserved among several families: the power-law distributions of the number of flavonoids in a species and of the number of species sharing a particular flavonoid. In order to explain the possible origin of this heterogeneity, we propose a simple model with, essentially, a single parameter. As a result, we show that the two respective power-law statistics emerge from simple evolutionary mechanisms based on a multiplicative process. These findings provide insights into the evolution of metabolite diversity and the characterization of living organisms that defy genome sequence analysis for different reasons.
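A multiplicative (rich-get-richer) process of the kind invoked above can be illustrated with a toy simulation: species that already hold many metabolites are proportionally more likely to gain the next one, which produces heavy-tailed count distributions. The update rule and parameter values below are illustrative assumptions, not the authors' exact model:

```python
import numpy as np

rng = np.random.default_rng(42)

def multiplicative_growth(n_steps=20000, n_species=500, p_new=0.1):
    """Toy rich-get-richer model: at each step a species is chosen either
    uniformly at random (probability p_new) or with probability proportional
    to its current metabolite count, and gains one metabolite. The
    multiplicative growth concentrates counts into a heavy-tailed
    distribution (illustrative sketch only)."""
    counts = np.ones(n_species)
    for _ in range(n_steps):
        if rng.random() < p_new:
            i = rng.integers(n_species)                     # innovation: any species
        else:
            i = rng.choice(n_species, p=counts / counts.sum())  # proportional growth
        counts[i] += 1
    return counts
```

Plotting the sorted counts on log-log axes shows the skew: a few species accumulate far more metabolites than the typical species, qualitatively like the power-law statistics reported in the abstract.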
Privacy Preservation in Distributed Subgradient Optimization Algorithms.
Lou, Youcheng; Yu, Lean; Wang, Shouyang; Yi, Peng
2017-07-31
In this paper, privacy-preserving features of distributed subgradient optimization algorithms are considered. Most existing distributed algorithms focus mainly on algorithm design and convergence analysis, but not on the protection of agents' privacy. Privacy is becoming an increasingly important issue in applications involving sensitive information. In this paper, we first show that the distributed subgradient synchronous homogeneous-stepsize algorithm is not privacy preserving, in the sense that a malicious agent can asymptotically discover other agents' subgradients by transmitting untrue estimates to its neighbors. We then propose a distributed subgradient asynchronous heterogeneous-stepsize projection algorithm and establish its convergence and optimality. In contrast to the synchronous homogeneous-stepsize algorithm, in the new algorithm agents make their optimization updates asynchronously with heterogeneous stepsizes. The two introduced mechanisms, projection and asynchronous heterogeneous-stepsize optimization, guarantee that agents' privacy is effectively protected.
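The building blocks named above (consensus averaging, local subgradient steps with heterogeneous stepsizes, and projection onto a constraint set) can be sketched on a toy problem. This sketch is synchronous for simplicity, whereas the paper's algorithm is asynchronous, and the network, objective, and stepsizes are illustrative assumptions:

```python
import numpy as np

# Four agents cooperatively minimize sum_i |x - a_i| over X = [-5, 5];
# the minimizers are the medians of the private data a_i.
a = np.array([-2.0, 0.0, 1.0, 3.0])       # each agent's private data
W = np.full((4, 4), 0.25)                  # doubly stochastic mixing matrix
steps = np.array([0.5, 0.8, 1.0, 1.3])     # heterogeneous stepsize scales

x = np.zeros(4)                            # each agent's local estimate
for k in range(1, 3001):
    x = W @ x                              # consensus: average neighbors' estimates
    g = np.sign(x - a)                     # subgradient of the local cost |x - a_i|
    # projected update with diminishing, agent-specific stepsizes
    x = np.clip(x - steps / np.sqrt(k) * g, -5.0, 5.0)
```

The heterogeneous stepsizes mean an observer cannot subtract a known common stepsize to recover a neighbor's subgradient exactly, which is the intuition (not a proof) behind the privacy mechanism described in the abstract.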
NASA Astrophysics Data System (ADS)
Zhu, J.; Winter, C. L.; Wang, Z.
2015-11-01
Computational experiments are performed to evaluate the effects of locally heterogeneous conductivity fields on regional exchanges of water between stream and aquifer systems in the Middle Heihe River basin (MHRB) of northwestern China. The effects are found to be nonlinear in the sense that simulated discharges from aquifers to streams are systematically lower than discharges produced by a base model parameterized with relatively coarse effective conductivity. A similar, but weaker, effect is observed for stream leakage. The study is organized around three hypotheses: (H1) small-scale spatial variations of conductivity significantly affect regional exchanges of water between streams and aquifers in river basins, (H2) aggregating small-scale heterogeneities into regional effective parameters systematically biases estimates of stream-aquifer exchanges, and (H3) the biases result from slow paths in groundwater flow that emerge due to small-scale heterogeneities. The hypotheses are evaluated by comparing stream-aquifer fluxes produced by the base model to fluxes simulated using realizations of the MHRB characterized by local (grid-scale) heterogeneity. Levels of local heterogeneity are manipulated as control variables by adjusting coefficients of variation. All models are implemented using the MODFLOW (Modular Three-dimensional Finite-difference Groundwater Flow Model) simulation environment, and the PEST (parameter estimation) tool is used to calibrate effective conductivities defined over 16 zones within the MHRB. The effective parameters are also used as expected values to develop lognormally distributed conductivity (K) fields on local grid scales. Stream-aquifer exchanges are simulated with K fields at both scales and then compared. Results show that the effects of small-scale heterogeneities significantly influence exchanges with simulations based on local-scale heterogeneities always producing discharges that are less than those produced by the base model. 
Although aquifer heterogeneities are uncorrelated at local scales, they appear to induce coherent slow paths in groundwater fluxes that in turn reduce aquifer-stream exchanges. Since surface water-groundwater exchanges are critical hydrologic processes in basin-scale water budgets, these results also have implications for water resources management.
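The study above builds local-scale conductivity fields that are lognormally distributed around zonal effective values, with the coefficient of variation as the control variable. A minimal sketch of that construction for one zone (uncorrelated draws; any grid correlation structure is ignored, and the function name is illustrative):

```python
import numpy as np

def lognormal_k_field(k_eff, cv, shape, seed=0):
    """Uncorrelated lognormal conductivity field whose arithmetic mean is
    the effective value k_eff and whose coefficient of variation is cv.
    Uses the standard moment relations of the lognormal distribution:
    sigma^2 = ln(1 + cv^2), mu = ln(k_eff) - sigma^2 / 2."""
    rng = np.random.default_rng(seed)
    sigma2 = np.log(1.0 + cv ** 2)
    mu = np.log(k_eff) - 0.5 * sigma2
    return rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=shape)
```

Raising `cv` while holding `k_eff` fixed reproduces the experiment's manipulation: the zonal mean stays at the calibrated effective conductivity while the grid-scale heterogeneity grows.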
NASA Astrophysics Data System (ADS)
Deng, Liang; Bai, Hanli; Wang, Fang; Xu, Qingxin
2016-06-01
CPU/GPU computing allows scientists to tremendously accelerate their numerical codes. In this paper, we port and optimize a double precision alternating direction implicit (ADI) solver for three-dimensional compressible Navier-Stokes equations from our in-house Computational Fluid Dynamics (CFD) software on heterogeneous platform. First, we implement a full GPU version of the ADI solver to remove a lot of redundant data transfers between CPU and GPU, and then design two fine-grain schemes, namely “one-thread-one-point” and “one-thread-one-line”, to maximize the performance. Second, we present a dual-level parallelization scheme using the CPU/GPU collaborative model to exploit the computational resources of both multi-core CPUs and many-core GPUs within the heterogeneous platform. Finally, considering the fact that memory on a single node becomes inadequate when the simulation size grows, we present a tri-level hybrid programming pattern MPI-OpenMP-CUDA that merges fine-grain parallelism using OpenMP and CUDA threads with coarse-grain parallelism using MPI for inter-node communication. We also propose a strategy to overlap the computation with communication using the advanced features of CUDA and MPI programming. We obtain speedups of 6.0 for the ADI solver on one Tesla M2050 GPU in contrast to two Xeon X5670 CPUs. Scalability tests show that our implementation can offer significant performance improvement on heterogeneous platform.
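An ADI sweep factorizes the implicit update into many independent tridiagonal systems, one per grid line, which is exactly the unit of work that a "one-thread-one-line" GPU mapping assigns to a single thread. The per-line solve is the Thomas algorithm; the sketch below is a plain serial version for illustration, not the authors' CUDA code:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Thomas algorithm for a tridiagonal system: a is the sub-diagonal,
    b the diagonal, c the super-diagonal, d the right-hand side.
    Forward elimination followed by back substitution, O(n) per line."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Because each line's system is independent, thousands of such solves can run concurrently, which is what makes the ADI solver a good fit for the fine-grain GPU schemes described in the abstract.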
NASA Astrophysics Data System (ADS)
Pfeil, Thomas; Jordan, Jakob; Tetzlaff, Tom; Grübl, Andreas; Schemmel, Johannes; Diesmann, Markus; Meier, Karlheinz
2016-04-01
High-level brain function, such as memory, classification, or reasoning, can be realized by means of recurrent networks of simplified model neurons. Analog neuromorphic hardware constitutes a fast and energy-efficient substrate for the implementation of such neural computing architectures in technical applications and neuroscientific research. The functional performance of neural networks is often critically dependent on the level of correlations in the neural activity. In finite networks, correlations are typically inevitable due to shared presynaptic input. Recent theoretical studies have shown that inhibitory feedback, abundant in biological neural networks, can actively suppress these shared-input correlations and thereby enable neurons to fire nearly independently. For networks of spiking neurons, the decorrelating effect of inhibitory feedback has so far been explicitly demonstrated only for homogeneous networks of neurons with linear subthreshold dynamics. Theory, however, suggests that the effect is a general phenomenon, present in any system with sufficient inhibitory feedback, irrespective of the details of the network structure or the neuronal and synaptic properties. Here, we investigate the effect of network heterogeneity on correlations in sparse, random networks of inhibitory neurons with nonlinear, conductance-based synapses. Emulations of these networks on the analog neuromorphic-hardware system Spikey allow us to test the efficiency of decorrelation by inhibitory feedback in the presence of hardware-specific heterogeneities. The configurability of the hardware substrate enables us to modulate the extent of heterogeneity in a systematic manner. We selectively study the effects of shared input and recurrent connections on correlations in membrane potentials and spike trains. 
Our results confirm that shared-input correlations are actively suppressed by inhibitory feedback also in highly heterogeneous networks exhibiting broad, heavy-tailed firing-rate distributions. In line with former studies, cell heterogeneities reduce shared-input correlations. Overall, however, correlations in the recurrent system can increase with the level of heterogeneity as a consequence of diminished effective negative feedback.
Tixier, Florent; Hatt, Mathieu; Le Rest, Catherine Cheze; Le Pogam, Adrien; Corcos, Laurent; Visvikis, Dimitris
2012-05-01
(18)F-FDG PET measurement of standardized uptake value (SUV) is increasingly used for monitoring therapy response and predicting outcome. Alternative parameters computed through textural analysis were recently proposed to quantify the heterogeneity of tracer uptake by tumors as a significant predictor of response. The primary objective of this study was to evaluate the reproducibility of these heterogeneity measurements. Double baseline (18)F-FDG PET scans were acquired within 4 d of each other for 16 patients before any treatment was considered. A Bland-Altman analysis was performed on 8 parameters based on histogram measurements and 17 parameters based on textural heterogeneity features after discretization with values between 8 and 128. The reproducibility of maximum and mean SUV was similar to that in previously reported studies, with a mean percentage difference of 4.7% ± 19.5% and 5.5% ± 21.2%, respectively. By comparison, better reproducibility was measured for some textural features describing local heterogeneity of tracer uptake, such as entropy and homogeneity, with a mean percentage difference of -2% ± 5.4% and 1.8% ± 11.5%, respectively. Several regional heterogeneity parameters such as variability in the intensity and size of regions of homogeneous activity distribution had reproducibility similar to that of SUV measurements, with 95% confidence intervals of -22.5% to 3.1% and -1.1% to 23.5%, respectively. These parameters were largely insensitive to the discretization range. Several parameters derived from textural analysis describing heterogeneity of tracer uptake by tumors on local and regional scales had reproducibility similar to or better than that of simple SUV measurements. These reproducibility results suggest that these (18)F-FDG PET-derived parameters, which have already been shown to have predictive and prognostic value in certain cancer models, may be used to monitor therapy response and predict patient outcome.
Tixier, Florent; Hatt, Mathieu; Le Rest, Catherine Cheze; Le Pogam, Adrien; Corcos, Laurent; Visvikis, Dimitris
2012-01-01
18F-FDG PET measurement of standardized uptake values (SUV) is increasingly used for monitoring therapy response or predicting outcome. Alternative parameters computed through textural analysis were recently proposed to quantify tumor tracer uptake heterogeneity as significant predictors of response. The primary objective of this study was to evaluate the reproducibility of these heterogeneity measurements. Methods: Double-baseline 18F-FDG PET scans of 16 patients, acquired within a period of 4 days prior to any treatment, were considered. A Bland-Altman analysis was carried out on six parameters based on histogram measurements and 17 heterogeneity parameters based on textural features obtained after discretization with values between 8 and 128. Results: SUVmax and SUVmean reproducibility were similar to previously reported studies, with mean percentage differences of 4.7±19.5% and 5.5±21.2%, respectively. By comparison, better reproducibility was measured for some of the textural features describing local tumor tracer heterogeneity, such as entropy and homogeneity, with mean percentage differences of −2±5.4% and 1.8±11.5%, respectively. Several of the regional tumor heterogeneity parameters, such as the variability in the intensity and size of homogeneous tumor activity distribution regions, had reproducibility similar to the SUV measurements, with 95% confidence intervals of −22.5% to 3.1% and −1.1% to 23.5%, respectively. These parameters were largely insensitive to the discretization range values. Conclusion: Several of the parameters derived from textural analysis describing tumor tracer heterogeneity at local and regional scales had reproducibility similar to or better than simple SUV measurements. These reproducibility results suggest that these FDG PET image-derived parameters, which have already been shown to have predictive and prognostic value in certain cancer models, may be used in the context of monitoring therapy response or predicting patient outcome.
PMID:22454484
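The Bland-Altman repeatability computation used in the study above (mean percentage difference plus 95% limits of agreement on double-baseline scans) can be sketched as follows; the SUV values below are hypothetical illustrations, not the study's data.

```python
import numpy as np

def bland_altman(scan1, scan2):
    """Bland-Altman repeatability analysis on paired baseline measurements.

    Differences are expressed as a percentage of the pairwise mean, as is
    common in double-baseline PET reproducibility studies."""
    scan1, scan2 = np.asarray(scan1, float), np.asarray(scan2, float)
    mean = (scan1 + scan2) / 2.0
    pct_diff = 100.0 * (scan2 - scan1) / mean   # percentage difference per pair
    bias = pct_diff.mean()                      # mean percentage difference
    sd = pct_diff.std(ddof=1)                   # between-scan variability
    # 95% limits of agreement: bias +/- 1.96 SD
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical SUVmax values from two baseline scans of the same patients
s1 = [5.2, 7.8, 3.1, 9.4, 6.0]
s2 = [5.5, 7.4, 3.3, 9.9, 5.8]
bias, (lo, hi) = bland_altman(s1, s2)
```

A narrow interval (lo, hi) around a bias near zero indicates a reproducible parameter, which is how the entropy and homogeneity features above outperform SUVmax.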
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chvetsov, Alexei V., E-mail: chvetsov2@gmail.com; Schwartz, Jeffrey L.; Mayr, Nina
2014-06-15
Purpose: In our previous work, the authors showed that a distribution of cell surviving fractions S2 in a heterogeneous group of patients could be derived from tumor-volume variation curves during radiotherapy for head and neck cancer. In this research study, the authors show that this algorithm can be applied to other tumors, specifically in nonsmall cell lung cancer. This new application includes larger patient volumes and includes comparison of data sets obtained at independent institutions. Methods: Our analysis was based on two data sets of tumor-volume variation curves for heterogeneous groups of 17 patients treated for nonsmall cell lung cancer with conventional dose fractionation. The data sets were obtained previously at two independent institutions by using megavoltage computed tomography. Statistical distributions of cell surviving fractions S2 and clearance half-lives of lethally damaged cells T1/2 have been reconstructed in each patient group by using a version of the two-level cell population model of tumor response and a simulated annealing algorithm. The reconstructed statistical distributions of the cell surviving fractions have been compared to the distributions measured using predictive assays in vitro. Results: Nonsmall cell lung cancer presents certain difficulties for modeling surviving fractions using tumor-volume variation curves because of relatively large fractional hypoxic volume, low gradient of tumor-volume response, and possible uncertainties due to breathing motion. Despite these difficulties, cell surviving fractions S2 for nonsmall cell lung cancer derived from tumor-volume variation measured at different institutions have similar probability density functions (PDFs) with mean values of 0.30 and 0.43 and standard deviations of 0.13 and 0.18, respectively. The PDFs for cell surviving fractions S2 reconstructed from tumor volume variation agree with the PDF measured in vitro.
Conclusions: The data obtained in this work, when taken together with the data obtained previously for head and neck cancer, suggest that the cell surviving fractions S2 can be reconstructed from the tumor-volume variation curves measured during radiotherapy with conventional fractionation. The proposed method can be used for treatment evaluation and adaptation.
Chvetsov, Alexei V; Yartsev, Slav; Schwartz, Jeffrey L; Mayr, Nina
2014-06-01
In our previous work, the authors showed that a distribution of cell surviving fractions S2 in a heterogeneous group of patients could be derived from tumor-volume variation curves during radiotherapy for head and neck cancer. In this research study, the authors show that this algorithm can be applied to other tumors, specifically in nonsmall cell lung cancer. This new application includes larger patient volumes and includes comparison of data sets obtained at independent institutions. Our analysis was based on two data sets of tumor-volume variation curves for heterogeneous groups of 17 patients treated for nonsmall cell lung cancer with conventional dose fractionation. The data sets were obtained previously at two independent institutions by using megavoltage computed tomography. Statistical distributions of cell surviving fractions S2 and clearance half-lives of lethally damaged cells T(1/2) have been reconstructed in each patient group by using a version of the two-level cell population model of tumor response and a simulated annealing algorithm. The reconstructed statistical distributions of the cell surviving fractions have been compared to the distributions measured using predictive assays in vitro. Nonsmall cell lung cancer presents certain difficulties for modeling surviving fractions using tumor-volume variation curves because of relatively large fractional hypoxic volume, low gradient of tumor-volume response, and possible uncertainties due to breathing motion. Despite these difficulties, cell surviving fractions S2 for nonsmall cell lung cancer derived from tumor-volume variation measured at different institutions have similar probability density functions (PDFs) with mean values of 0.30 and 0.43 and standard deviations of 0.13 and 0.18, respectively. The PDFs for cell surviving fractions S2 reconstructed from tumor volume variation agree with the PDF measured in vitro. 
The data obtained in this work, when taken together with the data obtained previously for head and neck cancer, suggest that the cell surviving fractions S2 can be reconstructed from the tumor volume variation curves measured during radiotherapy with conventional fractionation. The proposed method can be used for treatment evaluation and adaptation.
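The two-level cell population model named above can be illustrated with a minimal sketch: viable cells are reduced by the surviving fraction S2 at each treatment fraction, and the killed cells become lethally damaged and are cleared exponentially with half-life T1/2, so relative tumor volume is the sum of both compartments. This is an assumed simplification of the published model, with illustrative parameter values.

```python
import math

def tumor_volume_curve(s2, t_half, n_fractions, days_per_fraction=1.0):
    """Minimal two-level cell-population sketch (assumed simplification).

    s2: surviving fraction per 2-Gy fraction; t_half: clearance half-life
    of lethally damaged cells, in days. Returns relative tumor volume
    before treatment and after each fraction."""
    viable, damaged = 1.0, 0.0
    decay = math.log(2.0) / t_half
    volumes = [1.0]
    for _ in range(n_fractions):
        killed = viable * (1.0 - s2)        # cells lethally damaged this fraction
        viable *= s2                        # cells surviving this fraction
        # previously damaged cells clear exponentially; new ones are added
        damaged = damaged * math.exp(-decay * days_per_fraction) + killed
        volumes.append(viable + damaged)
    return volumes

# Illustrative values near the reported S2 means (0.30 and 0.43)
curve = tumor_volume_curve(s2=0.35, t_half=5.0, n_fractions=30)
```

Fitting such curves to measured tumor-volume data (e.g., by simulated annealing over s2 and t_half) is the inverse problem the study solves.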
NASA Astrophysics Data System (ADS)
Jawitz, J. W.; Basu, N.; Chen, X.
2007-05-01
Interwell application of coupled nonreactive and reactive tracers through aquifer contaminant source zones enables quantitative characterization of aquifer heterogeneity and contaminant architecture. Parameters obtained from tracer tests are presented here in a Lagrangian framework that can be used to predict the dissolution of nonaqueous phase liquid (NAPL) contaminants. Nonreactive tracers are commonly used to provide information about travel time distributions in hydrologic systems. Reactive tracers have more recently been introduced as a tool to quantify the amount of NAPL contaminant present within the tracer swept volume. Our group has extended reactive tracer techniques to also characterize NAPL spatial distribution heterogeneity. By conceptualizing the flow field through an aquifer as a collection of streamtubes, the aquifer hydrodynamic heterogeneities may be characterized by a nonreactive tracer travel time distribution, and NAPL spatial distribution heterogeneity may be similarly described using reactive travel time distributions. The combined statistics of these distributions are used to derive a simple analytical solution for contaminant dissolution. This analytical solution, and the tracer techniques used for its parameterization, were validated both numerically and experimentally. Illustrative applications are presented from numerical simulations using the multiphase flow and transport simulator UTCHEM, and laboratory experiments of surfactant-enhanced NAPL remediation in two-dimensional flow chambers.
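The streamtube conceptualization above can be sketched numerically: an ensemble of streamtubes with lognormal nonreactive travel times (hydrodynamic heterogeneity) and lognormal NAPL contents (contaminant architecture) yields an ensemble dissolution curve with long tailing. The depletion-time relation below is an illustrative assumption, not the authors' analytical solution.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical streamtube ensemble: tau from a nonreactive tracer,
# NAPL mass per tube from reactive-tracer retardation.
n_tubes = 10_000
tau = rng.lognormal(mean=0.0, sigma=0.6, size=n_tubes)    # travel times
napl = rng.lognormal(mean=-1.0, sigma=1.0, size=n_tubes)  # NAPL mass per tube

# Assumed model: a tube's NAPL is exhausted after a time proportional to
# its travel time, retarded by how much NAPL it holds.
depletion_time = tau * (1.0 + 5.0 * napl)

def fraction_dissolved(t):
    """Fraction of total NAPL mass removed after flushing time t."""
    return napl[depletion_time <= t].sum() / napl.sum()

# Ensemble dissolution curve: heterogeneity produces long tailing
curve = [fraction_dissolved(t) for t in (1, 5, 20, 100)]
```

The joint statistics of tau and napl across tubes play the role of the combined travel time distributions that parameterize the analytical dissolution solution in the abstract.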
Heterogeneous characters modeling of instant message services users’ online behavior
Fang, Yajun; Horn, Berthold
2018-01-01
Research on the temporal characteristics of human dynamics has attracted much attention for its contribution to various areas such as communication, medical treatment, finance, etc. Existing studies show that the time intervals between two consecutive events present different non-Poisson characteristics, such as power-law, Pareto, bimodal power-law, exponential, and piecewise power-law distributions. With the emergence of new services, new types of distributions may arise. In this paper, we study the distributions of the time intervals between two consecutive visits to QQ and WeChat, the two most popular instant messaging services in China, and present a new finding: when the statistical unit T is set to 0.001 s, the inter-event time distribution follows a piecewise distribution of exponential and power-law forms, indicating the heterogeneous character of IM services users’ online behavior at different time scales. We infer that this heterogeneous character is related to the communication mechanism of IM and the habits of users. We then develop a model combining an exponential model and an interest model to characterize the heterogeneity. Furthermore, we find that the exponent of the inter-event time distribution of the same service differs between two cities, which is correlated with the popularity of the services. Our research is useful for applications in information diffusion, prediction of the economic development of cities, and so on. PMID:29734327
Heterogeneous characters modeling of instant message services users' online behavior.
Cui, Hongyan; Li, Ruibing; Fang, Yajun; Horn, Berthold; Welsch, Roy E
2018-01-01
Research on the temporal characteristics of human dynamics has attracted much attention for its contribution to various areas such as communication, medical treatment, finance, etc. Existing studies show that the time intervals between two consecutive events present different non-Poisson characteristics, such as power-law, Pareto, bimodal power-law, exponential, and piecewise power-law distributions. With the emergence of new services, new types of distributions may arise. In this paper, we study the distributions of the time intervals between two consecutive visits to QQ and WeChat, the two most popular instant messaging services in China, and present a new finding: when the statistical unit T is set to 0.001 s, the inter-event time distribution follows a piecewise distribution of exponential and power-law forms, indicating the heterogeneous character of IM services users' online behavior at different time scales. We infer that this heterogeneous character is related to the communication mechanism of IM and the habits of users. We then develop a model combining an exponential model and an interest model to characterize the heterogeneity. Furthermore, we find that the exponent of the inter-event time distribution of the same service differs between two cities, which is correlated with the popularity of the services. Our research is useful for applications in information diffusion, prediction of the economic development of cities, and so on.
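Estimating the exponent of the power-law segment of an inter-event time distribution is commonly done with the continuous maximum-likelihood (Hill) estimator, applied only to times above a cutoff xmin, which suits the piecewise behavior described above. The sketch below validates the estimator on synthetic data; it is a generic method, not the authors' exact fitting procedure.

```python
import numpy as np

def powerlaw_exponent(times, xmin):
    """Continuous power-law MLE: alpha = 1 + n / sum(ln(x / xmin)),
    applied to inter-event times at or above the cutoff xmin."""
    x = np.asarray(times, float)
    x = x[x >= xmin]
    return 1.0 + x.size / np.log(x / xmin).sum()

# Synthetic inter-event times drawn from a pure power law with alpha = 2.5
# via inverse-CDF sampling: X = xmin * (1 - U) ** (-1 / (alpha - 1))
rng = np.random.default_rng(0)
u = rng.random(50_000)
samples = 0.001 * (1.0 - u) ** (-1.0 / 1.5)   # xmin = 0.001 s
alpha_hat = powerlaw_exponent(samples, xmin=0.001)
```

On real IM data one would first locate the crossover between the exponential and power-law regimes and set xmin there before applying the estimator.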
Interoperability of GADU in using heterogeneous Grid resources for bioinformatics applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sulakhe, D.; Rodriguez, A.; Wilde, M.
2008-03-01
Bioinformatics tools used for efficient and computationally intensive analysis of genetic sequences require large-scale computational resources to accommodate the growing data. Grid computational resources such as the Open Science Grid and TeraGrid have proved useful for scientific discovery. The genome analysis and database update system (GADU) is a high-throughput computational system developed to automate the steps involved in accessing the Grid resources for running bioinformatics applications. This paper describes the requirements for building an automated scalable system such as GADU that can run jobs on different Grids. The paper describes the resource-independent configuration of GADU using the Pegasus-based virtual data system that makes high-throughput computational tools interoperable on heterogeneous Grid resources. The paper also highlights the features implemented to make GADU a gateway to computationally intensive bioinformatics applications on the Grid. The paper does not go into the details of the problems involved or the lessons learned in using individual Grid resources, as these have already been published in our paper on the genome analysis research environment (GNARE); it focuses primarily on the architecture that makes GADU resource independent and interoperable across heterogeneous Grid resources.
The AI Bus architecture for distributed knowledge-based systems
NASA Technical Reports Server (NTRS)
Schultz, Roger D.; Stobie, Iain
1991-01-01
The AI Bus architecture is a layered, distributed, object-oriented framework developed to support the requirements of advanced technology programs for an order-of-magnitude improvement in software costs. The consequent need for highly autonomous computer systems, adaptable to new technology advances over a long lifespan, led to the design of an open architecture and toolbox for building large-scale, robust, production-quality systems. The AI Bus accommodates a mix of knowledge-based and conventional components, running in heterogeneous, distributed real-world and testbed environments. The concepts and design of the AI Bus architecture are described, along with its current implementation status as a Unix C++ library of reusable objects. Each high-level semiautonomous agent process consists of a number of knowledge sources together with interagent communication mechanisms based on shared blackboards and message-passing acquaintances. Standard interfaces and protocols are followed for combining and validating subsystems. Dynamic probes, or demons, provide an event-driven means of giving active objects shared access to resources and to each other, while not violating their security.
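The shared-blackboard pattern with event-driven demons described above can be sketched in a few lines: knowledge sources register a trigger condition against a shared store and fire when a posting makes that condition true. The class and method names here are illustrative, not the AI Bus C++ API.

```python
class Blackboard:
    """Minimal sketch of a shared blackboard with event-driven knowledge
    sources, in the spirit of the AI Bus description (hypothetical names)."""

    def __init__(self):
        self.data = {}      # the shared blackboard contents
        self.sources = []   # registered (trigger, action) knowledge sources

    def register(self, trigger, action):
        """trigger: predicate over the blackboard; action: runs when it holds."""
        self.sources.append((trigger, action))

    def post(self, key, value):
        """Posting new data drives the demons: every source whose trigger
        now holds is activated, like the event-driven probes above."""
        self.data[key] = value
        for trigger, action in self.sources:
            if trigger(self.data):
                action(self.data)

bb = Blackboard()
# A knowledge source that normalizes raw input once it appears
bb.register(lambda d: "raw" in d and "clean" not in d,
            lambda d: d.__setitem__("clean", d["raw"].strip()))
bb.post("raw", "  sensor reading  ")
```

A production design would add the protections the abstract mentions, e.g. mediated access so sources cannot violate each other's security.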
NASA Astrophysics Data System (ADS)
Vucinic, Dean; Deen, Danny; Oanta, Emil; Batarilo, Zvonimir; Lacor, Chris
This paper focuses on the visualization and manipulation of graphical content in distributed network environments. The developed graphical middleware and 3D desktop prototypes were specialized for situational awareness. This research was done in the LArge Scale COllaborative decision support Technology (LASCOT) project, which explored and combined software technologies to support a human-centred decision support system for crisis management (earthquake, tsunami, flooding, airplane or oil-tanker incidents, chemical, radio-active or other pollutants spreading, etc.). The state-of-the-art review performed did not identify any publicly available large-scale distributed application of this kind. Existing proprietary solutions rely on conventional technologies and 2D representations. Our challenge was to apply the "latest" available technologies, such as Java3D, X3D and SOAP, compatible with average computer graphics hardware. The selected technologies are integrated, and we demonstrate: the flow of data originating from heterogeneous data sources; interoperability across different operating systems; and 3D visual representations that enhance end-users' interactions.
Quantitative Determination of Isotope Ratios from Experimental Isotopic Distributions
Kaur, Parminder; O’Connor, Peter B.
2008-01-01
Isotope variability due to natural processes provides important information for studying a variety of complex natural phenomena, from the origins of a particular sample to the traces of biochemical reaction mechanisms. These measurements require high-precision determination of the isotope ratios of the particular element involved. Isotope ratio mass spectrometers (IRMS) are widely employed tools for such high-precision analysis, but they have some limitations. This work aims at overcoming the limitations inherent to IRMS by estimating the elemental isotopic abundance from the experimental isotopic distribution. In particular, a computational method has been derived which allows the calculation of 13C/12C ratios from whole isotopic distributions, given certain caveats, and these calculations are applied to several cases to demonstrate their utility. The limitations of the method in terms of the required number of ions and the S/N ratio are discussed. For high-precision estimates of the isotope ratios, this method requires very precise measurement of the experimental isotopic distribution abundances, free from any artifacts introduced by noise, sample heterogeneity, or other experimental sources. PMID:17263354
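The core idea of estimating an isotope ratio from a whole isotopic distribution can be shown with a binomial model: for a molecule with n carbons, the intensity ratio of the M+1 to the M peak is n*p/(1-p), where p is the 13C abundance, so p can be solved from the measured peaks. The sketch below assumes carbon is the only contributor to M+1, a deliberate simplification of the paper's method (real spectra also get M+1 contributions from 2H, 15N, etc.).

```python
from math import comb

def isotope_pattern(p, n_carbons, n_peaks=3):
    """Binomial isotopic distribution for n carbons with 13C fraction p:
    intensity of the M+k peak is C(n, k) * p**k * (1-p)**(n-k)."""
    return [comb(n_carbons, k) * p**k * (1 - p) ** (n_carbons - k)
            for k in range(n_peaks)]

def carbon13_fraction(i0, i1, n_carbons):
    """Invert the binomial model: from I1/I0 = n*p/(1-p), p = r/(n + r).
    Assumes carbon is the sole contributor to the M+1 peak."""
    r = i1 / i0
    return r / (n_carbons + r)

# Round-trip check on a synthetic 50-carbon pattern at natural 13C abundance
pattern = isotope_pattern(0.0107, 50)
p_hat = carbon13_fraction(pattern[0], pattern[1], 50)
```

The round trip recovers p exactly on noiseless data; the paper's precision limits come from how peak intensities degrade under finite ion counts and noise.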
Fornarelli, Francesco; Dadduzio, Ruggiero; Torresi, Marco; Camporeale, Sergio Mario; Fortunato, Bernardo
2018-02-01
A fully 3D unsteady Computational Fluid Dynamics (CFD) approach coupled with heterogeneous reaction chemistry is presented in order to study the behavior of a single square channel as part of a lean NOx trap. The reliability of the numerical tool has been validated against literature data considering only active BaO sites. Even though the input/output performance of such a catalyst is well known, here the spatial distribution within a single channel is investigated in detail. The square channel geometry influences the flow field and the catalyst performance, since the flow velocity distribution over the cross section is not homogeneous. The mutual interaction between the flow and the active catalyst walls influences the spatial distribution of the volumetric species. Low-velocity regions near the square corners and transversal secondary flows are shown in several cross sections along the streamwise direction at different instants. The results shed light on the three-dimensional character of both the flow field and the species distribution within a single square channel of the catalyst, in contrast with 0D-1D approaches.
NASA Astrophysics Data System (ADS)
Baglione, Enrico; Armigliato, Alberto; Pagnoni, Gianluca; Tinti, Stefano
2017-04-01
The fact that ruptures on the generating faults of large earthquakes are strongly heterogeneous has been demonstrated over the last few decades by a large number of studies. The effort to retrieve reliable finite-fault models (FFMs) for large earthquakes worldwide, mainly by means of the inversion of different kinds of geophysical data, has been accompanied in recent years by the systematic collection and format homogenisation of the published/proposed FFMs for different earthquakes into specifically conceived databases, such as SRCMOD. The main aim of this study is to explore characteristic patterns in the slip distribution of large earthquakes, using a subset of the FFMs contained in SRCMOD covering events with moment magnitude equal to or larger than 6 that occurred worldwide over the last 25 years. We focus on those FFMs that exhibit a single and clear region of high slip (i.e. a single asperity), which are found to represent the majority of the events. For these FFMs, it is reasonable to best-fit the slip model by means of a 2D Gaussian distribution. Two different methods are used (least-squares and highest-similarity), and correspondingly two "best-fit" indexes are introduced. As a result, two distinct 2D Gaussian distributions for each FFM are obtained. To quantify how well these distributions are able to mimic the original slip heterogeneity, we calculate and compare the vertical displacements at the Earth surface in the near field induced by the original FFM slip, by an equivalent uniform-slip model, by a depth-dependent slip model, and by the two "best" Gaussian slip models. The coseismic vertical surface displacement is used as the metric for comparison. Results show that, on average, the best results are obtained with 2D Gaussian distributions based on similarity-index fitting. Finally, we restrict our attention to those single-asperity FFMs associated with earthquakes which generated tsunamis.
We chose a few events for which tsunami data (water level time series and/or run-up measurements) are available. Using the results mentioned above, for each chosen event the coseismic vertical displacement fields computed for the different slip distributions are used as initial conditions for numerical tsunami simulations, performed by means of the shallow-water code UBO-TSUFD. The comparison of the numerical results for different initial conditions against the experimental data is presented and discussed. This study was funded in the frame of the EU project ASTARTE - "Assessment, STrategy And Risk Reduction for Tsunamis in Europe", Grant 603839, 7th FP (ENV.2013.6.4-3).
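Fitting a single-asperity slip grid with a 2D Gaussian can be sketched via weighted moments: treating slip as a weight field, its mean and covariance give the Gaussian's center and shape, and the amplitude is set to conserve total slip. This moments-based fit is a simpler stand-in for the least-squares and highest-similarity fits used in the study, shown here on a synthetic fault plane.

```python
import numpy as np

def gaussian_from_moments(slip, x, y):
    """Characterize a single-asperity slip grid with a 2D Gaussian using
    slip-weighted moments (illustrative alternative to the study's fits)."""
    X, Y = np.meshgrid(x, y)
    w = slip / slip.sum()                         # slip as a weight field
    mx, my = (w * X).sum(), (w * Y).sum()         # asperity center
    cxx = (w * (X - mx) ** 2).sum()
    cyy = (w * (Y - my) ** 2).sum()
    cxy = (w * (X - mx) * (Y - my)).sum()
    cov = np.array([[cxx, cxy], [cxy, cyy]])      # asperity shape/orientation
    # amplitude chosen so the Gaussian carries the same total slip
    amp = slip.sum() * (x[1] - x[0]) * (y[1] - y[0]) / (
        2 * np.pi * np.sqrt(np.linalg.det(cov)))
    return (mx, my), cov, amp

# Synthetic single-asperity model on a 40 km x 30 km fault plane
x = np.linspace(0.0, 40.0, 81)
y = np.linspace(0.0, 30.0, 61)
X, Y = np.meshgrid(x, y)
slip = 3.0 * np.exp(-((X - 22.0) ** 2 / (2 * 6.0**2)
                      + (Y - 14.0) ** 2 / (2 * 4.0**2)))
center, cov, amp = gaussian_from_moments(slip, x, y)
```

For real FFMs, the recovered Gaussian would then feed the coseismic vertical displacement computation used as the comparison metric above.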
Small-scale heterogeneity spectra in the Earth mantle resolved by PKP-ab,-bc and -df waves
NASA Astrophysics Data System (ADS)
Zheng, Y.
2016-12-01
Plate tectonics creates heterogeneities at mid-ocean ridges and subducts them back into the mantle at subduction zones. Heterogeneities manifest themselves through different densities and seismic wave speeds. The length scales and spatial distribution of the heterogeneities measure the mixing mechanism of plate tectonics. This information can be mathematically captured as the heterogeneity spatial Fourier spectrum. Since most heterogeneities created are on the order of tens of kilometers, global seismic tomography is not able to resolve them directly. Here, we use seismic P-waves that transmit through the outer core (phases PKP-ab and PKP-bc) and through the inner core (PKP-df) to probe lower-mantle heterogeneities. The differential traveltimes (PKP-ab versus PKP-df; PKP-bc versus PKP-df) are sensitive to lower-mantle structures. We have collected more than 10,000 PKP phases recorded by the Japan Hi-net short-period seismic network. We found that the lower mantle is filled with seismic heterogeneities at scales from 20 km to 200 km. The heterogeneity spectrum is similar to an exponential distribution but is more enriched in small-scale heterogeneities at the high-wavenumber end. The spectrum is "red", meaning large scales have more power, and heterogeneities show a multiscale nature: small-scale heterogeneities are embedded in large-scale heterogeneities. These small-scale heterogeneities cannot be of thermal origin; they must be compositional. If all these heterogeneities were located in the D" layer, statistically, it would have a root-mean-square P-wave velocity fluctuation of 1% (i.e., -3% to 3%).
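The "red", exponential-like spectrum described above can be written down explicitly: an exponentially correlated random medium with rms fluctuation eps and correlation length a has the 1D power spectral density P(k) = 2 eps^2 a / (1 + k^2 a^2), so power falls with wavenumber. The sketch below evaluates this standard form across the 20-200 km band with an illustrative correlation length; it is not the study's inversion.

```python
import numpy as np

def exponential_spectrum(k, eps_rms, a):
    """1D power spectral density of an exponentially correlated random
    medium: P(k) = 2 * eps^2 * a / (1 + (k * a)^2). Small k (large scales)
    carry more power, i.e. the spectrum is 'red'."""
    return 2.0 * eps_rms**2 * a / (1.0 + (k * a) ** 2)

# Heterogeneity power across the resolved 20-200 km band, for a medium
# with 1% rms P-wave velocity fluctuation and an assumed 30 km
# correlation length (illustrative value)
wavelengths_km = np.array([200.0, 100.0, 50.0, 20.0])
k = 2.0 * np.pi / wavelengths_km                 # wavenumbers (1/km)
power = exponential_spectrum(k, eps_rms=0.01, a=30.0)
```

Comparing an observed spectrum against this form at the high-wavenumber end is how an enrichment in small-scale heterogeneity, as reported above, would show up.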