Heterogeneity in Health Care Computing Environments
Sengupta, Soumitra
1989-01-01
This paper discusses issues of heterogeneity in computer systems, networks, databases, and presentation techniques, and the problems it creates in developing integrated medical information systems. The need for institutional, comprehensive goals is emphasized. Using the Columbia-Presbyterian Medical Center's computing environment as the case study, various steps to solve the heterogeneity problem are presented.
NASA Astrophysics Data System (ADS)
Myre, Joseph M.
Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH, a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes. Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that this environment provides scientists and engineers with means to reduce the programmatic complexity of their applications, to perform geophysical inversions for characterizing physical systems, and to determine high-performing run-time configurations of heterogeneous computing systems using a run-time autotuner.
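As a rough, hedged illustration of what a run-time autotuning routine can look like (generic Python with hypothetical function names, not the CUSH implementation), the sketch below times a kernel under a few candidate configurations and keeps the fastest:

```python
import time

def autotune(run_kernel, candidate_configs, problem):
    """Time each candidate configuration once and return the fastest.

    run_kernel(problem, config) is assumed to execute the accelerated
    kernel under the given configuration (e.g., a work-group size).
    """
    best_config, best_time = None, float("inf")
    for config in candidate_configs:
        start = time.perf_counter()
        run_kernel(problem, config)
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best_config, best_time = config, elapsed
    return best_config

# Example with a stand-in kernel: larger "block sizes" do less work here.
if __name__ == "__main__":
    fake_kernel = lambda problem, cfg: sum(range(problem // cfg))
    print(autotune(fake_kernel, [32, 64, 128, 256], problem=1_000_000))
```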
Heterogeneous Distributed Computing for Computational Aerosciences
NASA Technical Reports Server (NTRS)
Sunderam, Vaidy S.
1998-01-01
The research supported under this award focuses on heterogeneous distributed computing for high-performance applications, with particular emphasis on computational aerosciences. The overall goal of this project was to investigate issues in, and develop solutions to, the efficient execution of computational aeroscience codes in heterogeneous concurrent computing environments. In particular, we worked in the context of the PVM [1] system and, subsequent to detailed conversion efforts and performance benchmarking, devised novel techniques to increase the efficacy of heterogeneous networked environments for computational aerosciences. Our work has been based upon the NAS Parallel Benchmark suite, but has also recently expanded in scope to include the NAS I/O benchmarks as specified in the NHT-1 document. In this report we summarize our research accomplishments under the auspices of the grant.
Arcade: A Web-Java Based Framework for Distributed Computing
NASA Technical Reports Server (NTRS)
Chen, Zhikai; Maly, Kurt; Mehrotra, Piyush; Zubair, Mohammad; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
Distributed heterogeneous environments are being increasingly used to execute a variety of large size simulations and computational problems. We are developing Arcade, a web-based environment to design, execute, monitor, and control distributed applications. These targeted applications consist of independent heterogeneous modules which can be executed on a distributed heterogeneous environment. In this paper we describe the overall design of the system and discuss the prototype implementation of the core functionalities required to support such a framework.
HeNCE: A Heterogeneous Network Computing Environment
Beguelin, Adam; Dongarra, Jack J.; Geist, George Al; ...
1994-01-01
Network computing seeks to utilize the aggregate resources of many networked computers to solve a single problem. In so doing it is often possible to obtain supercomputer performance from an inexpensive local area network. The drawback is that network computing is complicated and error prone when done by hand, especially if the computers have different operating systems and data formats and are thus heterogeneous. The heterogeneous network computing environment (HeNCE) is an integrated graphical environment for creating and running parallel programs over a heterogeneous collection of computers. It is built on a lower level package called parallel virtual machine (PVM). The HeNCE philosophy of parallel programming is to have the programmer graphically specify the parallelism of a computation and to automate, as much as possible, the tasks of writing, compiling, executing, debugging, and tracing the network computation. Key to HeNCE is a graphical language based on directed graphs that describe the parallelism and data dependencies of an application. Nodes in the graphs represent conventional Fortran or C subroutines and the arcs represent data and control flow. This article describes the present state of HeNCE, its capabilities, limitations, and areas of future research.
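The directed-graph execution model that HeNCE automates can be illustrated independently of HeNCE itself; the following minimal sketch (plain Python, not HeNCE or PVM code) runs each node of a task graph once all of its predecessors have completed:

```python
def run_dag(nodes, edges):
    """nodes: {name: callable}; edges: set of (producer, consumer) pairs.
    Executes each node after all of its predecessors have run."""
    preds = {n: {a for a, b in edges if b == n} for n in nodes}
    done, results = set(), {}
    while len(done) < len(nodes):
        ready = [n for n in nodes if n not in done and preds[n] <= done]
        if not ready:
            raise ValueError("cycle in task graph")
        for n in ready:
            # Each node receives the outputs of its predecessors.
            results[n] = nodes[n]({p: results[p] for p in preds[n]})
            done.add(n)
    return results

# Tiny example: two independent producers feeding one consumer.
out = run_dag(
    {"a": lambda d: 1, "b": lambda d: 2, "c": lambda d: d["a"] + d["b"]},
    {("a", "c"), ("b", "c")},
)
print(out["c"])  # 3
```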
Heterogeneous variances in multi-environment yield trials for corn hybrids
USDA-ARS?s Scientific Manuscript database
Recent developments in statistics and computing have enabled much greater levels of complexity in statistical models of multi-environment yield trial data. One particular feature of interest to breeders is simultaneously modeling heterogeneity of variances among environments and cultivars. Our obj...
Heterogeneous Systems for Information-Variable Environments (HIVE)
2017-05-01
ARL-TR-8027 ● May 2017 ● US Army Research Laboratory. Heterogeneous Systems for Information-Variable Environments (HIVE), by Amar… Computational and Information Sciences Directorate, ARL. Approved for public release; distribution is unlimited.
Methodologies and systems for heterogeneous concurrent computing
NASA Technical Reports Server (NTRS)
Sunderam, V. S.
1994-01-01
Heterogeneous concurrent computing is gaining increasing acceptance as an alternative or complementary paradigm to multiprocessor-based parallel processing as well as to conventional supercomputing. While algorithmic and programming aspects of heterogeneous concurrent computing are similar to their parallel processing counterparts, system issues, partitioning and scheduling, and performance aspects are significantly different. In this paper, we discuss critical design and implementation issues in heterogeneous concurrent computing, and describe techniques for enhancing its effectiveness. In particular, we highlight the system level infrastructures that are required, aspects of parallel algorithm development that most affect performance, system capabilities and limitations, and tools and methodologies for effective computing in heterogeneous networked environments. We also present recent developments and experiences in the context of the PVM system and comment on ongoing and future work.
An approach for heterogeneous and loosely coupled geospatial data distributed computing
NASA Astrophysics Data System (ADS)
Chen, Bin; Huang, Fengru; Fang, Yu; Huang, Zhou; Lin, Hui
2010-07-01
Most GIS (Geographic Information System) applications tend to have heterogeneous and autonomous geospatial information resources, and the availability of these local resources is unpredictable and dynamic under a distributed computing environment. In order to make use of these local resources together to solve larger geospatial information processing problems that are related to an overall situation, in this paper, with the support of peer-to-peer computing technologies, we propose a geospatial data distributed computing mechanism that involves loosely coupled geospatial resource directories and a term named Equivalent Distributed Program of global geospatial queries to solve geospatial distributed computing problems under heterogeneous GIS environments. First, a geospatial query process schema for distributed computing and a method for equivalent transformation from a global geospatial query to distributed local queries at the SQL (Structured Query Language) level to solve the coordinating problem among heterogeneous resources are presented. Second, peer-to-peer technologies are used to maintain a loosely coupled network environment that consists of autonomous geospatial information resources, to achieve decentralized and consistent synchronization among global geospatial resource directories, and to carry out distributed transaction management of local queries. Finally, based on the developed prototype system, example applications of simple and complex geospatial data distributed queries are presented to illustrate the procedure of global geospatial information processing.
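A toy sketch of the decomposition idea (hypothetical table names and a simplified union of local results; not the paper's SQL-level transformation rules) is shown below:

```python
def equivalent_distributed_queries(global_sql_template, peers):
    """Rewrite one global query into per-peer local queries.

    global_sql_template contains a {table} placeholder; each peer maps it
    to the name of its local geospatial table (hypothetical schema).
    The union of the local result sets stands in for the global result.
    """
    return {peer: global_sql_template.format(table=local_table)
            for peer, local_table in peers.items()}

local_queries = equivalent_distributed_queries(
    "SELECT id, geom FROM {table} WHERE region = 'basin_7'",
    {"peerA": "roads_a", "peerB": "roads_b"},
)
print(local_queries["peerA"])
print(local_queries["peerB"])
```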
NASA Astrophysics Data System (ADS)
Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian
2018-01-01
We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.
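The device-level load-balancing strategies are only summarized above; one simple strategy consistent with that summary (a sketch under assumed throughput numbers, not the authors' code) splits the total photon count across devices in proportion to the throughput measured during a short warm-up run:

```python
def split_photons(total_photons, measured_rates):
    """measured_rates: photons/second per device from a short warm-up run.
    Returns a per-device photon count proportional to throughput."""
    total_rate = sum(measured_rates.values())
    shares = {dev: int(total_photons * r / total_rate)
              for dev, r in measured_rates.items()}
    # Give any rounding remainder to the fastest device.
    fastest = max(measured_rates, key=measured_rates.get)
    shares[fastest] += total_photons - sum(shares.values())
    return shares

# Hypothetical rates for one CPU and two GPUs.
print(split_photons(10_000_000, {"cpu": 1.2e5, "gpu0": 2.1e6, "gpu1": 1.9e6}))
```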
Graph Partitioning for Parallel Applications in Heterogeneous Grid Environments
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Kumar, Shailendra; Das, Sajal K.; Biegel, Bryan (Technical Monitor)
2002-01-01
The problem of partitioning irregular graphs and meshes for parallel computations on homogeneous systems has been extensively studied. However, these partitioning schemes fail when the target system architecture exhibits heterogeneity in resource characteristics. With the emergence of technologies such as the Grid, it is imperative to study the partitioning problem taking into consideration the differing capabilities of such distributed heterogeneous systems. In our model, the heterogeneous system consists of processors with varying processing power and an underlying non-uniform communication network. We present in this paper a novel multilevel partitioning scheme for irregular graphs and meshes that takes into account issues pertinent to Grid computing environments. Our partitioning algorithm, called MiniMax, generates and maps partitions onto a heterogeneous system with the objective of minimizing the maximum execution time of the parallel distributed application. For the experimental performance study, we have considered both a realistic mesh problem from NASA and synthetic workloads. Simulation results demonstrate that MiniMax generates high quality partitions for various classes of applications targeted for parallel execution in a distributed heterogeneous environment.
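The MiniMax objective can be made concrete with a toy cost model (hypothetical work, communication, speed, and bandwidth numbers; not the paper's model): the estimated time of each partition is its work divided by processor speed plus its communication volume divided by link bandwidth, and a mapping is judged by the maximum of these estimates:

```python
def minimax_cost(mapping, work, comm, speed, bandwidth):
    """mapping: partition -> processor. Returns the estimated execution
    time of the slowest partition, which MiniMax-style schemes minimize."""
    times = []
    for part, proc in mapping.items():
        compute = work[part] / speed[proc]
        transfer = comm[part] / bandwidth[proc]
        times.append(compute + transfer)
    return max(times)

work = {"p0": 800, "p1": 500}          # e.g., number of mesh cells
comm = {"p0": 120, "p1": 60}           # e.g., halo data volume
speed = {"fast": 4.0, "slow": 1.0}     # relative processing power
bandwidth = {"fast": 10.0, "slow": 2.0}
print(minimax_cost({"p0": "fast", "p1": "slow"}, work, comm, speed, bandwidth))
```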
Automation of multi-agent control for complex dynamic systems in heterogeneous computational network
NASA Astrophysics Data System (ADS)
Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan
2017-01-01
The rapid progress of high-performance computing entails new challenges related to solving large scientific problems for various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in the field of parallel and distributed computing pay special attention to the scalability of applications for problem solving. Effective management of a scalable application in a heterogeneous distributed computing environment is still a non-trivial issue. Control systems that operate in networks are especially affected by this issue. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework for automating the problem solving process. Advantages of the proposed approach are demonstrated on an example of the parametric synthesis of a static linear regulator for complex dynamic systems. Benefits of the scalable application for solving this problem include automation of multi-agent control for such systems in a parallel mode at various degrees of detail.
Yim, Wen-Wai; Chien, Shu; Kusumoto, Yasuyuki; Date, Susumu; Haga, Jason
2010-01-01
Large-scale in-silico screening is a necessary part of drug discovery and Grid computing is one answer to this demand. A disadvantage of using Grid computing is the heterogeneous computational environment characteristic of a Grid. In our study, we have found that for the molecular docking simulation program DOCK, different clusters within a Grid organization can yield inconsistent results. Because DOCK in-silico virtual screening (VS) is currently used to help select chemical compounds to test with in-vitro experiments, such differences have little effect on the validity of using virtual screening before subsequent steps in the drug discovery process. However, it is difficult to predict whether the accumulation of these discrepancies over sequentially repeated VS experiments will significantly alter the results if VS is used as the primary means for identifying potential drugs. Moreover, such discrepancies may be unacceptable for other applications requiring more stringent thresholds. This highlights the need for establishing a more complete solution to provide the best scientific accuracy when executing an application across Grids. One possible solution to platform heterogeneity in DOCK performance explored in our study involved the use of virtual machines as a layer of abstraction. This study investigated the feasibility and practicality of using virtual machine and recent cloud computing technologies in a biological research application. We examined the differences and variations of DOCK VS variables, across a Grid environment composed of different clusters, with and without virtualization. The uniform computer environment provided by virtual machines eliminated the inconsistent DOCK VS results caused by heterogeneous clusters; however, the execution time for the DOCK VS increased. In our particular experiments, overhead costs were found to be an average of 41% and 2% in execution time for two different clusters, while the actual magnitudes of the execution time costs were minimal. Despite the increase in overhead, virtual clusters are an ideal solution for Grid heterogeneity. With greater development of virtual cluster technology in Grid environments, the problem of platform heterogeneity may be eliminated through virtualization, allowing greater usage of VS, and will benefit all Grid applications in general.
Mouse Driven Window Graphics for Network Teaching.
ERIC Educational Resources Information Center
Makinson, G. J.; And Others
Computer-enhanced teaching of computational mathematics on a network system driving graphics terminals is being redeveloped for a mouse-driven, high resolution, windowed environment of a UNIX work station. Preservation of the features of networked access by heterogeneous terminals is provided by the use of the X Window environment. A demonstrator…
Dynamic Load-Balancing for Distributed Heterogeneous Computing of Parallel CFD Problems
NASA Technical Reports Server (NTRS)
Ecer, A.; Chien, Y. P.; Boenisch, T.; Akay, H. U.
2000-01-01
The developed methodology is aimed at improving the efficiency of executing block-structured algorithms on parallel, distributed, heterogeneous computers. The basic approach of these algorithms is to divide the flow domain into many sub-domains called blocks, and solve the governing equations over these blocks. The dynamic load balancing problem is defined as the efficient distribution of the blocks among the available processors over a period of several hours of computations. In environments with computers of different architecture, operating systems, CPU speed, memory size, load, and network speed, balancing the loads and managing the communication between processors becomes crucial. Load balancing software tools for mutually dependent parallel processes have been created to efficiently utilize an advanced computation environment and algorithms. These tools are dynamic in nature because of the changes in the computer environment during execution time. More recently, these tools were extended to a second operating system: NT. In this paper, the problems associated with this application will be discussed. Also, the developed algorithms were combined with the load sharing capability of LSF to efficiently utilize workstation clusters for parallel computing. Finally, results will be presented on running a NASA based code ADPAC to demonstrate the developed tools for dynamic load balancing.
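The balancing rule itself is not given in the abstract; a minimal greedy sketch of the general idea (hypothetical, based on measured per-block times, not the authors' tools) moves a block from the most loaded processor to the least loaded one when the receiving processor does not become the new bottleneck:

```python
def rebalance_once(assignment, block_time):
    """assignment: processor -> list of block ids.
    block_time[b]: measured solve time of block b per iteration.
    Moves one block from the slowest processor to the fastest if helpful."""
    load = {p: sum(block_time[b] for b in blocks)
            for p, blocks in assignment.items()}
    slow = max(load, key=load.get)
    fast = min(load, key=load.get)
    for b in sorted(assignment[slow], key=block_time.get, reverse=True):
        # Move only if the receiver does not end up worse than the sender.
        if load[slow] - block_time[b] >= load[fast] + block_time[b]:
            assignment[slow].remove(b)
            assignment[fast].append(b)
            return True   # one block migrated this balancing step
    return False          # already balanced

assign = {"ws1": ["b1", "b2", "b3"], "ws2": ["b4"]}
rebalance_once(assign, {"b1": 5.0, "b2": 4.0, "b3": 3.0, "b4": 2.0})
print(assign)  # b1 moves to ws2
```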
A heterogeneous computing environment for simulating astrophysical fluid flows
NASA Technical Reports Server (NTRS)
Cazes, J.
1994-01-01
In the Concurrent Computing Laboratory in the Department of Physics and Astronomy at Louisiana State University we have constructed a heterogeneous computing environment that permits us to routinely simulate complicated three-dimensional fluid flows and to readily visualize the results of each simulation via three-dimensional animation sequences. An 8192-node MasPar MP-1 computer with 0.5 GBytes of RAM provides 250 MFlops of execution speed for our fluid flow simulations. Utilizing the parallel virtual machine (PVM) language, at periodic intervals data is automatically transferred from the MP-1 to a cluster of workstations where individual three-dimensional images are rendered for inclusion in a single animation sequence. Work is underway to replace executions on the MP-1 with simulations performed on the 512-node CM-5 at NCSA and to simultaneously gain access to more potent volume rendering workstations.
Utility functions and resource management in an oversubscribed heterogeneous computing environment
Khemka, Bhavesh; Friese, Ryan; Briceno, Luis Diego; ...
2014-09-26
We model an oversubscribed heterogeneous computing system where tasks arrive dynamically and a scheduler maps the tasks to machines for execution. The environment and workloads are based on those being investigated by the Extreme Scale Systems Center at Oak Ridge National Laboratory. Utility functions that are designed based on specifications from the system owner and users are used to create a metric for the performance of resource allocation heuristics. Each task has a time-varying utility (importance) that the enterprise will earn based on when the task successfully completes execution. We design multiple heuristics, which include a technique to drop low utility-earning tasks, to maximize the total utility that can be earned by completing tasks. The heuristics are evaluated using simulation experiments with two levels of oversubscription. The results show the benefit of having fast heuristics that account for the importance of a task and the heterogeneity of the environment when making allocation decisions in an oversubscribed environment. Furthermore, the ability to drop low utility-earning tasks allows the heuristics to tolerate the high oversubscription as well as earn significant utility.
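As a hedged illustration of the dropping idea (generic Python with a made-up decay function; the paper's heuristics and utility shapes are more elaborate), a scheduler can evaluate each task's time-varying utility at its predicted completion time on the best machine and drop the task when that utility falls below a threshold:

```python
def schedule_or_drop(tasks, machine_ready, exec_time, now, drop_threshold=0.05):
    """tasks: {task: utility_fn}, where utility_fn(completion_time) is the
    time-varying utility earned if the task finishes at that time.
    machine_ready: machine -> time it becomes free.
    exec_time[(task, machine)]: estimated execution time (heterogeneous)."""
    decisions = {}
    for task, utility in tasks.items():
        # Pick the machine giving the highest utility at completion.
        best = max(machine_ready, key=lambda m: utility(
            max(now, machine_ready[m]) + exec_time[(task, m)]))
        finish = max(now, machine_ready[best]) + exec_time[(task, best)]
        if utility(finish) < drop_threshold:
            decisions[task] = "drop"          # not worth the resources
        else:
            decisions[task] = best
            machine_ready[best] = finish      # machine busy until then
    return decisions

decay = lambda t: max(0.0, 1.0 - 0.01 * t)    # utility decays with lateness
print(schedule_or_drop({"t1": decay, "t2": decay},
                       {"m1": 0.0, "m2": 5.0},
                       {("t1", "m1"): 20, ("t1", "m2"): 10,
                        ("t2", "m1"): 150, ("t2", "m2"): 200}, now=0.0))
```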
Dome: Distributed Object Migration Environment
1994-05-01
AD-A281 134. Dome: Distributed object migration environment, Adam Beguelin, Erik Seligman, Michael Starkey, May 1994. CMU-CS-94-153, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213. Abstract: Dome… Linda [4], Isis [2], and Express [6] allow a programmer to treat a heterogeneous network of computers as a parallel machine. These tools allow the…
NASA Astrophysics Data System (ADS)
Niño, Alfonso; Muñoz-Caro, Camelia; Reyes, Sebastián
2015-11-01
The last decade witnessed a great development of the structural and dynamic study of complex systems described as a network of elements. Therefore, systems can be described as a set of, possibly, heterogeneous entities or agents (the network nodes) interacting in, possibly, different ways (defining the network edges). In this context, it is of practical interest to model and handle not only static and homogeneous networks but also dynamic, heterogeneous ones. Depending on the size and type of the problem, these networks may require different computational approaches involving sequential, parallel or distributed systems with or without the use of disk-based data structures. In this work, we develop an Application Programming Interface (APINetworks) for the modeling and treatment of general networks in arbitrary computational environments. To minimize dependency between components, we decouple the network structure from its function using different packages for grouping sets of related tasks. The structural package, the one in charge of building and handling the network structure, is the core element of the system. Here, we focus on this structural component of the API. We apply an object-oriented approach that makes use of inheritance and polymorphism. In this way, we can model static and dynamic networks with heterogeneous elements in the nodes and heterogeneous interactions in the edges. In addition, this approach permits a unified treatment of different computational environments. Tests performed on a C++11 version of the structural package show that, on current standard computers, the system can handle, in main memory, directed and undirected linear networks formed by tens of millions of nodes and edges. Our results compare favorably to those of existing tools.
Dedicated heterogeneous node scheduling including backfill scheduling
Wood, Robert R [Livermore, CA; Eckert, Philip D [Livermore, CA; Hommes, Gregg [Pleasanton, CA
2006-07-25
A method and system for backfill scheduling of jobs on dedicated heterogeneous nodes in a multi-node computing environment. Heterogeneous nodes are grouped into homogeneous node sub-pools. For each sub-pool, a free node schedule (FNS) is created to chart the number of free nodes over time. For each prioritized job, the FNS of the sub-pools having nodes usable by that job is used to determine the earliest time range (ETR) capable of running the job; once determined, the job is scheduled to run in that ETR. If the ETR determined for a lower priority job (LPJ) has a start time earlier than a higher priority job (HPJ), then the LPJ is scheduled in that ETR if it would not disturb the anticipated start times of any HPJ previously scheduled for a future time. Thus, efficient utilization and throughput of such computing environments may be increased by utilizing resources that would otherwise remain idle.
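A simplified sketch of the earliest-time-range and backfill test (hypothetical data structures, not the patented implementation) follows; the free node schedule here is assumed to already reflect higher-priority reservations, which is how fitting a lower-priority job into it avoids disturbing any anticipated HPJ start time:

```python
def can_backfill(fns, lpj_nodes, start, end):
    """fns[t]: free nodes in slot t after all higher-priority reservations
    have been charged to the schedule. Because reservations are already
    subtracted, fitting the LPJ here cannot disturb any HPJ start time."""
    return all(fns.get(t, 0) >= lpj_nodes for t in range(start, end))

def earliest_time_range(fns, nodes_needed, duration, horizon):
    """Scan the free node schedule for the earliest window that fits the job."""
    for start in range(horizon):
        if can_backfill(fns, nodes_needed, start, start + duration):
            return start, start + duration
    return None

fns = {0: 2, 1: 2, 2: 8, 3: 8, 4: 8}   # free nodes per hour in one sub-pool
print(earliest_time_range(fns, nodes_needed=4, duration=2, horizon=5))  # (2, 4)
```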
Rinkevicius, Zilvinas; Li, Xin; Sandberg, Jaime A R; Mikkelsen, Kurt V; Ågren, Hans
2014-03-11
We introduce a density functional theory/molecular mechanical approach for computation of linear response properties of molecules in heterogeneous environments, such as metal surfaces or nanoparticles embedded in solvents. The heterogeneous embedding environment, consisting of metallic and nonmetallic parts, is described by combined force fields, where conventional force fields are used for the nonmetallic part and capacitance-polarization-based force fields are used for the metallic part. The presented approach enables studies of properties and spectra of systems embedded in or placed at arbitrarily shaped metallic surfaces, clusters, or nanoparticles. The capability and performance of the proposed approach is illustrated by sample calculations of the optical absorption spectra of thymidine adsorbed on gold surfaces in an aqueous environment, where we study how different organizations of the gold surface and the combined, nonadditive effect of the two environments are reflected in the optical absorption spectrum.
An Overview of MSHN: The Management System for Heterogeneous Networks
1999-04-01
An Overview of MSHN: The Management System for Heterogeneous Networks. Debra A. Hensgen, Taylor Kidd, David St. John, Matthew C. Schnaidt, Howard… Alhusaini, V. K. Prasanna, and C. S. Raghavendra, "A unified resource scheduling framework for heterogeneous computing environments," Proc. 8th IEEE…
CQPSO scheduling algorithm for heterogeneous multi-core DAG task model
NASA Astrophysics Data System (ADS)
Zhai, Wenzheng; Hu, Yue-Li; Ran, Feng
2017-07-01
Efficient task scheduling is critical to achieve high performance in a heterogeneous multi-core computing environment. The paper focuses on the heterogeneous multi-core directed acyclic graph (DAG) task model and proposes a novel task scheduling method based on an improved chaotic quantum-behaved particle swarm optimization (CQPSO) algorithm. A task priority scheduling list is built, and the processor with the minimum cumulative earliest finish time (EFT) is selected as the target of the first task assignment. The task precedence relationships are satisfied and the total execution time of all tasks is minimized. The experimental results show that the proposed algorithm is simple and feasible, has strong optimization ability and fast convergence, and can be applied to task scheduling optimization for other heterogeneous and distributed environments.
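The earliest-finish-time assignment step can be sketched in isolation (a generic list-scheduling fragment that ignores communication costs; the CQPSO search built on top of it is not shown):

```python
def assign_by_eft(priority_list, deps, exec_time, processors):
    """priority_list: tasks in scheduling order (all predecessors earlier).
    deps[t]: set of predecessor tasks; exec_time[(t, p)]: heterogeneous cost.
    Each task goes to the processor giving the minimum earliest finish time."""
    proc_free = {p: 0.0 for p in processors}
    finish, placement = {}, {}
    for t in priority_list:
        ready = max((finish[d] for d in deps.get(t, ())), default=0.0)
        best_p = min(processors,
                     key=lambda p: max(ready, proc_free[p]) + exec_time[(t, p)])
        start = max(ready, proc_free[best_p])
        finish[t] = start + exec_time[(t, best_p)]
        proc_free[best_p] = finish[t]
        placement[t] = best_p
    return placement, max(finish.values())   # mapping and overall makespan

placement, makespan = assign_by_eft(
    ["a", "b", "c"], {"c": {"a", "b"}},
    {("a", "p0"): 3, ("a", "p1"): 5, ("b", "p0"): 4, ("b", "p1"): 2,
     ("c", "p0"): 6, ("c", "p1"): 9}, ["p0", "p1"])
print(placement, makespan)
```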
Using PVM to host CLIPS in distributed environments
NASA Technical Reports Server (NTRS)
Myers, Leonard; Pohl, Kym
1994-01-01
It is relatively easy to enhance CLIPS (C Language Integrated Production System) to support multiple expert systems running in a distributed environment with heterogeneous machines. The task is minimized by using the PVM (Parallel Virtual Machine) code from Oak Ridge Labs to provide the distributed utility. PVM is a library of C and FORTRAN subprograms that supports distributed computing on many different UNIX platforms. A PVM daemon is easily installed on each CPU that enters the virtual machine environment. Any user with rsh or rexec access to a machine can use the one PVM daemon to obtain a generous set of distributed facilities. The ready availability of both CLIPS and PVM makes the combination of software particularly attractive for budget-conscious experimentation with heterogeneous distributed computing using multiple CLIPS executables. This paper presents a design that is sufficient to provide essential message passing functions in CLIPS and enable the full range of PVM facilities.
Efficient Process Migration for Parallel Processing on Non-Dedicated Networks of Workstations
NASA Technical Reports Server (NTRS)
Chanchio, Kasidit; Sun, Xian-He
1996-01-01
This paper presents the design and preliminary implementation of MpPVM, a software system that supports process migration for PVM application programs in a non-dedicated heterogeneous computing environment. New concepts of migration points as well as migration point analysis and necessary data analysis are introduced. In MpPVM, process migrations occur only at previously inserted migration points. Migration point analysis determines appropriate locations to insert migration points, whereas necessary data analysis provides a minimum set of variables to be transferred at each migration point. A new methodology to perform reliable point-to-point data communications in a migration environment is also discussed. Finally, a preliminary implementation of MpPVM and its experimental results are presented, showing the correctness and promising performance of our process migration mechanism in a scalable non-dedicated heterogeneous computing environment. While MpPVM is developed on top of PVM, the process migration methodology introduced in this study is general and can be applied to any distributed software environment.
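The migration-point mechanism can be pictured with a small sketch (plain Python, not MpPVM): a migration flag is tested only at inserted migration points, and only the minimum variable set identified by the necessary data analysis is packed for transfer:

```python
MIGRATE_REQUESTED = False   # would be set asynchronously by the scheduler

def pack_state(necessary_vars):
    """Only the minimum variable set identified for this migration point
    is serialized; everything else is recomputed or reloaded after restart."""
    return dict(necessary_vars)

def long_computation(n):
    partial_sum = 0
    for i in range(n):
        partial_sum += i * i
        # --- migration point inserted here by the preprocessor ---
        if MIGRATE_REQUESTED and i % 1000 == 0:
            state = pack_state({"i": i, "partial_sum": partial_sum})
            return ("migrate", state)   # caller ships state to the new host
    return ("done", partial_sum)

print(long_computation(10_000))
```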
2011-08-09
fastest 10 supercomputers in the world. Both systems rely on GPU co-processing, one using AMD cards, the second, called Nebulae, using NVIDIA Tesla... capability of almost 3 petaflop/s, the highest in the TOP500, Nebulae only holds the No. 2 position on the TOP500 list of the...
NASA Technical Reports Server (NTRS)
Townsend, James C.; Weston, Robert P.; Eidson, Thomas M.
1993-01-01
The Framework for Interdisciplinary Design Optimization (FIDO) is a general programming environment for automating the distribution of complex computing tasks over a networked system of heterogeneous computers. For example, instead of manually passing a complex design problem between its diverse specialty disciplines, the FIDO system provides for automatic interactions between the discipline tasks and facilitates their communications. The FIDO system networks all the computers involved into a distributed heterogeneous computing system, so they have access to centralized data and can work on their parts of the total computation simultaneously in parallel whenever possible. Thus, each computational task can be done by the most appropriate computer. Results can be viewed as they are produced and variables changed manually for steering the process. The software is modular in order to ease migration to new problems: different codes can be substituted for each of the current code modules with little or no effect on the others. The potential for commercial use of FIDO rests in the capability it provides for automatically coordinating diverse computations on a networked system of workstations and computers. For example, FIDO could provide the coordination required for the design of vehicles or electronics or for modeling complex systems.
Improving the Aircraft Design Process Using Web-Based Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)
2000-01-01
Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.
Improving the Aircraft Design Process Using Web-based Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.
2003-01-01
Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.
Using an architectural approach to integrate heterogeneous, distributed software components
NASA Technical Reports Server (NTRS)
Callahan, John R.; Purtilo, James M.
1995-01-01
Many computer programs cannot be easily integrated because their components are distributed and heterogeneous, i.e., they are implemented in diverse programming languages, use different data representation formats, or their runtime environments are incompatible. In many cases, programs are integrated by modifying their components or interposing mechanisms that handle communication and conversion tasks. For example, remote procedure call (RPC) helps integrate heterogeneous, distributed programs. When configuring such programs, however, mechanisms like RPC must be used explicitly by software developers in order to integrate collections of diverse components. Each collection may require a unique integration solution. This paper describes improvements to the concepts of software packaging and some of our experiences in constructing complex software systems from a wide variety of components in different execution environments. Software packaging is a process that automatically determines how to integrate a diverse collection of computer programs based on the types of components involved and the capabilities of available translators and adapters in an environment. Software packaging provides a context that relates such mechanisms to software integration processes and reduces the cost of configuring applications whose components are distributed or implemented in different programming languages. Our software packaging tool subsumes traditional integration tools like UNIX make by providing a rule-based approach to software integration that is independent of execution environments.
NASA Astrophysics Data System (ADS)
Poat, M. D.; Lauret, J.; Betts, W.
2015-12-01
The STAR online computing environment is an intensive, ever-growing system used for real-time data collection and analysis. Composed of heterogeneous machines, sometimes in custom-tuned groups, the computing infrastructure was previously managed by manual configurations and inconsistently monitored by a combination of tools. This situation led to configuration inconsistency and an overload of repetitive tasks, along with lackluster communication between personnel and machines. Globally securing this heterogeneous cyberinfrastructure was tedious at best, so an agile, policy-driven system ensuring consistency was pursued. Three configuration management tools, Chef, Puppet, and CFEngine, have been compared in reliability, versatility and performance, along with a comparison of the infrastructure monitoring tools Nagios and Icinga. STAR has selected the CFEngine configuration management tool and the Icinga infrastructure monitoring system, leading to a versatile and sustainable solution. By leveraging these two tools STAR can now swiftly upgrade and modify the environment to its needs with ease as well as promptly react to cyber-security requests. By creating a sustainable long term monitoring solution, the detection of failures was reduced from days to minutes, allowing rapid actions before the issues become dire problems, potentially causing loss of precious experimental data or uptime.
Additional Security Considerations for Grid Management
NASA Technical Reports Server (NTRS)
Eidson, Thomas M.
2003-01-01
The use of Grid computing environments is growing in popularity. A Grid computing environment is primarily a wide area network that encompasses multiple local area networks, where some of the local area networks are managed by different organizations. A Grid computing environment also includes common interfaces for distributed computing software so that the heterogeneous set of machines that make up the Grid can be used more easily. The other key feature of a Grid is that the distributed computing software includes appropriate security technology. The focus of most Grid software is on the security involved with application execution, file transfers, and other remote computing procedures. However, there are other important security issues related to the management of a Grid and the users who use that Grid. This note discusses these additional security issues and makes several suggestions as how they can be managed.
NASA Technical Reports Server (NTRS)
Lawrence, Charles; Putt, Charles W.
1997-01-01
The Visual Computing Environment (VCE) is a NASA Lewis Research Center project to develop a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis. The objectives of VCE are to (1) develop a visual computing environment for controlling the execution of individual simulation codes that are running in parallel and are distributed on heterogeneous host machines in a networked environment, (2) develop numerical coupling algorithms for interchanging boundary conditions between codes with arbitrary grid matching and different levels of dimensionality, (3) provide a graphical interface for simulation setup and control, and (4) provide tools for online visualization and plotting. VCE was designed to provide a distributed, object-oriented environment. Mechanisms are provided for creating and manipulating objects, such as grids, boundary conditions, and solution data. This environment includes parallel virtual machine (PVM) for distributed processing. Users can interactively select and couple any set of codes that have been modified to run in a parallel distributed fashion on a cluster of heterogeneous workstations. A scripting facility allows users to dictate the sequence of events that make up the particular simulation.
Overview of the LINCS architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fletcher, J.G.; Watson, R.W.
1982-01-13
Computing at the Lawrence Livermore National Laboratory (LLNL) has evolved over the past 15 years into a computer network based resource sharing environment. The increasing use of low cost and high performance micro, mini and midi computers and commercially available local networking systems will accelerate this trend. Further, even the large scale computer systems, on which much of the LLNL scientific computing depends, are evolving into multiprocessor systems. It is our belief that the most cost effective use of this environment will depend on the development of application systems structured into cooperating concurrent program modules (processes) distributed appropriately over different nodes of the environment. A node is defined as one or more processors with a local (shared) high speed memory. Given the latter view, the environment can be characterized as consisting of: multiple nodes communicating over noisy channels with arbitrary delays and throughput, heterogeneous base resources and information encodings, no single administration controlling all resources, distributed system state, and no uniform time base. The system design problem is how to turn the heterogeneous base hardware/firmware/software resources of this environment into a coherent set of resources that facilitate development of cost effective, reliable, and human engineered applications. We believe the answer lies in developing a layered, communication oriented distributed system architecture; layered and modular to support ease of understanding, reconfiguration, extensibility, and hiding of implementation or nonessential local details; communication oriented because that is a central feature of the environment. The Livermore Interactive Network Communication System (LINCS) is a hierarchical architecture designed to meet the above needs. While having characteristics in common with other architectures, it differs in several respects.
DeCourcy, Kelly; Hostnik, Eric T; Lorbach, Josh; Knoblaugh, Sue
2016-12-01
An adult leopard gecko (Eublepharis macularius) presented for lethargy, hyporexia, weight loss, decreased passage of waste, and a palpable caudal coelomic mass. Computed tomography showed a heterogeneous hyperattenuating (∼143 Hounsfield units) structure within the right caudal coelom. The distal colon-coprodeum lumen or urinary bladder was hypothesized as the most likely location for the heterogeneous structure. Medical support consisted of warm water and lubricant enema, as well as a heated environment. Medical intervention aided the passage of a plug comprised centrally of cholesterol and urates with peripheral stratified layers of fibrin, macrophages, heterophils, and bacteria. Within 24 hr, a follow-up computed tomography scan showed resolution of the pelvic canal plug.
NASA Technical Reports Server (NTRS)
Voecks, G. E.
1983-01-01
Insufficient theoretical definition of heterogeneous catalysts is the major difficulty confronting industrial suppliers who seek catalyst systems which are more active, selective, and stable than those currently available. In contrast, progress was made in tailoring homogeneous catalysts to specific reactions because more is known about the reaction intermediates promoted and/or stabilized by these catalysts during the course of the reaction. However, modeling heterogeneous catalysts on a microscopic scale requires compiling and verifying complex information on reaction intermediates and pathways. This can be achieved by adapting homogeneously catalyzed reaction intermediate species, applying theoretical quantum chemistry and computer technology, and developing a better understanding of heterogeneous catalyst system environments. Research in microscopic reaction modeling is now at a stage where computer modeling, supported by physical experimental verification, could provide information about the dynamics of the reactions that will lead to designing supported catalysts with improved selectivity and stability.
Campus-Wide Computing: Early Results Using Legion at the University of Virginia
2006-01-01
Bernard et al., "Primitives for Distributed Computing in a Heterogeneous Local Area Network Environment," IEEE Trans. on Soft. Eng., vol. 15, no. 12...1994. [16] F. Ferstl, "CODINE Technical Overview," Genias, April 1993. [17] R. F. Freund and D. S. Cornwell, "Superconcurrency: A form of distributed
Beam Dynamics Simulation Platform and Studies of Beam Breakup in Dielectric Wakefield Structures
NASA Astrophysics Data System (ADS)
Schoessow, P.; Kanareykin, A.; Jing, C.; Kustov, A.; Altmark, A.; Gai, W.
2010-11-01
A particle-Green's function beam dynamics code (BBU-3000) to study beam breakup effects is incorporated into a parallel computing framework based on the Boinc software environment, and supports both task farming on a heterogeneous cluster and local grid computing. User access to the platform is through a web browser.
ERIC Educational Resources Information Center
Reyes Alamo, Jose M.
2010-01-01
The Service Oriented Computing (SOC) paradigm, defines services as software artifacts whose implementations are separated from their specifications. Application developers rely on services to simplify the design, reduce the development time and cost. Within the SOC paradigm, different Service Oriented Architectures (SOAs) have been developed.…
Provably Secure Heterogeneous Access Control Scheme for Wireless Body Area Network.
Omala, Anyembe Andrew; Mbandu, Angolo Shem; Mutiria, Kamenyi Domenic; Jin, Chunhua; Li, Fagen
2018-04-28
Wireless body area network (WBAN) provides a medium through which physiological information could be harvested and transmitted to an application provider (AP) in real time. Integrating WBAN in a heterogeneous Internet of Things (IoT) ecosystem would enable an AP to monitor patients from anywhere and at any time. However, the IoT roadmap of interconnected 'Things' is still faced with many challenges. One of the challenges in healthcare is the security and privacy of streamed medical data from heterogeneously networked devices. In this paper, we first propose a heterogeneous signcryption scheme where the sender is in a certificateless cryptographic (CLC) environment while the receiver is in an identity-based cryptographic (IBC) environment. We then use this scheme to design a heterogeneous access control protocol. A formal security proof for indistinguishability against adaptive chosen ciphertext attack and unforgeability against adaptive chosen message attack in the random oracle model is presented. In comparison with some of the existing access control schemes, our scheme has lower computation and communication cost.
Interoperability of GADU in using heterogeneous Grid resources for bioinformatics applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sulakhe, D.; Rodriguez, A.; Wilde, M.
2008-03-01
Bioinformatics tools used for efficient and computationally intensive analysis of genetic sequences require large-scale computational resources to accommodate the growing data. Grid computational resources such as the Open Science Grid and TeraGrid have proved useful for scientific discovery. The genome analysis and database update system (GADU) is a high-throughput computational system developed to automate the steps involved in accessing the Grid resources for running bioinformatics applications. This paper describes the requirements for building an automated scalable system such as GADU that can run jobs on different Grids. The paper describes the resource-independent configuration of GADU using the Pegasus-based virtual data system that makes high-throughput computational tools interoperable on heterogeneous Grid resources. The paper also highlights the features implemented to make GADU a gateway to computationally intensive bioinformatics applications on the Grid. The paper will not go into the details of problems involved or the lessons learned in using individual Grid resources as it has already been published in our paper on genome analysis research environment (GNARE) and will focus primarily on the architecture that makes GADU resource independent and interoperable across heterogeneous Grid resources.
Information Power Grid Posters
NASA Technical Reports Server (NTRS)
Vaziri, Arsi
2003-01-01
This document is a summary of the accomplishments of the Information Power Grid (IPG). Grids are an emerging technology that provide seamless and uniform access to the geographically dispersed computational, data storage, networking, instrument, and software resources needed for solving large-scale scientific and engineering problems. The goal of the NASA IPG is to use NASA's remotely located computing and data system resources to build distributed systems that can address problems that are too large or complex for a single site. The accomplishments outlined in this poster presentation are: access to distributed data, IPG heterogeneous computing, integration of a large-scale computing node into a distributed environment, remote access to high data rate instruments, and an exploratory grid environment.
Multiprocessor programming environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, M.B.; Fornaro, R.
Programming tools and techniques have been well developed for traditional uniprocessor computer systems. The focus of this research project is on the development of a programming environment for a high speed real time heterogeneous multiprocessor system, with special emphasis on languages and compilers. The new tools and techniques will allow a smooth transition for programmers with experience only on single processor systems.
Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Abdulhamid, Shafi'i Muhammad; Usman, Mohammed Joda
2017-01-01
Cloud computing infrastructure is suitable for meeting the computational needs of large task sizes. Optimal scheduling of tasks in a cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving a task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule-based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.
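Of the six heuristics compared, Min-min is representative; a compact sketch for independent tasks with estimated execution times (a textbook formulation, not the paper's implementation) is given below:

```python
def min_min(tasks, machines, exec_time):
    """exec_time[(t, m)]: estimated time of task t on machine m.
    Repeatedly schedules the task whose best completion time is smallest."""
    ready = {m: 0.0 for m in machines}
    schedule = {}
    unassigned = set(tasks)
    while unassigned:
        # For each task, the machine giving its minimum completion time.
        best = {t: min(machines, key=lambda m: ready[m] + exec_time[(t, m)])
                for t in unassigned}
        # Pick the task with the smallest such completion time (the "min-min").
        t = min(unassigned, key=lambda t: ready[best[t]] + exec_time[(t, best[t])])
        m = best[t]
        ready[m] += exec_time[(t, m)]
        schedule[t] = m
        unassigned.remove(t)
    return schedule, max(ready.values())   # assignment and makespan

print(min_min(["t1", "t2", "t3"], ["m1", "m2"],
              {("t1", "m1"): 4, ("t1", "m2"): 6, ("t2", "m1"): 3,
               ("t2", "m2"): 7, ("t3", "m1"): 9, ("t3", "m2"): 2}))
```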
Johanson, Bradley E.; Fox, Armando; Winograd, Terry A.; Hanrahan, Patrick M.
2010-04-20
An efficient and adaptive middleware infrastructure called the Event Heap system dynamically coordinates application interactions and communications in a ubiquitous computing environment, e.g., an interactive workspace, having heterogeneous software applications running on various machines and devices across different platforms. Applications exchange events via the Event Heap. Each event is characterized by a set of unordered, named fields. Events are routed by matching certain attributes in the fields. The source and target versions of each field are automatically set when an event is posted or used as a template. The Event Heap system implements a unique combination of features, both intrinsic to tuplespaces and specific to the Event Heap, including content based addressing, support for routing patterns, standard routing fields, limited data persistence, query persistence/registration, transparent communication, self-description, flexible typing, logical/physical centralization, portable client API, at most once per source first-in-first-out ordering, and modular restartability.
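The field-matching behaviour described above can be approximated in a few lines (a toy tuplespace sketch; the real Event Heap adds persistence, ordering guarantees, and standard routing fields):

```python
class EventHeap:
    """Toy content-addressed event store: events are dicts of named fields,
    and a template matches any event whose fields include the template's."""
    def __init__(self):
        self.events = []

    def post(self, **fields):
        self.events.append(fields)

    def match(self, **template):
        return [e for e in self.events
                if all(e.get(k) == v for k, v in template.items())]

heap = EventHeap()
heap.post(type="ButtonPress", source="touch_panel", x=10, y=20)
heap.post(type="ButtonPress", source="laptop", x=3, y=4)
# A display application only cares about ButtonPress events from the panel.
print(heap.match(type="ButtonPress", source="touch_panel"))
```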
Pandey, Vaibhav; Saini, Poonam
2018-06-01
The MapReduce (MR) computing paradigm and its open source implementation Hadoop have become a de facto standard for processing big data in a distributed environment. Initially, the Hadoop system was homogeneous in three significant aspects, namely, user, workload, and cluster (hardware). However, with the growing variety of MR jobs and the inclusion of differently configured nodes in the existing cluster, heterogeneity has become an essential part of Hadoop systems. The heterogeneity factors adversely affect the performance of a Hadoop scheduler and limit the overall throughput of the system. To overcome this problem, various heterogeneous Hadoop schedulers have been proposed in the literature. Existing survey works in this area mostly cover homogeneous schedulers and classify them on the basis of the quality of service parameters they optimize. Hence, there is a need to study the heterogeneous Hadoop schedulers on the basis of the various heterogeneity factors considered by them. In this survey article, we first discuss different heterogeneity factors that typically exist in a Hadoop system and then explore various challenges that arise while designing schedulers in the presence of such heterogeneity. Afterward, we present a comparative study of the heterogeneous scheduling algorithms available in the literature and classify them by the aforementioned heterogeneity factors. Lastly, we investigate the different methods and environments used for the evaluation of the discussed Hadoop schedulers.
Message Efficient Checkpointing and Rollback Recovery in Heterogeneous Mobile Networks
NASA Astrophysics Data System (ADS)
Jaggi, Parmeet Kaur; Singh, Awadhesh Kumar
2016-06-01
Heterogeneous networks provide an appealing way of expanding the computing capability of mobile networks by combining infrastructure-less mobile ad-hoc networks with infrastructure-based cellular mobile networks. The nodes in such a network range from low-power nodes to macro base stations and thus vary greatly in their capabilities, such as computation power and battery power. The nodes are susceptible to different types of transient and permanent failures and therefore, the algorithms designed for such networks need to be fault-tolerant. The article presents a checkpointing algorithm for the rollback recovery of mobile hosts in a heterogeneous mobile network. Checkpointing is a well-established approach to provide fault tolerance in static and cellular mobile distributed systems. However, the use of checkpointing for fault tolerance in a heterogeneous environment remains to be explored. The proposed protocol is based on the Netzer-Xu results on zigzag paths and zigzag cycles. Considering the heterogeneity prevalent in the network, an uncoordinated checkpointing technique is employed. Yet, useless checkpoints are avoided without causing a high message overhead.
NASA Technical Reports Server (NTRS)
Stroupe, Ashley W.; Okon, Avi; Robinson, Matthew; Huntsberger, Terry; Aghazarian, Hrand; Baumgartner, Eric
2004-01-01
Robotic Construction Crew (RCC) is a heterogeneous multi-robot system for autonomous acquisition, transport, and precision mating of components in construction tasks. RCC minimizes the use of resources that are constrained in a space environment, such as computation, power, communication, and sensing. A behavior-based architecture provides adaptability and robustness despite low computational requirements. RCC successfully performs several construction related tasks in an emulated outdoor environment despite high levels of uncertainty in motion and sensing. Quantitative results are provided for formation keeping in component transport, precision instrument placement, and construction tasks.
A scalable approach to solving dense linear algebra problems on hybrid CPU-GPU systems
Song, Fengguang; Dongarra, Jack
2014-10-01
Aiming to fully exploit the computing power of all CPUs and all graphics processing units (GPUs) on hybrid CPU-GPU systems to solve dense linear algebra problems, in this paper we design a class of heterogeneous tile algorithms to maximize the degree of parallelism, to minimize the communication volume, and to accommodate the heterogeneity between CPUs and GPUs. The new heterogeneous tile algorithms are executed upon our decentralized dynamic scheduling runtime system, which schedules a task graph dynamically and transfers data between compute nodes automatically. The runtime system uses a new distributed task assignment protocol to solve data dependencies between tasks without any coordination between processing units. By overlapping computation and communication through dynamic scheduling, we are able to attain scalable performance for the double-precision Cholesky factorization and QR factorization. Finally, our approach demonstrates a performance comparable to Intel MKL on shared-memory multicore systems and better performance than both vendor (e.g., Intel MKL) and open source libraries (e.g., StarPU) in the following three environments: heterogeneous clusters with GPUs, conventional clusters without GPUs, and shared-memory systems with multiple GPUs.
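To make the tile formulation concrete, the sketch below factors a symmetric positive-definite matrix with the usual POTRF/TRSM/SYRK/GEMM tile kernels in plain sequential NumPy. It only illustrates the tile decomposition that such runtimes schedule as a task graph; it is not the authors' distributed runtime, contains no GPU offload, and the matrix size and tile size are arbitrary choices.

    import numpy as np

    def tiled_cholesky(A, nb):
        """Sequential tiled Cholesky sketch: the lower triangle of A is overwritten with L."""
        n = A.shape[0]
        assert n % nb == 0
        t = n // nb                                   # number of tile rows/columns
        for k in range(t):
            kk = slice(k * nb, (k + 1) * nb)
            # POTRF: factor the diagonal tile
            A[kk, kk] = np.linalg.cholesky(A[kk, kk])
            for i in range(k + 1, t):
                ii = slice(i * nb, (i + 1) * nb)
                # TRSM: solve the panel tile against the diagonal factor
                A[ii, kk] = np.linalg.solve(A[kk, kk], A[ii, kk].T).T
            for i in range(k + 1, t):
                ii = slice(i * nb, (i + 1) * nb)
                # SYRK: update the diagonal tile of the trailing submatrix
                A[ii, ii] -= A[ii, kk] @ A[ii, kk].T
                for j in range(k + 1, i):
                    jj = slice(j * nb, (j + 1) * nb)
                    # GEMM: update the off-diagonal tiles of the trailing submatrix
                    A[ii, jj] -= A[ii, kk] @ A[jj, kk].T

    # toy check against NumPy's reference factorization
    n, nb = 8, 2
    M = np.random.rand(n, n)
    A = M @ M.T + n * np.eye(n)                       # random symmetric positive-definite matrix
    L_ref = np.linalg.cholesky(A)
    tiled_cholesky(A, nb)
    assert np.allclose(np.tril(A), L_ref)             # only the lower triangle holds the factor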
CAD/CAE Integration Enhanced by New CAD Services Standard
NASA Technical Reports Server (NTRS)
Claus, Russell W.
2002-01-01
A Government-industry team led by the NASA Glenn Research Center has developed a computer interface standard for accessing data from computer-aided design (CAD) systems. The Object Management Group, an international computer standards organization, has adopted this CAD services standard. The new standard allows software (e.g., computer-aided engineering (CAE) and computer-aided manufacturing (CAM) software) to access multiple CAD systems through one programming interface. The interface is built on top of a distributed computing system called the Common Object Request Broker Architecture (CORBA). CORBA allows the CAD services software to operate in a distributed, heterogeneous computing environment.
Job Superscheduler Architecture and Performance in Computational Grid Environments
NASA Technical Reports Server (NTRS)
Shan, Hongzhang; Oliker, Leonid; Biswas, Rupak
2003-01-01
Computational grids hold great promise in utilizing geographically separated heterogeneous resources to solve large-scale complex scientific problems. However, a number of major technical hurdles, including distributed resource management and effective job scheduling, stand in the way of realizing these gains. In this paper, we propose a novel grid superscheduler architecture and three distributed job migration algorithms. We also model the critical interaction between the superscheduler and autonomous local schedulers. Extensive performance comparisons with ideal, central, and local schemes using real workloads from leading computational centers are conducted in a simulation environment. Additionally, synthetic workloads are used to perform a detailed sensitivity analysis of our superscheduler. Several key metrics demonstrate that substantial performance gains can be achieved via smart superscheduling in distributed computational grids.
Job Scheduling in a Heterogeneous Grid Environment
NASA Technical Reports Server (NTRS)
Shan, Hong-Zhang; Smith, Warren; Oliker, Leonid; Biswas, Rupak
2004-01-01
Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this potential can be realized. One problem that is critical to effective utilization of computational grids is the efficient scheduling of jobs. This work addresses this problem by describing and evaluating a grid scheduling architecture and three job migration algorithms. The architecture is scalable and does not assume control of local site resources. The job migration policies use the availability and performance of computer systems, the network bandwidth available between systems, and the volume of input and output data associated with each job. An extensive performance comparison is presented using real workloads from leading computational centers. The results, based on several key metrics, demonstrate that the performance of our distributed migration algorithms is significantly greater than that of a local scheduling framework and comparable to a non-scalable global scheduling approach.
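As a rough, hypothetical illustration of the kind of cost model such migration policies weigh (not the paper's actual algorithms), the sketch below estimates a job's completion time at each candidate site from its speed, queued backlog, and the bandwidth available for moving the job's input and output data, then picks the cheapest site; every number is invented.

    # Hypothetical sketch of a bandwidth- and load-aware site selection rule.
    def estimated_completion(job, site):
        queue_wait = site["queued_work"] / site["speed"]      # time to drain the site's backlog
        run_time = job["work"] / site["speed"]                # compute time at this site
        transfer = (job["input_gb"] + job["output_gb"]) / site["bandwidth_gb_per_s"]
        return queue_wait + run_time + transfer               # all units invented

    def pick_site(job, sites):
        return min(sites, key=lambda s: estimated_completion(job, s))

    job = {"work": 3600.0, "input_gb": 20.0, "output_gb": 5.0}
    sites = [
        {"name": "local",  "speed": 1.0, "queued_work": 7200.0, "bandwidth_gb_per_s": 1000.0},
        {"name": "remote", "speed": 4.0, "queued_work": 600.0,  "bandwidth_gb_per_s": 0.5},
    ]
    print(pick_site(job, sites)["name"])      # -> "remote" for these invented numbers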
Meeting People's Needs in a Fully Interoperable Domotic Environment
Miori, Vittorio; Russo, Dario; Concordia, Cesare
2012-01-01
The key idea underlying many Ambient Intelligence (AmI) projects and applications is context awareness, which is based mainly on their capacity to identify users and their locations. The actual computing capacity should remain in the background, in the periphery of our awareness, and should only move to the center if and when necessary. Computing thus becomes ‘invisible’, as it is embedded in the environment and everyday objects. The research project described herein aims to realize an Ambient Intelligence-based environment able to improve users' quality of life by learning their habits and anticipating their needs. This environment is part of an adaptive, context-aware framework designed to make today's incompatible heterogeneous domotic systems fully interoperable, not only for connecting sensors and actuators, but for providing comprehensive connections of devices to users. The solution is a middleware architecture based on open and widely recognized standards capable of abstracting the peculiarities of underlying heterogeneous technologies and enabling them to co-exist and interwork, without however eliminating their differences. At the highest level of this infrastructure, the Ambient Intelligence framework, integrated with the domotic sensors, can enable the system to recognize any unusual or dangerous situations and anticipate health problems or special user needs in a technological living environment, such as a house or a public space. PMID:22969322
A Domain-Specific Language for Aviation Domain Interoperability
ERIC Educational Resources Information Center
Comitz, Paul
2013-01-01
Modern information systems require a flexible, scalable, and upgradeable infrastructure that allows communication and collaboration between heterogeneous information processing and computing environments. Aviation systems from different organizations often use differing representations and distribution policies for the same data and messages,…
Habitat heterogeneity hypothesis and edge effects in model metacommunities.
Hamm, Michaela; Drossel, Barbara
2017-08-07
Spatial heterogeneity is an inherent property of any living environment and is expected to favour biodiversity due to a broader niche space. Furthermore, edges between different habitats can provide additional possibilities for species coexistence. Using computer simulations, this study examines metacommunities consisting of several trophic levels in heterogeneous environments in order to explore the above hypotheses on a community level. We model heterogeneous landscapes by using two different sized resource pools and evaluate the combined effect of dispersal and heterogeneity on local and regional species diversity. This diversity is obtained by running population dynamics and evaluating the robustness (i.e., the fraction of surviving species). The main results for regional robustness are in agreement with the habitat heterogeneity hypothesis, as the largest robustness is found in heterogeneous systems with intermediate dispersal rates. This robustness is larger than in homogeneous systems with the same total amount of resources. We study the edge effect by arranging the two types of resources in two homogeneous blocks. Different edge responses in diversity are observed, depending on dispersal strength. Local robustness is highest for edge habitats that contain the smaller amount of resource in combination with intermediate dispersal. The results show that dispersal is relevant to correctly identify edge responses on community level.
An interactive parallel programming environment applied in atmospheric science
NASA Technical Reports Server (NTRS)
vonLaszewski, G.
1996-01-01
This article introduces an interactive parallel programming environment (IPPE) that simplifies the generation and execution of parallel programs. One of the tasks of the environment is to generate message-passing parallel programs for homogeneous and heterogeneous computing platforms. The parallel programs are represented by using visual objects. This is accomplished with the help of a graphical programming editor that is implemented in Java and enables portability to a wide variety of computer platforms. In contrast to other graphical programming systems, reusable parts of the programs can be stored in a program library to support rapid prototyping. In addition, runtime performance data on different computing platforms is collected in a database. A selection process determines dynamically the software and the hardware platform to be used to solve the problem in minimal wall-clock time. The environment is currently being tested on a Grand Challenge problem, the NASA four-dimensional data assimilation system.
Trust Model to Enhance Security and Interoperability of Cloud Environment
NASA Astrophysics Data System (ADS)
Li, Wenjuan; Ping, Lingdi
Trust is one of the most important means to improve security and enable interoperability among current heterogeneous, independent cloud platforms. This paper first analyzes several trust models used in large and distributed environments and then introduces a novel cloud trust model to solve security issues in a cross-cloud environment, in which cloud customers can choose services from different providers and resources in heterogeneous domains can cooperate. The model is domain-based: it groups one cloud provider's resource nodes into the same domain and sets a trust agent for it. It distinguishes between the two roles of cloud customer and cloud server and designs different strategies for each. In our model, trust recommendation is treated as one type of cloud service, just like computation or storage. The model achieves both identity authentication and behavior authentication. The results of emulation experiments show that the proposed model can efficiently and safely construct trust relationships in a cross-cloud environment.
NEXUS - Resilient Intelligent Middleware
NASA Astrophysics Data System (ADS)
Kaveh, N.; Hercock, R. Ghanea
Service-oriented computing, a composition of distributed-object computing, component-based, and Web-based concepts, is becoming the widespread choice for developing dynamic heterogeneous software assets available as services across a network. One of the major strengths of service-oriented technologies is the high abstraction layer and large granularity level at which software assets are viewed compared to traditional object-oriented technologies. Collaboration through encapsulated and separately defined service interfaces creates a service-oriented environment, whereby multiple services can be linked together through their interfaces to compose a functional system. This approach enables better integration of legacy and non-legacy services, via wrapper interfaces, and allows for service composition at a more abstract level, especially in cases such as vertical market stacks. The heterogeneous nature of service-oriented technologies and the granularity of their software components make them a suitable computing model in the pervasive domain.
Robust analysis of an underwater navigational strategy in electrically heterogeneous corridors.
Dimble, Kedar D; Ranganathan, Badri N; Keshavan, Jishnu; Humbert, J Sean
2016-08-01
Obstacles and other global stimuli provide relevant navigational cues to a weakly electric fish. In this work, robust analysis of a control strategy based on electrolocation for performing obstacle avoidance in electrically heterogeneous corridors is presented and validated. Static output feedback control is shown to achieve the desired goal of reflexive obstacle avoidance in such environments in simulation and experimentation. The proposed approach is computationally inexpensive and readily implementable on a small scale underwater vehicle, making underwater autonomous navigation feasible in real-time.
Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 1; Formulation
NASA Technical Reports Server (NTRS)
Walsh, J. L.; Townsend, J. C.; Salas, A. O.; Samareh, J. A.; Mukhopadhyay, V.; Barthelemy, J.-F.
2000-01-01
An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes the engineering aspects of formulating the optimization by integrating these analysis codes and associated interface codes into the system. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture (CORBA) compliant software product. A companion paper presents currently available results.
NASA Technical Reports Server (NTRS)
Walsh, J. L.; Weston, R. P.; Samareh, J. A.; Mason, B. H.; Green, L. L.; Biedron, R. T.
2000-01-01
An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity finite-element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes both the preliminary results from implementing and validating the multidisciplinary analysis and the results from an aerodynamic optimization. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture compliant software product. A companion paper describes the formulation of the multidisciplinary analysis and optimization system.
Law of Large Numbers: The Theory, Applications and Technology-Based Education
ERIC Educational Resources Information Center
Dinov, Ivo D.; Christou, Nicolas; Gould, Robert
2009-01-01
Modern approaches for technology-based blended education utilize a variety of recently developed novel pedagogical, computational and network resources. Such attempts employ technology to deliver integrated, dynamically-linked, interactive-content and heterogeneous learning environments, which may improve student comprehension and information…
Shibuta, Yasushi; Sakane, Shinji; Miyoshi, Eisuke; Okita, Shin; Takaki, Tomohiro; Ohno, Munekazu
2017-04-05
Can completely homogeneous nucleation occur? Large scale molecular dynamics simulations performed on a graphics-processing-unit rich supercomputer can shed light on this long-standing issue. Here, a billion-atom molecular dynamics simulation of homogeneous nucleation from an undercooled iron melt reveals that some satellite-like small grains surrounding previously formed large grains exist in the middle of the nucleation process, which are not distributed uniformly. At the same time, grains with a twin boundary are formed by heterogeneous nucleation from the surface of the previously formed grains. The local heterogeneity in the distribution of grains is caused by the local accumulation of the icosahedral structure in the undercooled melt near the previously formed grains. This insight is mainly attributable to the multi-graphics processing unit parallel computation combined with the rapid progress in high-performance computational environments. Nucleation is a fundamental physical process; however, it is a long-standing issue whether completely homogeneous nucleation can occur. Here the authors reveal, via a billion-atom molecular dynamics simulation, that local heterogeneity exists during homogeneous nucleation in an undercooled iron melt.
ERIC Educational Resources Information Center
Crane, Earl Newell
2013-01-01
The research problem that inspired this effort is the challenge of managing the security of systems in large-scale heterogeneous networked environments. Human intervention is slow and limited: humans operate at much slower speeds than networked computer communications and there are few humans associated with each network. Enabling each node in the…
NASA Technical Reports Server (NTRS)
Rorvig, Mark E.
1991-01-01
Vector-product information retrieval (IR) systems produce retrieval results superior to all other searching methods but presently have no commercial implementations beyond the personal computer environment. The NASA Electronic Library Systems (NELS) provides a ranked list of the most likely relevant objects in collections in response to a natural language query. Additionally, the system is constructed using standards and tools (Unix, X-Windows, Motif, and TCP/IP) that permit its operation in organizations that possess many different hosts, workstations, and platforms. There are no known commercial equivalents to this product at this time. The product has applications in all corporate management environments, particularly those that are information intensive, such as finance, manufacturing, biotechnology, and research and development.
Performance of a Heterogeneous Grid Partitioner for N-body Applications
NASA Technical Reports Server (NTRS)
Harvey, Daniel J.; Das, Sajal K.; Biswas, Rupak
2003-01-01
An important characteristic of distributed grids is that they allow geographically separated multicomputers to be tied together in a transparent virtual environment to solve large-scale computational problems. However, many of these applications require effective runtime load balancing for the resulting solutions to be viable. Recently, we developed a latency tolerant partitioner, called MinEX, specifically for use in distributed grid environments. This paper compares the performance of MinEX to that of METIS, a popular multilevel family of partitioners, using simulated heterogeneous grid configurations. A solver for the classical N-body problem is implemented to provide a framework for the comparisons. Experimental results show that MinEX provides superior quality partitions while being competitive to METIS in speed of execution.
Development of a change management system
NASA Technical Reports Server (NTRS)
Parks, Cathy Bonifas
1993-01-01
The complexity and interdependence of software on a computer system can create a situation where a solution to one problem causes failures in dependent software. In the computer industry, software problems arise and are often solved with 'quick and dirty' solutions. But in implementing these solutions, documentation about the solution or user notification of changes is often overlooked, and new problems are frequently introduced because of insufficient review or testing. These problems increase when numerous heterogeneous systems are involved. Because of this situation, a change management system plays an integral part in the maintenance of any multisystem computing environment. At the NASA Ames Advanced Computational Facility (ACF), the Online Change Management System (OCMS) was designed and developed to manage the changes being applied to its multivendor computing environment. This paper documents the research, design, and modifications that went into the development of this change management system (CMS).
Mobility in hospital work: towards a pervasive computing hospital environment.
Morán, Elisa B; Tentori, Monica; González, Víctor M; Favela, Jesus; Martínez-Garcia, Ana I
2007-01-01
Handheld computers are increasingly being used by hospital workers. With the integration of wireless networks into hospital information systems, handheld computers can provide the basis for a pervasive computing hospital environment; to develop this, designers need empirical information to understand how hospital workers interact with information while moving around. To characterise these phenomena, we report the results of a workplace study conducted in a hospital. We found that individuals spend about half of their time at their base location, where most of their interactions occur. On average, our informants spent 23% of their time performing information management tasks, followed by coordination (17.08%), clinical case assessment (15.35%) and direct patient care (12.6%). We discuss how our results offer insights for the design of pervasive computing technology, and directions for further research and development in this field, such as transferring information between heterogeneous devices and integrating the physical and digital domains.
A Latency-Tolerant Partitioner for Distributed Computing on the Information Power Grid
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Kwak, Dochan (Technical Monitor)
2001-01-01
NASA's Information Power Grid (IPG) is an infrastructure designed to harness the power of geographically distributed computers, databases, and human expertise, in order to solve large-scale realistic computational problems. This type of a meta-computing environment is necessary to present a unified virtual machine to application developers that hides the intricacies of a highly heterogeneous environment and yet maintains adequate security. In this paper, we present a novel partitioning scheme, called MinEX, that dynamically balances processor workloads while minimizing data movement and runtime communication, for applications that are executed in a parallel distributed fashion on the IPG. We also analyze the conditions that are required for the IPG to be an effective tool for such distributed computations. Our results show that MinEX is a viable load balancer provided the nodes of the IPG are connected by a high-speed asynchronous interconnection network.
Object-oriented Tools for Distributed Computing
NASA Technical Reports Server (NTRS)
Adler, Richard M.
1993-01-01
Distributed computing systems are proliferating, owing to the availability of powerful, affordable microcomputers and inexpensive communication networks. A critical problem in developing such systems is getting application programs to interact with one another across a computer network. Remote interprogram connectivity is particularly challenging across heterogeneous environments, where applications run on different kinds of computers and operating systems. NetWorks! (trademark) is an innovative software product that provides an object-oriented messaging solution to these problems. This paper describes the design and functionality of NetWorks! and illustrates how it is being used to build complex distributed applications for NASA and in the commercial sector.
Data Center Consolidation: A Step towards Infrastructure Clouds
NASA Astrophysics Data System (ADS)
Winter, Markus
Application service providers face enormous challenges and rising costs in managing and operating a growing number of heterogeneous system and computing landscapes. Limitations of traditional computing environments force IT decision-makers to reorganize computing resources within the data center, as continuous growth leads to an inefficient utilization of the underlying hardware infrastructure. This paper discusses a way for infrastructure providers to improve data center operations based on the findings of a case study on resource utilization of very large business applications and presents an outlook beyond server consolidation endeavors, transforming corporate data centers into compute clouds.
Open Source Live Distributions for Computer Forensics
NASA Astrophysics Data System (ADS)
Giustini, Giancarlo; Andreolini, Mauro; Colajanni, Michele
Current distributions of open source forensic software provide digital investigators with a large set of heterogeneous tools. Their use is not always focused on the target and requires high technical expertise. We present a new GNU/Linux live distribution, named CAINE (Computer Aided INvestigative Environment) that contains a collection of tools wrapped up into a user friendly environment. The CAINE forensic framework introduces novel important features, aimed at filling the interoperability gap across different forensic tools. Moreover, it provides a homogeneous graphical interface that drives digital investigators during the acquisition and analysis of electronic evidence, and it offers a semi-automatic mechanism for the creation of the final report.
NASA Astrophysics Data System (ADS)
Manfredi, Sabato
2016-06-01
Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology and environmental monitoring to sensor networks and power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics/interactions, which require increasingly computationally demanding methods for their analysis and control design as the network size and node/interaction complexity grow. Therefore, finding scalable computational methods for the distributed control design of large-scale networks is a challenging problem. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved with the MATLAB toolbox. The stabilisability of each node dynamic is a sufficient assumption for designing a globally stabilising distributed control. The proposed approach improves on some existing LMI-based results on MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in computational requirements in the case of weakly heterogeneous MASs, a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows moving from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving LMIs may be shared among processors located at the networked nodes, thus improving the scalability of the approach with respect to the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity compared with existing approaches.
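For orientation only, the LaTeX fragment below states a textbook per-node state-feedback synthesis LMI of the kind such toolboxes solve, assuming node dynamics \dot{x}_i = A_i x_i + B_i u_i; it is a generic illustration, not the paper's distributed conditions for interconnected heterogeneous agents.

    % Generic per-node stabilisability LMI (illustration only):
    % find a symmetric P_i \succ 0 and a matrix Y_i such that
    \begin{aligned}
      A_i P_i + P_i A_i^{\top} + B_i Y_i + Y_i^{\top} B_i^{\top} &\prec 0, \\
      P_i &\succ 0 .
    \end{aligned}
    % A local stabilising gain is then recovered as K_i = Y_i P_i^{-1}.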
NASA Astrophysics Data System (ADS)
Yue, Songshan; Chen, Min; Wen, Yongning; Lu, Guonian
2016-04-01
Earth environment is extremely complicated and constantly changing; thus, it is widely accepted that the use of a single geo-analysis model cannot accurately represent all details when solving complex geo-problems. Over several years of research, numerous geo-analysis models have been developed. However, a collaborative barrier between model providers and model users still exists. The development of cloud computing has provided a new and promising approach for sharing and integrating geo-analysis models across an open web environment. To share and integrate these heterogeneous models, encapsulation studies should be conducted that are aimed at shielding original execution differences to create services which can be reused in the web environment. Although some model service standards (such as Web Processing Service (WPS) and Geo Processing Workflow (GPW)) have been designed and developed to help researchers construct model services, various problems regarding model encapsulation remain. (1) The descriptions of geo-analysis models are complicated and typically require rich-text descriptions and case-study illustrations, which are difficult to fully represent within a single web request (such as the GetCapabilities and DescribeProcess operations in the WPS standard). (2) Although Web Service technologies can be used to publish model services, model users who want to use a geo-analysis model and copy the model service into another computer still encounter problems (e.g., they cannot access the model deployment dependencies information). This study presents a strategy for encapsulating geo-analysis models to reduce problems encountered when sharing models between model providers and model users and supports the tasks with different web service standards (e.g., the WPS standard). A description method for heterogeneous geo-analysis models is studied. Based on the model description information, the methods for encapsulating the model-execution program to model services and for describing model-service deployment information are also included in the proposed strategy. Hence, the model-description interface, model-execution interface and model-deployment interface are studied to help model providers and model users more easily share, reuse and integrate geo-analysis models in an open web environment. Finally, a prototype system is established, and the WPS standard is employed as an example to verify the capability and practicability of the model-encapsulation strategy. The results show that it is more convenient for modellers to share and integrate heterogeneous geo-analysis models in cloud computing platforms.
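To ground the WPS operations mentioned above, the sketch below assembles standard WPS 1.0.0 key-value requests against a hypothetical endpoint and process identifier; exact parameter casing can vary by server, and this is not the authors' encapsulation strategy or deployment-description interface.

    from urllib.parse import urlencode

    # Hypothetical endpoint and process identifier; placeholders only.
    ENDPOINT = "http://example.org/wps"

    def wps_url(request, **extra):
        params = {"service": "WPS", "request": request}
        params.update(extra)
        return ENDPOINT + "?" + urlencode(params)

    # Discover the service's processes, then ask for one model's description.
    print(wps_url("GetCapabilities"))
    print(wps_url("DescribeProcess", version="1.0.0", identifier="HypotheticalRunoffModel"))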
A Virtual Science Data Environment for Carbon Dioxide Observations
NASA Astrophysics Data System (ADS)
Verma, R.; Goodale, C. E.; Hart, A. F.; Law, E.; Crichton, D. J.; Mattmann, C. A.; Gunson, M. R.; Braverman, A. J.; Nguyen, H. M.; Eldering, A.; Castano, R.; Osterman, G. B.
2011-12-01
Climate science data are often distributed cross-institutionally and made available using heterogeneous interfaces. With respect to observational carbon-dioxide (CO2) records, these data span across national as well as international institutions and are typically distributed using a variety of data standards. Such an arrangement can yield challenges from a research perspective, as users often need to independently aggregate datasets as well as address the issue of data quality. To tackle this dispersion and heterogeneity of data, we have developed the CO2 Virtual Science Data Environment - a comprehensive approach to virtually integrating CO2 data and metadata from multiple missions and providing a suite of computational services that facilitate analysis, comparison, and transformation of that data. The Virtual Science Environment provides climate scientists with a unified web-based destination for discovering relevant observational data in context, and supports a growing range of online tools and services for analyzing and transforming the available data to suit individual research needs. It includes web-based tools to geographically and interactively search for CO2 observations collected from multiple airborne, space, as well as terrestrial platforms. Moreover, the data analysis services it provides over the Internet, including offering techniques such as bias estimation and spatial re-gridding, move computation closer to the data and reduce the complexity of performing these operations repeatedly and at scale. The key to enabling these services, as well as consolidating the disparate data into a unified resource, has been to focus on leveraging metadata descriptors as the foundation of our data environment. This metadata-centric architecture, which leverages the Dublin Core standard, forgoes the need to replicate remote datasets locally. Instead, the system relies upon an extensive, metadata-rich virtual data catalog allowing on-demand browsing and retrieval of CO2 records from multiple missions. In other words, key metadata information about remote CO2 records is stored locally while the data itself is preserved at its respective archive of origin. This strategy has been made possible by our method of encapsulating the heterogeneous sources of data using a common set of web-based services, including services provided by Jet Propulsion Laboratory's Climate Data Exchange (CDX). Furthermore, this strategy has enabled us to scale across missions, and to provide access to a broad array of CO2 observational data. Coupled with on-demand computational services and an intuitive web-portal interface, the CO2 Virtual Science Data Environment effectively transforms heterogeneous CO2 records from multiple sources into a unified resource for scientific discovery.
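Because the catalogue keeps Dublin Core style metadata locally while the granules stay at their archives of origin, a record can be a handful of fields. The sketch below shows a hypothetical, simplified record and a trivial catalogue query; all field values and the archive URL are invented, and this is not the project's actual schema.

    # Hypothetical Dublin Core style record for a remotely archived CO2 granule.
    granule_record = {
        "dc:title": "Column-averaged CO2, hypothetical granule, 2011-07-14",
        "dc:creator": "Hypothetical CO2 observing mission",
        "dc:date": "2011-07-14",
        "dc:format": "application/x-hdf",
        "dc:identifier": "https://archive.example.org/co2/granules/20110714_001.hdf",
        "dc:coverage": "lat -10..10, lon 100..140",
        "dc:description": "Metadata only; the granule itself remains at its archive of origin.",
    }

    # A catalogue is then just a searchable collection of such records.
    catalogue = [granule_record]
    hits = [r["dc:identifier"] for r in catalogue if r["dc:date"].startswith("2011-07")]
    print(hits)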
NASA Astrophysics Data System (ADS)
Shi, X.
2015-12-01
As NSF has indicated, "Theory and experimentation have for centuries been regarded as two fundamental pillars of science. It is now widely recognized that computational and data-enabled science forms a critical third pillar." Geocomputation is the third pillar of GIScience and the geosciences. With the exponential growth of geodata, the challenge of scalable and high-performance computing for big data analytics becomes urgent because many research activities are constrained by software or tools that cannot even complete the computation process. Heterogeneous geodata integration and analytics obviously magnify the complexity and the operational time frame. Many large-scale geospatial problems may not be processable at all if the computer system does not have sufficient memory or computational power. Emerging computer architectures, such as Intel's Many Integrated Core (MIC) Architecture and the Graphics Processing Unit (GPU), and advanced computing technologies provide promising solutions that employ massive parallelism and hardware resources to achieve scalability and high performance for data-intensive computing over large spatiotemporal and social media data. Exploring novel algorithms and deploying the solutions in massively parallel computing environments to achieve scalable data processing and analytics over large-scale, complex, and heterogeneous geodata with consistent quality and high performance has been the central theme of our research team in the Department of Geosciences at the University of Arkansas (UARK). New multi-core architectures combined with application accelerators hold the promise of achieving scalability and high performance by exploiting task- and data-level parallelism not supported by conventional computing systems. Such a parallel or distributed computing environment is particularly suitable for large-scale geocomputation over big data, as proved by our prior work, while the potential of such advanced infrastructure remains unexplored in this domain. Within this presentation, our prior and ongoing initiatives will be summarized to exemplify how we exploit multicore CPUs, GPUs, and MICs, and clusters of CPUs, GPUs and MICs, to accelerate geocomputation in different applications.
DAI-CLIPS: Distributed, Asynchronous, Interacting CLIPS
NASA Technical Reports Server (NTRS)
Gagne, Denis; Garant, Alain
1994-01-01
DAI-CLIPS is a distributed computational environment within which each CLIPS is an active independent computational entity with the ability to communicate freely with other CLIPS. Furthermore, new CLIPS instances can be created, and others can be deleted or can modify their expertise, all dynamically, in an asynchronous and independent fashion during execution. The participating CLIPS are distributed over a network of heterogeneous processors, taking full advantage of the available processing power. We present the general framework encompassing DAI-CLIPS and discuss some of its advantages and potential applications.
Mean field treatment of heterogeneous steady state kinetics
NASA Astrophysics Data System (ADS)
Geva, Nadav; Vaissier, Valerie; Shepherd, James; Van Voorhis, Troy
2017-10-01
We propose a method to quickly compute steady state populations of species undergoing a set of chemical reactions whose rate constants are heterogeneous. Using an average environment in place of an explicit nearest neighbor configuration, we obtain a set of equations describing a single fluctuating active site in the presence of an averaged bath. We apply this Mean Field Steady State (MFSS) method to a model of H2 production on a disordered surface for which the activation energy for the reaction varies from site to site. The MFSS populations quantitatively reproduce the KMC results across the range of rate parameters considered.
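As a toy numerical illustration of replacing site-to-site heterogeneity by an averaged environment (not the authors' MFSS equations or their H2 surface model), the sketch below solves the steady-state coverage of a simple adsorption/desorption/reaction scheme whose reaction barrier varies from site to site, and compares the coverage obtained from the averaged rate with the average of the exact per-site coverages; all rates and the barrier distribution are invented.

    import numpy as np

    rng = np.random.default_rng(0)
    kB_T = 0.025                          # eV, assumed temperature scale
    k_ads, k_des = 1.0, 0.2               # site-independent rates (arbitrary units)

    # Site-to-site heterogeneity: activation energies drawn from a normal distribution.
    Ea = rng.normal(loc=0.50, scale=0.05, size=10_000)       # eV
    k_rxn = 1.0e8 * np.exp(-Ea / kB_T)                       # Arrhenius rates per site

    def coverage(k_r):
        # Steady state of  k_ads*(1 - theta) = (k_des + k_r)*theta
        return k_ads / (k_ads + k_des + k_r)

    theta_site_avg = coverage(k_rxn).mean()      # average of the exact per-site coverages
    theta_mean_field = coverage(k_rxn.mean())    # coverage computed with the averaged rate

    print(f"site-averaged coverage : {theta_site_avg:.4f}")
    print(f"mean-field coverage    : {theta_mean_field:.4f}")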
Data analysis environment (DASH2000) for the Subaru telescope
NASA Astrophysics Data System (ADS)
Mizumoto, Yoshihiko; Yagi, Masafumi; Chikada, Yoshihiro; Ogasawara, Ryusuke; Kosugi, George; Takata, Tadafumi; Yoshida, Michitoshi; Ishihara, Yasuhide; Yanaka, Hiroshi; Yamamoto, Tadahiro; Morita, Yasuhiro; Nakamoto, Hiroyuki
2000-06-01
A new framework for the data analysis system (DASH) has been developed for the SUBARU Telescope. It is designed using object-oriented methodology and adopts a restaurant model. DASH shares the load of CPU and I/O among distributed heterogeneous computers. The distributed object environment of the system is implemented with JAVA and CORBA. DASH has been evaluated through several prototypes. DASH2000 is the latest version, which will be released as the beta version of the data analysis system for the SUBARU Telescope.
Beating the tyranny of scale with a private cloud configured for Big Data
NASA Astrophysics Data System (ADS)
Lawrence, Bryan; Bennett, Victoria; Churchill, Jonathan; Juckes, Martin; Kershaw, Philip; Pepler, Sam; Pritchard, Matt; Stephens, Ag
2015-04-01
The Joint Analysis System, JASMIN, consists of five significant hardware components: a batch computing cluster, a hypervisor cluster, bulk disk storage, high performance disk storage, and access to a tape robot. Each of the computing clusters consists of a heterogeneous set of servers, supporting a range of possible data analysis tasks - and a unique network environment makes it relatively trivial to migrate servers between the two clusters. The high performance disk storage will include the world's largest (publicly visible) deployment of the Panasas parallel disk system. Initially deployed in April 2012, JASMIN has already undergone two major upgrades, culminating in a system which, by April 2015, will have in excess of 16 PB of disk and 4000 cores. Layered on the basic hardware is a range of services, from managed services, such as the curated archives of the Centre for Environmental Data Archival or the data analysis environment for the National Centres for Atmospheric Science and Earth Observation, to a generic Infrastructure as a Service (IaaS) offering for the UK environmental science community. Here we present examples of some of the big data workloads being supported in this environment - ranging from data management tasks, such as checksumming 3 PB of data held in over one hundred million files, to science tasks, such as re-processing satellite observations with new algorithms, or calculating new diagnostics on petascale climate simulation outputs. We will demonstrate how the provision of a cloud environment closely coupled to a batch computing environment, all sharing the same high performance disk system, allows massively parallel processing without the necessity to shuffle data excessively - even as it supports many different virtual communities, each with guaranteed performance. We will discuss the advantages of having a heterogeneous range of servers with available memory from tens of GB at the low end to (currently) two TB at the high end. There are some limitations of the JASMIN environment: the high performance disk storage is not fully available in the IaaS environment, and a planned ability to burst compute-heavy jobs into the public cloud is not yet fully available. There are load balancing and performance issues that need to be understood. We will conclude with projections for future usage, and our plans to meet those requirements.
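The checksumming workload mentioned above is embarrassingly parallel, and a minimal sketch of fanning it out over local cores with Python's standard library is shown below; the archive path, hash choice, chunk size, and pool size are placeholders, and this is not the JASMIN team's actual tooling.

    import hashlib
    from multiprocessing import Pool
    from pathlib import Path

    def checksum(path, chunk=1 << 20):
        """Stream a file through SHA-256 in 1 MiB chunks to keep memory flat."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return str(path), h.hexdigest()

    if __name__ == "__main__":
        files = [str(p) for p in Path("/path/to/archive").rglob("*") if p.is_file()]
        with Pool(processes=8) as pool:                   # placeholder pool size
            for name, digest in pool.imap_unordered(checksum, files, chunksize=64):
                print(digest, name)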
A global distributed storage architecture
NASA Technical Reports Server (NTRS)
Lionikis, Nemo M.; Shields, Michael F.
1996-01-01
NSA architects and planners have come to realize that to gain the maximum benefit from, and keep pace with, emerging technologies, we must move to a radically different computing architecture. The compute complex of the future will be a distributed heterogeneous environment, where, to a much greater extent than today, network-based services are invoked to obtain resources. Among the rewards of implementing the services-based view are that it insulates the user from much of the complexity of our multi-platform, networked, computer and storage environment and hides its diverse underlying implementation details. In this paper, we will describe one of the fundamental services being built in our envisioned infrastructure; a global, distributed archive with near-real-time access characteristics. Our approach for adapting mass storage services to this infrastructure will become clear as the service is discussed.
Secure data exchange between intelligent devices and computing centers
NASA Astrophysics Data System (ADS)
Naqvi, Syed; Riguidel, Michel
2005-03-01
The advent of reliable spontaneous networking technologies (commonly known as wireless ad-hoc networks) has ostensibly raised the stakes for the conception of computing-intensive environments using intelligent devices as their interface with the external world. These smart devices are used as data gateways for the computing units. They are employed in highly volatile environments where the secure exchange of data between these devices and their computing centers is of paramount importance. Moreover, their mission-critical applications require dependable measures against attacks such as denial of service (DoS), eavesdropping, masquerading, etc. In this paper, we propose a mechanism to assure reliable data exchange between an intelligent environment composed of smart devices and distributed computing units collectively called a 'computational grid'. The notion of an infosphere is used to define a digital space made up of a persistent and a volatile asset in an often indefinite geographical space. We study different infospheres and present general evolutions and issues in the security of such technology-rich and intelligent environments. It is beyond any doubt that these environments will likely face a proliferation of users, applications, networked devices, and their interactions on a scale never experienced before. It would be better to build in the ability to uniformly deal with these systems. As a solution, we propose a concept of virtualization of security services. We try to solve the difficult problems of implementation and maintenance of trust on the one hand, and those of security management in heterogeneous infrastructure on the other hand.
Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers
NASA Astrophysics Data System (ADS)
Dreher, Patrick; Scullin, William; Vouk, Mladen
2015-09-01
Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers.
Paralex: An Environment for Parallel Programming in Distributed Systems
1991-12-07
distributed systems is comparable to assembly language programming for traditional sequential systems - the user must resort to low-level primitives ...to accomplish data encoding/decoding, communication, remote execution, synchronization, failure detection and recovery. It is our belief that... synchronization. Finally, composing parallel programs by interconnecting sequential computations allows automatic support for heterogeneity and fault tolerance
A distributed data base management facility for the CAD/CAM environment
NASA Technical Reports Server (NTRS)
Balza, R. M.; Beaudet, R. W.; Johnson, H. R.
1984-01-01
Current/PAD research in the area of distributed data base management considers facilities for supporting CAD/CAM data management in a heterogeneous network of computers encompassing multiple data base managers supporting a variety of data models. These facilities include coordinated execution of multiple DBMSs to provide for administration of and access to data distributed across them.
Common Database Interface for Heterogeneous Software Engineering Tools.
1987-12-01
Thesis presented to the Air Force Institute of Technology, Air University, in partial fulfillment of the requirements for the degree of Master of Science in Information Systems. Subject terms: database management systems; programming (computers); computer files; information transfer; interfaces.
Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures
2017-10-04
This project develops algorithms for scientific and geometric computing by exploiting the power and performance efficiency of heterogeneous shared memory architectures.
A probabilistic approach to information retrieval in heterogeneous databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatterjee, A.; Segev, A.
During the past decade, organizations have increased their scope and operations beyond their traditional geographic boundaries. At the same time, they have adopted heterogeneous and incompatible information systems independent of each other without careful consideration that one day they might need to be integrated. As a result of this diversity, many important business applications today require access to data stored in multiple autonomous databases. This paper examines the problem of inter-database information retrieval in a heterogeneous environment, where conventional techniques are no longer efficient. To solve the problem, broader definitions for the join, union, intersection and selection operators are proposed. Also, a probabilistic method to specify the selectivity of these operators is discussed. An algorithm to compute these probabilities is provided in pseudocode.
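As a toy sketch of the probabilistic flavour of these operators (not the paper's algorithm), the snippet below computes the expected size of a join between two attribute lists when each cross-pair only matches with some probability, for instance because the databases encode the same real-world value differently; the match-probability rule is invented for the example.

    from itertools import product

    def match_probability(a, b):
        # Invented similarity rule: exact match is certain, case-insensitive match is likely,
        # anything else is treated as a non-match.
        if a == b:
            return 1.0
        if a.lower() == b.lower():
            return 0.9
        return 0.0

    def expected_join_size(R, S):
        """Expected number of matching pairs under independent match probabilities."""
        return sum(match_probability(a, b) for a, b in product(R, S))

    R = ["IBM", "Intel", "intel", "DEC"]
    S = ["ibm", "Intel", "HP"]
    pairs = len(R) * len(S)
    e = expected_join_size(R, S)
    print(f"expected join size: {e:.1f}  (selectivity {e / pairs:.3f})")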
Mi, Shichao; Han, Hui; Chen, Cailian; Yan, Jian; Guan, Xinping
2016-02-19
Heterogeneous wireless sensor networks (HWSNs) can achieve more tasks and prolong the network lifetime. However, they are vulnerable to attacks from the environment or malicious nodes. This paper is concerned with the design of a secure consensus scheme in HWSNs consisting of two types of nodes. Sensor nodes (SNs) have more computation power, while relay nodes (RNs) with low power can only transmit information for sensor nodes. To address the security issues of distributed estimation in HWSNs, we exploit the heterogeneity of responsibilities between the two types of nodes and propose a parameter adjusted-based consensus scheme (PACS) to mitigate the effect of malicious nodes. Finally, the convergence property is proven to be guaranteed, and the simulation results validate the effectiveness and efficiency of PACS.
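For background on the consensus machinery PACS builds on, the sketch below runs the standard average-consensus iteration over a small invented undirected topology; it is the generic update with a fixed step size, not the paper's parameter-adjusted scheme, and it includes no attack mitigation.

    import numpy as np

    # Invented 5-node undirected topology (adjacency matrix) and noisy local measurements.
    A = np.array([[0, 1, 0, 0, 1],
                  [1, 0, 1, 0, 0],
                  [0, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1],
                  [1, 0, 0, 1, 0]], dtype=float)
    x = np.array([20.1, 19.8, 20.4, 20.0, 19.7])    # initial local estimates
    eps = 0.2                                       # step size below 1 / max degree

    for _ in range(200):
        # Standard consensus update: move each estimate toward its neighbours' values.
        x = x + eps * (A @ x - A.sum(axis=1) * x)

    print(x)            # all entries converge to the average of the initial estimates
    print(x.mean())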
SU-E-T-422: Fast Analytical Beamlet Optimization for Volumetric Intensity-Modulated Arc Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, Kenny S K; Lee, Louis K Y; Xing, L
2015-06-15
Purpose: To implement a fast optimization algorithm on a CPU/GPU heterogeneous computing platform and to obtain an optimal fluence for a given target dose distribution from the pre-calculated beamlets in an analytical approach. Methods: The 2D target dose distribution was modeled as an n-dimensional vector and estimated by a linear combination of independent basis vectors. The basis set was composed of the pre-calculated beamlet dose distributions at every 6 degrees of gantry angle, and the cost function was set as the magnitude square of the vector difference between the target and the estimated dose distribution. The optimal weighting of the basis, which corresponds to the optimal fluence, was obtained analytically by the least square method. Those basis vectors with a positive weighting were selected for entering into the next level of optimization. Totally, 7 levels of optimization were implemented in the study. Ten head-and-neck and ten prostate carcinoma cases were selected for the study and mapped to a round water phantom with a diameter of 20 cm. The Matlab computation was performed in a heterogeneous programming environment with an Intel i7 CPU and an NVIDIA Geforce 840M GPU. Results: In all selected cases, the estimated dose distribution was in good agreement with the given target dose distribution and their correlation coefficients were found to be in the range of 0.9992 to 0.9997. Their root-mean-square error was monotonically decreasing and converging after 7 cycles of optimization. The computation took only about 10 seconds and the optimal fluence maps at each gantry angle throughout an arc were quickly obtained. Conclusion: An analytical approach is derived for finding the optimal fluence for a given target dose distribution, and a fast optimization algorithm implemented on the CPU/GPU heterogeneous computing environment greatly reduces the optimization time.
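At its core, the fluence optimization described above is a repeated linear least-squares solve in which only positively weighted beamlets survive to the next level. The NumPy sketch below reproduces that select-and-resolve loop on a random stand-in beamlet matrix and target; it omits the gantry-angle structure and the GPU offload, so it is an illustration of the scheme rather than the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(1)
    n_voxels, n_beamlets = 400, 60
    B = rng.random((n_voxels, n_beamlets))     # columns = pre-calculated beamlet doses (stand-in)
    target = 2.0 * rng.random(n_voxels)        # stand-in target dose vector

    active = np.arange(n_beamlets)             # start from the full basis
    for level in range(7):                     # up to 7 optimization levels, as in the abstract
        w, *_ = np.linalg.lstsq(B[:, active], target, rcond=None)
        keep = w > 0                           # only positively weighted beamlets enter the next level
        if keep.all():
            break
        active, w = active[keep], w[keep]      # carry the surviving beamlets (and weights) forward

    fluence = np.zeros(n_beamlets)
    fluence[active] = w
    rmse = np.sqrt(np.mean((B @ fluence - target) ** 2))
    print(f"{active.size} beamlets kept, RMSE = {rmse:.4f}")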
Towards an Approach of Semantic Access Control for Cloud Computing
NASA Astrophysics Data System (ADS)
Hu, Luokai; Ying, Shi; Jia, Xiangyang; Zhao, Kai
With the development of cloud computing, the mutual understandability among distributed Access Control Policies (ACPs) has become an important issue in the security field of cloud computing. Semantic Web technology provides the solution to semantic interoperability of heterogeneous applications. In this paper, we analyze existing access control methods and present a new Semantic Access Control Policy Language (SACPL) for describing ACPs in the cloud computing environment. An Access Control Oriented Ontology System (ACOOS) is designed as the semantic basis of SACPL. The ontology-based SACPL language can effectively solve the interoperability issue of distributed ACPs. This study enriches research on applying Semantic Web technology in the field of security, and provides a new way of thinking about access control in cloud computing.
Kumar, Pardeep; Ylianttila, Mika; Gurtov, Andrei; Lee, Sang-Gon; Lee, Hoon-Jae
2014-01-01
Robust security is highly coveted in real wireless sensor network (WSN) applications since wireless sensors sense critical data from the application environment. This article presents an efficient and adaptive mutual authentication framework that suits real heterogeneous WSN-based applications (such as smart homes, industrial environments, smart grids, and healthcare monitoring). The proposed framework offers: (i) key initialization; (ii) secure network (cluster) formation (i.e., mutual authentication and dynamic key establishment); (iii) key revocation; and (iv) new node addition into the network. The correctness of the proposed scheme is formally verified. An extensive analysis shows the proposed scheme coupled with message confidentiality, mutual authentication and dynamic session key establishment, node privacy, and message freshness. Moreover, the preliminary study also reveals the proposed framework is secure against popular types of attacks, such as impersonation attacks, man-in-the-middle attacks, replay attacks, and information-leakage attacks. As a result, we believe the proposed framework achieves efficiency at reasonable computation and communication costs and it can be a safeguard to real heterogeneous WSN applications. PMID:24521942
An innovative multimodal virtual platform for communication with devices in a natural way
NASA Astrophysics Data System (ADS)
Kinkar, Chhayarani R.; Golash, Richa; Upadhyay, Akhilesh R.
2012-03-01
As technology advances, people are increasingly interested in communicating with machines and computers in a natural way. This makes devices more compact and portable by avoiding remotes, keyboards, and similar peripherals, and it helps users live in an environment freer from electromagnetic waves. This thought has made recognition of natural modalities in human-computer interaction a most appealing and promising research field. At the same time, it has been observed that using a single mode of interaction limits the complete utilization of commands as well as data flow. In this paper, a multimodal platform is proposed in which, out of many natural modalities such as eye gaze, speech, voice, and face, human gestures are combined with human voice in order to minimize the mean square error. This loosens the strict environment needed for accurate and robust interaction when using a single mode. Gestures complement speech: gestures are ideal for direct object manipulation, while natural language is used for descriptive tasks. Human-computer interaction basically requires two broad stages, recognition and interpretation. Recognition and interpretation of natural modalities in complex binary instructions is a difficult task, as it connects the real world to a virtual environment. The main idea of the paper is to develop an efficient model for fusing data coming from heterogeneous sensors, a camera and a microphone. Through this paper we have analyzed that efficiency is increased if heterogeneous data (image and voice) are combined at the feature level using artificial intelligence. The long-term goal of this work is to design a robust system for users who are physically disabled or have little technical knowledge.
NASA Astrophysics Data System (ADS)
Mavelli, Fabio; Ruiz-Mirazo, Kepa
2010-09-01
'ENVIRONMENT' is a computational platform that has been developed in the last few years with the aim to simulate stochastically the dynamics and stability of chemically reacting protocellular systems. Here we present and describe some of its main features, showing how the stochastic kinetics approach can be applied to study the time evolution of reaction networks in heterogeneous conditions, particularly when supramolecular lipid structures (micelles, vesicles, etc) coexist with aqueous domains. These conditions are of special relevance to understand the origins of cellular, self-reproducing compartments, in the context of prebiotic chemistry and evolution. We contrast our simulation results with real lab experiments, with the aim to bring together theoretical and experimental research on protocell and minimal artificial cell systems.
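The platform's source is not reproduced here; as an illustration of the stochastic kinetics (Gillespie-type) approach it builds on, a minimal sketch with an invented two-reaction network might look as follows:

```python
import math
import random

def gillespie(state, reactions, t_end):
    """Minimal Gillespie SSA: `reactions` is a list of (propensity_fn, update_fn)."""
    t, traj = 0.0, [(0.0, dict(state))]
    while t < t_end:
        props = [a(state) for a, _ in reactions]
        a0 = sum(props)
        if a0 == 0:
            break
        t += -math.log(random.random()) / a0           # exponential waiting time
        r, acc = random.random() * a0, 0.0
        for (a, update), p in zip(reactions, props):    # pick a reaction proportionally
            acc += p
            if r < acc:
                update(state)
                break
        traj.append((t, dict(state)))
    return traj

# toy two-reaction network (invented): precursor P -> lipid L, lipid L -> degraded
state = {"P": 500, "L": 0}
reactions = [
    (lambda s: 0.05 * s["P"], lambda s: (s.__setitem__("P", s["P"] - 1),
                                         s.__setitem__("L", s["L"] + 1))),
    (lambda s: 0.01 * s["L"], lambda s: s.__setitem__("L", s["L"] - 1)),
]
print(gillespie(state, reactions, t_end=50.0)[-1])
```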
On salesmen and tourists: Two-step optimization in deterministic foragers
NASA Astrophysics Data System (ADS)
Maya, Miguel; Miramontes, Octavio; Boyer, Denis
2017-02-01
We explore a two-step optimization problem in random environments, the so-called restaurant-coffee shop problem, where a walker aims to visit the nearest and better restaurant in an area and then move to the nearest and better coffee shop. This is an extension of the Tourist Problem, a one-step optimization dynamics that can be viewed as a deterministic walk in a random medium. A certain amount of heterogeneity in the values of the resources to be visited causes the emergence of power-law distributions for the steps performed by the walker, similar to a Lévy flight. The fluctuations of the step lengths tend to decrease as a consequence of multiple-step planning, thus reducing the foraging uncertainty. We find that the first and second steps of each planned movement play very different roles in heterogeneous environments. The two-step process improves the foraging efficiency only slightly compared to the one-step optimization, at a much higher computational cost. We discuss the implications of these findings for animal and human mobility, in particular in relation to the computational effort that informed agents should deploy to solve search problems.
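As a toy illustration of the gap between one-step and two-step planning (ignoring resource quality and using plain nearest-target distances, so this is not the authors' model), one can compare a greedy forager against joint optimization over the pair of targets:

```python
import numpy as np

rng = np.random.default_rng(1)
restaurants = rng.random((200, 2))   # positions of first-step targets
cafes = rng.random((200, 2))         # positions of second-step targets
start = np.array([0.5, 0.5])

def one_step(start, restaurants, cafes):
    """Greedy: nearest restaurant first, then nearest cafe from there."""
    r = restaurants[np.argmin(np.linalg.norm(restaurants - start, axis=1))]
    c = cafes[np.argmin(np.linalg.norm(cafes - r, axis=1))]
    return np.linalg.norm(r - start) + np.linalg.norm(c - r)

def two_step(start, restaurants, cafes):
    """Plan both legs jointly: minimize total path length over all (r, c) pairs."""
    leg1 = np.linalg.norm(restaurants - start, axis=1)[:, None]
    leg2 = np.linalg.norm(restaurants[:, None, :] - cafes[None, :, :], axis=2)
    return (leg1 + leg2).min()

print("one-step total distance:", one_step(start, restaurants, cafes))
print("two-step total distance:", two_step(start, restaurants, cafes))
```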
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wayne F. Boyer; Gurdeep S. Hura
2005-09-01
The problem of obtaining an optimal matching and scheduling of interdependent tasks in distributed heterogeneous computing (DHC) environments is well known to be an NP-hard problem. In a DHC system, task execution time is dependent on the machine to which it is assigned, and task precedence constraints are represented by a directed acyclic graph. Recent research in evolutionary techniques has shown that genetic algorithms usually obtain more efficient schedules than other known algorithms. We propose a non-evolutionary random scheduling (RS) algorithm for efficient matching and scheduling of interdependent tasks in a DHC system. RS is a succession of randomized task orderings and a heuristic mapping from task order to schedule. Randomized task ordering is effectively a topological sort where the outcome may be any possible task order for which the task precedence constraints are maintained. A detailed comparison to existing evolutionary techniques (GA and PSGA) shows the proposed algorithm is less complex than evolutionary techniques, computes schedules in less time, requires less memory and has fewer tuning parameters. Simulation results show that the average schedules produced by RS are approximately as efficient as PSGA schedules for all cases studied and clearly more efficient than PSGA for certain cases. The standard formulation for the scheduling problem addressed in this paper is Rm|prec|Cmax.
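A sketch of the RS idea, a randomized topological ordering followed by a heuristic mapping, is given below; the earliest-finish-time mapping and the toy DAG are assumptions for illustration rather than the authors' exact formulation:

```python
import random

def random_topological_order(succ, n_tasks):
    """Random topological sort: repeatedly pick a random ready task."""
    indeg = [0] * n_tasks
    for u in succ:
        for v in succ[u]:
            indeg[v] += 1
    ready = [t for t in range(n_tasks) if indeg[t] == 0]
    order = []
    while ready:
        t = ready.pop(random.randrange(len(ready)))
        order.append(t)
        for v in succ.get(t, []):
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return order

def schedule(order, succ, cost, n_machines):
    """Map an ordering to a schedule: each task goes to the machine giving the
    earliest finish time, respecting precedence constraints."""
    machine_free = [0.0] * n_machines
    finish = {}
    pred = {v: [u for u in succ for w in succ[u] if w == v] for v in range(len(order))}
    for t in order:
        ready_at = max((finish[p] for p in pred[t]), default=0.0)
        best = min(range(n_machines),
                   key=lambda m: max(machine_free[m], ready_at) + cost[t][m])
        start = max(machine_free[best], ready_at)
        finish[t] = start + cost[t][best]
        machine_free[best] = finish[t]
    return max(finish.values())  # makespan

# toy DAG: 0 -> {1, 2}, 1 -> {3}, 2 -> {3}; cost[t][m] differs per machine
succ = {0: [1, 2], 1: [3], 2: [3]}
cost = [[2, 3], [4, 2], [3, 3], [1, 5]]
best = min(schedule(random_topological_order(succ, 4), succ, cost, 2) for _ in range(100))
print("best makespan found:", best)
```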
NASA Astrophysics Data System (ADS)
Rodrigues, Manuel J.; Fernandes, David E.; Silveirinha, Mário G.; Falcão, Gabriel
2018-01-01
This work introduces a parallel computing framework to characterize the propagation of electron waves in graphene-based nanostructures. The electron wave dynamics is modeled using both "microscopic" and effective medium formalisms and the numerical solution of the two-dimensional massless Dirac equation is determined using a Finite-Difference Time-Domain scheme. The propagation of electron waves in graphene superlattices with localized scattering centers is studied, and the role of the symmetry of the microscopic potential in the electron velocity is discussed. The computational methodologies target the parallel capabilities of heterogeneous multi-core CPU and multi-GPU environments and are built with the OpenCL parallel programming framework which provides a portable, vendor agnostic and high throughput-performance solution. The proposed heterogeneous multi-GPU implementation achieves speedup ratios up to 75x when compared to multi-thread and multi-core CPU execution, reducing simulation times from several hours to a couple of minutes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luszczek, Piotr R; Tomov, Stanimire Z; Dongarra, Jack J
We present an efficient and scalable programming model for the development of linear algebra in heterogeneous multi-coprocessor environments. The model incorporates some of the current best design and implementation practices for the heterogeneous acceleration of dense linear algebra (DLA). Examples are given for the algorithms at the basis of solving linear systems: the LU, QR, and Cholesky factorizations. To generate the extreme level of parallelism needed for the efficient use of coprocessors, algorithms of interest are redesigned and then split into well-chosen computational tasks. Task execution is scheduled over the computational components of a hybrid system of multi-core CPUs and coprocessors using a lightweight runtime system. The use of lightweight runtime systems keeps scheduling overhead low, while enabling the expression of parallelism through otherwise sequential code. This simplifies the development efforts and allows the exploration of the unique strengths of the various hardware components.
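As an illustration of the kind of task decomposition described (not the authors' runtime or API), a tiled Cholesky factorization splits naturally into factor, triangular-solve, and update tasks whose dependencies a lightweight runtime could schedule across CPUs and coprocessors; a serial sketch of the task structure:

```python
import numpy as np
from scipy.linalg import solve_triangular

def tiled_cholesky(A, tile):
    """Right-looking tiled Cholesky; each loop body is one schedulable task."""
    n = A.shape[0]
    nt = n // tile
    L = A.copy()
    T = lambda i, j: L[i*tile:(i+1)*tile, j*tile:(j+1)*tile]   # tile view
    for k in range(nt):
        T(k, k)[:] = np.linalg.cholesky(T(k, k))                # POTRF task
        for i in range(k + 1, nt):
            # TRSM task: T(i,k) <- T(i,k) * T(k,k)^{-T}
            T(i, k)[:] = solve_triangular(T(k, k), T(i, k).T, lower=True).T
        for i in range(k + 1, nt):
            for j in range(k + 1, i + 1):
                T(i, j)[:] -= T(i, k) @ T(j, k).T               # SYRK/GEMM update task
    return np.tril(L)

rng = np.random.default_rng(0)
M = rng.random((8, 8))
A = M @ M.T + 8 * np.eye(8)          # symmetric positive definite test matrix
L = tiled_cholesky(A, tile=2)
print(np.allclose(L @ L.T, A))
```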
Conformational Heterogeneity of Bax Helix 9 Dimer for Apoptotic Pore Formation
NASA Astrophysics Data System (ADS)
Liao, Chenyi; Zhang, Zhi; Kale, Justin; Andrews, David W.; Lin, Jialing; Li, Jianing
2016-07-01
Helix α9 of the Bax protein can dimerize in the mitochondrial outer membrane (MOM) and lead to apoptotic pores. However, it remains unclear how different conformations of the dimer contribute to pore formation on the molecular level. We have therefore investigated various conformational states of the α9 dimer in a MOM model, using computer simulations supplemented with site-specific mutagenesis and crosslinking of the α9 helices. Our data not only confirmed the critical membrane environment for α9 stability and dimerization, but also revealed the distinct lipid-binding preference of the dimer in different conformational states. In our proposed pathway, a crucial iso-parallel dimer that mediates the conformational transition was discovered computationally and validated experimentally. The corroborating evidence from simulations and experiments suggests that helix α9 assists Bax activation via dimer heterogeneity and interactions with specific MOM lipids, which eventually facilitate proteolipidic pore formation in apoptosis regulation.
Using Computing and Data Grids for Large-Scale Science and Engineering
NASA Technical Reports Server (NTRS)
Johnston, William E.
2001-01-01
We use the term "Grid" to refer to a software system that provides uniform and location independent access to geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. These emerging data and computing Grids promise to provide a highly capable and scalable environment for addressing large-scale science problems. We describe the requirements for science Grids, the resulting services and architecture of NASA's Information Power Grid (IPG) and DOE's Science Grid, and some of the scaling issues that have come up in their implementation.
Ubiquitous Green Computing Techniques for High Demand Applications in Smart Environments
Zapater, Marina; Sanchez, Cesar; Ayala, Jose L.; Moya, Jose M.; Risco-Martín, José L.
2012-01-01
Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, require constantly increasing high computational demands in order to process data and offer services to users. The nature of these applications implies the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time. PMID:23112621
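A minimal sketch of an energy-aware assignment in this spirit, with invented node parameters and a simple greedy rule (the paper's actual technique is application-aware and more elaborate), could look like this:

```python
def assign_tasks(tasks, nodes):
    """Greedy energy-aware assignment: each task goes to the feasible node with the
    lowest estimated energy cost (power_w * cpu_time); otherwise it falls back to
    the high-performance facility. Task demand is in MIPS-seconds."""
    plan = {}
    for task_id, demand in sorted(tasks.items(), key=lambda kv: -kv[1]):
        best, best_energy = None, float("inf")
        for node_id, node in nodes.items():
            if node["free_capacity"] < demand:
                continue                                   # node too loaded for this task
            energy = node["power_w"] * demand / node["speed_mips"]
            if energy < best_energy:
                best, best_energy = node_id, energy
        if best is None:
            best = "datacenter"                            # fall back to the HPC facility
        else:
            nodes[best]["free_capacity"] -= demand
        plan[task_id] = best
    return plan

# toy setup (assumed values): low-power WSN nodes vs. a high-power server
nodes = {
    "sensor_hub": {"speed_mips": 500, "power_w": 2.0, "free_capacity": 800},
    "gateway":    {"speed_mips": 2000, "power_w": 10.0, "free_capacity": 3000},
}
tasks = {"filter": 400, "aggregate": 900, "train_model": 5000}
print(assign_tasks(tasks, nodes))
```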
HERA: A New Platform for Embedding Agents in Heterogeneous Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Alonso, Ricardo S.; de Paz, Juan F.; García, Óscar; Gil, Óscar; González, Angélica
Ambient Intelligence (AmI) based systems require the development of innovative solutions that integrate distributed intelligent systems with context-aware technologies. In this sense, Multi-Agent Systems (MAS) and Wireless Sensor Networks (WSN) are two key technologies for developing distributed systems based on AmI scenarios. This paper presents the new HERA (Hardware-Embedded Reactive Agents) platform, which allows the use of dynamic and self-adaptable heterogeneous WSNs in which agents are embedded directly on the wireless nodes. This approach facilitates the inclusion of context-aware capabilities in AmI systems to gather data from their surrounding environments, achieving a higher level of ubiquitous and pervasive computing.
Community-driven computational biology with Debian Linux.
Möller, Steffen; Krabbenhöft, Hajo Nils; Tille, Andreas; Paleino, David; Williams, Alan; Wolstencroft, Katy; Goble, Carole; Holland, Richard; Belhachemi, Dominique; Plessy, Charles
2010-12-21
The Open Source movement and its technologies are popular in the bioinformatics community because they provide freely available tools and resources for research. In order to feed the steady demand for updates on software and associated data, a service infrastructure is required for sharing and providing these tools to heterogeneous computing environments. The Debian Med initiative provides ready and coherent software packages for medical informatics and bioinformatics. These packages can be used together in Taverna workflows via the UseCase plugin to manage execution on local or remote machines. If such packages are available in cloud computing environments, the underlying hardware and the analysis pipelines can be shared along with the software. Debian Med closes the gap between developers and users. It provides a simple method for offering new releases of software and data resources, thus provisioning a local infrastructure for computational biology. For geographically distributed teams it can ensure they are working on the same versions of tools, in the same conditions. This contributes to the world-wide networking of researchers.
A survey of CPU-GPU heterogeneous computing techniques
Mittal, Sparsh; Vetter, Jeffrey S.
2015-07-04
As both CPUs and GPUs become employed in a wide range of applications, it has been acknowledged that both of these processing units (PUs) have their unique features and strengths and hence, CPU-GPU collaboration is inevitable to achieve high-performance computing. This has motivated a significant amount of research on heterogeneous computing techniques, along with the design of CPU-GPU fused chips and petascale heterogeneous supercomputers. In this paper, we survey heterogeneous computing techniques (HCTs) such as workload partitioning which enable utilizing both CPU and GPU to improve performance and/or energy efficiency. We review heterogeneous computing approaches at the runtime, algorithm, programming, compiler and application level. Further, we review both discrete and fused CPU-GPU systems and discuss benchmark suites designed for evaluating heterogeneous computing systems (HCSs). Furthermore, we believe that this paper will provide insights into the working and scope of applications of HCTs to researchers and motivate them to further harness the computational powers of CPUs and GPUs to achieve the goal of exascale performance.
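One of the simplest workload-partitioning HCTs covered by such surveys is a static split proportional to measured device throughput; a hedged sketch with placeholder throughput numbers:

```python
def split_workload(n_items, throughput):
    """Split n_items across devices proportionally to measured throughput
    (items/second), so all devices are expected to finish at about the same time."""
    total = sum(throughput.values())
    shares = {dev: int(n_items * tp / total) for dev, tp in throughput.items()}
    # hand any rounding remainder to the fastest device
    fastest = max(throughput, key=throughput.get)
    shares[fastest] += n_items - sum(shares.values())
    return shares

# placeholder throughputs, e.g. from a short calibration run
print(split_workload(1_000_000, {"cpu": 1.2e6, "gpu": 8.5e6}))
```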
Neuroimaging Study Designs, Computational Analyses and Data Provenance Using the LONI Pipeline
Dinov, Ivo; Lozev, Kamen; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Pierce, Jonathan; Zamanyan, Alen; Chakrapani, Shruthi; Van Horn, John; Parker, D. Stott; Magsipoc, Rico; Leung, Kelvin; Gutman, Boris; Woods, Roger; Toga, Arthur
2010-01-01
Modern computational neuroscience employs diverse software tools and multidisciplinary expertise to analyze heterogeneous brain data. The classical problems of gathering meaningful data, fitting specific models, and discovering appropriate analysis and visualization tools give way to a new class of computational challenges—management of large and incongruous data, integration and interoperability of computational resources, and data provenance. We designed, implemented and validated a new paradigm for addressing these challenges in the neuroimaging field. Our solution is based on the LONI Pipeline environment [3], [4], a graphical workflow environment for constructing and executing complex data processing protocols. We developed study-design, database and visual language programming functionalities within the LONI Pipeline that enable the construction of complete, elaborate and robust graphical workflows for analyzing neuroimaging and other data. These workflows facilitate open sharing and communication of data and metadata, concrete processing protocols, result validation, and study replication among different investigators and research groups. The LONI Pipeline features include distributed grid-enabled infrastructure, virtualized execution environment, efficient integration, data provenance, validation and distribution of new computational tools, automated data format conversion, and an intuitive graphical user interface. We demonstrate the new LONI Pipeline features using large scale neuroimaging studies based on data from the International Consortium for Brain Mapping [5] and the Alzheimer's Disease Neuroimaging Initiative [6]. User guides, forums, instructions and downloads of the LONI Pipeline environment are available at http://pipeline.loni.ucla.edu. PMID:20927408
Abdulhamid, Shafi’i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid
2016-01-01
Cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on-demand via a front-end interface. Scientific applications scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of applications scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific applications scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produced a remarkable performance improvement in makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of the response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution that is suitable for scientific applications task execution in the Cloud Computing environment than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques. PMID:27384239
NASA Astrophysics Data System (ADS)
Lyon, Ellen Beth
1998-09-01
This research project investigated the influence of homogeneous (like-ability) review pairs coupled with heterogeneous (mixed-ability) cooperative learning groups using computer-assisted instruction (CAI) on academic achievement and attitude toward science in eighth grade Earth science students. Subjects were placed into academic quartiles (Hi, Med-Hi, Med-Lo, and Lo) based on achievement. Cooperative learning groups of four (one student from each academic quartile) were formed in all classes, within which students completed CAI through a software package entitled Geoscience Education Through Interactive Technology, or GETIT™. Each day, when computer activities were completed, students in the experimental classes were divided into homogeneous review pairs to review their work. The students in the control classes were divided into heterogeneous review pairs to review their work. The effects of the experimental treatment were measured by pretest, posttest, and delayed posttest measures, by pre- and post-student attitude scales, and by evaluation of amendments students made to their work during the time spent in review pairs. Results showed that student achievement was not significantly influenced by placement in homogeneous or heterogeneous review pairs, regardless of academic quartile assignment. Student attitude toward science as a school subject did not change significantly due to experimental treatment. Achievement retention of students in experimental and control groups within each quartile showed no significant difference. Notebook amendment patterns showed some significant differences in a few categories. For the Hi quartile, there were significant differences in the numbers of deletion amendments and substitution amendments between the experimental and the control group. In both cases, subjects in the experimental group (homogeneous review pairs) made a greater number of amendments than those in the control group (heterogeneous review pairs). For the Lo quartile, there was a significant difference in the number of grammar/usage/mechanics (GUM) amendments between the experimental and control groups. The experimental group made far more GUM amendments than the control group. This research highlights the fact that many factors may influence a successful learning environment in which CAI is successfully implemented. Educational research projects should be designed and used to help teachers create learning environments in which CAI is maximized.
Lima, Jakelyne; Cerdeira, Louise Teixeira; Bol, Erick; Schneider, Maria Paula Cruz; Silva, Artur; Azevedo, Vasco; Abelém, Antônio Jorge Gomes
2012-01-01
Improvements in genome sequencing techniques have resulted in the generation of huge volumes of data. As a consequence of this progress, the genome assembly stage demands even more computational power, since the incoming sequence files contain large amounts of data. To speed up the process, it is often necessary to distribute the workload among a group of machines. However, this requires hardware and software solutions specially configured for this purpose. Grid computing tries to simplify this process of aggregating resources, but does not always offer the best possible performance due to the heterogeneity and decentralized management of its resources. Thus, it is necessary to develop software that takes these peculiarities into account. To achieve this purpose, we developed an algorithm aimed at optimizing the functionality of the de novo assembly software ABySS for operation in grids. We ran ABySS with and without the algorithm we developed in the grid simulator SimGrid. Tests showed that our algorithm is viable, flexible, and scalable even in a heterogeneous environment, and that it improved genome assembly time in computational grids without changing the assembly quality. PMID:22461785
A heterogeneous system based on GPU and multi-core CPU for real-time fluid and rigid body simulation
NASA Astrophysics Data System (ADS)
da Silva Junior, José Ricardo; Gonzalez Clua, Esteban W.; Montenegro, Anselmo; Lage, Marcos; Dreux, Marcelo de Andrade; Joselli, Mark; Pagliosa, Paulo A.; Kuryla, Christine Lucille
2012-03-01
Computational fluid dynamics in simulation has become an important field not only for physics and engineering areas but also for simulation, computer graphics, virtual reality and even video game development. Many efficient models have been developed over the years, but when many contact interactions must be processed, most models present difficulties or cannot achieve real-time results when executed. The advent of parallel computing has enabled the development of many strategies for accelerating the simulations. Our work proposes a new system which uses some successful algorithms already proposed, as well as a data structure organisation based on a heterogeneous architecture using CPUs and GPUs, in order to process the simulation of the interaction of fluids and rigid bodies. This successfully results in a two-way interaction between them and their surrounding objects. As far as we know, this is the first work that presents a computational collaborative environment which makes use of two different paradigms of hardware architecture for this specific kind of problem. Since our method achieves real-time results, it is suitable for virtual reality, simulation and video game fluid simulation problems.
Benkner, Siegfried; Arbona, Antonio; Berti, Guntram; Chiarini, Alessandro; Dunlop, Robert; Engelbrecht, Gerhard; Frangi, Alejandro F; Friedrich, Christoph M; Hanser, Susanne; Hasselmeyer, Peer; Hose, Rod D; Iavindrasana, Jimison; Köhler, Martin; Iacono, Luigi Lo; Lonsdale, Guy; Meyer, Rodolphe; Moore, Bob; Rajasekaran, Hariharan; Summers, Paul E; Wöhrer, Alexander; Wood, Steven
2010-11-01
The increasing volume of data describing human disease processes and the growing complexity of understanding, managing, and sharing such data presents a huge challenge for clinicians and medical researchers. This paper presents the @neurIST system, which provides an infrastructure for biomedical research while aiding clinical care, by bringing together heterogeneous data and complex processing and computing services. Although @neurIST targets the investigation and treatment of cerebral aneurysms, the system's architecture is generic enough that it could be adapted to the treatment of other diseases. Innovations in @neurIST include confining the patient data pertaining to aneurysms inside a single environment that offers clinicians the tools to analyze and interpret patient data and make use of knowledge-based guidance in planning their treatment. Medical researchers gain access to a critical mass of aneurysm related data due to the system's ability to federate distributed information sources. A semantically mediated grid infrastructure ensures that both clinicians and researchers are able to seamlessly access and work on data that is distributed across multiple sites in a secure way in addition to providing computing resources on demand for performing computationally intensive simulations for treatment planning and research.
EpiK: A Knowledge Base for Epidemiological Modeling and Analytics of Infectious Diseases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hasan, S. M. Shamimul; Fox, Edward A.; Bisset, Keith
Computational epidemiology seeks to develop computational methods to study the distribution and determinants of health-related states or events (including disease), and the application of this study to the control of diseases and other health problems. Recent advances in computing and data sciences have led to the development of innovative modeling environments to support this important goal. The datasets used to drive the dynamic models, as well as the data produced by these models, present unique challenges owing to their size, heterogeneity and diversity. These datasets form the basis of effective and easy to use decision support and analytical environments. As a result, it is important to develop scalable data management systems to store, manage and integrate these datasets. In this paper, we develop EpiK—a knowledge base that facilitates the development of decision support and analytical environments to support epidemic science. An important goal is to develop a framework that links the input as well as output datasets to facilitate effective spatio-temporal and social reasoning that is critical in planning and intervention analysis before and during an epidemic. The data management framework links modeling workflow data and its metadata using a controlled vocabulary. The metadata captures information about storage, the mapping between the linked model and the physical layout, and relationships to support services. EpiK is designed to support agent-based modeling and analytics frameworks—aggregate models can be seen as special cases and are thus supported. We use semantic web technologies to create a representation of the datasets that encapsulates both the location and the schema heterogeneity. The choice of RDF as a representation language is motivated by the diversity and growth of the datasets that need to be integrated. A query bank is developed—the queries capture a broad range of questions that can be posed and answered during a typical case study pertaining to disease outbreaks. The queries are constructed using the SPARQL Protocol and RDF Query Language (SPARQL) over EpiK. EpiK can hide schema and location heterogeneity while efficiently supporting queries that span the computational epidemiology modeling pipeline: from model construction to simulation output. As a result, we show that the performance of benchmark queries varies significantly with respect to the choice of hardware underlying the database and resource description framework (RDF) engine.
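The paper's query bank is not reproduced here; as an illustration of posing a SPARQL query over an RDF store from Python, an rdflib sketch might look like the following, where the epidemiology vocabulary, predicate names, and input file are invented for the example:

```python
from rdflib import Graph

g = Graph()
# Load a (hypothetical) RDF serialization of simulation output metadata.
g.parse("epi_simulation_output.ttl", format="turtle")

# Hypothetical vocabulary: epi:region, epi:day, epi:infectedCount
query = """
PREFIX epi: <http://example.org/epi#>
SELECT ?region (MAX(?count) AS ?peak)
WHERE {
  ?obs epi:region ?region ;
       epi:day ?day ;
       epi:infectedCount ?count .
}
GROUP BY ?region
ORDER BY DESC(?peak)
"""

for row in g.query(query):
    print(row.region, row.peak)
```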
Efficient Use of Distributed Systems for Scientific Applications
NASA Technical Reports Server (NTRS)
Taylor, Valerie; Chen, Jian; Canfield, Thomas; Richard, Jacques
2000-01-01
Distributed computing has been regarded as the future of high performance computing. Nationwide high speed networks such as vBNS are becoming widely available to interconnect high-speed computers, virtual environments, scientific instruments and large data sets. One of the major issues to be addressed with distributed systems is the development of computational tools that facilitate the efficient execution of parallel applications on such systems. These tools must exploit the heterogeneous resources (networks and compute nodes) in distributed systems. This paper presents a tool, called PART, which addresses this issue for mesh partitioning. PART takes advantage of the following heterogeneous system features: (1) processor speed; (2) number of processors; (3) local network performance; and (4) wide area network performance. Further, different finite element applications under consideration may have different computational complexities, different communication patterns, and different element types, which also must be taken into consideration when partitioning. PART uses parallel simulated annealing to partition the domain, taking into consideration network and processor heterogeneity. The results of using PART for an explicit finite element application executing on two IBM SPs (located at Argonne National Laboratory and the San Diego Supercomputer Center) indicate an increase in efficiency by up to 36% as compared to METIS, a widely used mesh partitioning tool. The input to METIS was modified to take into consideration heterogeneous processor performance; METIS does not take into consideration heterogeneous networks. The execution times for these applications were reduced by up to 30% as compared to METIS. These results are given in Figure 1 for four irregular meshes with the number of elements ranging from 30,269 for the Barth5 mesh to 11,451 for the Barth4 mesh. Future work with PART entails using the tool with an integrated application requiring distributed systems. In particular, this application, illustrated in the document, entails an integration of finite element and fluid dynamic simulations to address the cooling of turbine blades in a gas turbine engine design. It is not uncommon to encounter high-temperature, film-cooled turbine airfoils with millions of degrees of freedom. This results from the complexity of the various components of the airfoils, which requires fine-grained meshing for accuracy. Additional information is contained in the original.
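PART itself is not shown; a minimal sketch of heterogeneity-aware partitioning by simulated annealing, with an assumed cost of slowest-partition compute time plus a cut-edge communication penalty (the weights and move scheme are illustrative only), could look like this:

```python
import math
import random

def partition_cost(assign, work, speed, adj, comm_penalty=1.0):
    """Estimated runtime: slowest partition's compute time plus a cut-edge penalty."""
    load = [0.0] * len(speed)
    for elem, part in enumerate(assign):
        load[part] += work[elem] / speed[part]        # heterogeneous processor speeds
    cut = sum(1 for u, v in adj if assign[u] != assign[v])
    return max(load) + comm_penalty * cut

def anneal(work, speed, adj, steps=20000, t0=5.0):
    n, p = len(work), len(speed)
    assign = [random.randrange(p) for _ in range(n)]
    cost = partition_cost(assign, work, speed, adj)
    for step in range(steps):
        temp = t0 * (1 - step / steps) + 1e-6
        elem, new_part = random.randrange(n), random.randrange(p)
        old_part = assign[elem]
        assign[elem] = new_part
        new_cost = partition_cost(assign, work, speed, adj)
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost                           # accept the move
        else:
            assign[elem] = old_part                   # reject and roll back
    return assign, cost

# toy mesh: a chain of 40 elements, two processors of unequal speed
adj = [(i, i + 1) for i in range(39)]
assign, cost = anneal([1.0] * 40, [1.0, 3.0], adj)
print("cost:", round(cost, 2), "partition sizes:", [assign.count(p) for p in range(2)])
```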
NASA Astrophysics Data System (ADS)
Negrut, Dan; Lamb, David; Gorsich, David
2011-06-01
This paper describes a software infrastructure made up of tools and libraries designed to assist developers in implementing computational dynamics applications running on heterogeneous and distributed computing environments. Together, these tools and libraries compose a so-called Heterogeneous Computing Template (HCT). The heterogeneous and distributed computing hardware infrastructure is assumed herein to be made up of a combination of CPUs and Graphics Processing Units (GPUs). The computational dynamics applications targeted to execute on such a hardware topology include many-body dynamics, smoothed-particle hydrodynamics (SPH) fluid simulation, and fluid-solid interaction analysis. The underlying theme of the solution approach embraced by HCT is that of partitioning the domain of interest into a number of subdomains that are each managed by a separate core/accelerator (CPU/GPU) pair. Five components at the core of HCT enable the envisioned distributed computing approach to large-scale dynamical system simulation: (a) the ability to partition the problem according to the one-to-one mapping; i.e., spatial subdivision, discussed above (pre-processing); (b) a protocol for passing data between any two co-processors; (c) algorithms for element proximity computation; and (d) the ability to carry out post-processing in a distributed fashion. In this contribution, components (a) and (b) of the HCT are demonstrated via the example of the Discrete Element Method (DEM) for rigid body dynamics with friction and contact. The collision detection task required in frictional-contact dynamics (task (c) above) is shown to benefit from a two-order-of-magnitude gain in efficiency on the GPU when compared to traditional sequential implementations. Note: Reference herein to any specific commercial products, process, or service by trade name, trademark, manufacturer, or otherwise, does not imply its endorsement, recommendation, or favoring by the United States Army. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Army, and shall not be used for advertising or product endorsement purposes.
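As a toy illustration of component (a), the spatial subdivision with a one-to-one mapping of subdomains to core/accelerator pairs (the grid layout and device labels are assumptions, not the HCT API):

```python
import random

def build_subdomains(positions, n_x, n_y, devices):
    """Partition a 2D domain into n_x * n_y rectangular subdomains and map each
    subdomain one-to-one onto a core/accelerator (CPU/GPU) pair."""
    assert len(devices) == n_x * n_y, "one-to-one mapping needs one device per subdomain"
    xs = [p[0] for p in positions]
    ys = [p[1] for p in positions]
    x0, x1, y0, y1 = min(xs), max(xs), min(ys), max(ys)
    subdomains = {i: [] for i in range(n_x * n_y)}
    for idx, (x, y) in enumerate(positions):
        ix = min(int((x - x0) / (x1 - x0 + 1e-12) * n_x), n_x - 1)
        iy = min(int((y - y0) / (y1 - y0 + 1e-12) * n_y), n_y - 1)
        subdomains[iy * n_x + ix].append(idx)          # body (particle/element) index
    owner = {i: devices[i] for i in subdomains}        # one-to-one subdomain -> device
    return subdomains, owner

positions = [(random.random(), random.random()) for _ in range(1000)]
subs, owner = build_subdomains(positions, 2, 2,
                               ["cpu0/gpu0", "cpu1/gpu1", "cpu2/gpu2", "cpu3/gpu3"])
print({i: (len(subs[i]), owner[i]) for i in subs})
```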
Integration of a CAD System Into an MDO Framework
NASA Technical Reports Server (NTRS)
Townsend, J. C.; Samareh, J. A.; Weston, R. P.; Zorumski, W. E.
1998-01-01
NASA Langley has developed a heterogeneous distributed computing environment, called the Framework for Inter-disciplinary Design Optimization, or FIDO. Its purpose has been to demonstrate framework technical feasibility and usefulness for optimizing the preliminary design of complex systems and to provide a working environment for testing optimization schemes. Its initial implementation has been for a simplified model of preliminary design of a high-speed civil transport. Upgrades being considered for the FIDO system include a more complete geometry description, required by high-fidelity aerodynamics and structures codes and based on a commercial Computer Aided Design (CAD) system. This report presents the philosophy behind some of the decisions that have shaped the FIDO system and gives a brief case study of the problems and successes encountered in integrating a CAD system into the FIDO framework.
Hybrid Cloud Computing Environment for EarthCube and Geoscience Community
NASA Astrophysics Data System (ADS)
Yang, C. P.; Qin, H.
2016-12-01
The NSF EarthCube Integration and Test Environment (ECITE) has built a hybrid cloud computing environment that provides cloud resources from private cloud environments using the cloud system software OpenStack and Eucalyptus, and also manages a public cloud, Amazon Web Services, allowing resource synchronizing and bursting between the private and public clouds. On the ECITE hybrid cloud platform, the EarthCube and geoscience community can deploy and manage applications by using base virtual machine images or customized virtual machines, analyze big datasets by using virtual clusters, and monitor virtual resource usage on the cloud in real time. Currently, a number of EarthCube projects have deployed or started migrating their projects to this platform, such as CHORDS, BCube, CINERGI, OntoSoft, and some other EarthCube building blocks. To accomplish the deployment or migration, the administrator of the ECITE hybrid cloud platform prepares the specific needs (e.g., images, port numbers, usable cloud capacity) of each project in advance, based on communications between ECITE and the participant projects; the scientists or IT technicians in those projects then launch one or more virtual machines, access the virtual machine(s) to set up the computing environment if need be, and migrate their code, documents, or data without worrying about the heterogeneity in structure and operation among different cloud platforms.
A weighted U statistic for association analyses considering genetic heterogeneity.
Wei, Changshuai; Elston, Robert C; Lu, Qing
2016-07-20
Converging evidence suggests that common complex diseases with the same or similar clinical manifestations could have different underlying genetic etiologies. While current research interests have shifted toward uncovering rare variants and structural variations predisposing to human diseases, the impact of heterogeneity in genetic studies of complex diseases has been largely overlooked. Most of the existing statistical methods assume the disease under investigation has a homogeneous genetic effect and could, therefore, have low power if the disease undergoes heterogeneous pathophysiological and etiological processes. In this paper, we propose a heterogeneity-weighted U (HWU) method for association analyses considering genetic heterogeneity. HWU can be applied to various types of phenotypes (e.g., binary and continuous) and is computationally efficient for high-dimensional genetic data. Through simulations, we showed the advantage of HWU when the underlying genetic etiology of a disease was heterogeneous, as well as the robustness of HWU against different model assumptions (e.g., phenotype distributions). Using HWU, we conducted a genome-wide analysis of nicotine dependence from the Study of Addiction: Genetics and Environments dataset. The genome-wide analysis of nearly one million genetic markers took 7 hours, identifying heterogeneous effects of two new genes (i.e., CYP3A5 and IKBKB) on nicotine dependence. Copyright © 2016 John Wiley & Sons, Ltd.
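The abstract does not state the form of the statistic; for orientation, the generic shape of a weighted U statistic, of which HWU is a particular instance with weights built from similarity in genotype and covariate space, is roughly:

```latex
% Generic weighted U statistic over n subjects with phenotypes y_i:
% w_{ij} is a pairwise weight (e.g., built from genetic/covariate similarity)
% and \phi(\cdot,\cdot) is a symmetric kernel; the exact HWU construction
% of w_{ij} and \phi is given in the paper.
U_w \;=\; \frac{1}{\sum_{1 \le i \ne j \le n} w_{ij}}
          \sum_{1 \le i \ne j \le n} w_{ij}\, \phi(y_i, y_j)
```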
NASA Astrophysics Data System (ADS)
DeBeer, Chris M.; Pomeroy, John W.
2017-10-01
The spatial heterogeneity of mountain snow cover and ablation is important in controlling patterns of snow cover depletion (SCD), meltwater production, and runoff, yet is not well-represented in most large-scale hydrological models and land surface schemes. Analyses were conducted in this study to examine the influence of various representations of snow cover and melt energy heterogeneity on both simulated SCD and stream discharge from a small alpine basin in the Canadian Rocky Mountains. Simulations were performed using the Cold Regions Hydrological Model (CRHM), where point-scale snowmelt computations were made using a snowpack energy balance formulation and applied to spatial frequency distributions of snow water equivalent (SWE) on individual slope-, aspect-, and landcover-based hydrological response units (HRUs) in the basin. Hydrological routines were added to represent the vertical and lateral transfers of water through the basin and channel system. From previous studies it is understood that the heterogeneity of late winter SWE is a primary control on patterns of SCD. The analyses here showed that spatial variation in applied melt energy, mainly due to differences in net radiation, has an important influence on SCD at multiple scales and basin discharge, and cannot be neglected without serious error in the prediction of these variables. A single basin SWE distribution using the basin-wide mean SWE and coefficient of variation (CV; standard deviation/mean) was found to represent the fine-scale spatial heterogeneity of SWE sufficiently well. Simulations that accounted for differences in mean SWE among HRUs but neglected the sub-HRU heterogeneity of SWE were found to yield similar discharge results as simulations that included this heterogeneity, while SCD was poorly represented, even at the basin level. Finally, applying point-scale snowmelt computations based on a single SWE depth for each HRU (thereby neglecting spatial differences in internal snowpack energetics over the distributions) was found to yield similar SCD and discharge results as simulations that resolved internal energy differences. Spatial/internal snowpack melt energy effects are more pronounced at times earlier in spring before the main period of snowmelt and SCD, as shown in previously published work. The paper discusses the importance of these findings as they apply to the warranted complexity of snowmelt process simulation in cold mountain environments, and shows how the end-of-winter SWE distribution represents an effective means of resolving snow cover heterogeneity at multiple scales for modelling, even in steep and complex terrain.
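As a sketch of how a basin-wide mean SWE and CV can be turned into a snow-cover depletion curve under applied melt, assuming a lognormal SWE distribution (a common choice, not necessarily CRHM's internal formulation):

```python
import numpy as np
from scipy import stats

def scd_curve(mean_swe, cv, melt_depths):
    """Snow-covered fraction after cumulative melt M, assuming pre-melt SWE
    follows a lognormal distribution: SCA(M) = P(SWE > M)."""
    sigma2 = np.log(1.0 + cv**2)                    # lognormal shape from CV
    mu = np.log(mean_swe) - 0.5 * sigma2            # lognormal scale from mean
    dist = stats.lognorm(s=np.sqrt(sigma2), scale=np.exp(mu))
    return 1.0 - dist.cdf(melt_depths)

melt = np.linspace(0, 600, 7)                       # cumulative melt (mm)
print(np.round(scd_curve(mean_swe=300.0, cv=0.6, melt_depths=melt), 3))
```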
Fick's second law transformed: one path to cloaking in mass diffusion.
Guenneau, S; Puvirajesinghe, T M
2013-06-06
Here, we adapt the concept of transformational thermodynamics, whereby the flux of temperature is controlled via anisotropic heterogeneous diffusivity, for the diffusion and transport of mass concentration. The n-dimensional, time-dependent, anisotropic heterogeneous Fick's equation is considered, which is a parabolic partial differential equation also applicable to heat diffusion, when convection occurs, for example, in fluids. This theory is illustrated with finite-element computations for a liposome particle surrounded by a cylindrical multi-layered cloak in a water-based environment, and for a spherical multi-layered cloak consisting of layers of fluid with an isotropic homogeneous diffusivity, deduced from an effective medium approach. Initial potential applications could be sought in bioengineering.
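For orientation, the governing equation referred to, and the standard transformation-media rule that keeps it form-invariant under a coordinate change, can be written as follows (this is the textbook rule stated as a reminder, not a quotation of the paper):

```latex
% Time-dependent, anisotropic, heterogeneous diffusion (Fick's second law)
% for a concentration field c(x, t) with diffusivity tensor D(x):
\frac{\partial c}{\partial t} \;=\; \nabla \cdot \bigl( D(\mathbf{x})\, \nabla c \bigr).

% Under a coordinate change x -> x'(x) with Jacobian J = \partial x'/\partial x,
% the equation keeps its form with transformed material parameters:
\frac{1}{\det J}\,\frac{\partial c}{\partial t}
  \;=\; \nabla' \cdot \Bigl( \frac{J\, D\, J^{\mathsf{T}}}{\det J}\, \nabla' c \Bigr).
```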
Precision Medicine and PET/Computed Tomography: Challenges and Implementation.
Subramaniam, Rathan M
2017-01-01
Precision Medicine is about selecting the right therapy for the right patient, at the right time, specific to the molecular targets expressed by disease or tumors, in the context of patient's environment and lifestyle. Some of the challenges for delivery of precision medicine in oncology include biomarkers for patient selection for enrichment-precision diagnostics, mapping out tumor heterogeneity that contributes to therapy failures, and early therapy assessment to identify resistance to therapies. PET/computed tomography offers solutions in these important areas of challenges and facilitates implementation of precision medicine. Copyright © 2016 Elsevier Inc. All rights reserved.
Heterogeneous Embedded Real-Time Systems Environment
2003-12-01
AFRL-IF-RS-TR-2003-290, Final Technical Report, December 2003: Heterogeneous Embedded Real-Time Systems Environment. Authors: Cosmo Castellano and James Graham. Contract number: F30602-97-C-0259.
Exascale computing and what it means for shock physics
NASA Astrophysics Data System (ADS)
Germann, Timothy
2015-06-01
The U.S. Department of Energy is preparing to launch an Exascale Computing Initiative, to address the myriad challenges required to deploy and effectively utilize an exascale-class supercomputer (i.e., one capable of performing 10^18 operations per second) in the 2023 timeframe. Since physical (power dissipation) requirements limit clock rates to at most a few GHz, this will necessitate the coordination of on the order of a billion concurrent operations, requiring sophisticated system and application software, and underlying mathematical algorithms, that may differ radically from traditional approaches. Even at the smaller workstation or cluster level of computation, the massive concurrency and heterogeneity within each processor will impact computational scientists. Through the multi-institutional, multi-disciplinary Exascale Co-design Center for Materials in Extreme Environments (ExMatEx), we have initiated an early and deep collaboration between domain (computational materials) scientists, applied mathematicians, computer scientists, and hardware architects, in order to establish the relationships between algorithms, software stacks, and architectures needed to enable exascale-ready materials science application codes within the next decade. In my talk, I will discuss these challenges, and what it will mean for exascale-era electronic structure, molecular dynamics, and engineering-scale simulations of shock-compressed condensed matter. In particular, we anticipate that the emerging hierarchical, heterogeneous architectures can be exploited to achieve higher physical fidelity simulations using adaptive physics refinement. This work is supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research.
Devi, D Chitra; Uthariaraj, V Rhymend
2016-01-01
Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of nonpreemptive tasks in the cloud computing environment is an irrecoverable restraint, and hence they have to be assigned to the most appropriate VMs at the initial placement itself. In practice, the arriving jobs consist of multiple interdependent tasks, and they may execute their independent tasks in multiple VMs or in multiple cores of the same VM. Also, the jobs arrive during the run time of the server in varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating the tasks to appropriate resources by static or dynamic scheduling to make cloud computing more efficient and thus improve user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. The performance of the proposed algorithm is studied by comparing it with the existing methods.
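A minimal sketch of placing interdependent tasks on VMs by estimated completion time, using task lengths in million instructions and VM capabilities in MIPS (the values and the dispatch rule are assumptions, not the proposed algorithm), might look like this:

```python
def dispatch(jobs, vms):
    """Place each task on the VM with the smallest estimated completion time,
    computed from the task length (million instructions) and the VM's processing
    capability (MIPS), respecting intra-job task dependencies."""
    ready_time = {vm: 0.0 for vm in vms}          # when each VM next becomes free
    placement = {}
    for job, tasks in jobs.items():
        finish = {}
        for task, (length_mi, deps) in tasks.items():
            earliest = max((finish[d] for d in deps), default=0.0)
            best_vm = min(vms, key=lambda v: max(ready_time[v], earliest)
                                             + length_mi / vms[v])
            start = max(ready_time[best_vm], earliest)
            finish[task] = start + length_mi / vms[best_vm]
            ready_time[best_vm] = finish[task]
            placement[(job, task)] = best_vm
    return placement, max(ready_time.values())    # placement and makespan

vms = {"vm_small": 1000.0, "vm_large": 4000.0}    # assumed MIPS ratings
jobs = {"job1": {"t1": (8000, []), "t2": (2000, ["t1"]), "t3": (2000, ["t1"])}}
print(dispatch(jobs, vms))
```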
Community-driven computational biology with Debian Linux
2010-01-01
Background The Open Source movement and its technologies are popular in the bioinformatics community because they provide freely available tools and resources for research. In order to feed the steady demand for updates on software and associated data, a service infrastructure is required for sharing and providing these tools to heterogeneous computing environments. Results The Debian Med initiative provides ready and coherent software packages for medical informatics and bioinformatics. These packages can be used together in Taverna workflows via the UseCase plugin to manage execution on local or remote machines. If such packages are available in cloud computing environments, the underlying hardware and the analysis pipelines can be shared along with the software. Conclusions Debian Med closes the gap between developers and users. It provides a simple method for offering new releases of software and data resources, thus provisioning a local infrastructure for computational biology. For geographically distributed teams it can ensure they are working on the same versions of tools, in the same conditions. This contributes to the world-wide networking of researchers. PMID:21210984
Applications of the pipeline environment for visual informatics and genomics computations
2011-01-01
Background Contemporary informatics and genomics research require efficient, flexible and robust management of large heterogeneous data, advanced computational tools, powerful visualization, reliable hardware infrastructure, interoperability of computational resources, and detailed data and analysis-protocol provenance. The Pipeline is a client-server distributed computational environment that facilitates the visual graphical construction, execution, monitoring, validation and dissemination of advanced data analysis protocols. Results This paper reports on the applications of the LONI Pipeline environment to address two informatics challenges - graphical management of diverse genomics tools, and the interoperability of informatics software. Specifically, this manuscript presents the concrete details of deploying general informatics suites and individual software tools to new hardware infrastructures, the design, validation and execution of new visual analysis protocols via the Pipeline graphical interface, and integration of diverse informatics tools via the Pipeline eXtensible Markup Language syntax. We demonstrate each of these processes using several established informatics packages (e.g., miBLAST, EMBOSS, mrFAST, GWASS, MAQ, SAMtools, Bowtie) for basic local sequence alignment and search, molecular biology data analysis, and genome-wide association studies. These examples demonstrate the power of the Pipeline graphical workflow environment to enable integration of bioinformatics resources which provide a well-defined syntax for dynamic specification of the input/output parameters and the run-time execution controls. Conclusions The LONI Pipeline environment http://pipeline.loni.ucla.edu provides a flexible graphical infrastructure for efficient biomedical computing and distributed informatics research. The interactive Pipeline resource manager enables the utilization and interoperability of diverse types of informatics resources. The Pipeline client-server model provides computational power to a broad spectrum of informatics investigators - experienced developers and novice users, users with or without access to advanced computational resources (e.g., Grid, data), as well as basic and translational scientists. The open development, validation and dissemination of computational networks (pipeline workflows) facilitates the sharing of knowledge, tools, protocols and best practices, and enables the unbiased validation and replication of scientific findings by the entire community. PMID:21791102
Latency Hiding in Dynamic Partitioning and Load Balancing of Grid Computing Applications
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak
2001-01-01
The Information Power Grid (IPG) concept developed by NASA is aimed to provide a metacomputing platform for large-scale distributed computations, by hiding the intricacies of a highly heterogeneous environment and yet maintaining adequate security. In this paper, we propose a latency-tolerant partitioning scheme that dynamically balances processor workloads on the IPG, and minimizes data movement and runtime communication. By simulating an unsteady adaptive mesh application on a wide area network, we study the performance of our load balancer under the Globus environment. The number of IPG nodes, the number of processors per node, and the interconnect speeds are parameterized to derive conditions under which the IPG would be suitable for parallel distributed processing of such applications. Experimental results demonstrate that effective solutions are achieved when the IPG nodes are connected by a high-speed asynchronous interconnection network.
Enabling Computational Nanotechnology through JavaGenes in a Cycle Scavenging Environment
NASA Technical Reports Server (NTRS)
Globus, Al; Menon, Madhu; Srivastava, Deepak; Biegel, Bryan A. (Technical Monitor)
2002-01-01
A genetic algorithm procedure is developed and implemented for fitting parameters for many-body inter-atomic force field functions for simulating nanotechnology atomistic applications using portable Java on cycle-scavenged heterogeneous workstations. Given a physics-based analytic functional form for the force field, correlated parameters in a multi-dimensional environment are typically chosen to fit properties given either by experiments and/or by higher accuracy quantum mechanical simulations. The implementation automates this tedious procedure using an evolutionary computing algorithm operating on hundreds of cycle-scavenged computers. As a proof of concept, we demonstrate the procedure for evaluating the Stillinger-Weber (S-W) potential by (a) reproducing the published parameters for Si using S-W energies in the fitness function, and (b) evolving a "new" set of parameters using semi-empirical tight-binding energies in the fitness function. The "new" parameters are significantly better suited for Si cluster energies and forces as compared to even the published S-W potential.
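As a rough sketch of the evolutionary fitting procedure described above (this is not the JavaGenes code, and a simple Lennard-Jones form with synthetic reference energies stands in for the Stillinger-Weber potential), a minimal genetic algorithm in Python might look like the following.

```python
import random

# Minimal genetic-algorithm sketch for force-field parameter fitting.
# The Lennard-Jones form and reference data are illustrative stand-ins;
# JavaGenes fits Stillinger-Weber parameters against S-W or tight-binding energies.
def lj_energy(r, eps, sigma):
    return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

REF = [(r, lj_energy(r, 1.0, 1.0)) for r in (0.95, 1.0, 1.1, 1.3, 1.6)]

def fitness(params):
    eps, sigma = params
    return -sum((lj_energy(r, eps, sigma) - e) ** 2 for r, e in REF)

def evolve(pop_size=50, generations=200):
    pop = [(random.uniform(0.1, 2.0), random.uniform(0.5, 2.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2.0 for x, y in zip(a, b)]          # crossover
            child = [g * random.gauss(1.0, 0.05) for g in child]   # mutation
            children.append(tuple(child))
        pop = parents + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    print("best (eps, sigma):", evolve())
```

In the cycle-scavenging setting, each fitness evaluation would be farmed out to an idle workstation; here everything runs locally for brevity.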
Continuum and discrete approach in modeling biofilm development and structure: a review.
Mattei, M R; Frunzo, L; D'Acunto, B; Pechaud, Y; Pirozzi, F; Esposito, G
2018-03-01
The scientific community has recognized that almost 99% of the microbial life on earth is represented by biofilms. Considering the impacts of their sessile lifestyle on both natural and human activities, extensive experimental activity has been carried out to understand how biofilms grow and interact with the environment. Many mathematical models have also been developed to simulate and elucidate the main processes characterizing the biofilm growth. Two main mathematical approaches for biomass representation can be distinguished: continuum and discrete. This review is aimed at exploring the main characteristics of each approach. Continuum models can simulate the biofilm processes in a quantitative and deterministic way. However, they require a multidimensional formulation to take into account the biofilm spatial heterogeneity, which makes the models quite complicated, requiring significant computational effort. Discrete models are more recent and can represent the typical multidimensional structural heterogeneity of biofilm reflecting the experimental expectations, but they generate computational results including elements of randomness and introduce stochastic effects into the solutions.
Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce
Pratx, Guillem; Xing, Lei
2011-01-01
Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID:22191916
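The following is a toy map/reduce decomposition of a photon-migration Monte Carlo run, in the spirit of the approach described above but not the MC321/Hadoop implementation: each "map" task simulates a batch of photon histories in an infinite homogeneous medium and bins absorbed weight by radius, and the "reduce" step sums the partial histograms. The optical coefficients and batch sizes are illustrative assumptions; Python's multiprocessing stands in for the Hadoop runtime.

```python
import math
import random
from collections import Counter
from multiprocessing import Pool

MU_A, MU_S = 1.0, 9.0              # absorption / scattering coefficients (1/cm), assumed
MU_T = MU_A + MU_S
BIN = 0.1                          # radial bin width (cm)

def map_task(args):
    """One map task: simulate a batch of photon histories, return absorbed weight per shell."""
    n_photons, seed = args
    rng = random.Random(seed)
    absorbed = Counter()
    for _ in range(n_photons):
        x = y = z = 0.0
        w = 1.0
        while w > 1e-4:
            step = -math.log(rng.random()) / MU_T
            cost = 2.0 * rng.random() - 1.0        # isotropic scattering direction
            phi = 2.0 * math.pi * rng.random()
            sint = math.sqrt(1.0 - cost * cost)
            x += step * sint * math.cos(phi)
            y += step * sint * math.sin(phi)
            z += step * cost
            dw = w * MU_A / MU_T                   # weight absorbed at this step
            r_bin = int(math.sqrt(x * x + y * y + z * z) / BIN)
            absorbed[r_bin] += dw
            w -= dw
    return absorbed

def reduce_task(partials):
    """The reduce step: sum the partial absorption histograms."""
    total = Counter()
    for p in partials:
        total.update(p)
    return total

if __name__ == "__main__":
    with Pool(4) as pool:
        parts = pool.map(map_task, [(2000, seed) for seed in range(4)])
    hist = reduce_task(parts)
    print("absorbed weight in first 5 shells:",
          [round(hist[i], 2) for i in range(5)])
```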
Computational solutions to large-scale data management and analysis
Schadt, Eric E.; Linderman, Michael D.; Sorenson, Jon; Lee, Lawrence; Nolan, Garry P.
2011-01-01
Today we can generate hundreds of gigabases of DNA and RNA sequencing data in a week for less than US$5,000. The astonishing rate of data generation by these low-cost, high-throughput technologies in genomics is being matched by that of other technologies, such as real-time imaging and mass spectrometry-based flow cytometry. Success in the life sciences will depend on our ability to properly interpret the large-scale, high-dimensional data sets that are generated by these technologies, which in turn requires us to adopt advances in informatics. Here we discuss how we can master the different types of computational environments that exist — such as cloud and heterogeneous computing — to successfully tackle our big data problems. PMID:20717155
Intelligent Agents for the Digital Battlefield
1998-11-01
specific outcome of our long term research will be the development of a collaborative agent technology system, CATS, that will provide the underlying ... software infrastructure needed to build large, heterogeneous, distributed agent applications. CATS will provide a software environment through which multiple ... intelligent agents may interact with other agents, both human and computational. In addition, CATS will contain a number of intelligent agent components that will be useful for a wide variety of applications.
2013-07-01
structure of the data and Gower's similarity coefficient as the algorithm for calculating the proximity matrices. The following section provides a ... representative set of terrorist event data. Attribute: Day, Location, Time, Prim/Attack, Sec/Attack; Weight: 1, 1, 1, 1, 1; Scale: Nominal, Nominal, Interval, Nominal, ... calculate the similarity it uses Gower's similarity and multidimensional scaling algorithms contained in an R statistical computing environment
de Beer, R; Graveron-Demilly, D; Nastase, S; van Ormondt, D
2004-03-01
Recently we have developed a Java-based heterogeneous distributed computing system for the field of magnetic resonance imaging (MRI). It is a software system for embedding the various image reconstruction algorithms that we have created for handling MRI data sets with sparse sampling distributions. Since these data sets may result from multi-dimensional MRI measurements our system has to control the storage and manipulation of large amounts of data. In this paper we describe how we have employed the extensible markup language (XML) to realize this data handling in a highly structured way. To that end we have used Java packages, recently released by Sun Microsystems, to process XML documents and to compile pieces of XML code into Java classes. We have effectuated a flexible storage and manipulation approach for all kinds of data within the MRI system, such as data describing and containing multi-dimensional MRI measurements, data configuring image reconstruction methods and data representing and visualizing the various services of the system. We have found that the object-oriented approach, possible with the Java programming environment, combined with the XML technology is a convenient way of describing and handling various data streams in heterogeneous distributed computing systems.
Vermeeren, G; Gosselin, M C; Kühn, S; Kellerman, V; Hadjem, A; Gati, A; Joseph, W; Wiart, J; Meyer, F; Kuster, N; Martens, L
2010-09-21
The environment is an important parameter when evaluating the exposure to radio-frequency electromagnetic fields. This study investigates numerically the variation in the whole-body and peak spatially averaged specific absorption rate (SAR) in the heterogeneous virtual family male placed in front of a base station antenna in a reflective environment. The SAR values in a reflective environment are also compared to the values obtained when no environment is present (free space). The virtual family male has been placed at four distances (30 cm, 1 m, 3 m and 10 m) in front of six base station antennas (operating at 300 MHz, 450 MHz, 900 MHz, 2.1 GHz, 3.5 GHz and 5.0 GHz, respectively) and in three reflective environments (a perfectly conducting wall, a perfectly conducting ground and a perfectly conducting ground + wall). A total of 72 configurations are examined. The absorption in the heterogeneous body model is determined using the 3D electromagnetic (EM) finite-difference time-domain (FDTD) solver Semcad-X. For the larger simulations, requirements in terms of computer resources are reduced by using a generalized Huygens' box approach. It has been observed that the ratio of the SAR in the virtual family male in a reflective environment and the SAR in the virtual family male in the free-space environment ranged from -8.7 dB up to 8.0 dB. A worst-case reflective environment could not be determined. ICNIRP reference levels were not always shown to be compliant with the basic restrictions.
Research of G3-PLC net self-organization processes in the NS-3 modeling framework
NASA Astrophysics Data System (ADS)
Pospelova, Irina; Chebotayev, Pavel; Klimenko, Aleksey; Myakochin, Yuri; Polyakov, Igor; Shelupanov, Alexander; Zykov, Dmitriy
2017-11-01
When modern infocommunication networks are designed, the combination of several data transfer channels is widely used. It is necessary to improve the quality and robustness of communication. Communication systems based on more than one data transfer channel are called heterogeneous communication systems. For the design of a heterogeneous network, the optimal solution is the use of mesh technology. Mesh technology ensures message delivery to the destination under conditions of an unpredictable interference environment in each of the two channels. In this context, one of the high-priority problems is the choice of a routing protocol when mesh networks are designed. An important design stage for any computer network is modeling. Modeling allows us to design a few different variants of design solutions and also to compute all necessary functional specifications for each of these solutions. As a result, it allows us to reduce the costs of the physical realization of a network. In this article, research on dynamic routing in the NS-3 simulation modeling framework is presented. The article contains an evaluation of the applicability of simulation modeling to the problem of heterogeneous network design. Results of the modeling may afterwards be used for the physical realization of this kind of network.
Deelman, E.; Callaghan, S.; Field, E.; Francoeur, H.; Graves, R.; Gupta, N.; Gupta, V.; Jordan, T.H.; Kesselman, C.; Maechling, P.; Mehringer, J.; Mehta, G.; Okaya, D.; Vahi, K.; Zhao, L.
2006-01-01
This paper discusses the process of building an environment where large-scale, complex, scientific analysis can be scheduled onto a heterogeneous collection of computational and storage resources. The example application is the Southern California Earthquake Center (SCEC) CyberShake project, an analysis designed to compute probabilistic seismic hazard curves for sites in the Los Angeles area. We explain which software tools were used to build the system and describe their functionality and interactions. We show the results of running the CyberShake analysis that included over 250,000 jobs using resources available through SCEC and the TeraGrid. © 2006 IEEE.
Efficient parallelization of analytic bond-order potentials for large-scale atomistic simulations
NASA Astrophysics Data System (ADS)
Teijeiro, C.; Hammerschmidt, T.; Drautz, R.; Sutmann, G.
2016-07-01
Analytic bond-order potentials (BOPs) provide a way to compute atomistic properties with controllable accuracy. For large-scale computations of heterogeneous compounds at the atomistic level, both the computational efficiency and memory demand of BOP implementations have to be optimized. Since the evaluation of BOPs is a local operation within a finite environment, the parallelization concepts known from short-range interacting particle simulations can be applied to improve the performance of these simulations. In this work, several efficient parallelization methods for BOPs that use three-dimensional domain decomposition schemes are described. The schemes are implemented into the bond-order potential code BOPfox, and their performance is measured in a series of benchmarks. Systems of up to several millions of atoms are simulated on a high performance computing system, and parallel scaling is demonstrated for up to thousands of processors.
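A minimal sketch of the 3D domain-decomposition idea used for short-range interactions like the one above (this is not BOPfox code): atoms are binned into a regular grid of subdomains, one per process, and each subdomain also collects halo atoms that lie within the interaction cutoff of its faces so that the local potential evaluation sees its full finite environment. Periodic images and the actual MPI exchange are omitted for brevity; the box, grid and cutoff values are assumptions.

```python
import numpy as np

def decompose(positions, box, grid, cutoff):
    """Assign atoms to grid cells and build halo (ghost) lists per subdomain."""
    grid = np.asarray(grid)
    cell = np.asarray(box) / grid                    # subdomain edge lengths
    owner = np.floor(positions / cell).astype(int) % grid
    owned, halo = {}, {}
    for idx in np.ndindex(*grid):
        lo = np.asarray(idx) * cell
        hi = lo + cell
        mine = np.all(owner == idx, axis=1)
        owned[idx] = np.where(mine)[0]
        # halo: atoms outside the subdomain but within `cutoff` of its boundary
        near = np.all((positions > lo - cutoff) & (positions < hi + cutoff), axis=1)
        halo[idx] = np.where(near & ~mine)[0]
    return owned, halo

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    box = [20.0, 20.0, 20.0]
    pos = rng.uniform(0.0, 20.0, size=(5000, 3))
    owned, halo = decompose(pos, box, grid=(2, 2, 2), cutoff=2.5)
    print({k: (len(v), len(halo[k])) for k, v in owned.items()})
```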
XML-Based Visual Specification of Multidisciplinary Applications
NASA Technical Reports Server (NTRS)
Al-Theneyan, Ahmed; Jakatdar, Amol; Mehrotra, Piyush; Zubair, Mohammad
2001-01-01
The advancements in the Internet and Web technologies have fueled a growing interest in developing a web-based distributed computing environment. We have designed and developed Arcade, a web-based environment for designing, executing, monitoring, and controlling distributed heterogeneous applications, which is easy to use and access, portable, and provides support through all phases of the application development and execution. A major focus of the environment is the specification of heterogeneous, multidisciplinary applications. In this paper we focus on the visual and script-based specification interface of Arcade. The web/browser-based visual interface is designed to be intuitive to use and can also be used for visual monitoring during execution. The script specification is based on XML to: (1) make it portable across different frameworks, and (2) make the development of our tools easier by using the existing freely available XML parsers and editors. There is a one-to-one correspondence between the visual and script-based interfaces allowing users to go back and forth between the two. To support this we have developed translators that translate a script-based specification to a visual-based specification, and vice-versa. These translators are integrated with our tools and are transparent to users.
Computer-assisted engineering data base
NASA Technical Reports Server (NTRS)
Dube, R. P.; Johnson, H. R.
1983-01-01
General capabilities of data base management technology are described. Information requirements posed by the space station life cycle are discussed, and it is asserted that data base management technology supporting engineering/manufacturing in a heterogeneous hardware/data base management system environment should be applied to meeting these requirements. Today's commercial systems do not satisfy all of these requirements. The features of an R&D data base management system being developed to investigate data base management in the engineering/manufacturing environment are discussed. Features of this system represent only a partial solution to space station requirements. Areas where this system should be extended to meet full space station information management requirements are discussed.
Grid-wide neuroimaging data federation in the context of the NeuroLOG project
Michel, Franck; Gaignard, Alban; Ahmad, Farooq; Barillot, Christian; Batrancourt, Bénédicte; Dojat, Michel; Gibaud, Bernard; Girard, Pascal; Godard, David; Kassel, Gilles; Lingrand, Diane; Malandain, Grégoire; Montagnat, Johan; Pélégrini-Issac, Mélanie; Pennec, Xavier; Rojas Balderrama, Javier; Wali, Bacem
2010-01-01
Grid technologies are appealing to deal with the challenges raised by computational neurosciences and to support multi-centric brain studies. However, core grid middleware hardly copes with the complex neuroimaging data representation and multi-layer data federation needs. Moreover, legacy neuroscience environments need to be preserved and cannot simply be superseded by grid services. This paper describes the NeuroLOG platform design and implementation, shedding light on its Data Management Layer. It addresses the integration of brain image files, associated relational metadata and neuroscience semantic data in a heterogeneous distributed environment, integrating legacy data managers through a mediation layer. PMID:20543431
2013-11-01
big data with R is relatively new. RHadoop is a mature product from Revolution Analytics that uses R with Hadoop Streaming [15] and provides ... agnostic all-data summaries or computations, in which case we use MapReduce directly. 2.3 D&R Software Environment In this work, we use the Hadoop ... job scheduling and tracking, data distribution, system architecture, heterogeneity, and fault-tolerance. Hadoop also provides a distributed key-value
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Song
CFD (Computational Fluid Dynamics) is a widely used technique in the engineering design field. It uses mathematical methods to simulate and predict flow characteristics in a certain physical space. Since the numerical result of CFD computation is very hard to understand, VR (virtual reality) and data visualization techniques are introduced into CFD post-processing to improve the understandability and functionality of CFD computation. In many cases CFD datasets are very large (multi-gigabytes), and more and more interactions between the user and the datasets are required. For the traditional VR application, the limitation of computing power is a major factor preventing large datasets from being visualized effectively. This thesis presents a new system designed to speed up the traditional VR application by using parallel computing and distributed computing, and the idea of using a handheld device to enhance the interaction between a user and the VR CFD application as well. Techniques from different research areas including scientific visualization, parallel computing, distributed computing and graphical user interface design are used in the development of the final system. As a result, the new system can flexibly be built on a heterogeneous computing environment and dramatically shortens the computation time.
Scattering Properties of Heterogeneous Mineral Particles with Absorbing Inclusions
NASA Technical Reports Server (NTRS)
Dlugach, Janna M.; Mishchenko, Michael I.
2015-01-01
We analyze the results of numerically exact computer modeling of scattering and absorption properties of randomly oriented poly-disperse heterogeneous particles obtained by placing microscopic absorbing grains randomly on the surfaces of much larger spherical mineral hosts or by imbedding them randomly inside the hosts. These computations are paralleled by those for heterogeneous particles obtained by fully encapsulating fractal-like absorbing clusters in the mineral hosts. All computations are performed using the superposition T-matrix method. In the case of randomly distributed inclusions, the results are compared with the outcome of Lorenz-Mie computations for an external mixture of the mineral hosts and absorbing grains. We conclude that internal aggregation can affect strongly both the integral radiometric and differential scattering characteristics of the heterogeneous particle mixtures.
Carrying capacity in a heterogeneous environment with habitat connectivity.
Zhang, Bo; Kula, Alex; Mack, Keenan M L; Zhai, Lu; Ryce, Arrix L; Ni, Wei-Ming; DeAngelis, Donald L; Van Dyken, J David
2017-09-01
A large body of theory predicts that populations diffusing in heterogeneous environments reach higher total size than if non-diffusing, and, paradoxically, higher size than in a corresponding homogeneous environment. However, this theory and its assumptions have not been rigorously tested. Here, we extended previous theory to include exploitable resources, proving qualitatively novel results, which we tested experimentally using spatially diffusing laboratory populations of yeast. Consistent with previous theory, we predicted and experimentally observed that spatial diffusion increased total equilibrium population abundance in heterogeneous environments, with the effect size depending on the relationship between r and K. Refuting previous theory, however, we discovered that homogeneously distributed resources support higher total carrying capacity than heterogeneously distributed resources, even with species diffusion. Our results provide rigorous experimental tests of new and old theory, demonstrating how the traditional notion of carrying capacity is ambiguous for populations diffusing in spatially heterogeneous environments. © 2017 John Wiley & Sons Ltd/CNRS.
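A worked toy example of the classical result referenced above (this is not the authors' yeast experiment or model): two logistic patches coupled by diffusion. With suitable illustrative parameters, the total equilibrium abundance of the diffusing population exceeds K1 + K2, which is exactly why "carrying capacity" becomes ambiguous in heterogeneous space.

```python
import numpy as np

# Two-patch logistic growth with diffusive coupling (illustrative only):
#   dn1/dt = r1*n1*(1 - n1/K1) + D*(n2 - n1)
#   dn2/dt = r2*n2*(1 - n2/K2) + D*(n1 - n2)
def simulate(r, K, D, n0=(1.0, 1.0), dt=0.01, steps=100_000):
    n = np.array(n0, dtype=float)
    for _ in range(steps):
        growth = r * n * (1.0 - n / K)
        flux = D * (n[::-1] - n)          # diffusion between the two patches
        n = n + dt * (growth + flux)
    return n

if __name__ == "__main__":
    r = np.array([0.2, 1.0])              # slow grower in the low-K patch
    K = np.array([1.0, 3.0])
    no_diff = simulate(r, K, D=0.0)
    diff = simulate(r, K, D=0.2)
    print("total without diffusion:", no_diff.sum())   # ~ K1 + K2 = 4.0
    print("total with diffusion:   ", diff.sum())      # exceeds 4.0 here
```

With these assumed parameters the diffusing population equilibrates near 4.4, above the summed carrying capacities; reversing which patch has the higher r flips the effect, consistent with the dependence on the r-K relationship noted in the abstract.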
Carrying capacity in a heterogeneous environment with habitat connectivity
Zhang, Bo; Kula, Alex; Mack, Keenan M.L.; Zhai, Lu; Ryce, Arrix L.; Ni, Wei-Ming; DeAngelis, Donald L.; Van Dyken, J. David
2017-01-01
A large body of theory predicts that populations diffusing in heterogeneous environments reach higher total size than if non-diffusing, and, paradoxically, higher size than in a corresponding homogeneous environment. However, this theory and its assumptions have not been rigorously tested. Here, we extended previous theory to include exploitable resources, proving qualitatively novel results, which we tested experimentally using spatially diffusing laboratory populations of yeast. Consistent with previous theory, we predicted and experimentally observed that spatial diffusion increased total equilibrium population abundance in heterogeneous environments, with the effect size depending on the relationship between r and K. Refuting previous theory, however, we discovered that homogeneously distributed resources support higher total carrying capacity than heterogeneously distributed resources, even with species diffusion. Our results provide rigorous experimental tests of new and old theory, demonstrating how the traditional notion of carrying capacity is ambiguous for populations diffusing in spatially heterogeneous environments.
Reproducible Large-Scale Neuroimaging Studies with the OpenMOLE Workflow Management System.
Passerat-Palmbach, Jonathan; Reuillon, Romain; Leclaire, Mathieu; Makropoulos, Antonios; Robinson, Emma C; Parisot, Sarah; Rueckert, Daniel
2017-01-01
OpenMOLE is a scientific workflow engine with a strong emphasis on workload distribution. Workflows are designed using a high level Domain Specific Language (DSL) built on top of Scala. It exposes natural parallelism constructs to easily delegate the workload resulting from a workflow to a wide range of distributed computing environments. OpenMOLE hides the complexity of designing complex experiments thanks to its DSL. Users can embed their own applications and scale their pipelines from a small prototype running on their desktop computer to a large-scale study harnessing distributed computing infrastructures, simply by changing a single line in the pipeline definition. The construction of the pipeline itself is decoupled from the execution context. The high-level DSL abstracts the underlying execution environment, contrary to classic shell-script based pipelines. These two aspects allow pipelines to be shared and studies to be replicated across different computing environments. Workflows can be run as traditional batch pipelines or coupled with OpenMOLE's advanced exploration methods in order to study the behavior of an application, or perform automatic parameter tuning. In this work, we briefly present the strong assets of OpenMOLE and detail recent improvements targeting re-executability of workflows across various Linux platforms. We have tightly coupled OpenMOLE with CARE, a standalone containerization solution that allows re-executing on a Linux host any application that has been packaged on another Linux host previously. The solution is evaluated against a Python-based pipeline involving packages such as scikit-learn as well as binary dependencies. All were packaged and re-executed successfully on various HPC environments, with identical numerical results (here prediction scores) obtained on each environment. Our results show that the pair formed by OpenMOLE and CARE is a reliable solution to generate reproducible results and re-executable pipelines. A demonstration of the flexibility of our solution showcases three neuroimaging pipelines harnessing distributed computing environments as heterogeneous as local clusters or the European Grid Infrastructure (EGI).
Reproducible Large-Scale Neuroimaging Studies with the OpenMOLE Workflow Management System
Passerat-Palmbach, Jonathan; Reuillon, Romain; Leclaire, Mathieu; Makropoulos, Antonios; Robinson, Emma C.; Parisot, Sarah; Rueckert, Daniel
2017-01-01
OpenMOLE is a scientific workflow engine with a strong emphasis on workload distribution. Workflows are designed using a high level Domain Specific Language (DSL) built on top of Scala. It exposes natural parallelism constructs to easily delegate the workload resulting from a workflow to a wide range of distributed computing environments. OpenMOLE hides the complexity of designing complex experiments thanks to its DSL. Users can embed their own applications and scale their pipelines from a small prototype running on their desktop computer to a large-scale study harnessing distributed computing infrastructures, simply by changing a single line in the pipeline definition. The construction of the pipeline itself is decoupled from the execution context. The high-level DSL abstracts the underlying execution environment, contrary to classic shell-script based pipelines. These two aspects allow pipelines to be shared and studies to be replicated across different computing environments. Workflows can be run as traditional batch pipelines or coupled with OpenMOLE's advanced exploration methods in order to study the behavior of an application, or perform automatic parameter tuning. In this work, we briefly present the strong assets of OpenMOLE and detail recent improvements targeting re-executability of workflows across various Linux platforms. We have tightly coupled OpenMOLE with CARE, a standalone containerization solution that allows re-executing on a Linux host any application that has been packaged on another Linux host previously. The solution is evaluated against a Python-based pipeline involving packages such as scikit-learn as well as binary dependencies. All were packaged and re-executed successfully on various HPC environments, with identical numerical results (here prediction scores) obtained on each environment. Our results show that the pair formed by OpenMOLE and CARE is a reliable solution to generate reproducible results and re-executable pipelines. A demonstration of the flexibility of our solution showcases three neuroimaging pipelines harnessing distributed computing environments as heterogeneous as local clusters or the European Grid Infrastructure (EGI). PMID:28381997
Autonomic Management of Application Workflows on Hybrid Computing Infrastructure
Kim, Hyunjoo; el-Khamra, Yaakoub; Rodero, Ivan; ...
2011-01-01
In this paper, we present a programming and runtime framework that enables the autonomic management of complex application workflows on hybrid computing infrastructures. The framework is designed to address system and application heterogeneity and dynamics to ensure that application objectives and constraints are satisfied. The need for such autonomic system and application management is becoming critical as computing infrastructures become increasingly heterogeneous, integrating different classes of resources from high-end HPC systems to commodity clusters and clouds. For example, the framework presented in this paper can be used to provision the appropriate mix of resources based on application requirements and constraints. The framework also monitors the system/application state and adapts the application and/or resources to respond to changing requirements or environment. To demonstrate the operation of the framework and to evaluate its ability, we employ a workflow used to characterize an oil reservoir executing on a hybrid infrastructure composed of TeraGrid nodes and Amazon EC2 instances of various types. Specifically, we show how different application objectives such as acceleration, conservation and resilience can be effectively achieved while satisfying deadline and budget constraints, using an appropriate mix of dynamically provisioned resources. Our evaluations also demonstrate that public clouds can be used to complement and reinforce the scheduling and usage of traditional high performance computing infrastructure.
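As a toy illustration of the provisioning decision described above (not the paper's framework), the sketch below enumerates mixes of HPC nodes and cloud instances, keeps those that meet a deadline for a fixed amount of work, and returns the cheapest. The throughput and cost figures, and the resource names, are assumptions for the example.

```python
from itertools import product

WORK = 5_000.0                        # abstract work units to complete

def provision(resources, deadline):
    """resources: name -> (throughput per node, cost per node-hour)."""
    names = list(resources)
    best = None
    for counts in product(range(0, 17), repeat=len(names)):
        throughput = sum(c * resources[n][0] for c, n in zip(counts, names))
        if throughput == 0:
            continue
        hours = WORK / throughput
        if hours > deadline:
            continue                  # misses the deadline: infeasible mix
        cost = hours * sum(c * resources[n][1] for c, n in zip(counts, names))
        if best is None or cost < best[0]:
            best = (cost, dict(zip(names, counts)), hours)
    return best

if __name__ == "__main__":
    resources = {"hpc_node": (120.0, 1.50), "ec2_instance": (40.0, 0.80)}
    print(provision(resources, deadline=4.0))
```

Swapping the objective (e.g., minimize time subject to a budget) reproduces the "acceleration" versus "conservation" trade-off the abstract mentions.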
NASA Technical Reports Server (NTRS)
Hale, Mark A.; Craig, James I.; Mistree, Farrokh; Schrage, Daniel P.
1995-01-01
Computing architectures are being assembled that extend concurrent engineering practices by providing more efficient execution and collaboration on distributed, heterogeneous computing networks. Built on the successes of initial architectures, requirements for a next-generation design computing infrastructure can be developed. These requirements concentrate on those needed by a designer in decision-making processes from product conception to recycling and can be categorized in two areas: design process and design information management. A designer both designs and executes design processes throughout design time to achieve better product and process capabilities while expending fewer resources. In order to accomplish this, information, or more appropriately design knowledge, needs to be adequately managed during product and process decomposition as well as recomposition. A foundation has been laid that captures these requirements in a design architecture called DREAMS (Developing Robust Engineering Analysis Models and Specifications). In addition, a computing infrastructure, called IMAGE (Intelligent Multidisciplinary Aircraft Generation Environment), is being developed that satisfies design requirements defined in DREAMS and incorporates enabling computational technologies.
Phenotypically heterogeneous populations in spatially heterogeneous environments
NASA Astrophysics Data System (ADS)
Patra, Pintu; Klumpp, Stefan
2014-03-01
The spatial expansion of a population in a nonuniform environment may benefit from phenotypic heterogeneity with interconverting subpopulations using different survival strategies. We analyze the crossing of an antibiotic-containing environment by a bacterial population consisting of rapidly growing normal cells and slow-growing, but antibiotic-tolerant persister cells. The dynamics of crossing is characterized by mean first arrival times and is found to be surprisingly complex. It displays three distinct regimes with different scaling behavior that can be understood based on an analytical approximation. Our results suggest that a phenotypically heterogeneous population has a fitness advantage in nonuniform environments and can spread more rapidly than a homogeneous population.
A CFD Heterogeneous Parallel Solver Based on Collaborating CPU and GPU
NASA Astrophysics Data System (ADS)
Lai, Jianqi; Tian, Zhengyu; Li, Hua; Pan, Sha
2018-03-01
Since the Graphics Processing Unit (GPU) has a strong ability for floating-point computation and high memory bandwidth for data parallelism, it has been widely used in areas of general-purpose computing such as molecular dynamics (MD), computational fluid dynamics (CFD) and so on. The emergence of the compute unified device architecture (CUDA), which reduces the complexity of programming, brings great opportunities to CFD. There are three different modes for parallel solution of the NS equations: a parallel solver based on the CPU, a parallel solver based on the GPU and a heterogeneous parallel solver based on collaborating CPU and GPU. GPUs are relatively rich in compute capacity but poor in memory capacity, and CPUs are the opposite. We need to make full use of both the GPUs and CPUs, so a CFD heterogeneous parallel solver based on collaborating CPU and GPU has been established. Three cases are presented to analyse the solver's computational accuracy and heterogeneous parallel efficiency. The numerical results agree well with experimental results, which demonstrates that the heterogeneous parallel solver has high computational precision. The speedup on a single GPU is more than 40 for laminar flow; it decreases for turbulent flow but can still reach more than 20. What's more, the speedup increases as the grid size becomes larger.
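A very small illustration of the collaborating CPU+GPU idea (not the authors' solver): in a static partitioning scheme, grid cells can be split between the devices in proportion to their measured throughputs so that both finish each iteration at roughly the same time. The cell counts and rates below are assumptions, and boundary-exchange costs are ignored.

```python
def split_cells(n_cells, cpu_rate, gpu_rate):
    """Rates are cells updated per second, e.g. measured in a warm-up run."""
    gpu_share = gpu_rate / (cpu_rate + gpu_rate)
    n_gpu = int(round(n_cells * gpu_share))
    return n_cells - n_gpu, n_gpu

if __name__ == "__main__":
    cpu_rate, gpu_rate = 5.0e6, 2.0e8            # assumed throughputs
    n_cpu, n_gpu = split_cells(2_000_000, cpu_rate, gpu_rate)
    print(f"CPU cells: {n_cpu}, GPU cells: {n_gpu}")
    # Per-iteration time of each device, assuming the two partitions run
    # concurrently and host-device transfer of boundary data is negligible.
    print("CPU time:", n_cpu / cpu_rate, "s; GPU time:", n_gpu / gpu_rate, "s")
```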
NASA Astrophysics Data System (ADS)
Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Hong, Yang; Zuo, Depeng; Ren, Minglei; Lei, Tianjie; Liang, Ke
2018-01-01
Hydrological model calibration has been a hot issue for decades. The shuffled complex evolution method developed at the University of Arizona (SCE-UA) has been proved to be an effective and robust optimization approach. However, its computational efficiency deteriorates significantly when the amount of hydrometeorological data increases. In recent years, the rise of heterogeneous parallel computing has brought hope for the acceleration of hydrological model calibration. This study proposed a parallel SCE-UA method and applied it to the calibration of a watershed rainfall-runoff model, the Xinanjiang model. The parallel method was implemented on heterogeneous computing systems using OpenMP and CUDA. Performance testing and sensitivity analysis were carried out to verify its correctness and efficiency. Comparison results indicated that heterogeneous parallel computing-accelerated SCE-UA converged much more quickly than the original serial version and possessed satisfactory accuracy and stability for the task of fast hydrological model calibration.
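The costly kernel that parallel calibration methods such as the one above exploit is the evaluation of the objective function for many candidate parameter sets, which is embarrassingly parallel across the population. The sketch below shows that kernel only (not the authors' OpenMP/CUDA SCE-UA, and a trivial curve-fitting model stands in for the Xinanjiang model).

```python
import numpy as np
from multiprocessing import Pool

OBS = np.sin(np.linspace(0, 10, 200)) + 1.5      # stand-in "observed" flow series

def toy_model(params):
    a, b = params                                 # stand-in for a rainfall-runoff model
    t = np.linspace(0, 10, 200)
    return a * np.sin(t) + b

def objective(params):
    sim = toy_model(params)
    return float(np.sqrt(np.mean((sim - OBS) ** 2)))   # RMSE

def evaluate_population(population, workers=4):
    # Each candidate parameter set is evaluated independently in parallel.
    with Pool(workers) as pool:
        return pool.map(objective, population)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    population = [tuple(p) for p in rng.uniform([0, 0], [2, 3], size=(64, 2))]
    scores = evaluate_population(population)
    best = population[int(np.argmin(scores))]
    print("best candidate:", best, "RMSE:", min(scores))
```

In SCE-UA the population is organized into complexes that are evolved and shuffled between evaluations; only the evaluation step is sketched here.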
NASA's Information Power Grid: Large Scale Distributed Computing and Data Management
NASA Technical Reports Server (NTRS)
Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)
2001-01-01
Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require aggregation of widely distributed computing, data, and intellectual resources. Such simulations - e.g. whole system aircraft simulation and whole system living cell simulation - require integrating applications and data that are developed by different teams of researchers frequently in different locations. The research teams are the only ones that have the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.
Atomistic calculations of interface elastic properties in noncoherent metallic bilayers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mi Changwen; Jun, Sukky; Kouris, Demitris A.
2008-02-15
The paper describes theoretical and computational studies associated with the interface elastic properties of noncoherent metallic bicrystals. Analytical forms of interface energy, interface stresses, and interface elastic constants are derived in terms of interatomic potential functions. Embedded-atom method potentials are then incorporated into the model to compute these excess thermodynamic variables, using energy minimization in a parallel computing environment. The proposed model is validated by calculating surface thermodynamic variables and comparing them with preexisting data. Next, the interface elastic properties of several fcc-fcc bicrystals are computed. The excess energies and stresses of interfaces are smaller than those on free surfaces of the same crystal orientations. In addition, no negative values of interface stresses are observed. Current results can be applied to various heterogeneous materials where interfaces assume a prominent role in the systems' mechanical behavior.
Nagoski, Emily; Janssen, Erick; Lohrmann, David; Nichols, Eric
2012-08-01
Risky sexual behaviors, including the decision to have unprotected sex, result from interactions between individuals and their environment. The current study explored the use of Agent-Based Modeling (ABM)-a methodological approach in which computer-generated artificial societies simulate human sexual networks-to assess the influence of heterogeneity of sexual motivation on the risk of contracting HIV. The models successfully simulated some characteristics of human sexual systems, such as the relationship between individual differences in sexual motivation (sexual excitation and inhibition) and sexual risk, but failed to reproduce the scale-free distribution of number of partners observed in the real world. ABM has the potential to inform intervention strategies that target the interaction between an individual and his or her social environment.
Aumiller, William M; Davis, Bradley W; Hashemian, Negar; Maranas, Costas; Armaou, Antonios; Keating, Christine D
2014-03-06
The intracellular environment in which biological reactions occur is crowded with macromolecules and subdivided into microenvironments that differ in both physical properties and chemical composition. The work described here combines experimental and computational model systems to help understand the consequences of this heterogeneous reaction media on the outcome of coupled enzyme reactions. Our experimental model system for solution heterogeneity is a biphasic polyethylene glycol (PEG)/sodium citrate aqueous mixture that provides coexisting PEG-rich and citrate-rich phases. Reaction kinetics for the coupled enzyme reaction between glucose oxidase (GOX) and horseradish peroxidase (HRP) were measured in the PEG/citrate aqueous two-phase system (ATPS). Enzyme kinetics differed between the two phases, particularly for the HRP. Both enzymes, as well as the substrates glucose and H2O2, partitioned to the citrate-rich phase; however, the Amplex Red substrate necessary to complete the sequential reaction partitioned strongly to the PEG-rich phase. Reactions in ATPS were quantitatively described by a mathematical model that incorporated measured partitioning and kinetic parameters. The model was then extended to new reaction conditions, i.e., higher enzyme concentration. Both experimental and computational results suggest mass transfer across the interface is vital to maintain the observed rate of product formation, which may be a means of metabolic regulation in vivo. Although outcomes for a specific system will depend on the particulars of the enzyme reactions and the microenvironments, this work demonstrates how coupled enzymatic reactions in complex, heterogeneous media can be understood in terms of a mathematical model.
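A minimal two-compartment kinetic sketch of the situation described above, with partitioning and interfacial mass transfer (the rate forms and parameter values are placeholders, not the authors' measured GOX/HRP kinetics or their full model; equal phase volumes are assumed). Phase 1 stands for the citrate-rich phase holding the enzymes, phase 2 for the PEG-rich phase holding most of the Amplex Red-like substrate.

```python
V_GOX = 0.2      # assumed constant H2O2 production rate in phase 1 (uM/s), glucose saturating
KCAT_E = 2.0     # lumped second-order HRP rate constant (1/(uM*s)), assumed
KP_A = 20.0      # partition coefficient of substrate A (phase 2 / phase 1), assumed
KT = 0.05        # interfacial mass-transfer coefficient (1/s), assumed

def simulate(t_end=300.0, dt=0.005, kt=KT):
    H1 = 0.0                 # H2O2 in phase 1
    A1, A2 = 1.0, 20.0       # substrate A in phases 1 and 2 (uM), at partition equilibrium
    P = 0.0                  # fluorescent product accumulated in phase 1
    for _ in range(int(t_end / dt)):
        v_hrp = KCAT_E * H1 * A1             # simplified bimolecular HRP step
        transfer = kt * (A2 - KP_A * A1)     # flux of A from phase 2 into phase 1
        H1 += dt * (V_GOX - v_hrp)
        A1 += dt * (transfer - v_hrp)
        A2 += dt * (-transfer)
        P += dt * v_hrp
    return P

if __name__ == "__main__":
    print("product with interfacial transfer:", round(simulate(), 2), "uM")
    print("product with transfer switched off:", round(simulate(kt=0.0), 2), "uM")
    # With kt = 0 the HRP step is starved of substrate A, illustrating how
    # mass transfer across the interface limits observed product formation.
```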
Kang, Jungho; Kim, Mansik; Park, Jong Hyuk
2016-01-01
With ICT technology making great progress in the smart home environment, the ubiquitous environment is rapidly emerging all over the world, but problems are also increasing in proportion to the rapid growth of the smart home market, such as multiplatform heterogeneity and new security threats. In addition, smart home sensors have such low computing resources that they cannot process the complicated computation tasks required to create a proper security environment. A service provider also faces overhead in processing data from a rapidly increasing number of sensors. This paper proposes a scheme to build infrastructure in which communication entities can securely authenticate and establish secure channels using physically unclonable functions (PUFs) and a trusted third party (TTP) that smart home communication entities can rely on. In addition, we analyze and evaluate the proposed scheme for security and performance and prove that it can build secure channels with low resources. Finally, we expect that the proposed scheme can be helpful for secure communication with low resources in future smart home multiplatforms. PMID:27399699
Kang, Jungho; Kim, Mansik; Park, Jong Hyuk
2016-07-05
With ICT technology making great progress in the smart home environment, the ubiquitous environment is rapidly emerging all over the world, but problems are also increasing in proportion to the rapid growth of the smart home market, such as multiplatform heterogeneity and new security threats. In addition, smart home sensors have such low computing resources that they cannot process the complicated computation tasks required to create a proper security environment. A service provider also faces overhead in processing data from a rapidly increasing number of sensors. This paper proposes a scheme to build infrastructure in which communication entities can securely authenticate and establish secure channels using physically unclonable functions (PUFs) and a trusted third party (TTP) that smart home communication entities can rely on. In addition, we analyze and evaluate the proposed scheme for security and performance and prove that it can build secure channels with low resources. Finally, we expect that the proposed scheme can be helpful for secure communication with low resources in future smart home multiplatforms.
Thin client performance for remote 3-D image display.
Lai, Albert; Nieh, Jason; Laine, Andrew; Starren, Justin
2003-01-01
Several trends in biomedical computing are converging in a way that will require new approaches to telehealth image display. Image viewing is becoming an "anytime, anywhere" activity. In addition, organizations are beginning to recognize that healthcare providers are highly mobile and optimal care requires providing information wherever the provider and patient are. Thin-client computing is one way to support image viewing in this complex environment. However, little is known about the behavior of thin-client systems in supporting image transfer in modern heterogeneous networks. Our results show that thin clients can deliver acceptable performance over conditions commonly seen in wireless networks if newer protocols optimized for these conditions are used.
Heterogeneous concurrent computing with exportable services
NASA Technical Reports Server (NTRS)
Sunderam, Vaidy
1995-01-01
Heterogeneous concurrent computing, based on the traditional process-oriented model, is approaching its functionality and performance limits. An alternative paradigm, based on the concept of services, supporting data-driven computation, and built on a lightweight process infrastructure, is proposed to enhance the functional capabilities and the operational efficiency of heterogeneous network-based concurrent computing. TPVM is an experimental prototype system supporting exportable services, thread-based computation, and remote memory operations that is built as an extension of and an enhancement to the PVM concurrent computing system. TPVM offers a significantly different computing paradigm for network-based computing, while maintaining a close resemblance to the conventional PVM model in the interest of compatibility and ease of transition. Preliminary experiences have demonstrated that the TPVM framework presents a natural yet powerful concurrent programming interface, while being capable of delivering performance improvements of up to thirty percent.
Simple, efficient allocation of modelling runs on heterogeneous clusters with MPI
Donato, David I.
2017-01-01
In scientific modelling and computation, the choice of an appropriate method for allocating tasks for parallel processing depends on the computational setting and on the nature of the computation. The allocation of independent but similar computational tasks, such as modelling runs or Monte Carlo trials, among the nodes of a heterogeneous computational cluster is a special case that has not been specifically evaluated previously. A simulation study shows that a method of on-demand (that is, worker-initiated) pulling from a bag of tasks in this case leads to reliably short makespans for computational jobs despite heterogeneity both within and between cluster nodes. A simple reference implementation in the C programming language with the Message Passing Interface (MPI) is provided.
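The paper's reference implementation is in C with MPI; the following is a minimal sketch of the same worker-initiated ("pull from a bag of tasks") pattern using mpi4py, which is an assumption of convenience rather than the authors' code. Rank 0 hands out task indices on request, so faster nodes in a heterogeneous cluster simply come back for work more often.

```python
from mpi4py import MPI

TAG_REQUEST, TAG_TASK, TAG_STOP = 1, 2, 3

def run_task(i):
    return sum(k * k for k in range(10_000 + i))      # stand-in for a modelling run

def main():
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    n_tasks = 100
    if rank == 0:                                      # master: the bag of tasks
        next_task, stopped = 0, 0
        while stopped < size - 1:
            status = MPI.Status()
            comm.recv(source=MPI.ANY_SOURCE, tag=TAG_REQUEST, status=status)
            worker = status.Get_source()
            if next_task < n_tasks:
                comm.send(next_task, dest=worker, tag=TAG_TASK)
                next_task += 1
            else:
                comm.send(None, dest=worker, tag=TAG_STOP)
                stopped += 1
    else:                                              # worker: pull on demand
        n_done = 0
        while True:
            comm.send(None, dest=0, tag=TAG_REQUEST)
            status = MPI.Status()
            task = comm.recv(source=0, status=status)
            if status.Get_tag() == TAG_STOP:
                break
            run_task(task)
            n_done += 1
        print(f"rank {rank} finished {n_done} tasks")

if __name__ == "__main__":
    main()    # launch with, e.g.: mpiexec -n 4 python bag_of_tasks.py
```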
Behavior-based multi-robot collaboration for autonomous construction tasks
NASA Technical Reports Server (NTRS)
Stroupe, Ashley; Huntsberger, Terry; Okon, Avi; Aghazarian, Hrand; Robinson, Matthew
2005-01-01
The Robot Construction Crew (RCC) is a heterogeneous multi-robot system for autonomous construction of a structure through assembly of long components. The two-robot team demonstrates component placement into an existing structure in a realistic environment. The task requires component acquisition, cooperative transport, and cooperative precision manipulation. A behavior-based architecture provides adaptability. The RCC approach minimizes computation, power, communication, and sensing for applicability to space-related construction efforts, but the techniques are applicable to terrestrial construction tasks.
Behavior-Based Multi-Robot Collaboration for Autonomous Construction Tasks
NASA Technical Reports Server (NTRS)
Stroupe, Ashley; Huntsberger, Terry; Okon, Avi; Aghazarian, Hrand; Robinson, Matthew
2005-01-01
We present a heterogeneous multi-robot system for autonomous construction of a structure through assembly of long components. Placement of a component within an existing structure in a realistic environment is demonstrated on a two-robot team. The task requires component acquisition, cooperative transport, and cooperative precision manipulation. For adaptability, the system is designed as a behavior-based architecture. For applicability to space-related construction efforts, computation, power, communication, and sensing are minimized, though the techniques developed are also applicable to terrestrial construction tasks.
Wu, Jiayi; Ma, Yong-Bei; Congdon, Charles; Brett, Bevin; Chen, Shuobing; Xu, Yaofang; Ouyang, Qi
2017-01-01
Structural heterogeneity in single-particle cryo-electron microscopy (cryo-EM) data represents a major challenge for high-resolution structure determination. Unsupervised classification may serve as the first step in the assessment of structural heterogeneity. However, traditional algorithms for unsupervised classification, such as K-means clustering and maximum likelihood optimization, may classify images into wrong classes with decreasing signal-to-noise-ratio (SNR) in the image data, yet demand increased computational costs. Overcoming these limitations requires further development of clustering algorithms for high-performance cryo-EM data processing. Here we introduce an unsupervised single-particle clustering algorithm derived from a statistical manifold learning framework called generative topographic mapping (GTM). We show that unsupervised GTM clustering improves classification accuracy by about 40% in the absence of input references for data with lower SNRs. Applications to several experimental datasets suggest that our algorithm can detect subtle structural differences among classes via a hierarchical clustering strategy. After code optimization over a high-performance computing (HPC) environment, our software implementation was able to generate thousands of reference-free class averages within hours in a massively parallel fashion, which allows a significant improvement on ab initio 3D reconstruction and assists in the computational purification of homogeneous datasets for high-resolution visualization. PMID:28786986
Wu, Jiayi; Ma, Yong-Bei; Congdon, Charles; Brett, Bevin; Chen, Shuobing; Xu, Yaofang; Ouyang, Qi; Mao, Youdong
2017-01-01
Structural heterogeneity in single-particle cryo-electron microscopy (cryo-EM) data represents a major challenge for high-resolution structure determination. Unsupervised classification may serve as the first step in the assessment of structural heterogeneity. However, traditional algorithms for unsupervised classification, such as K-means clustering and maximum likelihood optimization, may classify images into wrong classes with decreasing signal-to-noise-ratio (SNR) in the image data, yet demand increased computational costs. Overcoming these limitations requires further development of clustering algorithms for high-performance cryo-EM data processing. Here we introduce an unsupervised single-particle clustering algorithm derived from a statistical manifold learning framework called generative topographic mapping (GTM). We show that unsupervised GTM clustering improves classification accuracy by about 40% in the absence of input references for data with lower SNRs. Applications to several experimental datasets suggest that our algorithm can detect subtle structural differences among classes via a hierarchical clustering strategy. After code optimization over a high-performance computing (HPC) environment, our software implementation was able to generate thousands of reference-free class averages within hours in a massively parallel fashion, which allows a significant improvement on ab initio 3D reconstruction and assists in the computational purification of homogeneous datasets for high-resolution visualization.
SCEAPI: A unified Restful Web API for High-Performance Computing
NASA Astrophysics Data System (ADS)
Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi
2017-10-01
The development of scientific computing is increasingly moving to collaborative web and mobile applications. All these applications need high-quality programming interface for accessing heterogeneous computing resources consisting of clusters, grid computing or cloud computing. In this paper, we introduce our high-performance computing environment that integrates computing resources from 16 HPC centers across China. Then we present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources with HTTP or HTTPs protocols. We discuss SCEAPI from several aspects including architecture, implementation and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI including authentication, file transfer and job management for creating, submitting and monitoring, and how to use SCEAPI in an easy-to-use way. Finally, we discuss how to exploit more HPC resources quickly for the ATLAS experiment by implementing the custom ARC compute element based on SCEAPI, and our work shows that SCEAPI is an easy-to-use and effective solution to extend opportunistic HPC resources.
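To make the client-side usage pattern concrete, the sketch below shows the general shape of authenticating, submitting a job, and polling its status over HTTPS with the requests library. The base URL, endpoint paths, and JSON fields are hypothetical placeholders, not the actual SCEAPI specification.

```python
import requests

BASE = "https://hpc.example.org/api"     # placeholder server, not a real SCEAPI endpoint

def login(session, username, password):
    r = session.post(f"{BASE}/auth/tokens",
                     json={"username": username, "password": password})
    r.raise_for_status()
    session.headers["Authorization"] = "Bearer " + r.json()["token"]

def submit_job(session, script, n_cores):
    r = session.post(f"{BASE}/jobs", json={"script": script, "cores": n_cores})
    r.raise_for_status()
    return r.json()["job_id"]

def job_status(session, job_id):
    r = session.get(f"{BASE}/jobs/{job_id}")
    r.raise_for_status()
    return r.json()["state"]

if __name__ == "__main__":
    with requests.Session() as s:
        login(s, "alice", "secret")
        jid = submit_job(s, "#!/bin/bash\n./run_simulation", n_cores=64)
        print("submitted", jid, "state:", job_status(s, jid))
```

A real deployment would add file-transfer calls and retry/error handling; the point here is only that a single token-authenticated HTTP session can drive job management on heterogeneous back-end resources.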
Shape and Color Features for Object Recognition Search
NASA Technical Reports Server (NTRS)
Duong, Tuan A.; Duong, Vu A.; Stubberud, Allen R.
2012-01-01
A bio-inspired shape feature of an object of interest emulates the integration of the saccadic eye movement and the horizontal layer in the vertebrate retina for object recognition search, where a single object can be used one at a time. The optimal computational model for shape-extraction-based principal component analysis (PCA) was also developed to reduce processing time and enable real-time adaptive system capability. A color feature of the object is employed as color segmentation to empower the shape feature recognition and to solve the object recognition problem in the heterogeneous environment, where a single technique - shape or color - may expose its difficulties. To enable an effective system, an adaptive architecture and autonomous mechanism were developed to recognize and adapt to the shape and color features of the moving object. The bio-inspired object recognition based on bio-inspired shape and color can be effective in recognizing a person of interest in the heterogeneous environment, where a single technique has difficulty performing effective recognition. Moreover, this work also demonstrates the mechanism and architecture of the autonomous adaptive system to enable a realistic system for practical use in the future.
A data colocation grid framework for big data medical image processing: backend design
NASA Astrophysics Data System (ADS)
Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.
2018-03-01
When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated when considering a variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL query. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop and HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous cluster) and MapReduce templates. A dataset summary statistic model is discussed and implemented by the MapReduce paradigm. We introduce an HBase table scheme for fast data query to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically access an in-house grid with 224 heterogeneous CPU cores. Three empirical experiment results are presented and discussed: (1) a load balancer wall-time improvement of 1.5-fold compared with a framework with a built-in data allocation strategy, (2) a summary statistic model is empirically verified on the grid framework and compared with the cluster when deployed with a standard Sun Grid Engine (SGE), which reduces wall clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction of wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available.
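A toy sketch of two ideas the abstract describes, which is not HadoopBase-MIP code: a capacity-aware load balancer for a heterogeneous cluster, and a lexicographically sortable row-key scheme that keeps a project's images contiguous so a NoSQL range scan can retrieve them quickly. All names and numbers are illustrative.

```python
def make_row_key(project, subject, session, image_id):
    """Sortable key: all rows for one project/subject cluster together."""
    return f"{project}:{subject}:{session}:{image_id}"

class LoadBalancer:
    def __init__(self, node_cores):
        # node name -> total cores; heterogeneous nodes have different totals
        self.capacity = dict(node_cores)
        self.in_use = {n: 0 for n in node_cores}

    def pick_node(self):
        # route work to the node with the largest free-core fraction
        return max(self.capacity,
                   key=lambda n: (self.capacity[n] - self.in_use[n]) / self.capacity[n])

    def assign(self, cores=1):
        node = self.pick_node()
        self.in_use[node] += cores
        return node

if __name__ == "__main__":
    lb = LoadBalancer({"node-a": 48, "node-b": 16, "node-c": 8})
    print([lb.assign() for _ in range(6)])        # larger nodes get proportionally more work
    print(make_row_key("proj01", "sub-1002", "ses-01", "T1w.nii.gz"))
```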
A Data Colocation Grid Framework for Big Data Medical Image Processing: Backend Design.
Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A
2018-03-01
When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated when considering a variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL query. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop & HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous cluster) and MapReduce templates. A dataset summary statistic model is discussed and implemented by the MapReduce paradigm. We introduce an HBase table scheme for fast data query to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically access an in-house grid with 224 heterogeneous CPU cores. Three empirical experiment results are presented and discussed: (1) a load balancer wall-time improvement of 1.5-fold compared with a framework with a built-in data allocation strategy, (2) a summary statistic model is empirically verified on the grid framework and compared with the cluster when deployed with a standard Sun Grid Engine (SGE), which reduces wall clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction of wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available.
A Data Colocation Grid Framework for Big Data Medical Image Processing: Backend Design
Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.
2018-01-01
When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated for the variety of multi-level analyses used in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population-based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL queries. In this paper, we present a heuristic backend application program interface (API) design for Hadoop & HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous clusters) and MapReduce templates. A dataset summary statistic model is discussed and implemented with the MapReduce paradigm. We introduce an HBase table scheme for fast data query to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores. The results of three empirical experiments are presented and discussed: (1) a 1.5-fold load balancer wall-time improvement compared with the framework's built-in data allocation strategy, (2) a summary statistic model empirically verified on the grid framework and compared with the same cluster deployed with a standard Sun Grid Engine (SGE), reducing wall-clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme, which improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available. PMID:29887668
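The records above describe the HadoopBase-MIP backend only at a high level; the concrete table scheme and MapReduce templates are not reproduced here. As a rough illustration of the data-colocation idea, the sketch below encodes study metadata into a lexicographically ordered row key (so one cohort can be fetched with a prefix range scan) and computes a cohort-level summary statistic in map/reduce style. All names (project/subject/session/scan, the toy voxel lists) are invented for the example and are not the paper's schema.

```python
# Hypothetical sketch of an HBase-style row-key scheme and a MapReduce-style
# summary statistic, illustrating the data-colocation idea described above.
from functools import reduce

def make_row_key(project: str, subject: str, session: str, scan: str) -> str:
    """Lexicographic key: images of one project/subject sort together, so a
    prefix range-scan retrieves a whole cohort without touching other rows."""
    return f"{project}|{subject}|{session}|{scan}"

def map_image(record):
    key, voxels = record                     # voxels: list of floats for one image
    return (sum(voxels), len(voxels))        # partial sum and count

def reduce_partials(a, b):
    return (a[0] + b[0], a[1] + b[1])        # combine partial results

table = {
    make_row_key("proj1", "sub001", "ses1", "T1"): [0.1, 0.4, 0.3],
    make_row_key("proj1", "sub002", "ses1", "T1"): [0.2, 0.5],
    make_row_key("proj2", "sub001", "ses1", "T1"): [0.9, 0.8],
}
# prefix scan emulating an HBase range query on the row key
cohort = [(k, v) for k, v in sorted(table.items()) if k.startswith("proj1|")]
total, count = reduce(reduce_partials, map(map_image, cohort))
print("cohort mean intensity:", round(total / count, 3))
```

The point of the key design is that the storage order itself performs the data selection, which is what lets MapReduce jobs run against only the rows they need.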
Applying Utility Functions to Adaptation Planning for Home Automation Applications
NASA Astrophysics Data System (ADS)
Bratskas, Pyrros; Paspallis, Nearchos; Kakousis, Konstantinos; Papadopoulos, George A.
A pervasive computing environment typically comprises multiple embedded devices that may interact together and with mobile users. These users are part of the environment, and they experience it through a variety of devices embedded in the environment. This perception involves technologies which may be heterogeneous, pervasive, and dynamic. Due to the highly dynamic properties of such environments, the software systems running on them have to face problems such as user mobility, service failures, or resource and goal changes, which may happen in an unpredictable manner. To cope with these problems, such systems must be autonomous and self-managed. In this chapter we deal with a special kind of ubiquitous environment, a smart home environment, and introduce a user-preference-based model for adaptation planning. The model, which dynamically forms a set of configuration plans for resources, reasons automatically and autonomously, based on utility functions, about which plan is likely to best achieve the user's goals with respect to resource availability and user needs.
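As a minimal sketch of the utility-function idea in this abstract, the snippet below scores candidate configuration plans against user preferences and current resource availability and selects the highest-utility feasible plan. The plan properties, weights, and resource names are invented for illustration and are not taken from the chapter.

```python
# Minimal sketch of utility-based adaptation planning: each candidate plan is
# scored against user preferences and current resource availability, and the
# feasible plan with the highest utility is selected.
def utility(plan, preferences):
    # weighted sum of the properties the user cares about (all in [0, 1])
    return sum(preferences[p] * plan["properties"][p] for p in preferences)

def feasible(plan, resources):
    return all(plan["needs"][r] <= resources.get(r, 0) for r in plan["needs"])

def select_plan(plans, preferences, resources):
    candidates = [p for p in plans if feasible(p, resources)]
    return max(candidates, key=lambda p: utility(p, preferences), default=None)

plans = [
    {"name": "low-power lighting", "properties": {"comfort": 0.4, "energy_saving": 0.9},
     "needs": {"bandwidth_kbps": 10}},
    {"name": "full ambience",      "properties": {"comfort": 0.9, "energy_saving": 0.3},
     "needs": {"bandwidth_kbps": 200}},
]
print(select_plan(plans, {"comfort": 0.7, "energy_saving": 0.3},
                  {"bandwidth_kbps": 50})["name"])
```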
Higher rates of sex evolve in spatially heterogeneous environments.
Becks, Lutz; Agrawal, Aneil F
2010-11-04
The evolution and maintenance of sexual reproduction has puzzled biologists for decades. Although this field is rich in hypotheses, experimental evidence is scarce. Some important experiments have demonstrated differences in evolutionary rates between sexual and asexual populations; other experiments have documented evolutionary changes in phenomena related to genetic mixing, such as recombination and selfing. However, direct experiments of the evolution of sex within populations are extremely rare (but see ref. 12). Here we use the rotifer, Brachionus calyciflorus, which is capable of both sexual and asexual reproduction, to test recent theory predicting that there is more opportunity for sex to evolve in spatially heterogeneous environments. Replicated experimental populations of rotifers were maintained in homogeneous environments, composed of either high- or low-quality food habitats, or in heterogeneous environments that consisted of a mix of the two habitats. For populations maintained in either type of homogeneous environment, the rate of sex evolves rapidly towards zero. In contrast, higher rates of sex evolve in populations experiencing spatially heterogeneous environments. The data indicate that the higher level of sex observed under heterogeneity is not due to sex being less costly or selection against sex being less efficient; rather sex is sufficiently advantageous in heterogeneous environments to overwhelm its inherent costs. Counter to some alternative theories for the evolution of sex, there is no evidence that genetic drift plays any part in the evolution of sex in these populations.
Influencing Trust for Human-Automation Collaborative Scheduling of Multiple Unmanned Vehicles.
Clare, Andrew S; Cummings, Mary L; Repenning, Nelson P
2015-11-01
We examined the impact of priming on operator trust and system performance when supervising a decentralized network of heterogeneous unmanned vehicles (UVs). Advances in autonomy have enabled a future vision of single-operator control of multiple heterogeneous UVs. Real-time scheduling for multiple UVs in uncertain environments requires the computational ability of optimization algorithms combined with the judgment and adaptability of human supervisors. Because of system and environmental uncertainty, appropriate operator trust will be instrumental to maintain high system performance and prevent cognitive overload. Three groups of operators experienced different levels of trust priming prior to conducting simulated missions in an existing, multiple-UV simulation environment. Participants who play computer and video games frequently were found to have a higher propensity to overtrust automation. By priming gamers to lower their initial trust to a more appropriate level, system performance was improved by 10% as compared to gamers who were primed to have higher trust in the automation. Priming was successful at adjusting the operator's initial and dynamic trust in the automated scheduling algorithm, which had a substantial impact on system performance. These results have important implications for personnel selection and training for futuristic multi-UV systems under human supervision. Although gamers may bring valuable skills, they may also be potentially prone to automation bias. Priming during training and regular priming throughout missions may be one potential method for overcoming this propensity to overtrust automation. © 2015, Human Factors and Ergonomics Society.
Mission Planning for Heterogeneous UxVs Operating in a Post-Disaster Urban Environment
2017-09-01
Master's thesis, September 2017. Author: Choon Seng Leon Mark Tan, Civilian Engineer, ST Aerospace Ltd., Singapore, B. Eng (Hons...). Thesis Advisor: Oleg...
Nakamura, Ryoji; Kachi, N; Suzuki, J-I
2010-05-01
We investigated the growth of and soil exploration by Lolium perenne under a heterogeneous environment before its roots reached a nutrient-rich patch. Temporal changes in the distribution of inorganic nitrogen, i.e., NO(3)(-)-N and NH(4)(+)-N, in the heterogeneous environment during the experimental period were also examined. The results showed that roots randomly explored soil, irrespective of the patchy distribution of inorganic nitrogen and differences in the chemical composition of inorganic nitrogen distribution between heterogeneous and homogeneous environments. We have also elucidated the potential effects of patch duration and inorganic nitrogen distribution on soil exploration by roots and thus on plant growth.
Dodge, Somayeh; Bohrer, Gil; Weinzierl, Rolf P.; Davidson, Sarah C.; Kays, Roland; Douglas, David C.; Cruz, Sebastian; Han, J.; Brandes, David; Wikelski, Martin
2013-01-01
The movement of animals is strongly influenced by external factors in their surrounding environment such as weather, habitat types, and human land use. With advances in positioning and sensor technologies, it is now possible to capture animal locations at high spatial and temporal granularities. Likewise, scientists have an increasing access to large volumes of environmental data. Environmental data are heterogeneous in source and format, and are usually obtained at different spatiotemporal scales than movement data. Indeed, there remain scientific and technical challenges in developing linkages between the growing collections of animal movement data and the large repositories of heterogeneous remote sensing observations, as well as in the developments of new statistical and computational methods for the analysis of movement in its environmental context. These challenges include retrieval, indexing, efficient storage, data integration, and analytical techniques.
Costa - Introduction to 2015 Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costa, James E.
Just as Sandia National Laboratories has two major locations (NM and CA), along with a number of smaller facilities across the nation, so too are its scientific, engineering and computing resources distributed. As a part of Sandia's Institutional Computing Program, CA site-based Sandia computer scientists and engineers have been providing mission and research staff with local CA-resident expertise on computing options while also focusing on two growing high performance computing research problems. The first is how to increase system resilience to failure as machines grow larger, more complex and heterogeneous. The second is how to ensure that computer hardware and configurations are optimized for specialized data-analytical mission needs within the overall Sandia computing environment, including the HPC subenvironment. All of these activities support the larger Sandia effort in accelerating the development and integration of high performance computing into national security missions. Sandia continues both to promote national R&D objectives, including the recent Presidential Executive Order establishing the National Strategic Computing Initiative, and to work to ensure that the full range of computing services and capabilities is available for all mission responsibilities, from national security to energy to homeland defense.
Compute as Fast as the Engineers Can Think! ULTRAFAST COMPUTING TEAM FINAL REPORT
NASA Technical Reports Server (NTRS)
Biedron, R. T.; Mehrotra, P.; Nelson, M. L.; Preston, M. L.; Rehder, J. J.; Rogers, J. L.; Rudy, D. H.; Sobieski, J.; Storaasli, O. O.
1999-01-01
This report documents findings and recommendations by the Ultrafast Computing Team (UCT). In the period 10-12/98, UCT reviewed design case scenarios for a supersonic transport and a reusable launch vehicle to derive computing requirements necessary for support of a design process with efficiency so radically improved that human thought rather than the computer paces the process. Assessment of the present computing capability against the above requirements indicated a need for further improvement in computing speed by several orders of magnitude to reduce time to solution from tens of hours to seconds in major applications. Evaluation of the trends in computer technology revealed a potential to attain the postulated improvement by further increases of single processor performance combined with massively parallel processing in a heterogeneous environment. However, utilization of massively parallel processing to its full capability will require redevelopment of the engineering analysis and optimization methods, including invention of new paradigms. To that end UCT recommends initiation of a new activity at LaRC called Computational Engineering for development of new methods and tools geared to the new computer architectures in disciplines, their coordination, and validation and benefit demonstration through applications.
Heterogeneous computing architecture for fast detection of SNP-SNP interactions.
Sluga, Davor; Curk, Tomaz; Zupan, Blaz; Lotric, Uros
2014-06-25
The extent of data in a typical genome-wide association study (GWAS) poses considerable computational challenges to software tools for gene-gene interaction discovery. Exhaustive evaluation of all interactions among hundreds of thousands to millions of single nucleotide polymorphisms (SNPs) may require weeks or even months of computation. Massively parallel hardware within a modern Graphic Processing Unit (GPU) and Many Integrated Core (MIC) coprocessors can shorten the run time considerably. While the utility of GPU-based implementations in bioinformatics has been well studied, MIC architecture has been introduced only recently and may provide a number of comparative advantages that have yet to be explored and tested. We have developed a heterogeneous, GPU and Intel MIC-accelerated software module for SNP-SNP interaction discovery to replace the previously single-threaded computational core in the interactive web-based data exploration program SNPsyn. We report on differences between these two modern massively parallel architectures and their software environments. Their utility resulted in an order of magnitude shorter execution times when compared to the single-threaded CPU implementation. GPU implementation on a single Nvidia Tesla K20 runs twice as fast as that for the MIC architecture-based Xeon Phi P5110 coprocessor, but also requires considerably more programming effort. General purpose GPUs are a mature platform with large amounts of computing power capable of tackling inherently parallel problems, but can prove demanding for the programmer. On the other hand, the new MIC architecture, albeit lacking in performance, reduces the programming effort and makes up for it with a more general architecture suitable for a wider range of problems.
Heterogeneous computing architecture for fast detection of SNP-SNP interactions
2014-01-01
Background The extent of data in a typical genome-wide association study (GWAS) poses considerable computational challenges to software tools for gene-gene interaction discovery. Exhaustive evaluation of all interactions among hundreds of thousands to millions of single nucleotide polymorphisms (SNPs) may require weeks or even months of computation. Massively parallel hardware within a modern Graphic Processing Unit (GPU) and Many Integrated Core (MIC) coprocessors can shorten the run time considerably. While the utility of GPU-based implementations in bioinformatics has been well studied, MIC architecture has been introduced only recently and may provide a number of comparative advantages that have yet to be explored and tested. Results We have developed a heterogeneous, GPU and Intel MIC-accelerated software module for SNP-SNP interaction discovery to replace the previously single-threaded computational core in the interactive web-based data exploration program SNPsyn. We report on differences between these two modern massively parallel architectures and their software environments. Their utility resulted in an order of magnitude shorter execution times when compared to the single-threaded CPU implementation. GPU implementation on a single Nvidia Tesla K20 runs twice as fast as that for the MIC architecture-based Xeon Phi P5110 coprocessor, but also requires considerably more programming effort. Conclusions General purpose GPUs are a mature platform with large amounts of computing power capable of tackling inherently parallel problems, but can prove demanding for the programmer. On the other hand, the new MIC architecture, albeit lacking in performance, reduces the programming effort and makes up for it with a more general architecture suitable for a wider range of problems. PMID:24964802
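For readers unfamiliar with the underlying computation, the following single-threaded sketch illustrates what an exhaustive SNP-SNP scan evaluates: an interaction-information score for every SNP pair against the phenotype. It is a toy CPU reference on synthetic genotypes, not the SNPsyn kernel or its GPU/MIC implementations.

```python
# Simplified sketch of exhaustive SNP-SNP interaction scoring.  The score is
# interaction information I(A;B;Y) = I(AB;Y) - I(A;Y) - I(B;Y); the real
# accelerated kernels are more elaborate, this only shows the pairwise sweep.
import itertools
import numpy as np

def mutual_info(x, y):
    """Mutual information (in bits) between two discrete vectors."""
    joint = np.zeros((x.max() + 1, y.max() + 1))
    for xi, yi in zip(x, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
n_snps, n_samples = 50, 200
genotypes = rng.integers(0, 3, size=(n_snps, n_samples))   # 0/1/2 allele counts
phenotype = rng.integers(0, 2, size=n_samples)             # case/control labels

scores = {}
for i, j in itertools.combinations(range(n_snps), 2):
    combined = genotypes[i] * 3 + genotypes[j]              # joint genotype code
    scores[(i, j)] = (mutual_info(combined, phenotype)
                      - mutual_info(genotypes[i], phenotype)
                      - mutual_info(genotypes[j], phenotype))

best = max(scores, key=scores.get)
print("top-ranked SNP pair:", best, "score:", round(scores[best], 4))
```

The pair loop is what the GPU and MIC ports parallelize, since each pair's score is independent of every other pair's.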
Extending the granularity of representation and control for the MIL-STD CAIS 1.0 node model
NASA Technical Reports Server (NTRS)
Rogers, Kathy L.
1986-01-01
The Common APSE (Ada Program Support Environment) Interface Set (CAIS) (DoD85) node model provides an excellent baseline for interfaces in a single-host development environment. To encompass the entire spectrum of computing, however, the CAIS model should be extended in four areas. It should provide the interface between the engineering workstation and the host system throughout the entire lifecycle of the system. It should provide a basis for communication and integration functions needed by distributed host environments. It should provide common interfaces for communications mechanisms to and among target processors. It should provide facilities for integration, validation, and verification of test beds extending to distributed systems on geographically separate processors with heterogeneous instruction set architectures (ISAs). Additions to the PROCESS NODE model to extend the CAIS into these four areas are proposed.
A development framework for distributed artificial intelligence
NASA Technical Reports Server (NTRS)
Adler, Richard M.; Cottman, Bruce H.
1989-01-01
The authors describe distributed artificial intelligence (DAI) applications in which multiple organizations of agents solve multiple domain problems. They then describe work in progress on a DAI system development environment, called SOCIAL, which consists of three primary language-based components. The Knowledge Object Language defines models of knowledge representation and reasoning. The metaCourier language supplies the underlying functionality for interprocess communication and control access across heterogeneous computing environments. The metaAgents language defines models for agent organization coordination, control, and resource management. Application agents and agent organizations will be constructed by combining metaAgents and metaCourier building blocks with task-specific functionality such as diagnostic or planning reasoning. This architecture hides implementation details of communications, control, and integration in distributed processing environments, enabling application developers to concentrate on the design and functionality of the intelligent agents and agent networks themselves.
Explorative search of distributed bio-data to answer complex biomedical questions
2014-01-01
Background The huge amount of biomedical-molecular data increasingly produced is providing scientists with potentially valuable information. Yet, such data quantity makes it difficult to find and extract those data that are most reliable and most related to the biomedical questions to be answered, which are increasingly complex and often involve many different biomedical-molecular aspects. Such questions can be addressed only by comprehensively searching and exploring different types of data, which frequently are ordered and provided by different data sources. Search Computing has been proposed for the management and integration of ranked results from heterogeneous search services. Here, we present its novel application to the explorative search of distributed biomedical-molecular data and the integration of the search results to answer complex biomedical questions. Results A set of available bioinformatics search services has been modelled and registered in the Search Computing framework, and a Bioinformatics Search Computing application (Bio-SeCo) using such services has been created and made publicly available at http://www.bioinformatics.deib.polimi.it/bio-seco/seco/. It offers an integrated environment which eases search, exploration and ranking-aware combination of heterogeneous data provided by the available registered services, and supplies global results that can support answering complex multi-topic biomedical questions. Conclusions By using Bio-SeCo, scientists can explore the very large and very heterogeneous biomedical-molecular data available. They can easily make different explorative search attempts, inspect obtained results, select the most appropriate, expand or refine them and move forward and backward in the construction of a global complex biomedical query on multiple distributed sources that could eventually find the most relevant results. Thus, it provides an extremely useful automated support for exploratory integrated bio search, which is fundamental for Life Science data driven knowledge discovery. PMID:24564278
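A minimal sketch of ranking-aware combination, the central Search Computing operation mentioned above: two ranked result lists from different services are joined on a shared key and re-ranked by a weighted combination of their normalised ranks. The service contents, gene symbols and weights are invented; Bio-SeCo's actual join and ranking logic is richer.

```python
# Sketch of ranking-aware combination of results from two search services:
# partial results are joined on a shared key (here a gene symbol) and the
# global rank comes from a weighted combination of normalised ranks.
def combine(ranked_a, ranked_b, w_a=0.5, w_b=0.5):
    norm = lambda ranked: {k: 1 - i / len(ranked) for i, (k, _) in enumerate(ranked)}
    na, nb = norm(ranked_a), norm(ranked_b)
    joined = {k: w_a * na[k] + w_b * nb[k] for k in na.keys() & nb.keys()}
    return sorted(joined.items(), key=lambda kv: kv[1], reverse=True)

disease_genes = [("TP53", 0.97), ("BRCA1", 0.91), ("EGFR", 0.80)]   # service 1
expressed_genes = [("EGFR", 12.4), ("TP53", 9.8), ("KRAS", 7.1)]    # service 2
for gene, score in combine(disease_genes, expressed_genes):
    print(gene, round(score, 3))
```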
Grid commerce, market-driven G-negotiation, and Grid resource management.
Sim, Kwang Mong
2006-12-01
Although the management of resources is essential for realizing a computational grid, providing an efficient resource allocation mechanism is a complex undertaking. Since Grid providers and consumers may be independent bodies, negotiation among them is necessary. The contribution of this paper is showing that market-driven agents (MDAs) are appropriate tools for Grid resource negotiation. MDAs are e-negotiation agents designed with the flexibility of: 1) making adjustable amounts of concession taking into account market rivalry, outside options, and time preferences and 2) relaxing bargaining terms in the face of intense pressure. A heterogeneous testbed consisting of several types of e-negotiation agents to simulate a Grid computing environment was developed. It compares the performance of MDAs against other e-negotiation agents (e.g., Kasbah) in a Grid-commerce environment. Empirical results show that MDAs generally achieve: 1) higher budget efficiencies in many market situations than other e-negotiation agents in the testbed and 2) higher success rates in acquiring Grid resources under high Grid loadings.
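The following toy concession rule illustrates, under invented parameters, how a market-driven agent might adjust its offer with time pressure, market rivalry and outside options; it is not the paper's actual MDA strategy or formulae.

```python
# Illustrative concession rule for a Grid-resource consumer agent: the
# concession grows with time pressure and with competition from rival
# consumers, and shrinks when many alternative providers exist.
def concession(initial_bid, reserve_price, t, deadline, rivals, outside_options):
    time_pressure = (t / deadline) ** 2                  # polynomial time preference
    competition = rivals / (rivals + outside_options + 1)
    fraction = min(1.0, time_pressure * (0.5 + competition))
    return initial_bid + fraction * (reserve_price - initial_bid)

deadline = 10
for t in range(deadline + 1):
    offer = concession(initial_bid=10.0, reserve_price=50.0, t=t,
                       deadline=deadline, rivals=4, outside_options=2)
    print(f"round {t:2d}: consumer offers {offer:.2f} grid-credits")
```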
State-of-the-art in Heterogeneous Computing
Brodtkorb, Andre R.; Dyken, Christopher; Hagen, Trond R.; ...
2010-01-01
Node level heterogeneous architectures have become attractive during the last decade for several reasons: compared to traditional symmetric CPUs, they offer high peak performance and are energy and/or cost efficient. With the increase of fine-grained parallelism in high-performance computing, as well as the introduction of parallelism in workstations, there is an acute need for a good overview and understanding of these architectures. We give an overview of the state-of-the-art in heterogeneous computing, focusing on three commonly found architectures: the Cell Broadband Engine Architecture, graphics processing units (GPUs), and field programmable gate arrays (FPGAs). We present a review of hardware, available software tools, and an overview of state-of-the-art techniques and algorithms. Furthermore, we present a qualitative and quantitative comparison of the architectures, and give our view on the future of heterogeneous computing.
Random sphere packing model of heterogeneous propellants
NASA Astrophysics Data System (ADS)
Kochevets, Sergei Victorovich
It is well recognized that combustion of heterogeneous propellants is strongly dependent on the propellant morphology. Recent developments in computing systems make it possible to start three-dimensional modeling of heterogeneous propellant combustion. A key component of such large scale computations is a realistic model of industrial propellants which retains the true morphology---a goal never achieved before. The research presented develops the Random Sphere Packing Model of heterogeneous propellants and generates numerical samples of actual industrial propellants. This is done by developing a sphere packing algorithm which randomly packs a large number of spheres with a polydisperse size distribution within a rectangular domain. First, the packing code is developed, optimized for performance, and parallelized using the OpenMP shared memory architecture. Second, the morphology and packing fraction of two simple cases of unimodal and bimodal packs are investigated computationally and analytically. It is shown that both the Loose Random Packing and Dense Random Packing limits are not well defined and the growth rate of the spheres is identified as the key parameter controlling the efficiency of the packing. For a properly chosen growth rate, computational results are found to be in excellent agreement with experimental data. Third, two strategies are developed to define numerical samples of polydisperse heterogeneous propellants: the Deterministic Strategy and the Random Selection Strategy. Using these strategies, numerical samples of industrial propellants are generated. The packing fraction is investigated and it is shown that the experimental values of the packing fraction can be achieved computationally. It is strongly believed that this Random Sphere Packing Model of propellants is a major step forward in the realistic computational modeling of heterogeneous propellant combustion. In addition, a method of analysis of the morphology of heterogeneous propellants is developed which uses the concept of multi-point correlation functions. A set of intrinsic length scales of local density fluctuations in random heterogeneous propellants is identified by performing a Monte-Carlo study of the correlation functions. This method of analysis shows great promise for understanding the origins of the combustion instability of heterogeneous propellants, and is believed to become a valuable tool for the development of safe and reliable rocket engines.
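As a much-simplified illustration of the packing problem, the sketch below uses random sequential addition (random candidate centres, rejected on overlap) for a bimodal sphere mixture and reports the resulting packing fraction. The thesis's algorithm instead grows spheres at a controlled rate in parallel; the box size and radii here are arbitrary.

```python
# Random-sequential-addition sketch of packing a bimodal sphere mixture into a
# box: candidate spheres are placed at random and rejected if they overlap an
# already-placed sphere.  Only the overlap test and the packing-fraction
# bookkeeping of the full model are illustrated.
import numpy as np

rng = np.random.default_rng(1)
box = np.array([10.0, 10.0, 10.0])
radii_pool = [0.6, 0.3]                          # bimodal size distribution
centers, radii = [], []

for _ in range(5000):                            # candidate insertions
    r = rng.choice(radii_pool)
    c = rng.uniform(r, box - r)                  # keep the sphere inside the box
    if not centers or np.all(
        np.linalg.norm(np.asarray(centers) - c, axis=1) >= np.asarray(radii) + r
    ):
        centers.append(c)
        radii.append(r)

volume = sum(4.0 / 3.0 * np.pi * r**3 for r in radii)
print(f"placed {len(radii)} spheres, packing fraction = {volume / box.prod():.3f}")
```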
A distributed program composition system
NASA Technical Reports Server (NTRS)
Brown, Robert L.
1989-01-01
A graphical technique for creating distributed computer programs is investigated and a prototype implementation is described which serves as a testbed for the concepts. The type of programs under examination is restricted to those comprising relatively heavyweight parts that intercommunicate by passing messages of typed objects. Such programs are often presented visually as a directed graph with computer program parts as the nodes and communication channels as the edges. This class of programs, called parts-based programs, is not well supported by existing computer systems; much manual work is required to describe the program to the system, establish the communication paths, accommodate the heterogeneity of data types, and to locate the parts of the program on the various systems involved. The work described solves most of these problems by providing an interface for describing parts-based programs in this class in a way that closely models the way programmers think about them: using sketches of digraphs. Program parts, the computational modes of the larger program system, are categorized in libraries and are accessed with browsers. The process of programming has the programmer draw the program graph interactively. Heterogeneity is automatically accommodated by the insertion of type translators where necessary between the parts. Many decisions are necessary in the creation of a comprehensive tool for interactive creation of programs in this class. Possibilities are explored and the issues behind such decisions are presented. An approach to program composition is described, not a carefully implemented programming environment. However, a prototype implementation is described that can demonstrate the ideas presented.
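A minimal sketch of the parts-based composition idea: parts are nodes with typed input and output channels, and a type translator is inserted automatically on any edge whose endpoint types differ. The part names, types and translator below are invented for the example.

```python
# Parts are graph nodes joined by typed channels; a translator is inserted
# automatically wherever the producer's output type differs from the
# consumer's input type.
translators = {("celsius", "fahrenheit"): lambda c: c * 9 / 5 + 32}

class Part:
    def __init__(self, name, in_type, out_type, fn):
        self.name, self.in_type, self.out_type, self.fn = name, in_type, out_type, fn

def connect(producer, consumer):
    """Return a callable edge, inserting a translator if the types mismatch."""
    if producer.out_type == consumer.in_type:
        return lambda x: consumer.fn(producer.fn(x))
    translate = translators[(producer.out_type, consumer.in_type)]
    print(f"inserted translator {producer.out_type} -> {consumer.in_type} "
          f"between {producer.name} and {consumer.name}")
    return lambda x: consumer.fn(translate(producer.fn(x)))

sensor = Part("sensor", "raw", "celsius", lambda raw: raw * 0.5)
display = Part("display", "fahrenheit", "text", lambda f: f"{f:.1f} F")
pipeline = connect(sensor, display)
print(pipeline(42))
```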
Vygotsky and Papert: social-cognitive interactions within Logo environments.
Mevarech, Z R; Kramarski, B
1993-02-01
The purpose of this study was to examine the effects of co-operative and individualised Logo environments on creativity and interpersonal relationships regarding academic recognition and social acceptance. Participants were 83 students who studied in three eighth grade classrooms: one was exposed to a co-operative Logo environment (N = 30), the other to an individualised Logo environment (N = 24), and the third served as a non-treatment control group (N = 29). Results showed that students in the cooperative Logo environment outperformed their counterparts in the other two groups on certain measures of creativity (figurative-originality, verbal-flexibility, and verbal-originality). In addition, the co-operative Logo group developed more positive interpersonal relationships than the students in the other two settings. The results are discussed from three perspectives: the social-cognitive approach emphasising the roles of co-operation and metacognition in developing advanced thinking skills; the educational-technology viewpoint demonstrating the potential use of computers; and the pedagogical view pointing out the implications of the study to school situations and heterogeneous classrooms.
clubber: removing the bioinformatics bottleneck in big data analyses.
Miller, Maximilian; Zhu, Chengsheng; Bromberg, Yana
2017-06-13
With the advent of modern day high-throughput technologies, the bottleneck in biological discovery has shifted from the cost of doing experiments to that of analyzing results. clubber is our automated cluster-load balancing system developed for optimizing these "big data" analyses. Its plug-and-play framework encourages re-use of existing solutions for bioinformatics problems. clubber's goals are to reduce computation times and to facilitate use of cluster computing. The first goal is achieved by automating the balance of parallel submissions across available high performance computing (HPC) resources. Notably, the latter can be added on demand, including cloud-based resources, and/or featuring heterogeneous environments. The second goal of making HPCs user-friendly is facilitated by an interactive web interface and a RESTful API, allowing for job monitoring and result retrieval. We used clubber to speed up our pipeline for annotating molecular functionality of metagenomes. Here, we analyzed the Deepwater Horizon oil-spill study data to quantitatively show that the beach sands have not yet entirely recovered. Further, our analysis of the CAMI-challenge data revealed that microbiome taxonomic shifts do not necessarily correlate with functional shifts. These examples (21 metagenomes processed in 172 min) clearly illustrate the importance of clubber in the everyday computational biology environment.
clubber: removing the bioinformatics bottleneck in big data analyses
Miller, Maximilian; Zhu, Chengsheng; Bromberg, Yana
2018-01-01
With the advent of modern day high-throughput technologies, the bottleneck in biological discovery has shifted from the cost of doing experiments to that of analyzing results. clubber is our automated cluster-load balancing system developed for optimizing these “big data” analyses. Its plug-and-play framework encourages re-use of existing solutions for bioinformatics problems. clubber’s goals are to reduce computation times and to facilitate use of cluster computing. The first goal is achieved by automating the balance of parallel submissions across available high performance computing (HPC) resources. Notably, the latter can be added on demand, including cloud-based resources, and/or featuring heterogeneous environments. The second goal of making HPCs user-friendly is facilitated by an interactive web interface and a RESTful API, allowing for job monitoring and result retrieval. We used clubber to speed up our pipeline for annotating molecular functionality of metagenomes. Here, we analyzed the Deepwater Horizon oil-spill study data to quantitatively show that the beach sands have not yet entirely recovered. Further, our analysis of the CAMI-challenge data revealed that microbiome taxonomic shifts do not necessarily correlate with functional shifts. These examples (21 metagenomes processed in 172 min) clearly illustrate the importance of clubber in the everyday computational biology environment. PMID:28609295
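As a hypothetical sketch of clubber's balancing step (the real system polls schedulers and submits jobs through them), the snippet below splits a batch of pending jobs across heterogeneous resources in proportion to their free slots. The resource names and slot counts are made up.

```python
# Toy load-balancing step: distribute pending jobs across heterogeneous HPC
# resources in proportion to the slots each one currently has free.
def balance(jobs, free_slots):
    """Return {resource: [jobs]} proportional to currently free slots."""
    total = sum(free_slots.values())
    resources = sorted(free_slots)
    shares, assigned = {}, 0
    for i, res in enumerate(resources):
        if i == len(resources) - 1:
            n = len(jobs) - assigned             # last resource takes the remainder
        else:
            n = round(len(jobs) * free_slots[res] / total)
        shares[res] = jobs[assigned:assigned + n]
        assigned += n
    return shares

jobs = [f"metagenome_{i:03d}" for i in range(21)]
free = {"local_sge": 64, "campus_slurm": 32, "cloud_burst": 128}
for resource, batch in balance(jobs, free).items():
    print(f"{resource:13s} gets {len(batch):2d} jobs")
```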
Characterizing the heterogeneity of tumor tissues from spatially resolved molecular measures
Zavodszky, Maria I.
2017-01-01
Background Tumor heterogeneity can manifest itself by sub-populations of cells having distinct phenotypic profiles expressed as diverse molecular, morphological and spatial distributions. This inherent heterogeneity poses challenges in terms of diagnosis, prognosis and efficient treatment. Consequently, tools and techniques are being developed to properly characterize and quantify tumor heterogeneity. Multiplexed immunofluorescence (MxIF) is one such technology that offers molecular insight into both inter-individual and intratumor heterogeneity. It enables the quantification of both the concentration and spatial distribution of 60+ proteins across a tissue section. Upon bioimage processing, protein expression data can be generated for each cell from a tissue field of view. Results The Multi-Omics Heterogeneity Analysis (MOHA) tool was developed to compute tissue heterogeneity metrics from MxIF spatially resolved tissue imaging data. This technique computes the molecular state of each cell in a sample based on a pathway or gene set. Spatial states are then computed based on the spatial arrangements of the cells as distinguished by their respective molecular states. MOHA computes tissue heterogeneity metrics from the distributions of these molecular and spatially defined states. A colorectal cancer cohort of approximately 700 subjects with MxIF data is presented to demonstrate the MOHA methodology. Within this dataset, statistically significant correlations were found between the intratumor AKT pathway state diversity and cancer stage and histological tumor grade. Furthermore, intratumor spatial diversity metrics were found to correlate with cancer recurrence. Conclusions MOHA provides a simple and robust approach to characterize molecular and spatial heterogeneity of tissues. Research projects that generate spatially resolved tissue imaging data can take full advantage of this useful technique. The MOHA algorithm is implemented as a freely available R script (see supplementary information). PMID:29190747
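To make the metric concrete, the sketch below assigns each cell a discrete molecular state from thresholded marker expression and reports the Shannon diversity of the state distribution as a heterogeneity score. The markers, threshold and state definition are illustrative and are not the published MOHA definitions.

```python
# Sketch of a MOHA-style heterogeneity metric: each cell gets a discrete
# molecular state from its marker expression, and heterogeneity is the
# Shannon diversity of the state distribution across cells.
import math
from collections import Counter

def molecular_state(cell, markers, threshold=1.0):
    # the state is the on/off pattern across the markers of one pathway
    return tuple(cell[m] > threshold for m in markers)

def shannon_diversity(states):
    counts = Counter(states)
    n = sum(counts.values())
    return -sum((c / n) * math.log(c / n) for c in counts.values())

cells = [
    {"AKT": 1.8, "pS6": 0.2}, {"AKT": 0.4, "pS6": 0.3},
    {"AKT": 1.6, "pS6": 1.9}, {"AKT": 0.2, "pS6": 1.4},
]
states = [molecular_state(c, ["AKT", "pS6"]) for c in cells]
print("intratumor state diversity:", round(shannon_diversity(states), 3))
```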
Heterogeneous Compression of Large Collections of Evolutionary Trees.
Matthews, Suzanne J
2015-01-01
Compressing heterogeneous collections of trees is an open problem in computational phylogenetics. In a heterogeneous tree collection, each tree can contain a unique set of taxa. An ideal compression method would allow for the efficient archival of large tree collections and enable scientists to identify common evolutionary relationships over disparate analyses. In this paper, we extend TreeZip to compress heterogeneous collections of trees. TreeZip is the most efficient algorithm for compressing homogeneous tree collections. To the best of our knowledge, no other domain-based compression algorithm exists for large heterogeneous tree collections or enables their rapid analysis. Our experimental results indicate that TreeZip averages 89.03 percent (72.69 percent) space savings on unweighted (weighted) collections of trees when the level of heterogeneity in a collection is moderate. The organization of the TRZ file allows for efficient computations over heterogeneous data. For example, consensus trees can be computed in mere seconds. Lastly, combining the TreeZip compressed (TRZ) file with general-purpose compression yields average space savings of 97.34 percent (81.43 percent) on unweighted (weighted) collections of trees. Our results lead us to believe that TreeZip will prove invaluable in the efficient archival of tree collections, and enables scientists to develop novel methods for relating heterogeneous collections of trees.
Optimization of over-provisioned clouds
NASA Astrophysics Data System (ADS)
Balashov, N.; Baranov, A.; Korenkov, V.
2016-09-01
The functioning of modern applications in cloud centers is characterized by the huge variety of computational workloads they generate. This causes uneven workload distribution and, as a result, leads to ineffective utilization of cloud centers' hardware. This article addresses possible ways to solve this issue and demonstrates that it is necessary to optimize cloud centers' hardware utilization. As one possible way to solve the problem of inefficient resource utilization in heterogeneous cloud environments, an algorithm of dynamic re-allocation of virtual resources is suggested.
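A toy version of the suggested re-allocation idea: migrate the smallest virtual machine from the most-loaded to the least-loaded hypervisor while the load imbalance exceeds a threshold. Host names, loads and the threshold are invented; the paper's actual algorithm is not reproduced here.

```python
# Toy dynamic re-allocation in an over-provisioned cloud: move VMs from the
# hottest to the coldest hypervisor until the imbalance is small enough.
def rebalance(hosts, threshold=0.25):
    """hosts: {name: [vm_load, ...]}; migrate one VM per call if imbalanced."""
    load = {h: sum(v) for h, v in hosts.items()}
    hot, cold = max(load, key=load.get), min(load, key=load.get)
    if load[hot] - load[cold] <= threshold:
        return None
    vm = min(hosts[hot])                     # smallest VM on the hottest host
    hosts[hot].remove(vm)
    hosts[cold].append(vm)
    return f"migrated a {vm:.2f}-load VM from {hot} to {cold}"

cloud = {"hv1": [0.6, 0.5, 0.4], "hv2": [0.2], "hv3": [0.3, 0.1]}
while (msg := rebalance(cloud)):
    print(msg)
print({h: round(sum(v), 2) for h, v in cloud.items()})
```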
NASA Astrophysics Data System (ADS)
Garov, A. S.; Karachevtseva, I. P.; Matveev, E. V.; Zubarev, A. E.; Florinsky, I. V.
2016-06-01
We are developing a unified distributed communication environment for processing of spatial data which integrates web-, desktop- and mobile platforms and combines a volunteer computing model with public cloud possibilities. The main idea is to create a flexible working environment for research groups, which may be scaled according to required data volume and computing power, while keeping infrastructure costs at a minimum. It is based upon the "single window" principle, which combines data access via geoportal functionality, processing possibilities and communication between researchers. Using an innovative software environment, the recently developed planetary information system (http://cartsrv.mexlab.ru/geoportal) will be updated. The new system will provide spatial data processing, analysis and 3D-visualization and will be tested based on freely available Earth remote sensing data as well as Solar system planetary images from various missions. Based on this approach it will be possible to organize the research and representation of results on a new technology level, which provides more possibilities for immediate and direct reuse of research materials, including data, algorithms, methodology, and components. The new software environment is targeted at remote scientific teams, and will provide access to existing spatial distributed information, for which we suggest implementation of a user interface as an advanced front-end, e.g., for a virtual globe system.
Computing at DESY — current setup, trends and strategic directions
NASA Astrophysics Data System (ADS)
Ernst, Michael
1998-05-01
Since the HERA experiments H1 and ZEUS started data taking in '92, the computing environment at DESY has changed dramatically. Having run mainframe-centred computing for more than 20 years, DESY switched to a heterogeneous, fully distributed computing environment within only about two years in almost every corner where computing has its applications. The computing strategy was highly influenced by the needs of the user community. The collaborations are usually limited by current technology, and their ever increasing demands are the driving force for central computing to always move close to the technology edge. While DESY's central computing has multidecade experience in running Central Data Recording/Central Data Processing for HEP experiments, the most challenging task today is to provide clear and homogeneous concepts in the desktop area. Given that lowest-level commodity hardware draws more and more attention, combined with the financial constraints we are already facing today, we quickly need concepts for integrated support of a versatile device which has the potential to move into basically any computing area in HEP. Though commercial solutions, especially addressing the PC management/support issues, are expected to come to market in the next 2-3 years, we need to provide suitable solutions now. Buying PCs at DESY, currently at a rate of about 30/month, will otherwise absorb any available manpower in central computing and still leave hundreds of unhappy people alone. Though certainly not the only area, the desktop issue is one of the most important ones where we need HEP-wide collaboration to a large extent, and right now. Taking into account that there is traditionally no room for R&D at DESY, collaboration, meaning sharing experience and development resources within the HEP community, is a predominant factor for us.
NASA Technical Reports Server (NTRS)
Johnston, William E.; Gannon, Dennis; Nitzberg, Bill; Feiereisen, William (Technical Monitor)
2000-01-01
The term "Grid" refers to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. The vision for NASN's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks that will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. IPG development and deployment is addressing requirements obtained by analyzing a number of different application areas, in particular from the NASA Aero-Space Technology Enterprise. This analysis has focussed primarily on two types of users: The scientist / design engineer whose primary interest is problem solving (e.g., determining wing aerodynamic characteristics in many different operating environments), and whose primary interface to IPG will be through various sorts of problem solving frameworks. The second type of user if the tool designer: The computational scientists who convert physics and mathematics into code that can simulate the physical world. These are the two primary users of IPG, and they have rather different requirements. This paper describes the current state of IPG (the operational testbed), the set of capabilities being put into place for the operational prototype IPG, as well as some of the longer term R&D tasks.
A system for distributed intrusion detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snapp, S.R.; Brentano, J.; Dias, G.V.
1991-01-01
The study of providing security in computer networks is a rapidly growing area of interest because the network is the medium over which most attacks or intrusions on computer systems are launched. One approach to solving this problem is the intrusion-detection concept, whose basic premise is that not only is abandoning the existing and huge infrastructure of possibly-insecure computer and network systems impossible, but also replacing them by totally-secure systems may not be feasible or cost effective. Previous work on intrusion-detection systems was performed on stand-alone hosts and on a broadcast local area network (LAN) environment. The focus of our present research is to extend our network intrusion-detection concept from the LAN environment to arbitrarily wider areas, with the network topology being arbitrary as well. The generalized distributed environment is heterogeneous, i.e., the network nodes can be hosts or servers from different vendors, or some of them could be LAN managers, like our previous work, a network security monitor (NSM), as well. The proposed architecture for this distributed intrusion-detection system consists of the following components: a host manager in each host; a LAN manager for monitoring each LAN in the system; and a central manager which is placed at a single secure location and which receives reports from various host and LAN managers to process these reports, correlate them, and detect intrusions. 11 refs., 2 figs.
Implementing Internet of Things in a military command and control environment
NASA Astrophysics Data System (ADS)
Raglin, Adrienne; Metu, Somiya; Russell, Stephen; Budulas, Peter
2017-05-01
While the term Internet of Things (IoT) has been coined relatively recently, it has deep roots in multiple other areas of research including cyber-physical systems, pervasive and ubiquitous computing, embedded systems, mobile ad-hoc networks, wireless sensor networks, cellular networks, wearable computing, cloud computing, big data analytics, and intelligent agents. As the Internet of Things, these technologies have created a landscape of diverse heterogeneous capabilities and protocols that will require adaptive controls to effect linkages and changes that are useful to end users. In the context of military applications, it will be necessary to integrate disparate IoT devices into a common platform that necessarily must interoperate with proprietary military protocols, data structures, and systems. In this environment, IoT devices and data will not be homogeneous and provenance-controlled (i.e. single vendor/source/supplier owned). This paper presents a discussion of the challenges of integrating varied IoT devices and related software in a military environment. A review of contemporary commercial IoT protocols is given and as a practical example, a middleware implementation is proffered that provides transparent interoperability through a proactive message dissemination system. The implementation is described as a framework through which military applications can integrate and utilize commercial IoT in conjunction with existing military sensor networks and command and control (C2) systems.
Design and Development of ChemInfoCloud: An Integrated Cloud Enabled Platform for Virtual Screening.
Karthikeyan, Muthukumarasamy; Pandit, Deepak; Bhavasar, Arvind; Vyas, Renu
2015-01-01
The power of cloud computing and distributed computing has been harnessed to handle vast and heterogeneous data required to be processed in any virtual screening protocol. A cloud computing platform, ChemInfoCloud, was built and integrated with several chemoinformatics and bioinformatics tools. The robust engine performs the core chemoinformatics tasks of lead generation, lead optimisation and property prediction in a fast and efficient manner. It has also been provided with some of the bioinformatics functionalities including sequence alignment, active site pose prediction and protein ligand docking. Text mining, NMR chemical shift (1H, 13C) prediction and reaction fingerprint generation modules for efficient lead discovery are also implemented in this platform. We have developed an integrated problem solving cloud environment for virtual screening studies that also provides workflow management, better usability and interaction with end users using container based virtualization, OpenVz.
NASA Astrophysics Data System (ADS)
Nag, A.; Mahapatra, D. Roy; Gopalakrishnan, S.
2003-10-01
A hierarchical Genetic Algorithm (GA) is implemented in a high performance spectral finite element software for identification of delaminations in laminated composite beams. In smart structural health monitoring, the number of delaminations (or any other modes of damage) as well as their locations and sizes are by no means completely known. Only known are the healthy structural configuration (mass, stiffness and damping matrices updated from previous phases of monitoring), sensor measurements and some information about the load environment. To handle such enormous complexity, a hierarchical GA is used to represent a heterogeneous population consisting of damaged structures with different numbers of delaminations and their evolution process to identify the correct damage configuration in the structures under monitoring. We consider this similarity with the evolution process in heterogeneous populations of species in nature to develop an automated procedure to decide on what possible damaged configuration might have produced the deviation in the measured signals. Computational efficiency of the identification task is demonstrated by considering a single delamination. The behavior of the fitness function in GA, which is an important factor for fast convergence, is studied for single and multiple delaminations. Several advantages of the approach in terms of computational cost are discussed. Besides tackling other types of damage configurations, further scope of research for the development of hybrid soft-computing modules is highlighted.
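The following heavily simplified sketch shows a GA-based identification loop for a single delamination, with a toy analytic surrogate standing in for the spectral finite element forward model; the hierarchical GA over a variable number of delaminations is not reproduced, and all numbers are illustrative.

```python
# Simplified GA identification of one delamination: a candidate (location,
# size) is scored by how closely a toy forward model reproduces the
# "measured" sensor signal; the fittest candidates breed the next generation.
import random
random.seed(0)

def forward_model(location, size, x=[i / 50 for i in range(50)]):
    # toy surrogate for the sensor response of a delaminated beam
    return [size * 2.7182818 ** (-((xi - location) / (0.05 + size)) ** 2) for xi in x]

true_loc, true_size = 0.62, 0.08
measured = forward_model(true_loc, true_size)

def fitness(ind):
    sim = forward_model(*ind)
    return -sum((s - m) ** 2 for s, m in zip(sim, measured))

pop = [(random.random(), random.uniform(0.01, 0.2)) for _ in range(40)]
for gen in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = []
    while len(children) < 30:
        a, b = random.sample(parents, 2)
        child = (0.5 * (a[0] + b[0]) + random.gauss(0, 0.02),        # crossover +
                 max(0.01, 0.5 * (a[1] + b[1]) + random.gauss(0, 0.01)))  # mutation
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print(f"identified delamination: location={best[0]:.3f}, size={best[1]:.3f}")
```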
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venkata, Manjunath Gorentla; Aderholdt, William F
The pre-exascale systems are expected to have a significant amount of hierarchical and heterogeneous on-node memory, and this trend in the system architecture of extreme-scale systems is expected to continue into the exascale era. Along with hierarchical-heterogeneous memory, the system typically has a high-performing network and a compute accelerator. This system architecture is not only effective for running traditional High Performance Computing (HPC) applications (Big-Compute), but also for running data-intensive HPC applications and Big-Data applications. As a consequence, there is a growing desire to have a single system serve the needs of both Big-Compute and Big-Data applications. Though the system architecture supports the convergence of Big-Compute and Big-Data, the programming models and software layer have yet to evolve to support either hierarchical-heterogeneous memory systems or the convergence. A programming abstraction is presented to address this problem. The programming abstraction is implemented as a software library and runs on pre-exascale and exascale systems supporting current and emerging system architectures. Using distributed data structures as a central concept, it provides (1) a simple, usable, and portable abstraction for hierarchical-heterogeneous memory and (2) a unified programming abstraction for Big-Compute and Big-Data applications.
A characterization of workflow management systems for extreme-scale applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia
The automation of the execution of computational tasks is at the heart of improving scientific productivity. Over the last years, scientific workflows have been established as an important abstraction that captures data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today’s computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.
A characterization of workflow management systems for extreme-scale applications
Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia; ...
2017-02-16
The automation of the execution of computational tasks is at the heart of improving scientific productivity. Over the last years, scientific workflows have been established as an important abstraction that captures data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today’s computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.
Zheng, Song; Zhang, Qi; Zheng, Rong; Huang, Bi-Qin; Song, Yi-Lin; Chen, Xin-Chu
2017-01-01
In recent years, the smart home field has gained wide attention for its broad application prospects. However, families using smart home systems must usually adopt various heterogeneous smart devices, including sensors and devices, which makes it more difficult to manage and control their home system. How to design a unified control platform to deal with the collaborative control problem of heterogeneous smart devices is one of the greatest challenges in the current smart home field. The main contribution of this paper is to propose a universal smart home control platform architecture (IAPhome) based on a multi-agent system and communication middleware, which shows significant adaptability and advantages in many aspects, including heterogeneous devices connectivity, collaborative control, human-computer interaction and user self-management. The communication middleware is an important foundation for designing and implementing this architecture, which makes it possible to integrate heterogeneous smart devices in a flexible way. A concrete method of applying the multi-agent software technique to solve the integrated control problem of the smart home system is also presented. The proposed platform architecture has been tested in a real smart home environment, and the results indicate the effectiveness of our approach in solving the collaborative control problem of different smart devices. PMID:28926957
Zheng, Song; Zhang, Qi; Zheng, Rong; Huang, Bi-Qin; Song, Yi-Lin; Chen, Xin-Chu
2017-09-16
In recent years, the smart home field has gained wide attention for its broad application prospects. However, families using smart home systems must usually adopt various heterogeneous smart devices, including sensors and devices, which makes it more difficult to manage and control their home system. How to design a unified control platform to deal with the collaborative control problem of heterogeneous smart devices is one of the greatest challenges in the current smart home field. The main contribution of this paper is to propose a universal smart home control platform architecture (IAPhome) based on a multi-agent system and communication middleware, which shows significant adaptability and advantages in many aspects, including heterogeneous devices connectivity, collaborative control, human-computer interaction and user self-management. The communication middleware is an important foundation for designing and implementing this architecture, which makes it possible to integrate heterogeneous smart devices in a flexible way. A concrete method of applying the multi-agent software technique to solve the integrated control problem of the smart home system is also presented. The proposed platform architecture has been tested in a real smart home environment, and the results indicate the effectiveness of our approach in solving the collaborative control problem of different smart devices.
Accelerating the discovery of space-time patterns of infectious diseases using parallel computing.
Hohl, Alexander; Delmelle, Eric; Tang, Wenwu; Casas, Irene
2016-11-01
Infectious diseases have complex transmission cycles, and effective public health responses require the ability to monitor outbreaks in a timely manner. Space-time statistics facilitate the discovery of disease dynamics, including rate of spread and seasonal cyclic patterns, but are computationally demanding, especially for datasets of increasing size, diversity and availability. High-performance computing reduces the effort required to identify these patterns; however, heterogeneity in the data must be accounted for. We develop an adaptive space-time domain decomposition approach for parallel computation of the space-time kernel density. We apply our methodology to individual reported dengue cases from 2010 to 2011 in the city of Cali, Colombia. The parallel implementation reaches significant speedup compared to sequential counterparts. Density values are visualized in an interactive 3D environment, which facilitates the identification and communication of uneven space-time distribution of disease events. Our framework has the potential to enhance the timely monitoring of infectious diseases. Copyright © 2016 Elsevier Ltd. All rights reserved.
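A minimal parallel sketch of the space-time kernel density computation: the estimation grid is split into equal chunks evaluated in separate processes, using Gaussian kernels on synthetic events. The paper's contribution, the adaptive domain decomposition, is not implemented here; bandwidths, grid size and event counts are arbitrary.

```python
# Space-time kernel density over a 3D (x, y, t) grid, with the flattened grid
# decomposed into chunks that are evaluated in separate worker processes.
import numpy as np
from multiprocessing import Pool

def kde_chunk(args):
    grid_pts, events, hs, ht = args            # grid_pts: (m, 3) -> x, y, t
    d_s = np.linalg.norm(grid_pts[:, None, :2] - events[None, :, :2], axis=2)
    d_t = np.abs(grid_pts[:, None, 2] - events[None, :, 2])
    k = np.exp(-0.5 * (d_s / hs) ** 2) * np.exp(-0.5 * (d_t / ht) ** 2)
    return k.sum(axis=1)                       # density contribution of all events

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    events = rng.random((500, 3))              # x, y, t of reported cases
    xs, ys, ts = (np.linspace(0, 1, 20),) * 3
    grid = np.array(np.meshgrid(xs, ys, ts)).reshape(3, -1).T
    chunks = [(c, events, 0.05, 0.1) for c in np.array_split(grid, 4)]
    with Pool(4) as pool:
        density = np.concatenate(pool.map(kde_chunk, chunks))
    print("grid cells:", density.shape[0], "max density:", round(float(density.max()), 2))
```

With an adaptive decomposition, chunk boundaries would instead follow the uneven distribution of events so that each worker receives comparable work.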
A software architecture for multidisciplinary applications: Integrating task and data parallelism
NASA Technical Reports Server (NTRS)
Chapman, Barbara; Mehrotra, Piyush; Vanrosendale, John; Zima, Hans
1994-01-01
Data parallel languages such as Vienna Fortran and HPF can be successfully applied to a wide range of numerical applications. However, many advanced scientific and engineering applications are of a multidisciplinary and heterogeneous nature and thus do not fit well into the data parallel paradigm. In this paper we present new Fortran 90 language extensions to fill this gap. Tasks can be spawned as asynchronous activities in a homogeneous or heterogeneous computing environment; they interact by sharing access to Shared Data Abstractions (SDA's). SDA's are an extension of Fortran 90 modules, representing a pool of common data, together with a set of Methods for controlled access to these data and a mechanism for providing persistent storage. Our language supports the integration of data and task parallelism as well as nested task parallelism and thus can be used to express multidisciplinary applications in a natural and efficient way.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, J.P.; Bangs, A.L.; Butler, P.L.
Hetero Helix is a programming environment which simulates shared memory on a heterogeneous network of distributed-memory computers. The machines in the network may vary with respect to their native operating systems and internal representation of numbers. Hetero Helix presents a simple programming model to developers, and also considers the needs of designers, system integrators, and maintainers. The key software technology underlying Hetero Helix is the use of a "compiler" which analyzes the data structures in shared memory and automatically generates code which translates data representations from the format native to each machine into a common format, and vice versa. The design of Hetero Helix was motivated in particular by the requirements of robotics applications. Hetero Helix has been used successfully in an integration effort involving 27 CPUs in a heterogeneous network and a body of software totaling roughly 100,000 lines of code. 25 refs., 6 figs.
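To make the representation-translation idea concrete, here is a minimal sketch (not Hetero Helix itself) of converting a record between a machine's native layout and a fixed common layout; the field layout and format strings are assumptions for illustration.

```python
# Translate a shared-memory record between native and common ("network")
# layouts using Python's struct module.
import struct

NATIVE_FMT = "@idq"    # native byte order, alignment, and sizes
COMMON_FMT = ">idq"    # big-endian, standard sizes: the shared format

def to_common(native_bytes: bytes) -> bytes:
    """Convert a record from the native layout to the common layout."""
    fields = struct.unpack(NATIVE_FMT, native_bytes)
    return struct.pack(COMMON_FMT, *fields)

def from_common(common_bytes: bytes) -> bytes:
    """Convert a record from the common layout back to the native layout."""
    fields = struct.unpack(COMMON_FMT, common_bytes)
    return struct.pack(NATIVE_FMT, *fields)

# Example: an (int, double, long long) record round-trips across formats.
record = struct.pack(NATIVE_FMT, 42, 3.14, 10**12)
assert from_common(to_common(record)) == record
```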
Using genetic data to estimate diffusion rates in heterogeneous landscapes.
Roques, L; Walker, E; Franck, P; Soubeyrand, S; Klein, E K
2016-08-01
Having a precise knowledge of the dispersal ability of a population in a heterogeneous environment is of critical importance in agroecology and conservation biology as it can provide management tools to limit the effects of pests or to increase the survival of endangered species. In this paper, we propose a mechanistic-statistical method to estimate space-dependent diffusion parameters of spatially-explicit models based on stochastic differential equations, using genetic data. Dividing the total population into subpopulations corresponding to different habitat patches with known allele frequencies, the expected proportions of individuals from each subpopulation at each position is computed by solving a system of reaction-diffusion equations. Modelling the capture and genotyping of the individuals with a statistical approach, we derive a numerically tractable formula for the likelihood function associated with the diffusion parameters. In a simulated environment made of three types of regions, each associated with a different diffusion coefficient, we successfully estimate the diffusion parameters with a maximum-likelihood approach. Although higher genetic differentiation among subpopulations leads to more accurate estimations, once a certain level of differentiation has been reached, the finite size of the genotyped population becomes the limiting factor for accurate estimation.
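As a rough illustration of the inference scheme, the following sketch is a deliberately simplified 1D stand-in for the paper's spatially explicit model: expected patch-of-origin proportions at trap locations are obtained from Gaussian diffusion kernels, and a single diffusion coefficient is estimated by maximum likelihood. All locations, counts, and times are made up.

```python
# Toy 1D maximum-likelihood estimation of a diffusion coefficient from the
# patch of origin of genotyped captures (a simplification of the paper's setup).
import numpy as np
from scipy.optimize import minimize_scalar

patch_centres = np.array([0.0, 10.0])    # two source patches
traps = np.array([2.0, 4.0, 6.0, 8.0])   # capture locations
# hypothetical counts of captured individuals assigned to each patch (rows: traps)
counts = np.array([[18, 2], [12, 8], [7, 13], [3, 17]])
t_obs = 5.0                               # dispersal duration

def expected_proportions(D):
    # Gaussian kernel solution of the diffusion equation after time t_obs
    dens = np.exp(-(traps[:, None] - patch_centres[None, :]) ** 2 / (4 * D * t_obs))
    return dens / dens.sum(axis=1, keepdims=True)

def neg_log_lik(log_D):
    p = expected_proportions(np.exp(log_D))
    return -np.sum(counts * np.log(p))

res = minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded")
print("estimated D:", np.exp(res.x))
```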
Speciation reversal and biodiversity dynamics with hybridization in changing environments.
Seehausen, Ole; Takimoto, Gaku; Roy, Denis; Jokela, Jukka
2008-01-01
A considerable fraction of the world's biodiversity is of recent evolutionary origin and has evolved as a by-product of, and is maintained by, divergent adaptation in heterogeneous environments. Conservationists have paid attention to genetic homogenization caused by human-induced translocations (e.g. biological invasions and stocking), and to the importance of environmental heterogeneity for the ecological coexistence of species. However, far less attention has been paid to the consequences of loss of environmental heterogeneity to the genetic coexistence of sympatric species. Our review of empirical observations and our theoretical considerations on the causes and consequences of interspecific hybridization suggest that a loss of environmental heterogeneity causes a loss of biodiversity through increased genetic admixture, effectively reversing speciation. Loss of heterogeneity relaxes divergent selection and removes ecological barriers to gene flow between divergently adapted species, promoting interspecific introgressive hybridization. Since heterogeneity of natural environments is rapidly deteriorating in most biomes, the evolutionary ecology of speciation reversal ought to be fully integrated into conservation biology.
Abdullahi, Mohammed; Ngadi, Md Asri
2016-01-01
Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in cloud computing environments. The current formulation of task scheduling problems has been shown to be NP-complete, hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic nature of cloud resources makes optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and solution quality of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploration ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduced makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan.
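The simulated-annealing ingredient can be illustrated in isolation. The sketch below shows only an SA refinement step applied to a task-to-VM mapping with a makespan objective, not the full SOS/SASOS algorithm of the paper; task lengths, VM speeds, and the cooling schedule are arbitrary assumptions.

```python
# Simulated-annealing refinement of a task-to-VM assignment (makespan objective).
import math
import random

task_len = [random.uniform(1, 10) for _ in range(50)]   # task lengths (MI)
vm_speed = [1.0, 1.5, 2.0, 3.0]                         # VM speeds (MIPS)

def makespan(assign):
    load = [0.0] * len(vm_speed)
    for t, v in enumerate(assign):
        load[v] += task_len[t] / vm_speed[v]
    return max(load)

def sa_refine(assign, temp=5.0, cooling=0.95, steps=2000):
    best, best_cost = list(assign), makespan(assign)
    cur, cur_cost = list(assign), best_cost
    for _ in range(steps):
        cand = list(cur)
        cand[random.randrange(len(cand))] = random.randrange(len(vm_speed))
        cost = makespan(cand)
        # accept better moves always, worse moves with temperature-dependent probability
        if cost < cur_cost or random.random() < math.exp((cur_cost - cost) / temp):
            cur, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = cand, cost
        temp *= cooling
    return best, best_cost

initial = [random.randrange(len(vm_speed)) for _ in task_len]
print("initial makespan:", makespan(initial))
print("refined makespan:", sa_refine(initial)[1])
```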
Abdullahi, Mohammed; Ngadi, Md Asri
2016-01-01
Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in cloud computing environments. The current formulation of task scheduling problems has been shown to be NP-complete, hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic nature of cloud resources makes optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and solution quality of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploration ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduced makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan. PMID:27348127
Fasani, Rick A; Livi, Carolina B; Choudhury, Dipanwita R; Kleensang, Andre; Bouhifd, Mounir; Pendse, Salil N; McMullen, Patrick D; Andersen, Melvin E; Hartung, Thomas; Rosenberg, Michael
2015-01-01
The Human Toxome Project is part of a long-term vision to modernize toxicity testing for the 21st century. In the initial phase of the project, a consortium of six academic, commercial, and government organizations has partnered to map pathways of toxicity, using endocrine disruption as a model hazard. Experimental data is generated at multiple sites, and analyzed using a range of computational tools. While effectively gathering, managing, and analyzing the data for high-content experiments is a challenge in its own right, doing so for a growing number of -omics technologies, with larger data sets, across multiple institutions complicates the process. Interestingly, one of the most difficult, ongoing challenges has been the computational collaboration between the geographically separate institutions. Existing solutions cannot handle the growing heterogeneous data, provide a computational environment for consistent analysis, accommodate different workflows, and adapt to the constantly evolving methods and goals of a research project. To meet the needs of the project, we have created and managed The Human Toxome Collaboratorium, a shared computational environment hosted on third-party cloud services. The Collaboratorium provides a familiar virtual desktop, with a mix of commercial, open-source, and custom-built applications. It shares some of the challenges of traditional information technology, but with unique and unexpected constraints that emerge from the cloud. Here we describe the problems we faced, the current architecture of the solution, an example of its use, the major lessons we learned, and the future potential of the concept. In particular, the Collaboratorium represents a novel distribution method that could increase the reproducibility and reusability of results from similar large, multi-omic studies.
NASA Astrophysics Data System (ADS)
Zhu, J.; Winter, C. L.; Wang, Z.
2015-08-01
Computational experiments are performed to evaluate the effects of locally heterogeneous conductivity fields on regional exchanges of water between stream and aquifer systems in the Middle Heihe River Basin (MHRB) of northwestern China. The effects are found to be nonlinear in the sense that simulated discharges from aquifers to streams are systematically lower than discharges produced by a base model parameterized with relatively coarse effective conductivity. A similar, but weaker, effect is observed for stream leakage. The study is organized around three hypotheses: (H1) small-scale spatial variations of conductivity significantly affect regional exchanges of water between streams and aquifers in river basins, (H2) aggregating small-scale heterogeneities into regional effective parameters systematically biases estimates of stream-aquifer exchanges, and (H3) the biases result from slow-paths in groundwater flow that emerge due to small-scale heterogeneities. The hypotheses are evaluated by comparing stream-aquifer fluxes produced by the base model to fluxes simulated using realizations of the MHRB characterized by local (grid-scale) heterogeneity. Levels of local heterogeneity are manipulated as control variables by adjusting coefficients of variation. All models are implemented using the MODFLOW simulation environment, and the PEST tool is used to calibrate effective conductivities defined over 16 zones within the MHRB. The effective parameters are also used as expected values to develop log-normally distributed conductivity (K) fields on local grid scales. Stream-aquifer exchanges are simulated with K fields at both scales and then compared. Results show that the effects of small-scale heterogeneities significantly influence exchanges with simulations based on local-scale heterogeneities always producing discharges that are less than those produced by the base model. Although aquifer heterogeneities are uncorrelated at local scales, they appear to induce coherent slow-paths in groundwater fluxes that in turn reduce aquifer-stream exchanges. Since surface water-groundwater exchanges are critical hydrologic processes in basin-scale water budgets, these results also have implications for water resources management.
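One step in that workflow, populating a model grid with uncorrelated, log-normally distributed K values whose mean matches a zone's effective conductivity and whose spread is set by a coefficient of variation, can be sketched as follows; grid size, K_eff, and CV are illustrative values, not the study's.

```python
# Draw an uncorrelated log-normal conductivity field with a prescribed mean
# (the zone's effective K) and coefficient of variation.
import numpy as np

def lognormal_K_field(K_eff, cv, shape, seed=0):
    """K field with mean K_eff and coefficient of variation cv."""
    sigma2 = np.log(1.0 + cv ** 2)          # variance of ln K
    mu = np.log(K_eff) - 0.5 * sigma2       # chosen so that E[K] = K_eff
    rng = np.random.default_rng(seed)
    return rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=shape)

K = lognormal_K_field(K_eff=5.0, cv=1.0, shape=(100, 100))
print(K.mean(), K.std() / K.mean())         # approximately 5.0 and 1.0
```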
Run Environment and Data Management for Earth System Models
NASA Astrophysics Data System (ADS)
Widmann, H.; Lautenschlager, M.; Fast, I.; Legutke, S.
2009-04-01
The Integrating Model and Data Infrastructure (IMDI) developed and maintained by the Model and Data Group (M&D) comprises the Standard Compile Environment (SCE) and the Standard Run Environment (SRE). The IMDI software has a modular design, which allows a suite of model components to be combined and coupled, and the tasks to be executed independently and on various platforms. Furthermore, the modular structure enables extension to new model combinations and new platforms. The SRE presented here enables the configuration and performance of earth system model experiments from model integration up to storage and visualization of data. We focus on recently implemented tasks such as synchronous database filling, graphical monitoring and automatic generation of metadata in XML form during run time. We also address the capability to run experiments in heterogeneous IT environments with different computing systems for model integration, data processing and storage. These features are demonstrated for model configurations and on platforms used in current or upcoming projects, e.g. MILLENNIUM or IPCC AR5.
A proto-Data Processing Center for LISA
NASA Astrophysics Data System (ADS)
Cavet, Cécile; Petiteau, Antoine; Le Jeune, Maude; Plagnol, Eric; Marin-Martholaz, Etienne; Bayle, Jean-Baptiste
2017-05-01
Preparation for the LISA mission requires studying and defining a new data analysis framework, capable of dealing with highly heterogeneous CPU needs and of exploiting emergent information technologies. In this context, a prototype of the mission’s Data Processing Center (DPC) has been initiated. The DPC is designed to efficiently manage computing constraints and to offer a common infrastructure where the whole collaboration can contribute to development work. Several tools such as continuous integration (CI) have already been delivered to the collaboration and are presently used for simulations and performance studies. This article presents the progress made on this collaborative environment and also discusses possible next steps towards an on-demand computing infrastructure. This activity is supported by CNES as part of the French contribution to LISA.
FAST: framework for heterogeneous medical image computing and visualization.
Smistad, Erik; Bozorgi, Mohammadmehdi; Lindseth, Frank
2015-11-01
Computer systems are becoming increasingly heterogeneous in the sense that they consist of different processors, such as multi-core CPUs and graphics processing units. As the amount of medical image data increases, it is crucial to exploit the computational power of these processors. However, this is currently difficult due to several factors, such as driver errors, processor differences, and the need for low-level memory handling. This paper presents a novel FrAmework for heterogeneouS medical image compuTing and visualization (FAST). The framework aims to make it easier to simultaneously process and visualize medical images efficiently on heterogeneous systems. FAST uses common image processing programming paradigms and hides the details of memory handling from the user, while enabling the use of all processors and cores on a system. The framework is open-source, cross-platform and available online. Code examples and performance measurements are presented to show the simplicity and efficiency of FAST. The results are compared to the Insight Toolkit (ITK) and the Visualization Toolkit (VTK) and show that the presented framework is faster with up to 20 times speedup on several common medical imaging algorithms. FAST enables efficient medical image computing and visualization on heterogeneous systems. Code examples and performance evaluations have demonstrated that the toolkit is both easy to use and performs better than existing frameworks, such as ITK and VTK.
Injectivity Evaluation for Offshore CO 2 Sequestration in Marine Sediments
Dai, Zhenxue; Zhang, Ye; Stauffer, Philip; ...
2017-08-18
Global and regional climate change caused by greenhouse gas emissions has stimulated interest in developing various technologies (such as carbon dioxide (CO2) geologic sequestration in brine reservoirs) to reduce the concentrations of CO2 in the atmosphere. Our study develops a statistical framework to identify gravitational CO2 trapping processes and to quantitatively evaluate both CO2 injectivity (or storage capacity) and leakage potential from marine sediments which exhibit heterogeneous permeability and variable thicknesses. Here, we focus on sets of geostatistically-based heterogeneous models populated with fluid flow parameters from several reservoir sites in the U.S. Gulf of Mexico (GOM). A computationally efficient uncertainty quantification study was conducted with results suggesting that permeability heterogeneity and anisotropy, seawater depth, and sediment thickness can all significantly impact CO2 flow and trapping. Large permeability/porosity heterogeneity can enhance gravitational, capillary, and dissolution trapping, which acts to deter CO2 upward migration and subsequent leakage onto the seafloor. When log permeability variance is 5, self-sealing with heterogeneity-enhanced gravitational trapping can be achieved even when water depth is 1.2 km. This extends the previously identified self-sealing condition that water depth be greater than 2.7 km. Our results have yielded valuable insight into the conditions under which safe storage of CO2 can be achieved in offshore environments. The developed statistical framework is general and can be adapted to study other offshore sites worldwide.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Zhifeng; Liu, Chongxuan; Liu, Yuanyuan
Biofilms are critical locations for biogeochemical reactions in the subsurface environment. The occurrence and distribution of biofilms at microscale as well as their impacts on macroscopic biogeochemical reaction rates are still poorly understood. This paper investigated the formation and distributions of biofilms in heterogeneous sediments using multiscale models, and evaluated the effects of biofilm heterogeneity on local and macroscopic biogeochemical reaction rates. Sediment pore structures derived from X-ray computed tomography were used to simulate the microscale flow dynamics and biofilm distribution in the sediment column. The response of biofilm formation and distribution to the variations in hydraulic and chemical properties was first examined. One representative biofilm distribution was then utilized to evaluate its effects on macroscopic reaction rates using nitrate reduction as an example. The results revealed that microorganisms primarily grew on the surfaces of grains and aggregates near preferential flow paths where both electron donor and acceptor were readily accessible, leading to the heterogeneous distribution of biofilms in the sediments. The heterogeneous biofilm distribution decreased the macroscopic rate of biogeochemical reactions as compared with those in homogeneous cases. Operationally considering the heterogeneous biofilm distribution in macroscopic reactive transport models, such as using a dual porosity domain concept, can significantly improve the prediction of biogeochemical reaction rates. Overall, this study provided important insights into the biofilm formation and distribution in soils and sediments as well as their impacts on the macroscopic manifestation of reaction rates.
Injectivity Evaluation for Offshore CO 2 Sequestration in Marine Sediments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Zhenxue; Zhang, Ye; Stauffer, Philip
Global and regional climate change caused by greenhouse gas emissions has stimulated interest in developing various technologies (such as carbon dioxide (CO2) geologic sequestration in brine reservoirs) to reduce the concentrations of CO2 in the atmosphere. Our study develops a statistical framework to identify gravitational CO2 trapping processes and to quantitatively evaluate both CO2 injectivity (or storage capacity) and leakage potential from marine sediments which exhibit heterogeneous permeability and variable thicknesses. Here, we focus on sets of geostatistically-based heterogeneous models populated with fluid flow parameters from several reservoir sites in the U.S. Gulf of Mexico (GOM). A computationally efficient uncertainty quantification study was conducted with results suggesting that permeability heterogeneity and anisotropy, seawater depth, and sediment thickness can all significantly impact CO2 flow and trapping. Large permeability/porosity heterogeneity can enhance gravitational, capillary, and dissolution trapping, which acts to deter CO2 upward migration and subsequent leakage onto the seafloor. When log permeability variance is 5, self-sealing with heterogeneity-enhanced gravitational trapping can be achieved even when water depth is 1.2 km. This extends the previously identified self-sealing condition that water depth be greater than 2.7 km. Our results have yielded valuable insight into the conditions under which safe storage of CO2 can be achieved in offshore environments. The developed statistical framework is general and can be adapted to study other offshore sites worldwide.
NASA Astrophysics Data System (ADS)
Yan, Zhifeng; Liu, Chongxuan; Liu, Yuanyuan; Bailey, Vanessa L.
2017-11-01
Biofilms are critical locations for biogeochemical reactions in the subsurface environment. The occurrence and distribution of biofilms at microscale as well as their impacts on macroscopic biogeochemical reaction rates are still poorly understood. This paper investigated the formation and distributions of biofilms in heterogeneous sediments using multiscale models and evaluated the effects of biofilm heterogeneity on local and macroscopic biogeochemical reaction rates. Sediment pore structures derived from X-ray computed tomography were used to simulate the microscale flow dynamics and biofilm distribution in the sediment column. The response of biofilm formation and distribution to the variations in hydraulic and chemical properties was first examined. One representative biofilm distribution was then utilized to evaluate its effects on macroscopic reaction rates using nitrate reduction as an example. The results revealed that microorganisms primarily grew on the surfaces of grains and aggregates near preferential flow paths where both electron donor and acceptor were readily accessible, leading to the heterogeneous distribution of biofilms in the sediments. The heterogeneous biofilm distribution decreased the macroscopic rate of biogeochemical reactions as compared with those in homogeneous cases. Operationally considering the heterogeneous biofilm distribution in macroscopic reactive transport models such as using dual porosity domain concept can significantly improve the prediction of biogeochemical reaction rates. Overall, this study provided important insights into the biofilm formation and distribution in soils and sediments as well as their impacts on the macroscopic manifestation of reaction rates.
ADAPTIVE-GRID SIMULATION OF GROUNDWATER FLOW IN HETEROGENEOUS AQUIFERS. (R825689C068)
The prediction of contaminant transport in porous media requires the computation of the flow velocity. This work presents a methodology for high-accuracy computation of flow in a heterogeneous isotropic formation, employing a dual-flow formulation and adaptive...
Maintenance of ventricular fibrillation in heterogeneous ventricle.
Arevalo, Hamenegild J; Trayanova, Natalia A
2006-01-01
Although ventricular fibrillation (VF) is the prevalent cause of sudden cardiac death, the mechanisms that underlie VF remain elusive. One possible explanation is that VF is driven by a single robust rotor that is the source of wavefronts that break up due to functional heterogeneities. Previous 2D computer simulations have proposed that a heterogeneity in background potassium current (IK1) can serve as the substrate for the formation of mother rotor activity. This study incorporates IK1 heterogeneity between the left and right ventricles in a realistic 3D rabbit ventricle model to examine its effects on the organization of VF. Computer simulations show that the IK1 heterogeneity contributes to the initiation and maintenance of VF by providing regions of different refractoriness, which serve as sites of wave break and rotor formation. A single rotor that drives the fibrillatory activity in the ventricle is not found in this study. Instead, multiple sites of reentry are recorded throughout the ventricle. Calculation of dominant frequencies for each myocardial node yields no significant difference between the dominant frequency of the LV and the RV. The 3D computer simulations suggest that IK1 spatial heterogeneity alone cannot lead to the formation of a stable rotor.
The Research and Implementation of MUSER CLEAN Algorithm Based on OpenCL
NASA Astrophysics Data System (ADS)
Feng, Y.; Chen, K.; Deng, H.; Wang, F.; Mei, Y.; Wei, S. L.; Dai, W.; Yang, Q. P.; Liu, Y. B.; Wu, J. P.
2017-03-01
It is urgent to carry out high-performance data processing on a single machine in the development of astronomical software. However, due to differing machine configurations, traditional programming techniques such as multi-threading and CUDA (Compute Unified Device Architecture)+GPU (Graphic Processing Unit) have obvious limitations in portability and seamlessness between different operating systems. The OpenCL (Open Computing Language) used in the development of the MUSER (MingantU SpEctral Radioheliograph) data processing system is introduced, and the Högbom CLEAN algorithm is re-implemented as a parallel CLEAN algorithm using the Python language and the PyOpenCL extension package. The experimental results show that the CLEAN algorithm based on OpenCL has approximately equal operating efficiency compared with the former CLEAN algorithm based on CUDA. More importantly, the data processing in a CPU-only (Central Processing Unit) environment of this system can also achieve high performance, which solves the problem of the environmental dependence of CUDA+GPU. Overall, the research improves the adaptability of the system with emphasis on the performance of MUSER image clean computing. In the meanwhile, the realization of OpenCL in MUSER proves its availability in scientific data processing. In view of the high-performance computing features of OpenCL in heterogeneous environments, it will probably become a preferred technology in future high-performance astronomical software development.
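For readers unfamiliar with the host-side pattern PyOpenCL imposes, the following minimal example shows a kernel being built and launched on whatever OpenCL device is available; it is a generic element-wise kernel, not the MUSER CLEAN kernel itself, and the kernel name and gain parameter are illustrative.

```python
# Minimal PyOpenCL host code: compile a kernel and run it on any OpenCL device.
import numpy as np
import pyopencl as cl

src = """
__kernel void scale_add(__global const float *a,
                        __global const float *b,
                        __global float *out,
                        const float gain)
{
    int i = get_global_id(0);
    out[i] = a[i] + gain * b[i];
}
"""

ctx = cl.create_some_context()            # picks an available CPU or GPU device
queue = cl.CommandQueue(ctx)
prg = cl.Program(ctx, src).build()

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)
mf = cl.mem_flags
a_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_g = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg.scale_add(queue, a.shape, None, a_g, b_g, out_g, np.float32(0.1))
out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_g)
assert np.allclose(out, a + 0.1 * b)
```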
Zhong, Qing; Rüschoff, Jan H.; Guo, Tiannan; Gabrani, Maria; Schüffler, Peter J.; Rechsteiner, Markus; Liu, Yansheng; Fuchs, Thomas J.; Rupp, Niels J.; Fankhauser, Christian; Buhmann, Joachim M.; Perner, Sven; Poyet, Cédric; Blattner, Miriam; Soldini, Davide; Moch, Holger; Rubin, Mark A.; Noske, Aurelia; Rüschoff, Josef; Haffner, Michael C.; Jochum, Wolfram; Wild, Peter J.
2016-01-01
Recent large-scale genome analyses of human tissue samples have uncovered a high degree of genetic alterations and tumour heterogeneity in most tumour entities, independent of morphological phenotypes and histopathological characteristics. Assessment of genetic copy-number variation (CNV) and tumour heterogeneity by fluorescence in situ hybridization (ISH) provides additional tissue morphology at single-cell resolution, but it is labour intensive with limited throughput and high inter-observer variability. We present an integrative method combining bright-field dual-colour chromogenic and silver ISH assays with an image-based computational workflow (ISHProfiler), for accurate detection of molecular signals, high-throughput evaluation of CNV, expressive visualization of multi-level heterogeneity (cellular, inter- and intra-tumour heterogeneity), and objective quantification of heterogeneous genetic deletions (PTEN) and amplifications (19q12, HER2) in diverse human tumours (prostate, endometrial, ovarian and gastric), using various tissue sizes and different scanners, with unprecedented throughput and reproducibility. PMID:27052161
Zhong, Qing; Rüschoff, Jan H; Guo, Tiannan; Gabrani, Maria; Schüffler, Peter J; Rechsteiner, Markus; Liu, Yansheng; Fuchs, Thomas J; Rupp, Niels J; Fankhauser, Christian; Buhmann, Joachim M; Perner, Sven; Poyet, Cédric; Blattner, Miriam; Soldini, Davide; Moch, Holger; Rubin, Mark A; Noske, Aurelia; Rüschoff, Josef; Haffner, Michael C; Jochum, Wolfram; Wild, Peter J
2016-04-07
Recent large-scale genome analyses of human tissue samples have uncovered a high degree of genetic alterations and tumour heterogeneity in most tumour entities, independent of morphological phenotypes and histopathological characteristics. Assessment of genetic copy-number variation (CNV) and tumour heterogeneity by fluorescence in situ hybridization (ISH) provides additional tissue morphology at single-cell resolution, but it is labour intensive with limited throughput and high inter-observer variability. We present an integrative method combining bright-field dual-colour chromogenic and silver ISH assays with an image-based computational workflow (ISHProfiler), for accurate detection of molecular signals, high-throughput evaluation of CNV, expressive visualization of multi-level heterogeneity (cellular, inter- and intra-tumour heterogeneity), and objective quantification of heterogeneous genetic deletions (PTEN) and amplifications (19q12, HER2) in diverse human tumours (prostate, endometrial, ovarian and gastric), using various tissue sizes and different scanners, with unprecedented throughput and reproducibility.
RSTensorFlow: GPU Enabled TensorFlow for Deep Learning on Commodity Android Devices
Alzantot, Moustafa; Wang, Yingnan; Ren, Zhengshuang; Srivastava, Mani B.
2018-01-01
Mobile devices have become an essential part of our daily lives. By virtue of both their increasing computing power and the recent progress made in AI, mobile devices have evolved to act as intelligent assistants in many tasks rather than a mere way of making phone calls. However, popular and commonly used tools and frameworks for machine intelligence still lack the ability to make proper use of the available heterogeneous computing resources on mobile devices. In this paper, we study the benefits of utilizing the heterogeneous (CPU and GPU) computing resources available on commodity Android devices while running deep learning models. We leveraged the heterogeneous computing framework RenderScript to accelerate the execution of deep learning models on commodity Android devices. Our system is implemented as an extension to the popular open-source framework TensorFlow. By integrating our acceleration framework tightly into TensorFlow, machine learning engineers can now easily take advantage of the heterogeneous computing resources on mobile devices without the need for any extra tools. We evaluate our system on different Android phone models to study the trade-offs of running different neural network operations on the GPU. We also compare the performance of running different model architectures, such as convolutional and recurrent neural networks, on the CPU only versus using heterogeneous computing resources. Our results show that the GPUs on these phones are capable of offering substantial performance gains in matrix multiplication on mobile devices. As a result, models that involve multiplication of large matrices can run much faster (approx. 3 times faster in our experiments) due to GPU support. PMID:29629431
RSTensorFlow: GPU Enabled TensorFlow for Deep Learning on Commodity Android Devices.
Alzantot, Moustafa; Wang, Yingnan; Ren, Zhengshuang; Srivastava, Mani B
2017-06-01
Mobile devices have become an essential part of our daily lives. By virtue of both their increasing computing power and the recent progress made in AI, mobile devices have evolved to act as intelligent assistants in many tasks rather than a mere way of making phone calls. However, popular and commonly used tools and frameworks for machine intelligence still lack the ability to make proper use of the available heterogeneous computing resources on mobile devices. In this paper, we study the benefits of utilizing the heterogeneous (CPU and GPU) computing resources available on commodity Android devices while running deep learning models. We leveraged the heterogeneous computing framework RenderScript to accelerate the execution of deep learning models on commodity Android devices. Our system is implemented as an extension to the popular open-source framework TensorFlow. By integrating our acceleration framework tightly into TensorFlow, machine learning engineers can now easily take advantage of the heterogeneous computing resources on mobile devices without the need for any extra tools. We evaluate our system on different Android phone models to study the trade-offs of running different neural network operations on the GPU. We also compare the performance of running different model architectures, such as convolutional and recurrent neural networks, on the CPU only versus using heterogeneous computing resources. Our results show that the GPUs on these phones are capable of offering substantial performance gains in matrix multiplication on mobile devices. As a result, models that involve multiplication of large matrices can run much faster (approx. 3 times faster in our experiments) due to GPU support.
2010-01-01
Background Simulation of sophisticated biological models requires considerable computational power. These models typically integrate together numerous biological phenomena such as spatially-explicit heterogeneous cells, cell-cell interactions, cell-environment interactions and intracellular gene networks. The recent advent of programming for graphical processing units (GPU) opens up the possibility of developing more integrative, detailed and predictive biological models while at the same time decreasing the computational cost to simulate those models. Results We construct a 3D model of epidermal development and provide a set of GPU algorithms that executes significantly faster than sequential central processing unit (CPU) code. We provide a parallel implementation of the subcellular element method for individual cells residing in a lattice-free spatial environment. Each cell in our epidermal model includes an internal gene network, which integrates cellular interaction of Notch signaling together with environmental interaction of basement membrane adhesion, to specify cellular state and behaviors such as growth and division. We take a pedagogical approach to describing how modeling methods are efficiently implemented on the GPU including memory layout of data structures and functional decomposition. We discuss various programmatic issues and provide a set of design guidelines for GPU programming that are instructive to avoid common pitfalls as well as to extract performance from the GPU architecture. Conclusions We demonstrate that GPU algorithms represent a significant technological advance for the simulation of complex biological models. We further demonstrate with our epidermal model that the integration of multiple complex modeling methods for heterogeneous multicellular biological processes is both feasible and computationally tractable using this new technology. We hope that the provided algorithms and source code will be a starting point for modelers to develop their own GPU implementations, and encourage others to implement their modeling methods on the GPU and to make that code available to the wider community. PMID:20696053
Spatial heterogeneity lowers rather than increases host-parasite specialization.
Hesse, E; Best, A; Boots, M; Hall, A R; Buckling, A
2015-09-01
Abiotic environmental heterogeneity can promote the evolution of diverse resource specialists, which in turn may increase the degree of host-parasite specialization. We coevolved Pseudomonas fluorescens and lytic phage ϕ2 in spatially structured populations, each consisting of two interconnected subpopulations evolving in the same or different nutrient media (homogeneous and heterogeneous environments, respectively). Counter to the normal expectation, host-parasite specialization was significantly lower in heterogeneous compared with homogeneous environments. This result could not be explained by dispersal homogenizing populations, as this would have resulted in the heterogeneous treatments having levels of specialization equal to or greater than that of the homogeneous environments. We argue that selection for costly generalists is greatest when the coevolving species are exposed to diverse environmental conditions and that this can provide an explanation for our results. A simple coevolutionary model of this process suggests that this can be a general mechanism by which environmental heterogeneity can reduce rather than increase host-parasite specialization. © 2015 The Authors. J. EVOL. BIOL. Journal of Evolutionary Biology Published by John Wiley & Sons Ltd on Behalf of European Society for Evolutionary Biology.
Hellmann, Christine; Große-Stoltenberg, André; Thiele, Jan; Oldeland, Jens; Werner, Christiane
2017-06-23
Spatial heterogeneity of ecosystems crucially influences plant performance, while in return plant feedbacks on their environment may increase heterogeneous patterns. This is of particular relevance for exotic plant invaders that transform native ecosystems, yet approaches integrating geospatial information of environmental heterogeneity and plant-plant interaction are lacking. Here, we combined remotely sensed information of site topography and vegetation cover with a functional tracer of the N cycle, δ15N. Based on the case study of the invasion of an N2-fixing acacia in a nutrient-poor dune ecosystem, we present the first model that can successfully predict (R2 = 0.6) small-scale spatial variation of foliar δ15N in a non-fixing native species from observed geospatial data. Thereby, the generalized additive mixed model revealed modulating effects of heterogeneous environments on invader impacts. Hence, linking remote sensing techniques with tracers of biological processes will advance our understanding of the dynamics and functioning of spatially structured heterogeneous systems from small to large spatial scales.
Asynchronous Replica Exchange Software for Grid and Heterogeneous Computing.
Gallicchio, Emilio; Xia, Junchao; Flynn, William F; Zhang, Baofeng; Samlalsingh, Sade; Mentes, Ahmet; Levy, Ronald M
2015-11-01
Parallel replica exchange sampling is an extended ensemble technique often used to accelerate the exploration of the conformational ensemble of atomistic molecular simulations of chemical systems. Inter-process communication and coordination requirements have historically discouraged the deployment of replica exchange on distributed and heterogeneous resources. Here we describe the architecture of a software package (named ASyncRE) for performing asynchronous replica exchange molecular simulations on volunteer computing grids and heterogeneous high-performance clusters. The asynchronous replica exchange algorithm on which the software is based avoids centralized synchronization steps and the need for direct communication between remote processes. It allows molecular dynamics threads to progress at different rates and enables parameter exchanges among arbitrary sets of replicas independently from other replicas. ASyncRE is written in Python following a modular design conducive to extensions to various replica exchange schemes and molecular dynamics engines. Applications of the software for the modeling of association equilibria of supramolecular and macromolecular complexes on BOINC campus computational grids and on the CPU/MIC heterogeneous hardware of the XSEDE Stampede supercomputer are illustrated. They show the ability of ASyncRE to utilize large grids of desktop computers running the Windows, MacOS, and/or Linux operating systems as well as collections of high-performance heterogeneous hardware devices.
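The exchange step can be illustrated with a toy sketch that is not ASyncRE's actual API: temperatures are swapped between arbitrary pairs of currently idle replicas using the standard Metropolis criterion, with no global synchronization across all replicas. Replica states and the idle flags are made-up placeholders.

```python
# Toy asynchronous temperature exchange among whichever replicas are idle.
import math
import random

k_B = 1.0  # reduced units (an assumption for this sketch)

def try_exchange(rep_a, rep_b):
    """Swap temperatures of two idle replicas if the Metropolis test passes."""
    beta_a = 1.0 / (k_B * rep_a["T"])
    beta_b = 1.0 / (k_B * rep_b["T"])
    # delta = -(beta_a - beta_b)(E_a - E_b); accept with prob min(1, exp(-delta))
    delta = (beta_a - beta_b) * (rep_b["E"] - rep_a["E"])
    if delta <= 0 or random.random() < math.exp(-delta):
        rep_a["T"], rep_b["T"] = rep_b["T"], rep_a["T"]
        return True
    return False

# Each replica carries a temperature, its latest potential energy, and a flag
# indicating whether its MD thread is currently between cycles ("idle").
replicas = [{"T": 300 + 10 * i,
             "E": -100.0 + random.gauss(0, 5),
             "idle": random.random() < 0.5} for i in range(8)]

idle = [r for r in replicas if r["idle"]]
random.shuffle(idle)
for r1, r2 in zip(idle[::2], idle[1::2]):   # pair up whoever is idle right now
    try_exchange(r1, r2)
```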
High-Order/Low-Order methods for ocean modeling
Newman, Christopher; Womeldorff, Geoff; Chacón, Luis; ...
2015-06-01
In this study, we examine a High Order/Low Order (HOLO) approach for a z-level ocean model and show that the traditional semi-implicit and split-explicit methods, as well as a recent preconditioning strategy, can easily be cast in the framework of HOLO methods. The HOLO formulation admits an implicit-explicit method that is algorithmically scalable and second-order accurate, allowing timesteps much larger than the barotropic time scale. We show how HOLO approaches, in particular the implicit-explicit method, can provide a solid route for ocean simulation to heterogeneous computing and exascale environments.
ARIANNA: A research environment for neuroimaging studies in autism spectrum disorders.
Retico, Alessandra; Arezzini, Silvia; Bosco, Paolo; Calderoni, Sara; Ciampa, Alberto; Coscetti, Simone; Cuomo, Stefano; De Santis, Luca; Fabiani, Dario; Fantacci, Maria Evelina; Giuliano, Alessia; Mazzoni, Enrico; Mercatali, Pietro; Miscali, Giovanni; Pardini, Massimiliano; Prosperi, Margherita; Romano, Francesco; Tamburini, Elena; Tosetti, Michela; Muratori, Filippo
2017-08-01
The complexity and heterogeneity of Autism Spectrum Disorders (ASD) require the implementation of dedicated analysis techniques to obtain the maximum from the interrelationship among many variables that describe affected individuals, spanning from clinical phenotypic characterization and genetic profile to structural and functional brain images. The ARIANNA project has developed a collaborative interdisciplinary research environment that is easily accessible to the community of researchers working on ASD (https://arianna.pi.infn.it). The main goals of the project are: to analyze neuroimaging data acquired in multiple sites with multivariate approaches based on machine learning; to detect structural and functional brain characteristics that allow the distinguishing of individuals with ASD from control subjects; to identify neuroimaging-based criteria to stratify the population with ASD to support the future development of personalized treatments. Secure data handling and storage are guaranteed within the project, as well as the access to fast grid/cloud-based computational resources. This paper outlines the web-based architecture, the computing infrastructure and the collaborative analysis workflows at the basis of the ARIANNA interdisciplinary working environment. It also demonstrates the full functionality of the research platform. The availability of this innovative working environment for analyzing clinical and neuroimaging information of individuals with ASD is expected to support researchers in disentangling complex data thus facilitating their interpretation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Parikh, Priti P; Minning, Todd A; Nguyen, Vinh; Lalithsena, Sarasi; Asiaee, Amir H; Sahoo, Satya S; Doshi, Prashant; Tarleton, Rick; Sheth, Amit P
2012-01-01
Research on the biology of parasites requires a sophisticated and integrated computational platform to query and analyze large volumes of data, representing both unpublished (internal) and public (external) data sources. Effective analysis of an integrated data resource using knowledge discovery tools would significantly aid biologists in conducting their research, for example, through identifying various intervention targets in parasites and in deciding the future direction of ongoing as well as planned projects. A key challenge in achieving this objective is the heterogeneity between the internal lab data, usually stored as flat files, Excel spreadsheets or custom-built databases, and the external databases. Reconciling the different forms of heterogeneity and effectively integrating data from disparate sources is a nontrivial task for biologists and requires a dedicated informatics infrastructure. Thus, we developed an integrated environment using Semantic Web technologies that may provide biologists the tools for managing and analyzing their data, without the need for acquiring in-depth computer science knowledge. We developed a semantic problem-solving environment (SPSE) that uses ontologies to integrate internal lab data with external resources in a Parasite Knowledge Base (PKB), which has the ability to query across these resources in a unified manner. The SPSE includes Web Ontology Language (OWL)-based ontologies, experimental data with its provenance information represented using the Resource Description Format (RDF), and a visual querying tool, Cuebee, that features integrated use of Web services. We demonstrate the use and benefit of SPSE using example queries for identifying gene knockout targets of Trypanosoma cruzi for vaccine development. Answers to these queries involve looking up multiple sources of data, linking them together and presenting the results. The SPSE facilitates parasitologists in leveraging the growing, but disparate, parasite data resources by offering an integrative platform that utilizes Semantic Web techniques, while keeping their workload increase minimal.
NASA Astrophysics Data System (ADS)
Hansen, S. K.; Haslauer, C. P.; Cirpka, O. A.; Vesselinov, V. V.
2016-12-01
It is desirable to predict the shape of breakthrough curves downgradient of a solute source from subsurface structural parameters (as in the small-perturbation macrodispersion theory) both for realistically heterogeneous fields, and at early time, before any sort of Fickian model is applicable. Using a combination of a priori knowledge, large-scale Monte Carlo simulation, and regression techniques, we have developed closed-form predictive expressions for pre- and post-Fickian flux-weighted solute breakthrough curves as a function of distance from the source (in integral scales) and variance of the log hydraulic conductivity field. Using the ensemble of Monte Carlo realizations, we have simultaneously computed error envelopes for the estimated flux-weighted breakthrough, and for the divergence of point breakthrough curves from the flux-weighted average, as functions of the predictive parameters. We have also obtained implied late-time macrodispersion coefficients for highly heterogeneous environments from the breakthrough statistics. This analysis is relevant for the modelling of reactive as well as conservative transport, since for many kinetic sorption and decay reactions, Laplace-domain modification of the breakthrough curve for conservative solute produces the correct curve for the reactive system.
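The ensemble post-processing step mentioned above (an error envelope around the flux-weighted breakthrough) can be sketched as follows with synthetic arrival times; the arrival-time distribution and percentile levels are assumptions, not the study's.

```python
# Ensemble breakthrough curves and a percentile-based error envelope from
# synthetic Monte Carlo arrival times.
import numpy as np

rng = np.random.default_rng(2)
n_real, n_particles = 200, 5000
# synthetic log-normal arrival times, one row per realization
arrivals = rng.lognormal(mean=1.0, sigma=0.6, size=(n_real, n_particles))

t_grid = np.linspace(0, 20, 200)
# breakthrough = cumulative fraction of mass arrived by time t (flux-weighted CDF)
btc = np.stack([np.searchsorted(np.sort(a), t_grid) / n_particles for a in arrivals])

mean_btc = btc.mean(axis=0)
lo, hi = np.percentile(btc, [5, 95], axis=0)   # 90% error envelope
```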
NASA Astrophysics Data System (ADS)
Gonzales, H. B.; Ravi, S.; Li, J. J.; Sankey, J. B.
2016-12-01
Hydrological and aeolian processes control the redistribution of soil and nutrients in arid and semi-arid environments, thereby contributing to the formation of heterogeneous patchy landscapes with nutrient-rich resource islands surrounded by nutrient-depleted bare soil patches. The differential trapping of soil particles by vegetation canopies may result in textural changes beneath the vegetation, which, in turn, can alter hydrological processes such as infiltration and runoff. We conducted infiltration experiments and soil grain size analysis of several shrub (Larrea tridentata) and grass (Bouteloua eriopoda) microsites in a heterogeneous landscape in the Chihuahuan desert (New Mexico, USA). Our results indicate heterogeneity in soil texture and infiltration patterns under grass and shrub microsites. We assessed the trapping effectiveness of vegetation canopies using a novel computational fluid dynamics (CFD) approach. An open-source software package (OpenFOAM) was used to validate the data gathered from particle size distribution (PSD) analysis of soil within the shrub and grass microsites and their porosities (91% for shrub and 68% for grass) determined using terrestrial LiDAR surveys. Three-dimensional architectures of the shrub and grass were created using open-source computer-aided design (CAD) software (Blender). The readily available solvers within the OpenFOAM architecture were modified to test the validity of and optimize input parameters for assessing the trapping efficiencies of sparse vegetation against aeolian sediment flux. The results from the numerical simulations explained the observed textural changes under grass and shrub canopies and highlighted the role of sediment trapping by canopies in structuring patch-scale hydrological processes.
Decaf: Decoupled Dataflows for In Situ High-Performance Workflows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dreher, M.; Peterka, T.
Decaf is a dataflow system for the parallel communication of coupled tasks in an HPC workflow. The dataflow can perform arbitrary data transformations ranging from simply forwarding data to complex data redistribution. Decaf does this by allowing the user to allocate resources and execute custom code in the dataflow. All communication through the dataflow is efficient parallel message passing over MPI. The runtime for calling tasks is entirely message-driven; Decaf executes a task when all messages for the task have been received. Such a message-driven runtime allows cyclic task dependencies in the workflow graph, for example, to enact computational steering based on the result of downstream tasks. Decaf includes a simple Python API for describing the workflow graph. This allows Decaf to stand alone as a complete workflow system, but Decaf can also be used as the dataflow layer by one or more other workflow systems to form a heterogeneous task-based computing environment. In one experiment, we couple a molecular dynamics code with a visualization tool using the FlowVR and Damaris workflow systems and Decaf for the dataflow. In another experiment, we test the coupling of a cosmology code with Voronoi tessellation and density estimation codes using MPI for the simulation, the DIY programming model for the two analysis codes, and Decaf for the dataflow. Such workflows consisting of heterogeneous software infrastructures exist because components are developed separately with different programming models and runtimes, and this is the first time that such heterogeneous coupling of diverse components was demonstrated in situ on HPC systems.
Rethinking the evolution of specialization: A model for the evolution of phenotypic heterogeneity.
Rubin, Ilan N; Doebeli, Michael
2017-12-21
Phenotypic heterogeneity refers to genetically identical individuals that express different phenotypes, even when in the same environment. Traditionally, "bet-hedging" in fluctuating environments is offered as the explanation for the evolution of phenotypic heterogeneity. However, there are an increasing number of examples of microbial populations that display phenotypic heterogeneity in stable environments. Here we present an evolutionary model of phenotypic heterogeneity of microbial metabolism and a resultant theory for the evolution of phenotypic versus genetic specialization. We use two-dimensional adaptive dynamics to track the evolution of the population phenotype distribution of the expression of two metabolic processes with a concave trade-off. Rather than assume a Gaussian phenotype distribution, we use a Beta distribution that is capable of describing genotypes that manifest as individuals with two distinct phenotypes. Doing so, we find that environmental variation is not a necessary condition for the evolution of phenotypic heterogeneity, which can evolve as a form of specialization in a stable environment. There are two competing pressures driving the evolution of specialization: directional selection toward the evolution of phenotypic heterogeneity and disruptive selection toward genetically determined specialists. Because of the lack of a singular point in the two-dimensional adaptive dynamics and the fact that directional selection is a first order process, while disruptive selection is of second order, the evolution of phenotypic heterogeneity dominates and often precludes speciation. We find that branching, and therefore genetic specialization, occurs mainly under two conditions: the presence of a cost to maintaining a high phenotypic variance or when the effect of mutations is large. A cost to high phenotypic variance dampens the strength of selection toward phenotypic heterogeneity and, when sufficiently large, introduces a singular point into the evolutionary dynamics, effectively guaranteeing eventual branching. Large mutations allow the second order disruptive selection to dominate the first order selection toward phenotypic heterogeneity. Copyright © 2017 Elsevier Ltd. All rights reserved.
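As a small numerical aside (our own illustration, not code from the paper), a Beta distribution parameterized by its mean and variance switches from a unimodal, generalist-like shape to a U-shape describing two distinct phenotypes once the variance is large enough:

```python
# Mean/variance parameterization of a Beta distribution and the fraction of
# individuals expressing near-pure phenotypes at low vs high variance.
import numpy as np

def beta_params(m, v):
    """Convert mean m and variance v of a Beta distribution to (alpha, beta)."""
    nu = m * (1 - m) / v - 1          # requires v < m * (1 - m)
    return m * nu, (1 - m) * nu

rng = np.random.default_rng(1)
for v in (0.01, 0.20):                # low vs high phenotypic variance
    a, b = beta_params(0.5, v)
    x = rng.beta(a, b, size=100_000)  # phenotype = fraction of effort on task 1
    frac_extreme = np.mean((x < 0.1) | (x > 0.9))
    print(f"variance={v}: alpha=beta={a:.2f}, near-pure phenotypes={frac_extreme:.2f}")
```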
Moving Object Detection in Heterogeneous Conditions in Embedded Systems.
Garbo, Alessandro; Quer, Stefano
2017-07-01
This paper presents a system for moving object exposure, focusing on pedestrian detection, in external, unfriendly, and heterogeneous environments. The system manipulates and accurately merges information coming from subsequent video frames, making small computational efforts in each single frame. Its main characterizing feature is to combine several well-known movement detection and tracking techniques, and to orchestrate them in a smart way to obtain good results in diversified scenarios. It uses dynamically adjusted thresholds to characterize different regions of interest, and it also adopts techniques to efficiently track movements, and to detect and correct false positives. Accuracy and reliability mainly depend on the overall recipe, i.e., on how the software system is designed and implemented, on how the different algorithmic phases communicate information and collaborate with each other, and on how concurrency is organized. The application is specifically designed to work with inexpensive hardware devices, such as off-the-shelf video cameras and small embedded computational units, eventually forming an intelligent urban grid. As a matter of fact, the major contribution of the paper is the presentation of a tool for real-time applications in embedded devices with finite computational (time and memory) resources. We present experimental results on several video sequences (both home-made and publicly available), showing the robustness and accuracy of the overall detection strategy. Comparisons with state-of-the-art strategies show that our application has similar tracking accuracy but much higher frame-per-second rates.
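A hedged sketch in the same spirit as the detector described above, though not the authors' implementation, combines OpenCV background subtraction with simple blob filtering; the video path, thresholds, and morphology settings are placeholders to be tuned per scene.

```python
# Foreground detection with MOG2 background subtraction and area filtering.
import cv2
import numpy as np

cap = cv2.VideoCapture("street.mp4")          # hypothetical input video
bgsub = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
kernel = np.ones((3, 3), np.uint8)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bgsub.apply(frame)                 # foreground mask (shadows at 127)
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadows
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)
    # OpenCV >= 4 returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:          # area threshold: tune per scene
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:                  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```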
Moving Object Detection in Heterogeneous Conditions in Embedded Systems
Garbo, Alessandro
2017-01-01
This paper presents a system for moving object exposure, focusing on pedestrian detection, in external, unfriendly, and heterogeneous environments. The system manipulates and accurately merges information coming from subsequent video frames, making small computational efforts in each single frame. Its main characterizing feature is to combine several well-known movement detection and tracking techniques, and to orchestrate them in a smart way to obtain good results in diversified scenarios. It uses dynamically adjusted thresholds to characterize different regions of interest, and it also adopts techniques to efficiently track movements, and to detect and correct false positives. Accuracy and reliability mainly depend on the overall recipe, i.e., on how the software system is designed and implemented, on how the different algorithmic phases communicate information and collaborate with each other, and on how concurrency is organized. The application is specifically designed to work with inexpensive hardware devices, such as off-the-shelf video cameras and small embedded computational units, eventually forming an intelligent urban grid. As a matter of fact, the major contribution of the paper is the presentation of a tool for real-time applications in embedded devices with finite computational (time and memory) resources. We present experimental results on several video sequences (both home-made and publicly available), showing the robustness and accuracy of the overall detection strategy. Comparisons with state-of-the-art strategies show that our application has similar tracking accuracy but much higher frame-per-second rates. PMID:28671582
Population dynamics on heterogeneous bacterial substrates
NASA Astrophysics Data System (ADS)
Mobius, Wolfram; Murray, Andrew W.; Nelson, David R.
2012-02-01
How species invade new territories and how these range expansions influence the population's genotypes are important questions in the field of population genetics. The majority of work addressing these questions focuses on homogeneous environments. Much less is known about the population dynamics and population genetics when the environmental conditions are heterogeneous in space. To better understand range expansions in two-dimensional heterogeneous environments, we employ a system of bacteria and bacteriophage, the viruses of bacteria. Thereby, the bacteria constitute the environment in which a population of bacteriophages expands. The spread of the phage manifests itself in the lysis of bacteria and thus in the formation of clear regions on bacterial lawns, called plaques. We study the population dynamics and genetics of the expanding phage for various patterns of environments.
Weis, Jerome J.; Madrigal, Daniel S.; Cardinale, Bradley J.
2008-01-01
Background One of the most common questions addressed by ecologists over the past decade has been: how does species richness impact the production of community biomass? Recent summaries of experiments have shown that species richness tends to enhance the production of biomass across a wide range of trophic groups and ecosystems; however, the biomass of diverse polycultures only rarely exceeds that of the single most productive species in a community (a phenomenon called ‘transgressive overyielding’). Some have hypothesized that the lack of transgressive overyielding is because experiments have generally been performed in overly-simplified, homogeneous environments where species have little opportunity to express the niche differences that lead to ‘complementary’ use of resources that can enhance biomass production. We tested this hypothesis in a laboratory experiment where we manipulated the richness of freshwater algae in homogeneous and heterogeneous nutrient environments. Methodology/Principal Findings Experimental units were comprised of patches containing either homogeneous nutrient ratios (16∶1 nitrogen to phosphorus (N∶P) in all patches) or heterogeneous nutrient ratios (ranging from 4∶1 to 64∶1 N∶P across patches). After allowing 6–10 generations of algal growth, we found that algal species richness had similar impacts on biomass production in both homo- and heterogeneous environments. Although four of the five algal species showed a strong response to nutrient heterogeneity, a single species dominated algal communities in both types of environments. As a result, a ‘selection effect’–where diversity maximizes the chance that a competitively superior species will be included in, and dominate the biomass of a community–was the primary mechanism by which richness influenced biomass in both homo- and heterogeneous environments. Conclusions/Significance Our study suggests that spatial heterogeneity, by itself, is not sufficient to generate strong effects of biodiversity on productivity. Rather, heterogeneity must be coupled with variation in the relative fitness of species across patches in order for spatial niche differentiation to generate complementary resource use. PMID:18665221
Viscosity Measurement using Drop Coalescence in Microgravity
NASA Technical Reports Server (NTRS)
Antar, Basil N.; Ethridge, Edwin; Maxwell, Daniel
1999-01-01
We present here the details of a new method, based on drop coalescence, for determining the viscosity of highly viscous undercooled liquids in a microgravity environment. The method has the advantage of eliminating heterogeneous nucleation at container walls caused by crystallization of undercooled liquids during processing. Also, due to the rapidity of the measurement, homogeneous nucleation would be avoided. The technique relies both on a highly accurate solution of the Navier-Stokes equations and on data gathered from experiments conducted in a near-zero-gravity environment. The liquid viscosity is determined by allowing the computed free-surface shape relaxation time to be adjusted in response to the measured free-surface velocity of two coalescing drops. Results are presented from two validation experiments of the method, which were conducted recently on board the NASA KC-135 aircraft. In these tests the viscosity of a highly viscous liquid, glycerine at different temperatures, was determined to reasonable accuracy using the drop coalescence method. The experiments measured the free-surface velocity of two glycerine drops coalescing under the action of surface tension alone in a low-gravity environment using high-speed photography. The free-surface velocity was then compared with computed values obtained for different viscosity values. The results of these experiments were found to agree reasonably well with the calculated values.
Object-oriented analysis and design: a methodology for modeling the computer-based patient record.
Egyhazy, C J; Eyestone, S M; Martino, J; Hodgson, C L
1998-08-01
The article highlights the importance of an object-oriented analysis and design (OOAD) methodology for the computer-based patient record (CPR) in the military environment. Many OOAD methodologies do not adequately scale up, allow for efficient reuse of their products, or accommodate legacy systems. A methodology that addresses these issues is formulated and used to demonstrate its applicability in a large-scale health care service system. During a period of 6 months, a team of object modelers and domain experts formulated an OOAD methodology tailored to the Department of Defense Military Health System and used it to produce components of an object model for simple order processing. This methodology and the lessons learned during its implementation are described. This approach is necessary to achieve broad interoperability among heterogeneous automated information systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, C.; Yu, G.; Wang, K.
The physical design of new-concept reactors, which have complex structures, various materials, and broad neutron energy spectra, places much greater demands on calculation methods and the corresponding computing hardware. Along with widely used parallel algorithms, heterogeneous platform architectures have been introduced into numerical computations in reactor physics. Because of their natural parallel characteristics, CPU-FPGA architectures are often used to accelerate numerical computation. This paper studies the application and features of this kind of heterogeneous platform in numerical calculations of reactor physics through practical examples. The neutron diffusion module designed on the CPU-FPGA architecture achieves a speedup factor of 11.2, showing that it is feasible to apply this kind of heterogeneous platform to reactor physics. (authors)
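As a hedged illustration of the kind of regular, stencil-style kernel that such CPU-FPGA platforms typically accelerate (this is not the module described above; all parameter values are invented), the following solves a one-group, 1-D fixed-source neutron diffusion problem with a plain Jacobi sweep:

```python
# Illustrative one-group, 1-D neutron diffusion solve with a fixed source:
#   -D d^2(phi)/dx^2 + Sigma_a * phi = S,  phi = 0 at both boundaries.
# The Jacobi sweep below is the regular stencil update that maps naturally
# onto accelerator hardware; coefficients are arbitrary.
import numpy as np

def diffusion_1d(D=1.0, sigma_a=0.02, S=1.0, L=100.0, n=200, iters=20_000):
    h = L / (n + 1)
    phi = np.zeros(n + 2)                 # includes boundary nodes, held at 0
    src = np.full(n, S)
    denom = 2.0 * D / h**2 + sigma_a
    for _ in range(iters):                # Jacobi iteration (RHS uses old values)
        phi[1:-1] = (src + D * (phi[:-2] + phi[2:]) / h**2) / denom
    return phi

phi = diffusion_1d()
print(f"peak flux ~ {phi.max():.1f} (arbitrary units)")
```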
Folding Proteins at 500 ns/hour with Work Queue.
Abdul-Wahid, Badi'; Yu, Li; Rajan, Dinesh; Feng, Haoyun; Darve, Eric; Thain, Douglas; Izaguirre, Jesús A
2012-10-01
Molecular modeling is a field that traditionally has large computational costs. Until recently, most simulation techniques relied on long trajectories, which inherently have poor scalability. A new class of methods is proposed that requires only a large number of short calculations, and for which minimal communication between computer nodes is required. We considered one of the more accurate variants called Accelerated Weighted Ensemble Dynamics (AWE) and for which distributed computing can be made efficient. We implemented AWE using the Work Queue framework for task management and applied it to an all atom protein model (Fip35 WW domain). We can run with excellent scalability by simultaneously utilizing heterogeneous resources from multiple computing platforms such as clouds (Amazon EC2, Microsoft Azure), dedicated clusters, grids, on multiple architectures (CPU/GPU, 32/64bit), and in a dynamic environment in which processes are regularly added or removed from the pool. This has allowed us to achieve an aggregate sampling rate of over 500 ns/hour. As a comparison, a single process typically achieves 0.1 ns/hour.
Folding Proteins at 500 ns/hour with Work Queue
Abdul-Wahid, Badi’; Yu, Li; Rajan, Dinesh; Feng, Haoyun; Darve, Eric; Thain, Douglas; Izaguirre, Jesús A.
2014-01-01
Molecular modeling is a field that traditionally has large computational costs. Until recently, most simulation techniques relied on long trajectories, which inherently have poor scalability. A new class of methods is proposed that requires only a large number of short calculations, and for which minimal communication between computer nodes is required. We considered one of the more accurate variants called Accelerated Weighted Ensemble Dynamics (AWE) and for which distributed computing can be made efficient. We implemented AWE using the Work Queue framework for task management and applied it to an all atom protein model (Fip35 WW domain). We can run with excellent scalability by simultaneously utilizing heterogeneous resources from multiple computing platforms such as clouds (Amazon EC2, Microsoft Azure), dedicated clusters, grids, on multiple architectures (CPU/GPU, 32/64bit), and in a dynamic environment in which processes are regularly added or removed from the pool. This has allowed us to achieve an aggregate sampling rate of over 500 ns/hour. As a comparison, a single process typically achieves 0.1 ns/hour. PMID:25540799
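The workload described above consists of many short, independent simulation segments with minimal communication back to a master. The sketch below illustrates that pattern generically using Python's standard concurrent.futures; it deliberately does not use the actual Work Queue bindings, and simulate_segment is a toy placeholder rather than a molecular dynamics engine.

```python
# Generic master-worker sketch of an AWE-style workload: many short, independent
# tasks, each returning only a small summary to the master. Not the Work Queue API.
import random
from concurrent.futures import ProcessPoolExecutor, as_completed

def simulate_segment(seed, steps=10_000):
    """Stand-in for one short trajectory segment: returns a (seed, score) pair."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(steps):
        x += rng.gauss(0.0, 1.0)          # toy dynamics
    return seed, x

if __name__ == "__main__":
    seeds = range(64)                      # one short task per walker
    results = {}
    with ProcessPoolExecutor() as pool:    # workers could equally be remote/heterogeneous
        futures = [pool.submit(simulate_segment, s) for s in seeds]
        for fut in as_completed(futures):
            seed, score = fut.result()
            results[seed] = score          # master only collects small summaries
    print(f"collected {len(results)} segment results")
```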
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radtke, M.A.
This paper will chronicle the activity at Wisconsin Public Service Corporation (WPSC) that resulted in the complete migration of a traditional, late 1970s vintage, Energy Management System (EMS). The new environment includes networked microcomputers, minicomputers, and the corporate mainframe, and provides on-line access to employees outside the energy control center and some WPSC customers. In the late 1980s, WPSC was forecasting an EMS computer upgrade or replacement to address both capacity and technology needs. Reasoning that access to diverse computing resources would best position the company to accommodate the uncertain needs of the energy industry in the 1990s, WPSC chose to investigate an in-place migration to a network of computers, able to support heterogeneous hardware and operating systems. The system was developed in a modular fashion, with individual modules being deployed as soon as they were completed. The functional and technical specification was continuously enhanced as operating experience was gained from each operational module. With the migration off the original EMS computers complete, the networked system called DEMAXX (Distributed Energy Management Architecture with eXtensive eXpandability) has exceeded expectations in the areas of cost, performance, flexibility, and reliability.
A Cloud-Based Internet of Things Platform for Ambient Assisted Living
Cubo, Javier; Nieto, Adrián; Pimentel, Ernesto
2014-01-01
A common feature of ambient intelligence is that many objects are inter-connected and act in unison, which is also a challenge in the Internet of Things. There has been a shift in research towards integrating both concepts, considering the Internet of Things as representing the future of computing and communications. However, the efficient combination and management of heterogeneous things or devices in the ambient intelligence domain is still a tedious task, and it presents crucial challenges. Therefore, to appropriately manage the inter-connection of diverse devices in these systems requires: (1) specifying and efficiently implementing the devices (e.g., as services); (2) handling and verifying their heterogeneity and composition; and (3) standardizing and managing their data, so as to tackle large numbers of systems together, avoiding standalone applications on local servers. To overcome these challenges, this paper proposes a platform to manage the integration and behavior-aware orchestration of heterogeneous devices as services, stored and accessed via the cloud, with the following contributions: (i) we describe a lightweight model to specify the behavior of devices, to determine the order of the sequence of exchanged messages during the composition of devices; (ii) we define a common architecture using a service-oriented standard environment, to integrate heterogeneous devices by means of their interfaces, via a gateway, and to orchestrate them according to their behavior; (iii) we design a framework based on cloud computing technology, connecting the gateway in charge of acquiring the data from the devices with a cloud platform, to remotely access and monitor the data at run-time and react to emergency situations; and (iv) we implement and generate a novel cloud-based IoT platform of behavior-aware devices as services for ambient intelligence systems, validating the whole approach in real scenarios related to a specific ambient assisted living application. PMID:25093343
A cloud-based Internet of Things platform for ambient assisted living.
Cubo, Javier; Nieto, Adrián; Pimentel, Ernesto
2014-08-04
A common feature of ambient intelligence is that many objects are inter-connected and act in unison, which is also a challenge in the Internet of Things. There has been a shift in research towards integrating both concepts, considering the Internet of Things as representing the future of computing and communications. However, the efficient combination and management of heterogeneous things or devices in the ambient intelligence domain is still a tedious task, and it presents crucial challenges. Therefore, to appropriately manage the inter-connection of diverse devices in these systems requires: (1) specifying and efficiently implementing the devices (e.g., as services); (2) handling and verifying their heterogeneity and composition; and (3) standardizing and managing their data, so as to tackle large numbers of systems together, avoiding standalone applications on local servers. To overcome these challenges, this paper proposes a platform to manage the integration and behavior-aware orchestration of heterogeneous devices as services, stored and accessed via the cloud, with the following contributions: (i) we describe a lightweight model to specify the behavior of devices, to determine the order of the sequence of exchanged messages during the composition of devices; (ii) we define a common architecture using a service-oriented standard environment, to integrate heterogeneous devices by means of their interfaces, via a gateway, and to orchestrate them according to their behavior; (iii) we design a framework based on cloud computing technology, connecting the gateway in charge of acquiring the data from the devices with a cloud platform, to remotely access and monitor the data at run-time and react to emergency situations; and (iv) we implement and generate a novel cloud-based IoT platform of behavior-aware devices as services for ambient intelligence systems, validating the whole approach in real scenarios related to a specific ambient assisted living application.
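The platform above specifies a lightweight behaviour model that constrains the order of messages a device may exchange. The following is only a minimal sketch of that idea: a finite-state machine that accepts or rejects message sequences. The device states and message names are invented for illustration and are not part of the platform described in the paper.

```python
# Minimal behaviour model for a hypothetical device-as-service: a finite-state
# machine that only accepts exchanged messages in the allowed order.
ALLOWED = {
    ("idle",      "connect"):     "connected",
    ("connected", "subscribe"):   "streaming",
    ("streaming", "reading"):     "streaming",
    ("streaming", "unsubscribe"): "connected",
    ("connected", "disconnect"):  "idle",
}

def run_session(messages, state="idle"):
    """Return the final state, or raise if the message order violates the model."""
    for msg in messages:
        try:
            state = ALLOWED[(state, msg)]
        except KeyError:
            raise ValueError(f"message '{msg}' not allowed in state '{state}'")
    return state

print(run_session(["connect", "subscribe", "reading", "unsubscribe", "disconnect"]))
# run_session(["connect", "reading"]) would raise: 'reading' before 'subscribe'
```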
Multi-Objective Approach for Energy-Aware Workflow Scheduling in Cloud Computing Environments
Kadima, Hubert; Granado, Bertrand
2013-01-01
We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, the cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on the optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and present the hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate in different voltage supply levels by sacrificing clock frequencies. This multiple voltage involves a compromise between the quality of schedules and energy. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach. PMID:24319361
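The compromise the abstract mentions can be illustrated with a toy DVFS model (this is not the paper's hybrid PSO): dynamic power scales roughly as C·V²·f and runtime as work/f, so lower voltage/frequency operating points trade makespan for energy. The operating points, capacitance, and weights below are illustrative assumptions.

```python
# Sketch of the DVFS trade-off behind energy-aware scheduling (illustrative values).
LEVELS = [(2.4, 1.30), (1.8, 1.10), (1.2, 0.95), (0.8, 0.85)]  # (GHz, volts), hypothetical
C = 1.0               # effective switched capacitance, arbitrary units
WORK = 2.4e9          # task length in cycles

def evaluate(level, w_time=0.5, w_energy=0.5):
    f_ghz, volts = level
    runtime = WORK / (f_ghz * 1e9)           # seconds
    energy = C * volts**2 * (WORK / 1e9)     # ~ C * V^2 * cycles, arbitrary units
    return runtime, energy, w_time * runtime + w_energy * energy

for lvl in LEVELS:
    t, e, score = evaluate(lvl)
    print(f"f={lvl[0]:.1f} GHz V={lvl[1]:.2f}  time={t:.2f} s  energy={e:.2f}  weighted={score:.2f}")
```

A multi-objective search such as the paper's hybrid PSO then chooses per-task operating points (and task-to-machine assignments) to balance such objectives subject to QoS constraints; the weighted sum here is only the simplest possible scalarization.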
Optimization of sparse matrix-vector multiplication on emerging multicore platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Samuel; Oliker, Leonid; Vuduc, Richard
2007-01-01
We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore specific optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD dual-core and Intel quad-core designs, the heterogeneous STI Cell, as well as the first scientific study of the highly multithreaded Sun Niagara2. We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural tradeoffs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.
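For reference, this is the naive CSR (compressed sparse row) SpMV kernel whose indirect, memory-bound access pattern the architecture-specific optimizations above target; it is a plain Python/NumPy sketch, not the authors' tuned implementations.

```python
# Reference CSR sparse matrix-vector multiply: y = A @ x.
# The indirect access through col_idx is what makes SpMV memory-bound.
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        start, end = row_ptr[i], row_ptr[i + 1]
        y[i] = np.dot(values[start:end], x[col_idx[start:end]])
    return y

# Small example: the 3x3 matrix [[4, 0, 1], [0, 3, 0], [2, 0, 5]]
values  = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
x = np.array([1.0, 2.0, 3.0])
print(spmv_csr(values, col_idx, row_ptr, x))   # -> [ 7.  6. 17.]
```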
Multi-objective approach for energy-aware workflow scheduling in cloud computing environments.
Yassa, Sonia; Chelouah, Rachid; Kadima, Hubert; Granado, Bertrand
2013-01-01
We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, the cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on the optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and present the hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate in different voltage supply levels by sacrificing clock frequencies. This multiple voltage involves a compromise between the quality of schedules and energy. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach.
NASA Astrophysics Data System (ADS)
Ohl, Ricky
In this case study, computer supported argument visualisation has been applied to the analysis and representation of the draft South East Queensland Regional Plan Consultation discourse, demonstrating how argument mapping can help deliver the transparency and accountability required in participatory democracy. Consultative democracy for regional planning falls into a category of problems known as “wicked problems”. Inherent in this environment are heterogeneous viewpoints, agendas, and voices, built on disparate and often contradictory logic. An argument ontology and notation designed specifically to deal with consultative urban planning around wicked problems is the Issue-Based Information System (IBIS) and its notation (Rittel & Webber, 1984). The software used for argument visualisation in this case was Compendium, a derivative of IBIS. The high volume of stakeholders and discourse heterogeneity in this environment calls for a unique approach to argument mapping. The map design model developed from this research has been titled a “Consultation Map”. The design incorporates the IBIS ontology within a hybrid of mapping approaches, amalgamating elements from concept, dialogue, argument, debate, thematic and tree-mapping. The consultation maps developed from the draft South East Queensland Regional Plan Consultation provide a transparent visual record of the themes of citizen issues within the consultation discourse. The consultation maps also link the elicited discourse themes to related policies from the SEQ Regional Plan, providing explicit evidence of SEQ Regional Plan policy decisions matching citizen concerns. The final consultation map in the series provides explicit links between SEQ Regional Plan policy items and monitoring activities reporting on the ongoing implementation of the SEQ Regional Plan. This map provides updatable evidence of and accountability for SEQ Regional Plan policy implementation and developments.
Implementation Issues of Adaptive Energy Detection in Heterogeneous Wireless Networks
Sobron, Iker; Eizmendi, Iñaki; Martins, Wallace A.; Diniz, Paulo S. R.; Ordiales, Juan Luis; Velez, Manuel
2017-01-01
Spectrum sensing (SS) enables the coexistence of non-coordinated heterogeneous wireless systems operating in the same band. Due to its computational simplicity, energy detection (ED) technique has been widespread employed in SS applications; nonetheless, the conventional ED may be unreliable under environmental impairments, justifying the use of ED-based variants. Assessing ED algorithms from theoretical and simulation viewpoints relies on several assumptions and simplifications which, eventually, lead to conclusions that do not necessarily meet the requirements imposed by real propagation environments. This work addresses those problems by dealing with practical implementation issues of adaptive least mean square (LMS)-based ED algorithms. The paper proposes a new adaptive ED algorithm that uses a variable step-size guaranteeing the LMS convergence in time-varying environments. Several implementation guidelines are provided and, additionally, an empirical assessment and validation with a software defined radio-based hardware is carried out. Experimental results show good performance in terms of probabilities of detection (Pd>0.9) and false alarm (Pf∼0.05) in a range of low signal-to-noise ratios around [-4,1] dB, in both single-node and cooperative modes. The proposed sensing methodology enables a seamless monitoring of the radio electromagnetic spectrum in order to provide band occupancy information for an efficient usage among several wireless communications systems. PMID:28441751
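The sketch below illustrates the general idea of an energy detector whose noise-power estimate is tracked with a variable step size; the specific step-size rule, threshold factor, and synthetic signals are assumptions for illustration, not the algorithm proposed in the paper.

```python
# Toy adaptive energy detector: the noise-power estimate is updated with a
# variable step size (a common VSS heuristic), and occupancy is decided by
# comparing window energy with a multiple of that estimate. All values synthetic.
import numpy as np

rng = np.random.default_rng(1)
N = 256                                   # samples per sensing window

def energy(window):
    return np.mean(np.abs(window) ** 2)

def vss_update(est, meas, mu_min=0.02, mu_max=0.3):
    err = meas - est
    mu = mu_min + (mu_max - mu_min) * err**2 / (err**2 + 1.0)   # variable step size
    return est + mu * err

noise_est = 3.0                           # deliberately poor initial guess; true power is 1.0
detections = []
for k in range(300):
    occupied_truth = (k % 3 == 0)
    x = rng.normal(0, 1.0, N)
    if occupied_truth:
        x += np.sqrt(2.0) * np.sin(2 * np.pi * 0.1 * np.arange(N))   # synthetic primary user
    e = energy(x)
    decided = e > 1.5 * noise_est         # illustrative threshold factor
    detections.append((decided, occupied_truth))
    if not decided:                       # adapt only on windows declared idle
        noise_est = vss_update(noise_est, e)

pd = np.mean([d for d, t in detections if t])
pf = np.mean([d for d, t in detections if not t])
print(f"noise estimate {noise_est:.2f}, Pd {pd:.2f}, Pf {pf:.2f}")
```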
Apply network coding for H.264/SVC multicasting
NASA Astrophysics Data System (ADS)
Wang, Hui; Kuo, C.-C. Jay
2008-08-01
In a packet erasure network environment, video streaming benefits from error control in two ways to achieve graceful degradation. The first approach is application-level (or link-level) forward error correction (FEC) to provide erasure protection. The second error control approach is error concealment at the decoder end to compensate for lost packets. A large amount of research work has been done in the above two areas. More recently, network coding (NC) techniques have been proposed for efficient data multicast over networks. It was shown in our previous work that multicast video streaming benefits from NC through improved throughput. An algebraic model is given to analyze the performance in this work. By exploiting the linear combination of video packets along nodes in a network and the SVC video format, the system achieves path diversity automatically and enables efficient video delivery to heterogeneous receivers over packet erasure channels. The application of network coding can protect video packets against the erasure network environment. However, the rank deficiency problem of random linear network coding makes error concealment inefficient. It is shown by computer simulation that the proposed NC video multicast scheme enables heterogeneous receivers to receive according to their capacity constraints, but special design is needed to improve video transmission performance when applying network coding.
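The rank-deficiency problem mentioned above can be shown with a small sketch of random linear network coding. Practical systems (and presumably the paper) work over GF(2^8); the prime field GF(257) is used here only to keep the modular arithmetic readable, and the packet sizes are arbitrary.

```python
# Random linear network coding over GF(257): a receiver that holds fewer than K
# linearly independent coded packets (rank < K) cannot decode the generation.
import numpy as np

P, K, LEN = 257, 4, 8                       # field size (prime), packets/generation, symbols
rng = np.random.default_rng(7)
source = rng.integers(0, P, size=(K, LEN))  # original packets

def encode(n_coded):
    coeffs = rng.integers(0, P, size=(n_coded, K))
    return coeffs, (coeffs @ source) % P

def rank_mod_p(mat):
    m = mat.copy() % P
    rank, rows, cols = 0, m.shape[0], m.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if m[r, c]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]
        inv = pow(int(m[rank, c]), P - 2, P)          # modular inverse (P prime)
        m[rank] = (m[rank] * inv) % P
        for r in range(rows):
            if r != rank and m[r, c]:
                m[r] = (m[r] - m[r, c] * m[rank]) % P
        rank += 1
        if rank == rows:
            break
    return rank

for received in (K - 1, K):
    coeffs, _ = encode(received)
    r = rank_mod_p(coeffs)
    print(f"{received} coded packets received: rank {r} -> decodable: {r == K}")
```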
Warren, K M; Mpagazehe, J N; LeDuc, P R; Higgs, C F
2016-02-07
The response of individual cells at the micro-scale in cell mechanics is important in understanding how they are affected by changing environments. To control cell stresses, microfluidics can be implemented since there is tremendous control over the geometry of the devices. Designing microfluidic devices to induce and manipulate stress levels on biological cells can be aided by computational modeling approaches. Such approaches serve as an efficient precursor to fabricating various microfluidic geometries that induce predictable levels of stress on biological cells, based on their mechanical properties. Here, a three-dimensional, multiphase computational fluid dynamics (CFD) modeling approach was implemented for soft biological materials. The computational model incorporates the physics of the particle dynamics, fluid dynamics and solid mechanics, which allows us to study how stresses affect the cells. By using an Eulerian-Lagrangian approach to treat the fluid domain as a continuum in the microfluidics, we are conducting studies of the cells' movement and the stresses applied to the cell. As a result of our studies, we were able to determine that a channel with periodically alternating columns of obstacles was capable of stressing cells at the highest rate, and that microfluidic systems can be engineered to impose heterogeneous cell stresses through geometric configuring. We found that when using controlled geometries of the microfluidics channels with staggered obstructions, we could increase the maximum cell stress by nearly 200 times over cells flowing through microfluidic channels with no obstructions. Incorporating computational modeling in the design of microfluidic configurations for controllable cell stressing could help in the design of microfluidic devices for stressing cells such as cell homogenizers.
System for Performing Single Query Searches of Heterogeneous and Dispersed Databases
NASA Technical Reports Server (NTRS)
Maluf, David A. (Inventor); Okimura, Takeshi (Inventor); Gurram, Mohana M. (Inventor); Tran, Vu Hoang (Inventor); Knight, Christopher D. (Inventor); Trinh, Anh Ngoc (Inventor)
2017-01-01
The present invention is a distributed computer system of heterogeneous databases joined in an information grid and configured with an Application Programming Interface hardware which includes a search engine component for performing user-structured queries on multiple heterogeneous databases in real time. This invention reduces overhead associated with the impedance mismatch that commonly occurs in heterogeneous database queries.
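Purely as an illustration of the single-query idea (not the patented system), the sketch below fans one user query out over two heterogeneous in-memory SQLite databases through per-source adapters and merges the hits. The databases, schemas, and adapter queries are invented for the example.

```python
# Illustrative federated single-query search over heterogeneous sources.
import sqlite3

def make_db(ddl, insert, rows):
    db = sqlite3.connect(":memory:")
    db.execute(ddl)
    db.executemany(insert, rows)
    return db

docs = make_db("CREATE TABLE documents (title TEXT, body TEXT)",
               "INSERT INTO documents VALUES (?, ?)",
               [("Grid search", "single query searches of dispersed databases"),
                ("Memo", "nothing relevant here")])
parts = make_db("CREATE TABLE parts (name TEXT, description TEXT)",
                "INSERT INTO parts VALUES (?, ?)",
                [("Search engine component", "performs user-structured queries"),
                 ("Valve", "hardware item")])

# One adapter per source maps the single user query onto the local schema.
ADAPTERS = {
    "docs":  (docs,  "SELECT title, body FROM documents WHERE body LIKE ?"),
    "parts": (parts, "SELECT name, description FROM parts WHERE description LIKE ?"),
}

def federated_search(term):
    pattern = f"%{term}%"
    hits = []
    for source, (db, sql) in ADAPTERS.items():
        hits += [{"source": source, "name": n, "text": t} for n, t in db.execute(sql, (pattern,))]
    return hits

for hit in federated_search("quer"):   # matches "query" and "queries" in both sources
    print(hit)
```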
Emerging semantics to link phenotype and environment
Bunker, Daniel E.; Buttigieg, Pier Luigi; Cooper, Laurel D.; Dahdul, Wasila M.; Domisch, Sami; Franz, Nico M.; Jaiswal, Pankaj; Lawrence-Dill, Carolyn J.; Midford, Peter E.; Mungall, Christopher J.; Ramírez, Martín J.; Specht, Chelsea D.; Vogt, Lars; Vos, Rutger Aldo; Walls, Ramona L.; White, Jeffrey W.; Zhang, Guanyang; Deans, Andrew R.; Huala, Eva; Lewis, Suzanna E.; Mabee, Paula M.
2015-01-01
Understanding the interplay between environmental conditions and phenotypes is a fundamental goal of biology. Unfortunately, data that include observations on phenotype and environment are highly heterogeneous and thus difficult to find and integrate. One approach that is likely to improve the status quo involves the use of ontologies to standardize and link data about phenotypes and environments. Specifying and linking data through ontologies will allow researchers to increase the scope and flexibility of large-scale analyses aided by modern computing methods. Investments in this area would advance diverse fields such as ecology, phylogenetics, and conservation biology. While several biological ontologies are well-developed, using them to link phenotypes and environments is rare because of gaps in ontological coverage and limits to interoperability among ontologies and disciplines. In this manuscript, we present (1) use cases from diverse disciplines to illustrate questions that could be answered more efficiently using a robust linkage between phenotypes and environments, (2) two proof-of-concept analyses that show the value of linking phenotypes to environments in fishes and amphibians, and (3) two proposed example data models for linking phenotypes and environments using the extensible observation ontology (OBOE) and the Biological Collections Ontology (BCO); these provide a starting point for the development of a data model linking phenotypes and environments. PMID:26713234
Emerging semantics to link phenotype and environment.
Thessen, Anne E; Bunker, Daniel E; Buttigieg, Pier Luigi; Cooper, Laurel D; Dahdul, Wasila M; Domisch, Sami; Franz, Nico M; Jaiswal, Pankaj; Lawrence-Dill, Carolyn J; Midford, Peter E; Mungall, Christopher J; Ramírez, Martín J; Specht, Chelsea D; Vogt, Lars; Vos, Rutger Aldo; Walls, Ramona L; White, Jeffrey W; Zhang, Guanyang; Deans, Andrew R; Huala, Eva; Lewis, Suzanna E; Mabee, Paula M
2015-01-01
Understanding the interplay between environmental conditions and phenotypes is a fundamental goal of biology. Unfortunately, data that include observations on phenotype and environment are highly heterogeneous and thus difficult to find and integrate. One approach that is likely to improve the status quo involves the use of ontologies to standardize and link data about phenotypes and environments. Specifying and linking data through ontologies will allow researchers to increase the scope and flexibility of large-scale analyses aided by modern computing methods. Investments in this area would advance diverse fields such as ecology, phylogenetics, and conservation biology. While several biological ontologies are well-developed, using them to link phenotypes and environments is rare because of gaps in ontological coverage and limits to interoperability among ontologies and disciplines. In this manuscript, we present (1) use cases from diverse disciplines to illustrate questions that could be answered more efficiently using a robust linkage between phenotypes and environments, (2) two proof-of-concept analyses that show the value of linking phenotypes to environments in fishes and amphibians, and (3) two proposed example data models for linking phenotypes and environments using the extensible observation ontology (OBOE) and the Biological Collections Ontology (BCO); these provide a starting point for the development of a data model linking phenotypes and environments.
Emerging semantics to link phenotype and environment
Thessen, Anne E.; Bunker, Daniel E.; Buttigieg, Pier Luigi; ...
2015-12-14
Understanding the interplay between environmental conditions and phenotypes is a fundamental goal of biology. Unfortunately, data that include observations on phenotype and environment are highly heterogeneous and thus difficult to find and integrate. One approach that is likely to improve the status quo involves the use of ontologies to standardize and link data about phenotypes and environments. Specifying and linking data through ontologies will allow researchers to increase the scope and flexibility of large-scale analyses aided by modern computing methods. Investments in this area would advance diverse fields such as ecology, phylogenetics, and conservation biology. While several biological ontologies are well-developed, using them to link phenotypes and environments is rare because of gaps in ontological coverage and limits to interoperability among ontologies and disciplines. In this manuscript, we present (1) use cases from diverse disciplines to illustrate questions that could be answered more efficiently using a robust linkage between phenotypes and environments, (2) two proof-of-concept analyses that show the value of linking phenotypes to environments in fishes and amphibians, and (3) two proposed example data models for linking phenotypes and environments using the extensible observation ontology (OBOE) and the Biological Collections Ontology (BCO); these provide a starting point for the development of a data model linking phenotypes and environments.
Semantic integration of data on transcriptional regulation
Baitaluk, Michael; Ponomarenko, Julia
2010-01-01
Motivation: Experimental and predicted data concerning gene transcriptional regulation are distributed among many heterogeneous sources. However, there are no resources to integrate these data automatically or to provide a ‘one-stop shop’ experience for users seeking information essential for deciphering and modeling gene regulatory networks. Results: IntegromeDB, a semantic graph-based ‘deep-web’ data integration system that automatically captures, integrates and manages publicly available data concerning transcriptional regulation, as well as other relevant biological information, is proposed in this article. The problems associated with data integration are addressed by ontology-driven data mapping, multiple data annotation and heterogeneous data querying, also enabling integration of the user's data. IntegromeDB integrates over 100 experimental and computational data sources relating to genomics, transcriptomics, genetics, and functional and interaction data concerning gene transcriptional regulation in eukaryotes and prokaryotes. Availability: IntegromeDB is accessible through the integrated research environment BiologicalNetworks at http://www.BiologicalNetworks.org Contact: baitaluk@sdsc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20427517
Semantic integration of data on transcriptional regulation.
Baitaluk, Michael; Ponomarenko, Julia
2010-07-01
Experimental and predicted data concerning gene transcriptional regulation are distributed among many heterogeneous sources. However, there are no resources to integrate these data automatically or to provide a 'one-stop shop' experience for users seeking information essential for deciphering and modeling gene regulatory networks. IntegromeDB, a semantic graph-based 'deep-web' data integration system that automatically captures, integrates and manages publicly available data concerning transcriptional regulation, as well as other relevant biological information, is proposed in this article. The problems associated with data integration are addressed by ontology-driven data mapping, multiple data annotation and heterogeneous data querying, also enabling integration of the user's data. IntegromeDB integrates over 100 experimental and computational data sources relating to genomics, transcriptomics, genetics, and functional and interaction data concerning gene transcriptional regulation in eukaryotes and prokaryotes. IntegromeDB is accessible through the integrated research environment BiologicalNetworks at http://www.BiologicalNetworks.org. Contact: baitaluk@sdsc.edu. Supplementary data are available at Bioinformatics online.
GANGA: A tool for computational-task management and easy access to Grid resources
NASA Astrophysics Data System (ADS)
Mościcki, J. T.; Brochu, F.; Ebke, J.; Egede, U.; Elmsheuser, J.; Harrison, K.; Jones, R. W. L.; Lee, H. C.; Liko, D.; Maier, A.; Muraru, A.; Patrick, G. N.; Pajchel, K.; Reece, W.; Samset, B. H.; Slater, M. W.; Soroko, A.; Tan, C. L.; van der Ster, D. C.; Williams, M.
2009-11-01
In this paper, we present the computational task-management tool GANGA, which allows for the specification, submission, bookkeeping and post-processing of computational tasks on a wide set of distributed resources. GANGA has been developed to solve a problem increasingly common in scientific projects, which is that researchers must regularly switch between different processing systems, each with its own command set, to complete their computational tasks. GANGA provides a homogeneous environment for processing data on heterogeneous resources. We give examples from High Energy Physics, demonstrating how an analysis can be developed on a local system and then transparently moved to a Grid system for processing of all available data. GANGA has an API that can be used via an interactive interface, in scripts, or through a GUI. Specific knowledge about types of tasks or computational resources is provided at run-time through a plugin system, making new developments easy to integrate. We give an overview of the GANGA architecture, give examples of current use, and demonstrate how GANGA can be used in many different areas of science. Catalogue identifier: AEEN_v1_0 Program summary URL:
NASA Astrophysics Data System (ADS)
Lambrecht, L.; Lamert, A.; Friederich, W.; Möller, T.; Boxberg, M. S.
2018-03-01
A nodal discontinuous Galerkin (NDG) approach is developed and implemented for the computation of viscoelastic wavefields in complex geological media. The NDG approach combines unstructured tetrahedral meshes with an element-wise, high-order spatial interpolation of the wavefield based on Lagrange polynomials. Numerical fluxes are computed from an exact solution of the heterogeneous Riemann problem. Our implementation offers capabilities for modelling viscoelastic wave propagation in 1-D, 2-D and 3-D settings of very different spatial scale with little logistical overhead. It allows the import of external tetrahedral meshes provided by independent meshing software and can be run in a parallel computing environment. Computation of adjoint wavefields and an interface for the computation of waveform sensitivity kernels are offered. The method is validated in 2-D and 3-D by comparison to analytical solutions and results from a spectral element method. The capabilities of the NDG method are demonstrated through a 3-D example case taken from tunnel seismics which considers high-frequency elastic wave propagation around a curved underground tunnel cutting through inclined and faulted sedimentary strata. The NDG method was coded into the open-source software package NEXD and is available from GitHub.
Models@Home: distributed computing in bioinformatics using a screensaver based approach.
Krieger, Elmar; Vriend, Gert
2002-02-01
Due to the steadily growing computational demands in bioinformatics and related scientific disciplines, one is forced to make optimal use of the available resources. A straightforward solution is to build a network of idle computers and let each of them work on a small piece of a scientific challenge, as done by Seti@Home (http://setiathome.berkeley.edu), the world's largest distributed computing project. We developed a generally applicable distributed computing solution that uses a screensaver system similar to Seti@Home. The software exploits the coarse-grained nature of typical bioinformatics projects. Three major considerations for the design were: (1) often, many different programs are needed, while the time is lacking to parallelize them. Models@Home can run any program in parallel without modifications to the source code; (2) in contrast to the Seti project, bioinformatics applications are normally more sensitive to lost jobs. Models@Home therefore includes stringent control over job scheduling; (3) to allow use in heterogeneous environments, Linux and Windows based workstations can be combined with dedicated PCs to build a homogeneous cluster. We present three practical applications of Models@Home, running the modeling programs WHAT IF and YASARA on 30 PCs: force field parameterization, molecular dynamics docking, and database maintenance.
Revealed Preference Methods for Studying Bicycle Route Choice-A Systematic Review.
Pritchard, Ray
2018-03-07
One fundamental aspect of promoting utilitarian bicycle use involves making modifications to the built environment to improve the safety, efficiency and enjoyability of cycling. Revealed preference data on bicycle route choice can assist greatly in understanding the actual behaviour of a highly heterogeneous group of users, which in turn assists the prioritisation of infrastructure or other built environment initiatives. This systematic review seeks to compare the relative strengths and weaknesses of the empirical approaches for evaluating whole journey route choices of bicyclists. Two electronic databases were systematically searched for a selection of keywords pertaining to bicycle and route choice. In total seven families of methods are identified: GPS devices, smartphone applications, crowdsourcing, participant-recalled routes, accompanied journeys, egocentric cameras and virtual reality. The study illustrates a trade-off in the quality of data obtainable and the average number of participants. Future additional methods could include dockless bikeshare, multiple camera solutions using computer vision and immersive bicycle simulator environments.
Revealed Preference Methods for Studying Bicycle Route Choice—A Systematic Review
2018-01-01
One fundamental aspect of promoting utilitarian bicycle use involves making modifications to the built environment to improve the safety, efficiency and enjoyability of cycling. Revealed preference data on bicycle route choice can assist greatly in understanding the actual behaviour of a highly heterogeneous group of users, which in turn assists the prioritisation of infrastructure or other built environment initiatives. This systematic review seeks to compare the relative strengths and weaknesses of the empirical approaches for evaluating whole journey route choices of bicyclists. Two electronic databases were systematically searched for a selection of keywords pertaining to bicycle and route choice. In total seven families of methods are identified: GPS devices, smartphone applications, crowdsourcing, participant-recalled routes, accompanied journeys, egocentric cameras and virtual reality. The study illustrates a trade-off in the quality of data obtainable and the average number of participants. Future additional methods could include dockless bikeshare, multiple camera solutions using computer vision and immersive bicycle simulator environments. PMID:29518991
NASA Astrophysics Data System (ADS)
Zhu, J.; Winter, C. L.; Wang, Z.
2015-11-01
Computational experiments are performed to evaluate the effects of locally heterogeneous conductivity fields on regional exchanges of water between stream and aquifer systems in the Middle Heihe River basin (MHRB) of northwestern China. The effects are found to be nonlinear in the sense that simulated discharges from aquifers to streams are systematically lower than discharges produced by a base model parameterized with relatively coarse effective conductivity. A similar, but weaker, effect is observed for stream leakage. The study is organized around three hypotheses: (H1) small-scale spatial variations of conductivity significantly affect regional exchanges of water between streams and aquifers in river basins, (H2) aggregating small-scale heterogeneities into regional effective parameters systematically biases estimates of stream-aquifer exchanges, and (H3) the biases result from slow paths in groundwater flow that emerge due to small-scale heterogeneities. The hypotheses are evaluated by comparing stream-aquifer fluxes produced by the base model to fluxes simulated using realizations of the MHRB characterized by local (grid-scale) heterogeneity. Levels of local heterogeneity are manipulated as control variables by adjusting coefficients of variation. All models are implemented using the MODFLOW (Modular Three-dimensional Finite-difference Groundwater Flow Model) simulation environment, and the PEST (parameter estimation) tool is used to calibrate effective conductivities defined over 16 zones within the MHRB. The effective parameters are also used as expected values to develop lognormally distributed conductivity (K) fields on local grid scales. Stream-aquifer exchanges are simulated with K fields at both scales and then compared. Results show that the effects of small-scale heterogeneities significantly influence exchanges with simulations based on local-scale heterogeneities always producing discharges that are less than those produced by the base model. Although aquifer heterogeneities are uncorrelated at local scales, they appear to induce coherent slow paths in groundwater fluxes that in turn reduce aquifer-stream exchanges. Since surface water-groundwater exchanges are critical hydrologic processes in basin-scale water budgets, these results also have implications for water resources management.
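As a rough sketch of how grid-scale heterogeneity with a prescribed coefficient of variation can be imposed around an effective conductivity (not the study's actual fields, which are tied to PEST-calibrated zonal values and the MODFLOW grid, and which may include spatial correlation), consider:

```python
# Uncorrelated lognormal conductivity field with a chosen arithmetic mean (the
# effective K of a zone) and coefficient of variation (the control variable).
import numpy as np

def lognormal_k_field(k_eff, cv, shape, seed=0):
    sigma2 = np.log(1.0 + cv**2)            # lognormal parameters that reproduce
    mu = np.log(k_eff) - 0.5 * sigma2       # the requested mean and CV
    rng = np.random.default_rng(seed)
    return rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=shape)

K = lognormal_k_field(k_eff=5.0, cv=1.0, shape=(100, 100))   # units and size illustrative
print(f"mean {K.mean():.2f}, CV {K.std() / K.mean():.2f}")    # ~5.0 and ~1.0
```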
A framework supporting the development of a Grid portal for analysis based on ROI.
Ichikawa, K; Date, S; Kaishima, T; Shimojo, S
2005-01-01
In our research on brain function analysis, users require two different simultaneous types of processing: interactive processing of a specific part of the data and high-performance batch processing of an entire dataset. The difference between these two types of processing is whether or not the analysis is restricted to data in a region of interest (ROI). In this study, we propose a Grid portal that has a mechanism to freely assign computing resources to users in a Grid environment according to these two different processing requirements. We constructed a Grid portal which integrates interactive processing and batch processing through the following two mechanisms. First, a job steering mechanism controls job execution based on user-tagged priority among organizations with heterogeneous computing resources; interactive jobs are processed in preference to batch jobs by this mechanism. Second, a priority-based result delivery mechanism administrates a ranking of data significance. The portal ensures the turn-around time of interactive processing through the priority-based job controlling mechanism, and provides users with quality of service (QoS) for interactive processing. Users can access the analysis results of interactive jobs in preference to the analysis results of batch jobs. The Grid portal has also achieved high-performance computation of MEG analysis with batch processing on the Grid environment. The priority-based job controlling mechanism makes it possible to freely assign computing resources according to the users' requirements. Furthermore, the achievement of high-performance computation contributes greatly to the overall progress of brain science. The portal has thus made it possible for the users to flexibly include large computational power in what they want to analyze.
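A minimal sketch of the priority-based job steering idea (illustrative only; the real portal also prioritizes result delivery and spans multiple organizations, and the job descriptions here are invented):

```python
# Interactive (ROI) jobs are always dispatched before batch jobs over the full dataset.
import heapq
import itertools

INTERACTIVE, BATCH = 0, 1        # lower value = higher priority
counter = itertools.count()      # tie-breaker preserving submission order
queue = []

def submit(priority, name):
    heapq.heappush(queue, (priority, next(counter), name))

submit(BATCH, "batch: full MEG dataset, subject 01")
submit(INTERACTIVE, "interactive: ROI 12, 100-200 ms window")
submit(BATCH, "batch: full MEG dataset, subject 02")
submit(INTERACTIVE, "interactive: ROI 7, 200-300 ms window")

while queue:
    _, _, job = heapq.heappop(queue)
    print("dispatch:", job)      # interactive jobs come out first, in FIFO order
```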
Jungle Computing: Distributed Supercomputing Beyond Clusters, Grids, and Clouds
NASA Astrophysics Data System (ADS)
Seinstra, Frank J.; Maassen, Jason; van Nieuwpoort, Rob V.; Drost, Niels; van Kessel, Timo; van Werkhoven, Ben; Urbani, Jacopo; Jacobs, Ceriel; Kielmann, Thilo; Bal, Henri E.
In recent years, the application of high-performance and distributed computing in scientific practice has become increasingly widespread. Among the most widely available platforms to scientists are clusters, grids, and cloud systems. Such infrastructures currently are undergoing revolutionary change due to the integration of many-core technologies, providing orders-of-magnitude speed improvements for selected compute kernels. With high-performance and distributed computing systems thus becoming more heterogeneous and hierarchical, programming complexity is vastly increased. Further complexities arise because urgent desire for scalability and issues including data distribution, software heterogeneity, and ad hoc hardware availability commonly force scientists into simultaneous use of multiple platforms (e.g., clusters, grids, and clouds used concurrently). A true computing jungle.
Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi
NASA Astrophysics Data System (ADS)
Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad
2015-05-01
Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).
Modeling Political Populations with Bacteria
NASA Astrophysics Data System (ADS)
Cleveland, Chris; Liao, David
2011-03-01
Results from lattice-based simulations of micro-environments with heterogeneous nutrient resources reveal that competition between wild-type and GASP rpoS819 strains of E. coli offers mutual benefit, particularly in nutrient deprived regions. Our computational model spatially maps bacteria populations and energy sources onto a set of 3D lattices that collectively resemble the topology of North America. By incorporating Wright-Fisher reproduction into a probabilistic leap-frog scheme, we observe populations of wild-type and GASP rpoS819 cells compete for resources and, yet, aid each other's long term survival. The connection to how spatial political ideologies map in a similar way is discussed.
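For readers unfamiliar with the reproduction rule mentioned above, the toy sketch below applies Wright-Fisher resampling to a single lattice site with two competing strains. The fitness values and population size are invented for illustration and are not taken from the simulations.

```python
# Toy Wright-Fisher competition between a wild-type and a GASP-like strain at one site.
import numpy as np

rng = np.random.default_rng(3)

def wright_fisher(n_gasp, n_total, w_gasp, w_wt, generations):
    """Return the GASP count each generation under binomial (Wright-Fisher) resampling."""
    history = [n_gasp]
    for _ in range(generations):
        p = (n_gasp * w_gasp) / (n_gasp * w_gasp + (n_total - n_gasp) * w_wt)
        n_gasp = rng.binomial(n_total, p)     # next generation drawn at fixed size
        history.append(n_gasp)
    return history

# Hypothetical scenario: GASP mutants gain a fitness edge under starvation, none otherwise.
rich  = wright_fisher(n_gasp=50, n_total=1000, w_gasp=1.00, w_wt=1.00, generations=200)
starv = wright_fisher(n_gasp=50, n_total=1000, w_gasp=1.05, w_wt=1.00, generations=200)
print(f"GASP fraction after 200 generations: rich {rich[-1]/1000:.2f}, starved {starv[-1]/1000:.2f}")
```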
A web based tool for storing and visualising data generated within a smart home.
McDonald, H A; Nugent, C D; Moore, G; Finlay, D D; Hallberg, J
2011-01-01
There is a growing need to re-assess the current approaches available to researchers for storing and managing heterogeneous data generated within a smart home environment. In our current work we have developed the homeML Application, a web-based tool to support researchers engaged in the area of smart home research as they perform experiments. Within this paper the homeML Application is presented, including its fundamental components, the homeML Repository and the homeML Toolkit. Results from a usability study conducted by 10 computer science researchers are presented, the initial results of which have been positive.
Particle simulation on heterogeneous distributed supercomputers
NASA Technical Reports Server (NTRS)
Becker, Jeffrey C.; Dagum, Leonardo
1993-01-01
We describe the implementation and performance of a three dimensional particle simulation distributed between a Thinking Machines CM-2 and a Cray Y-MP. These are connected by a combination of two high-speed networks: a high-performance parallel interface (HIPPI) and an optical network (UltraNet). This is the first application to use this configuration at NASA Ames Research Center. We describe our experience implementing and using the application and report the results of several timing measurements. We show that the distribution of applications across disparate supercomputing platforms is feasible and has reasonable performance. In addition, several practical aspects of the computing environment are discussed.
A cloud-based X73 ubiquitous mobile healthcare system: design and implementation.
Ji, Zhanlin; Ganchev, Ivan; O'Droma, Máirtín; Zhang, Xin; Zhang, Xueji
2014-01-01
Based on the user-centric paradigm for next generation networks, this paper describes a ubiquitous mobile healthcare (uHealth) system based on the ISO/IEEE 11073 personal health data (PHD) standards (X73) and cloud computing techniques. A number of design issues associated with the system implementation are outlined. The system includes a middleware on the user side, providing a plug-and-play environment for heterogeneous wireless sensors and mobile terminals utilizing different communication protocols and a distributed "big data" processing subsystem in the cloud. The design and implementation of this system are envisaged as an efficient solution for the next generation of uHealth systems.
Accounting for Heterogeneous-Phase Chemistry in Air Quality Models - Research Needs and Applications
Understanding the extent to which heterogeneous chemical reactions affect the burden and distribution of atmospheric pollutants is important because heterogeneous surfaces are ubiquitous throughout our environment. They include materials such as aerosol particles, clouds and fog,...
On beyond the standard model for high explosives: challenges & obstacles to surmount
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menikoff, Ralph Ds
2009-01-01
Plastic-bonded explosives (PBX) are heterogeneous materials. Nevertheless, current explosive models treat them as homogeneous materials. To compensate, an empirically determined effective burn rate is used in place of a chemical reaction rate. A significant limitation of these models is that different burn parameters are needed for applications in different regimes; for example, shock initiation of a PBX at different initial temperatures or different initial densities. This is due to temperature fluctuations generated when a heterogeneous material is shock compressed. Localized regions of high temperature are called hot spots. They dominate the reaction for shock initiation. The understanding of hot spot generation and their subsequent evolution has been limited by the inability to measure transients on small spatial (~1 micrometer) and small temporal (~1 ns) scales in the harsh environment of a detonation. With the advances in computing power, it is natural to try and gain an understanding of hot-spot initiation with numerical experiments based on meso-scale simulations that resolve material heterogeneities and utilize realistic chemical reaction rates. However, to capture the underlying physics correctly, such high resolution simulations will require more than fast computers with a large amount of memory. Here we discuss some of the issues that need to be addressed. These include dissipative mechanisms that generate hot spots, accurate thermal properties for the equations of state of the reactants and products, and controlling numerical entropy error from shock impedance mismatches at material interfaces. The latter can generate artificial hot spots and lead to premature reaction. Eliminating numerical hot spots is critical for shock initiation simulations due to the positive feedback between the energy release from reaction and the hydrodynamic flow.
NASA Astrophysics Data System (ADS)
Deng, Liang; Bai, Hanli; Wang, Fang; Xu, Qingxin
2016-06-01
CPU/GPU computing allows scientists to tremendously accelerate their numerical codes. In this paper, we port and optimize a double precision alternating direction implicit (ADI) solver for three-dimensional compressible Navier-Stokes equations from our in-house Computational Fluid Dynamics (CFD) software on heterogeneous platform. First, we implement a full GPU version of the ADI solver to remove a lot of redundant data transfers between CPU and GPU, and then design two fine-grain schemes, namely “one-thread-one-point” and “one-thread-one-line”, to maximize the performance. Second, we present a dual-level parallelization scheme using the CPU/GPU collaborative model to exploit the computational resources of both multi-core CPUs and many-core GPUs within the heterogeneous platform. Finally, considering the fact that memory on a single node becomes inadequate when the simulation size grows, we present a tri-level hybrid programming pattern MPI-OpenMP-CUDA that merges fine-grain parallelism using OpenMP and CUDA threads with coarse-grain parallelism using MPI for inter-node communication. We also propose a strategy to overlap the computation with communication using the advanced features of CUDA and MPI programming. We obtain speedups of 6.0 for the ADI solver on one Tesla M2050 GPU in contrast to two Xeon X5670 CPUs. Scalability tests show that our implementation can offer significant performance improvement on heterogeneous platform.
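The inner kernel of an ADI sweep is a set of independent tridiagonal solves, one per grid line, which is what fine-grain schemes such as "one-thread-one-line" assign to individual GPU threads. As a hedged illustration (not the authors' CUDA code), here is the standard Thomas algorithm for one such line; the sample system is arbitrary.

```python
# Thomas algorithm: solve one tridiagonal system, the per-line building block of ADI.
import numpy as np

def thomas(a, b, c, d):
    """a = sub-diagonal, b = main diagonal, c = super-diagonal, d = right-hand side."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

n = 6
a = np.full(n, -1.0); a[0] = 0.0               # sub-diagonal (first entry unused)
c = np.full(n, -1.0); c[-1] = 0.0              # super-diagonal (last entry unused)
b = np.full(n, 2.0)
d = np.ones(n)
x = thomas(a, b, c, d)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(A @ x, d))                   # True: the solve is consistent
```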
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thessen, Anne E.; Bunker, Daniel E.; Buttigieg, Pier Luigi
Understanding the interplay between environmental conditions and phenotypes is a fundamental goal of biology. Unfortunately, data that include observations on phenotype and environment are highly heterogeneous and thus difficult to find and integrate. One approach that is likely to improve the status quo involves the use of ontologies to standardize and link data about phenotypes and environments. Specifying and linking data through ontologies will allow researchers to increase the scope and flexibility of large-scale analyses aided by modern computing methods. Investments in this area would advance diverse fields such as ecology, phylogenetics, and conservation biology. While several biological ontologies are well-developed, using them to link phenotypes and environments is rare because of gaps in ontological coverage and limits to interoperability among ontologies and disciplines. Lastly, in this manuscript, we present (1) use cases from diverse disciplines to illustrate questions that could be answered more efficiently using a robust linkage between phenotypes and environments, (2) two proof-of-concept analyses that show the value of linking phenotypes to environments in fishes and amphibians, and (3) two proposed example data models for linking phenotypes and environments using the extensible observation ontology (OBOE) and the Biological Collections Ontology (BCO); these provide a starting point for the development of a data model linking phenotypes and environments.
An interactive web-based system using cloud for large-scale visual analytics
NASA Astrophysics Data System (ADS)
Kaseb, Ahmed S.; Berry, Everett; Rozolis, Erik; McNulty, Kyle; Bontrager, Seth; Koh, Youngsol; Lu, Yung-Hsiang; Delp, Edward J.
2015-03-01
Network cameras have been growing rapidly in recent years. Thousands of public network cameras provide a tremendous amount of visual information about the environment. There is a need to analyze this valuable information for a better understanding of the world around us. This paper presents an interactive web-based system that enables users to execute image analysis and computer vision techniques on a large scale to analyze the data from more than 65,000 worldwide cameras. This paper focuses on how to use both the system's website and Application Programming Interface (API). Given a computer program that analyzes a single frame, the user needs to make only slight changes to the existing program and choose the cameras to analyze. The system handles the heterogeneity of the geographically distributed cameras, e.g., different brands and resolutions. The system allocates and manages Amazon EC2 and Windows Azure cloud resources to meet the analysis requirements.
Aerodynamic Design of Complex Configurations Using Cartesian Methods and CAD Geometry
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.; Pulliam, Thomas H.
2003-01-01
The objective of this paper is to present the development of an optimization capability for the Cartesian inviscid-flow analysis package of Aftosmis et al. We evaluate and characterize the following modules within the new optimization framework: (1) a component-based geometry parameterization approach using a CAD solid representation and the CAPRI interface; (2) the use of Cartesian methods in the development of automated optimization tools; and (3) optimization techniques using a genetic algorithm and a gradient-based algorithm. The discussion and investigations focus on several real-world problems of the optimization process. We examine the architectural issues associated with the deployment of a CAD-based design approach in a heterogeneous parallel computing environment that contains both CAD workstations and dedicated compute nodes. In addition, we study the influence of noise on the performance of optimization techniques, and the overall efficiency of the optimization process for aerodynamic design of complex three-dimensional configurations.
Experience with abstract notation one
NASA Technical Reports Server (NTRS)
Harvey, James D.; Weaver, Alfred C.
1990-01-01
The development of computer science has produced a vast number of machine architectures, programming languages, and compiler technologies. The cross product of these three characteristics defines the spectrum of previous and present data representation methodologies. With regard to computer networks, the uniqueness of these methodologies presents an obstacle when disparate host environments are to be interconnected. Interoperability within a heterogeneous network relies upon the establishment of data representation commonality. The International Standards Organization (ISO) is currently developing the abstract syntax notation one standard (ASN.1) and the basic encoding rules standard (BER) that collectively address this problem. When used within the presentation layer of the open systems interconnection reference model, these two standards provide the data representation commonality required to facilitate interoperability. The details of a compiler that was built to automate the use of ASN.1 and BER are described. From this experience, insights into both standards are given and potential problems relating to this development effort are discussed.
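To make the BER side concrete, the sketch below hand-encodes an ASN.1 INTEGER as a tag-length-value triple following the standard rules (tag 0x02, definite-length octets, minimal two's-complement content). It covers only the non-negative-integer case and is not the compiler described in the paper.

```python
# A minimal sketch of BER (tag-length-value) encoding for a non-negative
# ASN.1 INTEGER, to illustrate the kind of data-representation commonality
# the ASN.1/BER standards provide.  Real compilers handle the full type
# system; this is only the INTEGER happy path.
def ber_encode_integer(value: int) -> bytes:
    if value < 0:
        raise ValueError("sketch handles non-negative integers only")
    # minimal big-endian two's-complement content octets
    content = value.to_bytes((value.bit_length() // 8) + 1, "big")
    length = len(content)
    if length < 0x80:                                   # short-form length
        length_octets = bytes([length])
    else:                                               # long-form length
        l = length.to_bytes((length.bit_length() + 7) // 8, "big")
        length_octets = bytes([0x80 | len(l)]) + l
    return bytes([0x02]) + length_octets + content      # 0x02 = INTEGER tag

print(ber_encode_integer(300).hex())                    # '0202012c'
```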
NASA Technical Reports Server (NTRS)
Moorhead, Robert J., II; Smith, Wayne
1992-01-01
This mid-year report presents design concepts for the communication network for the Advanced Solid Rocket Motor (ASRM) facility being built at Yellow Creek near Iuka, MS. The overall network is to include heterogeneous computers, to use various protocols, and to have different bandwidths. Performance consideration must be given to the potential network applications in the network environment. The performance evaluation of X window applications is given the major emphasis in this report. A simulation study using Bones will be included later. The report has three parts: Part 1 is an investigation of X window traffic using TCP/IP over Ethernet networks; Part 2 is a survey of performance concepts of X window applications with Macintosh computers; and the last part is a tutorial on DECnet protocols. The results of this report should be useful in the design and operation of the ASRM communication network.
Xue, Ling; Scoglio, Caterina
2013-05-01
A wide range of infectious diseases are both vertically and horizontally transmitted. Such diseases are spatially transmitted via multiple species in heterogeneous environments, typically described by complex meta-population models. The reproduction number, R0, is a critical metric predicting whether the disease can invade the meta-population system. This paper presents the reproduction number for a generic disease vertically and horizontally transmitted among multiple species in heterogeneous networks, where nodes are locations and links reflect outgoing or incoming movement flows. The metapopulation model for vertically and horizontally transmitted diseases is gradually formulated from two-species, two-node network models. We derived an explicit expression for R0, which is the spectral radius of a matrix reduced in size with respect to the original next generation matrix. The reproduction number is shown to be a function of vertical and horizontal transmission parameters, and the lower bound is the reproduction number for horizontal transmission. As an application, the reproduction number and its bounds are derived for the Rift Valley fever zoonosis, in which livestock, mosquitoes, and humans are the involved species. By computing the reproduction number for different scenarios through numerical simulations, we found that the reproduction number is affected by livestock movement rates only when parameters are heterogeneous across nodes. To summarize, our study derives the reproduction number for vertically and horizontally transmitted diseases in heterogeneous networks. This explicit expression is easily adaptable to specific infectious diseases, affording insights into disease evolution. Copyright © 2013 Elsevier Inc. All rights reserved.
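For readers unfamiliar with the next-generation-matrix formalism the abstract builds on, the sketch below computes R0 as the spectral radius of K = F V^{-1} for a small system; the 4x4 matrices are hypothetical placeholders for a two-species, two-node network, not parameters from the paper.

```python
# A minimal sketch of computing R0 as the spectral radius of the next
# generation matrix K = F V^{-1}; the entries below are hypothetical
# placeholders, not parameters from the study.
import numpy as np

F = np.array([[0.0, 0.3, 0.0, 0.1],      # new-infection rates
              [0.4, 0.0, 0.1, 0.0],
              [0.0, 0.1, 0.0, 0.3],
              [0.1, 0.0, 0.4, 0.0]])
V = np.diag([0.2, 0.2, 0.25, 0.25])      # removal/transition rates

K = F @ np.linalg.inv(V)                 # next generation matrix
R0 = max(abs(np.linalg.eigvals(K)))      # spectral radius
print(f"R0 = {R0:.3f}")
```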
NASA Astrophysics Data System (ADS)
Pruhs, Kirk
A particularly important emergent technology is heterogeneous processors (or cores), which many computer architects believe will be the dominant architectural design in the future. The main advantage of a heterogeneous architecture, relative to an architecture of identical processors, is that it allows for the inclusion of processors whose design is specialized for particular types of jobs, and for jobs to be assigned to a processor best suited for that job. Most notably, it is envisioned that these heterogeneous architectures will consist of a small number of high-power high-performance processors for critical jobs, and a larger number of lower-power lower-performance processors for less critical jobs. Naturally, the lower-power processors would be more energy efficient in terms of the computation performed per unit of energy expended, and would generate less heat per unit of computation. For a given area and power budget, heterogeneous designs can give significantly better performance for standard workloads. Moreover, even processors that were designed to be homogeneous are increasingly likely to be heterogeneous at run time: the dominant underlying cause is the increasing variability in the fabrication process as the feature size is scaled down (although run-time faults will also play a role). Since manufacturing yields would be unacceptably low if every processor/core were required to be perfect, and since there would be significant performance loss from derating the entire chip to the functioning of the least functional processor (which is what would be required in order to attain processor homogeneity), some processor heterogeneity seems inevitable in chips with many processors/cores.
Scout: high-performance heterogeneous computing made simple
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jablin, James; Mc Cormick, Patrick; Herlihy, Maurice
2011-01-26
Researchers must often write their own simulation and analysis software. During this process they simultaneously confront both computational and scientific problems. Current strategies for aiding the generation of performance-oriented programs do not abstract the software development from the science. Furthermore, the problem is becoming increasingly complex and pressing with the continued development of many-core and heterogeneous (CPU-GPU) architectures. To achieve high performance, scientists must expertly navigate both software and hardware. Co-design between computer scientists and research scientists can alleviate but not solve this problem. The science community requires better tools for developing, optimizing, and future-proofing codes, allowing scientists to focus on their research while still achieving high computational performance. Scout is a parallel programming language and extensible compiler framework targeting heterogeneous architectures. It provides the abstraction required to buffer scientists from the constantly shifting details of hardware while still realizing high performance by encapsulating software and hardware optimization within a compiler framework.
Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki
2014-12-01
As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.
Good coupling for the multiscale patch scheme on systems with microscale heterogeneity
NASA Astrophysics Data System (ADS)
Bunder, J. E.; Roberts, A. J.; Kevrekidis, I. G.
2017-05-01
Computational simulation of microscale detailed systems is frequently only feasible over spatial domains much smaller than the macroscale of interest. The 'equation-free' methodology couples many small patches of microscale computations across space to empower efficient computational simulation over macroscale domains of interest. Motivated by molecular or agent simulations, we analyse the performance of various coupling schemes for patches when the microscale is inherently 'rough'. As a canonical problem in this universality class, we systematically analyse the case of heterogeneous diffusion on a lattice. Computer algebra explores how the dynamics of coupled patches predict the large scale emergent macroscale dynamics of the computational scheme. We determine good design for the coupling of patches by comparing the macroscale predictions from patch dynamics with the emergent macroscale on the entire domain, thus minimising the computational error of the multiscale modelling. The minimal error on the macroscale is obtained when the coupling utilises averaging regions which are between a third and a half of the patch. Moreover, when the symmetry of the inter-patch coupling matches that of the underlying microscale structure, patch dynamics predicts the desired macroscale dynamics to any specified order of error. The results confirm that the patch scheme is useful for macroscale computational simulation of a range of systems with microscale heterogeneity.
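The canonical microscale problem analysed, diffusion on a lattice with 'rough' heterogeneous diffusivity, is easy to state as a full-domain simulation. The sketch below advances such a system with an explicit conservative scheme; the lattice size, diffusivity profile, and time step are illustrative assumptions, and the patch coupling itself (averaging regions of a third to a half of each patch) is not implemented here.

```python
# A minimal full-domain simulation of heterogeneous diffusion on a 1D
# periodic lattice, the microscale problem class discussed; the patch
# scheme that couples small patches across space is not shown.
import numpy as np

n, dt, steps = 200, 0.2, 500
kappa = 1.0 + 0.8 * np.cos(2 * np.pi * np.arange(n) / 10)   # microscale heterogeneity
u = np.exp(-((np.arange(n) - n / 2) ** 2) / 50.0)           # initial hump
mass0 = u.sum()

for _ in range(steps):
    flux = kappa * (np.roll(u, -1) - u)      # flux on the bond between site i and i+1
    u += dt * (flux - np.roll(flux, 1))      # conservative, periodic update

print("mass conserved:", np.isclose(u.sum(), mass0))
```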
Heterogeneous high throughput scientific computing with APM X-Gene and Intel Xeon Phi
Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; ...
2015-05-22
Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. As a result, we report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).
Pérez-Beteta, Julián; Luque, Belén; Arregui, Elena; Calvo, Manuel; Borrás, José M; López, Carlos; Martino, Juan; Velasquez, Carlos; Asenjo, Beatriz; Benavides, Manuel; Herruzo, Ismael; Martínez-González, Alicia; Pérez-Romasanta, Luis; Arana, Estanislao; Pérez-García, Víctor M
2016-01-01
Objective: The main objective of this retrospective work was the study of three-dimensional (3D) heterogeneity measures of post-contrast pre-operative MR images acquired with T1-weighted sequences of patients with glioblastoma (GBM) as predictors of clinical outcome. Methods: 79 patients from 3 hospitals were included in the study. 16 3D textural heterogeneity measures were computed, including run-length matrix (RLM) features (regional heterogeneity) and co-occurrence matrix (CM) features (local heterogeneity). The significance of the results was studied using Kaplan–Meier curves and Cox proportional hazards analysis. Correlation between the variables of the study was assessed using Spearman's correlation coefficient. Results: Kaplan–Meier survival analysis showed that 4 of the 11 RLM features and 4 of the 5 CM features considered were robust predictors of survival. The median survival differences in the most significant cases were over 6 months. Conclusion: Heterogeneity measures computed on the post-contrast pre-operative T1-weighted MR images of patients with GBM are predictors of survival. Advances in knowledge: Texture analysis to assess tumour heterogeneity has been widely studied. However, most works develop a two-dimensional analysis, focusing only on one MRI slice to assess tumour heterogeneity. The study of fully 3D heterogeneity textural features as predictors of clinical outcome is more robust and is not dependent on the selected slice of the tumour. PMID:27319577
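As a concrete illustration of the kind of local-heterogeneity feature the co-occurrence matrix (CM) family provides, the sketch below computes a 2D grey-level co-occurrence matrix and its contrast for a quantised image patch; the study itself uses fully 3D RLM and CM features on T1-weighted MRI, so the function here is only a toy stand-in.

```python
# A minimal 2D illustration of a grey-level co-occurrence matrix (CM) and
# one local-heterogeneity feature (contrast); not the study's 3D pipeline.
import numpy as np

def glcm_contrast(img, levels=8):
    q = np.floor(img * levels / (img.max() + 1e-9)).astype(int)   # quantise grey levels
    q = np.clip(q, 0, levels - 1)
    cm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):          # pixel pairs, offset (0, 1)
        cm[a, b] += 1
    cm /= cm.sum()
    i, j = np.indices(cm.shape)
    return np.sum(cm * (i - j) ** 2)        # contrast weights large grey-level jumps

rng = np.random.default_rng(0)
smooth = np.ones((32, 32)) * 100.0
noisy = rng.uniform(0, 200, size=(32, 32))
print(glcm_contrast(smooth), glcm_contrast(noisy))   # heterogeneous patch scores higher
```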
Molina, David; Pérez-Beteta, Julián; Luque, Belén; Arregui, Elena; Calvo, Manuel; Borrás, José M; López, Carlos; Martino, Juan; Velasquez, Carlos; Asenjo, Beatriz; Benavides, Manuel; Herruzo, Ismael; Martínez-González, Alicia; Pérez-Romasanta, Luis; Arana, Estanislao; Pérez-García, Víctor M
2016-07-04
The main objective of this retrospective work was the study of three-dimensional (3D) heterogeneity measures of post-contrast pre-operative MR images acquired with T1-weighted sequences of patients with glioblastoma (GBM) as predictors of clinical outcome. 79 patients from 3 hospitals were included in the study. 16 3D textural heterogeneity measures were computed, including run-length matrix (RLM) features (regional heterogeneity) and co-occurrence matrix (CM) features (local heterogeneity). The significance of the results was studied using Kaplan-Meier curves and Cox proportional hazards analysis. Correlation between the variables of the study was assessed using Spearman's correlation coefficient. Kaplan-Meier survival analysis showed that 4 of the 11 RLM features and 4 of the 5 CM features considered were robust predictors of survival. The median survival differences in the most significant cases were over 6 months. Heterogeneity measures computed on the post-contrast pre-operative T1-weighted MR images of patients with GBM are predictors of survival. Texture analysis to assess tumour heterogeneity has been widely studied. However, most works develop a two-dimensional analysis, focusing only on one MRI slice to assess tumour heterogeneity. The study of fully 3D heterogeneity textural features as predictors of clinical outcome is more robust and is not dependent on the selected slice of the tumour.
Mandal, Shovon; Shurin, Jonathan B.; Efroymson, Rebecca A.; ...
2018-05-23
The relationship between biodiversity and productivity has emerged as a central theme in ecology. Mechanistic explanations for this relationship suggest that the role organisms play in the ecosystem (i.e., niches or functional traits) is a better predictor of ecosystem stability and productivity than taxonomic richness. Here, we tested the capacity of functional diversity in nitrogen uptake in experimental microalgal communities to predict the complementarity effect (CE) and selection effect (SE) of biodiversity on productivity. We grew five algal species as monocultures and as polycultures in pairwise combinations in homogeneous (ammonium, nitrate, or urea alone) and heterogeneous nitrogen (mixed nitrogen) environments to determine whether complementarity between species may be enhanced in heterogeneous environments. We show that the positive diversity effects on productivity in heterogeneous environments resulted from complementarity effects with no positive contribution by species-specific SEs. Positive biodiversity effects in homogeneous environments, when present (nitrate and urea treatments but not ammonium), were driven both by CE and SE. Our results suggest that functional diversity increases species complementarity and productivity mainly in heterogeneous resource environments. Furthermore, these results provide evidence that the positive effect of functional diversity on community productivity depends on the diversity of resources present in the environment.
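The complementarity and selection effects referred to here are usually obtained from the standard additive partition of the net biodiversity effect (Loreau and Hector). The sketch below applies that partition to made-up monoculture and mixture yields; it is not the study's own statistical analysis.

```python
# A minimal sketch of the standard additive partition of the net
# biodiversity effect into a complementarity effect (CE) and a selection
# effect (SE); the yields below are made-up numbers.
import numpy as np

mono = np.array([10.0, 8.0, 6.0, 12.0, 9.0])     # monoculture yields M_i
mix  = np.array([ 3.0, 2.5, 1.8,  3.2, 2.6])     # species yields in the mixture
N = len(mono)

expected_ry = np.full(N, 1.0 / N)                # expected relative yield in mixture
delta_ry = mix / mono - expected_ry              # deviation from expectation

CE = N * delta_ry.mean() * mono.mean()           # complementarity effect
SE = N * np.cov(delta_ry, mono, bias=True)[0, 1] # selection effect (population covariance)
net = mix.sum() - (expected_ry * mono).sum()     # net biodiversity effect

print(f"CE={CE:.3f}  SE={SE:.3f}  CE+SE={CE+SE:.3f}  net={net:.3f}")
```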
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandal, Shovon; Shurin, Jonathan B.; Efroymson, Rebecca A.
The relationship between biodiversity and productivity has emerged as a central theme in ecology. Mechanistic explanations for this relationship suggest that the role organisms play in the ecosystem (i.e., niches or functional traits) is a better predictor of ecosystem stability and productivity than taxonomic richness. Here, we tested the capacity of functional diversity in nitrogen uptake in experimental microalgal communities to predict the complementarity effect (CE) and selection effect (SE) of biodiversity on productivity. We grew five algal species as monocultures and as polycultures in pairwise combinations in homogeneous (ammonium, nitrate, or urea alone) and heterogeneous nitrogen (mixed nitrogen) environments to determine whether complementarity between species may be enhanced in heterogeneous environments. We show that the positive diversity effects on productivity in heterogeneous environments resulted from complementarity effects with no positive contribution by species-specific SEs. Positive biodiversity effects in homogeneous environments, when present (nitrate and urea treatments but not ammonium), were driven both by CE and SE. Our results suggest that functional diversity increases species complementarity and productivity mainly in heterogeneous resource environments. Furthermore, these results provide evidence that the positive effect of functional diversity on community productivity depends on the diversity of resources present in the environment.
Viscosity Measurement Using Drop Coalescence in Microgravity
NASA Technical Reports Server (NTRS)
Antar, Basil N.; Ethridge, Edwin C.; Maxwell, Daniel; Curreri, Peter A. (Technical Monitor)
2002-01-01
We present here validation studies of a new method, intended for application in a microgravity environment, which measures the viscosity of highly viscous undercooled liquids using drop coalescence. The method has the advantage of avoiding heterogeneous nucleation at container walls caused by crystallization of undercooled liquids during processing. Homogeneous nucleation can also be avoided due to the rapidity of the measurement using this method. The technique relies on measurements from experiments conducted in a near-zero-gravity environment as well as a highly accurate analytical formulation for the coalescence process. The viscosity of the liquid is determined by allowing the computed free surface shape relaxation time to be adjusted in response to the measured free surface velocity for two coalescing drops. Results are presented from two sets of validation experiments for the method, which were conducted on board aircraft flying parabolic trajectories. In these tests the viscosity of a highly viscous liquid, namely glycerin, was determined at different temperatures using the drop coalescence method described here. The experiments measured the free surface velocity of two glycerin drops coalescing under the action of surface tension alone in a low-gravity environment using high-speed photography. The liquid viscosity was determined by adjusting the computed free surface velocity values to the measured experimental data. The results of these experiments were found to agree reasonably well with the known viscosity of the test liquid used.
Uchida, Yuichiro; Masui, Toshihiko; Sato, Asahi; Nagai, Kazuyuki; Anazawa, Takayuki; Takaori, Kyoichi; Uemoto, Shinji
2018-03-27
Peripancreatic collections occur frequently after distal pancreatectomy. However, the sequelae of peripancreatic collections vary from case to case, and their clinical impact is uncertain. In this study, the correlations between CT findings of peripancreatic collections and complications after distal pancreatectomy were investigated. Ninety-six consecutive patients who had undergone distal pancreatectomy between 2010 and 2015 were retrospectively investigated. The extent and heterogeneity of peripancreatic collections and background clinicopathological characteristics were analyzed. The extent of peripancreatic collections was calculated based on three-dimensional computed tomography images, and the degree of heterogeneity of peripancreatic collections was assessed based on the standard deviation of their density on computed tomography. Of 85 patients who underwent postoperative computed tomography imaging, a peripancreatic collection was detected in 77 (91%). Patients with either a large extent or a high degree of heterogeneity of peripancreatic collection had a significantly higher rate of clinically relevant pancreatic fistula than those without (odds ratio 5.95, 95% confidence interval 2.12-19.72, p = 0.001; odds ratio 8.0, 95% confidence interval 2.87-24.19, p = 0.0001, respectively). A large and heterogeneous peripancreatic collection was significantly associated with postoperative complications, especially clinically relevant postoperative pancreatic fistula. A small and homogeneous peripancreatic collection could be safely observed.
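The heterogeneity measure used here, the standard deviation of CT density inside the segmented collection, is simple to compute once a mask is available. The sketch below does so on a synthetic volume; the voxel size and all values are assumptions, not data from the study.

```python
# A minimal illustration of the measures described: the extent of a
# segmented collection and the standard deviation of its CT density.
# The volume, mask, and voxel size below are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(1)
ct = rng.normal(20, 15, size=(64, 64, 64))          # synthetic volume in Hounsfield units
mask = np.zeros_like(ct, dtype=bool)
mask[20:40, 20:40, 20:40] = True                    # segmented peripancreatic collection

extent_ml = mask.sum() * (0.7 * 0.7 * 1.0) / 1000.0 # assumed 0.7 x 0.7 x 1.0 mm voxels
heterogeneity = ct[mask].std()                      # SD of density inside the collection
print(f"extent ~ {extent_ml:.1f} ml, heterogeneity (HU SD) ~ {heterogeneity:.1f}")
```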
Robust mechanobiological behavior emerges in heterogeneous myosin systems.
Egan, Paul F; Moore, Jeffrey R; Ehrlicher, Allen J; Weitz, David A; Schunn, Christian; Cagan, Jonathan; LeDuc, Philip
2017-09-26
Biological complexity presents challenges for understanding natural phenomena and engineering new technologies, particularly in systems with molecular heterogeneity. Such complexity is present in myosin motor protein systems, and computational modeling is essential for determining how collective myosin interactions produce emergent system behavior. We develop a computational approach for altering myosin isoform parameters and their collective organization, and support predictions with in vitro experiments of motility assays with α-actinins as molecular force sensors. The computational approach models variations in single myosin molecular structure, system organization, and force stimuli to predict system behavior for filament velocity, energy consumption, and robustness. Robustness is the range of forces over which a filament is expected to have continuous velocity and depends on the energy used by the myosin system. Myosin systems are shown to have highly nonlinear behavior across force conditions that may be exploited at a systems level by combining slow and fast myosin isoforms heterogeneously. Results suggest some heterogeneous systems have lower energy use near stall conditions and greater energy consumption when unloaded, therefore promoting robustness. These heterogeneous system capabilities are unique in comparison with homogeneous systems and potentially advantageous for high-performance bionanotechnologies. Findings open doors at the intersections of mechanics and biology, particularly for understanding and treating myosin-related diseases and developing approaches for motor molecule-based technologies.
Robust mechanobiological behavior emerges in heterogeneous myosin systems
NASA Astrophysics Data System (ADS)
Egan, Paul F.; Moore, Jeffrey R.; Ehrlicher, Allen J.; Weitz, David A.; Schunn, Christian; Cagan, Jonathan; LeDuc, Philip
2017-09-01
Biological complexity presents challenges for understanding natural phenomena and engineering new technologies, particularly in systems with molecular heterogeneity. Such complexity is present in myosin motor protein systems, and computational modeling is essential for determining how collective myosin interactions produce emergent system behavior. We develop a computational approach for altering myosin isoform parameters and their collective organization, and support predictions with in vitro experiments of motility assays with α-actinins as molecular force sensors. The computational approach models variations in single myosin molecular structure, system organization, and force stimuli to predict system behavior for filament velocity, energy consumption, and robustness. Robustness is the range of forces over which a filament is expected to have continuous velocity and depends on the energy used by the myosin system. Myosin systems are shown to have highly nonlinear behavior across force conditions that may be exploited at a systems level by combining slow and fast myosin isoforms heterogeneously. Results suggest some heterogeneous systems have lower energy use near stall conditions and greater energy consumption when unloaded, therefore promoting robustness. These heterogeneous system capabilities are unique in comparison with homogeneous systems and potentially advantageous for high-performance bionanotechnologies. Findings open doors at the intersections of mechanics and biology, particularly for understanding and treating myosin-related diseases and developing approaches for motor molecule-based technologies.
NASA Astrophysics Data System (ADS)
Yue, S. S.; Wen, Y. N.; Lv, G. N.; Hu, D.
2013-10-01
In recent years, the increasing development of cloud computing technologies has laid a critical foundation for efficiently solving complicated geographic issues. However, it is still difficult to realize the cooperative operation of massive heterogeneous geographical models. Traditional cloud architecture is apt to provide a centralized solution to end users, while all the required resources are often offered by large enterprises or special agencies. Thus, it is a closed framework from the perspective of resource utilization. Solving comprehensive geographic issues requires integrating multifarious heterogeneous geographical models and data. In this case, an open computing platform is needed, with which model owners can package and deploy their models into the cloud conveniently, while model users can search, access and utilize those models with cloud facilities. Based on this concept, open cloud service strategies for the sharing of heterogeneous geographic analysis models are studied in this article. The key technologies, namely a unified cloud interface strategy, a sharing platform based on cloud services, and a computing platform based on cloud services, are discussed in detail, and related experiments are conducted for further verification.
Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes
NASA Astrophysics Data System (ADS)
Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt; Stuehn, Torsten
2017-11-01
Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation from its genesis. Here, we introduce the heterogeneous domain decomposition approach, which is a combination of a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, theoretical modeling and scaling laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also demonstrate the capabilities of the new approach by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and compared to the heterogeneous domain decomposition proposed in this work. These two systems comprise an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.
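The a priori rearrangement of subdomain walls can be pictured with a one-dimensional toy: place the walls so that each rank receives roughly the same estimated work, using the cumulative particle density as the cost proxy. The sketch below is only a conceptual illustration under that assumption, not the scheme's implementation.

```python
# A 1D illustration of heterogeneity-aware wall placement: subdomain
# boundaries are chosen so each rank gets roughly equal estimated work,
# with one unit of work assumed per particle.
import numpy as np

rng = np.random.default_rng(2)
# particle positions concentrated on the left half (e.g. the solvated region)
x = np.concatenate([rng.uniform(0.0, 0.5, 8000), rng.uniform(0.5, 1.0, 2000)])

n_ranks = 4
walls = np.quantile(x, np.linspace(0, 1, n_ranks + 1))   # equal-work boundaries

counts, _ = np.histogram(x, bins=walls)
print("walls:", np.round(walls, 3))
print("particles per rank:", counts)                     # ~2500 each despite heterogeneity
```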
Wang, Yong-Jian; Müller-Schärer, Heinz; van Kleunen, Mark; Cai, Ai-Ming; Zhang, Ping; Yan, Rong; Dong, Bi-Cheng; Yu, Fei-Hai
2017-12-01
What confers invasive alien plants a competitive advantage over native plants remains open to debate. Many of the world's worst invasive alien plants are clonal and able to share resources within clones (clonal integration), particularly in heterogeneous environments. Here, we tested the hypothesis that clonal integration benefits invasive clonal plants more than natives and thus confers invasives a competitive advantage. We selected five congeneric and naturally co-occurring pairs of invasive alien and native clonal plants in China, and grew pairs of connected and disconnected ramets under heterogeneous light, soil nutrient and water conditions that are commonly encountered by alien plants during their invasion into new areas. Clonal integration increased biomass of all plants in all three heterogeneous resource environments. However, invasive plants benefited more from clonal integration than natives. Consequently, invasive plants produced more biomass than natives. Our results indicate that clonal integration may confer invasive alien clonal plants a competitive advantage over natives. Therefore, differences in the ability of clonal integration could potentially explain, at least partly, the invasion success of alien clonal plants in areas where resources are heterogeneously distributed. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.
Structural Weight Estimation for Launch Vehicles
NASA Technical Reports Server (NTRS)
Cerro, Jeff; Martinovic, Zoran; Su, Philip; Eldred, Lloyd
2002-01-01
This paper describes some of the work in progress to develop automated structural weight estimation procedures within the Vehicle Analysis Branch (VAB) of the NASA Langley Research Center. One task of the VAB is to perform system studies at the conceptual and early preliminary design stages on launch vehicles and in-space transportation systems. Some examples of these studies for Earth to Orbit (ETO) systems are the Future Space Transportation System [1], Orbit On Demand Vehicle [2], Venture Star [3], and the Personnel Rescue Vehicle [4]. Structural weight calculation for launch vehicle studies can exist on several levels of fidelity. Typically, historically based weight equations are used in a vehicle sizing program. Many of the studies in the Vehicle Analysis Branch have been enhanced in terms of structural weight fraction prediction by utilizing some level of off-line structural analysis to incorporate material property, load intensity, and configuration effects which may not be captured by the historical weight equations. Modification of Mass Estimating Relationships (MERs) to assess design and technology impacts on vehicle performance is necessary to prioritize design and technology development decisions. Modern CAD/CAE software, ever-increasing computational power, and platform-independent computer programming languages such as Java provide new means to create greater depth of analysis tools which can be included in the conceptual design phase of launch vehicle development. Commercial framework computing environments provide easy-to-program techniques which coordinate and implement the flow of data in a distributed heterogeneous computing environment. It is the intent of this paper to present a process in development at NASA LaRC for enhanced structural weight estimation using this state-of-the-art computational power.
Dinov, Ivo D.; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Zamanyan, Alen; Torri, Federica; Macciardi, Fabio; Hobel, Sam; Moon, Seok Woo; Sung, Young Hee; Jiang, Zhiguo; Labus, Jennifer; Kurth, Florian; Ashe-McNalley, Cody; Mayer, Emeran; Vespa, Paul M.; Van Horn, John D.; Toga, Arthur W.
2013-01-01
The volume, diversity and velocity of biomedical data are exponentially increasing providing petabytes of new neuroimaging and genetics data every year. At the same time, tens-of-thousands of computational algorithms are developed and reported in the literature along with thousands of software tools and services. Users demand intuitive, quick and platform-agnostic access to data, software tools, and infrastructure from millions of hardware devices. This explosion of information, scientific techniques, computational models, and technological advances leads to enormous challenges in data analysis, evidence-based biomedical inference and reproducibility of findings. The Pipeline workflow environment provides a crowd-based distributed solution for consistent management of these heterogeneous resources. The Pipeline allows multiple (local) clients and (remote) servers to connect, exchange protocols, control the execution, monitor the states of different tools or hardware, and share complete protocols as portable XML workflows. In this paper, we demonstrate several advanced computational neuroimaging and genetics case-studies, and end-to-end pipeline solutions. These are implemented as graphical workflow protocols in the context of analyzing imaging (sMRI, fMRI, DTI), phenotypic (demographic, clinical), and genetic (SNP) data. PMID:23975276
Analysis and Modeling of Realistic Compound Channels in Transparent Relay Transmissions
Kanjirathumkal, Cibile K.; Mohammed, Sameer S.
2014-01-01
Analytical approaches for the characterisation of the compound channels in transparent multihop relay transmissions over independent fading channels are considered in this paper. Compound channels with homogeneous links are considered first. Using Mellin transform technique, exact expressions are derived for the moments of cascaded Weibull distributions. Subsequently, two performance metrics, namely, coefficient of variation and amount of fade, are derived using the computed moments. These metrics quantify the possible variations in the channel gain and signal to noise ratio from their respective average values and can be used to characterise the achievable receiver performance. This approach is suitable for analysing more realistic compound channel models for scattering density variations of the environment, experienced in multihop relay transmissions. The performance metrics for such heterogeneous compound channels having distinct distribution in each hop are computed and compared with those having identical constituent component distributions. The moments and the coefficient of variation computed are then used to develop computationally efficient estimators for the distribution parameters and the optimal hop count. The metrics and estimators proposed are complemented with numerical and simulation results to demonstrate the impact of the accuracy of the approaches. PMID:24701175
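For reference, the Mellin-transform route rests on the fact that the moments of a product of independent variables factor; with the common (scale λ_i, shape k_i) Weibull parameterisation, assumed here, the cascaded-channel moments and the two metrics follow as

$$ Z=\prod_{i=1}^{N}X_i,\qquad \mathbb{E}[Z^n]=\prod_{i=1}^{N}\mathbb{E}[X_i^n]=\prod_{i=1}^{N}\lambda_i^{\,n}\,\Gamma\!\left(1+\frac{n}{k_i}\right), $$

$$ \mathrm{CV}=\frac{\sqrt{\mathbb{E}[Z^2]-\mathbb{E}[Z]^2}}{\mathbb{E}[Z]},\qquad \mathrm{AF}=\frac{\mathbb{E}[Z^2]}{\mathbb{E}[Z]^2}-1, $$

where the variable to which the amount of fade is applied (channel gain or SNR) follows the paper's definitions rather than anything fixed here.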
OpenID Connect as a security service in cloud-based medical imaging systems.
Ma, Weina; Sartipi, Kamran; Sharghigoorabi, Hassan; Koff, David; Bak, Peter
2016-04-01
The evolution of cloud computing is driving the next generation of medical imaging systems. However, privacy and security concerns have been consistently regarded as the major obstacles to the adoption of cloud computing by healthcare domains. OpenID Connect, which combines OpenID and OAuth, is an emerging representational state transfer-based federated identity solution. It is one of the most widely adopted open standards, is positioned to become the de facto standard for securing cloud computing and mobile applications, and has been called the "Kerberos of the cloud." We introduce OpenID Connect as an authentication and authorization service in cloud-based diagnostic imaging (DI) systems, and propose enhancements that allow this technology to be incorporated within distributed enterprise environments. The objective of this study is to offer solutions for secure sharing of medical images among a diagnostic imaging repository (DI-r) and heterogeneous picture archiving and communication systems (PACS), as well as Web-based and mobile clients, in the cloud ecosystem. The main objective is to use the open-source OpenID Connect single sign-on and authorization service in a user-centric manner, so that deploying DI-r and PACS to private or community clouds provides security levels equivalent to the traditional computing model.
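To make the authorization-code flow concrete, the sketch below shows the two relying-party steps: redirecting the user to the authorization endpoint, then exchanging the returned code for tokens. The issuer URL, endpoint paths, client credentials, and redirect URI are placeholders, token validation and discovery via /.well-known/openid-configuration are omitted, and this is not the deployment described in the paper.

```python
# A minimal sketch of the OpenID Connect authorization-code flow from the
# relying party's side; issuer, credentials, and redirect URI are placeholders.
import secrets
import urllib.parse
import requests

ISSUER = "https://openid-provider.example.org"        # hypothetical OpenID provider
CLIENT_ID, CLIENT_SECRET = "di-r-client", "s3cret"     # placeholder credentials
REDIRECT_URI = "https://pacs-viewer.example.org/callback"

# 1) send the user's browser to the authorization endpoint
state = secrets.token_urlsafe(16)
auth_url = ISSUER + "/authorize?" + urllib.parse.urlencode({
    "response_type": "code",
    "scope": "openid profile",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "state": state,
})
print("redirect user to:", auth_url)

# 2) after the user authenticates, exchange the returned code for tokens
def exchange_code(code: str) -> dict:
    resp = requests.post(ISSUER + "/token", data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()        # contains the id_token and access_token
```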
Distributed computations in a dynamic, heterogeneous Grid environment
NASA Astrophysics Data System (ADS)
Dramlitsch, Thomas
2003-06-01
In order to face the rapidly increasing need for computational resources of various scientific and engineering applications, one has to think of new ways to make more efficient use of the world's current computational resources. In this respect, the growing speed of wide area networks has made a new kind of distributed computing possible: metacomputing or (distributed) Grid computing. This is a rather new and uncharted field in computational science. The rapidly increasing speed of networks even outperforms the average increase of processor speed: processor speeds double on average every 18 months whereas network bandwidths double every 9 months. Due to this development of local and wide area networks, Grid computing will certainly play a key role in the future of parallel computing. This type of distributed computing, however, differs from traditional parallel computing in many ways, since it has to deal with many problems not occurring in classical parallel computing. Those problems are, for example, heterogeneity, authentication, and slow networks, to mention only a few. Some of those problems, e.g. the allocation of distributed resources along with the provision of information about these resources to the application, have already been addressed by the Globus software. Unfortunately, as far as we know, hardly any application or middleware software takes advantage of this information, since most parallelizing algorithms for finite differencing codes are implicitly designed for single supercomputer or cluster execution. We show that although it is possible to apply classical parallelizing algorithms in a Grid environment, in most cases the observed efficiency of the executed code is very poor. In this work we are closing this gap. In our thesis, we: show that an execution of classical parallel codes in Grid environments is possible but very slow; analyze this situation of bad performance, nail down bottlenecks in communication, and remove unnecessary overhead and other reasons for low performance; develop new and advanced algorithms for parallelization that are aware of a Grid environment, in order to generalize the traditional parallelization schemes; implement and test these new methods, and replace and compare with the classical ones; and introduce dynamic strategies that automatically adapt the running code to the nature of the underlying Grid environment. The higher the performance one can achieve for a single application by manual tuning for a Grid environment, the lower the chance that those changes are widely applicable to other programs. In our analysis as well as in our implementation we tried to keep the balance between high performance and generality. None of our changes directly affect code on the application level, which makes our algorithms applicable to a whole class of real-world applications. The implementation of our work is done within the Cactus framework using the Globus toolkit, since we think that these are the most reliable and advanced programming frameworks for supporting computations in Grid environments. On the other hand, however, we tried to be as general as possible, i.e. all methods and algorithms discussed in this thesis are independent of Cactus or Globus. The ever denser and faster networking of computers and computing centers via high-speed networks enables a new kind of scientific distributed computing, in which geographically distant computing capacities can be combined into a single whole.
The resulting virtual supercomputer, which itself consists of several large machines, can be used to compute problems for which the individual machines are too small. The problems that cannot be solved numerically with today's computing capacities span all areas of modern science, from astrophysics, molecular physics, bioinformatics, and meteorology to number theory and fluid dynamics, to name only a few fields. Depending on the type of problem and the solution method, such "meta-computations" turn out to be more or less difficult. In general, such computations become harder and less efficient the more communication there is between the individual processes (or processors). The reason is that the bandwidths and latencies between two processors on the same large machine or cluster are, respectively, two to four orders of magnitude higher and lower than between processors that are hundreds of kilometers apart. Nevertheless, a time is now beginning in which it is possible to carry out computations on such virtual supercomputers even with communication-intensive programs. One large class of communication- and computation-intensive programs is the one concerned with solving differential equations by means of finite differences. It is precisely this class of programs and their operation on a virtual supercomputer that is treated in this dissertation. Methods for carrying out such distributed computations more efficiently are developed, analyzed, and implemented. The focus lies on analyzing existing, classical parallelization algorithms and extending them so that they use available information about machines and networks (e.g., provided by the Globus Toolkit) for more efficient parallelization. As far as we know, such additional information is hardly used in relevant programs, since the majority of all parallelization algorithms were implicitly designed for execution on large machines or clusters.
Dunlop, R; Arbona, A; Rajasekaran, H; Lo Iacono, L; Fingberg, J; Summers, P; Benkner, S; Engelbrecht, G; Chiarini, A; Friedrich, C M; Moore, B; Bijlenga, P; Iavindrasana, J; Hose, R D; Frangi, A F
2008-01-01
This paper presents an overview of computerised decision support for clinical practice. The concept of computer-interpretable guidelines is introduced in the context of the @neurIST project, which aims at supporting the research and treatment of asymptomatic unruptured cerebral aneurysms by bringing together heterogeneous data, computing and complex processing services. The architecture is generic enough to adapt it to the treatment of other diseases beyond cerebral aneurysms. The paper reviews the generic requirements of the @neurIST system and presents the innovative work in distributing executable clinical guidelines.
NASA Astrophysics Data System (ADS)
Kort-Kamp, W. J. M.; Cordes, N. L.; Ionita, A.; Glover, B. B.; Duque, A. L. Higginbotham; Perry, W. L.; Patterson, B. M.; Dalvit, D. A. R.; Moore, D. S.
2016-04-01
Electromagnetic stimulation of energetic materials provides a noninvasive and nondestructive tool for detecting and identifying explosives. We combine structural information based on x-ray computed tomography, experimental dielectric data, and electromagnetic full-wave simulations to study microscale electromagnetic heating of realistic three-dimensional heterogeneous explosives. We analyze the formation of electromagnetic hot spots and thermal gradients in the explosive-binder mesostructures and compare the heating rate for various binder systems.
NASA Astrophysics Data System (ADS)
Niedermeier, Dennis; Ervens, Barbara; Clauss, Tina; Voigtländer, Jens; Wex, Heike; Hartmann, Susan; Stratmann, Frank
2014-01-01
In a recent study, the Soccer ball model (SBM) was introduced for modeling and/or parameterizing heterogeneous ice nucleation processes. The model applies classical nucleation theory. It allows for a consistent description of both apparently singular and stochastic ice nucleation behavior, by distributing contact angles over the nucleation sites of a particle population assuming a Gaussian probability density function. The original SBM utilizes the Monte Carlo technique, which hampers its usage in atmospheric models, as fairly time-consuming calculations must be performed to obtain statistically significant results. Thus, we have developed a simplified and computationally more efficient version of the SBM. We successfully used the new SBM to parameterize experimental nucleation data of, e.g., bacterial ice nucleation. Both SBMs give identical results; however, the new model is computationally less expensive as confirmed by cloud parcel simulations. Therefore, it is a suitable tool for describing heterogeneous ice nucleation processes in atmospheric models.
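The core idea, distributing contact angles over a particle's nucleation sites with a Gaussian probability density and aggregating per-site freezing probabilities, can be sketched schematically. In the toy below the per-site rate function is an arbitrary stand-in, not the classical-nucleation-theory rate the SBM actually uses, and all parameter values are made up.

```python
# A schematic Monte Carlo sketch of the Soccer ball idea: each particle has
# n_sites surface sites with Gaussian-distributed contact angles, and a
# particle freezes if any site nucleates.  The per-site rate below is a toy
# stand-in, NOT the classical-nucleation-theory rate of the actual SBM.
import numpy as np

rng = np.random.default_rng(3)
n_particles, n_sites = 5000, 10
mu_theta, sigma_theta = np.deg2rad(80), np.deg2rad(10)

theta = rng.normal(mu_theta, sigma_theta, size=(n_particles, n_sites))
theta = np.clip(theta, 1e-3, np.pi)

def frozen_fraction(supercooling, dt=1.0):
    # toy rate: larger supercooling and smaller contact angle freeze faster
    rate = 1e-3 * supercooling ** 2 * (1 + np.cos(theta)) / 2
    p_site = 1.0 - np.exp(-rate * dt)
    p_particle = 1.0 - np.prod(1.0 - p_site, axis=1)   # any site nucleates the drop
    return p_particle.mean()

for dT in (2, 5, 10, 15):
    print(f"supercooling {dT:2d} K -> frozen fraction {frozen_fraction(dT):.3f}")
```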
Perpetuation of torsade de pointes in heterogeneous hearts: competing foci or re-entry?
Vandersickel, Nele; de Boer, Teun P; Vos, Marc A; Panfilov, Alexander V
2016-12-01
The underlying mechanism of torsade de pointes (TdP) remains a matter of debate: perpetuation may be due to (1) focal activity or (2) re-entrant activity. The onset of TdP correlates with action potential heterogeneities in different regions of the heart. We studied the mechanism of perpetuation of TdP in silico using a 2D model of human cardiac tissue and an anatomically accurate model of the ventricles of the human heart. We found that the mechanism of perpetuation of TdP depends on the degree of heterogeneity. If the degree of heterogeneity is large, focal activity alone can sustain a TdP; otherwise re-entrant activity emerges. This result can help to understand the relationship between the mechanisms of TdP and tissue properties and may help in developing new drugs against it. Torsade de pointes (TdP) can be the consequence of cardiac remodelling, drug effects or a combination of both. The mechanism underlying TdP is unclear, and may involve triggered focal activity or re-entry. Recent work by our group has indicated that both cases may exist, i.e. TdPs induced in the chronic atrioventricular block (CAVB) dog model may have a focal origin or be due to re-entry. It was also found that heterogeneities might play an important role. In the current study we have used computational modelling to further investigate the mechanisms involved in TdP initiation and perpetuation, especially in the CAVB dog model, by the addition of heterogeneities with reduced repolarization reserve in comparison with the surrounding tissue. For this, the TNNP computer model was used for computations. We demonstrated in 2D and 3D simulations that ECGs with the typical TdP morphology can be caused by both multiple competing foci and re-entry circuits as a result of the introduction of heterogeneities, depending on whether the heterogeneities have a large or a smaller reduced repolarization reserve in comparison with the surrounding tissue. Large heterogeneities can produce ectopic TdP, while smaller heterogeneities will produce re-entry-type TdP. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.
Spiking network simulation code for petascale computers.
Kunkel, Susanne; Schmidt, Maximilian; Eppler, Jochen M; Plesser, Hans E; Masumoto, Gen; Igarashi, Jun; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus; Helias, Moritz
2014-01-01
Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today.
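The "double collapse" the authors exploit can be pictured with a small toy container: on any given compute node, a source neuron typically has very few local targets, often a single synapse of a single type, so the per-source entry can stay a single tuple and only grow into a list when needed. The Python sketch below is only an illustration of that idea under these assumptions; the actual code uses C++ metaprogramming rather than anything like this.

```python
# A toy illustration of collapsing per-source synapse storage on one
# compute node: the common case (a single local synapse per source) is
# stored as a bare tuple, and a list is allocated only when needed.
class LocalConnectivity:
    def __init__(self):
        # source gid -> one (type, target, weight) tuple, or a list of them
        self._by_source = {}

    def add(self, source, syn_type, target, weight):
        entry = (syn_type, target, weight)
        if source not in self._by_source:
            self._by_source[source] = entry              # common case: single synapse
        elif isinstance(self._by_source[source], list):
            self._by_source[source].append(entry)
        else:                                            # second synapse: promote to list
            self._by_source[source] = [self._by_source[source], entry]

    def targets_of(self, source):
        entry = self._by_source.get(source, [])
        return entry if isinstance(entry, list) else [entry]

conn = LocalConnectivity()
conn.add(source=7, syn_type="static", target=42, weight=0.1)
print(conn.targets_of(7))
```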
Spiking network simulation code for petascale computers
Kunkel, Susanne; Schmidt, Maximilian; Eppler, Jochen M.; Plesser, Hans E.; Masumoto, Gen; Igarashi, Jun; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus; Helias, Moritz
2014-01-01
Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today. PMID:25346682
Optimization of Sparse Matrix-Vector Multiplication on Emerging Multicore Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Samuel; Oliker, Leonid; Vuduc, Richard
2008-10-16
We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore-specific optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD quad-core, AMD dual-core, and Intel quad-core designs, the heterogeneous STI Cell, as well as one of the first scientific studies of the highly multithreaded Sun Victoria Falls (a Niagara2 SMP). We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural trade-offs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.
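For readers unfamiliar with the kernel, the sketch below is the plain CSR (compressed sparse row) SpMV baseline that such optimization efforts start from, written in Python/numpy for clarity; none of the cache-, thread-, or architecture-specific tuning discussed in the paper is shown.

```python
# The baseline CSR sparse matrix-vector multiply, y = A x, with A stored as
# (values, col_idx, row_ptr); written plainly, with no multicore tuning.
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    y = np.zeros(len(row_ptr) - 1)
    for row in range(len(y)):
        start, end = row_ptr[row], row_ptr[row + 1]
        y[row] = np.dot(values[start:end], x[col_idx[start:end]])
    return y

# 3x3 example:  [[4, 0, 1], [0, 3, 0], [2, 0, 5]]
values  = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
x = np.array([1.0, 2.0, 3.0])
print(spmv_csr(values, col_idx, row_ptr, x))    # [ 7.  6. 17.]
```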
Perfusion kinetics in human brain tumor with DCE-MRI derived model and CFD analysis.
Bhandari, A; Bansal, A; Singh, A; Sinha, N
2017-07-05
Cancer is one of the leading causes of death all over the world. Among the strategies that are used for cancer treatment, the effectiveness of chemotherapy is often hindered by factors such as irregular and non-uniform uptake of drugs inside the tumor. Thus, accurate prediction of drug transport and deposition inside the tumor is crucial for increasing the effectiveness of chemotherapeutic treatment. In this study, a computational model of a human brain tumor is developed that incorporates dynamic contrast enhanced-magnetic resonance imaging (DCE-MRI) data into a voxelized porous media model. The model takes into account realistic transport and perfusion kinetics parameters together with realistic heterogeneous tumor vasculature and an accurate arterial input function (AIF), which makes it patient specific. The computational results for interstitial fluid pressure (IFP), interstitial fluid velocity (IFV) and tracer concentration show good agreement with the experimental results. The computational model can be extended further for predicting the deposition of chemotherapeutic drugs in the tumor environment as well as for selecting the best chemotherapeutic drug for a specific patient. Copyright © 2017 Elsevier Ltd. All rights reserved.
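In such voxelized porous-media models, the interstitial fluid velocity is typically obtained from the interstitial fluid pressure through Darcy's law. The sketch below only illustrates that relation on a synthetic voxel grid; the conductivity value, voxel spacing, and pressure field are assumptions, not DCE-MRI-derived quantities from the study.

```python
# A minimal voxel-grid illustration of the Darcy relation used in porous-
# media tumor models: v = -(K/mu) * grad(p).  The conductivity, spacing,
# and pressure field below are synthetic assumptions.
import numpy as np

K_over_mu = 2.5e-7            # assumed hydraulic conductivity (cm^2 / mmHg / s)
dx = 0.1                      # assumed voxel spacing in cm

x, y, z = np.meshgrid(*[np.linspace(-1, 1, 40)] * 3, indexing="ij")
ifp = 20.0 * np.exp(-(x**2 + y**2 + z**2))       # synthetic IFP peak at tumor centre (mmHg)

grad = np.gradient(ifp, dx)                      # pressure gradient along each axis
ifv = [-K_over_mu * g for g in grad]             # Darcy velocity components
speed = np.sqrt(sum(v**2 for v in ifv))
print("max IFV magnitude:", speed.max(), "cm/s")
```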
Application-oriented offloading in heterogeneous networks for mobile cloud computing
NASA Astrophysics Data System (ADS)
Tseng, Fan-Hsun; Cho, Hsin-Hung; Chang, Kai-Di; Li, Jheng-Cong; Shih, Timothy K.
2018-04-01
Internet applications have become so complicated that mobile devices need more computing resources to achieve shorter execution times, yet they are constrained by limited battery capacity. Mobile cloud computing (MCC) has emerged to tackle the finite-resource problem of mobile devices. MCC offloads the tasks and jobs of mobile devices to cloud and fog environments by using an offloading scheme. It is vital to MCC to decide which tasks should be offloaded and how to offload them efficiently. In this paper, we formulate the offloading problem between the mobile device and the cloud data center and propose two application-oriented algorithms for minimum execution time, i.e., the Minimum Offloading Time for Mobile device (MOTM) algorithm and the Minimum Execution Time for Cloud data center (METC) algorithm. The MOTM algorithm minimizes offloading time by selecting appropriate offloading links based on application categories. The METC algorithm minimizes execution time in the cloud data center by selecting virtual and physical machines with the corresponding resource requirements of applications. Simulation results show that the proposed mechanism not only minimizes total execution time for mobile devices but also decreases their energy consumption.
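The trade-off that any such offloading decision weighs can be illustrated by the hedged sketch below (hypothetical function and parameter names, not the MOTM or METC algorithms themselves): offloading pays off only when the transfer time plus the remote execution time undercuts the local execution time.

def should_offload(cycles, local_speed, upload_bytes, bandwidth, remote_speed):
    """Return (offload?, local time, remote time) for a single task."""
    t_local = cycles / local_speed                                   # run on the device
    t_remote = upload_bytes / bandwidth + cycles / remote_speed      # ship data, run remotely
    return t_remote < t_local, t_local, t_remote

# Example: a 2 GHz device vs. a 20 GHz-equivalent cloud slice over a 1 MB/s link.
print(should_offload(cycles=2e9, local_speed=2e9,
                     upload_bytes=5e6, bandwidth=1e6, remote_speed=2e10))

Here the slow link makes local execution preferable (1.0 s versus 5.1 s); over a 100 MB/s link the same task would be offloaded.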
Jambusaria, Ankit; Klomp, Jeff; Hong, Zhigang; Rafii, Shahin; Dai, Yang; Malik, Asrar B; Rehman, Jalees
2018-06-07
The heterogeneity of cells across tissue types represents a major challenge for studying biological mechanisms as well as for therapeutic targeting of distinct tissues. Computational prediction of tissue-specific gene regulatory networks may provide important insights into the mechanisms underlying the cellular heterogeneity of cells in distinct organs and tissues. Using three pathway analysis techniques, gene set enrichment analysis (GSEA), parametric analysis of gene set enrichment (PGSEA), alongside our novel model (HeteroPath), which assesses heterogeneously upregulated and downregulated genes within the context of pathways, we generated distinct tissue-specific gene regulatory networks. We analyzed gene expression data derived from freshly isolated heart, brain, and lung endothelial cells and populations of neurons in the hippocampus, cingulate cortex, and amygdala. In both datasets, we found that HeteroPath segregated the distinct cellular populations by identifying regulatory pathways that were not identified by GSEA or PGSEA. Using simulated datasets, HeteroPath demonstrated robustness that was comparable to what was seen using existing gene set enrichment methods. Furthermore, we generated tissue-specific gene regulatory networks involved in vascular heterogeneity and neuronal heterogeneity by performing motif enrichment of the heterogeneous genes identified by HeteroPath and linking the enriched motifs to regulatory transcription factors in the ENCODE database. HeteroPath assesses contextual bidirectional gene expression within pathways and thus allows for transcriptomic assessment of cellular heterogeneity. Unraveling tissue-specific heterogeneity of gene expression can lead to a better understanding of the molecular underpinnings of tissue-specific phenotypes.
A Cloud-Based X73 Ubiquitous Mobile Healthcare System: Design and Implementation
Ji, Zhanlin; O'Droma, Máirtín; Zhang, Xin; Zhang, Xueji
2014-01-01
Based on the user-centric paradigm for next generation networks, this paper describes a ubiquitous mobile healthcare (uHealth) system based on the ISO/IEEE 11073 personal health data (PHD) standards (X73) and cloud computing techniques. A number of design issues associated with the system implementation are outlined. The system includes a middleware on the user side, providing a plug-and-play environment for heterogeneous wireless sensors and mobile terminals utilizing different communication protocols and a distributed “big data” processing subsystem in the cloud. The design and implementation of this system are envisaged as an efficient solution for the next generation of uHealth systems. PMID:24737958
Gao, Ying; Wkram, Chris Hadri; Duan, Jiajie; Chou, Jarong
2015-01-01
In order to prolong the network lifetime, energy-efficient protocols adapted to the features of wireless sensor networks should be used. This paper explores in depth the nature of heterogeneous wireless sensor networks, and proposes an algorithm to address the problem of effective, energy-aware clustering in heterogeneous networks. The proposed algorithm selects cluster heads according to the degree of energy attenuation during network operation and the degree of candidate nodes' effective coverage of the whole network, so as to obtain even energy consumption across the network while maintaining a high degree of coverage. Simulation results show that the proposed clustering protocol has better adaptability to heterogeneous environments than existing clustering algorithms in prolonging the network lifetime. PMID:26690440
NASA Technical Reports Server (NTRS)
Johnston, William E.; Gannon, Dennis; Nitzberg, Bill
2000-01-01
We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) Comprehensive and consistent set of location independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location independent user authentication and authorization, and overall system security services The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to metrological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation systems}; (3) Coupling large-scale computing and data systems to scientific and engineering instruments (e.g., realtime interaction with experiments through real-time data analysis and interpretation presented to the experimentalist in ways that allow direct interaction with the experiment (instead of just with instrument control); (5) Highly interactive, augmented reality and virtual reality remote collaborations (e.g., Ames / Boeing Remote Help Desk providing field maintenance use of coupled video and NDI to a remote, on-line airframe structures expert who uses this data to index into detailed design databases, and returns 3D internal aircraft geometry to the field); (5) Single computational problems too large for any single system (e.g. the rotocraft reference calculation). Grids also have the potential to provide pools of resources that could be called on in extraordinary / rapid response situations (such as disaster response) because they can provide common interfaces and access mechanisms, standardized management, and uniform user authentication and authorization, for large collections of distributed resources (whether or not they normally function in concert). IPG development and deployment is addressing requirements obtained by analyzing a number of different application areas, in particular from the NASA Aero-Space Technology Enterprise. This analysis has focussed primarily on two types of users: the scientist / design engineer whose primary interest is problem solving (e.g. 
determining wing aerodynamic characteristics in many different operating environments), and whose primary interface to IPG will be through various sorts of problem solving frameworks. The second type of user is the tool designer: the computational scientist who converts physics and mathematics into code that can simulate the physical world. These are the two primary users of IPG, and they have rather different requirements. The results of the analysis of the needs of these two types of users provide a broad set of requirements that gives rise to a general set of required capabilities. The IPG project is intended to address all of these requirements. In some cases the required computing technology exists, and in some cases it must be researched and developed. The project is using available technology to provide a prototype set of capabilities in a persistent distributed computing testbed. Beyond this, there are required capabilities that are not immediately available, and whose development spans the range from near-term engineering development (one to two years) to much longer term R&D (three to six years). Additional information is contained in the original.
de Vargas Roditi, Laura; Claassen, Manfred
2015-08-01
Novel technological developments enable single cell population profiling with respect to their spatial and molecular setup. These include single cell sequencing, flow cytometry and multiparametric imaging approaches and open unprecedented possibilities to learn about the heterogeneity, dynamics and interplay of the different cell types which constitute tissues and multicellular organisms. Statistical and dynamic systems theory approaches have been applied to quantitatively describe a variety of cellular processes, such as transcription and cell signaling. Machine learning approaches have been developed to define cell types, their mutual relationships, and differentiation hierarchies shaping heterogeneous cell populations, yielding insights into topics such as, for example, immune cell differentiation and tumor cell type composition. This combination of experimental and computational advances has opened perspectives towards learning predictive multi-scale models of heterogeneous cell populations. Copyright © 2014 Elsevier Ltd. All rights reserved.
Quantum Heterogeneous Computing for Satellite Positioning Optimization
NASA Astrophysics Data System (ADS)
Bass, G.; Kumar, V.; Dulny, J., III
2016-12-01
Hard optimization problems occur in many fields of academic study and practical situations. We present results in which quantum heterogeneous computing is used to solve a real-world optimization problem: satellite positioning. Optimization problems like this can scale very rapidly with problem size, and become unsolvable with traditional brute-force methods. Typically, such problems have been approximately solved with heuristic approaches; however, these methods can take a long time to calculate and are not guaranteed to find optimal solutions. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. There are now commercially available quantum annealing (QA) devices that are designed to solve difficult optimization problems. These devices have 1000+ quantum bits, but they have significant hardware size and connectivity limitations. We present a novel heterogeneous computing stack that combines QA and classical machine learning and allows the use of QA on problems larger than the quantum hardware could solve in isolation. We begin by analyzing the satellite positioning problem with a heuristic solver, the genetic algorithm. The classical computer's comparatively large available memory can explore the full problem space and converge to a solution relatively close to the true optimum. The QA device can then evolve directly to the optimal solution within this more limited space. Preliminary experiments, using the Quantum Monte Carlo (QMC) algorithm to simulate QA hardware, have produced promising results. Working with problem instances with known global minima, we find a solution within 8% in a matter of seconds, and within 5% in a few minutes. Future studies include replacing QMC with commercially available quantum hardware and exploring more problem sets and model parameters. Our results have important implications for how heterogeneous quantum computing can be used to solve difficult optimization problems in any field.
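The two-stage heterogeneous strategy described above can be sketched as follows. This is a hedged, purely classical stand-in: a genetic algorithm narrows the search space, and a simple stochastic local refinement plays the role of the quantum annealer; all function and parameter names are illustrative, not the authors' code.

import random

def two_stage_optimize(cost, dim, bounds, pop=40, gens=60):
    """Stage 1: genetic algorithm narrows the search space.  Stage 2: stochastic local
    refinement (a classical stand-in for the quantum annealer) polishes the best candidate."""
    lo, hi = bounds
    clip = lambda v: min(max(v, lo), hi)
    population = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):                                    # stage 1: GA exploration
        population.sort(key=cost)
        parents = population[: pop // 2]
        children = []
        for _ in range(pop - len(parents)):
            a, b = random.sample(parents, 2)
            children.append([clip((x + y) / 2 + random.gauss(0, 0.05 * (hi - lo)))
                             for x, y in zip(a, b)])
        population = parents + children
    best = min(population, key=cost)
    for _ in range(2000):                                    # stage 2: local refinement
        trial = [clip(v + random.gauss(0, 0.01 * (hi - lo))) for v in best]
        if cost(trial) < cost(best):
            best = trial
    return best

# Toy usage: minimize a shifted quadratic over [-1, 1]^3.
best = two_stage_optimize(lambda v: sum((x - 0.7) ** 2 for x in v), dim=3, bounds=(-1.0, 1.0))
print([round(x, 3) for x in best])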
NASA Technical Reports Server (NTRS)
Crawford, D. A.; Barnouin-Jha, O. S.; Cintala, M. J.
2003-01-01
The propagation of shock waves through target materials is strongly influenced by the presence of small-scale structure, fractures, physical and chemical heterogeneities. Pre-existing fractures often create craters that appear square in outline (e.g. Meteor Crater). Reverberations behind the shock from the presence of physical heterogeneity have been proposed as a mechanism for transient weakening of target materials. Pre-existing fractures can also affect melt generation. In this study, we are attempting to bridge the gap in numerical modeling between the micro-scale and the continuum, the so-called meso-scale. To accomplish this, we are developing a methodology to be used in the shock physics hydrocode (CTH) using Monte-Carlo-type methods to investigate the shock properties of heterogeneous materials. By comparing the results of numerical experiments at the micro-scale with experimental results and by using statistical techniques to evaluate the performance of simple constitutive models, we hope to embed the effect of physical heterogeneity into the field variables (pressure, stress, density, velocity) allowing us to directly imprint the effects of micro-scale heterogeneity at the continuum level without incurring high computational cost.
Kort-Kamp, W. J. M.; Cordes, N. L.; Ionita, A.; ...
2016-04-01
Electromagnetic stimulation of energetic materials provides a noninvasive and nondestructive tool for detecting and identifying explosives. We combine structural information based on x-ray computed tomography, experimental dielectric data, and electromagnetic full-wave simulations to study microscale electromagnetic heating of realistic three-dimensional heterogeneous explosives. In conclusion, we analyze the formation of electromagnetic hot spots and thermal gradients in the explosive-binder mesostructures and compare the heating rate for various binder systems.
ERIC Educational Resources Information Center
Kobayashi, Tetsuro
2010-01-01
This article examines the democratic potential of online communities by investigating the influence of network heterogeneity on social tolerance in an online gaming environment. Online game communities are potential sources of bridging social capital because they tend to be relatively heterogeneous. Causal analyses are conducted using structural…
Leu, Jenq-Shiou; Lin, Wei-Hsiang; Hsieh, Wen-Bin; Lo, Chien-Chih
2014-01-01
As digitization becomes integrated into daily life, media including video and audio are heavily transferred over the Internet. Voice-over-Internet Protocol (VoIP), the most popular and mature such technology, has become a focus of much research and investment. However, most existing studies have focused on a one-to-one communication model in a homogeneous network rather than a one-to-many broadcasting model among diverse embedded devices in a heterogeneous network. In this paper, we present the implementation of a VoIP broadcasting service on the open-source Linphone in a heterogeneous network environment including WiFi, 3G, and LAN networks. The proposed system can be integrated with heterogeneous agile devices, such as embedded devices or mobile phones; thus, when users are in an area unreachable by traditional AM/FM signals, they can still receive the broadcast voice through the IP network. Also, comprehensive evaluations are conducted to verify the effectiveness of the proposed implementation. PMID:25300280
Alexandrov, Boian S.; Phipps, M. Lisa; Alexandrov, Ludmil B.; ...
2013-01-31
In this paper, we report that terahertz (THz) irradiation of mouse mesenchymal stem cells (mMSCs) with a single-frequency (SF) 2.52 THz laser or pulsed broadband (centered at 10 THz) source results in irradiation-specific heterogeneous changes in gene expression. The THz effect depends on irradiation parameters such as the duration and type of THz source, and on the degree of stem cell differentiation. Our microarray survey and RT-PCR experiments demonstrate that prolonged broadband THz irradiation drives mMSCs toward differentiation, while 2-hour irradiation (regardless of THz sources) affects genes transcriptionally active in pluripotent stem cells. The strictly controlled experimental environment indicates minimal temperature changes, and the absence of any discernible response of heat shock and cellular stress genes implies a non-thermal response. Computer simulations of the core promoters of two pluripotency markers reveal an association between gene upregulation and propensity for DNA breathing. Finally, we propose that THz radiation has potential for non-contact control of cellular gene expression.
Radiation detection and situation management by distributed sensor networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jan, Frigo; Mielke, Angela; Cai, D Michael
Detection of radioactive materials in an urban environment usually requires large, portal-monitor-style radiation detectors. However, this may not be a practical solution in many transport scenarios. Alternatively, a distributed sensor network (DSN) could complement portal-style detection of radiological materials through the implementation of arrays of low cost, small heterogeneous sensors with the ability to detect the presence of radioactive materials in a moving vehicle over a specific region. In this paper, we report on the use of a heterogeneous, wireless, distributed sensor network for traffic monitoring in a field demonstration. Through wireless communications, the energy spectra from different radiation detectors are combined to improve the detection confidence. In addition, the DSN exploits other sensor technologies and algorithms to provide additional information about the vehicle, such as its speed, location, class (e.g. car, truck), and license plate number. The sensors are in-situ and data is processed in real-time at each node. Relevant information from each node is sent to a base station computer which is used to assess the movement of radioactive materials.
NASA Astrophysics Data System (ADS)
Gruber, S.; Fiddes, J.
2013-12-01
In mountainous topography, the difference in scale between atmospheric reanalyses (typically tens of kilometres) and relevant processes and phenomena near the Earth surface, such as permafrost or snow cover (meters to tens of meters), is most obvious. This contrast of scales is one of the major obstacles to using reanalysis data for the simulation of surface phenomena and to confronting reanalyses with independent observations. Using the example of modelling permafrost in mountain areas (though the approach generalises readily to other phenomena and heterogeneous environments), we present and test against measurements methods for (A) scaling atmospheric data from the reanalysis to the ground level and (B) smart sampling of the heterogeneous landscape in order to set up a lumped model simulation that represents the high-resolution land surface. TopoSCALE (Part A, see http://dx.doi.org/10.5194/gmdd-6-3381-2013) is a scheme that scales coarse-grid climate fields to fine-grid topography using pressure-level data. In addition, it applies the necessary topographic corrections, e.g. to those variables required for computation of radiation fields. This provides the necessary driving fields for the land surface model (LSM). Tested against independent ground data, this scheme has been shown to improve the scaling and distribution of meteorological parameters in complex terrain, as compared to conventional methods, e.g. lapse-rate-based approaches. TopoSUB (Part B, see http://dx.doi.org/10.5194/gmd-5-1245-2012) is a surface pre-processor designed to sample a fine-grid domain (defined by a digital elevation model) along important topographical (or other) dimensions through a clustering scheme. This allows constructing a lumped model representing the main sources of fine-grid variability and applying a 1D LSM efficiently over large areas. Results can be processed to derive (i) summary statistics at the coarse-scale reanalysis grid resolution, (ii) high-resolution data fields spatialized to, e.g., the fine-scale digital elevation model grid, or (iii) validation products only for locations at which measurements exist. The ability of TopoSUB to approximate results simulated by a 2D distributed numerical LSM with a factor of ~10,000 fewer computations is demonstrated by comparison of 2D and lumped simulations. Successful application of the combined scheme in the European Alps is reported, and based on its results, open issues for future research are outlined.
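The TopoSUB-style sampling idea can be illustrated with a hedged sketch (hypothetical variable names, and a plain k-means standing in for the published clustering scheme): fine-grid pixels are grouped along topographic dimensions, the 1D model is run once per cluster, and results are re-mapped or area-weighted afterwards.

import numpy as np
from sklearn.cluster import KMeans

def toposub_like_sample(elevation, slope, aspect, radiation, n_clusters=100):
    """Cluster fine-grid pixels along topographic dimensions so a 1D land-surface
    model is run once per cluster instead of once per pixel (illustrative only)."""
    features = np.column_stack([a.ravel() for a in (elevation, slope, aspect, radiation)])
    mean, std = features.mean(axis=0), features.std(axis=0)
    std[std == 0] = 1.0                                     # guard against constant inputs
    features = (features - mean) / std                      # standardize each dimension
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    centroids = np.array([features[labels == k].mean(axis=0) for k in range(n_clusters)])
    weights = np.bincount(labels, minlength=n_clusters) / labels.size   # area fractions
    return labels.reshape(elevation.shape), centroids, weights

Each centroid then drives one lumped 1D simulation, and the area-fraction weights are used to aggregate the results back to the coarse reanalysis grid or re-map them onto the fine digital elevation model.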
Environment-based pin-power reconstruction method for homogeneous core calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leroyer, H.; Brosselard, C.; Girardi, E.
2012-07-01
Core calculation schemes are usually based on a classical two-step approach associated with assembly and core calculations. During the first step, infinite lattice assemblies calculations relying on a fundamental mode approach are used to generate cross-sections libraries for PWRs core calculations. This fundamental mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOX assemblies, computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies showing burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are much better calculated with the environment-based calculation scheme when compared to the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method on every cluster configuration studied. This study shows that taking into account the environment in transport calculations can significantly improve the pin-power reconstruction so far as it is consistent with the core loading pattern. (authors)
Revisiting the Stability of Spatially Heterogeneous Predator-Prey Systems Under Eutrophication.
Farkas, J Z; Morozov, A Yu; Arashkevich, E G; Nikishina, A
2015-10-01
We employ partial integro-differential equations to model trophic interaction in a spatially extended heterogeneous environment. Compared to classical reaction-diffusion models, this framework allows us to more realistically describe the situation where movement of individuals occurs on a faster time scale than on the demographic (population) time scale, and we cannot determine population growth based on local density. However, most of the results reported so far for such systems have only been verified numerically and for a particular choice of model functions, which casts doubt on these findings. In this paper, we analyse a class of integro-differential predator-prey models with a highly mobile predator in a heterogeneous environment, and we reveal the main factors stabilizing such systems. In particular, we explore an ecologically relevant case of interactions in a highly eutrophic environment, where the prey carrying capacity can be formally set to 'infinity'. We investigate two main scenarios: (1) the spatial gradient of the growth rate is due to abiotic factors only, and (2) the local growth rate depends on the global density distribution across the environment (e.g. due to non-local self-shading). For an arbitrary spatial gradient of the prey growth rate, we analytically investigate the possibility of the predator-prey equilibrium in such systems and we explore the conditions of stability of this equilibrium. In particular, we demonstrate that for a Holling type I (linear) functional response, the predator can stabilize the system at low prey density even for an 'unlimited' carrying capacity. We conclude that the interplay between spatial heterogeneity in the prey growth and fast displacement of the predator across the habitat works as an efficient stabilizing mechanism. These results highlight the generality of the stabilization mechanisms we find in spatially structured predator-prey ecological systems in a heterogeneous environment.
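For orientation, a minimal partial integro-differential model of the type described above (a hedged illustration only, not the authors' exact equations) couples a prey density u(x,t) with spatially varying growth rate r(x) to a well-mixed, highly mobile predator P(t) through a Holling type I (linear) functional response:

\[
\frac{\partial u(x,t)}{\partial t} = r(x)\,u(x,t) - a\,u(x,t)\,P(t),
\qquad
\frac{dP(t)}{dt} = P(t)\left( \kappa a \int_{\Omega} u(x,t)\,dx - m \right),
\]

where a is the attack rate, \(\kappa\) the conversion efficiency, m the predator mortality, and \(\Omega\) the habitat; the integral term encodes the assumption that the predator redistributes across the habitat much faster than the populations grow.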
CBRAIN: a web-based, distributed computing platform for collaborative neuroimaging research
Sherif, Tarek; Rioux, Pierre; Rousseau, Marc-Etienne; Kassis, Nicolas; Beck, Natacha; Adalat, Reza; Das, Samir; Glatard, Tristan; Evans, Alan C.
2014-01-01
The Canadian Brain Imaging Research Platform (CBRAIN) is a web-based collaborative research platform developed in response to the challenges raised by data-heavy, compute-intensive neuroimaging research. CBRAIN offers transparent access to remote data sources, distributed computing sites, and an array of processing and visualization tools within a controlled, secure environment. Its web interface is accessible through any modern browser and uses graphical interface idioms to reduce the technical expertise required to perform large-scale computational analyses. CBRAIN's flexible meta-scheduling has allowed the incorporation of a wide range of heterogeneous computing sites, currently including nine national research High Performance Computing (HPC) centers in Canada, one in Korea, one in Germany, and several local research servers. CBRAIN leverages remote computing cycles and facilitates resource-interoperability in a transparent manner for the end-user. Compared with typical grid solutions available, our architecture was designed to be easily extendable and deployed on existing remote computing sites with no tool modification, administrative intervention, or special software/hardware configuration. As of October 2013, CBRAIN serves over 200 users spread across 53 cities in 17 countries. The platform is built as a generic framework that can accept data and analysis tools from any discipline. However, its current focus is primarily on neuroimaging research and studies of neurological diseases such as autism, Parkinson's and Alzheimer's diseases, and multiple sclerosis, as well as on normal brain structure and development. This technical report presents the CBRAIN Platform, its current deployment and usage, and future directions. PMID:24904400
Jorgensen, Tove H
2012-03-01
The biotic and abiotic environment of interacting hosts and parasites may vary considerably over small spatial and temporal scales. It is essential to understand how different environments affect host disease resistance because this determines frequency of disease and, importantly, heterogeneous environments can retard direct selection and potentially maintain genetic variation for resistance in natural populations. The effect of different temperatures and soil nutrient conditions on the outcome of infection by a pathogen was quantified in Arabidopsis thaliana. Expression levels of a gene conferring resistance to powdery mildews, RPW8, were compared with levels of disease to test a possible mechanism behind variation in resistance. Most host genotypes changed from susceptible to resistant across environments with the ranking of genotypes differing between treatments. Transcription levels of RPW8 increased after infection and varied between environments, but there was no tight association between transcription and resistance levels. There is a strong potential for a heterogeneous environment to change the resistance capacity of A. thaliana genotypes and hence the direction and magnitude of selection in the presence of the pathogen. Possible causative links between resistance gene expression and disease resistance are discussed in light of the present results on RPW8.
Eye-based Direct Interaction for Environmental Control in Heterogeneous Smart Environments
NASA Astrophysics Data System (ADS)
Corno, Fulvio; Gale, Alastair; Majaranta, Päivi; Räihä, Kari-Jouko
Environmental control is the control, operation, and monitoring of an environment via intermediary technology such as a computer. Typically this means control of a domestic home. Within the scope of COGAIN, this environmental control concerns the control of the personal environment of a person (with or without a disability). This defines environmental control as the control of a home or domestic setting and those objects that are within that setting. Thus, we may say that environmental control systems enable anyone to operate a wide range of domestic appliances and other vital functions in the home by remote control. In recent years the problem of self-sufficiency for older people and people with a disability has attracted increasing attention and resources. The search for new solutions that can guarantee greater autonomy and a better quality of life has begun to exploit easily available state-of-the-art technology. Personal environmental control can be considered to be a comprehensive and effective aid, adaptable to the functional possibilities of the user and to their desired actions.
NASA Astrophysics Data System (ADS)
Lacharité, Myriam; Metaxas, Anna
2017-08-01
Benthic habitats on deep continental margins (> 1000 m) are now considered heterogeneous - in particular because of the occasional presence of hard substrate in a matrix of sand and mud - influencing the distribution of megafauna, which can thrive on both sedimented and rocky substrates. At these depths, optical imagery captured with high-definition cameras to describe megafauna can also describe effectively the fine-scale sediment properties in the immediate vicinity of the fauna. In this study, we determined the relationship between local heterogeneity (10-100 sm) in fine-scale sediment properties and the abundance, composition, and diversity of megafauna along a large depth gradient (1000-3000 m) in a previously-unexplored habitat: the Northeast Fan, which lies downslope of submarine canyons off the Gulf of Maine (northwest Atlantic). Substrate heterogeneity was quantified using a novel approach based on principles of computer vision. This approach proved powerful in detecting gradients in sediment, and sporadic complex features (e.g. large boulders) in an otherwise homogeneous environment, because it characterizes sediment properties on a continuous scale. Sediment heterogeneity influenced megafaunal diversity (morphospecies richness and Shannon-Wiener Index) and community composition, with areas of higher substrate complexity generally supporting higher diversity. However, patterns in abundance were not influenced by sediment properties, and may be best explained by gradients in food supply. Our study provides a new approach to quantify fine-scale sediment properties and assess their role in shaping megafaunal communities in the deep sea, which should be included in habitat studies given their potential ecological importance.
Is the whole the sum of its parts? Agent-based modelling of wastewater treatment systems.
Schuler, A J; Majed, N; Bucci, V; Hellweger, F L; Tu, Y; Gu, A Z
2011-01-01
Agent-based models (ABMs) simulate individual units within a system, such as the bacteria in a biological wastewater treatment system. This paper outlines past, current and potential future applications of ABMs to wastewater treatment. ABMs track heterogeneities within microbial populations, and this has been demonstrated to yield different predictions of bulk behaviors than conventional, "lumped" approaches for enhanced biological phosphorus removal (EBPR) in completely mixed reactor systems. Current work included the application of the ABM approach to bacterial adaptation/evolution, using the model system of individual EBPR bacteria that are allowed to evolve a kinetic parameter (maximum glycogen storage) in a competitive environment. The ABM approach was successfully applied to a simple anaerobic-aerobic system, and it was found that differing initial states converged to the same optimal solution under uncertain hydraulic residence times associated with completely mixed hydraulics. In another study, an ABM was developed and applied to simulate the heterogeneity in intracellular polymer storage compounds, including polyphosphate (PP), in functional microbial populations in the EBPR process. The simulation results were compared to experimental measurements of single-cell abundance of PP in polyphosphate-accumulating organisms (PAOs), performed using Raman microscopy. The model-predicted heterogeneity was generally consistent with observations, and it was used to investigate the relative contribution of external (different life histories) and internal (biological) mechanisms leading to heterogeneity. In the future, ABMs could be combined with computational fluid dynamics (CFD) models to understand incomplete mixing, more intracellular states and mechanisms can be incorporated, and additional experimental verification is needed.
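A minimal agent-based sketch of the kind of single-cell heterogeneity described above is given below (Python, with hypothetical rate constants and class names; not the published model): each polyphosphate-accumulating organism carries its own intracellular polyphosphate (PP) store, drawn down during anaerobic substrate uptake and replenished during aerobic phosphate uptake, so cell-to-cell PP heterogeneity emerges from the agents' individual histories.

import random

class PAO:
    """One polyphosphate-accumulating organism with its own intracellular PP store."""
    def __init__(self):
        self.pp = random.uniform(0.2, 0.8)        # polyphosphate content, arbitrary units

    def anaerobic_step(self, acetate):
        uptake = min(acetate, 0.1 * self.pp)      # PP hydrolysis powers substrate uptake
        self.pp -= 0.5 * uptake
        return uptake

    def aerobic_step(self, phosphate):
        stored = min(phosphate, 0.05)             # luxury phosphate uptake rebuilds PP
        self.pp += stored
        return stored

cells = [PAO() for _ in range(1000)]
for cycle in range(20):                           # alternate anaerobic / aerobic phases
    for c in cells:
        c.anaerobic_step(acetate=random.uniform(0.0, 0.2))
    for c in cells:
        c.aerobic_step(phosphate=random.uniform(0.0, 0.1))
pp_values = [c.pp for c in cells]
print(min(pp_values), sum(pp_values) / len(pp_values), max(pp_values))

The resulting distribution of PP across the simulated population can then be compared against single-cell measurements, mirroring the model-versus-Raman comparison the abstract describes.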
PBSM3D: A finite volume, scalar-transport blowing snow model for use with variable resolution meshes
NASA Astrophysics Data System (ADS)
Marsh, C.; Wayand, N. E.; Pomeroy, J. W.; Wheater, H. S.; Spiteri, R. J.
2017-12-01
Blowing snow redistribution results in heterogeneous snowcovers that are ubiquitous in cold, windswept environments. Capturing this spatial and temporal variability is important for melt and runoff simulations. Point scale blowing snow transport models are difficult to apply in fully distributed hydrological models due to landscape heterogeneity and complex wind fields. Many existing distributed snow transport models have empirical wind flow and/or simplified wind direction algorithms that perform poorly in calculating snow redistribution where there are divergent wind flows, sharp topography, and over large spatial extents. Herein, a steady-state scalar transport model is discretized using the finite volume method (FVM), using parameterizations from the Prairie Blowing Snow Model (PBSM). PBSM has been applied in hydrological response units and grids to prairie, arctic, glacier, and alpine terrain and shows a good capability to represent snow redistribution over complex terrain. The FVM discretization takes advantage of the variable resolution mesh in the Canadian Hydrological Model (CHM) to ensure efficient calculations over small and large spatial extents. Variable resolution unstructured meshes preserve surface heterogeneity but result in fewer computational elements versus high-resolution structured (raster) grids. Snowpack, soil moisture, and streamflow observations were used to evaluate CHM-modelled outputs in a sub-arctic and an alpine basin. Newly developed remotely sensed snowcover indices allowed for validation over large basins. CHM simulations of snow hydrology were improved by inclusion of the blowing snow model. The results demonstrate the key role of snow transport processes in creating pre-melt snowcover heterogeneity and therefore governing post-melt soil moisture and runoff generation dynamics.
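As a hedged illustration of the discretization strategy described above (generic notation, not the exact PBSM3D equations), a steady-state scalar-transport equation for a blowing-snow concentration c with wind field u, diffusivity K, and source/sink term S, together with its finite-volume balance over a mesh element V_i, can be written as

\[
\nabla \cdot (\mathbf{u}\, c) - \nabla \cdot (K \nabla c) = S,
\qquad
\sum_{f \in \partial V_i} \big( \mathbf{u}_f\, c_f - K_f\, (\nabla c)_f \big) \cdot \mathbf{n}_f\, A_f = S_i\, V_i,
\]

where the sum runs over the faces f of the control volume, with outward normals \(\mathbf{n}_f\) and face areas \(A_f\); on a variable-resolution unstructured mesh the same face balance is simply evaluated on triangles of different sizes.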
Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt
2017-11-27
Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation and from its genesis. Here, we introduce the heterogeneous domain decomposition approach, which is a combination of a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, theoretical modeling and scaling laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also show the new approach's capabilities by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and compared with the heterogeneous domain decomposition proposed in this work: an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.
NASA Astrophysics Data System (ADS)
McClure, J. E.; Prins, J. F.; Miller, C. T.
2014-07-01
Multiphase flow implementations of the lattice Boltzmann method (LBM) are widely applied to the study of porous medium systems. In this work, we construct a new variant of the popular "color" LBM for two-phase flow in which a three-dimensional, 19-velocity (D3Q19) lattice is used to compute the momentum transport solution while a three-dimensional, seven velocity (D3Q7) lattice is used to compute the mass transport solution. Based on this formulation, we implement a novel heterogeneous GPU-accelerated algorithm in which the mass transport solution is computed by multiple shared memory CPU cores programmed using OpenMP while a concurrent solution of the momentum transport is performed using a GPU. The heterogeneous solution is demonstrated to provide speedup of 2.6 × as compared to multi-core CPU solution and 1.8 × compared to GPU solution due to concurrent utilization of both CPU and GPU bandwidths. Furthermore, we verify that the proposed formulation provides an accurate physical representation of multiphase flow processes and demonstrate that the approach can be applied to perform heterogeneous simulations of two-phase flow in porous media using a typical GPU-accelerated workstation.
Homogenization of a Directed Dispersal Model for Animal Movement in a Heterogeneous Environment.
Yurk, Brian P
2016-10-01
The dispersal patterns of animals moving through heterogeneous environments have important ecological and epidemiological consequences. In this work, we apply the method of homogenization to analyze an advection-diffusion (AD) model of directed movement in a one-dimensional environment in which the scale of the heterogeneity is small relative to the spatial scale of interest. We show that the large (slow) scale behavior is described by a constant-coefficient diffusion equation under certain assumptions about the fast-scale advection velocity, and we determine a formula for the slow-scale diffusion coefficient in terms of the fast-scale parameters. We extend the homogenization result to predict invasion speeds for an advection-diffusion-reaction (ADR) model with directed dispersal. For periodic environments, the homogenization approximation of the solution of the AD model compares favorably with numerical simulations. Invasion speed approximations for the ADR model also compare favorably with numerical simulations when the spatial period is sufficiently small.
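For context, the classical one-dimensional homogenization result for pure diffusion (a hedged illustration of the general technique, not the paper's advection-dependent formula) reads: if \( \partial_t u = \partial_x \big( D(x/\varepsilon)\, \partial_x u \big) \) with D positive and periodic on a cell Y, then as \(\varepsilon \to 0\) the slow-scale behavior obeys \( \partial_t u = \bar{D}\, \partial_{xx} u \) with

\[
\bar{D} = \left( \frac{1}{|Y|} \int_Y \frac{dy}{D(y)} \right)^{-1},
\]

i.e., the harmonic mean of the fast-scale diffusivity; in the advective model analyzed above, the effective coefficient additionally depends on the fast-scale advection velocity.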
Elastic Cloud Computing Architecture and System for Heterogeneous Spatiotemporal Computing
NASA Astrophysics Data System (ADS)
Shi, X.
2017-10-01
Spatiotemporal computation involves a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may have different behavior on different computing infrastructure and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful on certain kinds of spatiotemporal computation. The same holds when utilizing a cluster of Intel's many-integrated-core (MIC) Xeon Phi processors, or Hadoop and Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy-efficiency requirements of general computation, a Field Programmable Gate Array (FPGA) may be a better solution when its computational performance is similar to or better than that of GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.
NASA Astrophysics Data System (ADS)
Singla, Tanu; Chandrasekhar, E.; Singh, B. P.; Parmananda, P.
2014-12-01
Complete and anticipation synchronization of nonlinear oscillators from different origins is attempted experimentally. This involves coupling these heterogeneous oscillators to a common dynamical environment. Initially, this phenomenon was studied using two parameter-mismatched Chua circuits. Subsequently, three different time series: a) the x variable of the Lorenz oscillator, b) the X-component of Earth's magnetic field, and c) the per-day temperature variation of the Santa Cruz region in Mumbai, India, are environmentally coupled, under the master-slave scenario, with a Chua circuit. Our results indicate that environmental coupling is a potent tool for provoking complete and anticipation synchronization of heterogeneous oscillators from distinct origins.
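A commonly used form of coupling through a shared dynamical environment (a hedged sketch; the experimental coupling circuit may differ) attaches each oscillator \(\mathbf{x}_i\) to a damped scalar environment variable w that is fed back by all oscillators:

\[
\dot{\mathbf{x}}_i = \mathbf{F}_i(\mathbf{x}_i) + \varepsilon_1\,\boldsymbol{\beta}\, w, \qquad
\dot{w} = -\kappa\, w - \frac{\varepsilon_2}{N}\sum_{j=1}^{N} \boldsymbol{\beta}^{\mathsf{T}} \mathbf{x}_j,
\]

where \(\mathbf{F}_i\) are the (heterogeneous) vector fields, \(\boldsymbol{\beta}\) selects the coupled component, \(\kappa\) is the environment damping, and \(\varepsilon_{1,2}\) are coupling strengths; synchronization arises because all oscillators are driven by the same environment w.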
Automated inverse computer modeling of borehole flow data in heterogeneous aquifers
NASA Astrophysics Data System (ADS)
Sawdey, J. R.; Reeve, A. S.
2012-09-01
A computer model has been developed to simulate borehole flow in heterogeneous aquifers where the vertical distribution of permeability may vary significantly. In crystalline fractured aquifers, flow into or out of a borehole occurs at discrete locations of fracture intersection. Under these circumstances, flow simulations are defined by independent variables of transmissivity and far-field heads for each flow contributing fracture intersecting the borehole. The computer program, ADUCK (A Downhole Underwater Computational Kit), was developed to automatically calibrate model simulations to collected flowmeter data providing an inverse solution to fracture transmissivity and far-field head. ADUCK has been tested in variable borehole flow scenarios, and converges to reasonable solutions in each scenario. The computer program has been created using open-source software to make the ADUCK model widely available to anyone who could benefit from its utility.
High-performance computing on GPUs for resistivity logging of oil and gas wells
NASA Astrophysics Data System (ADS)
Glinskikh, V.; Dudaev, A.; Nechaev, O.; Surodina, I.
2017-10-01
We developed and implemented in software an algorithm for high-performance simulation of electrical logs from oil and gas wells using heterogeneous computing. The numerical solution of the 2D forward problem is based on the finite-element method and the Cholesky decomposition for solving a system of linear algebraic equations (SLAE). Software implementations of the algorithm, based on NVIDIA CUDA technology and computing libraries, were developed, allowing us to perform the decomposition of the SLAE and find its solution on the central processing unit (CPU) and the graphics processing unit (GPU). The calculation time is analyzed as a function of the matrix size and the number of its non-zero elements. We estimated the computing speed on the CPU and GPU, including high-performance heterogeneous CPU-GPU computing. Using the developed algorithm, we simulated resistivity data in realistic models.
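The core linear-algebra step can be illustrated by the hedged CPU-only sketch below: factor a symmetric positive-definite finite-element matrix once by Cholesky decomposition, then solve for the right-hand side. The paper's GPU path (CUDA libraries, sparse storage) is replaced here by a dense SciPy call purely for illustration; the random matrix only stands in for an actual FE discretization.

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def solve_spd_system(A, b):
    """Factor A = L L^T once, then solve A x = b (stand-in for the GPU solver)."""
    c, low = cho_factor(A)
    return cho_solve((c, low), b)

rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200))
A = M @ M.T + 200 * np.eye(200)            # guarantees positive definiteness
b = rng.standard_normal(200)
x = solve_spd_system(A, b)
print(np.allclose(A @ x, b))               # verify the solution

In practice the factorization is reused across many right-hand sides (one per logging-tool position), which is what makes pushing the decomposition onto the GPU worthwhile.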
Dual compile strategy for parallel heterogeneous execution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Tyler Barratt; Perry, James Thomas
2012-06-01
The purpose of the Dual Compile Strategy is to increase our trust in the Compute Engine during its execution of instructions. This is accomplished by introducing a heterogeneous Monitor Engine that checks the execution of the Compute Engine. This leads to the production of a second and custom set of instructions designed for monitoring the execution of the Compute Engine at runtime. This use of multiple engines differs from redundancy in that one engine is working on the application while the other engine is monitoring and checking in parallel instead of both applications (and engines) performing the same work at the same time.
Analytical effective tensor for flow-through composites
Sviercoski, Rosangela De Fatima [Los Alamos, NM]
2012-06-19
A machine, method, and computer-usable medium for modeling the average flow of a substance through a composite material. Such modeling includes an analytical calculation of an effective tensor K^a suitable for use with a variety of media. The analytical calculation corresponds to an approximation of the tensor K and proceeds by first computing the diagonal values and then identifying symmetries of the heterogeneity distribution. Additional calculations include determining the center of mass of the heterogeneous cell and its angle with respect to a defined Cartesian system, and using this angle in a rotation formula to compute the off-diagonal values and determine their sign.
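The off-diagonal reconstruction described above can be illustrated (in two dimensions, with hypothetical notation) by the standard tensor rotation that combines the computed principal (diagonal) values \(k_1, k_2\) with the orientation angle \(\theta\) of the heterogeneity:

\[
K^{a} = R(\theta)
\begin{pmatrix} k_1 & 0 \\ 0 & k_2 \end{pmatrix}
R(\theta)^{\mathsf{T}},
\qquad
R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix},
\]

which gives, for example, \(K^{a}_{12} = (k_1 - k_2)\sin\theta\cos\theta\), so the sign of the off-diagonal entry follows directly from the quadrant of \(\theta\).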
Moro, Marcelo Freire; Silva, Igor Aurélio; de Araújo, Francisca Soares; Nic Lughadha, Eimear; Meagher, Thomas R.; Martins, Fernando Roberto
2015-01-01
Seasonally dry tropical plant formations (SDTF) are likely to exhibit phylogenetic clustering owing to niche conservatism driven by a strong environmental filter (water stress), but heterogeneous edaphic environments and life histories may result in heterogeneity in degree of phylogenetic clustering. We investigated phylogenetic patterns across ecological gradients related to water availability (edaphic environment and climate) in the Caatinga, a SDTF in Brazil. Caatinga is characterized by semiarid climate and three distinct edaphic environments – sedimentary, crystalline, and inselberg –representing a decreasing gradient in soil water availability. We used two measures of phylogenetic diversity: Net Relatedness Index based on the entire phylogeny among species present in a site, reflecting long-term diversification; and Nearest Taxon Index based on the tips of the phylogeny, reflecting more recent diversification. We also evaluated woody species in contrast to herbaceous species. The main climatic variable influencing phylogenetic pattern was precipitation in the driest quarter, particularly for herbaceous species, suggesting that environmental filtering related to minimal periods of precipitation is an important driver of Caatinga biodiversity, as one might expect for a SDTF. Woody species tended to show phylogenetic clustering whereas herbaceous species tended towards phylogenetic overdispersion. We also found phylogenetic clustering in two edaphic environments (sedimentary and crystalline) in contrast to phylogenetic overdispersion in the third (inselberg). We conclude that while niche conservatism is evident in phylogenetic clustering in the Caatinga, this is not a universal pattern likely due to heterogeneity in the degree of realized environmental filtering across edaphic environments. Thus, SDTF, in spite of a strong shared environmental filter, are potentially heterogeneous in phylogenetic structuring. Our results support the need for scientifically informed conservation strategies in the Caatinga and other SDTF regions that have not previously been prioritized for conservation in order to take into account this heterogeneity. PMID:25798584
A collaborative computer auditing system under SOA-based conceptual model
NASA Astrophysics Data System (ADS)
Cong, Qiushi; Huang, Zuoming; Hu, Jibing
2013-03-01
Some of the current challenges of computer auditing are the obstacles to retrieving, converting and translating data from different database schemas. During the last few years, many data exchange standards have been under continuous development, such as the Extensible Business Reporting Language (XBRL). These XML document standards can be used for data exchange among companies, financial institutions, and audit firms. However, for many companies, it is still expensive and time-consuming to translate and provide XML messages with commercial application packages, because it is complicated and laborious to search and transform data from thousands of tables in the ERP databases. How to transfer transaction documents between audit firms and their client companies to support continuous or real-time auditing is an important topic. In this paper, a collaborative computer auditing system under an SOA-based conceptual model is proposed. By utilizing widely used XML document standards and existing data transformation applications developed by different companies and software vendors, we can wrap these applications as commercial web services that can easily be implemented under the forthcoming application environment: service-oriented architecture (SOA). Under the SOA environment, the multi-agency mechanism will help data assurance services over the Internet mature and gain popularity. Wrapping data transformation components for heterogeneous databases or platforms will create new component markets composed of many software vendors and assurance service companies providing data assurance services for audit firms, regulators, or third parties.
HCI∧2 framework: a software framework for multimodal human-computer interaction systems.
Shen, Jie; Pantic, Maja
2013-12-01
This paper presents a novel software framework for the development and research in the area of multimodal human-computer interface (MHCI) systems. The proposed software framework, which is called the HCI∧2 Framework, is built upon publish/subscribe (P/S) architecture. It implements a shared-memory-based data transport protocol for message delivery and a TCP-based system management protocol. The latter ensures that the integrity of system structure is maintained at runtime. With the inclusion of bridging modules, the HCI∧2 Framework is interoperable with other software frameworks including Psyclone and ActiveMQ. In addition to the core communication middleware, we also present the integrated development environment (IDE) of the HCI∧2 Framework. It provides a complete graphical environment to support every step in a typical MHCI system development process, including module development, debugging, packaging, and management, as well as the whole system management and testing. The quantitative evaluation indicates that our framework outperforms other similar tools in terms of average message latency and maximum data throughput under a typical single PC scenario. To demonstrate HCI∧2 Framework's capabilities in integrating heterogeneous modules, we present several example modules working with a variety of hardware and software. We also present an example of a full system developed using the proposed HCI∧2 Framework, which is called the CamGame system and represents a computer game based on hand-held marker(s) and low-cost camera(s).
The effect of material heterogeneities in long term multiscale seismic cycle simulations
NASA Astrophysics Data System (ADS)
Kyriakopoulos, C.; Richards-Dinger, K. B.; Dieterich, J. H.
2016-12-01
A fundamental part of the simulation of earthquake cycles in large-scale multicycle earthquake simulators is the pre-computation of elastostatic Green's functions collected into the stiffness matrix (K). The stiffness matrices are typically based on the elastostatic solutions of Okada (1992), Gimbutas et al. (2012), or similar. While these analytic solutions are computationally very fast, they are limited to modeling a homogeneous isotropic half-space. It is thus unknown how such simulations may be affected by the material heterogeneity characterizing the earth medium. We are currently working on the estimation of the effects of heterogeneous material properties in the earthquake simulator RSQSim (Richards-Dinger and Dieterich, 2012). To do so, we calculate elastostatic solutions in a heterogeneous medium using the Finite Element (FE) method instead of any of the analytical solutions. The investigated region is a 400 x 400 km area centered on the Anza zone in southern California. The fault system geometry is based on that of the UCERF3 deformation models in the area of interest, which we then implement in a finite element mesh using Trelis 15. The heterogeneous elastic structure is based on available tomographic data (seismic wavespeeds and density) for the region (SCEC CVM and Allam et al., 2014). For computation of the Green's functions we use the open source FE code Defmod (https://bitbucket.org/stali/defmod/wiki/Home) to calculate the elastostatic solutions due to unit slip on each patch. Earthquake slip on the fault plane is implemented through linear constraint equations (Ali et al., 2014; Kyriakopoulos et al., 2013; Aagaard et al., 2015), and more specifically through the adjunction of Lagrange multipliers. The elementary responses are collected into the "heterogeneous" stiffness matrix Khet and used in RSQSim in place of the ones generated with Okada. Finally, we compare the RSQSim results based on the "heterogeneous" Khet with results from Khom (a stiffness matrix generated from the same mesh as Khet but using homogeneous material properties). Estimating the effect of heterogeneous material properties on the seismic cycles simulated by RSQSim is a needed experiment that will allow us to evaluate the impact of heterogeneities in earthquake simulators.
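The assembly step described above (unit slip on each patch, elementary responses collected column by column into Khet) can be sketched schematically as follows; the FE solver call is a placeholder and is not Defmod's real interface.

```python
# Illustrative sketch of assembling a stiffness matrix K from unit-slip responses;
# `solve_unit_slip` is a placeholder for a call to an FE code, not a real API.
import numpy as np

def solve_unit_slip(patch_index, n_patches):
    """Placeholder: return the shear-stress change on every patch caused by
    unit slip on patch `patch_index`, computed with the heterogeneous FE model."""
    raise NotImplementedError

def assemble_stiffness(n_patches):
    K = np.zeros((n_patches, n_patches))
    for j in range(n_patches):
        # Column j holds the stress change on all patches due to unit slip on patch j.
        K[:, j] = solve_unit_slip(j, n_patches)
    return K

# K_het = assemble_stiffness(n_patches)   # used by the simulator in place of the Okada-based K_hom
```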
Soon, Ing Shian; Molodecky, Natalie A; Rabi, Doreen M; Ghali, William A; Barkema, Herman W; Kaplan, Gilaad G
2012-05-24
The objective of this study was to conduct a systematic review with meta-analysis of studies assessing the association between living in an urban environment and the development of Crohn's disease (CD) or ulcerative colitis (UC). A systematic literature search of MEDLINE (1950-Oct. 2009) and EMBASE (1980-Oct. 2009) was conducted to identify studies investigating the relationship between urban environment and IBD. Cohort and case-control studies were analyzed using incidence rate ratios (IRR) or odds ratios (OR) with 95% confidence intervals (CIs), respectively. Stratified and sensitivity analyses were performed to explore heterogeneity between studies and assess effects of study quality. The search strategy retrieved 6940 unique citations and 40 studies were selected for inclusion. Of these, 25 investigated the relationship between urban environment and UC and 30 investigated this relationship for CD. Included in our analysis were 7 case-control UC studies, 9 case-control CD studies, 18 cohort UC studies and 21 cohort CD studies. Based on a random effects model, the pooled IRRs for urban compared to rural environment for UC and CD studies were 1.17 (1.03, 1.32) and 1.42 (1.26, 1.60), respectively. These associations persisted across multiple stratified and sensitivity analyses exploring clinical and study quality factors. Heterogeneity was observed in the cohort studies for both UC and CD, whereas statistically significant heterogeneity was not observed for the case-control studies. A positive association between urban environment and both CD and UC was found. Heterogeneity may be explained by differences in study design and quality factors.
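For readers unfamiliar with the pooling step, the sketch below shows a textbook DerSimonian-Laird random-effects calculation on log effect estimates of the kind pooled above; it is a generic illustration, not the authors' analysis code.

```python
# Generic DerSimonian-Laird random-effects pooling of log effect estimates
# (e.g. log IRRs or log ORs). Textbook sketch, not the study's actual analysis.
import numpy as np

def random_effects_pool(log_effects, variances):
    y = np.asarray(log_effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)            # Cochran's heterogeneity statistic
    C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / C)       # between-study variance
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    ci = (np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se))
    return np.exp(pooled), ci, tau2, Q
```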
ERIC Educational Resources Information Center
Peterson, N. Andrew; Farmer, Antoinette Y.; Donnelly, Louis; Forenza, Brad
2014-01-01
The implicit curriculum, which refers to a student's learning environment, has been described as an essential feature of an integrated professional social work curriculum. Very little is known, however, about the heterogeneity of students' experiences with the implicit curriculum, how this heterogeneity may be distributed across groups of…
A Next-Generation Parallel File System Environment for the OLCF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dillow, David A; Fuller, Douglas; Gunasekaran, Raghul
2012-01-01
When deployed in 2008/2009, the Spider system at the Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) was the world's largest-scale Lustre parallel file system. Envisioned as a shared parallel file system capable of delivering both the bandwidth and capacity requirements of the OLCF's diverse computational environment, Spider has since become a blueprint for shared Lustre environments deployed worldwide. Designed to support the parallel I/O requirements of the Jaguar XT5 system and other smaller-scale platforms at the OLCF, the upgrade to the Titan XK6 heterogeneous system will begin to push the limits of Spider's original design by mid 2013. With a doubling in total system memory and a 10x increase in FLOPS, Titan will require both higher bandwidth and larger total capacity. Our goal is to provide a 4x increase in total I/O bandwidth, from over 240 GB/sec today to 1 TB/sec, and a doubling in total capacity. While aggregate bandwidth and total capacity remain important capabilities, an equally important goal in our efforts is dramatically increasing metadata performance, currently the Achilles heel of parallel file systems at leadership scale. We present in this paper an analysis of our current I/O workloads, our operational experiences with the Spider parallel file systems, the high-level design of our Spider upgrade, and our efforts in developing benchmarks that synthesize our performance requirements based on our workload characterization studies.
High-fidelity simulation capability for virtual testing of seismic and acoustic sensors
NASA Astrophysics Data System (ADS)
Wilson, D. Keith; Moran, Mark L.; Ketcham, Stephen A.; Lacombe, James; Anderson, Thomas S.; Symons, Neill P.; Aldridge, David F.; Marlin, David H.; Collier, Sandra L.; Ostashev, Vladimir E.
2005-05-01
This paper describes development and application of a high-fidelity, seismic/acoustic simulation capability for battlefield sensors. The purpose is to provide simulated sensor data so realistic that they cannot be distinguished by experts from actual field data. This emerging capability provides rapid, low-cost trade studies of unattended ground sensor network configurations, data processing and fusion strategies, and signatures emitted by prototype vehicles. There are three essential components to the modeling: (1) detailed mechanical signature models for vehicles and walkers, (2) high-resolution characterization of the subsurface and atmospheric environments, and (3) state-of-the-art seismic/acoustic models for propagating moving-vehicle signatures through realistic, complex environments. With regard to the first of these components, dynamic models of wheeled and tracked vehicles have been developed to generate ground force inputs to seismic propagation models. Vehicle models range from simple, 2D representations to highly detailed, 3D representations of entire linked-track suspension systems. Similarly detailed models of acoustic emissions from vehicle engines are under development. The propagation calculations for both the seismics and acoustics are based on finite-difference, time-domain (FDTD) methodologies capable of handling complex environmental features such as heterogeneous geologies, urban structures, surface vegetation, and dynamic atmospheric turbulence. Any number of dynamic sources and virtual sensors may be incorporated into the FDTD model. The computational demands of 3D FDTD simulation over tactical distances require massively parallel computers. Several example calculations of seismic/acoustic wave propagation through complex atmospheric and terrain environments are shown.
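As a schematic illustration of the FDTD propagation approach described above, here is a toy one-dimensional acoustic update on a staggered grid with heterogeneous density and sound speed; it stands in only conceptually for the authors' three-dimensional, massively parallel seismic/acoustic codes, and all parameter values are assumptions.

```python
# Toy 1-D acoustic FDTD update on a staggered pressure/velocity grid with
# spatially varying density and sound speed (illustrative only).
import numpy as np

nx, nt = 400, 1000
dx = 1.0                                       # grid spacing (m)
c = np.full(nx, 340.0); c[nx // 2:] = 500.0    # heterogeneous sound speed (m/s)
rho = np.full(nx, 1.2); rho[nx // 2:] = 1.8    # heterogeneous density (kg/m^3)
dt = 0.4 * dx / c.max()                        # CFL-limited time step

p = np.zeros(nx)                               # pressure at cell centres
u = np.zeros(nx - 1)                           # particle velocity at cell faces

for n in range(nt):
    # velocity update: du/dt = -(1/rho) dp/dx, with face-averaged density
    u -= dt / (0.5 * (rho[:-1] + rho[1:]) * dx) * (p[1:] - p[:-1])
    # pressure update: dp/dt = -rho c^2 du/dx
    p[1:-1] -= dt * rho[1:-1] * c[1:-1] ** 2 / dx * (u[1:] - u[:-1])
    p[50] += np.exp(-((n * dt - 0.05) / 0.01) ** 2)   # simple Gaussian source pulse
```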
Huang, Chen; Muñoz-García, Ana Belén; Pavone, Michele
2016-12-28
Density-functional embedding theory provides a general way to perform multi-physics quantum mechanics simulations of large-scale materials by dividing the total system's electron density into a cluster's density and its environment's density. It is then possible to compute the accurate local electronic structures and energetics of the embedded cluster with high-level methods, while retaining a low-level description of the environment. The prerequisite step in density-functional embedding theory is the cluster definition. In covalent systems, cutting across the covalent bonds that connect the cluster and its environment leads to dangling bonds (unpaired electrons). These represent a major obstacle for the application of density-functional embedding theory to extended covalent systems. In this work, we developed a simple scheme to define the cluster in covalent systems. Instead of cutting covalent bonds, we directly split the boundary atoms to maintain the valency of the cluster. With this new covalent embedding scheme, we compute the dehydrogenation energies of several different molecules, as well as the binding energy of a cobalt atom on graphene. Well-localized cluster densities are observed, which can facilitate the use of localized basis sets in high-level calculations. The results are found to converge faster with the embedding method than with the other multi-physics approach, ONIOM. This work paves the way for density-functional embedding simulations of heterogeneous systems in which different types of chemical bonds are present.
Delvigne, Frank; Takors, Ralf; Mudde, Rob; van Gulik, Walter; Noorman, Henk
2017-09-01
Efficient optimization of microbial processes is a critical issue for achieving a number of sustainable development goals, considering the impact of microbial biotechnology in the agrofood, environment, biopharmaceutical and chemical industries. Many of these applications require scale-up after proof of concept. However, the behaviour of microbial systems remains (at least partially) unpredictable when shifting from laboratory-scale to industrial conditions. Robust microbial systems are thus greatly needed in this context, as is a better understanding of the interactions between fluid mechanics and cell physiology. For that purpose, a full scale-up/down computational framework is already available. This framework links computational fluid dynamics (CFD), metabolic flux analysis and agent-based modelling (ABM) for a better understanding of cell lifelines in a heterogeneous environment. Ultimately, this framework can be used for the design of scale-down simulators and/or metabolically engineered cells able to cope with the environmental fluctuations typically found in large-scale bioreactors. However, this framework still needs some refinements, such as a better integration of gas-liquid flows in CFD, and taking into account intrinsic biological noise in ABM. © 2017 The Authors. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.
Moving alcohol prevention research forward-Part I: introducing a complex systems paradigm.
Apostolopoulos, Yorghos; Lemke, Michael K; Barry, Adam E; Lich, Kristen Hassmiller
2018-02-01
The drinking environment is a complex system consisting of a number of heterogeneous, evolving and interacting components, which exhibit circular causality and emergent properties. These characteristics reduce the efficacy of commonly used research approaches, which typically do not account for the underlying dynamic complexity of alcohol consumption and the interdependent nature of diverse factors influencing misuse over time. We use alcohol misuse among college students in the United States as an example for framing our argument for a complex systems paradigm. A complex systems paradigm, grounded in socio-ecological and complex systems theories and computational modeling and simulation, is introduced. Theoretical, conceptual, methodological and analytical underpinnings of this paradigm are described in the context of college drinking prevention research. The proposed complex systems paradigm can transcend limitations of traditional approaches, thereby fostering new directions in alcohol prevention research. By conceptualizing student alcohol misuse as a complex adaptive system, computational modeling and simulation methodologies and analytical techniques can be used. Moreover, use of participatory model-building approaches to generate simulation models can further increase stakeholder buy-in, understanding and policymaking. A complex systems paradigm for research into alcohol misuse can provide a holistic understanding of the underlying drinking environment and its long-term trajectory, which can elucidate high-leverage preventive interventions. © 2017 Society for the Study of Addiction.
High-throughput neuroimaging-genetics computational infrastructure
Dinov, Ivo D.; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Hobel, Sam; Vespa, Paul; Woo Moon, Seok; Van Horn, John D.; Franco, Joseph; Toga, Arthur W.
2014-01-01
Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining, and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate, and disseminate novel scientific methods, computational resources, and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval, and aggregation. Computational processing involves the necessary software, hardware, and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical, and phenotypic data and meta-data. Data mining refers to the process of automatically extracting data features, characteristics and associations, which are not readily visible by human exploration of the raw dataset. Results interpretation includes scientific visualization, community validation of findings, and reproducibility of findings. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at the University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and meta-data. In addition, the institute provides a large number of software tools for image and shape analysis, mathematical modeling, genomic sequence processing, and scientific visualization. A unique feature of this architecture is the Pipeline environment, which integrates data management, processing, transfer, and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating, and disseminating complex protocols that utilize diverse suites of software tools and web-services. These pipeline workflows are represented as portable XML objects which transfer the execution instructions and user specifications from the client user machine to remote pipeline servers for distributed computing. Using Alzheimer's and Parkinson's data, we provide several examples of translational applications using this infrastructure. PMID:24795619
Tolsma, J; van der Meer, T W G
2017-01-01
The constrict claim that ethnic heterogeneity drives down social trust has been empirically tested across the globe. Meta-analyses suggest that neighbourhood ethnic heterogeneity generally undermines ties within the neighbourhood (such as trust in neighbours), but concurrently has an inconsistent or even positive effect on interethnic ties (such as outgroup trust). While the composition of the living environment thus often seems to matter, when and where remain unclear. We contribute to the literature by: (1) scrutinizing the extent to which ethnic heterogeneity drives down trust in coethnic neighbours, non-coethnic neighbours, unknown neighbours and unknown non-neighbours similarly; (2) comparing effects of heterogeneity aggregated to geographical areas that vary in scale and type of boundary; and (3) assessing whether the impact of heterogeneity of the local area depends on the wider geographic context. We test our hypotheses on the Religion in Dutch Society 2011-2012 dataset, supplemented with uniquely detailed GIS data from Statistics Netherlands. Our dependent variables are four different so-called wallet items, which we model through spatial and multilevel regression techniques. We demonstrate that both trust in non-coethnic and coethnic neighbours is lower in heterogeneous environments. Trust in people outside the neighbourhood is not affected by local heterogeneity. Measures of heterogeneity aggregated to relatively large scales, such as administrative municipalities and egohoods with a 4000 m radius, demonstrate the strongest negative relationships with our trust indicators.
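As a schematic of the multilevel part of the modelling described above, the sketch below fits a random-intercept regression of a trust item on local heterogeneity; the column names, grouping level and the statsmodels backend are assumptions, and the spatial-regression component is omitted.

```python
# Minimal random-intercept (multilevel) regression sketch; not the authors'
# specification, and the data file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trust_survey.csv")   # hypothetical respondent-level dataset

model = smf.mixedlm(
    "trust_noncoethnic ~ heterogeneity_4000m + education + age",  # fixed effects
    data=df,
    groups=df["municipality"],         # random intercept per administrative municipality
)
result = model.fit()
print(result.summary())
```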
A uniform approach for programming distributed heterogeneous computing systems
Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas
2014-01-01
Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater’s performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations. PMID:25844015
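The runtime behaviour described above, in which event-enforced dependencies are used to build a DAG of commands before optimizations are applied, can be illustrated with the toy structure below; it is not libWater's actual API, and the class and command names are assumptions.

```python
# Illustrative DAG-of-commands structure built from event dependencies;
# not libWater's real runtime interface.
from collections import defaultdict

class CommandDAG:
    def __init__(self):
        self.edges = defaultdict(set)     # command -> set of commands it depends on
        self.commands = []

    def add_command(self, name, wait_events=()):
        # `wait_events` are previously issued commands whose completion events
        # this command waits on; they become incoming edges in the DAG.
        self.commands.append(name)
        for event in wait_events:
            self.edges[name].add(event)
        return name

    def topological_order(self):
        order, visited = [], set()
        def visit(cmd):
            if cmd in visited:
                return
            visited.add(cmd)
            for dep in self.edges[cmd]:
                visit(dep)
            order.append(cmd)
        for cmd in self.commands:
            visit(cmd)
        return order

dag = CommandDAG()
h2d = dag.add_command("copy_host_to_device")
krn = dag.add_command("run_kernel", wait_events=[h2d])
d2h = dag.add_command("copy_device_to_host", wait_events=[krn])
print(dag.topological_order())   # ['copy_host_to_device', 'run_kernel', 'copy_device_to_host']
```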
Uranium (VI) transport in saturated heterogeneous media: Influence of kaolinite and humic acid.
Chen, Chong; Zhao, Kang; Shang, Jianying; Liu, Chongxuan; Wang, Jin; Yan, Zhifeng; Liu, Kesi; Wu, Wenliang
2018-05-07
Natural aquifers typically exhibit a variety of structural heterogeneities. However, the effect of mineral colloids and natural organic matter on the transport behavior of uranium (U) in saturated heterogeneous media is not fully understood. In this study, heterogeneous column experiments were conducted; the constructed columns contained a fast-flow domain (FFD) and a slow-flow domain (SFD). The effect of kaolinite, humic acid (HA), and a kaolinite/HA mixture on U(VI) retention and release in saturated heterogeneous media was examined. Media heterogeneity significantly influenced U fate and transport behavior in the saturated subsurface environment. The presence of kaolinite, HA, and kaolinite/HA enhanced the mobility of U in heterogeneous media; the mobility of U was highest in the presence of kaolinite/HA and lowest in the presence of kaolinite. In the presence of kaolinite, there was no difference in the amount of U released from the FFD and SFD. However, in the presence of HA and kaolinite/HA, a higher amount of U was released from the FFD. The findings of this study show that the medium structure and mineral colloids, as well as natural organic matter in the aqueous phase, had significant effects on U transport and fate in the subsurface environment. Copyright © 2018 Elsevier Ltd. All rights reserved.
Opus: A Coordination Language for Multidisciplinary Applications
NASA Technical Reports Server (NTRS)
Chapman, Barbara; Haines, Matthew; Mehrotra, Piyush; Zima, Hans; vanRosendale, John
1997-01-01
Data parallel languages, such as High Performance Fortran, can be successfully applied to a wide range of numerical applications. However, many advanced scientific and engineering applications are multidisciplinary and heterogeneous in nature, and thus do not fit well into the data parallel paradigm. In this paper we present Opus, a language designed to fill this gap. The central concept of Opus is a mechanism called ShareD Abstractions (SDA). An SDA can be used as a computation server, i.e., a locus of computational activity, or as a data repository for sharing data between asynchronous tasks. SDAs can be internally data parallel, providing support for the integration of data and task parallelism as well as nested task parallelism. They can thus be used to express multidisciplinary applications in a natural and efficient way. In this paper we describe the features of the language through a series of examples and give an overview of the runtime support required to implement these concepts in parallel and distributed environments.
On the Use of CAD and Cartesian Methods for Aerodynamic Optimization
NASA Technical Reports Server (NTRS)
Nemec, M.; Aftosmis, M. J.; Pulliam, T. H.
2004-01-01
The objective of this paper is to present the development of an optimization capability for Cart3D, a Cartesian inviscid-flow analysis package. We present the construction of a new optimization framework and we focus on the following issues: 1) a component-based geometry parameterization approach using parametric-CAD models and CAPRI. A novel geometry server is introduced that addresses the issue of parallel efficiency while only sparingly consuming CAD resources; 2) the use of genetic and gradient-based algorithms for three-dimensional aerodynamic design problems. The influence of noise on the optimization methods is studied. Our goal is to create a responsive and automated framework that efficiently identifies design modifications that result in substantial performance improvements. In addition, we examine the architectural issues associated with the deployment of a CAD-based approach in a heterogeneous parallel computing environment that contains both CAD workstations and dedicated compute engines. We demonstrate the effectiveness of the framework for a design problem that features topology changes and complex geometry.
Modelling brain emergent behaviours through coevolution of neural agents.
Maniadakis, Michail; Trahanias, Panos
2006-06-01
Recently, many research efforts have focused on modelling partial brain areas with the long-term goal of supporting cognitive abilities of artificial organisms. Existing models usually suffer from heterogeneity, which makes their integration very difficult. The present work introduces a computational framework to address brain modelling tasks, emphasizing the integrative performance of substructures. Moreover, implemented models are embedded in a robotic platform to support its behavioural capabilities. We follow an agent-based approach in the design of substructures to support the autonomy of partial brain structures. Agents are formulated to allow the emergence of a desired behaviour after a certain amount of interaction with the environment. An appropriate collaborative coevolutionary algorithm, able to emphasize both the speciality of brain areas and their cooperative performance, is employed to support the design specification of agent structures. The effectiveness of the proposed approach is illustrated through the implementation of computational models for the motor cortex and hippocampus, which are successfully tested on a simulated mobile robot.
Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment.
Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel
2016-08-30
Distributed computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., the Internet of Things, Cyber-Physical Systems, Big Data analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of the core components, MapReduce facilitates allocating, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data for each task have been analysed and a detailed analysis report has been produced. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs.
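To make the idea of phase-wise extrapolation concrete, here is a generic two-segment regression sketch that predicts a task's finishing time from its progress history; it follows the general notion of a two-phase fit but is not the authors' TPR algorithm, and the breakpoint search and progress/elapsed-time representation are assumptions.

```python
# Generic two-segment ("two-phase") linear regression sketch for extrapolating a
# task's finishing time; illustrative only, not the TPR method from the paper.
import numpy as np

def predict_finish_time(progress, elapsed):
    """progress: fractions in [0, 1]; elapsed: seconds since the task started."""
    progress, elapsed = np.asarray(progress), np.asarray(elapsed)
    best = (np.inf, None)
    for k in range(2, len(progress) - 2):                 # candidate breakpoints
        a1 = np.polyfit(progress[:k], elapsed[:k], 1)     # phase 1: slope, intercept
        a2 = np.polyfit(progress[k:], elapsed[k:], 1)     # phase 2
        sse = (np.sum((np.polyval(a1, progress[:k]) - elapsed[:k]) ** 2) +
               np.sum((np.polyval(a2, progress[k:]) - elapsed[k:]) ** 2))
        if sse < best[0]:
            best = (sse, a2)
    slope, intercept = best[1]
    return slope * 1.0 + intercept                         # extrapolate phase 2 to 100% progress

# Example: a synthetic task that slows down after 30% progress.
prog = np.linspace(0.05, 0.6, 12)
time = np.where(prog < 0.3, 10 * prog, 3 + 20 * (prog - 0.3))
print(round(predict_finish_time(prog, time), 1))   # -> 17.0 (phase-2 trend extrapolated)
```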
Application of ultrasound processed images in space: Quantitative assessment of diffuse affectations
NASA Astrophysics Data System (ADS)
Pérez-Poch, A.; Bru, C.; Nicolau, C.
The purpose of this study was to evaluate diffuse affectations in the liver using texture image processing techniques. Ultrasound diagnostic equipment is the modality of choice for use in space environments as it is free from hazardous effects on health. However, due to the need for highly trained radiologists to assess the images, this imaging method is mainly applied to focal lesions rather than to non-focal ones. We have conducted a clinical study on 72 patients with different degrees of chronic hepatopathies and a control group of 18 individuals. All subjects' clinical reports and biopsy results were compared to the degree of affectation calculated by our computer system, thus validating the method. Full statistical results are given in the present paper, showing a good correlation (r=0.61) between the pathologist's report and the analysis of the heterogeneity of the processed images of the liver. This computer system to analyze diffuse affectations may be used in-situ or via telemedicine to the ground.
Stroke Rehabilitation using Virtual Environments
Fu, Michael J.; Knutson, Jayme; Chae, John
2015-01-01
Synopsis This review covers the rationale, mechanisms, and availability of commercially available virtual environment-based interventions for stroke rehabilitation. It describes interventions for motor, speech, cognitive, and sensory dysfunction. Also discussed are the important features and mechanisms that allow virtual environments to facilitate motor relearning. A common challenge facing the field is inability to translate success in small trials to efficacy in larger populations. The heterogeneity of stroke pathophysiology has been blamed and experts advocate for the study of multimodal approaches. Therefore, this article also introduces a framework to help define new therapy combinations that may be necessary to address stroke heterogeneity. PMID:26522910
Research on detecting heterogeneous fibre from cotton based on linear CCD camera
NASA Astrophysics Data System (ADS)
Zhang, Xian-bin; Cao, Bing; Zhang, Xin-peng; Shi, Wei
2009-07-01
Heterogeneous fibre in cotton has a great impact on the production of cotton textiles: it degrades product quality and thereby affects the economic benefits and market competitiveness of the corporation. Detecting and eliminating heterogeneous fibre is therefore particularly important for improving cotton processing, raising the quality of cotton textiles and reducing production costs, and this technology has favourable market value and development prospects. Optical detection systems are widely applied for this purpose. In our system, a linear CCD camera scans the running cotton; the video signals are then fed into a computer and processed according to differences in grayscale, and if heterogeneous fibre is present in the cotton, the computer sends a command to drive a gas nozzle that eliminates the heterogeneous fibre. In this paper we adopt a monochrome LED array as the new detecting light source; its flicker, luminous-intensity stability, lumen depreciation and useful life are all superior to fluorescent light. We first analyse the reflection spectra of cotton and various heterogeneous fibres, then select an appropriate frequency for the light source, and finally adopt a violet LED array as the detecting light source. The overall hardware structure and software design are introduced in this paper.
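The grayscale-difference detection step can be sketched as below; the background level, threshold and nozzle interface are illustrative assumptions rather than the authors' implementation.

```python
# Toy sketch of grayscale-difference detection on a linear-CCD scanline;
# thresholds and the nozzle callback are assumptions, not the paper's system.
import numpy as np

BACKGROUND_GRAY = 200       # assumed grayscale of clean cotton under the LED source
THRESHOLD = 40              # assumed allowed deviation before a pixel counts as foreign

def detect_foreign_fibre(scanline):
    """scanline: 1-D array of grayscale values from the linear CCD camera."""
    deviation = np.abs(scanline.astype(int) - BACKGROUND_GRAY)
    suspect = np.where(deviation > THRESHOLD)[0]
    return suspect            # pixel positions of suspected heterogeneous fibre

def process_frame(scanline, fire_nozzle):
    positions = detect_foreign_fibre(scanline)
    if positions.size:
        fire_nozzle(positions)   # placeholder for the command sent to the air nozzle

# Example with a synthetic scanline containing one dark foreign fibre.
line = np.full(2048, 200, dtype=np.uint8)
line[1024:1030] = 60
process_frame(line, fire_nozzle=lambda idx: print("eject at pixels", idx[:3], "..."))
```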
NASA Astrophysics Data System (ADS)
Jung, Jin Woo; Lee, Jung-Seob; Cho, Dong-Woo
2016-02-01
Recently, much attention has focused on replacement and/or enhancement of biological tissues via the use of cell-laden hydrogel scaffolds with an architecture that mimics the tissue matrix, and with the desired three-dimensional (3D) external geometry. However, mimicking the heterogeneous tissues that most organs and tissues are formed of is challenging. Although multiple-head 3D printing systems have been proposed for fabricating heterogeneous cell-laden hydrogel scaffolds, to date only the simple exterior form has been realized. Here we describe a computer-aided design and manufacturing (CAD/CAM) system for this application. We aim to develop an algorithm to enable easy, intuitive design and fabrication of heterogeneous cell-laden hydrogel scaffolds with a free-form 3D geometry. The printing paths of the scaffold are automatically generated from the 3D CAD model, and the scaffold is then printed by dispensing four materials; i.e., a frame, two kinds of cell-laden hydrogel and a support. We demonstrated printing of heterogeneous tissue models formed of hydrogel scaffolds using this approach, including the outer ear, kidney and tooth tissue. These results indicate that this approach is particularly promising for tissue engineering and 3D printing applications to regenerate heterogeneous organs and tissues with tailored geometries to treat specific defects or injuries.
A heterogeneous and parallel computing framework for high-resolution hydrodynamic simulations
NASA Astrophysics Data System (ADS)
Smith, Luke; Liang, Qiuhua
2015-04-01
Shock-capturing hydrodynamic models are now widely applied in the context of flood risk assessment and forecasting, accurately capturing the behaviour of surface water over ground and within rivers. Such models are generally explicit in their numerical basis, and can be computationally expensive; this has prohibited full use of high-resolution topographic data for complex urban environments, now easily obtainable through airborne altimetric surveys (LiDAR). As processor clock speed advances have stagnated in recent years, further computational performance gains are largely dependent on the use of parallel processing. Heterogeneous computing architectures (e.g. graphics processing units or compute accelerator cards) provide a cost-effective means of achieving high throughput in cases where the same calculation is performed with a large input dataset. In recent years this technique has been applied successfully for flood risk mapping, such as within the national surface water flood risk assessment for the United Kingdom. We present a flexible software framework for hydrodynamic simulations across multiple processors of different architectures, within multiple computer systems, enabled using OpenCL and Message Passing Interface (MPI) libraries. A finite-volume Godunov-type scheme is implemented using the HLLC approach to solving the Riemann problem, with optional extension to second-order accuracy in space and time using the MUSCL-Hancock approach. The framework is successfully applied on personal computers and a small cluster to provide considerable improvements in performance. The most significant performance gains were achieved across two servers, each containing four NVIDIA GPUs, with a mix of K20, M2075 and C2050 devices. Advantages are found with respect to decreased parametric sensitivity, and thus reduced uncertainty, for a major fluvial flood within a large catchment during 2005 in Carlisle, England. Simulations for the three-day event could be performed on a 2 m grid within a few hours. In the context of a rapid pluvial flood event in Newcastle upon Tyne during 2012, the technique allows simulation of inundation for 31 km2 of the city centre in less than an hour on a 2 m grid; however, further grid refinement is required to fully capture important smaller flow pathways. Good agreement between the model and observed inundation is achieved for a variety of dam failure, slow fluvial inundation, rapid pluvial inundation, and defence breach scenarios in the UK.
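For orientation, the sketch below implements a first-order, one-dimensional HLL shallow-water update: a heavily simplified, serial relative of the HLLC/MUSCL-Hancock, OpenCL+MPI scheme described above, with all grid parameters chosen arbitrarily.

```python
# First-order 1-D HLL shallow-water solver (illustrative only; the framework
# above uses HLLC, MUSCL-Hancock, 2-D grids, OpenCL and MPI).
import numpy as np

g = 9.81

def flux(h, hu):
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

def hll_flux(hL, huL, hR, huR):
    uL, uR = huL / hL, huR / hR
    cL, cR = np.sqrt(g * hL), np.sqrt(g * hR)
    sL, sR = min(uL - cL, uR - cR), max(uL + cL, uR + cR)
    FL, FR = flux(hL, huL), flux(hR, huR)
    if sL >= 0:
        return FL
    if sR <= 0:
        return FR
    UL, UR = np.array([hL, huL]), np.array([hR, huR])
    return (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)

# Dam-break initial condition on a 1-D channel.
nx, dx, cfl = 200, 1.0, 0.45
h = np.where(np.arange(nx) < nx // 2, 2.0, 1.0)
hu = np.zeros(nx)

t, t_end = 0.0, 10.0
while t < t_end:
    c = np.abs(hu / h) + np.sqrt(g * h)
    dt = cfl * dx / c.max()                      # CFL-limited time step
    F = np.array([hll_flux(h[i], hu[i], h[i + 1], hu[i + 1]) for i in range(nx - 1)])
    h[1:-1] -= dt / dx * (F[1:, 0] - F[:-1, 0])  # update interior cells only
    hu[1:-1] -= dt / dx * (F[1:, 1] - F[:-1, 1])
    t += dt
```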
NASA Astrophysics Data System (ADS)
Das Mahanta, Debasish; Rana, Debkumar; Patra, Animesh; Mukherjee, Biswaroop; Mitra, Rajib Kumar
2018-05-01
Water is often found in (micro-)heterogeneous environments, and it is therefore necessary to understand its H-bonded network structure in such altered environments. We explore the structure and dynamics of water in its binary mixture with the relatively less polar, small, biocompatible amphiphilic molecule 1,2-dimethoxyethane (DME) by a combined spectroscopic and molecular dynamics (MD) simulation study. Picosecond (ps) resolved fluorescence spectroscopy using coumarin 500 as the fluorophore establishes a non-monotonic behaviour of the mixture. Simulation studies also explore the various possible H-bond formations between water and DME. The relative abundance of these different water species manifests the heterogeneity in the mixture.
Heterogeneous Deformable Modeling of Bio-Tissues and Haptic Force Rendering for Bio-Object Modeling
NASA Astrophysics Data System (ADS)
Lin, Shiyong; Lee, Yuan-Shin; Narayan, Roger J.
This paper presents a novel technique for modeling soft biological tissues as well as the development of an innovative interface for bio-manufacturing and medical applications. Heterogeneous deformable models may be used to represent the actual internal structures of deformable biological objects, which possess multiple components and nonuniform material properties. Both heterogeneous deformable object modeling and accurate haptic rendering can greatly enhance the realism and fidelity of virtual reality environments. In this paper, a tri-ray node snapping algorithm is proposed to generate a volumetric heterogeneous deformable model from a set of object interface surfaces between different materials. A constrained local static integration method is presented for simulating deformation and accurate force feedback based on the material properties of a heterogeneous structure. Biological soft tissue modeling is used as an example to demonstrate the proposed techniques. By integrating the heterogeneous deformable model into a virtual environment, users can both observe different materials inside a deformable object as well as interact with it by touching the deformable object using a haptic device. The presented techniques can be used for surgical simulation, bio-product design, bio-manufacturing, and medical applications.
A View on Future Building System Modeling and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wetter, Michael
This chapter presents what a future environment for building system modeling and simulation may look like. As buildings continue to require increased performance and better comfort, their energy and control systems are becoming more integrated and complex. We therefore focus in this chapter on the modeling, simulation and analysis of building energy and control systems. Such systems can be classified as heterogeneous systems because they involve multiple domains, such as thermodynamics, fluid dynamics, heat and mass transfer, electrical systems, control systems and communication systems. Also, they typically involve multiple temporal and spatial scales, and their evolution can be described by coupled differential equations, discrete equations and events. Modeling and simulating such systems requires a higher level of abstraction and modularisation to manage the increased complexity compared to what is used in today's building simulation programs. Therefore, the trend towards more integrated building systems is likely to be a driving force for changing the status quo of today's building simulation programs. This chapter discusses evolving modeling requirements and outlines a path toward a future environment for modeling and simulation of heterogeneous building systems. A range of topics that would require many additional pages of discussion has been omitted. Examples include computational fluid dynamics for air and particle flow in and around buildings, people movement, daylight simulation, uncertainty propagation and optimisation methods for building design and controls. For different discussions and perspectives on the future of building modeling and simulation, we refer to Sahlin (2000), Augenbroe (2001) and Malkawi and Augenbroe (2004).
A platform for quantitative evaluation of intratumoral spatial heterogeneity in multiplexed immunofluorescence images, via characterization of the spatial interactions between different cellular phenotypes and non-cellular constituents in the tumor microenvironment.
NASA Astrophysics Data System (ADS)
Thakore, Arun K.; Sauer, Frank
1994-05-01
The organization of modern medical care environments into disease-related clusters, such as a cancer center, a diabetes clinic, etc., has the side-effect of introducing multiple heterogeneous databases, often containing similar information, within the same organization. This heterogeneity fosters incompatibility and prevents the effective sharing of data amongst applications at different sites. Although integration of heterogeneous databases is now feasible, in the medical arena this is often an ad hoc process, not founded on proven database technology or formal methods. In this paper we illustrate the use of a high-level object- oriented semantic association method to model information found in different databases into an integrated conceptual global model that integrates the databases. We provide examples from the medical domain to illustrate an integration approach resulting in a consistent global view, without attacking the autonomy of the underlying databases.
Stanley, Ryan; Snelgrove, Paul V R; Deyoung, Brad; Gregory, Robert S
2012-01-01
During the pelagic larval phase, fish dispersal may be influenced passively by surface currents or actively determined by swimming behaviour. In situ observations of larval swimming are few given the constraints of field sampling. Active behaviour is therefore often inferred from spatial patterns in the field, laboratory studies, or hydrodynamic theory, but rarely are these approaches considered in concert. Ichthyoplankton survey data collected during 2004 and 2006 from coastal Newfoundland show that changes in spatial heterogeneity for multiple species do not conform to predictions based on passive transport. We evaluated the interaction of individual larvae with their environment by calculating Reynolds number as a function of ontogeny. Typically, larvae hatch into a viscous environment in which swimming is inefficient, and later grow into more efficient intermediate and inertial swimming environments. Swimming is therefore closely related to length, not only because of swimming capacity but also in how larvae experience viscosity. Six of eight species sampled demonstrated consistent changes in spatial patchiness and concomitant increases in spatial heterogeneity as they transitioned into more favourable hydrodynamic swimming environments, suggesting an active behavioural element to dispersal. We propose the tandem assessment of spatial heterogeneity and hydrodynamic environment as a potential approach to understand and predict the onset of ecologically significant swimming behaviour of larval fishes in the field.
Jorgensen, Tove H.
2012-01-01
Background and Aims The biotic and abiotic environment of interacting hosts and parasites may vary considerably over small spatial and temporal scales. It is essential to understand how different environments affect host disease resistance because this determines frequency of disease and, importantly, heterogeneous environments can retard direct selection and potentially maintain genetic variation for resistance in natural populations. Methods The effect of different temperatures and soil nutrient conditions on the outcome of infection by a pathogen was quantified in Arabidopsis thaliana. Expression levels of a gene conferring resistance to powdery mildews, RPW8, were compared with levels of disease to test a possible mechanism behind variation in resistance. Key Results Most host genotypes changed from susceptible to resistant across environments with the ranking of genotypes differing between treatments. Transcription levels of RPW8 increased after infection and varied between environments, but there was no tight association between transcription and resistance levels. Conclusions There is a strong potential for a heterogeneous environment to change the resistance capacity of A. thaliana genotypes and hence the direction and magnitude of selection in the presence of the pathogen. Possible causative links between resistance gene expression and disease resistance are discussed in light of the present results on RPW8. PMID:22234559
Thermal Coefficient of Linear Expansion Modified by Dendritic Segregation in Nickel-Iron Alloys
NASA Astrophysics Data System (ADS)
Ogorodnikova, O. M.; Maksimova, E. V.
2018-05-01
The paper presents investigations of the thermal properties of Fe-Ni and Fe-Ni-Co casting alloys affected by the heterogeneous distribution of their chemical elements. It is shown that nickel dendritic segregation has a negative effect on the properties of the studied invars. A mathematical model is proposed to explore the influence of nickel dendritic segregation on the thermal coefficient of linear expansion (TCLE) of the alloy. A computer simulation of the TCLE of Fe-Ni-Co superinvars is performed with regard to the heterogeneous distribution of their chemical elements over the whole volume. The ProLigSol computer software application is developed for processing the data array and the results of the computer simulation.
Modeling Endovascular Coils as Heterogeneous Porous Media
NASA Astrophysics Data System (ADS)
Yadollahi Farsani, H.; Herrmann, M.; Chong, B.; Frakes, D.
2016-12-01
Minimally invasive surgeries are the state-of-the-art treatments for many pathologies. Treating brain aneurysms is no exception; invasive neurovascular clipping is no longer the only option, and endovascular coiling has become the most common treatment. Coiling isolates the aneurysm from blood circulation by promoting thrombosis within the aneurysm. One approach to studying intra-aneurysmal hemodynamics consists of virtually deploying finite element coil models and then performing computational fluid dynamics. However, this approach is often computationally expensive and requires extensive resources to perform. The porous medium approach has been considered as an alternative to the conventional coil modeling approach because it lessens the complexities of computational fluid dynamics simulations by reducing the number of mesh elements needed to discretize the domain. There have been a limited number of attempts at treating endovascular coils as homogeneous porous media. However, the heterogeneity associated with coil configurations requires a more accurately defined porous medium in which the porosity and permeability change throughout the domain. We implemented this approach by introducing a lattice of sample volumes and utilizing techniques available in the field of interactive computer graphics. We observed that the introduction of the heterogeneity assumption was associated with significant changes in simulated aneurysmal flow velocities as compared to the homogeneous assumption case. Moreover, as the sample volume size was decreased, the flow velocities approached an asymptotic value, showing the importance of sample volume size selection. These results demonstrate that the homogeneous assumption for porous media that are inherently heterogeneous can lead to considerable errors. Additionally, this modeling approach allowed us to simulate post-treatment flows without considering the explicit geometry of a deployed endovascular coil mass, greatly simplifying computation.
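The sample-volume idea described above can be sketched as follows: coil centreline points are binned into a lattice of sample volumes to give a cell-by-cell porosity, from which a permeability field can be estimated. The coil representation, lattice resolution and the Kozeny-Carman style correlation are assumptions, not the authors' method.

```python
# Illustrative sketch of heterogeneous porosity/permeability from a lattice of
# sample volumes; the coil representation and correlation are assumptions.
import numpy as np

def local_porosity(coil_points, wire_diameter, point_spacing, bounds, n_cells):
    """coil_points: (N,3) points sampled along the coil centreline at `point_spacing`."""
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    cell_size = (hi - lo) / np.asarray(n_cells)
    cell_volume = np.prod(cell_size)
    wire_volume_per_point = np.pi * (wire_diameter / 2) ** 2 * point_spacing

    idx = np.floor((coil_points - lo) / cell_size).astype(int)
    idx = np.clip(idx, 0, np.array(n_cells) - 1)
    counts = np.zeros(n_cells)
    np.add.at(counts, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)

    solid_fraction = counts * wire_volume_per_point / cell_volume
    return 1.0 - np.clip(solid_fraction, 0.0, 1.0)       # porosity per sample volume

def kozeny_carman(porosity, wire_diameter):
    # Packed-bed style permeability estimate used here only as a stand-in correlation.
    eps = np.clip(porosity, 1e-3, 1 - 1e-3)
    return wire_diameter ** 2 * eps ** 3 / (180.0 * (1.0 - eps) ** 2)
```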
A computational model was developed to simulate aquifer remediation by pump and treat for a confined, perfectly stratified aquifer. A split-operator finite element numerical technique was utilized to incorporate flow field heterogeneity and nonequilibrium sorption into a two-dimensi...
On the Use of Parametric-CAD Systems and Cartesian Methods for Aerodynamic Design
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.; Pulliam, Thomas H.
2004-01-01
Automated, high-fidelity tools for aerodynamic design face critical issues in attempting to optimize real-life geometry and in permitting radical design changes. Success in these areas promises not only significantly shorter design-cycle times, but also superior and unconventional designs. To address these issues, we investigate the use of a parametric-CAD system in conjunction with an embedded-boundary Cartesian method. Our goal is to combine the modeling capabilities of feature-based CAD with the robustness and flexibility of component-based Cartesian volume-mesh generation for complex geometry problems. We present the development of an automated optimization framework with a focus on the deployment of such a CAD-based design approach in a heterogeneous parallel computing environment that contains both CAD workstations and dedicated compute engines.
Room-temperature ionic liquids: slow dynamics, viscosity, and the red edge effect.
Hu, Zhonghan; Margulis, Claudio J
2007-11-01
Ionic liquids (ILs) have recently attracted significant attention from academic and industrial sources. This is because, while their vapor pressures are negligible, many of them are liquids at room temperature and can dissolve a wide range of polar and nonpolar organic and inorganic molecules. In this Account, we discuss the progress of our laboratory in understanding the dynamics, spectroscopy, and fluid dynamics of selected imidazolium-based ILs using computational and analytical tools that we have recently developed. Our results indicate that the red edge effect, the non-Newtonian behavior, and the existence of locally heterogeneous environments on a time scale relevant to chemical and photochemical reactivity are closely linked to the viscosity and highly structured character of these liquids.
NASA Astrophysics Data System (ADS)
Golod, V. M.; Sufiiarov, V. Sh
2017-04-01
Gas atomization is a high-performance process for manufacturing superfine metal powders. Formation of the powder particles takes place primarily through the fragmentation of the alloy melt flow by high-pressure inert gas, which leads to the formation of non-uniformly sized micron-scale particles and their subsequent rapid solidification due to heat exchange with the gas environment. The article presents results of computer modeling of the crystallization process, together with simulation and experimental studies of cellular-dendritic structure formation and microsegregation in particles of different sizes. It presents results of adapting the local nonequilibrium solidification approach to the crystallization conditions of gas atomization, and identifies the border values of particle size at which diffusionless crystallization becomes possible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-05-01
This report is a summary of the proceedings from the Minitrack on Data and Knowledge Base Issues in Genomics at the 27th Hawaii International Conference on System Science, January 4-7, 1994. The minitrack was organized by Dong-Guk Shin (University of Connecticut) and Francois Rechenmann (INRIA, France). Support was jointly provided by the NSF, NIH and DOE. The minitrack included, after rigorous review, ten full papers and four extended abstracts in the following five different research subareas of genome informatics: data modeling and management, sequence analysis, graphical user interface, interoperation in a heterogeneous computing environment, and system integration in a knowledge-based approach.
Brain Performance versus Phase Transitions
NASA Astrophysics Data System (ADS)
Torres, Joaquín J.; Marro, J.
2015-07-01
We illustrate here how a well-founded study of the brain may originate in assuming analogies with phase-transition phenomena. Analyzing to what extent a weak signal endures in noisy environments, we identify the underlying mechanisms, and the result is a description of how the excitability associated with (non-equilibrium) phase changes and criticality optimizes the processing of the signal. Our setting is a network of integrate-and-fire nodes in which connections are heterogeneous, with rapidly time-varying intensities mimicking fatigue and potentiation. Emergence then becomes quite robust against modification of the wiring topology (in fact, we considered everything from a fully connected network to the Homo sapiens connectome), showing the essential role of synaptic flickering in computations. We also suggest how to experimentally disclose significant changes during actual brain operation.
OpenID Connect as a security service in cloud-based medical imaging systems
Ma, Weina; Sartipi, Kamran; Sharghigoorabi, Hassan; Koff, David; Bak, Peter
2016-01-01
Abstract. The evolution of cloud computing is driving the next generation of medical imaging systems. However, privacy and security concerns have been consistently regarded as the major obstacles to the adoption of cloud computing by healthcare domains. OpenID Connect, combining OpenID and OAuth together, is an emerging representational state transfer-based federated identity solution. It is one of the most adopted open standards and could potentially become the de facto standard for securing cloud computing and mobile applications; it has also been called the "Kerberos of the cloud." We introduce OpenID Connect as an authentication and authorization service in cloud-based diagnostic imaging (DI) systems, and propose enhancements that allow for incorporating this technology within distributed enterprise environments. The objective of this study is to offer solutions for secure sharing of medical images among diagnostic imaging repository (DI-r) and heterogeneous picture archiving and communication systems (PACS), as well as Web-based and mobile clients, in the cloud ecosystem. The main objective is to use the OpenID Connect open-source single sign-on and authorization service in a user-centric manner, so that deploying DI-r and PACS to private or community clouds provides security levels equivalent to the traditional computing model. PMID:27340682
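As a concrete anchor for the flow described above, the sketch below shows the standard OpenID Connect authorization-code token exchange that an imaging web client would perform against an OpenID provider; the provider URL, client credentials and redirect URI are placeholders, and a production deployment would also validate the ID token signature and claims (issuer, audience, expiry).

```python
# Minimal OpenID Connect authorization-code token exchange; endpoint and
# credentials are hypothetical placeholders.
import requests

TOKEN_ENDPOINT = "https://openid-provider.example.org/token"   # hypothetical provider

def exchange_code_for_tokens(auth_code):
    response = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "authorization_code",
            "code": auth_code,
            "redirect_uri": "https://pacs-client.example.org/callback",
            "client_id": "pacs-web-client",
            "client_secret": "replace-with-registered-secret",
        },
        timeout=10,
    )
    response.raise_for_status()
    tokens = response.json()
    return tokens["access_token"], tokens.get("id_token"), tokens.get("refresh_token")

# The access token is then sent as a Bearer token when requesting images:
#   requests.get(image_url, headers={"Authorization": f"Bearer {access_token}"})
```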
Advanced nodal neutron diffusion method with space-dependent cross sections: ILLICO-VX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajic, H.L.; Ougouag, A.M.
1987-01-01
Advanced transverse-integrated nodal methods for neutron diffusion developed since the 1970s require that node- or assembly-homogenized cross sections be known. The underlying structural heterogeneity can be accurately accounted for in homogenization procedures by the use of heterogeneity or discontinuity factors. Other (milder) types of heterogeneity, burnup-induced or due to thermal-hydraulic feedback, can be resolved by explicitly accounting for the spatial variations of material properties. This can be done during the nodal computations via nonlinear iterations. The new method has been implemented in the code ILLICO-VX (ILLICO variable cross-section method). Numerous numerical tests were performed. As expected, the convergence rate of ILLICO-VX is lower than that of ILLICO, requiring approximately 30% more outer iterations per k_eff computation. The methodology has also been implemented as the NOMAD-VX option of the NOMAD multicycle, multigroup, two- and three-dimensional nodal diffusion depletion code. The burnup-induced heterogeneities (space dependence of cross sections) are calculated during the burnup steps.
Sabne, Amit J.; Sakdhnagool, Putt; Lee, Seyong; ...
2015-07-13
Accelerator-based heterogeneous computing is gaining momentum in the high-performance computing arena. However, the increased complexity of heterogeneous architectures demands more generic, high-level programming models. OpenACC is one such attempt to tackle this problem. Although the abstraction provided by OpenACC offers productivity, it raises questions concerning both functional and performance portability. In this article, the authors propose HeteroIR, a high-level, architecture-independent intermediate representation, to map high-level programming models, such as OpenACC, to heterogeneous architectures. They present a compiler approach that translates OpenACC programs into HeteroIR and accelerator kernels to obtain OpenACC functional portability. They then evaluate the performance portability obtained by OpenACC with their approach on 12 OpenACC programs on Nvidia CUDA, AMD GCN, and Intel Xeon Phi architectures. They study the effects of various compiler optimizations and OpenACC program settings on these architectures to provide insights into the achieved performance portability.
An approach for drag correction based on the local heterogeneity for gas-solid flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Tingwen; Wang, Limin; Rogers, William
2016-09-22
The drag models typically used for gas-solids interaction are mainly developed based on homogeneous systems of flow passing a fixed particle assembly. It has been shown that the heterogeneous structures, i.e., clusters and bubbles in fluidized beds, need to be resolved to account for their effect in the numerical simulations. Since the heterogeneity is essentially captured through the local concentration gradient in the computational cells, this study proposes a simple approach to account for the non-uniformity of solids spatial distribution inside a computational cell and its effect on the interaction between gas and solid phases. Finally, to validate this approach, the predicted drag coefficient has been compared to the results from direct numerical simulations. In addition, the need to account for this type of heterogeneity is discussed for a periodic riser flow simulation with highly resolved numerical grids and the impact of the proposed correction for drag is demonstrated.
Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.
Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei
2011-09-07
Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.
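As a rough illustration of the master/worker split described above, the sketch below divides a fixed number of Monte Carlo histories across MPI ranks with mpi4py and reduces the partial scores onto rank 0. The "simulation" is a dummy scoring function, not EGS5, and the history count is arbitrary.

```python
# Toy master/worker Monte Carlo over MPI (not EGS5): each rank simulates an
# equal share of histories and the partial scores are reduced onto rank 0.
import numpy as np
from mpi4py import MPI

def simulate_histories(n, rng):
    """Stand-in for particle transport: score a dummy quantity per history."""
    return float(np.sum(rng.exponential(scale=1.0, size=n)))

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

total_histories = 1_000_000
local_n = total_histories // size + (1 if rank < total_histories % size else 0)

rng = np.random.default_rng(seed=rank)        # independent stream per worker
local_score = simulate_histories(local_n, rng)

total_score = comm.reduce(local_score, op=MPI.SUM, root=0)
if rank == 0:
    print(f"aggregated score over {total_histories} histories: {total_score:.3e}")
```

Launched with, e.g., `mpirun -n 100 python mc_cloud_sketch.py` (a hypothetical script name), this mimics the 100-node configuration reported above.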
Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation
NASA Astrophysics Data System (ADS)
Anisenkov, A. V.
2018-03-01
In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for efficient storage and processing of experimental data. Annually, the ATLAS experiment performed at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes of raw data from the recording electronics and several petabytes of data from the simulation system. For processing and storage of such super-large volumes of data, the computing model of the ATLAS experiment is based on a heterogeneous, geographically distributed computing environment, which includes the Worldwide LHC Computing Grid (WLCG) infrastructure and is able to meet the requirements of the experiment for processing huge data sets and provide a high degree of their accessibility (hundreds of petabytes). The paper considers the ATLAS Grid Information System (AGIS) used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of computer centers, and to describe and store all possible parameters, control, configuration, and other auxiliary information required for the effective operation of the ATLAS distributed computing applications and services. The role of the AGIS system in unifying the description of the computing resources provided by grid sites, supercomputer centers, and cloud computing platforms into a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and integrate supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).
HGIMDA: Heterogeneous graph inference for miRNA-disease association prediction.
Chen, Xing; Yan, Chenggang Clarence; Zhang, Xu; You, Zhu-Hong; Huang, Yu-An; Yan, Gui-Ying
2016-10-04
Recently, microRNAs (miRNAs) have drawn more and more attention because accumulating experimental studies have indicated that miRNAs can play critical roles in multiple biological processes as well as in the development and progression of human complex diseases. Using the huge number of known heterogeneous biological datasets to predict potential associations between miRNAs and diseases is an important topic in the fields of biology, medicine, and bioinformatics. In this study, considering the limitations of previous computational methods, we developed the computational model of Heterogeneous Graph Inference for MiRNA-Disease Association prediction (HGIMDA) to uncover potential miRNA-disease associations by integrating miRNA functional similarity, disease semantic similarity, Gaussian interaction profile kernel similarity, and experimentally verified miRNA-disease associations into a heterogeneous graph. HGIMDA obtained AUCs of 0.8781 and 0.8077 based on global and local leave-one-out cross validation, respectively. Furthermore, HGIMDA was applied to three important human cancers for performance evaluation. As a result, 90% (Colon Neoplasms), 88% (Esophageal Neoplasms) and 88% (Kidney Neoplasms) of the top 50 predicted miRNAs are confirmed by recent experimental reports. Furthermore, HGIMDA can be effectively applied to new diseases and new miRNAs without any known associations, which overcomes an important limitation of many previous computational models.
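The abstract does not spell out the HGIMDA update rule, but heterogeneous-graph inference methods of this kind typically iterate a fixed point of the form S <- alpha * Sm * S * Sd + (1 - alpha) * A over the combined similarity/association graph. The sketch below shows that generic iteration on toy matrices; the normalization, the alpha value, and the similarity matrices are assumptions, not the paper's exact formulation.

```python
# Generic heterogeneous-graph inference iteration on toy data (the exact
# HGIMDA update is not given in the abstract; this is the commonly used
# fixed-point form S <- alpha * Sm @ S @ Sd + (1 - alpha) * A).
import numpy as np

def normalize(sim):
    """Symmetric normalisation so the iteration contracts."""
    d = np.sqrt(sim.sum(axis=1))
    d[d == 0] = 1.0
    return sim / np.outer(d, d)

def graph_inference(A, sim_mirna, sim_disease, alpha=0.4, tol=1e-6, max_iter=200):
    """A: known miRNA-disease associations (n_mirna x n_disease), binary 0/1."""
    Sm, Sd = normalize(sim_mirna), normalize(sim_disease)
    S = A.astype(float).copy()
    for _ in range(max_iter):
        S_new = alpha * Sm @ S @ Sd + (1.0 - alpha) * A
        if np.abs(S_new - S).max() < tol:
            break
        S = S_new
    return S_new

# toy example: 4 miRNAs, 3 diseases, random symmetric similarities
rng = np.random.default_rng(0)
A = (rng.random((4, 3)) > 0.7).astype(float)
M, D = rng.random((4, 4)), rng.random((3, 3))
scores = graph_inference(A, (M + M.T) / 2, (D + D.T) / 2)
print(scores.round(3))   # scores rank candidate miRNA-disease pairs
```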
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuo, Wangda; McNeil, Andrew; Wetter, Michael
2013-05-23
Building designers are increasingly relying on complex fenestration systems to reduce energy consumed for lighting and HVAC in low energy buildings. Radiance, a lighting simulation program, has been used to conduct daylighting simulations for complex fenestration systems. Depending on the configurations, the simulation can take hours or even days using a personal computer. This paper describes how to accelerate the matrix multiplication portion of a Radiance three-phase daylight simulation by conducting parallel computing on the heterogeneous hardware of a personal computer. The algorithm was optimized and the computational part was implemented in parallel using OpenCL. The speed of the new approach was evaluated using various daylighting simulation cases on a multicore central processing unit and a graphics processing unit. Based on the measurements and analysis of the time usage for the Radiance daylighting simulation, further speedups can be achieved by using fast I/O devices and storing the data in a binary format.
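As a flavour of the kind of OpenCL offload described, the sketch below runs a deliberately naive matrix-multiply kernel through PyOpenCL on whatever CPU or GPU device is available. It is not the Radiance three-phase code, and a production kernel would add tiling and local-memory blocking.

```python
# Naive PyOpenCL matrix multiply illustrating offload of the matrix-product
# step to a CPU or GPU OpenCL device (not the actual Radiance code).
import numpy as np
import pyopencl as cl

KERNEL = """
__kernel void matmul(__global const float *a, __global const float *b,
                     __global float *c, const int n) {
    int row = get_global_id(0);
    int col = get_global_id(1);
    float acc = 0.0f;
    for (int k = 0; k < n; ++k)
        acc += a[row * n + k] * b[k * n + col];
    c[row * n + col] = acc;
}
"""

n = 512
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)
c = np.empty_like(a)

ctx = cl.create_some_context()            # picks an available CPU or GPU
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
c_buf = cl.Buffer(ctx, mf.WRITE_ONLY, c.nbytes)

prog = cl.Program(ctx, KERNEL).build()
prog.matmul(queue, (n, n), None, a_buf, b_buf, c_buf, np.int32(n))
cl.enqueue_copy(queue, c, c_buf)

assert np.allclose(c, a @ b, atol=1e-2)   # agrees with the CPU reference
```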
PHoToNs–A parallel heterogeneous and threads oriented code for cosmological N-body simulation
NASA Astrophysics Data System (ADS)
Wang, Qiao; Cao, Zong-Yan; Gao, Liang; Chi, Xue-Bin; Meng, Chen; Wang, Jie; Wang, Long
2018-06-01
We introduce a new code for cosmological simulations, PHoToNs, which incorporates features for performing massive cosmological simulations on heterogeneous high-performance computer (HPC) systems and threads-oriented programming. PHoToNs adopts a hybrid scheme to compute gravitational force, with the conventional Particle-Mesh (PM) algorithm to compute the long-range force, the Tree algorithm to compute the short-range force, and the direct-summation Particle-Particle (PP) algorithm to compute gravity from very close particles. A self-similar space-filling Peano-Hilbert curve is used to decompose the computing domain. Threads programming is advantageously used to more flexibly manage the domain communication, PM calculation and synchronization, as well as Dual Tree Traversal on the CPU+MIC platform. PHoToNs scales well and the efficiency of the PP kernel achieves 68.6% of peak performance on MIC and 74.4% on CPU platforms. We also test the accuracy of the code against the widely used Gadget-2 code and find excellent agreement.
Dong, Xinzhe; Wu, Peipei; Sun, Xiaorong; Li, Wenwu; Wan, Honglin; Yu, Jinming; Xing, Ligang
2015-06-01
This study aims to explore whether intra-tumour (18)F-fluorodeoxyglucose (FDG) uptake heterogeneity affects the reliability of target volume definition with FDG positron emission tomography/computed tomography (PET/CT) imaging for non-small cell lung cancer (NSCLC) and squamous cell oesophageal cancer (SCEC). Patients with NSCLC (n = 50) or SCEC (n = 50) who received (18)F-FDG PET/CT scanning before treatment were included in this retrospective study. Intra-tumour FDG uptake heterogeneity was assessed by visual scoring, the coefficient of variation (COV) of the standardised uptake value (SUV) and the image texture feature (entropy). Tumour volumes (gross tumour volume (GTV)) were delineated on the CT images (GTV(CT)), the fused PET/CT images (GTV(PET-CT)) and the PET images, using a threshold of 40% SUV(max) (GTV(PET40%)) or an SUV cut-off value of 2.5 (GTV(PET2.5)). The correlation between the FDG uptake heterogeneity parameters and the differences in tumour volumes among GTV(CT), GTV(PET-CT), GTV(PET40%) and GTV(PET2.5) was analysed. For both NSCLC and SCEC, clear correlations were found between uptake heterogeneity, SUV and tumour volumes. The three types of heterogeneity parameters were consistent and closely related to each other. Substantial differences between the four methods of GTV definition were found. The differences between the GTVs correlated significantly with PET heterogeneity defined with the visual score, the COV or the textural feature (entropy) for NSCLC and SCEC. In tumours with high FDG uptake heterogeneity, a larger GTV delineation difference was found. Advanced image segmentation algorithms dealing with tracer uptake heterogeneity should be incorporated into the treatment planning system. © 2015 The Royal Australian and New Zealand College of Radiologists.
Geologic Controls on the Growth of Petroleum Reserves
Fishman, Neil S.; Turner, Christine E.; Peterson, Fred; Dyman, Thaddeus S.; Cook, Troy
2008-01-01
The geologic characteristics of selected siliciclastic (largely sandstone) and carbonate (limestone and dolomite) reservoirs in North America (largely the continental United States) were investigated to improve our understanding of the role of geology in the growth of petroleum reserves. Reservoirs studied were deposited in (1) eolian environments (Jurassic Norphlet Formation of the Gulf Coast and Pennsylvanian-Permian Minnelusa Formation of the Powder River Basin), (2) interconnected fluvial, deltaic, and shallow marine environments (Oligocene Frio Formation of the Gulf Coast and the Pennsylvanian Morrow Formation of the Anadarko and Denver Basins), (3) deeper marine environments (Mississippian Barnett Shale of the Fort Worth Basin and Devonian-Mississippian Bakken Formation of the Williston Basin), (4) marine carbonate environments (Ordovician Ellenburger Group of the Permian Basin and Jurassic Smackover Formation of the Gulf of Mexico Basin), (5) a submarine fan environment (Permian Spraberry Formation of the Midland Basin), and (6) a fluvial environment (Paleocene-Eocene Wasatch Formation of the Uinta-Piceance Basin). The connection between an oil reservoir's production history and geology was also evaluated by studying production histories of wells in disparate reservoir categories and wells in a single formation containing two reservoir categories. This effort was undertaken to determine, in general, if different reservoir production heterogeneities could be quantified on the basis of gross geologic differences. It appears that reserve growth in existing fields is most predictable for those in which reservoir heterogeneity is low and thus production differs little between wells, probably owing to relatively homogeneous fluid flow. In fields in which reservoirs are highly heterogeneous, prediction of future growth from infill drilling is notably more difficult. In any case, success at linking heterogeneity to reserve growth depends on factors in addition to geology, such as engineering and technological advances and political or cultural or economic influences.
Sugiura, D; Tateno, M
2013-08-01
We investigated the nitrogen and carbohydrate allocation patterns of trees under heterogeneous light environments using saplings of the devil maple tree (Acer diabolicum) with Y-shaped branches. Different branch groups were created: all branches of a sapling exposed to full light (L-branches), all branches exposed to full shade (S-branches), and half of the branches of a sapling exposed to light (HL-branches) and the other half exposed to shade (HS-branches). Throughout the growth period, nitrogen was preferentially allocated to HL-branches, whereas nitrogen allocation to HS-branches was suppressed compared to L- and S-branches. HL-branches with the highest leaf nitrogen content (N(area)) also had the highest rates of growth, and HS-branches with the lowest N(area) had the lowest observed growth rates. In addition, net nitrogen assimilation, estimated using a photosynthesis model, was strongly correlated with branch growth and whole-plant growth. In contrast, patterns of photosynthate allocation to branches and roots were not affected by the light conditions of the other branch. These observations suggest that tree canopies develop as a result of resource allocation patterns, where the growth of sun-lit branches is favoured over shaded branches, which leads to enhanced whole-plant growth in heterogeneous light environments. Our results indicate that whole-plant growth is enhanced by the resource allocation patterns created for saplings in heterogeneous light environments.
NASA Astrophysics Data System (ADS)
Libera, A.; Henri, C.; de Barros, F.
2017-12-01
Heterogeneities in natural porous formations, mainly manifested through the hydraulic conductivity (K) and, to a lesser degree, the porosity (Φ), largely control subsurface flow and solute transport. The influence of the heterogeneous structure of K on flow and solute transport processes has been widely studied, whereas less attention is dedicated to the joint heterogeneity of conductivity and porosity fields. Our study employs computational tools to investigate the joint effect of the spatial variabilities of K and Φ on the transport behavior of a solute plume. We explore multiple scenarios, characterized by different levels of heterogeneity of the geological system, and compare the computational results from the joint K and Φ heterogeneous system with the results originating from the generally adopted constant porosity case. In our work, we assume that the heterogeneous porosity is positively correlated to hydraulic conductivity. We perform numerical Monte Carlo simulations of conservative and reactive contaminant transport in a 3D aquifer. Contaminant mass and plume arrival times at multiple control planes and/or pumping wells operating under different extraction rates are analyzed. We employ different probabilistic metrics to quantify the risk at the monitoring locations, e.g., increased lifetime cancer risk and exceedance of Maximum Contaminant Levels (MCLs), under multiple transport scenarios (i.e., different levels of heterogeneity, conservative or reactive solutes and different contaminant species). Results show that early and late arrival times of the solute mass at the selected sensitive locations (i.e. control planes/pumping wells) as well as risk metrics are strongly influenced by the spatial variability of the Φ field.
Study of selected phenotype switching strategies in time varying environment
NASA Astrophysics Data System (ADS)
Horvath, Denis; Brutovsky, Branislav
2016-03-01
Population heterogeneity plays an important role across many research problems, as well as real-world ones. Population heterogeneity relates to the ability of a population to cope with an environment change (or uncertainty), preventing its extinction. However, this ability is not always desirable, as exemplified by intratumor heterogeneity, which positively correlates with the development of resistance to therapy. The causes of population heterogeneity are therefore an intensively studied topic in biology and medicine. In this paper the evolution of a specific strategy of population diversification, phenotype switching, is studied at a conceptual level. The presented simulation model studies the evolution of a large population of asexual organisms in a time-varying environment represented by a stochastic Markov process. Each organism is equipped with a stochastic or nonlinear deterministic switching strategy realized by discrete-time models with evolvable parameters. We demonstrate that under rapidly varying exogenous conditions organisms operate in the vicinity of the bet-hedging strategy, while deterministic patterns become relevant as the environmental variations become less frequent. Statistical characterization of the steady-state regimes of the populations is done using the Hellinger and Kullback-Leibler functional distances and the Hamming distance.
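A minimal flavour of this kind of model can be sketched as below: a two-state Markov environment and a population split between two phenotypes that switch with a fixed probability per generation. The fitness values and the switching rate are made-up illustrations, and the evolvable switching strategies of the paper are not reproduced.

```python
# Toy phenotype-switching simulation in a two-state Markov environment.
# Fitness values and the switching rate are assumptions for illustration;
# the paper evolves the switching strategy, which is not reproduced here.
import numpy as np

rng = np.random.default_rng(7)
generations = 500
p_env_flip = 0.05                          # environment change probability per step
p_switch = 0.1                             # per-generation phenotype switching rate
fitness = {("A", 0): 1.2, ("A", 1): 0.8,   # phenotype A favoured in environment 0
           ("B", 0): 0.8, ("B", 1): 1.2}   # phenotype B favoured in environment 1

env = 0
counts = {"A": 500.0, "B": 500.0}
for _ in range(generations):
    if rng.random() < p_env_flip:          # stochastic environment transition
        env = 1 - env
    counts = {ph: n * fitness[(ph, env)] for ph, n in counts.items()}   # selection
    a_to_b, b_to_a = counts["A"] * p_switch, counts["B"] * p_switch     # switching
    counts["A"] += b_to_a - a_to_b
    counts["B"] += a_to_b - b_to_a
    total = counts["A"] + counts["B"]                                   # renormalise
    counts = {ph: 1000.0 * n / total for ph, n in counts.items()}

print({ph: round(n, 1) for ph, n in counts.items()}, "current environment:", env)
```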
A heterogeneous hierarchical architecture for real-time computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skroch, D.A.; Fornaro, R.J.
The need for high-speed data acquisition and control algorithms has prompted continued research in the area of multiprocessor systems and related programming techniques. The result presented here is a unique hardware and software architecture for high-speed real-time computer systems. The implementation of a prototype of this architecture has required the integration of architecture, operating systems and programming languages into a cohesive unit. This report describes a Heterogeneous Hierarchical Architecture for Real-Time (H²ART) and system software for program loading and interprocessor communication.
NASA Astrophysics Data System (ADS)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-09-01
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10¹ to ~10² in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
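For readers unfamiliar with the method being accelerated, the sketch below shows a plain Levenberg-Marquardt loop on a toy exponential-fit problem. The cited work replaces the dense normal-equation solve in the middle with a projected Krylov-subspace solve whose subspace is recycled across damping parameters; that machinery is not reproduced here.

```python
# Plain Levenberg-Marquardt on a toy problem; the cited method swaps the dense
# solve of (J^T J + lam I) delta = -J^T r for a recycled Krylov-subspace solve.
import numpy as np

def residuals(p, x, y):
    return y - p[0] * np.exp(-p[1] * x)

def jacobian(p, x):
    return np.column_stack([-np.exp(-p[1] * x), p[0] * x * np.exp(-p[1] * x)])

def levenberg_marquardt(p0, x, y, lam=1e-2, max_iter=100, tol=1e-10):
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        r, J = residuals(p, x, y), jacobian(p, x)
        delta = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
        if np.sum(residuals(p + delta, x, y) ** 2) < np.sum(r ** 2):
            p, lam = p + delta, lam * 0.5      # accept step, relax damping
        else:
            lam *= 2.0                         # reject step, increase damping
        if np.linalg.norm(delta) < tol:
            break
    return p

x = np.linspace(0.0, 4.0, 50)
rng = np.random.default_rng(1)
y = 2.5 * np.exp(-1.3 * x) + 0.01 * rng.normal(size=x.size)
print(levenberg_marquardt([1.0, 1.0], x, y))   # approaches [2.5, 1.3]
```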
Real-time simulation of contact and cutting of heterogeneous soft-tissues.
Courtecuisse, Hadrien; Allard, Jérémie; Kerfriden, Pierre; Bordas, Stéphane P A; Cotin, Stéphane; Duriez, Christian
2014-02-01
This paper presents a numerical method for interactive (real-time) simulations which considerably improves the accuracy of the response of heterogeneous soft-tissue models undergoing contact, cutting and other topological changes. We provide an integrated methodology able to deal with the ill-conditioning issues associated with material heterogeneities, with contact boundary conditions, which are one of the main sources of inaccuracies, and with cutting, which is one of the most challenging issues in interactive simulations. Our approach is based on an implicit time integration of a non-linear finite element model. To enable real-time computations, we propose a new preconditioning technique based on an asynchronous update at low frequency. The preconditioner is not only used to improve the computation of the deformation of the tissues, but also to simulate the contact response of homogeneous and heterogeneous bodies with the same accuracy. We also address the problem of cutting heterogeneous structures and propose a method to update the preconditioner according to the topological modifications. Finally, we apply our approach to three challenging demonstrators: (i) a simulation of cataract surgery, (ii) a simulation of laparoscopic hepatectomy, and (iii) a brain tumor surgery. Copyright © 2013 Elsevier B.V. All rights reserved.
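The asynchronous-preconditioner idea can be illustrated in a few lines: factorize a slightly outdated system matrix at low frequency and reuse that factorization as the preconditioner for the current solve. The sketch below does this with a toy sparse SPD matrix and SciPy's conjugate gradient; it is not the paper's finite-element soft-tissue model.

```python
# Reusing a low-frequency (slightly outdated) factorisation as a preconditioner
# for the current solve, on a toy SPD system rather than an FE tissue model.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
rng = np.random.default_rng(0)

def toy_stiffness(perturbation=0.0):
    """Tridiagonal SPD matrix standing in for a time-varying stiffness matrix."""
    diag = 4.0 + perturbation * rng.random(n)
    return sp.csc_matrix(sp.diags([-np.ones(n - 1), diag, -np.ones(n - 1)], [-1, 0, 1]))

A_old = toy_stiffness()                  # state at the last preconditioner update
lu = spla.splu(A_old)                    # expensive factorisation, done "asynchronously"
M = spla.LinearOperator((n, n), matvec=lu.solve)

A_now = toy_stiffness(perturbation=0.3)  # current, slightly different system
b = rng.random(n)

x, info = spla.cg(A_now, b, M=M, maxiter=200)
print("converged:", info == 0, " residual:", np.linalg.norm(A_now @ x - b))
```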
A computational model was developed to simulate aquifer remediation by pump and treat for a confined, perfectly stratified aquifer. A split-operator finite element numerical technique was utilized to incorporate flow field heterogeneity and nonequilibrium sorption into a two-dime...
NASA Astrophysics Data System (ADS)
Leidi, Tiziano; Scocchi, Giulio; Grossi, Loris; Pusterla, Simone; D'Angelo, Claudio; Thiran, Jean-Philippe; Ortona, Alberto
2012-11-01
In recent decades, finite element (FE) techniques have been extensively used for predicting effective properties of random heterogeneous materials. In the case of very complex microstructures, the choice of numerical methods for the solution of this problem can offer some advantages over classical analytical approaches, and it allows the use of digital images obtained from real material samples (e.g., using computed tomography). On the other hand, having a large number of elements is often necessary for properly describing complex microstructures, ultimately leading to extremely time-consuming computations and high memory requirements. With the final objective of reducing these limitations, we improved an existing freely available FE code for the computation of effective conductivity (electrical and thermal) of microstructure digital models. To allow execution on hardware combining multi-core CPUs and a GPU, we first translated the original algorithm from Fortran to C, and we subdivided it into software components. Then, we enhanced the C version of the algorithm for parallel processing with heterogeneous processors. With the goal of maximizing the obtained performances and limiting resource consumption, we utilized a software architecture based on stream processing, event-driven scheduling, and dynamic load balancing. The parallel processing version of the algorithm has been validated using a simple microstructure consisting of a single sphere located at the centre of a cubic box, yielding consistent results. Finally, the code was used for the calculation of the effective thermal conductivity of a digital model of a real sample (a ceramic foam obtained using X-ray computed tomography). On a computer equipped with dual hexa-core Intel Xeon X5670 processors and an NVIDIA Tesla C2050, the parallel application version features near to linear speed-up progression when using only the CPU cores. It executes more than 20 times faster when additionally using the GPU.
Nürnberger, Fabian; Steffan-Dewenter, Ingolf; Härtel, Stephan
2017-01-01
The instructive component of waggle dance communication has been shown to increase resource uptake of Apis mellifera colonies in highly heterogeneous resource environments, but an assessment of its relevance in temperate landscapes with different levels of resource heterogeneity is currently lacking. We hypothesized that the advertisement of resource locations via dance communication would be most relevant in highly heterogeneous landscapes with large spatial variation of floral resources. To test our hypothesis, we placed 24 Apis mellifera colonies with either disrupted or unimpaired instructive component of dance communication in eight Central European agricultural landscapes that differed in heterogeneity and resource availability. We monitored colony weight change and pollen harvest as measure of foraging success. Dance disruption did not significantly alter colony weight change, but decreased pollen harvest compared to the communicating colonies by 40%. There was no general effect of resource availability on nectar or pollen foraging success, but the effect of landscape heterogeneity on nectar uptake was stronger when resource availability was high. In contrast to our hypothesis, the effects of disrupted bee communication on nectar and pollen foraging success were not stronger in landscapes with heterogeneous compared to homogenous resource environments. Our results indicate that in temperate regions intra-colonial communication of resource locations benefits pollen foraging more than nectar foraging, irrespective of landscape heterogeneity. We conclude that the so far largely unexplored role of dance communication in pollen foraging requires further consideration as pollen is a crucial resource for colony development and health.
Computers and Cooperative Learning. Tech Use Guide: Using Computer Technology.
ERIC Educational Resources Information Center
Council for Exceptional Children, Reston, VA. Center for Special Education Technology.
This guide focuses on the use of computers and cooperative learning techniques in classrooms that include students with disabilities. The guide outlines the characteristics of cooperative learning such as goal interdependence, individual accountability, and heterogeneous groups, emphasizing the value of each group member. Several cooperative…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baumann, K; Weber, U; Simeonov, Y
2015-06-15
Purpose: The aim of this study was to analyze the modulating, broadening effect on the Bragg peak due to heterogeneous geometries such as multi-wire chambers in the beam path of a particle therapy beam line. The effect was described by a mathematical model which was implemented in the Monte Carlo code FLUKA via user-routines, in order to reduce the computation time of the simulations. Methods: The depth-dose curve of 80 MeV/u 12C ions in a water phantom was calculated using the Monte Carlo code FLUKA (reference curve). The modulating effect on this dose distribution behind eleven mesh-like foils (periodicity ~80 microns) occurring in a typical set of multi-wire and dose chambers was mathematically described by optimizing a normal distribution so that the reference curve convolved with this distribution equals the modulated dose curve. This distribution describes a displacement in water and was converted into a probability distribution of the thickness of the eleven foils using the water-equivalent thickness of the foil's material. From this distribution the thickness distribution of a single foil was determined inversely. In FLUKA the heterogeneous foils were replaced by homogeneous foils and a user-routine was programmed that varies the thickness of the homogeneous foils for each simulated particle using this distribution. Results: Using the mathematical model and user-routine in FLUKA, the broadening effect could be reproduced exactly when replacing the heterogeneous foils by homogeneous ones. The computation time was reduced by 90 percent. Conclusion: In this study the broadening effect on the Bragg peak due to heterogeneous structures was analyzed, described by a mathematical model, and implemented in FLUKA via user-routines. Applying these routines, the computing time was reduced by 90 percent. The developed tool can be used for any heterogeneous structure with dimensions of microns to millimeters, in principle even for organic materials such as lung tissue.
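The convolution step at the heart of the model can be sketched directly: a reference Bragg curve is convolved with a normal distribution of water-equivalent displacements, which is equivalent to sampling a random effective foil thickness for each simulated particle, the role played by the FLUKA user-routine. The toy Gaussian peak and all numerical values below are illustrative assumptions, not results from the study.

```python
# Broadening a toy Bragg peak by convolution with a normal displacement
# distribution; values are illustrative, not taken from the cited study.
import numpy as np

depth = np.linspace(0.0, 60.0, 1200)                      # mm of water
reference = np.exp(-0.5 * ((depth - 50.0) / 0.8) ** 2)    # toy Bragg peak

sigma_mm = 0.6          # assumed std. dev. of water-equivalent displacement
dx = depth[1] - depth[0]
kx = np.arange(-5.0 * sigma_mm, 5.0 * sigma_mm + dx, dx)
kernel = np.exp(-0.5 * (kx / sigma_mm) ** 2)
kernel /= kernel.sum()                                    # displacement kernel

modulated = np.convolve(reference, kernel, mode="same")   # broadened peak

def sample_total_foil_thickness(rng, mean_um=25.0, sigma_um=8.0, n_foils=11):
    """Per-particle thickness draw (hypothetical parameters), the role played
    by the FLUKA user-routine described above."""
    return rng.normal(mean_um, sigma_um, size=n_foils).clip(min=0.0).sum()

rng = np.random.default_rng(0)
print("example foil-stack thickness seen by one particle:",
      round(sample_total_foil_thickness(rng), 1), "(arbitrary units)")
print("peak height reduced from", reference.max().round(3),
      "to", modulated.max().round(3))
```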
OpenID connect as a security service in Cloud-based diagnostic imaging systems
NASA Astrophysics Data System (ADS)
Ma, Weina; Sartipi, Kamran; Sharghi, Hassan; Koff, David; Bak, Peter
2015-03-01
The evolution of cloud computing is driving the next generation of diagnostic imaging (DI) systems. Cloud-based DI systems are able to deliver better services to patients without being constrained to their own physical facilities. However, privacy and security concerns have been consistently regarded as the major obstacle to adoption of cloud computing by healthcare domains. Furthermore, the traditional computing models and interfaces employed by DI systems are not ready for accessing diagnostic images through mobile devices. RESTful technology is ideal for provisioning both mobile services and cloud computing. OpenID Connect, combining OpenID and OAuth together, is an emerging REST-based federated identity solution. It is one of the most promising open standards to potentially become the de facto standard for securing cloud computing and mobile applications, and has been described as the "Kerberos of the Cloud". We introduce OpenID Connect as an identity and authentication service in cloud-based DI systems and propose enhancements that allow for incorporating this technology within a distributed enterprise environment. The objective of this study is to offer solutions for secure radiology image sharing among DI-r (Diagnostic Imaging Repository) and heterogeneous PACS (Picture Archiving and Communication Systems) as well as mobile clients in the cloud ecosystem. By using OpenID Connect as an open-source identity and authentication service, deploying DI-r and PACS to private or community clouds should provide a security level equivalent to the traditional computing model.
GSHR-Tree: a spatial index tree based on dynamic spatial slot and hash table in grid environments
NASA Astrophysics Data System (ADS)
Chen, Zhanlong; Wu, Xin-cai; Wu, Liang
2008-12-01
Computation Grids enable the coordinated sharing of large-scale, distributed, heterogeneous computing resources that can be used to solve computationally intensive problems in science, engineering, and commerce. Grid spatial applications are made possible by high-speed networks and a new generation of Grid middleware that resides between networks and traditional GIS applications. The integration of multi-source, heterogeneous spatial information, the management of distributed spatial resources, and the sharing and cooperative use of spatial data and Grid services are the key problems to resolve in developing a Grid GIS. The spatial index mechanism is a key technology of Grid GIS and spatial databases, and its performance affects the overall performance of GIS in Grid environments. In order to improve the efficiency of parallel processing of massive spatial data in a distributed parallel computing grid environment, this paper presents GSHR-Tree, a new grid slot hash parallel spatial index structure. Based on a hash table and dynamic spatial slots, it improves the structure of the classical parallel R-tree index, makes full use of the good qualities of the R-tree and hash data structures, and yields a parallel spatial index that meets the needs of parallel grid computing over massive spatial data in distributed networks. The algorithm splits space into multiple slots (by multiplying and reverting) and maps these slots to sites in the distributed, parallel system. Each site builds the spatial objects in its slot into an R-tree. On the basis of this tree structure, the index data is distributed among multiple nodes in the grid network using the large-node R-tree method. Load imbalance during processing can be quickly corrected by a dynamic adjustment algorithm. The tree structure accounts for the distribution, replication, and transfer operations of the spatial index in the grid environment. The design of GSHR-Tree ensures load balance in parallel computation, and the structure is well suited to parallel processing of spatial information in distributed network environments. Instead of the recursive comparison of spatial objects used in the original R-tree, the algorithm builds the spatial index using binary code operations, which computers execute more efficiently, together with an extended dynamic hash code for bit comparison. In GSHR-Tree, a new server is assigned to the network whenever a split of a full node is required. We describe a more flexible allocation protocol which copes with a temporary shortage of storage resources. It uses a distributed, balanced binary spatial tree that scales with insertions to potentially any number of storage servers through splits of the overloaded ones. The application manipulates the GSHR-Tree structure from a node in the grid environment. The node addresses the tree through its local image, which splits can make outdated. This may generate addressing errors, which are resolved by forwarding among the servers. A spatial index data distribution algorithm that limits the number of servers is also proposed; it improves storage utilization at the cost of additional messages. The scheme of this grid spatial index should fit the needs of new applications using ever larger sets of spatial data.
Our proposal constitutes a flexible storage allocation method for a distributed spatial index. The insertion policy can be tuned dynamically to cope with periods of storage shortage; in such cases storage balancing should be favored for better space utilization, at the price of extra message exchanges between servers. The structure makes a compromise between updating the duplicated index and transforming the spatial index data. Meeting the needs of grid computing, GSHR-Tree has a flexible structure that can satisfy new needs in the future. GSHR-Tree provides R-tree capabilities for large spatial datasets stored over interconnected servers. The analysis, including the experiments, confirms the efficiency of our design choices, and the scheme should fit the needs of new applications of spatial data using ever larger datasets. Using the system response time of the parallel spatial range query algorithm as the performance evaluation metric, the simulation experiments confirm the sound design and high performance of the proposed indexing structure.
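The slot-and-hash idea can be pictured with a few lines of code: a point is quantised, its coordinate bits are interleaved into a binary (Z-order style) code, a code prefix defines the spatial slot, and the slot is hashed onto one of the participating servers. The bit depth, prefix length, and server count below are illustrative choices, not parameters from the paper.

```python
# Sketch of mapping spatial objects to slots via bit-interleaved binary codes
# and hashing slots onto grid servers; all parameters are illustrative.
import hashlib

def interleave_bits(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of x and y into a single Z-order-style code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

def slot_for_point(x: float, y: float, extent: float,
                   bits: int = 16, prefix_bits: int = 8) -> int:
    """Quantise a point, compute its code, keep a prefix as the slot id."""
    qmax = (1 << bits) - 1
    qx = min(int(x / extent * qmax), qmax)
    qy = min(int(y / extent * qmax), qmax)
    return interleave_bits(qx, qy, bits) >> (2 * bits - prefix_bits)

def server_for_slot(slot: int, n_servers: int) -> int:
    """Hash a slot id onto one of the participating grid servers."""
    digest = hashlib.sha1(str(slot).encode()).hexdigest()
    return int(digest, 16) % n_servers

point = (1234.5, 9876.1)
slot = slot_for_point(*point, extent=20000.0)
print("point", point, "-> slot", slot, "-> server", server_for_slot(slot, 8))
```

Each server would then build a local R-tree over the objects falling into its slots, as described above.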
Heterogeneous compute in computer vision: OpenCL in OpenCV
NASA Astrophysics Data System (ADS)
Gasparakis, Harris
2014-02-01
We explore the relevance of Heterogeneous System Architecture (HSA) in Computer Vision, both as a long-term vision and as a near-term emerging reality via the recently ratified OpenCL 2.0 Khronos standard. After a brief review of OpenCL 1.2 and 2.0, including HSA features such as Shared Virtual Memory (SVM) and platform atomics, we identify what genres of Computer Vision workloads stand to benefit by leveraging those features, and we suggest a new mental framework that replaces GPU compute with hybrid HSA APU compute. As a case in point, we discuss, in some detail, popular object recognition algorithms (part-based models), emphasizing the interplay and concurrent collaboration between the GPU and CPU. We conclude by describing how OpenCL has been incorporated in OpenCV, a popular open-source computer vision library, emphasizing recent work on the Transparent API, to appear in OpenCV 3.0, which unifies the native CPU and OpenCL execution paths under a single API, allowing the same code to execute either on the CPU or on an OpenCL-enabled device, without even recompiling.
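In practice the Transparent API mentioned above boils down to wrapping data in cv2.UMat so that the same OpenCV calls run on an OpenCL device when one is present and on the CPU otherwise. The image path below is a placeholder.

```python
# Minimal use of OpenCV's Transparent API (T-API): the same calls run on an
# OpenCL device when available and fall back to the CPU otherwise.
import cv2

print("OpenCL available:", cv2.ocl.haveOpenCL())
cv2.ocl.setUseOpenCL(True)              # request the OpenCL path when possible

img = cv2.imread("frame.png")           # placeholder input image
u_img = cv2.UMat(img)                   # device-backed matrix

u_gray = cv2.cvtColor(u_img, cv2.COLOR_BGR2GRAY)
u_blur = cv2.GaussianBlur(u_gray, (5, 5), 1.5)
u_edges = cv2.Canny(u_blur, 50, 150)

edges = u_edges.get()                   # copy the result back to a numpy array
print("edge image shape:", edges.shape)
```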
NASA Astrophysics Data System (ADS)
Hyde, B. C.; Tait, K. T.; Nicklin, I.; Day, J. M. D.; Ash, R. D.; Moser, D. E.
2013-09-01
Sectioning of meteorites is usually done in an arbitrary manner. We used micro-computed tomography to view the interior of brachinite NWA 4872. A cut was then made through an area of interest. Heterogeneity and modal abundance are discussed.
NASA Astrophysics Data System (ADS)
Zuend, A.; Marcolli, C.; Peter, T.
2009-04-01
The chemical composition of organic-inorganic aerosols is linked to several processes and specific topics in the field of atmospheric aerosol science. Photochemical oxidation of organics in the gas phase lowers the volatility of semi-volatile compounds and contributes to the particulate matter by gas/particle partitioning. Heterogeneous chemistry and changes in the ambient relative humidity influence the aerosol composition as well. Molecular interactions between condensed phase species show typically non-ideal thermodynamic behavior. Liquid-liquid phase separations into a mainly polar, aqueous and a less polar, organic phase may considerably influence the gas/particle partitioning of semi-volatile organics and inorganics (Erdakos and Pankow, 2004; Chang and Pankow, 2006). Moreover, the phases present in the aerosol particles feed back on the heterogeneous, multi-phase chemistry, influence the scattering and absorption of radiation and affect the CCN ability of the particles. Non-ideal thermodynamic behavior in mixtures is usually described by an expression for the excess Gibbs energy, enabling the calculation of activity coefficients. We use the group-contribution model AIOMFAC (Zuend et al., 2008) to calculate activity coefficients, chemical potentials and the total Gibbs energy of mixed organic-inorganic systems. This thermodynamic model was combined with a robust global optimization module to compute potential liquid-liquid (LLE) and vapor-liquid-liquid equilibria (VLLE) as a function of particle composition at room temperature. And related to that, the gas/particle partitioning of semi-volatile components. Furthermore, we compute the thermodynamic stability (spinodal limits) of single-phase solutions, which provides information on the process type and kinetics of a phase separation. References Chang, E. I. and Pankow, J. F.: Prediction of activity coefficients in liquid aerosol particles containing organic compounds, dissolved inorganic salts, and water - Part 2: Consideration of phase separation effects by an XUNIFAC model, Atmos. Environ., 40, 6422-6436, 2006. Erdakos, G. B. and Pankow, J. F.: Gas/particle partitioning of neutral and ionizing compounds to single- and multi-phase aerosol particles. 2. Phase separation in liquid particulate matter containing both polar and low-polarity organic compounds, Atmos. Environ., 38, 1005-1013, 2004. Zuend, A., Marcolli, C., Luo, B. P., and Peter, T.: A thermodynamic model of mixed organic-inorganic aerosols to predict activity coefficients, Atmos. Chem. Phys., 8, 4559-4593, 2008.
NASA Astrophysics Data System (ADS)
Lewis, M. A.; McKenzie, H.; Merrill, E.
2010-12-01
In this talk I will outline first passage time analysis for animals undertaking complex movement patterns, and will demonstrate how first passage time can be used to derive functional responses in predator prey systems. The result is a new approach to understanding type III functional responses based on a random walk model. I will extend the analysis to heterogeneous environments to assess the effects of linear features on functional responses in wolves and elk using GPS tracking data.
Network Coding on Heterogeneous Multi-Core Processors for Wireless Sensor Networks
Kim, Deokho; Park, Karam; Ro, Won W.
2011-01-01
While network coding is well known for its efficiency and usefulness in wireless sensor networks, the excessive costs associated with decoding computation and complexity still hinder its adoption into practical use. On the other hand, high-performance microprocessors with heterogeneous multi-cores are expected to be used as processing nodes of wireless sensor networks in the near future. To this end, this paper introduces an efficient network coding algorithm developed for heterogeneous multi-core processors. The proposed idea is fully tested on one of the currently available heterogeneous multi-core processors, referred to as the Cell Broadband Engine. PMID:22164053
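To make the decoding cost the abstract refers to concrete, the sketch below implements random linear network coding over GF(2): source packets are XOR-combined under random binary coefficients and recovered by Gaussian elimination. The paper's GF(2^8) arithmetic and its Cell-specific parallelisation are not reproduced.

```python
# Random linear network coding over GF(2): encode by XOR under random binary
# coefficients, decode by Gaussian elimination. Packet sizes are toy values.
import numpy as np

rng = np.random.default_rng(3)
K, PKT_LEN = 4, 8                                   # packets, bytes per packet
packets = rng.integers(0, 256, size=(K, PKT_LEN), dtype=np.uint8)

def encode(packets, n_coded, rng):
    """Produce n_coded random GF(2) combinations of the source packets."""
    coeffs = rng.integers(0, 2, size=(n_coded, len(packets)), dtype=np.uint8)
    coded = np.zeros((n_coded, packets.shape[1]), dtype=np.uint8)
    for i, row in enumerate(coeffs):
        for j, bit in enumerate(row):
            if bit:
                coded[i] ^= packets[j]              # XOR = GF(2) addition
    return coeffs, coded

def decode(coeffs, coded):
    """Gaussian elimination over GF(2); returns the recovered source packets."""
    A, B = coeffs.copy(), coded.copy()
    n = A.shape[1]
    row = 0
    for col in range(n):
        pivot = next((r for r in range(row, A.shape[0]) if A[r, col]), None)
        if pivot is None:
            raise ValueError("not full rank; need more coded packets")
        A[[row, pivot]] = A[[pivot, row]]
        B[[row, pivot]] = B[[pivot, row]]
        for r in range(A.shape[0]):
            if r != row and A[r, col]:
                A[r] ^= A[row]
                B[r] ^= B[row]
        row += 1
    return B[:n]

coeffs, coded = encode(packets, n_coded=6, rng=rng)
while True:
    try:
        recovered = decode(coeffs, coded)
        break
    except ValueError:                  # unlucky draw: combinations not full rank
        coeffs, coded = encode(packets, n_coded=6, rng=rng)

assert np.array_equal(recovered, packets)
print("recovered all", K, "packets")
```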
NASA Astrophysics Data System (ADS)
Kobayashi, H.; Ryu, Y.; Ustin, S.; Baldocchi, D. D.
2009-12-01
Spatial radiation environment in a heterogeneous oak woodland using a three-dimensional radiative transfer model and multiple constraints from observations (session B15: Remote Characterization of Vegetation Structure). Accurate evaluations of radiation environments in the visible, near-infrared, and thermal-infrared wavebands in forest canopies are important to estimate energy, water, and carbon fluxes. Californian oak woodlands are sparse and highly clumped, so radiation environments are extremely heterogeneous spatially. The heterogeneity of radiation environments also varies between wavebands, which differ in scattering and emission properties. So far, most modeling studies have used one-dimensional radiative transfer models with (or without) a clumping effect in the forest canopies. While some studies have used three-dimensional radiative transfer models, several issues remain unresolved. For example, some 3D models calculate the radiation field on an individual-tree basis, so radiation interactions among trees are not considered. This interaction could be important in highly scattering wavebands such as the near infrared. The objective of this study is to quantify the radiation field in the oak woodland. We developed a three-dimensional radiative transfer model which includes the thermal waveband. Soil/canopy energy balance and canopy physiology models, CANOAK, are incorporated in the radiative transfer model to simulate the diurnal patterns of thermal radiation fields and canopy physiology. Airborne LiDAR and canopy gap data measured by several methods (digital photographs and a plant canopy analyzer) were used to constrain forest structures such as tree positions, crown sizes, and leaf area density. Modeling results were tested against a traversing radiometer system that measured incoming photosynthetically active radiation and net radiation at the forest floor, and against spatial variations in canopy reflectance taken by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). In this study, we show how the model with available measurements can reproduce the spatially heterogeneous radiation environments in the oak woodland.
Jung, Jin Woo; Lee, Jung-Seob; Cho, Dong-Woo
2016-01-01
Recently, much attention has focused on replacement and/or enhancement of biological tissues via the use of cell-laden hydrogel scaffolds with an architecture that mimics the tissue matrix and with the desired three-dimensional (3D) external geometry. However, mimicking the heterogeneous tissues that most organs and tissues are formed of is challenging. Although multiple-head 3D printing systems have been proposed for fabricating heterogeneous cell-laden hydrogel scaffolds, to date only the simple exterior form has been realized. Here we describe a computer-aided design and manufacturing (CAD/CAM) system for this application. We aim to develop an algorithm to enable easy, intuitive design and fabrication of heterogeneous cell-laden hydrogel scaffolds with a free-form 3D geometry. The printing paths of the scaffold are automatically generated from the 3D CAD model, and the scaffold is then printed by dispensing four materials, i.e., a frame, two kinds of cell-laden hydrogel, and a support. We demonstrated printing of heterogeneous tissue models formed of hydrogel scaffolds using this approach, including the outer ear, kidney and tooth tissue. These results indicate that this approach is particularly promising for tissue engineering and 3D printing applications to regenerate heterogeneous organs and tissues with tailored geometries to treat specific defects or injuries. PMID:26899876
Intrinsic islet heterogeneity and gap junction coupling determine spatiotemporal Ca²⁺ wave dynamics.
Benninger, Richard K P; Hutchens, Troy; Head, W Steven; McCaughey, Michael J; Zhang, Min; Le Marchand, Sylvain J; Satin, Leslie S; Piston, David W
2014-12-02
Insulin is released from the islets of Langerhans in discrete pulses that are linked to synchronized oscillations of intracellular free calcium ([Ca(2+)]i). Associated with each synchronized oscillation is a propagating calcium wave mediated by Connexin36 (Cx36) gap junctions. A computational islet model predicted that waves emerge due to heterogeneity in β-cell function throughout the islet. To test this, we applied defined patterns of glucose stimulation across the islet using a microfluidic device and measured how these perturbations affect calcium wave propagation. We further investigated how gap junction coupling regulates spatiotemporal [Ca(2+)]i dynamics in the face of heterogeneous glucose stimulation. Calcium waves were found to originate in regions of the islet having elevated excitability, and this heterogeneity is an intrinsic property of islet β-cells. The extent of [Ca(2+)]i elevation across the islet in the presence of heterogeneity is gap-junction dependent, which reveals a glucose dependence of gap junction coupling. To better describe these observations, we had to modify the computational islet model to consider the electrochemical gradient between neighboring β-cells. These results reveal how the spatiotemporal [Ca(2+)]i dynamics of the islet depend on β-cell heterogeneity and cell-cell coupling, and are important for understanding the regulation of coordinated insulin release across the islet. Copyright © 2014 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Heterogeneous continuous-time random walks
NASA Astrophysics Data System (ADS)
Grebenkov, Denis S.; Tupikina, Liubov
2018-01-01
We introduce a heterogeneous continuous-time random walk (HCTRW) model as a versatile analytical formalism for studying and modeling diffusion processes in heterogeneous structures, such as porous or disordered media, multiscale or crowded environments, weighted graphs or networks. We derive the exact form of the propagator and investigate the effects of spatiotemporal heterogeneities onto the diffusive dynamics via the spectral properties of the generalized transition matrix. In particular, we show how the distribution of first-passage times changes due to local and global heterogeneities of the medium. The HCTRW formalism offers a unified mathematical language to address various diffusion-reaction problems, with numerous applications in material sciences, physics, chemistry, biology, and social sciences.
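One way to see how heterogeneity enters through a (generalized) transition matrix is the standard absorbing-chain computation of mean first-passage times on a weighted graph, sketched below. The full HCTRW propagator also carries node-dependent waiting-time distributions, which this sketch does not reproduce; the graph weights are arbitrary.

```python
# Mean first-passage times to a target node on a small weighted graph via the
# absorbing-chain formulation; weights (edge mobilities) are arbitrary examples.
import numpy as np

W = np.array([
    [0.0, 2.0, 0.5, 0.0],
    [2.0, 0.0, 1.0, 0.2],
    [0.5, 1.0, 0.0, 3.0],
    [0.0, 0.2, 3.0, 0.0],
])
P = W / W.sum(axis=1, keepdims=True)     # row-stochastic transition matrix

target = 3
others = [i for i in range(W.shape[0]) if i != target]
Q = P[np.ix_(others, others)]            # transitions among non-target nodes

# mean number of steps to absorption: t = (I - Q)^{-1} 1
t = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
for node, steps in zip(others, t):
    print(f"mean first-passage steps from node {node} to node {target}: {steps:.2f}")
```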
NASA Astrophysics Data System (ADS)
Samsudin, Sarah Hanim; Shafri, Helmi Z. M.; Hamedianfar, Alireza
2016-04-01
Status observations of roofing material degradation are constantly evolving due to urban feature heterogeneities. Although advanced classification techniques have been introduced to improve within-class impervious surface classifications, these techniques involve complex processing and high computation times. This study integrates field spectroscopy and satellite multispectral remote sensing data to generate degradation status maps of concrete and metal roofing materials. Field spectroscopy data were used as bases for selecting suitable bands for spectral index development because of the limited number of multispectral bands. Mapping methods for roof degradation status were established for metal and concrete roofing materials by developing the normalized difference concrete condition index (NDCCI) and the normalized difference metal condition index (NDMCI). Results indicate that the accuracies achieved using the spectral indices are higher than those obtained using supervised pixel-based classification. The NDCCI generated an accuracy of 84.44%, whereas the support vector machine (SVM) approach yielded an accuracy of 73.06%. The NDMCI obtained an accuracy of 94.17% compared with 62.5% for the SVM approach. These findings support the suitability of the developed spectral index methods for determining roof degradation statuses from satellite observations in heterogeneous urban environments.
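Both indices follow the usual normalized-difference form (A - B)/(A + B); since the abstract does not list the bands entering NDCCI and NDMCI, the bands and the degradation threshold in the sketch below are placeholders.

```python
# Generic normalized-difference index; the bands and threshold are placeholders,
# not the NDCCI/NDMCI band combinations of the cited study.
import numpy as np

def normalized_difference(band_a: np.ndarray, band_b: np.ndarray) -> np.ndarray:
    """(A - B) / (A + B), with zero-sum pixels set to 0 to avoid division by zero."""
    a, b = band_a.astype(float), band_b.astype(float)
    denom = a + b
    safe = np.where(denom == 0.0, 1.0, denom)
    return np.where(denom == 0.0, 0.0, (a - b) / safe)

# toy 3x3 "image" with two spectral bands (placeholder reflectance values)
band_a = np.array([[0.31, 0.42, 0.28], [0.35, 0.44, 0.30], [0.33, 0.41, 0.29]])
band_b = np.array([[0.22, 0.18, 0.25], [0.21, 0.17, 0.24], [0.23, 0.19, 0.26]])

index = normalized_difference(band_a, band_b)
degraded = index > 0.3          # illustrative threshold, not from the study
print(index.round(2))
print("pixels flagged:", int(degraded.sum()))
```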
Track classification within wireless sensor network
NASA Astrophysics Data System (ADS)
Doumerc, Robin; Pannetier, Benjamin; Moras, Julien; Dezert, Jean; Canevet, Loic
2017-05-01
In this paper, we present our study on track classification taking into account environmental information and target estimated states. The tracker uses several motion models adapted to different target dynamics (pedestrian, ground vehicle and SUAV, i.e. small unmanned aerial vehicle) and works in a centralized architecture. The main idea is to explore both the classification given by heterogeneous sensors and the classification obtained with our fusion module. The fusion module, presented in this paper, provides a class for each track according to track location, velocity and associated uncertainty. To model the likelihood of each class, a fuzzy approach is used, considering constraints on the target's capability to move in the environment. Then the evidential reasoning approach based on Dempster-Shafer Theory (DST) is used to perform a time integration of this classifier output. The fusion rules are tested and compared on real data obtained with our wireless sensor network. In order to handle realistic ground target tracking scenarios, we use an autonomous smart computer deposited in the surveillance area. After the calibration step of the heterogeneous sensor network, our system is able to handle real data from a wireless ground sensor network. The performance of this system is evaluated in a real exercise for an intelligence operation (a "hunter hunt" scenario).
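The evidential-fusion step can be illustrated with Dempster's rule of combination over the three target classes named above (pedestrian, ground vehicle, SUAV). The mass assignments below are invented for illustration, and the paper's fuzzy likelihood model is not shown.

```python
# Dempster's rule of combination over {pedestrian, vehicle, suav}; the mass
# values are invented for illustration only.
from itertools import product

CLASSES = frozenset({"pedestrian", "vehicle", "suav"})

def combine(m1: dict, m2: dict) -> dict:
    """Conjunctive combination of two mass functions with conflict renormalisation."""
    fused, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict; sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

# mass from the track-kinematics classifier and from a heterogeneous sensor
m_track  = {frozenset({"vehicle"}): 0.6, frozenset({"vehicle", "suav"}): 0.3, CLASSES: 0.1}
m_sensor = {frozenset({"vehicle"}): 0.5, frozenset({"pedestrian"}): 0.2, CLASSES: 0.3}

fused = combine(m_track, m_sensor)
for focal, mass in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(sorted(focal), round(mass, 3))
```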
Modeling of heterogeneous elastic materials by the multiscale hp-adaptive finite element method
NASA Astrophysics Data System (ADS)
Klimczak, Marek; Cecot, Witold
2018-01-01
We present an enhancement of the multiscale finite element method (MsFEM) by combining it with the hp-adaptive FEM. Such a discretization-based homogenization technique is a versatile tool for modeling heterogeneous materials with fast oscillating elasticity coefficients. No assumption on periodicity of the domain is required. In order to avoid direct, so-called overkill mesh computations, a coarse mesh with effective stiffness matrices is used and special shape functions are constructed to account for the local heterogeneities at the micro resolution. The automatic adaptivity (hp-type at the macro resolution and h-type at the micro resolution) increases efficiency of computation. In this paper details of the modified MsFEM are presented and a numerical test performed on a Fichera corner domain is presented in order to validate the proposed approach.
Cooperation and heterogeneity of the autistic mind.
Yoshida, Wako; Dziobek, Isabel; Kliemann, Dorit; Heekeren, Hauke R; Friston, Karl J; Dolan, Ray J
2010-06-30
Individuals with autism spectrum conditions (ASCs) have a core difficulty in recursively inferring the intentions of others. The precise cognitive dysfunctions that determine the heterogeneity at the heart of this spectrum, however, remains unclear. Furthermore, it remains possible that impairment in social interaction is not a fundamental deficit but a reflection of deficits in distinct cognitive processes. To better understand heterogeneity within ASCs, we employed a game-theoretic approach to characterize unobservable computational processes implicit in social interactions. Using a social hunting game with autistic adults, we found that a selective difficulty representing the level of strategic sophistication of others, namely inferring others' mindreading strategy, specifically predicts symptom severity. In contrast, a reduced ability in iterative planning was predicted by overall intellectual level. Our findings provide the first quantitative approach that can reveal the underlying computational dysfunctions that generate the autistic "spectrum."
Simplified Distributed Computing
NASA Astrophysics Data System (ADS)
Li, G. G.
2006-05-01
Distributed computing ranges from high-performance parallel computing and GRID computing to environments where idle CPU cycles and storage space of numerous networked systems are harnessed to work together through the Internet. In this work we focus on building an easy and affordable solution for computationally intensive problems in scientific applications based on existing technology and hardware resources. The system consists of a series of controllers. When a job request is detected by a monitor or initialized by an end user, the job manager launches the specific job handler for this job. The job handler pre-processes the job, partitions the job into relatively independent tasks, and distributes the tasks into the processing queue. The task handler picks up the related tasks, processes the tasks, and puts the results back into the processing queue. The job handler also monitors and examines the tasks and the results, and assembles the task results into the overall solution for the job request when all tasks are finished for each job. A resource manager configures and monitors all participating nodes. A distributed agent is deployed on all participating nodes to manage the software download and report the status. The processing queue is the key to the success of this distributed system. We use BEA's WebLogic JMS queue in our implementation. It guarantees message delivery and has message priority and retry features so that tasks never get lost. The entire system is built on J2EE technology and can be deployed on heterogeneous platforms. It can handle algorithms and applications developed in any language on any platform. J2EE adaptors are provided to manage and connect the existing applications to the system so that applications and algorithms running on Unix, Linux and Windows can all work together. The system is easy and fast to develop based on the industry's well-adopted technology. It is highly scalable and heterogeneous. It is an open system, and any number and type of machines can join the system to provide computational power. This asynchronous message-based system can achieve response times on the order of seconds. For efficiency, communications between distributed tasks are often done at the start and end of the tasks, but intermediate status of the tasks can also be provided.
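An analogue of the job-handler/task-handler pattern described above can be sketched with Python's multiprocessing queues standing in for the JMS processing queue; the "job" here is just a sum split into independent chunks, not a real scientific workload.

```python
# Job-handler / task-handler pattern with multiprocessing queues standing in
# for the JMS processing queue of the actual J2EE system described above.
import multiprocessing as mp

def task_handler(task_queue: mp.Queue, result_queue: mp.Queue) -> None:
    """Pick up tasks, process them, and push results back (worker role)."""
    while True:
        task = task_queue.get()
        if task is None:                     # sentinel: no more tasks
            break
        task_id, chunk = task
        result_queue.put((task_id, sum(chunk)))

def job_handler(data, n_workers=4, chunk_size=1000):
    """Partition the job, distribute tasks, and assemble the overall result."""
    task_queue, result_queue = mp.Queue(), mp.Queue()
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    for i, chunk in enumerate(chunks):
        task_queue.put((i, chunk))
    workers = [mp.Process(target=task_handler, args=(task_queue, result_queue))
               for _ in range(n_workers)]
    for w in workers:
        w.start()
    for _ in workers:
        task_queue.put(None)                 # one sentinel per worker
    results = [result_queue.get() for _ in chunks]
    for w in workers:
        w.join()
    return sum(r for _, r in results)

if __name__ == "__main__":
    print(job_handler(list(range(100_000))))   # 4999950000
```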
Stanley, Ryan; Snelgrove, Paul V. R.; deYoung, Brad; Gregory, Robert S.
2012-01-01
During the pelagic larval phase, fish dispersal may be influenced passively by surface currents or actively determined by swimming behaviour. In situ observations of larval swimming are few given the constraints of field sampling. Active behaviour is therefore often inferred from spatial patterns in the field, laboratory studies, or hydrodynamic theory, but rarely are these approaches considered in concert. Ichthyoplankton survey data collected during 2004 and 2006 from coastal Newfoundland show that changes in spatial heterogeneity for multiple species do not conform to predictions based on passive transport. We evaluated the interaction of individual larvae with their environment by calculating Reynolds number as a function of ontogeny. Typically, larvae hatch into a viscous environment in which swimming is inefficient, and later grow into more efficient intermediate and inertial swimming environments. Swimming is therefore closely related to length, not only because of swimming capacity but also in how larvae experience viscosity. Six of eight species sampled demonstrated consistent changes in spatial patchiness and concomitant increases in spatial heterogeneity as they transitioned into more favourable hydrodynamic swimming environments, suggesting an active behavioural element to dispersal. We propose the tandem assessment of spatial heterogeneity and hydrodynamic environment as a potential approach to understand and predict the onset of ecologically significant swimming behaviour of larval fishes in the field. PMID:23029455
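For reference, the Reynolds number used to characterize a larva's hydrodynamic regime is the standard fluid-mechanics definition (not a formula specific to this study):

\[
\mathrm{Re} = \frac{\rho U L}{\mu} = \frac{U L}{\nu},
\]

where \(U\) is swimming speed, \(L\) is body length, \(\rho\) is water density, \(\mu\) is dynamic viscosity and \(\nu\) is kinematic viscosity; small \(\mathrm{Re}\) corresponds to the viscous regime and large \(\mathrm{Re}\) to the inertial regime.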
Finite-fault source inversion using adjoint methods in 3D heterogeneous media
NASA Astrophysics Data System (ADS)
Somala, Surendra Nadh; Ampuero, Jean-Paul; Lapusta, Nadia
2018-04-01
Accounting for lateral heterogeneities in the 3D velocity structure of the crust is known to improve earthquake source inversion, compared to results based on 1D velocity models which are routinely assumed to derive finite-fault slip models. The conventional approach to include known 3D heterogeneity in source inversion involves pre-computing 3D Green's functions, which requires a number of 3D wave propagation simulations proportional to the number of stations or to the number of fault cells. The computational cost of such an approach is prohibitive for the dense datasets that could be provided by future earthquake observation systems. Here, we propose an adjoint-based optimization technique to invert for the spatio-temporal evolution of slip velocity. The approach does not require pre-computed Green's functions. The adjoint method provides the gradient of the cost function, which is used to improve the model iteratively using a gradient-based minimization method. The adjoint approach is shown to be computationally more efficient than the conventional approach based on pre-computed Green's functions in a broad range of situations. We consider data up to 1 Hz from a Haskell source scenario (a steady pulse-like rupture) on a vertical strike-slip fault embedded in an elastic 3D heterogeneous velocity model. The velocity model comprises a uniform background and a 3D stochastic perturbation with the von Karman correlation function. Source inversions based on the 3D velocity model are performed for two different station configurations, a dense and a sparse network with 1 km and 20 km station spacing, respectively. These reference inversions show that our inversion scheme adequately retrieves the rise time when the velocity model is exactly known, and illustrate how dense coverage improves the inference of peak slip velocities. We investigate the effects of uncertainties in the velocity model by performing source inversions based on an incorrect, homogeneous velocity model. We find that, for velocity uncertainties that have standard deviation and correlation length typical of available 3D crustal models, the inverted sources can be severely contaminated by spurious features even if the station density is high. When data from a thousand or more receivers are used in source inversions in 3D heterogeneous media, the computational cost of the method proposed in this work is at least two orders of magnitude lower than source inversion based on pre-computed Green's functions.
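Schematically, each iteration of such an adjoint-based inversion applies a gradient step of the general form (a generic statement of gradient-based minimization, not the specific update rule used in the paper):

\[
m_{k+1} = m_k - \alpha_k \, \nabla_m J(m_k),
\]

where \(m_k\) is the current slip-velocity model, \(J\) is the misfit between observed and synthetic seismograms, \(\nabla_m J\) is the gradient supplied by one adjoint simulation, and \(\alpha_k\) is a step length chosen by the minimization algorithm.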
Interactive effects of genotype x environment on the live weight of GIFT Nile tilapias.
Oliveira, Sheila N DE; Ribeiro, Ricardo P; Oliveira, Carlos A L DE; Alexandre, Luiz; Oliveira, Aline M S; Lopera-Barrero, Nelson M; Santander, Victor F A; Santana, Renan A C
2017-01-01
In this paper, the existence of a genotype x environment interaction for the average daily weight in GIFT Nile tilapia (Oreochromis niloticus) in different regions of the state of Paraná (Brazil) was analyzed. The heritability estimates were high in the uni-characteristic analyses: 0.71, 0.72 and 0.67 for the cities of Palotina (PL), Floriano (FL) and Diamond North (DN), respectively. Genetic correlations estimated in bivariate analyses were weak, with values of 0.12 for PL-FL, 0.06 for PL-DN and 0.23 for FL-DN. The Spearman correlation values were low, which indicated a change in ranking in the selection of animals across the different environments in the study. There was heterogeneity in the phenotypic variance among the three regions and heterogeneity in the residual variance between PL and DN. The direct genetic gain was greatest for DN (198.24 g/generation), followed by FL (98.73 g/generation) and PL (98.73 g/generation). The indirect genetic gains ranged from 0.02 to 0.37 g/generation. Evidence of a genotype x environment interaction was found, indicated by the phenotypic heterogeneity of the variances among the three regions, weak genetic correlations and modified rankings in the different environments.
NASA Astrophysics Data System (ADS)
Li, Bing-Wei; Cao, Xiao-Zhi; Fu, Chenbo
2017-12-01
Many biological and chemical systems can be modeled by a population of oscillators coupled indirectly via a dynamical environment. Essentially, the environment through which the individual elements communicate with each other is heterogeneous. Nevertheless, most previous works have considered only the homogeneous case. Here we investigated the dynamical behaviors in a population of spatially distributed chaotic oscillators immersed in a heterogeneous environment. Various dynamical synchronization states (such as oscillation death, phase synchronization, and complete synchronized oscillation) as well as their transitions were explored. In particular, we uncovered a non-traditional quorum sensing transition: increasing the population density led to a transition from oscillation death to synchronized oscillation at first, but further increasing the density resulted in degeneration from complete synchronization to phase synchronization, or even from phase synchronization to desynchronization. The underlying mechanism of this finding was attributed to the dual roles played by the population density. Moreover, by treating the environment as another component of the oscillator, the full system becomes effectively equivalent to a locally coupled system. This fact allowed us to utilize the master stability function approach to predict the occurrence of complete synchronized oscillation, in agreement with direct numerical integration of the system. Potential candidates for an experimental realization of our model are also discussed.
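A schematic sketch of indirect, environment-mediated coupling of chaotic oscillators, assuming a single well-mixed environment variable rather than the spatially heterogeneous environment studied above; parameter names and values are illustrative only.

```python
import numpy as np

def derivatives(x, y, z, env, density=0.5, k_env=1.0, k_fb=0.5, decay=1.0):
    """Rossler oscillators that interact only through a shared environment variable."""
    dx = -y - z + k_fb * env                              # feedback from environment
    dy = x + 0.2 * y
    dz = 0.2 + z * (x - 5.7)
    denv = -decay * env + density * k_env * np.mean(x)    # environment driven by the population
    return dx, dy, dz, denv

rng = np.random.default_rng(0)
x, y, z = (rng.normal(size=10) for _ in range(3))         # 10 oscillators, random initial states
env, dt = 0.0, 0.005
for _ in range(200000):                                    # simple Euler integration
    dx, dy, dz, denv = derivatives(x, y, z, env)
    x, y, z, env = x + dt * dx, y + dt * dy, z + dt * dz, env + dt * denv
print(np.std(x))   # dispersion of oscillator states; it drops if they synchronize
```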
UBioLab: a web-LABoratory for Ubiquitous in-silico experiments.
Bartocci, E; Di Berardini, M R; Merelli, E; Vito, L
2012-03-01
The huge and dynamic collection of bioinformatic resources (e.g., data and tools) available on the Internet poses a major challenge for biologists, who must manage and visualize them, and for bioinformaticians, who need to rapidly create and execute in-silico experiments involving resources and activities spread across the WWW. Any framework that aims to integrate such resources as in a physical laboratory must tackle, and ideally handle in a transparent and uniform way, physical distribution, semantic heterogeneity, and the co-existence of different computational paradigms and, as a consequence, different invocation interfaces (i.e., OGSA for Grid nodes, SOAP for Web Services, Java RMI for Java objects, etc.). The UBioLab framework has been designed and developed as a prototype with this objective. Several architectural features, such as being fully Web-based and combining domain ontologies, Semantic Web and workflow techniques, reflect this effort. The integration of a semantic knowledge management system for distributed (bioinformatic) resources, a semantic-driven graphic environment for defining and monitoring ubiquitous workflows and an intelligent agent-based technology for their distributed execution allows UBioLab to act as a semantic guide for bioinformaticians and biologists, providing (i) a flexible environment for visualizing, organizing and inferring any (semantic and computational) "type" of domain knowledge (e.g., resources and activities, expressed in a declarative form), (ii) a powerful engine for defining and storing semantic-driven ubiquitous in-silico experiments on the domain hyperspace, and (iii) a transparent, automatic and distributed environment for correct experiment execution.
Bio-inspired Autonomic Structures: a middleware for Telecommunications Ecosystems
NASA Astrophysics Data System (ADS)
Manzalini, Antonio; Minerva, Roberto; Moiso, Corrado
Today, people use several devices for communication, for accessing multimedia content services, for data/information retrieval, for processing and computing, etc.: examples are laptops, PDAs, mobile phones, digital cameras, mp3 players, smart cards and smart appliances. One of the most attractive service scenarios for the future of Telecommunications and the Internet is one where people will be able to browse any object in the environment they live in: communication, sensing and processing of data and services will be highly pervasive. In this vision, people, machines, artifacts and the surrounding space will create a kind of computational environment and, at the same time, the interfaces to the network resources. A challenging technological issue will be the interconnection and management of heterogeneous systems and a huge number of small devices tied together in networks of networks. Moreover, future network and service infrastructures should be able to provide Users and Application Developers (at different levels, e.g., residential Users but also SMEs, LEs, ASPs/Web2.0 Service Providers, ISPs, Content Providers, etc.) with the most appropriate "environment" according to their context and specific needs. Operators must be ready to manage this level of complexity by equipping their platforms with technological advances that enable self-supervision and self-adaptation of networks and services. Autonomic software solutions, enhanced with innovative bio-inspired mechanisms and algorithms, are promising areas of long-term research to face such challenges. This chapter proposes a bio-inspired autonomic middleware capable of leveraging the assets of the underlying network infrastructure whilst, at the same time, supporting the development of future Telecommunications and Internet Ecosystems.
H-BLAST: a fast protein sequence alignment toolkit on heterogeneous computers with GPUs.
Ye, Weicai; Chen, Ying; Zhang, Yongdong; Xu, Yuesheng
2017-04-15
Sequence alignment is a fundamental problem in bioinformatics. BLAST is a routinely used tool for this purpose, with over 118 000 citations in the past two decades. As the size of bio-sequence databases grows exponentially, the computational speed of alignment software must be improved. We develop the heterogeneous BLAST (H-BLAST), a fast parallel search tool for a heterogeneous computer that couples CPUs and GPUs, to accelerate BLASTX and BLASTP, basic tools of NCBI-BLAST. H-BLAST employs a locally decoupled seed-extension algorithm for better performance on GPUs, and offers a performance tuning mechanism for better efficiency among various CPU and GPU combinations. H-BLAST produces identical alignment results to NCBI-BLAST and its computational speed is much faster than that of NCBI-BLAST. Speedups achieved by H-BLAST over sequential NCBI-BLASTP (resp. NCBI-BLASTX) range mostly from 4 to 10 (resp. 5 to 7.2). With 2 CPU threads and 2 GPUs, H-BLAST can be faster than 16-threaded NCBI-BLASTX. Furthermore, H-BLAST is 1.5-4 times faster than GPU-BLAST. https://github.com/Yeyke/H-BLAST.git. yux06@syr.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Treeby, Bradley E; Jaros, Jiri; Rendell, Alistair P; Cox, B T
2012-06-01
The simulation of nonlinear ultrasound propagation through tissue-realistic media has a wide range of practical applications. However, this is a computationally difficult problem due to the large size of the computational domain compared to the acoustic wavelength. Here, the k-space pseudospectral method is used to reduce the number of grid points required per wavelength for accurate simulations. The model is based on coupled first-order acoustic equations valid for nonlinear wave propagation in heterogeneous media with power law absorption. These are derived from the equations of fluid mechanics and include a pressure-density relation that incorporates the effects of nonlinearity, power law absorption, and medium heterogeneities. The additional terms accounting for convective nonlinearity and power law absorption are expressed as spatial gradients, making them efficient to encode numerically. The governing equations are then discretized using a k-space pseudospectral technique in which the spatial gradients are computed using the Fourier-collocation method. This increases the accuracy of the gradient calculation and thus relaxes the requirement for dense computational grids compared to conventional finite difference methods. The accuracy and utility of the developed model are demonstrated via several numerical experiments, including the 3D simulation of the beam pattern from a clinical ultrasound probe.
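A minimal sketch of the Fourier-collocation gradient evaluation on which such pseudospectral schemes rely (a generic 1D illustration, not the authors' solver):

```python
import numpy as np

def spectral_gradient(f, dx):
    """Compute df/dx on a periodic 1D grid with the Fourier-collocation method."""
    n = f.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)            # wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))  # differentiate in k-space

# Example: the gradient of sin(x) should closely match cos(x)
x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
df = spectral_gradient(np.sin(x), x[1] - x[0])
print(np.max(np.abs(df - np.cos(x))))  # near machine precision for a smooth periodic field
```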
Approximate Bayesian computation for spatial SEIR(S) epidemic models.
Brown, Grant D; Porter, Aaron T; Oleson, Jacob J; Hinman, Jessica A
2018-02-01
Approximate Bayesian Computation (ABC) provides an attractive approach to estimation in complex Bayesian inferential problems for which evaluation of the kernel of the posterior distribution is impossible or computationally expensive. These highly parallelizable techniques have been successfully applied to many fields, particularly in cases where more traditional approaches such as Markov chain Monte Carlo (MCMC) are impractical. In this work, we demonstrate the application of approximate Bayesian inference to spatially heterogeneous Susceptible-Exposed-Infectious-Removed (SEIR) stochastic epidemic models. These models have a tractable posterior distribution; nevertheless, MCMC techniques become computationally infeasible for moderately sized problems. We discuss the practical implementation of these techniques via the open source ABSEIR package for R. The performance of ABC relative to traditional MCMC methods in a small problem is explored under simulation, as well as in the spatially heterogeneous context of the 2014 epidemic of Chikungunya in the Americas. Copyright © 2017 Elsevier Ltd. All rights reserved.
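A minimal sketch of the basic ABC rejection idea underlying such approaches (a generic illustration with a toy binomial model, not the ABSEIR implementation):

```python
import numpy as np

def abc_rejection(observed_summary, simulate, prior_sampler,
                  n_draws=10000, tolerance=0.1):
    """Basic ABC rejection sampler: keep prior draws whose simulated summary is close to the data."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()
        if abs(simulate(theta) - observed_summary) < tolerance:
            accepted.append(theta)
    return np.array(accepted)

# Toy example: infer an infection probability from a single epidemic summary statistic
rng = np.random.default_rng(0)
observed = rng.binomial(1000, 0.3) / 1000.0
posterior = abc_rejection(
    observed_summary=observed,
    simulate=lambda b: rng.binomial(1000, b) / 1000.0,   # forward model
    prior_sampler=lambda: rng.uniform(0.0, 1.0),          # uniform prior
    tolerance=0.02,
)
print(posterior.mean())  # should be close to 0.3
```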
NASA Astrophysics Data System (ADS)
Gong, Z.; Wang, C.; Pan, Y. L.; Videen, G.
2017-12-01
Heterogeneous reactions of solid particles in a gaseous environment are of increasing interest; however, most heterogeneous chemistry studies of airborne solids have been conducted on particle ensembles. A close examination of the heterogeneous chemistry between single particles and gaseous-environment species is key to elucidating the fundamental mechanisms of hygroscopic growth, cloud condensation nuclei formation, secondary aerosol formation, etc., and to reducing the uncertainty of models in radiative forcing, climate change, and atmospheric chemistry. We demonstrate an optical trapping-Raman spectroscopy (OT-RS) system to study the heterogeneous chemistry of solid particles in air at the single-particle level. Compared to other single-particle techniques, optical trapping offers a non-invasive, flexible, and stable method to isolate single solid particles from substrates. Benefiting from two counter-propagating hollow beams, the optical trapping configuration is adaptable to trapping a variety of particles of different materials, from inorganic substances (carbon nanotubes, silica, etc.) to organic, dye-doped polymers and bioaerosols (spores, pollen, etc.), with optical properties ranging from transparent to strongly absorbing, with sizes from sub-micrometers to tens of microns, and with distinct morphologies from loosely packed nanotubes to microspheres and irregular pollen grains. The particles in the optical trap may remain unchanged, become surface-degraded, or be optically fragmented depending on the laser intensity, and their physical and chemical properties are characterized by Raman spectra and an imaging system simultaneously. The Raman spectra can distinguish the chemical compositions of different particles, while the synchronized imaging system can resolve their physical properties (sizes, shapes, morphologies, etc.). The temporal behavior of the trapped particles can also be monitored by the OT-RS system over indefinite times with a resolution from 10 ms to 5 min, which can be further applied to monitor the dynamics of heterogeneous reactions. The OT-RS system provides a flexible method to characterize and monitor the physical properties and heterogeneous chemistry of optically trapped solid particles in a gaseous environment at the single-particle level.
Enabling Flexible and Continuous Capability Invocation in Mobile Prosumer Environments
Alcarria, Ramon; Robles, Tomas; Morales, Augusto; López-de-Ipiña, Diego; Aguilera, Unai
2012-01-01
Mobile prosumer environments require communication with heterogeneous devices during the execution of mobile services. These environments integrate sensors, actuators and smart devices, whose availability changes continuously. The aim of this paper is to design a reference architecture for implementing a model of continuous service execution and access to capabilities, i.e., the functionalities provided by these devices. The defined architecture follows a set of software engineering patterns and includes several communication paradigms to cope with the heterogeneity of sensors, actuators, controllers and other devices in the environment. In addition, we stress the importance of flexibility in capability invocation by allowing the communication middleware to select the access technology and change the communication paradigm when dealing with smart devices, and by describing and evaluating two algorithms for resource access management. PMID:23012526
Wang, Yong-Jian; Shi, Xue-Ping; Meng, Xue-Feng; Wu, Xiao-Jing; Luo, Fang-Li; Yu, Fei-Hai
2016-01-01
Spatial heterogeneity in two co-variable resources such as light and water availability is common and can affect the growth of clonal plants. Several studies have tested effects of spatial heterogeneity in the supply of a single resource on competitive interactions of plants, but none has examined those of heterogeneous distribution of two co-variable resources. In a greenhouse experiment, we grew one (without intraspecific competition) or nine isolated ramets (with competition) of a rhizomatous herb Iris japonica under a homogeneous environment and four heterogeneous environments differing in patch arrangement (reciprocal and parallel patchiness of light and soil water) and patch scale (large and small patches of light and water). Intraspecific competition significantly decreased the growth of I. japonica, but at the whole container level there were no significant interaction effects of competition by spatial heterogeneity or significant effect of heterogeneity on competitive intensity. Irrespective of competition, the growth of I. japonica in the high and the low water patches did not differ significantly in the homogeneous treatments, but it was significantly larger in the high than in the low water patches in the heterogeneous treatments with large patches. For the heterogeneous treatments with small patches, the growth of I. japonica was significantly larger in the high than in the low water patches in the presence of competition, but such an effect was not significant in the absence of competition. Furthermore, patch arrangement and patch scale significantly affected competitive intensity at the patch level. Therefore, spatial heterogeneity in light and water supply can alter intraspecific competition at the patch level and such effects depend on patch arrangement and patch scale. PMID:27375630
Modeling of photon migration in the human lung using a finite volume solver
NASA Astrophysics Data System (ADS)
Sikorski, Zbigniew; Furmanczyk, Michal; Przekwas, Andrzej J.
2006-02-01
The application of frequency-domain and steady-state diffusive optical spectroscopy (DOS) and steady-state near infrared spectroscopy (NIRS) to the diagnosis of human lung injury challenges many elements of these techniques. These include the DOS/NIRS instrument performance and accurate models of light transport in heterogeneous thorax tissue. The thorax tissue not only consists of different media (e.g. chest wall with ribs, lungs) but its optical properties also vary with time due to respiration and changes in thorax geometry with contusion (e.g. pneumothorax or hemothorax). This paper presents a finite volume solver developed to model photon migration in the diffusion approximation in heterogeneous complex 3D tissues. The code applies boundary conditions that account for Fresnel reflections. We propose an effective diffusion coefficient for the void volumes (pneumothorax) based on the assumption of Lambertian diffusion of photons entering the pleural cavity and accounting for the local pleural cavity thickness. The code has been validated using the MCML Monte Carlo code as a benchmark. The code environment enables semi-automatic preparation of 3D computational geometry from medical images and its rapid automatic meshing. We present the application of the code to the analysis/optimization of the hybrid DOS/NIRS/ultrasound technique in which ultrasound provides data on the localization of thorax tissue boundaries. The code's efficiency (a complex 3D case takes 1 second to compute) enables its use to quantitatively relate the detected light signal to the absorption and reduced scattering coefficients that are indicators of the pulmonary physiologic state (hemoglobin concentration and oxygenation).
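For context, the steady-state diffusion approximation that such finite volume solvers discretize typically takes the standard form (a textbook statement, not an equation taken from this paper):

\[
\nabla \cdot \big( D(\mathbf{r}) \, \nabla \Phi(\mathbf{r}) \big) - \mu_a(\mathbf{r}) \, \Phi(\mathbf{r}) = -S(\mathbf{r}),
\qquad D = \frac{1}{3\,(\mu_a + \mu_s')},
\]

where \(\Phi\) is the photon fluence rate, \(\mu_a\) the absorption coefficient, \(\mu_s'\) the reduced scattering coefficient, and \(S\) the source term.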
Bauer, Matthias; Knebel, Johannes; Lechner, Matthias; Pickl, Peter; Frey, Erwin
2017-01-01
Autoinducers are small signaling molecules that mediate intercellular communication in microbial populations and trigger coordinated gene expression via ‘quorum sensing’. Elucidating the mechanisms that control autoinducer production is, thus, pertinent to understanding collective microbial behavior, such as virulence and bioluminescence. Recent experiments have shown a heterogeneous promoter activity of autoinducer synthase genes, suggesting that some of the isogenic cells in a population might produce autoinducers, whereas others might not. However, the mechanism underlying this phenotypic heterogeneity in quorum-sensing microbial populations has remained elusive. In our theoretical model, cells synthesize and secrete autoinducers into the environment, up-regulate their production in this self-shaped environment, and non-producers replicate faster than producers. We show that the coupling between ecological and population dynamics through quorum sensing can induce phenotypic heterogeneity in microbial populations, suggesting an alternative mechanism to stochastic gene expression in bistable gene regulatory circuits. DOI: http://dx.doi.org/10.7554/eLife.25773.001 PMID:28741470
Steffan-Dewenter, Ingolf; Härtel, Stephan
2017-01-01
The instructive component of waggle dance communication has been shown to increase resource uptake of Apis mellifera colonies in highly heterogeneous resource environments, but an assessment of its relevance in temperate landscapes with different levels of resource heterogeneity is currently lacking. We hypothesized that the advertisement of resource locations via dance communication would be most relevant in highly heterogeneous landscapes with large spatial variation of floral resources. To test our hypothesis, we placed 24 Apis mellifera colonies with either a disrupted or an unimpaired instructive component of dance communication in eight Central European agricultural landscapes that differed in heterogeneity and resource availability. We monitored colony weight change and pollen harvest as measures of foraging success. Dance disruption did not significantly alter colony weight change, but decreased pollen harvest compared to the communicating colonies by 40%. There was no general effect of resource availability on nectar or pollen foraging success, but the effect of landscape heterogeneity on nectar uptake was stronger when resource availability was high. In contrast to our hypothesis, the effects of disrupted bee communication on nectar and pollen foraging success were not stronger in landscapes with heterogeneous compared to homogeneous resource environments. Our results indicate that in temperate regions intra-colonial communication of resource locations benefits pollen foraging more than nectar foraging, irrespective of landscape heterogeneity. We conclude that the so far largely unexplored role of dance communication in pollen foraging requires further consideration, as pollen is a crucial resource for colony development and health. PMID:28603677
Culumber, Zachary W; Schumer, Molly; Monks, Scott; Tobler, Michael
2015-02-01
Theory predicts that environmental heterogeneity offers a potential solution to the maintenance of genetic variation within populations, but empirical evidence remains sparse. The live-bearing fish Xiphophorus variatus exhibits polymorphism at a single locus, with different alleles resulting in up to five distinct melanistic "tailspot" patterns within populations. We investigated the effects of heterogeneity in two ubiquitous environmental variables (temperature and food availability) on two fitness-related traits (upper thermal limits and body condition) in two different tailspot types (wild-type and upper cut crescent). We found gene-by-environment (G × E) interactions between tailspot type and food level affecting upper thermal limits (UTL), as well as between tailspot type and thermal environment affecting body condition. Exploring mechanistic bases underlying these G × E patterns, we found no differences between tailspot types in hsp70 gene expression despite significant overall increases in expression under both thermal and food stress. Similarly, there was no difference in routine metabolic rates between the tailspot types. The reversal of relative performance of the two tailspot types under different environmental conditions revealed a mechanism by which environmental heterogeneity can balance polymorphism within populations through selection on different fitness-related traits. © 2014 The Author(s). Evolution © 2014 The Society for the Study of Evolution.
Disease Spread and Its Effect on Population Dynamics in Heterogeneous Environment
NASA Astrophysics Data System (ADS)
Upadhyay, Ranjit Kumar; Roy, Parimita
In this paper, an eco-epidemiological model in which both species diffuse along a spatial gradient is shown to exhibit temporal chaos at a fixed point in space. The proposed model is a modification of the model recently presented by Upadhyay and Roy [2014]. The spatial interactions among the species are represented in the form of reaction-diffusion equations. The model incorporates the intrinsic growth rate of the fish population, which varies linearly with the depth of water. Numerical results show that diffusion can drive an otherwise stable system into aperiodic behavior with sensitivity to initial conditions. We show that spatially induced chaos plays an important role in spatial pattern formation in a heterogeneous environment. Spatiotemporal distributions of the species have been simulated using diffusivity assumptions realistic for natural eco-epidemic systems. We found that in a heterogeneous environment the temporal dynamics of the two species are drastically different and show chaotic behavior. We also found that the instability observed in the model is diffusion-driven and due to spatial heterogeneity. The cumulative death rate of the predator has an appreciable effect on the model dynamics: the spatial distributions of all constituent populations change significantly when this parameter is varied, and it acts as a regularizing factor.
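Schematically, the reaction-diffusion structure referred to above has the generic form (the specific interaction terms are those of Upadhyay and Roy [2014] and are not reproduced here):

\[
\frac{\partial u}{\partial t} = f(u, v) + D_u \nabla^2 u,
\qquad
\frac{\partial v}{\partial t} = g(u, v) + D_v \nabla^2 v,
\]

where \(u\) and \(v\) are the prey (fish) and predator densities, \(f\) and \(g\) are the local interaction terms, and \(D_u\), \(D_v\) are the diffusion coefficients.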
Old models explain new observations of butterfly movement at patch edges.
Crone, Elizabeth E; Schultz, Cheryl B
2008-07-01
Understanding movement in heterogeneous environments is central to predicting how landscape changes affect animal populations. Several recent studies point out an intriguing and distinctive looping behavior by butterflies at habitat patch edges and hypothesize that this behavior requires a new framework for analyzing animal movement. We show that this looping behavior could be caused by a longstanding movement model, the biased correlated random walk, with bias toward habitat patches. The ability of this longstanding model to explain recent observations reinforces the point that butterflies respond to habitat heterogeneity and do not move randomly through heterogeneous environments. We discuss the implications of different movement models for predicting butterfly responses to landscape change, and our rationale for retaining longstanding movement models rather than developing new modeling frameworks for looping behavior at patch edges.
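A minimal sketch of a biased correlated random walk of the kind invoked above; the parameterization is illustrative and not that of Crone and Schultz:

```python
import numpy as np

def biased_correlated_random_walk(n_steps, start, patch_center,
                                  step_len=1.0, persistence=0.7, bias=0.2,
                                  turn_sd=0.5, rng=None):
    """Each new heading blends the previous heading (correlation), the direction
    toward a habitat patch (bias), and random turning noise."""
    rng = rng or np.random.default_rng()
    pos = np.array(start, dtype=float)
    heading = rng.uniform(0.0, 2.0 * np.pi)
    track = [pos.copy()]
    for _ in range(n_steps):
        to_patch = np.arctan2(patch_center[1] - pos[1], patch_center[0] - pos[0])
        # circular blend of previous heading and patch direction, plus turning noise
        heading = np.arctan2(
            persistence * np.sin(heading) + bias * np.sin(to_patch),
            persistence * np.cos(heading) + bias * np.cos(to_patch),
        ) + rng.normal(0.0, turn_sd)
        pos += step_len * np.array([np.cos(heading), np.sin(heading)])
        track.append(pos.copy())
    return np.array(track)

track = biased_correlated_random_walk(500, start=(0.0, 0.0), patch_center=(50.0, 0.0))
print(track[-1])  # the end point drifts toward the patch on average
```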
ERIC Educational Resources Information Center
DeVillar, Robert A.; Faltis, Christian J.
This book offers an alternative conceptual framework for effectively incorporating computer use within the heterogeneous classroom. The framework integrates Vygotskian social-learning theory with Allport's contact theory and the principles of cooperative learning. In Part 1 an essential element is identified for each of these areas. These are, in…
ERIC Educational Resources Information Center
da Silveira, Pedro Rodrigo Castro
2014-01-01
This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures--the Virtual Laboratory for Earth and Planetary Materials--VLab. VLab was developed to leverage the aggregated computational power of grid systems to solve…
Earth Science Data Grid System
NASA Astrophysics Data System (ADS)
Chi, Y.; Yang, R.; Kafatos, M.
2004-05-01
The Earth Science Data Grid System (ESDGS) is a software system supporting earth science data storage and access. It is built upon the Storage Resource Broker (SRB) data grid technology. We have developed a complete data grid system consisting of an SRB server, which provides users with uniform access to diverse storage resources in a heterogeneous computing environment, and a metadata catalog server (MCAT), which manages the metadata associated with data sets, users, and resources. We have also developed earth science application metadata; geospatial, temporal, and content-based indexing; and several other tools. In this paper, we describe the software architecture and components of the data grid system, and use a practical example supporting storage and access of rainfall data from the Tropical Rainfall Measuring Mission (TRMM) to illustrate its functionality and features.
Bussery, Justin; Denis, Leslie-Alexandre; Guillon, Benjamin; Liu, Pengfeï; Marchetti, Gino; Rahal, Ghita
2018-04-01
We describe the genesis, design and evolution of a computing platform designed and built to improve the success rate of biomedical translational research. The eTRIKS project platform was developed with the aim of building a platform that can securely host heterogeneous types of data and provide an optimal environment to run tranSMART analytical applications. Many types of data can now be hosted, including multi-OMICS data, preclinical laboratory data and clinical information, as well as longitudinal data sets. During the last two years, the platform has matured into a robust translational research knowledge management system that is able to host other data mining applications and support the development of new analytical tools. Copyright © 2018 Elsevier Ltd. All rights reserved.
Leverage hadoop framework for large scale clinical informatics applications.
Dong, Xiao; Bahroos, Neil; Sadhu, Eugene; Jackson, Tommie; Chukhman, Morris; Johnson, Robert; Boyd, Andrew; Hynes, Denise
2013-01-01
In this manuscript, we present our experiences using the Apache Hadoop framework for high data volume and computationally intensive applications, and discuss some best practice guidelines in a clinical informatics setting. There are three main aspects to our approach: (a) process and integrate diverse, heterogeneous data sources using standard Hadoop programming tools and customized MapReduce programs; (b) after fine-grained aggregate results are obtained, perform data analysis using the Mahout data mining library; (c) leverage the column-oriented features of HBase for patient-centric modeling and complex temporal reasoning. This framework provides a scalable solution to meet the rapidly increasing, imperative "Big Data" needs of clinical and translational research. The intrinsic advantages of fault tolerance, high availability and scalability of the Hadoop platform make these applications readily deployable in an enterprise-level cluster environment.
An Experimental Framework for Executing Applications in Dynamic Grid Environments
NASA Technical Reports Server (NTRS)
Huedo, Eduardo; Montero, Ruben S.; Llorente, Ignacio M.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The Grid opens up opportunities for resource-starved scientists and engineers to harness highly distributed computing resources. A number of Grid middleware projects are currently available to support the simultaneous exploitation of heterogeneous resources distributed in different administrative domains. However, efficient job submission and management remain far from accessible to ordinary scientists and engineers due to the dynamic and complex nature of the Grid. This report describes a new Globus framework that allows an easier and more efficient execution of jobs in a 'submit and forget' fashion. Adaptation to dynamic Grid conditions is achieved by supporting automatic application migration following performance degradation, 'better' resource discovery, requirement changes, owner decisions or remote resource failure. The report also includes experimental results on the behavior of our framework on the TRGP testbed.
Transformation of OODT CAS to Perform Larger Tasks
NASA Technical Reports Server (NTRS)
Mattmann, Chris; Freeborn, Dana; Crichton, Daniel; Hughes, John; Ramirez, Paul; Hardman, Sean; Woollard, David; Kelly, Sean
2008-01-01
A computer program denoted OODT CAS has been transformed to enable performance of larger tasks that involve greatly increased data volumes and increasingly intensive processing of data on heterogeneous, geographically dispersed computers. Prior to the transformation, OODT CAS (also denoted, simply, 'CAS') [wherein 'OODT' signifies 'Object-Oriented Data Technology' and 'CAS' signifies 'Catalog and Archive Service'] was a proven software component used to manage scientific data from spaceflight missions. In the transformation, CAS was split into two separate components representing its canonical capabilities: file management and workflow management. In addition, CAS was augmented with a resource-management component. This third component enables CAS to manage heterogeneous computing by use of diverse resources, including high-performance clusters of computers, commodity computing hardware, and grid computing infrastructures. CAS is now more easily maintainable, evolvable, and reusable. These components can be used separately or, taking advantage of synergies, together. Other elements of the transformation included the addition of a separate Web presentation layer that supports distribution of data products via Really Simple Syndication (RSS) feeds, and provision for full Resource Description Framework (RDF) exports of metadata.
Zimmermann, Matthias; Escrig, Stéphane; Hübschmann, Thomas; Kirf, Mathias K.; Brand, Andreas; Inglis, R. Fredrik; Musat, Niculina; Müller, Susann; Meibom, Anders; Ackermann, Martin; Schreiber, Frank
2015-01-01
Populations of genetically identical microorganisms residing in the same environment can display marked variability in their phenotypic traits; this phenomenon is termed phenotypic heterogeneity. The relevance of such heterogeneity in natural habitats is unknown, because phenotypic characterization of a sufficient number of single cells of the same species in complex microbial communities is technically difficult. We report a procedure that allows us to measure phenotypic heterogeneity in bacterial populations from natural environments, and use it to analyze N2 and CO2 fixation of single cells of the green sulfur bacterium Chlorobium phaeobacteroides from the meromictic lake Lago di Cadagno. We incubated lake water with 15N2 and 13CO2 under in situ conditions with and without NH4+. Subsequently, we used flow cell sorting with auto-fluorescence gating based on a pure culture isolate to concentrate C. phaeobacteroides from its natural abundance of 0.2% to 26.5% of total bacteria. C. phaeobacteroides cells were identified using catalyzed-reporter deposition fluorescence in situ hybridization (CARD-FISH) targeting the 16S rRNA in the sorted population with a species-specific probe. In a last step, we used nanometer-scale secondary ion mass spectrometry to measure the incorporation of 15N and 13C stable isotopes in more than 252 cells. We found that C. phaeobacteroides fixes N2 in the absence of NH4+, but not in the presence of NH4+ as has previously been suggested. N2 and CO2 fixation were heterogeneous among cells and positively correlated, indicating that N2 and CO2 fixation activities interact and positively facilitate each other in individual cells. However, because CARD-FISH identification cannot detect genetic variability among cells of the same species, we cannot exclude genetic variability as a source of phenotypic heterogeneity in this natural population. Our study demonstrates the technical feasibility of measuring phenotypic heterogeneity in a rare bacterial species in its natural habitat, thus opening the door to studying the occurrence and relevance of phenotypic heterogeneity in nature. PMID:25932020
Best, Katharine; Oakes, Theres; Heather, James M.; Shawe-Taylor, John; Chain, Benny
2015-01-01
The polymerase chain reaction (PCR) is one of the most widely used techniques in molecular biology. In combination with High Throughput Sequencing (HTS), PCR is widely used to quantify transcript abundance for RNA-seq, and in the context of analysis of T and B cell receptor repertoires. In this study, we combine DNA barcoding with HTS to quantify PCR output from individual target molecules. We develop computational tools that simulate both the PCR branching process itself, and the subsequent subsampling which typically occurs during HTS sequencing. We explore the influence of different types of heterogeneity on sequencing output, and compare them to experimental results where the efficiency of amplification is measured by barcodes uniquely identifying each molecule of starting template. Our results demonstrate that the PCR process introduces substantial amplification heterogeneity, independent of primer sequence and bulk experimental conditions. This heterogeneity can be attributed both to inherited differences between different template DNA molecules, and the inherent stochasticity of the PCR process. The results demonstrate that PCR heterogeneity arises even when reaction and substrate conditions are kept as constant as possible, and therefore single molecule barcoding is essential in order to derive reproducible quantitative results from any protocol combining PCR with HTS. PMID:26459131
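A minimal sketch of a PCR branching-process simulation with inherited per-molecule efficiencies followed by HTS-style subsampling, in the spirit of the tools described above but not the authors' published code:

```python
import numpy as np

def simulate_pcr(n_molecules, n_cycles, efficiency_sampler, rng=None):
    """Each starting molecule inherits its own amplification efficiency, and each
    cycle duplicates every copy of it with that probability."""
    rng = rng or np.random.default_rng()
    copies = np.ones(n_molecules, dtype=np.int64)
    eff = np.array([efficiency_sampler() for _ in range(n_molecules)])
    for _ in range(n_cycles):
        copies += rng.binomial(copies, eff)   # each copy duplicates with probability eff
    return copies

def subsample_reads(copies, n_reads, rng=None):
    """Mimic HTS subsampling: draw reads in proportion to per-molecule copy number."""
    rng = rng or np.random.default_rng()
    return rng.multinomial(n_reads, copies / copies.sum())

rng = np.random.default_rng(1)
copies = simulate_pcr(1000, 20, lambda: rng.uniform(0.7, 0.95), rng)
reads = subsample_reads(copies, 100000, rng)
print(reads.min(), reads.max())  # the spread reflects amplification heterogeneity
```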
Axelrod, David E; Vedula, Sudeepti; Obaniyi, James
2017-05-01
The effectiveness of cancer chemotherapy is limited by intra-tumor heterogeneity, the emergence of spontaneous and induced drug-resistant mutant subclones, and the maximum dose to which normal tissues can be exposed without adverse side effects. The goal of this project was to determine if intermittent schedules of the maximum dose that allows colon crypt maintenance could overcome these limitations, specifically by eliminating mixtures of drug-resistant mutants from heterogeneous early colon adenomas while maintaining colon crypt function. A computer model of cell dynamics in human colon crypts was calibrated with measurements of human biopsy specimens. The model allowed simulation of continuous and intermittent dose schedules of a cytotoxic chemotherapeutic drug, as well as the drug's effect on the elimination of mutant cells and the maintenance of crypt function. Colon crypts can tolerate a tenfold greater intermittent dose than constant dose. This allows elimination of a mixture of relatively drug-sensitive and drug-resistant mutant subclones from heterogeneous colon crypts. Mutants can be eliminated whether they arise spontaneously or are induced by the cytotoxic drug. An intermittent dose, at the maximum that allows colon crypt maintenance, can be effective in eliminating a heterogeneous mixture of mutant subclones before they fill the crypt and form an adenoma.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haack, Jeffrey; Shohet, Gil
2016-12-02
The software implements a heterogeneous multiscale method (HMM), which involves solving a classical molecular dynamics (MD) problem and then computing the entropy production in order to compute the relaxation times towards equilibrium for use in a Bhatnagar-Gross-Krook (BGK) solver.
Measuring the effects of heterogeneity on distributed systems
NASA Technical Reports Server (NTRS)
El-Toweissy, Mohamed; Zeineldine, Osman; Mukkamala, Ravi
1991-01-01
Distributed computer systems in daily use are becoming more and more heterogeneous. Currently, many of the design and analysis studies of such systems assume homogeneity. This assumption of homogeneity has been mainly driven by the resulting simplicity in modeling and analysis. A simulation study is presented which investigated the effects of heterogeneity on scheduling algorithms for hard real-time distributed systems. In contrast to previous results, which indicate that random scheduling may be as good as a more complex scheduler, the investigated algorithm is shown to be consistently better than a random scheduler. This conclusion is more pronounced at high workloads as well as at high levels of heterogeneity.
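A toy sketch contrasting random task assignment with a load-aware policy on processors of heterogeneous speed; the policies and workload model are illustrative and not those of the cited simulation study:

```python
import numpy as np

def simulate(n_tasks, speeds, policy, rng):
    """Assign tasks to processors with heterogeneous speeds and return the makespan."""
    work = rng.exponential(1.0, n_tasks)          # task service demands
    finish = np.zeros(len(speeds))                # accumulated busy time per processor
    for w in work:
        if policy == "random":
            i = rng.integers(len(speeds))
        else:  # "least_loaded": pick the processor that would finish this task earliest
            i = np.argmin(finish + w / speeds)
        finish[i] += w / speeds[i]
    return finish.max()

rng = np.random.default_rng(7)
speeds = np.array([0.2, 0.5, 1.0, 2.0, 4.0])      # heterogeneous processor speeds
print(simulate(2000, speeds, "random", rng),
      simulate(2000, speeds, "least_loaded", rng))
```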
Aguilar, Jeffrey; Zhang, Tingnan; Qian, Feifei; Kingsbury, Mark; McInroe, Benjamin; Mazouchova, Nicole; Li, Chen; Maladen, Ryan; Gong, Chaohui; Travers, Matt; Hatton, Ross L; Choset, Howie; Umbanhowar, Paul B; Goldman, Daniel I
2016-11-01
Discovery of fundamental principles which govern and limit effective locomotion (self-propulsion) is of intellectual interest and practical importance. Human technology has created robotic moving systems that excel in movement on and within environments of societal interest: paved roads, open air and water. However, such devices cannot yet robustly and efficiently navigate (as animals do) the enormous diversity of natural environments which might be of future interest for autonomous robots; examples include vertical surfaces like trees and cliffs, heterogeneous ground like desert rubble and brush, turbulent flows found near seashores, and deformable/flowable substrates like sand, mud and soil. In this review we argue for the creation of a physics of moving systems-a 'locomotion robophysics'-which we define as the pursuit of principles of self-generated motion. Robophysics can provide an important intellectual complement to the discipline of robotics, largely the domain of researchers from engineering and computer science. The essential idea is that we must complement the study of complex robots in complex situations with systematic study of simplified robotic devices in controlled laboratory settings and in simplified theoretical models. We must thus use the methods of physics to examine both locomotor successes and failures using parameter space exploration, systematic control, and techniques from dynamical systems. Using examples from our and others' research, we will discuss how such robophysical studies have begun to aid engineers in the creation of devices that have begun to achieve life-like locomotor abilities on and within complex environments, have inspired interesting physics questions in low dimensional dynamical systems, geometric mechanics and soft matter physics, and have been useful to develop models for biological locomotion in complex terrain. The rapidly decreasing cost of constructing robot models with easy access to significant computational power bodes well for scientists and engineers to engage in a discipline which can readily integrate experiment, theory and computation.
DataFed: A Federated Data System for Visualization and Analysis of Spatio-Temporal Air Quality Data
NASA Astrophysics Data System (ADS)
Husar, R. B.; Hoijarvi, K.
2017-12-01
DataFed is a distributed web-services-based computing environment for accessing, processing, and visualizing atmospheric data in support of air quality science and management. The flexible, adaptive environment facilitates the access and flow of atmospheric data from providers to users by enabling the creation of user-driven data processing/visualization applications. DataFed 'wrapper' components non-intrusively wrap heterogeneous, distributed datasets for access by standards-based GIS web services. The mediator components (also web services) map the heterogeneous data into a spatio-temporal data model. Chained web services provide homogeneous data views (e.g., geospatial and time views) using a global multi-dimensional data model. In addition to data access and rendering, the data processing component services can be programmed for filtering, aggregation, and fusion of multidimensional data. Complete applications are written in a custom-made data-flow language. Currently, the federated data pool consists of over 50 datasets originating from globally distributed data providers delivering surface-based air quality measurements, satellite observations, and emissions data, as well as regional- and global-scale air quality models. The web browser-based user interface allows point-and-click navigation and browsing of the XYZT multi-dimensional data space. The key applications of DataFed are exploring spatial patterns of pollutants, and their seasonal, weekly, and diurnal cycles and frequency distributions, for exploratory air quality research. Since 2008, DataFed has been used to support EPA in the implementation of the Exceptional Event Rule. The data system is also used at universities in the US, Europe and Asia.
Characterizing heterogeneous cellular responses to perturbations.
Slack, Michael D; Martinez, Elisabeth D; Wu, Lani F; Altschuler, Steven J
2008-12-09
Cellular populations have been widely observed to respond heterogeneously to perturbation. However, interpreting the observed heterogeneity is an extremely challenging problem because of the complexity of possible cellular phenotypes, the large dimension of potential perturbations, and the lack of methods for separating meaningful biological information from noise. Here, we develop an image-based approach to characterize cellular phenotypes based on patterns of signaling marker colocalization. Heterogeneous cellular populations are characterized as mixtures of phenotypically distinct subpopulations, and responses to perturbations are summarized succinctly as probabilistic redistributions of these mixtures. We apply our method to characterize the heterogeneous responses of cancer cells to a panel of drugs. We find that cells treated with drugs of (dis-)similar mechanism exhibit (dis-)similar patterns of heterogeneity. Despite the observed phenotypic diversity of cells observed within our data, low-complexity models of heterogeneity were sufficient to distinguish most classes of drug mechanism. Our approach offers a computational framework for assessing the complexity of cellular heterogeneity, investigating the degree to which perturbations induce redistributions of a limited, but nontrivial, repertoire of underlying states and revealing functional significance contained within distinct patterns of heterogeneous responses.
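A minimal sketch of the mixture-of-subpopulations idea: summarize each cell population as a probability distribution over a small set of reference phenotypic subpopulations. The two-dimensional features below are synthetic stand-ins for the image-derived marker-colocalization features used in the study:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
reference_cells = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(300, 2)),   # subpopulation A
    rng.normal(loc=[3, 1], scale=0.5, size=(300, 2)),   # subpopulation B
    rng.normal(loc=[1, 4], scale=0.5, size=(300, 2)),   # subpopulation C
])
gmm = GaussianMixture(n_components=3, random_state=0).fit(reference_cells)

def subpopulation_profile(cells):
    """Fraction of cells assigned to each reference subpopulation."""
    labels = gmm.predict(cells)
    return np.bincount(labels, minlength=gmm.n_components) / len(cells)

treated = rng.normal(loc=[3, 1], scale=0.6, size=(200, 2))  # a perturbation shifts cells toward B
print(subpopulation_profile(treated))
```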
Response Variance in Functional Maps: Neural Darwinism Revisited
Takahashi, Hirokazu; Yokota, Ryo; Kanzaki, Ryohei
2013-01-01
The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population. PMID:23874733
Ontology based heterogeneous materials database integration and semantic query
NASA Astrophysics Data System (ADS)
Zhao, Shuai; Qian, Quan
2017-10-01
Materials digital data, high-throughput experiments, and high-throughput computations are regarded as the three key pillars of materials genome initiatives. With the fast growth of materials data, the integration and sharing of data have become urgent needs and a growing focus of materials informatics. Due to the lack of semantic description, it is difficult to integrate data deeply at the semantic level when adopting conventional heterogeneous database integration approaches such as federated databases or data warehouses. In this paper, a semantic integration method is proposed that creates a semantic ontology by extracting the database schema semi-automatically. Other heterogeneous databases are integrated into the ontology by means of relational algebra and the rooted graph. Based on the integrated ontology, semantic queries can be performed using SPARQL. In the experiments, two well-known first-principles computational databases, OQMD and Materials Project, are used as the integration targets, demonstrating the availability and effectiveness of our method.
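A toy illustration of ontology-backed semantic querying with rdflib and SPARQL (the namespace, properties, and records below are hypothetical and not the paper's actual materials schema):

```python
# Minimal sketch of ontology-backed semantic query with rdflib and SPARQL.
# The namespace and property names below are hypothetical, not the paper's schema.
from rdflib import Graph, Literal, Namespace, RDF

MAT = Namespace("http://example.org/materials#")
g = Graph()

# Two records as they might look after mapping heterogeneous databases onto one ontology.
for name, source, bandgap in [("Si", "OQMD", 0.61), ("GaAs", "MaterialsProject", 1.52)]:
    m = MAT[name]
    g.add((m, RDF.type, MAT.Material))
    g.add((m, MAT.fromDatabase, Literal(source)))
    g.add((m, MAT.bandGapEv, Literal(bandgap)))

# A semantic query that spans both integrated sources.
query = """
PREFIX mat: <http://example.org/materials#>
SELECT ?m ?src ?gap WHERE {
  ?m a mat:Material ; mat:fromDatabase ?src ; mat:bandGapEv ?gap .
  FILTER (?gap > 1.0)
}"""
for row in g.query(query):
    print(row.m, row.src, row.gap)
```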
Buettner, Florian; Natarajan, Kedar N; Casale, F Paolo; Proserpio, Valentina; Scialdone, Antonio; Theis, Fabian J; Teichmann, Sarah A; Marioni, John C; Stegle, Oliver
2015-02-01
Recent technical developments have enabled the transcriptomes of hundreds of cells to be assayed in an unbiased manner, opening up the possibility that new subpopulations of cells can be found. However, the effects of potential confounding factors, such as the cell cycle, on the heterogeneity of gene expression and therefore on the ability to robustly identify subpopulations remain unclear. We present and validate a computational approach that uses latent variable models to account for such hidden factors. We show that our single-cell latent variable model (scLVM) allows the identification of otherwise undetectable subpopulations of cells that correspond to different stages during the differentiation of naive T cells into T helper 2 cells. Our approach can be used not only to identify cellular subpopulations but also to tease apart different sources of gene expression heterogeneity in single-cell transcriptomes.
Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi
2011-11-01
Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks.
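For intuition about the data-parallel update pattern such GPU implementations exploit, here is a CPU/NumPy stand-in that advances a whole neuron population with one vectorized rule per time step; it uses the simpler Izhikevich model rather than the authors' conductance-based neurons, and all parameter values are illustrative:

```python
# Stand-in for the data-parallel pattern (not the authors' GPU code): every neuron
# in the population is updated by the same vectorized rule at each time step.
import numpy as np

rng = np.random.default_rng(0)
N, dt, steps = 370, 0.5, 2000                    # 0.5 ms step, 1 s of simulated time
a, b, c, d = 0.02, 0.2, -65.0, 8.0               # regular-spiking Izhikevich parameters
v = np.full(N, -65.0)
u = b * v
spike_count = np.zeros(N, dtype=int)

for _ in range(steps):
    I = 5.0 + 3.0 * rng.normal(size=N)           # heterogeneous noisy drive
    v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    fired = v >= 30.0                            # detect and reset spiking neurons
    spike_count += fired
    v[fired] = c
    u[fired] += d

print("mean firing rate (Hz):", spike_count.mean() / (steps * dt / 1000.0))
```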
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-09-01
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations, which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
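The core numerical idea, solving each damped Levenberg-Marquardt step with a Krylov method rather than dense QR/SVD, can be sketched as follows; this is not the authors' Julia/MADS implementation, and the recycling of the Krylov subspace across damping parameters is omitted:

```python
# Sketch: solve each damped Levenberg-Marquardt step with a Krylov solver (LSQR)
# instead of a dense factorization. Not the authors' code; illustrative only.
import numpy as np
from scipy.sparse.linalg import lsqr

def lm_krylov(residual, jacobian, x0, lambdas=(1e-2, 1e-1, 1.0), iters=20):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        best = None
        for lam in lambdas:                      # try several damping parameters
            # LM step: minimize ||J d + r||^2 + lam ||d||^2, solved iteratively.
            d = lsqr(J, -r, damp=np.sqrt(lam))[0]
            cost = np.sum(residual(x + d) ** 2)
            if best is None or cost < best[0]:
                best = (cost, d)
        x = x + best[1]
    return x

# Tiny synthetic inverse problem: recover p from y = A p + noise.
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 50))
p_true = rng.normal(size=50)
y = A @ p_true + 0.01 * rng.normal(size=200)
p_est = lm_krylov(lambda p: A @ p - y, lambda p: A, np.zeros(50))
print("relative error:", np.linalg.norm(p_est - p_true) / np.linalg.norm(p_true))
```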
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Y M; Bush, K; Han, B
Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4-based “localized Monte Carlo” (LMC) method that isolates MC dose calculations only to volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence and transported deterministically. By matching boundary conditions at both interfaces, the deterministic dose calculation accounts for dose perturbations “downstream” of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% could be observed in heterogeneous phantoms. The saving in computational time (a factor of ∼4–7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region. Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of deterministic methods and the accuracy of MC, providing a practical tool for high-performance dose calculation in modern RT. The approach is generalizable to all modalities where heterogeneities play a large role, notably particle therapy.
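A conceptual sketch of the routing decision only (not the authors' Geant4-based LMC code): voxels whose density departs strongly from water are flagged for Monte Carlo transport, while the rest stay with the deterministic solver. The threshold and density profile below are invented:

```python
# Conceptual sketch of the MC/deterministic routing idea; illustrative only.
import numpy as np

def split_regions(density, water=1.0, tol=0.15):
    """Return boolean masks for deterministic and Monte Carlo treatment."""
    mc_mask = np.abs(density - water) / water > tol     # lung, bone, air cavities...
    return ~mc_mask, mc_mask

# Synthetic 1D density profile along a beam: water, a low-density (lung-like) slab, water.
density = np.concatenate([np.full(30, 1.0), np.full(10, 0.3), np.full(30, 1.0)])
det_mask, mc_mask = split_regions(density)
print("voxels handled deterministically:", det_mask.sum(), "| by Monte Carlo:", mc_mask.sum())
```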
Intelligent On-Board Processing in the Sensor Web
NASA Astrophysics Data System (ADS)
Tanner, S.
2005-12-01
Most existing sensing systems are designed as passive, independent observers. They are rarely aware of the phenomena they observe, and are even less likely to be aware of what other sensors are observing within the same environment. Increasingly, intelligent processing of sensor data is taking place in real-time, using computing resources on-board the sensor or the platform itself. One can imagine a sensor network consisting of intelligent and autonomous space-borne, airborne, and ground-based sensors. These sensors will act independently of one another, yet each will be capable of both publishing and receiving sensor information, observations, and alerts among other sensors in the network. Furthermore, these sensors will be capable of acting upon this information, perhaps altering acquisition properties of their instruments, changing the location of their platform, or updating processing strategies for their own observations to provide responsive information or additional alerts. Such autonomous and intelligent sensor networking capabilities provide significant benefits for collections of heterogeneous sensors within any environment. They are crucial for multi-sensor observations and surveillance, where real-time communication with external components and users may be inhibited, and the environment may be hostile. In all environments, mission automation and communication capabilities among disparate sensors will enable quicker response to interesting, rare, or unexpected events. Additionally, an intelligent network of heterogeneous sensors provides the advantage that all of the sensors can benefit from the unique capabilities of each sensor in the network. The University of Alabama in Huntsville (UAH) is developing a unique approach to data processing, integration and mining through the use of the Adaptive On-Board Data Processing (AODP) framework. AODP is a key foundation technology for autonomous internetworking capabilities to support situational awareness by sensors and their on-board processes. The two primary research areas for this project are (1) the on-board processing and communications framework itself, and (2) data mining algorithms targeted to the needs and constraints of the on-board environment. The team is leveraging its experience in on-board processing, data mining, custom data processing, and sensor network design. Several unique UAH-developed technologies are employed in the AODP project, including EVE, an EnVironmEnt for on-board processing, and the data mining tools included in the Algorithm Development and Mining (ADaM) toolkit.
Semi-supervised Machine Learning for Analysis of Hydrogeochemical Data and Models
NASA Astrophysics Data System (ADS)
Vesselinov, Velimir; O'Malley, Daniel; Alexandrov, Boian; Moore, Bryan
2017-04-01
Data- and model-based analyses such as uncertainty quantification, sensitivity analysis, and decision support using complex physics models with numerous model parameters typically require a huge number of model evaluations (on the order of 10^6). Furthermore, model simulations of complex physics may require substantial computational time. For example, accounting for simultaneously occurring physical processes such as fluid flow and biogeochemical reactions in a heterogeneous porous medium may require several hours of wall-clock computational time. To address these issues, we have developed a novel methodology for semi-supervised machine learning based on Non-negative Matrix Factorization (NMF) coupled with customized k-means clustering. The algorithm allows for automated, robust Blind Source Separation (BSS) of groundwater types (contamination sources) based on model-free analyses of observed hydrogeochemical data. We have also developed reduced-order modeling tools, which couple support vector regression (SVR), genetic algorithms (GA), and artificial and convolutional neural networks (ANN/CNN). SVR is applied to predict the model behavior within prior uncertainty ranges associated with the model parameters. ANN and CNN procedures are applied to upscale heterogeneity of the porous medium. In the upscaling process, fine-scale high-resolution models of heterogeneity are applied to inform coarse-resolution models, which have improved computational efficiency while capturing the impact of fine-scale effects at the coarse scale of interest. These techniques are tested independently on a series of synthetic problems. We also present a decision analysis related to contaminant remediation where the developed reduced-order models are applied to reproduce groundwater flow and contaminant transport in a synthetic heterogeneous aquifer. The tools are coded in Julia and are part of the MADS high-performance computational framework (https://github.com/madsjulia/Mads.jl).
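An illustrative sketch of the NMF-plus-k-means blind source separation step on synthetic concentrations, using scikit-learn (not the authors' Julia/MADS implementation; the source signatures and mixing fractions are made up):

```python
# Sketch of the NMF + k-means idea on synthetic data: factor observed non-negative
# concentrations into source signatures and mixing weights, then cluster the
# signatures recovered from repeated restarts.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
sources = np.array([[5.0, 0.1, 2.0, 0.0],       # two hypothetical groundwater "types"
                    [0.2, 3.0, 0.1, 4.0]])       # rows: sources, columns: geochemical species
mixing = rng.uniform(0, 1, size=(60, 2))         # per-well mixing fractions
observed = mixing @ sources + 0.05 * rng.uniform(size=(60, 4))

# Run NMF from several random initializations, as a robustness check.
candidates = []
for seed in range(10):
    model = NMF(n_components=2, init="random", random_state=seed, max_iter=2000)
    model.fit(observed)
    candidates.append(model.components_)
candidates = np.vstack(candidates)               # all recovered signatures

# Cluster the recovered signatures; stable clusters indicate well-separated sources.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(candidates)
for k in range(2):
    print("cluster", k, "mean signature:", candidates[labels == k].mean(axis=0).round(2))
```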
Solving global shallow water equations on heterogeneous supercomputers
Fu, Haohuan; Gan, Lin; Yang, Chao; Xue, Wei; Wang, Lanning; Wang, Xinliang; Huang, Xiaomeng; Yang, Guangwen
2017-01-01
The scientific demand for more accurate modeling of the climate system calls for more computing power to support higher resolutions, inclusion of more component models, more complicated physics schemes, and larger ensembles. As the recent improvements in computing power mostly come from the increasing number of nodes in a system and the integration of heterogeneous accelerators, how to scale the computing problems onto more nodes and various kinds of accelerators has become a challenge for model development. This paper describes our efforts on developing a highly scalable framework for performing global atmospheric modeling on heterogeneous supercomputers equipped with various accelerators, such as GPU (Graphics Processing Unit), MIC (Many Integrated Core), and FPGA (Field Programmable Gate Array) cards. We propose a generalized partition scheme of the problem domain, so as to keep a balanced utilization of both CPU resources and accelerator resources. With optimizations on both computing and memory access patterns, we manage to achieve around 8 to 20 times speedup when comparing one hybrid GPU or MIC node with one CPU node with 12 cores. Using customized FPGA-based data-flow engines, we see the potential to gain another 5 to 8 times improvement in performance. On heterogeneous supercomputers, such as Tianhe-1A and Tianhe-2, our framework is capable of achieving ideally linear scaling efficiency, and sustained double-precision performances of 581 Tflops on Tianhe-1A (using 3750 nodes) and 3.74 Pflops on Tianhe-2 (using 8644 nodes). Our study also provides an evaluation of the programming paradigms of various accelerator architectures (GPU, MIC, FPGA) for performing global atmospheric simulation, to form a picture of both the potential performance benefits and the programming efforts involved. PMID:28282428
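The balanced-partition idea can be sketched as a throughput-proportional split of the problem domain; the device names and relative throughputs below are invented, and this is not the paper's actual partition scheme:

```python
# Sketch of a throughput-proportional domain split, in the spirit of a generalized
# partition scheme for mixed CPU/accelerator nodes. Device rates are made up.
def partition_columns(n_cols, throughputs):
    """Assign contiguous column ranges proportionally to measured device throughput."""
    total = sum(throughputs.values())
    ranges, start = {}, 0
    for i, (device, rate) in enumerate(throughputs.items()):
        # Last device absorbs rounding remainder so all columns are covered.
        count = n_cols - start if i == len(throughputs) - 1 else round(n_cols * rate / total)
        ranges[device] = (start, start + count)
        start += count
    return ranges

# e.g. a 12-core CPU node vs. one accelerator that is ~9x faster on this kernel.
print(partition_columns(1024, {"cpu": 1.0, "gpu": 9.0}))
```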
Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel; ...
2017-03-08
Coupled-cluster methods provide highly accurate models of molecular structure through explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix–matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has previously been achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy-efficient manner. We achieve up to 240× speedup compared with the optimized shared memory implementation of Libtensor. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 and XC40, and IBM Blue Gene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from compute-bound DGEMMs to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load imbalance: tasking and bulk synchronous models. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.
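To see why such contractions reduce to matrix–matrix multiplication (the DGEMM core mentioned above), here is a small NumPy illustration; it ignores the symmetry blocking and distributed scheduling that Libtensor actually provides:

```python
# Illustration: a 4-index tensor contraction reshaped into a single GEMM-like call
# (not Libtensor code; toy orbital counts).
import numpy as np

rng = np.random.default_rng(0)
no, nv = 6, 10                                   # occupied / virtual orbital counts
t = rng.normal(size=(no, no, nv, nv))            # amplitudes t[i,j,a,b]
v = rng.normal(size=(nv, nv, nv, nv))            # integrals  v[a,b,c,d]

# Contraction: out[i,j,c,d] = sum_{a,b} t[i,j,a,b] * v[a,b,c,d]
out_einsum = np.einsum("ijab,abcd->ijcd", t, v)

# The same contraction as one matrix multiply over flattened index groups.
out_gemm = (t.reshape(no * no, nv * nv) @ v.reshape(nv * nv, nv * nv)).reshape(no, no, nv, nv)
print("max difference:", np.abs(out_einsum - out_gemm).max())
```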
Adaptations in Electronic Structure Calculations in Heterogeneous Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talamudupula, Sai
Modern quantum chemistry deals with electronic structure calculations of unprecedented complexity and accuracy. They demand the full power of high-performance computing and must be in tune with the given architecture for superior efficiency. To make such applications resource-aware, it is desirable to enable their static and dynamic adaptations using some external software (middleware), which may monitor both system availability and application needs, rather than mix science with system-related calls inside the application. The present work investigates scientific application interlinking with middleware based on the example of the computational chemistry package GAMESS and the middleware NICAN. The existing synchronous model is limited by the possible delays due to the middleware processing time under sustainable runtime system conditions. Proposed asynchronous and hybrid models aim at overcoming this limitation. When linked with NICAN, the fragment molecular orbital (FMO) method is capable of adapting its fragment scheduling policy statically and dynamically based on the computing platform conditions. Significant execution time and throughput gains have been obtained due to such static adaptations when the compute nodes have very different core counts. Dynamic adaptations are based on the main memory availability at run time. NICAN prompts FMO to postpone scheduling certain fragments if there is not enough memory for their immediate execution. Hence, FMO may be able to complete the calculations, whereas without such adaptations it aborts.
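The dynamic-adaptation policy can be sketched as follows; this is not NICAN or GAMESS code, the fragment names and memory footprints are invented, and psutil stands in for whatever mechanism the middleware actually uses to monitor memory:

```python
# Sketch of the dynamic-adaptation idea: postpone fragments whose estimated memory
# footprint exceeds what is currently available on the node. Illustrative only.
import collections
import psutil

Fragment = collections.namedtuple("Fragment", "name mem_bytes")
queue = collections.deque([Fragment("frag-A", 2 * 2**30),
                           Fragment("frag-B", 64 * 2**30),   # deliberately too large
                           Fragment("frag-C", 1 * 2**30)])

deferred = []
while queue:
    frag = queue.popleft()
    if frag.mem_bytes <= psutil.virtual_memory().available:
        print("scheduling", frag.name)            # here the real code would launch the fragment
    else:
        print("postponing", frag.name, "(not enough free memory)")
        deferred.append(frag)                     # retried later, once memory is released
```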
Mobile high-performance computing (HPC) for synthetic aperture radar signal processing
NASA Astrophysics Data System (ADS)
Misko, Joshua; Kim, Youngsoo; Qi, Chenchen; Sirkeci, Birsen
2018-04-01
The importance of mobile high-performance computing has emerged in numerous battlespace applications at the tactical edge in hostile environments. Energy-efficient computing power is a key enabler for diverse areas ranging from real-time big data analytics and atmospheric science to network science. However, the design of tactical mobile data centers is dominated by power, thermal, and physical constraints. Presently, it is very difficult to achieve the required processing power by aggregating emerging heterogeneous many-core processing platforms consisting of CPU, Field Programmable Gate Array, and Graphics Processor cores under power and performance constraints. To address these challenges, we performed a Synthetic Aperture Radar case study for Automatic Target Recognition (ATR) using Deep Neural Networks (DNNs). However, these DNN models are typically trained using GPUs with gigabytes of external memory and rely heavily on 32-bit floating-point operations. As a result, DNNs do not run efficiently on hardware appropriate for low-power or mobile applications. To address this limitation, we proposed a framework for compressing DNN models for ATR suited to deployment on resource-constrained hardware. This compression framework utilizes promising DNN compression techniques, including pruning and weight quantization, while also focusing on processor features common to modern low-power devices. Following this methodology as a guideline produced a DNN for ATR tuned to maximize classification throughput, minimize power consumption, and minimize memory footprint on a low-power device.
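A minimal NumPy sketch of the two compression steps named above, magnitude pruning followed by symmetric 8-bit weight quantization, applied to a single weight matrix (illustrative only; not the proposed ATR compression framework, and the sparsity target is arbitrary):

```python
# Sketch of magnitude pruning and 8-bit weight quantization on one weight matrix.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(256, 128)).astype(np.float32)

# 1) Magnitude pruning: zero out the smallest 80% of weights.
threshold = np.quantile(np.abs(W), 0.80)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0).astype(np.float32)

# 2) Symmetric 8-bit quantization of the surviving weights.
scale = np.abs(W_pruned).max() / 127.0
W_int8 = np.clip(np.round(W_pruned / scale), -127, 127).astype(np.int8)
W_dequant = W_int8.astype(np.float32) * scale     # used at inference time

sparsity = 1.0 - np.count_nonzero(W_pruned) / W.size
err = np.abs(W_pruned - W_dequant).max()
print(f"sparsity: {sparsity:.2f}, max quantization error: {err:.5f}")
```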
Bertalan, Tom; Wu, Yan; Laing, Carlo; Gear, C. William; Kevrekidis, Ioannis G.
2017-01-01
Finding accurate reduced descriptions for large, complex, dynamically evolving networks is a crucial enabler to their simulation, analysis, and ultimately design. Here, we propose and illustrate a systematic and powerful approach to obtaining good collective coarse-grained observables—variables successfully summarizing the detailed state of such networks. Finding such variables can naturally lead to successful reduced dynamic models for the networks. The main premise enabling our approach is the assumption that the behavior of a node in the network depends (after a short initial transient) on the node identity: a set of descriptors that quantify the node properties, whether intrinsic (e.g., parameters in the node evolution equations) or structural (imparted to the node by its connectivity in the particular network structure). The approach creates a natural link with modeling and “computational enabling technology” developed in the context of Uncertainty Quantification. In our case, however, we will not focus on ensembles of different realizations of a problem, each with parameters randomly selected from a distribution. We will instead study many coupled heterogeneous units, each characterized by randomly assigned (heterogeneous) parameter value(s). One could then coin the term Heterogeneity Quantification for this approach, which we illustrate through a model dynamic network consisting of coupled oscillators with one intrinsic heterogeneity (oscillator individual frequency) and one structural heterogeneity (oscillator degree in the undirected network). The computational implementation of the approach, its shortcomings and possible extensions are also discussed. PMID:28659781
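A toy instance of the model class described, Kuramoto-type oscillators with heterogeneous natural frequencies coupled on a random undirected graph, can be written in a few lines; this is not the authors' code and all parameters are illustrative:

```python
# Toy network of heterogeneous coupled oscillators: one intrinsic heterogeneity
# (natural frequency) and one structural heterogeneity (degree in a random graph).
import numpy as np

rng = np.random.default_rng(0)
N, K, dt, steps = 200, 1.5, 0.01, 2000
omega = rng.normal(0.0, 0.5, N)                  # heterogeneous natural frequencies
A = (rng.random((N, N)) < 0.1).astype(float)     # random undirected coupling graph
A = np.triu(A, 1); A = A + A.T                   # symmetrize, no self-coupling
degree = A.sum(axis=1)

theta = rng.uniform(0, 2 * np.pi, N)
for _ in range(steps):                           # explicit Euler integration
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + K / N * coupling)

# Order parameter: how coherent the heterogeneous population has become.
r = np.abs(np.exp(1j * theta).mean())
print(f"mean degree: {degree.mean():.1f}, order parameter r = {r:.2f}")
```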
NASA Astrophysics Data System (ADS)
Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.
2015-12-01
Powerful numerical codes that are capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding for solving highly non-linear models at sufficient spatial and temporal resolutions. Geological heterogeneity and uncertainties further increase the challenges in modeling work. Two-phase flow simulations in heterogeneous media usually require much longer computational time than those in homogeneous media. Uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with thousands of processors or more have become available in scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large, non-linear models at higher resolutions within a reasonable time. However, to make them a useful tool, it is essential to tackle several practical obstacles to utilizing large numbers of processors effectively in general-purpose reservoir simulators. We have implemented massively parallel versions of two TOUGH2-family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector and scalar) of supercomputers with a thousand to tens of thousands of processors. After completing the implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million-cell grid models, including a simulation of the dissolution-diffusion-convection process that requires high spatial and temporal resolutions to simulate the growth of small convective fingers of CO2-dissolved water into larger ones at reservoir scale. The performance measurements confirmed that both simulators exhibit excellent scalability, showing almost linear speedup with the number of processors up to over ten thousand cores. Generally, this allows us to perform coupled multi-physics (THC) simulations on high-resolution geologic models with multi-million-cell grids in a practical time (e.g., less than a second per time step).
Homogeneous v. Heterogeneous: Is Tracking a Barrier to Equity?
ERIC Educational Resources Information Center
Polansky, Harvey B.
1995-01-01
Tracking has contributed considerably to the basic inequality of funding among American schools. To move to a heterogeneous environment, districts must understand the concept of resource and program equity, commit to a planning process that allocates time and resources, provide ongoing inservice, downplay standardized test results, and phase-in…
Heterogeneity of Student Perceptions of the Classroom Climate: A Latent Profile Approach
ERIC Educational Resources Information Center
Schenke, Katerina; Ruzek, Erik; Lam, Arena C.; Karabenick, Stuart A.; Eccles, Jacquelynne S.
2017-01-01
Student perceptions are a pivotal point of measurement for understanding why classroom learning environments are effective. Yet there is some evidence that student perceptions cannot be reliably aggregated at the classroom level and, instead, could represent idiosyncratic experiences of students. The present study examines whether heterogeneity in…
Interoperability through standardization: Electronic mail, and X Window systems
NASA Technical Reports Server (NTRS)
Amin, Ashok T.
1993-01-01
Since the introduction of computing machines, there have been continual advances in computer and communication technologies, and these technologies are approaching their limits. The user interface has evolved from a row of switches, through character-based interfaces using teletype terminals and then video terminals, to the present-day graphical user interface. It is expected that the next significant advances will come in the availability of services, such as electronic mail and directory services, as the standards for applications are developed, and in 'easy to use' interfaces, such as graphical user interfaces (for example, Windows and X Window), which are being standardized. Various proprietary electronic mail (email) systems are in use within organizations at each center of NASA. Each system provides email services to users within an organization; however, support for email services across organizations and across centers exists at centers to a varying degree and is often not easy to use. A recent NASA email initiative is intended 'to provide a simple way to send email across organizational boundaries without disruption of installed base.' The initiative calls for integration of existing organizational email systems through gateways connected by a message switch, supporting X.400 and SMTP protocols, to create a NASA-wide email system, and for implementation of NASA-wide email directory services based on the OSI standard X.500. A brief overview of MSFC efforts as a part of this initiative is described. Window-based graphical user interfaces make computers easy to use. The X Window protocol was developed at the Massachusetts Institute of Technology in 1984/1985 to provide a uniform window-based interface in a distributed computing environment with heterogeneous computers. It has since become a standard supported by a number of major manufacturers. X Window systems, terminals and workstations, and X Window applications are becoming available. However, the impact of its use in the Local Area Network environment on network traffic is not well understood. It is expected that the use of X Window systems will increase at MSFC, especially for Unix-based systems. An overview of the X Window protocol is presented and its impact on network traffic is examined. It is proposed that an analytical model of X Window systems in the network environment be developed and validated through the use of measurements to generate application and user profiles.
Application Portable Parallel Library
NASA Technical Reports Server (NTRS)
Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott
1995-01-01
The Application Portable Parallel Library (APPL) computer program is a subroutine-based message-passing software library intended to provide a consistent interface to a variety of multiprocessor computers on the market today. It minimizes the effort needed to move an application program from one computer to another. The user develops an application program once and then easily moves it from the parallel computer on which it was created to another parallel computer. ("Parallel computer" here also includes a heterogeneous collection of networked computers.) APPL is written in the C language, with one FORTRAN 77 subroutine for UNIX-based computers, and is callable from application programs written in C or FORTRAN 77.
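For readers unfamiliar with this message-passing style, here is an analogous master/worker exchange written with mpi4py rather than APPL itself (APPL's actual subroutine names are not shown; the task payloads are invented):

```python
# Example of the kind of portable message-passing interface the abstract describes,
# using mpi4py (run e.g. with: mpiexec -n 4 python demo.py).
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    # Master: send one work item to each worker, then collect results.
    for dest in range(1, size):
        comm.send({"task_id": dest, "payload": list(range(dest))}, dest=dest, tag=0)
    results = [comm.recv(source=src, tag=1) for src in range(1, size)]
    print("master collected:", results)
else:
    # Worker: the same source code runs unchanged on every (possibly heterogeneous) node.
    work = comm.recv(source=0, tag=0)
    comm.send(sum(work["payload"]), dest=0, tag=1)
```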
A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations
NASA Technical Reports Server (NTRS)
Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw
2005-01-01
A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient-based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reduce the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled from Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.
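A compact sketch of the asynchronous scheme, in which each particle is updated and resubmitted as soon as its own evaluation returns so that no iteration barrier exists, using Python's concurrent.futures; this is not the authors' implementation, and the objective function is a trivial stand-in for an expensive analysis:

```python
# Sketch of an asynchronous parallel PSO: slow evaluations never stall the swarm.
import numpy as np
from concurrent.futures import ProcessPoolExecutor, FIRST_COMPLETED, wait

def objective(x):                                 # stand-in for an expensive analysis
    return float(np.sum(x ** 2))

def async_pso(dim=5, n_particles=8, max_evals=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest, pbest_f = pos.copy(), np.full(n_particles, np.inf)
    gbest, gbest_f = pos[0].copy(), np.inf
    evals = 0
    with ProcessPoolExecutor() as pool:
        pending = {pool.submit(objective, pos[i]): i for i in range(n_particles)}
        while pending:
            done, _ = wait(pending, return_when=FIRST_COMPLETED)
            for fut in done:
                i = pending.pop(fut)
                f = fut.result()
                evals += 1
                if f < pbest_f[i]:
                    pbest_f[i], pbest[i] = f, pos[i].copy()
                if f < gbest_f:
                    gbest_f, gbest = f, pos[i].copy()
                if evals + len(pending) < max_evals:
                    # Update and resubmit this particle immediately (no iteration barrier).
                    r1, r2 = rng.random(dim), rng.random(dim)
                    vel[i] = w * vel[i] + c1 * r1 * (pbest[i] - pos[i]) + c2 * r2 * (gbest - pos[i])
                    pos[i] = pos[i] + vel[i]
                    pending[pool.submit(objective, pos[i])] = i
    return gbest, gbest_f

if __name__ == "__main__":
    best_x, best_f = async_pso()
    print("best objective found:", best_f)
```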
CAVEman: Standardized anatomical context for biomedical data mapping.
Turinsky, Andrei L; Fanea, Elena; Trinh, Quang; Wat, Stephen; Hallgrímsson, Benedikt; Dong, Xiaoli; Shu, Xueling; Stromer, Julie N; Hill, Jonathan W; Edwards, Carol; Grosenick, Brenda; Yajima, Masumi; Sensen, Christoph W
2008-01-01
The authors have created a software system called the CAVEman for the visual integration and exploration of heterogeneous anatomical and biomedical data. The CAVEman can be applied to both education and research tasks. The main component of the system is a three-dimensional digital atlas of the adult male human anatomy, structured according to the nomenclature of Terminologia Anatomica. The underlying data-indexing mechanism uses standard ontologies to map a range of biomedical data types onto the atlas. The CAVEman system is now used to visualize genetic processes in the context of the human anatomy and to facilitate visual exploration of the data. Through the use of Java software, the atlas-based system is portable to virtually any computer environment, including personal computers and workstations. Existing Java tools for biomedical data analysis have been incorporated into the system. The affordability of virtual-reality installations has increased dramatically over the last several years. This creates new opportunities for educational scenarios that model important processes in a patient's body, including gene expression patterns, metabolic activity, the effects of interventions such as drug treatments, and eventually surgical simulations.
Integrating the Allen Brain Institute Cell Types Database into Automated Neuroscience Workflow.
Stockton, David B; Santamaria, Fidel
2017-10-01
We developed software tools to download, extract features, and organize the Cell Types Database from the Allen Brain Institute (ABI) in order to integrate its whole cell patch clamp characterization data into the automated modeling/data analysis cycle. To expand the potential user base we employed both Python and MATLAB. The basic set of tools downloads selected raw data and extracts cell, sweep, and spike features, using ABI's feature extraction code. To facilitate data manipulation we added a tool to build a local specialized database of raw data plus extracted features. Finally, to maximize automation, we extended our NeuroManager workflow automation suite to include these tools plus a separate investigation database. The extended suite allows the user to integrate ABI experimental and modeling data into an automated workflow deployed on heterogeneous computer infrastructures, from local servers, to high performance computing environments, to the cloud. Since our approach is focused on workflow procedures our tools can be modified to interact with the increasing number of neuroscience databases being developed to cover all scales and properties of the nervous system.
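A sketch of the "local specialized database" step only, using sqlite3; the table schema and feature values are hypothetical, and the ABI download and feature-extraction calls the tools actually wrap are not reproduced here:

```python
# Sketch of a local database of extracted electrophysiology features; illustrative
# schema and placeholder values, not the actual ABI/NeuroManager code.
import sqlite3

conn = sqlite3.connect("cell_types_local.db")
conn.execute("""CREATE TABLE IF NOT EXISTS cell_features (
                    specimen_id INTEGER PRIMARY KEY,
                    resting_potential_mv REAL,
                    input_resistance_mohm REAL,
                    mean_spike_rate_hz REAL)""")

# In the real workflow these rows would come from the downloaded sweeps plus the
# feature-extraction step; here they are placeholder values.
rows = [(101, -68.2, 152.0, 11.5),
        (102, -71.4, 98.7, 6.2)]
conn.executemany("INSERT OR REPLACE INTO cell_features VALUES (?, ?, ?, ?)", rows)
conn.commit()

# A model-building script can later query the local database instead of re-downloading.
for row in conn.execute("SELECT specimen_id, mean_spike_rate_hz FROM cell_features "
                        "WHERE mean_spike_rate_hz > 10"):
    print(row)
conn.close()
```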